August 04, 2015

A while back, just before Dockercon 2015, the friendly folks behind Ubuntu, Juju, LXD, and a whole bunch of other goodness hosted a special event that was all about service modelling, orchestration, and making all the container-y Docker-y stuff work well in the DevOps world.

We assembled a panel of industry luminaries, including our very own Ben Saller. For those of you who don't know Ben, he's one of the original creators of Juju and an all-around great guy.

At one point in the panel discussion, the moderator asked (I'm paraphrasing) whether the Twitters and Googles of the world are a "special breed" with respect to the scale of containerization or whether that's become a more common design pattern for the "rest of us", i.e. the smaller companies... Though indirect, the question implied that the rest of the world was now ready for scale and the solutions that provide it.

Here's what Ben had to say in response:

    I don't think it's the scale that you're operating at, it's the properties that you demand of the infrastructure.
    Everybody wants the self healing. Everybody wants the dynamic recovery, the load balancing.
    The problem becomes an economic function for many people, whether or not they can run eight machines to have some kind of bespoke PaaS (1) to do the one piece of software they have. It's not worth it in some sense unless that piece of software is mission-critical to carry a lot of infrastructure. And, it's very difficult to specialize a team to gain the knowledge to do that for a small organization.
    So, when we talk about things like Kubernetes or the kinds of software that we have with Juju and the other things, what we're really trying to do is exactly what you were talking about: make those best practices available by capturing the automation stylings of the larger players and presenting them in a cost-effective way.
    And I think that everyone is interested in that. Absolutely.

Sometimes, the problem being solved isn't well formed. It has been framed in a manner that makes us blind to the path forward. (I think much of the tech industry does this on purpose, but that's the topic of a whole other article.) This concept resonates with me as someone who studied engineering. In my university days, engineering professors were particularly clever at creating assignment problems that were solvable only if framed correctly. Approach a problem the wrong way, and you'd be up all night facing an intractable problem with no solution in sight.

Ben obviously gets this. Watch the video and see for yourself. He's the guy with the beard ;)

So, before you jump on a tool to solve a problem, frame your problem carefully and with precision, then pick a tool to help you.

Yes, that tool could be Juju.

(1) PaaS = Platform-as-a-Service

on August 04, 2015 06:00 PM

Meeting Minutes

IRC Log of the meeting.

Meeting minutes.


20150804 Meeting Agenda

Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:


Status: Wily Development Kernel

We have rebased our Wily master-next branch to the latest upstream
v4.2-rc5 and uploaded it to our ~canonical-kernel-team PPA. We are
resolving fallout from DKMS packages before we proceed with
uploading to the archive.
Important upcoming dates:

    Thurs Aug 6 – 14.04.3 (~2 days away)
    Thurs Aug 20 – Feature Freeze (~2 weeks away)
    Thurs Aug 27 – Beta 1 (~3 weeks away)
    Thurs Sep 24 – Final Beta (~7 weeks away)

Status: CVEs

The current CVE status can be reviewed at the following link:


Status: Stable, Security, and Bugfix Kernel Updates – Precise/Trusty/Utopic/Vivid

Status for the main kernels, until today:

  • Precise – Verification & Testing
  • Trusty – Verification & Testing
  • lts-Utopic – Verification & Testing
  • Vivid – Verification & Testing

    Current open tracking bug details:

    For SRUs, the SRU report is a good source of information:


    cycle: 26-Jul through 15-Aug
    24-Jul Last day for kernel commits for this cycle
    26-Jul – 01-Aug Kernel prep week.
    02-Aug – 08-Aug Bug verification & Regression testing.
    09-Aug – 15-Aug Regression testing & Release to -updates.

Open Discussion or Questions? Raise your hand to be recognized

No open discussion.

on August 04, 2015 05:10 PM

Conference Plans: Fall 2015

Svetlana Belkin

I’m going to two (2) conferences this fall: Open Help Conference in Cincinnati, Ohio and Ohio Linux Fest in Columbus, Ohio.  If anyone else* is going to either or both of them, please let me know.  I’m willing to show you Cincinnati, where I live now, and to explore Columbus!

* For the Open Help Con, I know of two (2) that are going and you know who you are!  And for Ohio Linux Fest, one (1).

on August 04, 2015 04:37 PM

As your business grows, so does the amount of data you manage. You may store thousands of pages of sensitive information electronically. If you use a website to convert prospects into customers, it’s critical that your website performs well. If all of this information is not secure, it can destroy your business. Use these tips to upgrade your company’s tech capabilities.

Challenges you face as your business grows

Doing business becomes more challenging as your company grows. Think about how you can address these issues as you increase sales:

  • Customer lists, competitive data: if your business is growing, you’re accumulating a great deal of data that is confidential. That includes your contact data and the buying history of your customers. You’ll also store your company budgets, forecasts and other data that is extremely sensitive.
  • Client data: in addition to contact data, you may store credit card information for your clients. Legal and regulatory bodies insist that all customer payment data you store is secure.
  • Employee data: as you add workers, you’re also required to collect and store sensitive employee data. This may include social security numbers and other personal data.

You need systems in place to protect all of this information from theft.

Keeping your business up and running

In addition to the sensitive data you must protect, you may need to upgrade your tech capabilities to operate your business. You’ll take more phone calls, answer a greater number of emails and process more paperwork as you grow.

Many firms consider using a SIP Trunking system to operate more efficiently. MegaPath explains that SIP Trunking is a way to process your voice and Internet data through an Internet connection. SIP can reduce your costs, since you no longer process voice data through a phone line.

There are several other benefits to using SIP Trunking:

  • Purchase only the capacity that you need: with SIP, you can increase or decrease your data purchases easily. This concept allows you to control your data spending more precisely.
  • Scalability: SIP is also very scalable. You can increase your SIP usage to just about any data level you require. You’re not forced to switch to another tech service as you grow your business.
  • More Responsive to Clients: SIP allows you to route calls to an employee’s mobile phone. This helps your staff respond to customers faster.

Look into SIP Trunking to handle your operational needs as you grow.

Securing your data

SchoolRack lists some other great ideas to protect your data from cyber attacks:

  • Update CMS and plugins: many people use a Content Management System (CMS), such as WordPress, to build their website. You may also use plugins to perform specific tasks on your site. For example, a plugin can be used to place a contact form on your site. To secure your data, make sure that you use the most recent version of your CMS system and all plugins.
  • Passwords: it may sound simple, but using a strong password can still prevent hackers from accessing your data.
  • Password manager: if you have multiple passwords on different tech platforms, it can be difficult to keep track of all of your passwords. You can find a password manager to simplify the process of creating strong passwords and updating them periodically.

Every company that is growing has to face the demands of technology. Use these tips to manage your operations effectively. You can protect your sensitive data and grow your business.

The post How To Upgrade Your Company’s Tech Capabilities appeared first on deshack.

on August 04, 2015 01:30 PM

Last week KDE’s annual world summit, Akademy, happened. And how exciting it was.

Akademy always starts off with two days of ever so exciting talks on a number of engaging subjects. But this year particularly interesting things happened courtesy of Blue Systems.

First Plasma Mobile took the stage with a working prototype running on the Nexus 5 using KWin as Wayland compositor. This is particularly enjoyable as working on the prototype, currently built on Kubuntu, made me remember the Kubuntu phone and tablet ports we did some 4 years ago.

Plasma Mobile was followed by a presentation on Shashlik, technology meant to enable running Android applications on Linux systems that aren’t Android. So I can finally run candy crush on my desktop. Huzzah!

Rohan Garg and I also talked for a bit about our efforts to bring continuous integration and delivery to Kubuntu and Debian to integrate our packaging against KDE’s git repositories and as a byproduct offer daily new binaries of most software produced by KDE.

After a weekend of thrilling talks, Akademy tends to continue with a week of discussion and hacking with Birds of a Feather sessions.

Ever since the Ubuntu Developer Summits were discontinued, it has been common practice for the Kubuntu team to hold a Kubuntu Day at Akademy instead, to discuss long term targets and get KDE contributors’ thoughts and input. Real life meetings are so very important to a community. Not only is it easier to discuss things face to face, making everyone more efficient and reducing the chances of people misunderstanding one another and getting frustrated; such meetings are also an important aspect of community building through the social interaction they allow. There is something uniquely family-like about sharing a drink or having a team dinner.

A great many things were discussed pertaining to Kubuntu, ranging from Canonical’s IP rights policy and how it endangers what we try to achieve, to websites, support, and the ever so scary GCC 5 transition that José Manuel Santamaría put a great deal of effort into making as smooth as possible.

In the AppStream/Muon BoF session Matthias Klumpp gave us an overview on how AppStream works and we discussed ways to unblock its adoption in Kubuntu to replace the currently used app-install-data.

Muon, the previously Kubuntu specific software manager that is now part of the Plasma Workspace experience, is getting further detangled from Debian specific systems as the package manager UI is going to be moved to a separate Git repository. Also tighter integration into the overall workspace and design of Plasma is the goal for future development.

As always Akademy was quite the riot. I’d like to thank the Akademy team for organizing this great event and the Ubuntu community for sponsoring my attendance.


❤ KDE ❤

on August 04, 2015 12:39 PM

Ubuntu Make 0.9.2 has just been released and features language support in our Firefox Developer Edition installation!

Thanks to our new awesome community contributor Omer Sheikh, Ubuntu Make now enables developers to install Firefox Developer Edition in their language of choice! This is all backed by our mandatory medium and large extensive test suites. Big thanks to him for getting that through!

The installation process will ask you for your preferred language for that framework, listing all available languages:

 $ umake web firefox-dev
 Choose installation path: /home/didrocks/tools/web/firefox-dev
 Choose language: (default: en-US)
 ach/af/sq/ar/an/hy-AM/as/ast/az/eu/... fr
 Downloading and installing requirements
 100% |#########################################################################|
 Installing Firefox Dev
 Installation done

And here we go, with Firefox Dev Edition installed in french:

Firefox Developer Edition en français svp!

You can also use the new --lang= option to do that in non-interactive mode, e.g. from scripts.

Brian P. Sizemore also joined the Ubuntu Make contributor crew with this release, clarifying our readme page. A valuable contribution for all newcomers; thanks to him as well!

Some general fixes were delivered in this new release as well; the full list is available in the changelog.

As usual, you can get this latest version directly in Ubuntu Wily, and through its ppa for the 14.04 LTS and 15.04 Ubuntu releases.

Our issue tracker is full of ideas and opportunities, and pull requests remain open for any issues or suggestions! If you want to be the next featured contributor and want to give a hand, you can refer to this post with useful links!

on August 04, 2015 09:24 AM

Recently I set up the Karma JavaScript test suite for locally running the JavaScript tests for ownCloud. Since it was not totally straightforward to get it up and running, here be my setup notes!

First, it requires Node.js, which can be installed right from the repositories. As I read somewhere, more recent versions are available in some PPA, but this one is sufficient. npm is its package manager.


sudo apt install nodejs nodejs-legacy npm

The nodejs-legacy package provides the symlink, so you don't need to mess around manually in /usr/bin. Thus the following step is obsolete. Thanks to Felix for the hint.

First obstacle: because of a hard coded path somewhere, npm would not be able to find node. A symlink helps:

sudo ln -s /usr/bin/nodejs /usr/bin/node


Afterwards, we need the Karma test suite and the modules which are used by ownCloud. They are installed using npm. You will notice the -g flag, which stands for global. If you leave it out, the packages will be installed into the local directory. One time I forgot the flag and spent hours figuring out why it was not working. However, this step is supposed to be optional, as the autotest script we will run eventually should take care of it. For unknown reasons it did not work for me, so I executed these steps manually once.

sudo npm install -g karma
sudo npm install -g karma-jasmine
sudo npm install -g karma-junit-reporter
sudo npm install -g karma-coverage
sudo npm install -g karma-phantomjs-launcher

That's all. Finally you can cd into your git clone of ownCloud and let the autotest script do all the JS tests:


There is one minor flaw. I was too lazy to address the warning "You must set an output directory for JUnitReporter via the outputDir config property", so the JUnitReporter does not work. It does not matter to me at all; the output shows me whether tests succeed or which ones fail.

on August 04, 2015 06:02 AM

Recently, I’ve been using Fabric quite a bit. It is simple, Pythonic, and I’ve grown to enjoy using it for automating basic systems administration tasks when a full-fledged configuration management system is more than you need for the job.

For the most part, Fabric keeps to the basics, e.g. executing remote shell commands and uploading files. There are quite a few sets of tools that have popped up to extend it, but unfortunately there is no “official” contrib library. Many of these projects serve very specific use cases, like deploying a Django application, and duplicate certain functionality.

One thing that I’ve become a bit frustrated with is copying convenience functions around into multiple Fabfiles. In particular, I end up cargo-culting functions related to package management. So, to finally rid myself of these, I’ve created fabric-package-management.

The source is on GitHub, and you can install it from PyPI with:

sudo pip install fabric-package-management

The aim is to provide basic primitives for package management with Fabric. Its focus is intentionally narrow. The 0.1 release only offers support for Apt, but I hope to see it grow support for more distributions. It could potentially add an abstraction layer for cross distro support.

Here’s a quick example of using it to update all your DigitalOcean servers:

import os
import digitalocean
from fabric.api import task, prompt, env, settings
from fabric.operations import reboot

from fabric_package_management import apt

USER = 'username'

def get_hosts():
    token = os.getenv('DO_TOKEN')
    manager = digitalocean.Manager(token=token)
    droplets = manager.get_all_droplets()
    hosts = []
    for d in droplets:
        # Collect the public IP address of each droplet.
        hosts.append(d.ip_address)
    return hosts

@task
def run():
    hosts = get_hosts()
    for h in hosts:
        with settings(host_string=h, user=USER):
            # Refresh the package lists and apply pending upgrades.
            apt.update()
            apt.upgrade()
            if apt.reboot_required():
                prompt("Reboot required. Initiate now?\nYes/No?",
                       "response", default="No")
                if env.response.lower() == "yes":
                    reboot()

Hope you find this useful!

on August 04, 2015 02:45 AM

August 03, 2015

Welcome to the Ubuntu Weekly Newsletter. This is issue #428 for the week July 27 – August 2, 2015, and the full version is available here.

In this issue we cover:

This issue of the Ubuntu Weekly Newsletter is brought to you by:

  • Paul White
  • Elizabeth K. Joseph
  • Mathias Hellsten
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution 3.0 License.

on August 03, 2015 11:51 PM

The Mozilla We’ve Got

Bryan Quigley

This is a follow-up to The Mozilla I want from 2014 (same headings).  (I do post bug and mailing list links, but please don’t pile on them; that really doesn’t help.)

DRM – Mozilla being played?

Nope, just non-Windows users being played so far [1]. I should have guessed with it being Adobe’s DRM that is being used that maybe Linux wouldn’t see the best support. It’s also depressing to me that Mozilla has given up on calling it what it is in some cases [2].


Abandon the DoNoTrack header, provide actual options

Mozilla has doubled down on DoNotTrack and is trying to get more companies to respect it with an add-on that blocks trackers if it’s not respected.  To be fair, the EFF thinks this isn’t a lost cause either... do they know something I don’t know here?  If anything it could be called DoNotMakeItAsObviousWeAreTrackingYou, that’s possible.

They’ve added DuckDuckGo as a preinstalled search engine!  Woot!

Push advertisers off of Flash (generally a good idea, but it will also help with privacy – no Flash cookies, etc.) – Absolutely no progress on this [1]. The web is moving away from Flash and plugins, but Mozilla is standing pretty still on pushing for it.  Guess Mobile and Chrome will get to define this space.


SSL 3.0 – When will it go away?

That’s hilarious.  Really.  Five months or so after Mozilla removed the option to disable SSL 3.0, they had to make an add-on to disable it due to SSL 3.0 no longer being secure.

Could we just decide now to disable TLS 1.0 in 2018 or something? Maybe start warning about it in a year or so.  We know it has weaker security than TLS 1.2, so why wait until we have to do it in a panic?

Mobile – Firefox OS

I bought a ZTE Open C; it’s a cheap phone and it had issues.  I’ve since given up on it and bought a ZTE Maven (Android 5.1), which I’m enjoying.  To be fair, they both cost me about the same, but the Maven is a much better phone.

Mozilla hasn’t shipped a new version of Firefox OS since I bought the phone… Firefox OS 1.3, released on 2014-03-17, is still the latest version (it’s 2015-08-01 today).  So much for the promised quarterly releases.  This isn’t even the harder question of “how long will you support this specific phone”; it’s just your schedule of releases.


Mozilla Adding unwanted things?

I really don’t mind Yahoo! Search (the new search widget rocks for using multiple search engines, imho), but adding Pocket just doesn’t make any sense to me.. oh well.

Signing add-ons I actually like and fully support.  What I didn’t like in that discussion was the idea that we can wait to figure out something for the enterprises, because they will be on the ESR release.  I’d prefer we try to make everyone happy on the main release instead of making enterprises feel they really need to be on the ESR.

And Contributing!?

I’ve actually gotten my first (very, very simple) patch into Firefox since my last blog post.  I’m hoping to do a bit more, specifically around GStreamer.

Unfortunately, I’m feeling more like Chrome/Chromium provides a better and more secure out of the box experience for the average user today (Netflix, Flash updating, dropped NPAPI, much better video chat).   This is especially true on Linux.  It does help that Google has a specific platform (Chromebooks) that justifies investing heavily in it.

There is a lot of exciting stuff in the works (GTK3, wayland, electrolysis) and I’m going to at least stay around to see how that pans out.

on August 03, 2015 08:09 PM

There are very few businesses – successful ones anyway – that do not have a website. It is becoming more and more vital to have a website for your customers to visit. Fortunately, countless business owners, both large and small, are beginning to understand the importance of having a website that complements their physical business and/or provides visibility for their business.

What a Website Will do for Your Business


More and more potential customers use the web to research services and products they desire. If your business does not have a website, you lose credibility as a legitimate business. Not only that, you will lose business to your competitors that do have a website.

If your business is operated from home, having a website is even more crucial because you do not have the benefits of a brick and mortar location to promote your business.

Reach more people

Having a website gives your customers 24/7 accessibility to your business. It gives people outside of your local area an opportunity to browse your products and services, even if your store or office is closed.

Easier to Keep Your Customers Up-to-date

It is exponentially easier to update your customers via your website than in print ads and flyers. Print material can quickly become outdated, whereas your website is updated with the latest news, promotions, or new services.

Websites Save you Time and Money

After the initial design fees, a professional looking website costs anywhere from $20 to $100 to maintain. Compare that to the high cost and limited reach of a regular newspaper ad.

Not only will you save money, having a website will save you time. Having a website allows customers to receive information on their own. This gives you time to focus on other aspects of your business, allowing you to grow your business. Growth means more money coming in.

What You Need to get Your Website Up and Running

Domain Name

First, you need to purchase a domain name from a domain name registrar. The domain name is the internet address of your website – the address you typed to reach this particular site is its domain name.

Your domain name needs to be as simple and short as possible to make it more memorable for your current and potential customers.

Web Hosting

Many domain name registrars also offer web hosting. Essentially, web hosting is a service that keeps the details of your website on a server – usually a computer – and displays it to your customers when they enter your domain name into a browser. Some companies, such as HostGator web hosting, offer hosting services for as little as $3 to $4 a month for basic services.

A Professional Design

Chances are, you know more about your business than web design, so you should leave your web design to a professional website designer. Sure, you could probably learn HTML (the website design markup language) or use one of the many templates online, but you probably do not have the time. Having your website created professionally will save you time and money. A basic site – we’re talking barebones here – can be had for little cost, but you run the risk of making your business look amateurish. For additional features – like eCommerce and social media integration – expect the cost to rise.

Plain and simple, if you own a business, you need a website. Yes, there are costs involved, especially upfront, but the costs will be far outweighed by the website’s benefits. Moreover, setting up and maintaining a website does not have to be complicated. Having a website for your business is a no-brainer.

The post Yes, You Need a Website: Here’s Why appeared first on deshack.

on August 03, 2015 02:17 PM
on August 03, 2015 01:45 PM

August 02, 2015

Launchpad news, July 2015

Launchpad News

Here’s a summary of what the Launchpad team got up to in July.


  • We fixed a regression in the wrapping layout of side-by-side diffs (#1436483)
  • Various code pages now have meta tags to redirect “go get” to the appropriate Bazaar or Git URL, allowing the removal of special-casing from the “go” tool (#1465467)
  • Merge proposal diffs including mention of binary patches no longer crash the new-and-improved code review comment mail logic (#1471426), and we fixed some line-counting bugs in that logic as well (#1472045)
  • Links to the Git code browsing interface now use shorter URL forms

We’ve also made a fair amount of progress on adding support for triggering webhooks from Launchpad (#342729), which will initially be hooked up for pushes to Git repositories.  The basic code model, webservice API, and job retry logic are all in place now, but we need to sort out a few more things including web UI and locking down the proxy configuration before we make it available for general use.  We’ll post a dedicated article about this once the feature becomes available.
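Since the webhook feature described above isn't generally available yet, the delivery format can only be guessed at, but webhook deliveries conventionally arrive as HTTP POSTs with a JSON body. As a rough sketch (the payload field names here are assumptions, not Launchpad's actual schema), a minimal receiver might look like:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def summarize_push(payload):
    """Build a one-line summary from a (hypothetical) push payload."""
    repo = payload.get("git_repository", "<unknown repository>")
    refs = payload.get("ref_changes", {})
    return "%s: %d ref(s) updated" % (repo, len(refs))

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read and decode the JSON body of the delivery.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length).decode("utf-8"))
        print(summarize_push(payload))
        # Acknowledge the delivery so the sender doesn't retry.
        self.send_response(200)
        self.end_headers()

# To serve: HTTPServer(("", 8080), WebhookHandler).serve_forever()
```

Whatever the final schema turns out to be, the shape of a receiver like this should stay roughly the same: parse the body, act on it, return 200 quickly.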

Mail notifications

We posted recently about improved filtering options (#1474071).  In the process of doing so, we cleaned up several older problems with the mails we send:

  • Notifications for a bug’s initial message no longer include a References header, which confuses some versions of some mail clients (#320034)
  • Package upload notifications no longer attempt to transliterate non-ASCII characters in package maintainer names into ASCII equivalents; they now use RFC2047 encoding instead (#362957)
  • Notifications about duplicate bugs now include an X-Launchpad-Bug-Duplicate header (#363995)
  • Package build failure notifications now include a “You are receiving this email because …” rationale (#410893)

Package build infrastructure

  • The sbuild upgrade last month introduced some regressions in our handling of package builds that need to wait for dependencies (e.g. #1468755), and it’s taken a few goes to get this right; this is somewhat improved now, and the next builder deployment will fix all the currently-known bugs in this area
  • In the same area, we’ve made some progress on adding minimal support for Debian’s new build profiles syntax, applying fixes to upload processing and dependency-wait analysis, although this should still be considered bleeding-edge and unlikely to work from end to end
  • We’ve been working on adding support for building snap packages (#1476405), but there’s still more to do here; we should be able to make this available to some alpha testers around mid-August


  • We’ve arranged to redirect translations for the overlay PPA used for current Ubuntu phone images to the ubuntu-rtm/15.04 series so that they can be translated effectively (#1463723); we’re still working on copying translations into place from before this fix
  • Projects and project groups no longer have separately-editable “display name” and “title” fields, which were very similar in purpose; they now just have display names (#1853, #4449)
  • Cancelled live file system builds are sorted to the end of the build history, rather than the start (#1424672)
on August 02, 2015 08:01 PM

I was initially annoyed to see implications earlier on Planet Ubuntu that the Ubuntu community was in decline. I was tempted to name this article "Why the Negativity? Let's Get On With Making Ubuntu Awesome".

The Ubuntu community is not in decline, if you take a broader view and stick to basics. Some (may) continue to focus on a very narrow segment of society (developers, mostly) and that's a shame. It's also not the Ubuntu I joined. I seem to recall that "We're all one." We do not count certain types of people over others, and we should not proclaim the decline of a community when a thin demographic is not increasing in numbers.

Let's define some terms:

A metropolitan area (city) in British Columbia, Canada.

An area that is traversable on foot or bike or public transit within 45 minutes.

A group of people that share an affinity to one another, historically by virtue of being local.

An increase in numbers over time.

Without limit.

Any questions?

on August 02, 2015 07:12 PM

Hello Pelican!

Andrea Corbellini

Today I switched from WordPress.com to Pelican and GitHub Pages.

First off, let me say: almost all URLs that were previously working should still work. Only the feed URLs are broken, and this is not something I can fix. If you were following my blog via a feed reader, you should update to the new feed. Sorry for the inconvenience.

Having said that, I'd like to share with you the motivation that made me move and the details of the migration.

The bad things of WordPress

Now, I don't want this to be a rant, so I'll be pretty concise. WordPress, the content management system, is an excellent platform for blogging. Easy to start with, easy to maintain, easy to use. WordPress.com makes things even easier. It also comes with many useful features, like comments and social networks integration.

The problem is: you can't customize things or add features without paying. Of course, this is business, and I do not want to discuss business decisions made at WordPress.com. Besides, I could live fine with most of the major limitations. Also, I was perfectly conscious of this kind of problem with WordPress.com when I started (after all, this is not the first blog I started).

I actually became upset with WordPress.com when writing the series of blog posts about Elliptic Curve Cryptography. When writing those articles, I spent a lot of time on workarounds to overcome limitations. Being used to Vim and its advanced features, I also found the editors (both the old and the new one) a great obstacle to getting things done quickly. I do not want to enter into the details of the problems I'm referring to; what matters is that, eventually, I gave up and realized it was time to move on and seek an alternative.

Why Pelican

Pelican is a static site generator. I've always thought that a static site had too many limitations for me. But while seeking an alternative to WordPress.com, I realized that many of those limitations were not affecting me in any way. Actually, with a static site I can do everything I want: edit my articles with Vim, render my equations with MathJax, customize my theme, version control my content, write scripts to post-process my content.

The only bad thing about Pelican is that it does not come with any theme I truly like. I decided to make my own. I'm not entirely satisfied with it, as I feel it is too "anonymous", but I believe it is fully responsive, fast, readable and offers all the features I want. Perhaps I'll tweak it a little more to make it more "personal".

Setting up Pelican and migrating everything required some time, but at least this time I worked on true solutions, not on ugly hacks and workarounds like I did with WordPress. This implies that when writing articles I will be able to focus more on content than other details.
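For those curious what "setting up Pelican" amounts to, the whole site is driven by a small Python settings file, conventionally pelicanconf.py. A minimal sketch (the values here are illustrative, not my actual configuration) looks like this:

```python
# pelicanconf.py -- minimal Pelican settings (illustrative values)
AUTHOR = 'Andrea'
SITENAME = 'My blog'
SITEURL = ''              # left empty during local development
PATH = 'content'          # where the articles live
TIMEZONE = 'Europe/Rome'
DEFAULT_LANG = 'en'
THEME = 'themes/mytheme'  # a custom theme, like the one described above
```

Running `pelican content` then renders everything under PATH into static HTML, ready to be pushed to GitHub Pages.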

Why not other static site generators

In short: Pelican is written in Python and to my eyes it looked better than the other Python static site generators. I'll be honest and say that I did not truly evaluate all of the alternatives: I knew of others who had switched to Pelican, and that made me try Pelican before all other solutions.


In the end I decided to leave WordPress for Pelican hosted on GitHub Pages. I'm pretty satisfied with the result I got. The nature of GitHub Pages prevents me from using HTTP redirects (and therefore the old feed links are broken), however in exchange I've got much more freedom, and this is what matters to me.

on August 02, 2015 06:55 PM

It’s Seafair weekend in Seattle. As always, the centerpiece is the H1 Unlimited hydroplane races on Lake Washington.

In my social circle, I’m nearly the only person I know who grew up in the area. None of the newcomers I know had heard of hydroplane racing before moving to Seattle. Even after I explain it to them — i.e., boats with 3,000+ horsepower airplane engines that fly just above the water at more than 320 kph (200 mph), leaving 10m+ (30ft) wakes behind them! — most people seem more puzzled than interested.

I grew up near the shore of Lake Washington and could see (and hear!) the races from my house. I don’t follow hydroplane racing throughout the year but I do enjoy watching the races at Seafair. Here’s my attempt to explain and make the case for the races to new Seattleites.

Before Microsoft, Amazon, Starbucks, etc., there were basically three major Seattle industries: (1) logging and lumber based industries like paper manufacturing; (2) maritime industries like fishing, shipbuilding, shipping, and the navy; (3) aerospace (i.e., Boeing). Vintage hydroplane racing represented the Seattle trifecta: Wooden boats with airplane engines!

The wooden U-60 Miss Thriftway circa 1955 (Thriftway is a Washington-based supermarket that nobody outside the state has heard of), pictured below, is old-Seattle awesomeness. Modern hydroplanes are now made of fiberglass, but two out of three isn’t bad.

Although the boats are racing this year in events in Indiana, San Diego, and Detroit in addition to the two races in Washington, hydroplane racing retains deep ties to the region. Most of the drivers are from the Seattle area. Many or most of the teams and boats are based in Washington throughout the year. Many of the sponsors are unknown outside of the state. This parochialness itself cultivates a certain kind of appeal among locals.

In addition to the old-Seattle/new-Seattle cultural divide, there’s a class divide that I think is also worth challenging. Although the demographics of hydro-racing fans are surprisingly broad, it can seem like Formula One or NASCAR on the water. It seems safe to suggest that many of the demographic groups moving to Seattle for jobs in the tech industry are not big into motorsports. Although I’m no follower of motorsports in general, I’ve written before about cultivated disinterest in professional sports, and it remains something that I believe is worth taking on.

It’s not all great. In particular, the close relationship between Seafair and the military makes me very uneasy. That said, even with the military-heavy airshow, I enjoy the way that Seafair weekend provides a little pocket of old-Seattle that remains effectively unchanged from when I was a kid. I’d encourage others to enjoy it as well!

on August 02, 2015 02:45 AM


Time for Trusty Tahr, yet again :)

As you know, Ubuntu GNOME 14.04 was our first LTS release. Thus, there are point releases. Ubuntu GNOME 14.04.1 and 14.04.2 have already been released. Now, it is time for 14.04.3 to be released on the 6th of August, 2015.

This is a call for help to test the daily builds of Ubuntu GNOME Trusty Tahr to make sure 14.04.3 will be, just like our previous releases, solid as a rock.

Please, make sure to use the ISO Tracker:

If you are NEW to the whole testing process, that’s not a problem at all. This page:

Which has been rewritten to be easier and better, will help you get started 😉

Please, help us and test the daily builds of Ubuntu GNOME Trusty Tahr. If you need any help or have any question, don’t hesitate to contact us:

As always, your endless help and continuous support are highly appreciated :)

Happy Testing!

on August 02, 2015 02:21 AM

August 01, 2015

tl;dr:  Your Ubuntu-based container is not a copyright violation.  Nothing to see here.  Carry on.
I am speaking for my employer, Canonical, when I say you are not violating our policies if you use Ubuntu with Docker in sensible, secure ways.  Some have claimed otherwise, but that’s simply sensationalist and untrue.

Canonical publishes Ubuntu images for Docker specifically so that they will be useful to people. You are encouraged to use them! We see no conflict between our policies and the common sense use of Docker.

Going further, we distribute Ubuntu in many different signed formats -- ISOs, root tarballs, VMDKs, AMIs, IMGs, Docker images, among others.  We take great pride in this work, and provide them to the world at large: in public clouds like AWS, GCE, and Azure, as well as in OpenStack and on DockerHub.  These images, and their signatures, are mirrored by hundreds of organizations all around the world. We would not publish Ubuntu in the DockerHub if we didn’t hope it would be useful to people using the DockerHub. We’re delighted for you to use them in your public clouds, private clouds, and bare metal deployments.

Any Docker user will recognize these, as the majority of all Dockerfiles start with these two words....

FROM ubuntu

In fact, we gave away hundreds of these t-shirts at DockerCon.

We explicitly encourage distribution and redistribution of Ubuntu images and packages! We also embrace a very wide range of community remixes and modifications. We go further than any other commercially supported Linux vendor to support developers and community members scratching their itches. There are dozens of such derivatives and many more commercial initiatives based on Ubuntu - we are definitely not trying to create friction for people who want to get stuff done with Ubuntu.

Our policy exists to ensure that when you receive something that claims to be Ubuntu, you can trust that it will work to the same standard, regardless of where you got it from. And people everywhere tell us they appreciate that - when they get Ubuntu on a cloud or as a VM, it works, and they can trust it.  That concept is actually hundreds of years old, and we’ll talk more about that in a minute....

So, what do I mean by “sensible use” of Docker? In short - secure use of Docker. If you are using a Docker container then you are effectively giving the producer of that container ‘root’ on your host. We can safely assume that people sharing an Ubuntu docker based container know and trust one another, and their use of Ubuntu is explicitly covered as personal use in our policy. If you trust someone to give you a Docker container and have root on your system, then you can handle the risk that they inadvertently or deliberately compromise the integrity or reliability of your system.

Our policy distinguishes between personal use, which we can generalise to any group of collaborators who share root passwords, and third party redistribution, which is what people do when they exchange OS images with strangers.

Third party redistribution is more complicated because, when things go wrong, there’s a real question as to who is responsible for it. Here’s a real example: a school district buys laptops for all their students with free software. A local supplier takes their preferred Linux distribution and modifies parts of it (like the kernel) to work on their hardware, and sells them all the PCs. A month later, a distro kernel update breaks all the school laptops. In this case, the Linux distro who was not involved gets all the bad headlines, and the free software advocates who promoted the whole idea end up with egg on their faces.

We’ve seen such cases in real hardware, and in public clouds and other, similar environments.  Digital Ocean very famously published some modified and very broken Ubuntu images, outside of Canonical's policies.  That's inherently wrong, and easily avoidable.

So we simply say, if you’re going to redistribute Ubuntu to third parties who are trusting both you and Ubuntu to get it right, come and talk to Canonical and we’ll work out how to ensure everybody gets what they want and need.

Here’s a real exercise I hope you’ll try...

  1. Head over to your local purveyor of fine wines and liquors.
  2. Pick up a nice bottle of Champagne, Single Malt Scotch Whisky, Kentucky Straight Bourbon Whiskey, or my favorite -- a rare bottle of Lambic Oude Gueze.
  3. Carefully check the label, looking for a seal of Appellation d'origine contrôlée.
  4. That seal should earn your confidence that the bottle was produced according to strict quality, format, and geographic standards.
  5. Before you pop the cork, check the seal, to ensure it hasn’t been opened or tampered with.  Now, drink it however you like.
  6. Pour that Champagne over orange juice (if you must).  Toss a couple ice cubes in your Scotch (if that’s really how you like it).  Pour that Bourbon over a Coke (if that’s what you want).
  7. Enjoy however you like -- straight up or mixed to taste -- with your own guests in the privacy of your home.  Just please don’t pour those concoctions back into the bottle, shove a cork in, put them back on the shelf at your local liquor store and try to pass them off as Champagne/Scotch/Bourbon.

Rather, if that’s really what you want to do -- distribute a modified version of Ubuntu -- simply contact us and ask us first (thanks for sharing that link, mjg59).  We have some amazing tools that can help you either avoid that situation entirely, or at least do everyone a service and let us help you do it well.

Believe it or not, we’re really quite reasonable people!  Canonical has a lengthy, public track record, donating infrastructure and resources to many derivative Ubuntu distributions.  Moreover, we’ve successfully contracted mutually beneficial distribution agreements with numerous organizations and enterprises. The result is happy users and happy companies.

FROM ubuntu,

The one and only Champagne region of France

on August 01, 2015 04:19 PM

S08E21 – United Passions - Ubuntu Podcast

Ubuntu Podcast from the UK LoCo

It’s Episode Twenty-one of Season Eight of the Ubuntu Podcast! Mark Johnson, Laura Cowen, Martin Wimpress, and Alan Pope are all together again!

In this week’s show:

We look at what’s been going on in the news:

We also take a look at what’s been going on in the community:

That’s all for this week, please send your comments and suggestions to:
Join us on IRC in #ubuntu-podcast on Freenode
Follow us on Twitter
Find our Facebook Fan Page
Follow us on Google+

on August 01, 2015 10:30 AM

July 31, 2015

FCM#100-1 is OUT!

Ronnie Tucker

Full Circle – the independent magazine for the Ubuntu Linux community – is proud to announce the release of our ninety-ninth issue.

This month:
* Command & Conquer
* How-To : LaTeX, LibreOffice, and Programming JavaScript
* Graphics : Inkscape.
* Chrome Cult
* Linux Labs: Customizing GRUB
* Ubuntu Phones
* Review: Meizu MX4 and BQ Aquaris E5
* Book Review: How Linux Works
* Ubuntu Games: Brutal Doom, and Dreamfall Chapters
plus: News, Arduino, Q&A, and soooo much more.

Get it while it’s hot!

We now have several issues available for download on Google Play/Books. If you like Full Circle, please leave a review.

AND: We have a Pushbullet channel which we hope will make it easier to automatically receive FCM on launch day.
on July 31, 2015 06:53 PM

Kubuntu Paddleboard Club

Jonathan Riddell

I always say the best way to tour a city is from the water






on July 31, 2015 03:30 PM

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donors (thanks to them!) but also for the wider Debian community, because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

Debian LTS

This month I have been paid to work 15 hours on Debian LTS. In that time I did the following:

  • Finished the work on the Distro Tracker to make it display detailed security status on each supported release (example).
  • Prepared and released DLA-261-2 fixing a regression in the aptdaemon security update (happening only when you have python 2.5 installed).
  • Prepared and released DLA-272-1 fixing 3 CVEs in python-django.
  • Prepared and released DLA-286-1 fixing 1 CVE in squid3. The patch was rather hard to backport. Thankfully upstream was very helpful: he reviewed and tested my patch.
  • Did one week of “LTS Frontdesk” with CVE triaging. I pushed 19 commits to the security tracker.

Kali Linux / Debian Stretch work

Kali Linux wants to experiment with something close to Debian Constantly Usable Testing: we have a kali-rolling release that is based on Debian Testing, and we want to take a new snapshot every 4 months (in order to have 3 releases per year).

More specifically, we have a kali-dev repository which is exactly Debian Stretch + our own Kali packages (the Kali packages take precedence), updated 4 times a day, just like testing is. And we have a britney2 setup that generates kali-rolling out of kali-dev (without any requirement in terms of delay/RC bugs, it just ensures that dependencies are not broken), also 4 times a day.

We have Jenkins jobs that ensure that our metapackages are installable in kali-dev (and kali-rolling) and that we can build our ISO images. When things break, I have to fix them, and I try to fix them on the Debian side first. So here are some examples of things I did in response to various failures:

  • Reported #791588 on texinfo. It was missing a versioned dependency on tex-common and migrated too early. The package was uninstallable in testing for a few days.
  • Reported #791591 on pinba-engine-mysql-5.5: package was uninstallable (had to be rebuilt). It appeared on output files of our britney instance.
  • I made a non-maintainer upload (NMU) of chkrootkit to fix two RC bugs so that the package can go back to testing. The package is installed by our metapackages.
  • Reported #791647: debtags no longer supports “debtags update –local” (a feature that went away but that is used by Kali).
  • I made a NMU of debtags to fix a release critical bug (#791561 debtags: Missing dependency on python3-apt and python3-debian). kali-debtags was uninstallable because it calls debtags in its postinst.
  • Reported #791874 on python-guess-language: Please add a python 2 library package. We have that package in Kali and when I tried to sync it from Debian I broke something else in Kali which depends on the Python 2 version of the package.
  • I made a NMU of tcpick to fix a build failure with GCC5 so that the package could go back to testing (it’s part of our metapackages).
  • I requested a bin-NMU of jemalloc and a give-back of hiredis on powerpc in #792246 to fix #788591 (hiredis build failure on powerpc). I also downgraded the severity of #784768 to important so that the package could go back to testing. Hiredis is a dependency of OpenVAS and we need the package in testing.

If you analyze this list, you will see that a large part of the issues we had come down to packages getting removed from testing due to RC bugs. We should be able to anticipate those issues and monitor the packages that have an impact on Kali. We will probably add a new Jenkins job that installs all the metapackages and then runs how-can-i-help -s testing-autorm --old… I just submitted #794238 as a wishlist bug against how-can-i-help.

At the same time, there are bugs that make it into testing and that I fix or work around on the Kali side. But those fixes and workarounds might be more useful if they were pushed to testing via testing-proposed-updates. I tried to see whether other derivatives had similar needs, to find out whether derivatives could join their efforts at this level, but it does not look like it for now.

Last but not least, bugs reported on the Kali side also resulted in Debian improvements:

  • I reported #793360 on apt: APT::Never-MarkAuto-Sections not working as advertised. And I submitted a patch.
  • I orphaned dnswalk and made a QA upload to fix its only bug.
  • We wanted a newer version of the nvidia drivers. I filed #793079 requesting the new upstream release and the maintainer quickly uploaded it to experimental. I imported it on the Kali side but discovered that it was not working on i386 so I submitted #793160 with a patch.
  • I noticed that Kali build daemons tend to accumulate many /dev/shm mounts and tracked this down to schroot. I reported it as #793081.

Other Debian work

Sponsorship. I sponsored multiple packages for Daniel Stender, who is packaging prospector, a piece of software that I requested earlier (through an RFP bug). So I reviewed and uploaded python-requirements-detector, python-setoptconf, pylint-celery and pylint-common. During a review I also discovered a nice bug in dh-python (#793609: a comment in the middle of a Build-Depends could break a package). I also sponsored an upload of notmuch-addrlookup (a new package requested by a Freexian customer).

Packaging. I uploaded python-django 1.7.9 to unstable and 1.8.3 to experimental to fix security issues. I uploaded a new upstream release of ditaa through a non-maintainer upload (again at the request of a Freexian customer).

Distro Tracker. Besides the work to integrate the detailed security status, I fixed the code to be compatible with Django 1.8 and modified the tox configuration to ensure that the test suite is regularly run against Django 1.8. I also merged multiple patches from Christophe Siraut (cf. #784151 and #754413).


See you next month for a new summary of my activities.


on July 31, 2015 02:45 PM

It took a while, but the Unreliable Town Clock finally lived up to its name. Surprisingly, the fault was not mine, but Amazon’s.

For several hours tonight, a number of AWS services in us-east-1, including SNS, experienced elevated error rates according to the AWS status page.

Successful, timely chimes were broadcast through the Unreliable Town Clock public SNS topic up to and including:

2015-07-31 05:00 UTC

and successful chimes resumed again at:

2015-07-31 08:00 UTC

Chimes in between were mostly unpublished, though SNS appears to have delivered a few chimes during that period up to several hours late and out of order.

I had set up monitoring and alerting for the Unreliable Town Clock. This worked perfectly and I was notified within 1 minute of the first missed chime, though it turned out there was nothing I could do but wait for AWS to correct the underlying issue with SNS.

Since we now know SNS has the potential to fail in a region, I have launched an Unreliable Town Clock public SNS Topic in a second region: us-west-2. The infrastructure in each region is entirely independent.

The public SNS topic ARNs for both regions are listed at the top of this page:

You are welcome to subscribe to the public SNS topics in both regions to improve the reliability of invoking your scheduled functionality.

The SNS message content will indicate which region is generating the chime.
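For reference, subscribing to one of the regional topics is a one-liner with the AWS CLI. A sketch, assuming the CLI is configured; the topic ARN and email address below are placeholders, not the real values from the page above:

```shell
# Subscribe an email endpoint to the us-west-2 clock topic (illustrative ARN)
aws sns subscribe \
  --region us-west-2 \
  --topic-arn arn:aws:sns:us-west-2:123456789012:unreliable-town-clock-topic \
  --protocol email \
  --notification-endpoint you@example.com
```

Repeating the command with the us-east-1 topic ARN gives you chimes from both regions, so a single-region SNS outage no longer silences your scheduled jobs.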

Original article and comments:

on July 31, 2015 09:55 AM

Unnecessary Finger Pointing

Benjamin Kerensa

I just wanted to pen quickly that I found Chris Beard’s open letter to Satya Nadella (CEO of Microsoft) to be a bit hypocritical. In the letter he said:

“I am writing to you about a very disturbing aspect of Windows 10. Specifically, that the update experience appears to have been designed to throw away the choice your customers have made about the Internet experience they want, and replace it with the Internet experience Microsoft wants them to have.”

Right, but what about the experiences that Mozilla chooses as defaults for users, like switching to Yahoo and making that the default upon upgrade, not respecting their previous settings? What about baking Pocket and Tiles into the experience? Did users want these features? All I have seen is opposition to them.

“When we first saw the Windows 10 upgrade experience that strips users of their choice by effectively overriding existing user preferences for the Web browser and other apps, we reached out to your team to discuss this issue. Unfortunately, it didn’t result in any meaningful progress, hence this letter.”

Again, see above, and think about the past year or two in which Mozilla has overridden existing user preferences in Firefox. The big difference here is that Mozilla calls it acting on behalf of the user as its agent, but when Microsoft does the same, it is taking away choice?

Setting Firefox as the Windows 10 default: clearly not that difficult

Anyway, I could go on, but the gist is that the letter is hypocritical and unnecessary finger-pointing. Let’s focus on making great products for our users, and technical changes like this to Windows won’t be a barrier to users picking Firefox. Sorry that I cannot be a Mozillian who will blindly retweet you and support a misguided social media campaign to point fingers at Microsoft.

Read the entire letter here:

on July 31, 2015 06:39 AM

At KDE’s Akademy conference we launched Plasma Mobile, a free, open and community-made mobile platform.

Kubuntu has made some reference images which can be installed on a Nexus 5 phone.

More information is on the Plasma Mobile wiki pages.

Reporting includes:

on July 31, 2015 01:32 AM

July 30, 2015

Ubuntu shell overpowered

Ayrton Araujo

In order to be more productive in my environment, as a command-line-centric guy, I started using zsh as my default shell three years ago. For those who have never tried it, I would like to share my personal thoughts.

What are the main advantages?

  • Extended globbing: For example, *(.) matches only regular files, not directories, whereas a*z(/) matches directories whose names start with a and end with z. There are a bunch of other things;
  • Inline glob expansion: For example, type rm *.pdf and then hit tab. The glob *.pdf will expand inline into the list of .pdf files, which means you can change the result of the expansion, perhaps by removing from the command the name of one particular file you don’t want to rm;
  • Interactive path expansion: Type cd /u/l/b and hit tab. If there is only one existing path each of whose components starts with the specified letters (that is, if only one path matches /u/l/b*), then it expands in place. If there are two, say /usr/local/bin and /usr/libexec/bootlog.d, then it expands to /usr/l/b and places the cursor after the l. Type o, hit tab again, and you get /usr/local/bin;
  • Nice prompt configuration options: For example, my prompt is currently displayed as tov@zyzzx:/..cts/research/alms/talk. I prefer to see a suffix of my current working directory rather than have a really long prompt, so I have zsh abbreviate that portion of my prompt at a maximum length.
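The glob qualifiers above are easy to try out. A quick sketch (the file and directory names are invented for the demo, and it requires zsh to be installed, since glob qualifiers are zsh syntax):

```shell
# Set up a scratch directory with a few files and directories
mkdir -p /tmp/zsh-glob-demo/a_to_z /tmp/zsh-glob-demo/archive
cd /tmp/zsh-glob-demo
touch alpha.txt notes.pdf

# Invoke zsh explicitly so the qualifiers are interpreted correctly
zsh -c 'print -l *(.)'    # only the regular files: alpha.txt, notes.pdf
zsh -c 'print -l a*z(/)'  # only directories matching a*z: a_to_z
```

The qualifier in parentheses filters the glob matches by file type, which is something plain bash globbing cannot do.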


The Z shell is mainly praised for its interactive use: the prompts are more versatile, the completion is more customizable and often faster than bash-completion, and it is easy to write plugins. One of my favorite integrations is with git, for better visibility of the current repository status.

As it focuses on interactive use, it is a good idea to keep writing your shell scripts starting with #!/bin/bash for interoperability reasons. Bash is still the most mature and stable option for shell scripting, in my point of view.

So, how to install and set up?

sudo apt-get install zsh zsh-lovers -y

zsh-lovers will provide you with a bunch of examples to help you discover better ways to use your shell.

To set zsh as the default shell for your user:

chsh -s /bin/zsh

Don’t try to set zsh as the default shell for the whole system, or some things may stop working.

Two friends of mine, Yuri Albuquerque and Demetrius Albuquerque (brothers from a former hacker family =x), also recommended using oh-my-zsh. Thanks for the tip.

How to install oh-my-zsh as a normal user?

curl -L | sh

My $ZSH_THEME is set to “bureau” under my $HOME/.zshrc. You can try “random” or other themes located inside $HOME/.oh-my-zsh/themes.
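For reference, the relevant part of an oh-my-zsh ~/.zshrc looks roughly like this (a sketch; the paths assume a default install in $HOME/.oh-my-zsh):

```shell
# ~/.zshrc (oh-my-zsh)
export ZSH="$HOME/.oh-my-zsh"   # oh-my-zsh install location
ZSH_THEME="bureau"              # or "random" to sample the bundled themes
plugins=(git)                   # git plugin: repository status in the prompt
source "$ZSH/oh-my-zsh.sh"
```

Changing ZSH_THEME and opening a new terminal is enough to try out another theme.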

And, if you use Ruby under RVM, I also recommend to read this:

Happy hacking :-)

on July 30, 2015 11:53 PM

Because I've run make deb-pkg so many times, I've started to see exactly where it slows down, even on really large machines. Observing CPU usage, I noticed that many parts of the build were serialized on a single core. Upon further investigation I found the following.

on July 30, 2015 08:20 PM

The Alpha 2 of Lubuntu 15.10 is now released. Check out all about it at the wiki.
on July 30, 2015 06:06 PM
The Second Alpha of Wily (to become 15.10) has now been released!

The Alpha-2 images can be downloaded from:

More information on Kubuntu Alpha-2 can be found here:
on July 30, 2015 05:51 PM

"I do not think there is any thrill that can go through the human heart like that felt by the inventor as he sees some creation of his brain unfolding to success… such emotions make a man forget food, sleep, friends, love, everything."
– Nikola Tesla

The second alpha of the Wily Werewolf (to become 15.10) has now been released!

This alpha features images for Kubuntu, Lubuntu, Ubuntu MATE, Ubuntu Kylin and the Ubuntu Cloud images.

Pre-releases of the Wily Werewolf are *not* encouraged for anyone needing a stable system or anyone who is not comfortable running into occasional, even frequent breakage. They are, however, recommended for Ubuntu flavor developers and those who want to help in testing, reporting and fixing bugs as we work towards getting this release ready.

Alpha 2 includes a number of software updates that are ready for wider testing. This is quite an early set of images, so you should expect some bugs.

While these Alpha 2 images have been tested and work, except as noted in the release notes, Ubuntu developers are continuing to improve the Wily Werewolf. In particular, once newer daily images are available, system installation bugs identified in the Alpha 2 installer should be verified against the current daily image before being reported in Launchpad. Using an obsolete image to re-report bugs that have already been fixed wastes your time and the time of developers who are busy trying to make 15.10 the best Ubuntu release yet. Always ensure your system is up to date before reporting bugs.


Kubuntu uses KDE software and now features the new Plasma 5 desktop.

The Kubuntu 15.10 Alpha 2 images can be downloaded from:

More information about Kubuntu 15.10 Alpha 2 can be found here:


Lubuntu is a flavour of Ubuntu based on LXDE and focused on providing a very lightweight distribution.

The Lubuntu 15.10 Alpha 2 images can be downloaded from:

More information about Lubuntu 15.10 Alpha 2 can be found here:

Ubuntu MATE

Ubuntu MATE is a flavour of Ubuntu featuring the MATE desktop environment for people who just want to get stuff done.

The Ubuntu MATE 15.10 Alpha 2 images can be downloaded from:

More information about Ubuntu MATE 15.10 Alpha 2 can be found here:

Ubuntu Kylin

Ubuntu Kylin is a flavour of Ubuntu that is more suitable for Chinese users.

The Ubuntu Kylin 15.10 Alpha 2 images can be downloaded from:

More information about Ubuntu Kylin 15.10 Alpha 2 can be found here:

Ubuntu Cloud

Ubuntu Cloud images can be run on Amazon EC2, Openstack, SmartOS and many other clouds.

The Ubuntu Cloud 15.10 Alpha 2 images can be downloaded from:

Regular daily images for Ubuntu can be found at:

If you’re interested in following the changes as we further develop Wily, we suggest that you subscribe to the ubuntu-devel-announce list. This is a low-traffic list (a few posts a week) carrying announcements of approved specifications, policy changes, alpha releases and other interesting events.

A big thank you to the developers and testers for their efforts to pull together this Alpha release!

Originally posted to the ubuntu-release mailing list on Thu Jul 30 17:03:26 UTC 2015 by Martin Wimpress on behalf of Ubuntu Release Team

on July 30, 2015 05:18 PM
Here I was, worrying attendance would be poor. Fool me!

Our first AfricanTeams meeting was a rip-roaring success, even though I struggled to keep up at times. Attendance peaked at 54. Special thanks to the CC and LC members who attended; you made my day. :D I can't write about the whole meeting because suggestions and ideas were flying to and fro so rapidly.
All I know is that the plan is working and the last 2 teams will be included in the next meeting. As you will see at we have added a section where LUGs can now join our group. Being rather on the old side, I don't understand if there is a difference between them and us. Aren't we all just one big Linux family? So what if some prefer other Linux distros; betcha they have Ubuntu running somewhere. Personal thanks go out to everyone involved for making this whole project such a success. Meeting minutes can be seen at
Thank you everyone
on July 30, 2015 04:07 PM

The power of components

Charles Butler

While dogfooding my own work, I decided it was time to upgrade my distributed Docker services to the shiny Kubernetes charms, now that 1.0 landed last week. I've been running my own "production" services (I say in air quotes, because my 20 or so microservices aren't mission critical; if my RSS reader tanks, life will go on) with some of the charm concepts I've posted about over the last 4 months. It's time to really flex the Kubernetes work we've done, fire up the latest and greatest, and start to really feel the burn of a long-running Kubernetes cluster as upgrades happen and unforeseen behaviors start to bubble up to the surface.


One of the things I knew right away is that our provided bundle was way overkill for what I wanted to do. I really only needed 2 nodes and, using colocation for the services, I could attain this really easily. We spent a fair amount of time deliberating about how to encapsulate the topology of a Kubernetes cluster, and what that would look like with the mix-and-match components one could reasonably deploy with.

Node 1

  • ETCD (running solo, I like to live dangerously)
  • Kubernetes-Master

Node 2

  • Docker
  • Kubernetes Node (the artist formerly known as a minion)

Did you know: The Kubernetes project retired the minion title from their nodes and have re-labeled them as just 'node'?

Why this is super cool

I'm excited to say that our attention to requirements has made this ecosystem super simple to decompose and re-assemble in a manner that fits your needs. I'm even considering contributing a single-server bundle that stuffs all the component services onto a single machine. This lowers the cost of entry even further for people looking to just kick the tires and get a feel for Kubernetes.

Right now our entire stack consumes a bare minimum of 4 units:

  • 1 ETCD node
  • 2 Docker/Kubernetes Nodes
  • 1 Kubernetes-Master node

This distributed system is more along the lines of what I would recommend for a staging system: scale ETCD to 3 nodes for quorum and HA/failover, and scale your Kubernetes nodes as required, leaving the Kubernetes master to handle only the API load of client interfacing and ecosystem management.

I'm willing to eat this compute space on my node, as I have a rather small deployment topology, and Kubernetes is fairly intelligent with placement of services once a host starts to reach capacity.

What's this look like in bundle format?

Note: I'm using my personal branch of the Docker charm, as it has a UFS filesystem fix that resolves some disk-space concerns and hasn't quite landed in the Charm Store yet due to a rejected review. This will be updated to reflect the Store charm once that has landed.
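Roughly, the two-node topology above can also be reproduced with explicit placement instead of a bundle. This is only a sketch: the charm names and machine numbers below are illustrative, not the exact Charm Store URLs or my personal branch:

```shell
# Node 1: etcd and the Kubernetes master colocated
juju deploy etcd --to 1
juju deploy kubernetes-master --to 1

# Node 2: docker plus the kubernetes node (the artist formerly known as minion)
juju deploy docker --to 2
juju deploy kubernetes --to 2

# Wire the components together
juju add-relation etcd kubernetes-master
juju add-relation etcd kubernetes
juju add-relation kubernetes-master kubernetes
```

The --to placement directive is what makes the colocation possible; without it, each service would land on its own freshly provisioned machine.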

Deploy Today

juju quickstart

Deploy Happy!

on July 30, 2015 12:26 PM

Akademy Day Trip

Jonathan Riddell


Photo captions from the trip:

  • The GCC 5 Transition caused the apocalypse so we went out to see the world while it still existed
  • No Soy Líder, Ahora Soy El Capitán ("I'm not the leader, now I'm the captain")
  • See Hoarse
  • El Torre ("The Tower")
  • We will climb this!
  • David reached the top
  • Fin de la terre! ("End of the earth!")

on July 30, 2015 10:07 AM

The DebConf team have just published the first list of events scheduled for DebConf15 in Heidelberg, Germany, from 15 - 22 August 2015.

There are two specific events related to free real-time communications and a wide range of other events related to more general topics of encryption and privacy.

15 August, 17:00, Free Communications with Free Software (as part of the DebConf open weekend)

The first weekend of DebConf15 is an open weekend aimed at a wider audience than the traditional DebConf agenda. The open weekend includes some keynote speakers, a job fair and various other events on the first Saturday and Sunday.

The RTC talk will look at what solutions exist for free and autonomous voice and video communications using free software and open standards such as SIP, XMPP and WebRTC as well as some of the alternative peer-to-peer communications technologies that are emerging. The talk will also look at the pervasive nature of communications software and why success in free RTC is so vital to the health of the free software ecosystem at large.

17 August, 17:00, Challenges and Opportunities for free real-time communications

This will be a more interactive session; people are invited to come and talk about their experiences and the problems they have faced deploying RTC solutions for professional or personal use. We will try to look at some RTC/VoIP troubleshooting techniques as well as higher-level strategies for improving the situation.

Try the Debian and Fedora RTC portals

Have you registered on the Debian RTC portal? It can successfully make federated SIP calls with users of other domains, including Fedora community members trying their own RTC portal.

You can use it for regular SIP (with clients like Empathy, Jitsi or Lumicall) or WebRTC.

Can't get to DebConf15?

If you can't get to Heidelberg, you can watch the events on the live streaming service and ask questions over IRC.

To find out more about deploying RTC, please see the RTC Quick Start Guide.

Did you know?

Don't confuse Heidelberg, Germany with Heidelberg in Melbourne, Australia. Heidelberg down under was the site of the athletes' village for the 1956 Olympic Games.

on July 30, 2015 09:23 AM
Photo: Nóirín Plunkett & Benjamin Kerensa (Nóirín and I)

Today I learned some of the worst kind of news: my friend Nóirín Plunkett, a valuable contributor to the great open source community, has passed away. They (their preferred pronoun, per their Twitter profile) were well regarded in the open source community for their contributions.

I had known them for about four years now, having met them at OSCON and seen them regularly at other events. They were always great to have a discussion with and learn from and they always had a smile on their face.

It is very sad to lose them as they demonstrated an unmatchable passion and dedication to open source and community and surely many of us will spend many days, weeks and months reflecting on the sadness of this loss.


on July 30, 2015 03:01 AM

July 29, 2015

I wanted to share a unique opportunity to get involved with ubuntu and testing. Last cycle, as part of a datacenter shuffle, the automated installer testing that had been running for ubuntu flavors stopped. The images had been tested automatically via a series of autopilot tests, written originally by the community (thanks Dan et al.!). These tests are vital in reducing the manual-testing burden for images by running through the base manual test cases for each image automatically each day.

When it was noticed that the tests didn't run this cycle, wxl from Lubuntu accordingly filed an RT to find out what had happened. Unfortunately, it seems the CI team within Canonical can no longer run these tests. The good news, however, is that we as a community can run them ourselves instead.

To start exploring the idea of self-hosting and running the tests, I initially asked Daniel Chapman to take a look. Given the impending landing of dekko in the default ubuntu image, Daniel certainly has his hands full. As such, Daniel Kessel has offered to help out and has begun some initial investigations into the tests and server needs. A big thanks to Daniel and Daniel!

But they need your help! The autopilot tests for ubiquity have a few bugs that need solving. A server and jenkins need to be set up, installed, and maintained. Finally, we need to think about reporting the results to places like the isotracker. For more information, you can read about how to run the tests locally to get a better idea of how they work.

The needed skillsets are diverse. Are you interested in helping make flavors better? Do you have some technical skills in writing tests, the web, python, or running a jenkins server? Or perhaps you are willing to learn? If so, please get in touch!

on July 29, 2015 08:26 PM

Users of some email clients, particularly Gmail, have long had a problem filtering mail from Launchpad effectively.  We put lots of useful information into our message headers so that heavy users of Launchpad can automatically filter email into different folders.  Unfortunately, Gmail and some other clients do not support filtering mail on arbitrary headers, only on message bodies and on certain pre-defined headers such as Subject.  Figuring out what to do about this has been tricky.  Space in the Subject line is at a premium – many clients will only show a certain number of characters at the start, and so inserting filtering tags at the start would crowd out other useful information, so we don’t want to do that; and in general we want to avoid burdening one group of users with workarounds for the benefit of another group because that doesn’t scale very well, so we had to approach this with some care.

As of our most recent code update, you’ll find a new setting on your “Change your personal details” page:

Screenshot of email configuration options

If you check “Include filtering information in email footers”, Launchpad will duplicate some information from message headers into the signature part (below the dash-dash-space line) of message bodies: any “X-Launchpad-Something: value” header will turn into a “Launchpad-Something: value” line in the footer.  Since it’s below the signature marker, it should be relatively unobtrusive, but is still searchable.  You can search or filter for these in Gmail by putting the key/value pair in double quotes, like this:

Screenshot of Gmail filter dialog with "Has new words" set to "Launchpad-Notification-Type: code-review"
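Concretely, the tail of a matching message body would then look something like this (the code-review type matches the filter above; the branch value is a hypothetical example):

```
...notification text...
-- 
Launchpad-Notification-Type: code-review
Launchpad-Branch: ~someone/someproject/somebranch
```

Everything below the dash-dash-space line is treated as a signature by most clients, so these lines stay out of the way while remaining searchable.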

At the moment this only works for emails related to Bazaar branches, Git repositories, merge proposals, and build failures.  We intend to extend this to a few other categories soon, particularly bug mail and package upload notifications.  If you particularly need this feature to work for some other category of email sent by Launchpad, please file a bug to let us know.

on July 29, 2015 04:43 PM
This article is a translation of Alan Pope's post, available here in English.

I enjoy gaming on my phone and tablet and wanted to add some more games to Ubuntu. With a little work, games are easily 'ported' to Ubuntu Phone. I've put the word 'ported' in quotes because in some cases it's very little effort, so calling it 'porting' may make it sound like more work than it really is.

Update: Some users asked me why anyone would do this rather than simply creating a bookmark in the browser. My apologies if I didn't make this clear. The big advantage is that the game is cached offline, which is valuable in many situations, for example when travelling or with poor Internet access. Of course, not every game can be fully offline; this tutorial won't help much with online games such as Clash of Clans. It will, however, be useful for many others. The games also make use of Ubuntu's application confinement, so the app/game has no access outside its own data directory.

I spent a few evenings and weekends on this with sturmflut, who also wrote up his experience in the article Panda Madness.

We had a lot of fun porting a few games, and I want to share what we did to make the task easier for other developers. I created a basic template on Github that can be used as a starting point, but I want to explain the process and the problems we had, so that others can port more apps and games.

If you have any questions, leave me a comment, or if you prefer, you can also message me privately.

Proof of concept

To show that we can easily port existing games, I licensed a couple of games from Code Canyon, a store where developers can sell their games and other developers can learn from them. I started with a little game called Don't Crash, an HTML5 game created with Construct 2. I could have licensed more games, and there are other game stores out there too, but this is just a good example to show the process.

Note: Scirra's Construct 2 is a popular, powerful and fast Windows-only tool for cross-platform development of HTML5 apps and games. It's used by many indie developers to create games that run in desktop browsers and on mobile devices. Construct 3 is under development; it will be more compatible and will also be available for Linux.

Before licensing Don't Crash I checked that it ran well on Ubuntu Phone using the demo on Code Canyon. After verifying that it worked, I paid and received the files containing the Construct 2 'source'.

If you develop your own games, you can skip this step, because you already have the code to port.

Porting to Ubuntu

The minimum needed to port a game is a few text files and the directory containing the game's source code. Sometimes a couple of tweaks are needed for permissions and to lock rotation, but broadly speaking, it Just Works (TM).

I'm using an Ubuntu computer for all the packaging and testing, but for this game I needed a Windows computer to export it from Construct 2. Your requirements may vary, but if you don't have Ubuntu, you can install it in a virtual machine such as VMWare or VirtualBox, and you'll only need to add the SDK as detailed in the installation instructions.

This is the entire content of the directory, with the game in the www/ folder:

alan@deep-thought:~/phablet/code/popey/licensed/html5_dontcrash⟫ ls -l
total 52
-rw-rw-r-- 1 alan alan   171 Jul 25 00:51 app.desktop
-rw-rw-r-- 1 alan alan   167 Jun  9 17:19 app.json
-rw-rw-r-- 1 alan alan 32826 May 19 19:01 icon.png
-rw-rw-r-- 1 alan alan   366 Jul 25 00:51 manifest.json
drwxrwxr-x 4 alan alan  4096 Jul 24 23:55 www

Creating the metadata

The manifest.json contains the basic details about the application, such as the name, description, author, email and a few more. Here's mine from the latest version of Don't Crash. The fields are self-explanatory, so replace each one with your own app's details.

    "description":  "Don't Crash!",
    "framework":    "ubuntu-sdk-14.10-html",
    "hooks": {
        "dontcrash": {
            "apparmor": "app.json",
            "desktop":  "app.desktop"
    "maintainer":   "Alan Pope ",
    "name":         "dontcrash.popey",
    "title":        "Don't Crash!",
    "version":      "0.22"

Note: "popey" is my developer name in the store; replace it with the name you use on your developer portal page.


Security profile

The app.json file details the permissions the app needs in order to run:

    "template": "ubuntu-webapp",
    "policy_groups": [
    "policy_version": 1.2

Desktop file

This defines how the application is launched, which icon is used, and a few other details:

[Desktop Entry]
Name=Don't Crash
Comment=Avoid the other cars
Exec=webapp-container $@ www/index.html

Again, change the Name and Comment fields, and we're practically done.

Building the click package

With those files created, plus an icon.png, we build the .click package that we'll upload to the store. This is the entire process:

alan@deep-thought:~/phablet/code/popey/licensed⟫ click build html5_dontcrash/
Now executing: click-review ./
./ pass
Successfully built package in './'.

On my laptop it builds in barely a second.

Note the command output: it runs the .click package validity checks at build time, making sure there are no errors that would get it rejected from the Store.

Testing on an Ubuntu device

Testing the .click package on a phone is very easy. Copy the .click file from your Ubuntu PC via USB, using adb, and install it:

adb push <package>.click /tmp
adb shell
pkcon install-local --allow-untrusted /tmp/<package>.click

Go to the apps scope and pull down to refresh, then tap the icon and test the game.

Done! :)


Tweaking the application

At this point I saw room for improvement in some of the games, which I'll cover here:

Loading files locally

Construct 2 warns that "Exported games won't work until you upload them" via a javascript popup ("When running on the file:/// protocol, browsers block many features from working for security reasons"). I deleted the lines of js that perform that check from index.html, and the game works fine in our browser.

Device orientation

With the recent Ubuntu OTA update, device orientation is always enabled, which means some games can rotate and become unplayable. We can lock games to portrait or landscape mode via the .desktop file (created earlier) by simply adding one line:
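A minimal sketch of that lock, assuming the X-Ubuntu-Supported-Orientations key used by Ubuntu Touch desktop files (verify the exact key against the SDK documentation):

```
X-Ubuntu-Supported-Orientations=portrait
```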


Obviously, change "portrait" to "landscape" if the game uses landscape mode. For Don't Crash I didn't do this, because the developer had rotation detection in code and tells the player to rotate the device to the required position.

Twitter links

Some games had embedded Twitter links through which players could share their score. Unfortunately the mobile web version of Twitter doesn't support that, so a link containing "Check out my score in Don't Crash" wouldn't work as expected. For now, I removed the Twitter links.


Cookies

Our browser doesn't support local cookies, and some games use them. For Heroine Dusk I switched the cookies over to Local Storage.
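A minimal sketch of that swap, with a tiny in-memory localStorage stand-in so it runs anywhere (the shim and the score functions are illustrative, not Heroine Dusk's actual code):

```javascript
// In-memory stand-in for window.localStorage, for illustration only.
const localStorage = {
  _data: {},
  setItem(key, value) { this._data[key] = String(value); },
  getItem(key) {
    return Object.prototype.hasOwnProperty.call(this._data, key)
      ? this._data[key]
      : null;
  },
};

// Before (cookie-based, blocked in the app container):
//   document.cookie = "highscore=1200";
// After: persist the score with Web Storage instead.
function saveScore(score) {
  localStorage.setItem("highscore", String(score));
}

function loadScore() {
  return parseInt(localStorage.getItem("highscore") || "0", 10);
}

saveScore(1200);
console.log(loadScore()); // 1200
```

In a real browser you drop the shim and use window.localStorage directly; the API calls are identical.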

Publishing in the store

Publishing .click packages to the Ubuntu store is quick and easy. Simply visit the developer portal, sign in, click on "New Application" and follow the steps to upload the click package.


That's it! I'll keep publishing a few more games in the store. Improvements to the Github template are welcome.

Original article by Alan Pope. Translated by Marcos Costales.
on July 29, 2015 04:07 PM

The Age of Foundations

Thierry Carrez

At OSCON last week, Google announced the creation around Kubernetes of the Cloud-Native Computing Foundation. The next day, Jim Zemlin dedicated his keynote to the (recently-renamed) Open Container Initiative, confirming the Linux Foundation's recent shift towards providing Foundations-as-a-Service. Foundations ended up being the talk of the show, with some questioning the need for Foundations for everything, and others discussing the rise of Foundations as tactical weapons.

Back to the basics

The main goal of open source foundations is to provide a neutral, level and open collaboration ground around one or several open source projects. That is what we call the upstream support goal. Projects are initially created by individuals or companies that own the original trademark and have power to change the governance model. That creates a tilted playing field: not all players are equal, and some of them can even change the rules in the middle of the game. As projects become more popular, that initial parentage becomes a blocker for other contributors or companies to participate. If your goal is to maximize adoption, contribution and mindshare, transferring the ownership of the project and its governance to a more neutral body is the natural next step. It removes barriers to contribution and truly enables open innovation.

Now, those foundations need basic funding, and a common way to achieve that is to accept corporate members. That leads to the secondary goal of open source foundations: serve as a marketing and business development engine for companies around a common goal. That is what we call the downstream support goal. Foundations work to build and promote a sane ecosystem around the open source project, by organizing local and global events or supporting initiatives to make it more usable: interoperability, training, certification, trademark licenses...

Not all Foundations are the same

At this point it's important to see that a foundation is not a label, the name doesn't come with any guarantee. All those foundations are actually very different, and you need to read the fine print to understand their goals or assess exactly how open they are.

On the upstream side, few of them actually let their open source project be completely run by their individual contributors, with elected leadership (one contributor = one vote, and anyone may contribute). That form of governance is the only one that ensures that a project is really open to individual contributors, and the only one that prevents forks due to contributors and project owners not having aligned goals. If you restrict leadership positions to appointed seats by corporate backers, you've created a closed pay-to-play collaboration, not an open collaboration ground. On the downstream side, not all of them accept individual members or give representation to smaller companies, beyond their founding members. Those details matter.

When we set up the OpenStack Foundation, we worked hard to make sure we created a solid, independent, open and meritocratic upstream side. That, in turn, enabled a pretty successful downstream side, set up to be inclusive of the diversity in our ecosystem.

The future

I see the "Foundation" approach to open source as the only viable solution past a given size and momentum around a project. It's certainly preferable to "open but actually owned by one specific party" (which sooner or later leads to forking). Open source now being the default development model in the industry, we'll certainly see even more foundations in the future, not less.

As this approach gets more prevalent, I expect a rise in more tactical foundations that primarily exist as a trade association to push a specific vision for the industry. At OSCON during those two presentations around container-driven foundations, it was actually interesting to notice not the common points, but the differences. The message was subtly different (pods vs. containers), and the companies backing them were subtly different too. I expect differential analysis of Foundations to become a thing.

My hope is that as the "Foundation" model of open source becomes ubiquitous, we make sure to distinguish those which are primarily built to sustain the needs or the strategy of a dozen large corporations from those which are primarily built to enable open collaboration around an open source project. The downstream goal should stay a secondary goal, and new foundations need to make sure they first get the upstream side right.

In conclusion, we should certainly welcome more Foundations being created to sustain more successful open source projects in the future. But we also need to pause and read the fine print: assess how open they are, discover who ends up owning their upstream open source project, and determine their primary reason for existing.

on July 29, 2015 01:30 PM

I don't really know what to say as of late. I've been around, but I've been hiding in the background. When you end up having to read appellate court decisions, Inspector General audit reports, and GAO audit reports, and ponder whether your job will be funded into the new fiscal year, life gets weird. This is the closest illustration I can find of what I do at work:

With all the storm and stress that some persons seem to be trying to raise in the *buntu community, I feel it appropriate to formally step away for a while. I'm still working on the cross-training matter relative to job functions at work. I'm still occasionally working on backports for pumpa and dianara. I am just going to be off the cadence for a while.

I'm wandering. With luck I may return.

on July 29, 2015 12:00 AM