July 25, 2016

AArch64 desktop hardware?

Marcin Juszkiewicz

It will soon be four years since I started working on the AArch64 architecture. A lot has changed in software during that time, and a lot in hardware too. But machine availability still sucks badly.

In 2012 all we had was a software model. It was slow, terribly slow. A common joke was that AArch64 developers were standing in a queue for 10GHz x86-64 CPUs. So I generated working binaries by cross compiling. But many distributions only do native builds. In models. Imagine building Qt4 for 3-4 days…

In 2013 I got access to the first server hardware, with the first silicon version of the CPU. It was highly unstable, we could use just one core, etc. GCC was crashing like hell, but we managed to get stable build results from it. Qt4 now built in a few hours.

Then the amount of hardware at Red Hat grew and grew. Farms of APM Mustangs, AMD Seattle machines and several other servers appeared, got racked and became available to use. In 2014 one Mustang even landed on my desk (the first such machine in Poland).

But this was server land. Each of those machines cost about 1000 USD (if not more), and availability was hard too.

Linaro tried to do something about it and created 96boards project.

First came the ‘Consumer Edition’ range: yet more small form factor boards with functionality stripped as much as possible. No Ethernet, no storage other than eMMC/USB, little memory, chips taken from mobile phones, etc. But it was selling! Only because people were hungry to get ANYTHING with AArch64 cores. First HiKey was released, then the DragonBoard 410, then a few other boards. All with the same set of issues: non-mainline kernel, weird bootloaders, binary blobs for this or that…

Then the so-called ‘Enterprise Edition’ was announced, with another ridiculous form factor (and microATX as an option). And that was it. There was a leak of the Husky board which showed how fucked up the design was: ports all around the edges, memory above and under the board and, of course, incompatibility with any industrial form factor. I would like to know what they were smoking…

Time passed by. Husky was forgotten for another year. Then Cello was announced as a “new EE 96boards board”, while it looked like a redesigned Husky with two fewer SATA ports (because who needs more than two SATA ports, right?). The last time I heard about Cello it was still ‘maybe soon, maybe another two weeks’. Prototypes looked hand-soldered, with the USB controller mounted rotated, dead on-board Ethernet, etc.

In the meantime we got a few devices from other companies. Pine64 ran a big Kickstarter campaign and shipped to developers, Hardkernel started selling the ODROID-C2, Geekbox released their TV box, and probably something else was released as well. But all those boards were limited to 1-2GB of memory, often lacked SATA, and used mobile processors with their own sets of bootloaders etc., causing extra work for distributions.

The Overdrive 1000 was announced. Without any options for expansion, it looked like SoftIron wanted customers to buy the Overdrive 3000 if they wanted to use a PCI Express card.

So we have 2016 now. Four years of my work on AArch64 have passed. Most distributions support this architecture by building on proper servers, but most of this effort goes unused because developers do not have sane hardware to play with (sane meaning expendable, supported by distributions, capable).

There are no standard form factor mainboards (mini-ITX, microATX, ATX) available on the mass market. 96boards failed here, server vendors are not interested, and small Chinese companies prefer to release yet another fruit/Pi with a mobile processor. Nothing, null, nada, nic.

Developers know where to buy normal computer cases, storage, memory, graphics cards, USB controllers, SATA controllers and peripherals, so vendors do not have to worry about or deal with that part. But there is still nothing to put those cards into. There are no mainboards which can be mounted into a normal PC case, have some graphics plugged in, a few SSDs/HDDs connected, mouse/keyboard, monitors, and just be used.

Sometimes it is really hard to convince software developers to make changes for a platform they are unable to test on. And the current hardware situation does not help. All those projects offering hardware “in a cloud” help only a subset of projects. Ever tried to run a GNOME/KDE session over the network? With OpenGL acceleration etc.?

So where is my AArch64 workstation? In desktop or laptop form.

This post was written after my Google+ post, where a similar discussion happened in the comments.

on July 25, 2016 12:08 PM

The latest Ubuntu kernel updates bring some Secure Boot enhancements for kernel modules when Secure Boot is enabled in the BIOS settings. However, so far there is no easy way to automatically sign the kernel modules in DKMS packages. If we want to use those DKMS packages, we need to disable Secure Boot temporarily, either in the BIOS settings or in shim-signed. The following steps show how to disable Secure Boot in shim-signed.

  1. Open a terminal by Ctrl + Alt + T, execute `sudo update-secureboot-policy` and then select ‘Yes’.
  2. Enter a temporary password of 8 to 16 characters. (For example, 12345678; we will use this password later.)
  3. Enter the same password again to confirm.
  4. Reboot the system and press any key when you see the blue screen (MOK management).
  5. Select “Change Secure Boot state”.
  6. Type the requested character of the password and press Enter. Repeat this step several times to confirm the temporary password from steps 2 and 3. For example, type ‘2’ if the screen asks for the second character of ‘12345678’.
  7. Select ‘Yes’ to disable Secure Boot in shim-signed.
  8. Press Enter key to finish the whole procedure.
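Step 6 can be confusing: each prompt asks for the character at a given position of your temporary password, not for the whole password. A tiny helper (purely illustrative; the function name is made up) shows which character to type:

```shell
# Print the Nth character of the temporary password (positions start at 1).
mok_char() { printf '%s' "$1" | cut -c"$2"; }

mok_char 12345678 2   # prints 2 (the second character)
```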

We can enable Secure Boot in shim-signed again later. Just execute `sudo update-secureboot-policy --enable` and then follow steps similar to those above.

on July 25, 2016 04:32 AM

July 24, 2016

Chrome on Kali for root

David Tomaschik

For many of the tools on Kali Linux, it’s easiest to run them as root, so the de facto standard has more or less become to run as root when using Kali. Google Chrome, on the other hand, does not like to be run as root (because sandboxing is harder when your user is all-powerful), so there have been a number of tricks to get it to run. I’m going to describe my preferred setup here. (Mostly as documentation for myself.)

Download and Install the Chrome .deb file.

I prefer the Google Chrome Beta build, but stable will work just fine too. Download the .deb file and install it:

dpkg -i google-chrome*.deb

If it’s a fresh Kali install, you’ll be missing libappindicator, but you can fix that via:

apt-get install -f

Getting a User to Run Chrome

We’ll create a dedicated user to run Chrome. This provides some user isolation and prevents Chrome from complaining.

useradd -m chrome

Setting up su

Now we’ll set up su to handle the passing of X credentials between users. We’ll add pam_xauth to forward them, and configure root to pass credentials to the chrome user.

sed -i 's/@include common-session/session optional pam_xauth.so\n\0/' \
    /etc/pam.d/su
mkdir -p /root/.xauth
echo chrome > /root/.xauth/export

Setting up the Chrome Desktop Icon

All that’s left now is to change the Application Menu entry (aka .desktop) to use the following as the command:

su -l -c "/usr/bin/google-chrome-beta" chrome
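For reference, a minimal .desktop entry using that command could look like this (the Exec line is the one above; the other fields are generic assumptions):

```
[Desktop Entry]
Name=Google Chrome (beta)
Exec=su -l -c "/usr/bin/google-chrome-beta" chrome
Type=Application
Terminal=false
```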
on July 24, 2016 07:00 AM

July 23, 2016

Earlier this week Debian unstable and Ubuntu Yakkety switched to loading the ‘modesetting’ X video driver by default on Intel graphics gen4 and newer. This roughly maps to GPUs made since 2007 (965GM and newer). The main reason for this was to stop chasing upstream git, because there hasn’t been a stable release in nearly three years and even the latest development snapshot is over a year and a half old. It also means sharing the glamor 2D acceleration backend with radeon/amdgpu, which is a nice change, knowing that the intel SNA backend was constantly slightly broken for some GPU generation(s).

Xserver 1.18.4 was released this week with a number of fixes to glamor and the modesetting driver backported from master, so the time was right to make the switch while both Stretch and Yakkety are still in the development phase. So I wrote a small patch for the xserver to load the intel driver only on gen2 and gen3, which can’t do glamor efficiently. Newer Intel GPUs will fall back to modesetting. This approach is good since it can easily be overridden by dropping a conffile into /etc/X11 that uses something else.
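Such an override is only a few lines of Xorg configuration; a minimal sketch (the exact file location may vary, e.g. /etc/X11/xorg.conf) to force the intel driver would be:

```
Section "Device"
    Identifier "Intel Graphics"
    Driver     "intel"
EndSection
```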

I’ve seen only one bug filed that was caused by this change so far, and it turned out to be a kernel bug fixed in 4.6 (Yak will ship with 4.8). If you see something strange like corrupt widgets or whatnot after upgrading to current Yakkety, verify it doesn’t happen with intel (‘cp /usr/share/doc/xserver-xorg-video-intel/xorg.conf /etc/X11’ followed by a login manager restart or reboot) and file a bug against xserver-xorg-core (verify xdiagnose is installed, then run ‘ubuntu-bug xserver-xorg-core’). We’ll take it from there.

on July 23, 2016 08:37 PM

In previous posts we saw how to set up LXD on a DigitalOcean VPS, how to set up LXD on a Scaleway VPS, and what the lifecycle of an LXD container looks like.

In this post, we are going to

  1. Create multiple websites, each in a separate LXD container
  2. Install HAProxy as a TLS Termination Proxy, in an LXD container
  3. Configure HAProxy so that each website is only accessible through TLS
  4. Perform the SSL Server Test so that our websites really get the A+!

In this post, we are not going to install WordPress (or another CMS) on the websites. We keep this post simple; that is material for our next post.

The requirements are

Set up a VPS

We are using DigitalOcean in this example.


Ubuntu 16.04.1 LTS was released a few days ago and DigitalOcean changed the Ubuntu default to 16.04.1. This is nice.

We are trying out the smallest droplet in order to figure out how many websites we can squeeze into containers. That is, 512MB RAM, a single virtual CPU core, and only 20GB of disk space!

In this example we are not using the new DigitalOcean block storage as at the moment it is available in only two datacentres.

Let’s click on the Create droplet button and the VPS is created!

Initial configuration

We are using DigitalOcean in this HowTo, and we have covered the initial configuration in this previous post.

Trying out LXD containers on Ubuntu on DigitalOcean

Go through the post and perform the tasks described in section «Set up LXD on DigitalOcean».

Creating the containers

We create three containers for three websites, plus one container for HAProxy.

ubuntu@ubuntu-512mb-ams3-01:~$ lxc init ubuntu:x web1
Creating web1
Retrieving image: 100%
ubuntu@ubuntu-512mb-ams3-01:~$ time lxc init ubuntu:x web2
Creating web2

real    0m6.620s
user    0m0.016s
sys    0m0.004s
ubuntu@ubuntu-512mb-ams3-01:~$ time lxc init ubuntu:x web3
Creating web3

real    1m15.723s
user    0m0.012s
sys    0m0.020s
ubuntu@ubuntu-512mb-ams3-01:~$ time lxc init ubuntu:x haproxy
Creating haproxy

real    0m48.747s
user    0m0.012s
sys    0m0.012s

Normally it takes a few seconds for a new container to initialize. Remember that we are squeezing here: it’s a 512MB VPS, and the ZFS pool is stored in a file (not on a block device)! We watch the kernel messages of the VPS for lines similar to «Out of memory: Kill process 3829 (unsquashfs) score 524 or sacrifice child», which indicate that we hit the memory limit. While preparing this blog post there were a couple of Out of memory kills, so I made sure that nothing critical was dying. If this is too much for you, you can select a 1GB RAM (or more) VPS and start over.

Let’s start the containers up!

ubuntu@ubuntu-512mb-ams3-01:~$ lxc start web1 web2 web3 haproxy
ubuntu@ubuntu-512mb-ams3-01:~$ lxc list
|  NAME   |  STATE  |         IPV4          | IPV6 |    TYPE    | SNAPSHOTS |
| haproxy | RUNNING | (eth0)  |      | PERSISTENT | 0         |
| web1    | RUNNING | (eth0) |      | PERSISTENT | 0         |
| web2    | RUNNING | (eth0) |      | PERSISTENT | 0         |
| web3    | RUNNING | (eth0)  |      | PERSISTENT | 0         |

You may need to run lxc list a few times until all containers have an IP address; that means they have completed their startup.
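That polling can be wrapped in a small shell helper; a rough sketch (the helper name and the 30-try limit are made up, and it assumes each started container shows one eth0 entry in the lxc list output):

```shell
# Poll a command until its output has at least $3 lines matching $2;
# give up after 30 tries (roughly one minute).
wait_for_count() {
    local cmd=$1 pattern=$2 want=$3 tries=0
    while [ "$(eval "$cmd" | grep -c "$pattern")" -lt "$want" ]; do
        tries=$((tries + 1))
        [ "$tries" -ge 30 ] && return 1
        sleep 2
    done
}

# Usage: wait until all four containers report an address on eth0
# wait_for_count "lxc list" "eth0" 4
```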

DNS configuration

The public IP address of this specific VPS is For this test, I am using the domain ubuntugreece.xyz as follows:

  1. Container web1: ubuntugreece.xyz and www.ubuntugreece.xyz have IP
  2. Container web2: web2.ubuntugreece.xyz has IP
  3. Container web3: web3.ubuntugreece.xyz has IP

Here is how it looks when configured on a DNS management console,


From here on, it is a waiting game until these DNS configurations propagate to the rest of the Internet. We need to wait until those hostnames resolve to their IP addresses.

ubuntu@ubuntu-512mb-ams3-01:~$ host ubuntugreece.xyz
ubuntugreece.xyz has address
ubuntu@ubuntu-512mb-ams3-01:~$ host web2.ubuntugreece.xyz
Host web2.ubuntugreece.xyz not found: 3(NXDOMAIN)
ubuntu@ubuntu-512mb-ams3-01:~$ host web3.ubuntugreece.xyz
web3.ubuntugreece.xyz has address

These are the results after ten minutes. ubuntugreece.xyz and web3.ubuntugreece.xyz are resolving fine, while web2.ubuntugreece.xyz needs a bit more time.

We can continue (and ignore web2 for now)!

Web server configuration

Let’s see the configuration for web1. You must repeat the following for web2 and web3.

We install the nginx web server,

ubuntu@ubuntu-512mb-ams3-01:~$ lxc exec web1 -- /bin/bash
root@web1:~# apt update
Get:1 http://security.ubuntu.com/ubuntu xenial-security InRelease [94.5 kB]

3 packages can be upgraded. Run 'apt list --upgradable' to see them.
root@web1:~# apt upgrade
Reading package lists… Done

Processing triggers for initramfs-tools (0.122ubuntu8.1) …
root@web1:~# apt install nginx
Reading package lists… Done

Processing triggers for ufw (0.35-0ubuntu2) …

nginx needs to be configured so that it understands the domain name for web1. Here is the diff,

diff --git a/etc/nginx/sites-available/default b/etc/nginx/sites-available/default
index a761605..b2cea8f 100644
--- a/etc/nginx/sites-available/default
+++ b/etc/nginx/sites-available/default
@@ -38,7 +38,7 @@ server {
        # Add index.php to the list if you are using PHP
        index index.html index.htm index.nginx-debian.html;
-       server_name _;
+       server_name ubuntugreece.xyz www.ubuntugreece.xyz;
        location / {
                # First attempt to serve request as file, then

and finally we restart nginx and exit the web1 container,

root@web1:/etc/nginx/sites-enabled# systemctl restart nginx
root@web1:/etc/nginx/sites-enabled# exit
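The same edit has to be repeated for web2 and web3, so it can also be scripted from the VPS; a rough sketch (the helper name is made up, and it assumes the stock default site file shipped by nginx):

```shell
# Set server_name inside a container's nginx default site and restart nginx.
set_server_name() {
    local container=$1; shift
    lxc exec "$container" -- sed -i "s/server_name _;/server_name $*;/" \
        /etc/nginx/sites-available/default
    lxc exec "$container" -- systemctl restart nginx
}

# Usage:
#   set_server_name web2 web2.ubuntugreece.xyz
#   set_server_name web3 web3.ubuntugreece.xyz
```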

Forwarding connections to the HAProxy container

We are about to set up the HAProxy container. Let’s add iptables rules to forward connections arriving at ports 80 and 443 on the VPS to the HAProxy container.

ubuntu@ubuntu-512mb-ams3-01:~$ ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 04:01:36:50:00:01  
          inet addr:  Bcast:  Mask:
          inet6 addr: fe80::601:36ff:fe50:1/64 Scope:Link
          RX packets:40513 errors:0 dropped:0 overruns:0 frame:0
          TX packets:26362 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:360767509 (360.7 MB)  TX bytes:3863846 (3.8 MB)

ubuntu@ubuntu-512mb-ams3-01:~$ lxc list
|  NAME   |  STATE  |         IPV4          | IPV6 |    TYPE    | SNAPSHOTS |
| haproxy | RUNNING | (eth0)  |      | PERSISTENT | 0         |
| web1    | RUNNING | (eth0) |      | PERSISTENT | 0         |
| web2    | RUNNING | (eth0) |      | PERSISTENT | 0         |
| web3    | RUNNING | (eth0)  |      | PERSISTENT | 0         |
ubuntu@ubuntu-512mb-ams3-01:~$ sudo iptables -t nat -I PREROUTING -i eth0 -p TCP -d --dport 80 -j DNAT --to-destination
[sudo] password for ubuntu: 
ubuntu@ubuntu-512mb-ams3-01:~$ sudo iptables -t nat -I PREROUTING -i eth0 -p TCP -d --dport 443 -j DNAT --to-destination
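Those two rules differ only in the port, so here is the same thing as a small function (a sketch; the helper name is made up, and unlike the commands above it skips the -d match on the public IP):

```shell
# Insert DNAT rules sending ports 80 and 443 to the HAProxy container.
forward_to_haproxy() {
    local haproxy_ip=$1
    for port in 80 443; do
        sudo iptables -t nat -I PREROUTING -i eth0 -p TCP \
            --dport "$port" -j DNAT --to-destination "$haproxy_ip"
    done
}

# Usage: forward_to_haproxy <haproxy IP from "lxc list">
```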

If you want to make those changes permanent, see Saving Iptables Firewall Rules Permanently (the part about the package iptables-persistent).

HAProxy initial configuration

Let’s see how to configure HAProxy in container haproxy. We enter the container, update the software and install the haproxy package.

ubuntu@ubuntu-512mb-ams3-01:~$ lxc exec haproxy -- /bin/bash
root@haproxy:~# apt update
Hit:1 http://archive.ubuntu.com/ubuntu xenial InRelease
3 packages can be upgraded. Run 'apt list --upgradable' to see them.
root@haproxy:~# apt upgrade
Reading package lists... Done
Processing triggers for initramfs-tools (0.122ubuntu8.1) ...
root@haproxy:~# apt install haproxy
Reading package lists... Done
Processing triggers for ureadahead (0.100.0-19) ...

We add the following configuration to /etc/haproxy/haproxy.cfg. Initially we do not have any certificates for TLS, but we need the web servers to work over plain HTTP so that Let’s Encrypt can verify we own the websites. Therefore, here is the complete configuration with two lines commented out (they start with ###) so that HTTP can work. As soon as we deal with Let’s Encrypt, we go full TLS (by uncommenting the two lines that start with ###) and never look back. We mention later in the post when to uncomment them.

diff --git a/etc/haproxy/haproxy.cfg b/etc/haproxy/haproxy.cfg
index 86da67d..f6f2577 100644
--- a/etc/haproxy/haproxy.cfg
+++ b/etc/haproxy/haproxy.cfg
@@ -18,11 +18,17 @@ global
     ssl-default-bind-options no-sslv3
+        # Minimum DH ephemeral key size. Otherwise, this size would drop to 1024.
+        # @link: https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#3.2-tune.ssl.default-dh-param
+        tune.ssl.default-dh-param 2048
     log    global
     mode    http
     option    httplog
     option    dontlognull
+        option  forwardfor
+        option  http-server-close
         timeout connect 5000
         timeout client  50000
         timeout server  50000
@@ -33,3 +39,56 @@ defaults
     errorfile 502 /etc/haproxy/errors/502.http
     errorfile 503 /etc/haproxy/errors/503.http
     errorfile 504 /etc/haproxy/errors/504.http
+# Configuration of the frontend (HAProxy as a TLS Termination Proxy)
+frontend www_frontend
+    # We bind on port 80 (http) but (see below) get HAProxy to force-switch to HTTPS.
+    bind *:80
+    # We bind on port 443 (https) and specify a directory with the certificates.
+####    bind *:443 ssl crt /etc/haproxy/certs/
+    # We get HAProxy to force-switch to HTTPS, if the connection was just HTTP.
+####    redirect scheme https if !{ ssl_fc }
+    # TLS terminates at HAProxy, the container runs in plain HTTP. Here, HAProxy informs nginx
+    # that there was a TLS Termination Proxy. Required for WordPress and other CMS.
+    reqadd X-Forwarded-Proto:\ https
+    # Distinguish between secure and insecure requests (used in the next two lines)
+    acl secure dst_port eq 443
+    # Mark all cookies as secure if sent over SSL
+    rsprep ^Set-Cookie:\ (.*) Set-Cookie:\ \1;\ Secure if secure
+    # Add the HSTS header with a 1 year max-age
+    rspadd Strict-Transport-Security:\ max-age=31536000 if secure
+    # Configuration for each virtual host (uses Server Name Indication, SNI)
+    acl host_ubuntugreece_xyz hdr(host) -i ubuntugreece.xyz www.ubuntugreece.xyz
+    acl host_web2_ubuntugreece_xyz hdr(host) -i web2.ubuntugreece.xyz
+    acl host_web3_ubuntugreece_xyz hdr(host) -i web3.ubuntugreece.xyz
+    # Directing the connection to the correct LXD container
+    use_backend web1_cluster if host_ubuntugreece_xyz
+    use_backend web2_cluster if host_web2_ubuntugreece_xyz
+    use_backend web3_cluster if host_web3_ubuntugreece_xyz
+# Configuration of the backend (HAProxy as a TLS Termination Proxy)
+backend web1_cluster
+    balance leastconn
+    # We set the X-Client-IP HTTP header. This is useful if we want the web server to know the real client IP.
+    http-request set-header X-Client-IP %[src]
+    # This backend, named here "web1", directs to container "web1.lxd" (hostname).
+    server web1 web1.lxd:80 check
+backend web2_cluster
+    balance leastconn
+    # We set the X-Client-IP HTTP header. This is useful if we want the web server to know the real client IP.
+    http-request set-header X-Client-IP %[src]
+    # This backend, named here "web2", directs to container "web2.lxd" (hostname).
+    server web2 web2.lxd:80 check
+backend web3_cluster
+    balance leastconn
+    # We set the X-Client-IP HTTP header. This is useful if we want the web server to know the real client IP.
+    http-request set-header X-Client-IP %[src]
+    # This backend, named here "web3", directs to container "web3.lxd" (hostname).
+    server web3 web3.lxd:80 check

Let’s restart HAProxy. If you get any errors, run systemctl status haproxy and try to figure out what went wrong.

root@haproxy:~# systemctl restart haproxy
root@haproxy:~# exit

Does it work? Let’s visit the website,


It’s working! Let’s Encrypt will be able to access and verify that we own the domain in the next step.

Get certificates from Let’s Encrypt

We exit out to the VPS and install letsencrypt.

ubuntu@ubuntu-512mb-ams3-01:~$ sudo apt install letsencrypt
[sudo] password for ubuntu: 
Reading package lists... Done
Setting up python-pyicu (1.9.2-2build1) ...

We run letsencrypt three times, once for each website. Update: it is also possible to simplify the following by using a single multi-domain (Subject Alternative Name, SAN) certificate. Thanks to @jack, who mentioned this in the comments.

ubuntu@ubuntu-512mb-ams3-01:~$ sudo letsencrypt certonly --authenticator webroot --webroot-path=/var/lib/lxd/containers/web1/rootfs/var/www/html -d ubuntugreece.xyz -d www.ubuntugreece.xyz
... they ask for a contact e-mail address and whether we accept the Terms of Service...

 - If you lose your account credentials, you can recover through
   e-mails sent to xxxxx@gmail.com.
 - Congratulations! Your certificate and chain have been saved at
   /etc/letsencrypt/live/ubuntugreece.xyz/fullchain.pem. Your cert
   will expire on 2016-10-21. To obtain a new version of the
   certificate in the future, simply run Let's Encrypt again.
 - Your account credentials have been saved in your Let's Encrypt
   configuration directory at /etc/letsencrypt. You should make a
   secure backup of this folder now. This configuration directory will
   also contain certificates and private keys obtained by Let's
   Encrypt so making regular backups of this folder is ideal.
 - If you like Let's Encrypt, please consider supporting our work by:

   Donating to ISRG / Let's Encrypt:   https://letsencrypt.org/donate
   Donating to EFF:                    https://eff.org/donate-le


For completeness, here are the command lines for the other two websites,

ubuntu@ubuntu-512mb-ams3-01:~$ sudo letsencrypt certonly --authenticator webroot --webroot-path=/var/lib/lxd/containers/web2/rootfs/var/www/html -d web2.ubuntugreece.xyz

 - Congratulations! Your certificate and chain have been saved at
   /etc/letsencrypt/live/web2.ubuntugreece.xyz/fullchain.pem. Your
   cert will expire on 2016-10-21. To obtain a new version of the
   certificate in the future, simply run Let's Encrypt again.
 - If you like Let's Encrypt, please consider supporting our work by:

   Donating to ISRG / Let's Encrypt:   https://letsencrypt.org/donate
   Donating to EFF:                    https://eff.org/donate-le

ubuntu@ubuntu-512mb-ams3-01:~$ time sudo letsencrypt certonly --authenticator webroot --webroot-path=/var/lib/lxd/containers/web3/rootfs/var/www/html -d web3.ubuntugreece.xyz

 - Congratulations! Your certificate and chain have been saved at
   /etc/letsencrypt/live/web3.ubuntugreece.xyz/fullchain.pem. Your
   cert will expire on 2016-10-21. To obtain a new version of the
   certificate in the future, simply run Let's Encrypt again.
 - If you like Let's Encrypt, please consider supporting our work by:

   Donating to ISRG / Let's Encrypt:   https://letsencrypt.org/donate
   Donating to EFF:                    https://eff.org/donate-le

real    0m18.458s
user    0m0.852s
sys    0m0.172s

Yeah, it takes only around twenty seconds to get your Let’s Encrypt certificate!

We got the certificates; now we need to prepare them so that HAProxy (our TLS Termination Proxy) can make use of them. We just need to concatenate the certificate chain and the private key of each certificate, and place the result in the haproxy container in the appropriate directory.

ubuntu@ubuntu-512mb-ams3-01:~$ sudo mkdir /var/lib/lxd/containers/haproxy/rootfs/etc/haproxy/certs/
ubuntu@ubuntu-512mb-ams3-01:~$ DOMAIN='ubuntugreece.xyz' sudo -E bash -c 'cat /etc/letsencrypt/live/$DOMAIN/fullchain.pem /etc/letsencrypt/live/$DOMAIN/privkey.pem > /var/lib/lxd/containers/haproxy/rootfs/etc/haproxy/certs/$DOMAIN.pem'
ubuntu@ubuntu-512mb-ams3-01:~$ DOMAIN='web2.ubuntugreece.xyz' sudo -E bash -c 'cat /etc/letsencrypt/live/$DOMAIN/fullchain.pem /etc/letsencrypt/live/$DOMAIN/privkey.pem > /var/lib/lxd/containers/haproxy/rootfs/etc/haproxy/certs/$DOMAIN.pem'
ubuntu@ubuntu-512mb-ams3-01:~$ DOMAIN='web3.ubuntugreece.xyz' sudo -E bash -c 'cat /etc/letsencrypt/live/$DOMAIN/fullchain.pem /etc/letsencrypt/live/$DOMAIN/privkey.pem > /var/lib/lxd/containers/haproxy/rootfs/etc/haproxy/certs/$DOMAIN.pem'
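The three commands above follow one pattern, so they can be folded into a small function (a sketch; the function name is made up and the paths are the ones used in this post):

```shell
# Concatenate the certificate chain and private key into the single PEM
# file that HAProxy expects.
combine_cert() {
    local live_dir=$1 out_dir=$2 domain=$3
    cat "$live_dir/$domain/fullchain.pem" "$live_dir/$domain/privkey.pem" \
        > "$out_dir/$domain.pem"
}

# Usage (as root):
#   for d in ubuntugreece.xyz web2.ubuntugreece.xyz web3.ubuntugreece.xyz; do
#     combine_cert /etc/letsencrypt/live \
#         /var/lib/lxd/containers/haproxy/rootfs/etc/haproxy/certs "$d"
#   done
```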

HAProxy final configuration

We are almost there. We need to enter the haproxy container and uncomment the two lines (those that start with ###) that enable HAProxy to work as a TLS Termination Proxy. Then, restart the haproxy service.

ubuntu@ubuntu-512mb-ams3-01:~$ lxc exec haproxy bash
root@haproxy:~# vi /etc/haproxy/haproxy.cfg 

root@haproxy:/etc/haproxy# systemctl restart haproxy
root@haproxy:/etc/haproxy# exit
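Incidentally, the uncommenting does not have to be done by hand in vi; the same change is a one-line sed (a sketch; the helper name is made up):

```shell
# Strip the '####' markers that keep the two TLS lines disabled.
uncomment_tls() { sed -i 's/^####//' "$1"; }

# Usage (inside the haproxy container, before restarting haproxy):
#   uncomment_tls /etc/haproxy/haproxy.cfg
```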

Let’s test them!

Here are the three websites, notice the padlocks on all three of them,

LXD container web1 LXD container web2 LXD container web3

The SSL Server Report (Qualys)

Here are the SSL Server Reports for each website,

LXD container web1 LXD container web2 LXD container web3

You can check the cached reports for LXD container web1, LXD container web2 and LXD container web3.


The disk space requirements for those four containers (three static websites plus haproxy) are

ubuntu@ubuntu-512mb-ams3-01:~$ sudo zpool list
[sudo] password for ubuntu: 
NAME        SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
mypool-lxd  14.9G  1.13G  13.7G         -     4%     7%  1.00x  ONLINE  -

The four containers required a bit over 1GB of disk space.

The biggest concern has been the limited RAM of 512MB. The Out Of Memory (OOM) handler was invoked a few times during the first steps of container creation, but not afterwards while launching the nginx instances.

ubuntu@ubuntu-512mb-ams3-01:~$ dmesg | grep "Out of memory"
[  181.976117] Out of memory: Kill process 3829 (unsquashfs) score 524 or sacrifice child
[  183.792372] Out of memory: Kill process 3834 (unsquashfs) score 525 or sacrifice child
[  190.332834] Out of memory: Kill process 3831 (unsquashfs) score 525 or sacrifice child
[  848.834570] Out of memory: Kill process 6378 (localedef) score 134 or sacrifice child
[  860.833991] Out of memory: Kill process 6400 (localedef) score 143 or sacrifice child
[  878.837410] Out of memory: Kill process 6436 (localedef) score 151 or sacrifice child

There was an error while creating one of the containers in the beginning. I repeated the creation command and it completed successfully. That error was probably related to the unsquashfs kills above.


We set up a $5 VPS (512MB RAM, 1CPU core and 20GB SSD disk) with Ubuntu 16.04.1 LTS, then configured LXD to handle containers.

We created three containers for three static websites, and an additional container for HAProxy to work as a TLS Termination Proxy.

We got certificates for those three websites, and verified that they all pass with A+ at the Qualys SSL Server Report.

The 512MB RAM VPS should be OK for a few low traffic websites, especially those generated by static site generators.


on July 23, 2016 01:49 PM

July 22, 2016

We have set up LXD either on our personal computer or in the cloud (on providers like DigitalOcean and Scaleway). Actually, we can even try LXD online for free at https://linuxcontainers.org/lxd/try-it/

What shall we do next?

Commands through “lxc”

Below we see a series of commands that start with lxc, then an action, and finally any parameters. lxc here is the program that communicates with the LXD service and performs the actions that we request. That is,

lxc action parameters

There are also commands that are specific to a type of object. In that case, we add the object type and continue with the action and the parameters.

lxc object action parameters
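For example, the two forms look like this (both commands appear later in this post):

```
lxc list                  # action "list", no object
lxc image list ubuntu:    # object "image", action "list", parameter "ubuntu:"
```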

List the available containers

Let’s use the list action, which lists the available containers.

ubuntu@myvps:~$ lxc list
Generating a client certificate. This may take a minute...
If this is your first time using LXD, you should also run: sudo lxd init
To start your first container, try: lxc launch ubuntu:16.04
The first time you run lxc list, it creates a client certificate (installed in ~/.config/lxc/). This takes a few seconds and happens only once.
The command also advises us to run sudo lxd init (note: lxd) if we haven’t done so before. Consult the configuration posts if in doubt here.
In addition, this command suggests how to start (launch) our first container.
Finally, it shows the list of available containers on this computer, which is empty (because we have not created any yet).

List the locally available images for containers

Let’s use the image object, and then the list action, to list the available (possibly cached) images hosted by our LXD service.

ubuntu@myvps:~$ lxc image list

There are no locally available images yet, so the list is empty.

List the remotely available images for containers

Let’s use the image object, and then the list action, and finally a remote repository specifier (ubuntu:) in order to list some publicly available images that we can use to create containers.
ubuntu@myvps:~$ lxc image list ubuntu:
|       ALIAS        | FINGERPRINT  | PUBLIC |                   DESCRIPTION                   |  ARCH   |   SIZE   |          UPLOAD DATE          |        
| p (5 more)         | 6b6fa83dacb0 | yes    | ubuntu 12.04 LTS amd64 (release) (20160627)     | x86_64  | 155.43MB | Jun 27, 2016 at 12:00am (UTC) |        
| p/armhf (2 more)   | 06604b173b99 | yes    | ubuntu 12.04 LTS armhf (release) (20160627)     | armv7l  | 135.90MB | Jun 27, 2016 at 12:00am (UTC) |        
| x (5 more)         | f452cda3bccb | yes    | ubuntu 16.04 LTS amd64 (release) (20160627)     | x86_64  | 138.23MB | Jun 27, 2016 at 12:00am (UTC) |        
| x/arm64 (2 more)   | 46b365e258a0 | yes    | ubuntu 16.04 LTS arm64 (release) (20160627)     | aarch64 | 146.72MB | Jun 27, 2016 at 12:00am (UTC) |        
| x/armhf (2 more)   | 22f668affe3d | yes    | ubuntu 16.04 LTS armhf (release) (20160627)     | armv7l  | 148.18MB | Jun 27, 2016 at 12:00am (UTC) |        
|                    | 4c6f7b94e46a | yes    | ubuntu 16.04 LTS s390x (release) (20160516.1)   | s390x   | 131.07MB | May 16, 2016 at 12:00am (UTC) |        
|                    | ddfa8f2d4cfb | yes    | ubuntu 16.04 LTS s390x (release) (20160610)     | s390x   | 131.41MB | Jun 10, 2016 at 12:00am (UTC) |        
The repository ubuntu: is a curated list of images from Canonical, and has all sorts of Ubuntu versions (12.04 and newer) and architectures (such as x86_64, ARM and even s390x).
The first column is the nickname or alias. Ubuntu 16.04 LTS for x86_64 has the alias x, so we can use that, or we can specify the fingerprint (here: f452cda3bccb).

Show information for a remotely available image for containers

Let’s use the image object, and then the info action, with a remote image specifier (ubuntu:x), in order to get information about a specific publicly available image that we can use to create containers.
ubuntu@myvps:~$ lxc image info ubuntu:x
    Uploaded: 2016/06/27 00:00 UTC                                                                                                                           
    Expires: 2021/04/21 00:00 UTC                                                                                                                            

    aliases: 16.04,x,xenial                                                                                                                                  
    os: ubuntu                                                                                                                                               
    release: xenial                                                                                                                                          
    version: 16.04                                                                                                                                           
    architecture: amd64                                                                                                                                      
    label: release                                                                                                                                           
    serial: 20160627                                                                                                                                         
    description: ubuntu 16.04 LTS amd64 (release) (20160627)                                                                                                 

    - 16.04                                                                                                                                                  
    - 16.04/amd64                                                                                                                                            
    - x                                                                                                                                                      
    - x/amd64                                                                                                                                                
    - xenial                                                                                                                                                 
    - xenial/amd64                                                                                                                                           

Auto update: disabled           

Here we can see the full list of aliases for the 16.04 image (x86_64). The simplest of all is x.

Life cycle of a container

Here is the life cycle of a container. First you initialise a container from an image, thus creating the (stopped) container. Then you can start and stop it. Finally, in the stopped state, you may choose to delete it.


We initialise a container with Ubuntu 16.04 (ubuntu:x) and give it the name mycontainer. Since we do not yet have any locally cached images, this one is downloaded and cached for us. If we need another container with Ubuntu 16.04, it will be prepared instantly since the image is already cached locally.
When we initialise a container from an image, it starts in the STOPPED state. When we start it, it moves into the RUNNING state.
When we start a container, the runtime (or rootfs) is booted up and may take a few seconds until the network is up and running. Below we can see that it took a few seconds until the container managed to get the IPv4 IP address through DHCP from LXD.
We can install web servers and other services into the container. Here, we just execute a Bash shell in order to get shell access inside the container and run the uname command.
We promptly exit from the container and stop it.
Then, we delete the container and verify that it has been deleted (it is no longer shown in lxc list).
Finally, we also verify that the image is still cached locally on LXD, waiting for the next creation of a container.
Here are the commands:
ubuntu@myvps:~$ lxc init ubuntu:x mycontainer
Creating mycontainer                                                                                                                                         
Retrieving image: 100%                                                                                                                                       
ubuntu@myvps:~$ lxc image list
| ALIAS | FINGERPRINT  | PUBLIC |                 DESCRIPTION                 |  ARCH  |   SIZE   |         UPLOAD DATE          |                           
|       | f452cda3bccb | no     | ubuntu 16.04 LTS amd64 (release) (20160627) | x86_64 | 138.23MB | Jul 22, 2016 at 2:10pm (UTC) |                           
ubuntu@myvps:~$ lxc list
|    NAME     |  STATE  | IPV4 | IPV6 |    TYPE    | SNAPSHOTS |                                                                                             
| mycontainer | STOPPED |      |      | PERSISTENT | 0         |                                                                                             
ubuntu@myvps:~$ lxc start mycontainer
ubuntu@myvps:~$ lxc list     
|    NAME     |  STATE  | IPV4 |                     IPV6                      |    TYPE    | SNAPSHOTS |                                                    
| mycontainer | RUNNING |      | 2607:f2c0:f00f:2770:216:3eff:fe4a:ccfd (eth0) | PERSISTENT | 0         |                                                    
ubuntu@myvps:~$ lxc list
|    NAME     |  STATE  |         IPV4          |                     IPV6                      |    TYPE    | SNAPSHOTS |                                   
| mycontainer | RUNNING | (eth0) | 2607:f2c0:f00f:2770:216:3eff:fe4a:ccfd (eth0) | PERSISTENT | 0         |                                   
ubuntu@myvps:~$ lxc exec mycontainer -- /bin/bash       
root@mycontainer:~# uname -a
Linux mycontainer 4.4.0-31-generic #50~14.04.1-Ubuntu SMP Wed Jul 13 01:07:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux                                        
root@mycontainer:~# exit
ubuntu@myvps:~$ lxc list
|    NAME     |  STATE  |         IPV4          |                     IPV6                      |    TYPE    | SNAPSHOTS |                                   
| mycontainer | RUNNING | (eth0) | 2607:f2c0:f00f:2770:216:3eff:fe4a:ccfd (eth0) | PERSISTENT | 0         |                                   
ubuntu@myvps:~$ lxc stop mycontainer
ubuntu@myvps:~$ lxc list
|    NAME     |  STATE  | IPV4 | IPV6 |    TYPE    | SNAPSHOTS |                                                                                             
| mycontainer | STOPPED |      |      | PERSISTENT | 0         |                                                                                             
ubuntu@myvps:~$ lxc delete mycontainer
ubuntu@myvps:~$ lxc list
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |                                                                                                            
ubuntu@myvps:~$ lxc image list
| ALIAS | FINGERPRINT  | PUBLIC |                 DESCRIPTION                 |  ARCH  |   SIZE   |         UPLOAD DATE          |                           
|       | f452cda3bccb | no     | ubuntu 16.04 LTS amd64 (release) (20160627) | x86_64 | 138.23MB | Jul 22, 2016 at 2:10pm (UTC) |                           

Some tutorials mention the launch action, which does both init and start. Here is how the command would have looked:

lxc launch ubuntu:x mycontainer

We are nearing the point where we can start doing interesting things with containers. On to the next blog post!

on July 22, 2016 02:39 PM

What is CirrOS, and why was I working on it? This was quite a common question when I mentioned what I had been working on during recent weeks.

So, CirrOS is a small image meant to run in a cloud. OpenStack developers use it to test their projects.

Technically it is yet another Frankenstein OS. It is built using Buildroot 2015.05 and uses uClibc or glibc (depending on the target architecture). Then an Ubuntu 16.04 kernel is applied on top and “grub” (also from Ubuntu) is used to make it bootable.

The problem was that it was not built in a UEFI-bootable way…

My first changes were: switching the images to GPT, creating an EFI system partition and putting a bootloader there. I first used the CentOS “grub2-efi” packages (as they provided ready-to-use EFI binaries) and later switched to the Ubuntu ones, as the upstream maintainer (Scott Moser) prefers all external binaries to come from one source.

While he was on vacation (so my merge request had to wait), I started digging deeper and deeper into the scripts.

I fixed the getopt usage, as arguments passed between scripts were read partly via getopt and partly by assigning ${X} (where X is a number) to variables.
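The unified style can be sketched roughly like this; the option names below are hypothetical, not the actual CirrOS ones, and this is only a minimal illustration of parsing everything through getopt instead of mixing it with bare positional assignments:

```shell
# Sketch: all options go through getopt; only the remaining words are
# read as positional arguments, and explicitly so.
parse_args() {
  opts=$(getopt -o '' --long arch:,output: -- "$@") || return 1
  eval set -- "$opts"
  while true; do
    case "$1" in
      --arch)   ARCH=$2;   shift 2 ;;
      --output) OUTPUT=$2; shift 2 ;;
      --)       shift; break ;;
    esac
  done
  # Whatever remains is a plain positional argument, read explicitly
  # rather than by blind ${1}/${2} assignments scattered across scripts.
  TARGET=${1:-}
}

parse_args --arch aarch64 --output /tmp/disk.img build
echo "$ARCH $OUTPUT $TARGET"   # prints: aarch64 /tmp/disk.img build
```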

All scripts were moved to Bash (as /bin/sh on Ubuntu is usually Dash, a minimalist POSIX shell), whitespace was unified across all scripts, and some other cleanups happened as well.
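As a minimal illustration of why the shebang matters, the snippet below feeds the same bashism (an array) to bash and to plain sh; on Ubuntu, where sh is Dash, the second invocation fails with a syntax error:

```shell
# The same snippet given to bash and to plain sh: arrays are a bashism,
# so bash prints the element while dash (Ubuntu's /bin/sh) errors out.
snippet='x=(1 2 3); echo "${x[1]}"'
out_bash=$(bash -c "$snippet")
echo "bash: $out_bash"
sh -c "$snippet" 2>/dev/null || echo "sh: syntax error (as expected with dash)"
```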

At one point all the scripts together were 1835 lines, and my diff was 2250 lines (+1018/-603) long. Luckily, Scott came back and we got most of that stuff merged.

Recent (2016.07.21) images are available and work fine on all platforms. If someone uses them with OpenStack, then please remember to set the “short_id” property to “ubuntu16.04”, as otherwise there may be a problem with finding the rootfs (there is no virtio-scsi in the disk images).


| architecture | booting before        | booting after               |
| aarch64      | direct kernel         | UEFI or direct kernel       |
| arm          | direct kernel         | UEFI or direct kernel       |
| i386         | BIOS or direct kernel | BIOS, UEFI or direct kernel |
| powerpc      | direct kernel         | direct kernel               |
| ppc64        | direct kernel         | direct kernel               |
| ppc64le      | direct kernel         | direct kernel               |
| x86-64       | BIOS or direct kernel | BIOS, UEFI or direct kernel |
on July 22, 2016 12:44 PM


Paul Tagliamonte

I’ll be at HOPE 11 this year - if anyone else will be around, feel free to send me an email! I won’t have a phone on me (so texting only works if you use Signal!)

Looking forward to a chance to see everyone soon!

on July 22, 2016 12:16 PM

Over the past few months, the Juju team has been working on a whole redesign of the Juju store homepage and we’re very happy to announce that it is now live!

Juju is an application and service modelling tool that enables you to quickly model, configure, deploy and manage applications in the cloud. Juju comes with ready-made solutions for everything you need – these solutions are encapsulated in Charms and Bundles:

  • Charms contain all the instructions necessary to deploy, manage and scale cloud applications.
  • Bundles are collections of charms that work together, deploying an entire application or chunk of infrastructure in one go.
The new Juju Charm store allows you to explore the growing ecosystem of over 300 charms and bundles – everything you need to build your app.

You can now get started with the featured charms and bundles at the top or explore the whole collection of categories:

Juju Store

We’ve surfaced key categories and highlighted their most popular services:

Juju Store

Juju Store

The search stays the same for now, but we’re working on improvements which will be released in the near future:

Juju Store

You can explore bundles and view charm details:

Juju Store

Juju Store

And deploy your chosen charm, using the GUI or CLI:

Juju Store

Check it out at: jujucharms.com/store

How did we arrive at this solution?

We’ve summarised four of the most important stages of the project for you to get an insight into our design process.

Defining the problem

You may want a shiny new design, but if you don’t understand the problems that you are trying to solve you’ll probably find yourself having to redesign the whole page again in no time. We therefore began by identifying the issues that we wanted this new design to tackle, and laying out the new store requirements.

This is what the store homepage looked like before the redesign:

Juju Store

The original goal of this page was to feature the breadth of the software available for Juju. However, there were a number of elements in our previous design that didn’t facilitate a smooth browsing experience. As the Juju ecosystem grew, we found the need to increase the store’s performance by:

  • Providing a more curated selection to users when they arrive at the store
  • Highlighting the most popular and interesting charms and bundles for users to get started
  • Providing better discovery methods for browsing
  • Encouraging exploration
  • Reducing cognitive load
  • Helping visitors find what they’re looking for with the least amount of friction

Understanding our audience

Before making any design decisions we:

  • Conducted a round of user testing to uncover friction points and reveal insights into our users’ behaviour and needs
  • Dived into our site’s analytics to learn more about how current users are moving across the store
  • Looked at conversion, bounce rate and page views
  • Identified what search terms are used most and what terms and categories were the most popular
  • Tagged our content to increase findability

It’s a surprisingly large amount of prep work but absolutely essential – all this research enabled us to gain insight into our audience and to define use cases, which we then used as a basis for our designs.

Researching our competitors

We also undertook a competitor benchmarking project with the aim of:

  • Comparing our general practices and performance with that of our competitors
  • Identifying the strengths and weaknesses of our competitors and reviewing them against our own
  • Identifying pitfalls to avoid and ways in which we could improve our page

Design Process

Test the performance

Testing the design enabled us to continuously iterate towards a solution that, when finalised, was very well received by the community. We love conducting user testing sessions to see how our designs are performing, and it’s hard to over-emphasise the importance of watching actual people interact with your design!

We’ve enjoyed every stage of this process and are very happy it is now available to the public. We’d welcome any feedback; please don’t hesitate to share it here. Check it out here.

Original article

on July 22, 2016 09:42 AM

Last week the design team had two interns undertaking their work experience at the London office.

Our first student is studying computer science for her GCSEs and has an interest in Python programming and software engineering. The second student is studying Geography and IT and had a general interest in IT.

The tasks

We wanted them to experience what working in the design team is like, so we set them two tasks:

Task 1 – Create a poster of your work experience

We asked them to keep a note of the key things that they had been doing and what they learned throughout the week.

They were then asked to create two paper versions of their posters, which were reviewed by one of our visual designers. After being reviewed, the designers helped the student to create a final electronic version, which they could take back to school with them.

Task 2 – Convergence tablet

We asked them to use the convergence tablet as their device during the week for user testing purposes.

We wanted them to:

  • Send emails
  • Take notes
  • Update social media
  • Take images
  • Organise their gallery
  • Share something with a friend
  • Play games
  • Play music
  • Read news articles or other articles

We asked for feedback on:

  • What they liked about convergence
  • What would they like to see on the tablet?
  • What was their favourite app
  • What can we improve?

They were expected to talk through their feedback for 15 minutes with two designers.


By the end of the week we wanted our interns to have the confidence to present their findings to us, as well as experience a review process with their poster designs.

The feedback we got from our first student – who used the tablet for 4 days, in between her other tasks – said: ‘ready, not a prototype, sleek and lightweight’. What she liked most about it was that ‘it can be a whole computer if you connect it’. She also liked the UbuntuStore.

The feedback we got back from our second student – who used the tablet in between tasks of making bootable USB drives and learning code – was that he ‘likes how it has the capability to … just pick it up and plug it into a monitor … Because it means that you don’t have to carry anything around, just a tablet.’
His favourite app was the Browser, he said ‘because it gives you access to everything’ and he thought it was ‘better than Safari because Safari blocks a lot of things like Flash’. He thought that the camera was of ‘good quality and focused quickly’ and felt it was easy to take photos and videos.

We also received suggestions on what they wished to see in the future and what they thought could be improved, which was great to see from the student demographic. We have captured all of this and can incorporate some of these ideas.

Work experience poster

With the help of one of our visual designers, we reviewed our first student’s paper designs and helped bring her poster to life.


This poster was the fruits of her labour and she was then tasked with finishing it off at home, ready to take back to school with her.

Our London office really values the work our work experience interns undertake in the week they are here during the summer. Many of them tend to have interests in technology and our open source nature is a good way to give them a flavour of the Ubuntu design and engineering process.

Want to be an intern at Canonical?

If you’re a student and like what you’ve seen so far, and would like to undertake your work experience with us please do get in touch with Stefanie Davenoy – stefanie.davenoy@canonical.com.

on July 22, 2016 09:20 AM

Linkdump 29/2016 ...

Dirk Deimeke

Due to vacation, today’s Linkdump is only a heavily shortened one; I hope you still find an article you like.

Only regarding the handout do I have a different opinion; otherwise I can wholeheartedly recommend these tips: 5 Fehler, die Sie beim Präsentieren unbedingt vermeiden sollten.

«Made in Germany» zieht mehr als Swissness – ah, always these comparisons. One side is a few per mille better and suddenly there is an imbalance. Perhaps we could simply accept that products from both countries enjoy a very good reputation.

I stopped believing long ago that podcasts are a niche. And even if they were, it would not matter: Her mit dem heißen Scheiß.

5 Präsentationstipps, mit denen Sie Ihr Publikum glücklich machen – also good tips, but a pity that no distinction is ever made between training presentations and other kinds.

With this, terror has reached its goal, namely sowing distrust: Mathe macht verdächtig.

Kommt jetzt der große Podcast-Durchbruch? – this has nothing to do with podcasts, which by definition consist of a feed with an attached audio file and do not require a special client.

on July 22, 2016 03:04 AM

The first point release update to our LTS release 16.04 is out now. This contains all the bugfixes added to 16.04 since its first release in April. Users of 16.04 can run the normal update procedure to get these bugfixes.

See the 16.04.1 release announcement.

Download 16.04.1 images.

on July 22, 2016 02:57 AM

Ubuntu 16.04 in the SF Bay Area

Elizabeth K. Joseph

Back in June I gave a presentation on the 16.04 release down at FeltonLUG, which I wrote about here.

Making my way closer to home, I continued my tour of Ubuntu 16.04 talks in the San Francisco Bay Area. A couple weeks ago I gave the talk at SVLUG (Silicon Valley Linux Users Group) and on Tuesday I spoke at BALUG (Bay Area Linux Users Group).

I hadn’t been down to an SVLUG meeting in a couple years, so I appreciated the invitation. They have a great space set up for presentations, and the crowd was very friendly. I particularly enjoyed that folks came with a lot of questions, which meant we had an engaging evening and it stretched what is alone a pretty short talk into one that filled the whole presentation time. Slides: svlug_ubuntu_1604.pdf (6.0M), svlug_ubuntu_1604.odp (5.4M)

Presentation, tablets and giveaways at SVLUG

At BALUG this week things were considerably more casual. The venue is a projector-less Chinese restaurant these days and the meetings tend to be on the small side. After family style dinner, attendees gathered around my big laptop running Ubuntu as I walked through my slide deck. It worked better than expected, and the format definitely lent itself to people asking questions and having discussions throughout too. Very similar slides to the ones I had at SVLUG: balug_ubuntu_1604.pdf (6.0M), balug_ubuntu_1604.odp (5.4M)

Setup and giveaways at BALUG

Next week my Ubuntu 16.04 talk adventures culminate in the event I’m most excited about, the San Francisco Ubuntu 16.04 release party at OpenDNS office located at 135 Bluxome St in San Francisco!

The event is on Thursday, July 28th from 6:30 – 8:30PM.

It’s right near the Caltrain station, so where ever you are in the bay it should be easy to get to.

  • Laptops running Ubuntu and Xubuntu 16.04.
  • Tablets running the latest Ubuntu build, including the bq Aquaris M10 that shipped with Ubuntu and demonstrates convergence.
  • Giveaways, including the 9th edition of the Official Ubuntu book (new release!), pens, stickers and more.

I’ll need to plan for food, so I need folks to RSVP. There are a few options for RSVP:

Need more convincing? It’ll be fun! And I’m a volunteer whose systems engineering job is unrelated to the Ubuntu project. In order to continue putting the work into hosting these events, I need the satisfaction of having people come.

Finally, event packs from Canonical are now being shipped out to LoCos! It’s noteworthy that for this release instead of shipping DVDs, which have been in sharp popularity decline over the past couple of years, they are now shipping USB sticks. These are really nice, but the distribution is limited to just 25 USB sticks in the shipment for the team. This is an order of magnitude fewer than we got with DVDs, but they’re also much more expensive.

Event pack from Canonical

Not in the San Francisco Bay Area? If you feel inspired to give an Ubuntu 16.04 presentation, you’re welcome to use my slides, and I’d love to see pictures from your event!

on July 22, 2016 12:17 AM

July 21, 2016

S09E21 – Snapper Biscuit - Ubuntu Podcast

Ubuntu Podcast from the UK LoCo

It’s Episode Twenty-one of Season Nine of the Ubuntu Podcast! Alan Pope, Mark Johnson, Laura Cowen and Martin Wimpress are connected and speaking to your brain.

We’re here again, just about! Two of us are snapping in Heidelberg.

In this week’s show:

  • We interview Snapcraft developers Sergio Schvezov and Kyle Fazzari about snap and Snapcraft, why they (snap and Snapcraft, not the developers) exist, the problem they solve, and some upcoming features.

  • We also discuss testing superglue preservation tips (we’ll report back on the success or otherwise in a future episode), and playing Pokémon GO around campus and RimWorld on Steam on Linux.

  • We share a Command Line Lurve, Cheat, which provides you with a cheatsheet (brief, handy help) for a command.

  • And we go over all your amazing feedback – thanks for sending it – please keep sending it!

  • This week’s cover image is taken from Wikimedia.

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

on July 21, 2016 06:02 PM

The Ubuntu team is pleased to announce the release of Ubuntu 16.04.1 LTS (Long-Term Support) for its Desktop, Server, and Cloud products, as well as other flavours of Ubuntu with long-term support.

As usual, this point release includes many updates, and updated installation media has been provided so that fewer updates will need to be downloaded after installation. These include security updates and corrections for other high-impact bugs, with a focus on maintaining stability and compatibility with Ubuntu 16.04 LTS.

Kubuntu 16.04.1 LTS, Xubuntu 16.04.1 LTS, Mythbuntu 16.04.1 LTS, Ubuntu GNOME 16.04.1 LTS, Lubuntu 16.04.1 LTS, Ubuntu Kylin 16.04.1 LTS, Ubuntu MATE 16.04.1 LTS and Ubuntu Studio 16.04.1 LTS are also now available. More details can be found in their individual release notes:


Maintenance updates will be provided for 5 years for Ubuntu Desktop, Ubuntu Server, Ubuntu Cloud, Ubuntu Base, and Ubuntu Kylin. All the remaining flavours will be supported for 3 years.

To get Ubuntu 16.04.1

In order to download Ubuntu 16.04.1, visit:


Users of Ubuntu 14.04 will soon be offered an automatic upgrade to 16.04.1 via Update Manager. For further information about upgrading, see:


As always, upgrades to the latest version of Ubuntu are entirely free of charge.

We recommend that all users read the 16.04.1 release notes, which document caveats and workarounds for known issues, as well as more in-depth notes on the release itself. They are available at:


If you have a question, or if you think you may have found a bug but aren’t sure, you can try asking in any of the following places:

Help Shape Ubuntu

If you would like to help shape Ubuntu, take a look at the list of ways you can participate at:


About Ubuntu

Ubuntu is a full-featured Linux distribution for desktops, laptops, clouds and servers, with a fast and easy installation and regular releases. A tightly-integrated selection of excellent applications is included, and an incredible variety of add-on software is just a few clicks away.

Professional services including support are available from Canonical and hundreds of other companies around the world. For more information about support, visit:


More Information

You can learn more about Ubuntu and about this release on our website listed below:


To sign up for future Ubuntu announcements, please subscribe to Ubuntu’s very low volume announcement list at:


Originally posted to the ubuntu-announce mailing list on Thu Jul 21 14:53:03 UTC 2016 by Adam Conrad, on behalf of the Ubuntu Release Team

on July 21, 2016 05:58 PM

The Xubuntu team is pleased to announce the immediate release of Xubuntu 16.04.1. Xubuntu 16.04 is an LTS (Long-Term Support) release and will be supported for 3 years, with point releases appearing at regular intervals.

The release images are available as Torrents and direct downloads from

During the next few days the upgrade from 14.04 to 16.04.1 will become available via the update-manager.

Those upgrading from 14.04 should take note of the following two issues, which affect us particularly; these and other bugs are further detailed in the Release Notes.

The Intel cursor bug is currently the subject of an SRU; the fix will be released in the near future.

In addition, those upgrading from 14.04 should take note of the deprecation of the fglrx driver, detailed in the Ubuntu Release Notes.

Known Issues

  • Thunar is the subject of a few bugs, though they all appear to revolve around similar issues. We have 2 patches applied that, while not completely fixing the issue, do lessen the impact.
  • When returning from lock, the cursor disappears on the desktop; you can bring it back with Ctrl+Alt+F1 followed by Ctrl+Alt+F7
on July 21, 2016 05:50 PM
Thanks to all the hard work from our contributors, we are pleased to announce that Lubuntu 16.04.1 LTS has been released! This release provides updates to the Lubuntu 16.04 LTS to reduce the update time post-download and an upgrade path for users running Lubuntu 14.04 LTS. Where can I download it? On the Ubuntu cdimage […]
on July 21, 2016 04:50 PM

Over the last year, Miles Sharpe (Kilos on IRC) worked hard on getting the African LoCos united and active again. Now he is working with two other LoCos: ubuntu-bd and ubuntu-pk.

The problem that he found in the ubuntu-bd LoCo is this:

I started with ubuntu-bd and found 3 nicks on the irc channel and no one
responding. There were over 20 applicants waiting for approval on LP. some for 2 years already. So with some help from the LC we found the owner and he came and agreed to get things going again but said those users prefer facebook and later said he was to busy. And their mailing list is for announcements. I am not a fan of mailing lists but find that they are a good way of getting a message out when one has no irc contact with someone. At least the LP applicants are approved now.  He greeted a few times after that and has now withdrawn again, so pavlushka (the failed applicant) has been trying to get things going again.
At times there are 10 nicks in channel and from chatting to them for the last 5 months, I have learned that they aren’t satisfied with the way things are going there.The Ubuntu community spirit is missing.

Taken from here.

And for the ubuntu-pk LoCo:

I then started looking at the ubuntu-pk channel and found it in the same sad state of affairs. After a couple of months an old ubuntu user from pk arrived and was surprised to find any life there and he has been helping regrow the channel. And will apply for Ubuntu membership within a few months. By rights he could have done that years ago imo. But once again the leadership is at fault. Here is his old wiki page

Taken from here.

Both of these examples point to one problem: these LoCos are using Facebook instead of IRC/mailing lists. Two reasons come to mind as to why: 1) we are in a new age where social media dominates, and 2) these are third-world countries where Internet access is expensive. Because of that, the providers give “free” Internet plans where users can only access Facebook and Twitter at no cost. In turn, the people of these countries don’t have a sense of what the Internet really is. This is where Mozilla Learning aims to educate these people. But we are not Mozilla, we are Ubuntu, and this is not our problem. Our problem is the health of our Community, mainly the LoCos (looking at Memberships in particular).

One solution is something like the Ubuntu Forums system for Membership. But the problem is how to deal with applications on Facebook and other social media sites. One option is using groups, but that still requires applicants to have a wiki page, sign the CoC, and complete the other items for Membership. And who will oversee the process on these social media sites?

Other LoCos are inactive both within the Ubuntu Community and even on social media. The Oceania LoCos are examples. I lied, they are using G+, I need a better example. The problem with these LoCos is: how would newcomers be able to join, only to find out that there is no one to greet them? Or even to help with rebooting the LoCo? The solution is to come and join #ubuntu-locoteams on irc.freenode.net, where we can help you connect with others in your LoCo or give you ideas on how to reboot your LoCo.

The last group of LoCos are those which have members, but the members are scattered throughout the country or state. My LoCo, Ubuntu Ohio, is one example. One solution to this problem is to figure out a common meeting spot and date and meet there.

The bottom line here is that we need to rethink the health of our LoCos, as they are a source of our Ubuntu Members and a way to connect with others in real life.

EDIT TO ADD: Uniting LoCos in the same continent or country (USA for example) is another solution.

on July 21, 2016 04:30 PM

Hack The World

Jono Bacon


As some of you will know, recently I have been consulting with HackerOne.

I just wanted to share a new competition we launched yesterday called Hack The World. I think it could be interesting to those of you already hacking, but also those of you interested in learning to hack.

The idea is simple. HackerOne provides a platform where you can go and hack on popular products/services (e.g. Uber, Adobe, GitHub, Square, Slack, Dropbox, GM, Twitter, Yahoo!, and many more) and submit vulnerability reports. This is awesome for hackers as they can safely hack on products/services, try out new hacking approaches/tools, build relationships with security teams, build a resume of experience, and earn some cold hard cash.

Currently HackerOne has 550+ customers, has paid over $8.9 million in bounties, and fixed over 25,000 vulnerabilities, which makes for a safer Internet.

Hack The World

Hack The World is a competition that runs from 20th July 2016 – 19th September 2016. In that time period we are encouraging people to hack programs on HackerOne and submit vulnerability reports.

When you submit a vulnerability report that is valid, the program may award you a bounty payment (many people all over the world earn significant buckets of money from bounties). In addition, you will be awarded reputation and signal. Reputation is an indicator of activity and participation, and signal is the average reputation across your reports.
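As a rough illustration of that arithmetic (the function name and the numbers below are mine, not HackerOne's): signal is just the mean reputation across your reports.

```python
# Hypothetical sketch of the signal arithmetic described above: signal is
# the average reputation earned across your submitted reports.
def signal(report_reputations):
    """Mean reputation per report; 0.0 if no reports yet."""
    if not report_reputations:
        return 0.0
    return sum(report_reputations) / len(report_reputations)

# Three valid reports earning 7, 7, and 4 reputation:
print(signal([7, 7, 4]))  # → 6.0
```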

Put simply, whoever earns the most reputation in the competition can win some awesome prizes including $1337 in cash, a hackable FPV drone kit, awesome limited edition swag, and bragging rights as being one of the most talented hackers in the world.

To ensure the competition is fair for everyone, we have two brackets – one for experienced hackers and one for new hackers. There will be 1st, 2nd, and runner-up prizes in each bracket. This means those of you new to hacking have a fighting chance to win!

Joining in the fun

Getting started is simple. Just go and register an account or sign in if you already have an account.

To get you started, we are providing a free copy of Peter Yaworski’s awesome Web Hacking 101 book. Ensure you are logged in and then go here to grab the book. It will then be emailed to you.

Now go and find a program, start hacking, learn how to write a great report, and submit reports.

When your reports are reviewed by the security teams of the programs you are hacking on, reputation will be awarded. You will then start appearing on the Hack The World leaderboard, which at the time of writing looks a little like this:

(Screenshot: the Hack The World leaderboard as of 20 July 2016)

This data is almost certainly out of date as you read this, so go and see the leaderboard here!

So that’s the basic idea. You can read all the details about Hack The World by clicking here.

Hack The World is a great opportunity to hack safely, explore new hacking methods/tools, make the Internet safer, earn some money, and potentially be crowned as a truly l33t hacker. Go hack and prosper, people!

on July 21, 2016 03:00 PM

Show Hosts

Ovidiu-Florin Bogdan

Rick Timmis

Aaron Honeycutt

Show Schedule


What have we (the hosts) been doing ?

  • Aaron
    • Working a sponsorship out with Linode
    • Working on uCycle
  •  Rick
    • #Brexit – It would be Rude Not to [talk about it]
    • Comodo – Let’s Encrypt Brand challenge https://letsencrypt.org//2016/06/23/defending-our-brand.html#1

Sponsor 1 Segment

Big Blue Button

Those of you who have attended the Kubuntu parties will have seen our Big Blue Button conference and online education service.

Video, Audio, Presentation, Screenshare and whiteboard tools.

We are very grateful to Fred Dixon and the team at BigBlueButton.org. Go check out their project.

Kubuntu News

Elevator Picks

Identify, install and review one app each from the Discover software center and do a short screen demo and review.

In Focus

Joining us today is Marius Gripsgard from the UbPorts project.


Sponsor 2 Segment


We’ve been in talks with Linode, an awesome VPS provider with super-fast SSDs, data connections, and top-notch support. We have worked out a sponsorship for a server to build packages quicker and get them to our users faster. BIG SHOUT OUT to Linode for working with us!

Kubuntu Developer Feedback

  • Plasma 5.7 is unlikely to hit Xenial Backports in the short term, as it is still dependent on Qt 5.6.1, for which there is currently no build for Xenial.
    There is an experimental build that Acheronuk has been working on, but there are still stability issues.

Game On

Steam Group: http://steamcommunity.com/groups/kubuntu-podcast

Review and gameplay from Shadow Warrior.


How to contact the Kubuntu Team:

How to contact the Kubuntu Podcast Team:

on July 21, 2016 09:46 AM

July 20, 2016

Summer vacations have been getting tougher in recent years. Airlines cut into your precious vacation time with their online check-in procedures and a dozen reminder messages, there is growing concern about airport security and Brexit has already put one large travel firm into liquidation leaving holidaymakers in limbo.

If that wasn't all bad enough, now there is a new threat: while you are relaxing in the sun, scammers fool your phone company into issuing a replacement SIM card or transferring your mobile number to a new provider and then proceed to use it to take over all your email, social media, Paypal and bank accounts. The same scam has been appearing around the globe, from Britain to Australia and everywhere in between. Many of these scams were predicted in my earlier blog SMS logins: an illusion of security (April 2014) but they are only starting to get publicity now as more aspects of our lives are at risk, scammers are ramping up their exploits and phone companies are floundering under the onslaught.

With the vast majority of Internet users struggling to keep their passwords out of the wrong hands, many organizations have started offering their customers the option of receiving two-factor authentication codes on their mobile phone during login. Rather than making people safer, this has simply given scammers an incentive to seize control of telephones, usually by tricking the phone company to issue a replacement SIM or port the number. It also provides a fresh incentive for criminals to steal phones while cybercriminals have been embedding code into many "free" apps to surreptitiously re-route the text messages and gather other data they need for an identity theft sting.

Sadly, telephone networks were never designed for secure transactions. Telecoms experts have made this clear numerous times. Some of the largest scams in the history of financial services exploited phone verification protocols as the weakest link in the chain, including a $150 million heist reminiscent of Ocean's 11.

For phone companies, SMS messaging came as a side-effect of digital communications for mobile handsets. It is less than one percent of their business. SMS authentication is less than one percent of that. Phone companies lose little or nothing when SMS messages are hijacked so there is little incentive for them to secure it. Nonetheless, like insects riding on an elephant, numerous companies have popped up with a business model that involves linking websites to the wholesale telephone network and dressing it up as a "security" solution. These companies are able to make eye-watering profits by "purchasing" text messages for $0.01 and selling them for $0.02 (one hundred percent gross profit), but they also have nothing to lose when SIM cards are hijacked and therefore minimal incentive to take any responsibility.

Companies like Google, Facebook and Twitter have thrown more fuel on the fire by encouraging and sometimes even demanding users provide mobile phone numbers to "prove they are human" or "protect" their accounts. Through these antics, these high profile companies have given a vast percentage of the population a false sense of confidence in codes delivered by mobile phone, yet the real motivation for these companies does not appear to be security at all: they have worked out that the mobile phone number is the holy grail in cross-referencing vast databases of users and customers from different sources for all sorts of creepy purposes. As most of their services don't involve any financial activity, they have little to lose if accounts are compromised and everything to gain by accurately gathering mobile phone numbers from as many users as possible.

Can you escape your mobile phone while on vacation?

Just how hard is it to get a replacement SIM card or transfer/port a user's phone number while they are on vacation? Many phone companies will accept instructions through a web form or a phone call. Scammers need little more than a user's full name, home address and date of birth: vast lists of these private details are circulating on the black market, sourced from social media, data breaches (99% of which are never detected or made public), marketing companies and even the web sites that encourage your friends to send you free online birthday cards.

Every time a company has asked me to use mobile phone authentication so far, I've opted out, and I'll continue to do so. Even if somebody does hijack my phone account while I'm on vacation, the consequences for me are minimal, as it will not give them access to any other account or service. Can you and your family members say the same?

What can be done?

  • Opt-out of mobile phone authentication schemes.
  • Never give the mobile phone number to web sites unless there is a real and pressing need for them to call you.
  • Tell firms you don't have a mobile phone or that you share your phone with your family and can't use it for private authentication.
  • If you need to use two-factor authentication, only use technical solutions such as smart cards or security tokens that have been engineered exclusively for computer security. Leave them in a locked drawer or safe while on vacation. Be wary of anybody who insists on SMS and doesn't offer these other options.
  • Rather than seeking to "protect" accounts, simply close some or all social media accounts to reduce your exposure and eliminate the effort of keeping them "secure" and updating "privacy" settings.
  • If your bank provides a relationship manager or other personal contact, this
    can also provide a higher level of security as they get to know you.

Previous blogs on SMS messaging, security and two factor authentication, including my earlier blog SMS Logins: an illusion of security.

on July 20, 2016 05:48 PM

July 19, 2016

More updates and improvements! Our latest OTA-12 has just landed and we’re excited that you can now wirelessly connect your M10 tablet to a monitor! This gives users the full Ubuntu PC experience running from your tablet. All the services running from your tablet are now available on the desktop through just a wireless dongle and no cables – giving you the full Ubuntu convergence experience – if you missed the above, check out the magic moment in the short video.

Plus! You can now install any conventional Ubuntu desktop app to use on your Ubuntu phone or tablet when a mouse and keyboard are connected. We have introduced a smart new solution to allow access to the many more desktop applications residing in the Ubuntu archive. Conventional Debian packages will also work. This solution currently uses the command line interface to access the archive; however, using the Store interface to directly install a full range of desktop apps will soon be available. For an in-depth technical explanation of this process, please read this article.

Check out the key performance improvements to the Ubuntu phone and M10 tablet below:

General Features

  • Fingerprint reader setup and unlock for Pro 5


  • Desktop Apps scope
  • M10 Wireless display (tech preview)
  • OSK for X apps (allows desktop apps to be used with an On Screen Keyboard)
  • Window management (touch friendly controls for windows)


  • Post photo and video from previews
  • Video consumption from scopes

App Details

  • Web browser (continuing to improve the browser is an ongoing and important focus for us), particularly zoom


  • Color emojis
  • Cellular data toggle in settings and indicator

To learn more about all the updates, click here.

on July 19, 2016 09:28 AM

Introducing the new Juju store

Canonical Design Team

Over the past few months, the Juju team has been working on a whole redesign of the Juju store.

(Screenshots of the new Juju store design)







on July 19, 2016 08:26 AM

Greetings Ubunteros,
The opinions below are my own and not necessarily the opinions of other Ubuntu users.
Everything is running smoothly in ubuntu-za and ubuntu-africa. Here is my blog about the ubuntu-africa project.
http://kilosubuntu.blogspot.com. I think most will know by now that my main aim is to spread Ubuntu as far as I can and see all users happy in their LoCos with good guidance and support,
and the correct way isn't by having over 400 Ubuntu users on Facebook and no one on IRC or the mailing list.

So I got a foolish idea to try to revive LoCos in other areas of the globe and hopefully get them to run as efficiently as my home LoCo. I didn't know what I was getting into.
I started with ubuntu-bd and found 3 nicks on the IRC channel and no one responding. There were over 20 applicants waiting for approval on LP, some for 2 years already. So with some help from the LC we found the owner, and he came and agreed to get things going again, but said those users prefer Facebook and later said he was too busy. And their mailing list is for announcements only. I am not a fan of mailing lists, but find that they are a good way of getting a message out when one has no IRC contact with someone. At least the LP applicants are approved now. He greeted a few times after that and has now withdrawn again, so pavlushka (the failed applicant) has been trying to get things going again.
At times there are 10 nicks in the channel, and from chatting with them for the last 5 months, I have learned that they aren't satisfied with the way things are going there. The Ubuntu community spirit is missing.
I then looked at the ubuntu-pk channel and found it in the same sad state of affairs. After a couple of months an old Ubuntu user from PK arrived, was surprised to find any life there, and has been helping regrow the channel. He will apply for Ubuntu Membership within a few months; by rights he could have done that years ago, IMO. But once again the leadership is at fault. Here is his old wiki page.
It seems to me that running a LoCo there is a personal badge of honour, not something done in the best interests of Ubuntu or with any concern for other users.
Feel free to mail me at msdomdonner at ubuntu dot com or find me daily on irc in
on July 19, 2016 08:23 AM

Welcome to the Ubuntu Weekly Newsletter. This is issue #474 for the week July 11 – 17, 2016, and the full version is available here.

In this issue we cover:

The issue of The Ubuntu Weekly Newsletter is brought to you by:

  • Elizabeth K. Joseph
  • Walter Lapchynski
  • Leon G. Marincowitz
  • Chris Guiver
  • Chris Sirrs
  • Simon Quigley
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

on July 19, 2016 02:44 AM

July 18, 2016

A two-day workshop on Women in Free Software and Fedora Women's Day was held on the 15th and 16th of July 2016 at NSEC, Kolkata. The event was jointly organized by the Ubuntu Women Project, the Fedora Project, and NSEC, Kolkata, and was substantially sponsored by the Ubuntu Women Project. The goals of the workshop were also to get new participants interested, improve the level of participation, and explore new avenues of free software community development. Given the factors involved, WWFS-FWD'2016 was a success.

Rebeka and I were the main organizers of the event. While I handled most of the speaker selection, general logistics, and part of the event publicity, Rebeka handled all of the college-level publicity and local organization. She is a brilliant student, but was handling an event for the first time and had a major health problem during the organization period. Overall her handling of the event was good, though lacking on a few points, like directions to the workshop lab and the specificity of directions to the venue.

We had a small but dedicated group of participants. Participation from the college itself was very low (our original estimate was that 30 seats would easily be filled by college students). The college in question is not easily reachable, though it is only about 3 km from places that are easily reachable by public transport. Any student of the college would know that traveling there in the rainy season is a hard physical exercise (only wretched autos ply 80% of the last muddy stretch). Speakers found it difficult to reach the place too, and even I arrived an hour late on the first day. Most of the participants were students of colleges or schools. Some pre-registered participants failed to turn up for other reasons. This is one way our environment interacts with us.

Otherwise, the event went well with an optimal set of talks. The weather on day one was terrible (100% humidity with rain). I was to speak first, but  I let Trishna speak first as I was late. She spoke about her contribution to Bodhi and Fedora cloud, and about contributing to related projects.

My talk on ”Free Software, Women and Feminism” was on all of the following: basics of free software, differences with OSI, evil nature of proprietary software and related development models, lessons of MS-IE4, geek feminism aspects, feminist issues,  functional feminism, micro-aggressions, SH policies, free software development models, necessity of new interdisciplinary development models and licenses beyond GNU/GPL v3+.

After this talk we had lunch and the 2lbs cake for our event. We were left with extra lunch packets and part of the cake (we took them home and did not waste food).

My post lunch talk was on basics of ”GNU/Linux from a functional perspective”. I spoke of the whole eco-system, distribution types, basics of installation, Ubuntu, shells, permissions, files and filesystems, partitioning, apt-get, contributing to Ubuntu Women, documentation and bug fixing.

The next talk was a lively one by Priyanka(N) on “Imposter Syndrome” and the steps that can be taken to overcome this environment-induced problem for women. Of course, competent feminists would be able to overcome this problem more easily. We shifted the remaining talks scheduled for the day to the 17th, as we were all tired.

On the first session of the second day, Rebeka had her hands-on session on python scripting and system administration. The participants displayed much enthusiasm in the session.

This session was followed by Priyanka(S)’s session on handling Twitter data with ”Parsey McParseface”. Due to a bug with the demo, the talk was shifted to the last session at 17:40 hrs. Gunjan spoke next at length on the basics of images, using GIMP: color models, channels, layers, alpha channels, retouching photos, and Python scripts in GIMP. The talk was well received.

Trupti’s video session on Drupal basics was played next. In it she introduces the basics of setting up websites with the Drupal CMS. The audio was a bit low.

After this we all had lunch.

Post lunch, Priyanka(N) talked about IRC,  bug tracker systems like Bugzilla, her use of Bugzilla in Mozilla projects, versioning systems, GIT and contributing to FOSS.

Next I introduced LaTeX in the context of related standards, then introduced SGML and subsets thereof, TeXLive, basic LaTeX markup, and considered representative markup. The source code of our event schedule was useful for demonstrating both the structure and markup of tables with the booktabs package. I did not use the source of the event poster in tikzposter (as it is a bit more complex).

After my talk, Swapna delivered an excellent hands on session on ”GNU/Octave” starting from basics and going all the way to svm code.

I interacted with all speakers and had optimized their talks for the workshops. This substantially contributed to improving the quality of talks. Originally Swapna was not even willing to speak and claimed no knowledge of GNU/Octave and that she is a Matlab user. So I had to convince her about code compatibility. As mentioned above, she delivered an excellent introduction to GNU/Octave on day two.

The slides of all our talks can be found at the links below:

Speaker – Topic
A Mani – Free Software, Women and Feminism
A Mani – GNU/Linux, Ubuntu – A Functional View
Trishna Guha – What I Do in Fedora and How You Can Get Involved
Priyanka Nag – Imposter Syndrome
Rebeka Mukherjee – Python Scripting and System Administration
Trupti Kini – Drupal Basics
Priyanka Sinha – Parsey McParseface
Gunjan Gautam – GIMP
Priyanka Nag – How to Contribute to FLOSS?
A Mani – LaTeX for Publishing
Swapna Agarwal – GNU/Octave


More Event Reports

Rebeka has also written a nice blog report on the event.

on July 18, 2016 07:29 PM


Over the last few weeks, Tuesday has become the Snappy Playpen day. Although you can basically find us on IRC and Gitter all the time, Tuesday is when many of us have our eyes locked on the discussion and are happy to help out.

We’re making no exception tomorrow, 19th July 2016 will be another Snappy Playpen event.

It’s beautiful to see all the recent additions to the Snappy Playpen repository and other contributions. Just check out the snapcraft social media channels (Facebook, Twitter, Google+) to get an idea.

We very much want to continue down that road: get more software snapped, help newcomers, get snapcraft.yaml files submitted upstream, fix documentation, answer questions, and grow together as a community.

Tomorrow has the great advantage that most of the people working on snapd and snapcraft are sprinting in Heidelberg right now. Since they are all in the same place physically, we are going to try to talk them into helping out and joining us for some Playpen activity.

To get started, have a look at the snapcraft.io page and ask us all your questions tomorrow! We’re looking forward to seeing you there.

on July 18, 2016 03:49 PM

This weekend I dropped Erica off at the airport. Driving through San Francisco, we saw an inventive billboard designed to reduce texting and driving. Driver distraction is a big problem, with a 2012 study suggesting over 3,000 deaths and 421,000 injuries were a result of distraction. I am pretty confident those shiny, always-connected cellphones are indeed a common distraction during a boring drive or in times when you are anxious for information.

So anyway, we were driving past this billboard designed to reduce texting and driving and it included an Apple messages icon with a message awaiting. It was similar to, but not the same as this:


While these billboards are good to have, I suspect they are only effective when they go beyond advocating a behavior and actually trigger a real behavioral change. Rory Sutherland’s example of Scotland changing speed signs from a number to an unhappy face is a prime one – instead of telling drivers to drive more slowly, they tapped into the psychology of initiating that behavioral change.

When I saw this sign, it actually had the opposite effect on me. Seeing the notification icon with a message waiting caused a cognitive discomfort that something needed checking, tending to, and completing. You guessed it: it made me actually want to check my phone.

The Psychology of Notifications

This got me thinking about the impact of notifications on our lives and whether part of the reason people text and drive is not because they voluntarily pick up the phone and screw around with it, but instead because they are either (a) notified by audio, or (b) feel the notification itch to regularly check their phone to see if there are new notifications and then action them. Given that both Android and Apple phones display notifications on the lock screen, this makes it particularly easy to see a notification and then action it by clicking on it and loading the app, and then potentially smash your car into a Taco Bell sign.

There is of course some psychology that supports this. Classical Conditioning demonstrates that we can associate regularly exposed stimuli with key responses. As such, we could potentially associate time away from our computers, travel, or other cognitive functions such as driving, as a time when we think about our relationships, our work, and therefore feel the urge to use our phones. In addition to this, research in Florida demonstrated that any kind of audio notifications fundamentally disrupt productivity and thus are distracting.

A Software Solution?

As such, it strikes me that a simple solution for reducing texting and driving could be to simply reduce notifications while driving.

For this to work, I think a solution would need to be:

  • Automatic – it detects when you are traveling and suitably disengages notifications.
  • Contextual – sometimes we are speeding along but not driving (such as taking a subway, or as a passenger in a car).
  • Incentivized – it is unlikely we can expect all phone makers to switch this on by default and not make it able to be disabled (nor should we). As such, we need to incentivize people to use a feature like this.

For the automatic piece some kind of manual installation would likely be needed but then the app could actively block notifications when it automatically detects the phone is above a given speed threshold. This could be done via transitional points between GPS waypoints and/or wifi hotspots (if in a database). If the app detects someone going faster than a given speed, it kicks in.
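To make the automatic piece concrete, here is a minimal Python sketch. Everything in it – the function names, the waypoint format, and the 20 km/h cut-off – is an assumption of mine for illustration, not any real app's API: it estimates speed between two consecutive GPS fixes and decides whether notifications should be suppressed.

```python
import math

EARTH_RADIUS_M = 6371000
SPEED_THRESHOLD_KMH = 20  # assumed cut-off; a real app would tune this

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def should_block(fix_a, fix_b):
    """fix_a and fix_b are (lat, lon, unix_seconds) GPS fixes."""
    dist = haversine_m(fix_a[0], fix_a[1], fix_b[0], fix_b[1])
    dt = fix_b[2] - fix_a[2]
    if dt <= 0:
        return False
    speed_kmh = dist / dt * 3.6  # m/s -> km/h
    return speed_kmh > SPEED_THRESHOLD_KMH

# ~1.1 km covered in 60 s is roughly 66 km/h: clearly driving, so block.
print(should_block((37.7749, -122.4194, 0), (37.7849, -122.4194, 60)))
```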

For the contextual piece I am running thin on ideas for how to do this. One option could be to use the accelerometer to determine if the phone is stationary or not (most people seem to put their phones in a cup holder or phone holder when they drive). If the accelerometer is wiggling around it might suggest the person is a passenger and has the phone on their lap, pocket, or in their hand. Another option could be an additional device that connects to the phone over bluetooth that determines proximity of the person in the car (e.g. a wrist-band, camera, sensor on the seat, or something else), but this would get away from the goals of it being automatic.
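The accelerometer idea can be sketched the same way; the threshold and names below are invented for illustration: a phone resting in a cup holder shows near-constant readings, while one in a hand or pocket jitters.

```python
# Rough sketch of the accelerometer heuristic above (thresholds invented):
# low variance in the readings suggests the phone is resting in a holder,
# high variance suggests it is being held by a passenger.
WIGGLE_THRESHOLD = 0.05  # assumed variance cut-off, in (m/s^2)^2

def variance(samples):
    mean = sum(samples) / len(samples)
    return sum((s - mean) ** 2 for s in samples) / len(samples)

def likely_passenger(accel_magnitudes):
    """True if the readings wiggle more than a resting phone's would."""
    return variance(accel_magnitudes) > WIGGLE_THRESHOLD

print(likely_passenger([9.80, 9.81, 9.80, 9.81]))  # steady: in a holder
print(likely_passenger([9.2, 10.6, 8.9, 10.9]))    # jittery: probably held
```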

For the incentive piece, this is a critical component. With teenagers a common demographic, and thus first-time drivers, money could be an incentive: lower insurance fees (particularly given how expensive teenagers are to insure), discounts/offers at stores teenagers care about (e.g. Hot Topic for the greebos out there), free food, and other ideas could all work. For older drivers the same benefits could apply, just in a different context.


While putting up billboards to tell people to be responsible human beings is one tool in reducing accidents, we are better positioned than ever to use a mixture of technology and psychology to creatively influence behavior more effectively. If I had the time, I would love to work on something like this, but I don’t have the time, so I figured I would share the idea here as a means to inspire some discussion and ideas.

So, comments, feedback, and ideas welcome!

on July 18, 2016 03:29 PM

July 17, 2016

The copy and paste commands are incredibly useful, but what if you need to retain what you have copied and pasted after you copy something else?  That’s where clipboard managers come into play.  I know some people use gedit/notepad, but to me that requires an extra step, since one needs to paste the item and then recopy it when it’s needed.

There are programs that allow items to be copied onto a clipboard and then selected from a menu. One such program is Diodon which I use.  It’s an integrated clipboard manager for the Gnome/Unity desktop that has the features that I need.  Ones like:

  • Expandable item list
  • Searchable items via the Dash
  • Automatic paste when item is selected
  • Plug-ins!

If you need one, you can give Diodon a shot.

on July 17, 2016 03:20 PM

July 16, 2016



Hey guys (and girls),

For a long while I’ve been happily using Fluxbox, and actually I must say that I started to get a little bored. The reason I moved to Fluxbox was to stay away from limiting WMs like Unity or GNOME 3, but after a while using Fluxbox, I felt I wanted something even more hardcore that would give me even more control over my computer.

After a talk with my roommate (who just recently installed Linux and quite quickly became a Linux master :D), we both decided to switch to Awesome WM, which from the current point of view seems like a great decision!

Awesome is actually named after Barney Stinson!

A few words about Awesome and what makes it so awesome:
Awesome is a very lightweight, dynamic WM in which you can modify just about anything. You have one (very long) configuration file named rc.lua, and yes, all the configuration is in Lua. Actually, the configuration is way more than just configuration; it’s the whole WM: widgets, toolbars, the way windows work, act and move, shortcuts, mouse actions, menus and more and more and more – everything is Lua scripts, extremely configurable.

As I mentioned above, the configuration is one huge Lua file containing just about everything in the WM, which makes it hard to read and easy to mess up. For that reason, one of the first things I did after installing Awesome was to split the configuration into several files, making it much easier to understand and modify.
Of course, if I can think of something, it’s probably already on the internet, so after some googling I found phyber’s split rc.lua, which gave me a great something to start with  :)

Some more stuff I wrote for my configuration:
– a language switcher (which made me write a patch for xkb-switcher)
– an extremely generic startup autorun
– hacked Calendar35 a little bit to work with my local configuration

You might wanna have a peek at my GitHub repository.

Anyways – any Awesome users out there? I’m looking for tips, ideas, and mostly people to show off to from time to time (my gf isn’t too impressed by my geeky shit :P)

Dor :)

on July 16, 2016 09:43 PM
1. Install Ubuntu 14.04 server.
Remember to enable the firewall:
costales@maps:~$ sudo ufw allow http
costales@maps:~$ sudo ufw allow ssh
costales@maps:~$ sudo ufw enable

2. Check that you have all locales right:

costales@maps:~$ locale

If some of them are empty, add them to /etc/environment, in my case LC_ALL & LANGUAGE:

costales@maps:~$ cat /etc/environment

3. Install the server from a PPA:

sudo apt-get install software-properties-common
sudo add-apt-repository ppa:kakrueger/openstreetmap
sudo apt-get update
sudo apt-get install libapache2-mod-tile osmctools

4. Import a map: We'll drop as much data as possible to keep the database as small as we can, and to save hard disk space ;)
4.1 Download from here in pbf. For example, europe-latest.osm.pbf:
costales@maps:~$ wget http://download.geofabrik.de/europe-latest.osm.pbf

4.2 Make it smaller; we'll keep only the roads:
costales@maps:~$ osmconvert europe-latest.osm.pbf -o=europe.o5m
costales@maps:~$ osmfilter europe.o5m --drop-author --drop-version --keep="highway=cycleway" --keep="highway=path" --keep="highway=footway" --keep="highway=track" --keep="highway=service" --keep="highway=pedestrian" --keep="highway=unclassified" --keep="highway=residential" --keep="highway=tertiary" --keep="highway=secondary" --keep="highway=primary" --keep="highway=trunk" --keep="highway=motorway" --keep="highway=" --drop-tags="alt_name" --drop-tags="source" --drop-tags="maxspeed" --drop-tags="created_by" --drop-tags="wheelchair*" -o=europe_tmp.o5m
costales@maps:~$ osmconvert europe_tmp.o5m -o=europe_end.pbf
costales@maps:~$ rm europe-latest.osm.pbf europe.o5m europe_tmp.o5m

4.3 Import it into the database:
costales@maps:~$ osm2pgsql --drop --slim -C 1700 --number-processes 2 europe_end.pbf
Here, 1700 is the amount of RAM in MB to use for caching and 2 is the number of processes.

5. Set it as complete and restart the service:
costales@maps:~$ touch /var/lib/mod_tile/planet-import-complete
costales@maps:~$ sudo /etc/init.d/renderd restart

6. It's done! Check it: http://localhost/osm/slippymap.html
on July 16, 2016 08:39 PM

The Open Source License API

Paul Tagliamonte

Around a year ago, I started hacking together a machine readable version of the OSI approved licenses list, and casually picking parts up until it was ready to launch. A few weeks ago, we officially announced the OSI license API, which is now live at api.opensource.org.

I also took a whack at writing a few API bindings, in Python, Ruby, and using the models from the API implementation itself in Go. In the following few weeks, Clint wrote one in Haskell, Eriol wrote one in Rust, and Oliver wrote one in R.

The data is sourced from a repo on GitHub, the licenses repo under OpenSourceOrg. Pull requests against that repo are wildly encouraged! Ideas for additional data, cleanup, or more hand-collected data would be wonderful!

In the meantime, use-cases for this API range from language package managers pulling OSI approval of a license programmatically, to taking a license identifier as defined in one dataset (SPDX, for example) and using it to find the identifier as it exists in another system (DEP5, Wikipedia, TL;DR Legal).
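As a sketch of what "pulling OSI approval programmatically" can look like from the command line (the endpoint path and JSON field names here are assumptions based on the launch announcement, so verify them against the live API):

```shell
# Fetch a single license record by identifier (here: MIT) and pull
# out its name and the schemes of its cross-referenced identifiers.
# Endpoint path and field names are assumptions -- check the API docs.
curl -s https://api.opensource.org/license/MIT | jq -r '.name'
curl -s https://api.opensource.org/license/MIT | jq -r '.identifiers[].scheme'
```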

Patches are hugely welcome, as are bug reports or ideas! I'd also love more API wrappers for other languages!

on July 16, 2016 07:30 PM

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In June, 158.25 work hours have been dispatched among 11 paid contributors. Their reports are available:

DebConf 16 Presentation

If you want to know more about how the LTS project is organized, you can watch the presentation I gave during DebConf 16 in Cape Town.

Evolution of the situation

The number of sponsored hours increased a little bit, to 135 hours per month, thanks to 3 new sponsors (Laboratoire LEGI – UMR 5519 / CNRS, Quarantainenet BV, GNI MEDIA). Our funding goal is getting closer but it’s not there yet.

The security tracker currently lists 40 packages with a known CVE and the dla-needed.txt file lists 38 packages awaiting an update.

Thanks to our sponsors

New sponsors are in bold.

on July 16, 2016 06:31 AM

July 15, 2016

Lubuntu Bug Day

Lubuntu Blog

The Lubuntu team is running a bug day on Tuesday, July 26, 2016. It’s a spin of a hug day. We will have an Ubuntu On Air session on Monday, July 25, 2016 from 19 to 20 UTC. I will give a presentation for the first half, then for the second half, I will be […]
on July 15, 2016 05:20 AM

Omnibus Grab Bag

Stephen Michael Kellat

Poking My Head Back In

It seems that I occasionally end up mentioning that I am still out there. This time it came about through leaving comments on a merge request. I haven't done such in a while due to the demands of my job. My account hasn't been hijacked on Launchpad and it really is me. After some time to reflect during the day I've come to recognize how jaded and cynical I can get in my day job.

Other Changes

It feels horrible not being an online content creator lately. I'm still a podcast listener and a working list of what I subscribe to is available via gpodder.net. I encourage everyone to subscribe to Cybersauce World News, the Podcast Of Power.

I will be creating some content and presenting it on August 21st. This will be done primarily with F/LOSS tools. On behalf of the domestic missions Field Activity for West Avenue Church of Christ I will be preaching and leading Sunday services at the Music Along the River festival in the Harpersfield Covered Bridge metropark. The event has a Facebook presence. I've submitted a holding title for the Sunday sermon but may still change my mind as to what I'm speaking about. Initial thoughts were to be working from James 5.

Repeating Something From Elsewhere

My job is not good for my health. My last medical appointment was one where the medical provider yelled at me, asking why I was still there. Having to pay the bills, keeping a roof over not just my head but also the heads of my parents, and more were offered as reasons why I'm still there.

We've been trying for games to help keep stress down. One has been to take a somewhat blanked out map of the USA and just put tick marks down each call to track which states are calling. That way you can see if the flavor of your day is Sunny Southwest, New England Chowder, Cajun/Gulf Coast, Cloudy Northwest, or EXTREMELY ANGRY New Yorker. Too many of the EXTREMELY ANGRY New Yorker especially with a helping of snotty Jersey boys leads to having A Bad Hair Day. My call queue keeps coming up Cajun/Gulf Coast.

It could be said that The Process Is The Punishment in dealing with my employer. That brings up Eighth Amendment concerns over cruel & unusual punishment. Then again, so do random thoughts about how unusual flogging might be if you took out the horse whip and substituted in the biggest fish possible and greased it in while lashing at a person in a flogging. That amendment to the constitution prohibits not just cruel punishments but also unusual ones that may shock the conscience but are not necessarily considered "cruel".

To bail me out of my nation's federal exchequer and into something more stable yet sane - such as the aerospace research project looking at the Outernet platform as it moves toward cubesats (even though it ignores cubesat history and has had its aspirations rejected by NASA), while also evaluating what the startup is doing now - funds can be donated here: https://www.generosity.com/education-fundraising/fixing-potholes-of-the-information-superhighway/

The deadline for that is September 30th, the end of the federal government's fiscal year. I may be furloughed at any time before then as it stands now but September 30th is the hard upper limit due to the current lack of approved appropriations legislation. After furlough I am out without a job until an indefinite recall date. Raising the cash lets me quit my current post and conduct research for two years, publish papers, present at conferences, and build new directions.

Upgrade Paths

My laptop is now on Xubuntu 16.04 (64-bit). Assessing the household's machinery shows a surprising number of 32-bit machines. The one currently active is sitting on Lubuntu 14.04. Other machinery is in mothballs but is still usable. I've got some hard decisions to make.

Creative Commons License
Omnibus Grab Bag by Stephen Michael Kellat is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
Based on a work at https://identi.ca/alpacaherder/note/SvpAZ05qTcy4mRdMZcWBaQ.

on July 15, 2016 03:38 AM

Linkdump 28/2016 ...

Dirk Deimeke

Vacation is casting its shadow ahead - still 206 entries on the reading list.

There is something to this one; the network is interchangeable: Facebook ist das neue Fernsehen - und macht uns zu Analphabeten.

The eight typical meeting mistakes have somehow been known for a while: Die acht typischen Meeting-Fehler.

This one really does contain two new tips: So holst Du das Maximum aus Deiner Arbeitszeit.

The Single Piece Of Advice That Changed The Course Of My Career - a good article and very good advice.

Even though I don't keep a handwritten notebook, it is a good ritual to think about what needs to be in place for the next period of time: Migrating Notebooks.

Being tired isn’t a badge of honor - it certainly is not, but I'd like to hear whether I should adjust my life to only sleep and work on weekdays.

A good one - if you own much, you have much to lose: Less is more? Yes, less is more.

Nahaufnahme - Die Limo-Prinzessin: very likeable.

Of course it also always depends on which position you are applying for: So testen Personaler im Vorstellungsgespräch eure emotionale Intelligenz.

letsencrypt.sh - get certificates with a shell script.

on July 15, 2016 03:11 AM

July 14, 2016

The Ubuntu Forums are currently down for maintenance. For the last several days they suffered several outages and slow performance. Canonical sysadmins have been doing basic maintenance, but now the database and the hardware need intensive care. The forums will be down for some time again; please accept our apologies for the inconvenience. Many thanks to fo0bar, who has been with us for over 24 hours now.

Edit : Forums have been back up around 21:00 UTC.

Edit2: The Forums are still down, further news when it becomes available.

on July 14, 2016 08:13 PM

S09E20 – Dad’s Old Bits - Ubuntu Podcast

Ubuntu Podcast from the UK LoCo

It’s Episode Twenty of Season Nine of the Ubuntu Podcast! Alan Pope, Mark Johnson, Laura Cowen and Martin Wimpress are connected and speaking to your brain.

We’re here again!

In this week’s show:

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

on July 14, 2016 02:00 PM

My day of convergence

Michael Hall

I’ve had a Nexus 4 since 2013, and I’ve been using it to test out desktop convergence (where you run a desktop environment from the phone) ever since that feature landed just over a year ago. Usually that meant plugging it into my TV via HDMI to make sure it automatically switched to the larger screen, and playing a bit with the traditional windowed-mode of Unity 8, or checking on adaptive layouts in some of the apps. I’ve also run it for hours on end as a demo at conferences such as SCaLE, FOSSETCON, OSCON and SELF. But through all that, I’ve never used it as an actual replacement for my laptop. Until now.

Thanks Frontier

A bit of back-story first. I had been a Verizon FiOS customer for years, and recently they sold all of their FiOS business to Frontier. The transition has been…..less than ideal. A couple of weeks ago I lost all services (phone, TV and internet) and was eventually told that nobody would be out to fix it until the following day. I still had my laptop, but without internet access I couldn’t really do my job on it. And while Ubuntu on phones can offer up a Hotspot, that particular feature doesn’t work on the Nexus 4 (something something, driver, something). Which meant that the only device that I had which could get online was my phone.

No Minecraft for you

Fortunately, the fact that I’ve been demoing convergence at conferences meant I had all of the equipment I needed to turn my phone into a desktop and keep right on working. I have a Bluetooth mouse and keyboard, and a SlimPort adapter that lets me plug it into a bigger screen. But while a TV works for testing, it’s not really great for long-term work. Don’t get me wrong, working from the couch is nice, but the screen is just too far away for reading and writing. Fortunately for me, and unfortunately for my children, their computer is at a desk and is plugged into a monitor with HDMI ports. So I took it over for the day. They didn’t have internet either that day, so they didn’t miss out on much, right?

A day of observations

Throughout the day I posted a series of comments on Google+ about my experience. You could go through my post history looking for them, but I’m not going to make you do that. So here’s a quick summary of what I learned:

  • 3G is not nearly fast enough for my daily work. It’s good when using my phone as a phone, doing one thing at a time. But it falls short of broadband when I’ve got a lot of things using it. Still, on that day it was better than my fiber optic service, so there’s that.
  • I had more apps installed on my phone than I thought I did. I was actually taken aback when I opened the Dash in desktop mode and I saw so many icons. It’s far more than I had on Android, though not quite as many as on my laptop.
  • Having a fully-functional Terminal is a lifesaver. I do a lot of my work from the terminal, including IRC, and having one with tabs and keyboard shortcuts for them is a must for me to work.
  • I missed having physical buttons on my keyboard for home/end and page up/down. Thankfully a couple of people came to my rescue in the comments and taught me other combinations to get those.
  • Unity 8 is Unity. Almost all of the keyboard shortcuts that have become second nature to me (and there are a lot of them) were there. There was no learning curve; I didn’t have to change how I did anything or teach myself something new.
  • The phone is still a phone. I got a call (from Frontier, reminding me about an appointment that never happened) while using the device as a desktop. It was a bit disorienting at first; I had forgotten that I was running the desktop on the Nexus 4, so when a notification of an incoming call popped up on the screen I didn’t know what was happening. That only lasted a second though, and after clicking answer and picking up the device, I just used it as a phone. Pretty cool.


Must go faster

While I was able to do pretty much all of my work that day thanks to my phone, it wasn’t always easy or fun, and I’m not ready to give up my laptop just yet. The Nexus 4 is simply not powerful enough for the kind of workload I was putting on it. But then again, it’s a nearly 4 year old phone, and wasn’t considered a powerhouse even when it was released. The newest Ubuntu phone on the market, the Meizu Pro 5, packs a whole lot more power, and I think it would be able to give a really nice desktop experience.

on July 14, 2016 01:47 AM

July 11, 2016

Hello, everyone! Two blog posts and a flurry of tweets in a day, what the heck has gotten into me?

Some fun things have happened in the last development cycle leading up to Xenial for nginx! Let’s recap a couple of the big ‘great’ happenings:

  • NGINX 1.9.x was accepted into Xenial during the development process.
  • Later in the dev cycle, we were given the ACK by the Security Team to enable the HTTP/2 module (yay, HTTP/2 support!)
  • Close to the end, that was also updated to 1.10.x post-release to get us onto a Stable version for the duration of the LTS! Yay, an LTS with a Stable version!

All in all, a good dev cycle for getting NGINX into the Ubuntu repositories! Now, we look ahead to the future.
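A quick sanity-check sketch, assuming a Xenial box with the archive's nginx package installed: `nginx -V` prints the version and configure arguments, so the HTTP/2 module should show up among the flags.

```shell
# nginx -V writes version and configure arguments to stderr.
# On Xenial's archive package this should report nginx/1.10.x and
# include --with-http_v2_module among the configure flags.
nginx -V 2>&1 | grep -o 'http_v2_module'
```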

First, a note about Wily. The NGINX PPAs will no longer get any Wily updates, as of today. This close to Wily’s End of Life date (just a couple of weeks away), I can’t guarantee there’ll be any updates beyond security-critical ones.

This means, for the most part, that bugs which are against the Wily package in Ubuntu also get less scrutiny as we focus on the future. Any such Wily-filed bugs will need to be confirmed in another release of an equal or newer version (basically, Xenial or later) before I poke at them or another person pokes at them (this doesn’t prevent the community from submitting patches though). This also means people on Wily boxes who want to get continued NGINX support should upgrade to Xenial because I can’t guarantee they’ll get updates as they wish. And once Wily goes EOL, they get nothing.

Secondly, the road ahead. Up in Debian, they’re starting to test builds against the next OpenSSL version (1.1.0). Unfortunately, NGINX Stable 1.10.x doesn’t build. After poking upstream, I’ve learned there is a fix for this… but for NGINX Mainline… and it won’t be backported to 1.10.x. This is a little bit of a headache, for a couple reasons.

  1. NGINX Stable 1.10.x will, at some point in the future, no longer be supportable in Ubuntu, because it won’t build against the new OpenSSL.
  2. To get NGINX Mainline as the version in Ubuntu, I need to merge in the quite-evil Debian ‘dynamic modules’ support.
  3. Further, to get NGINX Mainline into Ubuntu during a development cycle, I need to go and pull in from Debian Experimental, and then build test against the older OpenSSL to make sure nothing dies off.

The big issue with this is mostly that we don’t know the full timeline for OpenSSL 1.1.0 being released in Debian. I have assurances from the Ubuntu Security Team, however, that OpenSSL 1.1.0 will not be included until packages don’t Fail to Build from Source (FTBFS) against it. Which means that I don’t have to act on this immediately.

The additional headache is that, while I can merge in Dynamic Module Support, it is not 100% ‘supported’ yet in Debian, and it won’t be totally supported in a sane way for packages which ship third-party modules. There have been discussion threads about some third-party modules packaging their modules as dynamic modules for Ubuntu Universe / Debian. This is a double-edged sword: not only do I have to worry about NGINX updates, but I will also have to make sure all the dynamic modules get rebuilt for each upload. I’ll be working to find a better solution, but given the signature-based approach to dynamic modules that exists currently, this will at times hold up other updates. We’ll work through this, though, and make it more supportable in the future.
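For readers who haven't met them: a dynamic module is compiled to a .so file and pulled in at the top of nginx.conf with the load_module directive, which is why every module build is tied to a specific nginx binary. A minimal sketch (the module name and path here are illustrative, not a specific package):

```nginx
# Top of /etc/nginx/nginx.conf -- load_module must appear in the
# main context, before the events/http blocks. The path below is
# illustrative; check where your packaging installs module .so files.
load_module modules/ngx_http_geoip_module.so;
```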


Just wanted to give you all some insights into the future of NGINX, and the headaches I will have to work through, for Ubuntu’s packages going forward.

on July 11, 2016 04:46 PM