June 28, 2016

SELF 2016

Aaron Honeycutt

This post has been sitting in my drafts since the 19th; I just got a bit lazy about finishing it up and posting it. Sorry!

This SELF (SouthEast LinuxFest) was as great as the one before… OK, maybe a little bit better with all the beer sponsored by my favorite VPS provider, Linode, and by Google. I mean A LOT of beer!


There were also a ton of Ubuntu devices at the booth, from gaming to convergence, plus a surprise visit from the UbuntuFL LoCo penguin!


I even found a BQ M10 Ubuntu Tablet out in the wild!

 


We also had awesome booth neighbors: System76 and Linode!

 


I loved this trip, from exploring the city again to making new friends!


 

 

on June 28, 2016 11:31 PM

Are you running i386 (32-bit) Ubuntu? We need your help to decide how much longer to build i386 images of Ubuntu Desktop, Server, and all the flavors.

There is a real cost to supporting i386, and the benefits have fallen as more software goes 64-bit only.

Please fill out the survey here ONLY if you currently run i386 on one of your machines.  64-bit users will NOT be affected by this, even if you run 32-bit applications.
http://goo.gl/forms/UfAHxIitdWEUPl5K2

on June 28, 2016 08:04 PM

The /etc/os-release zoo

Zygmunt Krynicki

If you've ever wanted to do something differently depending on /etc/os-release but weren't in the mood to install every common distribution under the sun, look no further: I give you the /etc/os-release zoo project.

A project like this is never complete so please feel free to contribute additional distribution bits there.
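
If you just want a quick way to branch on that file from a script, here is a minimal sketch in plain Python that parses the simple KEY=value format. ID, ID_LIKE and PRETTY_NAME are standard os-release fields; everything else below is only an illustration and not part of the zoo project itself.

#!/usr/bin/env python3
# Minimal illustration of branching on /etc/os-release; not part of the zoo project.

def read_os_release(path="/etc/os-release"):
    """Parse the simple KEY=value format of os-release into a dict."""
    info = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            info[key] = value.strip().strip('"')
    return info

if __name__ == "__main__":
    osr = read_os_release()
    distro = osr.get("ID", "unknown")
    like = osr.get("ID_LIKE", "")
    if distro == "ubuntu" or "debian" in like.split():
        print("Debian-ish system:", osr.get("PRETTY_NAME", distro))
    else:
        print("Something else:", osr.get("PRETTY_NAME", distro))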
on June 28, 2016 12:16 PM

Juju GUI 2.0

Canonical Design Team

Juju is a cloud orchestration tool which enables users to build web environments to run applications. You can just as easily use it on a simple WordPress blog or on a complex big data platform. Juju is a command line tool but also has a graphical user interface (GUI) where users can choose services from a store, assemble them visually in the GUI, build relations and configure them with the service inspector.

Juju GUI allows users to:

  • Add charms and bundles from the charm store
  • Configure services
  • Deploy applications to a cloud of their choice
  • Manage charm settings
  • Monitor environment health

Over the last year we’ve been working on a redesign of the Juju GUI. This redesign project focused on improving four key areas, which also acted as our guiding design principles.

1. Improve the functionality of the core features of the GUI

  • Organised similar areas of the core navigation to create a better UI model.
  • Reduced the visual noise of the canvas and the inspector to help users navigate complex environments.
  • Introduced a better flow between the store and the canvas to aid adding services without losing context.

[Before/after screenshots: Empty state of the canvas]

 

[Before/after screenshots: Integrated store]

 

[Before/after screenshots: Apache charm details]

 

2. Reduce cognitive load and pace the user

  • Reduced the number of interaction patterns to minimise the amount of visual translation.
  • Added animation to core features to inform users of the navigation model in an effort to build a stronger concept of home.
  • Created a symbiotic relationship between the canvas and the inspector to help navigation of complex environments.

[Before/after screenshots: Mediawiki deployment]

 

3. Provide an at-a-glance understanding of environment health

  • Prioritised the hierarchy of status so users are always aware of the most pressing issues and can discern which part of the application is affected.
  • Made it easier to navigate to units with a negative status, to aid the user in triaging issues.
  • Used the same visual patterns throughout the web app so users can spot problematic issues.

[Before/after screenshots: Mediawiki deployment with errors]

 

4. Surface functions and facilitate task-driven navigation

  • Established a new hierarchy based on key tasks to create a more familiar navigation model.
  • Redesigned the inspector from the ground up to increase discoverability of inspector led functions.
  • Simplified the visual language and interaction patterns to help users navigate at-a-glance and with speed to triage errors, configure or scale out.
  • Surfaced relevant actions at the right time to avoid cluttering the UI.

[Before/after screenshots: Inspector home view]

 

[Before/after screenshots: Inspector errors view]

 

[Before/after screenshots: Inspector config view]

 

The project has been amazing; we're really happy to see it launched and are already planning the next updates.



on June 28, 2016 10:39 AM

Design in the open

Canonical Design Team

As the Juju design team grew, it was important to review our working process and see if we could improve it to create a more agile working environment. The majority of employees at Canonical work distributed around the globe; for instance, the Juju UI engineering team has employees from Tasmania to San Francisco. We also work on a product which is extremely technical, and feedback is crucial to our velocity.

We identified the following aspects of our process which we wanted to improve:

  • We used different digital locations for storing our design outcomes and assets (Google Drive, Google Sites and Dropbox).
  • The entire company used Google Drive so it was ideal for access, but its lacklustre performance, complex sharing options and poor image viewer meant it wasn’t good for designs.
  • We used Dropbox to store iterations and final designs but it was hard to maintain developer access for sharing and reference.
  • Conversations and feedback on designs in the design team and with developers happened in email or over IRC, which often didn’t include all interested parties.
  • We would often get feedback from teams after sign-off, which would cause delays.
  • Decisions weren’t documented so it was difficult to remember why a change had been made.

Finding the right tool

I’ve always been interested in the concept of designing in the open. Benefits of the practice include being more transparent, faster and more efficient. They also give the design team more presence and visibility across the organisation. Kasia (Juju’s project manager) and I went back and forth on which products to use and eventually settled on GitHub (GH).

The Juju design team works in two week iterations and at the beginning of a new iteration we decided to set up a GH repo and trial the new process. We outlined the following rules to help us start:

  • Issues should be created for each project.
  • All designs/ideas/wireframes should be added inline to the issues.
  • All conversations should be held within GH, no more email or IRC conversations, and notes from any meetings should be added to relevant issues to create a paper trail.

Reaction

As the iteration went on, feedback started rolling in from the engineering team without us requesting it. A few developers mentioned how cool it was to see how the design process unfolded. We also saw a lot of improvement in the Juju design team: it allowed us to collaborate more easily and it was much easier to keep track of what was happening.

At the end of the trial iteration, during our clinic day, we closed completed issues and uploaded the final assets to the “code” section of the repo, creating a single place for our files.

After the first successful iteration we decided to make this a permanent part of our process. The benefits of moving to GH are:

  • Most employees of Canonical have a GH account and can see our work and provide feedback without needing to adopt a new tool.
  • Project management and key stakeholders are able to see what we’re doing, how we collaborate, why a decision has been made and the history of a project.
  • Provides us with a single source for all conversations which can happen around the latest iteration of a project.
  • One place where anyone can view and download the latest designs.
  • A single place for people to request work.

Conclusion

As a result of this change our designs are more accessible, which allows developers and stakeholders to comment and collaborate with the design team, aiding our agile process. Below is an example thread where you can see how GH is used in the process. It shows how we designed the new contextual service block actions.

[Screenshot: GitHub conversation thread about the new navigation]

on June 28, 2016 09:52 AM

New Ubuntu SDK Beta Version

Ubuntu App Developer Blog

A few days ago we released the first Beta of the Ubuntu SDK IDE using the LXD container solution to build and execute applications.

The first reports were positive; however, one big problem was discovered pretty quickly:

Applications would not start on machines using the proprietary Nvidia drivers. The reason is that indirect GLX is not allowed by default with those drivers. The applications need access to:

  1. The GLX libraries for the currently used driver
  2. The DRI and Nvidia device files

Luckily the snappy team had already tackled a similar problem, so thanks to Michael Vogt (a.k.a. mvo) we had a first idea of how to solve it: reuse the Nvidia binaries and device files from the host by mounting them into the container.

However, it is a bit more complicated in our case, because once the devices and directories are mounted into the containers they stay there permanently. This is a problem because the Nvidia binary directory carries a version number, e.g. /usr/lib/nvidia-315, which changes with the currently loaded module. The container would then either fail to boot after the driver was changed and the old directory on the host is gone, or use the wrong Nvidia directory if the old one was not removed from the host.

The situation gets worse with Optimus graphics cards, where the user can switch between an integrated and a dedicated graphics chip, which means device files in /dev can come and go between reboots.

Our solution is to check the integrity of the containers on every start of the Ubuntu SDK IDE; if problems are detected, the user is informed and asked for the root password to run automatic fixes. Those checks and fixes are implemented in the “usdk-target” tool and can be used from the CLI as well.
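
To make the versioning problem concrete, here is a rough Python sketch of the kind of consistency check involved: comparing the versioned Nvidia library directories on the host with what a container sees. This is only an illustration under assumed paths (the container rootfs location below is made up); the real checks and fixes live in the usdk-target tool, not in this snippet.

#!/usr/bin/env python3
# Illustrative sketch only; this is NOT the actual usdk-target implementation.
import glob
import os

# Hypothetical container rootfs path, purely for illustration.
CONTAINER_ROOT = "/var/lib/lxd/containers/ubuntu-sdk-16.04/rootfs"

def nvidia_dirs(root):
    """List the versioned Nvidia library directories (e.g. nvidia-315) under a root."""
    pattern = os.path.join(root, "usr/lib/nvidia-[0-9]*")
    return sorted(os.path.basename(p) for p in glob.glob(pattern))

host = nvidia_dirs("/")
container = nvidia_dirs(CONTAINER_ROOT)

if host != container:
    print("Nvidia driver dirs differ - host: %s, container: %s"
          % (host or "none", container or "none"))
    print("The container mounts look stale and would need fixing.")
else:
    print("Nvidia driver dirs match; nothing to do.")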

As a bonus this work will enable direct rendering for other graphics chips as well; however, since we do not have access to all possible chips, there might still be special cases that we could not catch.

So please report all problems to us on one of those channels:

We have released the new tool into the Tools-development PPA where the first beta was released too. However, existing containers might not be completely fixed automatically; they are better recreated or fixed manually. To manually fix an existing container, use the maintain mode from the options menu and add the current user to the “video” group.

To get the new version of the IDE please update the installed Ubuntu SDK IDE package:

$ sudo apt-get update && sudo apt-get install ubuntu-sdk-ide ubuntu-sdk-tools

on June 28, 2016 05:53 AM

Welcome to the Ubuntu Weekly Newsletter. This is issue #471 for the week June 20 – 26, 2016, and the full version is available here.

In this issue we cover:

This issue of the Ubuntu Weekly Newsletter is brought to you by:

  • Elizabeth K. Joseph
  • Chris Guiver
  • Chris Sirrs
  • Leonard Viator
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License.

on June 28, 2016 02:19 AM

June 27, 2016

Today I am going to be discussing parts. This is one of the pillars of snapcraft (together with plugins and the lifecycle).

For those not familiar, snapcraft's general purpose landing page is http://snapcraft.io/ but if you are a developer and have already been introduced to this new world of snaps, you probably want to just hop over to http://snapcraft.io/create/

If you go over the snapcraft tour you will notice the many uses of parts and may start to wonder how to get started, or think that maybe you are duplicating work already done by others or, even better, by an upstream. This is where the idea of sharing parts comes in, and that is exactly what we are going to go over in this post.

To be able to reproduce what follows, you’d need to have snapcraft 2.12 installed.

An overview to using remote parts

So imagine I am someone wanting to use libcurl. Normally I would write the part definition from scratch and be on with my own business, but surely I might be missing out on the optimal switches used to configure or build the package. I would also need to research how to use the specific plugin required. So instead, I'll see if someone has already done the work for me, hence I will,

$ snapcraft update
Updating parts list... |
$ snapcraft search curl
PART NAME  DESCRIPTION
curl       A tool and a library (usable from many languages) for client side URL tra...

Great, there’s a match, but is this what I want?

$ snapcraft define curl
Maintainer: 'Sergio Schvezov <sergio.schvezov@ubuntu.com>'
Description: 'A tool and a library (usable from many languages) for client side URL transfers, supporting FTP, FTPS, HTTP, HTTPS, TELNET, DICT, FILE and LDAP.'

curl:
  configflags:
  - --enable-static
  - --enable-shared
  - --disable-manual
  plugin: autotools
  snap:
  - -bin
  - -lib/*.a
  - -lib/pkgconfig
  - -lib/*.la
  - -include
  - -share
  source: http://curl.haxx.se/download/curl-7.44.0.tar.bz2
  source-type: tar

Yup, it’s what I want.

An example

There are a few ways to use these parts in your snapcraft.yaml. Say this is your parts section:

parts:
    client:
       plugin: autotools
       source: .

My client part, which uses sources that sit alongside this snapcraft.yaml, will hypothetically fail to build as it depends on the curl library I don't yet have. There are some options here to get this going: using after in the part definition implicitly, composing, and last but not least just copy-pasting what snapcraft define curl returned for the part.

Implicitly

The implicit path is really straightforward. It only involves making the part look like:

parts:
    client:
       plugin: autotools
       source: .
       after: [curl]

This will use the cached definition of the part and may potentially be updated by running snapcraft update.

Composing

What if we like the part, but want to try out a new configure flag or source release? Well we can override pieces of the part; so for the case of wanting to change the source:

parts:
    client:
        plugin: autotools
        source: .
        after: [curl]
    curl:
        source: http://curl.haxx.se/download/curl-7.45.0.tar.bz2

And we will get to build curl, but using a newer version of it. The trick is that the part definition here is missing the plugin entry, which instructs snapcraft to fetch the rest of the part definition from the cache.

Copy/Pasting

This is the path one would take to have full control over the part. It is as simple as copying the part definition we got from running snapcraft define curl into your own snapcraft.yaml. For the sake of completeness, here's how it would look:

parts:
    client:
        plugin: autotools
        source: .
        after: [curl]
    curl:
        configflags:
            - --enable-static
            - --enable-shared
            - --disable-manual
        plugin: autotools
        snap:
            - -bin
            - -lib/*.a
            - -lib/pkgconfig
            - -lib/*.la
            - -include
            - -share
        source: http://curl.haxx.se/download/curl-7.44.0.tar.bz2
        source-type: tar

Sharing your part

Now what if you have a part and want to share it with the rest of the world? It is rather simple really: just head over to https://wiki.ubuntu.com/snapcraft/parts and add it.

In the case of curl, I would write a yaml document that looks like:

origin: https://github.com/sergiusens/curl.git
maintainer: Sergio Schvezov <sergio.schvezov@ubuntu.com>
description:
  A tool and a library (usable from many languages) for
  client side URL transfers, supporting FTP, FTPS, HTTP,
  HTTPS, TELNET, DICT, FILE and LDAP.
project-part: curl

What does this mean? Well, the part itself is not defined on the wiki, just a pointer to it with some metadata; the part is really defined inside a snapcraft.yaml living in the origin we just told it to use.

The full set of keywords is explained in the documentation (that is an upstream link to it).

The core idea is that a maintainer decides they want to share a part. Such a maintainer would add a description that provides an idea of what that part (or collection of parts) does. Then, last but not least, the maintainer declares which parts to expose to the world, since maybe not all of them should be. The main part is exposed in project-part and will carry a top-level name; the maintainer can expose more parts from snapcraft.yaml using the general parts keyword, and these parts will be namespaced with the project-part.

on June 27, 2016 03:57 PM

 

Island of Ventotene – Roman harbour

There once was a Kingdom, strongly United, built on the honours of the people of Wessex, Mercia, Northumbria and East Anglia, who knew how to deal with the invasions of the Vikings from the east and of the Normans from the south, and came to unify the territory under an umbrella of common intents. Today, 48% of them, while keeping solid traditions, still know how to look forward to the future, joining horizons and commercial developments along with the rest of Europe. The remaining 52%, however, look back and cannot see anything in front of them except a desire for isolation, breaking the European dream born on the shores of Ventotene island in 1944 by Altiero Spinelli, Ernesto Rossi and Ursula Hirschmann through the “Manifesto for a free and united Europe”. An incurable fracture in the country was born with the referendum of 23 June, in which just over half of the population asked to end its marriage to the great European family, setting the UK back by 43 years of history.

<Read More…[by Fabio Marzocca]>

on June 27, 2016 07:54 AM

Hello, Sense!

Paul Tagliamonte

A while back, I saw a Kickstarter for one of the best designed and prettiest sleep trackers on the market. I fell in love with it, and it has stuck with me since.

A few months ago, I finally got my hands on one and started to track my data. Naturally, I now want to store this new data with the rest of the data I have on myself in my own databases.

I went in search of an API, but I found that the Sense API hasn't been published yet, and is being worked on by the team. Here's hoping it'll land soon!

After some subdomain guessing, I hit on api.hello.is. So, naturally, I went to take a quick look at their Android app and network traffic, and lo and behold, there was a pretty nicely designed API.

This API is clearly an internal API, and as such, it's something that should not be considered stable. However, I'm OK with a fragile API, so I've published a quick and dirty API wrapper for the Sense API to my GitHub.

I've published it because I've found it useful, but I can't promise the world (since I'm not a member of the Sense team at Hello!), so here are a few ground rules for this wrapper:

  • I make no claims about its stability or completeness.
  • I have no documentation or assurances.
  • I will not provide the client secret and ID. You'll have to find them on your own.
  • This may stop working without any notice, and there may even be really nasty bugs that result in your alarm going off at 4 AM.
  • Send PRs! This is a side-project for me.

This module is currently Python 3 only. If someone really needs Python 2 support, I'm open to minimally invasive patches to the codebase using six to support Python 2.7.

Working with the API:

First, let's go ahead and log in using python -m sense.

$ python -m sense
Sense OAuth Client ID: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
Sense OAuth Client Secret: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
Sense email: paultag@gmail.com
Sense password: 
Attempting to log into Sense's API
Success!
Attempting to query the Sense API
The humidity is **just right**.
The air quality is **just right**.
The light level is **just right**.
It's **pretty hot** in here.
The noise level is **just right**.
Success!

Now, let's see if we can pull up information on my Sense:

>>> from sense import Sense
>>> sense = Sense()
>>> sense.devices()
{'senses': [{'id': 'xxxxxxxxxxxxxxxx', 'firmware_version': '11a1', 'last_updated': 1466991060000, 'state': 'NORMAL', 'wifi_info': {'rssi': 0, 'ssid': 'Pretty Fly for a WiFi (2.4 GhZ)', 'condition': 'GOOD', 'last_updated': 1462927722000}, 'color': 'BLACK'}], 'pills': [{'id': 'xxxxxxxxxxxxxxxx', 'firmware_version': '2', 'last_updated': 1466990339000, 'battery_level': 87, 'color': 'BLUE', 'state': 'NORMAL'}]}

Neat! Pretty cool. Look, you can even see my WiFi AP! Let's try some more and pull some trends out.

>>> values = [x.get("value") for x in sense.room_sensors()["humidity"]][:10]
>>> min(values)
45.73904
>>> max(values)
45.985928
>>> 

I plan to keep maintaining it as long as it's needed, so I welcome co-maintainers, and I'd love to see what people build with it! So far, I'm using it to dump my room data into InfluxDB, pulling information on my room into Grafana. Hopefully more to come!
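
For the curious, a rough sketch of that InfluxDB dump could look something like the following. It assumes the wrapper above (Sense and its room_sensors() call) plus the influxdb Python client; the database and measurement names are made up for illustration and error handling is omitted.

#!/usr/bin/env python3
# Rough sketch: push the latest Sense room readings into InfluxDB.
from influxdb import InfluxDBClient

from sense import Sense

sense = Sense()  # assumes credentials are already set up, as shown above
client = InfluxDBClient(host="localhost", port=8086, database="sense")  # hypothetical database

points = []
for sensor, readings in sense.room_sensors().items():  # e.g. "humidity", "temperature"
    for reading in readings:
        value = reading.get("value")
        if value is None:
            continue
        points.append({
            "measurement": "room",               # hypothetical measurement name
            "tags": {"sensor": sensor},
            "fields": {"value": float(value)},
        })

client.write_points(points)
print("Wrote %d points" % len(points))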

Happy hacking!

on June 27, 2016 01:42 AM

SNAPs are the cross-distro, cross-cloud, cross-device Linux packaging format of the future.  And we're already hosting a fantastic catalog of SNAPs in the SNAP store provided by Canonical.  Developers are welcome to publish their software for distribution across hundreds of millions of Ubuntu servers, desktops, and devices.

Several people have asked the inevitable open source software question, "SNAPs are awesome, but how can I stand up my own SNAP store?!?"

The answer is really quite simple...  SNAP stores are really just HTTP web servers!  Of course, you can get fancy with branding, and authentication, and certificates.  But if you just want to host SNAPs and enable downstream users to fetch and install software, well, it's pretty trivial.
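
As a toy illustration of the "it's just a web server" point, the snippet below serves a local directory of .snap files over plain HTTP on port 5000. The directory and port here are assumptions for illustration only; a usable store, like the proof-of-concept mentioned next, also implements the store API endpoints that snapd expects, so treat this purely as a sketch.

#!/usr/bin/env python3
# Toy sketch only: serves a directory of .snap files over plain HTTP, nothing more.
import http.server
import os
import socketserver

SNAP_DIR = os.path.expanduser("~/snaps")  # hypothetical directory holding .snap files
PORT = 5000

os.chdir(SNAP_DIR)
httpd = socketserver.TCPServer(("", PORT), http.server.SimpleHTTPRequestHandler)
print("Serving %s on port %d" % (SNAP_DIR, PORT))
httpd.serve_forever()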

In fact, Bret Barker has published an open source (Apache License) SNAP store on GitHub.  We're already looking at how to flesh out his proof-of-concept and bring it into snapcore itself.

Here's a little HOWTO install and use it.

First, I launched an instance in AWS.  Of course I could have launched an Ubuntu 16.04 LTS instance, but actually, I launched a Fedora 24 instance!  In fact, you could run your SNAP store on any OS that currently supports SNAPs, really, or even just fork this GitHub repo and install it standalone.  See snapcraft.io.



Now, let's find and install a snapstore SNAP.  (Note that in this AWS instance of Fedora 24, I also had to 'sudo yum install squashfs-tools kernel-modules'.)


At this point, you're running a SNAP store (webserver) on port 5000.


Now, let's reconfigure snapd to talk to our own SNAP store, and search for a SNAP.


Finally, let's install and inspect that SNAP.


How about that?  Easy enough!

Cheers,
Dustin
on June 27, 2016 01:09 AM

June 26, 2016

You can have LXD containers on your home computer, and you can also have them on your Virtual Private Server (VPS). If you have any further questions on LXD, see https://www.stgraber.org/2016/03/11/lxd-2-0-blog-post-series-012/

Here we see how to configure LXD on a VPS at DigitalOcean (yeah, referral). We go cheap and select the 512MB RAM and 20GB disk VPS for $5/month. Containers are quite lightweight, so it's interesting to see how many we can squeeze in. We are going to use ZFS for the storage of the containers, stored in a file rather than a block device. Here is what we are doing today:

  1. Set up LXD on a 512MB RAM/20GB diskspace VPS
  2. Create a container with a web server
  3. Expose the container service to the Internet
  4. Visit the webserver from our browser

Set up LXD on DigitalOcean

do-create-droplet

When creating the VPS, it is important to change these two options; we need 16.04 (the default is 14.04) so that it has ZFS pre-installed as a kernel module, and we try out the cheapest VPS offering with 512MB RAM.

Once we create the VPS, we connect with

$ ssh root@128.199.41.205    # change with the IP address you get from the DigitalOcean panel
The authenticity of host '128.199.41.205 (128.199.41.205)' can't be established.
ECDSA key fingerprint is SHA256:7I094lF8aeLFQ4WPLr/iIX4bMs91jNiKhlIJw3wuMd4.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '128.199.41.205' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 16.04 LTS (GNU/Linux 4.4.0-24-generic x86_64)

* Documentation: https://help.ubuntu.com/

0 packages can be updated.
0 updates are security updates.

root@ubuntu-512mb-ams3-01:~# apt update
Get:1 http://security.ubuntu.com/ubuntu xenial-security InRelease [94.5 kB]
Hit:2 http://ams2.mirrors.digitalocean.com/ubuntu xenial InRelease 
Get:3 http://security.ubuntu.com/ubuntu xenial-security/main Sources [24.9 kB]
...
Fetched 10.2 MB in 4s (2,492 kB/s)
Reading package lists... Done
Building dependency tree 
Reading state information... Done
13 packages can be upgraded. Run 'apt list --upgradable' to see them.
root@ubuntu-512mb-ams3-01:~# apt upgrade
Reading package lists... Done
Building dependency tree 
Reading state information... Done
Calculating upgrade... Done
The following packages will be upgraded:
 dnsmasq-base initramfs-tools initramfs-tools-bin initramfs-tools-core
 libexpat1 libglib2.0-0 libglib2.0-data lshw python3-software-properties
 shared-mime-info snapd software-properties-common wget
13 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Need to get 6,979 kB of archives.
After this operation, 78.8 kB of additional disk space will be used.
Do you want to continue? [Y/n] Y
...
Processing triggers for initramfs-tools (0.122ubuntu8.1) ...
update-initramfs: Generating /boot/initrd.img-4.4.0-24-generic
W: mdadm: /etc/mdadm/mdadm.conf defines no arrays.
Processing triggers for libc-bin (2.23-0ubuntu3) ...

We update the package list and then upgrade any packages that need upgrading.

root@ubuntu-512mb-ams3-01:~# apt policy lxd
lxd:
 Installed: 2.0.2-0ubuntu1~16.04.1
 Candidate: 2.0.2-0ubuntu1~16.04.1
 Version table:
 *** 2.0.2-0ubuntu1~16.04.1 500
 500 http://mirrors.digitalocean.com/ubuntu xenial-updates/main amd64 Packages
 500 http://security.ubuntu.com/ubuntu xenial-security/main amd64 Packages
 100 /var/lib/dpkg/status
 2.0.0-0ubuntu4 500
 500 http://mirrors.digitalocean.com/ubuntu xenial/main amd64 Packages

The lxd package is already installed, all the better. Nice touch 🙂

root@ubuntu-512mb-ams3-01:~# apt install zfsutils-linux
Reading package lists... Done
Building dependency tree 
Reading state information... Done
The following additional packages will be installed:
 libnvpair1linux libuutil1linux libzfs2linux libzpool2linux zfs-doc zfs-zed
Suggested packages:
 default-mta | mail-transport-agent samba-common-bin nfs-kernel-server
 zfs-initramfs
The following NEW packages will be installed:
 libnvpair1linux libuutil1linux libzfs2linux libzpool2linux zfs-doc zfs-zed
 zfsutils-linux
0 upgraded, 7 newly installed, 0 to remove and 0 not upgraded.
Need to get 881 kB of archives.
After this operation, 2,820 kB of additional disk space will be used.
Do you want to continue? [Y/n] Y
...
zed.service is a disabled or a static unit, not starting it.
Processing triggers for libc-bin (2.23-0ubuntu3) ...
Processing triggers for systemd (229-4ubuntu6) ...
Processing triggers for ureadahead (0.100.0-19) ...
root@ubuntu-512mb-ams3-01:~# _

We installed zfsutils-linux in order to be able to use ZFS as storage for our containers. In this tutorial we are going to use a file as storage (still a ZFS filesystem) instead of a block device. If you subscribe to the DO Beta for block storage volumes, you can get a proper block device for the storage of the containers. It is currently free for beta members, and available only in the NYC1 datacenter.

root@ubuntu-512mb-ams3-01:~# df -h /
Filesystem Size Used Avail Use% Mounted on
/dev/vda1  20G  1.1G 18G     6% /
root@ubuntu-512mb-ams3-01:~# _

We have 18GB of free disk space, so let's allocate 15GB for LXD.

root@ubuntu-512mb-ams3-01:~# lxd init
Name of the storage backend to use (dir or zfs): zfs
Create a new ZFS pool (yes/no)? yes
Name of the new ZFS pool: lxd-pool
Would you like to use an existing block device (yes/no)? no
Size in GB of the new loop device (1GB minimum): 15
Would you like LXD to be available over the network (yes/no)? no
Do you want to configure the LXD bridge (yes/no)? yes
we accept the default settings for the bridge configuration
Warning: Stopping lxd.service, but it can still be activated by:
 lxd.socket
LXD has been successfully configured.
root@ubuntu-512mb-ams3-01:~# _

What we did,

  • we initialized LXD with the ZFS storage backend,
  • we created a new pool and gave a name (here, lxd-pool),
  • we do not have a block device, so we get a (sparse) image file that contains the ZFS filesystem
  • we do not want now to make LXD available over the network
  • we want to configure the LXD bridge for the inter-networking of the containers

Let’s create a new user and add them to the lxd group,

root@ubuntu-512mb-ams3-01:~# adduser ubuntu
Adding user `ubuntu' ...
Adding new group `ubuntu' (1000) ...
Adding new user `ubuntu' (1000) with group `ubuntu' ...
Creating home directory `/home/ubuntu' ...
Copying files from `/etc/skel' ...
Enter new UNIX password: ********
Retype new UNIX password: ********
passwd: password updated successfully
Changing the user information for ubuntu
Enter the new value, or press ENTER for the default
 Full Name []: <ENTER>
 Room Number []: <ENTER>
 Work Phone []: <ENTER>
 Home Phone []: <ENTER>
 Other []: <ENTER>
Is the information correct? [Y/n] Y
root@ubuntu-512mb-ams3-01:~# _

The username is ubuntu. Make sure you set a good password, since we are not covering best security practices in this tutorial. Many people run scripts against these VPSs that try common usernames and passwords. When you create a VPS, it is nice to have a look at /var/log/auth.log for those failed attempts to get into your VPS. Here are a few lines from this VPS,

Jun 26 18:36:15 digitalocean sshd[16318]: Failed password for root from 121.18.238.29 port 45863 ssh2
Jun 26 18:36:15 digitalocean sshd[16320]: Connection closed by 123.59.134.76 port 49378 [preauth]
Jun 26 18:36:17 digitalocean sshd[16318]: Failed password for root from 121.18.238.29 port 45863 ssh2
Jun 26 18:36:20 digitalocean sshd[16318]: Failed password for root from 121.18.238.29 port 45863 ssh2

We add the ubuntu user into the lxd group in order to be able to run commands as a non-root user.

root@ubuntu-512mb-ams3-01:~# adduser ubuntu lxd
Adding user `ubuntu' to group `lxd' ...
Adding user ubuntu to group lxd
Done.
root@ubuntu-512mb-ams3-01:~# _

We are now good to go. Log in as user ubuntu and run an LXD command to list images.

do-lxc-list

Create a Web server in a container

We launch (init and start) a container named c1.

do-lxd-launch

The ubuntu:x in the screenshot is an alias for Ubuntu 16.04 (Xenial), that resides in the ubuntu: repository of images. You can find other distributions in the images: repository.

As soon as the launch action completed, I ran the list action. Then, after a few seconds, I ran it again. You can notice that it took a few seconds before the container actually booted and got an IP address.

Let’s enter the container by executing a shell. We update and then upgrade the container.

ubuntu@ubuntu-512mb-ams3-01:~$ lxc exec c1 -- /bin/bash
root@c1:~# apt update
Get:1 http://security.ubuntu.com/ubuntu xenial-security InRelease [94.5 kB]
Hit:2 http://archive.ubuntu.com/ubuntu xenial InRelease
Get:3 http://archive.ubuntu.com/ubuntu xenial-updates InRelease [94.5 kB]
...
Fetched 9819 kB in 2s (3645 kB/s) 
Reading package lists... Done
Building dependency tree 
Reading state information... Done
13 packages can be upgraded. Run 'apt list --upgradable' to see them.
root@c1:~# apt upgrade
Reading package lists... Done
Building dependency tree 
Reading state information... Done
Calculating upgrade... Done
The following packages will be upgraded:
 dnsmasq-base initramfs-tools initramfs-tools-bin initramfs-tools-core libexpat1 libglib2.0-0 libglib2.0-data lshw python3-software-properties shared-mime-info snapd
 software-properties-common wget
13 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Need to get 6979 kB of archives.
After this operation, 3339 kB of additional disk space will be used.
Do you want to continue? [Y/n] Y
Get:1 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 initramfs-tools all 0.122ubuntu8.1 [8602 B]
...
Processing triggers for initramfs-tools (0.122ubuntu8.1) ...
Processing triggers for libc-bin (2.23-0ubuntu3) ...
root@c1:~#

Let’s install nginx, our Web server.

root@c1:~# apt install nginx
Reading package lists... Done
Building dependency tree 
Reading state information... Done
The following additional packages will be installed:
 fontconfig-config fonts-dejavu-core libfontconfig1 libfreetype6 libgd3 libjbig0 libjpeg-turbo8 libjpeg8 libtiff5 libvpx3 libxpm4 libxslt1.1 nginx-common nginx-core
Suggested packages:
 libgd-tools fcgiwrap nginx-doc ssl-cert
The following NEW packages will be installed:
 fontconfig-config fonts-dejavu-core libfontconfig1 libfreetype6 libgd3 libjbig0 libjpeg-turbo8 libjpeg8 libtiff5 libvpx3 libxpm4 libxslt1.1 nginx nginx-common nginx-core
0 upgraded, 15 newly installed, 0 to remove and 0 not upgraded.
Need to get 3309 kB of archives.
After this operation, 10.7 MB of additional disk space will be used.
Do you want to continue? [Y/n] Y
Get:1 http://archive.ubuntu.com/ubuntu xenial/main amd64 libjpeg-turbo8 amd64 1.4.2-0ubuntu3 [111 kB]
...
Processing triggers for ufw (0.35-0ubuntu2) ...
root@c1:~#

Is the Web server running? Let’s check with the ss command (preinstalled, from package iproute2)

root@c1:~# ss -tula 
Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port 
udp UNCONN 0 0 *:bootpc *:* 
tcp LISTEN 0 128 *:http *:* 
tcp LISTEN 0 128 *:ssh *:* 
tcp LISTEN 0 128 :::http :::* 
tcp LISTEN 0 128 :::ssh :::*
root@c1:~#

The parameters mean

  • -t: Show only TCP sockets
  • -u: Show only UDP sockets
  • -l: Show listening sockets
  • -a: Show all sockets (makes no difference because of the previous options; it just makes an easier word to remember, tula)

Of course, there is also lsof with the parameter -i (IPv4/IPv6).

root@c1:~# lsof -i
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
dhclient 240 root 6u IPv4 45606 0t0 UDP *:bootpc 
sshd 306 root 3u IPv4 47073 0t0 TCP *:ssh (LISTEN)
sshd 306 root 4u IPv6 47081 0t0 TCP *:ssh (LISTEN)
nginx 2034 root 6u IPv4 51636 0t0 TCP *:http (LISTEN)
nginx 2034 root 7u IPv6 51637 0t0 TCP *:http (LISTEN)
nginx 2035 www-data 6u IPv4 51636 0t0 TCP *:http (LISTEN)
nginx 2035 www-data 7u IPv6 51637 0t0 TCP *:http (LISTEN)
root@c1:~#

From both commands we verify that the Web server is indeed running inside the container, along with an SSH server.

Let’s change the default Web page a bit,

root@c1:~# nano /var/www/html/index.nginx-debian.html

do-lxd-nginx-page

Expose the container service to the Internet

Now, if we try to visit the public IP of our VPS at http://128.199.41.205/ we obviously notice that there is no Web server there. We need to expose the container to the world, since the container only has a private IP address.

The following iptables line exposes the container service at port 80. Note that we run this as root on the VPS (root@ubuntu-512mb-ams3-01:~#), NOT inside the container (root@c1:~#).

iptables -t nat -I PREROUTING -i eth0 -p TCP -d 128.199.41.205/32 --dport 80 -j DNAT --to-destination 10.160.152.184:80

Adapt the public IP of your VPS and the private IP of your container (10.x.x.x) accordingly. Since we have a web server, this is port 80.

We have not made this firewall rule persistent as it is outside of our scope; see iptables-persistent on how to make it persistent.

Visit our Web server

Here is the URL, http://128.199.41.205/ so let’s visit it.

do-lxd-welcome-nginx

That’s it! We created an LXD container with the nginx Web server, then exposed it to the Internet.

 

on June 26, 2016 11:57 PM

June 25, 2016

Post-Brexit - The What Now?

Dimitri John Ledkov

Out of an electorate of 46,500,001, 17,410,742 voted to leave, which is a mere 37.4%, or just over a third [source]. In my books this is not a clear expression of the UK's wishes.

The reaction that the results have caused is devastating. The Scottish First Minister has announced plans for a second Scottish independence referendum [source]. Londoners are filing petitions calling for an independent London [source, source]. The Prime Minister announced his resignation [source]. Things are not stable.

I do not believe that a supermajority of the electorate is in favor of leaving the EU. I don't even believe that those who voted to leave have considered the break-up of the UK as the inevitable outcome of the leave vote. There are numerous videos on the internet about that, impossible to quantify or reliably cite, but for example this [source].

So What Now?

P R O T E S T

I urge everyone to start protesting the outcome of the mistake that happened last Thursday. The 4th of July is a good symbolic date to show your discontent with the UK government and the tiny minority who are about to cause the country to fall apart with no other benefits. Please stand up and make yourself heard.
  • General Strikes 4th & 5th of July
There are 64,100,000 people living in the UK according to the World Bank; maybe the government should fear and listen to the unheard third. The current "majority" parliament was only elected by 24% of the electorate.

It is time for people to actually take control: we can fix our parliament, we can stop austerity, we can prevent the break-up of the UK, and we can stay in the EU. Over to you.

ps. How to elect next PM?

Electing the next PM will be done within the Conservative Party, and that's kind of a bummer, given the desperate state the country is currently in. It is not hard to predict that Boris Johnson is a front-runner. If you wish to elect a different PM, I urge you to splash out 25 quid and register as a member of the Conservative Party for just one year =) This way you will get a chance to directly elect the new Leader of the Conservative Party and thus the new Prime Minister. You can backdoor the Conservative election here.
on June 25, 2016 07:24 PM

This post is about containers, a construct similar to virtual machines (VM) but so much lightweight that you can easily create a dozen on your desktop Ubuntu!

A VM virtualizes a whole computer, and you then install the guest operating system in it. In contrast, a container reuses the host Linux kernel and contains just the root filesystem (aka runtimes) of our choice. The Linux kernel has several features that rigidly separate the running Linux container from our host computer (i.e. our desktop Ubuntu).

By themselves, Linux containers would need some manual work to manage them directly. Fortunately, there is LXD (pronounced Lex-deeh), a service that manages Linux containers for us.

We will see how to

  1. setup our Ubuntu desktop for containers,
  2. create a container,
  3. install a Web server,
  4. test it a bit, and
  5. clear everything up.

Set up your Ubuntu for containers

If you have Ubuntu 16.04, then you are ready to go. Just install a couple of extra packages that we see below. If you have Ubuntu 14.04.x or Ubuntu 15.10, see LXD 2.0: Installing and configuring LXD [2/12] for some extra steps, then come back.

Make sure the package list is up-to-date:

sudo apt update
sudo apt upgrade

Install the lxd package:

sudo apt install lxd

If you have Ubuntu 16.04, you can enable the feature to store your container files in a ZFS filesystem. The Linux kernel in Ubuntu 16.04 includes the necessary kernel modules for ZFS. For LXD to use ZFS for storage, we just need to install a package with ZFS utilities. Without ZFS, the containers would be stored as separate files on the host filesystem. With ZFS, we have features like copy-on-write which makes the tasks much faster.

Install the zfsutils-linux package (if you have Ubuntu 16.04.x):

sudo apt install zfsutils-linux

Once you have installed the LXD package on your desktop Ubuntu, the package installation scripts should have added you to the lxd group. If your desktop account is a member of that group, then your account can manage containers with LXD and can avoid adding sudo in front of all commands. The way Linux works, you would need to log out from the desktop session and then log in again to activate the lxd group membership. (If you are an advanced user, you can avoid the re-login by running newgrp lxd in your current shell.)

Before use, LXD should be initialized with our storage choice and networking choice.

Initialize lxd for storage and networking by running the following command:

$ sudo lxd init
Name of the storage backend to use (dir or zfs): zfs
Create a new ZFS pool (yes/no)? yes
Name of the new ZFS pool: lxd-pool
Would you like to use an existing block device (yes/no)? no
Size in GB of the new loop device (1GB minimum): 30
Would you like LXD to be available over the network (yes/no)? no
Do you want to configure the LXD bridge (yes/no)? yes 
> You will be asked about the network bridge configuration. Accept all defaults and continue.
Warning: Stopping lxd.service, but it can still be activated by:
 lxd.socket
 LXD has been successfully configured.
$ _

We created the ZFS pool as a filesystem inside a (single) file, not a block device (i.e. in a partition), thus no need for extra partitioning. In the example I specified 30GB, and this space will come from the root (/) filesystem. If you want to look at this file, it is at /var/lib/lxd/zfs.img.

 

That’s it! The initial configuration has been completed. For troubleshooting or background information, see https://www.stgraber.org/2016/03/15/lxd-2-0-installing-and-configuring-lxd-212/

Create your first container

All management commands with LXD are available through the lxc command. We run lxc with some parameters and that’s how we manage containers.

lxc list

to get a list of installed containers. Obviously, the list will be empty, but it verifies that everything is fine.

lxc image list

shows the list of (cached) images that we can use to launch a container. Obviously, the list will be empty, but it verifies that everything is fine.

lxc image list ubuntu:

shows the list of available remote images that we can use to download and launch as containers. This specific list shows Ubuntu images.

lxc image list images:

shows the list of available remote images for various distributions that we can use to download and launch as containers. This specific list shows all sort of distributions like Alpine, Debian, Gentoo, Opensuse and Fedora.

Let’s launch a container with Ubuntu 16.04 and call it c1:

$ lxc launch ubuntu:x c1
Creating c1
Starting c1
$ _

We used the launch action, then selected the image ubuntu:x (x is an alias for the Xenial/16.04 image) and lastly we use the name c1 for our container.

Let’s view our first installed container,

$ lxc list

+---------+---------+----------------------+------+------------+-----------+
| NAME | STATE   | IPV4                 | IPV6 | TYPE       | SNAPSHOTS    |
+---------+---------+----------------------+------+------------+-----------+
| c1   | RUNNING | 10.173.82.158 (eth0) |      | PERSISTENT | 0            |
+---------+---------+----------------------+------+------------+-----------+

Our first container c1 is running and it has an IP address (accessible locally). It is ready to be used!

Install a Web server

We can run commands in our container. The action for running commands, is exec.

$ lxc exec c1 -- uptime
 11:47:25 up 2 min, 0 users, load average: 0.07, 0.05, 0.04
$ _

After the action exec, we specify the container and finally we type the command to run inside it. The uptime is just 2 minutes; it’s a fresh container :-).

The -- on the command line has to do with parameter processing in our shell. If our command does not take any parameters, we can safely omit the --.

$ lxc exec c1 -- df -h

This is an example that requires the --, because our command uses the parameter -h. If you omit the --, you get an error.

Let’s get a shell in the container, and update the package list already.

$ lxc exec c1 bash
root@c1:~# apt update
Ign http://archive.ubuntu.com trusty InRelease
Get:1 http://archive.ubuntu.com trusty-updates InRelease [65.9 kB]
Get:2 http://security.ubuntu.com trusty-security InRelease [65.9 kB]
...
Hit http://archive.ubuntu.com trusty/universe Translation-en 
Fetched 11.2 MB in 9s (1228 kB/s) 
Reading package lists... Done
root@c1:~# apt upgrade
Reading package lists... Done
Building dependency tree 
...
Processing triggers for man-db (2.6.7.1-1ubuntu1) ...
Setting up dpkg (1.17.5ubuntu5.7) ...
root@c1:~# _

We are going to install nginx as our Web server. nginx is somewhat cooler than the Apache Web server.

root@c1:~# apt install nginx
Reading package lists... Done
Building dependency tree
...
Setting up nginx-core (1.4.6-1ubuntu3.5) ...
Setting up nginx (1.4.6-1ubuntu3.5) ...
Processing triggers for libc-bin (2.19-0ubuntu6.9) ...
root@c1:~# _

Let’s view our Web server with our browser. Remember the IP address you got, 10.173.82.158; enter it into your browser.

lxd-nginx

Let’s make a small change in the text of that page. Back inside our container, we enter the directory with the default HTML page.

root@c1:~# cd /var/www/html/
root@c1:/var/www/html# ls -l
total 2
-rw-r--r-- 1 root root 612 Jun 25 12:15 index.nginx-debian.html
root@c1:/var/www/html#

We can edit the file with nano, then save it.

lxd-nginx-nano

Finally, let’s check the page again,

lxd-nginx-modified

Clearing up

Let’s clear up the container by deleting it. We can easily create new ones when we need them.

$ lxc list
+---------+---------+----------------------+------+------------+-----------+
| NAME | STATE   | IPV4                 | IPV6 | TYPE       | SNAPSHOTS    |
+---------+---------+----------------------+------+------------+-----------+
| c1   | RUNNING | 10.173.82.169 (eth0) |      | PERSISTENT | 0            |
+---------+---------+----------------------+------+------------+-----------+
$ lxc stop c1
$ lxc delete c1
$ lxc list
+---------+---------+----------------------+------+------------+-----------+
| NAME | STATE   | IPV4                 | IPV6 | TYPE       | SNAPSHOTS    |
+---------+---------+----------------------+------+------------+-----------+
+---------+---------+----------------------+------+------------+-----------+

We stopped (shut down) the container, then we deleted it.

That’s all. There are many more ideas on what to do with containers. These were the first steps of setting up our Ubuntu desktop and trying out one such container.

on June 25, 2016 12:26 PM

June 24, 2016

Akademy! and fundraising

Valorie Zimmerman

https://qtcon.org/


Akademy is approaching! And I can hardly wait. This spring has been personally difficult, and meeting with friends and colleagues is the perfect way to end the summer. This year will be special because it's in Berlin, and because it is part of QtCon, with a lot of our freedom-loving friends, such as Qt, VideoLAN, Free Software Foundation Europe and KDAB. As usual, Kubuntu will also be having our annual meetup there.

Events are expensive! KDE needs money to support the Akademy event itself, to subsidise travel and lodging for those who need it, and to support other events such as our Randa Meetings, which just ended successfully. We're still raising money to support the sprints:

https://www.kde.org/fundraisers/randameetings2016/

Of course that money supports Akademy too, which is our largest annual meeting.

Ubuntu helps here too! The Ubuntu Community Fund sends many of the Kubuntu team, and often funds a shared meal as well. Please support the Ubuntu Community Fund too if you can!

I'm going!

I can't seem to make the image a link, so go to https://qtcon.org/ for more information.
on June 24, 2016 11:07 PM

Linkdump 25/2016 ...

Dirk Deimeke

Nice finds from the past week.

One of the best summaries of why you should start your own blog: Starte (d)ein Blog – heute!

Dies ist keine Übung - in the worst case, external hard drives get encrypted too, by the way. In any case you should notice the encryption in time, otherwise you can forget about a restore.

It is worth the effort to get to know yourself; you might even turn out to be quite a nice person. :-) Positives Selbstwertgefühl, wie Sie Ihre persönlichen Stärken im Job am besten einsetzen.

The Life-Changing Magic Of Shorter Emails comes from the "Captain Obvious Department" ... (sorry!).

Yes, we know, but training is still sorely needed: Der Fachkräftemangel ist ein Phantom.

The Epic Story of Dropbox’s Exodus From the Amazon Cloud Empire - nice one about a major infrastructure change without interruption of service.

Cheap is not always a good deal, and even if it looks very simple, expert knowledge is still needed: Auf diese Aspekte müssen Admins achten.

on June 24, 2016 03:36 AM

June 23, 2016

To celebrate Xubuntu’s tenth birthday*, the Xubuntu team is glad to announce a new campaign and competition!

We’re looking for your most memorable and fun Xubuntu story. In order to participate, submit the story to xubuntu-contacts@lists.ubuntu.com. Or you may share an image (photo, drawing, painting, etc.) with Elizabeth K. Joseph <lyz@ubuntu.com> and Pasi Lallinaho <pasi@shimmerproject.org>; please restrict your file size to a maximum of 5MB.

For example, have you shared Xubuntu with a friend or family member, and had them react in a memorable way? Or have you created Xubuntu-themed cookies, cakes or artwork? No story or experience is too simple to share and don’t be restricted by these examples, surprise us!

Bonus: Share it on Twitter with the hashtag #LoveXubuntu and, during the competition, the Xubuntu team will retweet posts from the Xubuntu Twitter account. Additionally, we encourage you to share your stories all over social media!

At the end of the competition, we will select 5 finalists. All finalists will receive a set of Xubuntu stickers from UnixStickers! We will pick 2 winners from the finalists, who will also receive a Xubuntu t-shirt! We will be in touch with the finalists and winners after the contest has ended to confirm their address details and, for winners, their preferred t-shirt size and color.

Notes on licensing: Submissions to the #LoveXubuntu campaign will be accepted under the CC BY-SA 4.0 license and available for use for Xubuntu marketing in the future without further consent from the participants. That said, we’re friendly folks and will try to communicate with you before using your story or image!

* The first official Xubuntu release was 6.06, released on June 1, 2006.
on June 23, 2016 09:12 PM

Want to get deeper under the hood with Kubuntu?

Interested in becoming a developer?

Then come and join us in the Kubuntu Dojo:

Thursday 30th June 2016 – 18:00 UTC

Packaging is one of the primary development tasks in a large Linux distribution project. Packaging is the essential way of getting the latest and best software to the user.

We continue our Kubuntu Dojo and Ninja developers training courses. These courses are free to attend, and take place on the last Thursday of the month at 18:00 UTC.

This is course number 2, where we will look at Debian and Ubuntu packaging. Candidates will create their first packages, including uploading them to their own PPA on Launchpad, all delivered inside our online video classroom.

Details for accessing the Kubuntu Big Blue Button Conference server will be announced in the G+ event stream, and on IRC: irc://irc.freenode.net

#kubuntu
#kubuntu-devel

Why it rocks

All the cool kids are doing it.
Packagers know everyone.
Not only will you be part of an elite group, but you will also get to know Debian’s finest, as well as KDE developers and other application developers.

For more details about the Kubuntu Ninja’s programme see our wiki:

https://wiki.kubuntu.org/Kubuntu/GettingInvolved/Development

on June 23, 2016 06:22 PM

As you may know, Ubuntu Membership is a recognition of significant and sustained contribution to Ubuntu and the Ubuntu community. To this end, the Community Council recruits members of our current membership community for the valuable role of reviewing and evaluating the contributions of potential members to bring them on board or assist with having them achieve this goal.

We have seven members of our boards expiring from their 2 year terms within the next couple months, which means we need to do some restaffing of this Membership Board.

We’re looking for Ubuntu Members who can participate either in the 20:00 UTC meetings or 22:00 UTC (if you can make both, even better).

Both the 20:00 UTC and the 22:00 UTC meetings happen once a month; the specific day may be discussed by the board upon addition of new members.

We have the following requirements for nominees:

  • be an Ubuntu member (preferably for some time)
  • be confident that you can evaluate contributions to various parts of our community
  • be committed to attending the membership meetings
  • broad insight into the Ubuntu community at large is a plus

Additionally, those sitting on membership boards are current Ubuntu Members with a proven track record of activity in the community. They have shown themselves over time to be able to work well with others and display the positive aspects of the Ubuntu Code of Conduct. They should be people who can discern character and evaluate contribution quality without emotion while engaging in an interview/discussion that communicates interest, a welcoming atmosphere, and which is marked by humanity, gentleness, and kindness. Even when they must deny applications, they should do so in such a way that applicants walk away with a sense of hopefulness and a desire to return with a more complete application rather than feeling discouraged or hurt.

To nominate yourself or somebody else (please confirm they wish to accept the nomination and state you have done so), please send a mail to the membership boards mailing list (ubuntu-membership-boards at lists.ubuntu.com). You will want to include some information about the nominee, a launchpad profile link and which time slot (20:00 or 22:00) the nominee will be able to participate in.

We will be accepting nominations through Friday July 1st at 12:00 UTC. At that time all nominations will be forwarded to the Community Council who will make the final decision and announcement.

Thanks in advance to you, and for the dedication everybody has put into their roles as board members.

Originally posted to the ubuntu-news-team mailing list on Mon Jun 20 13:04:03 UTC 2016 by Svetlana Belkin, on the behalf of the Community Council

on June 23, 2016 06:01 PM

It’s Episode Seventeen of Season Nine of the Ubuntu Podcast! Mark Johnson, Alan Pope, Laura Cowen, Martin Wimpress, and Mycroft are here and speaking to your brain.

We’re here – all of us!

In this week’s show:

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

on June 23, 2016 02:00 PM

As of today and part of our weekly release cadence, a new snapd is making its way to your 16.04 systems. Here is what’s new!

Command line

  • snap interfaces can now give you a list of all snaps connected to a specific interface:
    1a42fb817c663169453b0c7c5e24302d24ecb376.png
  • Introduction of snap run <app.command>, which will provide a clean and simple way to run commands and hooks for any installed revision of a snap. As of writing this post, to try it, you need to wait for a newer core snap to be promoted to the stable channel, or alternatively, switch to the beta channel with snap refresh --channel=beta ubuntu-core (a quick usage sketch follows this list).
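
For a feel of how these fit together on a 16.04 machine, here is a minimal shell sketch. The hello-world snap is just a convenient example from the store; any installed snap works the same way, and the exact output will vary by system:

# Show the interface connections on this system (the new behaviour lets you
# narrow this down to a single interface and see every snap plugged into it)
$ snap interfaces

# Run a command shipped by an installed snap through the new dispatcher
$ snap run hello-world.env

# Switch the core snap to the beta channel to try snap run early
$ sudo snap refresh --channel=beta ubuntu-core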

Ecosystem

  • Enable full confinement on Elementary 0.4 (Loki)
  • If a distribution doesn’t support full confinement through AppArmor and seccomp, snaps are installed in devmode by default.

Misc

  • Installing the core snap will now request a restart
  • REST API: added support to send apps per snap, to allow finer-grained control of snaps from the Software center (see the sketch just below this list).
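
If you want to poke at that API yourself, here is a minimal sketch; it assumes a curl build with UNIX socket support (7.40 or newer), and the exact shape of the JSON will depend on your snapd version:

# Query the local snapd REST API over its UNIX socket
$ curl --silent --unix-socket /run/snapd.socket http://localhost/v2/snaps
# The response describes each installed snap, now including its apps,
# which is what lets a software centre offer finer-grained control.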

Have a look at the full changelog for more details.

What’s next?

Here are some of the fixes already lined up for the next snapd release:

  • New interfaces to enable more system access for confined snaps, such as “camera”, “optical-drive” and “mpris”. This will give a lot more latitude for media players (control through the mpris dbus interface, playing DVDs, etc.) and communication apps. You can try them now by building snapd from source.
  • Better handling of snaps on nvidia graphics
  • And much more to come, watch the new Snapcraft social channels ( twitter, google+, facebook) for updates!
on June 23, 2016 01:37 PM

June 22, 2016

Slowly getting there ...

Dirk Deimeke

And another step forward.

Yesterday, on the longest and probably also most humid day of the year, I took my exam for the 2nd Kyu, and actually passed it.

The routines keep getting more complex, and now "posture marks" are handed out as well. ;-) That is nonsense, of course, but the further you get, the more the examiners want to see not only that the technique has been understood and performed, but that it is performed with style and posture.

There I still have - to put it mildly - room for improvement.

I am working on it.

on June 22, 2016 07:16 AM

June 21, 2016

KDE neon User Edition 5.6 came out a couple of weeks ago, so let’s have a look at the commentary.

Phoronix stuck to their reputation by announcing it a day early but redeemed themselves with a follow-up article, KDE neon: The Rock & Roll Distribution. “KDE neon feels amazing. There’s simply no other way to say it.”

CIO had an exclusive interview with moi, “It is a continuously updated installable image that can be used not just for exploration and testing but as the main operating system for people enthusiastic about the latest desktop software.”

For Spanish speakers, MuyLinux wrote KDE Neon lanza su primera versión para usuarios. “La primera impresión ha sido buena.”, or “The first impression was good.”

On YouTube we got a review from Jeff Linux Turner: “This thing’s actually pretty good.  I like it.”  Meanwhile Wooden User gives an unvoiced tour with funky music, and Riba Linux has the same but with more of an indie soundtrack.

Reddit had several threads on it including a review by luxitanium which I’ll selectively quote with “Is it ready for consumers? It is definitely getting there, oh yes“.

The award winning Spanish language KDE Blog covered Probando KDE Neon User Edition 5.6. “Estamos ante un gran avance para la Comunidad KDE” or “We are facing a breakthrough for the KDE Community“.

Meanwhile on Twitter:

Want to meet the genius behind the neon light? Harald is giving a talk at the opensuse conference on Thursday. Do drop by in Nürnberg.

on June 21, 2016 10:29 PM

Please welcome our newest Member, vasa1.

Not only has vasa1 been a long-time contributor to the forums, but he’s also a member of the Forums Staff.

vasa1’s application thread can be viewed here.

Congratulations from the Forums Council!

If you have been a contributor to the forums and wish to apply to Ubuntu Membership, please follow the process outlined here.


on June 21, 2016 05:23 PM

June 20, 2016

Classic Ubuntu 16.04 LTS, on an rpi2
Hopefully by now you're well aware of Ubuntu Core -- the snappiest way to run Ubuntu on a Raspberry Pi...

But have you ever wanted to run classic (apt/deb) Ubuntu Server on a RaspberryPi2?


Well, you're in luck!  Follow these instructions, and you'll be up and running in minutes!

First, download the released image (214MB):

$ wget http://cdimage.ubuntu.com/releases/16.04/release/ubuntu-16.04-preinstalled-server-armhf+raspi2.img.xz

Next, uncompress it:

$ unxz *xz

Now, write it to a microSD card using dd.  I'm using the card reader built into my Thinkpad, but you might use a USB adapter.  You'll need to figure out the block device of your card, and perhaps unmount it, if necessary.  Then, you can write the image to disk:

$ sudo dd if=ubuntu-16.04-preinstalled-server-armhf+raspi2.img of=/dev/mmcblk0 bs=32M
$ sync

Now, pop it into your rpi2, and power it on.

If it's connected to a USB keyboard and an HDMI monitor, then you'll land in a console where you can login with the username 'ubuntu' and password 'ubuntu', and then you'll be forced to choose a new password.

Assuming it has an Ethernet connection, it should pick up an address over DHCP.  You might need to check your router to determine what IP address it got; alternatively, since it sets its hostname to 'ubuntu', you may be able to resolve it by name.  In my case, I could automatically resolve it on my network, at ubuntu.canyonedge, with IP address 10.0.0.113, and ssh to it:

$ ssh ubuntu@ubuntu.canyonedge

Again, you can login on first boot with password 'ubuntu' and you're required to choose a new password.

On first boot, it will automatically resize the filesystem to use all of the available space on the MicroSD card -- much nicer than having to resize2fs yourself in some offline mode!
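
For comparison, the manual offline dance this saves you from looks roughly like the following. This is only a sketch, assuming the card is back in your laptop's reader as /dev/mmcblk0 and the root filesystem is the second partition:

# Grow the root partition to fill the card, then grow the ext4 filesystem
$ sudo parted /dev/mmcblk0 resizepart 2 100%
$ sudo e2fsck -f /dev/mmcblk0p2
$ sudo resize2fs /dev/mmcblk0p2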

Now, you're off and running.  Have fun with sudo, apt, byobu, lxd, docker, and everything else you'd expect to find on a classic Ubuntu server ;-)  Heck, you'll even find the snap command, where you'll be able to install snap packages, right on top of your classic Ubuntu Server!  And if that doesn't just bake your noodle...

Cheers,
Dustin
on June 20, 2016 07:35 PM



Doxyqml 0.3.0 released

Aurélien Gâteau

The master branch of Doxyqml, a QML input filter for Doxygen, had been waiting for a release for a long time. Olivier Churlaud, the new KApidox hero, reported that it did not work with Python 3 and submitted a patch to fix this. I integrated his patch, fixed a few other things, set up Travis to test future commits and finally released Doxyqml 0.3.0, featuring the following changes:

  • Port to Python 3 (Olivier Churlaud, Aurélien Gâteau)
  • Skip directory imports (Aurélien Gâteau)
  • Support comment after class declaration (Cédric Cabessa)
  • Find qmldir for relative paths (Mathias Hasselmann)
  • Read import statements to help base class lookup (Mathias Hasselmann)
  • Generate qualified component names (Mathias Hasselmann)
  • Handle singleton pragmas (Mathias Hasselmann)

Note that this new version is Python 3 only. I think it is safe to assume that Python 3 is widespread enough nowadays that this should not be a problem.

on June 20, 2016 05:59 PM

This year a significant number of students are working on RTC-related projects as part of Google Summer of Code, under the umbrella of the Debian Project. You may have already encountered some of them blogging on Planet or participating in mailing lists and IRC.

WebRTC plugins for popular CMS and web frameworks

There is already a range of pseudo-WebRTC plugins available for CMS and blogging platforms like WordPress. Unfortunately, many of them either do not release all their source code, lock users into their own servers, or require users to download potentially untrustworthy browser plugins (also without any source code) to use them.

Mesut is making plugins for genuinely free WebRTC with open standards like SIP. He has recently created the WPCall plugin for WordPress, based on the highly successful DruCall plugin for WebRTC in Drupal.

Keerthana has started creating a similar plugin for MediaWiki.

What is great about these plugins is that they don't require any browser plugins and they work with any server-side SIP infrastructure that you choose. Whether you are routing calls into a call center or simply using them on a personal blog, they are quick and convenient to install. Hopefully they will be made available as packages, like the DruCall packages for Debian and Ubuntu, enabling even faster installation with all dependencies.

Would you like to try running these plugins yourself and provide feedback to the students? Would you like to help deploy them for online communities using Drupal, WordPress or MediaWiki to power their web sites? Please come and discuss them with us in the Free-RTC mailing list.

You can read more about how to run your own SIP proxy for WebRTC in the RTC Quick Start Guide.

Finding all the phone numbers and ham radio callsigns in old emails

Do you have phone numbers and other contact details such as ham radio callsigns in old emails? Would you like a quick way to data-mine your inbox to find them and help migrate them to your address book?

Jaminy is working on Python scripts to do just that. Her project takes some inspiration from the Telify plugin for Firefox, which detects phone numbers in web pages and converts them to hyperlinks for click-to-dial. The popular libphonenumber from Google, used to format numbers on Android phones, is being used to help normalize any numbers found. If you would like to test the code against your own mailbox and address book, please make contact in the #debian-data channel on IRC.

A truly peer-to-peer alternative to SIP, XMPP and WebRTC

The team at Savoir Faire Linux has been busy building the Ring softphone, a truly peer-to-peer solution based on the OpenDHT distribution hash table technology.

Several students (Simon, Olivier, Nicolas and Alok) are actively collaborating on this project, some of them have been fortunate enough to participate at SFL's offices in Montreal, Canada. These GSoC projects have also provided a great opportunity to raise Debian's profile in Montreal ahead of DebConf17 next year.

Linux Desktop Telepathy framework and reSIProcate

Another group of students, Mateus, Udit and Balram have been busy working on C++ projects involving the Telepathy framework and the reSIProcate SIP stack. Telepathy is the framework behind popular softphones such as GNOME Empathy that are installed by default on the GNU/Linux desktop.

I previously wrote about starting a new SIP-based connection manager for Telepathy based on reSIProcate. Using reSIProcate means more comprehensive support for all the features of SIP, better NAT traversal, IPv6 support, NAPTR support and TLS support. The combined impact of all these features is much greater connectivity and much greater convenience.

The students are extending that work, completing the buddy list functionality, improving error handling and looking at interaction with XMPP.

Streamlining provisioning of SIP accounts

Currently there is some manual effort for each user to take the SIP account settings from their Internet Telephony Service Provider (ITSP) and transpose these into the account settings required by their softphone.

Pranav has been working to close that gap, creating a JAR that can be embedded in Java softphones such as Jitsi, Lumicall and CSipSimple to automate as much of the provisioning process as possible. ITSPs are encouraged to test this client against their services and will be able to add details specific to their service through Github pull requests.

The project also hopes to provide streamlined provisioning mechanisms for privately operated SIP PBXes, such as the Asterisk and FreeSWITCH servers used in small businesses.

Improving SIP support in Apache Camel and the Jitsi softphone

Apache Camel's SIP component and the widely known Jitsi softphone both use the JAIN SIP library for Java.

Nik has been looking at issues faced by SIP users in both projects, adding support for the MESSAGE method in camel-sip and looking at why users sometimes see multiple password prompts for SIP accounts in Jitsi.

If you are trying either of these projects, you are very welcome to come and discuss them on the mailing lists, Camel users and Jitsi users.

GSoC students at DebConf16 and DebConf17 and other events

Many of us have been lucky to meet GSoC students attending DebConf, FOSDEM and other events in the past. From this year, Google now expects the students to complete GSoC before they become eligible for any travel assistance. Some of the students will still be at DebConf16 next month, assisted by the regular travel budget and the diversity funding initiative. Nik and Mesut were already able to travel to Vienna for the recent MiniDebConf / LinuxWochen.at event.

As mentioned earlier, several of the students and the mentors at Savoir Faire Linux are based in Montreal, Canada, the destination for DebConf17 next year and it is great to see the momentum already building for an event that promises to be very big.

Explore the world of Free Real-Time Communications (RTC)

If you are interested in knowing more about the Free RTC topic, you may find the following resources helpful:

RTC mentoring team 2016

We have been very fortunate to build a large team of mentors around the RTC-themed projects for 2016. Many of them are first time GSoC mentors and/or new to the Debian community. Some have successfully completed GSoC as students in the past. Each of them brings unique experience and leadership in their domain.

Helping GSoC projects in 2016 and beyond

Not everybody wants to commit to being a dedicated mentor for a GSoC student. In fact, there are many ways to help without being a mentor and many benefits of doing so.

Simply looking out for potential applicants for future rounds of GSoC and referring them to the debian-outreach mailing list or an existing mentor helps ensure we can identify talented students early and design projects around their capabilities and interests.

Testing the projects on an ad-hoc basis, greeting the students at DebConf and reading over the student wikis to find out where they are and introduce them to other developers in their area are all possible ways to help the projects succeed and foster long term engagement.

Google gives Debian a US$500 grant for each student who completes a project successfully this year. If all of this year's students pass, that is over $10,000 to support Debian's mission.

on June 20, 2016 03:02 PM

A little while back I shared that I decided to leave GitHub. Firstly, thanks to all of you for your incredible support. I am blessed to have such wonderful people in my life.

Since that post I have been rather quiet about what my next adventure is going to be, and some of the speculation has been rather amusing. Now I am finally ready to share more details.

In a nutshell, I have started a new consultancy practice to provide community management, innersourcing, developer workflow/relations, and other related services. To keep things simple right now, this new practice is called Jono Bacon Consulting (original, eh?)

As some of you know, I have actually been providing community strategy and management consultancy for quite some time. Previously I have worked with organizations such as Deutsche Bank, Sony Mobile, ON.LAB, Open Networking Foundation, Intel and others. I am also an active advisor for organizations such as AlienVault, Open Networking Foundation, Open Cloud Consortium, Mycroft AI and I also advise some startup accelerators.

I have always loved this kind of work. My wider career ambitions have always been to help organizations build great communities and to further the wider art and science of collaboration and community development. I love the experience and insight I gain with each new client.

When I made the decision to move on from GitHub I was fortunate to have some compelling options on the table for new roles. After spending some time thinking about what I love doing and these wider ambitions, it became clear that consulting was the right step forward. I would have shared this news earlier but I have already been busy traveling and working with clients. 😉

I am really excited about this new chapter. While I feel I have a lot I can offer my clients today, I am looking forward to continuing to broaden my knowledge, expertise, and diversity of community strategy and leadership. I am also excited to share these learnings with you all in my writing, presentations, and elsewhere. This has always been a journey, and each new road opens up interesting new questions and potential, and I am thirsty to discover and explore more.

So, if you are interested in building a community, either inside or outside (or both) your organization, feel free to discover more and get in touch and we can talk more.

on June 20, 2016 02:45 PM

It takes a special kind of person to enjoy being among the first in a new community. It’s a time when there’s a lot of empty canvas, wide landscapes to uncover, lots of dragons still on the map - I guess you already see what I mean. It takes some pioneer spirit to feel comfortable when the rules are not all figured out yet and stuff is still a bit harder than it should be.

The last occurrence where I saw this live was the Snappy Playpen. A project where all the early snap contributors hang out, figure out problems, document best-practices and have fun together.

We use Github and Gitter/IRC to coordinate things; we have been going for a bit more than two weeks now, and I’m quite happy with where we’ve got to. We have had about 60 people in the Gitter channel, more than 30 snaps contributed, and about the same number or more in the works.

playpen

But it’s not just the number of snaps. It’s also the level of helping each other out and figuring out bigger problems together. Here’s just a (very) few things as an example:

  • David Planella wrote a common launcher for GTK apps and we could move snaps like leafpad, galculator and ristretto off of their own custom launchers today. It’s available as a wiki part, so it’s quite easy to consume today.
  • Simon Quigley and Didier Roche figured out better contribution guidelines and moved the existing snaps to use them instead.
  • With new interfaces landing in snapd, it was nice to see how they were picked up in existing snaps and formerly existing issues resolved. David Callé for example fixed the vlc and scummvm snaps this way.
  • Sometimes it takes perseverance to get your snap landed. It took Andy Keech quite a while to get imagemagick (both stable and from git) to build and work properly, but thanks to Andy’s hard work and collaboration with the Snapcraft developers they’re included now.
  • The docs are good, but they don’t cover all use-cases yet and we’re finding new ways to use the tools every day.

As I said earlier: it takes some pioneer spirit to be happy in such circumstances, and all the folks above (and many others) have been working together as a team over the last few days. For me, as somebody who’s supporting the project, this was very nice to see, particularly seeing people from all over the open source spectrum (users of cloud tools, GTK and Qt apps, Python scripts, upstream developers, Java tools and many more).

Tomorrow we are going to have our kickoff event for week 3 of Snappy Playpen. As I said in the mail, one area of focus is going to be server apps and electron based apps, but feel free to bring whatever you enjoy working on.

I’d like to thank each and every one of you who is participating in this initiative (not just the people who committed something). The atmosphere is great, we’re solving problems together and we’re excited to bring a more complete, easier to digest and better to use snap experience to new users.

on June 20, 2016 02:41 PM

June 19, 2016

Go Debian!

Paul Tagliamonte

As some of the world knows full well by now, I've been noodling with Go for a few years, working through its pros, its cons, and thinking a lot about how humans use code to express thoughts and ideas. Go's got a lot of neat use cases, suited to particular problems, and used in the right place, you can see some clear massive wins.

I've started writing Debian tooling in Go, because it's a pretty natural fit. Go's fairly tight, and overhead shouldn't be taken up by your operating system. After a while, I wound up hitting the usual blockers, and started to build up abstractions. They became pretty darn useful, so, this blog post is announcing (a still incomplete, year old and perhaps API changing) Debian package for Go. The Go importable name is pault.ag/go/debian. This contains a lot of utilities for dealing with Debian packages, and will become an edited down "toolbelt" for working with or on Debian packages.

Module Overview

Currently, the package contains five major sub-packages: a changelog parser, a control file parser, a deb file format parser, a dependency parser and a version parser. Together, these are a set of powerful building blocks which can be used together to create higher order systems with reliable understandings of the world.

changelog

The first (and perhaps most incomplete and least tested) is a changelog file parser. This provides the programmer with the ability to pull out the suite being targeted in the changelog, when each upload was made, and the version of each. For example, let's look at how we can pull out when all the uploads of Docker to sid took place:

package main

import (
    "fmt"
    "net/http"

    "pault.ag/go/debian/changelog"
)

func main() {
    // Fetch the unstable (sid) changelog for docker.io and print each upload.
    resp, err := http.Get("http://metadata.ftp-master.debian.org/changelogs/main/d/docker.io/unstable_changelog")
    if err != nil {
        panic(err)
    }
    allEntries, err := changelog.Parse(resp.Body)
    if err != nil {
        panic(err)
    }
    for _, entry := range allEntries {
        fmt.Printf("Version %s was uploaded on %s\n", entry.Version, entry.When)
    }
}

The output of which looks like:

Version 1.8.3~ds1-2 was uploaded on 2015-11-04 00:09:02 -0800 -0800
Version 1.8.3~ds1-1 was uploaded on 2015-10-29 19:40:51 -0700 -0700
Version 1.8.2~ds1-2 was uploaded on 2015-10-29 07:23:10 -0700 -0700
Version 1.8.2~ds1-1 was uploaded on 2015-10-28 14:21:00 -0700 -0700
Version 1.7.1~dfsg1-1 was uploaded on 2015-08-26 10:13:48 -0700 -0700
Version 1.6.2~dfsg1-2 was uploaded on 2015-07-01 07:45:19 -0600 -0600
Version 1.6.2~dfsg1-1 was uploaded on 2015-05-21 00:47:43 -0600 -0600
Version 1.6.1+dfsg1-2 was uploaded on 2015-05-10 13:02:54 -0400 EDT
Version 1.6.1+dfsg1-1 was uploaded on 2015-05-08 17:57:10 -0600 -0600
Version 1.6.0+dfsg1-1 was uploaded on 2015-05-05 15:10:49 -0600 -0600
Version 1.6.0+dfsg1-1~exp1 was uploaded on 2015-04-16 18:00:21 -0600 -0600
Version 1.6.0~rc7~dfsg1-1~exp1 was uploaded on 2015-04-15 19:35:46 -0600 -0600
Version 1.6.0~rc4~dfsg1-1 was uploaded on 2015-04-06 17:11:33 -0600 -0600
Version 1.5.0~dfsg1-1 was uploaded on 2015-03-10 22:58:49 -0600 -0600
Version 1.3.3~dfsg1-2 was uploaded on 2015-01-03 00:11:47 -0700 -0700
Version 1.3.3~dfsg1-1 was uploaded on 2014-12-18 21:54:12 -0700 -0700
Version 1.3.2~dfsg1-1 was uploaded on 2014-11-24 19:14:28 -0500 EST
Version 1.3.1~dfsg1-2 was uploaded on 2014-11-07 13:11:34 -0700 -0700
Version 1.3.1~dfsg1-1 was uploaded on 2014-11-03 08:26:29 -0700 -0700
Version 1.3.0~dfsg1-1 was uploaded on 2014-10-17 00:56:07 -0600 -0600
Version 1.2.0~dfsg1-2 was uploaded on 2014-10-09 00:08:11 +0000 +0000
Version 1.2.0~dfsg1-1 was uploaded on 2014-09-13 11:43:17 -0600 -0600
Version 1.0.0~dfsg1-1 was uploaded on 2014-06-13 21:04:53 -0400 EDT
Version 0.11.1~dfsg1-1 was uploaded on 2014-05-09 17:30:45 -0400 EDT
Version 0.9.1~dfsg1-2 was uploaded on 2014-04-08 23:19:08 -0400 EDT
Version 0.9.1~dfsg1-1 was uploaded on 2014-04-03 21:38:30 -0400 EDT
Version 0.9.0+dfsg1-1 was uploaded on 2014-03-11 22:24:31 -0400 EDT
Version 0.8.1+dfsg1-1 was uploaded on 2014-02-25 20:56:31 -0500 EST
Version 0.8.0+dfsg1-2 was uploaded on 2014-02-15 17:51:58 -0500 EST
Version 0.8.0+dfsg1-1 was uploaded on 2014-02-10 20:41:10 -0500 EST
Version 0.7.6+dfsg1-1 was uploaded on 2014-01-22 22:50:47 -0500 EST
Version 0.7.1+dfsg1-1 was uploaded on 2014-01-15 20:22:34 -0500 EST
Version 0.6.7+dfsg1-3 was uploaded on 2014-01-09 20:10:20 -0500 EST
Version 0.6.7+dfsg1-2 was uploaded on 2014-01-08 19:14:02 -0500 EST
Version 0.6.7+dfsg1-1 was uploaded on 2014-01-07 21:06:10 -0500 EST

control

Next is one of the most complex, and one of the oldest parts of go-debian, which is the control file parser (otherwise sometimes known as deb822). This module was inspired by the way that the json module works in Go, allowing for files to be defined in code with a struct. This tends to be a bit more declarative, but also winds up putting logic into struct tags, which can be a nasty anti-pattern if used too much.

The first primitive in this module is the concept of a Paragraph, a struct containing two values, the order of keys seen, and a map of string to string. All higher order functions dealing with control files will go through this type, which is a helpful interchange format to be aware of. All parsing of meaning from the Control file happens when the Paragraph is unpacked into a struct using reflection.

The idea behind this strategy is that you define your struct and let the Control parser handle unpacking the data from the IO into your container. This lets you maintain type safety: since you never have to read and cast, the conversion handles this for you and returns an unmarshaling error in the event of failure.

Additionally, Structs that define an anonymous member of control.Paragraph will have the raw Paragraph struct of the underlying file, allowing the programmer to handle dynamic tags (such as X-Foo), or at least, letting them survive the round-trip through go.

The default decoder takes an argument that enables verifying the input control file using an OpenPGP keyring; the signer is exposed to the programmer through the (*Decoder).Signer() function. If the passed argument is nil, it will not check the input file signature (at all!), and if one has been passed, any signed data must be found or an error will fall out of the NewDecoder call. On the way out, the opposite happens: the struct is introspected, turned into a control.Paragraph, and then written out to the io.Writer.

Here's a quick (and VERY dirty) example showing the basics of reading and writing Debian Control files with go-debian.

package main

import (
    "fmt"
    "io"
    "net/http"
    "strings"

    "pault.ag/go/debian/control"
)

type AllowedPackage struct {
    Package     string
    Fingerprint string
}

func (a *AllowedPackage) UnmarshalControl(in string) error {
    in = strings.TrimSpace(in)
    chunks := strings.SplitN(in, " ", 2)
    if len(chunks) != 2 {
        return fmt.Errorf("Syntax sucks: '%s'", in)
    }
    a.Package = chunks[0]
    a.Fingerprint = chunks[1][1 : len(chunks[1])-1]

    return nil
}

type DMUA struct {
    Fingerprint     string
    Uid             string
    AllowedPackages []AllowedPackage `control:"Allow" delim:","`
}

func main() {
    resp, err := http.Get("http://metadata.ftp-master.debian.org/dm.txt")
    if err != nil {
        panic(err)
    }

    decoder, err := control.NewDecoder(resp.Body, nil)
    if err != nil {
        panic(err)
    }

    for {
        dmua := DMUA{}
        if err := decoder.Decode(&dmua); err != nil {
            if err == io.EOF {
                break
            }
            panic(err)
        }
        fmt.Printf("The DM %s is allowed to upload:\n", dmua.Uid)
        for _, allowedPackage := range dmua.AllowedPackages {
            fmt.Printf("   %s [granted by %s]\n", allowedPackage.Package, allowedPackage.Fingerprint)
        }
    }
}

Output (truncated!) looks a bit like:

...
The DM Allison Randal <allison@lohutok.net> is allowed to upload:
   parrot [granted by A4F455C3414B10563FCC9244AFA51BD6CDE573CB]
...
The DM Benjamin Barenblat <bbaren@mit.edu> is allowed to upload:
   boogie [granted by 3224C4469D7DF8F3D6F41A02BBC756DDBE595F6B]
   dafny [granted by 3224C4469D7DF8F3D6F41A02BBC756DDBE595F6B]
   transmission-remote-gtk [granted by 3224C4469D7DF8F3D6F41A02BBC756DDBE595F6B]
   urweb [granted by 3224C4469D7DF8F3D6F41A02BBC756DDBE595F6B]
...
The DM أحمد المحمودي <aelmahmoudy@sabily.org> is allowed to upload:
   covered [granted by 41352A3B4726ACC590940097F0A98A4C4CD6E3D2]
   dico [granted by 6ADD5093AC6D1072C9129000B1CCD97290267086]
   drawtiming [granted by 41352A3B4726ACC590940097F0A98A4C4CD6E3D2]
   fonts-hosny-amiri [granted by BD838A2BAAF9E3408BD9646833BE1A0A8C2ED8FF]
   ...
...

deb

Next up, we've got the deb module. This contains code to handle reading Debian 2.0 .deb files. It contains a wrapper that will parse the control member, and provide the data member through the archive/tar interface.

Here's an example of how to read a .deb file, access some metadata, and iterate over the tar archive, and print the filenames of each of the entries.

package main

import (
    "fmt"
    "io"
    "os"

    "pault.ag/go/debian/deb"
)

func main() {
    path := "/tmp/fluxbox_1.3.5-2+b1_amd64.deb"
    fd, err := os.Open(path)
    if err != nil {
        panic(err)
    }
    defer fd.Close()

    debFile, err := deb.Load(fd, path)
    if err != nil {
        panic(err)
    }

    version := debFile.Control.Version
    fmt.Printf(
        "Epoch: %d, Version: %s, Revision: %s\n",
        version.Epoch, version.Version, version.Revision,
    )

    for {
        hdr, err := debFile.Data.Next()
        if err == io.EOF {
            break
        }
        if err != nil {
            panic(err)
        }
        fmt.Printf("  -> %s\n", hdr.Name)
    }
}

Boringly, the output looks like:

Epoch: 0, Version: 1.3.5, Revision: 2+b1
  -> ./
  -> ./etc/
  -> ./etc/menu-methods/
  -> ./etc/menu-methods/fluxbox
  -> ./etc/X11/
  -> ./etc/X11/fluxbox/
  -> ./etc/X11/fluxbox/window.menu
  -> ./etc/X11/fluxbox/fluxbox.menu-user
  -> ./etc/X11/fluxbox/keys
  -> ./etc/X11/fluxbox/init
  -> ./etc/X11/fluxbox/system.fluxbox-menu
  -> ./etc/X11/fluxbox/overlay
  -> ./etc/X11/fluxbox/apps
  -> ./usr/
  -> ./usr/share/
  -> ./usr/share/man/
  -> ./usr/share/man/man5/
  -> ./usr/share/man/man5/fluxbox-style.5.gz
  -> ./usr/share/man/man5/fluxbox-menu.5.gz
  -> ./usr/share/man/man5/fluxbox-apps.5.gz
  -> ./usr/share/man/man5/fluxbox-keys.5.gz
  -> ./usr/share/man/man1/
  -> ./usr/share/man/man1/startfluxbox.1.gz
...

dependency

The dependency package provides an interface to parse and compute dependencies. This package is a bit odd in that, well, there's no other library that does this. The issue is that there are actually two different parsers that compute our Dependency lines, one in Perl (as part of dpkg-dev) and another in C (in dpkg).

To date, this has resulted in me filing three different bugs. I also found a broken package in the archive, which actually resulted in another bug being (totally accidentally) already fixed. I hope to continue to run the archive through my parser in hopes of finding more bugs! This package is a bit complex, but it basically just returns what amounts to be an AST for our Dependency lines. I'm positive there are bugs, so file them!

package main

import (
    "fmt"

    "pault.ag/go/debian/dependency"
)

func main() {
    dep, err := dependency.Parse("foo | bar, baz, foobar [amd64] | bazfoo [!sparc], fnord:armhf [gnu-linux-sparc]")
    if err != nil {
        panic(err)
    }

    anySparc, err := dependency.ParseArch("sparc")
    if err != nil {
        panic(err)
    }

    for _, possi := range dep.GetPossibilities(*anySparc) {
        fmt.Printf("%s (%s)\n", possi.Name, possi.Arch)
    }
}

Gives the output:

foo (<nil>)
baz (<nil>)
fnord (armhf)

version

Right off the bat, I'd like to thank Michael Stapelberg for letting me graft this out of dcs and into the go-debian package. This was nearly entirely his work (with a one- or two-line function I added later), and was amazingly helpful to have. Thank you!

This module implements Debian version comparisons and parsing, allowing for sorting in lists, checking whether a version is native or not, and letting the programmer implement smart(er!) logic based on upstream (or Debian) version numbers.

This module is extremely easy to use and very straightforward, and not worth writing an example for.

Final thoughts

This is more of a "Yeah, OK, this has been useful enough to me at this point that I'm going to support this" rather than a "It's stable!" or even "It's alive!" post. Hopefully folks can report bugs and help iterate on this module until we have some really clean building blocks to build solid higher level systems on top of. Being able to have multiple libraries interoperate by relying on go-debian will make things massively easier. I'm in need of more documentation, and to finalize some parts of the older sub package APIs, but I'm hoping to be at a "1.0" real soon now.

on June 19, 2016 04:30 PM

We all browse the web and we all use different browsers, but the two main ones are Chrome and Firefox.  I use Firefox because I like to support FOSS and to stay as far away from Google as possible.

Like Chrome, Firefox has add-ons to make it more custom and usable.  I use a few:

  • Adblock Plus- Who likes ads?
  • Ubuntu Modifications- built-in when you install Ubuntu
  • Xmarks- to keep my bookmarks safe and synced, dunno if I will switch to Firefox sync
  • YouTube Video and Audio Downloader- When I need something to watch when I don’t have internet

That’s pretty much Firefox for me, just a (basic) program to surf the web.  I really don’t know why I blogged about it.

on June 19, 2016 03:26 PM

June 18, 2016

Whither cgmanager

Serge Hallyn

A few years ago we started the cgmanager project to address two issues:

  1. Simplify code in callers.
  2. Support safe delegation of cgroups to nested container managers.

Historically, advice on where and how to mount cgroups was varied. As a result, programs which wanted to manipulate cgroups had quite a bit of work to do to find cgroup mountpoints for specific controllers and calculate the full paths to use. Cgmanager simplified this code by doing that work for callers. With the ‘cgm’ command, it was also greatly simplified for users. Today, with the advent of the cgroup-lite package, now the cgroupfs-mount package, and systemd, all of which agree on mounting each controller separately under /sys/fs/cgroup/$controller, container manager code can be greatly simplified. This can be evidenced by comparing the older ‘cgfs’ cgroup driver in lxc to the newer ‘cgfsng’ cgroup driver which benefits from assuming the simpler layout (falling back to the more complicated driver if needed).
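
As a rough illustration of how simple the path calculation becomes under that layout (the listing and cgroup paths below are only examples; they will differ per system):

# Each controller is mounted separately under /sys/fs/cgroup
$ ls /sys/fs/cgroup
blkio  cpu  cpuacct  cpuset  devices  freezer  memory  ...

# /proc/self/cgroup lists "id:controller:/path" for the current task
$ grep memory /proc/self/cgroup
4:memory:/user.slice

# So the full path for the memory controller is just the concatenation
$ cat /sys/fs/cgroup/memory/user.slice/memory.limit_in_bytes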

The other core motivating factor for cgmanager, safer support of cgroup delegation, is now – at last! – deprecated by the availability of cgroup namespaces.

A few lessons?

When starting actual coding for cgmanager, one open question was what kind of communication it should support. We decided on using a dbus interface, implemented using libnih-dbus. I had strong reservations about that. In retrospect, it worked better at the time than I had expected, but had some severe problems. First, performance was pretty horrific. Every dbus connection required 4 round trips to get started. For the simple upstart based systems of the time this was ok, and we kept it from becoming worse by having a cgmanager proxy in a multiply-nested container talk straight to the host cgmanager. However, as Ubuntu switched to systemd, which made very heavy use of cgroups, we started seeing huge performance impacts. It also failed to satisfy the requirements of google folks who otherwise may have been more interested.

Secondly, as Ubuntu switched from upstart to systemd, and upstart – and libnih – became unsupported, this started affecting cgmanager.

So, if cgmanager were still needed today, I would strongly consider rewriting it to have a simple interface over a unix socket. However, as described above, cgmanager has happily become unnecessary. A different sort of cgroup manager may in fact be needed – a modern day ‘libcgroup’ for simple administration of service resources on systems without systemd. However, that is not cgmanager’s role.

So, with that, the cgmanager project is effectively closed. It will continue to be supported by the lxc project (us) on legacy systems. But on the whole, migrating systems to a kernel supporting cgroup namespaces (or at least lxcfs-provided cgroup filesystems) is highly recommended.
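
If you are wondering whether your kernel already has cgroup namespace support, here is a quick check (cgroup namespaces arrived in Linux 4.6; the symlink target shown is illustrative):

$ ls -l /proc/self/ns/cgroup
lrwxrwxrwx 1 user user 0 Jun 18 07:00 /proc/self/ns/cgroup -> cgroup:[4026531835]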

PS

By the same token, the cgroup part of lxcfs is also effectively deprecated. I will probably move it under a deprecation build flag soon. lxcfs will continue to be used for procfs virtualization, and likely expanded to support some /sys information.


on June 18, 2016 07:00 AM

June 17, 2016

limba-smallI wanted to write this blogpost since April, and even announced it in two previous posts, but never got to actually write it until now. And with the recent events in Snappy and Flatpak land, I can not defer this post any longer (unless I want to answer the same questions over and over on IRC ^^).

As you know, I have been developing the Limba 3rd-party software installer since 2014 (see this LWN article explaining the project better than I could 😉 ); it is a spiritual successor to the Listaller project, which had been in development since roughly 2008. Limba got some competition from Flatpak and Snappy, so it’s natural to ask what the project’s next steps will be.

Meeting with the competition

At last FOSDEM and at the GNOME Software sprint this year in April, I met with Alexander Larsson and we discussed the rather unfortunate situation we got into, with Flatpak and Limba being in competition.

Both Alex and I have been experimenting with 3rd-party app distribution for quite some time, with me working on Listaller and him working on Glick and Glick2. None of these projects ever went anywhere. Around the time I started Limba, fixing design mistakes made with Listaller, Alex started a new attempt at software distribution, this time with sandboxing added to the mix and a new OSTree-based design of the software-distribution mechanism. At the time it wasn’t at all clear that XdgApp, later to be renamed to Flatpak, would get huge backing from GNOME and later Red Hat, becoming a very promising candidate for a truly cross-distro software distribution system.

The main difference between Limba and Flatpak is that Limba allows modular runtimes, with things like the toolkit, helper libraries and programs being separate modules, which can be updated independently. Flatpak, on the other hand, allows just one static runtime and requires everything that is not already in the runtime to be bundled with the actual application. So, while a Limba bundle might depend on multiple individual other bundles, Flatpak bundles only have one fixed dependency on a runtime. Getting a compromise between those two concepts is not possible, and since the modular vs. static approaches in Limba and Flatpak were fundamental, conscious design decisions, merging the projects was also not possible.

Alex and I had very productive discussions, and except for the modularity issue, we were pretty much on the same page in every other aspect regarding the sandboxing and app-distribution matters.

Sometimes stepping out of the way is the best way to achieve progress

So, what to do now? Obviously, I can continue to push Limba forward, but given all the other projects I maintain, this seems to be a waste of resources (Limba eats a lot of my spare time). Now with Flatpak and Snappy being available, I am basically competing with Canonical and Red Hat, who can make much more progress, much faster, than I can as a single developer. Also, Flatpak’s bigger base of contributors compared to Limba is a clear sign of which project the community favors more.

Furthermore, I started the Listaller and Limba projects to scratch an itch. When being new to Linux, it was very annoying to me to see some applications only being made available in compiled form for one distribution, and sometimes one that I didn’t use. Getting software was incredibly hard for me as a newbie, and using the package-manager was also unusual back then (no software center apps existed, only package lists). If you wanted to update one app, you usually needed to update your whole distribution, sometimes even to a development version or rolling-release channel, sacrificing stability.

So, if now this issue gets solved by someone else in a good way, there is no point in pushing my solution hard. I developed a tool to solve a problem, and it looks like another tool will fix that issue now before mine does, which is fine, because this longstanding problem will finally be solved. And that’s the thing I actually care most about.

I still think Limba is the superior solution for app distribution, but it is also the one that is most complex and requires additional work by upstream projects to use it properly. Which is something most projects don’t want, and that’s completely fine. 😉

And that being said: I think Flatpak is a great project. Alex has much experience in this matter, and the design of Flatpak is sound. It solves many issues 3rd-party app development faces in a pretty elegant way, and I like it very much for that. Also the focus on sandboxing is great, although that part will need more time to become really useful. (Aside from that, working with Alexander is a pleasure, and he really cares about making Flatpak a truly cross-distributional, vendor independent project.)

Moving forward

So, what I will do now is not to stop Limba development completely, but keep it going as a research project. Maybe one can use Limba bundles to create Flatpak packages more easily. We also discussed having Flatpak launch applications installed by Limba, which would allow both systems to coexist and benefit from each other. Since Limba (unlike Listaller) was also explicitly designed for web-applications, and therefore has a slightly wider scope than Flatpak, this could make sense.

In any case though, I will invest much less time in the Limba project. This is good news for all the people out there using the Tanglu Linux distribution, AppStream-metadata-consuming services, PackageKit on Debian, etc. – those will receive more attention 😉

An integral part of Limba is a web service called “LimbaHub” to accept new bundles, do QA on them and publish them in a public repository. I will likely rewrite it to be a service using Flatpak bundles, maybe even supporting Flatpak bundles and Limba bundles side-by-side (and if useful, maybe also support AppImageKit and Snappy). But this project is still on the drawing board.

Let’s see 🙂

P.S: If you come to Debconf in Cape Town, make sure to not miss my talks about AppStream and bundling 🙂

on June 17, 2016 08:24 PM

The gears have been churning away and Plasma 5.7 beta is out for testing.  But what’s that I hear? How can you test it?

Why, with KDE Neon Developer Edition Git-Stable Branches, of course. Until now the little sister of the Git-Unstable Branch variant, it comes into its own when you need to test betas such as this.

Grab the ISO, install it (on virtual machine or real hardware as you please) and find issues to report to the Plasma team.

neonroundtext

on June 17, 2016 06:15 PM

I promised we’d be back, and here we are! It took us just a couple of months to get things up and rolling, but UbuCon Latin America is now being organized. This year, we decided we’ll be hosting it in Lima, Peru again, on the 5th and 6th of August!

We have now opened the Call for Papers for the conference. If you are willing to speak at UbuConLA, just fill out this form. Also, if you have any talk suggestions send us an email to info@ubuconla.org. We’ll be more than happy to see if we can get a talk in on the topic you’re interested.

On the other hand, we are looking for sponsors! If you or someone you know is interested in sponsoring our event, feel free to drop us an email to sponsors@ubuconla.org. We’ll be more than happy to answer all your questions about sponsorship, so email us anytime.

Registration for the conference will open soon, so stay tuned!


on June 17, 2016 05:42 AM

June 16, 2016

The Forums Council is happy to announce our newest Ubuntu Member via forums contributions: Rex Bouwense
LP : https://launchpad.net/~rexbouwense
Wiki : https://wiki.ubuntu.com/rexbouwense
Forums profile : http://ubuntuforums.org/member.php?u=767164

Rex has been a long time contributor to the forums, thanks much for all you do!


on June 16, 2016 08:36 PM

We are in the second week of the Snappy Playpen and it’s simply beautiful to see how new folks are coming in and collaborating on getting snaps done, improving existing ones, answering questions and working together. The team spirit is strong and we’re all learning loads.

Keep up the good work everyone! 🙂

It’s only Thursday, but let’s have a quick look at the highlights of this week.

New snaps

  • Added Tyrant Unleashed Optimizer, by Christian Ehrhardt
  • Added mpv git build, by Alan Pope
  • Added imagemagick6-stable, by Andy Keech
  • Added keepassx, by Leo Arias
  • Added consul, by Leo Arias
  • dcos-cli snap, by Leo Arias
  • deis workflow snap, by Leo Arias

Work in progress snaps

Some of these snaps still need help, so take a look at the list of our open PRs and dive in.

Fritzing snap

  • Initial working version of Fritzing, by Will Cooke
  • Continued work on GIMP git, by Andy Keech
  • Proposed Imagemagick 7 from Git, by Andy Keech
  • Initial working version of Sylpheed, by Simon Quigley
  • Pidgin snap from git, by Simon Quigley
  • TeXworks snap, by Galileo Sartor
  • MATE Desktop snap, by Martin Wimpress

Updated snaps

  • Moved Ristretto to use a common GTK part (David Planella)
  • Moved Leafpad to use a common GTK part (David Planella)
  • Moved Galculator to use a common GTK part (Simon Quigley)
  • Cleaned and fixed snaps

General updates

  • Didier Roche fixed the Travis CI
  • Created a reusable wiki part for GTK apps

Get involved

If you want to get involved, it’s easy:

on June 16, 2016 02:15 PM

S09E16 – Reluctant Dragon - Ubuntu Podcast

Ubuntu Podcast from the UK LoCo

It’s Episode Sixteen of Season Nine of the Ubuntu Podcast! Alan Pope, Mark Johnson, Laura Cowen and Martin Wimpress are connected and speaking to your brain.

We’re here again!

In this week’s show:

  • We discuss having the motivation to adopt Ubuntu Devices whole-heartedly.

  • We also discuss going on holiday, the on-going saga of fixing Mark’s laptop, and naming the new Entroware laptop.

  • We share a Command Line Lurve – pv, which was sent in by The Hatterman. It monitors the progress of data through a pipe and is great when used in conjunction with dd to provide a progress bar. For example:

    pv -pa ubuntu_16.04.iso  |  sudo  dd  of=/dev/sdb
    
    

    -p, --progress show progress bar
    -a, --average-rate show data transfer average rate counter

  • And we go over all your amazing feedback – thanks for sending it – please keep sending it!

    • For the disappearing cursor fix, create the file /etc/X11/xorg.conf.d/90-cursoron.conf, and put in it the following content:
    #--makes mouse cursor re-appear
    Section "Device"
    Identifier "Device0"
    Driver "intel"
    Option "AccelMethod" "uxa"
    Option "SWCursor" "on"
    EndSection
    #--
    

    Be careful using UXA acceleration on modern Intel IGPs. It has been superseded by SNA for some years now and may significantly degrade graphics performance.

  • This week’s cover image is taken from http://ayay.co.uk/.

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

on June 16, 2016 02:00 PM