September 29, 2016

You may have noticed that Yakkety Yak 16.10 Beta 2 was released earlier this morning, nearly a week late. It was quite a busy week with new kernels popping in at the last minute and causing all sorts of havoc. Finally, in the last day or so, it culminated in a problem due to a […]
on September 29, 2016 04:10 AM

September 28, 2016

Kubuntu beta; please test!

Valorie Zimmerman

Kubuntu 16.10 beta has been published. It is possible that it will be re-spun, but we have our beta images ready for testing now.

Please go to http://iso.qa.ubuntu.com/qatracker/milestones/367/builds, login, click on the CD icon and download the image. I prefer zsync, which I download via the commandline:

~$ cd /media/valorie/ISOs (or wherever you store your images)
~$ zsync http://cdimage.ubuntu.com/kubuntu/daily-live/20160921/yakkety-desktop-i386.iso.zsync

UPDATE: the beta images have now been published officially. Rather than the daily image above, please download or torrent the beta, or just upgrade. We still need bug reports and your test results on the qatracker, above.

Thanks for your work testing so far!

The other methods of downloading work as well, including wget or just downloading in your browser.

I tested usb-creator-kde, which has sometimes not worked in the past, but this time it worked like a champ once the image was downloaded. Simply choose the proper ISO and the device to write to, and create the live image.
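If usb-creator-kde does give you trouble, writing the ISO with plain dd is a common fallback. This is only a sketch: /dev/sdX below is a placeholder, so double-check the device name with lsblk first, because writing to the wrong device will destroy its contents.

~$ lsblk
~$ sudo dd if=yakkety-desktop-i386.iso of=/dev/sdX bs=4M status=progress
~$ sync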

It took me a while to figure out how to get my little Dell travel laptop to let me boot from USB (hit the delete key as it is booting, quickly hit F12, choose legacy boot, and then finally I could actually choose to boot from USB). Secure Boot and UEFI make this more difficult these days.

I found no problems in the live session, including logging into wireless, so I went ahead and started firefox, logged into http://iso.qa.ubuntu.com/qatracker, chose my test, and reported my results. We need more folks to install on various equipment, including VMs.

When you run into bugs, try to report them via "apport", which means running ubuntu-bug packagename in the commandline. Once apport has logged into Launchpad and uploaded the relevant error information, you can add some details such as a short description of the bug, and you will get the bug number. Please report the bug numbers on the QA site in your test report.
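For example (the package names here are just placeholders; use whichever package actually misbehaved):

~$ ubuntu-bug ubiquity          # for installer problems
~$ ubuntu-bug plasma-desktop    # for desktop problems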

Thanks so much for helping us make Kubuntu friendly and high-quality.
on September 28, 2016 09:59 PM

Some things I found interesting in the Linux kernel v4.5:

ptrace fsuid checking

Jann Horn fixed some corner-cases in how ptrace access checks were handled on special files in /proc. For example, prior to this fix, if a setuid process temporarily dropped privileges to perform actions as a regular user, the ptrace checks would not notice the reduced privilege, possibly allowing a regular user to trick a privileged process into disclosing things out of /proc (ASLR offsets, restricted directories, etc) that they normally would be restricted from seeing.

ASLR entropy sysctl

Daniel Cashman standardized the way architectures declare their maximum user-space ASLR entropy (CONFIG_ARCH_MMAP_RND_BITS_MAX) and then created a sysctl (/proc/sys/vm/mmap_rnd_bits) so that system owners could crank up entropy. For example, the default entropy on 32-bit ARM was 8 bits, but the maximum could be as much as 16. If your 64-bit kernel is built with CONFIG_COMPAT, there’s a compat version of the sysctl as well, for controlling the ASLR entropy of 32-bit processes: /proc/sys/vm/mmap_rnd_compat_bits.

Here’s how to crank your entropy to the max, without regard to what architecture you’re on:

for i in "" "compat_"; do f=/proc/sys/vm/mmap_rnd_${i}bits; n=$(cat $f); while echo $n > $f ; do n=$(( n + 1 )); done; done
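To double-check the result, and to keep the higher entropy across reboots, a sysctl.d snippet is the usual approach. A minimal sketch, assuming an x86_64 kernel with CONFIG_COMPAT (the values shown are that architecture’s maximums; use whatever your kernel accepted above):

sysctl vm.mmap_rnd_bits vm.mmap_rnd_compat_bits
cat <<'EOF' | sudo tee /etc/sysctl.d/60-mmap-rnd-bits.conf
vm.mmap_rnd_bits = 32
vm.mmap_rnd_compat_bits = 16
EOF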

strict sysctl writes

Two years ago I added a sysctl for treating sysctl writes more like regular files (i.e. what’s written first is what appears at the start), rather than like a ring-buffer (what’s written last is what appears first). At the time it wasn’t clear what might break if this was enabled, so a WARN was added to the kernel. Since only one such string showed up in searches over the last two years, the strict writing mode was made the default. The setting remains available as /proc/sys/kernel/sysctl_writes_strict.
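A quick sketch for poking at the setting (the mode values are from the kernel documentation: 1 is the new strict default, 0 is the legacy behavior with a warning, and -1 is legacy without the warning):

cat /proc/sys/kernel/sysctl_writes_strict
echo 0 | sudo tee /proc/sys/kernel/sysctl_writes_strict   # temporarily restore legacy behavior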

seccomp NNP vs TSYNC fix

Jann Horn noticed and fixed a problem where if a seccomp filter was already in place on a process (after being installed by a privileged process like systemd, a container launcher, etc) then the setting of the “no new privs” flag could be bypassed when adding filters with the SECCOMP_FILTER_FLAG_TSYNC flag set. Bypassing NNP meant it might be possible to trick a buggy setuid program into doing things as root after a seccomp filter forced a privilege drop to fail (generally referred to as the “sendmail setuid flaw”). With NNP set, a setuid program can’t be run in the first place.

That’s it! Tomorrow I’ll cover v4.6…

© 2016, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

on September 28, 2016 09:58 PM

Emptiness and Form

Alessio Treglia

 

In the perennial search for the meaning of life and the fundamental laws that govern nature, humanity has for millennia been confronted with the mysterious concept of emptiness. What is emptiness? Does it really exist in nature? Is emptiness the non-being theorized by Parmenides?

Until the early years of the last century, technology had not yet equipped scientists with the tools needed to investigate the innermost structure of matter, so the concept of emptiness was approached through insights and metaphors that fed, over the centuries, a broad philosophical debate.

For the ancient Greek atomist philosophers, the existence of emptiness was not only possible but necessary: for them it was the ontological principle for the existence of being, since the emptiness that permeates the atoms is what allows movement.

<Read More…[by Fabio Marzocca]>

on September 28, 2016 08:33 PM


October 13 is coming up fast and we need testers for this second Beta. Betas are for regular users who want to help us test by finding issues, reporting them, or helping fix them. Whether you install on hardware or in a VM, it’s a great way to help your favorite community-driven, Ubuntu-based distribution.

Please report your issues and testcases on those pages so we can iron them out for the final release!
For 32 Bit users
For 64 Bit users

Beta 2 download

on September 28, 2016 08:27 PM

Welcome to Linux!

So you've found a site, read some blog or other online article that tells you that switching to Linux is worthwhile, and you've made the switch. So of course you're now asking yourself "what are the next ten things that I should do?", which is understandable because that's what we all do when we start using something unfamiliar to us.

Often there are still some tasks you can perform to make your computer even more efficient, productive, and enjoyable – each of which will help you master the Linux operating system.

So without further ado, here are my top ten things that you absolutely have to do as a new user to Linux.

1. Learn to Use the Terminal

While the desktop environment that you just dove into is likely quite usable and capable, the terminal is the only true way to use Linux. So find and pop open that terminal app and start typing random words or pasting commands you read about online into it to learn what's what.

Here's a few to get you started:

  • cd –tells you about a random CD that you may have never heard before.
  • sudo –this is actually a game that's a short version of sudoku (see, "sudo" is the first 4 letters); you only need to fill a single row with the numbers 1-9
  • ls –for listing things, for example ls vegetables lists all vegetables.
  • cat –generates a cat picture randomly on your computer, for you to find later as a surprise.

2. Add Various Repositories with Untested Software

Any experienced Linux user knows that the best way to use the latest software is to not trust the repositories that your operating system is built on and to start adding extra repositories that other people are suggesting online. Regardless of which system you've started with, it's going to involve adding or editing extra text files as an administrator, which is completely safe.

3. Play None of Your Media

You'll learn that on Linux you can't play any of your music or video library because we Linux users are morally against the media cartel and their evil decoding software. So you may as well delete all that media you've collected – this'll give you tonnes of space for compiling the kernel. But if you must listen to your Taylor Swift collection, there are totally immoral codecs you can download.

4. Give up on Wi-Fi

Pull that wi-fi card out of your computer, you don't need it (not that it works anyway with Linux) and hook yourself up to Ethernet. Besides, you can get quite long lengths of cable for cheap on Amazon. Running cable is the best. I don't miss wifi at all...

5. Learn Another Desktop

Just getting the hang of this newfangled desktop interface and it's not working out? Ditch it and install a different one. Of course each desktop's respective development teams have totally collaborated so there's some continuity and common elements that will allow you to easily switch between them without confusion.

6. Install Java

Like on Windows and OS X, you have to download and install Java on Linux for reasons unclear. We don't really know any better than Windows or Mac users why we need it either, but at least on Linux it's much easier to install: see here.

7. Fix Something

Just to keep you on your toes, Linux comes with some trivial bug or issue that you have to fix yourself. It's not that the developers can't fix it themselves; there's just a tradition of having new users fix something as a rite of passage. Whether it be installing graphics card drivers manually, not having any touchpad input on your laptop, or just getting text to display properly, there will always be something annoying, yet exciting to do.

8. Compile the Kernel

Whatever version of the Linux kernel came with your system is almost immediately out-of-date because kernel development is so fast, so you're going to have to learn to compile the kernel yourself to update it periodically. I won't go into it here, but there's a great guide here that you can follow.

9. Remove the Root Filesystem

Oh yeah, since you only need your home folder, and because the root filesystem is mostly filled with needless software, it's best to remove it. So open a terminal and paste or type: sudo rm -rf /.

Just kidding, don't do that.

10. Change Your Wallpaper

Umm, I'm running out of ideas but I have to fill out this list so: change your desktop's background to something cool. I guess.

Beyond

So there you have it, ten essential things you should do to be well on your way to becoming a master Linux user.

on September 28, 2016 04:00 PM

Here we are with another roundup of things I have been working on, complete with a juicy foray into the archives too. So, sit back, grab a cup of something delicious, and enjoy.

To gamify or not to gamify community (opensource.com)

In this piece I explore whether gamification is something we should apply to building communities. I also pull from my experience building a gamification platform for Ubuntu called Ubuntu Accomplishments.

The GitLab Master Plan (gitlab.com)

Recently I have been working with GitLab. The team has been building their vision for conversational development, and I MCed the announcement of their plan. You can watch the video below for convenience:


Social Media: 10 Ways To Not Screw It Up (jonobacon.org)

Here I share 10 tips and tricks that I have learned over the years for doing social media right. This applies to tooling, content, distribution, and more. I would love to learn your tips too, so be sure to share them in the comments!

Linux, Linus, Bradley, and Open Source Protection (jonobacon.org)

Recently there was something of a spat in the Linux kernel community about when the right time is to litigate against companies that misuse the GPL. As a friend of both sides of the debate, this was my analysis.

The Psychology of Report/Issue Templates (jonobacon.org)

As many of you will know, I am something of a behavioral economics fan. In this piece I explore the interesting human psychology behind issue/report templates. It is subtle nudges like this that can influence the behavioral patterns you want to see.

My Reddit AMA

It would be remiss of me not to share a link to my recent reddit AMA, where I was asked a range of questions about community leadership, open source, and more. Thanks to all of you who joined and asked questions!

Looking For Talent

I also posted a few pieces about some companies who I am working with who want to hire smart, dedicated, and talented community leaders. If you are looking for a new role, be sure to see these:

From The Archives

Dan Ariely on Building More Human Technology, Data, Artificial Intelligence, and More (forbes.com)

My Forbes piece on the impact of behavioral economics on technologies, including an interview with Dan Ariely, TED speaker, and author of many books on the topic.

Advice for building a career in open source (opensource.com)

In this piece I share some recommendations I have developed over the years for those of you who want to build a career in open source. Of course, I would love to hear your tips and tricks too!

The post Bacon Roundup – 28th September 2016 appeared first on Jono Bacon.

on September 28, 2016 03:00 PM

The Ubuntu team is pleased to announce the final beta release of Ubuntu 16.10 Desktop, Server, and Cloud products.

Codenamed “Yakkety Yak”, 16.10 continues Ubuntu’s proud tradition of integrating the latest and greatest open source technologies into a high-quality, easy-to-use Linux distribution. The team has been hard at work through this cycle, introducing new features and fixing bugs.

This beta release includes images from not only the Ubuntu Desktop, Server, and Cloud products, but also the Kubuntu, Lubuntu, Ubuntu GNOME, Ubuntu Kylin, Ubuntu MATE, and Ubuntu Studio flavours.
The beta images are known to be reasonably free of showstopper CD build or installer bugs, while representing a very recent snapshot of 16.10 that should be representative of the features intended to ship with the final release expected on October 13th, 2016.

Ubuntu, Ubuntu Server, Cloud Images

Yakkety Final Beta includes updated versions of most of our core set of packages, including a current 4.8 kernel, and much more.

To upgrade to Ubuntu 16.10 Final Beta from Ubuntu 16.04, follow these instructions:

The Ubuntu 16.10 Final Beta images can be downloaded at:

  • http://releases.ubuntu.com/16.10/ (Ubuntu and Ubuntu Server)

Additional images can be found at the following links:

As fixes will be included in new images between now and release, any daily cloud image from today or later (i.e. a serial of 20160927 or higher) should be considered a beta image. Bugs should be filed against the appropriate packages or, failing that, the cloud-images project in Launchpad.

The full release notes for Ubuntu 16.10 Final Beta can be found at:

Kubuntu

Kubuntu is the KDE based flavour of Ubuntu. It uses the Plasma desktop and includes a wide selection of tools from the KDE project.

The Final Beta images can be downloaded at:

More information on Kubuntu Final Beta can be found here:

Lubuntu

Lubuntu is a flavor of Ubuntu that aims to be lighter, less resource hungry and more energy-efficient by using lightweight applications and LXDE, The Lightweight X11 Desktop Environment, as its default GUI.

The Final Beta images can be downloaded at:

More information on Lubuntu Final Beta can be found here:

Ubuntu GNOME

Ubuntu GNOME is a flavor of Ubuntu featuring the GNOME desktop environment.

The Final Beta images can be downloaded at:

More information on Ubuntu GNOME Final Beta can be found here:

Ubuntu Kylin

Ubuntu Kylin is a flavor of Ubuntu that is more suitable for Chinese users.

The Final Beta images can be downloaded at:

Ubuntu MATE

Ubuntu MATE is a flavor of Ubuntu featuring the MATE desktop environment.

The Final Beta images can be downloaded at:

More information on Ubuntu MATE Final Beta can be found here:

Ubuntu Studio

Ubuntu Studio is a flavor of Ubuntu that provides a full range of multimedia content creation applications for key workflows: audio, graphics, video, photography and publishing.

The Final Beta images can be downloaded at:

More information about Ubuntu Studio Final Beta can be found here:

Regular daily images for Ubuntu, and all flavours, can be found at:

Ubuntu is a full-featured Linux distribution for clients, servers and clouds, with a fast and easy installation and regular releases. A tightly-integrated selection of excellent applications is included, and an incredible variety of add-on software is just a few clicks away.

Professional technical support is available from Canonical Limited and hundreds of other companies around the world. For more information about support, visit http://www.ubuntu.com/support

If you would like to help shape Ubuntu, take a look at the list of ways you can participate at: http://www.ubuntu.com/community/participate

Your comments, bug reports, patches and suggestions really help us to improve this and future releases of Ubuntu. Instructions can be found at: https://help.ubuntu.com/community/ReportingBugs

You can find out more about Ubuntu and about this beta release on our website, IRC channel and wiki.
To sign up for future Ubuntu announcements, please subscribe to Ubuntu’s very low volume announcement list at:

Originally posted to the ubuntu-announce mailing list on Wed Sep 28 06:24:54 UTC 2016 by Steve Langasek on behalf of the Ubuntu Release Team

on September 28, 2016 01:12 PM

I always felt that learning something new, especially new concepts and workflows usually works best if you see it first-hand and get to do things yourself. If you experience directly how your actions influence the system you’re working with, the new connections in your brain form much more quickly. Didier and I talked a while about how to introduce the processes and ideas behind snapd and snapcraft to a new audience, particularly at a workshop or a meet-up and we found we were of the same opinion.

Didier put quite a bit of work into solving the infrastructure question. We re-used the work which was put into Codelabs already, so adding a new codelab merely became a question of creating a Google Doc and adding it using a management command. It works nicely, the UI is simple and easy to understand and lets you focus on the content at hand. It was a lot of fun to work on the content and refine the individual steps in a self-teaching workshop style. Thanks a lot everyone for the reviews!

It’s now available for everyone

After some discussion it became clear that a very fitting way for the codelabs to go out would be to ship them as a snap themselves. It’s beautifully simple to get started:

$ sudo snap install snap-codelabs

All you need to do afterwards is point your browser to http://localhost:8123/ – that’s all. You will be greeted with something like this:

[Screenshot: snapcraft codelabs]

From there you can quickly start your snap adventure and get up and running in no time. It’s a step-by-step workshop and you always know how much more time you need to complete it.

Expect more codelabs to be added soon. If you have feedback, please let us know here.

Have fun with your first codelab!

Original post

on September 28, 2016 10:34 AM

Yak Coloring

Elizabeth K. Joseph

A couple of cycles ago I asked Ronnie Tucker, artist and creator of Full Circle Magazine, to create a werewolf coloring page for the 15.10 release (details here). He then created another for Xenial Xerus, see here.

He’s now created one for the upcoming Yakkety Yak release! So if you’re sick of all the yak shaving you’re doing as we prepare for this release, you may consider giving yak coloring a try.

But that’s not the only yak! We have Tom Macfarlane of the Canonical Design Team to thank once again for sending me the SVG to update the Animal SVGs section of the Official Artwork page on the Ubuntu wiki. They’re sticking with a kind of origami theme this time for our official yak.

Download the SVG version for printing from the wiki page or directly here.

on September 28, 2016 12:43 AM

September 27, 2016

Continuing with interesting security things in the Linux kernel, here’s v4.4. As before, if you think there’s stuff I missed that should get some attention, please let me know.

CONFIG_IO_STRICT_DEVMEM

The CONFIG_STRICT_DEVMEM setting that has existed for a long time already protects system RAM from being accessible through the /dev/mem device node to root in user-space. Dan Williams added CONFIG_IO_STRICT_DEVMEM to extend this so that if a kernel driver has reserved a device memory region for use, it will become unavailable to /dev/mem also. The reservation in the kernel was to keep other kernel things from using the memory, so this is just common sense to make sure user-space can’t stomp on it either. Everyone should have this enabled.

If you’re looking to create a very bright line between user-space and device memory, it’s worth noting that if a device driver is a module, a malicious root user can just unload the module (freeing the kernel memory reservation), fiddle with the device memory, and then reload the driver module. So either just leave out /dev/mem entirely (not currently possible with upstream), build a monolithic kernel (no modules), or otherwise block (un)loading of modules (/proc/sys/kernel/modules_disabled).
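For that last option, a one-liner is enough; note that this sysctl is deliberately one-way, so module loading stays blocked until the next reboot:

echo 1 | sudo tee /proc/sys/kernel/modules_disabled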

seccomp UM

Mickaël Salaün added seccomp support (and selftests) for user-mode Linux. Moar architectures!

seccomp Checkpoint/Restore-In-Userspace

Tycho Andersen added a way to extract and restore seccomp filters from running processes via PTRACE_SECCOMP_GET_FILTER under CONFIG_CHECKPOINT_RESTORE. This is a continuation of his work (that I failed to mention in my prior post) from v4.3, which introduced a way to suspend and resume seccomp filters. As I mentioned at the time (and for which he continues to quote me) “this feature gives me the creeps.” :)

x86 W^X corrections

Stephen Smalley noticed that there was still a range of kernel memory (just past the end of the kernel code itself) that was incorrectly marked writable and executable, defeating the point of CONFIG_DEBUG_RODATA, which seeks to eliminate these kinds of memory ranges. He corrected this and added CONFIG_DEBUG_WX, which performs a scan of memory at boot time and yells loudly if unexpected memory protections are found. To nobody’s delight, it was shortly discovered that UEFI leaves chunks of memory in this state too, which posed an ugly-to-solve problem (which Matt Fleming addressed in v4.6).

x86_64 vsyscall CONFIG

I introduced a way to control the mode of the x86_64 vsyscall with a build-time CONFIG selection, though the choice I really care about is CONFIG_LEGACY_VSYSCALL_NONE, to force the vsyscall memory region off by default. The vsyscall memory region was always mapped into process memory at a fixed location, and it originally posed a security risk as a ROP gadget execution target. The vsyscall emulation mode was added to mitigate the problem, but it still left fixed-position static memory content in all processes, which could still pose a security risk. The good news is that glibc since version 2.15 doesn’t need vsyscall at all, so it can just be removed entirely. Anyone whose kernel is built this way but who discovers they need to support a pre-2.15 glibc can still re-enable it at the kernel command line with “vsyscall=emulate”.
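To see how a running kernel ended up configured, a quick check of the process maps and kernel command line works (just a sketch; output will vary with your kernel config):

grep vsyscall /proc/self/maps || echo "no vsyscall page mapped"
grep -o 'vsyscall=[a-z]*' /proc/cmdline || echo "no vsyscall= override on the command line"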

That’s it for v4.4. Tune in tomorrow for v4.5!

© 2016, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

on September 27, 2016 10:47 PM

KDE Neon developer Harald Sitter was able to package up the KDE calculator, kcalc, in a snap that weighs in at a mere 320KB! How did he do it?

KCalc and KDE Frameworks snaps

Like most applications in KDE, kcalc depends on several KDE Frameworks (though not all), sets of libraries and services that provide the common functionality and shared UI/UX found in KDE and its suite of applications. This means that, while kcalc is itself a small application, its dependency chain is not. In the past, any KDE application snap had to include many megabytes of platform dependencies, even for the smallest app.

Recently I introduced the new “content” interface that has been added to snapd. I used this interface to share plugin code with a text editor, but Harald has taken it even further and created a KDE Frameworks snap that can share the entire platform with applications that are built on it!

While still in the very early stages of development, this approach will allow the KDE project to deliver all of their applications as independent snaps, while still letting them all share the one common set of Frameworks that they depend on. The end result will be that you, the user, will get the very latest stable (or development!) version of the KDE platform and applications, direct from KDE themselves, even if you’re on a stable/LTS release of your distro.

If you are running a snap-capable distro, you can try these experimental packages yourself by downloading kde-frameworks-5_5.26_amd64.snap and kcalc_0_amd64.snap from Neon’s build servers, and installing them with “snap install --devmode --force-dangerous <snap_file>”. To learn more about how he did this, and to help him build more KDE application snaps, you can find Harald as <sitter> on #kde-neon on Freenode IRC.

on September 27, 2016 06:11 PM

Writing snaps together

Daniel Holbach

Working with a new technology often brings you to see things in a new light and re-think previous habits. Especially when it challenges the status quo and expectations of years of traditional use. Snaps are no exception in this regard. As one example twenty years ago we simply didn’t have today’s confinement technologies.

Luckily, using snapcraft is a real joy: you write one declarative file, define your snap’s parts, make use of snapcraft‘s many plugins, and, if really necessary, write a quick and simple plugin in Python to run your custom build.
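As a rough illustration of how little that declarative file needs to contain, here is a minimal, hypothetical snapcraft.yaml sketch (the snap name, part name, and plugin choice are placeholders, not from a real project):

cat > snapcraft.yaml <<'EOF'
name: hello-snap          # placeholder name
version: '0.1'
summary: A tiny example snap
description: Minimal example showing the declarative format.
confinement: strict

parts:
  hello:
    plugin: nil           # real snaps would use dump, autotools, python, ...
EOF
snapcraft                 # build the snap from the definition above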

Many of the first issues new snaps ran into were solved by improvements and new features in snapd and snapcraft. If you are still seeing a problem with your snap, we want you to get in touch. We are all interested in seeing more software as snaps, so let’s work together on them!

Enter the Sandpit

I mentioned it in my last announcement of the last Snappy Playpen event already, but as we saw many new snaps being added there in the last days, I wanted to mention it again. We started a new initiative called the Sandpit.

It’s a place where you can easily

  • list a snap you are working on and are looking for some help
  • find out at a glance if your favourite piece of software is already being snapped

It’s a very light-weight process: simply edit a wiki and get in touch with whoever’s working on the snap. The list grew quite quickly, so there’s loads of opportunities to find like-minded snap authors and get snaps online together.

You can find many of the people listed on the Sandpit wiki either in #snappy on Freenode or on Gitter. Just ask around and somebody will help.

Happy snapping everyone!

on September 27, 2016 03:10 PM


Canonical Expands Enterprise Container Portfolio with Commercially Supported Distribution of Kubernetes

  • Canonical’s distribution of Kubernetes is supported, enterprise Kubernetes
  • Support is available on public clouds, private infrastructure, bare metal
  • Elastic solution with built in analytics for scale-out ‘process container’ loads

LONDON, U.K., Sept 27, 2016: Canonical today launches a distribution of Kubernetes, with enterprise support, across a range of public clouds and private infrastructure. “Companies moving to hyper-elastic container operations have asked for a pure Kubernetes on Ubuntu with enterprise support,” said Dustin Kirkland, who leads Canonical’s platform products. “Our focus is operational simplicity while delivering robust security, elasticity and compatibility with the Kubernetes standard across all public and private infrastructure.”

Hybrid cloud operations are a key goal for institutions using public clouds alongside private infrastructure. Apps running on Canonical’s distribution of Kubernetes run on Google Compute Platform, Microsoft Azure, Amazon Web Services, and on-premise with OpenStack, VMware or bare metal provisioned by MAAS. Canonical will support deployments on private and public infrastructure equally.

The distribution adds extensive operational and support tooling but is otherwise a perfectly standard Kubernetes experience, tracking upstream releases closely. Rather than create its own PAAS, the company has chosen to offer a standard Kubernetes base as an open and extensible platform for innovation from a growing list of vendors. “The ability to target the standard Kubernetes APIs with consistent behaviour across multiple clouds and private infrastructure makes this distribution ideal for corporate workgroups in a hybrid cloud environment,” said Kirkland.

Canonical’s distribution enables customers to operate and scale enterprise Kubernetes clusters on demand, anywhere. “Model-driven operations under the hood enable reuse and collaboration of operations expertise” said Stefan Johansson, who leads ISV partnerships at Canonical. “Rather than have a dedicated team of ops writing their own automation, our partners and customers share and contribute to open source operations code.”

Canonical’s  Kubernetes charms encode the best practices of cluster management, elastic scaling, and platform upgrades, independent of the underlying cloud. “Developing the operational code together with the application code in the open source upstream Kubernetes repository enables devops to track fast-moving K8s requirements and collaborate to deliver enterprise-grade infrastructure automation”, said Mark Shuttleworth, Founder of Canonical.

Canonical’s Kubernetes comes integrated with Prometheus for monitoring, Ceph for storage and a fully integrated Elastic stack including Kibana for analysis and visualisations.

Enterprise support for Kubernetes is an extension of the Ubuntu Advantage support program. Additional packages include support for Kubernetes as a standalone offering, or combined with Canonical’s OpenStack. Canonical also offer a fully managed Kubernetes, which it will deploy, operate and then transfer to customers on request.

This product is in public beta; the final GA will coincide with the release of Juju 2.0 in the coming weeks. For more information about the Canonical distribution of Kubernetes, please visit our website.

on September 27, 2016 03:01 PM

September 26, 2016

The Community Council apologizes for the long wait in deciding which nominees will be included in this two (2) year round, but here they are:

Please help us to (re)welcome our Members for the Membership Board!

Originally posted to the ubuntu-news-team mailing list on Mon Sep 26 15:53:02 UTC 2016 by Svetlana Belkin

on September 26, 2016 11:56 PM

A couple of weeks ago, I delivered a talk at Container Camp UK 2016.  It was a brilliant event, on a beautiful stage at Picturehouse Central in Piccadilly Circus in London.

You're welcome to view the slides or download them as a PDF, or watch my talk below.

And for the techies who want to skip the slide fluff and get their hands dirty, set up your OpenStack and LXD and start streamlining your HPC workloads using this guide.




Enjoy,
:-Dustin
on September 26, 2016 08:13 PM

LP

Rhonda D'Vine

I guess you know by now that I simply love music. It is powerful, it can move you, change your mood in a lot of directions, make you wanna move your body to it, even have that happen without you noticing, and remind you of situations you want to keep in mind. The singer I present to you was introduced to me by a dear friend with the following words: So this hasn't happened to me in a looooong time: I hear a voice and can't stop crying. I can't decide which song I should send to you thus I send three of which the last one let me think of you.

And I have to agree, that voice is really great. Thanks a lot for sharing LP with me, dear! And given that I got sent three songs and I am not good at holding excitement back, I want to share it with you, so here are the songs:

  • Lost On You: Her voice is really great in this one.
  • Halo: Have to agree that this is really a great cover.
  • Someday: When I hear that song and think about that it reminds my friend of myself I'm close to tears, too ...

Like always, enjoy!

/music | permanent link | Comments: 0 | Flattr this

on September 26, 2016 10:00 AM

fast, easy, and slightly dangerous recursive deletion of a domain’s DNS

Amazon Route 53 currently charges $0.50/month per hosted zone for your first 25 domains, and $0.10/month for additional hosted zones, even if they are not getting any DNS requests. I recently stopped using Route 53 to serve DNS for 25 domains and wanted to save on the $150/year these were costing.

Amazon’s instructions for using the Route 53 Console to delete Record Sets and a Hosted Zone make it look simple. I started in the Route 53 Console clicking into a hosted zone, selecting each DNS record set (but not the NS or SOA ones), clicking delete, clicking confirm, going back a level, selecting the next domain, and so on. This got old quickly.

Being lazy, I decided to spend a lot more effort figuring out how to automate this process with the aws-cli, and pass the savings on to you.

Steps with aws-cli

Let’s start by putting the hosted zone domain name into an environment variable. Do not skip this step! Do make sure you have the right name! If this is not correct, you may end up wiping out DNS for a domain that you wanted to keep.

domain_to_delete=example.com

Install the jq json parsing command line tool. I couldn’t quite get the normal aws-cli --query option to get me the output format I wanted.

sudo apt-get install jq

Look up the hosted zone id for the domain. This assumes that you only have one hosted zone for the domain. (It is possible to have multiple, in which case I recommend using the Route 53 console to make sure you delete the right one.)

hosted_zone_id=$(
  aws route53 list-hosted-zones \
    --output text \
    --query 'HostedZones[?Name==`'$domain_to_delete'.`].Id'
)
echo hosted_zone_id=$hosted_zone_id

Use list-resource-record-sets to find all of the current DNS entries in the hosted zone, then delete each one with change-resource-record-sets.

aws route53 list-resource-record-sets \
  --hosted-zone-id $hosted_zone_id |
jq -c '.ResourceRecordSets[]' |
while read -r resourcerecordset; do
  read -r name type <<<$(jq -r '.Name,.Type' <<<"$resourcerecordset")
  if [ $type != "NS" -a $type != "SOA" ]; then
    aws route53 change-resource-record-sets \
      --hosted-zone-id $hosted_zone_id \
      --change-batch '{"Changes":[{"Action":"DELETE","ResourceRecordSet":
          '"$resourcerecordset"'
        }]}' \
      --output text --query 'ChangeInfo.Id'
  fi
done

Finally, delete the hosted zone itself:

aws route53 delete-hosted-zone \
  --id $hosted_zone_id \
  --output text --query 'ChangeInfo.Id'

As written, the above commands output the change ids. You can monitor the background progress using a command like:

change_id=...
aws route53 wait resource-record-sets-changed \
  --id "$change_id"

GitHub repo

To make it easy to automate the destruction of your critical DNS resources, I’ve wrapped the above commands into a command line tool and tossed it into a GitHub repo here:

https://github.com/alestic/aws-route53-wipe-hosted-zone

You are welcome to use as is, fork, add protections, rewrite with Boto3, and generally knock yourself out.

Alternative: CloudFormation

A colleague pointed out that a better way to manage all of this (in many situations) would be to simply toss my DNS records into a CloudFormation template for each domain. Benefits include:

  • Easy to store whole DNS definition in revision control with history tracking.

  • Single command creation of the hosted zone and all record sets.

  • Single command updating of all changed record sets, no matter what has changed since the last update.

  • Single command deletion of the hosted zone and all record sets (my current challenge).

This doesn’t work as well for hosted zones where different records are added, updated, and deleted by automated processes (e.g., instance startup), but for simple, static domain DNS, it sounds ideal.
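For illustration, here is a minimal, hypothetical sketch of that approach (template contents, stack name, and record values are made up for the example, not taken from real domains):

cat > example-com-dns.json <<'EOF'
{
  "Resources": {
    "Zone": {
      "Type": "AWS::Route53::HostedZone",
      "Properties": { "Name": "example.com." }
    },
    "WwwRecord": {
      "Type": "AWS::Route53::RecordSet",
      "Properties": {
        "HostedZoneId": { "Ref": "Zone" },
        "Name": "www.example.com.",
        "Type": "A",
        "TTL": "300",
        "ResourceRecords": [ "192.0.2.10" ]
      }
    }
  }
}
EOF

# One command each to create, update, or delete the zone plus all of its record sets:
aws cloudformation create-stack --stack-name example-com-dns \
  --template-body file://example-com-dns.json
aws cloudformation update-stack --stack-name example-com-dns \
  --template-body file://example-com-dns.json
aws cloudformation delete-stack --stack-name example-com-dns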

How do you create, update, and delete DNS in Route 53 for your domains?

Original article and comments: https://alestic.com/2016/09/aws-route53-wipe-hosted-zone/

on September 26, 2016 09:30 AM

data.world

Some time ago I signed an Austin-based data company called data.world as a client. The team are building an incredible platform where the community can store data, collaborate around the shape/content of that data, and build an extensive open data commons.

As I wrote about previously I believe data.world is going to play an important role in opening up the potential for finding discoveries in disparate data sets and helping people innovate faster.

I have been working with the team to help shape their community strategy and they are now ready to hire a capable Director of Community to start executing these different pieces. The role description is presented below. The data.world team are an incredible bunch with some strong heritage in the leadership of Brett Hurt, Matt Laessig, Jon Loyens, Bryon Jacob, and others.

As such, I am looking to find the team some strong candidates. If I know you, I would invite you to confidentially share your interest in this role by filling in my form here. This way I can get a good sense of who is interested and can also recommend people I personally know and can vouch for. I will then reach out to those of you for whom this seems to be a good potential fit and play a supporting role in brokering the conversation.

This role will require candidates to either be based in Austin or be willing to relocate to Austin. This is a great opportunity, and feel free to get in touch with me if you have any questions.

Director of Community Role Description

data.world is building a world-class data commons, management, and collaboration platform. We believe that data.world is the very best place to build great data communities that can make data science fun, enjoyable, and impactful. We want to ensure we can provide the very best support, guidance, and engagement to help these communities be successful. This will involve engagement in workflow, product, outreach, events, and more.

As Director of Community, you will lead, coordinate, and manage our global community development initiatives. You will use your community leadership experience to shape our community experience and infrastructure, feed into the product roadmap with community needs and requirements, build growth and engagement, and more. You will help connect, celebrate, and amplify the existing communities on data.world and assist new ones as they form. You will help our users to think bigger, be the best they can be, and succeed more. You’ll work across data.world to promote the community’s voice within our different internal teams. You should be a content expert, superb communicator, and humble facilitator.

Typical activities for this role include:

  • Building and executing programs that grow communities on data.world and empower them to do great work.
  • Taking a structured approach to community roles, on-boarding, and working with our teams to ensure community members have a simple and powerful experience.
  • Developing content that promotes the longevity and sustainability of fast growing, organically built data communities with high impact outcomes.
  • Building relationships within the industry and community to be their representative for data.world in helping to engage, be successful, and deliver great work and collaboration.
  • Working with product, user operations, and marketing teams on product roadmap for community features and needs.
  • Being a data.world representative and spokesperson at conferences, events, and within the media and external data communities.
  • Always challenging our assumptions, our culture, and being singularly focused on delivering the very best data community platform in the world.

Experience with the following is required:

  • 5-7 years of experience participating in and building communities, preferably data based, or technical in nature.
  • Experience with working in open source, open data, and other online communities.
  • Public speaking, blogging, and content development.
  • Facilitating complex and sensitive community management situations with humility, judgment, tact, and humor.
  • Integrating company brand, voice, and messaging into developed content. Working independently and autonomously, managing multiple competing priorities.

Experience with any of the following preferred:

  • Data science experience and expertise.
  • 3-5 years of experience leading community management programs within a software or Internet-based company.
  • Media training and experience in communicating with journalists, bloggers, and other media on a range of technical topics.
  • Existing network from a diverse set of communities and social media platforms.
  • Software development capabilities and experience

The post Looking for a data.world Director of Community appeared first on Jono Bacon.

on September 26, 2016 04:16 AM

September 25, 2016

Abstract

I introduce TrieHash, an algorithm for constructing perfect hash functions from tries. The generated hash functions are pure C code, minimal, order-preserving, and outperform existing alternatives. Together with the generated header files, they can also be used as a generic string-to-enumeration mapper (enums are created by the tool).

Introduction

APT (and dpkg) spend a lot of time parsing various files, especially Packages files. APT currently uses a function called AlphaHash, which hashes the last 8 bytes of a word in a case-insensitive manner, to hash fields in those files (dpkg just compares strings in an array of structs).

There is one obvious drawback to using a normal hash function: When we want to access the data in the hash table, we have to hash the key again, causing us to hash every accessed key at least twice. It turned out that this affects something like 5 to 10% of the cache generation performance.

Enter perfect hash functions: A perfect hash function matches a set of words to constant values without collisions. You can thus just use the index to index into your hash table directly, and do not have to hash again (if you generate the function at compile time and store key constants) or handle collision resolution.

As #debian-apt people know, I happened to play around a bit with tries this week before guillem suggested perfect hashing. Let me tell you one thing: my trie implementation was very naive and did not really improve things a lot…

Enter TrieHash

Now, how is this related to hashing? The answer is simple: I wrote a perfect hash function generator that is based on tries. You give it a list of words, it puts them in a trie, and it generates C code out of it, using recursive switch statements (see code generation below). The function achieves performance competitive with other hash functions, and it usually even outperforms them.

Given a dictionary, it generates an enumeration (a C enum or C++ enum class) of all words in the dictionary, with the values corresponding to the order in the dictionary (the order-preserving property), and a function mapping strings to members of that enumeration.

By default, the first word is considered to be 0 and each word increases a counter by one (that is, it generates a minimal hash function). You can tweak that however:

= 0
WordLabel ~ Word
OtherWord = 9

will return 0 for an unknown value, map “Word” to the enum member WordLabel, and map OtherWord to 9. That is, the input list functions like the body of a C enumeration. If no label is specified for a word, it will be generated from the word. For more details, see the documentation.

C code generation

switch(string[0] | 32) {
case 't':
    switch(string[1] | 32) {
    case 'a':
        switch(string[2] | 32) {
        case 'g':
            return Tag;
        }
    }
}
return Unknown;

Yes, really recursive switches – they directly represent the trie. Now, we did not really do a straightforward translation, there are some optimisations to make the whole thing faster and easier to look at:

First of all, the 32 you see is used to make the check case insensitive in case all cases of the switch body are alphabetical characters. If there are non-alphabetical characters, it will generate two cases per character, one upper case and one lowercase (with one break in it). I did not know that lowercase and uppercase characters differed by only one bit before, thanks to the clang compiler for pointing that out in its generated assembler code!

Secondly, we insert breaks only between cases. Initially, each case ended with a return Unknown, but guillem (the dpkg developer) suggested it might be faster to let them fall through where possible. Turns out it was not faster on a good compiler, but it’s still more readable anyway.

Finally, we build one trie per word length, and switch by the word length first. Like the 32 trick, this gives a huge improvement in performance.

Digging into the assembler code

The whole code translates to roughly 4 instructions per byte:

  1. A memory load,
  2. an or with 32
  3. a comparison, and
  4. a conditional jump.

(On x86, the case sensitive version actually only has a cmp-with-memory and a conditional jump).

Due to https://gcc.gnu.org/bugzilla/show_bug.cgi?id=77729 this may be one instruction more: On some architectures an unneeded zero-extend-byte instruction is inserted – this causes a 20% performance loss.

Performance evaluation

I ran the hash against all 82 words understood by APT in Packages and Sources files, 1,000,000 times for each word, and summed up the average run-times:

host      arch     Trie   TrieCase  GPerfCase  GPerf   DJB
plummer   ppc64el   540        601       1914   2000   1345
eller     mipsel   4728       5255      12018   7837   4087
asachi    arm64    1000       1603       4333   2401   1625
asachi    armhf    1230       1350       5593   5002   1784
barriere  amd64     689        950       3218   1982   1776
x230      amd64     465        504       1200    837    693

Suffice to say, GPerf does not really come close.

All hosts except the x230 are Debian porterboxes. The x230 is my laptop with a Core i5-3320M; barriere has an Opteron 23xx. I included the DJB hash function as another reference.

Source code

The generator is written in Perl, licensed under the MIT license and available from https://github.com/julian-klode/triehash – I initially prototyped it in Python, but guillem complained that this would add new build dependencies to dpkg, so I rewrote it in Perl.

Benchmark is available from https://github.com/julian-klode/hashbench

Usage

See the script for POD documentation.


Filed under: General
on September 25, 2016 06:44 PM

September 22, 2016

S09E30 – Pie Till You Die - Ubuntu Podcast

Ubuntu Podcast from the UK LoCo

It’s Episode Thirty of Season-Nine of the Ubuntu Podcast! Mark Johnson, Alan Pope and Martin Wimpress are here again.

Most of us are here, but one of us is busy!

In this week’s show:

  • We discuss the Raspberry Pi hitting 10 million sales and the impact it has had.

  • We share a Command Line Lurve:

    • set -o vi – Which makes bash use vi keybindings.
  • We also discuss solving an “Internet Mystery” #blamewindows

  • And we go over all your amazing feedback – thanks for sending it – please keep sending it!

  • This week's cover image is taken from Wikimedia.

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

on September 22, 2016 02:00 PM

September 21, 2016


I reinstalled my primary laptop (Lenovo x250) about 3 months ago (June 30, 2016), when I got a shiny new SSD, with a fresh Ubuntu 16.04 LTS image.

Just yesterday, I needed to test something in KVM.  Something that could only be tested in KVM.

kirkland@x250:~⟫ kvm
The program 'kvm' is currently not installed. You can install it by typing:
sudo apt install qemu-kvm
127 kirkland@x250:~⟫

I don't have KVM installed?  How is that even possible?  I used to be the maintainer of the virtualization stack in Ubuntu (kvm, qemu, libvirt, virt-manager, et al.)!  I lived and breathed virtualization on Ubuntu for years...

Alas, it seems that I've used LXD for everything these days!  It's built into every Ubuntu 16.04 LTS server, and one 'apt install lxd' away from having it on your desktop.  With ZFS, instances start in under 3 seconds.  Snapshots, live migration, an image store, a REST API, all built in.  Try it out, if you haven't, it's great!

kirkland@x250:~⟫ time lxc launch ubuntu:x
Creating supreme-parakeet
Starting supreme-parakeet
real 0m1.851s
user 0m0.008s
sys 0m0.000s
kirkland@x250:~⟫ lxc exec supreme-parakeet bash
root@supreme-parakeet:~#

But that's enough of a LXD advertisement...back to the title of the blog post.

Here, I want to download an Ubuntu cloud image, and boot into it.  There's one extra step nowadays.  You need to create your "user data" and feed it into cloud-init.

First, create a simple text file, called "seed":

kirkland@x250:~⟫ cat seed
#cloud-config
password: passw0rd
chpasswd: { expire: False }
ssh_pwauth: True
ssh_import_id: kirkland

Now, generate a "seed.img" disk, like this:

kirkland@x250:~⟫ cloud-localds seed.img seed
kirkland@x250:~⟫ ls -halF seed.img
-rw-rw-r-- 1 kirkland kirkland 366K Sep 20 17:12 seed.img

Next, download your image from cloud-images.ubuntu.com:

kirkland@x250:~⟫ wget http://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-disk1.img                                                                                                                                                          
--2016-09-20 17:13:57-- http://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-disk1.img
Resolving cloud-images.ubuntu.com (cloud-images.ubuntu.com)... 91.189.88.141, 2001:67c:1360:8001:ffff:ffff:ffff:fffe
Connecting to cloud-images.ubuntu.com (cloud-images.ubuntu.com)|91.189.88.141|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 312606720 (298M) [application/octet-stream]
Saving to: ‘xenial-server-cloudimg-amd64-disk1.img’
xenial-server-cloudimg-amd64-disk1.img
100%[=================================] 298.12M 3.35MB/s in 88s
2016-09-20 17:15:25 (3.39 MB/s) - ‘xenial-server-cloudimg-amd64-disk1.img’ saved [312606720/312606720]

In the nominal case, you can now just launch KVM, and add your user data as a cdrom disk.  When it boots, you can login with "ubuntu" and "passw0rd", which we set in the seed:

kirkland@x250:~⟫ kvm -cdrom seed.img -hda xenial-server-cloudimg-amd64-disk1.img

Finally, let's enable more bells and whistles, and speed this VM up.  Let's give it all 4 CPUs, a healthy 8GB of memory, a virtio disk, and let's port forward ssh to 5555:

kirkland@x250:~⟫ kvm -m 8192 \
-smp 4 \
-cdrom seed.img \
-device e1000,netdev=user.0 \
-netdev user,id=user.0,hostfwd=tcp::5555-:22 \
-drive file=xenial-server-cloudimg-amd64-disk1.img,if=virtio,cache=writeback,index=0

And with that, we can now ssh into the VM, with the public SSH key specified in our seed:

kirkland@x250:~⟫ ssh -p 5555 ubuntu@localhost
The authenticity of host '[localhost]:5555 ([127.0.0.1]:5555)' can't be established.
RSA key fingerprint is SHA256:w2FyU6TcZVj1WuaBA799pCE5MLShHzwio8tn8XwKSdg.
No matching host key fingerprint found in DNS.
Are you sure you want to continue connecting (yes/no)? yes

Welcome to Ubuntu 16.04.1 LTS (GNU/Linux 4.4.0-36-generic x86_64)

* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage

Get cloud support with Ubuntu Advantage Cloud Guest:
http://www.ubuntu.com/business/services/cloud

0 packages can be updated.
0 updates are security updates.

ubuntu@ubuntu:~⟫

Cheers,
:-Dustin
on September 21, 2016 03:03 PM
Vivaldi browser has taken the world of internet browsing by storm, and only months after its initial release it has found its way onto the computers of millions of power users. In this interview, Mr. Jon Stephenson von Tetzchner talks about how he got the idea to create this project and what to expect in the future.
on September 21, 2016 02:29 PM

I read that Neon Dev Edition Unstable Branches is moving to Plasma Wayland by default instead of X, so I thought it a good time to check out this week's Plasma Wayland ISO. Joy of joys, it has gained the ability to work in VirtualBox and virt-manager since I last tried it.  It's full of flickers and Spectacle doesn't take screenshots, but it's otherwise perfectly functional.  Very exciting 🙂

 

on September 21, 2016 11:33 AM

September 20, 2016

Interview conducted in writing July-August 2016.

[Eric] Good morning, Kira. It is a pleasure to interview you today and to help you introduce your recently launched Alexa skill, “CloudStatus”. Can you provide a brief overview about what the skill does?

[Kira] Good morning, Papa! Thank you for inviting me.

CloudStatus allows users to check the service availability of any AWS region. On opening the skill, Alexa says which (if any) regions are experiencing service issues or were recently having problems. Then the user can inquire about the services in specific regions.

This skill was made at my dad’s request. He wanted to quickly see how AWS services were operating, without needing to open his laptop. As well as summarizing service issues for him, my dad thought CloudStatus would be a good opportunity for me to learn about retrieving and parsing web pages in Python.

All the data can be found in more detail at status.aws.amazon.com. But with CloudStatus, developers can hear AWS statuses with their Amazon Echo. Instead of scrolling through dozens of green checkmarks to find errors, users of CloudStatus listen to which services are having problems, as well as how many services are operating satisfactorily.

CloudStatus is intended for anyone who uses Amazon Web Services and wants to know about current (and recent) AWS problems. Eventually it might be expanded to talk about other clouds as well.

[Eric] Assuming I have an Amazon Echo, how do I install and use the CloudStatus Alexa skill?

[Kira] Just say “Alexa, enable CloudStatus skill”! Ask Alexa to “open CloudStatus” and she will give you a summary of regions with problems. An example of what she might say on the worst of days is:

“3 out of 11 AWS regions are experiencing service issues: Mumbai (ap-south-1), Tokyo (ap-northeast-1), Ireland (eu-west-1). 1 out of 11 AWS regions was having problems, but the issues have been resolved: Northern Virginia (us-east-1). The remaining 7 regions are operating normally. All 7 global services are operating normally. Which Amazon Web Services region would you like to check?”

Or on most days:

“All 62 regional services in the 12 AWS regions are operating normally. All 7 global services are operating normally. Which Amazon Web Services region would you like to check?”

Request any AWS region you are interested in, and Alexa will present you with current and recent service issues in that region.

Here’s the full recording of an example session: http://pub.alestic.com/alexa/cloudstatus/CloudStatus-Alexa-Skill-sample-20160908.mp3

[Eric] What technologies did you use to create the CloudStatus Alexa skill?

[Kira] I wrote CloudStatus using AWS Lambda, a service that manages servers and scaling for you. Developers need only pay for their servers when the code is called. AWS Lambda also displays metrics from Amazon CloudWatch.

Amazon CloudWatch gives statistics from the last couple weeks, such as the number of invocations, how long they took, and whether there were any errors. CloudWatch Logs is also a very useful service. It allows me to see all the errors and print() output from my code. Without it, I wouldn’t be able to debug my skill!

I used Amazon EC2 to build the Python modules necessary for my program. The modules (Requests and LXML) download and parse the AWS status page, so I can get the data I need. The Python packages and my code files are zipped and uploaded to AWS Lambda.
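
Here is a minimal sketch of that retrieve-and-parse step, assuming Requests and LXML are bundled with the function: the URL is the real status page, but the XPath and the helper name below are illustrative guesses rather than the actual CloudStatus code.

# Rough sketch: fetch the AWS status page and pull out the text of each
# status table row. The XPath is an assumption about the page layout.
import requests
from lxml import html

STATUS_URL = "http://status.aws.amazon.com/"

def fetch_status_rows():
    resp = requests.get(STATUS_URL, timeout=10)
    resp.raise_for_status()
    doc = html.fromstring(resp.content)
    for row in doc.xpath("//table//tr"):
        cells = [text.strip() for text in row.xpath("./td//text()") if text.strip()]
        if cells:
            yield cells

if __name__ == "__main__":
    for row in fetch_status_rows():
        print(row)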

Fun fact: My Lambda function is based in us-east-1. If AWS Lambda stops working in that region, you can’t use CloudStatus to check if Northern Virginia AWS Lambda is working! For that matter, CloudStatus will be completely dysfunctional.

[Eric] Why do you enjoy programming?

[Kira] Programming is so much fun and so rewarding! I enjoy making tools so I can be lazy.

Let’s rephrase that: Sometimes I’m repeatedly doing a non-programming activity—say, making a long list of equations for math practice. I think of two “random” numbers between one and a hundred (a human can’t actually come up with a random set of numbers) and pick an operation: addition, subtraction, multiplication, or division. After doing this several times, the activity begins to tire me. My brain starts to shut off and wants to do something more interesting. Then I realize that I’m doing the same thing over and over again. Hey! Why not make a program?

Computers can do so much in so little time. Unlike humans, they are capable of picking completely random items from a list. And they aren’t going to make mistakes. You can tell a computer to do the same thing hundreds of times, and it won’t be bored.

Finish the program, type in a command, and voila! Look at that page full of math problems. Plus, I can get a new one whenever I want, in just a couple seconds. Laziness in this case drives a person to put time and effort into ever-changing problem-solving, all so they don’t have to put time and effort into a dull, repetitive task. See http://threevirtues.com/.

But programming isn’t just for tools! I also enjoy making simple games and am learning about websites.

One downside to having computers do things for you: You can’t blame a computer for not doing what you told it to. It did do what you told it to; you just didn’t tell it to do what you thought you did.

Coding can be challenging (even frustrating) and it can be tempting to give up on a debug issue. But, oh, the thrill that comes after solving a difficult coding problem!

The problem-solving can be exciting even when a program is nowhere near finished. My second Alexa program wasn’t coming along that well when—finally!—I got her to say “One plus one is eleven.” and later “Three plus four is twelve.” Though it doesn’t seem that impressive, it showed me that I was getting somewhere and the next problem seemed reasonable.

[Eric] How did you get started programming with the Alexa Skills Kit (ASK)?

[Kira] My very first Alexa skill was based on an AWS Lambda blueprint called Color Expert (alexa-skills-kit-color-expert-python). A blueprint is a sample program that AWS programmers can copy and modify. In the sample skill, the user tells Alexa their favorite color and Alexa stores the color name. Then the user can ask Alexa what their favorite color is. I didn’t make many changes: maybe Alexa’s responses here and there, and I added the color “rainbow sparkles.”

I also made a skill called Calculator in which the user gets answers to simple equations.

Last year, I took a music history class. To help me study for the test, I created a trivia game from Reindeer Games, an Alexa Skills Kit template (see https://developer.amazon.com/public/community/post/TxDJWS16KUPVKO/New-Alexa-Skills-Kit-Template-Build-a-Trivia-Skill-in-under-an-Hour). That was a lot of fun and helped me to grow in my knowledge of how Alexa works behind the scenes.

[Eric] How does Alexa development differ from other programming you have done?

[Kira] At first Alexa was pretty overwhelming. It was so different from anything I’d ever done before, and there were lines and lines of unfamiliar code written by professional Amazon people.

I found the ASK blueprints and templates extremely helpful. Instead of just being a functional program, the code is commented so developers know why it’s there and are encouraged to play around with it.

Still, the pages of code can be scary. One thing new Alexa developers can try: Before modifying your blueprint, set up the skill and ask Alexa to run it. Everything she says from that point on is somewhere in your program! Find her response in the program and tweak it. The variable name is something like “speech_output” or “speechOutput.”

It’s a really cool experience making voice apps. You can make Alexa say ridiculous things in a serious voice! Because CloudStatus started with the Color Expert blueprint, my first successful edit ended with our Echo saying, “I now know your favorite color is Northern Virginia. You can ask me your favorite color by saying, ‘What’s my favorite color?’.”

Voice applications involve factors you never need to deal with in a text app. When the user is interacting through text, they can take as long as they want to read and respond. Speech must be concise so the listener understands the first time. Another challenge is that Alexa doesn’t necessarily know how to pronounce technical terms and foreign names, but the software is always improving.

One plus side to voice apps is not having to build your own language model. With text-based programs, I spend a considerable amount of time listing all the ways a person can answer “yes,” or request help. Luckily, with Alexa I don’t have to worry too much about how the user will phrase their sentences. Amazon already has an algorithm, and it’s constantly getting smarter! Hint: If you’re making your own skill, use some built-in Amazon intents, like AMAZON.YesIntent or AMAZON.HelpIntent.
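
For readers new to the Alexa Skills Kit, here is a minimal sketch of how a Lambda-backed skill typically dispatches on those built-in intents. The request structure is the standard one ASK sends to Lambda; the custom intent name, slot name and spoken responses are made-up placeholders, not the actual CloudStatus code.

# Rough sketch of an Alexa skill's Lambda entry point. The event layout is
# what ASK sends; GetRegionStatusIntent and the responses are placeholders.
def speak(text, end_session=True):
    """Build a minimal ASK response with no home card."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": end_session,
        },
    }

def lambda_handler(event, context):
    request = event["request"]

    if request["type"] == "LaunchRequest":
        return speak("All regions are operating normally. "
                     "Which region would you like to check?", end_session=False)

    if request["type"] == "IntentRequest":
        name = request["intent"]["name"]
        if name in ("AMAZON.StopIntent", "AMAZON.CancelIntent"):
            return speak("Goodbye!")
        if name == "AMAZON.HelpIntent":
            return speak("Ask me about any AWS region.", end_session=False)
        if name == "GetRegionStatusIntent":  # hypothetical custom intent name
            region = request["intent"]["slots"]["Region"]["value"]
            return speak("Checking {}.".format(region), end_session=False)

    # Whatever else comes in, don't crash; say something safe instead.
    return speak("Sorry, I didn't catch that. Which region would you like?",
                 end_session=False)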

[Eric] What challenges did you encounter as you built the CloudStatus Alexa skill?

[Kira] At first, I edited the code directly in the Lambda console. Pretty soon though, I needed to import modules that weren’t built in to Python. Now I keep my code and modules in the same directory on a personal computer. That directory gets zipped and uploaded to Lambda, so the modules are right there sitting next to the code.
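
A rough sketch of that zip-and-upload step using boto3 follows; the directory and function names are illustrative assumptions, and this is not the actual CloudStatus tooling.

# Zip the code and its vendored modules so they sit next to each other,
# then push the archive to AWS Lambda. Names below are placeholders.
import os
import zipfile

import boto3

SKILL_DIR = "cloudstatus-skill"      # code plus the bundled Python modules
ZIP_PATH = "cloudstatus-skill.zip"
FUNCTION_NAME = "CloudStatus"        # hypothetical Lambda function name

def build_zip(src_dir, zip_path):
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, _dirs, files in os.walk(src_dir):
            for name in files:
                full = os.path.join(root, name)
                # Store paths relative to the skill directory, as Lambda expects.
                zf.write(full, os.path.relpath(full, src_dir))

def upload(zip_path, function_name):
    with open(zip_path, "rb") as f:
        boto3.client("lambda").update_function_code(
            FunctionName=function_name, ZipFile=f.read())

if __name__ == "__main__":
    build_zip(SKILL_DIR, ZIP_PATH)
    upload(ZIP_PATH, FUNCTION_NAME)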

One challenge of mine has been wanting to fix and improve everything at once. Naturally, there is an error practically every time I upload my code for testing. Isn’t that what testing is for? But when I modify everything instead of improving bit by bit, the bugs are more difficult to sort out. I’m slowly learning from my dad to make small changes and update often. “Ship it!” he cries regularly.

During development, I grew tired of constantly opening my code, modifying it, zipping it and the modules, uploading it to Lambda, and waiting for the Lambda function to save. Eventually I wrote a separate Bash program that lets me type “edit-cloudstatus” into my shell. The program runs unit tests and opens my code files in the Atom editor. After that, it calls the command “fileschanged” to automatically test and zip all the code every time I edit something or add a Python module. That was exciting!

I’ve found that the Alexa speech-to-text conversions aren’t always what I think they will be. For example, if I tell CloudStatus I want to know about “Northern Virginia,” it sends my code “northern Virginia” (lowercase then capitalized), whereas saying “Northern California” turns into “northern california” (all lowercase). To at least fix the capitalization inconsistencies, my dad suggested lowercasing the input and mapping it to the standardized AWS region code as soon as possible.
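
A tiny sketch of that normalization idea is below; the mapping shows only a few illustrative entries, not the full table the skill would use.

# Lowercase the spoken region name as early as possible and map it straight
# to the AWS region code. Only a few example entries are shown here.
REGION_CODES = {
    "northern virginia": "us-east-1",
    "northern california": "us-west-1",
    "ireland": "eu-west-1",
    "tokyo": "ap-northeast-1",
    "mumbai": "ap-south-1",
}

def normalize_region(spoken_name):
    """Return the region code for a spoken name, or None if unrecognised."""
    return REGION_CODES.get(spoken_name.strip().lower())

assert normalize_region("northern Virginia") == "us-east-1"
assert normalize_region("northern california") == "us-west-1"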

[Eric] What Alexa skills do you plan on creating in the future?

[Kira] I will probably continue to work on CloudStatus for a while. There’s always something to improve, a feature to add, or something to learn about—right now it’s Speech Synthesis Markup Language (SSML). I don’t think it’s possible to finish a program for good!

My brother and I also want to learn about controlling our lights and thermostat with Alexa. Every time my family leaves the house, we say basically the same thing: “Alexa, turn off all the lights. Alexa, turn the kitchen light to twenty percent. Alexa, tell the thermostat we’re leaving.” I know it’s only three sentences, but wouldn’t it be easier to just say: “Alexa, start Leaving Home” or something like that? If I learned to control the lights, I could also make them flash and turn different colors, which would be super fun. :)

In August a new ASK template was released for decision tree skills. I want to make some sort of dichotomous key with that. https://developer.amazon.com/public/community/post/TxHGKH09BL2VA1/New-Alexa-Skills-Kit-Template-Step-by-Step-Guide-to-Build-a-Decision-Tree-Skill

[Eric] Do you have any advice for others who want to publish an Alexa skill?

[Kira]

  • Before submitting your skill for certification, make sure you read through the submission checklist. https://developer.amazon.com/public/solutions/alexa/alexa-skills-kit/docs/alexa-skills-kit-submission-checklist#submission-checklist

  • Remember to check your skill’s home cards often. They are displayed in the Alexa App. Sometimes the text that Alexa pronounces should be different from the reader-friendly card content. For example, in CloudStatus, “N. Virginia (us-east-1)” might be easy to read, but Alexa is likely to pronounce it “En Virginia, Us [as in ‘we’] East 1.” I have to tell Alexa to say “northern virginia, u.s. east 1,” while leaving the card readable for humans.

  • Since readers can process text at their own pace, the home card may display more details than Alexa speaks, if necessary.

  • If you don’t want a card to accompany a specific response, remove the ‘card’ item from your response dict. Look for the function build_speechlet_response() or buildSpeechletResponse(). (A simplified sketch of that response dict follows this list.)

  • Never point your live/public skill at the $LATEST version of your code. The $LATEST version is for you to edit and test your code, and it’s where you catch errors.

  • If the skill raises errors frequently, don’t be intimidated! It’s part of the process of coding. To find out exactly what the problem is, read the “log streams” for your Lambda function. To print debug information to the logs, print() the information you want (Python) or use a console.log() statement (JavaScript/Node.js).

  • It helps me to keep a list of phrases to try, including words that the skill won’t understand. Make sure Alexa doesn’t raise an error and exit the skill, no matter what nonsense the user says.

  • Many great tips for designing voice interactions are on the ASK blog. https://developer.amazon.com/public/solutions/alexa/alexa-skills-kit/docs/alexa-skills-kit-voice-design-best-practices

  • Have fun!
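
As referenced in the card tip above, a simplified sketch of that blueprint-style response dict is shown below; leaving the ‘card’ key out is what suppresses the home card. This is an illustration, not the exact blueprint code.

# Simplified version of the blueprint's build_speechlet_response(); the real
# one also sets a reprompt. Omit the 'card' key entirely if you don't want a
# home card to accompany the response.
def build_speechlet_response(title, speech_output, card_content,
                             should_end_session, with_card=True):
    response = {
        "outputSpeech": {"type": "PlainText", "text": speech_output},
        "shouldEndSession": should_end_session,
    }
    if with_card:
        response["card"] = {
            "type": "Simple",
            "title": title,
            # Card text can differ from what Alexa says,
            # e.g. "N. Virginia (us-east-1)" on the card.
            "content": card_content,
        }
    return response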

In The News

Amazon had early access to this interview and to Kira and wrote an article about her in the Alexa Blog:

14-Year-Old Girl Creates CloudStatus Alexa Skill That Benefits AWS Developers

which was then picked up by VentureBeat:

A 14-year-old built an Alexa skill for checking the status of AWS

which was then copied, referenced, tweeted, and retweeted.

Original article and comments: https://alestic.com/2016/09/alexa-skill-aws-cloudstatus/

on September 20, 2016 04:15 AM

If you are a member of Launchpad’s beta testers team, you’ll now have a slightly different interface for selecting source packages in the Launchpad web interface, and we’d like to know if it goes wrong for you.

One of our longer-standing bugs has been #42298 (“package picker lists unpublished (invalid) packages”).  When selecting a package – for example, when filing a bug against Ubuntu, or if you select “Also affects distribution/package” on a bug – and using the “Choose…” link to pop up a picker widget, the resulting package picker has historically offered all possible source package names (or sometimes all possible source and binary package names) that Launchpad knows about, without much regard for whether they make sense in context.  For example, packages that were removed in Ubuntu 5.10, or packages that only exist in Debian, would be offered in search results, and to make matters worse, search results were often ordered alphabetically by name rather than by relevance.  There was some work on this problem back in 2011 or so, but it suffered from performance problems and was never widely enabled.

We’ve now resurrected that work from 2011, fixed the performance problems, and converted all relevant views to use it.  You should now see something like this:

New package picker, showing search results for "pass"

Exact matches on either source or binary package names always come first, and we try to order other matches in a reasonable way as well.  The disclosure triangles alongside each package allow you to check for more details before you make a selection.

Please report any bugs you find with this new feature.  If all goes well, we’ll enable this for all users soon.

Update: as of 2016-09-22, this feature is enabled for all Launchpad users.

on September 20, 2016 12:37 AM

September 19, 2016

Our packaging team has been working very hard; however, we have a lack of active Kubuntu Developers right now. So we're asking developers with a bit of extra time and some experience with KDE packages to look at our Frameworks, Plasma and Applications packaging in our staging PPAs, sign off, and upload them to the Ubuntu Archive.

If you have the time and permissions, please stop by #kubuntu-devel in IRC or Telegram and give us a shove across the beta timeline!
on September 19, 2016 10:53 PM

We have been running the Snappy Playpen as a pet/research project for a few weeks already. Many great things have happened since then:

  • With the Playpen we now have a repository of great best-practice examples.
  • We brought together a lot of people who are excited about snaps, who worked together, collaborated, wrote plugins together and improved snapcraft and friends.
  • A number of cloud parts were put together by the team as well.
  • We landed quite a few high-quality snaps in the store.
  • We had lots of fun.

Opening the Sandpit

With our next Snappy Playpen event tomorrow, 20th September 2016, we want to extend the scheme. We are opening the Sandpit part of the Playpen!

One thing we realised in the last weeks is that we treated the Playpen more and more like a place where well-working, tested and well-understood snaps go to inspire people who are new to snapping software. What we saw as well was that lots of fellow snappers kept their half-done snaps on their hard disk instead of sharing them and giving others the chance to finish them or get involved in fixing them. Time to change that, time for the Sandpit!

In the Sandpit things can get messy, but you get to explore and play around. It’s fun. Naturally things need to be light-weight, which is why we organise the Sandpit on just a simple wiki page. The way it works is that if you have a half-finished snap, you simply push it to a repo, add your name and the link to the wiki, so others get a chance to take a look and work together with you on it.

Tomorrow, 20th September 2016, we are going to get together again and help each other snapping, clean up old bits, fix things, explain, hang out and have a good time. If you want to join, you’re welcome. We’re on Gitter and on IRC.

  • WHEN: 2016-09-20
  • WHAT: Snappy Playpen event – opening the Sandpit
  • WHERE: Gitter and on IRC

Added bonus

As an added bonus, we are going to invite Michael Vogt, one of the core developers of snapd to the Ubuntu Community Q&A tomorrow. Join us at 15:00 UTC tomorrow on http://ubuntuonair.com and ask all the questions you always had!

See you tomorrow!

on September 19, 2016 01:38 PM

September 18, 2016

DNSync

Paul Tagliamonte

While setting up my new network at my house, I figured I’d do things right and set up an IPSec VPN (and a few other fancy bits). One thing that became annoying when I wasn’t on my LAN was I’d have to fiddle with the DNS Resolver to resolve names of machines on the LAN.

Since I hate fiddling with options when I need things to just work, the easiest way out was to make the DNS names actually resolve on the public internet.

A day or two later, with some Golang glue and AWS Route 53, I had code that would sit on my dnsmasq.leases file, watch inotify for IN_MODIFY signals, and sync the records to Route 53.
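
For the curious, here is a rough Python sketch of the same idea. DNSync itself is written in Go and reacts to inotify; this version just polls the lease file, and the hosted zone ID and domain below are made-up placeholders.

# Rough sketch (not DNSync itself): poll dnsmasq.leases and upsert A records
# into Route 53 with boto3. Zone ID and domain are placeholders.
import time

import boto3

LEASES = "/var/lib/misc/dnsmasq.leases"
ZONE_ID = "ZEXAMPLE123"          # hypothetical hosted zone
DOMAIN = "lan.example.com"       # hypothetical internal domain suffix

route53 = boto3.client("route53")

def read_leases(path):
    """dnsmasq lease lines look like: <expiry> <mac> <ip> <hostname> <client-id>"""
    with open(path) as f:
        for line in f:
            fields = line.split()
            if len(fields) >= 4 and fields[3] != "*":
                yield fields[3], fields[2]          # hostname, ip

def sync(leases):
    changes = [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "{}.{}.".format(host, DOMAIN),
            "Type": "A",
            "TTL": 60,
            "ResourceRecords": [{"Value": ip}],
        },
    } for host, ip in leases]
    if changes:
        route53.change_resource_record_sets(
            HostedZoneId=ZONE_ID, ChangeBatch={"Changes": changes})

if __name__ == "__main__":
    last = None
    while True:
        current = list(read_leases(LEASES))
        if current != last:       # the real tool reacts to inotify IN_MODIFY instead
            sync(current)
            last = current
        time.sleep(10)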

I pushed it up to my GitHub as DNSync.

PRs welcome!

on September 18, 2016 09:00 PM

September 16, 2016

One of the tools I use a lot to work with git repositories is Tig. This handy ncurses tool lets you browse your history, cherry-pick commits, do partial commits and a few other things. But one thing I wanted to do was to be able to start an interactive rebase from within Tig. This week I decided to dig into the documentation a bit to see if it was possible to do so.

Reading the manual I found out Tig is extensible: one can bind shortcut keys to trigger commands. The bound commands can make use of several state variables such as the current commit or the current branch. This makes it possible to use Tig as a commit selector for custom commands. Armed with this knowledge, I added these lines to $HOME/.tigrc:

bind main R !git rebase -i %(commit)^
bind diff R !git rebase -i %(commit)^

That worked! If you add these two lines to your .tigrc file, you can start Tig, scroll to the commit you want and press Shift+R to start the rebase from it. No more copying the commit id and going back to the command line!

Note: Shift+R is already bound to the refresh action in Tig, but this action can also be triggered with F5, so it's not really a problem.

on September 16, 2016 10:17 PM

Hello everyone! This is a guest post by Menno Smits, who works on the Juju team. He originally announced this great news on the Juju mailing list (which you should all subscribe to!), and I thought it was definitely worth announcing on the Planet. Stay tuned to the mailing list for more great announcements, which I feel are going to come now that we are moving to RCs of Juju 2.0.

Juju 2.0 is just around the corner and there’s so much great stuff in the release. It really is streets ahead of the 1.x series.

One improvement that’s recently landed is that the Juju client will now work on any Linux distribution. Up until now, the client hasn’t been usable on variants of Linux for which Juju didn’t have explicit support (Ubuntu and CentOS). This has now been fixed – the client will now work on any Linux distribution. Testing has been done with Fedora, Debian and Arch, but any Linux distribution should now be OK to use.

It’s worth noting that when using local Juju binaries (i.e. a `juju` and `jujud` which you’ve built yourself or that someone has sent you), checks in the `juju bootstrap` command have been relaxed. This way, a Juju client running on any Linux flavour can bootstrap a controller running any of the supported Ubuntu series. Previously, it wasn’t possible for a client running a non-Ubuntu distribution to bootstrap a Juju controller using local Juju binaries.

All this is great news for people who are wanting to get started with Juju but are not running Ubuntu on their workstations. These changes will be available in the Juju 2.0 rc1 release.


on September 16, 2016 04:57 PM

September 15, 2016

It’s Episode Twenty-Nine of Season-Nine of the Ubuntu Podcast! Mark Johnson, Alan Pope and Martin Wimpress (just about) are here again.

Most of us are here, but one of us is busy and another was cut off part way through!

In this week’s show:

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

on September 15, 2016 08:15 PM
I'm linking to an excellent tutorial (CC-BY-NC-SA 4 licence) by Miguel Menéndez, which explains in Spanish, step by step, how to create an application for Ubuntu Phone in an enjoyable and entertaining way, including examples, exercises, support on IRC and a Telegram group...

It's important to note that the course is still under development, with a new instalment each week, so it will be completed over the coming weeks.

Curso UT
To access it, simply download the application on your Ubuntu Phone:
https://uappexplorer.com/app/curso.innerzaurus
or visit this URL:
https://mimecar.gitbooks.io/curso-de-programacion-de-ubuntu-phone-touch/content/chapter1.html
on September 15, 2016 07:27 PM

September 14, 2016

Our upcoming release, Plasma 5.8, will be the first long-term supported (LTS) release of the Plasma 5 series. One great thing about this release is that it aligns support time-frames across the whole stack, from the desktop through Qt and the underlying operating systems. This makes Plasma 5.8 very attractive for users that need to rely on the stability of their computers.

Qt, Frameworks & Plasma

In the middle layer of the software stack, i.e. Qt, KDE Frameworks and Plasma, the support time-frames and conditions roughly look like this:

Qt 5.6

Qt 5.6 was released in March as the first LTS release in the Qt 5 series. It comes with a 3-year long-term support guarantee, meaning it will receive patch releases providing bug fixes and security updates.

Frameworks 5.26

In tune with Plasma, during the recent Akademy we decided to give KDE Frameworks, the libraries that underlie Plasma and many KDE applications, 18 months of security support and fixes for major bugs, for example crashes. These updates will be shipped as needed for individual frameworks and will also appear as tags in the git repositories.

Plasma 5.8

The core of our long-term support promise is that Plasma 5.8 will receive at least 18 months of bugfix and security support from upstream KDE. Patch releases with bugfix, security and translation updates will be shipped in a Fibonacci rhythm.
To make this LTS extra reliable, we’ve concentrated the (still ongoing) development cycle for Plasma 5.8 on stability, bugfixes, performance improvements and overall polish. We want this to shine.
There’s one caveat, however: Wayland support is excluded from the long-term-support promise, as it is still too experimental. X11 as display server is fully supported, of course.

Neon and Distros

You can enjoy these LTS releases straight from the source through a Neon flavor that ships an updated LTS stack based on Ubuntu’s 16.04 LTS release. openSUSE Leap, which focuses on stability and continuity, also ships Plasma 5.8, making it a perfect match.
The Plasma team encourages other distros to do the same.

Post LTS

After the 5.8 release, and during its support cycle, KDE will continue to release feature updates for Plasma which are supported through the next development cycle as usual.
Lars Knoll’s Qt roadmap talk (skip to 29:25 if you’re impatient and want to miss an otherwise exciting talk) proposes another Qt LTS release around 2018, which may serve as a base for future planning in the same direction.

It definitely makes a lot of sense to align support time-frames for releases vertically across the stack. This makes support for distributions considerably easier, creates a clearer base for planning for users (both private and institutional) and effectively leads to less headaches in daily life.

on September 14, 2016 01:15 PM

September 13, 2016

The Ubuntu OpenStack team is pleased to announce the general availability of the OpenStack Newton B3 milestone in Ubuntu 16.10 and for Ubuntu 16.04 LTS via the Ubuntu Cloud Archive.

Ubuntu 16.04 LTS

You can enable the Ubuntu Cloud Archive pocket for OpenStack Newton on Ubuntu 16.04 installations by running the following commands:

sudo add-apt-repository cloud-archive:newton
sudo apt update

The Ubuntu Cloud Archive for Newton includes updates for Aodh, Barbican, Ceilometer, Cinder, Designate, Glance, Heat, Horizon, Ironic (6.1.0), Keystone, Manila, Networking-OVN, Neutron, Neutron-FWaaS, Neutron-LBaaS, Neutron-VPNaaS, Nova, and Trove.

You can see the full list of packages and versions here.

Ubuntu 16.10

No extra steps required; just start installing OpenStack!

Branch Package Builds

If you want to try out the latest master branch updates, or updates to stable branches, we are delivering continuously integrated packages on each upstream commit in the following PPAs:

sudo add-apt-repository ppa:openstack-ubuntu-testing/liberty
sudo add-apt-repository ppa:openstack-ubuntu-testing/mitaka
sudo add-apt-repository ppa:openstack-ubuntu-testing/newton

Bear in mind these are built per commit (we check for new commits every 30 minutes at the moment), so your mileage may vary from time to time.

Reporting bugs

If you hit any issues, please report bugs using the ‘ubuntu-bug’ tool:

sudo ubuntu-bug nova-conductor

This will ensure that bugs get logged in the right place in Launchpad.

Thanks and have fun!

Cheers,

Corey

(on behalf of the Ubuntu OpenStack team)


on September 13, 2016 10:27 AM

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In August, 140 work hours have been dispatched among 10 paid contributors. Their reports are available:

  • Balint Reczey did 9.5 hours (out of 14.75 hours allocated + 2 remaining, thus keeping 7.25 extra hours for September).
  • Ben Hutchings did 14 hours (out of 14.75 hours allocated + 0.7 remaining, keeping 1.45 extra hours for September).
  • Brian May did 14.75 hours.
  • Chris Lamb did 15 hours (out of 14.75 hours, thus keeping 0.45 hours for next month).
  • Emilio Pozuelo Monfort did 13.5 hours (out of 14.75 hours allocated + 0.5 remaining, thus keeping 2.95 extra hours for September).
  • Guido Günther did 9 hours.
  • Markus Koschany did 14.75 hours.
  • Ola Lundqvist did 15.2 hours (out of 14.5 hours assigned + 0.7 remaining).
  • Roberto C. Sanchez did 11 hours (out of 14.75h allocated, thus keeping 3.75 extra hours for September).
  • Thorsten Alteholz did 14.75 hours.

Evolution of the situation

The number of sponsored hours rose to 167 hours per month thanks to UR Communications BV joining as a gold sponsor (funding 1 day of work per month)!

In practice, we never distributed this amount of work per month because some sponsors did not renew in time and some of them might not even be able to renew at all.

The security tracker currently lists 31 packages with a known CVE and the dla-needed.txt file lists 29. It’s a small bump compared to last month, but almost all issues are assigned to someone.

Thanks to our sponsors

New sponsors are in bold.


on September 13, 2016 08:50 AM

September 12, 2016

Ubuntu took part in the Google Code-in contest as a mentoring organization last December and January. Google Code-In is a contest for 13–17 year old pre-university students who want to learn more about and get involved with open source. With the help of mentors from all participating organizations, the students completed tasks that had been set up by the organization. In the end, both sides win; the students are introduced to the open source communities (and if they win they get a prize trip and many other benefits) and the mentoring organizations get work done.

The contest itself ended in January 2016, and in June it was time for the grand prize winners’ trip to San Francisco, where I represented Ubuntu on the mentor side. I’m sure you are waiting to hear more, so let’s go!

The Trip

Meet & Greet

On Sunday evening, the trip started with a more or less informal Meet & Greet event, where mentors could have the chance to meet the winners for their organization and the other way around. To recap, the winners for the Ubuntu organization were Matthew Allen from Australia and Daniyaal Rasheed from the United States. Congratulations! They, along with the other winners and many more contestants, did great work during the contest, and I’m eagerly waiting to see more contributions from them in the future!

The event was indeed a great way to start the trip and let people socialize and get to know each other. The winners were also presented with the first surprise; a Nexus phone for everybody (along with other swag given to the students and mentors)!

The Campus

Heading into the new week and our first whole day together in San Francisco, we took some buses from the hotel to the Google campus. After breakfast, we had some talks by mentors and the award ceremony with Chris DiBona, the director of open source at Google – or, quoting the man himself, the “special”. Without further ado, here are a few photos taken from the ceremony by the lovely Jeremy Allison (thanks!).

[Photos: Matthew Allen; Daniyaal Rasheed; both winners and myself, the representing mentor for Ubuntu]

After the ceremonies, it was lunchtime. During lunch, students not from the US were hooked up with Googlers from their own home country – cool! Following lunch, we heard a bunch of talks from Googlers, some organization presentations by mentors and visited the Google Store (where all the winners got a gift card to spend on more Google swag) as well as the visitor centre.

Finally after dinner the buses headed back to the hotel for everybody to get some rest and prepare for the next day…

San Francisco activities

Tuesday was the fun day in San Francisco! In the morning, winners had two options: either a Segway Tour around the city or a visit to the Exploratorium science museum. I believe everybody had a nice time and nobody riding the Segways got seriously hurt either.

After getting everybody back together, it was time for some lunch again. We had lunch near the Ghirardelli Square and it was the perfect opportunity to get some nice dessert in ice cream and/or chocolate form for those who wanted it.

When we had finished all the eating, it was time to head for the Golden Gate bridge and surrounding areas (or hotel if you wanted some rest). Some of us walked the bridge (some even all the way to the other side and back), some around the parks nearby, guided by the local Googlers. It was definitely a sunny and windy day at the bridge at least!

After all these activities had been done and being rejoined by the people resting at the hotel, the whole group headed for an evening on a yacht! On the yacht we had a delicious dinner and drinks as well as lot of good discussions. Jeremy was on fire again with his cameras and he got some nice shots, including a mentors-only photo!

The Office

On Wednesday, our last day together, we walked to the Google office in San Francisco. After a wonderful breakfast we were up to more talks by Googlers, the last presentations by mentors, of course some more Google swag, lunch at the terrace, mini-tours inside the office, cake, chocolate, more candies and sweets etc.

Most importantly, the winners, mentors and parents alike had the last chance to get some discussions going before most people headed back home or other places to continue their journey.

Afterthoughts

I think it’s a great idea to involve young people in open source communities and get some work done. This contest is more than just a contest. It isn’t only about contestants potentially getting into a great university or an internship at Google later. It also isn’t only about the organizations getting work done by other people.

It’s a great way to get like-minded people communicate with each other, starting from a young age.

It can help young people who might not be the most socially extroverted find something they like doing.

It can potentially make more open source careers possible through the passion that these young people have.

Whether Ubuntu will apply to be a mentor organization next time depends much on the volunteers. If there are enough mentors who are willing to do the work – figuring out what tasks are suitable for the contestants, registering them and helping the contestants work their way through them – then I don’t see why Ubuntu would not be signing up again.

Personally, I can highly recommend applying again. It’s been a great ride. Thank you everybody, you know who you are!

Other blogs

François Revol (Haiku mentor) – a series of blog articles about the trip with even more details

on September 12, 2016 11:11 AM

September 10, 2016

As you may have read on our social media pages, we already have the winners of this cycle’s Wallpaper Contest.

The team would like to thank everyone for all the gorgeous photographs and digital art that have been submitted to the contest (nearly one hundred in four weeks is amazing; even after announcing the winners we’ve been getting more submissions :P).
[Image: 16.10 wallpaper contest winners]

From left to right, the wallpapers are:


We’d like to announce, also, that new rules will be added for next year’s Wallpaper Contest:

  1. Please don’t submit entries to any other Ubuntu flavour wallpaper contest. If one wallpaper is selected in more than one distribution, we will end up with two copies of the same image in Ubuntu.
  2. Winning photos should not be submitted to any future Ubuntu wallpaper contest. That, of course, doesn’t prevent you from trying again next cycle if you haven’t won the current one.
on September 10, 2016 07:03 PM


I have decided to move to using GitHub Pages and Pelican to create my personal ‘hub’ on the Internet. I am still undecided about moving content from WordPress to GitHub Pages.

This site will be removed on October 8th, 2016.


on September 10, 2016 06:35 PM

September 09, 2016

Click Hooks

Ted Gould

After being asked what I like about Click hooks, I thought it would be nice to write up a little bit of the why behind them in a blog post. The precursor to this story is that I told Colin Watson he was wrong to build hooks like this; he kindly corrected me and helped me fix my code to match, but I still wasn't convinced. Today I see some of the wisdom in the Click hook design and I'm happy to share it.

The standard way to think about hooks is as a way to react to changes to the system. If a new application is installed, then the hook gets information about the application and responds to the new data. This is how most libraries work, providing signals about the data that they maintain, and we apply that same logic when thinking about filesystem hooks. But filesystem hooks are different because the coherent state is harder to query. In your library you might respond to a signal for a few things, but in many code paths the chances are you'll just go through the list of original objects to do operations. With filesystem hooks that complete state is almost never used; only the caches created by the hooks themselves are.

Click hooks work by creating a directory of symbolic links that matches the current state of the system, and then asking you to ensure your cache matches that state of the system. This seems inefficient because you have to determine which parts of your cache need to change, which get removed and which get added. But it results in better software, because your software, including your hooks, has errors in it. I'm sorry to be the first one to tell you, but there are bugs. If your software is 99% correct, there is still something it is doing wrong. When you have delta updates that update the cache, that error compounds and never gets completely corrected with each update, because the complete state is never examined. So slowly the quality of your cache gets worse: not awful, but worse. By transferring the current system state to the cache each time, you get the error rate of your software in the cache, but you don't get the compounded error rate of each delta. This adds up.
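
Here's a small sketch of the idea with hypothetical paths and cache format (not Click's actual hook interface): rather than applying deltas, the hook rebuilds its cache from the complete directory of symlinks every time it runs, so mistakes can't accumulate across updates.

# Sketch of a full-state hook: regenerate the cache from the symlink farm on
# every run instead of patching it. Paths and cache format are illustrative.
import json
import os

HOOK_DIR = "/var/lib/example-hook/apps"      # hypothetical directory of symlinks
CACHE_FILE = "/var/cache/example-hook/cache.json"

def rebuild_cache():
    cache = {}
    # Walk the complete current state; anything not listed here no longer exists.
    for name in sorted(os.listdir(HOOK_DIR)):
        target = os.path.realpath(os.path.join(HOOK_DIR, name))
        cache[name] = {"path": target}
    # Write the whole cache atomically so readers never see a partial state.
    tmp = CACHE_FILE + ".new"
    with open(tmp, "w") as f:
        json.dump(cache, f, indent=2)
    os.replace(tmp, CACHE_FILE)

if __name__ == "__main__":
    rebuild_cache()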

The design of the hooks system in Click might feel wrong as you start to implement one, but I think that after you create a few hooks you'll find there is wisdom in it. And as you use other hook systems on other platforms, think about checking the full system state to ensure you're always creating the best cache possible, even if the hook system there didn't force you to do it.

on September 09, 2016 03:52 PM