September 03, 2015

I simply cannot take time off work to attend DebConf, so each year I watch the videos instead. It took almost a month, thanks to the back-to-school rush at work, but I finally got through the sessions I wanted to see.

Here are my highlights from DebConf 15:

Cool Stuff


Creating A More Inviting Environment For Newcomers: New Experiences From MoM, SoB, Teammetrics - A detailed discussion of how a mature team with tapering contributions re-energized itself with new enthusiasts: how they were recruited, mentored, trained, and finally assigned key roles in the team. Lots of discussion of mentoring strategies and of the costs of mentoring (less time for the work) from the developer/maintainer perspective. Lots of good ideas for any mature team, and thoroughly applicable to Ubuntu teams too.

Linux in the City of Munich AKA LiMux - There has been a lot of FUD written about one of the largest public conversions to an open-source platform, and it was great to see an actual insider talking about the project. Worth a watch.

Lightning Talks 2 - The first Lightning Talk was a proposal to add a new service to Debian. The service tests all uploaded packages for many known faults (using valgrind, infer, etc.), and automatically files bug reports on the faults. This should provide a large number of real bite-sized bugs for drive-by patches, and a corresponding hefty improvement in code quality. Most cool.


Under the hood


Your Systemd Tool Box - Dissecting And Debugging Boot And Services - This is a great walk-through of the new (to me) tools. I had a terminal window open alongside to try each of the tools, and saved the video for a refresher; it's a lot to digest in one sitting.

Systemd How We Survived Jessie And How We Will Break Stretch - Fantastic discussion of coming systemd features: Persistent interface names, networkd, kdbus, and more. Also great discussion of how to get involved around the edges.

Dpkg The Interface - A presentation by the current maintainer, explaining how he keeps dpkg stable and the future roadmap. Since Snappy uses dpkg (but not apt), that roadmap is important! I have used dpkg for a decade, but never thought about all the bits of it I never see....


Keeping Free Software Free


Debian's Central Role In The Future Of Software Freedom - A presentation by the President of the Software Freedom Conservancy (SFC), explaining the problems they see, their strategies to attack those problems, and how they try to effectively challenge GPL violations. A bit of Canonical-bashing in this one at a couple of points (some deserved, some not).

At 23:30, it introduces the Debian Copyright Aggregation Project, where Debian contributors can opt to revocably assign their copyright to SFC, and can also permit the SFC to enforce those copyrights. This is one strategy SFC is pursuing to fight both CLAs and license violations.




on September 03, 2015 03:19 PM

Pardon for the weird formatting.
The Ubuntu Membership Board is responsible for approving new Ubuntu members. I interviewed our board members in order for the Community to get to know them better and get over the fear of applying to Membership.

The seventh interviewee is Aaron Honeycutt:

[Q]What do you do for a career?
[A]I work for the School Board of Broward County (not a teacher lol).
[Q]What was your first computing experience?
[A]I remember an eMachine with Windows XP on it. Oh, was that bad.
[Q]How long have you been involved with Ubuntu?
[A]5 years or so, but I only got my Membership 2 years ago.
[Q]Since you are all fairly new to the Board, why did you join?
[A]To help out and welcome new members with open arms.
[Q]What are some of the projects you’ve worked on in Ubuntu over the years?
[A]Ubuntu Touch, Documentation.
[Q]What is your focus in Ubuntu today?
[A]Getting every Documentation team to work together in a similar fashion.
[Q]Do you contribute to other free/open source projects? Which ones?
[A]I help spread the word of LibreOffice and help their QA team when I can.
[Q]If you were to give a newcomer some advice about getting involved with Ubuntu, what would it be?
[A]Get onto the Mailing List and IRC and talk to people; we are a very friendly bunch!
[Q]Do you have any other comments you wish to share with the community?
[A]Once you find that spot where you're both needed and want to be, you'll find happiness in the community.
on September 03, 2015 02:30 PM

S08E26 – Courageous - Ubuntu Podcast

Ubuntu Podcast from the UK LoCo

It’s Episode Twenty-six of Season Eight of the Ubuntu Podcast! Mark Johnson is back with Laura Cowen, Martin Wimpress, and Alan Pope!

In this week’s show:

That’s all for this week, please send your comments and suggestions to: show@ubuntupodcast.org
Join us on IRC in #ubuntu-podcast on Freenode
Follow us on Twitter
Find our Facebook Fan Page
Follow us on Google+

on September 03, 2015 01:41 PM

Over its years of development, GCC has accumulated several switches which are now considered obsolete/deprecated, and as such they are not available for new ports. Guess what? AArch64 is one of those new ports.

One of those switches is the “-posix” one. It is not needed anymore, as the “_POSIX_SOURCE” macro deprecated it:

Macro: _POSIX_SOURCE

If you define this macro, then the functionality from the POSIX.1 standard (IEEE Standard 1003.1) is available, as well as all of the ISO C facilities.

But it still happens sometimes (I saw it in pdfedit 0.4.5, which is so old that it still uses Qt3). So if you find it somewhere then please save the world with “s/-posix/-D_POSIX_SOURCE/g” :)

on September 03, 2015 01:13 PM
Find-a-Task is the Ubuntu community's job board for volunteers.

Introduced in January 2015, Find-a-Task shows fellow volunteers the variety of tasks and roles available.

The goal of Find-a-Task is for a volunteer, after exploring the Ubuntu Project, to land on a team or project's wiki page. They are interested, ready to join, and ready to start learning the skills and tools. 

However, it only works if *you* use it, too.


Try it.


Take a quick look, and see the variety of volunteer roles available. We have listings for many different skills and interests, including many non-technical tasks.


Is your team listed?


Hey teams, are you using Find-a-Task to recruit volunteers?
  • Are your team roles listed?
  • Are they accurate?
  • Is your landing page welcoming and useful to a new volunteer?

When it's time to update your postings on the job board, simply jump into Freenode IRC: #ubuntu-community-team.


Gurus: Are you pointing Padawans toward it?


Find-a-Task is a great place to send new enthusiasts. No signup, no login, no questions. It's a great way to survey the roles available in the big, wide, Ubuntuverse, and get new enthusiasts involved in a team.

It's also handy for experienced enthusiasts looking for a new challenge, of course.
  • If you're active in the various forums, refer new enthusiasts to Find-a-Task.
  • Add it to your signature.
  • If you know a Find-a-Task success story, please share.

Improving Find-a-Task


Ideas to increase usage of Find-a-Task are welcome.
Ideas on how to improve the tool itself are also welcome.
Please share your suggestions to improve Find-a-Task on the ubuntu-community-team mailing list.

on September 03, 2015 01:50 AM

September 02, 2015

This is the Jonathan Riddell™ IP Policy.  It applies to all Jonathan’s intellectual property in Ubuntu archives.  Jonathan is one of the top 5 uploaders, usually the top 1 uploader, to Ubuntu, compiling hundreds of packages in the Ubuntu archive.  Further Jonathan reviews new and updated packages in the archive.  Further Jonathan selects compiler defaults and settings for KDE and Qt and other packages in the Ubuntu archive.  Further Jonathan builds and runs tests for Ubuntu packages in the archives.  Further Jonathan Riddell™ is a trademark of Jonathan Riddell™ in Scotland, Catalunya and other countries; a trademark which is included in all packages edited by Jonathan Riddell™.  Further Jonathan is the author of numerous works in the Ubuntu archive.  Further Jonathan is the main contributor to the selection of software in Kubuntu. Therefore Jonathan has IP in the Ubuntu archive possibly including but not limited to copyright, patents, trademarks, sales marks, geographical indicators, database rights, compilation copyright, designs, personality rights and plant breeders rights.  To deal with, distribute, modify, look at or smell Jonathan’s IP you must comply with this policy.

Policy: give Jonathan a hug before using his IP.

If you want a licence for Jonathan’s IP besides this one you must contact Jonathan first and agree one in writing.

Nothing in this policy shall be taken to override or conflict with free software licences already put on relevant works.

 

on September 02, 2015 04:54 PM

I’m receiving more requests for upload accounts to the Deb-o-Matic servers lately (yay!), but that means the resources need to be monitored and shared between the build daemons to prevent server lockups.

My servers are running systemd, so I decided to give systemd.resource-control a try. My goal was to assign lower CPU shares to the build processes (debomatic itself, sbuild, and all the related tools), in order to avoid blocking other important system services from being spawned when necessary.

I created a new slice, and set a lower CPU share weight:
$ cat /etc/systemd/system/debomatic.slice
[Slice]
CPUAccounting=true
CPUShares=512
$

Then, I assigned the slice to the service unit file controlling the debomatic daemons by adding the Slice=debomatic.slice option under the [Service] section.

That was not enough, though, as some processes were assigned to the user slice instead, which groups all the processes spawned by users:
systemd-cgls

This is probably because schroot spawns a login shell, and systemd considers it as belonging to a different process group. So, I had to launch the command systemctl set-property user.slice CPUShares=512, so all processes belonging to user.slice will receive the same share as the debomatic ones. I consider this a workaround; I’m open to suggestions on how to properly solve this issue :)

I’ll try to explore more options in the coming days, so I can improve my knowledge of systemd a little bit more :)


on September 02, 2015 04:31 PM

Here’s a summary of what the Launchpad team got up to in August.

Code

  • Webhook support for Git repositories is almost finished, and only needs a bit more web UI work (#1474071)
  • The summary of merge proposal pages now includes a link to the merged revision, if any (#892259)
  • Viewing individual comments on Git-based merge proposals no longer OOPSes (#1485907)

Mail notifications

Our internal stakeholders in Canonical recently asked us to work on improving the ability to filter Launchpad mail using Gmail.  The core of this was the “Include filtering information in email footers” setting that we added recently, but we knew there was some more to do.  Launchpad’s mail notification code includes some of the oldest and least consistent code in our tree, and so improving this has entailed paying off quite a bit of technical debt along the way.

  • Bug notifications and package upload notifications now honour the “Include filtering information in email footers” setting (#1474071)
  • Bug notifications now log an OOPS rather than crashing if the SMTP server rejects an individual message (#314420, #916939)
  • Recipe build notifications now include an X-Launchpad-Archive header (#776160)
  • Question notification rationales are now more consistent, including team annotations for subscribers (#968578)
  • Package upload notifications now include X-Launchpad-Message-Rationale and X-Launchpad-Notification-Type headers, and have more specific footers (#117155, #127917)

Package build infrastructure

  • Launchpad now supports building source packages that use Debian’s new build profiles syntax, currently only with no profiles activated
  • Launchpad can now build snap packages (#1476405), with some limitations; this is currently only available to a group of alpha testers, so let us know if you’re interested
  • Builders can now access Launchpad’s Git hosting (HTTPS only) in the same way that they can access its Bazaar hosting
  • All amd64/i386 builds now take place in ScalingStack, and the corresponding bare-metal builders have been detached pending decommissioning; some of the newer of those machines will be used to further expand ScalingStack capacity
  • We have a new ScalingStack region including POWER8-based ppc64el builders, which is currently undergoing production testing; this will replace the existing POWER7-based builders in a few weeks, and also provide virtualised build capacity for ppc64el PPAs
  • We’ve fixed a race condition that sometimes caused a user’s first PPA to be published unsigned for a while (#374395)

Miscellaneous

  • The project release file upload limit is now 1 GiB rather than 200 MiB (#1479441)
  • We spent some more time supporting translations for the overlay PPA used for current Ubuntu phone images, copying a number of existing translations into place from before the point when they were redirected automatically
  • Your user index page now has a “Change password” link (#1471961)
  • Bug attachments are no longer incorrectly hidden when displaying only some bug comments (#1105543)
on September 02, 2015 01:04 PM

September 01, 2015

The Next Generation SDK

Ubuntu App Developer Blog

Up until now the basic architecture of the SDK IDE and tools packaging was that we packaged and distributed the QtCreator IDE and our Ubuntu plugins as separate distro packages which strongly depend on the Qt available in the same release.

Since 14.04 we have been jumping through hoops to provide the very same developer experience from a single development branch of the SDK projects. Just to give a quick picture of what we have available in the last few releases (note that the 1.3 UITK is not yet released):

14.04 Trusty: Qt 5.2.1, QtCreator 3.0.1, UI Toolkit 0.1
14.10 Utopic: Qt 5.3, QtCreator 3.1.1, UI Toolkit 1.1
15.04 Vivid: Qt 5.4.1, QtCreator 3.1.1, UI Toolkit 1.2
15.10 Wily: Qt 5.4.2, QtCreator 3.5.0, UI Toolkit 1.3

Life could have been easier if we had stuck to one stable Qt and QtCreator and based our SDK on it. Obviously that was not a realistic option, as the phone development needed the most recent Qt and our friend Kubuntu required a hot new engine under its hood too. So Qt was quickly moving forward and the SDK followed it. Of course it was all beneficial, as new Qt releases brought us bugfixes, new features and improved performance.

But on the way we came to realize that continuously backporting the UITK and the QtCreator plugins to older releases and the LTS was simply not going to be possible. It went fine for some time, but the more API breaks the new Qt and QtCreator releases brought, the more problems we had to face. Some people have asked why we don’t backport the latest Qt releases to the LTS or to the stable Ubuntu. As an idea it may sound good, but changing the Qt under an application in the LTS to 5.4.2 when that application was built against Qt 5.2.1 would certainly break the application. So it is simply not cool to mess around with such fundamental bits of a stable and long term supported release.

The only option we had was to decouple the SDK from the archive release of Qt and build it as a standalone package without any external Qt dependencies. That way we could provide the exact same experience and tools to all developers, regardless of whether they are playing it safe on Trusty/LTS or enjoying the cutting edge on the daily developed release of Wily.

The idea manifested in a really funny project. The source tree of the project is pretty empty; only cmake and the debian/rules take care of the job. The builder pulls the latest stable Qt, QtCreator and UITK, builds and integrates the libdbusmenu-qt and appmenu-qt5 projects, and deploys the SDK IDE. The package itself is super skinny. Unlike the old model, where QtCreator pulled in most of the Qt modules as dependencies, this package contains all it needs, and its size is an impressive 36MB. Cheap. Just the way I like it. Plus this package already contains the 1.3 UITK, as our QtCreator plugin (Devices Tab) is using it. So in fact we are just one step from enabling desktop application development on 14.04 LTS with the same UI Toolkit as we use on the commercial phone devices. And that is a super hot idea.

The Ubuntu SDK IDE project lives here: https://launchpad.net/ubuntu-sdk-ide

If you want to check out how it is done:

$ bzr branch lp:ubuntu-sdk-ide

Since we were considering such a big facelift of the SDK, I thought why not make the change much bigger. Some might remember that there was a discussion on the Ubuntu Phone mailing list about the possibility of improving the Kit creation in the IDE. Since then we have been playing with the idea and I think it is now a good time to unleash the static chroots.

The basic idea is that creating the builder chroots at runtime is a super slow and fragile process. The bootstrapping of the click chroot already takes a long time, and installing the SDK API packages (all the libs and dev packages with headers) into the chroot is also time consuming. So why not create these root filesystems in advance and provide them as single installable packages?

This is exactly what we have done. The base of the API packages is the Vivid core image. It is small and contains only the absolutely necessary packages; we install the SDK libs, dev packages and development tools on the core image and configure the Overlay PPA too. So the final image is pretty much equivalent to the image on a freshly updated device out there. It means that the developer can build and test against the same API set as is available on the devices.

These API packages are still huge. Their size is around 500MB, so on a slow connection it still takes ages to download, but still it is way faster than bootstrapping a 1.6GB chroot package by package.

These API packages contain a single tar.gz file, and the post-install script of the package puts the content of this tar.gz in the right place and wires it in the way it should be. Once the package is installed the new Kit will be automatically recognized by the IDE.

One important note on this API package! If you have an armhf 15.04 Kit (click chroot) already on your system when you install this package, then your original Kit will not be removed but simply renamed to backup-[timestamp]-[original name]. So do not worry if you have customized Kits, they are safe.

The Ubuntu SDK API project is only a packaging project with a simple script to take care of the dirty details. The project is hosted here: https://launchpad.net/ubuntu-sdk-api-15.04

And if you want to see what is in it just do

$ bzr branch lp:ubuntu-sdk-api-15.04  

The release candidate packages are available from the Tools Development PPA of the SDK team: https://launchpad.net/~ubuntu-sdk-team/+archive/ubuntu/tools-development

How to test these packages?

$ sudo add-apt-repository ppa:ubuntu-sdk-team/tools-development -y

$ sudo apt-get update

$ sudo apt-get install ubuntu-sdk-ide ubuntu-sdk-api-tools

$ sudo apt-get install ubuntu-sdk-api-15.04-armhf ubuntu-sdk-api-15.04-i386

After that look for the Ubuntu SDK IDE in the dash.

on September 01, 2015 06:50 PM

Meeting Minutes

IRC Log of the meeting.

Meeting minutes.

Agenda

20150901 Meeting Agenda


Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:

  • http://kernel.ubuntu.com/reports/kt-meeting.txt


Status: CVE’s

The current CVE status can be reviewed at the following link:

  • http://kernel.ubuntu.com/reports/kernel-cves.html


Status: Stable, Security, and Bugfix Kernel Updates – Precise/Trusty/lts-utopic/Vivid

Status for the main kernels, until today:

  • Precise – Verification & Testing
  • Trusty – Verification & Testing
  • lts-Utopic – Verification & Testing
  • Vivid – Verification & Testing

    Current opened tracking bugs details:

  • http://kernel.ubuntu.com/sru/kernel-sru-workflow.html
    For SRUs, SRU report is a good source of information:
  • http://kernel.ubuntu.com/sru/sru-report.html

    Schedule:

    cycle: 16-Aug through 05-Sep
    ====================================================================
    14-Aug Last day for kernel commits for this cycle
    15-Aug – 22-Aug Kernel prep week.
    23-Aug – 29-Aug Bug verification & Regression testing.
    30-Aug – 05-Sep Regression testing & Release to -updates.


Status: Wily Development Kernel

We have rebased and uploaded Wily master-next branch to 4.2 final from upstream.
—–
Important upcoming dates:

  • https://wiki.ubuntu.com/WilyWerewolf/ReleaseSchedule
    Thurs Sep 24 – Final Beta (~3 weeks away)
    Thurs Oct 8 – Kernel Freeze (~5 weeks away)
    Thurs Oct 15 – Final Freeze (~6 weeks away)
    Thurs Oct 22 – 15.10 Release (~7 weeks away)


Open Discussion or Questions? Raise your hand to be recognized

No open discussion.

on September 01, 2015 05:25 PM

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donators (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

Debian LTS

This month I have been paid to work 6.5 hours on Debian LTS. In that time I did the following:

  • Prepared and released DLA-301-1 fixing 2 CVEs in python-django.
  • Did one week of “LTS Frontdesk” with CVE triaging. I pushed 11 commits to the security tracker.

Apart from that, I also gave a talk about Debian LTS at DebConf 15 in Heidelberg and also coordinated a work session to discuss our plans for Wheezy. Have a look at the video recordings:

DebConf 15

I attended DebConf 15 with great pleasure after having missed DebConf 14 last year. While I did not do lots of work there, I participated in many discussions and I certainly came back with a renewed motivation to work on Debian. That’s always good. :-)

For the concrete work I did during DebConf, I can only claim two schroot uploads to fix the lack of support of the new “overlay” filesystem that replaces “aufs” in the official Debian kernel, and some Distro Tracker work (fixing an issue that some people had when they were logged in via Debian’s SSO).

While the numerous discussions I had during DebConf can’t be qualified as “work”, they certainly contribute to build up work plans for the future:

As a Kali developer, I attended multiple sessions related to derivatives (notably the Debian Derivatives Panel).

I was also interested in the “Debian in the corporate IT” BoF led by Michael Meskes (Credativ’s CEO). He pointed out a number of problems that corporate users might have when they first consider using Debian, and we will try to do something about this. Expect further news and discussions on the topic.

Martin Kraff, Luca Filipozzi, and I had a discussion with the Debian Project Leader (Neil) about how to revive/transform Debian’s Partner program. Nothing is fleshed out yet, but at least the process initiated by the former DPL (Lucas) is moving forward again.

Other Debian work

Sponsorship. I sponsored an NMU of pep8 by Daniel Stender as it was a requirement for prospector… which I also sponsored since all the required dependencies are now available in Debian. \o/

Packaging. I NMUed libxml2 2.9.2+really2.9.1+dfsg1-0.1 fixing 3 security issues and an RC bug that was breaking publican. Since there had been no upstream fix for more than 8 months, I went back to the former version 2.9.1. It’s in line with the new requirement of the release managers: a package in unstable should migrate to testing reasonably quickly; it’s not acceptable to keep it unfixed for months. With this annoying bug fixed, I could again upload a new upstream release of publican, so I prepared and uploaded 4.3.2-1. It was my first source-only upload. This release was more work than I expected and I filed no fewer than 3 bugs upstream (new bash-completion install path, request to provide sources of a minified javascript file, drop a .po file for an invalid language code).

GPG issues with smartcard. Back from DebConf, when I wanted to sign some keys, I stumbled again upon the problem which makes it impossible for me to use my two smartcards one after the other without first deleting the stubs for the private key. It’s not a new issue but I decided that it was time to report it upstream, so I did: #2079 on bugs.gnupg.org. Some research helped me to find a way to work around the problem. Later in the month, after a dist-upgrade and a reboot, I was no longer able to use my smartcard as an SSH authentication key. Again it was already reported but there was no clear analysis, so I tried to do my own and added the results of my investigation in #795368. It looks like the culprit is pinentry-gnome3 not working when started by the gpg-agent which is started before the DBUS session. The simple fix is to restart the gpg-agent in the session, but I have no idea yet what the proper fix should be (letting systemd manage the graphical user session and start gpg-agent would be my first answer, but that doesn’t solve the issue for users of other init systems so it’s not satisfying).

Distro Tracker. I merged two patches from Orestis Ioannou fixing some bugs tagged newcomer. There are more such bugs (I even filed two: #797096 and #797223), go grab them and do a first contribution to Distro Tracker like Orestis just did! I also merged a change from Christophe Siraut who presented Distro Tracker at DebConf.

I implemented in Distro Tracker the new authentication based on SSL client certificates that was recently announced by Enrico Zini. It’s working nicely, and this authentication scheme is far easier to support. Good job, Enrico!

tracker.debian.org broke during DebConf; it stopped being updated with new data. I tracked this down to a problem in the archive (see #796892). Apparently Ansgar Burchardt changed the set of compression tools used on some jessie repositories, replacing bz2 with xz. He dropped the old Packages.bz2 but missed some Sources.bz2 files which were thus stale… and APT reported “Hashsum mismatch” on the uncompressed content.

Misc. I pushed some small improvements to my Salt formulas: schroot-formula and sbuild-formula. They will now auto-detect which overlay filesystem is available with the current kernel (previously “aufs” was hardcoded).

Thanks

See you next month for a new summary of my activities.


on September 01, 2015 11:49 AM

Last Thursday, the Unity 3D team announced experimental builds of the Unity editor for Linux.

This was quite exciting news, especially for me as a personal Unity 3D user. It was the perfect opportunity to implement install support for it in Ubuntu Make, and this is now available for download! The "experimental" label comes from the fact that it's experimental upstream as well: there is only one version out (and so no download section; we'll always fetch the latest) and no checksum support. We talked about it on upstream's IRC channel and will work with them on this in the future.

Unity3D editor on Ubuntu!

Of course, everything is, as usual, backed up with tests to ensure we spot any issues.

Speaking of tests, this release also fixes Arduino download support, which broke due to upstream versioning scheme changes. This is where our heavy investment in tests really shines, as we could spot it before getting any bug reports about it!

Various more technical "under the hood" changes went in as well, to make contributors' lives easier. We have recently received even more excellent contributions (to be honest, it's starting to be hard for me to keep up with them due to the load!); more on that next week, with nice goodies which are cooking up.

The whole release details are available here. As usual, you can get this latest version directly through its PPA for the 14.04 LTS, 15.04 and Wily Ubuntu releases.

Our issue tracker is full of ideas and opportunities, and pull requests remain open for any issues or suggestions! If you want to be the next featured contributor and want to give a hand, you can refer to this post with useful links!

on September 01, 2015 09:30 AM

August’s reading list

Canonical Design Team

The design team members are constantly sharing interesting, fun, weird, links with each other, so we thought it might be a nice idea to share a selection of those links with everyone.

Here are the links that have been passed around during last month:

Thanks to Robin, Luca, Elvira, Anthony, Jamie, Joe and me, for the links this month!

on September 01, 2015 07:59 AM

Welcome to the Ubuntu Weekly Newsletter. This is issue #432 for the week August 24 – 30, 2015, and the full version is available here.

In this issue we cover:

The issue of The Ubuntu Weekly Newsletter is brought to you by:

  • Paul White
  • Elizabeth K. Joseph
  • Chris Guiver
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

on September 01, 2015 03:45 AM

August 31, 2015

Assume we have a stream of events coming in one at a time, and we need to count the frequency of the different types of events in the stream.

In other words: we are receiving fruits one at a time in no given order, and at any given time we need to be able to answer how many of a specific fruit we have received.

The most naive implementation is a dictionary in the form of {event: count}; it is the most accurate option and is suitable for streams with a limited number of event types (a minimal sketch is shown below).
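To make that concrete, here is a minimal Python sketch of the naive exact counter (illustrative only; the benchmark code discussed later in this post was written in Go):

from collections import defaultdict

counts = defaultdict(int)   # event -> exact count

def add(event):
    counts[event] += 1

def estimate(event):
    return counts[event]

add("apple"); add("apple"); add("banana")
print(estimate("apple"))   # 2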

Let us assume a unique item consists of 15 bytes and has a dedicated uint32 (4 bytes) counter assigned to it.

At 10 million unique items we end up using about 190 MB (19 bytes each), which is a bit much, but on the plus side it's as accurate as it gets.

But what if we don't have the 190 MB? Or what if we have to keep track of several streams?

Maybe saving to a DB? Well, when querying the DB upon request, we would run something along the lines of:

SELECT count(event) FROM events WHERE event = ?

The more items we add, the more resource-intensive the query becomes.

Thankfully, solutions come in the form of probabilistic data structures (sketches).

I won't get into details, but to solve this problem I semi-evaluated the following data structures (a minimal Count-Min sketch is shown after the list to give a feel for how these structures work):

  • Count-Min sketch (CMS) [2]
  • Count-Min-Log sketch (CML) [1][2]
  • Probabilistic Multiplicity Counting sketch (PMC) [1]
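The sketch below is an illustrative, unoptimized Count-Min sketch in Python, not the Go code used for the benchmarks; the width and depth values are arbitrary here (in practice they are derived from the ɛ and δ parameters):

import hashlib

class CountMinSketch:
    def __init__(self, width=2048, depth=4):
        self.width, self.depth = width, depth
        self.table = [[0] * width for _ in range(depth)]

    def _index(self, item, row):
        # one cheap, row-seeded hash per row of counters
        digest = hashlib.md5(("%d:%s" % (row, item)).encode()).hexdigest()
        return int(digest, 16) % self.width

    def add(self, item, count=1):
        for row in range(self.depth):
            self.table[row][self._index(item, row)] += count

    def estimate(self, item):
        # take the minimum over all rows; CMS never underestimates
        return min(self.table[row][self._index(item, row)]
                   for row in range(self.depth))

cms = CountMinSketch()
cms.add("flow-1"); cms.add("flow-1")
print(cms.estimate("flow-1"))   # at least 2, never less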

Test details:

For each sketch I linearly added a new flow with a linearly growing number of events. So the first flow got 1 event inserted, the second flow got 2 events inserted, and so on, all the way up to the 10k-th flow with 10k events inserted.

flow 1: 1 event  
flow 2: 2 events  
...
flow 10000: 10000 events  

All three data structures were configured to have a size of 217KB (exactly 1739712 bits).

A couple dozen runs yielded the following results (based on my unoptimized code, especially for PMC and CML):

CMS: 07s for 50005000 insertion (fill rate: 31%)  
CML: 42s for 50005000 insertion (fill rate: 09%)  
PMC: 18s for 50005000 insertion (fill rate: 54%)  

CMS with ɛ: 0.0001, δ: 0.99 (code) [CMS results plot]

Observe the biased estimation of CMS: CMS will never underestimate. In our case, looking at the top border of the diagram, we can see that there was a lot of overestimation.

CML with ɛ: 0.000025, δ: 0.99 (16-bit counters) (code) [CML results plot]

Just like CMS, CML is also biased and will never underestimate. However, unlike CMS, the top border of the diagram is less noisy. Yet accuracy seems to decrease for the high-count flows.

PMC with (256x32) virtual matrices (code) [PMC results plot]

Unlike the previous two sketches, this sketch is unbiased, so underestimations exist. Also, the error in the estimated flow count grows with the actual flow count (linearly bigger errors). The drawback here is that PMC fills up very quickly, which means at some point it will just overestimate everything. It is recommended to know beforehand what the maximum number of different flows will be.

Bringing it all together: [combined results plot]

So what do you think? If you are familiar with these algorithms or can propose a different benchmarking scenario, please comment; I might be able to work on that on a weekend. The code was all written in Go; feel free to suggest optimizations or fix any bugs you find (links above the respective plots).

on August 31, 2015 10:25 PM

[PROMO] Plasma Evolving

The Kubuntu team is proud to announce the reference images for Plasma Mobile.

Plasma Mobile was announced today at KDE’s Akademy conference.

Our images can be installed on a Nexus 5 phone.

More information on Plasma Mobile’s website.

on August 31, 2015 10:09 PM

Now that we've open sourced the code for Ubuntu One filesync, I thought I'd highlight some of the interesting challenges we had while building and scaling the service to several million users.

The teams that built the service were roughly split into two: the foundations team, who was responsible for the lowest levels of the service (storage and retrieval of files, data model, client and server protocol for syncing) and the web team, focused on user-visible services (website to manage files, photos, music streaming, contacts and Android/iOS equivalent clients).
I joined the web team early on and stayed with it until we shut it down, so that's where a lot of my stories will be focused on.

Today I'm going to focus on the challenge we faced when launching the Photos and Music streaming services. Given that by the time we launched them we had a few years of experience serving files at scale, our challenge turned out to be in presenting and manipulating the metadata quickly for each user, and being able to show the data in appealing ways (showing music by artist or genre, and searching, for example). Photos was a similar story: people tended to have many thousands of photos and songs, and we needed to extract metadata, parse it, store it and then be able to present it back to users quickly in different ways. Easy, right? It is, until a certain scale :)
Our architecture for storing metadata at the time was about 8 PostgreSQL master databases across which we sharded metadata (essentially your metadata lived on a different DB server depending on your user id), plus at least one read-only slave per shard. These were really beefy servers with a truckload of CPUs, more than 128GB of RAM and very fast disks (when reading this, remember this was 2009-2013; hardware specs seem tiny as time goes by!).

However, no matter how big these DB servers got, given how busy they were and how much metadata was stored (for years we didn't delete any metadata, so for every change to every file we duplicated the metadata), after a certain time we couldn't get a simple listing of a user's photos or songs (essentially, some of their files filtered by mimetype) in a reasonable time-frame (less than 5 seconds). As it grew we added caches, indexes, optimized queries and code paths, but we quickly hit a performance wall that left us no choice but a much feared major architectural change. I say much feared, because major architectural changes come with a lot of risk to running services that have low tolerance for outages or data loss; whenever you change something that's already running in a significant way, you're basically throwing out most of your previous optimizations. On top of that, as users we expect things to be fast; we take it for granted. A 5 person team spending 6 months to make things as you expect them isn't really something you can brag about in the middle of a race with many other companies to capture a growing market.
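As an aside, the user-id sharding mentioned above boils down to something like the following simplified Python sketch (made-up names, not the actual routing code):

NUM_SHARDS = 8   # assumption: one logical shard per master database

def shard_for(user_id):
    # all metadata for a given user lives on exactly one shard
    return "shard%d" % (user_id % NUM_SHARDS)

print(shard_for(1234567))   # 'shard7'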
In the time since we had started the project, NoSQL had taken off and matured enough to be a viable alternative to SQL, and it seemed to fit many of our use cases much better (webscale!). After some research and prototyping, we decided to generate pre-computed views of each user's data in a NoSQL DB (Cassandra), and we decided to do that by extending our existing architecture instead of revamping it completely.

Given that our code was pretty well built into proper layers of responsibility, we hooked up to the lowest layer of our code - database transactions - an async process that would send messages to a queue whenever new data was written or modified. This meant essentially duplicating the metadata we stored for each user, but trading storage for computing is usually a good trade-off to make, both in cost and performance. So now we had a firehose queue of every change that went on in the system, and we could build a separate piece of infrastructure whose only focus would be to provide per-user metadata *fast* for any type of file, so we could build interesting and flexible user interfaces for people to consume back their own content. The stated internal goals were: 1) fast responses (under 1 second), 2) less than 10 seconds between user action and UI update and 3) complete isolation from existing infrastructure.
Here's a rough diagram of how the information flowed through the system:

U1 Diagram

It's a little bit scary when looking at it like that, but in essence it was pretty simple: write each relevant change that happened in the system to a temporary table in PG in the same transaction that it's written to the permanent table. That way you get, for free, transactional guarantees that you won't lose any data on that layer, and you use PG's built-in cache that keeps recently added records cheaply accessible.
Then we built a bunch of workers that looked through those rows, parsed them, sent them to a persistent queue in RabbitMQ and, once they got confirmation a row had been queued, deleted it from the temporary PG table.
Following that, we took advantage of Rabbit's queue exchange features to build different types of workers that process the data differently depending on what it was (music was stored differently than photos, for example).
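As a rough illustration of that worker pattern, here is a hedged Python sketch (hypothetical table, column and queue names; the real worker code is part of the not-yet-published pieces mentioned below):

import pika       # RabbitMQ client
import psycopg2   # PostgreSQL driver

pg = psycopg2.connect("dbname=metadata")   # hypothetical DSN
amqp = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = amqp.channel()
channel.queue_declare(queue="txlog", durable=True)
channel.confirm_delivery()   # wait for broker confirmation of each publish

def drain_txlog(batch=100):
    cur = pg.cursor()
    cur.execute("SELECT id, payload FROM txlog ORDER BY id LIMIT %s", (batch,))
    for row_id, payload in cur.fetchall():
        # publish persistently; only delete the row once RabbitMQ confirmed it
        channel.basic_publish(exchange="", routing_key="txlog", body=payload,
                              properties=pika.BasicProperties(delivery_mode=2))
        cur.execute("DELETE FROM txlog WHERE id = %s", (row_id,))
    pg.commit()

In the real system, batching, retries and exchange routing matter far more than this toy loop suggests.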
Once we completed all of this, accessing someone's photos was a quick and predictable read operation that would give us all their data back in an easy-to-parse format that would fit in memory. Eventually we moved all the metadata accessed from the website and REST APIs to these new pre-computed views and the result was a significant reduction in load on the main DB servers, while now getting predictable sub-second request times for all types of metadata in a horizontally scalable system (just add more workers and cassandra nodes).

All in all, it took about 6 months end-to-end, which included a prototype phase that used memcache as a key/value store.

You can see the code that wrote and read from the temporary PG table if you branch the code and look under: src/backends/txlog/
The worker code, as well as the web UI, is still not available but will be in the future once we finish cleaning it up. I decided to write this up and publish it now because I believe the value is more in the architecture than in the code itself :)

on August 31, 2015 09:17 PM

With the move to Plasma 5, updating the Kubuntu website seemed timely. Many people have contributed, including Ovidiu-Florin Bogdan, Aaron Honeycutt, Marcin Sągol and many others.

We want to show off the beauty of Plasma 5, as well as allow easy access for Kubuntu users to the latest news, downloads, documentation, and other resources.

We want your help! Whether you code/program or not.

Web development, packaging, bug triage, documentation, promotion and social media are all areas where we can use your talents and skill, as well as offering help to new or troubled users.

For instance, people regularly report problems on Facebook, Reddit, Google+, Twitter, Telegram now and of course, #kubuntu on Freenode IRC, rather than filing bugs.

Sometimes their problems are easily solved, sometimes they have encountered real bugs, which we can help them file.

Please use our new site to find what you need, and tell us if you find something which needs improvement.

on August 31, 2015 03:45 PM

August 30, 2015

Disclaimer: I am not a member of the Mycroft team, but I think this is neat and an important example of open innovation that needs support.

Mycroft is an Open Source, Open Hardware, Open APIs product that you talk to and it provides information and services. It is a wonderful example of open innovation at work.

They are running a kickstarter campaign that is pretty close to the goal, but it needs further backers to nail it.

I recorded a short video about why I think this is important. You can watch it here.

I encourage you to go and back the campaign. This kind of open innovation across technology, software, hardware, and APIs is how we make the world a better and more hackable place.

on August 30, 2015 09:42 PM

August 29, 2015

CCCamp 2015

Riccardo Padovani

I’d like to give a big thank you to the Ubuntu Community, which paid for my entrance ticket and let me take part in CCCamp 2015. The Chaos Communication Camp is an international meeting of hackers that takes place every four years and is organized by the Chaos Computer Club (CCC).

cccamp

My experience was amazing, thanks to the people I talked to and the people I met.

Talks

There were quite a few talks, so I’ll highlight some of them:

How to make your software build reproducibly

Lunar, a Debian Developer, explained why it’s important to make the build of packages from source code reproducible. The main issue in the chain of trust at the moment is that we can read the source code of the packages we install, but we trust the third-party servers where packages have been built. To make things secure we need to build packages in a deterministic way, so everyone can check that a package was built from the source without modification.

While the problem seems easy to solve, it isn’t. Debian has worked on the problem for two years, and still hasn’t completely fixed it.

How to organize a CTF

CTFs - Capture The Flag contests - are competitions where the task is to maintain a server running multiple services, while simultaneously trying to get access to the other teams’ servers. Each successful penetration gains points, as does keeping your own services up and functional during the course of the game.

At the camp itself there was a CTF contest, and it’s incredible how some people could hack into a system and find vulnerabilities, with very complicated ways to bypass security systems.

Towards Universal Access to All Knowledge: Internet Archive

Archive.org is a well-known service: their goal is to back up the whole world! Brewster Kahle explained how they’re working to archive as much data as possible, and why it’s important.

TLS interception considered harmful

With the more widespread use of encrypted HTTPS connections many software vendors intercept these connections by installing a certificate into the user’s browser. This is widely done by Antivirus applications, parental filter software or ad injection software. This can go horribly wrong, as the examples of Superfish and Privdog have shown. But even if implemented properly these solutions almost always decrease the security of HTTPS.

The talk explains how bad some of these software companies are, reducing your security for their own gain.

Let’s encrypt

Let’s Encrypt is a new free and automated certificate authority, launching in summer 2015.

In these dark times security and privacy are more important than ever. EFF, Mozilla, Cisco, Akamai, IdenTrust, and a team at the University of Michigan are working to make the web a safer place by adopting HTTPS everywhere. I really hope this project will see wide adoption.

People

be excellent

As usual, the best part of events like this is meeting new people, listening to different stories and acquiring shared knowledge. I’m not going to report on them here: a lot of them value their privacy (at the camp there were ‘No photos’ signs everywhere). I had an awesome time talking to people and learning new things from them.

A big thanks to my travel friends, Ruio and Bardo, for their knowledge and even more for their company in that long trip.

Also, a thank you to everyone at the Italian Embassy - good guys, with a lot of free grappa. Awesome!

italian embassy

CCC Angels

The event is organized by volunteers, so a big thanks goes to all of them; they were able to provide power and Internet for everyone, all 5000-plus of us.

I want to say thanks again to the Ubuntu Community for sponsoring me, and to all the people I met.

The next CCCamp is in 4 years; I hope I’ll be able to join again, because it’s a very powerful experience.

Thanks to Aaron Honeycutt for helping me writing this article.

If you like my work and want to support me, just send me a Thank you! by email or offer me a beer:-)

Ciao,
R.

on August 29, 2015 11:17 PM
The Intel SuspendResume project aims to help identify delays in suspend and resume.  After seeing it demonstrated by Len Brown (Intel) at this year's Linux Plumbers Conference, I gave it a quick spin and was delighted to see how easy it is to use.

The project has some excellent "getting started" documentation describing how to configure a system and run the suspend resume analysis script which should be read before diving in too deep.

For the impatient, one can try it out using the following:

git clone https://github.com/01org/suspendresume.git
cd suspendresume
sudo ./analyze_suspend.py


...and manually resume once, after the machine has completed a successful suspend.

This will create a directory containing dumps of the kernel log and ftrace output, as well as an HTML web page that one can load into your favourite web browser to view the results.  One can zoom in/out of the web page to drill down and see where the delays are occurring; an example from the SuspendResume project page is shown below:

example webpage (from https://01.org/suspendresume)

It is a useful project, kudos to Intel for producing it.  I thoroughly recommend using it to identify the delays in suspend/resume.
on August 29, 2015 05:45 PM

The Gist

There are many apps in the Ubuntu Touch store that come with unique, difficult-to-remember names, for example Fahrplan, Dekko, Podbird, Chancho, Wunderlist, etc. How do users find them? Are users expected to scroll through the app scope searching for the app? Of course not... they can either type to search or filter by category. However, what if they don't remember the name of the app? Then searching by name becomes irrelevant. So how do we app developers make it easier?

Keywords

Magic of Keywords

Unity 7 (and also Unity 8), GNOME Shell and most popular desktop environments provide support for keywords. Keywords allow users to search for apps by the functionality that they offer.

For instance, Fahrplan might make sense to users who speak German, but for the rest of the world, it is rather ambiguous and difficult to remember. Hell, just by looking at the app name, can you guess what it does? However, the Fahrplan developers added keywords to their desktop file, as shown below:

[Desktop Entry]
Version=1.0
Type=Application
Terminal=false
Name=Fahrplan
Keywords=travel;train;tram;bus;journey;
Exec=./bin/fahrplan2
Icon=fahrplan2-square.svg

You can now search for Fahrplan using common strings like travel, train, bus or journey. Do these keywords make it clear what the app does ;-)? Fahrplan is a journey/travel planner that connects to many services like bahne.de, 9292ov.nl, 511.org, etc. and returns train/bus/ferry schedules. One nifty little app that covers many services across Europe.

Tip: You can also make the keywords translatable by adding an underscore "_", as shown below:

_Keywords=travel;train;tram;bus;journey;

If you configure your CMake or QMake projects correctly, you can expose these keywords to translators, who can then translate them. You would be surprised at how many apps don't do this. Little things like this matter. Adding keywords to your app takes at most a minute of your time. What are you waiting for? Go!

Liked this post? You can find other similar tutorials/guides written by me here.

on August 29, 2015 01:03 PM

August 28, 2015

At 100 pages this is the biggest issue EVAR!

This month:
* Our Great Ancestor: Warty Warthog
* Command & Conquer
* How-To : Python, LibreOffice, Website with Infrastructure, and Programming COBOL
* Graphics : Inkscape.
* Survey Results
* Chrome Cult
* Linux Labs: How I Learned To Love Ubuntu
* Site Review
* A Quick Look At: Linux in Industry, and the French Translation Team
* Ubuntu Phones
* [NEW!] Linux Loopback
* Ubuntu Games: The Current State of Linux Gaming
plus: News, Q&A, and soooo much more.

I’m also trying new avenues to promote FCM, so please take the time to upvote my Reddit post to help bring FCM awareness to the masses: https://www.reddit.com/r/Ubuntu/comments/3iqy13/full_circle_magazine_releases_issue_100/

http://fullcirclemagazine.org/issue-100/

 

on August 28, 2015 05:21 PM

Go enjoy Python3

Dimitri John Ledkov

Given a string, get a truncated string of length up to 12.

The task is ambiguous, as it doesn't say whether or not the 12 should include the terminating null character. Nonetheless, let's see how one would achieve this in various languages.
Let's start with python3

import sys
print(sys.argv[1][:12])

Simple enough: in essence, given the first argument, print it up to length 12. As an added bonus, this also deals with Unicode correctly; that is, if the passed arg is 車賈滑豈更串句龜龜契金喇車賈滑豈更串句龜龜契金喇, it will correctly print 車賈滑豈更串句龜龜契金喇. (Note these are just random Unicode strings to me; no idea what they stand for.)

In C things are slightly more verbose, but in essence I am going to use the strncpy function:

#include <stdio.h>
#include <string.h>
void main(int argc, char *argv[]) {
char res[12];
strncpy(res,argv[1],12);
printf("%s\n",res);
}
This treats things as a byte array instead of Unicode, thus for the Unicode test it will end up printing just 車賈滑豈. But it is still simple enough.
Finally we have Go
package main

import "os"
import "fmt"
import "math"

func main() {
fmt.Printf("%s\n", os.Args[1][:int(math.Min(12, float64(len(os.Args[1]))))])
}
This similarly treats the argument as a byte array, and one needs to convert the argument to a []rune to get Unicode string handling. But there are quite a few caveats. One cannot take out-of-bounds slices; thus a naïve os.Args[1][:12] can result in a runtime panic that slice bounds are out of range, or, if the string is known at compile time, a compile-time error. Hence one needs to calculate the length and do a min comparison. And there lies the next caveat: math.Min() is only defined for the float64 type, and slice indexes can only be integers, and thus we end up writing ]))))])...

12 points for python3, 8 points for C, and Go receives nul points Eurovision style.

EDIT: Andreas Røssland and James Hunt are full of win. Both suggesting fmt.Printf("%.12s\n", os.Args[1]) for go. I like that a lot, as it gives simplicity & readability without compromising the default safety against out of bounds access. Hence the scores are now: 14 points for Go, 12 points for python3 and 8 points for C.

EDIT2: I was pointed to a much better C implementation by Keith Thompson - http://pastebin.com/5i7rFmMQ - in essence it uses strncat(), which has much better null termination semantics. And Ben posted a C implementation which handles wide characters http://www.decadent.org.uk/ben/blog/truncating-a-string-in-c.html. I regret to inform you that this blog post got syndicated onto Hacker News and has now become the top viewed post on my blog of all time, overnight. In retrospect, I regret awarding points at the end of the blog post, as that was merely an expression of opinion and a highly subjective measure. But this problem statement did originate from me reviewing Go code that did an "if/then/else" comparison and got it wrong while truncating a string, and I thought surely one can just do [:12], which has led me down the rabbit hole of discovering a lot about Go; its compile and runtime out-of-bounds access safeguards; lack of a universal Min() function; runes vs strings handling and so on. I'm only a beginner Go programmer and I am very sorry for wasting everyone's time on this. I guess people didn't have much to do on a Throwback Thursday.

The postings on this site are my own and don't necessarily represent Intel’s positions, strategies, or opinions.
on August 28, 2015 09:48 AM

Note: I’m sorry, this post is a bit of a mess.

I wrote a post 2 days ago, outlining an idea for a non-windowing display server — a layer that wayland compositors (or other programs) could be built upon. It got quite a bit more attention than I expected, and there were many responses to the idea.

Before I go on, I wish to address a few things that weren’t clear in the original post:

The first being that I am not an ubuntu developer, and am in no way associated with canonical. I am only an ubuntu member :) Even though I don’t use ubuntu personally, I wish to improve the user experience of those who do.

Second is a point that I did not address clearly in the original post: One of the main reasons for this idea is to enable users to modify the video resolution, gamma ramp, orientation, brightness, etc. DRM provides an API for doing these operations, however, AFAIK, you cannot run modesetting operations on a virtual terminal that is already running an application that has called video modesetting operations. In other words, you cannot run a DRM-based application on an already-running wayland server in order to run a modesetting operation. So, AFAIK, the only way to enable an application to do this is to write a sort of “proxy” server that handles requests, and then runs the video modesetting operations.

Since I am currently confusing myself re-reading this, I’ll try to provide a diagram in order to explain what I mean.

If you want to change the gamma ramp, for example, this is impossible:

drm_client_wayland

So with the display server acting as a proxy of sorts, it becomes possible:

drm_client_display_server

This is also why I believe that having a server over a shared library is crucial. A shared library would allow for abstraction over multiple backends, however, it doesn’t allow communication with more than one application. A wayland compositor can access all of the functions, yes, but wayland clients cannot.

The third clarification is that this is not only meant for wayland. Though this is the main “client” I have in mind for this server, it isn’t restricted to only wayland. The idea is that it could be used by anything, for example, as one response pointed out, xen virtualization. Or, in my case, I actually want to write clients that use this server directly, without even using a windowing server like wayland (yes, I actually have a good reason for wanting this XD ). In other words, though I believe that the group that would use this the most would be wayland users (hence why I wrote the original post tailored towards this), it isn’t only meant for wayland.

There were a few responses saying that wayland intentionally doesn’t support this, not because of the reason I originally suspected (it being “only” a windowing protocol), but because one of wayland’s main goals is to let the compositor to have full control over the display, and make sure that there are no flickers or tearing etc., which changing the video resolution (or some other modesetting operations) would undoubtedly cause. I understand and respect this, however, I still want to be able to change the resolution or gamma ramp (etc.) myself, and suffer the consequences of the momentary flickering or whatever else. Again though, I respect wayland’s decision in this aspect, so my proposal, instead, is this: To make this an optional backend for wayland compositors. Instead of my original proposal, which was to build wayland compositors on top of this (in order to help simplify the stack), instead, have this as an option, so that if users wish to have the video modesetting (etc.) capabilities, they can use this backend instead.

A pretty large concern that many people (including myself) have is performance. Having an extra server on the stack would definitely have an impact on performance, but the question is how much.

So with this being said, going forwards, I am currently working on implementing a proof-of-concept prototype in order to have a better sense of what it entails, especially in regards to performance. The prototype will be anything but production-ready, but hopefully will at least work … maybe XD .


on August 28, 2015 01:22 AM

August 27, 2015

Recently there has been a flurry of concerns relating to the IP policy at Canonical. I have not wanted to throw my hat into the ring, but I figured I would share a few simple thoughts.

Firstly, the caveat. I am not a lawyer. Far from it. So, take all of this with a pinch of salt.

The core issue here seems to be whether the act of compiling binaries provides copyright over those binaries. Some believe it does, some believe it doesn’t. My opinion: I just don’t know.

The issue here though is with intent.

In Canonical’s defense, and specifically Mark Shuttleworth’s defense, they set out with a promise at the inception of the Ubuntu project that Ubuntu would always be free. The promise was that there would not be a hampered community edition and a full-flavor enterprise edition. There would be one Ubuntu, available freely to all.

Canonical, and Mark Shuttleworth as a primary investor, have stuck to their word. They have not gone down the road of the community and enterprise editions, of per-seat licensing, or some other compromise in software freedom. Canonical has entered multiple markets where having separate enterprise and community editions could have made life easier from a business perspective, but they haven’t. I think we sometimes forget this.

Now, from a revenue perspective, this has caused challenges. Canonical has invested a lot of money in engineering/design/marketing, and some companies have used Ubuntu without contributing even nominally to its development. Thus, Canonical has at times struggled to find the right balance between a free product for the Open Source community and revenue. We have seen efforts such as training services, Ubuntu One, etc., some of which have failed and some of which have succeeded.

Again though, Canonical has made their own life more complex with this commitment to freedom. When I was at Canonical I saw Mark very specifically reject notions of compromising on these ethics.

Now, I get the notional concept of this IP issue from Canonical’s perspective. Canonical invests in staff and infrastructure to build binaries that are part of a free platform and that other free platforms can use. If someone else takes those binaries and builds a commercial product from them, I can understand Canonical being a bit miffed about that and asking the company to pay it forward and cover some of the costs.

But here is the rub. While I understand this, it goes against the grain of the Free Software movement and the culture of Open Source collaboration.

Putting the legal question of copyrightable binaries aside for one second, the current Canonical IP policy is just culturally awkward. I think most of us expect that Free Software code will result in Free Software binaries, and to claim that those binaries are limited or restricted in some way seems unusual and the antithesis of the wider movement. It feels frankly like an attempt to find a loophole in a collaborative culture where the connective tissue is freedom.

Thus, I see this whole thing from both angles. Firstly, Canonical is trying to find the right balance of revenue and software freedom, but I also sympathize with the critics that this IP approach feels like a pretty weak way to accomplish that balance.

So, I ask my humble readers this question: if Canonical reverts this IP policy and binaries are free to all, what do you feel is the best way for Canonical to derive revenue from their products and services while also committing to software freedom? Thoughts and ideas welcome!

on August 27, 2015 11:59 PM

"I am Groot."
– Groot

The first beta of the Wily Werewolf (to become 15.10) has now been released!

This beta features images for Kubuntu, Lubuntu, Ubuntu GNOME, Ubuntu Kylin, Ubuntu MATE, Xubuntu and the Ubuntu Cloud images.

Pre-releases of the Wily Werewolf are *not* encouraged for anyone needing a stable system or anyone who is not comfortable running into occasional, even frequent breakage. They are, however, recommended for Ubuntu flavor developers and those who want to help in testing, reporting and fixing bugs as we work towards getting this release ready.

Beta 1 includes a number of software updates that are ready for wider testing. This is quite an early set of images, so you should expect some bugs.

While these Beta 1 images have been tested and work, except as noted in the release notes, Ubuntu developers are continuing to improve the Wily Werewolf. In particular, once newer daily images are available, system installation bugs identified in the Beta 1 installer should be verified against the current daily image before being reported in Launchpad. Using an obsolete image to re-report bugs that have already been fixed wastes your time and the time of developers who are busy trying to make 15.10 the best Ubuntu release yet. Always ensure your system is up to date before reporting bugs.

Kubuntu

Kubuntu uses KDE software and now features the new Plasma 5 desktop.

The Kubuntu 15.10 Beta 1 images can be downloaded from:

http://cdimage.ubuntu.com/kubuntu/releases/wily/beta-1/

More information about Kubuntu 15.10 Beta 1 can be found here:

https://wiki.ubuntu.com/WilyWerewolf/Beta1/Kubuntu

Lubuntu

Lubuntu is a flavour of Ubuntu based on LXDE and focused on providing a very lightweight distribution.

The Lubuntu 15.10 Beta 1 images can be downloaded from:

http://cdimage.ubuntu.com/lubuntu/releases/wily/beta-1/

More information about Lubuntu 15.10 Beta 1 can be found here:

https://wiki.ubuntu.com/WilyWerewolf/Beta1/Lubuntu

Ubuntu GNOME

Ubuntu GNOME is a flavour of Ubuntu featuring the GNOME3 desktop environment.

The Ubuntu GNOME 15.10 Beta 1 images can be downloaded from:

http://cdimage.ubuntu.com/ubuntu-gnome/releases/wily/beta-1/

More information about Ubuntu GNOME 15.10 Beta 1 can be found here:

https://wiki.ubuntu.com/WilyWerewolf/Beta1/UbuntuGNOME

Ubuntu Kylin

Ubuntu Kylin is a flavour of Ubuntu that is more suitable for Chinese users.

The Ubuntu Kylin 15.10 Beta 1 images can be downloaded from:

http://cdimage.ubuntu.com/ubuntukylin/releases/wily/beta-1/

More information about Ubuntu Kylin 15.10 Beta 1 can be found here:

https://wiki.ubuntu.com/WilyWerewolf/Beta1/UbuntuKylin

Ubuntu MATE

Ubuntu MATE is a flavour of Ubuntu featuring the MATE desktop environment for people who just want to get stuff done.

The Ubuntu MATE 15.10 Beta 1 images can be downloaded from:

http://cdimage.ubuntu.com/ubuntu-mate/releases/wily/beta-1/

More information about Ubuntu MATE 15.10 Beta 1 can be found here:

https://wiki.ubuntu.com/WilyWerewolf/Beta1/UbuntuMATE

Xubuntu

Xubuntu is a flavour of Ubuntu shipping with the XFCE desktop environment.

The Xubuntu 15.10 Beta 1 images can be downloaded from:

http://cdimage.ubuntu.com/xubuntu/releases/wily/beta-1/

More information about Xubuntu 15.10 Beta 1 can be found here:

https://wiki.ubuntu.com/WilyWerewolf/Beta1/Xubuntu

Ubuntu Cloud

Ubuntu Cloud images can be run on Amazon EC2, Openstack, SmartOS and many other clouds.

The Ubuntu Cloud 15.10 Beta 1 images can be downloaded from:

http://cloud-images.ubuntu.com/releases/wily/beta-1/

Regular daily images for Ubuntu can be found at:

http://cdimage.ubuntu.com

If you’re interested in following the changes as we further develop Wily, we suggest that you subscribe to the ubuntu-devel-announce list. This is a low-traffic list (a few posts a week) carrying announcements of approved specifications, policy changes, beta releases and other interesting events.

http://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-announce

A big thank you to the developers and testers for their efforts to pull together this Beta release!

Originally posted to the ubuntu-devel-announce mailing list on Thu Aug 27 14:27:17 UTC 2015 by Martin Wimpress on behalf of Ubuntu Release Team

on August 27, 2015 10:02 PM

Jon recently published a blog post stating that you’re free to create Ubuntu derivatives as long as you remove trademarks. I do not necessarily agree with this statement, primarily because of this clause in the IP rights policy :

Copyright

The disk, CD, installer and system images, together with Ubuntu packages and binary files, are in many cases copyright of Canonical (which copyright may be distinct from the copyright in the individual components therein) and can only be used in accordance with the copyright licences therein and this IPRights Policy.

From what I understand, Canonical is asserting copyright over various binaries that are shipped on the ISO, and they’re totally in the clear to do so for any packages that end up on the ISO that are permissively licensed (X11, for example), because permissive licenses, unlike copyleft licenses, do not prohibit additional restrictions on top of the software. A reading of the GPL turns up this explicit statement:

4. You may not copy, modify, sublicense, or distribute the Program
except as expressly provided under this License. Any attempt
otherwise to copy, modify, sublicense or distribute the Program is
void, and will automatically terminate your rights under this License.
However, parties who have received copies, or rights, from you under
this License will not have their licenses terminated so long as such
parties remain in full compliance.

Whereas licenses such as the X11 license explicitly allow sublicensing:

… including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software …

Depending on the jurisdiction you live in, Canonical *can* claim copyright over the binaries that are produced in the Ubuntu archive. This is something that multiple other parties, such as the Software Freedom Conservancy and the FSF, as well as Bradley Kuhn, have agreed on.

So once again, all of this is very much dependent on where you live and where your ISOs are hosted. So if you’re distributing an Ubuntu derivative, I’d very much recommend talking to a professional lawyer who’d best be able to advise you about how the policy affects you in your jurisdiction. It may very well be that you require a license, or it may be that you don’t. I’m not a lawyer and, AFAIK, neither is Jon.

Addendum/Afterthought :

Taken to a bit more of an extreme, one could even argue that in order to be GPL compliant, derivatives should provide sources for all the packages that land on the ISO, and that just passing off this responsibility to Canonical is a potential GPL violation.


on August 27, 2015 06:47 PM

“Ubuntu is entirely committed to the principles of free software development; we encourage people to use free and open source software, improve it and pass it on.” is what used to be printed on the front page of ubuntu.com. This is still true, but it recently came under attack when the project’s main sponsor, Canonical, put up an IP policy which broke the GPL and free software licences generally by claiming packages need to be recompiled. Rather than apologising for this in the modern sense of the word by saying sorry, various staff members have apologised in an older sense of the word meaning to excuse. But everything in Ubuntu is free to share, copy and modify (or just free to share and copy in the case of restricted/multiverse). The archive admins will only let in packages which comply with this, and anyone saying otherwise is incorrect.

In this twitter post Michael Hall says “If a derivative distro uses PPAs it needs an additional license.” But he doesn’t say what it is that needs an additional licence; the packages already have copyright licences, all of them free software.

It should be very obvious that Canonical doesn’t control the world, and a licence is only needed if there is some law that allows them to restrict what others want to do. There have been a few claims about what that law might be, but nothing that makes sense when you look at it. It’s worth examining these claims, because people will fall for them, and that will destroy Ubuntu as a community project. Community projects depend on everyone having the freedom to do whatever they want with the code, else nobody will give their time to a project that someone else will then control.

In this blog post Dustin Kirkland again doesn’t say what needs a licence, but says one is needed based on Geographical Indication. It’s hard to say if he’s being serious. A geographical indication (GI) is a sign used on products that have a specific geographical origin and possess qualities or a reputation that are due to that origin, and such products are assessed before being registered. There is no Geographical Indication registration in Ubuntu and it’s completely irrelevant to everything. So let’s move on.

A more dangerous claim can be seen in this reddit post, where Michael Hall claims “for permissively licensed code where you did not build the binary, there is no pre-existing right to redistribution of that binary”. This is incorrect: everything in Ubuntu has a free software licence with an explicit right to redistribution. (Or a few bits are public domain, where no licence is needed at all.) Let’s take libX11 as a random example; it gets shipped with a copyright file containing this licence:

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”),  to deal in the Software without restriction

so we do have permission. Shame on those who say otherwise. This applies to the source, of course, and so it applies to any derived work such as the binaries, which is why the licence is shipped with the binaries. It even says you can’t remove the licence:

“The above copyright notice and this permission notice (including the next paragraph) shall be included in all copies or substantial portions of the Software.”

So it’s free software and the licence requires it to remain free software. It’s not copyleft, so if you combine it with another work which is not free software then the result is proprietary, but we don’t do that in Ubuntu. The copyright owner could put extra restrictions on, but nobody else can, because it’s a free world and you can’t make me do stuff just because you say so; you have to have some legal way to restrict me first.

One of the items allowed by this X11 licence is the ability to “sublicense”, which is just putting another licence on it, but you can’t remove the original licence, as it says in the part of the licence I quoted above. Once I have a copy of the work I can copy it all I want under the X11 licence and ignore your sublicence.

This is even true of works in the public domain or under a WTFPL-style licence: once I’ve got a copy of the work it’s still public domain, so I can still copy, share and modify it freely. You can’t claim it’s your copyright because, well, it’s not.

In Matthew Garrett’s recent blog post he reports that “Canonical assert that the act of compilation creates copyright over the binaries”. Fortunately this is untrue and can be ignored. Copyright requires some creative input; it’s not enough to run a work through a computer program. In the very unlikely case that a court did decide that compiling a programme added some copyright, it would not decide that the copyright was owned by the owners of the computer it ran on, but by the copyright owners of the compiler, which is the Free Software Foundation, and the copyright would be GPL.

In conclusion, there is nothing which restricts people from making derivatives of Ubuntu except the trademark, and removing branding is easy. (Even that is unnecessary unless you’re trading, which most derivatives aren’t, but it’s a sign of good faith to remove it anyway.)

Which is why Mark Shuttleworth says “you are fully entitled and encouraged to redistribute .debs and .iso’s”. Lovely.

 

on August 27, 2015 02:50 PM

The Xubuntu team is pleased to announce the immediate release of Xubuntu 15.10 Beta 1. This is the first beta towards the final release in October.

The first beta release also marks the end of the period to land new features in the form of Ubuntu Feature Freeze. This means any new updates to packages should be bug fixes only, the Xubuntu team is committed to fixing as many of the bugs as possible before the final release.

The Beta 1 release is available for download by torrents and direct downloads from
http://cdimages.ubuntu.com/xubuntu/releases/wily/beta-1/

Highlights and known issues

New features and enhancements

  • LibreOffice Calc and Writer are now included. These applications replace Gnumeric and Abiword respectively.
  • A new theme for LibreOffice, libreoffice-style-elementary, is also included and is the default for Wily Werewolf.

Known Issues

Some issues were found during testing of the image; in addition, some bugs related to Xubuntu have been noted during the development cycle. Full details of all of these can be found in the release notes at https://wiki.ubuntu.com/WilyWerewolf/Beta1/Xubuntu

on August 27, 2015 02:39 PM

Hello,

Ubuntu GNOME Team is glad to announce the release of Beta 1 of Ubuntu GNOME Wily Werewolf (15.10).

What’s new and how to get it?

Please do read the release notes:
https://wiki.ubuntu.com/WilyWerewolf/Beta1/UbuntuGNOME

As always, thanks a million to each and everyone who has helped, supported and contributed to make this yet another successful milestone!

We have great testers and without their endless support, we don’t think we can ever make it. Please, keep the great work up!

Thank you!

on August 27, 2015 02:31 PM

S08E25 – Jurassic Shark - Ubuntu Podcast

Ubuntu Podcast from the UK LoCo

It’s Episode Twenty-five of Season Eight of the Ubuntu Podcast! Mark Johnson is back with Laura Cowen, Martin Wimpress, and Alan Pope!

In this week’s show:

We look at what’s been going on in the news:

We also take a look at what’s been going on in the community:

There are even events:

That’s all for this week, please send your comments and suggestions to: show@ubuntupodcast.org
Join us on IRC in #ubuntu-podcast on Freenode
Follow us on Twitter
Find our Facebook Fan Page
Follow us on Google+

on August 27, 2015 09:41 AM

August 26, 2015

For the past couple of weeks I’ve been playing with a variety of boards, and a single problem kept raising its head over and over again, I needed to build test images quickly in order to be able to checkout whether or not these boards had the features that I wanted.

This led me to investigate the tools available for building images for these boards. And the tools I came across for each of these boards were abysmal, to say the least. All of them were either very board-specific or not versatile enough for my needs. Linaro’s HWPacks came very, very close to what I needed, but still had the following limitations:

  • HWPacks are inflexible in terms of partitioning layout; the entire partitioning layout is internal to the tool, and you can only specify one of three variations of the partition layout, and not control anything else, such as the start sectors of the partitions.
  • HWPacks are inflexible in terms of bootloader flashing; as far as I can tell, there was no way to specify the start sector, the byte size and other options that some of these boards pass to dd when flashing the bootloader to the image.
  • HWPacks, as far as I could tell, could not generate config files that would be used by u-boot at boot.
  • HWPacks only support Apt.

So with those 4 problems to solve, I set out writing my own replacement for Linaro’s HWPacks, and lo and behold, you can find it here. (I’m quite terrible at coming up with awesome names for my projects, so I chose the most simple and descriptive name I could think of. ;)

Here’s a sample config for the ODROID C1, a neat little board from HardKernel.

The rootfs section

You can specify a rootfs for your board in this section; it takes a URL to the rootfs tar and optionally an md5sum for the tar.

The firmware section

We currently have two firmware backends for installing the firmware (things like the kernel and other board-specific packages). One is the tar backend, which, like the rootfs section, takes a URL to the firmware tar and optionally an md5sum; the other is the Apt backend. I only have time to maintain these two backends, so I’d absolutely love it if someone could write more backends, such as yum or pacman, and send me a pull request.

The tar backend will copy everything from the boot/* folder inside the tar onto the first partition, and anything inside the firmware/* and modules/* folder into the rootfs’s /lib folder. This is a bit implicit and I’m trying to figure out a way to make this better.

The apt backend can take multiple apt repos to be added to the rootfs and a list of packages to install afterwards.

The bootloader section

The bootloader has a :config section which takes an ERB file to be rendered and installed into both the rootfs and the bootfs (if you have one).

Here’s a quote of the sample ERB file for the ODROID C1:

This allows me to dynamically render boot files depending on what kernel was installed on the image and what the UUID of the rootfs is. You can in fact access more variables as described here.

Moving on to the :uboot section of the bootloader, you can specify as many stages as you want to flash onto the image. Each stage will take a :file to flash and optionally :dd_opts, which are options that you might want to pass to dd when writing the bootloader. The stages are flashed in the sequence that is declared in config.yml and the files are searched for in the rootfs first, failing which they’re searched for in the bootfs partition, if you have one.

The login section

The login section is quite self-explanatory and takes a user, a password for the user and a list of groups the user should be added to on the target image.

The login section is optional and can be skipped if your rootfs already has a pre-configured user.
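
To give a rough idea of the overall shape, here is a purely illustrative sketch of a config.yml pieced together from the sections described above; the actual key names, values and layout in the real configs will differ, so treat this as pseudo-config rather than something to copy:

# Illustrative sketch only -- key names and values below are guesses/assumptions.
:rootfs:
  :url: http://example.org/images/rootfs.tar.xz   # hypothetical URL
  :md5sum: 0123456789abcdef0123456789abcdef       # optional

:firmware:
  # either the tar backend ...
  :tar:
    :url: http://example.org/images/firmware.tar.xz
  # ... or the apt backend
  :apt:
    :repos:
      - deb http://example.org/repo wily main     # hypothetical repo line
    :packages:
      - linux-image-board                         # hypothetical package name

:bootloader:
  :config: boot.ini.erb                           # ERB template rendered at build time
  :uboot:
    :stages:
      - :file: u-boot.bin                         # stages flash in the order declared
        :dd_opts: "bs=512 seek=64"                # hypothetical dd options

:login:
  :user: builder
  :password: changeme
  :groups: [sudo, video]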

At the moment I have configs for the ODROID C1, the Cubox-I (thanks to Solid Run for sending me a free extra board! :) and the Raspberry Pi 2.

If you have questions send me an email or leave them in the comments below, and I’ll try to answer them ASAP :).

If you end up writing a config for your board, please send me a PR with the config, that’d be most awesome.

PS: Some of the most awesome people I know are meeting up at Randa next month to work on bringing Touch to KDE. It’d be supremely generous of you if you could donate towards the effort.


on August 26, 2015 03:34 PM
[Updates (1) and (2) at the bottom of the post]

It's 01:24am, on Tuesday, August 25, 2015.  I am, again, awake in the middle of the night, due to another false alarm from Google's spitefully sentient, irascibly ignorant Nest Protect "smart" smoke alarm system.

Exactly how I feel right now.  Except I'm in my pajamas.
Warning: You'll find very little profanity on this blog.  However, the filter is off for this post.  Apologies in advance.

ARRRRRRRRRRRRRRRRRGGGGGGGGGHHHHHHHHHHH!!!!!!!!!!!
Oh.
My.
God.
FOR FUCK'S SAKE.

"Heads up, there's smoke in the kids room," she says.  Not once, but 3 times in a 30 minute period, between 12am and 1am, last night.


That's my alarm clock.  Right now.  I'm serious.
"Heads up, there's smoke in the guest bedroom," she says again tonight a few minutes ago, at 12:59am.

There was in fact never any smoke to clear.
Is it possible for anything wake you up more seriously and violently in a cold panic than a smoke alarm detecting something amiss in your 2 year old's bedroom?

Here's what happens (each time)...

Every Nest Protect unit in the house announces, in unison, "Heads up, there's smoke in the kids' room."  Then both my phone and my wife's phone buzz on our nightstands, with the urgent incoming message from the Nest app.  Another few seconds pass, and another set of alarms arrives, this time delivered by email, in case you missed the first two.

The first and second time it happens, you jump up immediately.  You run into their room and make sure everyone is okay -- both the infant in the crib and toddler who's into everything.  You walk the whole house, checking the oven, the stove, the toaster, the computer equipment.  You open the door and check around outside.  When everything is okay, you're left with a tingling in the back of your mind, wondering what went wrong.  When you're a computer engineer by trade, you're trying to debug the hardware and/or software bug causing the false positive.  Then you set about trying to calm your family and get them back into bed.  And at some point later, you calm your own nerves and try to get some sleep.  It's a work night after all.

But the third, fourth, and fifth time it happens?  From 3 different units?

Well, it never ceases to scare the ever living shit out of you, waking up out of deep sleep, your mind racing, assessing the threat.

But then, reality kind of sets in.  It's just the stupid Nest Protect fucking it all up again.

Roll over, go back to bed, and pray that the full alarm doesn't sound this time, waking up both kids and setting us up for a really bad night and next few days at school.

It's not over yet, though.  You then wait for the same series of messages announcing the all clear -- first the bitch over the loudspeaker, followed by the Android app notification, then the email -- each with the same message:  "Caution, the smoke is clearing..."

THERE WAS NEVER ANY FUCKING SMOKE, YOU STUPID CYBORG. 

20 years later, and the smartest company in the world
creates a smoke detector that broadcasts the IoT equivalent
of PC LOAD LETTER to your smart home, mobile app, and email.
But not this time.  I'm not rolling over.  I'm here, typing with every ounce of anger this Thinkpad can muster. I'm mashing these keys in the guest bedroom that's supposedly on fire.  I can most assuredly tell you that it's a comfy 72 F, that the air is as clean as a summer breeze.

I'm writing this, hoping that someone, somewhere hears how disturbingly defective, and dangerously disingenuous this product actually is.

It has one job to do.  Detect and report smoke.  And it's unable to do that effectively.  If it can't reliably detect normality, what confidence should I have that it'll actually detect an emergency if that dreaded day ever comes?

The sad, sobering reality is: zero.  I have zero confidence whatsoever in the Nest Protect.

What's worse is that I'm embarrassed to say that I've been duped into buying 7 (yes, seven) of these broken pieces of shit, at $99 apiece.  I'm a pretty savvy technical buyer, and admittedly a pretty magnanimous early adopter.  But while I'm accepting of beta versions of gadgets and gizmos, I am entirely unforgiving on the safety and livelihood of my family and guests.

Michael Larabel of Phoronix recounts his similar experience here.  He destroyed one with a sledgehammer, which might provide me with some catharsis when (not if, but when) this happens again.

Michael Larabel of Phoronix destroyed his malfunctioning Nest Protect
with a 20 lb sledgehammer, to silence the false alarm in the middle of the night
There's a sad, long thread on Nest's customer support forum, calling for a better "silence" feature.  I'm sorry, that's just wrong.  The solution is not a better way to "silence" false positives.  Root out the false positives to begin with.  Or recall the hardware.  Tut, tut, tut.

You can't be serious...
This is from me to Google and Nest on behalf of thousands of trusting families out there:  You have the opportunity, and ultimately the obligation.  Please make this right.  Whatever that means, you owe the world that.
  • Ship working firmware.
  • Recall faulty hardware.
  • Refund the product.
Okay, the empassioned rant is over.  Time for data.  Here is the detailed, distressing timeline.
  • January 2015: I installed 5 Nest Protects: one in each of two kids' rooms, the master bedroom, the hallway, and the kitchen/living room
  • February 2015: While on a business trip to South Africa, I received notification via email and the Nest App that there was a smoke emergency at my home, half a world away, with my family in bed for the night.  My wife called me immediately -- in the middle of the night in Texas.  My heart raced.  She assured me it was a false alarm, and that she had two screaming kids awake from the noise.  I filed a support ticket with Nest (ref:_00D40Mlt9._50040jgU8y:ref) and tried to assure my wife that it was just a glitch and that I'd fix it when I got home.

  • May 23, 2015: We thought it was funny enough to post to Facebook, "When Nest mistakes a diaper change for a fire, that's one impressive poop, kiddo!"  Not so funny now.


  • August 9, 2015: I installed 2 more Nest Protects, in the guest bedroom and my office
  • August 21, 2015, 11:26am: While on a flight home from another business trip, I received another set of daytime warnings about smoke in the house.  Another false alarm.
  • August 24, 2015, 12am: While asleep, I receive another 3 false alarms.
  • August 25, 2015, 1am: Again, asleep, another false alarm.  Different room, different unit.  I'm fucking done with these.
I'm counting on you Google/Nest.  Please make it right.

Burning up but not on fire,
Dustin

Update #1: I was contacted directly by email and over Twitter by Nest's "Executive Relations", who offered to replace all 7 of my "v1" Nest Protects with 7 new "v2" Nest Protects, at no charge.  The new "v2" Protect reportedly has an improved design with a better photoelectric detector that reduces false positives.  I was initially inclined to try the new "v2" Protects; however, neither the mounting bracket nor the wiring harness is compatible from v1 to v2, so I would have to replace all of the brackets and redo all of the wiring myself.  I asked, but Nest would not cover the cost of a professional (re-)installation.  At this point, I expressed my disappointment in this alternative, and I was offered a full refund, in 4-6 weeks' time, after I return the 7 units.  I've accepted this solution and will replace the Nest Protects with simpler, more reliable traditional smoke detectors.
Update #2: I suppose I should mention that I generally like my Nest Thermostat and (3) Dropcams.  This blog post is really only complaining about the Titanic disaster that is the Nest Protect.
on August 26, 2015 02:06 PM

In addition to using developer documentation (see A compact style for jQuery API documentation), people who work with communities need to use community and communication related websites. The bigger the community, the more tools it needs.

In a large community like Ubuntu, the maintenance burden is big and the variety of platforms is huge. On top of that, many of the websites aren’t directly maintained by the community (which has both good and bad sides). For these reasons, it’s sometimes hard and/or slow to get updates landed in the CSS files for the websites.

While workarounds aren’t ideal, at least we can fight the problematic styles with modern technology. That said, I’ve created a gist for a Stylish style that provides some minor improvements for some ubuntu.com websites.

Currently, the style brings the following improvements:

  • The last line of the chat is completely shown in Ubuntu Etherpad pads
  • Images and code blocks aren’t overlapping the content section in Planet Ubuntu, avoiding horizontal scrollbars
  • In the Ubuntu wiki, list items do not have a large bottom padding, making the lists more readable
  • Also in the wiki, tables are always full width but not too wide, keeping them aligned nicely
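
For example, a workaround of this kind in a Stylish user style might look something like the following; this is only an illustrative sketch, and the selectors and values are assumptions rather than what the actual gist contains:

/* Illustrative sketch only -- not the actual gist; selectors are assumptions. */
@-moz-document domain("wiki.ubuntu.com") {
    /* keep tables full width but not overly wide */
    #content table { width: 100% !important; max-width: 60em; }
    /* drop the large bottom padding on list items for tighter, more readable lists */
    #content li { padding-bottom: 0 !important; }
}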

If you are constantly hitting other annoying styling issues on the Ubuntu websites, leave me a comment and I’ll see whether I can update the gist with a workaround. However, please report the bugs and issues to the concerned maintaining parties as well, so we can stop using these workarounds as soon as possible. Thank you!

on August 26, 2015 01:41 PM
Can you believe Linux is celebrating 24 years already? It was on this day, August 25, back in 1991 when a young Linus Torvalds made his now-legendary announcement on the comp.os.minix newsgroup:

Hello everybody out there using minix -

I'm doing a (free) operating system (just a hobby, won't be big and professional like gnu) for 386(486) AT clones. This has been brewing since april, and is starting to get ready. I'd like any feedback on things people like/dislike in minix, as my OS resembles it somewhat (same physical layout of the file-system (due to practical reasons) among other things).

I've currently ported bash(1.08) and gcc(1.40), and things seem to work. This implies that I'll get something practical within a few months, and I'd like to know what features most people would want. Any suggestions are welcome, but I won't promise I'll implement them :-)

Linus

PS. Yes – it's free of any minix code, and it has a multi-threaded fs. It is NOT portable (uses 386 task switching etc), and it probably never will support anything other than AT-harddisks, as that's all I have :-(.

Quite an understated beginning if I ever heard one!

There's some debate in the Linux community as to whether we should be celebrating Linux's birthday today or on October 5 when the first public release was made, but Linus says he is O.K. with you celebrating either one, or both! So as we say happy birthday, let's take a quick look back at the years that have passed and how far we have come.

Via OpenSource.
on August 26, 2015 12:48 PM

This is a mini case study, or rather a report from me, on how difficult it can be to run multiple services from the same server. Especially when they listen on similar ports for different aspects. In this post, I examine the headaches of making two things work on the same server: GitLab (via their Omnibus .deb packages), and Landscape (Canonical’s systems management tool).

I am not an expert on either of the software I listed, but what I do know I will state here.

The Software

Landscape

Many of you have probably heard of Landscape, Canonical’s systems management tool for the Ubuntu operating system. Some of you probably know about how we can deploy Landscape standalone for our own personal use with 10 Virtual and 10 Physical machines managed by Landscape, via Juju, or manually.

Most of my systems/servers are Ubuntu, and I have enough of them that management by one individual is a headache. In the workplace, we have an entire infrastructure set up for a specific set of applications, all on an Ubuntu base, and a similar headache in managing them all one at a time. For me, discovering Landscape Dedicated Server, the set-it-up-yourself variant, made management FAR easier. Landscape has a dependency on Apache.

GitLab

GitLab is almost like GitHub in a sense. It provides a web interface for working with code via the Git version control system. GitHub and GitLab are both very useful, but for those of us wanting the same interface within a single organization, or for personal use, and not trusting cloud hosts like GitHub or GitLab's cloud, we can run it via their Omnibus package, which is GitLab pre-packaged for different distributions (Ubuntu included!).

It includes its own copy of nginx for serving content, and uses Unicorn for the Ruby components. It initially listens on both port 80 and port 8080, per the GitLab configuration file, which rewrites and modifies all the other configurations for GitLab, including those for both of these servers.

The tricky parts

But then, I ran into a dilemma on my own personal setup of it: what happens if you need Landscape and multiple other sites running from the same server, some parts with SSL, some without? Throw into the mix that I am not an Apache person, and part of the dilemma appears.

1: Port 8080.

There's a conflict between these two pieces of software. Part of Landscape (I believe the appserver part) and part of GitLab (its Unicorn server, which handles the Ruby-to-nginx interface) both try to bind to port 8080.

2: Conflicting Web Servers on Same Web Ports

Landscape relies on Apache. GitLab relies on its own bundled nginx. Both are set by default to listen on port 80. Landscape's Apache config also listens on HTTPS.

These configurations, out of the box by default, have a very evil problem: both try to bind to port 80, so they don’t work together on the same server.
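
You can confirm exactly which processes are fighting over the ports before touching any configs; the exact output will vary from system to system, but something along these lines shows who owns 80, 443 and 8080:

sudo ss -tlnp | grep -E ':(80|443|8080) '
# or, with the older net-tools package installed:
sudo netstat -plnt | grep -E ':(80|443|8080) '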

My solution

Firstly, some information. The nginx bundled as part of GitLab is not easily configured for additional sites; it's not very friendly as a 'reverse proxy' handler. Secondly, I am not an Apache person. Sure, you may be able to get Apache to work as the 'reverse proxy', but it is unwieldy for me to do that, as I'm an nginx guy.

These steps also need to be done with Landscape turned off. (That's as easy as running sudo lsctl stop.)

1: Solve the Port 8080 conflict

Given that Landscape is something by Canonical, I chose to not mess with it. Instead, we can mess with GitLab to make it bind Unicorn to a different port.

What we have to do with GitLab is tell its Unicorn to listen on a different IP/Port combination. These two lines in the default configuration file control it (the file is located at /etc/gitlab/gitlab.rb in the Omnibus packages):

# unicorn['listen'] = '127.0.0.1'
# unicorn['port'] = 8080

These are commented out by default, and the default is to bind to 127.0.0.1:8080. We can easily change GitLab's configuration, though, by editing the file, uncommenting both lines, and setting the port to something other than 8080. We have to uncomment both because otherwise Unicorn tries to bind not only to the specified port, but also to *:8080 (which breaks Landscape's services). After making those changes, we run sudo gitlab-ctl reconfigure and it regenerates its configurations and makes everything adapt to the changes we just made.
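
For reference, the edited lines end up looking something like this; the port number here is just an example of a free port other than 8080, not necessarily the exact value I used:

# /etc/gitlab/gitlab.rb -- illustrative values
unicorn['listen'] = '127.0.0.1'
unicorn['port'] = 8081    # any free port that isn't 8080 works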

2: Solve the web server problem

As I said above, I’m an nginx guy. I also discovered revising the GitLab nginx server to do this is a painful thing, so I did an ingenious thing.

First up: Apache.

I set the Apache bindports to be something else. In this case, I revised /etc/apache2/ports.conf to be the following:

# If you just change the port or add more ports here, you will likely also
# have to change the VirtualHost statement in
# /etc/apache2/sites-enabled/000-default.conf

Listen 10080

<IfModule ssl_module>
    Listen 10443
</IfModule>

<IfModule mod_gnutls.c>
    Listen 10443
</IfModule>

# vim: syntax=apache ts=4 sw=4 sts=4 sr noet

Now, I went into the sites-enabled configuration for Landscape, and also changed the bindports accordingly – the HTTP listener on Port 80 now listens on 10080, and the SSL listener on Port 443 now listens on 10443 instead.

Second: GitLab.

This one’s easier, since we simply edit /etc/gitlab/gitlab.rb, and modify the following lines:

#nginx['listen_addresses'] = ['127.0.0.1']
#nginx['listen_port'] = 80

First, we uncomment the lines. And then, we change the 'listen_port' item to be whatever we want. I chose 20080. Then sudo gitlab-ctl reconfigure will apply those changes.
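
So the relevant lines in /etc/gitlab/gitlab.rb end up as:

nginx['listen_addresses'] = ['127.0.0.1']
nginx['listen_port'] = 20080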

Finally, a reverse proxy server to handle everything.

Behold, we introduce a third web server: nginx, 1.8.0, from the NGINX Stable PPA.

This works by default because we already changed all the important bindhosts for services. Now the headache: we have to configure this nginx to do what we want.

Here’s a caveat: I prefer to run things behind HTTPS, with SSL. To do this, and to achieve it with multiple domains, I have a few wildcard certs. You’ll have to modify the configurations that I specify to set them up to use YOUR SSL certs. Otherwise, though, the configurations will be identical.

I prefer to use different site configuration files for each site, so we’ll do that. Also note that you will need to put in real values where I say DOMAIN.TLD and such, same for SSL certs and keys.

First, the catch-all for catching other domains NOT hosted on the server, placed in /etc/nginx/sites-available/catchall:

server {
listen 80 default_server;

server_name _;

return 406; # HTTP 406 is "Not Acceptable". 404 is "Not Found", 410 is "Gone", I chose 406.
}

Second, a snippet file with the configuration to be imported in all the later configs, with reverse proxy configurations and proxy-related settings and headers, put into /etc/nginx/snippets/proxy.settings.snippet:


proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_max_temp_file_size 0;

proxy_connect_timeout 90;
proxy_send_timeout 90;
proxy_read_timeout 90;

proxy_buffer_size 4k;
proxy_buffers 4 32k;
proxy_busy_buffers_size 64k;
proxy_temp_file_write_size 64k;

Third, the reverse-proxy configuration for Landscape, which is fairly annoying and took me multiple tries to get working right, placed in /etc/nginx/sites-available/landscape_reverseproxy. Don’t forget that Landscape needs SSL for parts of it, so you can’t skip SSL here:


server {
listen 443 ssl;

server_name landscape.DOMAIN.TLD;

ssl_certificate PATH_TO_SSL_CERTIFICATE; ##### PUT REAL VALUES HERE!
ssl_certificate_key PATH_TO_SSL_CERTIFICATE_KEY; ##### PUT REAL VALUES HERE

# These are courtesy of https://cipherli.st, minus a few things.
ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:10m;
ssl_session_tickets off;

include /etc/nginx/snippets/proxy.settings.snippet;

location / {
proxy_pass https://127.0.0.1:10443/;
}

location /message-system {
proxy_pass https://127.0.0.1:10443/;
}
}

server {
listen 80;
server_name landscape.DOMAIN.TLD;

include /etc/nginx/snippets/proxy.settings.snippet;

location / {
return 301 https://landscape.DOMAIN.TLD$request_uri;
}

location /ping {
proxy_pass http://127.0.0.1:10080/;
}
}

Fourth, the reverse-proxy configuration for GitLab, which was not as hard to get working. Remember, I put this behind SSL, so I have SSL configurations here. I'm including comments on what to change if you do NOT want SSL:

# If you don't want to have the SSL listener, you don't need this first server block
server {
listen 80;
server_name gitlab.DOMAIN.TLD;

# We just send all HTTP traffic over to HTTPS here.
return 302 https://gitlab.DOMAIN.TLD$request_uri;
}

server {
listen 443 ssl;
# If you want to have this listen on HTTP instead of HTTPS,
# uncomment the below line, and comment out the other listen line.
#listen 80;
server_name gitlab.DOMAIN.TLD;

# If you're not using HTTPS, remove from here to the line saying
# "Stop SSL Remove" below
ssl_certificate /etc/ssl/hellnet.io/hellnet.io.chained.pem;
ssl_certificate_key /etc/ssl/hellnet.io/hellnet.io.key;

ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:10m;
ssl_session_tickets off; # Requires nginx >= 1.5.9
# Stop SSL Remove

include /etc/nginx/snippets/proxy.settings.snippet;

location / {
proxy_pass http://127.0.0.1:20080/;
}
}
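
Once these three site files are in place, enabling them and reloading nginx is the usual routine; the file names below assume the paths used above, and the GitLab one is just an example name since I didn't give it a specific path in the text:

sudo ln -s /etc/nginx/sites-available/catchall /etc/nginx/sites-enabled/catchall
sudo ln -s /etc/nginx/sites-available/landscape_reverseproxy /etc/nginx/sites-enabled/landscape_reverseproxy
sudo ln -s /etc/nginx/sites-available/gitlab_reverseproxy /etc/nginx/sites-enabled/gitlab_reverseproxy
sudo nginx -t && sudo service nginx reload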

System specifications considerations

Landscape is not light on resources. It takes about a gig of RAM to run safely, from what I've observed, but 2GB is the safer recommendation.

GitLab recommends AT LEAST 2GB of RAM. It uses at least that, so you should have 3GB for this at the minimum.

Running both demands just over 3GB of RAM. You can run them on a 4GB box, but it's better to have double that just in case, especially if Landscape and GitLab both get heavy use. I run them on an 8GB box, a former desktop that has been converted into a Linux server.

on August 26, 2015 12:12 PM

A Debian LTS logoLike each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In July, 79.50 work hours have been dispatched among 7 paid contributors. Their reports are available:

Evolution of the situation

August has seen a small decrease in terms of sponsored hours (71.50 hours per month) because two sponsors did not pay their renewal invoice on time. That said they reconfirmed their willingness to support us and things should be fixed after the summer. And we should be able to reach our first milestone of funding the equivalent of a half-time position, in particular since a new platinum sponsor might join the project.

DebConf 15 happened this month and Debian LTS was featured in a talk and in a work session. Have a look at the video recordings:

In terms of security updates waiting to be handled, the situation is better than last month: the dla-needed.txt file lists 20 packages awaiting an update (4 less than last month), the list of open vulnerabilities in Squeeze shows about 22 affected packages in total (11 less than last month). The new LTS frontdesk ensures regular triage of CVE reports and the difference between both counts dropped significantly. That’s good!

Thanks to our sponsors

Thanks to Sig-I/O, a new bronze sponsor, which joins our 35 other sponsors.


on August 26, 2015 09:14 AM

By now, you are probably more than a little tired of hearing people tell you how easy it is to do things like build a website or add ecommerce to an existing site. But when do you need a professional?

Does It Affect the Customer Experience?

If the thing you want to do will have an adverse effect on the client experience if it goes horribly wrong, then you will want to bring in a licensed professional. The last thing you want to do is inadvertently do something that will increase customer confusion.

Avoid changing major design elements of your site just because you are bored. If you are not a designer, you may be changing something that is crucial to navigation or discoverability. It is like knocking out a wall without determining if it is a load-bearing wall. If your site already delivers a good customer experience, leave changes to a pro.

Does It Affect Security?

The only thing more sacrosanct than customer experience is customer security. At this point in time, it is safe to say that no company ought to be left as the sole proprietor of consumer security. At the very least, there needs to be third-party security auditing to be sure things are as secure as you think they are.

That is the type of thing that is outsourced to IT services from Firewall Technical, and other such companies. Not every company is big enough to justify having its own IT department. But if you handle customer data, you are required to perform due diligence. In some instances, that means outsourcing security matters to a professional.

Is It Going to Void Your Warranty?

There are plenty of changes you can make to your tech and web presence that are inward facing. If you have the time and skills to take on those projects, knock yourself out. But even those projects should be shifted to a professional if there is a danger of voiding your warranty if something goes awry. Even if nothing goes wrong, some upgrades will void your warranty just because you did them.

You don't know how? Watch a couple of YouTube videos and have at it. But when it is time to upgrade those slow, unreliable, spinning hard drives to SSDs, check your nerve, and your warranty. While one may be sufficient, the other may not be.

Some people feel ashamed to call for help when it is something they should be able to do themselves. But the real shame is letting pride be the cause of your downfall when help was only a phone call away.

The post You Might Need a Pro for These Tech Upgrades appeared first on deshack.

on August 26, 2015 06:27 AM

For the TL;DR folk who are concerned with the title: It’s not an alternative to wayland or X11. It’s layer that wayland compositors (or other) can use.

As a quick foreward: I’m still a newbie at this field. While I try my best to avoid inaccuracies, there might be a few things I state here that are wrong, feel free to correct me!

Wayland is mainly a windowing protocol. It allows clients to draw windows (or, as the wayland documentation puts it, “surfaces”), and receive input from those surfaces. A wayland server (or “compositor”) has the task of drawing these surfaces, and providing the input to the clients. That is the specification.

However, where does a compositor draw these surfaces to? How does the compositor receive input? It has to provide many backends for various methods of drawing the composited surface. For example, the weston compositor has support for drawing the composited surface using 7 different backends (DRM, Linux Framebuffer, Headless [a fake rendering device], RDP, Raspberry Pi, Wayland, and X11). The amount of work put into making these backends work must be incredible, which is exactly where the problem lies: it’s arguably too much work for a developer to put in if they want to make a new compositor.

That’s not the only issue though. Another big problem is that there is then no standard way to configure the display. Say you wanted a wayland compositor to change the video resolution to 800×600. The only way to do that is to use a compositor-specific extension to the protocol, since the protocol, AFAIK, has no method for changing the video resolution — and rightfully so. Wayland is a windowing protocol, not a display protocol.

My idea is to create a display server that doesn’t handle windowing. It handles display-related things, such as drawing pixels on the screen, changing video mode, etc… Wayland compositors and other programs that require direct access to the screen could then use this server and trust that the server will take care of everything display-related for them.

I believe that this would enable for much simpler code, and add a good deal more power and flexibility.

To give a more graphic description (forgive my horrible diagraming skills):

Current Stack:

wayland_current

Proposed Stack:

 

wayland_new

I didn’t talk about the input server, but it’s the same idea as the display server: have a server dedicated to providing input. Of course, if the display server uses something like SDL as the backend, it may have to also provide the input server, since the SDL library, AFAIK, doesn’t allow a program to access the input of another program.

This is an idea I have toyed around with for some time now (ever since I tried writing my own wayland compositor, in fact! XD), so I’m curious as to what people think of it. I would be more than happy to work with others to implement this.


on August 26, 2015 05:42 AM