May 24, 2016

Welcome to the Ubuntu Weekly Newsletter. This is issue #465 for the weeks of May 2 – 15, 2016, and the full version is available here.

In this issue we cover:

This issue of The Ubuntu Weekly Newsletter is brought to you by:

  • Elizabeth K. Joseph
  • Chris Guiver
  • Chris Sirrs
  • Aaron Honeycutt
  • Simon Quigley
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License.

on May 24, 2016 12:47 AM

Welcome to the Ubuntu Weekly Newsletter. This is issue #466 for the week of May 16 – 22, 2016, and the full version is available here.

In this issue we cover:

This issue of The Ubuntu Weekly Newsletter is brought to you by:

  • Elizabeth K. Joseph
  • Chris Guiver
  • Chris Sirrs
  • Simon Quigley
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License.

on May 24, 2016 12:33 AM

May 23, 2016

The importance of URLs

Stuart Langridge

“You don’t control Xykon! He controls you!”
“Like I said: subtle.”

Redcloak, The Order of the Stick

Lots of discussion about progressive web apps recently, with a general consensus among forward-thinking web people that this is the way we should be building things for the web from now on. Websites that work offline, that deal well with lie-fi [1], that are responsive, that are progressive, that work everywhere but are better on devices that can cope with the glory. Alex Russell, who originally coined the term, talks about PWAs being responsive, connectivity independent, fresh, safe, discoverable, re-engageable, installable, linkable, and having app-like interactions.

We could discuss every part of that description, every word in that definition, for hours and hours, and if someone wants to nominate a pub with decent beer then I’m more than happy to have that discussion and a pint while doing it. But today, we’re talking about the word linkable.

Linkability

Jeremy tweeted:

Strongly disagree with Lighthouse wanting “Manifest’s display property set to standalone/fullscreen to allow launching without address bar.”

Jeremy Keith

A little background

First, a little background. Google Chrome attempts to detect whether the website you’re looking at “qualifies” as a Progressive Web App, because if it does then they will show an “install to home screen” banner on your second visit. This is a major improvement over the previous state of a user having to manually install a site they like to their home screen by fishing through the menus [2] or using the “add to home screen” button in iOS Safari [3]. The Chrome team have then created Lighthouse, a tool which invokes a Chrome browser, runs their “this looks like a PWA” checks against a site, and returns a result.

So Jeremy’s point is this: Lighthouse is declaring that to be a valid PWA, you have to insist that when you’re added to the home screen, you stop showing the URL bar. And he doesn’t agree, because

I want people to be able to copy URLs. I want people to be able to hack URLs. I’m not ashamed of my URLs …I’m downright proud.

Jeremy Keith

This is inspirational stuff, and it’s true. URLs are important. Individual addressability of parts on the web is important.

However. (You knew there was a “however” coming.) Whether your web app shows a URL bar is not actually a thing about that web app.

A little more background

A bit more background. In order to qualify as a progressive web app, you have to provide a manifest. [4] That manifest lists various properties about this web app which are useful to operating systems: what its human-readable name is, what a human-readable short name for it is, what its icon should be, a theme colour for it, and so on. This is all good.

But the manifest also lists a display_mode, defined in the spec as “how the web application is being presented within the context of an OS (e.g., in fullscreen, etc.)” Essentially, the options for the display mode are fullscreen (the app will take all the screen; hardware keys and the status bar will not be shown), standalone (no browser UI is shown, but the hardware keys and status bar will be displayed), and browser (the app will be shown with normal browser UI, i.e. as a normal website).
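To make that concrete, here is a minimal sketch of what such a manifest might contain, written out with Python’s json module; all of the values are invented for illustration (the spec names the member "display", which is the display mode discussed above):

  #!/usr/bin/env python
  # A minimal sketch of a web app manifest; every value here is
  # illustrative, not taken from any real app.
  import json

  manifest = {
      "name": "Example Notes",    # human-readable name
      "short_name": "Notes",      # short name, e.g. under a home screen icon
      "icons": [{"src": "icon-192.png", "sizes": "192x192"}],
      "theme_color": "#335577",
      # One of "fullscreen", "standalone" or "browser", as described above.
      "display": "standalone",
  }

  with open("manifest.json", "w") as f:
      json.dump(manifest, f, indent=2)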

Now we see Jeremy’s point. Chrome propose that you only qualify as a “real” PWA if you request “fullscreen” or “standalone” mode: that is, that you hide the URL bar. Jeremy says that URLs are important; they’re not a thing to hide away or to pretend don’t exist. And he has a point. The hackability of URLs is surprisingly important, and unsurprisingly dismissed by app developers who want to lock down the user experience.

But, and this is the important point, whether a web app shows its URLs is not a property of that app. It’s a property of how that app’s developer thinks about the web.

Property versus preference

If Jeremy and I were both to work on a website, and then discuss what should be in the manifest, we’d agree on what the app’s name was, what a shortened name was, what the icon is. But we might disagree on whether the app should show a URL bar when launched. That disagreement isn’t about the app itself; it’s about whether you, the developer, think it’s OK to hide that an app is actually on the web, or whether you should proudly declare that it’s on the web. That doesn’t differ from app to app; it differs from developer to developer.

The app manifest declares properties of the app, but the display property isn’t about the app; it’s about how the app’s developer wants it to be shown. Do they want to proudly declare that this app is on the web and of the web? Then they’ll add the URL bar. Do they want to conceal that this is actually a web app in order to look more like “native” apps? Then they’ll hide the URL bar.

The display property feels rather less like it’s actually tied to the app, and rather more like it should be chosen at “add-to-home-screen” time by the user; do you, the bookmarking user, prefer to think of this as a web thing? Include the URL bar. Do you want to think of it as an app which doesn’t involve the web? Hide the URL bar. It’s a preference. It’s not a property.

On the desktop

The above argument stands alone. But there are additional issues with having a URL bar showing on an added-to-home-screen web app. We should discuss these separately, but here I have them in the same essay because it’s all relevant.

The additional issue is, essentially, this. On my desktop — not my phone — I add an app to my “home screen”. This might add it to my desktop as a shortcut icon, or to my Start Menu, or in the Applications list, or all of the above, depending on which OS I’m on. [5] If that PWA declares itself as being standalone then how to handle it is obvious: open it in a new window, with no URL bar showing. Similarly, fullscreen web apps launched from an icon should be full screen. But what do we do when launching a browser display-mode web app on a desktop?

Since we’re launching something indistinguishable from just another browser tab, it should launch a browser tab, right? I mean, we’re opening something which is essentially a bookmark. But… wouldn’t it feel strange to you to pick something from your app menu or an icon from your desktop and have it just open a browser tab? It would for me, at least. So maybe we should launch a new browser window, with URL bar intact, as though you’d clicked “open in new window” on a link. But then I’d have a whole new browser window for something which doesn’t really deserve a whole new window; it’s just one more web page, so why does it get a window by itself? I manage my browser windows according to project; window A has tabs relevant to project A, window B has tabs relevant to project B, and so on. I don’t want a whole new window, and indeed I have extensions installed so that links which think they deserve a new window actually get a new tab instead.

It’s not very clear what should happen here. The whole idea of launching a website from an OS-level icon doesn’t actually mesh very well at all with the idea of tabbed browser windows. It does mesh well with the 2002-era idea of a-new-browser-window-for-every-URL, but that idea has gone away. We have tabbed browsing, and people like it. [6]

The Chrome team’s idea, that basically you can’t add an “OS-level bookmark” for a website which wants to be treated as a website, avoids these problems.

Jeremy’s got a point, though. Hiding away URLs, pretending that this thing you’re looking at is a “native” app, does indeed sacrifice one of the key strengths of the web — that everything’s individually addressable. You can’t bookmark the “account” page in Steam, or the “settings” window in Keynote or Word or LibreOffice. With the web, you can. That’s a good thing. We shouldn’t give it up lightly. But you already can’t do that for apps which use web technologies but pretend to be native. If Word or iTunes used a WebView to render its preferences dialog, would it be good if you could link directly to it with a URL like itunes://settings? Yes it would. Would it be good if the iTunes user interface had a URL bar at the top showing that URL all the time? Not really, no.

There is a paternalism discussion, here. URLs are a good thing about the web; the addressability of parts is a good thing about the web. People don’t necessarily appreciate that. How much effort should we put into making this stuff available even though people don’t want it, because they’re wrong to not want it? Do we actually know better than they do? I think: yes we do. [7] But I don’t know how important that is, when we can also win people over to the web by pretending that it’s native apps, which is what people wrongly want.

Conclusions

On balance, therefore, I approve of the Lighthouse team’s idea that you don’t qualify as an add-to-home-screen-able app if you want a URL bar. I can see the argument against this, and I do agree that we’re giving up something important, something fundamental to the web by hiding away URLs. But I think that wanting to see the URL is not a property of an app; it’s a property of how you personally want to deal with apps. So browsers should, when adding things to the home screen, pretend that display:browser actually said display:standalone, but give people who care the ability to override that if they want. And if we want more people to care, then that’s what evangelism is for; having individual app developers decide how they want their app to be displayed just leads to fragmentation. Let’s educate people on why URLs are important, and then they can flip a switch and see the URLs for everything they use… but until we’ve convinced them, let’s not force them to see the URLs when what they want is a native-like experience.

  1. Jake Archibald eloquently names lie-fi as that situation where your phone claims to have a connection but actually it doesn’t, the lying sack of dingo’s entrails that it is, and just spins forever when you tell it to connect to a website. If you’ve ever toggled a device into airplane mode and back out again, you know what we’re talking about.
  2. although the Chrome approach is not without its problems
  3. which is obscure enough that Matteo Spinelli made a library to show a pointer to the add-to-home-screen button; the library is wonderful, don’t get me wrong, but it ought to not need to exist
  4. If you don’t know how to create one, see the manifest generator that Bruce and I created
  5. and it should be noted that basically nobody actually handles PWAs properly on desktop yet; it’s all about mobile. But desktop is coming, and we’ll need to solve this.
  6. Whether tabbed browsing actually makes conceptual sense is not up for discussion, here; we’ve collectively decided to use it, much as we’ve collectively decided that one-file-manager-window-per-folder isn’t the way we want to go either.
  7. Hubris is a great idea. The Greeks taught us that.
on May 23, 2016 11:22 PM

PostBooks 4.9.5 was recently released and the packages for Debian (including jessie-backports), Ubuntu and Fedora have been updated.

Postbooks at pgDay.ch in Rapperswil, Switzerland

pgDay.ch is coming on Friday, 24 June. It is at the HSR Hochschule für Technik Rapperswil, at the eastern end of Lake Zurich.

I'll be making a presentation about Postbooks in the business track at 11:00.

Getting started with accounting using free, open source software

If you are not currently using a double-entry accounting system or if you are looking to move to a system that is based on completely free, open source software, please see my comparison of free, open source accounting software.

Free and open source solutions offer significant advantages: flexibility, since businesses can choose any programmer to modify the code; and SQL back-ends, multi-user support and multi-currency support as standard. These are all things that proprietary vendors charge extra money for.

Accounting software is the lowest common denominator in the world of business software, so people keen on the success of free and open source software may find that encouraging businesses to use one of these solutions is a great way to lay a foundation where other free software solutions can thrive.

PostBooks new web and mobile front end

xTuple, the team behind Postbooks, has been busy developing a new Web and Mobile front-end for their ERP, CRM and accounting suite, powered by the same PostgreSQL backend as the Linux desktop client.

More help is needed to create official packages of the JavaScript dependencies before the Web and Mobile solution itself can be packaged.

on May 23, 2016 05:35 PM

Last year I joined GitHub as Director of Community. My role has been to champion and manage GitHub’s global, scalable community development initiatives. Friday was my last day as a hubber and I wanted to share a few words about why I have decided to move on.

My passion has always been about building productive, engaging communities, particularly focused on open source and technology. I have devoted my career to understanding the nuances of this work and which workflow, technical, psychological, and leadership ingredients can deliver the most effective and rewarding results.

As part of this body of work I wrote The Art of Community, founded the annual Community Leadership Summit, and I have led the development of community at Canonical, XPRIZE, OpenAdvantage, and for a range of organizations as a consultant and advisor.

I was attracted to GitHub because I was already a fan and was excited by the potential within such a large ecosystem. GitHub’s story has been a remarkable one and it is such a core component in modern software development. I also love the creativity and elegance at the core of GitHub and the spirit and tone in which the company operates.

Like any growing organization though, GitHub will from time to time need to make adjustments in strategy and organization. One component in some recent adjustments sadly resulted in the Director of Community role going away.

The company was enthusiastic about my contributions and encouraged me to explore some other roles that included positions in product marketing, professional services, and elsewhere. So, I met with these different teams to explore some new and existing positions and see what might be a good fit. Thanks to everyone in those conversations for your time and energy.

Unfortunately, I ultimately didn’t feel they matched my passion and skills for building powerful, productive, engaging communities, as I mentioned above. As such, I decided it was time to part ways with GitHub.

Of course, I am sad to leave. Working at GitHub was a blast. GitHub is a great company and is working on some valuable and important areas that strike right at the center of how we build great software. I worked with some wonderful people and I have many fond memories. I am looking forward to staying in touch with my former colleagues and executives and I will continue to be an ardent supporter, fan, and user of both GitHub and Atom.

So, what is next? Well, I have a few things in the pipeline that I am not quite ready to share yet, so stay tuned and I will share this soon. In the meantime, to my fellow hubbers, live long and prosper!

on May 23, 2016 03:20 PM

May 22, 2016

As I said in this post, I’m doing a series of blog posts about the programs that I use on Ubuntu.

My first program that I want to talk about is GitHub’s Atom code editor. I tried a few code editors (mostly working with Markdown) and none of them is as awesome as Atom. Why? See below:

(Two screenshots from 2016-05-21, described below.)

In the first screenshot, you can see that I’m working on an Ubuntu Membership workshop (which I plan to do a bit before the global jam or after it). The first panel, on the left, is the file manager for that specific project. The middle panel is the editor, with syntax highlighting for Markdown. And the last panel, on the right, is the Markdown preview for that file.

In the next and final screenshot, you can see the same file being worked on, but the middle panel shows what I have added (in green), deleted (in red), and changed (in dark yellow) since the last commit. It’s a handy feature.

I have only used Atom for a week now, and I don’t have much to say since I only work with Markdown files at the moment. But that will change as I start to work on my coding projects, as stated here. Jono Bacon wrote an amazing post on Atom; I will check out the other features it covers and maybe report on them here. Most likely, I will have a follow-up post once I start to code more.

As for this series, I will publish one post every Sunday until I run out of programs to talk about. The next one that I will talk about is Mudlet, an M* client that I use to play my favorite text-based online roleplay.

See you next week!

on May 22, 2016 04:18 PM

May 20, 2016

So I’ve been using Ubuntu on my Nexus 7 (2013 Wi-Fi) for a few weeks. I’ve hit a few bugs here and there since I’ve been on rc-proposed (weekday images) pretty much the whole time. One of the most annoying ones was that the Unity 8 Shell would not rotate from landscape to portrait; if you ever used the Nexus 7 2012 or 2013 then you know this tablet is weighted for perfect portrait use. While apps have been able to rotate for a while now, the Apps Scope as well as the other scopes could not until last week.

(Screenshots: the shell in landscape and in portrait.)

I’ve seen other bigger bugs get fixed last week alone! Like the Camera app giving an error message after *trying* to record a video:

(screenshot)

Or the Camera app not rotating correctly:

(screenshot)

I’ve tried to use the tablet for my work on Kubuntu:

Editing files on the docs.kubuntu.org server with nano 🙂

(screenshot)

Also having a tablet has helped me test my app uBeginner for AdaptivePageLayout: (insert shameless self promotion!)

(screenshot)

I also enabled Read and Write Mode so I can install other applications like VIM for the hell of it lol

(screenshot)

I’ve also found that Hangouts Video calling works in the default Ubuntu Web Browser:

(screenshot)

while in Convergence Mode, no less! The only issue was that I could not switch to the front camera and I had a bit of echo on my end. With that, I’ll show some more Convergence screenshots:

(screenshots)

The next post will have some of my progress (thanks to the awesome other developers) on my new project uCycle. 🙂

 

on May 20, 2016 10:49 PM

(screenshot)

1. sudo apt-add-repository ppa:kubuntu-ppa/backports
2. sudo apt update
3. sudo apt full-upgrade -y

on May 20, 2016 08:15 PM
My tutorials on how to flash the bq E4.5, Meizu MX4 and Nexus 4.

Why? Because you can switch between stable and RC-proposed images. Replace any 'stable' with 'rc-proposed' in this tutorial if you want to try the proposed images.

This will erase all data on your phone.

What will you need on your Ubuntu desktop?

These packages:
sudo add-apt-repository ppa:ubuntu-sdk-team/ppa
sudo apt-get update
sudo apt-get install ubuntu-device-flash
sudo apt-get install phablet-tools

What will you need on your phone?

Unlock the bootloader. Any Ubuntu Phone already comes with the bootloader unlocked.

Enable USB developer mode. Then you should see your phone from the PC:
costales@dev:~$ adb devices
List of devices attached 
0065f7f61916b80b    device
costales@dev:~$

Then, reboot the phone into the bootloader:
costales@dev:~$ adb reboot bootloader
costales@dev:~$

Flash Nexus 4

I flashed my Nexus 4 from Android to Ubuntu Touch here.
Can you see the device in bootloader mode?

Unlock OEM:
costales@dev:~$ sudo fastboot oem unlock
[sudo] password for costales: 
...
OKAY [ 14.160s]
finished. total time: 14.160s
costales@dev:~$

Flash phone:
ubuntu-device-flash touch --channel=ubuntu-touch/stable/ubuntu --bootstrap



Flash bq E4.5

You'll need a recovery image (source):
cd ~
wget http://people.canonical.com/~jhm/barajas/recovery.img

Flash phone:
ubuntu-device-flash touch --channel=ubuntu-touch/stable/bq-aquaris.en --bootstrap --recovery-image ~/recovery.img

Flash Meizu MX4

You'll need a recovery image (source):
cd ~
wget http://people.canonical.com/~alextu/tangxi/recovery/recovery.img

Flash phone:
ubuntu-device-flash touch --channel=ubuntu-touch/stable/meizu.en --bootstrap --recovery-image ~/recovery.img
on May 20, 2016 06:51 PM

Colour palette updates

Canonical Design Team

Over the past few months, we’ve given you a peek into our evolving Suru design language with our wallpaper, convergence and Clock App blog posts.

Suru is based on Japanese aesthetics: minimalism, lightness and fluidity. This is why you will see the platform moving to a cleaner and more modern look.

Naturally, part of this evolution is colour. The new palette features a lightened set of colours with clearly defined usage to create  better visual hierarchy and an improved aesthetic sense of visual harmony.

On the technical side, SDK colour handling has been improved so that in the future colour usage will be more consistent and less prone to bugs.

The new palette has also expanded in scope with more colours to enable the creation of designs with greater depth, particularly as we move towards convergence, and reflects the design values we wish to impart onto the platform.

(image: the new colour palette)

The new palette

Like our SDK, the colour palette has to be scalable. As we worked on visual designs for both apps and the shell, we found that having a colour palette which only contained six colours was too limiting. When approaching elements like the indicators or even shadows in the task switcher, light grey and dark grey weren’t going to be deep enough to stand out on a windowed view, where you have wallpaper and other windows to compete with.

The new palette is made up of thirteen colours. Some noticeable differences are an overall lighter look and additional greys. Purple is gone and we’ve added a yellow to the palette. This broader palette works to solve bugs where contrast and visibility were an issue, especially with the dark theme.

How we came to choose the colours was by iteratively reworking the visual designs across the whole platform and discovering what was needed as we designed. The greys came about when we worked on the revamped dark theme and shell projects, for example the upcoming contextual menus. While we added several new greys, the UI itself has taken on a less grey look.

App backgrounds have been upped to white, and the grey neutral button has been lightened to Porcelain (a very light grey) in order to keep the button from looking disabled. These changes were made to improve visibility and contrast, to lighten the palette across the board and to keep everything consistent.

 

(Image: the previous design of the Nearby scope (left) and the updated design using the new palette (right). The background is lighter, buttons don’t look as disabled and text is higher contrast against the background.)

 

The new palette allows developers to have more flexibility as the theme is now dynamic, rather than the colours being hard-coded to the UI as they were previously. In fact, our palette theme is built upon a layering system for which you can find the tutorial here.

Changing the use of orange

Previously orange was liberally used throughout our SDK, however such wide-ranging use of orange caused a number of UX issues:

  • Orange is so close to red that, at a glance, components could be misconstrued to be in an error state when in fact their current state was nominal.
  • Because orange is so close to red, the frequent use of orange made it harder for users to pick out actual error states in the UI.
  • Orange attracts the eye to wherever it is used, but frequently these elements didn’t warrant such a high level of visibility.

Around the same time as these issues were identified, we were also working on the design of focus states for keyboard navigation.

A focus state needs to be instantly visible so that a user can effortlessly see which item is focused without having to pause, look harder, and think.

After exploring a wide range of different concepts, we settled on using an orange frame as our keyboard navigation focus state.  However the use of this frame only worked if orange in all other areas was significantly toned down.

In order to fix the UX issues with the overuse of orange and to enable the use of an orange frame as our keyboard navigation focus state, the decision was made to be much more selective as to where and when orange should be applied.  The use of orange should now be limited to a single hero item per surface in addition to its use as our keyboard focus state.

This change has:

  • Improved visual hierarchy
  • Made error states instantly recognisable
  • Enabled the use of an orange frame as the keyboard navigation focus state

Usage of blue

For many years blue has been used in Ubuntu to alert the user to activities that are neutral (neither positive nor negative).  Examples include the Launcher pips changing to blue to give the user a persistent notification of an app alert, or the messaging menu indicator changing to blue to indicate unread messages.

Previously, in some (but not all) cases, orange was being used to represent select and activity states, but effective keyboard navigation had not yet been designed for Unity.

As part of our work on focus states, we also needed to consider a consistent visual language for select states, with a key requirement being that an item could be both focused and selected at the same time.

After much research, experimentation and testing, blue was chosen as the Ubuntu selected state colour.  Blue has also returned to being used for neutral activity, for example in progress bars.  The use of blue for selected and other activity states works with almost all other elements, on both dark and light backgrounds, and stands out clearly and precisely when used in combination with a focus state.

Now that our usage of colour is more precisely and consistently defined (with orange = focus, blue = selected and neutral activity), you will see use of orange minimised so that it stands out as a focus state and more blue to replace orange’s previous selection and activity use.

 

(Image: the Inbox. The section headers now use blue to indicate which section is selected. This works well with the new focus frame that can be present when keyboard navigation is active.)

The future for the palette

Colour is important for aesthetics (the palette needs to visually work together) but it also needs to convey meaning. Therefore a semantic approach is critical for maximum usability.  Some colours have cultural meanings, other colours have meanings applied by their context.

By extending the colours in our palette and organising them in a semantic way, we have created a stable framework of colour that developers can use to build their apps without time consuming and unnecessary work.  We can now be confident that our Suru design values are being consistently applied to every colour related design problem as we move forward with designing and building convergence.

on May 20, 2016 11:18 AM

May 19, 2016

S09E12 – Accordian Man - Ubuntu Podcast

Ubuntu Podcast from the UK LoCo

It’s Episode Twelve of Season Nine of the Ubuntu Podcast! Alan Pope, Mark Johnson, Laura Cowen and Martin Wimpress are connected and speaking to your brain.

We’re here again!

In this week’s show:

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

on May 19, 2016 02:00 PM

May 18, 2016

Python Bindings

Python provides a C API to define libraries which can be imported into a python environment. That python C interface is often used to provide bindings to existing libraries written in C++ and there are multiple existing technologies for that such as PyQt, PySide, Boost.Python and pybind.

pybind seems to be a more modern implementation of what Boost.Python provides. To create a binding, C++ code is written describing how the target API is to be exposed. That gives the flexibility of defining precisely which methods get exposed to the python environment and which do not.

PyQt and PySide are also similar in that they require the maintenance of a binding specification. In the case of PySide, that is an XML file, and in the case of PyQt that is a DSL which is similar to a C++ header. The advantage both of these systems have for Qt-based code is that they ship with bindings for Qt, relieving the binding author of the requirement to create those bindings.

(Screenshot: PyKF5 application using KItemModels and KWidgetsAddons)

Generated Python Bindings

For large library collections, such as Qt5 and KDE Frameworks 5, manually specifying bindings soon becomes a task which is not suitable for humans, so it is common to have the bindings generated from the C++ headers themselves.

The way that KDE libraries and bindings were organized in the time of KDE4 led to PyQt-based bindings being generated in a semi-automated process and then checked into the SCM repository and maintained there. The C++ header parsing tools used to do that were written before standardization of C++11 and have not kept pace with compilers adding new language features, and C++ headers using them.

Automatically Generated Python Bindings

It came as a surprise to me that no bindings had been completed for the KDE Frameworks 5 libraries. An email from Shaheed Haque about a fresh approach to generating bindings using clang looked promising, but he was hitting problems with linking binding code to the correct shared libraries and generally figuring out what the end-goal looks like. Having used clang APIs before, and having some experience with CMake, I decided to see what I could do to help.

Since then I’ve been helping get the bindings generator into something of a final form for the purposes of KDE Frameworks, and any other Qt-based or even non-Qt-based libraries. The binding generator uses the clang python cindex API to parse the headers of each library and generate a set of sip files, which are then processed to create the bindings. As the core concept of the generator is simply ‘use clang to parse the headers’ it can be adapted to other binding technologies in the future (such as PySide). PyQt-based bindings are the current focus because that fills a gap between what was provided with KDE4 and what is provided by KDE Frameworks 5.
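As a rough sketch of that core concept, here is the kind of header walk the cindex API enables; the header path and compiler flags below are hypothetical, and the real generator emits sip declarations rather than printing names:

  #!/usr/bin/env python
  # Sketch: walk the C++ declarations in a header with clang's
  # python cindex API. The header path and flags are hypothetical.
  import clang.cindex

  index = clang.cindex.Index.create()
  tu = index.parse("kselectionproxymodel.h",
                   args=["-x", "c++", "-std=c++11"])

  def visit(cursor, depth=0):
      # Report each C++ method; a generator would emit a sip
      # declaration for it instead of printing.
      if cursor.kind == clang.cindex.CursorKind.CXX_METHOD:
          print("  " * depth + "method: " + cursor.spelling)
      for child in cursor.get_children():
          visit(child, depth + 1)

  visit(tu.cursor)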

All of that is internal though and doesn’t appear in the buildsystem of any framework. As far as each buildsystem is concerned, a single CMake macro is used to enable the build of python (2 and 3) bindings for a KDE Frameworks library:


  ecm_generate_python_binding(
    TARGET KF5::ItemModels
    PYTHONNAMESPACE PyKF5
    MODULENAME KItemModels
    SIP_DEPENDS
      QtCore/QtCoremod.sip
    HEADERS
      ${KItemModels_HEADERS}
  )

Each of the headers in the library is parsed to create the bindings, meaning we can then write this code:


  #!/usr/bin/env python
  #-*- coding: utf-8 -*-

  import sys

  sys.path.append(sys.argv[1])

  from PyQt5 import QtCore
  from PyQt5 import QtWidgets

  from PyKF5 import KItemModels

  app = QtWidgets.QApplication(sys.argv)

  stringListModel = QtCore.QStringListModel(
    ["Monday", "Tuesday", "Wednesday",
    "Thursday", "Friday", "Saturday", "Sunday"]);

  selectionProxy = KItemModels.KSelectionProxyModel()
  selectionProxy.setSourceModel(stringListModel)

  w = QtWidgets.QWidget()
  l = QtWidgets.QHBoxLayout(w)

  stringsView = QtWidgets.QTreeView()
  stringsView.setModel(stringListModel)
  stringsView.setSelectionMode(
    QtWidgets.QTreeView.ExtendedSelection)
  l.addWidget(stringsView)

  selectionProxy.setSelectionModel(
    stringsView.selectionModel())

  selectionView = QtWidgets.QTreeView()
  selectionView.setModel(selectionProxy)
  l.addWidget(selectionView)

  w.show()

  app.exec_()

and it just works with python 2 and 3.

Other libraries’ headers are more complex than KItemModels, so they have an extra rules file to maintain. The rules file is central to the design of this system in that it defines what to do when visiting each declaration in a C++ header file. It contains several small databases for handling declarations of containers, typedefs, methods and parameters, each of which may require special handling. The rules file for KCoreAddons is here.

The rules file contains entries to discard methods which can’t be called from python (in the case of heavily templated code, for example), or to replace the default implementation of the binding code with something else, in order to implement memory management correctly or for better integration with python built-in types.
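Conceptually, then, the rules database maps patterns over the parsed declarations to handler functions. The sketch below shows the general shape only; the entry layout and helper names are invented for illustration, so see the linked KCoreAddons rules file for the real form:

  # Hypothetical sketch of a binding rules database; the entry
  # layout and helper names are invented for illustration only.

  def discard(container, function, sip, matcher):
      # Clearing the name tells the generator to emit no binding
      # for the matched declaration.
      sip["name"] = ""

  def function_rules():
      # Each entry matches [class, function, signature] as regular
      # expressions and names the handler to apply.
      return [
          # Drop a heavily templated overload that python can't call:
          [".*", "qHash", ".*template.*", discard],
      ]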

Testing Automatically Generated Python Bindings

Each of the KDE Frameworks I’ve so far added bindings for gets a simple test file to verify that the binding can be loaded in the python interpreter (2 and 3). The TODO application in the screenshot is in the umbrella repo.
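Such a loading test can be as small as importing the module and instantiating one wrapped class; here is a hypothetical sketch in the same style as the example above (the path handling via sys.argv mirrors it):

  #!/usr/bin/env python
  # Hypothetical sketch of a minimal binding-load test: importing
  # the module and creating one wrapped object is already a useful
  # smoke check.
  import sys

  sys.path.append(sys.argv[1])

  from PyKF5 import KItemModels

  proxy = KItemModels.KSelectionProxyModel()
  assert proxy is not None
  print("KItemModels binding loaded OK")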

The binding generator itself also has tests to ensure that changes to it do not break generation for any framework. Actually extracting the important information using the cindex API is quite difficult and encounters many edge cases, like QStringLiteral (which actually includes a lambda) in a parameter initializer.

Help Testing Automatically Generated Python Bindings

There is a call to action for anyone who wishes to help on the kde-bindings mailing list!


on May 18, 2016 09:18 PM

(screenshot)

Come over to #kubuntu-devel on freenode IRC, if you care to test from backports-landing.

on May 18, 2016 08:03 PM

On Friday last week I flew out to Austin to run the Community Leadership Summit and join OSCON. When I arrived in Austin, I called home and our son, Jack, was rather upset. It was clear he wasn’t just missing daddy, he also wasn’t feeling very well.

As the week unfolded he developed strep throat. While a fairly benign issue in the scheme of things, it is clearly uncomfortable for him and pretty scary for a 3 year-old. With my wife, Erica, flying out today to also join OSCON and perform one of the keynotes, it was clear that I needed to head home to take care of him. So, I packed my bag, wrestled to keep the OSCON FOMO at bay, and headed to the airport.

Coordinating the logistics was no simple feat, and stressful. We both feel awful when Jack is sick, and we had to coordinate new flights, reschedule meetings, notify colleagues and handover work, coordinate coverage for the few hours in-between her leaving and me landing, and other things. As I write this I am on the flight heading home and at some point she will zoom past me on another flight heading to Austin.

Now, none of this is unusual. Shit happens. People face challenges every day, and many far worse than this. What struck me so notably today though was the sheer level of kindness from our friends, family, and colleagues.

People wrapped around us like a glove. Countless people offered to take care of responsibilities, help us with travel and airport runs, share tips for helping Jack feel better, provide sympathy and support, and more.

This was all after a weekend of running the Community Leadership Summit, an event that solicited similar levels of kindness. There were volunteers who got out of bed at 5am to help us set up, people who offered to prepare and deliver keynotes and sessions, coordinate evening events, equipment, sponsorship contributions, and help run the event itself. Then, to top things off, there were remarkably generous words and appreciation for the event as a whole when it drew to a close.

This is the core of what makes community so special, and so important. While at times it can seem the world has been overrun with cynicism, narcissism, negativity, and selfishness, we are instead surrounded by an abundance of kindness. What helps this kindness bubble to the surface are great relationships, trust, respect, and clear ways in which people can play a participatory role and support each other. Whether it is something small like helping Erica and me to take care of our little man or something more involved such as an open source project, it never ceases to inspire and amaze me how innately kind and collaborative we are.

This is another example of why I have devoted my life to understanding every nuance I can of how we can tap into and foster these fundamental human instincts. This is how we innovate, how we make the world a better place, and how we build opportunity for everyone, no matter what their background is.

When we harness these instincts, understand the subtleties of how we think and operate, and wrap them in effective collaborative workflows and environments, we create the ability to build and disrupt things more effectively than ever.

It is an exciting journey, and I am thankful every day to be joined on it by so many remarkable people. We are going to build an exciting future together and have a rocking great time doing so.

on May 18, 2016 07:48 PM

Increased security, reliability and ease of use, now available on Raspberry Pi

May 18, 2016, London. Today Screenly, the most popular digital signage solution for the Raspberry Pi, and Canonical, the company behind Ubuntu, the world’s most popular open-source platform, jointly announce a partnership to build Screenly on Ubuntu Core. Screenly is adopting Ubuntu Core to give its customers a stable platform that is secure, robust, simple to use and manage, all available on a $35 Raspberry Pi.

Screenly commercialises an easy to install digital signage box or “player” and a cloud-based interface that today powers thousands of screens around the world. This enables restaurants, universities, shops, offices and anyone with a modern TV or monitor to create a secure, reliable digital sign or dashboard. This cost effective solution is capable of displaying full HD quality moving imagery, web content and static images.

Ubuntu Core offers a production environment for IoT devices. In particular, this new “snappy” rendition of Ubuntu offers the ability to update and manage the OS and any applications independently. This means that Screenly players will be kept up to date with the latest version of the Screenly software, and also benefit from continuous OS updates for enhanced security, stability and performance. Transactional updates mean that any update can automatically be rolled back, ensuring reliable performance even in a failed update scenario.

Furthermore, Ubuntu Core devices can be managed from a central location, allowing Screenly users to manage a globally distributed fleet of digital signs easily, and without expensive on-site visits. A compromised display can be corrected immediately, and the security of devices that are in the public sphere is drastically improved.

Viktor Petersson, CEO of Screenly explains, “Ubuntu Core enables us to be more flexible and to focus on our software rather than managing an OS and software distribution across our large fleet of devices.”

Ubuntu Core also offers a standardised OS and interfaces, available across a variety of chipsets and hardware. This means that Screenly can expand its portfolio of players across platforms without the costs traditionally associated with porting software to a new architecture.

Viktor Petersson at Screenly continues, “In terms of hardware, it can run on multiple hardware platforms and therefore if one of our partners requires a different hardware platform, the need to rebuild and retest our whole solution for a new OS goes away. This takes away bargaining power of the hardware vendor and gives the power back to the service providers, which for us means we’ll see greater innovation in this area.”

Mark Shuttleworth, Canonical founder adds, “Ubuntu Core is perfectly suited to applications in digital signage. Its application isolation and transactional updates provide unrivalled security, stability and ease of use, something vital for constantly visible content. We’re pleased to be working with Screenly, whose agile approach is a perfect example of innovation in the digital signage space.”

on May 18, 2016 11:00 AM

May 17, 2016

Thank you CC

Mark Shuttleworth

Just to state publicly my gratitude that the Ubuntu Community Council has taken on their responsibilities very thoughtfully, and has demonstrated a proactive interest in keeping the community happy, healthy and unblocked. Their role is a critical one in the Ubuntu project, because we are at our best when we are constantly improving, and we are at our best when we are actively exploring ways to have completely different communities find common cause, common interest and common solutions. They say that it’s tough at the top because the easy problems don’t get escalated, and that is particularly true of the CC. So far, they are doing us proud.

 

on May 17, 2016 08:16 PM

Lubuntu.me is back

Lubuntu Blog

First of all, we need to apologise for being offline for several days, due to server problems. Now everything’s solved and working fine. The download links have been repaired, the usual sections are still there and, of course, the blog and comment ability is restored. Again, sorry for the annoyance and… Happy downloading!
on May 17, 2016 07:43 PM

Random pairing with Python

Charles Profitt

I am an adviser to a high school robotics team and wrote a small Python script to solve a pairing problem. We are starting our spring fund raising drive and I … Continue reading
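The original script isn’t shown in this excerpt, but a minimal sketch of one way to do random pairing in Python (the names are invented; this is not Charles’ actual script) could look like this:

  #!/usr/bin/env python
  # Sketch: randomly pair up a roster; with an odd headcount the
  # leftover person joins the last pair as a trio. Names invented.
  import random

  members = ["Alice", "Bob", "Carol", "Dave", "Erin", "Frank", "Grace"]
  random.shuffle(members)

  # Take the shuffled list two at a time.
  pairs = [members[i:i + 2] for i in range(0, len(members), 2)]
  if len(pairs) > 1 and len(pairs[-1]) == 1:
      pairs[-2].extend(pairs.pop())

  for pair in pairs:
      print(" & ".join(pair))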
on May 17, 2016 03:01 PM

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In April, 116.75 work hours have been dispatched among 9 paid contributors. Their reports are available:

  • Antoine Beaupré did 16h.
  • Ben Hutchings did 12.25 hours (out of 15 hours allocated + 5.50 extra hours remaining, he returned the remaining 8.25h to the pool).
  • Brian May did 10 hours.
  • Chris Lamb did nothing (instead of the 16 hours he was allocated, his hours have been redispatched to other contributors over May).
  • Guido Günther did 2 hours (out of 8 hours allocated + 3.25 remaining hours, leaving 9.25 extra hours for May).
  • Markus Koschany did 16 hours.
  • Santiago Ruano Rincón did 7.50 hours (out of 12h allocated + 3.50 remaining, thus keeping 8 extra hours for May).
  • Scott Kitterman posted a report for 6 hours made in March but did nothing in April. His 18 remaining hours have been returned to the pool. He decided to stop doing LTS work for now.
  • Thorsten Alteholz did 15.75 hours.

Many contributors did not use all their allocated hours. This is partly explained by the fact that in April Wheezy was still under the responsibility of the security team and they were not able to drive updates from start to finish.

In any case, this means that they have more hours available over May, and since the LTS period has now started, they should hopefully be able to make a good dent in the backlog of security updates.

Evolution of the situation

The number of sponsored hours reached a new record with 132 hours per month, thanks to two new gold sponsors (Babiel GmbH and Plat’Home). Plat’Home’s sponsorship was aimed at helping us maintain Debian 7 Wheezy on armel and armhf (on top of the already supported amd64 and i386). Hopefully the trend will continue so that we can reach our objective of funding the equivalent of a full-time position.

The security tracker currently lists 45 packages with a known CVE and the dla-needed.txt file lists 44 packages awaiting an update.

This is a bit more than the 15-20 open entries that we used to have at the end of the Debian 6 LTS period.

Thanks to our sponsors

New sponsors are in bold.


on May 17, 2016 01:57 PM

This weekend we had the GStreamer Spring Hackfest 2016 in Thessaloniki, my new home town. As usual it was great meeting everybody in person again, having many useful and interesting discussions and seeing what everybody was working on. It seems like everybody was quite productive during these days!

Apart from the usual discussions, cleaning up some of our Bugzilla backlog and getting various patches reviewed, I was working with Luis de Bethencourt on writing a GStreamer plugin with a few elements in Rust. Our goal was to be able to be able to read and write a file, i.e. implement something like the “cp” command around gst-launch-1.0 with just using the new Rust elements, while trying to have as little code written in C as possible and having a more-or-less general Rust API in the end for writing more source and sink elements. That’s all finished, including support for seeking, and I also wrote a small HTTP source.

For the impatient, the code can be found here: https://github.com/sdroege/rsplugin

Why Rust?

Now you might wonder why you would want to go through all the trouble of creating a bridge between GStreamer in C and Rust for writing elements. Other people have written much better texts about the advantages of Rust, which you might want to refer to if you’re interested: The introduction of the Rust documentation, or this free O’Reilly book.

But for myself the main reasons are that

  1. C is a rather antique and inconvenient language if you compare it to more modern languages, and Rust provides a lot of features from higher-level languages while still not introducing all the overhead that comes with them elsewhere, and
  2. even more important are the safety guarantees of the language, including the strong type system and the borrow checker, which make a whole category of bugs much more unlikely to happen. And thus saves you time during development but also saves your users from having their applications crash on them in the best case, or their precious data being lost or stolen.

Rust is not the panacea for everything, and not even the perfect programming language for every problem, but I believe it has a lot of potential in various areas, including multimedia software where you have to handle lots of complex formats coming from untrusted sources and still need to provide high performance.

I’m not going to write a lot about the details of the language; for that just refer to the website and very well written documentation. But although it is a very young language not developed by a Fortune 500 company (it is developed by Mozilla and many volunteers), it is nowadays being used in production already at places like Dropbox or Firefox (their MP4 demuxer, and in the near future the URL parser). It is also used by Mozilla and Samsung for their experimental, next-generation browser engine Servo.

The Code

Now let’s talk a bit about how it all looks like. Apart from Rust’s standard library (for all the basics and file IO), what we also use are the url crate (Rust’s term for libraries) for parsing and constructing URLs, and the HTTP server/client crate called hyper.

On the C side we have all the boilerplate code for initializing a GStreamer plugin (plugin.c), which then directly calls into Rust code (lib.rs), which then calls back into C (plugin.c) for registering the actual GStreamer elements. The GStreamer elements themselves have then an implementation written in C (rssource.c and rssink.c), which is a normal GObject subclass of GstBaseSrc or GstBaseSink but instead of doing the actual work in C it is just calling into Rust code again. For that to work, some metadata is passed to the GObject class registration, including a function pointer to a Rust function that creates a new instance of the “machinery” of the element. This is then implementing the Source or Sink traits (similar to interfaces) in Rust (rssource.rs and rssink.rs):

pub trait Source: Sync + Send {
    fn set_uri(&mut self, uri_str: Option<&str>) -> bool;
    fn get_uri(&self) -> Option<String>;
    fn is_seekable(&self) -> bool;
    fn get_size(&self) -> u64;
    fn start(&mut self) -> bool;
    fn stop(&mut self) -> bool;
    fn fill(&mut self, offset: u64, data: &mut [u8]) -> Result<usize, GstFlowReturn>;
    fn do_seek(&mut self, start: u64, stop: u64) -> bool;
}

pub trait Sink: Sync + Send {
    fn set_uri(&mut self, uri_str: Option<&str>) -> bool;
    fn get_uri(&self) -> Option<String>;
    fn start(&mut self) -> bool;
    fn stop(&mut self) -> bool;
    fn render(&mut self, data: &[u8]) -> GstFlowReturn;
}

And these traits (plus a constructor) are in the end all that has to be implemented in Rust for the elements (rsfilesrc.rs, rsfilesink.rs and rshttpsrc.rs).

If you look at the code, it’s all still a bit rough around the edges and missing many features (like actual error reporting back to GStreamer instead of printing to stderr), but it already works, and the actual implementations of the elements in Rust are rather simple and fun. And even the interfacing with C code is quite convenient at the Rust level.

How to test it?

First of all you need to get Rust and Cargo; check the Rust website or your Linux distribution for details. This was all tested with the stable 1.8 release. And you need GStreamer plus the development files; any recent 1.x version should work.

# clone GIT repository
git clone https://github.com/sdroege/rsplugin
# build it
cd rsplugin
cargo build
# tell GStreamer that there are new plugins in this path
export GST_PLUGIN_PATH=`pwd`
# this dumps the Cargo.toml file to stdout, doing all file IO from Rust
gst-launch-1.0 rsfilesrc uri=file://`pwd`/Cargo.toml ! fakesink dump=1
# this dumps the Rust website to stdout, using the Rust HTTP library hyper
gst-launch-1.0 rshttpsrc uri=https://www.rust-lang.org ! fakesink dump=1
# this basically implements the "cp" command and copies Cargo.toml to a new file called test
gst-launch-1.0 rsfilesrc uri=file://`pwd`/Cargo.toml ! rsfilesink uri=file://`pwd`/test
# this plays Big Buck Bunny via HTTP using rshttpsrc (it has a higher rank than any
# other GStreamer HTTP source currently and is as such used for HTTP URIs)
gst-play-1.0 http://download.blender.org/peach/bigbuckbunny_movies/big_buck_bunny_480p_h264.mov

What next?

The three implemented elements are not too interesting and were mostly an experiment to see how far we can get in a weekend. But the HTTP source for example, once more features are implemented, could become useful in the long term.

Also, in my opinion, it would make sense to consider using Rust for some categories of elements like parsers, demuxers and muxers, as traditionally these elements are rather complicated and have the biggest exposure to arbitrary data coming from untrusted sources.

And maybe in the very long term, GStreamer or parts of it can be rewritten in Rust. But that’s a lot of effort, so let’s go step by step to see if it’s actually worthwhile and build some useful things on the way there already.

For myself, the next step is going to be to implement something like GStreamer’s caps system in Rust (of which I already have the start of an implementation), which will also be necessary for any elements that handle specific data and not just an arbitrary stream of bytes, and it could probably be also useful for other applications independent of GStreamer.

Issues

The main problem with the current code is that all IO is synchronous. That is, if opening the file, creating a connection, reading data from the network, etc. takes a bit longer it will block until a timeout has happened or the operation finished in one way or another.

Rust currently has no support for non-blocking IO in its standard library, and also no support for asynchronous IO. The latter is being discussed in this RFC though, but it probably will take a while until we see actual results.

While there are libraries for all of this, having to depend on an external library for this is not great as code using different async IO libraries won’t compose well. Without this, Rust is still missing one big and important feature, which will definitely be needed for many applications and the lack of it might hinder adoption of the language.

on May 17, 2016 11:47 AM
A few years ago, I wrote and released a fun little script that would carve up an Ubuntu Byobu terminal into a bunch of splits, running various random command line status utilities.

100% complete technical mumbo jumbo.  The goal was to turn your terminal into something that belongs in a Hollywood hacker film.

I am proud to see it included in this NBC News piece about "Ransomware". All of the screenshots demonstrating what a "hacker" is doing with a system are straight from Ubuntu, Byobu, and Hollywood!







Here are a few screenshots, and the video is embedded below...



Enjoy!
:-Dustin
on May 17, 2016 07:07 AM

May 16, 2016

Cloud is now mainstream, but what’s holding it back? What are the biggest concerns of technology decision makers? Are industry leaders choosing public, private or hybrid clouds? Canonical has commissioned Forrester Consulting to explore enterprise cloud platform trends and adoption. Learn about:

  • Most popular cloud deployment and strategies
  • What percentage of enterprises are already utilizing IaaS/PaaS
  • The desired goals and benefits of implementing various cloud models
  • What enterprises plan to do with their clouds
  • Enterprise top concerns about the cloud

The report summarizes how decision makers really feel about the promise of greater flexibility, scalability, agility and cost savings offered by the Cloud.

Download eBook

on May 16, 2016 02:26 PM

You read it right! After several years of being absent, Ubuntu is going to be present at OSCON in 2016. We are going to be there as a non-profit, so make sure you visit us at booth 631-3.

It has been several years since we had a presence as exhibitors, and I am glad to say we’re going to have awesome things this year. It’s also OSCON’s first year in Austin. New year, new venue! But getting to the point, we will have:

  • System76 laptops so you can play and experience with Ubuntu Desktop
  • A couple Nexus 4 phones, so you can try out Ubuntu Touch
  • A bq M10 Ubuntu Edition tablet so you can see how beautiful it is, and see convergence in action (thanks Popey!)
  • A Mycroft! (Thanks to the Mycroft guys, can’t wait to see one in person myself!)
  • Some swag for free (first-come, first-served; make sure to drop by!)
  • And a raffle for the Official Ubuntu Book, 8th Edition!

The conference starts Monday the 16th of May (tomorrow!) but the Expo Hall opens on Tuesday night. You could say we start on Wednesday. :) If you are going to be there, don’t forget to drop by and say hi. It’s my first time at OSCON, so we’ll see how the conference is. I am pretty excited about it – hope to see several of you there!


on May 16, 2016 02:27 AM

May 13, 2016

Some LoCo updates

Aaron Honeycutt

Ubuntu 16.04 LTS Release:

I did not get around to posting the results of the 16.04 LTS release party since I locked myself out of this blog, lol.

Here are some pictures! Even my dad got into the release spirit!


On to Ubuntu Hours:

We’re still having them, and I think they are going very well, bringing in some new people every so often.


SELF 2016

SouthEast Linux Fest is right around the corner! It starts on June 9, and the Florida LoCo will be there, of course!

Up next will be an Ubuntu Touch update, which I’ll get out within the next week. Thanks for reading!

on May 13, 2016 07:06 PM

May 12, 2016

snapd updated to 2.0.3

Zygmunt Krynicki

Ubuntu 16.04 has just been updated with a new release of snapd (2.0.3).

Our release manager, Michael Vogt, has prepared and pushed this release into the Ubuntu archive. You can look at the associated milestone sru-1 on Launchpad for more details.

Work is already under way on sru-2.

You can find the changelog below.

   * New upstream micro release:
     - integration-tests, debian/tests: add unity snap autopkg test
     - snappy: introduce first feature flag for assumes: common-data-dir
     - timeout,snap: add YAML unmarshal function for timeout.Timeout
     - many: go into state.Retry state when unmounting a snap fails.
       (LP: #1571721, #1575399)
     - daemon,client,cmd/snap: improve output after snap
       install/refresh/remove (LP: #1574830)
     - integration-tests, debian/tests: add test for home interface
     - interfaces,overlord: support unversioned data
     - interfaces/builtin: improve the bluez interface
     - cmd: don't include the unit tests when building with go test -c
       for integration tests
     - integration-tests: teach some new trick to the fake store,
       reenable the app refresh test
     - many: move with some simplifications test snap building to
       snap/snaptest
     - asserts: define type for revision related errors
     - snap/snaptest,daemon,overlord/ifacestate,overlord/snapstate: unify
       mocking snaps behind MockSnap
     - snappy: fix openSnapFile's handling of sideInfo
     - daemon: improve snap sideload form handling
     - snap: add short and long description to the man-page
       (LP: #1570280)
     - snappy: remove unused SetProperty
     - snappy: use more accurate test data
     - integration-tests: add a integration test about remove removing
       all revisions
     - overlord/snapstate: make "snap remove" remove all revisions of a
       snap (LP: #1571710)
     - integration-tests: re-enable a bunch of integration tests
     - snappy: remove unused dbus code
     - overlord/ifacestate: fix setup-profiles to use new snap revision
       for setup (LP: #1572463)
     - integration-tests: add regression test for auth bug LP:#1571491
     - client, snap: remove obsolete TypeCore which was used in the old
       SystemImage days
     - integration-tests: add apparmor test
     - cmd: don't perform type assertion when we know error to be nil
     - client: list correct snap types
     - intefaces/builtin: allow getsockname on connected x11 plugs
       (LP: #1574526)
     - daemon,overlord/snapstate: read name out of sideloaded snap early,
       improved change summary
     - overlord: keep tasks unlinked from a change hidden, prune them
     - integration-tests: snap list on fresh boot is good again
     - integration-tests: add partial term to the find test
     - integration-tests: changed default release to 16
     - integration-tests: add regression test for snaps not present after
       reboot
     - integration-tests: network interface
     - integration-tests: add proxy related environment variables to
       snapd env file
     - README.md: snappy => snap
     - etc: trivial typo fix (LP:#1569892)
     - debian: remove unneeded /var/lib/snapd/apparmor/additional
       directory (LP: #1569577)
on May 12, 2016 11:47 PM
One of the first pain points I've been attempting to help smooth out was how Juju is packaged and consumed. The Juju QA Team has put together a new daily PPA you can use, dubbed the Juju Daily PPA. It contains the latest blessed builds from CI testing; installing it and upgrading regularly allows you to stay in sync with the absolute latest version of Juju that passes our CI testing.

Naturally, this ppa is intended for those who like living on the edge, so it's not recommended for production use. If you find bugs, we'd love to hear about them!

To add the ppa, you will need to add ppa:juju/daily to your software sources.

sudo add-apt-repository ppa:juju/daily

Do be aware that adding this PPA will upgrade any version of Juju you may have installed. Also note this PPA contains builds without published streams, so you will need to generate or acquire streams on your own. For most users, this means you should pass --upload-tools during the bootstrap process. However, you may also pass agent-metadata-url and agent-stream as config options. See the PPA description and the simplestreams documentation for more details.
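
For example, a bootstrap that builds and uploads the agent tools locally looks roughly like this (assuming your environment is already configured):

juju bootstrap --upload-tools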

Finally, should you wish to revert to a stable version of Juju, you can use the ppa-purge tool to remove the daily ppa and the installed version of Juju.
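
That typically looks something like this (installing ppa-purge first if you don't have it):

sudo apt-get install ppa-purge
sudo ppa-purge ppa:juju/daily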

I'd love to hear your feedback, and encourage you to give it a try.
on May 12, 2016 08:36 PM

S09E11 – Sweet Baby Robocop - Ubuntu Podcast

Ubuntu Podcast from the UK LoCo

It’s Episode Eleven of Season Nine of the Ubuntu Podcast! Alan Pope, Mark Johnson, Laura Cowen and Martin Wimpress are connected and speaking to your brain.

We’re here again!

In this week’s show:

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

on May 12, 2016 03:11 PM

My Yakkety Yak has arrived!

Elizabeth K. Joseph

I like toys, but I’m an adult who lives in a small condo, so I need to behave myself when it comes to bringing new friends into our home. I made an agreement with myself to try and limit my stuffed toy purchases to two per year, one for each Ubuntu release.

Even so, I now have quite the collection.

These toys serve the purpose of brightening up our events with some fun, and I enjoy the search for a new animal to match Mark Shuttleworth’s latest animal announcement. Truth be told, my tahr is a goat that I found that kind of looks like a tahr. The same goes for my xerus. My pangolin ended up having to be a plastic toy, though awareness of the animal (and conservation efforts) has grown since 2012, so I’d likely be able to find one now. The quetzal was the trickiest; I had to admit defeat and bought an ornament instead, but I did find and buy some quetzal earrings during our honeymoon in Mexico.

I’ve had fun as well and learned more about animals, which I love anyway. For the salamander I bought a $55 Hellbender Salamander Adoption Kit from the World Wildlife Fund, an organization my husband and I now donate to annually. Learning about pangolins led me to visit one in San Diego and made me aware of the Save Pangolins organization.

It is now time for a Yakkety Yak! After some indecisiveness, I went with an adorable NICI yak, which I found on Amazon and had shipped from Shijiazhuang, China. He arrived today.

Here he is!

…though I did also enjoy the first photo I took, where trusty photobombed us.

on May 12, 2016 01:38 AM

May 11, 2016

In the coming months we will be rolling out the new app design guidelines, which will give you the latest toolkit best practices and patterns so you can make your own convergent app for Ubuntu.

Why do we need design guidelines?

The guidelines are a big part of communicating design practices and philosophy to the community and the wider audience. Without them, we wouldn’t have consistency in our design language, and people who want to develop or design on Ubuntu wouldn’t be able to maintain the identity we so love.

Throughout the guide you will see the rationale behind our design values, through the development of our Suru visual language and philosophy.

What’s to come?

We are going to start releasing parts of the Building Blocks section first, which contains all the components you need to start building your application.

Back in November last year we took a look at defining styles for the guide, e.g. how to illustrate examples of best practice. The guide will feature UI examples of how each component will look inside an app across screen sizes, as well as breaking them apart so you can see what goes where and how.

Here’s a sneak peek…


After user testing on the current design guide, users said it was hard to find content, as it was lost in large amounts of text, so we have given the guide a bit of a nip and tuck by balancing visuals and text accordingly.

All new sections

As well as ‘Get Started’, ‘Patterns’ and ‘Building blocks’, we will now introduce ‘System integration’ and ‘Resources’, along with an overview page for each section highlighting key areas.

System integration will feature the touchpoints your app can plug into inside the Ubuntu operating system shell, such as the launcher, notifications and indicators. For example, you can add a count emblem over your app icon inside the Launcher to show unread messages or available updates.

The ‘Resources’ section will feature downloads such as grid layout templates, the new colour palette and our icon set.

Over to you…

We can’t wait to see your app designs and hope that our design practices will help you achieve a great user experience on Ubuntu.

on May 11, 2016 03:42 PM
I'm not a fan of IDEs, so when developing apps for Ubuntu Phone I went looking for a way of doing everything from the command line. I'll describe the method I'm currently using to do this. It's pretty rough and there are probably better ways of doing it, but it does work!

For an example, I've made a small application that:
  • Has some C++ code
  • Has some QML code
  • Uses gettext for translations
I need to do the following things:
  • Extract translatable strings from source files and merge in translations
  • Cross-compile for the target device (ARM CPU) from my laptop (AMD64 CPU)
  • Create a click package
  • Deploy the click package to my phone for testing
I'm using make for the build system. It's pretty basic, but it's fairly easy to understand what it does.

The project

My example project has the following files:
 
Makefile
hello.apparmor
hello.cpp
hello.desktop.in
hello.h
hello.png
main.qml
manifest.json
po/fr.po
po/de.po
po/hello.robert-ancell.pot
share/locale/de/LC_MESSAGES/
share/locale/fr/LC_MESSAGES/

If you've done some Ubuntu phone apps, hopefully these will be familiar.

Translations 

To make my app translatable to other languages (French and German in this example) I've used gettext:
  • The .desktop file needs translations added to it. This is done by using hello.desktop.in and prefixing the fields that need translating with '_' (e.g. _Name). These are then combined with the translations to make hello.desktop.
  • In the .qml files translatable strings are marked with i18n.tr ("text to translate").
  • The compiled message catalogues (.mo files) need to be in the click package as share/locale/(language)/(appid).mo.
Gettext / intltool are somewhat scary to use, but here are the magic rules that work for me:

hello.desktop: hello.desktop.in po/*.po
    intltool-merge --desktop-style po $< $@

po/hello.robert-ancell.pot: main.qml hello.desktop.in
    xgettext --from-code=UTF-8 --language=JavaScript --keyword=tr --keyword=tr:1,2 --add-comments=TRANSLATORS main.qml -o po/hello.robert-ancell.pot
    intltool-extract --type=gettext/keys hello.desktop.in
    xgettext --keyword=N_ hello.desktop.in.h -j -o po/hello.robert-ancell.pot
    rm -f hello.desktop.in.h

share/locale/%/LC_MESSAGES/hello.robert-ancell.mo: po/%.po
    msgfmt -o $@ $<

Cross-compiling for the target device

To compile our package I need to make a chroot:

$ sudo click chroot -a armhf -f ubuntu-sdk-15.04 create

The following Makefile rule runs make inside this chroot, then packages the results into a click file:

click:
        click chroot -a armhf -f ubuntu-sdk-15.04 run ARCH_PREFIX=arm-linux-gnueabihf- make
        click build --ignore=Makefile --ignore=*.cpp --ignore=*.h --ignore=*.pot --ignore=*.po --ignore=*.in --ignore=po .

Note the ARCH_PREFIX variable in the above. I've used this to run the correct cross-compiler when inside the chroot. When compiling from my laptop this variable is not set so it uses the local compiler.

hello_moc.cpp: hello.h
    moc $< -o $@

hello: hello.cpp hello_moc.cpp
    $(ARCH_PREFIX)g++ -g -Wall -std=c++11 -fPIC $^ -o $@ `$(ARCH_PREFIX)pkg-config --cflags --libs Qt5Widgets Qt5Quick`

Running 'make click' in this project will spit out hello.robert-ancell_1_armhf.click.

Testing

I connect my phone with a USB cable and copy the click package over and install it locally:

$ adb push hello.robert-ancell_1_armhf.click /tmp/hello.robert-ancell_1_armhf.click
$ phablet-shell
$ pkcon install-local --allow-untrusted /tmp/hello.robert-ancell_1_armhf.click

Then on the phone I quit any instances of this app running, refresh the app scope (pull down at the top) and run my test version.

Summary

Just one more rule to add to the Makefile, then it all works:

all: hello \
     hello.desktop \
     po/hello.robert-ancell.pot \
     share/locale/de/LC_MESSAGES/hello.robert-ancell.mo \
     share/locale/fr/LC_MESSAGES/hello.robert-ancell.mo

The whole example is available in Launchpad:

$ bzr branch lp:~robert-ancell/+junk/hello-example

Happy hacking!
on May 11, 2016 11:22 AM

I recently found out that I have access to a 1 TB cloud storage drive by 1&1, so I decided to start taking off-site backups of my $HOME (well, backups at all, previously I only mirrored the latest version from my SSD to an HDD).

I initially tried obnam. Obnam seems like a good tool, but it is insanely slow. Unencrypted, it can write about 3 MB/s, which is somewhat OK, but even then it can spend hours forgetting generations (one generation takes probably 2 minutes, and there might be 22 of them). In encrypted mode the speed drops a lot, to about 500 KB/s if I recall correctly, which is just unusable.

I found borg backup, a fork of attic. Borg achieves speeds of up to 15 MB/s, which is really nice. It’s also faster at scanning: I can now run my bi-hourly backups in about 1 min 30 s (usually backing up about 30 to 80 MB – mostly thanks to Chrome, I suppose!). And all those speeds are with encryption turned on.

Both borg and obnam compose files from some form of chunks. Obnam stores each chunk in its own file; borg stores multiple chunks (even from different files) in a single pack file, which is probably the main reason it is faster.
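
To give a rough idea of the borg workflow (the repository path and excludes here are placeholders, not my actual setup):

borg init --encryption=repokey /path/to/backup-repo
borg create --stats --exclude ~/Downloads --exclude ~/.cache \
    /path/to/backup-repo::backup-$(date +%Y%m%d-%H%M) ~

borg init creates and encrypts the repository once; each borg create call then adds a new deduplicated archive to it.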

So how am I backing up? My laptop has an internal SSD and an HDD. I back up every 2 hours (at 09:00, 11:00, 13:00, 15:00, 17:00, 19:00, 21:00, 23:00 and 01:00) from the SSD to the HDD, using a systemd timer event. The backup includes all of $HOME except for Downloads, .cache, the trash, the Android SDK, and the Eclipse and IntelliJ IDEA IDEs.
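
For the curious, the timer half of such a setup can be sketched roughly like this; the unit name and exact contents below are a reconstruction for illustration, not my actual files:

# ~/.config/systemd/user/backup.timer (hypothetical name)
[Unit]
Description=Bi-hourly home backup

[Timer]
# Fire at 01:00, 09:00, 11:00, ... 23:00 every day
OnCalendar=*-*-* 01,09,11,13,15,17,19,21,23:00:00
Persistent=true

[Install]
WantedBy=timers.target

A matching backup.service would carry the actual borg invocation in its ExecStart line, and the timer gets enabled with systemctl --user enable --now backup.timer.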

Now the magic comes in: The backup repository on the HDD is monitored by git-annex assistant, which automatically encrypts and uploads any new files in there to my 1&1 WebDAV drive and registers them in a git repository hosted on bitbucket. All files are encrypted and checksummed using SHA256, reducing the chance of the backup being corrupted.

I’m not sure how the WebDAV thing will work once I want to prune things. I suspect it will then delete some pack files and repack things into new files, which means it will spend more bandwidth than obnam would. I’d also have to convince git-annex to actually drop anything from the WebDAV remote, but that is not really much of a concern with 1 TB of storage space for the next 2 years at least…

I also have an external encrypted HDD which I can take backups on; it currently houses a fuller backup of $HOME that also includes Downloads, the Android SDK, and the IDEs, for quicker recovery. Downloads changes a lot, and all of it can fairly easily be re-retrieved from the internet as needed, so there’s not much point in painfully uploading it to a WebDAV backup site.

 


Filed under: Uncategorized
on May 11, 2016 09:47 AM
Ryan Lortie and I have been tinkering away at getting GTK+ applications to run in Unity 8, and as you can see below, it works!


This shows me running the Unity 8 preview session. Simple Scan shows up as an option and can be launched and perform a scan.

This is only a first start, and there's still lots of work to be done. In particular:

  • Applications need to set X-Ubuntu-Touch=true in their .desktop files to show in Unity 8.
  • Application icons from the gnome theme do not show (bug).
  • GTK+ applications don't go fullscreen (bug).
  • No cursors changes (bug).
  • We only support single window applications because we can't place/focus the subwindows yet (bug). We're currently faking menus and tooltips by drawing them onto the same surface.

If you are using Ubuntu 14.10 you can install the packages for this from a PPA:

$ sudo apt-add-repository ppa:ubuntu-desktop/gtk-mir
$ sudo apt-get update
$ sudo apt-get upgrade

The PPA contains a version of GTK+ with Mir support, fixes for libraries that assume you are running in X and a few select applications patched so they show in Unity 8.

The Mir backend is currently on the wip/mir branch in the GTK+ git repository. We will keep developing it there until it is complete enough to propose into GTK+ master. We have updated jhbuild to support Mir so we can easily build and test this backend going forward.

on May 11, 2016 09:35 AM

May 10, 2016

Did I completely blow out of proportion a seemingly innocent comment meant to be in jest? Absolutely!
This is the result of Real World problems that happen when you have no income. I am currently looking into
some new opportunities that have presented themselves. Please note that I never intended to completely dump Kubuntu, but I do need to sort out
my career 🙂

Thank you all for your kind words in this fiasco.

Scarlett Clark

on May 10, 2016 08:23 PM

Meeting information

Agenda

Minutes

Review ACTION points from previous meeting

The discussion about “Review ACTION points from previous meeting” started at 16:00.

  • No actions from previous meeting.

Development Release

The discussion about “Development Release” started at 16:00.

Assigned merges/bugwork (rbasak)

The discussion about “Assigned merges/bugwork (rbasak)” started at 16:02.

  • rbasak has triaged some bugs, tagging some “bitesize”, and advised prospective Ubuntu developers to take a look at these.

Server & Cloud Bugs (caribou)

The discussion about “Server & Cloud Bugs (caribou)” started at 16:05.

  • Squid, Samba and MySQL bugs noted; all are in progress. Nothing else in particular to bring up.

Weekly Updates & Questions for the QA Team

The discussion about “Weekly Updates & Questions for the QA Team” started at 16:06.

  • No questions to or from the QA team
  • jgrimm noted that the Canonical Server Team currently have an open QA position and invited applications and recommendations.

Weekly Updates & Questions for the Kernel Team (smb, sforshee, arges)

The discussion about “Weekly Updates & Questions for the Kernel Team (smb, sforshee, arges)” started at 16:07.

  • No questions to/from the kernel team

Upcoming Call For Papers

The discussion about “Upcoming Call For Papers” started at 16:10.

  • No upcoming CfPs of note

Ubuntu Server Team Events

The discussion about “Ubuntu Server Team Events” started at 16:11.

  • No upcoming events of note

Open Discussion

The discussion about “Open Discussion” started at 16:11.

Announce next meeting date, time and chair

The discussion about “Announce next meeting date, time and chair” started at 16:18.

  • The next meeting will be at Tue 17 May 16:00:00 UTC 2016. jamespage will chair.

Generated by MeetBot 0.1.5 (http://wiki.ubuntu.com/meetingology)

on May 10, 2016 04:47 PM

Dogfooding Unity 8

Michael Hall

During the Ubuntu Online Summit last week, my colleague Daniel Holbach came up with what he called a “10 day challenge” for some of the engineering managers directing the convergence work in Ubuntu. The idea is simple: try to use only the Unity 8 desktop for 10 working days (two weeks). I thought this was a great way to really identify how close it is to being usable by most Ubuntu users, as well as to find the bugs that cause the most pain in making the switch. So on Friday of last week, with UOS over, I took up the challenge.

Below I will discuss all of the steps that I went through to get it working for my needs. They are not the “official” way of doing it (there isn’t an official way to do all this yet) and they won’t cover every usage scenario, just the ones I faced. If you want to try this challenge yourself, they will help you get started. If at any time you get stuck, you can find help in the #ubuntu-unity channel on Freenode, where the developers behind all of these components are very friendly and helpful.

Getting Unity 8

To get started you first need to be on the latest release of Ubuntu. I am using Ubuntu 16.04 (Xenial Xerus), which is the best release for testing Unity 8. You will also need the stable-phone-overlay PPA. Don’t let the name fool you: it’s not just for phones, but it is where you will find the very latest packages for Mir, Unity 8, Libertine and other components you will need. You can install it with this command:

sudo add-apt-repository ppa:ci-train-ppa-service/stable-phone-overlay

Then you will need to install the Unity 8 session package, so that you can select it from the login screen:

sudo apt install unity8-desktop-session-mir

When I did this there was a bug in the libhybris package that was causing Mir to try and use some Android stuff, which clearly isn’t available on my laptop. The fix wasn’t yet in the PPA, so I had to take the additional step of installing a fix from our continuous integration system (Note: originally the command below used silo 53, but I’ve been told it is now in silo 31). If you get a black screen when trying to start your Unity 8 session, you probably need this too.

sudo apt-get install phablet-tools phablet-tools-citrain
citrain host-upgrade 031

This was enough to get Unity 8 to load for me, but all my apps would crash within half a second of being launched. It turned out to be a problem with the cgroups manager: specifically, the cgmanager service was disabled for me (I suspect this was a leftover configuration from previous attempts at using Unity 8). After re-enabling it, I was able to log back into Unity 8 and start using apps!

sudo systemctl enable cgmanager

Essential Core Apps

The first thing you’ll notice is that you don’t have many apps available in Unity 8. I probably had more than most, having already installed some Ubuntu SDK apps natively on my laptop. If you haven’t installed the webbrowser-app already, you should. It’s in the Xenial archive and the PPA you added above, so just

sudo apt install webbrowser-app

But that will only get you so far. What you really need are a terminal and file manager. Fortunately those have been created as part of the Core Apps project, you just need to install them. Because the Ubuntu Store wasn’t working for me (see bottom of this post) I had to manually download and install them:

sudo click install --user mhall com.ubuntu.filemanager_0.4.525_multi.click
sudo click install --user mhall com.ubuntu.terminal_0.7.170_multi.click

If you want to use these apps in Unity 7 as well, you have to modify their .desktop files, located in ~/.local/share/applications/, and add the -x flag after aa-exec-click. This is because, by default, it prevents running these apps under X11, where they won’t have the safety of confinement that they get under Mir.
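
Schematically, the change to the Exec line looks like this, where the elided part stands for the app-specific profile and command already present in your file:

Exec=aa-exec-click ...       (before, Mir only)
Exec=aa-exec-click -x ...    (after, also runs under X11)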

The file manager needed a bit of extra effort to get working. It contains many Samba libraries that allow it to access Windows network shares, but for some reason the app was looking for them in the wrong place. As a quick and dirty hack, I ended up copying whatever libraries it needed from /opt/click.ubuntu.com/com.ubuntu.filemanager/current/lib/i386-linux-gnu/ to /usr/lib/i386-linux-gnu/samba/. It’s worth the effort, though, because you need the file manager if you want to do things like upload files through the webbrowser.

Using SSH

IRC is a vital communication tool for my job; we all use it every day. In fact, I find it so important that I have a remote client that stays connected 24/7, which I connect to via ssh. Thanks to the Terminal core app, I have quick and easy access to that. But when I first tried to connect to my server, which uses public-key authentication (as they all should), my connection was refused. That is because the Unity 8 session doesn’t run the ssh-agent service on startup. You can start it manually from the terminal:

ssh-agent

This will output some shell commands to set up environment variables; copy those and paste them right back into your terminal to set them. Then you should be able to ssh as normal, and if your key needs a passphrase you will be prompted for it in the terminal rather than in a dialog like you get in Unity 7.
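
A standard shortcut for this (plain ssh-agent usage, nothing Unity 8 specific) is to let the shell evaluate that output directly and then add your key:

eval "$(ssh-agent -s)"
ssh-add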

Getting traditional apps

Now that you’ve got some apps running natively on Mir, you probably want to try out support for all of your traditional desktop apps, as you’ve heard advertised. This is done by a project called Libertine, which creates an LXC container and XMir to keep those unconfined apps safely away from your new properly confined setup. The first thing you will need to do is install the libertine packages:

sudo apt install libertine libertine-scope

Once you have those, you will see a Libertine app in your Apps scope. This is the app that lets you manage your Libertine containers (yes, you can have more than one) and install apps into them. Creating a new container is simply a matter of pressing the “Install” button. You can give it a name or leave it blank to get the default “Xenial”.

Once your container is set up, you can install as many apps into it as you want, again using the Libertine container manager. You can even use it to search the archives if you don’t know the exact package name. It will also install any dependencies that a package needs into your Libertine container.

Now that you have your container set up and apps installed into it, you are ready to start trying them out. For now you have to access them from a separate scope, since the default Apps scope doesn’t look into Libertine containers. That is why you had to install the libertine-scope package above. You can find this scope by clicking on the Dash’s bottom edge indicator to open the Scopes manager, and selecting the Legacy Applications scope. There you will see launchers for the apps you have installed.

Libertine uses a special container manager to launch apps. If it isn’t running, as was the case for me, your legacy app windows will remain black. To fix that, open up the terminal and manually start the manager:

initctl --session start libertine-lxc-manager

Theming traditional apps

By default the legacy apps don’t look very nice. They default to the most basic of themes, which look like you’ve time-traveled back to the mid-1990s, and nobody wants to do that. The reason is that these apps (or rather, the toolkit they use) expect certain system settings to tell them what theme to use, but those settings aren’t actually a dependency of the application’s package. They are part of a default desktop install, but not part of the default Libertine image.

I found a way to fix this, at least for some apps, by installing the light-themes and ubuntu-settings packages into the Libertine container. Specifically, it should work for any Gtk3-based application, such as GEdit. It does not, however, work for apps that still use the Gtk2 toolkit, such as Geany. I have not dug deeper to try and figure out how to fix Gtk2 themes; if anybody has a suggestion please leave it in the comments.

What works

It has been a couple of months since I last tried the Unity 8 session, back before I upgraded to Xenial, and at that time there wasn’t much working. I went into this challenge expecting it to be better, but not by much. I honestly didn’t expect to spend even a full day using it. So I was really quite surprised to find that, once I found the workarounds above, I was not only able to spend the full day in it, but I was able to do so quite easily.

Whenever you have a new DE (which Unity 8 effectively is) and the latest UI toolkit (Qt 5) you have to be concerned about performance and resource use, and given the bleeding-edge nature of Unity 8 on the desktop, I was expecting to sacrifice some CPU cycles, battery life and RAM. If anything, the opposite was the case. I get at least as many hours on my battery as I do with Unity 7, and I was using less than half the RAM I typically do.

Moreover, things that I was expecting to cause me problems surprisingly didn’t. I was able to use Google Hangouts for my video conferences, which I knew had just been enabled in the browser. I fully expected suspend/resume to have trouble with Mir, given the years I spent fighting it in X11 in the past, but it worked nearly flawlessly (see below). The network indicator had all of my VPN configurations waiting to be used, and they worked perfectly. Even PulseAudio was working as well as it did in Unity 7, though this did introduce some problems (again, see below). It even has settings to adjust the mouse speed and disable the trackpad when I’m typing. Best of all, nearly all of the keyboard shortcuts that have become subconscious to me in Unity 7 are working in Unity 8.

Most importantly, I was able to write this blog post from Unity 8. That includes taking all of the screenshots and uploading them to WordPress, switching back and forth between my browser and my notes document to see what I had done over the last few days, and going to the terminal to verify the commands I mentioned above.

What doesn’t

Of course, it wasn’t all unicorns and rainbows, Unity 8 is still very bleeding edge as a desktop shell, and if you want to use it you need to be prepared for some pain. None of it has so far been bad enough to stop me, but your mileage may vary.

One of the first minor pain points is the fact that middle-click doesn’t paste the active text highlight. I hadn’t realized how much I had become dependent on that until I didn’t have it. You also can’t copy/paste between a Mir and an XMir window, which makes legacy apps somewhat less useful, but that’s on the roadmap to be fixed.

Speaking of windows, Unity 8 is still limited to one per app. This is going to change, but it is the current state of things. This doesn’t matter so much for native apps, which were built under this restriction, and the terminal app having tabs was a saving grace here. But for legacy apps it presents a bigger issue, especially apps like GTG (Getting Things Gnome) where multi-window is a requirement.

Some power-management is missing too, such as dimming the screen after some amount of inactivity, or turning it off altogether. The session also will not lock when you suspend it, so don’t depend on this in a security-critical way (but really, if you’re running bleeding-edge desktops in security-critical environments, you have bigger problems).

I also had a minor problem with my USB headset. It’s actually a problem I have in Unity 7 too: since upgrading to Xenial, the volume and mute controls don’t automatically switch to the headset, even though the audio output and input do. I had a workaround for that in Unity 7, where I could open the sound settings and manually change the output to the headset, at which point the controls work on it. But in Unity 8’s sound settings there is no such option, so my workaround isn’t available.

The biggest hurdle, from my perspective, was not being able to install apps from the store. This is due to something in the store scope, online accounts, or Ubuntu One; I haven’t figured out which yet. So to install anything, I had to get the .click package and do it manually. But asking around, I seem to be the only one having this problem, so those of you who want to try this yourself may not have to worry about it.

The end?

No, not for me. I’m on day 3 of this 10 day challenge, and so far things are going well enough for me to continue. I have been posting regular small updates on Google+, and will keep doing so. If I have enough for a new blog post, I may write another one here, but for the most part keep an eye on my G+ feed. Add your own experiences there, and again join #ubuntu-unity if you get stuck or need help.

on May 10, 2016 03:51 PM

With OTA-11 on the horizon, rc-proposed now has a new framework. If you want to use the latest UI Toolkit API that is part of Ubuntu.Components 1.3, you should bump your framework to 15.04.5 - in QtCreator you can find your manifest.json.in under Other Files and simply select the new version. Now you can use the new subtitle property with PageHeader, which complements the existing title and shows a smaller label at the bottom of the header. PageHeaderStyle gains subtitleColor, which can be used via StyleHints to customize the look of the new subtitle. Furthermore, disabledForegroundColor now allows changing the color of the actions when the header is disabled. For example:


Page {
    header: PageHeader {
        id: pageHeader
        title: i18n.tr("Hello World")
        subtitle: i18n.tr('Lorem ipsum dolor sit amet')
        StyleHints {
            backgroundColor: UbuntuColors.inkstone
            foregroundColor: UbuntuColors.blue
            // The color of disabled actions
            disabledForegroundColor: UbuntuColors.red
            // The divider at the bottom
            dividerColor: UbuntuColors.red
            // The new subtitle
            subtitleColor: UbuntuColors.green
        }
        trailingActionBar.actions: [
            Action {
                iconName: 'list-add'
                onTriggered: console.log('Hello world')
            },
            Action {
                iconName: 'list-remove'
                enabled: false
            }
        ]
    }
}


"enabled: false" in the Action turns it red as it no longer responds to touch, mouse or keyboard input (the same can be done for all actions by disabling PageHeader).

Remember OTA-10 and framework 15.04.4?

Already shipping on most if not all Ubuntu devices now is OTA-10, which brought with it new API for the BottomEdge component: preloadContent. This new boolean property, when set to true, causes all content to preload in the background even before the hint is used to reveal it – the default is false, which means content is loaded on demand as before. This can speed things up a great deal in some cases.

BottomEdge {
    id: bottomEdge
    height: parent.height - units.gu(20)
    hint.text: "My bottom edge"
    preloadContent: true
    contentComponent: Rectangle {
        color: UbuntuColors.green
        width: page.width
        height: page.height
    }
}

 

on May 10, 2016 01:11 PM

Today I have released the new major version of the ReText editor.

This release would not have been possible without Maurice van der Pot, who authored the greatest features – the ones because of which the major version number was bumped:

  • Synchronized scrolling for Markdown. This means that the preview area scrolls automatically to make sure the paragraph you are editing is visible.

  • The multi-process mode. The text conversion now happens in a separate process, so it will never block the UI. As a result, the interface should now be more responsive when editing huge texts.

    This also required refactoring the PyMarkups library. ReText now depends on version 2.0 of it (or a newer version), which was released yesterday.

Other news worth mentioning are:

  • The tags box was replaced with a new “Formatting” box with quick access to most Markdown features, such as headers, lists, links, images and so on. The initial version of this code was contributed by Donato Marrazzo.

  • Images can now be pasted into ReText: first a save dialog for them is presented, and then the valid Markdown or reStructuredText markup for them is inserted. This was contributed by Bart Clephas and Maurice van der Pot.

  • Basic CSS styling for tables has been added (to make sure they have visible borders). This applies only if you don’t have your own CSS.

  • Added a button to quickly close the search bar, for those who prefer buttons over keyboard shortcuts.

  • Hitting the Return key twice now closes an itemized list and removes the inserted list marker from the previous line.

As usual, some bugs have been fixed (most of the fixes have also been backported to the 5.3.1 release), and some translations have been updated from Transifex.

Please report any bugs you find to our issue tracker.

on May 10, 2016 11:45 AM

Once upon a time, we were all having the most fun ever, in Austin, Texas, playing mudfootball!

Everyone got to play. We took turns. No one got hurt. It was great.  It was a happy time.  Everybody wins in mudfootball!

But a couple of kids didn't like some of the rules of mudfootball, so they suggested changing them. We all listened to their newly proposed rules, and put them to a vote.

Some players (like me) were cool with the new proposed rules, but after the vote, it turned out that the majority preferred the existing rules.

So a few of the kids got mad and left :-( But more than just that, they were sort of bratty, and they took their football with them.

Now, the rest of us are kind of sad and muddy and want to play more mudfootball but the ball went away. Oh well...

Hopefully the mad kids change their mind and come back and play!  Otherwise, I guess we'll have to go find a new football to play with?

:-Dustin

p.s. I really, really, really hope Uber and Lyft remain in Austin!  The City of Austin requires:
(1) fingerprint based background checks,
(2) no dropoffs/pickups in active lanes of traffic, and
(3) placards marking a car as an Uber/Lyft vehicle.
That's pretty much it.  Please come back, Uber and Lyft!
on May 10, 2016 02:53 AM

May 09, 2016

Want to get deeper under the hood with Kubuntu?

Interested in becoming a developer?

Then come and join us in the Kubuntu Dojo:

Thursday 26th May 2016 – 18:00 UTC

Packaging is one of the primary development tasks in a large Linux distribution project, and it is the essential way of getting the latest and best software to the user.

We are restarting the Kubuntu Dojo and Ninja developers training courses. These courses are free to attend, and take place on the last Thursday of the month at 18:00 UTC.

Thanks go to Fred Dixon and the team at BigBlueButton.org for making this possible, as their BBB product is aimed specifically at Education, and is a perfect fit for on-line training.

Details for accessing the Kubuntu Big Blue Button Conference server will be announced in the G+ event stream, and on IRC: irc://irc.freenode.net

#kubuntu
#kubuntu-devel

Why it rocks

All the cool kids are doing it.
Packagers know everyone.
Not only will you be part of an elite group, but you'll also get to know Debian’s finest, as well as KDE developers and other application developers.

For more details about the Kubuntu Ninjas programme, see our wiki:

https://wiki.kubuntu.org/Kubuntu/GettingInvolved/Development

on May 09, 2016 08:35 PM