August 29, 2014

We gulped down the extraordinary breakfast buffet like ducks, for no reason other than that we were very short on time: our flight left early.


Before taking off from Bogotá, Avianca made us pay for the US ESTA ($14); otherwise the airline would have been fined.
After landing in Dexter Morgan's city we had to 'voluntarily' hand over our passports, personal details, a customs declaration and the ESTA declaration, and have our fingerprints and photo taken... Not even my own government knows as much about me now as the USA does! And all we wanted was to change planes.
I was also surprised by the cameras in the airport, one every 20 m.
In Bogotá we had been told we would collect our luggage in Lisbon, but we found out by chance that we had to pick it up from the Miami baggage carousel and carry it to a counter 50 m away :O It's a miracle the bags didn't get left behind in Miami.

And after the seven-and-some-hour flight from Miami, we arrived in Lisbon.
Day was breaking over the capital, and it was a pleasure to wander its deserted streets and squares.
Where is everybody?

Mind you, for far too many hours our radius of action was 250 m from the tourist information point (the reason being that our diarrhoea was still doing its thing, and there was a paid public toilet there).
We rested at the hotel all afternoon, probably because of the jet lag. And at dinner time we enjoyed sardines and bonito, accompanied by an excellent vinho verde.

Trying to forget the jet lag
The next day it was time to head back to Asturies, after a trip squeezed down to its very last minute, but that summary belongs in the final post :)

Yum, yum...

On the other side of the sea... Colombia :)

And we ran into the hottest day of the year

Continue reading more about this trip.
on August 29, 2014 05:04 PM

S07E22 – The One with the Joke

Ubuntu Podcast from the UK LoCo

We’re back with Season Seven, Episode Twenty-Two of the Ubuntu Podcast! Alan Pope, Mark Johnson, and Laura Cowen are drinking tea and eating homemade tiffin in Studio L.

In this week’s show:

  • We interview Daniel Holbach from the Ubuntu Community Team…

  • We also discuss:

    • Playing with old console games…
    • Raising a bug on Ubuntu…
    • Attending JISC SOC Innovation…
  • We share some Command Line Lurve that sets up a SOCKS proxy on localhost port xxx which you can use to (say) browse the web from some_host (from @MartijnVdS; see the usage sketch after this list):
      ssh -D xxx some_host
    
  • And we read your feedback. Thanks for sending it in!
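If you want to try the SOCKS tip above, here's a quick usage sketch (8080 stands in for the xxx placeholder; pick any free local port):

    ssh -N -D 8080 some_host &
    # -N keeps the connection tunnel-only (no remote shell)
    curl --socks5-hostname localhost:8080 http://example.com/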

We’ll be back next week, so please send your comments and suggestions to: podcast@ubuntu-uk.org
Join us on IRC in #uupc on Freenode
Leave a voicemail via phone: +44 (0) 203 298 1600, sip: podcast@sip.ubuntu-uk.org and skype: ubuntuukpodcast
Follow us on Twitter
Find our Facebook Fan Page
Follow us on Google+

on August 29, 2014 04:34 PM

This month:
* Command & Conquer
* How-To : Minimal Ubuntu Install, LibreOffice, and GRUB2.
* Graphics : Blender and Inkscape.
* Linux Labs: Ripping DVDs with HandBrake, and Compiling a Kernel
* Arduino
plus: Q&A, Security, Ubuntu Games, and soooo much more.

ALSO: Don’t forget to search for ‘full circle magazine’ on Google Play/Books.

http://fullcirclemagazine.org/issue-88/


on August 29, 2014 04:03 PM
We’re preparing Lubuntu 14.10, the Utopic Unicorn, for distribution in October 2014. With this early Beta pre-release, you can see what we are trying out in preparation for our next version (with the 3.16.0-11 Ubuntu Linux kernel). Remember that this is an early beta pre-release, so don't use it on daily production computers.

We'd like you to join us for testing, especially if you have a PPC machine. We didn't have PPC testers this release, so there is no PPC release.

Read the release notes before getting the disc images, and contact us with feedback.
on August 29, 2014 03:20 PM

Kubuntu on LinkedIn

Kubuntu Wire

We can sit in our own nerdy world in open source communities too much, so at Kubuntu we have been setting up social media channels. We have just added a LinkedIn page for Kubuntu, which should carry the usual news stories of new releases and updates. There is also a Kubuntu Users group on LinkedIn if you want to share experiences with people who take more of a business approach to their computers than users of other social media websites.

14.10 Beta 1 is out; you can give us feedback on Google+ https://plus.google.com/u/0/107577785796696065138/posts, Facebook https://www.facebook.com/kubuntu.org, Twitter https://twitter.com/kubuntu or LinkedIn https://www.linkedin.com/company/kubuntu

on August 29, 2014 09:43 AM

Google's libphonenumber is a universal library for parsing, validating, identifying and formatting phone numbers. It works quite well for numbers from just about anywhere. Here is a Java code sample (C++ and JavaScript are also supported) from their web site:


import com.google.i18n.phonenumbers.NumberParseException;
import com.google.i18n.phonenumbers.PhoneNumberUtil;
import com.google.i18n.phonenumbers.PhoneNumberUtil.PhoneNumberFormat;
import com.google.i18n.phonenumbers.Phonenumber.PhoneNumber;

String swissNumberStr = "044 668 18 00";
PhoneNumberUtil phoneUtil = PhoneNumberUtil.getInstance();
PhoneNumber swissNumberProto = null;
try {
  // "CH" tells the parser to treat national-format numbers as Swiss
  swissNumberProto = phoneUtil.parse(swissNumberStr, "CH");
} catch (NumberParseException e) {
  System.err.println("NumberParseException was thrown: " + e.toString());
}
boolean isValid = phoneUtil.isValidNumber(swissNumberProto); // returns true
// Produces "+41 44 668 18 00"
System.out.println(phoneUtil.format(swissNumberProto, PhoneNumberFormat.INTERNATIONAL));
// Produces "044 668 18 00"
System.out.println(phoneUtil.format(swissNumberProto, PhoneNumberFormat.NATIONAL));
// Produces "+41446681800"
System.out.println(phoneUtil.format(swissNumberProto, PhoneNumberFormat.E164));

This is particularly useful for anybody working with international phone numbers. This is a common requirement in the world of VoIP where people mix-and-match phones and hosted PBXes in different countries and all their numbers have to be normalized.

About the packages

The new libphonenumber package provides support for C++ and Java users. Upstream also supports JavaScript but that hasn't been packaged yet.

Using libphonenumber from Evolution and other software

Lumicall, the secure SIP/ZRTP client for Android, has had libphonenumber from the beginning. It is essential when converting dialed numbers into E.164 format to make ENUM queries and it is also helpful to normalize all the numbers before passing them to VoIP gateways.

Debian includes the GNOME Evolution suite and it will use libphonenumber to improve handling of phone numbers in contact records if enabled at compile time. Fredrik has submitted a patch for that in Debian.

Many more applications can potentially benefit from this too. libphonenumber is released under an Apache license so it is compatible with the Mozilla license and suitable for use in Thunderbird plugins.

Improving libphonenumber

It is hard to keep up with the changes in dialing codes around the world. Phone companies and sometimes even whole countries come and go from time to time. Numbering plans change to add extra digits. New prefixes are created for new mobile networks. libphonenumber contains metadata for all the countries and telephone numbers that the authors are aware of but they also welcome feedback through their mailing list for anything that is not quite right.

Now that libphonenumber is available as a package, it may be helpful for somebody to try and find a way to split the metadata from the code so that metadata changes could be distributed through the stable updates catalog along with other volatile packages such as anti-virus patterns.

on August 29, 2014 08:02 AM

Test processes as servers

Robert Collins

Since its very early days subunit has had a single model – you run a process, it outputs test results. This works great, except when it doesn’t.

On the up side, you have a one way pipeline – there’s no interactivity needed, which makes it very very easy to write a subunit backend that e.g. testr can use.

On the downside, there's no interactivity, which means that any time you want to do something with those tests, a new process is needed – and that's sometimes quite expensive, particularly in test suites with tens of thousands of tests.

Now, for use in the development edit-execute loop, this is arguably OK, because one needs to load the new tests into memory anyway; but wouldn't it be nice if tools like testr that run tests for you didn't have to decide upfront exactly how they were going to run? If instead they could get things running straight away and then hand over progressively larger units of work to be run, without forcing a new process (and thus new discovery, directory walking and importing)?

Secondly, testr has an inconsistent interface – if testr is letting a user debug through to child workers in a chain, it needs to use something structured (e.g. subunit) and route stdin to the actual worker, but the final testr needs to unwrap everything – this is needlessly complex.

Lastly, for some languages at least, it's possible to dynamically pick up new code at runtime – so with a simple inotify loop we could avoid new processes (and more importantly complete enumeration) *entirely*, leading to very fast edit-test cycles.

So, in this blog post I’m really running this idea up the flagpole, and trying to sketch out the interface – and hopefully get feedback on it.

Taking subunit.run as an example process to do this to:

  1. There should be an option to change from one-shot to server mode
  2. In server mode, it will listen for commands somewhere (let's say stdin)
  3. On startup it might eagerly load the available tests
  4. One command would be list-tests – which would enumerate all the tests to its output channel (which is stdout today – so let's stay with that for now)
  5. Another would be run-tests, which would take a set of test ids and then filter-and-run just those ids from the available tests, with output going, as it does today, to stdout. Passing somewhat large sets of test ids in may be desirable, because some test runners perform fixture optimisations (e.g. bringing up DB servers or web servers), and test-at-a-time is pretty much the worst case for that sort of environment.
  6. Another would be stdin – a command providing a packet of stdin, used for interacting with debuggers
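To make that concrete, here is a toy Python sketch of what such a server loop could look like. The command names come from the list above, but the line-based framing, plain-text output and helper names are my own assumptions – a real implementation would speak subunit on the output channel:

import sys
import unittest

def iter_tests(suite):
    # Flatten a (possibly nested) TestSuite into individual test cases.
    for item in suite:
        if isinstance(item, unittest.TestSuite):
            for sub in iter_tests(item):
                yield sub
        else:
            yield item

def serve(stdin=sys.stdin, stdout=sys.stdout):
    # Eagerly load the available tests once, on startup.
    tests = {t.id(): t for t in iter_tests(unittest.TestLoader().discover('.'))}
    for line in stdin:  # one whitespace-separated command per line
        cmd, _, arg = line.strip().partition(' ')
        if cmd == 'list-tests':
            for test_id in sorted(tests):
                stdout.write(test_id + '\n')
        elif cmd == 'run-tests':
            result = unittest.TestResult()
            for test_id in arg.split():  # filter-and-run just these ids
                tests[test_id].run(result)
            stdout.write('ran %d, failed %d\n' % (result.testsRun, len(result.failures)))
        elif cmd == 'quit':
            return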

So that seems pretty approachable to me – we don’t even need an async loop in there, as long as we’re willing to patch select etc (for the stdin handling in some environments like Twisted). If we don’t want to monkey patch like that, we’ll need to make stdin a socketpair, and have an event loop running to shepherd bytes from the real stdin to the one we let the rest of Python have.

What about that nirvana above? If we assume inotify support, then list-tests (and run-tests) can just consult a changed-file list and reload those modules before continuing. Reloading them just-in-time would be likely to create havoc – I think reloading only when synchronised with test completion makes a great deal of sense.

Would such a test server make sense in other languages?  What about e.g. testtools.run vs subunit.run – such a server wouldn’t want to use subunit, but perhaps a regular CLI UI would be nice…


on August 29, 2014 04:10 AM

August 28, 2014

The first beta of the Utopic Unicorn (to become 14.10) has now been released!

This beta features images for Kubuntu, Lubuntu, Ubuntu GNOME, UbuntuKylin, Xubuntu and the Ubuntu Cloud images.

Pre-releases of the Utopic Unicorn are *not* encouraged for anyone needing a stable system or anyone who is not comfortable running into occasional, even frequent breakage. They are, however, recommended for Ubuntu flavor developers and those who want to help in testing, reporting and fixing bugs as we work towards getting this release ready.

Beta 1 includes a number of software updates that are ready for wider testing. This is quite an early set of images, so you should expect some bugs.

While these Beta 1 images have been tested and work, except as noted in the release notes, Ubuntu developers are continuing to improve the Utopic Unicorn. In particular, once newer daily images are available, system installation bugs identified in the Beta 1 installer should be verified against the current daily image before being reported in Launchpad. Using an obsolete image to re-report bugs that have already been fixed wastes your time and the time of developers who are busy trying to make 14.10 the best Ubuntu release yet. Always ensure your system is up to date before reporting bugs.

Kubuntu

Kubuntu is the KDE based flavour of Ubuntu. It uses the Plasma desktop and includes a wide selection of tools from the KDE project.

Kubuntu development is now focussing on the next generation of KDE Software, Plasma 5. This is not yet stable enough for everyday use, so our default option is the trusted Plasma 4 desktop. A tech preview of Plasma 5 is available for those who want to try out the future.

The Beta-1 images can be downloaded at:

http://cdimage.ubuntu.com/kubuntu/releases/utopic/beta-1/
http://cdimage.ubuntu.com/kubuntu-plasma5/releases/utopic/beta-1/

More information on Kubuntu Beta-1 can be found here:
https://wiki.ubuntu.com/UtopicUnicorn/Beta1/Kubuntu

Lubuntu

Lubuntu is a flavor of Ubuntu based on LXDE and focused on providing a very lightweight distribution.

Lubuntu development is currently focused on the transition away from GTK+ to the Qt framework. This is not yet stable enough for everyday use, so the focus for this version is on fixing bugs.

The Beta 1 images can be downloaded at:
http://cdimage.ubuntu.com/lubuntu/releases/utopic/beta-1/

Ubuntu GNOME

Ubuntu GNOME is a flavor of Ubuntu featuring the GNOME desktop environment.

The Beta-1 images can be downloaded at:
http://cdimage.ubuntu.com/ubuntu-gnome/releases/utopic/beta-1/

More information on Ubuntu GNOME Beta-1 can be found here:
https://wiki.ubuntu.com/UtopicUnicorn/Beta1/UbuntuGNOME

UbuntuKylin

UbuntuKylin is a flavor of Ubuntu that is more suitable for Chinese users.

The Beta-1 images can be downloaded at:
http://cdimage.ubuntu.com/ubuntukylin/releases/utopic/beta-1/

More information on UbuntuKylin Beta-1 can be found here:
https://wiki.ubuntu.com/Ubuntu%20Kylin/1410-beta-1-ReleaseNote

Xubuntu

Xubuntu is a flavor of Ubuntu shipping with the Xfce desktop environment.

The Beta-1 images can be downloaded at:
http://cdimage.ubuntu.com/xubuntu/releases/utopic/beta-1/

More information on Xubuntu Beta-1 can be found here:
https://wiki.ubuntu.com/UtopicUnicorn/Beta1/Xubuntu

Ubuntu Cloud

These images can be run on Amazon EC2, Openstack, SmartOS and many other clouds. Beta-1 images have been published to Windows Azure and Amazon EC2.

http://cloud-images.ubuntu.com/releases/utopic/beta-1/

Regular daily images for Ubuntu Cloud can be found at:
http://cloud-images.ubuntu.com/daily/server/

Daily Images

Regular daily images for Ubuntu can be found at: http://cdimage.ubuntu.com

If you’re interested in following the changes as we further develop Utopic, we suggest that you subscribe to the ubuntu-devel-announce list. This is a low-traffic list (a few posts a week) carrying announcements of approved specifications, policy changes, beta releases and other interesting events.

http://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-announce

A big thank you to the developers and testers for their efforts to pull together this Beta release!

Originally posted to the ubuntu-devel-announce mailing list on Thu Aug 28 21:04:39 UTC 2014 by Stéphane Graber

on August 28, 2014 09:56 PM

The Xubuntu team is pleased to announce the immediate release of Xubuntu 14.10 Beta 1. This is the first beta towards the final release in October. In the run-up to this beta we landed various enhancements and some new features. Now it’s time to start polishing the rough edges and improving stability.

The first beta release also marks the end of the period for landing new features, in the form of the Ubuntu Feature Freeze. This means any new package updates should be bug fixes only; the Xubuntu team is committed to fixing as many bugs as possible before the final release.

The Beta 1 release is available for download via torrents and direct downloads from
http://cdimage.ubuntu.com/xubuntu/releases/utopic/beta-1/

Highlights and known issues

New features and enhancements

  • Inxi, a tool to gather system information, is now included
  • To allow users to use pkexec for selected applications instead of gksu(do), appropriate profiles are now included for Thunar and Mousepad
  • The display dialog has been updated; multiple displays can now be arranged by drag and drop
  • The power manager can now control the keyboard-backlight and features a new panel plugin, which shows the battery’s status, other connected devices with batteries and controls the display’s backlight brightness
  • The themes now support Gtk3.12
  • The alt-tab dialog can now be clicked with the mouse to select a window
  • Xubuntu minimal install available – information on installation and testing will follow shortly.

Bug fixes

  • Settings-related menu items previously available only under the Settings Manager are now shown and searchable in Whiskermenu (1310264)
  • Presentation mode in Xfce4 power manager is now working (1193716)
  • apt-offline is now functional, previously “Something is wrong with the apt system” (1357217)

Known Issues

  • Video corruption when booting a virtual live session (1357702)
  • Failure to configure wifi in live-session (1351590)
  • com32r error on boot with usb (1325801)

New application versions in the Xubuntu packageset

  • Catfish 1.2.1
  • Xfwm4 4.11.2
  • Updates to xfdesktop4 (4.11.7), xfce4-panel (4.11.1), login screen (lightdm-gtk-greeter 1.9.0)
  • xfce4-appfinder (4.11.0)
  • xfce4-notifyd (0.2.4-3)
  • xfce4-settings (4.11.3)
  • xfce4-power-manager (1.3.2)
  • xfce4-whiskermenu-plugin (1.4.0)
  • Light-locker-settings (1.4.0)
  • Menulibre (2.0.5)
  • Mugshot (0.2.4)

Other changes

XChat is removed from the default installation; we recommend trying the Pidgin IRC feature if you need to connect sporadically. Otherwise, if you prefer XChat, it’s still available for installation in the repositories.

on August 28, 2014 09:16 PM
Kubuntu 14.10 beta 1 is out now for testing by early adopters. This release comes with the stable Plasma 4 we know and love. It also adds another flavour - Kubuntu Plasma 5 Tech Preview.
on August 28, 2014 09:10 PM

On release day we can get up to 8,000 requests a second to ubuntu.com from people trying to download the new release. In fact, last October (13.10) was the first release day in a long time that the site didn’t crash under the load at some point during the day (huge credit to the infrastructure team).

Ubuntu.com has been running on Drupal, but we’ve been gradually migrating it to a more bespoke Django based system. In March we started work on migrating the download section in time for the release of Trusty Tahr. This was a prime opportunity to look for ways to reduce some of the load on the servers.

Choosing geolocated download mirrors is hard work for an application

When someone downloads Ubuntu from ubuntu.com (on a thank-you page), they are actually sent to a nearby mirror, chosen from the 300 or so mirror sites.

To pick a mirror for the user, the application has to:

  1. Decide from the client’s IP address what country they’re in
  2. Get the list of mirrors and find the ones that are in their country
  3. Randomly pick them a mirror, while sending more people to mirrors with higher bandwidth

This process is by far the most intensive operation on the whole site, not because these tasks are particularly complicated in themselves, but because this needs to be done for each and every user – potentially 8,000 a second – while every other page on the site can be aggressively cached to prevent most requests from hitting the application itself.

For the site to be able to handle this load, we’d need to load-balance requests across perhaps 40 VMs.

Can everything be done client-side?

Our first thought was to embed the entire mirror list in the thank-you page and use JavaScript in the users’ browsers to select an appropriate mirror. This would drastically reduce the load on the application, because the download page would then be effectively static and cacheable like every other page.

The only way to reliably get the user’s location client-side is with the geolocation API, which is supported by only 85% of users’ browsers. Another slight issue is that the user has to give permission before they can be assigned a mirror, which would slightly hinder their experience.

This solution would inconvenience users just a bit too much, so we found a trade-off:

A mixed solution – Apache geolocation

mod_geoip2 for Apache can apply server rules based on a user’s location and is much faster than doing geolocation at the application level. This means that we can use Apache to send users to a country-specific version of the download page (e.g. the German desktop thank-you page) by adding &country=DE to the end of the URL.
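For illustration, the rules can look something like this (a hedged sketch: the directives come from mod_geoip and mod_rewrite, but the exact path and rewrite logic used on ubuntu.com are assumptions):

# Expose the visitor's country code as an environment variable
GeoIPEnable On
GeoIPDBFile /usr/share/GeoIP/GeoIP.dat
GeoIPOutput Env

# Append the country to the download page URL, once
RewriteEngine On
RewriteCond %{QUERY_STRING} !country=
RewriteRule ^(/download/desktop/thank-you)$ $1?country=%{ENV:GEOIP_COUNTRY_CODE} [QSA,L]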

These country-specific pages contain the list of mirrors for that country, and each one can now be cached, vastly reducing the load on the server. Client-side JavaScript randomly selects a mirror for the user, weighted by the bandwidth of each mirror, and kicks off their download, without the need for client-side geolocation support.
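The weighted random pick itself is straightforward; here is an illustrative sketch (the function and field names are assumptions, not the actual ubuntu.com code):

// Pick a mirror at random, weighted by each mirror's bandwidth.
function pickMirror(mirrors) {
  var total = mirrors.reduce(function (sum, m) { return sum + m.bandwidth; }, 0);
  var r = Math.random() * total;
  for (var i = 0; i < mirrors.length; i++) {
    r -= mirrors[i].bandwidth;
    if (r <= 0) { return mirrors[i]; }
  }
  return mirrors[mirrors.length - 1]; // guard against floating-point drift
}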

This solution was successfully implemented shortly before the release of Trusty Tahr.

(This article was also posted on robinwinslow.co.uk)

on August 28, 2014 06:34 PM

Checkbox Project Insights

Zygmunt Krynicki

Another day behind us. Another day hacking on the Checkbox Project.

Today we got a few issues on the 3.2 SRU kernel for precise. I've recorded a short explanation of how the SRU process looks from our (Certification) perspective. We're investigating those issues to see whether they are kernel problems or test bugs.

I started the day by working on a few code reviews and SRU reviews. The bulk of the time was spent on the new validation subsystem for Checkbox. As before, you can see most of that via the Live Coding videos, specifically episodes #17, #18, #19 and #20, on my YouTube channel.

You can always find us, checkbox hackers in #checkbox on freenode. If you care about testing hardware with free software, join us!
on August 28, 2014 05:40 PM
We stuffed ourselves at the breakfast buffet with the idea of eating light later, because we arrived in Bogotá late and wanted to squeeze the most out of the little time we had in the capital of Colombia.

With the whole morning lost to the flight, we arrived at the hotel. A welcome juice at reception, a room almost as big as a house, jacuzzi, pool, bar... Look but don't touch, haha; let's not get used to this standard :P (And tomorrow it's a hostel in Lisbon, LOL!)

We did little more in Bogotá than go out hunting for a few gifts, but the city is so big that getting from one point to another takes many, many hours.
A shame we couldn't have had more days to get to know Bogotá
We managed to get back just at nightfall. And ta-da: we were treated to juices and dinner :O In the end this fancy little hotel is going to turn out cheap :P
Juices, juices and more juices; they are amazing!
Since we didn't have the US ESTA, Avianca wouldn't let us check in online. As we had to be at the airport very early, we filled it in over the Internet but didn't pay for it ($14/person).

The rest of the night we spent enjoying the luxurious hotel.

Continue reading more about this trip.
on August 28, 2014 04:13 PM

inxi

Xubuntu

inxi is a full-featured system information script that will detect information about hardware specifications, including but not limited to vendor details, CPU info, and graphics and sound cards. Most importantly, it will output everything in an easy-to-read format, and it can also be used in IRC clients like irssi, WeeChat or XChat.

How to use inxi?

The general usage of inxi is inxi -c<color> -<option>. inxi output is colored; to change the color for better visibility, use the c option followed by a number between 0 and 32.

  • System information: inxi -b and inxi -F – the b option outputs basic system information, while the F option outputs full system information.
  • Hard drive details: inxi -D – outputs information on your hard drives, like make, model and size.
  • Hard drive partitions: inxi -p – outputs information about all mounted partitions, mount points and space usage.
  • Networking: inxi -n and inxi -ni – outputs details of the network interfaces and configuration. When the i option is used with n, inxi will output IP address details (for both WAN and LAN).
  • Hardware: inxi -AG and inxi -h – the A and G options output information about the audio and graphics hardware respectively; you usually want to use them together. The h option outputs the full list of options you can use to get even more information about your hardware.

Using inxi in IRC clients

  • XChat, irssi and most other clients: /exec -o inxi -<option> | pastebinit – the -o option shows the output to the channel; without it, only you will see the output.
  • WeeChat: /shell -o inxi -<option> | pastebinit – note: for WeeChat to run external scripts like inxi, shell.py has to be installed.

Using inxi -c0 within an IRC client environment is highly advisable, because colored output doesn’t work in pastebins.

on August 28, 2014 04:01 PM

August 27, 2014

Juju <3's Big Data

Charles Butler

Syndicators, there is a video link above that may not make syndication. Click the source link to view the 10 minute demo video.

Over the past 4 months Amir Sanjar and I have been working diligently on Juju's Big Data story, working with software vendors to charm up big-name products like the demoed Hortonworks Hadoop distribution.

For those of you who know nothing about Hadoop: Hadoop is a large-scale big data framework / suite of applications. It provides facilities to build an entire ecosystem to crunch numbers from seemingly unrelated data sources, and to compute through petabytes of data via Map/Reduce applications.

A traditional hadoop deployment consists of a few components:

  • Map / Reduce Engine (or cluster of engines)
  • Data Warehousing Facility
  • Distributed Filesystem to cache results across the cluster
  • Data sources (MySQL, MongoDB, HBase, CouchDB, PostgreSQL, just to name a few)

Setting up these different services and interconnecting them can be a full-day process for a seasoned professional in the Big Data ecosystem. Juju offers you a quick way to distill all of that setup and interconnectivity knowledge, so you can be a master at USING Hadoop, not at deploying it.
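To give a flavour of how distilled that gets, a deployment boils down to a handful of commands. A hedged sketch (the charm names below are placeholders for this sketch – browse the Juju Charm Store for the actual Hadoop charms and bundles):

$ juju deploy hadoop-master     # placeholder charm name
$ juju deploy hadoop-slave      # placeholder charm name
$ juju add-relation hadoop-master hadoop-slave
$ juju status                   # watch the services come up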

Some people say Juju negates the need to read the book, and while this may be true, I still advise you to read the book at least once – so you know how it's put together, why certain configurations were chosen, and how to troubleshoot the bundle should anything go wrong. Then you're free to wield the community-provided Hadoop bundle(s) like a pro.

Enjoy the Demo, and look for more Big Data tools and products on the Juju Charm Store

on August 27, 2014 09:06 PM

Whoa! Dropbox

Matthew Helmke

Dropbox just announced they are increasing the storage space for paid accounts ($9.99/mo) from 100GB to a full terabyte for the same price. My account has been automatically updated. I think that earns them a mention on my blog. Here is a referral link that you are free to ignore.

on August 27, 2014 08:42 PM

August 26, 2014

Meeting Minutes

IRC Log of the meeting.

Meeting minutes.

Agenda

20140826 Meeting Agenda


Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:
- http://people.canonical.com/~kernel/reports/kt-meeting.txt


Status: Utopic Development Kernel

The Utopic kernel remains based on the v3.16.1 upstream stable kernel
and is available for testing in the archive, i.e. linux-3.16.0-11.16.
Please test and let us know your results.
—–
Important upcoming dates:
Thurs Aug 28 – Utopic Beta 1 (~2 days)
Mon Sep 22 – Utopic Final Beta Freeze (~4 weeks away)
Thurs Sep 25 – Utopic Final Beta (~4 weeks away)
Thurs Oct 9 – Utopic Kernel Freeze (~6 weeks away)
Thurs Oct 16 – Utopic Final Freeze (~7 weeks away)
Thurs Oct 23 – Utopic 14.10 Release (~8 weeks away)


Status: CVE’s

The current CVE status can be reviewed at the following link:

http://people.canonical.com/~kernel/cve/pkg/ALL-linux.html


Status: Stable, Security, and Bugfix Kernel Updates – Trusty/Precise/Lucid

Status for the main kernels, until today (Aug. 26):

  • Lucid – verification & testing
  • Precise – verification & testing
  • Trusty – verification & testing

    Current opened tracking bugs details:

  • http://kernel.ubuntu.com/sru/kernel-sru-workflow.html

For SRUs, the SRU report is a good source of information:

  • http://kernel.ubuntu.com/sru/sru-report.html

    Schedule:

    cycle: 08-Aug through 29-Aug
    ====================================================================
    08-Aug Last day for kernel commits for this cycle
    10-Aug – 16-Aug Kernel prep week.
    17-Aug – 23-Aug Bug verification & Regression testing.
    24-Aug – 29-Aug Regression testing & Release to -updates.

    cycle: 29-Aug through 20-Sep
    ====================================================================
    29-Aug Last day for kernel commits for this cycle
    31-Aug – 06-Sep Kernel prep week.
    07-Sep – 13-Sep Bug verification & Regression testing.
    14-Sep – 20-Sep Regression testing & Release to -updates.


Open Discussion or Questions? Raise your hand to be recognized

No open discussion.

on August 26, 2014 05:15 PM

This year I mentored two students doing work in support of Debian and free software (as well as those I mentored for Ganglia).

Both of them are presenting details about their work at DebConf 14 today.

While Juliana's work has been widely publicised already, mainly due to the fact it is accessible to every individual DD, Andrew's work is also quite significant and creates many possibilities to advance awareness of free software.

The Java project that is not just about Java

Andrew's project is about recursively building Java dependencies from third party repositories such as the Maven Central Repository. It matches up well with the wonderful new maven-debian-helper tool in Debian and will help us to fill out /usr/share/maven-repo on every Debian system.

Firstly, this is not just about Java. On a practical level, some aspects of the project are useful for many other purposes. One of those is the aim of scanning a repository for non-free artifacts, making a Git mirror or clone containing a dfsg branch for generating repackaged upstream source and then testing to see if it still builds.

Then there is the principle of software freedom. The Maven Central repository now requires that people publish a sources JAR and license metadata with each binary artifact they upload. They do not, however, demand that the sources JAR be complete or that the binary can be built by somebody else using the published sources. The license data must be specified, but it does not appear to be verified in the same way as packages inspected by Debian's legendary FTP masters.

Thanks to the transitive dependency magic of Maven, it is quite possible that many Java applications that are officially promoted as free software can't trace the source code of every dependency or build plugin.

Many organizations are starting to become more alarmed about the risk that they are dependent upon some rogue dependency. Maybe they will be hit with a lawsuit from a vendor stating that his plugin was only free for the first 3 months. Maybe some binary dependency JAR contains a nasty trojan for harvesting data about their corporate network.

People familiar with the principles of software freedom are in the perfect position to address these concerns, and Andrew's work helps us build a cleaner alternative. It obviously can't rebuild every JAR, for the very reason that some of them are not really free – however, it does give the opportunity to build a heat map of trouble spots and also create a fast track to packaging for those hierarchies of JARs that are truly free.

Making WebRTC accessible to more people

Juliana set out to update rtc.debian.org and this involved working on JSCommunicator, the HTML5/JavaScript softphone based on WebRTC.

People attending the session today or participating remotely are advised to set up their RTC / VoIP password at db.debian.org well in advance, so the server will allow them to log in and try it during the session. It can take 30 minutes or so for the passwords to be replicated to the SIP proxy and TURN server.

Please also check my previous comments about what works and what doesn't and in particular, please be aware that Iceweasel / Firefox 24 on wheezy is not suitable unless you are on the same LAN as the person you are calling.

on August 26, 2014 04:33 PM

Live Coding Experiment

Zygmunt Krynicki

Hey.

Last week I started recording videos of myself coding, live, with screen sharing and background context on everything I do. I did this to increase the transparency of FOSS development, as well as to raise awareness of the Checkbox project that I participate in.


I think that, while the actual videos are a bit too long for casual watching, the experiment itself is interesting and worth pursuing.

I'm recording about 3-4 videos a day. I'll try to focus on making the content more interesting for both casual viewers who bail out after a minute or two and my hardcore colleagues who sometimes watch them to get up to speed on new feature development.

In any case, it is out there, in the open. If you want to talk to us, join #checkbox on freenode. Ping me on Google+. Browse the code. Improve translations or get involved in any other way you want.

Lastly, for a bit of self promotion, have a look at the latest video
on August 26, 2014 03:55 PM

Welcome to the Ubuntu Weekly Newsletter. This is issue #380 for the week August 18 – 24, 2014, and the full version is available here.

In this issue we cover:

This issue of the Ubuntu Weekly Newsletter is brought to you by:

  • Elizabeth K. Joseph
  • Diego Turcios
  • Jose Antonio Rey
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution 3.0 License.

on August 26, 2014 12:32 AM

August 25, 2014

LXPanel 0.7.0 released

Lubuntu Blog

A huge update to the GTK+ panel has been released. See the list below for some changes; the full changelog can be found in git. Lots of new functionality, like:

  • new plugin ‘launchtaskbar’ combining ‘launchbar’ and ‘taskbar’
  • replaced ‘pager’ plugin with former ‘wnckpager’ one
  • allowed drag applications from system menu plugin
  • using human-readable sensor names if available (like 'Core 0', etc.)
  • renamed button to configure plugin from ‘Edit’ to ‘Properties’
  • etc.

Soon in Lubuntu repositories. More info here.

Via LXDE Blog
on August 25, 2014 11:39 PM
As we continue to iterate on new Ubuntu Touch images, it's important for everyone to be able to enjoy the Ubuntu phone experience in their native language. This is where you can help!

We need your input and help to make sure the phone images are well localized for your native language. If you've never contributed a translation before, this is a perfect opportunity for you to learn. There's a wiki guide to help you, along with translation teams who speak your language and can help.

Don't worry, you don't need an Ubuntu phone to do this work. The wiki guide details how to translate using a phone, an emulator, or even just your desktop PC running Ubuntu. If nothing else, you can help review other folks' translations by simply using Launchpad in your web browser.

If this sounds interesting to you but the links don't make sense, or you would like some more personal help, feel free to contact me. English is preferred, but in the spirit of translation feel free to contact me in French, Spanish or perhaps even German :-).

Happy Translating everyone!

P.S. If you are curious about the status of your language translation, or looking for known missing strings, have a look at the stats page kept by David Planella.
on August 25, 2014 09:16 PM

Linux Distro for Kids?

Matthew Helmke

Short, informal survey. Feel free to comment here or via private messages/email. I may not respond to all comments, but will read with appreciation any you make.

What is your favorite Linux distribution that is intended for use by kids, say anywhere between the ages of 8 and 18? If you have more than one, feel free to name each.

Why do you like it?

If your preference for kids is a standard distro and not one intended for that audience, which is it and why?

on August 25, 2014 01:09 PM

August 24, 2014

Stefano Zacchiroli opened DebConf’14 with an insightful talk titled Debian in the Dark Ages of Free Software (slides available, video available soon).

He makes the point (quoting slide 16) that the Free Software community is winning a war that is becoming increasingly pointless: yes, users have a 100% Free Software thin client at their fingertips [or are really only a few steps from there]. But all their relevant computations happen elsewhere, on remote systems they do not control, in the Cloud.

That surrender of control over computing is a huge and important problem, and probably the largest challenge for everybody caring about freedom, free speech, or privacy today. Stefano rightfully points out that we must do something about it. The big question is: how can we, as a community, address it?

Towards a Free Service Definition?

I believe that we all feel a bit lost with this issue because we are trying to attack it with our current tools & weapons. However, they are largely irrelevant here: the Free Software Definition is about software, and software is to be understood strictly there, as software programs. Applying it to services, or to computing in general, doesn’t lead anywhere. In order to increase general awareness of this issue, we should define more precisely what levels of control can be provided, so as to understand what services are not providing to users, and to make an informed decision about waiving a particular level of control when choosing to use a particular service.

Benjamin Mako Hill pointed out yesterday during the post-talk chat that services are not black or white: there aren’t impure and pure services. Instead, there’s a gradation of possible levels of control for the computing we do. The Free Software Definition lists four freedoms – how many freedoms, or types of control, should there be in a Free Service Definition, or a Controlled-Computing Definition? Again, this is not only about software: the platform on which a particular piece of software is executed has a huge impact on the available level of control: running your own instance of WordPress, or using an instance on wordpress.com, provides very different control (even if, as Asheesh Laroia pointed out yesterday, WordPress does a pretty good job at providing export and import features to limit data lock-in).

The creation of such a definition is an iterative process. I actually just realized today that (according to Wikipedia) the very first occurrence of an attempt at a Free Software Definition was published in 1986 (GNU’s bulletin Vol 1 No.1, page 8) – I thought it happened a couple of years earlier. Are there existing attempts at defining such freedoms or levels of control, and at benchmarking such criteria against existing services? Such criteria would not only include control over software modifications and (re)distribution, but would also likely include mentions of interoperability and open standards, both to enable the user to move to a compatible service, and to avoid forcing the user to use a particular implementation of a service. A better understanding of network effects is also needed: how much and what type of service lock-in is acceptable on social networks in exchange for functionality?

I think that we should draw inspiration from what was achieved during the last 30 years on Free Software. The tools that were produced are probably irrelevant to address this issue, but there’s a lot to learn from the way they were designed. I really look forward to the day when we will have:

  • a Free Software Definition equivalent for services
  • Debian Free Software Guidelines-like tests/checklist to evaluate services
  • an equivalent of The Cathedral and the Bazaar, explaining how one can build successful business models on top of open services

Exciting times!

on August 24, 2014 03:39 PM

August 23, 2014

Mythbuntu 14.04.1 has been released. This is a point release on our 14.04 LTS release. If you are already on 14.04, you can get these same updates via the normal update process.

The 14.04.x series of releases is the Mythbuntu team's second LTS and is supported until shortly after the 16.04 release.

You can get the Mythbuntu ISO from our downloads page.

Highlights

  • MythTV 0.27
  • This is our second LTS release (the first being 12.04). See this page for more info.
  • Bug fixes from 14.04 release

Underlying system

  • Underlying Ubuntu updates are found here

MythTV

  • Recent snapshot of the MythTV 0.27 release is included (see 0.27 Release Notes)
  • Mythbuntu theme fixes

We appreciate all comments and would love to hear what you think. Please make comments to our mailing list, on the forums (with a tag indicating that this is from 14.04 or trusty), or in #ubuntu-mythtv on Freenode. As always, if you encounter any issues with anything in this release, please file a bug using the Ubuntu bug tool (ubuntu-bug PACKAGENAME), which automatically collects logs and other important system information, or if that is not possible, directly open a ticket on Launchpad (http://bugs.launchpad.net/mythbuntu/14.04/).

Known issues

  • If you are upgrading and want to use the HTTP Live Streaming you need to create a Streaming storage group
  • During an upgrade to 0.27, you may receive the following error message

    "ERROR 1046 (3D000) at line 22: No database selected" This means that your /etc/mythtv/config.xml file has incorrect info in it. Please fix this and try running the update again.
on August 23, 2014 04:13 AM
It seems so soon after returning home from Randa and Geneva, but already the day of departure to Vienna and then Brno looms. So excited! For starters, both Scarlett and I got funding from Ubuntu so the e.V. is spared the cost of our travel! I've often felt guilty about how much airfare from Seattle is, for previous meetings. We're having a Kubuntu gathering on Thursday the 11th of September. Ping us if you have an issue you want discussed or worked on.

Also, Scarlett and I will be traveling together, which will be fun. And we're meeting Stefan Derkits in Vienna, to see some of his favorite places. Oh, a whole day in Vienna seems like heaven. We have a hostel booked; I hope it's nice. Now I need to figure out the bus or train between Vienna and Brno.



Then there is the e.V. annual meeting, which I have enjoyed ever since I was admitted to membership. It is great to hear the reports in person, and to meet people I usually only hear from in email or IRC.

Finally, there is Akademy, which is always a blur of excitement, learning, socializing, and interacting with the amazing speakers. My favorite part is always hearing from the GSoC students about their projects, and their experience in the KDE community. After Akademy proper, there are days of BOFs, and our Kubuntu meeting. This part is often the most energizing, as each meeting is like a small-scale sprint.

Of course we do take some time to walk through the city, and eat out, and party a bit. Face-to-face meetings are the BEST! Sometimes we return home exhausted and jetlagged, but it is always worth it. KDE is a community, and our annual gathering is one important way for us to nurture that community. This energizes the entire next year of creating amazing software.

An extra-special part of Akademy this year is that we are planning to release our new KDE Frameworks 5 Cookbook at Akademy. Get some while they're hot!
on August 23, 2014 02:49 AM

August 22, 2014

For years, the Ubuntu Cloud Images have been built on a timer (i.e. a cronjob or Jenkins). You can reasonably expect stable and LTS releases to be built twice a week, while our development build is built once a day. Each of these builds is given a serial in the form of YYYYMMDD.

While time-based building has proven to be reliable, different build serials may be functionally the same, just put together at different points in time. Many of the builds that we do for stable and LTS releases are pointless.

When the whole Heartbleed fiasco hit, it put the Cloud Image team into overdrive, since it required manually triggering builds of the LTS releases. When we manually trigger builds, it takes roughly 12-16 hours to build, QA, test and release new Cloud Images. Sure, most of this is automated, but the process had to be started by a human. This got me thinking: there has to be a better way.

What if we build the Cloud Images when the package set changes?

With that, I changed the Ubuntu 14.10 (Utopic Unicorn) build process from time-based to archive-trigger-based. Now, instead of building every day at 00:30 UTC, a build starts when the archive has been updated and the packages in the prior cloud image build are older than the archive versions. In the last three days, there were eight builds for Utopic. For a development version of Ubuntu, this just means that developers don't have to wait 24 hours for the latest package changes to land in a Cloud Image.
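In rough pseudo-shell, the trigger check amounts to something like this (the helper commands are invented purely for illustration – the real build infrastructure is more involved):

#!/bin/sh
# Rebuild only when the archive holds newer packages than the last image.
latest-image-package-list utopic > prev.list    # invented helper
archive-package-list utopic > curr.list         # invented helper
if ! cmp -s prev.list curr.list; then
    trigger-cloud-image-build utopic            # new build serial: YYYYMMDD
fi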

Over the next few weeks, I will be moving the 10.04 LTS, 12.04 LTS and 14.04 LTS build processes from time-based to archive-trigger-based. While this might result in less frequent builds, the main advantage is that the builds will contain the latest package sets. And if you are trying to respond to the latest CVE, or waiting on a bug fix to land, it likely means that you'll have a fresh image you can use the following day.
on August 22, 2014 06:11 PM

KDE Project:

After sold-out dates in Glasgow and Belgium, the tour of my dramatic talk "Do you need to be brain damaged to care about desktop Linux?" is making a stop in Brno for the KDE conference Akademy. In it I'll talk about the struggles of recovering from a head injury, mixed with creating a beautiful and friendly Linux distro: Kubuntu. It'll have drama, it'll have emotion, it'll have a discussion of the relative merits of community versus in-house development. Make sure you book your tickets now!

Also at Akademy is the Kubuntu day on Thursday; sign up now if you want to come and talk about your ideas or grumble about your problems with Kubuntu. Free hugs will be in store.

on August 22, 2014 04:34 PM

On my way to DebConf 14

Paul Tagliamonte

Slowly, but I’ll be in by Tonight, PST (early morning EST!)

Hope to see everyone soon!

on August 22, 2014 03:33 PM

Docker 1.0.1 is available for testing, in Ubuntu 14.04 LTS!

Docker 1.0.1 has landed in the trusty-proposed archive, which we hope to SRU to trusty-updates very soon.  We would love to have your testing feedback, to ensure that both upgrades from Docker 0.9.1 and new installs of Docker 1.0.1 behave well, and are of the highest quality you have come to expect from Ubuntu's LTS (Long Term Support) releases!  Please file any bugs or issues here.

Moreover, this new version of the Docker package now installs the Docker binary to /usr/bin/docker, rather than /usr/bin/docker.io as in previous versions. This should help Ubuntu's Docker package more closely match the wealth of documentation and examples available from our friends upstream.

A big thanks to Paul Tagliamonte, James Page, Nick Stinemates, Tianon Gravi, and Ryan Harper for their help upstream in Debian and in Ubuntu to get this package updated in Trusty!  Also, it's probably worth mentioning that we're targeting Docker 1.1.2 (or perhaps 1.2.0) for Ubuntu 14.10 (Utopic), which will release on October 23, 2014.

Here are a few commands that might help your testing...

Check What Candidate Versions are Available

$ sudo apt-get update
$ apt-cache show docker.io | grep ^Version:

If that shows 0.9.1~dfsg1-2 (as it should), then you need to enable the trusty-proposed pocket.

$ echo "deb http://archive.ubuntu.com/ubuntu/ trusty-proposed universe" | sudo tee -a /etc/apt/sources.list
$ sudo apt-get update
$ apt-cache show docker.io | grep ^Version:

And now you should see the new version, 1.0.1~dfsg1-0ubuntu1~ubuntu0.14.04.1, available (probably in addition to 0.9.1~dfsg1-2).

Upgrades

Check if you already have Docker installed, using:

$ dpkg -l docker.io

If so, you can simply upgrade.

$ sudo apt-get upgrade

And now, you can check your Docker version:

$ sudo dpkg -l docker.io | grep -m1 ^ii | awk '{print $3}'
1.0.1~dfsg1-0ubuntu1~ubuntu0.14.04.1

New Installations

You can simply install the new package with:

$ sudo apt-get install docker.io

And ensure that you're on the latest version with:

$ dpkg -l docker.io | grep -m1 ^ii | awk '{print $3}'
1.0.1~dfsg1-0ubuntu1~ubuntu0.14.04.1

Running Docker

If you're already a Docker user, you probably don't need these instructions.  But in case you're reading this, and trying Docker for the first time, here's the briefest of quick start guides :-)

$ sudo docker pull ubuntu
$ sudo docker run -i -t ubuntu /bin/bash

And now you're running a bash shell inside of an Ubuntu Docker container.  And only bash!

root@1728ffd1d47b:/# ps -ef
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 13:42 ? 00:00:00 /bin/bash
root 8 1 0 13:43 ? 00:00:00 ps -ef

If you want to do something more interesting in Docker, well, that's whole other post ;-)

:-Dustin
on August 22, 2014 02:21 PM



Thanks to nicknorton – just follow his instructions, and watch his video for more guidance.


  1. Install imwheel using whatever package manager you use (Debian-based distros: sudo apt-get install imwheel).
  2. Download the script http://www.nicknorton.net/mousewheel.sh
  3. Save it into your home folder and make it executable. Run it and enjoy.
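For reference, steps 2-3 from a terminal might look like this (assuming you keep the script in your home folder):

$ wget http://www.nicknorton.net/mousewheel.sh -P ~/
$ chmod +x ~/mousewheel.sh
$ ~/mousewheel.sh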
on August 22, 2014 11:30 AM

Learning to git

Valorie Zimmerman

A few years ago, I learned from Myriam's fine blog how to build Amarok from source, which is kept in git. It sounds mysterious, but once all the dependencies are installed, PATH is defined and the environment is properly set up, it is extremely easy to refresh the source (git pull) and rebuild. In fact, I usually use the up-arrow in the konsole, which finds the previous commands, so I rarely have to even type anything! Just hit return when the proper command is in place.

Now we're using git for the KDE Frameworks book, so I learned how to not only pull the new or changed source files, but also to commit my own few or edited files locally, then push those commits to git, so others can see and use them.

To be able to write to the repository, an SSH key must be uploaded, in this case done in the KDE Identity account. If the Identity account is not a developer account, that must first be granted.

Just as in building Amarok, first the folders need to be created, and the repository cloned. Once cloned, I can see either in konsole or Dolphin the various files. It's interesting to me to poke around in most of them, but the ones I work in are markdown files, which is a type of text file. I can open them in kate (or your editor of choice) either from Dolphin or directly from the cli (for instance kate ki18n/ki18n.in.md).

Once edited, save the file; then it's time to commit. If there are a number of files to work on, they can all be committed at once: git commit -a is the command you need. Once you hit return, you will immediately be put into nano, a minimal text editor. Up at the top, you will see it is waiting for your commit message, which is a short description of the file or the changes you have made. Most of my commits have said something like "Edited for spelling and grammar." Once your message is complete, hit Ctrl-X, then y and return to save your changes.

It's a good idea to do another git pull just to be sure no one else has pushed a conflicting file while the commit message was being crafted, then git push. At this point the passphrase for the ssh key is asked for; once that is typed and you hit return, you'll get something like the following:

Counting objects: 7, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (4/4), done.
Writing objects: 100% (4/4), 462 bytes | 0 bytes/s, done.
Total 4 (delta 2), reused 1 (delta 0)
remote: This commit is available for viewing at:
remote: http://commits.kde.org/kf5book/90c863e4ee2f82e4d8945ca74ae144b70b9e9b7b
To git@git.kde.org:kf5book
   1d078fe..90c863e  master -> master
valorie@valorie-HP-Pavilion-dv7-Notebook-PC:~/kde/book/kf5book$

In this case, the new file is now part of the KDE Frameworks 5 book repository. Git is a really nifty way to keep files of any sort organized and backed up. I'm really happy that we decided to develop the book using this powerful tool.
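Condensed, the whole loop from this post looks like this (using the ki18n file from the example above):

$ git pull                    # refresh the repository first
$ kate ki18n/ki18n.in.md      # edit and save
$ git commit -a               # nano opens for the commit message
$ git pull                    # make sure nothing conflicting landed meanwhile
$ git push                    # type the SSH key passphrase when asked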
on August 22, 2014 09:55 AM

August 21, 2014

It’s been about a year since I started building my own Steam console for my living room. A ton has changed since then. SteamOS has been released, In Home Streaming is out of beta and generally speaking the living room experience has gotten a ton better.

This blog post is a summary of what’s changed in the past year, in the hope that it will help someone who might be interested in building their own “next-gen console” for about the same price, and take advantage of nicer hardware and all the things that PC gaming has to offer.

Step 1: Choosing the hardware

  • I consider the NVIDIA GTX 750Ti to be the best thing to happen in hardware for this sort of project. It’s based on their newest Maxwell technology so it runs cool, it does not need a special power supply plug, and it’s pretty small. It’s also between $120-$150 – which means nearly any computer is now capable of becoming a game console. And a competent one at that.

  • I have settled on the Cooler Master 110 case, which is one of the least obnoxious PC cases you can find that won’t look too bad in the living room. Unfortunately Valve’s slick-looking case did not kick the case makers into making awesome-looking living-room-style cases. The closest you can find is the Silverstone RVZ01, which has the right insides, but they ruined the outside with crazy plastic ribs. The Digital Storm Bolt II looks great, but you can’t buy the case separately. Both cases have CD drives for some reason, boo!

  • Nvidia has a great guide on building a PC within the console-price range if you want to look around. I also recommend checking out r/buildapc, which has tons of Mini-ITX/750Ti builds.

  • Another alternative is the excellent Intel NUC and Gigabyte Brix. These make for great portable machines, but for the upcoming AAA titles for Linux like Metro Redux, Star Citizen, and so on, I decided to go with a dedicated graphics card. Gigabyte makes a very interesting model that is the size of a NUC, but with a GTX 760(!). This looks to be ideal, but unfortunately when Linus reviewed it he found heat/throttling issues. When they make a Maxwell-based one of these, it will likely be awesome.

  • Don’t forget the controller. The Xbox wireless ones will work out of the box. I recommend avoiding the off-brand dongles you see on Amazon, they can be hit or miss.

Step 2: Choosing the software

I’ve been using SteamOS since it came out. The genius of SteamOS is that fundamentally it does only 2 things: it boots, and then it runs Steam Big Picture (BPM) mode. This means that for a dedicated console, the OS is really not important. I have 2 drives in the box, one with SteamOS, and one with Ubuntu running BPM. After running both I prefer Ubuntu/Steam to SteamOS:

  • Faster boot (Upstart v. SysV)
  • PPAs enable fresh access to new Nvidia drivers and Plex Home Theater
  • Newer kernels and access to HWE kernels over the next 5 years

I tend to alternate between the two, but since I am more familiar with Ubuntu it is easier for me to use, so the rest of this post will cover how to build a dedicated Steam Box using Ubuntu.

This isn’t to say SteamOS is bad; in fact, setting it up is actually easier than doing the next few steps. Remember that the entire point is to not care about the OS underneath, and to get you into Steam. So build whatever is most comfortable for you!

Step 3: Installation

These are the steps I am currently doing. They’re not for beginners; you should be comfortable administering an Ubuntu system.

  • Install Ubuntu 14.04.
  • (Optional) - Install openssh-server. I don’t know about you but lugging a keyboard/mouse back and forth to my living room is not my idea of a good time. I prefer to sit on the couch, and ssh into the box from my laptop.
  • Add the xorg-edgers PPA. You don’t need this per se, but let’s go all in!
  • Install the latest Nvidia drivers: as of this writing, nvidia-graphics-drivers-343 (see the command sketch after this list).
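
A rough sketch of the optional ssh step and the driver install above. Treat the driver package name as my assumption based on the 343 series; check what xorg-edgers currently ships before installing.

# Optional: ssh access, so the box can be administered from the couch
sudo apt-get install openssh-server

# Bleeding-edge graphics stack from the xorg-edgers PPA
sudo add-apt-repository ppa:xorg-edgers/ppa
sudo apt-get update

# Package name assumed; verify against what the PPA currently provides
sudo apt-get install nvidia-343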

After you’ve installed the drivers and all the security updates you should reboot to get to your nice new clean desktop system. Now it’s time to make it a console:

  • Log in, and install Steam. Log into Steam and make sure it works.
  • Add the Marc Deslaurier’s SteamOS packages PPA. These are rebuilt for Ubuntu and he does a great job keeping them up to date.
  • sudo apt-get install steamos-compositor steamos-modeswitch-inhibitor steamos-xpad-dkms
  • Log out, and in the login screen, click on the Ubuntu symbol by the username and select the Steam session. This will get you the dedicated Steam session. Make sure that works, then exit out of it so we can make the box boot into that new session by default.
  • Enable autologin in LightDM after the fact so that when your machine boots it goes right into Steam’s Big Picture mode (both this and the PPA step are sketched after this list).
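
Put together, this step looks roughly like the following. The PPA name and the session name are my assumptions: confirm the PPA on Marc’s Launchpad page, and check /usr/share/xsessions/ for the session name the compositor package actually installs.

# PPA name assumed to be ppa:mdeslaur/steamos; verify on Launchpad first
sudo add-apt-repository ppa:mdeslaur/steamos
sudo apt-get update
sudo apt-get install steamos-compositor steamos-modeswitch-inhibitor steamos-xpad-dkms

# Then autologin straight into the Steam session, e.g. in /etc/lightdm/lightdm.conf:
# [SeatDefaults]
# autologin-user=<your user>
# autologin-session=steamos   (session name assumed; check /usr/share/xsessions/)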

We’re using Valve’s xpad module instead of xboxdrv because that’s what they use in SteamOS and I don’t want to deviate too much. But if you prefer xboxdrv then follow this guide.

  • Steam updates itself at the client level so there’s no need to worry about that. The final step for a console-like experience is to enable automatic updates for the OS (a minimal sketch follows). Remember you’re using PPAs, so if you’re not confident that you can fix things, just leave it and do maintenance by hand every once in a while.
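
If you do want unattended updates, a minimal setup looks like this; note that out of the box unattended-upgrades only installs security updates, and will not touch your PPA packages.

sudo apt-get install unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades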

Step 4: Home Theater Bling

If you’re going to have a nice living room box, then let’s use it for other things. I have a dedicated server with media that I share out with Plex Media Server, so in this step I’ll install the client side.

Plex Home Theater:

sudo add-apt-repository ppa:plexapp/plexht
sudo add-apt-repository ppa:pulse-eight/libcec
sudo apt-get update && sudo apt-get install plexhometheater

In Steam you can then click on the + symbol, Add a non-Steam game, and then add Plex. Use the gamepad (not the stick) to navigate the UI once you launch it. If you prefer XBMC/Kodi you can install that instead. I found that the controller also works out of the box there, so it’s a nice experience no matter which one you choose.

Step 5: In Home Streaming

This is a killer Steam feature that allows you to stream your Windows games to your new console. It’s very straightforward: have both machines on and logged into Steam on the same network, they will autodiscover each other, and your Windows games will show up in your Ubuntu/Steam UI, where you can stream them. Though it works surprisingly well over wireless, you’ll definitely want gigabit ethernet if you want to stream games at 1080p and 60 frames per second.

Conclusion

And that’s basically it! There’s tons of stuff I’ve glossed over, but these are the basic steps. There are lots of little things you can do, like removing a bunch of desktop packages you won’t need (so you don’t have to download and update them), and other tips and tricks. I’ll try to keep everyone up to date on how it’s going.

Enjoy your new next-gen gaming console!

TODO:

  • You can change out the plymouth theme to use the one for SteamOS - but I have an SSD in the box and combined with the fast boot it never comes up for me anyway (the generic theme-switching commands are sketched after this list).
  • It’d be cool to make a prototype of Ubuntu Core and then provide Steam in an LXC container on top of that, so we don’t have to use a full-blown desktop ISO.
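
For the plymouth item, the generic way to switch boot themes on Ubuntu is sketched below; I haven’t verified that a SteamOS-style theme is packaged anywhere, so tracking one down is left as an exercise.

# Pick a different default plymouth theme, then rebuild the initramfs
sudo update-alternatives --config default.plymouth
sudo update-initramfs -u
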
on August 21, 2014 11:02 PM

S07E21 – The One with the Rumour

Ubuntu Podcast from the UK LoCo

Laura Cowen, Alan Pope, and Mark Johnson are in Studio L for Season Seven, Episode Twenty-One of the Ubuntu Podcast!

In this week’s show:

We’ll be back next week, when we’ll be interviewing Daniel Holbach, and we’ll go through your feedback.

Please send your comments and suggestions to: podcast@ubuntu-uk.org
Join us on IRC in #uupc on Freenode
Leave a voicemail via phone: +44 (0) 203 298 1600, sip: podcast@sip.ubuntu-uk.org and skype: ubuntuukpodcast
Follow us on Twitter
Find our Facebook Fan Page
Follow us on Google+

on August 21, 2014 08:23 PM

Recognition is like money: it only really has value when it’s being passed between one person and another. Otherwise it’s just potential value, sitting idle. Communication gives life to recognition, turning its potential value into real value.

As I covered in my previous post, Who do you contribute to?, recognition doesn’t have a constant value.  In that article I illustrated how the value of recognition differs depending on who it’s coming from, but that’s not the whole story.  The value of recognition also differs depending on the medium of communication.

Over at the Community Leadership Knowledge Base I started documenting different forms of communication that a community might choose, and how each medium has a balance of three basic properties: Speed, Thoughtfulness and Discoverability. Let’s call this the communication triangle. Each of these also plays a part in the value of recognition.

Speed

Again, much like money, recognition is something that is circulated. Its usefulness is not simply created by the sender and consumed by the receiver, but rather passed from one person to another, and then another. The faster you can communicate recognition around your community, the more utility you can get out of even a small amount of it. Fast communications, like IRC, phone calls or in-person meetups, let you give and receive a higher volume of recognition than slower forms, like email or blog posts. But speed is only one part, and faster isn’t necessarily better.

Thoughtfulness

Where speed emphasizes quantity, thoughtfulness is a measure of the quality of communication, and that directly affects the value of recognition given. Thoughtful communications require consideration upon both receiving and replying. Messages are typically longer, more detailed, and better presented than those that emphasize speed. As a result, they are also usually a good bit slower too, both in the time it takes for a reply to be made, and also the speed at which a full conversation happens. An IRC meeting can be done in an hour, whereas an email exchange can last for weeks, even if both end up with the same word count at the end.

Discoverability

The third point on our communication triangle, discoverability, is a measure of how likely it is that somebody not immediately involved in a conversation can find out about it. Because recognition is a social good, most of its value comes from other people knowing who has given it to whom. Discoverability acts as a multiplier (or divisor, if done poorly) to the original value of recognition.

There are two factors to the discoverability of communication. The first, accessibility, is about how hard it is to find the conversation. Blog posts, or social media posts, are usually very easy to discover, while IRC chats and email exchanges are not. The second factor, longevity, is about how far into the future that conversation can still be discovered. A social media post disappears (or at least becomes far less accessible) after a while, but an IRC log or mailing list archive can stick around for years. Unlike the three properties of communication, however, these factors of discoverability do not require a trade-off: you can have something that is both very accessible and has high longevity.

Finding Balance

Most communities will have more than one method of communication, and a healthy one will have a combination of them that complement each other. This is important because sometimes one will offer a more productive use of your recognition than another. Some contributors will respond better to lots of immediate recognition, rather than a single eloquent one. Others will respond better to formal recognition than informal. In both cases, be mindful of the multiplier effect that discoverability gives you, and take full advantage of opportunities where that plays a larger than usual role, such as during an official meeting or when writing an article that will have higher than normal readership.

on August 21, 2014 01:00 PM

August 20, 2014

Prentice Hall has just released the 8th Ed. of “The Official Ubuntu Book”, authored by Matthew Helmke and Elizabeth K. Joseph with José Antonio Rey, Philip Ballew and Benjamin Mako Hill.

This is the book’s first update in 2 years and as the authors state in their Preface, “…a large part of this book has been rewritten—not because the earlier editions were bad, but because so much has happened since the previous edition was published. This book chronicles the major changes that affect typical users and will help anyone learn the foundations, the history, and how to harness the potential of the free software in Ubuntu.”

As with prior editions, publisher Prentice Hall has kindly offered to ship each approved LoCo team one (1) free copy of this new edition. To keep this as simple as possible, you can request your book by following these steps. The team contact shown on our LoCo Team List (and only the team contact) should send an email to Heather Fox at heather.fox@pearson.com and include the following details:

  • Your full name
  • Which team you are from
  • If your team resides within North America, please provide: Your complete street address (the book will ship by UPS)
  • If your team resides outside North America, you will first be emailed a voucher code to download the complete eBook bundle from the publisher site, InformIT, which includes the ePub/mobi/pdf files.

If your team does reside outside North America and you wish to be considered for a print copy, please provide:

Your complete street address, region, country AND IMPORTANT: Your phone number, including country and area code. (Pearson will make its best effort to arrange shipment through its nearest corporate office.)

A few notes:

  • Only approved teams are eligible for a free copy of the book.
  • Only the team contact for each team can make the request for the book.
  • There is a limit of (1) copy of each book per approved team.
  • Prentice Hall will cover postage, but not any import tax or other shipping fees.
  • When you have the books, it is up to you what you do with them. We recommend you share them between members of the team. LoCo Leaders: please don’t hog them for yourselves!

If you have any questions or concerns, please directly contact Pearson/Prentice Hall’s Heather Fox at heather.fox@pearson.com. Also, for those teams who are not approved or yet to be approved, you can still score a rather nice 35% discount on the books by registering your LoCo with the Pearson User Group Program.

on August 20, 2014 05:16 PM

Today, in the mail, came my Certificate of Ubuntu Membership that I requested back in February. The photo was taken with Ubuntu Touch on my Nexus 7 (2013).

on August 20, 2014 05:10 PM

Packages for the release of KDE SC 4.14 are available for Kubuntu 14.04 LTS and our development release. You can get them from the Kubuntu Backports PPA, which also includes an update of Plasma Desktop to 4.11.11.

Bugs in the packaging should be reported to kubuntu-ppa on Launchpad. Bugs in the software to KDE.

on August 20, 2014 03:11 PM

The Ubuntu Women Project is pleased to present an orientation quiz that aims to help newcomers to the Ubuntu community find their niche and get involved. The base quiz was taken from the Ubuntu Italian LoCo. Our plan is to put this quiz on community.ubuntu.com but we are seeking testers for it first!


How to test it:

Go to the page where the quiz is and play around with answering the questions. If you find an issue, please e-mail the Ubuntu Women mailing list at ubuntu-women@lists.ubuntu.com. If you want to see the code, you may ask Lyz at lyz@ubuntu.com or me at belkinsa@ubuntu.com.

on August 20, 2014 11:56 AM

Qt Licence Update

Jonathan Riddell

KDE Project:

Today Qt announced some changes to their licence. The KDE Free Qt team have been working behind the scenes to make these happen and we should be very thankful for the work they put in. Qt code was LGPLv2.1 or GPLv3 (this also allows GPLv2). Existing modules will add LGPLv3 to that. This means I can get rid of the part of the KDE Licensing Policy which says "Note: code may not be copied from Qt into KDE Platform as Qt is LGPLv2.1 only which would prevent it being used under LGPL 3".

New modules, starting with the new web module QtWebEngine (which uses Blink), will be LGPLv3 or GPLv2. Getting rid of LGPLv2.1 means better preserving our freedoms (can’t use patents to restrict, must allow reverse engineering, must allow replacing Qt, etc). It’s not a problem for the new Qt modules to link to LGPLv2 or LGPLv2+ libraries or applications of any licence (as long as they allow the freedoms needed, such as those listed above). One problem with LGPLv3 is that you can’t link a GPLv2-only application to it (not because LGPLv3 prevents it but because GPLv2 prevents it); this is not a problem here because the new modules will be dual-licenced as GPLv2 alongside.

The main action this prevents is directly copying code from the new Qt modules into Frameworks, but as noted above we forbid doing that anyway.

With the news that Qt moved to Digia and a new company being spun out, I had been slightly worried that the new modules would be restricted further to encourage more commercial licences of Qt. This is indeed the case, but it’s being done in the best possible way, thanks Digia.

on August 20, 2014 09:27 AM