November 01, 2014

Season of KDE - Let's go!

Valorie Zimmerman

We're now in the countdown. The deadline for applications is midnight, 31 October UTC. So give the ideas page one last look:

https://community.kde.org/SoK/Ideas/2014

We even got one last task just now, just for you devops -- UPDATE: this task is taken.

Please talk to your prospective mentor and get their OK before signing up on https://season.kde.org/ . If you have already signed up and your mentor has signed off on your plans and timeline, get to work!

===========================================

UPDATE: Because of glitches in the schedule, we are extending the student deadline by a few days, to match the mentor deadline for making an account and logging in on https://season.kde.org/

Please don't delay. Make all necessary accounts, subscribe to KDE-Soc-Mentor or KDE-Soc list, and get the proposals posted and approved. Please ping us in #kde-soc if there are any problems we can help you with. Otherwise, get to work!


on November 01, 2014 02:14 AM

October 31, 2014

Debugging hung machines can be a bit tricky. Here I'll document methods to trigger a crashdump when these hangs occur.

What exactly does it mean when a machine 'hangs' or 'freezes up'? More information can be found in the kernel documentation [1], but overall there are a few types of hangs. A "Soft Lock-Up" is when the kernel loops in kernel mode for a duration without giving tasks a chance to run. A "Hard Lock-Up" is when the kernel loops in kernel mode for a duration without letting other interrupts run. In addition, a "Hung Task" is when a userspace task has been blocked for a duration. Thankfully the kernel has options to panic on each of these conditions and thus create a proper crashdump.

To set up crashdump on an Ubuntu machine, we can do the following. First we need to install and configure crashdump; more info can be found in [2].
sudo apt-get install linux-crashdump
Select NO unless you really would like to use kexec for your reboots.

Next we need to enable it since by default it is disabled.
sudo sed -i 's/USE_KDUMP=0/USE_KDUMP=1/' /etc/default/kdump-tools

Reboot to ensure the kernel cmdline options are properly set up:
sudo reboot

After reboot run the following:
sudo kdump-config show

If this command shows 'ready to dump', then we can test a crash to ensure kdump has enough memory and will dump properly. This command will crash your computer, so hopefully you are doing this on a test machine.
echo c | sudo tee /proc/sysrq-trigger

The machine will reboot and you'll find a crash dump in /var/crash.

All of this is already documented in [2], so now we need to enable panics for hang and lockup conditions. We'll enable several cases at once.

Edit /etc/default/grub and change the GRUB_CMDLINE_LINUX line to the following:
GRUB_CMDLINE_LINUX="nmi_watchdog=panic hung_task_panic=1 softlockup_panic=1 unknown_nmi_panic"

In addition you could enable these via /proc/sys/kernel or sysctl. For more information about these parameters there is documentation here [3].
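As a sketch, the sysctl route would look something like this (the sysctl names below are taken from the kernel documentation [3]; kernel.hardlockup_panic is assumed here to be the runtime counterpart of nmi_watchdog=panic, and these settings take effect immediately but do not persist across reboots unless added to /etc/sysctl.conf):

```shell
# Panic on hard lockups (NMI watchdog) and soft lockups
sudo sysctl -w kernel.hardlockup_panic=1
sudo sysctl -w kernel.softlockup_panic=1
# Panic when a task is blocked longer than the hung-task timeout
sudo sysctl -w kernel.hung_task_panic=1
# Panic on NMIs the kernel cannot attribute (x86)
sudo sysctl -w kernel.unknown_nmi_panic=1
```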

If you've used the command line change, update grub and then reboot.
sudo update-grub && sudo reboot

Now your machine should crash when it locks up, and you'll get a nice crashdump to analyze. If you want to test such a setup, I wrote a module [4] that induces a hang so you can check that this works properly.

Happy hacking.

  1. https://www.kernel.org/doc/Documentation/lockup-watchdogs.txt
  2. https://wiki.ubuntu.com/Kernel/CrashdumpRecipe
  3. https://www.kernel.org/doc/Documentation/kernel-parameters.txt
  4. https://github.com/arges/hanger


on October 31, 2014 08:53 PM
Full Circle
Issue #90
Full Circle, the independent magazine for the Ubuntu Linux community, is proud to announce the release of our ninetieth issue.

This month:
* Command & Conquer
* How-To : OpenConnect to Cisco, LibreOffice, and Broadcasting With WebcamStudio
* Graphics : Inkscape.
* Linux Labs: Compiling a Kernel Pt.3
* Review: MEGAsync
* Ubuntu Games: Prison Architect, and X-Plane Plugins
plus: News, Arduino, Q&A, and soooo much more.

Get it while it’s hot!
http://fullcirclemagazine.org/issue-90
We now have several issues available for download on Google Play/Books. If you like Full Circle, please leave a review.
AND: We have a Pushbullet channel which we hope will make it easier to automatically receive FCM on launch day.
on October 31, 2014 08:10 PM

Washington Devices Sprint

Canonical Design Team

Last week was a week of firsts for me: my first trip to America, my first Sprint and my first chili-dog.

Introducing myself as the new (and only) Editorial and Web Publisher, I dove head first into the world of developers, designers and Community members. It was a very absorbing week, which afterwards felt more like a marathon than a sprint.

After being grilled by Customs, we finally arrived at Tyson’s Corner, where 200 or so other developers, designers and Community members had gathered for the Devices Sprint. It was a great opportunity for me to see how people from every corner of the world contribute to Ubuntu and share their passion for open source. I found it especially interesting to see how designers and developers, with their different mindsets, collaborate.

The highlight for me was talking to some of the Community guys; it was really interesting to hear why and how they contribute from all corners of the world.

From left to right: Riccardo, Andrew, Filippo and Victor.

The main ballroom.

Design Team dinner. From the left: TingTing, Andrew, John, Giorgio, Marcus, Olga, James, Florian, Bejan and Jouni.

I caught up with Olga and Giorgio to share their thoughts and experiences from the Sprint:

So how did the Sprint go for you guys?

Olga: “It was very busy and productive in terms of having face time with development, which was the main reason we went, as we don’t get to see them that often.

For myself personally, I have a better understanding of things in terms of what the issues are and what is needed, and also what can or cannot be done in certain ways. I was very pleased with the whole sprint. There was a lot of running around between meetings, where I tried to use the time in-between to catch up with people. On the other hand, Development also approached the Design Team for guidance, opinions and a general catch-up/chat, which was great!”

Steph: “I agree, I found it especially productive in terms of getting the right people in the same room and working face-to-face, as it was a lot more productive than sharing a document or talking on IRC.”

Giorgio: “Working remotely with the engineers works well for certain tasks, but the Design Team sometimes needs to achieve a higher bandwidth through other means of communication, so these sprints every 3 months are incredibly useful.

What a Sprint allows us to do is to put a face to the name and start to understand each other’s needs, expectations and problems, as stuff gets lost in translation.

I agree with Olga, this Sprint was a massive opportunity to shift to a much higher level of collaboration with the engineers.”

What was your best moment?

Giorgio: “My best moment was when the engineers’ perception of the Design Team’s efforts changed. My goal is to better this collaboration process with each Sprint.”

Did anything come up that you didn’t expect?

Giorgio: “Gaming was an underground topic that came up during the Sprint. There was a nice workshop on Wednesday on it, which was really interesting.”

Steph: “Andrew, a Community developer I interviewed, actually made two games one evening during the Sprint!”

Olga: “They love what they do, they’re very passionate and care deeply.”

Do you feel as a whole the Design Team gave off a good vibe?

Giorgio: “We got a good vibe but it’s still a work in progress, as we need to raise our game and become even better. This has been a long process, as the design of the Platform and Apps wasn’t simply done overnight. However, we are now at a mature stage of the process where we can afford to engage with the Community more. We are all in this journey together.

Canonical has a very strong engineering nature, as it was founded by engineers and driven by them, and it has evolved because of this. As a result, over the last few years a design culture has begun to complement that. Now they expect steer from the Design Team on a number of things, for example responsive design and convergence.

The Sprint was good, as we finally got more of a perception of what the other parties expect from us. It’s like a relationship: you suddenly have a moment of clarity and enlightenment, where you start to see what you actually need to do, and that will make the relationship better.”

Olga: “The other parties and the Development Team started to understand that initiating communication is not just the responsibility of the Design Team, but an engagement we all need to be involved in.”

In all it was a very productive week, as everyone worked hard to push for the first release of the BQ phone; together with some positive feedback and shout-outs for the Design Team :)

Unicorn hard at work.

There was a bit of time for some sightseeing too…

It would have been rude not to see what the capital had to offer, so on the weekend before the sprint we checked out some of Washington’s iconic sceneries.

The Washington Monument.

We saw most of the important government buildings and monuments like the White House, the Washington Monument and the Lincoln Memorial. Seeing them in the flesh was spectacular; however, I half expected a UFO to appear over the Monument like in ‘Independence Day’, and for Abraham Lincoln to suddenly get up off his chair like in ‘Night at the Museum’. Unfortunately, none of that happened.

The White House.

D.C. isn’t as buzzing as London, but it definitely has a lot of character, embodying an array of thriving ethnic pockets that represent African, Asian and Latin American cultures, plus a lot of Italians. Washington is known for getting its sax on, so a few of the Design Team and I decided to check out the night scene and hit a local jazz club in Georgetown.

...And all the jazz.

(Twins Jazz Club)

On the Sunday, we decided to leave the hustle and bustle of the city and venture out to the beautiful Great Falls Park, only 10-15 minutes from the hotel. The park is located in northern Fairfax County, along the banks of the Potomac River, and is an integral part of the George Washington Memorial Parkway. Its creeks and rapids made for some great selfie opportunities…

Great Falls Park.

on October 31, 2014 02:21 PM

I've seen more than a few Ask Ubuntu users struggling with how to batch rename their files. They get lost in Bash loops and find -exec and generally make a big mess of things before asking for help. But there is an easy method in Ubuntu that relatively few users know about: the rename command.

I was a couple of years into Ubuntu before I discovered the rename command, but now I wonder how I ever got along without it. I seem to use it at least once a week for myself and as often again to help other people. Let's just spend a second or two marvelling at the outward simplicity of the syntax and then we'll crack on.

rename [-v] [-n] [-f] perlexpr [filenames]

Before we get too crazy, let's talk about those first two flags. -v will tell you what it's doing and -n will exit before it does anything. If you are in any doubt about your syntax, sling -vn on the end. It'll tell you what it would have done if you hadn't had the -n there. -f will give rename permission to overwrite files. Be careful.

Replacing (and adding and removing) with s/.../.../

This is probably the most common usage for rename. You've got a bunch of files with the wrong junk in their filenames, or you want to change the formatting, or add a prefix, or replace certain characters... rename lets us do all this through simple regular expressions.

I'm using -vn here so the changes don't actually persist. Let's start by creating a few files:

$ touch dog{1..3}.dog
$ ls
dog1.dog  dog2.dog  dog3.dog

Replacing the first dog with cat:

$ rename 's/dog/cat/' * -vn
dog1.dog renamed as cat1.dog
dog2.dog renamed as cat2.dog
dog3.dog renamed as cat3.dog

Replacing the last dog with cat (note $ means "end of line" in this context, ^ means start):

$ rename 's/dog$/cat/' * -vn
dog1.dog renamed as dog1.cat
dog2.dog renamed as dog2.cat
dog3.dog renamed as dog3.cat

Replacing all instances of dog with the /g (global) flag:

$ rename 's/dog/cat/g' * -vn
dog1.dog renamed as cat1.cat
dog2.dog renamed as cat2.cat
dog3.dog renamed as cat3.cat

Removing a string is as simple as replacing it with nothing. Let's nuke the first dog:

$ rename 's/dog//' * -vn
dog1.dog renamed as 1.dog
dog2.dog renamed as 2.dog
dog3.dog renamed as 3.dog

Adding strings is a case of finding your insertion point and replacing it with your string. Here's how to add "PONIES-" to the start:

$ rename 's/^/PONIES-/' * -vn
dog1.dog renamed as PONIES-dog1.dog
dog2.dog renamed as PONIES-dog2.dog
dog3.dog renamed as PONIES-dog3.dog

This is all fairly simple and your ability to use it in the wild is largely going to depend on your ability to manipulate regular expressions. I've been using them for well over a decade in a professional setting so this might be something I take for granted, but they aren't hard once you get past the syntax. Here's a fairly simple introduction to REGEX if you would like to learn more.

Zero-padding numbers so they sort correctly

ls can be pretty shoddy at sorting numbers correctly. Here's a simple example:

$ touch {1..11}
$ ls
1  10  11  2  3  4  5  6  7  8  9

It's sorting one character position at a time, left to right. This isn't too bad when we're only talking about tens, but it scales up and you can end up with thousands coming before 9s. A good way to fix this (ls isn't the only application with this issue) is to zero-pad the numbers so they are all the same length and their digits sit in corresponding positions. rename makes this super-easy because we can dip into Perl and use sprintf to reformat the number:

$ rename 's/\d+/sprintf("%02d", $&)/e' *
$ ls
01  02  03  04  05  06  07  08  09  10  11

The %02d there means we're printing at least 2 characters and padding it with zeroes if we need to. If you're dealing with thousands, increase that to 4, 5 or 6.

$ rename 's/\d+/sprintf("%05d", $&)/e' *
$ ls
00001  00002  00003  00004  00005  00006  00007  00008  00009  00010  00011

Similarly, you can parse a number and remove the zero padding with something like this:

$ rename 's/\d+/sprintf("%d", $&)/e' *
$ ls
1  10  11  2  3  4  5  6  7  8  9

Attaching a counter into the filename

Say we have three files and we want to add a counter onto the filename:

$ touch {a..c}
$ rename 's/$/our $i; sprintf("-%02d", 1+$i++)/e' * -vn
a renamed as a-01
b renamed as b-02
c renamed as c-03

It's the our $i that lets us persist a variable's state over multiple passes.

Incrementing an existing number in a file

Given three files with consecutive numbers, increment them. It's a simple enough expression but we have to be mindful that sometimes there are going to be conflicting filenames. Here's an example that moves all the files to temporary filenames and then strips them back to what they should be.

$ touch file{1..3}.ext
$ rename 's/\d+/sprintf("%d-tmp", $& + 1)/e' * -v
file1.ext renamed as file2-tmp.ext
file2.ext renamed as file3-tmp.ext
file3.ext renamed as file4-tmp.ext

$ rename 's/(\d+)-tmp/$1/' * -v
file2-tmp.ext renamed as file2.ext
file3-tmp.ext renamed as file3.ext
file4-tmp.ext renamed as file4.ext

Changing a filename's case

Until now we've been using substitutions and expressions but there are other forms of Perl expression. In this case we can remap all lowercase characters to uppercase with a simple translation:

$ touch lowercase UPPERCASE MixedCase
$ rename 'y/a-z/A-Z/' * -vn
lowercase renamed as LOWERCASE
MixedCase renamed as MIXEDCASE

We could do that with a substitution-expression like: s/[a-z]/uc($&)/ge

Fixing extension based on actual content or MIME

What if you are handed a bunch of files without extensions? Well you could loop through and use things like the file command, or you could just use Perl's File::MimeInfo::Magic library to parse the file and hand you an extension to tack on.

rename 's/.*/use File::MimeInfo qw(mimetype extensions); $&.".".extensions(mimetype($&))/e' *

This one is a bit of a monster but further highlights that anything you can do in Perl can be done with rename. You could read ID3 tags from music or process internal data to get filename fragments.

Why doesn't my rename take a perlexpr?!

Ubuntu's default rename is actually a link to a Perl script called prename. Some distributions ship the util-linux version of rename (called rename.ul in Ubuntu) instead. You can work out which version you have using the following command:

$ dpkg -S $(readlink -f $(which rename))
perl: /usr/bin/prename

So if your distribution ships the util-linux version, you'll have to install the Perl version and call prename instead of rename, unless you want to shunt things around.

on October 31, 2014 01:41 PM

The consensus in the security community is that passwords suck, but they're here to stay, at least for a while longer. Given breaches like Adobe, ..., it's becoming more and more evident that the biggest threat is not weak passwords but password reuse. Of course, the solution to password reuse is to use a different password for every site that requires you to log in. The problem is that your average user has dozens of online accounts and probably can't remember dozens of passwords. So we build tools to help people remember passwords, mostly password managers, but do we build them well?

I don't think so. But before I look at the password managers that are out there, it's important to define the criteria that a good password manager would meet.

  1. Use well-understood encryption to protect the data. A good password manager should use cryptographic constructions that are well understood and reviewed. Ideally, it would build upon existing cryptographic libraries or full cryptosystems. This includes the KDF (Key-derivation function) as well as encryption of the data itself. Oh, and all of the data should be encrypted, not just the passwords.

  2. The source should be auditable. No binaries, no compressed/minified Javascript. If built in a compiled language, it should have source available with verifiable builds. If built in an interpreted language, the source should be unobfuscated and readable. Not everyone will audit their password manager, but it should be possible.

  3. The file format should be open. The data should be stored in an open, documented format, allowing for interoperability. Your passwords should not be tied to a particular manager, whether that's because the developer of that manager abandoned it, or because it's not supported on a particular platform, or because you like a blue background instead of grey.

  4. It should integrate with the browser. Yes, there are some concerns about exposing the password manager within the browser, but it's more important that this be highly usable. That includes making it easy to generate passwords, easy to fill passwords, and most importantly: harder to phish. In-browser password managers can compare the origin of the page you're on to the data stored, so users are less likely to enter their password in the wrong page. With a separate password manager, users generally copy/paste their passwords into a login page, which relies on the user to ensure they're putting their password into the right site.

  5. Sync, if offered, should be independent of encryption. Your encryption passphrase should not be used for sync. In fact, your encryption passphrase should never be sent to the provider: not at signup, not at login, not ever. Sync, unfortunately, only sounds simple: drop a file in Dropbox or Google Drive, right? But what happens if the file gets updated while the password manager is open? How do changes get synced if two clients are open?

These are just the five most important features as I see them, and not a comprehensive design document for password managers. I've yet to find a manager that meets all of these criteria, but I'm hoping we're moving in this direction.
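To make criterion 1 concrete, here's a minimal sketch of the KDF step a manager should use: a well-reviewed construction (PBKDF2-HMAC-SHA256, here from Python's standard library driven from the shell) rather than something home-grown. The salt and iteration count shown are placeholder assumptions, not recommendations.

```shell
# Derive a 256-bit encryption key from a master passphrase.
# In a real manager the salt is random per vault and stored
# alongside the encrypted data.
python3 -c '
import hashlib
key = hashlib.pbkdf2_hmac("sha256", b"correct horse battery staple",
                          b"16-byte-salt-here", 100000, dklen=32)
print(key.hex())
'
```

The same passphrase and salt always derive the same key, which is what lets the vault be decrypted later without ever storing the passphrase itself.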

on October 31, 2014 01:16 AM

October 30, 2014

S07E31 – The One with the Dozen Lasagnas

Ubuntu Podcast from the UK LoCo

Join Laura Cowen, Tony Whitmore, Mark Johnson and Alan Pope in Studio L for season seven, episode thirty-one of the Ubuntu Podcast!

In this week’s show:-

We’ll be back next week, when we’ll be talking about the fanfare surrounding the latest Ubuntu release and looking over your feedback.

Please send your comments and suggestions to: podcast@ubuntu-uk.org
Join us on IRC in #uupc on Freenode
Leave a voicemail via phone: +44 (0) 203 298 1600, sip: podcast@sip.ubuntu-uk.org and skype: ubuntuukpodcast
Follow us on Twitter
Find our Facebook Fan Page
Follow us on Google+

on October 30, 2014 08:00 PM

With all of the new documentation coming to support the development of Unity Scopes, it’s time for us to have another development showdown! Contestants will have five (5) weeks to develop a project, from scratch, and submit it to the Ubuntu Store. But this time all of the entries must be Scopes.

Be sure to update to the latest SDK packages to ensure that you have the correct template and tools. You should also create a new Click chroot to get the latest build and runtime packages.

Prizes

We’ve got some great prizes lined up for the winners of this competition.

  • 1st place will win a new Dell XPS 13 Laptop, Developer Edition (preloaded with Ubuntu)
  • Runners up will receive one of:
    • Logitech UE Boom Bluetooth speakers
    • Nexus 7 running Ubuntu
    • An Ubuntu bundle, featuring:
      • Ubuntu messenger bag
      • Ubuntu Touch Infographic T-shirt
      • Ubuntu Neoprene Laptop Sleeve
    • An Ubuntu bundle, featuring:
      • Ubuntu backpack
      • Ubuntu Circle of Friends Dot Design T-shirt
      • Ubuntu Neoprene Laptop Sleeve

Judging

Scope entries will be reviewed by a panel of judges from a variety of backgrounds and specialties, all of whom will evaluate the scope based on the following criteria:

  • General Interest – Scopes that are of more interest to general phone users will be scored higher. We recommend identifying what kind of content phone users want to have fast, easy access to and then finding an online source where you can query for it
  • Creativity – Scopes are a unique way of bringing content and information to a user, and we’ve only scratched the surface of what they can do. Thinking outside the box and providing something new and exciting will lead to a higher score for your Scope
  • Features – There’s more to scopes than basic searching, take advantage of the departments, categories and settings APIs to enhance the functionality of your Scope
  • Design – Scopes offer a variety of ways to customize the way content is displayed, from different layouts to visual styling. Take full advantage of what’s possible to provide a beautiful presentation of your results.
  • Awareness / Promotion – we will award extra points to those of you who blog, tweet, facebook, Google+, reddit, and otherwise share updates and information about your scope as it progresses.

The judges for this contest are:

  • Chris Wayne, developer behind a number of current pre-installed Scopes
  • Joey-Elijah Sneddon, author and editor of OMG! Ubuntu!
  • Victor Thompson, Ubuntu Core Apps developer
  • Jouni Helminen, designer at Canonical
  • Alan Pope, from the Ubuntu Community Team at Canonical

Learn how to write Ubuntu Scopes

To get things started we’ve recently introduced a new Unity Scope project template into the Ubuntu SDK; you can use this to get a working foundation for your code right away. Then you can follow along with our new SoundCloud scope tutorial to learn how to tailor your code to a remote data source and give your scope a unique look and feel that highlights both the content and the source. To help you out along the way, we’ll be scheduling a series of online workshops that will cover how to use the Ubuntu SDK and the Scope APIs. In the last weeks of the contest we will also be hosting a hackathon on our IRC channel (#ubuntu-app-devel on Freenode) to answer any last questions and help you get your scope finished. If you cannot join those, you can still find everything you need to know in our scope developer documentation.

How to participate

If you are not a programmer and want to share some ideas for cool scopes, be sure to add and vote for scopes on our reddit page. The contest is free to enter and open to everyone. The five-week period starts on Thursday 30th October and runs until Wednesday 3rd December 2014! Enter the Ubuntu Scope Showdown >

on October 30, 2014 06:36 PM

As you may know, Packt Publishing supports Full Circle Magazine with review copies of books, so it’s only fair that we help them by bringing this offer to your attention:

PacktLib provides full online access to over 2000 books and videos to give users the knowledge they need, when they need it. From innovative new solutions and effective learning services to cutting edge guides on emerging technologies, Packt’s extensive library has got it covered. For a limited time only, Packt is offering 5 free eBook or Video downloads in the first month of a new annual subscription – up to $150 worth of extra content. That’s in addition to one free download a month for the rest of the year.

This special PacktLib Plus offer marks the release of the new and improved reading and watching platform, packed with new features.

The deal expires on 4 November.

on October 30, 2014 06:33 PM

I have many distributed Linux instances across several clouds, be they global, such as Amazon or Digital Ocean, or regional, such as TeutoStack or Enter.

Probably many of you are facing the same issue: having a consistent UNIX identity across multiple instances. While in an ideal world LDAP would be a perfect choice, leaving LDAP open to the wild Internet is not a great idea.

So, how to solve this issue, while being secure? The trick is to use the new NSS module for SecurePass.

While SecurePass has traditionally been used in the operating system just for two-factor authentication, the new beta release is capable of holding “extended attributes”, i.e. arbitrary information for each user profile.

We will use SecurePass to authenticate users and store Unix information with this new capability. In detail, we will:

  • Use PAM to authenticate the user via RADIUS
  • Use the new NSS module for SecurePass to have a consistent UID/GID/….

SecurePass and extended attributes

The next generation of SecurePass (currently in beta) is capable of storing arbitrary data for each profile. This is called “Extended Attributes” (or xattrs) and -as you can imagine- is organized as key/value pair.

You will need the SecurePass tools to be able to modify users’ extended attributes. The new releases of Debian Jessie and Ubuntu Vivid Vervet have a package for it; just:

# apt-get install securepass-tools

ERRATA CORRIGE: securepass-tools hasn’t been uploaded to Debian yet, Alessio is working hard to make the package available in time for Jessie though.

For other distributions or previous releases, there’s a python package (PIP) available. Make sure that you have pycurl installed and then:

# pip install securepass-tools

While SecurePass tools allow local configuration file, we highly recommend for this tutorial to create a global /etc/securepass.conf, so that it will be useful for the NSS module. The configuration file looks like:

[default]
app_id = xxxxx
app_secret = xxxx
endpoint = https://beta.secure-pass.net/

Where app_id and app_secret are valid API keys to access the SecurePass beta.

Through the command line, we will be able to set UID, GID and all the required Unix attributes for each user:

# sp-user-xattrs user@domain.net set posixuid 1000

While posixuid is the bare minimum attribute needed for a Unix login, the following attributes are valid:

  • posixuid → UID of the user
  • posixgid → GID of the user
  • posixhomedir → Home directory
  • posixshell → Desired shell
  • posixgecos → Gecos (defaults to username)
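Following the same sp-user-xattrs pattern shown above, a full profile for one user might be set like this (the user and values here are illustrative):

```shell
# Hypothetical attribute values for user@domain.net; adjust to your environment
sp-user-xattrs user@domain.net set posixuid 1000
sp-user-xattrs user@domain.net set posixgid 100
sp-user-xattrs user@domain.net set posixhomedir /home/user
sp-user-xattrs user@domain.net set posixshell /bin/bash
sp-user-xattrs user@domain.net set posixgecos "My User"
```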

Install and Configure NSS SecurePass

In a similar way to the tools, Debian Jessie and Ubuntu Vivid Vervet have native package for SecurePass:

# apt-get install libnss-securepass

Previous releases of Debian and Ubuntu can still run the NSS module, as can CentOS and RHEL. Download the sources from:

https://github.com/garlsecurity/nss_securepass

Then:

./configure
make
make install (Debian/Ubuntu Only)

For CentOS/RHEL/Fedora you will need to copy files in the right place:

/usr/bin/install -c -o root -g root libnss_sp.so.2 /usr/lib64/libnss_sp.so.2
ln -sf libnss_sp.so.2 /usr/lib64/libnss_sp.so

The /etc/securepass.conf configuration file should be extended to hold defaults for NSS by creating an [nss] section as follows:

[nss]
realm = company.net
default_gid = 100
default_home = "/home"
default_shell = "/bin/bash"

This will provide defaults in case attributes other than posixuid are not set for a user. Next we need to configure the Name Service Switch (NSS) to use SecurePass. We will change /etc/nsswitch.conf by adding “sp” to the passwd entry as follows:

$ grep sp /etc/nsswitch.conf
 passwd:     files sp

Double check that NSS is picking up our new SecurePass configuration by querying the passwd entries as follows:

$ getent passwd user
 user:x:1000:100:My User:/home/user:/bin/bash
$ id user
 uid=1000(user)  gid=100(users) groups=100(users)

Using this setup by itself wouldn’t allow users to log in to a system because the password is missing. We will use SecurePass’ authentication to access the remote machine.

Configure PAM for SecurePass

On Debian/Ubuntu, install the RADIUS PAM module with:

# apt-get install libpam-radius-auth

If you are using CentOS or RHEL, you need to have the EPEL repository configured. In order to activate EPEL, follow the instructions on http://fedoraproject.org/wiki/EPEL

Be aware that this has not been tested with SELinux enabled (set it to off or permissive).

On CentOS/RHEL, install the RADIUS PAM module with:

# yum -y install pam_radius

Note: at the time of writing, EPEL 7 is still in beta and does not contain the RADIUS PAM module. A request has been filed through Red Hat’s Bugzilla to include this package in EPEL 7 as well.

Configure SecurePass with your RADIUS device. We only need to set the public IP address of the server, a fully qualified domain name (FQDN), and the secret password for the RADIUS authentication. If the server is behind NAT, specify the public IP address that it will be translated into. After completion we get a small recap of the newly created device. For the sake of example, we use “secret” as our secret password.

Configure the RADIUS PAM module accordingly, i.e. open /etc/pam_radius.conf and add the following lines:

radius1.secure-pass.net secret 3
radius2.secure-pass.net secret 3

Of course the “secret” is the same one we set up in the SecurePass administration interface. Beyond this point we need to configure PAM to correctly manage the authentication.

In CentOS, open the configuration file /etc/pam.d/password-auth-ac; in Debian/Ubuntu, open /etc/pam.d/common-auth. Make sure that pam_radius_auth.so is in the list:

auth required   pam_env.so
auth sufficient pam_radius_auth.so try_first_pass
auth sufficient pam_unix.so nullok try_first_pass
auth requisite  pam_succeed_if.so uid >= 500 quiet
auth required   pam_deny.so
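
With the stack in place, ordering matters: pam_radius_auth.so should appear before pam_unix.so so that RADIUS authentication is tried first, as shown above. A minimal sketch to sanity-check that ordering (the function below is our own convenience check, not part of the PAM tooling):

```shell
# check_pam_order FILE: succeed if pam_radius_auth.so appears before
# pam_unix.so in FILE (a convenience check, not an official PAM tool).
check_pam_order() {
    file=$1
    r=$(grep -n 'pam_radius_auth\.so' "$file" | cut -d: -f1 | head -n1)
    u=$(grep -n 'pam_unix\.so' "$file" | cut -d: -f1 | head -n1)
    # both modules must be present, and RADIUS must come first
    [ -n "$r" ] && [ -n "$u" ] && [ "$r" -lt "$u" ]
}

# Usage on Debian/Ubuntu (on CentOS/RHEL, point it at password-auth-ac):
#   check_pam_order /etc/pam.d/common-auth && echo "order looks good"
```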

Conclusions

Handling many distributed Linux machines poses several challenges, from software updates to identity management and central logging. In a cloud scenario, traditional enterprise solutions are not always applicable, but new tools can become very handy.

To subscribe to the SecurePass beta for free, join SecurePass on: http://www.secure-pass.net/open
And then send an e-mail to info@garl.ch requesting beta access.

on October 30, 2014 12:55 PM

October 29, 2014

Feminist Year

Rhonda D'Vine

If someone had told me that I would visit three feminist events this year, I would have slowly nodded at them and responded with "yeah, sure..." not believing it. But sometimes things take their own turns.

It all started with the Debian Women Mini-Debconf in Barcelona. The organizers did ask me how they should word the call for papers so that I would feel invited to give a speech, which felt very welcoming and nice. So we settled on "people who identify themselves as female". Due to private circumstances I didn't prepare well for my talk, but I hope it was still worth it. The next interesting part, though, happened later during the lightning talks. Someone on IRC asked why there were male people in the lightning talks, which were the only part explicitly open to them. It also felt very, very nice, to be honest, that my own talk wasn't questioned. Those are amongst the reasons why I wrote My place is here, my home is Debconf.

The second event I went to was FemCamp Wien. It was my first barcamp, so I didn't know what to expect organization-wise. Topic-wise it was set around Queer Feminism. And it was the first event I went to which had a policy. Granted, there was an extremely silly worded part in it, which naturally ended up in a shitstorm on Twitter (which people from both sides managed very badly, which disappointed me). Denying that there is sexism against cis-males is just a bad idea, but the background was that this wasn't the topic of this event. The background of the policy was that barcamps, and events in general, usually aren't considered that safe a place for certain people, and that this barcamp wanted to make it clear that people who usually shy away from such events for fear of harassment could feel at home there.
And what can I say, this absolutely was the right thing to do. I never felt more welcomed and included at any event, including Debian events—sorry to say that so frankly. Making it clear through the policy that everyone is in the same boat with addressing each other respectfully managed to do exactly that. The first session of the event, about dominant talk patterns and how to work around or against them, also made sure that the rest of the event gave shy people a chance to speak up and feel comfortable, too. And the range of sessions that were held was simply great. This was the event where I came up with the pattern of judging the quality of an event by the sessions that I'm unable to attend. The thing that hurt me most in hindsight was that I couldn't attend the session about minorities within minorities. :/

Last but not least I attended AdaCamp Berlin. This was a small unconference/barcamp dedicated to increasing women's participation in open technology and culture, named after Ada Lovelace, who is considered the first programmer. It was a small event with only 50 slots for people who identify as women. So I was totally hyper when I received the mail that I was accepted. It was another event with a policy, and at first reading it looked strange. But given that there are people who are allergic to ingredients of scents, it made sense to raise awareness of that topic. And given that women face a fair amount of harassment in IT and at events, it also makes sense to remind people to behave. After all, it was a general policy for all AdaCamps, not just for this specific one with only women.
I enjoyed the event. Totally. And that's not only because I was able to meet up with a dear friend who I haven't talked to in years, literally. I enjoyed the environment, and the sessions that were going on. And quite similar to the FemCamp, it started off with a session that helped a lot for the rest of the event. This time it was about the impostor syndrome, which is extremely common for women in IT. And what can I say, I found myself in one of the slides, given that I had tweeted just the day before that I doubted I belonged there. Frankly spoken, it even crossed my mind that I was only accepted so that at least one trans person would be there. Which is pretty much what the impostor syndrome is all about, isn't it? But when I was there, it did feel right. And we had great sessions that I truly enjoyed. And I have to thank one lady once again for her great definition of feminism that she brought up during one session, which is roughly that feminism for her isn't about gender but about the equality of all people regardless of their sex or gender definition. It's about dropping this whole binary thinking. I couldn't agree more.

All in all, I totally enjoyed these events, and hope that I'll be able to attend more next year. From what I grasped, all three of them are thinking of doing it again, and FemCamp Vienna already announced next year's date at the end of this year's event, so I am looking forward to meeting most of these fine ladies again, if fate permits. And keep in mind, there will always be critics and haters out there, but given that they wouldn't think of attending such an event anyway in the first place, don't get wound up about it. They just try to talk you down.

P.S.: Ah, almost forgot about one thing to mention, which also helps a lot to reduce some barrier for people to attend: The catering during the day and for lunch both at FemCamp and AdaCamp (there was no organized catering at the Debian Women Mini-Debconf) did take off the need for people to ask about whether there could be food without meat and dairy products by offering mostly Vegan food in the first place, even without having to query the participants. Often enough people otherwise choose to go out of the event or bring their own food instead of asking for it, so this is an extremely welcoming move, too. Way to go!

/personal | permanent link | Comments: 0 | Flattr this

on October 29, 2014 07:47 PM

Kubuntu Vivid in Bright Blue

Jonathan Riddell

KDE Project:

Kubuntu Vivid is the development name for what will be released in April next year as Kubuntu 15.04.

The exciting news is that, following some discussion and some wavering, we will be switching to Plasma 5 by default. It has shown itself to be a solid and reliable platform and it's time to show it off to the world.

There are some bits which are missing from Plasma 5 and we hope to fill those in over the next six months. Click on our Todo board above if you want to see what's in store and if you want to help out!

The other change that affects workflow is we're now using Debian git to store our packaging in a kubuntu branch so hopefully it'll be easier to share updates.

on October 29, 2014 07:11 PM

Washington sprint

Daniel Holbach

In the Community Q&A with Alan and Michael yesterday, I talked a bit about the sprint in Washington already, but I thought I’d write up a bit more about it again.

First of all: it was great to see a lot of old friends and new faces at the sprint. Especially with the two events (14.10 release and upcoming phone release) coming together, it was good to lock people up in various rooms and let them figure it out when nobody could run away easily. For me it was a great time to chat with lots of people and figure out if we’re still on track and if our old assumptions still made sense.  :-)

We were all locked up in a room as well…

What was pretty fantastic was the general vibe there. Everyone was crazy busy, but everybody seemed happy to see that their work of the last months and years is slowly coming together. There are still bugs to be fixed but we are close to getting the first Ubuntu phone ever out the door. Who would have thought that a couple of years ago?

It was great to catch up with people about our App Development story. There were a number of things we looked at during the sprint:

  • Up until now we had a VirtualBox image with Ubuntu and the SDK installed for people at training (or App Dev School) events who didn’t have Ubuntu installed. This was a clunky solution; my beta testing at xda:devcon confirmed that. I sat down with Michael Vogt, who encouraged me to look into providing something more akin to an “official ISO” and showed me the ropes in terms of creating seeds and how livecd-rootfs is used.
  • I had a number of conversations with XiaoGuo Liu, who works for Canonical as well, and has been testing our developer site and our tools for the last few months. He also wrote lots and lots of great articles about Ubuntu development in Chinese. We talked about providing our developer site in Chinese as well, how we could integrate code snippets more easily and many other things.
  • I had many chats at the breakfast buffet with Zoltan and Zsombor of the SDK team (it always looked like we were there at the same time). We talked about making fat packages easier to generate, my experiences with kits, and many other things.
  • It was also great to catch up with David Callé who is working on scopes documentation. He’s just great!

What I also liked a lot was being able to debug issues with the phone on the spot. I changed to the proposed channel, set it to read-write, installed debug symbols and voilà: grabbing the developer was never easier. My personal recommendation: make sure the problem happens around 12:00, stand in the hallway with your laptop attached to the phone, and wait for the developer in charge to grab lunch. This way I could find out more about a couple of issues which are being fixed now.

It was also great to meet the non-Canonical folks at the sprint who worked on the Core Apps like crazy.

What I liked as well was our Berlin meet-up: we basically invited Berliners, ex-Berliners and honorary Berliners and went to a Mexican place. Wished I met those guys more often.

I also got my Ubuntu Pioneers T-Shirt. Thanks a lot! I’ll make sure to post a selfie (as everyone else :-)) soon.

Thanks a lot for a great sprint, now I’m looking forward to the upcoming Ubuntu Online Summit (12-14 Nov)! Make sure you register and add your sessions to the schedule!

on October 29, 2014 05:46 PM

Do you make software that solves real-world problems? Do you want your software to be instantly available to everyone that's building cloud solutions? Did you know that Ubuntu powers most of the cloud?

Some fun Ubuntu folks will be with their IBM and OpenPower friends just south of San Francisco, California next Wednesday (Nov. 5th, 2014) to talk about the future: Ubuntu on Power.

The event is free, but you'll have to register in advance.

Click the power button to get more information and to register!

Cheers,
Randall
Ubuntu Community Manager
Ubuntu on *Power*

--
Questions? randall AT ubuntu DOT com

on October 29, 2014 05:36 PM

Eclipse and Android ADT support now in Ubuntu Developer Tools Center

Now that the excellent Ubuntu 14.10 is released, it's time, as part of our Ubuntu Loves Developers effort, to focus on the Ubuntu Developer Tools Center and cut a new release, bringing numerous exciting new features and framework support!

0.1 Release main features

Eclipse support

Eclipse is now part of the Ubuntu Developer Tools Center thanks to the excellent work of Tin Tvrtković, who implemented the needed bits to bring it to our users! He worked on the test bed as well, to ensure we never break things unnoticed! That way, we'll always deliver the latest and best Eclipse story on Ubuntu.

To install it, just run:

$ udtc ide eclipse

and let the system set it up for you!

eclipse.png

Android Developer Tools support (with eclipse)

The first release introduced Android Studio (beta) support, which is the default in UDTC for the Android category. In addition, we now complete that support by bringing Eclipse ADT support in this release.

eclipse-adt.png

It can be simply installed with:

$ udtc android eclipse-adt

Accept the SDK license as in the Android Studio case and be done! Note that from now on, as suggested by a contributor, both Android Studio and Eclipse ADT installs add the Android tools (adb, fastboot, ddms) to the user PATH.

Ubuntu is now a truly first-class citizen for Android application developers as their platform of choice!

Removing installed platform

As per a feature request on the Ubuntu Developer Tools Center issue tracker, it's now really easy to remove any installed platform. Just enter the same command as for installing, and append --remove. For instance:

$ udtc android eclipse-adt --remove
Removing Eclipse ADT
Suppression done

Enabling local frameworks

As also requested on the issue tracker, users can now provide their own local frameworks, by either setting UDTC_FRAMEWORKS=/path/to/directory and dropping frameworks there, or placing them in ~/.udtc/frameworks/.

In glorious detail, the loading order for duplicated categories and frameworks is the following:

  1. UDTC_FRAMEWORKS content
  2. ~/.udtc/frameworks/ content
  3. System ones.

Note that duplicate filenames are supported, though not encouraged. This will also help with large test runs that use a basic framework for the install, reinstall, remove and other cases common to all BaseInstaller frameworks.
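
The precedence can be sketched in plain shell (this only emulates the lookup described above, it is not UDTC's actual code; SYSTEM_DIR is a placeholder, since the real system location is an implementation detail of udtc):

```shell
# find_framework NAME: print the first matching framework file,
# honouring the order: UDTC_FRAMEWORKS, then ~/.udtc/frameworks/,
# then the system directory (SYSTEM_DIR is a stand-in path).
find_framework() {
    name=$1
    for dir in "$UDTC_FRAMEWORKS" "$HOME/.udtc/frameworks" "${SYSTEM_DIR:-/usr/share/udtc/frameworks}"; do
        # skip unset entries; return the first directory containing NAME
        if [ -n "$dir" ] && [ -f "$dir/$name" ]; then
            echo "$dir/$name"
            return 0
        fi
    done
    return 1
}
```

So a framework dropped into the UDTC_FRAMEWORKS directory shadows one of the same name in ~/.udtc/frameworks/ or the system location.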

Other enhancements from the community

A lot of typo fixes have been included in this release thanks to the excellent and regular work of Igor Vuk! A big up to him :) I want to highlight as well the great contributions we got in terms of translation support. Thanks to everyone who helped provide or update de, en_AU, en_CA, en_GB, es, eu, fr, hr, it, pl, ru, te, zh_CN, zh_HK support in udtc! We are eager to see which language will enter this list next. Remember that the guide on how to contribute to the Ubuntu Developer Tools Center is available here.

Exciting! How can I get it?

The 0.1 release is now tagged and all tests are passing (this release brings 70 new tests). It's available directly on vivid.

For 14.04 LTS and 14.10, use the ubuntu-developer-tools-center ppa where it's already available.

Contributions

As you have seen above, we really listen to our community and implement & debate anything that comes through. We are also starting to see great contributions that we accept and merge in. We are just waiting for yours!

If you want to discuss some ideas or want to give a hand, please refer to this blog post which explains how to contribute and help influence our Ubuntu Loves Developers story! You can also reach us on IRC in #ubuntu-desktop on freenode. We'll likely have an open hangout during the upcoming Ubuntu Online Summit as well. More news here in the coming days. :)

on October 29, 2014 11:23 AM

October 28, 2014

How to add settings to your scope

Ubuntu App Developer Blog

A scope can provide persistent settings for simple customizations, such as allowing the user to configure an email address or select a distance unit as metric or imperial.

In this tutorial, you will learn how to add settings to your scope and allow users to customize their experience.

Read…

on October 28, 2014 10:31 PM

I thought I had written a post for 14.10, but I didn’t… I think real life was too stressful for me at the time. I did write one for 14.04, though.

Between the 14.04 cycle and the 14.10 cycle, I completed one of the six or so goals, and I’m working on another one. I understand why I was not able to complete the other four or so goals, and I will explain why:

Ubuntu Doc Team

I learned that the main focus of the Doc team should be the desktop/server docs, not the wiki. Still, there should be some group of people acting as admins of the wiki. What is really required is recruiting subject-matter experts to update the wiki pages, along with wiki admins to rename and delete pages.

Ubuntu Ohio Team

I learned that most of the LoCos are dead and Ubuntu Ohio is one of them. Or perhaps I am not putting enough energy into recruiting people into the LoCo, or not networking enough.

Ubuntu Women

I learned that we don’t have the resources to run an outreach program. But I also learned that there are other ways to do “outreach”.

Between the time I started to get involved and now, I joined three teams and was elected as a team leader and as a Membership Board member. I created new goals, as I failed many of the old ones due to many factors. These are:

Ubuntu Doc Team

Nothing for now.

Ubuntu Ohio Team

Nothing for now.

Ubuntu Women

I have three goals, two of which are sub-goals of the main goal: helping to get more women involved with Ubuntu and FOSS. The third is related to the main goal, but it’s a collaboration with another team.

The first two goals are to finish the Orientation Quiz and publish it, and to get Harvest developed enough for anyone to use. The other goal is to start a collaboration with Ubuntu Scientists, since that was one thing brought up while working on the Orientation Quiz.

Ubuntu Scientists

Mainly the collaboration project, and perhaps one of the other goals of the team. And also to try to get the team active.

Ubuntu Leadership

Most likely, the goal is to collect information on the issues that leaders face and write articles about them. And also to try to get the team active.

on October 28, 2014 09:13 PM

Meeting Minutes

IRC Log of the meeting.

Meeting minutes.

Agenda

20141028 Meeting Agenda


Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:

  • http://people.canonical.com/~kernel/reports/kt-meeting.txt


Status: Vivid Development Kernel

The Vivid kernel has been opened and master-next rebased to the latest
v3.18-rc2 upstream kernel. We have withheld uploading to the archive
until we’ve progressed to a later -rc candidate.
—–
Important upcoming dates:
The Vivid ReleaseSchedule has not yet been posted.


Status: CVE’s

The current CVE status can be reviewed at the following link:

http://people.canonical.com/~kernel/cve/pkg/ALL-linux.html


Status: Stable, Security, and Bugfix Kernel Updates – Utopic/Trusty/Precise/Lucid

Status for the main kernels, until today (Sept. 30):

  • Lucid -
  • Precise -
  • Trusty – Testing

    Current opened tracking bugs details:

  • http://kernel.ubuntu.com/sru/kernel-sru-workflow.html

    For SRUs, SRU report is a good source of information:

  • http://kernel.ubuntu.com/sru/sru-report.html

    Schedule:

    cycle: 10-Oct through 31-Oct
    ====================================================================
    8-Oct Last day for kernel commits for this cycle
    09-Oct – 10-Oct Kernel prep
    12-Oct – 18-Oct Bug verification & Regression testing.
    19-Oct – 25-Oct Bug verification & Regression testing.
    26-Oct – 01-Nov Regression testing & Release to -updates.


Open Discussion or Questions? Raise your hand to be recognized

No open discussions.

on October 28, 2014 05:50 PM

October 27, 2014

Welcome to the Ubuntu Weekly Newsletter. This is issue #389 for the week October 20 – 26, 2014, and the full version is available here.

In this issue we cover:

This issue of the Ubuntu Weekly Newsletter is brought to you by:

  • Paul White
  • Elizabeth K. Joseph
  • Mathias Hellsten
  • Stephen Michael Kellat
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License.

on October 27, 2014 10:23 PM

Sprinting in DC: Friday

Nicholas Skaggs

This week, my team and I are sprinting with many of the core app developers and other folks inside of Ubuntu Engineering. Each day I'm attempting to give you a glimpse of what's happening.

Friday brings an end to an exciting week, and the faces of myself and those around me reflect the discussions, excitement, fun and lack of sleep this week has entailed.

Bubbles!
The first session of the day involved hanging out with the QA team while they heard feedback from various teams on issues with quality and process within their projects. It's always fun to hear about what causes different teams the most trouble when it comes to testing.

Next I spent some time interviewing a couple of folks for publishing later. In my case I interviewed Thomi from the QA team and Zoltan from the SDK team about the work going on within their teams and how the last cycle went. The team as a whole has been conducting interviews all week. Look for these interviews to appear on YouTube in the coming weeks.

Thursday night while having a look through a book store, I came across an ad for ubuntu in Linux Voice magazine. It made me smile. The dream of running ubuntu on all my devices is becoming closer every day.


I'd like to thank all the community core app developers who joined us this week. Thanks for hanging out with us, providing feedback, and most of all for creating the wonderful apps we have for the ubuntu phone. Your work has helped shape the device and turn it into what it is today.

Looking back over the schedule there were sessions I wish I had been able to attend, and it was wonderful catching up with everyone. Sadly my flight home prevented me from attending the closing session and presumably getting a summary of some of these sessions. I can say I was delighted to talk and interact with the unity8 team about the next steps for unity8 on the desktop. I trust next cycle we as a community can do more around testing their work.

As I head to the airport for home, it's time to celebrate the release of utopic unicorn!
on October 27, 2014 08:00 AM
Season of KDE (#SoK2014) was delayed a bit, but we're in business now:

http://heenamahour.blogspot.in/2014/10/season-of-kde-2014.html

Please stop by the ideas page if you need an idea. Otherwise, contact a KDE devel you've worked with before, and propose a project idea.

Once you have something, please head over to the Season of KDE website: https://season.kde.org and jump in. You can begin work as soon as you have a mentor sign off on your plan.

Student application deadline: Oct 31 2014, 12:00 am UTC - so spread the word! #SoK2014

Go go go!
on October 27, 2014 07:32 AM

October 26, 2014

I used to believe that computer mediated communication made the world a better place...

Have you ever noticed a couple sitting together not being "together"? Or perhaps a group of friends eating in a restaurant or enjoying drinks in a bar, but largely not interacting with one another? In these situations, the people that seem to be the centre of the event are the people that aren't there.

"Smart" phones, you make me ill. You are incentivizing human disconnection. You are weakening the bonds between people that inhabit the same space.

You are the ultimate expression of design fail.

You see, computer-mediated communications should not have a distance bias. Why only mediate conversation between people who are separated by distance? By doing so, you are creating, or at least accelerating, a culture of "not being there."

You see, the most important aspect of being beside another human being is enjoying that person in the moment, with full attention. Phone, you are just too dumb to realize it. Or are you simply conveniently ignoring it for the sake of a sociopathic business model?

Guess what? You know the two people in the photo are beside each other. You also know that they are in each other's contact list. You may even know that they're on a beach. It's a romantic place. Put two and two together, please.

Figure it out, phone! For the sake of humanity, this is not the 80's. It's time to wise up. Prince Ea and I and our posse are on to you...

http://www.youtube.com/watch?v=dRl8EIhrQjQ

---
Our best chance at a phone that respects humanity is here:
http://www.ubuntu.com/phone

More reasons "smart" phones aren't are here:
http://blog.josephliau.com/documenting-the-death-of-the-dumb-telephone-p...
http://randall.executiv.es/dumbphones03
http://blog.josephliau.com/documenting-the-death-of-the-dumb-telephone-p...
http://blog.josephliau.com/documenting-the-death-of-the-dumb-telephone-p...
http://randall.executiv.es/dumbphones02
http://randall.executiv.es/dumbphones01

---
image by Leo Reynolds
https://www.flickr.com/photos/lwr/

on October 26, 2014 10:18 PM

The Ubuntu Code of Conduct says:

Step down considerately: When somebody leaves or disengages from the project, we ask that they do so in a way that minimises disruption to the project. They should tell people they are leaving and take the proper steps to ensure that others can pick up where they left off.

I've been working on Ubuntu for over ten years now, almost right from the very start; I'm Canonical's employee #17 due to working out a notice period in my previous job, but I was one of the founding group of developers. I occasionally tell the story that Mark originally hired me mainly to work on what later became Launchpad Bugs due to my experience maintaining the Debian bug tracking system, but then not long afterwards Jeff Waugh got in touch and said "hey Colin, would you mind just sorting out some installable CD images for us?". This is where you imagine one of those movie time-lapse clocks ... At some point it became fairly clear that I was working on Ubuntu, and the bug system work fell to other people. Then, when Matt Zimmerman could no longer manage the entire Ubuntu team in Canonical by himself, Scott James Remnant and I stepped up to help him out. I did that for a couple of years, starting the Foundations team in the process. As the team grew I found that my interests really lay in hands-on development rather than in management, so I switched over to being the technical lead for Foundations, and have made my home there ever since. Over the years this has given me the opportunity to do all sorts of things, particularly working on our installers and on the GRUB boot loader, leading the development work on many of our archive maintenance tools, instituting the +1 maintenance effort and proposed-migration, and developing the Click package manager, and I've had the great pleasure of working with many exceptionally talented people.

However. In recent months I've been feeling a general sense of malaise and what I've come to recognise with hindsight as the symptoms of approaching burnout. I've been working long hours for a long time, and while I can draw on a lot of experience by now, it's been getting harder to summon the enthusiasm and creativity to go with that. I have a wonderful wife, amazing children, and lovely friends, and I want to be able to spend a bit more time with them. After ten years doing the same kinds of things, I've accreted history with and responsibility for a lot of projects. One of the things I always loved about Foundations was that it's a broad church, covering a wide range of software and with a correspondingly wide range of opportunities; but, over time, this has made it difficult for me to focus on things that are important because there are so many areas where I might be called upon to help. I thought about simply stepping down from the technical lead position and remaining in the same team, but I decided that that wouldn't make enough of a difference to what matters to me. I need a clean break and an opportunity to reset my habits before I burn out for real.

One of the things that has consistently held my interest through all of this has been making sure that the infrastructure for Ubuntu keeps running reliably and that other developers can work efficiently. As part of this, I've been able to do a lot of work over the years on Launchpad where it was a good fit with my remit: this has included significant performance improvements to archive publishing, moving most archive administration operations from excessively-privileged command-line operations to the webservice, making build cancellation reliable across the board, and moving live filesystem building from an unscalable ad-hoc collection of machines into the Launchpad build farm. The Launchpad development team has generally welcomed help with open arms, and in fact I joined the ~launchpad team last year.

So, the logical next step for me is to make this informal involvement permanent. As such, at the end of this year I will be moving from Ubuntu Foundations to the Launchpad engineering team.

This doesn't mean me leaving Ubuntu. Within Canonical, Launchpad development is currently organised under the Continuous Integration team, which is part of Ubuntu Engineering. I'll still be around in more or less the usual places and available for people to ask me questions. But I will in general be trying to reduce my involvement in Ubuntu proper to things that are closely related to the operation of Launchpad, and a small number of low-effort things that I'm interested enough in to find free time for them. I still need to sort out a lot of details, but it'll very likely involve me handing over project leadership of Click, drastically reducing my involvement in the installer, and looking for at least some help with boot loader work, among others. I don't expect my Debian involvement to change, and I may well find myself more motivated there now that it won't be so closely linked with my day job, although it's possible that I will pare some things back that I was mostly doing on Ubuntu's behalf. If you ask me for help with something over the next few months, expect me to be more likely to direct you to other people or suggest ways you can help yourself out, so that I can start disentangling myself from my current web of projects.

Please contact me sooner or later if you're interested in helping out with any of the things I'm visible in right now, and we can see what makes sense. I'm looking forward to this!

on October 26, 2014 09:55 PM

So Flask has a really convenient mechanism for registering handlers, actions to be run before/after requests, etc. Using decorators, Flask registers these functions to be called, as in:

@app.route('/')
def homepage_handler():
    return 'Hello World'

@app.before_request
def do_something_before_each_request():
    ...

This is pretty convenient and works really well, because it means you don't have to list all your routes in one place (like Django requires), but it comes with a cost: you can end up with Python modules that are only needed for the side effects of importing them. No functions from those modules are directly called from your other modules, but they still need to be imported somewhere to get the routes registered.

Of course, if you import a module just for its side effects, then pylint won't be aware that you need the import, and will helpfully suggest that you remove it. This generally isn't too bad: if you remove the import of a file with views defined in it, those views will just fail, you'll notice quickly, and you'll re-add the import.

On the other hand, if you're using a before_request function to, say, provide CSRF protection, then you'll have a serious problem. Of course, that's the case I found myself in. So you'll want to make sure that doesn't occur: either reference something from the module directly, or suppress pylint's unused-import warning for that line.

on October 26, 2014 06:51 PM
Over the past few weeks, in spare moments, I've been adding more stress methods to stress-ng, ready for Ubuntu 15.04 Vivid Vervet. My intention is to produce a rich set of stress methods that can stress and exercise many facets of a system, to force out bugs, catch thermal over-runs and generally torture a kernel in a controlled, repeatable manner.

I've also re-structured the tool in several ways to enhance the features and make it easier to maintain.  The cpu stress method has been re-worked to include nearly 40 different ways to stress a processor, covering:
  • Bit manipulation: bitops, crc16, hamming
  • Integer operations: int8, int16, int32, int64, rand
  • Floating point:  long double, double,  float, ln2, hyperbolic, trig
  • Recursion: ackermann, hanoi
  • Computation: correlate, euler, explog, fibonacci, gcd, gray, idct, matrixprod, nsqrt, omega, phi, prime, psi, rgb, sieve, sqrt, zeta
  • Hashing: jenkin, pjw
  • Control flow: jmp, loop
The intention was to have an eclectic mix of CPU exercising tests covering a wide range of typical operations found in computationally intensive software. Use the new --cpu-method option to select a specific CPU stressor, or --cpu-method all to exercise all of them sequentially.

I've also added more generic system stress methods too:
  • bigheap - re-allocs to force OOM killing
  • rename - rename files rapidly
  • utime - update file modification times to create lots of dirty file metadata
  • fstat - rapid fstat'ing of large quantities of files
  • qsort - sorting of large quantities of random data
  • msg - System V message sending/receiving
  • nice - rapid re-nicing processes
  • sigfpe - catch rapid division by zero errors using SIGFPE
  • rdrand - rapid reading of Intel random number generator using the rdrand instruction (Ivybridge and later CPUs only)
Other new options:
  • metrics-brief - this dumps out only the bogo-op metrics that are relevant for just the tests that were run.
  • verify - this will sanity-check the stress results per iteration to ensure memory operations and CPU computations are working as expected. Hopefully this will catch errors on a hot machine whose hardware is misbehaving.
  • sequential - this will run all the stress methods one by one (for a default of 60 seconds each) rather than all in parallel.   Use this with the --timeout option to run all the stress methods sequentially each for a specified amount of time. 
  • Specifying 0 instances of any stress method will run an instance of the stress method on all online CPUs. 
The tool also builds and runs on Debian kFreeBSD and GNU Hurd kernels, although some stress methods or options are not included due to lack of support on those kernels.
The stress-ng man page gives far more explanation of each stress method and more detailed examples of how to use the tool.
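A few illustrative invocations of the options described above (the flags come from the feature list; durations are arbitrary, and the block skips gracefully if stress-ng isn't installed):

```shell
# Illustrative stress-ng invocations; install with e.g.
#   sudo apt-get install stress-ng
if command -v stress-ng >/dev/null 2>&1; then
    # One CPU stressor per online CPU, using only the matrixprod method,
    # printing only the metrics for the tests that ran
    stress-ng --cpu 0 --cpu-method matrixprod --timeout 10s --metrics-brief
    # Sanity-check computation results while stressing two CPU instances
    stress-ng --cpu 2 --verify --timeout 10s
    # Run every stress method one after another, 10 seconds each
    stress-ng --sequential 0 --timeout 10s
    STRESS_RUN=yes
else
    echo "stress-ng not installed; commands shown for reference only"
    STRESS_RUN=no
fi
```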

For more details, visit here or read the manual.
on October 26, 2014 03:56 PM
Now you can use Folder Color in Ubuntu MATE 14.10 too!

Caja & Folder Color
Just run these commands in a Terminal:
sudo add-apt-repository ppa:costales/folder-color
sudo apt-get update
sudo apt-get install python-caja gir1.2-caja caja libgtk2.0-bin folder-color-caja

If your Ubuntu is 32bits:
sudo cp /usr/lib/i386-linux-gnu/girepository-1.0/Caja-2.0.typelib /usr/lib/girepository-1.0/

If your Ubuntu is 64bits:
sudo cp /usr/lib/x86_64-linux-gnu/girepository-1.0/Caja-2.0.typelib /usr/lib/girepository-1.0/

Finally, restart your file browser:
caja -q

PS: If you're using the Nemo file browser (Usually with Cinnamon) see how to install here.
on October 26, 2014 10:16 AM

October 25, 2014

After the success of their first Long-Term-Support (LTS) version in April this year, the Head of the Developer Team, Julien Lavergne, has finished work on the Utopic Unicorn, which can now be downloaded at https://help.ubuntu.com/community/Lubuntu/GetLubuntu.

Acting Release Manager, Walter Lapchynski, shortly after the release: “This cycle we mainly focused on fixing known bugs. But”, he adds, “there is a downside, too: due to several serious bugs, we had to skip PPC versions of the Unicorn. We recommend using the LTS version for now and hope that we are able to present a PPC version in April next year. For the moment we are still working on our plans to implement LXQt in either 15.04 or 15.10.”

Read more »
on October 25, 2014 10:05 PM

R2-D2 is not dumb. But my phone is. “[It] talks in maths. [It] buzzes like a fridge. [It's] like a detuned radio.”1

My phone has a communication problem. It beeps and boops, and sometimes screams to let me know that something is going on, but something is missing there. It’s all a bunch of noise. What exactly are you telling me, phone? Yes, there are some custom notifications to a certain degree, but normally they’re under the control of a 3rd party. How do I know the difference between an emergency, an update, or an unimportant piece of information without constantly having to look at my phone? The answer is NOT a watch. In that case, maybe my phone shouldn’t have notifications at all!

Is it possible to tell me who is contacting me, by what means, with what type of information, and deliver the message at an appropriate time and in an appropriate fashion?
Is it possible to communicate with my digital, social, and spatial environments and tell me when my ship’s hyperdrive has been deactivated BEFORE I attempt to make the jump to lightspeed?

A *smart* phone could do that.

Dumb phone, you can beep and boop all you want, but you’re not the phone I’m looking for. Into the garbage chute!

sop

[1] Radiohead – Karma Police

on October 25, 2014 08:35 PM

Review of Ubucon 2014 Germany

Sujeevan Vijayakumaran

Last weekend the German Ubucon took place in the town of Katlenburg-Lindau. It was the 8th Ubucon in Germany, and it was the third time that I attended an Ubucon as a visitor and as a speaker.

Preparation

It was the second time that I participated in the organisation of the Ubucon. Last year I was part of the organisation team for the event in Heidelberg. This time the organisation was rather "quiet"; in my opinion it was sometimes too quiet on the mailing list. The town where the event took place was rather small, so there were fewer speakers and also fewer visitors compared to the last two years. At first I didn't expect the event to be great, but luckily I was wrong!

The event

Friday

The first day of the event was Friday. All visitors and speakers got a name badge with their full name and their nickname at the front desk. Last year my name was actually too long; this time only one character was missing. At least I've got used to mistakes in my name. :-)

The opening keynote was held by Torsten Franz, who was also the head of the organisation team. After that, he gave a talk about "10 years Ubuntu, 10 years Community". Later, some of the visitors went to the first social event, which took place at a castle next to the school. Personally, I didn't go to this event.

Saturday

The second day started at 10 o'clock in the morning. It was the first time that I gave a workshop, which also started at that time. I talked about "Git for Beginners". At the beginning we had a few issues with the Wi-Fi. This also affected my workshop, because it took the participants a rather long time to download and install Git. I therefore changed a few things in my workshop so that the participants didn't need a working internet connection afterwards. I had planned about three hours, but we finished after roughly two and a half.

For the rest of the day, I didn't attend any talks; instead I talked to all the other nice people :-). In the evening we had two social events. A large part of the group went to "Theater der Nacht" ("Theatre of the Night"). The other, smaller part stayed at the school, where two people played live music. The live music was quite good, but everyone who went to the theatre said it was really great. It seems that I missed something. Bernhard took a few really nice photos there.

Sunday

On Sunday I attended only two talks. The first one was about LVM, the other about systemd. Both talks were held by Stefan J. Betz, and they were really informative and also a bit funny.

In the afternoon the Ubucon ended. A lot of people helped to clean up and pack everything, so many could leave earlier than expected.

The location

The location was great! I didn't expect a primary school to be a good place for an Ubucon, but it is! The technical infrastructure was really good: the school had several "smartboards" with projectors. In the entrance area there was a big hall where you could sit and talk if you weren't listening to a talk. In this hall there were also several computers with different Linux distributions and desktop environments.

It was the first time that we had a gaming lounge. There were two rooms containing four Ubuntu PCs with large TVs and also two table football tables. The idea was great and the rooms were nice; many people played games there. I hope that we will have a similar gaming lounge at future Ubucons.

All speakers got a nice gift bag from the local organisation team, mainly containing items from the region. In my bag there were a few sausages, wine, beer and a sauce. Personally, I don't eat or drink that stuff, but it was a really good idea and gesture!

At all our Ubucons, the entrance fee of 10€ covers food and drinks. In the last few years we had only two or three different types of bread roll. This time we had bread rolls again, but also Bockwurst and different types of soup. All of them were really tasty, and everybody had a wider choice of something they liked.

Conclusion

This year's Ubucon was great! Compared to last year's we had fewer attendees, but this time the organisation team in Katlenburg was really good. They had several really good ideas, like the gaming lounge and the gift bags for all speakers.

I simply hope that next year's Ubucon will be as good as this year's. The venue is not fixed yet; we are going to look for another place for next year. By moving the Ubucon every year, all attendees get to see different cities, and you can also meet new, nice people. The latter is my main reason for attending and helping to organise the Ubucon.

If you're looking for a few nice photos of this year's Ubucon, have a look here. Bernhard Hanakam took some really good photos.

on October 25, 2014 12:00 PM

I’m still buzzing from this morning. No, it’s not because of the “crystal meth”1; nor is it because of the amazing cold brew coffee2 that’s sitting in my fridge. I’m on a mental high from listening to a great mind. This morning I went to see Cory Doctorow at the Vancouver Writer’s Fest, and I’m a better person because of it.

I’ll admit that I wasn’t initially too keen on attending the Writer’s Fest, but I said to myself, “hey, this is Cory Doctorow.”
In fact, I’m not really that into books and reading much3… but this is Cory Doctorow.
And, I’m really not that entertained by copyright talk… but, hey, this is Cory Doctorow.

If it wasn’t obvious already, I’m a pretty big fan of Cory Doctorow. He’s kind of an Alchemist of the Internet Age, except that he’s not afraid to share his knowledge. I had followed him for a while on boingboing, and I was inspired enough to read Little Brother. (Before doing so, I thought I should read George Orwell’s 1984, and so I did … for the first time. Yes, I’m not very well-read… yet). Little Brother was so impressive that I continued to buy the audiobook of Homeland. I didn’t have to pay for it, but I chose to because I valued the author and his work, which completely supports Doctorow’s Laws for the Internet Age.

At the Vancouver Writer’s Fest, Cory Doctorow gave an overview of his new book, by eloquently summarizing three laws that he had come up with for the Internet Age. It was followed by a discussion on some of the values discussed in his writing. When asked about his views on “free and open source software,” Cory was quite excited to share Ubuntu with the crowd :)

The entire discussion was probably one of the best overviews of Internet freedom that I have ever heard, and having such a master-of-language deliver the message made it all the better. I was educated, entertained, and encouraged to read and write more freely. You might say that I have turned over a new page with regards to information.

doctorow

I’m still buzzing.

If you get a chance to see Cory Doctorow during his current tour, then by all means do so, because, hey, it’s Cory Doctorow!

 

 

[1] Those who attended the event will get the inside joke.
[2] I learned about this from Cory Doctorow via Little Brother.
[3] Irlen Syndrome

on October 25, 2014 05:20 AM

October 24, 2014

In 2013, Debian participated in both rounds of the GNOME Outreach Program for Women (OPW). The first round was run in conjunction with GSoC and the second round was a standalone program.

The publicity around these programs and the strength of the Google and Debian brands attracted a range of female candidates, many of whom were shortlisted by mentors after passing their coding tests and satisfying us that they had the capability to complete a project successfully. As there are only a limited number of places for GSoC and limited funding for OPW, only a subset of these capable candidates were actually selected. The second round of OPW, for example, was only able to select two women.

Google to the rescue

Many of the women applying for the second round of OPW in 2013 were also students eligible for GSoC 2014. Debian was lucky to have over twenty places funded for GSoC 2014 and those women who had started preparing project plans for OPW and getting to know the Debian community were in a strong position to be considered for GSoC.

Chandrika Parimoo, who applied to Debian for the first round of OPW in 2013, was selected by the Ganglia project for one of five GSoC slots. Chandrika made contributions to PyNag and the ganglia-nagios-bridge.

Juliana Louback, who applied to Debian during the second round of OPW in 2013, was selected for one of Debian's GSoC 2014 slots working on the Debian WebRTC portal. The portal is built using JSCommunicator, a generic HTML5 softphone designed to be integrated in other web sites, portal frameworks and CMS systems.

Juliana has been particularly enthusiastic with her work, and after she completed the core requirements of her project, I suggested she explore just what is involved in embedding JSCommunicator into another open source application. By coincidence, the xTuple development team had decided to dedicate the month of August to open source engagement, running a program called haxTuple. Juliana had originally applied to OPW with an interest in financial software, so this appeared to be a great opportunity for her to broaden her experience and engagement with the open source community.

Despite having no prior experience with ERP/CRM software, Juliana set about developing a plugin/extension for the new xTuple web frontend. She has published the extension in Github and written a detailed blog about her experience with the xTuple extension API.

Participation in DebConf14

Juliana attended DebConf14 in Portland and gave a presentation of her work on the Debian RTC portal. Many more people were able to try the portal for the first time thanks to her participation in DebConf. The video of the GSoC students at DebConf14 is available here.

Continuing with open source beyond GSoC

Although GSoC finished in August, xTuple invited Juliana and me to attend their annual xTupleCon in Norfolk, Virginia. Google went the extra mile and helped Juliana get there, and she gave a live demonstration of the xTuple extension she had created. This effort has simultaneously raised the profile of Debian, open source and open standards (SIP and WebRTC) in front of a wider audience of professional developers and business users.

Juliana describes her work at xTupleCon, Norfolk, 15 October 2014

It started with OPW

The key point to emphasize is that Juliana's work in GSoC was actually made possible by Debian's decision to participate in and promote Outreach Program for Women in 2013.

I've previously attended DebConf myself to help more developers become familiar with free and open RTC technology. I wasn't able to get there this year but thanks to the way GSoC and OPW are expanding our community, Juliana was there to help out.

on October 24, 2014 11:53 PM

This Giant Octopus that I’m talking about is GOOGLE. Google has its giant arms everywhere in the tech world, and its mind is on only one thing: PRIVACY INVASION.

Today, I read a post by Oli Warner about Paypal’s Android app and the permissions it requires the user to accept when installing or updating (see image on right; credit: Oli). Google is the only one that forces users to allow all of an app's requested permissions at install time. This makes it easy for developers, or even a hacker, to take your data and do whatever they want with it. That is a huge risk people take when they don’t read the permissions before they install or update.

I ask you to protect yourself from Google’s evil: use CyanogenMod with its Privacy Guard, or some other app that protects you. Or even better, install F-Droid and go Google-free. Also, please use Firefox, not Chrome.

There are other evils that Google has but that will be another post for another day.

P.S Read THIS also.

P.P.S.: I want to thank Oli for his post. It’s something that I had ranted about but never really written a post on.


on October 24, 2014 09:03 PM

This title is kind of a misnomer, as of course, all this goodness is available to Ubuntu 14.04 users, so it’s more of a “Things that happen to line up with” Ubuntu 14.10.

More new items next week, hint hint!

on October 24, 2014 05:35 PM

Sprinting in DC

Alan Pope

For the last week I’ve been working with 230 other Ubuntu people in Washington, DC. We have sprints like this pretty frequently now, and they are a great way to collaborate and Get Things Done™ at high velocity.

This is the second sprint where we’ve invited some of the developers who are blazing a trail with our Core Apps project. Not everyone could make it to the sprint, and those who didn’t were certainly missed. These are people who give their own time to work on some of the featured and default apps on the Ubuntu Phone, and perhaps in the future on the converged desktop.

It’s been a busy week with discussion and planning punctuating intense hacking sessions. Once again I’m proud of the patience, professionalism and hard work of these guys, who are bringing up our core apps project on a phone platform that hasn’t even shipped a single device yet!

We’ve spent much of the week discussing and resolving design issues, fixing performance bugs, crashers and platform integration issues, as well as the odd game of ‘Cards Against Humanity’ & ‘We Didn’t Playtest This At All’ in the bar afterwards.

Having 10 community developers in the same place as 200+ Canonical people accelerates things tremendously. Being able to go and sit with the SDK team allowed Robert Schroll to express his issues with the tools when developing Beru, the ebook reader. When Filippo Scognamiglio needed help with mouse and touch input, we could grab Florian Boucault and Daniel d’Andrada to provide tips. Having Renato Filho nearby to fix problems in Evolution Data Server allowed Kunal Parmar and Mihir Soni to resolve calendar issues. The list goes on.

All week we’ve been collaborating towards a common goal of high quality, beautiful, performant and stable applications for the phone today, and desktop of the future. It’s been an incredibly fun and productive week, and I’m a little sad to be heading home today. But I’m happy that we’ve had this time together to improve the free software we all care deeply about.

The relationships built up during these sprints will of course endure. We all exchange email addresses and IRC nicknames, so we can continue the conversation once the sprint is over. Development and meetings will continue beyond the sprint, in the virtual world of IRC, hangouts and mailing lists.

on October 24, 2014 05:17 PM
Kubuntu T-shirts and Polo Shirts are available again. This time our supplier is HelloTux, who are working with the Hungarian Ubuntu LoCo. $3 from each shirt goes to Kubuntu and $1.50 to the Hungarian LoCo team.
on October 24, 2014 03:51 PM

The SSLv3 “POODLE” Vulnerability.

Most of us are aware of the recent protocol flaw vulnerability in SSLv3. Officially designated CVE-2014-3566, it is more commonly referred to as the “POODLE” (Padding Oracle On Downgraded Legacy Encryption) vulnerability.

The vulnerability is a result of a flaw in the way that the (now old) SSLv3 protocol behaves and operates. There is a Ubuntu-specific question on the POODLE vulnerability on Ask Ubuntu (link) which answers common questions on it. There is also a more general question on the POODLE vulnerability on the Information Security Stack Exchange site (link) with more general details on the POODLE vulnerability. If you would like more details, you should refer to those sites, or read the OpenSSL Whitepaper on the POODLE vulnerability (link).

As this is a protocol flaw in SSLv3, ALL implementations of SSLv3 are affected, so the only way to truly protect against POODLE is to disable SSLv3 protocol support in your web application, whether it be software you write, or hosted by a web server.


Disable SSLv3 in nginx:

Since the recommendation is to no longer use SSLv3, the simplest thing to do is disable SSLv3 for your site. In nginx, this is very simple to achieve.

Typically, one would have SSL enabled on their site with the following protocols line or similar if using the example in the default-shipped configuration files (in latest Debian or the NGINX PPAs, prior to the latest updates that happened in the past week or so):
ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;

To resolve this issue and disable SSLv3 support, we merely need to use the following instead to use only TLS:
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

Note that on really old implementations of OpenSSL, you won’t be able to get TLSv1.1 and TLSv1.2, so at the very least you can just have TLSv1 on the ssl_protocols line. You should probably consider updating to a more recent version of OpenSSL, though, because of other risks/issues in OpenSSL.
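Putting it together, a pared-down server block might look like this (a sketch only: the hostname and certificate paths are placeholders, not from this post):

```nginx
server {
    listen 443 ssl;
    server_name example.com;

    # Placeholder certificate paths
    ssl_certificate     /etc/ssl/certs/example.com.pem;
    ssl_certificate_key /etc/ssl/private/example.com.key;

    # SSLv3 removed: only TLS is offered, which blocks POODLE
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
}
```

After editing, check the config and reload with nginx -t && sudo service nginx reload. You should then be able to confirm SSLv3 is refused with openssl s_client -connect example.com:443 -ssl3, which should fail to complete a handshake.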


Update OpenSSL to get TLS_FALLBACK_SCSV Support:

More important than just disabling SSLv3, you should definitely update your OpenSSL, or whatever SSL implementation you use, to get support for TLS_FALLBACK_SCSV. Without it, there is an attack vector in which a connection that starts with TLS is forced to fall back to SSLv3, which would then expose you to POODLE. By updating and gaining TLS_FALLBACK_SCSV support, you protect yourself from the protocol downgrade attacks that would otherwise make you vulnerable to POODLE.


Ubuntu Users:

OpenSSL:

Fortunately for all users of Ubuntu, the OpenSSL packages were updated to protect against SSL downgrade attacks. This is detailed in “USN-2385-1: OpenSSL vulnerabilities” (link). Simply running sudo apt-get update followed by sudo apt-get upgrade, with the security repositories enabled, should get you the OpenSSL update to address this.

nginx from the Ubuntu Repositories:

Due to the vulnerability, and Debian already having these changes done, I was able to get in a last-minute update (courtesy of the Ubuntu Security Team and the Ubuntu Release Team), into the nginx package for the Utopic (14.10) release, which happened officially yesterday (October 23, 2014). In Utopic, the nginx package’s default config does NOT have SSLv3 on the ssl_protocols line. All other supported versions of Ubuntu do not have this change (this means that Precise and Trusty are both affected).

PPA Users:

Of course, many users of Ubuntu and nginx like the newer features of the latest nginx Stable or Mainline releases. This is why the nginx PPAs exist. Originally maintained by some of the Debian maintainers of the nginx package, I’ve taken primary responsibility of updating the nginx packages, and keeping them in sync (as close as I can) to the Debian nginx packaging.

As of today (October 24, 2014), both the Stable and Mainline PPAs have been updated to be in sync with the latest Debian packaging of the nginx package. This includes the removal of SSLv3 from the default ssl_protocols line.


Debian Users:

OpenSSL:

Fortunately, like Ubuntu, Debian has also updated the OpenSSL packages to protect against SSL downgrade attacks. This is detailed in “DSA-3053-1 openssl — security update” (link). As on Ubuntu, this can be fixed by running sudo apt-get update followed by sudo apt-get upgrade (or similar) to update your packages.

nginx in the Debian Repositories:

If you are on Debian Unstable, you are in luck. The Debian package in Unstable has this change in it already.

If you are on Debian Testing, Stable or Old Stable, you're unfortunately out of luck; this change isn't in those versions of the package yet. You can easily make the aforementioned changes yourself, though, and fix your configs to disable SSLv3.

on October 24, 2014 02:54 PM

Who actually checks the permissions of applications they're installing? A little while ago a Paypal update stalled because it required extra permissions. This is what happens if an app you have already installed wants more power. I was more than a little surprised with what I found.

Update: Unfortunately some of the /r/Android and /r/Technology readers don't seem to be making it past the title. Rather than repeatedly telling me why Paypal might occasionally need access to my camera, perhaps consider why I need to give it permanent access. And why do I have to give something access for features I don't use? This, as you'll see if you keep reading, can be solved by both Paypal and Google.

It's easy to overlook app permissions. After all, you want something, and if there's no tangible sacrifice attached to it, people don't see the problem.

I do. I look after a few servers; security is something that's always in or around my consciousness. The prime tenet of data security is to only give access to things that need it, ideally only when they need it.

The Paypal app can, as it turns out, do a raft of things with your peripheral hardware, like reading magnetic stripes, scanning credit cards and OCRing cheques. I've still no idea why it needs SMS/MMS, calendar, location and app inspection access... So answers on a postcard.

That isn't really the point. My first problem comes in that Paypal are normalising applications doing a permission land-grab at install time. Something that was installed to let me do lightweight management of my account (and get notifications) has mutated into this beast that wants permanent access to my physical life.

Now, you can probably trust Paypal; they've only been shown to be moderately evil in the past... But who is to say that will always be true. They could decide to monetise this access. Or they could get hacked. Or another app could manipulate it to escalate its own privileges. In any case the result is the same: it can track you, it can watch you, it can hear you and it can smuggle data off your phone without you ever realising. You're installing the perfect tracking, wiretapping bug.

There is an argument that Android should be marshalling access to privileges better but before I get there, Paypal could and should be more considerate about what they're asking users to hand over. They could easily split the application out into plugins and distribute those in separate packages with their own privileges. It would leave the core application svelte, concentrated on core functionality, allowing cranky old users like me their simple, secure access and giving coffee-shop-hopping Alice and Bob all the naff features they want to trade for their privacy.

But the biggest issue, as comments are highlighting, is how Android allows developers to request permissions. It all has to be done at install time, and it's all or nothing. If the user won't accept it, they can't install or update. They have to uninstall or ignore the updates, which is obviously another massive security issue.

If an iOS app wants to use the camera, you're asked when it wants to use the camera. That might seem like Vista's UAC all over again, but that's the call here... And I think Apple are on the better side. Android needs to start thinking about permissions in an interactive sense.

Back to Paypal. Given I only use the Paypal app to manage my Paypal account, I decided to uninstall it.

There has been a great discussion following this on Hacker News. I particularly like some of the interface suggestions on how this could work without being annoying. Google could learn something from this dialogue.

on October 24, 2014 01:56 PM

Sprinting in DC: Thursday

Nicholas Skaggs

This week, my team and I are sprinting with many of the core app developers and other folks inside of Ubuntu Engineering. Each day I'm attempting to give you a glimpse of what's happening.

Today started with some planning for UOS, which is happening in a couple of short weeks. If you haven't yet put it on your calendar, please do so! And plan to not only attend, but consider submitting a session as well. The users track might be just the place for your session. Session topics can be about anything Ubuntu-related you might want to share or discuss with others.

As the week has progressed I've enjoyed getting to know the core apps developers better. Today we met with all of them to hear feedback on how the projects have been going. There was lots of good discussion about how things like meetings and reviews work, individual project needs, and actions that could be taken to improve all of the projects. It's wonderful to have everyone in the same place and able to talk.

After lunch the QA team discussed manual testing and proposed utilizing moztrap for some of the manual testing they are undertaking as part of the CI process for ubuntu touch images. While it is too early to say what implications this will have on manual testing from a community perspective, I'm happy to see the conversation has begun around the current issues facing manual tests. I'm also happy someone else is willing to be a guinea pig for changes like this! For image testing, the qatracker has served us well and will continue to do so, but I hope in the future we can improve the experience. In fact, we have done work in this area recently, and would love to hear from anyone who wants to help improve the qatracker experience. So, whether or not a migration to moztrap occurs at some point, the future looks bright.

The core app developers also got a chance to both give and receive feedback with the SDK and design teams. The deep dives into applications like calendar were very much appreciated, and I expect those suggestions will filter into the applications in the near future. As usual, the core apps developers came prepared with suggestions and grievances for the SDK team, as well as praise for things done well.

Finally, to end the day, we discussed developer mode on the device. Rather than talk about the history of how it was implemented, let me share the future: rather than locking adb access via a password, we'll use certificates. The password-based solution already ensures your locked device isn't vulnerable to nefarious humans who might want to connect and steal your data or reflash your phone. However, things like passwordless sudo will become possible with certificates. In addition, if security is the bane of your existence, you will be able to enable developer mode without setting a password at all.

Whew, today was very full!
on October 24, 2014 07:30 AM

Hi everyone,

With each and every cycle, the same question is being asked over and over again:

Should I upgrade to the latest release, or should I keep my system as it is?

Well, luckily with Ubuntu GNOME, you don’t need to worry much or be confused at all. We shall make life super easy for you so relax and read this post :)

As of this very moment, Ubuntu GNOME has ONLY two main releases:

  1. Ubuntu GNOME 14.04 (Trusty Tahr) LTS.
  2. Ubuntu GNOME 14.10 (Utopic Unicorn).

Ubuntu GNOME 14.04 LTS was released in April 2014, while Ubuntu GNOME 14.10 was released yesterday.

The answer to the endless question is very easy, more than what you may think:

  • If you would like to run and use the latest packages/software we are offering with our latest version of Ubuntu GNOME – that is 14.10 Utopic Unicorn – AND you do not mind a short-term support release (9 months only), THEN go ahead and upgrade – please read this.
  • If you would like to run and use a rock-solid system with long-term support (3 years) AND you do not mind missing out on the latest packages/software, THEN do not upgrade and stick with Ubuntu GNOME 14.04 LTS.
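If you do decide to upgrade, the commands below are a minimal sketch of performing the upgrade from a terminal. One detail worth noting: on an LTS release such as 14.04, the release upgrader only offers other LTS releases by default, so the `Prompt` setting in `/etc/update-manager/release-upgrades` must be changed to `normal` before 14.10 will be offered.

```shell
# Make sure the current system is fully up to date first
sudo apt-get update && sudo apt-get dist-upgrade

# On 14.04 LTS, allow upgrades to non-LTS releases such as 14.10
# by setting Prompt=normal in the release-upgrades config
sudo sed -i 's/^Prompt=.*/Prompt=normal/' /etc/update-manager/release-upgrades

# Launch the release upgrader (this will offer 14.10 Utopic Unicorn)
sudo do-release-upgrade
```

The same thing can also be done graphically through Software Updater once the `Prompt` setting has been changed.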

Tip: aside from upgrading, you can of course always do a fresh install instead – but please do NOT forget to back up your files either way. Better safe than sorry.

So, the mystery is solved. You need to ask yourself before asking anyone:

What do I need?

Run the latest release, or run the long-term supported release? You are the only one who knows the answer to that question, and we have tried to make life easier for you. Now you know what to do with each and every cycle ;)

Truth be told, the Ubuntu/Canonical team has made life easier. Either you keep and use the LTS version, which is supported for 3 or 5 years (depending on which flavour you are using), OR you use the latest release and keep upgrading (or doing a fresh install) every 6 months.

You need to understand there is NOTHING wrong with keeping the old version and NOTHING wrong with upgrading to the latest one. It is entirely up to the user to decide based on his/her needs.

By the way, this applies to Ubuntu and all its official flavours. Starting with 14.04 (Trusty Tahr), Ubuntu and all its official flavours have LTS releases.

Hope this helps the many who are confused and keep asking :)

Thank you for choosing and using Ubuntu GNOME!

Ali/amjjawad
Non-Technical Leader of Ubuntu GNOME

on October 24, 2014 06:05 AM

Bad Voltage Turns 1

Jono Bacon

Today Bad Voltage celebrates our first birthday. We plan on celebrating it by having someone else blow out our birthday candles while we smash a cake and quietly defecate on ourselves.

For those of you unaware of the show, Bad Voltage is an Open Source, technology, and “other things we find interesting” podcast featuring Stuart Langridge (LugRadio, Shot Of Jaq), Bryan Lunduke (Linux Action Show), Jeremy Garcia (Linux Questions), and myself (LugRadio, Shot Of Jaq). The show takes a fun but informed look at various topics, and includes interviews, reviews, competitions, and challenges.

Over the last year we have covered quite the plethora of topics. This has included VR, backups, atheism, ElementaryOS, guns, bitcoin, biohacking, PS4 vs. XBOX, kids and coding, crowdfunding, genetics, Open Source health, 3D printed weapons, the GPL, work/life balance, Open Source political parties, the right to be forgotten, smart-watches, equality, Mozilla, tech conferences, tech on TV, and more.

We have interviewed some awesome guests including Chris Anderson (Wired), Tim O’Reilly (O’Reilly Media), Greg Kroah-Hartman (Linux), Miguel de Icaza (Xamarin/GNOME), Stormy Peters (Mozilla), Simon Phipps (OSI), Jeff Atwood (Discourse), Emma Marshall (System76), Graham Morrison (Linux Voice), Matthew Miller (Fedora), Ilan Rabinovitch (Southern California Linux Expo), Daniel Foré (Elementary), Christian Schaller (Red Hat), Matthew Garrett (Linux), Zohar Babin (Kaltura), Steven J. Vaughan-Nichols (ZDNet), and others.

…and then there are the competitions and challenges. We had a debate where we had to take the opposite viewpoints of what we think, we had a rocking poetry contest, challenged our listeners to mash up the shows to humiliate us, ran a selfie competition, and more. In many cases we punished each other when we lost and even tried to take on a sausage company.

It is all a lot of fun, and if you haven’t checked the show out, be sure to head over to www.badvoltage.org and load up on some shows.

One of the most awesome aspects of Bad Voltage is our community. Our pad is at community.badvoltage.org, and we have a fantastically diverse community of different ideas, perspectives, and viewpoints. In many cases we have discussed a topic on the show and there has been a long, interesting (and always respectful) debate on the forum. It is so much fun to be around.

I just want to say a huge thank-you to everyone who has supported the show and stuck with us through our first year. We have a lot of fun doing it, but the Bad Voltage community makes every ounce of effort worthwhile. I also want to thank my fellow presenters, Bryan, Stuart, and Jeremy; it is a pleasure getting to shoot the proverbial with you guys every few weeks.

Live Voltage!

Before I wrap up, I need to share an important piece of information. The Bad Voltage team will be performing our very first live show at the Southern California Linux Expo on the evening of Friday 20th Feb 2015 in Los Angeles.

We can’t think of a better place to do our first live show than SCALE, and we hope to see you there!

on October 24, 2014 04:39 AM