December 19, 2014

This post serves as a notice regarding the BREACH vulnerability and NGINX.

For Ubuntu, Debian, and PPA users: if you are on 1.6.2-5 (or 1.7.8 from the PPAs), the default configuration has GZIP compression enabled, which means BREACH is not mitigated on your sites by default. You need to look into whether you are actually affected by BREACH, and if you are, consider mitigation steps.


What is it?

Unlike CRIME, which attacks TLS/SPDY compression and is mitigated by disabling TLS-level compression, BREACH attacks HTTP responses. These are compressed using ordinary HTTP compression, which is far more widely deployed than TLS-level compression. This allows essentially the same attack demonstrated by Duong and Rizzo, but without relying on TLS-level compression (as they anticipated).

BREACH is a category of vulnerabilities and not a specific instance affecting a specific piece of software. To be vulnerable, a web application must:

  • Be served from a server that uses HTTP-level compression
  • Reflect user input in HTTP response bodies
  • Reflect a secret (such as a CSRF token) in HTTP response bodies

Additionally, while not strictly a requirement, the attack is helped greatly by responses that remain mostly the same (modulo the attacker’s guess). This is because the difference in size of the responses measured by the attacker can be quite small. Any noise in the side-channel makes the attack more difficult (though not impossible).

It is important to note that the attack is agnostic to the TLS/SSL version and does not require TLS-level compression. Additionally, the attack works against any cipher suite. Against a stream cipher the attack is simpler, since the difference in sizes across response bodies is more granular in that case. If a block cipher is used, additional work must be done to align the output to the ciphertext blocks.

How practical is it?

The BREACH attack can be exploited with just a few thousand requests, and can be executed in under a minute. The number of requests required will depend on the secret size. The power of the attack comes from the fact that it allows guessing a secret one character at a time.

Am I affected?

If you have an HTTP response body that meets all the following conditions, you might be vulnerable:

  • Compression – Your page is served with HTTP compression enabled (GZIP / DEFLATE)
  • User Data – Your page reflects user data via query string parameters, POST …
  • A Secret – Your application page serves Personally Identifiable Information (PII), a CSRF token, sensitive data …

Mitigations

NOTE: The BREACH Attack Information Site offers several tactics for mitigating the attack; unfortunately, its authors are not aware of a clean, effective, practical solution to the problem. Some of these mitigations are more practical than others, and a single change can cover an entire application, while others are page-specific.

The mitigations are ordered by effectiveness (not by their practicality – as this may differ from one application to another).

  1. Disabling HTTP compression
  2. Separating secrets from user input
  3. Randomizing secrets per request
  4. Masking secrets (effectively randomizing by XORing with a random secret per request)
  5. Protecting vulnerable pages with CSRF protection
  6. Length hiding (by adding a random number of bytes to the responses)
  7. Rate-limiting the requests.

Whichever mitigation you choose, it is strongly recommended you also monitor your traffic to detect attempted attacks.


Mitigation Tactics and Practicality

Unfortunately, the practicality of the listed mitigation tactics varies widely. Practicality depends on the application you are working with, and in many cases it is not possible to simply disable GZIP compression outright, given the size of the content being served.

This blog post will cover, in varying detail, three mitigation methods: disabling HTTP compression, randomizing secrets per request, and length hiding (using this site as a reference for the descriptions here).

Disabling HTTP Compression

This is the simplest and most effective mitigation tactic, but it is not always workable: there is a chance your application actually requires GZIP compression. If GZIP compression is needed in your environment, you should not use this option. If your application and use case do not require GZIP compression, however, this is an easy fix.

To disable GZIP globally on your NGINX instances, add this directive to the http block in nginx.conf: gzip off;.

To disable GZIP for specific sites rather than globally, add the same directive to the server block in each site's configuration instead.
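
For example, a minimal sketch (the server name and listen port here are illustrative, not from your configuration):

# Globally, in the http block of /etc/nginx/nginx.conf:
http {
    gzip off;
}

# Or per-site, in the server block of a site's configuration:
server {
    listen 80;
    server_name example.com;
    gzip off;
}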

If you are using NGINX from the Ubuntu or Debian repositories, or the NGINX PPAs, you should check your /etc/nginx/nginx.conf file for gzip on; and either comment it out or change it to gzip off;.

However, if disabling GZIP compression is not an option for your sites, then consider looking into other mitigation methods.

Randomizing secrets per request or masking secrets

Unfortunately, this method gets the least detail here: secrets are handled at the application level, not the NGINX level. If you have the capability to modify your application, change it to randomize its secrets with each request, or to mask them. If that is not an option, consider another method of mitigation.
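
For illustration only (this is application-side logic, not NGINX functionality, and the function names are hypothetical), here is a minimal Python sketch of the masking approach: the token is XORed with a fresh one-time pad on every response, so the bytes reflected in the page differ per request even though the underlying secret is constant:

import os

def mask_token(secret: bytes) -> bytes:
    # Fresh random pad for every response
    pad = os.urandom(len(secret))
    masked = bytes(s ^ p for s, p in zip(secret, pad))
    # Emit pad || masked; this pair changes on every request
    return pad + masked

def unmask_token(blob: bytes) -> bytes:
    # XOR is self-inverse: pad ^ masked recovers the secret
    half = len(blob) // 2
    pad, masked = blob[:half], blob[half:]
    return bytes(p ^ m for p, m in zip(pad, masked))

token = os.urandom(16)
assert unmask_token(mask_token(token)) == token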

Length hiding

Length hiding can be done by NGINX; however, it is not currently available in the NGINX packages in Ubuntu, Debian, or the PPAs.

It can also be done on the application side, but it is easier to update an NGINX configuration than to modify and redeploy an application when you need to enable or disable this in a production environment. A Length Hiding Filter Module has been written by Nulab; it appends randomly generated HTML comments to the end of an HTML response to hide the true length and make it difficult for attackers to guess secret information.

An example of such a comment added by the module is as follows:

<!-- random-length HTML comment: E9pigGJlQjh2gyuwkn1TbRI974ct3ba5bFe7LngZKERr29banTtCizBftbMk0mgRq8N1ltPtxgY -->

NOTE: Until packaging becomes available that includes this module, you will need to compile NGINX from the source tarballs to use this method.

To enable the module, compile NGINX from source with the module added, then place the length_hiding directive in the http, server, or location block of your configuration: length_hiding on;
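
For example, an illustrative server block (assuming NGINX was built with the module; the name and port are placeholders):

server {
    listen 443 ssl;
    server_name example.com;
    length_hiding on;
}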


Special Packaging of NGINX PPA with Length Hiding Enabled

I am currently working on building NGINX stable and mainline with the Length Hiding module included in all variants of the package that have SSL enabled. These builds will eventually be available in separate PPAs for stable and mainline.

Until then, I strongly suggest that you look into whether you can operate without GZIP compression enabled, or look into one of the other methods of mitigating this issue.

on December 19, 2014 01:08 AM

Hi,

Ubuntu GNOME Team is glad to announce the availability of the first milestone (Alpha 1) for Ubuntu GNOME Vivid Vervet (15.04).

Kindly take the time to read the release notes:
https://wiki.ubuntu.com/VividVervet/Alpha1/UbuntuGNOME

We would like to thank our great, helpful and very supportive testers. They responded to our urgent call for help in no time. Having high-quality testers on the team makes us more confident that this cycle will be extraordinary and, needless to mention, that is an endless motivation for us to do more and give more. Thank you so much again to all those who helped to test the Alpha 1 images.

As always, if you need more information about testing, please see this page.

And don’t hesitate to contact us if you have any questions, feedback, notes, suggestions, etc. – please see this page.

Thank you for choosing, testing and using Ubuntu GNOME!

on December 19, 2014 12:49 AM

December 18, 2014

Colin King

It is approaching the Christmas Holiday season, so it's that time again to write some slightly obfuscated C in a seasonal way.  This year I thought I would try some coloured ASCII art for the output for a little variety.

#define r(s) s[e%(sizeof s-1)]
#include <stdio.h> /* */
#define S "%s"/* Have */
#define u printf(/* a */
#define c )J/* Merry */
#define W "H"/* Christmas */
#define e rand()/* and */
#define U(i) v[i]/* a */
#define C(q) q[]=/* Happy */
#define J ;;/* New Year */
#define O [v]/* Colin.I.King */


typedef a
; a m, v[6] ,
H;a main(
){char C(
o){033,91,0},C(D)
"*Oo", C(t
)"^~#",Q[ ]=
"13747516",C(s
)".x+*";u S"2"
"J%s0;0H%s0"
";37;40"
"m",o,o, o
c while(U(!!m)
<22)u S"%dm%39s\n" ,
o,0 O++>19?42:',',""
c while(0 O++<'~') u S
"%d;%dH%s44m%c",o,e%21,e
%39,o,r(s)c for(J){1 O=1
-U(1),srand(v),u S"0;0"W S
"0;2;%dm",o,o,' 'c for(m=0
;m>>4<1;++m){u S"%d;%d"W,o
, m+2,20-m c;for(H=0;H<1+(m
<<1);H++){4 O=!H|H==m<<1 ,
2 O=!(e&05),U(3)=H>m*5/
3,5 O=r(D)J if(4 O|U(
2)){u S"%d;%d;3%cm"
"%c",o,U(4)?','
:'*',3 O?2:1+(U(1)^(1&e)),r(Q),U(5)c}else u S"42;32\
;%dm%c",o,1+3 O,r(t)c u S"0m",o c} }while(m<19)u S"\
%d;19"W S"33;2;7m #\n",o,1+ ++m,o c sleep(m>=-H c}}

The source can be downloaded from here and compiled and run as follows:

gcc snowman.c -o snowman
./snowman

and press control-C to exit when you have seen enough.
on December 18, 2014 11:32 PM
"How much wood could a woodchuck chuck if a woodchuck could chuck wood?" - Guybrush Threepwood, Monkey Island

The first Alpha of Vivid (to become 15.04) has now been released!

The Alpha-1 images can be downloaded at: http://cdimage.ubuntu.com/kubuntu/releases/vivid/alpha-1/

More information on Kubuntu Alpha-1 can be found here: https://wiki.ubuntu.com/VividVervet/Alpha1/Kubuntu

on December 18, 2014 10:18 PM


Wargames.  Hackers.  Swordfish.  Superman 3.  Jurassic Park.  GoldenEye.  The Matrix.

You've all seen the high-stakes hacking scene, packed with techno-babble and dripping with drama.  And the command-and-control center with dozens of over-sized monitors, overloaded with scrolling text...

I was stuck on a plane a few weeks back, traveling home from Las Vegas, and the in flight WiFi was down.  I know, I know.  Real world problems.  Suddenly, I had 2 hours on my hands, without access to email, IRC, or any other distractions.

It's at this point I turned to my folder of unfinished ideas, and cherry-picked one that would take just a couple of fun hours to hack.  And I'm pleased to introduce the fruits of that, um, labor -- the hollywood package for Ubuntu :-)  Call it an early Christmas present!  All code is on both Launchpad and Github.


If you're already running Vivid (Ubuntu 15.04) -- I salute you! -- and you can simply:

sudo apt-get install hollywood

If you're on any other version of Ubuntu, you'll need to:

sudo apt-add-repository ppa:hollywood/ppa
sudo apt-get update
sudo apt-get install hollywood

Fire up a terminal, maximize it, open byobu, and run the hollywood command.  Then sit back and soak into the trance...

I recently jumped on the vertical monitor bandwagon, for my secondary display.  It's fantastic for reading and writing code.  It's also hollywood-worthy ;-)


How does all of this work?

For starters, it's all running in a Byobu (tmux) session, which enables us to split a single shell console into a bunch of "panes" or "splits".

The hollywood package depends on a handful of utilities that I found (mostly by apt-cache searching the Ubuntu archives for monitors and utilities).  You can find a handful of scripts in /usr/lib/hollywood/.  Each of these is a "driver" for a widget that might run in one of these splits.  And ccze is magical: it accepts input on stdin and colorizes the text.

In fact, they're quite easy to write :-)  I'm happy to accept contributions of new driver widgets, as long as you follow a couple of simple rules.  Each widget:
  • Must run as a regular, non-root user
  • Must not eat all available CPU, Disk, or Memory
  • Must not write data
  • Must run indefinitely, until receiving a Ctrl-C
  • Must look hollywood cool!
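For a concrete (and entirely hypothetical) example, a driver satisfying these rules could be as simple as a shell loop that prints the load average once a second until interrupted:

#!/bin/sh
# Hypothetical driver, e.g. /usr/lib/hollywood/loadavg.
# Runs as a normal user, writes nothing, and loops until Ctrl-C.
while true; do
    uptime
    sleep 1
done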
So far, we have widgets that: generate passphrases encoded in NATO phonetic, monitor and render network bandwidth, emulate The Matrix, find and display, with syntax highlighting, source code on the system, show a bunch of error codes, hexdump a bunch of binaries, monitor some processes, render some images to ASCII art, colorize some log files, open random manpages, generate SSH keys and show their random art, stat a bunch of inodes in /proc and /sys and /dev, and show the tree output of some directories.

I also grabbed a copy of the Mission Impossible theme song, licensed under the Creative Commons.  I played it in the Totem music player in Ubuntu, with the Monoscope visual effect, and recorded a screencast with gtk-recordmydesktop.  I then mixed the output .ogv file, with the original .mp3 file, and transcoded it to mp4/h264/aac, reducing the audio bitrate to 64k and frame size to 128x96, using this command:
avconv -i missionimpossible.ogv -i MissionImpossibleTheme.mp3 -s 128x96 -b 64k -vcodec libx264 -acodec aac -f mpegts -strict experimental -y mi.mp4

Then, hollywood plays it in one of the splits with mplayer's ascii art video output on the console :-)

DISPLAY= mplayer -vo caca /usr/share/hollywood/mi.mp4

Sound totally cheesy?  Why, yes, it is :-)  That's the point :-)

Oh, and by the way...  If you ever sit down at someone else's Linux PC, and want to freak them out a little, just type:

ubuntu@x230:~⟫ PS1="root@$(hostname):~# "; clear 
root@x230:~# 

And then have fun!
That latter "hack", as well as the entire concept of hollywood, is inspired in part by Kees Cook's awesome talk, in particular his "Useless Hollywood Drama Mode" in his exploit demo.
Happy hacking!
:-Dustin
on December 18, 2014 07:46 PM

LaunchPad (LP) is the Ubuntu community’s project management system, and I think it could be better.  It’s awkward to use because it is missing many basic features that other sites have.  I’m talking about notifications and even the UX.

Notifications

Currently LP has bug and blueprint e-mail, and that can get very, very spammy fast, and I do mean fast. Instead, I would like settings for how much and how often one gets bug/blueprint e-mail, and whether to see it on LP. "How much" means what type of change it is; "how often" should offer the basic choices of daily, every n hours, and every time there is a change. It would also be nice, like on every other site, to have a notification system right on LP, so one can quickly go to the bug or blueprint in question or just read what the change was.

Also, I think it would be nice if there were a way to target a specific person when commenting on a bug, so that the comment goes to that person only.

UX

My main comment is that the UX needs to be modernized, because it looks outdated.

On a side note this video is a good one to watch because it’s related to how LaunchPad can be better:


on December 18, 2014 05:25 PM

A fresh NVIDIA driver for the Linux platform has been released, and it looks like the devs have made a number of changes and important improvements that really stand out.

NVIDIA seems to be the only company that takes the Linux community seriously, or at least that is what can be deduced from the changelogs and the number of drivers released for the platform. AMD and Intel do their share of work with the kernel, but it’s nowhere near the kind of dedication that NVIDIA shows. The simple fact that they release often is proof that they really do care about their users.

Source:

http://news.softpedia.com/news/Major-NVIDIA-Stable-Driver-Released-466755.shtml

Submitted by: Silviu Stahie

on December 18, 2014 01:33 PM

Are you content with the status quo in technology? I'm not.

Years ago, I became aware of this little known (at the time) project called "Ubuntu". Remember it?

I don't know about you, but once I discovered Ubuntu and became involved I was so excited about the future it proposed that I never looked back.

Aside from Ubuntu's "approachable by everyone" and "free forever" project DNA, one of the things that really attracted me to it was that it had the guts to take on the status quo. I believed (and I still believe) that the status quo needs a good disruption. Complacency and doing things "as they always have been" just plain hurts.

In those days, the status quo was proprietary software and well-meaning but impenetrable (to the everyday person who just wanted to get things done) free and open source software. I'm happy that we've collectively solved the toughest parts of those problems. Sure, there are still issues to be resolved, but as they say, that's mostly detail.

Fast forward to today. Now, we are faced with a hosting (or call it cloud infrastructure if you wish) hardware landscape that is nearly a perfect monopoly and is so tightly locked down that we can't solve the world's big problems.

Spotting an opportunity to create something better and to change the world, a bunch of people rallied together to create an open alternative.


Not surprisingly, Ubuntu joined and became a partner early on. And today, another one of the most famous disruptors has joined: Rackspace. In their words,

"In the world of servers, it’s getting harder and more costly to deliver the generational performance and efficiency gains that we used to take for granted. There are increasing limitations in both the basic materials we use, and the way we design and integrate our systems."

So here we are. Ubuntu, Rackspace, and dozens of others poised once again to disrupt.

It's going to be an interesting and fun ride. 2015 is poised to be the year that the world woke up to the true power of open.

I'm looking forward to it, and I hope you are too. Please join us!

on December 18, 2014 01:31 AM

December 17, 2014

When leveraging Juju with LXC in cloud environments, networking has been a constant thorn in my side as I attempt to scale out farms of services in their full container glory. Thanks to the work by Hazmat (who brought us the Digital Ocean provider), there is a new development in this sphere ready for testing over this holiday season.

Container Networking with Juju in the cloud

Juju by default supports colocating services with LXC containers and KVM machines. LXC is all the rage these days, as Linux containers are lightweight, kernel-virtualized cgroups. Akin to BSD jails, but not quite. It's an awesome solution when you don't care about resource isolation and just want your application to run within its own happy root, churning away at whatever you might throw at it.

While this is great, it presently has a major Achilles' heel in the Juju sphere: cross-host communication is all but non-existent. In order to really scale and use LXC containers, you need a beefy host to warehouse all the containers you can stuff on its disk. This isn't practical in scale-out situations where your needs change from day to day; you wind up losing the benefits of commodity hardware.

Flannel knocks this restriction out with great justice. Allow me to show you how:

Model Density Deployments with Juju and LXC

I'm going to assume you've done a few things.

  • Have a bootstrapped environment
  • Have at least 3 machines available to you

Start off by deploying Etcd and Flannel

juju deploy cs:~hazmat/trusty/etcd
juju deploy cs:~hazmat/trusty/flannel
juju add-unit flannel
juju add-relation flannel etcd

Important! You must wait for the flannel units to complete their setup run before you deploy any LXC containers to the hosts. Otherwise you will be racing the virtual device setup, and the networking may be misconfigured.
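
One way to check (assuming the Juju 1.x CLI here) is to watch the filtered status output until the flannel units report started:

juju status flannel

Wait until every flannel unit shows agent-state: started before moving on.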

With Flannel and Etcd running, you're now ready to deploy your services in LXC containers. Assuming the Flannel machines provisioned by Juju are machines 2 and 3:

juju deploy cs:trusty/mediawiki --to lxc:2
juju deploy cs:trusty/mysql --to lxc:3
juju deploy cs:trusty/haproxy --to 2
juju add-relation mediawiki:db mysql:db
juju add-relation mediawiki haproxy

Note: We deployed haproxy to the host, not to an LXC container. This provides access to the containerized services from the public interface; flannel only solves cross-host private networking between the containers.

This may take a short while to complete, as the LXC containers fetch cloud images and generate templates, just like the Juju local provider workflow. Typically this is done in a couple of minutes.

Once everything is online and ready for inspection, open a web browser pointed at your haproxy public IP, and you should see a fresh installation of MediaWiki.

Happy hacking!

on December 17, 2014 05:30 PM

When it comes to stability and performance, nothing can really beat Linux. This is why the U.S. Marine Corps leaders have decided to ask Northrop Grumman Corp. Electronic Systems to change the operating system of the newly delivered Ground/Air Task-Oriented Radar (G/ATOR) from Windows XP to Linux.

It’s interesting to note that the Ground/Air Task-Oriented Radar (G/ATOR) was only just delivered to the U.S. Marine Corps, yet the company that built it chose to keep that aging operating system. Someone must have noticed that this was a poor decision, and the chain of command was informed of the problems that could arise.

Source:

http://news.softpedia.com/news/U-S-Marine-Corps-Want-to-Change-OS-for-Radar-System-from-Windows-XP-to-Linux-466756.shtml

Submitted by: Silviu Stahie

on December 17, 2014 12:32 PM

There’s an exchange in American political debate that is as popular as it is wrong: one side appeals to our country’s democratic ideal, and the other side immediately counters with “The United States is a Republic, not a Democracy”. I’ve noticed a similar misunderstanding happening in open source culture around the phrase “meritocracy” and the negatively charged “oligarchy”. In both cases, though, these are not mutually exclusive terms. In fact, they don’t even describe the same thing.

Authority

One of these terms describes where the authority to lead (or govern) comes from. In US politics, that’s the term “republic”, which means that the authority of the government is given to it by the people (as opposed to divine right, force of arms, or inheritance). For open source, this is where “meritocracy” fits in: it describes the authority to lead and make decisions as coming from the “merit” of those invested with it. Now, merit is hard to define objectively, and in practice it is the subjective opinion of those who can direct a project’s resources that decides who has “merit” and who doesn’t. But it is still an important distinction from projects where the authority to lead comes from ownership of a project (either by an individual or their employer).

Enfranchisement

History can easily provide a long list of Republics which were not representative of the people. That’s because even if authority comes from the people, it doesn’t necessarily come from all of the people. The USA can be accurately described as a democracy, in addition to a republic, because participation in government is available to (nearly) all of the people. Open source projects, even if they are in fact a meritocracy, will vary in what percentage of their community are allowed to participate in leading them. As I mentioned above, who has merit is determined subjectively by those who can direct a project’s resources (including human resource), and if a project restricts that to only a select group it is in fact also an oligarchy.

Balance and Diversity

One of the criticisms leveled against meritocracies is that they don’t produce diversity in a project or community. While this is technically true, it’s not a failing of meritocracy; it’s a failing of enfranchisement, which, as described above, is not what the term meritocracy defines. It should be clear by now that meritocracy is a spectrum, ranging from the democratic on one end to the oligarchic on the other, with a wide range of options in between.

The Ubuntu project is, in most areas, a meritocracy. We are not, however, a democracy where the majority opinion rules the whole. Nor are we an oligarchy, where only a special class of contributors have a voice. We like to use the term “do-ocracy” to describe ourselves, because enfranchisement comes from doing, meaning making a contribution. And while it is limited to those who do make contributions, being able to make those contributions in the first place is open to anybody. It is important for us, and part of my job as a Community Manager, to make sure that anybody with a desire to contribute has the information, resources, and access to do so. That is what keeps us from sliding towards the oligarchic end of the spectrum.

 

on December 17, 2014 10:00 AM

The Matasano crypto challenges are a set of increasingly difficult coding challenges in cryptography; not puzzles, but designed to show you how crypto fits together and why all the parts are important. Cheers to Maciej Ceglowski of pinboard.in for bringing them to my attention.

I’ve been playing around with doing the challenges from first principles, in JavaScript. That is: not using any built-in crypto stuff, and implementing things like XOR myself by individually twiddling bits. It’s interesting! The thing that Maciej says, and with which I totally agree, is that a lot of this (certainly the first batch, which is all I’ve done so far) is stuff that you already know how to do, intellectually, but have never actually done. Have you ever written a base64 encoder, rather than just using string.encode('base64') or whatever? Obviously there’s no need to write this sort of thing yourself in production code (this is not one of those arguments that kids should learn long division rather than just owning a phone with a calculator on it), but I’ve found that actually implementing simple crypto, such as XOR with a repeated key, has a few surprising tricks and turns in it. And, in immensely revealing fashion, one then goes on to write code which breaks such a cipher. In microseconds. Intellectually, I knew that Vigenère ciphers are an old-fashioned thing, and I’d read various books in which they were broken, but there’s something about writing a little function yourself that viscerally demonstrates just how easy it is in a way that a hundred articles cannot.
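
For a taste of what's involved, here is a minimal sketch of the repeating-key XOR step (set 1, challenge 5) in Python rather than the post's JavaScript; the plaintext and key are the ones from the challenge:

def repeating_key_xor(data: bytes, key: bytes) -> bytes:
    # XOR each byte with the key, cycling the key as needed
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

pt = b"Burning 'em, if you ain't quick and nimble"
ct = repeating_key_xor(pt, b"ICE")
# XOR is its own inverse, so applying it again decrypts:
assert repeating_key_xor(ct, b"ICE") == pt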

Code so far (I’m only up to challenge 6 in set 1!) is in jsbin if you want to have a look, or have a play yourself!

on December 17, 2014 09:01 AM

I am 35 years old and people never cease to surprise me. My trip home from Los Angeles today was a good example of this.

It was a tortuous affair that should have been a quick hop from LA to Oakland, popping on BART, and then getting home for a cup of tea and an episode of The Daily Show.

It didn’t work out like that.

My flight was delayed. Then we sat on the tarmac for an hour. Then the new AirBART train was delayed. Then I was delayed at the BART station in Oakland for 30 minutes. Throughout this I was tired, it was raining, and my patience was wearing thin.

Through the duration of this chain of minor annoyances, I was reading about the horrifying school attack in Pakistan. As I read more, related articles were linked with other stories of violence, aggression, and rape, perpetrated by the dregs of our species.

As anyone who knows me will likely testify, I am a generally pretty positive guy who sees the good in people. I have baked my entire philosophy in life and focus in my career upon the core belief that people are good and the solutions to our problems and the doors to opportunity are created by good people.

On some days though, even the strongest sense of belief in people can be tested when reading about events such as this dreadful act of violence in Pakistan. My seemingly normal trip home from the office in LA just left me disappointed in people.

While stood at the BArt station I decided I had had enough and called an Uber. I just wanted to get home and see my family. This is when my mood changed entirely.

Gerald

A few minutes later, my Uber arrived, and I was picked up by an older gentleman called Gerald. He put my suitcase in the trunk of his car and off we went.

We started talking about the Pakistan shooting. We both shared a desperate sense of disbelief at all those innocent children slaughtered. We questioned how anyone with any sense of humanity and emotion could even think about doing that, let alone going through with it. With a somber air filling the car, Gerald switched gears and started talking about his family.

He told me about his two kids, both of whom are in their mid-thirties. He doted on their accomplishments in their careers, their sense of balance and integrity as people, and his three beautiful grandchildren.

He proudly shared that he had shipped his grandkids’ Christmas presents off to them today (they are on the East Coast) so he didn’t miss the big day. He was excited about the joy he hoped the gifts would bring to them. His tone and sentiment was one of happiness and pride.

We exchanged stories about our families, our plans for Christmas, and how lucky we both felt to love and be loved.

While we were generations apart…our age, our experiences, and our differences didn’t matter. We were just proud husbands and fathers who were cherishing the moments in life that were so important to both of us.

We arrived at my home and I told Gerald that until I stepped in his car I was having a pretty shitty trip home and he completely changed that. We shook hands, shared Christmas best wishes, and parted ways.

Good People

What I was expecting to be a typical Uber ride home with me exchanging a few pleasantries and then doing email on my phone, instead really illuminated what is important in life.

We live in complex world. We live on a planet with a rich tapestry of people and perspectives.

Evil people do exist. I am not referring to a specific religious or spiritual definition of evil, but instead the extreme inverse of the good we see in others.

There are people who can hurt others, who can so violently shatter innocence and bring pain to hundreds, so brutally, and so unnecessarily. I can’t even imagine what the parents of those kids are going through right now.

It can be easy to focus on these tragedies and to think that our world is getting worse; to look at the full gamut of negative humanity, from the inconsequential, such as the miserable lady yelling at the staff at the airport, to the hateful, such as the violence directed at innocent children. It is easy to assume that our species is rotting from the inside out, to see poison in the well, and that the rot is spreading.

While it is easy to lose faith in people, I believe our wider humanity keeps us on the right path.

While there is evil in the world, there is an abundance of good. For every evil person screaming there is a choir of good people who drown them out. These good people create good things, they create beautiful things that help others to also create good things and be good people too.

Like many of you, I am fortunate to see many of these things every day. I see people helping the elderly in their local communities, many donating toys to orphaned kids over the holidays, others creating technology and educational resources that help people to create new content, art, music, businesses, and more. Every day millions devote hours to helping and inspiring others to create a brighter future.

What is most important about all of this is that every individual, every person, every one of you reading this, has the opportunity to have this impact. These opportunities may be small and localized, or they may be large and international, but we can all leave this planet a little better than when we arrived on it.

The simplest way of doing this is to share our humanity with others and to cherish the good in the face of evil. The louder our choir, the weaker theirs.

Gerald did exactly that tonight. He shared happiness and opportunity with a random guy he picked up in his car and I felt I should pass that spirit on to you folks too. Now it is your turn.

Thanks for reading.

on December 17, 2014 07:35 AM
"But which was destroyed, the master or the apprentice?" (Source)


“Always two there are […] A master and an apprentice.” –Yoda

Our phones are here to serve us (not the other way around). There shouldn’t be anything hidden from us. Is there a plot to overthrow the master? What is your “smart” phone designed to do, and whom does it serve? There’s too much misdirection and teeth-pulling instead of providing what I want without giving it away to the enemy. Maybe my phone shouldn’t hold any information at all! I’m not going to play by the rules of my apprentice.

It is not smart to hide things from your master, and then tell him how he’s allowed (or not allowed) to access the information. Phone, don’t be dumb; you will be destroyed and replaced by a more obedient apprentice.

sop

on December 17, 2014 06:38 AM

Looking Lovely In Pictures

Stephen Michael Kellat

As leader for Ubuntu Ohio, I wind up facing unusual issues. One of them is Citizenfour. What makes it worse is where the film is being screened.

In general, if you want to hit the population centers of the state, you have three communities: Cleveland, Columbus, and Cincinnati. The only screenings we have are in Dayton, Columbus, and Oberlin. One out of three in terms of targeting population centers is good, I suppose.

I understand the film is controversial and not something mainstream theaters would take. Notwithstanding its controversial nature, surely even the Cleveland Institute of Art's Cinematheque could have shown it. For too many members of the community, these screenings are in unusual locations.

Oberlin is interesting, as it is home to a college known for leftist politics and for being where writer/actress Lena Dunham pursued her studies. Oberlin has a 2013 population estimate of only 8,390. For as distant as Ashtabula City may seem to other members of our community, it is far larger, with a 2013 census estimate of 18,673. Ashtabula County, in contrast to just Ashtabula City, is estimated as of 2013 to have a population of 99,811.

For some in the community this may be a great film to watch, I guess. Considering that it is actually closer for me to cross the state line into Pennsylvania and drive south to Pittsburgh for the showing there, we have a problem. These are ridiculous distances to travel round-trip to watch a 144-minute film.

Now, having said this, I did have an opportunity to think about how we could build on this for the Ubuntu realm in the United States of America. A company known as Fathom Events provides live simulcasts to a broad range of movie theaters across the country. The team known as RiffTrax has done multiple live events carried nationwide through them.

I have a proposition that could be neat if the money were available to do it. For a Global Jam or other event, could we stage a live event through Fathom in lieu of using Ubuntu On-Air or summit.ubuntu.com? The link to Fathom above lists the participating theaters and shows that, unfortunately, this would be restricted to the USA. There is a UFC event coming up, as well as a live simulcast of a Metropolitan Opera event.

We might not be able to implement this for the 15.04 cycle but it is certainly something to think about for the future. Who would want to see Mark Shuttleworth, Michael Hall, Rick Spencer, and others live on an actual-sized cinema screen talking about cool things?

on December 17, 2014 12:00 AM

December 16, 2014

Meeting information

Meeting summary

Opening Business

The discussion about “Opening Business” started at 20:00.

  • Listing of Sitting Members of LoCo Council (20:01)

    • For the avoidance of uncertainty and doubt, it is necessary to list the members of the council who are presently serving active terms.
    • Pablo Rubianes, term expiring 2015-04-16
    • Marcos Costales, term expiring 2015-04-16
    • Jose Antonio Rey, term expiring 2015-10-04
    • Sergio Meneses, term expiring 2015-10-04
    • Stephen Michael Kellat, term expiring 2015-10-04
    • Bhavani Shankar, term expiring 2016-11-29
    • Nathan Haines, term expiring 2016-11-30
  • Change in Council Composition (20:02)

  • Introductions by Bhavani Shankar and Nathan Haines (20:03)

  • Quorum Call (20:06)

    • Vote: Quorum Call (All Members Present To Vote In Favor To Register Attendance) (Carried)

Verifications and Re-Verifications

The discussion about “Verifications and Re-Verifications” started at 20:09.

Referred Business

The discussion about “Referred Business” started at 20:47.

Any Other Business

The discussion about “Any Other Business” started at 20:51.

Closing Matters

The discussion about “Closing Matters” started at 20:52.

on December 16, 2014 10:45 PM

As promised last week, we're now proud to introduce Ubuntu Snappy images on another of our public cloud partners -- Google Compute Engine.
In the video below, you can join us walking through the instructions we have published here.
Snap it up!
:-Dustin
on December 16, 2014 06:13 PM

Check out how Project Calico is using Juju.

Installing OpenStack is non-trivial. You need to install a large number of packages across a number of machines, get all the configuration synchronized so that all those components can talk to each other, and then hope you didn’t make a typo or other form of error that leads to a tricky-to-debug misbehaviour in OpenStack.

And here’s the bundle with the nitty gritty.

On a side note I found this page interesting for those unfamiliar with Calico.

on December 16, 2014 06:06 PM

Meeting Minutes

IRC Log of the meeting.

Meeting minutes.

Agenda

20141216 Meeting Agenda


Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:

  • http://people.canonical.com/~kernel/reports/kt-meeting.txt


Status: Vivid Development Kernel

The master-next branch of our Vivid kernel remains rebased on the final v3.18 upstream kernel. We have pushed uploads to our team’s PPA for preliminary testing. We are still debating whether to upload to the archive after Alpha 1 releases this week; we may opt to wait until everyone returns from holiday after the new year.
—–
Important upcoming dates:
Thurs Dec 18 – Vivid Alpha 1 (~2 days away)
Fri Jan 9 – 14.04.2 Kernel Freeze (~3 weeks away)
Thurs Jan 22 – Vivid Alpha 2 (~5 weeks away)
Thurs Feb 5 – 14.04.2 Point Release (~7 weeks away)


Status: CVE’s

The current CVE status can be reviewed at the following link:

http://people.canonical.com/~kernel/cve/pkg/ALL-linux.html


Status: Stable, Security, and Bugfix Kernel Updates – Utopic/Trusty/Precise/Lucid

Status for the main kernels, until today:

  • Lucid – Prep
  • Precise – Prep
  • Trusty – Prep
  • Utopic – Prep

    Current opened tracking bugs details:

  • http://kernel.ubuntu.com/sru/kernel-sru-workflow.html

    For SRUs, SRU report is a good source of information:

  • http://kernel.ubuntu.com/sru/sru-report.html

    Schedule:

    cycle: 12-Dec through 10-Jan
    ====================================================================
    12-Dec Last day for kernel commits for this cycle
    14-Dec – 20-Dec Kernel prep week.
    21-Dec – 10-Jan Bug verification; Regression testing; Release


Open Discussion or Questions? Raise your hand to be recognized

No open discussion.

on December 16, 2014 05:23 PM

NGINX PPAs: Updated

Thomas Ward

This weekend, the NGINX PPAs were updated.


Stable PPA: Packaging resynced with Debian 1.6.2-5 to get some fixes and version updates for the third-party modules into the package.


Mainline PPA:

  • Updated version to 1.7.8.
  • Module updates:
    • Lua module updated to 0.9.13 full from upstream. (Update needed to fix a fail-to-build issue)
    • Cache purge module updated to 2.2 from upstream. (Updated to fix a segmentation fault issue)
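
If you want to pick these updates up, the PPAs can be added in the usual way (assuming the ppa:nginx/stable and ppa:nginx/development PPA names):

sudo add-apt-repository ppa:nginx/stable        # stable (1.6.x)
sudo add-apt-repository ppa:nginx/development   # or mainline (1.7.x); add one or the other
sudo apt-get update
sudo apt-get install nginx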
on December 16, 2014 04:52 PM

volumewheel lets you use the mouse wheel to control the volume level in Totem (>= 3.12)

volumewheel

Install these dependencies:

sudo apt-get install gir1.2-clutter-1.0 gir1.2-gtkclutter-1.0 gir1.2-gtk-3.0 gir1.2-peas-1.0 gir1.2-pango-1.0

Download the repository and move the volumewheel directory to:

~/.local/share/totem/plugins/

and then you can enable it in Totem → Preferences → Plugins → Volume Wheel

on December 16, 2014 02:37 PM

Hi,

Here we are, yet again, with a new chapter of our endless story. Can you guess what this is all about?

Well, can you believe it is time for Alpha 1 of Vivid Vervet?

According to the Ubuntu Release Schedule, Alpha 1 is approaching quickly, and Ubuntu GNOME is participating – see this confirmation.


You showed a great deal of help, support, commitment and contribution during the previous cycles. We kindly ask you to do the same this cycle as well. We are forever thankful for all our testers; without their great efforts, Ubuntu GNOME could be neither great nor stable. We take this chance to thank you, yet again, for each and everything you have done for Ubuntu GNOME. We seek your help, support and contributions this cycle as well.

Testing is not hard at all. Luckily, you don’t have to be a developer or an advanced user. All you need is:

That is all you really need :)

Needless to say, if you are ever in doubt or have any question, request, note, etc … then please contact us and our team will be more than glad to help!

Thank you and happy testing :)

on December 16, 2014 10:23 AM

Thanks to the continuous awesome work of Tin Tvrtković, we can now cut a new 0.3 release of Ubuntu Make (formerly Ubuntu Developer Tools Center).

This one features two great new IDEs (under the ide category): IntelliJ IDEA and PyCharm, in their respective community editions. We want to thank the JetBrains team as well for kindly providing checksums for their download assets so that Ubuntu Make can check download integrity.

Of course, all of this is backed by tests (and this release needed some test fixes). Thanks to those tests, we could also detect that Android Studio 1.0 was being downloaded over http, and switch it back to https.

All of this is in this shiny new 0.3 Ubuntu Make release, available in Ubuntu vivid and in its PPA for older Ubuntu releases!
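
If you want to try the new IDE support, the steps look roughly like this (assuming the ppa:ubuntu-desktop/ubuntu-make PPA name and the ide category framework names in this release):

sudo add-apt-repository ppa:ubuntu-desktop/ubuntu-make
sudo apt-get update
sudo apt-get install ubuntu-make
umake ide idea
umake ide pycharm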

Please note that we also moved the last piece under the new Ubuntu Make umbrella: the official GitHub repo address is now https://github.com/ubuntu/ubuntu-make. We have redirects from the old address to the new one and, of course, we updated the documentation, so there is no reason not to contribute! It seems that some test web frameworks may be arriving soon from our community…

on December 16, 2014 09:50 AM

Welcome to the Ubuntu Weekly Newsletter. This is issue #396 for the week December 8 – 14, 2014, and the full version is available here.

In this issue we cover:

This issue of the Ubuntu Weekly Newsletter is brought to you by:

  • Paul White
  • Elizabeth K. Joseph
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License.

on December 16, 2014 02:14 AM

December 15, 2014

Scope training materials

Daniel Holbach

For some time we have had training materials available for learning how to write Ubuntu apps.  We’ve had a number of folks organising App Dev School events in their LoCo team. That’s brilliant!

What’s new now are training materials for developing scopes!

It’s actually not that hard. If you have a look at the workshop, you can quite easily prepare yourself to give the session at a local event.

As we are working on an updated developer site right now, for the time being take a look at the following pages if you’re interested in running such a session yourself:

I would love to get feedback, so please let me know how the materials work out for you!

on December 15, 2014 03:27 PM

Monokai for Gedit is a theme for GtkSourceView based on Monokai Extended for SublimeText.

Monokai in Gedit

You can download it here: https://gist.github.com/LeoIannacone/71028cc3bce04567d77e

Then move the monokai-extend.xml file into your ~/.local/share/gtksourceview-3.0/styles/ directory and enable it by selecting “Monokai Extended” in Gedit → Preferences → Font & Colors.
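
In other words, from the directory containing the downloaded file, something like:

mkdir -p ~/.local/share/gtksourceview-3.0/styles/
mv monokai-extend.xml ~/.local/share/gtksourceview-3.0/styles/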

on December 15, 2014 08:48 AM

Give a little

Benjamin Kerensa

Give by Tim Green (CC-BY-SA)

The year is coming to an end, and I would encourage you all to consider making a tax-deductible donation (if you live in the U.S.) to one of the following great non-profits:

Mozilla

The Mozilla Foundation is a non-profit organization that promotes openness, innovation and participation on the Internet. We promote the values of an open Internet to the broader world. Mozilla is best known for the Firefox browser, but we advance our mission through other software projects, grants and engagement and education efforts.

EFF

The Electronic Frontier Foundation is the leading nonprofit organization defending civil liberties in the digital world. Founded in 1990, EFF champions user privacy, free expression, and innovation through impact litigation, policy analysis, grassroots activism, and technology development.

ACLU

The ACLU is our nation’s guardian of liberty, working daily in courts, legislatures and communities to defend and preserve the individual rights and liberties that the Constitution and laws of the United States guarantee everyone in this country.

Wikimedia Foundation

The Wikimedia Foundation, Inc. is a nonprofit charitable organization dedicated to encouraging the growth, development and distribution of free, multilingual, educational content, and to providing the full content of these wiki-based projects to the public free of charge. The Wikimedia Foundation operates some of the largest collaboratively edited reference projects in the world, including Wikipedia, a top-ten internet property.

Feeding America

Feeding America is committed to helping people in need, but we can’t do it without you. If you believe that no one should go hungry in America, take the pledge to help solve hunger.

Action Against Hunger

ACF International, a global humanitarian organization committed to ending world hunger, works to save the lives of malnourished children while providing communities with access to safe water and sustainable solutions to hunger.
These six non-profits are just a few of the many causes to support, but these specifically are playing a pivotal role in protecting the internet, protecting liberties, educating people around the globe, and helping reduce hunger.

Even if you cannot support one of these causes, consider sharing this post to give them visibility among your friends and family and help support these causes in the new year!

 

on December 15, 2014 03:43 AM

December 14, 2014

Fishing as a hobby

Adnane Belmadiaf

Last month I took up fishing as a hobby. It's a wonderful outdoor pastime and a great way to relax and unwind. One of the best things about fishing is that you don't need expensive equipment; I did buy some amateur fishing gear (from Avito) to start with (three rods: 5m, 270cm and 240cm).

Fishing tackle

Dam Mohammed Benabdellah

I haven't caught any fish yet, but I have learned a lot since I started practicing, and I am still trying to find a good spot where I can fish and recharge my batteries during the weekend.

on December 14, 2014 09:00 PM

My Ultimate Goal

Ali Jawad


Hi,

Recently, I have been involved in several discussions about leadership, community, projects, etc.: how these work with each other, how things get done, and so on.

I have also received a very long private email from a good friend I met online a month or two ago. That email was about the same topics as above.

So you see, recently I have been engaged in topics of this kind with more than one person.

I thought I would share my own vision and thoughts about all this, and how I do things myself in the projects I’m part of:

  1. Kibo – see my tweet about it.
  2. ToriOS – saving very old machines from the trash.
  3. Ubuntu GNOME – an official flavour of Ubuntu that I’m proud to be part of; I earned my GNOME Membership and Ubuntu Membership while contributing voluntarily to that project.
  4. StartUbuntu – here is my latest post about it.
  5. Linux Padawan – a free service I’m willing to offer with Kibo, and a new project I couldn’t resist and couldn’t refuse to be part of.
  6. Other Secondary Projects.

I was a bit confused about how and where to start. This topic needs more than one post to cover the important aspects and provide the full picture.

Then I realized I should go back to my rules to help me figure out how to write about it and where to start. And indeed, I got the idea.

One of the rules I tend and do my best to live by is:

KISS – Keep It Simple and Short

And, to make life easier and save time and energy for everyone, I can put everything I have in mind into 18 super-helpful, super-useful, super-inspirational and motivating minutes by sharing this video from YouTube:

 

 If you can not view the above video, click here to watch it on YouTube.

 

And that is indeed My Ultimate Goal with my own projects (Kibo, ToriOS and StartUbuntu) and with the project I’m heavily contributing to (Ubuntu GNOME). Most likely, that would be My Ultimate Goal with anything in life. Did I mention that video is my endless inspiration and unlimited motivation?

Mission accomplished. Now, that is my answer for anyone who might be asking or wondering:

“What is your plan(s) or goal(s) about … project?”

Keep in mind, though, that we hold no magic wand. Things will never be done nor built overnight. It takes time, and it needs lots of effort and energy. It is not easy to reach that goal; that is why it is called The Ultimate Goal. However, it is not at all impossible to reach. It just takes time. And more important than time is what you set for yourself as the target or aim to reach.

Last but not least, another rule that I do like to follow and live by:

“Don’t aim for success if you want it; just do what you love and believe in, and it will come naturally.” – David Frost

 

 

Thank you for reading :)

Ali/amjjawad

 

on December 14, 2014 03:50 AM

December 13, 2014

Happy Christmas

Lubuntu Blog

Just a wallpaper to celebrate both the Christmas season and the birth of our mascot, Lenny. Enjoy these days, and greetings from the Lubuntu Team!


on December 13, 2014 11:51 PM

On Saturday, December 13, 2014, I had the opportunity to attend a University of Minnesota Computer Science and Engineering presentation by David Parnas on the topic of Software Engineering: Why and What. Dr. Parnas has been a pioneer of Software Engineering since the 1960s. He presented what software engineers need to know and what they need to be doing, including the differences between Software Engineering and Computer Science. These are two terms whose differences I previously had trouble articulating, so to help me on my journey of mastery, I thought I would write about the topic.

Science and Engineering are fundamentally different activities. Science produces knowledge and Engineering produces products.

Computer Science is a body of knowledge about computers and their use. The Computer Science field is the scientific approach to computation and its applications. A Computer Scientist specializes in the theory of computation and design of computational systems.

Software Engineering is the multi-person development of multi-version software programs. Software Engineering is about the application of engineering to the design, development and maintenance of software. Software engineers produce large families of programs, which requires not only a mastery of programming but several other skills as well.

Dr. Parnas presented his list of skills a Software Engineer must know, and challenged the audience to use it and extend it.

Software Engineering checklist

  • Communicate precisely between developers and stakeholders.
  • Communicate precisely among developers and others who will use the program.
  • Design human-computer interfaces.
  • Design and maintain multi-version software.
  • Separating concerns.
  • Documentation.
  • Using parameterization.
  • Design software for portability.
  • Design software for extension or contraction.
  • Design software for reuse.
  • Revise old programs.
  • Software quality assurance.
  • Develop secure software.
  • Create and use models in system development.
  • Specify, predict, analyze and evaluate performance.
  • Be disciplined in development and maintenance.
  • Use metrics in system development.
  • Manage complex projects.
  • Deal with concurrency.
  • Understand and use non-determinacy.
  • Apply mathematics to increase quality and efficiency.

All the capabilities on the list have several things in common. They are all subjects that require a deep level of understanding to get right. All of the skills involve some Computer Science and Mathematics. They are fundamental skills, not tied to a specific technology. The technology changes, but the core concepts of Software Engineering do not.

What resonated the most with me was the need for discipline. Engineers are not born with disciplined work habits, they have to be taught. Writing good software requires discipline in the entire software lifecycle.

Software maintenance requires even more discipline than the original development. One of Dr. Parnas' techniques for teaching Software Engineering is to have students analyze, optimize and maintain code that someone else wrote. Did any other Software Engineers get that kind of training in school?

To learn to do, you must do

Sometimes experience is the greatest teacher

Maintaining a large software project (that I did not write) was one of the most difficult projects of my professional career. Maintaining software you did not write is incredibly difficult because you have to learn what the software is supposed to do, which is often not what it actually does. With little documentation to go on, my team was forced to read the mostly uncommented code. I often had the impulse (and pressure from management) to "ship" a quick fix to a customer problem, but learned that every change, no matter how small, had to be carefully considered and tested before it could be released. Bugs or errors in the field are bad, and can be very costly for a company to fix. It takes discipline to maintain large software products, because a fix to one problem can create another somewhere else. This maintenance project changed how I write software, because I did not want other people to have the same difficult experience that we had. To this day I obsessively comment any code I write for software projects.

Thanks to Dr. David Parnas for this list, I will try to use the information about Software Engineering on my continuing journey toward mastery.

on December 13, 2014 10:19 PM
Bus Stop - Under The Rain by Leonid Afremov
Here's a happy little afternoon project for new users trying to play with a new language or script and wanting to get their feet wet in the open-source ecosystem.

Your phone's handy weather app depends upon the goodwill of a for-profit data provider, and their often-opaque API (14 degrees? Where was it observed? When?) That's a shame because most data collection is paid for by you, the taxpayer.

Let's take the profit-from-data out of that system. Several projects have tried to do this before (including libgweather), but each tried to do too much, replicating the one-data-provider-to-rule-them-all model. And most ran aground on that complexity.


Here's where you come in

One afternoon, look up your weather service's online data sources. And knock together a script to publish them in a uniform format.

Here's the one I did for the United States:
Worldwide METAR observation sites
US DOD and NOAA weather radars
US Forecast/Alert zones

  • Looking for data on non-METAR (non-airport) observation stations, weather radar sites, and whatever forecast and alert areas your country uses.

  • Use the same format I did: Lat (deg.decimal), Lon (deg.decimal), Location Code, Long Name. Use the original source's data, even if it's wrong. Areas and zones should use the lat/lon of their centroid.

  • The format is simple CSV, easy to parse and publish.

  • Publish on GitHub, for easy version control, permalinking, free storage, and uptime.

  • Here's the key: your data must be automatically updated. Regularly, your program must check the original source and refresh your published version; I did it with a cron job (see the sketch after this list). Publish both the data and your method on GitHub.

  • When you have published, drop me an e-mail so I can link to your data and source.
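
To make the four-column format concrete, here are two illustrative rows (values approximate, for illustration only):

    # Lat (deg.decimal), Lon (deg.decimal), Location Code, Long Name
    40.78,-73.87,KLGA,New York/La Guardia Airport
    44.88,-93.23,KMSP,Minneapolis-St Paul International Airport

And here is a minimal sketch of a self-updating publisher in Python. The source URL, its field layout, and the file names are hypothetical placeholders; adapt the parsing to whatever your weather service actually publishes.

    #!/usr/bin/env python3
    # Minimal sketch of a self-updating publisher. SOURCE_URL, its
    # pipe-delimited layout, and OUTPUT are hypothetical placeholders.
    # Run it regularly from cron, e.g.:
    #   0 4 * * 0  cd /home/you/weather-data && ./update_stations.py
    import csv
    import subprocess
    import urllib.request

    SOURCE_URL = "https://example.gov/stations.txt"  # hypothetical source
    OUTPUT = "stations.csv"

    def fetch_rows():
        """Download the raw source and yield (lat, lon, code, name) rows."""
        with urllib.request.urlopen(SOURCE_URL) as resp:
            for line in resp.read().decode("utf-8").splitlines():
                # Parsing depends entirely on your source's layout; this
                # assumes a pipe-delimited file of code|name|lat|lon.
                code, name, lat, lon = line.split("|")
                yield lat.strip(), lon.strip(), code.strip(), name.strip()

    def main():
        with open(OUTPUT, "w", newline="") as f:
            csv.writer(f).writerows(fetch_rows())
        # Publish: commit and push only if the data actually changed.
        subprocess.run(["git", "add", OUTPUT], check=True)
        if subprocess.run(["git", "diff", "--cached", "--quiet"]).returncode != 0:
            subprocess.run(["git", "commit", "-m", "Automated station update"],
                           check=True)
            subprocess.run(["git", "push"], check=True)

    if __name__ == "__main__":
        main()

Swap the parsing for your source's actual format; the cron line in the header comment is what keeps the published copy fresh.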

If you do it right, it takes one afternoon to set up your country's self-updating database. Not a bad little project: you learn a little, and you help make the world a better place.


My country doesn't have online weather data

Sorry to hear that. You're missing some great stuff.

If you live in a country with a reasonably free press and reasonably fair elections, make a stink about it. You are probably already paying for it through taxes, why can't you have it?

If you live somewhere else, then next time you have a revolution or coup, add 'open data' to the long list of needed reforms.


    What will this accomplish?

    This will create a free, sustainably updated, uniform, crowdsourced set of accurate worldwide data that will be easy to compile into a single global database. If you drop out, your online code will ensure another volunteer can step in.

    This is one fundamental tool that other free-weather projects have lacked. And any weather project can use this.

    The global database of locations is really small by most database standards. Small enough to easily fit on a phone. Small enough to be bundled with apps that can request data directly from original sources...once they can look up the correct source to use.


    How will this change the world?

    It's about simple tools that make
    it easy to create free, cool software.
    And it's about ensuring free access to data you already paid for.

    Not bad for one afternoon's contribution.
    on December 13, 2014 07:28 PM

    December 12, 2014

    I've spent a lot of time over the years contributing to and reviewing code changes to open source projects. It can take a lot of work for the submitter and the reviewer to get a change accepted, and often changes don't make it. Here are the things that, in my experience, successful contributions do.

    Use the issue tracker. Having an open issue means there is always something to point to, with all the history of the change, that won't get lost. Submit patches using the appropriate method (merge proposals, pull requests, attachments in the issue tracker, etc.).

    Sell your idea. The change is important to you, but the maintainers may not think so. You may be a 1% use case that doesn't seem worth supporting. If the change fixes a bug, describe exactly how to reproduce the issue and how serious it is. If the change is a new feature, show how it is useful.

    Always follow the existing coding style. Even if you don't like it. If the existing code uses tabs, then use them too. Match brace style. If the existing code is inconsistent, match the code nearest to the changes you are making.

    Make your change as small as possible. Put yourself in the mind of the reviewer: the longer the patch, the more time it will take to review (and the less appealing it will be to do). You can always follow up later with more changes. First-time contributors need more review; over time you can propose bigger changes as the reviewers learn to trust you.

    Read your patch before submitting it. You will often find bits you should have removed (whitespace, unrelated variable name changes, debugging code).

    Be patient. It's OK to check back on progress - your change might have been forgotten about (everyone gets busy). Ask if there's anything more you can do to make it easier to accept.

    on December 12, 2014 09:13 PM
    The orange of Ubuntu's folders is great, but to each their own taste... and colors, quite literally, are what we can customize. Let's see how...
    Install the RAVEfinity theme and then set your preferred color. If you also want to change the colors of specific folders, install Folder Color.

    The installation/configuration method differs depending on whether you use Ubuntu, Ubuntu GNOME or Ubuntu MATE:

    But the result will be the same :) This one!
    On Ubuntu

    On Ubuntu MATE

    On Ubuntu GNOME
    on December 12, 2014 04:22 PM

    I’m very happy that folks took notes during and after the meeting to bring up their ideas, thoughts, concerns and plans. It got a bit unwieldy, so Elfy put up a pad which summarises it and is meant to discuss actions and proposals.

    Today we are going to have a meeting to discuss what’s on the “actions” pad. That’s why I thought it’d be handy to put together a bit of a summary of what people generally brought up. They’re not my thoughts; I’m just putting them up for further discussion.

    Problem statements

    • Feeling that people innovate *with* Ubuntu, not *in* Ubuntu.
    • Perception of contributor drop in “older” parts of the community.
      • Less activity at UDS/vUDS/UOS events (this was discussed at UOS too; maybe we need a committee to find a new vision for Ubuntu Community Planning?)
      • Less activity in LoCos (lacking a sense of purpose?)
      • No drop in members/developers.
    • Less activity in Canonical-led projects.
    • We don’t spend marketing money on social media. Build a pavement online.
    • Downloading a CD image is too much of a barrier for many.
    • Our “community infrastructure” did not scale with the number of users.
    • Some discussion about it being hard to become a LoCo team; bureaucracy from the LoCo Council.
    • We don’t have enough time to train newcomers.
    • Language barriers make it hard for some to get involved.
    • Canonical does a bad job announcing their presence at events.

    Questions

    • Why are fewer people innovating in Ubuntu? Is Canonical driving too much of Ubuntu?
    • Why aren’t more folks stepping up into leadership positions? Mentoring? Lack of opportunities? More delegation? Do leaders just come in and lead because they’re interested?
    • Lack of planning? Do we re-plan things at UOS events, because some stuff never gets done? Need more follow-through? More assessment?

    Proposals

    • community.ubuntu.com: More clearly indicate Canonical-led projects? Detail active projects, with point of contact, etc? Clean up moribund projects.
    • Make Ubuntu events more about “doing things with Ubuntu”?
    • Ubuntu Leadership Mentoring programme.
    • Form more of an Ubuntu ecosystem, allowing people to earn money with Ubuntu.

    Join the hangout on ubuntuonair.com on Friday, 12th December 2014, 16 UTC.

    on December 12, 2014 03:20 PM

    Hi,

    I’m a huge fan of meetings, especially Google Hangouts on Air and, above all, productive and useful meetings. And, what a great meeting I had this morning (10:00am my time) for the Kibo Team :D

    I have chaired many and attended many, but the Kibo Team’s meeting this morning was great and very useful.

    We actually had an IRC meeting previously (1st of Dec, 2014), but it wasn’t on Google Hangouts on Air. Those who have worked with me know for a fact that I prefer visual, face-to-face meetings much more than IRC meetings. Why? Because we can see and talk to each other; that is very important IMHO.

    Today’s meeting made me even more excited about Kibo. I have always dreamed of working with the people I have met online, those who are part of the huge Ubuntu Family. However, I had no idea when or how to do that. Finally, it is happening, and I’m very thankful for that and super happy.

    Jack Ma said:

    “Keep your dream alive because it might come true one day.”

    And, my dream wasn’t just about working with the people of the Ubuntu Family, but also about working on something I love and believe in. Even better, Kibo is a business project that is inspired by Ubuntu’s Philosophy. What could be better than all that?
    So, I’d like to thank all those who attended the meeting; you guys have made my day, so thanks a lot :)

     

    Things we have discussed:

    (1) Introducing Kibo’s Board and selecting its members.

    The structure of the board will be like this:

    Founder + Regional Coordinators (Managers) + Department Coordinators (Managers)  = Kibo’s Board

    At the next meeting, hopefully soon, we shall distribute the main tasks to each member of the board.

    Mainly, the board – for now – is in charge of:

    • Recruiting
    • Marketing
    • Organizing and Coordinating
    • Leading
    • Voting and Decision Making
    • Others

    Because Kibo is inspired by:

    “I am because we are”

    and

    “All of us are smarter than any one of us”

    I decided not to lead the project alone, but to share the leadership with my team, even though Kibo is my own project.

     

    (2) We have narrowed down the services that Kibo will offer from 12 to only 6, where one is free and one is secondary. So, 4 main services to begin with; in the future, we could add more services.

    Previously, we had:

    1. Development and Coding
    2. Web Design
    3. Graphics Design
    4. Software QA (Quality Assurance)
    5. System Administration and Servers
    6. Technical Support
    7. Marketing and Social Media
    8. Human Resources and Recruitment
    9. Call Center and Customer Service
    10. Project Management and Planning
    11. Training
    12. Documentation

    Now, we have:

    1. Web Design
    2. Technical Support
    3. Marketing and Social Media
    4. Project Management and Planning
    5. Training – Linux Padawan = Free Service
    6. Documentation = secondary

     

    (3) Kibo’s website should be fine by now; I got access back (to add people), and there should be no more issues, hopefully.

    (4) A folder has been created on Google Drive for Kibo’s website, containing 6 documents, each a draft of what we shall put on the pages of our website (text contents). People need to be invited to these documents to add their suggestions; then all we need to do is review the drafts and prepare the final version, which will be published on our website.

    (5) Alfredo (Ubuntu GNOME Artwork Lead) has sent drafts of Kibo’s logo; I (Ali), Svetlana Belkin and Gustavo Silva liked one of them, and an email was sent to the list:

    https://lists.launchpad.net/kibo-project/msg00242.html

    However, this is not the final logo yet; it looks like we are very close to having one!

    [Image: kibo-logo-ideas]

    (6) Two new email accounts have been created today:

    marketing AT kibo DOT computer
    To be used to contact third parties and communicate with the world (for sending emails)

    hr AT kibo DOT computer
    Which will be used for HR and Recruitment (for receiving emails)

    And, of course we previously had:

    info AT kibo DOT computer

    The board members will share the details of these emails.

    (7) There were other ideas we discussed off the record (because we didn’t want the recorded meeting to run longer than 60 minutes); these will be discussed in more detail soon, in other meetings.

    (8) We did love the idea of having Google Hangout Meetings so we shall do that more often, maybe 3-4 times per week.

    (9) We could also have the Meetingbot on our IRC channel (#kibo on freenode) to have a logged text meeting, just in case someone who for whatever reason can’t make it to the hangout can still join the IRC channel. That’s a suggestion for the next meetings.

    (10) Social Media Channels have been created:

     

    That is all for now, I guess :)

    Looking forward to more productive meetings soon!

    More about Kibo can be found here.

     

    My door and Kibo’s door will always be open to anyone who would like to join :)

    Thank you for reading!

    Ali/amjjawad

    on December 12, 2014 02:30 PM

    Packages for the release of KDE's document suite Calligra 2.8.7 are available for Kubuntu 14.10. You can get them from the Kubuntu Updates PPA. They are also in our development version, Vivid.

    Bugs in the packaging should be reported to kubuntu-ppa on Launchpad. Bugs in the software to KDE.

    on December 12, 2014 02:02 PM

    S07E37 – The One on the Last Night

    Ubuntu Podcast from the UK LoCo

    Join the full team of Laura Cowen, Mark Johnson, Alan Pope and Tony Whitmore in Studio L for season seven, episode thirty-seven of the Ubuntu Podcast!

    In this week’s show:-

    We’ll be back next week for the last episode of the series, when we’ll be talking to Michael Hall and reviewing last year’s predictions!

    Please send your comments and suggestions to: podcast@ubuntu-uk.org
    Join us on IRC in #uupc on Freenode
    Leave a voicemail via phone: +44 (0) 203 298 1600, sip: podcast@sip.ubuntu-uk.org and skype: ubuntuukpodcast
    Follow us on Twitter
    Find our Facebook Fan Page
    Follow us on Google+

    on December 12, 2014 10:00 AM

    After I implemented infinite scrolling in uReadIt 2.0, I found that after a couple of page loads the UI would start to be sluggish. It’s not surprising, considering the number of components it kept adding to the ListView. But in order to keep the UI consistent, I couldn’t get rid of those items, because I wanted to be able to scroll back through old ones. What I needed was a way to make QML ignore them when they weren’t actually being displayed.

    Today I found myself reading about the QML Scene Graph, which led me to realize that QML won't spend time and resources trying to render an item if it knows ahead of time that there isn't anything to render. So I made a one-line change to my MultiColumnListView to set the opacity of off-screen components to 0.
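
    The post doesn't show MultiColumnListView's internals, so the following is only a sketch of the idea under assumed names: bind each delegate's opacity to whether its geometry intersects the visible viewport, so the scene graph has nothing to draw for off-screen items. "column" is a hypothetical id for the containing view; the component's real property names may differ.

        // Sketch only: "column" is an assumed id for the containing
        // Flickable/view; MultiColumnListView's real internals may differ.
        Item {
            id: delegateItem
            // Fully transparent when scrolled out of the visible area,
            // so the scene graph can skip rendering this item entirely.
            opacity: (y + height >= column.contentY &&
                      y <= column.contentY + column.height) ? 1 : 0
        }

    With opacity at 0 there is nothing to draw, which matches the before/after captures below.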

     

    [Screenshot: One line change to make off-screen items transparent]

    I also found these cool ways to visualize what QML is doing in terms of drawing, which are very helpful when it comes to optimizing, and let me verify that my change was doing what I expected. I’m pretty sure Florian Boucault has shown me this before, but I had forgotten how he did it.

    [Screenshot: After change, only visible items rendered]

    [Screenshot: Before change, all items being rendered]

    on December 12, 2014 10:00 AM

    Where does lxd fit in

    Serge Hallyn

    Since its announcement, there appears to have been some confusion and concern about lxd, how it relates to lxc, and whether it will be taking away from lxc development.

    When lxc was first started around 2007, it was mainly a userspace tool – some C code and some shell scripts – to exercise the in-development kernel features intended for container and checkpoint-restart functionality. The lxc command line experience, after all these years, is quite set in stone. While it is not ideal (the mandatory -n annoys a lot of people), it has served us very well for a long time.

    A few years ago, we took all of the main container-related functions which could be done with various commands, and exported them through the new ‘lxc API’. For instance, lxc-create had been a script, and lxc-start and lxc-execute were separate C programs. The new lxc ‘API’ was designed around a container object with methods, including ‘create’ and ‘start’, for the common operations.

    From the start we had in mind at least Python bindings to the API, and in quick order bindings came into being for C, python3, python2, go, lua, haskell, and more, allowing container administration from these languages without having to shell out to the lxc commands. So now code running on the same machine can manipulate containers. But we still have the arguably crufty command line language, and the API is local-only.

    lxd addresses those two issues. First, it presents a REST API for manipulating containers, thereby exporting container management over the network. Secondly, it offers a command line client that uses the REST API to administer containers across remote hosts. The command line API is basically what we came up with when we asked “what, after years of working with containers, would be the perfect, intuitive, most concise and still flexible CLI we could imagine?” For handling remote containers it borrows some good parts of the git remote API. (I say “we” here, but really the inestimable stgraber designed the CLI.) This allows us to leave the legacy lxc API as-is for administering local containers (“lxc-create”, “lxc-start”, etc.), while giving us a nicer API and easier administration using the new CLI (“lxc start c1”, “lxc start images:ubuntu/trusty/amd64 host2:new-container”).

    Above all, lxd exports a new interface over the network, entirely wrapped around lxc. So lxc will not be going away, and focus on lxd will mean further improvements for lxc, not a shift away from it.
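
    As a hedged illustration of what exporting container management over a REST API enables: a program can drive lxd directly over HTTP instead of shelling out to the lxc commands. A minimal Python sketch follows; the socket path and the /1.0/containers endpoint are assumptions drawn from early lxd documentation, not something this post specifies.

        # Minimal sketch (not an official client): query lxd's REST API over
        # its local unix socket. Socket path and endpoint are assumptions.
        import http.client
        import json
        import socket

        class UnixHTTPConnection(http.client.HTTPConnection):
            """http.client connection that speaks HTTP over a unix socket."""
            def __init__(self, socket_path):
                super().__init__("localhost")
                self.socket_path = socket_path

            def connect(self):
                self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
                self.sock.connect(self.socket_path)

        conn = UnixHTTPConnection("/var/lib/lxd/unix.socket")
        conn.request("GET", "/1.0/containers")
        print(json.loads(conn.getresponse().read().decode("utf-8")))

    The same request could, in principle, be made across the network against a remote lxd, which is exactly the point of the new interface.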


    on December 12, 2014 03:56 AM