May 26, 2015

public SNS Topic with a trigger event every quarter hour

Scheduled execution of AWS Lambda functions on an hourly/daily/etc. basis has been a frequently requested feature ever since Amazon introduced the service at AWS re:Invent 2014.

Until Amazon releases a reliable, premium cron feature for AWS Lambda, I’m offering a community-built alternative which may be useful for some non-critical applications.

arn:aws:sns:us-east-1:522480313337:unreliable-town-clock-topic-178F1OQACHTYF

Background

Beyond its event-driven convenience, the primary attraction of AWS Lambda is eliminating the need to maintain infrastructure to run and scale code. The AWS Lambda function code is simply uploaded to AWS and Amazon takes care of providing systems to run on, keeping it available, scaling to meet demand, recovering from infrastructure failures, monitoring, logging, and more.

The available methods to trigger AWS Lambda functions already include some powerful and convenient events like S3 object creation, DynamoDB changes, Kinesis stream processing, and my favorite: the all-purpose SNS Topic subscription.

Even so, there is a glaring need for code that wants to run at regular intervals: time-triggered, recurring, scheduled event support for AWS Lambda. Attempts to do this yourself generally end up with having to maintain your own supporting infrastructure, when your original goal was to eliminate the infrastructure worries.

Unreliable Town Clock (UTC)

The Unreliable Town Clock (UTC) is a new, free, public SNS Topic (Amazon Simple Notification Service) that broadcasts a “chime” message every quarter hour to all subscribers. It can send the chimes to AWS Lambda functions, SQS queues, and email addresses.

You can use the chime attributes to run your code every fifteen minutes, or only run your code once an hour (e.g., when minute == "00") or once a day (e.g., when hour == "00" and minute == "00") or any other series of intervals.

You can even subscribe a function you want to run only once, at a specific time in the future: have the function ignore all invocations until the desired time has passed. When it is time, it can perform its job, then unsubscribe itself from the SNS Topic.

Connecting your code to the Unreliable Town Clock is fast and easy. No application process or account creation is required:

Example: AWS Lambda Function

These commands subscribe an AWS Lambda function to the Unreliable Town Clock:

# AWS Lambda function
lambda_function_name=YOURLAMBDAFUNCTION
account=YOURACCOUNTID
lambda_function_arn="arn:aws:lambda:us-east-1:$account:function:$lambda_function_name"

# Unreliable Town Clock public SNS Topic
sns_topic_arn=arn:aws:sns:us-east-1:522480313337:unreliable-town-clock-topic-178F1OQACHTYF

# Allow the SNS Topic to invoke the AWS Lambda function
aws lambda add-permission \
  --function-name "$lambda_function_name"  \
  --action lambda:InvokeFunction \
  --principal sns.amazonaws.com \
  --source-arn "$sns_topic_arn" \
  --statement-id $(uuidgen)

# Subscribe the AWS Lambda function to the SNS Topic
aws sns subscribe \
  --topic-arn "$sns_topic_arn" \
  --protocol lambda \
  --notification-endpoint "$lambda_function_arn"

Example: Email Address

These commands subscribe an email address to the Unreliable Town Clock (useful for getting the feel, testing, and debugging):

# Email address
email=YOUREMAIL@YOURDOMAIN

# Unreliable Town Clock public SNS Topic
sns_topic_arn=arn:aws:sns:us-east-1:522480313337:unreliable-town-clock-topic-178F1OQACHTYF

# Subscribe the email address to the SNS Topic
aws sns subscribe \
  --topic-arn "$sns_topic_arn" \
  --protocol email \
  --notification-endpoint "$email"

Example: SQS Queue

These commands subscribe an SQS queue to the Unreliable Town Clock:

# SQS Queue
sqs_queue_name=YOURQUEUE
account=YOURACCOUNTID
sqs_queue_arn="arn:aws:sqs:us-east-1:$account:$sqs_queue_name"
sqs_queue_url="https://queue.amazonaws.com/$account/$sqs_queue_name"

# Unreliable Town Clock public SNS Topic
sns_topic_arn=arn:aws:sns:us-east-1:522480313337:unreliable-town-clock-topic-178F1OQACHTYF

# Allow the SNS Topic to post to the SQS queue
sqs_policy='{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "AWS": "*" },
    "Action": "sqs:SendMessage",
    "Resource": "'$sqs_queue_arn'",
    "Condition": {
      "ArnEquals": {
        "aws:SourceArn": "'$sns_topic_arn'"
}}}]}'
sqs_policy_escaped=$(echo "$sqs_policy" | perl -pe 's/"/\\"/g')
aws sqs set-queue-attributes \
  --queue-url "$sqs_queue_url" \
  --attributes '{"Policy":"'"$sqs_policy_escaped"'"}'

# Subscribe the SQS queue to the SNS Topic
aws sns subscribe \
  --topic-arn "$sns_topic_arn" \
  --protocol sqs \
  --notification-endpoint "$sqs_queue_arn"

Chime message

The chime message includes convenient attributes like the following:

{
  "type" : "chime",
  "timestamp": "2015-05-26 02:15 UTC",
  "year": "2015",
  "month": "05",
  "day": "26",
  "hour": "02",
  "minute": "15",
  "day_of_week": "Tue",
  "unique_id": "2d135bf9-31ba-4751-b46d-1db6a822ac88",
  "region": "us-east-1",
  "sns_topic_arn": "arn:aws:sns:...",
  "reference": "...",
  "support": "...",
  "disclaimer": "UNRELIABLE SERVICE {ACCURACY,CONSISTENCY,UPTIME,LONGEVITY}"
}

You should only run your code’s primary function when the message type == "chime".

Other values are reserved for other message types which may include things like service notifications or alerts. Those message types may have different attributes.

It might make sense to forward non-chime messages to a human (e.g., post to an SNS Topic where you have an email address subscribed).
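
For illustration, here is a minimal sketch of that filtering logic, written in Python for clarity. The handler and do_hourly_work names are placeholders of my choosing; the message attributes are the ones shown above, delivered inside the standard SNS-to-Lambda event structure.

import json

def handler(event, context):
    # The chime arrives as a JSON string inside the standard
    # SNS-to-Lambda event structure.
    message = json.loads(event['Records'][0]['Sns']['Message'])

    # Ignore anything that is not a chime; other message types
    # may be service notifications or alerts.
    if message.get('type') != 'chime':
        return

    # Only do real work once an hour, at the top of the hour.
    if message['minute'] != '00':
        return

    do_hourly_work(message)  # placeholder for your application logic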

Regions

The Unreliable Town Clock is currently available in the following AWS Regions:

  • us-east-1

If you would like to use it in other regions, please let me know.

Cost

The Unreliable Town Clock is free for unlimited “lambda” and “sqs” subscriptions.

Yes. Unlimited. Amazon takes care of the scaling and does not charge for sending to these endpoints through SNS.

You may currently add “email” subscriptions, especially to test and see the message format, but if there are too many email subscribers, new subscriptions may be disabled, as it costs the sending account $0.70/year for each address at the current chime frequency.

You are naturally responsible for any charges that occur in your own accounts.

Running an AWS Lambda function four times an hour for a year results in 35,000 invocations, which is negligible if not free, but you need to take care what your functions do and what resources they consume as they are running in your AWS account.

Source

The source code for the infrastructure of the Unreliable Town Clock is available on GitHub

https://github.com/alestic/alestic-unreliable-town-clock

You are welcome to run your own copy, but note that the current code marks the SNS Topic as public so that anybody can subscribe.

Support

The following Google Group mailing list can be used for discussion, questions, enhancement requests, and alerts about problems.

http://groups.google.com/d/forum/unreliable-town-clock

If you plan to use the Unreliable Town Clock, you should subscribe to this mailing list so that you receive service notifications (e.g., if the public SNS Topic ARN is going to change).

Disclaimer

The Unreliable Town Clock service is intended but not guaranteed to be useful. As the name explicitly states, you should consider it unreliable and should not use it for anything you consider important.

Here are some, but not all, of the dimensions in which it is unreliable:

  • Accuracy: The times messages are sent may not be the true times they indicate. Messages may be delayed, get sent early, or be duplicated.

  • Uptime: Chime messages may be skipped for short or long periods of time.

  • Consistency: The formats or contents of the messages may change without warning.

  • Longevity: The service may disappear without warning at any time.

There is no big company behind this service, just a human being. I have experience building and supporting public services used by individuals, companies, and other organizations around the world, but I’m still just one fellow, and this is just an experimental service for the time being.

Comments

What are you thinking of using recurring AWS Lambda invocations for?

Any other features you would like to see?

Original article and comments: https://alestic.com/2015/05/aws-lambda-recurring-schedule/

on May 26, 2015 09:01 AM

May 25, 2015

Rule the Stack

Last week during the OpenStack Summit in Vancouver, Intel organized a Rule the Stack contest. That's the third one, after Atlanta a year ago and Paris six months ago. In case you missed the earlier episodes, SUSE won the two previous contests, with Dirk being pretty fast in Atlanta and Adam completing the HA challenge so we could keep the crown. So of course, we had to try again!

For this contest, the rules came with a list of penalties and bonuses, which made it easier for people to participate. And indeed, there were quite a number of participants, with the schedule for booking slots nearly full. While deploying Kilo was a goal, you could go with older releases at a 10-minute penalty per release (so +10 minutes for Juno, +20 minutes for Icehouse, and so on). In a similar way, the organizers wanted to see some upgrades and encouraged that with a bonus that could significantly impact the results (-40 minutes) — nobody tried that, though.

And guess what? SUSE kept the crown again. But we also went ahead with a new challenge: outperforming everyone else not just once, but twice, with two totally different methods.

For the super-fast approach, Dirk again built an appliance that has everything pre-installed and that configures the software on boot. This is actually not too difficult thanks to the amazing Kiwi tool and all the knowledge we have accumulated through the years at SUSE about building appliances, and also the small scripts we use for the CI of our OpenStack packages. Still, it required some work to adapt the setup to the contest and also to make sure that our Kilo packages (which were brand new and without much testing) were fully working. The clock result was 9 minutes and 6 seconds, resulting in a negative time of minus 10 minutes and 54 seconds (yes, the text in the picture is wrong) after the bonuses. Pretty impressive.

But we also wanted to show that our product would fare well, so Adam and I started looking at this. We knew it couldn't be faster than the way Dirk picked, and from the start, we targeted the second position. For this approach, there was not much to do since this was similar to what he did in Paris, and there had been recent work to update our SUSE OpenStack Cloud Admin appliance. Our first attempt failed miserably due to a nasty bug (which was actually caused by a unicode character in the ID of the USB stick we were using to install the OS... we fixed that bug later in the night). The second attempt went smoother and was actually much faster than we had anticipated: SUSE OpenStack Cloud deployed everything in 23 minutes and 17 seconds, which resulted in a final time of 10 minutes and 17 seconds after bonuses/penalties. And this was with a 10-minute penalty due to the use of Juno (as well as a couple of minutes lost debugging a setup issue that was just mispreparation on our side). A key contributor to this result is our use of Crowbar, which we've kept improving over time, and which really makes it easy and fast to deploy OpenStack.

Wall-clock time for SUSE OpenStack Cloud


These two results wouldn't have been possible without the help of Tom and Ralf, but also without the whole SUSE OpenStack Cloud team that works on a daily basis on our product to improve it and to adapt it to the needs of our customers. We really have an awesome team (and btw, we're hiring)!

For reference, three other contestants succeeded in deploying OpenStack, with the fastest of them ending at 58 minutes after bonuses/penalties. And as I mentioned earlier, there were even more contestants (including some who are not vendors of an OpenStack distribution), which is really good to see. I hope we'll see even more in Tokyo!

Results of the Rule the Stack contest


Also thanks to Intel for organizing this; I'm sure every contestant had fun and there was quite a good mood in the area reserved for the contest.

on May 25, 2015 10:58 PM
As a simple experiment, I thought it would be interesting to investigate stress-ng compiled with GCC 4.9.1 and GCC 5.1.1 in terms of computational improvement and power consumption on various CPU stress methods.  The stress-ng CPU stress test contains various mixes of integer, floating point, bit operations and logic operations that can be used for processor loading, so it makes a useful test to see how well the code gets optimized with GCC.

Stress-ng provides a "bogo-ops" mechanism to measure a "unit of operation", normally this is just a count of the number of operations performed in a unit of time, hence allowing us to compare the relative performance of each stress method when compiled with different versions of GCC.  Running each stress method for a relatively long time (a few minutes) on an idle machine allows us to get a fairly stable and accurate measurement of bogo-ops per second.  Tests were run on a Lenovo x230 with an i5-3210M CPU.

The first chart below shows the relative improvement in bogo-ops per second between the two versions of GCC.  A value of n indicates GCC 5.1.1 is n times faster  in terms of bogo-ops per second than GCC 4.9.1, hence values less than 1.0 show that GCC 5.1.1 has regressed in performance.

It appears that int64, int32, int16, int8 and rand show some remarkable improvements with GCC 5.1.1; these all perform various integer operations (add, subtract, multiply, divide, xor, and, or, shift).

In contrast, hamming, hanoi, parity and sieve show degraded performance with GCC 5.1.1.  Hanoi just exercises recursion of a function with a few arguments and some memory load/stores.  Hamming, parity and sieve exercise bit twiddling operations and memory load/stores.

Further to just measuring computation, I used the Intel RAPL CPU package power measurements (using powerstat) to measure the power consumed, and then computed bogo-ops per Watt for stress-ng built with GCC 4.9.1 and 5.1.1. I then compared the relative improvement of 5.1.1 over 4.9.1:
The chart above shows the same kind of characteristics as the first chart, but in terms of computational improvement per Watt.  Note that there are even better improvements in relative terms for the integer and rand CPU stress methods.  For example, the rand stress method shows a 1.6 x improvement in terms of computation per second and a 2.1 x improvement in terms of computation per Watt comparing GCC 4.9.1 with 5.1.1.

It seems that benchmarking performance in terms of just compute improvements really should take power consumption into consideration too, to get a better idea of the overall effect of compiler optimizations. Compute-per-Watt, rather than compute-per-second, should perhaps be the preferred benchmark in modern high-density compute farms.

Of course, these comparisons are just with one specific x86 micro-architecture, so one would expect different results for different x86 CPUs. I guess that is for another weekend, if I get time.
on May 25, 2015 04:13 PM

My new bazaar workflow

Riccardo Padovani

This weekend I went to DUCC-IT, an Italian event about Debian, Ubuntu and all the open source world.

The event was great, with a lot of interesting talks. But the best part of these kinds of meetings is meeting old and new friends, drinking a beer together and learning something new.


I was talking with 3v1n0 (Unity 7 dev) about my latest contributions in the Ubuntu world, and about my bzr workflow: one directory per branch. While I was contributing to little projects, like the calculator app, it was a good approach: the codebase is a few megabytes, so it doesn’t take too much space on the hard disk, and there is nothing to compile (or, in other projects like reminders, compilation time is very short, so I can compile the whole app every time I create a new branch).

But when I started to contribute to the webbrowser app, this approach started to show some problems: it takes something like 10 minutes to compile all the code, and it uses too much space on the hard disk. But it was okay, it didn’t bother me too much, so I continued using a different directory for each branch I had.

When I started to contribute to oxide, I understood I had to change approach: every branch is about 15 GB, and it takes something like 40 minutes (or maybe more, I only did one) to compile all the code from scratch.

Anyway, I’m very lazy, so I postponed the resolution of the problem (and, consequently, new oxide contributions) until this weekend, when 3v1n0 showed me the solution.

I report it here, hoping it could be useful for someone else. It’s based on bzr lightweight checkouts, and it works very similarly to git checkouts: you will have only one directory with all your branches, and when you switch from one to another you only have to recompile what’s different.

It requires another directory, so to keep it out of the project directory I create a main directory for the project, containing a directory named after the project itself and a directory for the branches.

I know I’m not good at explaining myself in English, so let’s see an example. One last thing before starting: I use oh-my-zsh with the Numix theme; at the end of the post I’ll explain how to add bzr support to zsh, because it could be very useful.

Bzr setup

So, as example, let’s take the webbrowser-app. You want to contribute, so the first thing you do is creating a directory for the code:

mkdir webbrowser-app

then, you enter the directory and take the code from Launchpad:

cd webbrowser-app && bzr branch lp:webbrowser-app && cd webbrowser-app

Now, you need to choose a directory where you will save the branches: as I explained, I’ll use one in the parent directory, and I’ll call it bzr-repo, but you can call it as you wish.

bzr init-repo --no-trees ../bzr-repo

This first command inits a repo in the bzr-repo directory, and --no-trees tells bzr not to create a copy of the working tree, so as not to waste too much space.

Now we create the main branch, and we call it trunk:

bzr branch . ../bzr-repo/trunk

We only need to reconfigure the local directory to use the trunk we just created as the base for checkouts:

bzr reconfigure --lightweight-checkout --bind-to ../bzr-repo/trunk .

Now some useful commands:

  • to create a new branch, type bzr switch -b new-branch
  • to list all branches, type bzr branches
  • to change branch, use bzr switch branch-name
  • to remove a local branch, use bzr remove-branch branch-name

Taking a remote branch and deleting one are a bit more complicated (but I created a couple of aliases, so it becomes really simple).

To take a remote branch, we need to indicate that we want to download it into the ../bzr-repo directory, so you have to use:

bzr branch lp:~rpadovani/webbrowser-app/remove-404-history ../bzr-repo/remove-404-history

To delete it, just use rm -r:

rm -r ../bzr-repo/remove-404-history

Aliases

To don’t have to type so much to create, take and delete branches I created 3 alias:

alias init-repo='bzr init-repo --no-trees ../bzr-repo && bzr branch . ../bzr-repo/trunk && bzr reconfigure --lightweight-checkout --bind-to ../bzr-repo/trunk .'

take-branch() {
    bzr branch "$@" ../bzr-repo/$(echo "$@" | cut -d "/" -f3)
}

delete-branch() {
    rm -rf ../bzr-repo/"$@"
}

To enable lightweight checkouts, take the code and run init-repo. Then, use take-branch lp:~username/project/branch to take a branch and delete-branch branch-name to delete a branch (before deleting a branch, switch out of it, otherwise you’ll break the world).

They use ../bzr-repo as directory, but you can easily change it.

Zsh

To have bzr support in zsh, as in the above image, you need to use one of the few themes that implement it, or implement it yourself and contribute it to zsh. It’s very easy to do; take a look at my implementation for the theme I use.

If you like my contributions to Ubuntu and want to support me, just send me a Thank you! by email or offer me a beer :-)

Ciao,
R.

on May 25, 2015 02:52 PM

On the latest issue of DistroWatch.

[Image: kubuntu-15.04-settings-small]

 

on May 25, 2015 01:40 PM

The Catalan LoCo Team celebrated the release party of the next Ubuntu version, in this case 15.04 Vivid Vervet, on May 9th. Sorry about the delay in reporting.

This time, we went to Terrassa, near Barcelona, thanks to our friends of the Nicolau Copèrnic School.

As always, we started by explaining what Ubuntu is and how it adapts to new times and devices, along with speeches from the school director and a Terrassa Councillor who really understood the meaning of Ubuntu.

 

 

Quite a lot of people registering for the party.

 

Raspberry Pi and Open Source Hardware on Ubuntu were both present at the party.

 

And in another room, LibreOffice.

 

And, of course, Ubuntu Phone as well.

 

A lot of time had passed since we last offered a talk on GIMP.

 

Local TV came and made a report for the evening news.

on May 25, 2015 12:49 PM

As I create architectures that include AWS Lambda functions, I find there are situations where I just want to know that the AWS Lambda function is getting invoked and to review the exact event data structure that is being passed in to it.

I found that a simple “echo” function can be dropped in to copy the AWS Lambda event to the console log (CloudWatch Logs). It’s easy to review this output to make sure the function is getting invoked at the right times and with the right data.

There are probably dozens of debug/echo AWS Lambda functions floating around out there, but for my own future reference, I have created a GitHub repo with a four-line echo function that does the trick for me. I’ve included a couple of scripts to install and uninstall the AWS Lambda function in an account, including the required IAM role and policies.

Here’s the repo for the lambda-echo AWS Lambda function:

https://github.com/alestic/lambda-echo

The README.md provides instructions on how to install and test.

Note: Once you install an AWS Lambda function, there is no reason to delete it if you think it might be useful in the future. It costs nothing to let Amazon store it for you and keep it available for when you want to run it again.

Amazon has indicated that they may prune AWS Lambda functions that go unused for long periods of time, but I haven’t seen this happen in practice yet.

Is there a standard file structure yet for a directory with AWS Lambda function source and the related IAM role/policies? Should I convert this to the format expected by Mitch Garnaat’s kappa perhaps?

Original article and comments: https://alestic.com/2015/05/aws-lambda-echo/

on May 25, 2015 08:03 AM

May 24, 2015

I’ve been looking at single-page-app frameworks and not keen on any of them. The list at http://todomvc.com/ has a bunch, and I think I’ve rejected them all — I concede that some of these reasons are just flat-out prejudice or reasonably baseless bad feelings or seem pretty trivial to whoever’s reading, so this is your chance to talk me into something you think is good and show me why my bad feelings are unjustified or wrong.

What I’d like is two-way data binding, routing, and some notion of making separate isolated components. I don’t have a pure REST put/delete/get API to back end onto, and I do not want a “framework” which has as a selling point that you can switch different routing or template components in and out; if I’m assembling my own preferences out of bits, then I won’t use a framework at all; I’ll stitch them together myself. The point of using a framework is exactly that it does everything, and I’m OK with using the framework’s methods to do X, Y, and Z, rather than cursing because I want to use a different routing library. I am a large believer in identifying the approach that a particular tool takes, and not using the tool if what you want to do doesn’t match that approach, even if it’s possible to use the tool in ways other than it’s expected that you will. Every time I’ve done this in the past I’ve ended up being burned by it. The thing I want is to identify something which wants to do the things I plan, and then use it; not to find something which doesn’t want to work that way and then make it work that way instead.

So, convince me I’m wrong on your preferred choice from this list, or mention something I don’t know about. And yes, I could write my own, but I’m massively resistant to that because that just makes the another-day-another-framework problem even worse.

There is one non-negotiable rule: JavaScript. No other languages which compile to JavaScript. I don’t mind things like React’s JSX, because that’s being used to write HTML (although I’m not sure about it), but no Coffeescript libraries, please.

  • Backbone.js - also requires underscore and jquery. Some ideal thing for this really ought to be self-contained. Concerned about “is pre-configured to sync with a RESTful API” too, and I have to plug in my own choice of templating library.
  • AngularJS - used it in the past for things. It makes me sad. There’s too much magic; it’s fine until something doesn’t work, and then you have to reverse-engineer all the magic.
  • Ember.js - is massive and confuses the hell out of me. I did a bit of hacking on Discourse, which is Ember, and it took me about a day just to work out how to change the simplest thing.
  • KnockoutJS - seems way, way verbose. function AppViewModel() { this.firstName = ko.observable("Bert"); this.fullName = ko.computed(function() { ... }, this); } ko.applyBindings(new AppViewModel());, seriously? Gnah. I hate that.
  • Dojo - old. I never liked it the first time around, really, and I have the feeling that the zeitgeist has not gone with it.
  • YUI - old, and pretty much abandoned upstream afaik. Used it at Canonical; “aspect-oriented programming” is basically computed COME FROM from Intercal, which puts invisible trapdoors in your code so following the thread of control is hell on legs.
  • Knockback.js - builds on Knockout, so see knockout.
  • CanJS - models stuff seems specifically designed to back end to a put/get/delete rest API, which I don’t have.
  • Polymer - seems like it may be ropy on iOS. Also, it has a nice big library of pre-built components, which is great, but they’re all Material Design components, which is not great at all; I’m not putting an Android-themed app on other platforms, for the same reason I wouldn’t put an app which looks like iOS on Android.
  • React - I fear React. It seems to be the popular thing, but it’s very much a thing where one steps out of the world and into the React World instead. Have spoken to a number of people using it for real projects and I don’t like the idea of it.
  • Mithril - I utterly utterly do not understand the m.prop stuff. I don’t know whether the docs explain it badly or I’m just not getting it, but I completely cannot grasp the Zen of Mithril. Shame, because it looks cool, if I could get it, which I can’t.
  • Ampersand - I’m in two minds about the “it’s a zillion small libraries” thing. More to the point, I have no sense of how to structure the overall application; individual bits I can see how to do, but I don’t have a good sense of how to put it together.
  • Flight - I’m not sure this is the right thing for new apps. For restructuring an existing thing, I think it’d be good, but I’m building from scratch.
  • Vue.js - doesn’t do routing or provide large-scale structure; they have some notes saying “hey, that means you can do it your way”, which I don’t want to do.
  • MarionetteJS - depends on Backbone, so see Backbone.
  • TroopJS + RequireJS - lots of dependencies. I do not get how to actually structure an application with it, and the docs are not helpful at this.

I wish I were happy about any of these. To be honest, my secret worry is that all frameworks are either incomplete, so I don’t want them, or do do everything I want but are consequently too magical, and so I don’t like them. I hope I’m wrong and there’s either something I’ve missed or a framework I don’t know about. Speak on, readers: leave comments on Google Plus or on your own site by webmention.

on May 24, 2015 10:38 PM

9th Ubucon 2015 in Germany

Sujeevan Vijayakumaran

This year we're going to organize the 9th Ubucon in Germany. This time it will be held in the capital, Berlin, starting on the 23rd and ending on the 25th of October 2015. I have been a member of the organization team since 2013, but this year I am the head of the organization team.

We have also started the „Call for Papers“. The main language is German, but we might also allow talks in English. I'm really looking forward to the event, and I'm hoping that more people will visit the Ubucon. In the last years we had roughly 100-150 attendees. This year's slogan is „Community in Touch“, which includes the opportunity of getting in touch with the community and with „Ubuntu Touch“.

on May 24, 2015 09:45 PM

Today, Ubuntu Vancouver is proud to release our newest ubuntu-themed cocktail: the Juju Charmer!

The Juju Charmer cocktail has been meticulously crafted to meet the highest quality standards of the Juju Charmers team and community Charmers everywhere. After a full development cycle including rigorous testing, an alpha, a beta, and numerous reviews, we've refined this cocktail to match the quality and consistency that one would expect from the best Charms. Best practices distilled and mixed!

We've also worked extra hard to ensure that the taste and colour of this beautiful cocktail is something that you, your friends, and your family can enjoy regardless of whether they've ever heard of ubuntu or juju.

In fact, when you enjoy a Juju Charmer together, you might just find that they get quite curious about the world's friendliest and most collaborative development project. They may even get curious enough to sample the freedom that you enjoy every day, thanks to ubuntu and juju.

So raise a glass and cheer "Juju" (joo-joo), or even "Ubuntu" (oo-boon-too) and watch heads turn. Watch people wonder what all the fuss is about.

A full-resolution image suitable for printing is available at http://www.ubuntuvancouver.org/jujucharmer. Why not print a few thousand of these cards and hand them out to bartenders everywhere? That's how ubuntu spreads.

:~$ juju deploy spin

Enjoy!

--
Special thanks go to Joe Liau, co-creator.
The creators wish to thank Marco Ceppi for his superb choice of rum and also Canonical's Juju Ecosystems team for graciously providing feedback and for adding enough units to ensure spin!

on May 24, 2015 08:24 PM
The cpuburn package contains several hand-crafted assembler "burn" programs to load x86 processors and to maximize heat production to stress a system.  This is also the intention of the stress-ng "cpu" stress test, which contains a variety of methods to stress CPUs with a wide range of instruction mixes.  Stress-ng is written in C and relies on the compiler to generate efficient code to hopefully load the CPU.  So how does stress-ng compare to the hand-crafted cpuburn suite of programs on modern processors?

Since there is a correlation between power consumed and heat generated, I took the liberty of measuring the CPU package power consumption using the Intel RAPL interface as one way of comparing cpuburn and stress-ng.  Recent versions of powerstat support RAPL, so I ran each stressor for 120 seconds and took CPU package power measurements every 4 seconds over this interval with powerstat.

So, the cpuburn "burn" programs do well, however, some of the stress-ng CPU stress methods seem to do better.   The best stress-ng CPU methods are: ackermann, callfunc, hanoi, decimal128, dither, int128decimal128, trig and zeta.  It appears that ackermann, callfunc and hanoi do well because these are very localised deeply recursive function calls, so I expect register save/restores and some stack activity is the main power consumer.  The rest exercise the integer and floating point units and memory load/stores.

As it stands, a handful of stress-ng CPU stressors aren't as good as cpuburn. What is noticeable is that burnBX on an i3-3120M seems to do rather well in terms of loading the CPU.

One conclusion to draw from this is that modern C compilers such as gcc (in this case, gcc 4.9.2), with a suitably chosen mix of stores, loads and integer/floating point operations, can outperform hand-written assembler in terms of loading the full CPU package.  When I have a little more time, I will try to repeat this experiment with clang and gcc 5.
on May 24, 2015 04:03 PM

May 23, 2015

Earlier this month Kilos (a Membership Board member) started a discussion on the Membership Board mailing list about adding a new slot to the board. The proposal is to have two boards on the 1st Thursday of the month, the 1st at 20:00 UTC and the 2nd at 22:00 UTC as usual.

After discussing the idea for almost two weeks, and four days after the last email in that thread, I believe that everyone is now OK with adding that 3rd slot.

So I changed the Membership Wiki Page, adding the new slot:

If you want to know more about Ubuntu Membership, please visit the link.



on May 23, 2015 08:26 PM

This post is the second in the series ECC: a gentle introduction.

In the previous post, we have seen how elliptic curves over the real numbers can be used to define a group. Specifically, we have defined a rule for point addition: given three aligned points, their sum is zero (P + Q + R = 0). We have derived a geometric method and an algebraic method for computing point additions.

We then introduced scalar multiplication (nP = P + P + · · · + P) and we found out an “easy” algorithm for computing scalar multiplication: double and add.

Now we will restrict our elliptic curves to finite fields, rather than the set of real numbers, and see how things change.

The field of integers modulo p

A finite field is, first of all, a set with a finite number of elements. An example of a finite field is the set of integers modulo p, where p is a prime number. It is generally denoted as \mathbb{Z}/p, GF(p) or \mathbb{F}_p. We will use the latter notation.

In fields we have two binary operations: addition (+) and multiplication (·). Both are closed, associative and commutative. For both operations, there exists a unique identity element, and every element has a unique inverse (except that 0 has no multiplicative inverse). Finally, multiplication is distributive over addition: x · (y + z) = x · y + x · z.

The set of integers modulo p consists of all the integers from 0 to p – 1. Addition and multiplication work as in modular arithmetic (also known as “clock arithmetic”). Here are a few examples of operations in \mathbb{F}_{23}:

  • Addition: (18 + 9) mod 23 = 4
  • Subtraction: (7 – 14) mod 23 = 16
  • Multiplication: 4 · 7 mod 23 = 5
  • Additive inverse: –5 mod 23 = 18
    Indeed: (5 + (–5)) mod 23 = (5 + 18) mod 23 = 0
  • Multiplicative inverse: 9⁻¹ mod 23 = 18
    Indeed: 9 · 9⁻¹ mod 23 = 9 · 18 mod 23 = 1

If these equations don’t look familiar to you and you need a primer on modular arithmetic, check out Khan Academy.

As we already said, the integers modulo p are a field, and therefore all the properties listed above hold. Note that the requirement for p to be prime is important! The set of integers modulo 4 is not a field: 2 has no multiplicative inverse (i.e. the equation 2 · x mod 4 = 1 has no solutions).

Division in modulo p

We will soon define elliptic curves over \mathbb{F}_p, but before doing so we need a clear idea of what x / y means in \mathbb{F}_p. Simply put: x / y = x · y⁻¹, or, in plain words, x over y is equal to x times the multiplicative inverse of y. This fact is not surprising, but gives us a basic method to perform division: find the multiplicative inverse of a number and then perform a single multiplication.

Computing the multiplicative inverse can be “easily” done with the extended Euclidean algorithm, which is O(log p) (or O(k) if we consider the bit length) in the worst case.

We won’t enter the details of the extended Euclidean algorithm, as it is off-topic; however, here’s a working Python implementation:

def extended_euclidean_algorithm(a, b):
    """
    Returns a three-tuple (gcd, x, y) such that
    a * x + b * y == gcd, where gcd is the greatest
    common divisor of a and b.

    This function implements the extended Euclidean
    algorithm and runs in O(log b) in the worst case.
    """
    s, old_s = 0, 1
    t, old_t = 1, 0
    r, old_r = b, a

    while r != 0:
        quotient = old_r // r
        old_r, r = r, old_r - quotient * r
        old_s, s = s, old_s - quotient * s
        old_t, t = t, old_t - quotient * t

    return old_r, old_s, old_t


def inverse_of(n, p):
    """
    Returns the multiplicative inverse of
    n modulo p.

    This function returns an integer m such that
    (n * m) % p == 1.
    """
    gcd, x, y = extended_euclidean_algorithm(n, p)
    assert (n * x + p * y) % p == gcd

    if gcd != 1:
        # Either n is 0, or p is not a prime number.
        raise ValueError(
            '{} has no multiplicative inverse '
            'modulo {}'.format(n, p))
    else:
        return x % p
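
As a quick sanity check, the function reproduces the multiplicative inverse example from \mathbb{F}_{23} above:

inverse_of(9, 23)   # returns 18
(9 * 18) % 23       # returns 1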

Elliptic curves in \mathbb{F}_p

Now we have all the necessary elements to restrict elliptic curves over \mathbb{F}_p. The set of points, that in the previous post was {(x, y) ∈ ℝ² | y² = x³ + ax + b, 4a³ + 27b² ≠ 0} ∪ {0}, now becomes:

\begin{array}{rl} \left\{(x, y) \in (\mathbb{F}_p)^2\ | \right. & \left. y^2 = (x^3 + ax + b) \bmod{p}, \right. \\ & \left. (4a^3 + 27b^2) \bmod{p} \ne 0\right\}\ \cup\ \left\{0\right\} \end{array}

where 0 is still the point at infinity, and a and b are two integers in \mathbb{F}_p.

The curve y² = (x³ – 7x + 10) mod p with p = 19, 97, 127, 487. Note that, for every x, there are at most two points. Also note the symmetry about y = p / 2.
The curve y² = x³ mod 29 is singular and has a triple point at (0, 0). It is not a valid elliptic curve.

What previously was a continuous curve is now a set of disjoint points in the xy-plane. But we can prove that, even if we have restricted our domain, elliptic curves in \mathbb{F}_p still form an abelian group.

Point addition

Clearly, we need to change a bit our definition of addition in order to make it work in \mathbb{F}_p. With reals, we said that the sum of three aligned points was zero (P + Q + R = 0). We can keep this definition, but what does it mean for three points to be aligned in \mathbb{F}_p?

We can say that three points are aligned if there’s a line that connects all of them. Now, of course, lines in \mathbb{F}_p are not the same as lines in ℝ. We can say, informally, that a line in \mathbb{F}_p is the set of points (x, y) that satisfy the equation (ax + by + c) mod p = 0 (this is the standard line equation, with the addition of “mod p“).

Point addition over the curve y² = (x³ – x + 3) mod 127, with P = (16, 20) and Q = (41, 120). Note how the line y = (4x + 83) mod 127 that connects the points “repeats” itself in the plane.

Given that we are in a group, point addition retains the properties we already know:

  • Q + 0 = 0 + Q = Q (from the definition of identity element).
  • Given a non-zero point Q, the inverse –Q is the point having the same abscissa but opposite ordinate. Or, if you prefer, –Q = (x_Q, –y_Q mod p).
    For example, if a curve in \mathbb{F}_{29} has a point Q = (2, 5), the inverse is –Q = (2, –5 mod 29) = (2, 24).
  • Also, P + (–P) = 0 (from the definition of inverse element).

Algebraic sum

The equations for calculating point additions are exactly the same as in the previous post, except for the fact that we need to add “mod p” at the end of every expression. Therefore, given P = (x_P, y_P), Q = (x_Q, y_Q) and R = (x_R, y_R), we can calculate P + Q = –R as follows:

\begin{array}{rcl} x_R & = & (m^2 - x_P - x_Q) \bmod{p} \\ y_R & = & [y_P + m(x_R - x_P)] \bmod{p} \\ & = & [y_Q + m(x_R - x_Q)] \bmod{p} \end{array}

If P ≠ Q, the slope m assumes the form:

m = (y_P - y_Q)(x_P - x_Q)^{-1} \bmod{p}

Else, if P = Q, we have:

m = (3 x_P^2 + a)(2 y_P)^{-1} \bmod{p}

It’s not a coincidence that the equations have not changed: in fact, these equations work in every field, finite or infinite (with the exception of \mathbb{F}_2 and \mathbb{F}_3, which are special cased). Now I feel I have to provide a justification for this fact. The problem is: proofs for the group law generally involve complex mathematical concepts. However, I found a proof from Stefan Friedl that uses only elementary concepts. Read it if you are interested in why these equations work in (almost) every field.

Back to us — we won’t define a geometric method: in fact, there are a few problems with that. For example, in the previous post, we said that to compute P + P we needed to take the tangent to the curve in P. But without continuity, the word “tangent” does not make any sense. We can work around this and other problems, however a purely geometric method would just be too complicated and not practical at all.

Instead, you can play with the interactive tool I’ve written for computing point additions.
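
To make the formulas concrete, here is a minimal Python sketch of point addition over \mathbb{F}_p, reusing the inverse_of function defined earlier. Representing the point at infinity as None and passing the curve parameter a explicitly are illustrative choices of mine, not something prescribed by this post.

def point_add(point1, point2, a, p):
    """
    Returns the sum of point1 and point2 on the curve
    y^2 = (x^3 + ax + b) mod p, with the point at infinity
    represented as None (b does not appear in the formulas).
    """
    if point1 is None:
        return point2
    if point2 is None:
        return point1

    x1, y1 = point1
    x2, y2 = point2

    if x1 == x2 and (y1 != y2 or y1 == 0):
        # P + (-P) = 0 (this also covers a point with y = 0,
        # which is its own inverse).
        return None

    if x1 == x2:
        # P = Q: slope of the tangent, m = (3x^2 + a) / (2y).
        m = (3 * x1 * x1 + a) * inverse_of((2 * y1) % p, p)
    else:
        # P != Q: slope of the line through P and Q.
        m = (y1 - y2) * inverse_of((x1 - x2) % p, p)

    # x_R = m^2 - x_P - x_Q, and the result of the addition is
    # -R = (x_R, -y_R).
    x3 = (m * m - x1 - x2) % p
    y3 = (-(y1 + m * (x3 - x1))) % p
    return (x3, y3)

For instance, on the curve from the figure above (a = -1, p = 127), point_add((16, 20), (41, 120), -1, 127) returns (86, 81).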

The order of an elliptic curve group

We said that an elliptic curve defined over a finite field has a finite number of points. An important question that we need to answer is: how many points are there exactly?

Firstly, let’s say that the number of points in a group is called the order of the group.

Trying all the possible values for x from 0 to p – 1 is not a feasible way to count the points, as it would require O(p) steps, and this is “hard” if p is a large prime.

Luckily, there’s a faster algorithm for computing the order: Schoof’s algorithm. I won’t enter the details of the algorithm — what matters is that it runs in polynomial time, and this is what we need.

Scalar multiplication and cyclic subgroups

As with reals, multiplication can be defined as:

n P = \underbrace{P + P + \dots + P}_{n\ \text{times}}

And, again, we can use the double and add algorithm to perform multiplication in O(log n) steps (or O(k), where k is the number of bits of n). I’ve written an interactive tool for scalar multiplication too.
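
Sketched in Python, double and add can be built directly on the point_add function from the previous section (again, just an illustrative sketch):

def scalar_multiply(k, point, a, p):
    """
    Returns kP computed with the double and add algorithm,
    using O(log k) point operations.
    """
    result = None    # start from the point at infinity
    addend = point

    while k:
        if k & 1:
            # Add the current power-of-two multiple of the point.
            result = point_add(result, addend, a, p)
        addend = point_add(addend, addend, a, p)    # double
        k >>= 1

    return result

For example, scalar_multiply(2, (3, 6), 2, 97) returns (80, 10), matching the table of multiples of P = (3, 6) just below.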

Multiplication over points for elliptic curves in \mathbb{F}_p has an interesting property. Take the curve y² = (x³ + 2x + 3) mod 97 and the point P = (3, 6). Now calculate all the multiples of P:

The multiples of P = (3, 6) are just five distinct points (0, P, 2P, 3P, 4P) and they repeat cyclically. It’s easy to spot the similarity between scalar multiplication on elliptic curves and addition in modular arithmetic.
  • 0P = 0
  • 1P = (3, 6)
  • 2P = (80, 10)
  • 3P = (80, 87)
  • 4P = (3, 91)
  • 5P = 0
  • 6P = (3, 6)
  • 7P = (80, 10)
  • 8P = (80, 87)
  • 9P = (3, 91)

Here we can immediately spot two things: firstly, the multiples of P are just five; the other points of the elliptic curve never appear. Secondly, they repeat cyclically. We can write:

  • 5kP = 0
  • (5k + 1)P = P
  • (5k + 2)P = 2P
  • (5k + 3)P = 3P
  • (5k + 4)P = 4P

for every integer k. Note that these five equations can be “compressed” into a single one, thanks to the modulo operator: kP = (k mod 5)P.

Not only that, but we can immediately verify that these five points are closed under addition. Which means: no matter how I add 0, P, 2P, 3P or 4P, the result is always one of these five points. Again, the other points of the elliptic curve never appear in the results.

The same holds for every point, not just for P = (3, 6). In fact, if we take a generic P:

nP + mP = \underbrace{P + \dots + P}_{n\ \text{times}} + \underbrace{P + \dots + P}_{m\ \text{times}} = (n + m)P

Which means: if we add two multiples of P, we obtain a multiple of P (i.e.: multiples of P are closed under addition). This is enough to prove that the set of the multiples of P is a cyclic subgroup of the group formed by the elliptic curve.

A “subgroup” is a group which is a subset of another group. A “cyclic subgroup” is a subgroup whose elements repeat cyclically, as we have shown in the previous example. The point P is called the generator or base point of the cyclic subgroup.

Cyclic subgroups are the foundations of both ECC and RSA. We will see why in the next post.

Subgroup order

We can ask ourselves what the order of a subgroup generated by a point P is (or, equivalently, what the order of P is). To answer this question we can’t use Schoof’s algorithm, because that algorithm only works on whole elliptic curves, not on subgroups. Before approaching the problem, we need a few more bits:

  • So far, we have defined the order as the number of points of a group. This definition is still valid, but within a cyclic subgroup we can give a new, equivalent definition: the order of P is the smallest positive integer n such that nP = 0.
    In fact, if you look at the previous example, our subgroup contained five points, and we had 5P = 0.
  • The order of P is linked to the order of the elliptic curve by Lagrange’s theorem, which states that the order of a subgroup is a divisor of the order of the parent group.
    In other words, if an elliptic curve contains N points and one of its subgroups contains n points, then n is a divisor of N.

These two pieces of information together give us a way to find out the order of a subgroup with base point P:

  1. Calculate the elliptic curve’s order N using Schoof’s algorithm.
  2. Find out all the divisors of N.
  3. For every divisor n of N, compute nP.
  4. The smallest n such that nP = 0 is the order of the subgroup.
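
Here is how those steps might look in Python, reusing scalar_multiply from above. Step 1 (Schoof's algorithm) is assumed to have produced curve_order already, and the naive trial-division divisor search is only meant for small examples:

def divisors_of(n):
    # Naive trial division; fine for small n.
    return sorted(d for d in range(1, n + 1) if n % d == 0)

def subgroup_order(point, curve_order, a, p):
    # The order of the subgroup is the smallest divisor n of the
    # curve order such that nP = 0.
    for n in divisors_of(curve_order):
        if scalar_multiply(n, point, a, p) is None:
            return n

With the numbers from the example below, subgroup_order((2, 3), 42, -1, 37) would return 7.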

For example, the curve y² = x³ – x + 3 over the field \mathbb{F}_{37} has order N = 42. Its subgroups may have order n = 1, 2, 3, 6, 7, 14, 21 or 42. If we try P = (2, 3) we can see that P ≠ 0, 2P ≠ 0, …, 7P = 0, hence the order of P is n = 7.

Note that it’s important to take the smallest divisor, not a random one. If we proceeded randomly, we could have taken n = 14, which is not the order of the subgroup, but one of its multiples.

Another example: the elliptic curve defined by the equation y² = x³ – x + 1 over the field \mathbb{F}_{29} has order N = 37, which is a prime. Its subgroups may only have order n = 1 or 37. As you can easily guess, when n = 1, the subgroup contains only the point at infinity; when n = N, the subgroup contains all the points of the elliptic curve.

Finding a base point

For our ECC algorithms, we want subgroups with a high order. So in general we will choose an elliptic curve, calculate its order (N), choose a high divisor as the subgroup order (n) and eventually find a suitable base point. That is: we won’t choose a base point and then calculate its order, but we’ll do the opposite: we will first choose an order that looks good enough and then we will hunt for a suitable base point. How do we do that?

Firstly, we need to introduce one more term. Lagrange’s theorem implies that the number h = N / n is always an integer (because n is a divisor of N). This number h has a name: it’s the cofactor of the subgroup.

Now consider that for every point of an elliptic curve we have NP = 0. This happens because N is a multiple of any candidate n. Using the definition of cofactor, we can write:

n(hP) = 0

Now suppose that n is a prime number (for reasons that will be explained in the next post, we prefer prime orders). This equation, written in this form, is telling us that the point G = hP generates a subgroup of order n (except when G = hP = 0, in which case the subgroup has order 1).

In the light of this, we can outline the following algorithm:

  1. Calculate the order N of the elliptic curve.
  2. Choose the order n of the subgroup. For the algorithm to work, this number must be prime and must be a divisor of N.
  3. Compute the cofactor h = N / n.
  4. Choose a random point P on the curve.
  5. Compute G = hP.
  6. If G is 0, then go back to step 4. Otherwise we have found a generator of a subgroup with order n and cofactor h.

Note that this algorithm only works if n is a prime. If n weren’t prime, then the order of G could be one of the divisors of n.
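
In Python, the hunt could be sketched like this. Note that random_point is a hypothetical helper (not shown here): it would pick random x values until x³ + ax + b is a quadratic residue modulo p and return a corresponding point on the curve.

def find_base_point(n, curve_order, a, b, p):
    # n must be a prime divisor of the curve order.
    h = curve_order // n    # the cofactor of the subgroup
    while True:
        point = random_point(a, b, p)    # hypothetical helper
        g = scalar_multiply(h, point, a, p)
        if g is not None:
            # G = hP generates a subgroup of order n.
            return g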

Discrete logarithm

As we did when working with continuous elliptic curves, we are now going to discuss the question: if we know P and Q, what is k such that Q = kP?

This problem, which is known as the discrete logarithm problem for elliptic curves, is believed to be a “hard” problem, in that there is no known polynomial time algorithm that can run on a classical computer. There are, however, no mathematical proofs for this belief.

This problem is also analogous to the one used with RSA (it’s not a coincidence that they have the same name). The difference is that, with RSA, we use modulo exponentiation instead of scalar multiplication. RSA’s discrete logarithm problem can be stated as follows: if we know a and b, what’s k such that b = aᵏ mod p?

Both these problems are “discrete” because they involve finite sets (more precisely, cyclic subgroups). And they are “logarithms” because they are analogous to ordinary logarithms.

What makes ECC interesting is that, as of today, the discrete logarithm problem seems to be “harder” for elliptic curves, if compared to the same problem with modulo exponentiation. This implies that we need fewer bits for the integer k in order to achieve the same level of security as with RSA, as we will see in detail in the fourth and last post of this series.

More next week!

Enough for today! I really hope you enjoyed this post. Leave a comment if you didn’t.

Next week’s post will be the third in this series and will be about ECC algorithms: key pair generation, ECDH and ECDSA. That will be one of the most interesting parts of this series. Don’t miss it!

on May 23, 2015 02:08 PM

In the converged world of Unity-8, applications will work on small mobile screens, tablets and desktop monitors (with a mouse and keyboard attached) as if by magic. To achieve this transformation for your own app with little to no extra work required when considering the UI, simply design using grid units for a few predetermined virtual screen targets. Combined with Ubuntu off-the-shelf UI components built with convergence in mind, most of the hard work is done, freeing developers and designers to focus on what’s most important to their users.

What’s a grid unit? And why 40, 50, or 90 of them?

A grid unit (GU) is a virtual measure of screen space that’s independent of device hardware details like pixels or aspect ratio: those complexities are mapped under the covers by Ubuntu. Instead, by targeting just three ‘fixed’ virtual GU portrait widths—40, 50, and 90 GU— you’re guaranteed to be addressing the largest number of devices, including the desktop, to a high degree of design quality and consistency where relative spacing and content sizing just works.

The 40, 50, and 90 GU dimensions correspond to smaller smartphones, larger smartphones/phablets, and tablets respectively in portrait mode. These particular panel-widths weren’t chosen arbitrarily: they were selected by analyzing the most popular device specs on the market and picking the portrait dimensions that would embrace the largest number of possibilities most successfully, including for the desktop (more on that later).

For example, compact phones such as the BQ Aquarius E4.5 are best suited to the 40 GU-wide virtual portrait screen, offering the right balance of content to screen real estate for palm-sized viewing. For larger phones with more screen space such as the Meizu MX4, the 50 GU layout is most fitting, allowing more room for content. Finally, for edge-to-edge tablet portrait layouts for the N7 or N10, the 90 GU layout works best.

Try this exercise

Having trouble envisioning the system in action? Close your eyes and imagine a two-dimensional graph paper divided into squares that can adapt according to just three simple rules:

  • It can only be 40, 50, or 90 whole units along the short edge but the long edge can be variable
  • The long edge (in landscape mode or on the desktop) will be the whole number of GUs that carves out the maximum area rectangle that will fit within any given device’s physical screen in landscape mode based on the physical dimension of the GU determined from portrait mode (in pixels)
  • The last rule is simple but key: the squares of the graph paper must always be square—the graph paper, just to push the image a bit too far—is made of something more like graphene than polypropylene (no squeezed or stretched GUs allowed).

Try it for yourself here: https://dl.dropboxusercontent.com/u/360991/canonical/grid-units/grid-units.html

There is one additional factor that can impact the final available screen area, but it’s a bit of a technical convolution. The under-the-covers pixels to grid unit mapping can’t include fractional pixels (this may seem like an obvious point, admittedly). But at the end of the day, the user sees the largest possible version of the 40, 50, or 90 GU wide virtual screen that’s possible on any given device. That means that all you have to do as a designer or developer is plan for the virtual dimensions we’ve been talking about, and you’re assured your user is getting the best possible rendering.
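
As a back-of-the-envelope illustration of that mapping (the numbers here are invented for the example, not taken from a real device, and this is just one plausible reading of the no-fractional-pixels rule):

# Illustrative only: map a hypothetical portrait screen onto a
# 40 GU virtual screen using whole pixels per grid unit.
pixels_wide = 768                          # hypothetical portrait width in pixels
target_gu = 40                             # chosen virtual width in grid units

pixels_per_gu = pixels_wide // target_gu   # 19 whole pixels per GU
usable_pixels = pixels_per_gu * target_gu  # 760 px: the largest 40 GU screen that fits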

Though the system may seem abstract at first, the benefits are all too easy to understand from a developer or designer standpoint: it’s far more predictable and simpler to design for layouts that follow rules than to try to account for a universe of idiosyncratic device possibilities. In addition, by using these layouts as the foundation, the convergence goal is much more easily achieved.

What about landscape & desktop? Use building blocks

By assembling these key portrait views together, it’s far easier to achieve landscape and desktop layouts than ever before. For example, if your app lends itself to a two panel layout, simply join together 40 and 50 GU phone layouts (that you’ve already designed) to achieve a landscape layout (or even a portrait tablet layout!)

Similarly, switching from portrait to landscape mode on tablet—also a desktop-friendly layout—could be as simple as joining a 40 GU layout and a 90 GU layout for a total of 130 GU, which fits nicely within both 16:9 and 16:10 tablet landscape screens as well as on any desktop monitor.

Since landscape and desktop layouts are the least predictable due to device variations and manual stretching by users, you can designate one of your panel layouts to be of flexible width, filling the available space using one of these strategies:

  • Center the layout in the available space
  • Stretch or squeeze the layout to fit the available space
  • Combine these two, depending on the individual components within the layout

More complex layouts can also be achieved by joining three or more portrait layouts. For example, three 40 GU layouts can be joined side by side, which happens to fit perfectly into a 4:3 landscape tablet screen.

Columns, too

To help developers even further with one of the most common layouts—columnar or grid types—we’re adding a capability that maintains column-to-content size relationships across devices and the desktop the same way that type sizes are specified. This makes it very simple to achieve the proper content readability and density regardless of the device. For example, by specifying a “medium” sized column filled with “small” type, these relative relationships can be preserved throughout the converged-device experience without having to manually dig into pixel measurements.

The column capability can also adapt responsively to extra wide, variable landscape layouts, such as 16:10 aspect ratio tablets or manually stretched desktop layouts. This means that as more space becomes available as a user stretches the corners of the app window on the desktop, additional columns can be added on cue, providing more room for content.

Putting it all together across all form factors

By making screen dimensions virtual, we can minimize the vagaries of individual hardware specs that can frustrate device-convergent thinking and help developers focus more on their user’s needs. A combination of snap-together layouts, automated column layouts, and adaptive UI toolkit components like the header, list component, and bottom edge component help ensure users will experience a consistent, elegant journey from mobile to desktop and back again.


on May 23, 2015 11:12 AM

May 21, 2015


And it’s this time of the year when we open the UbuConLA 2015 CFP!

The conference will take place in Lima, Peru, the 7th and 8th of August. We’ll have slots for speakers in both English and Spanish, with Plenary and Workshop talks.

The attendee registration will open on Wednesday, where more information about the conference will be published.

If you want to propose a talk, please fill out the following form.


on May 21, 2015 04:26 PM

S08E11- Blubberella - Ubuntu Podcast

Ubuntu Podcast from the UK LoCo

It’s Episode Eleven of Season Eight of the Ubuntu Podcast! Alan Pope, Mark Johnson, Laura Cowen, and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

That’s all for this week, please send your comments and suggestions to: show@ubuntupodcast.org
Join us on IRC in #ubuntu-podcast on Freenode
Follow us on Twitter
Find our Facebook Fan Page
Follow us on Google+

on May 21, 2015 01:19 PM

Working Together

Scott Kitterman

At the time of Mark’s last UOS keynote (see starting about 8:19), I recall wondering what Canonical was going to do to reach out as he was suggesting.  I got distracted and forgot about it until I ran across this article.  So now that I’m reminded, I am curious: what is Canonical doing to reach out and bridge the existing gaps?  Dear lazyweb, does anyone have information on this?

on May 21, 2015 12:09 PM

LightDM GTK+ Greeter and its accompanying configuration application have been updated!  A number of bugs have been fixed in the greeter, and a new multihead configuration dialog has been added to LightDM GTK+ Greeter Settings.

New in LightDM GTK+ Greeter 2.0.1

  • New Features
    • Support for multiple configuration files (LP: #1421587); see the example below
  • Bugs Fixed
    • Multihead setup: black/white/grey screen at startup (LP: #1410406, #1426664, #1449567, #1448761)
    • Switching active monitors with the Onboard on-screen keyboard can leave its window in an invalid state
    • Onboard does not react to mouse clicks
    • Window focus is lost after changing monitor configuration
    • Every lock activates a new virtual terminal with GTK 3.16 (LP: #1445461)
    • Broken delayed autologin (LP: #854261)
    • Message label can remain hidden when it must be visible (GTK: #710888)
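To give a feel for what the multiple configuration files support enables (the key names follow the greeter’s existing lightdm-gtk-greeter.conf format, but the second path and the exact lookup order are illustrative assumptions — check the release notes for the real behaviour), a distribution can ship defaults that an admin overrides piecemeal:

# /etc/lightdm/lightdm-gtk-greeter.conf — system-wide defaults
[greeter]
theme-name = Numix
background = /usr/share/backgrounds/default.png

# A later file in the lookup order (hypothetical path) only needs
# the keys it wants to override:
[greeter]
background = /srv/branding/login-wall.png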

New in Greeter Settings 1.2.0

  • Support for LightDM GTK+ Greeter 2.0.1
  • New Features
    • xembed mode: Run app without privileges if pkexec fails
    • New multihead setup dialog
    • Support for multiple configuration files (see above)
  • Bugs Fixed
    • Infinite loop in IndicatorsEntry (LP: #1435635)
    • Table column title hard to understand (LP: #1428224)
  • New and Updated Translations
    • Arabic, Brazilian Portuguese, Catalan, Croatian, Finnish, French, German, Japanese, Lithuanian, Polish, Portuguese, Russian, Serbian, Spanish

Screenshots

  • Xfce Settings Manager Integration
  • GTK Headerbar Integration
  • Greeter Settings without CSD
  • Multihead Setup
  • LightDM Gtk Greeter 2.0
  • LightDM Gtk Greeter with Numix (1.8.x)

Downloads

The latest versions of LightDM GTK+ Greeter and Settings can be downloaded from Launchpad:

Both applications will be updated for supported Ubuntu releases at the Stable PPA this week, and will make their way into Ubuntu 15.10 as regular updates.

Thanks

Thanks to all the translators who have worked diligently to translate these projects into their native languages.  Also, a huge thanks to Andrew P., who has largely taken over maintenance of the greeter and is solely responsible for the existence of the settings application.

If you want to contribute to these projects, head over to each project page on Launchpad:

on May 21, 2015 03:00 AM

May 20, 2015

Working from home in a distributed team within an organisation can definitely be a plus, but it comes with some hurdles to get used to. This is my 3rd role in which I’ve been fortunate enough to be able to work from home (WFH), and I personally love it. No more crazy morning starts fighting to get on the train and then battling for a seat. Instead, each morning I go into my home office and start work.

For me it’s ideal, and I still have the option of going to the London office when I need to meet people. Best of both worlds really. It does take some getting used to, and for some it’s not suitable, as they need the office-style environment. Being able to work from my garden during the summer when it’s hot is an added bonus of not being stuck indoors!

Things I’ve found that work

Creating an office space – carving out somewhere that I consider my workplace, where I can do my job and close the door on distractions, has been very good, especially when there are others in the house.

Daily conversations with your team – have one central place that you join daily to say hi. Hang out there and ask questions. You’d do this in real life in the canteen or when going for lunch, so you need to find the equivalent in your virtual world. The best example of this was when I worked at Canonical. Everyone from HR to Payroll, engineers to the CEO, was on IRC, so you could ping them and ask them questions. It was really great to see people with various technical abilities all in one place. It was the online office!

Be professional! Don’t work from home in your PJs! Get up and get to your desk. One tip I was told years ago when I worked for GE was to keep a mirror on your desk: that way, when you talk to someone, you can see the faces you make, and your expression is conveyed over the phone by your tone.

Obstacles to overcome

The biggest thing I’ve found hard to wrap my head around is the amount of tools each team or person uses. Nobody seems to want to standardise the tools!

One day you are having meetings using one tool, and the next day you have to download another tool and get it to work. On a given day I use HipChat for my team conversations, Skype for calls, BlueJeans or GoToMeeting for group calls, and then there are the conversations I have on Hangouts. They are interchangeable depending on which team you work with. I have found that engineering types tend to favour one tool over another compared to Sales and Marketing, but perhaps this is just because people work a certain way.

Frustrations of communicating and following up on items! In organisations that are spread out, you need to track what’s being done where and when, and any activity linked to it. This can be done via RT, Jira, burndown charts, or kanban boards. Whatever it is, again, it should be set in stone in a company: this is the tool we use. All teams, no matter their discipline, should use it. Asking people to send requests via email is not scalable; it leads to items not being done, and it’s not possible to get an overview of how progress is being made.

People assume that when you work from home it’s OK to pop over. It’s not, and that’s often hard for them to understand. You have a working day, and when you have guests they assume you can just down tools. It’s not as easy as that, and it’s best to just close the door, however rude it may seem; you wouldn’t do this to someone who was in an office.

A downside to timezones, and to people being in various locations, is that needing to talk to people in different teams often means early morning or late night calls. Avoid always being the one on late; try to alternate with people so the onus doesn’t always fall on the same people to stay back late. They have a life too. If you do have to ask someone to stay back for a meeting in their timezone, even if it’s over lunch, make sure you say thank you and show some appreciation. It makes a difference.

Things that are hard!

I struggle daily to take a break or get up and stretch – things you’d take for granted when working in an office environment. Take that lunch break: I’ve started to walk Bash in this time, as it has become a useful way to get me to leave my desk!

Closing the lid and logging off. I think this is next to impossible. All geeks are more connected now than ever before – Twitter, Facebook, Skype, email notifications – so it’s harder to separate work from non-work, and you remain connected. Try to avoid replying to mails late at night; it means you’re always on and always reachable, and people get used to that.

Being visible – this is tricky: how do you let the powers that be know that you’re working and accomplishing a lot? If you go for that promotion, you want to be in with a fair chance and not have the fact that you WFH and are not based in HQ held against you. I think this is the hardest thing a person who works from home is up against. It’s great to get the job, but in many organisations the ability to move to other roles is dependent on your location.

The list isn’t exhaustive; it’s based on my experience over the last 5 years. I do love working from home with my snoring little pug Bash and wouldn’t change it for anything. I’m sure over time I’ll come across more obstacles or find other things that work well. Many organisations are moving towards WFH, and it does work – but it’s also dependent on the person. It’s not for everyone.

on May 20, 2015 09:24 PM

Folks, I've noticed many of you are either in Vancouver or on your way to party with us. That's a good thing!


Our party is tomorrow (Thursday May 21st). You've made the right decision to join us.

Tickets are going fast. I recommend that you grab some while you can.

Remember the Ubuntini? On Thursday, we'll be unveiling something the world has not seen (or tasted) yet: the perfect encore to our now globally famous Ubuntini.

Be there for the world premiere of our latest ubuntu-themed cocktail!

Wear orange, dress as a cosmonaut, or simply come as you are. We're going to dance, socialize and celebrate the community that is ubuntu.

See you soon.

on May 20, 2015 06:58 PM

Last week I had the pleasure of speaking at Protocols Plugfest Europe 2015.  It was really good to get out of the bubble of free software desktops where the community love makes it tempting to think we’re the most important thing in the world and experience the wider industry where of course we are only a small player.

This conference, and its namesakes in the US, is sponsored by Microsoft among others, and there’s obviously a decent amount of money in it: the venue is a professional conference venue, and there’s a team of people making sure small but important details are taken care of, like printed signposts to the venue.

What’s it all About?

In 2008 Microsoft lost an EU antitrust case because they had abused their monopoly position in operating systems.  This required them to document their file formats, such as MS Office’s, and protocols, such as SMB.  This conference is part of that EU requirement, meaning they have to work with anyone who wants to use their formats and protocols.  They have a website where you can file a request for information on any of their documents and protocols, and everyone said they were very responsive in assigning engineers to get answers.

Since 2008 Microsoft have lost a lot of ground in new areas of the industry such as mobile and cloud.  Because they’re not the dominant player there, they realise they have to use formats and protocols others can use too; otherwise they lock themselves out.

The Talks

I spoke about Interoperability on the Linux Desktop. The reason the Linux desktop hasn’t taken off is that there are many other systems we need to interoperate with, and many of them don’t want to interoperate with us. (Of course there are financial reasons too.) The talk was well received, with many people thanking me for it.

I went to talks by people working on Samba, LibreOffice and Kolab, which all gave pleasing insight into how these projects work and what they have to do to work around complex proprietary protocols and formats.  LibreOffice explained how they work with OpenDocument: they add features, and for any feature added they submit a request for it to be added to the standard.  It’s a realistic best-practice alternative.

I went to a bunch of Microsoft talks too about changes in their file formats, protocols and use of their cloud service Azure.

The inter-talks

It was great meeting some people from the free software and MS worlds at the conference.  I spoke to Christopher about how he had been hired to document SMB for MS, to Dan about taking over the world, to Miklos about LibreOffice and many others.  On the MS side I spoke to Tom about file formats, Darryl about working with Linux, to Jingyu about developing in MS.

I hope I won’t offend anyone by saying that there’s a notable culture difference between the open source and the MS sides.  Open source people really do dress scruffily and act socially awkward.  MS people reminded me of the bosses in Walter Mitty: strong handshakes, strong smiles and neat dress.

culture difference

One part of the culture that depressingly wasn’t different was the gender ratio: there were only half a dozen women there, and half of those were organising staff.

The Microsoft people seemed pretty pleased at how they were open and documented their protocols and formats, but it never occurred to them to use existing standards.  When I asked why they invented OOXML instead of using OpenDocument, I was told it was “MS Office’s standard”.  When I asked if Skype protocols were open, they seemed not to know.  It probably doesn’t come under the EU court requirements, so it doesn’t interest them; but then all their talk of openness is for nothing.  When I suggested Skype should talk XMPP so we could use it with Telepathy, I was given largely blank faces in return.

Talking to Samba and OpenChange people about my opinion that their products should be stopgaps until a better open protocol can be used was met with the reasonable argument that in many cases there are no better open protocols.  Which is a shame.

I went into the MS testing lab to test some basic file sharing with Samba, reminded myself about the problems in Kubuntu, and discovered some problems in Windows.  They had to turn off firewalls and twiddle permissions just to be able to share files, which was something I’d always thought Windows was very good at.  Even then it only worked with an IP address, not browsing.  They had no idea why, but the Samba dudes knew straight away that name browsing had been disabled a while ago and a DNS server was needed for that.  Interestingly, the MS interoperability staff aren’t great at their own protocols.

Zaragoza

I had a great time in Zaragoza, only spoiled by traveller’s flu on the last day, meaning I couldn’t go to the closing drinks.  It’s on the site of the 2008 world fair expo, which feels like one of those legacy projects that get left to rot; 2008 wasn’t a great year to be trying to initiate a legacy, I think.  But the tapas was special and the vermut sweet.  The conference timetable was genius: the first day started at 9:00, the next at 10:00 and the final one at 11:00.  The Zentyal staff who organised it were very friendly, and they are doing incredible stuff reimplementing Exchange.  It’s lovely to see MS wanting to talk to all of us, but they’ve a way to go yet before they learn that interoperability should be about an even playing field, not only on their terms.

on May 20, 2015 04:06 PM

Berge

Rhonda D'Vine

I wrote well over one year ago about Earthlings. It really did have some impact on my life. Nowadays I try to avoid animal products where possible, especially in my food. And through the vegan information that I follow I stumbled upon a great band from Germany: Berge. They recently made a deal with their record label: if their song 10.000 Tränen receives one million clicks within the next two weeks, the label will donate 10,000 euros to a German animal rights organization. Reason enough for me to share this band with you! :)
(For those puzzled by the original upload date of the video: don't let yourself get confused, the call for it is from this Monday.)

  • 10.000 Tränen: This is the song that needs the views. It's a nice tune with great lyrics to think about. Even though it's in German, it has English subtitles. :)
  • Schauen was passiert: In the light of 10.000 Tränen it was hard for me to select other songs, but this one sounds nice. "Let's see what happens". :)
  • Meer aus Farben: I love colors. And I hate the fact that most conference shirts are black only. Or that it seems to be impossible to find colorful clothes and shoes for tall women.

Like always, enjoy!


on May 20, 2015 09:21 AM

Daniel McGuire is unstoppable. The work I mentioned yesterday was great; here’s some more, showing what would happen when the user selects “Playing Music”.

help app - playing music


More feedback we received so far:

  • Kevin Feyder suggested using a different icon for the app.
  • Michał Prędotka asked if we were planning to add more icons/pictures and the answer is “yes, we’d love to if it doesn’t clutter up the interface too much”. We are going to start a call for help with the content soon.
  • Robin of ubuntufun.de asked the same thing as Michał and wondered where the translations were. We are going to look into that. He generally likes the Ubuntu-like style.

Do you have any more feedback? Anything you’d like to look or work differently? Anything you’d like to help with?

on May 20, 2015 06:53 AM

May 19, 2015

Burning trees

Stuart Langridge

Today I made a little thing, which I find rather more fascinating than I probably should. You see, Joey said, “I wonder if this still works?”

'The Sands of Time' Linux Desktop

That’s quite cool — sand dunes in front of a clock — and it made me remember that years and years ago you used to get these programs where you could click and it would create sand which accumulated at the bottom of the window. The very first one I saw was on the Archimedes. But what came along a little later was one where you could click to produce various different substances — sand, oil, water, fire — and oil floated on water, fire set the oil alight, and so on. It was all rather amazing back when the phrase “particle system” hadn’t been invented. Anyway, I thought: hey, what’d be cool is if the clock in that picture was obscured by actual moving sand, rather than just a static picture of sand dunes. A tiny bit of poking around brought me to Dust, an implementation of precisely the sand/oil/water thing with WebGL in the browser. So I completely forgot about the clock thing and just played with Dust. Which is rather fun.

After some faffing around I discovered that it had two things I liked: “lava”, which is like a static piece of fire in that it ignites things that touch it but is not itself consumed, and “Life Itself” which is stuff that grows, like bacteria in a petri dish. But the life stuff is ignited by lava. so if you drop a couple of tiny bits of lava into the world, and then some green fungus life stuff, the fungus grows and takes over the whole window until it touches the lava, and then it gets burned up and vanishes… but, critically for this, it isn’t entirely consumed. A few specks remain, and those specks start growing again. Very cool. I spent ages just watching it!

Then I thought, well, this is nice and all, but this Dust thing uses WebGL, which is hassle, and it can’t actually cope with filling its whole window up with particles because it runs out of memory or space or shaders or something. So I figured I’d lash together a quick version myself.

Burning Trees on jsbin.com

And lo, it is so: a noddy version in JavaScript. This is superbly inefficient; it regenerates the whole grid and then innerHTMLs it into the page at every clock tick, and it’s completely character-based, like some sort of BBC Micro program. (At least it’s using requestAnimationFrame so it doesn’t hammer the CPU in a background tab!) But I could still sit there and watch it for ages. I really like it; the sense of watching the green take over and then get burned back.
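If you want the flavour of it without reading the jsbin source, here’s a minimal Python sketch of a grow-and-burn automaton in the same spirit (the rules and probabilities are my own guesses for illustration, not lifted from the real thing):

import random

EMPTY, TREE, FIRE = '.', '#', '*'
WIDTH, HEIGHT = 40, 20
GROW = 0.05  # chance an empty cell next to growth sprouts

grid = [[EMPTY] * WIDTH for _ in range(HEIGHT)]
grid[HEIGHT // 2][WIDTH // 2] = TREE   # a first speck of life
lava = {(5, 5), (30, 12)}              # ignites neighbours, never consumed

def neighbours(x, y):
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if (dx or dy) and 0 <= x + dx < WIDTH and 0 <= y + dy < HEIGHT:
                yield x + dx, y + dy

def step(grid):
    new = [row[:] for row in grid]
    for y in range(HEIGHT):
        for x in range(WIDTH):
            hot = any(grid[ny][nx] == FIRE or (nx, ny) in lava
                      for nx, ny in neighbours(x, y))
            if grid[y][x] == TREE and hot:
                new[y][x] = FIRE       # fire spreads into adjacent growth
            elif grid[y][x] == FIRE:
                # mostly consumed, but leave the odd speck to regrow from
                new[y][x] = TREE if random.random() < 0.01 else EMPTY
            elif grid[y][x] == EMPTY:
                near_tree = any(grid[ny][nx] == TREE
                                for nx, ny in neighbours(x, y))
                if near_tree and random.random() < GROW:
                    new[y][x] = TREE   # life creeps into empty space
    return new

for _ in range(200):                   # run it like the clock ticks
    grid = step(grid)
print('\n'.join(''.join(row) for row in grid))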

I think what I’d like to do with it is make it considerably more efficient and then try to make it a sort of “live wallpaper” for my Ubuntu desktop. For that, I need to read about Life rendering algorithms; this Life implementation at pmav.eu uses a JS port of Tony Finch’s List Life algorithm to do the calculations. There’s also Golly, the Life simulator, which can do things ridiculously fast, and it’s possible to program your own ruleset (rather than just follow John Conway’s original rules), and if I understood how to do this (I do not) then I could probably turn my little Burning Trees thing into something that’s renderable by Golly at a much bigger size than my inefficient JavaScript can manage. There seems to be a quite large community of people working on Life, still, to my amazement. Where do these people hang out, I wonder? So I can ask them how to write a Golly ruleset. And then see if I can make Golly run fullscreen and render to the root window and have the coolest desktop background imaginable, especially once it’s graphics rather than block characters, and maybe the green is different colours depending on how old it is, and the fire has a slightly cooler effect, and… well, you can see, I like this idea, so making it look pretty would be wonderful. Maybe I’ll even put a clock behind it. But if I did it’d either be this one which I pinched from an imgur idea or my favourite clock that I wrote, which is this:

Stuart’s cool clock on jsbin.com

Anyway, none of this is what Joey wanted. Sorry, Joey. I hope the thing you wanted still works, even if it is waaaay complex to set up. Someone should step up and make that easier for you, because I like it when we have pretty things, and there aren’t enough of them on our desktop.

on May 19, 2015 11:58 PM
Hello readers, I have not been posting much here, especially since I decided to dedicate some time to my thesis. I’m in the final stages of it and I really need to deliver it well. Nevertheless, things usually slow down after release time anyway. However, I have two pieces of news for you: 1. … Read the full post »
on May 19, 2015 06:12 PM

Meeting Minutes

IRC Log of the meeting.

Meeting minutes.

Agenda

20150519 Meeting Agenda


Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:

  • http://kernel.ubuntu.com/reports/kt-meeting.txt


Status: Wily Development Kernel

The master-next branch of our wily kernel has recently been rebased to
the 4.0.4 stable kernel. We’re in the process of parsing results of
initial DKMS testing against wily. We’ll upload to the archive once we
have this sorted.
—–
Important upcoming dates:


Status: CVE’s

The current CVE status can be reviewed at the following link:

  • http://kernel.ubuntu.com/reports/kernel-cves.html


Status: Stable, Security, and Bugfix Kernel Updates – Precise/Trusty/Utopic/Vivid

Status for the main kernels, until today:

  • Precise – Testing & Verification
  • Trusty – Testing & Verification
  • Utopic – Testing & Verification
  • Vivid – Testing & Verification

    Current opened tracking bugs details:

  • http://kernel.ubuntu.com/sru/kernel-sru-workflow.html
    For SRUs, the SRU report is a good source of information:
  • http://kernel.ubuntu.com/sru/sru-report.html

    Schedule:

    cycle: 02-May through 23-May
    ====================================================================
    01-May Last day for kernel commits for this cycle
    03-May – 09-May Kernel prep week.
    10-May – 23-May Bug verification; Regression testing; Release


Open Discussion or Questions? Raise your hand to be recognized

No open discussion.

on May 19, 2015 05:14 PM

RFC: Help app design

Daniel Holbach

Some of you might have noticed the Help app in the store, which has been around for a couple of weeks now. We are trying to make it friendlier and easier to use. Maybe you can comment and share your ideas/thoughts.

Apart from fixing actual bugs and adding more and more useful content, we also wanted the app to look friendlier and be more intuitive and useful.

The latest trunk, lp:help-app, can be seen as version 0.3 in the store, or if you run

bzr branch lp:help-app
less help-app/HACKING

you can run and check it out locally.

Here’s the design Daniel McGuire suggested going forward.

help-mockup

What are your thoughts? If you look at the content we currently have, how else would you expect the app to look or work?

Thanks a lot Daniel for your work on this! :-)

on May 19, 2015 03:16 PM

Ubuntu is sponsoring the South East Linux Fest this year in Charlotte, North Carolina, and as part of that event we will have a room to use all day Friday, June 12, for an UbuCon. UbuCon is a mini-conference with presentations centered around Ubuntu the project and its community.

I’m recruiting speakers to fill the last three hour-long slots. If anybody is willing and able to attend the conference and wants to give a presentation to a room full of enthusiastic Ubuntu users, please email me at mhall119@ubuntu.com. Topics can be anything Ubuntu-related: design, development, client, cloud, using it, community, etc.

on May 19, 2015 12:45 PM

Welcome to the Ubuntu Weekly Newsletter. This is issue #417 for the week May 11 – 17, 2015, and the full version is available here.

In this issue we cover:

This issue of the Ubuntu Weekly Newsletter is brought to you by:

  • Paul White
  • Elizabeth K. Joseph
  • Aaron Honeycutt
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License.

on May 19, 2015 04:28 AM

May 18, 2015

In January 2014, we launched the rtc.debian.org service for the Debian community. An equivalent service has been in testing for the Fedora community at FedRTC.org.

Some key points about the Fedora service:

  • The web front-end is just HTML, CSS and JavaScript. PHP is only used for account creation, the actual WebRTC experience requires no server-side web framework, just a SIP proxy.
  • The web code is all available in a Github repository so people can extend it.
  • Anybody who can authenticate against the FedOAuth OpenID is able to get a fedrtc.org test account immediately.
  • The server is built entirely with packages from CentOS 7 + EPEL 7, except for the SIP proxy itself. The SIP proxy is reSIProcate, which is available as a Fedora package and builds easily on RHEL / CentOS.

Testing it with WebRTC

Create an RTC password and then log in. Other users can call you. It is federated, so people can also call from rtc.debian.org or from freephonebox.net.

Testing it with other SIP softphones

You can use the RTC password to connect to the SIP proxy from many softphones, including Jitsi or Lumicall on Android.

Copy it

The process to replicate the server for another domain is entirely described in the Real-Time Communications Quick Start Guide.

Discuss it

The FreeRTC mailing list is a great place to discuss any issues involving this site or free RTC in general.

WebRTC opportunities expanding

Just this week, the first batch of Firefox OS televisions are hitting the market. Every one of these is a potential WebRTC client that can interact with free communications platforms.

on May 18, 2015 05:48 PM

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In April, 81.75 work hours have been dispatched among 5 paid contributors (20.75 hours were unused hours of Ben and Holger that were re-dispatched to other contributors). Their reports are available:

Evolution of the situation

May has seen a small increase in terms of sponsored hours (66.25 hours per month) and June is going to do even better with at least a new gold sponsor. We will have no problems sustaining the increased workload it implies since three Debian developers joined the team of contributors paid by Freexian (Antoine Beaupré, Santiago Ruano Rincón, Scott Kitterman).

The Jessie release probably shed some light on the Debian LTS project since we announced that Jessie will benefit from 5 years of support. Let’s hope that the trend will continue in the following months and that we reach our first milestone of funding the equivalent of a half-time position.

In terms of security updates waiting to be handled, the situation is mixed: the dla-needed.txt file lists 28 packages awaiting an update (12 fewer than last month), while the list of open vulnerabilities in Squeeze shows about 60 affected packages in total (4 more than last month). The extra hours helped us make good strides on the packages awaiting an update, but there are many new vulnerabilities waiting to be triaged.

Thanks to our sponsors

The new sponsors of the month are in bold.


on May 18, 2015 09:58 AM

Weekly Schedule

Ali Jawad

Hi everyone,

Very very long time, no writing. I have my own reasons of course but no need to make this a boring post 😉

After the big failure of my first plan which I called Triple 8, I am trying now to come up with a better plan.

The 888 plan failed for some reasons:

  1. It was very strict, tough and very very hard to follow.
  2. Instead of helping me to organize my time or do any better to my life, it did quite the opposite.
  3. And because of that, things got even worse than before.

That said, and to keep it simple and short, I think it is time to come up with a better plan.

Back in January, I wrote about self development and how it is better to hold a few eggs than a dozen of them.

Today, I’m trying yet again to fix and correct things, so hopefully it will be better this time.

Day        Task 1  Task 2          Task 3
Monday     Kibo    Ubuntu GNOME    Other
Tuesday    Kibo    Ubuntu GNOME    ditto
Wednesday  Kibo    Ubuntu GNOME    ditto
Thursday   Kibo    Other Projects  ditto
Friday     Kibo    Other Projects  ditto
Saturday   Kibo    ToriOS          ditto
Sunday     Kibo    ToriOS          ditto


And by “Other” here I mean anything else – or possibly nothing.


So, the first part of each day will be dedicated to Kibo, my business project, while the rest of the day is dedicated to the other projects I am involved with. If, and only if, there is some time left, I will try to invest it in something useful.


Just like with Triple 8, I can’t tell whether this is good or bad unless I try it. So only time will tell how this new plan will work. Hopefully everything will be better this time.


Thank you!


on May 18, 2015 06:19 AM

May 17, 2015

I got an email last year pointing out a cosmetic issue with changelogs.debian.net. I think at the time of the email, the only problem was some bitrot in PHP's built-in server variables making some text appear incorrectly.

I duly added something to my TODO list to fix it, and it subsequently sat there for like 13 months. In the ensuing time, Debian changed some stuff, and my code started incorrectly handling a 302 as well, which actually broke it good and proper.

I finally got around to fixing it.

I also fixed a problem where sometimes there can be multiple entries in the Sources file for a package (switching to using api.ftp-master.debian.org would also address this), which sometimes caused an incorrect version of the changelog to be returned.

In the resulting tinkering, I learned about api.ftp-master.debian.org, which is totally awesome. I could stop maintaining and parsing a local copy of sid's Sources file, and just make a call to this instead.
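As an example of the kind of query it answers (the endpoint name and parameters here are my understanding of the API — verify against its documentation before relying on them):

curl 'https://api.ftp-master.debian.org/madison?package=hello&text=on'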

Finally, I added linking to CVEs, because it was a quick thing to do, and adds value.

In light of api.ftp-master.debian.org, I'm very tempted to rewrite the redirector. The code is very old and hard for present-day Andrew to maintain, and I despise PHP. I'd rather write it in Python today, with some proper test coverage. I could also potentially host it on AppEngine instead of locally, just so I get some experience with AppEngine.

It's also been suggested that I fold the changes into the changelog hosting on ftp-master.debian.org. I'm hesitant to do this, as it would require changing the output from plain text to HTML, which would mess up consumers of the plain text (like the current implementation of changelogs.debian.net)

on May 17, 2015 02:42 PM

Those of you who know what public-key cryptography is may have already heard of ECC, ECDH or ECDSA. The first is an acronym for Elliptic Curve Cryptography; the others are names for algorithms based on it.

Today, we can find elliptic curves cryptosystems in TLS, PGP and SSH, which are just three of the main technologies on which the modern web and IT world are based. Not to mention Bitcoin and other cryptocurrencies.

Before ECC became popular, almost all public-key algorithms were based on RSA and DSA, alternative cryptosystems built on other number-theoretic problems: integer factorization for RSA, discrete logarithms over finite fields for DSA. RSA and friends are still very important today, and often are used alongside ECC. However, while the magic behind RSA and friends can be easily explained and is widely understood, and rough implementations can be written quite easily, the foundations of ECC are still a mystery to most.

With a series of blog posts I’m going to give you a gentle introduction to the world of elliptic curve cryptography. My aim is not to provide a complete and detailed guide to ECC (the web is full of information on the subject), but to provide a simple overview of what ECC is and why it is considered secure, without losing time on long mathematical proofs or boring implementation details. I will also give helpful examples together with visual interactive tools and scripts to play with.

Specifically, here are the topics I’ll touch:

  1. Elliptic curves over real numbers and the group law (covered in this blog post)
  2. Elliptic curves over finite fields and the discrete logarithm problem
  3. Key pair generation and two ECC algorithms: ECDH and ECDSA
  4. Algorithms for breaking ECC security, and a comparison with RSA

In order to understand what’s written here, you’ll need to know some basics of set theory, geometry and modular arithmetic, and have familiarity with symmetric and asymmetric cryptography. Lastly, you need a clear idea of what an “easy” problem is, what a “hard” problem is, and their roles in cryptography.

Ready? Let’s start!

Elliptic Curves

First of all: what is an elliptic curve? Wolfram MathWorld gives an excellent and complete definition. But for our aims, an elliptic curve will simply be the set of points described by the equation:

y^2 = x^3 + ax + b

where 4a^3 + 27b^2 ≠ 0 (this is required to exclude singular curves). The equation above is what is called the Weierstrass normal form for elliptic curves.

Different shapes for different elliptic curves (b = 1, a varying from 2 to -3).
Types of singularities: on the left, a curve with a cusp (y^2 = x^3). On the right, a curve with a self-intersection (y^2 = x^3 − 3x + 2). Neither is a valid elliptic curve.

Depending on the values of a and b, elliptic curves may assume different shapes on the plane. As can be easily seen and verified, elliptic curves are symmetric about the x-axis.

For our aims, we will also need a point at infinity (also known as ideal point) to be part of our curve. From now on, we will denote our point at infinity with the symbol 0 (zero).

If we want to explicitly take into account the point at infinity, we can refine our definition of elliptic curve as follows:

\left\{ (x, y) \in \mathbb{R}^2\ |\ y^2 = x^3 + ax + b,\ 4 a^3 + 27 b^2 \ne 0 \right\}\ \cup\ \left\{ 0 \right\}
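In code, the nonsingularity test from the definition is a one-liner (a throwaway helper, just to make the condition tangible):

def is_valid_curve(a, b):
    """True when y^2 = x^3 + ax + b is nonsingular, i.e. a valid
    elliptic curve."""
    return 4 * a**3 + 27 * b**2 != 0

print(is_valid_curve(-7, 10))  # curve used in later examples -> True
print(is_valid_curve(-3, 2))   # the self-intersecting curve above -> False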

Groups

A group in mathematics is a set for which we have defined a binary operation that we call “addition” and indicate with the symbol +. In order for the set 𝔾 to be a group, addition must be defined so that it respects the following four properties:

  1. closure: if a and b are members of 𝔾, then a + b is a member of 𝔾;
  2. associativity: (a + b) + c = a + (b + c);
  3. there exists an identity element 0 such that a + 0 = 0 + a = a;
  4. every element has an inverse, that is: for every a there exists b such that a + b = 0.

If we add a fifth requirement:

  5. commutativity: a + b = b + a,

then the group is called abelian group.

With the usual notion of addition, the set of integer numbers is a group (moreover, it’s an abelian group). The set of natural numbers however is not a group, as the fourth property can’t be satisfied.

Groups are nice because, if we can demonstrate that those four properties hold, we get some other properties for free. For example: the identity element is unique; inverses are also unique, that is: for every a there exists only one b such that a + b = 0 (and we can write b as –a). Either directly or indirectly, these and other facts about groups will be very important for us later.

The group law for elliptic curves

We can define a group over elliptic curves. Specifically:

  • the elements of the group are the points of an elliptic curve;
  • the identity element is the point at infinity 0;
  • the inverse of a point P is the one symmetric about the x-axis;
  • addition is given by the following rule: given three aligned, non-zero points P, Q and R, their sum is P + Q + R = 0.
The sum of three aligned points is 0.

Note that with the last rule, we only require three aligned points, and three points are aligned regardless of their order. This means that, if P, Q and R are aligned, then P + (Q + R) = Q + (P + R) = R + (P + Q) = · · · = 0. This way, we have intuitively proved that our + operator is both associative and commutative: we are in an abelian group.

So far, so great. But how do we actually compute the sum of two arbitrary points?

Geometric addition

Thanks to the fact that we are in an abelian group, we can write P + Q + R = 0 as P + Q = –R. This equation, in this form, lets us derive a geometric method to compute the sum of two points P and Q: if we draw a line passing through P and Q, this line will intersect a third point on the curve, R (this is implied by the fact that P, Q and R are aligned). If we take the inverse of this point, –R, we have found the result of P + Q.

Draw the line through P and Q. The line intersects a third point R. The point symmetric to it, –R, is the result of P + Q.

This geometric method works but needs some refinement. Particularly, we need to answer a few questions:

  • What if P = 0 or Q = 0? Certainly, we can’t draw any line (0 is not on the xy-plane). But given that we have defined 0 as the identity element, P + 0 = P and 0 + Q = Q, for any P and for any Q.
  • What if P = –Q? In this case, the line going through the two points is vertical, and does not intersect any third point. But if P is the inverse of Q, then we have P + Q = P + (-P) = 0 from the definition of inverse.
  • What if P = Q? In this case, there are infinitely many lines passing through the point. Here things start getting a bit more complicated. But consider a point Q’ ≠ P. What happens if we make Q’ approach P, getting closer and closer to it?
    As the two points become closer together, the line passing through them becomes tangent to the curve.

    As Q’ tends towards P, the line passing through P and Q’ becomes tangent to the curve. In the light of this we can say that P + P = –R, where R is the point of intersection between the curve and the line tangent to the curve at P.
  • What if P ≠ Q, but there is no third point R? We are in a case very similar to the previous one. In fact, we are in the case where the line passing through P and Q is tangent to the curve.
    If our line intersects just two points, then it means that it’s tangent to the curve. It’s easy to see how the result of the sum becomes symmetric to one of the two points.

    Let’s assume that P is the tangency point. In the previous case, we would have written P + P = –Q. That equation now becomes P + Q = –P. If, on the other hand, Q were the tangency point, the correct equation would have been P + Q = –Q.

The geometric method is now complete and covers all cases. With a pencil and a ruler we are able to perform addition involving every point of any elliptic curve. If you want to try, take a look at the HTML5/JavaScript visual tool I’ve built for computing sums on elliptic curves!

Algebraic addition

If we want a computer to perform point addition, we need to turn the geometric method into an algebraic method. Transforming the rules described above into a set of equations may seem straightforward, but actually it can be really tedious because it requires solving cubic equations. For this reason, here I will report only the results.

First, let’s get rid of the most annoying corner cases. We already know that P + (-P) = 0, and we also know that P + 0 = 0 + P = P. So, in our equations, we will avoid these two cases and only consider two non-zero, non-symmetric points P = (xP, yP) and Q = (xQ, yQ).

If P and Q are distinct (xP ≠ xQ), the line through them has slope:

m = \frac{y_P - y_Q}{x_P - x_Q}

The intersection of this line with the elliptic curve is a third point R = (xR, yR):

\begin{array}{rcl} x_R & = & m^2 - x_P - x_Q \\ y_R & = & y_P + m(x_R - x_P) \end{array}

or, equivalently:

y_R = y_Q + m(x_R - x_Q)

Hence (xP, yP) + (xQ, yQ) = (xR, –yR) (pay attention to the signs and remember that P + Q = –R).

If we wanted to check whether this result is right, we would have to check whether R belongs to the curve and whether P, Q and R are aligned. Checking whether the points are aligned is trivial; checking that R belongs to the curve is not, as we would need to solve a cubic equation, which is not fun at all.

Instead, let’s play with an example: according to our visual tool, given P = (1, 2) and Q = (3, 4) over the curve y^2 = x^3 − 7x + 10, their sum is P + Q = –R = (-3, 2). Let’s see if our equations agree:

\begin{array}{rcl} m & = & \frac{y_P - y_Q}{x_P - x_Q} = \frac{2 - 4}{1 - 3} = 1 \\ x_R & = & m^2 - x_P - x_Q = 1^2 - 1 - 3 = -3 \\ y_R & = & y_P + m(x_R - x_P) = 2 + 1 \cdot (-3 - 1) = -2 \\ & = & y_Q + m(x_R - x_Q) = 4 + 1 \cdot (-3 - 3) = -2 \end{array}

Yes, this is correct!

Note that these equations work even if one of P or Q is a tangency point. Let’s try with P = (-1, 4) and Q = (1, 2).

\begin{array}{rcl} m & = & \frac{y_P - y_Q}{x_P - x_Q} = \frac{4 - 2}{-1 - 1} = -1 \\ x_R & = & m^2 - x_P - x_Q = (-1)^2 - (-1) - 1 = 1 \\ y_R & = & y_P + m(x_R - x_P) = 4 + -1 \cdot (1 - (-1)) = 2 \end{array}

We get the result P + Q = (1, -2), which is the same result given by the visual tool.

The case P = Q needs to be treated a bit differently: the equations for xR and yR are the same, but given that xP = xQ, we must use a different equation for the slope:

m = \frac{3 x_P^2 + a}{2 y_P}

Note that, as we would expect, this expression for m is the first derivative of:

y_P = \pm \sqrt{x_P^3 + ax_P + b}

To prove the validity of this result it is enough to check that R belongs to the curve and that the line passing through P and R has only two intersections with the curve. But again, we won’t prove this; instead, let’s try an example: P = Q = (1, 2).

\begin{array}{rcl} m & = & \frac{3x_P^2 + a}{2 y_P} = \frac{3 \cdot 1^2 - 7}{2 \cdot 2} = -1 \\ x_R & = & m^2 - x_P - x_Q = (-1)^2 - 1 - 1 = -1 \\ y_R & = & y_P + m(x_R - x_P) = 2 + (-1) \cdot (-1 - 1) = 4 \end{array}

Which gives us P + P = –R = (-1, -4). Correct!

Although the procedure to derive them can be really tedious, our equations are pretty compact. This is thanks to Weierstrass normal form: without it, these equations could have been really long and complicated!
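To make the formulas concrete, here is a small Python sketch of the whole algebraic method over the reals (None stands for the point at infinity 0; note that the P = –Q check also catches doubling a point with yP = 0, where the tangent is vertical and the result is 0):

def add_points(P, Q, a):
    """P + Q on y^2 = x^3 + ax + b over the reals (b only shapes the
    curve; it does not appear in the addition formulas)."""
    if P is None:
        return Q                        # 0 + Q = Q
    if Q is None:
        return P                        # P + 0 = P
    (xP, yP), (xQ, yQ) = P, Q
    if xP == xQ and yP == -yQ:
        return None                     # P + (-P) = 0, vertical line
    if P == Q:
        m = (3 * xP**2 + a) / (2 * yP)  # tangent slope (doubling)
    else:
        m = (yP - yQ) / (xP - xQ)       # chord slope
    xR = m**2 - xP - xQ
    yR = yP + m * (xR - xP)
    return (xR, -yR)                    # remember: P + Q = -R

# The worked example above: P = (1, 2), Q = (3, 4) on y^2 = x^3 - 7x + 10.
print(add_points((1, 2), (3, 4), a=-7))  # -> (-3.0, 2.0)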

Scalar multiplication

Other than addition, we can define another operation: scalar multiplication, that is:

n P = \underbrace{P + P + \dots + P}_{n\ \text{times}}

where n is a natural number. I’ve written a visual tool for scalar multiplication too, if you want to play with that.

Written in that form, it may seem that computing nP requires n additions. If n has k binary digits, then our algorithm would be O(2^k), which is not really good. But there exist faster algorithms.

One of them is the double and add algorithm. Its principle of operation can be better explained with an example. Take n = 151. Its binary representation is 10010111 (in base 2). This binary representation can be turned into a sum of powers of two:

151 = 1 \cdot 2^7 + 0 \cdot 2^6 + 0 \cdot 2^5 + 1 \cdot 2^4 + 0 \cdot 2^3 + 1 \cdot 2^2 + 1 \cdot 2^1 + 1 \cdot 2^0

(We have taken each binary digit of n and multiplied it by a power of two.)

In view of this, we can write:

151 \cdot P = 2^7 P + 2^4 P + 2^2 P + 2^1 P + 2^0 P

What the double and add algorithm tells us to do is:

  • Take P.
  • Double it, so that we get 2P.
  • Add 2P to P (in order to get the result of 2^1 P + 2^0 P).
  • Double 2P, so that we get 2^2 P.
  • Add it to our result (so that we get 2^2 P + 2^1 P + 2^0 P).
  • Double 2^2 P to get 2^3 P.
  • Don’t perform any addition involving 2^3 P.
  • Double 2^3 P to get 2^4 P.
  • Add it to our result (so that we get 2^4 P + 2^2 P + 2^1 P + 2^0 P).

In the end, we can compute 151 · P performing just seven doublings and four additions.

If this is not clear enough, here’s a Python snippet that implements the algorithm:

def bits(n):
    """
    Generates the binary digits of n, starting
    from the least significant bit.

    bits(151) -> 1, 1, 1, 0, 1, 0, 0, 1
    """
    while n:
        yield n & 1
        n >>= 1

def double_and_add(n, x):
    """
    Returns the result of n * x, computed using
    the double and add algorithm.
    """
    result = 0
    addend = x

    for bit in bits(n):
        if bit == 1:
            result += addend
        addend *= 2

    return result

If doubling and adding are both O(1) operations, then this algorithm is O(log n) (or O(k) if we consider the bit length), which is pretty good. Surely much better than the initial O(n) algorithm!
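Plugging elliptic curve points into the same loop is then a one-line change per operation — a sketch reusing the add_points helper from the algebraic-addition section above:

def scalar_mult(n, P, a):
    """n * P computed with double and add, with point addition
    standing in for integer addition."""
    result = None    # the point at infinity, our identity 0
    addend = P
    while n:
        if n & 1:
            result = add_points(result, addend, a)
        addend = add_points(addend, addend, a)
        n >>= 1
    return result

# 2P for P = (1, 2) on y^2 = x^3 - 7x + 10, as computed by hand above:
print(scalar_mult(2, (1, 2), a=-7))  # -> (-1.0, -4.0)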

Logarithm

Given n and P, we now have at least one polynomial time algorithm for computing Q = nP. But what about the other way round? What if we know Q and P and need to find out n? This problem is known as the logarithm problem. We call it “logarithm” instead of “division” for conformity with RSA (where instead of multiplication we have exponentiation).

I don’t know of any “easy” algorithm for the logarithm problem; however, playing with multiplication it’s easy to see some patterns. For example, take the curve y^2 = x^3 − 3x + 1 and the point P = (0, 1). We can immediately verify that, if n is odd, nP is on the curve in the left semiplane; if n is even, nP is on the curve in the right semiplane. If we experimented more, we could probably find more patterns that eventually could lead us to write an algorithm for computing the logarithm on that curve efficiently.

But there’s a variant of the logarithm problem: the discrete logarithm problem. As we will see in the next post, if we reduce the domain of our elliptic curves, scalar multiplication remains “easy”, while the discrete logarithm becomes a “hard” problem. This duality is the key brick of elliptic curve cryptography.

See you next week

That’s all for today, I hope you enjoyed this post! Next week we will discover finite fields and the discrete logarithm problem, along with examples and tools to play with. If this stuff sounds interesting to you, then stay tuned!

Read the next post of the series »

on May 17, 2015 11:24 AM

May 16, 2015

People like shirts, stickers and goodies to show support of their favorite operating system, and though the Xubuntu project has been slower than our friends over at Kubuntu at offering them, we now have a decent line-up offered by companies we’re friendly with. Several months ago the Xubuntu team was contacted by Gabor Kum of HELLOTUX to see if we’d be interested in offering shirts through their site. We were indeed interested! So after he graciously sent our project lead a polo shirt to evaluate, we agreed to start offering his products on our site, alongside the others. See all products here.

Polos aren’t really my thing, so when the Xubuntu shirts went live I ordered the Xubuntu sweater. Now a language difference may be in play here, since I’d call it a sweatshirt with a zipper, or a light jacket, or a hoodie without a hood. But it’s a great shirt, I’ve been wearing it regularly since I got it in my often-chilly city of San Francisco. It fits wonderfully and the embroidery is top notch.

Xubuntu sweatshirt
Close-up of HELLOTUX Xubuntu embroidery

In other Ubuntu things, given my travel schedule Peter Ganthavorn has started hosting some of the San Francisco Ubuntu Hours. He hosted one last month that I wasn’t available for, and then another this week which I did attend. Wearing my trusty new Xubuntu sweatshirt, I also brought along my Wily Werewolf to his first Ubuntu Hour! I picked up this fluffy-yet-fearsome werewolf from Squishable.com, which is also where I found my Natty Narwhal.

When we wrapped up the Ubuntu Hour, we headed down the street to our favorite Chinese place for Linux meetings, where I was hosting a Bay Area Debian Meeting and Jessie Release Party! I was pretty excited about doing this: since the Toy Story character Jessie is a popular one, I jumped at the opportunity to pick up some party supplies to mark the occasion, and ended up with a collection of party hats and notepads:

There were a total of 5 of us there, long time BAD member Michael Paoli being particularly generous with his support of my ridiculous hats:

We had a fun time, welcoming a couple of new folks to our meeting as well. A few more photos from the evening here: https://www.flickr.com/photos/pleia2/sets/72157650542082473

Now I just need to actually upgrade my servers to Jessie!

on May 16, 2015 03:09 AM

May 15, 2015


In November of 2006, Canonical held an "all hands" event, which included a team building exercise.  Several teams recorded "Ubuntu commercials".

On one of the teams, Mark "Borat" Shuttleworth amusingly proffered,
"Ubuntu make wonderful things possible, for example, Linux appliance, with Ubuntu preinstalled, we call this -- the fridge!"


Nine years later, that tongue-in-cheek parody is no longer a joke.  It's a "cold" hard reality!

GE Appliances, FirstBuild, and Ubuntu announced a collaboration around a smart refrigerator, available today for $749, running Snappy Ubuntu Core on a Raspberry Pi 2, with multiple USB ports and available in-fridge accessories.  We had one in our booth at IoT World in San Francisco this week!

While the fridge prediction is indeed pretty amazing, the line that strikes me most is actually "Ubuntu make(s) wonderful things possible!"

With emphasis on "things".  As in, "Internet of Things."  The possibilities are absolutely endless in this brave new world of Snappy Ubuntu.  And that is indeed wonderful.

So what are you making with Ubuntu?!?

:-Dustin
on May 15, 2015 04:38 AM

May 14, 2015

It’s Episode Ten of Season Eight of the Ubuntu Podcast! Alan Pope, Laura Cowen, Mark Johnson, and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

  • We discuss Mark Shuttleworth’s Ubuntu Online Summit keynote…

  • We share some Command Line Lurve which is the super useful listadmin, which does stuff. Listen to find out what…

  • And we also chat about taking a whole 3 minutes (that’s right!) off a PB (personal best time) at Parkrun, playing Windows games on Linux, getting Cyanogen OS 12 on to a OnePlus One phone, going to the Egham Raspberry Jam, and making Ubuntu MATE a download for the Raspberry Pi 2.

That’s all for this week, please send your comments and suggestions to: show@ubuntupodcast.org
Join us on IRC in #ubuntu-podcast on Freenode
Follow us on Twitter
Find our Facebook Fan Page
Follow us on Google+

on May 14, 2015 06:43 PM

Over the last few days I was working at Centricular on adding PTP clock support to GStreamer. This is now mostly done, and the results of this work are public but not yet merged into the GStreamer code base. This will need some further testing and code review; see the related bug report here.

You can find the current version of the code here in my freedesktop.org GIT repository. See at the very bottom for some further hints at how you can run it.

So what does that mean, how does it relate to GStreamer?

Precision Time Protocol

PTP is the Precision Time Protocol, a network protocol standardized by the IEEE (IEEE1588:2008) to synchronize the clocks between different devices in a network. It’s similar to the better-known Network Time Protocol (NTP, IETF RFC 5905), which is probably used by millions of computers out there to automatically set the local clock. Unlike NTP, PTP promises to give much more accurate results, up to microsecond (or even nanosecond, with PTP-aware network hardware) precision inside appropriate networks. PTP is part of a few broadcasting and professional media standards, like AES67, RAVENNA, AVB, SMPTE ST 2059-2 and others, for inter-device synchronization.

PTP comes in 3 different versions, the old PTPv1 (IEEE1588-2002), PTPv2 (IEEE1588-2008) and IEEE 802.1AS-2011. I’ve implemented PTPv2 via UDPv4 for now, but this work can be extended to other variants later.

GStreamer network synchronization support

So what does that mean for GStreamer? We are now able to synchronize to a PTP clock in the network, which allows multiple devices to accurately synchronize media to the same clock. This is useful in all scenarios where you want to play the same media on different devices, and want them all to be completely synchronized. You can probably imagine quite a few use cases for this yourself now, especially in the context of the “Internet of Things” but also for more normal things like video walls or just having multiple screens display the same thing in the same room.

This was already possible previously with the GStreamer network clock, but that clock implements a custom protocol that only other GStreamer applications can understand currently. See for example here, here or here. With the PTP clock we now get another network clock that speaks a standardized protocol and can interoperate with other software and hardware.
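For a feel of how this slots into an application, here is a minimal sketch (in Python with the GObject introspection bindings) of slaving a pipeline to the existing GStreamer network clock; the hostname and port are placeholders for wherever your clock provider runs, and a PTP clock would be substituted in the same place:

import gi
gi.require_version('Gst', '1.0')
gi.require_version('GstNet', '1.0')
from gi.repository import Gst, GstNet

Gst.init(None)

pipeline = Gst.parse_launch('audiotestsrc ! autoaudiosink')

# Slave the pipeline to a remote clock; every device doing the same
# against the same clock server renders in sync.
clock = GstNet.NetClientClock.new('net-clock', 'clock.example.net', 48484, 0)
pipeline.use_clock(clock)
pipeline.set_state(Gst.State.PLAYING)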

Performance, WiFi and other unreliable networks

When running the code, you will probably notice that PTP works very well in controlled and reliable networks (2-20 microseconds accuracy is what I got here). But it’s not that accurate in wireless networks or in generally unreliable networks. It seems like in those networks the custom GStreamer network clock protocol currently works more reliably, partially by design.

Future

As a next step, at Centricular we’re going to look at implementing support for RFC 7273 in GStreamer, which allows signalling media clocks for RTP. This is part of e.g. AES67 and RAVENNA and would allow multiple RTP receivers to be perfectly synchronized against a PTP clock without any further configuration. And just for completeness, we’re probably going to release an NTP based GStreamer clock in the near future too.

Running the code

If you want to test my code, you can run it for example against PTPd. To check the accuracy of the clock, you can measure it with the ptp-clock-reflector (or here; instructions in the README) that I wrote for testing; in a local wired network I got around 2-20 microseconds accuracy. A GStreamer example application can be found here, which just prints the local and remote PTP clock times. Other than that, you can use it just like any other clock in any GStreamer pipeline you can imagine.

on May 14, 2015 05:44 PM

The Ubuntu OpenStack team is pleased to announce the general availability of OpenStack 2015.1.0 (Kilo) release in Ubuntu 15.04 and for Ubuntu 14.04 LTS via the Ubuntu Cloud Archive.

Ubuntu 14.04 LTS

You can enable the Ubuntu Cloud Archive for OpenStack Kilo on Ubuntu 14.04 installations by running the following commands:

 sudo add-apt-repository cloud-archive:kilo
 sudo apt-get update
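Once the archive is enabled, services install with plain apt; for example (nova-compute picked purely as an illustration):

 sudo apt-get install nova-compute

You can confirm the Cloud Archive pocket is being used with apt-cache policy, e.g. "apt-cache policy nova-common".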

The Ubuntu Cloud Archive for Kilo includes updates for Nova, Glance, Keystone, Neutron, Cinder, Horizon, Swift, Ceilometer and Heat; Ceph (0.94.1), RabbitMQ (3.4.2), QEMU (2.2), libvirt (1.2.12) and Open vSwitch (2.3.1) back-ports from 15.04 have also been provided.

Additionally Trove, Sahara, Ironic, Designate and Manila are also provided in the Ubuntu Cloud Archive for Kilo.  Note that Canonical are not providing support for these packages as they are not in Ubuntu main – these packages are community supported, in line with other Ubuntu universe packages.

You can checkout the full list of packages and versions here.

NOTE: We’re shipping Swift 2.2.2 for release – due to the relatively late inclusion of new dependencies to support erasure coding in Swift, we’ve opted not to update to 2.3.0 this cycle in Ubuntu.

NOTE: Designate and Trove are still working through the Stable Release Update process, due to some unit testing and packaging issues, so they are lagging behind the rest of the release.

Ubuntu 15.04

No extra steps required; just start installing OpenStack!

Neutron Driver Decomposition

Ubuntu are only tracking the decomposition of Neutron FWaaS, LBaaS and VPNaaS from Neutron core in the Ubuntu archive; we expect to add additional packages for other Neutron ML2 mechanism drivers and plugins early during the Liberty/15.10 development cycle – we’ll provide these as backports to OpenStack Kilo users as and when they become available.

Reporting bugs

If you hit any issues, please report bugs using the ‘ubuntu-bug’ tool:

 sudo ubuntu-bug nova-conductor

this will ensure that bugs get logged in the right place in Launchpad.

Thanks and have fun!


on May 14, 2015 03:57 PM

May 13, 2015

I finally got around to finishing off and publishing the LWN Chrome extension that I wrote a couple of months ago.

I received one piece of feedback from someone who read my blog via Planet Debian, but didn't appear to email me from a usable email address, so I'll respond to the criticisms here.

I wrote a Chrome extension because I use Google Chrome. To the best of my knowledge, it will work with Chromium as well, but as I've never used it, I can't really say for sure. I've chosen to licence the source under the Apache Licence, and make it freely available. So the extension is available to anyone who cares to download the source and "side load" it, if they don't want to use the Chrome Web Store.

As for whether a userscript would have done the job, maybe, but I have no experience with them.

Basically, I had an itch, and I scratched it, for the browser I choose to use, and I also chose to share it freely.

on May 13, 2015 10:03 PM