January 05, 2016

Sites like Twitter and Facebook are not fundamentally free platforms, despite the fact that they don't ask their users for money. Look at how Facebook's censors confused Denmark's mermaid statue with pornography, or how quickly Twitter can make somebody's account disappear, frustrating public scrutiny of their tweets and potentially denying access to vital information in their "direct message" mailbox. Then there is the fact that users don't get access to the source code and don't have a full copy of their own data; potentially worst of all, if most people bothered to read the fine print of the privacy policy, they would find it is actually a recipe for downright creepiness.

Nonetheless, a significant number of people have accounts in these systems and are to some extent contactable there.

Many marketing campaigns that have been successful, whether for crowdfunding, political activism or just finding a lost cat, claim to owe that success to Twitter or Facebook. Is this true? In reality, many users of those platforms follow hundreds of different friends, and if they only check in once a day, filtering algorithms show them only a small subset of what all their friends posted. Against these odds, just posting your great idea on Facebook doesn't mean that more than five people are actually going to see it. Those campaigns that have been successful have usually had something else going in their favour: perhaps a friend working in the media gave the campaign a plug on his radio show, or maybe they were lucky enough to be slashdotted. Maybe it was having the funds for a professional video production, with models passing it off as something spontaneous. The use of Facebook or Twitter alone did not make such campaigns successful; it was just part of a bigger strategy where everything fell into place.

Should free software projects, especially those revolving around free communications technology, use such platforms to promote themselves?

It is not a simple question. In favour, you could argue that everything we promote through public mailing lists and websites is catalogued by Google anyway, so why not make it easier to access for those who are on Facebook or Twitter? On top of that, many developers don't even want to run their own mail server or web server any more, let alone a self-hosted social-media platform like pump.io. Even running a basic SIP proxy server for the large Debian and Fedora communities involved a lot of discussion about how to support it.

The argument against using Facebook and Twitter is that you are shooting yourself in the foot: when you participate in those networks, you give them even more credibility and power (which you could quantify using Metcalfe's law). The Metcalfe value of their network, being quadratic rather than linear, shoots ahead of the Metcalfe value of your own solution, putting your alternative even further out of reach. On top of that, the operators of the closed platform are able to evaluate who is responding to your message and how they feel about it, and use that intelligence to further undermine you. In some cases, there may be passive censorship, such as WhatsApp silently losing messages that link to rival Telegram.
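To make the scaling concrete, take some purely illustrative numbers: an incumbent with 1.5 billion users against an alternative with one million. Metcalfe's law values a network at V(n) ∝ n², so

\[ \frac{V_{\text{incumbent}}}{V_{\text{alternative}}} \approx \left(\frac{1.5\times10^{9}}{10^{6}}\right)^{2} \approx 2.25\times10^{6} \]

On this measure, every user the closed platform gains over yours widens the gap quadratically, not linearly.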

How do you feel about this choice? How and when should free software projects and their developers engage with mainstream social media technology? Please come and share your ideas on the Free-RTC mailing list or perhaps share and Tweet them.

on January 05, 2016 02:27 PM
After the publisher Open Source Press sadly closed its doors on 31 December 2015, all authors had the rights to their books fully transferred back to them. Valentin Haenel and Julius Plenz, the two authors of "Git - Verteilte Versionsverwaltung für Code und Dokumente", have decided to release their book under a Creative Commons licence. There are the website, the book, the repository and slide decks for a training course. Let me quote from the README.md file:
If you are a DocBook expert and/or web programmer and think "surely that design could be made more professional" – then please do! I only quickly cobbled together a halfway presentable website from the source files. A PDF or EPUB version might also be of interest; if you want to take this on, go ahead!
on January 05, 2016 04:18 AM

Welcome to the Ubuntu Weekly Newsletter. This is issue #448 for the weeks of December 21, 2015 – January 3, 2016, and the full version is available here.

In this issue we cover:

This issue of the Ubuntu Weekly Newsletter is brought to you by:

  • Elizabeth K. Joseph
  • Paul White
  • Walter Lapchynski
  • Simon Quigley
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License.

on January 05, 2016 02:10 AM

I’m organizing an event at the University of Washington in Seattle that involves a reading, the screening of a documentary film, and a Q&A about Aaron Swartz. The event coincides with the third anniversary of Aaron’s death and the release of a new book of Swartz’s writing that I contributed to.

The event is free and open to the public, and details are below:

WHEN: Wednesday, January 13 at 6:30-9:30 p.m.

WHERE: Communications Building (CMU) 120, University of Washington

We invite you to celebrate the life and activism efforts of Aaron Swartz, hosted by UW Communication professor Benjamin Mako Hill. The event is next week and will consist of a short book reading, a screening of a documentary about Aaron's life, and a Q&A with Mako, who knew Aaron well. No RSVP required; we hope you can join us.

Aaron Swartz was a programming prodigy, entrepreneur, and information activist who contributed to the core Internet protocol RSS and co-founded Reddit, among other groundbreaking work. However, it was his efforts in social justice and political organizing combined with his aggressive approach to promoting increased access to information that entangled him in a two-year legal nightmare that ended with the taking of his own life at the age of 26.

January 11, 2016 marks the third anniversary of his death. Join us two days later for a reading from a new posthumous collection of Swartz’s writing published by New Press, a showing of “The Internet’s Own Boy” (a documentary about his life), and a Q&A with UW Communication professor Benjamin Mako Hill – a former roommate and friend of Swartz and a contributor to and co-editor of the first section of the new book.

If you’re not in Seattle, there are events with similar programs being organized in Atlanta, Chicago, Dallas, New York, and San Francisco.  All of these other events will be on Monday January 11 and registration is required for all of them. I will be speaking at the event in San Francisco.

on January 05, 2016 01:07 AM

January 04, 2016

Xenial Xerus alpha 1

Lubuntu Blog

Hi folks, the 1st milestone release of what will be our 16.04 LTS is now out in the wild. I'm only posting the release notes link, as it is important that you know of the issues (none are computer-critical) so that you don't waste your time reporting an issue we already know of. If you find […]
on January 04, 2016 07:41 PM

"You can’t be friends with a squirrel! A squirrel is just a rat with a cuter outfit."
– Sarah Jessica Parker

Our own squirrel, Xenial Xerus (to become 16.04 LTS), should be much better than your average rat. You can see for yourself, as Alpha 1 has now been released!

This alpha features images for Lubuntu, Ubuntu MATE, and UbuntuKylin.

Pre-releases of the Xenial Xerus are *not* encouraged for anyone needing a stable system or anyone who is not comfortable running into occasional, even frequent breakage. They are, however, recommended for Ubuntu flavor developers and those who want to help in testing, reporting and fixing bugs as we work towards getting this release ready.

Alpha 1 includes a number of software updates that are ready for wider testing. This is quite an early set of images, so you should expect some bugs.

While these Alpha 1 images have been tested and work, except as noted in the release notes, Ubuntu developers are continuing to improve the Xenial Xerus. In particular, once newer daily images are available, system installation bugs identified in the Alpha 1 installer should be verified against the current daily image before being reported in Launchpad. Using an obsolete image to re-report bugs that have already been fixed wastes your time and the time of developers who are busy trying to make 16.04 the best Ubuntu release yet. Always ensure your system is up to date before reporting bugs.

Lubuntu

Lubuntu is a flavour of Ubuntu based on LXDE and focused on providing a very lightweight distribution.

The Alpha 1 images can be downloaded at: http://cdimage.ubuntu.com/lubuntu/releases/xenial/alpha-1/

More information on Lubuntu Alpha-1 can be found here: https://wiki.ubuntu.com/XenialXerus/Alpha1/Lubuntu

Ubuntu MATE

Ubuntu MATE is a flavour of Ubuntu featuring the MATE desktop environment.

The Alpha-1 images can be downloaded at: http://cdimage.ubuntu.com/ubuntu-mate/releases/xenial/alpha-1/

More information on Ubuntu MATE Alpha-1 can be found here: https://wiki.ubuntu.com/XenialXerus/Alpha1/UbuntuMATE

UbuntuKylin

UbuntuKylin is a flavour of Ubuntu that is more suitable for Chinese users.

The Alpha-1 images can be downloaded at: http://cdimage.ubuntu.com/ubuntukylin/releases/xenial/alpha-1/

More information on UbuntuKylin Alpha-1 can be found here: https://wiki.ubuntu.com/XenialXerus/Alpha1/UbuntuKylin

Regular daily images for Ubuntu can be found at: http://cdimage.ubuntu.com

If you’re interested in following the changes as we further develop Xenial, we suggest that you subscribe to the ubuntu-devel-announce list. This is a low-traffic list (a few posts a week) carrying announcements of approved specifications, policy changes, alpha releases and other interesting events.

A big thank you to the developers and testers for their efforts to pull together this Alpha release!

Originally posted to the ubuntu-devel-announce mailing list on Mon Jan 4 19:13:20 UTC 2016 by Walter Lapchynski, on behalf of the Ubuntu Release Team

on January 04, 2016 07:35 PM

Serendipity 2.0.3 ...

Dirk Deimeke

The year is off to a good start. :-) I have just seen that Serendipity 2.0.3 has been released. Since the release addresses a security vulnerability, updating is strongly recommended. This blog runs the alpha version, for testing purposes.
on January 04, 2016 11:33 AM

A recent article by professors Karim Lakhani and Marco Iansiti in the Harvard Business Review, “Digital Ubiquity: How Connection, Sensors and Data are Revolutionizing Business”, gave me the opportunity for some interesting insights and considerations.

The evolution of digital technology and the development of modern “Internet of Things” devices are having hugely transformative effects on social relationships and their business models. These effects cannot be ignored if we want to perceive – with the right clarity and meaning – the innovation process that inevitably comes with them.

The three fundamental properties of digital technology…

<Read More…>

on January 04, 2016 09:13 AM

December’s reading list

Canonical Design Team

Happy 2016!

Here are the best links shared by the design team in December:

  1. Stance Star Wars socks
  2. 24 Ways
  3. Sketch Blog: Leaving the Mac App Store
  4. 12 Devs
  5. Web Development Calendars for 2015
  6. These photos are why I’m trapped in Tokyo forever now
  7. Inside Abbey Road
  8. Improve the Apple remote with a purple rubber band
  9. Frank Underwood 2016
  10. Spotify Star Wars

Thank you to Joe, Karl, Luca, Olga, Richard, Robin and me for the links this month!

on January 04, 2016 08:42 AM

The New Press has published a new collection of Aaron Swartz’s writing called The Boy Who Could Change the World: The Writings of Aaron Swartz. I worked with Seth Schoen to introduce and help edit the opening section of book that includes Aaron’s writings on free culture, access to information and knowledge, and copyright. Seth and I have put our introduction online under an appropriately free license (CC BY-SA).

Over the last week, I’ve read the whole book again. I think the book really is a wonderful snapshot of Aaron’s thought and personality. It’s got bits that make me roll my eyes, bits that make me want to shout in support, and bits that continue to challenge me. It all makes me miss Aaron terribly. I strongly recommend the book.

Because the publication is posthumous, folks like me are doing media work for the book. Having named the book its “progressive pick” of the week, Truthout has also published an interview with me about Aaron and the book.

Other folks who introduced and/or edited topical sections in the book are David Auerbach (Computers), David Segal (Politics), Cory Doctorow (Media), James Grimmelmann (Books and Culture), and Astra Taylor (Unschool). The book is introduced by Larry Lessig.

on January 04, 2016 02:12 AM

January 03, 2016

When I was looking back on my contributions to Ubuntu during 2015, I realised that although there have been some significant changes, I'm still involved with answering occasional posts on the Ubuntu Forums, reporting bugs, testing daily ISOs, and contributing to the production of the Ubuntu Weekly Newsletter – all activities that I've been involved with for several years now.

My activities in 2015

To summarise, my "Ubuntu" year looked like this:
  • Bought a cheap laptop to test Xubuntu.
  • Successfully applied for an Ubuntu Membership.
  • Released an edition of the Ubuntu Weekly Newsletter [UWN].
  • Won the Xubuntu QA Incentive for ISO testing.
  • Stopped testing Xubuntu.
  • Took up a moderator position at the Ubuntu Forums.
  • Switched from using Kubuntu to Ubuntu on my main laptop.
  • Stood down as a forum moderator.
  • Passed a personal milestone of contributing to 100 issues of UWN.
  • Subsequently cut back on my newsletter contributions.
  • Started testing Ubuntu GNOME.
  • Reverted to testing Xubuntu but on a more casual basis than before.
Toshiba Satellite C-50B running Xubuntu 15.10
In June I quickly accepted an invitation to become a moderator of the Ubuntu Forums. After a few weeks I started to doubt that I was going to be as good as I needed to be, and in the middle of September I decided that the challenge was too great for me, so I stood down. I am extremely grateful to the Forum Council for the opportunity that I was given, and hopefully at some time in the far distant future I'll be given another chance to do something that, in the main, I quite enjoyed.

A change of Ubuntu flavour

Also in June I started saying farewell to Kubuntu after being a regular user since the Natty Narwhal release in April 2011. I mean no disrespect to the Kubuntu team, as they just have to work with whatever KDE releases, but I feel that Plasma 5 is a great disappointment compared to KDE 4. It took a few weeks to adjust to using Ubuntu (Unity) as I hadn't used it regularly since the 12.04 release. I still have Kubuntu 14.04 LTS installed on a couple of backup machines, but I'll be installing another flavour on both of them in due course. Ubuntu MATE, which I've recently tested in a live environment, looks very interesting!

The Ubuntu Weekly Newsletter

Although I've cut back on my input into the production of UWN for now, I'll take this opportunity to make yet another appeal for help with the newsletter, as it is very much needed at this time. Finding links to articles and writing summaries is not difficult and requires no more than 30 minutes or so of your time each week. If you're short of ideas on how you can contribute to Ubuntu then please consider joining the team.

Anyway, just a few days into the new year and I'm already wondering what my summary of 2016 will look like.
on January 03, 2016 06:25 PM

January 02, 2016

Tributes:

Over the last week, people have been saying a lot about the wonderful life of Ian Murdock and his contributions to Debian and the world of free software. According to one news site, a San Francisco police officer, Grace Gatpandan, has been doing the opposite, starting a PR spin operation, leaking snippets of information about what may have happened during Ian's final 24 hours. Sadly, these things are now starting to be regurgitated without proper scrutiny by the mainstream press (note the erroneous reference to SFGate with a link to SFBay.ca; this is mainstream media at its best).

The report talks about somebody "trying to break into a residence". Let's translate that from the spin-doctor-speak back to English: it is the silly season, when many people have a couple of extra drinks and do silly things like losing their keys. "a residence", or just his own home perhaps? Doesn't the choice of words make the motive sound so much more sinister? Nobody knows the full story, so snippets of information like this are not helpful.

Did they really mean to leave people with the impression that one of the greatest visionaries of Silicon Valley was also a cat burglar? That somebody who spent his life giving selflessly and generously for the benefit of the whole world (his legacy is far greater than that of Steve Jobs, as Debian comes with no strings attached) spent the Christmas weekend taking things from other people's houses in the dark of the night?

If having a few drinks and losing your keys in December is such a sorry state to be in, many of us could potentially be framed in the same terms at some point in our lives. That is one of the reasons I feel so compelled to write this: it is not just Ian who has suffered an injustice here, somebody else could be going through exactly the same experience at the moment you are reading this. Any of us could end up facing an assault as brutal as the tweets imply at some point in the future. At least I can console myself that as a privileged white male, the risk to myself is much lower than for those with mental illness, the homeless, transgender, Muslim or black people but as Ian appears to have discovered, that risk is still very real.

The story reports that officers made a decision to detain Ian on the grounds that he "matched the description of the person trying to break in". This also seems odd. If he had weapons or drugs or he was known to police that would have almost certainly been emphasized. Is it right to rush in and deprive somebody of their liberties without first giving them an opportunity to identify themselves and possibly confirm if they had a reason to be there?

The report goes on, "he was belligerent", "he became violent", "banging his head" all by himself. How often do you see intelligent and successful people like Ian Murdock spontaneously harming themselves in that way? How often do you see reports that somebody "banged their head", all by themselves of course, during some encounter with law enforcement? Does Ms Gatpandan really expect us to believe it is merely coincidence? Do the police categorically deny they ever gave a suspect a shove in the back, or tripped a suspect's legs such that he fell over or just made a mistake?

If any person was genuinely trying to spontaneously inflict a head injury on himself, as the police have suggested, why wouldn't the police leave them in the hospital or other suitable care? Do they really think that when people are displaying signs of such distress, rounding them up and taking them to jail will be in their best interests?

Now, I'm not suggesting that there was a pre-meditated conspiracy to harm Ian personally. Police may have been at the end of a long shift (and it is a disgrace that many US police are not paid for their overtime) or just had a rough experience with somebody far more sinister. On the other hand, there may have been a mistake, gaps in police training or an inappropriate use of a procedure that is not always justified, like a strip search, that causes profound suffering for many victims.

A select number of US police forces have been shamed around the world for a series of incidents of extreme violence in recent times, including the death of Michael Brown in Ferguson, shooting Walter Scott in the back, death of Freddie Gray in Baltimore and the attempts of Chicago's police to run an on-shore version of Guantanamo Bay. Beyond those highly violent incidents, the world has also seen the abuse of Ahmed Mohamed, the Muslim schoolboy arrested for his interest in electronics and in 2013, the suicide of Aaron Swartz which appears to be a direct consequence of the "Justice" department's obsession with him.

What have the police learned from all this bad publicity? Are they changing their methods, or just hiring more spin doctors? If that is their response, then doesn't it leave them with a big advantage over somebody like Ian who is now deceased?

Isn't it standard practice for some police to simply round up anybody who is a bit lost and write up a charge sheet for resisting arrest or assaulting an officer as insurance against questions about their own excessive use of force?

When British police executed Jean Charles de Menezes on a crowded tube train and realized they had just done something incredibly outrageous, their PR office went to great lengths to try and protect their image, even photoshopping images of Menezes to make him look more like some other suspect in a wanted poster. To this day, they continue to refer to Menezes as a victim of the terrorists, could they be any more arrogant? While nobody believes the police woke up that morning thinking "let's kill some random guy on the tube", it is clear they made a mistake and like many people (not just police), they immediately prioritized protecting their reputation over protecting the truth.

Nobody else knows exactly what Ian was doing and exactly what the police did to him. We may never know. However, any disparaging comments from the police should be viewed with some caution.

The horrors of incarceration

It would be hard for any of us to understand everything that somebody goes through when detained by the police. The recently released movie about the Stanford Prison Experiment may be an interesting place to start; a German film produced in 2001, Das Experiment, may be even better.

The United States has the largest prison population in the world and the second-highest per-capita incarceration rate. The system, and the police and prison officers who operate it, treat these people as packages on a conveyor belt, without even the most basic human dignity. Whether their encounter lasts for just a few hours or a decade, is it any surprise that something dies inside them when society is so cruel?

Worldwide, there is an increasing trend to make incarceration as degrading as possible. People may be innocent until proven guilty, but this hasn't stopped police in the UK from locking up and strip-searching over 4,500 children in a five-year period; would these children go away feeling any different than if they had had an encounter with Jimmy Savile or Rolf Harris? One can only wonder what they do to adults.

What all this boils down to is that people shouldn't really be incarcerated unless it is clear the danger they pose to society is greater than the danger they may face in a prison.

What can people do for Ian and for justice?

Now that the spin doctors have started trying to do a job on him, it would be great to try and fill the Internet with stories of the great things Ian has done for the world. Write whatever you feel about Ian's work and your own experience of Debian.

While the circumstances of the final tweets from his Twitter account are confusing, the tweets appear to be consistent with many other complaints about US law enforcement. Are there positive things that people can do in their community to help reduce the harm?

Sending books to prisoners (the UK tried to ban this) can make a difference. Treat them like humans, even if the system doesn't.

Recording incidents of police activities can also make a huge difference, such as the video of the shooting of Walter Scott or the UK police making a brutal unprovoked attack on a newspaper vendor. Don't just walk past a violent situation and assume the police are the good guys. People making recordings may find themselves in danger, it is recommended to use software that automatically duplicates each recording, preferably to the cloud, so that if the police ask you to delete a recording (why would they?), you can let them watch you delete it and still have a copy.

Can anybody think of awards that Ian Murdock should be nominated for, either in free software, computing or engineering in general? Some, like the prestigious Queen Elizabeth Prize for Engineering, can't be awarded posthumously, but others may be within reach. Come and share your ideas on the debian-project mailing list; there are already some here.

Best of all, Ian didn't just build software, he built an organization, Debian. Debian's principles have helped to unite many people from otherwise different backgrounds and carry on those principles even when Ian is no longer among us. Find out more, install it on your computer or even look for ways to participate in the project.

on January 02, 2016 08:45 PM

A review of my 2015

Riccardo Padovani

Hello all, and first of all happy 2016 to you all!

I hope it will bring to you all happiness you deserve :-)

2015 was a great year for me on a personal level - I had to fight some battles, and I had ups and downs, as everyone does, I suppose. But I'm sure it has been the most amazing year I've had so far. Even better, it laid the foundations for an even better 2016.

Ubuntu

I think 90% of the readers of this blog are interested in Ubuntu, so it is important to highlight how awesome this year has been for the project I was most involved in: Ubuntu Phone.

The first Ubuntu Phone hit the market in February. Two other phones arrived in June. And thanks to a lot of updates, they improve each month.

I’m very proud I have helped a little bit this awesome project. And with Mivoligo and Tyrel we created one of the most loved games for Ubuntu Phones: Falldown.

Donations

In 2015 I received 128.55 euros in donations. This is mind-blowing: when I added the PayPal donation form a year ago, I could never have imagined such huge support, so thank you!

Also, for Falldown we received 140 euros in donations. We spent some of that in Germany for UbuCon Europe, and we'll see how to spend the rest to provide a better game (or new games) :)

2016

2015 has been an amazing year, and it’s incredible to look back and see how many things changed!

What’s the plan for 2016 then? Well, if you have followed me you know I have a little job now; I'm very happy with it and it gives me great satisfaction.

While during the summer I was able to do my job and make some open source contributions, unfortunately in September I restarted university and haven't yet found a way to contribute as much as I used to.

So in 2016 I hope to find a way to do all the things I love: I would love to release a couple of updates for Falldown, and do other things too. But the first thing for me is finding an equilibrium in my time for all the things I care about.

I hope you all will have an awesome 2016!

Ciao, R.

on January 02, 2016 06:45 PM

Starting 2016

Stephen Michael Kellat

Happy New Year! 2015 was not a happy time for some. In other cases we saw great things happen that need to spread further this year.

To reduce things to list-like form:

  1. Humorist Dave Barry's year in review column for 2015 is #disturbing. He's trying hard to make things funny. Relative to 2015 it is just hard.
  2. The local newspaper's look back at 2015 is simply sad news. There is not much happy to it.
  3. We're being asked to "do more with less" at work while attrition is skyrocketing. We've also got quite an imbalance between 12-month staff and staff who are on variable seasons of 10 months duty or less per calendar year. There has been talk of expecting "weeping and gnashing of teeth" this year.
  4. I will be taking classes at Lakeland Community College once the Spring 2016 semester starts.
  5. Whatever happened to the new season of Person of Interest? There is still no air date.
  6. It was disturbing to hear of the loss of Ian Murdock. I'm left with various questions that bother me greatly. For now I'm keeping them to myself.
  7. I try not to make predictions for the year ahead. It is bad enough that it is a presidential election year. If the Republican Party's candidate is that real estate developer then I might be reconsidering continuing in my job as a civil servant. Reviewing boltholes in the Pacific is an evergreen matter, it seems.

And finally...

$ ubuntu-support-status
Support status summary of 'WOTAN':

You have 2011 packages (67.8%) supported until July 2016 (9m)
You have 18 packages (0.6%) supported until September 2016 (9m)

You have 0 packages (0.0%) that can not/no-longer be downloaded
You have 939 packages (31.6%) that are unsupported

Run with --show-unsupported, --show-supported or --show-all to see more details

I'm doing okay on Xubuntu 15.10 for the moment but when a second alpha milestone rolls around I may see about jumping to test Xenial Xerus on my amd64 hardware. Dodo Chaplet is nowhere to be found. The computer is also nowhere near the Post Office Tower for those certain fans of black & white television serials from the past.

Creative Commons License
Starting 2016 by Stephen Michael Kellat is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

on January 02, 2016 03:30 AM

December 31, 2015

Glucosio in 2016

Benjamin Kerensa

Happy New Year, friends!

Our core team and contributors have much to be proud of, reflecting on the work we did in the past few months. While there are many things to be proud of, I think one of the biggest accomplishments was that we built an open source project and released a product to Google Play in under four months. We then went on to do four more releases and are growing our user base internationally on a daily basis.

Glucosio for Android

We have had an astounding amount of coverage from the media about the vision we have for Glucosio and how we can use open source software to not only help people with diabetes improve their outcomes but further research through anonymous crowdsourcing.

I’m proud of the work our core team has put in over the past few months and excited about what the new year has in store for us as a project. One big change next year is that we will formally be under the leadership of a non-profit foundation (the Glucosio Foundation), which should help us be more organized and also give us the financial and legal structure we need to grow as a project and deliver on our vision.

I’ve been able to meet and talk with third parties like Dexcom, Nightscout Foundation and many others including individual developers, researchers and other foundations who are very interested in the work we are pioneering and are interested in partnering, supporting or collaborating with Glucosio.

One exciting thing we hope to kick off in the New Year is Diabetes Hack Days, where organizers around the world can host hack days in their communities to get people to come together to hack on software and hardware projects that will spur new innovation and creativity around diabetes technology. Most importantly, though, we are very excited to launch our API to researchers next year so they can begin extracting anonymized data from our platform to help further their diabetes research.

We also look forward to releasing Glucosio for iOS in the first quarter of 2016 which has had a lot of interest and been under development for a couple months now.

In closing, we would like to invite developers, translators, and anyone else to get in touch, get connected with our project, and start contributing to the vision we have of amazing open source software to help people with diabetes. We’d also ask you to consider a donation to the project, which will help us with the launch of our iOS app in Q1 of 2016, help us more rapidly produce features by offering bounties via BountySource, and help us expand into a more mature open source project.

on December 31, 2015 04:00 PM

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donors (thanks to them!) but also for the wider Debian community, because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

Debian LTS

This month I have been paid to work 21.25 hours on Debian LTS. During this time I worked on the following things:

  • Sent a first patch and later an updated patch to modify DAK so that it can send the accept/reject mails to the signer of the upload instead of the maintainer. Details in #796784.
  • Uploaded MySQL 5.5 compatibility fixes for phpmyadmin and postfix-policyd so that we could release MySQL 5.5 as an upgrade option to MySQL 5.1 (see DLA 359-1).
  • Released DLA 361-1 on bouncycastle after having gotten the green light from upstream.
  • Released DLA 362-1 on dhcpd fixing three CVEs.
  • Released DLA 366-1 on arts fixing one CVE.
  • Released DLA 367-1 on kdelibs fixing one CVE.
  • Handled the LTS frontdesk for a whole week.
  • Sponsored the upload of foomatic-filters for DLA 371-1.
  • Filed #808256 and #808257 to get libnsbmp/libnsgif removed. Both packages had recent CVEs and had been sitting unused in Debian since their introduction 6 years ago…
  • Released DLA 372-1 announcing the end of support of virtualbox-ose.
  • Updated git repository of debian-security-support to account for the former change and also took care of a few pending issues.
  • Released DLA 376-1 on mono to fix one CVE.
  • Added some initial DEP-8 tests to python-django that will help ensure that a security update doesn’t break the package (a minimal sketch of the test format follows this list).
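For illustration only, the smallest possible DEP-8 (autopkgtest) test is a Test-Command stanza in debian/tests/control, run against the installed package. This is a generic sketch, not the actual test suite added to python-django:

# debian/tests/control -- executed by autopkgtest against the installed package
Test-Command: python3 -c "import django; print(django.get_version())"
Depends: python3-django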

Distro Tracker

I put a big focus on tracker.debian.org work this month. I completed the switch of the mail interface from packages.qa.debian.org to tracker.debian.org and I announced the change on debian-devel-announce.

The changes resulted in a few problems that I quickly fixed (like #807073) and some other failures seen only by me and that were generated by weird spam messages (did you know that a subject can’t have a newline character but that it can be encoded and folded over multiple lines?).

Related to that, I fixed some services so that they send their mails to tracker.debian.org directly instead of relying on the old emails (they get forwarded for now, but it would be nice to be able to get rid of that forward). I updated (with the help of Lucas Nussbaum) the service that forwards the Launchpad bugs to the tracker, I sent a patch to update the @packages.debian.org aliases (not yet applied), and I updated the configuration of all git commit notice scripts in the Alioth collab-maint and python-modules projects (many remain to be done). I asked Ubuntu’s Merge-O-Matic to use the new emails as well (see LP 1525497). DAK and the Debian BTS still have to be updated; as of yet, nobody has reacted to my announcement… Last but not least, I updated many wiki pages which duplicated the instructions to set up the commit notices sent to the PTS.

While on a good track I opted to tackle the long-standing RC bug that was plaguing tracker.debian.org (#789183), so I updated the codebase to rely on Twitter’s bootstrap v4 instead of v2. I had to switch to something else for the icons since glyphicons is no longer provided as part of bootstrap and the actual license for the standalone version was not suitable for use. I opted for Github’s Octicons. I made numerous little improvements while doing that (closing some bugs in the process) and I believe that the result is more pleasant to use.

I also did a lot of bug triage and fixed a few small issues, like the incomplete architecture list (#793547), a page used only by people with JavaScript disabled that was not working, and the invalid links for packages still using CVS (ugh, see #561228).

Misc packaging

Django. After having added DEP-8 tests (as part of my LTS work, see above), I discovered that the current version in unstable did not pass its test suite… so I filed the issue upstream (ticket 26016) and added the corresponding patch. And I encouraged others to update python-bcrypt in Debian to a newer version that would have worked with Django 1.9 (see #803096). I also fixed another small issue in Django (see ticket 26017 with my pull request that got accepted).

I asked the release managers to consider accepting the latest 1.7.x version in jessie (see #807654) but I have gotten zero answers so far. And I’m not the only one waiting for an answer. It’s a bit of a sad situation… we still have a few weeks until the next point release, but for once I did it in advance and I would love to have timely feedback.

Last but not least, I started maintaining the current LTS release (1.8.x) in jessie-backports.

Tryton. I upgraded to Tryton 3.8 and discovered an issue that I filed in #806781. I sponsored 5 new tryton modules for Matthias Behrle (who is DM) as well as one security upload (for CVE-2015-0861).

Debian Handbook. I uploaded a new version to Debian Unstable and asked the release managers for permission to upload a backport of it to jessie, so that jessie has a version of the package that documents jessie and not wheezy… Contrary to my other Django request, this one should be non-controversial, but I have also had zero answers so far; see #807515.

Misc. I filed #808583 when sbuild stopped working with Perl 5.22. I handled #807860 on publican: I found the corresponding upstream ticket and discovered a workaround with the help of upstream (see here).

Kali related work

I reported a bug to #debian-apt about apt miscalculating the download size (ending up with 18 EB!), which resulted in a fix here in version 1.1.4. Without this fix, installing a meta-package that needed more than 2 GB of downloads was not possible, and we have a kali-linux-all metapackage in exactly that situation which gets regularly installed in a Jenkins test.

I added captcha support to Distro Tracker and enabled this feature on pkg.kali.org.

I filed #808863 against uhd-host because it was not possible to install the package in a chroot managed by systemd-nspawn, where /proc is read-only. And we started using this setup to test dist-upgrades from one version of Kali to the next…

Thanks

See you next month for a new summary of my activities.


on December 31, 2015 10:51 AM

Remembering Ian Murdock

Benjamin Kerensa

Photo by Yuichi Sakuraba / CC BY

There is clearly great sadness felt in the open source community today after learning of the passing of Ian Murdock, who founded the Debian Linux distribution and was the first Debian Project Leader. For those not familiar with the history of the project's name: Ian is the “ian” in Debian, and the “Deb” comes from his then-girlfriend, Debra Lynn.

I was fortunate to meet Ian Murdock some years ago at an early Linux Conference (LinuxWorld) and it was very inspiring to hear him talk about open source and open culture. I feel still today that he was one of the many people who helped shape my own direction and contributions in open source. Ian was very passionate about open source and helped create the bricks (philosophy, vision, governance, practice) that power many open source projects today.

If it were not for Ian, we would not have many of the great Debian forks we have today including the very popular Ubuntu. There is no doubt that the work he did and his contributions to the early days of open source have had an impact across many projects and losing Ian at such a young age is a tragedy.

That said, I think the circumstances around Ian’s death are quite concerning, given the tweets he made. I do hope that if Ian suffered excessive force at the hands of the San Francisco Police Department, justice will eventually be served.

I hope that we can all reflect on the values that Ian championed and the important work that he did and celebrate his contributions, which have had a very large and positive impact on computing.

Thank you Ian!

on December 31, 2015 06:25 AM

December 30, 2015

Ian Murdock was perhaps best known professionally as the founder of the Debian project, which he created while still a student at Purdue University, where he earned his bachelor’s degree in computer science in 1996. Debian was one of the first Linux distros to be forged, and it is widely regarded as one of […]
on December 30, 2015 08:58 PM

Ian Murdock passed away

Marcin Juszkiewicz

I do not usually write when people outside of my family die, but when I read that Ian Murdock is no longer with us, I felt that I had to write a few words.

I never met him, but a lot of things in my FOSS career happened because of his most famous project: Debian. For those who do not know: he was the “ian”, while “Deb” came from his girlfriend's name, Debra.

My first GNU/Linux distribution installed was Debian. First on an Amiga 1200, then on a PC (where it was my main operating system for years). My first package was made for Debian (“tex-skak” – already removed from the archive). I was considering applying for Debian Developer status but found OpenEmbedded first.

Debian's way of handling non-free packages allowed me to freely hack on anything I wanted, as I knew that I could because someone else had already checked the licenses. Try that in the PalmOS or Microsoft Windows worlds.

Sure, there were other distributions (Slackware, Red Hat Linux) in the 90s, but it was Debian which brought me to the FOSS world. And it is still my favorite (despite my working for Red Hat).

on December 30, 2015 08:38 PM

Today we heard the sad news that Ian Murdock has passed away. He was 42.

Although Ian's and my paths crossed relatively infrequently, over the years we became friends. His tremendous work in Debian was an inspiration for my own work in Ubuntu. At times when I was unsure of what to do in my work, Ian would share his guidance and wisdom. He never asked for anything in return. He never judged. He always supported the growth of Open Source and Free Software. He was precisely the kind of person that makes the Open Source and Free Software world so beautiful.

As such, when I heard about some of his erratic tweets a few days back as I landed back home from the UK for Christmas, I reached out with a friendly arm to see if there was anything I could do to help. Sadly, I got no response. I now know why: he had likely just passed away when I reached out to him.

While it is natural for us to grieve his passing, we should also take time to focus on what he gave us all. He gave us a sparkling personality, a passion for everyone to succeed, and a legacy of Open Source and Free Software that would be hard to match.

Ian, wherever you may be, rest in peace. We will miss you.

on December 30, 2015 08:06 PM

Not only do I keep incrementing version numbers faster than ever before, APT also keeps getting faster. But not only that: it also has some bugs fixed, and the cache is now checked with a hash when opening.

Important fix for 1.1.6 regression

Since APT 1.1.6, APT uses the configured xz compression level. Unfortunately, the default was set to 9, which requires 674 MiB of RAM, compared to the 94 MiB required at level 6.

This caused the test suite to fail on the Ubuntu autopkgtest servers, but I thought it was just some temporary hiccup on their part, and so did not look into it for the 1.1.7, 1.1.8, and 1.1.9 releases. When the Ubuntu servers finally failed with 1.1.9 again (they only started building again on Monday, it seems), I noticed something was wrong.

Enter git bisect. I created a script that compiles the APT source code and runs a test with ulimit for virtual and resident memory set to 512 (which worked in 1.1.5), let it run, and thus found out the reason mentioned above.

The solution: APT now defaults to level 6.

New Features

APT 1.1.8 introduces /usr/lib/apt/apt-helper cat-file which can be used to read files compressed by any compressor understood by APT. It is used in the recent apt-file experimental release, and serves to prepare us for a future in which files on the disk might be compressed with a different compressor (such as LZ4 for Contents files; this will improve rred speed on them by a factor of 7).

David added a feature that enables servers to advertise that they do not want APT to download and use some Architecture: all contents when they include all in their list of architectures. This is to allow archives to drop Architecture: all packages from the architecture-specific content files, to avoid redundant data and (thus) improve the performance of apt-file.

Buffered writes

APT 1.1.9 introduces buffered writing for rred, reducing the runtime by about 50% on a slowish SSD, and maybe more on HDDs. The 1.1.9 release is a bit buggy and might mess things up when a write syscall is interrupted; this is fixed in 1.1.10.

Cache generation improvements

APT 1.1.9 and APT 1.1.10 improve the cache generation algorithms in several ways: switching a lookup table from std::map to std::unordered_map, providing an inline isspace_ascii() function, and inlining the tolower_ascii() function – tiny functions that are called a lot.

APT 1.1.10 also switches the cache’s hash function to the DJB hash function and increases the default hash table sizes to the smallest prime larger than 15000, namely 15013. This reduces the average bucket size from 6.5 to 4.5. We might increase this further in the future.
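For reference, the classic DJB hash (djb2) is tiny, which is part of its appeal. A minimal sketch of the function follows; APT's actual implementation differs in its details:

#include <cstddef>

// djb2 by Daniel J. Bernstein: hash = hash * 33 + c, seeded with 5381.
static unsigned long djb_hash(const char *str)
{
    unsigned long hash = 5381;
    for (unsigned char c; (c = (unsigned char)*str++) != 0; )
        hash = ((hash << 5) + hash) + c; // hash * 33 + c
    return hash;
}

// Bucket selection with the new default table size mentioned above:
//   size_t bucket = djb_hash(name) % 15013;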

Checksum for the cache, but no more syncs

Prior to APT 1.1.10 writing the cache was a multi-part process:

  1. Write the cache to a temporary file with the dirty bit set to true
  2. Call fsync() to sync the cache
  3. Write a new header with the dirty bit set to false
  4. Call fsync() to sync the new header
  5. (Rename the temporary file to the target name)

The last step was obviously not needed, as we could easily live with an intact cache that has its dirty field set to false, as we can just rebuild it.

But what matters more is step 2. Synchronizing the entire 40 or 50 MB takes some time. On my HDD system, it consumed 56% of the entire cache generation time, and on my SSD system, it consumed 25% of the time.

APT 1.1.10 does not sync the cache at all. It now embeds a hashsum (adler32 for performance reasons) in the cache. This helps ensure that no matter what parts of the cache are written in case of some failure somewhere, we can still detect a failure with reasonable confidence (and even more errors than before).
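As a rough sketch of the verification idea using zlib's adler32() – the function name, field names and cache layout here are hypothetical, not APT's actual format:

#include <zlib.h>   // adler32()
#include <cstdint>
#include <cstddef>

// Verify a cache body against the checksum stored in its header.
// 'stored' would be read from the header; buf/len is the cache body.
static bool CacheChecksumOK(const unsigned char *buf, size_t len, uint32_t stored)
{
    uLong sum = adler32(0L, Z_NULL, 0); // adler32 initial value
    sum = adler32(sum, buf, (uInt)len); // checksum over the body
    return (uint32_t)sum == stored;     // mismatch => rebuild the cache
}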

This means that cache generation is now much faster for a lot of people. On the bad side, commands like apt-cache show that previously took maybe 10 ms to execute can now take about 80 ms.

Please report back on your performance experience with 1.1.10 release, I’m very interested to see if that works reasonably for other people. And if you have any other idea how to solve the issue, I’d be interested to hear them (all data needs to be written before the header with dirty=0 is written, but we don’t want to sync the data).

Future work

We seem to have a lot of temporary (?) std::string objects during the cache generation, accounting for about 10% of the run time. I’m thinking of introducing a string_view class similar to the one proposed for C++17 and making use of that.
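The idea, as a toy sketch (C++17 later standardised this as std::string_view): a non-owning pointer/length pair, so taking a "substring" of the parsed file allocates nothing:

#include <cstring>
#include <cstddef>

// Non-owning view into memory that someone else keeps alive; copying
// the view copies two words, never the underlying characters.
class string_view
{
    const char *data_;
    size_t size_;
public:
    string_view(const char *data, size_t size) : data_(data), size_(size) {}
    const char *data() const { return data_; }
    size_t size() const { return size_; }
    bool operator==(string_view other) const
    {
        return size_ == other.size_ && memcmp(data_, other.data_, size_) == 0;
    }
};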

I also thought about calling posix_fadvise() before starting to parse files, but the cache generation process does not seem to spend a lot of its time in system calls (even with all caches dropped before the run), so I don’t think this will improve things.

If anyone has some other suggestions or patches for performance stuff, let me know.


on December 30, 2015 01:05 AM

December 29, 2015

I was rather decently sent a Titan charging cable by the people at Fusechicken, and I reviewed it for the upcoming episode of Bad Voltage. I thought my review could also gain itself a home here:

I’ve got a bunch of USB cables for charging things; my flat is littered with them. Some came with phones, or Kindles, or speakers; most are the now-standard micro-USB, although some are that stupid old-style mini-USB trapezium-shaped thing which I keep around so half the crap in my laptop graveyard stays working. A couple are Lightning cables for iPhones. And most are a bit frayed or bent or have dodgy connections because they’ve been run over by a chair or screwed up in my pocket or used to tie a damsel to the railroad tracks or whatever. One company, Fusechicken, believe they’ve solved this problem and kindly sent us what they call Titan: the toughest cable on earth. Apparently, it’s the last cable you’ll ever need. Because it’s wrapped in flexible steel. According to them, you can chainsaw this cable and it won’t be harmed, so if you need to charge your phone while in Leatherhead’s cellar, this is clearly the place to go. I’ll say this: when they call it tough, they are not kidding. I have dropped it off a balcony and had it run over by a car and played a game of tug-of-war with it and it still doesn’t have any problem charging. The box for “undamageable cable” is firmly ticked. It doesn’t coil up very small because it is basically the same vibe as that huge flying snake thing from the end of the Avengers; you don’t wanna keep this in your coat pocket, unless you also need a convenient way of hanging a car off the edge of a bridge while you’re out in town. It comes in micro-USB and Lightning flavours so it’ll charge any phone or device you’ve got lying around. The micro-USB one is $25 and the Lightning one is $35, which on the one hand is thirty times the price of a bog-standard cable but on the other hand, forgoing three pints or a happy ending to your next massage to ensure you’ll never have a frayed cable again sounds like a good idea to me. The Bad Voltage verdict: if your cables get frayed, get a Titan and they won’t. Plus, it’s nice and shiny.

on December 29, 2015 11:39 PM

I just want to clarify that this event happened in August; we were a small group organizing it, and it was a completely volunteer-run event.

What is the VII Central America Summit?

The Central American Free Software Summit is a space for articulation, coordination and the exchange of ideas between the Free/Libre Open Source Software (FLOSS) communities that make up the SLCA, reaching agreements and strengthening ways of working together to facilitate and promote the use and development of Free Software in the region (Central America).

Objectives of the Event

  • Strengthen the processes of social awareness of the philosophy and politics of Free Software in Honduras and Central America.
  • Provide a multidisciplinary space that allows managers of social projects and regional politicians to present their initiatives and build networks of contacts for collaboration and/or support.
  • Create an educational application during the hackathon that will take place during the event; this app will benefit the 7 countries of Central America.
  • Give companies, organizations and sponsors of free software projects thematic areas, both at the meeting and at side events, to promote their products or recruit partners, supporters and/or collaborators.

Special Thanks To!!!

This event was a great success thanks to the following open source companies, universities, foundations and local companies that believed in us.

Google was the first company to give us the Yes! The Open Source Programs office was the department at Google that helped us with the sponsorship. A special thanks to Cat Allman; she was the person who believed in this event.

A special thanks to Canonical for believing in this region; its communities have done a terrific job promoting Ubuntu in the region. Thanks, David Planella, for all your help!

The BeagleBoard.org Foundation donated 10 BeagleBone Blacks, which were distributed among 6 universities in Central America: 4 in Honduras, 2 in Costa Rica, 2 in El Salvador, 1 in Guatemala and 1 in Nicaragua. All of this was done with the help of Jason Kridner!!!

The Mozilla Foundation helped by giving 10 scholarships so Mozillian devs could come to Honduras and show us the virtues of this magnificent browser. Thanks, Guillermo Movia, for your help!

And the local Honduran university that provided the venue where the event took place.

And many other people who believed in us; with their help, the event really rocked!!!

Some images of the event

Some articles in local newspapers (In Spanish)

Sorry for the delay in reporting on this event.
on December 29, 2015 10:25 PM
While developing stress-ng I wanted to be able to see if the various memory stressors were touching memory in the way I had anticipated.  While digging around in the Linux documentation I discovered the very useful soft-dirty bit on Page Table Entries (PTEs) that gets set when a page is written to.  The mechanism to check for the soft-dirty bit is described in Documentation/vm/soft-dirty.txt; one needs to (a minimal code sketch follows the list):
  1. Clear the soft-dirty bits on the PTEs of a chosen process by writing "4" to /proc/$PID/clear_refs
  2. Wait a while for some page activity to occur
  3. Read the soft-dirty bits on the PTEs to see which pages got written to.
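A minimal sketch of steps 1 and 3 (error handling omitted; page_size comes from sysconf(_SC_PAGESIZE), and the soft-dirty flag is bit 55 of each 64-bit pagemap entry):

#include <cstdio>
#include <cstdint>
#include <sys/types.h>

// Step 1: clear the soft-dirty bits on all PTEs of process 'pid'.
static void clear_soft_dirty(pid_t pid)
{
    char path[64];
    snprintf(path, sizeof(path), "/proc/%d/clear_refs", (int)pid);
    FILE *f = fopen(path, "w");
    fputs("4", f);
    fclose(f);
}

// Step 3: check if the page containing 'vaddr' has been written to.
// /proc/$PID/pagemap holds one 64-bit entry per page; bit 55 is soft-dirty.
static bool page_soft_dirty(pid_t pid, uintptr_t vaddr, long page_size)
{
    char path[64];
    snprintf(path, sizeof(path), "/proc/%d/pagemap", (int)pid);
    FILE *f = fopen(path, "rb");
    uint64_t entry = 0;
    fseeko(f, (off_t)(vaddr / (uintptr_t)page_size) * sizeof(entry), SEEK_SET);
    fread(&entry, sizeof(entry), 1, f);
    fclose(f);
    return (entry >> 55) & 1;
}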
Not too tricky, so how about using this neat feature? While on a rather long and dull flight over the Atlantic back in August, I hacked up a very crude ncurses-based tool to continually check the PTEs of a given process and display the soft-dirty activity in real time.  During this Christmas break I picked the code up and re-worked it into a more polished tool.  One can scroll up/down the memory maps and also select a page and view its contents changing in real time.  The tool identifies the type of memory mapping a page belongs to, so one can easily scan through memory looking at pages belonging to data, code, heap, stack, anonymous mappings or even swapped-out pages.

Running it on X, compiz, firefox or thunderbird is quite instructive as one can see a lot of page activity on the large heap allocations.  The ability to see pages getting swapped out when memory pressure is high is also rather useful.

Page view of Xorg
Memory view of stack
The code is still early development quality (so expect some buglets!) and I need to work on optimising it in a lot of places, but for now, it works well enough to be a fairly interesting tool. I've currently got a package built for Ubuntu Xenial in ppa:colin-king/pagemon and the source can be cloned from http://kernel.ubuntu.com/git/cking/pagemon.git/

So, to install on Xenial, currently one needs to do:

sudo add-apt-repository ppa:colin-king/pagemon
sudo apt-get update
sudo apt-get install pagemon

I may be adding a few more features in the next few weeks, and then getting the tool into Ubuntu and Debian.

As an example, to run it on Xorg it is invoked as:

sudo pagemon -p $(pidof Xorg)

Unfortunately sudo is required to allow one to dig so intrusively into a running process. For more details on how to use pagemon consult the pagemon man page, or press "h" or "?" while running pagemon.
on December 29, 2015 08:56 PM

An excellent resource

Linux Padawan

We are always looking for good quality free teaching books. It seems that we are not the only ones. I have come across an amazing list and am delighted to share it with everyone! https://github.com/vhf/free-programming-books/blob/master/free-programming-books.md#professional-development
on December 29, 2015 01:58 PM

It's been a little bit since I last updated, and it's been a busy time. I did want to take a quick moment to update and note that I accomplished something I'm pretty proud of. As of Christmas Eve, I'm now an Offensive Security Certified Professional.

OSCP Logo

Even though I've been working in security for more than two years, the lab and exam were still a challenge. Given that I mostly deal with web security at work, it was a great change to have a lab environment of more than 50 machines to attack. Perhaps most significantly, it gave me an opportunity to fight back a little bit of the impostor syndrome I'm perpetually afflicted with.

Up next: Offensive Security Certified Expert and Cracking the Perimeter.

on December 29, 2015 05:32 AM

December 28, 2015

Pallinux: Artwork by Fabio “Pixel” Colinelli

In a world far away, in the dark Land of Digitos, populated only by machines and computers, the evil Mister Woo ruled over all. Over time, this terrible dictator had become a horrendous fire-eyed giant whose heavy steps shook the Kingdom as he walked it all day long, leaving behind him a trail of smoke and terror. Mr. Woo always wore a long, shabby and dirty top hat that had once been white, so old and ragged that he could not even keep it straight on his head.

Throughout the Land of Digitos, the inhabitants – computers – were scattered, each…


on December 28, 2015 10:08 AM

December 27, 2015

People of earth, waving at Saturn, courtesy of NASA.
“It Doesn't Look Like Ubuntu Reached Its Goal Of 200 Million Users This Year”, says Michael Larabel of Phoronix, in a post that he seems to have been itching to publish for months.

Why the negativity?!? Are you sure? Did you count all of them?

No one has.  And no one can count all of the Ubuntu users in the world!

Canonical, unlike Apple, Microsoft, Red Hat, or Google, does not require each user to register their installation of Ubuntu.

Of course, you can buy laptops preloaded with Ubuntu from Dell, HP, Lenovo, and Asus.  And there are millions of them out there.  And you can buy servers powered by Ubuntu from IBM, Dell, HP, Cisco, Lenovo, Quanta, and compatible with the OpenCompute Project.

In 2011, hardware sales might have been how Mark Shuttleworth hoped to reach 200M Ubuntu users by 2015.

But in reality, hundreds of millions of PCs, servers, devices, virtual machines, and containers have booted Ubuntu to date!

Let's look at some facts...
How many "users" of Ubuntu are there ultimately?  I bet there are over a billion people today, using Ubuntu -- both directly and indirectly.  Without a doubt, there are over a billion people on the planet benefiting from the services, security, and availability of Ubuntu today.
  • More people use Ubuntu than we know.
  • More people use Ubuntu than you know.
  • More people use Ubuntu than they know.
More people use Ubuntu than anyone actually knows.

Because of who we all are.

:-Dustin
on December 27, 2015 04:44 PM

December 26, 2015

APT’s performance in applying PDiff files, the diff format used for Packages, Sources, and other index files in the archive, has been slow.

Improving performance for uncompressed files

The reason for this is that our I/O was unbuffered, and we were reading one byte at a time in order to read lines. This changed on December 24, when read buffering was added for reading lines, vastly improving the performance of rred.

But it was still slow, so today I profiled – using gperftools – the rred method running on a 430MB uncompressed Contents file with a 75 KB large patch. I noticed that our ReadLine() method was calling some method which took a long time (google-pprof told me it was some _nss method, but that was wrong [thank you, addr2line]).

After a further look into the code, I noticed that we set the size of the buffer from the length of the line, and whenever we moved some data out of the buffer, we called memmove() to shift the remaining data to the front of the buffer.

So, I tried to use a fixed buffer size of 4096 (commit). Now memmove() would spend less time moving memory around inside the buffer. This helped a lot, bringing the run time on my example file down from 46 seconds to about 2 seconds.

Later on, I rewrote the code to not use memmove() at all, opting for start and end offsets instead, and increasing the start offset when reading from the buffer (commit).

This in turn further improved things, bringing it down to about 1.6 seconds. We could now increase the buffer size again, without any negative effect.
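
To make this concrete, here is a simplified sketch of the idea (my own illustration, not APT's actual rred code): a fixed buffer with start and end offsets, where consuming a line merely advances start, and memmove() only runs when the buffer must be compacted to make room for new data:

#include <string.h>
#include <unistd.h>

#define BUF_SIZE 4096

struct LineBuffer {
    int fd;
    char buf[BUF_SIZE];
    size_t start, end;    /* valid data lives in buf[start..end) */
};

/* Return the next '\0'-terminated line, or NULL on EOF. A LineBuffer
   starts out with start = end = 0. Lines longer than the buffer, and a
   trailing line without '\n', are not handled in this sketch. */
static char *read_line(struct LineBuffer *b)
{
    for (;;) {
        char *nl = (char *)memchr(b->buf + b->start, '\n', b->end - b->start);
        if (nl) {
            char *line = b->buf + b->start;
            *nl = '\0';
            b->start = (size_t)(nl - b->buf) + 1;   /* no memmove() here */
            return line;
        }
        if (b->start > 0) {            /* compact only when we need room */
            memmove(b->buf, b->buf + b->start, b->end - b->start);
            b->end -= b->start;
            b->start = 0;
        }
        ssize_t n = read(b->fd, b->buf + b->end, BUF_SIZE - b->end);
        if (n <= 0)
            return NULL;
        b->end += (size_t)n;
    }
}
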

Effects on apt-get update

I measured the run-time of apt-get update, excluding appstream and apt-file files, for the update from today's 07:52 to the 13:52 dinstall run. Configured sources are unstable and experimental with amd64 and i386 architectures. appstream and apt-file indexes are disabled for testing, so only Packages and Sources indexes are fetched.

The results are impressive:

  • For APT 1.1.6, updating with PDiffs enabled took 41 seconds.
  • For APT 1.1.7, updating with PDiffs enabled took 4 seconds.

That’s roughly a tenfold speed-up. By the way, running without PDiffs took 20 seconds, so there’s now no reason not to use them.

Future work

Writes are still unbuffered, and account for about 75% to 80% of our runtime. That’s an obvious area for improvements.

Profile graph of the rred method

Performance for patching compressed files

Contents files are usually compressed with gzip, and kept compressed locally because they are about 500 MB uncompressed and only 30 MB compressed. I profiled this, and it turns out there is not much we can do about it: The majority of the time is spent inside zlib, mostly combining CRC checksums:

Profile graph of rred patching a gzip-compressed Contents file

Going forward, I think a reasonable option might be to recompress Contents files using lzo – they will be a bit bigger (50MB instead of 30MB), but lzo is about 6 times as fast (compressing a 430MB Contents file took 1 second instead of 6).


Filed under: Debian, Uncategorized
on December 26, 2015 07:15 PM

December 25, 2015

Skizze - A probabilistic data-structures service and storage (Alpha)

At my day job we deal with a lot of incoming data for our product, which requires us to be able to calculate histograms and other statistics on the data-stream as fast as possible.

One of the best tools for this is Redis, which will give you 100% accuracy in O(1) (except for its HyperLogLog implementation which is a probabilistic data-structure). All in all Redis does a great job.
The problem with Redis for me personally is that, when using it for hundreds of millions of counters, I could end up using gigabytes of memory.

I also tend to use Top-K, which is not implemented in Redis but can be built on top of the ZSet data-structure via Lua scripting. The Top-K data-structure is used to keep track of the top "k" heavy hitters in a stream without having to keep track of all "n" flows (k < n), with O(1) complexity.

Anyhow, when dealing with a massive amount of data, the interest is most of the time in the heavy hitters, which can be estimated using far less memory, with O(1) complexity for reading and writing (that is, if you don't care whether a count is 124352435 or 124352011, because the UI of your app will just show "over 124 million").
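
To give a flavour of how such probabilistic counting works, here is a toy Count-Min sketch (a minimal illustration of my own, not Skizze's implementation, which is written in Go and uses Count-Min-Log): every value increments one counter per hash row, and the estimate is the minimum across rows, so collisions can only over-estimate a frequency, never under-estimate it:

#include <stdio.h>
#include <stdint.h>

#define DEPTH 4      /* independent hash rows */
#define WIDTH 1024   /* counters per row */

static uint32_t sketch[DEPTH][WIDTH];

/* FNV-1a, salted per row so each row behaves like a different hash */
static uint32_t row_hash(const char *key, uint32_t row)
{
    uint32_t h = 2166136261u ^ (row * 0x9e3779b9u);
    while (*key) {
        h ^= (uint8_t)*key++;
        h *= 16777619u;
    }
    return h % WIDTH;
}

static void cms_add(const char *key)
{
    for (uint32_t i = 0; i < DEPTH; i++)
        sketch[i][row_hash(key, i)]++;
}

/* Estimate = minimum over all rows; collisions only inflate counters */
static uint32_t cms_estimate(const char *key)
{
    uint32_t min = UINT32_MAX;
    for (uint32_t i = 0; i < DEPTH; i++) {
        uint32_t c = sketch[i][row_hash(key, i)];
        if (c < min)
            min = c;
    }
    return min;
}

int main(void)
{
    for (int i = 0; i < 1000; i++)
        cms_add("heavy-hitter");
    cms_add("rare-flow");
    printf("heavy-hitter ~%u, rare-flow ~%u\n",
        cms_estimate("heavy-hitter"), cms_estimate("rare-flow"));
    return 0;
}

The memory footprint stays at DEPTH x WIDTH counters no matter how many distinct values pass through, which is exactly the trade-off described above.
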

There are a lot of algorithms floating around and used to solve counting, frequency, membership and top-k problems, which in practice are implemented and used as part of a data-stream pipeline where stuff is counted, merged then stored.

I couldn't find a one-stop-shop service to fire & forget my data at.

Basically, the need for a solution where I can set up sketches to answer cardinality, frequency, membership and ranking queries about my data-stream (without having to reimplement the algorithms in a pipeline embedded in Storm, Spark, etc.) led to the development of Skizze (which is in alpha state).

What is Skizze?

Skizze ([ˈskɪt͡sə]: German for "sketch") is a probabilistic data-structures (sketch) service & store to deal with all problems around counting and sketching using probabilistic data-structures. (https://github.com/seiflotfy/skizze)

Unlike a Key-Value store, Skizze does not store values, but rather appends values to sketches, to solve frequency and cardinality queries in near O(1) time, with minimal memory footprint.

Which data structures are supported?

Currently the following data structures are supported:

  • HyperLogLog++ to query cardinality of values in the sketch.
  • Count-Min-Log Sketch to query frequency of values in the sketch.
  • Top-K to list the top k values in the sketch.
  • Bloom Filter to query membership of a value in the sketch.
  • Dictionary to 100% accurately query membership and frequency of values in the sketch.

What are the upcoming data structures?

Soon we intend to implement/integrate the following sketches:

How to use?

Skizze runs as a single service for now, and exposes a RESTful API.

Who helped out?

I'd like to thank the following contributors who helped develop this project:

What else?

The project is in alpha state, and we intend to improve it in every possible way, e.g.:

  • Benchmarks to test data structures against each other.
  • Referencing algorithm implementations instead of copying them into the local source (once Go 1.6's vendoring lands).
  • Storage: currently Skizze writes to disk after n seconds or m operations. Soon I'd like to be able to write only the dirty segments of a sketch to disk (in case a sketch is large, e.g. 1 GB).

Feel free to open issues or help out with the specs and development of the project on GitHub. All input is appreciated.

on December 25, 2015 09:53 AM

December 24, 2015

It’s Episode Forty-two of Season Eight of the Ubuntu Podcast! Alan Pope, Mark Johnson, Laura Cowen and Martin Wimpress are connected and speaking to your brain.

In this week’s show…

  • We installed a whole bunch of different Linux distros, went Go-Karting for Christmas, spoke at some conferences, did the San Francisco parkrun, and worked on the Ubuntu Pi Flavour Maker.
  • We look back at our 2015 predictions and make some new ones for 2016.
  • We have a command line love for removing all metadata tag information from an image:
    exiftool -all= -overwrite_original foo.png
    
  • We go over your feedback.

Our 2016 Predictions

Laura

  • 3D printing will become more accessible and mainstream.
  • IoT – infrastructure, scaling, security – back to 70s computing.
  • IoT – privacy issues will become more prominent in the development of IoT products.

Alan

  • There will be 10 commercial Ubuntu Touch devices by the end of 2016.
  • A large motor vehicle manufacturer will switch to using Ubuntu Snappy for their in-car system.
  • Edward Snowden will leave Russia.
  • Julian Assange will leave the Ecuadorian Embassy.

Mark

  • 1 of the top 5 Linux distributions, according to Distrowatch.com, will have a desktop release with Wayland or Mir as the default display server. It won’t be well received due to missing features and/or poor driver support. Mint, Debian, Ubuntu, OpenSUSE, Fedora were the top 5 distros at the end of 2015.
  • The release of Ubuntu’s first converged device will be accompanied by significant mainstream marketing and media coverage, i.e. not just “tech news sites”.
  • A big game publisher with its own digital distribution platform (i.e. not Steam; someone like Blizzard with Battle.Net or EA with Origin) will release a Linux version of its client software.

Martin

  • Vulkan will be finalised and released. Linux drivers will be released for Intel IGPs (open source), NVIDIA (via proprietary drivers) and AMD (via proprietary drivers). SteamOS will include Vulkan support and, on equivalent hardware, will outperform Windows 10. Android will announce support for Vulkan but iOS will not.
  • Virtual Reality will continue to lack adoption. There will be no official VR headset products released for PlayStation 4, XBox One or Steam.
  • There will be at least 10 consumer products, not intended for makers and not manufactured by the Raspberry Pi Foundation, launched for sale in 2016 that use a Raspberry Pi (any model) at its heart.

That’s all for this week. And that’s it for Season Eight. We’ll be going for curry in 2016 to decide whether we’ll be back for a new season. Please send your comments and suggestions to:

on December 24, 2015 09:00 AM

December 22, 2015

New Mir Release (0.18)

Kevin DuBois

Mir Image

If a new Mir release was on your Christmas wishlist (like it was on mine), Mir 0.18 has been released! I’ve been working on this the last few days, and it's out the door now.  Full text of changelog. Special thanks to the Mir team members who helped with testing, and the devs in #ubuntu-ci-eng for helping move the release along.

Graphics

  • Internal preparation work needed for Vulkan, hardware decoded multimedia optimizations, and latency improvements for nested servers.
  • Started work on plugin renderers. This will better prepare mir for IoT, where we might not have a Vulkan/GLES stack on the device, and might have to use the CPU.
  • Fixes for graphics corruption affecting Xmir (blocky black bars)
  • Various fixes for multimonitor scenarios, as well as better support for scaling buffers to suit the monitor it's on.

Input

  • Use libinput by default. We had been leaning on an old version of the Android input stack; this has been completely removed in favor of libinput.

Bugs

  • Quite a long list of bug corrections. Some of these were never ‘in the wild’ but only existed during 0.18 development.

What’s next?

It's always tricky to pin down what exactly will make it into the next release, but I can at least comment on the stuff we’re working on, in addition to the normal rounds of bugfixing and test improvements:

  • various Internet-of-Things and convergence topics (e.g. snappy, figuring out different rendering options on smaller devices).
  • buffer swapping rework to accommodate different render technologies (Vulkan!) and multimedia, and to improve latency for nested servers.
  • more flexible screenshotting support
  • further refinements to our window management API
  • refinements to our platform autodetection

How can I help?

Writing new Shells

A fun way to help would be to write new shells! Part of Mir's goals is to make this as easy to do as possible, so writing a new shell always helps us make sure we’re hitting those goals.

If you’re interested in the mir C++ shell API, then you can look at some of our demos, available in the ‘mir-demos’ package. (source here, documentation here)

Even easier than that might be writing a shell using QML like unity8 is doing via the qtmir plugin. An example of how to do that is here (instructions on running here).

Tinkering with technology

If you’re more of the nuts and bolts type, you can try porting a device, adding a new rendering platform to mir (OpenVG or pixman might be an interesting, beneficial challenge), or figuring out other features to take advantage of.

Standard stuff

Pretty much all open source projects recommend bug fixing or triaging, helping on irc (#ubuntu-mir on freenode) or documentation auditing as other good ways to start helping.

on December 22, 2015 07:18 PM

ubucon

I’m very excited about UbuCon Summit which will bring many many Ubuntu people from all parts of its community together in January. David Planella did a great job explaining why this event is going to be just fantastic.

I look forward to meeting everyone and particularly look forward to what we’ve got to show in terms of Snappy Ubuntu Core.

Manik Taneja and Sergio Schvezov

We are going to have Manik Taneja and Sergio Schvezov there who are going to give the following talk:

Internet of Things gets ‘snappy’ with Ubuntu Core

Snappy Ubuntu Core is the new rendition of Ubuntu, designed from the ground up to power the next generation of IoT devices. The same Ubuntu and its vast ecosystem, but delivered in a leaner form, with state-of-the-art security and reliable update mechanisms to ensure devices and apps are always up-to-date.

This talk will introduce Ubuntu Core, the technologies of its foundations and the developer experience with Snapcraft. We will also discuss how public and branded stores can kickstart a thriving app ecosystem and how Ubuntu meets the needs of connected device manufacturers, entrepreneurs and innovators.

And there’s more! Sergio Schvezov will also give the following workshop:

Hands-on demo: creating Ubuntu snaps with Snapcraft

An overview of snapcraft's features and a demo of how easily a snap can be created using multiple parts from different sources. We will also show how to create a plugin for unhandled source types.

In addition to that, we are going to have a few nice things at our booth, so we can give you a Snappy experience there as well.

If you want to find out more, check the entire schedule, or register for the event, you can do so at ubucon.org.

I’m looking forward to seeing you there! 😀

on December 22, 2015 03:23 PM

ReText 5.3 released

Dmitry Shachnev

On Sunday I released ReText 5.3, and here, finally, is the official announcement.

Highlights in this release are:

  • A code refactoring has been performed — a new “Tab” class has been added, and all methods that affect only one tab (not the whole window) have been moved there.

    From the user’s point of view this means two things:

    • The tabs are now draggable and reorderable (a feature requested a long time ago).

    • Some operations are now faster and more efficient. For example, in the previous release turning the WebKit renderer on/off required removing all the tabs and then re-adding them; this giant hack has now been dropped.

  • A new previewer feature was contributed by Jan Korte: now, if the document contains a local link like

    [click me](foo.mkd)
    

    and a file named foo.mkd exists, it is opened in a new ReText tab.

    It is also possible to specify names without the extension (just foo) or relative paths (../foo/bar.mkd).

  • The colors used in the editor are now fully configurable via the standard configuration mechanism. This is most useful for users of dark themes.

    For example, you can change the color of the line numbers area, the cursor position box, and all colors used by the highlighter.

    The possible colors and the procedure to change them are described in the “Color scheme setting” section in the documentation.

  • The “Display right margin at column” feature now displays the line more precisely: in the previous version it was a few pixels to the left of the cursor; now it is at exactly the same horizontal position as the cursor.

  • Some bug fixes have been made for users that install ReText using pip or setup.py install:

    • The desktop file no longer hardcodes the path to executable in the Exec field, it uses just retext now. This fix has been contributed by Buo-Ren Lin.

    • The setup.py script now installs the application logo into a location where ReText can find it. Note: this works only for installs into user’s home directory (with --user passed to pip or setup.py install), installing software globally this way is not recommended anyway.

  • The AppStream metadata included in the previous version was updated to fix some warnings from the appstream.debian.org metadata validator.

Also, a week before ReText 5.3, a new version of PyMarkups was released, bringing enhanced support for the Textile markup. You can now edit Textile files in ReText too, provided that the python-textile module for Python 3 is installed.

As usual, you can get the latest release from PyPI or from the Debian/Ubuntu repositories.

Please report any bugs you find to our issue tracker.

on December 22, 2015 11:00 AM

December 20, 2015

Suppose you are building a Qt application which must run on Linux, Mac OS and Windows. At some point, your application is likely to have to deal with file paths. Working on your Linux machine, but caring about your Windows users, you might be tempted to construct a file path like this:

QString filePath = someDir + QDir::separator() + "foo.ext";

Don't do this! Make your life simpler and just use:

QString filePath = someDir + "/foo.ext";

As QDir::separator() documentation says:

You do not need to use this function to build file paths. If you always use "/", Qt will translate your paths to conform to the underlying operating system. If you want to display paths to the user using their operating system's separator use toNativeSeparators().

Using QDir::separator() can actually cause subtle platform-dependent bugs. Let's have a look at this code snippet:

QString findBiggestFile(const QString &dirname)
{
    QDir dir(dirname);
    qint64 size = 0;   // QFileInfo::size() returns qint64, not int
    QString path;
    Q_FOREACH(const QFileInfo &info, dir.entryInfoList(QDir::Files)) {
        if (info.size() > size) {
            path = info.absoluteFilePath();
            size = info.size();
        }
    }
    return path;
}

So far so good. Now imagine you want to unit-test your code. You set up a set of files and expect the file named "file.big" to be the biggest, so you write something like this:

void testFindBiggestFile()
{
    QString result = findBiggestFile(mTestDir);
    QString expected = mTestDir + QDir::separator() + "file.big";
    QCOMPARE(result, expected);
}

This test passes on a Linux system, but fails on a Windows system: findBiggestFile() returns a path created by QFileInfo, so assuming mTestDir is C:/build/tests, result will be C:/build/tests/file.big, but expected will be C:/build/tests\file.big.

This simpler test, on the other hand, works as expected, on all platforms:

void testFindBiggestFile()
{
    QString result = findBiggestFile(mTestDir);
    QString expected = mTestDir + "/file.big";
    QCOMPARE(result, expected);
}

Though you might want to pass expected through QDir::cleanPath() so that if mTestDir ends with a slash, the test does not fail:

void testFindBiggestFile()
{
    QString result = findBiggestFile(mTestDir);
    QString expected = QDir::cleanPath(mTestDir + "/file.big");
    QCOMPARE(result, expected);
}

What about paths displayed in the user interface?

There are situations where you need to use native separators, for example when you are preparing paths which will be shown in your user interface or when you need to fork a process which expects native separators as command-line arguments.

In such situations, QDir::separator() is not a good idea either. It's simpler and more reliable to create the path with forward slashes, then pass it through QDir::toNativeSeparators(). This way you can be sure you won't let a stray forward slash slip through.
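
For example, a minimal sketch (pathLabel here is a hypothetical QLabel, someDir a QString as before):

QString filePath = QDir::cleanPath(someDir + "/foo.ext");
// Keep forward slashes internally; convert only at the display boundary
pathLabel->setText(QDir::toNativeSeparators(filePath));
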

on December 20, 2015 08:31 PM
In a few days, many Ubuntu users will unwrap new hardware, plug it in, and have a fantastic experience.

Some users will get inspired to join the community to solve bugs, add features, contribute code, and much more.


Support Gurus: use Find-a-Task

New, enthusiastic users often show up in the many Ubuntu help forums.

Encourage them to try Find-a-Task to see the variety of ways they can help.
Just send them over, and we'll do the rest.


Team Leaders: Is your team ready?

Is your team ready to welcome, train, and integrate these new volunteers?

Has your team looked at its Find-a-Task roles for volunteers? It's easy to add or change your team's listings.

Is your team approachable? Can you be contacted easily by a new volunteer? Is your web page for new volunteers accurate?


Improving Find-a-Task

Find-a-Task is the Ubuntu community's job board for volunteers. Introduced in January 2015, Find-a-Task shows fellow volunteers the variety of tasks and roles available, and links those roles to the team web pages.

Please share your suggestions for improving Find-a-Task on the Ubuntu Community Team mailing list.
on December 20, 2015 02:40 PM

December 19, 2015

Attending UbuCon Summit US in 01/2016

Sujeevan Vijayakumaran

2016 will be my favourite “UbuCon year”. The first UbuCon Summit will take place in Pasadena, and at the end of the year the first UbuCon Europe will be held in Essen, Germany, from 18th to 20th November. For the latter, I'm the head of the organisation team.

The UbuCon Summit is just around the corner and I'm really looking forward to attending the event. It's the first time that I requested money from the Ubuntu Community Donations Fund, which was thankfully accepted. The schedule has been complete for a few days now, and there are many interesting talks, including the opening keynote by Mark Shuttleworth. I'm also going to give a talk about the Labdoo Project, which is a humanitarian social network to bring education around the globe. This will also be my first conference talk in English ;-).

If you live in Southern California and haven't heard about the UbuCon Summit yet, you should definitely consider visiting this event. It'll be co-hosted with the Southern California Linux Expo, which also has many interesting talks.

I'm looking forward to meeting all my old and new friends. Especially those whom I haven't met yet, like Richard Gaskin and Nathan Haines, who are organising the UbuCon Summit. See you there!

on December 19, 2015 05:50 PM

December 18, 2015

I just published a live Plasma image with Wayland. A great milestone in a multi-year project of the Plasma team led by the awesome Martin G.  Nowhere near end-user ready yet, but the road forward is now visible to humble mortals who don’t know how to write their own Wayland protocol.  It’ll give a smoother and more secure graphics system when it’s done and ensures KDE’s software and Linux on the desktop stay relevant for another 30 years.

on December 18, 2015 05:30 PM

The last two major autopkgtest releases (3.18 from November, and 3.19 fresh from yesterday) bring some new features that are worth spreading.

New LXD virtualization backend

3.19 debuts the new adt-virt-lxd virtualization backend. In case you missed it, LXD is an API/CLI layer on top of LXC which introduces proper image management, lets you seamlessly use images and containers on remote hosts while intelligently caching them locally, automatically configures performant storage backends like zfs or btrfs, and just generally feels much cleaner and simpler to use than the “classic” LXC.

Setting it up is not complicated at all. Install the lxd package (possibly from the backports PPA if you are on 14.04 LTS), and add your user to the lxd group. Then you can add the standard LXD image server with

  lxc remote add lco https://images.linuxcontainers.org:8443

and use the image to run e. g. the libpng test from the archive:

  adt-run libpng --- lxd lco:ubuntu/trusty/i386
  adt-run libpng --- lxd lco:debian/sid/amd64

The adt-virt-lxd.1 manpage explains this in more detail, also how to use this to run tests in a container on a remote host (how cool is that!), and how to build local images with the usual autopkgtest customizations/optimizations using adt-build-lxd.

I have btrfs running on my laptop, and LXD/autopkgtest automatically use that, so the performance really rocks. Kudos to Stéphane, Serge, Tycho, and the other LXD authors!

The motivation for writing this was to make it possible to move our armhf testing into the cloud (which for $REASONS requires remote containers), but I now have a feeling that soon this will completely replace the existing adt-virt-lxc virt backend, as it's much nicer to use.

It is covered by the same regression tests as the LXC runner, and from the perspective of the package tests that you run in it, it should behave very similarly to LXC. The one problem I’m aware of is that autopkgtest-reboot-prepare is broken, but hardly anything is using that yet. This is a bit complicated to fix, but I expect it will be done in the next few weeks.

MaaS setup script

While most tests are not particularly sensitive about which kind of hardware/platform they run on, low-level software like the Linux kernel, GL libraries, X.org drivers, or Mir very much are. There is a plan for extending our automatic tests to real hardware for these packages, and being able to run autopkgtests on real iron is one important piece of that puzzle.

MaaS (Metal as a Service) provides just that — it manages a set of machines and provides an API for installing, talking to, and releasing them. The new maas autopkgtest ssh setup script (for the adt-virt-ssh backend) brings together autopkgtest and real hardware. Once you have a MaaS setup, get your API key from the web UI, then you can run a test like this:

  adt-run libpng --- ssh -s maas -- \
     --acquire "arch=amd64 tags=touchscreen" -r wily \
     http://my.maas.server/MAAS 123DEADBEEF:APIkey

The required arguments are the MaaS URL and the API key. Without any further options you will get any available machine installed with the default release. But usually you want to select a particular one by architecture and/or tags, and install a particular distro release, which you can do with the -r/--release and --acquire options.

Note that this is not wired into Ubuntu’s production CI environment, but it will be.

Selectively using packages from -proposed

Up until a few weeks ago, autopkgtest runs in the CI environment were always seeing/using the entirety of -proposed. This often led to lockups where an application foo and one of its dependencies libbar got a new version in -proposed at the same time, and on test regressions it was not clear at all whose fault it was. As a result, perfectly good packages were often stuck in -proposed for a long time, and a lot of manual investigation of root causes was needed.


These days we are using a more fine-grained approach: A test run is now specific for a “trigger”, that is, the new package in -proposed (e. g. a new version of libbar) that caused the test (e. g. for “foo”) to run. autopkgtest sets up apt pinning so that only the binary packages for the trigger come from -proposed, the rest from -release. This provides much better isolation between the mush of often hundreds of packages that get synced or uploaded every day.

This new behaviour is controlled by an extension of the --apt-pocket option. So you can say

  adt-run --apt-pocket=proposed=src:foo,libbar1,libbar-data ...

and then only the binaries from the foo source, libbar1, and libbar-data will come from -proposed, everything else from -release.
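
The effect is roughly what hand-written apt preferences along the following lines would give you (my own approximation for a wily system; the pinning adt-run actually generates may differ in detail):

Explanation: by default, take nothing from -proposed
Package: *
Pin: release a=wily-proposed
Pin-Priority: 100

Explanation: except for the trigger's binaries
Package: foo libbar1 libbar-data
Pin: release a=wily-proposed
Pin-Priority: 995
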

Caveat: Unfortunately apt’s pinning is rather limited. As soon as any of the explicitly listed packages depends on a package or version that is only available in -proposed, apt falls over and refuses the installation instead of taking the required dependencies from -proposed as well. In that case, adt-run falls back to the previous behaviour of using no pinning at all. (This unfortunately got worse with apt 1.1; bug report to be done.) But it’s still helpful in many cases that don’t involve library transitions or other package sets that need to land in lockstep.

Unified testbed setup script

There are a number of changes that need to be made to testbeds so that tests can run with maximum performance (like running dpkg through eatmydata, disabling apt translations, or automatically using the host’s apt-cacher-ng), with reliable apt sources, and in a minimal environment (to detect missing dependencies and avoid interference from unrelated services — these days the standard cloud images have a lot of unnecessary fat). There is also a choice whether to apply these only once (every day) to an autopkgtest-specific base image, or on the fly to the current ephemeral testbed for every test run (via --setup-commands). Over time this led to quite a lot of code duplication between adt-setup-vm, adt-build-lxc, the new adt-build-lxd, cloud-vm-setup, and create-nova-image-new-release.

I now cleaned this up, and there is now just a single setup-commands/setup-testbed script which works for all kinds of testbeds (LXC, LXD, QEMU images, cloud instances) and both for preparing an image with adt-buildvm-ubuntu-cloud, adt-build-lx[cd] or nova, and with preparing just the current ephemeral testbed via --setup-commands.

While this is mostly an internal refactorization, it does impact users who previously used the adt-setup-vm script for e. g. building Debian images with vmdebootstrap. This script is now gone, and the generic setup-testbed entirely replaces it.

Misc

Aside from the above, every new version has a handful of bug fixes and minor improvements, see the git log for details. As always, if you are interested in helping out or contributing a new feature, don’t hesitate to contact me or file a bug report.

on December 18, 2015 06:27 AM

December 17, 2015

The other day I needed to incorporate a large blob of binary data in a C program. One simple way is to use xxd; for example, on the binary data in the file "blob", one can do:

xxd --include blob 

unsigned char blob[] = {
  0xc8, 0xe5, 0x54, 0xee, 0x8f, 0xd7, 0x9f, 0x18, 0x9a, 0x63, 0x87, 0xbb,
  0x12, 0xe4, 0x04, 0x0f, 0xa7, 0xb6, 0x16, 0xd0, 0x70, 0x06, 0xbc, 0x57,
  0x4b, 0xaf, 0xae, 0xa2, 0xf2, 0x6b, 0xf4, 0xc6, 0xb1, 0xaa, 0x93, 0xf2,
  0x12, 0x39, 0x19, 0xee, 0x7c, 0x59, 0x03, 0x81, 0xae, 0xd3, 0x28, 0x89,
  0x05, 0x7c, 0x4e, 0x8b, 0xe5, 0x98, 0x35, 0xe8, 0xab, 0x2c, 0x7b, 0xd7,
  0xf9, 0x2e, 0xba, 0x01, 0xd4, 0xd9, 0x2e, 0x86, 0xb8, 0xef, 0x41, 0xf8,
  0x8e, 0x10, 0x36, 0x46, 0x82, 0xc4, 0x38, 0x17, 0x2e, 0x1c, 0xc9, 0x1f,
  0x3d, 0x1c, 0x51, 0x0b, 0xc9, 0x5f, 0xa7, 0xa4, 0xdc, 0x95, 0x35, 0xaa,
  0xdb, 0x51, 0xf6, 0x75, 0x52, 0xc3, 0x4e, 0x92, 0x27, 0x01, 0x69, 0x4c,
  0xc1, 0xf0, 0x70, 0x32, 0xf2, 0xb1, 0x87, 0x69, 0xb4, 0xf3, 0x7f, 0x3b,
  0x53, 0xfd, 0xc9, 0xd7, 0x8b, 0xc3, 0x08, 0x8f
};
unsigned int blob_len = 128;

..and redirecting the output from xxd into a C source file and compiling it is simple and easy to do.

However, for large binary blobs, the C source can be huge, so an alternative way is to use the linker ld as follows:

ld -s -r -b binary -o blob.o blob  

...and this generates the blob.o object code. To reference the data in a program one needs to determine the symbol names of the start, end and perhaps the length too. One can use objdump to find this as follows:

 objdump -t blob.o  
blob.o: file format elf64-x86-64
SYMBOL TABLE:
0000000000000000 l d .data 0000000000000000 .data
0000000000000080 g .data 0000000000000000 _binary_blob_end
0000000000000000 g .data 0000000000000000 _binary_blob_start
0000000000000080 g *ABS* 0000000000000000 _binary_blob_size

To access the data in C, use something like the following:

 cat test.c  

#include <stdio.h>

int main(void)
{
    /* Symbols created by ld; their addresses mark the bounds of the
       embedded data, so declare them as arrays and use char pointers
       for well-defined pointer arithmetic */
    extern char _binary_blob_start[], _binary_blob_end[];
    char *start = _binary_blob_start, *end = _binary_blob_end;

    printf("Data: %p..%p (%zu bytes)\n",
        (void *)start, (void *)end, (size_t)(end - start));
    return 0;
}

...and link and run as follows:

 gcc test.c blob.o -o test  
./test
Data: 0x601038..0x6010b8 (128 bytes)

So for large blobs, I personally favour using ld to do the hard work for me, since I don't need another tool (such as xxd) and it removes the need to convert a blob into C source and then compile it.
on December 17, 2015 05:17 PM