October 22, 2014

The following announcement will affect users using the Schedules Direct service to get guide data, including but not limited to USA and Canada.

On November 1st, 2014, the existing SD service is changing. 

We have been informed that Gracenote (formerly Tribune Media Services) will be ending the guide data service currently used by most users of Schedules Direct. Their plan is to end support for this service on November 1, 2014.

A service is being developed to mimic the DataDirect feed. It has most, but not all, of the data currently in the DataDirect feed and will be updated daily.

What does this mean for Schedules Direct?

The guide data provider (Gracenote) that Schedules Direct uses is changing how it presents guide data to users. Schedules Direct has taken it upon themselves to write a server-side compatibility layer so existing applications will continue to get guide data. This does require a change in the URL that applications use to download data, which is why an update to MythTV is necessary.

What does this mean to you as a user?

If you have a paid subscription to Schedules Direct, it will continue to work the way it has previously. A simple update to MythTV will be required for users on a supported version of MythTV.

Users who have enabled the MythTV Updates repo and are on a current version of MythTV and a supported version of Ubuntu will receive the fix for this via regular updates. The Mythbuntu team has always recommended enabling the MythTV Updates repo in the Mythbuntu Control Centre and staying up to date on fixes builds. The fix for this issue was added to our packages in the versions in the table below. More information on the Mythbuntu-provided MythTV Updates repo can be found here.

Users on builds prior to 0.27 (e.g. 0.26, 0.25) will need to either upgrade to a supported build version (see Mythbuntu Repos) or use one of the workarounds (see the MythTV Wiki).

MythTV Version       Fixed in version
0.28 (development)   2:0.28.0~master.20141013.4cb10e5-0ubuntu0mythbuntu#
0.27.X               2:0.27.4+fixes.20141015.e4f65c8-0ubuntu0mythbuntu#
0.26.X               2:0.26.2+fixes.20141022.4c4bb29-0ubuntu0mythbuntu1
Prior to 0.26.X      WILL NOT BE FIXED; please either update or see the MythTV Wiki for a workaround


For more information on this issue, please see the writeup on the MythTV wiki. Questions can be directed to the MythTV-Users mailing list.
on October 22, 2014 07:13 PM

Sprinting in DC: Tuesday

Nicholas Skaggs

This week, my team and I are sprinting with many of the core app developers and other folks inside of Ubuntu Engineering. Each day I'm attempting to give you a glimpse of what's happening.

On Tuesday I was finally able to sit down with the team and plan our week. In addition, I was able to plan some of the work I had in mind with the community folks working on the core apps. Being obsessed with testing, my primary goals this week are centered around quality. Namely, I want to make it easier for developers to write tests. Asking them to write tests is much easier when it's easy to do so. Fortunately, I think (hope?) all of the community core apps developers recognize the benefits of tests and thus are motivated to drive maturity into the testing story.

I'm also keen to work on the manual testing story. The community is invaluable in helping test images for not only ubuntu, but also all of its flavors. Seriously, you should say thank you to those folks helping make sure your install of ubuntu works well. They are busy this week helping make sure utopic is as good as it can be. Rock on, image testers! But the tools and process used weigh on my mind, and I'm keen to chat later in the week with the Canonical QA team and get their feedback.

During the day I attended sessions regarding changes and tweaks to the CI process. For core apps developers, errors in jenkins should be easier to replicate after these changes. CI will be moving to utilizing adt-run (autopkgtest) for their test execution (and you should too!). They will also provide the exact commands used to run the tests. That means you can easily duplicate the results on the dashboard locally and fix the issues found. No more "works on my box" excuses!

I also met the team responsible for the application store and gave them feedback on the application submission process. Submitting apps is already so simple, but even more cool things are happening on this front.

The end of the evening found us shuffling into cabs for a team dinner. We had a long table of folks eating Italian food and getting to know each other better.


After dinner, I pressured a few folks into having some dessert and ordered a sorbet for myself. After receiving no less than 4 fruit sorbets due to a misunderstanding, I began carving the fruits and sending plates of sorbet down the table. My testcase failed however when the plates all came back :-(



on October 22, 2014 06:13 PM

Sprinting in DC: Monday

Nicholas Skaggs

This week, my team and I are sprinting in Washington DC with many of the core app developers and other folks inside of Ubuntu Engineering. Sprints are always busy, but the work tends to be a mix of social and technical. I get to assign names (IRC nicknames mostly) to faces as well as get to know my co-workers and other community members better.

I thought it might be useful to give writeups each day of what's going on, at least from my perspective during the sprint. I won't yammer on too much about quality and instead bring you pictures of what you really want. And some of this too. Whoops, here's one.

Pictures of people taking pictures . . .
Monday was the first day of the sprint, and also the day of my arrival! Personally I'm busy at home during this week, so it's tough to get away. That said, I can't imagine being anywhere else for the week. The sprints are a wonderful source of respite for everyone.

Monday itself consisted of making sure everything was ready for the week, planning events, and icebreakers. In typical fashion, an opening plenary set the bar for the week with notes about the progress being made on the phone as well as the future of the desktop. Lots of meetings and a few blurry jet-lagged hours later, everyone was ready to sit for a bit and have some non-technical conversation!

Fortunately for us there was an event planned to meet both our social and hunger needs. After being split randomly into teams named after bugs (love the play on quality), we played a bit of trivia. After each round teams were scored not only on the correct response, but also on how quickly they responded. The questions varied from the obscure to fun bits about ubuntu. The final round centered around Canonical itself, which was a fun trip down memory lane.

As I crawled into bed I still had the wonderfully cheesy announcer playing trivia questions in my head.


on October 22, 2014 06:01 PM
Recently I've been fixing a rather difficult bug that deals with doing one simple task reliably: run a program and watch (i.e. intercept and process) stdout and stderr until the process terminates.

Doing this is surprisingly difficult, and I was certainly caught out by a few mistakes the first time I tried to do it. I recently posted a lengthy comment on the corresponding bug. It took me a few moments to carefully analyze and re-think the situation and how a reliable approach should work. Nonetheless I am only human and I have certainly made my share of mistakes.

Below is a description of my current approach. The implementation is still in progress but it seems to work (I still need to implement the termination phase for non-kill-able processes and switch to fully non-blocking I/O). So far I've used epoll(7) and signalfd(7). I'm still planning to use timerfd_create(2) for the timer, perhaps with CLOCK_REALTIME for hard wall-clock-time limit enforcement. I'll post the full, complete examples once I'm done with this, but you can see how it mostly looks today in the python-glibc git tree's demos/ directory.

I'd like to ask everyone that has experience with this part of systems engineering to poke holes in my reasoning and show how this might fail and misbehave. Thanks.

The current approach, which so far works well on all the pathological cases, is as follows.
The general idea is that we're in an I/O loop, using non-blocking I/O and a select-like mechanism to wait for:
 - timeout (optional, new feature)
 - data on the read side of the stdout pipe
 - the read side of the stdout pipe being closed
 - data on the read side of the stderr pipe
 - the read side of the stderr pipe being closed
 - SIGCHLD being delivered, signalling that the process is dead
In general we keep looping and terminate only when the set of awaited things (stdout depleted, stderr depleted, process terminated) is empty. This is not always true, so see below. The action we take on each event is obviously different:
If the timeout has elapsed we proceed to send SIGTERM, reset the timer for the shutdown period, follow with SIGQUIT and another timer reset, and after that send SIGKILL. This can fail, as the process may have elevated itself beyond our capabilities. This is still undecided, but perhaps, at this time, we should use an elevated process manager (see below). If we fail to terminate the process, special provisions apply (see below).
If we have data to read we just read and process it (send to log files, process, send to .record.gz). This is a point where we can optimize the process and improve reliability in the event of a sudden system crash. Using more modern facilities we can implement tee in kernel space, which lowers the processing burden on Python and, in general, makes it more likely that the log files will contain the actual output the process made just prior to its death.
We can also use pipes in O_DIRECT (aka packet) mode here to ensure that all write()s end up as individual records, which is the intended design of the I/O log record concept. This won't address the inherent buffering that is enabled in all programs that detect when they are redirected and no longer attached to a tty.
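
Packet mode is easy to observe directly from Python. This is a Linux-only illustration (kernel >= 3.4), not part of the fix itself, and the record contents are made up:

```python
import os

# O_DIRECT puts the pipe into "packet mode" (see pipe(2), Linux >= 3.4):
# every write() of up to PIPE_BUF bytes becomes one discrete packet, and
# every read() returns exactly one packet, however large the read buffer.
r, w = os.pipe2(os.O_DIRECT)
os.write(w, b"first record")
os.write(w, b"second record")
assert os.read(r, 65536) == b"first record"    # one packet per read
assert os.read(r, 65536) == b"second record"
os.close(r)
os.close(w)
```

Compare with an ordinary pipe, where both writes would usually coalesce and a single read could return b"first recordsecond record".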
Whenever one of the pipes is depleted (which may *never* happen, lesson learned) we just close our side.
When the child dies, and this is the most important part and the actual bugfix, we do the following sequence of events:
 - if we still have the stdout pipe open, read at most one PIPE_BUF. We cannot read more, as the pipe may live on forever and we would just hang, as we currently do. Reading one PIPE_BUF ensures that we catch the last moments of what the originally started process intended to tell us. Then we close the pipe. This will likely result in SIGPIPE in any processes that are still attached to it, though we have no guarantee that it will really kill them, as that signal can be blocked.
 - if we still have stderr pipe open we follow the same logic as for stdout above.
 - we restore some signal handling that was blocked during the execution of the loop and terminate.
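
The loop above can be sketched with nothing but the standard library. This is not the actual implementation (that lives in python-glibc, which wraps epoll(7) and signalfd(7) directly); selectors stands in for epoll, and signal.set_wakeup_fd plus a self-pipe stands in for signalfd. The function name watch, the 4096-byte read size, and the omission of the timeout/SIGTERM escalation are all mine:

```python
import os
import selectors
import signal
import subprocess

def watch(argv, timeout=None):
    # Self-pipe standing in for signalfd(7): the interpreter's C-level
    # handler writes a byte here on SIGCHLD, waking the selector.  A
    # Python-level handler must be installed, since SIG_DFL would
    # discard SIGCHLD before the wakeup fd is ever written.
    sig_r, sig_w = os.pipe2(os.O_NONBLOCK | os.O_CLOEXEC)
    old_wakeup = signal.set_wakeup_fd(sig_w)
    old_handler = signal.signal(signal.SIGCHLD, lambda signum, frame: None)
    proc = subprocess.Popen(argv, stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE)
    sel = selectors.DefaultSelector()
    sel.register(proc.stdout, selectors.EVENT_READ, "stdout")
    sel.register(proc.stderr, selectors.EVENT_READ, "stderr")
    sel.register(sig_r, selectors.EVENT_READ, "sigchld")
    buf = {"stdout": bytearray(), "stderr": bytearray()}
    pipes = {"stdout": proc.stdout, "stderr": proc.stderr}
    child_alive = True

    def deplete(name):
        # One last bounded read, then close: never keep reading a pipe
        # that grandchildren may hold open forever (the original bug).
        f = pipes.pop(name)
        buf[name] += f.read1(4096)
        sel.unregister(f)
        f.close()

    while child_alive or pipes:
        for key, _ in sel.select(timeout):
            if key.data == "sigchld":
                os.read(sig_r, 4096)                  # drain wakeup bytes
                if proc.poll() is not None:           # child is reaped
                    child_alive = False
                    for name in list(pipes):
                        deplete(name)
            elif key.data in pipes:
                chunk = pipes[key.data].read1(4096)   # one read(2) only
                if chunk:
                    buf[key.data] += chunk
                else:                                 # EOF: pipe depleted
                    f = pipes.pop(key.data)
                    sel.unregister(f)
                    f.close()
        if not pipes and child_alive:
            proc.wait()                               # pipes done; just reap
            child_alive = False
    sel.close()
    signal.set_wakeup_fd(old_wakeup)
    signal.signal(signal.SIGCHLD, old_handler)
    return proc.returncode, bytes(buf["stdout"]), bytes(buf["stderr"])
```

For example, watch(["sh", "-c", "echo hi; sleep 1000 &"]) returns promptly even though the background sleep inherits the pipes, which is exactly the hang the bounded final read avoids.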
There's one more trick up our sleeve, PR_SET_CHILD_SUBREAPER, but I'll describe that in a separate bug report that deals with runaway processes. Think dbus-launch or anything that double-forks and daemonizes.
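
For the curious, here is a minimal ctypes sketch of that prctl, since the standard library does not expose it (python-glibc wraps prctl directly). The constants come from <sys/prctl.h>; the helper names are mine:

```python
import ctypes
import os

# From <sys/prctl.h>, available since Linux 3.4.
PR_SET_CHILD_SUBREAPER = 36
PR_GET_CHILD_SUBREAPER = 37

libc = ctypes.CDLL(None, use_errno=True)

def set_child_subreaper():
    # Mark this process as a "sub-reaper": orphaned descendants (e.g.
    # double-forking daemons like dbus-launch) are re-parented to us
    # instead of to init, so we can still wait() on them.
    if libc.prctl(PR_SET_CHILD_SUBREAPER, 1, 0, 0, 0) != 0:
        err = ctypes.get_errno()
        raise OSError(err, os.strerror(err))

def get_child_subreaper():
    flag = ctypes.c_int(0)
    if libc.prctl(PR_GET_CHILD_SUBREAPER, ctypes.byref(flag), 0, 0, 0) != 0:
        err = ctypes.get_errno()
        raise OSError(err, os.strerror(err))
    return bool(flag.value)
```

The flag is per-process and not inherited across fork+exec, so a supervisor would set it once at startup before spawning anything.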

If you have any comments or ideas please post them here (wherever you are reading this), on the launchpad bug report page or via email. Thanks a lot!

on October 22, 2014 02:50 PM

Meeting information

#ubuntu-meeting: Regular LoCo Council Meeting for October 2014, 21 Oct at 20:00 — 21:33 UTC
Full logs at http://ubottu.com/meetingology/logs/ubuntu-meeting/2014/ubuntu-meeting.2014-10-21-20.00.log.html
Meeting summary

Opening Business

The discussion about “Opening Business” started at 20:00.

Listing of Sitting Members of LoCo Council (20:00)
For the avoidance of uncertainty and doubt, it is necessary to list the members of the council who are presently serving active terms.
Marcos Costales, term expiring 2015-04-16
Jose Antonio Rey, term expiring 2015-10-04
Pablo Rubianes, term expiring 2015-04-16
Sergio Meneses, term expiring 2015-10-04
Stephen Michael Kellat, term expiring 2015-10-04
There is currently one vacant seat on LoCo Council
Roll Call (20:00)
Vote: LoCo Council Roll Call (All Members Present To Vote In Favor To Register Attendance) (Carried)
Re-Verification: France

The discussion about “Re-Verification: France” started at 20:03.

Vote: That the re-verification application of France be approved and that the period of verification be extended for a period of two years from this date. (Carried)
Update on open cases before the LoCo Council

The discussion about “Update on open cases before the LoCo Council” started at 20:19.

LoCo Council presently has before it pending verification and re-verification proceedings for the following LoCo Teams: Mauritius, Finland, Netherlands, Peru, Russia, Serbia.
The loco-contacts thread “Our teams reject the new LoCo Council policy”

The discussion about “The loco-contacts thread ‘Our teams reject the new LoCo Council policy’” started at 20:20.

Requests from the Galician and Asturian teams

The discussion about “Requests from the Galician and Asturian teams” started at 20:59.

Vote: That the Galician Team, pursuant to their request this day, be considered an independent LoCo team notwithstanding representing less than a country. (Carried)
Vote: That the Asturian Team, pursuant to their request this day, be considered an independent LoCo Team notwithstanding representing less than a country. (Carried)
Marcos Costales, in his capacity as leader of Ubuntu Spain and as a member of LoCo Council, stood aside from both votes.
Any Other Business

The discussion about “Any Other Business” started at 21:13.

Those who have requests of the LoCo Council are advised to write to it at loco-council@lists.ubuntu.com for assistance.
Vote results

LoCo Council Roll Call (All Members Present To Vote In Favor To Register Attendance)

Motion carried (For/Against/Abstained 4/0/0)
Voters PabloRubianes, skellat, costales, SergioMeneses
That the re-verification application of France be approved and that the period of verification be extended for a period of two years from this date.

Motion carried (For/Against/Abstained 4/0/0)
Voters PabloRubianes, skellat, costales, SergioMeneses
That the Galician Team, pursuant to their request this day, be considered an independent LoCo team notwithstanding representing less than a country.

Motion carried (For/Against/Abstained 2/0/1)
Voters PabloRubianes, skellat, SergioMeneses
That the Asturian Team, pursuant to their request this day, be considered an independent LoCo Team notwithstanding representing less than a country.

Motion carried (For/Against/Abstained 2/0/1)
Voters PabloRubianes, skellat, SergioMeneses

on October 22, 2014 12:45 PM

Debian hangs during boot

Mattia Migliorini

This morning I came to work an hour earlier than usual. I started my work PC and waited for it to boot into Debian Jessie. And waited… waited… waited…

This sounds strange, doesn’t it? It generally boots rather quickly. In fact Debian hangs during boot with this message:

A start job is running for Create Volatile Files and Directories

Followed by a timer and no limit. You can leave it there, but it does not finish and just hangs. So, let’s try to understand the problem.

 

The problem

The problem here is quite obvious: in the previous session you updated systemd to version 215-5+b1. If you have a look at your system’s /tmp directory (you can’t do it now, but we’ll do it later for the sake of knowledge), you’ll find out that it’s bloated. Here’s the bug report.

Edit

As OdyX points out in the comments, the real problem involves only the /tmp directory and is caused by a bug in system-config-printer; systemd merely exposes the problem.

 

The solution

Thankfully, the solution is pretty straightforward. Reboot your computer with Ctrl+Alt+Del and wait for Grub to load, then press e to edit Debian’s entry. At the end of the line containing /boot/vmlinuz... add the following:

rw init=/bin/bash

Then press F10 to boot. Debian will drop you into a shell with root permissions, so you can do whatever you want (but be careful, because you can cause big issues too!).

Now it’s time to check your /tmp directory:

ls -l /tmp

You may have to wait a few minutes until it finishes, and the output may scare you. It’s bloated, as I told you before. What can you do now? Just remove and recreate it.

rm -rf /tmp
mkdir /tmp
chmod 1777 /tmp

Now restart your PC and check it out: Debian will boot correctly!

 

Conclusion

Is systemd ready to go towards a Debian stable release? I don’t think so. The team has to work hard to accomplish this step. So, good luck guys, and please test it a little more next time!

See edit above.

 

Source: Debian User Forums

The post Debian hangs during boot appeared first on deshack.

on October 22, 2014 08:05 AM
Interview today for Linux Unplugged 63 which was fun! However we never discussed Kubuntu, which I understood was the subject. I had gotten together facts and links in case they were needed, so I thought I would post them in case anybody needs the information.

Created and supported by community: http://www.kubuntu.org/support
Professional support for users: http://kubuntu.emerge-open.com/buy
Support by Blue Systems to some developers & projects:
Infrastructure support by Ubuntu, KDE, Blue Systems and Debian
Governance: Kubuntu Council https://launchpad.net/~kubuntu-council

How to contact us: kubuntu.org, freenode irc: #kubuntu (-devel), kubuntu-user list, kubuntu-devel list, kubuntuforum
  - Documentation on KDE userbase: http://userbase.kde.org/Kubuntu
  - Kubuntu in the news: http://wire.kubuntu.org/

* our "upstream" KDE is also making big changes, starting by splitting kdelibs into the Frameworks, and basing them on Qt5
  - that work is largely done, although of course each library is being improved as time goes along. Releases monthly.
  - We're writing a KDE Frameworks book; more about that at books.kde.org
  - Developers: apidox at api.kde.org

* KDE has now released Plasma 5, based on those new frameworks
  - that is nearly done, and 5.1 was released 15 Oct.
  - lots of excitement around that, because it looks and feels elegant, smooth and modern
  - Riddell: 14.12 release of KDE Applications will be in December with a mix of Qt 4 and Qt 5 apps, they should both work equally well on your Plasma 4 or 5 desktop and look the same with the classic Oxygen or lovely new Breeze themes

*  so our upstream is up to lots of new wonderful stuff, including using CI too (CI: continuous integration with automated testing)

* meanwhile, bugfixes continue on KDE4:

* Our base for 14.10 (codename Utopic Unicorn) is that stable KDE platform.
* At the same time, we are releasing weekly ISOs of Plasma 5, to make
it easy for people to test
 - Riddell: We're releasing a tech preview of Kubuntu Plasma 5 as part of 14.10 for people to test. I'm using it daily and it's working great but expect testers to be competent enough to check for and report beasties

* we're following along to KDE's CI effort, and doing that with our packages
  - see #kubuntu-ci IRC channel for the reports as they are generated
 - Riddell: gory details at http://kci.pangea.pub/
 - packages built constantly to check for any updates that need changes

* Our new packaging is now in Debian git, so we can share packaging work
  - as time goes on, all our packaging files will be there
  - tooling such as packaging scripts are being updated
  - Debian and Kubuntu packagers will both save time which they can use to improve quality

* moving from LightDM to SDDM (Simple Desktop Display Manager), KDE/Qt default
graphical login program

* moving to systemd replacing upstart along with Debian and Ubuntu at some point in the future

* moving to Wayland when it is ready along with KDE (Kwin); now on xorg windowing system. We do not plan to use Ubuntu's Mir

* Testing until release (please!) on the 23rd:

* Testing Plasma 5:
(fresh install)

* Another way we stay close to KDE is that since Ubuntu stopped inviting community members to participate in face-to-face meetings, we have a Kubuntu Day with Akademy, KDE's annual meeting. Thanks to the Ubuntu Contributors who paid the travel costs for some of us to attend


--
Thanks to Jonathan Riddell for his clarifications and corrections
on October 22, 2014 05:56 AM

October 21, 2014

I’m considering a proposal to have 16.04 LTS be the last release of Ubuntu with 32 bit images to run on 32 bit only machines (on x86 aka Intel/AMD only – this has no bearing on ARM). You would still be able to run 32 bit applications on 64 bit Ubuntu.

Please answer my survey on how this would affect you or your organization.

Please only answer if you are running 32-bit (x86) Ubuntu! Thanks!

If you can’t see the form below click here.


on October 21, 2014 06:32 PM
This is the config that I have but I need help (below):
[Linux: HuiJia USB GamePad]
plugged = True
plugin = 2
mouse = False
AnalogDeadzone = 100,100
AnalogPeak = 20000,20000
DPad R = button(13)
DPad L = button(15)
DPad D = button(14)
DPad U = button(15)
Start = button(9)
Z Trig = button(7)
B Button = button(2)
A Button = button(1)
C Button R = axis(3+)
C Button L =
C Button D = axis(4-)
C Button U =
R Trig = button(3)
L Trig = button(0)
Mempak switch = key(109)
Rumblepak switch = key(114)
X Axis = axis(0-,0+)
Y Axis = axis(1-,1+)

I almost have every button/axis working, but not the C-pad as an axis. Based on what the joystick test program is giving me, the D-pad is also an axis, but I can’t get that axis to work. Can someone help me with that?

Hardware: http://www.amazon.com/gp/product/B0089NVTDM/ref=oh_aui_detailpage_o00_s00?ie=UTF8&psc=1
OS: Ubuntu 14.04 32-bit


on October 21, 2014 04:07 PM

The creator of systemd, Lennart Poettering, had some very harsh words to say about the Linux community and about one of its role models, Linus Torvalds.

It might seem that the Linux community in its entirety is all about rainbows and bunnies, but the truth is that it’s made up of regular people, like most other communities, and Linux is no exception. The problem is that Lennart Poettering pegs Linus as one of the people responsible.

There has been some small friction between the two projects, Linux and systemd, but nothing that would indicate that something was amiss. In fact, when asked what he thought about systemd, just a couple of weeks ago, Linus Torvalds was actually very tactful about it.

Source:

http://news.softpedia.com/news/Systemd-Creator-Say-Linux-Community-Is-Rotten-Points-at-Linus-Torvalds-as-the-Source-461219.shtml

Submitted by: Silviu Stahie

on October 21, 2014 06:00 AM

Welcome to the Ubuntu Weekly Newsletter. This is issue #388 for the week October 13 – 19, 2014, and the full version is available here.

In this issue we cover:

This issue of the Ubuntu Weekly Newsletter is brought to you by:

  • Paul White
  • Elizabeth K. Joseph
  • John Mahoney
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License.

on October 21, 2014 12:12 AM

October 20, 2014

10 years ago today, Mark Shuttleworth made the 4th post ever to the ubuntu-announce mailing list when he wrote: Announcing Ubuntu 4.10 “The Warty Warthog Release”

In this announcement, Mark wrote:

Ubuntu is a new Linux distribution that brings together the extraordinary breadth of Debian with a fast and easy install, regular releases (every six months), a tight selection of excellent packages installed by default and a commitment to security updates with 18 months of security and technical support for every release.

So it’s with much excitement, the Ubuntu News team wishes Ubuntu a happy 10th Birthday!

Ubuntu cake

Over the years, we’ve had several cakes celebrating releases. Here is a sampling we found on Flickr, first from the 8.04 release party in London:

ubuntu cake

And an amazing trio from Kitchener-Waterloo, Ontario, Canada for 9.10, 10.10 and 11.04:

Ubuntu 9.10: Karmic Koala Release Party

And dozens of strictly Ubuntu logo cakes over the years (this one from 2006):

Ubuntu cake!!

With the release of 14.10 just days away, enjoy your release parties and perhaps take some time to reflect upon how far we’ve come in these 10 years!

Posted by Elizabeth K. Joseph, on behalf of the Ubuntu News Team

on October 20, 2014 07:34 PM

Today is Ubuntu’s ten year anniversary. Scott did a wonderful job summarizing many of those early years and his own experience, and while I won’t be as articulate as him, I wanted to share a few thoughts on my experience too.

I heard of this super secret Debian startup from Scott James Remnant. When I worked at OpenAdvantage we would often grab lunch in Birmingham, and he filled me in on what he was working on, but leaving a bunch of the blanks out due to confidentiality.

I was excited about this new mystery distribution. For many years I had been advocating at conferences about a consumer-facing desktop, and felt that Debian and GNOME, complete with the exciting Project Utopia work from Robert Love and David Zeuthen made sense. This was precisely what this new distro would be shipping.

When Warty was released I installed it and immediately became an Ubuntu user. Sure, it was simple, but the level of integration was a great step forward. More importantly though, what really struck me was how community-focused Ubuntu was. There was open governance, a Code Of Conduct, fully transparent mailing lists and IRC channels, and they had the Oceans 11 of rock-star developers involved from Debian, GNOME, and elsewhere.

I knew I wanted to be part of this.

While at GUADEC in Stuttgart I met Mark Shuttleworth and had a short meeting with him. He seemed a pretty cool guy, and I invited him to speak at our very first LugRadio Live in Wolverhampton.

Mark at LugRadio Live.

I am not sure how many multi-millionaires would consider speaking to 250 sweaty geeks in a football stadium sports bar in Wolverhampton, but Mark did it, not once, but twice. In fact, one time he took a helicopter to Wolverhampton and landed at the dog racing stadium. We had to have a debate in the LugRadio team about who had the nicest car to pick him up in. It was not me.

This second LugRadio Live appearance was memorable because two weeks earlier I had emailed Mark to see if he had a spot for me at Canonical. OpenAdvantage was a three-year funded project that was wrapping up, and I was looking at other options.

Mark’s response was:

“Well, we are opening up an Ubuntu Community Manager position, but I am not sure it is for you.”

I asked him if he could send over the job description. When I read it I knew I wanted to do it.

Fast forward four interviews, the last of which being in his kitchen (which didn’t feel awkward, at all), and I got the job.

The day I got that job was one of the greatest days of my life. I felt like I had won the lottery; working on a project with mission, meaning, and something that could grow my career and skill-set.

Canonical team in 2007

The day I got the job was not without worry though.

I was going to be working with people like Colin Watson, Scott James Remnant, Martin Pitt, Matt Zimmerman, Robert Collins, and Ben Collins. How on earth was I going to measure up?

A few months later I flew out to my first Ubuntu Developer Summit in Mountain View, California. Knowing little about California in November, I packed nothing but shorts and t-shirts. Idiot.

I will always remember the day I arrived, going to a bar with Scott and some others, meeting the team, and knowing absolutely nothing about what they were saying. It sounded like gibberish, and I felt like I was a fairly technical guy at this point. Obviously not.

What struck me though was how kind, patient, and friendly everyone was. The delta in technical knowledge was narrowed with kindness and mentoring. I met some of my heroes, and they were just normal people wanting to make an awesome Linux distro, and wanting to help others get in on the ride too.

What followed was an incredible seven and a half years. I travelled to Ubuntu Developer Summits, sprints, and conferences in more than 30 countries, helped create a global community enthused by a passion for openness and collaboration, experimented with different methods of getting people to work together, and met some of the smartest and kindest people walking on this planet.

The awesome Ubuntu community

Ubuntu helped to define my career, but more importantly, it helped to define my perspective and outlook on life. My experience in Ubuntu helped me learn how to think, to manage, and to process and execute ideas. It helped me to be a better version of me, and to fill my world with good people doing great things, all of which inspired my own efforts.

This is the reason why Ubuntu has always been much more than just software to me. It is a philosophy, an ethos, and most importantly, a family. While some of us have moved on from Canonical, and some others have moved on from Ubuntu, one thing we will always share is this remarkable experience and a special connection that makes us Ubuntu people.

on October 20, 2014 05:52 PM

TL;DR: I apparently typed mkfs.vfat /dev/sda1 at some point. Oops.

So I rarely reboot my machines, and last night, when I rebooted my laptop (for graphics card weirdness) Grub just came up with:

Error: unknown filesystem.
grub rescue>

WTF, I wonder how I borked my grub config? Let's see what happens when we ls my /boot partition.

grub rescue>ls (hd0,msdos1)
unknown filesystem

Hrrm, that's no good. An ls on my other partition isn't going to be very useful, it's a LUKS-encrypted LVM PV. Alright, time for a live system. I grab a Kali live USB (not because Kali is necessarily the best option here, it's just what I happen to have handy), put it in the system and boot from that. file tells me it's an x86 boot sector, which is not at all what I'm expecting from an ext4 boot partition. It slowly dawns on me that at some point, intending to format a flash drive or SD card, I must've run mkfs.vfat /dev/sda1 instead of mkfs.vfat /dev/sdb1. That one letter makes all the difference. Of course, it turns out it's not even a valid FAT filesystem... since the device was mounted, the OS had kept writing to it like an ext4 filesystem, so it was basically a mangled mess. fsck wasn't able to restore it, even pointing to backup superblocks: it seems as though, among other things, the root inode was destroyed.

So, at this point, I basically have a completely useless /boot partition. I have approximately two options: reinstall and reconfigure the entire OS, or try to fix it manually. Since it didn't seem I had much to lose and it would probably be faster to fix manually (if I could), I decided to give door #2 a try.

First step: recreate a valid filesystem. mkfs.ext4 -L boot /dev/sda1 takes care of that, but you better believe I checked the device name about a dozen times. Now I need to get all the partitions and filesystems mounted for a chroot and then get into it:

% mkdir /target
% cryptsetup luksOpen /dev/sda5 sda5_crypt
% vgchange -a y
% mount /dev/mapper/ubuntu-root /target
% mount /dev/sda1 /target/boot
% mount -o bind /proc /target/proc
% mount -o bind /sys /target/sys
% mount -o bind /dev /target/dev
% chroot /target /bin/bash

Now I'm in my system and it's time to replace my missing files, but how to figure out what goes there? I know there are at least files for grub, kernels, initrds. I wonder if dpkg-query can be useful here?

# dpkg-query -S /boot
linux-image-3.13.0-36-generic, linux-image-3.13.0-37-generic, memtest86+, base-files: /boot

Well, there's a handful of packages. Let's reinstall them:

# apt-get install --reinstall linux-image-3.13.0-36-generic linux-image-3.13.0-37-generic memtest86+ base-files

That's gotten our kernels and initrds replaced, but no GRUB files. Those can be restored with grub-install /dev/sda. Just to be on the safe side, let's also make sure our GRUB config and initrd images are up to date:

# grub-install /dev/sda
# update-grub2
# update-initramfs -k all -u

At this point, I've run out of things to double check, so I decide it's time to find out if this was actually good for anything. Exit the chroot and unmount all the filesystems, then reboot from the hard drive.
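For completeness, the teardown is just the setup in reverse (a sketch, assuming the mount points used above):

```shell
exit                              # leave the chroot
umount /target/dev /target/sys /target/proc
umount /target/boot
umount /target
vgchange -a n                     # deactivate the LVM volume group
cryptsetup luksClose sda5_crypt   # close the LUKS container
reboot
```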

...

It worked! Fortunately for me, /boot is such a predictable skeleton that it's relatively easy to rebuild when destroyed. Here's hoping you never find yourself in this situation, but if you do, maybe this will help you get back to normal without a full reinstall.

on October 20, 2014 02:19 PM

V is for Vivid

Mark Shuttleworth

Release week! Already! I wouldn’t call Trusty ‘vintage’ just yet, but Utopic is poised to leap into the torrent stream. We’ve all managed to land our final touches to *buntu and are excited to bring the next wave of newness to users around the world. Glad to see the unicorn theme went down well, judging from the various desktops I see on G+.

And so it’s time to open the vatic floodgates and invite your thoughts and contributions to our soon-to-be-opened iteration next. Our ventrous quest to put GNU as you love it on phones is bearing fruit, with final touches to the first image in a new era of convergence in computing. From tiny devices to personal computers of all shapes and sizes to the ventose vistas of cloud computing, our goal is to make a platform that is useful, versal and widely used.

Who would have thought – a phone! Each year in Ubuntu brings something new. It is a privilege to celebrate our tenth anniversary milestone with such vernal efforts. New ecosystems are born all the time, and it’s vital that we refresh and renew our thinking and our product in vibrant ways. That we have the chance to do so is testament to the role Linux at large is playing in modern computing, and the breadth of vision in our virtual team.

To our fledgling phone developer community, for all your votive contributions and vocal participation, thank you! Let’s not be vaunty: we have a lot to do yet, but my oh my what we’ve made together feels fantastic. You are the vigorous vanguard, the verecund visionaries and our venerable mates in this adventure. Thank you again.

This verbose tract is a venial vanity, a chance to vector verbal vibes, a map of verdant hills to be climbed in months ahead. Amongst those peaks I expect we’ll find new ways to bring secure, free and fabulous opportunities for both developers and users. This is a time when every electronic thing can be an Internet thing, and that’s a chance for us to bring our platform, with its security and its long term support, to a vast and important field. In a world where almost any device can be smart, and also subverted, our shared efforts to make trusted and trustworthy systems might find fertile ground. So our goal this next cycle is to show the way past a simple Internet of things, to a world of Internet things-you-can-trust.

In my favourite places, the smartest thing around is a particular kind of monkey. Vexatious at times, volant and vogie at others, a vervet gets in anywhere and delights in teasing cats and dogs alike. As the upstart monkey in this business I can think of no better mascot. And so let’s launch our vicenary cycle, our verist varlet, the Vivid Vervet!

on October 20, 2014 01:22 PM

Pinit, Pinterest for WordPress, is a handy plugin that lets you add Pinterest badges to your website quickly and with no effort.

Today I released the first complete version of this plugin, which has been around since October 30, 2013. Although it had only a few widgets and was not very powerful, it has been appreciated by more than 800 people in its first year. But now it's time for a change! With this new 1.0 release you can leverage the simplicity, lightness and power of Pinit.

 

Download Pinit

Features

Pinit 1.0, or Pinterest for WordPress, includes a single widget that lets you add three different Pinterest badges to your website’s sidebar:

  • Pin Widget
  • Profile Widget
  • Board Widget

Interested in adding badges to your posts and pages too? New in this version are three shortcodes:

  • Pin Shortcode [pit-pin]
  • Profile Shortcode [pit-profile]
  • Board Shortcode [pit-board]

 

Pinit Shortcodes Usage

Here is a little reference for the shortcodes.

 

Pin Shortcode

The Pin Shortcode [pit-pin] lets you add the badge of a single pin to your posts and pages and accepts only one argument:

  • url: the URL to the pin (e.g. http://www.pinterest.com/pin/99360735500167749/)

Example:

[pit-pin url="http://www.pinterest.com/pin/99360735500167749/"]

 

Profile Shortcode

With the Profile Shortcode [pit-profile] you can add a Pinterest profile’s badge to your WordPress site. It accepts up to four arguments:

  • url: the URL to the profile (e.g. http://www.pinterest.com/pinterest/)
  • imgWidth: width of the badge’s images. Must be an integer. Defaults to 92.
  • boxHeight: height of the badge. Must be an integer. Defaults to 175.
  • boxWidth: width of the badge. Defaults to auto.

Example:

[pit-profile url="http://www.pinterest.com/pinterest/" imgWidth="100" boxHeight="300" boxWidth="200"]

 

Board Shortcode

The Board Shortcode [pit-board] lets you add a Board badge to your pages and posts. It accepts the same arguments as the Profile Shortcode:

  • url: the URL to the profile (e.g. http://www.pinterest.com/pinterest/pin-pets/)
  • imgWidth: width of the badge’s images. Must be an integer. Defaults to 92.
  • boxHeight: height of the badge. Must be an integer. Defaults to 175.
  • boxWidth: width of the badge. Defaults to auto.

Example:

[pit-board url="http://www.pinterest.com/pinterest/pin-pets/" imgWidth="100" boxHeight="300" boxWidth="200"]

 

Languages

Pinterest for WordPress is currently available in 3 different languages:

You can submit new translations with a pull request to the GitHub repository or by email to deshack AT ubuntu DOT com.

 

Conclusion

Feel free to submit issues to the GitHub repository or the official support forum. If you like this plugin, you can contribute back to it simply by leaving a review.

The post Pinit 1.0: Pinterest for WordPress rewritten appeared first on deshack.

on October 20, 2014 12:23 PM

Kubuntu 14.10 is due out this week bringing a choice of rock solid Plasma 4 or the tech preview of Kubuntu Plasma 5.  The team has a couple of interviews lined up to talk about this.

At 21:00UTC tomorrow (Tuesday) Valorie will be talking with Jupiter Broadcasting’s Linux Unplugged about what’s new and what’s cool.
Watch it live 21:00UTC Tuesday or watch it recorded.

Then on Thursday, fresh from 14.10 being released into the wild, Scarlett and I will be on the AtRandom video podcast starting at 20:30 UTC. Watch it live 20:30 UTC Thursday or watch it recorded.

And feel free to send in questions to either if there is anything you want to know.

 

on October 20, 2014 11:36 AM

In 2006, Amazon was an E-commerce site building out its own IT infrastructure in order to sell more books. Now, AWS and EC2 are well-known acronyms to system administrators and developers across the globe looking to the public cloud to build and deploy web-scale applications. But how exactly did a book seller become a large cloud vendor?

Amazon’s web services business was devised in order to cut data center costs – a feat accomplished largely through the use of Linux and open source software, said Chris Schlaeger, director of kernel and operating systems at Amazon Web Services in his keynote talk at LinuxCon and CloudOpen Europe today in Dusseldorf.

Founder Jeff Bezos “quickly realized that in order to be successful in the online business, he needed a sophisticated IT infrastructure,” Schlaeger said. But that required expensive proprietary infrastructure with enough capacity to handle peak holiday demand. Meanwhile, most of the time the machines were idle. By building their infrastructure with open source software and charging other sellers to use their unused infrastructure, Amazon could cover the up front cost of data center development.

Source:

http://www.linux.com/news/featured-blogs/200-libby-clark/791472-amazon-web-services-aims-for-more-open-source-involvement

Submitted by: Libby Clark

on October 20, 2014 07:58 AM
Season of KDE (#SoK2014) was delayed a bit, but we're in business now:

http://heenamahour.blogspot.in/2014/10/season-of-kde-2014.html

Please stop by the ideas page if you need an idea. Otherwise, contact a KDE devel you've worked with before, and propose a project idea.

Once you have something, please head over to the Season of KDE website: https://season.kde.org and jump in. You can begin work as soon as you have a mentor sign off on your plan.

Student application deadline: Oct 31 2014, 12:00 am UTC - so spread the word! #SoK2014

Go go go!
on October 20, 2014 06:28 AM

October 19, 2014

I spent a few minutes this morning writing the comprehensive Ubuntu Contributors' Guide.

Here it is in all its glory:

Yes, that's really all there is to it. It's simple.

As obvious as this seems, there are people (names withheld) that will want you to believe otherwise. I'll elaborate in a future post.

When you encounter them, please forward a copy of this flow chart. Tell them Randall sent you.

on October 19, 2014 04:38 PM
Pictures from the CDSW sessions in Spring 2014

I am helping coordinate three and a half day-long workshops in November for anyone interested in learning how to use programming and data science tools to ask and answer questions about online communities like Wikipedia, free and open source software, Twitter, civic media, etc. This will be a new and improved version of the workshops run successfully earlier this year.

The workshops are for people with no previous programming experience and will be free of charge and open to anyone.

Our goal is that, after the three workshops, participants will be able to use data to produce numbers, hypothesis tests, tables, and graphical visualizations to answer questions like:

  • Are new contributors to an article in Wikipedia sticking around longer or contributing more than people who joined last year?
  • Who are the most active or influential users of a particular Twitter hashtag?
  • Are people who participated in a Wikipedia outreach event staying involved? How do they compare to people that joined the project outside of the event?

If you are interested in participating, fill out our registration form here before October 30th. We were heavily oversubscribed last time so registering may help.

If you already know how to program in Python, it would be really awesome if you would volunteer as a mentor! Being a mentor will involve working with participants and talking them through the challenges they encounter in programming. No special preparation is required. If you’re interested, send me an email.

on October 19, 2014 01:19 AM

October 18, 2014

As a programmer, every once in a while something as special as yesterday happens to me...

A Folder Color user sent me an email asking for the icons to follow the theme, in particular the Numix icon set.

Something that a priori I thought was not technically feasible (or at least not without manually remapping a great many icons) was solved thanks to the community. The user pointed me to his upstream question, and there the invaluable help of Joshua Fogg from Numix let me learn how themes work in Ubuntu. After a few hours of development and testing, voilà! A new version, more functional and beautiful than ever :D Thanks, folks!

And so it goes in this little Linux world: project × project = project³
Yes, cubed ;) that's not a mistake.
on October 18, 2014 02:12 PM


I went to Akademy with two notebooks and a plan. They should both be filled by KDE contributors with writing and sketching about one thing they think would make KDE better. Have a look at the result:

The complete set is in this Flickr album. Check it out! What’s your favorite? What’s your one thing – big or small – that would make KDE better?

(Thanks to Fabrice for the idea.)

on October 18, 2014 12:14 PM

Trans Gender Moves

Rhonda D'Vine

Yesterday I managed to get the last ticket from the waiting list for the premiere of Trans Gender Moves. It is a play about the lives of three people: a transman, a transwoman and an intersex person. They tell stories from their lives and their process of finding their own identity over time. With anecdotes that are partly amusing and partly thought-provoking, I can only wholeheartedly encourage you to watch it if you have the chance. It will still be shown over the next few days, potentially longer depending on ticket demand, from what I've been told by one of the actors.

The funniest moment for me, though, was talking with one of the actors and mentioning how much it touched me to learn that one of them will be moving into the same building I will be moving into in two years' time. Unfortunately that will be delayed a bit, because they found what I think are field hamsters or the like in the ground, and have to wait until spring for them to be relocated. :/

/personal | permanent link | Comments: 5 | Flattr this

on October 18, 2014 10:14 AM
Folder Color has a new improvement: It's themable now! :)

If your custom theme has the "folder-color" icons (read how to create those icons), you'll see them! For example, this is a screenshot with the awesome Numix icons (still a work in progress):


Numix icon set

You can watch it in action in this video.


How to install: Here.

I want to thank Joshua Fogg from the Numix Project for his help & knowledge!! Really, thank you ;)

Enjoy it! :)
on October 18, 2014 06:20 AM

October 17, 2014

I’m on my way home from Düsseldorf where I attended the LinuxCon Europe and Linux Plumbers conferences. I was quite surprised how huge LinuxCon was; there were about 1,500 people there! Certainly many more than last year in New Orleans.

Containers (in both LXC and Docker flavors) are the Big Thing everybody talks about and works with these days; there was hardly a presentation where they weren’t mentioned, and (what felt like) half of the presentations were either about how to improve them, or how to use these technologies to solve problems. For example, some people/companies really take LXC to the max and try to do everything in containers, including tasks which in the past you had only considered full VMs for, like untrusted third-party tenants. There was an interesting talk on how to secure networking for containers, and pretty much everyone uses Docker or LXC now to deploy workloads and run CI tests. There are projects like “fleet” which manage systemd jobs across an entire cluster of containers (a distributed task scheduler), or like project-builder.org which auto-builds packages from each commit of projects.

Another common topic is the trend towards building/shipping complete (r/o) system images, atomic updates and all that goodness. The central thing here was certainly “Stateless systems, factory reset, and golden images” which analyzed the common requirements and proposed how to implement this with various package systems and scenarios. In my opinion this is certainly the way to go, as our current solution on Ubuntu Touch (i. e. Ubuntu’s system-image) is far too limited and static yet, it doesn’t extend to desktops/servers/cloud workloads at all. It’s also a lot of work to implement this properly, so it’s certainly understandable that we took that shortcut for prototyping and the relatively limited Touch phone environment.

At Plumbers my main occupations were the highly interesting LXC track, to see what’s coming in the container world, and the systemd hackfest. At the latter I was again mostly listening (after all, I’m still learning most of the internals there...) but was able to work on some cleanups and improvements, like getting rid of some of Debian’s patches and properly running the test suite. It was also great to sync up again with David Zeuthen about the future of udisks and some proposed new features. It looks like I’m the de-facto maintainer now, so I’ll need to spend some time soon reviewing, including and cleaning up some much-requested little features and fixes.

All in all a great week: meeting some fellows of the FOSS world again, getting to know a lot of new interesting people and projects, and re-learning to drink beer in the evening (I hardly drink any at home :-P).

If you are interested you can also see my raw notes, but beware that they are mostly just scribbles.

Now, off to next week’s Canonical meeting in Washington, DC!

on October 17, 2014 04:54 PM

I am proud to announce that Plasma 5 weekly ISOs have returned today.

http://files.kde.org/snapshots/unstable-i386-latest.iso.mirrorlist

Grab today’s ISO while it is hot. And don’t forget to report the bugs you might notice.

Plasma 5 weekly ISOs bring you the latest and greatest Plasma right from the tip of development.

As some of you might have noticed, the previous Plasma 5 weekly ISOs stopped updating a while ago. This was because we at Blue Systems were migrating to a new system for distribution-level integration. More on this to follow soon. Until then you’ll have to believe me that it is 300% more awesome :)

on October 17, 2014 01:38 PM

TL;DR: static version of http://debaday.debian.net/, as it was when it was shut down in 2009, available!

A long time ago, between 2006 and 2009, there was a blog called Debian Package of the Day. About once per week, it featured an article about one of the gems available in the Debian archive: one of those many great packages that you had never heard about.

At some point in November 2009, after 181 articles, the blog was hacked and never brought up again. Last week I retrieved the old database, generated a static version, and put it online with the help of DSA. It is now available again at http://debaday.debian.net/. Some of the articles are clearly outdated, but many of them are about packages that are still available in Debian, and still very relevant today.

on October 17, 2014 01:05 PM

New Irssi

Rhonda D'Vine

After a long time, a new irssi upstream release hit the archive. While the most notable change in 0.8.16 was DNSSEC DANE support, which is enabled on Linux (src:dnsval has issues compiling on kFreeBSD), the most visible change in 0.8.17 is the addition of support for both 256 colors and truecolor. While the former can be used directly, for the latter you have to explicitly switch the setting colors_ansi_24bit to on; a terminal that supports it is needed, though. To test the 256 color support, your terminal has to support it, your TERM environment variable has to be set properly, and you can then try the newly added /cubes alias. If you have an existing configuration, look at the Testing new Irssi wiki page, which helps you get that alias set up and gives other useful tips, too.

The package currently only lives in unstable, but once it flows over to testing I will update it in wheezy-backports, too.

Enjoy!

/debian | permanent link | Comments: 0 | Flattr this

on October 17, 2014 12:39 PM

I got a new tool: a Dell XPS 13 Developer Edition, shipped with Ubuntu 12.04. Here are some experiences using it, and also a note to my future self about what needed to be done to make everything work.

After creating a restore disc from the pre-installed Ubuntu using the tool Dell provided, I proceeded to clean-install Kubuntu 14.04. I have to say that for its size and price this piece of hardware is rather amazing; my only nitpick is the RAM being capped at 8 GiB. Having a modern Linux distribution running smoothly in all circumstances is simply a nice experience. I haven't yet hit the limits of the integrated Intel GPU either, which is surprising, or maybe that just says something about how I use these things. (:

The touch screen is maybe the most interesting bit of this laptop. Unfortunately its usefulness is limited by UIs not working well with touch interaction in many cases. Maybe by choosing apps differently I would get a better experience. At least some websites work just fine in the Chromium browser.

Note on hardware support

Everything else works like a charm out of the box in Kubuntu 14.04, except cooling. After some searching I found out that some Dell laptops need separate tools for managing the cooling. I figured out the following:

I needed to install i8kutils, which can be found in Ubuntu repositories.
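In other words (i8kctl ships with i8kutils and is a quick way to verify that the i8k kernel module is actually reporting temperatures and fan states):

```shell
sudo apt-get install i8kutils
# Print the current CPU temperature and fan status via the i8k module
i8kctl
```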

Then I put the following contents into /etc/i8kmon.conf:

# Run as daemon, override with --daemon option
set config(daemon)      0

# Automatic fan control, override with --auto option
set config(auto)        1

# Report status on stdout, override with --verbose option
set config(verbose) 1

# Status check timeout (seconds), override with --timeout option
set config(timeout) 12

# Temperature thresholds: {fan_speeds low_ac high_ac low_batt high_batt}
set config(0)   {{-1 0}  -1  48  -1  48}
set config(1)   {{-1 1}  45  60  45  60}
set config(2)   {{-1 2}  50  128  50  128}

# end of file

Note that some options are overridden in the init script; for example, it sets i8kmon to daemon mode. The 12-second timeout is there because I noticed that every time the fan speed is set, it begins to drop again after ~10 seconds, so that within half a minute you clearly notice the accumulated change in fan speed. My 12 seconds is just a compromise that works well for me, YMMV etc.

Also, to have i8kmon control the cooling without human interaction, I needed to enable it in /etc/default/i8kmon:

ENABLED=1

That’s it for now, I might end up updating the post if something new comes up regarding hardware support.

on October 17, 2014 08:14 AM

Sometimes we need text so that we can document history, such as the death of our beloved smart phones. But our phones are not smart; smart things do not fill themselves with nonsense. For some reason, the number of chatting, texting, mailing and talking channels is constantly increasing, which is also increasing the amount of “garbage information” entering our brains. Sometimes there is so much that I have to cut myself off from the channels. Maybe my phone shouldn’t have a text function at all! It needs to be saved.

In a future post, I will discuss how we might mitigate this by adjusting our habits, but considering that all of these messages contain text, my smart phone should be able to consolidate, cross-reference, reply in-line, or find a way to reduce the number of channels and the number of taps required to explain something.

A smart phone does not walk itself into traffic because it needs to reply to so many messages. Poor phones.

sop

on October 17, 2014 03:57 AM

consistent control over more AWS services with aws-cli, a single, powerful command line tool from Amazon

Readers of this tech blog know that I am a fan of the power of the command line. I enjoy presenting functional command line examples that can be copied and pasted to experience services and features.

The Old World

Users of the various AWS legacy command line tools know that, though they get the job done, they are often inconsistent in where you get them, how you install them, how you pass options, how you provide credentials, and more. Plus, there are only tool sets for a limited number of AWS services.

I wrote an article that demonstrated the simplest approach I use to install and configure the legacy AWS command line tools, and it ended up being extraordinarily long.

I’ve been using the term “legacy” when referring to the various old AWS command line tools, which must mean that there is something to replace them, right?

The New World

The future of the AWS command line tools is aws-cli, a single, unified, consistent command line tool that works with almost all of the AWS services.

Here is a quick list of the services that aws-cli currently supports: Auto Scaling, CloudFormation, CloudSearch, CloudWatch, Data Pipeline, Direct Connect, DynamoDB, EC2, ElastiCache, Elastic Beanstalk, Elastic Transcoder, ELB, EMR, Identity and Access Management, Import/Export, OpsWorks, RDS, Redshift, Route 53, S3, SES, SNS, SQS, Storage Gateway, Security Token Service, Support API, SWF, VPC.

Support for the following appears to be planned: CloudFront, Glacier, SimpleDB.

The aws-cli software is being actively developed as an open source project on Github, with a lot of support from Amazon. You’ll note that the biggest contributors to aws-cli are Amazon employees with Mitch Garnaat leading. Mitch is also the author of boto, the amazing Python library for AWS.

Installing aws-cli

I recommend reading the aws-cli documentation as it has complete instructions for various ways to install and configure the tool, but for convenience, here are the steps I use on Ubuntu:

sudo apt-get install -y python-pip
sudo pip install awscli

Add your Access Key ID and Secret Access Key to $HOME/.aws/config using this format:

[default]
aws_access_key_id = <access key id>
aws_secret_access_key = <secret access key>
region = us-east-1

Protect the config file:

chmod 600 $HOME/.aws/config

Optionally set an environment variable pointing to the config file, especially if you put it in a non-standard location. For future convenience, also add this line to your $HOME/.bashrc

export AWS_CONFIG_FILE=$HOME/.aws/config

Now, wasn’t that a lot easier than installing and configuring all of the old tools?

Testing

Test your installation and configuration:

aws ec2 describe-regions

The default output is in JSON. You can try out other output formats:

 aws ec2 describe-regions --output text
 aws ec2 describe-regions --output table
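aws-cli also supports client-side filtering of the JSON output with the --query option, a JMESPath expression; for example (assuming your credentials are configured as above), this prints just the region names:

```shell
# Extract only the region names from the response
aws ec2 describe-regions --query 'Regions[*].RegionName' --output text
```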

I posted this brief mention of aws-cli because I expect some of my future articles are going to make use of it instead of the legacy command line tools.

So go ahead and install aws-cli, read the docs, and start to get familiar with this valuable tool.

Notes

Some folks might already have a command line tool installed with the name “aws”. This is likely Tim Kay’s “aws” tool. I would recommend renaming that to another name so that you don’t run into conflicts and confusion with the “aws” command from the aws-cli software.

[Update 2013-10-09: Rename awscli to aws-cli as that seems to be the direction it’s heading.]

[Update 2014-10-16: Use new .aws/config filename standard.]

Original article: http://alestic.com/2013/08/awscli

on October 17, 2014 01:54 AM

October 16, 2014

S07E29 – The One with the Baby on the Bus

Ubuntu Podcast from the UK LoCo

Join Laura Cowen, Tony Whitmore and Alan Pope in Studio L for Season Seven, Episode Twenty-Nine of the Ubuntu Podcast!

In this week’s show:-

We’ll be back next week, when we’ll be talking about diversity at events like OggCamp and looking over your feedback.

Please send your comments and suggestions to: podcast@ubuntu-uk.org
Join us on IRC in #uupc on Freenode
Leave a voicemail via phone: +44 (0) 203 298 1600, sip: podcast@sip.ubuntu-uk.org and skype: ubuntuukpodcast
Follow us on Twitter
Find our Facebook Fan Page
Follow us on Google+

on October 16, 2014 07:30 PM

October 15, 2014

A friend of mine sent me a link from her "+" account last night, publicizing a fundraising effort...

Admittedly, I've never been impressed with "+", so I rarely (if ever) look at it. Because she was a friend, and I like to help friends, I decided to go in and see what the link was about. I ended up staying longer than I originally planned and took a look around.

What did I see? I saw a lot of people who used to make Planet Ubuntu a lively, exciting, and vibrant place writing prolifically on "+" instead. Sadly and disappointingly, they rarely post on Planet these days.

Are you one of these people?

Friends, do consider the effect of the following:

When you upload, submit, store, send or receive content to or through our Services, you give Google (and those we work with) a worldwide license to use, host, store, reproduce, modify, create derivative works (such as those resulting from translations, adaptations or other changes we make so that your content works better with our Services), communicate, publish, publicly perform, publicly display and distribute such content. The rights you grant in this license are for the limited purpose of operating, promoting, and improving our Services, and to develop new ones. This license continues even if you stop using our Services ...
(Source: http://www.google.com/intl/en/policies/terms/)

Something smells wrong with this.

Friends, it's really not that difficult to host a blog and to use a more respectful service. I hope you'll consider that one small step in the spirit of not becoming the product, or even better, in the spirit of making Planet Ubuntu *the* place for Ubuntu happenings.

--
image by Terry O'Fee
https://www.flickr.com/photos/tmofee/

on October 15, 2014 02:15 PM

KDE Project:

Last month I posted about packaging and why it takes time. I commented that the Stable Release Update process could not be rushed, because a regression is worse than a known bug. Then last week I was pointed to a problem where Baloo was causing a user's system to run slowly. Baloo is KDE's new indexer, a faster replacement for Nepomuk; it stores information about all your files in a way that lets you easily search for them. Baloo has been written to be as lightweight as these things can be by using ionice, a Linux feature which allows a process to say "this isn't very important, let everyone else go first".

Except ionice wasn't working. It turns out Ubuntu changed the default Linux I/O scheduler from CFQ to Deadline, which doesn't support I/O priorities. Kubuntu devs who had been looking at this for some time had already worked out how to change it back to the upstream default in our development version, Utopic, and in the backports packages we put on Launchpad. Last week we uploaded it as a proposed Stable Release Update and, as expected, the SRU team was sceptical. We should have been faster with the SRU, which is our fault; they're there to be sceptical, but the only change here is to go back to the upstream default. After much wondering why it was changed in the first place, it seems that Unity was having problems with the CFQ scheduler, so it was changed; now there are suggestions that Baloo should be the one to adapt, which is crazy. Nobody seems to have considered fixing Unity, or that changing the scheduler in the first place would affect software outside of Unity. We tried taking the issue to the Ubuntu Technical Board, but their meeting didn't happen this week.
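For the impatient, the scheduler is exposed through sysfs, so you can check which one is active and switch back to CFQ yourself (a workaround sketch, assuming your disk is sda; the change does not survive a reboot):

```shell
# The active I/O scheduler is shown in square brackets
cat /sys/block/sda/queue/scheduler
# Switch back to CFQ until the next reboot (root required)
echo cfq | sudo tee /sys/block/sda/queue/scheduler
# To make it stick, add elevator=cfq to the kernel command line in GRUB
```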

So alas, no fix in the immediate future; if it bothers you, best use Kubuntu Backports. When someone on the SRU team is brave enough to approve it into -proposed we'll put out a call for testers, and it'll get into -updates eventually. That's what happens when you have a large project like Ubuntu with many competing demands, but it would be nice if the expectation were on Unity to get fixed, rather than on Kubuntu to deal with the bureaucracy of working around their workarounds.

on October 15, 2014 11:35 AM

How to customize and brand your scope

Ubuntu App Developer Blog

Scopes come with a very flexible customization system. From picking the text color to rearranging how results are laid out, a scope can easily look like a generic RSS reader, a music library or even a store front.

In this new article, you will learn how to make your scope shine by customizing its results, changing its colors, adding a logo and adapting its layout to present your data in the best possible way. Read…


on October 15, 2014 11:14 AM

Like last month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In September 2014, 3 contributors have been paid for 11h each. Here are their individual reports:

Evolution of the situation

Compared to last month, we have gained 5 new sponsors, which is great. We're now at almost 25% of a full-time position, but we're not done yet. We believe that we would need at least twice as many sponsored hours to do a reasonable job on at least the most used packages, and possibly four times as many to be able to cover the full archive.

We’re now at 39 packages that need an update in Squeeze (+9 compared to last month), and the contributors paid by Freexian handled 11 of them during the last month (an approximate rate of 3 hours per update, CVE triage included).

Open questions

Dear readers, what can we do to convince more companies to join the effort?

The list of sponsors contains almost exclusively companies from Europe. It’s true that Freexian’s offer is priced in euros, but the economy is world-wide and international invoices are common. When Ivan Kohler asked whether an offer in dollars would help convince other companies, we got zero feedback.

What are the main obstacles that you face when you try to convince your managers to get the company to contribute?

By the way, we prefer that companies make small sponsorship commitments they can afford over multiple years, rather than granting lots of money now and then not being able to afford it the following year.

Thanks to our sponsors

Let me thank our main sponsors:

on October 15, 2014 07:45 AM

I just created an add-on that literally just changes the one bit* needed to disable SSL 3.0 support in Firefox.

You can get it here: https://addons.mozilla.org/en-US/firefox/addon/disable-ssl-30/

*It’s trivial to do in about:config, yet I don’t really want to recommend that to anyone.
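For the curious, the relevant setting is Firefox's minimum-protocol-version preference. Here is a hedged sketch: the pref name is the standard one from that era, but exactly how the add-on flips it internally is my assumption; the command below only prints the equivalent user.js line rather than writing it into a profile, since profile paths vary:

```shell
# In about:config, setting security.tls.version.min to 1 (TLS 1.0)
# disables SSL 3.0 (value 0 allows it). The same setting as a user.js
# line, printed for reference:
printf 'user_pref("security.tls.version.min", 1);\n'
```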

on October 15, 2014 04:56 AM

We have finished packaging the KDE 4.14.2 release.
We have also backported it to Trusty LTS!
The KDE announcement can be found here:

KDE 4.14.2 Release notes
Kubuntu Release with install instructions can be found here:
Kubuntu KDE 4.14.2 Release

on October 15, 2014 12:32 AM

Packages for the release of KDE SC 4.14.2 are available for Kubuntu 14.04 LTS and our development release. You can get them from the Kubuntu Backports PPA and the Kubuntu Utopic Updates PPA.
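For 14.04 LTS users, a sketch of how a PPA like this is typically enabled (the `ppa:kubuntu-ppa/backports` name matches the Backports PPA on Launchpad, but check the linked PPA page for the exact name before running anything; not tested here since it needs root and network access):

```shell
# Enable the Kubuntu Backports PPA and upgrade to the new packages.
sudo add-apt-repository ppa:kubuntu-ppa/backports
sudo apt-get update
sudo apt-get dist-upgrade
```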

Bugs in the packaging should be reported to kubuntu-ppa on Launchpad; bugs in the software itself should go to KDE.

on October 15, 2014 12:21 AM