June 30, 2015

Using the compiz grid plugin, Unity supports placing windows, one at a time, in a tiled fashion. However, there is no support for tiling a whole workspace in one fell swoop, something that users of dwm, wmii, i3, xmonad, awesome, qtile, etc. have come to expect.

A few years ago I ran across a python script called stiler which tiled all windows, mainly using wmctrl. I’ve made a few updates to make that work cleanly in Unity, and have been using that for about a week. Here is how it works:

windows-enter is mapped to “stiler term”. This starts a new terminal (of the type defined in ~/.stilerrc), then tiles the current desktop. windows-j and windows-k are mapped to ‘stiler simple-next’ and ‘stiler simple-prev’, which first call the ‘simple’ function to make sure windows are tiled if they weren’t already, then focus the next or previous window. So, if you have a set of windows which isn’t tiled (for instance, you just exited a terminal), you can press win-j to tile the remaining windows. windows-shift-j cycles the tile locations so that the active window becomes the first non-tiled one, and so on.
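
For illustration, one minimal way to wire up these bindings (an assumption on my part; any keybinding tool, including Unity's own shortcut settings, works) is an ~/.xbindkeysrc for xbindkeys:

# ~/.xbindkeysrc sketch: Mod4 is the windows/super key
"stiler term"
  Mod4 + Return
"stiler simple-next"
  Mod4 + j
"stiler simple-prev"
  Mod4 + k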

This is clearly very focused on a dwm-like experience. stiler also supports vertical and horizontal layouts, and could easily be taught other layouts, such as matrix.

If this is something that anyone but me actually wants to use, I’ll package it properly in a PPA, but for now the script can be found at
http://people.canonical.com/~serge/stiler .


on June 30, 2015 08:30 PM

Meeting Minutes

IRC Log of the meeting.

Meeting minutes.

Agenda

20150630 Meeting Agenda


Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:
– http://kernel.ubuntu.com/reports/kt-meeting.txt


Status: CVEs

The current CVE status can be reviewed at the following link:
– http://kernel.ubuntu.com/reports/kernel-cves.html


Status: Stable, Security, and Bugfix Kernel Updates – Precise/Trusty/Utopic/Vivid

Status for the main kernels, as of today:

  • Precise – Verification & Testing
  • Trusty – Verification & Testing
  • Utopic – Verification & Testing
  • Vivid – Verification & Testing

Details of currently open tracking bugs:

  • http://kernel.ubuntu.com/sru/kernel-sru-workflow.html
For SRUs, the SRU report is a good source of information:
  • http://kernel.ubuntu.com/sru/sru-report.html

    Schedule:

    cycle: 13-Jun through 04-Jul
    ====================================================================
    12-Jun Last day for kernel commits for this cycle
    14-Jun – 20-Jun Kernel prep week.
    21-Jun – 04-Jul Bug verification; Regression testing; Release


Open Discussion or Questions? Raise your hand to be recognized

No open discussion.

on June 30, 2015 05:11 PM

Publishing lxd images

Serge Hallyn

While some work remains to be done for ‘lxc publish’, the current support is sufficient to demonstrate a full image workflow cycle with lxd.

Ubuntu wily comes with systemd by default, but sometimes you might need a wily container with upstart. To repeatedly reproduce some tests on wily with upstart, you might want to create a container image.

# lxc remote add lxc images.linuxcontainers.org
# lxc launch lxc:ubuntu/wily/amd64 w1
# lxc exec w1 -- apt-get -y install upstart-bin upstart-sysv
# lxc stop w1
# lxc publish --public w1 --alias=wily-with-upstart
# lxc image copy wily-with-upstart remote:  # optional

Now you can start a new container using

# lxc launch wily-with-upstart w-test-1
# lxc exec w-test-1 -- ls -alh /sbin/init
lrwxrwxrwx 1 root root 7 May 18 10:20 /sbin/init -> upstart
# lxc exec w-test-1 run-my-tests

Importantly, because “--public” was passed to the lxc publish command, anyone who can reach your lxd server or the image server at “remote:” will also be able to use the image. Of course, for private images, don’t use “--public”.
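
To confirm the publish step worked, you can list your images and inspect the new one (a quick check using standard lxc image commands):

# lxc image list
# lxc image info wily-with-upstart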

Enjoy!


on June 30, 2015 03:20 AM

Superstar Ubuntu Weekly Newsletter contributor Paul White was recently reflecting upon his work with the newsletter and noted that he was approaching 100 issues that he’s contributed to. Wow!

That caused me to look at how long I’ve been involved. Back in 2011 the newsletter went on a 6-month hiatus when the former editor had to step down due to obligations elsewhere. After much pleading for the return of the newsletter, I spent a few weeks working with Nathan Handler to improve the scripts used in the release process and doing an analysis of the value of each section of the newsletter in relation to how much work it took to produce each week. The result was a slightly leaner, but hopefully just as valuable, newsletter, which now took about 30 minutes for an experienced editor to release rather than 2+ hours. This change was transformational for the team, allowing me to be involved for a whopping 205 consecutive issues.

If you’re not familiar with the newsletter, every week we work to collect news from around our community and the Internet to bring together a snapshot of that week in Ubuntu. It helps people stay up to date with the latest in the world of Ubuntu and the Newsletter archive offers a fascinating glimpse back through history.

But we always need help putting the newsletter together. We especially need people who can take some time out of their weekend to help us write article summaries.

Summary writers. Summary writers receive an email every Friday evening (or early Saturday) US time with a link to the collaborative news links document for the past week, which lists all the articles that need 2-3 sentence summaries. These people are vitally important to the newsletter. The time commitment is limited and it is easy to get started from the first weekend you volunteer. No need to be shy about your writing skills; we have style guidelines to help you on your way, and all summaries are reviewed before publishing, so it’s easy to improve as you go.

Interested? Email editor.ubuntu.news@ubuntu.com and we’ll get you added to the list of folks who are emailed each week.

I love working on the newsletter. As I’ve had to reduce my commitment to some volunteer projects I’m working on, I’ve held on to the newsletter because of how valuable and enjoyable I find it. We’re a friendly team and I hope you can join us!

Still just interested in reading? You have several options:

And everyone is welcome to drop by #ubuntu-news on Freenode to chat with us or share links to news we may find valuable for the newsletter.

on June 30, 2015 02:29 AM

Welcome to the Ubuntu Weekly Newsletter. This is issue #423 for the week June 22 – 28, 2015, and the full version is available here.

In this issue we cover:

This issue of the Ubuntu Weekly Newsletter is brought to you by:

  • Paul White
  • Elizabeth K. Joseph
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

on June 30, 2015 01:49 AM

June 29, 2015

In the review of the BQ E5, a reader asked me a question as concise as it is interesting: what advantages does a phone running Ubuntu have?

The truth is that I hadn’t thought about it in depth. I’ve been using Ubuntu Phone for 5 months because I personally prefer Ubuntu as the operating system on my phone, and for me that alone is more than reason enough.


 
Meizu MX4 Ubuntu Edition



After thinking about it for a while, in my opinion Ubuntu Phone stands out for these reasons:

  • Services, not applications: Ubuntu Phone is focused on using services rather than applications. How? Through scopes, which mark an original disruption unique among today’s mobile operating systems. The best way to understand what scopes are and their potential is to watch this video:
Scopes
  • Freedom: a 100% free operating system, and the vast majority of applications are free as well.
  • Ubuntu in your pocket: a 100% Linux operating system, with no emulation.
  • It closes the circle: you can use Ubuntu on desktop + server + cloud + phone.
  • Avoiding the Big Brother that Google and Apple have become: with the NSA later adding the icing on the cake.
  • Simple yet elegant design. Though when it comes to taste, to each their own :)
  • No fragmentation: monthly OTA updates for all phones.
  • An "Ubuntu" phone: Canonical is fully committed to Ubuntu Phone, and the feeling is that you have an Ubuntu phone rather than a BQ or Meizu phone.
  • Gestures: Unity shows its full touch potential. Simple, fast and intuitive.
  • Development: a good SDK and plenty of development possibilities: QML, HTML5 (including Cordova), webapps... And a good store (at last!) from which to download and discover them.

BQ Ubuntu Edition


Obviously not everything is perfect. I miss:
  • Applications: there are many, but important applications are missing for many users.
  • Tethering: remember that Android shipped it in version 2.2.
  • Bluetooth: it still doesn’t work as it should.
 
Every user is different, but I am completely satisfied with my Ubuntu Phone :)
on June 29, 2015 09:22 AM

on June 29, 2015 09:12 AM


DEDOIMEDO previews this new KDE Connect capability.

 

on June 29, 2015 09:08 AM

Just Say It!

Ted Gould

While I love typing on the small on-screen keyboard on my phone, it is much easier to just talk. When we did the HUD we added speech recognition there, and it processed the audio on the device, giving the great experience of controlling your phone with your voice. That worked well with the limited command set exported by the application, but doing generic voice recognition today requires more processing power than a phone can reasonably provide. Which made me pretty excited to find out about HP's IDOL on Demand service.

I made a small application for Ubuntu Phone that records the audio you speak at it, and sends it up to the HP IDOL on Demand service. The HP service then does the speech recognition on it and returns the text back to us. Once I have the text (with help from Ken VanDine) I set it up to use Content Hub to export the text to any other application that can receive it. This way you can use speech recognition to write your Telegram notes, without Telegram having to know anything about speech at all.

The application is called Just Say It! and is in the Ubuntu App Store right now. It isn't beautiful, but definitely shows what can be done with this type of technology today. I hope to make it prettier and add additional features in the future. If you'd like to see how I did it you can look at the source.

As an aside: I can't get any of the non-English languages to work. This could be because I'm not a native speaker of those languages. If people could try them I'd love to know if they're useful.


on June 29, 2015 04:29 AM

June 28, 2015

If you read my blog you already know that since April I’ve been working as a developer at Archon. It’s something I enjoy a lot, and the last week has been awesome.

Archon joined the Hello Tomorrow Conference 2015, so last week I travelled to Paris.

There I met some of the best people in the world, the ones who change it, not only in computer science but in every field.

Here are some of the things that most inspired me. I hope they can inspire you too, and give you the energy to be the change you want to see in the world.

Archon Team

First of all, thanks to the guys who were with me in Paris. Davide Venturelli, the CEO, works at NASA and is currently in charge of surveying the scientific investigations performed at the Quantum Artificial Intelligence Laboratory. What he does is incredible, and he motivated me a lot to follow my dreams now.

You can watch his pitch at the conference on YouTube (recorded with my Ubuntu Phone). Seriously, find 8 minutes today and watch it, so you can understand what Archon is about and why I like it.

Giovanni Landi is our 3D expert, and has a lot of different passions. I had a lot of fun working with him at our stand, and I learned a lot of things about art (one of his passions).

Roberto Navoni is our hardware expert, and his life should be an inspiration for every Italian. He’s an entrepreneur who created a company in Italy and, despite the difficulties, didn’t move abroad.

Davide Ghezzi is our CFO. Unfortunately he was able to join us only for the first day, but he got a lot done. I have no idea how it’s possible for a single man to have so much energy, but wow!

The stand

I spent most of the time at our stand, where I explained our product to both potential investors and casual visitors. As you can read, my English isn’t great, so I was quite surprised that everyone understood what I was saying.

Anyway, meeting so many people from all around the world was amazing, each of them with incredible experiences and cool backgrounds. I spent a lot of time talking about the future, and how to do things that could impact the world. I listened to a lot of stories, and I remember each of them, because everyone was incredible.

The keynotes

During the event I was able to take a look at a couple of keynotes (I spent the rest of the time at the stand), and both were something you don’t see every day.

The first one was by Obi Felten, who works on moonshots at Google[X]. I don’t agree with a lot of Google’s policies, but the energy these guys put into trying to build something beautiful, and how hard they work with open minds, deserves deep respect and admiration.

The second one was by the CEO of G-Therapeutics. They have developed a working (but still in development) technology that helps paralyzed people walk again. Let me repeat: a stimulation system to rehabilitate individuals with spinal cord injury.

The presentation was the most moving thing I’ve ever seen, and it earned minutes of applause.

The companies

Other than Archon, there were a lot of other interesting companies, both for what they do and for the stories of their founders.

Here is a short list of the ones I liked most, far from complete (you can read the entire list on the Hello Tomorrow website).

  • Blitab is a braille tablet that helps blind people. I love how technology nowadays can help less fortunate people live a better life.
  • BioCarbon Engineering is changing the world 1 billion trees at a time. They use drones for precision planting to optimize reforestation. You know, trees don’t give free wifi, but they give oxygen, so they are useful. Indeed, BioCarbon won the competition.
  • Artomatix builds software to automate the generation of art, enabling digital graphic artists to focus on being creative while reducing project times and costs. OK, it’s not a world changer, but the gamer in me loves the software, so I really hope they succeed.
  • Solenica is building Lucy. Lucy’s mirror follows the sun and reflects sunlight into your rooms, creating a beautiful natural glow. Beyond the product (it lets you cut your carbon footprint by up to 1 ton/year by saving electricity, and I like things that help the environment), I like the story of the startup, founded by 3 Italians. It’s sad they had to go to the U.S. to follow their dream, but I love their stubbornness in pushing forward. Only people like them move the world forward and make it a better place.

Conclusion

Beyond the inspiration, that week also confirmed that I’m on the right path to do something with my life that helps make the world a better place. A lot of people encouraged me to continue on this way, and you know, public recognition of your work is important.

Ciao,
R.

on June 28, 2015 10:56 PM

Thoughts on Meizu MX4

Riccardo Padovani

Ten days ago I switched phones: I retired the BQ Aquaris E4.5 and started to use the Meizu MX4 (both with Ubuntu for Phones). On the Internet there are a lot of reviews of the hardware and software, written by people more qualified than me and in better English. What I want to do here is highlight how it fits my use case, and why for 9 months I haven’t used a smartphone that doesn’t run Ubuntu.


A couple of premises

I have used Ubuntu as the only system on my smartphone since Sep ‘14. I started with a Nexus 4 until Feb ‘15, when I switched to the BQ Aquaris, and then to the Meizu MX4 a couple of weeks ago. All three devices were given to me by Canonical, the company which develops Ubuntu, as thanks for my support in the development of the system.

Despite this, they didn’t ask me to write a good review, or anything like that, but just to be honest. And I will, as I did back in February.

Also, I’m happy with Ubuntu, but that doesn’t mean it is the best system on the market (hint: it isn’t), or that it has the best hardware (hint: it hasn’t), or the best applications (hint: no way). But I like it, I love improving it, and I’m so happy there is an open source system on the market (yes, there is Firefox OS too, but I prefer Ubuntu). So don’t buy it if you aren’t sure about what you’re doing.

The good

The screen

The screen of this phone is so beautiful, so perfect, so whatever positive adjective you can think of, that I fell in love with it. It was the worst thing on the BQ, and it’s the best thing here: the side bezels are small, the screen is big, there are so many pixels you can’t see them, and everything is perfectly sized. You know, size matters ;-)

The performance

It definitely performs better than the BQ, especially the browser and Telegram. The browser renders very fast, and Telegram is so much better than on the BQ that I wasn’t sure it was the same version on the two phones. I asked the developers and they said yes, but I swear I’m not convinced yet; it’s definitely faster, it never freezes, and it’s really nice to use.

And no, it’s not because I barely use it. In fact, I receive more than 500 messages every day (guys, to abandon Android I had to persuade all my friends to switch to Telegram).

The system

I really love the system: good performance, long battery life, every time there is an update you see so many changes, and I help shape it by writing code (one of the greatest satisfactions there is).

After the last update, OTA-4, it’s a wonderful world to live in, and it’s open source. And that is the fundamental thing.

The bad

The optimization

Considering how smoothly Ubuntu runs on the Aquaris, I definitely expected better from this phone. I have to recharge it every day (while I recharged the Aquaris once every two days) and sometimes the system freezes. Developers are working on all the bugs, so I’m sure they will be fixed and the optimization will improve, but the system is definitely better optimized for the Aquaris than for the MX4.

The price

Considering the hardware, the maturity of the system and the optimization (see above), it’s overpriced. While I suggest you buy the Aquaris if you want to try Ubuntu, I cannot suggest buying this one. At least, not until another update fixes all the issues I highlighted above.

I really like the screen, so I’ll continue to use this one, but if you aren’t a huge fan of large screens, there is no reason to buy this over the Aquaris as I write (end of Jun ‘15). I hope the situation will improve with OTA-5 (ETA: end of Jul ‘15).

The home button

We don’t need buttons. More screen and less buttons, please.

The dream

Canonical and the community are working together to make a dream possible. We want open source to run the world; we want to create a better place with software.

I don’t know if we will be successful, or if we will change the world, but at least we’re trying. So don’t settle; continue to write code, report bugs, translate apps. In the end, good deeds are always rewarded.

As Alan Kay wrote, The best way to predict the future is to invent it.

Some say I’m wasting my little free time doing things for free for a commercial company. But I’m not wasting my time, I’m building a better future, and so are you every time you do something for the open source world.

Do you like this article? Please consider buying me an English course so I can improve, or just send me feedback at riccardo@rpadovani.com.

Ciao,
R.

on June 28, 2015 05:45 PM

Availability

Stuart Langridge

Some very interesting discussions happened at Edgeconf 5, including a detailed breakout session on making your web apps work for everyone which was well run by Lyza Danger Gardner. We talked about performance, and how if your page contains HTML then your users see your interface sooner. About fallbacks, and how if you’re on a train with a dodgy 3g connection the site should work and that’s a competitive advantage for you, because your competitors’ sites probably don’t. About isomorphic JavaScript and how the promise of it is that your Angular website won’t have to wait until it’s all downloaded before showing anything. About Opera Mini’s 250 million users. It’s about whether the stuff you build is available to the most people. About your reach, and you being able to reach more than the others.

In the past, we’ve called this “progressive enhancement”, but people don’t like that word. Because it sounds hard. It sounds like you’re not allowed to use modern tools in case one user has IE4. Like you have to choose between slick design and theoretical users in Burma.

Much rhetorical use has been made of the GOV.UK team’s results on people not getting the script on their pages. The important part of that result was that 0.9% of visits didn’t run the client-side scripting even though they should have done. It’s not people with JavaScript turned off; it’s people with browsers that for some reason didn’t run it at all. Did you open a hundred web pages yesterday? I probably did. So for every hundred web pages opened by someone, one of them didn’t work. Maybe they were in a tunnel and the 3g cut out. Maybe they were on hotel WiFi. Maybe the CDN went down for ten seconds. Maybe the assets server crashed. But, for whatever reason, some of your site didn’t work. Did that make your site unavailable to them? Not if it was written right, written to be available.

And “written right” does not mean that you have double the work to build a version of your WebGL photo editor that works in Lynx. If you do this by having isomorphic JS, so your node server provides HTML which makes your pages load before your 2MB of bower JS arrives, that’s fine. Because you’re available to everybody; a Macbook user in a cafe, a finance director on her Windows desktop, a phone-using tween in a field with no coverage, and yes even Opera Mini users in Burma.

It’s not about giving up your frameworks to cater for fictional example users with scripting disabled. It is true that not everyone has JS and that sometimes that’s you, so let’s work out how to do this without regressing to 1998.

So I’m not going to be talking about progressive enhancement any more. I’m going to be talking about availability. About reach. About my web apps being for everyone even when the universe tries to get in the way.

(Also, more on why availability matters, with smiling diagrams!)

on June 28, 2015 02:20 PM

When I first left desktops behind for a laptop (Lenovo T500) it was a tough step. I was used to building my own desktops from the components I selected. I was used to the power of a desktop. Converting to using a laptop was an exercise in compromises. The transition from a 15″ laptop to a smaller lighter laptop is similar, but this is the first time I have taken a step back in the area of memory. I am converting from a Lenovo T530 to a Dell XPS 13 (9343) Developer Edition. This article will cover accessories I own or am considering purchasing to replace some of the lost features of the larger laptop.

Video Out
If you use your laptop to present, or would like to have a larger monitor at your desk, then you will want an adapter from mini-DisplayPort to some other input (VGA, DVI or DisplayPort). In my case I went with the MDP-HDMI from Plugable, which converts from mini-DisplayPort to HDMI. Most of the presentations I do get displayed on large-screen televisions with HDMI inputs, which makes this solution ideal. Linux does not have support for USB 3.0 DisplayLink devices, but you could also choose to utilize a USB 2.0 docking station. As long as you do not have USB 3.0 drives or a need for gigabit Ethernet connections, that would be a possible solution.

Network
Wireless works great when you are mobile, and fairly well even when you are not. For most people there is no need for wired connections, but if you move large files then having a gigabit connection is a must. For this I use a Plugable model USB3-E1000 device. Moving large files at 118 MB/s is much more enjoyable than at 35 MB/s.

USB 3.0 Hub
With only two USB ports, a hub can make it easier to attach multiple devices. In my case, since I decided to use the USB3-E1000 device, I would only have one available USB port. I have the Plugable USB3-HUB7A, which has seven ports. This USB hub has not been stable for me with either the Lenovo T530 or the Dell XPS 13 (9343). I am not sure if there is a firmware issue or something else; the current issue is that devices plugged in to the hub are not always recognized. That said, this hub still allows me to use three devices directly attached and another two through a second USB 2.0 hub.

 


on June 28, 2015 02:07 AM

June 27, 2015

Since 2014 I have been running static code analysis using tools such as cppcheck and smatch against the Linux kernel source on a regular basis to catch bugs that creep into the kernel. After each cppcheck run I diff the logs to get a list of deltas on the error and warning messages, and I periodically review these to filter out false positives, ending up with a list of bugs that need some attention.
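
A minimal sketch of that run-and-diff workflow (the flags and log names here are illustrative, not necessarily what I run; cppcheck writes its findings to stderr):

cppcheck --force --enable=warning,portability linux/ 2> cppcheck-new.log
diff cppcheck-old.log cppcheck-new.log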

Bugs such as allocations returning NULL pointers without checks, memory leaks, duplicate memory frees and uninitialized variables are easy to find with static analyzers and generally just require one- or two-line fixes.

So what are the overall trends like?

Warnings and error messages from cppcheck have been dropping over time, while "portability warnings" have been steadily increasing. "Portability warnings" come mainly from arithmetic on void * pointers (which GCC treats as byte-sized, but which is not legal C), and these are slowly increasing over time. Note that there is some variation in the results as I use the latest versions of cppcheck; occasionally a new version finds a lot of false positives that then get fixed in later versions of cppcheck.

Compared to the growth in kernel size, the overall warning and error message trends from cppcheck aren't so bad, considering the kernel has grown by nearly 11% over the time I have been running the static analysis.

Kernel source growth over time
Since each warning or error reported has to be carefully scrutinized to determine whether it is a false positive (and this takes a lot of effort and time), I've not yet been able to determine the exact false positive rate for these stats. Compared to the actual lines of code, cppcheck is finding roughly 1 error per 15K lines of source.

It would be interesting to run this analysis on commercial static analyzers such as Coverity and see how the stats compare. As it stands, cppcheck is doing its bit in detecting errors and helping engineers to improve code quality.
on June 27, 2015 10:13 AM

The idea behind this video is to show Ubuntu equivalents for my most used Android apps.

Ubuntu apps shown:
Google+
YouTube
Gmail/Photos/Calendar/Drive
HERE / OSMtouch
Camera
Udropcabin
File Manager
CuteSpotify

Honourable Mentions:
OSMtouch
Telegram

EDIT: The small font bug in OSMscout (on the MX4) is now fixed. Yay!

on June 27, 2015 09:17 AM

June 26, 2015

For almost half a year I have been using the BQ E4.5 Ubuntu Edition. I covered the launch event, the interview with its CEO, a review of its hardware, the operating system, the applications, photo and battery quality, the case, and even how it feels day to day and after a month of use.

But BQ's commitment hasn't stopped at that model. The Spanish company continues to bet strongly on our favorite mobile operating system, this time with its flagship, the BQ E5, a phone with one of the best quality/features/price ratios on the market.


Presentation

The E5 comes in a box very similar to the E4.5's; the red letters on black already hint at the gem inside.
On the outside, the phone has simple, elegant lines and a soft feel.
Inside, we get dual SIM (we can use two phone numbers simultaneously) and support for expanding the 16GB of internal storage with a microSD card of up to 32GB.


Very good design: understated and elegant
Its 5" HD 720 x 1280 (294 ppi) screen especially stands out, and it is very useful because we get a wider keyboard that makes typing easier. After trying the E5 for a week, the E4.5 felt small when I went back to it...

The screen, in resolution and size, is the device's best feature
The 13 Mpx camera is better than the E4.5's, not just for the higher resolution but for better photo quality in color, contrast and light:

Camera

This pair of photos was taken at the same place and time:

BQ E4.5 Ubuntu Edition BQ E5 Ubuntu Edition



Its 2500 mAh battery makes a big difference, lasting more hours than the E4.5's.


It weighs little for such a large battery

Which to buy? E4.5 or E5?
The E4.5 costs €169.90 and the E5 costs €199.90.
If you prefer to pay the minimum and want a phone with a normal-to-large screen and a normal battery, the E4.5 will suit you perfectly.
Personally, I would buy the E5. For a difference of only €30, the quality and size of the screen, camera, battery and internal storage make it worth it.

BQ E5

You can buy the BQ E5 here. You can buy the BQ E4.5 here.

Photographs by David Castañón, licensed under CC BY 3.0.
on June 26, 2015 08:00 PM

The Meizu MX4 Challenge deadline is approaching fast with just a few days to go. July 1st is the last day for submissions to be accepted for the challenge.


Registering your submission

To register your submission for the judges' review, you will need a couple of minutes to fill in the registration form. You can find the registration form here.

Scope Submissions

Please submit your scopes to the store. The upload workflow is exactly the same as for apps, and with automated reviews it takes just a few minutes from upload to your scope being available for everyone on the Ubuntu Software Store. When you're ready to start the upload, you can follow the 5-step process to get it published.
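
If you'd like to sanity-check a scope click package locally before uploading (an optional step; the directory and package names below are illustrative, and assume the standard click tooling), you can build it and run roughly the same checks the store's automated review performs:

$ click build my-scope/
$ click-review my-scope_*.click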

Design Submissions

The challenge was also targeted at designers, to showcase design mockups that improve the user experience of apps and scopes in the UT ecosystem. Feel free to upload your work to any publicly accessible site like Google Drive, Dropbox, G+ etc.

If you have any questions, feel free to contact me by email or in IRC #ubuntu-app-devel (nik90, popey).

Good Luck!

on June 26, 2015 04:10 PM

 


Full Circle – the independent magazine for the Ubuntu Linux community – is proud to announce the release of our ninety-eighth issue.

This month:
* Command & Conquer
* How-To : Conky Reminder, LibreOffice, and Programming JavaScript
* Graphics : Inkscape.
* Chrome Cult
* Linux Labs: Midnight Commander
* Ubuntu Phones
* Review: Saitek Pro Flight System
* Book Reviews: Automate Boring Stuff With Python, and Teach Your Kids To Code
* Ubuntu Games: Minetest, and Free to Play Games
plus: News, Arduino, Q&A, and soooo much more.

Get it while it’s hot!
http://fullcirclemagazine.org/issue-98
on June 26, 2015 02:53 PM

S08E16 – The Hottie & the Nottie - Ubuntu Podcast

Ubuntu Podcast from the UK LoCo

It’s Episode Sixteen of Season Eight of the Ubuntu Podcast! Alan Pope, Mark Johnson, Laura Cowen, and Martin Wimpress are all together again and speaking to your brain.

In this week’s show:

That’s all for this week, please send your comments and suggestions to: show@ubuntupodcast.org
Join us on IRC in #ubuntu-podcast on Freenode
Follow us on Twitter
Find our Facebook Fan Page
Follow us on Google+

on June 26, 2015 12:34 PM

June 25, 2015

"I’m getting really sick of being misquoted in release announcements."
– Oscar Wilde, probably.

The first alpha of the Wily Werewolf (to become 15.10) has now been released!

This alpha features images for Kubuntu, Lubuntu, Ubuntu MATE, UbuntuKylin and the Ubuntu Cloud images.

Pre-releases of the Wily Werewolf are *not* encouraged for anyone needing a stable system or anyone who is not comfortable running into occasional, even frequent breakage. They are, however, recommended for Ubuntu flavor developers and those who want to help in testing, reporting and fixing bugs as we work towards getting this release ready.

Alpha 1 includes a number of software updates that are ready for wider testing. This is quite an early set of images, so you should expect some bugs.

While these Alpha 1 images have been tested and work, except as noted in the release notes, Ubuntu developers are continuing to improve the Wily Werewolf. In particular, once newer daily images are available, system installation bugs identified in the Alpha 1 installer should be verified against the current daily image before being reported in Launchpad. Using an obsolete image to re-report bugs that have already been fixed wastes your time and the time of developers who are busy trying to make 15.10 the best Ubuntu release yet. Always ensure your system is up to date before reporting bugs.

Kubuntu

Kubuntu uses KDE software and now features the new Plasma 5 desktop.

The Alpha-1 images can be downloaded at: http://cdimage.ubuntu.com/kubuntu/releases/wily/alpha-1/

More information on Kubuntu Alpha-1 can be found here: https://wiki.ubuntu.com/WilyWerewolf/Alpha1/Kubuntu

Lubuntu

Lubuntu is a flavour of Ubuntu based on LXDE and focused on providing a very lightweight distribution.

The Alpha 1 images can be downloaded at: http://cdimage.ubuntu.com/lubuntu/releases/wily/alpha-1/

More information on Lubuntu Alpha-1 can be found here: https://wiki.ubuntu.com/WilyWerewolf/Alpha1/Lubuntu

Ubuntu MATE

Ubuntu MATE is a flavour of Ubuntu featuring the MATE desktop environment.

The Alpha-1 images can be downloaded at: http://cdimage.ubuntu.com/ubuntu-mate/releases/wily/alpha-1/

More information on Ubuntu MATE Alpha-1 can be found here: https://wiki.ubuntu.com/WilyWerewolf/Alpha1/UbuntuMATE

UbuntuKylin

UbuntuKylin is a flavour of Ubuntu that is more suitable for Chinese users.

The Alpha-1 images can be downloaded at: http://cdimage.ubuntu.com/ubuntukylin/releases/wily/alpha-1/

More information on UbuntuKylin Alpha-1 can be found here: https://wiki.ubuntu.com/WilyWerewolf/Alpha1/UbuntuKylin

Ubuntu Cloud

Ubuntu Cloud images can be run on Amazon EC2, Openstack, SmartOS and many other clouds.

http://cloud-images.ubuntu.com/releases/wily/alpha-1/

Regular daily images for Ubuntu can be found at: http://cdimage.ubuntu.com

If you’re interested in following the changes as we further develop Wily, we suggest that you subscribe to the ubuntu-devel-announce list. This is a low-traffic list (a few posts a week) carrying announcements of approved specifications, policy changes, alpha releases and other interesting events.

http://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-announce

A big thank you to the developers and testers for their efforts to pull together this Alpha release!

Originally posted to the ubuntu-devel-announce mailing list on Thu Jun 25 18:35:35 UTC 2015 by Adam Conrad on behalf of the Ubuntu Release Team,

on June 25, 2015 08:02 PM
The first Alpha of Wily (to become 15.10) has now been released!

The Alpha-1 images can be downloaded from: http://cdimage.ubuntu.com/kubuntu/releases/wily/alpha-1/

More information on Kubuntu Alpha-1 can be found here: https://wiki.kubuntu.org/WilyWerewolf/Alpha1/Kubuntu
on June 25, 2015 07:54 PM
Again, some changes in the GTK libraries made our theme look wrong. This fix applies to GTK3 apps whose list boxes were greyed out, making them totally unreadable (bug #1464349). See the differences before and after:




Also, some fixes in the core Ubuntu theme corrected the titlebar for Unity and the toolbar "continuity effect" in all environments. Before and after:



As always, you can upgrade or get your theme from the Artwork page. If you're a Wily Werewolf user or you added the PPA to your system, these changes will arrive soon.
on June 25, 2015 04:34 PM

It is a well known fact that social media platforms like Twitter help in marketing a business or product. As a blogger your success depends on the number of visitors to your site. But to make that happen is not enough to have a Twitter account and make the occasional tweets. You have to do more and here are some tips that can help you along.

Get influential followers

Just like the real world, influence counts for a lot in the virtual world. For instance, if your niche is tech and security, then find the Twitter users in this niche. You can also use tools to find out the quality of their tweets and the response they get. Another way to find users in your niche would be to look for hashtags with relevant names.

What you should be looking for are people who are really active on Twitter, have a lot of followers and have a habit of retweeting content. You can become their follower and in time they will return the favor.

Be active

Do not wait for followers to come to you. This is not the forum to be passive. Find followers and retweet their content. You should make your presence felt. The only way to be heard is first to listen and then engage. Sticking with tech and everything Internet, imagine someone trying to educate people on things such as Internet security: who would take advice from a random account they have never heard of?

Build your following and then prove your worth.

Information that you share should be unique but also useful

It should never appear that you are tweeting for the sake of tweeting. People are not fools; they will just ignore that and go to people who are sharing relevant and unique content. The content that you are posting needs to be your own. You can and should retweet any content that is relevant to your niche. Sharing tweets is also a way of showing people that you are personally invested in helping them.

You can use tools that are available online which will set up auto tweets throughout the day. This way the tweets are spaced out properly without you having to remember that a tweet is due.

Never send spam to followers

This is the worst thing that you could do to yourself. So, never send auto-direct messages, which are considered spam by Twitter users. If you do, you risk making your brand suffer. But you can use direct messaging if you want to engage in a conversation.

Be consistent with your profile

If you have a presence in other social media too, then ensure that your profile is consistent across all of them. So don’t have one thing on Twitter and another on Facebook, and so on. Ensure that the pictures are relevant to your brand.

Analyze your replies

Any tweet from you will result in replies. See how many you are getting and which tweets get more replies. This way you can work out which keywords are more popular and send more tweets with these words. In the same way, analyze the replies that you send to others’ tweets.

Twitter is a useful tool if you know how to use it, so spend time in learning about it and you will reap the benefits.

The post Top Twitter Marketing Tips for bloggers appeared first on deshack.

on June 25, 2015 04:16 PM

Pardon the format of this post.

The Ubuntu Membership Board is responsible for approving new Ubuntu members. I interviewed our board members so that the community can get to know them better and get over the fear of applying for membership.

The third interviewee is El Achèche ANIS:

What do you do for a career?

Right now I’m working as an IT guy (sysadmin, DBA, netadmin, some DevOps, etc.), a job that I somehow got because of the Ubuntu community :)

What was your first computing experience?

I don’t really remember what it was; the only thing that I remember is when my cousin gave me more than 100 CDs full of programs to install and try, before I got an ADSL line. I was so happy, spending the whole weekend trying every single piece of software on those CDs.

How long have you been involved with Ubuntu?

I joined the LoCo team in September 2009. After some months I joined the team at an event for the first time ever, meeting nizarus, the spiritual father of my LoCo team. Since then I kept trying to be more and more involved in the team, until I joined the Ubuntu-tn board team. And then a whole new adventure started 😀

Since you are all fairly new to the Board, why did you join?

My main goal is to motivate myself. Being part of a LoCo that was my #1 priority after my family, then seeing myself almost alone in my LoCo and seeing it have fewer and fewer activities, is really demotivating. And I can tell you that I’m not disappointed at all :) 😀

The second goal was to be more involved with the international community. As a board member, reviewing the candidates’ wikis and LP account activity is a good opportunity to see what people all around the world are doing to help the community, and a new source of inspiration to try to kick off my LoCo again.

What are some of the projects you’ve worked on in Ubuntu over the years?

Most of my contributions have been around my LoCo: supporting users, trying to keep our wiki updated (that was a real struggle x( ), helping translate some Ubuntu strings and, of course, planning many events in Tunisian universities.

What is your focus in Ubuntu today?

I believe that the main focus of everybody should be the community itself. Over the last few years I saw many people join the community and then leave to join others. The bright side is that they joined other FOSS communities, which is a great thing, because after all we are a FOSS community too. But the dark side is that we are not able (maybe I am the only one who feels like that, I don’t know) to get new Ubuntu users involved in the community.

Do you contribute to other free/open source projects? Which ones?

Right now the main FOSS project I’m contributing to is Ubuntu and its community, but I try to help anyone who’s looking for technical support on IRC or social networks.

If you were to give a newcomer some advice about getting involved with Ubuntu, what would it be?

If you’re not having fun, you’re doing it wrong. So have fun, meet new people and learn about new cultures :)

Do you have any other comments else you wish to share with the community?

Have you been contributing to Ubuntu for many years now? Do people around you think you should have been an Ubuntu Member years ago? So what are you waiting for? Go ahead and apply!

on June 25, 2015 03:21 PM
With Ubuntu 12.04.2, the kernel team introduced the idea of the "hardware enablement kernel" (HWE), originally intended to support new hardware for bare metal server and desktop. In fact, the documentation indicates that HWE images are not suitable for virtual or cloud computing environments. The thought was that cloud and virtual environments provide stable hardware and that the newer kernel features would not be needed.

Time has proven this assumption painfully wrong. Take for example the need for drivers in virtual environments. Several of the Cloud providers that we have engaged with have requested the use of the HWE kernel by default. On GCE, the HWE kernels provide support for their NVME disks or multiqueue NIC support. Azure has benefited from having an updated HyperV driver stack resulting in better performance. When we engaged with VMware Air, the 12.04 kernel lacked the necessary drivers.

Perhaps more germane to our Cloud users is that containers are using kernel features. 12.04 users need to use the HWE kernel in order to make use of Docker. The new Ubuntu Fan project will be enabled for 14.04 via the HWE-V kernel for Ubuntu 14.04.3. If you use Ubuntu as your container host, you will likely consider using an HWE kernel.

And with that there has been a steady chorus of people requesting that we provide HWE image builds for AWS. The problem has never been the base builds; building the base bits is fairly easy. The hard part is that by adding base builds, each daily and release build goes from 96 images for AWS to 288 (needless to say, that is quite a problem). Over the last few weeks -- largely in my spare time -- I've been working out what it would take to deliver HWE images for AWS.

I am happy to announce that as of today, we are now building HWE-U (3.16) and HWE-V (3.19) Ubuntu 14.04 images for AWS. To be clear, we are not making any behavioral changes to the standard Ubuntu 14.04 images. Unless users opt into using an HWE image on AWS they will continue to get the 3.13 kernel. However, for those who want newer kernels, they now have the choice.

For the time being, only amd64 and i386 builds are being published. Over the next few weeks, we expect the HWE images to reach full feature parity, including release promotion and indexing. And I fully expect that the HWE-V version of 14.04 will include our recent Fan project once the SRUs complete.

Check them out at http://cloud-images.ubuntu.com/trusty/current/hwe-u and http://cloud-images.ubuntu.com/trusty/current/hwe-v .
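
For example, once you've picked an AMI ID for your region from those pages, launching it works like any other image (the AMI ID and key name below are placeholders):

$ aws ec2 run-instances --image-id ami-xxxxxxxx --instance-type t2.micro --key-name my-key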

As always, feedback is always welcome.
on June 25, 2015 02:56 PM

DOS Gaming

Sam Hewitt

C:\>DIR

⠀Volume in C drive is SNWH_BLOG
⠀Directory of C:\

DARKF PNG 1640922 25-06-15 1:14pm
WHEEL GIF 5032 25-06-15 1:42pm
POST EXE 10618 25-06-15 2:02pm
3 file(s) 1,656,572 bytes

C:\>POST.EXE

I consider PC games of the DOS/WIN95 era more a part of my youth than console games of that time, such as those on Nintendo, even though I did spend a lot of time there as well. Mind you, I didn't play many of the DOS games that influenced me until I was much older.

---------

My memories of DOS gaming at that time are from watching my older brother play (mostly FPS) games like Doom, Wolfenstein and Duke Nukem, all of which I was arguably too young to be watching but still influenced my later gaming preferences (and looky-here I didn't commit a school shooting).

(Actually this Wheel of Fortune game is one of my earliest memories of computer games.)

---------

Even many, many years later I still enjoy (re)playing the old DOS games that I've collected, simply for the nostalgia of it.

For instance, I'm currently replaying Star Wars: Dark Forces (which is still one of my favourite games) using DOSBox, more specifically Boxer.

This nostalgia and fond memory of games long past is why many people still find themselves wanting to replay them and develop ways to do so; many games are still great in spite of their age.

C:\>

on June 25, 2015 02:00 PM

Hi,

Before I start planning for Ubuntu GNOME 15.10, I thought it would be better to start with the users (and the team) of Ubuntu GNOME by asking them 4 simple questions that should not take more than 3 minutes of their time.

If you are using Ubuntu GNOME

or

If you have used either Ubuntu GNOME 14.04 LTS and/or Ubuntu GNOME 15.04

Please, help me/us by taking this survey.

 

If you have not yet used Ubuntu GNOME, then what is stopping you from using it? I’d be very glad to know :)

 

Thank you for your time and appreciate your help in advance :)

on June 25, 2015 04:37 AM

Since we're using buildroot for the OpenPower firmware build infrastructure, it's relatively straightforward to generate a standalone toolchain to build add-ons to the petitboot environment. This toolchain will allow you to cross-compile from your build host to an OpenPower host running the petitboot environment.

This is just a matter of using op-build's toolchain target, and specifying the destination directory in the BR2_HOST_DIR variable. For this example, we'll install into /opt/openpower/ :

sudo mkdir /opt/openpower/
sudo chown $USER /opt/openpower/
op-build BR2_HOST_DIR=/opt/openpower/ toolchain

After the build completes, you'll end up with a toolchain based in /opt/openpower.

Using the toolchain

If you add /opt/openpower/usr/bin/ to your PATH, you'll have the toolchain binaries available.

[jk@pecola ~]$ export PATH=/opt/openpower/usr/bin/:$PATH
[jk@pecola ~]$ powerpc64le-buildroot-linux-gnu-gcc --version
powerpc64le-buildroot-linux-gnu-gcc (Buildroot 2014.08-git-g80a2f83) 4.9.0
Copyright (C) 2014 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

Currently, this toolchain isn't relocatable, so you'll need to keep it in the original directory for tools to correctly locate other toolchain components.
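
As a quick smoke test (my own example, not part of the op-build docs), you can cross-compile a trivial program and check the output format:

echo 'int main(void) { return 0; }' > test.c
powerpc64le-buildroot-linux-gnu-gcc -o test test.c
file test    # should report a 64-bit LSB PowerPC executable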

OpenPower doesn't (yet) specify an ABI for the petitboot environment, so there are no guarantees that a petitboot plugin will be forwards- or backwards- compatible with other petitboot environments.

Because of this, if you use this toolchain to build binaries for a petitboot plugin, you'll need to either:

  • ensure that your op-build version matches the one used for the target petitboot image; or
  • provide all necessary libraries and dependencies in your distributed plugin archive.

We're working to address this though, by defining the ABI that will be regarded as stable across petitboot builds. Stay tuned for updates.

Using the toolchain for subsequent op-build runs

Because op-build has a facility to use an external toolchain, you can re-use the toolchain build above for subsequent op-build invocations, where you want to build actual firmware binaries. If you're using multiple op-build trees, or are regularly building from scratch, this can save a lot of time as you don't need to continually rebuild the toolchain from source.

This is a matter of configuring your op-build tree to use an "External Toolchain", in the "Toolchain" screen of the menuconfig interface:

You'll need to set the toolchain path to the path you used for BR2_HOST_DIR above, with /usr appended. The other toolchain configuration parameters (kernel header series, libc type, features enabled) will need to match the parameters that were given in the initial toolchain build. However, the buildroot code will check that these match and print a helpful error message if there are any inconsistencies.

For the example toolchain built above, these are the full configuration parameters I used:

BR2_TOOLCHAIN=y
BR2_TOOLCHAIN_USES_GLIBC=y
BR2_TOOLCHAIN_EXTERNAL=y
BR2_TOOLCHAIN_EXTERNAL_CUSTOM=y
BR2_TOOLCHAIN_EXTERNAL_PREINSTALLED=y
BR2_TOOLCHAIN_EXTERNAL_PATH="/opt/openpower/usr/"
BR2_TOOLCHAIN_EXTERNAL_CUSTOM_PREFIX="$(ARCH)-linux"
BR2_TOOLCHAIN_EXTERNAL_PREFIX="$(ARCH)-linux"
BR2_TOOLCHAIN_EXTERNAL_GLIBC=y
BR2_TOOLCHAIN_EXTERNAL_HEADERS_3_15=y
BR2_TOOLCHAIN_EXTERNAL_CUSTOM_GLIBC=y
BR2_TOOLCHAIN_EXTERNAL_INET_RPC=y
BR2_TOOLCHAIN_EXTERNAL_CXX=y
BR2_TOOLCHAIN_EXTRA_EXTERNAL_LIBS=""
BR2_TOOLCHAIN_HAS_NATIVE_RPC=y
BR2_TOOLCHAIN_HAS_THREADS=y
BR2_TOOLCHAIN_HAS_THREADS_DEBUG=y
BR2_TOOLCHAIN_HAS_THREADS_NPTL=y
BR2_TOOLCHAIN_HAS_SHADOW_PASSWORDS=y
BR2_TOOLCHAIN_HAS_SSP=y

Once that's done, anything you build using that op-build configuration will refer to the external toolchain, and use that for the general build process.

on June 25, 2015 04:09 AM

Feedback Time

Ubuntu GNOME

Hi everyone,

I just can NOT believe how fast time flies! WOW, it’s been 2 months since we released Ubuntu GNOME 15.04. Well, believe it or not, we’re 2 months older now 😉

Okay, so while our real-life struggle continues, we do our best to find some time to contribute to our beloved project, which is, without a doubt, Ubuntu GNOME.

Before we start the planning for the current cycle and future ones, I thought I’d do a quick and simple survey and ask our users for their honest, direct and true opinion about Ubuntu GNOME so far, especially 15.04 (our latest stable release) and 14.04 LTS (our Long Term Support release).

It should NOT take more than 3 minutes of your time :)

Please, click here to help us and do the survey.

 

Thank you so much for your time, help and support!

on June 25, 2015 02:59 AM

June 24, 2015

[UPDATE] The image IDs have been updated with the latest builds, which now include Docker 1.6.2, the latest LXD and, of course, the Ubuntu Fan driver.

This week, Dustin Kirkland announced the Ubuntu Fan Project.  To steal from the description, "The Fan is not a software-defined network, and relies on neither distributed databases nor consensus protocols.  Rather, routes are calculated deterministically and traffic carries no additional overhead beyond routine IP tunneling.  Canonical engineers have already demonstrated The Fan operating at 5Gbps between two Docker containers on separate hosts."

My team at Canonical is responsible for the production of these images. Once the official SRUs land, I anticipate that we will publish an official stream over at cloud-images.ubuntu.com. But until then, check back here for images and updates. As always, if you have feedback, please hop into #server on Freenode or send email.

GCE Images

Images for GCE have been published to the "ubuntu-os-cloud-devel" project.

The Images are:
  • daily-ubuntu-docker-lxd-1404-trusty-v20150620
  • daily-ubuntu-docker-lxd-1504-vivid-v20150621
To launch an instance, you might run:
$ gcloud compute instances create \
    --image-project ubuntu-os-cloud-devel \
    --image <IMAGE> <NAME>

You need to make sure that IPIP traffic is enabled:
$ gcloud compute firewall-rules create fan2 --allow 4 --source-ranges 10.0.0.0/8

Amazon AWS Images

The AWS images are HVM-only, AMD64 builds. 


Version     Region           HVM-SSD        HVM-Instance
14.04-LTS   eu-central-1     ami-7e94ac63   ami-8e93ab93
14.04-LTS   sa-east-1        ami-f943c1e4   ami-e742c0fa
14.04-LTS   ap-northeast-1   ami-543c9b54   ami-b4298eb4
14.04-LTS   eu-west-1        ami-4ae2a73d   ami-48e7a23f
14.04-LTS   us-west-1        ami-fbd126bf   ami-6bd3242f
14.04-LTS   us-west-2        ami-63585c53   ami-875357b7
14.04-LTS   ap-southeast-2   ami-7de69c47   ami-1de19b27
14.04-LTS   ap-southeast-1   ami-aca4a0fe   ami-2a9b9f78
14.04-LTS   us-east-1        ami-95877efe   ami-e58b728e
15.04       eu-central-1     ami-9a94ac87   ami-ae93abb3
15.04       sa-east-1        ami-1340c20e   ami-0743c11a
15.04       ap-northeast-1   ami-9c3c9b9c   ami-42379042
15.04       eu-west-1        ami-a2e2a7d5   ami-e4e7a293
15.04       us-west-1        ami-4bd0270f   ami-1dd32459
15.04       us-west-2        ami-f9585cc9   ami-1dd32459
15.04       ap-southeast-2   ami-5de69c67   ami-01e19b3b
15.04       ap-southeast-1   ami-74a5a126   ami-c89b9f9a
15.04       us-east-1        ami-29f90042   ami-8d8a73e6

It is important to note that these images are only usable inside of a VPC. Newer AWS users are in VPC by default, but older users may need to create and update their VPC. For example:
$ ec2-authorize --cidr <CIDR_RANGE> --protocol 4 <SECURITY_GROUP>
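
The same rule can be expressed with the newer AWS CLI (the security group ID here is a placeholder):
$ aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol 4 --cidr 10.0.0.0/8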


on June 24, 2015 05:56 PM

Protocols Plugfest Photos

Jonathan Riddell

Protocol Plugfest photos are up

Foto finish
Interoperabilidad

Jonathan Riddell
I don’t lead you, but I can’t stop you following me

on June 24, 2015 12:41 PM

As of commit 2aff5ba6 in the op-build tree, we're able to easily replace the kernel in an OpenPower firmware image.

This commit adds a new partition (called BOOTKERNEL) to the PNOR image, which provides the petitboot bootloader environment. Since it's now in its own partition, we can replace the image with a custom build. Here's a little guide to doing that, using an example of using a separate branch of op-build that provides a little-endian kernel.

You can check if your currently-running firmware has this BOOTKERNEL partition by running pflash -i on the BMC. It should list BOOTKERNEL in the partition table listing:

# pflash -i
Flash info:
-----------
Name          = Micron N25Qx512Ax
Total size    = 64MB 
Erase granule = 4KB 

Partitions:
-----------
ID=00            part 00000000..00001000 (actual=00001000)
ID=01            HBEL 00008000..0002c000 (actual=00024000)
[...]
ID=11            HBRT 00949000..00ca9000 (actual=00360000)
ID=12         PAYLOAD 00ca9000..00da9000 (actual=00100000)
ID=13      BOOTKERNEL 00da9000..01ca9000 (actual=00f00000)
ID=14        ATTR_TMP 01ca9000..01cb1000 (actual=00008000)
ID=15       ATTR_PERM 01cb1000..01cb9000 (actual=00008000)
[...]
#  

If your partition table does not contain a BOOTKERNEL partition, you'll need to upgrade to a more recent PNOR image to proceed.

First (if you don't have one already), grab a suitable version of op-build. In this example, we'll use my le branch, which has little-endian support:

git clone --recursive git://github.com/jk-ozlabs/op-build.git
cd op-build
git checkout -b le origin/le
git submodule update

Then, prepare our environment and configure for the relevant platform - in this case, habanero:

. op-build-env
op-build habanero_defconfig

If you'd like to change any of the kernel config (for example, to add or remove drivers), you can do that now, using the 'linux-menuconfig' target. This is only necessary if you wish to make changes. Otherwise, the default kernel config will work.

op-build linux-menuconfig

Next, we build just the userspace and kernel parts of the firmware image, by specifying the linux26-rebuild-with-initramfs build target:

op-build linux26-rebuild-with-initramfs

If you're using a fresh op-build tree, this will take a little while, as it downloads and builds a toolchain, userspace and kernel. Once that's complete, you'll have a built kernel image in the output tree:

 output/build/images/zImage.epapr

Transfer this file to the BMC, and flash using pflash. We specify the -P <PARTITION> argument to write to a single PNOR partition:

pflash -P BOOTKERNEL -e -p /tmp/zImage.epapr
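
If your BMC accepts SSH connections, the transfer itself can be done with scp (a sketch; the BMC address is a placeholder):

$ scp output/build/images/zImage.epapr root@<bmc-address>:/tmp/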

And that's it! The next boot will use your newly-built kernel in the petitboot bootloader environment.

Out-of-tree kernel builds

If you'd like to replace the kernel from op-build with one from your own external source tree, you have two options: either point op-build at your own tree, or build your own kernel using the initramfs that op-build has produced.

For the former, you can override certain op-build variables to reference a separate source. For example, to use an external git tree:

op-build LINUX_SITE=git://github.com/jk-ozlabs/linux LINUX_VERSION=v3.19

See Customising OpenPower firmware for other examples of using external sources in op-build.

The latter option involves doing a completely out-of-op-build build of a kernel, but referencing the initramfs created by op-build (which is in output/images/rootfs.cpio.xz). From your kernel source directory, add the CONFIG_INITRAMFS_SOURCE argument, specifying the relevant initramfs. For example:

make O=obj ARCH=powerpc \
    CONFIG_INITRAMFS_SOURCE=../op-build/output/images/rootfs.cpio.xz
on June 24, 2015 02:26 AM

I installed Ubuntu 15.04 on my Dell XPS 13 (9343) Developer Edition and found Bluetooth to be non-functional. I read several posts on the web that called for getting the firmware from Windows and using a tool to convert the hex to hcd. I knew that Bluetooth had been working on the unit prior to replacing the preloaded 14.04, so I plugged in my recovery USB stick and poked around to see if I could find the firmware. After a little digging I found a package that contained the firmware and extracted it. (Note: the install will put the firmware in /lib/firmware, but it needs to be in /lib/firmware/brcm.)

Dell Recovery XPS 13 9343 Developer Edition

1. Go to the debs folder and find the bt-dw1560-firmware_1.0_all.deb.

Broadcom Debian Package

2. Open this file with Archive Manager.

3. Navigate to /usr/share/bt-dw1560/firmware/

Archive Manager Extract Firmware

4. Extract the fw-0a5c_216f.hcd file

5. Move it to /lib/firmware/brcm with the name BCM20702A0-0a5c-216f.hcd (note: your path may vary – I put mine in my home directory)

sudo mv fw-0a5c_216f.hcd /lib/firmware/brcm/BCM20702A0-0a5c-216f.hcd

6. Unload the btusb module using the command:

sudo modprobe -r btusb

7. Load the btusb module using the command:

sudo modprobe btusb

8. Bluetooth should now be working.

After following this process I was able to pair devices and send files from my phone to my computer.
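
If you prefer the command line to Archive Manager, steps 2-5 can be collapsed into a dpkg-deb extraction (a sketch, assuming the deb has been copied to the current directory):

$ dpkg-deb -x bt-dw1560-firmware_1.0_all.deb extracted/
$ sudo cp extracted/usr/share/bt-dw1560/firmware/fw-0a5c_216f.hcd \
    /lib/firmware/brcm/BCM20702A0-0a5c-216f.hcd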


on June 24, 2015 02:09 AM

June 23, 2015

Meeting Minutes

IRC Log of the meeting.

Meeting minutes.

Agenda

20150623 Meeting Agenda


Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:

  • http://kernel.ubuntu.com/reports/kt-meeting.txt


Status: Wily Development Kernel

Our wily kernel remains rebased on 4.0.5. We have cleaned up some
config discrepancies and plan to upload to our canonical-kernel-team
ppa today. We'll then hopefully get that copied out to the archive
sometime this week or next. Also, with 4.1 final having just been
released, we'll get our master branch in
git://kernel.ubuntu.com/ubuntu/unstable.git rebased. We will then plan
on rebasing Wily to 4.1 and uploading as well.
-----
Important upcoming dates:

  • https://wiki.ubuntu.com/WilyWerewolf/ReleaseSchedule
    Thurs June 25 – Alpha 1 (~2 days away)
    Thurs July 30 – Alpha 2 (~5 weeks away)


Status: CVE’s

The current CVE status can be reviewed at the following link:

  • http://kernel.ubuntu.com/reports/kernel-cves.html


Status: Stable, Security, and Bugfix Kernel Updates – Precise/Trusty/Utopic/Vivid

Status for the main kernels, until today:

  • Precise – Verification & Testing
  • Trusty – Verification & Testing
  • Utopic – Verification & Testing
  • Vivid – Verification & Testing

    Current opened tracking bugs details:

  • http://kernel.ubuntu.com/sru/kernel-sru-workflow.html
    For SRUs, SRU report is a good source of information:
  • http://kernel.ubuntu.com/sru/sru-report.html

    Schedule:

    cycle: 13-Jun through 04-Jul
    ====================================================================
    12-Jun Last day for kernel commits for this cycle
    14-Jun – 20-Jun Kernel prep week.
    21-Jun – 04-Jul Bug verification; Regression testing; Release


Open Discussion or Questions? Raise your hand to be recognized

No open discussions.

on June 23, 2015 05:15 PM

A thing of beauty

If you read my last post, perhaps you followed the embedded instructions and ran hundreds of LXD system containers on your own Ubuntu machine.

Or perhaps you're already a Docker enthusiast and your super savvy microservice architecture orchestrates dozens of applications among a pile of process containers.

Either way, the massive multiplication of containers everywhere introduces an interesting networking problem:
"How do thousands of containers interact with thousands of other containers efficiently over a network?  What if every one of those containers could just route to one another?"

Canonical is pleased to introduce today an innovative solution that addresses this problem in perhaps the most elegant and efficient manner to date!  We call it "The Fan" -- an extension of the network tunnel driver in the Linux kernel.  The fan was conceived by Mark Shuttleworth and John Meinel, and implemented by Jay Vosburgh and Andy Whitcroft.

A Basic Overview

Each container host has a "fan bridge" that enables all of its containers to deterministically map network traffic to any other container on the fan network.  I say "deterministically", in that there are no distributed databases, no consensus protocols, and no more overhead than IP-IP tunneling.  [A more detailed technical description can be found here.]  Quite simply, a /16 network gets mapped onto an unused /8 network, and container traffic is routed by the host via an IP tunnel.
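
To make that concrete, here is the arithmetic for the fan used in the demo below (illustrative only):

# Overlay 250.0.0.0/8 stretched over underlay 172.30.0.0/16:
# each host's final 16 bits select a /24 slice of the overlay.
#   host 172.30.0.28  ->  containers in 250.0.28.0/24
#   host 172.30.0.27  ->  containers in 250.0.27.0/24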



A Demo

Interested yet?  Let's take it for a test drive in AWS...


First, launch two instances in EC2 (or your favorite cloud) in the same VPC.  Ben Howard has created special test images for AWS and GCE, which include a modified Linux kernel, a modified iproute2 package, a new fanctl package, and Docker installed by default.  You can find the right AMIs here.
Build and Publish report for trusty 20150621.1228.
-----------------------------------
BUILD INFO:
VERSION=14.04-LTS
STREAM=testing
BUILD_DATE=
BUG_NUMBER=1466602
STREAM="testing"
CLOUD=CustomAWS
SERIAL=20150621.1228
-----------------------------------
PUBLICATION REPORT:
NAME=ubuntu-14.04-LTS-testing-20150621.1228
SUITE=trusty
ARCH=amd64
BUILD=core
REPLICATE=1
IMAGE_FILE=/var/lib/jenkins/jobs/CloudImages-Small-CustomAWS/workspace/ARCH/amd64/trusty-server-cloudimg-CUSTOM-AWS-amd64-disk1.img
VERSION=14.04-LTS-testing-20150621.1228
INSTANCE_BUCKET=ubuntu-images-sandbox
INSTANCE_eu-central-1=ami-1aac9407
INSTANCE_sa-east-1=ami-59a22044
INSTANCE_ap-northeast-1=ami-3ae2453a
INSTANCE_eu-west-1=ami-d76623a0
INSTANCE_us-west-1=ami-238d7a67
INSTANCE_us-west-2=ami-53898c63
INSTANCE_ap-southeast-2=ami-ab95ef91
INSTANCE_ap-southeast-1=ami-98e9edca
INSTANCE_us-east-1=ami-b1a658da
EBS_BUCKET=ubuntu-images-sandbox
VOL_ID=vol-678e2c29
SNAP_ID=snap-efaa288b
EBS_eu-central-1=ami-b4ac94a9
EBS_sa-east-1=ami-e9a220f4
EBS_ap-northeast-1=ami-1aee491a
EBS_eu-west-1=ami-07602570
EBS_us-west-1=ami-318c7b75
EBS_us-west-2=ami-858b8eb5
EBS_ap-southeast-2=ami-558bf16f
EBS_ap-southeast-1=ami-faeaeea8
EBS_us-east-1=ami-afa25cc4
----
6cbd6751-6dae-4da7-acf3-6ace80c01acc




Next, ensure that those two instances can talk to one another.  Here, I tested that in both directions, using both ping and nc.

ubuntu@ip-172-30-0-28:~$ ifconfig eth0
eth0 Link encap:Ethernet HWaddr 0a:0a:8f:f8:cc:21
inet addr:172.30.0.28 Bcast:172.30.0.255 Mask:255.255.255.0
inet6 addr: fe80::80a:8fff:fef8:cc21/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:9001 Metric:1
RX packets:2904565 errors:0 dropped:0 overruns:0 frame:0
TX packets:9919258 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:13999605561 (13.9 GB) TX bytes:14530234506 (14.5 GB)

ubuntu@ip-172-30-0-28:~$ ping -c 3 172.30.0.27
PING 172.30.0.27 (172.30.0.27) 56(84) bytes of data.
64 bytes from 172.30.0.27: icmp_seq=1 ttl=64 time=0.289 ms
64 bytes from 172.30.0.27: icmp_seq=2 ttl=64 time=0.201 ms
64 bytes from 172.30.0.27: icmp_seq=3 ttl=64 time=0.192 ms

--- 172.30.0.27 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1998ms
rtt min/avg/max/mdev = 0.192/0.227/0.289/0.045 ms
ubuntu@ip-172-30-0-28:~$ nc -l 1234
hi mom
─────────────────────────────────────────────────────────────────────
ubuntu@ip-172-30-0-27:~$ ifconfig eth0
eth0 Link encap:Ethernet HWaddr 0a:26:25:9a:77:df
inet addr:172.30.0.27 Bcast:172.30.0.255 Mask:255.255.255.0
inet6 addr: fe80::826:25ff:fe9a:77df/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:9001 Metric:1
RX packets:11157399 errors:0 dropped:0 overruns:0 frame:0
TX packets:1671239 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:16519319463 (16.5 GB) TX bytes:12019363671 (12.0 GB)

ubuntu@ip-172-30-0-27:~$ ping -c 3 172.30.0.28
PING 172.30.0.28 (172.30.0.28) 56(84) bytes of data.
64 bytes from 172.30.0.28: icmp_seq=1 ttl=64 time=0.245 ms
64 bytes from 172.30.0.28: icmp_seq=2 ttl=64 time=0.185 ms
64 bytes from 172.30.0.28: icmp_seq=3 ttl=64 time=0.186 ms

--- 172.30.0.28 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1998ms
rtt min/avg/max/mdev = 0.185/0.205/0.245/0.030 ms
ubuntu@ip-172-30-0-27:~$ echo "hi mom" | nc 172.30.0.28 1234

If that doesn't work, you might have to adjust your security group until it does.


Now, import the Ubuntu image into Docker on both instances.

$ sudo docker pull ubuntu:latest
Pulling repository ubuntu
...
e9938c931006: Download complete
9802b3b654ec: Download complete
14975cc0f2bc: Download complete
8d07608668f6: Download complete

Now, let's create a fan bridge on each of those two instances.  We can create it on the command line using the new fanctl command, or we can put it in /etc/network/interfaces.d/eth0.cfg.

We'll do the latter, so that the configuration is persistent across boots.

$ cat /etc/network/interfaces.d/eth0.cfg
# The primary network interface
auto eth0
iface eth0 inet dhcp
up fanctl up 250.0.0.0/8 eth0/16 dhcp
down fanctl down 250.0.0.0/8 eth0/16

$ sudo ifup --force eth0

Now, let's look at our ifconfig...

$ ifconfig
docker0 Link encap:Ethernet HWaddr 56:84:7a:fe:97:99
inet addr:172.17.42.1 Bcast:0.0.0.0 Mask:255.255.0.0
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)

eth0 Link encap:Ethernet HWaddr 0a:0a:8f:f8:cc:21
inet addr:172.30.0.28 Bcast:172.30.0.255 Mask:255.255.255.0
inet6 addr: fe80::80a:8fff:fef8:cc21/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:9001 Metric:1
RX packets:2905229 errors:0 dropped:0 overruns:0 frame:0
TX packets:9919652 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:13999655286 (13.9 GB) TX bytes:14530269365 (14.5 GB)

fan-250-0-28 Link encap:Ethernet HWaddr 00:00:00:00:00:00
inet addr:250.0.28.1 Bcast:0.0.0.0 Mask:255.255.255.0
inet6 addr: fe80::8032:4dff:fe3b:a108/64 Scope:Link
UP BROADCAST MULTICAST MTU:1480 Metric:1
RX packets:304246 errors:0 dropped:0 overruns:0 frame:0
TX packets:245532 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:13697461502 (13.6 GB) TX bytes:37375505 (37.3 MB)

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:1622 errors:0 dropped:0 overruns:0 frame:0
TX packets:1622 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:198717 (198.7 KB) TX bytes:198717 (198.7 KB)

lxcbr0 Link encap:Ethernet HWaddr 3a:6b:3c:9b:80:45
inet addr:10.0.3.1 Bcast:0.0.0.0 Mask:255.255.255.0
inet6 addr: fe80::386b:3cff:fe9b:8045/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:648 (648.0 B)

tunl0 Link encap:IPIP Tunnel HWaddr
UP RUNNING NOARP MTU:1480 Metric:1
RX packets:242799 errors:0 dropped:0 overruns:0 frame:0
TX packets:302666 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:12793620 (12.7 MB) TX bytes:13697374375 (13.6 GB)

Pay special attention to the new fan-250-0-28 device!  I've only shown this on one of my instances, but you should check both.

Now, let's tell Docker to use that device as its default bridge.

$ fandev=$(ifconfig | grep ^fan- | awk '{print $1}')
$ echo $fandev
fan-250-0-28
$ echo "DOCKER_OPTS='-d -b $fandev --mtu=1480 --iptables=false'" | \
sudo tee -a /etc/default/docker*

Make sure you restart the docker.io service.  Note that it might be called docker.

$ sudo service docker.io restart || sudo service docker restart

Now we can launch a Docker container in each of our two EC2 instances...

$ sudo docker run -it ubuntu
root@261ae39d90db:/# ifconfig eth0
eth0 Link encap:Ethernet HWaddr e2:f4:fd:f7:b7:f5
inet addr:250.0.28.3 Bcast:0.0.0.0 Mask:255.255.255.0
inet6 addr: fe80::e0f4:fdff:fef7:b7f5/64 Scope:Link
UP BROADCAST RUNNING MTU:1480 Metric:1
RX packets:7 errors:0 dropped:2 overruns:0 frame:0
TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:558 (558.0 B) TX bytes:648 (648.0 B)


And here's a second one, on my other instance...

sudo docker run -it ubuntu
root@ddd943163843:/# ifconfig eth0
eth0 Link encap:Ethernet HWaddr 66:fa:41:e7:ad:44
inet addr:250.0.27.3 Bcast:0.0.0.0 Mask:255.255.255.0
inet6 addr: fe80::64fa:41ff:fee7:ad44/64 Scope:Link
UP BROADCAST RUNNING MTU:1480 Metric:1
RX packets:12 errors:0 dropped:2 overruns:0 frame:0
TX packets:13 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:936 (936.0 B) TX bytes:1026 (1.0 KB)

Now, let's send some traffic back and forth!  Again, we can use ping and nc.



root@261ae39d90db:/# ping -c 3 250.0.27.3
PING 250.0.27.3 (250.0.27.3) 56(84) bytes of data.
64 bytes from 250.0.27.3: icmp_seq=1 ttl=62 time=0.563 ms
64 bytes from 250.0.27.3: icmp_seq=2 ttl=62 time=0.278 ms
64 bytes from 250.0.27.3: icmp_seq=3 ttl=62 time=0.260 ms
--- 250.0.27.3 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1998ms
rtt min/avg/max/mdev = 0.260/0.367/0.563/0.138 ms
root@261ae39d90db:/# echo "here come the bits" | nc 250.0.27.3 9876
root@261ae39d90db:/#
─────────────────────────────────────────────────────────────────────
root@ddd943163843:/# ping -c 3 250.0.28.3
PING 250.0.28.3 (250.0.28.3) 56(84) bytes of data.
64 bytes from 250.0.28.3: icmp_seq=1 ttl=62 time=0.434 ms
64 bytes from 250.0.28.3: icmp_seq=2 ttl=62 time=0.258 ms
64 bytes from 250.0.28.3: icmp_seq=3 ttl=62 time=0.269 ms
--- 250.0.28.3 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1998ms
rtt min/avg/max/mdev = 0.258/0.320/0.434/0.081 ms
root@ddd943163843:/# nc -l 9876
here come the bits

Alright, so now let's really bake your noodle...

That 250.0.0.0/8 network can actually be any /8 network.  It could be a 10.* network or any other /8 that you choose.  I've chosen to use something in the reserved Class E range, 240.* - 255.* so as not to conflict with any other routable network.

Finally, let's test the performance a bit using iperf and Amazon's 10 Gbps instances!

So I fired up two c4.8xlarge instances, and configured the fan bridge there.
$ fanctl show
Bridge Overlay Underlay Flags
fan-250-0-28 250.0.0.0/8 172.30.0.28/16 dhcp host-reserve 1

And
$ fanctl show
Bridge Overlay Underlay Flags
fan-250-0-27 250.0.0.0/8 172.30.0.27/16 dhcp host-reserve 1

Would you believe 5.46 Gigabits per second, between two Docker instances, directly addressed over a network?  Witness...

Server 1...

root@84364bf2bb8b:/# ifconfig eth0
eth0 Link encap:Ethernet HWaddr 92:73:32:ac:9c:fe
inet addr:250.0.27.2 Bcast:0.0.0.0 Mask:255.255.255.0
inet6 addr: fe80::9073:32ff:feac:9cfe/64 Scope:Link
UP BROADCAST RUNNING MTU:1480 Metric:1
RX packets:173770 errors:0 dropped:2 overruns:0 frame:0
TX packets:107628 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:6871890397 (6.8 GB) TX bytes:7190603 (7.1 MB)

root@84364bf2bb8b:/# iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[ 4] local 250.0.27.2 port 5001 connected with 250.0.28.2 port 35165
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-10.0 sec 6.36 GBytes 5.46 Gbits/sec

And Server 2...

root@04fb9317c269:/# ifconfig eth0
eth0 Link encap:Ethernet HWaddr c2:6a:26:13:c5:95
inet addr:250.0.28.2 Bcast:0.0.0.0 Mask:255.255.255.0
inet6 addr: fe80::c06a:26ff:fe13:c595/64 Scope:Link
UP BROADCAST RUNNING MTU:1480 Metric:1
RX packets:109230 errors:0 dropped:2 overruns:0 frame:0
TX packets:150164 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:28293821 (28.2 MB) TX bytes:6849336379 (6.8 GB)

root@04fb9317c269:/# iperf -c 250.0.27.2
multicast ttl failed: Invalid argument
------------------------------------------------------------
Client connecting to 250.0.27.2, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[ 3] local 250.0.28.2 port 35165 connected with 250.0.27.2 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 6.36 GBytes 5.47 Gbits/sec

Multiple containers, on separate hosts, directly addressable to one another with nothing more than a single network device on each host.  Deterministic routes.  Blazing fast speeds.  No distributed databases.  No consensus protocols.  Not an SDN.  This is just amazing!

RFC

Give it a try and let us know what you think!  We'd love to get your feedback and use cases as we work the kernel and userspace changes upstream.

Over the next few weeks, you'll see the fan patches landing in Wily, and backported to Trusty and Vivid.  We are also drafting an RFC, as we think that other operating systems and the container world and the Internet at large would benefit from Fan Networking.

I'm already a fan!
Dustin
on June 23, 2015 03:58 PM

June 22, 2015

Forum Staff Additions

Forums Council

It was that time again when the Forum Council considers welcoming new moderators, as a few people had stood down in recent months. It is always a refreshing and exciting process to look for new people to add to a team.

With that in mind, please welcome our newest additions to the Forums Staff Team, Ajgreeny and PaulW2U.

Both have shown sustained and helpful contributions to the forums and meet the requirements stated in the team nomination wiki, namely:

  1. Be consistently helpful and active on the forum.
  2. Be an Ubuntu Member by way of Forum contributions or an Ubuntu Member who is active on the forum.
  3. Have no infractions, or no current infractions (depending on the severity, at least 18 months old).
  4. Have demonstrated a consistent attitude of friendliness and kindness, and shown a pattern of helpfulness in their posts.

Congratulations from the Ubuntu Forums Council.


on June 22, 2015 05:16 PM

Introducing Vanilla

Canonical Design Team

Why we needed a new framework

Some time ago the web team at Canonical developed a CSS framework that we called ‘Guidelines’. Guidelines helped us maintain our online visual language across all our sites, and comprised a number of base and component Sass files which were combined and served as a monolithic CSS file on our asset server.

We began to use Guidelines as the baseline styles for a number of our sites; www.ubuntu.com, www.canonical.com, etc.

This worked well until we needed to update a component or base style. With each edit we had to check it wasn’t going to break any of the sites we knew used it, and hope it didn’t break the sites we were not aware of.

Another deciding factor for us was the feedback we started receiving as internal teams adopted Guidelines. We received a resounding request to break the components into modular parts so teams could customise which ones to include. Another request we heard a lot was the ability to pull the Sass files locally for offline development while keeping the styling up to date.

Therefore, we set out to develop a new and improved build and delivery system, which led us to a whole new architecture, and we completely refactored the Sass infrastructure.

This gave birth to Vanilla: our new and improved CSS framework.

Building Vanilla

The first decision we made was to remove the “latest” version target, so sites can no longer link directly to the bleeding-edge version of the styles. Instead, sites should target a specific version of Vanilla and manually upgrade as new versions are released. This helps twofold: shifting testing and QA to the maintainers of each particular site allows for staggered updates, without a sweeping update to all sites at once; and it allows us to modify current modules without affecting sites until they choose to update.

We knew that we needed to make the update process as easy as possible to help other teams keep their styles up to date. We decided against using Bower as our package manager and chose NPM to reduce the number of dependencies required to use Vanilla.
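
With NPM, targeting a specific version of Vanilla just means installing a pinned release rather than a range; a hypothetical example (the version number is purely illustrative):

$ npm install vanilla-framework@0.0.15 --save --save-exact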

We knew we needed a build system and, as it was a greenfield project, the world was our oyster. Really it came down to Gulp vs Grunt. We had a quick discussion and decided to run with Gulp as we had more experience with it. Gulp had all the plugins we required and we all preferred the Gulp syntax instead of the Grunt spaghetti.

We had a number of JavaScript functions in Guidelines that added simple dynamic functionality to our sites, such as equal heights or tabbed content. The team decided to try to remove the JS dependency from Vanilla and make it a pure CSS framework. So we stepped through each function and first worked out whether we required it at all. If so, we tried to develop a CSS replacement with an acceptable degradation for less modern browsers. We managed to cover all required functions with CSS and removed some older functionality we no longer wanted.

Using Vanilla

Importing Vanilla

To start using Vanilla, simply run $ npm install vanilla-framework --save in the root of your site. Then in your main stylesheet simply add:


@import "../path/to/node_modules/vanilla-framework/build/scss/build.scss";
@include vanilla;

The first line in the code above imports the main build file of the vanilla-framework. The second line then includes the framework's styles, as Vanilla is entirely controlled with mixins, which will be explained in a future post.

Now that you have Vanilla imported correctly, you should see some default styling applied to your site. To take full advantage of the framework, a small number of markup changes are required.

Markup amendments

There are a number of classes used by Vanilla to set up the site wrappers. Please refer to the source for our demo site.

Vanilla-framework

Conclusion

This is still a work-in-progress project, but we are close to releasing www.ubuntu.com and www.canonical.com based on Vanilla. Please do use Vanilla; any feedback would be very much appreciated.

For more information please visit the Vanilla project page.

on June 22, 2015 12:45 PM

Canonical just announced a new, free, and very cool way to provide thousands of IP addresses to each of your VMs on AWS. Check out the fan networking on Ubuntu wiki page to get started, or read Dustin’s excellent fan walkthrough. Carry on here for a simple description of this happy little dose of awesome.

Containers are transforming the way people think about virtual machines (LXD) and apps (Docker). They give us much better performance and much better density for virtualisation in LXD, and with Docker, they enable new ways to move applications between dev, test and production. These two aspects of containers – the whole machine container and the process container – are perfectly complementary. You can launch Docker process containers inside LXD machine containers very easily. LXD feels like KVM, only faster; Docker feels like the core unit of a PaaS.

The density numbers are pretty staggering. It’s *normal* to run hundreds of containers on a laptop.

And that is what creates one of the real frustrations of the container generation, which is a shortage of easily accessible IP addresses.

It seems weird that in this era of virtual everything, a number is hard to come by. The restrictions are real, however, because AWS artificially restricts the number of IP addresses you can bind to an interface on your VM. You have to buy a bigger VM to get more IP addresses, even if you don’t need extra compute. Also, IPv6 is nowhere to be seen on the clouds, so addresses are more scarce than they need to be in the first place.

So the key problem is that you want to find a way to get tens or hundreds of IP addresses allocated to each VM.

Most workarounds to date have involved “overlay networking”. You make a database in the cloud to track which IP address is attached to which container on each host VM. You then create tunnels between all the hosts so that everything can talk to everything. This works, kinda. It results in a mess of tunnels and much more complex routing than you would otherwise need. It also ruins performance for things like multicast and broadcast, because those are now exploding off through a myriad twisty tunnels, all looking the same.

The Fan is Canonical’s answer to the container networking challenge.

We recognised that container networking is unusual, and quite unlike true software-defined networking, in that the number of containers you want on each host is probably roughly the same. You want to run a couple hundred containers on each VM. You also don’t (in the docker case) want to live migrate them around, you just kill them and start them again elsewhere. Essentially, what you need is an address multiplier – anywhere you have one interface, it would be handy to have 250 of them instead.

So we came up with the “fan”. It’s called that because you can picture it as a fan behind each of your existing IP addresses, with another 250 IP addresses available. Anywhere you have an IP you can make a fan, and every fan gives you 250x the IP addresses. More than that, you can run multiple fans, so each IP address could stand in front of thousands of container IP addresses.

We use standard IPv4 addresses, just like overlays. What we do that’s new is allocate those addresses mathematically, with an algorithmic projection from your existing subnet / network range to the expanded range. That results in a very flat address structure – you get exactly the same number of overlay addresses for each IP address on your network, perfect for a dense container setup.

Because we’re mapping addresses algorithmically, we avoid any need for a database of overlay addresses per host. We can calculate instantly, with no database lookup, the host address for any given container address.
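
As a quick sketch of that calculation, using the 250.0.0.0/8-over-172.30.0.0/16 fan from Dustin’s walkthrough (illustrative only; the mapping simply moves the host’s low octets into the middle of the overlay address):

# Recover a container's host from its fan address -- pure arithmetic,
# no lookup. Overlay 250.0.0.0/8 over underlay 172.30.0.0/16:
container=250.0.28.3
IFS=. read -r a b c d <<< "$container"
echo "host = 172.30.$b.$c"    # -> host = 172.30.0.28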

More importantly, we can route to these addresses much more simply, with a single route to the “fan” network on each host, instead of the maze of twisty network tunnels you might have seen with other overlays.

You can expand any network range with any other network range. The main idea, though, is that people will expand a class B range in their VPC with a class A range. Who has a class A range lying about? You do! It turns out that there are a couple of class A networks that are allocated and which publish no routes on the Internet.

We also plan to submit an IETF RFC for the fan, for address expansion. It turns out that “Class E” networking was reserved but never defined, and we’d like to think of that as a new “Expansion” class. There are several class A network addresses reserved for Class E, which won’t work on the Internet itself. While you can use the fan with unused class A addresses (and there are several good candidates for use!) it would be much nicer to do this as part of a standard.

The fan is available on Ubuntu on AWS and soon on other clouds, for your testing and container experiments! Feedback is most welcome while we refine the user experience.

Configuration on Ubuntu is super-simple. Here’s an example:

In /etc/network/fan:

# fan 241
241.0.0.0/8 172.16.3.0/16 dhcp

In /etc/network/interfaces:

iface eth0 inet static
address 172.16.3.4
netmask 255.255.0.0
up fanctl up 241.0.0.0/8 172.16.3.4/16
down fanctl down 241.0.0.0/8 172.16.3.4/16

This will map 250 addresses on 241.0.0.0/8 to your 172.16.0.0/16 hosts.

Docker, LXD and Juju integration is just as easy. For docker, edit /etc/default/docker.io, adding:

DOCKER_OPTS="-d -b fan-10-3-4 --mtu=1480 --iptables=false"

You must then restart docker.io:

sudo service docker.io restart

At this point, a Docker instance started via, e.g.,

docker run -it ubuntu:latest

will be run within the specified fan overlay network.

Enjoy!

on June 22, 2015 10:40 AM

June 20, 2015

Despite the fact that it's been over 2 years since I posted Part 1, I got bored and decided I should take another look at the Patriot Gauntlet Node. So I grabbed the latest firmware from Patriot's website (V21_1.2.4.6) and, using the same binwalk techniques described in the first post, extracted it.
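
For reference, the extraction boils down to binwalk's extract mode (the firmware filename here is a placeholder for whatever Patriot's download unpacks to):

$ binwalk -e gauntlet_node_V21_1.2.4.6.bin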

So, the TL;DR is: it's unexciting, because Patriot makes no effort to secure the device. Their security model seems to be "if you're on the network, you own the device", which is pretty much the case. Not only can you enable telnet as I've discussed before, there's even a convenient web-based interface to run commands: http://10.10.10.254:8088/adm/system_command.asp. Oh, and it's not authenticated, even if you set an admin password (the setting for which is tucked away at http://10.10.10.254:8088/adm/management.asp).

The device runs two webservers: on port 80 you have httpd from busybox, and on port 8088 you have a proprietary embedded webserver called GoAhead. Despite what the file extensions would have you believe, it uses not ASP but an embedded JavaScript interpreter called Ejscript to generate active pages.

I don't intend to spend much more time on this device from a security PoV: it doesn't seem intended to be secure at all, so it's not like there's anything to break. The device is essentially pre-rooted, so go to town and have fun!

on June 20, 2015 10:13 PM

June 19, 2015


On Sunday night (June 21st), the friendly folks that bring you Ubuntu, Juju, LXD, and a whole bunch of other goodness are hosting a special, pre-Dockercon event that's all about service modelling, orchestration, and making all the container-y Docker-y stuff work well in the DevOps world.

Interested in systems architecture and design? This event is for you.

We have a panel of industry luminaries assembled to discuss things like:

  • What is the importance of service modelling?
  • What does orchestration really mean in real-world terms?
  • Where is the management of complex systems headed?

Expect lively discussion, debate, and a healthy dose of the future.

Additionally, we'll have lightning talks before the panel.

Best of all, it's free, and light refreshments will be provided.

Register now! We're almost out of space...

http://www.eventbrite.com/e/conducting-systems-and-services-an-evening-a...

Hope to see you there.

--
If you have no idea what this world of Docker, containers, orchestration, etc. etc. is all about, then I recommend a couple of articles to get your wheels turning:

http://blog.circleci.com/its-the-future/
http://blog.circleci.com/it-really-is-the-future

on June 19, 2015 07:20 PM