September 21, 2014

Intel has revealed an interesting new concept called the Connected Wheelchair, a Linux-powered system that collects data from users and lets them share that information with the community.

When people hear Intel, they usually think of processors, but the company also makes a host of other products, including very cool or useful concepts that might have some very important applications in everyday life.

The latest initiative is called the Connected Wheelchair and the guys from Intel even convinced the famous Stephen Hawking to help them spread the word about this amazing project. It’s still in the testing phases and it’s one of those products that might show a lot of promise but never go anywhere because there is no one to produce and sell it.

Source:

http://news.softpedia.com/news/Stephen-Hawking-Talks-About-the-Linux-Based-Intel-Connected-Wheelchair-Project-458539.shtml

Submitted by: Silviu Stahie

on September 21, 2014 05:07 AM

September 20, 2014

S07E25 – The One Where the Monkey Gets Away

Ubuntu Podcast from the UK LoCo

Just Laura Cowen and Alan Pope are in Studio L for Season Seven, Episode Twenty-Five of the Ubuntu Podcast!

Apologies for the terrible audio quality in this episode. It turns out one of the channels on the compressor is broken and we didn’t realise until much later on.

In this week’s show:-

We’ll be back next week, when we’ll have some interviews from JISC, and we’ll go through your feedback.

Please send your comments and suggestions to: podcast@ubuntu-uk.org
Join us on IRC in #uupc on Freenode
Leave a voicemail via phone: +44 (0) 203 298 1600, sip: podcast@sip.ubuntu-uk.org and skype: ubuntuukpodcast
Follow us on Twitter
Find our Facebook Fan Page
Follow us on Google+

on September 20, 2014 01:30 PM

Canonical has shared some details about a number of Thunderbird vulnerabilities identified in its Ubuntu 14.04 LTS and Ubuntu 12.04 LTS operating systems, and the devs have pushed a new version into the repositories.

The Thunderbird email client was updated a couple of days ago and the new version has landed pretty quickly in the Ubuntu repos. This means that it should be available when users update their systems.

For example, “Abhishek Arya discovered a use-after-free during DOM interactions with SVG. If a user were tricked in to opening a specially crafted message with scripting enabled, an attacker could potentially exploit this to cause a denial of service via application crash or execute arbitrary code with the privileges of the user invoking Thunderbird,” reads the announcement.

Source:

http://news.softpedia.com/news/Mozilla-Thunderbird-13-1-1-Lands-in-the-Ubuntu-458664.shtml

Submitted by: Silviu Stahie

on September 20, 2014 05:06 AM

September 19, 2014

auron

There were many announcements made at the 2014 Intel Developers Conference, but a Broadwell-powered Chromebook was not on the list. That is not to say it’s not coming.

The Intel Broadwell Core M 5Y70 system-on-a-chip (SoC) is a great fit for a Chromebook.

Central Processing Unit

You won’t find significant innovation or design changes in the Broadwell central processing unit (CPU), as the primary focus of this release was to shrink the manufacturing process from 22nm to 14nm to gain power efficiency. The result is a 4.5W power draw, which should translate into 8 or more hours of battery life.

From a getting-stuff-done perspective, it looks like the CPU will perform as well as or better than the Haswell Celeron 2955U found in many Chrome devices. Intel states twice the performance of a four-year-old i5. Some folks are speculating that the performance will equal the new i3-4005U, but I anticipate Octane scores will equal or exceed 11,000.

Graphical Processing Unit

This is where it gets interesting. I assume Intel is feeling some pressure from nVidia, Imagination Technologies, and new entrants such as Rockchip, which compelled it to enhance the performance of the graphical processing unit (GPU). Indeed, preliminary 3DMark benchmarks show a 40% advantage over the 32-bit nVidia Tegra K1.

Will there be a Chromebook?

Francois Beaufort blogged that a Broadwell development board has been added to the Chrome OS repository, which supports the fact that a Chromebook is under consideration. As a platinum member of the Linux Foundation, Intel has the knowledge and experience to optimize Chrome OS to leverage the features of its processors, and I am confident they will do so for Broadwell.

The big question becomes price. Wholesale pricing for this SoC is expected to exceed $250, which is a premium price for an entry-level Chromebook but may work for a professional-grade device (viz. a Pixel 2) targeted at college students and the enterprise.

Pixel

  • High Quality Touch IPS FHD 13 inch screen
  • 4 GB RAM
  • 64 GB SSD
  • Backlit Keyboard
  • Wifi ac / Bluetooth 4.0 / USB 3.0 / HDMI

Retail priced at $599.

The post The Promise of a Broadwell Chromebook appeared first on john's journal.

on September 19, 2014 10:53 PM

Greetings folks,

The Juju Ecosystem team at Canonical (joined remotely by community members) recently had a developer sprint in beautiful Dillon, Colorado to Get Things Done(™).

Here are the highlights:

Automated Charm Testing

Tim Van Steenburgh and Marco Ceppi made a ton of progress with automated charm testing, here’s the prelim state-of-the-world:

Jenkins Jobs Fired off: 22

This enabled us to dedicate hours of block time to getting as many of those red charms to green as possible. The priority for our team over the next few weeks will be fixing these charms, and of course, gating new charms via this method, as well as kicking back broken charms to personal namespaces.

Ben Saller helped out by prototyping “Dockerizing” charm testing so that developers can test their charms in a fully containerized way. This will help CI by giving us isolation, density, and reliability.

Charm tests are now launched from the review queue to help with gating based on passing tests.

Thanks to Aaron Bentley for supporting our efforts here!

Review Queue

The Charmers (Marco Ceppi, Charles Butler, and Matt Bruzek) dedicated time to getting through reviews. The whole team spent time creating fixes for the automated test results mentioned above. We’re in great shape to drive this down and never let it get out of control again, thanks to our new team review guidelines: http://review.juju.solutions/

The goal was to help submitters and reviewers know where they are in a review, and what next steps are needed.

Here are the numbers:

  • Reviews Performed: 189
  • Commits: 228
  • Charms Promulgated: 10
  • Charms Unpromulgated: 7
  • Lines of Code touched: 34109 (artificially high due to SVG icons, heh)
  • Reviews Submitted: 84
  • Energy Drinks: 80

Some new features:

  • Users can now log in with Ubuntu SSO and see what reviews they have submitted, and reviewed
  • Ability to query the review system and search/filter reviews based on several metrics (http://review.juju.solutions/search)
  • Ability for charmers to fire off an automated test of a charm on demand right from the queue. When an MP is done against a charm, we’ll now automatically reply to the MP with a link to the test results. \o/
  • You can now “lock” a review when you’re doing one so that the rest of the community can see when a review is claimed so we don’t duplicate work. (Essential for mass reviews!)
  • Queues divided and separated to highlight priority items and items for different teams

CloudFoundry

  • Improving the downloader/packaging story so it’s more reusable
  • Cory Johns developed a pattern for charm helpers for CloudFoundry; the CF sub-team feels this will be a useful pattern for other charmers. They’re calling it the “charm services framework”, expect to hear more from them in the future.
  • We were able to replicate the Juju/Rails Framework deployment of an application and compare doing the same thing on CF: https://plus.google.com/117270619435440230164/posts/gHgB6k5f7Fv
  • Whit concentrated on tracking changes to Pivotal’s build procedures.

Charm Developer Workflow

This involves two things:

“The first 30 minutes of Juju”

This primarily involved finding and fixing issues with our user and developer workflow. This included doing some initial work on what we’re calling “Landing Pages”, which will be topic based landing pages for different places where people can use Juju, so for example, a “Big Data” page with specific solutions for that field. We expect to have these for a bunch of different fields of study.

We have identified the following 5 charms as “flagbearers”: Rails (in progress), elasticsearch, postgresql, meteor, and chamilo. We consider these charms to be excellent examples of quality, integration with other tools, and usage of charm tools. We will be modifying the documentation to highlight these charms as reference points. All of these charms have tests now, though some might not have landed yet.
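As a rough illustration of the kind of “first 30 minutes” flow these flagbearer charms are meant to support (these are standard Juju CLI commands; postgresql is simply one of the flagbearers listed above, picked as an example):

juju bootstrap          # stand up an environment on the configured provider
juju deploy postgresql  # deploy one of the flagbearer charms from the charm store
juju status             # watch the service and its unit come up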

Better tools for Charm Authors:

Ben, Tim, and Whit have a prototype of a fully Dockerized developer environment that contains all of the local developer tools and all of the flagbearer charms. The intention is to also provide a fully bootstrapped local provider. The goal is “test anything in 30 seconds in a container”.

In addition to this, Adam Israel tackled some of our Vagrant development stories, which will allow us to provide a better Vagrant developer workflow. Thanks to Ben Howard and his team for helping us get these features into our boxes.

We expect both the Docker-based and Vagrant-based approaches to eventually converge. Having both now gives us a nice “spread” to cover developers on multiple operating systems with tools they’re familiar with.

Big Data

Amir/Chuck worked on the following things:

  • Upgrading the ELK stack for Trusty
  • Planning out new Landing Pages focused on the Big Data story
  • Bringing up existing Big Data (Hortonworks) Stack to Charm Store standards for Trusty, and getting those charms merged
  • Pre-planning for next phase of Big Data Workloads (MapR, Apache distributions)

Other

  • General familiarity training with MAAS, OpenStack on OBs and NUCs.
  • Very fast firehose drinking for new team members: Adam Israel, Randall Ross, and Kevin Monroe have joined the team.
  • Special Thanks to Jose Antonio Rey, Sebas, and Josh Strobl, for joining us to help get reviews and fixes in the store and documentation.
  • We have a new team blog at: http://juju-solutions.github.io/ (Beta, thanks Whit.)
  • Most of the topics here had corresponding fixes/updates to the Juju documentation.
on September 19, 2014 04:49 PM

This is a workaround to force your preferred terminal emulator to use the Dark variant of Adwaita theme in GNOME >= 3.12 (maybe less, but untested).

Just add these lines to your ~/.bashrc file:

# set dark theme for xterm emulators
if [ "$TERM" == "xterm" ] ; then
 xprop -f _GTK_THEME_VARIANT 8u -set _GTK_THEME_VARIANT "dark" -id `xprop -root | awk '/^_NET_ACTIVE_WINDOW/ {print $5}'`
fi

This is how it works with Terminator:

Before

Before

After

After

on September 19, 2014 02:13 PM

The Grantlee community is pleased to announce the release of Grantlee version 0.5 (Mirror). Source and binary compatibility are maintained as with all previous releases. Grantlee is an implementation of the Django template system in Qt.

This release builds with both Qt 5 and Qt 4. The Qt 5 build is only for transitional purposes so that a downstream can get their own code built and working with Qt 5 without being first blocked by Grantlee backward incompatible changes. The Qt 5 based version of Grantlee 0.5.0 should not be relied upon as a stable interface. It is only there to assist porting. There won’t be any more Qt 4 based releases, except to fix build issues if needed.

The next release of Grantlee will happen next week and will be exclusively Qt 5 based. It will have a small number of backward incompatible changes, such as adding missing const and dropping some deprecated stuff.

The minimum required CMake version has also been increased to 2.8.11. That CMake release contains most of the usage-requirements API, which allows cleaning up a lot of older CMake code.

Also in this release are a small number of new bug fixes, memory leak plugs, etc.


on September 19, 2014 09:29 AM

This week, the reSIProcate project completed the move from SVN to Git.

With many people using the SIP stack in both open source and commercial projects, the migration was carefully planned and tested over an extended period of time. Hopefully some of the experience from this migration can help other projects too.

Previous SVN committers were tracked down using my script for matching emails to Github accounts. This also allowed us to see their recent commits on other projects and see how they want their name and email address represented when their previous commits in SVN were mapped to Git commits.

For about a year, the sync2git script had been run hourly from cron to maintain an official mirror of the project in Github. This allowed people to test it and it also allowed us to start using some Github features like travis-CI.org before officially moving to Git.
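For readers unfamiliar with that kind of setup, an hourly mirror job is usually just a one-line crontab entry; the script path and log file below are illustrative only, not the project's actual configuration:

# run the SVN-to-Git sync at the top of every hour, appending output to a log
0 * * * * /usr/local/bin/sync2git >> /var/log/sync2git.log 2>&1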

At the cut-over, the SVN directories were made read-only, sync2git was run one last time and then people were advised they could commit in Git.

Documentation has also been created to help people get started quickly sharing patches as Github pull requests if they haven't used this facility before.

on September 19, 2014 06:47 AM

Fundamentally connected

Stuart Langridge

Aaron Gustafson recently wrote a very interesting monograph bemoaning a recent trend to view JavaScript as “a virtual machine in the browser”. I’ll quote fairly extensively, because Aaron makes some really strong points here, and I have a lot of sympathy with them. But at bottom I think he’s wrong, or at the very least he’s looking at this question from the wrong direction, like trying to divine the purpose of the Taj Mahal by looking at it from underneath.

“The one problem I’ve seen,” says Aaron, “is the fundamental disconnect many of these developers [who began taking JavaScript seriously after Ajax became popular] seem to have with the way deploying code on the Web works. In traditional software development, we have some say in the execution environment. On the Web, we don’t.” He goes on to explain: “If we’re writing server-side software in Python or Rails or even PHP, … we control the server environment [or] we have knowledge of it and can author… accordingly”, and “in the more traditional installed software world, we can similarly control the environment by placing certain restrictions on what operating systems our code can run on”.

I believe that this criticism, while essentially valid, misapprehends the real case here. It underestimates the universality of JavaScript implementations, it overestimates the stability of old-fashioned software development, and most importantly it starts from the presumption that building things for one particular computer is actually a good idea. Which it isn’t.

Now, nobody is arguing that the web environment is occasionally challengingly different across browsers and devices. But a lot of it isn’t. No browser ships with a JavaScript implementation in which 1 and 1 add to make 3, or in which Arrays don’t have a length property, or in which the for keyword doesn’t exist. If we ignore some of the Mozilla-specific stuff which is becoming ES6 (things such as array comprehensions, which nobody is actually using in actual live code out there in the universe), JavaScript is pretty stable and pretty unchanging across all its implementations. Of course, what we’re really talking about here is the DOM model, not JavaScript-the-language, and to claim that “JavaScript can be the virtual machine” and then say “aha I didn’t mean the DOM” is sophistry on a par with a child asking “can I not not not not not have an ice-cream?”. But the DOM model is pretty stable too, let’s be honest. In things I build, certainly I find myself in murky depths occasionally with JavaScript across different browsers and devices, but those depths are as the sparkling waters of Lake Treviso by comparison with CSS across different browsers. In fact, when CSS proves problematic across browsers, JavaScript is the bandage used to fix it and provide a consistent experience — your keyframed CSS animation might be unreliable, but jQuery plugins work everywhere. JavaScript is the glue that binds the other bits together.

Equally, I am not at all sold that “we have knowledge of [the server environment] and can author your program accordingly so it will execute as anticipated” when doing server development. Or, at least, that’s possible, but nobody does. If you doubt this, I invite you to go file a bug on any server-side app you like and say “this thing doesn’t work right for me” and then add at the bottom “oh, and I’m running FreeBSD, not Ubuntu”. The response will occasionally be “oh really? we had better get that fixed then!” but is much more likely to be “we don’t support that. Use Ubuntu and this git repository.” Now, that’s a valid approach — we only support this specific known configuration! — but importantly, on the web Aaron sees requiring a specific browser/OS combination as an impractical impossibility and the wrong thing to do, whereas doing this on the server is positively virtuous. I believe that this is no virtue. Dismissing claims of failure with “well, you should be using the environment I demand” is just as large a sin on the server or the desktop as it is in the browser. You, the web developer, can’t require that I use your choice of browser, but equally you, the server developer, shouldn’t require that I use your particular exact combination of server packages either. Why do client users deserve more respect than server users? If a developer using your server software should be compelled to go and get a different server, how’s that different from asking someone to install a different web browser? Sure, I’m not expecting someone who built a server app running on Linux to necessarily also make it run on Windows (although wise developers will do so), but then I’m not really expecting someone who’s built a 3d game with WebGL to make the experience meaningful for someone browsing with Lynx, either.

Perhaps though you differ there, gentle reader. That the web is the web, and one should have a meaningful experience (although importantly not necessarily the same meaningful experience) which ever class of browser and device and capability one uses to get at the web. That is a very good point, one with which I have a reasonable amount of sympathy, and it leads me on to the final part of the argument.

It is this. Web developers are actually better than non-web developers. And Aaron explains precisely why. It is because to build a great web app is precisely to build a thing which can be meaningfully experienced by people on any class of browser and device and capability. The absolute tip-top very best “native” app can only be enjoyed by those to whom it is native. “Native apps” are poetry: undeniably beautiful when done well, but useless if you don’t speak the language. A great web app, on the other hand, is a painting: beautiful to experience and available to everybody. The Web has trained its developers to attempt to build something that is fundamentally egalitarian, fundamentally available to everyone. That’s why the Web’s good. The old software model, of something which only works in one place, isn’t the baseline against which the Web should be judged; it’s something that’s been surpassed. Software development is easiest if it only has to work on your own machine, but that doesn’t mean that that’s all we should aim for. We’re all still collaboratively working out exactly how to build apps this way. Do we always succeed? No. But by any measure the Web is the largest, most widely deployed, most popular and most ubiquitous computing platform the world has ever known. And its programming language is JavaScript.

on September 19, 2014 04:11 AM

For the tl;dr: Docker FDW is a thing. Star it, hack it, try it out. File bugs, be happy. If you want to see what it's like to read, there's some example SQL down below.

The first question is: what the heck is a PostgreSQL Foreign Data Wrapper? PostgreSQL Foreign Data Wrappers are plugins that allow C libraries to provide an adaptor for PostgreSQL to talk to an external data source.

Some folks have used this to wrap stuff like MongoDB, which I always found to be hilarious (and an epic hack).

Enter Multicorn

During my time at PyGotham, I saw a talk from Wes Chow about something called Multicorn. He was showing off some really neat plugins, such as the git revision history of CPython, and parsed logfiles from some stuff over at Chartbeat. This basically blew my mind.

All throughout the talk I was coming up with all sorts of things that I wanted to do -- this whole library is basically exactly what I've been dreaming about for years. I've always wanted to provide a SQL-like interface for querying API data, joining data cross-API using common crosswalks, such as using Capitol Words to query for Legislators, and using the bioguide ids to JOIN against the congress api to get their Twitter account names.

My first shot was to Multicorn the new Open Civic Data API I was working on; I chuckled and put it aside as a really awesome hack.

Enter Docker

It wasn't until tianon connected the dots for me and suggested a Docker FDW that I got really excited. Cue a few hours of hacking, and I'm proud to say -- here's Docker FDW.

Currently it only implements reading from the API, but extending this to allow for SQL DELETE operations isn't out of the question, and likely to be implemented soon. This lets us ask all sorts of really interesting questions out of the API, and might even help folks writing webapps avoid adding too much Docker-aware logic.

Setting it up

I'm going to assume you have a working Multicorn, PostgreSQL and Docker setup (including adding the postgres user to the docker group).

So, now let's pop open a psql session. Create a database (I called mine dockerfdw, but it can be anything), and let's create some tables.
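One detail worth noting (not shown in this walkthrough): if Multicorn was installed as a PostgreSQL extension, it typically has to be enabled in the new database before the CREATE SERVER statements below will work. Assuming the database is called dockerfdw as above, something like this from the shell should do it:

# enable the Multicorn extension in the target database (assumes the extension is installed)
psql -d dockerfdw -c "CREATE EXTENSION multicorn;"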

Before we create the tables, we need to let PostgreSQL know where our objects are. This takes a name for the server, and the Python importable path to our FDW.

CREATE SERVER docker_containers FOREIGN DATA WRAPPER multicorn options (
    wrapper 'dockerfdw.wrappers.containers.ContainerFdw');

CREATE SERVER docker_image FOREIGN DATA WRAPPER multicorn options (
    wrapper 'dockerfdw.wrappers.images.ImageFdw');

Now that we have the server in place, we can tell PostgreSQL to create a table backed by the FDW by creating a foreign table. I won't go too much into the syntax here, but you might also note that we pass in some options - these are passed to the constructor of the FDW, letting us set stuff like the Docker host.

CREATE foreign table docker_containers (
    "id"          TEXT,
    "image"       TEXT,
    "name"        TEXT,
    "names"       TEXT[],
    "privileged"  BOOLEAN,
    "ip"          TEXT,
    "bridge"      TEXT,
    "running"     BOOLEAN,
    "pid"         INT,
    "exit_code"   INT,
    "command"     TEXT[]
) server docker_containers options (
    host 'unix:///run/docker.sock'
);


CREATE foreign table docker_images (
    "id"              TEXT,
    "architecture"    TEXT,
    "author"          TEXT,
    "comment"         TEXT,
    "parent"          TEXT,
    "tags"            TEXT[]
) server docker_image options (
    host 'unix:///run/docker.sock'
);

And, now that we have tables in place, we can try to learn something about the Docker containers. Let's start with something fun - a join from containers to images, showing all image tag names, the container names and the ip of the container (if it has one!).

SELECT docker_containers.ip, docker_containers.names, docker_images.tags
  FROM docker_containers
  RIGHT JOIN docker_images
  ON docker_containers.image=docker_images.id;
     ip      |            names            |                  tags                   
-------------+-----------------------------+-----------------------------------------
             |                             | {ruby:latest}
             |                             | {paultag/vcs-mirror:latest}
             | {/de-openstates-to-ocd}     | {sunlightlabs/scrapers-us-state:latest}
             | {/ny-openstates-to-ocd}     | {sunlightlabs/scrapers-us-state:latest}
             | {/ar-openstates-to-ocd}     | {sunlightlabs/scrapers-us-state:latest}
 172.17.0.47 | {/ms-openstates-to-ocd}     | {sunlightlabs/scrapers-us-state:latest}
 172.17.0.46 | {/nc-openstates-to-ocd}     | {sunlightlabs/scrapers-us-state:latest}
             | {/ia-openstates-to-ocd}     | {sunlightlabs/scrapers-us-state:latest}
             | {/az-openstates-to-ocd}     | {sunlightlabs/scrapers-us-state:latest}
             | {/oh-openstates-to-ocd}     | {sunlightlabs/scrapers-us-state:latest}
             | {/va-openstates-to-ocd}     | {sunlightlabs/scrapers-us-state:latest}
 172.17.0.41 | {/wa-openstates-to-ocd}     | {sunlightlabs/scrapers-us-state:latest}
             | {/jovial_poincare}          | {<none>:<none>}
             | {/jolly_goldstine}          | {<none>:<none>}
             | {/cranky_torvalds}          | {<none>:<none>}
             | {/backstabbing_wilson}      | {<none>:<none>}
             | {/desperate_hoover}         | {<none>:<none>}
             | {/backstabbing_ardinghelli} | {<none>:<none>}
             | {/cocky_feynman}            | {<none>:<none>}
             |                             | {paultag/postgres:latest}
             |                             | {debian:testing}
             |                             | {paultag/crank:latest}
             |                             | {<none>:<none>}
             |                             | {<none>:<none>}
             | {/stupefied_fermat}         | {hackerschool/doorbot:latest}
             | {/focused_euclid}           | {debian:unstable}
             | {/focused_babbage}          | {debian:unstable}
             | {/clever_torvalds}          | {debian:unstable}
             | {/stoic_tesla}              | {debian:unstable}
             | {/evil_torvalds}            | {debian:unstable}
             | {/foo}                      | {debian:unstable}
(31 rows)

Success! This is just a taste of what's to come, so please feel free to hack on Docker FDW, tweet me @paultag, file bugs / feature requests. It's currently a bit of a hack, and it's something that I think has long-term potential after some work goes into making sure that this is a rock solid interface to the Docker API.

on September 19, 2014 01:49 AM

September 18, 2014

Hello All,

As you may know, LoCo Council members serve a two-year term, and due to this we are facing the difficult task of replacing Bhavani. A special thanks to Bhavani for all of the great contributions he made while serving with us on the LoCo Council.

So with that in mind, we are writing this to ask for volunteers to step forward and nominate themselves or another contributor for the three open positions. The LoCo Council is defined on our wiki page.

Wiki: https://wiki.ubuntu.com/LoCoCouncil

Team Agenda: https://wiki.ubuntu.com/LoCoCouncilAgenda

Typically, we meet up once a month on IRC to go through items on the team agenda; we have also started holding Google Hangouts (the time for Hangouts may vary depending on members' availability). This involves approving new LoCo Teams, re-approval of approved LoCo Teams, resolving issues within teams, approving LoCo Team mailing list requests, and anything else that comes along.

We have the following requirements for Nominees:

Be an Ubuntu member

Be available during typical meeting times of the council

Insight into the culture(s) and typical activities within teams is a plus

Here is a description of the current LoCo Council:

They are current Ubuntu Members with a proven track record of activity in the community. They have shown themselves over time to be able to work well with others, and display the positive aspects of the Ubuntu Code of Conduct. They should be people who can judge contribution quality without emotion while engaging in an interview/discussion that communicates interest, a welcoming atmosphere, and which is marked by humanity, gentleness, and kindness.

If this sounds like you, or a person you know, please e-mail the LoCo Council with your nomination(s) using the following e-mail address: loco-council<at>lists.ubuntu.com.

Please include a few lines about yourself, or whom you’re nominating, so we can get a good idea of why you/they’d like to join the council, and why you feel that you/they should be considered. If you plan on nominating another person, please let them know, so they are aware.

We welcome nominations from anywhere in the world, and from any LoCo team. Nominees do not need to be a LoCo Team Contact to be nominated for this post. We are however looking for people who are active in their LoCo Team.

The time frame for this process is as follows:

Nominations will open: September 18th, 2014

Nominations will close: October 2nd, 2014

We will then forward the nominations to the CC, requesting that they make their selections at their next meeting.

on September 18, 2014 08:16 PM

Priorities & Perseverance

Robbie Williamson

Screenshot from 2014-09-17 22:48:22

This is not a stock ticker, rather a health ticker…and unlike with a stock price, a downward trend is good.  Over the last 3 years or so, I’ve been on a personal mission of improving my health.  As you can see it wasn’t perfect, but I managed to lose a good amount of weight.

So why did I do it…what was the motivation…it’s easy, I decided in 2011 that I needed to put me first.   This was me from 2009

Screenshot from 2014-09-10 09:54:50IMG_84318618356313

At my biggest, I was pushing 270lbs.  I was so busy trying to do for others, be it work, family, or friends, I was constantly putting my needs last, i.e. exercise and healthy eating.  You see, I actually like to exercise and healthy eating isn’t a hard thing for me, but when you start putting those things last on your priorities, it becomes easy to justify skipping the exercise or grabbing junk food because you’re short on time or exhausted from being the “hero”.

Now I have battled weight issues most of my life.  Given how I looked as a baby, this shouldn’t come as a surprise. LOL

20140917_231831

But I did thin out as a child.

530336_10151620134146242_1946930333_n

To only get bigger again

20140917_231901

And even bigger again

20140917_232423

But then I got lucky.  My metabolism kicked into high gear around 20, and I grew about 5 inches and since I was playing a ton of basketball daily, I ate anything I wanted and still stayed skinny

10475956_10152202484326242_6086082912878589217_o

I remained so up until I had my first child, then the pounds began to come on.  Many parents will tell you that the first time is always more than you expected, so it’s not surprising that with sleep deprivation and stress, you gain weight.  To make it even more fun, I had decided to start a new job and buy a new house a few years later, when my second child came…even more “fun”.

2014-08-24 22.07.43

To be clear, I’m not blaming any of my weight gain on these events, however they became easy crutches to justify putting myself last.  And here’s the crazy part, by doing all this, I actually ended up doing less for those I cared about in the long run, because I was physically exhausted, mentally fatigued, and emotionally spent a lot of the time.

So, around October of 2012 I made a decision.  In order for me to be the man I wanted to be for my family, friends, and even colleagues, I had to put myself first.  While it sounds selfish, it’s the complete opposite.  In order to be the best I could be for others, I realized I had to get myself together first.  For those of you who followed me on Facebook then, you already know what it took…a combination of MyFitnessPal calorie tracking and a little known workout program called Insanity:

Insanity-Workout

Me and my boy, Shaun T, worked out religiously…every day…sometimes mornings…sometimes afternoons…sometimes evenings.  I carried him with me for all work travel on my laptop and phone…doing Insanity videos in hotel rooms around the world.  I did the 60-day program about 4 times through (with breaks in between cycles)…adding in some weight workouts towards the end.  The results were great, as you can see in the first graphic starting around October 2012.  By staying focused and consistent, I dropped from about 255lbs to 226lbs at my lowest in July 2013.  I got rid of a lot of XXL shirts and 42in waist pants/shorts, and got to a point where I didn’t always feel the need to swim with a shirt on….if ya know what I mean ;-).  So August rolled around, and while I was feeling good about myself…didn’t feel great, because I knew that while I was lighter, and healthier, I wasn’t necessarily that much stronger.  I knew that if I wanted to really be healthy and keep this weight off, I’d need more muscle mass…plus I’d look better too :-P.

So the Crossfit journey began.

Now I’ll be honest, it wasn’t my first thought.  I had read all the horror stories about injuries and seen some of the cult-like stuff about it.  However, a good friend of mine from college was a coach, and pretty much called me out on it…she was right…I was judging something based on others’ opinions and not my own (which is WAY outta character for me).  So…I went to my first Crossfit event…the Women’s Throwdown in Austin, TX (where I live) held by Woodward Crossfit in July of 2013.  It was pretty awesome….it wasn’t full of muscle heads yelling at each other or insane paleo-eating nut jobs trying to outshine one another…it was just hardworking athletes pushing themselves as hard as they could…for a great cause (it’s a charity event)…and having a lot of fun.  I planned to only stay for a little bit, but ended up staying the whole damn day! Long story short…I joined Woodward Crossfit a few weeks after (the delay was because I was determined to complete my last Insanity round, plus I had to go on a business trip), which was around the week of my birthday (Aug 22).

download

1381407_609309165778302_680124169_n

Fast forward a little over a year, with a recently added 21-day Fitness Challenge by David King (who also goes to the same gym), and as of today I’m down about 43lbs (212), with a huge reduction in body fat percentage.  I don’t have the starting or current percentage, but let’s just say all 43lbs lost was fat, and I’ve gained a good amount of muscle in the last year as well…which is why the line flattened a bit before I kicked it up another notch with the 21-Day last month.

Now I’m not posting any more pictures, because that’s not the point of this post (but trust me…I look goooood :P).  My purpose is exactly what the subject says, priorities & perseverance.  What are you prioritizing in your life?  Are you putting too many people’s needs ahead of your own?  Are you happy as a result?  If you were like me, I already know the answer…but you don’t have to stay this way.  You only get one chance at this life, so make the most out of it.  Make the choice to put your happiness first, and I don’t mean selfishly…that’s called pleasure.  You’re happier when your loved ones are doing well and happy…you’re happier when you have friends who like you and that you can depend on….you’re happier when you kick ass at work…you’re happier when you kill it on the basketball court (or whatever activity you like).  Make the decision to be happy, set your goals, then persevere until you attain them…you will stumble along the way…and there will be those around you who either purposely or unknowingly discourage you, but stay focused…it’s not their life…it’s yours.  And when it gets really hard…just remember the wise words of Stuart Smalley:


on September 18, 2014 05:32 AM

Ubuntu shell overpowered

Ayrton Araujo

In order to be more productive in my environment, as a command-line-centric guy, I started using zsh as my default shell three years ago. And for those who have never tried it, I would like to share my personal thoughts.

What are the main advantages?

  • Extended globbing: For example, *(.) matches only regular files, not directories, whereas a*z(/) matches directories whose names start with a and end with z (see the short demo after this list). There are a bunch of other things;
  • Inline glob expansion: For example, type rm *.pdf and then hit tab. The glob *.pdf will expand inline into the list of .pdf files, which means you can change the result of the expansion, perhaps by removing from the command the name of one particular file you don’t want to rm;
  • Interactive path expansion: Type cd /u/l/b and hit tab. If there is only one existing path each of whose components starts with the specified letters (that is, if only one path matches /u/l/b*), then it expands in place. If there are two, say /usr/local/bin and /usr/libexec/bootlog.d, then it expands to /usr/l/b and places the cursor after the l. Type o, hit tab again, and you get /usr/local/bin;
  • Nice prompt configuration options: For example, my prompt is currently displayed as tov@zyzzx:/..cts/research/alms/talk. I prefer to see a suffix of my current working directory rather than have a really long prompt, so I have zsh abbreviate that portion of my prompt at a maximum length.
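A quick demo of the glob qualifiers from the first bullet (run these in zsh; they are not valid bash syntax):

ls *(.)      # list only regular files in the current directory
ls *(/)      # list only directories
ls a*z(/)    # list directories whose names start with a and end with z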

Source: http://www.quora.com/What-are-the-advantages-and-disadvantages-of-using-zsh-instead-of-bash-or-other-shells

The Z shell is mainly praised for its interactive use: the prompts are more versatile, the completion is more customizable and often faster than bash-completion, and it's easy to write plugins. One of my favorite integrations is with git, to have better visibility of the current repository status.

As it focuses on interactive use, it's a good idea to keep writing your shell scripts with #!/bin/bash for interoperability reasons. Bash is still the most mature and stable option for shell scripting, in my view.

So, how to install and set up?

sudo apt-get install zsh zsh-lovers -y

zsh-lovers will provide you with a bunch of examples to help you understand better ways to use your shell.

To set zsh as the default shell for your user:

chsh -s /bin/zsh

Don't try to set zsh as the default shell for your whole system, or some things may stop working.

Two friends of mine, Yuri Albuquerque and Demetrius Albuquerque (brothers of a former hacker family =x), also recommended using https://github.com/robbyrussell/oh-my-zsh. Thanks for the tip.

How to install oh-my-zsh as a normal user?

curl -L http://install.ohmyz.sh | sh

My $ZSH_THEME is set to "bureau" in my $HOME/.zshrc. You can try "random" or other themes located inside $HOME/.oh-my-zsh/themes.
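For reference, the relevant part of my ~/.zshrc looks roughly like this (the plugin list is just an example; git is the plugin that provides the repository-status prompt integration mentioned earlier):

# excerpt from the ~/.zshrc generated by oh-my-zsh
export ZSH=$HOME/.oh-my-zsh
ZSH_THEME="bureau"      # or "random" for a different theme each session
plugins=(git)           # enables git aliases and branch/status info in the prompt
source $ZSH/oh-my-zsh.sh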

For command-not-found integration:

echo "source /etc/zsh_command_not_found" >> ~/.zshrc

If you don't have the command-not-found package:

sudo apt-get install command-not-found -y

And, if you use Ruby under RVM, I also recommend reading this:
http://rvm.io/integration/zsh

Happy hacking :-)

on September 18, 2014 12:28 AM

September 17, 2014

Responsive Dummies

Stuart Langridge

After Remy “Remington” Sharp and Bruce “Bruce” Lawson published Introducing HTML5 in 2010, the web development community have been eager to see what they’ll turn their talents to next.1 Now their new book is out, Responsive Design for Dummies.

It’s… got its good points and its bad points. As the cover proudly proclaims, they fully embrace the New World Order of delivering essential features via Web Components. I particularly liked their demonstration of how to wrap a whole site inside a component, thus making your served HTML just be <bruces-site> and so saving you bandwidth2. Their recommendation that Flickr and Facebook use this approach to stop users stealing images may be the best suggestion for future-proofing the web that we’ve heard in 2014 so far. The sidebar on how to use this approach and hash-bang JavaScript URLs together ought to become the new way that we build everything, and I’m eager to see libraries designed for slow connections and accesssibility such as Angular.js adopt the technique.

Similarly, the discussion of how Service Workers can deliver business advantages on the Apple iWatch was welcome, particularly given the newness of the release. It’s rare to see a book this up-to-date and this ready to engage with driving the web forward. Did Bruce and Remy get early access to iWatch prototypes or something? I am eager to start leveraging these techniques with my new startup3.

It’s not all perfect, though. I think that devoting three whole chapters to a Dawkins-esque hymn of hatred for everyone who opposed the <picture> element was a bit more tactless than I was hoping for. You won, chaps, there’s no need to rub it in.4

I’d also like to see, if I’m honest, ideas for when breakpoints are less appropriate. I appreciate that the book comes with a free $500 voucher for Getty Images, but after at Bruce and Remy’s recommendation I downloaded separate images for breakpoints at 17px, 48px, 160px, 320px, 341px, 600px, 601px, 603px, 631px, 800px, 850px, 900px, 1280px, 2560px, and 4200px for retina Firefox OS devices, I only had $2.17 left to spend and my server has run out of disc space. Even after using their Haskell utility to convert the images to BMP and JPEG2000 formats I still only score 13.6% on the Google Pagespeed test, and my router has melted. Do better next time, chaps.

Nonetheless, despite these minor flaws, and obvious copy-editing flubs such as “responsive” being misspelled on the cover itself5, I’d recommend this book. Disclaimer: I know both the authors biblicallypersonally and while Bruce has indeed promised me “a night to remember” for a positive review, that has not affected at all my judgement of this book as the most important and seminal work in the Web field since Kierkegaard’s “Sarissa.js Tips and Tricks”.

Go and buy it. It’s so popular that it might actually be hard to find a copy, but if your bookseller doesn’t have it, you should shout at them.

  1. other than inappropriate swimwear, obviously
  2. I also liked their use of VML and HTML+TIME in a component
  3. it’s basically Uber for pie fillings
  4. although if you don’t rub it in it’ll stain the mankini
  5. clearly it was meant to say “ahahaha responsive design, what evaaaaar”, but maybe that didn’t fit
on September 17, 2014 01:11 PM

Windows applications sometimes fail to load. But why? Windows won’t tell you; it will instead show a generic and pointless “Application Error” message. Inside this message you will read something like this:

The application was unable to start correctly (0xc0000142). Click OK to close the application.

The only thing you can do here is close the application and search on the Internet for that cryptic error code. And maybe it’s the reason why you are reading this post.
It’s not that easy to find a solution to this problem, but I found it thanks to Up and Ready and want to share it with you.

The problem

Windows tells you that the application was unable to start. You can try a hundred times, but the error does not magically solve itself, because it’s not random. The problem is that the DLL that launches the application is unsigned or its digital signature is no longer valid. And it’s not your fault; maybe you just downloaded the program from the official site.

The solution

To solve the Application Error you need an advanced Windows Sysinternals Tool called Autoruns for Windows. You can download it from the official website.

Windows Application Error Autoruns AppInit


Extract the archive you downloaded, launch autoruns.exe and go to the AppInit tab, which lists all the DLLs on your computer that are unsigned or whose digital signatures are no longer valid. Right-click each of them, one at a time, go to Properties and rename them. After renaming each one, try launching the application again to find the problematic DLL.

If the previous method didn’t solve the application error, right click on the following entry:

HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Windows\AppInit_Dlls

and click on Jump to entry…

Windows Application Error System Registry Editor

A new window opens: it’s the Registry Editor. Double-click LoadAppInit_DLLs and change the value from 1 to 0. Click OK to confirm and exit. Now launch the affected program and it will start.
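If you prefer the command line, the same change can be made from an elevated Command Prompt with the built-in reg tool; this is a sketch of the equivalent command, so double-check that the key path matches the entry above before running it:

reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Windows" /v LoadAppInit_DLLs /t REG_DWORD /d 0 /f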

Note: some applications may change that value back to 1 after they get launched!

The post Windows: How to Solve Application Error 0xc0000142 and 0xc0000005 appeared first on deshack.

on September 17, 2014 01:07 PM

September 16, 2014

Meeting Minutes

IRC Log of the meeting.

Meeting minutes.

Agenda

20140916 Meeting Agenda


Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:

  • http://people.canonical.com/~kernel/reports/kt-meeting.txt


Status: Utopic Development Kernel

The Utopic kernel remains based on a v3.16.2 upstream stable kernel and
is uploaded to the archive, ie. linux-3.16.0-15.21. Please test and let
us know your results.
I’d also like to point out that our Utopic kernel freeze date is about 3
weeks away on Thurs Oct 9. Please don’t wait until the last minute to
submit patches needing to ship in the Utopic 14.10 release.
—–
Important upcoming dates:
Mon Sep 22 – Utopic Final Beta Freeze (~1 week away)
Thurs Sep 25 – Utopic Final Beta (~1 week away)
Thurs Oct 9 – Utopic Kernel Freeze (~3 weeks away)
Thurs Oct 16 – Utopic Final Freeze (~4 weeks away)
Thurs Oct 23 – Utopic 14.10 Release (~5 weeks away)


Status: CVE’s

The current CVE status can be reviewed at the following link:

http://people.canonical.com/~kernel/cve/pkg/ALL-linux.html


Status: Stable, Security, and Bugfix Kernel Updates – Trusty/Precise/Lucid

Status for the main kernels, until today (Sept. 16):

  • Lucid – verification & testing
  • Precise – verification & testing
  • Trusty – verification & testing

    Current opened tracking bugs details:

  • http://kernel.ubuntu.com/sru/kernel-sru-workflow.html

    For SRUs, SRU report is a good source of information:

  • http://kernel.ubuntu.com/sru/sru-report.html

    Schedule:

    cycle: 29-Aug through 20-Sep
    ====================================================================
    29-Aug Last day for kernel commits for this cycle
    31-Aug – 06-Sep Kernel prep week.
    07-Sep – 13-Sep Bug verification & Regression testing.
    14-Sep – 20-Sep Regression testing & Release to -updates.


Open Discussion or Questions? Raise your hand to be recognized

No open discussion.

on September 16, 2014 05:15 PM

Ubuntu at Fossetcon 2014

Elizabeth K. Joseph

Last week I flew out to the east coast to attend the very first Fossetcon. The conference was on the smaller side, but I had a wonderful time meeting up with some old friends, meeting some new Ubuntu enthusiasts and finally meeting some folks I’ve only communicated with online. The room layout took some getting used to, but the conference staff were quick to put up signs and direct attendees the right way, in general leading to a pretty smooth conference experience.

On Thursday the conference hosted a “day zero” that had training and an Ubucon. I attended the Ubucon all day, which kicked off with Michael Hall doing an introduction to the Ubuntu on Phones ecosystem, including Mir, Unity8 and the Telephony features that needed to be added to support phones (voice calling, SMS/MMs, Cell data, SIM card management). He also talked about the improved developer portal with more resources aimed at app developers, including the Ubuntu SDK and simplified packaging with click packages.

He also addressed the concern of many about whether Ubuntu could break into the smartphone market at this point, arguing that it’s a rapidly developing and changing market, with every current market leader only having been there for a handful of years, and that new ideas are needed to play to win. Canonical feels that convergence between phone and desktop/laptop gives Ubuntu a unique selling point, and that users will like it because of its intuitive design, with lots of swiping and scrolling actions that give apps the most screen space possible. It was interesting to hear that partners/OEMs can offer operator differentiation as a layer without fragmenting the actual operating system (something that Android struggles with), leaving the core operating system independently maintained.

This was followed up by a more hands on session on Creating your first Ubuntu SDK Application. Attendees downloaded the Ubuntu SDK and Michael walked through the creation of a demo app, using the App Dev School Workshop: Write your first app document.

After lunch, Nicholas Skaggs and I gave a presentation on 10 ways to get involved with Ubuntu today. I had given a “5 ways” talk earlier this year at the SCaLE in Los Angeles, so it was fun to do a longer one with a co-speaker and have his five items added in, along with some other general tips for getting involved with the community. I really love giving this talk, the feedback from attendees throughout the rest of the conference was overwhelmingly positive, and I hope to get some follow-up emails from some new contributors looking to get started. Slides from our presentation are available as pdf here: contributingtoubuntu-fossetcon-2014.pdf


Ubuntu panel, thanks to Chris Crisafulli for the photo

The day wrapped up with an Ubuntu Q&A Panel, which had Michael Hall and Nicholas Skaggs from the Community team at Canonical, Aaron Honeycutt of Kubuntu and myself. Our quartet fielded questions from moderator Alexis Santos of Binpress and the audience, on everything from the Ubuntu phone to challenges of working with such a large community. I ended up drawing from my experience with the Xubuntu community a lot in the panel, especially as we drilled down into discussing how much success we’ve had coordinating the work of the flavors with the rest of Ubuntu.

The next couple of days brought Fossetcon proper, which I’ll write about later. The Ubuntu fun continued though! I was able to give away 4 copies of The Official Ubuntu Book, 8th Edition, which I signed, and got José Antonio Rey to sign as well since he had joined us for the conference from Peru.

José ended up doing a talk on Automating your service with Juju during the conference, and Michael Hall had the opportunity to give a talk on Convergence and the Future of App Development on Ubuntu. The Ubuntu booth also looked great and was one of the most popular of the conference.

I really had a blast talking to Ubuntu community members from Florida, they’re a great and passionate crowd.

on September 16, 2014 05:01 PM

New SubLoCo Policy

Ubuntu LoCo Council

Hi, after a lot of work, thinking and talking about the problem of the LoCo Organization and the SubLoCos, we came up with the following policy:

  • Each team will be a country (or state in the United States). We will call this a ‘LoCo’.
  • Each LoCo can have sub-teams. These sub-teams will be created at the will and need of each LoCo.
  • A LoCo may have sub-teams or not have sub-teams.
  • In the event a LoCo does have sub-teams, a Team Council needs to be created.
  • A Team Council is composed of at least one member from each sub-team.
  • The members that will be part of the Team Council will be chosen by other current members of the team.
  • The Team Council will have the power to make decisions regarding the LoCo.
  •  The Team Council will also have the power to request partner items, such as conference and DVD packs.
  • The LoCo Council will only recognize one team per country (or state in the United States). This is the team that will be in the ~locoteams team in Launchpad.
  • In the event a LoCo wants to go through the verification process, the LoCo will go through it, and not individual sub-teams.
  • LoCos not meeting the criteria of country/state teams will be denied verification.
  • In the event what is considered a sub-team wants to be considered a LoCo, it will need to present a request to the LoCo Council.
  • The LoCo Council will provide a response, which is, in no way, related to verification. The LoCo will still have to apply for verification if wanted.

We encourage the LoCo teams to see if this new form of organization fits you; if so, please start forming sub-teams as you find useful. If a team needs help with this or anything else, contact us, we are here to help!

on September 16, 2014 03:24 PM

September 15, 2014

Welcome to the Ubuntu Weekly Newsletter. This is issue #383 for the week September 8 – 14, 2014, and the full version is available here.

In this issue we cover:

This issue of the Ubuntu Weekly Newsletter is brought to you by:

  • Elizabeth K. Joseph
  • Jose Antonio Rey
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution 3.0 License.

on September 15, 2014 11:51 PM

3 years and counting…

José Antonio Rey

On September 15th, 3 years ago, I got my Ubuntu Membership.

There’s only one thing I can say about it: it’s been the most wonderful and awesome 3 years I could have had. I would’ve never thought that I would find such a welcoming and amazing community.

Even though I may not have worked with you directly, thank you. You all are what makes the community awesome – I wouldn’t imagine it without any one of you. We are all building the future, so let’s continue!

As I said in the title, I hope that it’s not only 3 years. I’ll keep on counting!


on September 15, 2014 04:22 PM

Back in April, I upstreamed a bug (that is, reported it to Debian) regarding the `nginx-naxsi` packages. The initial bug I upstreamed was about the outdated naxsi version in the naxsi packages. (see this bug in Ubuntu and the related bug in Debian)

The last update on the Debian bug is on September 10, 2014. That update says the following, and was made by Christos Trochalakis:

After discussing it with the fellow maintainers we have decided that it is
better to remove the nginx-naxsi package before jessie is freezed.

Packaging naxsi is not trivial and, unfortunately, none of the maintainers uses
it. That’s the reason nginx-naxsi is not in a good shape and we are not feeling
comfortable to release and support it.

We are sorry for any inconvenience caused.

I asked what the expected timeline was for the packages being dropped. In a response from Christos today, September 15, 2014, it was said:

It ‘ll get merged and released (1.6.1-3) by the end of the month.


In Ubuntu, these changes will likely not make it into 14.10, but future versions of Ubuntu beyond 14.10 (such as 15.04) will likely have this change.

In the PPAs, the naxsi packages will be dropped with stable 1.6.1-3+precise0 +trusty0 +utopic0 and mainline 1.7.4-1+precise0 +trusty0 +utopic0 or will be dropped in later versions if a new point release is made before then.

In Debian, these changes are likely to hit by the end of the month (with 1.6.1-3).

on September 15, 2014 02:50 PM

Last week I attended FOSSETCON, a new open source convention here in central Florida, and I had the opportunity to give a couple of presentations on Ubuntu phones and app development. Anybody who knows me knows that I love talking about these things, but a lot fewer people know that doing it in front of a room of people I don’t know still makes me extremely nervous. I’m an introvert, and even though I have a public-facing job and work with the wider community all the time, I’m still an introvert.

I know there are a lot of other introverts out there who might find the idea of giving presentations to be overwhelming, but they don’t have to be.  Here I’m going to give my personal experiences and advice, in the hope that it’ll encourage some of you to step out of your comfort zones and share your knowledge and talent with the rest of us at meetups and conferences.

You will be bad at it…

Public speaking is like learning how to ride a bicycle, everybody falls their first time. Everybody falls a second time, and a third. You will fidget and stutter, you will lose your train of thought, your voice will sound funny. It’s not just you, everybody starts off being bad at it. Don’t let that stop you though, accept that you’ll have bruises and scrapes and keep getting back on that bike. Coincidentally, accepting that you’re going to be bad at the first ones makes it much less frightening going into them.

… until you are good at it

I read a lot of things about how to be a good and confident public speaker; the advice was all over the map, and a lot of it felt like pure BS. I think a lot of people try different things, and when they finally feel confident in speaking, they attribute that confidence to whatever their latest thing was. In reality, you just get more confident the more you do it. You’ll be better the second time than the first, and better the third time than the second. So keep at it and you’ll keep getting better. No matter how good or bad you are now, you will keep getting better if you just keep doing it.

Don’t worry about your hands

You’ll find a lot of suggestions about how to use your hands (or not use them), how to walk around (or not walk around) or other suggestions about what to do with yourself while you’re giving your presentation. Ignore them all. It’s not that these things don’t affect your presentation, I’ll admit that they do, it’s that they don’t affect anything after your presentation. Think back about all of the presentations you’ve seen in your life, how much do you remember about how the presenter walked or waved their hands? Unless those movements were integral to the subject, you probably don’t remember much. The same will happen for you, nobody is going to remember whether you walked around or not, they’re going to remember the information you gave them.

It’s not about you

This is the one piece of advice I read that actually has helped me. The reason nobody remembers what you did with your hands is because they’re not there to watch you, they’re there for the information you’re giving them. Unless you’re an actual celebrity, people are there to get information for their own benefit, you’re just the medium which provides it to them.  So don’t make it about you (again, unless you’re an actual celebrity), focus on the topic and information you’re giving out and what it can do for the audience. If you do that, they’ll be thinking about what they’re going to do with it, not what you’re doing with your hands or how many times you’ve said “um”. Good information is a good distraction from the things you don’t want them paying attention to.

It’s all just practice

Practicing your presentation isn’t nearly as stressful as giving it, because you’re not worried about messing up. If you mess up during practice you just correct it, make a note not to make the same mistake next time, and carry on. Well, if you plan on doing more public speaking there will always be a next time, which means this time is your practice for that one. Keep your eye on the presentation after this one: if you mess up now, you can correct it for the next one.

 

All of the above are really just different ways of saying the same thing: just keep doing it and worry about the content not you. You will get better, your content will get better, and other people will benefit from it, for which they will be appreciative and will gladly overlook any faults in the presentation. I guarantee that you will not be more nervous about it than I was when I started.

on September 15, 2014 09:00 AM

Last week’s autopkgtest 3.5 release (in Debian sid and Ubuntu Utopic) brings several new features which I’d like to announce.

Tests that reboot

For testing low-level packages like init or the kernel it is sometimes desirable to reboot the testbed in the middle of a test. For example, I added a new boot_and_services systemd autopkgtest which configures grub to boot with systemd as pid 1, reboots, and then checks that the most important services like lightdm, D-BUS, NetworkManager, and cron come up as expected. (This test will be expanded a lot in the future to cover other areas like the journal, logind, etc.)

In a testbed which supports rebooting (currently only QEMU), your test will now find an “autopkgtest-reboot” command which the test calls with an arbitrary “marker” string. autopkgtest will then reboot the testbed, save/restore any files it needs to (like the test’s file tree or previously created artifacts), and then re-run the test with ADT_REBOOT_MARK=mymarker.

The new “Reboot during a test” section in README.package-tests explains this in detail with an example.
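
For illustration only, a minimal rebooting test script could look roughly like this (a sketch; the marker name and the service check are made up for the example, not taken from the actual boot_and_services test):

#!/bin/sh
# Hypothetical autopkgtest that needs a reboot in the middle
set -e
case "$ADT_REBOOT_MARK" in
    "")
        # First run: prepare whatever needs a reboot to take effect,
        # then ask autopkgtest to reboot the testbed.
        echo "configuring boot options"
        autopkgtest-reboot after-config
        ;;
    after-config)
        # The test is re-run here after the reboot, with the marker set.
        echo "checking that services came up"
        systemctl is-active cron
        ;;
esac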

Implicit test metadata for similar packages

The Debian pkg-perl team recently discussed how to add package tests to the ~3,000 Perl packages. For most of these the test metadata looks pretty much the same, so they created a new pkg-perl-autopkgtest package which centralizes the logic. autopkgtest 3.5 now supports an implicit debian/tests/control file to avoid having to modify several thousand packages with exactly the same file.

An initial run already looked quite promising: 65% of the packages pass their tests. There will now be a few iterations to identify common failures and fix those in pkg-perl-autopkgtest and autopkgtest itself.

There is still some discussion about how implicit test control files go together with the DEP-8 specification, as other runners like sadt do not support them yet. Most probably we’ll declare those packages XS-Testsuite: autopkgtest-pkg-perl instead of the usual autopkgtest.
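
If that route is taken, such a Perl package would carry no debian/tests/ directory at all and would only declare the suite in debian/control with a single field along these lines (just a sketch of the idea):

XS-Testsuite: autopkgtest-pkg-perl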

In the same vein, Debian’s Ruby maintainer (Antonio Terceiro) added implicit test control support for Ruby packages. We haven’t done a mass test run with those yet, but their structure will probably look very similar.

on September 15, 2014 08:23 AM

Hi all,
after a long time I’m back to writing, to show you how to create a simple game for Ubuntu for Phones (but also for Android) with Bacon2D.

Bacon2D is a framework to ease 2D game development, providing ready-to-use QML elements representing basic game entities needed by most games.

As a tutorial, I’ll explain how I created my first QML game, 100balls, which you can find in the Ubuntu Store for Phones. The source is available on GitHub.

Installation

So, first of all we need to install Bacon2D on our system. I suppose you have already installed Qt on your system, so we only need to grab the source and compile it:

git clone git@github.com:Bacon2D/Bacon2D.git
cd Bacon2D
mkdir build && cd build
qmake ..
make
sudo make install

Now you have Bacon2D on your system, and you can import it in every project you want.

A first look to Bacon2D

Bacon2D provides a good number of custom components for your app. Of course, I can’t describe them all in one article, so please read the documentation. We’ll use only a few of them, and I think the best way to introduce you to them is by writing the app.
So, let’s start!

First of all, we create our base file, called 100balls.qml:

import QtQuick 2.0
import Bacon2D 1.0

The first element we add is the Game element. Game is the top-level container, where all of the game will live. We set some basic properties and the name of the game with the gameName property:

import QtQuick 2.0
import Bacon2D 1.0
 
Game {
    id: game
    anchors.centerIn: parent
 
    height: 680
    width: 440
 
    gameName: "com.ubuntu.developer.rpadovani.100balls" // Ubuntu Touch name format, you can use whatever you want
}

But the Game itself is useless on its own; we need to add one or more Scenes to it. A Scene is the place where all the Entities of the game will be placed.
Scene has a lot of properties; for now it is important to set two of them: running indicates whether the things in the scene will move and whether the game engine runs, and the second property is physics, which indicates whether Box2D should be used to simulate physics in the game. We want a game where some balls fall, so we need to set it to true.

import QtQuick 2.0
import Bacon2D 1.0
 
Game {
    id: game
    anchors.centerIn: parent
 
    height: 680
    width: 440
 
    gameName: "com.ubuntu.developer.rpadovani.100balls" // Ubuntu Touch name format, you can use whatever you want
 
    Scene {
        id: gameScene
        physics: true
        running: true
    }
}
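
If you want to check what we have so far, you can launch the file with the stock QML runner (assuming the Qt declarative tools are installed); at this point you should just see an empty game window:

qmlscene 100balls.qml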
on September 15, 2014 07:00 AM

September 14, 2014

I’m quitting relinux

Joel Leclerc

I will start this off by saying: I’m very (and honestly) sorry for, well, everything.

To give a bit of history, I started relinux as a side-project for my CosmOS project (cloud-based distribution … which failed), in order to build the ISOs. The only reasonable alternative at the time was remastersys, and I realized I would have to patch it anyway, so I thought that I might as well make a reusable tool for other distributions to use too.

Then came a rather large amount of friction between me and the author of remastersys, of which I will not go into any detail of. I acted very immaturely then, and wronged him several times. I had defamed him, made quite a few people very angry at him, and even managed to get some of his supporters against him. True, age and maturity had something to do with it (I was 12 at the time), but that still doesn’t excuse my actions at all.

So my first apology is to Tony Brijeski, the author of remastersys, for all the trouble and possible pain I had put him through. I’m truly sorry for all of this.

However, though the dynamics with Tony and remastersys are definitely a large part of why I’m quitting relinux, that is not all. The main reason, actually, is lack of interest. I have rewritten relinux a total of 7 times (including the original fork of remastersys), and I really hate the debugging process (takes 15-20 minutes to create an ISO, so that I can debug it). I have also lost interest in creating linux distributions, so not only am I very tired of working on it, I also don’t really care about what it does.

On this note, my second apologies (and thanks) have to go to those who have helped me so much through the process, especially those who have tried to encourage me to finish relinux. Those listed are in no particular order, and if I forgot you, then let me know (and I apologize for that!):

  • Ko Ko Ye
  • Raja Genupula
  • Navdeep Sidhu
  • Members of the TSS Web Dev Club
  • Ali Hallahi
  • Gert van Spijker
  • Aritra Das
  • Diptarka Das
  • Alejandro Fernandez
  • Kendall Weaver

Thank you very much for everything you’ve done!

Lastly, I would like to explain my plans for it, in case anyone wants to continue it (by no means do I want to enforce these; they are just ideas).

My plan for the next release of relinux was to actually make a very generic and scriptable CLI ISO creation tool, and then make relinux as a specific set of “profiles” for that tool (plus an interface). The tool would basically contain a few libraries for the chosen scripting language, for things like storing the filesystem (SquashFS or other), ISO creation, and general utilities for editing files while keeping permissions, multi-threading/processing, etc… The “profiles” would then copy, edit, and delete files as needed, set up the tool wanted for running the live system (in Ubuntu’s case, this’d be casper), set up the installer/bootloader, and such.

I would like to apologize to you all, the people who have used relinux and have waited for a stable version for 3 years, for not doing this. Thank you very much for your support, and I’m very sorry for having constantly pushed releases back and having never made a stable or well working version of relinux. Though I do have some excuses as to why the releases didn’t work, or why I didn’t test them well enough, none of them can cover why I didn’t fix them or work on it more. And for that, I am very sorry.

I know that this is a very large post for something so simple, but I feel that it would not be right if I didn’t apologize to those I have done wrong to, and thanked those who have helped me along the way.

So to summarize, thank you, sorry, and relinux is now dead.

- Joel Leclerc (MiJyn)


on September 14, 2014 11:24 PM

Getting Started in CTFs

David Tomaschik

My last post was about getting started in a career in information security. This post is about the sport end of information security: Capture the Flag (CTFs).

I'd played around with some wargames (Smash the Stack, Over the Wire, and Hack this Site) before, but my first real CTF (timed, competitive, etc.) was the CTF run by Mad Security at BSides SF 2013. By some bizarre twist of fate, I ended up winning the CTF, and I was hooked. I've probably played in about 30 CTFs since, most of them online with the team Shadow Cats. It's been a bumpy ride, but I've learned a lot about a variety of topics by doing this.

If you're in the security industry and you've never tried a CTF, you really should. Personally, I love CTFs because they get me to exercise skills that I never get to use at work. They also inspire some of my research and learning. The only problem is making the time. :)

Here are some resources I've found interesting:

on September 14, 2014 08:07 PM

September 13, 2014

"Your release sucks."

Luke Faraone

I look forward to Ubuntu's semiannual release day, because it's the completion of 6ish months of work by Ubuntu (and by extension Debian) developers.

I also loathe it, because every single time we get people saying "This Ubuntu release is the worst release ever!".

Ubuntu releases are always rocky around release time, because the first time Ubuntu gets widespread testing is on or after release day.

We ship software to 12 million Ubuntu users with only 150 MOTUs who work directly on the platform. That's a little less than 1 developer with upload rights to the archive for every 60,000 users. ((This number, like all other usage data, is dated, and probably wasn't even accurate when it was first calculated)) Compare that to Debian, which (at last estimate in 2010) had 1.5 million unique visitors on security.debian.org, yet has around 1,000 Debian Developers.

Debian has a strong testing culture; someone once estimated that around ¾ of Debian users are running unstable or testing. In Ubuntu, we don't have good metrics that I'm aware of on how many people are using the development release (pointers welcome), but I'd guess that it's a very, very small percentage. A common thread in bug reports, if we get a response at all, goes as follows:
Triager: ((Developer, bugcontrol member, etc. Somebody who is not experiencing the problem but wants to help.)) "Is this a problem in $devel?"
User: "I'll let you know when it hits final"
Triager: "It's too late then. Then we'll want you to test in the next release. We have to fix it BEFORE it's final"
User: "Ok, I'll test at beta."
Triager: "That's 2 weeks before release, which will be too late. Please test ASAP if you want us to have time to fix it"

Of course, there are really important bugs with hardware support which keep on cropping up. But if they're just getting reported on or around release day, there are limits to what can be done about them this cycle.

We need to make it easier for people to run early development versions, and encourage more people to use them (as long as they're willing to deal with breakage). I'm not sure whether unstable/testing is appropriate for Ubuntu, and I'm fairly confident that we don't want to move to a rolling release (currently being discussed in Debian, summary). But we badly need more developers, and equally importantly, more testers to try it out earlier in the release process.

To users: please, please try out the development versions. Download a LiveCD and run a smoketest, or check if bugs you reported are in fact fixed in the later versions. And do it early and often.
on September 13, 2014 08:43 PM

I've only been an information security practitioner for about a year now, but I'd been doing things on my own for years before that. However, many people are just getting into security, and I've recently stumbled on a number of resources for newcomers, so I thought I'd put together a short list.

on September 13, 2014 07:30 PM

September 12, 2014

student chromebook

Are you enrolled in college, in need of a laptop computer, and willing to accept a new Chromebook? If so, Google has got a deal for you, and it’s called the Google Lending Library.

The Chromebook Lending Library is traveling to 12 college campuses across the U.S. loaded with the latest Chromebooks. The Lending Library is a bit like your traditional library, but instead of books, we’re letting students borrow Chromebooks (no library card needed). Students can use a Chromebook during the week for life on campus— whether it’s in class, during an all-nighter, or browsing the internet in their dorm.

Lindsay Rumer, Chrome Marketing


Assuming you attend one of the partnered universities, here is how it works.

1. Request a Chromebook from the Library
2. Agree to the Terms of Use Agreement
3. Use the Chromebook as you like while you attend school
4. Return it when you want or when you leave

What happens if you don’t return it? Expect to receive a bill for the fair market value not to exceed $220.

Here’s the fine print.

“Evaluation Period” means the period of time specified to you at the time of checkout of a Device.

“Checkout Location” means the location specified by Google where Devices will be issued to you and collected from you.

1.1 Device Use. You may use the Device issued to you for your personal evaluation purposes. Upon your use of the Device, Google transfers title to the Device equipment to you, but retains all ownership rights, title and interest to any Google Devices and services and anything else that Google makes available to you, including without limitation any software on the Device.

1.2 Evaluation Period. You may use the Device during the Evaluation Period. Upon (i) expiration of the Evaluation Period, or (ii) termination of this Agreement, if this Agreement is terminated early in accordance Section 4, you agree to return the Device to the Checkout Location. If you fail to return the Device at the end of the Evaluation Period or upon termination of this Agreement, you agree Google may, to the extent allowed by applicable law, charge you up to the fair market value of the Device less normal wear and tear and any applicable taxes for an amount not to exceed Two Hundred Twenty ($220.00) Dollars USD.

1.3 Feedback. Google may ask you to provide feedback about the Device and related Google products optimized for Google Services. You are not required to provide feedback, but, if you do, it must only be from you, truthful, and accurate and you grant Google permission to use your name, logo and feedback in presentations and marketing materials regarding the Device. Your participation in providing feedback may be suspended at any time.

1.4 No Compensation. You will not be compensated for your use of the Devices or for your feedback.

2. Intellectual Property Rights. Nothing in this Agreement grants you intellectual property rights in the Devices or any other materials provided by Google. Except as provided in Section 1.1, Google will own all rights to anything you choose to submit under this Agreement. If that isn’t possible, then you agree to do whichever of the following that Google asks you to do: transfer all of your rights regarding your submissions to Google; give Google an exclusive, irrevocable, worldwide, royalty-free license to your submissions to Google; or grant Google any other reasonable rights. You will transfer your submissions to Google, and sign documents and provide support as requested by Google, and you appoint Google to act on your behalf to secure these rights from you. You waive any moral rights you have and agree not to exercise them, unless you notify Google and follow Google’s instructions.

3. Confidentiality. Your feedback and other submissions, is confidential subject to Google’s use of your feedback pursuant to Section 1.3.

4. Term. This Agreement becomes effective when you click the “I Agree” button and remains in force through the end of the Evaluation Period or earlier if either party gives written termination notice, which will be effective immediately. Upon expiration or termination, you will return the Device as set forth below. Additionally, Google will remove you from any related mailing lists within thirty (30) days of expiration or termination. Sections 1.3, 1.4, and Sections 2 through 5 survive any expiration or termination of this Agreement.

5. Device Returns. You will return the Device(s) to Google or its agents to the Checkout Location at the time specified to you at the time of checkout of the Device or if unavailable, to Google Chromebook Lending Library, 1600 Amphitheatre Parkway, Mountain View, CA 94043. Google may notify you during or after the term of this Agreement regarding return details or fees chargeable to you if you fail to return the Device.

The post Get a free Chromebook from the Google Lending Library appeared first on john's journal.

on September 12, 2014 10:52 PM

Dyn's free dynamic DNS service closed on Wednesday, May 7th, 2014.

CloudFlare, however, has a little-known feature that will allow you to update
your DNS records via its API or a command-line script called ddclient. This will
give you the same result, and it's also free.

Unfortunately, ddclient does not work with CloudFlare out of the box. There is
a patch available, and here is how to hack[1] it up on Debian or Ubuntu; it also works on Raspbian on the Raspberry Pi.

Requirements

basic command line skills, and a domain name
that you own.

CloudFlare

Sign up to CloudFlare and add your domain name.
Follow the instructions, the default values it gives should be fine.

You'll be letting CloudFlare host your domain so you need to adjust the
settings at your registrar.

If you'd like to use a subdomain, add an 'A' record for it. Any IP address
will do for now.

Let's get to business...

Installation

$ sudo apt-get install ddclient

Patch

$ sudo apt-get install curl sendmail libjson-any-perl libio-socket-ssl-perl
$ curl -O http://blog.peter-r.co.uk/uploads/ddclient-3.8.0-cloudflare-22-6-2014.patch 
$ sudo patch /usr/sbin/ddclient < ddclient-3.8.0-cloudflare-22-6-2014.patch

Config

$ sudo vi /etc/ddclient.conf

Add:

##
### CloudFlare (cloudflare.com)
###
ssl=yes
use=web, web=dyndns
protocol=cloudflare, \
server=www.cloudflare.com, \
zone=domain.com, \
login=you@email.com, \
password=api-key \
host.domain.com

Comment out:

#daemon=300

Your api-key comes from the account page

ssl=yes might already be in that file

use=web, web=dyndns will use dyndns to check IP (useful for NAT)

You're done. Log in to https://www.cloudflare.com and check that the IP listed for
your domain matches http://checkip.dyndns.com

To verify your settings:

sudo ddclient -daemon=0 -debug -verbose -noquiet
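
If you'd rather not run ddclient as a daemon at all, one option (an untested sketch, assuming the Debian package's paths) is a cron job that performs a single update on a schedule, e.g. this line in a file under /etc/cron.d/:

*/10 * * * * root /usr/sbin/ddclient -daemon=0 -syslog -quiet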

Fork this:
https://gist.github.com/ayr-ton/f6db56f15ab083ab6b55

on September 12, 2014 06:47 PM

My Family…

Harald Sitter

… is the best in the whole wide world!
akademy2014

on September 12, 2014 03:33 PM

S07E24 – The One with the Holiday Armadillo

Ubuntu Podcast from the UK LoCo

We’re back with Season Seven, Episode Twenty-Four of the Ubuntu Podcast! Alan Pope, Mark Johnson, and Laura Cowen are drinking tea and eating Battenburg cake in Studio L.

In this week’s show:

  • We discuss whether communities suck…

  • We also discuss:

    • Aurasma augmented reality
    • Upgrading to 14.10
    • Converting a family member to Ubuntu
  • We share some Command Line Lurve (from Patrick Archibald on G+), which sends a GUI.ShowNotification call to a media centre’s JSON-RPC API (XBMC/Kodi) to pop up a message on screen:
      curl -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0","method":"GUI.ShowNotification","params":{"title":"This is the title of the message","message":"This is the body of the message"},"id":1}' http://wopr.local:8080/jsonrpc
    
  • And we read your feedback. Thanks for sending it in!

We’ll be back next week, so please send your comments and suggestions to: podcast@ubuntu-uk.org
Join us on IRC in #uupc on Freenode
Leave a voicemail via phone: +44 (0) 203 298 1600, sip: podcast@sip.ubuntu-uk.org and skype: ubuntuukpodcast
Follow us on Twitter
Find our Facebook Fan Page
Follow us on Google+

on September 12, 2014 01:05 PM

September 11, 2014

Off to Berlin

Benjamin Kerensa

Right now, as this post is published, I’m probably settling into my seat for the next ten hours headed to Berlin, Germany as part of a group of leaders at Mozilla who will be meeting for ReMo Camp. This is my first transatlantic trip ever and perhaps my longest flight so far, so I’m both […]
on September 11, 2014 08:45 PM

Akademy Poll

Jonathan Riddell

KDE Project:

on September 11, 2014 08:42 PM

KDE Project:

DSC_0769
Hacking hard in the hacking room

DSC_0773
Blue Systems Beer

DSC_0775
You will keep GStreamer support in Phonon

DSC_0780
Boat trip on the loch

DSC_0781
Off the ferry

DSC_0783
Bushan leads the way

DSC_0784
A fairy castle appears in the distance

DSC_0787
The talent show judges

DSC_0790
Sinny models our stylish Kubuntu polo shirts

DSC_0793
Kubuntu Day discussions with developers from the Munich Kubuntu rollout

IMG 9510 v1
Kubuntu Day group photo with people from Kubuntu, KDE, Debian, Limux and Net-runner

c IMG 8903 v1
Jonathan gets a messiah complex

on September 11, 2014 08:34 PM

On Wearable Technology

Benjamin Kerensa

The Web has been filled with buzz of the news of new Android watches and the new Apple Watch but I’m still skeptical as to whether these first iterations of Smartwatches will have the kind of sales Apple and Google are hoping for. I do think wearable tech is the future. In fact, I owned […]
on September 11, 2014 12:00 PM

Accessible KDE, Kubuntu

Valorie Zimmerman

KDE is community. We welcome everyone, and make our software work for everyone. So, accessibility is central to all our work: in the community, in testing, coding, and documentation. Frederik has been working to make this true in Qt and in KDE for many years, Peter has done valuable work with Simon, and Jose is doing testing and some patches to fix stuff.

Now that KF5 is rolling out, we're finding a few problems with our KDE software, such as widgets, KDE configuration modules (KCMs), and even websites. However, the a11y team is too small to handle all this! Obviously, we need to grow the team.

So we've decided to make heavier use of the forums, where we might find new testers and folks to fix the problems, and perhaps even people to fix up the https://accessibility.kde.org/ website to be as
awesome as the KDE-Edu site. The Visual Design Group are the leaders here, and they are awesome!

Please drop by #kde-accessibility on Freenode or the Forum https://forum.kde.org/viewforum.php?f=216 to read up on what needs doing, and learn how to test. People stepping up to learn forum
moderation are also welcome. Frederik has recently posted about the BoF: https://forum.kde.org/viewtopic.php?f=216&t=122808

A11y was a topic in the Kubuntu BoF today, and we're going to make a new push to make sure our accessibility options work well out of the box, i.e. from first boot. This will involve working with the Ubuntu a11y team, yeah!

More information is available at
https://community.kde.org/Accessibility and
https://userbase.kde.org/Applications/Accessibility
on September 11, 2014 10:31 AM

Canonical and Ubuntu at dConstruct

Canonical Design Team

Brighton is not just a lovely seaside town, mostly known for being overcrowded in Summer by Londoners in search of a bit of escapism, but also the home of a thriving community of designers, makers and entrepreneurs. Some of these people run dConstruct, a gathering where creative minds of all sorts converge every year to discuss important themes around digital innovation and culture.

When I found out that we were sponsoring the conference this year, I promptly jumped in to help my colleagues in the Phone, Web and Juju design teams. Our stand was situated in the foyer of the Brighton Dome, flashing the orange banner of Ubuntu and a number of origami unicorns.

The Ubuntu Stand

Origami Unicorns

We had an incredibly positive response from the attendees, as our stand was literally teeming with Ubuntu enthusiasts who were really keen to check our progress with the phone. We had a few BQ phones on display where we showed the new features and designs.

Testing the phone

For us, it was a great occasion to gather fresh impressions of the user experience on the phone and across a variety of apps. After a few moments, people started to understand the edge interactions and began to swipe left and right, giving positive feedback on the responsiveness of the UI. Our pre-release models of BQ phones don’t have the final shell and they still display softkeys; as a result, some people found this confusing. We took the opportunity to quickly design our own custom BQ phone by using a bunch of Ubuntu stickers…and voilà, problem solved! ;)

Ubuntu phone - customised

Our ‘Make your Unicorn’ competition had a fantastic response. To celebrate the coming release of Utopic Unicorn and of the BQ phone, the maker of the best origami unicorn was awarded a new phone. The crowd did not hesitate to tackle the complex paper-bending challenge and came up with a bunch of creative outcomes. We were very impressed to see how many people managed to complete the instructions, as I didn’t manage to go beyond step 15 myself.

Ubuntu fans

Twitter search: #dconstruct #ubuntu

on September 11, 2014 09:57 AM

I read an interesting article on OMG! Ubuntu! about whether Canonical will enter the wearables business, now the smartwatch industry is hotting up.

On one hand (pun intended), it makes sense. Ubuntu is all about convergence; a core platform from top to bottom that adjusts and expands across different form factors, with a core developer platform, and a focus on content.

On the other hand (pun still intended), the wearables market is another complex economy that is heavily tethered, both technically and strategically, to existing markets and devices. If we think success in the phone market is complex, success in the burgeoning wearables market is going to be just as complex.

Now, to be clear, I have no idea whether Canonical is planning on entering the wearables market or not. It wouldn’t surprise me if this is a market of interest though as the investment in Ubuntu over the last few years has been in building a platform that could ultimately scale. It is logical to think this could map to a smartwatch as “another form factor”.

So, if technically it is doable, Canonical should do it, right?

No.

I want to see my friends and former colleagues at Canonical succeed, and this needs focus.

Great companies focus on doing a small set of things and doing them well. Spiraling off in a hundred different directions means dividing teams, dividing focus, and limiting opportunities. To use a tired saying…being a “jack of all trades and master of none”.

While all companies can be tempted in this direction, I am happy that on the client side of Canonical, the focus has been firmly placed on phone. TV has taken a back seat, tablet has taken a back seat. The focus has been on building a featureful, high-quality platform that is focused on phone, and bringing that product to market.

I would hate to think that Canonical would get distracted internally by chasing the smartwatch market while it is young. I believe it would do little but direct resources away from the major push now, which is phone.

If there is something we can learn from Apple here, it is that it isn’t important to be first. It is important to be the best. Apple rarely ships the first innovation, but they consistently knock it out of the park by building brilliant products that become best in class.

So, I have no doubt that the exciting new convergent future of Ubuntu could run on a watch, but let’s keep our heads down and get the phone out there and rocking, and the wearables and other form factors can come later.

on September 11, 2014 05:11 AM

September 10, 2014

The Open Source movement has evolved into other areas of computing. Open Data, Open Hardware, and, the topic that I want to talk about, Open Science, are three examples of this. Since I’m a biologist, I’m deeply connected to the science community, but I also want to tie my hobby of FOSS/Linux into my work. There are many non-coding (and coding) based tools and groups that one can use for research, and I want to talk about a few of them.

Mozilla Science Lab

Mozilla, the creators of Firefox and Thunderbird, started a group last year that aims to help scientists, “to use the power of the open web to change the way science is done. [They] build educational resources, tools and prototypes for the research community to make science more open, collaborative and efficient.” (main page of Mozilla Science Lab).

Right now, they are focusing on teaching scientists basic research skills via the Software Carpentry project. But I know that they are planning some projects on the community-building side for non-coders. I don’t know what those projects are, but I know that they will be listed soon on the group’s mailing list. For myself, I can’t wait to get my hands on those projects to help them grow.

Open Science Framework

Another fairly new project, started within the last two years by the Center for Open Science, focuses on creating a framework that allows scientists to manage the “entire research lifecycle: planning, execution, reporting, archiving, and discovery” (main page of OSF) and to share it with other people on their teams, even if they are somewhere else, far from the head researcher.

I think this is one of the best tools out there because it allows you to upload things to the site and also to pull them in from Dropbox and other services. I have played around with it a bit but have not used it fully; when I do, I will write a post about it.

Open Notebook Science

This is maybe the oldest project I think there is for Open Science: Open Notebook Science. It’s the idea of having the lab notebook publicly available online. There is a small network of these.

I think, along with the OSF project, it is one of the best tools out there, mainly because the data and other material are publicly available online for everyone to learn from your mistakes or to work with the data.

Hopefully, as time goes by, these projects will grow and researchers will be able to collaborate better.

 


on September 10, 2014 10:39 PM