Tuesday, February 12, 2019

linux.conf.au 2019

Along with a number of other Canonical staff I recently attended linux.conf.au 2019 in Christchurch, New Zealand. I consider this the major Australia/New Zealand yearly conference that covers general open source development. This year the theme of the conference was "Linux of Things" and many of the talks had an IoT connection.

One of the premium swag items was a Raspberry Pi Zero. It is unfortunate that this is not a supported Ubuntu Core device (CPU a generation too old) as this would have been a great opportunity to show an Ubuntu Core device in action. I did prepare a lightning talk showing some Ubuntu Core development on a Raspberry Pi 3, but this sadly didn't make the cut. You can see it in blog form.

LCA consistently has high quality talks, so choosing what to attend is hard. Mostly everything was recorded and is viewable on their YouTube channel. Here are some highlights from the talks I saw:

STM32 Development Boards (literally) Falling From The Sky (video) - This talk was about tracking and re-purposing hardware from weather balloons. I found it interesting as it made me think about the amount of e-waste that is likely to be generated as IoT increases and ways in which it can be recycled, particularly with open source software.

Plastic is Forever: Designing Tomu's Injection-Molded Case (video) and SymbiFlow - The next generation FOSS FPGA toolchain (video) - FPGA development is something that has really struggled to break into the mainstream. I think this is mostly down to two things: the lack of a quality open source toolchain and the lack of cheap hardware. These talks make it seem like we're getting really close with the SymbiFlow toolchain and hardware like the Fomu. I think we'll get some really interesting new developments when we get something close to the Raspberry Pi/Arduino experience and I'm looking forward to writing some code in the FPGA and IoT space, hopefully soon!

The Tragedy of systemd (video) - It's the conflict that just keeps giving 😭 Benno talked about how, regardless of how systemd came to exist, modern middleware provides real value. I had thought the majority had come to this conclusion but it seems this is still an idea that needs selling. I think the talk was effective in doing that.

Sequencing DNA with Linux Cores and Nanopores (video) - This was a live (!) demonstration of DNA sequencing performed on the speaker's lunch. This was done using the MinION - a USB DNA sequencer. As well as completing the task, what impressed me was that this was done on a laptop and no special software was required. Given this device costs around $1000 and is easy to use, it opens up DNA analysis to the open source world.

Around the world in 80 Microamps - ESP32 and LoRa for low-power IoT (video) - This discussed real world cases of building IoT / automation solutions using battery power (e.g. solar not suitable). It covered how it's very hard to run a Linux based solution for a long time on a battery, but technology is slowly improving. Turns out the popularity of e-scooters is making bigger and cheaper batteries available.

Christchurch has recently started trialing Lime scooters. These were super popular with a hacker crowd and quickly accumulated around the venue. I planned to scooter from the airport to the venue but sadly that day there weren't any nearby, so I walked half way and scootered the rest. They're super fun and useful so I recommend you try them if you are visiting a city that has them. 🙂

Tuesday, February 05, 2019

Easy IoT with Ubuntu Core and Raspberry Pi

My current job involves me mostly working in the upper layers of the desktop software stack, however I started out working in what was then called embedded engineering but now would probably be known as the Internet of Things (IoT). I worked on a number of projects which normally involved taking some industrial equipment (radio infrastructure, camera control systems) and adding a stripped down Linux kernel and an application.

While this was cutting edge at the time, there were a number of issues with this approach:
  • You essentially had to make your own mini-distribution to match the hardware you were using. There were some distributions available at the time but they were often not lightweight enough or had a financial cost.
  • You had to build your own update system. That comes with a lot of security risks.
  • The hardware was often custom.
The above issues meant a large overhead building and maintaining the platform instead of spending that time and money on your application. If you wanted to make a hobby project it was going to be expensive.

But we live in exciting times! It's now possible to use cheap hardware and easily accessible software to make a robust IoT device. For around $USD60 you can make a highly capable device using Ubuntu Core and Raspberry Pi. I decided to make a device that showed a scrolling LED display, but there are many other sensors and output devices you could attach.

The Raspberry Pi 3 A+ is a good choice to build with. It was just recently released and is the same as the B+ variant but on a smaller board. This means you save some money and space but only lose some connectors that you can probably live without in an IoT device.

I added an SD card and for protection put it in a case. I chose a nice Ubuntu orange colour.

Next step was to connect up a display (also in Ubuntu orange). Note this shouldn't have needed the wires - it should fit flat onto the case - but I spent so much time photographing the process that I accidentally soldered the connector on backwards. So don't make that mistake... 😕

Final step was to connect a USB power supply (e.g. a phone charger). The hardware is complete, now for the software...

Using Ubuntu Core 18 is as simple as downloading a file and copying it onto the SD card. Then I put the SD card into the Raspberry Pi, powered it on and all I had to do was:
  1. Select my home WiFi network.
  2. Enter my email address for my Ubuntu SSO account.
  3. Secure shell into the Raspberry Pi from my Ubuntu laptop.
The last step is magically easy. If you connect a screen to the Pi it shows you the exact ssh command to type to log into it (i.e. you don't have to work out the IP address) and it uses the SSH key you have attached to your Ubuntu SSO account - no password necessary!

$ ssh robert-ancell@

Now to write my application. I decided to write it in C so it would be fast and have very few dependencies. The easiest way to quickly develop was to cross-compile it on my Ubuntu laptop, then ssh the binary over to the Pi. This just required installing the appropriate compiler:

$ sudo apt install gcc-arm-linux-gnueabihf
$ arm-linux-gnueabihf-gcc test.c -o test
$ scp test robert-ancell@
$ ssh robert-ancell@ ./test

Once I was happy my application worked the next step was to package it to run on Ubuntu Core. Core doesn't use .deb packages, instead the whole system is built using Snaps.

All that is required to generate a snap is to fill out the following metadata (running snapcraft init creates the template for you):

name: little-orange-display
base: core18
version: git
summary: Demonstration app using Ubuntu Core and a Raspberry Pi
description: |
  This is a small app used to demonstrate using Ubuntu Core with a Raspberry Pi.
  It uses a Scroll pHAT HD display to show a message.

architectures:
  - build-on: all
    run-on: armhf

grade: stable
confinement: strict

apps:
  little-orange-display:
    daemon: simple
    command: display-daemon
    plugs:
      - i2c

parts:
  little-orange-display:
    plugin: make
    source: .
This describes the following:
  • Information for users to understand the app.
  • It is an armhf package that is stable and confined.
  • It should run as a daemon.
  • It needs special access to I2C devices (the display).
  • How to build it (use the Makefile I wrote).
To test the package I built it on my laptop and installed the .snap file on the Raspberry Pi:

$ snapcraft
$ scp little-orange-display_0+git.aaa6688_armhf.snap robert-ancell@
$ ssh robert-ancell@
$ snap install little-orange-display_0+git.aaa6688_armhf.snap
$ snap connect little-orange-display:i2c pi:i2c-1
$ snap start little-orange-display

And it ran!

The last stage was to upload it to the Snap store. This required me to register the name (little-orange-display) and upload it:

$ snapcraft register little-orange-display
$ snapcraft push little-orange-display_0+git.aaa6688_armhf.snap

And with that little-orange-display is in the store. If I wanted to make more devices I could, by installing Ubuntu Core and entering the following on each device:

$ snap install little-orange-display
$ snap connect little-orange-display:i2c pi:i2c-1
$ snap start little-orange-display

And that's the end of my little project. I spent very little time installing Ubuntu Core and doing the packaging and the majority of the time writing the app, so it solved the issues I would have traditionally encountered building a project like this.

Using Ubuntu Core and Snaps this project now has the following functionality available:
  • It automatically updates.
  • The application I wrote is confined, so any bugs I introduce are unlikely to break the OS or any other app that might be installed.
  • I can use Snap channels to test software easily. In their simplest usage I can have a device choose to be on the edge channel which contains a snap built directly from the git repository. When I'm happy that's working I can move it to the beta channel for wider testing and finally to the stable channel for all devices.
  • I get metrics on where my app is being used. Apparently it has one user in New Zealand currently (i.e. me). 🙂

Friday, December 14, 2018

Interesting things about the GIF image format

I recently took a deep dive into the GIF format. In the process I learnt a few things by reading the specification.

A GIF is made up of multiple images


I thought the GIF format would just contain a set of pixels. In fact, a GIF is made up of multiple images. So a simple example like:

Could actually be made up of multiple images like this:


GIF has transparency, but that doesn't mean you have transparent GIFs


In the above example the sun and house images have the background in them. If the background was very detailed then this would be inefficient. So instead you can set a transparent colour index for each image. Pixels with this index don't replace the background pixels when the images are composited together.
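The compositing rule can be sketched in a few lines of Python (hypothetical frame data as lists of palette indices - not a real decoder or the PyGIF API):

```python
def composite(background, frame, transparent_index):
    # Pixels holding the transparent colour index leave the existing
    # background pixel unchanged; every other pixel replaces it.
    return [bg if px == transparent_index else px
            for bg, px in zip(background, frame)]

# A 1x4 strip of palette indices; index 0 is this frame's transparent colour.
background = [5, 5, 5, 5]
frame = [0, 7, 0, 9]
print(composite(background, frame, transparent_index=0))  # [5, 7, 5, 9]
```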

That's the only transparency in the specification. The background colour is actually encoded in the file, so technically a GIF picture has all pixels set to a colour. However at some point renderers decided they wanted transparency, so they ignored the background colour and rendered it as transparent instead. It's not in the spec, but it's what everyone does. This is the reason that GIF transparency looks bad - there's no alpha channel, just a hack abusing another feature.

You can have more than 256 colours


GIFs are well known for having a palette of only up to 256 colours. However, you can have a different palette for each image in the GIF. That means in the above example you could use a palette with lots of greens and blues for the background, lots of reds for the house and lots of yellows for the sun. The combined image could have up to 768 colours! With some clever encoding you can have a GIF file that uses up to 24 million colours.
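A toy illustration of the idea (made-up palettes, not values read from a real file): each image carries its own local palette, so the composited result can use the union of all of them.

```python
# Three images, each with its own (tiny) local palette of RGB entries.
background_palette = [(0, 80, 0), (0, 120, 40), (40, 40, 200)]      # greens/blues
house_palette      = [(200, 0, 0), (140, 0, 0), (90, 20, 20)]       # reds
sun_palette        = [(255, 255, 0), (255, 210, 0), (255, 170, 0)]  # yellows

# The combined image can use the union of all local palettes.
distinct = set(background_palette) | set(house_palette) | set(sun_palette)
print(len(distinct))  # 9 distinct colours from three 3-entry palettes

# Scale the same idea to full 256-entry palettes and three images
# gives up to 3 * 256 = 768 distinct colours.
print(3 * 256)  # 768
```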

Animation is just delaying the rendering 

GIFs are most commonly used for small animations. This wasn't in the original specification but at some point someone realised if you inserted a delay between each image you could make an animation! In the above example we could animate by adding more images of the sun that were rotated from the previous frame with a delay before them:


Why we can't have nice things

With all of the above, GIF is a simple but powerful format. You can make an animation that is made up of small updates, efficiently encoded.

Sadly however someone decided that all images inside a GIF file should be treated as animation frames. And they should have a minimum delay time (including zero delays being rounded up to 20ms or so). So if you want your GIF to look as you intended, you're stuck with one image per frame and only 256 colours per frame unless the common decoders are fixed. It seems the main reason they continue to be like this is there are badly encoded GIF files online and they don't want them to stop working.

GIF, you are a surprisingly beautiful format and it's a shame we don't see your full potential!


Here is the story of how I fell down a rabbit hole and ended up learning far more about the GIF image format than I ever expected...
We had a problem with users viewing a promoted snap using GNOME Software. When they opened the details page they'd have huge CPU and memory usage. Watching the GIF in Firefox didn't show a problem - it showed a fairly simple screencast demoing the app without any issues.
I had a look at the GIF file and determined:
  • It was quite large for a GIF (13MB).
  • It had a lot of frames (625).
  • It was quite high resolution (1790×1060 pixels).
  • It appeared the GIF was generated from a compressed video stream, so most of the frame data was just compression artifacts. GIF is lossless so it was faithfully reproducing details you could barely notice.
GNOME Software uses GTK+, which uses gdk-pixbuf to render images. So I had a look at the GIF loading code. It turns out that all the frames are loaded into memory. That comes to 625×1790×1060×4 bytes. OK, that's about 4.4GB... I think I see where the problem is. There's a nice comment in the gdk-pixbuf source that sums up the situation well:

 /* The below reflects the "use hell of a lot of RAM" philosophy of coding */

They weren't kidding. 🙂
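The arithmetic behind that figure - frames × width × height × 4 bytes of RGBA per pixel:

```python
frames, width, height, bytes_per_pixel = 625, 1790, 1060, 4
total_bytes = frames * width * height * bytes_per_pixel
print(total_bytes)          # 4743500000
print(total_bytes / 2**30)  # about 4.4 GiB
```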

While this particular example is hopefully not the normal case, the GIF format has somewhat come back from the dead in recent years to become a popular format. So it would be nice if gdk-pixbuf could handle these cases well. This was going to be a fairly major change to make.

The first step in refactoring is making sure you aren't going to break any existing behaviour when you make changes. To do this the code being refactored should have comprehensive tests around it to detect any breakages. There are a good number of GIF tests currently in gdk-pixbuf, but they are mostly around ensuring particular bugs don't regress rather than checking all cases.

I went looking for a GIF test suite that we could use, but what was out there was mostly just collections of GIFs people had made over the years. This would give some good real world examples but no certainty that all cases were covered or why your code was breaking if a test failed.

If you can't find what you want, you have to build it. So I wrote PyGIF - a library to generate and decode GIF files and made sure it had a full test suite. I was pleasantly surprised that GIF actually has a very well written specification, and so implementation was not too hard. Diversion done, it was time to get back to gdk-pixbuf.

Tests plugged in, and the existing code actually had a number of issues. I fixed them, though this took a lot of sanity. It would have been easier to replace the code with new code that met the test suite, but I wanted the patches to be back-portable to stable releases (i.e. Ubuntu 16.04 and 18.04 LTS).

And with a better foundation, I could now make GIF frames load on demand. May your GIF viewing in GNOME continue to be awesome.

Thursday, November 15, 2018

Counting Code in GNOME Settings

I've been spending a bit of time recently working on GNOME Settings. One part of this has been bringing some of the older panel code up to modern standards, one of which is making use of GtkBuilder templates.

I wondered if any of these changes would show in the stats, so I wrote a program to analyse each branch in the git repository and break down the code between C and GtkBuilder. The results were graphed in Google Sheets:

This is just the user accounts panel, which shows some of the reduction in C code and increase in GtkBuilder data:

Here's the breakdown of which panels make up the codebase:

I don't think this draws any major conclusions, but is still interesting to see. Of note:
  • Some of the changes made in 3.28 did reduce the total amount of code! But it was quickly gobbled up by the new Thunderbolt panel.
  • Network and Printers are the dominant panels - look at all that code!
  • I ignored empty lines in the files in case differing coding styles would make some panels look bigger or smaller. It didn't seem to make a significant difference.
  • You can see a reduction in C code looking at individual panels that have been updated, but overall it gets lost in the total amount of code.
I'll have another look in a few cycles when more changes have landed (I'm working on a new sound panel at the moment).
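The counting approach above can be sketched like this (the file classification and blank-line rule follow the post; the details are my guess at the idea, not the actual program):

```python
import os

def count_lines(root):
    """Tally non-blank lines of C source versus GtkBuilder UI files."""
    totals = {"c": 0, "ui": 0}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if name.endswith((".c", ".h")):
                kind = "c"
            elif name.endswith(".ui"):
                kind = "ui"
            else:
                continue
            path = os.path.join(dirpath, name)
            with open(path, errors="replace") as f:
                # Skip empty lines so differing coding styles
                # don't skew the totals.
                totals[kind] += sum(1 for line in f if line.strip())
    return totals
```

Run over a checkout of each release branch, this gives the C versus GtkBuilder breakdown per release.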

Monday, July 16, 2018

GUADEC 2018 Almería

I recently attended the recent GNOME Users and Developers European Conference (GUADEC) in Almería, Spain. This was my fifth GUADEC and as always I was able to attend thanks to my employer Canonical paying for me to be there. This year we had seven members of the Ubuntu desktop team present. Almería was a beautiful location for the conference and a good trade for the winter weather I left on the opposite side of the world in New Zealand.

This was the second GUADEC since the Ubuntu desktop switched back to shipping GNOME and it’s been great to be back. I was really impressed how positive and co-operative everyone was; the community seems to be in a really healthy shape. The icing on the cake is the anonymous million dollar donation the foundation has received which they announced will be used to hire some staff.

The first talk of the week was from my teammates Ken VanDine, Didier Roche and Marco Treviño who talked about how we’d done the transition from Unity to GNOME in Ubuntu desktop. I was successful in getting an open talk slot and did a short talk about the state of Snap integration into GNOME. I talked about the work I’d done making snapd-glib and the Snap plugin in GNOME Software. I also touched on some of the work James Henstridge has been working on making Snaps work with portals. It was quite fun to see James be a bit of a celebrity after a long period of not being at a GUADEC - he is the JH in JHBuild!

After the first three days of talks the remaining three days are set for Birds of a Feather sessions where we get together in groups around a particular topic and discuss and hack on that. I organised a session on settings which turned out to be surprisingly popular! It was great to see everyone that I work with online in-person and allowed us to better understand each other. In particular I caught up with Georges Stavracas who has been very patient in reviewing the many patches I have been working on in GNOME Control Center.

I hope to see everyone again next year!

Friday, December 08, 2017

Setting up Continuous Integration on gitlab.gnome.org

Simple Scan recently migrated to the new gitlab.gnome.org infrastructure. With modern infrastructure I now have the opportunity to enable Continuous Integration (CI), which is a fancy name for automatically building and testing your software when you make changes (and it can do more than that too).

I've used CI in many projects in the past, and it's a really handy tool. However, I've never had to set it up myself and when I've looked it's been non-trivial to do so. The great news is this is really easy to do in GitLab!

There's lots of good documentation on how to set it up, but to save you some time I'll show how I set it up for Simple Scan, which is a fairly typical GNOME application.

To configure CI you need to create a file called .gitlab-ci.yml in your git repository. I started with the following:

build_ubuntu:
  image: ubuntu:rolling
  before_script:
    - apt-get update
    - apt-get install -q -y --no-install-recommends meson valac gcc gettext itstool libgtk-3-dev libgusb-dev libcolord-dev libpackagekit-glib2-dev libwebp-dev libsane-dev
  script:
    - meson _build
    - ninja -C _build install

The first line is the name of the job - "build_ubuntu". This is going to define how we build Simple Scan on Ubuntu.

The "image" is the name of a Docker image to build with. You can see all the available images on Docker Hub. In my case I chose an official Ubuntu image and used the "rolling" link which uses the most recently released Ubuntu version.

The "before_script" defines how to set up the system before building. Here I just install the packages I need to build simple-scan.

Finally the "script" is what is run to build Simple Scan. This is just what you'd do from the command line.

And with that, every time a change is made to the git repository Simple Scan is built on Ubuntu and tells me if that succeeded or not! To make things more visible I added the following to the top of the README.md:

[![Build Status](https://gitlab.gnome.org/GNOME/simple-scan/badges/master/build.svg)](https://gitlab.gnome.org/GNOME/simple-scan/pipelines)

This gives the following image that shows the status of the build:

pipeline status

And because there are many more consumers of Simple Scan than just Ubuntu, I added the following to .gitlab-ci.yml:

build_fedora:
  image: fedora:latest
  before_script:
    - dnf install -y meson vala gettext itstool gtk3-devel libgusb-devel colord-devel PackageKit-glib-devel libwebp-devel sane-backends-devel
  script:
    - meson _build
    - ninja -C _build install

Now it builds on both Ubuntu and Fedora with every commit!

I hope this helps you getting started with CI and gitlab.gnome.org. Happy hacking.

Wednesday, November 01, 2017

Retiring my Ubuntu Phone after 1000 days

With some sadness I recently replaced my Ubuntu Phone with a Nexus 5. It lasted me just over 1000 days (almost three years) as my everyday phone, and I last wrote about it at the 500 mark.

Even though this is the end for me and Ubuntu Phone the hope of a true open source phone platform continues on:
  • The Ubuntu Phone project lives on with UBports.
  • As I put my Ubuntu phone to rest the Purism Librem 5 project was funded with over $2 million!
I wish both these projects all the best.

My thoughts on my time with Ubuntu Phone:
  • It worked!
  • While the hardware (Meizu MX4) was reasonable, it would have been nice to see it on something newer/faster and have gone through some more iterations on software performance.
  • The apps I missed most were:
    • An app for my bank and network provider (that I could use to quickly check balances).
    • Communication apps (e.g. Facebook messenger, WhatsApp)
    • Uber
  • I used a reasonable amount of webapps, which mostly filled the gap where apps weren't available. It does appear that most companies put more effort into their mobile apps than their mobile web sites.