Friday, May 09, 2014

errors.ubuntu.com

One of the most useful tools I use is errors.ubuntu.com. This shows statistics for the crash reports that are sent from users' machines. Previously I would trawl through Launchpad trying to work out which crash reports were most significant; now we have numbers to show what's important.

Here is a graph showing the crash reports received for the last six releases over the last year:
Some interesting things from the graph:

  • We get more crash reports on weekdays than weekends (see the ripple in the 12.04 and 12.10 lines which otherwise seem quite stable).
  • Apparently people stop using Ubuntu development releases around Christmas (huge dip in the middle of the Ubuntu 14.04 line). Stable releases are unaffected.
  • You can see a step in crash reports for the 14.04 beta release (March). It looks like there was an outage then too, as every release has a dip in reports just before it.
  • As soon as 14.04 was released (April) there was a rapid migration from 13.10 to 14.04, so there are now probably less than half the 13.10 users there were before.
  • If you look closely you can also see a slight decrease in crash reports from 12.04 after the 14.04 release, so people are migrating LTS to LTS.
  • I guess the 12.10 users love it because they don't seem to have started migrating at all.
  • We get a huge number of crash reports on release days, and these drop off very smoothly over approximately three months. I guess this is due to the bugs being fixed and users slowly updating.
  • Sorry, I don't get the vertical axis any more than you do, other than to say "bigger means more crash reports" (bug). The X axis also shows months from 2013/2014 (bug).
  • Not sure why the left hand side is so high - have we really reduced crash reports that much?

Sunday, March 23, 2014

Why the display server doesn't matter

Display servers are the component in the display stack that seems to hog a lot of the limelight. I think this is a bit of a mistake, as it's actually probably the least important component, at least to a user.

In the modern display stack there are five main components:
  • Hardware
  • Driver
  • Display Server / Shell
  • Toolkit / Platform API
  • Applications
The hardware we have no control over. We just get to pick which hardware to buy. The driver we have more control over - drivers range from completely closed source to fully open source. There's a tug of war between the hardware manufacturers who are used to being closed (like their hardware) and the open source community which wants to be able to modify / fix the drivers.

For (too) many years we've lived with the X display server in the open source world. But now we are moving to next generation display servers (as Apple and Microsoft did many years ago). At the moment there are two new classes of contender for the X replacement: Mir and a set of Wayland-based compositors (e.g. Weston, mutter-wayland, etc.).

Applications use toolkits and platform APIs to access graphical functionality. There are plenty of toolkits out there (e.g. GTK+, Qt), and existing libraries are becoming broad, consistent and stable enough to be considered complete platform APIs (which is great for developers).

If you read the Internet you would think the most important part in this new world is the display server. But actually it's just a detail that doesn't matter that much.
  • Applications access the display server via a toolkit. All the successful toolkits support multiple backends because there's more than one OS out there today. In general you can take a GTK+ application and run it in Windows and everything just works.
  • The hardware and drivers are becoming more and more generic. Video cards used to have very specialised functionality and OpenGL used to provide only a fixed-function pipeline. Now video cards are basically massively parallel processors (see OpenCL) and OpenGL is a means of passing shaders and buffer contents.
The result of this is the display server doesn't matter much to applications because we have pretty good toolkits that already hide all this information from us. And it doesn't matter much to drivers as they're providing much the same operations to anything that uses them (i.e. buffer management and passing shaders around).

So what does matter now?
  • It does matter that we have open drivers. Because there will be different things exercising them, we need to be able to fix drivers when display server B hits a bug but A doesn't. We saw this working with Mir on Android drivers: since these drivers are normally only used by SurfaceFlinger, there are odd bugs if you do things differently. Filing a bug report is no substitute for being able to read and fix the driver yourself.
  • The shell matters a lot more. We're moving on from the WIMP paradigm. We have multiple form factors now. The shell expresses what an application can do and different shells are likely to vary in what they allow.
I hope I've given some insight into the complex world of display stacks and shown we have plenty of room for innovation in the middle without causing major problems to the bits that matter to users. 

Tuesday, March 12, 2013

Ubuntu GNOME

Thanks to the hard work of Jeremy Bicha and others Ubuntu GNOME is now an official Ubuntu flavour. Flavours get some infrastructure and support benefits such as ISO creation that make it easier to release and support.

More information on Mir

With the recent announcement of Mir there's been some concern about what this means for Ubuntu and the wider Linux ecosystem. Christopher Halse Rogers who is on the Mir team has written some excellent posts covering some of the major questions: why Mir and not Wayland/Weston, what does this mean for other desktops on Ubuntu and what does this mean for Linux graphics drivers.

Well worth the read.

Tuesday, March 05, 2013

Mir

Today we go public with the Ubuntu graphics stack for the post X world. Since the beginning Ubuntu has relied on the X server to support the user experience, and while it has generally worked well, it's time for something new. My team is working on a big new component for this - Mir. Mir is a graphics technology that allows us to implement the user experience we want for Ubuntu across all the devices we support.

In many ways, Mir will be completely transparent to the user. Applications that use toolkits (e.g. Qt, GTK+) will not need to be recompiled. Unity will still look like Unity. We will support legacy X applications for the foreseeable future.

This is a big task. A lot of work has already been done and there’s a lot more to go. We’re aiming to do incremental improvements, and you can find more about this on the Wiki page and in the blueprints. You can help. From today our project is public, it’s GPL licensed and you’re welcome to use the source and propose changes.

It’s exciting times, and I hope you enjoy the results of this work!

Thursday, January 10, 2013

Vala support for Protocol Buffers

Recently I was playing around with Protocol Buffers, a data interchange format from Google. In the past I have spent quite a bit of time working with ASN.1 which is a similar format that has been around for many years. Protocol buffers seem to me to be a nice distillation of the useful parts of efficient data interchange and a welcome relief to the enormous size of the ASN.1 specifications.

With Vala being my favourite super-productive language I felt the need to add support to it. Solution: protobuf-vala.

Let's see it in action. Say you have the following protocol in rating.proto:

message Rating {
  required string thing = 1;
  required uint32 n_stars = 2 [default = 3];
  optional string comment = 3;
}


Run it through the protocol buffer compiler with:

$ protoc rating.proto --vala_out=.

This will create a Vala file rating.pb.vala with a class like this:

public class Rating
{
  public string thing;
  public uint32 n_stars;
  public string comment;
  // ... plus encode () and decode () methods
}


You can use this class to encode a rating, e.g. for storing to a file or sending over a network protocol:

var rating = new Rating ();
rating.thing = "Vala";
rating.comment = "Vala is super awesome!";
rating.n_stars = 5;
var buffer = new Protobuf.EncodeBuffer ();
rating.encode (buffer);
do_something_with_data (buffer.data);

And decode it:

var data = get_data_from_somewhere ();
var buffer = new Protobuf.DecodeBuffer (data);
var rating = new Rating ();
rating.decode (buffer);
stderr.printf ("%s is %u stars\n", rating.thing, rating.n_stars);

That's pretty much it!
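To give a feel for what ends up on the wire, here's a hand-rolled sketch of how a Rating message like the one above is encoded. This is purely for illustration (protobuf-vala does all of this for you), and the helper names are my own invention, not part of any library: each field is a varint-encoded tag (field number plus wire type) followed by its payload.

```python
# Toy illustration of the protocol buffer wire format for the
# Rating message above. Real code should use a protobuf library.

def encode_varint(value):
    # Varints store 7 bits per byte, least significant group first;
    # the high bit flags that more bytes follow.
    out = b''
    while True:
        byte = value & 0x7f
        value >>= 7
        if value:
            out += bytes([byte | 0x80])
        else:
            return out + bytes([byte])

def encode_tag(field_number, wire_type):
    # A tag is (field_number << 3) | wire_type, itself a varint.
    return encode_varint((field_number << 3) | wire_type)

def encode_rating(thing, n_stars):
    # Field 1: string (wire type 2 = length-delimited).
    data = encode_tag(1, 2) + encode_varint(len(thing)) + thing.encode()
    # Field 2: uint32 (wire type 0 = varint).
    data += encode_tag(2, 0) + encode_varint(n_stars)
    return data

print(encode_rating("Vala", 5).hex())  # 0a0456616c611005
```

Eight bytes for the whole message - which is the appeal over ASN.1's heavyweight encodings.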

If you're using Ubuntu (12.04 LTS, 12.10 or 13.04) then you can install Vala protocol buffer support with:

$ sudo apt-add-repository ppa:protobuf-vala-team/ppa
$ sudo apt-get update
$ sudo apt-get install protobuf-compiler-vala libprotobuf-vala-dev

Have fun!

Thursday, December 27, 2012

A script for supporting multiple Ubuntu releases in a PPA

Something I find time consuming is uploading to a PPA when you want to support multiple Ubuntu releases. For my projects I generally want to support the most recent LTS release, the current stable release and the current development release (precise, quantal and raring when this was written).

I keep my program in a branch and release it with make distcheck and lp-project-upload. The packaging is stored in another branch.

For each release I update the packaging with dch -i and add a new entry, e.g.

myproject (0.1.5-0ubuntu1) precise; urgency=low

  * New upstream release:
    - New exciting stuff

 -- Me <me@canonical.com>  Thu, 27 Dec 2012 16:52:22 +1300


I then run release.sh and this generates three source packages and uploads them to the PPA:

#!/bin/sh
set -e

NAME=myproject
PPA=ppa:myteam/myproject
RELEASES="raring quantal precise"

VERSION=`head -1 debian/changelog | grep -o '[0-9.]*' | head -1`
ORIG_RELEASE=`head -1 debian/changelog | sed 's/.*) \(.*\);.*/\1/'`
for RELEASE in $RELEASES; do
  cp debian/changelog debian/changelog.backup
  # Retarget the changelog at this release and add a ~release suffix
  # so each upload gets a unique version.
  sed -i "s/${ORIG_RELEASE}/${RELEASE}/;s/0ubuntu1/0ubuntu1~${RELEASE}1/" debian/changelog
  bzr-buildpackage -S -- -sa
  dput ${PPA} ../${NAME}_${VERSION}-0ubuntu1~${RELEASE}1_source.changes
  mv debian/changelog.backup debian/changelog
done


Hope this is useful for someone!

Note I don't use source recipes as I want just a single package uploaded for each release.

Wednesday, November 21, 2012

Testing Kerberos in Ubuntu

In fixing a LightDM bug recently I needed to set up Kerberos authentication for testing. Now, Kerberos comes with quite a reputation for complexity so this was not a task I was looking forward to. And googling around to get some simple Ubuntu instructions only ended up confirming my expectations. But in the end, I was able to get it to work [1] and here is what I did. You should probably not rely on this information for an actual Kerberos implementation.

I start with two machines running Ubuntu, one as the Kerberos server [2] and one as a client. The client already has a user account called test.

Server configuration


Edit /etc/krb5.conf to set the default realm [3]:
 
[libdefaults]
default_realm = TEST

Install the Kerberos server:

$ sudo apt-get install krb5-kdc krb5-admin-server

Create the realm. You will be prompted for a master password for the realm:

$ sudo krb5_newrealm

Add a new user (called a principal in Kerberos language) into the realm with the same username as on the client. You will be prompted for a password for this user [4]:

$ sudo kadmin.local
kadmin.local:  add_principal test


And now the server should be running. You can check things are working by watching the log:

$ tail -f /var/log/auth.log

Client configuration


The client is a lot easier, as the packages do most of the work for you:

$ sudo apt-get install krb5-user

You will be prompted for the following information:
  • Set "Default Kerberos version 5 realm" to TEST
  • Set "Kerberos server for your realm" to address / hostname of your server
  • Set "Administrative server for your Kerberos realm" to address / hostname of your server
Now you can test by getting a ticket [5] from the server. You will be prompted for the password you set when running kadmin.local on the server:

$ kinit
$ kdestroy


If that worked then you're ready to go. Have a look at auth.log on the server if it didn't work (though the error messages are a bit cryptic).

The next step is to set up PAM [6] to allow authentication with Kerberos. There's no configuration required, just install it:

$ sudo apt-get install libpam-krb5

Now you can log into your client machine (e.g. from LightDM/Unity Greeter) using the Kerberos password you set up on the server. Remember, if something went wrong you can still use the local password to get in [7].

The reason I set all this up was to test Kerberos accounts which need password changes. You can control this feature from the server using the following:

$ sudo kadmin.local
kadmin.local:  modify_principal +needchange test



[1] on Ubuntu 13.04 (server) and 12.04 (client). I don't know which other combinations will work.
[2] Called a Key Distribution Centre in Kerberos jargon.
[3] Kerberos calls different authentication domains realms. I've used the realm TEST though in proper usage this would be a domain name e.g. EXAMPLE.COM to avoid name collision.
[4] You will already have a password set for this user on the client machine. Pick a different password, as this allows you to log in with either the Kerberos or local password - both passwords will work.
[5] A ticket is the name for an authentication token provided by the server. In a real implementation this ticket will allow you to access services without re-entering your password.
[6] PAM is the library that does authentication when logging into Ubuntu.
[7] The PAM configuration that the packages setup first tries your password with the Kerberos server, then the local passwords (/etc/shadow) if that fails.

Thursday, February 09, 2012

So You Want to Write a LightDM Greeter…

Matt Fischer wrote a great post about writing a greeter for LightDM.  It runs through an example of a Python greeter and explains how it works.

Thursday, December 08, 2011

Gnome Games Modernisation

The GNOME Games project maintains fifteen small "five-minute" games for the GNOME desktop.

Unfortunately, over time the games have struggled to keep up with the latest GNOME technology due to the time required to do this.  And the further behind we've got, the harder it is for new developers to get involved, as the code is hard to work with.

So the time has come for a great modernising.  And here's where you fit in :)

We've picked eight of the games we think are the best and we want to focus on bringing them up to modern standards.  The games are Chess, Five or More, Mines, Iagno, Mahjongg, Sudoku and Swell Foop.  These games all have been or are in progress of being ported to Vala.  Vala is a modern programming language that will be familiar to anyone who has used Java or C#.

We have a Matrix of things to do, with the goal being to turn everything green.


So, if you're interested in helping out, follow any of the bug links and start fixing the bugs!  All the tasks can be completed independently and shouldn't be too complex to achieve.  Anyone is welcome to attempt them, and there are non-coding tasks too (documentation and design).

Saturday, September 10, 2011

GNOME OS

There was a very good interview with Jon McCann recently about GNOME 3 and GNOME OS.  Reading public comments on this interview showed a lot of negativity which I think missed the good points.

GNOME OS is unfortunately very loosely defined, but from what I can gather it's essentially about controlling the entire stack that GNOME sits on, to make a better experience and make it easier to work on the project.

What I like about this strategy:
  • It focuses on the users that GNOME is targeted at.  We've had a strong direction since the GNOME 2 days, and focusing through design on the features these users need is the right way of getting there.
  • It's dropping old desktop metaphors and moving to new ones.  There are other desktops like XFCE which will continue the Windows 95 desktop metaphor and be successful with it; it's right for GNOME to move on and be more cutting edge.
What I don't like about it:
  • It downplays the value of GNOME as a "box of bits".  The drum I'm banging at the moment is about sharing infrastructure.  This is something GNOME has been very successful with in the past and discouraging this cuts off a lot of places where GNOME can get investment from other projects.
  • It puts very strong requirements on distributors which they don't want to / can't meet.  GNOME is not like Apple, it can't control the entire stack from hardware to sales.  It needs to work with distributors or have a distribution strategy.  Building a perfect desktop is not enough.
The impression I get is GNOME OS is now effectively the strategy of GNOME and it's generally a good direction.  We need to make sure to flesh it out and ensure that we can have sustained development in GNOME and get wide distribution.

Desktop common ground

There's a common argument that you hear about open source desktops which goes something like "we have less than 1% market share; the other desktops are laughing at us; we should pool together and make a real contender".

And you know what, they're right!  We don't have a significant market share, and we're not at the point where we have a truly amazing desktop experience (but we're getting closer).  Anything we can do to get there faster must be a good thing.

And you know what, they're wrong.  We don't have a finite developer resource.  Open-source is amazing like that - when a project starts that people care about suddenly the community grows.  Trying to mash everyone together into one project wouldn't work and would probably make us even slower.  Putting all our eggs in one basket is a big risk.

How can we share resources to grow that market share without being pushed together into one big compromise?  We need to share infrastructure.  What we need is a POSIX for the 21st century.  We've been slowly building this with things like the Linux kernel, D-Bus, X and GStreamer.  We can do more.

There's been a rise in design thinking in Open-Source which has been really good for everyone.  But I think the pendulum has swung too far.  User experience is not the only factor in deciding what to do.  Infrastructure is expensive.  Every bad API slows down progress on the layers above it.  And every desktop developer that dives into infrastructure is not working on those layers either.

Sharing is hard.  But the cost of not sharing is huge.  Let's make sure the infrastructure we're building for tomorrow works cross-desktop so we can share those costs.

Friday, September 02, 2011

Desktop Summit 2011

Last month I attended the Desktop Summit 2011 in Berlin.  Unfortunately I was only there for the core days because Berlin is an awesome city and the summit is awesome too.

The quality of the talks this year was great, and I only had one or two slots the whole time where there was nothing I wanted to go to.  This summit felt more integrated than the last one and I hope this continues into the future.

Some highlights:
  • There was a good response to LightDM.  I felt my talk had a lack of GNOME people present, but I think the GTK4 talk may have absorbed them.
  • Lennart did a well researched talk on revamping the login system which sounds very good and left me wanting to know more.
  • From what I heard the future of GTK+4 and Clutter looks very promising, but I haven't been able to see the talk as I was doing mine.  Can't wait for the videos to come out so I can find out more.
  • Vincent Untz did a really thoughtful talk on his experiences as a GNOME release manager.
  • GNOME Shell seems to be progressing very well and there were a lot of talks on it.
  • Plasma/KDE also seems to be doing a lot of innovation.
Some negatives:
  • There was basically no mention of Unity.
  • There was the usual amount of Canonical bashing, and it's not helping anyone.  The GNOME State of the Union had too many cheap jabs, and the half-hearted laughter shows it's just not funny anymore.
  • There was a lack of Canonical people present, and it was commented on numerous times.  I'm personally not surprised, as every year more of my colleagues just don't want to be there.  Andrea Cimitan, who is a great guy, summed it up when he said on Google+: "when I say around here I'm working for Canonical, people stop smiling :)".
  • There was little mention of GNOME OS.  Sometimes we need to be more than just hackers talking about technology and really talk more about planning and strategy.
What I'd like to see at the next summit:
  • Increased visibility of other desktops - it still feels very GNOME and KDE centric, I think we can learn a lot from projects like XFCE, LXDE, Elementary, Unity etc.
  • Increased collaboration on infrastructure - we need to get freedesktop.org into better shape so we can pool our resources on the boring stuff and focus more on the user-facing components which make us successful.

Thursday, September 01, 2011

Broken PDFs in Simple Scan

Since version 2.32 Simple Scan has had a bug where it generates PDF files with invalid cross-reference tables.  The good news is this bug is now fixed, and PDFs will be generated correctly in simple-scan 3.2; thanks to Rafał Mużyło, who diagnosed this.  You may not have noticed the bug, as a number of PDF readers (e.g. Evince) handle these types of failures and rebuild the table.  However, some versions of Adobe Reader do not.

I've added a command line option that can fix existing PDF files that you have generated with Simple Scan.  To use it, run the following:

simple-scan --fix-pdf ~/Documents/*.pdf

It should be safe to run this on all PDF documents, but PLEASE BACKUP FIRST. It will copy the existing document to DocumentName.pdf~ before replacing it with the fixed version, so you have a copy in case anything goes wrong.

If you can't wait for the next simple-scan release, you can also run this Python program (e.g. python fixpdf.py broken.pdf > fixed.pdf):

import sys
import re
lines = file (sys.argv[1]).readlines ()
# Each un-doubled '%' removes one byte, shifting all xref offsets.
xref_offset = 0
for (n, line) in enumerate (lines):
        # Fix PDF header and binary comment
        if (n == 0 or n == 1) and line.startswith ('%%'):
                xref_offset -= 1
                line = line[1:]
        # Fix xref format
        match = re.match ('(\d\d\d\d\d\d\d\d\d\d) 0000 n\n', line)
        if match != None:
                offset = int (match.groups ()[0])
                line = '%010d 00000 n \n' % (offset + xref_offset)
        # Fix xref offset
        if n == len(lines) - 2:
                line = '%d\n' % (int (line) + xref_offset)
        # Fix EOF marker
        if n == len(lines) - 1 and line.startswith ('%%%%'):
            line = line[2:]
        print line,
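The core of the fix is simple arithmetic: every '%%' that the script collapses back to '%' removes one byte from the start of the file, so every byte offset recorded in the xref table needs to shrink by the total number of bytes removed before it. A toy illustration with made-up offsets (the helper name and numbers are hypothetical, not from a real PDF):

```python
# Hypothetical sketch of the xref offset correction performed above:
# each doubled '%' collapsed to '%' removes one byte near the start
# of the file, so all recorded offsets shift down by the same amount.

def corrected_offsets(offsets, byte_shift):
    # All xref entries point past the damaged header, so they all
    # move by the same (negative) shift.
    return [offset + byte_shift for offset in offsets]

# Two doubled '%' characters fixed => every offset shifts by -2.
print(corrected_offsets([15, 120, 340], -2))  # [13, 118, 338]
```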

Wednesday, August 03, 2011

LightDM at the Desktop Summit

I'm going to the Desktop Summit and will be doing a talk on LightDM; please come along if you're interested in this project.  It's at the same time as the GTK4 talk unfortunately, but I promise I'll watch that one on video and turn up to mine...

Thursday, May 19, 2011

Razing the Bazaar

Currently in GNOME there is some tension, as we move into the post 3.0 world, about the scope and direction of the project.  I won't go into the details of this, but essentially a number of core developers are pushing for a future in which:
  • The scope is widened to include more as part of core GNOME.  This is to allow more control and integration to produce a better user experience.
  • The project focus is being narrowed to have tighter requirements.  This is to reduce support overhead and complexity.
This change is being pushed under the "GNOME OS" banner.  While I think these ideas are being pushed for noble reasons (to make GNOME as good as it can be), there are some serious risks I am worried about:
  • If we build the perfect OS in GNOME it will not be enough.  History is littered with better products that fail to succeed.  Making an OS successful is as much about the OS design and quality as the ability to deliver that OS to end users.
  • If we base all our decision making on "what user visible change does this have?" then we risk losing innovation in our platform.  End-users are only one type of user in an OS and not all changes are relevant to them.  We have to think more in terms of "will this have a bad effect on end-users?" and look at other aspects.
  • If we narrow our focus too much we risk losing some of our current community.  The community is an enormous asset of GNOME, and not something we should take for granted.  This is not a company; it is driven by motivated individuals (some of whom are then employed by companies).  There are a great number of communities out there and GNOME needs to be competitive.
  • If we try and control everything then we increase the burden of maintenance onto one project.  There is no funding guaranteed to get us to GNOME 4.  We should always look (within reason) for opportunities to collaborate with other communities.
To abuse the metaphor by Eric S. Raymond, it feels like we are razing the bazaar to build the GNOME OS cathedral.  We have a great product in GNOME, but to build it faster and better we don't have to clean up all our messy edges.  The bazaar around the cathedral is interesting and fun and throws up new ideas.  It's not stopping us from achieving success.

UPDATE:  Changed description of project focus, as it is confusing the point of this post.

Wednesday, March 09, 2011

And now for some good news

With all the doom and gloom blog posts running around at the moment you may be forgetting all the awesome progress that is being made.  So I just wanted to shout out some things that are happening that I love:

GTK3

GTK+ has been cleaned up and it shows!  GTK is a great toolkit but it had been showing its age.  The tidying up (particularly removing the GDK stuff) has significantly reduced the learning curve.  And more improvements are planned for GTK4!

GNOME Shell/Unity

The core user interface is being pulled from the 1990s to the future!  There are real risks and challenges here but it's progress in making GNOME the front-running interface it deserves to be.

GObject Introspection

No more out-of-date language bindings!  With introspection information GNOME developers have huge flexibility in picking languages and all languages are first class citizens.

Vala

A modern language for a modern desktop!  Languages like Java and C# offered a lot of promise, but never seemed to break into GNOME.  A modern language makes us more productive, attracts new experienced developers and gives us an opportunity to escape from the Albatross around our neck (C).

Monday, December 06, 2010

Brainstorm Idea #25877: GNOME System Monitor lacks in-depth information

One of the popular Ubuntu Brainstorm ideas is to improve GNOME System Monitor.  This has been reported to the upstream project, and the project developers agree with the concept.

There is a proposed design (no attribution, as it's not clear who made it).

How can you get involved?

If you have some coding skills, then consider making a patch to fix this!  This is a well-defined feature request, and should be relatively easy to get to work.  Here's how:
  1. Comment on the bug that you are interested in working on this.  Ideally the GNOME System Monitor developers will be able to help you, but I am also watching the bug and willing to help out.
  2. Get the dependencies required to build the GNOME System Monitor.  On Ubuntu this is as easy as: sudo apt-get build-dep gnome-system-monitor.
  3. Check out the upstream source: git clone git://git.gnome.org/gnome-system-monitor.
  4. Build and test it: ./autogen.sh --prefix=`pwd`/install ; make ; make install ; ./install/bin/gnome-system-monitor.
  5. Make some changes and repeat from step 4 until everything works.
  6. When you are done, do a git commit -a; git format-patch origin and attach the patch to the bug report.
Happy coding!

Thursday, November 11, 2010

Using the latest SANE drivers in Ubuntu 10.04, 10.10

If you are running Ubuntu 10.04 LTS or 10.10 and your scanner is not supported, then you can try the latest releases of the SANE drivers from my sane-backends PPA.  The following steps from the terminal will enable it.

$ sudo add-apt-repository ppa:robert-ancell/sane-backends
$ sudo apt-get update
$ sudo apt-get upgrade


Please send feedback to the SANE project if you continue to have problems!

GNOME3 PPA

In the Ubuntu Desktop team we're currently packaging GNOME 3 components for Ubuntu as per the blueprint decided at the last Ubuntu Developer Summit.

If you're already brave enough to be running Natty, then you can additionally try some new GNOME 3 applications by adding the GNOME3 builds PPA into your sources.  Expect the usual - the packages may not work perfectly, and it's non-trivial to downgrade, so be warned!

We're going to evaluate these packages in the PPA and decide how many are appropriate to include in Natty.