Tue, 14 Sep 2010
I'm keen to try and write more about the things that I work on as part of my job at Canonical. In order to get started I wanted to write a summary of some of the things that I have done, as well as a little about what I am working on now.
Ubuntu Distributed Development
This isn't the catchiest name for a project ever, and it has an unfortunate collision with a Debian project also shortened to "UDD." However, the aim is for this title to become a thing of the past, and for this just to be the way things are done.
This effort is firstly about getting Ubuntu to use Bazaar, and a suite of associated tools, to get the packaging work done. There are multiple reasons for this.
First, and most simply, is to give developers the power of version control as they are working on Ubuntu packages. This is useful for both the large things and the small. For instance I sometimes appreciate being able to walk through the history of a package, comparing diffs here and files there when debugging a complex problem. Sometimes though it's just being able to "bzr revert" a file, rather than having to unpack the source again somewhere else, extracting the file and copying it over the top.
There are higher purposes with the work too. The goal is to link the packaging with the upstream code at the version control level, so that one flows in to the other. This has practical uses, such as being able to follow changes as they flow upstream and back down again, or better merging of new upstream versions. I believe it has some other benefits too, such as being able to see the packages more clearly as what they are, a branch of upstream. We won't just talk about them being that, but they truly will be.
Some of you will be thinking "that's all well and good, but <project> uses git," and you are absolutely right. Throughout this work we have had two principles in mind, to work with multiple systems outside of Ubuntu, and to provide a consistent interface within Ubuntu.
Due to the way that Ubuntu works an Ubuntu developer could be working on any package next. I would really like it if the basics of working with that package were the same regardless of what it was. We have a lot of work to do on the packaging level to get there, but this project gets this consistency on the version control level.
We can't get everyone outside of Ubuntu to follow us in this though. We have to work with the system that upstream uses, and also to work with Debian in the middle. This means that we have to design systems that can interface between the two, so we rely a lot on Launchpad's bzr code imports. We also want to interface at the other end as well, at "push" time. This means that if an Ubuntu developer produces a patch that they want to send upstream they can do that without having to reach for a possibly different VCS.
Thanks mainly to the work of Jelmer Vernooij we are doing fairly well at being able to produce patches in the format appropriate for the upstream VCS, but we still have a way to go to close the loop. The difficulty here is more around the hundreds of ways that projects like to have patches submitted, whether it is a mailing list or a bug tracker, or some other form. At this stage I'd like to provide the building blocks that developers can put together as appropriate for that project.
Daily package builds
Relatedly, but with slightly different aims, I have been working on a project in conjunction with the Launchpad developers to allow people to have daily builds of their projects as packages.
Currently there is too often a gap between using packaged versions of a project, and running the tip of that project daily. I believe that there are lots of people that would like to follow the development of their favourite projects closely, but either don't feel comfortable building from the VCS, or don't want to go through the hassle.
Packages are of course a great way to distribute pre-compiled software, so it was natural to want to provide builds in this format, but I'm not aware of many projects doing that, aside from those which fta provides builds for. Now that Launchpad provides PPAs and code imports, and the previous project provides imports of the packaging of all Debian and Ubuntu packages in to bzr, all the pieces are there in order to allow you to produce packages of a project automatically every day.
This is currently available in beta in Launchpad, so you can go and try it out, though there are a few known problems that we are working through before it will be as pleasant as we want.
This has the potential to do great things for projects if used correctly. It can increase the number of people testing fresh code and giving feedback by orders of magnitude. Just building the packages also acts as a kind of continuous integration, and can provide early warning of problems that will affect the packaging of the project. Finally, daily builds give people an easy way to test the latest code when a bug is believed to be fixed.
Obviously there are some dangers associated with automatic builds, but if they are used by people who know what they are doing then it can help to close the loop between users and developers.
There are also many more things that can be done with this feature by people with imagination, so I'm excited to see what directions people will take it in.
Aside from these projects, I was also given some time to work on Ubuntu itself, but without long-term projects to ship. That meant that I was able to fix things that were standing in my way, either in the way of the above projects, or just hampering my use of Ubuntu, or fix important bugs in the release.
In addition I took on smaller projects, such as getting kerneloops enabled by default in Ubuntu. While doing this I realised that the user experience of that tool could be improved a lot for Ubuntu users, as well as allowing us to report the problems caught by the tool as bugs in Launchpad if we wished.
I really enjoyed having this flexibility, as it allowed me to learn about many areas of the Ubuntu system, and beyond, and also played to my strengths of being able to quickly dive in to a new codebase and diagnose problems.
I think that in my own small way, each of these helped to improve Ubuntu releases, and in turn the projects that Ubuntu is built from.
While I'm sorry to say that other demands have pulled my code review time in to other projects, I used to spend a lot of time reviewing and sponsoring changes in to Ubuntu.
I highlight this mainly as another chance to emphasise how important I think code review is, especially when it is review of code from people new to the project. It improves code quality, but is also a great opportunity for mentoring, encouraging good habits, and helping new developers join the project. I hope that my efforts in this area had a few of these characteristics and helped increase the number of free software developers. Oh how I wish there were more time to continue doing this.
I've now started working on the Linaro project, specifically in the Infrastructure team, working on tools and infrastructure for Linaro developers and beyond. I'm not one to be all talk and no action, so I won't talk too much about what I am working on, but I would like to talk about why it is important.
Firstly I think that Linaro is an important project for Free Software, as it has the opportunity to lead to more devices being sold that are built partly or entirely on free software, some in areas that have historically been home to players that have not been good open source citizens.
Also, I think tools are an important area to work on, not just in Linaro. They pervade the development experience, and can be a huge pain to work with. It's important that we have great tools for developing free software so as not to put people off. Developers, volunteer and paid, aren't going to carry on too long with tools that cause them more problems than they are worth, and not all are going to persist because they value Free Software over their own enjoyment of what they do.
Fri, 09 Apr 2010
The deadline for students to submit their applications to Google for Summer of Code is imminent.
If you were waiting for the last minute to submit, that is now!
If you are a mentor and have the perfect student you have been working with, check with them that they have submitted the application to Google, otherwise you will be stuck.
Next week we'll start to process the huge number of applications that we have for Ubuntu.
Fri, 26 Mar 2010
As you've probably heard by now, Ubuntu has been accepted to Google Summer of Code this year. We're currently at the point where we are looking for students to take part and the mentors to pair with them to make the proposal. We have some ideas on the wiki, but there's nothing to stop you coming up with your own if you have a great idea. The only requirement is that you find a mentor to help you with it.
The best way to do this is to write up a proposal on your wiki page on the Ubuntu wiki, and then to email the Ubuntu Summer of Code mailing list about it. You can also ask for possible mentors on IRC and on other Ubuntu mailing lists related to your idea.
I have a couple of ideas on the wiki page, but I am happy to consider ideas from students that fall in my area of expertise.
I spend most of my time working on developer tools and infrastructure. These are things that users of Ubuntu won't see, but that are used every day by developers of Ubuntu. Improvements we can make in this area can in turn improve Ubuntu by giving us happier, more productive developers. It's also an interesting area to work in, as the constraints usually differ from those of developing user software, because developers have different demands.
If you think that sounds interesting and you have a great idea that falls in to that area, or you like one of my ideas on the wiki page, then get in touch with me. I will be happy to discuss your ideas and help you flesh them out in to a possible proposal, but I won't be able to mentor everyone.
I would consider mentoring any idea that either improved existing tools used by Ubuntu developers (bzr, pbuilder, devscripts, ubuntu-dev-tools, etc.) or created a new one that would make things easier. In the same spirit, anything that makes it easier for someone to get started with Ubuntu development, such as Harvest, helpers for creating packages, etc. could be a possible project. The last category would be infrastructure-type projects such as the idea to automate test-merging-and-building of new upstreams, or similar ideas.
I've also previously posted on my blog about some ideas that I would like to see, which might be a source of inspiration.
If this interests you then you can find out how to contact me on my Launchpad profile.
Sun, 14 Mar 2010
The Bazaar package importer is a service that we run to allow people to use Bazaar for Ubuntu development by importing any source package uploads in to bzr. It's not something that most Ubuntu developers will interact with directly, but is of increasing importance.
I've spent a lot of time working in the background on this project, and while the details have never been secret, and in fact the code has been available for a while, I'm sure most people don't know what goes on. I wanted to rectify that, and so started with some wiki documentation on the internals. This post is more abstract, talking about the architecture.
While it has a common pattern of requirements, and so those familiar with the architecture of job systems will recognise the solution, the devil is in the details. I therefore present this as a case study of one such system that can be used to contrast with other similar systems, as an aid to learning how differing requirements affect the finished product.
For the Ubuntu Distributed Development initiative we have a need for a process that imports packages in to bzr on an ongoing basis as they are uploaded to Ubuntu. This is so that we can have a smooth transition rather than a flag day where everyone switches. For those that are familiar with them, think Launchpad's code imports but with Debian/Ubuntu packages as the source, rather than a foreign VCS.
This process is required to watch for uploads to Debian and Ubuntu and trigger a run to import that upload to the bzr branches, pushing the result to LP. It should be fast, though we currently have a publication delay in Ubuntu that means we are used to latencies of an hour, so it doesn't have to be greased lightning to gain acceptance. It is more important that it be reliable, so that the bzr branches can be assumed to be up to date; that is crucial for acceptance.
It should also keep an audit trail of what it thinks is in the branches. As we open up write access to the resulting branches to Ubuntu developers we can not rely on the content of the branches not being tampered with. I don't expect this will ever be a problem, but by keeping private copies of everything I wanted to ensure that we could at least detect tampering, even if we couldn't know exactly what had happened.
The Building Blocks
The first building block of the solution is the import script for a single package. You can run this at any time and it will figure out what is unimported, and do the import of the rest, so you can trigger it as many times as you like without worrying that it will cause problems. Therefore the requirement is only to trigger it at least once when there has been an upload since the last time it was run, which is a nicer requirement than "exactly once per upload" or similar.
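The "trigger it as many times as you like" property can be sketched as a pure function over version lists. The names here are illustrative, not the real importer's API, and a real implementation would compare Debian versions with dpkg semantics rather than lexicographically:

```python
def versions_to_import(published, already_imported):
    """Return the published versions not yet in the branch, oldest first.

    Running this twice with the same inputs gives the same answer, so
    triggering an import "too often" is harmless.
    """
    return sorted(set(published) - set(already_imported))
```

Because the result depends only on what is published and what is already imported, the scheduling requirement relaxes to "at least once after an upload" rather than "exactly once per upload."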
However, as it may import to a number of branches (both lucid and karmic-security in the case of a security upload, say), and these must be consistent on Launchpad, only one instance can run at once. There is no way to do atomic operations on sets of branches on Launchpad, therefore we use locks to ensure that only one process is running per-package at any one time. I would like to explore ways to remove this requirement, such as avoiding race conditions by operating on the Launchpad branches in a consistent manner, as this would give more freedom to scale out.
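A minimal sketch of the per-package lock, using flock on a lock file; the directory layout and function name are assumptions for illustration, not the real service's code:

```python
import fcntl
import os

def try_package_lock(package, lockdir):
    """Take an exclusive lock for this package.

    Returns the file descriptor on success, or None if another import
    of the same package already holds the lock.
    """
    os.makedirs(lockdir, exist_ok=True)
    fd = os.open(os.path.join(lockdir, package), os.O_CREAT | os.O_RDWR)
    try:
        fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return fd
    except BlockingIOError:
        # Someone else is importing this package right now.
        os.close(fd)
        return None
```

The non-blocking acquire means the driver can simply skip a package that is already being processed and move on to the next job.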
The other part of the system is a driver process. We use separate processes so that any faults in the import script can be caught in the supervisor process, with the errors being logged. The driver process picks a package to import and triggers a run of the script for it. It uses something like the following to do that:
write_failure(package, "died")    # assume the worst until we know better
try:
    import_package(package)
except Exception:
    write_failure(package, stderr)
else:
    remove_failure(package)       # success: clear the failure record
write_failure creates a record that the package failed to import with a reason. This provides a list of problems to work through, and also means that we can avoid trying to import a package if we know it has failed. This ensures that previous failures are dealt with properly without giving them a chance to corrupt things later.
I said that the driver picks a package and imports it. To do this it simply queries the database for the highest priority job waiting, dispatching the result, or sleeping if there are no waiting jobs. It can actually dispatch multiple jobs in parallel as it uses processes to do the work.
The queue is filled by a couple of other processes triggered by cron. This is useful as it means that further threads are not required, and there is less code running in the monitor process, and so less chance that bugs will bring it down.
The first process is one that checks for new uploads since the last check and adds a job for them, see below for the details. The second is one that looks at the current list of failures and retries some of them automatically, if the failure looks like it was likely to be transient, such as a timeout error trying to reach Launchpad. It only retries after a timeout of a couple of hours has elapsed, and also if that package hasn't failed in that same way several times in a row (to protect against e.g. the data that job is sending to LP causing it to crash and so give timeout errors.)
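That retry policy can be sketched as a small predicate; the Failure record and the exact thresholds here are assumptions for illustration:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Failure:
    reason: str      # captured stderr / error message
    when: datetime   # when the failure happened
    repeats: int     # consecutive failures with the same reason

def should_retry(failure, now, min_age=timedelta(hours=2), max_repeats=3):
    """Retry only transient-looking failures that are old enough and
    have not failed the same way several times in a row."""
    transient = "timeout" in failure.reason.lower()
    old_enough = now - failure.when >= min_age
    return transient and old_enough and failure.repeats < max_repeats
```

Capping consecutive repeats is what protects against, say, a job whose data reliably crashes LP and so always looks like a timeout.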
It may be better to use an AMQP broker or a job server such as Gearman for this task, rather than just using the database. However, we don't really need any of the more advanced features that these provide, and we already have some degree of loose coupling, so using fewer moving parts seems sensible.
Reacting to new uploads
I find this to be a rather neat solution, thanks to the Launchpad team. We use the API for this, notably a method on IArchive called getPublishedSources(). The key here is the parameter "created_since_date". We keep track of this and pass it to the API calls to get the uploads since the last time we ran, and then act on those. Once we have processed them all, we update the stored date and go around again.
This has some nice properties, it is a poll interface, but has some things in common with an event-based one. Key in my eyes is that we don't have to have perfect uptime in order to ensure we never miss events.
However, I am not convinced that we will never get a publication that appears later than one we have already dealt with, but that reports an earlier time. If this happens we would never see it. The times we use always come from LP, so we don't require synchronised clocks between the machine where this runs and the LP machines, but skew could still happen inside LP. To avoid this I subtract a delta when I send the request, so assuming the skew is never greater than that delta we won't get hit. This does mean that we repeatedly try to import the same things, but that is just a mild inefficiency.
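The skew guard amounts to one subtraction; the size of the delta here is an assumption, not the value the real importer uses:

```python
from datetime import datetime, timedelta

SKEW_DELTA = timedelta(minutes=15)  # assumed safety margin

def next_query_date(newest_processed):
    """Query from slightly before the newest publication we have seen,
    so a row committed late inside that window still shows up.

    Re-fetching a few already-imported rows is harmless because the
    import script is idempotent.
    """
    return newest_processed - SKEW_DELTA
```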
There is a synchronisation point when we push to Launchpad. Before and after this critical period we can blow away what we are doing with no issues. During it though we will have an inconsistent state of the world if we did that. Therefore I used a protocol to ensure that we guard this section.
As we know, locking ensures that only one process runs at a time, meaning that the only way to race is with "yourself." All the code is written to assume that things can go down at any time; as I said, the supervisor catches this and marks the failures, and even guards against itself dying. Therefore when it picks back up and restarts the jobs that it was processing before dying, it needs to ensure that it wasn't in the critical section.
To do this we use a three-phase commit on the audit data to accompany the push. When we are doing the import we track the additions to the audit data separately from the committed data. Then if we die before we reach the critical section we can just drop it again, returning to the initial state.
The next phase marks in the database that the critical section has begun. We then start the push back. If we die here we know we were in the critical section and can restart the push. Only once the push has fully completed do we move the new audit data in to place.
The next step cleans up the local branches, dying here means we can just carry on with the cleanup. Finally the mark that we are in the critical section is removed, and we are back to the start state, indicating that the last run was clean, and any subsequent run can proceed.
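The recovery behaviour of the steps above can be summarised as a small state machine; the state names here are illustrative, not the real importer's:

```python
def recover(state):
    """Decide what a restarted worker should do, given how far the
    previous run got before dying."""
    if state == "importing":   # died before the critical section
        return "discard uncommitted audit data"
    if state == "pushing":     # died inside the critical section
        return "restart the push to Launchpad"
    if state == "cleaning":    # push complete, audit data committed
        return "finish cleaning up local branches"
    return "start fresh"       # last run finished cleanly
```

Every branch either undoes incomplete work or finishes it, which is what makes a crash at any point safe.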
All of this means that if the processes go down for any reason, they will clean up or continue as they restart as normal.
Dealing with Launchpad API issues
The biggest area of operational headaches I have tends to come from using the Launchpad API. Overall the API is great to have, and generally a pleasure to use, but I find that it isn't as robust as I would like. I have spent quite some time trying to deal with that, and I would like to share some tips from my experience. I'm also keen to help diagnose the issues further if any Launchpad developers would like so that it can be more robust off the bat.
The first tip is: partition the data. Large datasets combined with fluctuating load may mean that you suddenly hit a timeout error. Some calls allow you to partition the data that you request. For instance, getPublishedSources, which I spoke about above, allows you to specify a distro_series parameter. Doing

distro.main_archive.getPublishedSources()

is far more likely to time out than
for s in distro.series:
    distro.main_archive.getPublishedSources(distro_series=s)
In fact, for Ubuntu, the former is guaranteed to time out; it is a lot of data.
This is more coding, and not the natural way to do it, so it would be great if launchpadlib automatically partitioned and recombined the data.
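Until then, the partitioning can be hidden behind a small helper. This is a sketch over launchpadlib-style objects, not launchpadlib itself:

```python
def partitioned_sources(distro):
    """Fetch published sources one series at a time, keeping each
    request small enough to avoid timeouts."""
    for series in distro.series:
        for pub in distro.main_archive.getPublishedSources(
                distro_series=series):
            yield pub
```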
The second tip is: expect failure. This one should be obvious, but the API doesn't make it clear, unlike something like python-couchdb. It is a webservice, so you will sometimes get HTTP exceptions, such as when LP goes offline for a rollout. I've implemented randomized exponential backoff to help with this, as I tend to get frequent errors that don't apparently correspond to service issues. I very frequently see 502 return codes, on both edge and production, which I believe means that apache can't reach the appservers in time.
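The randomized exponential backoff I mentioned looks roughly like this; the attempt count and base are illustrative defaults, not tuned values:

```python
import random
import time

def call_with_backoff(fn, attempts=5, base=2.0, sleep=time.sleep):
    """Call fn, retrying on any exception with a randomized,
    exponentially growing delay; re-raise after the last attempt."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            sleep(random.uniform(0, base ** attempt))
```

You would wrap any flaky API call in it, e.g. call_with_backoff(lambda: archive.getPublishedSources(distro_series=s)). The randomisation avoids several clients retrying in lockstep after an outage.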
Overall, I think this architecture is good, given the synchronisation requirements we have for pushing to LP, without those it could be more loosely coupled.
The amount of day-to-day hand-holding required has reduced as I have learnt about the types of issues that are encountered and changed the code to recognise and act on them.
Mon, 01 Feb 2010
David, it's interesting that you posted about that, as it's something I've been toying with for the last couple of years. For the last few months I've been (very) slowly experimenting in my free time with an approach that I think works well, and I think it's time to tell more people about it and to ask for contributions.
Opportunistic programmers are a useful audience to cater for here: Debian/Ubuntu development isn't trivial, so we are simplifying something that already exists, which means it will still be powerful, and that is also important. I'm not only interested in improving the experience for the opportunistic programmer though; why should they get all the cool stuff? I'm interested in producing something that I can use for doing Ubuntu development too (though not every last detail).
The project I am talking about has been christened "cambria" and is now available on Launchpad. It's a library that aims to provide great APIs for working with packages throughout the lifecycle, including things like Bazaar, PPAs, local builds, testing, lintian, etc. It should be pleasurable to use and also allow you to build tools on top that are also pleasurable. It should also allow for easy extension in to different GUI toolkits and for command-line tools, though I've only been working with GTK so far.
In addition, there is a gedit plugin that allows you to perform common tasks from within gedit. I chose gedit as it has a pleasant Python API for plugins, isn't so complicated that it takes much learning, and will already be installed on most Ubuntu desktop systems. As I said though, the library allows you to implement this in anything you like (that can use a Python library).
I've put together some mockups that suggest some of the things that I would like to do.
The RATIONALE file includes some more reasons for the project:
Project cambria is about wrapping the existing tools for Debian/Ubuntu development to allow a more task-based workflow. Depending on the task the developer is doing there may be several things that must be done, but they must currently work each one out individually. We have documentation to help with this, but it's much simpler if your tools can take care of it for you.
Project cambria aims to make Ubuntu development easier to get started with. There are several ways that it will help. Providing a task-based workflow where you are prompted for the information that is needed to complete the task, and other things are done automatically, or defaults chosen helps as it means you can concentrate on completing the task, rather than learning about all the possible changes you could make and deciding which applies.
Project cambria aims to make Ubuntu development easier for everyone by automating common tasks, and alleviating some of the tool tax that we pay. It won't just be a beginner tool, but will provide tools and APIs that experienced developers can use, or can build upon to build tools that suit them.
Project cambria will help to take people from novice to experienced developer by providing documentation that allows you to learn about the issues related to your current task. This provides an easier way in to the documentation than a large individual document (but it can still be read that way if you like).
Project cambria will make Ubuntu development more pleasurable by focusing on the user experience. It will aim to pull together disparate interfaces in to a single pleasing one. Where it needs to defer to a different interface it should provide the user with an explanation of what they will be seeing to lessen the jarring effect.
I'm keen for others to contribute, there is some information about this in the project's CONTRIBUTING file. I'm looking for all sorts of contributions from all kinds of people and keen to help you get started if you aren't confident with the type of contribution you would like to make.
There's a mailing list as part of the ~cambria team on Launchpad and IRC channel if you are interested in discussing it more.
Wed, 16 Dec 2009
You may well have heard about it (on this blog especially), but though I spend lots of my time involved with it and talking to people about it, there may be some people who aren't entirely sure what we are doing with the Ubuntu Distributed Development initiative, or what we are trying to achieve. To try to help with this I wrote up an overview of what we are doing.
If this project interests you and you would like to help, or just observe, then you can subscribe to the mailing list. There are lots of fun projects that you could take on: there's far more that is possible and would be hugely useful to Ubuntu developers than we can currently work on. If you want to work on something then feel free to talk to me about it and we can see if there is something that would suit you.
Without further ado...
The TL;DR version:
- Version Control rocks.
- Distributed version control rocks even more.
- Bazaar rocks particularly well.
- Let's use Bazaar for Ubuntu.
Or, if you prefer a more verbose version...
Ubuntu is a global project with many people contributing to its development in many ways. In particular, development/packaging involves many people working on packages, and much of this requires more than one person to work on the change that is being made, e.g.
- Working on the problem together
- Review by others
These things usually require the code to be passed backwards and forwards, and in particular, merged. In addition, we sometimes have to do things like merge the patch in the bug with a later version of the Ubuntu package. In fact, Ubuntu is a derivative of Debian, and we expend a huge effort every cycle merging the two.
Distributed version control systems have to be good at merging, it's a fundamental property. We currently do without, but we have tools such as MoM that use version control techniques to help us with some of the merging. We could carry on in this fashion, or we could move to use a distributed version control system and make use of its features, and gain a lot of other things in the process.
Tasks such as viewing history, and annotating to find who made a particular change and why, also become much easier than when you have to download and unpack lots of tarballs.
This isn't to say that there aren't costs to the transition, and tools and processes we currently use that don't currently have an obvious analogue in the bzr world. That just means we have to identify those things and put the work in to provide an alternative, or to port, where it makes sense.
The aim is therefore to help make Ubuntu developers more productive, and enable us to increase the number of developers, by making use of modern technologies, in particular Bazaar, though there are several other things that are also being used to do this.
What it isn't
This isn't a project to overhaul all the Ubuntu development tools. While there are many things I would like to fix about some of our tools (see some of the things that Barry had to get his head around in the "First Impressions" thread), that can go ahead without having to tie it in to this project. I hope that when we make some common tasks easier, it will focus attention on others that are still overly complex, and encourage people to work on those too.
We are not replacing the entire stack. We are building upon the lower layers, and replacing some of the higher ones. We aim for compatibility where possible, and not breaking existing workflows until it makes sense.
You can read the original overall specification for this work at
It is rather dry and lacking in commentary, and also a little out of date as we drill down in to each of the phases. Therefore I'll say a little more about the plan here.
The plan is to work from the end of the Ubuntu developers, converting the things that we work most directly with first. This should give the biggest impact. We will then work to pull in other things that improve the system.
This means that we start by making all packages available in bzr, and make it possible to use bzr to do packaging tasks. In addition to this we are working with the LP developers to make it possible for Soyuz to build a source package from the branch, so that you don't have to leave bzr to make a change to a package. This work is underway.
After that we make all of Debian available in bzr in the same way. This allows us to merge from Debian directly in bzr. At a first cut, this just allows us to replace MoM, but in fact allows for more than that. Have a conflict? You have much more information available as to why the changes were made, which should help when deciding what to do.
The next step after that is to also bring the Vcs-* branches in to the history. These are the branches used by the Debian maintainer, and so allow you to work directly with the Debian maintainer without switching out of the system that you have learnt.
In a similar way we then want to pull in the upstream branches themselves. Again, this will allow you to work closely with upstream, without having to step out of the normal workflow you know.
The last point deserves some more explanation. The idea is that you will be able to grab a package as you normally do, work on a patch, and then when you are happy run a command or three that does something like the following:
- Merge your change in to the tip of upstream, allowing you to resolve any conflicts.
- Provide a cover letter for the change (seeded with the changelog entry and/or commit messages).
- Send the change off to upstream in their preferred format and location (LP merge proposal, patch in the bug tracker, mailing list, etc.)
As you can imagine, there are a fair number of prerequisites that we need to complete before we can get to that stage, but I think of that as the goal. This will smooth some of the difficulties that arise in packaging from having to deal with a variety of upstreams. Finding the upstream VCS, working out their preferred form and location for submission, rebasing your change on their tip etc. I hope this will make Ubuntu developers more efficient, make forwarding changes easier to do and do well, and save new contributors from having to learn too many things at once.
Where we are now
We currently have all of Ubuntu imported (give or take), you can
bzr branch lp:ubuntu/<source package name>
which is great in itself for many people.
We also have all of Debian imported, and similarly available with
bzr branch lp:debian/<source package name>
which naturally allows
bzr merge lp:debian/<source package name>
so you can make use of that right now.
We are also currently looking at the sponsorship process around bzr branches, and once we have that cracked it will be much easier for upstream developers who know bzr to submit a bugfix, and that's a large constituency.
In addition, this means that a new contributor can start without having to learn debdiff etc., and we can pass code around without having to merge two diffs and the like.
This is great in itself, but we are still some way from the final goal.
We are currently working on the Vcs-* branches, to make them mergeable, but there are a number of prerequisites.
In addition the Launchpad team are also working on making it possible to build from a branch.
Where we can go
As I said, building on top of bzr makes a number of things easier.
For instance, once LP can build from branches, we could have a MoM-a-like that very cheaply tries to merge from Debian every time there is an upload there, and, if the merge succeeds, builds the package. This could then tell you not only whether there were any conflicts in the merge, but about any build failures, even before you download the code.
In addition, we are currently talking a lot about Daily Builds, building the latest code every day (or commit, week, whatever). There are a number of things this brings. It doesn't strictly require version control, but as it's basically a merging problem having everything in Bazaar makes it much easier to do. We have a system now built on "recipes" that we are working to add to LP.
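To give a flavour of the "recipes" mentioned above, a recipe is just a short text file: a header naming the recipe format and how to construct the package version, a base branch, and then instructions for combining other branches. This is a minimal sketch; the branch URLs and team names are invented for illustration, and the format number is only indicative:

```
# bzr-builder format 0.3 deb-version {debupstream}+{revno}
lp:some-project
merge packaging lp:~some-team/some-project/packaging
```

Assembling this recipe branches the upstream trunk, merges in the packaging branch, and leaves a tree that can be built as a normal source package.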
Parts of the work
There are a number of parts to the work, and you will see these and others being discussed on the list:
- bzr (obviously), which we sometimes need to change to make this work possible, either bug fixes, or sometimes new features.
- bzr-builddeb, which is a bzr plugin that knows how to go from branch to package and vice-versa.
- bzr-builder, the bzr plugin that implements "recipes."
- Launchpad, which hosts the branches, provides the merge proposals, and will allow building from branches and daily builds.
- The bzr importer, the process that mirrors the Ubuntu and Debian archives into bzr and pushes the branches to LP.
and probably others that I have forgotten right now.
Wed, 23 Sep 2009
One of the new things that is going to be in karmic is that the kerneloops daemon will be installed and running by default. This tool, created by Arjan van de Ven, watches the kernel logs for problems. It has a companion service, kerneloops.org which aggregates reports of these problems, and can sort by kernel version and the like. This allows kernel developers to spot the most commonly encountered problems, areas of the code which are prone to bugs etc. When the kerneloops daemon catches a problem it allows you to send the problem to kerneloops.org.
We, however, are not using the applet that comes with kerneloops to do this; instead we are making use of the brilliant Apport. There are a couple of reasons for this. We also want to make it easy for you to report these issues as bugs to Launchpad, and we don't want to prompt you with two different interfaces to do that.
The changes mean that if your machine has a kernel issue you will get an apport prompt as usual. As well as asking if you would like to report the problem to Launchpad, like it does for other crashes, it will ask if you would also like to report it to kerneloops.org. Passing the information through apport means that it can also be used on servers, without running X.
Hopefully you will never see this improvement, but it's now going to be there for when those bugs do creep in.
Tue, 18 Aug 2009
I've just implemented the most requested feature in bzr-builder (Hi Ted), command support.
Sometimes you need to run a particular command to prepare a branch of your project for packaging (e.g. autoreconf). I think this should generally go in your build target, but not everyone agrees, and sometimes there is just no other way.
Therefore I added a new instruction to bzr-builder recipes, "run". If you put
run some command here
in your recipe then it will run "some command here" at that point when assembling.
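For example, a recipe using the new instruction might look like this (the branch URLs are invented for illustration, and the format number is only indicative):

```
# bzr-builder format 0.2 deb-version 1.0+{revno}
lp:some-project
run autoreconf -fi
merge packaging lp:~some-team/some-project/packaging
```

Here autoreconf is run against the tree after the base branch is fetched, before the packaging branch is merged.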
Note that running commands that require arbitrary network access is still to be discouraged, as you don't know in what environment someone may assemble the recipe. I'd also advise against using commands unless you really need them, but that's obviously your call.
I recently gave a talk to some fellow Canonical employees about where we are with the "Distributed Development" project. For that I made a screencast showing some of the Launchpad Codehosting features that you can now use for Ubuntu. Thanks to the Launchpad team for making this happen. We're still ironing out the remaining kinks that make it a pain to use, and getting all the packages imported, but it's possible to use them now.
One of the things the video shows is how to request that someone review your change, i.e. how to get a change sponsored into Ubuntu. I'm keen to have people test this, as it's not something I do very often now that I am a core-dev. Therefore if you want to help test then propose a merge and set the appropriate sponsor team as the reviewer; I will prioritise it and you can give me feedback in return.
Note that a bug in Launchpad means that I won't get a notification when they are created, so feel free to drop me a line via email or on IRC until that bug is fixed next month. I'll continue to poll the lists though, so nothing will get dropped.
Tue, 04 Aug 2009
As well as seeing use of PPAs for providing bug fixes, new upstream versions, proposed packages, testing etc., we are also seeing them used for providing daily builds of packages. For instance Fabien Tassin provides daily builds of lots of Mozilla-related packages and snapshots of Chromium in his various PPAs. Also, there is Project Neon, to provide daily builds of Amarok.
They massively lessen the barrier to using and testing code that is fresh from the fingers of the developers. They avoid you having to build a project from source every day, making sure to keep up with changes in dependencies. They allow you to be testing code almost as it is written, speeding up the feedback cycle to the developers, and potentially increasing the number of people involved in that feedback cycle.
In addition they allow you to verify bugs against the latest code, so that bug reports are of more relevance to the developers. If you so choose they can also be set up so that bugs are also tested with fewer distribution patches, further increasing the developers' confidence in the bug reports.
Mark had an idea for an elegant way to describe how to combine the code to produce the package, and we worked on producing a tool to follow the steps. You can find the result of this in the bzr plugin bzr-builder. I've documented how to use it on the wiki.
There's still more we can do to improve the process, and we have a lot to discuss about what makes a good daily package, and what the limits of them are. If you are interested in discussing this then please join the list of the dailydebs team in Launchpad.
I'm currently running the bzr-nightly-ppa using this tool, and have improved some things based on this, but more testing, feedback, and patches are always welcome.
Tue, 21 Apr 2009
Jo Shields posts about his intention to propose switching Rhythmbox for Banshee in Karmic. (I'm trying to convince him to apply to be a MOTU, or at least an Ubuntu member, so he can be on Planet Ubuntu.) He's not the first to suggest it, but he is in the right place to make the proposal for this release.
I'm personally quite happy with Rhythmbox, and haven't really tried Banshee, though for my use I imagine they are pretty similar. (Two things that would be definite benefits for me would be remembering what I was doing when I closed it when it starts, and understanding mix CDs better). I can certainly understand some of the arguments for switching as well, so I wouldn't be against it.
There is an idea on Brainstorm about this, and while Brainstorm can't capture the intricacies of the debate, it suggests that Rhythmbox is quite popular with the people that voted (though it's not clear how many had tried the alternatives.)
This post isn't really to argue one way or the other, or to attempt to cover all the criteria by which a decision will be made. My point is to emphasise that the Banshee developers have done themselves a great favour in this debate by making one aspect of switching easier. They have implemented importing from Rhythmbox. This means that any switch wouldn't mean that all users had to re-import their collection.
That's not everything that is involved in switching, and indeed, many of the issues around trying to change a default don't have good answers, and that's something we should work to improve as a community.
You don't have to spend time implementing importers for every similar application out there, but easing migration from commonly used apps can help users switch, and is a big benefit when trying to switch a large number of users painlessly. Also, while importing is useful, it's not the ideal solution. A common storage format, and shared storage would be superior in many ways for this purpose.
Jaunty just froze a little bit more, with the last few normal uploads being done. From here on in it's mainly about getting the CDs perfect for release, which will hopefully go smoothly.
Over the last few days there have been a number of people working on Universe to get it into the best shape we could in the remaining time. We did a pretty good job of it too; towards the end we were scavenging around for any more fixes that were ready to upload. As always, with more people we could have done more, but it seemed to be a very smooth landing this time.
The sponsorship queue is virtually all things that were not appropriate for Jaunty, with just a couple of desirable fixes not making the cut (we'll work to have those in jaunty-updates ASAP). In addition to that, NBS was clear, meaning that there were no outstanding library transitions or similar, and there are very few uninstallables. Obviously we would want all of these numbers to be zero, but you can't have that with a time-based release schedule. Unfortunately the FTBFS list is rather long (mainly due to toolchain changes), but it's generally infrequently updated packages on there, which will tend to be of less interest.
The MOTUs also did a fantastic job of the python 2.6 transition, which was a huge job with a compressed timeframe to do it in. Unfortunately there are going to be some issues with the change in the default python for some time to come, but given the state of python a couple of months ago this is a great achievement.
Also, I'll make special mention of the Mono 2.0 transition. Co-ordinated by Jo Shields, and thanks to a lot of people on both the Debian and Ubuntu sides, this was completed with very little fuss. It was a great example of co-ordinating work on a large number of packages, and of collaboration between Debian and Ubuntu. I also think that it showed some of the advantages of the Ubuntu method of development over the Debian one, but the shared work trumps that.
If you are reading this thinking "You might think you did a good job, but what about this bug that I provided a patch for 3 months ago, why didn't you fix that?" then all I can really do is point you to the sponsorship process. Yes, it sucks that not knowing about this cost you, but reviewing every bug with an attachment tagged "patch" is currently a little out of our reach. I'm always looking for ways to improve this, and I hope one day we can do that, but in the meantime using the sponsorship process will help get your patch included.
Thu, 26 Feb 2009
The second installment of Developer News went out on Monday, and boy was it hard work. It's great to see so much going on, but it does make preparing the summary time consuming. Thanks a lot to Stefan Lesicnik for his help in preparing this one. As well as having more to report I also broadened the intended audience this month.
For the first month the intended audience was just Ubuntu developers, so I didn't include anything from ubuntu-devel-announce, as I assumed that everyone would already have seen those announcements. The first issue was picked up by both the fridge and LWN though, so it was more widely read, and showed there was an interest in having more news for external people. Therefore I wrote this edition to include those people as well.
I'm pretty sure that there was plenty more that we could have included as well, it's just I didn't know about it, or there was nothing to link to. I think we need to do a better job as a community to communicate what we are doing. My process was simply to trawl the archives looking for announcements and discussions of things, so all anyone had to do was write one email.
Therefore I think we have two problems to solve, firstly getting everything that is going on announced in the right places, and secondly making it easier to summarise all of the activity.
I hope that the first will become more a part of our culture, and the Developer News will help by getting more exposure for those who do communicate about what they are doing.
I'm not too sure how to improve the second though. We have a defined process for submitting items for the news, but to date there have been no submissions using that process, and one submission by editing the wiki page as I was preparing the second issue. Why is this? Why have you not submitted anything? Is it that you didn't know about the process? Is it too much work? Is it that you never think to use it? That you are not sure that your item counts? That you are not sure that someone else hasn't already submitted it?
The feedback I have got about the Developer News has been both frequent and entirely positive, so I believe it is a valuable service that should be carried on, but I fear that the current way of doing it won't really scale. I don't think it would be worth a day of my time a month to complete it.
Any suggestions for improving the situation would be appreciated.
Thu, 22 Jan 2009
Yesterday as part of Ubuntu Developer Week I gave a session entitled "Bazaar for Packaging". At the last minute I decided to change the session somewhat, so that it would show how things would work if you were to use bzr to modify an Ubuntu package once Distributed Development is fully up and running.
The session went ok, and while I was showing some fairly experimental things it all worked quite well. The biggest problem was when we grabbed a change from SVN using the bzr-svn plugin. The rather simple step of extracting a patch took up quite a lot of the session as bzr-svn initialised its metadata about the Subversion branch. bzr-svn is amazing: it allows you to store a bzr branch inside an svn repository, while still making it readable to svn. However, to do this it has to do some fairly intensive transformations to maintain the mapping. The biggest impact of this is when you access the SVN repository for the first time though, so it wasn't the smartest idea for an IRC tutorial. That's the problem with changing your session 10 minutes before it starts.
You can go and read the transcript of the session if you want to see how all of this worked. I'd like to skip ahead a little bit and show you how it will work in a short while when launchpad hosts the branches and all the bits are in place.
First we need to grab the source for the package we want to work on. We'll grab a whole branch here, but you could just as well use a lightweight checkout or a stacked branch to transfer less data.
$ bzr branch lp:ubuntu/jaunty/gnome-utils gnome-utils.jaunty
This will give us a local copy of that branch in gnome-utils.jaunty. We can now make our changes in that branch.
$ cd gnome-utils.jaunty
You can run bzr log (or better bzr viz from bzr-gtk or bzr qlog from qbzr) to see the history if that is interesting for the change we are making.
We're just going to apply the patch from SVN though. This is what we did in the session:
$ bzr diff -c svn:8378 http://svn.gnome.org/svn/gnome-utils/trunk | bzr patch
(bzr patch is supplied by bzrtools. Try it, it's cool: it works over any transport, so you can apply a remote patch file without downloading it.)
What we are doing here is in fact a "cherry-pick". bzr will happily do these for you, but it does the equivalent of diff + patch. It is hoped to improve this so that the revisions that were merged are recorded, and that information used to help you understand which changes have and haven't been merged.
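To illustrate what "the equivalent of diff + patch" means, here is the same kind of operation done with plain tools, outside bzr entirely; the files and their contents are invented for the demonstration:

```shell
# Create a tiny change and carry it over to another copy with diff + patch.
workdir=$(mktemp -d)
cd "$workdir"
printf 'line one\nline two\n' > before
printf 'line one\nline two patched\n' > after
# diff exits non-zero when the files differ, so guard it
diff -u before after > change.diff || true
# Applying the diff to another copy is all a cherry-pick does at this level:
# the change is carried over, but nothing records which revision it came from.
cp before target
patch target < change.diff
cat target
```

This is why, after a cherry-pick done this way, bzr cannot later tell you that the change has already been merged: no merge record exists, only the textual change.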
To more directly do a cherry-pick you can run
$ bzr merge -c svn:8378 http://svn.gnome.org/svn/gnome-utils/trunk
(merge the changes introduced in svn revision 8378 of this branch please; the svn: is necessary as bzr and svn count their revisions differently)
However, this won't currently work, as the branches have what is called "different rich-root support", so we have to use the explicit "diff and patch" for now. This is a pain, and will hopefully go away sometime soon. This method would work fine with most bzr branches though.
Once we have applied the patch we write the changelog entry for it. We run "dch -i -D UNRELEASED", which will create a new changelog entry for us, and mark it as "UNRELEASED" so that it is clear it still needs to be uploaded. Obviously if there is an existing UNRELEASED changelog entry then we want to add to that. I would like to write a wrapper that does the right thing here.
For that changelog entry we write the usual thing, something like:
* Don't crash when asked to show a path that has been excluded. (LP: #301952)
Now we are ready to build and test our changes. Running
$ bzr builddeb -S
will spit out a source package in the parent directory, in the same way as debuild -S. We can then build this package in our normal fashion, in pbuilder say, or upload it to a PPA.
Once we are happy then we can commit our changes. The easiest way to do this is to run
$ debcommit
which uses our changelog entry as the commit message, saving us from typing the same thing again.
There's one extra bit of magic that goes on here. bzr supports the --fixes option to commit. This marks the resulting revision as fixing the specified bug, for example --fixes lp:301952 would indicate that we closed the bug that we are working on in this revision. In Intrepid (thanks to the idea from Colin Watson) I implemented support for this in debcommit. If debcommit sees you closing a bug in the changelog message that it is using it will automatically add the corresponding --fixes argument (it works for Debian bugs too). We'll see where this comes in useful in a minute.
The last step is to get our changes into the distribution. If we have upload rights for the package then we can dput the source package that we created a minute ago, and then run
$ bzr push lp:ubuntu/jaunty/gnome-utils
to push the bzr branch back. (Yes, launchpad plans to support building directly from a branch, so you just need to push along with some undecided mechanism to request it be included in Ubuntu)
If you don't have upload rights for the package then you need someone to sponsor the change for you. To do this you first push your branch to launchpad somewhere under your name. For instance I would run
$ bzr push lp:~james-w/ubuntu/jaunty/fix-301952
Note that thanks to the launchpad and bazaar developers implementing support for "stacked branches" and automatic stacking in launchpad this will be a very cheap operation, only pushing a single revision.
Next we would create a merge proposal for this change. You can either do this from the branch page on launchpad, or you can use bzr send. Just running
$ bzr send
should do the right thing. It will open up a new message in your mail client. You then enter your "cover letter" for the change, and hit send. It will mail the request to launchpad, which will interpret the machine-readable attachment and turn it into a merge request. The developers can then review the changes and either ask for improvements or upload the package.
Remember the --fixes information that was stored? That was also used by launchpad. The bug that we were fixing now has a link to our branch on it, so that anyone that wants to test the fix can find the right place to get the change from. This currently does not generate any bugmail though, so you have to go to the page to see it. I think this is something we need to improve.
Some of the things that I have explained here haven't been fully decided, so this isn't documentation, that will come later, the intent is to give an idea of how this may work.
I've been asked a few times about an IRC channel where we can discuss this sort of thing, so I created #ubuntu-bzr today. If you are interested in shaping how this will work then please join it and we can discuss it. Support can continue anywhere though.
Tue, 06 Jan 2009
After an idea from bigon on #launchpad today I threw together a tool using the Launchpad API. I've christened this tool ppamadison. It does the same thing as rmadison, but for PPAs. You tell it whose PPA to examine, and what source package to get the information for, and it tells you what versions are available.
$ ppamadison james-w bzr-builddeb
 bzr-builddeb | 2.0~0ubuntu1~ppa1 | intrepid | source
 bzr-builddeb | 2.0~ppa1~hardy1 | hardy | source
There are still some things left to do, such as replicating rmadison's odd output formatting, some things are missing from the Launchpad API, and there are some interesting things you could add, but the idea is there. One thing missing from the Launchpad API as far as I can see is an efficient way to find out which PPAs contain a certain source package name. That would be quite an interesting thing to know.
Would ppamadison be a useful thing to have in ubuntu-dev-tools? If it is worthwhile then I will integrate it. Since this blog post is something that developers might not see but might be interested in, I would then pass it on to the Developer News service; all it would take is a quick email, and as little as a link to the blog post would do.
(Yes, I am being facetious, but we haven't had a single submission yet)
As an aside, I play with the Launchpad APIs every couple of months and they are getting better, to the point now where most data I want for the things I do is available. Thanks to the Launchpad team for their work on it. There are some real problems for some use-cases, such as a cache hit requiring an https connection, but ways can be found to deal with them. In any case, the APIs will allow us to do some really useful things.
P.S. Thank you all for your kind words.