I’ve been embarking on a bit of a project since January, when someone from Groklaw asked which of the vulnerabilities we released advisories for last year affected a certain distribution, and the only way I could get her that info was with some convoluted grepping and re-formatting… it wasn’t pretty, but it was sufficient. So, since then I’ve been putting info on the security updates we work on into a FileMaker database (yeah, yeah, I know… the zealots out there will crucify me for using commercial software, but it works and saves me tonnes of time; find me something as good on Linux and I’ll look at it (FileMaker works on OS X and Windows only, more’s the shame)).

Anyways, I just finished inputting the last bit of data from last year’s advisories, which was a manual process because the data I wanted to put into the database wasn’t all in one place and couldn’t be consistently retrieved. I’m sure I could have written a script to do it, but it probably would have taken twice as long. But some interesting stuff has come out of this little database now that I can look at certain trends. I’ll have to make some reports and such to really do some more interesting things with the info, but even from a flat “here’s a list ordered by MDKSA/MDKA id” view I can see certain things.

For one, we’re almost spot on where we were last year in terms of advisories. Last year at this time we had reached MDKSA-2005:061, and last week we put up MDKSA-2006:060 (now, this doesn’t take into account non-security MDKA advisories or “-1” advisories). I actually thought we were busier this year than last, and I think we were, but things haven’t been quite as hectic the last 2-3 weeks (which is weird… I’m waiting for something like zlib to nail us again); if the trend we were on had continued, we would have put out MDKSA-2006:075 (or higher) last week.

It’s also interesting to see how quick the turnaround is from when we start working on an advisory to when we have it out. There are, obviously, some exceptions to the rule based on external circumstances, such as agreed-upon embargoes for non-public advisories, work being done by another team (e.g. Fred Crozat (God bless him) typically does all our Mozilla updates, and the kernel team does the kernel updates), etc. Sometimes we’ve begun working on something, tested it, and so on, only to find through peer review that a patch is insufficient or whatever.

Anyways, it looks like the average time to handle an update from start to finish is 3-4 days, which I don’t think is bad at all considering the “multiples” we have to deal with; e.g. one advisory in, say, apache might affect all platforms, and we may be actively supporting 5 different distributions on two different architectures, meaning there are 10 different tests that need to be performed. This also includes the time to build and patch updates (but doesn’t include research time, as this info is based on when we actually start building an update). Our longest update took 79 days which, when I look at it now, is absolutely astounding. Of course, this was from the old kernel maintainer (and it was for the kernel), and our process in place now is much, much better. In fact, I believe that update could very well have been one of the contributing factors to his being… dismissed. Kernels definitely don’t take that long anymore. =)
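For the curious, the turnaround number is just an average over start/finish date pairs. A minimal sketch of the calculation in Python (the dates here are made up for illustration; the real ones come out of the database):

    from datetime import date

    # Hypothetical (started, published) pairs; these particular
    # dates are invented for illustration only.
    updates = [
        (date(2006, 3, 6), date(2006, 3, 9)),
        (date(2006, 3, 13), date(2006, 3, 17)),
        (date(2006, 3, 20), date(2006, 3, 23)),
    ]

    days = [(published - started).days for started, published in updates]
    print(sum(days) / len(days))  # average turnaround in days: 3.33

    # The "multiples": one apache advisory across 5 supported
    # distributions on 2 architectures means 5 * 2 = 10 separate tests.
    print(5 * 2)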

Other interesting data (and I only went back to incorporate all of 2005 in this database… no way I’m going back further): there are 398 records in the database, so 398 updates since Jan 1, 2005, or in the last 15 months. That works out to 26.5 updates/month, or 1.325 updates/day (assuming on average 20 working days in a month).
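The back-of-the-envelope math, for anyone checking (the 20 working days a month is an assumption, obviously, and the numbers above are rounded):

    # 398 updates over the 15 months since Jan 1, 2005.
    records = 398
    months = 15

    per_month = records / months  # ~26.5 updates/month
    per_day = per_month / 20      # ~1.33 updates per working day,
                                  # assuming 20 working days/month
    print(per_month, per_day)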

I’m actually extremely impressed by that. I didn’t realize Stew and I worked that hard. =) I may have more interesting statistics available in the future… now that I have the data in the database I can actually work with it and make it do something useful.
