Posts for September 2010

2010-09-01: git-pbuilder 1.16

Getting the git-pbuilder script included in git-buildpackage has significantly increased its usage, which means lots more people finding features that they need. I love that part of free software work. Here's a new version with a few new facilities.

Since the previous release announcement (for 1.13), this adds:

You can get the latest version from my scripts distribution page.

2010-09-17: GnuPG key replacement

Since last year, I've been slowly working on introducing a new 4096-bit RSA key (complete overkill, I know) with all the modern hash preferences and so forth so that I can retire my old 1024-bit DSA key. After DebConf in New York City at the beginning of August, I now have enough signatures on the new key to finish this transition.

If you have signed my old key and are willing to sign the new key on the strength of a key transition document, here is a key transition document signed with both the old and the new keys. While my new key has a reasonable number of signatures, I'm always happy to get more.

My old key is 0x0AFC7476, and my new key is 0x82004173. You can retrieve the new key (and the old key, for that matter) from any public keyserver or from my web site (new key and old key).

Shortly after posting this, I'll submit the RT ticket to transition my key in the Debian keyring.

The new key has a 2048-bit signing RSA subkey which I may export to a few places other than my primary secure system so that I can do package uploads in some more convenient ways. The new primary key has a three year expiration period (which will be periodically extended), and the signing subkey has a one year expiration period (likewise).
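For anyone setting up a similar key layout, here's a sketch of a GnuPG batch parameter file that generates a 4096-bit sign-only RSA primary key with a 2048-bit RSA signing subkey (names here are placeholders, not mine). Note that the batch format only takes a single Expire-Date for the whole generation, so giving the subkey a shorter expiration than the primary key requires a follow-up pass with gpg --edit-key:

```
Key-Type: RSA
Key-Length: 4096
Key-Usage: sign
Subkey-Type: RSA
Subkey-Length: 2048
Subkey-Usage: sign
Name-Real: Example Name
Name-Email: example@example.org
Expire-Date: 3y
```

Feed this to gpg --batch --gen-key to generate the keys unattended.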

2010-09-18: now with IPv6

I've now enabled IPv6 for both systems and have published corresponding DNS records (although DNS as always may take some time to update).

Panix mentioned a while back that they now supported IPv6 (with, as usual, a great admin console interface that made it trivial to turn it on and set up proper reverse DNS), but I knew there were going to be some issues with setting it up, so I put it off. Amusingly, it was Debian that finally pushed me into configuring it: I kept getting an AAAA record first and waiting for the connection to time out before connecting to the IPv4 address, and I got tired of waiting.
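That timeout behavior falls out of clients trying addresses in resolver order, with AAAA results typically sorted first. A minimal Python sketch of that try-in-order logic, with a per-address timeout so the IPv4 fallback is quick (just an illustration, not the code of anything involved here):

```python
import socket

def connect_with_fallback(host, port, timeout=1.0):
    """Try every address getaddrinfo returns for host, in order.

    Resolvers generally sort AAAA results ahead of A results, which is
    why a host with a broken IPv6 path sits through a full connection
    timeout before falling back to IPv4.  A short per-address timeout
    keeps that fallback fast.
    """
    last_err = None
    for family, socktype, proto, _, addr in socket.getaddrinfo(
            host, port, type=socket.SOCK_STREAM):
        sock = socket.socket(family, socktype, proto)
        sock.settimeout(timeout)
        try:
            sock.connect(addr)
            return sock
        except OSError as err:
            last_err = err
            sock.close()
    raise last_err
```

This is essentially the "happy eyeballs" problem that later client libraries solve by racing the address families in parallel rather than trying them sequentially.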

For the most part, I just had to turn it on and everything simply worked. There were only a couple bits of weirdness:

2010-09-20: rra-c-util 2.7

I'm releasing a new version of WebAuth, so time to release a new version of rra-c-util as well.

The only code change in this release is an improvement to the krb5.m4 Autoconf macros. It now looks for krb5-config in /usr/kerberos/bin after checking the user's PATH, since versions of Red Hat Enterprise prior to 6 installed krb5-config there instead.
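The probe order is simple to mimic outside Autoconf; here's a hypothetical Python sketch of the same logic (the real check is an m4 macro, not Python):

```python
import os
import shutil

# Fallback location used by Red Hat Enterprise releases prior to 6,
# per the krb5.m4 change described above.
RED_HAT_FALLBACK = "/usr/kerberos/bin/krb5-config"

def find_krb5_config():
    """Look for krb5-config on the user's PATH first, then fall back
    to the old Red Hat location, mirroring the Autoconf probe order."""
    found = shutil.which("krb5-config")
    if found:
        return found
    if os.access(RED_HAT_FALLBACK, os.X_OK):
        return RED_HAT_FALLBACK
    return None
```

Checking the PATH first means a user-installed or vendor-packaged krb5-config always wins over the legacy location.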

This release also uncomments all the additional probes so that they'll be tested during rra-c-util builds, and fixes one syntax error in the Kerberos v4 probes.

You can get the latest version from the rra-c-util distribution page.

2010-09-20: WebAuth 3.7.3

This is another bug fix release addressing more fallout from the 3.7.0 release. Some of the refactoring of LDAP attribute handling in mod_webauthldap broke the backward-compatibility support for WebAuth 2.x; this release fixes that. mod_webauthldap also wasn't being explicitly linked with the portability glue library, which caused build failures on RHEL 4 x86_64. The build also now checks /usr/kerberos/bin for krb5-config in addition to the user's PATH.

As of this release, the WebAuth libwebauth library also installs a pkg-config configuration file for clients that want to use pkg-config to find the appropriate compilation and link flags.
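I haven't reproduced the exact installed file here, but a pkg-config file for a library like libwebauth generally looks something like this (paths and fields illustrative):

```
prefix=/usr/local
exec_prefix=${prefix}
libdir=${exec_prefix}/lib
includedir=${prefix}/include

Name: webauth
Description: WebAuth web authentication library
Version: 3.7.3
Cflags: -I${includedir}
Libs: -L${libdir} -lwebauth
```

Clients can then run pkg-config --cflags --libs webauth to get the appropriate compilation and link flags without hard-coding installation paths.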

You can get the latest version from the official WebAuth distribution page or from my unofficial distribution page.

2010-09-21: Lightweight DNS servers

Since I got a lot of responses to my previous journal entry about tinydns, I wanted to share the answers with everyone else (and with search engines) in case anyone else was wondering the same thing.

The consensus choice for a simple authoritative DNS server was NSD (which is also packaged for Debian, currently as nsd3). I took a brief look and indeed it would do what I want. It's still a bit more complicated than I would prefer, but nowhere near as bad as BIND, and it handles all modern DNS features (IPv6, DNSSEC, etc.).

Several people mentioned unbound, which looks like a great solution for a different problem than the problem that I have. It's a caching DNS server rather than an authoritative DNS server, although it supports some authoritative overrides.

There was also one recommendation of PowerDNS (Debian package pdns-server and friends), which I'd heard of before and which I think I'd turn to if I was looking for a full-featured DNS server. I think it's overkill for my tiny problem, but it has the neat feature that you can run an arbitrary command to provide DNS responses. That means that it could potentially replace lbnamed, should we need something with more features for some of the DNS tricks that we do at Stanford.

Finally, tinydns development hasn't completely stopped since djb stopped working on it, and there are maintained forks that have patches to support new record types. Still no DNSSEC support that I'm aware of, but continuing to use it with patches to support SRV and AAAA records is quite appealing, since I much prefer the zone file format (the "standard" zone file format is a horrible bodge) and, of course, I'm already familiar with it. If a new version were packaged for Debian, I'd probably just keep using it.
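As an illustration of why I prefer it, a tinydns data file is just one record per line, with the record type indicated by the leading character (example names and addresses, not my real zone):

```
# . = NS plus SOA, = is A plus matching PTR, @ = MX, C = CNAME
.example.com::a.ns.example.com
=www.example.com:192.0.2.10
@example.com::mail.example.com:10
Cftp.example.com:www.example.com.
```

Compare that to a BIND-style zone file with its SOA serial-number bookkeeping, $ORIGIN handling, and trailing-dot pitfalls, and the appeal of the compact format is obvious.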

And as a bonus follow-up from my original post, on the topic of the divergent handling of iptables for IPv4 and IPv6, Martin Krafft recommended ferm, which looks very interesting but which I haven't yet investigated further.

2010-09-25: New archive key

Since I'm in a GnuPG rekeying sort of mood, I've generated a new archive key that's a 4096-bit RSA key (overkill, I know) and is sign-only. I've updated the eyrie-keyring package available from my personal archive to install it. I haven't changed the archive signatures over yet, to give people who may be using that archive a chance to update. I'll probably do that after squeeze releases.

The new package still uses the old apt-key methods, since I wanted to update the key in lenny as well. I will probably eventually switch to using /etc/apt/trusted.gpg.d as supported by the current version of apt. It's rather hard to tell from apt's changelog when that support was added, but I think it will be in squeeze; the squeeze apt-key man page at least makes that sound plausible.
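A sketch of the difference between the two methods (filenames hypothetical):

```
# Old method, works on lenny: the postinst merges the key into the
# single /etc/apt/trusted.gpg keyring via:
#
#   apt-key add /usr/share/keyrings/eyrie-archive.gpg
#
# Newer method, with apt's fragment directory support: the package
# just ships the keyring as a file, no postinst action needed:
#
#   /etc/apt/trusted.gpg.d/eyrie-keyring.gpg
#
# apt trusts every keyring file found in that directory.
```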

While I was cleaning things up, I also purged from my personal archive all the packages that have since been uploaded to Debian, and also dropped the etch repository since all the Debian infrastructure is dropping it.

You can get the current information about how to add my archive and what packages to expect in it, as well as now-updated links to the Debian packages that I help maintain, on my Debian package page.

2010-09-28: Coding styles

I'm one of those people who tries to keep a very consistent coding style across all of my projects, including project documentation and build system layout, and I also like reading other coding style guides to look for good ideas and clean ways of organizing packages. I first wrote a coding style guide for INN once I started working on it regularly, then moved and expanded that guide and added Perl for my work at Stanford. Over time, I wrote a guide for C build systems and package documentation.

These guides were all stranded in our old group web pages, which I'm working on retiring, and since they're my personal styles (although largely used by my group at work), it seemed to make the most sense to pull them back into my personal web pages. They're now available in a new coding style section of my technical notes.

The build system document is heavily revised, since I originally wrote it before I started using Automake, and the documentation guide now has much more detailed documentation of how I lay out README files. The other documents just have minor revisions.

2010-09-30: Productivity

For once, not using that journal entry title to talk about my own.

Someone posted a pointer to an opinion piece looking at productivity differences between the US and Germany to a private hierarchy I read. I know productivity numbers like this are somewhat disputed, but I don't want to get into that. What I'm more interested in is the analysis.

Whether or not the US does significantly better or worse than other countries in this regard is hotly debated, but what isn't debated is that the length of the typical work week for employees, particularly in IT, is not going down. Fifty or a hundred years ago, it was common to expect that the growth of automation would mean more leisure for humanity. There was, at the time, some argument over whether that would be a good thing, but it seemed to be a trend.

This didn't happen here. The above article argues that the amount of vacation that US employees take is going down over time. And in IT it's common to hear about 50-hour weeks (or 80-hour weeks in startups). Little of that promise of increased leisure from modern technology seems to have happened in the US, although it seems to have happened somewhat in Europe; about all we managed was to get down to a 40-hour work week, which is now honored in the breach for a lot of exempt employees. And that was seventy years ago.

Most of the scientific research that we have available is quite clear that working longer hours causes diminished productivity. Standard time management advice is to focus more and work in shorter intervals. Hours of unfocused work are considerably less effective than much shorter periods of focused work, recovery time is vital, and forcing oneself to work long hours leads to bad decision-making. Even in less cerebral and more repetitive work, such as some assembly line work, we've known for many decades that working longer hours leads to more accidents and more bad decisions, and there the costs of an accident are often much higher.

So, why are we so dead-set on doing the exact opposite of what available evidence says is the best way to work?

I think this is to some extent a Rorschach blot question: you're going to see the answer in this that fits your preconceived notions about what's dysfunctional about US workplaces. I rambled elsewhere about my own pet peeve: the overwhelming focus on money and short-term profit and cutting cost, which I think leads to a lot of the nasty employment abuses in the US.

But here I want to raise a different point: as the article says, we work long hours because that's what we're measured on and rewarded for. We work unproductively for many of those hours partly because it's impossible to be productive for that many hours. We get used to feeling intermittently unproductive at work; it becomes normal. And in many jobs, particularly in IT, we don't measure ourselves by results because we have no idea what a reasonable level of results actually is.

Management gets a lot of blame for failure to manage by results rather than by artificial measures like attendance and time worked (or tickets closed, or hours logged against customer requests, or similar trivially manipulable and artificial measures). But I think this goes deeper than management. I think even we ourselves, with as good a knowledge as anyone could possibly have of what our jobs are and what good results look like, don't have a clear idea of how much to expect from ourselves on a daily basis.

I know from time management exercises that I wildly mis-estimate tasks and don't have a good feedback loop to get better at estimates. I also tend to plan towards maximums rather than the average. If I have a very good day, I try to work at that level and hold myself to that level of work, rather than trying to find a long-term average. And my perceptions about what results constitute good results vary wildly based on the most recent events and surrounding context (such as whether a project is late), even though I know that my basic work capacity is not going to change significantly because a project is late.

If we want to change how we measure work, I think this may be the place to start. We can't ask managers to measure us by results if we don't even know what that measurement would look like.

I'm going to start asking myself some new questions: What is the average amount of work I can get done in a day? What's the standard deviation? Is there some way that I can measure this over time so that I'm not fooling myself based on circumstances or temporary surges of productivity, since my day-to-day productivity seems to have a lot of "noise"? What realistically constitutes finishing a week's worth of work?

When am I "working" but not accomplishing something, and more importantly, why am I doing that? What breaks when that happens?
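The first set of questions is just bookkeeping plus basic statistics. A toy sketch of what I mean, with invented numbers standing in for a real log of substantive tasks finished per working day:

```python
from statistics import mean, stdev

# Hypothetical log: substantive tasks finished on each of ten
# working days.  Real data would come from a time-tracking log.
tasks_per_day = [3, 5, 2, 4, 6, 1, 4, 3, 5, 2]

average = mean(tasks_per_day)       # long-term daily average
spread = stdev(tasks_per_day)       # how noisy day-to-day output is

# A "normal week" estimate anchored to the average, not the best day:
week_estimate = 5 * average

print(f"daily average {average:.1f}, stddev {spread:.2f}, "
      f"weekly estimate {week_estimate:.1f}")
```

The point of tracking the standard deviation alongside the mean is exactly the trap described above: a large spread is a warning not to hold yourself to the level of your best day.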

I'm firmly convinced that one of my personal breakthroughs in time management will happen when I can stop "working" when I'm not being productive. Whenever I manage this (rarely, right now), I both find I needed the break and find that I can return far more productive than I was before I took the break. But usually I continue to "work" unproductively, clocking hours without creating results, and feel both guilty about the lack of results and even less productive. That's the trap that I think this article is pointing at.

Last spun 2024-01-01 from thread modified 2022-06-12