Posts for December 2012

2012-12-07: Consent: Do you speak it?

Yes, humor is a matter of taste. Does this mean that practical jokes, pranks, and social engineering are just a matter of taste?

No.

What's different, and what's missing, is consent. A key characteristic of practical jokes is that they involve a third party in the joke without getting specific prior consent, because getting prior consent would ruin the joke. There are situations in which that's fine: if the people involved know each other well, they can give implicit consent to that sort of joke, for example. But if the victim isn't known to the pranksters, the situation falls outside the set in which this is okay.

Consent is something that human society in general seems to be struggling with right now. Consent to sex. Consent to have one's personal information used by a corporation. Consent to have one's pictures posted on-line. And in every case, I think the default position should be the same as the gold standard for sex: enthusiastic consent. If you want to make someone else part of your business, your art work, your joke, your sex act, or anything else that happens outside of your head in a way that affects them, the required step is very simple: you need to get their enthusiastic consent.

If that's too much trouble for you, if that makes it less fun for you, if that means you can't make money, if you think your great idea overrides their rights over their own life, body, ideas, information, or participation, then you fail. That simple.

If you want to use someone else, you have to ask first. It's not a hard concept. One might even say that it's basic politeness.

Now, the role of governments in this is limited and often not practical. For example, I don't think one can write laws about radio performers who like making prank calls, at least not without causing so many negative side effects that the effort isn't worth it. Prank calling strangers doesn't make you a criminal. It just makes you an asshole.

Also, doing things to other people without their consent doesn't mean that you're then morally responsible for anything that subsequently happens that may or may not be related. But you're still an asshole. And if your employer decides that you being an asshole as part of your job has suddenly stopped being funny and fires you, hey, works for me.

2012-12-19: rra-c-util 4.7

This release is a rollup of a variety of miscellaneous features and bug fixes that have accumulated since September. It takes the first few steps towards the general Perl coding style revision that I'm working on, but only a few. I've yet to move some generic Perl test frameworks into this package (hopefully soon).

In the new feature department, there's a new portability wrapper around sys/statvfs.h that converts statvfs code to statfs code for older systems, and a lot of enhancements to the portable/apr.h header to fix up some more constants for APR 0.9 compatibility.

In the bug fix department, the Kerberos and GSS-API probes now probe for Heimdal's libroken by looking for a function that it has provided since at least 0.4 instead of one that's new in 1.3, fixing some portability issues to BSD systems. The Kerberos header probes also now check for headers using file existence checks instead of the compiler in some cases to fix problems with detecting the wrong header when using one of several different Kerberos library installations on a system.

In the test suite department, this release updates the POD and POD spelling checks to be generic and use more modern modules and techniques, and adds a new set of generic Perl tests that will be useful for any of my packages that contain Perl code. There's also a new valgrind suppression file for assisting with valgrind testing in packages that use Kerberos.
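
These aren't the actual tests shipped in rra-c-util, but as a rough illustration, a generic POD syntax check in this style looks something like the sketch below, built on the standard Test::Pod module (the spelling check is analogous, using Test::Spelling instead):

    #!/usr/bin/perl
    #
    # Minimal sketch of a generic POD syntax check, similar in spirit to the
    # tests described above but not the actual rra-c-util code.

    use strict;
    use warnings;

    use Test::More;

    # Skip everything cleanly if the optional test module isn't installed.
    eval 'use Test::Pod 1.00';
    plan skip_all => 'Test::Pod 1.00 required to test POD syntax' if $@;

    # Find and check the syntax of every POD document in the distribution.
    all_pod_files_ok();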

You can get the latest version from the rra-c-util distribution page.

2012-12-19: WebAuth 4.4.0

WebAuth is a site-wide web authentication system with single sign-on support. We use it extensively at Stanford for most of our user-facing web applications that need to authenticate people.

This is the culmination of the last four months of work, so it's rather nice to finally get it out the door, even if not everything I wanted to get done is done. That's always how it is with software releases.

This release fixes a bug in the encoding of delegated Kerberos credentials when Heimdal is used as the Kerberos library. If you're building WebAuth against Heimdal, upgrade any mod_webauth modules that receive delegated credentials before upgrading mod_webkdc on the WebKDC. Otherwise, the flags of delegated tickets won't be sent correctly.

The major new feature in this version is support for authorization identities separate from authentication identities. This can be used for testing, for administrators to proxy into an application as another user, or in other cases where there's a mismatch between the canonical concept of users and the capabilities of an application. The WebLogin server and WebKDC now support an ACL file that controls what alternative identities users can assert to specific web sites, and there is a new flow on the WebLogin side to set or change identities.

All of this is disabled by default in each component. There is a new WebKDC directive to enable this support and specify the identity ACL file, and there is a new directive in mod_webauth to tell it to trust the authorization identity. There is a new environment variable that will be set to the authorization identity (trusted or not), and REMOTE_USER will be set to the authorization identity only if it is trusted. WEBAUTH_USER will always remain the authentication identity, so both identities can be logged and web applications can distinguish between them.
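
As an illustration of that last point (and only an illustration, not code from WebAuth itself), a Perl application running behind mod_webauth could compare those two variables to notice when an authorization identity is being asserted:

    #!/usr/bin/perl
    #
    # Illustrative sketch only, not part of WebAuth: a CGI script behind
    # mod_webauth distinguishing the authentication identity (WEBAUTH_USER)
    # from the identity it should act as (REMOTE_USER), as described above.

    use strict;
    use warnings;

    my $authn = $ENV{WEBAUTH_USER};    # who actually authenticated
    my $authz = $ENV{REMOTE_USER};     # identity the application should use

    if (defined($authn) && defined($authz) && $authn ne $authz) {
        # Someone is acting as another user via a trusted authorization
        # identity.  Log both identities so the audit trail stays accurate.
        warn "authorization identity $authz asserted by $authn\n";
    }

    print "Content-Type: text/plain\n\n";
    print 'Acting as: ', $authz // 'anonymous', "\n";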

To address one of the edge cases required by this support, there's a new mod_webkdc Apache directive, WebKdcLoginTimeLimit, which controls how long a multi-step login process can take, and also how recent a login has to be to contribute its authentication factors to the session factors for an authentication. This is also now used for WebAuthForceLogin, which means that forced logins won't require re-authentication if they're within the WebKdcLoginTimeLimit interval of the last login. This allows that feature to work properly in conjunction with authorization identities and with some multifactor authentication methods.

Also in this release, optional replay detection and rate limiting of failed logins have been added to the WebLogin server. Either or both can be enabled in the configuration file. This support requires that a memcached server (shared across any pool of WebLogin servers) be available to store the necessary state. There are new parameters in the error template to handle the error messages generated by these features.
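
WebLogin's actual implementation is its own code, but the general shape of memcached-backed rate limiting is roughly the following sketch, using the CPAN Cache::Memcached module; the key format, threshold, and expiration time here are made up for illustration:

    #!/usr/bin/perl
    #
    # Sketch of the general approach to memcached-backed rate limiting of
    # failed logins.  Not WebLogin's actual code: the key format, threshold,
    # and expiration time are invented for illustration.

    use strict;
    use warnings;

    use Cache::Memcached;

    my $memd = Cache::Memcached->new(
        { servers => ['127.0.0.1:11211'] }   # shared across the WebLogin pool
    );

    # Record a failed login for a user and say whether they're now limited.
    sub failed_login_limited {
        my ($username) = @_;
        my $key = "failed:$username";

        # Create the counter with a five-minute lifetime if it doesn't
        # already exist, then increment it atomically.
        $memd->add($key, 0, 5 * 60);
        my $count = $memd->incr($key);
        return ($count // 0) >= 5;    # made-up threshold of five failures
    }

    if (failed_login_limited('example-user')) {
        print "Too many failed logins; try again later.\n";
    }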

The WebLogin server also has a few more minor improvements: single sign-on cookies are now set, when any are available, even when displaying error pages, which fixes some looping problems in scenarios that restrict users to authenticating to specific sites; and the @REMUSER_REALMS setting has been split into two settings so that its two properties can be changed independently. The old setting is still supported for backward compatibility.

Multiple bugs in Kerberos ticket encoding have been fixed, some introduced in 4.3.0 and some present since the first days of Heimdal support. Other fixes include correct mapping of WebKDC error codes to names in WebLogin (the old behavior resulted in Perl warnings in the error log) and adding the missing documentation for the WebAuthRequireSSL directive.

Finally, the thing that I'm the happiest about in this release is that I finished my complete refactoring of the libwebauth library. Most of the low-level interfaces are gone in favor of higher-level manipulation of WebAuth protocol objects. The last pieces of code that used the old token encoding system have been replaced with the new data-driven encoder. And the last parts of the library have been converted to APR, so there is no longer a mix of traditional memory management and APR pools. This comes with lots of simplification of the API and removal of old cruft from the public headers.

You can get the latest release from the official WebAuth distribution site or from my WebAuth distribution pages.

2012-12-25: Kindle haul

Happy winter holiday of choice to all and sundry! Enough new Kindle books have accumulated that it seems time for another post so that I can remember what I bought.

Elizabeth Bear — Garrett Investigates (sff)
eluki bes shahar — Hellflower (sff)
Elizabeth Cleghorn Gaskell — Cranford (mainstream)
John Scalzi — Subterranean Scalzi Super Bundle (sff)

Cranford was free since it was written in the 1800s. I picked it up because of Jo Walton's review on Tor.com. The eluki bes shahar novel is from Daniel Moran's e-book site FS&, an impulse buy from browsing through his catalog while picking up the Long Run bundle for my mother. The Scalzi super-bundle is a collection of five separately-published novellas and chapbooks plus one non-fiction essay, only one of which I already owned (The God Engines), which seemed like a fairly good deal.

In a separate note, the problem with watching interesting lectures on YouTube (apart from the fact that it's very easy to lose a day doing so) is that I usually end up wanting to buy the book on the topic and then read that. That happened to me today: I finally got around to watching Philip Ball's lecture on the chemistry of painting, and now Bright Earth is in my non-fiction want list. And there are three more hour-long TVO programs linked off of related material that look interesting. I fear the day may disappear....

Well, I will be rescued at least partly by being called to help decorate Christmas cookies.

This is the beginning of by far my favorite two weeks of the year, although not for the reasons that people might suspect. Other than food (yum), my family doesn't do Christmas: no decorations, no huge family get-togethers, and we all give each other the freedom from the guilt and stress of having to find presents. But I get two weeks of mandatory vacation from work, as does, importantly, everyone else. That means there's nothing going on to pull me back in from vacation, and there's a very pleasant sense of resting and resetting for the new year. It's quiet, I can sit and read, I can focus on one thing at a time, and I can reorganize and order my life and revisit priorities and to-do lists with a lovely sense of perspective.

Any other mandatory or at least widespread holiday would work as well, so it's unrelated to the season. (I find the commercialism somewhat distracting rather than a positive part of the experience.) But this is the one that we get, and it happens to coincide nicely with the meaningless but psychologically significant change of the year number. And it's delightful every year.

2012-12-25: git-pbuilder 1.29

git-pbuilder is a shell script suitable for use as the builder command in git-buildpackage that simplifies using pbuilder, cowbuilder, or qemubuilder to build packages in a chroot. It's now part of the git-buildpackage Debian package, but I maintain it somewhat separately.

This version supports a new environment variable, GIT_PBUILDER_OUTPUT_DIR, that can be set to override the default output location of ".." (the parent directory). It also changes the script to be a bash script and uses arrays to handle the options passed to the builder command, so that shell metacharacters in the GIT_PBUILDER_OPTIONS environment variable will hopefully be handled correctly. As part of that change, it also tries to escape any options that will be passed to --debbuildopts, to protect them from the additional layer of shell parsing that they'll be subject to.

Thanks to Guido Günther and paul cannon for the patches.

I hate having to make the script bash-specific, but trying to get the shell expansion right is otherwise a huge headache. That probably means that it's outlived the usefulness of the original implementation language (which was shell only because this started life as a quick hack). I should probably take the time to rewrite it in either Perl (which I know best) or Python (which is the language the rest of git-buildpackage is written in). But that's for another day. (Here's an excellent case in point for why one shouldn't embed language-specific extensions in the name of an executable that's part of a user interface or configuration interface.)
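
Part of why a rewrite appeals to me: in Perl, the builder options can live in a real list and be handed to system() as a list, which never goes back through the shell, so most of the metacharacter problem disappears. A rough sketch of the idea (not actual git-pbuilder code, and with deliberately simplistic option handling):

    #!/usr/bin/perl
    #
    # Sketch of how a Perl rewrite could sidestep the shell-quoting problems
    # discussed above.  Not actual git-pbuilder code, and the option handling
    # is deliberately simplistic.

    use strict;
    use warnings;

    use Text::ParseWords qw(shellwords);

    # Split GIT_PBUILDER_OPTIONS once, honoring quotes, into a real list.
    my @options = shellwords($ENV{GIT_PBUILDER_OPTIONS} // q{});

    # Passing a list to system() bypasses the shell entirely, so
    # metacharacters in the options can't be reinterpreted.
    my @command = ('cowbuilder', '--build', @options, @ARGV);
    system(@command) == 0
        or die 'cowbuilder failed with status ', ($? >> 8), "\n";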

You can get the latest version from my scripts page.

2012-12-27: Charity review

I'm thankfully in a place where I can afford to give money to charity, and I've slowly built up a set of charities that I give money to. But I've never been that systematic about it. I've had a vague target for how much I want to give, I add new charities when they seem neat, I'll rarely drop a charity when it doesn't seem to be effective, and I probably spread my money out too much.

Recently, I followed a chain of blog references and discovered GiveWell, a site that's devoted to doing more in-depth analysis of charities than the basic financial analysis done by sites like Charity Navigator. In particular, they do literature surveys to try to dig into the effectiveness of particular interventions and take a closer look at how effectively a charity can use new donations.

After reading quite a bit on their site and some related sites, I sat down yesterday and did a comprehensive review of my goals and principles and then put together a new plan. Some of that may be interesting to other people, so I'll write up that process here. This will be somewhat US-centric, since charities vary a lot by tax regime, but I think some of the general principles will still apply elsewhere. (But most of my specific examples will be US charities.)

First, a few basic principles about charitable giving:

  1. Charity goals cannot be evaluated uniformly. Different types of organizations need different criteria. I've found that charities divide generally into ones that are trying to accomplish a specific task and ones that are trying to create political change. For the former, small and quiet organizations are often the best. For the latter, the charity has to get to a certain size before it's part of the conversation, and that involves a different set of tradeoffs.

  2. When you care about efficiency (number of lives saved or quality of life improvement per dollar spent), don't give money to organizations you've heard about via means other than word of mouth or third-party evaluations. This is for a simple reason: if the charity has successfully made you aware of it, that means they're advertising themselves to you. Advertising generally costs quite a bit of money. That's money they're not spending on whatever they're trying to accomplish.

    Excellent examples are many of the health research and education charities, such as the American Heart Association and the American Lung Association, or the massive disaster relief organizations like the American Red Cross. These are not efficient organizations. If you're trying to get the most impact for your dollar, this isn't a good place to spend it.

    As a special case of this, please don't ever give money to any organization that produces TV commercials showing starving children in Africa. If the commercial makes you want to give money, that's great; just don't give it to the organization on TV. TV commercials are extremely expensive, and those organizations are notorious for being some of the least efficient charities in existence (in some cases bordering on scams). You'll be lucky if 50% of your money actually goes to Africa, and the money that is spent probably isn't spent wisely.

  3. If your donation gets you a membership that comes with a glossy magazine, remember that this means the organization is spending your money producing that magazine. This is counted as program funding, not administrative costs, on sites like Charity Navigator, because the charity probably counts the magazine as part of its educational mission.

    If you want to subscribe to the magazine and enjoy it, there's not necessarily anything wrong with this. But if you were giving them money to do something else, be aware that money is being used to produce the magazine (and the Christmas cards and the mailing labels and the tote bag) instead of doing things that you might consider program funding. For example, I stopped giving money to the Audubon Foundation because I want my environmental giving to go directly to protecting the environment, not towards printing calendars and making a glossy magazine full of bird pictures.

    As a counter-example, I consider the Southern Poverty Law Center to be one of the most effective US political charities, despite the fact that they produce a glossy magazine. That's because a large part of their mission is investigative reporting and research, and that magazine is how they publish that research. In this case, I both want to read the magazine and consider the magazine something that I'm happy to fund.

  4. Efficiency isn't everything. For some types of charities, the charity has to be huge in order to be effective. This is particularly true of political charities. For example, the ACLU is not a particularly efficient charity and has several fund-raising practices that I dislike. However, what the ACLU does requires that they have extensive media access and respect from legislators and the judicial system. It cannot be done effectively by a smaller and more efficient organization. Similar principles apply to a lot of political and environmental charities, and to some international aid charities such as Doctors without Borders (Médecins Sans Frontières).

If you're looking to increase the efficiency and effectiveness of your giving according to objective measures, I highly recommend reading through GiveWell's site. They recommend charities that avoid all the pitfalls above. You've probably never heard of those charities because they don't spend their money on advertising and marketing. However, keep in mind that efficiency isn't everything.

With those principles in mind, and after doing a lot of reading on GiveWell, I redid my charity plan by dividing my giving up into five categories with different goals.

Services. There are some things that I use that I think should be paid for by tax dollars but aren't, and are instead funded by charitable donations. I don't consider this charity in the same sense as the rest of this list. Rather, I'm paying for what I use plus some extra so that other people who can't afford it can use the service for free. Efficiency doesn't matter as much because there's usually only one organization supporting the thing that I personally use. Examples include my local PBS (public broadcasting) station, the non-profit that works with city government to support the local library system, and the trust that helps maintain the state parks where I go on vacation.

Education. This is a special category for me since I regularly give money to the community college I attended. I know the region and the college and have some idea of what they need and what matters to the students there, so I can make specific choices. I give all the money in this category to the same place.

Free software. Here, I know that my donations won't be used as efficiently in terms of quality of life as they could be if sent somewhere else, but this is my community and I want to support it because it's my community. Here, as with education, I have a lot more data and can pick and choose places that I think can use my money. Examples here are the Ada Initiative and the Free Software Foundation.

Politics. This is the hardest area for efficiency, since it's so difficult to measure effectiveness in any organization where much of the goal of the organization is to persuade politicians or the general public. I try to measure effectiveness by how many concrete actions the organization takes (rather than press releases), which is easiest for organizations like the ACLU or the EFF that file lawsuits or legal documents and requests. As mentioned above, I think the standout in this area is the Southern Poverty Law Center, but political charities are so targeted that I give to multiple charities to cover my range of interests. I've found environmental charities particularly difficult to evaluate, but the one that appears the most effective to me so far is the Environmental Defense Fund, so that's where I concentrate my environmental money.

I used to give to small political charities but have given up on that as basically useless, at least for national politics. I think the charity has to be of a minimum size to get anyone in government to listen to them.

Poverty and Health. This is where efficiency matters much more than size. GiveWell's analysis is very interesting here, and sadly shows that most interventions either don't work or at least can't be proven to work. Here, I abandoned nearly all of my traditional large charities and have gone almost entirely with GiveWell's recommendations, with two exceptions. I still cycle money through Kiva (a microfinance support charity) because I have a bit of an emotional attachment to it, although microfinance is looking increasingly suspect as an effective charity method and I may yet drop them. And I still give money to Doctors without Borders because, despite being less efficient, I think their size and history have won them an international credibility that lets them get into areas that smaller, more efficient charities wouldn't be able to reach.

One of GiveWell's recommendations is my new favorite charity: GiveDirectly. All aid charities run a serious risk of having a colonialist bent, where rich countries come into poor countries and build things or tell them how to do things following the priorities of the rich countries. (This is one of the reasons why I prefer medical charities, since they're less susceptible to this.) GiveDirectly identifies the poorest people in a region (using a very transparent process) and transfers money to them directly through the M-Pesa mobile payment system to spend however they choose, with no strings attached other than some due diligence to protect against fraud. This is refreshingly non-paternalistic and makes me far more comfortable than typical aid projects.

Originally, I was going to assign money into those buckets and then break the buckets down further, but I ended up not quite doing that. The services bucket is more driven by the services I use than by an even allocation. Of the rest, it goes approximately three shares to poverty and health, two to politics, and one each to education and free software.

I think I've managed to eliminate most of the places where I'm giving money to multiple organizations that do the same thing, but I'm still spreading my money rather widely and could stand to focus it more. That's hard, though; there are a lot of different things that I care about, and political charities in particular aren't very interchangeable.

I'm currently not giving locally very much, which might be worth changing. The easiest way to do so would be to find a good local food bank and start donating to them (giving cash, not food).

For a more comprehensive list of charities I support, see my charities link page.

2012-12-30: Term::ANSIColor 4.00

Term::ANSIColor is a Perl core module that provides a few different interfaces to get at the ANSI color and text attribute escape sequences. It can be used to, for example, print out bold text or colorize output. This is a major new feature release, incorporating patches sent to me by Kurt Starsinic and Stephen Thirlwall.

Kurt Starsinic contributed support for 256-color emulators. These hang special extended sequences off the otherwise-unused 38 and 48 sequences, providing an alternative naming of the base 16 colors, 216 colors arranged in an RGB namespace, and 24 shades of grey. Those colors are now available via all the normal Term::ANSIColor interfaces. He also contributed a program to print out test data for 256-color emulators that I enhanced to print out test data for the basic and 16-color sets as well, which allowed me to get rid of the static test files in the distribution.
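
The new colors plug into the same interfaces as the old ones, so using them looks just like using the basic colors. Here's a small sketch, assuming the ansiN, rgbNNN, and greyN attribute names from the module documentation; whether the colors render correctly depends on the terminal emulator:

    #!/usr/bin/perl
    #
    # Small example of the 256-color support through the normal interfaces,
    # assuming the ansiN, rgbNNN, and greyN attribute names documented in
    # Term::ANSIColor 4.00.  Rendering depends on the terminal emulator.

    use strict;
    use warnings;

    use Term::ANSIColor 4.00 qw(color colored);

    # The classic 16-color interface works as before.
    print color('bold blue'), 'bold blue text', color('reset'), "\n";

    # Colors from the 6x6x6 RGB cube, the grey ramp, and the extended names
    # for the base 16 colors.
    print colored('a color from the RGB cube', 'rgb512'), "\n";
    print colored('a shade of grey', 'grey12'), "\n";
    print colored('white on dark red', 'ansi15', 'on_rgb100'), "\n";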

Stephen Thirlwall contributed support for creating aliases for the standard colors via an environment variable. I extended his patch to also provide a coloralias() function interface. This feature probably won't be that widely used, but it allows a user to set up custom color names for applications that take color names from user configuration (which might be handy for doing things like using the Solarized color scheme), or to define an alias like "alert" or "warning" in one place and then use it throughout the code. With the new 256-color support, it may be useful to set up more human-readable aliases for some of the RGB colors.
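
In code, the alias support looks roughly like the sketch below; the alias names here are made up, and the same mappings could instead come from the environment variable before the program starts:

    #!/usr/bin/perl
    #
    # Sketch of the color alias support with made-up alias names.  The same
    # aliases could instead be defined via the environment variable mentioned
    # above, without any code changes.

    use strict;
    use warnings;

    use Term::ANSIColor 4.00 qw(color colored coloralias);

    # Define semantic names once, then use them like any other color.
    coloralias('alert', 'red');
    coloralias('warning', 'yellow');

    print colored('something is on fire', 'alert'), "\n";
    print colored('something looks odd', 'warning'), "\n";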

Since I was working on this project anyway, I also redid the whole module in the new Perl coding style that I've been experimenting with, based on Damian Conway's Perl Best Practices, and added a ton of new tests. There are a lot of great CPAN modules out there for automated testing of various aspects of Perl code, and most of them don't require much effort to use. I'll probably write a separate post about that later.

That sort of comprehensive review of course uncovered a bunch of microbugs and lack of clarity in the code, which is now hopefully much improved. I'm pleased to report that the test suite achieves 100% code coverage as reported by Devel::Cover (and in fact there's a maintainer-only test to ensure that it stays that way).

You can get the latest version from the Term::ANSIColor distribution page.

2012-12-31: Term::ANSIColor 4.01

It was too much to hope for that all that testing would catch everything in a 4.00 release, but it's ironic that the main bug was a bug in one of the test cases! BlueT - Matthew Lien - 練喆明 found that one of the tests malfunctioned when Test::Warn was not installed.
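
The usual way to keep a test from breaking when an optional module like Test::Warn is missing is to skip it entirely; a generic sketch of that guard (not necessarily the exact fix that went into 4.01) looks like this:

    #!/usr/bin/perl
    #
    # Generic sketch of guarding tests that need an optional module, here
    # Test::Warn.  Not necessarily the exact fix that went into 4.01.

    use strict;
    use warnings;

    use Test::More;

    # Skip the whole test file cleanly when Test::Warn isn't installed.
    eval { require Test::Warn }
        or plan skip_all => 'Test::Warn required for warning tests';
    plan tests => 1;

    # Fully qualified call with an explicit sub so that this parses the same
    # way whether or not Test::Warn was loaded at compile time.
    Test::Warn::warning_like(
        sub { warn "boom\n" },
        qr{boom},
        'warning is reported'
    );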

Since I was cutting a new release anyway, I also improved the README somewhat and added POD documentation to the example script.

You can get the latest version from the Term::ANSIColor distribution page.
