Probably best not to attempt using the phrase “posterior analysis” as a term of art. Fortunately I caught this before sending it to anyone else…
This is the final weekend of the Edinburgh Fringe Festival, an enormous and insane annual event which draws around half a million people to a city of around half a million people. Walking round the city this week, I thought of a game to pass the time when stuck in a Festival crowd.
The game is called “Spot the Scot”. To play, start by walking down the streets of Edinburgh. Then, pick a group of people coming toward you on the street, not too distant, but far enough that you can’t hear them. Give them a good look over, and guess whether they are actually Scottish or not. As they pass you, eavesdrop to find out if you were right.
I have found this game thoroughly enjoyable, and I highly recommend it. Feel free to post strategies or high scores in the comments.
No matter one’s political persuasion, it is hard not to think, as Willard Foxton argues in an interesting essay, that the income tax code in the UK (and in the US too, for that matter) is too complex. In a more cynical mood I would be tempted to say that the tax law is so complex because complex tax laws benefit the rich, and the rich make the laws.
In the UK there have been several scandals over tax avoidance, perhaps most notably one of the two biggest Scottish football teams blowing up due to an offshore tax evasion scheme. At first I was unable to understand from the news reports why other football teams, and their fans, seemed so rabidly angry at the Rangers. But of course: football does not have a salary cap, so if a team unfairly spends less money on tax, it can spend more money on players. By cheating at their taxes, the Rangers were also cheating at football.
Taxes are political footballs as well, especially in the US. In the US there is an additional crazy phenomenon: creating a new program makes you an irresponsible tax-and-spend liberal who is taking money out of the pockets of working families, while cutting taxes makes you a deficit hawk. (I mean, uh, not to get too overtly political or anything?) Therefore if you are an American politician—of either party—and you want to create a new program for a noble goal, e.g., to pay for college scholarships for middle class families, why not make it a tax credit? That way, you get the noble program, and you can say that you’re cutting taxes too!
Ali Eslami has just written a terrific page on organizing your experimental code and output. I pretty much agree with everything he says. I’ve thought quite a bit about this and would like to add some background. Programming for research is very different from programming for industry. There are several reasons for this, which I will call Principles of Research Code. These principles underlie all of the advice in Ali’s post and in this post. They are:
1. As a researcher, your product is not code. Your product is knowledge. Most of your research code you will completely forget once your paper is done.
2. Unless you hit it big. If your paper takes off, and lots of people read it, then people will start asking you for a copy of your code. You should give it to them, and it is best to be prepared for this in advance.
3. You need to be able to trust your results. You want to do enough testing that you do not, e.g., find a bug in your baselines after you publish. A small amount of paranoia comes in handy.
4. You need a custom set of tools. Do not be afraid to write infrastructure and scripts to help you run new experiments quickly. But don’t go overboard with this.
5. Reproducibility. Ideally, your system should be set up so that five years from now, when someone asks you about Figure 3, you can immediately find the command line, experimental parameters, and code that you used to generate it.
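Principle 5 is easy to automate. Here is a minimal sketch of the idea (the function and file names are my own, not from Ali’s post): every run dumps its command line, parameters, and code revision into the output directory alongside the results.

```python
import json
import subprocess
import sys
import time
from pathlib import Path

def record_experiment(out_dir, params):
    """Save everything needed to re-run this experiment later:
    the command line, the parameters, and (if available) the
    version-control revision of the code."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    meta = {
        "command_line": sys.argv,
        "params": params,
        "timestamp": time.strftime("%Y-%m-%d %H:%M:%S"),
    }
    try:
        # This asks Mercurial for the current revision; for git,
        # substitute ["git", "rev-parse", "HEAD"].
        meta["revision"] = subprocess.check_output(
            ["hg", "id", "-i"], text=True).strip()
    except (OSError, subprocess.CalledProcessError):
        meta["revision"] = "unknown"
    (out / "experiment.json").write_text(json.dumps(meta, indent=2))
    return meta
```

Five years later, answering a question about Figure 3 is then a matter of opening `experiment.json` in the right output directory.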
Principle 1 implies that the primary thing that you need to optimise for in research code is your own time. You want to generate as much knowledge as possible as quickly as possible. Sometimes being able to write fast code gives you a competitive advantage in research, because you can run on larger problems. But don’t spend time optimising unless you’re in a situation like this.

Also, I have some more practical suggestions to augment what Ali has said. These are:
- Version control: Ali doesn’t mention this, probably because it is second nature to him, but you need to keep all of your experimental code under version control. To not do this is courting disaster. Good version control systems include SVN, Git, and Mercurial. I now use Mercurial, but it doesn’t really matter which you use. Always commit all of your code before you run an experiment. This way you can reproduce your experimental results by checking out the version of your code from the time that you ran the experiment.
- Random seeds: Definitely follow Ali’s advice to take the random seed as a parameter to your methods. Usually what I do is pick a large number of random seeds, save them to disk, and use them over and over again. Otherwise debugging is a nightmare.
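  That tip can be sketched as follows (the file and function names here are illustrative, not from Ali’s post): generate the seeds once, write them to disk, and reload the same list for every subsequent run, so a failing run can always be replayed.

```python
import random

def make_seeds(n, path):
    """Generate n random seeds once and save them to disk."""
    seeds = [random.randrange(2**31) for _ in range(n)]
    with open(path, "w") as f:
        f.write("\n".join(str(s) for s in seeds))
    return seeds

def load_seeds(path):
    """Reload the same seeds for every later run, so results
    (and bugs) are reproducible across experiments."""
    with open(path) as f:
        return [int(line) for line in f]

def run_experiment(seed):
    # Use a private RNG seeded explicitly, never the global one
    # implicitly, so runs with the same seed are identical.
    rng = random.Random(seed)
    return rng.random()
```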
- Parallel option sweeps: It takes some effort to get set up on a cluster like ECDF, but if you invest the effort, you get some nice benefits, like the ability to run a parameter sweep in parallel.
- Directory trees: It is good to have your working directory in a different part of the directory space from your code, because then you don’t get annoying messages from your version control system asking you why you haven’t committed your experimental results. So I end up with a directory structure like
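  the following (the original tree isn’t reproduced here; every name below is a hypothetical illustration of the scheme, with each results subdirectory named after the script that produced it):

```
code/
    run_gibbs.py
    run_variational.py
results/
    run_gibbs/
        2013-08-20/
    run_variational/
        2013-08-21/
```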
Notice how I match the directory names to help me remember what script generated the results.
- Figures list. The day after I submit a paper, I add enough information to my notebook to meet Principle 5. That is, for every figure in the paper, I make a note of which output directory and which data file contains the results that made that figure. Then for those output directories, I make sure to have a note of which script and options generated those results.
- Data preprocessing: Lots of times we have some complicated steps to do data cleaning, feature extraction, etc. It’s good to save these intermediate results to disk. It’s also good to use a text format rather than binary, so that you can do a quick visual check for problems. One tip that I use to keep track of what data cleaning I do is to run the data cleaning steps from a Makefile, with a different target for each intermediate result. That gives me instant documentation.
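As a sketch of the Makefile idea (all file and script names here are hypothetical), each intermediate result gets its own target, so the dependency graph itself documents how each file was produced:

```make
# `make features.txt` rebuilds only the steps that are out of date,
# and the rules record exactly how each intermediate file was made.
clean.txt: raw.txt clean_data.py
	python clean_data.py raw.txt > clean.txt

features.txt: clean.txt extract_features.py
	python extract_features.py clean.txt > features.txt
```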
I’ve just made an update to my list of software I like, motivated by my experiences setting up a new computer.