Various folks have asked me how I liked my month as a visiting fellow at the Singularity Institute, and since telling them one at a time clearly doesn't scale, I thought I would bore everyone at once with what I did on my summer non-vacation.

Also, because I am a geek, it is in FAQ format.

Q: Did you like it?
A: Yes.

Q: What is the Singularity Institute?
A: The Singularity Institute (SI) is a non-profit think tank; their main concerns are the long-term risks of human-level and beyond-human-level artificial intelligence. In practice, what they mostly do is write papers about these risks and what we might do about them. Along with Ray Kurzweil, they also run the Singularity Summit, a yearly conference which, despite its name, is less about the singularity per se and more a grab bag of lectures a transhumanist might find interesting. The Singularity Institute has no formal ties to the Google-affiliated Singularity University, but various SI folks have given talks there.

Q: What the heck does the word "singularity" mean?
A: There are two widely used but different meanings. The Kurzweil definition is an asymptote of accelerating exponential technological progress, beyond which prediction is more or less impossible. The older definition (the one SI uses) is an "intelligence explosion": an artificial intelligence smart enough to recursively make itself smarter, with no diminishing returns until it runs into the laws of physics.
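To make that "diminishing returns" clause concrete, here is a toy model of my own (not anyone's actual math): if each round of self-improvement buys a gain proportional to current intelligence, growth compounds without limit; if each round is harder than the last, things level off.

```python
def improve(intelligence, rounds, diminishing):
    """Iterate self-improvement for a fixed number of rounds."""
    for n in range(1, rounds + 1):
        gain = 0.5 * intelligence  # gain proportional to current smarts
        if diminishing:
            gain /= n              # each round is harder than the last
        intelligence += gain
    return intelligence

print("no diminishing returns:", improve(1.0, 20, diminishing=False))  # explodes
print("diminishing returns:   ", improve(1.0, 20, diminishing=True))   # crawls
```

The whole debate over the intelligence explosion is, in a sense, a debate over which branch of that `if` statement reality takes.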

Q: Is the singularity near?
A: No one knows. Some super genius might figure out the nature of intelligence and program one next week, or it might never happen. If you go to http://theuncertainfuture.com/index.php, you can generate your own probability distribution over the year it might occur, based on your estimates of individual factors.
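The basic idea behind such a tool is straightforward Monte Carlo. Here is a minimal sketch of the flavor of it; this is my own guess, not the site's actual model, and the factors and numbers below are entirely made up for illustration:

```python
import math
import random

random.seed(1)

def sample_arrival_year():
    # Hypothetical uncertain factors -- the real site uses its own set.
    flops_needed = 10 ** random.uniform(16, 22)  # compute for human-level AI
    flops_today = 1e15                           # roughly a 2010-era supercomputer
    doubling_time = random.uniform(1.0, 2.5)     # years per hardware doubling
    doublings = math.log2(flops_needed / flops_today)
    return 2010 + doublings * doubling_time

# Draw many samples, then read percentiles off the sorted results.
years = sorted(sample_arrival_year() for _ in range(100000))
for pct in (10, 50, 90):
    print(f"{pct}th percentile: {years[len(years) * pct // 100]:.0f}")
```

Feed in your own beliefs about each factor and out comes your personal distribution over arrival years.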

Q: Isn't it way too early to be worrying about the singularity?
A: For a given individual, maybe. For society as a whole, almost certainly not. The SI position is along the lines of: wouldn't it have been nice if folks had thought long and hard about the dangers of nuclear weapons decades, if not centuries, before we built them?

Q: What did you do at SI?
A: A bunch of different stuff. SI is what I call a "startup think tank": they are still pretty new and don't have a lot of funding or permanent staff. A good portion of the time I just chipped in with whatever needed to be done that week, day, or hour. Among other things, I helped someone learn to program in C, helped write an NSF grant application, helped prepare for the Singularity Summit, proofread a bunch of stuff other folks were writing, and wrote a rough draft of a philosophy/CS paper applying Goodhart's Law to AI.

Q: What the hell is Goodhart's Law and what does it have to do with AI?
A: Goodhart's Law states that "Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes." It comes from economics, where "control" means "policy." The classic example is the Federal Reserve observing a strong correlation between inflation and unemployment, a correlation which breaks down the moment the Fed tries to manipulate inflation to reduce unemployment. The paper argues that AIs have to deal with a variant of Goodhart's Law, especially as they grow more powerful.
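The draft isn't public yet, but a toy simulation of mine shows the flavor of the law itself: a regularity that holds while you merely observe it evaporates once you optimize against it. Here a hidden factor drives both a proxy metric and the outcome we actually care about; once agents start pumping the proxy directly, the correlation collapses:

```python
import random

random.seed(0)

def correlation(xs, ys):
    """Plain Pearson correlation, no external libraries needed."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def world(effort_on_proxy=0.0):
    # True causal structure: one hidden factor drives both variables.
    hidden = random.gauss(0, 1)
    proxy = hidden + effort_on_proxy + random.gauss(0, 0.1)
    outcome = hidden + random.gauss(0, 0.1)  # outcome ignores gamed effort
    return proxy, outcome

# Before control: the regularity looks rock solid.
samples = [world() for _ in range(10000)]
print("before control:", round(correlation(*zip(*samples)), 2))

# After control: agents pump the proxy directly, swamping the hidden
# factor, and the observed regularity falls apart.
samples = [world(effort_on_proxy=random.gauss(0, 3)) for _ in range(10000)]
print("after control: ", round(correlation(*zip(*samples)), 2))
```

Swap "proxy" for "reward signal" and "agents" for "a sufficiently powerful optimizer" and you can see why this worries people who think about AI.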

Q: How was the Singularity Summit?
A: Fun! My favorite talks were by my lifelong hero James "The Amazing" Randi, and by Dr. Lance Becker, whom I had never heard of before the summit, but who is doing kick-ass things in emergency rooms with artificial circulation plus controlled reperfusion. And by "kick-ass", I mean raising the dead: at one point Dr. Becker talked about a study which started with six legally dead people who post-treatment "had a 50% survival rate".

Q: What were the most important things you learnt over your month at SI?
A: (1) AIXI is the real deal, a fundamental advance in AI theory on the order of, say, minimum description length (see the sketch below). For some of you that was already obvious, but as a mostly AI outsider I had not previously recognized its importance. (2) Climate change models are extremely important, but not because of global warming. Even extreme global warming would, with high probability, take over a century to kill all of us, but no one right now knows the odds of a small-scale nuclear exchange (between India and Pakistan, say) making agriculture fail worldwide for over a decade. Or rather, current climate change models say that it is a likely possibility, but I would really feel more comfortable with that conclusion if the models were open source and not written in spaghetti-code Fortran.
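On point (1), for readers who have not run into it: AIXI is Hutter's universal agent. It picks actions by an expectimax over every computable environment, weighting each candidate environment program q by 2^(-length of q), which is exactly where the minimum-description-length flavor comes in. As I remember it, the defining equation is roughly:

$$a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big[ r_k + \cdots + r_m \big] \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}$$

where $U$ is a universal Turing machine, the $a$'s, $o$'s, and $r$'s are actions, observations, and rewards out to horizon $m$, and $\ell(q)$ is the length of program $q$. It is uncomputable as stated, which is why it is a theoretical benchmark rather than a design you can run.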

Q: What was the worst part of your month?
A: Sourdough bread. That stuff is a blight on the entire greater SF area.