In Defense of the NCIS Two-People-One-Keyboard Scene

(Here is the same clip in HD, but that 2010 YouTube vibe is part of the fun)

This clip is in the running for most-mocked scene of all time, but I think it’s good, actually.

First, let’s get some things out of the way:

  1. The writers of NCIS know how keyboards work. (They probably used keyboards to write this scene, even.)
  2. The director of this episode knows how keyboards work.
  3. I’m going to go out on a limb and say >90% of this show’s audience knows how keyboards work.

This scene was not written this way because the writers think their audience is dumb and doesn’t know how a keyboard works. It was written this way because of the Rule of Cool.

The Rule of Cool states: an audience’s willingness to suspend disbelief is proportional to how cool a scene is.


Ideas Too Short for Essays, Part 2

Nearly nine years after part 1, I bring three new short ideas.

  1. Keep in mind that scientific fraud happens sometimes
  2. Clichés are good, actually
  3. You must put unnecessary decoration on your useful items, or else you’re a weirdo

Upside Volatility Is Bad

Investors often say that standard deviation is a bad way to measure investment risk because it penalizes upside volatility as well as downside. I agree that standard deviation isn’t a great measure of risk, but that’s not the reason. A good risk measure should penalize upside volatility, because upside volatility is bad.
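For concreteness, here is the distinction those investors are drawing, in a toy Python sketch (made-up monthly returns and standard textbook formulas; my illustration, not from the post). Ordinary standard deviation penalizes dispersion in both directions, while downside deviation counts only returns below a target:

```python
import math

def std_dev(returns):
    """Ordinary (population) standard deviation: penalizes both directions."""
    mean = sum(returns) / len(returns)
    return math.sqrt(sum((r - mean) ** 2 for r in returns) / len(returns))

def downside_dev(returns, target=0.0):
    """Downside deviation: penalizes only returns below the target."""
    shortfalls = [min(r - target, 0.0) ** 2 for r in returns]
    return math.sqrt(sum(shortfalls) / len(returns))

# Two hypothetical assets: identical losing months, but "spiky" has big upside months.
steady = [0.01, 0.01, -0.02, 0.01, 0.01, -0.02]
spiky = [0.01, 0.10, -0.02, 0.01, 0.10, -0.02]

print(std_dev(steady), std_dev(spiky))            # spiky looks far riskier
print(downside_dev(steady), downside_dev(spiky))  # identical by this measure
```

The "spiky" series scores much worse on standard deviation purely because of its upside months, while the two series are indistinguishable by downside deviation; the post's claim is that the standard-deviation verdict is the right one.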


Things I Learned from College

(that I still remember a decade later)

Evolution on Earth

Fact 1: When foxes are bred to be more docile, their ears become floppy like dogs’ ears instead of pointy like wild foxes’.

Fact 2: Crows can learn to use a short stick to fetch a longer stick to fetch food.

The basic setup of the experiment is: There’s a box with some food at the bottom. The crow can’t reach the food. The crow has a short stick, but the stick isn’t long enough to reach the food, either.

There’s also a second box containing a long stick. The short stick is long enough to reach the long stick. Most crows figure out that they can use the short stick to fetch the long stick and then use the long stick to fetch the food.

If you add a third layer of indirection, where they have to use a short stick to fetch a medium stick, the medium stick to fetch a long stick, and the long stick to fetch food, most crows don’t figure it out, but a few of them do.

I wrote a rap song about this experiment; it used to be on YouTube, but I think it’s gone now.


Cash Back

When I was 18, my dad took me to the bank to get my first credit card. I had a conversation with the bank teller that went something like this:

Bank teller: This card gives 1% cash back.

Me: What does that mean?

Bank teller: It means when you spend money with the card, you get 1% cash back.

Me: But what does cash back mean, though?

Bank teller: It means you get cash back.

Me: …

The bank teller communicated poorly, and I also did a poor job of articulating which part I was confused about. If I were that bank teller, I would have explained it to my 18-year-old self quite differently.


The Next-Gen LLM Might Pose an Existential Threat

I’m pretty sure that the next generation of LLMs will be safe. But the risk is still high enough to make me uncomfortable.

How sure are we that scaling laws are correct? Researchers have drawn curves predicting how AI capabilities scale with the compute and data that go into training. If you extrapolate those curves, it looks like the next level of LLMs won’t be wildly more powerful than the current level. But maybe there’s a weird bump in the curve between GPT-5 and GPT-6 (or between Claude 4.5 and Claude 5), and LLMs suddenly become much more capable in a way that scaling laws didn’t predict. I don’t think we can be more than 99.9% confident that there’s not.
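The kind of extrapolation described above can be sketched with toy numbers (illustrative losses I made up, not real measurements from any lab): fit a power law, loss = a·C^(−b), to observed (compute, loss) pairs in log-log space, then extrapolate one order of magnitude past the data. The worry in the paragraph is precisely that reality might not follow the fitted curve.

```python
import math

# Hypothetical (training compute, loss) observations -- made-up numbers.
observations = [(1e21, 2.8), (1e22, 2.4), (1e23, 2.06)]

# Fit loss = a * C**(-b) by least squares in log-log space.
xs = [math.log(c) for c, _ in observations]
ys = [math.log(loss) for _, loss in observations]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
b = -slope                     # power-law exponent
a = math.exp(my - slope * mx)  # scale constant

def predicted_loss(compute):
    return a * compute ** (-b)

# Extrapolate one order of magnitude beyond the data.
print(round(predicted_loss(1e24), 3))  # ~1.77 with these toy numbers
```

A smooth fit like this is exactly what "no weird bump" means; a capability discontinuity would show up as the real curve leaving this line.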

How sure are we that current-gen LLMs aren’t sandbagging (that is, deliberately hiding their true skill level)? I think they’re still dumb enough that their sandbagging can be caught, and indeed they have been caught sandbagging on some tests. I don’t think LLMs are hiding their true capabilities in general, and our understanding of AI capabilities is probably pretty accurate. But I don’t think we can be more than 99.9% confident about that.

How sure are we that the extrapolated capability level of the next-gen LLM isn’t enough to take over the world? It probably isn’t, but we don’t really know what level of capability is required for something like that. I don’t think we can be more than 99.9% confident.

Perhaps we can be >99.99% confident that the extrapolated capability of the next-gen LLM is still not as smart as the smartest human. But an LLM has certain advantages over humans: it can work faster (at least on many sorts of tasks), it can copy itself, and it can operate computers in ways that humans can’t.

Alternatively, GPT-6/Claude 5 might not be able to take over the world, but it might be smart enough to recursively self-improve, and that might happen too quickly for us to do anything about it.

How sure are we that we aren’t wrong about something else? I thought of three ways we could be disastrously wrong:

  1. We could be wrong about scaling laws;
  2. We could be wrong that LLMs aren’t sandbagging;
  3. We could be wrong about what capabilities are required for AI to take over.

But we could be wrong about some entirely different thing that I didn’t even think of. I’m not more than 99.9% confident that my list is comprehensive.

On the whole, I don’t think we can say there’s less than a 0.4% chance that the next-gen LLM forces us down a path that inevitably ends in everyone dying.
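The 0.4% figure is roughly what you get by combining the four ≤99.9% confidences above as if they were independent (treating them as independent is a simplification on my part, not a claim from the post):

```python
# Four ways to be wrong, each with at most 99.9% confidence that it
# doesn't apply; assume independence for a back-of-envelope combination.
p_each_fine = 0.999
p_any_wrong = 1 - p_each_fine ** 4
print(f"{p_any_wrong:.2%}")  # about 0.40%
```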


Mechanisms Rule Hypotheses Out, But Not In

If there is no plausible mechanism by which a scientific hypothesis could be true, then it’s almost certainly false.

But if there is a plausible mechanism for a hypothesis, then that only provides weak evidence that it’s true.
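One toy way to formalize this asymmetry, in Bayesian terms (my framing and my numbers, not the author's): a hypothesis with no possible mechanism is crushed by the prior, while "a plausible mechanism exists" carries only a modest likelihood ratio, because plausible mechanisms are easy to propose whether or not a hypothesis is true.

```python
# Toy Bayesian update: how much should "someone proposed a plausible
# mechanism" move your credence in a hypothesis you started out doubting?
prior_odds = 1 / 99       # start at ~1% credence

# Plausible mechanisms get proposed for true and false hypotheses alike,
# so the likelihood ratio is modest (this number is illustrative).
likelihood_ratio = 2.0    # P(mechanism proposed | true) / P(... | false)

posterior_odds = prior_odds * likelihood_ratio
posterior_prob = posterior_odds / (1 + posterior_odds)
print(round(posterior_prob, 3))  # ~0.02: still small
```

Even a likelihood ratio of 2 only moves the hypothesis from about 1% to about 2%: weak evidence, which is the section's point.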

An example of the former:

Astrology teaches that the positions of planets in the sky when you’re born can affect your life trajectory. If that were true, it would contradict well-established facts in physics and astronomy. Nobody has ever observed a physical mechanism by which astrology could be true.

An example of the latter:

A 2023 study found an association between autism and diet soda consumption during pregnancy. The authors’ proposed mechanism is that aspartame (an artificial sweetener found in diet soda) metabolizes into aspartic acid, which has been shown to cause neurological problems in mice. Nonetheless, even though there is a proposed mechanism, I don’t really care and I’m pretty sure diet soda doesn’t cause autism. (For a more thorough take on the diet soda <> autism thing, I will refer you to Grug, who is much smarter than me.)

Why?

