This was my keynote talk for FOCI 2024.1.
Censorship and circumvention are often characterized as an “arms race” or “cat-and-mouse game”.
These terms have some basis in fact, but they are overly reductive: not false, just incomplete. They risk limiting how we practitioners model problems, and how others think of the research field.
Let’s thoughtfully examine what assumptions we bring into research. Is the facet we’re studying well-characterized as an arms race? Maybe it is—or maybe it should get a more precise description.
Early-ish examples of cat-and-mouse games / arms races:
Announce a mirror, it gets blocked, announce a new one.
Tor TLSHistory (predating pluggable transports)
This will read like a comedy of errors; please don’t judge our missteps too harshly.
Our unusual cipher list and our funny-looking certs made Tor pretty easy to profile. So we switched to an approach where we would begin by sending a list of ciphers hacked to match the list sent by Firefox… (this kind of fingerprint mimicry is sketched in code after this history).
We began generating bogus domain names and sticking them in the commonName part of the certificates.
Iran blocked Tor based on our choice of Diffie–Hellman parameters. We switched to copying the fixed DH parameters from Apache’s mod_ssl…
When we started getting detected and blocked based on our use of renegotiation, we switched to an improved handshake…
We know we’re in an arms race.
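As an aside on the history above, two of those tricks are easy to sketch in present-day terms. First, mimicking a mainstream browser’s TLS fingerprint: a minimal sketch using the uTLS library, which is the modern analogue of what Tor did by patching its OpenSSL cipher list (it is not what Tor did at the time, and example.com is only a placeholder host):

```go
// Sketch: open a TLS connection whose ClientHello imitates Firefox's
// (cipher suites, extensions, and their ordering), using uTLS.
// Illustrative only; Tor at the time instead hacked its OpenSSL cipher list.
package main

import (
	"fmt"
	"net"

	utls "github.com/refraction-networking/utls"
)

func main() {
	// Plain TCP connection to the server (placeholder host).
	conn, err := net.Dial("tcp", "example.com:443")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Wrap it in a uTLS client that presents a Firefox-like ClientHello.
	uconn := utls.UClient(conn, &utls.Config{ServerName: "example.com"}, utls.HelloFirefox_Auto)
	if err := uconn.Handshake(); err != nil {
		panic(err)
	}
	fmt.Println("negotiated TLS version:", uconn.ConnectionState().Version)
}
```

Second, the throwaway commonName trick: generate a name that looks like an ordinary web host rather than a fixed, fingerprintable string. A rough illustration of the idea (the exact format Tor used is not reproduced here):

```go
// Sketch: generate a throwaway hostname that looks like an ordinary
// web host, suitable for dropping into a certificate's commonName.
// Only the shape of the idea; not Tor's actual generation code.
package main

import (
	"crypto/rand"
	"encoding/base32"
	"fmt"
	"strings"
)

// randomHostname returns something like "www.<random label>.com".
func randomHostname() string {
	// 8 random bytes, base32-encoded so the label is all letters and digits.
	b := make([]byte, 8)
	if _, err := rand.Read(b); err != nil {
		panic(err)
	}
	label := strings.ToLower(strings.TrimRight(base32.StdEncoding.EncodeToString(b), "="))
	return "www." + label + ".com"
}

func main() {
	fmt.Println(randomHostname())
}
```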
The cipher should not require secrecy, and it should not be a problem if it falls into enemy hands. (Kerckhoffs’s principle)
The enemy knows the system being used. (Shannon’s maxim)
Who is the cat and who is the mouse?
Alternatives to arms race modeling: costs and tradeoffs
Research specifically on arms race aspects remains legitimate
Jumping out of the system