
\documentclass{llncs}
\usepackage{url}
\usepackage{amsmath}
\usepackage{epsfig}
%\setlength{\textwidth}{5.9in}
%\setlength{\textheight}{8.4in}
%\setlength{\topmargin}{.5cm}
%\setlength{\oddsidemargin}{1cm}
%\setlength{\evensidemargin}{1cm}
\newenvironment{tightlist}{\begin{list}{$\bullet$}{
\setlength{\itemsep}{0mm}
\setlength{\parsep}{0mm}
% \setlength{\labelsep}{0mm}
% \setlength{\labelwidth}{0mm}
% \setlength{\topsep}{0mm}
}}{\end{list}}
\begin{document}
\title{Design of a blocking-resistant anonymity system}
\author{}
\maketitle
\pagestyle{plain}
\begin{abstract}
Websites around the world are increasingly being blocked by
government-level firewalls. Many people use anonymizing networks like
Tor to contact sites without letting an attacker trace their activities,
and as an added benefit they are no longer affected by local censorship.
But if the attacker simply denies access to the Tor network itself,
blocked users can no longer benefit from the security Tor offers.
Here we describe a design that uses the current Tor network as a
building block to provide an anonymizing network that resists blocking
by government-level attackers.
\end{abstract}
\section{Introduction and Goals}
Websites like Wikipedia and Blogspot are increasingly being blocked by
government-level firewalls around the world.
China is the third largest user base for Tor clients~\cite{geoip-tor},
so many people already want such a tool; yet the current Tor design is
easy to block (by blocking the directory authorities, by blocking all
the server IP addresses, or by filtering on the signature of the Tor
TLS handshake).
Since we already have a deployed overlay network, we are most of the
way there in terms of building a blocking-resistant tool.
Moreover, adding more classes of users and goals to the Tor network
improves the anonymity of all Tor users~\cite{econymics,tor-weis06}.
\subsection{A single system that works for multiple blocked domains}
We want this to work for people in China, people in Iran, people in
Thailand, people in firewalled corporate networks, and so on. The
censor will be at a different stage of the arms race in each place,
and the list of blocked addresses will likely differ in each
location too.
\section{Adversary assumptions}
\label{sec:adversary}
Censors currently mount three main network attacks:
\begin{tightlist}
\item Block destinations by string matches in TCP packets.
\item Block destinations by IP address.
\item Intercept DNS requests.
\end{tightlist}
We assume the network firewall has very limited CPU per
user~\cite{clayton-pet2006}.
We assume that readers of blocked content will not be punished much
(relative to publishers).
We assume that while various adversaries can coordinate and share
notes, there will be a significant time lag between one attacker learning
how to overcome a facet of our design and other attackers picking it up.
(Corollary: in the early stages of deployment, the insider threat isn't
as high a risk.)
We assume that our users have control over their hardware and software -- no
spyware, no cameras watching their screens, and so on.
We assume that the user will fetch a genuine version of Tor, rather than
one supplied by the adversary; see Section~\ref{subsec:trust-chain} for
discussion of helping the user confirm that he has a genuine version.
\section{Related schemes}
\subsection{Public single-hop proxies}
Anonymizer and similar services.
\subsection{Personal single-hop proxies}
Psiphon, Circumventor, CGIProxy.
These are simpler to deploy, and might not require client-side software.
\subsection{Splitting sensitive strings across multiple TCP packets;
ignoring RSTs}
\subsection{Steganography}
Infranet.
\subsection{Internal caching networks}
Freenet is deployed inside China and caches outside content.
\subsection{Skype}
Port-hopping and encryption; voice communications are not so susceptible
to keystroke loggers (even graphical ones).
\section{Components of the current Tor design}
Anonymizing networks such as
Tor~\cite{tor-design}
aim to hide not only what is being said, but also who is
communicating with whom, which users are using which websites, and so on.
These systems have a broad range of users, including ordinary citizens
who want to avoid being profiled for targeted advertisements, corporations
who don't want to reveal information to their competitors, and law
enforcement and government intelligence agencies who need
to do operations on the Internet without being noticed.
Tor provides three security properties:
\begin{tightlist}
\item 1. A local observer can't learn, or influence, your destination.
\item 2. No single piece of the infrastructure can link you to your
destination.
\item 3. The destination, or somebody watching the destination,
can't learn your location.
\end{tightlist}
We care most clearly about property 1. But as the arms race
progresses, property 2 will become important -- so the blocking adversary
can't learn user--destination pairs just by volunteering a relay. It's
less obvious that property 3 is important, but consider websites and
services that are pressured into treating clients from certain network
locations differently.
Other benefits:
\begin{tightlist}
\item Tor separates the role of relay from the role of exit node.
\item Tor (re)builds circuits automatically in the background, based on
whichever paths work.
\end{tightlist}
\subsection{Tor circuits}
Tor clients can build arbitrary overlay paths given a set of server
descriptors~\cite{blossom}.
\subsection{Tor directory servers}
The directory servers are central trusted locations that keep track of
which Tor servers are available and usable.
(They use threshold trust, so this is not quite so bad. See
Section~\ref{subsec:trust-chain} for details.)
\subsection{Tor user base}
Hundreds of thousands of users from around the world, some with publicly
reachable IP addresses.
\section{Why hasn't Tor been blocked yet?}
Hard to say. Do people think it's hard to block? Are there not enough
users, or not enough ordinary users? Has nobody been embarrassed by it
yet? Does it serve as a ``steam valve''?
\section{Components of a blocking-resistant design}
Here we describe what we need to add to the current Tor design.
\subsection{Bridge relays}
Some Tor users on the free side of the network will opt to become
\emph{bridge relays}. They will relay a small amount of bandwidth into
the main Tor network, so they won't need to allow
exits.
They sign up with the bridge directory authorities (described below),
and they use Tor to publish their descriptors, so an attacker observing
the bridge directory authority's network can't enumerate bridges.
We still need to outline instructions for a Tor configuration that will
publish to an alternate directory authority, and for controller commands
that will do this cleanly.
\subsection{The bridge directory authority (BDA)}
The bridge directory authorities aggregate server descriptors just like
the main authorities, and answer all queries as usual, except they don't
publish full directories or network statuses.
So once you know a bridge relay's key, you can fetch the most recent
server descriptor for it.
Problem 1: we need to figure out how to fetch some server statuses from
the BDA without fetching all statuses -- presumably via a new URL.
\subsection{Putting them together}
If a blocked user has a server descriptor for one working bridge relay,
then he can use it to make secure connections to the BDA to update his
knowledge about other bridge
relays, and he can make secure connections to the main Tor network
and directory servers to build circuits and connect to the rest of
the Internet.
So now we've reduced the problem from how to circumvent the firewall
for all transactions (and how to know that the pages you get have not
been modified by the local attacker) to how to learn about a working
bridge relay.
The following section describes ways to bootstrap knowledge of your first
bridge relay, and ways to maintain connectivity once you know a few
bridge relays. (See Section~\ref{later} for a discussion of exactly
what information is sufficient to characterize a bridge relay.)
\section{Discovering and maintaining working bridge relays}
Most government firewalls are not perfect. They allow connections to
Google cache or some open proxy servers, or they let file-sharing,
Skype, or World of Warcraft connections through.
Users who can't use any of these techniques will hopefully know
a friend who can -- for example, perhaps the friend already knows some
bridge relay addresses.
(If they can't get around the firewall at all, then we can't help them --
they should go meet more people.)
Thus they can reach the BDA. From here we either assume a social
network or other mechanism for learning IP:dirport pairs or key
fingerprints as above, or we assume an account server that allows us
to limit the number of new bridge relays an external attacker can
discover.
This is going to be an arms race, so we need a bag of tricks. It is
hard to say which ones will work, so we shouldn't spend them all at
once.
\subsection{Discovery based on social networks}
A token that can be exchanged at the BDA (assuming you
can reach it) for a new IP:dirport or server descriptor.
The account server.
Users can establish reputations, perhaps based on social network
connectivity, perhaps based on not getting their bridge relays blocked.
(A lesson from designing reputation systems~\cite{p2p-econ}: it is easy
to reward good behavior, but hard to punish bad behavior.)
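To make the token idea concrete, here is a minimal sketch in Python of
single-use tokens that the BDA issues for distribution through the
social network and later redeems for a bridge address. The interface
and the handout policy are entirely hypothetical; they are not part of
the design above.

```python
import hashlib
import os


class BridgeDirectoryAuthority:
    """Toy sketch of token-based bridge handout (hypothetical API)."""

    def __init__(self, bridges):
        self.bridges = list(bridges)  # known bridge addresses
        self.valid = set()            # hashes of unredeemed tokens

    def issue_token(self):
        """Mint a fresh token; only its hash is stored server-side."""
        token = os.urandom(16).hex()
        self.valid.add(hashlib.sha256(token.encode()).hexdigest())
        return token  # handed out through the social network

    def redeem(self, token):
        """Exchange a valid token for a bridge address, exactly once."""
        h = hashlib.sha256(token.encode()).hexdigest()
        if h not in self.valid:
            return None       # unknown or already-spent token
        self.valid.remove(h)  # single use
        return self.bridges[0]  # a real BDA would pick per policy


bda = BridgeDirectoryAuthority(["10.0.0.1:9030"])
t = bda.issue_token()
assert bda.redeem(t) == "10.0.0.1:9030"
assert bda.redeem(t) is None  # tokens are single-use
```

Storing only token hashes means a compromise of the BDA's state does
not leak the unredeemed tokens circulating in the social network.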
\subsection{How to give bridge addresses out}
Hold a fraction in reserve, in case our currently deployed tricks
all fail at once, so we can move to new approaches quickly.
(Bridges that sign up and don't get used yet will be sad; but this
is a transient problem -- if bridges are on by default, nobody will
mind not being used.)
Perhaps each bridge should be known by a single bridge directory
authority. This makes it easier to trace which users have learned about
it, and so easier to blame or reward. It also makes things more brittle,
since loss of that authority means its bridges aren't advertised until
they switch, and means its bridge users are sad too.
(We need a slick hash algorithm that will map our identity key to a
bridge authority, in a way that's sticky even when we add bridge
directory authorities, but isn't sticky when our authority goes
away. Does this exist?)
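One existing construction with exactly this stickiness property is
rendezvous (highest-random-weight) hashing: each bridge goes to the
authority that maximizes a hash of the (bridge, authority) pair, so
adding an authority moves only the bridges that now prefer the
newcomer, while removing an authority reassigns only its own bridges.
A minimal sketch (names are illustrative):

```python
import hashlib


def assign_authority(bridge_id, authorities):
    """Rendezvous hashing: pick the authority whose combined hash
    with this bridge identity scores highest."""
    return max(
        authorities,
        key=lambda auth: hashlib.sha256(
            f"{bridge_id}|{auth}".encode()).digest())


auths = ["authA", "authB", "authC"]
home = assign_authority("bridge-identity-key", auths)
# If some *other* authority disappears, this bridge keeps its home
# (the maximum among the remaining scores is unchanged):
for gone in auths:
    if gone != home:
        remaining = [a for a in auths if a != gone]
        assert assign_authority("bridge-identity-key", remaining) == home
```

The same argument shows that adding a new authority relocates a bridge
only if the newcomer outscores the bridge's current home.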
Divide bridges into buckets: you can learn only from the bucket your
IP address maps to.
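A sketch of the bucket mapping, under our added assumption (not
specified above) that we key the bucket on the requester's /24, so an
adversary with many addresses in one subnet still sees only one bucket:

```python
import hashlib
import ipaddress


def bucket_for(ip, num_buckets):
    """Map a requester's IP to a stable bucket index, keyed on its
    /24 so nearby addresses can't each harvest a different bucket."""
    prefix = ipaddress.ip_network(f"{ip}/24", strict=False).network_address
    digest = hashlib.sha256(str(prefix).encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_buckets


# Two requesters in the same /24 land in the same bucket:
assert bucket_for("203.0.113.5", 8) == bucket_for("203.0.113.200", 8)
```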
\section{Security improvements}
\subsection{Minimum info required to describe a bridge}
There's another possible attack here: since we only learn an IP address
and port, a local attacker could intercept our directory request and
give us some other server descriptor. But notice that we don't need
strong authentication for the bridge relay. Since the Tor client will
ship with trusted keys for the bridge directory authority and the Tor
network directory authorities, the user can decide if the bridge relays
are lying to him or not.
Once the Tor client has fetched the server descriptor at least once,
it should remember the identity key fingerprint for that bridge relay.
If the bridge relay moves to a new IP address, the client can then
use the bridge directory authority to look up a fresh server descriptor
using this fingerprint.
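A sketch of this pinning logic; the BDA lookup interface here is
hypothetical, standing in for whatever query mechanism the BDA ends up
exposing:

```python
class FakeBDA:
    """Stand-in for the bridge directory authority's lookup interface."""

    def __init__(self, descriptors):
        self.descriptors = descriptors

    def by_address(self, addr):
        return next(d for d in self.descriptors if d["addr"] == addr)

    def by_fingerprint(self, fp):
        return next(d for d in self.descriptors if d["fingerprint"] == fp)


class BridgeClient:
    def __init__(self, bda):
        self.bda = bda
        self.pinned = {}  # fingerprint -> last known descriptor

    def first_contact(self, addr):
        """Initial fetch by IP:port; pin the identity fingerprint."""
        desc = self.bda.by_address(addr)
        self.pinned[desc["fingerprint"]] = desc
        return desc

    def refresh(self, fingerprint):
        """Later, re-find the same bridge even if its address changed."""
        desc = self.bda.by_fingerprint(fingerprint)
        self.pinned[fingerprint] = desc
        return desc


bda = FakeBDA([{"fingerprint": "AB12", "addr": "192.0.2.7:443"}])
client = BridgeClient(bda)
fp = client.first_contact("192.0.2.7:443")["fingerprint"]
bda.descriptors[0]["addr"] = "198.51.100.9:443"  # bridge moved
assert client.refresh(fp)["addr"] == "198.51.100.9:443"
```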
\subsubsection{Scanning resistance}
If it's trivial to verify that we're a bridge, and we run on a predictable
port, then it's conceivable our attacker would scan the whole Internet
looking for bridges. It would be nice to slow down this attack. It would
be even nicer to make it hard to learn whether we're a bridge without
first knowing some secret.
% XXX this para is in the wrong section
We could provide a password to the bridge user, and he provides a nonced
hash of it when he connects. We'd need to give him an identity key for
the bridge too, and wait to present the password until we've completed
the TLS handshake; otherwise the adversary can pretend to be the bridge
and MITM him to learn the password.
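One concrete (hypothetical) instantiation of the ``nonced hash'': an
HMAC over a bridge-chosen nonce, sent only after the TLS handshake has
authenticated the bridge's identity key. The message framing and key
distribution are our assumptions, not part of the design above.

```python
import hashlib
import hmac
import os


def password_response(password, nonce):
    """Prove knowledge of the shared bridge password without revealing
    it on the wire. Must be sent only inside the TLS connection that
    authenticated the bridge, or a fake bridge could harvest responses."""
    return hmac.new(password, nonce, hashlib.sha256).hexdigest()


password = b"per-bridge-secret"  # distributed along with the address
nonce = os.urandom(16)           # fresh per connection, bridge-chosen
resp = password_response(password, nonce)
# The bridge recomputes and compares in constant time:
assert hmac.compare_digest(resp, password_response(password, nonce))
assert resp != password_response(b"wrong-guess", nonce)
```

The fresh nonce stops an eavesdropping scanner from replaying an old
response at the same bridge.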
\subsection{Hiding Tor's network signatures}
The simplest format for communicating information about a bridge relay
is as an IP address and port for its directory cache. From there, the
user can ask the directory cache for an up-to-date copy of that bridge
relay's server descriptor, including its current circuit keys, the port
it uses for Tor connections, and so on.
However, connecting directly to the directory cache involves a plaintext
HTTP request, so the censor could create a firewall signature for the
request and/or its response, thus preventing these connections. Therefore
we've modified the Tor protocol so that users can connect to the directory
cache via the main Tor port -- they establish a TLS connection with
the bridge as normal, and then send a Tor ``begindir'' relay cell to
establish a connection to its directory cache.
Predictable SSL ports:
we should encourage most servers to listen on port 443, which is
where SSL normally listens.
Is that all it will take, or should we set things up so some fraction
of them pick random ports? I can see that both helping and hurting.
Predictable TLS handshakes:
right now Tor has some predictable strings in its TLS handshakes.
These can be removed; but should they be replaced with nothing, or
should we try to emulate some popular browser? In any case our
protocol demands a pair of certs on both sides -- how much will this
make Tor handshakes stand out?
\subsection{Anonymity issues from becoming a bridge relay}
You can actually harm your anonymity by relaying traffic in Tor. This is
the same issue that ordinary Tor servers face. On the other hand, it
provides improved anonymity against some attacks too:
\begin{verbatim}
http://wiki.noreply.org/noreply/TheOnionRouter/TorFAQ#ServerAnonymity
\end{verbatim}
\section{Performance improvements}
\subsection{Fetch server descriptors just-in-time}
We should encourage most places to do this, so blocked
users don't stand out.
\section{Other issues}
\subsection{How many bridge relays should you know about?}
If they're ordinary Tor users on cable modem or DSL, many of them will
disappear and/or move periodically. How many bridge relays should a
blocked user know
about before he's likely to have at least one reachable at any given point?
How do we factor in a parameter for the ``speed at which his bridges get
discovered and blocked''?
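As a first cut, if we (unrealistically) model each bridge as
independently reachable with probability $p$, a user who knows $n$
bridges has at least one reachable with probability $1-(1-p)^n$.
Inverting gives the number of bridges needed for a target availability:

```python
import math


def bridges_needed(p_up, target):
    """Smallest n with 1 - (1 - p_up)**n >= target, under the (strong)
    assumption that bridge reachability is independent and identical."""
    return math.ceil(math.log(1 - target) / math.log(1 - p_up))


# E.g., if each bridge is reachable 70% of the time and we want a 99%
# chance of having at least one:
n = bridges_needed(0.70, 0.99)
assert 1 - (1 - 0.70) ** n >= 0.99        # n bridges suffice
assert 1 - (1 - 0.70) ** (n - 1) < 0.99   # n - 1 do not
```

The discovery-and-blocking speed in the question above would shrink the
effective $p$ over time, pushing $n$ upward.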
The related question is: if the bridge relays change IP addresses
periodically, how often does the bridge user need to ``check in'' in
order to keep from being cut out of the loop?
\subsection{How do we know if a bridge relay has been blocked?}
We need some mechanism for testing reachability from inside the
blocked area.
The easiest answer is for certain users inside the area to sign up as
testing relays; then we can route through them and see if it works.
The first problem is that different network areas block different net
masks, and it will likely be hard to know which users are in which areas.
So if a bridge relay isn't reachable, is that because of a network block
somewhere, because of a problem at the bridge relay, or just a temporary
outage?
The second problem is that if we pick random users to test random relays,
the adversary can sign up users on the inside and enumerate the relays
we test. But it seems dangerous to just let people come forward and
declare that things are blocked for them, since they could be tricking
us. (This matters even more if our reputation system above relies on
whether things get blocked to punish or reward.)
Another answer is not to measure directly, but rather to let the bridges
report whether they're being used. If they periodically report to their
bridge directory authority how much use they're seeing, the authority
can make smart decisions from there.
If they install a geoip database, they can periodically report to their
bridge directory authority which countries they're seeing use from. This
might help us to track which countries are making use of Ramp, and can
also let us learn about new steps the adversary has taken in the arms
race. (If the bridges don't want to install a whole geoip subsystem, they
can report samples of the /24 networks of their users, and the authorities
can do the geoip work. This tradeoff has clear downsides though.)
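A sketch of the /24 sampling variant, in which bridges report only
network prefixes and counts, and the authority does the geoip lookups
centrally:

```python
import ipaddress
from collections import Counter


def prefix_report(client_ips):
    """Aggregate client addresses to /24 prefixes before reporting,
    so the bridge never ships full client IPs to the authority."""
    return Counter(
        str(ipaddress.ip_network(f"{ip}/24", strict=False))
        for ip in client_ips)


report = prefix_report(["198.51.100.7", "198.51.100.200", "203.0.113.9"])
assert report["198.51.100.0/24"] == 2
assert report["203.0.113.0/24"] == 1
```

Even /24 prefixes leak some information about a bridge's users, which
is one of the clear downsides mentioned above.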
Worry: the adversary signs up a bunch of already-blocked bridges. If
we're stingy about giving out bridges, users in that country won't get
useful ones. (Worse, will we blame the users when the bridges report
they're not being used?)
Worry: the adversary could choose not to block bridges but just record
connections to them. So be it, I guess.
\subsection{Cable modem users don't provide important websites}
...so our adversary could just block all DSL and cable modem networks,
and for the most part only our bridge relays would be affected.
The first answer is to aim to get volunteers both from traditionally
``consumer'' networks and also from traditionally ``producer'' networks.
The second answer (not so good) would be to encourage more use of consumer
networks for popular and useful websites.
Another attack: China pressures Verizon to discourage its users from
running bridges.
\subsection{The trust chain}
\label{subsec:trust-chain}
Tor's ``public key infrastructure'' provides a chain of trust to
let users verify that they're actually talking to the right servers.
There are four pieces to this trust chain.
Firstly, when Tor clients are establishing circuits, at each step
they demand that the next Tor server in the path prove knowledge of
its private key~\cite{tor-design}. This step prevents the first node
in the path from just spoofing the rest of the path. Secondly, the
Tor directory authorities provide a signed list of servers along with
their public keys --- so unless the adversary can control a threshold
of directory authorities, he can't trick the Tor client into using other
Tor servers. Thirdly, the locations and keys of the directory authorities
are, in turn, hard-coded in the Tor source code --- so as long as the user
got a genuine version of Tor, he can know that he is using the genuine
Tor network. And lastly, the source code and other packages are signed
with the GPG keys of the Tor developers, so users can confirm that they
did in fact download a genuine version of Tor.
But how can a user in an oppressed country know that he has the correct
key fingerprints for the developers? As with other security systems, it
ultimately comes down to human interaction. The keys are signed by dozens
of people around the world, and we have to hope that our users have met
enough people in the PGP web of trust~\cite{pgp-wot} that they can learn
the correct keys. For users who aren't connected to the global security
community, though, this question remains a critical weakness.
\subsection{Bridge users without Tor clients}
Bridges could always open their SOCKS proxies to such users. This is
bad, though, firstly because the bridges learn their users' destinations,
and secondly because we've learned that open SOCKS proxies tend to
attract abusive users who have no idea they're using Tor.
\section{Future designs}
\subsection{Bridges inside the blocked network too}
Assuming actually crossing the firewall is the risky part of the
operation, can we have some bridge relays inside the blocked area too,
so that more established users can use them as relays and don't need to
communicate over the firewall directly at all? A simple example here is
to make new blocked users into internal bridges also -- so they sign up
on the BDA as part of doing their query, and we give out their addresses
rather than (or along with) the external bridge addresses. This design
is a lot trickier because it brings in the complexity of whether the
internal bridges will remain available, can maintain reachability with
the outside world, etc.
Hidden services as bridges. Hidden services as bridge directory
authorities.
Make all Tor users become bridges if they're reachable -- this needs
more work on usability first, but we're making progress.
\bibliographystyle{plain} \bibliography{tor-design}
\end{document}