
\documentclass[times,10pt,twocolumn]{article}
\usepackage{latex8}
%\usepackage{times}
\usepackage{url}
\usepackage{graphics}
\usepackage{amsmath}
\pagestyle{empty}
\renewcommand\url{\begingroup \def\UrlLeft{<}\def\UrlRight{>}\urlstyle{tt}\Url}
\newcommand\emailaddr{\begingroup \def\UrlLeft{<}\def\UrlRight{>}\urlstyle{tt}\Url}
% If an URL ends up with '%'s in it, that's because the line *in the .bib/.tex
% file* is too long, so break it there (it doesn't matter if the next line is
% indented with spaces). -DH
%\newif\ifpdf
%\ifx\pdfoutput\undefined
% \pdffalse
%\else
% \pdfoutput=1
% \pdftrue
%\fi
\newenvironment{tightlist}{\begin{list}{$\bullet$}{
\setlength{\itemsep}{0mm}
\setlength{\parsep}{0mm}
% \setlength{\labelsep}{0mm}
% \setlength{\labelwidth}{0mm}
% \setlength{\topsep}{0mm}
}}{\end{list}}
\begin{document}
%% Use dvipdfm instead. --DH
%\ifpdf
% \pdfcompresslevel=9
% \pdfpagewidth=\the\paperwidth
% \pdfpageheight=\the\paperheight
%\fi
\title{Tor: Design of a Next-Generation Onion Router}
\author{Anonymous}
%\author{Roger Dingledine \\ The Free Haven Project \\ arma@freehaven.net \and
%Nick Mathewson \\ The Free Haven Project \\ nickm@freehaven.net \and
%Paul Syverson \\ Naval Research Lab \\ syverson@itd.nrl.navy.mil}
\maketitle
\thispagestyle{empty}
\begin{abstract}
We present Tor, a connection-based low-latency anonymous communication
system. It is intended as an update and replacement for onion routing
and addresses many limitations in the original onion routing design.
Tor works in a real-world Internet environment,
requires little synchronization or coordination between nodes, and
protects against known anonymity-breaking attacks as well
as or better than other systems with similar design parameters.
\end{abstract}
%\begin{center}
%\textbf{Keywords:} anonymity, peer-to-peer, remailer, nymserver, reply block
%\end{center}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\Section{Overview}
\label{sec:intro}
Onion routing is a distributed overlay network designed to anonymize
low-latency TCP-based applications such as web browsing, secure shell,
and instant messaging. Users choose a path through the network and
build a \emph{virtual circuit}, in which each node in the path knows its
predecessor and successor, but no others. Traffic flowing down the circuit
is sent in fixed-size \emph{cells}, which are unwrapped by a symmetric key
at each node, revealing the downstream node. The original onion routing
project published several design and analysis papers
\cite{or-jsac98,or-discex00,or-ih96,or-pet02}. While there was briefly
a wide-area onion routing network,
the only long-running and publicly accessible
implementation was a fragile proof-of-concept that ran on a single
machine. Many critical design and deployment issues were never resolved,
and the design has not been updated in several years.
Here we describe Tor, a protocol for asynchronous, loosely
federated onion routers that provides the following improvements over
the old onion routing design:
\begin{tightlist}
\item \textbf{Perfect forward secrecy:} The original onion routing
design is vulnerable to a single hostile node recording traffic and later
forcing successive nodes in the circuit to decrypt it. Rather than using
onions to lay the circuits, Tor uses an incremental or \emph{telescoping}
path-building design, where the initiator negotiates session keys with
each successive hop in the circuit (an illustrative sketch of this
construction appears after this list). Onion replay detection is no longer
necessary, and the network as a whole also becomes more reliable, since
the initiator knows which hop failed and can try extending to a new node.
\item \textbf{Applications talk to the onion proxy via SOCKS:}
The original onion routing design required a separate proxy for each
supported application protocol, resulting in a lot of extra code (most
of which was never written) and also meaning that many TCP-based
applications were not supported. Tor uses the unified and standard SOCKS
\cite{socks4,socks5} interface, allowing us to support any TCP-based
program without modification.
\item \textbf{Many applications can share one circuit:} The original
onion routing design built one circuit for each request. Aside from the
performance cost of doing public key operations for every request,
typical communication patterns would also require building many
circuits, which can endanger anonymity \cite{wright03}.
%[XXX Was this
%supposed to be Wright02 or Wright03. In any case I am hesitant to cite
%that work in this context. While the point is valid in general, that
%work is predicated on assumptions that I don't think typically apply
%to onion routing (whether old or new design). -PS]
%[I had meant wright03, but I guess wright02 could work as well.
% If you don't think these attacks work on onion routing, you need to
% write up a convincing argument of this. Such an argument would
% be very worthwhile to include in this paper. -RD]
Tor multiplexes many
connections down each circuit, but still rotates the circuit periodically
to avoid too much linkability.
\item \textbf{No mixing or traffic shaping:} The original onion routing
design called for full link padding both between onion routers and between
onion proxies (that is, users) and onion routers \cite{or-jsac98}. The
later analysis paper \cite{or-pet02} suggested \emph{traffic shaping}
to provide similar protection at lower bandwidth cost, but did not go
into detail. However, recent research \cite{econymics} and deployment
experience \cite{freedom} indicate that this level of resource
use is not practical or economical; and even full link padding is still
vulnerable to active attacks \cite{defensive-dropping}.
% [XXX what is being referenced here, Dogan? -PS]
%[An upcoming FC04 paper. I'll add a cite when it's out. -RD]
\item \textbf{Leaky pipes:} Through in-band signaling within the circuit,
Tor initiators can direct traffic to nodes partway down the circuit. This
allows for long-range padding to frustrate timing attacks at the initiator
\cite{defensive-dropping}, but because circuits are used by more than
one application, it also allows traffic to exit the circuit from the
middle -- thus frustrating timing attacks based on observing exit points.
%Or something like that. hm. Tone this down maybe? Or support it. -RD
\item \textbf{Congestion control:} Earlier anonymity designs do not
address traffic bottlenecks. Unfortunately, typical approaches to load
balancing and flow control in overlay networks involve inter-node control
communication and global views of traffic. Our decentralized ack-based
congestion control maintains reasonable anonymity while allowing nodes
at the edges of the network to detect congestion or flooding attacks
and send less data until the congestion subsides.
\item \textbf{Directory servers:} Rather than attempting to flood
link-state information through the network, which can be unreliable and
open to partitioning attacks or outright deception, Tor takes a simplified
view towards distributing link-state information. Certain more trusted
onion routers also serve as directory servers; they provide signed
\emph{directories} describing all routers they know about, and which
are currently up. Users periodically download these directories via HTTP.
\item \textbf{End-to-end integrity checking:} Without integrity checking
on traffic going through the network, an onion router can change the
contents of cells as they pass by, e.g., by redirecting a connection on
the fly so it connects to a different webserver, or by tagging encrypted
traffic and looking for traffic at the network edges that has been
tagged \cite{minion-design}.
\item \textbf{Robustness to node failure:} router twins
\item \textbf{Exit policies:}
Tor provides a consistent mechanism for each node to specify and
advertise an exit policy.
\item \textbf{Rendezvous points:}
location-protected servers
\end{tightlist}
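To make the telescoping path-building design above concrete, the following
toy Python sketch shows an initiator negotiating a separate session key with
each hop in turn, using a classic unauthenticated Diffie-Hellman exchange.
The parameters, class names, and message flow here are illustrative
assumptions, not Tor's actual cell formats or APIs.
\begin{verbatim}
# Toy sketch of telescoping circuit construction: the initiator
# negotiates a separate (classic, unauthenticated) Diffie-Hellman
# session key with each hop in turn.  Toy parameters -- not secure.
import random

P = 2**64 - 59   # small public prime (illustrative only)
G = 5            # public generator

class ToyRouter:
    """Stands in for an onion router: answers one handshake per circuit."""
    def __init__(self, name):
        self.name = name
        self.keys = {}                     # circuit id -> session key

    def handle_create(self, circ_id, client_pub):
        secret = random.randrange(2, P - 2)
        self.keys[circ_id] = pow(client_pub, secret, P)
        return pow(G, secret, P)           # router's DH public value

class ToyInitiator:
    """Builds a circuit one hop at a time (telescoping)."""
    def __init__(self):
        self.hops = []                     # (router, session_key) pairs

    def extend(self, router, circ_id=1):
        secret = random.randrange(2, P - 2)
        # In Tor the request is relayed through the hops already in
        # the circuit, so the new router learns only its predecessor;
        # a failure here pinpoints exactly which hop to replace.
        router_pub = router.handle_create(circ_id, pow(G, secret, P))
        self.hops.append((router, pow(router_pub, secret, P)))

path = [ToyRouter("OR1"), ToyRouter("OR2"), ToyRouter("OR3")]
alice = ToyInitiator()
for r in path:
    alice.extend(r)
# Each hop now shares a key with the initiator: prints [True, True, True]
print([alice.hops[i][1] == path[i].keys[1] for i in range(3)])
\end{verbatim}
Because each session key is negotiated interactively and can be discarded
when the circuit is torn down, a hostile node that records traffic today
cannot force its decryption later, which is the forward secrecy property
claimed above.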
We review previous work in Section \ref{sec:background}, describe
our goals and assumptions in Section \ref{sec:assumptions},
and then address the above list of improvements in Sections
\ref{sec:design}-\ref{sec:maintaining-anonymity}. We then summarize
how our design stands up to known attacks, and conclude with a list of
open problems.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\Section{Background and threat model}
\label{sec:background}
\SubSection{Related work}
\label{sec:related-work}
Modern anonymity designs date to Chaum's Mix-Net \cite{chaum-mix} design of
1981. Chaum proposed hiding sender-recipient connections by wrapping
messages in several layers of public key cryptography, and relaying them
through a path composed of Mix servers. Mix servers in turn decrypt, delay,
and re-order messages before relaying them along the path toward their
destinations.
Subsequent relay-based anonymity designs have diverged in two
principal directions. Some have attempted to maximize anonymity at
the cost of introducing comparatively large and variable latencies,
for example, Babel \cite{babel}, Mixmaster \cite{mixmaster-spec}, and
Mixminion \cite{minion-design}. Because of this
decision, such \emph{high-latency} networks are well-suited for anonymous
email, but introduce too much lag for interactive tasks such as web browsing,
Internet chat, or SSH connections.
Tor belongs to the second category: \emph{low-latency} designs that
attempt to anonymize interactive network traffic. Because such
traffic tends to involve a relatively large number of packets, it is
difficult to prevent an attacker who can eavesdrop on both entry and exit
points from correlating packets entering the anonymity network with
packets leaving it. Although some work has been done to frustrate
these attacks, most designs protect primarily against traffic analysis
rather than traffic confirmation \cite{or-jsac98}. One can pad and
limit communication to a constant rate, or at least control the
variation in traffic shape, but this can impose prohibitive bandwidth
costs and/or performance limitations. One can also use a cascade (fixed
shared route) with a relatively fixed set of users. This assumes a
degree of agreement among users and provides an easier target for an active
attacker, since the endpoints are generally known. However, a practical
network with both of these features has been run for many years
\cite{web-mix}.
they still...
[XXX go on to explain how the design choices implied in low-latency result in
significantly different designs.]
The simplest low-latency designs are single-hop proxies such as the
Anonymizer \cite{anonymizer}, wherein a single trusted server removes
users' identifying data before relaying it. These designs are easy to
analyze, but require end-users to trust the anonymizing proxy.
More complex are distributed-trust, channel-based anonymizing systems. In
these designs, a user establishes one or more medium-term bidirectional
end-to-end tunnels to exit servers, and uses those tunnels to deliver a
number of low-latency packets to and from one or more destinations per
tunnel. Establishing tunnels is comparatively expensive and typically
requires public-key cryptography, whereas relaying packets along a tunnel is
comparatively inexpensive. Because a tunnel crosses several servers, no
single server can learn the user's communication partners.
Systems such as earlier versions of Freedom and onion routing
build the anonymous channel all at once (using an onion). Later
designs of each of these build the channel in stages, as does AnonNet
\cite{anonnet}. Among other things, this makes perfect forward
secrecy feasible.
Some systems, such as Crowds \cite{crowds-tissec}, do not rely on the
changing appearance of packets to hide the path; rather, they employ
mechanisms so that an intermediary cannot be sure when it is
receiving from or sending to the ultimate initiator. Crowds requires no
public-key encryption, but the responder and all data are
visible to all nodes on the path, so the anonymity of the connection's
initiator depends on filtering all identifying information from the
data stream. Crowds is also designed only for HTTP traffic.
Hordes \cite{hordes-jcs} is based on Crowds but also uses multicast
responses to hide the initiator. Some systems go even further,
requiring broadcast \cite{herbivore,p5}, although tradeoffs are made to
make this more practical. Both Herbivore and P5 are designed primarily
for communication between participating peers, although Herbivore
permits external connections by requesting a peer to serve as a proxy.
Allowing easy connections to nonparticipating responders or recipients
is a practical requirement for many users, e.g., to visit
nonparticipating Web sites or to send mail to nonparticipating
recipients.
Distributed-trust anonymizing systems differ in how they prevent attackers
from controlling too many servers and thus compromising too many user paths.
Some protocols rely on a centrally maintained set of well-known anonymizing
servers. Others (such as Tarzan and MorphMix) allow unknown users to run
servers, while using a limited resource (DHT space for Tarzan; IP space for
MorphMix) to prevent an attacker from owning too much of the network.
[XXX what else? What does (say) crowds do?]
All of the systems described above have varying design goals and
capabilities, but all require that communicants be intentional
participants. Some require multicast or broadcast to work, e.g.,
Herbivore.
There are also many systems which are intended for anonymous
and/or censorship-resistant file sharing. [XXX Should we list all these
or just say it's out of scope for the paper?
eternity, gnunet, freenet, freehaven, publius, tangler, taz/rewebber]
[XXX Should we add a paragraph dividing servers by all-at-once approach to
tunnel-building (OR1,Freedom1) versus piecemeal approach
(OR2,Anonnet?,Freedom2) ?]
Channel-based anonymizing systems also differ in their use of dummy traffic.
[XXX]
Finally, several systems provide low-latency anonymity without channel-based
communication. Crowds and [XXX] provide anonymity for HTTP requests; [...]
[XXX Mention error recovery?]
Web-MIXes \cite{web-mix} (also known as the Java Anon Proxy or JAP)
use a cascade architecture with relatively constant groups of users
sending and receiving at a constant rate.
Some systems, such as Crowds \cite{crowds-tissec}, do nothing against such
end-to-end confirmation, but still make it difficult for nodes along a
connection to perform the timing confirmations that would more easily
identify when the immediate predecessor is the initiator of a connection,
which in Crowds would reveal both initiator and responder to the attacker.
anonymizer
pipenet
freedom v1
freedom v2
onion routing v1
isdn-mixes
crowds
real-time mixes, web mixes
anonnet (marc rennhard's stuff)
morphmix
P5
gnunet
rewebbers
tarzan
herbivore
hordes
cebolla (?)
[XXX Close by mentioning where Tor fits.]
\SubSection{Our threat model}
\label{subsec:threat-model}
\SubSection{Known attacks against low-latency anonymity systems}
\label{subsec:known-attacks}
We discuss each of these attacks in more detail below, along with the
aspects of the Tor design that provide defense. We provide a summary
of the attacks and our defenses against them in Section \ref{sec:attacks}.
Passive attacks:
simple observation,
timing correlation,
size correlation,
option distinguishability.
Active attacks:
key compromise,
iterated subpoena,
run recipient,
run a hostile node,
compromise entire path,
selectively DoS servers,
introduce timing into messages,
directory attacks,
tagging attacks.
\Section{Design goals and assumptions}
\label{sec:assumptions}
[XXX Perhaps the threat model belongs here.]
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\Section{The Tor Design}
\label{sec:design}
\Section{Other design decisions}
\SubSection{Exit policies and abuse}
\label{subsec:exitpolicies}
\SubSection{Directory Servers}
\label{subsec:dir-servers}
\Section{Rendezvous points for location privacy}
\label{sec:rendezvous}
Rendezvous points are a building block for \emph{location-hidden services}
(that is, responder anonymity) in the Tor network. A location-hidden
service means that Bob can offer a TCP service, such as an Apache webserver,
without revealing the IP address of that service.
We provide censorship resistance for Bob by allowing him to advertise
several onion routers (nodes known as his Introduction Points, see
Section \ref{subsec:intro-point}) as his public location. Alice,
the client, chooses a node known as a Meeting Point (see Section
\ref{subsec:meeting-point}). She connects to one of Bob's introduction
points, informs him about her meeting point, and then waits for him to
connect to her meeting point. This extra level of indirection is needed
so Bob's introduction points do not serve files directly (else they open
themselves up to abuse, e.g., from serving Nazi propaganda in France). The
extra level of indirection also allows Bob to choose which requests to
respond to, and which to ignore.
We provide the necessary glue code so that Alice can view
webpages on a location-hidden webserver, and Bob can run a
location-hidden server, with minimally invasive changes (see Section
\ref{subsec:client-rendezvous}). Both Alice and Bob must run local
onion proxies (OPs) -- software that knows how to talk to the onion
routing network.
The steps of a rendezvous are as follows (an illustrative walk-through
follows the list):
\begin{tightlist}
\item Bob chooses some Introduction Points, and advertises them on a
Distributed Hash Table (DHT).
\item Bob establishes onion routing connections to each of his
Introduction Points, and waits.
\item Alice learns about Bob's service out of band (perhaps Bob gave her
a pointer, or she found it on a website). She looks up the details
of Bob's service from the DHT.
\item Alice chooses and establishes a Meeting Point (MP) for this
transaction.
\item Alice goes to one of Bob's Introduction Points, and gives it a blob
(encrypted for Bob) that tells him about herself and the Meeting
Point she chose. The Introduction Point sends the blob to Bob.
\item Bob chooses whether to ignore the blob, or to onion route to the MP.
Let us assume the latter.
\item The MP plugs together Alice and Bob. Note that the MP does not know
(or care) who Alice is, or who Bob is; and it cannot read anything they
transmit either, because they share a session key.
\item Alice sends a `begin' cell along the circuit. It makes its way
to Bob's onion proxy. Bob's onion proxy connects to Bob's webserver.
\item Data goes back and forth as usual.
\end{tightlist}
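The following Python sketch is a toy, in-memory walk-through of these steps.
Circuits, cell formats, and encryption are elided, and every class and
method name below is a hypothetical placeholder rather than part of the
actual Tor protocol.
\begin{verbatim}
# Toy in-memory walk-through of the rendezvous steps; relaying,
# circuits, and encryption are elided.  All names are placeholders.

class DHT:                                # step 1: Bob advertises here
    def __init__(self):
        self.table = {}
    def put(self, key, value):
        self.table[key] = value
    def get(self, key):
        return self.table[key]

class IntroductionPoint:                  # relays Alice's blob to Bob
    def register(self, bob):              # step 2: Bob connects, waits
        self.bob = bob
    def introduce(self, blob):            # step 5: forward the blob
        self.bob.receive_blob(blob)

class MeetingPoint:                       # plugs Alice and Bob together
    def connect_alice(self, queue):       # step 4: Alice sets up the MP
        self.alice_end = queue
    def connect_bob(self):                # step 6: Bob routes to the MP
        return self.alice_end

class Bob:
    def __init__(self, service_name, dht, intro_points):
        for ip in intro_points:
            ip.register(self)
        dht.put(service_name, intro_points)   # step 1: advertise
    def receive_blob(self, blob):
        mp = blob["meeting_point"]        # Bob decides to answer (step 6)
        mp.connect_bob().append("Bob: here is the page")   # steps 7-9

class Alice:
    def request(self, service_name, dht, meeting_point):
        intro_points = dht.get(service_name)   # step 3: look up Bob
        meeting_point.connect_alice([])        # step 4: establish MP
        blob = {"meeting_point": meeting_point}
        intro_points[0].introduce(blob)        # step 5: introduce
        return meeting_point.alice_end         # step 9: data flows

dht, mp = DHT(), MeetingPoint()
Bob("bobs-service", dht, [IntroductionPoint(), IntroductionPoint()])
print(Alice().request("bobs-service", dht, mp))
\end{verbatim}
In the real design the Meeting Point sees only encrypted cells, so unlike
this simplified sketch it cannot read the data it relays.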
Ian Goldberg developed a similar notion of rendezvous points for
low-latency anonymity systems \cite{goldberg-thesis}. His ``service tag''
is the same concept as our ``hash of service's public key''. We make it
a hash of the public key so it can be self-authenticating, and so the
client can recognize the same service with confidence later on (a toy
illustration of this self-authenticating property appears after this
comparison).
The main differences are:
\begin{tightlist}
\item We force the client to use our software. This means:
\begin{tightlist}
\item the client can get anonymity for himself pretty easily, since he is
already running our onion proxy;
\item the client can literally just click on a URL in his Mozilla, paste it
into wget, etc., and it will just work. (The URL is a long-term
service tag; like Ian's, it does not expire as the server changes
public addresses. But in Ian's scheme it seems the client must
manually hunt down a current location of the service via Gnutella?);
\item the client and server can share ephemeral DH keys, so at no point
in the path is the plaintext exposed.
\end{tightlist}
\item We fear that we would get \emph{no} volunteers to run Ian's rendezvous
points, because they have to actually serve the Nazi propaganda (or whatever)
in plaintext. So we add another layer of indirection to the system:
the rendezvous service is divided into Introduction Points and
Meeting Points. The introduction points (the ones that the server is
publicly associated with) do not output any bytes to the clients. And
the meeting points do not know the client, the server, or the material
being transmitted. The indirection scheme is also designed with
authentication/authorization in mind -- if the client does not include
the right cookie with its request for service, the server does not even
acknowledge its existence.
\end{tightlist}
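As a toy illustration of why a hash of the service's public key is
self-authenticating, the short Python sketch below derives a tag from a key
and later checks a presented key against it. The hash function, key
encoding, and tag length here are arbitrary choices for illustration, not
the format Tor actually uses.
\begin{verbatim}
# Toy illustration of a self-authenticating service tag: the tag is a
# hash of the service's long-term public key, so a client holding only
# the tag can check that a presented key really belongs to the service.
import hashlib

def service_tag(public_key_bytes):
    """Derive a short, stable tag from the service's public key."""
    return hashlib.sha1(public_key_bytes).hexdigest()[:16]

bob_public_key = b"(Bob's long-term public key bytes would go here)"
tag_in_url = service_tag(bob_public_key)   # what Alice's URL contains

# Later, the service presents its public key over the rendezvous
# circuit; Alice accepts it only if it hashes back to the known tag.
presented_key = bob_public_key
assert service_tag(presented_key) == tag_in_url
print("service tag:", tag_in_url)
\end{verbatim}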
\subsubsection{Integration with user applications}
\Section{Maintaining anonymity sets}
\label{sec:maintaining-anonymity}
\SubSection{Using a circuit many times}
\label{subsec:many-messages}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\Section{Attacks and Defenses}
\label{sec:attacks}
Below we summarize a variety of attacks and how well our design withstands
them.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\Section{Future Directions and Open Problems}
\label{sec:conclusion}
Tor brings together many innovations from many different projects into
a unified deployable system. But there are still several attacks that
work quite well, as well as a number of sustainability and run-time
issues remaining to be ironed out. In particular:
\begin{itemize}
\item foo
\end{itemize}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\Section{Acknowledgments}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\bibliographystyle{latex8}
\bibliography{tor-design}
\end{document}
% Style guide:
% U.S. spelling
% avoid contractions (it's, can't, etc.)
% 'mix', 'mixes' (as noun)
% 'mix-net'
% 'mix', 'mixing' (as verb)
% 'Mixminion Project'
% 'Mixminion' (meaning the protocol suite or the network)
% 'Mixmaster' (meaning the protocol suite or the network)
% 'middleman' [Not with a hyphen; the hyphen has been optional
% since Middle English.]
% 'nymserver'
% 'Cypherpunk', 'Cypherpunks', 'Cypherpunk remailer'
%
% 'Whenever you are tempted to write 'Very', write 'Damn' instead, so
% your editor will take it out for you.' -- Misquoted from Mark Twain