\documentclass[times,10pt,twocolumn]{article}
\usepackage{latex8}
%\usepackage{times}
\usepackage{url}
\usepackage{graphics}
\usepackage{amsmath}
\pagestyle{empty}
\renewcommand\url{\begingroup \def\UrlLeft{<}\def\UrlRight{>}\urlstyle{tt}\Url}
\newcommand\emailaddr{\begingroup \def\UrlLeft{<}\def\UrlRight{>}\urlstyle{tt}\Url}
% If an URL ends up with '%'s in it, that's because the line *in the .bib/.tex
% file* is too long, so break it there (it doesn't matter if the next line is
% indented with spaces). -DH
%\newif\ifpdf
%\ifx\pdfoutput\undefined
% \pdffalse
%\else
% \pdfoutput=1
% \pdftrue
%\fi
\newenvironment{tightlist}{\begin{list}{$\bullet$}{
\setlength{\itemsep}{0mm}
\setlength{\parsep}{0mm}
% \setlength{\labelsep}{0mm}
% \setlength{\labelwidth}{0mm}
% \setlength{\topsep}{0mm}
}}{\end{list}}
\begin{document}
%% Use dvipdfm instead. --DH
%\ifpdf
% \pdfcompresslevel=9
% \pdfpagewidth=\the\paperwidth
% \pdfpageheight=\the\paperheight
%\fi
\title{Tor: Design of a Next-Generation Onion Router}
\author{Anonymous}
%\author{Roger Dingledine \\ The Free Haven Project \\ arma@freehaven.net \and
%Nick Mathewson \\ The Free Haven Project \\ nickm@freehaven.net \and
%Paul Syverson \\ Naval Research Lab \\ syverson@itd.nrl.navy.mil}
\maketitle
\thispagestyle{empty}
\begin{abstract}
We present Tor, a connection-based low-latency anonymous communication
system that addresses many limitations in the original onion routing design.
Tor works in a real-world Internet environment,
requires little synchronization or coordination between nodes, and
protects against known anonymity-breaking attacks as well
as or better than other systems with similar design parameters.
\end{abstract}
%\begin{center}
%\textbf{Keywords:} anonymity, peer-to-peer, remailer, nymserver, reply block
%\end{center}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\Section{Overview}
\label{sec:intro}
Onion routing is a distributed overlay network designed to anonymize
low-latency TCP-based applications such as web browsing, secure shell,
and instant messaging. Users choose a path through the network and
build a \emph{virtual circuit}, in which each node in the path knows its
predecessor and successor, but no others. Traffic flowing down the circuit
is sent in fixed-size \emph{cells}, which are unwrapped by a symmetric key
at each node, revealing the downstream node. The original onion routing
project published several design and analysis papers
\cite{or-jsac98,or-discex00,or-ih96,or-pet02}. While there was briefly
a network of about a dozen nodes at three widely distributed sites,
the only long-running and publicly accessible
implementation was a fragile proof-of-concept that ran on a single
machine. Many critical design and deployment issues were never resolved,
and the design has not been updated in several years.
Here we describe Tor, a protocol for asynchronous, loosely
federated onion routers that provides the following improvements over
the old onion routing design:
\begin{tightlist}
\item \textbf{Perfect forward secrecy:} The original onion routing
design is vulnerable to a single hostile node recording traffic and later
forcing successive nodes in the circuit to decrypt it. Rather than using
onions to lay the circuits, Tor uses an incremental or \emph{telescoping}
path-building design, where the initiator negotiates session keys with
each successive hop in the circuit. Onion replay detection is no longer
necessary, and the network as a whole is more reliable as well, since
the initiator knows which hop failed and can try extending to a new node.
\item \textbf{Applications talk to the onion proxy via SOCKS:}
The original onion routing design required a separate proxy for each
supported application protocol, resulting in a lot of extra code (most
of which was never written) and leaving many TCP-based
applications unsupported. Tor uses the unified and standard SOCKS
\cite{socks4,socks5} interface, allowing us to support any TCP-based
program without modification.
\item \textbf{Many applications can share one circuit:} The original
onion routing design built one circuit for each request. Aside from the
performance issues of doing public key operations for every request, it
also turns out that regular communications patterns mean building lots
of circuits, which can endanger anonymity \cite{wright03}.
%[XXX Was this
%supposed to be Wright02 or Wright03. In any case I am hesitant to cite
%that work in this context. While the point is valid in general, that
%work is predicated on assumptions that I don't think typically apply
%to onion routing (whether old or new design). -PS]
%[I had meant wright03, but I guess wright02 could work as well.
% If you don't think these attacks work on onion routing, you need to
% write up a convincing argument of this. Such an argument would
% be very worthwhile to include in this paper. -RD]
Tor multiplexes many
connections down each circuit, but still rotates the circuit periodically
to avoid too much linkability.
\item \textbf{No mixing or traffic shaping:} The original onion routing
design called for full link padding both between onion routers and between
onion proxies (that is, users) and onion routers \cite{or-jsac98}. The
later analysis paper \cite{or-pet02} suggested \emph{traffic shaping}
to provide similar protection but use less bandwidth, but did not go
into detail. However, recent research \cite{econymics} and deployment
experience \cite{freedom} indicate that this level of resource
use is not practical or economical; and even full link padding is still
vulnerable to active attacks \cite{defensive-dropping}.
% [XXX what is being referenced here, Dogan? -PS]
%[An upcoming FC04 paper. I'll add a cite when it's out. -RD]
\item \textbf{Leaky pipes:} Through in-band signaling within the circuit,
Tor initiators can direct traffic to nodes partway down the circuit. This
allows for long-range padding to frustrate timing attacks at the initiator
\cite{defensive-dropping}, but because circuits are used by more than
one application, it also allows traffic to exit the circuit from the
middle -- thus frustrating timing attacks based on observing exit points.
%Or something like that. hm. Tone this down maybe? Or support it. -RD
\item \textbf{Congestion control:} Earlier anonymity designs do not
address traffic bottlenecks. Unfortunately, typical approaches to load
balancing and flow control in overlay networks involve inter-node control
communication and global views of traffic. Our decentralized ack-based
congestion control maintains reasonable anonymity while allowing nodes
at the edges of the network to detect congestion or flooding attacks
and send less data until the congestion subsides.
\item \textbf{Directory servers:} Rather than attempting to flood
link-state information through the network, which can be unreliable and
open to partitioning attacks or outright deception, Tor takes a simplified
view toward distributing link-state information. Certain more trusted
onion routers also serve as directory servers; they provide signed
\emph{directories} describing all routers they know about, and which
are currently up. Users periodically download these directories via HTTP.
\item \textbf{End-to-end integrity checking:} Without integrity checking
on traffic going through the network, an onion router can change the
contents of cells as they pass by, e.g., by redirecting a connection on
the fly so it connects to a different webserver, or by tagging encrypted
traffic and looking for traffic at the network edges that has been
tagged \cite{minion-design}.
\item \textbf{Robustness to node failure:} router twins
\item \textbf{Exit policies:}
Tor provides a consistent mechanism for each node to specify and
advertise an exit policy.
\item \textbf{Rendezvous points:}
location-protected servers
\end{tightlist}
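The layered cell encryption and peeling described above can be sketched as
follows. This is an illustrative toy, not Tor's actual cipher or cell
format: the SHA-256 counter-mode keystream, the key values, and the cell
size are all stand-ins for the real symmetric cipher and the session keys
negotiated while telescoping.

```python
# Toy sketch of layered (onion) cell encryption: the initiator wraps a
# fixed-size cell once per hop; each hop strips exactly one layer with
# its own session key, revealing only the next hop's layer.
import hashlib

CELL_SIZE = 512  # illustrative; Tor cells are fixed-size

def keystream(key: bytes, length: int) -> bytes:
    # SHA-256 in counter mode, standing in for a real stream cipher.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_layer(cell: bytes, key: bytes) -> bytes:
    # XOR with the keystream; applying it twice removes the layer.
    ks = keystream(key, len(cell))
    return bytes(a ^ b for a, b in zip(cell, ks))

# One session key per hop (hypothetical values).
hop_keys = [b"key-entry", b"key-middle", b"key-exit"]

payload = b"GET / HTTP/1.0\r\n\r\n".ljust(CELL_SIZE, b"\x00")

# Initiator: apply the exit node's layer first, the entry node's last,
# so the entry node peels first.
cell = payload
for key in reversed(hop_keys):
    cell = xor_layer(cell, key)

# Network: each hop in path order peels one layer.
for key in hop_keys:
    cell = xor_layer(cell, key)

assert cell == payload  # the exit node recovers the plaintext cell
```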
We review previous work in Section~\ref{sec:background}, describe
our goals and assumptions in Section~\ref{sec:assumptions},
and then address the above list of improvements in Sections
\ref{sec:design}--\ref{sec:maintaining-anonymity}. We then summarize
how our design stands up to known attacks, and conclude with a list of
open problems.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\Section{Background and threat model}
\label{sec:background}
\SubSection{Related work}
\label{sec:related-work}
Modern anonymity designs date to Chaum's Mix-Net \cite{chaum-mix} design of
1981. Chaum proposed hiding sender-recipient connections by wrapping
messages in several layers of public key cryptography, and relaying them
through a path composed of Mix servers. Mix servers in turn decrypt, delay,
and re-order messages, before relaying them along the path toward their
destinations.
Subsequent relay-based anonymity designs have diverged in two
principal directions. Some have attempted to maximize anonymity at
the cost of introducing comparatively large and variable latencies,
for example, Babel \cite{babel}, Mixmaster \cite{mixmaster-spec}, and
Mixminion \cite{minion-design}. Because of this
decision, such \emph{high-latency} networks are well-suited for anonymous
email, but introduce too much lag for interactive tasks such as web browsing,
Internet chat, or SSH connections.
Tor belongs to the second category: \emph{low-latency} designs that
attempt to anonymize interactive network traffic. Because such
traffic tends to involve a relatively large number of packets, it is
difficult to prevent an attacker who can eavesdrop on entry and exit
points from correlating packets entering the anonymity network with
packets leaving it. Although some work has been done to frustrate
these attacks, most designs protect primarily against traffic analysis
rather than traffic confirmation \cite{or-jsac98}. One approach is to pad
and limit communication to a constant rate, or at least to control the
variation in traffic shape; this can have prohibitive bandwidth costs
and/or performance limitations. Another is to use a cascade (fixed
shared route) with a relatively fixed set of users. This assumes a
degree of agreement and provides an easier target for an active
attacker, since the endpoints are generally known. However, a practical
network with both of these features has been run for many years
\cite{web-mix}.
[XXX go on to explain how the design choices implied in low-latency result in
significantly different designs.]
The simplest low-latency designs are single-hop proxies such as the
Anonymizer \cite{anonymizer}, wherein a single trusted server removes
users' identifying data before relaying it. These designs are easy to
analyze, but require end-users to trust the anonymizing proxy.
More complex are distributed-trust, channel-based anonymizing systems. In
these designs, a user establishes one or more medium-term bidirectional
end-to-end tunnels to exit servers, and uses those tunnels to deliver a
number of low-latency packets to and from one or more destinations per
tunnel. Establishing tunnels is comparatively expensive and typically
requires public-key cryptography, whereas relaying packets along a tunnel is
comparatively inexpensive. Because a tunnel crosses several servers, no
single server can learn the user's communication partners.
Systems such as earlier versions of Freedom and onion routing
build the anonymous channel all at once (using an onion). Later
designs of each of these build the channel in stages, as does AnonNet
\cite{anonnet}. Among other things, this makes perfect forward
secrecy feasible.
Some systems, such as Crowds \cite{crowds-tissec}, do not rely on the
changing appearance of packets to hide the path; rather, they employ
mechanisms so that an intermediary cannot be sure whether it is
receiving from or sending to the ultimate initiator. Crowds needs no
public-key encryption, but the responder and all data are
visible to all nodes on the path, so the anonymity of the connection
initiator depends on filtering all identifying information from the
data stream. Crowds is also designed only for HTTP traffic.
Hordes \cite{hordes-jcs} is based on Crowds but also uses multicast
responses to hide the initiator. Some systems go even further,
requiring broadcast \cite{herbivore,p5}, although tradeoffs are made to
make this more practical. Both Herbivore and P5 are designed primarily
for communication between participating peers, although Herbivore
permits external connections by asking a peer to serve as a proxy.
Allowing easy connections to nonparticipating responders or recipients
is a practical requirement for many users, e.g., to visit
nonparticipating Web sites or to send mail to nonparticipating
recipients.
Distributed-trust anonymizing systems differ in how they prevent attackers
from controlling too many servers and thus compromising too many user paths.
Some protocols rely on a centrally maintained set of well-known anonymizing
servers. Others (such as Tarzan and MorphMix) allow unknown users to run
servers, while using a limited resource (DHT space for Tarzan; IP space for
MorphMix) to prevent an attacker from owning too much of the network.
[XXX what else? What does (say) crowds do?]
All of the systems mentioned above, despite their varying design goals
and capabilities, require that communicants be intentionally
participating; some, such as Herbivore, additionally require multicast
or broadcast to work.
There are also many systems which are intended for anonymous
and/or censorship-resistant file sharing. [XXX Should we list all these
or just say it's out of scope for the paper?
eternity, gnunet, freenet, freehaven, publius, tangler, taz/rewebber]
[XXX Should we add a paragraph dividing servers by all-at-once approach to
tunnel-building (OR1,Freedom1) versus piecemeal approach
(OR2,Anonnet?,Freedom2) ?]
Channel-based anonymizing systems also differ in their use of dummy traffic.
[XXX]
Finally, several systems provide low-latency anonymity without channel-based
communication. Crowds and [XXX] provide anonymity for HTTP requests; [...]
[XXX Mention error recovery?]
Web-MIXes \cite{web-mix} (also known as the Java Anon Proxy or JAP)
use a cascade architecture with relatively constant groups of users
sending and receiving at a constant rate.
Some systems, such as Crowds \cite{crowds-tissec}, do nothing against such
end-to-end confirmation, but still make it difficult for nodes along a
connection to perform the timing confirmations that would more easily
identify when the immediate predecessor is the initiator of a connection,
which in Crowds would reveal both initiator and responder to the attacker.
anonymizer
pipenet
freedom v1
freedom v2
onion routing v1
isdn-mixes
crowds
real-time mixes, web mixes
anonnet (marc rennhard's stuff)
morphmix
P5
gnunet
rewebbers
tarzan
herbivore
hordes
cebolla (?)
[XXX Close by mentioning where Tor fits.]
\SubSection{Our threat model}
\label{subsec:threat-model}
\SubSection{Known attacks against low-latency anonymity systems}
\label{subsec:known-attacks}
We discuss each of these attacks in more detail below, along with the
aspects of the Tor design that provide defense. We provide a summary
of the attacks and our defenses against them in Section~\ref{sec:attacks}.
Passive attacks:
simple observation,
timing correlation,
size correlation,
option distinguishability.
Active attacks:
key compromise,
iterated subpoena,
run recipient,
run a hostile node,
compromise entire path,
selectively DOS servers,
introduce timing into messages,
directory attacks,
tagging attacks.
\Section{Design goals and assumptions}
\label{sec:assumptions}
[XXX Perhaps the threat model belongs here.]
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\Section{The Tor Design}
\label{sec:design}
\Section{Other design decisions}
\SubSection{Exit policies and abuse}
\label{subsec:exitpolicies}
\SubSection{Directory Servers}
\label{subsec:dir-servers}
\Section{Rendezvous points for location privacy}
\label{sec:rendezvous}
Rendezvous points are a building block for \emph{location-hidden services}
(that is, responder anonymity) in the Tor network. A location-hidden
service means that Bob can offer a TCP service, such as an Apache webserver,
without revealing the IP address of that service.
We provide censorship resistance for Bob by allowing him to advertise
several onion routers (nodes known as his Introduction Points, see
Section~\ref{subsec:intro-point}) as his public location. Alice,
the client, chooses a node known as a Meeting Point (see Section
\ref{subsec:meeting-point}). She connects to one of Bob's introduction
points, informs him about her meeting point, and then waits for him to
connect to her meeting point. This extra level of indirection is needed
so Bob's introduction points don't serve files directly (else they open
themselves up to abuse, e.g., from serving Nazi propaganda in France). The
extra level of indirection also allows Bob to choose which requests to
respond to, and which to ignore.
We provide the necessary glue code so that Alice can view
webpages on a location-hidden webserver, and Bob can run a
location-hidden server, with minimally invasive changes (see Section
\ref{subsec:client-rendezvous}). Both Alice and Bob must run local
onion proxies (OPs) -- software that knows how to talk to the onion
routing network.
The steps of a rendezvous:
\begin{tightlist}
\item Bob chooses some Introduction Points, and advertises them on a
Distributed Hash Table (DHT).
\item Bob establishes onion routing connections to each of his
Introduction Points, and waits.
\item Alice learns about Bob's service out of band (perhaps Bob gave her
a pointer, or she found it on a website). She looks up the details
of Bob's service from the DHT.
\item Alice chooses and establishes a Meeting Point (MP) for this
transaction.
\item Alice goes to one of Bob's Introduction Points, and gives it a blob
(encrypted for Bob) which tells him about herself and the Meeting
Point she chose. The Introduction Point sends the blob to Bob.
\item Bob chooses whether to ignore the blob, or to onion route to MP.
Let's assume the latter.
\item MP plugs together Alice and Bob. Note that MP doesn't know (or care)
who Alice is, or who Bob is; and it can't read anything they
transmit either, because they share a session key.
\item Alice sends a `begin' cell along the circuit. It makes its way
to Bob's onion proxy. Bob's onion proxy connects to Bob's webserver.
\item Data goes back and forth as usual.
\end{tightlist}
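The rendezvous steps above can be sketched as a toy message-passing
simulation. All names here (`advertise`, `forward`, `join`, the cookie
value, the in-memory `dht` dictionary) are illustrative assumptions, not
Tor's wire protocol; real circuits, encryption, and the begin cell are
elided.

```python
# Toy simulation of the rendezvous steps: DHT advertisement,
# introduction-point forwarding, and meeting-point matching.

dht = {}  # stands in for the Distributed Hash Table in step 1

class MeetingPoint:
    def __init__(self):
        self.waiting = {}  # cookie -> which side arrived first
    def join(self, cookie, side):
        # Plug the two sides together once both present the same cookie.
        if cookie in self.waiting:
            return "connected"
        self.waiting[cookie] = side
        return "waiting"

class IntroPoint:
    def __init__(self, name):
        self.name = name
        self.server = None
    def register(self, server):      # step 2: Bob connects and waits
        self.server = server
    def forward(self, blob):         # step 5: relay the blob, serve no content
        self.server.receive_blob(blob)

class Bob:
    def __init__(self, service_id):
        self.service_id = service_id
    def advertise(self, intro_points):
        # step 1: publish introduction points; step 2: register at each.
        dht[self.service_id] = [ip.name for ip in intro_points]
        for ip in intro_points:
            ip.register(self)
    def receive_blob(self, blob):
        # step 6: Bob may ignore the blob; here he goes to the MP.
        blob["meeting_point"].join(blob["cookie"], side="bob")

# Steps 1-2: Bob advertises and waits.
bob = Bob("bobs-service")
ips = [IntroPoint("ip1"), IntroPoint("ip2")]
bob.advertise(ips)

# Steps 3-5: Alice looks Bob up, establishes an MP, sends her blob.
assert dht["bobs-service"] == ["ip1", "ip2"]
mp = MeetingPoint()
cookie = "rendezvous-cookie"
mp.join(cookie, side="alice")        # step 4: Alice waits at the MP
ips[0].forward({"meeting_point": mp, "cookie": cookie})

# Steps 6-7: Bob arrived with the matching cookie; the MP connected them.
assert mp.waiting[cookie] == "alice"
```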
Ian Goldberg developed a similar notion of rendezvous points for
low-latency anonymity systems \cite{goldberg-thesis}. His ``service tag''
is the same concept as our ``hash of service's public key''. We make it
a hash of the public key so it can be self-authenticating, and so the
client can recognize the same service with confidence later on.
The main differences are:
\begin{tightlist}
\item We force the client to use our software. This means that the
client can get anonymity for himself pretty easily, since he is
already running our onion proxy; that the client can literally just
click on a URL in his Mozilla, paste it into wget, etc., and it will
just work (the URL is a long-term service tag; like Ian's, it does not
expire as the server changes public addresses, but in Ian's scheme it
seems the client must manually hunt down a current location of the
service via gnutella); and that the client and server can share
ephemeral DH keys, so at no point in the path is the plaintext exposed.
\item We fear that we would get \emph{no} volunteers to run Ian's
rendezvous points, because they have to actually serve the Nazi
propaganda (or whatever) in plaintext. So we add another layer of
indirection to the system: the rendezvous service is divided into
Introduction Points and Meeting Points. The introduction points (the
ones that the server is publicly associated with) do not output any
bytes to the clients. And the meeting points don't know the client,
the server, or the stuff being transmitted. The indirection scheme is
also designed with authentication/authorization in mind -- if the
client doesn't include the right cookie with its request for service,
the server doesn't even acknowledge its existence.
\end{tightlist}
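The self-authenticating ``hash of service's public key'' tag described
above can be sketched as follows. The hash function (SHA-1 here), the
tag length, and the key bytes are illustrative assumptions; the point is
only that a client holding the tag can later verify a fetched key
without any external authority.

```python
# Sketch of a self-authenticating service tag: a digest of the
# service's public key, so the tag itself proves key ownership.
import hashlib

def service_tag(public_key: bytes) -> str:
    # Hash choice and truncation are illustrative, not Tor's format.
    return hashlib.sha1(public_key).hexdigest()[:16]

def verify(tag: str, claimed_key: bytes) -> bool:
    # Recompute the tag from the claimed key; no directory or CA needed.
    return service_tag(claimed_key) == tag

# Bob publishes his tag; Alice stores only the tag (e.g., inside a URL).
bob_key = b"-----BEGIN PUBLIC KEY----- (illustrative bytes)"
tag = service_tag(bob_key)

# Later, Alice fetches a descriptor claiming to be Bob's service.
assert verify(tag, bob_key)              # the genuine key matches
assert not verify(tag, b"imposter key")  # a substituted key does not
```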
\SubSubSection{Integration with user applications}
\Section{Maintaining anonymity sets}
\label{sec:maintaining-anonymity}
\SubSection{Using a circuit many times}
\label{subsec:many-messages}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\Section{Attacks and Defenses}
\label{sec:attacks}
Below we summarize a variety of attacks and how well our design withstands
them.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\Section{Future Directions and Open Problems}
\label{sec:conclusion}
Tor brings together many innovations from many different projects into
a unified deployable system. But there are still several attacks that
work quite well, as well as a number of sustainability and run-time
issues remaining to be ironed out. In particular:
\begin{itemize}
\item foo
\end{itemize}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\Section{Acknowledgments}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\bibliographystyle{latex8}
\bibliography{tor-design}
\end{document}
% Style guide:
% U.S. spelling
% avoid contractions (it's, can't, etc.)
% 'mix', 'mixes' (as noun)
% 'mix-net'
% 'mix', 'mixing' (as verb)
% 'Mixminion Project'
% 'Mixminion' (meaning the protocol suite or the network)
% 'Mixmaster' (meaning the protocol suite or the network)
% 'middleman' [Not with a hyphen; the hyphen has been optional
% since Middle English.]
% 'nymserver'
% 'Cypherpunk', 'Cypherpunks', 'Cypherpunk remailer'
%
% 'Whenever you are tempted to write 'Very', write 'Damn' instead, so
% your editor will take it out for you.' -- Misquoted from Mark Twain