
$Id$

                           Tor Path Specification

                              Roger Dingledine
                              Nick Mathewson

Note: This is an attempt to specify Tor as currently implemented. Future
versions of Tor will implement improved algorithms.

This document tries to cover how Tor chooses to build circuits and assign
streams to circuits. Other implementations MAY take other approaches, but
implementors should be aware of the anonymity and load-balancing implications
of their choices.

THIS SPEC ISN'T DONE OR CORRECT YET.
1. General operation

   Tor begins building circuits as soon as it has enough directory
   information to do so (see section 5.1 of dir-spec.txt). Some circuits are
   built preemptively because we expect to need them later (for user
   traffic), and some are built because of immediate need (for user traffic
   that no current circuit can handle, for testing the network or our
   reachability, and so on).

   When a client application creates a new stream (by opening a SOCKS
   connection or launching a resolve request), we attach it to an appropriate
   open circuit if one exists, or wait if an appropriate circuit is
   in-progress. We launch a new circuit only if no current circuit can handle
   the request. We rotate circuits over time to avoid some profiling attacks.

   To build a circuit, we choose all the nodes we want to use, and then
   construct the circuit. Sometimes, when we want a circuit that ends at a
   given hop, and we have an appropriate unused circuit, we "cannibalize" the
   existing circuit and extend it to the new terminus.

   These processes are described in more detail below.

   This document describes Tor's automatic path selection logic only; path
   selection can be overridden by a controller (with the EXTENDCIRCUIT and
   ATTACHSTREAM commands). Paths constructed through these means may violate
   some constraints given below.
1b. Terminology

   A "path" is an ordered sequence of nodes, not yet built as a circuit.

   A "clean" circuit is one that has not yet been used for any traffic.

   A "fast" or "stable" or "valid" node is one that has the 'Fast', 'Stable',
   or 'Valid' flag set respectively, based on our current directory
   information. A "fast" or "stable" circuit is one consisting only of "fast"
   or "stable" nodes.

   In an "exit" circuit, the final node is chosen based on waiting stream
   requests if any, and in any case it avoids nodes with exit policy of
   "reject *:*". An "internal" circuit, on the other hand, is one where the
   final node is chosen just like a middle node (ignoring its exit policy).

   A "request" is a client-side stream or DNS resolve that needs to be served
   by a circuit.

   A "pending" circuit is one that we have started to build, but which has
   not yet completed.

   A circuit or path "supports" a request if it is okay to use the
   circuit/path to fulfill the request, according to the rules given below.
   A circuit or path "might support" a request if some aspect of the request
   is unknown (usually its target IP), but we believe the path probably
   supports the request according to the rules given below.
2. Building circuits

2.1. When we build.

2.1.1. Clients build circuits preemptively

   When running as a client, Tor tries to maintain at least a certain number
   of clean circuits, so that new streams can be handled quickly. To increase
   the likelihood of success, Tor tries to predict what circuits will be
   useful by choosing from among nodes that support the ports we have used in
   the recent past (by default one hour). Specifically, on startup Tor tries
   to maintain one clean fast exit circuit that allows connections to port
   80, and at least two internal circuits in case we get a resolve request or
   hidden service request (at least three internal circuits if we _run_ a
   hidden service).

   After that, Tor will adapt the circuits that it preemptively builds based
   on the requests it sees from the user: it tries to have a clean fast exit
   circuit available for every port seen recently (one circuit is adequate
   for many predicted ports -- it doesn't keep a separate circuit for each
   port), and it tries to have the above internal circuits available if we've
   seen resolves or hidden service activity recently. If there are 12 clean
   circuits open, it doesn't open more even if it has more predictions.
   Lastly, note that if there are no requests from the user for an hour, Tor
   will predict no use and build no preemptive circuits.

   The Tor client SHOULD NOT store its list of predicted requests to a
   persistent medium.
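
   [Illustrative sketch, not from the Tor source: the following Python
   fragment models only the prediction bookkeeping described in the two
   paragraphs above. The names PredictedPortTracker, PREDICTION_WINDOW, and
   MAX_CLEAN_CIRCUITS are invented for this example.]

      import time

      PREDICTION_WINDOW = 60 * 60   # forget ports not used for an hour
      MAX_CLEAN_CIRCUITS = 12       # don't open more clean circuits than this

      class PredictedPortTracker:
          """Tracks recently used ports so clean exit circuits can be
          prebuilt for them."""

          def __init__(self):
              # Kept only in memory: the list of predicted requests SHOULD
              # NOT be written to a persistent medium.
              self.last_used = {}   # port -> timestamp of most recent use

          def note_request(self, port):
              self.last_used[port] = time.time()

          def predicted_ports(self, now=None):
              now = time.time() if now is None else now
              return [p for p, t in self.last_used.items()
                      if now - t <= PREDICTION_WINDOW]

          def want_more_clean_circuits(self, clean_circuit_port_sets):
              # clean_circuit_port_sets: one set of supported ports per
              # existing clean circuit.
              if len(clean_circuit_port_sets) >= MAX_CLEAN_CIRCUITS:
                  return False
              covered = (set().union(*clean_circuit_port_sets)
                         if clean_circuit_port_sets else set())
              return any(p not in covered for p in self.predicted_ports())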
2.1.2. Clients build circuits on demand

   Additionally, when a client request exists that no circuit (built or
   pending) might support, we create a new circuit to support the request.
   We do so by picking a request arbitrarily, launching a circuit to support
   it, and repeating until every unattached request might be supported by a
   pending or built circuit.

   For hidden service interactions, we can "cannibalize" a clean internal
   circuit if one is available, so we don't need to build those circuits from
   scratch on demand.

   We can also cannibalize clean circuits when the client asks to exit at a
   given node -- either via mapaddress or the ".exit" notation, or because
   the destination is running at the same location as an exit node.
2.1.3. Servers build circuits for testing reachability

   Tor servers test the reachability of their ORPort when they start and
   whenever their IP address changes.

   XXXX

2.1.4. Hidden-service circuits

   See section 4 below.

2.1.5. Rate limiting of failed circuits

   If we fail to build a circuit N times in an X-second period (see section
   2.3 for how this works), we stop building circuits until the X seconds
   have elapsed.

   XXX

2.1.6. When to tear down circuits
2.2. Path selection and constraints

   We choose the path for each new circuit before we build it. We choose the
   exit node first, followed by the other nodes in the circuit. All paths we
   generate obey the following constraints (a sketch of these checks follows
   the list):

     - We do not choose the same router twice for the same path.
     - We do not choose any router in the same family as another in the same
       path.
     - We do not choose any router in the same /16 subnet as another in the
       same path (unless EnforceDistinctSubnets is 0).
     - We don't choose any non-running or non-valid router unless we have
       been configured to do so. By default, we are configured to allow
       non-valid routers in "middle" and "rendezvous" positions.
     - If we're using Guard nodes, the first node must be a Guard (see
       section 5 below).
     - XXXX Choosing the length
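
   [Illustrative sketch, not from the Tor source: the following Python
   fragment checks the "no duplicate router", "no shared family", and
   "no shared /16" constraints above. The dict fields 'fingerprint',
   'family', and 'address' are invented for this example.]

      import ipaddress

      def same_slash16(addr_a, addr_b):
          # True if two IPv4 addresses share the same /16 prefix.
          net_a = ipaddress.ip_network(addr_a + "/16", strict=False)
          return ipaddress.ip_address(addr_b) in net_a

      def violates_path_constraints(candidate, path,
                                    enforce_distinct_subnets=True):
          for node in path:
              # Never pick the same router twice for the same path.
              if node["fingerprint"] == candidate["fingerprint"]:
                  return True
              # Never pick two routers from the same declared family.
              if (candidate["fingerprint"] in node["family"]
                      or node["fingerprint"] in candidate["family"]):
                  return True
              # Never pick two routers in the same /16 subnet, unless
              # EnforceDistinctSubnets is 0.
              if (enforce_distinct_subnets
                      and same_slash16(node["address"], candidate["address"])):
                  return True
          return False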
   For circuits that do not need to be "fast", when choosing among multiple
   candidates for a path element, we choose randomly.

   For "fast" circuits, we pick a given router as an exit with probability
   proportional to its advertised bandwidth [the smaller of the 'rate' and
   'observed' arguments to the "bandwidth" element in its descriptor]. If a
   router's advertised bandwidth is greater than MAX_BELIEVABLE_BANDWIDTH
   (1.5 MB/s), we clip to that value.

   For non-exit positions on "fast" circuits, we pick routers as above, but
   we weight the clipped advertised bandwidth of Exit-flagged nodes depending
   on the fraction of bandwidth available from non-Exit nodes. Call the total
   clipped advertised bandwidth for Exit nodes under consideration E, and the
   total clipped advertised bandwidth for non-Exit nodes under consideration
   N. If E < N/2, we do not consider Exit-flagged nodes. Otherwise, we weight
   their bandwidth with the factor (E - N/2)/(N + E - N/2) == (2E - N)/(2E + N).
   This ensures that bandwidth is evenly distributed over nodes in 3-hop
   paths.
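
   [Illustrative sketch, not from the Tor source: the following Python
   fragment applies the clipping and the (2E - N)/(2E + N) exit weighting
   described above to a candidate list. The dict fields and function names
   are invented for this example.]

      import random

      MAX_BELIEVABLE_BANDWIDTH = 1500 * 1000   # 1.5 MB/s, in bytes/second

      def clipped_bandwidth(router):
          # Advertised bandwidth is the smaller of 'rate' and 'observed',
          # clipped to MAX_BELIEVABLE_BANDWIDTH.
          bw = min(router["bandwidth_rate"], router["bandwidth_observed"])
          return min(bw, MAX_BELIEVABLE_BANDWIDTH)

      def choose_fast_node(candidates, position):
          # position is "exit" or "non-exit".
          E = sum(clipped_bandwidth(r) for r in candidates if r["is_exit"])
          N = sum(clipped_bandwidth(r) for r in candidates if not r["is_exit"])
          weights = []
          for r in candidates:
              w = clipped_bandwidth(r)
              if position != "exit" and r["is_exit"]:
                  if E < N / 2:
                      w = 0   # too little exit capacity; skip Exit nodes here
                  else:
                      w *= (2 * E - N) / (2 * E + N)
              weights.append(w)
          if sum(weights) <= 0:
              raise ValueError("no usable candidates")
          return random.choices(candidates, weights=weights, k=1)[0]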
   Additionally, we may be building circuits with one or more requests in
   mind. Each kind of request puts certain constraints on paths:

     - All service-side introduction circuits and all rendezvous paths
       should be Stable.
     - All connection requests for connections that we think will need to
       stay open a long time require Stable circuits. Currently, Tor decides
       this by examining the request's target port, and comparing it to a
       list of "long-lived" ports. (Default: 21, 22, 706, 1863, 5050, 5190,
       5222, 5223, 6667, 6697, 8300.) (A sketch of this check follows the
       list.)
     - DNS resolves require an exit node whose exit policy is not equivalent
       to "reject *:*".
     - Reverse DNS resolves require a version of Tor with advertised eventdns
       support (available in Tor 0.1.2.1-alpha-dev and later).
     - All connection requests require an exit node whose exit policy
       supports their target address and port (if known), or which "might
       support it" (if the address isn't known). See section 2.2.1.
     - Rules for Fast? XXXXX
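
   [Illustrative sketch, not from the Tor source: the following Python
   fragment applies the Stable-circuit rules above; LONG_LIVED_PORTS is the
   default list given above, and the function name is invented.]

      LONG_LIVED_PORTS = {21, 22, 706, 1863, 5050, 5190, 5222, 5223,
                          6667, 6697, 8300}

      def request_needs_stable_circuit(target_port, is_intro_or_rend=False):
          # Service-side introduction and rendezvous paths should be Stable.
          if is_intro_or_rend:
              return True
          # Connections expected to stay open a long time, judged by target
          # port, also require Stable circuits.
          return target_port in LONG_LIVED_PORTS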
2.2.1. Choosing an exit

   If we know what IP address we want to connect to or resolve, we can
   trivially tell whether a given router will support it by simulating its
   declared exit policy.

   Because we often connect to addresses of the form hostname:port, we do not
   always know the target IP address when we select an exit node. In these
   cases, we need to pick an exit node that "might support" connections to a
   given port at an unknown address. An exit node "might support" such a
   connection if any clause that accepts any connections to that port
   precedes all clauses (if any) that reject all connections to that port.
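
   [Illustrative sketch, not from the Tor source: the following Python
   fragment applies the "might support" rule above to a simplified policy
   representation; Tor's actual exit-policy structures differ.]

      def might_support_port(policy, port):
          # policy: ordered list of (action, addr_is_wildcard, lo, hi)
          # tuples, where action is "accept" or "reject" and [lo, hi] is
          # the clause's port range.
          for action, addr_is_wildcard, lo, hi in policy:
              if not (lo <= port <= hi):
                  continue
              if action == "accept":
                  # A clause accepting some connections to this port comes
                  # before any clause rejecting all connections to it.
                  return True
              if action == "reject" and addr_is_wildcard:
                  # A clause rejecting all connections to this port comes
                  # first.
                  return False
          # No clause matched the port; treat as unsupported in this sketch.
          return False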
2.2.2. User configuration

   Users can alter the default behavior for path selection with configuration
   options.

     - If "ExitNodes" is provided, then every request requires an exit node
       on the ExitNodes list. (If a request is supported by no nodes on that
       list, and StrictExitNodes is false, then Tor treats that request as if
       ExitNodes were not provided.)
     - "EntryNodes" and "StrictEntryNodes" behave analogously.
     - If a user tries to connect to or resolve a hostname of the form
       <target>.<servername>.exit, the request is rewritten to a request for
       <target>, and the request is only supported by the exit whose nickname
       or fingerprint is <servername>. (A sketch of this rewriting follows
       the list.)
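
   [Illustrative sketch, not from the Tor source: the following Python
   fragment shows the <target>.<servername>.exit rewriting described above;
   the function name is invented, and Tor's own address rewriting differs.]

      def rewrite_exit_notation(hostname):
          # Returns (target, servername); servername is None when the
          # ".exit" notation is not used.
          if not hostname.endswith(".exit"):
              return hostname, None
          stripped = hostname[:-len(".exit")]
          target, sep, servername = stripped.rpartition(".")
          if not sep:
              # No explicit <target> part; handling of this form is left
              # unspecified in this sketch.
              return None, stripped
          # The request is rewritten to a request for <target>, supported
          # only by the exit whose nickname or fingerprint is <servername>.
          return target, servername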
2.3. Handling failure

   If an attempt to extend a circuit fails (either because the first create
   failed or a subsequent extend failed) then the circuit is torn down and is
   no longer pending. (XXXX really?) Requests that might have been supported
   by the pending circuit thus become unsupported, and a new circuit needs to
   be constructed.

   If a stream "begin" attempt fails with an EXITPOLICY error, we decide that
   the exit node's exit policy is not correctly advertised, so we treat the
   exit node as if it were a non-exit until we retrieve a fresh descriptor
   for it.

   XXXX
3. Attaching streams to circuits

   When a circuit that might support a request is built, Tor tries to attach
   the request's stream to the circuit and sends a BEGIN or RESOLVE relay
   cell as appropriate. If the request completes unsuccessfully, Tor
   considers the reason given in the CLOSE relay cell. [XXX yes, and?]

   After a request has remained unattached for [XXXX interval?], Tor abandons
   the attempt and signals an error to the client as appropriate (e.g., by
   closing the SOCKS connection).

   XXX Timeouts and when Tor auto-retries.
       * What stream-end-reasons are appropriate for retrying.

   If there is no reply to the BEGIN/RESOLVE cell, the stream will time out
   and fail.
4. Hidden-service related circuits

   XXX Tracking expected hidden service use (client-side and hidserv-side)

5. Guard nodes

   XXX writeme
6. Testing circuits


(From some emails by arma)

Right now the code exists to pick helper nodes, store our choices to disk,
and use them for our entry nodes. But there are three topics to tackle
before I'm comfortable turning them on by default. First, how to handle
churn: since Tor nodes are not always up, and sometimes disappear forever,
we need a plan for replacing missing helpers in a safe way. Second, we need
a way to distinguish "the network is down" from "all my helpers are down",
also in a safe way. Lastly, we need to examine the situation where a client
picks three crummy helper nodes and is forever doomed to a lousy Tor
experience. Here's my plan:
How to handle churn.
  - Keep track of whether you have ever actually established a connection
    to each helper. Any helper node in your list that you've never used is
    ok to drop immediately. Also, we don't save that one to disk.
  - If all our helpers are down, we need more helper nodes: add a new one
    to the *end* of our list. Only remove dead ones when they have been
    gone for a very long time (months).
  - Pick from the first n (by default 3) helper nodes in your list that
    are up (according to the network-statuses) and reachable (according to
    your local firewall config).
    - This means that order matters when writing/reading them to disk.
    (A sketch of this selection logic appears below.)
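
[Illustrative sketch, not from the Tor source: the following Python fragment
models the "pick the first n usable helpers" and "drop never-used helpers"
rules above; the predicate arguments are invented for this example.]

    NUM_PRIMARY_HELPERS = 3   # the "first n" described above

    def usable_helpers(helper_list, is_up, is_reachable,
                       n=NUM_PRIMARY_HELPERS):
        # helper_list is the ordered on-disk list; is_up and is_reachable
        # are caller-supplied checks (network-statuses and local firewall
        # config). Order matters, which is why the list is read and written
        # in order.
        chosen = []
        for helper in helper_list:
            if is_up(helper) and is_reachable(helper):
                chosen.append(helper)
                if len(chosen) == n:
                    break
        return chosen

    def drop_never_used_helpers(helper_list, ever_connected):
        # Helpers we have never actually connected to may be dropped
        # immediately (and are not saved to disk).
        return [h for h in helper_list if ever_connected(h)]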
How to deal with network down.
  - While all helpers are down/unreachable and there are no established or
    on-the-way testing circuits, launch a testing circuit. (Do this
    periodically in the same way we try to establish normal circuits when
    things are working normally.)
    (Testing circuits are a special type of circuit, that streams won't
    attach to by accident.)
  - When a testing circuit succeeds, mark all helpers up and hold the
    testing circuit open.
  - If a connection to a helper succeeds, close all testing circuits. Else
    mark that helper down and try another.
  - If the last helper is marked down and we already have a testing circuit
    established, then add the first hop of that testing circuit to the end
    of our helper node list, close that testing circuit, and go back to
    square one. (Actually, rather than closing the testing circuit, can we
    get away with converting it to a normal circuit and beginning to use it
    immediately?)
How to pick non-sucky helpers.
  - When we're picking a new helper node, don't use ones which aren't
    reachable according to our local ReachableAddresses configuration.
    (There's an attack here: if I pick my helper nodes in a very restrictive
    environment, say "ReachableAddresses 18.0.0.0/255.0.0.0:*", then
    somebody watching me use the network from another location will guess
    where I first joined the network. But let's ignore it for now.)
  - Right now we choose new helpers just like we'd choose any entry node:
    they must be "stable" (claim >1 day uptime) and "fast" (advertise >10kB
    capacity). In 0.1.1.11-alpha, clients let dirservers define "stable"
    and "fast" however they like, and they just believe them. So the next
    step is to make them a function of the current network: e.g. line up
    all the 'up' nodes in order and declare the top three-quarters to be
    stable, fast, etc., as long as they meet some minimum too.
  - If that's not sufficient (it won't be), dirservers should introduce a
    new status flag: in addition to "stable" and "fast", we should also
    describe certain nodes as "entry", meaning they are suitable to be
    chosen as a helper. The first difference would be that we'd demand the
    top half rather than the top three-quarters. Another requirement would
    be to look at "mean time between returning" to ensure that these nodes
    spend most of their time available. (Up for two days straight, once a
    month, is not good enough.)
  - Lastly, we need a function, given our current set of helpers and a
    directory of the rest of the network, that decides when our helper set
    has become "too crummy" and we need to add more. For example, this
    could be based on the currently advertised capacity of each of our
    helpers, and it would also be based on the user's preferences of speed
    vs. security.
***

Lasse wrote:
> I am a bit concerned with performance if we are to have e.g. two out of
> three helper nodes down or unreachable. How often should Tor check if
> they are back up and running?

Right now Tor believes a threshold of directory servers when deciding
whether each server is up. When Tor observes a server to be down (connection
failed or building the first hop of the circuit failed), it marks it as down
and doesn't try it again, until it gets a new network-status from somebody,
at which point it takes a new consensus and marks the appropriate servers
as up.

According to section 5.1 of dir-spec.txt, the client will try to fetch a new
network-status at least every 30 minutes, and more often in certain cases.

With the proposed scheme, we'll also mark all our helpers as up shortly
after the last one is marked down.
> When should there be
> added an extra node to the helper node list? This is kind of an
> important threshold?

I agree, this is an important question. I don't have a good answer yet. Is
it terrible, anonymity-wise, to add a new helper every time only one of your
helpers is up? Notice that I say add rather than replace -- so you'd only
use this fourth helper when one of your main three helpers is down, and if
three of your four are down, you'd add a fifth, but only use it when two of
the first four are down, etc.

In fact, this may be smarter than just picking a random node for your
testing circuit, because if your network goes up and down a lot, then
eventually you have a chance of using any entry node in the network for your
testing circuit.

We have a design choice here. Do we only try to use helpers for the
connections that will have streams on them (revealing our communication
partners), or do we also want to restrict the overall set of nodes that
we'll connect to, to discourage people from enumerating all Tor clients? I'm
increasingly of the belief that we want to hide our presence too, based on
the fact that Steven and George and others keep coming up with attacks that
start with "Assuming we know the set of users".
If so, then here's a revised "How to deal with network down" section:

  1) When a helper is marked down or the helper list shrinks, and as a
     result the total number of helpers that are either (up and reachable)
     or (reachable but never connected to) is <= 1, then pick a new helper
     and add it to the end of the list.
     [We count nodes that have never been connected to, since otherwise we
     might keep on adding new nodes before trying any of them. By
     "reachable" I mean "is allowed by ReachableAddresses".]
  2) When you fail to connect to a helper that has never been connected to,
     you remove it from the list right then (and the above rule might kick
     in).
  3) When you succeed at connecting to a helper that you've never connected
     to before, mark all reachable helpers earlier in the list as up, and
     close that circuit.
     [We close the circuit, since if the other helpers are now up, we prefer
     to use them for circuits that will reveal communication partners.]

This certainly seems simpler. Are there holes that I'm missing?
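
[Illustrative sketch, not from the Tor source: the following Python fragment
implements rules 1)-3) above over a list of helper dicts whose 'up',
'reachable', and 'ever_connected' fields are invented for this example.]

    def maybe_add_helper(helpers, pick_new_helper):
        # Rule 1: when the number of helpers that are (up and reachable) or
        # (reachable but never connected to) drops to <= 1, append a new one.
        usable = [h for h in helpers
                  if h["reachable"] and (h["up"] or not h["ever_connected"])]
        if len(usable) <= 1:
            helpers.append(pick_new_helper())

    def on_connect_failed(helpers, helper, pick_new_helper):
        # Rule 2: a never-connected helper that fails is removed right then;
        # otherwise the helper is just marked down. Either way rule 1 may
        # kick in.
        if not helper["ever_connected"]:
            helpers.remove(helper)
        else:
            helper["up"] = False
        maybe_add_helper(helpers, pick_new_helper)

    def on_connect_succeeded(helpers, helper):
        # Rule 3: a first successful connection to a new helper marks all
        # reachable helpers earlier in the list as up; the caller then
        # closes the testing circuit.
        idx = helpers.index(helper)
        helper["ever_connected"] = True
        helper["up"] = True
        for earlier in helpers[:idx]:
            if earlier["reachable"]:
                earlier["up"] = True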
> If running from a laptop you will meet different firewall settings, so
> how should Helper Nodes settings keep up with moving from an open
> ReachableAddresses to a FascistFirewall setting after the helper nodes
> have been selected?

I added the word "reachable" to three places in the above list, and I
believe that totally solves this question.

And as a bonus, it leads to an answer to Nick's attack ("If I pick my helper
nodes all on 18.0.0.0:*, then I move, you'll know where I bootstrapped") --
the answer is to pick your original three helper nodes without regard for
reachability. Then the above algorithm will add some more that are reachable
for you, and if you move somewhere, it's more likely (though not certain)
that some of the originals will become useful. Is that smart or just
complex?
> What happens if(when?) performance of the third node is bad?

My above solution solves this a little bit, in that we always try to have
two nodes available. But what if they are both up but bad? I'm not sure. As
my previous mail said, we need some function, given our list of helpers and
the network directory, that will tell us when we're in a bad situation. I
can imagine some simple versions of this function -- for example, when both
our working helpers are in the bottom half of the nodes, ranked by capacity.
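
[Illustrative sketch, not from the Tor source: one way to express the
"bottom half by capacity" check above, assuming routers are identified by
fingerprint strings and capacity_of maps a fingerprint to its advertised
capacity.]

    def helpers_look_crummy(working_helpers, all_fingerprints, capacity_of):
        # Report trouble when every working helper ranks in the bottom half
        # of the network by advertised capacity.
        if not working_helpers:
            return True
        ranked = sorted(all_fingerprints, key=capacity_of)
        bottom_half = set(ranked[: len(ranked) // 2])
        return all(h in bottom_half for h in working_helpers)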
But the hard part: what's the remedy when we decide there's something to
fix? Do we add a third, and now we have two crummy ones and a new one? Or do
we drop one or both of the bad ones?

Perhaps we believe the latest claim from the network-status consensus, and
we count a helper the dirservers believe is crummy as "not worth trying"
(equivalent to "not reachable under our current ReachableAddresses config")
-- and then the above algorithm would end up adding good ones, but we'd go
back to the originals if they resume being acceptable? That's an appealing
design. I wonder if it will cause the typical Tor user to have a helper node
list that comprises most of the network, though. I'm ok with this.
> Another point you might want to keep in mind, is the possibility to
> reuse the code in order to add a second layer helper node (meaning node
> number two) to "protect" the first layer (node number one) helper nodes.
> These nodes should be tied to each of the first layer nodes. E.g. there
> is one helper node list, as described in your mail, for each of the
> first layer nodes, following their create/destroy.

True. Does that require us to add a fourth hop to our path length, since the
first hop is from a limited set, the second hop is from a limited set, and
the third hop might also be constrained because, say, we're asking for an
unusual exit port?
> Another of the things might worth adding to the to do list is
> localization of server (helper) nodes. Making it possible to pick
> countries/regions where you do (not) want your helper nodes located. (As
> in "HelperNodesLocated us,!eu" etc.) I know this requires the use of
> external data and may not be worth it, but it _could_ be integrated at
> the directory servers only -- adding a list of node IP's and e.g. a
> country/region code to the directory and thus reduce the overhead. (?)
> Maybe extending the Family-term?

I think we are heading towards doing path selection based on geography, but
I don't have a good sense yet of how that will actually turn out -- that is,
with what mechanism Tor clients will learn the information they need. But
this seems to be something that is orthogonal to the rest of this
discussion, so I look forward to having somebody else solve it for us, and
fitting it in when it's ready. :)
> And I would like to keep an option to pick the first X helper nodes
> myself and then let Tor extend this list if these nodes are down (like
> EntryNodes in current code). Even if this opens up for some new types of
> "relationship" attacks.

Good idea. Here's how I'd like to name these:

The "EntryNodes" config option is a list of seed helper nodes. When we read
EntryNodes, any node listed in EntryNodes but not in the current helper node
list gets *pre*pended to the helper node list.

The "NumEntryNodes" config option (currently called NumHelperNodes)
specifies the number of up, reachable, good-enough helper nodes that will
make up the pool of possible choices for first hop, counted from the front
of the helper node list until we have enough.

The "UseEntryNodes" config option (currently called UseHelperNodes) tells us
to turn on all this helper node behavior. If you set EntryNodes, then this
option is implied.

The "StrictEntryNodes" config option, provided for backward compatibility
and for debugging, means a) we replace the helper node list with the current
EntryNodes list, and b) whenever we would do an operation that alters the
helper node list, we don't. (Yes, this means that if all the helper nodes
are down, we lose until we mark them up again. But this is how it behaves
now.)
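
[Illustrative sketch, not from the Tor source: the following Python fragment
models the EntryNodes/StrictEntryNodes merging behavior described above,
with node identities represented as plain strings.]

    def apply_entry_nodes_config(helper_list, entry_nodes, strict=False):
        if strict:
            # StrictEntryNodes: the helper node list is replaced by the
            # EntryNodes list, and later operations must not alter it.
            return list(entry_nodes)
        # Otherwise, any configured node not already in the helper list is
        # *pre*pended, preserving the configured order.
        new_nodes = [n for n in entry_nodes if n not in helper_list]
        return new_nodes + list(helper_list)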
> I am sure my next point has been asked before, but what about testing
> the current speed of the connections when looking for new helper nodes,
> not only testing the connectivity? I know this might contribute to a lot
> of overhead in the network, but if this only occur e.g. when using
> helper nodes as a Hidden Service it might not have that large an impact,
> but could help availability for the services?

If we're just going to be testing them when we're first picking them, then
it seems we can do the same thing by letting the directory servers test
them. This has the added benefit that all the (behaving) clients use the
same data, so they don't end up partitioned by a node that (for example)
performs selectively for his victims.

Another idea would be to periodically keep track of what speeds you get
through your helpers, and make decisions from this. The reason we haven't
done this yet is because there are a lot of variables -- perhaps the web
site is slow, perhaps some other node in the path is slow, perhaps your
local network is slow briefly, perhaps you got unlucky, etc. I believe that
over time (assuming the user has roughly the same browsing habits) all of
these would average out and you'd get a usable answer, but I don't have a
good sense of how long it would take to converge, so I don't know whether
this would be worthwhile.
> BTW. I feel comfortable with all the terms helper/entry/contact nodes,
> but I think you (the developers) should just pick one and stay with it
> to avoid confusion.

I think I'm going to try to co-opt the term 'Entry' node for this purpose.
We're going to have to keep referring to helper nodes for the research
community for a while though, so they realize that Tor does more than just
let users ask for certain entry nodes.
============================================================

Some stuff that worries me about entry guards. 2006 Jun, Nickm.

1. It is unlikely for two users to have the same set of entry guards.
2. Observing a user is sufficient to learn its entry guards.
3. So, as we move around, we leak our