path-spec.txt

$Id$

                           Tor Path Specification

                              Roger Dingledine
                              Nick Mathewson

Note: This is an attempt to specify Tor as currently implemented. Future
versions of Tor will implement improved algorithms.

This document tries to cover how Tor chooses to build circuits and assign
streams to circuits. Other implementations MAY take other approaches, but
implementors should be aware of the anonymity and load-balancing implications
of their choices.

THIS SPEC ISN'T DONE OR CORRECT. I'm just copying in relevant info so
far. The starred points are things we should cover, but not an exhaustive
list. -NM
1. General operation

   * We build some circuits preemptively, and some on-demand.
   * We attach streams to circuits greedily, and expire unused circuits
     after a timeout.

1b. Types of circuits.

   * Stable / Ordinary
   * Internal / Exit

2. Building circuits

   * Preemptive building
   * On-demand building
   * Cannibalizing circuits
   * Choosing the path first, building second.
   * Choosing the length of the circuit.
   * Choosing entries, midpoints, exits.
   * The .exit notation
   * ExitNodes, EntryNodes, StrictExitNodes, StrictEntryNodes.
   * What to do when an extend fails
   * Keeping track of 'expected' ports
   * And expected hidden service use (client-side and hidserv-side)
   * Backing off from circuit building when a long time has passed

3. Attaching streams to circuits

   * Including via the controller.
   * Timeouts and when Tor autoretries.
   * What stream-end reasons are appropriate for retrying.

4. Rendezvous circuits

5. Guard nodes

6. Testing circuits
(From some emails by arma)

Hi folks,

I've gotten the codebase to the point that I'm going to start trying
to make helper nodes work well. With luck they will be on by default in
the final 0.1.1.x release.

For background on helper nodes, read
http://wiki.noreply.org/noreply/TheOnionRouter/TorFAQ#RestrictedEntry

First order of business: the phrase "helper node" sucks. We always have
to define it after we say it to somebody. Nick likes the phrase "contact
node", because they are your point-of-contact into the network. That is
better than phrases like "bridge node". The phrase "fixed entry node"
doesn't seem to work with non-math people, because they wonder what was
broken about it. I'm sort of partial to the phrase "entry node" or maybe
"restricted entry node". In any case, if you have ideas on names, please
mail me off-list and I'll collate them.
Right now the code exists to pick helper nodes, store our choices to
disk, and use them for our entry nodes. But there are three topics
to tackle before I'm comfortable turning them on by default. First,
how to handle churn: since Tor nodes are not always up, and sometimes
disappear forever, we need a plan for replacing missing helpers in a
safe way. Second, we need a way to distinguish "the network is down"
from "all my helpers are down", also in a safe way. Lastly, we need to
examine the situation where a client picks three crummy helper nodes
and is forever doomed to a lousy Tor experience. Here's my plan:
How to handle churn.
  - Keep track of whether you have ever actually established a
    connection to each helper. Any helper node in your list that you've
    never used is ok to drop immediately. Also, we don't save that
    one to disk.
  - If all our helpers are down, we need more helper nodes: add a new
    one to the *end* of our list. Only remove dead ones when they have
    been gone for a very long time (months).
  - Pick from the first n (by default 3) helper nodes in your list
    that are up (according to the network-statuses) and reachable
    (according to your local firewall config).
  - This means that order matters when writing/reading them to disk.
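The churn rules above can be sketched as follows. This is an illustrative
Python sketch, not Tor's actual data structures or code; the field names
('used', 'listed', 'up', 'reachable') are hypothetical.

```python
# Hypothetical sketch of the churn rules above. Each helper is a dict;
# 'used' means we've ever connected to it, 'listed' means it still
# appears in the network-statuses.

def prune_helpers(helpers):
    """Drop helpers we've never actually connected to once they vanish
    from the network-statuses; never-used helpers aren't saved anyway."""
    return [h for h in helpers if h["used"] or h["listed"]]

def usable_helpers(helpers, n=3):
    """Pick the first n helpers that are up and reachable, preserving
    list order (which is why order matters when saving to disk)."""
    picked = []
    for h in helpers:
        if h["up"] and h["reachable"]:
            picked.append(h)
        if len(picked) == n:
            break
    return picked
```

Because selection always walks the list from the front, a helper's position
in the saved file directly determines how often it is used.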
How to deal with network down.
  - While all helpers are down/unreachable and there are no established
    or on-the-way testing circuits, launch a testing circuit. (Do this
    periodically in the same way we try to establish normal circuits
    when things are working normally.)
    (Testing circuits are a special type of circuit that streams won't
    attach to by accident.)
  - When a testing circuit succeeds, mark all helpers up and hold
    the testing circuit open.
  - If a connection to a helper succeeds, close all testing circuits.
    Else mark that helper down and try another.
  - If the last helper is marked down and we already have a testing
    circuit established, then add the first hop of that testing circuit
    to the end of our helper node list, close that testing circuit,
    and go back to square one. (Actually, rather than closing the
    testing circuit, can we get away with converting it to a normal
    circuit and beginning to use it immediately?)
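The first three rules form a small event-driven state machine, sketched
below. The state dict and handler names are hypothetical, chosen for
illustration; they do not correspond to Tor's internal circuit code.

```python
# Illustrative state machine for the testing-circuit rules above.
# state = {"helpers": [{"up": bool}, ...], "testing_circuit": None}

def maybe_launch_test(state):
    """Launch a testing circuit when all helpers look down and no
    testing circuit is already established or on the way."""
    all_down = not any(h["up"] for h in state["helpers"])
    if all_down and state["testing_circuit"] is None:
        state["testing_circuit"] = "pending"

def on_test_success(state):
    """A testing circuit worked: the network itself is up, so mark
    every helper up again and hold the testing circuit open."""
    state["testing_circuit"] = "open"
    for h in state["helpers"]:
        h["up"] = True

def on_helper_result(state, idx, ok):
    """A real helper connection succeeded (close testing circuits)
    or failed (mark that helper down and let the caller try another)."""
    if ok:
        state["testing_circuit"] = None
    else:
        state["helpers"][idx]["up"] = False
```

The key property is that a testing-circuit success only changes helper
*status*, never attaches streams, so no traffic leaks through a non-helper
first hop.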
How to pick non-sucky helpers.
  - When we're picking a new helper node, don't use ones which aren't
    reachable according to our local ReachableAddresses configuration.
    (There's an attack here: if I pick my helper nodes in a very
    restrictive environment, say "ReachableAddresses 18.0.0.0/255.0.0.0:*",
    then somebody watching me use the network from another location will
    guess where I first joined the network. But let's ignore it for now.)
  - Right now we choose new helpers just like we'd choose any entry
    node: they must be "stable" (claim >1 day uptime) and "fast" (advertise
    >10kB capacity). In 0.1.1.11-alpha, clients let dirservers define
    "stable" and "fast" however they like, and they just believe them.
    So the next step is to make them a function of the current network:
    e.g. line up all the 'up' nodes in order and declare the top
    three-quarters to be stable, fast, etc., as long as they meet some
    minimum too.
  - If that's not sufficient (it won't be), dirservers should introduce
    a new status flag: in addition to "stable" and "fast", we should
    also describe certain nodes as "entry", meaning they are suitable
    to be chosen as a helper. The first difference would be that we'd
    demand the top half rather than the top three-quarters. Another
    requirement would be to look at "mean time between returning" to
    ensure that these nodes spend most of their time available. (Up for
    two days straight, once a month, is not good enough.)
  - Lastly, we need a function, given our current set of helpers and a
    directory of the rest of the network, that decides when our helper
    set has become "too crummy" and we need to add more. For example,
    this could be based on currently advertised capacity of each of
    our helpers, and it would also be based on the user's preferences
    of speed vs. security.
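The "line up the nodes and take the top fraction, subject to a minimum"
idea above can be sketched as a short function. The fraction (3/4 for
fast/stable, 1/2 for "entry") and the absolute minimum are the values
mentioned in the text; everything else here is an illustrative assumption,
not what dirservers actually compute.

```python
# Sketch: rank running nodes by advertised bandwidth, flag the top
# `fraction` of them, but only if they also meet an absolute minimum
# (e.g. >10kB advertised capacity, per the text above).

def flag_nodes(nodes, fraction=0.75, min_bandwidth=10_000):
    """nodes: list of (name, advertised_bandwidth) for 'up' nodes.
    Returns the names that earn the flag."""
    ranked = sorted(nodes, key=lambda nb: nb[1], reverse=True)
    cutoff = max(1, int(len(ranked) * fraction))
    return [name for name, bw in ranked[:cutoff] if bw >= min_bandwidth]
```

Calling this with fraction=0.5 gives the stricter cut proposed for the
"entry" flag; a "mean time between returning" check would be an additional
filter on top of this.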
***

Lasse wrote:
> I am a bit concerned with performance if we are to have e.g. two out of
> three helper nodes down or unreachable. How often should Tor check if
> they are back up and running?

Right now Tor believes a threshold of directory servers when deciding
whether each server is up. When Tor observes a server to be down
(connection failed or building the first hop of the circuit failed),
it marks it as down and doesn't try it again, until it gets a new
network-status from somebody, at which point it takes a new consensus
and marks the appropriate servers as up.

According to sec 5.1 of dir-spec.txt, the client will try to fetch a new
network-status at least every 30 minutes, and more often in certain cases.

With the proposed scheme, we'll also mark all our helpers as up shortly
after the last one is marked down.
> When should there be
> added an extra node to the helper node list? This is kind of an
> important threshold?

I agree, this is an important question. I don't have a good answer yet. Is
it terrible, anonymity-wise, to add a new helper every time only one of
your helpers is up? Notice that I say add rather than replace -- so you'd
only use this fourth helper when one of your main three helpers is down,
and if three of your four are down, you'd add a fifth, but only use it
when two of the first four are down, etc.

In fact, this may be smarter than just picking a random node for your
testing circuit, because if your network goes up and down a lot, then
eventually you have a chance of using any entry node in the network for
your testing circuit.

We have a design choice here. Do we only try to use helpers for the
connections that will have streams on them (revealing our communication
partners), or do we also want to restrict the overall set of nodes that
we'll connect to, to discourage people from enumerating all Tor clients?
I'm increasingly of the belief that we want to hide our presence too,
based on the fact that Steven and George and others keep coming up with
attacks that start with "Assuming we know the set of users".
If so, then here's a revised "How to deal with network down" section:

  1) When a helper is marked down or the helper list shrinks, and as
     a result the total number of helpers that are either (up and
     reachable) or (reachable but never connected to) is <= 1, then pick
     a new helper and add it to the end of the list.
     [We count nodes that have never been connected to, since otherwise
     we might keep on adding new nodes before trying any of them. By
     "reachable" I mean "is allowed by ReachableAddresses".]
  2) When you fail to connect to a helper that has never been connected
     to, you remove him from the list right then (and the above rule
     might kick in).
  3) When you succeed at connecting to a helper that you've never
     connected to before, mark all reachable helpers earlier in the list
     as up, and close that circuit.
     [We close the circuit, since if the other helpers are now up, we
     prefer to use them for circuits that will reveal communication
     partners.]
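The three rules can be sketched directly; the helper fields ('up',
'reachable', 'ever_connected') are hypothetical names for illustration,
and one detail is an assumption spelled out in a comment: the helper we
just reached is itself marked up, since the connection evidently worked.

```python
# Sketch of the revised rules 1-3 above; not Tor's actual code.

def needs_new_helper(helpers):
    """Rule 1: add a new helper to the end of the list when the number
    of helpers that are (up and reachable) or (reachable but never
    connected to) drops to <= 1."""
    viable = sum(1 for h in helpers
                 if h["reachable"] and (h["up"] or not h["ever_connected"]))
    return viable <= 1

def on_first_connect_failed(helpers, idx):
    """Rule 2: a never-connected helper that fails is removed at once
    (which may in turn trigger rule 1)."""
    del helpers[idx]

def on_first_connect_succeeded(helpers, idx):
    """Rule 3: mark all reachable helpers earlier in the list as up;
    the caller then closes this circuit, preferring earlier helpers
    for circuits that will reveal communication partners.
    (Assumption: the helper we just reached is marked up too.)"""
    helpers[idx]["ever_connected"] = True
    helpers[idx]["up"] = True
    for h in helpers[:idx]:
        if h["reachable"]:
            h["up"] = True
```

Counting never-connected helpers as viable in rule 1 is what prevents the
list from growing unboundedly before any new candidate has been tried.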
This certainly seems simpler. Are there holes that I'm missing?

> If running from a laptop you will meet different firewall settings, so
> how should Helper Nodes settings keep up with moving from an open
> ReachableAddresses to a FascistFirewall setting after the helper nodes
> have been selected?

I added the word "reachable" to three places in the above list, and I
believe that totally solves this question.

And as a bonus, it leads to an answer to Nick's attack ("If I pick
my helper nodes all on 18.0.0.0:*, then I move, you'll know where I
bootstrapped") -- the answer is to pick your original three helper nodes
without regard for reachability. Then the above algorithm will add some
more that are reachable for you, and if you move somewhere, it's more
likely (though not certain) that some of the originals will become useful.

Is that smart or just complex?
> What happens if (when?) performance of the third node is bad?

My above solution solves this a little bit, in that we always try to
have two nodes available. But what if they are both up but bad? I'm not
sure. As my previous mail said, we need some function, given our list
of helpers and the network directory, that will tell us when we're in a
bad situation. I can imagine some simple versions of this function --
for example, when both our working helpers are in the bottom half of
the nodes, ranked by capacity.
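That simple version of the "too crummy" function might look like the
sketch below. The median-based cutoff and the function shape are
illustrative assumptions, not an implemented Tor heuristic.

```python
# Sketch: declare the helper set "too crummy" when every working
# helper falls in the bottom half of the network, ranked by
# advertised capacity. Purely illustrative.

def helpers_too_crummy(helper_bws, network_bws):
    """helper_bws: capacities of our up helpers;
    network_bws: capacities of all known nodes."""
    if not helper_bws:
        return True  # no working helpers at all is certainly bad
    median = sorted(network_bws)[len(network_bws) // 2]
    return all(bw < median for bw in helper_bws)
```

A user-tunable speed-vs-security preference could move the cutoff up or
down from the median.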
But the hard part: what's the remedy when we decide there's something
to fix? Do we add a third, and now we have two crummy ones and a new
one? Or do we drop one or both of the bad ones?

Perhaps we believe the latest claim from the network-status consensus,
and we count a helper the dirservers believe is crummy as "not worth
trying" (equivalent to "not reachable under our current ReachableAddresses
config") -- and then the above algorithm would end up adding good ones,
but we'd go back to the originals if they resume being acceptable? That's
an appealing design. I wonder if it will cause the typical Tor user to
have a helper node list that comprises most of the network, though. I'm
ok with this.
> Another point you might want to keep in mind, is the possibility to
> reuse the code in order to add a second layer helper node (meaning node
> number two) to "protect" the first layer (node number one) helper nodes.
> These nodes should be tied to each of the first layer nodes. E.g. there
> is one helper node list, as described in your mail, for each of the
> first layer nodes, following their create/destroy.

True. Does that require us to add a fourth hop to our path length,
since the first hop is from a limited set, the second hop is from a
limited set, and the third hop might also be constrained because, say,
we're asking for an unusual exit port?
> Another of the things that might be worth adding to the to-do list is
> localization of server (helper) nodes. Making it possible to pick
> countries/regions where you do (not) want your helper nodes located. (As
> in "HelperNodesLocated us,!eu" etc.) I know this requires the use of
> external data and may not be worth it, but it _could_ be integrated at
> the directory servers only -- adding a list of node IPs and e.g. a
> country/region code to the directory and thus reduce the overhead. (?)
> Maybe extending the Family term?

I think we are heading towards doing path selection based on geography,
but I don't have a good sense yet of how that will actually turn out --
that is, with what mechanism Tor clients will learn the information they
need. But this seems to be something that is orthogonal to the rest of
this discussion, so I look forward to having somebody else solve it for
us, and fitting it in when it's ready. :)
> And I would like to keep an option to pick the first X helper nodes
> myself and then let Tor extend this list if these nodes are down (like
> EntryNodes in current code). Even if this opens up for some new types of
> "relationship" attacks.

Good idea. Here's how I'd like to name these:

The "EntryNodes" config option is a list of seed helper nodes. When we
read EntryNodes, any node listed in EntryNodes but not in the current
helper node list gets *pre*pended to the helper node list.

The "NumEntryNodes" config option (currently called NumHelperNodes)
specifies the number of up, reachable, good-enough helper nodes that
will make up the pool of possible choices for first hop, counted from
the front of the helper node list until we have enough.

The "UseEntryNodes" config option (currently called UseHelperNodes)
tells us to turn on all this helper node behavior. If you set EntryNodes,
then this option is implied.

The "StrictEntryNodes" config option, provided for backward compatibility
and for debugging, means a) we replace the helper node list with the
current EntryNodes list, and b) whenever we would do an operation that
alters the helper node list, we don't. (Yes, this means that if all the
helper nodes are down, we lose until we mark them up again. But this is
how it behaves now.)
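Put together, the proposed options might look like this in a torrc. This
is a hypothetical sketch: NumEntryNodes and UseEntryNodes are the proposed
renames described above, not options that exist yet, and the nicknames are
made up.

```
## Hypothetical torrc fragment using the proposed option names.
EntryNodes nickname1,nickname2     # seed helpers, prepended to the list
NumEntryNodes 3                    # size of the first-hop pool
UseEntryNodes 1                    # implied whenever EntryNodes is set
#StrictEntryNodes 1                # debugging: freeze the list to EntryNodes
```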
> I am sure my next point has been asked before, but what about testing
> the current speed of the connections when looking for new helper nodes,
> not only testing the connectivity? I know this might contribute to a lot
> of overhead in the network, but if this only occurs e.g. when using
> helper nodes as a Hidden Service it might not have that large an impact,
> but could help availability for the services?

If we're just going to be testing them when we're first picking them,
then it seems we can do the same thing by letting the directory servers
test them. This has the added benefit that all the (behaving) clients
use the same data, so they don't end up partitioned by a node that
(for example) performs selectively for his victims.

Another idea would be to periodically keep track of what speeds you get
through your helpers, and make decisions from this. The reason we haven't
done this yet is because there are a lot of variables -- perhaps the
web site is slow, perhaps some other node in the path is slow, perhaps
your local network is slow briefly, perhaps you got unlucky, etc. I
believe that over time (assuming the user has roughly the same browsing
habits) all of these would average out and you'd get a usable answer,
but I don't have a good sense of how long it would take to converge,
so I don't know whether this would be worthwhile.
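One simple way to make noisy per-stream speed samples "average out", as
described above, is an exponentially weighted moving average per helper.
This is a sketch of that general technique, not something Tor does; the
decay constant is an arbitrary illustrative choice.

```python
# Sketch: blend each new throughput sample into a per-helper running
# average so that one slow web site or one unlucky path doesn't
# dominate the estimate.

def update_ewma(avg, sample, alpha=0.1):
    """Return the new running average; avg is None before any sample."""
    if avg is None:
        return sample
    return (1 - alpha) * avg + alpha * sample
```

How fast such an average converges to a usable answer is exactly the open
question raised above: a small alpha smooths out noise but takes many
samples to reflect a genuinely slow helper.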
> BTW. I feel comfortable with all the terms helper/entry/contact nodes,
> but I think you (the developers) should just pick one and stay with it
> to avoid confusion.

I think I'm going to try to co-opt the term 'Entry' node for this
purpose. We're going to have to keep referring to helper nodes for the
research community for a while though, so they realize that Tor does
more than just let users ask for certain entry nodes.