Filename: 115-two-hop-paths.txt
Title: Two Hop Paths
Version: $Revision$
Last-Modified: $Date$
Author: Mike Perry
Created:
Status: Open
Supersedes: 112

Overview:

The idea is that users should be able to choose whether they would like
two or three hop paths through the Tor network. Let us be clear: the
users who would choose this option should be those who are concerned
with IP obfuscation only, i.e. those who would not be targets of a
resource-intensive multi-node attack. It is sometimes said that these
users should find some network other than Tor to use. This is a foolish
suggestion: more users improve everyone's security, and the current
small userbase size is a critical hindrance to anonymity, as is
discussed below and in [1].

This value should be modifiable from the controller, and should be
available from Vidalia.

Motivation:

The Tor network is slow and overloaded. Increasingly often I hear
stories about friends and friends of friends who are behind firewalls,
annoying censorware, or under surveillance that interferes with their
productivity and Internet usage, or chills their speech. These people
know about Tor, but they choose to put up with the censorship because
Tor is too slow to be usable for them. In fact, to download a fresh,
complete copy of levine-timing.pdf for the Theoretical Argument section
of this proposal over Tor took me 3 tries.

Furthermore, the biggest current problem with Tor's anonymity for those
who really need it is not someone attacking the network to discover who
they are. It is instead the extreme danger that so few people use Tor,
because it is so slow, that those who do use it have essentially no
confusion set.

The recent case where the professor and the rogue Tor user were the
only Tor users on campus, and were thus both suspected in an incident
involving Tor and that university, underscores this point: "That was
why the police had come to see me. They told me that only two people on
our campus were using Tor: me and someone they suspected of engaging in
an online scam. The detectives wanted to know whether the other user
was a former student of mine, and why I was using Tor"[1].

Not only does Tor provide no anonymity if you use it to be anonymous
but are obviously from a certain institution, location, or
circumstance; it is also dangerous to use, since you risk being accused
of having something significant enough to hide that you are willing to
put up with the horrible performance rather than use some weaker
alternative.

There are many ways to improve the speed problem, and of course we
should and will implement as many as we can. Johannes's GSoC project
and my reputation system are longer term, higher-effort things that
will still provide benefit independent of this proposal.

However, reducing the path length to 2 for those who do not need the
extra anonymity 3 hops provide not only improves their Tor experience
but also reduces their load on the Tor network by 33%, and should
increase adoption of Tor by a good deal. That's not just Win-Win, it's
Win-Win-Win.

Who will enable this option?

This is the crux of the proposal. Admittedly, there is some anonymity
loss, and some decrease in the investment required on the part of the
adversary to attack 2 hop users versus 3 hop users, even if it is
minimal and limited mostly to up-front costs and false positives.

The key questions are:

1. Are these users in a class such that their risk is significantly
   less than the amount of this anonymity loss?

2. Are these users able to identify themselves?

Many, many users of Tor are not at risk of an adversary capturing c/n
nodes of the network just to see what they do. These users use Tor to
circumvent aggressive content filters, or simply to keep their IP out
of marketing and search engine databases. Most content filters have no
interest in running Tor nodes to catch violators, and marketers
certainly would never consider such a thing, both on a cost basis and a
legal one.

In a sense, this represents an alternate threat model for these users,
who are not at risk under Tor's normal threat model.

It should be evident to these users that they fall into this class. All
that should be needed is a radio button:

* "I use Tor for local content filter circumvention and/or IP
  obfuscation, not anonymity. Speed is more important to me than high
  anonymity. No one will make considerable efforts to determine my
  real IP."

* "I use Tor for anonymity and/or to circumvent national-level, legally
  enforced censorship. It is possible that effort will be taken to
  identify me, including but not limited to network surveillance. I
  need more protection."

and then some explanation in the help for exactly what this means, and
the risks involved in eliminating the adversary's need for timing
attacks, with respect to false positives. Ultimately, however, the
decision is a simple one that can be made without this information. The
user does not need Paul Syverson to instruct them on the deep magic of
Onion Routing to make this decision. They just need to know why they
use Tor. If they use it just to stay out of marketing databases and/or
bypass a local content filter, two hops are plenty. This is likely the
vast majority of Tor users, and many non-users we would like to bring
on board.

So, having established this class of users, let us now examine the
theoretical and practical risks we place them at, and determine whether
these risks violate the users' needs, or introduce additional risk to
node operators who may be subject to requests from law enforcement to
track users who need 3 hops, but use 2 because they enjoy the thrill of
Russian roulette.

Theoretical Argument:

It has long been established that timing attacks against mix and onion
networks are extremely effective, and that regardless of path length,
if the adversary has compromised the first and last hops of your path,
you can assume they have compromised your identity for that connection.

In fact, it has been demonstrated that for all but the slowest,
lossiest networks, error rates for false positives and false negatives
were very near zero[2]. Only for constant streams of traffic over slow
and (more importantly) extremely lossy network links did the error rate
hit 20%. For loss rates typical of the Internet, even the error rate
for slow nodes with constant traffic streams was 13%.

When you take into account that most Tor streams are not constant, but
probably much more like their "HomeIP" dataset, which consists mostly
of web traffic that exists over finite intervals at specific times,
error rates drop to fractions of 1%, even for the "worst" network
nodes.

Therefore, the user gains little benefit from the extra hop, assuming
the adversary does timing correlation on their nodes. Since timing
correlation is simply an implementation issue and is most likely a
single up-front cost (and one that is likely quite a bit cheaper than
the cost of the machines purchased to host the nodes to mount an
attack), the real protection is the low probability of getting both the
first and last hop of a client's stream.
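
As a concrete illustration (the numeric value below is an assumption
chosen for the example, not a measurement): if the adversary controls a
fraction c/n of entry and exit bandwidth, then with timing correlation
the chance of owning both ends of a given circuit is roughly

    P = (c/n)^2, \qquad \text{e.g. } c/n = 0.1 \Rightarrow P \approx 0.01

for 2 hop and 3 hop paths alike, since the middle hop contributes
nothing once both ends are correlated.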

Practical Issues:

Theoretical issues aside, there are several practical issues with the
implementation of Tor that need to be addressed to ensure that identity
information is not leaked by the implementation.

Exit policy issues:

If a client chooses an exit with a very restrictive exit policy (such
as a single IP or IP range), the first hop then knows a good deal about
the destination. For this reason, clients should not select exits whose
policies match their destination IP with anything other than "*".

Partitioning:

Partitioning attacks form another concern. Since Tor uses telescoping
to build circuits, the entry node and the local network can tell that a
user is constructing only two hop paths. An external adversary can thus
differentiate 2 hop and 3 hop users, and decide that all IP addresses
connecting to Tor and using 3 hops have something to hide, and should
be scrutinized more closely or outright apprehended.

One solution to this is to use the "leaky-circuit" method of attaching
streams: the user always creates 3-hop circuits, but if the option is
enabled, they always exit from their 2nd hop. The ideal solution would
be to create a RELAY_SHISHKABOB cell which contains onion skins for
every host along the path, but this requires protocol changes at the
nodes to support it.

Guard nodes:

Since guard nodes can rotate due to client relocation, network failure,
node upgrades, and other issues, if you amortize the risk that a
mobile, dialup, or otherwise intermittently connected user is exposed
to over any reasonable duration of Tor usage (on the order of a year),
it is the same with or without guard nodes. Assuming an adversary has a
fraction c/n of network bandwidth, and guards rotate on average with
period R, statistically speaking it is merely a question of whether the
user wishes their risk to be concentrated with probability c/n over an
expected period of R*c, and probability 0 over an expected period of
R*(n-c), versus a continuous risk of (c/n)^2. So statistically
speaking, guards only create a time-tradeoff of risk over the long run
for normal Tor usage. Rotating guards do not reduce risk for normal
client usage long term.[3]
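
Spelling that tradeoff out (a small worked check of the claim above,
not the proof referenced in [3]): the long-run average per-circuit risk
with rotating guards is

    (c/n)\cdot(c/n) \;+\; (1 - c/n)\cdot 0 \;=\; (c/n)^2

which equals the continuous risk without guards; only the distribution
of that risk over time changes.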

On the other hand, assuming a more stable method of guard selection and
preservation is devised, or a more stable client-side network than my
own is typical (mine rotates guards frequently due to network issues
and moving about), guard nodes provide a tradeoff in the form of c/n of
the users being "sacrificial users" who are exposed to a high risk
O(c/n) of identification, while the rest of the network is exposed to
zero risk.

The nature of Tor makes it likely that an adversary will take a "shock
and awe" approach to suppressing Tor by rounding up a few users whose
browsing activity has been observed, to be made into examples, in an
attempt to prove that Tor is not perfect.

Since this "shock and awe" attack can be applied with or without guard
nodes, stable guard nodes do offer a measure of accountability of
sorts. If a user was using a small set of guard nodes and knows them
well, and is then suddenly apprehended as a result of Tor usage, having
a fixed set of entry points to suspect is a lot better than suspecting
the whole network. Conversely, it can also give non-apprehended users
comfort that they are likely to remain safe indefinitely with their set
of (now presumably trusted) guards. This is probably the most
beneficial property of reliable guards: they deter the adversary from
mounting "shock and awe" attacks, because the surviving users will not
be intimidated, but instead made more confident. Of course, guards need
to be made much more stable and users need to be encouraged to know
their guards for this property to really take effect.

This beneficial property of client vigilance also carries over to an
active adversary, except that in this case, instead of relying on the
user to remember their guard nodes and somehow communicate them after
apprehension, the code can alert them to the presence of an active
adversary before they are apprehended. But only if they use guard
nodes.

So let's consider the active adversary: two hop paths allow malicious
guards to get considerably more benefit from failing circuits that do
not extend to their colluding peers for the exit hop. Since guards can
detect the number of hops in a path either via timing or by statistical
analysis of the exit policy of the 2nd hop, they can perform this
attack predominantly against 2 hop users.

This can be addressed by completely abandoning an entry guard after a
certain ratio of extend or general circuit failures with respect to
non-failed circuits. The proper value for this ratio can be determined
experimentally with TorFlow. There is the possibility that the local
network can abuse this feature to cause certain guards to be dropped,
but it can do that anyway with the current Tor by simply making guards
it doesn't like unreachable. With this mechanism, Tor will complain
loudly if any guard's failure rate exceeds the expected rate, whether
the failures are local or remote.

Eliminating guards entirely would not actually address this issue, due
to the time-tradeoff nature of the risk. In fact, it would just make it
worse. Without guard nodes, it becomes much more difficult for clients
to be alerted to Tor entry points that fail circuits in order to ensure
they only devote bandwidth to carrying traffic for streams for which
they observe both ends. Yet the rogue entry points are still able to
significantly increase their success rates by failing circuits.

For this reason, guard nodes should remain enabled for 2 hop users, at
least until an IP-independent, undetectable guard scanner can be
created. TorFlow can scan for failing guards, but after a while its
unique behavior gives away the fact that its IP is a scanner, and it
can be given selective service.

Consideration of risks for node operators:

There is a serious risk for two hop users in the form of guard
profiling. If an adversary running an exit node notices that a
particular site is always visited from a fixed previous hop, it is
likely that this is a two hop user using a certain guard, which could
then be monitored to determine their identity. Thus, for the protection
of both 2 hop users and node operators, 2 hop users should limit their
guard duration to a number of days sufficient to verify the reliability
of a node, but not much more. This duration can be determined
experimentally by TorFlow.

Considering that a Tor client builds on average 144 circuits per day
(one every 10 minutes), if the adversary owns a fraction c/n of exits
on the network, they can expect to see 144*c/n circuits from this user,
or about 14 minutes of usage per day per percentage point of network
penetration. Since it will take several occurrences of user-linkable
exit content from the same predecessor hop for the adversary to have
any confidence that this is a 2 hop user, it is very unlikely that any
sort of demands made upon the predecessor node would be guaranteed to
be effective (i.e. that it actually was a guard), let alone be executed
in time to apprehend the user before they rotated guards.
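
Making that arithmetic explicit (using the 10-minute circuit lifetime
assumed above):

    (24 \cdot 60)/10 = 144 \text{ circuits/day}, \qquad
    144 \cdot 0.01 \cdot 10 \text{ min} \approx 14 \text{ min/day at } c/n = 1\%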

The reverse risk also warrants consideration. If a malicious guard has
orders to surveil Mike Perry, it can determine that Mike Perry is using
two hops by observing his tendency to choose a 2nd hop with a viable
exit policy. This can be done relatively quickly, unfortunately, and
indicates that Mike Perry should spend some of his time building real 3
hop circuits through the same guards, to force them to at least wait
for him to actually use Tor to determine his style of operation, rather
than collect this information from his passive building patterns.

However, to actively determine where Mike Perry is going, the guard
will need to arrange logging ahead of time at the multiple exit nodes
he may use over the course of the few days while he is at that guard,
and correlate the usage times of those exit nodes with Mike Perry's
activity at that guard for the few days he uses it. At this point, the
adversary is mounting a scale and method of attack (widespread logging,
timing attacks) that works pretty much just as effectively against 3
hops, so exit node operators are exposed to no more danger than they
otherwise normally are.

Why not fix Pathlen=2?:

The main reason I am not advocating that we always use 2 hops is that
in some situations, timing correlation evidence by itself may not be
considered as solid and convincing as an actual, uninterrupted, fully
traced path. Are these timing attacks as effective on a real network as
they are in simulation? Maybe the circuit multiplexing of Tor can serve
to frustrate them to a degree? Would an extralegal adversary or
authoritarian government even care? In the face of these
situation-dependent unknowns, it should be up to the user to decide
whether this is a concern for them or not.

It should probably also be noted that even a false positive rate of 1%
for a 200k concurrent-user network could mean that, at a given node, a
given stream could be confused with something like 10 users, assuming
~200 nodes carry most of the traffic (i.e. 1000 users each). Though of
course to really know for sure, someone needs to do an attack on a real
network, unfortunately.
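
Spelling out that estimate under the stated assumptions:

    200000/200 = 1000 \text{ users per node}, \qquad
    0.01 \cdot 1000 = 10 \text{ candidate users per stream}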

Additionally, at some point cover traffic schemes may be implemented to
frustrate timing attacks on the first hop. It is possible that some
expert users already do this ad hoc, and may wish to continue using 3
hops for this reason.

Implementation:

new_route_len() can be modified directly with a check of the Pathlen
option. However, circuit construction logic should be altered so that
both 2 hop and 3 hop users build the same types of circuits, and the
option should ultimately govern circuit selection, not construction.
This improves coverage against guard nodes being able to passively
profile users who aren't even actively using Tor.
PathlenCoinWeight, anyone? :)
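
A minimal sketch of that selection-vs-construction split ("Pathlen" is
the option proposed in this document; options_sketch_t and the helper
names below are simplified stand-ins, not Tor's actual structures):

    /* Simplified stand-in for the relevant piece of Tor's config. */
    typedef struct {
      int Pathlen;          /* 2 or 3, set via the Vidalia radio button */
    } options_sketch_t;

    /* Everyone builds 3-hop circuits, so guards cannot passively tell
     * 2 hop users apart (see "Partitioning" above)... */
    static int
    route_len_sketch(const options_sketch_t *options)
    {
      (void) options;       /* construction ignores Pathlen entirely */
      return 3;
    }

    /* ...and Pathlen instead governs which hop streams exit from. */
    static int
    exit_hop_sketch(const options_sketch_t *options)
    {
      return (options->Pathlen == 2) ? 2 : 3;  /* "leaky" exit at hop 2 */
    }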

The exit policy hack is a bit more tricky. compare_addr_to_addr_policy
needs to return an alternate ADDR_POLICY_ACCEPTED_WILDCARD or
ADDR_POLICY_ACCEPTED_SPECIFIC return value for use in
circuit_is_acceptable.
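
A sketch of the wildcard/specific distinction described above (the two
ACCEPTED values are the ones proposed here; the policy structure, the
REJECTED value, and the matching logic are simplified placeholders
rather than Tor's real policy code):

    #include <stdint.h>

    /* Simplified policy entry: maskbits == 0 stands for a "*" rule. */
    typedef struct {
      unsigned int accept : 1;
      uint32_t addr;
      int maskbits;
    } policy_sketch_t;

    typedef enum {
      ADDR_POLICY_REJECTED = 0,
      ADDR_POLICY_ACCEPTED_WILDCARD,  /* accepted by a "*" rule */
      ADDR_POLICY_ACCEPTED_SPECIFIC   /* accepted by an IP/range rule */
    } policy_result_sketch_t;

    /* First matching rule wins; report *how* the destination was
     * accepted, so circuit selection can refuse SPECIFIC matches for
     * 2 hop users (the first hop would otherwise learn too much). */
    static policy_result_sketch_t
    compare_addr_sketch(uint32_t dest, const policy_sketch_t *p, int n)
    {
      int i;
      for (i = 0; i < n; ++i) {
        uint32_t mask =
          p[i].maskbits ? ~((1u << (32 - p[i].maskbits)) - 1u) : 0u;
        if ((dest & mask) == (p[i].addr & mask)) {
          if (!p[i].accept)
            return ADDR_POLICY_REJECTED;
          return p[i].maskbits ? ADDR_POLICY_ACCEPTED_SPECIFIC
                               : ADDR_POLICY_ACCEPTED_WILDCARD;
        }
      }
      return ADDR_POLICY_REJECTED;
    }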

The leaky exit is trickier still. handle_control_attachstream does
allow paths to exit at a given hop. Presumably something similar can be
done in connection_ap_handshake_process_socks, and elsewhere? Circuit
construction would also have to be performed such that the 2nd hop's
exit policy is what is considered, not the 3rd's.
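
A rough sketch of how stream attachment might honor the option (purely
illustrative; the types and helper below are invented stand-ins, not
the connection_ap_handshake_* code itself):

    /* Minimal stand-in for a built 3-hop path: hop[0] is the guard. */
    typedef struct {
      int policy_accepts_dest;   /* outcome of the exit-policy check */
    } hop_sketch_t;

    typedef struct {
      hop_sketch_t hop[3];
    } cpath_sketch_t;

    /* When Pathlen is 2, attach streams so they exit at hop[1], and
     * judge the circuit by *that* hop's exit policy rather than the
     * 3rd hop's.  Returns the exit hop index, or -1 if this circuit
     * should not be used for the stream. */
    static int
    choose_exit_hop_sketch(const cpath_sketch_t *circ, int pathlen)
    {
      int exit_idx = (pathlen == 2) ? 1 : 2;
      if (!circ->hop[exit_idx].policy_accepts_dest)
        return -1;
      return exit_idx;
    }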

The entry_guard_t structure could have num_circ_failed and
num_circ_succeeded members, such that if a guard exceeds an F% circuit
extend failure rate to the second hop, it is removed from the entry
list. F should be sufficiently high to avoid churn from normal Tor
circuit failure, as determined by TorFlow scans.
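
A sketch of that guard-expiration bookkeeping (the counter names follow
this proposal; the struct is a pared-down stand-in for entry_guard_t,
and the F and minimum-sample values are placeholders pending the
TorFlow measurements described in the Migration plan below):

    typedef struct {
      char nickname[20];
      unsigned int num_circ_failed;     /* extends to a 2nd hop that failed */
      unsigned int num_circ_succeeded;  /* extends that completed */
    } entry_guard_sketch_t;

    /* Placeholder: the real F should come from TorFlow measurements of
     * normal circuit failure rates. */
    #define GUARD_FAILURE_RATIO_F 0.30
    #define GUARD_MIN_SAMPLES     20    /* don't judge on too few circuits */

    /* Return 1 if this guard's extend-failure ratio is high enough that
     * it should be dropped from the entry list (and the user warned). */
    static int
    guard_should_be_dropped_sketch(const entry_guard_sketch_t *g)
    {
      unsigned int total = g->num_circ_failed + g->num_circ_succeeded;
      if (total < GUARD_MIN_SAMPLES)
        return 0;
      return ((double) g->num_circ_failed / total) > GUARD_FAILURE_RATIO_F;
    }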

The Vidalia option should be presented as a radio button.

Migration:

Phase 1: Adjust exit policy checks if Pathlen is set, implement the
leaky circuit ability, and add 2-3 hop circuit selection logic governed
by Pathlen.

Phase 2: Experiment to determine the proper ratio of circuit failures
used to expire garbage or malicious guards via TorFlow (pending Bug
#440 backport+adoption).

Phase 3: Implement guard expiration code to kick off failure-prone
guards and warn the user. Cap 2 hop guard duration to a number of days
determined sufficient to establish guard reliability (to be determined
by TorFlow).

Phase 4: Add the radio button to Vidalia, along with a help entry that
explains the risks involved in layman's terms.

Phase 5: Allow the user to specify path length by HTTP URL suffix.

[1] http://p2pnet.net/story/11279

[2] http://www.cs.umass.edu/~mwright/papers/levine-timing.pdf

[3] Proof available upon request ;)