Filename: 115-two-hop-paths.txt
Title: Two Hop Paths
Version: $Revision$
Last-Modified: $Date$
Author: Mike Perry
Created:
Status: Open
Supersedes: 112

Overview:

  The idea is that users should be able to choose if they would like
  to have either two or three hop paths through the Tor network. This
  value should be modifiable from the controller, and should be
  available from Vidalia.

Motivation:

  The Tor network is slow and overloaded. Increasingly often I hear
  stories about friends and friends of friends who are behind firewalls,
  annoying censorware, or under surveillance that interferes with their
  productivity and Internet usage, or chills their speech. These people
  know about Tor, but they choose to put up with the censorship because
  Tor is too slow to be usable for them. In fact, to download a fresh,
  complete copy of levine-timing.pdf for the Theoretical Argument
  section of this proposal over Tor took me 3 tries.

  Furthermore, the biggest current problem with Tor's anonymity for
  those who really need it is not someone attacking the network to
  discover who they are. It is instead the extreme danger that so few
  people use Tor (because it is so slow) that those who do use it have
  essentially no confusion set.

  The recent case in which a professor and a rogue Tor user were the
  only Tor users on campus, and were thus both suspected in an incident
  involving Tor at that university, underscores this point: "That was
  why the police had come to see me. They told me that only two people
  on our campus were using Tor: me and someone they suspected of
  engaging in an online scam. The detectives wanted to know whether the
  other user was a former student of mine, and why I was using Tor" [1].

  Not only does Tor provide no anonymity if you use it to be anonymous
  but are obviously from a certain institution, location, or
  circumstance; using it is also dangerous, because you risk being
  accused of having something significant enough to hide that you are
  willing to put up with the horrible performance.

  There are many ways to improve the speed problem, and of course we
  should and will implement as many as we can. Johannes's GSoC project
  and my reputation system are longer term, higher-effort things that
  will still provide benefit independent of this proposal.

  However, reducing the path length to 2 for those who do not need the
  (questionable) extra anonymity 3 hops provide not only improves their
  Tor experience but also reduces their load on the Tor network by 33%,
  and can be done in less than 10 lines of code (not counting various
  security enhancements). That's not just Win-Win, it's Win-Win-Win.

Theoretical Argument:

  It has long been established that timing attacks against mix and
  onion networks are extremely effective, and that regardless of path
  length, if the adversary has compromised the first and last hops of
  your path, you can assume they have compromised your identity for
  that connection.

  In fact, it was demonstrated that for all but the slowest, lossiest
  networks, error rates for false positives and false negatives were
  very near zero [2]. Only for constant streams of traffic over slow and
  (more importantly) extremely lossy network links did the error rate
  hit 20%. For loss rates typical to the Internet, even the error rate
  for slow nodes with constant traffic streams was 13%.

  When you take into account that most Tor streams are not constant,
  but probably much more like their "HomeIP" dataset, which consists
  mostly of web traffic that exists over finite intervals at specific
  times, error rates drop to fractions of 1%, even for the "worst"
  network nodes.

  Therefore, the user gains little benefit from the extra hop, assuming
  the adversary performs timing correlation on their nodes. Since timing
  correlation is simply an implementation issue and is most likely a
  single up-front cost (and one that is likely quite a bit cheaper than
  the cost of the machines purchased to host the nodes to mount an
  attack), the real protection is the low probability of getting both
  the first and last hop of a client's stream.

Practical Issues:

  Theoretical issues aside, there are several practical issues with the
  implementation of Tor that need to be addressed to ensure that
  identity information is not leaked by the implementation.

Exit policy issues:

  If a client chooses an exit with a very restrictive exit policy
  (such as an IP or IP range), the first hop then knows a good deal
  about the destination. For this reason, clients should not select
  exits that match their destination IP with anything other than "*".

Partitioning:

  Partitioning attacks form another concern. Since Tor uses telescoping
  to build circuits, it is possible to tell that a user is constructing
  only two hop paths at the entry node and on the local network. An
  external adversary can potentially differentiate 2 and 3 hop users,
  and decide that all IP addresses connecting to Tor and using 3 hops
  have something to hide, and should be scrutinized more closely or
  outright apprehended.

  One solution to this is to use the "leaky-circuit" method of attaching
  streams: the user always creates 3-hop circuits, but if the option is
  enabled, streams always exit from the 2nd hop. The ideal solution
  would be to create a RELAY_SHISHKABOB cell which contains onion skins
  for every host along the path, but this would require protocol changes
  at the nodes to support it.

Guard nodes:

  Since guard nodes do rotate due to network failure, node upgrades and
  other issues, if you amortize the risk a user is exposed to over any
  reasonable duration of Tor usage (on the order of a year), it is the
  same with or without guard nodes. Assuming an adversary controls a
  fraction c/n of network bandwidth, and guards rotate on average with
  period R, it is statistically speaking merely a question of whether
  the user wishes their risk to be concentrated with probability c/n
  over an expected period of R*c, and probability 0 over an expected
  period of R*(n-c), versus a continuous risk of (c/n)^2. So
  statistically speaking, guards only create a time-tradeoff of risk
  over the long run for normal Tor usage. They do not reduce risk for
  normal client usage long term [3].
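
  To make this time-tradeoff concrete, here is a toy calculation (a
  sketch with made-up numbers, not part of the proposal's analysis)
  assuming the adversary controls a fraction f = c/n of both entry and
  exit bandwidth:

    /* Toy illustration of the guard time-tradeoff described above.
     * The adversary fraction f = c/n is an assumed example value. */
    #include <stdio.h>

    int main(void)
    {
      double f = 0.10;             /* assumed adversary fraction c/n */

      /* Without guards: every circuit independently risks f*f. */
      double no_guard_risk = f * f;

      /* With guards: with probability f the chosen guard is hostile,
       * and then a fraction f of circuits (those with a hostile exit)
       * are compromised; with probability 1-f the guard is honest and
       * the risk is zero.  The long-run average is the same f*f. */
      double with_guard_avg = f * f + (1.0 - f) * 0.0;

      printf("per-circuit risk without guards:     %.4f\n", no_guard_risk);
      printf("long-run average risk with guards:   %.4f\n", with_guard_avg);
      printf("conditional risk while guard is bad: %.4f\n", f);
      return 0;
    }

  With f = 0.10, both long-run averages come out to 0.01; what differs
  is whether that 1% is spread evenly over all circuits or concentrated
  into the periods when the chosen guard happens to be hostile.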

  Guard nodes do offer a measure of accountability of sorts. If a user
  has been using a small set of guard nodes and is suddenly apprehended
  as a result of Tor usage, having a fixed set of entry points to
  suspect is a lot better than suspecting the whole network.

  It has been speculated that a set of guard nodes can be used to
  fingerprint a user (presumably by a local adversary) when they move
  about. However, it is precisely this activity of moving your laptop
  that causes guards to be marked as down by the Tor client, which then
  chooses new ones.

  All of this is not terribly relevant to this proposal, but worth
  bearing in mind, since guard nodes do have a bit more ability to wreak
  havoc with two hops than with three.

  Two hop paths allow malicious guards to get considerably more benefit
  from failing circuits if they do not extend to their colluding peers
  for the exit hop. Since guards can detect the number of hops in a path
  via either timing or by statistical analysis of the exit policy of the
  2nd hop, they can perform this attack predominantly against 2 hop
  users only.

  This can be addressed by completely abandoning an entry guard after a
  certain ratio of extend or general circuit failures with respect to
  non-failed circuits. The proper value for this ratio can be determined
  experimentally with TorFlow. There is the possibility that the local
  network can abuse this feature to cause certain guards to be dropped,
  but they can do that anyway with the current Tor by simply making
  guards they don't like unreachable. With this mechanism, Tor will
  complain loudly if any guard's failure rate exceeds the expected rate,
  whether the failures are caused locally or remotely.

  Eliminating guards entirely would not actually address this issue, due
  to the time-tradeoff nature of risk. In fact, it would just make it
  worse. Without guard nodes, it becomes much more difficult for clients
  to notice Tor entry points that fail circuits in order to devote
  bandwidth only to streams for which they can observe both ends.

  For this reason, guard nodes should remain enabled for 2 hop users,
  at least until an IP-independent, undetectable guard scanner can be
  created. TorFlow can scan for failing guards, but after a while, its
  unique behavior gives away the fact that its IP is a scanner and it
  can be given selective service.

Why not fix Pathlen=2?:

  The main reason I am not advocating that we always use 2 hops is that
  in some situations, timing correlation evidence by itself may not be
  considered as solid and convincing as an actual, uninterrupted, fully
  traced path. Are these timing attacks as effective on a real network
  as they are in simulation? Maybe the circuit multiplexing of Tor can
  serve to frustrate them to a degree? Would an extralegal adversary or
  authoritarian government even care? In the face of these
  situation-dependent unknowns, it should be up to the user to decide
  whether this is a concern for them or not.

  It should probably also be noted that even a false positive rate of 1%
  for a 200k concurrent-user network could mean that, for a given node,
  a given stream could be confused with something like 10 other users,
  assuming ~200 nodes carry most of the traffic (i.e. roughly 1000 users
  each, 1% of which is 10). Though of course, to really know for sure,
  someone needs to do an attack on a real network, unfortunately.

  Additionally, at some point cover traffic schemes may be implemented
  to frustrate timing attacks on the first hop. It is possible some
  expert users may do this ad-hoc already, and may wish to continue
  using 3 hops for this reason.

Who will enable this option?

  This is the crux of the proposal. Admittedly, there is some anonymity
  loss and some degree of decreased investment required on the part of
  the adversary to attack 2 hop users versus 3 hop users, even if it is
  minimal and limited mostly to up-front costs and false positives.

  The key questions are:

  1. Are these users in a class such that their risk is significantly
     less than the amount of this anonymity loss?

  2. Are these users able to identify themselves?

  Many, many users of Tor are not at risk for an adversary capturing c/n
  nodes of the network just to see what they do. These users use Tor to
  circumvent aggressive content filters, or simply to keep their IP out
  of marketing and search engine databases. Most content filters have no
  interest in running Tor nodes to catch violators, and marketers
  certainly would never consider such a thing, both on a cost basis and
  a legal one.

  In a sense, this represents an alternate threat model against these
  users who are not at risk under Tor's normal threat model.

  It should be evident to these users that they fall into this class.
  All that should be needed is a radio button:

    * "I use Tor for censorship resistance and IP obfuscation, not
      anonymity. Speed is more important to me than high anonymity."

    * "I use Tor for anonymity. I need more protection at the cost of
      speed."

  and then some explanation in the help for exactly what this means, and
  the risks involved with eliminating the adversary's need for timing
  attacks with respect to false positives.

Implementation:

  new_route_len() can be modified directly with a check of the Pathlen
  option.
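
  As a rough sketch of the intended check (a simplified stand-in, not
  the actual new_route_len() from the Tor source; the "Pathlen" option
  handling and the default constant here are assumptions):

    /* Simplified stand-in: the real new_route_len() also accounts for
     * internal circuits and the set of available routers. */
    #define DEFAULT_ROUTE_LEN 3

    static int
    new_route_len(int pathlen_option)
    {
      /* Honor a configured Pathlen of 2 or 3; otherwise fall back to
       * the usual 3-hop default. */
      if (pathlen_option == 2 || pathlen_option == 3)
        return pathlen_option;
      return DEFAULT_ROUTE_LEN;
    }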

  The exit policy hack is a bit more tricky. compare_addr_to_addr_policy
  needs to return an alternate ADDR_POLICY_ACCEPTED_WILDCARD or
  ADDR_POLICY_ACCEPTED_SPECIFIC return value for use in
  circuit_is_acceptable.
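
  A sketch of how that distinction might look (the enum and helper below
  are illustrative stand-ins; only the two ACCEPTED_* names come from
  this proposal):

    /* Distinguish "accepted by a wildcard rule" from "accepted by a
     * rule naming the destination specifically", so circuit selection
     * can refuse the latter for 2-hop paths. */
    typedef enum {
      ADDR_POLICY_REJECTED,
      ADDR_POLICY_ACCEPTED_WILDCARD,  /* matched an "accept *:port" rule */
      ADDR_POLICY_ACCEPTED_SPECIFIC   /* matched an IP or IP-range rule  */
    } addr_policy_result_t;

    /* A 2-hop circuit would only be acceptable for a stream if the exit
     * accepted the destination via a wildcard rule. */
    static int
    acceptable_for_two_hops(addr_policy_result_t r)
    {
      return r == ADDR_POLICY_ACCEPTED_WILDCARD;
    }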

  The leaky exit is trickier still. handle_control_attachstream does
  allow paths to exit at a given hop. Presumably something similar can
  be done in connection_ap_handshake_process_socks, and elsewhere?
  Circuit construction would also have to be performed such that the
  2nd hop's exit policy is what is considered, not the 3rd's.
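
  The core of the leaky-circuit idea could look something like the
  following sketch (the hop structure is a simplified stand-in for Tor's
  cpath list, and the function name is made up):

    /* Walk the circuit's hop list and return the hop streams should
     * leave from: hop 2 when the (assumed) Pathlen option is 2, even
     * though the circuit itself still has 3 hops. */
    struct hop {
      struct hop *next;
      /* keys, extend info, etc. omitted */
    };

    static struct hop *
    leaky_exit_hop(struct hop *cpath_head, int pathlen)
    {
      struct hop *hop = cpath_head;     /* hop 1: the entry guard */
      for (int i = 1; i < pathlen; i++)
        hop = hop->next;                /* stop at hop 'pathlen'  */
      return hop;
    }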

  The entry_guard_t structure could have num_circ_failed and
  num_circ_succeeded members, so that if a guard's circuit extend
  failure rate to a second hop exceeds F%, it is removed from the entry
  list. F should be sufficiently high to avoid churn from normal Tor
  circuit failure, as determined by TorFlow scans.
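
  A minimal sketch of that bookkeeping (the struct is a stand-in for
  entry_guard_t with only the two counters named above; the threshold
  and minimum-sample values are placeholders to be measured with
  TorFlow):

    #define MAX_FAIL_RATIO 0.30   /* placeholder for F%                  */
    #define MIN_CIRCUITS   20     /* don't judge a guard on few attempts */

    typedef struct {
      unsigned num_circ_failed;     /* failed extends to a second hop  */
      unsigned num_circ_succeeded;  /* circuits successfully extended  */
    } guard_stats_t;

    static int
    guard_should_be_dropped(const guard_stats_t *g)
    {
      unsigned total = g->num_circ_failed + g->num_circ_succeeded;
      if (total < MIN_CIRCUITS)
        return 0;  /* not enough data yet */
      return (double)g->num_circ_failed / total > MAX_FAIL_RATIO;
    }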

  The Vidalia option should be presented as a radio button.

Migration:

  Phase 1: Adjust exit policy checks if Pathlen is set. Modify
           new_route_len() to obey a 'Pathlen' config option.

  Phase 2: Implement leaky circuit ability.

  Phase 3: Experiment to determine the proper ratio of circuit failures
           used to expire garbage or malicious guards via TorFlow
           (pending Bug #440 backport+adoption).

  Phase 4: Implement guard expiration code to kick off failure-prone
           guards and warn the user.

  Phase 5: Add the radio button to Vidalia, along with a help entry that
           explains in layman's terms the risks involved.

  Phase 6: Allow the user to specify the path length by HTTP URL suffix.

  [1] http://p2pnet.net/story/11279
  [2] http://www.cs.umass.edu/~mwright/papers/levine-timing.pdf
  [3] Proof available upon request ;)