
\documentclass{llncs}
\usepackage{url}
\usepackage{amsmath}
\usepackage{epsfig}
%\setlength{\textwidth}{5.9in}
%\setlength{\textheight}{8.4in}
%\setlength{\topmargin}{.5cm}
%\setlength{\oddsidemargin}{1cm}
%\setlength{\evensidemargin}{1cm}
\newenvironment{tightlist}{\begin{list}{$\bullet$}{
\setlength{\itemsep}{0mm}
\setlength{\parsep}{0mm}
% \setlength{\labelsep}{0mm}
% \setlength{\labelwidth}{0mm}
% \setlength{\topsep}{0mm}
}}{\end{list}}

\begin{document}

\title{Design of a blocking-resistant anonymity system}
%\author{Roger Dingledine\inst{1} \and Nick Mathewson\inst{1}}
\author{Roger Dingledine \and Nick Mathewson}
\institute{The Free Haven Project\\
\email{\{arma,nickm\}@freehaven.net}}
\maketitle
\pagestyle{plain}
\begin{abstract}
Websites around the world are increasingly being blocked by
government-level firewalls. Many people use anonymizing networks like
Tor to contact sites without letting an attacker trace their activities,
and as an added benefit they are no longer affected by local censorship.
But if the attacker simply denies access to the Tor network itself,
blocked users can no longer benefit from the security Tor offers.
Here we describe a design that builds upon the current Tor network
to provide an anonymizing network that resists blocking
by government-level attackers.
\end{abstract}
\section{Introduction and Goals}

Anonymizing networks such as Tor~\cite{tor-design} bounce traffic around
a network of relays. They aim to hide not only what is being said, but
also who is communicating with whom, which users are using which websites,
and so on. These systems have a broad range of users, including ordinary
citizens who want to avoid being profiled for targeted advertisements,
corporations who don't want to reveal information to their competitors,
and law enforcement and government intelligence agencies who need to do
operations on the Internet without being noticed.

Historically, research on anonymizing systems has assumed a passive
attacker who monitors the user (call her Alice) and tries to discover her
activities, yet lets her reach any piece of the network. In more modern
threat models such as Tor's, the adversary is allowed to perform active
attacks such as modifying communications in hopes of tricking Alice
into revealing her destination, or intercepting some of her connections
to run a man-in-the-middle attack. But these systems still assume that
Alice can eventually reach the anonymizing network.

An increasing number of users are making use of the Tor software
not so much for its anonymity properties but for its censorship
resistance properties --- if they access Internet sites like Wikipedia
and Blogspot via Tor, they are no longer affected by local censorship
and firewall rules. In fact, an informal user study (described in
Appendix~\ref{app:geoip}) showed China as the third largest user base
for Tor clients, with perhaps ten thousand people accessing the Tor
network from China each day.

The current Tor design is easy to block if the attacker controls Alice's
connection to the Tor network --- by blocking the directory authorities,
by blocking all the server IP addresses in the directory, or by filtering
based on the signature of the Tor TLS handshake. Here we describe a
design that builds upon the current Tor network to provide an anonymizing
network that also resists this blocking. Specifically,
Section~\ref{sec:adversary} discusses our threat model --- that is,
the assumptions we make about our adversary; Section~\ref{sec:current-tor}
describes the components of the current Tor design and how they can be
leveraged for a new blocking-resistant design; Section~\ref{sec:related}
explains the features and drawbacks of the currently deployed solutions;
and ...

%And adding more different classes of users and goals to the Tor network
%improves the anonymity for all Tor users~\cite{econymics,usability:weis2006}.
\section{Adversary assumptions}
\label{sec:adversary}

The history of blocking-resistance designs is littered with conflicting
assumptions about what adversaries to expect and what problems are
in the critical path to a solution. Here we try to enumerate our best
understanding of the current situation around the world.

In the traditional security style, we aim to describe a strong attacker
--- if we can defend against this attacker, we inherit protection
against weaker attackers as well. After all, we want a general design
that will work for people in China, people in Iran, people in Thailand,
whistleblowers in firewalled corporate networks, and people in whatever
turns out to be the next oppressive situation. In fact, by designing with
a variety of adversaries in mind, we can take advantage of the fact that
adversaries will be in different stages of the arms race at each location.

We assume there are three main network attacks in use by censors
currently~\cite{clayton:pet2006}:
\begin{tightlist}
\item Block destination by automatically searching for certain strings
in TCP packets.
\item Block destination by manually listing its IP address at the
firewall.
\item Intercept DNS requests and give bogus responses for certain
destination hostnames.
\end{tightlist}
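
As a concrete illustration of the first attack, a censor's string
matching can be sketched as a simple payload scan. This is our own
minimal sketch: the blocked strings are hypothetical examples, and real
firewalls perform this matching in specialized hardware.

```python
# Minimal sketch of a censor's keyword filter (the first attack above).
# The blocklist entries are hypothetical examples.
BLOCKED_STRINGS = [b"falun", b"tiananmen"]

def should_block(tcp_payload: bytes) -> bool:
    """Return True if the payload contains any blocked string."""
    payload = tcp_payload.lower()
    return any(s in payload for s in BLOCKED_STRINGS)
```

A filter that matches per-packet in this way is exactly what the
packet-splitting evasion discussed in Section~\ref{sec:related} targets:
a keyword that straddles two packets never appears whole in either
payload.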
We assume the network firewall has very limited CPU per
connection~\cite{clayton:pet2006}. Against an adversary who spends
hours looking through the contents of each packet, we would need
some stronger mechanism such as steganography, which introduces its
own problems~\cite{active-wardens,tcpstego,bar}.

More broadly, we assume that the chance that the authorities try to
block a given system grows as its popularity grows. That is, a system
used by only a few users will probably never be blocked, whereas a
well-publicized system with many users will receive much more scrutiny.

We assume that readers of blocked content are not in as much danger
as publishers. So far in places like China, the authorities mainly go
after people who publish materials and coordinate organized movements
against the state~\cite{mackinnon}. If they find that a user happens
to be reading a site that should be blocked, the typical response is
simply to block the site. Of course, even with an encrypted connection,
the adversary may be able to distinguish readers from publishers by
observing whether Alice is mostly downloading bytes or mostly uploading
them --- we discuss this issue more in Section~\ref{subsec:upload-padding}.
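
To make this observation concrete, the adversary's direction test might
look like the following sketch. The function and the 5x threshold are
our invention for illustration, not a measured classifier.

```python
# Sketch: an observer guessing reader vs. publisher from traffic
# direction alone. The 5x threshold is an invented example.
def looks_like_publisher(bytes_up: int, bytes_down: int) -> bool:
    """Guess 'publisher' when uploads dominate downloads."""
    return bytes_up > 5 * bytes_down
```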
We assume that while various different regimes can coordinate and share
notes, there will be a significant time lag between one attacker learning
how to overcome a facet of our design and other attackers picking it up.
Similarly, we assume that in the early stages of deployment the insider
threat isn't as high a risk, because no attackers have put serious
effort into breaking the system yet.

We assume that government-level attackers are not always uniform across
the country. For example, there is no single centralized place in China
that coordinates its censorship decisions and steps.

We assume that our users have control over their hardware and
software --- they don't have any spyware installed, there are no
cameras watching their screen, etc. Unfortunately, in many situations
these threats are very real~\cite{zuckerman-threatmodels}; yet
software-based security systems like ours are poorly equipped to handle
a user who is entirely observed and controlled by the adversary. See
Section~\ref{subsec:cafes-and-livecds} for more discussion of what little
we can do about this issue.

We assume that widespread access to the Internet is economically and/or
socially valuable in each deployment country. After all, if censorship
is more important than Internet access, the firewall administrators have
an easy job: they should simply block everything. The corollary to this
assumption is that we should design so that increased blocking of our
system results in increased economic damage or public outcry.

We assume that the user will be able to fetch a genuine
version of Tor, rather than one supplied by the adversary; see
Section~\ref{subsec:trust-chain} for discussion on helping the user
confirm that he has a genuine version and that he can connect to the
real Tor network.
\section{Components of the current Tor design}
\label{sec:current-tor}

Tor is the largest deployed anonymity network of its kind: it has
attracted more than 800 routers from around the world, and it sees
broad daily use. Tor clients build multi-hop encrypted circuits through
this network of relays and bounce their traffic over
them~\cite{tor-design}. In this section, we examine some of the reasons
why Tor has taken off, with particular emphasis on how we can take
advantage of these properties for a blocking-resistance design.

Tor aims to provide three security properties:
\begin{tightlist}
\item A local network attacker can't learn, or influence, your
destination.
\item No single router in the Tor network can link you to your
destination.
\item The destination, or somebody watching the destination,
can't learn your location.
\end{tightlist}

For blocking-resistance, we care most clearly about the first
property. But as the arms race progresses, the second property
will become important --- for example, to discourage an adversary
from volunteering a relay in order to learn that Alice is reading
or posting to certain websites. The third property is not so clearly
important in this context, but we believe it will turn out to be helpful:
consider websites and other Internet services that have been pressured
recently into treating clients differently depending on their network
location~\cite{google-geolocation}.
% and cite{goodell-syverson06} once it's finalized.
The Tor design provides other advantages over manual or ad hoc
circumvention techniques as well.

Firstly, the Tor directory authorities automatically aggregate, test,
and publish signed summaries of the available Tor routers. Tor clients
can fetch these summaries to learn which routers are available and
which routers have desired properties. Directory information is cached
throughout the Tor network, so once clients have bootstrapped they never
need to interact with the authorities directly. (To tolerate a minority
of compromised directory authorities, we use a threshold trust scheme ---
see Section~\ref{subsec:trust-chain} for details.)
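
The threshold idea can be sketched as follows. This is a simplified
model we invented for illustration: real Tor clients check signatures
on directory documents, not this exact interface.

```python
# Simplified sketch of threshold trust: accept a directory summary only
# if a strict majority of the trusted authorities signed it validly.
# `verify` stands in for real cryptographic signature verification.
def accept_directory(signatures: dict, trusted: set, verify) -> bool:
    valid = sum(1 for auth, sig in signatures.items()
                if auth in trusted and verify(auth, sig))
    return valid > len(trusted) // 2
```

Under this model, with five authorities an adversary must compromise
three of them before clients will accept a bogus directory.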
Secondly, Tor clients can be configured to use any directory authorities
they want. They use the default authorities if no others are specified,
but it's easy to start a separate (or even overlapping) Tor network just
by running a different set of authorities and convincing users to prefer
a modified client. For example, we could launch a distinct Tor network
inside China; some users could even use an aggregate network made up of
both the main network and the China network. But we should not be too
quick to create other Tor networks --- part of Tor's anonymity comes from
users behaving like other users, and there are many unsolved anonymity
questions if different users know about different pieces of the network.

Thirdly, in addition to automatically learning from the chosen directories
which Tor routers are available and working, Tor takes care of building
paths through the network and rebuilding them as needed. So the user
never has to know how paths are chosen, never has to manually pick
working proxies, and so on. More generally, at its core the Tor protocol
is simply a tool that can build paths given a set of routers. Tor is
quite flexible about how it learns about the routers and how it chooses
the paths. Harvard's Blossom project~\cite{blossom-thesis} makes this
flexibility more concrete: Blossom makes use of Tor not for its security
properties but for its reachability properties. It runs a separate set
of directory authorities, its own set of Tor routers (called the Blossom
network), and uses Tor's flexible path-building to let users view Internet
resources from any point in the Blossom network.

Fourthly, Tor separates the role of \emph{internal relay} from the
role of \emph{exit relay}. That is, some volunteers choose just to relay
traffic between Tor users and Tor routers, and others choose to also allow
connections to external Internet resources. Because we don't force all
volunteers to play both roles, we end up with more relays. This increased
diversity in turn is what gives Tor its security: the more options the
user has for her first hop, and the more options she has for her last hop,
the less likely it is that a given attacker will be watching both ends
of her circuit~\cite{tor-design}. As a bonus, because our design attracts
more internal relays that want to help out but don't want to deal with
being an exit relay, we end up with more options for the first hop ---
the one most critical to being able to reach the Tor network.

Fifthly, Tor is sustainable. Zero-Knowledge Systems offered the commercial
but now-defunct Freedom Network~\cite{freedom21-security}, a design with
security comparable to Tor's, but its funding model relied on collecting
money from users to pay relays. Modern commercial proxy systems similarly
need to keep collecting money to support their infrastructure. On the
other hand, Tor has built a self-sustaining community of volunteers who
donate their time and resources. This community trust is rooted in Tor's
open design: we tell the world exactly how Tor works, and we provide all
the source code. Users can decide for themselves, or pay any security
expert to decide, whether it is safe to use. Further, Tor's modularity
as described above, along with its open license, means that its impact
will continue to grow.

Sixthly, Tor has an established user base of hundreds of
thousands of people from around the world. This diversity of
users contributes to sustainability as above: Tor is used by
ordinary citizens, activists, corporations, law enforcement, and
even governments and militaries~\cite{tor-use-cases}, and they can
only achieve their security goals by blending together in the same
network~\cite{econymics,usability:weis2006}. This user base also provides
something else: hundreds of thousands of different and often-changing
addresses that we can leverage for our blocking-resistance design.

We discuss and adapt these components further in
Section~\ref{sec:bridges}. But first we examine the strengths and
weaknesses of other blocking-resistance approaches, so we can expand
our repertoire of building blocks and ideas.
\section{Current proxy solutions}
\label{sec:related}

Relay-based blocking-resistance schemes generally have two main
components: a relay component and a discovery component. The relay part
encompasses the process of establishing a connection, sending traffic
back and forth, and so on --- everything that's done once the user knows
where he's going to connect. Discovery is the step before that: the
process of finding one or more usable relays.

For example, we described several pieces of Tor in the previous section,
but we can divide them into the process of building paths and sending
traffic over them (relay) and the process of learning from the directory
servers about what routers are available (discovery). With this distinction
in mind, we now examine several categories of relay-based schemes.

\subsection{Centrally-controlled shared proxies}

Existing commercial anonymity solutions (like Anonymizer.com) are based
on a set of single-hop proxies. In these systems, each user connects to
a single proxy, which then relays the user's traffic. These public proxy
systems are typically characterized by two features: they control and
operate the proxies centrally, and many different users get assigned
to each proxy.

In terms of the relay component, single proxies provide weak security
compared to systems that distribute trust over multiple relays, since a
compromised proxy can trivially observe all of its users' actions, and
an eavesdropper only needs to watch a single proxy to perform timing
correlation attacks against all its users' traffic. Worse, all users
need to trust the proxy company to have good security itself as well as
to not reveal user activities.

On the other hand, single-hop proxies are easier to deploy, and they
can provide better performance than distributed-trust designs like Tor,
since traffic only goes through one relay. They're also more convenient
from the user's perspective --- since users entirely trust the proxy,
they can just use their web browser directly.

Whether public proxy schemes are more or less scalable than Tor is
still up for debate: commercial anonymity systems can use some of their
revenue to provision more bandwidth as they grow, whereas volunteer-based
anonymity systems can attract thousands of fast relays to spread the load.

The discovery piece can take several forms. Most commercial anonymous
proxies have one or a handful of commonly known websites, and their users
log in to those websites and relay their traffic through them. When
these websites get blocked (generally soon after the company becomes
popular), if the company cares about users in the blocked areas, they
start renting lots of disparate IP addresses and rotating through them
as they get blocked. They notify their users of new addresses by email,
for example. It's an arms race, since attackers can sign up to receive the
email too, but they have one nice trick available to them: because they
have a list of paying subscribers, they can notify certain subscribers
about updates earlier than others.

Access control systems on the proxy let them provide service only to
users with certain characteristics, such as paying customers or people
from certain IP address ranges.

Discovery in the face of a government-level firewall is a complex and
unsolved topic, and we're stuck in this same arms race ourselves; we
explore it in more detail in Section~\ref{sec:discovery}. But first we
examine the other end of the spectrum --- getting volunteers to run the
proxies, and telling only a few people about each proxy.
\subsection{Independent personal proxies}

Personal proxies such as Circumventor~\cite{circumventor} and
CGIProxy~\cite{cgiproxy} use the same technology as the public ones as
far as the relay component goes, but they use a different strategy for
discovery. Rather than managing a few centralized proxies and constantly
getting new addresses for them as the old addresses are blocked, they
aim to have a large number of entirely independent proxies, each managing
its own (much smaller) set of users.

As the Circumventor site~\cite{circumventor} explains, ``You don't
actually install the Circumventor \emph{on} the computer that is blocked
from accessing Web sites. You, or a friend of yours, has to install the
Circumventor on some \emph{other} machine which is not censored.''

This tactic has great advantages in terms of blocking-resistance ---
recall our assumption in Section~\ref{sec:adversary} that the attention
a system attracts from the attacker is proportional to its number of
users and level of publicity. If each proxy only has a few users, and
there is no central list of proxies, most of them will never get noticed.

On the other hand, there's a huge scalability question that so far has
prevented these schemes from being widely useful: how does the fellow
in China find a person in Ohio who will run a Circumventor for him? In
some cases he may know and trust some people on the outside, but in many
cases he's just out of luck. Just as hard, how does a new volunteer in
Ohio find a person in China who needs it?
%discovery is also hard because the hosts keep vanishing if they're
%on dynamic ip. But not so bad, since they can use dyndns addresses.

This challenge leads to a hybrid design --- centrally-distributed
personal proxies --- which we will investigate in more detail in
Section~\ref{sec:discovery}.
\subsection{Open proxies}

Yet another currently used approach to bypassing firewalls is to locate
open and misconfigured proxies on the Internet. A quick Google search
for ``open proxy list'' yields a wide variety of freely available lists
of HTTP, HTTPS, and SOCKS proxies. Many small companies have sprung up
providing more refined lists to paying customers.

There are some downsides to using these open proxies, though. Firstly,
the proxies are of widely varying quality in terms of bandwidth and
stability, and many of them are entirely unreachable. Secondly, unlike
networks of volunteers like Tor, the legality of routing traffic through
these proxies is questionable: it's widely believed that most of them
don't realize what they're offering, and probably wouldn't allow it if
they realized. Thirdly, in many cases the connection to the proxy is
unencrypted, so firewalls that filter based on keywords in IP packets
will not be hindered. And lastly, many users are suspicious that some
open proxies are a little \emph{too} convenient: are they run by the
adversary, in which case they get to monitor all the user's requests
just as single-hop proxies can?

A distributed-trust design like Tor resolves each of these issues for
the relay component, but a constantly changing set of thousands of open
relays is clearly a useful idea for a discovery component. For example,
users might be able to make use of these proxies to bootstrap their
first introduction into the Tor network.
\subsection{JAP}

K\"{o}psell and Hillig's WPES paper~\cite{koepsell:wpes2004} is probably
the closest related work, and is the starting point for the design in
this paper.

\subsection{Steganography}

Steganographic designs such as Infranet hide requests for censored
content inside seemingly normal traffic to uncensored websites.

\subsection{Splitting sensitive strings across TCP packets; ignoring RSTs}

Keyword-based filters can also be evaded at the endpoints: if sensitive
strings are split across multiple TCP packets, no single packet matches
the filter, and endpoints that ignore the firewall's forged TCP reset
packets can keep their connection alive despite the
censor~\cite{clayton:pet2006}.

\subsection{Internal caching networks}

Freenet is deployed inside China and caches outside content, so blocked
material that has been fetched once can be served again from within the
country.

\subsection{Skype}

Skype is difficult to block because it hops between ports and encrypts
its traffic. Further, voice communications are not as susceptible to
keystroke loggers (even graphical ones).

\subsection{Tor itself}

And lastly, we include Tor itself in the list of current solutions
to firewalls. Tens of thousands of people use Tor from countries that
routinely filter their Internet. Tor's website has been blocked in most
of them. But why hasn't the Tor network been blocked yet?

We have several theories. The first is the most straightforward: tens of
thousands of people are simply too few to matter. It may help that Tor is
perceived to be for experts only, and thus not worth attention yet. The
more subtle variant on this theory is that we've positioned Tor in the
public eye as a tool for retaining civil liberties in more free countries,
so perhaps blocking authorities don't view it as a threat. (We revisit
this idea when we consider whether and how to publicize a Tor variant
that improves blocking-resistance --- see Section~\ref{subsec:publicity}
for more discussion.)

The broader explanation is that most government-level filters are not
created by people setting out to block all possible ways to bypass
them. They're created by people who want to do a good enough job that
they can still appear in control. They realize that there will always
be ways for a few people to get around the firewall, and as long as Tor
has not publicly threatened their control, they see no urgent need to
block it yet.

We should recognize that we're \emph{already} in the arms race. These
constraints can give us insight into the priorities and capabilities of
our various attackers.
\section{The relay component of our blocking-resistant design}
\label{sec:bridges}

Section~\ref{sec:current-tor} describes many reasons why Tor is
well-suited as a building block in our context, but several changes will
allow the design to resist blocking better. The most critical changes are
to get more relay addresses, and to distribute them to users differently.
%We need to address three problems:
%- adapting the relay component of Tor so it resists blocking better.
%- Discovery.
%- Tor's network signature.
%Here we describe the new pieces we need to add to the current Tor design.

\subsection{Bridge relays}

Hundreds of thousands of people around the world use Tor. We can leverage
our already self-selected user base to produce a list of thousands of
often-changing IP addresses. Specifically, we can give them a little
button in the GUI that says ``Tor for Freedom'', and users who click
the button will turn into \emph{bridge relays}, or just \emph{bridges}
for short. They can rate limit relayed connections to 10 KB/s (almost
nothing for a broadband user in a free country, but plenty for a user
who otherwise has no access at all), and since they are just relaying
bytes back and forth between blocked users and the main Tor network, they
won't need to make any external connections to Internet sites. Because
of this separation of roles, and because we're making use of software
that the volunteers have already installed for their own use, we expect
our scheme to attract and maintain more volunteers than previous schemes.
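
The rate limiting mentioned above can be sketched as a token bucket.
This is a simplified standalone model of the technique, not Tor's actual
bandwidth-accounting code.

```python
import time

# Token-bucket sketch of a bridge's 10 KB/s rate limit: tokens (bytes)
# refill at `rate` per second up to `burst`; each relayed chunk spends
# tokens, and chunks that would overdraw the bucket are deferred.
class TokenBucket:
    def __init__(self, rate: float = 10_000, burst: float = 10_000):
        self.rate, self.burst = rate, burst
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self, nbytes: int) -> bool:
        now = time.monotonic()
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False
```

The `burst` parameter lets a bridge absorb short spikes while keeping
the long-run average at `rate` bytes per second.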
As usual, there are new anonymity and security implications from running a
bridge relay, particularly from letting people relay traffic through your
Tor client; but we leave this discussion for Section~\ref{sec:security}.
%...need to outline instructions for a Tor config that will publish
%to an alternate directory authority, and for controller commands
%that will do this cleanly.
\subsection{The bridge directory authority}

How do the bridge relays advertise their existence to the world? We
introduce a second new component of the design: a specialized directory
authority that aggregates and tracks bridges. Bridge relays periodically
publish server descriptors (summaries of their keys, locations, etc.,
signed by their long-term identity key), just like the relays in the
``main'' Tor network, but in this case they publish them only to the
bridge directory authorities.

The main difference between bridge authorities and the directory
authorities for the main Tor network is that the main authorities give
out a list of every known relay, but the bridge authorities only give
out a server descriptor if you already know its identity key. That is,
you can keep up-to-date on a bridge's location and other information
once you know about it, but you can't just grab a list of all the bridges.
  432. The identity keys, IP address, and directory port for the bridge
  433. authorities ship by default with the Tor software, so the bridge relays
  434. can be confident they're publishing to the right location, and the
  435. blocked users can establish an encrypted authenticated channel. See
  436. Section~\ref{subsec:trust-chain} for more discussion of the public key
  437. infrastructure and trust chain.
  438. Bridges use Tor to publish their descriptors privately and securely,
  439. so even an attacker monitoring the bridge directory authority's network
  440. can't make a list of all the addresses contacting the authority and
  441. track them that way.
%\subsection{A simple matter of engineering}
%
%Although we've described bridges and bridge authorities in simple terms
%above, some design modifications and features are needed in the Tor
%codebase to add them. We describe the four main changes here.
%
%Firstly, we need to get smarter about rate limiting:
%Bandwidth classes
%
%Secondly, while users can in fact configure which directory authorities
%they use, we need to add a new type of directory authority and teach
%bridges to fetch directory information from the main authorities while
%publishing server descriptors to the bridge authorities. We're most of
%the way there, since we can already specify attributes for directory
%authorities:
%add a separate flag named ``blocking''.
%
%Thirdly, need to build paths using bridges as the first
%hop. One more hole in the non-clique assumption.
%
%Lastly, since bridge authorities don't answer full network statuses,
%we need to add a new way for users to learn the current status for a
%single relay or a small set of relays --- to answer such questions as
%``is it running?'' or ``is it behaving correctly?'' We describe in
%Section~\ref{subsec:enclave-dirs} a way for the bridge authority to
%publish this information without resorting to signing each answer
%individually.
\subsection{Putting them together}
\label{subsec:relay-together}

If a blocked user knows the identity keys of a set of bridge relays, and
he has correct address information for at least one of them, he can use
that one to make a secure connection to the bridge authority and update
his knowledge about the other bridge relays. He can also use it to make
secure connections to the main Tor network and directory servers, so he
can build circuits and connect to the rest of the Internet. All of these
updates happen in the background: from the blocked user's perspective,
he just accesses the Internet via his Tor client like always.

So now we've reduced the problem from how to circumvent the firewall
for all transactions (and how to know that the pages you get have not
been modified by the local attacker) to how to learn about a working
bridge relay.

There's another catch, though. We need to make sure that the network
traffic we generate by simply connecting to a bridge relay doesn't stand
out too much.

%The following section describes ways to bootstrap knowledge of your first
%bridge relay, and ways to maintain connectivity once you know a few
%bridge relays.
% (See Section~\ref{subsec:first-bridge} for a discussion
%of exactly what information is sufficient to characterize a bridge relay.)
\section{Hiding Tor's network signatures}
\label{sec:network-signature}
\label{subsec:enclave-dirs}

Currently, Tor uses two protocols for its network communications. The
main protocol uses TLS for encrypted and authenticated communication
between Tor instances. The second protocol is standard HTTP, used for
fetching directory information. All Tor servers listen on their ``ORPort''
for TLS connections, and some of them opt to listen on their ``DirPort''
as well, to serve directory information. Tor servers choose whatever port
numbers they like; the server descriptor they publish to the directory
tells users where to connect.

One format for communicating address information about a bridge relay is
its IP address and DirPort. From there, the user can ask the bridge's
directory cache for an up-to-date copy of its server descriptor, and
learn its current circuit keys, its ORPort, and so on.

However, connecting directly to the directory cache involves a plaintext
HTTP request. A censor could create a network signature for the request
and/or its response, thus preventing these connections. To resolve this
vulnerability, we've modified the Tor protocol so that users can connect
to the directory cache via the main Tor port --- they establish a TLS
connection with the bridge as normal, and then send a special ``begindir''
relay command to establish an internal connection to its directory cache.

Therefore a better way to summarize a bridge's address is by its IP
address and ORPort, so all communications between the client and the
bridge will use ordinary TLS. But there are other details that need
more investigation.

What port should bridges pick for their ORPort? We currently recommend
that they listen on port 443 (the default HTTPS port) if they want to
be most useful, because clients behind standard firewalls will have
the best chance to reach them. Is this the best choice in all cases,
or should we encourage some fraction of them to pick random ports, or
other ports commonly permitted through firewalls like 53 (DNS) or 110
(POP)? We need more research on our potential users, and their current
and anticipated firewall restrictions.
Furthermore, we need to look at the specifics of Tor's TLS handshake.
Right now Tor uses some predictable strings in its TLS handshakes. For
example, it sets the X.509 organizationName field to ``Tor'', and it puts
the Tor server's nickname in the certificate's commonName field. We
should tweak the handshake protocol so it doesn't rely on any details
in the certificate headers, yet remains secure. Should we replace
them with blank entries for each field, or should we research the common
values that Firefox and Internet Explorer use and try to imitate those?

Worse, Tor's TLS handshake involves sending two certificates in each
direction: one certificate contains the self-signed identity key for
the router, and the second contains the current link key, signed by the
identity key. We use these to authenticate that we're talking to the right
router, and also to establish perfect forward secrecy for that link.
How much will these extra certificates make Tor's TLS handshake stand
out? We have to work on normalizing our appearance not just in terms
of the fields used in each certificate, but also in the number of
certificates we present on each side.
% Nick, I need help with the above paragraph. What are the two certs
% for really, and how much work would it be to start acting like a normal
% browser? -RD

Lastly, what if the adversary starts observing the network traffic even
more closely? Even if our TLS handshake looks innocent, our traffic timing
and volume still look different than a user making a secure web connection
to his bank. The same techniques used in the growing trend to build tools
to recognize encrypted BitTorrent traffic~\cite{bt-traffic-shaping}
could be used to identify Tor communication and recognize bridge
relays. Rather than trying to look like encrypted web traffic, we may be
better off trying to blend with some other encrypted network protocol. The
first step is to compare typical network behavior for a Tor client to
typical network behavior for various other protocols. This statistical
cat-and-mouse game is made more complex by the fact that Tor transports a
variety of protocols, and we'll want to automatically handle web browsing
differently from, say, instant messaging.
\subsection{Identity keys as part of addressing information}

We have described a way for the blocked user to bootstrap into the
network once he knows the IP address and ORPort of a bridge. What about
local spoofing attacks? That is, since we never learned an identity
key fingerprint for the bridge, a local attacker could intercept our
connection and pretend to be the bridge we had in mind. It turns out
that giving false information isn't that bad --- since the Tor client
ships with trusted keys for the bridge directory authority and the Tor
network directory authorities, the user can learn whether he's being
given a real connection to the bridge authorities or not. (After all,
if the adversary intercepts every connection the user makes and gives
him a bad connection each time, there's nothing we can do.)

What about anonymity-breaking attacks from observing traffic, if the
blocked user doesn't start out knowing the identity key of his intended
bridge? The vulnerabilities aren't so bad in this case either ---
the adversary could do similar attacks just by monitoring the network
traffic.
% cue paper by steven and george

Once the Tor client has fetched the bridge's server descriptor, it should
remember the identity key fingerprint for that bridge relay. Thus if
the bridge relay moves to a new IP address, the client can query the
bridge directory authority to look up a fresh server descriptor using
this fingerprint.

So we've shown that it's \emph{possible} to bootstrap into the network
just by learning the IP address and ORPort of a bridge, but are there
situations where it's more convenient or more secure to learn the
bridge's identity fingerprint as well, or instead, while bootstrapping?
We keep that question in mind as we next investigate bootstrapping and
discovery.
\section{Discovering and maintaining working bridge relays}
\label{sec:discovery}

Tor's modular design means that we can develop a better relay component
independently of developing the discovery component. This modularity's
great promise is that we can pick any discovery approach we like; but the
unfortunate fact is that we have no magic bullet for discovery. We're
in the same arms race as all the other designs we described in
Section~\ref{sec:related}.

In this section we describe four approaches to adding discovery
components for our design, in order of increasing complexity. Note that
we can deploy all four schemes at once --- bridges and blocked users can
use the discovery approach that is most appropriate for their situation.

\subsection{Independent bridges, no central discovery}

The first design is simply to have no centralized discovery component at
all. Volunteers run bridges, and we assume they have some blocked users
in mind and communicate their address information to them out-of-band
(for example, through Gmail). This design allows for small personal
bridges that have only one or a handful of users in mind, but it can
also support an entire community of users. For example, Citizen Lab's
upcoming Psiphon single-hop proxy tool~\cite{psiphon} plans to use this
\emph{social network} approach as its discovery component.

There are some variations on this design. In the above example, the
operator of the bridge seeks out and informs each new user about his
bridge's address information and/or keys. Another approach involves
blocked users introducing new blocked users to the bridges they know.
That is, somebody in the blocked area can pass along a bridge's address
to somebody else they trust. This scheme brings in appealing but complex
game-theoretic properties: the blocked user making the decision has an
incentive only to delegate to trustworthy people, since an adversary who
learns the bridge's address and filters it makes it unavailable for both
of them.
\subsection{Families of bridges}

Because the blocked users are running our software too, we have many
opportunities to improve usability or robustness. Our second design builds
on the first by encouraging volunteers to run several bridges at once
(or coordinate with other bridge volunteers), such that some fraction
of the bridges are likely to be available at any given time.

The blocked user's Tor client could periodically fetch an updated set of
recommended bridges from any of the working bridges. Now the client can
learn new additions to the bridge pool, and can expire abandoned bridges
or bridges that the adversary has blocked, without the user ever needing
to care. To simplify maintenance of the community's bridge pool, rather
than mirroring all of the information at each bridge, each community
could instead run its own bridge directory authority (accessed via the
available bridges).
\subsection{Social networks with directory-side support}

Our third design adds central support for discovery: a social-network
scheme with user accounts, or public proxy addresses given out (as
Circumventor does) under various rate-limiting schemes. In
Section~\ref{subsec:first-bridge} we describe how a user finds his
first bridge, and thus can reach the bridge directory authority. From
there we either assume a social network or other mechanism for learning
IP:dirport pairs or key fingerprints as above, or we assume an account
server that allows us to limit the number of new bridge relays an
external attacker can discover.

This will be an arms race, and we will need a bag of tricks. It is hard
to say in advance which tricks will work, so we should not spend them
all at once.
\subsection{Bootstrapping: finding your first bridge}
\label{subsec:first-bridge}

Most government firewalls are not perfect. They allow connections to
Google cache or some open proxy servers, or they let file-sharing or
Skype or World-of-Warcraft connections through. For users who can't use
any of these techniques, hopefully they know a friend who can --- for
example, perhaps the friend already knows some bridge relay addresses.
(If they can't get around the firewall at all, then we can't help them
--- they should go meet more people.)

Some of these techniques are sufficient to learn an IP address and a
port, and others can learn an IP:port:key triple. We lay out some
plausible options for how users can bootstrap into learning their first
bridge. In a first round of deployment: the bridge authority server can
hand some bridges out directly; a user can get one from a friend; or a
user can send us mail from a unique account and get an automated answer.
In a second round, we can add the social-network scheme --- though one
attack to keep in mind is that the adversary can reconstruct a user's
social network by learning who knows which bridges.
\subsection{Centrally-distributed personal proxies}

Circumventor, realizing that its adoption will remain limited if would-be
users can't connect with volunteers, has started a mailing list to
distribute new proxy addresses every few days. From experimentation
it seems they have concluded that sending updates every 3 or 4 days is
sufficient to stay ahead of the current attackers.

If there are many volunteer proxies and many interested users, a central
watering hole to connect them is a natural solution. On the other hand,
at first glance it appears that we've inherited the \emph{bad} parts of
each of the above designs: not only do we have to attract many volunteer
proxies, but the users also need to get to a single site that is sure
to be blocked.

There are two reasons why we're in better shape. Firstly, the users don't
actually need to reach the watering hole directly: it can respond to
email, for example. Secondly, the watering hole can control how many
addresses it reveals to each requester: the JAP
project~\cite{web-mix,koepsell:wpes2004} suggested an alternative
approach to a mailing list along these lines, where new users email a
central address and get an automated response listing a proxy for them.
\subsection{Discovery based on social networks}

In this scheme, a user holds a token that can be exchanged at the bridge
directory authority (assuming he can reach it) for a new IP:dirport or
server descriptor. The account server that issues these tokens runs as
a Tor controller for the bridge authority. Users can establish
reputations, perhaps based on social network connectivity, perhaps based
on not getting their bridge relays blocked.

Probably the most critical lesson learned in past work on reputation
systems in privacy-oriented environments~\cite{rep-anon} is the need for
verifiable transactions. That is, the entity computing and advertising
reputations for participants needs to actually learn in a convincing
way that a given transaction was successful or unsuccessful. (Another
lesson from designing reputation systems~\cite{rep-anon}: it is easy to
reward good behavior, but hard to punish bad behavior.)
\subsection{How to allocate bridge addresses to users}

We should hold a fraction of the bridges in reserve, in case our
currently deployed tricks all fail at once --- so we can move to new
approaches quickly. (Bridges that sign up and don't get used yet will
be sad; but this is a transient problem --- if bridges are on by
default, nobody will mind not being used.)

Perhaps each bridge should be known by a single bridge directory
authority. This makes it easier to trace which users have learned about
it, so easier to blame or reward. It also makes things more brittle,
since loss of that authority means its bridges aren't advertised until
they switch, and means its bridge users are sad too. (We would need a
slick hash algorithm that maps our identity key to a bridge authority,
in a way that's sticky even when we add bridge directory authorities,
but isn't sticky when our authority goes away. Does this exist?)
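Such a mapping does exist: rendezvous (highest-random-weight) hashing
has exactly the stickiness property described. A minimal Python sketch,
where the function names and the choice of SHA-1 are ours for
illustration rather than part of the Tor design:

```python
import hashlib

def assign_authority(identity_key: bytes, authorities: list) -> str:
    """Rendezvous (highest-random-weight) hashing: score every authority
    against this bridge's identity key and pick the highest score.
    Adding an authority only claims the bridges that score highest on
    the newcomer; removing an authority only reassigns its own bridges.
    All other assignments stay put."""
    def score(auth: str) -> bytes:
        return hashlib.sha1(identity_key + auth.encode()).digest()
    return max(authorities, key=score)

# When auth-C joins, a given bridge either keeps its old authority or
# moves to auth-C --- it never shuffles between auth-A and auth-B.
before = assign_authority(b"bridge-key", ["auth-A", "auth-B"])
after = assign_authority(b"bridge-key", ["auth-A", "auth-B", "auth-C"])
```

The same argument shows the assignment is not sticky when an authority
goes away: its bridges fall to their next-highest-scoring authority,
while everyone else's assignment is untouched.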
We can divide bridges into buckets based on their identity key. [Design
question: we need an algorithm to deterministically map a bridge's
identity key into a category that isn't too gameable. Take a keyed
hash of the identity key plus a secret the bridge authority keeps?
An adversary signing up bridges won't easily be able to learn what
category he's been put in, so it's slow to attack.]
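The keyed-hash idea can be sketched directly. Here HMAC-SHA256 stands in
for whatever keyed hash the bridge authority would actually use, and the
bucket count is an arbitrary placeholder:

```python
import hmac
import hashlib

NUM_BUCKETS = 4  # e.g. public, IP-based, time-release, social network

def bucket_for_bridge(identity_key: bytes, authority_secret: bytes) -> int:
    """Deterministically map a bridge's identity key to a bucket with a
    keyed hash. Without the authority's secret, an adversary signing up
    bridges can't predict or choose which bucket each of his bridges
    lands in, so targeting one distribution strategy becomes slow."""
    digest = hmac.new(authority_secret, identity_key, hashlib.sha256).digest()
    return int.from_bytes(digest[:4], "big") % NUM_BUCKETS
```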
One portion of the bridges is the public bucket. If you ask the
bridge account server for a public bridge, it will give you a random
one of these. We expect they'll be the first to be blocked, but they'll
help the system bootstrap until it \emph{does} get blocked, and remember
that we're dealing with different blocking regimes around the world that
will progress at different rates.

The generalization of the public bucket is a bucket based on the bridge
user's IP address: you can learn a random entry only from the subbucket
your IP address (actually, your /24) maps to.

Another portion of the bridges can be sectioned off to be given out on
a time-release basis. The bucket is partitioned into pieces which are
deterministically available only in certain time windows.
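The time-release partition can likewise be made deterministic with a
keyed hash over the current window number. The one-week window and
eight pieces below are placeholder parameters, not part of the design:

```python
import hmac
import hashlib
import time

WINDOW_SECONDS = 7 * 24 * 3600  # placeholder: one-week windows
NUM_PIECES = 8                  # placeholder: pieces per bucket

def piece_available(secret: bytes, now: float = None) -> int:
    """Everyone asking during a given window is served bridges from the
    same piece, but without the authority's secret an adversary can't
    predict which piece opens in a future window."""
    if now is None:
        now = time.time()
    window = int(now // WINDOW_SECONDS)
    digest = hmac.new(secret, str(window).encode(), hashlib.sha256).digest()
    return int.from_bytes(digest[:4], "big") % NUM_PIECES
```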
And of course another portion is made available for the social network
design above. We could also require captchas before handing out
addresses.

Is it useful to load-balance which bridges are handed out? The above
bucket concept makes some bridges wildly popular and others less so.
But perhaps that's the point.
\subsection{How do we know if a bridge relay has been blocked?}

We need some mechanism for testing reachability from inside the
blocked area.

The easiest answer is for certain users inside the area to sign up as
testing relays, and then we can route through them and see if it works.
The first problem is that different network areas block different net
masks, and it will likely be hard to know which users are in which
areas. So if a bridge relay isn't reachable, is that because of a
network block somewhere, because of a problem at the bridge relay, or
just a temporary outage?

The second problem is that if we pick random users to test random
relays, the adversary can sign up users on the inside, and enumerate
the relays we test. But it seems dangerous to just let people come
forward and declare that things are blocked for them, since they could
be tricking us. (This matters even more if our reputation system above
relies on whether things get blocked to punish or reward.)

Another answer is not to measure directly, but rather let the bridges
report whether they're being used. If they periodically report to their
bridge directory authority how much use they're seeing, the authority
can make smart decisions from there.
If bridges install a geoip database, they can periodically report to
their bridge directory authority which countries they're seeing use
from. This might help us to track which countries are making use of
Ramp, and can also let us learn about new steps the adversary has taken
in the arms race. (If the bridges don't want to install a whole geoip
subsystem, they can report samples of the /24 network for their users,
and the authorities can do the geoip work. This tradeoff has clear
downsides though.)
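As a sketch of the lighter-weight option, a bridge could truncate client
addresses to /24 prefixes and report counts, leaving the geoip lookup to
the authority (the function names here are ours, for illustration):

```python
from collections import Counter

def slash24(ip: str) -> str:
    """Truncate a dotted-quad IPv4 address to its /24 prefix."""
    a, b, c, _ = ip.split(".")
    return f"{a}.{b}.{c}.0/24"

def usage_sample(client_ips) -> Counter:
    """Aggregate a bridge's recent client addresses into per-/24 counts,
    suitable for periodic reporting to the bridge directory authority."""
    return Counter(slash24(ip) for ip in client_ips)
```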
One worry: the adversary signs up a bunch of already-blocked bridges. If
we're stingy giving out bridges, users in that country won't get useful
ones. (Worse, we'll blame the users when the bridges report they're not
being used?)

Another worry: the adversary could choose not to block bridges but just
record connections to them. So be it, I guess.

\subsection{How to learn how well the whole idea is working}

We need some feedback mechanism to learn how much use the bridge network
as a whole is actually seeing. Part of the reason for this is so we can
respond and adapt the design; part is because the funders expect to see
progress reports.

The above geoip-based approach to detecting blocked bridges gives us a
solution here as well.
\section{Security considerations}
\label{sec:security}

\subsection{Observers can tell who is publishing and who is reading}
\label{subsec:upload-padding}

Should bridge users sometimes send bursts of long-range drop cells?
\subsection{Anonymity effects from acting as a bridge relay}

Against some attacks, relaying traffic for others can improve anonymity.
The simplest example is an attacker who owns a small number of Tor
servers. He will see a connection from the bridge, but he won't be able
to know whether the connection originated there or was relayed from
somebody else.

There are some cases where it doesn't seem to help: if an attacker can
watch all of the bridge's incoming and outgoing traffic, then it's easy
to learn which connections were relayed and which started there. (In this
case he still doesn't know the final destinations unless he is watching
them too, but then bridges are no better off than ordinary clients.)

There are also some potential downsides to running a bridge. First, while
we try to make it hard to enumerate all bridges, it's still possible to
learn about some of them, and for some people just the fact that they're
running one might signal to an attacker that they place a high value
on their anonymity. Second, there are some more esoteric attacks on Tor
relays that are not as well-understood or well-tested --- for example, an
attacker may be able to ``observe'' whether the bridge is sending traffic
even if he can't actually watch its network, by relaying traffic through
it and noticing changes in traffic timing~\cite{attack-tor-oak05}. On
the other hand, it may be that limiting the bandwidth the bridge is
willing to relay will allow this sort of attacker to determine if it's
being used as a bridge but not whether it is adding traffic of its own.

It is an open research question whether the benefits outweigh the risks.
A lot of the decision rests on which attacks the users are most worried
about. For most users, we don't think running a bridge relay will be
that damaging.
\subsection{Trusting local hardware: Internet cafes and LiveCDs}
\label{subsec:cafes-and-livecds}

Assuming that users have their own trusted hardware is not
always reasonable.

For Internet cafe Windows computers that let you attach your own USB key,
a USB-based Tor image would be smart. There's Torpark, and hopefully
there will be more thoroughly analyzed options down the road. Worries
remain about hardware or software keyloggers and other spyware, as well
as physical surveillance.

If the system lets you boot from a CD or from a USB key, you can gain
a bit more security by bringing a privacy LiveCD with you. Hardware
keyloggers and physical surveillance are still a worry. LiveCDs are also
useful on your own hardware, since it's easier to avoid leaving
breadcrumbs everywhere.
\subsection{Forward compatibility and retiring bridge authorities}

Eventually we'll want to change the identity key and/or location
of a bridge authority. How do we do this mostly cleanly?
\subsection{The trust chain}
\label{subsec:trust-chain}

Tor's ``public key infrastructure'' provides a chain of trust to
let users verify that they're actually talking to the right servers.
There are four pieces to this trust chain.

Firstly, when Tor clients are establishing circuits, at each step
they demand that the next Tor server in the path prove knowledge of
its private key~\cite{tor-design}. This step prevents the first node
in the path from just spoofing the rest of the path. Secondly, the
Tor directory authorities provide a signed list of servers along with
their public keys --- so unless the adversary can control a threshold
of directory authorities, he can't trick the Tor client into using other
Tor servers. Thirdly, the locations and keys of the directory
authorities, in turn, are hard-coded in the Tor source code --- so as
long as the user got a genuine version of Tor, he can know that he is
using the genuine Tor network. And lastly, the source code and other
packages are signed with the GPG keys of the Tor developers, so users
can confirm that they did in fact download a genuine version of Tor.

But how can a user in an oppressed country know that he has the correct
key fingerprints for the developers? As with other security systems, it
ultimately comes down to human interaction. The keys are signed by dozens
of people around the world, and we have to hope that our users have met
enough people in the PGP web of trust~\cite{pgp-wot} that they can learn
the correct keys. For users who aren't connected to the global security
community, though, this question remains a critical weakness.
% XXX make clearer the trust chain step for bridge directory authorities
\subsection{Security through obscurity: publishing our design}

Many other schemes like DynaWeb use the typical arms-race strategy of
not publishing their plans. Our goal here is to produce a design ---
a framework --- that can be public and still secure. Where's the
tradeoff?
\section{Performance improvements}
\label{sec:performance}

\subsection{Fetch server descriptors just-in-time}

We should encourage all clients to fetch server descriptors
just-in-time, so blocked users don't stand out. We also need
network-status and directory optimizations: better caching, and
attention to partitioning issues.
\section{Maintaining reachability}

\subsection{How many bridge relays should you know about?}

If bridges are ordinary Tor users on cable modem or DSL, many of them
will disappear and/or move periodically. How many bridge relays should
a blocked user know about before he's likely to have at least one
reachable at any given point? How do we factor in a parameter for the
speed at which his bridges get discovered and blocked?
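As a first cut, if we (unrealistically) assume each known bridge is up
and unblocked independently with probability $p$, the answer follows
directly; correlated blocking and discovery speed would need a richer
model:

```python
import math

def p_some_bridge_reachable(n: int, p: float) -> float:
    """P(at least one of n bridges reachable) = 1 - (1-p)^n, assuming
    independence --- optimistic under coordinated blocking."""
    return 1 - (1 - p) ** n

def bridges_needed(p: float, target: float) -> int:
    """Smallest n such that the probability above meets the target."""
    return math.ceil(math.log(1 - target) / math.log(1 - p))

# If each bridge is reachable half the time, knowing 7 bridges gives
# better than 99% odds that at least one currently works.
```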
The related question is: if the bridge relays change IP addresses
periodically, how often does the bridge user need to ``check in'' in
order to keep from being cut out of the loop?
\subsection{Cablemodem users don't provide important websites}
\label{subsec:block-cable}

...so our adversary could just block all DSL and cablemodem networks,
and for the most part only our bridge relays would be affected.

The first answer is to aim to get volunteers both from traditionally
``consumer'' networks and also from traditionally ``producer'' networks.
The second answer (not so good) would be to encourage more use of
consumer networks for popular and useful websites.

Another attack: China pressures Verizon to discourage its users from
running bridges.
\subsection{Scanning-resistance}

If it's trivial to verify that we're a bridge, and we run on a predictable
port, then it's conceivable our attacker would scan the whole Internet
looking for bridges. (In fact, he can just scan likely networks like
cablemodem and DSL services --- see Section~\ref{subsec:block-cable} for
a related attack.) It would be nice to slow down this attack. It would
be even nicer to make it hard to learn whether we're a bridge without
first knowing some secret.

One answer is password-protecting the bridges: we provide a password to
the bridge user, and he presents a nonced hash of it (or something
similar) when he connects. We'd need to give him an identity key for the
bridge too, and he must wait to present the password until after the TLS
handshake, else the adversary can pretend to be the bridge and MITM him
to learn the password.
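The nonced-hash idea might look like the following HMAC-based
challenge-response. This is our sketch, not a deployed protocol, and it
only makes sense run inside the TLS connection after the bridge's
identity key has been verified:

```python
import hmac
import hashlib
import os

def challenge() -> bytes:
    """Bridge side: issue a fresh nonce so responses can't be replayed."""
    return os.urandom(16)

def respond(password: bytes, nonce: bytes) -> bytes:
    """Client side: prove knowledge of the bridge password without ever
    sending the password itself over the connection."""
    return hmac.new(password, nonce, hashlib.sha256).digest()

def verify(password: bytes, nonce: bytes, response: bytes) -> bool:
    """Bridge side: constant-time comparison against the expected HMAC."""
    return hmac.compare_digest(respond(password, nonce), response)
```

Until the client has sent a valid response, the bridge can behave like
an ordinary TLS server, giving a scanner nothing bridge-specific to
fingerprint.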
\subsection{How to motivate people to run bridge relays}

One of the traditional ways to get people to run software that
benefits others is to give them a motivation to install it themselves.
An often-suggested approach is to package it as a stunning screensaver
so everybody will be pleased to run it. We take a similar approach
here, by leveraging the fact that these users are already interested
in protecting their own Internet traffic, so they will install and run
the software anyway.

We could make all Tor users become bridges if they're reachable ---
this needs more work on usability first, but we're making progress.

Also, we can make a snazzy network graph with Vidalia that emphasizes
the connections the bridge user is currently relaying. (This has minor
anonymity implications, but hey.) (In many cases there won't be much
activity, so this may backfire. Or it may be better suited to
full-fledged Tor servers.)
\subsection{What if the clients can't install software?}

What about bridge users without Tor clients? Bridge relays could
always open their SOCKS proxy. This is bad, though, firstly because
bridges learn the bridge users' destinations, and secondly because
we've learned that open SOCKS proxies tend to attract abusive users
who have no idea they're using Tor.

Bridges could require passwords in the SOCKS handshake (not supported
by most software, including Firefox). Or they could run web proxies
that require authentication and then pass the requests into Tor. This
approach is probably a good way to help bootstrap the Psiphon network,
if one of its barriers to deployment is a lack of volunteers willing
to exit directly to websites. But it clearly drops some of the nice
anonymity and security features Tor provides.
\subsection{Publicity attracts attention}
\label{subsec:publicity}

Many people working in this field want to publicize the existence and
extent of censorship concurrently with the deployment of their
circumvention software. The easy reason for this two-pronged push is
to attract volunteers to run proxies in their systems; but in many
cases their main goal is not to build the software, but rather to
educate the world about the censorship. The media also tries to do its
part by broadcasting the existence of each new circumvention system.

But at the same time, this publicity attracts the attention of the
censors. We can slow down the arms race by not attracting as much
attention, and just spreading by word of mouth. If our goal is to
establish a solid social network of bridges and bridge users before
the adversary gets involved, does this attention tradeoff work to our
advantage?
\subsection{The Tor website: how to get the software}

\section{Future designs}

\subsection{Bridges inside the blocked network too}

Assuming that actually crossing the firewall is the risky part of the
operation, can we have some bridge relays inside the blocked area too,
so that more established users can use them as relays and don't need
to communicate over the firewall directly at all? A simple example
here is to make new blocked users into internal bridges also --- they
sign up on the bridge directory authority as part of doing their
query, and we give out their addresses rather than (or along with) the
external bridge addresses. This design is a lot trickier because it
brings in the complexity of whether the internal bridges will remain
available, can maintain reachability with the outside world, and so
on.
Hidden services as bridges. Hidden services as bridge directory
authorities.

\section{Conclusion}

\bibliographystyle{plain}
\bibliography{tor-design}

\appendix

\section{Counting Tor users by country}
\label{app:geoip}

\end{document}
Ship a geoip db to bridges. They look up users who TLS to them in the
db, and upload a signed list of countries and number-of-users each
day. The bridge authority aggregates them and publishes stats.
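The aggregation step could look like the following sketch; the report
format is invented for illustration, and signature checking is
elided.

```python
# Sketch: bridge authority summing per-bridge, per-country user
# counts into daily totals. Each report is one bridge's signed
# country->count list for the day (signature verification elided).
from collections import Counter

def aggregate(reports):
    """reports: iterable of dicts mapping country code -> user count."""
    totals = Counter()
    for report in reports:
        totals.update(report)  # adds counts key-by-key
    return dict(totals)

if __name__ == "__main__":
    reports = [{"cn": 12, "ir": 3}, {"cn": 5, "sa": 2}]
    print(aggregate(reports))  # -> {'cn': 17, 'ir': 3, 'sa': 2}
```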
Bridge relays have buddies: they ask a user to test the reachability
of their buddy. This leaks O(1) bridges, but not O(n).

We should not be blockable by ordinary Cisco censorship features. That
is, if they want to block our new design, they will need to add a
feature to block exactly this. Strategically speaking, this may come
in handy.
Hash the identity key plus a secret that the bridge authority knows.
Start out dividing into 2^n buckets, where n starts at 0, and we
choose which bucket you're in based on the first n bits of the hash.
Bridges come in clumps of 4 or 8 or whatever. If you know one bridge
in a clump, the authority will tell you the rest. Now bridges can ask
users to test the reachability of their buddies. Giving out clumps
helps with dynamic IP addresses too. Whether it should be 4 or 8
depends on our churn.
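A sketch of the bucket/clump assignment described above; the hash
function and names are illustrative choices, not a fixed design.

```python
# Sketch: assign bridges to 2^n buckets ("clumps") by hashing the
# bridge's identity key together with a secret only the bridge
# authority knows. The secret keeps outsiders from computing clump
# membership themselves; knowing one member lets the authority hand
# you its buddies.
import hashlib

def bucket(identity_key: bytes, authority_secret: bytes, n: int) -> int:
    """Clump index: the first n bits of SHA-256(key || secret)."""
    h = hashlib.sha256(identity_key + authority_secret).digest()
    value = int.from_bytes(h, "big")
    return value >> (256 - n) if n > 0 else 0

def clump(bridges, authority_secret, n, member):
    """All known bridges sharing member's bucket (its buddies)."""
    b = bucket(member, authority_secret, n)
    return [k for k in bridges if bucket(k, authority_secret, n) == b]
```

Growing n by one splits every clump in two, so the scheme can track
churn: start with one big bucket (n = 0) and subdivide as the bridge
population grows.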
The account server: let's call it a database, since it doesn't have to
be a thing that a human interacts with.

Rate limiting mechanisms: energy spent, captchas, relaying traffic for
others? Send us \$10, we'll give you an account.

So how do we reward people for being good?