\documentclass{llncs}
\usepackage{url}
\usepackage{amsmath}
\usepackage{epsfig}
\setlength{\textwidth}{5.9in}
\setlength{\textheight}{8.4in}
\setlength{\topmargin}{.5cm}
\setlength{\oddsidemargin}{1cm}
\setlength{\evensidemargin}{1cm}
\newenvironment{tightlist}{\begin{list}{$\bullet$}{
\setlength{\itemsep}{0mm}
\setlength{\parsep}{0mm}
% \setlength{\labelsep}{0mm}
% \setlength{\labelwidth}{0mm}
% \setlength{\topsep}{0mm}
}}{\end{list}}
\begin{document}
\title{Design of a blocking-resistant anonymity system\\DRAFT}
%\author{Roger Dingledine\inst{1} \and Nick Mathewson\inst{1}}
\author{Roger Dingledine \and Nick Mathewson}
\institute{The Free Haven Project\\
\email{\{arma,nickm\}@freehaven.net}}
\maketitle
\pagestyle{plain}
\begin{abstract}
Internet censorship is on the rise as websites around the world are
increasingly blocked by government-level firewalls. Although popular
anonymizing networks like Tor were originally designed to keep attackers from
tracing people's activities, many people are also using them to evade local
censorship. But if the censor simply denies access to the Tor network
itself, blocked users can no longer benefit from the security Tor offers.
Here we describe a design that builds upon the current Tor network
to provide an anonymizing network that resists blocking
by government-level attackers.
\end{abstract}
\section{Introduction and Goals}

Anonymizing networks like Tor~\cite{tor-design} bounce traffic around a
network of encrypting relays. Unlike encryption, which hides only {\it what}
is said, these networks also aim to hide who is communicating with whom, which
users are using which websites, and similar relations. These systems have a
broad range of users, including ordinary citizens who want to avoid being
profiled for targeted advertisements, corporations who don't want to reveal
information to their competitors, and law enforcement and government
intelligence agencies who need to do operations on the Internet without being
noticed.

Historical anonymity research has focused on an
attacker who monitors the user (call her Alice) and tries to discover her
activities, yet lets her reach any piece of the network. In more modern
threat models such as Tor's, the adversary is allowed to perform active
attacks such as modifying communications to trick Alice
into revealing her destination, or intercepting some connections
to run a man-in-the-middle attack. But these systems still assume that
Alice can eventually reach the anonymizing network.

An increasing number of users are using the Tor software
less for its anonymity properties than for its censorship
resistance properties---if they use Tor to access Internet sites like
Wikipedia
and Blogspot, they are no longer affected by local censorship
and firewall rules. In fact, an informal user study
%(described in Appendix~\ref{app:geoip})
showed China as the third largest user base
for Tor clients, with perhaps ten thousand people accessing the Tor
network from China each day.

The current Tor design is easy to block if the attacker controls Alice's
connection to the Tor network---by blocking the directory authorities,
by blocking all the server IP addresses in the directory, or by filtering
based on the signature of the Tor TLS handshake. Here we describe an
extended design that builds upon the current Tor network to provide an
anonymizing
network that resists censorship as well as anonymity-breaking attacks.
In Section~\ref{sec:adversary} we discuss our threat model---that is,
the assumptions we make about our adversary. Section~\ref{sec:current-tor}
describes the components of the current Tor design and how they can be
leveraged for a new blocking-resistant design. Section~\ref{sec:related}
explains the features and drawbacks of the currently deployed solutions.
In Sections~\ref{sec:bridges} through~\ref{sec:discovery}, we explore the
components of our designs in detail. Section~\ref{sec:security} considers
security implications; ..... %write the rest.

% The other motivation is for places where we're concerned they will
% try to enumerate a list of Tor users. So even if they're not blocking
% the Tor network, it may be smart to not be visible as connecting to it.
%And adding more different classes of users and goals to the Tor network
%improves the anonymity for all Tor users~\cite{econymics,usability:weis2006}.
% Adding use classes for countering blocking as well as anonymity has
% benefits too. Should add something about how providing undetected
% access to Tor would facilitate people talking to, e.g., govt. authorities
% about threats to public safety etc. in an environment where Tor use
% is not otherwise widespread and would make one stand out.
\section{Adversary assumptions}
\label{sec:adversary}

To design an effective anticensorship tool, we need a good model for the
goals and resources of the censors we are evading. Otherwise, we risk
spending our effort on keeping the adversaries from doing things they have no
interest in doing, and thwarting techniques they do not use.

The history of blocking-resistance designs is littered with conflicting
assumptions about what adversaries to expect and what problems are
in the critical path to a solution. Here we describe our best
understanding of the current situation around the world.

In the traditional security style, we aim to defeat a strong
attacker---if we can defend against this attacker, we inherit protection
against weaker attackers as well. After all, we want a general design
that will work for citizens of China, Iran, Thailand, and other censored
countries; for
whistleblowers in firewalled corporate networks; and for people in
unanticipated oppressive situations. In fact, by designing with
a variety of adversaries in mind, we can take advantage of the fact that
adversaries will be in different stages of the arms race at each location,
so a server blocked in one locale can still be useful in others.
We assume that the attackers' goals are somewhat complex.
\begin{tightlist}
\item The attacker would like to restrict the flow of certain kinds of
information, particularly when this information is seen as embarrassing to
those in power (such as information about rights violations or corruption),
or when it enables or encourages others to oppose them effectively (such as
information about opposition movements or sites that are used to organize
protests).
\item As a second-order effect, censors aim to chill citizens' behavior by
creating an impression that their online activities are monitored.
\item Usually, censors make a token attempt to block a few sites for
obscenity, blasphemy, and so on, but their efforts here are mainly for
show.
\item Complete blocking (where nobody at all can ever download censored
content) is not a
goal. Attackers typically recognize that perfect censorship is not only
impossible, but unnecessary: if ``undesirable'' information is known only
to a small few, further censoring efforts can be focused elsewhere.
\item Similarly, the censors are not attempting to shut down or block {\it
every} anticensorship tool---merely the tools that are popular and
effective (because these tools impede the censors' information restriction
goals) and those tools that are highly visible (thus making the censors
look ineffectual to their citizens and their bosses).
\item Reprisal against {\it most} passive consumers of {\it most} kinds of
blocked information is also not a goal, given the broadness of most
censorship regimes. This seems borne out in practice.\footnote{So far in places
like China, the authorities mainly go after people who publish materials
and coordinate organized movements~\cite{mackinnon-personal}.
If they find that a
user happens to be reading a site that should be blocked, the typical
response is simply to block the site. Of course, even with an encrypted
connection, the adversary may be able to distinguish readers from
publishers by observing whether Alice is mostly downloading bytes or mostly
uploading them---we discuss this issue more in
Section~\ref{subsec:upload-padding}.}
\item Producers and distributors of targeted information are in much
greater danger than consumers; the attacker would like to not only block
their work, but identify them for reprisal.
\item The censors (or their governments) would like to have a working, useful
Internet. There are economic, political, and social factors that prevent
them from ``censoring'' the Internet by outlawing it entirely, or by
blocking access to all but a tiny list of sites.
Nevertheless, the censors {\it are} willing to block innocuous content
(like the bulk of a newspaper's reporting) in order to censor other content
distributed through the same channels (like that newspaper's coverage of
the censored country).
\end{tightlist}
We assume there are three main technical network attacks in use by censors
currently~\cite{clayton:pet2006}:
\begin{tightlist}
\item Block a destination or type of traffic by automatically searching for
certain strings or patterns in TCP packets. Offending packets can be
dropped, or can trigger a response like closing the
connection.
\item Block a destination by listing its IP address at a
firewall or other routing control point.
\item Intercept DNS requests and give bogus responses for certain
destination hostnames.
\end{tightlist}
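The three techniques above can be illustrated with a toy censor model. This is a sketch for intuition only: every rule, address, and keyword below is invented, and a real national firewall operates on raw packets at line rate rather than through a per-packet Python call.

```python
# Toy model of the three censorship techniques described in the text.
# All blocklist entries here are illustrative placeholders.

BLOCKED_IPS = {"203.0.113.7"}                     # technique 2: IP-address blocklist
BANNED_PATTERNS = [b"falun", b"proxy"]            # technique 1: payload string matching
DNS_OVERRIDES = {"blocked.example": "0.0.0.0"}    # technique 3: bogus DNS answers


def filter_packet(dst_ip: str, payload: bytes) -> str:
    """Return the censor's action for a single TCP packet."""
    if dst_ip in BLOCKED_IPS:
        return "drop"          # silently discard the packet
    if any(pat in payload.lower() for pat in BANNED_PATTERNS):
        return "reset"         # e.g. inject TCP RSTs to close the connection
    return "pass"


def resolve(hostname: str, real_answer: str) -> str:
    """Intercept a DNS query, substituting a bogus answer for listed names."""
    return DNS_OVERRIDES.get(hostname, real_answer)
```

Note that the keyword check runs per packet with constant work, matching the limited-CPU assumption discussed below: the censor inspects each packet in isolation rather than reassembling and correlating whole streams.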
We assume the network firewall has limited CPU and memory per
connection~\cite{clayton:pet2006}. Against an adversary who could carefully
examine the contents of every packet and correlate the packets in every
stream on the network, we would need some stronger mechanism such as
steganography, which introduces its own
problems~\cite{active-wardens,tcpstego}. But we make a ``weak
steganography'' assumption here: to remain unblocked, it is necessary to
remain unobservable only by computational resources on par with a modern
router, firewall, proxy, or IDS.
We assume that while various regimes can coordinate and share
notes, there will be a time lag between one attacker learning how to overcome
a facet of our design and other attackers picking it up. (The most common
vector of transmission seems to be commercial providers of censorship tools:
once a provider adds a feature to meet one country's needs or requests, the
feature is available to all of the provider's customers.) Conversely, we
assume that insider attacks become a higher risk only after the early stages
of network development, once the system has reached a certain level of
success and visibility.

We do not assume that government-level attackers are always uniform across
the country. For example, there is no single centralized place in China
that coordinates its specific censorship decisions and steps.
We assume that our users have control over their hardware and
software---they don't have any spyware installed, there are no
cameras watching their screens, etc. Unfortunately, in many situations
these threats are real~\cite{zuckerman-threatmodels}; yet
software-based security systems like ours are poorly equipped to handle
a user who is entirely observed and controlled by the adversary. See
Section~\ref{subsec:cafes-and-livecds} for more discussion of what little
we can do about this issue.

We assume that the attacker may be able to use political and economic
resources to secure the cooperation of extraterritorial or multinational
corporations and entities in investigating information sources. For example,
the censors can threaten the service providers of troublesome blogs
with economic
reprisals if they do not reveal the authors' identities.

We assume that the user will be able to fetch a genuine
version of Tor, rather than one supplied by the adversary; see
Section~\ref{subsec:trust-chain} for discussion on helping the user
confirm that he has a genuine version and that he can connect to the
real Tor network.
\section{Adapting the current Tor design to anticensorship}
\label{sec:current-tor}

Tor is popular: it is the largest anonymity network of its kind,
having attracted more than 800 volunteer-operated routers from around the
world. Tor protects users by routing their traffic through a multiply
encrypted ``circuit'' built of a few randomly selected servers, each of which
can remove only a single layer of encryption. Each server sees only the step
before it and the step after it in the circuit, and so no single server can
learn the connection between a user and her chosen communication partners.
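The layered-circuit idea can be sketched as follows. This is a deliberately simplified illustration, not Tor's actual protocol: a hash-derived XOR keystream stands in for Tor's real ciphers (so, unlike in Tor, layer order is not enforced here), and the hop keys are invented placeholders.

```python
# Toy sketch of onion-style layered encryption: the client wraps the payload
# once per hop, and each relay removes exactly one layer with its own key.
# XOR with a SHA-256-derived keystream stands in for real symmetric ciphers.
import hashlib


def keystream(key: bytes, n: int) -> bytes:
    """Expand a hop key into n pseudorandom bytes (toy construction)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]


def xor_layer(data: bytes, key: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))


def client_wrap(payload: bytes, hop_keys: list) -> bytes:
    """Client adds one encryption layer per hop, innermost for the exit."""
    for key in reversed(hop_keys):
        payload = xor_layer(payload, key)
    return payload


def relay_unwrap(cell: bytes, key: bytes) -> bytes:
    """A relay strips only its own layer; it learns nothing about the rest."""
    return xor_layer(cell, key)
```

The key property the sketch shows is structural: no single relay holds enough keys to recover both the payload and the user's identity, so only the full path of relays together can link the two ends.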
In this section, we examine some of the reasons why Tor has become popular,
with particular emphasis on how we can take advantage of these properties
for a blocking-resistance design.

Tor aims to provide three security properties:
\begin{tightlist}
\item 1. A local network attacker can't learn, or influence, your
destination.
\item 2. No single router in the Tor network can link you to your
destination.
\item 3. The destination, or somebody watching the destination,
can't learn your location.
\end{tightlist}
For blocking-resistance, we care most clearly about the first
property. But as the arms race progresses, the second property
will become important---for example, to discourage an adversary
from volunteering a relay in order to learn that Alice is reading
or posting to certain websites. The third property helps keep users safe from
collaborating websites: consider websites and other Internet services
that have been pressured
recently into revealing the identity of bloggers
%~\cite{arrested-bloggers}
or treating clients differently depending on their network
location~\cite{goodell-syverson06}.
%~\cite{google-geolocation}.

The Tor design provides other features as well that are not typically
present in manual or ad hoc circumvention techniques.
First, Tor has a well-analyzed and well-understood way to distribute
information about servers.
Tor directory authorities automatically aggregate, test,
and publish signed summaries of the available Tor routers. Tor clients
can fetch these summaries to learn which routers are available and
which routers are suitable for their needs. Directory information is cached
throughout the Tor network, so once clients have bootstrapped they never
need to interact with the authorities directly. (To tolerate a minority
of compromised directory authorities, we use a threshold trust scheme---
see Section~\ref{subsec:trust-chain} for details.)
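The threshold-trust idea can be sketched as a simple acceptance rule: a client trusts a directory summary only if a strict majority of its built-in authorities signed it. This is an illustration under stated assumptions, not Tor's directory protocol: HMACs stand in for real public-key signatures, and the authority names and keys are invented.

```python
# Toy sketch of threshold trust in directory authorities: a client accepts a
# signed network summary only if more than half of its known authorities
# vouch for it, so a minority of compromised authorities cannot forge one.
# HMAC stands in for real signatures; all names and keys are placeholders.
import hashlib
import hmac

# The client's hard-coded trust roots (hypothetical).
AUTHORITY_KEYS = {"auth1": b"k1", "auth2": b"k2", "auth3": b"k3"}


def sign(name: str, summary: bytes) -> bytes:
    """An authority's 'signature' over a directory summary."""
    return hmac.new(AUTHORITY_KEYS[name], summary, hashlib.sha256).digest()


def accept(summary: bytes, sigs: dict) -> bool:
    """Client-side check: count valid signatures from known authorities."""
    valid = sum(
        1
        for name, sig in sigs.items()
        if name in AUTHORITY_KEYS
        and hmac.compare_digest(sig, sign(name, summary))
    )
    return valid > len(AUTHORITY_KEYS) // 2  # strict majority required
```

With three authorities, two valid signatures suffice and one does not, so compromising a single authority gains the attacker nothing.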
Second, the list of directory authorities is not hard-wired.
Clients use the default authorities if no others are specified,
but it's easy to start a separate (or even overlapping) Tor network just
by running a different set of authorities and convincing users to prefer
a modified client. For example, we could launch a distinct Tor network
inside China; some users could even use an aggregate network made up of
both the main network and the China network. (But we should not be too
quick to create other Tor networks---part of Tor's anonymity comes from
users behaving like other users, and there are many unsolved anonymity
questions if different users know about different pieces of the network.)
Third, in addition to automatically learning from the chosen directories
which Tor routers are available and working, Tor takes care of building
paths through the network and rebuilding them as needed. So the user
never has to know how paths are chosen, never has to manually pick
working proxies, and so on. More generally, at its core the Tor protocol
is simply a tool that can build paths given a set of routers. Tor is
quite flexible about how it learns about the routers and how it chooses
the paths. Harvard's Blossom project~\cite{blossom-thesis} makes this
flexibility more concrete: Blossom makes use of Tor not for its security
properties but for its reachability properties. It runs a separate set
of directory authorities, its own set of Tor routers (called the Blossom
network), and uses Tor's flexible path-building to let users view Internet
resources from any point in the Blossom network.

Fourth, Tor separates the role of \emph{internal relay} from the
role of \emph{exit relay}. That is, some volunteers choose just to relay
traffic between Tor users and Tor routers, and others choose to also allow
connections to external Internet resources. Because we don't force all
volunteers to play both roles, we end up with more relays. This increased
diversity in turn is what gives Tor its security: the more options the
user has for her first hop, and the more options she has for her last hop,
the less likely it is that a given attacker will be watching both ends
of her circuit~\cite{tor-design}. As a bonus, because our design attracts
more internal relays that want to help out but don't want to deal with
being an exit relay, we end up with more options for the first hop---the
one most critical to being able to reach the Tor network.
Fifth, Tor is sustainable. Zero-Knowledge Systems offered the commercial
but now defunct Freedom Network~\cite{freedom21-security}, a design with
security comparable to Tor's, but its funding model relied on collecting
money from users to pay relay operators. Modern commercial proxy systems
similarly
need to keep collecting money to support their infrastructure. On the
other hand, Tor has built a self-sustaining community of volunteers who
donate their time and resources. This community trust is rooted in Tor's
open design: we tell the world exactly how Tor works, and we provide all
the source code. Users can decide for themselves, or pay any security
expert to decide, whether it is safe to use. Further, Tor's modularity
as described above, along with its open license, means that its impact
will continue to grow.

Sixth, Tor has an established user base of hundreds of
thousands of people from around the world. This diversity of
users contributes to sustainability as above: Tor is used by
ordinary citizens, activists, corporations, law enforcement, and
even government and military users,\footnote{\url{http://tor.eff.org/overview}}
and they can
only achieve their security goals by blending together in the same
network~\cite{econymics,usability:weis2006}. This user base also provides
something else: hundreds of thousands of different and often-changing
addresses that we can leverage for our blocking-resistance design.
Finally and perhaps most importantly, Tor provides anonymity and prevents any
single server from linking users to their communication partners. Despite
initial appearances, {\it distributed-trust anonymity is critical for
anticensorship efforts}. If any single server can expose dissident bloggers
or compile a list of users' behavior, the censors can profitably compromise
that server's operator, perhaps by applying economic pressure to their
employers,
breaking into their computer, pressuring their family (if they have relatives
in the censored area), or so on. Furthermore, in designs where any relay can
expose its users, the censors can spread suspicion that they are running some
of the relays and use this belief to chill use of the network.

We discuss and adapt these components further in
Section~\ref{sec:bridges}. But first we examine the strengths and
weaknesses of other blocking-resistance approaches, so we can expand
our repertoire of building blocks and ideas.
\section{Current proxy solutions}
\label{sec:related}

Relay-based blocking-resistance schemes generally have two main
components: a relay component and a discovery component. The relay part
encompasses the process of establishing a connection, sending traffic
back and forth, and so on---everything that's done once the user knows
where she's going to connect. Discovery is the step before that: the
process of finding one or more usable relays.

For example, we can divide the pieces of Tor in the previous section
into the process of building paths and sending
traffic over them (relay) and the process of learning from the directory
servers about what routers are available (discovery). With this distinction
in mind, we now examine several categories of relay-based schemes.
\subsection{Centrally-controlled shared proxies}

Existing commercial anonymity solutions (like Anonymizer.com) are based
on a set of single-hop proxies. In these systems, each user connects to
a single proxy, which then relays traffic between the user and her
destination. These public proxy
systems are typically characterized by two features: they control and
operate the proxies centrally, and many different users get assigned
to each proxy.

In terms of the relay component, single proxies provide weak security
compared to systems that distribute trust over multiple relays, since a
compromised proxy can trivially observe all of its users' actions, and
an eavesdropper only needs to watch a single proxy to perform timing
correlation attacks against all its users' traffic and thus learn where
everyone is connecting. Worse, all users
need to trust the proxy company to have good security itself as well as
to not reveal user activities.

On the other hand, single-hop proxies are easier to deploy, and they
can provide better performance than distributed-trust designs like Tor,
since traffic only goes through one relay. They're also more convenient
from the user's perspective---since users entirely trust the proxy,
they can just use their web browser directly.

Whether public proxy schemes are more or less scalable than Tor is
still up for debate: commercial anonymity systems can use some of their
revenue to provision more bandwidth as they grow, whereas volunteer-based
anonymity systems can attract thousands of fast relays to spread the load.
The discovery piece can take several forms. Most commercial anonymous
proxies have one or a handful of commonly known websites, and their users
log in to those websites and relay their traffic through them. When
these websites get blocked (generally soon after the company becomes
popular), if the company cares about users in the blocked areas, they
start renting lots of disparate IP addresses and rotating through them
as they get blocked. They notify their users of new addresses (by email,
for example). It's an arms race, since attackers can sign up to receive the
email too, but operators have one nice trick available to them: because they
have a list of paying subscribers, they can notify certain subscribers
about updates earlier than others.

Access control systems on the proxy let them provide service only to
users with certain characteristics, such as paying customers or people
from certain IP address ranges.

Discovery in the face of a government-level firewall is a complex and
unsolved
topic, and we're stuck in this same arms race ourselves; we explore it
in more detail in Section~\ref{sec:discovery}. But first we examine the
other end of the spectrum---getting volunteers to run the proxies,
and telling only a few people about each proxy.
\subsection{Independent personal proxies}

Personal proxies such as Circumventor~\cite{circumventor} and
CGIProxy~\cite{cgiproxy} use the same technology as the public ones as
far as the relay component goes, but they use a different strategy for
discovery. Rather than managing a few centralized proxies and constantly
getting new addresses for them as the old addresses are blocked, they
aim to have a large number of entirely independent proxies, each managing
its own (much smaller) set of users.

As the Circumventor site explains, ``You don't
actually install the Circumventor \emph{on} the computer that is blocked
from accessing Web sites. You, or a friend of yours, has to install the
Circumventor on some \emph{other} machine which is not censored.''

This tactic has great advantages in terms of blocking-resistance---recall
our assumption in Section~\ref{sec:adversary} that the attention
a system attracts from the attacker is proportional to its number of
users and level of publicity. If each proxy only has a few users, and
there is no central list of proxies, most of them will never get noticed by
the censors.
  403. On the other hand, there's a huge scalability question that so far has
  404. prevented these schemes from being widely useful: how does the fellow
  405. in China find a person in Ohio who will run a Circumventor for him? In
  406. some cases he may know and trust some people on the outside, but in many
  407. cases he's just out of luck. Just as hard, how does a new volunteer in
  408. Ohio find a person in China who needs it?
  409. % another key feature of a proxy run by your uncle is that you
  410. % self-censor, so you're unlikely to bring abuse complaints onto
  411. % your uncle. self-censoring clearly has a downside too, though.
  412. This challenge leads to a hybrid design---centrally-distributed
  413. personal proxies---which we will investigate in more detail in
  414. Section~\ref{sec:discovery}.
\subsection{Open proxies}

Yet another currently used approach to bypassing firewalls is to locate
open and misconfigured proxies on the Internet. A quick Google search
for ``open proxy list'' yields a wide variety of freely available lists
of HTTP, HTTPS, and SOCKS proxies. Many small companies have sprung up
providing more refined lists to paying customers.

There are some downsides to using these open proxies though. First,
the proxies are of widely varying quality in terms of bandwidth and
stability, and many of them are entirely unreachable. Second, unlike
networks of volunteers like Tor, the legality of routing traffic through
these proxies is questionable: it's widely believed that most of them
don't realize what they're offering, and probably wouldn't allow it if
they realized. Third, in many cases the connection to the proxy is
unencrypted, so firewalls that filter based on keywords in IP packets
will not be hindered. And last, many users are suspicious that some
open proxies are a little \emph{too} convenient: are they run by the
adversary, in which case they get to monitor all the user's requests
just as single-hop proxies can?

A distributed-trust design like Tor resolves each of these issues for
the relay component, but a constantly changing set of thousands of open
relays is clearly a useful idea for a discovery component. For example,
users might be able to make use of these proxies to bootstrap their
first introduction into the Tor network.
\subsection{Blocking resistance and JAP}

K\"{o}psell and Hillig's Blocking Resistance
design~\cite{koepsell:wpes2004} is probably
the closest related work, and is the starting point for the design in this
paper. In this design, the JAP anonymity system~\cite{web-mix} is used
as a base instead of Tor. Volunteers operate a large number of access
points that relay traffic to the core JAP
network, which in turn anonymizes users' traffic. The software to run these
relays is, as in our design, included in the JAP client software and enabled
only when the user decides to enable it. Discovery is handled with a
CAPTCHA-based mechanism; users prove that they aren't an automated process,
and are given the address of an access point. (The problem of a determined
attacker with enough manpower to launch many requests and enumerate all the
access points is not considered in depth.) There is also some suggestion
that information about access points could spread through existing social
networks.
\subsection{Infranet}

The Infranet design~\cite{infranet} uses one-hop relays to deliver web
content, but disguises its communications as ordinary HTTP traffic. Requests
are split into multiple requests for URLs on the relay, which then encodes
its responses in the content it returns. The relay needs to be an actual
website with plausible content and a number of URLs which the user might want
to access---if the Infranet software produced its own cover content, it would
be far easier for censors to identify. To keep the censors from noticing
that cover content changes depending on what data is embedded, Infranet needs
the cover content to have an innocuous reason for changing frequently: the
paper recommends watermarked images and webcams.

The attacker and relay operators in Infranet's threat model are significantly
different from ours. Unlike our attacker, Infranet's censor can't be
bypassed with encrypted traffic (presumably because the censor blocks
encrypted traffic, or at least considers it suspicious), and has more
computational resources to devote to each connection than ours (so it can
notice subtle patterns over time). Unlike our bridge operators, Infranet's
operators (and users) have more bandwidth to spare; the overhead in typical
steganography schemes is far higher than Tor's.

The Infranet design does not include a discovery element. Discovery,
however, is a critical point: if whatever mechanism allows users to learn
about relays also allows the censor to do so, he can trivially discover and
block their addresses, even if the steganography would prevent mere traffic
observation from revealing the relays' addresses.
\subsection{RST-evasion and other packet-level tricks}

In their analysis of China's firewall's content-based blocking, Clayton,
Murdoch, and Watson discovered that rather than blocking all packets in a TCP
stream once a forbidden word was noticed, the firewall was simply forging
RST packets to make the communicating parties believe that the connection was
closed~\cite{clayton:pet2006}. They proposed altering operating systems
to ignore forged RST packets.

Other packet-level responses to filtering include splitting
sensitive words across multiple TCP packets, so that the censors'
firewalls can't notice them without performing expensive stream
reconstruction~\cite{ptacek98insertion}. This technique relies on the
same insight as our weak steganography assumption.
\subsection{Internal caching networks}

Freenet~\cite{freenet-pets00} is an anonymous peer-to-peer data store.
Analyzing Freenet's security can be difficult, as its design is in flux as
new discovery and routing mechanisms are proposed, and no complete
specification has (to our knowledge) been written. Freenet servers relay
requests for specific content (indexed by a digest of the content)
``toward'' the server that hosts it, and then cache the content as it
follows the same path back to
the requesting user. If Freenet's routing mechanism is successful in
allowing nodes to learn about each other and route correctly even as some
node-to-node links are blocked by firewalls, then users inside censored areas
can ask a local Freenet server for a piece of content, and get an answer
without having to connect out of the country at all. Of course, operators of
servers inside the censored area can still be targeted, and the addresses of
external servers can still be blocked.
\subsection{Skype}

The popular Skype voice-over-IP software uses multiple techniques to tolerate
restrictive networks, some of which allow it to continue operating in the
presence of censorship. By switching ports and using encryption, Skype
attempts to resist trivial blocking and content filtering. Even if no
encryption were used, it would still be expensive to scan all voice
traffic for sensitive words. Also, most current keyloggers are unable to
store voice traffic. Nevertheless, Skype can still be blocked, especially at
its central directory service.
\subsection{Tor itself}

And last, we include Tor itself in the list of current solutions
to firewalls. Tens of thousands of people use Tor from countries that
routinely filter their Internet. Tor's website has been blocked in most
of them. But why hasn't the Tor network been blocked yet?

We have several theories. The first is the most straightforward: tens of
thousands of people are simply too few to matter. It may help that Tor is
perceived to be for experts only, and thus not worth attention yet. The
more subtle variant on this theory is that we've positioned Tor in the
public eye as a tool for retaining civil liberties in more free countries,
so perhaps blocking authorities don't view it as a threat. (We revisit
this idea when we consider whether and how to publicize a Tor variant
that improves blocking-resistance---see Section~\ref{subsec:publicity}
for more discussion.)

The broader explanation is that the maintenance of most government-level
filters is aimed at stopping widespread information flow and at appearing
to be in control, not at the impossible goal of blocking all possible ways
to bypass censorship. Censors realize that there will always
be ways for a few people to get around the firewall, and as long as Tor
has not publicly threatened their control, they see no urgent need to
block it yet.

We should recognize that we're \emph{already} in the arms race. These
constraints can give us insight into the priorities and capabilities of
our various attackers.
\section{The relay component of our blocking-resistant design}
\label{sec:bridges}

Section~\ref{sec:current-tor} describes many reasons why Tor is
well-suited as a building block in our context, but several changes will
allow the design to resist blocking better. The most critical changes are
to get more relay addresses, and to distribute them to users differently.

%We need to address three problems:
%- adapting the relay component of Tor so it resists blocking better.
%- Discovery.
%- Tor's network signature.
%Here we describe the new pieces we need to add to the current Tor design.
\subsection{Bridge relays}

Today, Tor servers operate on fewer than a thousand distinct IP addresses;
an adversary
could enumerate and block them all with little trouble. To provide a
means of ingress to the network, we need a larger set of entry points, most
of which an adversary won't be able to enumerate easily. Fortunately, we
have such a set: the Tor users.

Hundreds of thousands of people around the world use Tor. We can leverage
our already self-selected user base to produce a list of thousands of
often-changing IP addresses. Specifically, we can give them a little
button in the GUI that says ``Tor for Freedom'', and users who click
the button will turn into \emph{bridge relays} (or just \emph{bridges}
for short). They can rate limit relayed connections to 10 KB/s (almost
nothing for a broadband user in a free country, but plenty for a user
who otherwise has no access at all), and since they are just relaying
bytes back and forth between blocked users and the main Tor network, they
won't need to make any external connections to Internet sites. Because
of this separation of roles, and because we're making use of software
that the volunteers have already installed for their own use, we expect
our scheme to attract and maintain more volunteers than previous schemes.
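A per-bridge cap of roughly 10 KB/s could be enforced with a simple token bucket. The sketch below is illustrative only: the class name and parameters are ours, and Tor's actual rate limiter is more elaborate than this.

```python
class TokenBucket:
    """Minimal token-bucket rate limiter (sketch, not Tor's implementation).

    Tokens accumulate at `rate` bytes per second, up to `burst` bytes;
    a relayed chunk is allowed only if enough tokens are available.
    """

    def __init__(self, rate=10_000, burst=10_000):
        self.rate = rate          # refill rate in bytes/second
        self.capacity = burst     # maximum accumulated tokens
        self.tokens = burst       # start with a full bucket
        self.last = 0.0           # timestamp of the last refill

    def allow(self, nbytes, now):
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False
```

Averaged over time, relayed traffic through such a bucket cannot exceed the configured rate, while short bursts up to the bucket capacity still go through promptly.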
As usual, there are new anonymity and security implications from running a
bridge relay, particularly from letting people relay traffic through your
Tor client; but we leave this discussion for Section~\ref{sec:security}.

%...need to outline instructions for a Tor config that will publish
%to an alternate directory authority, and for controller commands
%that will do this cleanly.
\subsection{The bridge directory authority}

How do the bridge relays advertise their existence to the world? We
introduce a second new component of the design: a specialized directory
authority that aggregates and tracks bridges. Bridge relays periodically
publish server descriptors (summaries of their keys, locations, etc.,
signed by their long-term identity key), just like the relays in the
``main'' Tor network, but in this case they publish them only to the
bridge directory authorities.

The main difference between bridge authorities and the directory
authorities for the main Tor network is that the main authorities provide
a list of every known relay, but the bridge authorities only give
out a server descriptor if you already know its identity key. That is,
you can keep up-to-date on a bridge's location and other information
once you know about it, but you can't just grab a list of all the bridges.
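The essential behavior is a keyed table that deliberately lacks an enumeration operation: knowing an identity key lets you refresh that bridge's descriptor, and nothing more. The class and method names below are hypothetical illustrations, not Tor's actual interface.

```python
class BridgeAuthority:
    """Sketch of the lookup-by-identity-key policy (names are ours).

    Descriptors are handed out only to callers who already know a
    bridge's fingerprint; there is intentionally no way to list them.
    """

    def __init__(self):
        self._descriptors = {}  # fingerprint -> latest signed descriptor

    def publish(self, fingerprint, descriptor):
        # A bridge uploads (or refreshes) its own descriptor.
        self._descriptors[fingerprint] = descriptor

    def lookup(self, fingerprint):
        # Returns the descriptor only for a known fingerprint; an
        # attacker without fingerprints learns nothing from queries.
        return self._descriptors.get(fingerprint)
```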
The identity key, IP address, and directory port for each bridge
authority ship by default with the Tor software, so the bridge relays
can be confident they're publishing to the right location, and the
blocked users can establish an encrypted authenticated channel. See
Section~\ref{subsec:trust-chain} for more discussion of the public key
infrastructure and trust chain.

Bridges use Tor to publish their descriptors privately and securely,
so even an attacker monitoring the bridge directory authority's network
can't make a list of all the addresses contacting the authority.
Bridges may publish to only a subset of the
authorities, to limit the potential impact of an authority compromise.
%\subsection{A simple matter of engineering}
%
%Although we've described bridges and bridge authorities in simple terms
%above, some design modifications and features are needed in the Tor
%codebase to add them. We describe the four main changes here.
%
%Firstly, we need to get smarter about rate limiting:
%Bandwidth classes
%
%Secondly, while users can in fact configure which directory authorities
%they use, we need to add a new type of directory authority and teach
%bridges to fetch directory information from the main authorities while
%publishing server descriptors to the bridge authorities. We're most of
%the way there, since we can already specify attributes for directory
%authorities:
%add a separate flag named ``blocking''.
%
%Thirdly, need to build paths using bridges as the first
%hop. One more hole in the non-clique assumption.
%
%Lastly, since bridge authorities don't answer full network statuses,
%we need to add a new way for users to learn the current status for a
%single relay or a small set of relays---to answer such questions as
%``is it running?'' or ``is it behaving correctly?'' We describe in
%Section~\ref{subsec:enclave-dirs} a way for the bridge authority to
%publish this information without resorting to signing each answer
%individually.
\subsection{Putting them together}
\label{subsec:relay-together}

If a blocked user knows the identity keys of a set of bridge relays, and
he has correct address information for at least one of them, he can use
that one to make a secure connection to the bridge authority and update
his knowledge about the other bridge relays. He can also use it to make
secure connections to the main Tor network and directory servers, so he
can build circuits and connect to the rest of the Internet. All of these
updates happen in the background: from the blocked user's perspective,
he just accesses the Internet via his Tor client like always.

So now we've reduced the problem from how to circumvent the firewall
for all transactions (and how to know that the pages you get have not
been modified by the local attacker) to how to learn about a working
bridge relay.

There's another catch though. We need to make sure that the network
traffic we generate by simply connecting to a bridge relay doesn't stand
out too much.

%The following section describes ways to bootstrap knowledge of your first
%bridge relay, and ways to maintain connectivity once you know a few
%bridge relays.
% (See Section~\ref{subsec:first-bridge} for a discussion
%of exactly what information is sufficient to characterize a bridge relay.)
\section{Hiding Tor's network signatures}
\label{sec:network-signature}
\label{subsec:enclave-dirs}

Currently, Tor uses two protocols for its network communications. The
main protocol uses TLS for encrypted and authenticated communication
between Tor instances. The second protocol is standard HTTP, used for
fetching directory information. All Tor servers listen on their ``ORPort''
for TLS connections, and some of them opt to listen on their ``DirPort''
as well, to serve directory information. Tor servers choose whatever port
numbers they like; the server descriptor they publish to the directory
tells users where to connect.

One format for communicating address information about a bridge relay is
its IP address and DirPort. From there, the user can ask the bridge's
directory cache for an up-to-date copy of its server descriptor, and
learn its current circuit keys, its ORPort, and so on.

However, connecting directly to the directory cache involves a plaintext
HTTP request. A censor could create a network signature for the request
and/or its response, thus preventing these connections. To resolve this
vulnerability, we've modified the Tor protocol so that users can connect
to the directory cache via the main Tor port---they establish a TLS
connection with the bridge as normal, and then send a special ``begindir''
relay command to establish an internal connection to its directory cache.

Therefore a better way to summarize a bridge's address is by its IP
address and ORPort, so all communications between the client and the
bridge will use ordinary TLS. But there are other details that need
more investigation.
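To make the ``IP address and ORPort'' summary concrete, a client might parse such a summary, optionally followed by an identity fingerprint, as below. The textual format here is our assumption for illustration, not Tor's specified bridge-line syntax.

```python
def parse_bridge_summary(line):
    """Parse a bridge summary of the form 'IP:ORPort [fingerprint]'.

    This format is a hypothetical illustration of what address
    information suffices to reach a bridge; the fingerprint part
    is optional.
    """
    parts = line.split()
    # rpartition tolerates a colon-free host by splitting on the
    # rightmost ':' only.
    host, _, port = parts[0].rpartition(":")
    fingerprint = parts[1] if len(parts) > 1 else None
    return host, int(port), fingerprint
```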
What port should bridges pick for their ORPort? We currently recommend
that they listen on port 443 (the default HTTPS port) if they want to
be most useful, because clients behind standard firewalls will have
the best chance to reach them. Is this the best choice in all cases,
or should we encourage some fraction of them to pick random ports, or other
ports commonly permitted through firewalls like 53 (DNS) or 110
(POP)? Or perhaps we should use other ports where TLS traffic is
expected, like 993 (IMAPS) or 995 (POP3S)? We need more research on our
potential users, and their current and anticipated firewall restrictions.
Furthermore, we need to look at the specifics of Tor's TLS handshake.
Right now Tor uses some predictable strings in its TLS handshakes. For
example, it sets the X.509 organizationName field to ``Tor'', and it puts
the Tor server's nickname in the certificate's commonName field. We
should tweak the handshake protocol so it doesn't rely on any unusual details
in the certificate, yet it remains secure; the certificate itself
should be made to resemble an ordinary HTTPS certificate. We should also try
to make our advertised cipher-suites closer to what an ordinary web server
would support.

Tor's TLS handshake uses two-certificate chains: one certificate
contains the self-signed identity key for
the router, and the second contains a current TLS key, signed by the
identity key. We use these to authenticate that we're talking to the right
router, and to limit the impact of TLS-key exposure. Most (though far from
all) consumer-oriented HTTPS services provide only a single certificate.
These extra certificates may help identify Tor's TLS handshake; instead,
bridges should consider using only a single TLS key certificate signed by
their identity key, and providing the full value of the identity key in an
early handshake cell. More significantly, Tor currently has all clients
present certificates, so that clients are harder to distinguish from servers.
But in a blocking-resistance environment, clients should not present
certificates at all.
Last, what if the adversary starts observing the network traffic even
more closely? Even if our TLS handshake looks innocent, our traffic timing
and volume still look different from those of a user making a secure web
connection to his bank. The same techniques used in the growing trend to
build tools to recognize encrypted BitTorrent traffic
%~\cite{bt-traffic-shaping}
could be used to identify Tor communication and recognize bridge
relays. Rather than trying to look like encrypted web traffic, we may be
better off trying to blend with some other encrypted network protocol. The
first step is to compare typical network behavior for a Tor client to
typical network behavior for various other protocols. This statistical
cat-and-mouse game is made more complex by the fact that Tor transports a
variety of protocols, and we'll want to automatically handle web browsing
differently from, say, instant messaging.

% Tor cells are 512 bytes each. So TLS records will be roughly
% multiples of this size? How bad is this? -RD
% Look at ``Inferring the Source of Encrypted HTTP Connections''
% by Marc Liberatore and Brian Neil Levine (CCS 2006)
% They substantially flesh out the numbers for the web fingerprinting
% attack. -PS
% Yes, but I meant detecting the signature of Tor traffic itself, not
% learning what websites we're going to. I wouldn't be surprised to
% learn that these are related problems, but it's not obvious to me. -RD
\subsection{Identity keys as part of addressing information}

We have described a way for the blocked user to bootstrap into the
network once he knows the IP address and ORPort of a bridge. What about
local spoofing attacks? That is, since we never learned an identity
key fingerprint for the bridge, a local attacker could intercept our
connection and pretend to be the bridge we had in mind. It turns out
that giving false information isn't that bad---since the Tor client
ships with trusted keys for the bridge directory authority and the Tor
network directory authorities, the user can learn whether he's being
given a real connection to the bridge authorities or not. (After all,
if the adversary intercepts every connection the user makes and gives
him a bad connection each time, there's nothing we can do.)

What about anonymity-breaking attacks from observing traffic, if the
blocked user doesn't start out knowing the identity key of his intended
bridge? The vulnerabilities aren't so bad in this case either---the
adversary could do similar attacks just by monitoring the network
traffic.
% cue paper by steven and george

Once the Tor client has fetched the bridge's server descriptor, it should
remember the identity key fingerprint for that bridge relay. Thus if
the bridge relay moves to a new IP address, the client can query the
bridge directory authority to look up a fresh server descriptor using
this fingerprint.

So we've shown that it's \emph{possible} to bootstrap into the network
just by learning the IP address and ORPort of a bridge, but are there
situations where it's more convenient or more secure to learn the bridge's
identity fingerprint as well as, or instead of, its address while
bootstrapping? We keep
that question in mind as we next investigate bootstrapping and discovery.
\section{Discovering working bridge relays}
\label{sec:discovery}

Tor's modular design means that we can develop a better relay component
independently of developing the discovery component. This modularity's
great promise is that we can pick any discovery approach we like; but the
unfortunate fact is that we have no magic bullet for discovery. We're
in the same arms race as all the other designs we described in
Section~\ref{sec:related}.

In this section we describe a variety of approaches to adding discovery
components for our design.

\subsection{Bootstrapping: finding your first bridge.}
\label{subsec:first-bridge}

In Section~\ref{subsec:relay-together}, we showed that a user who knows
a working bridge address can use it to reach the bridge authority and
to stay connected to the Tor network. But how do new users reach the
bridge authority in the first place? After all, the bridge authority
will be one of the first addresses that a censor blocks.

First, we should recognize that most government firewalls are not
perfect. That is, they may allow connections to Google cache or some
open proxy servers, or they let file-sharing traffic, Skype, instant
messaging, or World-of-Warcraft connections through. Different users will
have different mechanisms for bypassing the firewall initially. Second,
we should remember that most people don't operate in a vacuum; users will
hopefully know other people who are in other situations or have other
resources available. In the rest of this section we develop a toolkit
of different options and mechanisms, so that we can enable users in a
diverse set of contexts to bootstrap into the system.

(For users who can't use any of these techniques, hopefully they know
a friend who can---for example, perhaps the friend already knows some
bridge relay addresses. If they can't get around it at all, then we
can't help them---they should go meet more people or learn more about
the technology running the firewall in their area.)

By deploying all the schemes in the toolkit at once, we let bridges and
blocked users employ the discovery approach that is most appropriate
for their situation.
\subsection{Independent bridges, no central discovery}

The first design is simply to have no centralized discovery component at
all. Volunteers run bridges, and we assume they have some blocked users
in mind and communicate their address information to them out-of-band
(for example, through Gmail). This design allows for small personal
bridges that have only one or a handful of users in mind, but it can
also support an entire community of users. For example, Citizen Lab's
upcoming Psiphon single-hop proxy tool~\cite{psiphon} plans to use this
\emph{social network} approach as its discovery component.

There are several ways to do bootstrapping in this design. In the simple
case, the operator of the bridge informs each chosen user about his
bridge's address information and/or keys. A different approach involves
blocked users introducing new blocked users to the bridges they know.
That is, somebody in the blocked area can pass along a bridge's address to
somebody else they trust. This scheme brings in appealing but complex game
theoretic properties: the blocked user making the decision has an incentive
only to delegate to trustworthy people, since an adversary who learns
the bridge's address and filters it makes it unavailable for both of them.
Also, delegating known bridges to members of your social network can be
dangerous: an adversary who can learn who knows which bridges may
be able to reconstruct the social network.

Note that a central set of bridge directory authorities can still be
compatible with a decentralized discovery process. That is, how users
first learn about bridges is entirely up to the bridges, but the process
of fetching up-to-date descriptors for them can still proceed as described
in Section~\ref{sec:bridges}. Of course, creating a central place that
knows about all the bridges may not be smart, especially if every other
piece of the system is decentralized. Further, if a user only knows
about one bridge and he loses track of it, it may be quite a hassle to
reach the bridge authority. We address these concerns next.
\subsection{Families of bridges, no central discovery}

Because the blocked users are running our software too, we have many
opportunities to improve usability or robustness. Our second design builds
on the first by encouraging volunteers to run several bridges at once
(or coordinate with other bridge volunteers), such that some
of the bridges are likely to be available at any given time.

The blocked user's Tor client would periodically fetch an updated set of
recommended bridges from any of the working bridges. Now the client can
learn new additions to the bridge pool, and can expire abandoned bridges
or bridges that the adversary has blocked, without the user ever needing
to care. To simplify maintenance of the community's bridge pool, each
community could run its own bridge directory authority---reachable via
the available bridges, and also mirrored at each bridge.
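The client-side bookkeeping described above---learning newly recommended bridges and expiring ones that have gone silent---might look roughly like the following sketch. The function name, its arguments, and the time-to-live expiry policy are our assumptions, not part of the design.

```python
def update_bridge_pool(known, fetched, now, ttl=7 * 24 * 3600):
    """Merge a freshly fetched list of recommended bridge fingerprints
    into the client's pool, then drop entries not seen within `ttl`
    seconds (a stand-in for "abandoned or blocked" bridges).

    `known` maps fingerprint -> time the bridge was last recommended.
    """
    for fp in fetched:
        known[fp] = now  # refresh (or add) the last-seen timestamp
    # Keep only bridges recommended recently enough.
    return {fp: t for fp, t in known.items() if now - t < ttl}
```

All of this can run in the background, so the user never has to manage the pool by hand.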
\subsection{Public bridges with central discovery}

What about people who want to volunteer as bridges but don't know any
suitable blocked users? What about people who are blocked but don't
know anybody on the outside? Here we describe how to make use of these
\emph{public bridges} in a way that still makes it hard for the attacker
to learn all of them.

The basic idea is to divide public bridges into a set of pools based on
identity key. Each pool corresponds to a \emph{distribution strategy}:
an approach to distributing its bridge addresses to users. Each strategy
is designed to exercise a different scarce resource or property of
the user.

How do we divide bridges between these strategy pools such that they're
evenly distributed and the allocation is hard to influence or predict,
but also in a way that's amenable to creating more strategies later
on without reshuffling all the pools? We assign a given bridge
to a strategy pool by hashing the bridge's identity key along with a
secret that only the bridge authority knows: the first $n$ bits of this
hash dictate the strategy pool number, where $n$ is a parameter that
describes how many strategy pools we want at this point. We choose $n=3$
to start, so we divide bridges between 8 pools; but as we later invent
new distribution strategies, we can increment $n$ to split the 8 into
16. Since a bridge can't predict the next bit in its hash, it can't
anticipate which identity key will correspond to a certain new pool
when the pools are split. Further, since the bridge authority doesn't
provide any feedback to the bridge about which strategy pool it's in,
an adversary who signs up bridges with the goal of filling a certain
pool~\cite{casc-rep} will be hindered.
% This algorithm is not ideal. When we split pools, each existing
% pool is cut in half, where half the bridges remain with the
% old distribution policy, and half will be under what the new one
% is. So the new distribution policy inherits a bunch of blocked
% bridges if the old policy was too loose, or a bunch of unblocked
% bridges if its policy was still secure. -RD
%
% I think it should be more chordlike.
% Bridges are allocated to wherever on the ring which is divided
% into arcs (buckets).
% If a bucket gets too full, you can just split it.
% More on this below. -PFS

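The pool-assignment rule above can be sketched in a few lines. This is an illustrative sketch, not the deployed implementation: the paper doesn't fix a hash function, so SHA-256 is an assumption here, as are the function and parameter names.

```python
import hashlib

def assign_pool(identity_key: bytes, authority_secret: bytes, n_bits: int) -> int:
    """Pool number = first n_bits of H(secret || identity_key).

    Sketch only: SHA-256 and the names are assumptions for illustration.
    """
    digest = hashlib.sha256(authority_secret + identity_key).digest()
    # Interpret the digest as a big integer and keep its first n_bits.
    return int.from_bytes(digest, "big") >> (len(digest) * 8 - n_bits)
```

Because pool numbers are hash prefixes, incrementing $n$ from 3 to 4 turns pool $p$ into pools $2p$ and $2p+1$: existing pools split cleanly in two, and a bridge can't predict which half it will land in.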
The first distribution strategy (used for the first pool) publishes bridge
addresses in a time-release fashion. The bridge authority divides the
available bridges into partitions, and each partition is deterministically
available only in certain time windows. That is, over the course of a
given time slot (say, an hour), each requestor is given a random bridge
from within that partition. When the next time slot arrives, a new set
of bridges from the pool are available for discovery. Thus some bridge
address is always available when a new
user arrives, but to learn about all bridges the attacker needs to fetch
all new addresses at every new time slot. By varying the length of the
time slots, we can make it harder for the attacker to guess when to check
back. We expect these bridges will be the first to be blocked, but they'll
help the system bootstrap until they \emph{do} get blocked. Further,
remember that we're dealing with different blocking regimes around the
world that will progress at different rates---so this pool will still
be useful to some users even as the arms races progress.

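A minimal sketch of the time-release idea (the partition count, slot length, and names are assumptions for illustration): each time slot exposes exactly one partition, and requests during that slot draw a random bridge from it.

```python
import random

def current_partition(num_partitions: int, slot_seconds: int, now: int) -> int:
    """Deterministically pick which partition is discoverable right now."""
    return (now // slot_seconds) % num_partitions

def answer_request(partitions, slot_seconds, now, rng=random):
    """Hand out a random bridge from this time slot's partition."""
    bridges = partitions[current_partition(len(partitions), slot_seconds, now)]
    return rng.choice(bridges)
```

An attacker who wants the whole pool must come back every slot; varying `slot_seconds` over time makes it harder to know when to come back.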
The second distribution strategy publishes bridge addresses based on the IP
address of the requesting user. Specifically, the bridge authority will
divide the available bridges in the pool into a bunch of partitions
(as in the first distribution scheme), hash the requestor's IP address
with a secret of its own (as in the above allocation scheme for creating
pools), and give the requestor a random bridge from the appropriate
partition. To raise the bar, we should discard the last octet of the
IP address before inputting it to the hash function, so an attacker
who controls a single ``/24'' network counts as only one user. A
large attacker like China will still be able to control many addresses,
but the hassle of establishing connections from each network (or spoofing
TCP connections) may still slow them down. Similarly, as a special case,
we should treat IP addresses that are Tor exit nodes as all being on
the same network.

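The second strategy can be sketched as follows (SHA-256, the partition count, and the helper names are assumptions): the requestor's address is truncated to its /24 before hashing, and known Tor exits are collapsed into a single pseudo-network.

```python
import hashlib
import ipaddress

def requestor_partition(ip: str, secret: bytes, num_partitions: int,
                        tor_exits: frozenset = frozenset()) -> int:
    """Map a requestor to a partition by hashing its /24 with a secret."""
    if ip in tor_exits:
        network = b"tor-exit"  # all Tor exits count as one network
    else:
        net = ipaddress.ip_network(ip + "/24", strict=False)
        network = str(net.network_address).encode()  # last octet discarded
    digest = hashlib.sha256(secret + network).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions
```

Two addresses in the same /24 land in the same partition, so controlling one such network yields only one partition's worth of bridge addresses.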
The third strategy combines the time-based and location-based
strategies to further constrain and rate-limit the available bridge
addresses. Specifically, the bridge address provided in a given time
slot to a given network location is deterministic within the partition,
rather than chosen randomly each time from the partition. Thus, repeated
requests during that time slot from a given network are given the same
bridge address as the first request.

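Combining the two, the third strategy replaces the random choice with a deterministic one; a sketch under the same illustrative assumptions (hash function and names are not from the deployed design):

```python
import hashlib

def slot_bridge(partition, network: bytes, secret: bytes,
                slot_seconds: int, now: int):
    """Same (time slot, network) pair always maps to the same bridge."""
    slot = now // slot_seconds
    digest = hashlib.sha256(secret + network + slot.to_bytes(8, "big")).digest()
    return partition[int.from_bytes(digest[:4], "big") % len(partition)]
```

Repeated requests from one network within a slot therefore learn nothing new.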
The fourth strategy is based on Circumventor's discovery strategy.
The Circumventor project, realizing that its adoption will remain limited
if it has no central coordination mechanism, has started a mailing list to
distribute new proxy addresses every few days. From experimentation it
seems they have concluded that sending updates every three or four days
is sufficient to stay ahead of the current attackers.

The fifth strategy provides an alternative approach to a mailing list:
users provide an email address and receive an automated response
listing an available bridge address. We could limit responses to one per
email address. To further rate-limit queries, we could also require a
CAPTCHA solution
%~\cite{captcha}
in each case. In fact, we wouldn't need to
implement the CAPTCHA on our side: if we only deliver bridge addresses
to Yahoo or GMail addresses, we can leverage the rate-limiting schemes
that other parties already impose for account creation.

The sixth strategy ties in the social network design with public
bridges and a reputation system. We pick some seeds---trusted people in
blocked areas---and give them each a few dozen bridge addresses and a few
\emph{delegation tokens}. We run a website next to the bridge authority,
where users can log in (they connect via Tor, and they don't need to
provide actual identities, just persistent pseudonyms). Users can delegate
trust to other people they know by giving them a token, which can be
exchanged for a new account on the website. Accounts in ``good standing''
then accrue new bridge addresses and new tokens. As usual, reputation
schemes bring in a host of new complexities~\cite{rep-anon}: how do we
decide that an account is in good standing? We could tie reputation
to whether the bridges they're told about have been blocked---see
Section~\ref{subsec:geoip} below for initial thoughts on how to discover
whether bridges have been blocked. We could track reputation between
accounts (if you delegate to somebody who screws up, it impacts you too),
or we could use blinded delegation tokens~\cite{chaum-blind} to prevent
the website from mapping the seeds' social network. We put off deeper
discussion of the social network reputation strategy for future work.

Pools seven and eight are held in reserve, in case our currently deployed
tricks all fail at once and the adversary blocks all those bridges---so
we can adapt and move to new approaches quickly, and have some bridges
immediately available for the new schemes. New strategies might be based
on some other scarce resource, such as relaying traffic for others or
other proof of energy spent. (We might also worry about the incentives
for bridges that sign up and get allocated to the reserve pools: will they
be unhappy that they're not being used? But this is a transient problem:
if Tor users are bridges by default, nobody will mind not being used yet.
See also Section~\ref{subsec:incentives}.)

%Is it useful to load balance which bridges are handed out? The above
%pool concept makes some bridges wildly popular and others less so.
%But I guess that's the point.

\subsection{Public bridges with coordinated discovery}

We presented the above discovery strategies in the context of a single
bridge directory authority, but in practice we will want to distribute the
operations over several bridge authorities---a single point of failure
or attack is a bad move. The first answer is to run several independent
bridge directory authorities, and have bridges gravitate to one based on
their identity key. The better answer would be some federation of bridge
authorities that work together to provide redundancy but don't introduce
new security issues. We could even imagine designs where the bridge
authorities have encrypted versions of the bridges' server descriptors,
and the users learn a decryption key that they keep private when they
first hear about the bridge---this way the bridge authorities would not
be able to learn the IP addresses of the bridges.

We leave this design question for future work.

\subsection{Assessing whether bridges are useful}

Learning whether a bridge is useful is important in the bridge authority's
decision to include it in responses to blocked users. For example, if
we end up with a list of thousands of bridges and only a few dozen of
them are reachable right now, most blocked users will not end up knowing
about working bridges.

There are three components for assessing how useful a bridge is. First,
is it reachable from the public Internet? Second, what proportion of
the time is it available? Third, is it blocked in certain jurisdictions?

The first component can be tested just as we test reachability of
ordinary Tor servers. Specifically, the bridges do a self-test---connect
to themselves via the Tor network---before they are willing to
publish their descriptor, to make sure they're not obviously broken or
misconfigured. Once the bridges publish, the bridge authority also tests
reachability to make sure they're not confused or outright lying.

The second component can be measured and tracked by the bridge authority.
By doing periodic reachability tests, we can get a sense of how often the
bridge is available. More complex tests will involve bandwidth-intensive
checks to force the bridge to commit resources in order to be counted as
available. We need to evaluate how uptime percentage should weigh into
our choice of which bridges to advertise. We leave this to future work.

The third component is perhaps the trickiest: with many different
adversaries out there, how do we keep track of which adversaries have
blocked which bridges, and how do we learn about new blocks as they
occur? We examine this problem next.

\subsection{How do we know if a bridge relay has been blocked?}
\label{subsec:geoip}

There are two main mechanisms for testing whether bridges are reachable
from inside each blocked area: active testing via users, and passive
testing via bridges.

In the case of active testing, certain users inside each area
sign up as testing relays. The bridge authorities can then use a
Blossom-like~\cite{blossom-thesis} system to build circuits through them
to each bridge and see if it can establish the connection. But how do
we pick the users? If we ask random users to do the testing (or if we
solicit volunteers from the users), the adversary will sign up so he
can enumerate the bridges we test. Indeed, even if we hand-select our
testers, the adversary might still discover their location and monitor
their network activity to learn bridge addresses.

Another answer is not to measure directly, but rather let the bridges
report whether they're being used.
%If they periodically report to their
%bridge directory authority how much use they're seeing, perhaps the
%authority can make smart decisions from there.
Specifically, bridges should install a GeoIP database such as the public
IP-To-Country list~\cite{ip-to-country}, and then periodically report to the
bridge authorities which countries they're seeing use from. This data
would help us track which countries are making use of the bridge design,
and can also let us learn about new steps the adversary has taken in
the arms race. (The compressed GeoIP database is only several hundred
kilobytes, and we could even automate the update process by serving it
from the bridge authorities.)

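As a sketch of the reporting side (the lookup callback stands in for a real GeoIP database query such as the IP-To-Country list; the class and its names are illustrative assumptions), a bridge would aggregate per-country counts and periodically ship only the snapshot to the bridge authority:

```python
from collections import Counter

class CountryUsageReporter:
    """Tally connections per country for periodic reports (sketch)."""

    def __init__(self, lookup_country):
        # lookup_country: callable mapping an IP string to a country code;
        # a stand-in for querying a real GeoIP database.
        self.lookup_country = lookup_country
        self.counts = Counter()

    def saw_connection(self, ip: str) -> None:
        self.counts[self.lookup_country(ip)] += 1

    def report_and_reset(self) -> dict:
        # Only aggregate country counts leave the bridge, never raw IPs.
        snapshot = dict(self.counts)
        self.counts.clear()
        return snapshot
```
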
More analysis of this passive reachability
testing design is needed to resolve its many edge cases: for example,
if a bridge stops seeing use from a certain area, does that mean the
bridge is blocked or does that mean those users are asleep?

There are many more problems with the general concept of detecting whether
bridges are blocked. First, different zones of the Internet are blocked
in different ways, and the actual firewall jurisdictions do not match
country borders. Our bridge scheme could help us map out the topology
of the censored Internet, but this is a huge task. More generally,
if a bridge relay isn't reachable, is that because of a network block
somewhere, because of a problem at the bridge relay, or just a temporary
outage somewhere in between? And last, an attacker could poison our
bridge database by signing up already-blocked bridges. In this case,
if we're stingy giving out bridge addresses, users in that country won't
learn working bridges.

All of these issues are made more complex when we try to integrate this
testing into our social network reputation system above.
Since in that case we punish or reward users based on whether bridges
get blocked, the adversary has new attacks to trick or bog down the
reputation tracking. Indeed, the bridge authority doesn't even know
what zone the blocked user is in, so do we blame him for any possible
censored zone, or what?

Clearly more analysis is required. The eventual solution will probably
involve a combination of passive measurement via GeoIP and active
measurement from trusted testers. More generally, we can use the passive
feedback mechanism to track usage of the bridge network as a whole---which
would let us respond to attacks and adapt the design, and it would also
let the general public track the progress of the project.

%Worry: the adversary could choose not to block bridges but just record
%connections to them. So be it, I guess.

\subsection{Advantages of deploying all solutions at once}

For once, we're not in the position of the defender: we don't have to
defend against every possible filtering scheme; we just have to defend
against at least one. On the flip side, the attacker is forced to guess
how to allocate his resources to defend against each of these discovery
strategies. So by deploying all of our strategies at once, we not only
increase our chances of finding one that the adversary has difficulty
blocking, but we actually make \emph{all} of the strategies more robust
in the face of an adversary with limited resources.

%\subsection{Remaining unsorted notes}

%In the first subsection we describe how to find a first bridge.

%Going to be an arms race. Need a bag of tricks. Hard to say
%which ones will work. Don't spend them all at once.

%Some techniques are sufficient to get us an IP address and a port,
%and others can get us IP:port:key. Lay out some plausible options
%for how users can bootstrap into learning their first bridge.

%\section{The account / reputation system}
%\section{Social networks with directory-side support}
%\label{sec:accounts}

%One answer is to measure based on whether the bridge addresses
%we give it end up blocked. But how do we decide if they get blocked?

%Perhaps each bridge should be known by a single bridge directory
%authority. This makes it easier to trace which users have learned about
%it, so easier to blame or reward. It also makes things more brittle,
%since loss of that authority means its bridges aren't advertised until
%they switch, and means its bridge users are sad too.

%(Need a slick hash algorithm that will map our identity key to a
%bridge authority, in a way that's sticky even when we add bridge
%directory authorities, but isn't sticky when our authority goes
%away. Does this exist?)

%\subsection{Discovery based on social networks}

%A token that can be exchanged at the bridge authority (assuming you
%can reach it) for a new bridge address.

%The account server runs as a Tor controller for the bridge authority.

%Users can establish reputations, perhaps based on social network
%connectivity, perhaps based on not getting their bridge relays blocked,

%Probably the most critical lesson learned in past work on reputation
%systems in privacy-oriented environments~\cite{rep-anon} is the need for
%verifiable transactions. That is, the entity computing and advertising
%reputations for participants needs to actually learn in a convincing
%way that a given transaction was successful or unsuccessful.

%(Lesson from designing reputation systems~\cite{rep-anon}: easy to
%reward good behavior, hard to punish bad behavior.

\section{Security considerations}
\label{sec:security}

\subsection{Possession of Tor in oppressed areas}

Many people speculate that installing and using a Tor client in areas with
particularly extreme firewalls is a high risk---and the risk increases
as the firewall gets more restrictive. This notion certainly has merit, but
there's a counter pressure as well: as the firewall gets more restrictive,
more ordinary people behind it end up using Tor for more mainstream
activities, such as learning about Wall Street prices or looking at
pictures of women's ankles. So as the restrictive firewall pushes up
the number of Tor users, the ``typical'' Tor user becomes more mainstream,
and therefore mere use or possession of the Tor software is not so
surprising. It's hard to say which of these pressures will ultimately
win out, but we should keep both sides of the issue in mind.

%Nick, want to rewrite/elaborate on this section?

\subsection{Observers can tell who is publishing and who is reading}
\label{subsec:upload-padding}

Tor encrypts traffic on the local network, and it obscures the eventual
destination of the communication, but it doesn't do much to obscure the
traffic volume. In particular, a user publishing a home video will have a
different network signature than a user reading an online news article.
Based on our assumption in Section~\ref{sec:adversary} that users who
publish material are in more danger, should we work to improve Tor's
security in this situation?

In the general case this is an extremely challenging task:
effective \emph{end-to-end traffic confirmation attacks}
are known where the adversary observes the origin and the
destination of traffic and confirms that they are part of the
same communication~\cite{danezis:pet2004,e2e-traffic}. Related are
\emph{website fingerprinting attacks}, where the adversary downloads
a few hundred popular websites, makes a set of ``signatures'' for each
site, and then observes the target Tor client's traffic to look for
a match~\cite{pet05-bissias,defensive-dropping}. But can we do better
against a limited adversary who just does coarse-grained sweeps looking
for unusually prolific publishers?

One answer is for bridge users to automatically send bursts of padding
traffic periodically. (This traffic can be implemented in terms of
long-range drop cells, which are already part of the Tor specification.)
Of course, convincingly simulating an actual human publishing interesting
content is a difficult arms race, but it may be worthwhile to at least
start the race. More research remains.

\subsection{Anonymity effects from acting as a bridge relay}

Against some attacks, relaying traffic for others can improve
anonymity. The simplest example is an attacker who owns a small number
of Tor servers. He will see a connection from the bridge, but he won't
be able to know whether the connection originated there or was relayed
from somebody else. More generally, the mere uncertainty of whether the
traffic originated from that user may be helpful.

There are some cases where it doesn't seem to help: if an attacker can
watch all of the bridge's incoming and outgoing traffic, then it's easy
to learn which connections were relayed and which started there. (In this
case he still doesn't know the final destinations unless he is watching
them too, but then bridges are no better off than ordinary clients.)

There are also some potential downsides to running a bridge. First, while
we try to make it hard to enumerate all bridges, it's still possible to
learn about some of them, and for some people just the fact that they're
running one might signal to an attacker that they place a higher value
on their anonymity. Second, there are some more esoteric attacks on Tor
relays that are not as well-understood or well-tested---for example, an
attacker may be able to ``observe'' whether the bridge is sending traffic
even if he can't actually watch its network, by relaying traffic through
it and noticing changes in traffic timing~\cite{attack-tor-oak05}. On
the other hand, it may be that limiting the bandwidth the bridge is
willing to relay will allow this sort of attacker to determine if it's
being used as a bridge but not easily learn whether it is adding traffic
of its own.

We also need to examine how entry guards fit in. Entry guards
(a small set of nodes that are always used for the first
step in a circuit) help protect against certain attacks
where the attacker runs a few Tor servers and waits for
the user to choose these servers as the beginning and end of her
circuit\footnote{http://wiki.noreply.org/noreply/TheOnionRouter/TorFAQ\#EntryGuards}.
If the blocked user doesn't use the bridge's entry guards, then the bridge
doesn't gain as much cover benefit. On the other hand, what design changes
are needed for the blocked user to use the bridge's entry guards without
learning what they are (this seems hard), and even if we solve that,
do they then need to use the guards' guards and so on down the line?

It is an open research question whether the benefits of running a bridge
outweigh the risks. A lot of the decision rests on which attacks the
users are most worried about. For most users, we don't think running a
bridge relay will be that damaging, and it could help quite a bit.

\subsection{Trusting local hardware: Internet cafes and LiveCDs}
\label{subsec:cafes-and-livecds}

Assuming that users have their own trusted hardware is not
always reasonable.

For Internet cafe Windows computers that let you attach your own USB key,
a USB-based Tor image would be smart. There's Torpark, and hopefully
there will be more thoroughly analyzed options down the road. Worries
remain about hardware or software keyloggers and other spyware---and
physical surveillance.

If the system lets you boot from a CD or from a USB key, you can gain
a bit more security by bringing a privacy LiveCD with you. (This
approach isn't foolproof of course, since hardware keyloggers and
physical surveillance are still a worry.)

In fact, LiveCDs are also useful if it's your own hardware, since it's
easier to avoid leaving private data and logs scattered around the
system.

%\subsection{Forward compatibility and retiring bridge authorities}
%
%Eventually we'll want to change the identity key and/or location
%of a bridge authority. How do we do this mostly cleanly?

\subsection{The trust chain}
\label{subsec:trust-chain}

Tor's ``public key infrastructure'' provides a chain of trust to
let users verify that they're actually talking to the right servers.
There are four pieces to this trust chain.

First, when Tor clients are establishing circuits, at each step
they demand that the next Tor server in the path prove knowledge of
its private key~\cite{tor-design}. This step prevents the first node
in the path from just spoofing the rest of the path. Second, the
Tor directory authorities provide a signed list of servers along with
their public keys---so unless the adversary can control a threshold
of directory authorities, he can't trick the Tor client into using other
Tor servers. Third, the locations and keys of the directory authorities,
in turn, are hard-coded in the Tor source code---so as long as the user
got a genuine version of Tor, he can know that he is using the genuine
Tor network. And last, the source code and other packages are signed
with the GPG keys of the Tor developers, so users can confirm that they
did in fact download a genuine version of Tor.

In the case of blocked users contacting bridges and bridge directory
authorities, the same logic applies in parallel: the blocked users fetch
information from both the bridge authorities and the directory authorities
for the `main' Tor network, and they combine this information locally.

How can a user in an oppressed country know that he has the correct
key fingerprints for the developers? As with other security systems, it
ultimately comes down to human interaction. The keys are signed by dozens
of people around the world, and we have to hope that our users have met
enough people in the PGP web of trust
%~\cite{pgp-wot}
that they can learn
the correct keys. For users that aren't connected to the global security
community, though, this question remains a critical weakness.

%\subsection{Security through obscurity: publishing our design}

%Many other schemes like dynaweb use the typical arms race strategy of
%not publishing their plans. Our goal here is to produce a design---a
%framework---that can be public and still secure. Where's the tradeoff?

%\section{Performance improvements}
%\label{sec:performance}
%
%\subsection{Fetch server descriptors just-in-time}
%
%I guess we should encourage most places to do this, so blocked
%users don't stand out.
%
%
%network-status and directory optimizations. caching better. partitioning
%issues?

\section{Maintaining reachability}

\subsection{How many bridge relays should you know about?}

The strategies described in Section~\ref{sec:discovery} talked about
learning one bridge address at a time. But if most bridges are ordinary
Tor users on cable modem or DSL connections, many of them will disappear
and/or move periodically. How many bridge relays should a blocked user
know about so that she is likely to have at least one reachable at any
given point? This is already a challenging problem if we only consider
natural churn: the best approach is to see what bridges we attract in
reality and measure their churn. We may also need to factor in a parameter
for how quickly bridges get discovered and blocked by the attacker;
we leave this for future work after we have more deployment experience.

A related question is: if the bridge relays change IP addresses
periodically, how often does the blocked user need to fetch updates in
order to keep from being cut out of the loop?

Once we have more experience and intuition, we should explore technical
solutions to this problem too. For example, if the discovery strategies
give out $k$ bridge addresses rather than a single bridge address, perhaps
we can improve robustness from the user perspective without significantly
aiding the adversary. Rather than giving out a new random subset of $k$
addresses at each point, we could bind them together into \emph{bridge
families}, so all users that learn about one member of the bridge family
are told about the rest as well.

This scheme may also help defend against attacks to map the set of
bridges. That is, if all blocked users learn a random subset of bridges,
the attacker can learn about a few bridges, monitor the country-level
firewall for connections to them, then watch those users to see what
other bridges they use, and repeat. By segmenting the bridge address
space, we can limit the exposure of other users.

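One way to realize bridge families, sketched under assumed names (the paper doesn't specify a grouping mechanism, so the hash-based labeling here is an illustration): derive a stable family identifier from each bridge's hashed identity key, so that learning any member lets the authority hand over the rest of that family and nothing more.

```python
import hashlib

def family_id(identity_key: bytes, secret: bytes, num_families: int) -> int:
    """Stable family label for a bridge; hash choice is an assumption."""
    digest = hashlib.sha256(secret + identity_key).digest()
    return int.from_bytes(digest[:4], "big") % num_families

def family_members(identity_key: bytes, all_keys, secret: bytes,
                   num_families: int):
    """All bridges sharing a family with the given bridge."""
    target = family_id(identity_key, secret, num_families)
    return [k for k in all_keys if family_id(k, secret, num_families) == target]
```

Family sizes under this sketch only average out to the pool size divided by the number of families; a deployed scheme would likely balance them explicitly.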
\subsection{Cablemodem users don't usually provide important websites}
\label{subsec:block-cable}

Another attack we might be concerned about is the attacker simply
blocking all DSL and cablemodem network addresses, on the theory that
they don't run any important services anyway. If most of our bridges
are on these networks, this attack could really hurt.

The first answer is to aim to get volunteers both from traditionally
``consumer'' networks and also from traditionally ``producer'' networks.
Since bridges don't need to be Tor exit nodes, as we improve our usability
it seems quite feasible to get a lot of websites helping out.

The second answer (not as practical) would be to encourage more use of
consumer networks for popular and useful Internet services.
%(But P2P exists;
%minor websites exist; gaming exists; IM exists; ...)

A related attack we might worry about is based on large countries putting
economic pressure on companies that want to expand their business. For
example, what happens if Verizon wants to sell services in China, and
China pressures Verizon to discourage its users in the free world from
running bridges?

\subsection{Scanning resistance: making bridges more subtle}

If it's trivial to verify that a given address is operating as a bridge,
and most bridges run on a predictable port, then it's conceivable our
attacker could scan the whole Internet looking for bridges. (In fact, he
can just concentrate on scanning likely networks like cablemodem and DSL
services---see Section~\ref{subsec:block-cable} above for related attacks.) It
would be nice to slow down this attack. It would be even nicer to make
it hard to learn whether we're a bridge without first knowing some
secret. We call this general property \emph{scanning resistance}.

One approach is to password-protect the bridges: we could provide a
password to the bridge user, and he presents a nonced hash of it (or
something similar) when he connects. We'd need to give him an identity
key for the bridge too, and wait to present the password until we've
established the TLS connection, or else the adversary can pretend to be
the bridge and MITM him to learn the password. Alternatively, we could
use some kind of ID-based knocking protocol, or we could act like an
unconfigured HTTPS server if treated like one.

We can assume that the attacker can easily recognize HTTPS connections
to unknown servers. He can then attempt to connect to them and block
connections to servers that seem suspicious. It may be that
password-protected websites will not be suspicious in general, in which
case that may be the easiest way to give controlled access to the bridge.
If sites with no other overt features are automatically blocked when
detected, then we may need to be more subtle. One possibility is to
serve an innocuous web page when a TLS-encrypted request arrives without
the authorization needed to access the Tor network, and to grant access
to the Tor network only if proper authentication is given. If an
unauthenticated request to access the Tor network is sent, the bridge
should respond as if it has received a message it does not understand
(as it would if it were not a bridge).
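The nonced-password handshake described above can be sketched as follows. This is a minimal illustration under stated assumptions, not part of any Tor implementation: we assume the bridge sends a fresh random nonce after the TLS handshake completes, and the client replies with a keyed hash of the shared password rather than the password itself, so an impersonating bridge learns nothing it can reuse. All names here are hypothetical.

```python
import hashlib
import hmac
import os

def make_auth_token(password: bytes, nonce: bytes) -> bytes:
    """Client side: prove knowledge of the bridge password without
    revealing it, by sending hmac(password, nonce) over the TLS channel."""
    return hmac.new(password, nonce, hashlib.sha256).digest()

def bridge_verify(password: bytes, nonce: bytes, token: bytes) -> bool:
    """Bridge side: recompute the token and compare in constant time,
    to avoid a timing side channel on the comparison."""
    expected = hmac.new(password, nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, token)

# The bridge picks a fresh nonce per connection, so captured tokens
# cannot be replayed against it later.
nonce = os.urandom(16)
token = make_auth_token(b"shared-bridge-password", nonce)
assert bridge_verify(b"shared-bridge-password", nonce, token)
```

Because the hash is keyed by the password and salted by a per-connection nonce, a fake bridge that completes the TLS handshake still never learns the password and cannot replay what it observes.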
\subsection{How to motivate people to run bridge relays}
\label{subsec:incentives}
One of the traditional ways to get people to run software that benefits
others is to give them a motivation to install it themselves. An
often-suggested approach is to install it as a stunning screensaver so
everybody will be pleased to run it. We take a similar approach here, by
leveraging the fact that these users are already interested in
protecting their own Internet traffic, so they will install and run the
software anyway.

We should make all Tor users become bridges if they're reachable---this
needs more work on usability first, but we're making progress.

Also, we can make a snazzy network graph with Vidalia that emphasizes
the connections the bridge user is currently relaying. (This has minor
anonymity implications, but hey.) (In many cases there won't be much
activity, so this may backfire. Or it may be better suited to
full-fledged Tor servers.)

% Also consider everybody-a-server. Many of the scalability questions
% are easier when you're talking about making everybody a bridge.
%\subsection{What if the clients can't install software?}
%[this section should probably move to the related work section,
%or just disappear entirely.]
%Bridge users without Tor software
%Bridge relays could always open their socks proxy. This is bad though,
%first
%because bridges learn the bridge users' destinations, and second because
%we've learned that open socks proxies tend to attract abusive users who
%have no idea they're using Tor.
%Bridges could require passwords in the socks handshake (not supported
%by most software including Firefox). Or they could run web proxies
%that require authentication and then pass the requests into Tor. This
%approach is probably a good way to help bootstrap the Psiphon network,
%if one of its barriers to deployment is a lack of volunteers willing
%to exit directly to websites. But it clearly drops some of the nice
%anonymity and security features Tor provides.
%A hybrid approach where the user gets his anonymity from Tor but his
%software-less use from a web proxy running on a trusted machine on the
%free side.
\subsection{Publicity attracts attention}
\label{subsec:publicity}

Many people working in this field want to publicize the existence
and extent of censorship concurrently with the deployment of their
circumvention software. The easy reason for this two-pronged push is
to attract volunteers for running proxies in their systems; but in many
cases their main goal is not to build the software, but rather to educate
the world about the censorship. The media also tries to do its part by
broadcasting the existence of each new circumvention system.

But at the same time, this publicity attracts the attention of the
censors. We can slow down the arms race by not attracting as much
attention, and just spreading by word of mouth. If our goal is to
establish a solid social network of bridges and bridge users before
the adversary gets involved, does this attention tradeoff work to our
advantage?
\subsection{The Tor website: how to get the software}

One of the first censoring attacks against a system like ours is to
block the website and make the software itself hard to find. Our system
should work well once the user is running an authentic
copy of Tor and has found a working bridge, but to get to that point
we rely on their individual skills and ingenuity.

Right now, most countries that block access to Tor block only the main
website and leave mirrors and the network itself untouched.
Falling back on word of mouth is always a good last resort, but we should
also take steps to make sure it's relatively easy for users to get a copy,
such as publicizing the mirrors more and making copies available through
other media. See Section~\ref{subsec:first-bridge} for more discussion.
\section{Future designs}

\subsection{Bridges inside the blocked network too}

Assuming that actually crossing the firewall is the risky part of the
operation, can we have some bridge relays inside the blocked area too,
so that more established users can use them as relays and don't need to
communicate over the firewall directly at all? A simple example here is
to make new blocked users into internal bridges as well---they sign up
with the bridge authority as part of doing their query, and we give out
their addresses rather than (or along with) the external bridge
addresses. This design is a lot trickier because it brings in the
complexity of whether the internal bridges will remain available, can
maintain reachability with the outside world, and so on.

We could also use hidden services as bridges, or even as bridge
directory authorities.
\section{Conclusion}

A technical solution won't solve the whole problem. After all, China's
firewall is \emph{socially} very successful, even if technologies exist
to get around it. But having a strong technical solution is still a
useful piece of the puzzle, and Tor provides a great set of building
blocks to start from.
\bibliographystyle{plain} \bibliography{tor-design}

%\appendix
%\section{Counting Tor users by country}
%\label{app:geoip}

\end{document}
Ship a geoip database to the bridges. They look up users who TLS to them
in the database, and upload a signed list of countries and
number-of-users each day. The bridge authority aggregates these reports
and publishes statistics.
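The aggregation step on the bridge authority side is straightforward; the following is a minimal sketch (all names hypothetical, and the real report format left unspecified), assuming each bridge submits one per-country count dictionary per day:

```python
from collections import Counter

def aggregate_reports(reports):
    """Sum per-country user counts across all bridge reports.

    reports: iterable of {country_code: user_count} dicts, one per
    bridge per day (signature checking is omitted in this sketch).
    Returns a single combined {country_code: total_users} dict.
    """
    totals = Counter()
    for report in reports:
        totals.update(report)  # Counter.update adds counts key by key
    return dict(totals)

# Two bridges reporting on the same day:
daily = aggregate_reports([
    {"cn": 12, "ir": 3},
    {"cn": 5, "sa": 2},
])
# daily == {"cn": 17, "ir": 3, "sa": 2}
```

The published statistics then reveal only country-level totals, not which bridge served which users.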
Bridge relays have buddies: they ask a user to test the reachability of
their buddy. This leaks O(1) bridges, but not O(n).
We should not be blockable by ordinary Cisco censorship features. That
is, if the censors want to block our new design, they will need to add a
feature to block exactly this. Strategically speaking, this may come in
handy.
Bridges come in clumps of 4 or 8 or whatever. If you know one bridge
in a clump, the authority will tell you the rest. Now bridges can
ask users to test the reachability of their buddies. Giving out clumps
helps with dynamic IP addresses too. Whether the clump size should be
4 or 8 depends on our churn.
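One simple way the authority could form such clumps is to sort bridges by a hash of their fingerprints and cut the sorted list into fixed-size groups. This is only an illustrative sketch (the grouping rule and all names are assumptions, not a committed design):

```python
import hashlib

def assign_clumps(fingerprints, clump_size=4):
    """Partition bridge fingerprints into fixed-size clumps.

    Sorting by hash gives a stable, hard-to-influence ordering, so a
    bridge cannot easily choose its clump-mates. Returns a lookup
    table mapping each fingerprint to its full clump, so that knowing
    one member reveals the rest.
    """
    ordered = sorted(fingerprints,
                     key=lambda fp: hashlib.sha256(fp.encode()).hexdigest())
    clumps = [ordered[i:i + clump_size]
              for i in range(0, len(ordered), clump_size)]
    return {fp: clump for clump in clumps for fp in clump}

table = assign_clumps([f"bridge{i}" for i in range(8)], clump_size=4)
buddies = table["bridge0"]   # the 4 bridges in bridge0's clump
assert "bridge0" in buddies and len(buddies) == 4
```

With such a table, a bridge can be handed the fingerprints of its clump-mates and ask its users to test their reachability.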
The account server: let's call it a database, since it doesn't have to
be something a human interacts with. So how do we reward people for
being good?
\subsubsection{Public Bridges with Coordinated Discovery}

****Pretty much this whole subsubsection will probably need to be
deferred until ``later'' and moved to after end document, but I'm leaving
it here for now in case useful.******
Rather than being entirely centralized, we can have a coordinated
collection of bridge authorities, analogous to how the Tor network
directory authorities now work.

Key components:

``Authorities'' will distribute caches of what they know to overlapping
collections of nodes, so that no one node is owned by one authority,
and so that it is impossible to DoS the info maintained by one authority
simply by making requests to it.
Should where a bridge gets assigned be unpredictable by the bridge? If
authorities don't know the IP addresses of the bridges they are
responsible for, they can't abuse that info (or be attacked for having
it). But then they also can't, e.g., protect against being sent massive
lists of nodes that were never good. This raises another question: we
generally decry the use of IP addresses for location, etc., but here we
need it to limit the introduction of functional but useless IP
addresses---because, e.g., they are in China and the adversary owns
massive chunks of the IP space there.
We don't want an arbitrary someone to be able to contact the authorities
and declare an IP address bad, because then it would be easy for an
adversary to take down all the suspicious bridges even if they provide
good cover websites, etc. Only the bridge itself and/or the directory
authority can declare a bridge blocked from somewhere.
9. Bridge directories must not simply be a handful of nodes that
provide the list of bridges. They must flood or otherwise distribute the
information out to other Tor nodes as mirrors. That way it becomes
difficult for censors to flood the bridge directory servers with
requests, effectively denying access for others. But there's lots of
churn and a much larger size than the Tor directories, so we are forced
to handle the directory scaling problem here much sooner than for the
network in general. Authorities can pass their bridge directories
(and policy info) to some moderate number of unidentified Tor nodes.
Anyone contacting one of those nodes can get bridge info. Either the
nodes must remain somewhat synchronized, to prevent the adversary from
abusing, e.g., a timed-release policy, or the distribution to those
nodes must be resilient even if they do not coordinate.
I think some kind of DHT-like scheme would work here. A Tor node is
assigned a chunk of the directory. Lookups in the directory should be
via hashes of keys (fingerprints), and those hashes should determine the
Tor nodes responsible. Ordinary directories can publish lists of the Tor
nodes responsible for each fingerprint range. Clients looking to update
info on some bridge will make a Tor connection to one of the nodes
responsible for that address. Instead of shutting down a circuit after
getting info on one address, the client extends it to another node that
is responsible for the next address (the node from which you are
extending knows you are doing so anyway), and keeps going. This way the
cost of the Tor connection is amortized.
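The fingerprint-range assignment can be sketched with a standard consistent-hashing construction: both mirrors and bridge fingerprints are placed on a common hash ring, and a fingerprint is served by the first mirror at or after its ring position. This is an assumption about one workable realization, not a committed design; all names are hypothetical.

```python
import hashlib
from bisect import bisect_left

def ring_position(name: str) -> int:
    """Map a mirror name or bridge fingerprint onto the hash ring."""
    return int(hashlib.sha256(name.encode()).hexdigest(), 16)

def responsible_mirror(mirrors, bridge_fingerprint):
    """Return the mirror responsible for a given bridge fingerprint:
    the first mirror whose ring position is >= the fingerprint's hash,
    wrapping around at the end of the ring."""
    ring = sorted((ring_position(m), m) for m in mirrors)
    h = ring_position(bridge_fingerprint)
    idx = bisect_left(ring, (h, ""))
    return ring[idx % len(ring)][1]

mirrors = ["mirrorA", "mirrorB", "mirrorC"]
m = responsible_mirror(mirrors, "some-bridge-fingerprint")
assert m in mirrors
```

Because the mapping is deterministic from public hashes, any client (or ordinary directory) can compute which mirror to contact without a central index, and adding a mirror only reassigns the fingerprints in its new arc of the ring.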
10. We need some way to give new identity keys out to those who need
them without letting those keys get immediately blocked by the censors.
One way is to give out a fingerprint that gets you more fingerprints,
as already described. These are meted out/updated periodically, but they
allow us to keep track of which sources are compromised: if a
distribution fingerprint repeatedly leads to quickly blocked bridges, it
should be considered suspect, dropped, etc. Since we're using hashes,
there shouldn't be a correlation with bridge directory mirrors, bridges,
portions of the network observed, etc. It should just be that the
authorities know about the key that leads to new addresses.
This last point is very much like the issues in the valet nodes paper,
which is essentially about blocking resistance with respect to exiting
the Tor network, while this paper is concerned with blocking of entry to
the Tor network. In fact, the tickets used to connect to the IPo
(Introduction Point) could serve as an example, except that instead of
authorizing a connection to the hidden service, the ticket authorizes
the downloading of more fingerprints.

Also, the fingerprints can follow the hash(q + '1' + cookie) scheme of
that paper (where q = hash(PK + salt) gave the q.onion address). This
allows us to control and track which fingerprint was causing problems.
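That derivation can be written out concretely; the following sketch fixes the hash function and byte encodings as assumptions (the valet nodes paper leaves those choices open):

```python
import hashlib

def derive_q(public_key: bytes, salt: bytes) -> bytes:
    """q = hash(PK + salt), the value that gave the q.onion address
    in the valet nodes scheme."""
    return hashlib.sha256(public_key + salt).digest()

def distribution_fingerprint(q: bytes, cookie: bytes) -> bytes:
    """fingerprint = hash(q + '1' + cookie): each distribution cookie
    yields a distinct, unlinkable fingerprint for the same q, so the
    authorities can tell which distribution channel leaked."""
    return hashlib.sha256(q + b"1" + cookie).digest()

q = derive_q(b"example-public-key", b"example-salt")
fp1 = distribution_fingerprint(q, b"cookie-for-channel-1")
fp2 = distribution_fingerprint(q, b"cookie-for-channel-2")
assert fp1 != fp2   # different cookies give different fingerprints
```

If bridges reached via fp1 are quickly blocked while those reached via fp2 survive, the channel holding cookie 1 is the suspect one.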
Note that, unlike many settings, the reputation problem should not be
hard here. If a bridge says it is blocked, then it might as well be.
If an adversary can claim that a bridge is blocked with respect to
$\mathit{censor}_i$, then it might as well be, since
$\mathit{censor}_i$ can presumably then block that bridge if it so
chooses.
11. How much damage can the adversary do by running nodes in the Tor
network and watching for bridge nodes connecting to them? (This is
analogous to an Introduction Point watching for valet nodes connecting
to it.) What percentage of the network does he need to own to do how
much damage? Here the entry-guard design comes in helpfully: we need to
have bridges use entry guards, but (cf.\ 3 above) not use bridges as
entry guards. Here's a serious tradeoff (again akin to the ratio of
valets to IPos): the more bridges per client, the worse the anonymity
of that client; the fewer bridges per client, the worse the blocking
resistance of that client.