
$Id$
Tor directory protocol for 0.1.1.x series

0. Scope and preliminaries

  This document should eventually be merged into tor-spec.txt and replace
  the existing notes on directories.

  This is not a finalized version; what we actually wind up implementing
  may be very different from the system described here.
  8. 0.1. Goals
  9. There are several problems with the way Tor handles directories right
  10. now:
  11. 1. Directories are very large and use a lot of bandwidth.
  12. 2. Every directory server is a single point of failure.
  13. 3. Requiring every client to know every server won't scale.
  14. 4. Requiring every directory cache to know every server won't scale.
  15. 5. Our current "verified server" system is kind of nonsensical.
  16. 6. Getting more directory servers adds more points of failure and
  17. worsens possible partitioning attacks.
  18. This design tries to solve every problem except problems 3 and 4, and to
  19. be compatible with likely eventual solutions to problems 3 and 4.
1. Outline

  There is no longer any such thing as a "signed directory". Instead,
  directory servers sign a very compressed 'network status' object that
  lists the current descriptors and their status, and router descriptors
  continue to be self-signed by servers. Clients download network status
  listings periodically, and download router descriptors as needed. ORs
  upload descriptors relatively infrequently.

  There are multiple directory servers. Rather than doing anything
  complicated to coordinate themselves, clients simply rotate through them
  in order, and only use servers that most of the last several directory
  servers like.
2. Router descriptors

  The router descriptor format is unchanged from tor-spec.txt.

  ORs SHOULD generate a new router descriptor whenever any of the
  following events have occurred (see the sketch below):

    - A period of time (18 hrs by default) has passed since the last
      time a descriptor was generated.
    - A descriptor field other than bandwidth or uptime has changed.
    - Bandwidth has changed by more than +/- 50% from the last time a
      descriptor was generated, and at least a given interval of time
      (20 mins by default) has passed since then.
    - Uptime has been reset.

  After generating a descriptor, ORs upload it to every directory
  server they know.
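
  As a non-normative illustration, the Python sketch below shows one way an
  OR might apply the regeneration rules above. The descriptor data model and
  field names are hypothetical; the 18-hour, 20-minute, and 50% values are
  the defaults named in this section.

      import time

      MAX_DESCRIPTOR_AGE = 18 * 3600      # 18 hrs, default from above
      MIN_BANDWIDTH_INTERVAL = 20 * 60    # 20 mins, default from above

      def should_regenerate(prev, cur, now=None):
          """prev/cur are dicts of descriptor fields; 'published' is a unix
             time.  This is only a sketch; field names are hypothetical."""
          now = now or time.time()
          # A period of time has passed since the last descriptor.
          if now - prev["published"] >= MAX_DESCRIPTOR_AGE:
              return True
          # Any field other than bandwidth or uptime has changed.
          ignored = {"bandwidth", "uptime", "published"}
          for key in set(prev) | set(cur):
              if key not in ignored and prev.get(key) != cur.get(key):
                  return True
          # Bandwidth changed by more than +/- 50%, and enough time passed.
          old_bw, new_bw = prev["bandwidth"], cur["bandwidth"]
          if (now - prev["published"] >= MIN_BANDWIDTH_INTERVAL and
                  old_bw > 0 and abs(new_bw - old_bw) > 0.5 * old_bw):
              return True
          # Uptime has been reset (current uptime below the last recorded one).
          if cur["uptime"] < prev["uptime"]:
              return True
          return False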
3. Network status

  Directory servers generate, sign, and compress a network-status document
  as needed. As an optimization, they may rate-limit the number of such
  documents generated to once every few seconds. Directory servers should
  rate-limit at least to the point where these documents are generated no
  faster than once per second.

  The network status document contains a preamble, a set of router status
  entries, and a signature, in that order.

  We use the same meta-format as used for directories and router descriptors
  in "tor-spec.txt".
  The preamble contains:

    "network-status-version" -- A document format version. For this
       specification, the version is "2".
    "dir-source" -- The hostname, current IP address, and directory
       port of the directory server, separated by spaces.
    "fingerprint" -- A base16-encoded hash of the signing key's
       fingerprint, with no additional spaces added.
    "contact" -- An arbitrary string describing how to contact the
       directory server's administrator. Administrators should include at
       least an email address and a PGP fingerprint.
    "dir-signing-key" -- The directory server's public signing key.
    "client-versions" -- A comma-separated list of recommended client
       versions.
    "server-versions" -- A comma-separated list of recommended server
       versions.
    "published" -- The publication time for this network-status object.
    "dir-options" -- A set of flags separated by spaces:
       "Names" if this directory server performs name bindings.
       "Versions" if this directory server recommends software versions.

  The dir-options entry is optional. The "-versions" entries are required if
  the "Versions" flag is present. The other entries are required and must
  appear exactly once. The "network-status-version" entry must appear first;
  the others may appear in any order.
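
  As a purely illustrative, non-normative example, a preamble might look
  like the following. All values are placeholders, and argument encodings
  (times, key objects, version strings) follow the meta-format in
  tor-spec.txt rather than anything specified here:

      network-status-version 2
      dir-source dirserver.example.net 10.0.0.1 80
      fingerprint 0123456789ABCDEF0123456789ABCDEF01234567
      contact Alice Example <alice@example.net>; PGP 0x01234567
      dir-signing-key
      -----BEGIN RSA PUBLIC KEY-----
      [key material elided]
      -----END RSA PUBLIC KEY-----
      client-versions 0.1.1.10-alpha,0.1.1.11-alpha
      server-versions 0.1.1.10-alpha,0.1.1.11-alpha
      published 2005-12-01 12:00:00
      dir-options Names Versions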
  For each router, the router entry contains: (This format is designed for
  conciseness.)

    "r" -- followed by the following elements, separated by spaces:
       - The OR's nickname,
       - A hash of its identity key, encoded in base64, with trailing =
         signs removed.
       - A hash of its most recent descriptor, encoded in base64, with
         trailing = signs removed. (The hash is calculated as for
         computing the signature of a descriptor.)
       - The publication time of its most recent descriptor.
       - An IP address.
       - An OR port.
       - A directory port (or "0" for none).
    "s" -- A series of space-separated status flags:
       "Exit" if the router is useful for building general-purpose exit
          circuits.
       "Stable" if the router tends to stay up for a long time.
       "Fast" if the router has high bandwidth.
       "Running" if the router is currently usable.
       "Named" if the router's identity-nickname mapping is canonical.
       "Valid" if the router has been 'validated'.

  The "r" entry for each router must appear first and is required. The
  "s" entry is optional. Unrecognized flags and extra elements on the
  "r" line must be ignored.
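
  A minimal parsing sketch for the "r" and "s" lines follows. It is not part
  of the specification: the RouterStatus container and the assumption that
  the publication time is encoded as two tokens (a date and a time, per the
  meta-format in tor-spec.txt) are illustrative choices only.

      from collections import namedtuple

      RouterStatus = namedtuple(
          "RouterStatus",
          "nickname identity digest pub_date pub_time ip orport dirport flags")

      def parse_router_entry(r_line, s_line=None):
          """Sketch of parsing one router status entry.  Extra elements on
             the "r" line and unrecognized "s" flags are simply ignored or
             carried along, as required above."""
          fields = r_line.split()[1:]          # drop the "r" keyword
          (nickname, identity, digest,
           pub_date, pub_time, ip, orport, dirport) = fields[:8]
          flags = set()
          if s_line is not None:
              flags = set(s_line.split()[1:])  # keep flags verbatim
          return RouterStatus(nickname, identity, digest, pub_date, pub_time,
                              ip, int(orport), int(dirport), flags)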
  The signature section contains:

    "directory-signature" -- A signature of the rest of the document using
       the directory server's signing key.

  We compress the network status list with zlib before transmitting it.
4. Directory server operation

  By default, directory servers remember all non-expired, non-superseded OR
  descriptors that they have seen.

  For each OR, a directory server remembers whether the OR was running and
  functional the last time they tried to connect to it, and possibly other
  liveness information.

  Directory server administrators may label some servers or IPs as
  blacklisted, and elect not to include them in their network-status lists.

  Thus, the network-status list includes all non-blacklisted,
  non-expired, non-superseded descriptors for ORs that the directory has
  observed at least once to be running.

  Directory server administrators may decide to support name binding. If
  they do, then they must maintain a file of nickname-to-identity-key
  mappings, and try to keep this file consistent with other directory
  servers. If they don't, they act as clients, and report bindings made by
  other directory servers (name X is bound to identity Y if at least one
  binding directory lists it, and no directory binds X to some other Y').
  The authoritative network-status published by a host should be available
  at:

      http://<hostname>/tor/status/authority.z

  An authoritative network-status published by another host with fingerprint
  <F> should be available at:

      http://<hostname>/tor/status/fp/<F>.z

  Authoritative network-statuses published by other hosts with fingerprints
  <F1>,<F2>,<F3> should be available at:

      http://<hostname>/tor/status/fp/<F1>+<F2>+<F3>.z

  The most recent network-status documents from all known authoritative
  directories, concatenated, should be available at:

      http://<hostname>/tor/status/all.z

  The most recent descriptor for a server whose identity key has a
  fingerprint of <F> should be available at:

      http://<hostname>/tor/server/fp/<F>.z

  The most recent descriptors for servers whose fingerprints are
  <F1>,<F2>,<F3> should be available at:

      http://<hostname>/tor/server/fp/<F1>+<F2>+<F3>.z

  The most recent descriptor for this server should be available at:

      http://<hostname>/tor/server/authority.z

  A concatenated set of the most recent descriptors for all known servers
  should be available at:

      http://<hostname>/tor/server/all.z
  For debugging, directories MAY expose non-compressed objects at URLs like
  the above, but without the final ".z".

  Clients MUST handle compressed concatenated information in two forms:
    - A concatenated list of zlib-compressed objects.
    - A zlib-compressed concatenated list of objects.
  Directory servers MAY generate either format: the former requires less
  CPU, but the latter requires less bandwidth.
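
  A client-side sketch of handling both forms follows. It is non-normative;
  it relies only on the fact that both forms reduce to repeatedly running a
  zlib decompressor from the current offset until the input is consumed.

      import zlib

      def decompress_concatenated(data):
          """Decompress a response that is either (a) a concatenation of
             zlib-compressed objects or (b) a single zlib-compressed stream
             containing concatenated objects."""
          out = []
          while data:
              d = zlib.decompressobj()
              out.append(d.decompress(data))
              out.append(d.flush())
              data = d.unused_data   # empty in case (b); next object in (a)
          return b"".join(out)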
4.1. Caching

  Directory caches (most ORs) regularly download network status documents,
  and republish them at a URL based on the directory server's identity key:

      http://<hostname>/tor/status/<identity fingerprint>.z

  A concatenated list of all network-status documents should be available
  at:

      http://<hostname>/tor/status/all.z

4.2. Compression
5. Client operation

  Every OP or OR, including directory servers, acts as a client to the
  directory protocol.

  Each client maintains a list of trusted directory servers. Periodically
  (currently every 20 minutes), the client downloads a new network status. It
  chooses the directory server from which its current information is most
  out-of-date, and retries on failure until it finds a running server.

  When choosing ORs to build circuits, clients proceed as follows (see the
  sketch below):

    - A server is "listed" if it is listed by more than half of the "live"
      network status documents the client has downloaded. (A network
      status is "live" if it is the most recently downloaded network status
      document for a given directory server, and the server is a directory
      server trusted by the client, and the network-status document is no
      more than D (say, 10) days old.)
    - A server is "valid" if it is listed as valid by more than half of the
      "live" downloaded network-status documents.
    - A server is "running" if it is listed as running by more than
      half of the "recent" downloaded network-status documents.
      (A network status is "recent" if it was published in the last
      60 minutes. If there are fewer than 3 such documents, the most
      recently published 3 are "recent". If there are fewer than 3 in all,
      all are "recent".)

  Clients store network status documents so long as they are live.
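
  The sketch below restates the "listed"/"valid"/"running" rules above. It
  is non-normative; the data model (each network status as a mapping from
  router identity to its flag set) is an assumption made for illustration.

      def server_status(identity, live_statuses, recent_statuses):
          """Apply the majority rules above to one router identity."""
          def more_than_half(statuses, predicate):
              hits = sum(1 for ns in statuses if predicate(ns))
              return hits * 2 > len(statuses)

          listed  = more_than_half(live_statuses,
                                   lambda ns: identity in ns)
          valid   = more_than_half(live_statuses,
                                   lambda ns: "Valid" in ns.get(identity, set()))
          running = more_than_half(recent_statuses,
                                   lambda ns: "Running" in ns.get(identity, set()))
          return listed, valid, running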
5.1. Scheduling network status downloads

  This download scheduling algorithm implements the approach described above
  in a relatively low-state fashion. It reflects the current Tor
  implementation.

  Clients maintain a list of authorities; each client tries to keep the same
  list, in the same order.

  Periodically, on startup, and on HUP, clients check whether they need to
  download fresh network status documents. The approach is as follows (see
  the sketch below):

    - If we have fewer than X network status documents newer than OLD, we
      choose a member of the list at random and try to download XX documents
      starting with that member's.
    - Otherwise, if we have no network status documents newer than NEW, we
      check to see which authority's document we retrieved most recently,
      and try to retrieve the next authority's document. If we can't, we
      try the next authority in sequence, and so on.
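
  A low-state sketch of this scheduler follows. It is non-normative: X, XX,
  OLD, and NEW are the unspecified parameters from the text, and the `have`
  mapping (authority -> receive time of its most recent document, if any) is
  an assumed data model. Retrying further down the list on failure is left
  to the caller, as described above.

      import random

      def pick_networkstatus_downloads(authorities, have, now, X, XX, OLD, NEW):
          """Return the authorities whose documents we should try to fetch."""
          known = [a for a in authorities if have.get(a)]
          fresh = [a for a in known if now - have[a] < OLD]
          if len(fresh) < X:
              # Too little fresh information: start at a random member and
              # request XX documents in list order from there.
              start = random.randrange(len(authorities))
              return [authorities[(start + i) % len(authorities)]
                      for i in range(XX)]
          if not any(now - have[a] < NEW for a in known):
              # Nothing newer than NEW: ask the authority after the one whose
              # document we retrieved most recently.
              last = max(known, key=lambda a: have[a])
              nxt = (authorities.index(last) + 1) % len(authorities)
              return [authorities[nxt]]
          return []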
5.2. Managing naming

  In order to provide human-memorable names for individual server
  identities, some directory servers bind names to IDs. Clients handle
  names in two ways:

  If a client is encountering a name it has not mapped before:

     If all the "binding" network-status documents the client has received
     so far claim that the name binds to the same identity X, and the client
     has received at least three network-status documents, the client maps
     the name to X.

  If a client is encountering a name it has mapped before:

     It uses the last-mapped identity value, unless all of the "binding"
     network status documents bind the name to some other identity.
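
  A non-normative sketch of these rules follows. The data model is assumed:
  one name->identity dict per "binding" network-status, plus a count of all
  network-statuses received so far. The remapping branch reflects one
  reading of the "unless all ... bind the name to some other identity" rule.

      def resolve_name(name, binding_docs, n_statuses_received, previous=None):
          """Return the identity to use for `name`, or None if unmapped."""
          claimed = {doc[name] for doc in binding_docs if name in doc}
          if previous is None:
              # New name: all binding documents must agree, and the client
              # must have received at least three network-status documents.
              if len(claimed) == 1 and n_statuses_received >= 3:
                  return claimed.pop()
              return None
          # Known name: keep the last-mapped identity unless every binding
          # document now agrees on some other identity.
          if claimed and previous not in claimed and len(claimed) == 1:
              return claimed.pop()
          return previous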
5.3. Notes on what we do now.

  THIS SECTION SHOULD BE FOLDED INTO THE EARLIER SECTIONS; THEY ARE WRONG;
  THIS IS RIGHT.

  All downloaded networkstatuses are discarded once they are 10 days old (by
  published date).

  Authdirs download each others' networkstatus every
  AUTHORITY_NS_CACHE_INTERVAL minutes (currently 10).

  Directory caches download authorities' networkstatus every
  NONAUTHORITY_NS_CACHE_INTERVAL minutes (currently 10).

  Clients always try to replace any networkstatus received over
  NETWORKSTATUS_MAX_VALIDITY ago (currently 2 days). Also, when the most
  recently received networkstatus is more than
  NETWORKSTATUS_CLIENT_DL_INTERVAL (30 minutes) old, and we do not have any
  open directory connections fetching a networkstatus, clients try to
  download the networkstatus on their list after the most recently received
  networkstatus, skipping failed networkstatuses. A networkstatus is
  "failed" if NETWORKSTATUS_N_ALLOWABLE_FAILURES (3) attempts in a row have
  all failed.

  We do not update router statuses if we have fewer than half of the
  networkstatuses.
  A networkstatus is "live" if it is the most recent we have received signed
  by a given trusted authority.

  A networkstatus is "recent" if it is "live" and:
    - it was received in the last DEFAULT_RUNNING_INTERVAL (currently 60
      minutes)
    OR
    - it was one of the MIN_TO_INFLUENCE_RUNNING (3) most recently received
      networkstatuses.

  Authorities always believe their own opinion as to a router's status. For
  other tors:
    - a router is valid if more than half of the live networkstatuses think
      it's valid.
    - a router is named if more than half of the live networkstatuses from
      naming authorities think it's named, and they all think it has the
      same name.
    - a router is running if more than half of the recent networkstatuses
      think it's running.
  Everyone downloads router descriptors as follows (see the sketch below):

    - If any networkstatus lists a more recently published routerdesc with a
      different descriptor digest, and no more than
      MAX_ROUTERDESC_DOWNLOAD_FAILURES attempts to retrieve that routerdesc
      have failed, then that routerdesc is "downloadable".

    - Every DirFetchInterval, or whenever a request for routerdescs returns
      no routerdescs, we launch a set of requests for all downloadable
      routerdescs. We divide the downloadable routerdescs into groups of no
      more than DL_PER_REQUEST, and send a request for each group to
      directory servers chosen independently.

    - We also launch a request as above when a request for routerdescs
      fails and we have no directory connections fetching routerdescs.
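
  The following sketch shows one way to split the downloadable routerdescs
  into groups and pair each group with an independently chosen directory
  server, using the /tor/server/fp/<F1>+<F2>+... URL form from section 4. It
  is non-normative; identifying descriptors by identity fingerprint and
  choosing servers uniformly at random are assumptions made for illustration.

      import random

      def build_descriptor_requests(downloadable_fps, dirservers, DL_PER_REQUEST):
          """Return (dirserver, url) pairs covering all downloadable descs."""
          requests = []
          fps = list(downloadable_fps)
          for i in range(0, len(fps), DL_PER_REQUEST):
              group = fps[i:i + DL_PER_REQUEST]
              url = "/tor/server/fp/" + "+".join(group) + ".z"
              requests.append((random.choice(dirservers), url))
          return requests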
  TODO: Specify here:
    - Retry-on-failure.
    - When to 0-out failure count for routerdesc?
    - When to 0-out failure count for networkstatus?
    - Fallback to download-all.
    - For versions: if you're listed by more than half of live versioning
      networkstatuses, good. If fewer than half of networkstatuses are live,
      don't do anything. If half are live, and half or fewer of the
      versioning ones list you, warn. Only warn once every 24 hours.
    - For names: warn if an unnamed router is specified by nickname.
      Rate-limit these warnings.
    - Also, don't believe N->K if another naming authdir says N->K'.
    - Revise naming rule: N->K is true if any naming directory says N->K,
      and no other naming directory says N->K' or N'->K.
    - Minimum info to build circuits.
    - Revise: always split requests when we have too little info to build
      circuits.
    - Describe when a router is "out of date". (Any dirserver says so.)
    - Warn when using non-default directory servers.
    - When giving up on a non-finished dir request, log how many bytes were
      dropped.
6. Remaining issues

  Client-knowledge partitioning is worrisome. Most versions of this don't
  seem to be worse than the Danezis-Murdoch tracing attack, since an
  attacker can't do more than deduce probable exits from entries (or vice
  versa). But what about when the client connects to A and B but in a
  different order? How badly can it be partitioned based on its knowledge?

================================================================================
Everything below this line is obsolete.
--------------------------------------------------------------------------------
Tor network discovery protocol

0. Scope

  This document proposes a way of doing more distributed network discovery
  while maintaining some amount of admission control. We don't recommend
  you implement this as-is; it needs more discussion.

  Terminology:
    - Client: The Tor component that chooses paths.
    - Server: A relay node that passes traffic along.
1. Goals.

  We want more decentralized discovery for network topology and status.
  In particular:

  1a. We want to let clients learn about new servers from anywhere
      and build circuits through them if they wish. This means that
      Tor nodes need to be able to Extend to nodes they don't already
      know about.
  1b. We want to let servers limit the addresses and ports they're
      willing to extend to. This is necessary e.g. for middleman nodes
      who have jerks trying to extend from them to badmafia.com:80 all
      day long and it's drawing attention.
  1b'. While we're at it, we also want to handle servers that *can't*
      extend to some addresses/ports, e.g. because they're behind NAT or
      otherwise firewalled. (See section 5 below.)
  1c. We want to provide a robust (available) and not-too-centralized
      mechanism for tracking network status (which nodes are up and working)
      and admission (which nodes are "recommended" for certain uses).
2. Assumptions.

  2a. People get the code from us, and they trust us (or our gpg keys, or
      something down the trust chain that's equivalent).
  2b. Even if the software allows humans to change the client configuration,
      most of them will use the default that's provided, so we should
      provide one that is the right balance of robust and safe. That is,
      we need to hard-code enough "first introduction" locations that new
      clients will always have an available way to get connected.
  2c. Assume that the current "ask them to email us and see if it seems
      suspiciously related to previous emails" approach will not catch
      the strong Sybil attackers. Therefore, assume the Sybil attackers
      we do want to defend against can produce only a limited number of
      not-obviously-on-the-same-subnet nodes.
  2d. Roger has only a limited amount of time for approving nodes; shouldn't
      be the time bottleneck anyway; and is doing a poor job at keeping
      out some adversaries.
  2e. Some people would be willing to offer servers but will be put off
      by the need to send us mail and identify themselves.
  2e'. Some evil people will avoid doing evil things based on the perception
      (however true or false) that there are humans monitoring the network
      and discouraging evil behavior.
  2e''. Some people will trust the network, and the code, more if they
      have the perception that there are trustworthy humans guiding the
      deployed network.
  2f. We can trust servers to accurately report their characteristics
      (uptime, capacity, exit policies, etc), as long as we have some
      mechanism for notifying clients when we notice that they're lying.
  2g. There exists a "main" core Internet in which most locations can access
      most locations. We'll focus on it (first).
3. Some notes on how to achieve this.

  Piece one: (required)

    We ship with N (e.g. 20) directory server locations and fingerprints.

    Directory servers serve signed network-status pages, listing their
    opinions of network status and which routers are good (see 4a below).

    Dirservers collect and provide server descriptors as well. These don't
    need to be signed by the dirservers, since they're self-certifying
    and timestamped.

    (In theory the dirservers don't need to be the ones serving the
    descriptors, but in practice the dirservers would need to point people
    at the place that does, so for simplicity let's assume that they do.)

    Clients then get network-status pages from a threshold of dirservers,
    fetch enough of the corresponding server descriptors to make them happy,
    and proceed as now.

  Piece two: (optional)

    We ship with S (e.g. 3) seed keys (trust anchors), and ship with
    signed timestamped certs for each dirserver. Dirservers also serve a
    list of certs, maybe including a "publish all certs since time foo"
    functionality. If at least two seeds agree about something, then it
    is so.

    Now dirservers can be added, and revoked, without requiring users to
    upgrade to a new version. If we only ship with dirserver locations
    and not fingerprints, it also means that dirservers can rotate their
    signing keys transparently.

    But keeping track of the seed keys becomes a critical security issue.
    And rotating them in a backward-compatible way adds complexity. Also,
    dirserver locations must be at least somewhat static, since each lost
    dirserver degrades reachability for old clients. So as the dirserver
    list rolls over we have no choice but to put out new versions.

  Piece three: (optional)

    Notice that this doesn't preclude other approaches to discovering
    different concurrent Tor networks. For example, a Tor network inside
    China could ship Tor with a different torrc and poof, they're using
    a different set of dirservers. Some smarter clients could be made to
    learn about both networks, and be told which nodes bridge the networks.

  ...
4. Unresolved issues.

  4a. How do the dirservers decide whether to recommend a server? We
      could have them do it based on contact from the human, but by
      assumptions 2c and 2d above, that's going to be less effective, and
      more of a hassle, as we scale up. Thus I propose that they simply
      do some basic automatic measuring themselves, starting with the
      current "are they connected to me" measurement, and that's all
      that is done.

      We could blacklist as we notice evil servers, but then we're in
      the same boat all the irc networks are in. We could whitelist as we
      notice new servers, and stop whitelisting (maybe rolling back a bit)
      once an attack is in progress. If we assume humans aren't particularly
      good at this anyway, we could just do automated delayed whitelisting,
      and have a "you're under attack" switch the human can enable for a
      while to start acting more conservatively.

      Once upon a time we collected contact info for servers, which was
      mainly used to remind people that their servers are down and could
      they please restart. Now that we have a critical mass of servers,
      I've stopped doing that reminding. So contact info is less important.

  4b. What do we do about recommended-versions? Do we need a threshold of
      dirservers to claim that your version is obsolete before you believe
      them? Or do we make it have less effect -- e.g. print a warning but
      never actually quit? Coordinating all the humans to upgrade their
      recommended-version strings at once seems bad. Maybe if we have
      seeds, the seeds can sign a recommended-version and upload it to
      the dirservers.

  4c. What does it mean to bind a nickname to a key? What if each dirserver
      does it differently, so one nickname corresponds to several keys?
      Maybe the solution is that nickname<=>key bindings should be
      individually configured by clients in their torrc (if they want to
      refer to nicknames in their torrc), and we stop thinking of nicknames
      as globally unique.

  4d. What new features need to be added to server descriptors so they
      remain compact yet support new functionality? Section 5 is a start
      of discussion of one answer to this.
5. Regarding "Blossom: an unstructured overlay network for end-to-end
   connectivity."

SECTION 5A: Blossom Architecture

  Define "transport domain" as a set of nodes who can all mutually name each
  other directly, using transport-layer (e.g. HOST:PORT) naming.

  Define "clique" as a set of nodes who can all mutually contact each other
  directly, using transport-layer (e.g. HOST:PORT) naming.

  Neither transport domains nor cliques form a partition of the set of all
  nodes. Just as cliques may overlap in theoretical graphs, transport domains
  and cliques may overlap in the context of Blossom.
  In this section we address possible solutions to the problem of how to
  allow Tor routers in different transport domains to communicate.

  First, we presume that for every interface between transport domains A and
  B, one Tor router T_A exists in transport domain A, one Tor router T_B
  exists in transport domain B, and (without loss of generality) T_A can open
  a persistent connection to T_B. Any Tor traffic between the two routers
  will occur over this connection, which effectively renders the routers
  equal partners in bridging between the two transport domains. We refer to
  the established link between two transport domains as a "bridge" (we use
  this term because there is no serious possibility of confusion with the
  notion of a layer 2 bridge).

  Next, suppose that the universe consists of transport domains connected by
  persistent connections in this manner. An individual router can open
  multiple connections to routers within the same foreign transport domain,
  and it can establish separate connections to routers within multiple
  foreign transport domains.
  As in regular Tor, each Blossom router pushes its descriptor to directory
  servers. These directory servers can be within the same transport domain,
  but they need not be. The trick is that if a directory server is in another
  transport domain, then that directory server must know through which Tor
  routers to send messages destined for the Tor router in question.

  Blossom routers can advertise themselves to other transport domains in two
  ways:

  (1) Directly push the descriptor to a directory server in the other
  transport domain. This probably works particularly well if the other
  transport domain is "the Internet", or if there are hard-coded directory
  servers in "the Internet". The router has the responsibility to inform the
  directory server about which routers can be used to reach it.

  (2) Push the descriptor to a directory server in the same transport domain.
  This is the easiest solution for the router, but it relies upon the
  existence of a directory server in the same transport domain that is
  capable of communicating with directory servers in the remote transport
  domain. In order for this to work, some individual Tor routers must have
  published their descriptors in remote transport domains (i.e. followed the
  first option) in order to provide a link by which directory servers can
  communicate bidirectionally.
  If all directory servers are within the same transport domain, then
  approach (1) is sufficient: routers can exist within multiple transport
  domains, and as long as the network of transport domains is fully connected
  by bridges, any router will be able to access any other router in a foreign
  transport domain simply by extending along the path specified by the
  directory server. However, we want the system to be truly decentralized,
  which means not electing any particular transport domain to be the master
  domain in which entries are published.

  This is the explanation for (2): in order for a directory server to share
  information with a directory server in a foreign transport domain to which
  it cannot speak directly, it must use Tor, which means referring to the
  other directory server by using a router in the foreign transport domain.
  However, in order to use Tor, it must be able to reach that router, which
  means that a descriptor for that router must exist in its table, along with
  a means of reaching it. Therefore, in order for a mutual exchange of
  information between routers in transport domain A and those in transport
  domain B to be possible, when routers in transport domain A cannot
  establish direct connections with routers in transport domain B, then some
  router in transport domain B must have pushed its descriptor to a directory
  server in transport domain A, so that the directory server in transport
  domain A can use that router to reach the directory server in transport
  domain B.

  Descriptors for Blossom routers are read-only, as for regular Tor routers,
  so directory servers cannot modify them. However, Tor directory servers
  also publish a "network-status" page that provides information about which
  nodes are up and which are not. Directory servers could provide an
  additional field for Blossom nodes. For each Blossom node, the directory
  server specifies a set of paths (which may contain only one path) through
  the overlay (i.e. an ordered list of router names/IDs) to a router in a
  foreign transport domain.
  A new router publishing to a directory server in a foreign transport domain
  should include a list of routers. This list should be either:

  a. ...a list of routers to which the router has persistent connections, or,
     if the new router does not have any persistent connections,
  b. ...a (not necessarily exhaustive) list of fellow routers that are in the
     same transport domain.

  The directory server will be able to use this information to derive a path
  to the new router, as follows. If the new router used approach (a), then
  the directory server will define the set of paths to the new router as the
  union of the set of paths to the routers on the list, with the name of the
  last hop appended to each path. If the new router used approach (b), then
  the directory server will define the paths to the new router as the union
  of the set of paths to the routers specified in the list. The directory
  server will then insert the newly defined paths into the field in the
  network-status page for the router.
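
  A non-normative sketch of this derivation follows. It assumes one reading
  of the rule above: that in case (a) the "last hop" appended to each path is
  the new router itself, reached over its persistent connection, while in
  case (b) the paths to the listed same-domain routers are reused unchanged.
  The `existing_paths` data model is hypothetical.

      def derive_paths(existing_paths, listed_routers, new_router, approach):
          """existing_paths maps a router name to the set of known paths
             (tuples of router names) leading to it; approach is "a" or "b"."""
          derived = set()
          for r in listed_routers:
              for path in existing_paths.get(r, set()):
                  if approach == "a":
                      # Append the new router as the final hop over the
                      # persistent connection from the listed router.
                      derived.add(path + (new_router,))
                  else:
                      # Case (b): the new router shares a transport domain
                      # with the listed routers, so reuse their paths as-is.
                      derived.add(path)
          return derived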
  When confronted with the choice of multiple different paths to reach the
  same router, the Blossom nodes may use a route selection protocol similar
  in design to that used by BGP (it may be a simple distance-vector route
  selection procedure that only takes into account path length, or it may be
  more complex to avoid loops, cache results, etc.) in order to choose the
  best one.

  If a .exit name is not provided, then a path will be chosen whose nodes are
  all among the set of nodes provided by the directory server that are
  believed to be in the same transport domain (i.e. no explicit path). Thus,
  there should be no surprises to the client. All routers should define
  their exit policies carefully, with the knowledge that clients from
  potentially any transport domain could access that which is not explicitly
  restricted.
SECTION 5B: Tor+Blossom desiderata

  The interests of Blossom would be best served by implementing the following
  modifications to Tor:

  I. CLIENTS

  Objectives: Ultimately, we want Blossom requests to be indistinguishable in
  format from non-Blossom .exit requests, i.e. hostname.forwarder.exit.

  Proposal: Blossom is a process that manipulates Tor, so it should be
  implemented as a Tor controller, extending control-spec.txt. For each
  request, Tor uses the control protocol to ask the Blossom process whether
  it (the Blossom process) wants to build or assign a particular circuit to
  service the request. Blossom chooses one of the following responses:

  a. (Blossom exit node, circuit cached) "use this circuit" -- provides a
     circuit ID.
  b. (Blossom exit node, circuit not cached) "I will build one" -- provides a
     list of routers, gets a circuit ID.
  c. (Regular (non-Blossom) exit node) "No, do it yourself" -- provides
     nothing.
  II. ROUTERS

  Objectives: Blossom routers are like regular Tor routers, except that
  Blossom routers need these features as well:

  a. the ability to open persistent connections,
  b. the ability to know whether they should use a persistent connection to
     reach another router,
  c. the ability to define a set of routers to which to establish persistent
     connections, as readable from a configuration file, and
  d. the ability to tell a directory server that (1) it is Blossom-enabled,
     and (2) it can be reached by some set of routers to which it explicitly
     establishes persistent connections.

  Proposal: Address the aforementioned points as follows.

  a. Add the ability to open a specified number of persistent connections.
     This can be accomplished by implementing a generic
     should_i_close_this_conn() and
     which_conns_should_i_try_to_open_even_when_i_dont_need_them().
  b. The Tor design already supports this, but we must be sure to establish
     the persistent connections explicitly, re-establish them when they are
     lost, and not close them unnecessarily.
  c. We must modify Tor to add a new configuration option, allowing either
     (a) explicit specification of the set of routers to which to establish
     persistent connections, or (b) a random choice of some nodes to which to
     establish persistent connections, chosen from the set of nodes local to
     the transport domain of the specified directory server (for example).
  III. DIRSERVERS

  Objective: Blossom directory servers may provide extra fields in their
  network-status pages. Blossom directory servers may communicate with
  Blossom clients/routers in nonstandard ways in addition to standard ways.

  Proposal: Geoff should be able to implement a directory server according to
  the Tor specification (dir-spec.txt).