$Id$

Tor directory protocol for 0.1.1.x series

0. Scope and preliminaries

   This document should eventually be merged into tor-spec.txt and replace
   the existing notes on directories.

   This is not a finalized version; what we actually wind up implementing
   may be very different from the system described here.

0.1. Goals

   There are several problems with the way Tor handles directories right
   now:
      1. Directories are very large and use a lot of bandwidth.
      2. Every directory server is a single point of failure.
      3. Requiring every client to know every server won't scale.
      4. Requiring every directory cache to know every server won't scale.
      5. Our current "verified server" system is kind of nonsensical.
      6. Getting more directory servers adds more points of failure and
         worsens possible partitioning attacks.

   This design tries to solve every problem except problems 3 and 4, and to
   be compatible with likely eventual solutions to problems 3 and 4.

1. Outline

   There is no longer any such thing as a "signed directory".  Instead,
   directory servers sign a very compressed 'network status' object that
   lists the current descriptors and their status, and router descriptors
   continue to be self-signed by servers.  Clients download network status
   listings periodically, and download router descriptors as needed.  ORs
   upload descriptors relatively infrequently.

   There are multiple directory servers.  Rather than doing anything
   complicated to coordinate themselves, clients simply rotate through them
   in order, and only use servers that most of the last several directory
   servers like.

2. Router descriptors

   The router descriptor format is unchanged from tor-spec.txt.

   ORs SHOULD generate a new router descriptor whenever any of the
   following events have occurred:

      - A period of time (18 hrs by default) has passed since the last
        time a descriptor was generated.
      - A descriptor field other than bandwidth or uptime has changed.
      - Bandwidth has changed by more than +/- 50% from the last time a
        descriptor was generated, and at least a given interval of time
        (20 mins by default) has passed since then.
      - Uptime has been reset.

   After generating a descriptor, ORs upload it to every directory
   server they know.
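   The regeneration rules above can be summarized in a short, non-normative
   sketch.  The following Python is illustrative only; the input values
   (field dictionaries, timestamps) are hypothetical, and the thresholds are
   the defaults named in the list above.

      # Non-normative sketch of the descriptor-regeneration rules above.
      # All inputs (field dicts, timestamps, bandwidths) are hypothetical.

      MAX_DESCRIPTOR_AGE = 18 * 60 * 60     # 18 hours, in seconds
      MIN_BANDWIDTH_INTERVAL = 20 * 60      # 20 minutes, in seconds

      def should_regenerate(now, last_published, old_fields, new_fields,
                            old_bandwidth, new_bandwidth, uptime_was_reset):
          # 18 hours have passed since the last descriptor.
          if now - last_published >= MAX_DESCRIPTOR_AGE:
              return True
          # Some field other than bandwidth or uptime has changed.
          ignored = {"bandwidth", "uptime"}
          keys = (set(old_fields) | set(new_fields)) - ignored
          if any(old_fields.get(k) != new_fields.get(k) for k in keys):
              return True
          # Bandwidth changed by more than +/- 50%, and at least 20 minutes
          # have passed since the last descriptor.
          if (now - last_published >= MIN_BANDWIDTH_INTERVAL
                  and abs(new_bandwidth - old_bandwidth) > 0.5 * old_bandwidth):
              return True
          # Uptime has been reset.
          return uptime_was_reset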
3. Network status

   Directory servers generate, sign, and compress a network-status document
   as needed.  As an optimization, they may rate-limit the number of such
   documents generated to once every few seconds.  Directory servers should
   rate-limit at least to the point where these documents are generated no
   faster than once per second.

   The network status document contains a preamble, a set of router status
   entries, and a signature, in that order.

   We use the same meta-format as used for directories and router
   descriptors in "tor-spec.txt".

   The preamble contains:

      "network-status-version" -- A document format version.  For this
         specification, the version is "2".
      "dir-source" -- The hostname, current IP address, and directory
         port of the directory server, separated by spaces.
      "fingerprint" -- A base16-encoded hash of the signing key's
         fingerprint, with no additional spaces added.
      "contact" -- An arbitrary string describing how to contact the
         directory server's administrator.  Administrators should include
         at least an email address and a PGP fingerprint.
      "dir-signing-key" -- The directory server's public signing key.
      "client-versions" -- A comma-separated list of recommended client
         versions.
      "server-versions" -- A comma-separated list of recommended server
         versions.
      "published" -- The publication time for this network-status object.
      "dir-options" -- A set of flags separated by spaces:
         "Names" if this directory server performs name bindings.

   The "dir-options" entry is optional; the others are required and must
   appear exactly once.  The "network-status-version" entry must appear
   first; the others may appear in any order.
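   As a non-normative illustration, a preamble that follows the rules above
   might look like the example below.  Every value (hostname, address,
   fingerprint, contact, version lists, and date) is invented for this
   example, and the key object is elided:

      network-status-version 2
      dir-source dirserver.example.org 192.0.2.10 80
      fingerprint 57B85409891587D961A1A26BDEE2DDF5CFF2D0B8
      contact Example Admin <tor-admin@example.org> (PGP 0x12345678)
      dir-signing-key
      -----BEGIN RSA PUBLIC KEY-----
      [key material elided]
      -----END RSA PUBLIC KEY-----
      client-versions 0.1.1.10-alpha,0.1.1.11-alpha
      server-versions 0.1.1.10-alpha,0.1.1.11-alpha
      published 2005-12-05 23:59:00
      dir-options Names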
   For each router, the router entry contains:  (This format is designed
   for conciseness.)

      "r" -- followed by the following elements, separated by spaces:
         - The OR's nickname,
         - A hash of its identity key, encoded in base64, with trailing =
           signs removed.
         - A hash of its most recent descriptor, encoded in base64, with
           trailing = signs removed.  (The hash is calculated as for
           computing the signature of a descriptor.)
         - The publication time of its most recent descriptor.
         - An IP address.
         - An OR port.
         - A directory port (or "0" for none).
      "s" -- A series of space-separated status flags:
         "Exit" if the router is useful for building general-purpose exit
            circuits.
         "Stable" if the router tends to stay up for a long time.
         "Fast" if the router has high bandwidth.
         "Running" if the router is currently usable.
         "Named" if the router's identity-nickname mapping is canonical.
         "Valid" if the router has been 'validated'.

   The "r" entry for each router must appear first and is required.  The
   "s" entry is optional.  Unrecognized flags, and extra elements on the
   "r" line, must be ignored.
   The signature section contains:

      "directory-signature" -- A signature of the rest of the document
         using the directory server's signing key.

   We compress the network status list with zlib before transmitting it.

4. Directory server operation

   By default, directory servers remember all non-expired, non-superseded
   OR descriptors that they have seen.

   For each OR, a directory server remembers whether the OR was running and
   functional the last time they tried to connect to it, and possibly other
   liveness information.

   Directory server administrators may label some servers or IPs as
   blacklisted, and elect not to include them in their network-status
   lists.

   Thus, the network-status list includes all non-blacklisted, non-expired,
   non-superseded descriptors for ORs that the directory has observed at
   least once to be running.

   Directory server administrators may decide to support name binding.  If
   they do, then they must maintain a file of nickname-to-identity-key
   mappings, and try to keep this file consistent with other directory
   servers.  If they don't, they act as clients, and report bindings made
   by other directory servers (name X is bound to identity Y if at least
   one binding directory lists it, and no directory binds X to some other
   Y').

   The authoritative network-status published by a host should be available
   at:
      http://<hostname>/tor/status/authority.z

   An authoritative network-status published by another host with
   fingerprint <F> should be available at:
      http://<hostname>/tor/status/fp/<F>.z

   Authoritative network-statuses published by other hosts with
   fingerprints <F1>,<F2>,<F3> should be available at:
      http://<hostname>/tor/status/fp/<F1>+<F2>+<F3>.z

   The most recent network-status documents from all known authoritative
   directories, concatenated, should be available at:
      http://<hostname>/tor/status/all.z

   The most recent descriptor for a server whose identity key has a
   fingerprint of <F> should be available at:
      http://<hostname>/tor/server/fp/<F>.z

   The most recent descriptors for servers with fingerprints <F1>,<F2>,<F3>
   should be available at:
      http://<hostname>/tor/server/fp/<F1>+<F2>+<F3>.z

   The most recent descriptor for this server should be available at:
      http://<hostname>/tor/server/authority.z

   A concatenated set of the most recent descriptors for all known servers
   should be available at:
      http://<hostname>/tor/server/all.z

   For debugging, directories MAY expose non-compressed objects at URLs
   like the above, but without the final ".z".

   Clients MUST handle compressed concatenated information in two forms:
      - A concatenated list of zlib-compressed objects.
      - A zlib-compressed concatenated list of objects.
   Directory servers MAY generate either format: the former requires less
   CPU, but the latter requires less bandwidth.
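   Both forms decompress to the same byte stream, so a client can handle
   them with a single loop.  The following is a non-normative Python sketch;
   the function name is invented, and only the standard zlib module is
   assumed:

      # Non-normative sketch: decompressing a response that may be either a
      # concatenated list of zlib-compressed objects or a single
      # zlib-compressed concatenation, as required of clients above.

      import zlib

      def decompress_concatenated(blob):
          """Return the decompressed concatenation of all objects in blob."""
          out = []
          while blob:
              d = zlib.decompressobj()
              out.append(d.decompress(blob))
              out.append(d.flush())
              # Anything past the end of the first zlib stream is another
              # compressed object; keep going until the input is consumed.
              blob = d.unused_data
          return b"".join(out)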
4.1. Caching

   Directory caches (most ORs) regularly download network status documents,
   and republish them at a URL based on the directory server's identity
   key:
      http://<hostname>/tor/status/<identity fingerprint>.z

   A concatenated list of all network-status documents should be available
   at:
      http://<hostname>/tor/status/all.z

4.2. Compression

5. Client operation

   Every OP or OR, including directory servers, acts as a client to the
   directory protocol.

   Each client maintains a list of trusted directory servers.  Periodically
   (currently every 20 minutes), the client downloads a new network status.
   It chooses the directory server from which its current information is
   most out-of-date, and retries on failure until it finds a running
   server.

   When choosing ORs to build circuits, clients proceed as follows:
      - A server is "listed" if it is listed by more than half of the
        "live" network-status documents the client has downloaded.  (A
        network status is "live" if it is the most recently downloaded
        network status document for a given directory server, the server
        is a directory server trusted by the client, and the document is
        no more than D (say, 10) days old.)
      - A server is "valid" if it is listed as valid by more than half of
        the "live" network-status documents the client has downloaded.
      - A server is "running" if it is listed as running by more than half
        of the "recent" network-status documents the client has
        downloaded.  (A network status is "recent" if it was published in
        the last 60 minutes.  If there are fewer than 3 such documents,
        the 3 most recently published are "recent."  If there are fewer
        than 3 in all, all are "recent.")

   Clients store network status documents so long as they are live.
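   A non-normative sketch of these majority rules follows.  The
   representation of a parsed network status (an object with a
   router_entries mapping from identity to flag set) is invented for this
   example:

      # Non-normative sketch of the "listed"/"valid"/"running" rules above.

      def majority_has_flag(statuses, identity, flag):
          votes = sum(1 for ns in statuses
                      if flag in ns.router_entries.get(identity, set()))
          return votes * 2 > len(statuses)

      def is_listed(live_statuses, identity):
          listed = sum(1 for ns in live_statuses
                       if identity in ns.router_entries)
          return listed * 2 > len(live_statuses)

      def is_valid(live_statuses, identity):
          return majority_has_flag(live_statuses, identity, "Valid")

      def is_running(recent_statuses, identity):
          return majority_has_flag(recent_statuses, identity, "Running")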
5.1. Scheduling network status downloads

   This download scheduling algorithm implements the approach described
   above in a relatively low-state fashion.  It reflects the current Tor
   implementation.

   Clients maintain a list of authorities; each client tries to keep the
   same list, in the same order.

   Periodically, on startup, and on HUP, clients check whether they need to
   download fresh network status documents.  The approach is as follows:
      - If we have fewer than X network status documents newer than OLD,
        we choose a member of the list at random and try to download XX
        documents starting with that member's.
      - Otherwise, if we have no network status documents newer than NEW,
        we check which authority's document we retrieved most recently,
        and try to retrieve the next authority's document.  If we can't,
        we try the next authority in sequence, and so on.
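   A non-normative sketch of this check follows.  X, XX, OLD, and NEW are
   the unspecified parameters from the list above, left as arguments; the
   data structures are invented for this example:

      # Non-normative sketch of the scheduling rules above.

      import random

      def choose_downloads(authorities, status_times, last_retrieved, now,
                           X, XX, OLD, NEW):
          """Return the authorities to fetch network statuses from.

          status_times holds the publication times of documents we have;
          last_retrieved maps each authority to when we last fetched from
          it."""
          if sum(1 for t in status_times if now - t < OLD) < X:
              # Too few reasonably fresh documents: pick a random starting
              # point and request XX documents beginning with that
              # authority's own.
              start = random.randrange(len(authorities))
              return [authorities[(start + i) % len(authorities)]
                      for i in range(min(XX, len(authorities)))]
          if not any(now - t < NEW for t in status_times):
              # Nothing really fresh: try the authority after the one we
              # fetched from most recently; the caller retries the next one
              # in sequence on failure.
              last = max(range(len(authorities)),
                         key=lambda i: last_retrieved.get(authorities[i], 0))
              return [authorities[(last + 1) % len(authorities)]]
          return []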
5.2. Managing naming

   In order to provide human-memorable names for individual server
   identities, some directory servers bind names to IDs.  Clients handle
   names in two ways:

   When a client encounters a name it has not mapped before:

      If all the "binding" network-status documents the client has so far
      received claim that the name binds to some identity X, and the
      client has received at least three network-status documents, the
      client maps the name to X.

   When a client encounters a name it has mapped before:

      It uses the last-mapped identity value, unless all of the "binding"
      network-status documents bind the name to some other identity.
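   A non-normative sketch of both cases follows.  The bindings attribute
   and the calling convention are invented for this example, and the rule
   for replacing an existing mapping is read as requiring the binding
   documents to agree on a single new identity:

      # Non-normative sketch of the naming rules above.  Binding documents
      # that do not list the name at all are ignored here (one reading of
      # "all ... claim").

      def resolve_name(name, binding_statuses, total_statuses_received,
                       previous_mapping=None):
          claims = {ns.bindings[name] for ns in binding_statuses
                    if name in ns.bindings}
          if previous_mapping is None:
              # New name: map it only if every binding document that lists
              # it agrees, and at least three network-status documents have
              # been received.
              if len(claims) == 1 and total_statuses_received >= 3:
                  return claims.pop()
              return None
          # Known name: keep the old mapping unless all binding documents
          # now bind the name to some other single identity.
          if claims and previous_mapping not in claims and len(claims) == 1:
              return next(iter(claims))
          return previous_mapping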
5.3. Notes on what we do now

   THIS SECTION SHOULD BE FOLDED INTO THE EARLIER SECTIONS; THEY ARE WRONG;
   THIS IS RIGHT.

   All downloaded networkstatuses are discarded once they are 10 days old
   (by published date).

   Authdirs download each other's networkstatus every
   AUTHORITY_NS_CACHE_INTERVAL minutes (currently 10).

   Directory caches download authorities' networkstatus every
   NONAUTHORITY_NS_CACHE_INTERVAL minutes (currently 10).

   Clients always try to replace any networkstatus received over
   NETWORKSTATUS_MAX_VALIDITY ago (currently 2 days).  Also, when the most
   recently received networkstatus is more than
   NETWORKSTATUS_CLIENT_DL_INTERVAL (30 minutes) old, and we do not have
   any open directory connections fetching a networkstatus, clients try to
   download the networkstatus on their list after the most recently
   received networkstatus, skipping failed networkstatuses.  A
   networkstatus is "failed" if NETWORKSTATUS_N_ALLOWABLE_FAILURES (3)
   attempts in a row have all failed.

   We do not update router statuses if we have fewer than half of the
   networkstatuses.

   A networkstatus is "live" if it is the most recent we have received
   signed by a given trusted authority.

   A networkstatus is "recent" if it is "live" and:
      - it was received in the last DEFAULT_RUNNING_INTERVAL (currently 60
        minutes), OR
      - it was one of the MIN_TO_INFLUENCE_RUNNING (3) most recently
        received networkstatuses.

   Authorities always believe their own opinion as to a router's status.
   For other Tors:
      - a router is valid if more than half of the live networkstatuses
        think it's valid.
      - a router is named if more than half of the live networkstatuses
        from naming authorities think it's named, and they all think it
        has the same name.
      - a router is running if more than half of the recent networkstatuses
        think it's running.

   Everyone downloads router descriptors as follows:
      - If any networkstatus lists a more recently published routerdesc
        with a different descriptor digest, and no more than
        MAX_ROUTERDESC_DOWNLOAD_FAILURES attempts to retrieve that
        routerdesc have failed, then that routerdesc is "downloadable".
      - Every DirFetchInterval, or whenever a request for routerdescs
        returns no routerdescs, we launch a set of requests for all
        downloadable routerdescs.  We divide the downloadable routerdescs
        into groups of no more than DL_PER_REQUEST, and send a request for
        each group to directory servers chosen independently.
      - We also launch a request as above when a request for routerdescs
        fails and we have no directory connections fetching routerdescs.
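   A non-normative sketch of the "downloadable" test and request batching
   described above follows.  The constant names come from the text; their
   values, and the descriptor bookkeeping, are invented for this example:

      # Non-normative sketch of the descriptor download rules above.

      MAX_ROUTERDESC_DOWNLOAD_FAILURES = 3   # hypothetical value
      DL_PER_REQUEST = 96                    # hypothetical value

      def downloadable_digests(networkstatuses, have, failures):
          """Digests of descriptors we should try to fetch.

          have maps identity -> (published time, digest) for descriptors we
          already hold; failures counts consecutive failed attempts per
          digest."""
          wanted = set()
          for ns in networkstatuses:
              for e in ns.routers:
                  held_published, held_digest = have.get(e.identity, (0, None))
                  if (e.published > held_published
                          and e.digest != held_digest
                          and failures.get(e.digest, 0)
                              <= MAX_ROUTERDESC_DOWNLOAD_FAILURES):
                      wanted.add(e.digest)
          return wanted

      def launch_requests(digests, pick_directory, fetch):
          """Split the downloadable digests into groups of at most
          DL_PER_REQUEST and request each group from an independently
          chosen directory server."""
          digests = sorted(digests)
          for i in range(0, len(digests), DL_PER_REQUEST):
              fetch(pick_directory(), digests[i:i + DL_PER_REQUEST])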
6. Remaining issues

   Client-knowledge partitioning is worrisome.  Most versions of this
   don't seem to be worse than the Danezis-Murdoch tracing attack, since
   an attacker can't do more than deduce probable exits from entries (or
   vice versa).  But what about when the client connects to A and B but in
   a different order?  How badly can it be partitioned based on its
   knowledge?

================================================================================
Everything below this line is obsolete.
--------------------------------------------------------------------------------
Tor network discovery protocol

0. Scope

   This document proposes a way of doing more distributed network
   discovery while maintaining some amount of admission control.  We don't
   recommend you implement this as-is; it needs more discussion.

   Terminology:
      - Client: The Tor component that chooses paths.
      - Server: A relay node that passes traffic along.

1. Goals.

   We want more decentralized discovery for network topology and status.
   In particular:

   1a. We want to let clients learn about new servers from anywhere
       and build circuits through them if they wish.  This means that
       Tor nodes need to be able to Extend to nodes they don't already
       know about.
   1b. We want to let servers limit the addresses and ports they're
       willing to extend to.  This is necessary e.g. for middleman nodes
       who have jerks trying to extend from them to badmafia.com:80 all
       day long and it's drawing attention.
   1b'. While we're at it, we also want to handle servers that *can't*
       extend to some addresses/ports, e.g. because they're behind NAT or
       otherwise firewalled.  (See section 5 below.)
   1c. We want to provide a robust (available) and not-too-centralized
       mechanism for tracking network status (which nodes are up and
       working) and admission (which nodes are "recommended" for certain
       uses).

2. Assumptions.

   2a. People get the code from us, and they trust us (or our gpg keys, or
       something down the trust chain that's equivalent).
   2b. Even if the software allows humans to change the client
       configuration, most of them will use the default that's provided.
       So we should provide one that is the right balance of robust and
       safe.  That is, we need to hard-code enough "first introduction"
       locations that new clients will always have an available way to get
       connected.
   2c. Assume that the current "ask them to email us and see if it seems
       suspiciously related to previous emails" approach will not catch
       the strong Sybil attackers.  Therefore, assume the Sybil attackers
       we do want to defend against can produce only a limited number of
       not-obviously-on-the-same-subnet nodes.
   2d. Roger has only a limited amount of time for approving nodes; he
       shouldn't be the time bottleneck anyway; and he is doing a poor job
       at keeping out some adversaries.
   2e. Some people would be willing to offer servers but will be put off
       by the need to send us mail and identify themselves.
   2e'. Some evil people will avoid doing evil things based on the
       perception (however true or false) that there are humans monitoring
       the network and discouraging evil behavior.
   2e''. Some people will trust the network, and the code, more if they
       have the perception that there are trustworthy humans guiding the
       deployed network.
   2f. We can trust servers to accurately report their characteristics
       (uptime, capacity, exit policies, etc), as long as we have some
       mechanism for notifying clients when we notice that they're lying.
   2g. There exists a "main" core Internet in which most locations can
       access most locations.  We'll focus on it (first).
3. Some notes on how to achieve this.

   Piece one: (required)

   We ship with N (e.g. 20) directory server locations and fingerprints.

   Directory servers serve signed network-status pages, listing their
   opinions of network status and which routers are good (see 4a below).

   Dirservers collect and provide server descriptors as well.  These don't
   need to be signed by the dirservers, since they're self-certifying and
   timestamped.

   (In theory the dirservers don't need to be the ones serving the
   descriptors, but in practice the dirservers would need to point people
   at the place that does, so for simplicity let's assume that they do.)

   Clients then get network-status pages from a threshold of dirservers,
   fetch enough of the corresponding server descriptors to make them
   happy, and proceed as now.

   Piece two: (optional)

   We ship with S (e.g. 3) seed keys (trust anchors), and ship with signed
   timestamped certs for each dirserver.  Dirservers also serve a list of
   certs, maybe including a "publish all certs since time foo"
   functionality.  If at least two seeds agree about something, then it is
   so.

   Now dirservers can be added, and revoked, without requiring users to
   upgrade to a new version.  If we only ship with dirserver locations and
   not fingerprints, it also means that dirservers can rotate their
   signing keys transparently.

   But, keeping track of the seed keys becomes a critical security issue.
   And rotating them in a backward-compatible way adds complexity.  Also,
   dirserver locations must be at least somewhat static, since each lost
   dirserver degrades reachability for old clients.  So as the dirserver
   list rolls over we have no choice but to put out new versions.

   Piece three: (optional)

   Notice that this doesn't preclude other approaches to discovering
   different concurrent Tor networks.  For example, a Tor network inside
   China could ship Tor with a different torrc and poof, they're using a
   different set of dirservers.  Some smarter clients could be made to
   learn about both networks, and be told which nodes bridge the networks.

   ...
4. Unresolved issues.

   4a. How do the dirservers decide whether to recommend a server?  We
       could have them do it based on contact from the human, but by
       assumptions 2c and 2d above, that's going to be less effective, and
       more of a hassle, as we scale up.  Thus I propose that they simply
       do some basic automatic measuring themselves, starting with the
       current "are they connected to me" measurement, and that's all
       that is done.

       We could blacklist as we notice evil servers, but then we're in
       the same boat all the irc networks are in.  We could whitelist as
       we notice new servers, and stop whitelisting (maybe rolling back a
       bit) once an attack is in progress.  If we assume humans aren't
       particularly good at this anyway, we could just do automated
       delayed whitelisting, and have a "you're under attack" switch the
       human can enable for a while to start acting more conservatively.

       Once upon a time we collected contact info for servers, which was
       mainly used to remind people that their servers are down and could
       they please restart.  Now that we have a critical mass of servers,
       I've stopped doing that reminding.  So contact info is less
       important.

   4b. What do we do about recommended-versions?  Do we need a threshold
       of dirservers to claim that your version is obsolete before you
       believe them?  Or do we make it have less effect -- e.g. print a
       warning but never actually quit?  Coordinating all the humans to
       upgrade their recommended-version strings at once seems bad.  Maybe
       if we have seeds, the seeds can sign a recommended-version and
       upload it to the dirservers.

   4c. What does it mean to bind a nickname to a key?  What if each
       dirserver does it differently, so one nickname corresponds to
       several keys?  Maybe the solution is that nickname<=>key bindings
       should be individually configured by clients in their torrc (if
       they want to refer to nicknames in their torrc), and we stop
       thinking of nicknames as globally unique.

   4d. What new features need to be added to server descriptors so they
       remain compact yet support new functionality?  Section 5 is a start
       of discussion of one answer to this.
5. Regarding "Blossom: an unstructured overlay network for end-to-end
   connectivity."

SECTION 5A: Blossom Architecture

   Define a "transport domain" as a set of nodes who can all mutually name
   each other directly, using transport-layer (e.g. HOST:PORT) naming.

   Define a "clique" as a set of nodes who can all mutually contact each
   other directly, using transport-layer (e.g. HOST:PORT) naming.

   Neither transport domains nor cliques form a partition of the set of
   all nodes.  Just as cliques may overlap in theoretical graphs, transport
   domains and cliques may overlap in the context of Blossom.

   In this section we address possible solutions to the problem of how to
   allow Tor routers in different transport domains to communicate.

   First, we presume that for every interface between transport domains A
   and B, one Tor router T_A exists in transport domain A, one Tor router
   T_B exists in transport domain B, and (without loss of generality) T_A
   can open a persistent connection to T_B.  Any Tor traffic between the
   two routers will occur over this connection, which effectively renders
   the routers equal partners in bridging between the two transport
   domains.  We refer to the established link between two transport
   domains as a "bridge" (we use this term because there is no serious
   possibility of confusion with the notion of a layer 2 bridge).

   Next, suppose that the universe consists of transport domains connected
   by persistent connections in this manner.  An individual router can
   open multiple connections to routers within the same foreign transport
   domain, and it can establish separate connections to routers within
   multiple foreign transport domains.

   As in regular Tor, each Blossom router pushes its descriptor to
   directory servers.  These directory servers can be within the same
   transport domain, but they need not be.  The trick is that if a
   directory server is in another transport domain, then that directory
   server must know through which Tor routers to send messages destined
   for the Tor router in question.

   Blossom routers can advertise themselves to other transport domains in
   two ways:

   (1) Directly push the descriptor to a directory server in the other
       transport domain.  This probably works particularly well if the
       other transport domain is "the Internet", or if there are
       hard-coded directory servers in "the Internet".  The router has the
       responsibility to inform the directory server about which routers
       can be used to reach it.

   (2) Push the descriptor to a directory server in the same transport
       domain.  This is the easiest solution for the router, but it relies
       upon the existence of a directory server in the same transport
       domain that is capable of communicating with directory servers in
       the remote transport domain.  In order for this to work, some
       individual Tor routers must have published their descriptors in
       remote transport domains (i.e. followed the first option) in order
       to provide a link by which directory servers can communicate
       bidirectionally.

   If all directory servers are within the same transport domain, then
   approach (1) is sufficient: routers can exist within multiple transport
   domains, and as long as the network of transport domains is fully
   connected by bridges, any router will be able to access any other
   router in a foreign transport domain simply by extending along the path
   specified by the directory server.  However, we want the system to be
   truly decentralized, which means not electing any particular transport
   domain to be the master domain in which entries are published.

   This is the explanation for (2): in order for a directory server to
   share information with a directory server in a foreign transport domain
   to which it cannot speak directly, it must use Tor, which means
   referring to the other directory server by using a router in the
   foreign transport domain.  However, in order to use Tor, it must be
   able to reach that router, which means that a descriptor for that
   router must exist in its table, along with a means of reaching it.
   Therefore, for a mutual exchange of information between routers in
   transport domain A and those in transport domain B to be possible when
   routers in transport domain A cannot establish direct connections with
   routers in transport domain B, some router in transport domain B must
   have pushed its descriptor to a directory server in transport domain A,
   so that the directory server in transport domain A can use that router
   to reach the directory server in transport domain B.

   Descriptors for Blossom routers are read-only, as for regular Tor
   routers, so directory servers cannot modify them.  However, Tor
   directory servers also publish a "network-status" page that provides
   information about which nodes are up and which are not.  Directory
   servers could provide an additional field for Blossom nodes: for each
   Blossom node, the directory server specifies a set of paths (possibly
   containing only one path) through the overlay (i.e. an ordered list of
   router names/IDs) to a router in a foreign transport domain.

   A new router publishing to a directory server in a foreign transport
   domain should include a list of routers.  This list should be either:

   a. a list of routers to which the router has persistent connections,
      or, if the new router does not have any persistent connections,
   b. a (not necessarily exhaustive) list of fellow routers that are in
      the same transport domain.

   The directory server will be able to use this information to derive a
   path to the new router, as follows.  If the new router used approach
   (a), then the directory server will define the set of paths to the new
   router as the union of the sets of paths to the routers on the list,
   with the name of the last hop appended to each path.  If the new router
   used approach (b), then the directory server will define the paths to
   the new router as the union of the sets of paths to the routers
   specified in the list.  The directory server will then insert the newly
   defined paths into the field in the network-status page for the router.
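   A non-normative sketch of this derivation follows.  The mapping from
   routers to known overlay paths is invented for this example, and "the
   name of the last hop" is read as the listed router through which the
   new router is reached:

      # Non-normative sketch of the path-derivation rule above.  paths_to
      # maps each known router to a set of overlay paths (tuples of router
      # names); the representation is invented for this example.

      def derive_paths(paths_to, listed_routers, approach):
          derived = set()
          for r in listed_routers:
              for path in paths_to.get(r, set()):
                  if approach == "a":
                      # (a): the listed router has a persistent connection
                      # to the new router, so append it as the final hop.
                      derived.add(path + (r,))
                  else:
                      # (b): the listed router merely shares the new
                      # router's transport domain, so reuse its paths
                      # unchanged.
                      derived.add(path)
          return derived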
   When confronted with the choice of multiple different paths to reach
   the same router, Blossom nodes may use a route selection protocol
   similar in design to that used by BGP (it may be a simple
   distance-vector route selection procedure that only takes into account
   path length, or it may be more complex to avoid loops, cache results,
   etc.) in order to choose the best one.

   If a .exit name is not provided, then a path will be chosen whose nodes
   are all among the set of nodes provided by the directory server that
   are believed to be in the same transport domain (i.e. no explicit
   path).  Thus, there should be no surprises to the client.  All routers
   should define their exit policies carefully, with the knowledge that
   clients from potentially any transport domain could access that which
   is not explicitly restricted.

SECTION 5B: Tor+Blossom desiderata

   The interests of Blossom would be best served by implementing the
   following modifications to Tor:

   I. CLIENTS

   Objectives: Ultimately, we want Blossom requests to be
   indistinguishable in format from non-Blossom .exit requests, i.e.
   hostname.forwarder.exit.

   Proposal: Blossom is a process that manipulates Tor, so it should be
   implemented as a Tor controller, extending control-spec.txt.  For each
   request, Tor uses the control protocol to ask the Blossom process
   whether it (the Blossom process) wants to build or assign a particular
   circuit to service the request.  Blossom chooses one of the following
   responses:

   a. (Blossom exit node, circuit cached) "use this circuit" -- provides a
      circuit ID.
   b. (Blossom exit node, circuit not cached) "I will build one" --
      provides a list of routers, gets a circuit ID.
   c. (Regular (non-Blossom) exit node) "No, do it yourself" -- provides
      nothing.

   II. ROUTERS

   Objectives: Blossom routers are like regular Tor routers, except that
   Blossom routers need these features as well:

   a. the ability to open persistent connections,
   b. the ability to know whether they should use a persistent connection
      to reach another router,
   c. the ability to define a set of routers to which to establish
      persistent connections, as readable from a configuration file, and
   d. the ability to tell a directory server that (1) it is
      Blossom-enabled, and (2) it can be reached by some set of routers to
      which it explicitly establishes persistent connections.

   Proposal: Address the aforementioned points as follows.

   a. We need the ability to open a specified number of persistent
      connections.  This can be accomplished by implementing a generic
      should_i_close_this_conn() and
      which_conns_should_i_try_to_open_even_when_i_dont_need_them().
   b. The Tor design already supports this, but we must be sure to
      establish the persistent connections explicitly, re-establish them
      when they are lost, and not close them unnecessarily.
   c. We must modify Tor to add a new configuration option, allowing
      either (a) explicit specification of the set of routers to which to
      establish persistent connections, or (b) a random choice of some
      nodes to which to establish persistent connections, chosen from the
      set of nodes local to the transport domain of the specified
      directory server (for example).

   III. DIRSERVERS

   Objective: Blossom directory servers may provide extra fields in their
   network-status pages.  Blossom directory servers may communicate with
   Blossom clients/routers in nonstandard ways in addition to standard
   ways.

   Proposal: Geoff should be able to implement a directory server
   according to the Tor specification (dir-spec.txt).