$Id$

                           Tor Path Specification

                              Roger Dingledine
                               Nick Mathewson

Note: This is an attempt to specify Tor as currently implemented.  Future
versions of Tor will implement improved algorithms.

This document tries to cover how Tor chooses to build circuits and assigns
streams to circuits.  Other implementations MAY take other approaches, but
implementors should be aware of the anonymity and load-balancing implications
of their choices.

THIS SPEC ISN'T DONE OR CORRECT.  I'm just copying in relevant info so
far.  The starred points are things we should cover, but not an exhaustive
list.  -NM

1. General operation

   Tor begins building circuits as soon as it has enough directory
   information to do so (see section 5.1 of dir-spec.txt).  Some circuits
   are built preemptively because we expect to need them later (for user
   traffic), and some are built because of immediate need (for user traffic
   that no current circuit can handle, for testing the network or our
   availability, and so on).

   When a client application creates a new stream (by opening a SOCKS
   connection or launching a resolve request), we attach it to an
   appropriate open (or in-progress) circuit if one exists, and launch a
   new circuit only if no current circuit can handle the request.  We
   rotate circuits over time to avoid some profiling attacks.

   These processes are described in more detail below.

1b. Types of circuits.

* Stable / Ordinary
* Internal / Exit

1c. Terminology

   A "path" is an ordered sequence of nodes, not yet built as a circuit.

   A "clean" circuit is one that has not yet been used for any stream or
   rendezvous traffic.

   A "stable" node is one that we believe to have the 'Stable' flag set on
   the basis of our current directory information.  A "stable" circuit is
   one that consists entirely of "stable" nodes.

   A "persistent" stream is one that we predict will require a long
   uptime.  Currently, Tor does this by examining the stream's target
   port, and comparing it to a list of "long-lived" ports.  (Default: 21,
   22, 706, 1863, 5050, 5190, 5222, 5223, 6667, 8300, 8888.)

   An exit node "supports" a stream if the stream's target IP is known,
   and the stream's target IP and target port are allowed by the exit
   node's declared exit policy.  A path "supports" a stream if:

      * The last node in the path "supports" the stream, and
      * If the stream is "persistent," all the nodes in the path are
        "stable".
   An exit node "might support" a stream if the stream's target IP is
   unknown (because we haven't resolved it yet), and the exit node's
   declared exit policy allows some IPs to exit at that port.  ???

2. Building circuits

2.1. When we build.

   When running as a client, Tor tries to maintain at least 3 clean
   circuits, so that new streams can be handled quickly.  To increase the
   likelihood of success, Tor tries to predict what exit nodes will be
   useful by choosing from among nodes that support the ports we have used
   in the recent past.

   If Tor needs to attach a stream that no current exit circuit can
   support, it looks for an existing clean circuit to cannibalize.  If we
   find one, we try to extend it another hop to an exit node that might
   support the stream.  [Must be internal???]

   If no circuit exists, or is currently being built, along a path that
   might support a stream, we begin building a new circuit that might
   support the stream.

2.2. Path selection

   We choose the path for each new circuit before we build it.  We choose
   the exit node first, followed by the other nodes in the circuit.  We do
   not choose the same router twice for the same circuit.  We do not
   choose any router in the same family as another in the same circuit.
   We don't choose any non-running or non-valid router unless we have been
   configured to do so.  When choosing among multiple candidates for a
   path element, we choose a given router with probability proportional to
   its advertised bandwidth [the smaller of the 'rate' and 'observed'
   arguments to the "bandwidth" element in its descriptor].  If a router's
   advertised bandwidth is greater than MAX_BELIEVABLE_BANDWIDTH
   (1.5 MB/sec), we clip it to that value.
   Additional restrictions:

     XXX When to use Fast
     XXX When to use Stable
     XXX When to use Named

   If we're building a circuit preemptively, we choose an exit node that
   might support streams to one of our predicted ports; otherwise, we pick
   an exit node that will support a pending stream (if the stream's target
   is known) or that might support a pending stream.

   We pick an entry node from one of our guards; see section 5 below.

2.3. Handling failure

2.4. Tracking "predicted" ports

* Choosing the path first, building second.
* Choosing the length of the circuit.
* Choosing entries, midpoints, exits.
  * the .exit notation
* exitnodes, entrynodes, strictexitnodes, strictentrynodes.
* What to do when an extend fails
* Keeping track of 'expected' ports
  * And expected hidden service use (client-side and hidserv-side)
  * Backing off from circuit building when a long time has passed

3. Attaching streams to circuits

  * Including via the controller.
  * Timeouts and when Tor autoretries.
    * What stream-end-reasons are appropriate for retrying.

4. Rendezvous circuits

5. Guard nodes

6. Testing circuits

(From some emails by arma)

Hi folks,

I've gotten the codebase to the point that I'm going to start trying
to make helper nodes work well. With luck they will be on by default in
the final 0.1.1.x release.

For background on helper nodes, read
http://wiki.noreply.org/noreply/TheOnionRouter/TorFAQ#RestrictedEntry

First order of business: the phrase "helper node" sucks. We always have
to define it after we say it to somebody. Nick likes the phrase "contact
node", because they are your point-of-contact into the network. That is
better than phrases like "bridge node". The phrase "fixed entry node"
doesn't seem to work with non-math people, because they wonder what was
broken about it. I'm sort of partial to the phrase "entry node" or maybe
"restricted entry node".
In any case, if you have ideas on names, please
mail me off-list and I'll collate them.

Right now the code exists to pick helper nodes, store our choices to
disk, and use them for our entry nodes. But there are three topics
to tackle before I'm comfortable turning them on by default. First,
how to handle churn: since Tor nodes are not always up, and sometimes
disappear forever, we need a plan for replacing missing helpers in a
safe way. Second, we need a way to distinguish "the network is down"
from "all my helpers are down", also in a safe way. Lastly, we need to
examine the situation where a client picks three crummy helper nodes
and is forever doomed to a lousy Tor experience. Here's my plan:

How to handle churn.
  - Keep track of whether you have ever actually established a
    connection to each helper. Any helper node in your list that you've
    never used is ok to drop immediately. Also, we don't save that
    one to disk.
  - If all our helpers are down, we need more helper nodes: add a new
    one to the *end* of our list. Only remove dead ones when they have
    been gone for a very long time (months).
  - Pick from the first n (by default 3) helper nodes in your list
    that are up (according to the network-statuses) and reachable
    (according to your local firewall config).
    - This means that order matters when writing/reading them to disk.

How to deal with network down.
  - While all helpers are down/unreachable and there are no established
    or on-the-way testing circuits, launch a testing circuit. (Do this
    periodically in the same way we try to establish normal circuits
    when things are working normally.)
    (Testing circuits are a special type of circuit, that streams won't
    attach to by accident.)
  - When a testing circuit succeeds, mark all helpers up and hold
    the testing circuit open.
  - If a connection to a helper succeeds, close all testing circuits.
    Else mark that helper down and try another.
  - If the last helper is marked down and we already have a testing
    circuit established, then add the first hop of that testing circuit
    to the end of our helper node list, close that testing circuit,
    and go back to square one. (Actually, rather than closing the
    testing circuit, can we get away with converting it to a normal
    circuit and beginning to use it immediately?)

How to pick non-sucky helpers.
  - When we're picking a new helper node, don't use ones which aren't
    reachable according to our local ReachableAddresses configuration.
  (There's an attack here: if I pick my helper nodes in a very
   restrictive environment, say "ReachableAddresses 18.0.0.0/255.0.0.0:*",
   then somebody watching me use the network from another location will
   guess where I first joined the network. But let's ignore it for now.)
  - Right now we choose new helpers just like we'd choose any entry
    node: they must be "stable" (claim >1 day uptime) and "fast"
    (advertise >10kB/s capacity). In 0.1.1.11-alpha, clients let
    dirservers define "stable" and "fast" however they like, and they
    just believe them. So the next step is to make them a function of
    the current network: e.g. line up all the 'up' nodes in order and
    declare the top three-quarters to be stable, fast, etc, as long as
    they meet some minimum too.
  - If that's not sufficient (it won't be), dirservers should introduce
    a new status flag: in addition to "stable" and "fast", we should
    also describe certain nodes as "entry", meaning they are suitable
    to be chosen as a helper. The first difference would be that we'd
    demand the top half rather than the top three-quarters. Another
    requirement would be to look at "mean time between returning" to
    ensure that these nodes spend most of their time available. (Up for
    two days straight, once a month, is not good enough.)
  - Lastly, we need a function, given our current set of helpers and a
    directory of the rest of the network, that decides when our helper
    set has become "too crummy" and we need to add more. For example,
    this could be based on the currently advertised capacity of each of
    our helpers, and it would also be based on the user's preferences
    of speed vs. security.

***

Lasse wrote:
> I am a bit concerned with performance if we are to have e.g. two out of
> three helper nodes down or unreachable. How often should Tor check if
> they are back up and running?

Right now Tor believes a threshold of directory servers when deciding
whether each server is up. When Tor observes a server to be down
(connection failed or building the first hop of the circuit failed),
it marks it as down and doesn't try it again, until it gets a new
network-status from somebody, at which point it takes a new consensus
and marks the appropriate servers as up.

According to sec 5.1 of dir-spec.txt, the client will try to fetch a new
network-status at least every 30 minutes, and more often in certain cases.

With the proposed scheme, we'll also mark all our helpers as up shortly
after the last one is marked down.

> When should there be
> added an extra node to the helper node list? This is kind of an
> important threshold?

I agree, this is an important question. I don't have a good answer yet. Is
it terrible, anonymity-wise, to add a new helper every time only one of
your helpers is up? Notice that I say add rather than replace -- so you'd
only use this fourth helper when one of your main three helpers is down,
and if three of your four are down, you'd add a fifth, but only use it
when two of the first four are down, etc.

In fact, this may be smarter than just picking a random node for your
testing circuit, because if your network goes up and down a lot, then
eventually you have a chance of using any entry node in the network for
your testing circuit.

We have a design choice here.
Do we only try to use helpers for the
connections that will have streams on them (revealing our communication
partners), or do we also want to restrict the overall set of nodes that
we'll connect to, to discourage people from enumerating all Tor clients?

I'm increasingly of the belief that we want to hide our presence too,
based on the fact that Steven and George and others keep coming up with
attacks that start with "Assuming we know the set of users".

If so, then here's a revised "How to deal with network down" section:

  1) When a helper is marked down or the helper list shrinks, and as
     a result the total number of helpers that are either (up and
     reachable) or (reachable but never connected to) is <= 1, then pick
     a new helper and add it to the end of the list.
     [We count nodes that have never been connected to, since otherwise
      we might keep on adding new nodes before trying any of them. By
      "reachable" I mean "is allowed by ReachableAddresses".]
  2) When you fail to connect to a helper that has never been connected
     to, you remove it from the list right then (and the above rule
     might kick in).
  3) When you succeed at connecting to a helper that you've never
     connected to before, mark all reachable helpers earlier in the list
     as up, and close that circuit.
     [We close the circuit, since if the other helpers are now up, we
      prefer to use them for circuits that will reveal communication
      partners.]

This certainly seems simpler.
Are there holes that I'm missing?

> If running from a laptop you will meet different firewall settings, so
> how should Helper Nodes settings keep up with moving from an open
> ReachableAddresses to a FascistFirewall setting after the helper nodes
> have been selected?

I added the word "reachable" to three places in the above list, and I
believe that totally solves this question.

And as a bonus, it leads to an answer to Nick's attack ("If I pick
my helper nodes all on 18.0.0.0:*, then I move, you'll know where I
bootstrapped") -- the answer is to pick your original three helper nodes
without regard for reachability. Then the above algorithm will add some
more that are reachable for you, and if you move somewhere, it's more
likely (though not certain) that some of the originals will become useful.
Is that smart or just complex?

> What happens if (when?) performance of the third node is bad?

My above solution solves this a little bit, in that we always try to
have two nodes available. But what if they are both up but bad? I'm not
sure. As my previous mail said, we need some function, given our list
of helpers and the network directory, that will tell us when we're in a
bad situation. I can imagine some simple versions of this function --
for example, when both our working helpers are in the bottom half of
the nodes, ranked by capacity.

But the hard part: what's the remedy when we decide there's something
to fix? Do we add a third, and now we have two crummy ones and a new
one? Or do we drop one or both of the bad ones?

Perhaps we believe the latest claim from the network-status consensus,
and we count a helper the dirservers believe is crummy as "not worth
trying" (equivalent to "not reachable under our current ReachableAddresses
config") -- and then the above algorithm would end up adding good ones,
but we'd go back to the originals if they resume being acceptable? That's
an appealing design.
I wonder if it will cause the typical Tor user to
have a helper node list that comprises most of the network, though. I'm
ok with this.

> Another point you might want to keep in mind, is the possibility to
> reuse the code in order to add a second layer helper node (meaning node
> number two) to "protect" the first layer (node number one) helper nodes.
> These nodes should be tied to each of the first layer nodes. E.g. there
> is one helper node list, as described in your mail, for each of the
> first layer nodes, following their create/destroy.

True. Does that require us to add a fourth hop to our path length,
since the first hop is from a limited set, the second hop is from a
limited set, and the third hop might also be constrained because, say,
we're asking for an unusual exit port?

> Another of the things might worth adding to the to do list is
> localization of server (helper) nodes. Making it possible to pick
> countries/regions where you do (not) want your helper nodes located. (As
> in "HelperNodesLocated us,!eu" etc.) I know this requires the use of
> external data and may not be worth it, but it _could_ be integrated at
> the directory servers only -- adding a list of node IPs and e.g. a
> country/region code to the directory and thus reduce the overhead. (?)
> Maybe extending the Family-term?

I think we are heading towards doing path selection based on geography,
but I don't have a good sense yet of how that will actually turn out --
that is, with what mechanism Tor clients will learn the information they
need. But this seems to be something that is orthogonal to the rest of
this discussion, so I look forward to having somebody else solve it for
us, and fitting it in when it's ready. :)

> And I would like to keep an option to pick the first X helper nodes
> myself and then let Tor extend this list if these nodes are down (like
> EntryNodes in current code). Even if this opens up for some new types of
> "relationship" attacks.

Good idea.
Here's how I'd like to name these:

The "EntryNodes" config option is a list of seed helper nodes. When we
read EntryNodes, any node listed in EntryNodes but not in the current
helper node list gets *pre*pended to the helper node list.

The "NumEntryNodes" config option (currently called NumHelperNodes)
specifies the number of up, reachable, good-enough helper nodes that
will make up the pool of possible choices for first hop, counted from
the front of the helper node list until we have enough.

The "UseEntryNodes" config option (currently called UseHelperNodes)
tells us to turn on all this helper node behavior. If you set EntryNodes,
then this option is implied.

The "StrictEntryNodes" config option, provided for backward compatibility
and for debugging, means a) we replace the helper node list with the
current EntryNodes list, and b) whenever we would do an operation that
alters the helper node list, we don't. (Yes, this means that if all the
helper nodes are down, we lose until we mark them up again. But this is
how it behaves now.)

> I am sure my next point has been asked before, but what about testing
> the current speed of the connections when looking for new helper nodes,
> not only testing the connectivity? I know this might contribute to a lot
> of overhead in the network, but if this only occurs e.g. when using
> helper nodes as a Hidden Service it might not have that large an impact,
> but could help availability for the services?

If we're just going to be testing them when we're first picking them,
then it seems we can do the same thing by letting the directory servers
test them. This has the added benefit that all the (behaving) clients
use the same data, so they don't end up partitioned by a node that
(for example) performs selectively for his victims.

Another idea would be to periodically keep track of what speeds you get
through your helpers, and make decisions from this.
The reason we haven't
done this yet is because there are a lot of variables -- perhaps the
web site is slow, perhaps some other node in the path is slow, perhaps
your local network is slow briefly, perhaps you got unlucky, etc.  I
believe that over time (assuming the user has roughly the same browsing
habits) all of these would average out and you'd get a usable answer,
but I don't have a good sense of how long it would take to converge,
so I don't know whether this would be worthwhile.

> BTW. I feel comfortable with all the terms helper/entry/contact nodes,
> but I think you (the developers) should just pick one and stay with it
> to avoid confusion.

I think I'm going to try to co-opt the term 'Entry' node for this
purpose. We're going to have to keep referring to helper nodes for the
research community for a while though, so they realize that Tor does
more than just let users ask for certain entry nodes.

============================================================

Some stuff that worries me about entry guards. 2006 Jun, Nickm.

1. It is unlikely for two users to have the same set of entry guards.
2. Observing a user is sufficient to learn its entry guards.
3. So, as we move around, we leak our