Filename: 111-local-traffic-priority.txt
Title: Prioritizing local traffic over relayed traffic
Version:
Last-Modified:
Author: Roger Dingledine
Created: 14-Mar-2007
Status: Open

Overview:

  We describe some ways to let Tor users operate as a relay and enforce
  rate limiting for relayed traffic without impacting their locally
  initiated traffic.

Motivation:

  Right now we encourage people who use Tor as a client to configure it
  as a relay too ("just click the button in Vidalia"). Most of these
  users are on asymmetric links, meaning they have a lot more download
  capacity than upload capacity. But if they enable rate limiting too,
  their download capacity is suddenly limited to their (much lower)
  upload capacity. And they have to enable rate limiting, or their
  upstream pipe gets filled up, starts dropping packets, and then their
  net connection doesn't work even for non-Tor traffic. So they end up
  turning off the relaying part so they can use Tor (and other
  applications) again.

  So far this hasn't mattered that much: most of our fast relays are
  operated only in relay mode, so the rate limiting makes sense for
  them. But if we want to be able to attract many more relays in the
  future, we need to let ordinary users act as relays too.

  Further, as we begin to deploy the blocking-resistance design and rely
  on ordinary users to click the "Tor for Freedom" button, this
  limitation will become a serious stumbling block to getting volunteers
  to act as bridges.

The problem:

  Tor implements its rate limiting on the 'read' side by only reading
  a certain number of bytes from the network in each second. If it has
  emptied its token bucket, it doesn't read any more from the network;
  eventually TCP notices and stalls until we resume reading. But if we
  want to have two classes of service, we can't know what class a given
  incoming cell will be until we look at it, at which point we've already
  read it.
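
  For concreteness, here is a rough sketch of read-side token-bucket
  limiting in the style described above; token_bucket_t, bucket_refill(),
  and rate_limited_read() are illustrative names, not Tor's actual
  internals:

    /* Minimal sketch of read-side token-bucket rate limiting.  All
     * names are illustrative. */
    #include <stddef.h>
    #include <sys/types.h>
    #include <unistd.h>

    typedef struct token_bucket_t {
      ssize_t tokens;   /* bytes we may still read this second */
      ssize_t rate;     /* bytes added back per second */
      ssize_t burst;    /* cap on accumulated tokens */
    } token_bucket_t;

    /* Called once per second: refill the bucket up to its burst size. */
    static void
    bucket_refill(token_bucket_t *b)
    {
      b->tokens += b->rate;
      if (b->tokens > b->burst)
        b->tokens = b->burst;
    }

    /* Read from the network only while we have tokens; once the bucket
     * is empty we stop reading entirely, and TCP flow control
     * eventually stalls the sender. */
    static ssize_t
    rate_limited_read(int fd, token_bucket_t *b, char *buf, size_t want)
    {
      ssize_t n;
      if (b->tokens <= 0)
        return 0;                  /* bucket empty: don't read at all */
      if ((ssize_t)want > b->tokens)
        want = (size_t)b->tokens;  /* never read more than our tokens */
      n = read(fd, buf, want);
      if (n > 0)
        b->tokens -= n;            /* spend tokens for what we read */
      return n;
    }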

Some options:

  Option 1: read when our token bucket is full enough, and if it turns
  out that what we read was local traffic, then add the tokens back into
  the token bucket. This will work when local traffic load alternates
  with relayed traffic load; but it's a poor option in general, because
  when we're receiving both local and relayed traffic, there are plenty
  of cases where we'll end up with an empty token bucket, and then we're
  back where we were before.
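
  A rough sketch of that refund, building on the token_bucket_t above;
  cell_is_local() is a hypothetical classifier that, as noted, we can
  only call after the cell has already been read:

    typedef struct cell_t cell_t;                 /* opaque cell type */
    extern int cell_is_local(const cell_t *cell); /* hypothetical */

    static void
    refund_if_local(token_bucket_t *bucket, const cell_t *cell,
                    size_t cell_len)
    {
      if (cell_is_local(cell)) {
        /* Our own traffic shouldn't count against the relayed rate
         * limit, so put back the tokens we spent reading this cell. */
        bucket->tokens += (ssize_t)cell_len;
        if (bucket->tokens > bucket->burst)
          bucket->tokens = bucket->burst;
      }
      /* Relayed cells keep their tokens spent: they stay rate limited. */
    }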

  More generally, notice that our problem is easy when a given TCP
  connection has either entirely local circuits or entirely relayed
  circuits. In fact, even if both are present, if one class is
  entirely idle (none of its circuits have sent or received in the past
  N seconds), we can ignore that class until it wakes up again. So it
  only gets complex when a single connection contains active circuits
  of both classes.
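
  A sketch of that activity test; the structure, the list walk, and the
  value of N are all placeholders:

    #include <time.h>

    #define CLASS_IDLE_TIMEOUT 30  /* "N seconds"; value is a placeholder */

    typedef struct circuit_t {
      int is_local;           /* locally initiated vs. relayed */
      time_t last_active;     /* last time a cell was sent or received */
      struct circuit_t *next; /* next circuit on this connection */
    } circuit_t;

    /* Return 1 if any circuit of the given class was active recently. */
    static int
    class_is_active(const circuit_t *circuits, int local, time_t now)
    {
      const circuit_t *c;
      for (c = circuits; c; c = c->next) {
        if (c->is_local == local &&
            now - c->last_active < CLASS_IDLE_TIMEOUT)
          return 1;
      }
      return 0;
    }

    /* The complex mixed handling is needed only when both classes are
     * simultaneously active on one connection. */
    static int
    connection_is_mixed(const circuit_t *circuits, time_t now)
    {
      return class_is_active(circuits, 1, now) &&
             class_is_active(circuits, 0, now);
    }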

  Next, notice that local traffic uses only the entry guards, whereas
  relayed traffic likely doesn't. So if we're a bridge handling just
  a few users, the expected number of overlapping connections would be
  almost zero, and even if we're a full relay the number of overlapping
  connections will be quite small.

  Option 2: build separate TCP connections for local traffic and for
  relayed traffic. In practice this will actually only require a few
  extra TCP connections: we would only need redundant TCP connections
  to at most the number of entry guards in use.
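
  A sketch of the bookkeeping this implies: keep up to two connections
  per entry guard and pick one by the circuit's class. conn_t,
  open_connection(), and guard_conns_t are illustrative names:

    typedef struct conn_t conn_t;  /* opaque connection type */
    extern conn_t *open_connection(const char *address);

    typedef struct guard_conns_t {
      conn_t *local_conn;    /* carries only our own client circuits */
      conn_t *relayed_conn;  /* carries circuits we relay for others */
      const char *address;   /* the entry guard's address */
    } guard_conns_t;

    static conn_t *
    conn_for_class(guard_conns_t *g, int is_local)
    {
      conn_t **slot = is_local ? &g->local_conn : &g->relayed_conn;
      if (!*slot)
        *slot = open_connection(g->address); /* lazily open the extra one */
      return *slot;
    }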

  However, this approach has some drawbacks. First, if the remote side
  wants to extend a circuit to you, how does it know which TCP connection
  to send it on? We would need some extra scheme to label some connections
  "client-only" during construction. Perhaps we could do this by seeing
  whether any circuit was made via CREATE_FAST; but this still opens
  up a race condition where the other side sends a create request
  immediately. The only ways I can imagine to avoid the race entirely
  are to specify our preference in the VERSIONS cell, or to add some
  sort of "nope, not this connection, why don't you try another rather
  than failing" response to create cells, or to forbid create cells on
  connections that you didn't initiate and on which you haven't seen
  any circuit creation requests yet -- this last one would lead to a bit
  more connection bloat but doesn't seem so bad. And we already accept
  this race for the case where directory authorities establish new TCP
  connections periodically to check reachability, and then hope to hang
  up on them soon after. (In any case this issue is moot for bridges,
  since each destination will be one-way with respect to extend requests:
  either receiving extend requests from bridge users or sending extend
  requests to the Tor server, never both.)
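
  The last of those rules is easy to state in code. A sketch, with
  illustrative field names rather than Tor's actual or_connection_t
  layout:

    typedef struct or_conn_t {
      unsigned int we_initiated : 1;      /* did we open this TCP conn? */
      unsigned int peer_created_circ : 1; /* has the peer sent a create? */
    } or_conn_t;

    /* Return 1 if it is safe to relay a create request over this
     * connection without risking the client-only race. */
    static int
    conn_ok_for_create(const or_conn_t *conn)
    {
      if (conn->we_initiated)
        return 1; /* we opened it, so we know which class it carries */
      /* Inbound connection: only use it once the peer has shown it
       * carries relayed circuits; otherwise open a fresh connection,
       * at the cost of a little connection bloat. */
      return conn->peer_created_circ != 0;
    }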

  The second problem with option 2 is that using two TCP connections
  reveals that there are two classes of traffic (and probably quickly
  reveals which is which, based on throughput). Now, it's unclear whether
  this information is already available to the other relay -- he would
  easily be able to tell that some circuits are fast and some are rate
  limited, after all -- but it would be nice to not add even more ways to
  leak that information. Also, it's less clear that an external observer
  already has this information if the circuits are all bundled together,
  and for this case it's worth trying to protect it.

  Option 3: tell the other side about our rate limiting rules. When we
  establish the TCP connection, specify the different policy classes we
  have configured. Each time we extend a circuit, specify which policy
  class that circuit should be part of. Then hope the other side obeys
  our wishes. (If he doesn't, hang up on him.) Besides the design and
  coordination hassles involved in this approach, there's a big problem:
  our rate limiting classes apply to all our connections, not just
  pairwise connections. How does one server we're connected to know how
  much of our bucket has already been spent by another? I could imagine
  a complex and inefficient "ok, now you can send me those two more cells
  that you've got queued" protocol. I'm not sure how else we could do it.

  (Gosh. How could UDP designs possibly be compatible with rate limiting
  with multiple bucket sizes?)
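
  For illustration, even the simplest version of such a credit-grant
  protocol shows the overhead; the message layout and every name here
  are hypothetical:

    #include <stdint.h>

    /* Hypothetical control message: "you may send N more cells of this
     * class before waiting for another grant." */
    typedef struct credit_cell_t {
      uint8_t policy_class;    /* which rate-limiting class */
      uint16_t cells_granted;  /* peer may send this many more cells */
    } credit_cell_t;

    /* Receiver side: split newly refilled bucket tokens evenly among
     * the peers with queued cells for this class.  Every refill thus
     * costs a round of control messages to every waiting peer. */
    static uint16_t
    grant_for_peer(uint32_t tokens_free, unsigned n_peers_waiting)
    {
      if (n_peers_waiting == 0)
        return 0;
      return (uint16_t)(tokens_free / n_peers_waiting / 512); /* 512-byte cells */
    }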

  Option 4: ?

Prognosis:

  Of the above options, only option 2 can actually be built and achieve
  what we want. So that's the plan by default, unless we can come up
  with something better.

  In terms of implementation, it will be easy: just add a bit to
  or_connection_t that specifies priority_traffic (used by the initiator
  of the connection to ignore that connection when relaying a create
  request), and another bit that specifies client_only (used by a
  receiving Tor server so it can ignore that connection when sending
  create requests).
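
  A sketch of how those two bits might sit in or_connection_t; the
  surrounding fields are omitted and the layout is illustrative, not
  actual Tor source:

    typedef struct or_connection_t {
      /* ... existing connection state ... */

      /* Set by the initiator: this connection carries our own
       * (priority) client traffic, so skip it when picking a
       * connection for relaying someone else's create request. */
      unsigned int priority_traffic : 1;

      /* Set by the receiving server: the peer marked this connection
       * client-only, so never pick it when sending create requests. */
      unsigned int client_only : 1;
    } or_connection_t;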

  [Not writing the rest of the proposal until we sort out which option
  we'll take.]