HACKING

0. Intro.

Onion Routing is still very much in development stages. This document
aims to get you started in the right direction if you want to understand
the code, add features, fix bugs, etc.

Read the README file first, so you can get familiar with the basics.
1. The programs.

1.1. "or". This is the main program here. It functions as both a server
and a client, depending on which config file you give it. ...
2. The pieces.

2.1. Routers. Onion routers, as far as the 'or' program is concerned,
are a bunch of data items that are loaded into the router_array when
the program starts. After it's loaded, the router information is never
changed. When a new OR connection is started (see below), the relevant
information is copied from the router struct to the connection struct.
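For concreteness, that pattern looks roughly like the sketch below. The
struct members and the function name are illustrative assumptions, not
the actual definitions in the source:

  #include <stdint.h>
  #include <string.h>

  /* Router entries are loaded once into router_array and treated as
   * read-only afterwards; each new OR connection takes a snapshot of
   * the fields it needs.  Members here are assumptions. */
  typedef struct {
    uint32_t addr;        /* router's IP address         */
    uint16_t port;        /* router's onion routing port */
    uint32_t bandwidth;   /* advertised bandwidth, B/s   */
  } routerinfo_t;

  typedef struct {
    uint32_t addr;        /* copied from the router entry */
    uint16_t port;
    uint32_t bandwidth;
    int      socket;      /* the live TCP connection      */
  } connection_t;

  /* Called when a new OR connection to 'router' is started: copy the
   * relevant information, so router_array is never touched again. */
  void connection_init_from_router(connection_t *conn,
                                   const routerinfo_t *router) {
    memset(conn, 0, sizeof(*conn));
    conn->addr = router->addr;
    conn->port = router->port;
    conn->bandwidth = router->bandwidth;
    conn->socket = -1;    /* not connected yet */
  }
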
2.2. Connections. A connection is a long-standing TCP socket between
nodes. A connection is named based on what it's connected to -- an "OR
connection" has an onion router on the other end, an "OP connection" has
an onion proxy on the other end, an "exit connection" has a website or
other server on the other end, and an "AP connection" has an application
proxy (and thus a user) on the other end.
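A minimal sketch of those four connection types, with names chosen here
for illustration (the actual constants in the code may differ):

  /* What sits on the other end of this connection. */
  typedef enum {
    CONN_TYPE_OR,    /* another onion router                 */
    CONN_TYPE_OP,    /* an onion proxy                       */
    CONN_TYPE_EXIT,  /* a website or other server            */
    CONN_TYPE_AP     /* an application proxy (thus a user)   */
  } connection_type_t;
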
2.3. Circuits. A circuit is a single conversation between two
participants over the onion routing network. One end of the circuit has
an AP connection, and the other end has an exit connection. AP and exit
connections have only one circuit associated with them (and thus these
connection types are closed when the circuit is closed), whereas OP and
OR connections multiplex many circuits at once, and stay standing even
when there are no circuits running over them.
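Roughly, the relationship can be pictured as in the following sketch;
the field names are assumptions, not the real struct layout:

  #include <stdint.h>

  /* Each circuit points at the two connections it runs over.  At an
   * edge node one of these is an AP or exit connection that belongs to
   * this circuit alone; OR and OP connections instead carry a whole
   * list of circuits, told apart by a per-connection circuit id. */
  struct circuit {
    struct connection *p_conn;  /* connection toward the AP/OP side   */
    struct connection *n_conn;  /* connection toward the exit side    */
    uint16_t circ_id;           /* which circuit on a multiplexed conn */
    struct circuit *next;       /* next circuit sharing these conns    */
  };
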
2.4. Cells. Some connections, specifically OR and OP connections, speak
"cells". This means that data over that connection is bundled into
128-byte packets (8 bytes of header and 120 bytes of payload). Each cell
has a type, or "command", which indicates what it's for.
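For example, an in-memory cell might look like the sketch below. The
constants match the sizes given above; the header field names and their
individual sizes are assumptions chosen to add up to 8 bytes:

  #include <stdint.h>

  #define CELL_LEN          128   /* total size of a cell            */
  #define CELL_HEADER_LEN     8
  #define CELL_PAYLOAD_LEN  120   /* CELL_LEN - CELL_HEADER_LEN      */

  /* Illustrative layout only, not the definition in the source tree. */
  typedef struct {
    uint16_t circ_id;                  /* which circuit this cell is on */
    uint8_t  command;                  /* cell type: what it's for      */
    uint8_t  length;                   /* bytes of payload in use       */
    uint32_t seq;                      /* remaining header bytes (assumed) */
    uint8_t  payload[CELL_PAYLOAD_LEN];
  } cell_t;
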
3. Important parameters in the code.

3.1. Role.

4. Robustness features.
4.1. Bandwidth throttling. Each cell-speaking connection has a maximum
bandwidth it can use, as specified in the routers.or file. Bandwidth
throttling occurs on both the sending side and the receiving side. The
sending side sends cells at regularly spaced intervals (e.g., a connection
with a bandwidth of 12800B/s would queue a cell every 10ms). The receiving
side protects against misbehaving servers that send cells more frequently,
by using a simple token bucket:

Each connection has a token bucket with a specified capacity. Tokens are
added to the bucket each second (when the bucket is full, new tokens
are discarded). Each token represents permission to receive one byte
from the network -- to receive a byte, the connection must remove a
token from the bucket. Thus if the bucket is empty, that connection must
wait until more tokens arrive. The number of tokens we add enforces a
long-term average rate of incoming bytes, yet we still permit short-term
bursts above the allowed bandwidth. Currently bucket sizes are set to
ten seconds' worth of traffic.

The bandwidth throttling uses TCP to push back when we stop reading.
We extend it with token buckets to allow more flexibility for traffic
bursts.
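A minimal sketch of that receive-side token bucket, assuming a
once-per-second refill and a helper the read loop calls to learn how
many bytes it may pull off the socket (all names are illustrative):

  #include <stddef.h>

  typedef struct {
    int bandwidth;   /* bytes per second allowed over the long term  */
    int capacity;    /* bucket size, e.g. ten seconds of 'bandwidth' */
    int tokens;      /* tokens (bytes) currently available           */
  } token_bucket_t;

  /* Called once per second: add a second's worth of tokens, discarding
   * whatever would overflow the bucket. */
  void bucket_refill(token_bucket_t *b) {
    b->tokens += b->bandwidth;
    if (b->tokens > b->capacity)
      b->tokens = b->capacity;
  }

  /* Ask how many bytes we may read right now, and spend that many
   * tokens.  When the bucket is empty we read nothing, and TCP pushes
   * back on the sender for us. */
  size_t bucket_take(token_bucket_t *b, size_t want) {
    size_t allowed = want;
    if (b->tokens <= 0)
      return 0;                      /* wait for the next refill */
    if ((size_t)b->tokens < allowed)
      allowed = (size_t)b->tokens;   /* partial read of what's left */
    b->tokens -= (int)allowed;
    return allowed;
  }
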
4.2. Data congestion control. Even with the above bandwidth throttling,
we still need to worry about congestion, either accidental or intentional.
If a lot of people make circuits into the same node, and they all come out
through the same connection, then that connection may become saturated
(be unable to send out data cells as quickly as it wants to). An adversary
can make a 'put' request through the onion routing network to a webserver
he owns, and then refuse to read any of the bytes at the webserver end
of the circuit. These bottlenecks can propagate back through the entire
network, mucking up everything.

To handle this congestion, each circuit starts out with a receive
window at each node of 100 cells -- it is willing to receive at most 100
cells on that circuit. (It handles each direction separately, so that's
really 100 cells forward and 100 cells back.) The edge of the circuit
is willing to create at most 100 cells from data coming from outside the
onion routing network. Nodes in the middle of the circuit will tear down
the circuit if a data cell arrives when the receive window is 0. When
data has traversed the network, the edge node buffers it on its outbuf,
and evaluates whether to respond with a 'sendme' acknowledgement: if its
outbuf is not too full, and its receive window is less than 90, then it
queues a 'sendme' cell backwards in the circuit. Each node that receives
the sendme increments its window by 10 and passes the cell onward.

In practice, all the nodes in the circuit maintain a receive window
close to 100 except the exit node, which stays around 0, periodically
receiving a sendme and reading 10 more data cells from the webserver.
In this way we can use pretty much all of the available bandwidth for
data, but gracefully back off when faced with multiple circuits (a new
sendme arrives only after some cells have traversed the entire network),
stalled network connections, or attacks.

We don't need to reimplement full TCP windows, with sequence numbers,
the ability to drop cells when we're full, etc., because the TCP streams
already guarantee in-order delivery of each cell. Rather than trying
to build some sort of TCP-on-TCP scheme, we implement this minimal data
congestion control; so far it's enough.
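The window bookkeeping for one direction of one circuit reduces to a
few small steps, sketched below with assumed names and constants
(100-cell windows, sendmes worth 10 cells, as described above):

  #include <stdbool.h>

  #define CIRCWINDOW_START     100   /* initial receive window           */
  #define CIRCWINDOW_INCREMENT  10   /* cells granted back by one sendme */

  typedef struct {
    int receive_window;   /* how many more data cells we will accept */
  } circuit_dir_t;

  /* A data cell arrived on this circuit.  A node tears the circuit
   * down if the window is already 0; otherwise it spends one slot. */
  bool circuit_receive_data(circuit_dir_t *c) {
    if (c->receive_window <= 0)
      return false;            /* protocol violation: close the circuit */
    c->receive_window--;
    return true;
  }

  /* At the edge, after buffering delivered data on the outbuf: if the
   * outbuf is not too full and the window has dropped below 90, queue
   * a sendme backwards and take back 10 slots locally so we don't ask
   * again until 10 more cells have been delivered. */
  bool edge_consider_sending_sendme(circuit_dir_t *c, bool outbuf_too_full) {
    if (outbuf_too_full ||
        c->receive_window >= CIRCWINDOW_START - CIRCWINDOW_INCREMENT)
      return false;            /* nothing to acknowledge yet */
    c->receive_window += CIRCWINDOW_INCREMENT;
    return true;               /* caller queues the sendme cell */
  }

  /* A node that receives a sendme bumps its window by 10 and (if it is
   * not the far edge) passes the cell onward. */
  void circuit_receive_sendme(circuit_dir_t *c) {
    c->receive_window += CIRCWINDOW_INCREMENT;
  }
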
4.3. Router twins. In many cases when we ask for a router with a given
address and port, we really mean a router who knows a given key. Router
twins are two or more routers that all share the same private key. We thus
give routers extra flexibility in choosing the next hop in the circuit: if
some of the twins are down or slow, a router can choose a more available
one. Currently the code tries for the primary router first, and if it's
down, chooses the first available twin.
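A sketch of that selection rule, with an assumed router structure and
key comparison (the real code works off router_array and its own key
representation):

  #include <stddef.h>
  #include <string.h>

  typedef struct {
    char public_key[128];   /* placeholder for the router's key        */
    int  is_up;             /* do we believe this router is reachable? */
  } router_info_t;

  /* Return the preferred router if it's up; otherwise return the first
   * available twin, i.e. another router sharing the same key. */
  const router_info_t *
  choose_twin(const router_info_t *preferred,
              const router_info_t *routers, size_t n_routers) {
    size_t i;
    if (preferred->is_up)
      return preferred;
    for (i = 0; i < n_routers; i++) {
      const router_info_t *r = &routers[i];
      if (r != preferred && r->is_up &&
          memcmp(r->public_key, preferred->public_key,
                 sizeof(r->public_key)) == 0)
        return r;           /* first available twin */
    }
    return NULL;            /* nobody holding this key is reachable */
  }
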