@@ -95,11 +95,26 @@

3.4. Restricted topology: benefits and roadmap.

- As the Tor network continues to grow, we will make design changes
- to the network topology so that each node does not need to maintain
- connections to an unbounded number of other nodes.
-
- A special case here is the social network.
+ As the Tor network continues to grow, we will need to make design
+ changes to the network topology so that each node does not need
+ to maintain connections to an unbounded number of other nodes. For
+ anonymity's sake, we're going to partition the network such that all
+ the nodes have the same belief about the divisions and each node is
+ in only one partition. (The alternative is that every user fetches
+ his own random subset of the overall node list -- this is bad because
+ of intersection attacks.)
+
+ Therefore the "network horizon" for each user will stay bounded,
+ which helps against the above issues in 3.2 and 3.3.
+
+ It could be that the core of long-lived servers will all get to know
+ each other, and so the critical point that decides whether you get
+ good service is whether the core likes you. Or perhaps it will turn
+ out to work some other way.
+
+ A special case here is the social network, where the network isn't
+ partitioned randomly but instead based on some external properties.
+ More on this later.

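The "same belief about the divisions, each node in only one partition" property can be had by deriving the partition from each node's identity key, so every party computes the same division without coordination. A minimal sketch (the function name and partition count are made up for illustration, not part of any actual design here):

```python
import hashlib

NUM_PARTITIONS = 16  # hypothetical; would be tuned to network size

def partition_of(identity_key: bytes, num_partitions: int = NUM_PARTITIONS) -> int:
    """Map a node's identity key to exactly one partition.

    Every observer hashing the same key gets the same answer, so all
    nodes share the same belief about the divisions, and no node can
    appear in more than one partition.
    """
    digest = hashlib.sha256(identity_key).digest()
    return int.from_bytes(digest[:8], "big") % num_partitions
```

Because the assignment is a pure function of the key, a user's "network horizon" is just the one partition his candidate nodes hash into, and no per-user random subset (with its intersection-attack risk) is needed.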
3.5. Profit-maximizing vs. Altruism.

@@ -118,6 +133,25 @@
for an incentive scheme so effective that it produces thousands of
new servers.

+3.6. What part of the node's performance do you measure?
+
+ We keep referring to having a node measure how well the other nodes
+ receive bytes. But many transactions in Tor involve fetching lots of
+ bytes and not sending very many. So it seems that we want to turn
+ things around: we need to measure how quickly a node can _send_
+ us bytes, and then only send it bytes in proportion to that.
+
+ There's an obvious attack though: a sneaky user could simply connect
+ to a node and send some traffic through it. Voila, he has performed
+ for the network. This is no good. The first fix is that we only count
+ if you're sending bytes "backwards" in the circuit. Now the sneaky
+ user needs to construct a circuit such that his node appears later
+ in the circuit, and then send some bytes back quickly.
+
+ Maybe that complexity is sufficient to deter most lazy users. Or
+ maybe it's an argument in favor of a more penny-counting reputation
+ approach.
+
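The "measure how quickly a node sends us bytes, then ration what we send it" idea above can be sketched as a toy bookkeeping class (all names are hypothetical, and this ignores the backwards-in-the-circuit accounting that the anti-gaming fix requires):

```python
class PeerMeter:
    """Track how quickly each peer has sent us bytes, and ration the
    bytes we send back in proportion to that observed rate."""

    def __init__(self):
        # peer -> (total bytes received, seconds of observation)
        self.received = {}

    def note_received(self, peer: str, nbytes: int, seconds: float) -> None:
        b, s = self.received.get(peer, (0, 0.0))
        self.received[peer] = (b + nbytes, s + seconds)

    def observed_rate(self, peer: str) -> float:
        """Average bytes/second this peer has sent us; 0 if unknown."""
        b, s = self.received.get(peer, (0, 0.0))
        return b / s if s > 0 else 0.0

    def send_budget(self, peer: str, interval: float) -> int:
        # Only send bytes in proportion to how fast the peer has sent
        # us bytes; a peer we've never measured gets nothing extra.
        return int(self.observed_rate(peer) * interval)

meter = PeerMeter()
meter.note_received("node-X", 50_000, 10.0)   # 5 KB/s observed
print(meter.send_budget("node-X", 2.0))       # -> 10000
```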

4. Sample designs.

4.1. Two classes of service for circuits.

@@ -128,8 +162,64 @@
4.3. Treat all the traffic from the node with the same service;
soft reputation system.

+ Rather than a guaranteed system with accounting (as 4.1 and 4.2),
+ we instead try for a best-effort system. All bytes are in the same
+ class of service. You keep track of other Tors by key, and give them
+ service proportional to the service they have given you. That is, in
+ the past when you have tried to push bytes through them, you track the
+ number of bytes and the average bandwidth, and use that to weight the
+ priority of their connections if they try to push bytes through you.
+
+ Now you're going to get minimum service if you don't ever push bytes
+ for other people, and you get increasingly improved service the more
+ active you are. We should have memories fade over time (we'll have
+ to tune that, which could be quite hard).
+
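One way to realize "service proportional to past service, with fading memory" is an exponentially-decayed per-key score (a sketch only: the class name, half-life, and floor are invented, and tuning the decay is exactly the hard part flagged above):

```python
class SoftReputation:
    """Per-key best-effort reputation: bytes a peer has pushed for us,
    decayed over time, become a scheduling weight for its connections."""

    def __init__(self, half_life: float = 3600.0, floor: float = 1.0):
        self.half_life = half_life  # seconds for a memory to fade by half
        self.floor = floor          # minimum weight: everyone gets some service
        self.score = {}             # peer key -> (decayed byte count, last update)

    def _decayed(self, peer: str, now: float) -> float:
        bytes_, last = self.score.get(peer, (0.0, now))
        return bytes_ * 0.5 ** ((now - last) / self.half_life)

    def record_service(self, peer: str, nbytes: int, now: float) -> None:
        """Peer pushed nbytes for us; add it to its fading score."""
        self.score[peer] = (self._decayed(peer, now) + nbytes, now)

    def priority_weight(self, peer: str, now: float) -> float:
        # A brand-new identity starts at the floor, which is what makes
        # Sybil attacks pointless.
        return self.floor + self._decayed(peer, now)

rep = SoftReputation()
rep.record_service("peer-A", 1_000_000, now=0.0)
# One half-life later, peer-A's credit has faded by half.
print(rep.priority_weight("peer-A", now=3600.0))  # -> 500001.0
```

Note this tracks only what the peer has done for us, not a ratio against what we've done for it, matching the second Pro below.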
+ Pro: Sybil attacks are pointless because new identities get lowest
+ priority.
+
+ Pro: Smoothly handles periods of both low and high network load. Rather
+ than keeping track of the ratio/difference between what he's done for
+ you and what you've done for him, simply keep track of what he's done
+ for you, and give him priority based on that.
+
+ Based on 3.3 above, it seems we should reward all the nodes in our
+ path, not just the first one -- otherwise the node can provide good
+ service only to its guards. On the other hand, there might be a
+ second-order effect where you want nodes to like you so that *when*
+ your guards choose you for a circuit, they'll be able to get good
+ performance. This tradeoff needs more simulation/analysis.
+
+ This approach focuses on incenting people to relay traffic, but it
+ doesn't do much for incenting them to allow exits. It may help in
+ one way though: if there are few exits, then they will attract a
+ lot of use, so lots of people will like them, so when they try to
+ use the network they will find their first hop to be particularly
+ pleasant. After that they're like the rest of the world though.
+
+ Pro: this is a pretty easy design to add; and it can be phased in
+ incrementally simply by having new nodes behave differently.
+

4.4. Centralized opinions from the reputation servers.

+ Have a set of official measurers who spot-check servers from the
+ directory to see if they really do offer roughly the bandwidth
+ they advertise. Include these observations in the directory. (For
+ simplicity, the directory servers could be the measurers.) Then Tor
+ servers weight priority for other servers depending on advertised
+ bandwidth, giving particularly low priority to connections not
+ listed or that failed their spot-checks. The spot-checking can be
+ done anonymously, because hey, we have an anonymity network.
+
+ We could also reward exit nodes by giving them better priority, but
+ as above this will only affect their first hop. Another problem
+ is that it's darn hard to spot-check whether a server allows exits
+ to all the pieces of the Internet that it claims to. A last problem
+ is that since directory servers will be doing their tests directly
+ (easy to detect) or indirectly (through other Tor servers), a server
+ knows it can get away with giving poor performance to clients that
+ aren't listed in the directory.
+
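The spot-check result could feed into priority weighting something like this (a sketch with invented thresholds and names; how often to measure and how much slack to allow are open policy questions):

```python
from typing import Optional

def spotcheck_weight(advertised_bw: float,
                     measured_bw: Optional[float],
                     listed: bool,
                     slack: float = 0.5) -> float:
    """Priority weight for a server based on directory spot-checks.

    Unlisted servers, and servers whose measured bandwidth falls far
    short of what they advertise, get particularly low priority;
    servers that pass are weighted by their advertised bandwidth.
    """
    LOW_PRIORITY = 0.1  # invented constant for unlisted/failed servers
    if not listed:
        return LOW_PRIORITY
    if measured_bw is not None and measured_bw < slack * advertised_bw:
        return LOW_PRIORITY  # failed the spot-check
    return advertised_bw     # roughly as advertised: weight by it

print(spotcheck_weight(100.0, 90.0, listed=True))   # -> 100.0
print(spotcheck_weight(100.0, 20.0, listed=True))   # -> 0.1
print(spotcheck_weight(100.0, None, listed=False))  # -> 0.1
```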

5. Types of attacks.

5.1. Anonymity attacks: