
Filename: 108-mtbf-based-stability.txt
Title: Base "Stable" Flag on Mean Time Between Failures
Version: $Revision$
Last-Modified: $Date$
Author: Nick Mathewson
Created: 10-Mar-2007
Status: Closed
Implemented-In: 0.2.0.x
Overview:

   This document proposes that we change how directory authorities set the
   Stable flag: instead of inspecting a router's declared Uptime, they
   should use the mean time between failures that they themselves observe
   for the router.
Motivation:

   Clients prefer nodes that the authorities call Stable.  This flag is (as
   of 0.2.0.0-alpha-dev) set entirely based on the node's declared value for
   uptime.  This creates an opportunity for malicious nodes to declare
   falsely high uptimes in order to get more traffic.
Spec changes:

   Replace the current rule for setting the Stable flag with:

      "Stable" -- A router is 'Stable' if it is active and its observed
      Stability for the past month is at or above the median Stability for
      active routers.  Routers are never called Stable if they are running
      a version of Tor known to drop circuits stupidly.  (0.1.1.10-alpha
      through 0.1.1.16-rc are stupid this way.)

   Stability shall be defined as the weighted mean length of the runs
   observed by a given directory authority.  A run begins when an authority
   decides that the server is Running, and ends when the authority decides
   that the server is not Running.  In-progress runs are counted when
   measuring Stability.  When calculating the mean, runs are weighted by
   $\alpha ^ t$, where $t$ is the time elapsed since the end of the run and
   $0 < \alpha < 1$.  Time when an authority is down does not count toward
   the length of a run.
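The weighted-mean definition above can be sketched as follows.  This is an illustrative sketch, not the authorities' implementation: the function name, the choice of days as the time unit, and the value of `ALPHA` are all assumptions.

```python
# Sketch of the Stability metric defined above: the mean of observed run
# lengths, each weighted by alpha^t, where t is the time (in days, for
# this sketch) since the run ended.  An in-progress run has end == now,
# so it carries weight alpha^0 = 1.
ALPHA = 0.9  # illustrative; the proposal only requires 0 < alpha < 1

def stability(runs, now):
    """runs: list of (start, end) times in days.  Returns the weighted
    mean run length, or 0.0 if there are no runs."""
    total_weighted_length = 0.0
    total_weight = 0.0
    for start, end in runs:
        weight = ALPHA ** (now - end)          # decay by time since run ended
        total_weighted_length += weight * (end - start)
        total_weight += weight
    if total_weight == 0.0:
        return 0.0
    return total_weighted_length / total_weight
```

Note that the weighting only discounts *old* runs relative to recent ones: a router with one long, recent run scores higher than one whose long runs all ended far in the past.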
Rejected Alternative:

   "A router's Stability shall be defined as the sum of $\alpha ^ d$ for
   every $d$ such that the router was considered reachable for the entire
   day $d$ days ago."

   This allows a simpler implementation: every day, we multiply
   yesterday's Stability by alpha, and if the router was observed to be
   available every time we looked today, we add 1.

   Instead of "day", we could pick an arbitrary time unit.  We should
   pick alpha to be high enough that long-term stability counts, but low
   enough that the distant past is eventually forgotten.  Something
   between .8 and .95 seems right.

   (By requiring that routers be up for an entire day to get their
   stability increased, instead of counting fractions of a day, we
   capture the notion that stability is more like "probability of
   staying up for the next hour" than it is like "probability of being
   up at some randomly chosen time over the next hour."  The former
   notion of stability is far more relevant for long-lived circuits.)
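The simpler implementation this alternative allows can be sketched in one line of update logic.  Function and variable names are illustrative, with alpha = 0.9 picked from the suggested range.

```python
# Sketch of the rejected alternative's daily update rule: decay
# yesterday's score by alpha, then add 1 if the router was reachable at
# every probe today.
ALPHA = 0.9  # illustrative; the proposal suggests something in [.8, .95]

def daily_update(stability, up_all_day):
    return ALPHA * stability + (1.0 if up_all_day else 0.0)
```

Under this rule a router that stays up every day converges toward 1 / (1 - alpha), so with alpha = 0.9 the maximum attainable score is 10.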
Limitations:

   Authorities can have false positives and false negatives when trying to
   tell whether a router is up or down.  So long as these aren't terribly
   wrong, and so long as they aren't significantly biased, we should be able
   to use them to estimate stability pretty well.

   Probing approaches like the above could miss short incidents of
   downtime.  If we used the router's declared uptime, we could detect
   these: but doing so would penalize routers who reported their uptime
   accurately.
Implementation:

   For now, the easiest way to store this information at authorities
   would probably be in some kind of periodically flushed flat file.
   Later, we could move to Berkeley DB or something if we really had to.

   For each router, an authority will need to store:
      - The router ID.
      - Whether the router is up.
      - The time when the current run started, if the router is up.
      - The weighted sum length of all previous runs.
      - The time at which the weighted sum length was last weighted down.
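The per-router state listed above might look like the record below, serialized as one line of the flat file.  This is a sketch under assumptions: the field names, types, and line format are all hypothetical, not part of the proposal.

```python
# Hypothetical per-router record for the authority's flat file.  The five
# fields mirror the list above; the one-line serialization format is an
# assumption for illustration.
from dataclasses import dataclass
from typing import Optional

@dataclass
class RouterHistory:
    router_id: str               # identity fingerprint
    is_up: bool                  # whether the router is currently Running
    run_started: Optional[int]   # start time of the current run, if up
    weighted_run_sum: float      # weighted sum length of all previous runs
    last_weighted_at: int        # when the sum was last weighted down

    def to_line(self) -> str:
        """Serialize as one whitespace-separated line; '-' marks no run."""
        return " ".join(str(f) for f in (
            self.router_id,
            int(self.is_up),
            self.run_started if self.run_started is not None else "-",
            self.weighted_run_sum,
            self.last_weighted_at))
```

Storing the time of the last weighting lets an authority apply the $\alpha ^ t$ decay lazily, only when the record is next read or updated, rather than on a fixed schedule.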
   Authorities should probe at random intervals to test whether servers
   are running.