Writing tests for Tor: an incomplete guide
==========================================

Tor uses a variety of testing frameworks and methodologies to try to
keep from introducing bugs. The major ones are:

   1. Unit tests written in C and shipped with the Tor distribution.

   2. Integration tests written in Python and shipped with the Tor
      distribution.

   3. Integration tests written in Python and shipped with the Stem
      library. Some of these use the Tor controller protocol.

   4. System tests written in Python and sh, and shipped with the
      Chutney package. These work by running many instances of Tor
      locally, and sending traffic through them.

   5. The Shadow network simulator.

How to run these tests
----------------------

=== The easy version

To run all the tests that come bundled with Tor, run "make check".

To run the Stem tests as well, fetch stem from the git repository,
set STEM_SOURCE_DIR to the checkout, and run "make test-stem".

To run the Chutney tests as well, fetch chutney from the git repository,
set CHUTNEY_PATH to the checkout, and run "make test-network".

To run all of the above, run "make test-full".

To run all of the above, plus tests that require a working connection to the
internet, run "make test-full-online".

=== Running particular subtests

The Tor unit tests are divided into separate programs and a couple of
bundled unit test programs.

Separate programs are easy. For example, to run the memwipe tests in
isolation, you just run ./src/test/test-memwipe .

To run tests within the unit test programs, you can specify the name
of the test. The string ".." can be used as a wildcard at the end of the
test name. For example, to run all the cell format tests, enter
"./src/test/test cellfmt/..".

Many tests that need to mess with global state run in forked subprocesses in
order to keep from contaminating one another. But when debugging a failing
test, you might want to run it without forking a subprocess. To do so, use
the "--no-fork" option with a single test. (If you specify it along with
multiple tests, they might interfere.)

You can turn on logging in the unit tests by passing one of "--debug",
"--info", "--notice", or "--warn". By default only errors are displayed.

Unit tests are divided into "./src/test/test" and "./src/test/test-slow".
The former are those that should finish in a few seconds; the latter tend to
take more time, and may include CPU-intensive operations, deliberate delays,
and stuff like that.

=== Finding test coverage

Test coverage is a measurement of which lines your tests actually visit.

When you configure Tor with the --enable-coverage option, it should
build with support for coverage in the unit tests, and in a special
"tor-cov" binary.

Then, run the tests you'd like to see coverage from. If you have old
coverage output, you may need to run "reset-gcov" first.

Now you've got a bunch of files scattered around your build directories
called "*.gcda". In order to extract the coverage output from them, make a
temporary directory for them and run "./scripts/test/coverage ${TMPDIR}",
where ${TMPDIR} is the temporary directory you made. This will create a
".gcov" file for each source file under test, containing that file's source
annotated with the number of times the tests hit each line. (You'll need to
have gcov installed.)

You can get a summary of the test coverage for each file by running
"./scripts/test/cov-display ${TMPDIR}/*" . Each line lists the file's name,
the number of uncovered lines, the number of covered lines, and the
coverage percentage.

For a summary of the test coverage for each _function_, run
"./scripts/test/cov-display -f ${TMPDIR}/*" .

=== Comparing test coverage

Sometimes it's useful to compare test coverage for a branch you're writing to
coverage from another branch (such as git master, for example). But you
can't run "diff" on the two coverage outputs directly, since the actual
number of times each line is executed isn't so important, and isn't wholly
deterministic.

Instead, follow the instructions above for each branch, creating a separate
temporary directory for each. Then, run "./scripts/test/cov-diff ${D1}
${D2}", where D1 and D2 are the directories you want to compare. This will
produce a diff of the two directories, with all lines normalized to be either
covered or uncovered.

To count new or modified uncovered lines in D2, you can run:

    ./scripts/test/cov-diff ${D1} ${D2} | grep '^+ *\#' | wc -l

What kinds of test should I write?
----------------------------------

Integration testing and unit testing are complementary: it's probably a
good idea to make sure that your code is hit by both if you can.

If your code is very low-level, and its behavior is easily described in
terms of a relation between inputs and outputs, or a set of state
transitions, then it's a natural fit for unit tests. (If not, please
consider refactoring it until most of it _is_ a good fit for unit
tests!)

If your code adds new externally visible functionality to Tor, it would
be great to have a test for that functionality. That's where
integration tests usually come in.

Unit and regression tests: Does this function do what it's supposed to?
------------------------------------------------------------------------

Most of Tor's unit tests are made using the "tinytest" testing framework.
You can see a guide to using it in the tinytest manual at

    https://github.com/nmathewson/tinytest/blob/master/tinytest-manual.md

To add a new test of this kind, either edit an existing C file in src/test/,
or create a new C file there. Each test is a single function that must
be indexed in the table at the end of the file. We use the label "done:" as
a cleanup point for all test functions.

(Make sure you read tinytest-manual.md before proceeding.)

I use the terms "unit test" and "regression test" very sloppily here.

=== A simple example

Here's an example of a test function for a simple function in util.c:

    static void
    test_util_writepid(void *arg)
    {
      (void) arg;

      char *contents = NULL;
      const char *fname = get_fname("tmp_pid");
      unsigned long pid;
      char c;

      write_pidfile(fname);
      contents = read_file_to_str(fname, 0, NULL);
      tt_assert(contents);

      int n = sscanf(contents, "%lu\n%c", &pid, &c);
      tt_int_op(n, OP_EQ, 1);
      tt_int_op(pid, OP_EQ, getpid());

     done:
      tor_free(contents);
    }

This should look pretty familiar to you if you've read the tinytest
manual. One thing to note here is that we use the testing-specific
function "get_fname" to generate a file with respect to a temporary
directory that the tests use. You don't need to delete the file;
it will get removed when the tests are done.

Also note our use of OP_EQ instead of == in the tt_int_op() calls.
We define OP_* macros to use instead of the binary comparison
operators so that analysis tools can more easily parse our code.
(Coccinelle really hates to see == used as a macro argument.)

Finally, remember that by convention, all *_free() functions that
Tor defines are defined to accept NULL harmlessly. Thus, you don't
need to say "if (contents)" in the cleanup block.
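
A test function like the one above won't run until it is listed in the test
table at the end of its file. The sketch below shows roughly what such an
entry looks like; it is illustrative rather than copied from Tor's sources,
since individual test files often wrap this in their own helper macros. The
TT_FORK flag asks the test runner to fork a subprocess for the test, which
is the usual way to keep a test from contaminating global state (see the
"--no-fork" discussion above).

    /* Hypothetical test table entry -- a sketch of tinytest registration,
     * not copied verbatim from test_util.c. */
    struct testcase_t util_tests[] = {
      /* name, function, flags, setup, setup data */
      { "writepid", test_util_writepid, TT_FORK, NULL, NULL },
      END_OF_TESTCASES
    };

The test group itself (for example, "util/") is then listed in the
testgroup_t array in src/test/test.c, which is what lets
"./src/test/test util/.." find it.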

=== Exposing static functions for testing

Sometimes you need to test a function, but you don't want to expose
it outside its usual module.

To support this, Tor's build system compiles a testing version of
each module, with extra identifiers exposed. If you want to
declare a function as static but available for testing, use the
macro "STATIC" instead of "static". Then, make sure there's a
macro-protected declaration of the function in the module's header.

For example, crypto_curve25519.h contains:

    #ifdef CRYPTO_CURVE25519_PRIVATE
    STATIC int curve25519_impl(uint8_t *output, const uint8_t *secret,
                               const uint8_t *basepoint);
    #endif

The crypto_curve25519.c file and the test_crypto.c file both define
CRYPTO_CURVE25519_PRIVATE, so they can see this declaration.
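
In the test file, the pattern looks something like the sketch below. The
test body is made up for illustration (it is not the real test_crypto.c
code, and the assumption that curve25519_impl() returns 0 on success is
just that, an assumption); the important part is defining the *_PRIVATE
macro before including the header.

    /* Illustrative only -- not the real test_crypto.c. */
    #define CRYPTO_CURVE25519_PRIVATE
    #include "crypto_curve25519.h"

    static void
    test_crypto_curve25519_impl(void *arg)
    {
      uint8_t out[32], secret[32], basepoint[32] = {9};
      int r;
      (void) arg;
      memset(secret, 42, sizeof(secret));

      /* Because CRYPTO_CURVE25519_PRIVATE is defined above, the STATIC
       * declaration of curve25519_impl() is visible here. */
      r = curve25519_impl(out, secret, basepoint);
      tt_int_op(r, OP_EQ, 0); /* assumes 0 indicates success */

     done:
      ;
    }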

=== Mock functions for testing in isolation

Often we want to test that a function works right, but the function to
be tested depends on other functions whose behavior is hard to observe,
or which require a working Tor network, or something like that.

To write tests for this case, you can replace the underlying functions
with testing stubs while your unit test is running. You need to declare
the underlying function as 'mockable', as follows:

    MOCK_DECL(returntype, functionname, (argument list));

and then later implement it as:

    MOCK_IMPL(returntype, functionname, (argument list))
    {
      /* implementation here */
    }

For example, if you had a 'connect to remote server' function, you could
declare it as:

    MOCK_DECL(int, connect_to_remote, (const char *name, status_t *status));

When you declare a function this way, it will be declared as normal in
regular builds, but when the module is built for testing, it is declared
as a function pointer initialized to the actual implementation.

In your tests, if you want to override the function with a temporary
replacement, you say:

    MOCK(functionname, replacement_function_name);

And later, you can restore the original function with:

    UNMOCK(functionname);

For more information, see the definitions of this mocking logic in
testsupport.h.
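
Putting those pieces together, a test that stubs out the hypothetical
connect_to_remote() function from the example above might look roughly like
this (connect_to_remote, status_t, and the code under test are all
illustrative, carried over from that example):

    /* A stub that pretends every connection attempt succeeds. */
    static int
    mock_connect_to_remote__succeed(const char *name, status_t *status)
    {
      (void) name;
      (void) status;
      return 0; /* assume 0 means success in this sketch */
    }

    static void
    test_something_that_connects(void *arg)
    {
      (void) arg;

      MOCK(connect_to_remote, mock_connect_to_remote__succeed);

      /* ... call the code under test here; it will invoke the stub above
       * instead of making a real connection ... */

     done:
      UNMOCK(connect_to_remote);
    }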

=== Okay but what should my tests actually do?

We talk above about "test coverage" -- making sure that your tests visit
every line of code, or every branch of code. But visiting the code isn't
enough: we want to verify that it's correct.

So when writing tests, try to make tests that should pass with any correct
implementation of the code, and that should fail if the code doesn't do what
it's supposed to do.

You can write "black-box" tests or "glass-box" tests. A black-box test is
one that you write without looking at the structure of the function. A
glass-box one is one you implement while looking at how the function is
implemented.

In either case, make sure to consider common cases *and* edge cases; success
cases and failure cases.

For example, consider testing this function:

    /** Remove all elements E from sl such that E==element. Preserve
     * the order of any elements before E, but elements after E can be
     * rearranged.
     */
    void smartlist_remove(smartlist_t *sl, const void *element);

In order to test it well, you should write tests for at least all of the
following cases. (These would be black-box tests, since we're only looking
at the declared behavior of the function.)

   * Remove an element that is in the smartlist.
   * Remove an element that is not in the smartlist.
   * Remove an element that appears in the smartlist more than once.

And your tests should verify that it behaves correctly. At minimum, you
should test:

   * That other elements before E are in the same order after you call the
     function.
   * That the target element is really removed.
   * That _only_ the target element is removed.

When you consider edge cases, you might try:

   * Remove an element from an empty list.
   * Remove an element from a singleton list containing that element.
   * Remove an element from a list containing several instances of that
     element, and nothing else.

Now let's look at the implementation:

    void
    smartlist_remove(smartlist_t *sl, const void *element)
    {
      int i;
      if (element == NULL)
        return;
      for (i=0; i < sl->num_used; i++)
        if (sl->list[i] == element) {
          sl->list[i] = sl->list[--sl->num_used]; /* swap with the end */
          i--; /* so we process the new i'th element */
          sl->list[sl->num_used] = NULL;
        }
    }

Based on the implementation, we now see three more edge cases to test:

   * Removing NULL from the list.
   * Removing an element from the end of the list.
   * Removing an element from a position other than the end of the list.
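
As a concrete illustration, a black-box test covering a few of the cases
above might look roughly like the sketch below. This is not a test from
Tor's own test suite; it just uses the ordinary smartlist API
(smartlist_new(), smartlist_add(), smartlist_len(), smartlist_get(),
smartlist_free()) to check the documented behavior.

    static void
    test_container_smartlist_remove(void *arg)
    {
      smartlist_t *sl = smartlist_new();
      char *a = tor_strdup("a"), *b = tor_strdup("b"), *c = tor_strdup("c");
      (void) arg;

      /* Removing from an empty list should be a harmless no-op. */
      smartlist_remove(sl, a);
      tt_int_op(smartlist_len(sl), OP_EQ, 0);

      /* "b" appears twice; both copies should go, and only those. */
      smartlist_add(sl, a);
      smartlist_add(sl, b);
      smartlist_add(sl, c);
      smartlist_add(sl, b);
      smartlist_remove(sl, b);

      tt_int_op(smartlist_len(sl), OP_EQ, 2);
      /* "a" came before every removed element, so it should keep its
       * position; with only two survivors, "c" must be the other one. */
      tt_ptr_op(smartlist_get(sl, 0), OP_EQ, a);
      tt_ptr_op(smartlist_get(sl, 1), OP_EQ, c);

     done:
      smartlist_free(sl);
      tor_free(a);
      tor_free(b);
      tor_free(c);
    }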

=== What should my tests NOT do?

Tests shouldn't require a network connection.

Whenever possible, tests shouldn't take more than a second. Put the test
into "./src/test/test-slow" if it genuinely needs more time than that.

Tests should not alter global state unless they run with TT_FORK: tests
should not require other tests to be run before or after them.

Tests should not leak memory or other resources.

When possible, tests should not be over-fit to the implementation. That is,
the test should verify that the documented behavior is implemented, but
should not break if other permissible behavior is later implemented.

=== Advanced techniques: Namespaces

Sometimes, when you're doing a lot of mocking at once, it's convenient to
isolate your identifiers within a single namespace. If this were C++, we'd
already have namespaces, but for C, we do the best we can with macros and
token-pasting.

We have some macros defined for this purpose in src/test/test.h. To use
them, you define NS_MODULE to a prefix to be used for your identifiers, and
then use other macros in place of identifier names. See src/test/test.h for
more documentation.

Integration tests: Calling Tor from the outside
-----------------------------------------------

Some tests need to invoke Tor from the outside, and shouldn't run from the
same process as the Tor test program. Reasons for doing this might include:

   * Testing the actual behavior of Tor when run from the command line.
   * Testing that a crash-handler correctly logs a stack trace.
   * Verifying that violating a sandbox or capability requirement will
     actually crash the program.
   * Needing to run as root in order to test capability inheritance or
     user switching.

To add one of these, you generally want a new C program in src/test. Add it
to TESTS and noinst_PROGRAMS if it can run on its own and return success or
failure. If it needs to be invoked multiple times, or it needs to be
wrapped, add a new shell script to TESTS, and the new program to
noinst_PROGRAMS. If you need access to any environment variable from the
makefile (e.g., ${PYTHON} for a Python interpreter), then make sure that the
makefile exports them.
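
The skeleton of such a program is just an ordinary C main() that exits with
zero on success and nonzero on failure, so the automake test harness can
interpret the result. The sketch below is entirely hypothetical (the file
name and the check it performs are made up); it also shows reading an
environment variable exported by the makefile, and uses exit status 77,
which the automake harness treats as "skipped".

    /* src/test/test-example-standalone.c -- hypothetical example. */
    #include <stdio.h>
    #include <stdlib.h>

    int
    main(int argc, char **argv)
    {
      /* Read a variable the makefile is expected to export for us. */
      const char *python = getenv("PYTHON");
      (void) argc;
      (void) argv;

      if (python == NULL || *python == '\0') {
        fprintf(stderr, "PYTHON was not exported by the makefile\n");
        return 77; /* automake convention: exit 77 means "skipped" */
      }

      /* ... exercise whatever behavior this program exists to check ... */

      return 0; /* zero exit status tells the harness the test passed */
    }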

Writing integration tests with Stem
-----------------------------------

The 'stem' library includes extensive unit tests for the Tor controller
protocol.

For more information on writing new tests for stem, have a look around
the test/* directory in stem, and find a good example to emulate. You
might want to start with

    https://gitweb.torproject.org/stem.git/tree/test/integ/control/controller.py

to improve Tor's test coverage.

You can run stem tests from tor with "make test-stem", or see
https://stem.torproject.org/faq.html#how-do-i-run-the-tests .

System testing with Chutney
---------------------------

The 'chutney' program configures and launches a set of Tor relays,
authorities, and clients on your local host. It has a 'test network'
functionality to send traffic through them and verify that the traffic
arrives correctly.

You can write new test networks by adding them to 'networks'. To add
them to Tor's tests, add them to the test-network or test-network-all
targets in Makefile.am.

(Adding new kinds of program to chutney will still require hacking the
code.)