Writing tests for Tor: an incomplete guide
==========================================

Tor uses a variety of testing frameworks and methodologies to try to
keep from introducing bugs.  The major ones are:

   1. Unit tests written in C and shipped with the Tor distribution.

   2. Integration tests written in Python and shipped with the Tor
      distribution.

   3. Integration tests written in Python and shipped with the Stem
      library.  Some of these use the Tor controller protocol.

   4. System tests written in Python and sh, and shipped with the
      Chutney package.  These work by running many instances of Tor
      locally, and sending traffic through them.

   5. The Shadow network simulator.

How to run these tests
----------------------

=== The easy version

To run all the tests that come bundled with Tor, run "make check".

To run the Stem tests as well, fetch stem from the git repository,
set STEM_SOURCE_DIR to the checkout, and run "make test-stem".

To run the Chutney tests as well, fetch chutney from the git repository,
set CHUTNEY_PATH to the checkout, and run "make test-network".

To run all of the above, run "make test-full".

To run all of the above, plus tests that require a working connection to the
internet, run "make test-full-online".

=== Running particular subtests

The Tor unit tests are divided into separate programs and a couple of
bundled unit test programs.

Separate programs are easy.  For example, to run the memwipe tests in
isolation, you just run "./src/test/test-memwipe".

To run tests within the unit test programs, you can specify the name
of the test.  The string ".." can be used as a wildcard at the end of the
test name.  For example, to run all the cell format tests, enter
"./src/test/test cellfmt/..".

Many tests that need to mess with global state run in forked subprocesses in
order to keep from contaminating one another.  But when debugging a failing
test, you might want to run it without forking a subprocess.  To do so, use
the "--no-fork" option with a single test.  (If you specify it along with
multiple tests, they might interfere.)

You can turn on logging in the unit tests by passing one of "--debug",
"--info", "--notice", or "--warn".  By default only errors are displayed.

Unit tests are divided into "./src/test/test" and "./src/test/test-slow".
The former are those that should finish in a few seconds; the latter tend to
take more time, and may include CPU-intensive operations, deliberate delays,
and stuff like that.

=== Finding test coverage

When you configure Tor with the --enable-coverage option, it should
build with support for coverage in the unit tests, and in a special
"tor-cov" binary.

Then, run the tests you'd like to see coverage from.  If you have old
coverage output, you may need to run "reset-gcov" first.

Now you've got a bunch of files scattered around your build directories
called "*.gcda".  In order to extract the coverage output from them, make a
temporary directory for them and run "./scripts/test/coverage ${TMPDIR}",
where ${TMPDIR} is the temporary directory you made.  This will create a
".gcov" file for each source file under test, containing that file's source
annotated with the number of times the tests hit each line.  (You'll need to
have gcov installed.)

You can get a summary of the test coverage for each file by running
"./scripts/test/cov-display ${TMPDIR}/*".  Each line lists the file's name,
the number of uncovered lines, the number of covered lines, and the coverage
percentage.

For a summary of the test coverage for each _function_, run
"./scripts/test/cov-display -f ${TMPDIR}/*".

=== Comparing test coverage

Sometimes it's useful to compare test coverage for a branch you're writing to
coverage from another branch (such as git master, for example).  But you
can't run "diff" on the two coverage outputs directly, since the actual
number of times each line is executed isn't so important, and isn't wholly
deterministic.

Instead, follow the instructions above for each branch, creating a separate
temporary directory for each.  Then run "./scripts/test/cov-diff ${D1} ${D2}",
where D1 and D2 are the directories you want to compare.  This will produce a
diff of the two directories, with all lines normalized to be either covered
or uncovered.

To count new or modified uncovered lines in D2, you can run:

    "./scripts/test/cov-diff ${D1} ${D2}" | grep '^+ *\#' | wc -l

What kinds of test should I write?
----------------------------------

Integration testing and unit testing are complementary: it's probably a
good idea to make sure that your code is hit by both if you can.

If your code is very low-level, and its behavior is easily described in
terms of a relation between inputs and outputs, or a set of state
transitions, then it's a natural fit for unit tests.  (If not, please
consider refactoring it until most of it _is_ a good fit for unit
tests!)

If your code adds new externally visible functionality to Tor, it would
be great to have a test for that functionality.  That's where
integration tests more usually come in.

Unit and regression tests: Does this function do what it's supposed to?
------------------------------------------------------------------------

Most of Tor's unit tests are made using the "tinytest" testing framework.
You can see a guide to using it in the tinytest manual at

   https://github.com/nmathewson/tinytest/blob/master/tinytest-manual.md

To add a new test of this kind, either edit an existing C file in src/test/,
or create a new C file there.  Each test is a single function that must
be indexed in the table at the end of the file.  We use the label "done:" as
a cleanup point for all test functions.

(Make sure you read tinytest-manual.md before proceeding.)

I use the terms "unit test" and "regression test" very sloppily here.

=== A simple example

Here's an example of a test function for a simple function in util.c:

    static void
    test_util_writepid(void *arg)
    {
      (void) arg;

      char *contents = NULL;
      const char *fname = get_fname("tmp_pid");
      unsigned long pid;
      char c;

      write_pidfile(fname);

      contents = read_file_to_str(fname, 0, NULL);
      tt_assert(contents);

      int n = sscanf(contents, "%lu\n%c", &pid, &c);
      tt_int_op(n, OP_EQ, 1);
      tt_int_op(pid, OP_EQ, getpid());

    done:
      tor_free(contents);
    }

This should look pretty familiar to you if you've read the tinytest
manual.  One thing to note here is that we use the testing-specific
function "get_fname" to generate a file with respect to a temporary
directory that the tests use.  You don't need to delete the file;
it will get removed when the tests are done.

Also note our use of OP_EQ instead of == in the tt_int_op() calls.
We define OP_* macros to use instead of the binary comparison
operators so that analysis tools can more easily parse our code.
(Coccinelle really hates to see == used as a macro argument.)

Finally, remember that by convention, all *_free() functions that
Tor defines are defined to accept NULL harmlessly.  Thus, you don't
need to say "if (contents)" in the cleanup block.
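
As mentioned above, each test function also has to be indexed in the table at
the end of its file so that the test runner can find it.  Here's a minimal
sketch of what that registration might look like, following tinytest's
conventions (the table name and flags are illustrative, not the exact entry
in Tor's source):

    /* At the end of the test file: one entry per test function.
     * The fields are: name, function, flags, setup, and setup data.
     * The TT_FORK flag makes the test run in a forked subprocess. */
    struct testcase_t util_tests[] = {
      { "writepid", test_util_writepid, TT_FORK, NULL, NULL },
      END_OF_TESTCASES
    };

The test group as a whole is then listed in the test runner's table of
groups, which is how a name like "util/writepid" resolves to this function
when you run "./src/test/test util/writepid".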

=== Exposing static functions for testing

Sometimes you need to test a function, but you don't want to expose
it outside its usual module.

To support this, Tor's build system compiles a testing version of
each module, with extra identifiers exposed.  If you want to
declare a function as static but available for testing, use the
macro "STATIC" instead of "static".  Then, make sure there's a
macro-protected declaration of the function in the module's header.

For example, crypto_curve25519.h contains:

    #ifdef CRYPTO_CURVE25519_PRIVATE
    STATIC int curve25519_impl(uint8_t *output, const uint8_t *secret,
                               const uint8_t *basepoint);
    #endif

The crypto_curve25519.c file and the test_crypto.c file both define
CRYPTO_CURVE25519_PRIVATE, so they can see this declaration.
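
Concretely, the test file defines the *_PRIVATE macro before including the
module's header, and can then call the STATIC function directly in the
testing build.  The following is only a rough sketch of that pattern; the
test body and the expected return value are assumptions for illustration,
not the actual test in test_crypto.c:

    /* In the test file, before including the header: */
    #define CRYPTO_CURVE25519_PRIVATE
    #include "crypto_curve25519.h"

    static void
    test_crypto_curve25519_impl_runs(void *arg)
    {
      uint8_t out[32], secret[32], basepoint[32] = {9};
      (void) arg;
      memset(secret, 42, sizeof(secret));

      /* curve25519_impl() is visible here only because of the STATIC
       * declaration above; assumed to return 0 on success. */
      tt_int_op(curve25519_impl(out, secret, basepoint), OP_EQ, 0);

    done:
      ;
    }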

=== Mock functions for testing in isolation

Often we want to test that a function works right, but the function to
be tested depends on other functions whose behavior is hard to observe,
or which require a working Tor network, or something like that.

To write tests for this case, you can replace the underlying functions
with testing stubs while your unit test is running.  You need to declare
the underlying function as 'mockable', as follows:

    MOCK_DECL(returntype, functionname, (argument list));

and then later implement it as:

    MOCK_IMPL(returntype, functionname, (argument list))
    {
      /* implementation here */
    }

For example, if you had a 'connect to remote server' function, you could
declare it as:

    MOCK_DECL(int, connect_to_remote, (const char *name, status_t *status));

When you declare a function this way, it will be declared as normal in
regular builds, but when the module is built for testing, it is declared
as a function pointer initialized to the actual implementation.

In your tests, if you want to override the function with a temporary
replacement, you say:

    MOCK(functionname, replacement_function_name);

And later, you can restore the original function with:

    UNMOCK(functionname);

For more information, see the definitions of this mocking logic in
testsupport.h.
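
Putting the pieces together, a test can swap in a stub for the hypothetical
connect_to_remote() declared above, exercise the code that depends on it,
and restore the real implementation afterward.  Here's a sketch, reusing the
made-up names from this section (status_t, STATUS_OK, and the function under
test are all illustrative):

    /* A replacement with the same signature as the mocked function. */
    static int
    connect_to_remote_mock(const char *name, status_t *status)
    {
      (void) name;
      *status = STATUS_OK;   /* Made-up status value. */
      return 0;              /* Pretend the connection succeeded. */
    }

    static void
    test_uses_remote_connection(void *arg)
    {
      (void) arg;
      MOCK(connect_to_remote, connect_to_remote_mock);

      /* Call the code under test; when it calls connect_to_remote(),
       * it reaches the stub above instead of touching the network. */
      tt_int_op(do_the_thing_that_connects("example"), OP_EQ, 0);

    done:
      UNMOCK(connect_to_remote);
    }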

=== Advanced techniques: Namespaces

XXXX write this.  danah boyd made us some really awesome stuff here.

Integration tests: Calling Tor from the outside
-----------------------------------------------

XXXX WRITEME

Writing integration tests with Stem
-----------------------------------

XXXX WRITEME

System testing with Chutney
---------------------------

XXXX WRITEME

Who knows what evil lurks in the timings of networks? The Shadow knows!
------------------------------------------------------------------------

XXXX WRITEME