Floram docker experimentation environment

Ian Goldberg, iang@uwaterloo.ca
Adithya Vadapalli, adithya.vadapalli@uwaterloo.ca

This repo contains scripts to run Doerner and shelat's Floram in docker containers for easy experimentation with varying ORAM sizes, network latencies, and bandwidths.

These scripts are in support of our paper:

Adithya Vadapalli, Ryan Henry, Ian Goldberg. Duoram: A Bandwidth-Efficient Distributed ORAM for 2- and 3-Party Computation. USENIX Security Symposium 2023. https://eprint.iacr.org/2022/1747

It is a dockerization of Doerner and shelat's published code, with two small changes:

  • Their benchmarking code (bench_oram_read and bench_oram_write) sets up the ORAM, and then does a number of read operations or a number of write operations. The time to set up the ORAM is included in the reported time, but the bandwidth to set up the ORAM is not included in the reported bandwidth. We have a patch to also measure the bandwidth of the setup, and report it separately from the bandwidth of the operations.
  • We also add a read/write benchmark that does alternating reads and writes. If you ask for 128 operations, for example, it will do 128 reads and 128 writes, interleaved.

Reproduction instructions

Follow these instructions to reproduce the Floram data points (timings and bandwidth usage of Floram operations for various ORAM sizes and network settings) for the plots in our paper; a complete example session is sketched after the list. See the manual instructions below if you want to run experiments of your choosing.

  • Build the docker image with ./build-docker
  • Start the dockers with ./start-docker
    • This will start two dockers, each running one of the parties.
  • Run the reproduction script ./repro with one of the following arguments:

    • ./repro test: Run a short (just a few seconds) "kick-the-tires" test. You should see output like the following:

      Running test experiment...
      Tue 21 Feb 2023 01:37:45 PM EST: Running read 16 1us 100gbit 2 ...
      Floram read 16 1us 100gbit 2 0.554001 s
      Floram read 16 1us 100gbit 2 3837.724609375 KiB
      

    The last two lines are the output data points, telling you that a Floram read test on an ORAM of size 2^16, with a network configuration of 1us latency and 100gbit bandwidth, performing 2 read operations, took 0.554001 s of time and 3837.724609375 KiB of bandwidth. If you've run the test before, you will see means and stddevs of all of the output data points. When you run it, the time of course will depend on the particulars of your hardware, but the bandwidth used should be exactly the value quoted above.

    • ./repro small numops: Run the "small" tests. These are the tests up to size 2^26, and produce all the data points for Figures 7 and 8, and most of Figure 9. numops is the number of operations to run for each test; we used the default of 128 for the figures in the paper, but you can use a lower number to make the tests run faster. For the default of 128, these tests should complete in about 4 to 5 hours, and require 16 GB of available RAM.

    • ./repro large numops: Run the "large" tests. These are the rightmost 3 data points in Figure 9. They are not essential to our major claims, so they are optional to run, and you will definitely require a larger machine to run them. For the default numops of 128, these experiments will require 9 to 10 hours to run and 540 GB of available RAM. Reducing numops will only slightly reduce the runtime (down to 8 to 9 hours) and will not change the RAM requirements.

    • ./repro all numops: Run both the "small" and "large" tests.

    • ./repro none numops: Run no tests. This command is nonetheless useful in order to parse the output logs and display the data points for the graphs (see below).

    • ./repro single mode size latency bandwidth numops: Run a single manually selected test with the given parameters.

    • After small, large, all, or none, the script will parse all of the outputs that have been collected with the specified numops (in this run or previous runs), and output them as they would appear in each of the subfigures of Figures 7, 8, and 9.

  • When you're done, ./stop-docker
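
Putting the pieces together, a complete reproduction session might look like the following sketch (runtimes will vary with your hardware; 128 is the default numops used for the figures in the paper):

    ./build-docker
    ./start-docker
    ./repro test        # quick "kick-the-tires" check
    ./repro small 128   # data for Figures 7, 8, and most of Figure 9 (about 4 to 5 hours, 16 GB RAM)
    ./repro large 128   # rightmost 3 data points of Figure 9 (about 9 to 10 hours, 540 GB RAM); optional
    ./stop-docker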

Manual instructions

  • ./build-docker
  • ./start-docker
    • This will start two dockers, each running one of the parties.

Then to simulate network latency and capacity (optional):

  • ./set-networking 30ms 100mbit

To turn that off again:

  • ./unset-networking

If you have a NUMA machine, you might want to pin each party to one NUMA node. To do that, set these environment variables before running ./run-experiment below:

  • export FLORAM_NUMA_P0="numactl -N 1 -m 1"
  • export FLORAM_NUMA_P1="numactl -N 2 -m 2"

Adjust the numactl arguments to taste, of course, depending on your machine's configuration. Alternatively, you can use things like -C 0-7 instead of -N 1 to pin to specific cores, even on a non-NUMA machine.
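
For example, to pin the two parties to disjoint sets of cores rather than NUMA nodes (the core ranges below are illustrative; adjust them to your CPU):

  • export FLORAM_NUMA_P0="numactl -C 0-7"
  • export FLORAM_NUMA_P1="numactl -C 8-15"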

Run experiments (an example session is sketched after the list):

  • ./run-experiment mode size numops port >> outfile

    • mode is one of read, write, readwrite, or init
      • init measures setting up the database with non-zero initial values; the other three modes include setting up the database initialized to 0. Defaults to read.
    • size is the base-2 log of the number of entries in the ORAM (so size = 20 is an ORAM with 1048576 entries, for example). Defaults to 20.
    • numops is the number of operations to perform; one setup will be followed by numops operations, where each operation is a read, a write, or a read plus a write, depending on the mode. Defaults to 128.
    • port is the port number to use; if you're running multiple experiments at the same time, they must each be on a different port. Defaults to 3000.
  • ./parse_sizes outfile

    • Parses the file output by one or more executions of ./run-experiment to extract the number of bytes sent in each experiment. The output will be, for each experiment, a line with the two numbers size and kib, which are the size of the experiment and the average number of KiB (kibibytes = 1024 bytes) sent per party, including both the ORAM setup and the operations.
  • ./parse_times outfile

    • Parses the file output by one or more executions of ./run-experiment to extract the runtime of each experiment. The output will be, for each experiment, a line with the two numbers size and sec, which are the size of the experiment and the time in seconds, including both the ORAM setup and the operations.
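
For example, a manual run of a single experiment followed by parsing might look like this sketch (the output filename here is just illustrative):

    ./run-experiment readwrite 20 128 3000 >> readwrite-20.out
    ./parse_sizes readwrite-20.out   # per-experiment size and average KiB sent per party
    ./parse_times readwrite-20.out   # per-experiment size and runtime in seconds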

For an example of how to use ./run-experiment while varying the experiment size, network latency, and bandwidth, and while using the NUMA functionality, see the ./run-readwrite-experiments script, which wraps ./run-experiment.
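
If you prefer to script your own variation, a minimal sketch of such a wrapper might look like the following (the sizes, network settings, and output filenames are illustrative assumptions, not taken from the actual script):

    for size in 16 20 24; do
        ./set-networking 30ms 100mbit
        ./run-experiment readwrite $size 128 3000 >> readwrite-$size-30ms-100mbit.out
        ./unset-networking
    done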

When you're all done:

  • ./stop-docker