
[Doc] Clean up and update documentation migrated from the GitHub Wiki

Chia-Che Tsai, 4 years ago
Commit 65159de310
29 files changed: 1430 insertions, 1291 deletions
  1. Documentation/oldwiki/Building-Linux-Kernel-Support.md (+0, -42)
  2. Documentation/oldwiki/Debugging-Graphene.md (+32, -7)
  3. Documentation/oldwiki/Failed-tests-in-LTP.md (+0, -7)
  4. Documentation/oldwiki/Golang-support.md (+0, -98)
  5. Documentation/oldwiki/Graphene-Manifest-Syntax.md (+91, -0)
  6. Documentation/oldwiki/Graphene-Quick-Start.md (+30, -0)
  7. Documentation/oldwiki/Graphene-SGX-Manifest-Syntax.md (+70, -0)
  8. Documentation/oldwiki/Graphene-SGX-Quick-Start.md (+63, -0)
  9. Documentation/oldwiki/Home.md (+0, -145)
  10. Documentation/oldwiki/Implementing-New-System-Calls-in-Graphene.md (+69, -0)
  11. Documentation/oldwiki/Implementing-New-System-Calls.md (+0, -58)
  12. Documentation/oldwiki/Introduction-to-Graphene-SGX.md (+121, -0)
  13. Documentation/oldwiki/Introduction-to-Graphene.md (+228, -0)
  14. Documentation/oldwiki/Introduction-to-Intel-SGX-Support.md (+0, -68)
  15. Documentation/oldwiki/Manifest-Syntax.md (+0, -63)
  16. Documentation/oldwiki/PAL-Host-ABI.md (+268, -231)
  17. Documentation/oldwiki/Port-Graphene-PAL-to-Other-hosts.md (+0, -80)
  18. Documentation/oldwiki/Porting-Graphene-PAL-to-Other-hosts.md (+111, -0)
  19. Documentation/oldwiki/Process-Creation-in-Graphene-SGX.md (+12, -7)
  20. Documentation/oldwiki/Quick-Start.md (+0, -38)
  21. Documentation/oldwiki/Remote-Attestation-for-SGX.md (+0, -22)
  22. Documentation/oldwiki/Run-Applications-in-Graphene-SGX.md (+162, -0)
  23. Documentation/oldwiki/Run-Applications-in-Graphene.md (+65, -120)
  24. Documentation/oldwiki/Run-Applications-with-SGX.md (+0, -153)
  25. Documentation/oldwiki/SGX-Manifest-Syntax.md (+0, -39)
  26. Documentation/oldwiki/SGX-Quick-Start.md (+0, -58)
  27. Documentation/oldwiki/Signal-Handling-in-Graphene.md (+95, -34)
  28. Documentation/oldwiki/Supported-System-Calls-in-Graphene.md (+13, -12)
  29. Documentation/oldwiki/Troubleshooting-Common-Issues.md (+0, -9)

+ 0 - 42
Documentation/oldwiki/Building-Linux-Kernel-Support.md

@@ -1,42 +0,0 @@
-# Building Linux Kernel Support
-Graphene requires modifications to the Linux kernel for faster memory copy across processes and security isolation. To enable the security isolation, it requires the whole Linux kernel to be recompiled and installed into the host. The fast memory bulk copy feature can be built into a standalone Linux kernel module, or built as a part of a Graphene kernel.
-
-## Building Graphene Kernel
-
-To build a Graphene kernel, simply use the following commands under the Graphene source tree:
-    cd PAL
-    make  (This step will download and patch Linux 3.19 kernel source, and fail when no .config file provided)
-    cd linux-3.19
-    make menuconfig
-    make
-    make headers_install
-    make modules_install
-    make install    
-
-During configuring the kernel, there are certain Graphene-specific options that need to be enabled:
-
-* `CONFIG_GRAPHENE`:
-  Enabling Graphene support. This option is REQUIRED.
-
-* `CONFIG_GRAPHENE_ISOLATE`:
-  Enabling Graphene security isolation (sandboxing) feature. If this option is disabled, a Graphene instance is still isolated from other processes, but sandboxing inside a Graphene instance is not possible.
-
-* `CONFIG_GRAPHENE_BULK_IPC`:
-  Enabling Graphene fast memory bulk copy feature, as part of the Graphene kernel.
-
-* `CONFIG_GRAPHENE_DEBUG`:
-  Printing Graphene debug log to the kernel log.
-
-After `make install`, you may want to update the GRUB boot menu with a new entry. The following command can be used to update the GRUB menu in most Linux host:
-
-    update-grub
-
-If you are building the kernel in a Ubuntu host, we suggest you to use `make-kpkg` to build the kernel into a _.deb_ package and install it. By doing so, you can conveniently install and remove the kernel by `apt-get` and `dpkg`. 
-
-## Building Graphene Fast Memory Bulk Copy Module
-
-If you don't want to run Graphene with reference monitor, you may build the fast memory bulk copy into a standalone Linux kernel module and install it into the current kernel. To build and install the module, run the following commands under the Graphene source tree:
-
-    cd Pal/ipc/linux
-    make
-    sudo ./load.sh

+ 32 - 7
Documentation/oldwiki/Debugging-Graphene.md

@@ -1,15 +1,40 @@
-# Debugging Graphene
 ## Running Graphene with GDB
 
-To enable GDB support, the PAL loader and Graphene library OS has implemented the GDB protocol to notify any loading and unloading of dynamic libraries. The PAL loader will also load a GDB script to enable proper GDB features to make the debugging process easier. To start Graphene with GDB, use the following command to run your application:
+To enable GDB support, the PAL loader and Graphene implement the GDB protocol to notify the
+debugger about any loading and unloading of dynamic libraries. The PAL loader also loads a
+GDB script to enable GDB features to make the debugging process easier.
 
-    gdb --args <path to PAL>/pal [executable|manifest file] [arguments] ...
-
-To build Graphene with debug symbols, the source code needs to be compiled with `make debug`. Run the following commands in the source tree:
+To build Graphene with debug symbols, the source code needs to be compiled with `DEBUG=1`. Run the
+following commands in the source tree:
 
     make clean
     make DEBUG=1
 
-## Debugging Graphene Kernel
+To run Graphene with GDB, use one of the following commands to run your application:
+
+    GDB=1 [Graphene Directory]/Runtime/pal_loader [executable|manifest] [arguments]
+    gdb --args [Path to PAL]/pal [executable|manifest] [arguments]
+
+
+## Running Graphene-SGX with GDB
+
+Graphene-SGX also supports GDB from outside the enclave if the enclave is created in debug mode.
+Graphene provides a specialized GDB for the application and the library OS running inside an
+enclave (using a normal GDB will only debug the execution *outside* the enclave).
+
+To build Graphene-SGX with debug symbols, the source code needs to be compiled with `DEBUG=1`. Run
+the following commands in the source tree:
+
+    make SGX=1 clean
+    make SGX=1 DEBUG=1
+
+After rebuilding Graphene-SGX with `DEBUG=1`, you need to re-sign the manifest of the application.
+For instance, if you want to debug the `helloworld` program, run the following commands:
+
+    cd LibOS/shim/test/native
+    make SGX=1
+    make SGX_RUN=1
+
+To run Graphene-SGX with GDB, use the Graphene loader (`pal_loader`) and specify both `SGX=1` and
+`GDB=1`:
 
-If you find any buggy behavior of Graphene kernel or fast memory bulk copy module, we suggest you to enable Graphene debug options in the kernel configuration. If the Graphene kernel is still failing without obvious reason, you may use any kernel debugging techniques, such as _printk_, _KDB_ or _KGDB_ to debug the kernel.
+    GDB=1 SGX=1 [Graphene Directory]/Runtime/pal_loader [executable|manifest] [arguments]

+ 0 - 7
Documentation/oldwiki/Failed-tests-in-LTP.md

@@ -1,7 +0,0 @@
-# Failed tests in LTP
-
-
-| Test name | Reason | Issue? |
-| --- | --- | --- |
-| brk01 | Do not produce core dump |  |
-

+ 0 - 98
Documentation/oldwiki/Golang-support.md

@@ -1,98 +0,0 @@
-# Golang support
-
-This page is intended to track efforts for golang support.
-If you're looking for how to use it, please refer to TBD.
-
-## Goal
-Support golang binary without modification with graphene(-SGX).
-Here golang binary means one created by gc toolchain. Not gccgo, not go-llvm nor other implementations.
-The target user is normal go developer who has already binary. We'd like tell them, bring your binary as is.
-
-## Summary
-| category | item | status | PRs/Issues |
-|-----------|------|--------|------------|
-|Static binary |trap-and-emulate syscall instruction | merged |  |
-|              | %gs for PAL/LibOS tls | | https://github.com/oscarlab/graphene/pull/555 https://github.com/oscarlab/graphene/pull/556 https://github.com/oscarlab/graphene/pull/601 |
-|              |binary patch    | |  |
-|              |dedicated stack for LibOS | | |
-| signal emulation | nested signal| discussion on-going. respin PR |https://github.com/oscarlab/graphene/issues/348 https://github.com/oscarlab/graphene/pull/347 |
-|                  | sigaltstack | | |
-| host signal handling | fp registers to PAL_CONTEXT | | https://github.com/oscarlab/graphene/pull/397 |
-|                      | PAL/Linux-SGX dedicated stack for host signal | RFC:code needs to be improved| https://github.com/oscarlab/graphene/pull/632 |
-|                      | multiple signal(PAL, LibOS) | | |
-|                      | Pal/Linux-SGX ocall and signal | | |
-| syscall/instruction emulation | rdtsc |  |https://github.com/oscarlab/graphene/pull/424 |
-|                               | probably more to come | | |
-| test | regression test for golang | | |
-|      | regression test for static binary | | | |
-| misc | vDSO | | https://github.com/oscarlab/graphene/pull/318 https://github.com/oscarlab/graphene/pull/319 |
-|      | stack protector     | |  https://github.com/oscarlab/graphene/pull/774 | 
-
-## Challenges with golang binary
-There are several challenges with go binary.
-### no libc and static link
-golang doesn't use libc. But it has its own runtime library written in go(self hosting). glibc modification doesn't help.
-golang prefers static link and go runtime is always statically linked. (recent go support dynamic link to use shared library. CGO. However go runtime is always statically linked) So the trick to replace shared library isn't usable.
-Current graphene uses modified shared glibc to hook system call instruction for function call.
-### goroutine with small stack size and signal stack
-small memory(e.g. 2KB) is assigned to goroutine on start and stack size is increased on demand.
-sigaltstack is used due to small stack size and for stability. Currently sigaltstack isn't supported. It would be an issue for graphene to directly invoking user signal handler within LibOS. It may cause SEGV due to stack overflow.
-
-## Issues and proposed solutions
-
-### syscall to function call of syscalldb
-* trap SIGILL/SIGSYS and emulate: status: working locally. soon to send PR. This is fallback for corner cases.
-* binary patch to replace syscall instruction with function call
-
-#### binary patch
-trap-and-emulate is slow. optimization for performance is needed to avoid the overhead. One way is to edit loaded text area.
-Editing text area is tricky and fragile. Also it's hard to debug. There are several possible options.
-We should support easiest one and make it solid and then move on to further tricks(more complex and more fragile) if necessary.
-Because we have trap-and-emulate as fallback, the solutions don't have to be perfect. (at the cost of performance.)
-
-There are two major points. How to identify syscall instruction and How to replace syscall instruction with function call.
-It is observed that functions in golang runtime for system call are leaf function without referencing any symbols. a sort of simple wrapper function. It only swaps registers to adjust ABI difference between function call and Linux system call, issues syscall instruction and checks return value. (Please remember -errno trick.)
-(Actually many of glibc syscall functions are so. so actually the solution discussed here could be applied to glibc.)
-
-* replace leaf function as a whole: The assumption is that static symbol is usable. So all the symbol names of syscall functions(of given specific version of golang). So replacing functions can be prepared as a shared library and jump instruction can be put on the beginning of each symbols of the original go binary to replacing functions when target binary is loaded into memory.
-
-* scan instruction to find syscall instruction and replace it somehow: The assumption is static symbol isn't available. This will be very tricky and bunch of heuristics. Please remember that x86-64 instruction has variable length. we have to play with instruction length. Linux paravirt ops uses padding with nop. But golang upstream won't adapt such nop trick to allocate space for text editing.   
-
-* find syscall by SIGILL on runtime. This requires synchronization to stop all the thread, modify the text area, icache flush and resume threads. This is too complex. So for now this is out of choice.
-
-### emulation of signal and sigaltstack
-* https://github.com/oscarlab/graphene/issues/348
-* https://github.com/oscarlab/graphene/pull/347
-
-Now discussion is on-going.
-Right now host signal handling of Pal/Linux-SGX seems broken. a lot of clean ups are necessary before actual sigaltstack support.
-
-### host signal handling and PAL ABI
-The stack can be very small. So the dedicated stack for signal handling is needed.
-Pal/Linux uses sigaltstack. Pal/Linux-SGX has to implement something similar itself because sigaltstack isn't usable.
-PAL ABI related to host signal needs to be clarified.
-* stack: the current stack is used or the dedicated stack is used (sigaltstack). For stability, the dedicated stack is preferable
-* FP registers: The currently only regular register is defined in PAL_CONTEXT. FP registers needs to be included and its format should be defined. We can adapt Linux format. Other platform PAL can emulate it.
-
-### host signal handling and host job control
-The question is, do we want to support host job control? to what extent?
-
-Use case.
-* C-c to kill.(SIGINT)
-* C-z to suspend process and fg/bg command in shell(SIGTSTP, SIGCONT)
-* C-\ coredump. SIGQUIT
-* daemon scripts or systemd: to run/manage daemon process.(SIGTERM/SIGQUIT/SIGTSTP/SIGCONT/SIGHUP)
-* kubernetes also uses signal to kill pods. https://jbodah.github.io/blog/2017/05/23/learning-about-kubernetes-and-unix-signals/
-* SIGTTIN, SIGTTOU, SIGHUP
-
-Feedback:
-* introduce option in manifest which signal to pass through application.
-* C-c is quite convenient.
-* what signal systemd uses? SIGTERM, SIGKILL, SIGHUP, SIGQUIT, SIGABRT. refer to https://www.freedesktop.org/software/systemd/man/systemd.kill.html and https://www.freedesktop.org/software/systemd/man/systemd.service.html  Interesting part is systemd also looks at exit code to determine if it should restart the daemon.
-* Check actual user wants to do: kubernetes/docker support is critical. So it is must-have to fill this gap. Otherwise it is NOT deployable in cloud environment.
-
-### memory size and SGX2
-golang gc runtime requires much memory. SGX2 is desired for good performance. 
-
-## related PR's and issues
-TBD

+ 91 - 0
Documentation/oldwiki/Graphene-Manifest-Syntax.md

@@ -0,0 +1,91 @@
+## Basic Syntax
+
+A manifest file is an application-specific configuration text file that specifies the environment
+and resources for running an application inside Graphene. A manifest file contains entries
+separated by line breaks. Each configuration entry consists of a key and a value. Whitespaces
+before/after the key and before/after the value are ignored. The value can be written in quotes,
+indicating that the value should be assigned to this string verbatim. (The quotes syntax is useful
+for values with leading/trailing whitespaces, e.g. `" SPACES! "`.) Each entry must be in the
+following format:
+
+    [Key][.Key][.Key] = [Value]  or  [Key][.Key][.Key] = "[Value]"
+
+Comments can be inlined in a manifest by starting them with a hash sign (`# comment...`). Any text
+after a hash sign will be considered part of a comment and discarded while loading the manifest
+file.
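+
+For example, the following entries are syntactically valid (the keys themselves are explained in
+the sections below; the values are only illustrative):
+
+    loader.execname = bash            # everything after the hash sign is a comment
+    loader.env.GREETING = " hello "   # the quotes preserve the leading/trailing spaces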
+
+## Loader-related (Required by PAL)
+
+### Executable
+
+    loader.exec=[URI]
+
+This syntax specifies the executable to be loaded into the library OS. The executable must be an
+ELF binary, with an entry point defined to start its execution (i.e., the binary needs a `main()`
+routine; it cannot just be a library).
+
+### Preloaded Libraries (e.g., LibOS)
+
+    loader.preload=[URI][,URI]...
+
+This syntax specifies the libraries to be preloaded before loading the executable. The URIs of the
+libraries must be separated by commas. The libraries must be ELF binaries.
+
+### Executable Name
+
+    loader.execname=[STRING]
+
+This syntax specifies the executable name that will be passed as the first argument (`argv[0]`)
+to the executable. If the executable name is not specified in the manifest, the PAL will use the
+URI of the executable or the manifest -- depending on whether the executable or the manifest is
+given as the first argument to the PAL loader -- as `argv[0]` when running the executable.
+
+### Environment Variables
+
+    loader.env.[ENVIRON]=[VALUE]
+
+By default, the environment variables on the host will be passed to the library OS. Specifying an
+environment variable using this syntax adds/overwrites it and passes to the library OS. This syntax
+can be used multiple times to specify more than one environment variable. An environment variable
+can be deleted by giving it an empty value.
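+
+For example (the variable values below are illustrative):
+
+    loader.env.LD_LIBRARY_PATH = /lib
+    loader.env.PATH = /bin:/usr/bin
+    loader.env.HOSTNAME =             # empty value: HOSTNAME is deleted inside the library OS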
+
+### Debug Type
+
+    loader.debug_type=[none|inline]
+    (Default: none)
+
+This specifies the debug option while running the library OS. If the debug type is `none`, no
+debug output will be printed to standard output. If the debug type is `inline`, a dmesg-like
+debug output will be printed inlined with standard output.
+
+
+## System-related (Required by LibOS)
+
+### Stack Size
+
+    sys.stack.size=[# of bytes (with K/M/G)]
+
+This specifies the stack size of each thread in each Graphene process. The default value is
+determined by the library OS. Units like `K` (KB), `M` (MB), and `G` (GB) can be appended to the
+values for convenience. For example, `sys.stack.size=1M` indicates a 1MB stack size.
+
+### Program Break (Heap) Size
+
+    sys.brk.size=[# of bytes (with K/M/G)]
+
+This specifies the program break (brk) size in each Graphene process. The default value of the
+program break size is determined by the library OS. Units like `K` (KB), `M` (MB), and `G` (GB) can
+be appended to the values for convenience. For example, `sys.brk.size=1M` indicates a 1MB brk size.
+
+
+## FS-related (Required by LibOS)
+
+### Mount Points
+
+    fs.mount.[identifier].path=[PATH]
+    fs.mount.[identifier].type=[chroot|...]
+    fs.mount.[identifier].uri=[URI]
+
+This syntax specifies how file systems are mounted inside the library OS. For dynamically linked
+binaries, usually at least one mount point is required in the manifest (the mount point of the
+Glibc library).
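+
+Putting these keys together, a minimal manifest for a dynamically linked program might look like
+the following sketch (the executable and the paths are only illustrative):
+
+    loader.exec = file:/bin/ls
+    loader.preload = file:[relative path to Graphene root]/LibOS/shim/src/libsysdb.so
+    loader.env.LD_LIBRARY_PATH = /lib
+
+    fs.mount.libc.type = chroot
+    fs.mount.libc.path = /lib
+    fs.mount.libc.uri = file:[relative path to Graphene root]/Runtime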

+ 30 - 0
Documentation/oldwiki/Graphene-Quick-Start.md

@@ -0,0 +1,30 @@
+(The following quick start instruction does not include the steps for running Graphene with
+sandboxing because sandboxing is an experimental feature.)
+
+### 1. Clone the Graphene Repository
+
+    git clone https://github.com/oscarlab/graphene.git
+
+### 2. Build Graphene
+
+    cd graphene
+    make
+
+### 3. Build and Run `helloworld`
+
+    cd LibOS/shim/test/native
+    make
+    ./pal_loader helloworld
+
+### 4. Test LMBench Application
+
+    cd ..
+    git submodule update --init apps
+    cd apps/lmbench
+    make
+    cd lmbench-2.5/bin/linux
+    ./pal_loader lat_syscall null
+    ./pal_loader lat_syscall open
+    ./pal_loader lat_syscall read
+    ./pal_loader lat_proc fork
+

+ 70 - 0
Documentation/oldwiki/Graphene-SGX-Manifest-Syntax.md

@@ -0,0 +1,70 @@
+The basic manifest syntax for Graphene is described in [[Graphene Manifest Syntax]]. If Graphene
+is *not* running with SGX, the SGX-specific syntax is ignored. All keys in the SGX-specific syntax
+are optional. If the keys are not specified, Graphene will use the default values.
+
+## Basic SGX-specific Syntax
+
+### Enclave Size
+
+    sgx.enclave_size=[SIZE]
+    (Default: 256M)
+
+This syntax specifies the size of the enclave set during enclave creation time (recall that SGX v1
+requires a predetermined maximum size of the enclave). The PAL and library OS code/data count
+towards this size value, as well as the application memory itself: application's code, stack, heap,
+loaded application libraries, etc. The application cannot allocate memory that exceeds this limit.
+
+### Number of Threads
+
+    sgx.thread_num=[NUM]
+    (Default: 4)
+
+This syntax specifies the maximum number of threads that can be created inside the enclave (recall
+that SGX v1 requires a predetermined maximum number of thread slots). The application cannot have
+more threads than this limit *at a time* (however, it is possible to create new threads after old
+threads are destroyed).
+
+### Debug/Production Enclave
+
+    sgx.debug=[1|0]
+    (Default: 1)
+
+This syntax specifies whether the enclave can be debugged. Set it to 1 for a debug enclave and to 0
+for a production enclave.
+
+### ISV Product ID and SVN
+
+    sgx.isvprodid=[NUM]
+    sgx.isvsvn=[NUM]
+    (Default: 0)
+
+This syntax specifies the ISV Product ID and SVN to be added to the enclave signature.
+
+## Trusted Files and Child Processes
+
+### Trusted Files
+
+    sgx.trusted_files.[identifier]=[URI]
+
+This syntax specifies the files to be cryptographically hashed, and thus allowed to be loaded
+into the enclave. The signer tool will automatically generate hashes of these files and add them
+into the SGX-specific manifest (`.manifest.sgx`). This is especially useful for shared libraries:
+a trusted library cannot be silently replaced by a malicious host because the hash verification
+will fail.
+
+### Allowed Files
+
+    sgx.allowed_files.[identifier]=[URI]
+
+This syntax specifies the files that are allowed to be loaded into the enclave unconditionally.
+These files are not cryptographically hashed and are thus not protected. It is insecure to allow
+files containing code or critical information; developers must not allow files blindly!
+
+### Trusted Child Processes
+
+    sgx.trusted_children.[identifier]=[URI of signature (.sig)]
+
+This syntax specifies the signatures of allowed child processes of the current application. Upon
+process creation, the enclave in the current (parent) process will attest the enclave in the child
+process, by comparing to the signatures of the trusted children. If the child process is not
+trusted, the enclave will refuse to communicate with it.
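+
+As an illustration, the SGX-specific part of a manifest might look like the following sketch
+(the identifiers, file URIs, and signature path are hypothetical):
+
+    sgx.enclave_size = 512M
+    sgx.thread_num = 8
+    sgx.debug = 0
+
+    sgx.trusted_files.libc = file:[relative path to Graphene root]/Runtime/libc.so.6
+    sgx.allowed_files.config = file:app.conf
+    sgx.trusted_children.worker = file:worker.sig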

+ 63 - 0
Documentation/oldwiki/Graphene-SGX-Quick-Start.md

@@ -0,0 +1,63 @@
+Before you run any applications in Graphene-SGX, please make sure that Intel SGX SDK and the SGX
+driver are installed on your system. We recommend using Intel SGX SDK and the SGX driver no older
+than version 2.1.
+
+If Intel SGX SDK and the SGX driver are not installed, please follow the READMEs in
+<https://github.com/01org/linux-sgx> and <https://github.com/01org/linux-sgx-driver> to download
+and install them.
+
+### 1. Ensure That Intel SGX is Enabled on Your Platform
+
+    lsmod | grep isgx
+    ps ax | grep [a]esm_service
+
+The first command should list `isgx` and the second command should list the process status of
+`aesm_service`.
+
+### 2. Clone the Repository and Set the Home Directory of Graphene
+
+    git clone https://github.com/oscarlab/graphene.git
+    cd graphene
+    git submodule update --init -- Pal/src/host/Linux-SGX/sgx-driver/
+    export GRAPHENE_DIR=$PWD
+
+### 3. Prepare a Signing Key
+
+    cd $GRAPHENE_DIR/Pal/src/host/Linux-SGX/signer
+    openssl genrsa -3 -out enclave-key.pem 3072
+
+### 4. Build and Install Graphene SGX Driver
+
+    cd $GRAPHENE_DIR/Pal/src/host/Linux-SGX/sgx-driver
+    make
+    # the console will prompt you for the path of the Intel SGX driver code
+    sudo ./load.sh
+
+### 5. Build Graphene-SGX
+
+    cd $GRAPHENE_DIR
+    make SGX=1
+
+### 6. Set `vm.mmap_min_addr=0` in the System
+
+    sudo sysctl vm.mmap_min_addr=0
+
+### 7. Build and Run `helloworld`
+
+    cd $GRAPHENE_DIR/LibOS/shim/test/native
+    make SGX=1
+    make SGX_RUN=1
+    SGX=1 ./pal_loader helloworld
+
+### 8. Test LMBench Application
+
+    cd $GRAPHENE_DIR
+    git submodule update --init -- LibOS/shim/test/apps
+    cd $GRAPHENE_DIR/LibOS/shim/test/apps/lmbench
+    make SGX=1
+    cd lmbench-2.5/bin/linux
+    SGX=1 ./pal_loader lat_syscall null
+    SGX=1 ./pal_loader lat_syscall open
+    SGX=1 ./pal_loader lat_syscall read
+    SGX=1 ./pal_loader lat_proc fork
+

+ 0 - 145
Documentation/oldwiki/Home.md

@@ -1,145 +0,0 @@
-# Home
-## What is Graphene library OS?
-
-**Graphene library OS** is a project to provide lightweight guest OSes with support for Linux multi-process applications. Comparable to virtual machines, Graphene runs applications in an isolated environment, with virtualization benefits such as guest customization, platform independence and migration. The work is published in the proceeding of [Eurosys 2014](https://oscarlab.github.io/papers/tsai14graphene.pdf).
-
-Graphene Library OS can support running Linux applications with the latest **Intel SGX (Software Guard Extension)** technologies. With Intel SGX, applications are secured in hardware-encrypted memory regions (so called **enclaves**), and no malicious software stack or hardware attack such as cold-boot attack can retrieve the application secret. Graphene Library OS can support native application to run in enclaves, without the porting efforts that developers usually have to pay. For more information, see [[Introduction to Intel SGX Support]].
-
-## What is the prerequisite of running my applications in Graphene?
-
-Graphene is developed on top of 64-bit Linux, to run 64-bit Linux applications. We've tested it on 64-bit Ubuntu Linux up to 15.04 (both Server and Desktop versions). Other distributions of 64-bit Linux can potentially work, but the result is not guaranteed. If you have any problem building or running Graphene on top of other Linux hosts, please contact us. 
-
-To compile Graphene library OS, the following packages are required:
-* build-essential
-* autoconf
-* gawk
-* python-protobuf (for SGX signing tool)
-* python-crypto (for SGX signing tool)
-
-Graphene has implemented about one third of Linux system calls, to support native, unmodified Linux applications. Before running any application, you must confirm if every system call required by the application executables and libraries is supported, or at least not affecting the functionality of the application if the system call is returned with error code **ENOSYS**. Here is a list of all [[Implemented System Calls]].
-
-### What are the other hosts that Graphene can run on top of?
-
-Graphene Library OS can run Linux applications on top of any hosts that Graphene Library OS has been ported to. Porting Graphene Library OS to a new host requires implementing the [[PAL Host ABI]] using the host ABI. Currently we have ported Graphene Library OS to **64-bit FreeBSD** and **64-bit Linux with Intel SGX**. More supported hosts are expected in the future. 
-
-## How to build and run Graphene library OS?
-
-Here is a [[Quick Start]] instruction for how to build and run Graphene with minimal commands.
-
-### Obtaining source code
-
-Graphene can be obtained on _github_. Use the following command to check out the code:
-
-`git clone https://github.com/oscarlab/graphene.git`
-
-### Building Graphene
-
-Graphene Library OS consists of five parts:
-* Instrumented GNU Library C
-* LibOS (a shared library named `libsysdb.so`)
-* PAL, a.k.a Platform Adaption Layer (a shared library named `libpal.so`)
-* Reference monitor (a shared library named `libpal_sec.so`)
-* Minor kernel customization and kernel modules
-
-Please note that Graphene requires building a customized Linux kernel on the host, apart from the library OS itself. It may require some basic knowledge and experience of building and installing Linux kernels.
-
-To build the system, simply run the following commands in the root of the source tree:
-
-__** Note: Please use GCC version 4 or 5 **__
-
-    git submodule update --init
-    make
-
-For more detail please read this page: [[How to build Graphene Kernel|Building Linux Kernel Support]].
-
-Each part of Graphene can be built separately in the subdirectories.
-
-To build Graphene library OS with debug symbol, run "`make DEBUG=1`" instead of "`make`". For more information about debugging Graphene library OS, please read this page: [[How to debug Graphene|Debugging Graphene]]
-
-
-#### Building with Kernel-Level Sandboxing (Optional)
-
-__** Note: this step is optional. **__
-
-__** Note: for building with Intel SGX support, skip this step. **__
-
-__** Disclaimer: this feature is experimental and may contain bugs. Please do no use in production system before further assessment.__
-
-To enable sandboxing, a customized Linux kernel is needed. Note that this feature is optional and completely unnecessary for running on SGX. To build the Graphene Linux kernel, do the following steps:
-
-    cd Pal/linux-3.19
-    make menuconfig
-    make
-    make install
-    (Add Graphene kernel as a boot option by commands like "update-grub")
-    (reboot and choose the Graphene kernel)
-
-Please note that the building process may pause before building the Linux kernel, because it requires you to provide a sensible configuration file (.config). The Graphene kernel requires the following options to be enabled
-in the configuration:
-
-  - CONFIG_GRAPHENE=y
-  - CONFIG_GRAPHENE_BULK_IPC=y
-  - CONFIG_GRAPHENE_ISOLATE=y
-
-### Run an application in the Graphene Library OS
-
-Graphene library OS uses PAL as a loader to bootstrap an application in the library OS. To start Graphene, PAL will have to be run as an executable, with the name of the program, and a "manifest file" given from the command line. Graphene provides three options for specifying the programs and manifest files:
-
-    option 1: (automatic manifest)
-    [PATH_TO_Runtime]/pal_loader [PROGRAM] [ARGUMENTS]...
-    (Manifest file: "[PROGRAM].manifest" or "manifest")
-
-    option 2: (given manifest)
-    [PATH_TO_Runtime]/pal_loader [MANIFEST] [ARGUMENTS]...
-
-    option 3: (manifest as a script)
-    [PATH_TO_MANIFEST]/[MANIFEST] [ARGUMENTS]...
-    (Manifest must have "#![PATH_TO_PAL]/libpal.so" as the first line)
-
-Using "pal" as loader to start Graphene will not attach the applications
-to the Graphene reference monitor. The applications will have better
-performance, but no strong security isolation. To attach the applications to
-the Graphene reference monitor, Graphene must be started with the PAL
-reference monitor loader (pal_sec). Graphene provides three options for
-specifying the programs and manifest files to the loader:
-
-    option 4: (automatic manifest - with reference monitor)
-    SEC=1 [PATH_TO_Runtime]/pal_loader [PROGRAM] [ARGUMENTS]...
-    (Manifest file: "[PROGRAM].manifest" or "manifest")
-
-    option 5: (given manifest - with reference monitor)
-    SEC=1 [PATH_TO_Runtime]/pal_loader [MANIFEST] [ARGUMENTS]...
-
-    option 6: (manifest as a script - with reference monitor)
-    SEC=1 [PATH_TO_MANIFEST]/[MANIFEST] [ARGUMENTS]...
-    (Manifest must have "#![PATH_TO_PAL]/pal_sec" as the first line)
-
-Although manifest files are optional for Graphene, running an application usually requires some minimal configuration in its manifest file. A sensible manifest file will include paths to the library OS and GNU library C, environment variables such as `LD_LIBRARY_PATH`, file systems to be mounted, and isolation rules to be enforced in the reference monitor.
-
-Here is an example of manifest files:
-
-    loader.preload = file:LibOS/shim/src/libsysdb.so
-    loader.env.LD_LIBRARY_PATH = /lib
-
-    fs.mount.libc.type = chroot
-    fs.mount.libc.path = /lib
-    fs.mount.libc.uri = file:LibOS/build
-
-More examples can be found in the test directories (`LibOS/shim/test`). We have also tested several commercial applications such as GCC, Bash and Apache, and the manifest files that bootstrap them in Graphene are provided in the individual directories.
-
-For the full documentation of the Graphene manifest syntax, please see this page: [[Manifest Syntax]].
-
-More details of running tested/benchmarked applications in Graphene, please see this page: [[Run Applications in Graphene]].
-
-## How do I contribute to the project? 
-
-Some documentations that might be helpful:
-
-* [[PAL Host ABI]]
-* [[Port Graphene PAL to Other hosts]]
-
-## How to contact the maintainers?
-
-For any questions or bug reports, please send an email to support@graphene-project.io
-or post an issue on our github repository: https://github.com/oscarlab/graphene/issues
-

+ 69 - 0
Documentation/oldwiki/Implementing-New-System-Calls-in-Graphene.md

@@ -0,0 +1,69 @@
+### Step 1: Define the Interface of System Call and Name the Function in `LibOS/shim/src/shim_syscalls.c`
+
+For example, assume we are implementing `sched_setaffinity`. You must find the definition of
+`sched_setaffinity` in `shim_syscalls.c`, which will be the following code:
+
+```
+SHIM_SYSCALL_PASSTHROUGH(sched_setaffinity, 3, int, pid_t, pid, size_t, len,
+                         __kernel_cpu_set_t*, user_mask_ptr)
+```
+
+Change this line to `DEFINE_SHIM_SYSCALL(...)` to name the function that implements this system
+call: `shim_do_sched_setaffinity` (this is the naming convention, please follow it).
+
+```
+DEFINE_SHIM_SYSCALL(sched_setaffinity, 3, shim_do_sched_setaffinity, int, pid_t, pid, size_t, len,
+                    __kernel_cpu_set_t*, user_mask_ptr)
+```
+
+
+### Step 2: Add Definitions to `LibOS/shim/include/shim_table.h`
+
+To implement system call `sched_setaffinity`, three functions need to be defined in `shim_table.h`:
+`__shim_sched_setaffinity`, `shim_sched_setaffinity`, and `shim_do_sched_setaffinity`. The first
+two should already be defined. Add the third in respect to the system call you are implementing,
+with the same prototype as defined in `shim_syscalls.c`.
+
+```
+int shim_do_sched_setaffinity(pid_t pid, size_t len, __kernel_cpu_set_t* user_mask_ptr);
+``` 
+
+### Step 3: Implement the System Call under `LibOS/shim/src/sys`
+
+You can add the function body of `shim_do_sched_setaffinity` (or whatever function name you defined earlier) in a new
+source file or any existing source file in `LibOS/shim/src/sys`.
+
+For example, in `LibOS/shim/src/sys/shim_sched.c`:
+```
+int shim_do_sched_setaffinity(pid_t pid, size_t len, __kernel_cpu_set_t* user_mask_ptr) {
+   /* code for implementing the semantics of sched_setaffinity */
+}
+```
+
+### Step 4 (Optional): Add New PAL Calls if Necessary for the System Call
+
+The concept of Graphene library OS is to keep the PAL interface as simple as possible. So, you
+should not add new PAL calls if the features can be fully implemented inside the library OS using
+the existing PAL calls. However, sometimes the OS features needed involve low-level operations
+inside the host OS and cannot be emulated inside the library OS. Therefore, you may have to add a
+few new PAL calls to the existing interface.
+
+To add a new PAL call, first modify `Pal/src/pal.h`. Define the PAL call in a platform-independent way.
+
+```
+PAL_BOL DkThreadSetCPUAffinity(PAL_NUM cpu_num, PAL_IDX* cpu_indexes);
+```
+
+Make sure you use the PAL-specific data types, including `PAL_BOL`, `PAL_NUM`, `PAL_PTR`,
+`PAL_FLG`, `PAL_IDX`, and `PAL_STR`. The naming convention of a PAL call is to start functions
+with the `Dk` prefix, followed by a comprehensive name describing the purpose of the PAL call.
+
+### Step 5 (Optional): Export the new PAL call from the PAL binaries
+
+For each host directory in `Pal/src/host/`, there is a `pal.map` file. This file lists all the symbols
+accessible to the library OS. The new PAL call needs to be listed here in order to be used by
+your system call implementation.
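+
+For illustration only -- assuming `pal.map` follows the common linker version-script layout,
+which is an assumption you should verify against the existing entries for your target host --
+exporting the new call could look like:
+
+    PAL {
+        global:
+            /* ...existing PAL calls... */
+            DkThreadSetCPUAffinity;
+        local: *;
+    };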
+
+### Step 6 (Optional): Implement the New PAL Call in `Pal/src`
+
+(Not finished...)

+ 0 - 58
Documentation/oldwiki/Implementing-New-System-Calls.md

@@ -1,58 +0,0 @@
-# Implementing New System Calls
-### Step 1: Define the interface of system call and name the implementing function in `LibOS/shim/src/shim_syscalls.c`.
-
-For example, assume we are implementing `sched_setaffinity`, find the definition of `sched_setaffinity` in `shim_syscalls.c`, which will be the following code:
-
-```
-SHIM_SYSCALL_PASSTHROUGH (sched_setaffinity, 3, int, pid_t, pid, size_t, len,
-                          __kernel_cpu_set_t *, user_mask_ptr)
-```
-
-Now, change this line to `DEFINE_SHIM_SYSCALL(...)` to name the function that implements this system call. For example, we can call this function `shim_do_sched_setaffinity` (**this is the naming convention, please follow it**).
-
-```
-DEFINE_SHIM_SYSCALL (sched_setaffinity, 3, shim_do_sched_setaffinity, int, pid_t, pid, size_t, len,
-                     __kernel_cpu_set_t *, user_mask_ptr)
-```
-
-
-### Step 2: Add the definitions to `LibOS/shim/include/shim_table.h`
-
-To implement system call `sched_setaffinity`, three functions need to be defined in `shim_table.h`: `__shim_sched_setaffinity`, `shim_sched_setaffinity`, and `shim_do_sched_setaffinity`. The first two should already be defined. Add the third in respect to the system call you are implementing, with the same prototype as defined in `shim_syscalls.c`.
-
-```
-int shim_do_sched_setaffinity (pid_t pid, size_t len, __kernel_cpu_set_t * user_mask_ptr);
-``` 
-
-### Step 3: Implement the system call in a source file under `LibOS/shim/src/sys`.
-
-You can add the function body of `shim_do_sysinfo` (or the function name defined earlier) in a new source file or any existing source file in `LibOS/shim/src/sys`.
-
-For example, in `LibOS/shim/src/sys/shim_sched.c`:
-```
-int shim_do_sched_setaffinity (pid_t pid, size_t len, __kernel_cpu_set_t * user_mask_ptr) {
-   /* Write the code for implementing the semantic of sched_setaffinity. */
-}
-```
-
-### Step 4 (Optional): Add a new PAL call if it is necessary for the system call.
-
-The concept of Graphene library OS is to keep the PAL interface as simple as possible. So, you should not add new PAL calls if the features can be fully implemented inside the library OS using the existing PAL calls. However, sometimes, the OS features needed involve low-level operations inside the host operating systems and cannot be emulated inside the library OS. Therefore, you may have to add a few new PAL calls to be supplementary to the existing interface.
-
-To add a new PAL call, first modify `Pal/src/pal.h`. Define the PAL call **in a platform-independent way**.
-
-```
-PAL_BOL DkThreadSetCPUAffinity (PAL_NUM cpu_num, PAL_IDX * cpu_indexes);
-```
-
-Make sure you use the PAL-specific data types, including `PAL_BOL`, `PAL_NUM`, `PAL_PTR`, `PAL_FLG`, `PAL_IDX`, and `PAL_STR`. The naming convention of a PAL call starts with a `DK`, followed by a comprehensive name describing the purpose of the PAL call.
-
-### Step 5 (Optional): Export the new PAL call in the PAL binaries.
-
-For each directory in `PAL/host/`, there is a `pal.map` file. This file lists all the symbols accessible to the library OS. The new PAL call needs to be listed here in order to be used for your system call implementation.
-
-### Step 6 (Optional): Implementing the new PAL call in `PAL/src`.
-
-
-
-

+ 121 - 0
Documentation/oldwiki/Introduction-to-Graphene-SGX.md

@@ -0,0 +1,121 @@
+## What is Intel SGX?
+
+SGX (Software Guard Extensions) is a security feature of the latest Intel CPUs. According to
+<https://github.com/ayeks/SGX-hardware>, SGX is available in Intel CPUs that were launched after
+October 1st, 2015.
+
+Intel SGX is designed to protect critical applications against a potentially malicious system stack,
+from the operating systems to hardware (CPU itself excluded). SGX creates a hardware-encrypted
+memory region (called SGX enclaves) for the protected application, such that neither privileged
+software attacks nor hardware attacks such as cold-boot attacks can modify or retrieve the
+application data from the enclave memory.
+
+## Why use Graphene for Intel SGX?
+
+Porting applications to an Intel SGX platform can be cumbersome. To secure an application with SGX,
+developers must recompile the application executable with the Intel SGX SDK
+(<https://github.com/01org/linux-sgx>). Moreover, an in-enclave application has *no* access to
+OS features, such as opening a file, creating a network connection, or cloning a thread. For any
+interaction with the host, developers must define untrusted interfaces that the application must
+use to exit the enclave, perform the OS system call, and re-enter the enclave.
+
+Graphene provides the OS features to in-enclave applications, by implementing them inside the SGX
+enclaves. To secure their applications, developers can directly load native, unmodified binaries
+into enclaves, with no/minimal porting efforts. Graphene provides a signing tool to sign all
+binaries that are loaded into the enclave (technically, the application manifest, which contains
+hashes and URIs of these binaries, is signed), similar to the Intel SGX SDK workflow.
+
+## How to Build Graphene with Intel SGX Support?
+
+Refer to the [[Quick Start | Graphene-SGX Quick Start]] page on how to build and run Graphene-SGX.
+
+### Prerequisites
+
+Porting and running an application on Intel SGX with Graphene-SGX involves two parties: the
+developer and the untrusted host (for testing purposes, the same host may represent both parties).
+The developer builds and signs the bundle of Graphene plus the target application(s). Developers/
+users then ship the signed bundle to the untrusted host and run it inside the SGX enclave(s) to
+secure their workloads.
+
+The prerequisites to build Graphene are detailed in
+[[Prerequisites of Graphene | Home#what-is-the-prerequisite-of-running-my-applications-in-graphene]].
+
+### Prerequisites for Developer
+
+To build Graphene with Intel SGX support, simply run `make SGX=1` instead of `make` at
+the root of the source tree (or in the PAL directory if the rest of the source is already built).
+Like regular Graphene, `DEBUG=1` can be used to build with debug symbols. After compiling the
+source, a PAL enclave binary (`libpal-enclave.so`) is created, along with the untrusted loader
+(`pal-sgx`) to load the enclave.
+
+Note that building Graphene and signing the application manifests do *not* require an SGX-enabled
+CPU on the developer's machine (except for testing purposes).
+
+A 3072-bit RSA private key (PEM format) is required for signing the application manifests. The
+default key is placed under `Pal/src/host/Linux-SGX/signer/enclave-key.pem`, or can be specified
+through the environment variable `SGX_SIGNER_KEY` when building Graphene with Intel SGX
+support. If you don't have a private key, create one with the following command:
+
+    openssl genrsa -3 -out enclave-key.pem 3072
+
+To port an application to SGX, one must use the signing tool (`Pal/src/host/Linux-SGX/signer/pal-sgx-sign`)
+to generate a valid enclave signature (`SIGSTRUCT` as defined in the
+[Programming Reference](https://software.intel.com/sites/default/files/managed/48/88/329298-002.pdf)).
+The signing tool takes the PAL enclave binary, application binaries, a manifest and all
+supporting binaries (including the library OS). It then generates the SGX-specific manifest
+(a `.manifest.sgx` file) and the enclave signature (a `.sig` file).
+
+After signing the manifest, users may ship the application files together with Graphene itself,
+along with an SGX-specific manifest and the signatures, to the untrusted host that has Intel SGX.
+Please note that all supporting binaries must be shipped and placed at the same paths as on the
+developer's machine. For security reasons, Graphene will not allow loading any binaries that are
+not signed/hashed.
+
+For applications that are prepared in the Graphene apps directory, such as GCC, Apache, and Bash
+(more are listed in [[Run Applications in Graphene]]), just type `make SGX=1` in the corresponding
+directory. The scripts are automated to build the applications and sign their manifests in order
+to ship them to the untrusted host.
+
+If you are simply testing the applications, you may build and run the applications on the same host
+(which must be SGX-enabled). In production scenarios, building and running the applications on the
+same host is mostly meaningless.
+
+### Prerequisites for Untrusted Host
+
+To run the applications on Intel SGX with Graphene-SGX, the host must have an SGX-enabled CPU, with
+Intel SGX SDK and the SGX driver installed. Please download and install the SDK and the driver from:
+<https://github.com/01org/linux-sgx> and <https://github.com/01org/linux-sgx-driver>.
+
+A Graphene SGX driver (gsgx) also needs to be installed on the untrusted host. Simply run the
+following commands to build the driver:
+
+    cd Pal/src/host/Linux-SGX/sgx-driver
+    make
+    # the console will prompt you for the path of the Intel SGX driver code
+    sudo ./load.sh
+
+If the Graphene SGX driver is successfully installed, and the Intel SDK aesmd service is up and
+running (see [here](https://github.com/01org/linux-sgx#start-or-stop-aesmd-service) for more
+information), one can acquire an enclave token to launch Graphene with the application. Use the
+token tool `Pal/src/host/Linux-SGX/signer/pal-sgx-get-token` to connect to the aesmd service
+and retrieve the token.
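+
+As a sketch of a manual invocation (the exact flag names are an assumption based on typical usage
+of these tools; the Makefiles under `LibOS/shim/test/` contain the authoritative commands):
+
+    # developer's machine: sign the manifest, producing app.manifest.sgx and app.sig
+    # (flag names below are assumptions -- check the signer tool and the test Makefiles)
+    Pal/src/host/Linux-SGX/signer/pal-sgx-sign \
+        -key Pal/src/host/Linux-SGX/signer/enclave-key.pem \
+        -libpal Pal/src/host/Linux-SGX/libpal-enclave.so \
+        -manifest app.manifest -output app.manifest.sgx
+
+    # untrusted host: retrieve a launch token (app.token) from the aesmd service
+    Pal/src/host/Linux-SGX/signer/pal-sgx-get-token -sig app.sig -output app.token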
+
+For applications that are prepared in the Graphene apps directory (GCC, Apache, Bash, etc.), type
+`make SGX_RUN=1` in the corresponding directory. The scripts are automated to retrieve the tokens
+for the applications.
+
+With the manifest (`.manifest.sgx`), the signature (`.sig`), and the token (`.token`) ready, one
+can launch Graphene-SGX to run the application. Graphene-SGX provides three options for specifying
+the programs and manifest files:
+
+    Option 1: (automatic manifest)
+    SGX=1 [PATH_TO_PAL]/pal [PROGRAM] [ARGUMENTS]...
+    (Manifest file: "[PROGRAM].manifest.sgx")
+
+    Option 2: (given manifest)
+    SGX=1 [PATH_TO_PAL]/pal [MANIFEST] [ARGUMENTS]...
+
+    Option 3: (manifest as a script)
+    SGX=1 [PATH_TO_MANIFEST]/[MANIFEST] [ARGUMENTS]...
+    (Manifest must have "#![PATH_TO_PAL]/pal" as the first line)
+

+ 228 - 0
Documentation/oldwiki/Introduction-to-Graphene.md

@@ -0,0 +1,228 @@
+## What is Graphene Library OS?
+
+**Graphene library OS** is a lightweight guest OS that supports Linux multi-process applications.
+Graphene runs applications in an isolated environment, with guest customization, ease of porting
+to different OSes, and process migration (similar to a container or a virtual machine). The work
+was originally published in the proceedings of
+[Eurosys 2014](https://oscarlab.github.io/papers/tsai14graphene.pdf).
+
+Graphene supports running Linux applications using the Intel SGX (Software Guard Extensions)
+technology (we call this version **Graphene-SGX**). With Intel SGX, applications are secured in
+hardware-encrypted memory regions (called SGX enclaves). SGX protects code and data in the
+enclave against privileged software attacks and against physical attacks on the hardware off the
+CPU package (e.g., cold-boot attacks on RAM). Graphene is able to run unmodified applications
+inside SGX enclaves, without the toll of manually porting the application to the SGX environment.
+For more information about SGX support, see [[Introduction to Graphene-SGX]].
+
+### What Hosts Does Graphene Currently Run On?
+
+Graphene was developed to encapsulate all host-specific code in one layer, called the Platform
+Adaptation Layer, or PAL. Thus, if there is a PAL for a given host, the library OS and applications
+will "just work".
+
+Porting Graphene to a new host only requires porting a PAL, by implementing the [[PAL Host ABI]]
+using OS features of the host. To date, we ported Graphene to FreeBSD and Linux (the latter also
+with Intel SGX support). Support for more hosts is expected in the future.
+
+#### Check out Application Test Cases
+
+To get the application test cases, run the following command from the root of the source tree:
+
+    git submodule update --init -- LibOS/shim/test/apps/
+
+See [[Run Applications in Graphene]] for instructions on how to run each application.
+
+
+## Prerequisites
+
+Graphene has been tested to build and install on Ubuntu 16.04/18.04 with Linux kernel 4.4+.
+We recommend building and installing Graphene on the same setup. If Graphene does not work on
+another Linux distribution, please submit a bug report.
+
+To install the prerequisites of Graphene on Ubuntu, run the following command:
+
+    sudo apt-get install -y build-essential autoconf gawk bison
+
+To build Graphene for SGX, run the following command in addition:
+
+    sudo apt-get install -y python-protobuf
+
+To run tests, you also need the python3-pytest package:
+
+    sudo apt-get install -y python3-pytest
+
+## Build and Run Graphene
+
+See [[Graphene Quick Start]] for instructions how to quickly build and run Graphene.
+
+### Obtain Source Code
+
+The latest version of Graphene can be cloned from GitHub:
+
+    git clone https://github.com/oscarlab/graphene.git
+
+### Build Graphene
+
+To build Graphene, simply run the following commands in the root of the source tree:
+
+    git submodule update --init -- Pal/src/host/Linux-SGX/sgx-driver/
+    make
+
+Each part of Graphene can be built separately in the corresponding subdirectories.
+
+To build Graphene with debug symbols, run `make DEBUG=1` instead of `make`. You may have to run
+`make clean` first if you have previously compiled the source code. To specify custom mirrors for
+downloading the Glibc sources, use `GLIBC_MIRRORS=...` when running `make`. To build with `-Werror`,
+run `make WERROR=1`.
+
+Currently, Graphene has implemented [[these Linux system calls|Supported System Calls in Graphene]].
+Before running any application, you must confirm that all system calls required by the application
+executables and libraries are supported (or that unsupported system calls do not affect the
+functionality of the application).
+
+
+### Build with Kernel-Level Sandboxing (Optional)
+
+This feature is marked as EXPERIMENTAL and no longer exists in the master branch.
+See [EXPERIMENTAL/linux-reference-monitor](https://github.com/oscarlab/graphene/tree/EXPERIMENTAL/linux-reference-monitor).
+
+
+### Build with Intel SGX Support
+
+See [[Graphene-SGX Quick Start]] for instructions on how to build and run Graphene with
+Intel SGX support.
+
+
+#### Prerequisites
+
+(1) Generating signing keys
+
+A 3072-bit RSA private key (PEM format) is required for signing the application manifest. If you
+do not have a private key, create one with the following command:
+
+    openssl genrsa -3 -out enclave-key.pem 3072
+
+You can either put the generated key in the default path, `Pal/src/host/Linux-SGX/signer/enclave-key.pem`,
+or specify the key through the environment variable `SGX_SIGNER_KEY`.
+
+After signing the application manifest, users may ship the application binaries, the manifest, and
+the signature together with the Graphene binaries to an SGX-enabled system.
+
+(2) Installing Intel SGX SDK and SGX driver
+
+The Intel SGX SDK and the SGX driver are required for running Graphene. Download and install them
+from the official Intel GitHub repositories:
+
+   - <https://github.com/01org/linux-sgx>
+   - <https://github.com/01org/linux-sgx-driver>
+
+To make Graphene aware of the SGX driver, run the following commands:
+
+    cd Pal/src/host/Linux-SGX/sgx-driver
+    make
+    # the console will prompt you for the path of the Intel SGX driver code
+    sudo ./load.sh
+
+#### Build Graphene for SGX
+
+To build Graphene with Intel SGX support, in the root directory of the Graphene repo, run
+the following command:
+
+    make SGX=1
+
+To build with debug symbols, instead run the following command:
+
+    make SGX=1 DEBUG=1
+
+Using `make SGX=1` in the test or regression directory will automatically generate the required
+manifest signatures (.sig files).
+
+### Run Applications in Graphene
+
+Graphene uses the PAL binary as a loader to bootstrap applications in the library OS. To start
+Graphene, the PAL runs as an executable, taking the name of the program and/or the manifest file
+as command-line arguments. Please see [[Graphene Manifest Syntax]] for more information regarding
+the manifest files.
+
+We provide a loader script, `pal_loader`, for the convenience of giving run-time options to the
+PAL loader. Via `pal_loader`, Graphene provides three options for specifying the program and the
+manifest file:
+
+    Option 1: (automatic manifest)
+    [PATH_TO_Runtime]/pal_loader [PROGRAM] [ARGUMENTS]...
+    (Manifest file: "[PROGRAM].manifest" or "manifest")
+
+    Option 2: (given manifest)
+    [PATH_TO_Runtime]/pal_loader [MANIFEST] [ARGUMENTS]...
+
+    Option 3: (manifest as a script)
+    [PATH_TO_MANIFEST]/[MANIFEST] [ARGUMENTS]...
+    (Manifest must have "#![PATH_TO_PAL]/libpal.so" as the first line)
+
+Running an application requires some minimal configuration in the application's manifest file.
+A sensible manifest file will include paths to the library OS and the Glibc library, environment
+variables such as `LD_LIBRARY_PATH`, and file systems to be mounted.
+
+Here is an example manifest file:
+
+    loader.preload = file:[relative path to Graphene root]/LibOS/shim/src/libsysdb.so
+    loader.env.LD_LIBRARY_PATH = /lib
+
+    fs.mount.libc.type = chroot
+    fs.mount.libc.path = /lib
+    fs.mount.libc.uri = file:[relative path to Graphene root]/Runtime
+
+More examples can be found in the test directories (`LibOS/shim/test`). We have also tested several
+applications such as GCC, Bash, Redis, R, and Apache. The manifest files for these applications are
+provided in the individual directories under `LibOS/shim/test/apps`.
+
+For the full documentation of the Graphene manifest syntax, please see the following pages:
+[[Graphene Manifest Syntax]] and [[Graphene-SGX Manifest Syntax]].
+
+For more details about running tested/benchmarked applications in Graphene, please see this page:
+[[Run Applications in Graphene]].
+
+
+#### Run Built-in Examples in Graphene-SGX
+
+(1) Build and run `helloworld` with Graphene-SGX:
+
+- Go to `LibOS/shim/test/native` and sign all the test programs:
+
+      make SGX=1
+
+- Generate launch tokens from the aesmd service:
+
+      make SGX_RUN=1
+
+- Run `helloworld` with Graphene-SGX:
+
+      SGX=1 ./pal_loader helloworld  or  ./pal_loader SGX helloworld
+
+(2) Build and run the Python `helloworld.py` script with Graphene-SGX:
+
+- Go to `LibOS/shim/test/apps/python` and sign the application:
+
+      make SGX=1
+
+- Generate a launch token from the aesmd service:
+
+      make SGX_RUN=1
+
+- Run the `helloworld.py` script with Graphene-SGX:
+
+      SGX=1 ./python.manifest.sgx scripts/helloworld.py
+
+
+## How Do I Contribute to the Project?
+
+Some documentation that might be helpful:
+
+* [[PAL Host ABI]]
+* [[Porting Graphene PAL to Other hosts]]
+
+## How to Contact the Maintainers?
+
+For any questions or bug reports, please send an email to support@graphene-project.io
+or post an issue on our GitHub repository: https://github.com/oscarlab/graphene/issues.
+

+ 0 - 68
Documentation/oldwiki/Introduction-to-Intel-SGX-Support.md

@@ -1,68 +0,0 @@
-# Introduction to Intel SGX Support
-## What is Intel SGX?
-
-SGX (Software Guard Extension) is a new feature of the latest Intel CPUs. According to <https://github.com/ayeks/SGX-hardware>, SGX is available in CPUs that were launched after October 1st, 2015.
-
-Intel SGX is designed to protect critical applications against potentially malicious system stack, from the operating systems to hardware (CPU itself excluded). SGX creates a hardware encrypted memory region (so-called **enclaves**) from the protected applications, that neither compromised operating systems, nor hardware attack such as **cold-boot attack** can retrieve the application secrets.
-
-## Why use Graphene Library OS for Intel SGX?
-
-Porting applications to Intel SGX platform can be cumbersome. To secure an application with SGX, developers must recompile the application executable with the Intel SDK (Linux SDK: <https://github.com/01org/linux-sgx>). Moreover, the secured applications have _no_ access to any OS features, such as opening a file, creating a network connection, or cloning a thread. For any interaction with the host, developers must define untrusted interfaces that the secure applications can call to leave the enclaves.
-
-Graphene Library OS provides the OS features needed by the applications, right inside the SGX enclaves. To secure any applications, developers can directly load native, unmodified binaries into enclaves, with minimal porting efforts. Graphene Library OS provides signing tool to sign all binaries that are loaded into the enclaves, just like the Intel SGX SDK.
-
-## How to build with Intel SGX Support?
-
-Here is a [[Quick Start | SGX Quick Start]] instruction for how to build and run Graphene with minimal commands.
-
-### Prerequisite
-
-To port applications into SGX enclaves with Graphene Library OS, the process is often split into two sides: the developers' side and the untrusted hosts' side (for testing purpose, both sides can be on the same host). The developers' side will build and sign Graphene library OS with the target applications. Users then ship the signed enclave images to the untrusted hosts and run in enclaves to secure the applications.
-
-The support for SGX in Graphene library OS is currently developed on top of 64-bit Ubuntu Linux. To build with SGX support, first the prerequisite of Graphene is required (see here: [[Prerequisite of Graphene | Home#what-is-the-prerequisite-of-running-my-applications-in-graphene]]). The signer used on the developers' side requires Python 2.7+ and OpenSSL (optional).
-
-### Developers' Side
-
-To build Graphene Library OS with Intel SGX support, simply run `make SGX=1` instead of `make` at the root of the source tree (or in Pal directory if the rest of the source is already built). Like the regular Graphene, `DEBUG=1` can be used to build with debug symbols. After compiling the source, a PAL enclave binary (`libpal-enclave.so`) will be created, along with the untrusted loader (`pal-sgx`) to load the enclave.
-
-Note that building Graphene Library OS and signing the applications does NOT require SGX-enabled CPUs or Intel SGX SDK on the developers' machines (except for testing purposes).
-
-A 3072-bit RSA private key (PEM format) is required for signing the applications. The default enclave key is supposed to be placed in `Pal/src/host/Linux-SGX/signer/enclave-key.pem`, or can be specified through environment variable `SGX_ENCLAVE_KEY when building Graphene with Intel SGX support. If you don't have a private key, create it with the following command:
-
-    openssl genrsa -3 -out enclave-key.pem 3072
-
-To port an application into SGX enclave, developers must use the Graphene signing tool (`Pal/src/host/Linux-SGX/signer/pal-sgx-sign`) to generate valid enclave signatures (`SIGSTRUCT` as defined in the [Programming Reference](https://software.intel.com/sites/default/files/managed/48/88/329298-002.pdf)). The signing tool takes the built PAL enclave binary, application binaries, a manifest and all supporting binaries (including the library OS). It then generates the SGX-specific manifest (a `.manifest.sgx` file) and the enclave signature (a `.sig` file). 
-
-After signing the application, users may ship the application files with the built Graphene Library OS, along with a SGX-specific manifest and the signatures, to the untrusted hosts that are enabled with Intel SGX. Please note that all supporting binaries must be shipped and placed at the same path as on the developers' host. For security reasons, Graphene library OS will not allow loading any binaries that are not signed.
-
-For applications that are prepared in the Graphene library source, such as GCC, Apache and OpenJDK (more are listed in [[Run Applications in Graphene]]), just type 'make SGX=1' in the correspondent directories. The applications can be found in `LibOS/shim/test/apps`. The scripts are automated to build and sign the applications that are ready for shipment.
-
-If you are simply testing the applications, you may build an run the applications on the same host (must be SGX-enabled). In real use cases, building and running the applications on the same host is mostly meaningless.
-
-### Untrusted Hosts' Side
-
-To run the applications in SGX enclave with Graphene library OS, the untrusted hosts must have SGX-enabled CPUs, with the Intel SGX SDK installed. Download and install the SDK from the official Intel github repositories: <https://github.com/01org/linux-sgx> and <https://github.com/01org/linux-sgx-driver>
-
-A Graphene SGX driver also needs to be installed on the untrusted host. Simply run the following command to build the driver:
-
-    cd Pal/src/host/Linux-SGX/sgx-driver
-    make
-    (The console will be prompted to ask for the path of Intel SGX driver code)
-    sudo ./load.sh
-
-If the Graphene SGX driver is successfully installed, and the Intel SDK aesmd service is up and running (see [here](https://github.com/01org/linux-sgx#start-or-stop-aesmd-service) for more information), we can acquire enclave token to launch Graphene library OS. Use the token tool `Pal/src/host/Linux-SGX/signer/pal-sgx-get-token` to connect with the aesmd service and retrieve the token.
-
-For applications that are prepared in the Graphene library OS source, just type 'make SGX_RUN=1' in the correspondent directories. The scripts are automated to retrieve the tokens for the applications.
-
-With the manifest (`.manifest.sgx`), the signature (`.sig`) and the token (`.token`) ready, we can now launch Graphene Library OS to run the application. Graphene provides three options for specifying the programs and manifest files:
-
-    option 1: (automatic manifest)
-    [PATH_TO_PAL]/pal [PROGRAM] [ARGUMENTS]...
-    (Manifest file: "[PROGRAM].manifest.sgx")
-
-    option 2: (given manifest)
-    [PATH_TO_PAL]/pal [MANIFEST] [ARGUMENTS]...
-
-    option 3: (manifest as a script)
-    [PATH_TO_MANIFEST]/[MANIFEST] [ARGUMENTS]...
-    (Manifest must have "#![PATH_TO_PAL]/pal" as the first line)

+ 0 - 63
Documentation/oldwiki/Manifest-Syntax.md

@@ -1,63 +0,0 @@
-# Manifest Syntax
-## Basic Syntax
-
-A manifest file is a binary-specific configuration file that specifies the environment and resources of a Graphene library OS instance. A manifest file must be a plain text file, with configuration entries separated by _line breaks_. Each configuration entries must be in the following format: _(Spaces/Tabs before the first key and before/after the equal mark are ignored.)_
-
-    [Key][.Key][.Key] = [Value]
-
-Comments can be inlined in a manifest, by preceding them with a _sharp sign (#)_. Any texts behind a _sharp sign (#)_ will be considered part of a comment and be discarded while loading the manifest file.
-
-## Loader-related (required by PAL)
-
-### Executable (REQUIRED)
-    loader.exec=[URI]
-This syntax specifies the executable to be loaded into the library OS. The executable must be an ELF-format binary, with a defined entry point to start its execution.
-
-### Preloaded libraries
-    loader.preload=[URI][,URI]...
-This syntax specifies the libraries to be preloaded before loading the executable. The URI of the libraries will be separated by _commas(,)_. The libraries must be ELF-format binaries, and may or may not have a defined entry point. If the libraries have their entry points, the entry points will be executed before jumping to the entry point of the executable, in the order as they are listed.  
-
-### Executable name
-    loader.execname=[STRING]
-This syntax specifies the executable name given as the first argument to the binaries (the executable and preloaded libraries). If the executable name is not specified in the manifest, PAL will use the URI of the executable or manifest as the first argument when executing the executable. In some circumstance, the executable name has to be specified so the binaries can re-execute the executable or determine their functionalities. 
-
-### Environment variables
-    loader.env.[ENVIRON]=[VALUE]
-By default, the environment variables on the host will be passed to the binaries in the library OSes. This syntax specifies the environment variable values that are customized for the library OSes. This syntax can be used for multiple times to specify more than one environment variables, and the environment variables can be deleted by giving a empty value.  
-
-### Debug Type (DEFAULT:none)
-    loader.debug_type=[none|inline]
-This syntax specifies the debug option while executing the library OSes. If the debug type is _none_, no debug output will be printed to the screen. If the debug type is _inline_, a dmesg-like debug output will be printed inlined with standard output.
-
-### System redirection symbol
-    loader.syscall_symbol=[SYMBOL NAME]
-This syntax specifies the ELF dynamic symbol name in preloaded libraries to redirect system calls directly made by the executables. Graphene does not allow executables to make direct system calls to the host kernel because of the security concerns. By default, any direct system calls made by using inline assembly code in the executable will be rejected by _Seccomp filter_ installed in the host. However, for better compatibility, the host can also redirect the system call numbers and arguments to a given symbol inside the library OSes. 
-
-## System-related (required by LibOS)
-
-### Stack size
-    sys.stack.size=[# of bytes]
-This syntax specifies the stack size of the first thread in each Graphene process. The default value of stack size is determined by the library OSes.
-
-### Program break size
-    sys.brk.size=[# of bytes]
-This syntax specifies the program break (_brk_) size in each Graphene process. The default value of program break size is determined by the library OSes.
-
-### Enable checkpointing at interruption
-    sys.ask_for_checkpoint=1
-This syntax enables the checkpointing feature, which will be triggered when interruption is sent to a Graphene instance.
-
-## FS-related (required by LibOS and reference monitor)
-
-### Mount points (REQUIRED)
-    fs.mount.[identifier].path=[PATH]
-    fs.mount.[identifier].type=[chroot|...]
-    fs.mount.[identifier].uri=[URI]
-This syntax specifies how the FSes are mounted inside the library OSes. This syntax is almost required for all binaries, because the GNU Library C must be at least mounted somewhere in the library OSes.
-
-## Network-related (required by LibOS and reference monitor)
-
-### Allowed network connection
-    net.allow_bind.[identifier]=[local address]:[local port[-local port]]
-    net.allow_peer.[identifier]=[remote address]:[remote port[-remote port]]
-This syntax specifies the network rules for creating connection or binding on local interfaces. Local/remote addresses may be IPv4 or IPv6 addresses, and local/remote ports can be one single port number or a port range. Any of the addresses or ports can be empty to indicate _ANY_ address or port.

+ 268 - 231
Documentation/oldwiki/PAL-Host-ABI.md

@@ -1,33 +1,52 @@
-# PAL Host ABI
 ## What is Graphene's PAL Host ABI
 
-PAL Host ABI is the interface used by Graphene library OS to interact with its hosts. It is translated into the hosts' native ABI (e.g. system calls for UNIX), by a layer called PAL (platform adaption layer). A PAL not only exports a set of APIs (PAL APIs) that can be called by the library OS, but also act as the loader that bootstraps the library OS. The design of PAL Host ABI strictly follows three primary principles, to guarantee functionality, security, and platform compatibility:  
+PAL Host ABI is the interface used by Graphene to interact with its host. It is translated into
+the host's native ABI (e.g. system calls for UNIX) by a layer called the Platform Adaptation Layer
+(PAL). A PAL not only exports a set of APIs (PAL APIs) that can be called by the library OS, but
+also acts as the loader that bootstraps the library OS. The design of PAL Host ABI strictly follows
+three primary principles, to guarantee functionality, security, and portability:
 
 * The host ABI must be stateless.
 * The host ABI must be a narrowed interface to reduce the attack surface.
-* The host ABI must be generic and independent from the native ABI on the hosts.
+* The host ABI must be generic and independent from the native ABI of any of the supported hosts.
 
-Most of the PAL Host ABI are adapted from _Drawbridge_ library OS.
+Most of the PAL Host ABI is adapted from the Drawbridge library OS.
 
 ## PAL as Loader
 
-Regardless of the actual implementation, we require PAL to be able to load ELF-format binaries as executables or dynamic libraries, and perform the necessary dynamic relocation. PAL will need to look up all unresolved symbols in loaded binaries, and resolve the ones matching the name of PAL APIs (_Important!!!_). PAL does not and will not resolve other unresolved symbols, so the loaded libraries and executables must resolve them afterwards. 
+Regardless of the actual implementation, we require PAL to be able to load ELF-format binaries
+as executables or dynamic libraries, and perform the necessary dynamic relocation. PAL needs
+to look up all unresolved symbols in loaded binaries and resolve the ones matching the names of
+PAL APIs. PAL does not and will not resolve other unresolved symbols, so the loaded libraries and
+executables must resolve them afterwards.
 
-After loading the binaries, PAL needs to load and interpret the manifest files. The manifest syntax will be described in [[Manifest Syntax]].
+After loading the binaries, PAL needs to load and interpret the manifest files. The manifest syntax
+is described in [[Graphene Manifest Syntax]].
 
-After PAL fully initialized the process, it will jump to the entry points of libraries and/or executables to start the execution. When jumping to the entry points, arguments, environment variables and auxiliary vectors must be pushed to the stack as the UNIX calling convention.
+### Manifest and Executable Loading Rules
 
-### Manifest and Executable Loading Rules 
-
-The PAL loader supports multiple ways of locating the manifest and executable. To run a program in Graphene properly, the PAL loader generally requires both a manifest and an executable, although it is possible to load with only one of them. The user shall specify either the manifest and the executable to load in the command line, and the PAL loader will try to locate the other based on the file name or content.
+The PAL loader supports multiple ways of locating the manifest and executable. To run a program
+in Graphene properly, the PAL loader generally requires both a manifest and an executable,
+although it is possible to load with only one of them. The user shall specify either the manifest
+or the executable to load in the command line, and the PAL loader will try to locate the other
+based on the file name or content.
 
 Precisely, the loading rules for the manifest and executable are as follows:
 
-1. The first argument given to the PAL loader (e.g., `pal-Linux`, `pal-Linux-SGX`, `pal-FreeBSD`, or the cross-platform wrapper, `pal-loader`) can be either a manifest or an executable.
-2. If an executable is given to the command line, the loader will search for the manifest in the following order: the same file name as the executable with a `.manifest` or `.manifest.sgx` extenstion, a `manifest` file without any extension, or no manifest at all.
-3. If a manifest is given to the command line, and the manifest contains a `loader.exec` rule, then the rule is used to determine the executable. The loader should exit if the executable file doesn't exist.
-4. If a manifest is given to the command line, and the manifest DOES NOT contain a `loader.exec rule`, then the manifest MAY be used to infer the executable. The potential executable file has the same file name as the manifest file except it doesn't have the `.manifest` or `.manifest.sgx` extension.
-5. If a manifest is given to the command line, and no executable file can be found either based on any `loader.exec` rule or inferring from the manifest file, then no executable is used for the execution.   
+1. The first argument given to the PAL loader (e.g., `pal-Linux`, `pal-Linux-SGX`, `pal-FreeBSD`,
+or the cross-platform wrapper, `pal-loader`) can be either a manifest file or an executable.
+2. If an executable is given to the command line, the loader will search for the manifest in the
+following order: the same file name as the executable with a `.manifest` or `.manifest.sgx` extension,
+a `manifest` file without any extension, or no manifest at all.
+3. If a manifest is given to the command line, and the manifest contains a `loader.exec` rule,
+then the rule is used to determine the executable. The loader should exit if the executable file
+doesn't exist.
+4. If a manifest is given to the command line, and the manifest does *not* contain a `loader.exec` rule,
+then the manifest *may* be used to infer the executable. The potential executable file has the same
+file name as the manifest file except it doesn't have the `.manifest` or `.manifest.sgx` extension.
+5. If a manifest is given to the command line, and no executable file can be found either based on
+any `loader.exec` rule or inferring from the manifest file, then no executable is used for the
+execution.
 
 
 ## Data Types and Variables
@@ -36,46 +55,54 @@ Precisely, the loading rules for the manifest and executable are as follows:
 
 #### PAL handles
 
-The PAL handles are identifiers that are returned by PAL when opening or creating resources. The basic data structure of a PAL handle is defined as follows:
+The PAL handles are identifiers that are returned by PAL when opening or creating resources. The
+basic data structure of a PAL handle is defined as follows:
 
     typedef union pal_handle {
         struct {
             PAL_IDX type;
-            PAL_REF ref;
-            PAL_FLG flags;
-        } __in;
-        (Other resource-specific definitions)
+        } hdr;
+        /* other resource-specific definitions */
     } PAL_HANDLE;
 
-As shown above, a PAL handle is usually defined as a _union_ data type that contain different subtypes that represent each resources such as files, directories, pipes or sockets. The actual memory allocated for the PAL handles may be variable-sized. 
+As shown above, a PAL handle is usually defined as a `union` data type that contains different
+subtypes, each representing a kind of resource such as a file, directory, pipe, or socket. The
+actual memory allocated for a PAL handle may be variable-sized.
 
 #### Numbers and Flags
 
-_PAL_NUM_ and _PAL_FLG_ represent the integers used for numbers and flags. On x86-64, they are defined as follows:
+`PAL_NUM` and `PAL_FLG` types represent integers and flags. On x86-64, they are defined as follows:
+
+    typedef uint64_t      PAL_NUM;
+    typedef uint32_t      PAL_FLG;
 
-    typedef unsigned long PAL_NUM;
-    typedef unsigned int  PAL_FLG;
-  
 #### Pointers, Buffers and Strings
 
-_PAL_PTR_ and _PAL_STR_ represent the pointers that point to memory, buffers and strings.  On x86_64, they are defined as follows:
+`PAL_PTR` and `PAL_STR` types represent pointers that point to memory, buffers, and strings.
+On x86-64, they are defined as follows:
 
-    typedef const char *  PAL_STR;
-    typedef void *        PAL_PTR;
+    typedef const char*   PAL_STR;
+    typedef void*         PAL_PTR;
 
 #### Boolean Values
 
-_PAL_BOL_ represents the boolean values that will solely contain either _True_ or _False_. This data type is commonly used as the return values of many PAL APIs to determine whether the call has succeeded. The value of _PAL_BOL_ could be either _PAL_TRUE_ or _PAL_FALSE_. On x86_64, they are defined as follows:
+`PAL_BOL` type represents boolean values (either `PAL_TRUE` or `PAL_FALSE`). This data type is
+commonly used as the return value of a PAL API to determine whether the call succeeded. On x86-64,
+it is defined as follows:
 
     typedef bool          PAL_BOL;
- 
+
 ### Graphene Control Block
 
-The control block in Graphene is a structure that provides static information of the current process and its host. It is also a dynamic symbol that will be linked by library OSes and resolved at runtime. Sometimes, for the flexibility or the convenience of dynamic resolution, the address of the control block may be resolved by a function (_pal_control_addr()_).
+The control block in Graphene is a structure that provides static information about the current
+process and its host. It is also a dynamic symbol that will be linked by the library OS and resolved
+at runtime. Sometimes, for the flexibility or the convenience of the dynamic resolution, the
+address of the control block may be resolved by a function (`pal_control_addr()`).
 
-The members of Graphene control block are defined as follows:
+The fields of the Graphene control block are defined as follows:
 
     typedef struct {
+        PAL_STR host_type;
         /* An identifier of current picoprocess */
         PAL_NUM process_id;
         PAL_NUM host_id;
@@ -109,99 +136,116 @@ The members of Graphene control block are defined as follows:
         PAL_CPU_INFO cpu_info;
         /* Memory information */
         PAL_MEM_INFO mem_info;
+
+        /* Attestation information */
+        PAL_STR attestation_status;
+        PAL_STR attestation_timestamp;
+
+        /* Purely for profiling */
+        PAL_NUM startup_time;
+        PAL_NUM host_specific_startup_time;
+        PAL_NUM relocation_time;
+        PAL_NUM linking_time;
+        PAL_NUM manifest_loading_time;
+        PAL_NUM allocation_time;
+        PAL_NUM tail_startup_time;
+        PAL_NUM child_creation_time;
     } PAL_CONTROL;
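+
+As an illustration, below is a minimal, hypothetical sketch (not taken from the Graphene sources)
+of how a library OS might read fields of the control block after resolving its address with
+`pal_control_addr()`:
+
+    /* hypothetical sketch; assumes the PAL control block has already been initialized */
+    PAL_CONTROL* ctrl = pal_control_addr();
+    PAL_NUM pid  = ctrl->process_id;  /* identifier of the current picoprocess */
+    PAL_NUM host = ctrl->host_id;     /* identifier of the current host */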
 
 ## PAL APIs
 
-The PAL APIs contain _44_ functions that can be called from the library OSes.
+The PAL APIs contain 44 functions that can be called from the library OS.
 
-### Memory allocation
+### Memory Allocation
 
 #### DkVirtualMemoryAlloc
 
-    PAL_PTR
-    DkVirtualMemoryAlloc (PAL_PTR addr, PAL_NUM size, PAL_FLG alloc_type,
-                          PAL_FLG prot);
+    PAL_PTR DkVirtualMemoryAlloc(PAL_PTR addr, PAL_NUM size, PAL_FLG alloc_type, PAL_FLG prot);
 
-This API allocates virtual memory for the library OSes. _addr_ can be either _NULL_ or any valid addresses that are aligned by the allocation alignment. When _addr_ is non-NULL, the API will try to allocate the memory at the given address, potentially rewrite any memory previously allocated at the same address. Overwriting any part of PAL and host kernel is forbidden. _size_ must be a positive number, aligned by the allocation alignment. 
+This API allocates virtual memory for the library OS. `addr` can be either `NULL` or any valid
+address aligned at the allocation alignment. When `addr` is non-NULL, the API will try
+to allocate the memory at the given address and potentially rewrite any memory previously allocated
+at the same address. Overwriting any part of PAL and host kernel is forbidden. `size` must be a
+positive number, aligned at the allocation alignment.
 
-_alloc_type_ can be a combination of any of the following flags:
+`alloc_type` can be a combination of any of the following flags:
 
     /* Memory Allocation Flags */
-    #define PAL_ALLOC_32BIT       0x0001   /* Only give out 32-bit addresses */
-    #define PAL_ALLOC_RESERVE     0x0002   /* Only reserve the memory */
+    #define PAL_ALLOC_RESERVE     0x0001   /* Only reserve the memory */
+    #define PAL_ALLOC_INTERNAL    0x8000   /* Allocate for PAL */
 
-_prot_ can be a combination of the following flags:
+`prot` can be a combination of the following flags:
 
     /* Memory Protection Flags */
-    #define PAL_PROT_NONE       0x0     /* 0x0 Page can not be accessed. */
-    #define PAL_PROT_READ       0x1     /* 0x1 Page can be read. */
-    #define PAL_PROT_WRITE      0x2     /* 0x2 Page can be written. */
-    #define PAL_PROT_EXEC       0x4     /* 0x4 Page can be executed. */
-    #define PAL_PROT_WRITECOPY  0x8     /* 0x8 Copy on write */
+    #define PAL_PROT_NONE       0x0     /* Page can not be accessed */
+    #define PAL_PROT_READ       0x1     /* Page can be read */
+    #define PAL_PROT_WRITE      0x2     /* Page can be written */
+    #define PAL_PROT_EXEC       0x4     /* Page can be executed */
+    #define PAL_PROT_WRITECOPY  0x8     /* Copy on write */
 
 #### DkVirtualMemoryFree
 
-    void
-    DkVirtualMemoryFree (PAL_PTR addr, PAL_NUM size);
+    void DkVirtualMemoryFree(PAL_PTR addr, PAL_NUM size);
 
-This API deallocates a previously allocated memory mapping. Both _addr_ and _size_ must be non-zero and aligned by the allocation alignment.
+This API deallocates a previously allocated memory mapping. Both `addr` and `size` must be non-zero
+and aligned at the allocation alignment.
 
 #### DkVirtualMemoryProtect
 
-    PAL_BOL
-    DkVirtualMemoryProtect (PAL_PTR addr, PAL_NUM size, PAL_FLG prot);
+    PAL_BOL DkVirtualMemoryProtect(PAL_PTR addr, PAL_NUM size, PAL_FLG prot);
 
-This API modified the hardware protection of a previously allocated memory mapping. Both _addr_ and _size_ must be non-zero and aligned by the allocation alignment. _prot_ is defined as [[DkVirtualMemoryAlloc|PAL Host ABI#DkVirtualMemoryAlloc]].
+This API modifies the permissions of a previously allocated memory mapping. Both `addr` and
+`size` must be non-zero and aligned at the allocation alignment. `prot` is defined as
+[[DkVirtualMemoryAlloc|PAL Host ABI#DkVirtualMemoryAlloc]].
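+
+The following is a minimal, hypothetical sketch (not taken from the Graphene sources) that combines
+the three memory APIs above; it assumes that passing `0` as `alloc_type` requests a plain
+allocation and that `size` is already aligned:
+
+    /* hypothetical sketch; error handling omitted */
+    PAL_NUM size = 0x10000;
+    PAL_PTR mem  = DkVirtualMemoryAlloc(NULL, size, /*alloc_type=*/0,
+                                        PAL_PROT_READ | PAL_PROT_WRITE);
+    if (mem) {
+        /* ... populate the region ... */
+        DkVirtualMemoryProtect(mem, size, PAL_PROT_READ);  /* drop write permission */
+        DkVirtualMemoryFree(mem, size);
+    }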
 
 ### Process Creation
 
 #### DkProcessCreate
 
-    PAL_HANDLE
-    DkProcessCreate (PAL_STR uri, PAL_FLG flags, PAL_STR * args);
+    PAL_HANDLE DkProcessCreate(PAL_STR uri, PAL_STR* args);
 
-This API creates a new process to run a separated executable. _uri_ is the URI of the manifest file or the executable to be loaded in the new process. _flags_ is currently unused. _args_ is an array of strings as the arguments to be passed to the new process.
+This API creates a new process to run a separate executable. `uri` is the URI of the manifest file
+or the executable to be loaded in the new process. `args` is an array of strings -- the arguments
+to be passed to the new process.
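+
+A minimal, hypothetical sketch (the URI and arguments below are made up for illustration):
+
+    /* hypothetical sketch: spawn a child process from an executable URI */
+    const char* args[] = { "child_program", "--worker", NULL };
+    PAL_HANDLE child = DkProcessCreate("file:child_program", args);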
 
 #### DkProcessExit
 
-    void
-    DkProcessExit (PAL_NUM exitCode);
-
-This API terminates all threads in the process immediately. _exitCode_ with be exit value returned to the host.
-
-#### DkProcessSandboxCreate
+    void DkProcessExit(PAL_NUM exitCode);
 
-    #define PAL_SANDBOX_PIPE         0x1
-    PAL_BOL
-    DkProcessSandboxCreate (PAL_STR manifest, PAL_FLG flags);
-
-This API loads a new manifest file and inform the reference monitor to create a new sandbox. _manifest_ will be the URI of the manifest file to be loaded. If _PAL_SANDBOX_PIPE_ is given in _flags_, reference monitor will isolate the RPC streams from other processes.
+This API terminates all threads in the process immediately. `exitCode` is the exit value returned
+to the host.
 
 ### Stream Creation/Connection/Open
 
 #### DkStreamOpen
 
-    PAL_HANDLE
-    DkStreamOpen (PAL_STR uri, PAL_FLG access, PAL_FLG share_flags,
-                  PAL_FLG create, PAL_FLG options);
+    PAL_HANDLE DkStreamOpen(PAL_STR uri, PAL_FLG access, PAL_FLG share_flags, PAL_FLG create,
+                            PAL_FLG options);
 
-This APIs open/create stream resources specified by _uri_. If the resource is successfully opened/created, a PAL handle will be returned for further access such as reading or writing. _uri_ is the URI of the stream to be opened/created. The following is a list of URIs that are supported in PAL:
+This API opens/creates a stream resource specified by `uri`. If the resource is successfully opened
+or created, a PAL handle will be returned for further access such as reading or writing. `uri` is
+the URI of the stream to be opened/created. The following is a list of URIs that are supported:
 
-* `file:...`, `dir:...`: Files or directories on the host file systems. If _PAL_CREAT_TRY_ is given in _create_, the file or directory will be created. 
-* `dev:...`: Opening devices as streams. For example, `dev:tty` represents the standard input/output.
-* `pipe.srv:<ID>`, `pipe:<ID>`, `pipe:`: Open a byte stream that can be used as RPC (remote procedure call) between processes. Pipes are located by numeric IDs. The server side of pipes can accept any number of connection. If `pipe:` is given as the URI, it will open a anonymous bidirectional pipe. 
-* `tcp.srv:<ADDR>:<port>`, `tcp:<ADDR>:<PORT>`: Opening a TCP socket to listen or connecting to remote TCP socket.
-* `udp.srv:<ADDR>:<PORT>`, `udp:<ADDR>:<PORT>`: Opening a UDP socket to listen or connecting to remote UDP socket.
+* `file:...`, `dir:...`: Files or directories on the host file system. If `PAL_CREAT_TRY` is given
+   in `create` flags, the file/directory will be created.
+* `dev:...`: Open a device as a stream. For example, `dev:tty` represents the standard I/O.
+* `pipe.srv:<ID>`, `pipe:<ID>`, `pipe:`: Open a byte stream that can be used for RPC between
+   processes. Pipes are located by numeric IDs. The server side of a pipe can accept any number
+   of connections. If `pipe:` is given as the URI, it will open an anonymous bidirectional pipe.
+* `tcp.srv:<ADDR>:<PORT>`, `tcp:<ADDR>:<PORT>`: Open a TCP socket to listen or connect to
+   a remote TCP socket.
+* `udp.srv:<ADDR>:<PORT>`, `udp:<ADDR>:<PORT>`: Open a UDP socket to listen or connect to
+   a remote UDP socket.
 
-_access_ can be a combination of the following flags:
+`access` can be a combination of the following flags:
 
     /* Stream Access Flags */
     #define PAL_ACCESS_RDONLY   00
     #define PAL_ACCESS_WRONLY   01
     #define PAL_ACCESS_RDWR     02
+    #define PAL_ACCESS_APPEND   04
 
-_share_flags_ can be a combination of the following flags:
+`share_flags` can be a combination of the following flags:
 
     /* Stream Sharing Flags */
     #define PAL_SHARE_GLOBAL_X    01
@@ -214,156 +258,148 @@ _share_flags_ can be a combination of the following flags:
     #define PAL_SHARE_OWNER_W   0200
     #define PAL_SHARE_OWNER_R   0400
 
-_create_ can be a combination of the following flags:
+`create` can be a combination of the following flags:
 
     /* Stream Create Flags */
-    #define PAL_CREAT_TRY        0100       /* 0100 Create file if file not
-                                               exist (O_CREAT) */
-    #define PAL_CREAT_ALWAYS     0200       /* 0300 Create file and fail if file
-                                               already exist (O_CREAT|O_EXCL) */
+    #define PAL_CREAT_TRY        0100  /* Create file if does not exist (O_CREAT) */
+    #define PAL_CREAT_ALWAYS     0200  /* Create file and fail if already exists (O_CREAT|O_EXCL) */
 
-_options_ can be a combination of the following flags:
+`options` can be a combination of the following flags:
 
     /* Stream Option Flags */
     #define PAL_OPTION_NONBLOCK     04000
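+
+Putting the flags together, here is a minimal, hypothetical sketch (the file name is made up) that
+opens a host file for reading and writing, creating it if it does not exist:
+
+    /* hypothetical sketch; error handling omitted */
+    PAL_HANDLE file = DkStreamOpen("file:example.txt",
+                                   PAL_ACCESS_RDWR,
+                                   PAL_SHARE_OWNER_R | PAL_SHARE_OWNER_W,
+                                   PAL_CREAT_TRY,
+                                   /*options=*/0);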
 
 #### DkStreamWaitForClient
 
-    PAL_HANDLE
-    DkStreamWaitForClient (PAL_HANDLE handle);
+    PAL_HANDLE DkStreamWaitForClient(PAL_HANDLE handle);
 
-This API is only available for handles that are opened with `pipe.srv:...`, `tcp.srv:...` and `udp.srv:...`. It will block until a new connection is accepted and return the PAL handle for the connection.
+This API is only available for handles that are opened with `pipe.srv:...`, `tcp.srv:...`, and
+`udp.srv:...`. It blocks until a new connection is accepted and returns the PAL handle for the
+connection.
 
 #### DkStreamRead
 
-    PAL_NUM
-    DkStreamRead (PAL_HANDLE handle, PAL_NUM offset, PAL_NUM count,
-                  PAL_PTR buffer, PAL_PTR source, PAL_NUM size);
-
-This API receives or reads data from an opened stream. If the handles are files, _offset_ must be specified at each call of DkStreamRead. _source_ and _size_ can be used to return the remote socket addresses if the handles are UDP sockets.  
+    PAL_NUM DkStreamRead(PAL_HANDLE handle, PAL_NUM offset, PAL_NUM count, PAL_PTR buffer,
+                         PAL_PTR source, PAL_NUM size);
 
-If the handles are directories, calling DkStreamRead will fill the buffer with the names (NULL-ended) of the files or subdirectories inside.
+This API reads data from an opened stream. If the handle is a file, `offset` must be specified
+at each call of DkStreamRead. `source` and `size` can be used to return the remote socket
+address if the handle is a UDP socket. If the handle is a directory, DkStreamRead fills the buffer
+with the names (NULL-ended) of the files or subdirectories inside of this directory.
 
 #### DkStreamWrite
 
-    PAL_NUM
-    DkStreamWrite (PAL_HANDLE handle, PAL_NUM offset, PAL_NUM count,
-                   PAL_PTR buffer, PAL_STR dest);
+    PAL_NUM DkStreamWrite(PAL_HANDLE handle, PAL_NUM offset, PAL_NUM count,
+                          PAL_PTR buffer, PAL_STR dest);
 
-This API sends or writes data to an opened stream. If the handles are files, _offset_ must be specified at each call of DkStreamWrite. _dest_ can be used to specify the remote socket addresses if the handles are UDP sockets.
+This API writes data to an opened stream. If the handle is a file, `offset` must be specified
+at each call of DkStreamWrite. `dest` can be used to specify the remote socket address if the
+handle is a UDP socket.
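+
+A minimal, hypothetical sketch of writing to and reading back from a file stream (explicit offsets
+are required for file handles, as noted above); `file` is assumed to be a handle opened with a
+`file:` URI:
+
+    /* hypothetical sketch; error handling omitted */
+    char msg[] = "hello";
+    PAL_NUM written = DkStreamWrite(file, 0, sizeof(msg), msg, NULL);
+
+    char buf[16];
+    PAL_NUM nread = DkStreamRead(file, 0, sizeof(buf), buf, NULL, 0);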
 
 #### DkStreamDelete
 
     #define PAL_DELETE_RD       01
     #define PAL_DELETE_WR       02
-    void
-    DkStreamDelete (PAL_HANDLE handle, PAL_FLG access);
+    void DkStreamDelete(PAL_HANDLE handle, PAL_FLG access);
 
-This API deletes files or directories on the host, or shut down connection of TCP or UDP sockets. _access_ specifies the method of shutting down the connection. _access_ can be either read-side only, write-side only, or both if 0 is given in _access_.
+This API deletes files or directories on the host or shuts down the connection of TCP/UDP sockets.
+`access` specifies the method of shutting down the connection and can be either read-side only,
+write-side only, or both if 0 is given.
 
 #### DkStreamMap
 
-    PAL_PTR
-    DkStreamMap (PAL_HANDLE handle, PAL_PTR address, PAL_FLG prot,
-                 PAL_NUM offset, PAL_NUM size);
+    PAL_PTR DkStreamMap(PAL_HANDLE handle, PAL_PTR address, PAL_FLG prot,
+                        PAL_NUM offset, PAL_NUM size);
 
-This API maps files to virtual memory of the current process. _address_ can be NULL or a valid address that are aligned by the allocation alignment. _offset_ and _size_ have to be non-zero and aligned by the allocation alignment. _prot_ is defined as [[DkVirtualMemoryAlloc|PAL Host ABI#DkVirtualMemoryAlloc]].
+This API maps a file to a virtual memory address in the current process. `address` can be NULL or
+a valid address that is aligned at the allocation alignment. `offset` and `size` have to be non-zero
+and aligned at the allocation alignment. `prot` is defined as
+[[DkVirtualMemoryAlloc|PAL Host ABI#DkVirtualMemoryAlloc]].
 
 #### DkStreamUnmap
 
-    void
-    DkStreamUnmap (PAL_PTR addr, PAL_NUM size);
+    void DkStreamUnmap(PAL_PTR addr, PAL_NUM size);
 
-This API unmaps virtual memory that are backed with file streams. _addr_ and _size_ must be aligned by the allocation alignment.
+This API unmaps virtual memory that is backed by a file stream. `addr` and `size` must be aligned
+at the allocation alignment.
 
 #### DkStreamSetLength
 
-    PAL_NUM
-    DkStreamSetLength (PAL_HANDLE handle, PAL_NUM length);
+    PAL_NUM DkStreamSetLength(PAL_HANDLE handle, PAL_NUM length);
 
-This API truncates or extends a file stream to the length given.
+This API truncates or extends a file stream to the given length.
 
 #### DkStreamFlush
 
-    PAL_BOL
-    DkStreamFlush (PAL_HANDLE handle);
+    PAL_BOL DkStreamFlush(PAL_HANDLE handle);
 
 This API flushes the buffer of a file stream.
 
 #### DkSendHandle
 
-    PAL_BOL
-    DkSendHandle (PAL_HANDLE handle, PAL_HANDLE cargo);
+    PAL_BOL DkSendHandle(PAL_HANDLE handle, PAL_HANDLE cargo);
 
-This API can be used to send a PAL handle upon other handle. Currently, the handle that are used to send handle must be a process handle, thus handles can only be sent between parent and child processes. 
+This API sends a PAL handle `cargo` over another handle. Currently, the handle that is used
+to send cargo must be a process handle.
 
 #### DkReceiveHandle
 
-    PAL_HANDLE
-    DkReceiveHandle (PAL_HANDLE handle);
+    PAL_HANDLE DkReceiveHandle(PAL_HANDLE handle);
 
-This API receives a handle upon other handle.
+This API receives a handle over another handle.
 
 #### DkStreamAttributeQuery
 
-    PAL_BOL
-    DkStreamAttributesQuery (PAL_STR uri, PAL_STREAM_ATTR * attr);
+    PAL_BOL DkStreamAttributesQuery(PAL_STR uri, PAL_STREAM_ATTR* attr);
 
-This API queries the attributes of a named stream. This API only applies for URI such as `file:...`, `dir:...` or `dev:...`.
+This API queries the attributes of a named stream. This API only applies for URIs such as
+`file:...`, `dir:...`, and `dev:...`.
 
-The data type _PAL_STREAM_ATTR_ is defined as follows:
+The data type `PAL_STREAM_ATTR` is defined as follows:
 
     /* stream attribute structure */
     typedef struct {
-        PAL_IDX type;
-        PAL_NUM file_id;
-        PAL_NUM size;
-        PAL_NUM access_time;
-        PAL_NUM change_time;
-        PAL_NUM create_time;
+        PAL_IDX handle_type;
         PAL_BOL disconnected;
+        PAL_BOL nonblocking;
         PAL_BOL readable;
         PAL_BOL writeable;
         PAL_BOL runnable;
         PAL_FLG share_flags;
-        PAL_BOL nonblocking;
-        PAL_BOL reuseaddr;
-        PAL_NUM linger;
-        PAL_NUM receivebuf;
-        PAL_NUM sendbuf;
-        PAL_NUM receivetimeout;
-        PAL_NUM sendtimeout;
-        PAL_BOL tcp_cork;
-        PAL_BOL tcp_keepalive;
-        PAL_BOL tcp_nodelay;
+        PAL_NUM pending_size;
+        struct {
+            PAL_NUM linger;
+            PAL_NUM receivebuf;
+            PAL_NUM sendbuf;
+            PAL_NUM receivetimeout;
+            PAL_NUM sendtimeout;
+            PAL_BOL tcp_cork;
+            PAL_BOL tcp_keepalive;
+            PAL_BOL tcp_nodelay;
+        } socket;
     } PAL_STREAM_ATTR;
 
 #### DkStreamAttributesQuerybyHandle
 
-    PAL_BOL
-    DkStreamAttributesQuerybyHandle (PAL_HANDLE handle,
-                                     PAL_STREAM_ATTR * attr);
+    PAL_BOL DkStreamAttributesQuerybyHandle(PAL_HANDLE handle, PAL_STREAM_ATTR* attr);
 
-This API queries the attributes of an opened stream. This API applies for any stream handles.
+This API queries the attributes of an opened stream. This API applies to any stream handle.
 
 #### DkStreamAttributesSetbyHandle
 
-    PAL_BOL
-    DkStreamAttributesSetbyHandle (PAL_HANDLE handle, PAL_STREAM_ATTR * attr);
+    PAL_BOL DkStreamAttributesSetbyHandle(PAL_HANDLE handle, PAL_STREAM_ATTR* attr);
 
 This API sets the attributes of an opened stream.
 
 #### DkStreamGetName
 
-    PAL_NUM
-    DkStreamGetName (PAL_HANDLE handle, PAL_PTR buffer, PAL_NUM size);
+    PAL_NUM DkStreamGetName(PAL_HANDLE handle, PAL_PTR buffer, PAL_NUM size);
 
 This API queries the name of an opened stream.
 
 #### DkStreamChangeName
 
-    PAL_BOL
-    DkStreamChangeName (PAL_HANDLE handle, PAL_STR uri);
+    PAL_BOL DkStreamChangeName(PAL_HANDLE handle, PAL_STR uri);
 
 This API changes the name of an opened stream.
 
@@ -371,122 +407,106 @@ This API changes the name of an opened stream.
 
 #### DkThreadCreate
 
-    PAL_HANDLE
-    DkThreadCreate (PAL_PTR addr, PAL_PTR param, PAL_FLG flags);
+    PAL_HANDLE DkThreadCreate(PAL_PTR addr, PAL_PTR param);
 
-This API creates a thread in the current process. _addr_ will be the address where the new thread starts. _param_ is the parameter that is passed into the new thread as the only argument. _flags_ is currently unused.
-
-#### DkThreadPrivate
-
-    PAL_PTR
-    DkThreadPrivate (PAL_PTR addr);
-
-This API retrieves or sets the thread-local storage address of the current thread.
+This API creates a thread in the current process. `addr` is the address of an entry point of
+execution for the new thread. `param` is the pointer argument that is passed to the new thread.
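+
+A minimal, hypothetical sketch (not taken from the Graphene sources) of spawning a worker thread
+that exits when its work is done:
+
+    /* hypothetical sketch: the entry point receives `param` as its only argument */
+    static void worker(PAL_PTR param) {
+        /* ... do work using param ... */
+        DkThreadExit();
+    }
+
+    PAL_HANDLE thread = DkThreadCreate((PAL_PTR)&worker, /*param=*/NULL);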
 
 #### DkThreadDelayExecution
 
-    PAL_NUM
-    DkThreadDelayExecution (PAL_NUM duration);
+    PAL_NUM DkThreadDelayExecution(PAL_NUM duration);
 
-This API will suspend the current thread for certain duration (in microseconds).
+This API suspends the current thread for a certain duration (in microseconds).
 
 #### DkThreadYieldExecution
 
-    void
-    DkThreadYieldExecution (void);
+    void DkThreadYieldExecution(void);
 
-This API will yield the current thread and request for rescheduling in the scheduler on the host. 
+This API yields the current thread such that the host scheduler can reschedule it.
 
 #### DkThreadExit
 
-    void
-    DkThreadExit (void);
+    void DkThreadExit(void);
 
 This API terminates the current thread.
 
 #### DkThreadResume
 
-    PAL_BOL
-    DkThreadResume (PAL_HANDLE thread);
+    PAL_BOL DkThreadResume(PAL_HANDLE thread);
 
-This API resumes a thread and force the thread to jump into a handler. 
+This API resumes a thread.
 
 ### Exception Handling
 
 #### DkSetExceptionHandler
 
-    PAL_BOL
-    DkSetExceptionHandler (void (*handler) (PAL_PTR event, PAL_NUM arg, PAL_CONTEXT * context),
-                           PAL_NUM event, PAL_FLG flags);
+    PAL_BOL DkSetExceptionHandler(void (*handler) (PAL_PTR event, PAL_NUM arg, PAL_CONTEXT* context),
+                                  PAL_NUM event);
 
-This API set the handler for the specific exception event.
+This API sets the handler for the specific exception event.
 
-_event_ can be one of the following values:
+`event` can be one of the following values:
 
-    /* Exception Handling */
-    /* Div-by-zero */
-    #define PAL_EVENT_DIVZERO       1
+    /* arithmetic error (div-by-zero, floating point exception, etc.) */
+    #define PAL_EVENT_ARITHMETIC_ERROR 1
     /* segmentation fault, protection fault, bus fault */
-    #define PAL_EVENT_MEMFAULT      2
+    #define PAL_EVENT_MEMFAULT         2
     /* illegal instructions */
-    #define PAL_EVENT_ILLEGAL       3
+    #define PAL_EVENT_ILLEGAL          3
     /* terminated by external program */
-    #define PAL_EVENT_QUIT          4
+    #define PAL_EVENT_QUIT             4
     /* suspended by external program */
-    #define PAL_EVENT_SUSPEND       5
+    #define PAL_EVENT_SUSPEND          5
     /* continued by external program */
-    #define PAL_EVENT_RESUME        6
+    #define PAL_EVENT_RESUME           6
     /* failure within PAL calls */
-    #define PAL_EVENT_FAILURE       7
+    #define PAL_EVENT_FAILURE          7
 
-_flags_ can be combination of the following flags:
+`flags` can be a combination of the following flags:
 
     #define PAL_EVENT_PRIVATE      0x0001       /* upcall specific to thread */
     #define PAL_EVENT_RESET        0x0002       /* reset the event upcall */
 
 #### DkExceptionReturn
 
-    void
-    DkExceptionReturn (PAL_PTR event);
+    void DkExceptionReturn(PAL_PTR event);
 
-This API exits a exception handler and restores the context.
+This API exits an exception handler and restores the context.
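+
+A minimal, hypothetical sketch of installing a handler for memory faults that records the event and
+then resumes execution:
+
+    /* hypothetical sketch: handler signature follows DkSetExceptionHandler above */
+    static void memfault_handler(PAL_PTR event, PAL_NUM arg, PAL_CONTEXT* context) {
+        /* inspect arg and context here, e.g., to log the faulting access */
+        DkExceptionReturn(event);
+    }
+
+    DkSetExceptionHandler(&memfault_handler, PAL_EVENT_MEMFAULT);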
 
 ### Synchronization
 
-#### DkSemaphoreCreate
+#### DkMutexCreate
 
-    PAL_HANDLE
-    DkSemaphoreCreate (PAL_NUM initialCount, PAL_NUM maxCount);
+    PAL_HANDLE DkMutexCreate(PAL_NUM initialCount);
 
-This API creates a semaphore with the given _initialCount_ and _maxCount_.
+This API creates a mutex with the given `initialCount`.
 
-#### DkSemaphoreRelease
+#### DkMutexRelease
 
-    void
-    DkSemaphoreRelease (PAL_HANDLE semaphoreHandle, PAL_NUM count);
+    void DkMutexRelease(PAL_HANDLE mutexHandle);
 
-This API wakes up _count_ waiter on the given semaphore.
+This API unlocks the given mutex.
 
 ##### DkNotificationEventCreate/DkSynchronizationEventCreate
 
-    PAL_HANDLE
-    DkNotificationEventCreate (PAL_BOL initialState);
-    PAL_HANDLE
-    DkSynchronizationEventCreate (PAL_BOL initialState);
+    PAL_HANDLE DkNotificationEventCreate(PAL_BOL initialState);
+    PAL_HANDLE DkSynchronizationEventCreate(PAL_BOL initialState);
 
-This API creates a event with the given _initialState_. The definition of notification events and synchronization events are the same as the WIN32 API. When a notification event is set to the Signaled state it remains in that state until it is explicitly cleared. When a synchronization event is set to the Signaled state, a single thread of execution that was waiting for the event is released, and the event is automatically reset to the Not-Signaled state.
+This API creates an event with the given `initialState`. The definition of notification events
+and synchronization events is the same as the WIN32 API. When a notification event is set to the
+signaled state it remains in that state until it is explicitly cleared. When a synchronization
+event is set to the signaled state, a single thread of execution that was waiting for the event is
+released, and the event is automatically reset to the not-signaled state.
 
 #### DkEventSet
 
-    void
-    DkEventSet (PAL_HANDLE eventHandle);
+    void DkEventSet(PAL_HANDLE eventHandle);
 
 This API sets (signals) a notification event or a synchronization event.
 
 #### DkEventClear
 
-    void
-    DkEventClear (PAL_HANDLE eventHandle);
+    void DkEventClear(PAL_HANDLE eventHandle);
 
 This API clears a notification event or a synchronization event.
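+
+A minimal, hypothetical sketch of the event lifecycle with the APIs above:
+
+    /* hypothetical sketch: create a notification event in the not-signaled state */
+    PAL_HANDLE event = DkNotificationEventCreate(PAL_FALSE);
+    DkEventSet(event);    /* signal the event; it stays signaled until cleared */
+    DkEventClear(event);  /* reset the event to the not-signaled state */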
 
@@ -494,16 +514,16 @@ This API clears a notification event or a synchronization event.
 
 #### DkObjectsWaitAny
 
-    #define NO_TIMEOUT      ((PAL_NUM) -1)
-    PAL_HANDLE
-    DkObjectsWaitAny (PAL_NUM count, PAL_HANDLE * handleArray, PAL_NUM timeout);
+    #define NO_TIMEOUT ((PAL_NUM)-1)
+    PAL_HANDLE DkObjectsWaitAny(PAL_NUM count, PAL_HANDLE* handleArray, PAL_NUM timeout_us);
 
-This API polls an array of handle and return one handle with recent activity. _timeout_ is the maximum time that the API should wait (in microsecond), or _NO_TIMEOUT_ to indicate it to be blocked as long as possible.
+This API polls an array of handles and returns one handle with recent activity. `timeout_us` is
+the maximum time that the API should wait (in microseconds), or `NO_TIMEOUT` to indicate it is to
+be blocked until at least one handle is ready.
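+
+A minimal, hypothetical sketch of waiting on two previously opened handles with a one-second
+timeout (`handle1` and `handle2` are made up for illustration):
+
+    /* hypothetical sketch */
+    PAL_HANDLE handles[2] = { handle1, handle2 };
+    PAL_HANDLE ready = DkObjectsWaitAny(2, handles, 1000000 /* microseconds */);
+    if (ready) {
+        /* `ready` is the handle with recent activity */
+    }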
 
 #### DkObjectClose
 
-    void
-    DkObjectClose (PAL_HANDLE objectHandle);
+    void DkObjectClose(PAL_HANDLE objectHandle);
 
 This API closes (deallocates) a PAL handle.
 
@@ -511,46 +531,63 @@ This API closes (deallocates) a PAL handle.
 
 #### DkSystemTimeQuery
 
-    PAL_NUM
-    DkSystemTimeQuery (void);
+    PAL_NUM DkSystemTimeQuery(void);
 
-This API returns the timestamp of current time (in microseconds).
+This API returns the current time (in microseconds).
 
 #### DkRandomBitsRead
 
-    PAL_NUM
-    DkRandomBitsRead (PAL_PTR buffer, PAL_NUM size);
+    PAL_NUM DkRandomBitsRead(PAL_PTR buffer, PAL_NUM size);
+
+This API fills the buffer with cryptographically-secure random values.
+
+#### DkSegmentRegister
+
+    #define PAL_SEGMENT_FS  0x1
+    #define PAL_SEGMENT_GS  0x2
+    PAL_PTR DkSegmentRegister(PAL_FLG reg, PAL_PTR addr);
+
+This API sets segment register FS or GS specified by `reg` to the address `addr`. If `addr` is
+specified as NULL, then this API returns the current value of the segment register.
+
+#### DkMemoryAvailableQuota
+
+    PAL_NUM DkMemoryAvailableQuota(void);
+
+This API returns the amount of currently available memory for LibOS/application usage.
 
-This API fills the buffer with cryptographically random values.
+#### DkCpuIdRetrieve
 
-#### DkInstructionCacheFlush
+     #define PAL_CPUID_WORD_EAX  0
+     #define PAL_CPUID_WORD_EBX  1
+     #define PAL_CPUID_WORD_ECX  2
+     #define PAL_CPUID_WORD_EDX  3
+     #define PAL_CPUID_WORD_NUM  4
+     PAL_BOL DkCpuIdRetrieve(PAL_IDX leaf, PAL_IDX subleaf, PAL_IDX values[4]);
 
-    PAL_BOL
-    DkInstructionCacheFlush (PAL_PTR addr, PAL_NUM size);
+This API returns CPUID information in the array `values`, based on the leaf/subleaf.
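+
+A minimal, hypothetical sketch that queries leaf 0 (whose EAX word reports the highest supported
+standard CPUID leaf):
+
+    /* hypothetical sketch */
+    PAL_IDX values[PAL_CPUID_WORD_NUM];
+    if (DkCpuIdRetrieve(0, 0, values)) {
+        PAL_IDX max_leaf = values[PAL_CPUID_WORD_EAX];
+    }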
 
-This API flushes the instruction cache at the given _addr_ and _size_.
 
-### Memory Bulk Copy
+### Memory Bulk Copy (Optional)
 
 #### DkCreatePhysicalMemoryChannel
 
-    PAL_HANDLE
-    DkCreatePhysicalMemoryChannel (PAL_NUM * key);
+    PAL_HANDLE DkCreatePhysicalMemoryChannel(PAL_NUM* key);
 
-This API creates a physical memory channel for the process to copy virtual memory as copy-on-write. Once a channel is created, any other processes can connect to the physical memory channel by using [[DkStreamOpen|PAL Host ABI#DkStreamOpen]] with URI as `gipc:<key>`.
+This API creates a physical memory channel for the process to copy virtual memory as copy-on-write.
+Once a channel is created, other processes can connect to the physical memory channel by using
+[[DkStreamOpen|PAL Host ABI#DkStreamOpen]] with a URI `gipc:<key>`.
 
 #### DkPhysicalMemoryCommit
 
-    PAL_NUM
-    DkPhysicalMemoryCommit (PAL_HANDLE channel, PAL_NUM entries, PAL_PTR * addrs,
-                            PAL_NUM * sizes, PAL_FLG flags);
+    PAL_NUM DkPhysicalMemoryCommit(PAL_HANDLE channel, PAL_NUM entries, PAL_PTR* addrs,
+                                   PAL_NUM* sizes);
 
-This API commits (sends) an array of virtual memory area to the physical memory channel.
+This API commits (sends) an array of virtual memory areas over the physical memory channel.
 
 #### DkPhysicalMemoryMap
 
-    PAL_NUM
-    DkPhysicalMemoryMap (PAL_HANDLE channel, PAL_NUM entries, PAL_PTR * addrs,
-                         PAL_NUM * sizes, PAL_FLG * prots);
+    PAL_NUM DkPhysicalMemoryMap(PAL_HANDLE channel, PAL_NUM entries, PAL_PTR* addrs,
+                                PAL_NUM* sizes, PAL_FLG* prots);
 
+This API maps an array of virtual memory areas from the physical memory channel.

+ 0 - 80
Documentation/oldwiki/Port-Graphene-PAL-to-Other-hosts.md

@@ -1,80 +0,0 @@
-# Port Graphene PAL to Other hosts
-## Platform Compatibility of Graphene Library OS
-
-Graphene Library OS has adapted the design of PAL (Platform Adaption Layer) from _Drawbridge Library OS_, which is a library OS designed for maximizing its platform compatibility. The argument made by _Drawbridge Library OS_ is that the library OS can be ported to a new host as long as PAL is implemented on the said host. The same property is also available in Graphene library OS.
-
-## How to port Graphene library OS
-
-As a result of platform compatibility, to port Graphene library OS to a new host platform, the only effort required will be reimplementing the PAL on the desired host platform. Most of the implementation should be just as simple as translating PAL API into the native system interface of the host. The implemented PAL must support [[PAL Host ABI]].
-
-In fact, even in the PAL source code, we expect part of the code to be host-generic. To make porting Graphene easier, we deliberately separate the source code of PAL into three parts:
-
-* `Pal/lib`: All the library APIs used internally by PAL.
-* `Pal/src`: Host-generic implementation.
-* `Pal/src/host/<host name>`: Host-specific implementation.
-
-To start porting Graphene to a new host, we suggest you to start with a clone of `Pal/src/host/Skeleton`. This directory contains the skeleton of all functions that need to be implemented as part of a fully compatible PAL. However, although we have tried our best to isolate any host-specific code in each host directories, we do not guarantee that the necessary changes are only limited to those directories. It means that you may have to modify other part of the source code, especially Makefile scripts to complete your implementation. 
-
-## Steps of Porting PAL
-* Step 1: Fix compilation issue
-
-For the first step to port PAL, you want to be able to build PAL as an executable on the target host. After cloning a host-specific directory, first modify `Makefile.am` to adjust compilation rules such as `CC`, `CFLAGS`, `LDFLAGS`, `AS` and `ASFLAGS`. You will also have to define the name of loader, and the reference monitor loader, if there is going to one, as target `pal` and `pal_sec` in `Makefile.am.`
-
-* Step 2: Build a loader
-
-PAL needs to run on the target host like a regular executable. To run Graphene Library OS, PAL must initialize the proper environments and load the applications as well as library OS in the form of Linux ELF binaries. To start the implemention of PAL loader, we suggest you begin with the following APIs in your host-specific directory:
-
-1. `db_main.c`: this files need to contain the entry function of your loader (the 'main' function) and APIs to retrieve host-specific information. The definition of the APIs are as follows:
-
-+ `_DkGetPagesize`(Required): Return the architecture page size of the target platform.
-+ `_DkGetAllocationAlignment`(Required): Return the allocation alignment (granularity) of the target platform. Some platforms will have to different allocation alignment than page size.
-+ `_DkGetAvailableUserAddressRange`(Required): PAL needs to provide a user address range which can be flexibly used by applications. None of these addresses should be used by PAL internally.
-+ `_DkGetProcessId`(Required): Return an unique process ID for each process.
-+ `_DkGetHostId`(Optional): Return an unique host ID for each host.
-+ `_DkGetCPUInfo`(Optional): Retireve CPU information such as vendor ID, model name, etc.
-
-The entry function in `db_main.c` must eventually call the generic entry point `pal_main`. The definition of `pal_main` is:
-
-    /* Main initialization function */
-    void pal_main (
-        PAL_NUM    instance_id,      /* current instance id */
-        PAL_HANDLE manifest_handle,  /* manifest handle if opened */
-        PAL_HANDLE exec_handle,      /* executable handle if opened */
-        PAL_PTR    exec_loaded_addr, /* executable addr if loaded */
-        PAL_HANDLE parent_process,   /* parent process if it's a child */
-        PAL_HANDLE first_thread,     /* first thread handle */
-        PAL_STR *  arguments,        /* application arguments */
-        PAL_STR *  environments      /* environment variables */
-    );
-
-2. `pal_host.h`: this file needs to define the member of `PAL_HANDLE` for handles of files, devices, pipes, sockets, threads, processes, etc.
-
-3. `db_files.c`: To implement a basic loader, you have to specify how to open, read, and map an executable file. At least `file_open`, `file_read`, `file_map` , `file_attrquery`, `file_attrquerybyhdl` must be implemented to load a basic HelloWorld program.
-
-4. `db_memory.c`: the same as `db_files.c`, this file also contain APIs essential to PAL loader. At least `_DkCheckMemoryMappable`, `_DkVirtualMemoryAlloc`, `_DkVirtualMemoryFree`, `_DkVirtualMemoryProtect` must be implemented.
-
-5. `db_rtld.c`: This file must handle how symbols are resolved against PAL loader itself, to discover the entry address of host ABI. If the PAL loader is a Linux ELF binary, you may simply add a `link_map` to the `loaded_maps` list. Otherwise, you need to implement `resolve_rtld` function to return addresses of host ABI by names.
-
-(Optional) You may implement `_DkDebugAddMap` and `_DkDebugDelMap` if you want to use host-specific debugger such as GDB to debug applications in Graphene.
-
-* Step 3: Test a HelloWorld program without loading library OS
-
-In `Pal/test`, we provide test program which can run without library OS, and directly use PAL Host ABI. If you can successfully run a HelloWorld program, Congratulations! You already have a working PAL loader.
-
-* Step 4: Implementing the whole PAL Host ABI
-
-Now it is time to complete the whole implementation of PAL Host ABI. Once you have finished implementation, use the **regression test** to confirm whether your implementation is compatible to PAL Host ABI. To run the regression test, do the following steps:
-
-    Graphene % cd Pal/regression
-    Graphene/Pal/regression % make regression
-
-
-    Basic Bootstrapping:
-    [Success] Basic Bootstrapping
-    [Success] Control Block: Executable Name
-    ...
-
-* Step 5: Running Application with Graphene Library OS
-
-With a completely implemented PAL, you should be able to run any applications that are currently running on Graphene library OS upon other platform. Please be aware you should not try to build any application binaries on your target host. On the contrary, you should build them on a Linux host and ship them to your target host.
-We have packed most of Linux binaries in directories named `.packed` which can be found everywhere in the Graphene source code. Simplt type `make`, and these binaries will be unpacked if an non-Linux host is detected.

+ 111 - 0
Documentation/oldwiki/Porting-Graphene-PAL-to-Other-hosts.md

@@ -0,0 +1,111 @@
+## Platform Compatibility of Graphene
+
+Graphene adopts a similar architecture to the Drawbridge Library OS, which runs a generic library
+OS on top of a Platform Adaptation Layer (PAL) to maximize platform compatibility. In this
+architecture, the library OS can be easily ported to a new host by implementing only the PAL
+for this new host.
+
+## How to Port Graphene
+
+To port Graphene to a new host platform, the only effort required is reimplementing the PAL on the
+desired host platform. Most of the implementation should be as simple as translating the PAL API
+to the native system interface of the host. The implemented PAL must support [[PAL Host ABI]].
+
+Even within the PAL source code, a large part is expected to be host-generic. To make porting
+Graphene easier, we deliberately separate the PAL source code into three parts:
+
+* `Pal/lib`: All the library APIs used internally by PAL.
+* `Pal/src`: Host-generic implementation.
+* `Pal/src/host/<host name>`: Host-specific implementation.
+
+To port Graphene to a new host, we suggest starting with a clone of `Pal/src/host/Skeleton`. This
+directory contains the skeleton code of all functions that need to be implemented as part of a
+fully compatible PAL. Although we have tried our best to isolate any host-specific code in each
+host directory, we do not guarantee that the necessary changes are only limited to these
+directories. That is, you may have to modify other parts of the source code (especially the Makefile
+scripts) to complete your implementation.
+
+## Steps of Porting PAL
+
+* Step 1: Fix compilation issues
+
+As the first step of porting the PAL, you want to be able to build it as an executable on the
+target host. After cloning a host-specific directory, first modify `Makefile.am` to adjust
+compilation rules such as `CC`, `CFLAGS`, `LDFLAGS`, `AS`, and `ASFLAGS`. You will also have to
+define the name of the loader as the target `pal` in `Makefile.am`.
+
+* Step 2: Build a loader
+
+PAL needs to run on the target host like a regular executable. To run Graphene, PAL must initialize
+the proper environment and load the application as well as the library OS, both in the form of
+Linux ELF binaries. To start the implementation of the PAL loader, we suggest you begin with the
+following APIs in your host-specific directory:
+
+1. `db_main.c`: This file must contain the entry function of your loader (the `main()` function)
+and APIs to retrieve host-specific information (a minimal sketch of these APIs follows this list).
+The definitions of the APIs are as follows:
+
++ `_DkGetPagesize` (required): Return the architecture page size of the target platform.
++ `_DkGetAllocationAlignment` (required): Return the allocation alignment (granularity) of the
+  target platform. Some platforms have an allocation alignment different from the usual page-size
+  alignment.
++ `_DkGetAvailableUserAddressRange` (required): PAL must provide a user address range that
+  applications can use. None of these addresses should be used by PAL internally.
++ `_DkGetProcessId` (required): Return a unique process ID for each process.
++ `_DkGetHostId` (optional): Return a unique host ID for each host.
++ `_DkGetCPUInfo` (optional): Retrieve CPU information, such as the vendor ID and model name.
+
+The entry function in `db_main.c` must eventually call the generic entry point `pal_main()`.
+The definition of `pal_main()` is:
+
+    /* Main initialization function */
+    void pal_main(
+        PAL_NUM    instance_id,      /* current instance id */
+        PAL_HANDLE manifest_handle,  /* manifest handle if opened */
+        PAL_HANDLE exec_handle,      /* executable handle if opened */
+        PAL_PTR    exec_loaded_addr, /* executable addr if loaded */
+        PAL_HANDLE parent_process,   /* parent process if it's a child */
+        PAL_HANDLE first_thread,     /* first thread handle */
+        PAL_STR*   arguments,        /* application arguments */
+        PAL_STR*   environments      /* environment variables */
+    );
+
+2. `pal_host.h`: This file needs to define the members of `PAL_HANDLE` for handles of files,
+   devices, pipes, sockets, threads, processes, etc. (see the sketch after this list).
+
+3. `db_files.c`: To implement a basic loader, you have to specify how to open, read, and map an
+   executable file. At least `file_open`, `file_read`, `file_map`, `file_attrquery`, and
+   `file_attrquerybyhdl` must be implemented to load a basic HelloWorld program.
+
+4. `db_memory.c`: Like `db_files.c`, this file also contains APIs essential to the PAL loader. At
+   least `_DkCheckMemoryMappable`, `_DkVirtualMemoryAlloc`, `_DkVirtualMemoryFree`, and
+   `_DkVirtualMemoryProtect` must be implemented.
+
+5. `db_rtld.c`: This file must handle how symbols are resolved against the PAL loader itself, to
+   discover the entry addresses of the host ABI. If the PAL loader is a Linux ELF binary, you may
+   simply add a `link_map` to the `loaded_maps` list. Otherwise, you need to implement the
+   `resolve_rtld` function to return the addresses of the host ABI functions by name.
+
+You may implement the optional `_DkDebugAddMap` and `_DkDebugDelMap` to use a host-specific
+debugger such as GDB to debug applications in Graphene.
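+
+As a very rough illustration, the host-information callbacks from item 1 can be as small as the
+sketch below. This is not taken from any existing host: the header name, the constant, and even
+the exact return types are assumptions for illustration only, so check the prototypes declared in
+the generic PAL code under `Pal/src` before reusing it.
+
+    /* db_main.c (sketch): minimal host-information callbacks for a
+     * hypothetical host with a fixed 4 KiB page size. */
+    #include "pal_internal.h"   /* assumed header declaring the _Dk* callbacks */
+
+    #define HYPOTHETICAL_PAGE_SIZE 4096
+
+    PAL_NUM _DkGetPagesize(void) {
+        /* architecture page size of the target platform */
+        return HYPOTHETICAL_PAGE_SIZE;
+    }
+
+    PAL_NUM _DkGetAllocationAlignment(void) {
+        /* allocation granularity; equal to the page size on this host */
+        return HYPOTHETICAL_PAGE_SIZE;
+    }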
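+
+Similarly, the following is a minimal, hypothetical sketch of what `pal_host.h` from item 2 could
+start from. The field names and the layout of the `file` member are made up for illustration; a
+real PAL adds one union member per handle type (pipe, socket, thread, process, and so on).
+
+    /* pal_host.h (sketch): a hypothetical PAL_HANDLE layout with only a
+     * file member. */
+    typedef struct pal_handle {
+        struct {
+            PAL_NUM type;          /* tells which union member is valid */
+        } hdr;
+        union {
+            struct {
+                PAL_NUM fd;        /* host file descriptor */
+                PAL_NUM size;      /* cached file size */
+                PAL_STR realpath;  /* canonical path of the opened file */
+            } file;
+        };
+    } * PAL_HANDLE;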
+
+* Step 3: Test a HelloWorld program without loading the library OS
+
+In `Pal/test`, we provide a test program that can run without the library OS and directly use the
+PAL Host ABI. If you can successfully run a HelloWorld program, congratulations, you have a working
+PAL loader.
+
+* Step 4: Implement the whole PAL Host ABI
+
+Now it is time to complete the whole implementation of the PAL Host ABI. Once you have finished the
+implementation, use the regression tests to confirm whether your implementation is compatible with
+the PAL Host ABI. To run the regression tests, use the following commands:
+
+    cd Pal/regression
+    make regression
+
+* Step 5: Run applications with the library OS
+
+With a completely implemented PAL, you should be able to run any application that is currently
+supported by Graphene on your new platform. Note that you should not try to build any application
+binaries on your target host; instead, build them on a Linux host and ship them to your target
+host.

The file diff is too large to display
+ 12 - 7
Documentation/oldwiki/Process-Creation-in-Graphene-SGX.md


+ 0 - 38
Documentation/oldwiki/Quick-Start.md

@@ -1,38 +0,0 @@
-# Quick Start
-## Quick Start without Reference Monitor
-
-If you simply want to run Graphene without rebuilding the host kernel, try the following steps:
-
-__** Note: Please use GCC version 4 or 5 **__
-
-### 1. build PAL
-
-    cd Pal/src
-    make
-
-### 2. build and install Bulk Copy kernel module
-
-    cd Pal/ipc/linux
-    make
-    sudo ./load.sh
-
-### 3. build the library OS
-
-    cd LibOS
-    make SGX=1
-
-### 4. Run a helloworld program
-
-    cd LibOS/shim/test/native
-    make
-    ./pal_loader helloworld
-
-### 5. Run LMBench
-
-    cd LibOS/shim/test/apps/lmbench
-    make
-    cd lmbench-2.5/bin/linux
-    ./pal_loader lat_syscall null
-    ./pal_loader lat_syscall open
-    ./pal_loader lat_syscall read
-    ./pal_loader lat_proc fork

+ 0 - 22
Documentation/oldwiki/Remote-Attestation-for-SGX.md

@@ -1,22 +0,0 @@
-# Remote Attestation for SGX
-## What's Remote Attestation for?
-
-## Graphene Remote Attestation Infrastructure
-
-![Simple RA](https://user-images.githubusercontent.com/339249/62333287-78fe6280-b488-11e9-8996-5bab4c7a890e.png)
-
-## Remote Attestation Usage
-
-## Register for the Intel Attestation Service
-### Step 1: Visit the Intel SGX API Portal
-
-**Portal: https://api.portal.trustedservices.intel.com/EPID-attestation**
-
-Click the "Subscribe" button either under "Unlinkable Quotes" or "Linkable Quotes" (Require sign-in).
-![image1](https://user-images.githubusercontent.com/339249/62289388-3440e000-b424-11e9-93ee-bc5df508ecf7.png)
-
-### Step 2: Obtain the SPID and subscription key
-Click the "show" button next to the Primary key.
-![image2](https://user-images.githubusercontent.com/339249/62289625-d19c1400-b424-11e9-92a1-4494efef4c2a.png)
-
-## Implementation

+ 162 - 0
Documentation/oldwiki/Run-Applications-in-Graphene-SGX.md

@@ -0,0 +1,162 @@
+We prepared and tested several applications to demonstrate Graphene-SGX usability. These applications
+can be directly built and run from the Graphene source:
+
+* [[LMBench (v2.5) | Run Applications in Graphene SGX#running lmbench in graphene]]
+* [[Python | Run Applications in Graphene SGX#running python in graphene]]
+* [[R | Run Applications in Graphene SGX#running r in graphene]]
+* [[Lighttpd | Run Applications in Graphene SGX#running lighttpd in graphene]]
+* [[Apache | Run Applications in Graphene SGX#running apache in graphene]]
+* [[Busybox | Run Applications in Graphene SGX#running busybox in graphene]]
+* [[Bash | Run Applications in Graphene SGX#running bash in graphene]]
+
+
+## Running LMBench in Graphene-SGX
+
+The LMBench source and scripts are stored in the directory `LibOS/shim/test/apps/lmbench`. Many
+convenient commands are written in the Makefile inside the directory. The following steps compile
+and run LMBench in a native environment and under Graphene-SGX:
+
+    cd LibOS/shim/test/apps/lmbench
+    make SGX=1                    # compile lmbench and generate manifest and signature
+    make SGX_RUN=1                # get enclave token
+    make SGX=1 test-graphene      # run the whole package in Graphene-SGX
+
+The result of native runs can be found in `lmbench-2.5/results/linux`. The result of Graphene-SGX
+runs can be found in `lmbench-2.5/results/graphene`. The file with the largest number as suffix
+will be the latest output. For debugging purposes, you may want to test each LMBench test
+individually. To do that, run the following commands:
+
+    cd LibOS/shim/test/apps/lmbench
+    cd lmbench-2.5/bin/linux/
+    SGX=1 ./pal_loader lat_syscall null    # run lat_syscall in Graphene-SGX
+
+To run the tcp and udp latency tests:
+
+    SGX=1 ./pal_loader lat_udp -s &        # starts a server
+    SGX=1 ./pal_loader lat_udp 127.0.0.1   # starts a client
+    SGX=1 ./pal_loader lat_udp -127.0.0.1  # kills the server
+
+## Running Python in Graphene-SGX
+
+To run Python, first generate the manifest and the signature, and retrieve the token:
+
+    cd LibOS/shim/test/apps/python
+    make SGX=1
+    make SGX_RUN=1
+
+You can run `python.manifest.sgx` as an executable to load any script. The manifest file is
+actually a script with a shebang that can be automatically loaded in PAL. Use the following
+commands:
+
+    ./python.manifest.sgx scripts/helloworld.py
+    ./python.manifest.sgx scripts/fibonacci.py
+
+## Running R in Graphene-SGX
+
+To run R, first prepare the manifest:
+
+    cd LibOS/shim/test/apps/r
+    make SGX=1
+    make SGX_RUN=1
+
+You can run `R.manifest.sgx` as an executable to load any script. The manifest file is actually
+a script with a shebang that can be automatically loaded in PAL. Use the following command:
+
+    ./R.manifest.sgx -f scripts/sample.r
+
+## Running Lighttpd in Graphene-SGX
+
+Lighttpd can be used to test the TCP latency and throughput of Graphene-SGX, in either a
+single-threaded or a multi-threaded environment. The scripts and the source code for Lighttpd can
+be found in `LibOS/shim/test/apps/lighttpd`. To build Lighttpd, run the following commands:
+
+    cd LibOS/shim/test/apps/lighttpd
+    make SGX=1
+    make SGX_RUN=1
+
+The commands above will compile the source code, build the manifest file for Graphene-SGX, generate
+the configuration file for Lighttpd, and generate the HTML sample files. We prepared the following file
+samples:
+
+* `html/random/*.html`: random files (non-html) created with different sizes
+
+The server should be started manually and tested by running the ApacheBench (ab) benchmark from a
+remote client. To start the HTTP server, run one of the following commands:
+
+    make start-native-server  or  make SGX=1 start-graphene-server
+
+To start the server in a multi-threaded environment, run one of the following commands:
+
+    make start-multithreaded-native-server  or  make SGX=1 start-multithreaded-graphene-server
+
+For testing, use ApacheBench (ab). There is a script `run-apachebench.sh` that takes two arguments:
+the IP and the port. It runs 100,000 requests (`-n 100000`) with 25 to 200 maximum outstanding
+requests (`-c 25` to `-c 200`). The results are saved into the same directory, and all previous
+output files are overwritten.
+
+    make SGX=1 start-graphene-server
+    ./run-apachebench.sh <ip> <port>
+    # which internally calls:
+    #   ab -k -n 100000 -c [25:200] -t 10 http://ip:port/random/100.1.html
+
+## Running Apache in Graphene-SGX
+
+Apache is a commercial-class web server that can be used to test the TCP latency and throughput of
+Graphene. The scripts and the source code can be found in `LibOS/shim/test/apps/apache`. To build
+Apache, run the following commands:
+
+    cd LibOS/shim/test/apps/apache
+    make SGX=1
+    make SGX_RUN=1
+
+The commands above will compile the source code, build the manifest file for Graphene, generate
+the configuration file for Apache, and generate the HTML sample files (same as described in the
+[[lighttpd section|Run applications in Graphene#Running Lighttpd in Graphene]]).
+
+The server can be started manually via one of the following commands:
+
+    make start-native-server  or  make SGX=1 start-graphene-server
+
+By default, the Apache web server is configured to run with 4 preforked worker processes and has
+PHP support enabled. To test Apache server with ab, run:
+
+    make SGX=1 start-graphene-server
+    ./run-apachebench.sh <ip> <port>
+    # which internally calls:
+    #   ab -k -n 100000 -c [25:200] -t 10 http://ip:port/random/100.1.html
+
+## Running Busybox in Graphene-SGX
+
+Busybox is a standalone shell including general-purpose system utilities. The scripts and the
+source code for Busybox is stored in `LibOS/shim/apps/busybox`. To build the source code with
+the proper manifest, run the following commands:
+
+    cd LibOS/shim/test/apps/busybox
+    make SGX=1
+    make SGX_RUN=1
+
+To run Busybox, you may directly run `busybox.manifest.sgx` built in the directory as a script.
+For example:
+
+    ./busybox.manifest.sgx sh (to run a shell)
+
+or
+
+    ./busybox.manifest.sgx ls -l (to list local directory)
+
+## Running Bash in Graphene-SGX
+
+Bash is the most commonly used shell utility in Linux. The scripts and the source code for Bash
+are stored in `LibOS/shim/test/apps/bash`. To build the source code with the proper manifest, simply run
+the following commands:
+
+    cd LibOS/shim/test/apps/bash
+    make SGX=1
+    make SGX_RUN=1
+
+To test Bash, use the benchmark suites we prepared: `bash_test.sh` and `unixbench`. Run one of the
+following commands:
+
+    ./bash.manifest.sgx bash_test.sh [times]
+    ./bash.manifest.sgx unixbench.sh [times]
+

+ 65 - 120
Documentation/oldwiki/Run-Applications-in-Graphene.md

@@ -1,5 +1,5 @@
-# Run Applications in Graphene
-We prepared and tested the following applications in Graphene library OS. These applications can be directly built and run from the Graphene library OS source.
+We prepared and tested several applications to demonstrate Graphene usability. These applications
+can be directly built and run from the Graphene source:
 
 * [[LMBench (v2.5) | Run Applications in Graphene#running lmbench in graphene]]
 * [[Python | Run Applications in Graphene#running python in graphene]]
@@ -9,19 +9,22 @@ We prepared and tested the following applications in Graphene library OS. These
 * [[Apache | Run Applications in Graphene#running apache in graphene]]
 * [[Busybox | Run Applications in Graphene#running busybox in graphene]]
 * [[Bash | Run Applications in Graphene#running bash in graphene]]
-* [[GNU Make | Run Applications in Graphene#running gnu make in graphene]]
-* [[OpenJDK 1.7 | Run Applications in Graphene#running openjdk in graphene]]
 
 ## Running LMBench in Graphene
 
-The LMBench source and scripts are stored in directory `LibOS/shim/test/apps/lmbench` inside the source tree. Many convenient commands are written in the Makefile inside the directory. The following steps will compile and run LMBench in native environment and Graphene Library OS.
+The LMBench source and scripts are stored in the directory `LibOS/shim/test/apps/lmbench`. Many
+convenient commands are written in the Makefile inside the directory. The following steps compile
+and run LMBench in a native environment and under Graphene.
 
     cd LibOS/shim/test/apps/lmbench
-    make        # compile source of lmbench and set up manifests as target of graphene tests
-    make test-native         # run the whole package in native environment
-    make test-graphene       # run the whole package in graphene library OS
+    make                  # compile lmbench and set up manifests as target of Graphene tests
+    make test-native      # run the whole package in native environment
+    make test-graphene    # run the whole package in Graphene
 
-The result of native runs can be found in `lmbench-2.5/results/linux`. The result of graphene runs can be found in `lmbench-2.5/results/graphene`. The file with the largest number as suffix will be the latest output. Sometimes, for debugging purpose, you may want to test each LMBench test individually. For doing that, you may run the following commands:
+The result of native runs can be found in `lmbench-2.5/results/linux`. The result of Graphene runs
+can be found in `lmbench-2.5/results/graphene`. The file with the largest number as suffix will be
+the latest output. For debugging purposes, you may want to test each LMBench test individually. To
+do that, run the following commands:
 
     cd LibOS/shim/test/apps/lmbench
     cd lmbench-2.5/bin/linux/
@@ -30,9 +33,9 @@ The result of native runs can be found in `lmbench-2.5/results/linux`. The resul
 
 To run the tcp and udp latency tests:
 
-     ./pal lat_udp -s &        # starts a server
-     ./pal lat_udp 127.0.0.1   # starts a client
-     ./pal lat_udp -127.0.0.1  # kills the server
+     ./pal_loader lat_udp -s &        # starts a server
+     ./pal_loader lat_udp 127.0.0.1   # starts a client
+     ./pal_loader lat_udp -127.0.0.1  # kills the server
 
 ## Running Python in Graphene
 
@@ -41,15 +44,12 @@ To run Python, first prepare the manifest:
     cd LibOS/shim/test/apps/python
     make
 
-You can run `python.manifest` as an executable to load any script. The manifest file is actually a script with a shebang that can be automatically loaded in PAL. Use the following commands:
+You can run `python.manifest` as an executable to load any script. The manifest file is actually
+a script with a shebang that can be automatically loaded in PAL. Use the following commands:
 
     ./python.manifest scripts/helloworld.py
     ./python.manifest scripts/fibonacci.py
 
-In the case that you want to test a locally built python than the native python in the system, run the following command and replace the manifests:
-
-    make python-local
-    make
 
 ## Running R in Graphene
 
@@ -58,69 +58,62 @@ To run R, first prepare the manifest:
     cd LibOS/shim/test/apps/r
     make
 
-You can run `R.manifest` as an executable to load any script. The manifest file is actually a script with a shebang that can be automatically loaded in PAL. Use the following commands:
+You can run `R.manifest` as an executable to load any script. The manifest file is actually a script
+with a shebang that can be automatically loaded in PAL. Use the following command:
 
     ./R.manifest -f scripts/sample.r
 
-In the case that you want to test a locally built R than the native R in the system, run the following command and replace the manifests:
-
-    make R-local
-    make
 
 ## Running GCC in Graphene
 
-In graphene we prepare several C/C++ source file to test the performance of file IO. Usually the native GCC and LD (linker) is used to compile the source code. The scripts and tested source files can be found in `LibOS/shim/test/apps/gcc`. The source files includes:
+We prepared several C/C++ source files to test the performance of file I/O. The scripts and the
+tested source files can be found in `LibOS/shim/test/apps/gcc/test_files`. The source files include:
 
 * `helloworld.c`: an extremely small source file
-* `gzip.c`: an larger real-world application
+* `gzip.c`: a larger real-world application
 * `oggenc.m.c`: even larger, linked with libm.so
-* `single-gcc.c`: merge all gcc codes into a extremely huge source file. used as stress test.
+* `single-gcc.c`: all of the gcc source in one source file, used as a stress test
 
-To test compiling those source file, first prepare gcc manifest to compile the program:
+To test compilation of these source files, first prepare the GCC manifest to compile the program:
 
     cd LibOS/shim/test/apps/gcc
     make
 
-To run GCC for a single source code, you can run `gcc.manifest` as an executable. The manifest file is actually a script with a shebang that can be automatically loaded in PAL. Use the following commands:
-
-    ./gcc.manifest -o helloworld helloworld.c
-    ./gcc-huge.manifest -o single-gcc single-gcc.c (For some source code, GCC need a HUGE stack size to compile the source code)
+To test GCC, run `gcc.manifest` as an executable. The manifest file is actually a script with a
+shebang that can be automatically loaded in PAL. Use the following commands:
 
-In the case that you want to test a locally built gcc than the native gcc in the system, run the following command and replace the manifests:
+    ./gcc.manifest -o test_files/hello test_files/helloworld.c
+    ./gcc.manifest -o test_files/single-gcc test_files/single-gcc.c
 
-    make gcc-local
-    make
-
-Building gcc requires a few libraries:
-
-* `gmp`: GNU Multiple Precision Arithmetic Library
-* `mpfr`: GNU Multiple-precision floating-point rounding library
-* `mpc`: the GNU Multiple-precision C library
 
 ## Running Lighttpd in Graphene
 
-Lighttpd can be used to test tcp latency and throughput of Graphene Library OS, in either single-threaded or multi-threaded environment. The scripts and source codes for Lighttpd can be found in `LibOS/shim/test/apps/lighttpd`. To compile the code base of Lighttpd that can be potentially used, run the following command:
+Lighttpd can be used to test the TCP latency and throughput of Graphene, in either a single-threaded
+or a multi-threaded environment. The scripts and the source code for Lighttpd can be found in
+`LibOS/shim/test/apps/lighttpd`. To build Lighttpd, run the following command:
 
     cd LibOS/shim/test/apps/lighttpd
     make
 
-The building command will not only compile the source code, but build up manifests for Graphene, config file for Lighttpd, and test html files. We prepare the following test html files so far:
+The commands above will compile the source code, build the manifest file for Graphene, generate the
+configuration file for Lighttpd, and generate the HTML sample files. We prepared the following file
+samples:
 
-* html/oscar-web: a snapshot of [OSCAR website](http://www.oscar.cs.stonybrook.edu) with php support
-* html/oscar-web-static: a snapshot of [OSCAR website](http://www.oscar.cs.stonybrook.edu) without php support
-* html/random/*.html: random file (non-html) created into different sizes
+* `html/random/*.html`: random files (non-html) created with different sizes
 
-The server should be started manually, and tested by running apache bench from a remote client. To start the http server either in native runs or graphene runs, run the following commands:
+The server should be started manually and tested by running the ApacheBench (ab) benchmark from a
+remote client. To start the HTTP server, run one of the following commands:
 
-`make start-native-server` or `make start-graphene-server`.
+    make start-native-server  or  make start-graphene-server
 
-To start the server in multi-threaded environment, run the following commands:
+To start the server in a multi-threaded environment, run one of the following commands:
 
-`make start-multithreaded-native-serve` and `make start-multithreaded-graphene-server`.
+    make start-multithreaded-native-server  or  make start-multithreaded-graphene-server
 
-To actually test, you should use _ApacheBench_. _ApacheBench(ab)_ is an http client which can sit/run from any machine. When we benchmark lighttpd on Graphene, provided web server on Graphene is visible outside the host, one must be able to use ab from any of the lab machines. ab provides multiple options like the number of http requests, number of concurrent requests, silent mode, time delay between requests. The Ubuntu/Debian package is `apache2-utils`.
-
-To test Lighttpd server with _ApacheBench_, first we need to start to Lighttpd server as above. There is a script run-apachebench.sh that takes two arguments: ip and port. It runs 10,000 requests (-n 10000) with 1, 2, 3, 4, and 5 maximum outstanding requests (-n 1...5). The results are saved into the same directory, and all previous output files are overwritten.
+For testing, use ApacheBench (ab). There is a script `run-apachebench.sh` that takes two arguments:
+the IP and the port. It runs 100,000 requests (`-n 100000`) with 25 to 200 maximum outstanding
+requests (`-c 25` to `-c 200`). The results are saved into the same directory, and all previous
+output files are overwritten.
 
     make start-graphene-server
     ./run-apachebench.sh <ip> <port>
@@ -129,20 +122,23 @@ To test Lighttpd server with _ApacheBench_, first we need to start to Lighttpd s
 
 ## Running Apache in Graphene
 
-Apache is a commercial-class web server that can be used to test tcp latency and throughput of Graphene Library OS. The scripts and source codes for Lighttpd can be found in `LibOS/shim/test/apps/apache`. To compile the code base of Apache and PHP module that can be potentially used, run the following command:
+Apache is a commercial-class web server that can be used to test the TCP latency and throughput of
+Graphene. The scripts and the source code can be found in `LibOS/shim/test/apps/apache`. To build
+Apache, run the following command:
 
     cd LibOS/shim/test/apps/apache
     make
 
-The building command will not only compile the source code, but build up manifests for Graphene, config file for Apache, and test html files (as described in the [[lighttpd section|Run applications in Graphene#Running Lighttpd in Graphene]]).
-
-The server could be started manually by using the following commands:
+The commands above will compile the source code, build the manifest file for Graphene, generate
+the configuration file for Apache, and generate the HTML sample files (same as described in the
+[[lighttpd section|Run applications in Graphene#Running Lighttpd in Graphene]]).
 
-`make start-native-server` or `make start-graphene-server`.
+The server can be started manually via one of the following commands:
 
-By default, the Apache web server is configured to run with 4 preforked worker processes, and has PHP support enabled.
+    make start-native-server  or  make start-graphene-server
 
-To test Apache server with _ApacheBench_, first we need to start to Apache server as above. Run the same script to test with ApacheBench:
+By default, the Apache web server is configured to run with 4 preforked worker processes and has
+PHP support enabled. To test Apache server with ab, run:
 
     make start-graphene-server
     ./run-apachebench.sh <ip> <port>
@@ -151,12 +147,15 @@ To test Apache server with _ApacheBench_, first we need to start to Apache serve
 
 ## Running Busybox in Graphene
 
-Busybox is a standalone shell including general-purpose system utilities. Running Busybox is a lot easier than running real shells such as Bash, because: first, Busybox can use _vfork_ instead of _fork_ to create new processes. second, Busybox can call itself as any of the utilities it includes, no need for calling some other binaries. The scripts and source code for Busybox is store in `LibOS/shim/apps/busybox`. To build the source code with proper manifest, simple run the following commands:
+Busybox is a standalone shell including general-purpose system utilities. The scripts and the
+source code for Busybox are stored in `LibOS/shim/test/apps/busybox`. To build the source code with
+the proper manifest, run the following commands:
 
     cd Shim/shim/test/apps/busybox
     make
 
-To run busybox, either to run a shell or a utility, you may directly run busybox.manifest built in the directory as a script. For example:
+To run Busybox, you may directly run busybox.manifest built in the directory as a script.
+For example:
 
     ./busybox.manifest sh (to run a shell)
 
@@ -164,72 +163,18 @@ or
 
     ./busybox.manifest ls -l (to list local directory)
 
+
 ## Running Bash in Graphene
 
-Bash is the most commonly used shell utilities in Linux. Bash can be run as a interactive standalone shell, or execute scripts or binaries immediately. Besides a few built-in commands, Bash mostly relies on other standalone utilities to execute commands given in the shell, such as `ls`, `cat` or `grep`. Therefore, supporting Bash will require supporting all the utility programs that can be potentially used. The scripts and source code for Bash is store in `LibOS/shim/apps/bash`. To build the source code with proper manifest, simple run the following commands:
+Bash is the most commonly used shell utility in Linux. The scripts and the source code for Bash
+are stored in `LibOS/shim/test/apps/bash`. To build the source code with the proper manifest, simply run
+the following commands:
 
     cd Shim/shim/test/apps/bash
     make
 
-To test Bash, you may use the benchmark suites we prepared: one is `bash_test.sh`, and the other is `unixbench`. Run one of the following commands to test Bash:
+To test Bash, use the benchmark suites we prepared: `bash_test.sh` and `unixbench`. Run one of the
+following commands to test Bash:
 
     ./bash.manifest bash_test.sh [times]
     ./bash.manifest unixbench.sh [times]
-
-In the case that you want to test a locally built Bash than the native Bash in the system, run the following command and replace the manifests:
-
-    make bash-local
-    make
-
-## Running GNU Make in Graphene
-
-GNU Make is the most commonly used building tool in Linux. GNU Make is one of the most hard-to-implement applications in Graphene because it requires full multi-processing support, Bash support and other potentially used utility programs. The scripts and source code for GNU Make is store in `LibOS/shim/apps/make`. To build the source code with proper manifest, simple run the following commands:
-
-    cd Shim/shim/test/apps/make
-    make
-
-Currently we can only support GNU Make with very specific Makefile scripts, and not able to support other building tools such as _libtool_, _autoconf_ and _automake_. We have prepared some Makefile scripts that are proven working in Graphene:
-
-* `helloworld`: a Makefile script that only compile one source file `helloworld.c`
-* `Graphene LibOS`: eating our own dog food. Compile Graphene Library OS with Graphene Library OS.
-* `bzip2` and `oggenc` 
-
-To test one of those Makefile script in Graphene, run the following commands:
-
-    make clean -C graphene
-    ./make.manifest -C graphene NPROC=<num of processes>
-
-In the case that you want to test a locally built GNU Make than the native GNU Make in the system, run the following command and replace the manifests:
-
-    make make-local
-    make
-
-## Running OpenJDK in Graphene
-
-We have tested OpenJDK 1.6 and 1.7 in Graphene library OS. Newer versions of OpenJDK can potentially work, but there is no guarantee. 
-
-Make sure you install the build dependencies for OpenJDK first.  Sadly, openjdk itself seems like a prerequisite to build openjdk.
-
-On Ubuntu 14.04:
-
-    sudo apt-get build-dep openjdk-7
-    sudo apt-get install openjdk-7-jdk
-
-To build OpenJDK 1.7 and generate the manifest, run the following commands:
-
-    cd LibOS/shim/test/apps/openjdk
-    make
-
-For SGX support, do this:
-
-    make SGX=1; make SGX_RUN=1
-
-The building will take several minutes and require network connection to download packages. After building OpenJDK, use the following script to run a Java program:
-
-    <path to pal> java.manifest -cp classes HelloWorld
-
-For SGX, use the java.manifest.sgx instead of java.manifest.
-
-We can only confirm that openjdk works on 14.04.
-
-In `run-compact-java` we specify the OpenJDK options to limit the resource used by the OpenJDK VM. We do not suggest running OpenJDK without these options, because the assumptions made by OpenJDK may cause Graphene library OS to crash.

+ 0 - 153
Documentation/oldwiki/Run-Applications-with-SGX.md

@@ -1,153 +0,0 @@
-# Run Applications with SGX
-We prepared and tested the following applications in Graphene library OS. These applications can be directly built and run from the Graphene library OS source.
-
-* [[LMBench (v2.5) | Run Applications with SGX#running lmbench in graphene]]
-* [[Python | Run Applications with SGX#running python in graphene]]
-* [[R | Run Applications with SGX#running r in graphene]]
-* [[Lighttpd | Run Applications with SGX#running lighttpd in graphene]]
-* [[Apache | Run Applications with SGX#running apache in graphene]]
-* [[Busybox | Run Applications with SGX#running busybox in graphene]]
-* [[Bash | Run Applications with SGX#running bash in graphene]]
-* [[OpenJDK 1.7 | Run Applications with SGX#running openjdk in graphene]]
-
-## Running LMBench in Graphene
-
-The LMBench source and scripts are stored in directory `LibOS/shim/test/apps/lmbench` inside the source tree. Many convenient commands are written in the Makefile inside the directory. The following steps will compile and run LMBench in SGX enclave.
-
-    cd LibOS/shim/test/apps/lmbench
-    make SGX=1      # compile source of lmbench and generate manifest and signature
-    make SGX_RUN=1  $ get enclave token
-    make test-graphene       # run the whole package in graphene library OS
-
-The result of graphene runs can be found in `lmbench-2.5/results/graphene`. The file with the largest number as suffix will be the latest output. Sometimes, for debugging purpose, you may want to test each LMBench test individually. For doing that, you may run the following commands:
-
-    cd LibOS/shim/test/apps/lmbench
-    cd lmbench-2.5/bin/linux/
-    ./pal_loader lat_syscall null    # run lat_syscall in Graphene
-
-To run the tcp and udp latency tests:
-
-    ./pal_loader lat_udp -s &        # starts a server
-    ./pal_loader lat_udp 127.0.0.1   # starts a client
-    ./pal_loader lat_udp -127.0.0.1  # kills the server
-
-## Running Python in Graphene
-
-To run Python, first generate the manifest and the signature, and retrieve the token:
-
-    cd LibOS/shim/test/apps/python
-    make SGX=1
-    make SGX_RUN=1
-
-You can run `python.manifest.sgx` as an executable to load any script. The manifest file is actually a script with a shebang that can be automatically loaded in PAL. Use the following commands:
-
-    ./python.manifest.sgx scripts/helloworld.py
-    ./python.manifest.sgx scripts/fibonacci.py
-
-## Running R in Graphene
-
-To run R, first prepare the manifest:
-
-    cd LibOS/shim/test/apps/r
-    make SGX=1
-    make SGX_RUN=1
-
-You can run `R.manifest.sgx` as an executable to load any script. The manifest file is actually a script with a shebang that can be automatically loaded in PAL. Use the following commands:
-
-    ./R.manifest.sgx -f scripts/sample.r
-
-## Running Lighttpd in Graphene
-
-Lighttpd can be used to test tcp latency and throughput of Graphene Library OS, in either single-threaded or multi-threaded environment. The scripts and source codes for Lighttpd can be found in `LibOS/shim/test/apps/lighttpd`. To compile the code base of Lighttpd that can be potentially used, run the following command:
-
-    cd LibOS/shim/test/apps/lighttpd
-    make SGX=1
-    make SGX_RUN=1
-
-The building command will not only compile the source code, but build up manifests for Graphene, config file for Lighttpd, and test html files. We prepare the following test html files so far:
-
-* html/oscar-web: a snapshot of [OSCAR website](http://www.oscar.cs.stonybrook.edu) with php support
-* html/oscar-web-static: a snapshot of [OSCAR website](http://www.oscar.cs.stonybrook.edu) without php support
-* html/random/*.html: random file (non-html) created into different sizes
-
-The server should be started manually, and tested by running apache bench from a remote client. To start the http server either in native runs or graphene runs, run the following commands:
-
-`make start-native-server` or `make start-graphene-server`.
-
-To start the server in multi-threaded environment, run the following commands:
-
-`make start-multithreaded-native-serve` and `make start-multithreaded-graphene-server`.
-
-To actually test, you should use _ApacheBench_. _ApacheBench(ab)_ is an http client which can sit/run from any machine. When we benchmark lighttpd on Graphene, provided web server on Graphene is visible outside the host, one must be able to use ab from any of the lab machines. ab provides multiple options like the number of http requests, number of concurrent requests, silent mode, time delay between requests. The Ubuntu/Debian package is `apache2-utils`.
-
-To test Lighttpd server with _ApacheBench_, first we need to start to Lighttpd server as above. There is a script run-apachebench.sh that takes two arguments: ip and port. It runs 10,000 requests (-n 10000) with 1, 2, 3, 4, and 5 maximum outstanding requests (-n 1...5). The results are saved into the same directory, and all previous output files are overwritten.
-
-    make start-graphene-server
-    ./run-apachebench.sh <ip> <port>
-    # which internally calls:
-    #   ab -k -n 100000 -c [25:200] -t 10 http://ip:port/random/100.1.html
-
-## Running Apache in Graphene
-
-Apache is a commercial-class web server that can be used to test tcp latency and throughput of Graphene Library OS. The scripts and source codes for Lighttpd can be found in `LibOS/shim/test/apps/apache`. To compile the code base of Apache and PHP module that can be potentially used, run the following command:
-
-    cd LibOS/shim/test/apps/apache
-    make SGX=1
-    make SGX_RUN=1
-
-The building command will not only compile the source code, but build up manifests for Graphene, config file for Apache, and test html files (as described in the [[lighttpd section|Run Applications with SGX#Running Lighttpd in Graphene]]).
-
-The server could be started manually by using the following commands:
-
-`make start-native-server` or `make start-graphene-server`.
-
-By default, the Apache web server is configured to run with 4 preforked worker processes, and has PHP support enabled.
-
-To test Apache server with _ApacheBench_, first we need to start to Apache server as above. Run the same script to test with ApacheBench:
-
-    make start-graphene-server
-    ./run-apachebench.sh <ip> <port>
-    # which internally calls:
-    #   ab -k -n 100000 -c [25:200] -t 10 http://ip:port/random/100.1.html
-
-## Running Busybox in Graphene
-
-Busybox is a standalone shell including general-purpose system utilities. Running Busybox is a lot easier than running real shells such as Bash, because: first, Busybox can use _vfork_ instead of _fork_ to create new processes. second, Busybox can call itself as any of the utilities it includes, no need for calling some other binaries. The scripts and source code for Busybox is store in `LibOS/shim/apps/busybox`. To build the source code with proper manifest, simple run the following commands:
-
-    cd LibOS/shim/test/apps/busybox
-    make SGX=1
-    make SGX_RUN=1
-
-To run busybox, either to run a shell or a utility, you may directly run busybox.manifest built in the directory as a script. For example:
-
-    ./busybox.manifest.sgx sh (to run a shell)
-
-or
-
-    ./busybox.manifest.sgx ls -l (to list local directory)
-
-## Running Bash in Graphene
-
-Bash is the most commonly used shell utilities in Linux. Bash can be run as a interactive standalone shell, or execute scripts or binaries immediately. Besides a few built-in commands, Bash mostly relies on other standalone utilities to execute commands given in the shell, such as `ls`, `cat` or `grep`. Therefore, supporting Bash will require supporting all the utility programs that can be potentially used. The scripts and source code for Bash is store in `LibOS/shim/apps/bash`. To build the source code with proper manifest, simple run the following commands:
-
-    cd LibOS/shim/test/apps/bash
-    make SGX=1
-    make SGX_RUN=1
-
-To test Bash, you may use the benchmark suites we prepared: one is `bash_test.sh`, and the other is `unixbench`. Run one of the following commands to test Bash:
-
-    ./bash.manifest.sgx bash_test.sh [times]
-
-## Running OpenJDK in Graphene
-
-We have tested OpenJDK 1.6 and 1.7 in Graphene library OS. Newer versions of OpenJDK can potentially work, but there is no guarantee. To build OpenJDK 1.7 and generated the manifest, run the following commands:
-
-    cd LibOS/shim/test/apps/openjdk
-    make SGX=1
-    make SGX_RUN=1
-
-The building will take several minutes and require network connection to download packages. After building OpenJDK, use the following script to run a Java program:
-
-    ./run-java -cp classes HelloWorld
-
-In `run-java` we specify the OpenJDK options to limit the resource used by the OpenJDK VM. We do not suggest running OpenJDK without these options, because the assumptions made by OpenJDK may cause Graphene library OS to crash.

+ 0 - 39
Documentation/oldwiki/SGX-Manifest-Syntax.md

@@ -1,39 +0,0 @@
-# SGX Manifest Syntax
-The basic manifest syntax is described in [[Manifest Syntax]]. The SGX-specific syntax in a manifest is ignored if Graphene library OS is run without Intel SGX support. All keys in the SGX-specific syntax are optional. If the keys are not specified, Graphene library OS will use the default values.
-
-## Basic SGX-specific Syntax
-
-### Enclave size (OPTIONAL)
-    sgx.enclave_size=[SIZE]
-    (default: 256M)
-This syntax specifies the enclave size to be created. Beside PAL and library OS, the remaining memory in the enclave is used as the heap, to load application libraries or create anonymous memory. The application cannot allocate memory that exceeds the enclave size.
-
-### Thread number (OPTIONAL)
-    sgx.thread_num=[NUM]
-    (Default: 4)
-This syntax specifies the number of threads that can be created inside the enclave. The application cannot create more threads than this limit. Creating more threads will require more enclave memory.
-
-### Debugging (OPTIONAL)
-    sgx.debug=[1|0]
-    (Default: 1)
-This syntax specifies the whether the enclave can be debugged. Currently Graphene library OS only supports the debugging mode.
-
-### ISV Product ID and SVN (OPTIONAL)
-    sgx.isvprodid=[NUM]
-    sgx.isnsvn=[NUM]
-    (Default: 0)
-This syntax specifies the ISV Product ID and SVN to be added into the enclave signature.
-
-## Trusted files and child processes
-
-### Trusted Files (OPTIONAL)
-    sgx.trusted_files.[identifier]=[URI]
-This syntax specifies the files that have to be signed, and thus are allowed to be loaded into the enclave. The signer tool will automatically generate the checksums of these files and add them into the SGX-specific manifest (`.manifest.sgx`).
-
-### Allowed Files (OPTIONAL)
-    sgx.allowed_files.[identifier]=[URI]
-This syntax specifies the files that are allowed to be loaded into the enclave unconditionally. These files will be not signed, so it is insecure if these files are loaded as code or contain critical information. Developers must not allow files blindly.
-
-### Trusted Child Processes (OPTIONAL)
-    sgx.trusted_children.[identifier]=[URI of signature (.sig)]
-This syntax specifies the signatures that are allowed to be created as child processes of the current application. Upon process creation, the current enclave will perform attest the enclave in the child process, against the trusted signatures. If the child process is not trusted, the current enclave will not communicate with it. 

+ 0 - 58
Documentation/oldwiki/SGX-Quick-Start.md

@@ -1,58 +0,0 @@
-# SGX Quick Start
-## Quick Start to Run Applications in Intel SGX Enclaves
-
-If you simply want to build and run Graphene on the same host, try the following steps:
-
-__** Note: Please use GCC version 4 or 5 **__
-
-__** Please make sure the Intel SGX Linux SDK and driver are installed. **__
-
-If not, download and install from these two repositories: <https://github.com/01org/linux-sgx> and <https://github.com/01org/linux-sgx-driver>
-
-__** Note: Please use Intel SGX Linux SDK and driver version 1.9 or lower. **__
-
-### 1. Clone the repository and set the home directory of Graphene
-
-    git clone https://github.com/oscarlab/graphene.git
-    export GRAPHENE_DIR=$PWD/graphene
-
-### 2. prepare a signing key
-
-    cd $GRAPHENE_DIR/Pal/src/host/Linux-SGX/signer
-    openssl genrsa -3 -out enclave-key.pem 3072
-
-### 3. build PAL
-
-    cd $GRAPHENE_DIR/Pal/src
-    git submodule update --init -- $GRAPHENE_DIR/Pal/src/host/Linux-SGX/sgx-driver/
-    make SGX=1
-
-### 4. build and install Graphene SGX driver
-
-    cd $GRAPHENE_DIR/Pal/src/host/Linux-SGX/sgx-driver
-    make
-    sudo ./load.sh
-
-### 5. build the library OS
-
-    cd $GRAPHENE_DIR/LibOS
-    make SGX=1
-
-### 6. Run a helloworld program
-
-    cd $GRAPHENE_DIR/LibOS/shim/test/native
-    make SGX=1
-    make SGX_RUN=1
-    ./pal_loader SGX helloworld    or    SGX=1 ./pal_loader helloworld
-
-### 7. Run LMBench
-
-    git submodule update --init -- $GRAPHENE_DIR/LibOS/shim/test/apps
-    cd $GRAPHENE_DIR/LibOS/shim/test/apps/lmbench
-    make SGX=1
-    make SGX_RUN=1
-    cd lmbench-2.5/bin/linux
-    ./pal_loader SGX lat_syscall null    or   SGX=1 ./pal_loader lat_syscall null
-    ./pal_loader SGX lat_syscall open    or   SGX=1 ./pal_loader lat_syscall open
-    ./pal_loader SGX lat_syscall read    or   SGX=1 ./pal_loader lat_syscall read
-    ./pal_loader SGX lat_proc fork       or   SGX=1 ./pal_loader lat_proc fork

+ 95 - 34
Documentation/oldwiki/Signal-Handling-in-Graphene.md

@@ -1,20 +1,52 @@
+(Disclaimer: This explanation is partially outdated. It is intended only as an internal
+reference for developers of Graphene, not as general documentation for Graphene users.)
+
 # Signal Handling
 
-This analysis is written while Graphene's signal handling mechanisms are in flux. In future, all Graphene PALs should implement the same mechanism, and LibOS should adopt a better scheme to support nested signals and alternate signal stacks.
+This analysis is written while Graphene's signal handling mechanisms are in flux. In future, all
+Graphene PALs should implement the same mechanism, and LibOS should adopt a better scheme to
+support nested signals and alternate signal stacks.
 
-In the interest of space and mental sanity, we do not discuss FreeBSD PAL implementation. Historically, Linux and FreeBSD shared the same mechanism (where signals were immediately delivered to LibOS even if signal arrived during PAL call). This old mechanism was adopted by Linux-SGX PAL, though due to peculiarities of Intel SGX, it has its own sub-flows and is more complicated. Currently, Linux PAL implements a new mechanism where a signal during a PAL call is pended and is delivered to LibOS only after the PAL call is finished.
+In the interest of space and mental sanity, we do not discuss FreeBSD PAL implementation.
+Historically, Linux and FreeBSD shared the same mechanism (where signals were immediately delivered
+to LibOS even if signal arrived during PAL call). This old mechanism was adopted by Linux-SGX PAL,
+though due to peculiarities of Intel SGX, it has its own sub-flows and is more complicated.
+Currently, Linux PAL implements a new mechanism where a signal during a PAL call is pended and is
+delivered to LibOS only after the PAL call is finished.
 
 So, there are two signal-handling mechanisms at the PAL layer:
 
-* Linux PAL: (1) If signal arrives during PAL call, pend it and return from signal context, continuing normal context of PAL call. Immediately after a PAL call is finished, deliver all pending signals to LibOS. (2) If signal arrives during LibOS/application code, deliver the signal immediately to LibOS. Note that the signal delivery and handling is done in signal context (in contrast to pending-signal delivery).
-
-* Linux-SGX PAL: (1) If signal arrives during enclave-code execution, remember the interrupted enclave-code context and return from signal context. When jumping back into the enclave (in normal context), deliver the signal to LibOS. After handling the signal, LibOS/PAL will continue from interrupted enclave-code context. (2) If signal arrives during non-enclave-code, i.e. untrusted-PAL, execution, just return from signal context. When jumping back into the enclave (in normal context), deliver the signal to LibOS. In contrast to first case, after handling the signal, LibOS/PAL will continue as if outermost PAL function failed with PAL_ERROR_INTERRUPTED.
-
-The advantage of the first mechanism is that there is never a possibility of nested PAL calls (which is not supported by Graphene). However, this also disallows nested signals already at the PAL layer. The advantage of the second mechanism is that nested signals are possible, at least as far as it concerns the PAL layer.
-
-There is a single unified signal-handling mechanism at the LibOS layer. This mechanism does *not* support nested signals: if a signal is delivered while another signal is handled (or during a LibOS internal lock), then it is pended. Pended signals are delivered after any system-call completion or after any LibOS internal unlock.
-
-A new signal-handling mechanism at the LibOS layer was proposed by Isaku Yamahata ( see https://github.com/oscarlab/graphene/pull/347 ). This proposal changes the points at which signals are delivered to the user app. The two points are (1) if signal arrives during app execution, the signal is delivered after host OS returns from signal context, and (2) if signal arrives during LibOS/PAL execution, the signal is delivered after system-call completion. This is in contrast to current LibOS approach of (1) delivering the first signal even in the middle of emulated syscall, and (2) pending nested signals until system-call completion.
+* Linux PAL: (1) If signal arrives during PAL call, pend it and return from signal context,
+continuing normal context of PAL call. Immediately after a PAL call is finished, deliver all
+pending signals to LibOS. (2) If signal arrives during LibOS/application code, deliver the
+signal immediately to LibOS. Note that the signal delivery and handling is done in signal context
+(in contrast to pending-signal delivery).
+
+* Linux-SGX PAL: (1) If signal arrives during enclave-code execution, remember the interrupted
+enclave-code context and return from signal context. When jumping back into the enclave (in normal
+context), deliver the signal to LibOS. After handling the signal, LibOS/PAL will continue from
+interrupted enclave-code context. (2) If signal arrives during non-enclave-code, i.e.
+untrusted-PAL, execution, just return from signal context. When jumping back into the enclave
+(in normal context), deliver the signal to LibOS. In contrast to first case, after handling the
+signal, LibOS/PAL will continue as if outermost PAL function failed with `PAL_ERROR_INTERRUPTED`.
+
+The advantage of the first mechanism is that there is never a possibility of nested PAL calls
+(which is not supported by Graphene). However, this also disallows nested signals already at the
+PAL layer. The advantage of the second mechanism is that nested signals are possible, at least as
+far as it concerns the PAL layer.
+
+There is a single unified signal-handling mechanism at the LibOS layer. This mechanism does *not*
+support nested signals: if a signal is delivered while another signal is handled (or during a LibOS
+internal lock), then it is pended. Pended signals are delivered after any system-call completion
+or after any LibOS internal unlock.
+
+A new signal-handling mechanism at the LibOS layer was proposed by Isaku Yamahata
+(see https://github.com/oscarlab/graphene/pull/347). This proposal changes the points at which
+signals are delivered to the user app. The two points are (1) if signal arrives during app
+execution, the signal is delivered after host OS returns from signal context, and (2) if signal
+arrives during LibOS/PAL execution, the signal is delivered after system-call completion. This is
+in contrast to current LibOS approach of (1) delivering the first signal even in the middle of
+emulated syscall, and (2) pending nested signals until system-call completion.
 
 
 ## Linux-SGX PAL Flows
@@ -128,7 +160,8 @@ On the example of SIGINT, until we arrive into `_DkGenericSignalHandle()`.
 
 ### Async Signal Arrives During Non-Enclave Code Execution
 
-Non-enclave code execution can only happen if Graphene process is currently executing untrusted-PAL code, e.g., is blocked on a `futex(wait)` system call.
+Non-enclave code execution can only happen if Graphene process is currently executing untrusted-PAL
+code, e.g., is blocked on a `futex(wait)` system call.
 
 On the example of SIGINT, until we arrive into `_DkGenericSignalHandle()`.
 
@@ -177,13 +210,18 @@ On the example of SIGINT, until we arrive into `_DkGenericSignalHandle()`.
 
 ### Sync Signal Arrives During Enclave Code Execution
 
-This case is exactly the same as for async signal. The only difference in the diagram would be that `_DkTerminateSighandler` is replaced by `_DkResumeSighandler`. But the logic is exactly the same.
+This case is exactly the same as for async signal. The only difference in the diagram would be that
+`_DkTerminateSighandler` is replaced by `_DkResumeSighandler`. But the logic is exactly the same.
 
 ### Sync Signal Arrives During Non-Enclave Code Execution
 
-Non-enclave code execution can only happen if Graphene process is currently executing untrusted-PAL code, e.g., is blocked on a `futex(wait)` system call.
+Non-enclave code execution can only happen if Graphene process is currently executing untrusted-PAL
+code, e.g., is blocked on a `futex(wait)` system call.
 
-If a sync signal arrives in this case, it means that there was a memory fault, illegal instruction, or arithmetic exception in untrusted-PAL code. This should never happen in a correct implementation of Graphene. In this case, `_DkResumeSighandler` simply kills the faulting thread (not the whole process!) by issuing `exit(1)` syscall.
+If a sync signal arrives in this case, it means that there was a memory fault, illegal instruction,
+or arithmetic exception in untrusted-PAL code. This should never happen in a correct implementation
+of Graphene. In this case, `_DkResumeSighandler` simply kills the faulting thread (not the whole
+process!) by issuing `exit(1)` syscall.
 
 ### DkGenericSignalHandle Logic
 
@@ -233,7 +271,8 @@ If a sync signal arrives in this case, it means that there was a memory fault, i
 
 ### Initialization of Signal Handling
 
-Very similar to the flow for Linux-SGX. In addition to 7 handled signals, Linux PAL also operates on these signals:
+Very similar to the flow for Linux-SGX. In addition to 7 handled signals, Linux PAL also operates on
+these signals:
 * SIGCHLD -- is ignored
 * SIGPIPE -- installs `_DkPipeSighandler` handler
 
@@ -510,17 +549,25 @@ On the example of `suspend_upcall()`. Assumes `tcb.context.preempt = 1` (in a si
 (Notation: <Linux signal> -> PAL signal -> LibOS signal handler (purpose))
 
 Sync signals:
-* SIGFPE  -> PAL_EVENT_ARITHMETIC_ERROR  -> arithmetic_error_upcall (if not internal fault, handle pending non-blocked SIGFPEs and then this SIGFPE)
-* SIGSEGV -> PAL_EVENT_MEMFAULT -> memfault_upcall (if not internal fault, handle pending non-blocked SIGSEGVs and then this SIGSEGV)
-* SIGBUS  -> PAL_EVENT_MEMFAULT -> memfault_upcall (if not internal fault, handle pending non-blocked SIGBUSs and then this SIGBUS)
-* SIGILL  -> PAL_EVENT_ILLEGAL  -> illegal_upcall  (handle pending non-blocked SIGILLs and then this SIGILL)
+* SIGFPE  -> PAL_EVENT_ARITHMETIC_ERROR  -> arithmetic_error_upcall (if not internal fault, handle
+pending non-blocked SIGFPEs and then this SIGFPE)
+* SIGSEGV -> PAL_EVENT_MEMFAULT -> memfault_upcall (if not internal fault, handle pending
+non-blocked SIGSEGVs and then this SIGSEGV)
+* SIGBUS  -> PAL_EVENT_MEMFAULT -> memfault_upcall (if not internal fault, handle pending
+non-blocked SIGBUSs and then this SIGBUS)
+* SIGILL  -> PAL_EVENT_ILLEGAL  -> illegal_upcall  (handle pending non-blocked SIGILLs and then
+this SIGILL)
 
 Async signals:
-* SIGTERM -> PAL_EVENT_QUIT     -> quit_upcall    (handle pending non-blocked SIGTERMs and then this SIGTERM)
-* SIGINT  -> PAL_EVENT_SUSPEND  -> suspend_upcall (handle pending non-blocked SIGINTs and then this SIGINT)
-* SIGCONT -> PAL_EVENT_RESUME   -> resume_upcall  (handle pending non-blocked signals but not SIGCONT itself)
-
-We already described flows of `suspend_upcall`. Here is how other signal handlers are different from `suspend_upcall`:
+* SIGTERM -> PAL_EVENT_QUIT     -> quit_upcall    (handle pending non-blocked SIGTERMs and then
+this SIGTERM)
+* SIGINT  -> PAL_EVENT_SUSPEND  -> suspend_upcall (handle pending non-blocked SIGINTs and then
+this SIGINT)
+* SIGCONT -> PAL_EVENT_RESUME   -> resume_upcall  (handle pending non-blocked signals but not
+SIGCONT itself)
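+
+The mapping above can be summarized as a small lookup table. The sketch below is illustrative only:
+the header name and the table itself are assumptions; the `PAL_EVENT_*` constants are the ones
+listed above.
+
+```c
+#include <signal.h>
+#include "pal.h"  /* assumed location of the PAL_EVENT_* constants */
+
+/* Illustrative table mirroring the mapping above (not the actual Graphene data structure). */
+static const int signal_to_pal_event[] = {
+    [SIGFPE]  = PAL_EVENT_ARITHMETIC_ERROR,
+    [SIGSEGV] = PAL_EVENT_MEMFAULT,
+    [SIGBUS]  = PAL_EVENT_MEMFAULT,
+    [SIGILL]  = PAL_EVENT_ILLEGAL,
+    [SIGTERM] = PAL_EVENT_QUIT,
+    [SIGINT]  = PAL_EVENT_SUSPEND,
+    [SIGCONT] = PAL_EVENT_RESUME,
+};
+```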
+
+We already described flows of `suspend_upcall`. Here is how other signal handlers are different
+from `suspend_upcall`:
 ```
   Normal context (enclave mode)
 +-----------------------------------------------------+
@@ -600,9 +647,10 @@ illegal_upcall(event, context)
 ```
 
 
-## Alarm() Emulation
+# Alarm() Emulation
 
-SIGALRM signal is blocked in Graphene. Therefore, on `alarm()` syscall, SIGALRM is generated and raised purely by LibOS.
+The SIGALRM signal is blocked in Graphene. Therefore, on an `alarm()` syscall, SIGALRM is
+generated and raised purely by the LibOS.
 
 ```
   Application thread                              AsyncHelperThread
@@ -665,16 +713,29 @@ shim_do_alarm(seconds)                          ... no alive host thread ...
 ```
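+
+A toy user-space sketch of the same idea, with assumed names: the host never produces SIGALRM;
+instead a helper thread (playing the role of the LibOS async helper thread) wakes up after the
+timeout and injects SIGALRM into the target thread.
+
+```c
+#include <pthread.h>
+#include <signal.h>
+#include <unistd.h>
+
+struct alarm_req {
+    pthread_t    target;   /* thread that called alarm()           */
+    unsigned int seconds;  /* timeout requested by the application */
+};
+
+/* Stand-in for the async helper thread: sleep until the timeout expires, then raise SIGALRM */
+/* purely from user space (Graphene instead appends it to the thread's pending-signal queue). */
+static void* async_helper(void* arg) {
+    struct alarm_req* req = arg;
+    sleep(req->seconds);
+    pthread_kill(req->target, SIGALRM);
+    return NULL;
+}
+
+/* Stand-in for shim_do_alarm(): arm the helper thread and return immediately. */
+static void emulate_alarm(struct alarm_req* req) {
+    pthread_t helper;
+    pthread_create(&helper, NULL, async_helper, req);
+    pthread_detach(helper);
+}
+```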
 
 
-## Bugs and Issues
+# Bugs and Issues
 
-* BUG? Graphene LibOS performs `DkThreadYieldExecution()` in `__handle_signal()` (i.e., yield thread execution after handling one pending signal). Looks useless.
+* BUG? Graphene LibOS performs `DkThreadYieldExecution()` in `__handle_signal()` (i.e., it yields
+thread execution after handling one pending signal). This looks useless.
 
 * TODO: clean-up `install_async_event()`, redundant logic in `async_list` checking
 
 * TODO: `suspend_on_signal` is useless
 
-* BUG? `return_from_ocall` remembers RDI = -PAL_ERROR_INTERRUPTED, but `_DkExceptionReturn` never returns back to after `_DkHandleExternalEvent` in `return_from_ocall`. Thus, the PAL return code (interrupted error) is lost! Check it with printfs and simple example.
-
-* BUG? `SIGNAL_DELAYED` flag is useless? It is set as one of the highest bits in int64 `SIGNAL_DELAYED = 0x80000000UL`. `resume_upcall` sets SIGNAL_DELAYED flag in current thread's `context.preempt` if the SIGCONT signal arrives during signal handling. `handle_signal` does the same.
-
-* TODO: Sigsuspend fix ( https://github.com/oscarlab/graphene/issues/453 ). In `shim_do_sigsuspend`: (1) unlock before thread_setwait + thread_sleep, (2) lock and unlock around last set_sig_mask, (3) add code similar to `__handle_signal`, but on all possible signal numbers and without `DkThreadYieldExecution` and without unsetting `SIGNAL_DELAYED` (?). Allow all pending signals to be delivered ( see https://stackoverflow.com/questions/40592066/sigsuspend-vs-additional-signals-delivered-during-handler-execution ). If at least one signal was delivered, do NOT go to `thread_sleep` but immediately return (and set the old mask beforehand).
+* BUG? `return_from_ocall` remembers RDI = -PAL_ERROR_INTERRUPTED, but `_DkExceptionReturn` never
+returns to the point right after `_DkHandleExternalEvent` in `return_from_ocall`. Thus, the PAL
+return code (the interrupted error) is lost! Check this with printfs and a simple example.
+
+* BUG? Is the `SIGNAL_DELAYED` flag useless? It is set as one of the highest bits in the int64
+`SIGNAL_DELAYED = 0x80000000UL`. `resume_upcall` sets the SIGNAL_DELAYED flag in the current
+thread's `context.preempt` if the SIGCONT signal arrives during signal handling; `handle_signal`
+does the same.
+
+* TODO: Sigsuspend fix ( https://github.com/oscarlab/graphene/issues/453 ). In `shim_do_sigsuspend`:
+(1) unlock before thread_setwait + thread_sleep
+(2) lock and unlock around last set_sig_mask
+(3) add code similar to `__handle_signal`, but on all possible signal numbers and without
+`DkThreadYieldExecution` and without unsetting `SIGNAL_DELAYED` (?).
+Allow all pending signals to be delivered
+(see https://stackoverflow.com/questions/40592066/sigsuspend-vs-additional-signals-delivered-during-handler-execution).
+If at least one signal was delivered, do NOT go to `thread_sleep` but immediately return
+(and set the old mask beforehand).

+ 13 - 12
Documentation/oldwiki/Implemented-System-Calls.md → Documentation/oldwiki/Supported-System-Calls-in-Graphene.md

@@ -1,9 +1,8 @@
-# Implemented System Calls
-The following is a list of system calls that are currently implemented. We will update the list when there is a major release.
+The following is a list of system calls that are currently implemented.
 
-## System calls that are fully implemented
+## System Calls that are Fully Implemented
 
-### System calls that require multi-process coordination
+### System Calls that Require Multi-process Coordination
 
 * Process creation (fork/vfork)
 * execve
@@ -53,7 +52,7 @@ The following is a list of system calls that are currently implemented. We will
 * Thread-state (arch_prctl)
 
 
-## System calls that are partially implemented
+## System Calls that are Partially Implemented
 
 * ioctl
 
@@ -62,19 +61,23 @@ The following is a list of system calls that are currently implemented. We will
 * fcntl
    + Supported: Duplicate FDs (F_DUPFD/F_DUPFD_CLOEXEC), Set FD flags (F_GETFD/F_SETFD), Set file flags (F_GETFL/F_SETFL)
    + Unsupported: File locking (F_SETLK/F_SETLKW/F_GETLK)
+
 * clone
 
-   The Linux clone system call is ubiquitously used for creation of processes and threads. However, in Graphene, we only use the clone system call for thread creation. Process creation are implemented as the fork system call. In practice, it is quite rare for applications to use methods that are not forking to create processes.
+   The Linux clone system call is ubiquitously used for creation of processes and threads. However,
+   in Graphene, we only use the clone system call for thread creation. Process creation is
+   implemented as the fork system call. In practice, it is quite rare for applications to use
+   methods that are not forking to create processes.
 
-   The namespace options for the clone system calls (CLONE_FS, CLONE_NEWIPC, CLONE_NEWNET, etc) are currently not supported.
+   The namespace options (CLONE_FS, CLONE_NEWIPC, CLONE_NEWNET, etc) are currently not supported.
 
 * msgctl
 
-   Only IPC_RMID is supported
+   Only IPC_RMID is supported.
 
 * setpgid/setsid
 
-   These two system calls will set the process credential, but do not coordinate any cross-process state.
+   These two system calls set the process credentials but do not coordinate any cross-process state.
 
 * bind
 
@@ -86,7 +89,5 @@ The following is a list of system calls that are currently implemented. We will
 
 * getrlimit
 
-   getrlimit() returns the static values of RLIMIT_NOFILE, RLIMIT_RSS, RLIMIT_AS, RLIMIT_STACK.
-
-## System calls that are added in Graphene as _Hypercalls_
+   Returns the static values of RLIMIT_NOFILE, RLIMIT_RSS, RLIMIT_AS, RLIMIT_STACK.
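+
+   For illustration, a small standalone program that queries one of these limits; under Graphene
+   the values returned are the static ones mentioned above rather than host-configured ones.
+
+   ```c
+   #include <stdio.h>
+   #include <sys/resource.h>
+
+   int main(void) {
+       struct rlimit rl;
+       if (getrlimit(RLIMIT_NOFILE, &rl) == 0)
+           printf("RLIMIT_NOFILE: cur=%llu max=%llu\n",
+                  (unsigned long long)rl.rlim_cur, (unsigned long long)rl.rlim_max);
+       return 0;
+   }
+   ```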
 

+ 0 - 9
Documentation/oldwiki/Troubleshooting-Common-Issues.md

@@ -1,9 +0,0 @@
-# SGX: Application Won't Start
-
-If you are using an application with a fixed mapping, and at a relatively low address (say 64K), you may have problems starting an enclave on newer versions of Ubuntu.  Check:
-
-sudo sysctl vm.mmap_min_addr
-
-If the result is non-zero, try setting it to zero:
-
-sudo sysctl vm.mmap_min_addr=0

Some files were not shown because too many files changed in this diff