
Fix a pointer arithmetic bug in memarea_alloc()

Fortunately, the arithmetic cannot actually overflow, so long as we
*always* check the size of potentially hostile input before copying
it, and I believe we do: we check each line against MAX_LINE_LENGTH,
and each object name or object against MAX_UNPARSED_OBJECT_SIZE, both
of which are 128k.  So to trigger this overflow, the memarea would
have to be allocated far too high up in RAM, which most allocators
won't actually do.

Bugfix on 0.2.1.1-alpha, where memarea was introduced.

Found by Guido Vranken.
Author: Nick Mathewson (8 years ago)
Commit: be2d37ad3c
2 changed files with 12 additions and 3 deletions:
  1. changes/memarea_overflow (+7, -0)
  2. src/common/memarea.c (+5, -3)

changes/memarea_overflow (+7, -0)

@@ -0,0 +1,7 @@
+  o Minor bugfixes (pointer arithmetic):
+    - Fix a bug in memarea_alloc() that could have resulted in remote heap
+      write access, if Tor had ever passed an unchecked size to
+      memarea_alloc().  Fortunately, all the sizes we pass to memarea_alloc()
+      are pre-checked to be less than 128 kilobytes. Fixes bug 19150; bugfix
+      on 0.2.1.1-alpha. Bug found by Guido Vranken.
+

src/common/memarea.c (+5, -3)

@@ -80,8 +80,7 @@ typedef struct memarea_chunk_t {
   struct memarea_chunk_t *next_chunk;
   size_t mem_size; /**< How much RAM is available in mem, total? */
   char *next_mem; /**< Next position in mem to allocate data at.  If it's
-                   * greater than or equal to mem+mem_size, this chunk is
-                   * full. */
+                   * equal to mem+mem_size, this chunk is full. */
 #ifdef USE_ALIGNED_ATTRIBUTE
   char mem[FLEXIBLE_ARRAY_MEMBER] __attribute__((aligned(MEMAREA_ALIGN)));
 #else
@@ -237,7 +236,10 @@ memarea_alloc(memarea_t *area, size_t sz)
   tor_assert(sz < SIZE_T_CEILING);
   if (sz == 0)
     sz = 1;
-  if (chunk->next_mem+sz > chunk->U_MEM+chunk->mem_size) {
+  tor_assert(chunk->next_mem <= chunk->U_MEM + chunk->mem_size);
+  const size_t space_remaining =
+    (chunk->U_MEM + chunk->mem_size) - chunk->next_mem;
+  if (sz > space_remaining) {
     if (sz+CHUNK_HEADER_SIZE >= CHUNK_SIZE) {
       /* This allocation is too big.  Stick it in a special chunk, and put
        * that chunk second in the list. */