Commit 699cae2: Update README.md (dguido, Aug 1, 2016)

# DARPA Challenge Binaries on Linux and OS X

[![Build Status](https://travis-ci.org/trailofbits/cb-multios.svg?branch=master)](https://travis-ci.org/trailofbits/cb-multios)
[![Slack Status](https://empireslacking.herokuapp.com/badge.svg)](https://empireslacking.herokuapp.com)

The DARPA Challenge Binaries (CBs) are custom-made programs specifically designed to contain vulnerabilities that represent a wide variety of crashing software flaws. They are more than simple test cases; they approximate real software with enough complexity to stress both manual and automated vulnerability discovery. The CBs come with extensive functionality tests, triggers for introduced bugs, patches, and performance monitoring tools, enabling benchmarking of patching tools and bug mitigation strategies.

The CBs were originally developed for DECREE -- a custom Linux-derived operating system that has no signals, no shared memory, no threads, no libc runtime, and only seven system calls -- making them incompatible with any existing analysis tools. In this repository, we have modified the CBs to work on Linux and OS X by replacing the build system and creating a new libc-like runtime. Scripts have been provided that help modify the CBs to support other operating systems.
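
For reference, the seven DECREE system calls are `_terminate`, `transmit`, `receive`, `fdwait`, `allocate`, `deallocate`, and `random`. The sketch below is a minimal, hypothetical challenge-style program that uses two of them through `libcgc`; the exact declarations in this repository's `libcgc.h` may differ slightly.

```
#include <libcgc.h>   /* the libc-like runtime providing the DECREE calls */

/* Hypothetical CB-style echo: read up to 64 bytes from fd 0 and write them
 * back to fd 1 using only the DECREE-style receive() and transmit() calls.
 * This is an illustration, not a challenge from this repository. */
int main(void) {
    char buf[64];
    size_t rx = 0, tx = 0;

    if (receive(0, buf, sizeof(buf), &rx) != 0 || rx == 0)
        _terminate(1);                 /* DECREE's exit() equivalent */
    if (transmit(1, buf, rx, &tx) != 0)
        _terminate(1);
    return 0;
}
```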

The CBs are the best available benchmark to evaluate program analysis tools. Using them, it is possible to make comparisons such as:

* How good are tools from the Cyber Grand Challenge vs. existing program analysis and bug finding tools?
* When a new tool is released, how does it stack up against the current best?
* Do static analysis tools that work with source code find more bugs than dynamic analysis tools that work with binaries?
* Are tools written for Mac OS X better than tools written for Linux, and are they better than tools written for Windows?

## Components

### original-challenges
This directory contains all of the unmodified source code for the challenge binaries. Challenges that do not currently build or are not yet supported are in the `multibin` directory.

### include
This directory contains `libcgc`, which implements the DECREE system calls so that the challenges can run on non-DECREE systems. `libcgc` currently works on OS X and Linux.

### tools
This folder contains Python scripts that help with modifying, building, and testing the original challenges.

### cb_patcher.py
This script will copy all challenges out of `original-challenges`, modify them as necessary, and place them in `cqe-challenges`. These modifications include:

* Deleting `libcgc.h` if it appears anywhere in the challenge source
* Applying the find/replace definitions in `manual_patches.yaml`

### makefiles.py
This script parses the `Makefile` in each challenge folder and generates a `CMakeLists.txt` with the same variables and CFLAGS. It also adds the `-nostdinc` flag to all challenges, so that they have no access to the system headers and can only include their own libraries and `libcgc.h`.

### cb_tester.py
This is a helper script that tests all challenges using `cb-test`. Results are summarized and can be written to an Excel spreadsheet. More details below.

## Building

To build individual challenges, list them as arguments to `build.sh`, for example:

```
$ ./build.sh CADET_00001 CROMU_00001
```

These commands will build both the patched and unpatched binaries in the `bin` folder of the respective challenge (`cqe-challenges/[challenge]/bin/`).

## Testing

```
-o / --output OUTPUT: Output a summary of the results to an excel spreadsheet
```

### Example Usage

The following will run tests against all challenges in `cqe-challenges` and save the results to `out.xlsx`:

This will test only POVs against all challenges and save the results:

```
$ ./cb_tester.py -a --povs -o out.xlsx
```

## Current Status

Porting the Challenge Binaries is a work in progress, and the current status of the porting effort is tracked in the following spreadsheet:

https://docs.google.com/spreadsheets/d/1B88nQFs1G7MZemB2leOROhJKSNIz_Rge6sbd1KZESuE/edit?pli=1#gid=1553177307

## Notes

Windows support is coming soon!

The challenge binaries were written for a platform without a standard libc, so each binary re-implements just the libc features it needs, often redefining standard symbols in the process. By compiling with the `-nostdinc` flag, we disable the standard library headers, so these redefinitions do not conflict with the host system and very little challenge binary code has to be rewritten.
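
As a rough illustration of this pattern (not code from any particular challenge), a CB's private support library might carry its own minimal definitions of standard names:

```
/* Hypothetical sketch of a challenge's private "libc" subset.
 * With -nostdinc there are no system headers, so the challenge
 * supplies its own types and re-defines standard symbols locally. */
typedef unsigned long size_t;      /* challenge-local size type        */

size_t strlen(const char *s) {     /* re-implements only what it needs */
    size_t n = 0;
    while (s[n] != '\0')
        n++;
    return n;
}

void *memset(void *dst, int c, size_t n) {
    unsigned char *p = dst;
    while (n--)
        *p++ = (unsigned char)c;
    return dst;
}
```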

We are working to make this repository easier to use for the evaluation of program analysis tools. If you have questions about the challenge binaries, please [join our Slack](https://empireslacking.herokuapp.com) and we'll be happy to answer them.

## Authors

Porting work was completed by Kareem El-Faramawi and Loren Maggiore, with help from Artem Dinaburg, Peter Goodman, Ryan Stortz, and Jay Little. Challenges were originally created by NARF Industries, Kaprica Security, Chris Eagle, Cromulence, West Point Military Academy, Thought Networks, and Air Force Research Labs while under contract for the DARPA Cyber Grand Challenge.
