


River: fuzz testing for binaries using AI

River is an open-source framework that uses AI to guide the fuzz testing of binary programs.



River 3.0 Architecture

The architecture of River 3.0 is given below:

[Figure: RIVER 3.0 architecture]

Currently, component 1 (binary execution) is functional, while component 0 (the LLVM path) is still work in progress.

Scientific Publications

River is developed in the Department of Computer Science, University of Bucharest. Ciprian Paduraru is the lead developer.

Scientific publications related to River can be found below:



  1. Clone this repo with the --recursive option, since it contains external submodules.
    git clone --recursive
  2. Build Triton as documented in River3/ExternalTools/Triton/ or use the guidelines from the project’s documentation.

  3. Install LIEF with pip install lief or build LIEF from River3/ExternalTools/LIEF according to the project’s documentation.

  4. Install numpy with pip install numpy.

  5. Install tensorflow with pip install tensorflow.

To check that the dependencies are installed correctly, run the following commands in a Python console (tested on Python 3.7):

$ python3
>>> import triton
>>> import lief
>>> import numpy
>>> import tensorflow

Core Tool Testing

As a proof of concept, we currently implement a generic concolic executor and Generational Search (SAGE), the first open-source version of the technique (see the original paper here); it can be found in River3/python.

The River3/python/ version is the recommended one for performance.

You can test it against the programs inside River3/TestPrograms (e.g. the crackme_xor, sage, sage2). Feel free to modify them or experiment with your own code.
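For illustration, here is a minimal sketch of the checking logic a crackme-style target might contain. The function name, the XOR key 0x55, and the secret bytes are all invented for this sketch; the actual River3/TestPrograms/crackme_xor.c may differ.

```c
/* Hypothetical sketch of the core check in a crackme-style target.
 * Names, key, and secret are illustrative only. */
#include <stddef.h>
#include <stdint.h>

/* Returns 1 when the XOR-obfuscated comparison passes, 0 otherwise.
 * The winning input for this particular secret is "river". */
int check(const uint8_t *buf, size_t len)
{
    static const uint8_t secret[] = {0x27, 0x3c, 0x23, 0x30, 0x27};
    if (len < sizeof(secret))
        return 0;
    for (size_t i = 0; i < sizeof(secret); i++) {
        if ((buf[i] ^ 0x55) != secret[i])
            return 0;   /* each early exit is a diverging branch */
    }
    return 1;           /* hard-to-reach success path */
}
```

Wrapped in a main that feeds user input to check, the success path at the end is exactly the kind of hard-to-reach branch a concolic fuzzer tries to discover, and a natural goal for the --targetAddress option described below.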

How to build your program for testing with River?

gcc can be used to build the test binaries for River. The command below compiles one of the examples provided in River3/TestPrograms:

$ gcc -g -O0 -o crackme_xor crackme_xor.c

NOTE: The -g and -O0 flags are optional. They help with debugging and with reading the generated assembly code without optimizations.

How to use the Concolic (SAGE-like) tool?

To see the parameters supported by the script and how to use them, run:

$ cd path/to/river/River3/python
$ python3 -h

To run River3/python/ for the binary River3/TestPrograms/crackme_xor navigate to River3/python and run the command below:

$ cd path/to/river/River3/python
$ python3 \
	--binaryPath ../TestPrograms/crackme_xor \
	--architecture x64 \
	--maxLen 5 \
	--targetAddress 0x11d3 \
	--logLevel CRITICAL \
	--secondsBetweenStats 10 \
	--outputType textual

--architecture can be set to one of x64, x86, ARM32, or ARM64.

The --targetAddress parameter is optional. It is useful for capture-the-flag-style goals, where you want to reach a certain address in the binary code. Execution stops when the target is reached; otherwise, the tool exhaustively searches the input space. If you have access to the source code, you can use the following command to find the address of interest.

$ objdump -M intel -S ./crackme_xor

--logLevel uses Python's logging module levels; set it to the minimum level you want to see in the output. For example, use DEBUG to see everything that is logged.

--secondsBetweenStats is the interval, in seconds, at which various statistics are printed between runs.


By default, the tool searches for a function named RIVERTestOneInput to use as the testing entry point. To change the entry point, use the --entryfuncName option. The example below sets main as the entry point:

--entryfuncName "main"

How to use Reinforcement Learning for general-purpose fuzzing combined with symbolic execution?

Check the folder inside this branch:

How to use the Reinforcement-Learning-based Concolic tool?

NOTE: Our implementation uses TensorFlow 2 (version 2.3 was tested). You can manually modify the parameters of the model in River3/python/

The command below is an example of how to run River3/python/ for the binary River3/TestPrograms/crackme_xor:

$ python3 \
	--binaryPath ../TestPrograms/crackme_xor \
	--architecture x64 \
	--maxLen 5 \
	--targetAddress 0x11d3 \
	--logLevel CRITICAL \
	--secondsBetweenStats 10 \
	--outputType textual

The parameters have the same description as above.

Testing using Docker

We also provide a Dockerfile that builds an image with all the required dependencies and the latest River flavour. The Dockerfile can be found in the docker/ folder. Inside the docker/ folder are two files.

To build the image navigate to the docker/ directory and run make:

$ cd path/to/river/docker/
$ make

This will build the docker image locally. View the available images with:

$ docker image ls

To start a self-destructing container that runs a simple test, run:

$ make test

This runs a test of both flavours. One of the tests prints a number of warnings; this is expected, as TensorFlow is only warning that it did not find the CUDA libraries, so it won't be able to run on a GPU and will use the CPU instead.

To start a long running container with an interactive /bin/bash session, run:

$ make run

To connect to an existing, long running container, run:

$ make bash

To delete the container and the local image created, run:

$ make clean

Performance aspects:

Debugging aspects:

Future Work

P.S. Please get in touch if you would like to collaborate on this exciting and humble project! ^_^