Requirements: scons (it should already be installed as part of the ecosystem setup):

```
python -m pip install scons
```

It is also available pre-packaged, e.g. `apt-get install scons`.
To compile the benchmarks for Proteus, run:

```
make
```

You can optionally specify a different RISC-V prefix, e.g.:

```
make RISCV_PREFIX=riscv32-unknown-elf
```
To run the benchmarks:

```
python3 benchmark_speed.py --target-module=run_proteus --timeout=5400 --absolute
```

If the simulator is not in its default location, specify it with `--sim=<path to sim>`. You can also change the RISC-V prefix, e.g. to use riscv32, with `--riscv-prefix=riscv32-unknown-elf`.
Original Embench README below
This repository contains the Embench™ free and open source benchmark suite. These benchmarks are designed to test the performance of deeply embedded systems. As such they assume the presence of no OS, minimal C library support and in particular no output stream.
The rationale behind this benchmark is described in "Embench™: An Evolving Benchmark Suite for Embedded IoT Computers from an Academic-Industrial Cooperative: Towards the Long Overdue and Deserved Demise of Dhrystone" by David Patterson, Jeremy Bennett, Palmer Dabbelt, Cesare Garlati, G. S. Madhusudan and Trevor Mudge (see https://tmt.knect365.com/risc-v-workshop-zurich/agenda/2#software_embench-tm-a-free-benchmark-suite-for-embedded-computing-from-an-academic-industry-cooperative-towards-the-long-overdue-and-deserved-demise-of-dhrystone).
The benchmarks are largely derived from the Bristol/Embecosm Embedded
Benchmark Suite (BEEBS, see https://beebs.mageec.org/), which in turn draws
its material from various earlier projects. A full description and user
manual is in the doc directory.
The following git tags may be used to select a stable release of the repository:

- embench-0.5
- embench-1.0

The following are development releases:

- embench-2.0rc1
- embench-2.0rc2
The benchmarks can be used to yield a single consistent score for the performance of a platform and its compiler tool chain. The mechanism for this is described in the user manual.
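The user manual defines the exact scoring mechanism. As an illustration only, one standard way to fold per-benchmark results into a single score (and the basis of Embench-style relative scoring) is the geometric mean of speedups against a reference platform. A minimal sketch; the function name and the example timings are made up:

```python
from math import prod

def embench_style_score(times, ref_times):
    """Geometric mean of per-benchmark speedups relative to a reference.

    times / ref_times: dicts mapping benchmark name -> execution time (s).
    A score of 1.0 means parity with the reference platform.
    """
    speedups = [ref_times[b] / times[b] for b in times]
    return prod(speedups) ** (1.0 / len(speedups))

# Hypothetical timings; the benchmark names are illustrative.
times = {"aha-mont64": 2.0, "crc32": 4.0}
ref = {"aha-mont64": 4.0, "crc32": 4.0}
print(embench_style_score(times, ref))  # geomean of [2.0, 1.0] ~ 1.414
```

The geometric mean is used rather than the arithmetic mean so that no single benchmark dominates the overall score.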
- The benchmarks should all compile to fit in 64kB of program space and use no more than 64kB of RAM.
- The measurement of execution performance is designed to use "hot" caches. Each benchmark therefore executes its entire code several times before starting a timed run.
- Execution runs are scaled to take approximately 4 seconds of CPU time. This is long enough to be measured accurately, yet means all 19 benchmarks, including cache warm-up, can be run in a few minutes. The scaling factor is configurable, making Embench suitable for machines across a wide range of performance.
- The benchmarks are designed to be run on either real or simulated hardware. However, for meaningful execution performance results, any simulated hardware must be strictly cycle accurate.
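The warm-up and scaling behaviour described above can be sketched as a generic harness. This is a simplified illustration, not the actual Embench wrapper code: the function names, the warm-up count, and the scale factor are all assumptions.

```python
import time

WARMUP_RUNS = 3            # untimed passes to warm the caches (assumed count)
LOCAL_SCALE_FACTOR = 100   # repeat count, tuned so the timed run lasts ~4 s

def run_benchmark(benchmark_body):
    """Warm caches with a few untimed passes, then time a scaled run."""
    for _ in range(WARMUP_RUNS):           # cache warm-up, not measured
        benchmark_body()
    start = time.perf_counter()            # timing begins on hot caches
    for _ in range(LOCAL_SCALE_FACTOR):    # scaled, measured run
        benchmark_body()
    return time.perf_counter() - start

# Tiny stand-in workload that counts how often it is invoked.
calls = 0
def body():
    global calls
    calls += 1

elapsed = run_benchmark(body)
print(calls)  # WARMUP_RUNS + LOCAL_SCALE_FACTOR = 103 invocations in total
```

On a real target the scale factor would be chosen per benchmark so the measured portion takes roughly 4 seconds of CPU time, as described above.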
The top-level directory contains Python scripts to build and execute the benchmarks. The key top-level directories are:
- examples: Example Embench build configurations for different boards.
- doc: The user manual for Embench.
- src: The source for the benchmarks, one directory per benchmark.
- support: The generic wrapper code for the benchmarks.
- pylib: Support code for the Python scripts.
Embench is licensed under the GNU General Public License version 3 (GPL3). See the COPYING file for details. Some individual benchmarks are also available under different licenses. See the comments in the individual source files for details.
The code base is OpenChain compliant, with SPDX license identifiers provided throughout.