IOH-BLADE: Benchmarking LLM-driven Automated Design and Evolution of Iterative Optimization Heuristics


Tip

See also the Documentation.


πŸ”₯ News

  • 2025.03 ✨✨ BLADE v0.0.1 released!

Introduction

BLADE (Benchmark suite for LLM-driven Automated Design and Evolution) provides a standardized benchmark suite for evaluating automatic algorithm design methods, particularly those that use large language models (LLMs) to generate metaheuristics. It focuses on continuous black-box optimization and integrates a diverse set of problems and methods, facilitating fair and comprehensive benchmarking.

Features

  • Comprehensive Benchmark Suite: Covers various classes of black-box optimization problems.
  • LLM-Driven Evaluation: Supports algorithm evolution and design using large language models.
  • Built-In Baselines: Includes state-of-the-art metaheuristics for comparison.
  • Automatic Logging & Visualization: Integrated with IOHprofiler for performance tracking.

Included Benchmark Function Sets

BLADE incorporates several benchmark function sets to provide a comprehensive evaluation environment:

| Name | Short Description | Number of Functions | Multiple Instances |
|------|-------------------|---------------------|--------------------|
| BBOB (Black-Box Optimization Benchmarking) | A suite of 24 noiseless functions designed for benchmarking continuous optimization algorithms. Reference | 24 | Yes |
| SBOX-COST | A set of 24 boundary-constrained functions focusing on strict box-constraint optimization scenarios. Reference | 24 | Yes |
| MA-BBOB (Many-Affine BBOB) | An extension of the BBOB suite that generates functions through affine combinations and shifts. Reference | Generator-based | Yes |
| GECCO MA-BBOB Competition Instances | A collection of 1,000 pre-defined instances from the GECCO MA-BBOB competition, evaluating algorithm performance on diverse affine-combined functions. Reference | 1,000 | Yes |

In addition, several real-world applications are included, such as photonics problems.

Included Search Methods

The suite contains the following state-of-the-art LLM-assisted search algorithms:

| Algorithm | Description | Link |
|-----------|-------------|------|
| LLaMEA | Large Language Model Evolutionary Algorithm | code, paper |
| EoH | Evolution of Heuristics | code, paper |
| FunSearch | Google's GA-like algorithm | code, paper |
| ReEvo | Large Language Models as Hyper-Heuristics with Reflective Evolution | code, paper |
| LLM-Driven Heuristics Neighbourhood Search | LLM-Driven Neighborhood Search for Efficient Heuristic Design | code, paper |
| Monte Carlo Tree Search | Monte Carlo Tree Search for Comprehensive Exploration in LLM-Based Automatic Heuristic Design | code, paper |

Note: FunSearch is not yet integrated.

Supported LLM APIs

BLADE supports integration with various LLM APIs to facilitate automated design of algorithms:

| LLM Provider | Description | Integration Notes |
|--------------|-------------|-------------------|
| Gemini | Google's multimodal LLM, designed to process text, images, audio, and more. Reference | Accessible via the Gemini API, compatible with OpenAI libraries. Reference |
| OpenAI | Developer of the GPT series of models, including GPT-4, widely used for natural language understanding and generation. Reference | Integration through OpenAI's REST API and client libraries. |
| Ollama | A platform offering access to various LLMs, enabling local and cloud-based model deployment. Reference | Integration details can be found in the official documentation. |
| Claude | Anthropic's Claude models for safe and capable language generation. Reference | Accessed via the Anthropic API. |
| DeepSeek | Developer of the DeepSeek family of models for code and chat. Reference | Access via an OpenAI-compatible API at https://api.deepseek.com. |
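In code, the provider is chosen by the LLM wrapper you construct and pass to a search method. Below is a minimal sketch, assuming the hosted-provider wrappers follow the same pattern as the Ollama_LLM class used in the Quick Start further down; only Ollama_LLM is confirmed by this README, so the OpenAI_LLM name and its signature are assumptions:

  import os

  from iohblade.llm import Ollama_LLM  # confirmed by the Quick Start below

  # A local model served by Ollama; no API key required.
  llm = Ollama_LLM("qwen2.5-coder:14b")

  # Hosted providers typically read their key from the environment, e.g.:
  #   export OPENAI_API_KEY='your_api_key_here'
  # llm = OpenAI_LLM(os.environ["OPENAI_API_KEY"], "gpt-4o")  # assumed class name and signature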

Evaluating against Human-Designed Baselines

An important part of BLADE is the final evaluation of generated algorithms against state-of-the-art human-designed algorithms. The iohblade.baselines module implements several well-known SOTA black-box optimizers to compare against, including (but not limited to) CMA-ES and DE variants.

For the final validation, BLADE uses IOHprofiler, which provides detailed tracking and visualization of performance metrics.
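As an illustration of the kind of tracking IOHprofiler provides, here is a small standalone sketch using the ioh Python package directly; this is not BLADE-internal code, and it assumes ioh and numpy are installed:

  import numpy as np
  import ioh

  # Log a simple random search on BBOB function 1 (Sphere), instance 1, in 5D.
  problem = ioh.get_problem(1, instance=1, dimension=5)
  logger = ioh.logger.Analyzer(root="ioh_data", folder_name="random_search")
  problem.attach_logger(logger)

  rng = np.random.default_rng(42)
  for _ in range(1000):
      x = rng.uniform(problem.bounds.lb, problem.bounds.ub)
      problem(x)  # every evaluation is recorded by the attached logger

  print(problem.state.current_best)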

🎁 Installation

The easiest way to use BLADE is via the PyPI package (iohblade).

  pip install iohblade

Important

Python 3.11 or newer is required. You need an OpenAI/Gemini/Ollama/Claude/DeepSeek API key to use LLM models.

You can also install the package from source using uv (0.7.19); make sure you have uv installed first.

  1. Clone the repository:

    git clone https://github.com/XAI-liacs/BLADE.git
    cd BLADE
  2. Install the required dependencies via uv:

    uv sync
  3. (Optional) Install additional packages:

    uv sync --group kerneltuner --group dev --group docs

    This will install additional dependencies for development and building documentation. The (experimental) auto-kernel application is also under a separate group for now.
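To verify the installation, a quick smoke test (assuming the package exposes a top-level iohblade module, as the Quick Start imports suggest):

  python -c "import iohblade; print('iohblade imported successfully')"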

πŸ’» Quick Start

  1. Set up an API key for your preferred provider:

    • Obtain an API key from OpenAI, Claude, Gemini, or another LLM provider.
    • Set the API key in your environment variables:
      export OPENAI_API_KEY='your_api_key_here'
  2. Running an Experiment

    To run a benchmarking experiment using BLADE:

    import os
    
    from iohblade.experiment import Experiment
    from iohblade.llm import Ollama_LLM
    from iohblade.methods import LLaMEA, RandomSearch
    from iohblade.problems import BBOB_SBOX
    from iohblade.loggers import ExperimentLogger
    
    llm = Ollama_LLM("qwen2.5-coder:14b")  # e.g. qwen2.5-coder:14b or deepseek-coder-v2:16b
    budget = 50  # short budget for testing
    
    RS = RandomSearch(llm, budget=budget)  # Random Search baseline
    LLaMEA_method = LLaMEA(llm, budget=budget, name="LLaMEA", n_parents=4, n_offspring=12, elitism=False)  # LLaMEA with a 4,12 strategy
    methods = [RS, LLaMEA_method]
    
    problems = []
    # Use all 24 SBOX_COST functions, with instances 1-5 for training and 6-15 for final validation.
    training_instances = [(f, i) for f in range(1, 25) for i in range(1, 6)]
    test_instances = [(f, i) for f in range(1, 25) for i in range(6, 16)]
    problems.append(BBOB_SBOX(training_instances=training_instances, test_instances=test_instances, dims=[5], budget_factor=2000, name="SBOX_COST"))
    # Set up the experiment with 5 independent runs per method/problem (here, a single problem).
    logger = ExperimentLogger("results/SBOX")
    experiment = Experiment(methods=methods, problems=problems, runs=5, show_stdout=True, exp_logger=logger)
    experiment()  # run the experiment; all data is logged in the folder results/SBOX/
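The instance lists above are plain (function_id, instance_id) tuples, so they are easy to shrink for a quicker test run; here is a sketch reusing only the classes already imported above:

    # Smaller smoke test: functions 1-3, two training and two test instances each.
    training_instances = [(f, i) for f in range(1, 4) for i in range(1, 3)]
    test_instances = [(f, i) for f in range(1, 4) for i in range(3, 5)]
    problems = [BBOB_SBOX(training_instances=training_instances, test_instances=test_instances,
                          dims=[5], budget_factor=2000, name="SBOX_smoke")]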

Trackio logging

To mirror results to a Trackio dashboard, install the optional dependency and use TrackioExperimentLogger:

  uv sync --group trackio

Then, in your experiment script:

  from iohblade.loggers import TrackioExperimentLogger

  logger = TrackioExperimentLogger("my-project")
  experiment = Experiment(methods=methods, problems=problems, runs=5, exp_logger=logger)
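As the snippet shows, TrackioExperimentLogger is a drop-in replacement for ExperimentLogger, so the rest of the experiment setup stays unchanged.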

🌐 Webapp

After running experiments you can browse them using the built-in Streamlit app:

  uv run iohblade-webapp

The app lists available experiments from the results directory, displays their progress, and shows convergence plots.


πŸ’» Examples

See the files in the examples folder for examples of experiments and visualisations.


πŸ€– Contributing

Contributions to BLADE are welcome! Here are a few ways you can help:

  • Report Bugs: Use GitHub Issues to report bugs.
  • Feature Requests: Suggest new features or improvements.
  • Pull Requests: Submit PRs for bug fixes or feature additions.

Please refer to CONTRIBUTING.md for more details on contributing guidelines.

πŸͺͺ License

Distributed under the MIT License. See LICENSE for more information.

✨ Citation

If you use BLADE in your research, please cite the following work:

@inproceedings{vanstein2025blade,
  author    = {Niki van Stein and Anna V. Kononova and Haoran Yin and Thomas B{\"a}ck},
  title     = {BLADE: Benchmark suite for LLM-driven Automated Design and Evolution of iterative optimisation heuristics},
  booktitle = {Proceedings of the Genetic and Evolutionary Computation Conference Companion},
  series    = {GECCO '25 Companion},
  year      = {2025},
  pages     = {2336--2344},
  publisher = {Association for Computing Machinery},
  address   = {New York, NY, USA},
  doi       = {10.1145/3712255.3734347},
  url       = {https://doi.org/10.1145/3712255.3734347}
}

The repository also provides a CITATION.cff file for use with GitHub's citation feature.


Happy Benchmarking with IOH-BLADE! πŸš€
