llm-network-validation-framework

Framework for preventive verification and validation of network configurations generated by Large Language Models (LLMs). The system integrates multiple analysis stages to ensure that automatically generated artifacts are not only syntactically correct, but also semantically consistent and operationally executable.

Overview

LLMs are increasingly used to generate network configurations, but syntactic validity alone does not guarantee operational feasibility. This framework proposes a preventive pipeline that operates before deployment, reducing the risk of failures in production environments.

The pipeline is composed of four main stages:

  1. Syntactic verification
    Validates whether the model output is in a structured format (valid JSON).

  2. Schema compliance verification
    Evaluates adherence to the expected schema (e.g., YANG), including data types and structure.

  3. Semantic verification
    Detects logical inconsistencies and policy conflicts using principles inspired by the DETOX algorithm.

  4. Experimental validation
    Executes the configuration in an emulated environment (Mininet) and verifies end-to-end connectivity.
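The first three stages can be sketched as a short chain of checks. This is an illustrative sketch only: the function names, the stand-in schema, and the duplicate-IP rule are assumptions for the example, not the repository's actual API or its YANG/DETOX implementation.

```python
import json

# Hypothetical sketch of stages 1-3; names and schema are illustrative,
# not the repository's actual code.
REQUIRED_KEYS = {"device": str, "interfaces": list}  # stand-in for a YANG model

def verify_syntax(raw: str):
    """Stage 1: the LLM output must parse as JSON at all."""
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        return None

def verify_schema(cfg: dict) -> bool:
    """Stage 2: required keys exist and have the expected types."""
    return all(
        key in cfg and isinstance(cfg[key], typ)
        for key, typ in REQUIRED_KEYS.items()
    )

def verify_semantics(cfg: dict) -> bool:
    """Stage 3: a DETOX-style consistency rule, e.g. reject the same
    IP address assigned to two different interfaces."""
    ips = [i.get("ip") for i in cfg["interfaces"] if i.get("ip")]
    return len(ips) == len(set(ips))

raw = '{"device": "r1", "interfaces": [{"name": "eth0", "ip": "10.0.0.1/24"}]}'
cfg = verify_syntax(raw)
ok = cfg is not None and verify_schema(cfg) and verify_semantics(cfg)
# Stage 4 (experimental validation) would then deploy cfg in Mininet
# and run end-to-end connectivity tests; omitted here.
print(ok)  # → True
```

Only configurations that pass all three offline stages would proceed to the Mininet stage, which is the expensive step.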

Objective

Provide a systematic mechanism to:

  • reduce false positives resulting from syntactic-only validation
  • identify semantic inconsistencies not captured by schemas
  • evaluate the real executability of LLM-generated network configurations
  • support research in natural language-driven network automation

Architecture

The framework operates as an intermediate layer between LLM-based configuration generation and the network infrastructure. The model output is processed sequentially through the pipeline stages until final validation in an emulated environment.

Requirements

  • Python 3.10 or higher
  • Mininet (v2.3 or higher)
  • Linux (Ubuntu 22.04 recommended)

Additional Python dependencies are listed in requirements.txt.

Usage

  1. Generate configurations using a compatible LLM
  2. Run the verification pipeline
  3. Validate the configuration in Mininet

Example:

python run_pipeline.py --input input.json
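The exact shape of input.json depends on the target schema; a minimal illustrative input (field names are assumptions for the example, not the repository's actual format) might look like:

```json
{
  "device": "r1",
  "interfaces": [
    {"name": "eth0", "ip": "10.0.0.1/24"}
  ]
}
```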

Reproducibility

The repository includes:

  • experiment execution scripts
  • prompt templates
  • execution logs
  • environment replication instructions

The experiments follow the NetConfEval benchmark protocol, enabling comparability with existing literature.

Limitations

  • Evaluation restricted to the benchmark scope
  • Single-run execution per configuration (no statistical variance analysis)
  • Dependence on an emulated environment for operational validation

Future Work

  • Integration with formal network verification tools
  • Evaluation in real-world environments
  • Performance and scalability optimization
  • Incorporation of retrieval-based techniques (RAG)

License

This project is licensed under the MIT License.
