- Install uv: Follow the installation guide.
- Sync dependencies: Run `uv sync` to install the necessary dependencies.
- Download models: Execute `uv run src/download_models.py` to download the required models (see the sketch after this list).
- Test streaming output: Run `uv run src/generate_exam.py` to test the streaming output.
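The download step fetches a GGUF model file for llama.cpp to load locally. Below is a minimal, hypothetical sketch of what such a script could look like; it uses `huggingface_hub`, and the repository id, filename, and output directory are placeholders rather than the values used by the actual `src/download_models.py`.

```python
# Hypothetical sketch of a model-download script (not the repository's actual
# src/download_models.py). It pulls a GGUF file from the Hugging Face Hub so
# that llama-cpp-python can load it locally.
from huggingface_hub import hf_hub_download

# Placeholder repo id and filename; substitute whichever GGUF model you want.
model_path = hf_hub_download(
    repo_id="Qwen/Qwen2.5-0.5B-Instruct-GGUF",
    filename="qwen2.5-0.5b-instruct-q4_k_m.gguf",
    local_dir="models",
)
print(f"Downloaded model to {model_path}")
```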
About
This repository demonstrates how to use outlines and llama-cpp-python for structured JSON generation with streaming output, integrating llama.cpp for local model inference and outlines for schema-based text generation.
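As a rough illustration of the idea, the sketch below constrains a llama.cpp model to a JSON schema defined with Pydantic and prints tokens as they stream in. It assumes the outlines 0.x API (`outlines.models.llamacpp`, `outlines.generate.json`, and the generator's `stream` method); the schema, prompt, and model names are illustrative and not taken from this repository's code.

```python
# Minimal sketch of schema-constrained streaming generation with outlines and
# llama-cpp-python, assuming the outlines 0.x API. Names below are illustrative.
from pydantic import BaseModel
import outlines


class ExamQuestion(BaseModel):
    question: str
    choices: list[str]
    answer: str


# Load a local GGUF model through llama-cpp-python (downloads it if needed).
model = outlines.models.llamacpp(
    "Qwen/Qwen2.5-0.5B-Instruct-GGUF",       # placeholder repo id
    "qwen2.5-0.5b-instruct-q4_k_m.gguf",     # placeholder filename
)

# Constrain decoding to JSON that matches the ExamQuestion schema.
generator = outlines.generate.json(model, ExamQuestion)

# Stream tokens as they are produced instead of waiting for the full object.
for token in generator.stream("Write one multiple-choice question about Python."):
    print(token, end="", flush=True)
print()
```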