sieves is a library for zero- and few-shot NLP tasks with structured generation. Build production-ready NLP prototypes quickly, with guaranteed output formats and no training required.
Read our documentation here. An automatically generated version (courtesy of Devin via DeepWiki) is available here.
Installation
Install sieves with pip install sieves (or uv add sieves).
The following extra groups exist:
- ingestion for ingestion libraries (for converting documents into text/markdown), e.g. docling
- distill for distillation utilities, e.g. training frameworks like setfit
- engines for structured generation utilities beyond the default outlines
To install all optional dependencies at once:
pip install "sieves[engines,distill,ingestion]"
You can also choose to install individual dependencies as you see fit.
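For example, to pull in only the ingestion dependencies:
pip install "sieves[ingestion]"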
Warning
sieves is in active development and currently in beta. Be advised that the API might change in between minor version
updates.
Even in the era of generative AI, structured outputs and observability remain crucial.
Many real-world scenarios require rapid prototyping with minimal data. Generative language models excel here, but
producing clean, structured output can be challenging. Various tools address this need for structured/guided language
model output, including outlines, dspy,
langchain, and others. Each has different design patterns, pros and cons. sieves wraps these tools and provides
a unified interface for input, processing, and output.
Developing NLP prototypes often involves repetitive steps: parsing and chunking documents, exporting results for
model fine-tuning, and experimenting with different prompting techniques. All these needs are addressed by existing
libraries in the NLP ecosystem (e.g. docling for file parsing, or datasets for transforming
data into a unified format for model training).
sieves simplifies NLP prototyping by bundling these capabilities into a single library, allowing you to quickly
build modern NLP applications. It provides:
- Zero- and few-shot model support for immediate inference
- A bundle of utilities addressing common requirements in NLP applications
- A unified interface for structured generation across multiple libraries
- Built-in tasks for common NLP operations
- Easy extendability
- A document-based pipeline architecture for easy observability and debugging
- Caching - pipelines cache processed documents to prevent costly redundant model calls (see the sketch below)
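As a minimal sketch of the caching behavior (mirroring the quickstart further down; the use_cache flag also appears in the pipeline examples below):
import outlines
from sieves import Doc, Pipeline, tasks
# Model and task set up exactly as in the quickstart below.
docs = [Doc(text="Special relativity applies to all physical phenomena in the absence of gravity.")]
model = outlines.models.transformers("HuggingFaceTB/SmolLM-135M-Instruct")
task = tasks.Classification(labels=["science", "politics"], model=model)
pipe = Pipeline([task])  # Caching is enabled by default.
docs = list(pipe(docs))  # First run: the model is called.
docs = list(pipe(docs))  # Second run: cached results are reused instead of re-querying the model.
pipe_uncached = Pipeline([task], use_cache=False)  # Opt out of caching.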
sieves draws a lot of inspiration from spaCy and particularly spacy-llm.
- 🎯 Zero Training Required: Immediate inference using zero-/few-shot models
- 🤖 Unified Generation Interface: Seamlessly use multiple libraries
- ▶️ Observable Pipelines: Easy debugging and monitoring
- 🛠️ Integrated Tools: Utilities for document parsing, chunking, and result export
- 🏷️ Ready-to-Use Tasks:
- Multi-label classification
- Information extraction
- Summarization
- Translation
- Multi-question answering
- Aspect-based sentiment analysis
- PII (personally identifiable information) anonymization
- Named entity recognition
- Coming soon: entity linking, knowledge graph creation, ...
- 💾 Persistence: Save and load pipelines with configurations
- 🚀 Optimization: Improve task performance by optimizing prompts and few-shot examples using DSPy's MIPROv2
- 🧑‍🏫 Distillation: Fine-tune smaller, specialized models using your zero-shot results with frameworks like SetFit and Model2Vec. Export results as a Hugging Face Dataset for custom training.
- ♻️ Caching to avoid unnecessary model calls
Important
sieves was built with and requires Python 3.12 or higher. Note however that some dependencies (such as pyarrow
via datasets) don't have prebuilt wheels for Python versions newer than 3.12 yet, in which case you'll need to
manually install those dependencies.
For the time being we recommend using sieves with Python 3.12.
Here's a simple classification example using outlines:
from sieves import Pipeline, tasks, Doc
import outlines
# 1. Define documents by text or URI.
docs = [Doc(text="Special relativity applies to all physical phenomena in the absence of gravity.")]
# 2. Choose a model (Outlines in this example).
model = outlines.models.transformers("HuggingFaceTB/SmolLM-135M-Instruct")
# 3. Create pipeline with tasks (verbose init).
pipe = Pipeline(
# Add classification task to pipeline.
tasks.Classification(labels=["science", "politics"], model=model)
)
# 4. Run pipe and output results.
for doc in pipe(docs):
print(doc.results)
# Tip: Pipelines can also be composed succinctly via chaining (+).
# For multi-step pipelines, you can write:
# pipe = tasks.Ingestion(export_format="markdown") + tasks.Chunking(chunker) + tasks.Classification(labels=[...], model=model)
# Note: Ingestion libraries are optional and not installed by default.
# Install with: pip install "sieves[ingestion]" or install the specific libraries directly (e.g., `docling`, `marker`).
# Note: additional Pipeline parameters (e.g., use_cache=False) are only available via the verbose init,
# e.g., Pipeline([t1, t2], use_cache=False).
Advanced Example
This example demonstrates PDF parsing, text chunking, and classification.
Note: Ingestion libraries are optional and not installed by default. To run the ingestion step, install with the extra or install the libraries directly:
pip install "sieves[ingestion]" # or install ingestion libraries directly
import pickle
import gliner
import chonkie
import tokenizers
import docling.document_converter
from sieves import Pipeline, tasks, Doc
# 1. Define documents by text or URI.
docs = [Doc(uri="https://arxiv.org/pdf/2408.09869")]
# 2. Choose a model for structured generation.
model_name = 'knowledgator/gliner-multitask-v1.0'
model = gliner.GLiNER.from_pretrained(model_name)
# 3. Create chunker object.
chunker = chonkie.TokenChunker(tokenizers.Tokenizer.from_pretrained(model_name))
# Create pipeline with tasks.
pipe = Pipeline(
[
# 4. Add document parsing task.
tasks.Ingestion(export_format="markdown"),
# 5. Add chunking task to ensure we don't exceed our model's context window.
tasks.Chunking(chunker),
# 6. Add classification task to pipeline.
tasks.Classification(
task_id="classifier",
labels=["science", "politics"],
model=model,
),
]
)
# Alternatively, you can also construct a pipeline by using the + operator:
# pipe = tasks.Ingestion(export_format="markdown") + tasks.Chunking(chunker) + tasks.Classification(
# task_id="classifier", labels=["science", "politics"], model=model
# )
# 7. Run pipe and output results.
docs = list(pipe(docs))
for doc in docs:
print(doc.results["classifier"])
# 8. Serialize pipeline and docs.
pipe.dump("pipeline.yml")
with open("docs.pkl", "wb") as f:
pickle.dump(docs, f)
# 9. Load pipeline and docs from disk. Note: we don't serialize complex third-party objects, so you'll have
# to pass those in at load time.
loaded_pipe = Pipeline.load(
"pipeline.yml",
(
{"converter": docling.document_converter.DocumentConverter(), "export_format": "markdown"},
{"chunker": chunker},
{"model": model},
),
)
with open("docs.pkl", "rb") as f:
loaded_docs = pickle.load(f)
sieves is built on six key abstractions.
Pipeline
Orchestrates task execution, with support for:
- Task configuration and sequencing
- Pipeline execution
- Configuration management and serialization
Doc
Represents a document in the pipeline.
- Contains text content and metadata
- Tracks document URI and processing results
- Passes information between pipeline tasks
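For illustration, reusing only calls that appear in the examples above:
from sieves import Doc
# Create documents either from raw text or from a URI (e.g. a PDF to be ingested).
doc_from_text = Doc(text="Special relativity applies to all physical phenomena in the absence of gravity.")
doc_from_uri = Doc(uri="https://arxiv.org/pdf/2408.09869")
# After a pipeline run, each task's output is available on the document, keyed by task ID:
# for doc in pipe([doc_from_text]):
#     print(doc.results["classifier"])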
Task
Encapsulates a single processing step in a pipeline.
- Defines input arguments
- Wraps and initializes Bridge instances handling task- and engine-specific logic
- Implements task-specific dataset export
GenerationSettings
Controls the behavior of structured generation across tasks:
- Batch size
- Strict mode (whether errors in parsing individual documents should terminate execution)
- Arbitrary arguments passed on to the underlying structured generation tool (which one depends on the model you specified: Outlines, DSPy, LangChain, ...)
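A rough, non-authoritative sketch of how such settings might be attached to a task. The field names batch_size and strict_mode and the generation_settings keyword are assumptions chosen for illustration only; check the documentation for the actual signature.
import outlines
from sieves import GenerationSettings, tasks  # GenerationSettings import path is an assumption
model = outlines.models.transformers("HuggingFaceTB/SmolLM-135M-Instruct")
# Hypothetical field names, shown only to illustrate where these settings live:
settings = GenerationSettings(
    batch_size=8,       # assumption: number of documents/prompts batched per model call
    strict_mode=False,  # assumption: don't abort the run on per-document parsing errors
)
task = tasks.Classification(
    labels=["science", "politics"],
    model=model,
    generation_settings=settings,  # assumption: keyword used to attach the settings to a task
)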
Engine
Provides a unified interface to structured generation libraries (internal). You pass a backend model into tasks;
Engine is used under the hood.
- Manages model interactions
- Handles prompt execution
- Standardizes output formats
Bridge
Connects Task with Engine.
- Implements engine-specific prompt templates
- Manages output type specifications
- Ensures compatibility between tasks and engine
FAQs
Why the name sieves?
sieves was originally motivated by the desire to use generative models for structured information extraction. Coming
from this angle, there are two ways to explain why we settled on this name (pick the one you like better):
- An analogy to gold panning: run your raw data through a sieve to obtain structured, refined "gold."
- An acronym - "sieves" can be read as "Structured Information Extraction and VErification System" (but that's a mouthful).
Why use sieves?
Asked differently: what are the benefits of using sieves over directly interacting with an LLM?
- Validated, structured data output - also for LLMs that don't offer structured outputs natively. Zero-/few-shot language models can be finicky without guardrails or parsing.
- A step-by-step pipeline, making it easier to debug and track each stage.
- The flexibility to switch between different models and ways to ensure structured and validated output.
- A bunch of useful utilities for pre- and post-processing you might need.
- An array of useful tasks you can use right off the bat without having to roll your own.
Below are minimal examples for creating model objects for each supported structured‑generation tool. Pass these model objects directly to tasks, optionally with GenerationSettings.
- DSPy
import os
import dspy
# Anthropic example (set ANTHROPIC_API_KEY in your environment)
model = dspy.LM("claude-3-haiku-20240307", api_key=os.environ["ANTHROPIC_API_KEY"])
# Tip: DSPy can integrate with Ollama and vLLM backends for local model serving.
# For Ollama, configure api_base and blank api_key:
# model = dspy.LM("smollm:135m-instruct-v0.2-q8_0", api_base="http://localhost:11434", api_key="")
# For vLLM, use the OpenAI-compatible server:
# model = dspy.LM("meta-llama/Llama-3.2-1B-Instruct", api_base="http://localhost:8000/v1", api_key="")
- GLiNER
import gliner
model = gliner.GLiNER.from_pretrained("knowledgator/gliner-multitask-v1.0")
- LangChain
import os
from langchain.chat_models import init_chat_model
model = init_chat_model(
    model="claude-3-haiku-20240307",
    api_key=os.environ["ANTHROPIC_API_KEY"],
    model_provider="anthropic",
)
- Hugging Face Transformers (zero‑shot classification)
from transformers import pipeline
model = pipeline(
    "zero-shot-classification",
    model="MoritzLaurer/xtremedistil-l6-h256-zeroshot-v1.1-all-33",
)
- Outlines
import outlines
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "HuggingFaceTB/SmolLM-135M-Instruct"
# Outlines supports different backends, also remote ones. We use a local `transformers` model here.
model = outlines.models.from_transformers(
    AutoModelForCausalLM.from_pretrained(model_name),
    AutoTokenizer.from_pretrained(model_name),
)
Notes
- Provide provider API keys via environment variables (e.g., ANTHROPIC_API_KEY).
- Local model serving: DSPy can integrate with Ollama and vLLM for local model serving (see the DSPy example above).
- After you have a model, use it in tasks like: tasks.predictive.Classification(labels=[...], model=model).
- Look up the respective tool's documentation for more information.
Why not just use outlines, LangChain, or DSPy directly?
Which library makes the most sense to you depends strongly on your use case. outlines provides structured generation
abilities, but not the pipeline system, utilities, and pre-built tasks that sieves has to offer (and of course not the
flexibility to switch between different structured generation libraries). Then again, maybe you don't need all that -
in which case we recommend using outlines (or any other structured generation library) directly.
Similarly, maybe you already have an existing tech stack in your project that uses exclusively langchain or
dspy? All of these libraries (and more) are supported by sieves - but they are not just structured generation
libraries, they come with a plethora of features that are out of scope for sieves. If your application deeply
integrates with a framework like LangChain or DSPy, it may be reasonable to stick to those libraries directly.
As with many things in engineering, this is a trade-off. The way we see it: the less tightly coupled your existing
application is with a particular language model framework, the more mileage you'll get out of sieves. This means that
it's ideal for prototyping (there's no reason you can't use it in production too, of course).
Source for the sieves icon: Sieve icons created by Freepik - Flaticon.

