# simulacra-summarizer

`simulacra-summarizer` provides a lightweight, reliable way to analyze and summarize user-provided text (e.g., articles, queries, or any free-form prose) using a language model. The package validates the LLM output against a regular-expression pattern, retrying automatically until a correct, well-structured summary is produced.
## Features

- **One-function API** – just call `simulacra_summarizer()` with the text you want summarized.
- **Regex-based validation** – ensures the output follows the expected format.
- **Built-in retry logic** – keeps querying the LLM until a valid response is returned.
- **Pluggable LLMs** – defaults to `ChatLLM7` (from `langchain_llm7`), but any LangChain-compatible chat model can be supplied.
- **Simple installation** – pure-Python package, no extra system dependencies.
## Installation

```bash
pip install simulacra_summarizer
```

## Usage

```python
from simulacra_summarizer import simulacra_summarizer

# Basic usage – uses the default ChatLLM7 (API key taken from env LLM7_API_KEY)
summary = simulacra_summarizer(
    user_input="The simulation hypothesis suggests that our reality might be a sophisticated computer simulation..."
)
print(summary)  # -> list of strings that match the predefined pattern
```

### Using a custom LLM

You can pass any LangChain `BaseChatModel` instance (e.g., OpenAI, Anthropic, Google) if you prefer a different provider.
```python
from langchain_openai import ChatOpenAI
from simulacra_summarizer import simulacra_summarizer

llm = ChatOpenAI()
response = simulacra_summarizer(
    user_input="Explain the simulation hypothesis in simple terms.",
    llm=llm
)
```

```python
from langchain_anthropic import ChatAnthropic
from simulacra_summarizer import simulacra_summarizer

llm = ChatAnthropic()
response = simulacra_summarizer(
    user_input="What are the philosophical implications of living in a simulation?",
    llm=llm
)
```

```python
from langchain_google_genai import ChatGoogleGenerativeAI
from simulacra_summarizer import simulacra_summarizer

llm = ChatGoogleGenerativeAI()
response = simulacra_summarizer(
    user_input="Summarize recent research on simulation theory.",
    llm=llm
)
```

### Passing an API key directly

```python
from simulacra_summarizer import simulacra_summarizer

summary = simulacra_summarizer(
    user_input="Why do some scientists argue that we might be living in a simulation?",
    api_key="your-llm7-api-key"
)
```

You can also set the environment variable `LLM7_API_KEY` and omit the argument.
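For instance, the key can be set programmatically before calling the summarizer (a minimal sketch; `"your-llm7-api-key"` is a placeholder value, not a real key):

```python
import os

# Set the key before calling the summarizer; when no api_key argument
# is supplied, the function reads LLM7_API_KEY from the environment.
os.environ["LLM7_API_KEY"] = "your-llm7-api-key"  # placeholder value

print(os.getenv("LLM7_API_KEY"))
```

In shell-based deployments the same effect is achieved with `export LLM7_API_KEY=...` before starting the Python process.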
## API reference

```python
def simulacra_summarizer(
    user_input: str,
    api_key: Optional[str] = None,
    llm: Optional[BaseChatModel] = None
) -> List[str]:
```

| Parameter | Type | Description |
|---|---|---|
| `user_input` | `str` | The raw text you want summarized. |
| `api_key` | `Optional[str]` | API key for ChatLLM7. If omitted, the function reads `LLM7_API_KEY` from the environment; if still missing, a placeholder `"None"` is used (which will cause an authentication error). |
| `llm` | `Optional[BaseChatModel]` | A LangChain chat model instance. When provided, `api_key` is ignored and the supplied LLM is used instead. |
**Returns:** `List[str]` – a list of strings that match the predefined regex pattern (`simulacra_summarizer.pattern`). If the LLM cannot produce a valid output, a `RuntimeError` is raised.
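The actual compiled pattern ships with the package at `simulacra_summarizer.pattern`. As an illustration only (the pattern below is hypothetical, not the package's real one), this is how a regex-validated response yields a list of matched strings:

```python
import re

# Hypothetical pattern for illustration – the real compiled pattern
# lives at simulacra_summarizer.pattern and may differ.
pattern = re.compile(r"<summary>(.*?)</summary>", re.DOTALL)

llm_response = "<summary>Reality may be a simulation.</summary>"
matches = pattern.findall(llm_response)
print(matches)  # -> ['Reality may be a simulation.']
```

A non-empty list means the response validated; an empty list would trigger another retry.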
## How it works

- **LLM:** By default the function creates a `ChatLLM7` instance from the `langchain_llm7` package (see https://pypi.org/project/langchain-llm7).
- **Prompting:** System and human prompts are defined in `simulacra_summarizer.prompts`.
- **Validation:** The LLM response is matched against a compiled regular expression (`simulacra_summarizer.pattern`). Only a successful match is returned to the caller.
- **Retry logic:** The helper `llmatch` from `llmatch_messages` handles repeated calls until the pattern matches or a hard failure occurs.
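The validate-and-retry flow above can be sketched as follows. This is a simplified, self-contained illustration with hypothetical names (the package itself delegates this to `llmatch` from `llmatch_messages`, whose behavior may differ):

```python
import re
from typing import Callable, List

def retry_until_match(
    call_llm: Callable[[], str],
    pattern: re.Pattern,
    max_retries: int = 3,
) -> List[str]:
    """Call the LLM until its output matches `pattern`, or give up."""
    for _ in range(max_retries):
        response = call_llm()
        matches = pattern.findall(response)
        if matches:  # valid, well-structured output
            return matches
    raise RuntimeError("LLM did not produce a valid output")

# Stubbed "LLM" that succeeds on the second attempt.
attempts = iter(["no structure here", "<summary>ok</summary>"])
result = retry_until_match(
    lambda: next(attempts),
    re.compile(r"<summary>(.*?)</summary>"),
)
print(result)  # -> ['ok']
```

If every retry fails to match, the loop surfaces a `RuntimeError`, which matches the failure mode described in the API reference.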
## Rate limits

The free tier of LLM7 provides generous rate limits for most typical summarization workloads. If you require higher throughput, supply your own API key (via the `LLM7_API_KEY` environment variable or the `api_key` argument). Free API keys can be obtained by registering at https://token.llm7.io/.
## Contributing & contact

- Bug reports / feature requests: https://github....
- Author: Eugene Evstafev
- Email: hi@euegne.plus
- GitHub: chigwell
Feel free to open an issue or submit a pull request – contributions are welcome!

## License

This project is licensed under the MIT License. See the LICENSE file for details.