# invoke-llm

A command-line tool for querying OpenAI-compatible endpoints with a prompt and input file, and writing the response to an output file.
- Installation
- Usage
- Command-Line Arguments
- Environment Variables
- Supported Endpoints
- Examples
- Development
- Building and Testing
- Contributing
- Troubleshooting
## Installation

To install invoke-llm, run the following command:
```sh
cargo install --git https://github.com/RustedBytes/invoke-llm
```
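After installation, you can confirm the binary is available on your `PATH`:

```sh
invoke-llm --help
```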
## Usage

invoke-llm queries an OpenAI-compatible endpoint with a prompt and input file, and writes the response to an output file. The basic usage is as follows:
```sh
invoke-llm --endpoint <endpoint> --model <model> --tokens <tokens> --prompt <prompt_file> --input <input_file> [--output <output_file>] [--reasoning]
```

## Command-Line Arguments

The following command-line arguments are supported:
- `--endpoint` (required): The API endpoint name (e.g., "openai", "google") or a custom URL to query.
- `--model` (required): The model identifier to use for the completion.
- `--tokens` (required): Maximum number of tokens to generate.
- `--prompt` (required): Path to the file containing the system prompt.
- `--input` (required): Path to the file containing the user input.
- `--output` (optional): Path to save the response (prints to stdout if not provided).
- `--reasoning` (optional): Use reasoning models instead of regular ones.
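For instance, to review a source file against a system prompt (the model name and file paths here are illustrative, not defaults of the tool):

```sh
invoke-llm \
  --endpoint openai \
  --model gpt-4o-mini \
  --tokens 1024 \
  --prompt prompts/review.md \
  --input src/main.rs \
  --output review.md
```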
## Environment Variables

The following environment variables are used to store API keys:
- `API_TOKEN_OAI`: OpenAI API key
- `API_TOKEN_GOOGLE`: Google API key
- `API_TOKEN_HF`: Hugging Face API key
- `API_TOKEN`: Default API key for custom endpoints
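Set the variable that matches your endpoint before running the tool; the values below are placeholders:

```sh
export API_TOKEN_OAI="sk-..."      # for --endpoint openai
export API_TOKEN_GOOGLE="..."      # for --endpoint google
export API_TOKEN_HF="hf_..."       # for --endpoint hf
export API_TOKEN="..."             # for custom endpoint URLs
```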
## Supported Endpoints

The following endpoints are currently supported:
- "openai": OpenAI API endpoint
- "google": Google Generative Language API endpoint
- "hf": Hugging Face API endpoint
- Custom endpoints: Any custom URL can be used as an endpoint
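A custom endpoint is addressed by URL and authenticated with `API_TOKEN`. A minimal sketch, assuming a hypothetical OpenAI-compatible server at `llm.example.com` (the URL shape and model name are placeholders; check your server's documentation for the exact path it expects):

```sh
export API_TOKEN="..."
invoke-llm \
  --endpoint https://llm.example.com/v1/chat/completions \
  --model my-local-model \
  --tokens 512 \
  --prompt prompt.md \
  --input input.txt
```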
## Examples

To run invoke-llm with some pre-defined prompts, use the following commands:
```sh
just -f llmfile code_review
just -f llmfile gemma_grammar_check
```

## Development

To contribute to this project, you'll need:
- Rust toolchain (nightly version recommended)
- Helper tools: `cargo install action-validator dircat just`
- markdown-fmt: `cargo install --git https://github.com/ytmimi/markdown-fmt markdown-fmt --features="build-binary"`
- lefthook (for pre-commit hooks)
- yamlfmt (for YAML formatting)
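With the tools above in place, the pre-commit hooks can be registered from the repository root (assuming the repository ships a `lefthook.yml`):

```sh
lefthook install   # registers the git hooks defined in lefthook.yml
```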
## Building and Testing

- Clone the repository
- Run `cargo build` to compile the application
- Run `cargo test` to execute the test suite
- Run `cargo run -- --help` to see command-line options
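Putting those steps together:

```sh
git clone https://github.com/RustedBytes/invoke-llm
cd invoke-llm
cargo build
cargo test
cargo run -- --help
```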
To test invoke-llm with different endpoints, use the following commands:
```sh
just -f llmfile.test gemini
just -f llmfile.test gemma
just -f llmfile.test hf
just -f llmfile.test oai
just -f llmfile.test oai_reasoning
```

Required packages on Ubuntu:
```sh
sudo apt install libssl-dev pkgconf clang
```

## Contributing

Contributions are welcome! Please submit pull requests with clear descriptions of changes and ensure that all tests pass before submitting.
## Troubleshooting

- Verify that the correct API key environment variable is set for the endpoint you are using (see the snippet below).
- For other issues, please check the issues page or submit a new issue with detailed information about your problem.
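A quick way to confirm that a key is exported in the current shell (`API_TOKEN_OAI` here is just one of the variables listed above):

```sh
printenv API_TOKEN_OAI || echo "API_TOKEN_OAI is not set"
```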