175 changes: 114 additions & 61 deletions README.md
@@ -1,6 +1,6 @@
# LlmComposer

-**LlmComposer** is an Elixir library that simplifies the interaction with large language models (LLMs) such as OpenAI's GPT, providing a streamlined way to build and execute LLM-based applications or chatbots. It currently supports multiple model providers, including OpenAI, OpenRouter, Ollama, Bedrock, and Google (Gemini), with features like auto-execution of functions and customizable prompts to cater to different use cases.
**LlmComposer** is an Elixir library that simplifies interaction with large language models (LLMs) such as OpenAI's GPT, providing a streamlined way to build and execute LLM-based applications or chatbots. It currently supports multiple model providers, including OpenAI, OpenRouter, Ollama, Bedrock, and Google (Gemini), with features like manual function calls and customizable prompts to cater to different use cases.

## Table of Contents

@@ -24,6 +24,7 @@
- [Streaming Responses](#streaming-responses)
- [Structured Outputs](#structured-outputs)
- [Bot with external function call](#bot-with-external-function-call)
- [Function Calls](#function-calls)
- [Provider Router Simple](#provider-router-simple)
- [Cost Tracking](#cost-tracking)
- [Requirements](#requirements)
@@ -74,7 +75,6 @@ The following table shows which features are supported by each provider:
| Basic Chat | ✅ | ✅ | ✅ | ✅ | ✅ |
| Streaming | ✅ | ✅ | ✅ | ❌ | ✅ |
| Function Calls | ✅ | ✅ | ❌ | ❌ | ✅ |
-| Auto Function Execution | ✅ | ✅ | ❌ | ❌ | ✅ |
| Structured Outputs | ✅ | ✅ | ❌ | ❌ | ✅ |
| Cost Tracking | ✅ | ✅ | ❌ | ❌ | ✅ |
| Fallback Models | ❌ | ✅ | ❌ | ❌ | ❌ |
@@ -85,7 +85,7 @@ The following table shows which features are supported by each provider:
- **Google** provides full feature support including function calls, structured outputs, and streaming with Gemini models
- **Bedrock** support is provided via AWS ExAws integration and requires proper AWS configuration
- **Ollama** requires an ollama server instance to be running
-- **Function Calls** require the provider to support OpenAI-compatible function calling format
- **Function Calls** are handled explicitly via `FunctionExecutor.execute/2`; supported by OpenAI, OpenRouter, and Google
- **Streaming** is **not** compatible with Tesla **retries**.
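
A minimal configuration sketch for the OpenAI provider used in this README's examples (other providers have their own config keys, covered in their sections below; the environment variable name is an assumption):

```elixir
# config/runtime.exs: equivalent to the Application.put_env/3 calls used in
# the examples below. OPENAI_API_KEY is an assumed variable name.
import Config

config :llm_composer, :open_ai, api_key: System.get_env("OPENAI_API_KEY")
```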

## Usage
@@ -607,83 +607,137 @@ The model will then produce responses that adhere to the specified JSON schema,

**Note:** This feature is currently supported by the OpenRouter, Google, and OpenAI providers in llm_composer.

-### Bot with external function call
### Function Calls

-You can enhance the bot's capabilities by adding support for external function execution. This example demonstrates how to add a simple calculator that evaluates basic math expressions:
LlmComposer supports **manual function call execution** through the `FunctionExecutor` module: nothing runs automatically, so you keep full control over when and how function calls are executed. This is useful when you need to:

- Log or audit function calls before execution
- Apply custom validation or filtering to function calls
- Execute multiple function calls in parallel with custom error handling
- Integrate with external systems before/after execution

Here's a concise, self-contained example demonstrating the 3-step manual function-call workflow (no external sample files required):

```elixir
# Configure provider API key
Application.put_env(:llm_composer, :open_ai, api_key: "<your api key>")

-defmodule MyChat do
defmodule ManualFunctionCallExample do
  alias LlmComposer.FunctionExecutor
  alias LlmComposer.Function
  alias LlmComposer.Message

-  @settings %LlmComposer.Settings{
-    providers: [
-      {LlmComposer.Providers.OpenAI, [model: "gpt-4.1-mini"]}
-    ],
-    system_prompt: "You are a helpful math assistant that assists with calculations.",
-    auto_exec_functions: true,
-    functions: [
-      %LlmComposer.Function{
-        mf: {__MODULE__, :calculator},
-        name: "calculator",
-        description: "A calculator that accepts math expressions as strings, e.g., '1 * (2 + 3) / 4', supporting the operators ['+', '-', '*', '/'].",
-        schema: %{
-          type: "object",
-          properties: %{
-            expression: %{
-              type: "string",
-              description: "A math expression to evaluate, using '+', '-', '*', '/'.",
-              example: "1 * (2 + 3) / 4"
-            }
-          },
-          required: ["expression"]
-        }
-      }
-    ]
-  }
  # 1) Define the actual function that will run locally
  @spec calculator(map()) :: number() | {:error, String.t()}
  def calculator(%{"expression" => expr}) do
    # simple validation to avoid arbitrary evaluation
    if Regex.match?(~r/^[0-9\.\s\+\-\*\/\(\)]+$/, expr) do
      {result, _} = Code.eval_string(expr)
      result
    else
      {:error, "invalid expression"}
    end
  end

-  def simple_chat(msg) do
-    LlmComposer.simple_chat(@settings, msg)
  # 2) Define the function descriptor sent to the model
  defp calculator_function do
    %Function{
      mf: {__MODULE__, :calculator},
      name: "calculator",
      description: "Evaluate arithmetic expressions",
      schema: %{
        "type" => "object",
        "properties" => %{"expression" => %{"type" => "string"}},
        "required" => ["expression"]
      }
    }
  end

-  @spec calculator(map()) :: number() | {:error, String.t()}
-  def calculator(%{"expression" => expression}) do
-    # Basic validation pattern to prevent arbitrary code execution
-    pattern = ~r/^[0-9\.\s\+\-\*\/\(\)]+$/
-
-    if Regex.match?(pattern, expression) do
-      try do
-        {result, _binding} = Code.eval_string(expression)
-        result
-      rescue
-        _ -> {:error, "Invalid expression"}
-      end
-    else
-      {:error, "Invalid expression format"}
  def run() do
    functions = [calculator_function()]

    settings = %LlmComposer.Settings{
      providers: [
        {LlmComposer.Providers.OpenAI, [model: "gpt-4o-mini", functions: functions]}
      ],
      system_prompt: "You are a helpful math assistant."
    }

    user_prompt = "What is 15 + 27?"

    # Step 1: send initial chat that may request function calls
    {:ok, resp} = LlmComposer.simple_chat(settings, user_prompt)

    case resp.function_calls do
      nil ->
        # Model provided a direct answer
        IO.puts("Assistant: #{resp.main_response.content}")
        {:ok, resp}

      function_calls ->
        # Step 2: execute each returned function call locally
        executed_calls =
          Enum.map(function_calls, fn call ->
            case FunctionExecutor.execute(call, functions) do
              {:ok, executed} -> executed
              {:error, _} -> call
            end
          end)

        # Build tool-result messages (helper constructs proper :tool_result messages)
        tool_messages = LlmComposer.FunctionCallHelpers.build_tool_result_messages(executed_calls)

        # Build assistant 'with tools' message (provider-aware helper)
        user_message = %Message{type: :user, content: user_prompt}

        assistant_with_tools =
          LlmComposer.FunctionCallHelpers.build_assistant_with_tools(
            LlmComposer.Providers.OpenAI,
            resp,
            user_message,
            [model: "gpt-4o-mini", functions: functions]
          )

        # Step 3: send user + assistant(with tool calls) + tool results back to LLM
        messages = [user_message, assistant_with_tools] ++ tool_messages

        {:ok, final} = LlmComposer.run_completion(settings, messages)
        IO.puts("Assistant: #{final.main_response.content}")
        {:ok, final}
    end
  end
end

-{:ok, res} = MyChat.simple_chat("hi, how much is 1 + 2?")
-
-IO.inspect(res.main_response)
# Run the example
ManualFunctionCallExample.run()
```

-Example of execution:
**What this shows**

- How to define a safe local function and a corresponding `LlmComposer.Function` descriptor.
- How to call `LlmComposer.simple_chat/2` to obtain potential function calls from the model.
- How to execute the returned `FunctionCall` structs with `FunctionExecutor.execute/2`.
- How to build `:tool_result` messages with `LlmComposer.FunctionCallHelpers.build_tool_result_messages/1` and construct the assistant message with the provider-aware `build_assistant_with_tools/4`.
- How to submit the results back to the model with `LlmComposer.run_completion/2` to receive the final assistant answer.

-```
-mix run functions_sample.ex
-
-16:38:28.338 [debug] input_tokens=111, output_tokens=17
-
-16:38:28.935 [debug] input_tokens=136, output_tokens=9
-LlmComposer.Message.new(
-  :assistant,
-  "1 + 2 is 3."
-)
-```
-
-In this example, the bot first calls OpenAI to understand the user's intent and determine that a function (the calculator) should be executed. The function is then executed locally, and the result is sent back to the user in a second API call.

#### FunctionExecutor API

The `FunctionExecutor.execute/2` function:

```elixir
FunctionExecutor.execute(function_call, function_definitions)
```

**Parameters:**
- `function_call`: The `FunctionCall` struct returned by the LLM
- `function_definitions`: List of `Function` structs that define callable functions

**Returns:**
- `{:ok, executed_call}`: FunctionCall with `:result` populated
- `{:error, :function_not_found}`: Function name not in definitions
- `{:error, {:invalid_arguments, reason}}`: Failed to parse JSON arguments
- `{:error, {:execution_failed, reason}}`: Exception during execution

**Supported Providers:** OpenAI, OpenRouter, and Google (Gemini)
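
As a concrete sketch of consuming these return shapes, and of the parallel-execution use case mentioned earlier, the snippet below folds each error into a result string so it can be reported back to the model. It assumes `function_calls` and `functions` as in the example above, and relies on `FunctionCall` being a struct with a `:result` field (as described in **Returns**):

```elixir
# Execute tool calls concurrently with Task.async_stream; each error shape
# from FunctionExecutor.execute/2 becomes a readable :result string.
executed_calls =
  function_calls
  |> Task.async_stream(fn call ->
    case LlmComposer.FunctionExecutor.execute(call, functions) do
      {:ok, executed} ->
        executed

      {:error, :function_not_found} ->
        %{call | result: "error: unknown function"}

      {:error, {:invalid_arguments, reason}} ->
        %{call | result: "error: invalid arguments (#{inspect(reason)})"}

      {:error, {:execution_failed, reason}} ->
        %{call | result: "error: execution failed (#{inspect(reason)})"}
    end
  end)
  |> Enum.map(fn {:ok, executed} -> executed end)
```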

### Provider Router Simple

@@ -1078,7 +1132,6 @@ IO.inspect(res.main_response)

**Note:** Custom parameters are merged with the base request body. Provider-specific parameters (like `temperature`, `max_tokens`, `reasoning_effort`) can be passed through `request_params` to fine-tune model behavior.
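
For instance, a hedged sketch of tuning a model through `request_params`; it assumes the parameters are accepted in the provider options next to `:model` (see the full section above for the exact shape):

```elixir
# Illustrative only: provider-specific knobs passed via request_params.
settings = %LlmComposer.Settings{
  providers: [
    {LlmComposer.Providers.OpenAI,
     [model: "gpt-4o-mini", request_params: %{temperature: 0.2, max_tokens: 512}]}
  ],
  system_prompt: "You are a helpful assistant."
}
```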

-* Auto Function Execution: Automatically executes predefined functions, reducing manual intervention.
* System Prompts: Customize the assistant's behavior by modifying the system prompt (e.g., creating different personalities or roles for your bot).

---
29 changes: 6 additions & 23 deletions lib/llm_composer.ex
@@ -1,8 +1,7 @@
defmodule LlmComposer do
  @moduledoc """
  `LlmComposer` is responsible for interacting with a language model to perform chat-related operations,
-  such as running completions and executing functions based on the responses. The module provides
-  functionality to handle user messages, generate responses, and automatically execute functions as needed.
  such as running completions and generating responses.

  ## Example Usage

@@ -16,8 +15,6 @@ defmodule LlmComposer do
        ],
        system_prompt: "You are a helpful assistant.",
        user_prompt_prefix: "",
-       auto_exec_functions: false,
-       functions: [],
        api_key: ""
      }

@@ -43,7 +40,6 @@
  In this example, the `simple_chat/2` function sends the user's message to the language model using the provided settings, and the response is displayed as the assistant's reply.
  """

-  alias LlmComposer.Helpers
  alias LlmComposer.LlmResponse
  alias LlmComposer.Message
  alias LlmComposer.ProvidersRunner
@@ -63,9 +59,9 @@
  - `msg`: The user message to be sent to the language model.

  ## Returns
-  - The result of the language model's response, which may include function executions if specified.
  - The result of the language model's response.
  """
-  @spec simple_chat(Settings.t(), String.t()) :: Helpers.action_result()
  @spec simple_chat(Settings.t(), String.t()) :: {:ok, LlmResponse.t()} | {:error, term()}
  def simple_chat(%Settings{} = settings, msg) do
    messages = [Message.new(:user, user_prompt(settings, msg, %{}))]

@@ -76,15 +72,15 @@
  Runs the completion process by sending messages to the language model and handling the response.

  ## Parameters
-  - `settings`: The settings for the language model, including prompts, model options, and functions.
  - `settings`: The settings for the language model, including prompts and model options.
  - `messages`: The list of messages to be sent to the language model.
  - `previous_response` (optional): The previous response object, if any, used for context.

  ## Returns
  - A tuple containing `:ok` with the response or `:error` if the model call fails.
  """
  @spec run_completion(Settings.t(), messages(), LlmResponse.t() | nil) ::
-          Helpers.action_result()
          {:ok, LlmResponse.t()} | {:error, term()}
  def run_completion(settings, messages, previous_response \\ nil) do
    system_msg = Message.new(:system, settings.system_prompt)

@@ -97,11 +93,7 @@

        Logger.debug("input_tokens=#{res.input_tokens}, output_tokens=#{res.output_tokens}")

-       if settings.auto_exec_functions do
-         maybe_run_functions(res, messages, settings)
-       else
-         {:ok, res}
-       end
        {:ok, res}

      {:error, _data} = resp ->
        resp
@@ -158,13 +150,4 @@
    prompt = Map.get(opts, :user_prompt_prefix, settings.user_prompt_prefix)
    prompt <> message
  end
-
-  @spec maybe_run_functions(LlmResponse.t(), messages(), Settings.t()) :: Helpers.action_result()
-  defp maybe_run_functions(res, messages, settings) do
-    res
-    |> Helpers.maybe_exec_functions(settings.functions)
-    |> Helpers.maybe_complete_chat(messages, fn new_messages ->
-      run_completion(settings, new_messages, res)
-    end)
-  end
end
59 changes: 59 additions & 0 deletions lib/llm_composer/function_call_helpers.ex
@@ -0,0 +1,59 @@
defmodule LlmComposer.FunctionCallHelpers do
  @moduledoc """
  Helpers for building assistant messages and tool-result messages when handling
  function (tool) calls returned by LLM providers.

  This module provides a default implementation for composing the assistant
  message that preserves the original assistant response and attaches the
  `tool_calls` metadata. Providers can optionally implement
  `build_assistant_with_tools/3` to customize behavior.
  """

  alias LlmComposer.LlmResponse
  alias LlmComposer.Message

  @doc """
  Build an assistant message that preserves the original assistant response and
  attaches `tool_calls` so it can be sent back to the provider along with
  tool result messages.

  If `provider_mod` exports `build_assistant_with_tools/3`, this function will
  delegate to that implementation; otherwise it uses a sensible default.
  """
  @spec build_assistant_with_tools(module(), LlmResponse.t(), Message.t(), keyword()) ::
          Message.t()
  def build_assistant_with_tools(
        provider_mod,
        %LlmResponse{} = resp,
        %Message{} = user_msg,
        opts \\ []
      ) do
    if function_exported?(provider_mod, :build_assistant_with_tools, 3) do
      provider_mod.build_assistant_with_tools(resp, user_msg, opts)
    else
      %Message{
        type: :assistant,
        content: resp.main_response.content || "Using tool results",
        metadata: %{
          original: resp.main_response.metadata[:original],
          tool_calls: resp.function_calls
        }
      }
    end
  end

  @doc """
  Convert executed function-call results into `:tool_result` messages which
  include the mapping back to the tool call id in `metadata["tool_call_id"]`.
  """
  @spec build_tool_result_messages(list()) :: list(Message.t())
  def build_tool_result_messages(executed_calls) when is_list(executed_calls) do
    Enum.map(executed_calls, fn call ->
      %Message{
        type: :tool_result,
        content: to_string(call.result),
        metadata: %{"tool_call_id" => call.id}
      }
    end)
  end
end
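
Because `build_assistant_with_tools/4` delegates whenever the provider exports `build_assistant_with_tools/3`, a provider module can shape the assistant message itself. A sketch of such an override (the module name and metadata shape are illustrative, not part of the library):

```elixir
defmodule MyApp.Providers.CustomProvider do
  alias LlmComposer.LlmResponse
  alias LlmComposer.Message

  # Exported with arity 3, so FunctionCallHelpers calls this instead of its
  # default. The metadata keys below are assumptions for this sketch.
  @spec build_assistant_with_tools(LlmResponse.t(), Message.t(), keyword()) :: Message.t()
  def build_assistant_with_tools(%LlmResponse{} = resp, %Message{} = _user_msg, _opts) do
    %Message{
      type: :assistant,
      content: resp.main_response.content || "",
      metadata: %{tool_calls: resp.function_calls}
    }
  end
end
```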