diff --git a/README.md b/README.md index 594627e..f8ce346 100644 --- a/README.md +++ b/README.md @@ -1,6 +1,6 @@ # LlmComposer -**LlmComposer** is an Elixir library that simplifies the interaction with large language models (LLMs) such as OpenAI's GPT, providing a streamlined way to build and execute LLM-based applications or chatbots. It currently supports multiple model providers, including OpenAI, OpenRouter, Ollama, Bedrock, and Google (Gemini), with features like auto-execution of functions and customizable prompts to cater to different use cases. +**LlmComposer** is an Elixir library that simplifies the interaction with large language models (LLMs) such as OpenAI's GPT, providing a streamlined way to build and execute LLM-based applications or chatbots. It currently supports multiple model providers, including OpenAI, OpenRouter, Ollama, Bedrock, and Google (Gemini), with features like manual function calls and customizable prompts to cater to different use cases. ## Table of Contents @@ -24,6 +24,7 @@ - [Streaming Responses](#streaming-responses) - [Structured Outputs](#structured-outputs) - [Bot with external function call](#bot-with-external-function-call) + - [Function Calls](#function-calls) - [Provider Router Simple](#provider-router-simple) - [Cost Tracking](#cost-tracking) - [Requirements](#requirements) @@ -74,7 +75,6 @@ The following table shows which features are supported by each provider: | Basic Chat | ✅ | ✅ | ✅ | ✅ | ✅ | | Streaming | ✅ | ✅ | ✅ | ❌ | ✅ | | Function Calls | ✅ | ✅ | ❌ | ❌ | ✅ | -| Auto Function Execution | ✅ | ✅ | ❌ | ❌ | ✅ | | Structured Outputs | ✅ | ✅ | ❌ | ❌ | ✅ | | Cost Tracking | ✅ | ✅ | ❌ | ❌ | ✅ | | Fallback Models | ❌ | ✅ | ❌ | ❌ | ❌ | @@ -85,7 +85,7 @@ The following table shows which features are supported by each provider: - **Google** provides full feature support including function calls, structured outputs, and streaming with Gemini models - **Bedrock** support is provided via AWS ExAws integration and requires proper AWS configuration - **Ollama** requires an ollama server instance to be running -- **Function Calls** require the provider to support OpenAI-compatible function calling format +- **Function Calls** - LlmComposer exposes function call handling via `FunctionExecutor.execute/2` for explicit execution; supported by OpenAI, OpenRouter, and Google - **Streaming** is **not** compatible with Tesla **retries**. ## Usage @@ -607,83 +607,137 @@ The model will then produce responses that adhere to the specified JSON schema, **Note:** This feature is currently supported on the OpenRouter, Google, and OpenAI providers in llm_composer. -### Bot with external function call +### Function Calls -You can enhance the bot's capabilities by adding support for external function execution. This example demonstrates how to add a simple calculator that evaluates basic math expressions: +LlmComposer supports **manual function call execution** using the `FunctionExecutor` module. This approach gives you full control over when and how function calls are executed, without automatic execution. 
This is useful when you need to: + +- Log or audit function calls before execution +- Apply custom validation or filtering to function calls +- Execute multiple function calls in parallel with custom error handling +- Integrate with external systems before/after execution + +Here's a concise, self-contained example demonstrating the 3-step manual function-call workflow (no external sample files required): ```elixir +# Configure provider API key Application.put_env(:llm_composer, :open_ai, api_key: "") -defmodule MyChat do +defmodule ManualFunctionCallExample do + alias LlmComposer.FunctionExecutor + alias LlmComposer.Function + alias LlmComposer.Message - @settings %LlmComposer.Settings{ - providers: [ - {LlmComposer.Providers.OpenAI, [model: "gpt-4.1-mini"]} - ], - system_prompt: "You are a helpful math assistant that assists with calculations.", - auto_exec_functions: true, - functions: [ - %LlmComposer.Function{ - mf: {__MODULE__, :calculator}, - name: "calculator", - description: "A calculator that accepts math expressions as strings, e.g., '1 * (2 + 3) / 4', supporting the operators ['+', '-', '*', '/'].", - schema: %{ - type: "object", - properties: %{ - expression: %{ - type: "string", - description: "A math expression to evaluate, using '+', '-', '*', '/'.", - example: "1 * (2 + 3) / 4" - } - }, - required: ["expression"] - } - } - ] - } + # 1) Define the actual function that will run locally + @spec calculator(map()) :: number() | {:error, String.t()} + def calculator(%{"expression" => expr}) do + # simple validation to avoid arbitrary evaluation + if Regex.match?(~r/^[0-9\.\s\+\-\*\/\(\)]+$/, expr) do + {result, _} = Code.eval_string(expr) + result + else + {:error, "invalid expression"} + end + end - def simple_chat(msg) do - LlmComposer.simple_chat(@settings, msg) + # 2) Define the function descriptor sent to the model + defp calculator_function do + %Function{ + mf: {__MODULE__, :calculator}, + name: "calculator", + description: "Evaluate arithmetic expressions", + schema: %{ + "type" => "object", + "properties" => %{"expression" => %{"type" => "string"}}, + "required" => ["expression"] + } + } end - @spec calculator(map()) :: number() | {:error, String.t()} - def calculator(%{"expression" => expression}) do - # Basic validation pattern to prevent arbitrary code execution - pattern = ~r/^[0-9\.\s\+\-\*\/\(\)]+$/ - - if Regex.match?(pattern, expression) do - try do - {result, _binding} = Code.eval_string(expression) - result - rescue - _ -> {:error, "Invalid expression"} - end - else - {:error, "Invalid expression format"} + def run() do + functions = [calculator_function()] + + settings = %LlmComposer.Settings{ + providers: [ + {LlmComposer.Providers.OpenAI, [model: "gpt-4o-mini", functions: functions]} + ], + system_prompt: "You are a helpful math assistant." + } + + user_prompt = "What is 15 + 27?" 
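+
+    # Illustrative shape only: if the model decides to call the tool, `resp.function_calls`
+    # (populated on LlmResponse) holds structs along the lines of
+    #   %LlmComposer.FunctionCall{id: "call_abc", name: "calculator",
+    #     arguments: ~s({"expression": "15 + 27"}), type: "function", result: nil}
+    # `arguments` is the provider's raw JSON string; `FunctionExecutor.execute/2` decodes it
+    # and applies `{__MODULE__, :calculator}` to the decoded map, filling `:result`.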
+ + # Step 1: send initial chat that may request function calls + {:ok, resp} = LlmComposer.simple_chat(settings, user_prompt) + + case resp.function_calls do + nil -> + # Model provided a direct answer + IO.puts("Assistant: #{resp.main_response.content}") + {:ok, resp} + + function_calls -> + # Step 2: execute each returned function call locally + executed_calls = + Enum.map(function_calls, fn call -> + case FunctionExecutor.execute(call, functions) do + {:ok, executed} -> executed + {:error, _} -> call + end + end) + + # Build tool-result messages (helper constructs proper :tool_result messages) + tool_messages = LlmComposer.FunctionCallHelpers.build_tool_result_messages(executed_calls) + + # Build assistant 'with tools' message (provider-aware helper) + user_message = %Message{type: :user, content: user_prompt} + + assistant_with_tools = + LlmComposer.FunctionCallHelpers.build_assistant_with_tools( + LlmComposer.Providers.OpenAI, + resp, + user_message, + [model: "gpt-4o-mini", functions: functions] + ) + + # Step 3: send user + assistant(with tool calls) + tool results back to LLM + messages = [user_message, assistant_with_tools] ++ tool_messages + + {:ok, final} = LlmComposer.run_completion(settings, messages) + IO.puts("Assistant: #{final.main_response.content}") + {:ok, final} end end end -{:ok, res} = MyChat.simple_chat("hi, how much is 1 + 2?") - -IO.inspect(res.main_response) +# Run the example +ManualFunctionCallExample.run() ``` -Example of execution: +**What this shows** +- How to define a safe local function and a corresponding `LlmComposer.Function` descriptor. +- How to call `LlmComposer.simple_chat/2` to obtain potential function calls from the model. +- How to execute returned `FunctionCall` structs with `FunctionExecutor.execute/2`. +- How to build `:tool_result` messages with `LlmComposer.FunctionCallHelpers.build_tool_result_messages/1` and construct the assistant message using `build_assistant_with_tools/4`. +- How to submit the results back to the model with `LlmComposer.run_completion/2` to receive the final assistant answer. -``` -mix run functions_sample.ex +#### FunctionExecutor API -16:38:28.338 [debug] input_tokens=111, output_tokens=17 +The `FunctionExecutor.execute/2` function: -16:38:28.935 [debug] input_tokens=136, output_tokens=9 -LlmComposer.Message.new( - :assistant, - "1 + 2 is 3." -) +```elixir +FunctionExecutor.execute(function_call, function_definitions) ``` -In this example, the bot first calls OpenAI to understand the user's intent and determine that a function (the calculator) should be executed. The function is then executed locally, and the result is sent back to the user in a second API call. +**Parameters:** +- `function_call`: The `FunctionCall` struct returned by the LLM +- `function_definitions`: List of `Function` structs that define callable functions + +**Returns:** +- `{:ok, executed_call}`: FunctionCall with `:result` populated +- `{:error, :function_not_found}`: Function name not in definitions +- `{:error, {:invalid_arguments, reason}}`: Failed to parse JSON arguments +- `{:error, {:execution_failed, reason}}`: Exception during execution + +**Supported Providers:** OpenAI, OpenRouter, and Google (Gemini) ### Provider Router Simple @@ -1078,7 +1132,6 @@ IO.inspect(res.main_response) **Note:** Custom parameters are merged with the base request body. Provider-specific parameters (like `temperature`, `max_tokens`, `reasoning_effort`) can be passed through `request_params` to fine-tune model behavior. 
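For illustration only, a minimal sketch of passing such parameters; the exact placement of `request_params` inside the provider options is an assumption here, so adapt it to your provider configuration:

```elixir
settings = %LlmComposer.Settings{
  providers: [
    {LlmComposer.Providers.OpenAI,
     [
       model: "gpt-4o-mini",
       # assumed option: provider-specific parameters merged into the request body
       request_params: %{temperature: 0.2, max_tokens: 512}
     ]}
  ],
  system_prompt: "You are a helpful assistant."
}
```
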
-* Auto Function Execution: Automatically executes predefined functions, reducing manual intervention. * System Prompts: Customize the assistant's behavior by modifying the system prompt (e.g., creating different personalities or roles for your bot). --- diff --git a/lib/llm_composer.ex b/lib/llm_composer.ex index 45f35ca..bbce8d4 100644 --- a/lib/llm_composer.ex +++ b/lib/llm_composer.ex @@ -1,8 +1,7 @@ defmodule LlmComposer do @moduledoc """ `LlmComposer` is responsible for interacting with a language model to perform chat-related operations, - such as running completions and executing functions based on the responses. The module provides - functionality to handle user messages, generate responses, and automatically execute functions as needed. + such as running completions and generating responses. ## Example Usage @@ -16,8 +15,6 @@ defmodule LlmComposer do ], system_prompt: "You are a helpful assistant.", user_prompt_prefix: "", - auto_exec_functions: false, - functions: [], api_key: "" } @@ -43,7 +40,6 @@ defmodule LlmComposer do In this example, the simple_chat/2 function sends the user's message to the language model using the provided settings, and the response is displayed as the assistant's reply. """ - alias LlmComposer.Helpers alias LlmComposer.LlmResponse alias LlmComposer.Message alias LlmComposer.ProvidersRunner @@ -63,9 +59,9 @@ defmodule LlmComposer do - `msg`: The user message to be sent to the language model. ## Returns - - The result of the language model's response, which may include function executions if specified. + - The result of the language model's response. """ - @spec simple_chat(Settings.t(), String.t()) :: Helpers.action_result() + @spec simple_chat(Settings.t(), String.t()) :: {:ok, LlmResponse.t()} | {:error, term()} def simple_chat(%Settings{} = settings, msg) do messages = [Message.new(:user, user_prompt(settings, msg, %{}))] @@ -76,7 +72,7 @@ defmodule LlmComposer do Runs the completion process by sending messages to the language model and handling the response. ## Parameters - - `settings`: The settings for the language model, including prompts, model options, and functions. + - `settings`: The settings for the language model, including prompts and model options. - `messages`: The list of messages to be sent to the language model. - `previous_response` (optional): The previous response object, if any, used for context. @@ -84,7 +80,7 @@ defmodule LlmComposer do - A tuple containing `:ok` with the response or `:error` if the model call fails. 
""" @spec run_completion(Settings.t(), messages(), LlmResponse.t() | nil) :: - Helpers.action_result() + {:ok, LlmResponse.t()} | {:error, term()} def run_completion(settings, messages, previous_response \\ nil) do system_msg = Message.new(:system, settings.system_prompt) @@ -97,11 +93,7 @@ defmodule LlmComposer do Logger.debug("input_tokens=#{res.input_tokens}, output_tokens=#{res.output_tokens}") - if settings.auto_exec_functions do - maybe_run_functions(res, messages, settings) - else - {:ok, res} - end + {:ok, res} {:error, _data} = resp -> resp @@ -158,13 +150,4 @@ defmodule LlmComposer do prompt = Map.get(opts, :user_prompt_prefix, settings.user_prompt_prefix) prompt <> message end - - @spec maybe_run_functions(LlmResponse.t(), messages(), Settings.t()) :: Helpers.action_result() - defp maybe_run_functions(res, messages, settings) do - res - |> Helpers.maybe_exec_functions(settings.functions) - |> Helpers.maybe_complete_chat(messages, fn new_messages -> - run_completion(settings, new_messages, res) - end) - end end diff --git a/lib/llm_composer/function_call_helpers.ex b/lib/llm_composer/function_call_helpers.ex new file mode 100644 index 0000000..0c54656 --- /dev/null +++ b/lib/llm_composer/function_call_helpers.ex @@ -0,0 +1,59 @@ +defmodule LlmComposer.FunctionCallHelpers do + @moduledoc """ + Helpers for building assistant messages and tool-result messages when handling + function (tool) calls returned by LLM providers. + + This module provides a default implementation for composing the assistant + message that preserves the original assistant response and attaches the + `tool_calls` metadata. Providers can optionally implement + `build_assistant_with_tools/3` to customize behavior. + """ + + alias LlmComposer.LlmResponse + alias LlmComposer.Message + + @doc """ + Build an assistant message that preserves the original assistant response and + attaches `tool_calls` so it can be sent back to the provider along with + tool result messages. + + If `provider_mod` exports `build_assistant_with_tools/3`, this function will + delegate to that implementation; otherwise it uses a sensible default. + """ + @spec build_assistant_with_tools(module(), LlmResponse.t(), Message.t(), keyword()) :: + Message.t() + def build_assistant_with_tools( + provider_mod, + %LlmResponse{} = resp, + %Message{} = user_msg, + opts \\ [] + ) do + if function_exported?(provider_mod, :build_assistant_with_tools, 3) do + provider_mod.build_assistant_with_tools(resp, user_msg, opts) + else + %Message{ + type: :assistant, + content: resp.main_response.content || "Using tool results", + metadata: %{ + original: resp.main_response.metadata[:original], + tool_calls: resp.function_calls + } + } + end + end + + @doc """ + Convert executed function-call results into `:tool_result` messages which + include the mapping back to the tool call id in `metadata["tool_call_id"]`. + """ + @spec build_tool_result_messages(list()) :: list(Message.t()) + def build_tool_result_messages(executed_calls) when is_list(executed_calls) do + Enum.map(executed_calls, fn call -> + %Message{ + type: :tool_result, + content: to_string(call.result), + metadata: %{"tool_call_id" => call.id} + } + end) + end +end diff --git a/lib/llm_composer/function_executor.ex b/lib/llm_composer/function_executor.ex new file mode 100644 index 0000000..a896271 --- /dev/null +++ b/lib/llm_composer/function_executor.ex @@ -0,0 +1,82 @@ +defmodule LlmComposer.FunctionExecutor do + @moduledoc """ + Provides manual execution of function calls from LLM responses. 
+ + This module allows users to explicitly execute individual function calls + returned by the LLM, without automatic execution. It's designed for manual + control over function invocation and result handling. + + ## Usage + + After receiving a response with function calls, use `execute/2` to + manually execute each function call with its arguments parsed and + validated before invocation. + """ + + alias LlmComposer.Function + alias LlmComposer.FunctionCall + + @json_mod if Code.ensure_loaded?(JSON), do: JSON, else: Jason + + @doc """ + Executes a single function call and returns the updated FunctionCall with result. + + ## Parameters + - `function_call`: The FunctionCall struct to execute + - `functions`: List of Function definitions available for execution + + ## Returns + - `{:ok, executed_call}`: FunctionCall with result populated + - `{:error, reason}`: Error tuple if execution fails + + ## Possible Errors + - `{:error, :function_not_found}`: Named function not in definitions + - `{:error, {:invalid_arguments, reason}}`: Failed to parse JSON arguments + - `{:error, {:execution_failed, reason}}`: Exception during function execution + """ + @spec execute(FunctionCall.t(), [Function.t()]) :: + {:ok, FunctionCall.t()} | {:error, term()} + def execute(function_call, functions) when is_list(functions) do + with {:ok, function} <- find_function(function_call.name, functions), + {:ok, args} <- parse_arguments(function_call.arguments), + {:ok, result} <- invoke_function(function, args) do + executed_call = %FunctionCall{function_call | result: result} + {:ok, executed_call} + end + end + + @spec find_function(String.t(), [Function.t()]) :: + {:ok, Function.t()} | {:error, :function_not_found} + defp find_function(name, functions) do + case Enum.find(functions, fn f -> f.name == name end) do + nil -> {:error, :function_not_found} + function -> {:ok, function} + end + end + + @spec parse_arguments(String.t() | nil) :: {:ok, map()} | {:error, term()} + defp parse_arguments(nil) do + {:ok, %{}} + end + + defp parse_arguments(arguments) when is_binary(arguments) do + parsed = @json_mod.decode!(arguments) + {:ok, parsed} + rescue + e -> {:error, {:invalid_arguments, Exception.message(e)}} + end + + @spec invoke_function(Function.t(), map()) :: {:ok, term()} | {:error, term()} + defp invoke_function(function, args) do + {module, function_name} = function.mf + + try do + result = apply(module, function_name, [args]) + {:ok, result} + rescue + e -> {:error, {:execution_failed, Exception.message(e)}} + catch + type, value -> {:error, {:execution_failed, "#{type}: #{inspect(value)}"}} + end + end +end diff --git a/lib/llm_composer/helpers.ex b/lib/llm_composer/helpers.ex index 8a01c10..1e62dc2 100644 --- a/lib/llm_composer/helpers.ex +++ b/lib/llm_composer/helpers.ex @@ -1,115 +1,5 @@ defmodule LlmComposer.Helpers do @moduledoc """ - Provides helper functions for the `LlmComposer` module, particularly for managing - function calls and handling language model responses. - - These helpers are designed to execute functions as part of the response processing pipeline, - manage completions, and log relevant information for debugging. + Provides helper functions for the `LlmComposer` module for handling language model responses. 
""" - - alias LlmComposer.Function - alias LlmComposer.FunctionCall - alias LlmComposer.LlmResponse - alias LlmComposer.Message - - require Logger - - @json_mod if Code.ensure_loaded?(JSON), do: JSON, else: Jason - - @type messages :: [term()] - @type llmfunctions :: [Function.t()] - @type action_result :: - {:ok, LlmResponse.t()} - | {:completion, LlmResponse.t(), llmfunctions()} - | {:error, term()} - - @doc """ - Executes the functions specified in the language model response, if any. - - ## Parameters - - `res`: The language model response containing actions to be executed. - - `llm_functions`: A list of functions available for execution. - - ## Returns - - `{:ok, res}` if no actions are found in the response. - - `{:completion, res, results}` if actions are executed, returning the completion status and results. - """ - @spec maybe_exec_functions(LlmResponse.t(), llmfunctions()) :: action_result() - def maybe_exec_functions(%{actions: []} = res, _functions), do: {:ok, res} - - def maybe_exec_functions(%{actions: [actions | _tail]} = res, llm_functions) do - results = Enum.map(actions, &exec_function(&1, llm_functions)) - - {:completion, res, results} - end - - @doc """ - Completes the chat flow by appending function results to the messages and re-running the completion process. - - ## Parameters - - `action_result`: The result of a previous action or completion. - - `messages`: The list of messages exchanged so far. - - `run_completion_fn`: A function to re-run the completion with updated messages. - - ## Returns - - The result of re-running the completion with the new set of messages and function results. - """ - @spec maybe_complete_chat(action_result(), messages(), function()) :: action_result() - def maybe_complete_chat({:ok, _action_result} = res, _messages, _fcalls), do: res - - def maybe_complete_chat({:completion, oldres, results}, messages, run_completion_fn) do - results = - Enum.map( - results, - &Message.new(:function_result, serialize_fcall_result(&1.result), %{fcall: &1}) - ) - - original = - case oldres do - %LlmComposer.LlmResponse{main_response: %Message{metadata: metadata}} -> - Map.get(metadata, :original) - - _ -> - nil - end - - assistant_msg = - if is_map(original) and - (Map.has_key?(original, "tool_calls") or Map.has_key?(original, "parts")) do - # some providers (OpenAI/OpenRouter/Google) require the assistant message - # to include the original tool_calls / functionCall structure and have nil content, - # so we recreate the assistant message with metadata.original preserved. 
- Message.new(:assistant, nil, %{original: original}) - else - # fallback to the existing main_response - oldres.main_response - end - - new_messages = messages ++ [assistant_msg] ++ results - - run_completion_fn.(new_messages) - end - - @spec exec_function(fcall :: FunctionCall.t(), functions :: llmfunctions()) :: FunctionCall.t() - defp exec_function(%FunctionCall{} = fcall, functions) do - [ - %{ - mf: {mod, fname} - } - ] = Enum.filter(functions, fn function -> function.name == fcall.name end) - - mod_str = - mod - |> Atom.to_string() - |> String.trim_leading("Elixir.") - - Logger.debug("running function #{mod_str}.#{fname}") - res = apply(mod, fname, [fcall.arguments]) - - %FunctionCall{fcall | result: res} - end - - defp serialize_fcall_result(res) when is_map(res) or is_list(res), do: @json_mod.encode!(res) - defp serialize_fcall_result(res) when is_binary(res) or is_tuple(res), do: res - defp serialize_fcall_result(res), do: "#{res}" end diff --git a/lib/llm_composer/llm_response.ex b/lib/llm_composer/llm_response.ex index cb006c3..8f70ae7 100644 --- a/lib/llm_composer/llm_response.ex +++ b/lib/llm_composer/llm_response.ex @@ -5,7 +5,6 @@ defmodule LlmComposer.LlmResponse do alias LlmComposer.Cost.CostAssembler alias LlmComposer.CostInfo - alias LlmComposer.FunctionCall alias LlmComposer.Message @llm_providers [:open_ai, :ollama, :open_router, :bedrock, :google] @@ -13,8 +12,8 @@ defmodule LlmComposer.LlmResponse do @type provider() :: :open_ai | :ollama | :open_router | :bedrock | :google @type t() :: %__MODULE__{ - actions: [[FunctionCall.t()]] | [FunctionCall.t()], cost_info: CostInfo.t() | nil, + function_calls: [LlmComposer.FunctionCall.t()] | nil, input_tokens: pos_integer() | nil, main_response: Message.t() | nil, metadata: map(), @@ -27,8 +26,8 @@ defmodule LlmComposer.LlmResponse do } defstruct [ - :actions, :cost_info, + :function_calls, :main_response, :input_tokens, :output_tokens, @@ -64,7 +63,6 @@ defmodule LlmComposer.LlmResponse do when llm_provider in [:open_ai, :open_router, :ollama, :google] and is_function(stream) do {:ok, %__MODULE__{ - actions: [], cost_info: nil, input_tokens: nil, output_tokens: nil, @@ -78,7 +76,7 @@ defmodule LlmComposer.LlmResponse do def new( {status, - %{actions: actions, response: %{"choices" => [first_choice | _tail]} = raw_response} = + %{response: %{"choices" => [first_choice | _tail]} = raw_response} = provider_response}, llm_provider, opts @@ -91,13 +89,15 @@ defmodule LlmComposer.LlmResponse do |> String.to_existing_atom() |> Message.new(main_response["content"], %{original: main_response}) + function_calls = extract_function_calls(main_response) + {input_tokens, output_tokens} = CostAssembler.extract_tokens(llm_provider, raw_response) cost_info = CostAssembler.get_cost_info(llm_provider, raw_response, opts) {:ok, %__MODULE__{ - actions: actions, cost_info: cost_info, + function_calls: function_calls, input_tokens: input_tokens, output_tokens: output_tokens, main_response: response, @@ -109,8 +109,7 @@ defmodule LlmComposer.LlmResponse do end def new( - {status, - provider_response = %{actions: actions, response: %{"message" => message} = raw_response}}, + {status, provider_response = %{response: %{"message" => message} = raw_response}}, :ollama = provider, _opts ) do @@ -121,7 +120,6 @@ defmodule LlmComposer.LlmResponse do {:ok, %__MODULE__{ - actions: actions, cost_info: Map.get(provider_response, :cost_info), main_response: response, provider: provider, @@ -131,7 +129,7 @@ defmodule LlmComposer.LlmResponse do end def new( - 
{status, %{actions: actions, response: response} = provider_response}, + {status, %{response: response} = provider_response}, :bedrock = provider, _opts ) do @@ -140,7 +138,6 @@ defmodule LlmComposer.LlmResponse do {:ok, %__MODULE__{ - actions: actions, cost_info: Map.get(provider_response, :cost_info), input_tokens: response["usage"]["inputTokens"], output_tokens: response["usage"]["outputTokens"], @@ -153,7 +150,7 @@ defmodule LlmComposer.LlmResponse do end def new( - {status, %{actions: actions, response: response}}, + {status, %{response: response}}, :google = provider, opts ) do @@ -175,10 +172,13 @@ defmodule LlmComposer.LlmResponse do {input_tokens, output_tokens} = CostAssembler.extract_tokens(provider, response) cost_info = CostAssembler.get_cost_info(provider, response, opts) + # Extract function calls from Google's tool_uses format + function_calls = extract_google_function_calls(content) + {:ok, %__MODULE__{ - actions: actions, cost_info: cost_info, + function_calls: function_calls, input_tokens: input_tokens, output_tokens: output_tokens, main_response: Message.new(role, message_content, %{original: content}), @@ -190,4 +190,62 @@ defmodule LlmComposer.LlmResponse do def new(_response, provider, _opts), do: raise("provider #{provider} handling not implemented") + + @spec extract_function_calls(map()) :: [LlmComposer.FunctionCall.t()] | nil + defp extract_function_calls(message) do + case message["tool_calls"] do + nil -> + nil + + tool_calls when is_list(tool_calls) -> + Enum.map(tool_calls, fn tool_call -> + function_info = tool_call["function"] + + %LlmComposer.FunctionCall{ + id: tool_call["id"], + name: function_info["name"], + arguments: function_info["arguments"], + type: tool_call["type"], + metadata: %{}, + result: nil + } + end) + + _ -> + nil + end + end + + @spec extract_google_function_calls(map()) :: [LlmComposer.FunctionCall.t()] | nil + defp extract_google_function_calls(content) do + case content["parts"] do + nil -> + nil + + parts when is_list(parts) -> + function_calls = + parts + |> Enum.filter(&Map.has_key?(&1, "functionCall")) + |> Enum.map(fn part -> + function_call = part["functionCall"] + + %LlmComposer.FunctionCall{ + id: function_call["name"], + name: function_call["name"], + arguments: Jason.encode!(function_call["args"] || %{}), + type: "function", + metadata: %{}, + result: nil + } + end) + + case function_calls do + [] -> nil + calls -> calls + end + + _ -> + nil + end + end end diff --git a/lib/llm_composer/providers/bedrock.ex b/lib/llm_composer/providers/bedrock.ex index 3819b59..10ec0c9 100644 --- a/lib/llm_composer/providers/bedrock.ex +++ b/lib/llm_composer/providers/bedrock.ex @@ -81,7 +81,6 @@ if Code.ensure_loaded?(ExAws) do {:ok, %{ response: response, - actions: [], input_tokens: get_in(response, ["usage", "inputTokens"]), output_tokens: get_in(response, ["usage", "outputTokens"]) }} diff --git a/lib/llm_composer/providers/google.ex b/lib/llm_composer/providers/google.ex index 371c9c8..ef07f50 100644 --- a/lib/llm_composer/providers/google.ex +++ b/lib/llm_composer/providers/google.ex @@ -263,9 +263,7 @@ defmodule LlmComposer.Providers.Google do @spec handle_response(Tesla.Env.result(), keyword()) :: {:ok, map()} | {:error, term} defp handle_response({:ok, %Tesla.Env{status: 200, body: body}}, _opts) do - actions = Utils.extract_actions(body) - - {:ok, %{response: body, actions: actions}} + {:ok, %{response: body}} end defp handle_response({:ok, resp}, _opts) do diff --git a/lib/llm_composer/providers/ollama.ex 
b/lib/llm_composer/providers/ollama.ex index aba8ea8..b3641b7 100644 --- a/lib/llm_composer/providers/ollama.ex +++ b/lib/llm_composer/providers/ollama.ex @@ -51,7 +51,7 @@ defmodule LlmComposer.Providers.Ollama do @spec handle_response(Tesla.Env.result()) :: {:ok, map()} | {:error, term} defp handle_response({:ok, %Tesla.Env{status: status, body: body}}) when status in [200] do - {:ok, %{response: body, actions: []}} + {:ok, %{response: body}} end defp handle_response({:ok, resp}) do diff --git a/lib/llm_composer/providers/open_ai.ex b/lib/llm_composer/providers/open_ai.ex index 0319f5a..924e26a 100644 --- a/lib/llm_composer/providers/open_ai.ex +++ b/lib/llm_composer/providers/open_ai.ex @@ -65,9 +65,7 @@ defmodule LlmComposer.Providers.OpenAI do @spec handle_response(Tesla.Env.result(), keyword()) :: {:ok, map()} | {:error, term} defp handle_response({:ok, %Tesla.Env{status: status, body: body}}, _opts) when status in [200] do - actions = Utils.extract_actions(body) - - {:ok, %{response: body, actions: actions}} + {:ok, %{response: body}} end defp handle_response({:ok, resp}, _opts) do diff --git a/lib/llm_composer/providers/open_router.ex b/lib/llm_composer/providers/open_router.ex index 80e2318..332389f 100644 --- a/lib/llm_composer/providers/open_router.ex +++ b/lib/llm_composer/providers/open_router.ex @@ -81,8 +81,7 @@ defmodule LlmComposer.Providers.OpenRouter do end end - actions = Utils.extract_actions(body) - {:ok, %{response: body, actions: actions}} + {:ok, %{response: body}} end defp handle_response({:ok, resp}, _opts) do diff --git a/lib/llm_composer/providers/utils.ex b/lib/llm_composer/providers/utils.ex index 3dc6115..ac05dde 100644 --- a/lib/llm_composer/providers/utils.ex +++ b/lib/llm_composer/providers/utils.ex @@ -1,11 +1,8 @@ defmodule LlmComposer.Providers.Utils do @moduledoc false - alias LlmComposer.FunctionCall alias LlmComposer.Message - @json_mod if Code.ensure_loaded?(JSON), do: JSON, else: Jason - @spec map_messages([Message.t()], atom) :: [map()] def map_messages(messages, provider \\ :open_ai) @@ -21,27 +18,18 @@ defmodule LlmComposer.Providers.Utils do %Message{type: :system, content: message} -> %{"role" => "system", "content" => message} - # reference to original "tool_calls" - %Message{ - type: :assistant, - content: nil, - metadata: %{original: %{"tool_calls" => _tool_calls} = msg} - } -> - msg - - %Message{type: :assistant, content: message} -> - %{"role" => "assistant", "content" => message} - - %Message{ - type: :function_result, - content: message, - metadata: %{ - fcall: %FunctionCall{ - id: call_id - } + %Message{type: :assistant, content: message, metadata: metadata} -> + build_assistant_message(message, metadata) + + %Message{type: :tool_result, content: content, metadata: metadata} -> + %{ + "role" => "tool", + "tool_call_id" => metadata["tool_call_id"], + "content" => to_string(content) } - } -> - %{"role" => "tool", "content" => message, "tool_call_id" => call_id} + + _other -> + nil end) |> Enum.reject(&is_nil/1) end @@ -54,36 +42,68 @@ defmodule LlmComposer.Providers.Utils do %Message{type: :user, content: message} -> %{"role" => "user", "parts" => [%{"text" => message}]} - # reference to original "tool_calls" - %Message{ - type: :assistant, - content: nil, - metadata: %{original: %{"parts" => [%{"functionCall" => _}]} = msg} - } -> - msg - - %Message{type: :assistant, content: message} -> - %{"role" => "model", "parts" => [%{"text" => message}]} - - %Message{ - type: :function_result, - content: message, - metadata: %{ - fcall: 
%FunctionCall{ - name: name - } - } - } -> + %Message{type: :assistant, content: message, metadata: metadata} -> + build_google_assistant_message(message, metadata) + + %Message{type: :tool_result, content: content, metadata: metadata} -> %{ "role" => "user", "parts" => [ - %{"functionResponse" => %{"name" => name, "response" => %{"result" => message}}} + %{ + "functionResponse" => %{ + "name" => metadata["tool_call_id"], + "response" => %{ + "result" => to_string(content) + } + } + } ] } + + _other -> + nil end) |> Enum.reject(&is_nil/1) end + @spec build_google_assistant_message(String.t() | nil, map()) :: map() + defp build_google_assistant_message(message, metadata) do + base_message = %{"role" => "model"} + + case metadata[:tool_calls] do + nil -> + Map.put(base_message, "parts", [%{"text" => message}]) + + tool_calls -> + parts = + Enum.map(tool_calls, fn call -> + arguments = + if is_binary(call.arguments) do + Jason.decode!(call.arguments) + else + call.arguments + end + + %{ + "functionCall" => %{ + "name" => call.name, + "args" => arguments + } + } + end) + + # Add text part if message is not empty + parts = + if message && message != "" do + [%{"text" => message} | parts] + else + parts + end + + Map.put(base_message, "parts", parts) + end + end + @spec cleanup_body(map()) :: map() def cleanup_body(body) do body @@ -102,25 +122,6 @@ defmodule LlmComposer.Providers.Utils do Enum.map(functions, &transform_fn_to_tool(&1, provider)) end - @spec extract_actions(map()) :: nil | [] - def extract_actions(%{"choices" => choices}) when is_list(choices) do - choices - |> Enum.filter(&(&1["finish_reason"] == "tool_calls")) - |> Enum.map(&get_action/1) - end - - # google case - def extract_actions(%{"candidates" => candidates}) when is_list(candidates) do - candidates - |> Enum.filter(fn - %{"finishReason" => "STOP", "content" => %{"parts" => [%{"functionCall" => _data}]}} -> true - _other -> false - end) - |> Enum.map(&get_action(&1, :google)) - end - - def extract_actions(_response), do: [] - @spec get_req_opts(keyword()) :: keyword() def get_req_opts(opts) do if Keyword.get(opts, :stream_response) do @@ -151,29 +152,6 @@ defmodule LlmComposer.Providers.Utils do end end - defp get_action(%{"message" => %{"tool_calls" => calls}}) do - Enum.map(calls, fn call -> - %FunctionCall{ - type: "function", - id: call["id"], - name: call["function"]["name"], - arguments: @json_mod.decode!(call["function"]["arguments"]) - } - end) - end - - defp get_action(%{"content" => %{"parts" => parts}}, :google) do - Enum.map(parts, fn - %{"functionCall" => fcall} -> - %FunctionCall{ - type: "function", - id: nil, - name: fcall["name"], - arguments: fcall["args"] - } - end) - end - defp transform_fn_to_tool(%LlmComposer.Function{} = function, provider) when provider in [:open_ai, :ollama, :open_router] do %{ @@ -193,4 +171,31 @@ defmodule LlmComposer.Providers.Utils do "parameters" => function.schema } end + + @spec build_assistant_message(String.t() | nil, map()) :: map() + defp build_assistant_message(message, metadata) do + assistant_msg = %{"role" => "assistant"} + + case metadata[:tool_calls] do + nil -> + Map.put(assistant_msg, "content", message) + + tool_calls -> + formatted_calls = + Enum.map(tool_calls, fn call -> + %{ + "id" => call.id, + "type" => call.type || "function", + "function" => %{ + "name" => call.name, + "arguments" => call.arguments + } + } + end) + + assistant_msg + |> Map.put("content", message) + |> Map.put("tool_calls", formatted_calls) + end + end end diff --git 
a/lib/llm_composer/providers_runner.ex b/lib/llm_composer/providers_runner.ex index aaee258..603b4ea 100644 --- a/lib/llm_composer/providers_runner.ex +++ b/lib/llm_composer/providers_runner.ex @@ -90,7 +90,6 @@ defmodule LlmComposer.ProvidersRunner do @spec get_provider_opts(keyword(), Settings.t()) :: keyword() defp get_provider_opts(opts, settings) do opts - |> Keyword.put_new(:functions, settings.functions) |> Keyword.put_new(:stream_response, settings.stream_response) |> Keyword.put_new(:track_costs, settings.track_costs) |> Keyword.put_new(:api_key, settings.api_key) diff --git a/lib/llm_composer/settings.ex b/lib/llm_composer/settings.ex index 6125661..5df4439 100644 --- a/lib/llm_composer/settings.ex +++ b/lib/llm_composer/settings.ex @@ -2,12 +2,10 @@ defmodule LlmComposer.Settings do @moduledoc """ Defines the settings for configuring chat interactions with a language model. - This module provides a struct that includes model configuration, prompt settings, and options for function execution, enabling fine control over the chat flow and behavior. + This module provides a struct that includes model configuration and prompt settings, enabling fine control over the chat flow and behavior. """ defstruct api_key: nil, - auto_exec_functions: false, - functions: [], provider: nil, provider_opts: nil, providers: nil, @@ -18,8 +16,6 @@ defmodule LlmComposer.Settings do @type t :: %__MODULE__{ api_key: String.t() | nil, - auto_exec_functions: boolean(), - functions: [LlmComposer.Function.t()], provider: module() | nil, provider_opts: keyword() | nil, providers: [{module(), keyword()}] | nil, diff --git a/test/llm_composer/function_calls_auto_execution_test.exs b/test/llm_composer/function_calls_auto_execution_test.exs deleted file mode 100644 index e13bd92..0000000 --- a/test/llm_composer/function_calls_auto_execution_test.exs +++ /dev/null @@ -1,666 +0,0 @@ -defmodule LlmComposer.FunctionCallsAutoExecutionTest do - use ExUnit.Case, async: true - - alias LlmComposer.Settings - - setup do - bypass = Bypass.open() - {:ok, bypass: bypass} - end - - test "auto executes function calls and completes chat loop", %{bypass: bypass} do - # Mock first call that returns function calls (Google format) - Bypass.expect_once( - bypass, - "POST", - "/v1beta/models/gemini-2.5-flash:generateContent", - fn conn -> - {:ok, body, _conn} = Plug.Conn.read_body(conn) - request_data = Jason.decode!(body) - - # Verify the request includes our function definition - assert request_data["tools"] != nil - function_decls = request_data["tools"]["function_declarations"] - assert length(function_decls) == 1 - assert hd(function_decls)["name"] == "calculator" - - # Return function call response - json_response = %{ - "candidates" => [ - %{ - "content" => %{ - "parts" => [ - %{ - "functionCall" => %{ - "name" => "calculator", - "args" => %{"expression" => "2 + 3"} - } - } - ], - "role" => "model" - }, - "finishReason" => "STOP", - "index" => 0 - } - ], - "modelVersion" => "gemini-2.5-flash", - "usageMetadata" => %{ - "promptTokenCount" => 50, - "candidatesTokenCount" => 10, - "totalTokenCount" => 60 - } - } - - conn - |> Plug.Conn.put_resp_header("content-type", "application/json") - |> Plug.Conn.resp(200, Jason.encode!(json_response)) - end - ) - - # Mock second call with function result and final response - Bypass.expect_once( - bypass, - "POST", - "/v1beta/models/gemini-2.5-flash:generateContent", - fn conn -> - # Return final response - json_response = %{ - "candidates" => [ - %{ - "content" => %{ - "parts" => [%{"text" 
=> "2 + 3 equals 5"}], - "role" => "model" - }, - "finishReason" => "STOP", - "index" => 0 - } - ], - "modelVersion" => "gemini-2.5-flash", - "usageMetadata" => %{ - "promptTokenCount" => 70, - "candidatesTokenCount" => 5, - "totalTokenCount" => 75 - } - } - - conn - |> Plug.Conn.put_resp_header("content-type", "application/json") - |> Plug.Conn.resp(200, Jason.encode!(json_response)) - end - ) - - # Define test function - calculator_function = %LlmComposer.Function{ - mf: {__MODULE__, :calculator}, - name: "calculator", - description: "A calculator that evaluates math expressions", - schema: %{ - "type" => "object", - "properties" => %{ - "expression" => %{ - "type" => "string", - "description" => "Math expression to evaluate" - } - }, - "required" => ["expression"] - } - } - - settings = %Settings{ - providers: [ - {LlmComposer.Providers.Google, - [ - model: "gemini-2.5-flash", - api_key: "test-key", - url: "http://localhost:#{bypass.port}/v1beta/models/" - ]} - ], - functions: [calculator_function], - auto_exec_functions: true, - system_prompt: "You are a helpful assistant" - } - - {:ok, response} = LlmComposer.simple_chat(settings, "What is 2 + 3?") - - assert response.main_response.content == "2 + 3 equals 5" - assert response.input_tokens == 70 - assert response.output_tokens == 5 - assert response.provider == :google - end - - test "handles multiple function calls in sequence", %{bypass: bypass} do - # Mock first call - returns function call - Bypass.expect_once( - bypass, - "POST", - "/v1beta/models/gemini-2.5-flash:generateContent", - fn conn -> - json_response = %{ - "candidates" => [ - %{ - "content" => %{ - "parts" => [ - %{ - "functionCall" => %{ - "name" => "calculator", - "args" => %{"expression" => "10 / 2"} - } - } - ], - "role" => "model" - }, - "finishReason" => "STOP", - "index" => 0 - } - ], - "usageMetadata" => %{ - "promptTokenCount" => 45, - "candidatesTokenCount" => 8, - "totalTokenCount" => 53 - } - } - - conn - |> Plug.Conn.put_resp_header("content-type", "application/json") - |> Plug.Conn.resp(200, Jason.encode!(json_response)) - end - ) - - # Mock second call - returns another function call - Bypass.expect_once( - bypass, - "POST", - "/v1beta/models/gemini-2.5-flash:generateContent", - fn conn -> - {:ok, body, _conn} = Plug.Conn.read_body(conn) - request_data = Jason.decode!(body) - - # Verify first function result is present - contents = request_data["contents"] - - function_result_part = - Enum.find(contents, fn content -> - parts = content["parts"] - Enum.any?(parts, &Map.has_key?(&1, "functionResponse")) - end) - - assert function_result_part != nil - function_response = hd(function_result_part["parts"])["functionResponse"] - assert function_response["response"]["result"] == 5.0 - - # Return another function call - json_response = %{ - "candidates" => [ - %{ - "content" => %{ - "parts" => [ - %{ - "functionCall" => %{ - "name" => "calculator", - "args" => %{"expression" => "5 * 3"} - } - } - ], - "role" => "model" - }, - "finishReason" => "STOP", - "index" => 0 - } - ], - "usageMetadata" => %{ - "promptTokenCount" => 65, - "candidatesTokenCount" => 8, - "totalTokenCount" => 73 - } - } - - conn - |> Plug.Conn.put_resp_header("content-type", "application/json") - |> Plug.Conn.resp(200, Jason.encode!(json_response)) - end - ) - - # Mock third call - final response - Bypass.expect_once( - bypass, - "POST", - "/v1beta/models/gemini-2.5-flash:generateContent", - fn conn -> - # Return final calculation result - json_response = %{ - "candidates" => [ - %{ - 
"content" => %{ - "parts" => [%{"text" => "The final result is 15"}], - "role" => "model" - }, - "finishReason" => "STOP", - "index" => 0 - } - ], - "usageMetadata" => %{ - "promptTokenCount" => 85, - "candidatesTokenCount" => 6, - "totalTokenCount" => 91 - } - } - - conn - |> Plug.Conn.put_resp_header("content-type", "application/json") - |> Plug.Conn.resp(200, Jason.encode!(json_response)) - end - ) - - calculator_function = %LlmComposer.Function{ - mf: {__MODULE__, :calculator}, - name: "calculator", - description: "Calculator function", - schema: %{ - "type" => "object", - "properties" => %{"expression" => %{"type" => "string"}}, - "required" => ["expression"] - } - } - - settings = %Settings{ - providers: [ - {LlmComposer.Providers.Google, - [ - model: "gemini-2.5-flash", - api_key: "test-key", - url: "http://localhost:#{bypass.port}/v1beta/models/" - ]} - ], - functions: [calculator_function], - auto_exec_functions: true, - system_prompt: "You are a helpful assistant" - } - - {:ok, response} = LlmComposer.simple_chat(settings, "Calculate 10 / 2, then multiply by 3") - - assert response.main_response.content == "The final result is 15" - assert response.input_tokens == 85 - assert response.output_tokens == 6 - end - - test "exercises completion path when functions are executed with OpenAI", %{bypass: bypass} do - # Mock first call that returns function calls (OpenAI format) - Bypass.expect_once(bypass, "POST", "/chat/completions", fn conn -> - json_response = %{ - "id" => "chatcmpl-123", - "object" => "chat.completion", - "created" => 1_677_628_800, - "model" => "gpt-4.1-mini", - "choices" => [ - %{ - "index" => 0, - "message" => %{ - "role" => "assistant", - "content" => nil, - "tool_calls" => [ - %{ - "id" => "call_123", - "type" => "function", - "function" => %{ - "name" => "calculator", - "arguments" => "{\"expression\": \"2 * 3\"}" - } - } - ] - }, - "finish_reason" => "tool_calls" - } - ], - "usage" => %{"prompt_tokens" => 50, "completion_tokens" => 10} - } - - conn - |> Plug.Conn.put_resp_header("content-type", "application/json") - |> Plug.Conn.resp(200, Jason.encode!(json_response)) - end) - - # Mock second call for completion after function execution - Bypass.expect_once(bypass, "POST", "/chat/completions", fn conn -> - {:ok, body, _conn} = Plug.Conn.read_body(conn) - request_data = Jason.decode!(body) - - messages = request_data["messages"] - assert length(messages) > 1 - - json_response = %{ - "id" => "chatcmpl-456", - "object" => "chat.completion", - "created" => 1_677_628_801, - "model" => "gpt-4.1-mini", - "choices" => [ - %{ - "index" => 0, - "message" => %{ - "role" => "assistant", - "content" => "2 * 3 equals 6" - }, - "finish_reason" => "stop" - } - ], - "usage" => %{"prompt_tokens" => 70, "completion_tokens" => 5} - } - - conn - |> Plug.Conn.put_resp_header("content-type", "application/json") - |> Plug.Conn.resp(200, Jason.encode!(json_response)) - end) - - calculator_function = %LlmComposer.Function{ - mf: {__MODULE__, :calculator}, - name: "calculator", - description: "Calculator function", - schema: %{ - "type" => "object", - "properties" => %{"expression" => %{"type" => "string"}}, - "required" => ["expression"] - } - } - - settings = %Settings{ - providers: [ - {LlmComposer.Providers.OpenAI, - [ - model: "gpt-4.1-mini", - api_key: "test-key", - url: endpoint_url(bypass.port) - ]} - ], - functions: [calculator_function], - auto_exec_functions: true, - system_prompt: "You are a helpful assistant" - } - - {:ok, response} = LlmComposer.simple_chat(settings, "What 
is 2 * 3?") - - assert response.main_response.content == "2 * 3 equals 6" - assert response.input_tokens == 70 - assert response.output_tokens == 5 - assert response.provider == :open_ai - end - - test "exercises completion path when functions are executed with OpenRouter", %{bypass: bypass} do - # Mock first call that returns function calls (OpenRouter/OpenAI format) - Bypass.expect_once(bypass, "POST", "/chat/completions", fn conn -> - json_response = %{ - "id" => "chatcmpl-123", - "object" => "chat.completion", - "created" => 1_677_628_800, - "model" => "openrouter-model", - "choices" => [ - %{ - "index" => 0, - "message" => %{ - "role" => "assistant", - "content" => nil, - "tool_calls" => [ - %{ - "id" => "call_123", - "type" => "function", - "function" => %{ - "name" => "calculator", - "arguments" => "{\"expression\": \"4 + 1\"}" - } - } - ] - }, - "finish_reason" => "tool_calls" - } - ], - "usage" => %{"prompt_tokens" => 50, "completion_tokens" => 10} - } - - conn - |> Plug.Conn.put_resp_header("content-type", "application/json") - |> Plug.Conn.resp(200, Jason.encode!(json_response)) - end) - - # Mock second call for completion after function execution - Bypass.expect_once(bypass, "POST", "/chat/completions", fn conn -> - {:ok, body, _conn} = Plug.Conn.read_body(conn) - request_data = Jason.decode!(body) - - # Ensure the second request contains the appended function result - messages = request_data["messages"] - assert length(messages) > 1 - - json_response = %{ - "id" => "chatcmpl-456", - "object" => "chat.completion", - "created" => 1_677_628_801, - "model" => "openrouter-model", - "choices" => [ - %{ - "index" => 0, - "message" => %{ - "role" => "assistant", - "content" => "4 + 1 equals 5" - }, - "finish_reason" => "stop" - } - ], - "usage" => %{"prompt_tokens" => 70, "completion_tokens" => 5} - } - - conn - |> Plug.Conn.put_resp_header("content-type", "application/json") - |> Plug.Conn.resp(200, Jason.encode!(json_response)) - end) - - calculator_function = %LlmComposer.Function{ - mf: {__MODULE__, :calculator}, - name: "calculator", - description: "Calculator function", - schema: %{ - "type" => "object", - "properties" => %{"expression" => %{"type" => "string"}}, - "required" => ["expression"] - } - } - - settings = %Settings{ - providers: [ - {LlmComposer.Providers.OpenRouter, - [ - model: "openrouter-model", - api_key: "test-key", - url: endpoint_url(bypass.port) - ]} - ], - functions: [calculator_function], - auto_exec_functions: true, - system_prompt: "You are a helpful assistant" - } - - {:ok, response} = LlmComposer.simple_chat(settings, "What is 4 + 1?") - - assert response.main_response.content == "4 + 1 equals 5" - assert response.input_tokens == 70 - assert response.output_tokens == 5 - assert response.provider == :open_router - end - - test "skips function execution when auto_exec_functions is false", %{bypass: bypass} do - # Mock single call that returns function calls - Bypass.expect_once( - bypass, - "POST", - "/v1beta/models/gemini-2.5-flash:generateContent", - fn conn -> - json_response = %{ - "candidates" => [ - %{ - "content" => %{ - "parts" => [ - %{ - "functionCall" => %{ - "name" => "calculator", - "args" => %{"expression" => "1 + 1"} - } - } - ], - "role" => "model" - }, - "finishReason" => "STOP", - "index" => 0 - } - ], - "usageMetadata" => %{ - "promptTokenCount" => 40, - "candidatesTokenCount" => 8, - "totalTokenCount" => 48 - } - } - - conn - |> Plug.Conn.put_resp_header("content-type", "application/json") - |> Plug.Conn.resp(200, 
Jason.encode!(json_response)) - end - ) - - calculator_function = %LlmComposer.Function{ - mf: {__MODULE__, :calculator}, - name: "calculator", - description: "Calculator function", - schema: %{ - "type" => "object", - "properties" => %{"expression" => %{"type" => "string"}}, - "required" => ["expression"] - } - } - - settings = %Settings{ - providers: [ - {LlmComposer.Providers.Google, - [ - model: "gemini-2.5-flash", - api_key: "test-key", - url: "http://localhost:#{bypass.port}/v1beta/models/" - ]} - ], - functions: [calculator_function], - # Disabled - auto_exec_functions: false, - system_prompt: "You are a helpful assistant" - } - - {:ok, response} = LlmComposer.simple_chat(settings, "What is 1 + 1?") - - assert response.actions != [] - assert length(response.actions) == 1 - [actions_list] = response.actions - assert length(actions_list) == 1 - action = hd(actions_list) - assert action.name == "calculator" - assert action.arguments == %{"expression" => "1 + 1"} - # Not executed - assert action.result == nil - end - - test "maybe_exec_functions executes functions and returns completion" do - fcall = %LlmComposer.FunctionCall{ - type: "function", - id: "call_1", - name: "calculator", - arguments: %{"expression" => "2 + 3"} - } - - functions = [ - %LlmComposer.Function{ - mf: {__MODULE__, :calculator}, - name: "calculator", - description: "Calculator function", - schema: %{"type" => "object"} - } - ] - - res = %LlmComposer.LlmResponse{actions: [[fcall]], main_response: nil} - - {:completion, ^res, results} = LlmComposer.Helpers.maybe_exec_functions(res, functions) - - assert length(results) == 1 - assert hd(results).result == 5 - end - - test "maybe_complete_chat builds assistant message from original and serializes results" do - # Prepare an old response that contains metadata.original with tool_calls (simulating OpenAI/OpenRouter) - original = %{"tool_calls" => [%{"id" => "call_1"}]} - old_main = LlmComposer.Message.new(:assistant, "previous", %{original: original}) - oldres = %LlmComposer.LlmResponse{main_response: old_main} - - # Prepare function call results with different result types - res_map = %LlmComposer.FunctionCall{name: "m1", result: %{"a" => 1}} - res_bin = %LlmComposer.FunctionCall{name: "m2", result: "a binary"} - res_other = %LlmComposer.FunctionCall{name: "m3", result: 123} - - messages = [LlmComposer.Message.new(:user, "hi")] - - run_completion = fn new_messages -> - # Should include assistant message with original preserved and nil content - assert Enum.any?(new_messages, fn m -> - m.type == :assistant and m.content == nil and - Map.get(m.metadata, :original) == original - end) - - # Should include function_result messages with serialized contents - frs = Enum.filter(new_messages, fn m -> m.type == :function_result end) - assert length(frs) == 3 - - [f1, f2, f3] = frs - assert f1.content == Jason.encode!(%{"a" => 1}) - assert f2.content == "a binary" - assert f3.content == "123" - - {:ok, :done} - end - - assert LlmComposer.Helpers.maybe_complete_chat( - {:completion, oldres, [res_map, res_bin, res_other]}, - messages, - run_completion - ) == {:ok, :done} - end - - test "maybe_complete_chat falls back to main_response when original does not include tool_calls or parts" do - old_main = LlmComposer.Message.new(:assistant, "prev content", %{}) - oldres = %LlmComposer.LlmResponse{main_response: old_main} - - res = %LlmComposer.FunctionCall{name: "m1", result: "ok"} - messages = [LlmComposer.Message.new(:user, "hello")] - - run_completion = fn new_messages -> - # 
Assistant message should be the previous main_response - assert Enum.any?(new_messages, fn m -> - m.type == :assistant and m.content == "prev content" - end) - - {:ok, :done} - end - - assert LlmComposer.Helpers.maybe_complete_chat( - {:completion, oldres, [res]}, - messages, - run_completion - ) == {:ok, :done} - end - - # Test helper functions - @spec calculator(map()) :: integer() | float() | {:error, String.t()} - def calculator(%{"expression" => expression}) do - # Simple calculator for testing - case expression do - "2 + 3" -> 5 - "10 / 2" -> 5.0 - "5 * 3" -> 15 - "1 + 1" -> 2 - "2 * 3" -> 6 - _ -> {:error, "Unsupported expression"} - end - end - - defp endpoint_url(port), do: "http://localhost:#{port}/" -end