create_agent with tools fail as final LLM call does not happen and it errors out with UnboundLocalError #33696

@hoppiesbunny

Checked other resources

  • This is a bug, not a usage question.
  • I added a clear and descriptive title that summarizes this issue.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain rather than my code.
  • The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
  • This is not related to the langchain-community package.
  • I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
  • I posted a self-contained, minimal, reproducible example. A maintainer can copy it and run it AS IS.

Example Code

Example code I use to call AzureChatOpenAI:

    import json

    agent = create_agent(
        model=self._client,
        tools=[search_web],
        system_prompt=system_prompt,
        state_schema=state_typed_dict,
        response_format=response_model,
    )
    raw_llm_result = agent.invoke(
        input={"messages": [{"role": "user", "content": user_prompt}]}
    )

    # structured output is a JSON string in result['messages'][-1].content
    structured_output = json.loads(raw_llm_result["messages"][-1].content)

Error Message and Stack Trace (if applicable)

UnboundLocalError("cannot access local variable 'last_ai_index' where it is not associated with a value")

This is the function inside LangChain that is erroring out repeatedly:

def _fetch_last_ai_and_tool_messages(
    messages: list[AnyMessage],
) -> tuple[AIMessage, list[ToolMessage]]:
    last_ai_index: int
    last_ai_message: AIMessage

    for i in range(len(messages) - 1, -1, -1):
        if isinstance(messages[i], AIMessage):
            last_ai_index = i
            last_ai_message = cast("AIMessage", messages[i])
            break

    tool_messages = [m for m in messages[last_ai_index + 1 :] if isinstance(m, ToolMessage)]
    return last_ai_message, tool_messages

Description

The moment tools are included, the agent errors out after the tool call, even though the tools return valid results.
This only happens after I upgraded to LangChain v1.0. Things work fine if no tools are included, and the nature of the tools does not matter.

You can see from my LangSmith trace that the second LLM call after all the tools never happens.

[Image: LangSmith trace showing no second LLM call after the tool calls]

You can see that the tool does return valid results. I've redacted the information for privacy reasons.

[Image: LangSmith trace showing the tool returning valid results (redacted)]

Before the upgrade to LangChain v1.0, there was always a final LLM call after the tool calls (refer to the image below). There was no change in my business logic when I upgraded to LangChain v1.0.

[Image: pre-v1.0 LangSmith trace showing a final LLM call after the tool calls]

I dug deeper into the LangChain source code and saw in the docstring of the create_agent function that the second LLM call should be deterministic:

    The agent node calls the language model with the messages list (after applying
    the system prompt). If the resulting `AIMessage` contains `tool_calls`, the graph
    will then call the tools. The tools node executes the tools and adds the responses
    to the messages list as `ToolMessage` objects. The agent node then calls the
    language model again. The process repeats until no more `tool_calls` are
    present in the response. The agent then returns the full list of messages.
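The loop that docstring describes can be sketched in plain Python (stand-in types and a hypothetical `run_agent` helper, not the real LangChain implementation):

```python
from dataclasses import dataclass, field


@dataclass
class AIMessage:
    content: str
    tool_calls: list = field(default_factory=list)


@dataclass
class ToolMessage:
    content: str


def run_agent(model, tools, messages):
    """Sketch of the model -> tools -> model loop from the docstring.

    model: callable taking the messages list, returning an AIMessage.
    tools: mapping of tool name -> callable.
    """
    while True:
        ai = model(messages)
        messages.append(ai)
        if not ai.tool_calls:
            # No more tool calls: return the full list of messages.
            return messages
        for call in ai.tool_calls:
            result = tools[call["name"]](**call["args"])
            messages.append(ToolMessage(content=str(result)))
```

Under this reading, the final model call after the tool results is unconditional, which is what my pre-v1.0 traces show and what stopped happening after the upgrade.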

Do let me know if more information is needed!

System Info

python = "^3.11"
langgraph = ">=0.2.6"
langchain-openai = "^1.0.1"
langchain-anthropic = "^1.0.0"
langchain-google-vertexai = "^3.0.1"
langchain = "^1.0.2"
langchain-fireworks = "^1.0.0"
