
Conversation

@yigithankarabulut

Summary

Implement identityRequestProcessor in the llminternal package, referencing the identity flow implementation in adk-python (src/google/adk/flows/llm_flows/identity.py).

Changes

  • Added identityRequestProcessor function in other_processors.go
  • Implemented request processing logic following Python ADK patterns
  • Added corresponding test cases in other_processors_test.go

Testing

  • Added unit tests that fully cover the basic functionality
  • All tests pass with expected behavior

@gemini-code-assist
Contributor

Summary of Changes

Hello @yigithankarabulut, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a new feature that enhances LLM agent interactions by automatically injecting agent identity information into LLM requests. By referencing the existing adk-python implementation, it ensures consistency across platforms and improves the LLM's understanding of the agent's role and purpose, leading to more contextually aware responses.

Highlights

  • Implementation of identityRequestProcessor: A new function, identityRequestProcessor, has been added to other_processors.go to handle identity context for LLM agents, fulfilling a previous TODO item.
  • Agent Identity Integration: This processor dynamically constructs and adds system instructions to LLM requests, informing the LLM about the agent's name and description, mirroring the implementation found in adk-python.
  • Comprehensive Unit Tests: A new test file, other_processors_test.go, has been added with extensive unit tests covering various scenarios for the identityRequestProcessor, including agents with names, descriptions, existing system instructions, and non-LLM agents, ensuring its correct behavior.
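
The instruction-building behavior described above can be sketched as follows. This is a minimal, self-contained approximation for illustration only: buildIdentityInstructions and its string inputs are stand-ins, not the actual API of the llminternal package, and the join separator is an assumption.

```go
package main

import (
	"fmt"
	"strings"
)

// buildIdentityInstructions approximates the processor's logic: emit one
// instruction for the agent name and one for the description, skip empty
// values, and join the results with blank lines.
func buildIdentityInstructions(name, description string) string {
	instructions := make([]string, 0, 2)
	if name != "" {
		instructions = append(instructions, fmt.Sprintf("You are an agent. Your internal name is %q.", name))
	}
	if description != "" {
		instructions = append(instructions, fmt.Sprintf("The description about you is %q.", description))
	}
	return strings.Join(instructions, "\n\n")
}

func main() {
	fmt.Println(buildIdentityInstructions("TestAgent", "Answers billing questions"))
}
```

With both fields set, this prints the name instruction and the description instruction separated by a blank line; with both empty, it produces an empty string, mirroring the "do nothing" path.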

Contributor

@gemini-code-assist bot left a comment


Code Review

This pull request implements the identityRequestProcessor to add agent identity information to LLM requests, along with corresponding unit tests. The implementation is straightforward and follows the intended logic. My review includes suggestions to improve string formatting safety to prevent potential prompt injection, enhance test coverage for edge cases, and refactor the test assertion logic for better readability and maintainability.

@yigithankarabulut
Author

/gemini review

Contributor

@gemini-code-assist bot left a comment


Code Review

This pull request successfully implements the identityRequestProcessor and adds a comprehensive set of unit tests. The implementation is clear and follows the intended logic from the Python counterpart. My main feedback is focused on improving the structure and clarity of the test setup in other_processors_test.go to make it more robust and maintainable. Overall, great work on this feature.

@yigithankarabulut
Copy link
Author

/gemini review

Contributor

@gemini-code-assist bot left a comment


Code Review

This pull request implements the identityRequestProcessor and adds corresponding unit tests. The implementation correctly adds agent name and description to the system instructions. The tests are comprehensive and cover various scenarios. My review includes a few suggestions to improve code readability and maintainability in both the implementation and the test files by reducing code duplication and repeated function calls. Overall, the changes are well-implemented and tested.

Comment on lines +30 to +42
llmAgent := asLLMAgent(ctx.Agent())
if llmAgent == nil {
	return nil // do nothing.
}

// Add identity information to system instructions.
identityInstructions := make([]string, 0, 2)
if name := ctx.Agent().Name(); name != "" {
	identityInstructions = append(identityInstructions, fmt.Sprintf(`You are an agent. Your internal name is %q.`, name))
}
if description := ctx.Agent().Description(); description != "" {
	identityInstructions = append(identityInstructions, fmt.Sprintf(`The description about you is %q.`, description))
}
Contributor


medium

The ctx.Agent() method is called multiple times. It would be more efficient and readable to call it once and store the result in a local variable. This also avoids using the llmAgent variable which is only used for a nil check.

Suggested change

Before:

llmAgent := asLLMAgent(ctx.Agent())
if llmAgent == nil {
	return nil // do nothing.
}
// Add identity information to system instructions.
identityInstructions := make([]string, 0, 2)
if name := ctx.Agent().Name(); name != "" {
	identityInstructions = append(identityInstructions, fmt.Sprintf(`You are an agent. Your internal name is %q.`, name))
}
if description := ctx.Agent().Description(); description != "" {
	identityInstructions = append(identityInstructions, fmt.Sprintf(`The description about you is %q.`, description))
}

After:

agent := ctx.Agent()
if agent == nil || asLLMAgent(agent) == nil {
	return nil // do nothing.
}
// Add identity information to system instructions.
identityInstructions := make([]string, 0, 2)
if name := agent.Name(); name != "" {
	identityInstructions = append(identityInstructions, fmt.Sprintf("You are an agent. Your internal name is %q.", name))
}
if description := agent.Description(); description != "" {
	identityInstructions = append(identityInstructions, fmt.Sprintf("The description about you is %q.", description))
}

Author


While caching ctx.Agent() in a local variable would be more efficient, it would create inconsistency with the established patterns in this package. The current approach prioritizes code consistency and readability over micro-optimizations.

If we want to optimize this, it should be done as a package-wide refactoring to maintain consistency across all functions, rather than changing just one function.

Comment on lines +40 to +46
agent: &struct {
	agent.Agent
	State
}{
	Agent: utils.Must(agent.New(agent.Config{Name: "TestAgent"})),
	State: State{},
},
Contributor


medium

The pattern for creating a test LLM agent using an anonymous struct is repeated for multiple test cases. To improve readability and maintainability, consider creating a helper function to encapsulate this logic. This will make the test suite easier to read and modify in the future.

For example, you could add a helper function:

func newTestLLMAgent(cfg agent.Config) agent.Agent {
	return &struct {
		agent.Agent
		State
	}{
		Agent: utils.Must(agent.New(cfg)),
		State: State{},
	}
}

And then use it in your test cases like this:

{
    name: "LLM agent with name only - adds name instruction",
    agent: newTestLLMAgent(agent.Config{Name: "TestAgent"}),
    req: &model.LLMRequest{},
    wantErr: false,
    wantSystemInst: `You are an agent. Your internal name is "TestAgent".`,
},

Author


identityRequestProcessor() isn't exported, so I placed the test file in the llminternal package instead of llminternal_test. Any test helper functions would then be callable from anywhere within the llminternal package, which I don't think we want.

Comment on lines +169 to +178
var gotText string
if tc.req.Config != nil && tc.req.Config.SystemInstruction != nil {
	var texts []string
	for _, part := range tc.req.Config.SystemInstruction.Parts {
		if part.Text != "" {
			texts = append(texts, part.Text)
		}
	}
	gotText = strings.Join(texts, "\n\n")
}
Contributor


medium

The logic to extract and format the system instruction text from the request is contained within the test loop. Extracting this into a helper function would improve code clarity and reduce repetition.

For example, you could define a helper:

func getSystemInstructionText(req *model.LLMRequest) string {
	if req.Config == nil || req.Config.SystemInstruction == nil {
		return ""
	}
	var texts []string
	for _, part := range req.Config.SystemInstruction.Parts {
		if part.Text != "" {
			texts = append(texts, part.Text)
		}
	}
	return strings.Join(texts, "\n\n")
}

And use it in the test loop:

gotText := getSystemInstructionText(tc.req)
if diff := cmp.Diff(tc.wantSystemInst, gotText); diff != "" {
    t.Errorf("system instruction mismatch (-want +got):\n%s", diff)
}
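
For illustration, a runnable version of that helper with stand-in types; the real Part, Content, Config, and LLMRequest types live in the model and genai packages, so the definitions below are placeholders with only the fields the helper touches:

```go
package main

import (
	"fmt"
	"strings"
)

// Stand-in types mirroring only the fields the helper reads.
type Part struct{ Text string }
type Content struct{ Parts []*Part }
type Config struct{ SystemInstruction *Content }
type LLMRequest struct{ Config *Config }

// getSystemInstructionText collects the non-empty text parts of the
// system instruction and joins them with blank lines; it returns ""
// when the request has no config or no system instruction.
func getSystemInstructionText(req *LLMRequest) string {
	if req.Config == nil || req.Config.SystemInstruction == nil {
		return ""
	}
	var texts []string
	for _, part := range req.Config.SystemInstruction.Parts {
		if part.Text != "" {
			texts = append(texts, part.Text)
		}
	}
	return strings.Join(texts, "\n\n")
}

func main() {
	req := &LLMRequest{Config: &Config{SystemInstruction: &Content{
		Parts: []*Part{{Text: "first"}, {Text: ""}, {Text: "second"}},
	}}}
	fmt.Printf("%q\n", getSystemInstructionText(req))          // "first\n\nsecond"
	fmt.Printf("%q\n", getSystemInstructionText(&LLMRequest{})) // ""
}
```

Note that empty parts are skipped and the nil checks make the helper safe to call on a zero-value request.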

Author


The same situation applies as the comment in the conversation above.

@yigithankarabulut
Author

@rakyll Hello, can you review this PR?
