Description
What would you like to be added?
I propose adding a configuration option (e.g., context = false, isolate = true, or use_history = false) to the custom command TOML configuration files.
When set to false, the CLI should execute this specific command using only the defined system prompt and the current user input, explicitly excluding the accumulated conversation history from the context window.
Proposed TOML structure example (~/.gemini/commands/pr.toml):

```toml
description = "Generate git commit messages"
context = false # <--- New Request: Do not send previous chat history
prompt = """
Generate a git commit message for: {{args}}
"""
```

Why is this needed?
Currently, when using custom commands (like /pr or /translate) inside an interactive session (gemini chat), the CLI automatically appends the entire session history. This causes two major issues:
- Token Waste: I often need to run a quick utility command (e.g., generating a PR title) while in the middle of a long debugging session. Sending the entire debugging history just to generate a one-line title consumes unnecessary tokens.
- Context Pollution: The previous context can interfere with the utility command. For example, if I'm discussing a Python bug and then run a /pr command for a documentation change, the model might hallucinate and try to include Python fixes in the PR description because it sees the previous conversation.
This feature would allow users to "interweave" utility commands within a session without breaking flow or polluting context.
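To make the proposed behavior concrete, here is a rough sketch of how prompt assembly could branch on the new key. It is written in TypeScript purely for illustration; CommandConfig, Turn, and buildContents are hypothetical names, not existing gemini-cli APIs:

```ts
// A rough sketch only: these types and names are hypothetical, not gemini-cli internals.
interface CommandConfig {
  description?: string;
  prompt: string;
  context?: boolean; // proposed key; missing or true = current behavior
}

interface Turn {
  role: "user" | "model";
  text: string;
}

// Assemble the contents to send for one custom command invocation.
function buildContents(command: CommandConfig, args: string, history: Turn[]): Turn[] {
  // Substitute the first {{args}} placeholder (simplified rendering).
  const rendered = command.prompt.replace("{{args}}", args);
  const userTurn: Turn = { role: "user", text: rendered };

  if (command.context === false) {
    // context = false: send only the rendered command prompt, no session history.
    return [userTurn];
  }

  // Default: prepend the accumulated session history, as the CLI does today.
  return [...history, userTurn];
}
```

Treating a missing key as true would keep every existing command file behaving exactly as it does today.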
Additional context
Use Case Scenario:
- I am in a deep conversation debugging a Docker issue (Context: 10k+ tokens).
- I want to quickly generate a standard PR title for a fix I just found: /pr fix docker networking.
- Current Behavior: The request sends 10k+ tokens of context. The model might reply: "Based on the logs you showed me earlier, here is the PR title..."
- Desired Behavior (with context = false): The request sends only the /pr prompt and my input. The model generates a clean, independent response, saving tokens and ensuring accuracy.
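For this scenario, the difference is simply what ends up in the request contents. A simplified illustration (the history turns below are invented, and the shapes are not the real gemini-cli request format):

```ts
// Illustration only: simplified request contents for the Docker scenario.
const history = [
  { role: "user", text: "Here are the docker compose logs..." }, // ~10k tokens of debugging turns
  { role: "model", text: "The bridge network looks misconfigured..." },
];

// Rendered from the pr.toml prompt above with args = "fix docker networking".
const commandPrompt = "Generate a git commit message for: fix docker networking";

// Current behavior: the whole debugging history rides along with the command.
const currentRequest = [...history, { role: "user", text: commandPrompt }];

// Desired behavior with context = false: a clean, single-turn request.
const desiredRequest = [{ role: "user", text: commandPrompt }];
```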