diff --git a/.claude/plans/custom-prompting-feature.md b/.claude/plans/custom-prompting-feature.md index 87e3063..9969226 100644 --- a/.claude/plans/custom-prompting-feature.md +++ b/.claude/plans/custom-prompting-feature.md @@ -530,9 +530,26 @@ class ChatSession: - Safety and security measures - Documentation and examples -## Built-in Prompt Library - -### Writing & Communication (15 prompts) +## Revised Built-in Prompt Library + +### Phase 0.5: Essential Prompts (5 prompts) +- `email`: Professional email drafting +- `code-review`: Basic code review +- `summarize`: Content summarization +- `explain`: Concept explanation +- `analyze`: General analysis framework + +### Phase 1: Core Library (10 prompts) +- Previous 5 plus: +- `debug-help`: Debugging assistance +- `technical-doc`: Technical documentation +- `meeting-notes`: Meeting summarization +- `creative-writing`: Creative writing assistance +- `data-analysis`: Data interpretation + +### Phase 2: Complete Library (30+ prompts) + +#### Writing & Communication (10 prompts) - `email-professional`: Professional email drafting - `email-followup`: Follow-up email templates - `technical-documentation`: Technical doc generation diff --git a/CLAUDE.md b/CLAUDE.md index cfacf5b..75cd875 100644 --- a/CLAUDE.md +++ b/CLAUDE.md @@ -10,9 +10,10 @@ Nova is an AI research and personal assistant written in Python that provides: - YAML-based configuration management - Chat history saved to markdown files - **Multi-provider AI integration** (OpenAI, Anthropic, Ollama) +- **Custom prompt templating system** with built-in templates and user-defined prompts - Modular architecture for extensibility -**Current Status:** Phase 2 complete (AI integration), supports OpenAI, Anthropic, and Ollama. +**Current Status:** Phase 3 complete (Custom Prompting), supports OpenAI, Anthropic, and Ollama with custom prompt templates. 
## Package Management Commands @@ -37,6 +38,13 @@ Use these commands: - Show configuration: `uv run nova config show` - Initialize config: `uv run nova config init` +## Prompt Management Commands + +- List available prompt templates: `/prompts` +- Search prompt templates: `/prompts search <query>` +- Apply a prompt template: `/prompt <name>` +- User-defined templates are stored in `~/.nova/prompts/user/custom/`; built-in templates ship with Nova + ## Testing Commands - Run all tests: `uv run pytest` @@ -107,6 +115,9 @@ Use these commands: - **Clean logic**: Keep core logic clean and push implementation details to the edges - **File Organisation**: Balance file organization with simplicity - use an appropriate number of files for the project scale - **Input Handling**: Use `prompt-toolkit` for enhanced terminal input with arrow key navigation and history support +- **Template Security**: Always validate user-provided templates for dangerous patterns and length limits +- **Path Security**: Ensure file paths are properly sanitized to prevent directory traversal attacks +- **Variable Validation**: Use type checking for template variables and validate required fields - **YAML Security**: Always use `yaml.safe_load()` instead of `yaml.load()` to prevent code injection - **Error Handling**: Use specific exception types and provide meaningful error messages; avoid silent failures - **Metadata Validation**: Validate user-provided metadata to prevent oversized or malicious content @@ -126,6 +137,7 @@ Use these commands: - Chat history persistence in markdown format with YAML frontmatter metadata - Interactive chat sessions with commands (/help, /save, etc.)
- Enhanced input handling with arrow key navigation and message history +- **Custom prompt templating system** with validation, categories, and variable substitution - Comprehensive test suite with unit and integration tests **Chat History Format:** diff --git a/README.md b/README.md index bbf2d84..07b9014 100644 --- a/README.md +++ b/README.md @@ -1,11 +1,12 @@ # Nova - AI Research Assistant -Nova is a configurable command-line AI assistant that provides multi-provider AI integration, conversation history, and extensible chat capabilities. Built with Python and Typer, Nova supports OpenAI, Anthropic, and Ollama providers with a profile-based configuration system and persistent chat history saved in markdown format. +Nova is a configurable command-line AI assistant that provides multi-provider AI integration, conversation history, and extensible chat capabilities. Built with Python and Typer, Nova supports OpenAI, Anthropic, and Ollama providers with a profile-based configuration system, custom prompting templates, and persistent chat history saved in markdown format. 
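The `${var}`-style variable substitution mentioned above can be sketched with Python's standard `string.Template`, which is what the prompt engine in this diff uses via `safe_substitute`. The values below (`demo-user`) are illustrative stand-ins, not real lookups:

```python
from datetime import datetime
from string import Template

# Sketch of ${var} substitution as used by the prompt system.
# safe_substitute leaves unknown placeholders intact instead of
# raising KeyError, so a mistyped variable degrades gracefully.
context = {
    "current_date": datetime.now().strftime("%Y-%m-%d"),
    "user_name": "demo-user",  # illustrative value only
}
template = Template("Today is ${current_date} and the user is ${user_name}.")
rendered = template.safe_substitute(context)
print(rendered)

# Unknown variables survive untouched rather than raising:
partial = Template("Hello ${missing}").safe_substitute(context)
print(partial)  # Hello ${missing}
```

`safe_substitute` (rather than `substitute`) is the reason a template can reference an optional variable that was never supplied without aborting the render.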
## Features - **Multi-provider AI Integration**: Support for OpenAI, Anthropic, and Ollama - **Profile-based Configuration**: Easy switching between different AI models and providers +- **Custom Prompting System**: Built-in prompt library with interactive templates and system prompts - **Interactive CLI**: Built with Typer for a rich command-line experience - **Intelligent Chat History**: Conversations saved as markdown files with smart content-based titles - **Session Management**: Resume your most recent conversation with a single command @@ -142,6 +143,7 @@ uv run nova --verbose chat start While in a chat session, you can use these commands: +**General Commands:** - `/help` - Show available commands - `/save` - Save current conversation - `/history` - View conversation history @@ -152,8 +154,129 @@ While in a chat session, you can use these commands: - `/tag ` - Add tags to conversation - `/stats` - Show memory statistics +**Prompt Commands:** +- `/prompt <name>` - Apply a prompt template interactively +- `/prompts` - List all available prompt templates +- `/prompts search <query>` - Search prompt templates by keywords + **Note**: Conversations without manually set titles will automatically generate intelligent titles based on the content of your first message. +## Custom Prompting System + +Nova includes a powerful custom prompting system that allows you to use pre-built templates and create custom system prompts for different AI profiles.
+ +### Built-in Prompt Templates + +Nova comes with essential prompt templates that you can use immediately: + +- **`email`** - Draft professional emails with proper tone and structure +- **`code-review`** - Comprehensive code review focusing on quality and best practices +- **`summarize`** - Summarize content with key points and insights +- **`explain`** - Explain complex concepts in simple terms +- **`analyze`** - General analysis framework for topics and situations + +### Using Prompts in Chat + +```bash +# List all available prompt templates +/prompts + +# Search for specific prompts +/prompts search code +/prompts search email + +# Apply a prompt template interactively +/prompt email +# Nova will ask for required variables like purpose, recipient, tone + +/prompt code-review +# Nova will ask for code, language, and focus areas +``` + +### System Prompts per Profile + +Configure custom system prompts for each AI profile to get consistent behavior: + +```yaml +profiles: + coding-assistant: + name: "Coding Assistant" + provider: "anthropic" + model_name: "claude-3-5-sonnet-20241022" + system_prompt: | + You are an expert software developer and code reviewer. + Focus on writing clean, maintainable, and well-documented code. + Always explain your reasoning and suggest best practices. + Today is ${current_date} and the user is ${user_name}. + + creative-writer: + name: "Creative Writer" + provider: "openai" + model_name: "gpt-4" + system_prompt: | + You are a creative writing assistant with expertise in storytelling, + character development, and narrative structure. + Help users craft compelling stories and improve their writing style. 
+``` + +### System Prompt Variables + +System prompts support variable substitution: + +- `${current_date}` - Current date (YYYY-MM-DD) +- `${current_time}` - Current time (HH:MM:SS) +- `${user_name}` - System username +- `${conversation_id}` - Current conversation ID +- `${active_profile}` - Active profile name + +### Prompt Configuration + +Configure the prompting system in your `config.yaml`: + +```yaml +prompts: + enabled: true # Enable/disable prompt system + library_path: ~/.nova/prompts # Where to store custom prompts + allow_user_prompts: true # Allow custom user prompts + validate_prompts: true # Validate prompt templates + max_prompt_length: 8192 # Maximum prompt length +``` + +### Creating Custom Prompts + +Create custom prompt templates by saving YAML files in `~/.nova/prompts/user/custom/`: + +```yaml +# ~/.nova/prompts/user/custom/my-prompt.yaml +name: "meeting-prep" +title: "Meeting Preparation Assistant" +description: "Help prepare for meetings with agenda and talking points" +category: "business" +tags: ["meeting", "preparation", "agenda"] +variables: + - name: "meeting_type" + description: "Type of meeting" + required: true + - name: "attendees" + description: "Meeting attendees" + required: false + default: "team members" + - name: "duration" + description: "Meeting duration" + required: false + default: "1 hour" + +template: | + Please help me prepare for a ${meeting_type} with ${attendees}. + The meeting is scheduled for ${duration}. 
+ + Please provide: + - Suggested agenda items + - Key talking points + - Potential questions to ask + - Follow-up actions to consider +``` + ## Enhanced Features ### Intelligent Title Generation @@ -291,6 +414,14 @@ chat: max_history_length: 50 auto_save: true +# Custom prompting system +prompts: + enabled: true + library_path: "~/.nova/prompts" + allow_user_prompts: true + validate_prompts: true + max_prompt_length: 8192 + # AI profiles for different models and providers profiles: default: @@ -315,6 +446,18 @@ profiles: max_tokens: 4000 temperature: 0.7 + coding-assistant: + name: "Coding Assistant" + provider: "anthropic" + model_name: "claude-3-5-sonnet-20241022" + max_tokens: 4000 + temperature: 0.7 + system_prompt: | + You are Nova, an expert software developer and code reviewer. + Focus on writing clean, maintainable, and well-documented code. + Always explain your reasoning and suggest best practices. + Today is ${current_date} and the user is ${user_name}. + llama: name: "llama" provider: "ollama" @@ -358,15 +501,18 @@ nova/ | | |-- chat.py # Chat functionality | | |-- config.py # Configuration management | | |-- history.py # Chat history persistence -| | `-- memory.py # Memory management +| | |-- memory.py # Memory management +| | `-- prompts.py # Prompt management system | |-- models/ # Pydantic data models | | |-- config.py # Configuration models -| | `-- message.py # Message models +| | |-- message.py # Message models +| | `-- prompts.py # Prompt data models | `-- utils/ # Shared utilities | |-- files.py # File operations | `-- formatting.py # Output formatting |-- tests/ # Test suite | |-- unit/ # Unit tests +| | `-- test_prompts.py # Prompt system tests | `-- integration/ # Integration tests `-- config/ `-- default.yaml # Default configuration diff --git a/nova/core/chat.py b/nova/core/chat.py index a32170f..d91d638 100644 --- a/nova/core/chat.py +++ b/nova/core/chat.py @@ -1,6 +1,7 @@ """Core chat session management""" import logging +import os 
import uuid from datetime import datetime from pathlib import Path @@ -10,6 +11,7 @@ from nova.core.history import HistoryManager from nova.core.input_handler import ChatInputHandler from nova.core.memory import MemoryManager +from nova.core.prompts import PromptManager from nova.core.search import SearchError, search_web from nova.models.config import NovaConfig from nova.models.message import Conversation, MessageRole @@ -133,6 +135,9 @@ def __init__( self.history_manager = HistoryManager(self.config.chat.history_dir) self.memory_manager = MemoryManager(self.config.get_active_ai_config()) + self.prompt_manager = ( + PromptManager(self.config.prompts) if self.config.prompts.enabled else None + ) self.input_handler = ChatInputHandler() def start_interactive_chat(self, session_name: str | None = None) -> None: @@ -255,6 +260,9 @@ def _handle_command(self, command: str, session: ChatSession) -> None: " /search --provider - Search with specific provider" ) print(" /search --max - Limit number of results") + print(" /prompt <name> - Apply a prompt template") + print(" /prompts - List available prompt templates") + print(" /prompts search <query> - Search prompt templates") print(" /q, /quit - End session") elif cmd == "/history": @@ -354,6 +362,15 @@ def _handle_command(self, command: str, session: ChatSession) -> None: elif cmd.startswith("/s "): self._handle_search_command(command[3:].strip(), session) + elif cmd.startswith("/prompt "): + self._handle_prompt_command(command[8:].strip(), session) + + elif cmd == "/prompts": + self._handle_prompts_list_command(session) + + elif cmd.startswith("/prompts search "): + self._handle_prompts_search_command(command[16:].strip(), session) + else: print_error(f"Unknown command: {command}") print_info("Type '/help' for available commands") @@ -487,19 +504,9 @@ def _generate_ai_response(self, session: ChatSession) -> str: # Add a system message to set context if active_config.provider in ["openai", "ollama"]: - system_message = "You are Nova, a helpful
AI research assistant. Provide clear, accurate, and helpful responses." - - # Add conversation context info if we have summaries - if session.conversation.summaries: - system_message += ( - " You have access to conversation summaries to maintain context." - ) - - # Add tag context if available - if session.conversation.tags: - system_message += f" This conversation is tagged with: {', '.join(session.conversation.tags)}." - - messages.append({"role": "system", "content": system_message}) + system_message = self._build_system_prompt(session) + if system_message: + messages.append({"role": "system", "content": system_message}) # Add conversation history (already optimized by memory manager) messages.extend(context_messages) @@ -738,3 +745,193 @@ def _load_session_history_to_input(self, session: ChatSession) -> None: for message in session.conversation.messages: if message.role == MessageRole.USER and not message.content.startswith("/"): self.input_handler.add_to_history(message.content) + + def _build_system_prompt(self, session: ChatSession) -> str: + """Build system prompt using prompt manager or fallback to default""" + + # Get active profile + profile = None + if ( + self.config.active_profile + and self.config.active_profile in self.config.profiles + ): + profile = self.config.profiles[self.config.active_profile] + + # Try to use prompt manager for system prompt + if self.prompt_manager and profile and profile.system_prompt: + # Prepare context variables + context = { + "current_date": datetime.now().strftime("%Y-%m-%d"), + "current_time": datetime.now().strftime("%H:%M:%S"), + "user_name": os.getenv("USER", "User"), + "conversation_id": session.conversation.id, + "active_profile": self.config.active_profile or "default", + } + + # Merge with profile variables + if profile.prompt_variables: + context.update(profile.prompt_variables) + + # Get system prompt from prompt manager + system_prompt = self.prompt_manager.get_system_prompt( + profile.system_prompt, 
context + ) + if system_prompt: + # Add conversation context info if we have summaries + if session.conversation.summaries: + system_prompt += " You have access to conversation summaries to maintain context." + + # Add tag context if available + if session.conversation.tags: + system_prompt += f" This conversation is tagged with: {', '.join(session.conversation.tags)}." + + return system_prompt + + # Fallback to default system prompt + default_prompt = "You are Nova, a helpful AI research assistant. Provide clear, accurate, and helpful responses." + + # Add conversation context info if we have summaries + if session.conversation.summaries: + default_prompt += ( + " You have access to conversation summaries to maintain context." + ) + + # Add tag context if available + if session.conversation.tags: + default_prompt += f" This conversation is tagged with: {', '.join(session.conversation.tags)}." + + return default_prompt + + def _handle_prompt_command(self, args: str, session: ChatSession) -> None: + """Handle /prompt command for applying templates""" + + if not self.prompt_manager: + print_error("Prompt system is disabled") + return + + if not args: + print_error("Please provide a prompt name") + print_info("Usage: /prompt <name>") + print_info("Use '/prompts' to see available prompts") + return + + # Parse prompt name (first word) + parts = args.split() + prompt_name = parts[0] + + # Get the template + template = self.prompt_manager.get_template(prompt_name) + if not template: + print_error(f"Prompt template '{prompt_name}' not found") + print_info("Use '/prompts' to see available prompts") + return + + # Show template info + print_info(f"Applying prompt: {template.title}") + print_info(f"Description: {template.description}") + + # Collect variables interactively + variables = {} + for var in template.variables: + if var.required: + while True: + value = input(f"Enter {var.description} ({var.name}): ").strip() + if value: + variables[var.name] = value + break +
print_warning("This field is required") + else: + default_text = f" [default: {var.default}]" if var.default else "" + value = input( + f"Enter {var.description} ({var.name}){default_text}: " + ).strip() + if value: + variables[var.name] = value + elif var.default is not None: + variables[var.name] = var.default + + # Render the template + rendered = self.prompt_manager.render_template(prompt_name, variables) + if rendered: + # Add as system message to current session + session.add_system_message(rendered) + print_success("Prompt applied successfully!") + print_info("The prompt has been added to your conversation context.") + else: + print_error("Failed to render prompt template") + + def _handle_prompts_list_command(self, session: ChatSession) -> None: + """Handle /prompts command for listing templates""" + + if not self.prompt_manager: + print_error("Prompt system is disabled") + return + + templates = self.prompt_manager.list_templates() + if not templates: + print_info("No prompt templates available") + return + + print_info(f"Available prompt templates ({len(templates)} total):") + print() + + # Group by category + by_category = {} + for template in templates: + category = template.category.value + if category not in by_category: + by_category[category] = [] + by_category[category].append(template) + + # Display by category + for category, templates_in_cat in sorted(by_category.items()): + print_success(f"{category.title()}:") + for template in sorted(templates_in_cat, key=lambda t: t.name): + required_vars = len(template.get_required_variables()) + optional_vars = len(template.get_optional_variables()) + var_info = ( + f"({required_vars} required, {optional_vars} optional vars)" + if template.variables + else "(no variables)" + ) + + print(f" {template.name:<15} - {template.title}") + print(f" {template.description} {var_info}") + print() + + def _handle_prompts_search_command(self, query: str, session: ChatSession) -> None: + """Handle /prompts search 
command""" + + if not self.prompt_manager: + print_error("Prompt system is disabled") + return + + if not query: + print_error("Please provide a search query") + print_info("Usage: /prompts search <query>") + return + + results = self.prompt_manager.search_templates(query) + if not results: + print_warning(f"No prompts found matching: {query}") + return + + print_info(f"Found {len(results)} prompt(s) matching '{query}':") + print() + + for template in results: + required_vars = len(template.get_required_variables()) + optional_vars = len(template.get_optional_variables()) + var_info = ( + f"({required_vars} required, {optional_vars} optional vars)" + if template.variables + else "(no variables)" + ) + + print_success(f"{template.name} - {template.title}") + print(f" Category: {template.category.value}") + print(f" Description: {template.description}") + print(f" Variables: {var_info}") + if template.tags: + print(f" Tags: {', '.join(template.tags)}") + print() diff --git a/nova/core/config.py b/nova/core/config.py index e98a60e..a9e93f3 100644 --- a/nova/core/config.py +++ b/nova/core/config.py @@ -190,6 +190,11 @@ def save_config(self, config: NovaConfig, config_path: Path) -> None: if "chat" in config_dict and "history_dir" in config_dict["chat"]: config_dict["chat"]["history_dir"] = str(config_dict["chat"]["history_dir"]) + if "prompts" in config_dict and "library_path" in config_dict["prompts"]: + config_dict["prompts"]["library_path"] = str( + config_dict["prompts"]["library_path"] + ) + try: with open(config_path, "w") as f: yaml.dump(config_dict, f, default_flow_style=False, sort_keys=False) diff --git a/nova/core/prompts.py b/nova/core/prompts.py new file mode 100644 index 0000000..9b6bfca --- /dev/null +++ b/nova/core/prompts.py @@ -0,0 +1,347 @@ +"""Core prompt management system""" + +import logging +import os +import re +from datetime import datetime +from pathlib import Path +from string import Template +from typing import Any + +import yaml + +from
nova.models.config import PromptConfig +from nova.models.prompts import ( + PromptCategory, + PromptTemplate, + PromptVariable, + ValidationResult, + VariableType, +) + +logger = logging.getLogger(__name__) + + +class PromptTemplateEngine: + """Simple template engine for prompt rendering""" + + def __init__(self): + self.context_vars = { + "current_date": datetime.now().strftime("%Y-%m-%d"), + "current_time": datetime.now().strftime("%H:%M:%S"), + "user_name": os.getenv("USER", "User"), + } + + def render(self, template: str, variables: dict[str, Any]) -> str: + """Render template with variables""" + try: + # Combine context variables with user variables + all_vars = {**self.context_vars, **variables} + + # Use Python's Template class for safe substitution + template_obj = Template(template) + + # Replace template variables (${var} format) + return template_obj.safe_substitute(all_vars) + + except Exception as e: + logger.error(f"Template rendering error: {e}") + return template # Return original template on error + + +class PromptValidator: + """Validates prompt templates for safety and correctness""" + + MAX_TEMPLATE_LENGTH = 8192 + MAX_VARIABLES = 20 + DANGEROUS_PATTERNS = [ + r"<script[^>]*>", + r"javascript:", + r"eval\(", + r"exec\(", + r"__import__", + r"subprocess", + r"os\.system", + ] + + def validate_template(self, template: PromptTemplate) -> ValidationResult: + """Validate a complete template""" + errors = [] + + # Check template length + if len(template.template) > self.MAX_TEMPLATE_LENGTH: + errors.append( + f"Template too long: {len(template.template)} > {self.MAX_TEMPLATE_LENGTH}" + ) + + # Check variable count + if len(template.variables) > self.MAX_VARIABLES: + errors.append( + f"Too many variables: {len(template.variables)} > {self.MAX_VARIABLES}" + ) + + # Check for dangerous patterns + for pattern in self.DANGEROUS_PATTERNS: + if re.search(pattern, template.template, re.IGNORECASE): + errors.append(f"Potentially dangerous pattern found: {pattern}") + +
# Validate variable names + for var in template.variables: + if not re.match(r"^[a-zA-Z_][a-zA-Z0-9_]*$", var.name): + errors.append(f"Invalid variable name: {var.name}") + + is_valid = len(errors) == 0 + message = ( + "Valid template" if is_valid else f"Found {len(errors)} validation errors" + ) + + return ValidationResult(is_valid=is_valid, message=message, errors=errors) + + def validate_variables( + self, variables: dict[str, Any], template: PromptTemplate + ) -> ValidationResult: + """Validate provided variables against template requirements""" + errors = [] + + # Check required variables + required_vars = {var.name for var in template.get_required_variables()} + provided_vars = set(variables.keys()) + + missing_vars = required_vars - provided_vars + if missing_vars: + errors.append(f"Missing required variables: {', '.join(missing_vars)}") + + # Validate variable types + for var in template.variables: + if var.name in variables: + value = variables[var.name] + if not self._validate_variable_type(value, var.type): + errors.append(f"Invalid type for {var.name}: expected {var.type}") + + is_valid = len(errors) == 0 + message = ( + "Valid variables" if is_valid else f"Found {len(errors)} variable errors" + ) + + return ValidationResult(is_valid=is_valid, message=message, errors=errors) + + def _validate_variable_type(self, value: Any, var_type: VariableType) -> bool: + """Validate a single variable's type""" + if var_type == VariableType.STRING: + return isinstance(value, str) + elif var_type == VariableType.TEXT: + return isinstance(value, str) + elif var_type == VariableType.INTEGER: + return isinstance(value, int) + elif var_type == VariableType.BOOLEAN: + return isinstance(value, bool) + elif var_type == VariableType.LIST: + return isinstance(value, list) + return True # Unknown type, allow it + + +class PromptManager: + """Main prompt management system""" + + def __init__(self, config: PromptConfig): + self.config = config + self.template_engine = 
PromptTemplateEngine() + self.validator = PromptValidator() + self.library_path = Path(config.library_path).expanduser() + self.builtin_templates: dict[str, PromptTemplate] = {} + self.user_templates: dict[str, PromptTemplate] = {} + + # Initialize directories and load templates + self._ensure_directories() + self._load_builtin_templates() + self._load_user_templates() + + def _ensure_directories(self): + """Create necessary directories""" + self.library_path.mkdir(parents=True, exist_ok=True) + (self.library_path / "user").mkdir(exist_ok=True) + (self.library_path / "user" / "custom").mkdir(exist_ok=True) + (self.library_path / "config").mkdir(exist_ok=True) + + def _load_builtin_templates(self): + """Load built-in prompt templates from YAML files""" + self.builtin_templates = {} + + # Get the project root directory (where prompts/ is located) + project_root = Path(__file__).parent.parent.parent + builtin_prompts_dir = project_root / "prompts" + + if not builtin_prompts_dir.exists(): + logger.warning( + f"Built-in prompts directory not found: {builtin_prompts_dir}" + ) + return + + # Load all YAML files in the prompts directory + for yaml_file in builtin_prompts_dir.glob("*.yaml"): + try: + with open(yaml_file) as f: + data = yaml.safe_load(f) + + # Convert string category to PromptCategory enum + if "category" in data: + category_str = data["category"].upper() + data["category"] = PromptCategory[category_str] + + # Convert variable dictionaries to PromptVariable objects + if "variables" in data: + variables = [] + for var_data in data["variables"]: + # Convert string type to VariableType enum if present + if "type" in var_data: + type_str = var_data["type"].upper() + var_data["type"] = VariableType[type_str] + variables.append(PromptVariable(**var_data)) + data["variables"] = variables + + template = PromptTemplate(**data) + self.builtin_templates[template.name] = template + logger.debug(f"Loaded built-in template: {template.name}") + + except Exception as e: + 
logger.warning(f"Failed to load built-in template {yaml_file}: {e}") + + def _load_user_templates(self): + """Load user-defined prompt templates""" + user_dir = self.library_path / "user" / "custom" + if not user_dir.exists(): + return + + for yaml_file in user_dir.glob("*.yaml"): + try: + with open(yaml_file) as f: + data = yaml.safe_load(f) + template = PromptTemplate(**data) + self.user_templates[template.name] = template + logger.debug(f"Loaded user template: {template.name}") + except Exception as e: + logger.warning(f"Failed to load user template {yaml_file}: {e}") + + def get_template(self, name: str) -> PromptTemplate | None: + """Get a template by name""" + # Check user templates first, then built-in + if name in self.user_templates: + return self.user_templates[name] + return self.builtin_templates.get(name) + + def list_templates( + self, category: PromptCategory | None = None + ) -> list[PromptTemplate]: + """List all available templates""" + all_templates = list(self.builtin_templates.values()) + list( + self.user_templates.values() + ) + + if category: + return [t for t in all_templates if t.category == category] + return all_templates + + def search_templates(self, query: str) -> list[PromptTemplate]: + """Search templates by name, title, description, or tags""" + query_lower = query.lower() + results = [] + + for template in self.list_templates(): + # Search in name, title, description, and tags + searchable_text = f"{template.name} {template.title} {template.description} {' '.join(template.tags)}" + if query_lower in searchable_text.lower(): + results.append(template) + + return results + + def render_template(self, name: str, variables: dict[str, Any]) -> str | None: + """Render a template with provided variables""" + template = self.get_template(name) + if not template: + return None + + # Validate variables if validation is enabled + if self.config.validate_prompts: + validation = self.validator.validate_variables(variables, template) + if not 
validation.is_valid: + logger.error(f"Variable validation failed: {validation.message}") + return None + + # Fill in default values for missing optional variables + final_variables = {} + for var in template.variables: + if var.name in variables: + final_variables[var.name] = variables[var.name] + elif not var.required and var.default is not None: + final_variables[var.name] = var.default + + return self.template_engine.render(template.template, final_variables) + + def save_template( + self, template: PromptTemplate, user_defined: bool = True + ) -> bool: + """Save a template to storage""" + if not self.config.allow_user_prompts and user_defined: + logger.warning("User-defined prompts are disabled") + return False + + # Validate template + if self.config.validate_prompts: + validation = self.validator.validate_template(template) + if not validation.is_valid: + logger.error(f"Template validation failed: {validation.message}") + return False + + try: + if user_defined: + # Save to user directory + file_path = ( + self.library_path / "user" / "custom" / f"{template.name}.yaml" + ) + with open(file_path, "w") as f: + yaml.dump(template.model_dump(), f, default_flow_style=False) + self.user_templates[template.name] = template + else: + # Add to built-in templates (in memory only) + self.builtin_templates[template.name] = template + + logger.info(f"Saved template: {template.name}") + return True + + except Exception as e: + logger.error(f"Failed to save template {template.name}: {e}") + return False + + def delete_template(self, name: str) -> bool: + """Delete a user-defined template""" + if name in self.user_templates: + try: + file_path = self.library_path / "user" / "custom" / f"{name}.yaml" + if file_path.exists(): + file_path.unlink() + del self.user_templates[name] + logger.info(f"Deleted template: {name}") + return True + except Exception as e: + logger.error(f"Failed to delete template {name}: {e}") + return False + return False + + def get_system_prompt( + 
+        self, profile_prompt: str | None, context: dict[str, Any] | None = None
+    ) -> str:
+        """Get system prompt, either direct or from template"""
+        if not profile_prompt:
+            return ""
+
+        # Check if it's a template reference
+        template = self.get_template(profile_prompt)
+        if template:
+            # It's a template reference
+            variables = context or {}
+            return self.render_template(profile_prompt, variables) or ""
+        else:
+            # It's a direct system prompt, apply basic variable substitution
+            if context:
+                return self.template_engine.render(profile_prompt, context)
+            return profile_prompt
diff --git a/nova/models/config.py b/nova/models/config.py
index 222ab76..962c360 100644
--- a/nova/models/config.py
+++ b/nova/models/config.py
@@ -35,6 +35,22 @@ def validate_provider(cls, v: str) -> str:
         return v
 
 
+class PromptConfig(BaseModel):
+    """Prompt system configuration"""
+
+    enabled: bool = Field(default=True, description="Enable custom prompting")
+    library_path: Path = Field(
+        default=Path("~/.nova/prompts"), description="Prompt library location"
+    )
+    allow_user_prompts: bool = Field(
+        default=True, description="Allow user-defined prompts"
+    )
+    validate_prompts: bool = Field(
+        default=True, description="Validate prompt templates"
+    )
+    max_prompt_length: int = Field(default=8192, description="Maximum prompt length")
+
+
 class AIProfile(BaseModel):
     """Named AI configuration profile"""
 
@@ -51,6 +67,12 @@ class AIProfile(BaseModel):
     temperature: float = Field(
         default=0.7, description="Response temperature", ge=0.0, le=1.0
     )
+    system_prompt: str | None = Field(
+        default=None, description="Custom system prompt or template reference"
+    )
+    prompt_variables: dict[str, str] = Field(
+        default_factory=dict, description="Default prompt variables"
+    )
 
     @field_validator("provider")
     @classmethod
@@ -92,6 +114,22 @@ def validate_provider(cls, v: str) -> str:
         return v
 
 
+class MonitoringConfig(BaseModel):
+    """Configuration for monitoring and debugging"""
+
+    enabled: bool = Field(default=True, description="Enable monitoring")
+    level: str = Field(
+        default="basic", description="Monitoring level (basic, detailed, debug)"
+    )
+    debug_log_file: str = Field(
+        default="~/.nova/debug.log", description="Debug log file path"
+    )
+    context_warnings: bool = Field(default=True, description="Show context warnings")
+    performance_metrics: bool = Field(
+        default=True, description="Collect performance metrics"
+    )
+
+
 class ChatConfig(BaseModel):
     """Configuration for chat behavior"""
 
@@ -111,6 +149,8 @@ class NovaConfig(BaseModel):
     chat: ChatConfig = Field(default_factory=ChatConfig)
     search: SearchConfig = Field(default_factory=SearchConfig)
+    prompts: PromptConfig = Field(default_factory=PromptConfig)
+    monitoring: MonitoringConfig = Field(default_factory=MonitoringConfig)
     profiles: dict[str, AIProfile] = Field(
         default_factory=dict, description="Named AI profiles"
     )
diff --git a/nova/models/prompts.py b/nova/models/prompts.py
new file mode 100644
index 0000000..939c3d6
--- /dev/null
+++ b/nova/models/prompts.py
@@ -0,0 +1,99 @@
+"""Prompt data models and schemas"""
+
+from datetime import datetime
+from enum import Enum
+from pathlib import Path
+from typing import Any
+
+from pydantic import BaseModel, Field
+
+
+class VariableType(str, Enum):
+    """Supported variable types"""
+
+    TEXT = "text"
+    STRING = "string"
+    LIST = "list"
+    INTEGER = "integer"
+    BOOLEAN = "boolean"
+
+
+class PromptVariable(BaseModel):
+    """Prompt template variable definition"""
+
+    name: str = Field(description="Variable name")
+    type: VariableType = Field(default=VariableType.TEXT, description="Variable type")
+    required: bool = Field(default=True, description="Whether variable is required")
+    default: Any = Field(default=None, description="Default value if not provided")
+    description: str = Field(default="", description="Variable description")
+
+
+class PromptCategory(str, Enum):
+    """Built-in prompt categories"""
+
+    WRITING = "writing"
+    DEVELOPMENT = "development"
+    ANALYSIS = "analysis"
+    BUSINESS = "business"
+    EDUCATION = "education"
+    COMMUNICATION = "communication"
+    GENERAL = "general"
+
+
+class PromptTemplate(BaseModel):
+    """Prompt template definition"""
+
+    name: str = Field(description="Unique prompt identifier")
+    title: str = Field(description="Human-readable title")
+    description: str = Field(description="Prompt description")
+    category: PromptCategory = Field(
+        default=PromptCategory.GENERAL, description="Prompt category"
+    )
+    tags: list[str] = Field(default_factory=list, description="Searchable tags")
+    variables: list[PromptVariable] = Field(
+        default_factory=list, description="Template variables"
+    )
+    template: str = Field(description="Prompt template content")
+    version: str = Field(default="1.0", description="Template version")
+    author: str = Field(default="Nova", description="Template author")
+    created_at: datetime = Field(
+        default_factory=datetime.now, description="Creation timestamp"
+    )
+    updated_at: datetime = Field(
+        default_factory=datetime.now, description="Last update timestamp"
+    )
+
+    def get_required_variables(self) -> list[PromptVariable]:
+        """Get list of required variables"""
+        return [var for var in self.variables if var.required]
+
+    def get_optional_variables(self) -> list[PromptVariable]:
+        """Get list of optional variables"""
+        return [var for var in self.variables if not var.required]
+
+    def has_variable(self, name: str) -> bool:
+        """Check if template has a specific variable"""
+        return any(var.name == name for var in self.variables)
+
+
+class ValidationResult(BaseModel):
+    """Result of prompt validation"""
+
+    is_valid: bool = Field(description="Whether validation passed")
+    message: str = Field(default="", description="Validation message")
+    errors: list[str] = Field(default_factory=list, description="Validation errors")
+
+
+class PromptLibraryEntry(BaseModel):
+    """Entry in the prompt library index"""
+
+    name: str = Field(description="Prompt name")
+    title: str = Field(description="Display title")
+    category: PromptCategory = Field(description="Prompt category")
+    tags: list[str] = Field(default_factory=list, description="Tags")
+    file_path: Path = Field(description="Path to template file")
+    is_builtin: bool = Field(
+        default=True, description="Whether this is a built-in prompt"
+    )
+    usage_count: int = Field(default=0, description="Usage statistics")
+    last_used: datetime | None = Field(default=None, description="Last usage timestamp")
diff --git a/prompts/analyze.yaml b/prompts/analyze.yaml
new file mode 100644
index 0000000..ad61eff
--- /dev/null
+++ b/prompts/analyze.yaml
@@ -0,0 +1,35 @@
+name: analyze
+title: General Analysis
+description: Analyze and break down complex topics or situations
+category: analysis
+tags:
+  - analysis
+  - breakdown
+  - insights
+variables:
+  - name: topic
+    description: Topic to analyze
+    type: string
+    required: true
+  - name: perspective
+    description: Analysis perspective
+    type: string
+    required: false
+    default: comprehensive
+  - name: depth
+    description: Analysis depth
+    type: string
+    required: false
+    default: thorough
+template: |
+  Please provide a ${depth} ${perspective} analysis of: ${topic}
+
+  Structure your analysis with:
+  - Overview and context
+  - Key components or factors
+  - Relationships and dependencies
+  - Strengths and weaknesses
+  - Potential implications
+  - Recommendations or conclusions
+
+  Use critical thinking and consider multiple angles to provide valuable insights.
diff --git a/prompts/code-review.yaml b/prompts/code-review.yaml
new file mode 100644
index 0000000..04aefc5
--- /dev/null
+++ b/prompts/code-review.yaml
@@ -0,0 +1,41 @@
+name: code-review
+title: Code Review Assistant
+description: Comprehensive code review focusing on quality and best practices
+category: development
+tags:
+  - code
+  - review
+  - quality
+  - best-practices
+variables:
+  - name: code
+    description: Code to review
+    type: text
+    required: true
+  - name: language
+    description: Programming language
+    type: string
+    required: false
+    default: auto-detect
+  - name: focus
+    description: Review focus areas
+    type: string
+    required: false
+    default: quality,security,performance
+template: |
+  Please conduct a thorough code review of this ${language} code:
+
+  ```${language}
+  ${code}
+  ```
+
+  Focus areas: ${focus}
+
+  Please provide feedback on:
+  1. Code quality and best practices
+  2. Security considerations
+  3. Performance optimizations
+  4. Maintainability improvements
+  5. Documentation and comments
+
+  Format your response with specific line references and actionable recommendations.
diff --git a/prompts/email.yaml b/prompts/email.yaml
new file mode 100644
index 0000000..4604e3a
--- /dev/null
+++ b/prompts/email.yaml
@@ -0,0 +1,35 @@
+name: email
+title: Professional Email
+description: Draft professional emails with proper tone and structure
+category: communication
+tags:
+  - email
+  - professional
+  - business
+variables:
+  - name: purpose
+    description: Purpose of the email
+    type: string
+    required: true
+  - name: recipient
+    description: Recipient name or title
+    type: string
+    required: false
+    default: recipient
+  - name: tone
+    description: Email tone
+    type: string
+    required: false
+    default: professional
+template: |
+  Please help me draft a ${tone} email for the following purpose: ${purpose}
+
+  The recipient is: ${recipient}
+
+  Please structure the email with:
+  - Appropriate subject line
+  - Professional greeting
+  - Clear and concise body
+  - Proper closing
+
+  Make sure the tone is ${tone} and the content addresses the specified purpose effectively.
diff --git a/prompts/explain.yaml b/prompts/explain.yaml
new file mode 100644
index 0000000..3e31a7e
--- /dev/null
+++ b/prompts/explain.yaml
@@ -0,0 +1,36 @@
+name: explain
+title: Concept Explanation
+description: Explain complex concepts in simple terms
+category: education
+tags:
+  - explanation
+  - teaching
+  - concepts
+variables:
+  - name: concept
+    description: Concept to explain
+    type: string
+    required: true
+  - name: audience
+    description: Target audience level
+    type: string
+    required: false
+    default: general
+  - name: examples
+    description: Include examples
+    type: string
+    required: false
+    default: "yes"
+template: |
+  Please explain the concept of "${concept}" for a ${audience} audience.
+
+  ${if examples == "yes"}Please include practical examples and analogies to make it easier to understand.${endif}
+
+  Structure your explanation with:
+  - Clear definition
+  - Why it matters
+  - How it works
+  - Real-world applications
+  - Common misconceptions (if any)
+
+  Use simple language and build from basic concepts to more complex ones.
diff --git a/prompts/summarize.yaml b/prompts/summarize.yaml
new file mode 100644
index 0000000..8f5497e
--- /dev/null
+++ b/prompts/summarize.yaml
@@ -0,0 +1,35 @@
+name: summarize
+title: Content Summarization
+description: Summarize content with key points and insights
+category: analysis
+tags:
+  - summary
+  - analysis
+  - key-points
+variables:
+  - name: content
+    description: Content to summarize
+    type: text
+    required: true
+  - name: length
+    description: Summary length
+    type: string
+    required: false
+    default: medium
+  - name: format
+    description: Output format
+    type: string
+    required: false
+    default: bullet-points
+template: |
+  Please provide a ${length} summary of the following content in ${format} format:
+
+  ${content}
+
+  Focus on:
+  - Main ideas and key points
+  - Important details and insights
+  - Actionable takeaways
+  - Clear, concise language
+
+  Structure the summary to be easy to scan and understand.
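The `${variable}` placeholders in these YAML templates follow Python's `string.Template` syntax. As an illustrative sketch of the substitution behaviour the unit tests expect — assuming the engine wraps `string.Template.safe_substitute`, which is an assumption; Nova's actual `PromptTemplateEngine` may differ — rendering looks like this:

```python
from string import Template


def render(template_text: str, variables: dict[str, str]) -> str:
    """Substitute ${var} placeholders; unknown placeholders are left intact."""
    return Template(template_text).safe_substitute(variables)


# Render the summarize template's opening line with its declared defaults
prompt = render(
    "Please provide a ${length} summary of the following content in ${format} format:",
    {"length": "medium", "format": "bullet-points"},
)
```

Leaving unknown placeholders intact (rather than raising `KeyError`, as `substitute` would) keeps a partially filled template visible to the user instead of failing the whole render.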
diff --git a/tests/unit/test_prompts.py b/tests/unit/test_prompts.py
new file mode 100644
index 0000000..dbcb849
--- /dev/null
+++ b/tests/unit/test_prompts.py
@@ -0,0 +1,255 @@
+"""Tests for prompt management system"""
+
+from pathlib import Path
+
+import pytest
+
+from nova.core.prompts import PromptManager, PromptTemplateEngine, PromptValidator
+from nova.models.config import PromptConfig
+from nova.models.prompts import (
+    PromptCategory,
+    PromptTemplate,
+    PromptVariable,
+    VariableType,
+)
+
+
+class TestPromptTemplateEngine:
+    """Test the template rendering engine"""
+
+    def setup_method(self):
+        self.engine = PromptTemplateEngine()
+
+    def test_basic_variable_substitution(self):
+        """Test basic variable substitution"""
+        template = "Hello ${name}, today is ${date}"
+        variables = {"name": "Alice", "date": "2025-01-06"}
+
+        result = self.engine.render(template, variables)
+        assert result == "Hello Alice, today is 2025-01-06"
+
+    def test_missing_variables_safe_substitute(self):
+        """Test that missing variables are left as-is with safe substitution"""
+        template = "Hello ${name}, your score is ${score}"
+        variables = {"name": "Bob"}
+
+        result = self.engine.render(template, variables)
+        assert result == "Hello Bob, your score is ${score}"
+
+    def test_context_variables(self):
+        """Test that context variables are automatically included"""
+        template = "Hello ${user_name}, today is ${current_date}"
+        variables = {}
+
+        result = self.engine.render(template, variables)
+
+        # Should contain actual user name and current date
+        assert "Hello" in result
+        assert "today is" in result
+
+
+class TestPromptValidator:
+    """Test prompt validation"""
+
+    def setup_method(self):
+        self.validator = PromptValidator()
+
+    def test_valid_template(self):
+        """Test validation of a valid template"""
+        template = PromptTemplate(
+            name="test",
+            title="Test Template",
+            description="A test template",
+            template="Hello ${name}",
+            variables=[
+                PromptVariable(name="name", description="User name", required=True)
+            ],
+        )
+
+        result = self.validator.validate_template(template)
+        assert result.is_valid
+        assert result.message == "Valid template"
+
+    def test_template_too_long(self):
+        """Test validation fails for overly long templates"""
+        long_template = "x" * 10000  # Longer than MAX_TEMPLATE_LENGTH
+
+        template = PromptTemplate(
+            name="test",
+            title="Test Template",
+            description="A test template",
+            template=long_template,
+        )
+
+        result = self.validator.validate_template(template)
+        assert not result.is_valid
+        assert "Template too long" in result.errors[0]
+
+    def test_dangerous_patterns(self):
+        """Test validation fails for dangerous patterns"""
+        template = PromptTemplate(
+            name="test",
+            title="Test Template",
+            description="A test template",
+            template="<script>alert('test')</script>",
+        )
+
+        result = self.validator.validate_template(template)
+        assert not result.is_valid
+        assert any("dangerous pattern" in error.lower() for error in result.errors)
+
+    def test_variable_validation(self):
+        """Test variable validation"""
+        template = PromptTemplate(
+            name="test",
+            title="Test Template",
+            description="A test template",
+            template="Hello ${name}",
+            variables=[
+                PromptVariable(name="name", description="User name", required=True),
+                PromptVariable(name="age", type=VariableType.INTEGER, required=False),
+            ],
+        )
+
+        # Test valid variables
+        variables = {"name": "Alice", "age": 25}
+        result = self.validator.validate_variables(variables, template)
+        assert result.is_valid
+
+        # Test missing required variable
+        variables = {"age": 25}
+        result = self.validator.validate_variables(variables, template)
+        assert not result.is_valid
+        assert "Missing required variables" in result.errors[0]
+
+
+class TestPromptTemplate:
+    """Test PromptTemplate model"""
+
+    def test_template_creation(self):
+        """Test creating a basic template"""
+        template = PromptTemplate(
+            name="greeting",
+            title="Greeting Template",
+            description="A simple greeting",
+            template="Hello ${name}!",
+        )
+
+        assert template.name == "greeting"
+        assert template.title == "Greeting Template"
+        assert template.category == PromptCategory.GENERAL
+        assert template.version == "1.0"
+        assert template.author == "Nova"
+
+    def test_required_variables(self):
+        """Test getting required variables"""
+        template = PromptTemplate(
+            name="test",
+            title="Test Template",
+            description="A test template",
+            template="Hello ${name}, you are ${age} years old",
+            variables=[
+                PromptVariable(name="name", required=True),
+                PromptVariable(name="age", required=False, default="unknown"),
+            ],
+        )
+
+        required = template.get_required_variables()
+        assert len(required) == 1
+        assert required[0].name == "name"
+
+        optional = template.get_optional_variables()
+        assert len(optional) == 1
+        assert optional[0].name == "age"
+
+    def test_has_variable(self):
+        """Test checking if template has a variable"""
+        template = PromptTemplate(
+            name="test",
+            title="Test Template",
+            description="A test template",
+            template="Hello ${name}",
+            variables=[PromptVariable(name="name", required=True)],
+        )
+
+        assert template.has_variable("name")
+        assert not template.has_variable("age")
+
+
+class TestPromptManager:
+    """Test the main PromptManager class"""
+
+    def setup_method(self):
+        # Use a temporary config for testing
+        config = PromptConfig(
+            enabled=True,
+            library_path=Path("/tmp/test_nova_prompts"),
+            validate_prompts=True,
+        )
+        self.manager = PromptManager(config)
+
+    def test_builtin_templates_loaded(self):
+        """Test that built-in templates are loaded"""
+        templates = self.manager.list_templates()
+        assert len(templates) > 0
+
+        # Check that essential templates exist
+        essential_names = ["email", "code-review", "summarize", "explain", "analyze"]
+        template_names = [t.name for t in templates]
+
+        for name in essential_names:
+            assert name in template_names
+
+    def test_get_template(self):
+        """Test getting a specific template"""
+        template = self.manager.get_template("email")
+        assert template is not None
+        assert template.name == "email"
+        assert template.title == "Professional Email"
+
+    def test_render_template(self):
+        """Test rendering a template"""
+        variables = {
+            "purpose": "requesting a meeting",
+            "recipient": "John Doe",
+            "tone": "professional",
+        }
+
+        result = self.manager.render_template("email", variables)
+        assert result is not None
+        assert "requesting a meeting" in result
+        assert "John Doe" in result
+        assert "professional" in result
+
+    def test_search_templates(self):
+        """Test searching templates"""
+        results = self.manager.search_templates("code")
+        assert len(results) > 0
+
+        # Should find code-review template
+        names = [t.name for t in results]
+        assert "code-review" in names
+
+    def test_system_prompt_direct(self):
+        """Test system prompt with direct string"""
+        context = {"user_name": "Alice"}
+        result = self.manager.get_system_prompt(
+            "You are a helpful assistant for ${user_name}", context
+        )
+
+        assert "Alice" in result
+
+    def test_system_prompt_template_reference(self):
+        """Test system prompt with template reference"""
+        # This should fall back to direct prompt since "nonexistent" isn't a template
+        result = self.manager.get_system_prompt("nonexistent", {})
+        assert result == "nonexistent"  # Returns original string when not a template
+
+        # Test with actual template reference
+        result = self.manager.get_system_prompt("email", {"purpose": "test"})
+        assert result is not None
+        assert len(result) > 0
+
+
+if __name__ == "__main__":
+    pytest.main([__file__])
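The validator exercised by these tests enforces the length limit and dangerous-pattern screening called for in the coding guidelines. A minimal sketch of that style of check follows — the `MAX_TEMPLATE_LENGTH` constant and the specific regex patterns here are illustrative assumptions, not Nova's actual list:

```python
import re

# Assumed to mirror PromptConfig.max_prompt_length; illustrative value
MAX_TEMPLATE_LENGTH = 8192

# Hypothetical screening patterns -- a real validator would maintain its own list
DANGEROUS_PATTERNS = [r"<script\b", r"\beval\s*\(", r"\bexec\s*\("]


def check_template(text: str) -> list[str]:
    """Return a list of validation errors; an empty list means the template passed."""
    errors: list[str] = []
    if len(text) > MAX_TEMPLATE_LENGTH:
        errors.append(f"Template too long: {len(text)} > {MAX_TEMPLATE_LENGTH}")
    for pattern in DANGEROUS_PATTERNS:
        if re.search(pattern, text, re.IGNORECASE):
            errors.append(f"Template contains dangerous pattern: {pattern}")
    return errors
```

Running the length check and every pattern check before returning lets the CLI report all problems at once, which matches the list-of-errors shape of `ValidationResult.errors`.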