Conversation
Pull request overview
Adds RBAC-backed authentication/authorization and expands integrations (notably Ansible), along with DB migrations and security hardening across the backend.
Changes:
- Introduces RBAC entities (users/groups/roles/permissions), auth + RBAC middleware, and related API routes.
- Adds Ansible integration (inventory/commands/packages/playbooks) and records `execution_tool` in execution history.
- Adds database migration runner + extensive DB/schema/tests, plus security middleware (Helmet, rate limiting, input sanitization) and monitoring endpoints.
Reviewed changes
Copilot reviewed 83 out of 242 changed files in this pull request and generated 14 comments.
Summary per file:
| File | Description |
|---|---|
| backend/test/debug-inventory-route.test.ts | Updates Bolt import paths after integrations refactor. |
| backend/test/database/rbac-schema.test.ts | Verifies RBAC schema tables, constraints, and indexes. |
| backend/test/database/migration-integration.test.ts | Tests migrations + seed data availability after DB init. |
| backend/test/database/index-verification.test.ts | Verifies required DB indexes + pragmas (WAL, FK, etc.). |
| backend/test/database/MigrationRunner.test.ts | Adds unit tests for the new MigrationRunner behavior. |
| backend/test/database/002_seed_rbac_data.test.ts | Validates RBAC seed migration (roles/permissions/admin user). |
| backend/test/config/ConfigService.test.ts | Adds allowlist pragmas for env var test values. |
| backend/test/bolt/BoltService.test.ts | Updates Bolt import paths + allowlist pragmas for strings. |
| backend/test/UserService.password.test.ts | Adds password validation integration tests for create/update flows. |
| backend/src/validation/rbacSchemas.ts | Centralizes RBAC Zod validation schemas and inferred types. |
| backend/src/utils/passwordValidation.ts | Adds a password complexity validator utility. |
| backend/src/utils/errorHandling.ts | Expands structured error codes + adds auth/RBAC/db helpers. |
| backend/src/services/RoleService.ts | Adds RoleService for CRUD + role-permission assignments. |
| backend/src/services/PuppetRunHistoryService.ts | Adjusts date range behavior and adds logging for aggregation. |
| backend/src/services/PerformanceMonitor.ts | Adds in-memory perf metrics for auth/authz with slow query tracking. |
| backend/src/services/IntegrationColorService.ts | Adds Ansible color config. |
| backend/src/services/GroupService.ts | Adds GroupService for CRUD + group-role assignments. |
| backend/src/server.ts | Wires up RBAC/auth/security middleware, Ansible plugin, and new routes. |
| backend/src/routes/tasks.ts | Updates Bolt types imports to integrations path. |
| backend/src/routes/puppet.ts | Updates Bolt service/types imports to integrations path. |
| backend/src/routes/playbooks.ts | Adds endpoint to execute Ansible playbooks via IntegrationManager. |
| backend/src/routes/permissions.ts | Adds permissions CRUD/list endpoints with auth + RBAC checks. |
| backend/src/routes/packages.ts | Adds tool selection (bolt/ansible) and stores executionTool. |
| backend/src/routes/monitoring.ts | Adds endpoints to retrieve/reset performance metrics. |
| backend/src/routes/inventory.ts | Updates Bolt service/types imports to integrations path. |
| backend/src/routes/integrations/status.ts | Adds explicit “ansible not configured” status entry. |
| backend/src/routes/integrations.ts | Adds auth+RBAC protection for PuppetDB routes when DB is available. |
| backend/src/routes/hiera.ts | Includes classes in Hiera node response. |
| backend/src/routes/facts.ts | Updates Bolt types imports to integrations path. |
| backend/src/routes/executions.ts | Preserves executionTool when creating re-executions. |
| backend/src/routes/commands.ts | Adds tool selection (bolt/ansible) and stores executionTool. |
| backend/src/middleware/securityMiddleware.ts | Adds Helmet CSP, rate limiting, input sanitization, extra headers. |
| backend/src/middleware/rbacMiddleware.ts | Adds permission-checking middleware with audit logging. |
| backend/src/middleware/authMiddleware.ts | Adds JWT verification middleware attaching req.user. |
| backend/src/integrations/types.ts | Points shared types to integrations/bolt types location. |
| backend/src/integrations/puppetserver/PuppetserverService.ts | Fixes type imports to integrations/bolt types. |
| backend/src/integrations/puppetdb/PuppetDBService.ts | Fixes type imports to integrations/bolt types. |
| backend/src/integrations/hiera/types.ts | Adds optional classes to Hiera node data/response types. |
| backend/src/integrations/hiera/HieraService.ts | Uses PuppetDB resources for class extraction; exposes classes. |
| backend/src/integrations/hiera/HieraPlugin.ts | Fixes type imports to integrations/bolt types. |
| backend/src/integrations/bolt/BoltService.ts | Fixes LoggerService import path. |
| backend/src/integrations/bolt/BoltPlugin.ts | Points BoltService/types imports to integrations/bolt implementation. |
| backend/src/integrations/ansible/AnsibleService.ts | Adds Ansible command/package/playbook execution + inventory parsing. |
| backend/src/integrations/ansible/AnsiblePlugin.ts | Adds Ansible plugin wiring, health checks, capabilities, facts. |
| backend/src/integrations/NodeLinkingService.ts | Fixes Node type import to integrations/bolt types. |
| backend/src/integrations/IntegrationManager.ts | Fixes shared types import to integrations/bolt types. |
| backend/src/database/schema.sql | Adds execution_tool column to executions table. |
| backend/src/database/rbac-schema.sql | Adds full RBAC schema + indexes (incl. composite indexes). |
| backend/src/database/migrations/004_audit_logging.sql | Adds audit logging tables + indexes (migration-based). |
| backend/src/database/migrations/003_failed_login_attempts.sql | Adds brute-force tables (failed attempts/lockouts). |
| backend/src/database/migrations/002_seed_rbac_data.sql | Seeds permissions/roles + default admin user and role assignment. |
| backend/src/database/migrations/001_initial_rbac.sql | Adds baseline RBAC schema as a migration. |
| backend/src/database/migrations.sql | Adds legacy ALTERs for executions table (incl. execution_tool). |
| backend/src/database/audit-schema.sql | Adds audit logging schema file (non-migration). |
| backend/src/database/MigrationRunner.ts | Introduces filesystem-based migration runner with tracking table. |
| backend/src/database/ExecutionRepository.ts | Adds executionTool persistence and mapping from DB rows. |
| backend/src/database/DatabaseService.ts | Enables WAL/FK pragmas, runs RBAC schema, switches to MigrationRunner. |
| backend/src/config/schema.ts | Adds Ansible integration config schema. |
| backend/src/config/ConfigService.ts | Parses and exposes Ansible config from env. |
| backend/package.json | Bumps version and adds bcrypt/helmet/rate-limit dependencies. |
| backend/.env.example | Documents Ansible env vars in backend env example. |
| README.md | Documents Ansible integration and bumps version history notes. |
| .kiro/todo/rbac-test-failures.md | Tracks RBAC test assertion mismatches for structured errors. |
| .kiro/todo/default-user-permissions-fix.md | Notes default Viewer-role assignment requirement for new users. |
| .kiro/todo/auth-test-database-lifecycle.md | Documents test DB lifecycle issues causing SQLITE_MISUSE failures. |
| .kiro/specs/rbac-authorization/requirements.md | Adds RBAC requirements specification. |
| .kiro/specs/rbac-authorization/.config.kiro | Adds Kiro spec config metadata. |
| .env.example | Documents Ansible env vars in repo-root env example. |
Comments suppressed due to low confidence (1)
backend/src/routes/packages.ts:1
- Making `taskName` optional while defaulting `selectedTool` to `bolt` when Bolt is available creates a confusing API: clients can omit `taskName` and still pass validation, but then immediately fail with `INVALID_TASK` when Bolt is selected by default. Suggested fix: enforce conditional validation (e.g., if `tool` is `bolt` OR Bolt is selected, then `taskName` must be present), or change default tool selection when `taskName` is missing (e.g., prefer Ansible if configured), or provide a sensible default task mapping for Bolt.
```ts
import { Router, type Request, type Response } from "express";
```
```ts
// Read and execute RBAC schema file
const rbacSchemaPath = join(__dirname, "rbac-schema.sql");
if (existsSync(rbacSchemaPath)) {
  const rbacSchema = readFileSync(rbacSchemaPath, "utf-8");

  // Split RBAC schema into statements
  const rbacStatements = rbacSchema
    .split(";")
    .map((s) => s.trim())
    .filter((s) => s.length > 0);

  // Execute each RBAC statement
  for (const statement of rbacStatements) {
    try {
      await this.exec(statement);
    } catch (error) {
      // Ignore "duplicate column" errors from ALTER TABLE (migration already applied)
      const errorMessage = error instanceof Error ? error.message : "";
      if (!errorMessage.includes("duplicate column")) {
        throw error;
      }
      // Migration already applied, continue
    }
  }
}

// Run migrations
await this.runMigrations();
```
Executing `rbac-schema.sql` directly in addition to running migration files undermines migration tracking (schema changes can be applied without being recorded in the migrations table). More importantly, `runMigrations()` has been switched to use the directory-based `MigrationRunner`, so the legacy `backend/src/database/migrations.sql` file is no longer applied, meaning existing databases won't receive ALTER TABLE changes like `execution_tool`, and inserts/updates in `ExecutionRepository` can fail with `no such column: execution_tool`. Suggested fix: move all schema evolution exclusively into versioned migration files (e.g., add a new `NNN_add_execution_tool.sql`), stop executing `rbac-schema.sql` at runtime, and rely on `MigrationRunner` as the single source of truth.
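The "single source of truth" idea above can be sketched as a directory-driven pending-migrations computation. This is a minimal illustration, not the PR's actual `MigrationRunner` API; `pendingMigrations` and the `005_*` filename are hypothetical.

```typescript
// Sketch: derive pending migrations from directory filenames vs. applied IDs.
function pendingMigrations(files: string[], appliedIds: Set<string>): string[] {
  return files
    .filter((f) => /^\d{3}_.+\.sql$/.test(f)) // only NNN_name.sql migration files
    .sort() // lexicographic order equals numeric order for zero-padded IDs
    .filter((f) => !appliedIds.has(f.slice(0, 3))); // skip IDs already recorded
}

const files = [
  "001_initial_rbac.sql",
  "002_seed_rbac_data.sql",
  "003_failed_login_attempts.sql",
  "004_audit_logging.sql",
  "005_add_execution_tool.sql", // hypothetical migration replacing the legacy migrations.sql ALTERs
];
console.log(pendingMigrations(files, new Set(["001", "002"]))); // pending: 003, 004, 005
```

With this model, moving the `execution_tool` ALTER into a numbered file means it is tracked and applied exactly once, even on existing databases.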
```ts
// Should have applied both migrations
expect(status.applied).toHaveLength(2);
expect(status.applied[0].id).toBe('001');
expect(status.applied[1].id).toBe('002');
expect(status.pending).toHaveLength(0);
```
This test hard-codes the applied migration count/IDs to 001 and 002, but the PR adds additional migration files (003_failed_login_attempts.sql, 004_audit_logging.sql). With MigrationRunner scanning the migrations directory, status.applied will include these and this test will fail. Update expectations to match the full set of migrations (or assert that at least 001 and 002 are applied and pending is empty).
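One way to keep the assertion stable as migrations accumulate is to assert a baseline subset instead of pinning the total count. The `MigrationStatus` shape below is an assumption mirroring the status result described above, not the PR's actual types.

```typescript
// Hypothetical status shape mirroring the MigrationRunner result.
interface MigrationStatus {
  applied: { id: string }[];
  pending: { id: string }[];
}

// Assert the known baseline migrations are applied, without pinning the total count.
function assertBaselineApplied(status: MigrationStatus, baseline: string[]): void {
  const appliedIds = new Set(status.applied.map((m) => m.id));
  for (const id of baseline) {
    if (!appliedIds.has(id)) throw new Error(`migration ${id} not applied`);
  }
  if (status.pending.length !== 0) throw new Error("pending migrations remain");
}

// Stays green when 003/004 (or later migrations) are added:
assertBaselineApplied(
  { applied: [{ id: "001" }, { id: "002" }, { id: "003" }, { id: "004" }], pending: [] },
  ["001", "002"],
);
```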
```ts
app.use("/api/roles", authMiddleware, rateLimitMiddleware, createRolesRouter(databaseService));

// Permission management routes
app.use("/api/permissions", authMiddleware, rateLimitMiddleware, createPermissionsRouter(databaseService));
```
The server applies authMiddleware (and rate limit) to /api/permissions, but createPermissionsRouter() also applies its own authMiddleware + rbacMiddleware internally. This duplicates JWT verification per request and makes the effective security policy harder to reason about (and can diverge if different JWT secrets/DB handles are used). Prefer one composition point: either (a) make the router "pure" and mount it behind auth/RBAC in server.ts, or (b) keep auth/RBAC inside the router and mount it without extra middleware here.
```diff
-app.use("/api/permissions", authMiddleware, rateLimitMiddleware, createPermissionsRouter(databaseService));
+app.use("/api/permissions", rateLimitMiddleware, createPermissionsRouter(databaseService));
```
backend/src/routes/permissions.ts (Outdated)
```ts
authMiddleware,
rbacMiddleware("permissions", "write"),
asyncHandler(async (req: Request, res: Response): Promise<void> => {
  logger.info("Processing create permission request", {
```
This router embeds authMiddleware + rbacMiddleware, but server.ts already mounts /api/permissions behind authMiddleware and rate limiting. Beyond duplication, this pattern is inconsistent with other routes that rely on server-level middleware composition. Consider removing auth/RBAC from inside this router and enforcing it where the router is mounted, or vice-versa (but not both).
```diff
-authMiddleware,
 rbacMiddleware("permissions", "write"),
 asyncHandler(async (req: Request, res: Response): Promise<void> => {
   logger.info("Processing create permission request", {
```
```ts
  expertMode: z.boolean().optional().default(false),
  tool: z.enum(["bolt", "ansible"]).optional(),
});
```
Making `taskName` optional while defaulting `selectedTool` to `bolt` when Bolt is available creates a confusing API: clients can omit `taskName` and still pass validation, but then immediately fail with `INVALID_TASK` when Bolt is selected by default. Suggested fix: enforce conditional validation (e.g., if `tool` is `bolt` OR Bolt is selected, then `taskName` must be present), or change default tool selection when `taskName` is missing (e.g., prefer Ansible if configured), or provide a sensible default task mapping for Bolt.
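The conditional rule can be sketched as a standalone check mirroring the route's defaulting behavior. `SubmitRequest` and `validateToolSelection` are illustrative names, not the PR's actual code.

```typescript
// Sketch: require taskName whenever the effective tool resolves to bolt.
interface SubmitRequest {
  tool?: "bolt" | "ansible";
  taskName?: string;
}

function validateToolSelection(req: SubmitRequest, boltAvailable: boolean): string | null {
  // Mirror the route's defaulting: fall back to bolt when it is available.
  const effectiveTool = req.tool ?? (boltAvailable ? "bolt" : "ansible");
  if (effectiveTool === "bolt" && !req.taskName) {
    return "taskName is required when the selected tool is bolt";
  }
  return null; // request is valid
}

console.log(validateToolSelection({}, true)); // rejected: bolt selected by default, no taskName
console.log(validateToolSelection({ tool: "ansible" }, true)); // → null
```

The same predicate could be expressed inside the Zod schema (e.g. via a refinement), keeping validation and defaulting in one place.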
```ts
  table: idx.tbl_name,
}));

console.log('Composite indexes:', indexInfo);
```
These console.log calls will add noise to test output in CI and make failures harder to spot. Consider removing them, gating them behind an env flag (e.g., DEBUG_TESTS), or using the test runner’s debugging facilities.
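The env-flag option can be sketched in a few lines; `DEBUG_TESTS` is the illustrative flag name suggested above, and `debugLog` is a hypothetical helper.

```typescript
// Sketch: gate test debug output behind an env flag so CI logs stay quiet by default.
function debugLog(...args: unknown[]): void {
  if (process.env.DEBUG_TESTS === "1") {
    console.log(...args);
  }
}

debugLog("Composite indexes:", { name: "idx_example", table: "example" }); // silent unless DEBUG_TESTS=1
```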
```ts
});

expect(journalMode.toLowerCase()).toBe('wal');
console.log('✓ WAL mode is enabled');
```
These console.log calls will add noise to test output in CI and make failures harder to spot. Consider removing them, gating them behind an env flag (e.g., DEBUG_TESTS), or using the test runner’s debugging facilities.
```ts
});

expect(foreignKeys).toBe(1);
console.log('✓ Foreign keys are enabled');
```
These console.log calls will add noise to test output in CI and make failures harder to spot. Consider removing them, gating them behind an env flag (e.g., DEBUG_TESTS), or using the test runner’s debugging facilities.
```ts
  }),
]);

console.log('Performance pragmas:', {
```
These console.log calls will add noise to test output in CI and make failures harder to spot. Consider removing them, gating them behind an env flag (e.g., DEBUG_TESTS), or using the test runner’s debugging facilities.
```ts
const CreatePermissionSchema = z.object({
  resource: z.string().min(3).max(100).regex(/^[a-z0-9_]+$/, "Resource must be lowercase alphanumeric with underscores"),
  action: z.string().min(3).max(50).regex(/^[a-z0-9_]+$/, "Action must be lowercase alphanumeric with underscores"),
  description: z.string().max(500),
}).strict();
```
This duplicates validation logic that was added as centralized Zod schemas in backend/src/validation/rbacSchemas.ts (resourceSchema, actionSchema, createPermissionSchema). To avoid drift (e.g., regex/length changes made in one place but not the other), consider importing and reusing the shared schema here.
```diff
 const startDate = new Date();
-startDate.setDate(startDate.getDate() - days);
+startDate.setDate(startDate.getDate() - (days - 1)); // Include today in the count
```
The comment is misleading. The calculation `startDate.getDate() - (days - 1)` actually goes back `days - 1` days from today, not `days` days. For example, if `days = 7`, this goes back 6 days, creating a 7-day range including today. Update the comment to clarify: `// Go back (days - 1) to create a range of 'days' total days including today`
```diff
-startDate.setDate(startDate.getDate() - (days - 1)); // Include today in the count
+startDate.setDate(startDate.getDate() - (days - 1)); // Go back (days - 1) to create a range of 'days' total days including today
```
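A quick check of the arithmetic behind that comment, extracted into a standalone helper (`windowStart` is an illustrative name):

```typescript
// Going back (days - 1) from today produces a window of exactly `days`
// calendar days, today included.
function windowStart(today: Date, days: number): Date {
  const start = new Date(today);
  start.setDate(start.getDate() - (days - 1)); // e.g. days = 7 → back 6 days
  start.setHours(0, 0, 0, 0); // beginning of that day
  return start;
}

// June 15 with days = 7 yields June 9..June 15: seven days inclusive.
const start = windowStart(new Date(2024, 5, 15), 7);
console.log(start.getMonth() + 1, start.getDate()); // → 6 9
```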
```ts
// Set start date to beginning of that day
startDate.setHours(0, 0, 0, 0);

this.log(`Date range: ${startDate.toISOString()} to ${endDate.toISOString()}`);
```
This debug log statement uses template literals with String() coercion unnecessarily. The toISOString() method already returns a string, so the implicit string coercion in the template literal is sufficient. However, this is a minor style issue and the code functions correctly.
```ts
// Convert counts to RunHistoryData format
const history = this.convertCountsToHistory(counts, startDate, endDate);

this.log(`Converted to ${String(history.length)} days of history`);
```
The explicit `String()` conversion is unnecessary in template literals. Template literals automatically convert expressions to strings. Change to: `` this.log(`Converted to ${history.length} days of history`) ``
```diff
-this.log(`Converted to ${String(history.length)} days of history`);
+this.log(`Converted to ${history.length} days of history`);
```
```ts
  databaseService: DatabaseService
): Router {
  const router = Router();
  const jwtSecret = process.env.JWT_SECRET;
```
`JWT_SECRET` is read from the environment but not validated. If undefined or empty, the `AuthenticationService` will fail at runtime. Add validation: `if (!jwtSecret) throw new Error('JWT_SECRET environment variable is required');` before passing it to `AuthenticationService`.
```diff
 const jwtSecret = process.env.JWT_SECRET;
+if (!jwtSecret) {
+  throw new Error("JWT_SECRET environment variable is required");
+}
```
```ts
// Limit string length to prevent memory exhaustion
const MAX_STRING_LENGTH = 10000;
if (sanitized.length > MAX_STRING_LENGTH) {
  sanitized = sanitized.substring(0, MAX_STRING_LENGTH);
```
Silently truncating strings to 10KB could cause data loss or unexpected behavior for legitimate large inputs (e.g., JSON payloads, playbook content). Consider logging when truncation occurs or returning a 413 Payload Too Large error instead: if (sanitized.length > MAX_STRING_LENGTH) { throw new Error('Input exceeds maximum length'); }
```diff
-sanitized = sanitized.substring(0, MAX_STRING_LENGTH);
+// Do not silently truncate, signal a clear error instead
+throw new Error("Input exceeds maximum allowed string length");
```
```ts
 *
 * Requirements: 5.1, 6.6, 16.1, 16.5
 */
export function createAuthMiddleware(db: Database, jwtSecret?: string) {
```
The jwtSecret parameter is optional, but AuthenticationService requires it for signing/verifying tokens. If undefined, token operations will fail at runtime. Make jwtSecret required or add validation: if (!jwtSecret) throw new Error('JWT secret is required for authentication');
```diff
 export function createAuthMiddleware(db: Database, jwtSecret?: string) {
+  if (!jwtSecret) {
+    throw new Error("JWT secret is required for authentication");
+  }
```
```ts
  databaseService: DatabaseService
): Router {
  const router = Router();
  const jwtSecret = process.env.JWT_SECRET;
```
There was a problem hiding this comment.
JWT_SECRET is read from environment variables but used to create authMiddleware which is then passed to createAuthMiddleware again on lines 40-41. This creates authMiddleware twice with potentially undefined jwtSecret. Either remove the local authMiddleware creation (lines 40-41) or validate jwtSecret before use.
```diff
 const jwtSecret = process.env.JWT_SECRET;
+if (!jwtSecret) {
+  logger.error("JWT_SECRET environment variable is not set. Cannot initialize permissions router authentication.");
+  throw new Error("JWT_SECRET environment variable must be set to initialize authentication middleware.");
+}
```
```ts
    console.warn('Failed to enable WAL mode:', walErr.message);
  }
});

// Set performance optimizations
db.run('PRAGMA synchronous = NORMAL;'); // Balance between safety and speed
db.run('PRAGMA cache_size = -64000;'); // 64MB cache
db.run('PRAGMA temp_store = MEMORY;'); // Use memory for temp tables
db.run('PRAGMA mmap_size = 268435456;'); // 256MB memory-mapped I/O
db.run('PRAGMA page_size = 4096;'); // Optimal page size

// Enable foreign keys
db.run('PRAGMA foreign_keys = ON;', (fkErr) => {
  if (fkErr) {
    reject(new Error(`Failed to enable foreign keys: ${fkErr.message}`));
  } else {
    resolve(db);
  }
```
WAL mode enablement errors are logged to `console.warn` but don't fail database initialization. If WAL mode is critical for production (indicated by the comment 'better concurrency'), consider rejecting the promise instead: `` if (walErr) reject(new Error(`Failed to enable WAL mode: ${walErr.message}`)); ``
```diff
-      console.warn('Failed to enable WAL mode:', walErr.message);
-    }
-  });
-
-  // Set performance optimizations
+      reject(new Error(`Failed to enable WAL mode: ${walErr.message}`));
+      return;
+    }
+
+    // Set performance optimizations (only after WAL is successfully enabled)
     db.run('PRAGMA synchronous = NORMAL;'); // Balance between safety and speed
     db.run('PRAGMA cache_size = -64000;'); // 64MB cache
     db.run('PRAGMA temp_store = MEMORY;'); // Use memory for temp tables
     db.run('PRAGMA mmap_size = 268435456;'); // 256MB memory-mapped I/O
     db.run('PRAGMA page_size = 4096;'); // Optimal page size

     // Enable foreign keys
     db.run('PRAGMA foreign_keys = ON;', (fkErr) => {
       if (fkErr) {
         reject(new Error(`Failed to enable foreign keys: ${fkErr.message}`));
       } else {
         resolve(db);
       }
     });
```
```ts
if (classes.length > 0) {
  const exampleClasses = classes.slice(0, 5).join(", ");
  this.log(`Example classes: ${exampleClasses}`);
  this.log(`All classes: ${classes.join(", ")}`);
```
Logging all classes with `join(', ')` could produce very large log messages if a node has many classes (100+). This could impact performance and log storage. Consider limiting the output: `` this.log(`Classes (${classes.length}): ${classes.slice(0, 10).join(', ')}${classes.length > 10 ? '...' : ''}`) ``
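The capped-preview idea can be factored into a small helper, a sketch only (`previewList` is an illustrative name):

```typescript
// Sketch: cap list logging at a small preview instead of joining every entry.
function previewList(items: string[], max = 10): string {
  const head = items.slice(0, max).join(", ");
  return items.length > max ? `${head} … (+${items.length - max} more)` : head;
}

const classes = Array.from({ length: 25 }, (_, i) => `profile::class_${i}`);
console.log(`Classes (${classes.length}): ${previewList(classes)}`);
```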
```ts
"-a",
JSON.stringify(moduleArgs),
```
Passing `JSON.stringify(moduleArgs)` as a command-line argument may cause shell escaping issues. Ansible's `-a` flag expects a string of `key=value` pairs, not JSON. For the package module, use `` Object.entries(moduleArgs).map(([k, v]) => `${k}=${v}`).join(' ') `` instead of `JSON.stringify`.
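A sketch of that formatting, with one assumption beyond the suggestion above: values containing whitespace are single-quoted, which is a guess about what the target module accepts, not a verified Ansible behavior.

```typescript
// Sketch: format module args as Ansible's key=value string for the -a flag.
function toModuleArgs(args: Record<string, string | number | boolean>): string {
  return Object.entries(args)
    .map(([k, v]) => {
      const s = String(v);
      // Assumption: quote values with whitespace; verify against the target module.
      return /\s/.test(s) ? `${k}='${s}'` : `${k}=${s}`;
    })
    .join(" ");
}

console.log(toModuleArgs({ name: "nginx", state: "present" })); // → name=nginx state=present
```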
- Add SetupService to manage application configuration and setup state
- Create setup route with initial admin account creation endpoint
- Add SetupPage frontend component for guided first-time configuration
- Implement self-registration toggle in config table with default disabled state
- Remove hardcoded default admin user from database migrations
- Add config table to RBAC schema for storing application settings
- Update auth route to check self-registration permission before allowing registration
- Add JWT_SECRET configuration documentation to .env.example files
- Update RegisterPage to handle self-registration disabled scenario
- Add comprehensive setup validation and error handling
- Enable secure first-time setup flow with admin account creation

- Update backend package.json version from 0.7.0 to 0.8.0
- Update frontend package.json version from 0.7.0 to 0.8.0
- Update root package.json version from 0.7.0 to 0.8.0
- Update Navigation component to display v0.8.0 in UI

- Add CreateUserSchema validation for user creation endpoint
- Implement POST /api/users endpoint with duplicate username/email handling
- Add isActive field support to CreateUserDTO in UserService
- Create UserCreateDialog component for user creation UI
- Create GroupCreateDialog component for group creation UI
- Update Router component to support route configuration with auth requirements
- Add route-level authentication and admin permission checks
- Update App.svelte routes to include requiresAuth and requiresAdmin flags
- Enhance GroupDetailDialog with improved form handling
- Update UserManagementPage and GroupManagementPage to integrate new dialogs
- Add comprehensive error handling and logging for user creation operations

…nce metrics
- Add jsonwebtoken and @types/jsonwebtoken dependencies for authentication
- Replace unused request parameters with underscore prefix in monitoring and setup routes
- Expand PerformanceMonitor return type with explicit metric structure instead of ReturnType utility
- Convert percentile method to static for better encapsulation and reusability
- Improve code clarity and linting compliance across monitoring endpoints
```ts
console.warn(
  `[RBAC] Authorization denied - User: ${req.user.username} (${req.user.userId}), ` +
  `Resource: ${resource}, Action: ${action}, ` +
  `Path: ${req.method} ${req.path}, ` +
  `IP: ${req.ip || req.socket.remoteAddress}, ` +
  `Timestamp: ${new Date().toISOString()}`
);
```
The project uses a structured LoggerService for logging (as seen throughout the codebase), but this middleware uses console.warn and console.error directly. This bypasses the centralized logging system and its structured log format. Replace these with a LoggerService instance, passing the component, operation, and metadata fields in the same pattern used everywhere else.
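A hedged sketch of the structured alternative. The `Logger` interface here is a stand-in for the project's `LoggerService`, whose real API is not shown in this review; the field names follow the component/operation/metadata pattern described above.

```typescript
// Stand-in for the project's LoggerService (actual signature is an assumption).
interface Logger {
  warn(message: string, meta: Record<string, unknown>): void;
}

// Structured fields instead of one concatenated console.warn string.
function logAuthzDenied(
  logger: Logger,
  user: { username: string; userId: string },
  resource: string,
  action: string,
  method: string,
  path: string,
  ip?: string,
): void {
  logger.warn("Authorization denied", {
    component: "rbacMiddleware",
    operation: "checkPermission",
    user: user.username,
    userId: user.userId,
    resource,
    action,
    path: `${method} ${path}`,
    ip,
    timestamp: new Date().toISOString(),
  });
}
```

Structured metadata keeps the audit trail machine-parseable, which a concatenated string sacrifices.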
```ts
console.error(
  `[RBAC] Error checking permissions - User: ${req.user?.userId}, ` +
  `Resource: ${resource}, Action: ${action}, ` +
  `Error: ${error instanceof Error ? error.message : String(error)}`
);
```
Same issue as above — console.error should be replaced with the structured LoggerService. The createRbacMiddleware factory already receives db; a LoggerService instance can be created or injected similarly.
```ts
if (requested && req.user) {
  // When authenticated, only allow admin users to enable expert mode
  const isAdmin = req.user.roles?.includes('admin') ?? false;
  req.expertMode = isAdmin;
} else {
  // No auth context yet (middleware runs before auth) — defer final check.
  // The value will be re-evaluated if needed after auth middleware runs.
  // For unauthenticated endpoints (health, setup), allow expert mode.
  req.expertMode = requested;
}

// Extract correlation ID if present
const correlationIdHeader = req.headers['x-correlation-id'];
```
The comment says "The value will be re-evaluated if needed after auth middleware runs," but there is no actual re-evaluation logic. If expertModeMiddleware runs before authMiddleware (as indicated by "middleware runs before auth"), then when authMiddleware populates req.user, req.expertMode is already set to requested (i.e., true) and is never corrected. A non-admin authenticated user who sends the expert mode header will keep req.expertMode = true since the re-check never happens. Either ensure this middleware runs after auth middleware, or add post-auth re-evaluation logic.
```diff
-if (requested && req.user) {
-  // When authenticated, only allow admin users to enable expert mode
-  const isAdmin = req.user.roles?.includes('admin') ?? false;
-  req.expertMode = isAdmin;
-} else {
-  // No auth context yet (middleware runs before auth) — defer final check.
-  // The value will be re-evaluated if needed after auth middleware runs.
-  // For unauthenticated endpoints (health, setup), allow expert mode.
-  req.expertMode = requested;
-}
+// Define expertMode as a dynamic property so it can be safely
+// re-evaluated after auth middleware populates req.user.
+Object.defineProperty(req, 'expertMode', {
+  configurable: true,
+  enumerable: true,
+  get() {
+    // If expert mode was not requested at all, always return false
+    if (!requested) {
+      return false;
+    }
+    // When no auth context is present, allow expert mode
+    // for backward-compatible unauthenticated usage.
+    if (!req.user) {
+      return true;
+    }
+    // When authenticated, only allow admin users to enable expert mode
+    const isAdmin = req.user.roles?.includes('admin') ?? false;
+    return isAdmin;
+  },
+});

 // Extract correlation ID if present
 const correlationIdHeader = req.headers['x-correlation-id'];
```
```ts
 * Get groups from Bolt inventory
 *
 * @returns Array of node groups
 * @todo Implement group extraction from Bolt inventory.yaml (Task 2)
 */
/**
```
There are two consecutive JSDoc comment blocks for the same getGroups() method — one is a leftover stub @todo comment and one is the actual implementation docstring. The first one (lines 369-374) should be removed.
```diff
- * Get groups from Bolt inventory
- *
- * @returns Array of node groups
- * @todo Implement group extraction from Bolt inventory.yaml (Task 2)
- */
 /**
```
```ts
 * configuration according to the SSH integration requirements.
 */

import { SSHConfig, SSHErrorType } from './types';
```
SSHErrorType is imported but never used in this file. Only SSHConfig is used. The unused import should be removed.
```diff
-import { SSHConfig, SSHErrorType } from './types';
+import { SSHConfig } from './types';
```
backend/.env.example (Outdated)
```
# PuppetDB cache configuration
# PUPPETDB_CACHE_TTL=300000
# Allowd commands
```
"Allowd" is a misspelling of "Allowed".
Dockerfile (Outdated)
```dockerfile
LABEL org.opencontainers.image.title="Pabawi"
LABEL org.opencontainers.image.description="Web interface for Bolt automation tool"
LABEL org.opencontainers.image.version="0.4.0"
LABEL org.opencontainers.image.description="Puppet Ansible Bolt Awesaome Web Interface"
```
"Awesaome" is a misspelling of "Awesome".
```diff
-LABEL org.opencontainers.image.description="Puppet Ansible Bolt Awesaome Web Interface"
+LABEL org.opencontainers.image.description="Puppet Ansible Bolt Awesome Web Interface"
```
README.md (Outdated)
```markdown
### Version History

- **v0.8.0**: RBC authentication. SSH integrations. Inventory groups
```
"RBC" should be "RBAC" (Role-Based Access Control).
| - **v0.8.0**: RBC authentication. SSH integrations. Inventory groups | |
| - **v0.8.0**: RBAC authentication. SSH integrations. Inventory groups |
| const client = this.client; | ||
| if (!client) { | ||
| complete({ error: 'client not initialized' }); | ||
| return []; | ||
| } |
This null-check for this.client on line 649-653 is redundant — the identical check for !this.initialized || !this.client was already performed on lines 634-637, and if that passes, this.client is guaranteed to be non-null here. The extra check and intermediate client variable can be removed.
| function sanitizeObject(obj: unknown, depth = 0): unknown { | ||
| // Prevent deep nesting attacks | ||
| if (depth > 10) { | ||
| return obj; |
When the maximum depth is exceeded, the function silently returns the unsanitized object obj without logging a warning. Deeply nested inputs — which could be intentional attack payloads — will pass through unprocessed. Consider at minimum logging a warning, or truncating/rejecting the input rather than returning it unchanged.
| return obj; | |
| // Do not return unsanitized data when depth limit is exceeded. | |
| // Log a warning and truncate the nested value instead. | |
| // eslint-disable-next-line no-console | |
| console.warn("sanitizeObject: maximum depth exceeded, truncating nested input"); | |
| return null; |
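A minimal sketch of the truncating behavior, with the sanitizer injected as a parameter. The `Sanitizer` type and function shape here are illustrative assumptions, not the project's actual middleware API:

```typescript
// Sketch: depth-limited recursive sanitizer that truncates to null past the
// limit instead of returning raw input. `Sanitizer` stands in for the
// project's string-sanitizing function.
type Sanitizer = (value: string) => string;

function sanitizeObject(
  obj: unknown,
  sanitize: Sanitizer,
  depth = 0,
  maxDepth = 10,
): unknown {
  if (depth > maxDepth) {
    // Truncate rather than pass unsanitized data through; a logger
    // warning would be emitted here in the real middleware.
    return null;
  }
  if (typeof obj === "string") {
    return sanitize(obj);
  }
  if (Array.isArray(obj)) {
    return obj.map((v) => sanitizeObject(v, sanitize, depth + 1, maxDepth));
  }
  if (obj !== null && typeof obj === "object") {
    const out: Record<string, unknown> = {};
    for (const [key, value] of Object.entries(obj)) {
      out[key] = sanitizeObject(value, sanitize, depth + 1, maxDepth);
    }
    return out;
  }
  return obj;
}
```

Truncating to `null` keeps the request processable while guaranteeing that nothing beyond the depth limit escapes sanitization.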
|
@copilot open a new pull request to apply changes based on the comments in this thread |
|
@alvagante I've opened a new pull request, #25, to work on those changes. Once the pull request is ready, I'll request review from you. |
- Removed unnecessary type assertions in ExecutePlaybookForm, InstallSoftwareForm, and GroupActionModal tests.
- Updated ExpertModeCopyButton tests to remove eslint-disable comments for unbound-method.
- Enhanced error handling in API authentication integration tests by using mockFetch.
- Improved focus management and accessibility checks in GroupActionModal tests.
- Refactored auth.svelte.ts to ensure consistent type usage and error handling.
- Cleaned up localStorage mock implementation in expertMode.test.ts.
- Simplified router.svelte.ts by using nullish coalescing for intended path navigation.
| console.error( | ||
| `[RBAC] Error checking permissions - User: ${req.user?.userId ?? 'unknown'}, ` + | ||
| `Resource: ${resource}, Action: ${action}, ` + | ||
| `Error: ${error instanceof Error ? error.message : String(error)}` | ||
| ); |
Same issue as the console.warn call above — console.error should be replaced with the project's LoggerService for consistency.
| console.warn(`Applying migration: ${migration.filename}`); | ||
| await this.executeMigration(migration); | ||
| console.warn(`✓ Migration ${migration.filename} applied successfully`); |
Migration progress is logged via console.warn, but this is an informational event, not a warning. The project uses LoggerService for structured logging. Consider accepting an optional logger in the constructor and using logger.info for progress reporting, or at minimum use console.log.
| console.warn(`Applying migration: ${migration.filename}`); | |
| await this.executeMigration(migration); | |
| console.warn(`✓ Migration ${migration.filename} applied successfully`); | |
| console.log(`Applying migration: ${migration.filename}`); | |
| await this.executeMigration(migration); | |
| console.log(`✓ Migration ${migration.filename} applied successfully`); |
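A sketch of the constructor-injected logger the comment suggests. The `MigrationLogger` shape and class name here are assumptions for illustration, not the project's actual LoggerService interface:

```typescript
// Sketch: migration progress reporting through an injected logger, with a
// console fallback so the runner still works when no logger is wired in.
interface MigrationLogger {
  info(message: string): void;
  warn(message: string): void;
}

class MigrationProgressReporter {
  constructor(
    private readonly logger: MigrationLogger = {
      info: (m) => { console.log(m); },
      warn: (m) => { console.warn(m); },
    },
  ) {}

  reportStart(filename: string): void {
    // Informational event, so it goes through info, not warn.
    this.logger.info(`Applying migration: ${filename}`);
  }

  reportSuccess(filename: string): void {
    this.logger.info(`✓ Migration ${filename} applied successfully`);
  }
}
```

Injecting the logger keeps the runner testable and lets the application pass its structured LoggerService without the runner depending on it directly.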
| /** | ||
| * Get groups from Bolt inventory | ||
| * | ||
| * @returns Array of node groups | ||
| * @todo Implement group extraction from Bolt inventory.yaml (Task 2) | ||
| */ | ||
| /** | ||
| * Get groups from Bolt inventory | ||
| * | ||
| * Parses inventory.yaml to extract group definitions including: | ||
| * - Group names and targets | ||
| * - Nested groups (stored in metadata.hierarchy) | ||
| * - Variables and config (stored in metadata) | ||
| * | ||
| * @returns Array of node groups | ||
| */ | ||
| async getGroups(): Promise<NodeGroup[]> { |
There are two consecutive JSDoc comment blocks for the getGroups method — the first (lines 369–374) is a stub with a @todo that was never removed, and the second (lines 375–384) is the actual documentation. The first block should be removed to avoid confusion.
| const complete = this.performanceMonitor.startTimer('puppetdb:getGroups'); | ||
|
|
||
| // Return empty array if not initialized (per requirement 4.4) | ||
| if (!this.initialized || !this.client) { | ||
| complete({ cached: false, groupCount: 0 }); | ||
| return []; | ||
| } | ||
|
|
||
| try { | ||
| // Check cache first | ||
| const cacheKey = 'groups:all'; | ||
| const cached = this.cache.get(cacheKey); | ||
| if (Array.isArray(cached)) { | ||
| this.log(`Returning cached groups (${String(cached.length)} groups)`); | ||
| complete({ cached: true, groupCount: cached.length }); | ||
| return cached as NodeGroup[]; | ||
| } | ||
|
|
||
| const client = this.client; | ||
|
|
||
| const groups: NodeGroup[] = []; | ||
|
|
||
| // Query 1: Group by environment | ||
| try { | ||
| const envResult = await this.executeWithResilience(async () => { | ||
| return await client.query("pdb/query/v4/nodes", undefined); | ||
| }); | ||
|
|
||
| if (Array.isArray(envResult)) { | ||
| const envGroups = this.createEnvironmentGroups(envResult as PuppetDBNode[]); | ||
| groups.push(...envGroups); | ||
| } | ||
| } catch { | ||
| this.log("Failed to query nodes for environment grouping", "warn"); | ||
| } | ||
|
|
||
| // Query 2: Group by OS family (from facts) | ||
| try { | ||
| const osResult = await this.executeWithResilience(async () => { | ||
| return await client.query( | ||
| "pdb/query/v4/nodes", | ||
| JSON.stringify(["extract", ["certname", ["fact", "os.family"]]]) | ||
| ); | ||
| }); | ||
|
|
||
| if (Array.isArray(osResult)) { | ||
| const osGroups = this.createOSFamilyGroups(osResult as { certname: string; "os.family": string }[]); | ||
| groups.push(...osGroups); | ||
| } | ||
| } catch { | ||
| this.log("Failed to query nodes for OS family grouping", "warn"); | ||
| } | ||
|
|
||
| // Cache the result | ||
| this.cache.set(cacheKey, groups, this.cacheTTL); | ||
| this.log(`Cached groups (${String(groups.length)} groups) for ${String(this.cacheTTL)}ms`); | ||
|
|
||
| complete({ cached: false, groupCount: groups.length }); | ||
| return groups; | ||
| } catch (error) { | ||
| this.logError("Failed to get groups from PuppetDB", error); | ||
| complete({ error: error instanceof Error ? error.message : String(error) }); | ||
| // Return empty array on error as per requirements | ||
| return []; | ||
| } | ||
| } |
The getGroups method body (lines 631–696) is indented with 6 spaces, inconsistent with the rest of the class which uses 2-space indentation. The private helper methods createEnvironmentGroups and createOSFamilyGroups (lines 734–806) also have inconsistent 4-space indentation relative to the class body. This should be normalized to 2-space indentation to match the rest of the file.
| const complete = this.performanceMonitor.startTimer('puppetdb:getGroups'); | |
| // Return empty array if not initialized (per requirement 4.4) | |
| if (!this.initialized || !this.client) { | |
| complete({ cached: false, groupCount: 0 }); | |
| return []; | |
| } | |
| try { | |
| // Check cache first | |
| const cacheKey = 'groups:all'; | |
| const cached = this.cache.get(cacheKey); | |
| if (Array.isArray(cached)) { | |
| this.log(`Returning cached groups (${String(cached.length)} groups)`); | |
| complete({ cached: true, groupCount: cached.length }); | |
| return cached as NodeGroup[]; | |
| } | |
| const client = this.client; | |
| const groups: NodeGroup[] = []; | |
| // Query 1: Group by environment | |
| try { | |
| const envResult = await this.executeWithResilience(async () => { | |
| return await client.query("pdb/query/v4/nodes", undefined); | |
| }); | |
| if (Array.isArray(envResult)) { | |
| const envGroups = this.createEnvironmentGroups(envResult as PuppetDBNode[]); | |
| groups.push(...envGroups); | |
| } | |
| } catch { | |
| this.log("Failed to query nodes for environment grouping", "warn"); | |
| } | |
| // Query 2: Group by OS family (from facts) | |
| try { | |
| const osResult = await this.executeWithResilience(async () => { | |
| return await client.query( | |
| "pdb/query/v4/nodes", | |
| JSON.stringify(["extract", ["certname", ["fact", "os.family"]]]) | |
| ); | |
| }); | |
| if (Array.isArray(osResult)) { | |
| const osGroups = this.createOSFamilyGroups(osResult as { certname: string; "os.family": string }[]); | |
| groups.push(...osGroups); | |
| } | |
| } catch { | |
| this.log("Failed to query nodes for OS family grouping", "warn"); | |
| } | |
| // Cache the result | |
| this.cache.set(cacheKey, groups, this.cacheTTL); | |
| this.log(`Cached groups (${String(groups.length)} groups) for ${String(this.cacheTTL)}ms`); | |
| complete({ cached: false, groupCount: groups.length }); | |
| return groups; | |
| } catch (error) { | |
| this.logError("Failed to get groups from PuppetDB", error); | |
| complete({ error: error instanceof Error ? error.message : String(error) }); | |
| // Return empty array on error as per requirements | |
| return []; | |
| } | |
| } | |
| const complete = this.performanceMonitor.startTimer('puppetdb:getGroups'); | |
| // Return empty array if not initialized (per requirement 4.4) | |
| if (!this.initialized || !this.client) { | |
| complete({ cached: false, groupCount: 0 }); | |
| return []; | |
| } | |
| try { | |
| // Check cache first | |
| const cacheKey = 'groups:all'; | |
| const cached = this.cache.get(cacheKey); | |
| if (Array.isArray(cached)) { | |
| this.log(`Returning cached groups (${String(cached.length)} groups)`); | |
| complete({ cached: true, groupCount: cached.length }); | |
| return cached as NodeGroup[]; | |
| } | |
| const client = this.client; | |
| const groups: NodeGroup[] = []; | |
| // Query 1: Group by environment | |
| try { | |
| const envResult = await this.executeWithResilience(async () => { | |
| return await client.query("pdb/query/v4/nodes", undefined); | |
| }); | |
| if (Array.isArray(envResult)) { | |
| const envGroups = this.createEnvironmentGroups(envResult as PuppetDBNode[]); | |
| groups.push(...envGroups); | |
| } | |
| } catch { | |
| this.log("Failed to query nodes for environment grouping", "warn"); | |
| } | |
| // Query 2: Group by OS family (from facts) | |
| try { | |
| const osResult = await this.executeWithResilience(async () => { | |
| return await client.query( | |
| "pdb/query/v4/nodes", | |
| JSON.stringify(["extract", ["certname", ["fact", "os.family"]]]) | |
| ); | |
| }); | |
| if (Array.isArray(osResult)) { | |
| const osGroups = this.createOSFamilyGroups(osResult as { certname: string; "os.family": string }[]); | |
| groups.push(...osGroups); | |
| } | |
| } catch { | |
| this.log("Failed to query nodes for OS family grouping", "warn"); | |
| } | |
| // Cache the result | |
| this.cache.set(cacheKey, groups, this.cacheTTL); | |
| this.log(`Cached groups (${String(groups.length)} groups) for ${String(this.cacheTTL)}ms`); | |
| complete({ cached: false, groupCount: groups.length }); | |
| return groups; | |
| } catch (error) { | |
| this.logError("Failed to get groups from PuppetDB", error); | |
| complete({ error: error instanceof Error ? error.message : String(error) }); | |
| // Return empty array on error as per requirements | |
| return []; | |
| } | |
| } |
| public async getGroups(): Promise<{ | ||
| id: string; | ||
| name: string; | ||
| source: string; | ||
| sources: string[]; | ||
| linked: boolean; | ||
| nodes: string[]; | ||
| metadata?: { | ||
| description?: string; | ||
| variables?: Record<string, unknown>; | ||
| hierarchy?: string[]; | ||
| [key: string]: unknown; | ||
| }; | ||
| }[]> { |
The return type of getGroups() is an inline anonymous object type that duplicates the NodeGroup interface already defined in backend/src/integrations/types.ts. The method should import and return NodeGroup[] to stay DRY and benefit from any future interface changes.
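Illustrative shape of the suggested change, with the `NodeGroup` fields copied from the inline type above; the stub body stands in for the real query logic:

```typescript
// Sketch: declare the shared interface once (in the real code, import it
// from backend/src/integrations/types.ts) and reuse it as the return type.
interface NodeGroup {
  id: string;
  name: string;
  source: string;
  sources: string[];
  linked: boolean;
  nodes: string[];
  metadata?: {
    description?: string;
    variables?: Record<string, unknown>;
    hierarchy?: string[];
    [key: string]: unknown;
  };
}

async function getGroups(): Promise<NodeGroup[]> {
  // Stand-in for the real implementation; returns typed NodeGroup objects.
  return [
    {
      id: "env:production",
      name: "production",
      source: "puppetdb",
      sources: ["puppetdb"],
      linked: false,
      nodes: ["node1.example.com"],
      metadata: { description: "Nodes in the production environment" },
    },
  ];
}
```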
| // Check if expert mode is requested via header | ||
| const requested = expertModeService.isExpertModeEnabled(req); | ||
|
|
||
| if (requested && req.user) { | ||
| // When authenticated, only allow admin users to enable expert mode | ||
| const isAdmin = req.user.roles.includes('admin'); | ||
| req.expertMode = isAdmin; | ||
| } else { | ||
| // No auth context yet (middleware runs before auth) — defer final check. | ||
| // The value will be re-evaluated if needed after auth middleware runs. | ||
| // For unauthenticated endpoints (health, setup), allow expert mode. | ||
| req.expertMode = requested; |
When expert mode middleware runs before auth middleware (which is the documented ordering), an unauthenticated request that sends the X-Expert-Mode header will have req.expertMode = true set. If the request subsequently reaches a route handler that is not protected by auth middleware, the handler will expose debug info to an unauthenticated caller. The comment says "the value will be re-evaluated if needed after auth middleware runs," but there is no re-evaluation mechanism in this diff. Consider defaulting req.expertMode to false here and performing the actual check inside auth middleware after the user context is populated.
| // Check if expert mode is requested via header | |
| const requested = expertModeService.isExpertModeEnabled(req); | |
| if (requested && req.user) { | |
| // When authenticated, only allow admin users to enable expert mode | |
| const isAdmin = req.user.roles.includes('admin'); | |
| req.expertMode = isAdmin; | |
| } else { | |
| // No auth context yet (middleware runs before auth) — defer final check. | |
| // The value will be re-evaluated if needed after auth middleware runs. | |
| // For unauthenticated endpoints (health, setup), allow expert mode. | |
| req.expertMode = requested; | |
| // Default to non-expert mode; only privileged users can enable it. | |
| req.expertMode = false; | |
| // Check if expert mode is requested via header | |
| const requested = expertModeService.isExpertModeEnabled(req); | |
| if (requested && req.user) { | |
| // When authenticated, only allow admin users to enable expert mode | |
| const isAdmin = req.user.roles.includes('admin'); | |
| if (isAdmin) { | |
| req.expertMode = true; | |
| } |
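The default-deny logic can be factored into a pure helper that runs after auth has populated the user context. The user and role shapes below are assumptions based on the diff:

```typescript
// Sketch: default-deny expert-mode resolution, intended to run after the
// auth middleware so req.user is populated when present.
interface AuthUser {
  userId: string;
  roles: string[];
}

function resolveExpertMode(headerRequested: boolean, user?: AuthUser): boolean {
  // Unauthenticated requests never get expert mode, even with the header set.
  if (!headerRequested || !user) {
    return false;
  }
  // Only admins may enable expert mode.
  return user.roles.includes("admin");
}
```

Because the function is pure, the same check can be reused by any later middleware or route guard without re-reading headers.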
| // For YAML, attempt JSON parse as a fallback since | ||
| // a proper YAML parser would be added as a dependency | ||
| throw new InventoryParseError( | ||
| 'YAML inventory parsing requires a YAML parser dependency' | ||
| ); |
The format parameter of parseInventoryFile accepts 'yaml', but the YAML branch unconditionally throws InventoryParseError. Any caller requesting YAML parsing will always fail at runtime. The yaml package is already a declared dependency in package.json (used by BoltPlugin.ts), so this should use it instead of throwing.
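One way to structure this without hard-coding the dependency is to inject the parser, so callers wire in `parse` from the `yaml` package. The function and type names here are illustrative:

```typescript
// Sketch: format dispatch with an injected YAML parser. In the real code
// the caller would pass `parse` from the already-declared 'yaml' package.
type YamlParse = (text: string) => unknown;

function parseInventoryText(
  text: string,
  format: "yaml" | "json",
  parseYaml?: YamlParse,
): unknown {
  if (format === "json") {
    return JSON.parse(text);
  }
  if (!parseYaml) {
    // Fail loudly instead of silently mis-parsing YAML as JSON.
    throw new Error("YAML parsing requires a parser; pass parse from 'yaml'");
  }
  return parseYaml(text);
}
```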
| stdout TEXT, -- Complete stdout output (stored when expert mode enabled) | ||
| stderr TEXT, -- Complete stderr output (stored when expert mode enabled) | ||
| execution_tool TEXT DEFAULT 'bolt' CHECK(execution_tool IN ('bolt', 'ansible')) | ||
| execution_tool TEXT DEFAULT 'bolt' CHECK(execution_tool IN ('bolt', 'ansible', 'ssh')) |
Line 19 is missing a trailing comma before the comment. In the original schema this was the last column definition, but the comment on line 20 follows without a comma separator between the column definition and what appears to be the closing of the CREATE TABLE statement. While SQLite may be tolerant of this in some cases, this makes the schema syntactically incorrect and will fail to parse properly as a standalone SQL file.
| execution_tool TEXT DEFAULT 'bolt' CHECK(execution_tool IN ('bolt', 'ansible', 'ssh')) | |
| execution_tool TEXT DEFAULT 'bolt' CHECK(execution_tool IN ('bolt', 'ansible', 'ssh')), |
backend/.env.example
| # PuppetDB cache configuration | ||
| # PUPPETDB_CACHE_TTL=300000 | ||
| # Allowd commands |
"Allowd" should be "Allowed".
| # Allowd commands | |
| # Allowed commands |
README.md
| ### Version History | ||
|
|
||
| - **v0.8.0**: RBC authentication. SSH integrations. Inventory groups |
"RBC" should be "RBAC".
| - **v0.8.0**: RBC authentication. SSH integrations. Inventory groups | |
| - **v0.8.0**: RBAC authentication. SSH integrations. Inventory groups |
|
@copilot open a new pull request to apply changes based on the comments in this thread |
|
@alvagante I've opened a new pull request, #26, to work on those changes. Once the pull request is ready, I'll request review from you. |
Co-authored-by: alvagante <283804+alvagante@users.noreply.github.com>
…-hoc hosts and improve SSHPlugin host parsing
Pull request overview
Copilot reviewed 85 out of 384 changed files in this pull request and generated 8 comments.
Comments suppressed due to low confidence (1)
backend/src/integrations/ssh/tests/inventoryWatcher.test.ts.skip:1
- The InventoryWatcher class has no active tests — its test file uses the .skip extension to opt out of test discovery. The existing sshConfigWatcher.test.ts (which tests the structurally identical SSHConfigWatcher) is active. The inventory watcher tests should be enabled (rename to .test.ts) once the underlying YAML parser dependency is added, to ensure the watcher behaves correctly.
/**
| private cleanupTempInventory(inventoryPath: string): void { | ||
| try { | ||
| unlinkSync(inventoryPath); | ||
| // Also try to remove the temp directory | ||
| const tempDir = join(inventoryPath, '..'); | ||
| unlinkSync(tempDir); |
unlinkSync removes files, not directories. To remove the temporary directory created by mkdtempSync, rmdirSync (or rmSync) should be used instead of unlinkSync. As written, the unlinkSync(tempDir) call will throw EISDIR and the directory will never be cleaned up (the error is swallowed silently).
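A sketch of the corrected cleanup using `rmSync` on the containing directory. The function name matches the diff; the surrounding class is omitted:

```typescript
// Sketch: remove the temp inventory file and its mkdtempSync-created
// directory in one call. rmSync handles directories; unlinkSync does not.
import { rmSync } from "node:fs";
import { dirname } from "node:path";

function cleanupTempInventory(inventoryPath: string): void {
  try {
    // Removing the directory recursively also removes the file inside it.
    rmSync(dirname(inventoryPath), { recursive: true, force: true });
  } catch {
    // Best-effort cleanup; failures are intentionally swallowed.
  }
}
```

`force: true` suppresses errors when the path is already gone, so repeated cleanup calls stay safe.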
| // Check if nodeId exists in inventory | ||
| const inventory = await this.getInventory(); |
getInventory() is called at the start of both executeCommand() and executePackageOperation() purely to check whether a node exists. This spawns an external ansible-inventory process and parses its output on every single command execution, even when the node is already known to be in inventory. Consider caching the inventory result with a short TTL (similar to BoltService's 30s inventory cache) so repeated executions on the same node don't incur this overhead each time.
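A minimal single-value TTL cache along the lines suggested; the fetch callback stands in for spawning `ansible-inventory`:

```typescript
// Sketch: short-TTL cache for an expensive lookup, mirroring the 30s
// inventory cache in BoltService that the comment references.
class TtlCache<T> {
  private value?: T;
  private expiresAt = 0;

  constructor(private readonly ttlMs: number) {}

  async get(fetch: () => Promise<T>): Promise<T> {
    const now = Date.now();
    if (this.value !== undefined && now < this.expiresAt) {
      return this.value; // still fresh; skip the expensive fetch
    }
    this.value = await fetch();
    this.expiresAt = now + this.ttlMs;
    return this.value;
  }
}
```

With this in place, repeated `executeCommand()` calls within the TTL window reuse one inventory snapshot instead of spawning a process per call.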
| * Get groups from Bolt inventory | ||
| * | ||
| * Parses inventory.yaml to extract group definitions including: | ||
| * - Group names and targets | ||
| * - Nested groups (stored in metadata.hierarchy) | ||
| * - Variables and config (stored in metadata) | ||
| * | ||
| * @returns Array of node groups | ||
| */ | ||
| async getGroups(): Promise<NodeGroup[]> { | ||
| const complete = this.performanceMonitor.startTimer('bolt:getGroups'); | ||
|
|
||
| if (!this.initialized) { | ||
| complete({ error: 'not initialized' }); | ||
| throw new Error("Bolt plugin not initialized"); | ||
| } | ||
|
|
||
| try { | ||
| const groups = await this.extractGroupsFromInventory(); | ||
| complete({ groupCount: groups.length }); | ||
| return groups; | ||
| } catch (error) { | ||
| complete({ error: error instanceof Error ? error.message : String(error) }); | ||
| throw error; | ||
| } | ||
| } | ||
|
|
||
| /** | ||
| * Extract groups from Bolt inventory.yaml file | ||
| * | ||
| * @returns Array of NodeGroup objects | ||
| */ | ||
| private async extractGroupsFromInventory(): Promise<NodeGroup[]> { | ||
| const fs = await import("fs"); | ||
| const path = await import("path"); | ||
| const { parse: parseYaml } = await import("yaml"); | ||
|
|
||
| const projectPath = this.boltService.getBoltProjectPath(); | ||
|
|
||
| // Try both inventory.yaml and inventory.yml | ||
| const inventoryYaml = path.join(projectPath, "inventory.yaml"); | ||
| const inventoryYml = path.join(projectPath, "inventory.yml"); | ||
|
|
||
| let inventoryPath: string | null = null; | ||
| if (fs.existsSync(inventoryYaml)) { | ||
| inventoryPath = inventoryYaml; | ||
| } else if (fs.existsSync(inventoryYml)) { | ||
| inventoryPath = inventoryYml; | ||
| } | ||
|
|
||
| if (!inventoryPath) { | ||
| // No inventory file found - return empty array | ||
| this.log("No inventory file found, returning empty groups array"); | ||
| return []; | ||
| } | ||
|
|
||
| try { | ||
| const inventoryContent = fs.readFileSync(inventoryPath, "utf-8"); | ||
| const inventory = parseYaml(inventoryContent) as Record<string, unknown>; | ||
|
|
||
| if (!inventory.groups || !Array.isArray(inventory.groups)) { | ||
| // No groups section in inventory | ||
| return []; | ||
| } | ||
|
|
||
| const groups: NodeGroup[] = []; | ||
|
|
||
| for (const groupData of inventory.groups) { | ||
| if (typeof groupData !== "object" || groupData === null) { | ||
| continue; | ||
| } | ||
|
|
||
| const group = this.parseInventoryGroup(groupData as Record<string, unknown>); | ||
| if (group) { | ||
| groups.push(group); | ||
| } | ||
| } | ||
|
|
||
| return groups; | ||
| } catch (error) { | ||
| this.logError("Failed to parse inventory file for groups", error); | ||
| return []; | ||
| } | ||
| } |
The indentation of getGroups() and its private helpers (extractGroupsFromInventory, parseInventoryGroup) uses 4-space indentation inside the class body rather than the 2-space indentation used consistently throughout the rest of BoltPlugin.ts. This inconsistency suggests these methods were pasted in without reformatting.
| * Get groups from Bolt inventory | |
| * | |
| * Parses inventory.yaml to extract group definitions including: | |
| * - Group names and targets | |
| * - Nested groups (stored in metadata.hierarchy) | |
| * - Variables and config (stored in metadata) | |
| * | |
| * @returns Array of node groups | |
| */ | |
| async getGroups(): Promise<NodeGroup[]> { | |
| const complete = this.performanceMonitor.startTimer('bolt:getGroups'); | |
| if (!this.initialized) { | |
| complete({ error: 'not initialized' }); | |
| throw new Error("Bolt plugin not initialized"); | |
| } | |
| try { | |
| const groups = await this.extractGroupsFromInventory(); | |
| complete({ groupCount: groups.length }); | |
| return groups; | |
| } catch (error) { | |
| complete({ error: error instanceof Error ? error.message : String(error) }); | |
| throw error; | |
| } | |
| } | |
| /** | |
| * Extract groups from Bolt inventory.yaml file | |
| * | |
| * @returns Array of NodeGroup objects | |
| */ | |
| private async extractGroupsFromInventory(): Promise<NodeGroup[]> { | |
| const fs = await import("fs"); | |
| const path = await import("path"); | |
| const { parse: parseYaml } = await import("yaml"); | |
| const projectPath = this.boltService.getBoltProjectPath(); | |
| // Try both inventory.yaml and inventory.yml | |
| const inventoryYaml = path.join(projectPath, "inventory.yaml"); | |
| const inventoryYml = path.join(projectPath, "inventory.yml"); | |
| let inventoryPath: string | null = null; | |
| if (fs.existsSync(inventoryYaml)) { | |
| inventoryPath = inventoryYaml; | |
| } else if (fs.existsSync(inventoryYml)) { | |
| inventoryPath = inventoryYml; | |
| } | |
| if (!inventoryPath) { | |
| // No inventory file found - return empty array | |
| this.log("No inventory file found, returning empty groups array"); | |
| return []; | |
| } | |
| try { | |
| const inventoryContent = fs.readFileSync(inventoryPath, "utf-8"); | |
| const inventory = parseYaml(inventoryContent) as Record<string, unknown>; | |
| if (!inventory.groups || !Array.isArray(inventory.groups)) { | |
| // No groups section in inventory | |
| return []; | |
| } | |
| const groups: NodeGroup[] = []; | |
| for (const groupData of inventory.groups) { | |
| if (typeof groupData !== "object" || groupData === null) { | |
| continue; | |
| } | |
| const group = this.parseInventoryGroup(groupData as Record<string, unknown>); | |
| if (group) { | |
| groups.push(group); | |
| } | |
| } | |
| return groups; | |
| } catch (error) { | |
| this.logError("Failed to parse inventory file for groups", error); | |
| return []; | |
| } | |
| } | |
| * Get groups from Bolt inventory | |
| * | |
| * Parses inventory.yaml to extract group definitions including: | |
| * - Group names and targets | |
| * - Nested groups (stored in metadata.hierarchy) | |
| * - Variables and config (stored in metadata) | |
| * | |
| * @returns Array of node groups | |
| */ | |
| async getGroups(): Promise<NodeGroup[]> { | |
| const complete = this.performanceMonitor.startTimer('bolt:getGroups'); | |
| if (!this.initialized) { | |
| complete({ error: 'not initialized' }); | |
| throw new Error("Bolt plugin not initialized"); | |
| } | |
| try { | |
| const groups = await this.extractGroupsFromInventory(); | |
| complete({ groupCount: groups.length }); | |
| return groups; | |
| } catch (error) { | |
| complete({ error: error instanceof Error ? error.message : String(error) }); | |
| throw error; | |
| } | |
| } | |
| /** | |
| * Extract groups from Bolt inventory.yaml file | |
| * | |
| * @returns Array of NodeGroup objects | |
| */ | |
| private async extractGroupsFromInventory(): Promise<NodeGroup[]> { | |
| const fs = await import("fs"); | |
| const path = await import("path"); | |
| const { parse: parseYaml } = await import("yaml"); | |
| const projectPath = this.boltService.getBoltProjectPath(); | |
| // Try both inventory.yaml and inventory.yml | |
| const inventoryYaml = path.join(projectPath, "inventory.yaml"); | |
| const inventoryYml = path.join(projectPath, "inventory.yml"); | |
| let inventoryPath: string | null = null; | |
| if (fs.existsSync(inventoryYaml)) { | |
| inventoryPath = inventoryYaml; | |
| } else if (fs.existsSync(inventoryYml)) { | |
| inventoryPath = inventoryYml; | |
| } | |
| if (!inventoryPath) { | |
| // No inventory file found - return empty array | |
| this.log("No inventory file found, returning empty groups array"); | |
| return []; | |
| } | |
| try { | |
| const inventoryContent = fs.readFileSync(inventoryPath, "utf-8"); | |
| const inventory = parseYaml(inventoryContent) as Record<string, unknown>; | |
| if (!inventory.groups || !Array.isArray(inventory.groups)) { | |
| // No groups section in inventory | |
| return []; | |
| } | |
| const groups: NodeGroup[] = []; | |
| for (const groupData of inventory.groups) { | |
| if (typeof groupData !== "object" || groupData === null) { | |
| continue; | |
| } | |
| const group = this.parseInventoryGroup(groupData as Record<string, unknown>); | |
| if (group) { | |
| groups.push(group); | |
| } | |
| } | |
| return groups; | |
| } catch (error) { | |
| this.logError("Failed to parse inventory file for groups", error); | |
| return []; | |
| } | |
| } |
| const fs = await import("fs"); | ||
| const path = await import("path"); | ||
| const { parse: parseYaml } = await import("yaml"); |
fs and path are Node.js built-ins, and yaml is a declared dependency that is already loaded elsewhere during module initialization. Using dynamic await import() inside a method that may be called many times forces repeated module resolution on every call. These imports should be moved to the top of the file as static imports, consistent with the existing import style in BoltPlugin.ts.
| * Adds extra security headers not covered by helmet | ||
| */ | ||
| export function additionalSecurityHeaders( | ||
| _req: Request, | ||
| res: Response, | ||
| next: NextFunction, | ||
| ): void { | ||
| // Prevent browsers from performing MIME type sniffing | ||
| res.setHeader("X-Content-Type-Options", "nosniff"); | ||
|
|
||
| // Prevent clickjacking attacks | ||
| res.setHeader("X-Frame-Options", "DENY"); | ||
|
|
additionalSecurityHeaders re-sets X-Content-Type-Options and X-Frame-Options, which are already configured by helmetMiddleware (via contentSecurityPolicy and the helmet defaults). This creates a redundancy where both middleware set the same headers — if helmetMiddleware is applied first these headers get overwritten, and if additionalSecurityHeaders is applied first they get overwritten by helmet. The X-Content-Type-Options and X-Frame-Options setters should be removed from additionalSecurityHeaders and only managed by helmetMiddleware to avoid confusion and potential misconfiguration.
| * Adds extra security headers not covered by helmet | |
| */ | |
| export function additionalSecurityHeaders( | |
| _req: Request, | |
| res: Response, | |
| next: NextFunction, | |
| ): void { | |
| // Prevent browsers from performing MIME type sniffing | |
| res.setHeader("X-Content-Type-Options", "nosniff"); | |
| // Prevent clickjacking attacks | |
| res.setHeader("X-Frame-Options", "DENY"); | |
| * Adds extra security headers that complement helmet defaults | |
| */ | |
| export function additionalSecurityHeaders( | |
| _req: Request, | |
| res: Response, | |
| next: NextFunction, | |
| ): void { |
| try { | ||
| await this.exec(statement); | ||
| } catch (error) { | ||
| // Ignore "duplicate column" errors from ALTER TABLE (migration already applied) | ||
| const errorMessage = error instanceof Error ? error.message : ""; | ||
| if (!errorMessage.includes("duplicate column")) { | ||
| throw error; | ||
| } | ||
| // Migration already applied, continue | ||
| } |
The RBAC schema (rbac-schema.sql) contains only CREATE TABLE IF NOT EXISTS and CREATE INDEX IF NOT EXISTS statements, which are idempotent. There are no ALTER TABLE statements, so the "duplicate column" error that this catch block is guarding against can never occur here. More importantly, CREATE INDEX statements will fail with "index already exists" (not "duplicate column") on repeated initialization. The correct SQLite idempotency error message to filter would be "already exists", or better yet — since all statements use IF NOT EXISTS — errors should not be swallowed at all.
| try { | |
| await this.exec(statement); | |
| } catch (error) { | |
| // Ignore "duplicate column" errors from ALTER TABLE (migration already applied) | |
| const errorMessage = error instanceof Error ? error.message : ""; | |
| if (!errorMessage.includes("duplicate column")) { | |
| throw error; | |
| } | |
| // Migration already applied, continue | |
| } | |
| await this.exec(statement); |
| } else { | ||
| // For YAML, attempt JSON parse as a fallback since | ||
| // a proper YAML parser would be added as a dependency | ||
| throw new InventoryParseError( | ||
| 'YAML inventory parsing requires a YAML parser dependency' | ||
| ); | ||
| } |
The InventoryWatcher accepts format: 'yaml' | 'json' as a valid option, and inventoryWatcher.test.ts.skip (skipped tests) exercises the YAML format. However, parseInventoryFile unconditionally throws when format === 'yaml'. Any caller that provides a YAML inventory file will always receive an error. Since yaml is already a declared dependency in package.json (used in BoltPlugin.ts), the YAML parsing should be implemented rather than deferred.
| // Simple health check - try to execute a basic command | ||
| client.exec('echo test', (err, stream) => { | ||
| if (err) { | ||
| resolve(false); | ||
| return; | ||
| } | ||
|
|
||
| let output = ''; | ||
| stream.on('data', (data: Buffer) => { | ||
| output += data.toString(); | ||
| }); | ||
|
|
||
| stream.on('close', () => { | ||
| resolve(output.trim() === 'test'); | ||
| }); | ||
|
|
||
| stream.on('error', () => { | ||
| resolve(false); | ||
| }); | ||
| }); | ||
|
|
||
| // Timeout after 5 seconds | ||
| setTimeout(() => { resolve(false); }, 5000); |
The Promise created in testConnection can resolve multiple times: first via the setTimeout callback and then again via the stream.on('close', ...) handler (or vice versa). While Promise settlement is idempotent, more critically the setTimeout fires a resolve(false) but never clears itself even after the stream successfully closes, leaving a dangling timer. The timeout should be stored and cleared in both the close and error handlers to prevent unnecessary timer overhead across many health-checked connections.
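A sketch of the single-settlement pattern with a cleared timeout; `runCheck` stands in for the ssh2 exec/stream wiring:

```typescript
// Sketch: resolve exactly once and clear the timeout when the check
// completes, so no timer lingers after a successful health check.
function withTimeout(
  runCheck: (done: (ok: boolean) => void) => void,
  timeoutMs: number,
): Promise<boolean> {
  return new Promise((resolve) => {
    let settled = false;
    const timer = setTimeout(() => {
      if (!settled) {
        settled = true;
        resolve(false); // timed out before the check completed
      }
    }, timeoutMs);
    runCheck((ok) => {
      if (settled) {
        return; // the timeout already settled the promise
      }
      settled = true;
      clearTimeout(timer); // no dangling timer after the stream closes
      resolve(ok);
    });
  });
}
```

The `settled` flag also guards against the close and error stream handlers both firing, which is the multiple-resolution path the comment describes.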
…ng, and enhance type safety in various services
No description provided.