fix(#267): emit evaluation_complete event in full_techniques mode#269

Merged
ComBba merged 2 commits into main from fix/267-evaluation-complete-event
Feb 10, 2026
Conversation

@ComBba (Contributor) commented Feb 10, 2026

Summary

Resolves #267

Problem

In full_techniques evaluation mode, the evaluation_complete SSE event was never emitted, so the frontend could not reliably detect evaluation completion. The finalize node was missing the event emission logic.

Changes

  • Import create_sommelier_event and get_event_channel in finalize.py
  • Emit quality_gate_complete event with final scores (total_score, quality_gate)
  • Emit evaluation_complete event for frontend completion detection
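The emission sequence described in these bullets can be sketched as follows. This is a minimal, self-contained sketch: `create_sommelier_event` and `EventChannel` here are simplified stand-ins for the real helpers in `app.services.event_channel`, and the exact field names are assumptions based on the review discussion in this thread.

```python
# Simplified stand-ins for app.services.event_channel (hypothetical shapes).
def create_sommelier_event(evaluation_id, sommelier, event_type, progress_percent, message):
    """Build a plain-dict event; the real factory likely returns a model object."""
    return {
        "evaluation_id": evaluation_id,
        "sommelier": sommelier,
        "event_type": event_type,
        "progress_percent": progress_percent,
        "message": message,
    }

class EventChannel:
    """Stand-in channel that records events instead of streaming them over SSE."""
    def __init__(self):
        self.emitted = []

    def emit_sync(self, evaluation_id, event):
        self.emitted.append((evaluation_id, event))

def finalize_events(channel, evaluation_id, normalized, quality_gate):
    """Emit the two completion events in order, mirroring the finalize node."""
    # 1) Quality-gate result at 95% progress, with the score folded into the message.
    channel.emit_sync(
        evaluation_id,
        create_sommelier_event(
            evaluation_id=evaluation_id,
            sommelier="finalize",
            event_type="quality_gate_complete",
            progress_percent=95,
            message=f"Quality gate: {quality_gate}, score: {round(normalized, 2)}",
        ),
    )
    # 2) Overall completion signal at 100% progress for the frontend.
    channel.emit_sync(
        evaluation_id,
        create_sommelier_event(
            evaluation_id=evaluation_id,
            sommelier="system",
            event_type="evaluation_complete",
            progress_percent=100,
            message="Evaluation complete!",
        ),
    )

channel = EventChannel()
finalize_events(channel, "eval-123", 87.456, "PASS")
print([e["event_type"] for _, e in channel.emitted])
# ['quality_gate_complete', 'evaluation_complete']
```

The ordering matters: the frontend treats evaluation_complete as the terminal signal, so it must be the last event emitted.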

Testing

  • Python import verification passes
  • Backend starts correctly
  • Frontend build succeeds

Checklist

  • Code follows project conventions
  • No breaking changes

Summary by CodeRabbit

New Features

  • Progress-update events are now emitted when an evaluation finishes. A quality-gate completion event (95% progress) and an overall evaluation-completion event (100% progress) are emitted in sequence, clearly signaling the completion state to the user.

- Import event_channel and create_sommelier_event in finalize.py
- Emit quality_gate_complete event with final scores
- Emit evaluation_complete event for frontend completion detection

Closes #267
@ComBba ComBba self-assigned this Feb 10, 2026


@gemini-code-assist

Summary of Changes

Hello @ComBba, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses a critical issue where the frontend was unable to detect the completion of evaluations in full_techniques mode due to a missing evaluation_complete SSE event. The changes introduce the necessary event emission logic into the finalize node, ensuring proper communication of evaluation status and scores to the client.

Highlights

  • Event Emission: Implemented the emission of quality_gate_complete and evaluation_complete Server-Sent Events (SSE) within the finalize node.
  • Frontend Synchronization: Ensured the frontend can now reliably detect evaluation completion in full_techniques mode by emitting the evaluation_complete event.
  • Dependency Update: Imported necessary event utility functions (create_sommelier_event, get_event_channel) into finalize.py.
Changelog
  • backend/app/graph/nodes/technique_categories/finalize.py
    • Imported create_sommelier_event and get_event_channel from app.services.event_channel.
    • Added conditional logic to emit a quality_gate_complete event with the final score and quality gate status.
    • Added conditional logic to emit an evaluation_complete event to signal the end of the evaluation process.
Activity
  • Python import verification passed.
  • Backend starts correctly.
  • Frontend build succeeds.
  • Code follows project conventions.
  • No breaking changes.

coderabbitai bot commented Feb 10, 2026


Walkthrough

This change adds event emission logic to the finalize.py node so that the previously missing evaluation_complete event is sent in full_techniques mode. After the evaluation completes, two Sommelier events are emitted in sequence: quality_gate_complete (95% progress) and evaluation_complete (100% progress).

Changes

Cohort / File(s) Summary
Event Emission Logic
backend/app/graph/nodes/technique_categories/finalize.py
Added imports for create_sommelier_event and get_event_channel. Implemented conditional event emission after quality gate computation: emits quality_gate_complete event at 95% progress with quality gate details, followed by evaluation_complete event at 100% progress when evaluation_id exists.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

Poem

🐰✨ Sending the signal of completion,
The joy of passing the quality gate,
A final breath carries the 100% finish,
Flung onward toward the frontend,
The evaluation meets its true end at last! 🎉

🚥 Pre-merge checks | ✅ 4 | ❌ 1
❌ Failed checks (1 warning)
Check name Status Explanation Resolution
Docstring Coverage ⚠️ Warning Docstring coverage is 0.00% which is insufficient. The required threshold is 80.00%. Write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (4 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The PR title clearly reflects the core of the change, describing the specific goal of emitting the evaluation_complete event in full_techniques mode.
Linked Issues check ✅ Passed The PR satisfies the main requirements of #267: it adds logic in finalize.py to emit the quality_gate_complete and evaluation_complete events.
Out of Scope Changes check ✅ Passed All changes fall within the scope required to resolve #267; only the event emission logic added to finalize.py is included.


@gemini-code-assist bot left a comment

Code Review

This pull request aims to fix a missing evaluation_complete event in the full_techniques mode by adding event emission logic in the finalize node. However, it introduces a critical regression: the quality_gate_complete event emission calls create_sommelier_event with unsupported arguments (total_score and quality_gate), leading to a TypeError and a crash in the evaluation process. To resolve this, the SSEEvent model should be used for flexible data payloads, preventing the application from crashing and ensuring evaluations complete successfully.

from app.criteria.bmad_items import list_items, get_category, get_category_max
from app.constants import get_quality_gate
from app.models.graph import ItemScore
from app.services.event_channel import create_sommelier_event, get_event_channel


critical

To correctly emit the quality_gate_complete event with a custom payload, it's necessary to use the SSEEvent model, which supports a flexible data field. The current import is missing SSEEvent and EventType, which are required for the fix. Please add them to the import statement.

Suggested change
from app.services.event_channel import create_sommelier_event, get_event_channel
from app.services.event_channel import SSEEvent, EventType, create_sommelier_event, get_event_channel

Comment on lines +67 to +68
total_score=round(normalized, 2),
quality_gate=quality_gate,


medium

The finalize node incorrectly calls create_sommelier_event with unsupported keyword arguments total_score and quality_gate. This mismatch will cause a TypeError at runtime, leading to a crash of the evaluation process and preventing evaluations from completing successfully in full_techniques mode. To resolve this, consider using SSEEvent with a flexible data dictionary to correctly emit the final scores and prevent application crashes.
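The alternative the reviewer proposes could look roughly like the sketch below. SSEEvent and EventType here are illustrative stand-ins (the actual models in app.services.event_channel may define different fields); the point is that structured values travel in a free-form data payload rather than as extra keyword arguments, which avoids the TypeError described above.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Any

class EventType(str, Enum):
    # Illustrative subset; the real enum may define more members.
    QUALITY_GATE_COMPLETE = "quality_gate_complete"
    EVALUATION_COMPLETE = "evaluation_complete"

@dataclass
class SSEEvent:
    """Generic SSE event with a flexible payload (hypothetical shape)."""
    evaluation_id: str
    event_type: EventType
    data: dict[str, Any] = field(default_factory=dict)

# Structured scores go into `data` instead of being passed as unsupported
# keyword arguments to a factory with a fixed signature.
event = SSEEvent(
    evaluation_id="eval-123",
    event_type=EventType.QUALITY_GATE_COMPLETE,
    data={"total_score": 87.46, "quality_gate": "PASS"},
)
print(event.data["total_score"])  # 87.46
```

Because the payload is an open dictionary, new fields can be added later without touching the event model or factory signature.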

@coderabbitai bot left a comment

Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@backend/app/graph/nodes/technique_categories/finalize.py`:
- Around line 59-70: The call to create_sommelier_event in finalize.py passes
non-existent kwargs total_score and quality_gate causing a TypeError; remove
those kwargs from event_channel.emit_sync and instead include the quality_gate
and rounded score inside the message string (e.g., append ", quality_gate:
{quality_gate}, total_score: {round(normalized,2)}") when calling
create_sommelier_event; alternatively, if you prefer schema change, add fields
to SommelierProgressEvent and update create_sommelier_event signature to accept
total_score and quality_gate across the codebase, but do not pass unknown kwargs
from finalize.py without updating the event model and factory.
🧹 Nitpick comments (1)
backend/app/graph/nodes/technique_categories/finalize.py (1)

57-80: A failure during event emission could propagate and fail the entire finalize node

Event emission is a side effect, but it is currently invoked without try/except, so an exception raised in emit_sync or create_sommelier_event would prevent the evaluation result (the return block) from being returned. Defensive handling is recommended so that an event emission failure does not lead to loss of the evaluation result.

🛡️ Suggested fix: wrap event emission in try/except
     if evaluation_id:
+        try:
             event_channel = get_event_channel()
             event_channel.emit_sync(
                 evaluation_id,
                 create_sommelier_event(
                     evaluation_id=evaluation_id,
                     sommelier="finalize",
                     event_type="quality_gate_complete",
                     progress_percent=95,
                     message=f"Quality gate: {quality_gate}, score: {round(normalized, 2)}",
                 ),
             )
             event_channel.emit_sync(
                 evaluation_id,
                 create_sommelier_event(
                     evaluation_id=evaluation_id,
                     sommelier="system",
                     event_type="evaluation_complete",
                     progress_percent=100,
                     message="Evaluation complete!",
                 ),
             )
+        except Exception:
+            import logging
+            logging.getLogger(__name__).warning(
+                "Failed to emit finalize events for %s", evaluation_id, exc_info=True
+            )

Address review feedback:
- Remove total_score and quality_gate kwargs from create_sommelier_event
- Include score in message string instead
- Wrap event emission in try/except to prevent node failure
@ComBba ComBba merged commit d994c2a into main Feb 10, 2026
5 checks passed
@ComBba ComBba deleted the fix/267-evaluation-complete-event branch February 10, 2026 00:14
ComBba added a commit that referenced this pull request Feb 10, 2026
…te-event

fix(#267): emit evaluation_complete event in full_techniques mode

Development

Successfully merging this pull request may close these issues.

[Critical] full_techniques mode: evaluation_complete event never emitted
