📝 Walkthrough

Updates Go toolchain and module versions; always records fee-distribution audit rows (including zero-fee / zero-LP cases); refactors LP reward sampling to a global scoring + normalized insert flow; adjusts audit constraints; increases the default max markets per run; and updates tests and config parsing.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant Settlement as Settlement Handler (033)
    participant Rewards as Reward Sampler (034)
    participant DB as Database
    participant Audit as Audit Logger (036)
    Client->>Settlement: distribute_fees(block)
    Settlement->>Rewards: sample_lp_rewards / sample_all_active_lp_rewards(block)
    Rewards->>DB: compute global scores per market
    DB-->>Rewards: total_scores
    Rewards->>DB: insert normalized per-participant reward_percent (batch)
    Rewards-->>Settlement: reward_percent data
    Settlement->>Settlement: compute per-wallet amounts & summary
    Settlement->>DB: insert ob_fee_distributions (always)
    Settlement->>DB: ob_batch_unlock_collateral(wallets, amounts)
    alt rewards exist
        Settlement->>DB: insert ob_fee_distribution_details (per-LP)
    else no rewards
        Settlement->>DB: leave details empty (distribution row with zeros)
    end
    Settlement->>DB: delete processed ob_rewards
    Settlement-->>Client: complete
```
Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs
Suggested reviewers
Poem
🚥 Pre-merge checks | ✅ Passed checks (3 passed)
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
internal/migrations/034-order-book-rewards.sql (2)
165-173: ⚠️ Potential issue | 🟠 Major

Dynamic spread bucketing is skewed by doubled midpoint distance.

`ABS($x_mid - (100 - $x_mid))` doubles the distance from 50¢, but the 30/60/80 thresholds are not adjusted, so markets move to tighter spreads earlier than documented.

Suggested fix

```diff
- $x_spread_base INT := ABS($x_mid - (100 - $x_mid));
+ -- Distance from 50¢ midpoint
+ $x_spread_base INT := ABS($x_mid - 50);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/migrations/034-order-book-rewards.sql` around lines 165 - 173, The spread bucketing uses $x_spread_base := ABS($x_mid - (100 - $x_mid)) which effectively doubles the distance from 50 and causes premature tighter buckets; update the calculation to compute the true distance from midpoint (e.g., set $x_spread_base to ABS($x_mid - 50) or otherwise halve the current value) so the existing thresholds for $x_spread (5/4/3/...) behave as intended; adjust only the $x_spread_base expression (referencing $x_mid and $x_spread) so the rest of the bucket logic remains the same.
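The doubling claim is easy to verify with plain arithmetic; a minimal standalone Go sketch (not project code) compares the migration's expression with the documented intent:

```go
package main

import "fmt"

// spreadBase reproduces the migration's expression: ABS(mid - (100 - mid)).
func spreadBase(mid int) int {
	d := mid - (100 - mid)
	if d < 0 {
		return -d
	}
	return d
}

// distanceFrom50 is the documented intent: the midpoint's distance from 50¢.
func distanceFrom50(mid int) int {
	d := mid - 50
	if d < 0 {
		return -d
	}
	return d
}

func main() {
	for _, mid := range []int{35, 50, 65, 80} {
		// spreadBase is always exactly double distanceFrom50.
		fmt.Printf("mid=%d base=%d intended=%d\n",
			mid, spreadBase(mid), distanceFrom50(mid))
	}
}
```

Since the base is doubled everywhere, either halving the expression or halving every threshold restores the documented tiering; the suggested fix takes the first route.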
262-266: ⚠️ Potential issue | 🟠 Major

Fixed `ORDER BY id ASC LIMIT $market_limit` can starve active markets beyond the first page.

Each run processes the same lowest IDs first. If active markets stay above the limit, higher IDs may never be sampled.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/migrations/034-order-book-rewards.sql` around lines 262 - 266, The current query in the migration (SELECT id FROM ob_queries WHERE settled = false ORDER BY id ASC LIMIT $market_limit) repeatedly picks the lowest IDs and can starve higher-ID markets; change the selection to sample across the active set instead (for Postgres, replace ORDER BY id ASC with ORDER BY RANDOM() so the line becomes SELECT id FROM ob_queries WHERE settled = false ORDER BY RANDOM() LIMIT $market_limit), or alternatively implement a random OFFSET using a counted total if ORDER BY RANDOM() is a performance concern; update the query in the migration where SELECT id FROM ob_queries ... ORDER BY id ASC LIMIT $market_limit appears.
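The starvation pattern is visible outside SQL as well; a hypothetical Go sketch (made-up market IDs and limit) shows that repeatedly taking the first N of a stably ordered set never reaches the tail:

```go
package main

import "fmt"

// firstN mimics ORDER BY id ASC LIMIT n: every run picks the same lowest IDs.
func firstN(active []int, n int) []int {
	if len(active) < n {
		n = len(active)
	}
	return active[:n]
}

func main() {
	active := []int{1, 2, 3, 4, 5, 6, 7, 8} // markets with settled = false
	limit := 3
	seen := map[int]bool{}
	// Three runs in which the low-ID markets never settle between runs.
	for run := 0; run < 3; run++ {
		for _, id := range firstN(active, limit) {
			seen[id] = true
		}
	}
	// Only 3 distinct markets were ever sampled; IDs 4-8 starved.
	fmt.Println(len(seen))
}
```

Randomizing the order per run (the `ORDER BY RANDOM()` suggestion) gives every active market a nonzero chance each run instead.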
🧹 Nitpick comments (3)
tests/streams/order_book/fee_distribution_audit_test.go (1)
346-355: Strengthen zero-fee assertions beyond row count.

Consider also asserting `total_fees_distributed == 0` and `total_lp_count == 0` to catch regressions where a row exists but values are wrong.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@tests/streams/order_book/fee_distribution_audit_test.go` around lines 346 - 355, The test currently only checks row count from platform.Engine.Call invoking "get_distribution_summary"; enhance the callback passed to platform.Engine.Call to inspect the returned row fields and assert that total_fees_distributed == 0 and total_lp_count == 0 in addition to incrementing rowCount—use the row parameter in the anonymous func passed to Engine.Call (the same callback that increments rowCount) to decode or read the "total_fees_distributed" and "total_lp_count" columns and call require.Equal/require.Zero assertions so the test fails if values are non-zero even when a row exists.

internal/migrations/034-order-book-rewards.sql (1)
177-227: Duplicate scoring expression in Step 1 and Step 2 is high-risk for future drift.

The same long formula appears twice; a one-sided edit will silently desync normalization vs participant scoring.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/migrations/034-order-book-rewards.sql` around lines 177 - 227, The long scoring formula duplicated in the global total loop and the participant_score loop should be centralized to avoid drift: extract the repeated expression (the inner multiplication/division using $x_spread, $x_mid and the CASE logic referencing p1 and p2) into a single reusable identifier (e.g., a computed column/alias "score_expr" in a WITH/derived subquery or a PL/pgSQL variable computed once per row) and then replace the two inline occurrences in the SELECTs that compute $global_total_score and participant_score with that identifier (references: $global_total_score, participant_score, p1, p2, $x_spread, $x_mid). Ensure the GROUP BY and SUM still use the same alias so both Step 1 and Step 2 derive from the exact same expression.

internal/migrations/033-order-book-settlement.sql (1)
231-236: Avoid per-index OFFSET lookups when pairing wallets to amounts.

Use `UNNEST($wallet_addresses, $amounts)` in one loop to keep the mapping explicit and linear.
Verify each finding against the current code and only fix it if needed. In `@internal/migrations/033-order-book-settlement.sql` around lines 231 - 236, The loop currently iterates wallets and then uses a separate OFFSET lookup into $amounts (using $w_row, $a_row, $idx and assigning $reward_amount), which is O(n^2) and fragile; change the logic to iterate both arrays simultaneously by UNNESTing them together (UNNEST($wallet_addresses, $amounts)) so you get a single row per pair (wallet, amount) and can assign reward_amount directly from that row, replacing the nested OFFSET SELECT and the $idx-based pairing.
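The Go analogue of multi-array `UNNEST` is a single indexed pass over two parallel slices; a minimal sketch (hypothetical wallet and amount values) of the pairing the prompt describes:

```go
package main

import "fmt"

// pair walks two parallel slices in one pass, producing one (wallet, amount)
// entry per index with no per-index OFFSET-style lookup.
func pair(wallets []string, amounts []int64) (map[string]int64, error) {
	if len(wallets) != len(amounts) {
		return nil, fmt.Errorf("length mismatch: %d wallets, %d amounts",
			len(wallets), len(amounts))
	}
	out := make(map[string]int64, len(wallets))
	for i, w := range wallets {
		out[w] = amounts[i]
	}
	return out, nil
}

func main() {
	rewards, err := pair([]string{"0xa", "0xb"}, []int64{70, 30})
	if err != nil {
		panic(err)
	}
	fmt.Println(rewards["0xa"], rewards["0xb"])
}
```

The explicit length check mirrors what `UNNEST` over two arrays gives you implicitly: the pairing is positional, so a length mismatch is a bug worth surfacing rather than padding over.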
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@internal/migrations/033-order-book-settlement.sql`:
- Around line 179-196: The query currently lets with_remainder emit rows with
final_reward = 0 which later cause inserts to fail; modify the pipeline so only
rows with final_reward > 0 are aggregated—e.g., add a filtering step (or change
the aggregated CTE) to SELECT from with_remainder WHERE final_reward > 0 before
building wallets and amounts arrays; ensure the ARRAY_AGG in aggregated uses the
filtered set so wallets and amounts only include positive final_reward entries
(referencing the with_remainder, aggregated, final_reward, wallets, and amounts
symbols).
- Around line 162-173: The SUM(r.reward_percent)::INT truncates fractional
reward shares early; change the aggregation in participant_totals to keep a
numeric (non-integer) sum (e.g., SUM(r.reward_percent) cast to NUMERIC or left
as NUMERIC with appropriate precision/scale) instead of casting to INT, rename
total_percent_int to a numeric name (e.g., total_percent_numeric) and update
references in calculated_rewards so the base_reward calculation uses the
full-precision total percent when computing (($total_fees *
total_percent_numeric::NUMERIC(...)) / (100::NUMERIC(...) *
$block_count::NUMERIC(...))) to avoid loss of fractional entitlement.
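The zero-fee assertions suggested in the nitpick above can be sketched standalone; this hypothetical Go snippet models the summary row as a map (the real test would read these columns inside the Engine.Call callback with require assertions):

```go
package main

import "fmt"

// checkSummaryRow mirrors the suggested assertions: a zero-fee run should
// still produce a row, and that row's totals should both be zero. Column
// names follow the review comment.
func checkSummaryRow(row map[string]int64) error {
	if v := row["total_fees_distributed"]; v != 0 {
		return fmt.Errorf("total_fees_distributed = %d, want 0", v)
	}
	if v := row["total_lp_count"]; v != 0 {
		return fmt.Errorf("total_lp_count = %d, want 0", v)
	}
	return nil
}

func main() {
	rowCount := 0
	// Stand-in for the rows returned by "get_distribution_summary".
	rows := []map[string]int64{{"total_fees_distributed": 0, "total_lp_count": 0}}
	for _, r := range rows {
		rowCount++
		if err := checkSummaryRow(r); err != nil {
			panic(err)
		}
	}
	fmt.Println(rowCount)
}
```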
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (2)
`deployments/infra/go.sum` is excluded by `!**/*.sum`
`go.sum` is excluded by `!**/*.sum`
📒 Files selected for processing (6)
`deployments/infra/go.mod`
`go.mod`
`internal/migrations/033-order-book-settlement.sql`
`internal/migrations/034-order-book-rewards.sql`
`internal/migrations/036-order-book-audit.sql`
`tests/streams/order_book/fee_distribution_audit_test.go`
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@internal/migrations/034-order-book-rewards.sql`:
- Around line 121-124: The query selecting the best negative-priced bid
currently uses ORDER BY price DESC which returns the value closest to zero;
change the ordering to ORDER BY price ASC in the SELECT inside the loop (the
statement starting with "for $row in SELECT price FROM ob_positions WHERE
query_id = $query_id AND outcome = TRUE AND price < 0 ORDER BY price DESC LIMIT
1") so the row chosen is the most negative (true best) bid; ensure only this
negative-price branch is modified so positive-priced logic remains unchanged.
In `@tests/streams/order_book/lp_rewards_config_test.go`:
- Around line 404-407: The test currently hardcodes maxMarkets = 1000 inside the
case *types.Decimal branch, which masks regressions; replace the hardcoded
assignment by converting the *types.Decimal value to an int and assigning that
to maxMarkets (e.g. extract the decimal value, parse/convert to int64 or int
using the Decimal API or string representation, and set maxMarkets accordingly),
and on parse/convert failure make the test fail with a clear error; update the
case handling for *types.Decimal to parse the returned decimal and assign to
maxMarkets instead of using the literal 1000.
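A standalone sketch of the suggested conversion, assuming the decimal is first rendered to its string form (the real test would call the *types.Decimal String method; that API shape is an assumption here):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// decimalToInt converts a decimal's textual form into an int, failing loudly
// on a nonzero fractional part instead of falling back to a hardcoded value.
func decimalToInt(s string) (int, error) {
	if i := strings.IndexByte(s, '.'); i >= 0 {
		// Allow only a zero fractional part, e.g. "1000.000".
		if strings.Trim(s[i+1:], "0") != "" {
			return 0, fmt.Errorf("non-integer decimal %q", s)
		}
		s = s[:i]
	}
	n, err := strconv.ParseInt(s, 10, 64)
	if err != nil {
		return 0, fmt.Errorf("parse %q: %w", s, err)
	}
	return int(n), nil
}

func main() {
	maxMarkets, err := decimalToInt("1000.000")
	if err != nil {
		panic(err) // in the test, fail with a clear error instead
	}
	fmt.Println(maxMarkets)
}
```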
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
`extensions/tn_lp_rewards/tn_lp_rewards.go`
`internal/migrations/033-order-book-settlement.sql`
`internal/migrations/034-order-book-rewards.sql`
`tests/streams/order_book/fee_distribution_audit_test.go`
`tests/streams/order_book/lp_rewards_config_test.go`
🚧 Files skipped from review as they are similar to previous changes (1)
- tests/streams/order_book/fee_distribution_audit_test.go
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
internal/migrations/034-order-book-rewards.sql (1)
144-151: ⚠️ Potential issue | 🟠 Major

Dynamic spread bucket logic is currently using doubled midpoint distance.

At Line 144, `ABS($x_mid - (100 - $x_mid))` equals `2 * ABS($x_mid - 50)`. With the thresholds at Lines 146/148/150, markets are bucketed more strictly than the documented “distance from 50¢” rules. This changes reward eligibility and spread tiering.

🐛 Proposed fix

```diff
- $x_spread_base INT := ABS($x_mid - (100 - $x_mid));
+ $x_spread_base INT := ABS($x_mid - 50);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/migrations/034-order-book-rewards.sql` around lines 144 - 151, The spread bucket uses $x_spread_base := ABS($x_mid - (100 - $x_mid)) which equals 2 * ABS($x_mid - 50), causing doubled distance; change $x_spread_base to be the actual distance-from-50 (e.g., set $x_spread_base := ABS($x_mid - 50) or divide the existing expression by 2) so the thresholds in the $x_spread branching (the comparisons against 30/60/80) reflect the documented “distance from 50¢” rules and preserve intended reward tiering for $x_spread.
🧹 Nitpick comments (1)
internal/migrations/034-order-book-rewards.sql (1)
160-175: Consider de-duplicating the paired-score expression used in both phases.

The score formula and pairing predicates are repeated in Step 1 and Step 2. A single shared subquery/CTE-style computation would reduce maintenance risk and prevent accidental divergence between denominator and numerator paths.
Also applies to: 190-205
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/migrations/034-order-book-rewards.sql` around lines 160 - 175, The repeated paired-score computation used with ob_positions p1/p2 (the long expression using $x_mid, $x_spread and the CASE WHEN p1.price = 0 THEN 100 - ABS(p2.price) ELSE ABS(p1.price) END logic) should be factored into a single CTE or derived subquery (e.g., WITH paired AS (... SELECT p1.query_id, p1.participant_id, p1.amount, p1.outcome, p2.price AS opp_price, <paired_score> AS score ... FROM ob_positions p1 JOIN ob_positions p2 ...)) and then referenced by name in both Step 1 and Step 2 instead of repeating the numerator/denominator expressions; replace the duplicated ABS($x_mid - (...)) and ($x_spread - ABS(...)) occurrences and the amount multiplication with references to paired.score and paired.amount (and any precomputed boolean predicates) so both phases use the same single source of truth.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
`internal/migrations/034-order-book-rewards.sql`
`tests/streams/order_book/lp_rewards_config_test.go`
🚧 Files skipped from review as they are similar to previous changes (1)
- tests/streams/order_book/lp_rewards_config_test.go
resolves: https://github.com/truflation/website/issues/3400
resolves: trufnetwork/sdk-py#102
Summary by CodeRabbit
Chores
Bug Fixes
New Features
Behavior
Tests