
chore: make reward distribution atomic#1323

Merged
MicBun merged 3 commits into main from atomicLP on Mar 3, 2026

Conversation


@MicBun MicBun commented Mar 3, 2026

resolves: https://github.com/truflation/website/issues/3400
resolves: trufnetwork/sdk-py#102

Summary by CodeRabbit

  • Chores

    • Bumped Go toolchain to 1.25.3 and upgraded several dependencies.
  • Bug Fixes

    • Audit records now created and persisted for fee distributions in edge cases (zero fees, no LPs).
    • Reordered distribution and cleanup steps to preserve auditing and data consistency.
  • New Features

    • Added a batch sampling action to sample rewards across markets.
  • Behavior

    • Increased default max markets processed per run from 50 to 1000.
  • Tests

    • Updated tests to reflect audit behavior and new default limits.

@MicBun MicBun requested a review from pr-time-tracker March 3, 2026 10:07
@MicBun MicBun self-assigned this Mar 3, 2026
@holdex

holdex bot commented Mar 3, 2026

Time Submission Status

Member | Status | Time | Last Update
MicBun | ✅ Submitted | 4h | Mar 3, 2026, 3:14 PM

You can submit time with the command. Example:

@holdex pr submit-time 15m

See available commands to help comply with our Guidelines.


coderabbitai bot commented Mar 3, 2026

📝 Walkthrough

Walkthrough

Updates Go toolchain and module versions; always records fee-distribution audit rows (including zero-fee / zero-LP cases); refactors LP reward sampling to a global scoring + normalized insert flow; adjusts audit constraints; increases default max markets per run and updates tests/config parsing.

Changes

  • Go Toolchain & Dependencies (deployments/infra/go.mod, go.mod): Bumps the Go toolchain to 1.25.3 and updates multiple modules (testify, x/*, x/tools). The root go.mod also updates github.com/trufnetwork/kwil-db pseudo-version timestamps.
  • Order-Book Settlement (internal/migrations/033-order-book-settlement.sql): Removes early exits for zero fees / no samples; always creates a distribution summary row; consolidates reward calculation; builds wallets/amounts arrays; records actual_fees_distributed and lp_count; performs a batch collateral unlock and then cleans processed rewards.
  • Order-Book Reward Calculation (internal/migrations/034-order-book-rewards.sql): Replaces per-row scoring with a global total-score plus normalized per-participant insert; adds sample_all_active_lp_rewards($block, $market_limit) to batch-sample markets; guards against missing two-sided liquidity and non-positive totals.
  • Order-Book Audit Schema (internal/migrations/036-order-book-audit.sql): Replaces anonymous CHECKs with named CONSTRAINTs; relaxes the total_lp_count and block_count checks from > 0 to >= 0 to allow zero-value audit rows.
  • Tests: Fee Distribution & Config (tests/streams/order_book/fee_distribution_audit_test.go, tests/streams/order_book/lp_rewards_config_test.go): Updates tests to expect audit rows for zero-fee / no-LP cases and to parse and check max_markets_per_run as 1000 (decimal parsing added).
  • Extension Default (extensions/tn_lp_rewards/tn_lp_rewards.go): Changes the exported constant DefaultMaxMarketsPerRun from 50 to 1000.
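The global total-score plus normalized-insert flow described for 034-order-book-rewards.sql can be sketched as follows. Wallet addresses and score values here are invented for illustration; the real computation runs in SQL over the order-book positions:

```go
package main

import "fmt"

// normalizeRewards returns each wallet's reward percent as score/total*100.
// It returns nil when the total score is not positive, mirroring the
// migration's guard against non-positive totals.
func normalizeRewards(scores map[string]int64) map[string]float64 {
	var total int64
	for _, s := range scores {
		total += s
	}
	if total <= 0 {
		return nil
	}
	out := make(map[string]float64, len(scores))
	for w, s := range scores {
		out[w] = float64(s) * 100 / float64(total)
	}
	return out
}

func main() {
	// Hypothetical per-LP liquidity scores for a single market.
	scores := map[string]int64{"0xaaa": 30, "0xbbb": 50, "0xccc": 20}
	for w, p := range normalizeRewards(scores) {
		fmt.Printf("%s %.1f\n", w, p)
	}
}
```

Computing one global denominator and then inserting each participant's normalized share in a single pass is what lets the batch insert stay consistent: every percent is derived from the same total.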

Sequence Diagram(s)

sequenceDiagram
    participant Client
    participant Settlement as Settlement Handler (033)
    participant Rewards as Reward Sampler (034)
    participant DB as Database
    participant Audit as Audit Logger (036)

    Client->>Settlement: distribute_fees(block)
    Settlement->>Rewards: sample_lp_rewards / sample_all_active_lp_rewards(block)
    Rewards->>DB: compute global scores per market
    DB-->>Rewards: total_scores
    Rewards->>DB: insert normalized per-participant reward_percent (batch)
    Rewards-->>Settlement: reward_percent data
    Settlement->>Settlement: compute per-wallet amounts & summary
    Settlement->>DB: insert ob_fee_distributions (always)
    Settlement->>DB: ob_batch_unlock_collateral(wallets, amounts)
    alt rewards exist
        Settlement->>DB: insert ob_fee_distribution_details (per-LP)
    else no rewards
        Settlement->>DB: leave details empty (distribution row with zeros)
    end
    Settlement->>DB: delete processed ob_rewards
    Settlement-->>Client: complete

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Suggested reviewers

  • pr-time-tracker

Poem

🐇 I hopped through SQL, counting fees with care,

Audits now sit, even when zeros are there.
Scores summed and shared, markets sampled wide,
A thousand hops per run — carrot dreams supplied! 🥕

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
Check name | Status | Explanation
Description Check | ✅ Passed | Check skipped: CodeRabbit's high-level summary is enabled.
Title check | ✅ Passed | The title 'chore: make reward distribution atomic' directly reflects the main change: refactoring reward distribution logic across multiple migration files and the LP rewards extension to ensure atomicity.
Docstring Coverage | ✅ Passed | Docstring coverage is 100.00%, which is sufficient; the required threshold is 80.00%.


@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
internal/migrations/034-order-book-rewards.sql (2)

165-173: ⚠️ Potential issue | 🟠 Major

Dynamic spread bucketing is skewed by doubled midpoint distance.

ABS($x_mid - (100 - $x_mid)) doubles distance from 50¢, but the 30/60/80 thresholds are not adjusted, so markets move to tighter spreads earlier than documented.

Suggested fix
-    $x_spread_base INT := ABS($x_mid - (100 - $x_mid));
+    -- Distance from 50¢ midpoint
+    $x_spread_base INT := ABS($x_mid - 50);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/migrations/034-order-book-rewards.sql` around lines 165 - 173, The
spread bucketing uses $x_spread_base := ABS($x_mid - (100 - $x_mid)) which
effectively doubles the distance from 50 and causes premature tighter buckets;
update the calculation to compute the true distance from midpoint (e.g., set
$x_spread_base to ABS($x_mid - 50) or otherwise halve the current value) so the
existing thresholds for $x_spread (5/4/3/...) behave as intended; adjust only
the $x_spread_base expression (referencing $x_mid and $x_spread) so the rest of
the bucket logic remains the same.
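A quick check of the arithmetic behind this finding (the midpoint values are illustrative; the real expression lives in the migration):

```go
package main

import "fmt"

func abs(x int) int {
	if x < 0 {
		return -x
	}
	return x
}

// spreadBase is the expression currently in the migration; distFrom50 is
// the documented "distance from 50¢" it was presumably meant to compute.
func spreadBase(mid int) int { return abs(mid - (100 - mid)) }
func distFrom50(mid int) int { return abs(mid - 50) }

func main() {
	for _, mid := range []int{10, 30, 50, 65, 90} {
		fmt.Println(mid, spreadBase(mid), distFrom50(mid))
	}
}
```

For every midpoint, spreadBase(mid) == 2*distFrom50(mid), so a market at 65¢ is bucketed as if it were 30 away from the midpoint rather than 15, crossing the 30/60/80 thresholds twice as early as documented.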

262-266: ⚠️ Potential issue | 🟠 Major

Fixed ORDER BY id ASC LIMIT $market_limit can starve active markets beyond the first page.

Each run processes the same lowest IDs first. If active markets stay above the limit, higher IDs may never be sampled.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/migrations/034-order-book-rewards.sql` around lines 262 - 266, The
current query in the migration (SELECT id FROM ob_queries WHERE settled = false
ORDER BY id ASC LIMIT $market_limit) repeatedly picks the lowest IDs and can
starve higher-ID markets; change the selection to sample across the active set
instead (for Postgres, replace ORDER BY id ASC with ORDER BY RANDOM() so the
line becomes SELECT id FROM ob_queries WHERE settled = false ORDER BY RANDOM()
LIMIT $market_limit), or alternatively implement a random OFFSET using a counted
total if ORDER BY RANDOM() is a performance concern; update the query in the
migration where SELECT id FROM ob_queries ... ORDER BY id ASC LIMIT
$market_limit appears.
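The starvation effect is easy to simulate. The market count, run count, and limit below are invented for illustration:

```go
package main

import (
	"fmt"
	"math/rand"
)

// coverage reports how many distinct market IDs get sampled across several
// runs when each run takes `limit` of `total` active markets, either in
// fixed ascending-ID order or in shuffled (random) order.
func coverage(runs, total, limit int, randomize bool) int {
	seen := map[int]bool{}
	for r := 0; r < runs; r++ {
		ids := make([]int, total)
		for i := range ids {
			ids[i] = i + 1
		}
		if randomize {
			rand.Shuffle(len(ids), func(i, j int) { ids[i], ids[j] = ids[j], ids[i] })
		}
		for _, id := range ids[:limit] { // LIMIT $market_limit
			seen[id] = true
		}
	}
	return len(seen)
}

func main() {
	// Fixed ordering resamples the same lowest 1000 IDs every run.
	fmt.Println(coverage(10, 2000, 1000, false))
	// Random ordering spreads sampling across the whole active set.
	fmt.Println(coverage(10, 2000, 1000, true))
}
```

With fixed ordering, markets 1001-2000 are never sampled no matter how many runs execute; random ordering covers nearly the full set after a few runs.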
🧹 Nitpick comments (3)
tests/streams/order_book/fee_distribution_audit_test.go (1)

346-355: Strengthen zero-fee assertions beyond row count.

Consider also asserting total_fees_distributed == 0 and total_lp_count == 0 to catch regressions where a row exists but values are wrong.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/streams/order_book/fee_distribution_audit_test.go` around lines 346 -
355, The test currently only checks row count from platform.Engine.Call invoking
"get_distribution_summary"; enhance the callback passed to platform.Engine.Call
to inspect the returned row fields and assert that total_fees_distributed == 0
and total_lp_count == 0 in addition to incrementing rowCount—use the row
parameter in the anonymous func passed to Engine.Call (the same callback that
increments rowCount) to decode or read the "total_fees_distributed" and
"total_lp_count" columns and call require.Equal/require.Zero assertions so the
test fails if values are non-zero even when a row exists.
internal/migrations/034-order-book-rewards.sql (1)

177-227: Duplicate scoring expression in Step 1 and Step 2 is high-risk for future drift.

The same long formula appears twice; a one-sided edit will silently desync normalization vs participant scoring.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/migrations/034-order-book-rewards.sql` around lines 177 - 227, The
long scoring formula duplicated in the global total loop and the
participant_score loop should be centralized to avoid drift: extract the
repeated expression (the inner multiplication/division using $x_spread, $x_mid
and the CASE logic referencing p1 and p2) into a single reusable identifier
(e.g., a computed column/alias "score_expr" in a WITH/derived subquery or a
PL/pgSQL variable computed once per row) and then replace the two inline
occurrences in the SELECTs that compute $global_total_score and
participant_score with that identifier (references: $global_total_score,
participant_score, p1, p2, $x_spread, $x_mid). Ensure the GROUP BY and SUM still
use the same alias so both Step 1 and Step 2 derive from the exact same
expression.
internal/migrations/033-order-book-settlement.sql (1)

231-236: Avoid per-index OFFSET lookups when pairing wallets to amounts.

Use UNNEST($wallet_addresses, $amounts) in one loop to keep the mapping explicit and linear.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/migrations/033-order-book-settlement.sql` around lines 231 - 236,
The loop currently iterates wallets and then uses a separate OFFSET lookup into
$amounts (using $w_row, $a_row, $idx and assigning $reward_amount), which is
O(n^2) and fragile; change the logic to iterate both arrays simultaneously by
UNNESTing them together (UNNEST($wallet_addresses, $amounts)) so you get a
single row per pair (wallet, amount) and can assign reward_amount directly from
that row, replacing the nested OFFSET SELECT and the $idx-based pairing.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@internal/migrations/033-order-book-settlement.sql`:
- Around line 179-196: The query currently lets with_remainder emit rows with
final_reward = 0 which later cause inserts to fail; modify the pipeline so only
rows with final_reward > 0 are aggregated—e.g., add a filtering step (or change
the aggregated CTE) to SELECT from with_remainder WHERE final_reward > 0 before
building wallets and amounts arrays; ensure the ARRAY_AGG in aggregated uses the
filtered set so wallets and amounts only include positive final_reward entries
(referencing the with_remainder, aggregated, final_reward, wallets, and amounts
symbols).
- Around line 162-173: The SUM(r.reward_percent)::INT truncates fractional
reward shares early; change the aggregation in participant_totals to keep a
numeric (non-integer) sum (e.g., SUM(r.reward_percent) cast to NUMERIC or left
as NUMERIC with appropriate precision/scale) instead of casting to INT, rename
total_percent_int to a numeric name (e.g., total_percent_numeric) and update
references in calculated_rewards so the base_reward calculation uses the
full-precision total percent when computing (($total_fees *
total_percent_numeric::NUMERIC(...)) / (100::NUMERIC(...) *
$block_count::NUMERIC(...))) to avoid loss of fractional entitlement.


ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between d52c42c and b7c4d59.

⛔ Files ignored due to path filters (2)
  • deployments/infra/go.sum is excluded by !**/*.sum
  • go.sum is excluded by !**/*.sum
📒 Files selected for processing (6)
  • deployments/infra/go.mod
  • go.mod
  • internal/migrations/033-order-book-settlement.sql
  • internal/migrations/034-order-book-rewards.sql
  • internal/migrations/036-order-book-audit.sql
  • tests/streams/order_book/fee_distribution_audit_test.go

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@internal/migrations/034-order-book-rewards.sql`:
- Around line 121-124: The query selecting the best negative-priced bid
currently uses ORDER BY price DESC which returns the value closest to zero;
change the ordering to ORDER BY price ASC in the SELECT inside the loop (the
statement starting with "for $row in SELECT price FROM ob_positions WHERE
query_id = $query_id AND outcome = TRUE AND price < 0 ORDER BY price DESC LIMIT
1") so the row chosen is the most negative (true best) bid; ensure only this
negative-price branch is modified so positive-priced logic remains unchanged.

In `@tests/streams/order_book/lp_rewards_config_test.go`:
- Around line 404-407: The test currently hardcodes maxMarkets = 1000 inside the
case *types.Decimal branch, which masks regressions; replace the hardcoded
assignment by converting the *types.Decimal value to an int and assigning that
to maxMarkets (e.g. extract the decimal value, parse/convert to int64 or int
using the Decimal API or string representation, and set maxMarkets accordingly),
and on parse/convert failure make the test fail with a clear error; update the
case handling for *types.Decimal to parse the returned decimal and assign to
maxMarkets instead of using the literal 1000.

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between b7c4d59 and 45ca2a1.

📒 Files selected for processing (5)
  • extensions/tn_lp_rewards/tn_lp_rewards.go
  • internal/migrations/033-order-book-settlement.sql
  • internal/migrations/034-order-book-rewards.sql
  • tests/streams/order_book/fee_distribution_audit_test.go
  • tests/streams/order_book/lp_rewards_config_test.go
🚧 Files skipped from review as they are similar to previous changes (1)
  • tests/streams/order_book/fee_distribution_audit_test.go

@coderabbitai coderabbitai bot left a comment

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
internal/migrations/034-order-book-rewards.sql (1)

144-151: ⚠️ Potential issue | 🟠 Major

Dynamic spread bucket logic is currently using doubled midpoint distance.

At Line 144, ABS($x_mid - (100 - $x_mid)) equals 2 * ABS($x_mid - 50). With thresholds at Lines 146/148/150, markets are bucketed more strictly than the documented “distance from 50¢” rules. This changes reward eligibility and spread tiering.

🐛 Proposed fix
-    $x_spread_base INT := ABS($x_mid - (100 - $x_mid));
+    $x_spread_base INT := ABS($x_mid - 50);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/migrations/034-order-book-rewards.sql` around lines 144 - 151, The
spread bucket uses $x_spread_base := ABS($x_mid - (100 - $x_mid)) which equals 2
* ABS($x_mid - 50), causing doubled distance; change $x_spread_base to be the
actual distance-from-50 (e.g., set $x_spread_base := ABS($x_mid - 50) or divide
the existing expression by 2) so the thresholds in the $x_spread branching (the
comparisons against 30/60/80) reflect the documented “distance from 50¢” rules
and preserve intended reward tiering for $x_spread.
🧹 Nitpick comments (1)
internal/migrations/034-order-book-rewards.sql (1)

160-175: Consider de-duplicating the paired-score expression used in both phases.

The score formula and pairing predicates are repeated in Step 1 and Step 2. A single shared subquery/CTE-style computation would reduce maintenance risk and prevent accidental divergence between denominator and numerator paths.

Also applies to: 190-205

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/migrations/034-order-book-rewards.sql` around lines 160 - 175, The
repeated paired-score computation used with ob_positions p1/p2 (the long
expression using $x_mid, $x_spread and the CASE WHEN p1.price = 0 THEN 100 -
ABS(p2.price) ELSE ABS(p1.price) END logic) should be factored into a single CTE
or derived subquery (e.g., WITH paired AS (... SELECT p1.query_id,
p1.participant_id, p1.amount, p1.outcome, p2.price AS opp_price, <paired_score>
AS score ... FROM ob_positions p1 JOIN ob_positions p2 ...)) and then referenced
by name in both Step 1 and Step 2 instead of repeating the numerator/denominator
expressions; replace the duplicated ABS($x_mid - (...)) and ($x_spread -
ABS(...)) occurrences and the amount multiplication with references to
paired.score and paired.amount (and any precomputed boolean predicates) so both
phases use the same single source of truth.

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 45ca2a1 and 123173e.

📒 Files selected for processing (2)
  • internal/migrations/034-order-book-rewards.sql
  • tests/streams/order_book/lp_rewards_config_test.go
🚧 Files skipped from review as they are similar to previous changes (1)
  • tests/streams/order_book/lp_rewards_config_test.go

@MicBun MicBun merged commit 5237f77 into main on Mar 3, 2026
9 checks passed
@MicBun MicBun deleted the atomicLP branch March 3, 2026 15:14


Development

Successfully merging this pull request may close these issues.

Problem: LP rewards not flowing on testnet

1 participant