
feat: replay txs from file#12

Open
kakysha wants to merge 1 commit into v2 from feat/replay-from-file

Conversation

Contributor

@kakysha kakysha commented Apr 3, 2026

Adds two more flags to the tx-replay command:

--from-file: path of a file to read sniffed txs from. If omitted, txs are sniffed from RPC in real time.
--to-file: path of a file to store sniffed txs into. If omitted, sniffed txs are replayed in real time.

When replaying from a file, txs are batched into virtual blocks of 100 txs each.

Summary by CodeRabbit

Release Notes

  • New Features

    • Added support for reading transactions from files, enabling offline transaction replay without requiring real-time RPC connectivity.
    • Added support for writing sniffed transactions to files for recording and later analysis.
  • Bug Fixes

    • Fixed variable scope handling in the transaction broadcast phase.

+ sniff into file

coderabbitai bot commented Apr 3, 2026

📝 Walkthrough


This pull request adds file-based transaction sourcing and dumping capabilities to the chain stresser tool. The sniffer can now read transactions from a file instead of querying via RPC in real time, or write transactions to a file instead of replaying them. Configuration flags --from-file and --to-file were added with corresponding validation logic.

Changes

CLI Configuration (cmd/chain-stresser/main.go):
Added --from-file and --to-file flags to the tx-replay command. Updated PreRunE validation to require --sniffer-rpc only when --from-file is unset, to reject simultaneous --from-file and --to-file, and to forbid height flags when replaying from file. Adjusted control flow to skip chain client/payload provider initialization when writing to file.

Configuration Struct (replay/replay.go):
Extended TxReplayConfig with a FromFile field (read sniffed transactions from file; if unset, sniff from RPC in real time) and a ToFile field (write sniffed transactions to file; if unset, replay in real time).

Sniffer Implementation (replay/sniffer.go):
Introduced conditional file-based transaction sourcing and dumping. NewSniffer creates an RPC client only when FromFile is unset; otherwise it opens FromFile for reading. If ToFile is set, it opens that file for writing. In run(), the RPC block fetch was replaced with a conditional branch: when FromFile is set, virtual blocks are constructed from the file; otherwise real blocks are fetched via RPC. Added a processBlock() helper for transaction dumping and readTxFromFile() for length-prefixed reads. A new constant BlockFromFileTxNum = 100 controls batch size.

Loop Variable Closure (stresser.go):
Removed local variable re-assignments (accountTxs, accountIdx) in the broadcast loop before spawning per-account goroutines, addressing a loop variable closure capture issue.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Poem

🐰 From files we read, to files we write,
No RPC calls through the night,
Transactions flow in virtual streams,
While loop closures now fulfill their dreams!
Chain stressing reaches new height! ✨

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

Docstring Coverage: ⚠️ Warning. Docstring coverage is 0.00%, below the required 80.00% threshold. Resolution: write docstrings for the functions missing them.

✅ Passed checks (2 passed)

Description Check: ✅ Passed (skipped because CodeRabbit's high-level summary is enabled).
Title Check: ✅ Passed. The title 'feat: replay txs from file' directly summarizes the main change: adding file-based transaction replay with --from-file and --to-file flags.


@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@cmd/chain-stresser/main.go`:
- Around line 572-575: The dump-only branch currently waits for <-sniffer.Done()
and returns nil, which masks sniffer failures; after waiting on sniffer.Done()
check sniffer.Errors() and if it returns a non-nil error return that error
instead of nil so failures (e.g., exhausted RPC retries) propagate; update the
branch that checks replayCfg.ToFile and uses sniffer.Done() to inspect
sniffer.Errors() and return it when present.

In `@replay/sniffer.go`:
- Around line 125-140: The code that builds a virtual block from s.cfg.FromFile
uses a nil block sentinel when no txs are read, but because EndHeight==0 the
outer loop never exits; instead of setting block = nil when len(block.Txs) == 0,
stop the producer loop immediately: when readTxFromFile yields no txs (len==0),
break out of the surrounding production loop or return from the producing method
so Done() can close cleanly; adjust the logic around
BlockFromFileTxNum/readTxFromFile to return/terminate rather than relying on a
nil block sentinel.
- Around line 207-229: In readTxFromFile (method on Sniffer) replace the two raw
s.txsFile.Read calls with io.ReadFull to ensure buffers are completely filled
and check the returned error; treat a zero-byte length read as io.EOF and
propagate other read errors; validate the decoded uint64 length against a
reasonable maximum (e.g., a configured MaxTxSize or a constant) before
allocating txnBuf to avoid unbounded allocations, then use io.ReadFull to read
the txnBuf and return any read errors instead of ignoring them.
ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 80f96cc8-f627-4421-8e5c-3433dd5d9602

📥 Commits

Reviewing files that changed from the base of the PR and between dcbbecb and d30be55.

📒 Files selected for processing (4)
  • cmd/chain-stresser/main.go
  • replay/replay.go
  • replay/sniffer.go
  • stresser.go
💤 Files with no reviewable changes (1)
  • stresser.go

Comment on lines +572 to +575
if replayCfg.ToFile != "" { // we just dumping txs, no replay needed
	<-sniffer.Done()
	return nil
}

⚠️ Potential issue | 🟠 Major

Return sniffer failures in dump-only mode.

This branch only waits for Done() and then returns nil. If the sniffer exits after exhausting RPC retries, the command still succeeds and can leave a partial dump behind. Check sniffer.Errors() after Done() and return it when present.

Suggested fix
 			if replayCfg.ToFile != "" { // we just dumping txs, no replay needed
 				<-sniffer.Done()
+				select {
+				case err, ok := <-sniffer.Errors():
+					if ok && err != nil {
+						return err
+					}
+				default:
+				}
 				return nil
 			}

Comment on lines +125 to +140
if s.cfg.FromFile != "" { // read BlockFromFileTxNum txs from file and construct virtual block
	block = &comettypes.Block{
		Data: comettypes.Data{
			Txs: make([]comettypes.Tx, 0, BlockFromFileTxNum),
		},
	}
	for i := 0; i < BlockFromFileTxNum; i++ {
		tx, err := s.readTxFromFile()
		if err != nil { // end of file or err
			break
		}
		block.Txs = append(block.Txs, tx)
	}

	if len(block.Txs) == 0 {
		block = nil // signal the end

⚠️ Potential issue | 🔴 Critical

Stop the producer after the last file batch.

With --from-file, Line 140 only swaps in a nil block and still returns success. Because this mode is also EndHeight == 0, the outer loop keeps running and Done() never closes cleanly. Break after the final non-empty batch instead of using nil as a sentinel.


Comment on lines +207 to +229
func (s *Sniffer) readTxFromFile() ([]byte, error) {
	var (
		lenN   int
		txnN   int
		len    uint64
		txnBuf []byte
	)

	lenBuf := make([]byte, 8)
	lenN, _ = s.txsFile.Read(lenBuf)

	if lenN > 0 {
		len = binary.LittleEndian.Uint64(lenBuf)

		txnBuf = make([]byte, len)
		txnN, _ = s.txsFile.Read(txnBuf)
	}

	if lenN < 8 || uint64(txnN) < len {
		return nil, fmt.Errorf("sniffer error: can't read tx from file")
	}

	return txnBuf, nil

⚠️ Potential issue | 🟠 Major


Use io.ReadFull and validate lengths before allocating slices.

A single Read on an io.Reader is not required to fill the buffer. Ignoring the returned errors here can cause short reads to be mishandled as success, silently accept truncated records as EOF, and line 221 can allocate arbitrarily large slices from an unvalidated length prefix. Replace the two Read calls with io.ReadFull, return io.EOF only when the length read returns zero bytes, and validate the transaction length against a reasonable maximum before allocating.

