Changes from all commits
665 commits
00f02ad
refactor: replace JSONB columns with explicit typed columns in state_…
aditya1702 Jan 30, 2026
266433a
refactor: update StateChange struct to use explicit typed fields
aditya1702 Jan 30, 2026
2f1c2ae
refactor: update StateChangeBuilder for new typed schema
aditya1702 Jan 30, 2026
b153a37
refactor: update processors to use new typed schema
aditya1702 Jan 30, 2026
325becd
refactor: update data layer for new typed state_changes schema
aditya1702 Jan 30, 2026
6a46a31
refactor: update GraphQL resolvers and tests for new typed schema
aditya1702 Jan 30, 2026
3c0acf5
Update types.go
aditya1702 Jan 30, 2026
8bfdebc
change typ0
aditya1702 Jan 30, 2026
b8cdb5d
Add resolvers for cb id and lp id
aditya1702 Jan 30, 2026
3027c5a
Update queries.go
aditya1702 Jan 30, 2026
625b900
Update statechange.go
aditya1702 Jan 30, 2026
e48c293
Update types.go
aditya1702 Jan 30, 2026
409835c
remove offer ID
aditya1702 Jan 30, 2026
13f2368
Change ID -> Id
aditya1702 Jan 30, 2026
f712167
Update data_validation_test.go
aditya1702 Jan 30, 2026
1ab6a1e
Add sponsored data field for reserves change
aditya1702 Jan 30, 2026
5ba65c2
Fix batch copy
aditya1702 Jan 30, 2026
83e05e5
Add LP ID for balance auth change
aditya1702 Jan 30, 2026
72ce607
remove keyValue from balance auth change
aditya1702 Jan 30, 2026
7e3cd12
Update statechange.graphqls
aditya1702 Jan 30, 2026
28e8ed1
Add liquidity pool ID field to trustline change
aditya1702 Jan 31, 2026
a5c7b65
Update effects_horizon.go
aditya1702 Jan 31, 2026
b834e24
fix failing test
aditya1702 Jan 31, 2026
5a953de
fix test
aditya1702 Feb 1, 2026
b07c821
remove keyvalue for reserves change
aditya1702 Feb 1, 2026
72ac8b0
fix failing test - 2
aditya1702 Feb 1, 2026
31d9c57
Update effects.go
aditya1702 Feb 1, 2026
f39aafc
Add constraint checks
aditya1702 Feb 1, 2026
1e4d338
Change transactions PK from hash to to_id, update transactions_accoun…
aditya1702 Feb 1, 2026
573cef0
Rename AccountWithTxHash to AccountWithToID with tx_to_id column
aditya1702 Feb 1, 2026
7302397
Update IndexerBuffer to track participants by ToID instead of TxHash
aditya1702 Feb 1, 2026
116b18d
Update transactions data layer to use to_id for transactions_accounts FK
aditya1702 Feb 1, 2026
e0a9aa9
Rename BatchGetByTxHashes to BatchGetByToIDs in accounts data layer
aditya1702 Feb 1, 2026
bb23e58
Update ingest service to use ToID for transaction participants
aditya1702 Feb 1, 2026
6311a01
Update GraphQL dataloaders to use ToID for transaction accounts
aditya1702 Feb 1, 2026
10cc35b
Update transaction resolver to use ToID for accounts loader
aditya1702 Feb 1, 2026
e5fc412
Update integration test helpers to use tx_to_id for joins
aditya1702 Feb 1, 2026
2fcec72
fix make check
aditya1702 Feb 1, 2026
47b76f9
fix failing resolvers test
aditya1702 Feb 1, 2026
b42a661
Remove tx_hash column and index from operations migration
aditya1702 Feb 1, 2026
da23caa
Remove TxHash field from Operation struct
aditya1702 Feb 1, 2026
4e4d47b
Remove TxHash assignment from ConvertOperation
aditya1702 Feb 1, 2026
6998704
Remove tx_hash from BatchInsert and BatchCopy operations
aditya1702 Feb 1, 2026
96ededa
Replace BatchGetByTxHash with BatchGetByToID operations queries
aditya1702 Feb 1, 2026
5a9a4c7
Update GraphQL dataloaders to use ToID instead of TxHash
aditya1702 Feb 1, 2026
99d4369
Update transaction resolver to use ToID for operations loading
aditya1702 Feb 1, 2026
7c7a22a
Update operations_test.go to use ToID-based queries
aditya1702 Feb 1, 2026
4e502d9
Remove TxHash from Operation in indexer_buffer_test.go
aditya1702 Feb 1, 2026
a6e7c29
Update GraphQL resolver tests to use ToID for operations
aditya1702 Feb 1, 2026
4686772
Fix remaining test files and update BatchGetByOperationIDs to use TOID
aditya1702 Feb 1, 2026
db01b6d
Fix remaining test data to use proper operation IDs and remove tx_has…
aditya1702 Feb 1, 2026
013bccc
Update ingest_test.go
aditya1702 Feb 1, 2026
e65e68c
Update 2025-06-10.3-create_indexer_table_operations.sql
aditya1702 Feb 1, 2026
dc00ea3
Remove tx_hash column from state_changes migration
aditya1702 Feb 1, 2026
cb3a14e
Remove TxHash field from StateChange struct
aditya1702 Feb 1, 2026
0213054
Remove txHash parameter from NewStateChangeBuilder
aditya1702 Feb 1, 2026
803e86d
Update processors to remove txHash from NewStateChangeBuilder calls
aditya1702 Feb 1, 2026
17f3536
Remove tx_hash from BatchInsert and BatchCopy methods
aditya1702 Feb 1, 2026
2ac4a85
Rename BatchGetByTxHash/TxHashes to BatchGetByToID/ToIDs
aditya1702 Feb 1, 2026
4bfe8b4
Rename StateChangesByTxHashLoader to StateChangesByToIDLoader
aditya1702 Feb 1, 2026
7763eb7
Update transaction resolver to use ToID for state changes
aditya1702 Feb 1, 2026
a5fdd2d
Update tests to remove TxHash references
aditya1702 Feb 1, 2026
b608656
Fix additional test files - remove TxHash references
aditya1702 Feb 1, 2026
e8e1ae9
Remove unused txHash parameter from generateTestStateChanges
aditya1702 Feb 1, 2026
039dc35
Fix BatchCopy test: use valid to_id for sc3 state change
aditya1702 Feb 1, 2026
cf8e1c0
Fix FK constraint violations in state_changes tests
aditya1702 Feb 1, 2026
beb871e
Fix BatchGetByStateChangeIDs to join on to_id instead of tx_hash
aditya1702 Feb 1, 2026
e873498
Fix test expectation: state change to_id maps to transaction to_id
aditya1702 Feb 1, 2026
18589ee
Remove tx_hash from operations_test.go state changes INSERT
aditya1702 Feb 1, 2026
a6f6ac3
Remove tx_hash from accounts_test.go state changes INSERT
aditya1702 Feb 1, 2026
0b47039
Update accounts_test.go
aditya1702 Feb 1, 2026
c97772a
Update internal/data/transactions.go
aditya1702 Feb 1, 2026
92b65a0
Update ingest_test.go
aditya1702 Feb 1, 2026
7795e37
Add documentation for explaining the TOID bitmasking
aditya1702 Feb 1, 2026
c358be6
Merge branch 'txhash-remove-ops' into txhash-remove-statechanges
aditya1702 Feb 1, 2026
ffb8ade
fix tests - 1
aditya1702 Feb 1, 2026
0b33e31
Add operation_id to state_changes primary key
aditya1702 Feb 1, 2026
3b8f21c
Add OperationID to StateChangeCursor struct
aditya1702 Feb 1, 2026
3159523
Change ToID to always store transaction to_id
aditya1702 Feb 1, 2026
dca7e9a
Update statechanges.go for 3-part primary key
aditya1702 Feb 1, 2026
30b2104
Update BatchGetByStateChangeIDs to use 3-part tuple format
aditya1702 Feb 1, 2026
57fe265
Update parseStateChangeIDs to return 3 slices
aditya1702 Feb 1, 2026
b14f72a
Update dataloaders to use 3-part state change ID format
aditya1702 Feb 1, 2026
8e81d18
Update GraphQL resolvers for 3-part cursor and stateChangeID format
aditya1702 Feb 1, 2026
6a3a3d3
Update tests for 3-part state change ID format
aditya1702 Feb 1, 2026
e7bb990
remove TxID column
aditya1702 Feb 1, 2026
0ff1c87
Fix tests
aditya1702 Feb 1, 2026
10dd1e5
Update processors_test_utils.go
aditya1702 Feb 1, 2026
ebb008b
rename migration file
aditya1702 Feb 2, 2026
c06fe20
Merge branch 'txhash-remove-txns' into txhash-remove-ops
aditya1702 Feb 2, 2026
b3f22df
rename operations migration file
aditya1702 Feb 2, 2026
cef0fa7
Merge branch 'txhash-remove-ops' into txhash-remove-statechanges
aditya1702 Feb 2, 2026
cfb2c82
rename migration file
aditya1702 Feb 2, 2026
633341c
Update 2025-06-10.3-operations.sql
aditya1702 Feb 2, 2026
6c44225
Merge branch 'txhash-remove-ops' into txhash-remove-statechanges
aditya1702 Feb 2, 2026
2d0ebf0
Update indexes
aditya1702 Feb 2, 2026
4531c59
Remove unused DB table
aditya1702 Feb 2, 2026
6e0c696
remove redundant indexes
aditya1702 Feb 2, 2026
120b252
Add StellarAddress type and update migrations to use BYTEA
aditya1702 Feb 3, 2026
fe40b54
Update accounts.go for BYTEA stellar_address column
aditya1702 Feb 3, 2026
dee8de5
Update transactions.go for BYTEA account_id column
aditya1702 Feb 3, 2026
12b192b
Update query_utils.go for BYTEA account_id conversion
aditya1702 Feb 3, 2026
a5df424
Add tests for StellarAddress Scan/Value methods
aditya1702 Feb 3, 2026
f8aca24
Fix test failures and IsAccountFeeBumpEligible query
aditya1702 Feb 3, 2026
da4c44e
Fix tests
aditya1702 Feb 4, 2026
f23fd75
StellarAddress -> AddressBytea
aditya1702 Feb 4, 2026
096a08e
Change operations_accounts.account_id from TEXT to BYTEA
aditya1702 Feb 4, 2026
705d2a4
Simplify AddressBytea.Scan() to only handle BYTEA
aditya1702 Feb 4, 2026
dacec16
Remove AccountIDBytea from query_utils.go
aditya1702 Feb 4, 2026
8487254
Remove AccountIDBytea field from operations.go and transactions.go
aditya1702 Feb 4, 2026
fa60b23
Update BatchCopy to write account_id as BYTEA
aditya1702 Feb 4, 2026
ac3b07e
Update backfill_helpers.go to use BYTEA for account_id
aditya1702 Feb 4, 2026
331461b
Update test files for BYTEA operations_accounts.account_id
aditya1702 Feb 4, 2026
01b06d3
Update BatchInsert to write account_id as BYTEA
aditya1702 Feb 4, 2026
9021603
Fix operations tests for BYTEA account_id
aditya1702 Feb 4, 2026
db17bf3
Fix make check issues
aditya1702 Feb 4, 2026
ba42d49
Change state_changes account_id columns from TEXT to BYTEA
aditya1702 Feb 4, 2026
9c1e726
Add pgtypeBytesFromNullStringAddress helper for BYTEA address conversion
aditya1702 Feb 4, 2026
5a10c3b
Change StateChange.AccountID type from string to AddressBytea
aditya1702 Feb 4, 2026
e209dc5
Update BatchInsert to write account_id columns as BYTEA
aditya1702 Feb 4, 2026
c6c22e9
Update BatchCopy to write account_id columns as BYTEA
aditya1702 Feb 4, 2026
3bbc0a7
Update BatchGetByAccountAddress to query BYTEA account_id
aditya1702 Feb 4, 2026
e4199c3
Fix all tests
aditya1702 Feb 4, 2026
4677df9
Update backfill_helpers.go
aditya1702 Feb 5, 2026
af12032
Update backfill_helpers.go
aditya1702 Feb 5, 2026
39e24ec
Use NullAddressBytea method for nullable fields of state changes
aditya1702 Feb 5, 2026
6fb0b76
fix tests
aditya1702 Feb 5, 2026
7c5089e
fix more tests
aditya1702 Feb 5, 2026
4ad1ede
fix more tests again
aditya1702 Feb 5, 2026
4d825e1
Update account_service_test.go
aditya1702 Feb 5, 2026
c6682d5
Add HashBytea type for transaction hash BYTEA storage
aditya1702 Feb 5, 2026
6f73fdf
Update Transaction.Hash field to use HashBytea type
aditya1702 Feb 5, 2026
e743465
Wrap transaction hash in HashBytea in ConvertTransaction
aditya1702 Feb 5, 2026
5d16532
Use .String() for map key in IndexerBuffer
aditya1702 Feb 5, 2026
1ad897d
Update transactions.go for BYTEA hash storage
aditya1702 Feb 5, 2026
caacd85
Change transactions.hash column from TEXT to BYTEA
aditya1702 Feb 5, 2026
a041358
Add HashBytea unit tests
aditya1702 Feb 5, 2026
4c1e5dc
Update test files with valid 64-char hex hashes
aditya1702 Feb 5, 2026
09348d9
fix tests
aditya1702 Feb 5, 2026
d238b0b
add resolver for hash
aditya1702 Feb 5, 2026
d6526ef
Update statechanges.go
aditya1702 Feb 5, 2026
6ff993f
fix all tests
aditya1702 Feb 5, 2026
5058261
fix more tests
aditya1702 Feb 5, 2026
6477fa9
Update ingest_test.go
aditya1702 Feb 5, 2026
1829d3e
Change operation_xdr column from TEXT to BYTEA
aditya1702 Feb 5, 2026
c844520
Add XDRBytea type for storing XDR data as BYTEA
aditya1702 Feb 5, 2026
b5ac8ad
Update Operation struct to use XDRBytea type
aditya1702 Feb 5, 2026
c23554e
Update ConvertOperation to use XDRBytea type
aditya1702 Feb 5, 2026
23ba111
Update operations.go for BYTEA operation_xdr storage
aditya1702 Feb 5, 2026
a347cc7
Add forceResolver directive to operationXdr field
aditya1702 Feb 5, 2026
ddaeeee
Run gql-generate and fix test_utils.go type
aditya1702 Feb 5, 2026
aee79f2
Implement OperationXdr GraphQL resolver
aditya1702 Feb 5, 2026
00eeaa3
Update tests to use XDRBytea type
aditya1702 Feb 5, 2026
ec34cef
Update GraphQL resolver tests for XDRBytea type
aditya1702 Feb 5, 2026
b58b994
Fix test data to use valid base64-encoded XDR strings
aditya1702 Feb 5, 2026
525a59d
Change XDRBytea underlying type from string to []byte
aditya1702 Feb 5, 2026
5e221f7
Use MarshalBinary directly in ConvertOperation
aditya1702 Feb 5, 2026
ce12f8a
Simplify BatchInsert and BatchCopy in operations.go
aditya1702 Feb 5, 2026
52bdb28
Update utils_test.go for XDRBytea []byte type
aditya1702 Feb 5, 2026
5bb41f3
Update operations_test.go for XDRBytea []byte type
aditya1702 Feb 5, 2026
a5429a0
Update test_utils.go for XDRBytea []byte type
aditya1702 Feb 5, 2026
41dfbf7
Update ingest_test.go for XDRBytea []byte type
aditya1702 Feb 5, 2026
bd27099
Fix operations_test.go for XDRBytea []byte type
aditya1702 Feb 5, 2026
762593e
Fix accounts_test.go for XDRBytea []byte type
aditya1702 Feb 5, 2026
5f26b91
Fix transactions_test.go for XDRBytea []byte type
aditya1702 Feb 5, 2026
25ccec9
Update generated.go
aditya1702 Feb 5, 2026
b84442d
Fix XDRBytea.Scan buffer reuse bug
aditya1702 Feb 5, 2026
39b68c5
Change token_id column from TEXT to BYTEA in state_changes migration
aditya1702 Feb 21, 2026
3121832
Change StateChange.TokenID type from sql.NullString to NullAddressBytea
aditya1702 Feb 21, 2026
52ee484
Update StateChangeBuilder for NullAddressBytea TokenID
aditya1702 Feb 21, 2026
b2d0a81
Update indexer to use TokenID.String() method
aditya1702 Feb 21, 2026
15b03df
Update statechanges data layer for BYTEA token_id
aditya1702 Feb 21, 2026
979d422
Update GraphQL resolvers for NullAddressBytea TokenID
aditya1702 Feb 21, 2026
349143c
Update resolver tests for NullAddressBytea TokenID
aditya1702 Feb 21, 2026
d20dcdc
Update statechanges tests for NullAddressBytea TokenID
aditya1702 Feb 21, 2026
3051d59
Update effects tests for NullAddressBytea TokenID
aditya1702 Feb 21, 2026
5a0f134
Update processors test utils for NullAddressBytea TokenID
aditya1702 Feb 21, 2026
67232ae
Update contracts test utils for NullAddressBytea TokenID
aditya1702 Feb 21, 2026
6c80306
make check
aditya1702 Feb 21, 2026
2a6b96b
add timescale db hypertable
aditya1702 Feb 2, 2026
ec94e45
update docker and test files to use timescaledb
aditya1702 Feb 2, 2026
18d96ca
Fix tests
aditya1702 Feb 2, 2026
e9c4884
convert junction tables to hypertables
aditya1702 Feb 2, 2026
01f97ac
fix failing tests
aditya1702 Feb 2, 2026
0728469
remove comments
aditya1702 Feb 2, 2026
ed35eaa
changes to indexes
aditya1702 Feb 2, 2026
36976b1
Update 2025-06-10.2-transactions.sql
aditya1702 Feb 2, 2026
85c6dac
Update 2025-06-10.2-transactions.sql
aditya1702 Feb 2, 2026
89c1639
compress chunks at the end
aditya1702 Feb 3, 2026
7dad1fb
parallely compress chunks
aditya1702 Feb 3, 2026
f30b61a
Update ingest_backfill.go
aditya1702 Feb 3, 2026
0e8a0f5
Update ingest_backfill.go
aditya1702 Feb 3, 2026
6548f2b
Update ingest_backfill.go
aditya1702 Feb 3, 2026
e7abf42
Update test files for BYTEA types while preserving TimescaleDB patterns
aditya1702 Feb 10, 2026
351ce26
Fix shadow variable warnings in BatchCopy address conversions
aditya1702 Feb 10, 2026
378091f
Update go.yaml
aditya1702 Feb 10, 2026
b1d069d
Update go.yaml
aditya1702 Feb 10, 2026
4eb8035
Update go.yaml
aditya1702 Feb 10, 2026
c1f6c5d
Update docker-compose.yaml
aditya1702 Feb 10, 2026
33e1cee
Update query_utils.go
aditya1702 Feb 10, 2026
e647156
remove flaky RPC test
aditya1702 Feb 10, 2026
f94310d
Update helpers.go
aditya1702 Feb 10, 2026
2f6d95f
Update helpers.go
aditya1702 Feb 10, 2026
056b1d8
add bloom filter on account_id
aditya1702 Feb 10, 2026
9b4cc4f
remove BatchInsert
aditya1702 Feb 11, 2026
381a586
update hypertable config - 1
aditya1702 Feb 11, 2026
f06b52e
remove Duplicate failure tests
aditya1702 Feb 11, 2026
fe8828c
Add extra orderby columns and other hypertable configration
aditya1702 Feb 11, 2026
18b7767
Update normal B-tree indexes
aditya1702 Feb 11, 2026
e0dafc2
enable chunk skipping
aditya1702 Feb 11, 2026
917b32f
Update dbtest.go
aditya1702 Feb 11, 2026
d05fc9b
Update containers.go
aditya1702 Feb 11, 2026
9d2536d
Compress backfill chunks parallelly using goroutine
aditya1702 Feb 11, 2026
6bf87f5
remove schemas for mainnet and testnet
aditya1702 Feb 12, 2026
e9637a7
Enable direct compress and recompression for backfilling
aditya1702 Feb 12, 2026
beeab09
Add command line configuration to set retention policy and chunk size
aditya1702 Feb 12, 2026
63a4007
Update ingest.go
aditya1702 Feb 12, 2026
6b261a8
add batch index tracking to logs
aditya1702 Feb 12, 2026
8e5ed67
Update ingest_backfill.go
aditya1702 Feb 13, 2026
83c7b35
update auto-vaccum settings for balances tables
aditya1702 Feb 13, 2026
159d476
add explanations for the values
aditya1702 Feb 13, 2026
86b4fe8
Disable FK checks during checkpoint population
aditya1702 Feb 13, 2026
367bd89
Update ingest_live.go
aditya1702 Feb 13, 2026
a19ea7b
Update ingest_live.go
aditya1702 Feb 13, 2026
2ea2f0e
Update ingest_live.go
aditya1702 Feb 13, 2026
3bbb050
Update ingest_backfill.go
aditya1702 Feb 13, 2026
87a6c59
fix ledger_created_at bug
aditya1702 Feb 13, 2026
87dd75c
update oldest ledger using timescaledb job
aditya1702 Feb 13, 2026
14c1559
Add config option for tweaking schedule_interval for compression
aditya1702 Feb 15, 2026
24c5143
Add CLI variable for compress_after setting
aditya1702 Feb 15, 2026
3799f62
Set client_sorted = True since we will rebuild_columnstore at the end
aditya1702 Feb 16, 2026
034b6a8
Update ingest_backfill.go
aditya1702 Feb 16, 2026
75e30e7
Update ingest.go
aditya1702 Feb 16, 2026
52f84b3
Update ingest_backfill.go
aditya1702 Feb 17, 2026
e41dcfd
Add parallel recompression logic
aditya1702 Feb 20, 2026
c44edcf
Add bloom filters for compressed chunks
aditya1702 Feb 20, 2026
ba3ef60
Add primary keys back
aditya1702 Feb 22, 2026
05ae454
remove indexes
aditya1702 Feb 22, 2026
e253998
add comma
aditya1702 Feb 22, 2026
131f3b1
Update 2025-06-10.4-statechanges.sql
aditya1702 Feb 22, 2026
ce16112
Merge branch 'main' into timescale
aditya1702 Feb 27, 2026
8104698
Update internal/integrationtests/infrastructure/helpers.go
aditya1702 Feb 27, 2026
b2d4f45
add defer rows.Close()
aditya1702 Feb 27, 2026
869d762
Merge branch 'timescale' of https://github.com/stellar/wallet-backend…
aditya1702 Feb 27, 2026
eb0522b
Fire oldest ledger reconciliation after retention policy job runs
aditya1702 Feb 27, 2026
2f1b518
remove onReady() func
aditya1702 Feb 27, 2026
aa3801f
run reconciliation every 1h
aditya1702 Feb 27, 2026
ab7985b
use actual stored data for calculating oldest ledger
aditya1702 Feb 27, 2026
80a5e88
Update ingest_test.go
aditya1702 Feb 27, 2026
c3f871f
remove recompressor logic
aditya1702 Mar 3, 2026
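Many of the commits above (e.g. "Add documentation for explaining the TOID bitmasking", "Replace BatchGetByTxHash with BatchGetByToID") replace transaction hashes with TOIDs as the join key. A TOID packs the ledger sequence, transaction application order, and operation index into a single int64. The sketch below shows the standard Stellar TOID bit layout; the helper name `toID` is ours for illustration, not necessarily the repo's.

```go
package main

import "fmt"

// toID packs (ledger, txOrder, opIndex) into a single int64 using the
// standard Stellar TOID layout: 32 bits of ledger sequence, 20 bits of
// transaction application order, 12 bits of operation index.
func toID(ledger, txOrder, opIndex uint32) int64 {
	return int64(ledger)<<32 | int64(txOrder&0xFFFFF)<<12 | int64(opIndex&0xFFF)
}

func main() {
	// Transaction-level TOID (opIndex 0) for the 1st tx in ledger 100.
	fmt.Println(toID(100, 1, 0))
}
```

Because the ledger number occupies the high bits, sorting by TOID is equivalent to sorting by (ledger, tx order, op index), which is what makes it usable as both a primary key and a pagination cursor.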
4 changes: 2 additions & 2 deletions .github/workflows/go.yaml
Original file line number Diff line number Diff line change
@@ -87,12 +87,12 @@ jobs:
runs-on: ubuntu-latest
services:
postgres:
image: postgres:12-alpine
image: timescale/timescaledb:2.25.0-pg17
env:
POSTGRES_USER: postgres
POSTGRES_DB: postgres
POSTGRES_PASSWORD: postgres
PGHOST: localhost
PGHOST: /var/run/postgresql
options: >-
--health-cmd pg_isready
--health-interval 10s
1 change: 1 addition & 0 deletions .gitignore
@@ -9,3 +9,4 @@ __debug_bin*
*.out
CLAUDE.md
.claude/
.serena
32 changes: 32 additions & 0 deletions cmd/ingest.go
@@ -140,6 +140,38 @@ func (c *ingestCmd) Command() *cobra.Command {
FlagDefault: true,
Required: false,
},
{
Name: "chunk-interval",
Usage: "TimescaleDB chunk time interval for hypertables. Only affects future chunks. Uses PostgreSQL INTERVAL syntax.",
OptType: types.String,
ConfigKey: &cfg.ChunkInterval,
FlagDefault: "1 day",
Required: false,
},
{
Name: "retention-period",
Usage: "TimescaleDB data retention period. Chunks older than this are automatically dropped. Empty disables retention. Uses PostgreSQL INTERVAL syntax.",
OptType: types.String,
ConfigKey: &cfg.RetentionPeriod,
FlagDefault: "",
Required: false,
},
{
Name: "compression-schedule-interval",
Usage: "How frequently the TimescaleDB compression policy job checks for chunks to compress. Does not change which chunks are eligible (that's controlled by compress_after). Empty skips configuration. Uses PostgreSQL INTERVAL syntax.",
OptType: types.String,
ConfigKey: &cfg.CompressionScheduleInterval,
FlagDefault: "",
Required: false,
},
{
Name: "compression-compress-after",
Usage: "How long after a chunk is closed before it becomes eligible for compression. Lower values reduce the number of uncompressed chunks. Empty skips configuration. Uses PostgreSQL INTERVAL syntax.",
OptType: types.String,
ConfigKey: &cfg.CompressAfter,
FlagDefault: "",
Required: false,
},
}

cmd := &cobra.Command{
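The four flags added to cmd/ingest.go above are plain PostgreSQL INTERVAL strings that presumably get interpolated into TimescaleDB policy calls elsewhere in the ingest setup. A minimal sketch of how such flags could map onto TimescaleDB's policy API; the helper names are hypothetical, and the interval values are assumed to be trusted operator input (CLI flags, not end-user data), which is why plain string formatting is tolerable here.

```go
package main

import "fmt"

// chunkIntervalSQL builds the statement that changes a hypertable's chunk
// time interval; per the flag's usage text, this only affects future chunks.
func chunkIntervalSQL(table, interval string) string {
	return fmt.Sprintf("SELECT set_chunk_time_interval('%s', INTERVAL '%s');", table, interval)
}

// retentionPolicySQL builds the statement that schedules automatic dropping
// of chunks older than the given period.
func retentionPolicySQL(table, period string) string {
	return fmt.Sprintf("SELECT add_retention_policy('%s', drop_after => INTERVAL '%s');", table, period)
}

func main() {
	fmt.Println(chunkIntervalSQL("transactions", "1 day"))
	fmt.Println(retentionPolicySQL("transactions", "30 days"))
}
```

An empty flag value (the default for retention) would simply mean the corresponding statement is never issued, matching the "Empty disables retention" / "Empty skips configuration" wording in the usage strings.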
6 changes: 3 additions & 3 deletions docker-compose.yaml
@@ -7,7 +7,8 @@ services:

db:
container_name: db
image: postgres:14-alpine
image: timescale/timescaledb:2.25.0-pg17
command: ["postgres", "-c", "timescaledb.enable_chunk_skipping=on", "-c", "timescaledb.enable_sparse_index_bloom=on"]
healthcheck:
test: ["CMD-SHELL", "pg_isready -U postgres -d wallet-backend"]
interval: 10s
@@ -268,5 +269,4 @@ volumes:
configs:
postgres_init:
content: |
CREATE SCHEMA IF NOT EXISTS wallet_backend_mainnet;
CREATE SCHEMA IF NOT EXISTS wallet_backend_testnet;
CREATE EXTENSION IF NOT EXISTS timescaledb;
4 changes: 2 additions & 2 deletions internal/data/accounts_test.go
@@ -273,7 +273,7 @@ func TestAccountModelBatchGetByToIDs(t *testing.T) {
require.NoError(t, err)

// Insert test transactions_accounts links
_, err = m.DB.ExecContext(ctx, "INSERT INTO transactions_accounts (tx_to_id, account_id) VALUES ($1, $2), ($3, $4)",
_, err = m.DB.ExecContext(ctx, "INSERT INTO transactions_accounts (ledger_created_at, tx_to_id, account_id) VALUES (NOW(), $1, $2), (NOW(), $3, $4)",
toID1, types.AddressBytea(address1), toID2, types.AddressBytea(address2))
require.NoError(t, err)

@@ -333,7 +333,7 @@ func TestAccountModelBatchGetByOperationIDs(t *testing.T) {
require.NoError(t, err)

// Insert test operations_accounts links (account_id is BYTEA)
_, err = m.DB.ExecContext(ctx, "INSERT INTO operations_accounts (operation_id, account_id) VALUES ($1, $2), ($3, $4)",
_, err = m.DB.ExecContext(ctx, "INSERT INTO operations_accounts (ledger_created_at, operation_id, account_id) VALUES (NOW(), $1, $2), (NOW(), $3, $4)",
operationID1, types.AddressBytea(address1), operationID2, types.AddressBytea(address2))
require.NoError(t, err)

15 changes: 15 additions & 0 deletions internal/data/ingest_store.go
@@ -98,3 +98,18 @@ func (m *IngestStoreModel) GetLedgerGaps(ctx context.Context) ([]LedgerRange, er
m.MetricsService.IncDBQuery("GetLedgerGaps", "transactions")
return ledgerGaps, nil
}

func (m *IngestStoreModel) GetOldestLedger(ctx context.Context) (uint32, error) {
oldest := uint32(0)
start := time.Now()
err := m.DB.GetContext(ctx, &oldest,
`SELECT ledger_number FROM transactions ORDER BY ledger_created_at ASC, to_id ASC LIMIT 1`)
duration := time.Since(start).Seconds()
m.MetricsService.ObserveDBQueryDuration("GetOldestLedger", "transactions", duration)
if err != nil && !errors.Is(err, sql.ErrNoRows) {
m.MetricsService.IncDBQueryError("GetOldestLedger", "transactions", utils.GetDBErrorType(err))
return 0, fmt.Errorf("getting actual oldest ledger from transactions: %w", err)
}
m.MetricsService.IncDBQuery("GetOldestLedger", "transactions")
return oldest, nil
}
62 changes: 62 additions & 0 deletions internal/data/ingest_store_test.go
@@ -316,3 +316,65 @@ func Test_IngestStoreModel_GetLedgerGaps(t *testing.T) {
})
}
}

func Test_IngestStoreModel_GetOldestLedger(t *testing.T) {
dbt := dbtest.Open(t)
defer dbt.Close()
dbConnectionPool, err := db.OpenDBConnectionPool(dbt.DSN)
require.NoError(t, err)
defer dbConnectionPool.Close()

ctx := context.Background()

testCases := []struct {
name string
setupDB func(t *testing.T)
expectedLedger uint32
}{
{
name: "returns_zero_when_table_is_empty",
expectedLedger: 0,
},
{
name: "returns_oldest_ledger_when_data_exists",
setupDB: func(t *testing.T) {
// Insert ledgers with distinct timestamps so ORDER BY ledger_created_at
// returns the correct oldest regardless of to_id ordering.
for i, ledger := range []uint32{150, 100, 200} {
_, err := dbConnectionPool.ExecContext(ctx,
`INSERT INTO transactions (hash, to_id, envelope_xdr, fee_charged, result_code, meta_xdr, ledger_number, ledger_created_at)
VALUES ($1, $2, 'env', 100, 'TransactionResultCodeTxSuccess', 'meta', $3, NOW() - INTERVAL '1 day' * $4)`,
fmt.Sprintf("hash%d", i), i+1, ledger, 300-int(ledger))
require.NoError(t, err)
}
},
expectedLedger: 100,
},
}

for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
_, err := dbConnectionPool.ExecContext(ctx, "DELETE FROM transactions")
require.NoError(t, err)

mockMetricsService := metrics.NewMockMetricsService()
mockMetricsService.
On("ObserveDBQueryDuration", "GetOldestLedger", "transactions", mock.Anything).Return().
On("IncDBQuery", "GetOldestLedger", "transactions").Return()
defer mockMetricsService.AssertExpectations(t)

m := &IngestStoreModel{
DB: dbConnectionPool,
MetricsService: mockMetricsService,
}

if tc.setupDB != nil {
tc.setupDB(t)
}

oldest, err := m.GetOldestLedger(ctx)
require.NoError(t, err)
assert.Equal(t, tc.expectedLedger, oldest)
})
}
}
14 changes: 6 additions & 8 deletions internal/data/native_balances.go
@@ -129,23 +129,21 @@ func (m *NativeBalanceModel) BatchCopy(ctx context.Context, dbTx pgx.Tx, balance

start := time.Now()

rows := make([][]any, len(balances))
for i, nb := range balances {
rows[i] = []any{nb.AccountAddress, nb.Balance, nb.MinimumBalance, nb.BuyingLiabilities, nb.SellingLiabilities, nb.LedgerNumber}
}

copyCount, err := dbTx.CopyFrom(
ctx,
pgx.Identifier{"native_balances"},
[]string{"account_address", "balance", "minimum_balance", "buying_liabilities", "selling_liabilities", "last_modified_ledger"},
pgx.CopyFromRows(rows),
pgx.CopyFromSlice(len(balances), func(i int) ([]any, error) {
nb := balances[i]
return []any{nb.AccountAddress, nb.Balance, nb.MinimumBalance, nb.BuyingLiabilities, nb.SellingLiabilities, nb.LedgerNumber}, nil
}),
)
if err != nil {
return fmt.Errorf("bulk inserting native balances via COPY: %w", err)
}

if int(copyCount) != len(rows) {
return fmt.Errorf("expected %d rows copied, got %d", len(rows), copyCount)
if int(copyCount) != len(balances) {
return fmt.Errorf("expected %d rows copied, got %d", len(balances), copyCount)
}

m.MetricsService.ObserveDBQueryDuration("BatchCopy", "native_balances", time.Since(start).Seconds())
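The native_balances change above swaps an eagerly built `[][]any` for `pgx.CopyFromSlice`, which hands pgx a callback that builds each row only when the COPY protocol asks for it, so the full row slice is never materialized alongside the source data. A stdlib-only sketch of that contract (types and names are ours, mimicking pgx's CopyFromSource interface rather than importing it):

```go
package main

import "fmt"

// rowSource mimics the relevant part of pgx's CopyFromSource contract:
// rows are produced one at a time via Next/Values.
type rowSource struct {
	n, i int
	next func(i int) ([]any, error)
}

func (s *rowSource) Next() bool { return s.i < s.n }

func (s *rowSource) Values() ([]any, error) {
	row, err := s.next(s.i) // row is built lazily, on demand
	s.i++
	return row, err
}

func copyFromSlice(n int, next func(i int) ([]any, error)) *rowSource {
	return &rowSource{n: n, next: next}
}

func main() {
	balances := []struct {
		Account string
		Balance int64
	}{{"GAAA...", 100}, {"GBBB...", 250}}

	src := copyFromSlice(len(balances), func(i int) ([]any, error) {
		b := balances[i]
		return []any{b.Account, b.Balance}, nil
	})
	count := 0
	for src.Next() {
		row, err := src.Values()
		if err != nil {
			panic(err)
		}
		fmt.Println(row)
		count++
	}
	fmt.Println("copied", count, "rows")
}
```

Note the diff also switches the final sanity check from `len(rows)` to `len(balances)`, since there is no intermediate rows slice left to count.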
153 changes: 20 additions & 133 deletions internal/data/operations.go
@@ -269,137 +269,13 @@ func (m *OperationModel) BatchGetByStateChangeIDs(ctx context.Context, scToIDs [
return operationsWithStateChanges, nil
}

// BatchInsert inserts the operations and the operations_accounts links.
// It returns the IDs of the successfully inserted operations.
func (m *OperationModel) BatchInsert(
ctx context.Context,
sqlExecuter db.SQLExecuter,
operations []*types.Operation,
stellarAddressesByOpID map[int64]set.Set[string],
) ([]int64, error) {
if sqlExecuter == nil {
sqlExecuter = m.DB
}

// 1. Flatten the operations into parallel slices
ids := make([]int64, len(operations))
operationTypes := make([]string, len(operations))
operationXDRs := make([][]byte, len(operations))
resultCodes := make([]string, len(operations))
successfulFlags := make([]bool, len(operations))
ledgerNumbers := make([]uint32, len(operations))
ledgerCreatedAts := make([]time.Time, len(operations))

for i, op := range operations {
ids[i] = op.ID
operationTypes[i] = string(op.OperationType)
operationXDRs[i] = []byte(op.OperationXDR)
resultCodes[i] = op.ResultCode
successfulFlags[i] = op.Successful
ledgerNumbers[i] = op.LedgerNumber
ledgerCreatedAts[i] = op.LedgerCreatedAt
}

// 2. Flatten the stellarAddressesByOpID into parallel slices, converting to BYTEA
var opIDs []int64
var stellarAddressBytes [][]byte
for opID, addresses := range stellarAddressesByOpID {
for address := range addresses.Iter() {
opIDs = append(opIDs, opID)
addrBytesValue, err := types.AddressBytea(address).Value()
if err != nil {
return nil, fmt.Errorf("converting address %s to bytes: %w", address, err)
}
addrBytes, ok := addrBytesValue.([]byte)
if !ok || addrBytes == nil {
return nil, fmt.Errorf("converting address %s to bytes: unexpected value %T", address, addrBytesValue)
}
stellarAddressBytes = append(stellarAddressBytes, addrBytes)
}
}

// Insert operations and operations_accounts links.
const insertQuery = `
WITH
-- Insert operations
inserted_operations AS (
INSERT INTO operations
(id, operation_type, operation_xdr, result_code, successful, ledger_number, ledger_created_at)
SELECT
o.id, o.operation_type, o.operation_xdr, o.result_code, o.successful, o.ledger_number, o.ledger_created_at
FROM (
SELECT
UNNEST($1::bigint[]) AS id,
UNNEST($2::text[]) AS operation_type,
UNNEST($3::bytea[]) AS operation_xdr,
UNNEST($4::text[]) AS result_code,
UNNEST($5::boolean[]) AS successful,
UNNEST($6::bigint[]) AS ledger_number,
UNNEST($7::timestamptz[]) AS ledger_created_at
) o
ON CONFLICT (id) DO NOTHING
RETURNING id
),

-- Insert operations_accounts links
inserted_operations_accounts AS (
INSERT INTO operations_accounts
(operation_id, account_id)
SELECT
oa.op_id, oa.account_id
FROM (
SELECT
UNNEST($8::bigint[]) AS op_id,
UNNEST($9::bytea[]) AS account_id
) oa
ON CONFLICT DO NOTHING
)

-- Return the IDs of successfully inserted operations
SELECT id FROM inserted_operations;
`

start := time.Now()
var insertedIDs []int64
err := sqlExecuter.SelectContext(ctx, &insertedIDs, insertQuery,
pq.Array(ids),
pq.Array(operationTypes),
pq.Array(operationXDRs),
pq.Array(resultCodes),
pq.Array(successfulFlags),
pq.Array(ledgerNumbers),
pq.Array(ledgerCreatedAts),
pq.Array(opIDs),
pq.Array(stellarAddressBytes),
)
duration := time.Since(start).Seconds()
for _, dbTableName := range []string{"operations", "operations_accounts"} {
m.MetricsService.ObserveDBQueryDuration("BatchInsert", dbTableName, duration)
if dbTableName == "operations" {
m.MetricsService.ObserveDBBatchSize("BatchInsert", dbTableName, len(operations))
}
if err == nil {
m.MetricsService.IncDBQuery("BatchInsert", dbTableName)
}
}
if err != nil {
for _, dbTableName := range []string{"operations", "operations_accounts"} {
m.MetricsService.IncDBQueryError("BatchInsert", dbTableName, utils.GetDBErrorType(err))
}
return nil, fmt.Errorf("batch inserting operations and operations_accounts: %w", err)
}

return insertedIDs, nil
}

// BatchCopy inserts operations using pgx's binary COPY protocol.
// Uses pgx.Tx for binary format which is faster than lib/pq's text format.
// Uses native pgtype types for optimal performance (see https://github.com/jackc/pgx/issues/763).
//
// IMPORTANT: Unlike BatchInsert which uses ON CONFLICT DO NOTHING, BatchCopy will FAIL
// if any duplicate records exist. The PostgreSQL COPY protocol does not support conflict
// handling. Callers must ensure no duplicates exist before calling this method, or handle
// the unique constraint violation error appropriately.
// IMPORTANT: BatchCopy will FAIL if any duplicate records exist. The PostgreSQL COPY
// protocol does not support conflict handling. Callers must ensure no duplicates exist
// before calling this method, or handle the unique constraint violation error appropriately.
func (m *OperationModel) BatchCopy(
ctx context.Context,
pgxTx pgx.Tx,
@@ -440,23 +316,34 @@ func (m *OperationModel) BatchCopy(

// COPY operations_accounts using pgx binary format with native pgtype types
if len(stellarAddressesByOpID) > 0 {
// Build OpID -> LedgerCreatedAt lookup from operations
ledgerCreatedAtByOpID := make(map[int64]time.Time, len(operations))
for _, op := range operations {
ledgerCreatedAtByOpID[op.ID] = op.LedgerCreatedAt
}

var oaRows [][]any
for opID, addresses := range stellarAddressesByOpID {
ledgerCreatedAt := ledgerCreatedAtByOpID[opID]
ledgerCreatedAtPgtype := pgtype.Timestamptz{Time: ledgerCreatedAt, Valid: true}
opIDPgtype := pgtype.Int8{Int64: opID, Valid: true}
for _, addr := range addresses.ToSlice() {
var addrBytes any
addrBytes, err = types.AddressBytea(addr).Value()
if err != nil {
return 0, fmt.Errorf("converting address %s to bytes: %w", addr, err)
addrBytes, addrErr := types.AddressBytea(addr).Value()
if addrErr != nil {
return 0, fmt.Errorf("converting address %s to bytes: %w", addr, addrErr)
}
oaRows = append(oaRows, []any{opIDPgtype, addrBytes})
oaRows = append(oaRows, []any{
ledgerCreatedAtPgtype,
opIDPgtype,
addrBytes,
Review comment (Contributor, generated with Claude Code):

AddressBytea.Value() returns (nil, nil) for empty strings (types/types.go:77-78), which would produce a SQL NULL for the account_id column, but that column is NOT NULL in both operations_accounts and transactions_accounts. The COPY would fail with a constraint violation.

In practice this likely can't happen today: addresses come from xdr.AccountId.Address() and Soroban ledger changes, which always produce valid StrKey strings. But there's no guard or comment documenting that assumption, so a future change to the participants pipeline could hit this silently. Same pattern was flagged in PR #488 (comment #2769902080).

Either a nil guard or a brief comment noting the upstream guarantee would close this off.
})
}
}

_, err = pgxTx.CopyFrom(
ctx,
pgx.Identifier{"operations_accounts"},
[]string{"operation_id", "account_id"},
[]string{"ledger_created_at", "operation_id", "account_id"},
pgx.CopyFromRows(oaRows),
)
if err != nil {
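The review comment in this hunk suggests guarding against a nil account_id before handing rows to COPY. A minimal sketch of such a guard; `addressBytes` here is a stand-in for `types.AddressBytea(addr).Value()` and only assumes the (nil, nil)-for-empty-string behavior the comment describes, not the real implementation.

```go
package main

import (
	"errors"
	"fmt"
)

// addressBytes stands in for types.AddressBytea(addr).Value(): per the
// review comment above, it returns (nil, nil) for an empty address.
func addressBytes(addr string) ([]byte, error) {
	if addr == "" {
		return nil, nil
	}
	return []byte(addr), nil
}

// guardedAddressBytes rejects the nil case up front, so a NOT NULL
// violation deep inside COPY is surfaced as a descriptive error instead.
func guardedAddressBytes(addr string) ([]byte, error) {
	b, err := addressBytes(addr)
	if err != nil {
		return nil, fmt.Errorf("converting address %q: %w", addr, err)
	}
	if b == nil {
		return nil, errors.New("empty account address would violate NOT NULL account_id")
	}
	return b, nil
}

func main() {
	if _, err := guardedAddressBytes(""); err != nil {
		fmt.Println("guard fired:", err)
	}
}
```

The alternative the comment offers, a code comment documenting the upstream StrKey guarantee, costs nothing at runtime but leaves the failure mode as a raw constraint violation if the guarantee is ever broken.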