This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
```bash
# Build the binary
go build -o stat ./cmd/stat

# Run tests
go test ./...

# Format and lint
go fmt ./...
go vet ./...
```

The binary uses `github.com/urfave/cli/v2` with subcommands — Railway manages scheduling externally:

- `stat serve` — long-running HTTP API server (read-only: snapshots + indicators)
- `stat quote` — one-shot cron: fetch CoinGecko prices and store in DB (run hourly)
- `stat report` — one-shot cron: generate snapshot + export to Google Sheets (run daily)
- `stat import` — one-shot: import historical snapshots from the old stat API into the DB
- `stat import-excel` — one-shot: import MONITORING data from Excel, append DB snapshots, refresh IND_ALL/IND_MAIN with historical changes from monitoring history
- The API has no write endpoints — snapshot generation only happens via `stat report`.
- There is no `internal/worker` package; all scheduling is external.
- Indicators are computed on-the-fly from snapshots — never stored in the DB.
- Calculation is a layered DAG: Layer0 → Layer1 → Layer2 → Dividend / Analytics / Tokenomics.
- Each `Calculator` declares `IDs()` and `Dependencies()`; `Registry.CalculateAll` resolves order via topological sort.
- To add a new calculator: implement the `Calculator` interface, define its Horizon interface in the same file, register it in `service.go`, and extend `IndicatorHorizon` if it needs `horizon.Client`.
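The registry's dependency-ordered execution can be sketched as follows. The `Calculator` interface and its `IDs()` / `Dependencies()` methods come from the notes above, but the concrete signatures and the Kahn's-algorithm ordering here are illustrative, not the repository's actual `Registry` code.

```go
package main

import "fmt"

// Illustrative Calculator contract: each calculator produces some
// indicator IDs and may depend on IDs produced by other calculators.
type Calculator interface {
	IDs() []int
	Dependencies() []int
}

type calc struct{ ids, deps []int }

func (c calc) IDs() []int          { return c.ids }
func (c calc) Dependencies() []int { return c.deps }

// topoSort orders calculators so every dependency is computed first
// (Kahn's algorithm over indicator IDs).
func topoSort(calcs []Calculator) []Calculator {
	producer := map[int]int{} // indicator ID -> producing calculator index
	for i, c := range calcs {
		for _, id := range c.IDs() {
			producer[id] = i
		}
	}
	indegree := make([]int, len(calcs))
	dependents := map[int][]int{}
	for i, c := range calcs {
		for _, dep := range c.Dependencies() {
			if p, ok := producer[dep]; ok && p != i {
				indegree[i]++
				dependents[p] = append(dependents[p], i)
			}
		}
	}
	var queue, order []int
	for i, d := range indegree {
		if d == 0 {
			queue = append(queue, i)
		}
	}
	for len(queue) > 0 {
		i := queue[0]
		queue = queue[1:]
		order = append(order, i)
		for _, j := range dependents[i] {
			if indegree[j]--; indegree[j] == 0 {
				queue = append(queue, j)
			}
		}
	}
	out := make([]Calculator, len(order))
	for k, i := range order {
		out[k] = calcs[i]
	}
	return out
}

func main() {
	layer1 := calc{ids: []int{10}, deps: []int{1}} // needs Layer0's output
	layer0 := calc{ids: []int{1}}
	ordered := topoSort([]Calculator{layer1, layer0})
	fmt.Println(ordered[0].IDs(), ordered[1].IDs()) // layer0 runs first
}
```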
- `fund_snapshots.data` (JSONB) stores `domain.FundStructureData` with per-account token balances and prices.
- Token prices captured at snapshot time live in `FundAccountPortfolio.Tokens[].PriceInEURMTL` — use these for historical price lookups (see `findBTCPrice` in `layer0.go` as a pattern).
- `snapshot.Repository.GetByDate` requires an exact date match (midnight UTC); snapshots are stored by `stat report` using `time.Date(..., time.UTC)`.
- `internal/export/sheets.go` — IND_ALL and IND_MAIN are clear+rewrite each run.
- `internal/export/monitoring.go` — the MONITORING sheet is append-only (one row per daily run via `Values.Append` with `INSERT_ROWS`).
- `export.Service.Export` delegates to `ExportWithHistory(ctx, data, nil)` — both return `([]IndicatorRow, error)`. Rows are reused by `AppendMonitoring` to avoid recalculating indicators.
- `export.Service.ExportWithHistory` fills gaps in historical change data from `MonitoringHistory` when DB snapshots are unavailable (used by `import-excel`).
- `export.MonitoringHistory` (`map[time.Time]map[int]decimal.Decimal`) — keys are midnight UTC dates, values map indicator ID → value. `NearestBefore(target)` finds the latest date ≤ target for gap-filling.
- `export.MonitoringColumnIndicatorIDs()` exposes the indicator ID mapping from `monitoringColumns` (40 ints, 0 = unmapped). Column order is load-bearing — both `buildMonitoringRows` and `buildMonitoringHistory` depend on positional alignment.
- The MONITORING column mapping is in the `monitoringColumns` slice — when adding new indicators, add the mapping there too.
- All three sheets match the original `MTL_report_1.xlsx` formatting exactly:
  - IND_ALL: light-green `#D9EAD3` headers, bold Arial 10pt, freeze M2 (1 row + 12 cols), thin borders around change cols F–I, MAIN col L has a gray `#D9D9D9` background.
  - IND_MAIN: light-yellow `#FFE599` headers, freeze D3 (2 rows + 3 cols), Value col B is 12pt bold, change cols D–E use format `0.00%`, F–G use `0%`.
  - MONITORING: light-green `#D9EAD3` headers with vertical text (90°), freeze B3, row 2 height 100px (75pt), date col A has a green background, per-column widths from Excel.
- Shared helpers: `cellFormatReq`, `freezePaneReq`, `colWidthReq` — used by both files.
- `domain.IssuerAddress` — main fund issuer Stellar address
- `domain.EURMTLAsset()` — fund base asset (EUR-pegged stablecoin)
- `domain.AccountRegistry()` — all 11 fund accounts (used to exclude fund addresses from external payment filtering)
- Stellar uses 7 decimal places (stroops). Smallest non-zero balance: `0.0000001`.
- Use `decimal.New(1, -7)` for exact stroop thresholds — avoid `decimal.NewFromFloat` for precision-sensitive values.
- Asset type is determined by code length: ≤4 chars → `credit_alphanum4`, 5–12 chars → `credit_alphanum12`. Use `domain.AssetTypeFromCode()`.
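The length rule can be sketched as a tiny stand-in for `domain.AssetTypeFromCode()` (assumed behavior, simplified; the real helper may validate the 12-char upper bound):

```go
package main

import "fmt"

// assetTypeFromCode mirrors the Stellar rule above: asset type is
// chosen purely by the length of the asset code.
func assetTypeFromCode(code string) string {
	if len(code) <= 4 {
		return "credit_alphanum4"
	}
	return "credit_alphanum12" // 5–12 chars
}

func main() {
	fmt.Println(assetTypeFromCode("MTL"))    // credit_alphanum4
	fmt.Println(assetTypeFromCode("EURMTL")) // credit_alphanum12
}
```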
- `horizon.Client` → `IndicatorHorizon` (combined interface: `TokenomicsHorizon + CirculationHorizon + DividendHorizon`)
- `price.Service` → `HorizonPriceSource` (orderbook / pathfinding only)
- Both are passed to `indicator.NewService(priceSvc, horizonClient, hist)` in `main.go`.
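The combined interface is plain Go interface embedding. This sketch uses hypothetical method names (the real horizons' methods live in the indicator package) just to show how one client value satisfies all three narrow horizons plus the combined `IndicatorHorizon`:

```go
package main

import "fmt"

// Hypothetical method sets — the real interfaces differ; only the
// embedding pattern is the point here.
type TokenomicsHorizon interface{ Supply() float64 }
type CirculationHorizon interface{ Circulating() float64 }
type DividendHorizon interface{ Dividends() float64 }

// IndicatorHorizon embeds the three narrower horizons, so anything
// implementing all of them (e.g. horizon.Client) satisfies it for free.
type IndicatorHorizon interface {
	TokenomicsHorizon
	CirculationHorizon
	DividendHorizon
}

type fakeClient struct{}

func (fakeClient) Supply() float64      { return 100 }
func (fakeClient) Circulating() float64 { return 80 }
func (fakeClient) Dividends() float64   { return 5 }

func main() {
	var h IndicatorHorizon = fakeClient{} // one client, three roles
	fmt.Println(h.Supply(), h.Circulating(), h.Dividends())
}
```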
```go
// Extract next-page path from Horizon's _links.next.href:
u, err := url.Parse(resp.Links.Next.Href)
if err != nil {
	return fmt.Errorf("parsing pagination link: %w", err)
}
path = u.Path + "?" + u.RawQuery
```

- Add a `Links.Next.Href` field to response structs when pagination is needed.
- When paginating payments ordered desc by time, check the timestamp before type/direction filters so non-payment records don't block early termination.
- Use `httptest.NewServer` + `NewClient(server.URL, 1, 10*time.Millisecond)` for HTTP-level tests (see `assets_test.go`, `account_test.go`).
- For pagination tests, use a `page` counter in the handler to return different responses per request.
- Calculator tests use mock interfaces (e.g., `mockTokenomicsHorizon`) — always test that mocks exercise the real logic (e.g., union dedup for I27).
- Horizon pagination errors must be returned, not swallowed — a silent `break` on parse failure hides incomplete data.
- Balance parse errors in helpers should `slog.Warn` before returning zero — distinguish "not found" from "corrupt data".
- When one failed API call cascades to zero out multiple indicators, log the cascade explicitly (which indicators are affected and why).
- Always distinguish `snapshot.ErrNotFound` from real DB errors using `errors.Is(err, snapshot.ErrNotFound)` — never conflate "not found" with connection/query failures (see the `runImport` pattern).
- Long loops over dates/snapshots must have a circuit breaker (`maxConsecutiveErrors = 5`) to abort on persistent failures — never silently iterate through hundreds of errors.
- `.env` contains multiline JSON (`GOOGLE_CREDENTIALS_JSON`) — it cannot be `source`d in shell directly.
- Use `docker compose` for local runs: `docker compose build app && docker compose run --rm --entrypoint "./stat" app report`
- The Dockerfile ENTRYPOINT is `./stat`, CMD is `serve` — to run subcommands use `--entrypoint "./stat" app <subcommand>`.
- `docker compose up -d db` starts just PostgreSQL; use `docker compose run --rm` for one-shot commands.
- For `import-excel` with a local file: `docker compose run --rm -v "$(pwd)/MTL_report_1.xlsx:/app/MTL_report_1.xlsx" --entrypoint "./stat" app import-excel --file MTL_report_1.xlsx`
- Uses `github.com/xuri/excelize/v2` to read the MONITORING sheet from an `.xlsx` file.
- `excelize.GetRows` returns displayed cell values as strings — numbers come formatted with commas (e.g. `"1,827,956"`), dates as locale-dependent strings.
- Excel formula errors (`#REF!`, `#DIV/0!`, `#N/A`) are returned as literal strings — `parseExcelNumber` drops any `#`-prefixed string to nil to prevent Google Sheets from interpreting them as errors.
- Date parsing in `parseExcelDate` prioritizes `dd.mm.yyyy` (the known MONITORING format) over ambiguous US formats to prevent a silent month/day swap.
- Commit messages: use Conventional Commits format (e.g., `feat:`, `fix:`, `refactor:`, `docs:`, `chore:`)
- PR merge strategy: the repository only allows rebase merges. Use `gh pr merge --rebase --delete-branch`
- Binary output: `go build -o stat ./cmd/stat` produces the binary at the repo root; `/stat` is gitignored (not `cmd/stat/`).
Use `github.com/samber/lo` for readable, type-safe slice/map operations. Prefer lo helpers over manual loops.

```go
// Slice operations
lo.Filter(slice, func(x T, _ int) bool { return condition }) // Filter elements
lo.Map(slice, func(x T, _ int) R { return transform(x) })    // Transform elements
lo.Reduce(slice, func(acc R, x T, _ int) R { ... }, init)    // Reduce to single value
lo.ForEach(slice, func(x T, _ int) { ... })                  // Iterate with side effects
lo.Uniq(slice)                                               // Remove duplicates
lo.UniqBy(slice, func(x T) K { return key })                 // Remove duplicates by key
lo.Compact(slice)                                            // Remove zero values ("", 0, nil)
lo.Flatten(nested)                                           // Flatten nested slices
lo.Chunk(slice, size)                                        // Split into chunks
lo.GroupBy(slice, func(x T) K { return key })                // Group by key -> map[K][]T
lo.KeyBy(slice, func(x T) K { return key })                  // Index by key -> map[K]T
lo.Partition(slice, func(x T, _ int) bool { ... })           // Split into [match, nomatch]
```

```go
// Search
lo.Find(slice, func(x T) bool { return condition })    // Returns (value, found)
lo.FindOrElse(slice, fallback, func(x T) bool { ... }) // Returns value or fallback
lo.Contains(slice, value)                              // Check if exists
lo.IndexOf(slice, value)                               // Find index (-1 if not found)
lo.Every(slice, func(x T, _ int) bool { ... })         // All match predicate
lo.Some(slice, func(x T, _ int) bool { ... })          // Any matches predicate
```

```go
// Map operations
lo.Keys(m)                                         // Get all keys
lo.Values(m)                                       // Get all values
lo.PickBy(m, func(k K, v V) bool { ... })          // Filter map entries
lo.OmitBy(m, func(k K, v V) bool { ... })          // Exclude map entries
lo.MapKeys(m, func(v V, k K) K2 { return newKey }) // Transform keys
lo.MapValues(m, func(v V, k K) V2 { return newValue }) // Transform values
lo.Invert(m)                                       // Swap keys and values
lo.Assign(maps...)                                 // Merge maps (later wins)
```

```go
// Error handling and misc
lo.Must(val, err)                    // Panic on error, return val
lo.Must0(err)                        // Panic on error (no return)
lo.Must2(v1, v2, err)                // Panic on error, return v1, v2
lo.Coalesce(vals...)                 // First non-zero value
lo.CoalesceOrEmpty(vals...)          // First non-zero value, or the zero value if none
lo.IsEmpty(val)                      // Check if zero value
lo.FromPtr(ptr)                      // Dereference or zero value
lo.ToPtr(val)                        // Create pointer to value
lo.Ternary(cond, ifTrue, ifFalse)    // Inline conditional
lo.If(cond, val).Else(other)         // Fluent conditional
```

For parallel variants, `import lop "github.com/samber/lo/parallel"`:

```go
lop.Map(slice, func(x T, _ int) R { ... })       // Parallel map
lop.ForEach(slice, func(x T, _ int) { ... })     // Parallel iteration
lop.Filter(slice, func(x T, _ int) bool { ... }) // Parallel filter
```