This project is a production-grade Ethereum indexer and API gateway, written entirely in Go. It ingests on-chain events from an ERC-4626 vault, processes and normalizes them into a Supabase PostgreSQL database, and exposes a public REST API with automatically generated documentation. The system is built around a concurrent, multi-threaded architecture designed for data integrity, fault tolerance, and maintainable scaling, with live and historical event ingestion, chain-reorganization handling, and finalized-block tracking. Deployed on a Linux server and managed via systemd, it demonstrates a full end-to-end data pipeline: from blockchain ingestion to queryable structured data.
- Ethereum Indexer in Go: Listens to Deposit, WithdrawRequested, and Transfer events from an ERC-4626 vault contract
- Supabase Integration: Stores normalized event data in a hosted Postgres database
- API Gateway: Exposes REST endpoints (via Next.js) for querying indexed data
- Auto-Generated Docs: OpenAPI documentation generated automatically from Next.js endpoints, available at `<API_URL>/docs`
- Resilient Architecture: Includes queue-based event processing and reorg verification
- Production-Ready Deployment: Can run as a Linux systemd service for reliability and observability
The Vault Indexer is designed as a concurrent Go service that ingests, processes, and serves Ethereum vault events in real time.
It operates through three coordinated threads — each responsible for a specific stage of the indexing pipeline — and exposes the processed data through an API gateway backed by Supabase Postgres.
**Event Listener (Thread 1)**

- Connects to an Ethereum node via JSON-RPC and subscribes to the vault’s on-chain events.
- Handles both historical backfill and live streaming of events such as Deposit, WithdrawRequested, and Transfer.
- Writes all captured events into an in-memory event queue, ensuring they’re processed in chronological order.
- Acts as the entry point for all blockchain data entering the system.
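As a rough sketch, the live-subscription half of this thread could be built on go-ethereum's `ethclient`; the RPC URL and vault address below are placeholders, and the real service also backfills history (e.g. via `FilterLogs`):

```go
package main

import (
	"context"
	"log"

	"github.com/ethereum/go-ethereum"
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/ethclient"
)

func main() {
	// Live subscriptions require a websocket endpoint (placeholder URL).
	client, err := ethclient.Dial("wss://eth-node.example/ws")
	if err != nil {
		log.Fatal(err)
	}

	// Placeholder vault address; restrict the filter to this contract only.
	vault := common.HexToAddress("0x0000000000000000000000000000000000000000")
	query := ethereum.FilterQuery{Addresses: []common.Address{vault}}

	logs := make(chan types.Log, 256)
	sub, err := client.SubscribeFilterLogs(context.Background(), query, logs)
	if err != nil {
		log.Fatal(err)
	}

	for {
		select {
		case err := <-sub.Err():
			log.Fatal(err) // a real service would reconnect and resume
		case l := <-logs:
			// Hand the raw log off to the in-memory queue for ordered processing.
			log.Printf("log: block=%d tx=%s", l.BlockNumber, l.TxHash.Hex())
		}
	}
}
```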
**Event Queue**

- Serves as a buffer between event ingestion and downstream processing.
- Maintains ordered delivery of events to prevent race conditions and ensure deterministic indexing.
- Enables the system to handle bursts of incoming events without blocking network I/O.
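Conceptually, such a buffer can be modeled in Go as a buffered channel with a single consumer, which preserves enqueue order without extra locking. A minimal sketch (the queue size and field usage are assumptions):

```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/core/types"
)

// eventQueue decouples network I/O from processing: the listener can absorb
// bursts of logs while one consumer drains them in enqueue order. A full
// buffer applies backpressure to the sender instead of dropping events.
var eventQueue = make(chan types.Log, 1024)

func main() {
	// Producer side (the listener thread) simply sends captured logs.
	go func() {
		eventQueue <- types.Log{BlockNumber: 1, Index: 0}
		close(eventQueue)
	}()

	// Single consumer: ordered, race-free handoff to the processor stage.
	for l := range eventQueue {
		fmt.Printf("processing block=%d logIndex=%d\n", l.BlockNumber, l.Index)
	}
}
```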
**Event Processor (Thread 2)**

- Continuously fetches newly queued events and transforms them into structured index records suitable for querying.
- Normalizes Ethereum event data (e.g., addresses, amounts, block metadata) into relational database entries.
- Persists these records into the Postgres database, providing a clean and queryable representation of vault state over time.
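A sketch of the normalization step; the record shape and field names here are assumptions, not the project's actual schema, and real code would decode the ABI payload per event type:

```go
package main

import (
	"fmt"
	"math/big"

	"github.com/ethereum/go-ethereum/core/types"
)

// indexRecord is an assumed relational shape for one normalized event.
type indexRecord struct {
	TxHash      string // hex-encoded transaction hash
	BlockNumber uint64
	Contract    string // hex-encoded emitting contract address
	Amount      string // uint256 kept as a decimal string to avoid overflow
}

// normalize flattens a raw log into columns ready for a Postgres insert.
func normalize(l types.Log) indexRecord {
	amount := new(big.Int)
	if len(l.Data) >= 32 {
		amount.SetBytes(l.Data[:32]) // first 32-byte word, e.g. Deposit assets
	}
	return indexRecord{
		TxHash:      l.TxHash.Hex(),
		BlockNumber: l.BlockNumber,
		Contract:    l.Address.Hex(),
		Amount:      amount.String(),
	}
}

func main() {
	rec := normalize(types.Log{BlockNumber: 42, Data: make([]byte, 32)})
	fmt.Printf("%+v\n", rec)
}
```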
**Finality & Reorg Monitor (Thread 3)**

- Monitors blockchain finality and safety levels to maintain data integrity.
- Detects potential chain reorganizations (reorgs) by verifying block hashes and canonicality.
- Triggers cleanup and rollback operations if non-finalized data becomes invalid.
- Updates internal tables to reflect finalized blocks and ensures all indexed data corresponds to the canonical Ethereum chain.
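The core canonicality check can be sketched as comparing a stored block hash against the node's current header at the same height; the function name, URL, and block number below are hypothetical:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"math/big"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/ethclient"
)

// reorged reports whether the block indexed at `number` is no longer
// canonical, i.e. its stored hash differs from the chain's current hash.
func reorged(ctx context.Context, c *ethclient.Client, number uint64, stored common.Hash) (bool, error) {
	header, err := c.HeaderByNumber(ctx, new(big.Int).SetUint64(number))
	if err != nil {
		return false, err
	}
	return header.Hash() != stored, nil
}

func main() {
	c, err := ethclient.Dial("https://eth-node.example") // placeholder URL
	if err != nil {
		log.Fatal(err)
	}
	// `stored` would come from the indexer's own block-tracking table.
	var stored common.Hash
	bad, err := reorged(context.Background(), c, 19_000_000, stored)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("rollback needed:", bad) // if true, delete rows past this block
}
```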
**Postgres Database (Supabase)**

- Acts as the central data store for both raw events and processed indices.
- Enables efficient queries over user positions, vault balances, and event histories.
- Managed through Supabase, which provides hosted PostgreSQL, authentication, and built-in REST access.
**API Gateway**

- Provides a RESTful API layer for external clients, dashboards, or analytics tools.
- Offers auto-generated documentation (Swagger / OpenAPI) for ease of exploration.
- Fetches data directly from the Postgres database, exposing endpoints for querying vault events, user histories, and position summaries.
- Enables fast, queryable access to indexed data by contract address or user wallet.
**Prerequisites**

- indexer
  - Go
  - Ethereum node (HTTP endpoint)
- api gateway
  - npm
  - Node.js
- local Postgres development
  - Supabase CLI installed (`npm install supabase --save-dev`)
  - Docker
Copy `.env.example` and create `.env.local`. You can also create an `.env.prod` and run the process with an environment argument; the default environment is `dev`.
- Start Docker
- Start the database:

```sh
npx supabase start   # will deploy existing schema
npx supabase status  # view the available endpoints
```

- Start the indexer:

```sh
go mod tidy              # installs Go dependencies
./start-indexer.sh       # start server in development (default)
./start-indexer.sh prod  # example with .env.prod file
```

- Start the API gateway:

```sh
npm install
npm run dev
```

Swagger docs will be available at `/docs`.
Here are some useful commands for working with the database.
```sh
# create and apply migrations
npx supabase migration new <name>
npx supabase migration up

# generate types for the indexer
npx supabase gen types --lang go --local > go-indexer/database/types.go
# generate types for the api server
npx supabase gen types --lang typescript --local > app/types/database.ts

# reset the database (remember to re-generate types afterwards)
npx supabase db reset

# stop supabase and drop the local DB
npx supabase stop --no-backup
```
To clean up Docker resources:

```sh
docker stop $(docker ps -a -q)
docker rm $(docker ps -a -q)
docker network prune
docker volume ls
docker volume rm <volume name>
```

The indexer is meant to run as a constant-uptime script on a remote server. It is safe to stop and restart, and it will backfill the data on each restart.
This process was previously deployed on Linux servers and run as a systemd service configured via Nix. Persistent errors will cause the process to stop running, so if you plan to deploy this yourself, add a configuration that restarts the process with a cooldown when it stops.
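For example, a minimal systemd unit with restart-on-failure and a cooldown might look like the sketch below; the unit name, binary path, and environment file are placeholders, not the original Nix-generated configuration:

```ini
[Unit]
Description=Vault indexer (placeholder unit)
After=network-online.target
Wants=network-online.target

[Service]
# Placeholder paths; point these at the built binary and env file.
ExecStart=/usr/local/bin/go-indexer
EnvironmentFile=/etc/vault-indexer/env
# Restart with a 30-second cooldown if the process exits with an error.
Restart=on-failure
RestartSec=30

[Install]
WantedBy=multi-user.target
```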
```sh
nix run .#indexer
```

Run the test suite with:

```sh
go test ./...
go test ./go-indexer/indexer -v
```

The main thread spins up a lightweight HTTP server for health checks.
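A minimal sketch of such a health endpoint using only the standard library (the handler body is an assumption, not the project's actual code):

```go
package main

import (
	"log"
	"net/http"
)

func main() {
	mux := http.NewServeMux()
	// Liveness probe for systemd, load balancers, or manual checks.
	mux.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
		w.Write([]byte("ok"))
	})
	log.Fatal(http.ListenAndServe(":8080", mux))
}
```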
Verify it with:

```sh
curl http://localhost:8080/health
```

A manual check can be run to load all historical data locally and verify that production matches the historical load:
1. Start with a fresh local DB:

   ```sh
   npx supabase db reset
   ```

2. Load in the historical set of data:

   ```sh
   ./start-indexer.sh
   ```

3. Define connection params for the test: populate `go-indexer/scripts/.env.scripts` with the local and remote DB URLs, and the ETH websocket RPC URL.

4. Run the comparison script. You MUST `cd` into the scripts directory first:

   ```sh
   cd go-indexer/scripts
   go run data-check.go
   ```