
Gonka Blockchain Transaction Indexer

A high-performance blockchain transaction scanner and API service for the Cosmos SDK-based Gonka.ai blockchain, written in Go. It reads block_results directly from an RPC node, stores every transaction in ClickHouse, and exposes a fast REST API for querying them.

Used as part of the gonka.gg explorer, but designed to work with any Cosmos SDK chain out of the box (or with minor modifications).

Features

  • Reliable scanning -- uses the block_results RPC instead of unreliable tx search APIs
  • Bidirectional -- scans forward (new blocks) and backward (historical) simultaneously
  • Concurrent -- configurable worker pool for parallel block processing
  • Batch inserts -- efficient ClickHouse batch writes for maximum throughput
  • Fast queries -- sub-second API responses even with billions of transactions
  • RPC fallback -- if a tx isn't indexed yet, fetches it live from the chain
  • Docker-only -- no Go or other toolchains required, just Docker

Architecture

Cosmos RPC Node  ──►  Go Scanner  ──►  ClickHouse  ──►  REST API
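The scanner's bidirectional, concurrent design can be sketched roughly as follows. This is a simplified illustration, not the repo's actual code: the Block type, fetchBlock stub, and scanRange helper are hypothetical stand-ins for the real internal/scanner and internal/rpc packages.

```go
package main

import "fmt"

// Block is a simplified stand-in for a decoded block_results response.
type Block struct {
	Height  int64
	TxCount int
}

// fetchBlock simulates an RPC call; the real scanner would hit the
// node's /block_results endpoint here.
func fetchBlock(height int64) Block {
	return Block{Height: height, TxCount: int(height % 3)}
}

// scanRange fans block heights out to a pool of workers and collects
// the results, mirroring the CONCURRENT_WORKERS setting.
func scanRange(from, to int64, workers int) []Block {
	heights := make(chan int64)
	results := make(chan Block)
	for w := 0; w < workers; w++ {
		go func() {
			for h := range heights {
				results <- fetchBlock(h)
			}
		}()
	}
	go func() {
		for h := from; h <= to; h++ {
			heights <- h
		}
		close(heights)
	}()
	blocks := make([]Block, 0, to-from+1)
	for i := from; i <= to; i++ {
		blocks = append(blocks, <-results)
	}
	return blocks
}

func main() {
	// In the real service, a forward pass (new blocks) and a backward
	// pass (historical blocks) run as two independent loops like this.
	blocks := scanRange(1, 10, 4)
	fmt.Println(len(blocks)) // 10
}
```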

Quick Start

Prerequisites

  • Docker & Docker Compose

1. Clone & configure

git clone https://github.com/gonkalabs/tx-scanner.git
cd tx-scanner
cp .env.example .env

Edit .env and set RPC_URL to your chain's RPC endpoint, or point it to an rpc-pooler:

RPC_URL=http://your-node:26657

or

RPC_URL=http://<rpc-pooler-ip>:<port>

2. Start

docker compose up -d

That's it. This builds the Go binary inside Docker, starts ClickHouse, the scanner, and the API on port 8080.

3. Check it's working

# Tail logs
docker compose logs -f

# Health check
curl http://localhost:8080/api/v1/health

Configuration

All settings are environment variables (set in .env):

Variable             Default      Description
RPC_URL              (required)   RPC endpoint URL
RPC_TIMEOUT          30s          RPC request timeout
CLICKHOUSE_HOST      localhost    ClickHouse host
CLICKHOUSE_PORT      9000         ClickHouse native port
CLICKHOUSE_DATABASE  tx_scanner   Database name
CLICKHOUSE_USER      default      Database user
CLICKHOUSE_PASSWORD  (empty)      Database password
START_BLOCK          1            Starting block height
CONCURRENT_WORKERS   10           Parallel block fetchers
BATCH_SIZE           100          Blocks per batch
SCAN_INTERVAL        5s           Polling interval
API_PORT             8080         API listen port
LOG_LEVEL            info         Log level

API Endpoints

Transactions

GET /api/v1/transactions?limit=50&offset=0       # Latest transactions
GET /api/v1/transactions/latest                   # Alias for above
GET /api/v1/transactions/:hash                    # By hash (ClickHouse + RPC fallback)
GET /api/v1/transactions/address/:address         # By sender or recipient
GET /api/v1/transactions/type/:type               # By type (transfer, delegate, etc.)
GET /api/v1/transactions/ibc                      # IBC cross-chain transactions
GET /api/v1/transactions/inference                # Inference-related transactions
GET /api/v1/transactions/native-transfers         # Native token transfers (no IBC)
GET /api/v1/transfers/latest                      # Alias for native-transfers

Stats

GET /api/v1/stats                                 # Total transaction count
GET /api/v1/stats/daily-transfers?days=30         # Daily transfer volume

Health

GET /api/v1/health

Example response

{
  "transactions": [
    {
      "tx_hash": "A1B2C3...",
      "block_height": 12345678,
      "block_time": "2025-01-15T10:30:00Z",
      "tx_type": "transfer",
      "success": true,
      "sender": "cosmos1abc...",
      "recipient": "cosmos1def...",
      "amount": "1000000uatom"
    }
  ],
  "total": 987654,
  "limit": 50,
  "offset": 0,
  "query_time": 42
}

Makefile

make help           # Show all commands
make build          # Build Docker images
make up             # Start all services
make down           # Stop all services
make logs           # Tail logs
make restart        # Restart services
make test           # Run API smoke tests
make test-docker    # Full Docker integration test
make status         # Show running containers
make clean          # Stop services & remove volumes

Database Schema

CREATE TABLE transactions (
    tx_hash String,
    block_height Int64,
    block_time DateTime64(3),
    tx_index Int32,
    tx_type String,
    success Bool,
    gas_used Int64,
    gas_wanted Int64,
    events String,
    sender String,
    recipient String,
    amount String,
    memo String,
    fee String,
    created_at DateTime64(3)
) ENGINE = MergeTree()
PARTITION BY toYYYYMM(block_time)
ORDER BY (block_height, tx_index, tx_hash);

Bloom filter indexes are auto-created on tx_hash, sender, and recipient.

Transfer Detection

The scanner classifies transactions using a rule-based system:

  1. transfer event with sender/recipient/amount attributes
  2. coin_spent + coin_received events (token movement)
  3. /cosmos.bank.v1beta1.MsgSend message action
  4. /cosmos.bank.v1beta1.MsgMultiSend message action
  5. module = "bank" attribute
  6. MsgExec wrapping bank operations
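A subset of these rules can be sketched as a predicate over message type and ABCI events. This is illustrative only: the Event type and isTransfer function are assumptions, and the real parser in internal/parser also handles MsgMultiSend details, the bank module attribute, and MsgExec unwrapping:

```go
package main

import "fmt"

// Event is a simplified ABCI event: a type plus attribute key/values.
type Event struct {
	Type  string
	Attrs map[string]string
}

// isTransfer applies a subset of the classification rules listed above.
func isTransfer(msgType string, events []Event) bool {
	switch msgType {
	case "/cosmos.bank.v1beta1.MsgSend", "/cosmos.bank.v1beta1.MsgMultiSend":
		return true // rules 3-4: bank message actions
	}
	spent, received := false, false
	for _, ev := range events {
		switch ev.Type {
		case "transfer":
			// rule 1: transfer event with sender/recipient/amount attributes
			_, s := ev.Attrs["sender"]
			_, r := ev.Attrs["recipient"]
			_, a := ev.Attrs["amount"]
			if s && r && a {
				return true
			}
		case "coin_spent":
			spent = true
		case "coin_received":
			received = true
		}
	}
	// rule 2: paired coin_spent + coin_received events
	return spent && received
}

func main() {
	events := []Event{{Type: "transfer", Attrs: map[string]string{
		"sender": "cosmos1abc", "recipient": "cosmos1def", "amount": "1000000uatom",
	}}}
	fmt.Println(isTransfer("/some.other.Msg", events)) // true
}
```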

Project Structure

tx-scanner-go/
├── main.go                 # Entry point
├── internal/
│   ├── api/               # REST API (Gin)
│   ├── config/            # Environment-based config
│   ├── database/          # ClickHouse client & queries
│   ├── logger/            # Structured logging (zap)
│   ├── models/            # Data models & RPC types
│   ├── parser/            # Transaction parsing & classification
│   ├── rpc/               # RPC client
│   └── scanner/           # Bidirectional block scanner
├── Dockerfile             # Multi-stage Go build
├── docker-compose.yml
├── Makefile
└── .env.example

Adapting for Your Chain

This scanner works with any CometBFT-based chain. To adapt it:

  1. Set RPC_URL to your chain's RPC endpoint
  2. The amount extraction regex in daily transfer stats may need adjustment for your chain's denomination (e.g., uatom, uosmo)
  3. Inference-related endpoints are specific to chains with inference modules -- they'll return empty results on standard chains
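For point 2, a Cosmos coin string concatenates an integer amount and a denomination (e.g. "1000000uatom"). A minimal sketch of splitting it with a regex; the pattern and parseCoin helper here are illustrative and may differ from the regex the scanner actually uses:

```go
package main

import (
	"fmt"
	"regexp"
)

// coinRe splits a coin string like "1000000uatom" into amount and denom.
// Widen the denom character class for chains whose denominations use
// other characters (e.g. IBC denoms like "ibc/ABC...").
var coinRe = regexp.MustCompile(`^(\d+)([a-zA-Z/0-9]+)$`)

// parseCoin returns the amount and denomination, or ok=false if the
// string does not look like a coin.
func parseCoin(s string) (amount, denom string, ok bool) {
	m := coinRe.FindStringSubmatch(s)
	if m == nil {
		return "", "", false
	}
	return m[1], m[2], true
}

func main() {
	amt, denom, ok := parseCoin("1000000uatom")
	fmt.Println(amt, denom, ok) // 1000000 uatom true
}
```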

Performance

  • Scanning: 100-500 blocks/sec depending on worker count and RPC latency
  • API queries: < 500ms for most queries, even with billions of rows
  • Throughput: 1000+ API requests/sec

Contributing

Contributions welcome! Please open an issue or submit a PR.

License

MIT
