The Merkle Oracle Node monitors the blockchain and, at timed intervals defined in the config, fetches the latest oracle data and adds it to a local Merkle trie. It then uploads the data to a cloud provider such as IPFS and submits a recreate-oracle transaction to the blockchain.
Once the transaction is confirmed, the node indexes it into the local database.
Chain rollbacks are handled automatically: the indexer detects them and rolls the local state back to match the chain.
The node also exposes a comprehensive RESTful API, allowing users and applications to query the current and historical state of the data and retrieve individual data points.
- Entrypoint (`cmd/node/main.go`): Initializes all components, including configuration, logging, database, and the main actor engine.
- Actor System (`internal/actors`): Uses the `anthdm/hollywood` actor framework to manage concurrent processes.
  - `IndexerActor`: Connects to a Cardano node using the `blinklabs-io/adder` library. It streams blocks and transactions, handling rollbacks and forwarding events downstream.
  - `StrategyManagerActor`: Supervises the core logic. It spawns and manages the `IndexerActor` and other processing strategies, and handles the indexer restart and circuit-breaker logic.
  - `ChainEventProcessorStrategy`: The core business logic of the oracle. It listens for blockchain events, determines when to fetch new off-chain data, calculates trie diffs, builds and submits Cardano transactions with the new Merkle root, and persists the new state to the database.
- Database (`internal/database`): Manages all interactions with the PostgreSQL database using `gorm`. It stores oracle files, trie history, objects, keys, and values, and includes functions for state rollback and historical queries.
- Cardano Interaction (`internal/provider`, `internal/tx`):
  - Uses `zenGate-Global/cardano-connector-go` to abstract interactions with various Cardano data providers (Blockfrost, Kupmios, etc.).
  - Uses `Salvionied/apollo` for building, balancing, and signing Cardano transactions.
- Cloud Storage (`internal/cloud`): Provides a generic `Cloud` interface with implementations for Google Cloud Storage and IPFS (via Pinata) to store oracle data snapshots.
- API Server (`internal/api`): A `gin-gonic` based web server that provides the public REST API for querying oracle data. It includes auto-generated interactive documentation using `bdpiprava/scalar-go`.
- The VM runs a lightweight Caddy reverse proxy to serve HTTPS for merkle-staging.zengate-dev.com.
- The pipeline ensures the VM has an external IP and opens ports 80/443 automatically.
- Your only action is to add a DNS A record pointing the domain to the VM’s external IP in Cloud DNS (project: zengate-dns-management, zone: zengate-dev).
- See devops/caddy.md for the 1-step DNS command and details.
- Go `1.24` or later
- Docker and Docker Compose
- Access to a running PostgreSQL instance (a dockerized Postgres works)
- Blockfrost API, Ogmios+Kupo, UtxoRPC, or Maestro API access (the `submit` section of the config file is optional)
- Pinata API access (IPFS) or a GCP bucket (used for development or staging)
- An Oracle API endpoint that returns the oracle data in the format specified in `internal/oprovider/oprovider.go`. A mock implementation of a data provider can be found here.
The node is configured using a `config.yaml` file. An example with all available options is provided in `config.example.yaml`.
- Copy the example configuration file: `cp config.example.yaml config.yaml`
- Edit `config.yaml` and fill in the required values. See the detailed breakdown below; illustrative sketches follow each group of sections.
`storage`:
- `url`: (string) Your PostgreSQL connection string, e.g. `postgresql://postgres:postgres@localhost:5432/postgres`.
`indexer`:
- `address`: (string) The TCP address of your Cardano node (e.g., `IP_ADDR:3001`).
- `socketPath`: (string) Filesystem path to your Cardano node's IPC socket. Note: use either `address` or `socketPath`, not both.
- `interceptHash` & `interceptSlot`: (string, int) The block hash and slot to start indexing from if the database is empty.
- `restartThreshold` & `restartTimeWindow`: (int, duration) Configure the indexer's circuit breaker. If the indexer restarts more than `restartThreshold` times within `restartTimeWindow`, it performs a full reset to the intercept point.
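For illustration, a minimal sketch of the `storage` and `indexer` sections. All values are placeholders, and the duration format is assumed to follow the same notation as `10m` elsewhere in this document; `config.example.yaml` remains the authoritative reference:

```yaml
storage:
  url: postgresql://postgres:postgres@localhost:5432/postgres  # placeholder credentials

indexer:
  address: "192.0.2.10:3001"   # use address OR socketPath, never both
  # socketPath: /ipc/node.socket
  interceptHash: <block-hash>  # block at (or before) the genesis transaction
  interceptSlot: 12345678      # absolute slot of that block (placeholder)
  restartThreshold: 5          # assumed values: more than 5 restarts...
  restartTimeWindow: 10m       # ...within 10 minutes triggers a full reset
```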
`metrics`, `debug`, `server`:
- `listenAddress` & `listenPort`: The address and port for the Prometheus metrics endpoint, the Go pprof debugger, and the public API server, respectively.
`submit`:
- `url`: (string) A custom URL for submitting transactions (e.g., a Blockfrost submit endpoint). If empty, the provider from the `api` section will be used.
- `blockFrostProjectID`: (string) Your Blockfrost Project ID, required if using the Blockfrost submit endpoint.
`wallet`:
- `mnemonic`: (string) The 24-word seed phrase of the wallet used to sign and pay for transactions. CRITICAL: secure this value.
`logging`:
- `level`: (string) Log level (`debug`, `info`, `warn`, `error`).
- `log_discord_webook_url`, `notification_discord_webhook_url`: Optional Discord webhooks for logging and notifications.
- Other fields control log file rotation.
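A hedged sketch of these three sections, with placeholder values only (the Blockfrost URL is an example endpoint, not a recommendation):

```yaml
submit:
  url: https://cardano-preview.blockfrost.io/api/v0/tx/submit  # optional; empty = use the api provider
  blockFrostProjectID: <project-id>                            # placeholder

wallet:
  mnemonic: "word1 word2 ... word24"  # CRITICAL: keep out of version control

logging:
  level: info
  log_discord_webook_url: ""            # optional; key name copied as documented
  notification_discord_webhook_url: ""  # optional
```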
`api`:
- Configure at least one Cardano provider (Blockfrost, Ogmios+Kupo, UtxoRPC, or Maestro) to enable the node to query the blockchain.
`cloud`:
- Configure one cloud provider for storing data snapshots.
- GCP: `gcpCredentialJSONPath`, `bucketName`.
- IPFS: `pinataGatewayURL`, `pinataJWT`.
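For example, a GCP-backed `cloud` section might look like the sketch below. Paths and names are placeholders, and how the node selects between the two providers (e.g., by which keys are set) is not specified here, so consult `config.example.yaml`:

```yaml
cloud:
  gcpCredentialJSONPath: ./assets/gcp-credentials.json  # GCP (placeholder path)
  bucketName: my-oracle-snapshots                       # GCP (placeholder)
  # IPFS via Pinata instead:
  # pinataGatewayURL: https://example.mypinata.cloud
  # pinataJWT: <pinata-jwt>
```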
`oracle`:
- `updateInterval`: (duration) The minimum time that must pass before the oracle publishes a new root on-chain, e.g. `10m`.
- `baseURL`: (string) The base URL of the off-chain data source API. Check the interface in `internal/oprovider/oprovider.go` to determine the format for the data provider integration.
`network`: (string) The Cardano network to connect to (`mainnet`, `preview`, etc.).

`contract`:
- `contractAddress`: (string) The on-chain address of the Merkle oracle validator script.
- `singletonPolicyId` & `singletonName`: (string) The policy ID and asset name of the NFT that ensures the contract's UTxO is unique.
- `merkleOracleScriptRef`: The transaction output reference (`txId`, `index`) that contains the validator script.
Note: the `contract` section values are determined after running the genesis command with the Merkle Oracle CLI.
The intercept hash should be the hash of the block that includes the genesis transaction, or of an earlier block; the intercept slot should be the absolute slot number of the selected block. A sketch of these remaining sections follows.
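All values below are placeholders; the real `contract` values come from the genesis command, and the nested layout of `merkleOracleScriptRef` is an assumption based on its (`txId`, `index`) description:

```yaml
oracle:
  updateInterval: 10m                      # minimum time between on-chain root updates
  baseURL: https://oracle-api.example.com  # off-chain data source (placeholder)

network: preview

contract:
  contractAddress: <contract-address>      # placeholder; output of the genesis command
  singletonPolicyId: <policy-id>
  singletonName: <asset-name>
  merkleOracleScriptRef:                   # layout assumed from the (txId, index) description
    txId: <tx-id>
    index: 0
```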
- **Configure**: Create and edit your `config.yaml` as described above.
- **Prepare Volumes**: Run the provided setup script. It creates the required Docker volume and sets the correct file ownership for `config.yaml` and the log directory, which is necessary because the container runs as a non-root user for enhanced security.
  ```bash
  ./setup_docker_compose.sh
  ```
- **Start Services**: Launch the oracle node and the PostgreSQL database (a sketch of a typical compose file follows these steps).
  ```bash
  docker compose up -d
  ```
- **Monitor Logs**:
  ```bash
  docker compose logs -f merkle-oracle-node
  ```
- **Stop Services**:
  ```bash
  docker compose down
  ```
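For orientation, here is a minimal sketch of what the compose file typically defines. This is an illustration, not the repository's actual `docker-compose.yml`; apart from `merkle-oracle-node` (which appears in the log command above), all service, volume, and path names are assumptions:

```yaml
services:
  merkle-oracle-node:
    build: .
    volumes:
      - ./config.yaml:/app/config.yaml  # assumed mount path
      - oracle-data:/app/assets         # assumed volume created by setup_docker_compose.sh
    depends_on:
      - postgres
  postgres:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: postgres       # placeholder
volumes:
  oracle-data:
```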
- **Ensure** you have a running PostgreSQL instance.
- **Configure** your `config.yaml` with the database connection string and other parameters.
- **Install dependencies**:
  ```bash
  go mod tidy
  ```
- **Run the application**:
  ```bash
  go run ./cmd/node -config ./config.yaml
  ```
  Alternatively, build a binary first using the `Makefile`:
  ```bash
  make build
  ./node -config ./config.yaml
  ```
A convenience script is provided to run the node as well:
```bash
./run.sh
```
The node provides Swagger documentation. Once the node is running, you can access it through the API server; make sure the port matches `server.listenPort` in the config.
The service emits structured JSON logs to both standard output and a rotating log file located in `./assets/logs/`. Logging behavior (level, file size, rotation) is configurable in `config.yaml`.
Prometheus-compatible metrics are exposed on the `/metrics` endpoint (default port 9094). Key metrics include:
- `merkle_oracle_node_slot`: The current slot being processed by the indexer.
- `merkle_oracle_node_tip_reached`: A gauge (1 or 0) indicating whether the indexer is synced to the chain tip.
- `merkle_oracle_node_blocks_processed_total`: A counter for the total number of processed blocks.
- `merkle_oracle_node_rollbacks_processed_total`: A counter for the total number of rollbacks handled.
- `merkle_oracle_node_trie_root_mismatches_total`: A counter for detected mismatches between the local and on-chain Merkle root.
- `merkle_oracle_node_indexer_restarts_total`: A counter for indexer restarts, labeled by reason.
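To collect these, point a standard Prometheus scrape job at the metrics listener. A minimal sketch, assuming the default port 9094 and a local deployment:

```yaml
scrape_configs:
  - job_name: merkle-oracle-node
    static_configs:
      - targets: ["localhost:9094"]  # must match metrics.listenAddress/listenPort in config.yaml
```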
The project includes a `Makefile` to streamline common development tasks.
- `make build`: Compiles the node binary into the root directory.
- `make test`: Runs all Go tests in the project.
- `make format`: Formats all Go source code using `gofmt` and `golines`.
- `make lint`: Runs `golangci-lint` to check for code quality and style issues.
- `make lint-fix`: Runs the linter with the `--fix` flag to automatically correct issues.
- `make gen-docs`: Auto-generates the Swagger API documentation; make sure to rebuild the binary after running this.
The project enforces Conventional Commits via a GitHub Action on pull requests.
