Dria Compute Node serves computation results within the Dria Knowledge Network.
A Dria Compute Node is a unit of computation within the Dria Knowledge Network. Its purpose is to process tasks given by the Dria Admin Node and receive rewards for providing correct results. These nodes are part of the Waku network, a privacy-preserving, censorship-resistant peer-to-peer network.
The Dria Admin Node broadcasts heartbeat messages at a set interval; it is a required duty of the compute node to respond to these so that it can be included in the list of available nodes for task assignment.
Compute nodes can technically perform any arbitrary task, from computing the square root of a given number to generating LLM outputs for a given prompt. We currently have the following tasks:
- Synthesis: Using Ollama, nodes will generate synthetic data with respect to prompts given by the admin node.
Each task can be enabled by providing the task name as a feature flag to the executable.
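For example, assuming the feature is named after the task in lowercase (an assumption; check `Cargo.toml` for the exact feature names), enabling the synthesis task might look like:

```sh
# hypothetical feature name; see Cargo.toml for the actual features
cargo build --release --features=synthesis
```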
We are using a reduced version of nwaku-compose for the Waku node. It only uses the RELAY protocol, and STORE is disabled. The respective files are under the `waku` folder.
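As a rough sketch of what this means (the flag names below are an assumption; the actual configuration lives in the files under `waku`), a RELAY-only nwaku node is configured along these lines:

```sh
# sketch only: RELAY enabled, STORE disabled
wakunode2 --relay=true --store=false
```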
Dria Compute Node is mainly expected to be executed using Docker Compose. The provided compose file will set up everything required. To start running a node, you must do the following:
- Prepare Environment Variables: Dria Compute Node makes use of several environment variables, some of which are used by Waku itself as well. First, prepare your environment variables as given in `.env.example` (a sketch follows this list).
- Fund an Ethereum Wallet with 0.1 Sepolia ETH: Waku and Dria make use of the same Ethereum wallet, and Waku uses the RLN Relay protocol for further security within the network. If you have not registered to the RLN protocol yet, register by running `./register_rln.sh`. If you have already registered, you will have a `keystore.json` which you can place under `./waku/keystore/keystore.json` in this directory. Your secret key will be provided at the `ETH_TESTNET_KEY` variable. You can set an optional password at the `RLN_RELAY_CRED_PASSWORD` variable as well to encrypt the keystore file, or to decrypt it if you already have one.
- Ethereum Client RPC: To communicate with Sepolia, you need an RPC URL. You can use Infura or Alchemy. Your URL will be provided at the `ETH_CLIENT_ADDRESS` variable.
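As a minimal sketch, the resulting `.env` might contain entries like the following; the variable names are from the steps above, while the values are illustrative placeholders only:

```sh
# illustrative placeholders; copy .env.example and fill in your own values
ETH_TESTNET_KEY=<your-ethereum-wallet-secret-key>
RLN_RELAY_CRED_PASSWORD=<optional-keystore-password>
ETH_CLIENT_ADDRESS=https://sepolia.infura.io/v3/<your-api-key>
```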
With all of these steps completed, you should be able to start a node with:
```sh
# clone the repo
git clone https://github.com/firstbatchxyz/dkn-compute-node
cd dkn-compute-node

# -d to run in the background
docker compose up -d
```
With the `-d` option, the containers will run in the background. You can check their logs either via the terminal or from Docker Desktop.
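For example, from the terminal:

```sh
# list the running containers and follow their combined logs
docker compose ps
docker compose logs -f
```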
You have several alternatives for running Ollama:

- `docker compose --profile ollama-cpu up -d` will launch the Ollama container using CPU only.
- `docker compose --profile ollama-cuda up -d` will launch the Ollama container with CUDA support, for NVIDIA GPUs.
- `docker compose --profile ollama-rocm up -d` will launch the Ollama container with ROCm support, for AMD GPUs.
- For Apple Silicon, you must install Ollama (e.g. `brew install ollama`) and launch the server (`ollama serve`) in another terminal, and then simply run `docker compose up -d`.
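Whichever route you take, you can verify that the Ollama server is reachable. Assuming the default Ollama port of 11434, a quick check looks like:

```sh
# should respond with "Ollama is running" if the server is up
curl http://localhost:11434
```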
You can decide on a model to use by changing the `DKN_OLLAMA_MODEL` variable, such as `DKN_OLLAMA_MODEL=llama3`. See the Ollama library for the catalog of models.
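If you are running the Ollama server yourself (e.g. on Apple Silicon), you may want to pull the chosen model ahead of time so the first task does not block on the download:

```sh
# download the model referenced by DKN_OLLAMA_MODEL
ollama pull llama3
```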
We are using Make as a wrapper for some scripts. You can see the available commands with:
```sh
make help
```
You will need OpenSSL installed as well; see shorthand commands here.
While running the Waku and Ollama nodes elsewhere, you can run the compute node with:
```sh
make run   # info-level logs
make debug # debug-level logs
```
Open crate docs using:
```sh
make docs
```
Besides the unit tests, there are separate tests for the Waku network and for compute tasks such as Ollama.
```sh
make test        # unit tests
make test-waku   # Waku tests (requires a running Waku node)
make test-ollama # Ollama tests (requires a running Ollama client)
```
To measure the speed of some Ollama models, we have a benchmark that runs a few prompts through several models:
```sh
cargo run --release --example ollama
```
You can also benchmark these models using a larger task list at a given path, with the following command:
```sh
JSON_PATH="./path/to/your.json" cargo run --release --example ollama
```
Lint and format with:
```sh
make lint   # clippy
make format # rustfmt
```