Infernet Node
The Infernet Node is a lightweight off-chain client for Infernet responsible for fulfilling compute workloads:
- Nodes listen for on-chain requests (via the Infernet Coordinator contract) or off-chain requests (via the REST API)
- Nodes orchestrate dockerized Ritual ML Workflows, consuming on-chain and off-chain provided inputs
- Nodes deliver workflow outputs and optional proofs via on-chain transactions or the off-chain API
James has set up a Governor contract for his DAO, inheriting the Infernet SDK. Every time a proposal is created, the contract kicks off a new on-chain Subscription request, alerting an Infernet Node of a new proposal. Once picked up by the node, James' custom governor-quantitative-workflow is run, and the node responds on-chain with an output and an associated computation proof.
Emily is developing a new NFT collection that lets minters automatically add new traits to their NFTs by posting what they'd like in plaintext (think, "an Infernet-green hoodie"). Emily sets up a minting website that posts signed Delegate Subscriptions to an Infernet node running her custom stable-diffusion-workflow. This workflow parses plaintext user input and generates a new Base64 image, with the Infernet node posting the final image to her smart contract via an on-chain transaction.
Travis is building a new web app that allows his users to chat with AI avatars. He posts new messages via Delegate Subscriptions to his Infernet node running his custom llm-inference-workflow via the HTTP API, and receives a response instantly over the same API. He surfaces these responses to users in his web app, offering a snappy user experience, while his node asynchronously publishes a proof of computation on-chain, letting his users verify the integrity of the responses in the future.
Granular configuration
Infernet Nodes offer granular runtime configuration and permissioning. Operators have full flexibility in:
- Running any arbitrary compute workload (via the Ritual ML Workflows)
- Using both public workflow images and private images via Docker Hub
- Choosing to listen to on-chain events, off-chain events, or both
- Configuring on-chain parameters, including max_gas_limit, how many blocks to trail the chain head, and more
- Restricting workload access by IP address, on-chain address, delegated contract address, and more
- Specifying workload configuration parameters (including environment variables, execution ordering, etc.)
- Optionally forwarding diagnostic node system statistics to Ritual
All of these parameters can be configured via a single runtime config.json
file. Read more about sane defaults and modifying this configuration for your own use cases in Node: Configuration.
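For illustration only, a minimal config.json might look like the sketch below. The key names and values here are assumptions for the sake of example (the section above only mentions the REST port, max_gas_limit, trailing blocks, and per-container settings); consult Node: Configuration for the actual schema and defaults.

```json
{
  "server": { "port": 4000 },
  "chain": {
    "enabled": true,
    "rpc_url": "http://localhost:8545",
    "max_gas_limit": 5000000,
    "trail_head_blocks": 5
  },
  "containers": [
    {
      "id": "my-workflow",
      "image": "myorg/my-workflow:latest",
      "env": { "MODEL_NAME": "example" }
    }
  ]
}
```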
System specifications
Infernet Node requirements depend greatly on the type of compute workflows you plan to run. Because all workflows run in Docker containers, we recommend a machine that meets at least the minimum requirements to support virtualization. Memory-enhanced machines are preferred.
Minimum Requirements
| | Minimum | Recommended | GPU-heavy workloads |
|---|---|---|---|
| CPU | Single-core vCPU | 4 modern vCPU cores | 4 modern vCPU cores |
| RAM | 128MB | 16GB | 64GB |
| DISK | 512MB HDD | 500GB IOPS-optimized SSD | 500GB NVMe |
| GPU | | | CUDA-enabled GPU |
Off-chain events
If you choose to service off-chain Web2 requests via the REST API, you will have to expose port 4000 to the Internet.
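As a quick sanity check that the port accepts connections, a bash snippet like the following can probe it. The localhost host is an assumption for local testing (swap in your public hostname to check external reachability), and the /dev/tcp pseudo-device is bash-specific:

```shell
#!/usr/bin/env bash
# Probe whether the node's REST port (4000, per the section above) accepts
# TCP connections. HOST is an assumption; override it for remote checks.
HOST="${HOST:-localhost}"
PORT="${PORT:-4000}"
if timeout 2 bash -c "exec 3<>/dev/tcp/$HOST/$PORT" 2>/dev/null; then
  STATUS="open"
else
  STATUS="closed"
fi
echo "port $PORT on $HOST is $STATUS"
```

If the port reports closed from outside your network, check your firewall and any cloud provider security-group rules.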
On-chain events
If you plan to use your Infernet Node to listen and respond to on-chain events via the Infernet SDK, you will also need access to a blockchain node RPC that supports the eth_newFilter Ethereum JSON-RPC method.
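You can check whether a candidate RPC endpoint supports eth_newFilter with a standard JSON-RPC probe. The endpoint URL below is a placeholder assumption; point RPC_URL at your provider:

```shell
#!/usr/bin/env bash
# Send a standard eth_newFilter JSON-RPC request. A "result" containing a
# filter ID means the method is supported; an error with code -32601
# ("method not found") means it is not.
RPC_URL="${RPC_URL:-http://localhost:8545}"  # placeholder endpoint
PAYLOAD='{"jsonrpc":"2.0","id":1,"method":"eth_newFilter","params":[{"fromBlock":"latest","toBlock":"latest"}]}'
RESPONSE=$(curl -s -m 5 -X POST -H 'Content-Type: application/json' \
  --data "$PAYLOAD" "$RPC_URL" || echo '{"error":"endpoint unreachable"}')
echo "$RESPONSE"
```

Note that some hosted RPC providers disable filter methods on shared tiers, so it is worth probing before committing to a provider.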
Running the node locally
Infernet Nodes execute containerized workflows. As such, installing and running a modern version of Docker is a prerequisite, regardless of how you choose to run the node.
You can run an Infernet Node locally via Docker Compose. First, you must create a configuration file (see: example configuration).
# Create a configuration file
cd deploy
cp ../config.sample.json config.json
To fill in config.json properly, see Configuration. After you have configured your node, you can run it with:
# Run node and associated services
cd deploy
docker compose up -d
Registering on-chain
If you choose to respond to on-chain Subscription events, your node will also need to call registerNode() and activateNode() on the Manager interface of the Coordinator. Once you have populated the chain runtime configuration in config.json, you can use the included scripts via docker exec to register and activate:
# Find and set your container name
docker ps
CONTAINER_NAME="your-container-name-from-ps"
# Register node (via docker exec)
docker exec "$CONTAINER_NAME" make register-node
# Activate node (via docker exec)
docker exec "$CONTAINER_NAME" make activate-node
Next steps
Once ready, you may choose to:
- Follow an introductory quick start to setting up an Infernet Node end-to-end
- Understand the granular, runtime configuration settings available to you as a node operator
- Read in-depth about the node architecture to better understand what's possible with Infernet
- Explore your options to deploy an Infernet Node in production
- Find out more about the Ritual ML Workflows and the Infernet SDK