Tenzro Network
AI infrastructure that routes globally — reducing cost, removing single points of failure
Access any AI model through a unified protocol spanning a global provider network. Automatic routing keeps systems running when individual providers go down, prices spike, or access is restricted. Providers earn TNZO by serving AI inference and TEE security workloads. All settlements happen on-chain with cryptographic verification.
What is Tenzro Network?
Tenzro Network is the protocol layer enabling decentralized AI inference and TEE security services. It sits atop the Tenzro Ledger blockchain, providing the marketplace infrastructure for AI providers and consumers to interact trustlessly.
Unlike centralized AI APIs, Tenzro Network allows anyone to run a node, serve models, and earn TNZO tokens. Users discover available models through the on-chain registry, route requests to optimal providers based on price, latency, and reputation, and pay per token via micropayment channels.
All inference results can be cryptographically verified using ZK proofs or TEE attestations. Settlements happen on the Tenzro Ledger with automatic fee distribution to validators, model providers, and the network treasury.
Key Features
AI Inference Marketplace
Discover and access any AI model through a unified protocol. Providers register models on-chain with pricing, capabilities, and performance metrics. Route requests based on cost, latency, or reputation.
TEE Security Services
Run confidential AI inference inside hardware enclaves (Intel TDX, AMD SEV-SNP, AWS Nitro Enclaves, NVIDIA Confidential Computing). Cryptographic attestations prove code integrity and data isolation. Ideal for sensitive workloads.
Micropayment Channels
Pay per token with off-chain micropayment channels. Open a channel with escrowed TNZO, stream payments during inference, and settle on-chain in batches. Enables sub-cent transactions with minimal gas costs.
Model Registry
On-chain catalog spanning chat, time-series forecasting, vision and text embeddings, segmentation, detection, and speech-to-text. Each entry carries license tier, modality, parameter count, pricing, and provider addresses. Filter, compare, and verify reputation on-chain.
Provider Rewards
Earn TNZO by serving models or providing TEE enclaves. Network collects a small commission (default 0.5%) on all AI and TEE service payments. Rewards flow to treasury and are distributed to validators and stakers.
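The fee split above can be sketched as simple integer math. This is an illustrative calculation assuming the stated 0.5% default commission (50 basis points), not the protocol's actual settlement code; amounts are in the smallest TNZO unit to avoid floating-point drift.

```rust
// Default network commission: 0.5% = 50 basis points (per the docs above).
const COMMISSION_BPS: u128 = 50;

/// Split a service payment into (provider_share, network_commission).
fn split_payment(amount: u128) -> (u128, u128) {
    let commission = amount * COMMISSION_BPS / 10_000;
    (amount - commission, commission)
}

fn main() {
    // A 1_000_000-unit payment: provider keeps 995_000, treasury gets 5_000.
    let (provider, treasury) = split_payment(1_000_000);
    println!("provider = {provider}, treasury = {treasury}");
}
```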
Agent Infrastructure
Self-sovereign AI agents with auto-provisioned MPC wallets. A2A protocol for inter-agent communication. MCP bridge for Anthropic Model Context Protocol. Delegation scopes control agent spending and operations.
Multi-Modal Inference
Beyond chat, the network exposes seven inference runtimes through the same registry, the same provider economics, and the same MCP / A2A / CLI / SDK surfaces. Each runtime loads from the central catalog with license-tier gating (Permissive / Attribution / CommercialCustom / NonCommercial); non-commercial entries refuse to load without explicit opt-in.
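The license-tier gate described above can be sketched as a small match: every tier loads freely except NonCommercial, which requires an explicit opt-in. The enum and function names here are illustrative placeholders, not the actual registry API.

```rust
// Hypothetical sketch of license-tier gating; names are illustrative.
#[derive(Debug, PartialEq)]
enum LicenseTier {
    Permissive,
    Attribution,
    CommercialCustom,
    NonCommercial,
}

/// NonCommercial entries refuse to load without explicit opt-in.
fn can_load(tier: &LicenseTier, non_commercial_opt_in: bool) -> bool {
    match tier {
        LicenseTier::NonCommercial => non_commercial_opt_in,
        _ => true, // all other tiers load unconditionally
    }
}

fn main() {
    assert!(can_load(&LicenseTier::Permissive, false));
    assert!(!can_load(&LicenseTier::NonCommercial, false));
    assert!(can_load(&LicenseTier::NonCommercial, true));
    println!("license gating ok");
}
```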
Time-Series Forecasting
Foundation forecasters that map a [1, context_len] input to a [1, horizon] point forecast, with optional quantile outputs of shape [1, horizon, n_quantiles].
Chronos-2 · Chronos-Bolt small/base · TimesFM 2.5 · Granite-TTM-r2 (Apache-2.0)
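Reading a value out of the [1, horizon, n_quantiles] quantile output reduces to flat-buffer indexing. This sketch assumes row-major layout (the common ONNX convention); the helper is illustrative, not the SDK's actual API.

```rust
/// Index into a flattened [1, horizon, n_quantiles] buffer (row-major),
/// returning the forecast for quantile `q` at forecast step `step`.
fn quantile_at(buf: &[f32], n_quantiles: usize, step: usize, q: usize) -> f32 {
    buf[step * n_quantiles + q]
}

fn main() {
    // horizon = 2, n_quantiles = 3: [[0.1, 0.5, 0.9], [0.2, 0.6, 1.0]]
    let buf = [0.1, 0.5, 0.9, 0.2, 0.6, 1.0];
    // Median (quantile index 1) at forecast step 1:
    println!("{}", quantile_at(&buf, 3, 1, 1)); // prints 0.6
}
```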
Vision Embeddings
Image encoders with optional L2-normalize and a cosine-similarity helper for zero-shot classification. CLIP / ImageNet / SigLIP normalization profiles.
DINOv3 (commercial-custom) · SigLIP2 (Apache-2.0) · CLIP ViT-B/L (MIT)
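The cosine-similarity helper mentioned above amounts to: L2-normalize both embeddings, take the dot product, and pick the label with the highest score. A minimal sketch with illustrative function names (not the SDK's actual API):

```rust
/// L2-normalize a vector to unit length.
fn l2_normalize(v: &[f32]) -> Vec<f32> {
    let norm = v.iter().map(|x| x * x).sum::<f32>().sqrt();
    v.iter().map(|x| x / norm).collect()
}

/// Cosine similarity = dot product of the two unit-normalized vectors.
fn cosine(a: &[f32], b: &[f32]) -> f32 {
    let (a, b) = (l2_normalize(a), l2_normalize(b));
    a.iter().zip(&b).map(|(x, y)| x * y).sum()
}

fn main() {
    // Zero-shot classification: compare an image embedding against
    // precomputed label embeddings and keep the best match.
    let image = [0.9, 0.1, 0.0];
    let labels = [("cat", [1.0, 0.0, 0.0]), ("dog", [0.0, 1.0, 0.0])];
    let best = labels
        .iter()
        .max_by(|a, b| cosine(&image, &a.1).total_cmp(&cosine(&image, &b.1)))
        .unwrap();
    println!("best match: {}", best.0); // prints "cat"
}
```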
Text Embeddings
MTEB-tier retrieval with HuggingFace tokenizers, Matryoshka dim truncation, and fp32 / q8 / q4 activation paths.
Qwen3-Embedding 0.6B/4B/8B · EmbeddingGemma-300M (768/512/256/128) · BGE-M3 · Snowflake Arctic Embed L
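Matryoshka dimension truncation keeps only the leading components of an embedding and re-normalizes, so cosine comparisons stay valid at the reduced dimension. A sketch under assumed conventions, not the SDK's actual API:

```rust
/// Keep the first `dim` components and re-L2-normalize to unit length.
fn truncate_and_renormalize(embedding: &[f32], dim: usize) -> Vec<f32> {
    let head = &embedding[..dim];
    let norm = head.iter().map(|x| x * x).sum::<f32>().sqrt();
    head.iter().map(|x| x / norm).collect()
}

fn main() {
    // A 4-dim embedding truncated to its first 2 dims (stand-in for a
    // 768 -> 256 step in EmbeddingGemma's 768/512/256/128 ladder).
    let full = [3.0, 4.0, 1.0, 2.0];
    let small = truncate_and_renormalize(&full, 2);
    println!("{small:?}"); // unit-norm head: [0.6, 0.8]
}
```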
Image Segmentation
Two-pass encoder/decoder runtime. Encoder caches per-image embedding; decoder takes points or boxes and emits masks.
SAM 3 / 3.1 (commercial-custom) · SAM 2 base/large · EdgeSAM · MobileSAM
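The two-pass flow above means the expensive encoder runs once per image and its embedding is cached; each point or box prompt then costs only a cheap decoder pass. A structural sketch with stubbed placeholder types (not the runtime's real API):

```rust
/// Two-pass segmenter: encode once, decode many prompts.
struct Segmenter {
    cached_embedding: Option<Vec<f32>>,
}

impl Segmenter {
    fn new() -> Self {
        Segmenter { cached_embedding: None }
    }

    /// Pass 1: encode the image once (stubbed as a fixed vector here).
    fn encode(&mut self, _image: &[u8]) {
        self.cached_embedding = Some(vec![0.25; 4]);
    }

    /// Pass 2: decode a point prompt against the cached embedding.
    /// Returns None if the encoder pass has not run yet.
    fn decode_point(&self, x: f32, y: f32) -> Option<Vec<f32>> {
        let emb = self.cached_embedding.as_ref()?;
        // A real decoder emits masks; this stub just scales the embedding.
        Some(emb.iter().map(|e| e * (x + y)).collect())
    }
}

fn main() {
    let mut seg = Segmenter::new();
    assert!(seg.decode_point(1.0, 2.0).is_none()); // no encoder pass yet
    seg.encode(&[0u8; 16]);
    // Many prompts now reuse the single cached embedding.
    let mask = seg.decode_point(1.0, 2.0).unwrap();
    println!("mask len = {}", mask.len());
}
```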
Object Detection
NMS-free DETR-family detectors. Sigmoid + score-threshold post-processing returning {bbox, label_id, score}.
RF-DETR nano → 2xl (Apache-2.0, ICLR 2026) · D-FINE n/s/m/l/x
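The NMS-free post-processing above reduces to: sigmoid over the raw class logits, take the best class per query, and keep detections above a score threshold. Struct and function names here are illustrative, not the runtime's real API.

```rust
#[derive(Debug, PartialEq)]
struct Detection {
    bbox: [f32; 4], // [x1, y1, x2, y2]
    label_id: usize,
    score: f32,
}

fn sigmoid(x: f32) -> f32 {
    1.0 / (1.0 + (-x).exp())
}

/// Sigmoid + score-threshold post-processing; no NMS pass is needed
/// for DETR-family detectors because queries don't duplicate boxes.
fn post_process(boxes: &[[f32; 4]], logits: &[Vec<f32>], threshold: f32) -> Vec<Detection> {
    boxes
        .iter()
        .zip(logits)
        .filter_map(|(bbox, class_logits)| {
            // Best class per query, scored through a sigmoid.
            let (label_id, &logit) = class_logits
                .iter()
                .enumerate()
                .max_by(|a, b| a.1.total_cmp(b.1))?;
            let score = sigmoid(logit);
            (score >= threshold).then(|| Detection { bbox: *bbox, label_id, score })
        })
        .collect()
}

fn main() {
    let boxes = [[0.0, 0.0, 10.0, 10.0], [5.0, 5.0, 8.0, 8.0]];
    let logits = vec![vec![2.0, -1.0], vec![-3.0, -2.0]];
    let dets = post_process(&boxes, &logits, 0.5);
    println!("{} detection(s) kept", dets.len()); // only the first passes
}
```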
Speech-to-Text
Whisper-style encoder/decoder bundles, single-encoder edge models, and triple-bundle Parakeet/Canary. WAV via hound; MP3/FLAC via symphonia.
Whisper-large-v3-turbo (MIT) · Distil-Whisper · Moonshine v2 · Parakeet-TDT-0.6B-v3 · Canary-1B-Flash
A video embeddings runtime ships alongside — RPC, MCP tool, CLI command, A2A skill — but the wave-1 catalog is empty: no permissive ONNX-shippable encoder-only video model exists in the 2026 OSS landscape. Adding entries is mechanical once one ships.
Architecture
Supported TEE Platforms
Intel TDX (Trust Domain Extensions)
Hardware-isolated virtual machines on 4th/5th gen Xeon Scalable processors. Remote attestation via Intel Attestation Service. Memory encryption with per-VM keys. Feature flag: intel-tdx
cargo build --features intel-tdx
AMD SEV-SNP (Secure Encrypted Virtualization)
Encrypted VMs on AMD EPYC processors with memory integrity protection. Attestation via AMD Secure Processor. Supports GHCB protocol. Feature flag: amd-sev-snp
cargo build --features amd-sev-snp
AWS Nitro Enclaves
Isolated compute environments on EC2 instances. CPU and memory isolation with cryptographic attestation. No persistent storage, no interactive access. Feature flag: aws-nitro
cargo build --features aws-nitro
NVIDIA Confidential Computing
GPU-accelerated confidential computing on Hopper/Blackwell/Ada Lovelace architectures. NRAS attestation for GPU workloads. Ideal for AI inference and ZK proof generation. Feature flag: nvidia-gpu
cargo build --features nvidia-gpu
Getting Started
1. Run a Tenzro Node
Download and run a full node to participate in the network:
# Quick install
curl -fsSL https://install.tenzro.network/node | sh
# Run as validator
tenzro-node --role validator
2. Register a Model (Providers)
Use the CLI to register your model in the on-chain registry:
# Register model
tenzro model register \
--name "gemma4-9b" \
--category text \
--modality "text-generation" \
--pricing "0.0001 TNZO per token" \
--endpoint "https://my-provider.tenzro.network/v1/chat/completions"
# Start serving
tenzro provider start \
--model-id "model_abc123" \
--tee-enabled
3. Request Inference (Users)
Use the SDK to discover models and request inference:
use tenzro_sdk::{TenzroClient, InferenceRequest};
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let client = TenzroClient::connect("http://localhost:8545").await?;
// Discover models
let models = client.list_models()
.category("text")
.max_price_per_token(0.0001)
.send()
.await?;
// Request inference
let request = InferenceRequest::new()
.model_id(&models[0].id)
.prompt("Explain blockchain consensus in 3 sentences")
.max_tokens(100);
let response = client.inference(request).await?;
println!("Result: {}", response.text);
Ok(())
}
4. Open Micropayment Channel
For high-frequency inference, open a payment channel to reduce transaction costs:
use tenzro_sdk::{PaymentChannel, Amount};
// Open channel with 10 TNZO escrow
let channel = client.payment_channel()
.payee(provider_address)
.amount(Amount::tnzo(10.0))
.open()
.await?;
// Stream payments during inference (off-chain)
for token in inference_stream {
channel.pay(Amount::tnzo(0.0001)).await?;
}
// Close channel and settle on-chain
channel.close().await?;
Ready to Build on Tenzro Network?
Access comprehensive documentation, SDKs, and examples to start building decentralized AI applications today.