
Decentralized AI Infrastructure

AI inference is concentrated in a few cloud providers and a few regions. A single outage, pricing change, or policy shift can break applications built on top. Tenzro decentralizes AI compute by turning any hardware operator into a provider — with model routing, TEE privacy, and per-token billing built into the protocol.

The Problem

Running AI at scale means depending on a handful of API providers. Rate limits, region restrictions, model deprecations, and pricing increases are unilateral decisions that application developers cannot control. Private data sent to cloud inference endpoints has no hardware-level isolation guarantee.

  • Single-provider dependency creates availability and pricing risk
  • No way to verify that inference ran on specific hardware or with specific model weights
  • Sensitive prompts and responses are processed on shared infrastructure without isolation
  • Edge devices with GPU capacity sit idle while cloud providers charge premium rates
  • Model providers have no standardized way to monetize open-weight models

How Tenzro Solves It

Tenzro creates a global marketplace where anyone can serve AI models and earn TNZO. The InferenceRouter is modality-aware — it dispatches typed payloads (chat, forecast, vision/text embeddings, segmentation, detection, speech-to-text, video) to the correct runtime, then selects providers per modality based on price, latency, or reputation. TEE enclaves ensure prompt privacy. Micropayment channels enable per-token billing. The model registry tracks language models alongside seven dedicated multi-modal catalogs.
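The modality-aware dispatch can be pictured as a typed mapping from payload kind to runtime. A minimal Rust sketch follows; the `Payload` and `Runtime` names are illustrative, not the SDK's actual types:

```rust
// Illustrative sketch of modality-aware dispatch: each typed payload
// kind maps to the runtime that can execute it. These enums are
// hypothetical stand-ins for the protocol's internal types.
#[derive(Debug, PartialEq)]
enum Payload {
    Chat,
    Forecast,
    VisionEmbed,
    TextEmbed,
    Segmentation,
    Detection,
    SpeechToText,
    VideoEmbed,
}

#[derive(Debug, PartialEq)]
enum Runtime {
    Llm,
    TimeSeries,
    VisionEncoder,
    TextEncoder,
    Segmenter,
    Detector,
    Asr,
    VideoEncoder,
}

/// Route a typed payload to the runtime that handles its modality.
fn dispatch(p: &Payload) -> Runtime {
    match p {
        Payload::Chat => Runtime::Llm,
        Payload::Forecast => Runtime::TimeSeries,
        Payload::VisionEmbed => Runtime::VisionEncoder,
        Payload::TextEmbed => Runtime::TextEncoder,
        Payload::Segmentation => Runtime::Segmenter,
        Payload::Detection => Runtime::Detector,
        Payload::SpeechToText => Runtime::Asr,
        Payload::VideoEmbed => Runtime::VideoEncoder,
    }
}
```

Because the match is exhaustive, adding a new modality forces every dispatch site to handle it at compile time.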

Model Registry

Language models (Gemma, Qwen, Phi, Mistral) plus seven multi-modal catalogs: time-series forecasters (Chronos-2, TimesFM 2.5, Granite-TTM), vision encoders (DINOv3, SigLIP2, CLIP), text embeddings (Qwen3-Embedding, EmbeddingGemma, BGE-M3), segmentation (SAM 3/3.1, EdgeSAM), detection (RF-DETR, D-FINE), and speech-to-text (Whisper-v3-turbo, Parakeet, Canary, Moonshine v2). License tiers (Permissive, Attribution, CommercialCustom, NonCommercial) are enforced centrally. The HfArtifactDownloader handles HuggingFace single-file and bundle downloads with SHA-256 integrity verification.
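License-tier enforcement can be sketched as a simple gate on whether a model may be served for payment. This is a hypothetical illustration of the policy, not the registry's actual code; only the four tier names come from the source:

```rust
// Hypothetical sketch of registry-side license gating. The tier names
// match the registry's tiers; the gating logic is an assumption.
#[derive(Debug, Clone, Copy, PartialEq)]
enum LicenseTier {
    Permissive,
    Attribution,
    CommercialCustom,
    NonCommercial,
}

/// Whether a provider may charge TNZO for serving a model of this tier.
/// `has_custom_grant` models a negotiated commercial license.
fn commercial_serving_allowed(tier: LicenseTier, has_custom_grant: bool) -> bool {
    match tier {
        LicenseTier::Permissive | LicenseTier::Attribution => true,
        LicenseTier::CommercialCustom => has_custom_grant,
        LicenseTier::NonCommercial => false,
    }
}
```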

Inference Routing

The InferenceRouter supports four strategies: price (cheapest), latency (fastest), reputation (most reliable), and weighted (balanced). Circuit breakers automatically remove failing providers and re-route to healthy ones.
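The four strategies plus circuit breaking can be sketched as scoring candidate providers and skipping unhealthy ones. The `Provider` struct, field names, and the weighted blend below are illustrative assumptions, not the router's actual implementation:

```rust
// Minimal sketch of the four routing strategies. Struct fields and the
// weighted-blend coefficients are assumptions for illustration.
#[derive(Debug)]
struct Provider {
    id: &'static str,
    price_per_token: f64, // TNZO per token
    latency_ms: f64,      // rolling average latency
    reputation: f64,      // 0.0..=1.0
    healthy: bool,        // circuit-breaker state
}

enum Strategy {
    Price,
    Latency,
    Reputation,
    Weighted,
}

/// Lower score wins under every strategy.
fn score(p: &Provider, s: &Strategy) -> f64 {
    match s {
        Strategy::Price => p.price_per_token,
        Strategy::Latency => p.latency_ms,
        Strategy::Reputation => 1.0 - p.reputation,
        // Example blend; real weights would be tuned by the router.
        Strategy::Weighted => {
            p.price_per_token * 100.0 + p.latency_ms / 1000.0 + (1.0 - p.reputation)
        }
    }
}

/// Circuit breaker: unhealthy providers are filtered out before scoring,
/// so traffic automatically re-routes to healthy ones.
fn select<'a>(providers: &'a [Provider], s: &Strategy) -> Option<&'a Provider> {
    providers
        .iter()
        .filter(|p| p.healthy)
        .min_by(|a, b| score(a, s).partial_cmp(&score(b, s)).unwrap())
}
```

Note that the cheapest provider overall may still be skipped if its circuit breaker is open.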

TEE Confidential Inference

Run inference inside Intel TDX, AMD SEV-SNP, AWS Nitro, or NVIDIA GPU CC enclaves. Prompts and responses are encrypted with AES-256-GCM using HKDF-SHA256 derived keys with vendor-specific domain separation. Hardware attestation proves isolation.
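Vendor-specific domain separation means the HKDF derivation binds each key to the enclave technology it was derived for, so keys for one TEE vendor can never collide with another's. A sketch of how such an HKDF `info` input could be assembled; the label strings and layout are assumptions, not protocol constants:

```rust
// Illustrative sketch of vendor-specific domain separation for
// HKDF-SHA256 key derivation. The label byte strings below are
// hypothetical, not the protocol's actual constants.
#[derive(Debug, Clone, Copy)]
enum TeeVendor {
    IntelTdx,
    AmdSevSnp,
    AwsNitro,
    NvidiaGpuCc,
}

/// A distinct label per TEE vendor.
fn domain_label(v: TeeVendor) -> &'static [u8] {
    match v {
        TeeVendor::IntelTdx => b"tenzro/tee/intel-tdx",
        TeeVendor::AmdSevSnp => b"tenzro/tee/amd-sev-snp",
        TeeVendor::AwsNitro => b"tenzro/tee/aws-nitro",
        TeeVendor::NvidiaGpuCc => b"tenzro/tee/nvidia-gpu-cc",
    }
}

/// HKDF `info` = domain label || 0x00 || session id, so derivations for
/// different vendors and sessions stay cryptographically separate.
fn hkdf_info(v: TeeVendor, session_id: &[u8]) -> Vec<u8> {
    let mut info = domain_label(v).to_vec();
    info.push(0x00); // unambiguous separator between label and session id
    info.extend_from_slice(session_id);
    info
}
```

The resulting `info` would feed the HKDF-Expand step; the AES-256-GCM session keys then inherit the vendor binding.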

Provider Economics

Register as a ModelProvider with tenzro_registerProvider, set pricing schedules, and earn TNZO from inference requests. TEE-attested providers get 2x weight in leader selection. Staking rewards and network incentives accrue on top of direct payments.
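The 2x leader-selection weight for TEE-attested providers amounts to a simple multiplier on a candidate's base weight. A minimal sketch, assuming stake-weighted selection (the `Candidate` fields are illustrative):

```rust
// Sketch of the 2x attestation multiplier in leader selection.
// Assumes base weight is proportional to stake; field names are
// illustrative, not the protocol's actual types.
struct Candidate {
    stake: u64,        // staked TNZO, smallest units
    tee_attested: bool,
}

/// TEE-attested providers count double in the selection lottery.
fn selection_weight(c: &Candidate) -> u64 {
    if c.tee_attested { c.stake * 2 } else { c.stake }
}

/// A candidate's selection probability is its weight over the total.
fn selection_probability(c: &Candidate, all: &[Candidate]) -> f64 {
    let total: u64 = all.iter().map(selection_weight).sum();
    selection_weight(c) as f64 / total as f64
}
```

So an attested provider with the same stake as an unattested one is twice as likely to lead.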

Architecture

A user requests inference. The InferenceRouter selects the best provider, the model executes inside a TEE enclave, results are verified with a ZK proof, and payment settles via micropayment channel.
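The final settlement step can be sketched as per-token metering against a channel balance. The `Channel` struct and pricing fields below are illustrative assumptions, not the protocol's payment-channel format:

```rust
// Sketch of per-token settlement over a micropayment channel: the
// consumer's balance decreases by tokens * price as usage is metered.
// Struct and units are illustrative assumptions.
struct Channel {
    balance: u64, // smallest TNZO units locked in the channel
}

/// Debit the channel for one inference. Returns the cost charged, or
/// an error if the remaining balance cannot cover it.
fn settle(
    ch: &mut Channel,
    input_tokens: u64,
    output_tokens: u64,
    input_price: u64,  // per input token
    output_price: u64, // per output token
) -> Result<u64, &'static str> {
    let cost = input_tokens * input_price + output_tokens * output_price;
    if cost > ch.balance {
        return Err("insufficient channel balance");
    }
    ch.balance -= cost;
    Ok(cost)
}
```

Because settlement happens inside an already-open channel, each inference costs no on-chain transaction of its own.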

Edge AI & Industrial Applications

Every Tenzro node can serve AI models locally — no cloud required. This enables edge computing scenarios where latency, privacy, or connectivity constraints make centralized APIs impractical.

Manufacturing & Industrial IoT

Quality control agents running vision models on edge nodes. Anomaly detection in real time without sending factory data to the cloud. Each inspection produces a TEE attestation for regulatory compliance.

Energy & Utilities

Grid monitoring agents analyzing sensor data at the edge. Predictive maintenance models running on local nodes. Settlement for energy trading between producers and consumers via micropayment channels.

Healthcare & Life Sciences

Patient data analysis in TEE enclaves — models run on hospital nodes, data never leaves the premises. Verifiable credentials for medical certifications. ZK proofs for privacy-preserving clinical trials.

Financial Institutions

Banks running AML/fraud detection models on their own infrastructure with TEE attestation proving computation integrity. Real-time transaction screening without exposing customer data to third parties.

Code Example

Serve a model as a provider and request inference as a consumer:

CLI + Rust SDK
# Provider: serve a model
tenzro provider register --role model-provider
tenzro model serve gemma3:27b
tenzro provider pricing set --model gemma3:27b \
    --input-price 0.001 --output-price 0.002

# Consumer: request inference via SDK
use serde_json::json; // for the json! macro used below
use tenzro_sdk::TenzroClient;

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let client = TenzroClient::new("https://rpc.tenzro.network");

    // List available models
    let models = client.list_models(None, None).await?;

    // Chat completion with automatic routing
    let response = client.chat_completion(
        "gemma3:27b",
        vec![json!({
            "role": "user",
            "content": "Summarize the latest DeFi trends"
        })],
    ).await?;

    // List model endpoints with status
    let endpoints = client.list_model_endpoints().await?;

    // Verify the inference result against its proof and model hash
    let verified = client.verify_inference_result(
        &response.proof,
        &response.model_hash,
    ).await?;
    println!("models: {}, endpoints: {}, verified: {verified:?}", models.len(), endpoints.len());

    Ok(())
}

Relevant Tools & APIs

MCP Tools

list_models
chat_completion
list_model_endpoints
tenzro_forecast
tenzro_imageEmbed
tenzro_textEmbed
tenzro_segment
tenzro_detect
tenzro_transcribe
tenzro_videoEmbed
register_provider
get_provider_stats
stake_tokens
verify_zk_proof

RPC Methods

tenzro_listModels
tenzro_chat
tenzro_serveModel
tenzro_stopModel
tenzro_listForecastCatalog
tenzro_listVisionCatalog
tenzro_listTextEmbeddingCatalog
tenzro_listSegmentationCatalog
tenzro_listDetectionCatalog
tenzro_listAudioCatalog
tenzro_loadVisionModel
tenzro_unloadVisionModel
tenzro_registerProvider
tenzro_providerStats
tenzro_listModelEndpoints

CLI Commands

tenzro model list
tenzro model serve
tenzro model stop
tenzro chat
tenzro forecast
tenzro embed-text
tenzro embed-image
tenzro segment
tenzro detect
tenzro transcribe
tenzro embed-video
tenzro provider register
tenzro provider pricing