
Overview

Lava supports 27 AI service providers out of the box, enabling you to route requests to dozens of different AI APIs through a single, unified billing system. Each provider is fully integrated with Lava’s metering and pricing infrastructure.
All providers are accessible through Lava’s /v1/forward endpoint with automatic usage tracking, billing, and cost calculation.

How Provider Integration Works

When you send a request through Lava:
  1. Automatic Routing: Lava identifies the provider from your target URL
  2. Authentication: Lava adds the appropriate provider API key
  3. Request Forwarding: Your request is sent to the provider unchanged
  4. Usage Tracking: Lava extracts usage metrics (tokens, characters, duration, etc.)
  5. Cost Calculation: Costs are calculated based on provider pricing + your configured fees
Lava handles provider-specific authentication formats, response parsing, and usage extraction automatically; you only need to provide the target URL.
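Concretely, the only provider-specific piece you supply is the target URL, passed to the forward endpoint in the u query parameter. A minimal sketch (the full request example appears later on this page):

// The target URL is the only provider-specific input you provide
const target = 'https://api.openai.com/v1/chat/completions';
const forwardUrl = `https://api.lavapayments.com/v1/forward?u=${encodeURIComponent(target)}`;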

Supported Providers by Category

Large Language Models (LLMs)

These providers offer chat completion, text generation, and conversational AI capabilities:

OpenAI

GPT-4, GPT-3.5, GPT-4o
  • Chat completions
  • Embeddings
  • Streaming support

Anthropic

Claude 3 (Opus, Sonnet, Haiku)
  • Long context windows
  • Function calling
  • Vision support

Google

Gemini Pro, Gemini Flash
  • Multimodal inputs
  • Large context
  • Code generation

xAI

Grok models
  • Real-time data
  • Advanced reasoning

Mistral

Mistral Large, Medium, Small
  • Efficient inference
  • Multilingual

DeepSeek

DeepSeek models
  • Cost-effective
  • High performance

Cohere

Command, Embed models
  • Enterprise features
  • RAG support

Groq

Ultra-fast LLM inference
  • Low latency
  • High throughput

together.ai

Open-source LLMs
  • Llama, Mixtral, more
  • Custom models

AI Infrastructure & Hosting

Platforms that host and serve AI models at scale:

Fireworks

Fast LLM inference platform

DeepInfra

Serverless AI inference

Hyperbolic

GPU compute for AI

Cerebras

Wafer-scale AI compute

SambaNova

Enterprise AI platform

Nebius AI Studio

Cloud AI infrastructure

GMI Cloud

AI compute cloud

Inference.net

Decentralized AI inference

Baseten

Model deployment platform

Voice & Audio

Providers specializing in speech synthesis, recognition, and voice AI:

ElevenLabs

Text-to-Speech & Speech-to-Text
  • Natural voice synthesis
  • Voice cloning
  • Multilingual support
  • Character-based metering

Retell

Voice AI Phone Calls
  • Conversational AI calls
  • Real-time responses
  • Duration-based billing

Developer Platforms

AI development and deployment platforms:

Vercel

Vercel AI SDK integration

Novita AI

AI model marketplace

Specialized AI Services

kluster.ai

AI workflow automation

Parasail

AI-powered services

Chutes

AI infrastructure

Targon

Specialized AI models

Pearch

AI search & retrieval

Provider Authentication Methods

Lava automatically handles different authentication formats for each provider:
Authentication Type | Providers | Header Format
Bearer Token | OpenAI, Anthropic, DeepSeek, Mistral, xAI, and most LLM providers | Authorization: Bearer <key>
x-api-key | Standard API key header | x-api-key: <key>
x-goog-api-key | Google Gemini | x-goog-api-key: <key>
xi-api-key | ElevenLabs | xi-api-key: <key>
Query Parameter | Google (alternative) | ?key=<key>
You don’t need to manage provider API keys yourself; Lava uses managed keys for all providers. Your forward token handles authentication on the Lava side.
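For example, a Gemini request through Lava carries only your forward token; this sketch assumes, per the table above, that Lava attaches the x-goog-api-key header on its side (forwardToken is your Lava forward token):

// No x-goog-api-key here: Lava injects the Google key for you
const geminiUrl = 'https://generativelanguage.googleapis.com/v1beta/models/gemini-pro:generateContent';

const response = await fetch(`https://api.lavapayments.com/v1/forward?u=${encodeURIComponent(geminiUrl)}`, {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${forwardToken}`, // Lava forward token only
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    contents: [{ parts: [{ text: 'Hello!' }] }]
  })
});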

Metering by Provider Type

Different AI services are metered in different ways:

Token-Based Metering

Providers: OpenAI, Anthropic, Google, Mistral, xAI, Groq, Cohere, and most LLM providers
Metrics Tracked:
  • Input tokens (prompt)
  • Output tokens (completion)
  • Cached tokens (where supported)
  • Audio tokens (multimodal models)
Pricing Basis: Per million tokens (1M)
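A minimal sketch of what per-million-token pricing means in practice; the rates below are placeholders, not actual provider prices:

// Placeholder rates in dollars per 1M tokens (not real provider pricing)
const inputRatePerM = 2.50;
const outputRatePerM = 10.00;

// Usage as reported in the provider response (OpenAI-style field names)
const usage = { prompt_tokens: 1200, completion_tokens: 350 };

const baseCost =
  (usage.prompt_tokens / 1_000_000) * inputRatePerM +
  (usage.completion_tokens / 1_000_000) * outputRatePerM;

console.log(baseCost.toFixed(4)); // 0.0065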

Character-Based Metering

Providers: ElevenLabs (text-to-speech)
Metrics Tracked:
  • Character count of input text
Pricing Basis: Per million characters (1M)

Duration-Based Metering

Providers: Retell (voice calls), ElevenLabs (speech-to-text)
Metrics Tracked:
  • Audio duration in seconds
  • Call duration
Pricing Basis: Per minute or per second

Request-Based Metering

Providers: Image generation services
Metrics Tracked:
  • Number of API requests
Pricing Basis: Per request

Streaming Support

Lava fully supports Server-Sent Events (SSE) for real-time streaming responses. All major LLM providers support streaming:
  • OpenAI (GPT models)
  • Anthropic (Claude models)
  • Google (Gemini models)
  • Mistral, xAI, DeepSeek
  • Groq, Fireworks, together.ai
  • And all other LLM platforms
How streaming works:
  1. Your request includes "stream": true in the body
  2. Lava detects streaming and forwards the request
  3. Lava streams the response back to you in real-time
  4. Usage data is extracted from the final SSE message
  5. Billing happens after the stream completes
Lava adds the request tracking header:
x-lava-request-id: req_01234567890abcdef
Usage data comes from the response body’s final SSE message, not headers.
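A sketch of a streaming call; it reuses the OpenAI endpoint from the basic example below and simply reads the SSE chunks off the response body (the exact shape of the final usage message depends on the provider):

const response = await fetch(`https://api.lavapayments.com/v1/forward?u=${encodeURIComponent('https://api.openai.com/v1/chat/completions')}`, {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${forwardToken}`,
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    model: 'gpt-4',
    stream: true,
    messages: [{ role: 'user', content: 'Hello!' }]
  })
});

// Read the SSE stream as it arrives
const reader = response.body.getReader();
const decoder = new TextDecoder();
while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  // Each chunk holds one or more "data: {...}" SSE lines; Lava extracts usage
  // from the final message and bills once the stream completes.
  process.stdout.write(decoder.decode(value));
}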

Using Providers in Your Application

Basic Example (OpenAI)

const response = await fetch('https://api.lavapayments.com/v1/forward?u=https://api.openai.com/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${forwardToken}`,
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    model: 'gpt-4',
    messages: [
      { role: 'user', content: 'Hello!' }
    ]
  })
});
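The response body is the provider’s own payload, so you read it exactly as if you had called OpenAI directly (the field names below are OpenAI’s standard chat-completion shape):

const data = await response.json();

console.log(data.choices[0].message.content); // the model's reply
console.log(data.usage);                      // token counts used for metering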

Switching Providers

Changing providers is as simple as updating the target URL:
// OpenAI
const openaiUrl = 'https://api.openai.com/v1/chat/completions';

// Anthropic
const anthropicUrl = 'https://api.anthropic.com/v1/messages';

// Google
const googleUrl = 'https://generativelanguage.googleapis.com/v1beta/models/gemini-pro:generateContent';

// Use any provider with the same forward token
const targetUrl = anthropicUrl; // or openaiUrl, googleUrl
const url = `https://api.lavapayments.com/v1/forward?u=${encodeURIComponent(targetUrl)}`;
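If your application talks to several providers, a small wrapper keeps the switch down to one argument. callThroughLava below is a hypothetical helper for illustration, not part of any Lava SDK:

// Hypothetical helper: any provider behind one forward token
async function callThroughLava(targetUrl, body) {
  const url = `https://api.lavapayments.com/v1/forward?u=${encodeURIComponent(targetUrl)}`;
  const response = await fetch(url, {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${forwardToken}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify(body)
  });
  return response.json();
}

// Swap the target URL and use that provider's own payload shape
const reply = await callThroughLava(openaiUrl, {
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Hello!' }]
});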

Multi-Provider Support

Your users can use their Lava wallet credits across all providers. A user whose wallet holds $50 of credit can spend it on any mix of:
  • OpenAI GPT-4 completions
  • Anthropic Claude conversations
  • ElevenLabs text-to-speech
  • Google Gemini API calls
all from the same prepaid balance.

Provider-Specific Features

OpenAI

  • Function Calling: Fully supported with usage tracking
  • Vision: Image inputs metered separately
  • DALL-E: Image generation per-request pricing
  • Embeddings: Token-based metering

Anthropic

  • Long Context: Up to 200K tokens supported
  • Vision: Claude 3 image understanding
  • Tool Use: Native function calling support

ElevenLabs

  • Voice Cloning: Character-based billing
  • Multilingual: 29+ languages supported
  • Real-time: Streaming audio generation

Retell

  • Phone Calls: Duration + cost metering
  • Webhooks: Call status notifications
  • Real-time: Live conversation AI

Adding Custom Providers

If you need to use a provider not listed here, you can use the “Other” provider category:
  1. Set your product to use unmanaged provider keys
  2. Your users include their own API keys in requests
  3. Lava tracks usage and bills based on your configured fees
  4. Works with any REST API endpoint
Custom providers require users to bring their own API keys. Lava can still meter and bill for usage, but cannot manage authentication automatically.

Provider Availability

All providers are available on Lava’s production infrastructure with:
  • Global edge deployment - Requests routed to nearest region
  • Less than 20ms latency overhead - Minimal proxy delay
  • 99.9% uptime SLA - Enterprise-grade reliability
  • Automatic failover - Provider outages handled gracefully
  • Real-time monitoring - Usage tracking and error logs

Pricing Transparency

For each provider, Lava:
  1. Passes through base costs - You pay the provider’s standard rates
  2. Adds your configured fees - Fixed or percentage markups
  3. Applies service charge - 1.9% platform fee
  4. Shows complete breakdown - Full cost visibility in dashboard
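A rough sketch of that breakdown with placeholder numbers; the exact basis of the 1.9% service charge is described in Pricing Configuration, and here it is assumed to apply to the provider base cost:

const baseCost = 0.0065;         // provider base cost in dollars (placeholder)
const markupPercent = 10;        // your configured percentage fee (placeholder)
const serviceChargeRate = 0.019; // Lava's 1.9% platform fee

const markup = baseCost * (markupPercent / 100);
const serviceCharge = baseCost * serviceChargeRate; // assumed base-cost basis
const totalBilled = baseCost + markup + serviceCharge;

console.log(totalBilled); // roughly 0.00727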
See Pricing Configuration for details on setting up pricing.

Next Steps