# Providers & Models
VoltAgent is built directly on top of the Vercel AI SDK. You can either:
- Pass a `LanguageModel` from an ai-sdk provider package, or
- Use a model string like `openai/gpt-4o-mini` and let VoltAgent resolve it with the built-in model router.
Both approaches are fully compatible with ai-sdk streaming, tool calling, and structured outputs. For the router, VoltAgent ships with a registry snapshot generated from models.dev.
## Model Strings (Model Router)
Model strings remove the need to import provider packages in your app:
```ts
import { Agent } from "@voltagent/core";

const agent = new Agent({
  name: "my-agent",
  instructions: "You are a helpful assistant",
  model: "openai/gpt-4o-mini",
});
```
Other examples:
```ts
const claudeAgent = new Agent({
  name: "claude-agent",
  instructions: "Answer with concise reasoning",
  model: "anthropic/claude-3-5-sonnet",
});

const geminiAgent = new Agent({
  name: "gemini-agent",
  instructions: "Respond in Turkish",
  model: "google/gemini-2.0-flash",
});
```
If you need provider-specific configuration or want to use the ai-sdk APIs directly, pass a `LanguageModel` instead.
See Model Router & Registry for how strings are resolved and how env vars are mapped.
For the full provider directory, see Models.
## Installation
Install the AI SDK base package (required):
```bash
npm install ai
# or
yarn add ai
# or
pnpm add ai
```
If you plan to import ai-sdk providers directly (for embeddings or provider-specific helpers like `openai.embedding(...)`), install those packages too. If you only use model strings, you can skip them:
```bash
# For example, to use OpenAI:
npm install @ai-sdk/openai
# Or Anthropic:
npm install @ai-sdk/anthropic
# Or Google:
npm install @ai-sdk/google
```

Use `yarn add` or `pnpm add` instead of `npm install` if you prefer those package managers.
## Usage Examples

### Model Strings
```ts
import { Agent } from "@voltagent/core";

const agent = new Agent({
  name: "my-agent",
  instructions: "You are a helpful assistant",
  model: "openai/gpt-4o-mini",
});
```
### Direct ai-sdk Provider
```ts
import { Agent } from "@voltagent/core";
import { openai } from "@ai-sdk/openai";

const agent = new Agent({
  name: "my-agent",
  instructions: "You are a helpful assistant",
  model: openai("gpt-4o-mini"),
});
```
## Available Providers
The lists below describe ai-sdk provider packages. Model strings are resolved through VoltAgent's registry and map to these providers under the hood.
### First-Party AI SDK Providers
These providers are maintained by Vercel and offer the highest level of support and integration:
#### Foundation Models
| Provider | Package | Documentation | Key Models |
|---|---|---|---|
| xAI Grok | @ai-sdk/xai | Docs | grok-4, grok-3, grok-2-vision |
| OpenAI | @ai-sdk/openai | Docs | gpt-4.1, gpt-4o, o3, o1 |
| Anthropic | @ai-sdk/anthropic | Docs | claude-opus-4, claude-sonnet-4, claude-3.5 |
| Google Generative AI | @ai-sdk/google | Docs | gemini-2.0-flash, gemini-1.5-pro |
| Google Vertex | @ai-sdk/google-vertex | Docs | gemini models, claude models via Vertex |
| Mistral | @ai-sdk/mistral | Docs | mistral-large, pixtral-large, mistral-medium |
#### Cloud Platforms
| Provider | Package | Documentation | Description |
|---|---|---|---|
| Amazon Bedrock | @ai-sdk/amazon-bedrock | Docs | Access to various models via AWS |
| Azure OpenAI | @ai-sdk/azure | Docs | OpenAI models via Azure |
| Vercel | @ai-sdk/vercel | Docs | v0 model for code generation |
#### Specialized Providers
| Provider | Package | Documentation | Specialization |
|---|---|---|---|
| Groq | @ai-sdk/groq | Docs | Ultra-fast inference |
| Together.ai | @ai-sdk/togetherai | Docs | Open-source models |
| Cohere | @ai-sdk/cohere | Docs | Enterprise search & generation |
| Fireworks | @ai-sdk/fireworks | Docs | Fast open-source models |
| DeepInfra | @ai-sdk/deepinfra | Docs | Affordable inference |
| DeepSeek | @ai-sdk/deepseek | Docs | DeepSeek models including reasoner |
| Cerebras | @ai-sdk/cerebras | Docs | Fast Llama models |
| Perplexity | @ai-sdk/perplexity | Docs | Search-enhanced responses |
#### Audio & Speech Providers
| Provider | Package | Documentation | Specialization |
|---|---|---|---|
| ElevenLabs | @ai-sdk/elevenlabs | Docs | Text-to-speech |
| LMNT | @ai-sdk/lmnt | Docs | Voice synthesis |
| Hume | @ai-sdk/hume | Docs | Emotional intelligence |
| Rev.ai | @ai-sdk/revai | Docs | Speech recognition |
| Deepgram | @ai-sdk/deepgram | Docs | Speech-to-text |
| Gladia | @ai-sdk/gladia | Docs | Audio intelligence |
| AssemblyAI | @ai-sdk/assemblyai | Docs | Speech recognition & understanding |
### Community Providers
These providers are created and maintained by the open-source community:
| Provider | Package | Documentation | Description |
|---|---|---|---|
| Ollama | ollama-ai-provider | Docs | Local model execution |
| FriendliAI | @friendliai/ai-provider | Docs | Optimized inference |
| Portkey | @portkey-ai/vercel-provider | Docs | LLM gateway & observability |
| Cloudflare Workers AI | workers-ai-provider | Docs | Edge AI inference |
| OpenRouter | @openrouter/ai-sdk-provider | Docs | Unified API for multiple providers |
| Requesty | @requesty/ai-sdk | Docs | Request management |
| Crosshatch | @crosshatch/ai-provider | Docs | Specialized models |
| Mixedbread | mixedbread-ai-provider | Docs | Embedding models |
| Voyage AI | voyage-ai-provider | Docs | Embedding models |
| Mem0 | @mem0/vercel-ai-provider | Docs | Memory-enhanced AI |
| Letta | @letta-ai/vercel-ai-sdk-provider | Docs | Stateful agents |
| Spark | spark-ai-provider | Docs | Chinese language models |
| AnthropicVertex | anthropic-vertex-ai | Docs | Claude via Vertex AI |
| LangDB | @langdb/vercel-provider | Docs | Database-aware AI |
| Dify | dify-ai-provider | Docs | LLMOps platform |
| Sarvam | sarvam-ai-provider | Docs | Indian language models |
| Claude Code | ai-sdk-provider-claude-code | Docs | Code-optimized Claude |
| Built-in AI | built-in-ai | Docs | Browser-native AI |
| Gemini CLI | ai-sdk-provider-gemini-cli | Docs | CLI-based Gemini |
| A2A | a2a-ai-provider | Docs | Specialized models |
| SAP-AI | @mymediset/sap-ai-provider | Docs | SAP AI Core integration |
### OpenAI-Compatible Providers
For providers that follow the OpenAI API specification:
| Provider | Documentation | Description |
|---|---|---|
| LM Studio | Docs | Local model execution with GUI |
| Baseten | Docs | Model deployment platform |
| Any OpenAI-compatible API | Docs | Custom endpoints |
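As a sketch of the OpenAI-compatible path, the `@ai-sdk/openai-compatible` package can wire up any endpoint that speaks the OpenAI API. The `baseURL` below is LM Studio's default local server address, and the model name is a placeholder for whatever model you have loaded:

```typescript
import { Agent } from "@voltagent/core";
import { createOpenAICompatible } from "@ai-sdk/openai-compatible";

// LM Studio serves an OpenAI-compatible API on localhost:1234 by default.
const lmstudio = createOpenAICompatible({
  name: "lmstudio",
  baseURL: "http://localhost:1234/v1",
});

const agent = new Agent({
  name: "local-agent",
  instructions: "You are a helpful assistant",
  model: lmstudio("your-local-model"),
});
```

The same pattern applies to Baseten or any custom endpoint: swap the `name` and `baseURL`, and pass an `apiKey` option if the endpoint requires one.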
## Model Capabilities

Providers support different language models with varying capabilities. The table below summarizes what popular models support:
| Provider | Model | Image Input | Object Generation | Tool Usage | Tool Streaming |
|---|---|---|---|---|---|
| xAI Grok | grok-4 | ❌ | ✅ | ✅ | ✅ |
| xAI Grok | grok-3 | ❌ | ✅ | ✅ | ✅ |
| xAI Grok | grok-3-fast | ❌ | ✅ | ✅ | ✅ |
| xAI Grok | grok-3-mini | ❌ | ✅ | ✅ | ✅ |
| xAI Grok | grok-3-mini-fast | ❌ | ✅ | ✅ | ✅ |
| xAI Grok | grok-2-1212 | ❌ | ✅ | ✅ | ✅ |
| xAI Grok | grok-2-vision-1212 | ✅ | ✅ | ✅ | ✅ |
| xAI Grok | grok-beta | ❌ | ✅ | ✅ | ✅ |
| xAI Grok | grok-vision-beta | ✅ | ❌ | ❌ | ❌ |
| Vercel | v0-1.0-md | ✅ | ✅ | ✅ | ✅ |
| OpenAI | gpt-4.1 | ✅ | ✅ | ✅ | ✅ |
| OpenAI | gpt-4.1-mini | ✅ | ✅ | ✅ | ✅ |
| OpenAI | gpt-4.1-nano | ✅ | ✅ | ✅ | ✅ |
| OpenAI | gpt-4o | ✅ | ✅ | ✅ | ✅ |
| OpenAI | gpt-4o-mini | ✅ | ✅ | ✅ | ✅ |
| OpenAI | gpt-4 | ❌ | ✅ | ✅ | ✅ |
| OpenAI | o3-mini | ❌ | ❌ | ✅ | ✅ |
| OpenAI | o3 | ❌ | ❌ | ✅ | ✅ |
| OpenAI | o4-mini | ❌ | ❌ | ✅ | ✅ |
| OpenAI | o1 | ✅ | ❌ | ✅ | ✅ |
| OpenAI | o1-mini | ✅ | ❌ | ✅ | ✅ |
| OpenAI | o1-preview | ❌ | ❌ | ❌ | ❌ |
| Anthropic | claude-opus-4-20250514 | ✅ | ✅ | ✅ | ✅ |
| Anthropic | claude-sonnet-4-20250514 | ✅ | ✅ | ✅ | ✅ |
| Anthropic | claude-3-7-sonnet-20250219 | ✅ | ✅ | ✅ | ✅ |
| Anthropic | claude-3-5-sonnet-20241022 | ✅ | ✅ | ✅ | ✅ |
| Anthropic | claude-3-5-sonnet-20240620 | ✅ | ✅ | ✅ | ✅ |
| Anthropic | claude-3-5-haiku-20241022 | ✅ | ✅ | ✅ | ✅ |
| Mistral | pixtral-large-latest | ✅ | ✅ | ✅ | ✅ |
| Mistral | mistral-large-latest | ❌ | ✅ | ✅ | ✅ |
| Mistral | mistral-medium-latest | ❌ | ✅ | ✅ | ✅ |
| Mistral | mistral-medium-2505 | ❌ | ✅ | ✅ | ✅ |
| Mistral | mistral-small-latest | ❌ | ✅ | ✅ | ✅ |
| Mistral | pixtral-12b-2409 | ✅ | ✅ | ✅ | ✅ |
| Google Generative AI | gemini-2.0-flash-exp | ✅ | ✅ | ✅ | ✅ |
| Google Generative AI | gemini-1.5-flash | ✅ | ✅ | ✅ | ✅ |
| Google Generative AI | gemini-1.5-pro | ✅ | ✅ | ✅ | ✅ |
| Google Vertex | gemini-2.0-flash-exp | ✅ | ✅ | ✅ | ✅ |
| Google Vertex | gemini-1.5-flash | ✅ | ✅ | ✅ | ✅ |
| Google Vertex | gemini-1.5-pro | ✅ | ✅ | ✅ | ✅ |
| DeepSeek | deepseek-chat | ❌ | ✅ | ✅ | ✅ |
| DeepSeek | deepseek-reasoner | ❌ | ❌ | ❌ | ❌ |
| Cerebras | llama3.1-8b | ❌ | ✅ | ✅ | ✅ |
| Cerebras | llama3.1-70b | ❌ | ✅ | ✅ | ✅ |
| Cerebras | llama3.3-70b | ❌ | ✅ | ✅ | ✅ |
| Groq | meta-llama/llama-4-scout-17b-16e-instruct | ✅ | ✅ | ✅ | ✅ |
| Groq | llama-3.3-70b-versatile | ❌ | ✅ | ✅ | ✅ |
| Groq | llama-3.1-8b-instant | ❌ | ✅ | ✅ | ✅ |
| Groq | mixtral-8x7b-32768 | ❌ | ✅ | ✅ | ✅ |
| Groq | gemma2-9b-it | ❌ | ✅ | ✅ | ✅ |
Note: This table is not exhaustive. Additional models can be found in the provider documentation pages and on the provider websites.
## Migration from Deprecated Providers

If you're currently using VoltAgent's native providers (`@voltagent/anthropic-ai`, `@voltagent/google-ai`, `@voltagent/groq-ai`), we recommend migrating to the Vercel AI SDK providers:
Before (Deprecated):
```ts
import { Agent } from "@voltagent/core";
import { AnthropicProvider } from "@voltagent/anthropic-ai";

const provider = new AnthropicProvider({ apiKey: "..." });

const agent = new Agent({
  name: "my-agent",
  llm: provider,
  model: "claude-opus-4-1",
});
```
After (Recommended):
```ts
import { Agent } from "@voltagent/core";

const agent = new Agent({
  name: "my-agent",
  instructions: "You are a helpful assistant",
  model: "anthropic/claude-3-5-sonnet",
});
```
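After migrating, the agent is invoked the same way regardless of how the model was specified. A usage sketch, assuming the `generateText` helper from the VoltAgent quickstart and an `ANTHROPIC_API_KEY` in the environment:

```typescript
import { Agent } from "@voltagent/core";

const agent = new Agent({
  name: "my-agent",
  instructions: "You are a helpful assistant",
  model: "anthropic/claude-3-5-sonnet",
});

// generateText returns the full response once complete;
// streamText yields it incrementally for streaming UIs.
const result = await agent.generateText("Summarize the Vercel AI SDK in one sentence.");
console.log(result.text);
```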
## Environment Variables
Most providers use environment variables for API keys:
```bash
# OpenAI
OPENAI_API_KEY=your-key

# Anthropic
ANTHROPIC_API_KEY=your-key

# Google
GOOGLE_GENERATIVE_AI_API_KEY=your-key

# Groq
GROQ_API_KEY=your-key

# And so on...
```
OpenAI-compatible providers may also need a base URL. You can set a provider-specific override like:
```bash
OPENROUTER_BASE_URL=https://openrouter.ai/api/v1
```
## Next Steps
- Choose a provider based on your needs (performance, cost, capabilities)
- Install the corresponding package if you plan to import the provider
- Configure your API keys
- Start building with VoltAgent!
For detailed information about each provider, visit the Vercel AI SDK documentation.
## Acknowledgments
The provider lists and model capabilities in this documentation are sourced from the Vercel AI SDK documentation.
A special thanks to the Vercel AI SDK maintainers and community for creating and maintaining this comprehensive ecosystem of AI providers. Their work enables developers to seamlessly integrate with 30+ AI providers through a unified, well-designed interface.
VoltAgent builds upon this excellent foundation to provide a complete framework for building AI agents and workflows.