LLM Providers Guide
Adaptly supports multiple Large Language Model providers, giving you flexibility in AI capabilities, costs, and performance. This guide covers setup for all supported providers.
Quick Start (Google Gemini)
For beginners, we recommend starting with Google Gemini as it's the easiest to set up and has a generous free tier.
1. Get Your API Key
- Go to Google AI Studio
- Sign in with your Google account
- Click "Get API Key" and create a new key
- Copy the API key (starts with AIza...)
2. Set Up Environment Variables
Create a .env.local file:
# .env.local
NEXT_PUBLIC_GOOGLE_GENERATIVE_AI_API_KEY=your_api_key_here
3. Configure AdaptlyProvider
import { AdaptlyProvider } from "adaptly";
import adaptlyConfig from "./adaptly.json";

export default function App() {
  return (
    <AdaptlyProvider
      apiKey={process.env.NEXT_PUBLIC_GOOGLE_GENERATIVE_AI_API_KEY!}
      provider="google"
      model="gemini-2.0-flash-exp"
      components={{ MetricCard, SalesChart }}
      adaptlyConfig={adaptlyConfig}
    />
  );
}
4. Supported Models
- gemini-2.0-flash-exp (recommended - latest)
- gemini-2.0-flash
- gemini-1.5-pro
- gemini-1.5-flash
- gemini-1.0-pro
Advanced Provider Setup
OpenAI GPT Setup
1. Get Your API Key
- Go to OpenAI Platform
- Sign in and navigate to API Keys
- Create a new secret key
- Copy the key (starts with sk-...)
2. Environment Variables
# .env.local
NEXT_PUBLIC_OPENAI_API_KEY=your_openai_key_here
3. Configuration
<AdaptlyProvider
  apiKey={process.env.NEXT_PUBLIC_OPENAI_API_KEY!}
  provider="openai"
  model="gpt-4o"
  components={{ MetricCard, SalesChart }}
  adaptlyConfig={adaptlyConfig}
/>
4. Supported Models
- gpt-4o (recommended - most capable)
- gpt-4o-mini (cost-effective)
- gpt-4-turbo
- gpt-4
- gpt-3.5-turbo
Anthropic Claude Setup
1. Get Your API Key
- Go to Anthropic Console
- Sign in and navigate to API Keys
- Create a new key
- Copy the key (starts with sk-ant-...)
2. Environment Variables
# .env.local
NEXT_PUBLIC_ANTHROPIC_API_KEY=your_anthropic_key_here
3. Configuration
<AdaptlyProvider
  apiKey={process.env.NEXT_PUBLIC_ANTHROPIC_API_KEY!}
  provider="anthropic"
  model="claude-3-5-sonnet-20241022"
  components={{ MetricCard, SalesChart }}
  adaptlyConfig={adaptlyConfig}
/>
4. Supported Models
- claude-3-5-sonnet-20241022 (recommended - most capable)
- claude-3-5-haiku-20241022 (fastest)
- claude-3-opus-20240229
- claude-3-sonnet-20240229
Provider Comparison
| Provider | Best For | Cost | Speed | Capabilities |
|---|---|---|---|---|
| Google Gemini | Beginners, Free tier | Free tier available | Fast | Good for most tasks |
| OpenAI GPT | Advanced reasoning | Higher cost | Medium | Excellent for complex tasks |
| Anthropic Claude | Balanced performance | Medium cost | Fast | Great for structured output |
Runtime Provider Switching
You can switch providers at runtime using state management:
"use client";
import { useState } from "react";
import { AdaptlyProvider } from "adaptly";
export default function App() {
const [selectedProvider, setSelectedProvider] = useState<"google" | "openai" | "anthropic">("google");
const [selectedModel, setSelectedModel] = useState("gemini-2.0-flash-exp");
const getApiKey = () => {
switch (selectedProvider) {
case "google":
return process.env.NEXT_PUBLIC_GOOGLE_GENERATIVE_AI_API_KEY!;
case "openai":
return process.env.NEXT_PUBLIC_OPENAI_API_KEY!;
case "anthropic":
return process.env.NEXT_PUBLIC_ANTHROPIC_API_KEY!;
default:
return "";
}
};
const modelOptions = {
google: ["gemini-2.0-flash-exp", "gemini-2.0-flash", "gemini-1.5-pro"],
openai: ["gpt-4o", "gpt-4o-mini", "gpt-4-turbo"],
anthropic: ["claude-3-5-sonnet-20241022", "claude-3-5-haiku-20241022"]
};
return (
<div>
{/* Provider selector */}
<select
value={selectedProvider}
onChange={(e) => setSelectedProvider(e.target.value as any)}
>
<option value="google">Google Gemini</option>
<option value="openai">OpenAI GPT</option>
<option value="anthropic">Anthropic Claude</option>
</select>
{/* Model selector */}
<select
value={selectedModel}
onChange={(e) => setSelectedModel(e.target.value)}
>
{modelOptions[selectedProvider].map(model => (
<option key={model} value={model}>{model}</option>
))}
</select>
<AdaptlyProvider
apiKey={getApiKey()}
provider={selectedProvider}
model={selectedModel}
components={{ MetricCard, SalesChart }}
adaptlyConfig={adaptlyConfig}
/>
</div>
);
}
Model Selection Guide
For Development
- Google Gemini: gemini-2.0-flash-exp (free tier, good performance)
- OpenAI: gpt-4o-mini (cost-effective, good quality)
- Anthropic: claude-3-5-haiku-20241022 (fastest, good quality)
For Production
- Google Gemini: gemini-2.0-flash (stable, good performance)
- OpenAI: gpt-4o (best quality, higher cost)
- Anthropic: claude-3-5-sonnet-20241022 (excellent quality, balanced cost)
For Cost Optimization
- Google Gemini: gemini-1.5-flash (fastest, lowest cost)
- OpenAI: gpt-3.5-turbo (good quality, lower cost)
- Anthropic: claude-3-5-haiku-20241022 (fast, cost-effective)
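If you want these recommendations encoded in one place, a small lookup keyed on the environment works well. The sketch below is only an illustration: PROVIDER_MODELS and pickModel are hypothetical names, not part of Adaptly.

type Provider = "google" | "openai" | "anthropic";
type Mode = "development" | "production";

// Hypothetical lookup table based on the recommendations above
const PROVIDER_MODELS: Record<Provider, Record<Mode, string>> = {
  google: { development: "gemini-2.0-flash-exp", production: "gemini-2.0-flash" },
  openai: { development: "gpt-4o-mini", production: "gpt-4o" },
  anthropic: { development: "claude-3-5-haiku-20241022", production: "claude-3-5-sonnet-20241022" },
};

// Pick a model for the current environment
function pickModel(provider: Provider): string {
  const mode: Mode = process.env.NODE_ENV === "production" ? "production" : "development";
  return PROVIDER_MODELS[provider][mode];
}

// Usage: <AdaptlyProvider provider="openai" model={pickModel("openai")} ... />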
Advanced Configuration
Custom LLM Settings
You can customize LLM behavior through the AdaptlyProvider:
<AdaptlyProvider
  apiKey={apiKey}
  provider="openai"
  model="gpt-4o"
  components={components}
  adaptlyConfig={adaptlyConfig}
  // Custom configuration
  enableStorage={true}
  storageKey="my-app-ui"
  storageVersion="1.0.0"
/>
Error Handling
Each provider has different error patterns:
// Handle provider-specific errors
const handleLLMError = (error: Error) => {
  if (error.message.includes("API key")) {
    console.error("Invalid API key for provider");
  } else if (error.message.includes("quota")) {
    console.error("API quota exceeded");
  } else if (error.message.includes("model")) {
    console.error("Model not available");
  }
};
Provider-Specific Features
Google Gemini
- Free tier: 15 requests per minute
- Rate limits: Generous for development
- Best for: Quick prototyping, free development
OpenAI GPT
- Rate limits: Varies by model and tier
- Best for: Complex reasoning, advanced tasks
- Cost: Higher but excellent quality
Anthropic Claude
- Rate limits: 5 requests per minute (free tier)
- Best for: Balanced performance and cost
- Safety: Built-in safety features
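Because every provider enforces per-minute rate limits, calls that might hit them can be wrapped in a simple retry with exponential backoff. The helper below is a generic sketch, not an Adaptly or provider API: retryWithBackoff is a hypothetical name, and the error check is an assumption you should adapt to the errors you actually see.

// Retry a call that may be rate limited, waiting 1s, 2s, 4s, ... between attempts.
// retryWithBackoff is a hypothetical helper, not part of Adaptly.
async function retryWithBackoff<T>(
  fn: () => Promise<T>,
  maxRetries = 3,
  baseDelayMs = 1000
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (error) {
      const isRateLimit =
        error instanceof Error && /rate limit|quota|429/i.test(error.message);
      if (!isRateLimit || attempt >= maxRetries) throw error;
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
    }
  }
}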
Troubleshooting
Common Issues
"API key is required" error:
- Check environment variable name matches exactly
- Restart development server after adding variables
- Verify API key format (starts with correct prefix)
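One way to catch a missing or mis-shaped key early is to check it against the documented prefix before handing it to AdaptlyProvider. This is only a sanity check based on the prefixes listed above; validateApiKey is a hypothetical helper, not an Adaptly API.

// Sanity-check an API key against the documented prefix for its provider.
// validateApiKey is a hypothetical helper, not part of Adaptly.
const KEY_PREFIXES = { google: "AIza", openai: "sk-", anthropic: "sk-ant-" } as const;

function validateApiKey(provider: keyof typeof KEY_PREFIXES, apiKey?: string): string {
  if (!apiKey) {
    throw new Error(`Missing API key for ${provider}. Restart the dev server after editing .env.local.`);
  }
  if (!apiKey.startsWith(KEY_PREFIXES[provider])) {
    throw new Error(`API key for ${provider} should start with "${KEY_PREFIXES[provider]}".`);
  }
  return apiKey;
}

// Usage: validateApiKey("google", process.env.NEXT_PUBLIC_GOOGLE_GENERATIVE_AI_API_KEY)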
"Model not found" error:
- Check model name spelling
- Verify model is available in your region
- Try a different model from the supported list
"Rate limit exceeded" error:
- Wait before making more requests
- Consider upgrading your API plan
- Switch to a different provider temporarily
"Invalid API key" error:
- Verify the key is correct and active
- Check if the key has expired
- Ensure the key has the right permissions
Debug Mode
Enable debug logging to see provider-specific information:
// In your component
import { adaptlyLogger } from "adaptly";
// Enable debug logging
adaptlyLogger.setConfig({ enabled: true, level: "debug" });
Provider Status
Check if your provider is working:
const { currentLLMProvider, isLLMProcessing } = useAdaptiveUI();
console.log("Current provider:", currentLLMProvider);
console.log("Processing:", isLLMProcessing);
Cost Optimization
Free Tiers
- Google Gemini: 15 requests/minute (free)
- OpenAI: $5 credit (new accounts)
- Anthropic: 5 requests/minute (free)
Cost-Effective Models
- Google: gemini-1.5-flash (fastest, cheapest)
- OpenAI: gpt-3.5-turbo (good quality, lower cost)
- Anthropic: claude-3-5-haiku-20241022 (fast, cost-effective)
Usage Monitoring
- Monitor your API usage in provider dashboards
- Set up billing alerts
- Use development models for testing
Next Steps
- Storage Service Guide - Configure persistence
- Advanced Features Guide - Custom loaders and validation
- API Reference - Complete component documentation
Example Implementations
- Demo App - Full provider switching
- Component Examples - Real React components
Ready to configure persistence? Check out the Storage Service Guide to learn how Adaptly automatically saves and restores your UI state!