LLM Configuration
Configure AI models for test generation
WellTested uses Large Language Models (LLMs) to generate test scenarios and code. You must configure an LLM provider before generating tests.
Quick Setup
On first login, you’ll see a prompt to configure LLM settings.
Visit System Settings and fill in the following fields:
- LLM Provider: Select your LLM's API interface (openai or claude; see the table below)
- API Key: Enter your API key
- API Base URL: Enter your LLM’s API endpoint
- Model Name: Enter the model name
Below are example LLM configurations:
| LLM | LLM Provider | API Base URL | Model Name (Recommended) |
|---|---|---|---|
| OpenAI | openai | https://api.openai.com/v1 | gpt-4o-2024-08-06 |
| Claude (Anthropic) | claude | https://api.anthropic.com | claude-sonnet-4-5 |
| Qwen | openai | https://dashscope.aliyuncs.com/compatible-mode/v1 | qwen3-max |
| Self-hosted vLLM | openai | http://vllm-server:8000/v1 | deepseek-ai/DeepSeek-V3-0324 |
| Self-hosted Ollama | openai | http://10.0.1.100:11434/v1 | deepseek-v3:671b |
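The table values map directly onto a chat-completions request. As a quick sanity check outside the UI, the sketch below assembles the request an OpenAI-compatible endpoint expects; the helper name and the key placeholder are illustrative, not part of WellTested:

```python
import json
import urllib.request

def build_chat_request(base_url: str, api_key: str, model: str, prompt: str):
    """Assemble an OpenAI-compatible /chat/completions request."""
    url = base_url.rstrip("/") + "/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 8,  # keep the smoke test cheap
    }
    return urllib.request.Request(
        url, data=json.dumps(body).encode(), headers=headers, method="POST"
    )

# Example using the Qwen row from the table above:
req = build_chat_request(
    "https://dashscope.aliyuncs.com/compatible-mode/v1",
    "sk-...",  # placeholder; substitute your real API key
    "qwen3-max",
    "ping",
)
# urllib.request.urlopen(req) would send it; a 200 response means the
# base URL, key, and model name all line up.
```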
Get API Keys
- OpenAI: https://platform.openai.com/api-keys
- Anthropic: https://console.anthropic.com/
- Qwen: https://dashscope.console.aliyun.com/apiKey
Langfuse Monitoring (Optional)
Track AI usage and costs by configuring Langfuse in .env:
LANGFUSE_PUBLIC_KEY=pk-lf-...
LANGFUSE_SECRET_KEY=sk-lf-...
LANGFUSE_HOST=https://cloud.langfuse.com
Sign up at https://cloud.langfuse.com to get credentials.
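Because Langfuse is optional, a missing or mistyped variable typically just means no traces appear rather than a hard failure. A small startup check like the sketch below (the helper is ours, not a WellTested API) can catch that early:

```python
import os

# The three variables from the .env snippet above.
REQUIRED_LANGFUSE_VARS = ("LANGFUSE_PUBLIC_KEY", "LANGFUSE_SECRET_KEY", "LANGFUSE_HOST")

def langfuse_missing(env=os.environ):
    """Return the Langfuse variables that are unset or empty."""
    return [name for name in REQUIRED_LANGFUSE_VARS if not env.get(name)]

# Example: only the host is set, so both keys are reported missing.
missing = langfuse_missing({"LANGFUSE_HOST": "https://cloud.langfuse.com"})
```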
Troubleshooting
Connection Test Fails
Solutions:
- Verify the API key is correct
- Check the API Base URL format
- Ensure the model name is correct
- Check your internet connection
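The checks above can often be narrowed down from the HTTP status a failed test request returns. The mapping below is a rough heuristic for OpenAI-style APIs, not WellTested's actual diagnostic logic:

```python
def diagnose_connection_error(status: int) -> str:
    """Map common HTTP statuses from an LLM API to a likely cause."""
    causes = {
        401: "invalid or missing API key",
        403: "API key lacks permission for this model",
        404: "wrong API Base URL path or unknown model name",
        429: "rate limit exceeded; wait and retry",
    }
    if status in causes:
        return causes[status]
    if 500 <= status < 600:
        return "provider-side error; retry later"
    return "unexpected status; check the base URL and network connectivity"
```

For example, a 401 on the connection test points at the API key rather than the URL or model fields.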
Rate Limit Exceeded
Solutions:
- Wait and retry
- Upgrade your provider plan
- Switch to a different model
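"Wait and retry" is typically implemented as exponential backoff. A minimal sketch follows; the delays, retry count, and exception type are arbitrary choices for illustration, not WellTested defaults:

```python
import time

def with_backoff(call, retries=4, base_delay=1.0, sleep=time.sleep):
    """Retry `call` when it raises RuntimeError, doubling the delay each time."""
    for attempt in range(retries):
        try:
            return call()
        except RuntimeError:
            if attempt == retries - 1:
                raise  # out of retries; surface the error
            sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s, ...

# Example: a fake call that is rate-limited twice, then succeeds.
attempts = []
def flaky():
    attempts.append(1)
    if len(attempts) < 3:
        raise RuntimeError("rate limited")
    return "ok"

result = with_backoff(flaky, sleep=lambda s: None)  # skip real sleeping in the demo
```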
Next Steps
After configuring LLM:
- Create Project - Create your first project
- Upload API - Import OpenAPI document
- Generate Scenario - Create test scenarios