Claude Code Setup
Claude Code is Anthropic's official AI coding CLI tool. With OfoxAI, you get high throughput, lower latency, and multi-model switching.
Why Use OfoxAI?
- High rate quota — 200 RPM by default, unlimited TPM for high-frequency coding needs
- 99.9% SLA — Multi-node redundancy with automatic failover
- Low latency — P99 latency < 200ms
- Full protocol support — 100% compatible with Anthropic native protocol
Setup Steps
1. Get an API Key
Go to the OfoxAI Console to create an API Key.
2. Set Environment Variables
Claude Code uses the Anthropic native protocol and requires ANTHROPIC_BASE_URL and ANTHROPIC_AUTH_TOKEN.
Add the following to your shell configuration file (~/.zshrc or ~/.bashrc):

```bash
export ANTHROPIC_BASE_URL=https://api.ofox.ai/anthropic
export ANTHROPIC_AUTH_TOKEN=<your OFOXAI_API_KEY>
```

Then reload:

```bash
source ~/.zshrc
```

3. Verify Configuration
```bash
claude --version
# A normal version output indicates successful configuration

# Send a test message
claude "Hello, how are you?"
```

Model Mapping
Model name mapping when using Claude Code:
| Used in Claude Code | OfoxAI Model ID |
|---|---|
| claude-opus-4.6 | anthropic/claude-opus-4.6 |
| claude-sonnet-4.5 | anthropic/claude-sonnet-4.5 |
| claude-haiku-4.5 | anthropic/claude-haiku-4.5 |
OfoxAI automatically handles model name mapping, so you don’t need to manually add the anthropic/ prefix. Claude Code’s default model names work directly.
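The mapping above is a simple prefix rule. As an illustration only (the function name is hypothetical, not OfoxAI gateway code), the behavior can be sketched as:

```python
def to_ofoxai_model_id(name: str) -> str:
    """Sketch of the gateway's model-name mapping: Claude Code's default
    names get an "anthropic/" prefix; already-qualified IDs pass through."""
    if "/" in name:  # already a fully qualified OfoxAI model ID
        return name
    return f"anthropic/{name}"
```

So both `claude-sonnet-4.5` and `anthropic/claude-sonnet-4.5` resolve to the same model.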
Advanced Configuration
Configure in settings.json
You can also set environment variables in Claude Code’s configuration file:
```json
{
  "env": {
    "ANTHROPIC_BASE_URL": "https://api.ofox.ai/anthropic",
    "ANTHROPIC_AUTH_TOKEN": "<your OFOXAI_API_KEY>"
  }
}
```

Switch Models
Use the /model command in Claude Code to switch models:
```
/model claude-sonnet-4.5
```

Troubleshooting
Common Issues
Q: “Authentication error” message
Check that ANTHROPIC_AUTH_TOKEN is correctly set:
```bash
echo $ANTHROPIC_AUTH_TOKEN
# Should output your API Key
```

Q: Connection timeout
Check that ANTHROPIC_BASE_URL is correct:
```bash
echo $ANTHROPIC_BASE_URL
# Should output https://api.ofox.ai/anthropic
```

Q: Streaming response stuttering
Check your network connection. For access from mainland China, OfoxAI provides Hong Kong node acceleration with latency of roughly 300ms.
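The environment checks above can be bundled into one quick diagnostic. A minimal Python sketch (the expected base URL comes from this guide; the helper name is illustrative):

```python
import os

EXPECTED_BASE_URL = "https://api.ofox.ai/anthropic"

def check_env(env=os.environ):
    """Return a list of human-readable problems with the Claude Code setup;
    an empty list means both variables look correct."""
    problems = []
    base = env.get("ANTHROPIC_BASE_URL", "")
    if not base:
        problems.append("ANTHROPIC_BASE_URL is not set")
    elif base != EXPECTED_BASE_URL:
        problems.append(f"ANTHROPIC_BASE_URL is {base!r}, expected {EXPECTED_BASE_URL!r}")
    if not env.get("ANTHROPIC_AUTH_TOKEN"):
        problems.append("ANTHROPIC_AUTH_TOKEN is not set")
    return problems

if __name__ == "__main__":
    for p in check_env():
        print("FAIL:", p)
```

Run it once after editing your shell configuration; no output means both variables are in place.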