Model Catalog
OfoxAI provides unified access to mainstream LLMs. You can browse all available models in the OfoxAI Model Catalog, or programmatically retrieve complete model information via the Models API.
Model Naming Convention
All models follow the provider/model-name format:
anthropic/claude-sonnet-4.5
google/gemini-3-flash-preview
moonshotai/kimi-k2.5
Models API Standard
OfoxAI’s Models API follows the OpenRouter standard, returning complete metadata for each model in JSON format.
API Response Structure
Root Response Object
{
"object": "list",
"data": [
/* Array of Model objects */
]
}
Model Object
Each model contains the following standardized fields:
| Field | Type | Description |
|---|---|---|
| id | string | Model identifier used in API requests, e.g. "anthropic/claude-sonnet-4.5" |
| canonical_slug | string | Permanent model identifier; never changes |
| name | string | Model display name |
| created | number | Time the model was added (Unix timestamp) |
| description | string | Detailed description of model capabilities and features |
| context_length | number | Maximum context window size (tokens) |
| architecture | Architecture | Model technical architecture information |
| pricing | Pricing | Model pricing information |
| top_provider | TopProvider | Primary provider configuration |
| supported_parameters | string[] | List of supported API parameters |
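As a sketch of how these documented fields might be consumed in practice, the snippet below filters a Models API response by context window size. The field names follow the Model object schema above; the sample response here is illustrative, not real catalog output.

```python
# Minimal sketch: filter Model objects by context_length.
# Field names follow the Models API schema; the sample response
# below is illustrative, not real catalog data.

def models_with_context(response: dict, min_tokens: int) -> list[str]:
    """Return ids of models whose context window is at least min_tokens."""
    return [
        m["id"]
        for m in response["data"]
        if m.get("context_length", 0) >= min_tokens
    ]

sample = {
    "object": "list",
    "data": [
        {"id": "anthropic/claude-sonnet-4.5", "context_length": 200000},
        {"id": "moonshotai/kimi-k2.5", "context_length": 128000},
    ],
}

print(models_with_context(sample, 150000))  # ['anthropic/claude-sonnet-4.5']
```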
Architecture Object
Describes the model’s input/output modalities and tokenizer information:
{
"modality": "text+image+file->text",
"input_modalities": ["text", "image", "file"],
"output_modalities": ["text"],
"tokenizer": "claude",
"instruct_type": null
}
| Field | Description |
|---|---|
| modality | Shorthand for input/output modalities, e.g. text+image->text |
| input_modalities | Supported input types: text, image, audio, file |
| output_modalities | Supported output types: text |
| tokenizer | Tokenizer type |
| instruct_type | Instruction format type (null for some models) |
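A common use of the Architecture object is checking modality support before sending non-text input. The check below is a sketch against the field names documented above; the sample model is illustrative.

```python
# Sketch: check input modalities from a model's Architecture object.
# Field names follow the schema documented above.

def accepts_image(model: dict) -> bool:
    """True if the model lists "image" among its input modalities."""
    arch = model.get("architecture", {})
    return "image" in arch.get("input_modalities", [])

claude = {
    "id": "anthropic/claude-sonnet-4.5",
    "architecture": {
        "modality": "text+image+file->text",
        "input_modalities": ["text", "image", "file"],
        "output_modalities": ["text"],
    },
}

print(accepts_image(claude))  # True
```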
Pricing Object
All prices are in USD per token. A value of "0" indicates free.
{
"prompt": "0.000001",
"completion": "0.000005",
"input_cache_read": "0.0000001",
"input_cache_write_5m": "0.00000125",
"input_cache_write_1h": "0.000002"
}
| Field | Description |
|---|---|
| prompt | Input token price |
| completion | Output token price |
| input_cache_read | Cache read token price |
| input_cache_write_5m | 5-minute cache write token price |
| input_cache_write_1h | 1-hour cache write token price |
Different models use different tokenizers, so even with identical input and output text, the token count (and cost) may vary. Use the usage field in the response to get the actual token consumption.
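Since prices arrive as decimal strings in USD per token, a cost estimate can multiply them by the token counts reported in the usage field. This is a sketch under that assumption, using Decimal to avoid floating-point drift; the token counts are placeholders.

```python
from decimal import Decimal

# Sketch: estimate request cost from the Pricing object.
# Prices are USD-per-token strings as documented above; the token
# counts would come from the usage field of an actual response.

def estimate_cost(pricing: dict, prompt_tokens: int, completion_tokens: int) -> Decimal:
    """Return estimated USD cost for one request."""
    return (
        Decimal(pricing["prompt"]) * prompt_tokens
        + Decimal(pricing["completion"]) * completion_tokens
    )

pricing = {"prompt": "0.000001", "completion": "0.000005"}
print(estimate_cost(pricing, 1000, 500))  # 0.003500
```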
TopProvider Object
{
"context_length": 200000,
"max_completion_tokens": 8192,
"is_moderated": false
}
| Field | Description |
|---|---|
| context_length | Provider-level context limit |
| max_completion_tokens | Maximum tokens per response |
| is_moderated | Whether content moderation is enabled |
Supported Parameters
The supported_parameters array indicates which OpenAI-compatible parameters the model supports:
| Parameter | Description |
|---|---|
| temperature | Sampling temperature control |
| top_p | Nucleus sampling parameter |
| max_tokens | Maximum response length |
| stop | Custom stop sequences |
| tools | Function Calling / Tool Use |
| tool_choice | Tool selection strategy |
| response_format | Output format specification (JSON Mode) |
| reasoning | Deep reasoning mode |
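One practical pattern is guarding a request on supported_parameters before attaching optional features such as tools. This is a sketch against the array documented above; the sample model's parameter list is illustrative.

```python
# Sketch: check supported_parameters before attaching optional
# request features. The parameter list below is illustrative.

def supports(model: dict, param: str) -> bool:
    """True if the model advertises support for the given parameter."""
    return param in model.get("supported_parameters", [])

model = {
    "id": "anthropic/claude-sonnet-4.5",
    "supported_parameters": [
        "temperature", "top_p", "max_tokens", "stop", "tools", "tool_choice",
    ],
}

if supports(model, "tools"):
    print("tool calls allowed")
```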
Fetching the Model List
The Models API is a public endpoint that does not require an API Key.
cURL
curl https://api.ofox.ai/v1/models
For the complete live model list and pricing, visit the OfoxAI Model Catalog. For API endpoint details, see the Models API Reference.