# Structured Output

Structured Output makes the model return data in a JSON format you specify. It’s ideal for data extraction, classification, form filling, and similar use cases.
## JSON Mode

The simplest structured output method — force the model to return valid JSON:
json_mode.py

```python
import json

from openai import OpenAI

client = OpenAI(
    base_url="https://api.ofox.ai/v1",
    api_key="<your OFOXAI_API_KEY>",
)

response = client.chat.completions.create(
    model="openai/gpt-4o",
    messages=[
        {"role": "system", "content": "You are a data extraction assistant. Return results in JSON format."},
        {"role": "user", "content": "Extract the name, company, and title from this text: John Smith is a senior engineer at Google"},
    ],
    response_format={"type": "json_object"},
)

result = json.loads(response.choices[0].message.content)
print(result)
# {"name": "John Smith", "company": "Google", "title": "Senior Engineer"}
```

When using JSON Mode, your system prompt must include the word “JSON”, or some models may ignore the format requirement.
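JSON Mode is designed to produce syntactically valid JSON, but it does not guarantee that the fields you asked for are present, so it is worth parsing defensively. A minimal sketch (the `parse_model_json` helper is hypothetical, not part of the SDK):

```python
import json

def parse_model_json(content: str) -> dict:
    """Parse model output that should be JSON, failing with a useful error."""
    try:
        return json.loads(content)
    except json.JSONDecodeError as err:
        # Keep the raw content in the error so a retry or debug path can log it.
        raise ValueError(f"Model did not return valid JSON: {content!r}") from err

result = parse_model_json('{"name": "John Smith", "company": "Google"}')
print(result["company"])  # Google
```

A caller can catch the `ValueError` and re-issue the request, which is usually cheaper than letting malformed output propagate downstream.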
## JSON Schema Constraints

For more precise control over the output structure, ensuring field names and types match expectations:
json_schema.py

```python
response = client.chat.completions.create(
    model="openai/gpt-4o",
    messages=[
        {"role": "user", "content": "Analyze the sentiment of this review: This product is amazing, very easy to use!"}
    ],
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "sentiment_analysis",
            "schema": {
                "type": "object",
                "properties": {
                    "sentiment": {
                        "type": "string",
                        "enum": ["positive", "negative", "neutral"],
                        "description": "Sentiment direction"
                    },
                    "confidence": {
                        "type": "number",
                        "description": "Confidence score 0-1"
                    },
                    "keywords": {
                        "type": "array",
                        "items": {"type": "string"},
                        "description": "Key sentiment words"
                    }
                },
                "required": ["sentiment", "confidence", "keywords"],
                "additionalProperties": False
            }
        }
    }
)
```

Output:
```json
{
  "sentiment": "positive",
  "confidence": 0.95,
  "keywords": ["amazing", "very easy to use"]
}
```

## Real-World Use Cases
### Data Extraction

```python
# Extract structured data from unstructured text
response = client.chat.completions.create(
    model="openai/gpt-4o",
    messages=[{
        "role": "user",
        "content": """Extract the order information:
Customer John Doe placed an order on January 15, 2025 for 3 MacBook Pros,
priced at $1,999 each, shipping to 123 Main Street, New York, NY 10001"""
    }],
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "order_info",
            "schema": {
                "type": "object",
                "properties": {
                    "customer": {"type": "string"},
                    "date": {"type": "string"},
                    "items": {
                        "type": "array",
                        "items": {
                            "type": "object",
                            "properties": {
                                "name": {"type": "string"},
                                "quantity": {"type": "integer"},
                                "unit_price": {"type": "number"}
                            }
                        }
                    },
                    "address": {"type": "string"}
                },
                "required": ["customer", "date", "items", "address"]
            }
        }
    }
)
```

### Classification
```python
# Multi-label classification
response = client.chat.completions.create(
    model="openai/gpt-4o",
    messages=[{
        "role": "user",
        "content": "Classify the topic of this article: AI technology is being increasingly applied in the healthcare sector..."
    }],
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "classification",
            "schema": {
                "type": "object",
                "properties": {
                    "primary_category": {"type": "string"},
                    "secondary_categories": {
                        "type": "array",
                        "items": {"type": "string"}
                    },
                    "tags": {
                        "type": "array",
                        "items": {"type": "string"}
                    }
                },
                "required": ["primary_category"]
            }
        }
    }
)
```

## Supported Models
| Model | JSON Mode | JSON Schema |
|---|---|---|
| openai/gpt-4o | ✅ | ✅ |
| openai/gpt-4o-mini | ✅ | ✅ |
| anthropic/claude-sonnet-4.5 | ✅ | — |
| google/gemini-3-flash-preview | ✅ | ✅ |
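Even when a model enforces the schema server-side, it can be worth sanity-checking responses before acting on them. A hand-rolled sketch for the sentiment schema shown earlier (a full JSON Schema validator library would be more thorough):

```python
def check_sentiment(payload: dict) -> list:
    """Return a list of problems found; an empty list means the payload looks valid."""
    errors = []
    for field in ("sentiment", "confidence", "keywords"):
        if field not in payload:
            errors.append(f"missing field: {field}")
    if payload.get("sentiment") not in ("positive", "negative", "neutral"):
        errors.append("sentiment is outside the allowed enum")
    confidence = payload.get("confidence")
    if not isinstance(confidence, (int, float)) or not 0 <= confidence <= 1:
        errors.append("confidence must be a number between 0 and 1")
    return errors

print(check_sentiment({"sentiment": "positive", "confidence": 0.95, "keywords": ["amazing"]}))
# []
```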
For models that don’t support JSON Schema, you can achieve similar results by describing the expected JSON format in detail within the system prompt.
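For example, one fallback is to embed the expected shape in the system prompt and combine it with plain JSON Mode. A sketch (the field descriptions mirror the sentiment example above):

```python
import json

# Describe the expected structure in the prompt for models without JSON Schema support.
schema_hint = json.dumps({
    "sentiment": "one of: positive, negative, neutral",
    "confidence": "a number between 0 and 1",
    "keywords": "an array of strings",
}, indent=2)

system_prompt = (
    "You are a sentiment analysis assistant. Respond in JSON only, "
    "as a single object with exactly this shape:\n" + schema_hint
)
# Use system_prompt as the system message together with
# response_format={"type": "json_object"}, as in the JSON Mode section.
```

This keeps the word “JSON” in the system prompt, which JSON Mode requires, while spelling out the field names and types the schema would otherwise enforce.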