o3-pro

OpenAI
Released: Sep 24 · Knowledge Cutoff: May 1, 2024 · Tool Invocation · Reasoning

Our most capable reasoning model, using more compute for the best possible answers on the hardest problems.

Specifications

Context: 200,000 tokens
Maximum Output: 100,000 tokens
Input: text, image
Output: text
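
Since the model accepts image input alongside text, a single request can combine both. Below is a minimal sketch assuming the same OpenAI-compatible endpoint used in the Code Examples section; the image URL is a placeholder, and max_completion_tokens simply caps the reply well under the 100,000-token output limit.

python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.ohmygpt.com/v1",
    api_key="your-api-key",  # Replace with your API key
)

# One user message mixing text and an image (placeholder URL).
response = client.chat.completions.create(
    model="o3-pro",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this diagram."},
                {"type": "image_url", "image_url": {"url": "https://example.com/diagram.png"}},
            ],
        }
    ],
    max_completion_tokens=2000,  # cap the reply; the hard output limit is 100,000 tokens
)

print(response.choices[0].message.content)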

Performance (7-day Average)

Uptime · TPS · RT

Pricing

Input: $22.00 / M tokens
Output: $88.00 / M tokens
Batch Input: $11.00 / M tokens
Batch Output: $44.00 / M tokens
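
At these rates, a request's cost follows directly from its token counts, with batch traffic billed at half the interactive price. A small arithmetic sketch (the token counts are made-up example values):

python
# Per-million-token prices from the table above.
INPUT_PER_M, OUTPUT_PER_M = 22.00, 88.00                 # interactive
BATCH_INPUT_PER_M, BATCH_OUTPUT_PER_M = 11.00, 44.00     # batch (half price)

def cost_usd(input_tokens: int, output_tokens: int, batch: bool = False) -> float:
    """Estimate the USD cost of one request from its token counts."""
    in_rate = BATCH_INPUT_PER_M if batch else INPUT_PER_M
    out_rate = BATCH_OUTPUT_PER_M if batch else OUTPUT_PER_M
    return input_tokens / 1_000_000 * in_rate + output_tokens / 1_000_000 * out_rate

# Example: 10,000 input tokens and 50,000 output tokens.
print(f"interactive: ${cost_usd(10_000, 50_000):.2f}")              # $4.62
print(f"batch:       ${cost_usd(10_000, 50_000, batch=True):.2f}")  # $2.31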

Usage Statistics

No usage data available for this model during the selected period

Similar Models

$16.50 in / $132.00 out per M tokens · 400K context · 272K max output

GPT-5 pro uses more compute to think harder and provide consistently better answers.

$22.00 in / $88.00 out per M tokens · 200K context · 100K max output

Snapshot of o3-pro from June 10, 2025. Our most capable reasoning model for the hardest problems.

$16.50 in / $66.00 out per M tokens · 200K context · 100K max output

A reasoning model designed to solve hard problems across domains. Uses chain of thought to think before responding.

Code Examples

Use the OpenAI Python SDK to call this model. Replace your-api-key with your API key.

python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.ohmygpt.com/v1",
    api_key="your-api-key",  # Replace with your API key
)

response = client.chat.completions.create(
    model="o3-pro",
    messages=[
        {"role": "user", "content": "Hello!"}
    ],
)

print(response.choices[0].message.content)
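
On OpenAI's own platform, o3-pro is served through the Responses API rather than Chat Completions. If this gateway also forwards that endpoint (an assumption, not stated above), the equivalent call would look like this sketch:

python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.ohmygpt.com/v1",
    api_key="your-api-key",  # Replace with your API key
)

# Responses API form of the same request (assumes the gateway proxies /v1/responses).
response = client.responses.create(
    model="o3-pro",
    input="Hello!",
)

print(response.output_text)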