fireworks/models/gpt-oss-20b

Common Name: OpenAI gpt-oss-20b

Fireworks
Released on Oct 16
Tool Invocation

Welcome to the gpt-oss series, OpenAI's open-weight models designed for powerful reasoning, agentic tasks, and versatile developer use cases. gpt-oss-20b is designed for lower-latency, local, or specialized use cases.
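Models hosted on Fireworks are generally reachable through an OpenAI-compatible chat-completions endpoint. The sketch below builds a request for this model; the endpoint URL, the `accounts/fireworks/models/gpt-oss-20b` model ID, and the `FIREWORKS_API_KEY` environment variable are assumptions based on Fireworks' usual conventions, so confirm them against the official documentation before use.

```python
import json
import os

# Assumed endpoint and model ID (verify against the official Fireworks docs).
API_URL = "https://api.fireworks.ai/inference/v1/chat/completions"
MODEL_ID = "accounts/fireworks/models/gpt-oss-20b"

def build_chat_request(prompt: str, max_tokens: int = 256) -> tuple[dict, dict]:
    """Build headers and a JSON payload for a chat-completions call."""
    headers = {
        "Authorization": f"Bearer {os.environ.get('FIREWORKS_API_KEY', '')}",
        "Content-Type": "application/json",
    }
    payload = {
        "model": MODEL_ID,
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }
    return headers, payload

if __name__ == "__main__":
    headers, payload = build_chat_request("Summarize the gpt-oss series in one sentence.")
    print(json.dumps(payload, indent=2))
    # To actually send the request (needs the `requests` package and a valid key):
    # import requests
    # resp = requests.post(API_URL, headers=headers, json=payload, timeout=60)
    # print(resp.json()["choices"][0]["message"]["content"])
```

Keeping request construction separate from the network call makes the payload easy to inspect or log before spending tokens.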

Specifications

Context: 128,000 tokens
Input: text
Output: text
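The 128,000-token context window bounds the prompt plus the requested completion together. A small budgeting helper illustrates the arithmetic; the token counts below are illustrative, since real counts come from the model's tokenizer:

```python
CONTEXT_WINDOW = 128_000  # gpt-oss-20b context length from the spec above

def fits_in_context(prompt_tokens: int, max_completion_tokens: int) -> bool:
    """Check whether a prompt plus the requested completion fits the window."""
    return prompt_tokens + max_completion_tokens <= CONTEXT_WINDOW

# A 120,000-token prompt leaves room for at most 8,000 completion tokens.
assert fits_in_context(120_000, 8_000)
assert not fits_in_context(120_000, 8_001)
```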


Pricing

Input: $0.08 / M tokens
Output: $0.33 / M tokens
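With per-million-token rates, the cost of a request is a straightforward linear function of input and output token counts. A sketch using the prices listed above:

```python
# Per-million-token rates from the pricing table above
INPUT_PRICE_PER_M = 0.08   # USD per million input tokens
OUTPUT_PRICE_PER_M = 0.33  # USD per million output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate request cost in USD from token counts."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# 10,000 input tokens and 2,000 output tokens:
# 10_000 * 0.08/1e6 + 2_000 * 0.33/1e6 = 0.0008 + 0.00066 = 0.00146 USD
print(f"${estimate_cost(10_000, 2_000):.5f}")  # → $0.00146
```

Note that output tokens cost roughly 4x input tokens here, so long completions dominate the bill.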


Similar Models

$0.62 input / $1.85 output per M tokens · 160K context

DeepSeek-V3.1-Terminus is an updated version of DeepSeek-V3.1 with enhanced language consistency, reduced mixed Chinese-English text, and optimized Code Agent and Search Agent performance.

$0.66 input / $2.75 output per M tokens · 256K context

Kimi K2 0905 is an updated version of Kimi K2, a state-of-the-art mixture-of-experts (MoE) language model with 32 billion activated parameters and 1 trillion total parameters. Kimi K2 0905 has improved coding abilities, agentic tool use, and a longer (262K) context window.

$0.62 input / $1.85 output per M tokens · 160K context

DeepSeek-V3.1 is post-trained on top of DeepSeek-V3.1-Base, which is built upon the original V3 base checkpoint through a two-phase long-context extension approach, following the methodology outlined in the original DeepSeek-V3 report. We have expanded our dataset by collecting additional long documents and substantially extending both training phases. The 32K extension phase has been increased 10-fold to 630B tokens, while the 128K extension phase has been extended by 3.3x to 209B tokens. Additionally, DeepSeek-V3.1 is trained using the UE8M0 FP8 scale data format to ensure compatibility with microscaling data formats.

$0.17 input / $0.66 output per M tokens · 128K context

Welcome to the gpt-oss series, OpenAI's open-weight models designed for powerful reasoning, agentic tasks, and versatile developer use cases. gpt-oss-120b is designed for production, general-purpose, high-reasoning use cases, and fits on a single H100 GPU.

Documentation

This model (fireworks/models/gpt-oss-20b) uses a dedicated API. Please refer to the official documentation for usage examples.