TA/deepseek-ai/DeepSeek-R1-Distill-Llama-70B

Common Name: DeepSeek R1 Distill Llama 70B

TogetherAI
Released on Feb 17

DeepSeek R1 reasoning model distilled to Llama 70B architecture, hosted on TogetherAI.

Specifications

Context: 128,000 tokens
Input: text
Output: text

Performance (7-day Average)

Uptime
TPS (tokens per second)
RURT

Pricing

Input: $2.20 / M tokens
Output: $2.20 / M tokens
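At a flat $2.20 per million tokens for both input and output, the cost of a request follows directly from its token counts. A minimal sketch using the rates from the pricing table above:

```python
# Rates from the pricing table above: $2.20 per million tokens, input and output.
INPUT_RATE_PER_MTOK = 2.20
OUTPUT_RATE_PER_MTOK = 2.20

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated cost in USD for a single request."""
    return (input_tokens * INPUT_RATE_PER_MTOK
            + output_tokens * OUTPUT_RATE_PER_MTOK) / 1_000_000

# Example: a 2,000-token prompt with a 10,000-token reasoning trace + answer.
cost = request_cost(2_000, 10_000)
print(f"${cost:.4f}")  # 12,000 tokens at $2.20/M = $0.0264
```

Note that reasoning models like this one tend to emit long chains of thought, so output tokens usually dominate the bill even though both directions share the same rate here.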

Usage Statistics

No usage data available for this model during the selected period

Similar Models

$3.30 input / $7.70 output per M tokens
Context: 64K · Max output: 8K

DeepSeek's reasoning model trained via large-scale reinforcement learning, hosted on TogetherAI.

$1.38 input / $1.38 output per M tokens
Context: 64K · Max output: 8K

DeepSeek V3 MoE model with 671B total parameters and 37B active, hosted on TogetherAI.

$3.85 input / $3.85 output per M tokens
Context: 128K

Meta's largest Llama 3.1 405B model optimized for fast inference on TogetherAI.

$1.32 input / $1.32 output per M tokens
Context: 128K

Alibaba's Qwen2.5 7B model optimized for fast inference on TogetherAI.

Documentation

No documentation is available on this page. This model (TA/deepseek-ai/DeepSeek-R1-Distill-Llama-70B) uses a dedicated API; refer to the official TogetherAI documentation for usage examples.
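TogetherAI's chat endpoints are generally OpenAI-compatible, so as a non-authoritative sketch, a chat-completion request body for this model might be built as follows. The model id is inferred from this page's identifier and the endpoint URL is an assumption; confirm both against the official documentation:

```python
import json

# Hypothetical request body for an OpenAI-compatible chat completions endpoint.
# The model id below is an assumption based on this page's identifier.
payload = {
    "model": "deepseek-ai/DeepSeek-R1-Distill-Llama-70B",
    "messages": [
        {"role": "user",
         "content": "Explain chain-of-thought distillation in one paragraph."},
    ],
    "max_tokens": 1024,
    "temperature": 0.6,
}

body = json.dumps(payload)
# Send with any HTTP client, e.g.:
#   POST https://api.together.xyz/v1/chat/completions   (assumed URL)
#   Authorization: Bearer $TOGETHER_API_KEY
#   Content-Type: application/json
```

Because the distilled model emits its reasoning before the final answer, budget `max_tokens` generously relative to the expected answer length.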