TA/deepseek-ai/DeepSeek-R1-Distill-Llama-70B-free

Common Name: DeepSeek R1 Distill Llama 70B

TogetherAI
-100% (on sale; free tier). Released on Feb 17, 12:00 AM.

Free tier of DeepSeek R1 distilled to the Llama 70B architecture, hosted on TogetherAI.

Specifications

Context: 128,000 tokens
Input: text
Output: text

Performance (7-day Average)

No performance data shown (Uptime, TPS, RURT).

Pricing

Input: Free
Output: Free

Usage Statistics

No usage data available for this model during the selected period

Similar Models

$0.17 input / $0.66 output per M tokens; 128K context

OpenAI's open-weight 120B model for production and high reasoning use cases, hosted on TogetherAI.

$0.06 input / $0.22 output per M tokens; 128K context

OpenAI's open-weight 20B model for lower latency and local use cases, hosted on TogetherAI.

$0.66 input / $1.87 output per M tokens; 128K context

DeepSeek V3.1 hybrid model combining V3 and R1 capabilities with 128K context, hosted on TogetherAI.

$0.22 input / $0.66 output per M tokens; 128K context (33K max)

Qwen3 235B model with 22B active parameters optimized for throughput, hosted on TogetherAI.
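
As a quick illustration of how the per-million-token prices above translate into per-request cost, here is a minimal sketch. The token counts are hypothetical; the $0.17 / $0.66 figures are taken from the first card.

```python
# Illustrative cost arithmetic for per-million-token pricing.
# Prices come from the "Similar Models" cards; token counts are hypothetical.

def request_cost(input_tokens: int, output_tokens: int,
                 in_price_per_m: float, out_price_per_m: float) -> float:
    """Return the USD cost of one request given per-million-token prices."""
    return (input_tokens / 1_000_000) * in_price_per_m \
         + (output_tokens / 1_000_000) * out_price_per_m

# Example: a 2,000-token prompt with a 500-token completion on the
# $0.17 / $0.66 per M model costs roughly $0.00067.
print(f"${request_cost(2_000, 500, 0.17, 0.66):.5f}")
```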

Documentation

No documentation available
This model (TA/deepseek-ai/DeepSeek-R1-Distill-Llama-70B-free) uses a dedicated API. Please refer to the official documentation for usage examples.
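
If the dedicated API follows the common OpenAI-compatible chat-completions convention, a request might look like the sketch below. The base URL, the API-key environment variable name, and the availability of that convention are assumptions rather than documented behavior; only the model ID is taken from this page.

```python
# Minimal sketch of a chat completion request, assuming an OpenAI-compatible
# endpoint. Base URL and env var name are placeholders; substitute the values
# from the official documentation.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.example-gateway.com/v1",  # placeholder, see official docs
    api_key=os.environ["GATEWAY_API_KEY"],          # hypothetical env var name
)

response = client.chat.completions.create(
    model="TA/deepseek-ai/DeepSeek-R1-Distill-Llama-70B-free",
    messages=[{"role": "user", "content": "Explain what model distillation is."}],
    max_tokens=512,
)
print(response.choices[0].message.content)
```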