Pricing: $0.66/M input tokens, $1.87/M output tokens. Context: 128K max.
DeepSeek V3.1 hybrid model combining V3 and R1 capabilities with 128K context, hosted on TogetherAI.
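At the listed rates ($0.66 per million input tokens, $1.87 per million output tokens), per-request cost is a straightforward linear combination. A minimal sketch, with a hypothetical helper name and example token counts chosen for illustration:

```python
def request_cost(input_tokens: int, output_tokens: int,
                 in_price_per_m: float = 0.66,
                 out_price_per_m: float = 1.87) -> float:
    """USD cost of one request, given per-million-token prices.

    Default prices are the listed TogetherAI rates for DeepSeek V3.1;
    pass other values for other models.
    """
    return (input_tokens * in_price_per_m
            + output_tokens * out_price_per_m) / 1_000_000

# Example: a 10K-token prompt with a 2K-token completion
cost = request_cost(10_000, 2_000)
print(f"${cost:.5f}")  # → $0.01034
```

Output tokens cost roughly 2.8x input tokens here, so long completions dominate the bill even when prompts are large.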
Common Name: Llama 3.1 70B Instruct Turbo
Meta's Llama 3.1 70B optimized for fast inference on TogetherAI.
DeepSeek V3 MoE model with 671B total parameters and 37B active, hosted on TogetherAI.
Alibaba's Qwen2.5 7B model optimized for fast inference on TogetherAI.
Mistral AI's 7B instruction-tuned model v0.1, hosted on TogetherAI.