In: $0.66/M tokens; Out: $1.87/M tokens
Context cap: 128K max
Availability: n/a; Throughput (tps): n/a
DeepSeek V3.1 hybrid model combining V3 and R1 capabilities with 128K context, hosted on TogetherAI.
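To make the per-million-token pricing above concrete, here is a minimal Python sketch that estimates the cost of one request at the listed rates; the token counts are made-up example values, not figures from the catalog.

```python
# Estimate request cost from the listed per-million-token rates
# ($0.66 input / $1.87 output per 1M tokens, as shown above).
INPUT_RATE_PER_M = 0.66   # USD per 1M input tokens
OUTPUT_RATE_PER_M = 1.87  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of a single request at the listed rates."""
    return (input_tokens * INPUT_RATE_PER_M
            + output_tokens * OUTPUT_RATE_PER_M) / 1_000_000

# Example: a 2,000-token prompt with an 800-token completion.
print(f"${request_cost(2_000, 800):.6f}")  # -> $0.002816
```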
Common Name: Mixtral 8x7B Instruct v0.1
Mistral AI's instruction-tuned Mixtral 8x7B mixture-of-experts (MoE) model, hosted on TogetherAI.
Meta's Llama 3.1 70B optimized for fast inference on TogetherAI.
Alibaba's Qwen2.5 7B model optimized for fast inference on TogetherAI.
Mistral AI's Mistral 7B Instruct v0.1 model, hosted on TogetherAI.
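Since all of the models listed here are served on TogetherAI, they can be reached through Together's OpenAI-compatible endpoint. A minimal sketch follows, assuming the `openai` Python SDK, the `https://api.together.xyz/v1` base URL, and a model ID of `deepseek-ai/DeepSeek-V3.1`; the exact model ID should be verified against Together's current catalog.

```python
# Minimal chat completion against TogetherAI's OpenAI-compatible API.
# Assumptions: TOGETHER_API_KEY is set in the environment, and the
# model ID below matches Together's current catalog entry for DeepSeek V3.1.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["TOGETHER_API_KEY"],
    base_url="https://api.together.xyz/v1",
)

response = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-V3.1",  # assumed ID; verify in the catalog
    messages=[{"role": "user", "content": "Summarize MoE routing in two sentences."}],
    max_tokens=200,
)
print(response.choices[0].message.content)
```

The same client works for any of the other models above by swapping in that model's catalog ID.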