Pricing: $0.66 per 1M input tokens / $1.87 per 1M output tokens. Context: 128K max (availability and tps not listed).
DeepSeek V3.1 hybrid model combining V3 and R1 capabilities with 128K context, hosted on TogetherAI.
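As a quick sanity check on the listed rates, the sketch below estimates per-request cost from the $0.66/M input and $1.87/M output figures above. The pairing of these rates with this model is taken from the listing as shown; actual TogetherAI billing may differ.

    INPUT_RATE_PER_M = 0.66    # USD per 1M input tokens (rate listed above)
    OUTPUT_RATE_PER_M = 1.87   # USD per 1M output tokens (rate listed above)

    def request_cost(input_tokens: int, output_tokens: int) -> float:
        # Estimated USD cost of a single request at the listed rates.
        return (input_tokens / 1_000_000) * INPUT_RATE_PER_M \
             + (output_tokens / 1_000_000) * OUTPUT_RATE_PER_M

    # Example: a 10K-token prompt with a 2K-token completion.
    print(f"${request_cost(10_000, 2_000):.4f}")  # -> $0.0103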
Common Name: Mixtral 8x22B Instruct v0.1
Mistral AI's larger instruction-tuned Mixtral 8x22B MoE model, hosted on TogetherAI.
DeepSeek V3 MoE model with 671B total parameters and 37B active, hosted on TogetherAI.
DeepSeek R1 reasoning model distilled into the Llama 70B architecture, hosted on TogetherAI.
Meta's Llama 3.1 70B optimized for fast inference on TogetherAI.
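The models above are served through TogetherAI's OpenAI-compatible API. The sketch below shows one way to query such a model; the base URL, environment variable name, and model ID string are assumptions, so verify them against Together's model catalog and API documentation.

    import os
    from openai import OpenAI

    # Minimal sketch of a chat request to a TogetherAI-hosted model.
    # Assumptions: the OpenAI-compatible endpoint at https://api.together.xyz/v1,
    # an API key in TOGETHER_API_KEY, and the model ID string below.
    client = OpenAI(
        api_key=os.environ["TOGETHER_API_KEY"],
        base_url="https://api.together.xyz/v1",
    )

    response = client.chat.completions.create(
        model="mistralai/Mixtral-8x22B-Instruct-v0.1",  # assumed model ID; check the catalog
        messages=[{"role": "user", "content": "Summarize the Mixtral 8x22B architecture in one sentence."}],
        max_tokens=128,
    )
    print(response.choices[0].message.content)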