TA/mistralai/Mixtral-8x7B-Instruct-v0.1

Common Name: Mixtral 8x7B Instruct v0.1

TogetherAI
Released on Feb 17

Mistral AI's instruction-tuned Mixtral 8x7B MoE model, hosted on TogetherAI.

Specifications

Context: 128,000 tokens
Input: text
Output: text

Performance (7-day Average)

Tracked metrics: Uptime, TPS, RURT

Pricing

Input: $0.66 / M tokens
Output: $0.66 / M tokens
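
To make the per-million-token pricing concrete, here is a minimal sketch of the cost arithmetic in Python; the token counts are hypothetical examples, not figures taken from this page.

# Cost arithmetic for per-million-token pricing (token counts are hypothetical).
INPUT_PRICE_PER_M = 0.66   # USD per 1,000,000 input tokens
OUTPUT_PRICE_PER_M = 0.66  # USD per 1,000,000 output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of a single request at the listed rates."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Example: 10,000 prompt tokens and 2,000 completion tokens
# -> (10_000 + 2_000) / 1_000_000 * 0.66 ≈ $0.0079
print(f"${request_cost(10_000, 2_000):.4f}")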

Usage Statistics

No usage data available for this model during the selected period

Similar Models

DeepSeek V3.1 hybrid model combining V3 and R1 capabilities with 128K context, hosted on TogetherAI.
Pricing: $0.66 input / $1.87 output per M tokens; context: 128K

Meta's Llama 3.1 70B optimized for fast inference on TogetherAI.
Pricing: $0.97 input / $0.97 output per M tokens; context: 128K

Alibaba's Qwen2.5 7B model optimized for fast inference on TogetherAI.
Pricing: $1.32 input / $1.32 output per M tokens; context: 128K

Mistral AI's 7B instruction-tuned model v0.1, hosted on TogetherAI.
Pricing: $0.66 input / $0.66 output per M tokens; context: 128K

Documentation

No documentation is available on this page.
This model (TA/mistralai/Mixtral-8x7B-Instruct-v0.1) uses a dedicated API; please refer to the official documentation for usage examples.
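
Since no usage example is documented here, the following is a minimal sketch of calling the model through an OpenAI-compatible chat-completions client. The base URL, the GATEWAY_API_KEY environment variable, and whether the TA/ prefix is required by the gateway are assumptions, not details stated on this page; only the model ID itself comes from the listing above.

import os
from openai import OpenAI

# Hypothetical gateway settings: the base URL and API key variable below are
# placeholders, not values documented on this page.
client = OpenAI(
    base_url="https://api.your-gateway.example/v1",  # replace with the gateway's actual endpoint
    api_key=os.environ["GATEWAY_API_KEY"],
)

response = client.chat.completions.create(
    model="TA/mistralai/Mixtral-8x7B-Instruct-v0.1",  # model ID as listed above
    messages=[
        {"role": "user", "content": "Summarize what a mixture-of-experts model is in two sentences."}
    ],
    max_tokens=256,
)

print(response.choices[0].message.content)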