glm-4.6

Common Name: GLM-4.6

ChatGLM
Released: Oct 8 · Knowledge Cutoff: Apr 1 · Capabilities: Tool Invocation, Reasoning

Latest GLM model from Zhipu AI with improved reasoning and generation capabilities.

Specifications

Context: 204,800 tokens
Maximum Output: 204,800 tokens
Input: text
Output: text
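
To make the context figure concrete, here is a rough pre-flight check that estimates whether a prompt fits within the 204,800-token window before sending it. The characters-per-token heuristic and the function name are assumptions for illustration; GLM-4.6's own tokenizer would give the exact count.

```python
# Rough pre-flight check against the 204,800-token context window.
# The 4-characters-per-token heuristic is an assumption, not GLM-4.6's
# real tokenizer; use the official tokenizer for exact counts.

CONTEXT_WINDOW = 204_800
CHARS_PER_TOKEN = 4  # crude heuristic

def fits_in_context(prompt: str, reserved_output_tokens: int = 4_096) -> bool:
    """Return True if the prompt plus reserved output likely fits the window."""
    estimated_prompt_tokens = len(prompt) / CHARS_PER_TOKEN
    return estimated_prompt_tokens + reserved_output_tokens <= CONTEXT_WINDOW

print(fits_in_context("Summarize this document: ..."))
```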

Performance (7-day Average)

Charts: Uptime, TPS, RT (7-day averages).

Pricing

Tier        Input (¥/M tokens)   Output (¥/M tokens)   Cached Input (¥/M tokens)
< 32K       2.20                 8.80                  0.44
< 32K       3.30                 15.40                 0.66
32K–200K    4.40                 17.60                 0.88
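
As a rough guide to how these tiers translate into spend, here is a minimal cost-estimation sketch in Python. Because the table's first two rows carry the same "< 32K" label, the sketch keys tiers off prompt length only and uses the first and third rows; the function name, tier-selection rule, and the assumption that cached tokens are billed at the cached rate while the rest of the input is billed at the normal rate are all illustrative.

```python
# Illustrative cost estimator for the tiered pricing above.
# Prices are CNY per million tokens; selecting the tier by prompt length
# alone is an assumption about how the tiers are applied.

PRICING_TIERS = [
    # (max prompt tokens, input ¥/M, output ¥/M, cached input ¥/M)
    (32_000, 2.20, 8.80, 0.44),
    (200_000, 4.40, 17.60, 0.88),
]

def estimate_cost_cny(prompt_tokens: int, cached_tokens: int, output_tokens: int) -> float:
    """Estimate a single request's cost in CNY under the tiers above."""
    for max_prompt, in_price, out_price, cached_price in PRICING_TIERS:
        if prompt_tokens <= max_prompt:
            uncached = prompt_tokens - cached_tokens
            return (
                uncached * in_price
                + cached_tokens * cached_price
                + output_tokens * out_price
            ) / 1_000_000
    raise ValueError("prompt exceeds the 200K-token pricing range")

# Example: 40K-token prompt (10K served from cache), 2K-token completion.
print(f"¥{estimate_cost_cny(40_000, 10_000, 2_000):.4f}")
```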

Similar Models

Fast, cost-efficient version of GLM-4.5, optimized for high-throughput applications.
Pricing: Free (input/output) · Context: 131K · Max Output: 98K

Zhipu AI's GLM-4.5 AirX variant optimized for high-speed inference.
Pricing: ¥4.40 / ¥13.20 per M tokens (input/output) · Context: 128K

Zhipu AI's lightweight GLM-4.5 variant for cost-effective tasks.
Pricing: ¥0.88 / ¥2.20 per M tokens (input/output) · Context: 131K · Max Output: 98K

Zhipu AI's GLM-4.5 X variant with enhanced performance.
Pricing: ¥8.80 / ¥17.60 per M tokens (input/output) · Context: 128K

Documentation

No usage examples are provided on this page. This model (glm-4.6) uses a dedicated API; please refer to the official documentation for request formats and usage examples.
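
In the absence of on-page examples, the sketch below shows one common way to call such a model through an OpenAI-compatible client. The base URL, the environment variable name, and the request parameters are assumptions; the official GLM-4.6 documentation remains the authoritative reference for the endpoint and fields.

```python
# Minimal chat-completion sketch for glm-4.6 via an OpenAI-compatible client.
# The base_url and the GLM_API_KEY variable are assumptions; consult the
# official documentation for the actual endpoint and authentication scheme.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["GLM_API_KEY"],                  # hypothetical variable name
    base_url="https://open.bigmodel.cn/api/paas/v4/",   # assumed compatible endpoint
)

response = client.chat.completions.create(
    model="glm-4.6",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the GLM-4.6 pricing tiers."},
    ],
    max_tokens=1024,   # keep within the model's maximum output limit
    temperature=0.7,
)

print(response.choices[0].message.content)
```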