Z.ai’s frontier open-source model joins the Ajelix model lineup, bringing 744B-parameter reasoning, long-horizon agentic capabilities, and best-in-class coding performance to your workflows.
Starting today, Ajelix users can select GLM-5 as their model of choice inside Ajelix Chat. Developed by Z.ai, the team behind the acclaimed GLM model family, GLM-5 represents a significant leap in open-source model capability, purpose-built for the kind of complex, multi-step work that pushes lighter models to their limits.
“This addition gives Ajelix users access to one of the most capable open-weight models available anywhere,” says Arturs, CTO of Ajelix. “Whether you’re generating production-ready front-end assets, parsing dense documents, or orchestrating multi-tool research pipelines, GLM-5 brings the reasoning depth to handle it reliably.”
GLM-5 is a Mixture-of-Experts (MoE) model, meaning only a subset of its 744 billion total parameters is activated for any given inference pass. This architecture keeps computational costs manageable while preserving the knowledge depth of a much larger dense model. The practical effect: you get frontier-level output without frontier-level latency on every query.
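The routing idea behind MoE can be illustrated with a toy sketch. This is a minimal, hypothetical example of top-k expert routing in general, not GLM-5's actual router or expert count; all names and shapes here are illustrative assumptions:

```python
import numpy as np

def moe_forward(x, gate_w, experts, top_k=2):
    """Toy Mixture-of-Experts layer: route a token to its top-k experts.

    x       : (d,) token representation
    gate_w  : (d, n_experts) router weights (illustrative, not GLM-5's)
    experts : list of n_experts callables, each mapping (d,) -> (d,)

    Only top_k experts run per token, so compute scales with top_k
    rather than with the total number of experts.
    """
    logits = x @ gate_w                    # router score per expert
    top = np.argsort(logits)[-top_k:]      # indices of the top-k experts
    w = np.exp(logits[top] - logits[top].max())
    w /= w.sum()                           # softmax over selected experts only
    return sum(wi * experts[i](x) for wi, i in zip(w, top))

rng = np.random.default_rng(0)
d, n_experts = 8, 16
experts = [(lambda W: (lambda x: np.tanh(x @ W)))(rng.normal(size=(d, d)))
           for _ in range(n_experts)]
y = moe_forward(rng.normal(size=d), rng.normal(size=(d, n_experts)), experts)
print(y.shape)  # (8,)
```

Here 16 experts exist but only 2 run per token, which is the same cost-versus-capacity trade-off the paragraph above describes, just at toy scale.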
The model integrates DeepSeek Sparse Attention (DSA), a technique that substantially reduces deployment cost while preserving long-context capacity up to 205,000 tokens. For users working with long documents, large codebases, or extended conversation histories, this context ceiling matters considerably.
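The core intuition of sparse attention can also be sketched in a few lines. This is a simplified illustration of the general idea (each query attends to a small selected subset of keys instead of the full context), not the actual DeepSeek Sparse Attention algorithm; the top-k selection rule here is an assumption for illustration:

```python
import numpy as np

def sparse_attention(q, K, V, top_k=8):
    """Toy sparse attention: one query attends only to its top_k
    highest-scoring keys, not all n_ctx of them. Illustrative only;
    DSA's real selection mechanism differs."""
    scores = K @ q / np.sqrt(q.size)       # (n_ctx,) similarity scores
    keep = np.argsort(scores)[-top_k:]     # indices of the selected keys
    w = np.exp(scores[keep] - scores[keep].max())
    w /= w.sum()                           # softmax over kept keys only
    return w @ V[keep]                     # weighted sum of selected values

rng = np.random.default_rng(1)
n_ctx, d = 1024, 64                        # toy "long context"
out = sparse_attention(rng.normal(size=d),
                       rng.normal(size=(n_ctx, d)),
                       rng.normal(size=(n_ctx, d)))
print(out.shape)  # (64,)
```

The point of the sketch: attention cost per query drops from O(n_ctx) to O(top_k), which is why techniques in this family make very long context windows economically practical.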
On the training side, Z.ai developed Slime, a novel asynchronous reinforcement learning infrastructure that significantly improves training throughput. This allowed more fine-grained post-training iterations, which is a major reason GLM-5 punches well above its weight class on agentic and tool-use benchmarks. The model was pre-trained on 28.5 trillion tokens, scaling up from GLM-4.5’s 23T, and supports tool calling, extended thinking mode, and multilingual output in English and Chinese under an MIT license.
GLM-5 leads all open-source models on most reasoning, coding, and agentic benchmarks. Selected results against the strongest alternatives:
| Benchmark | GLM-5 | GLM-4.7 | DeepSeek-V3.2 | Kimi K2.5 |
|---|---|---|---|---|
| HLE (w/ Tools) | 50.4 | 42.8 | 40.8 | 51.8 |
| AIME 2026 I | 92.7 | 92.9 | 92.7 | 92.5 |
| GPQA-Diamond | 86.0 | 85.7 | 82.4 | 87.6 |
| SWE-bench Verified | 77.8 | 73.8 | 73.1 | 76.8 |
| SWE-bench Multilingual | 73.3 | 66.7 | 70.2 | 73.0 |
| BrowseComp (w/ Context Mgmt) | 75.9 | 67.5 | 67.6 | 74.9 |
| Terminal-Bench 2.0 | 56.2 | 41.0 | 39.3 | 50.8 |
| τ²-Bench | 89.7 | 87.4 | 85.3 | 80.2 |
The BrowseComp and Terminal-Bench numbers are especially relevant for Ajelix workflows. These benchmarks specifically measure how well a model navigates multi-step web research and performs real terminal operations autonomously. GLM-5’s significant margin over its predecessor GLM-4.7 on these tasks directly translates to better performance on the agentic and research-heavy features inside Ajelix.
Not every task needs the heaviest model in the lineup. But when your work demands deep reasoning, long context, or multi-step agentic execution, GLM-5 is the right choice.
“Our model selection process prioritizes capability breadth, reasoning reliability, and open availability,” explains Arturs. GLM-5 cleared all three bars. Released under an MIT license, it offers full commercial flexibility, which matters for businesses and professionals who rely on Ajelix for production work.
Architecturally, the combination of MoE efficiency and DSA-enabled long context gives GLM-5 an excellent performance-per-token profile. “It’s genuinely competitive with proprietary frontier models on the benchmarks that reflect real agentic work, not just isolated question-answering, which makes it a natural fit for what Ajelix Chat does,” he explains.
GLM-5 was released on Hugging Face on February 11, 2026 and is available via the NVIDIA NIM platform running on B200 hardware with an SGLang inference backend. We’ve integrated it promptly because users working on demanding tasks shouldn’t have to wait for access to the best available open tools.
“Starting today, Ajelix users can select GLM-5 as their preferred model for tasks that demand the highest level of reasoning and execution capability,” adds Agnese, COO at Ajelix. Whether you’re building complex spreadsheets, analyzing business data, generating professional content, or orchestrating multi-step workflows, GLM-5 provides the intelligence backbone to get it done.
To access GLM-5, simply select it from the model dropdown in Ajelix Chat. The model is available now for Pro & Max plan users.
About Ajelix: Ajelix empowers professionals to work with agentic AI for spreadsheets, data analysis, content creation, app creation, and productivity enhancement. Sign up at chat.ajelix.com