GLM-5 is Now Available on Ajelix AI Chat

Published: February 15, 2026

Z.ai’s frontier open-source model joins the Ajelix model lineup, bringing 744B-parameter reasoning, long-horizon agentic capabilities, and best-in-class coding performance to your workflows.

Starting today, Ajelix users can select GLM-5 as their model of choice inside Ajelix Chat. Developed by Z.ai, the team behind the acclaimed GLM model family, GLM-5 represents a significant leap in open-source model capability, purpose-built for the kind of complex, multi-step work that pushes lighter models to their limits.

“This addition gives Ajelix users access to one of the most capable open-weight models available anywhere,” says Arturs, CTO of Ajelix. “Whether you’re generating production-ready front-end assets, parsing dense documents, or orchestrating multi-tool research pipelines, GLM-5 brings the reasoning depth to handle it reliably.”

What Makes GLM-5 Different

GLM-5 is a Mixture-of-Experts (MoE) model, meaning only a subset of its total 744 billion parameters are activated for any given inference pass. This architecture keeps computational costs manageable while preserving the knowledge depth of a much larger dense model. The practical effect: you get frontier-level output without frontier-level latency on every query.
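The routing idea behind MoE can be illustrated with a minimal sketch in plain Python. The toy dimensions, gating scheme, and expert functions here are illustrative only, not GLM-5's actual architecture:

```python
import math
import random

def moe_forward(x, experts, gate_w, k=2):
    """Toy MoE layer: score every expert, run only the top-k,
    and combine their outputs with softmax weights."""
    # One gate score per expert: dot(x, that expert's gate column).
    scores = [sum(xi * wi for xi, wi in zip(x, col)) for col in gate_w]
    topk = sorted(range(len(experts)), key=lambda i: scores[i])[-k:]
    # Softmax over the selected experts only.
    exps = [math.exp(scores[i]) for i in topk]
    total = sum(exps)
    weights = [e / total for e in exps]
    # Only the k selected experts actually run; the rest stay idle
    # for this token -- that is the source of the compute savings.
    outs = [experts[i](x) for i in topk]
    return [sum(w * o[j] for w, o in zip(weights, outs)) for j in range(len(x))]

random.seed(0)
d, n_experts = 4, 8
# Toy experts: each just scales the input by a different factor.
experts = [(lambda x, s=i + 1: [s * v for v in x]) for i in range(n_experts)]
gate_w = [[random.gauss(0, 1) for _ in range(d)] for _ in range(n_experts)]
x = [1.0, 0.5, -0.5, 2.0]
y = moe_forward(x, experts, gate_w, k=2)
print(len(y))  # output has the same dimensionality as the input
```

With 744B total parameters but only a fraction active per token, the per-query cost tracks the active subset, which is why the article's "frontier-level output without frontier-level latency" framing holds.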

The model integrates DeepSeek Sparse Attention (DSA), a technique that substantially reduces deployment cost while preserving long-context capacity up to 205,000 tokens. For users working with long documents, large codebases, or extended conversation histories, this context ceiling matters considerably.
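In practice, the context ceiling shows up as a pre-flight check before sending a large document. A minimal sketch, assuming the rough ~4-characters-per-token heuristic (not GLM-5's actual tokenizer) and a reserve for the model's reply:

```python
def fits_in_context(text, context_tokens=205_000, reserve_for_output=8_000,
                    chars_per_token=4):
    """Rough pre-flight check: estimate token count from character
    length and compare against the advertised 205K-token window,
    leaving headroom for the generated response."""
    est_tokens = len(text) / chars_per_token
    return est_tokens <= context_tokens - reserve_for_output

doc = "x" * 600_000   # ~150K estimated tokens: fits
big = "x" * 1_000_000 # ~250K estimated tokens: needs chunking
print(fits_in_context(doc), fits_in_context(big))  # True False
```

Anything over the ceiling would need chunking or summarization before the request; within it, a long report or codebase can go in as a single prompt.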

On the training side, Z.ai developed Slime, a novel asynchronous reinforcement learning infrastructure that significantly improves training throughput. This allowed more fine-grained post-training iterations, which is a major reason GLM-5 punches well above its weight class on agentic and tool-use benchmarks. The model was pre-trained on 28.5 trillion tokens, scaling up from GLM-4.5’s 23T, and supports tool calling, extended thinking mode, and multilingual output in English and Chinese under an MIT license.

Benchmark Performance

GLM-5 achieves best-in-class performance among all open-source models on reasoning, coding, and agentic tasks. Selected results against the strongest alternatives:

| Benchmark | GLM-5 | GLM-4.7 | DeepSeek-V3.2 | Kimi K2.5 |
| --- | --- | --- | --- | --- |
| HLE (w/ Tools) | 50.4 | 42.8 | 40.8 | 51.8 |
| AIME 2026 I | 92.7 | 92.9 | 92.7 | 92.5 |
| GPQA-Diamond | 86.0 | 85.7 | 82.4 | 87.6 |
| SWE-bench Verified | 77.8 | 73.8 | 73.1 | 76.8 |
| SWE-bench Multilingual | 73.3 | 66.7 | 70.2 | 73.0 |
| BrowseComp (w/ Context Mgmt) | 75.9 | 67.5 | 67.6 | 74.9 |
| Terminal-Bench 2.0 | 56.2 | 41.0 | 39.3 | 50.8 |
| τ²-Bench | 89.7 | 87.4 | 85.3 | 80.2 |

The BrowseComp and Terminal-Bench numbers are especially relevant for Ajelix workflows. These benchmarks specifically measure how well a model navigates multi-step web research and performs real terminal operations autonomously. GLM-5’s significant margin over its predecessor GLM-4.7 on these tasks directly translates to better performance on the agentic and research-heavy features inside Ajelix.
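The relative margins can be recomputed directly from the table above; this is just an arithmetic check on the scores as published:

```python
# (GLM-5 score, GLM-4.7 score) pairs taken from the benchmark table.
glm5_vs_47 = {
    "Terminal-Bench 2.0": (56.2, 41.0),
    "BrowseComp (w/ Context Mgmt)": (75.9, 67.5),
}

# Relative improvement of GLM-5 over its predecessor, in percent.
gains = {bench: round((new - old) / old * 100)
         for bench, (new, old) in glm5_vs_47.items()}
print(gains)  # Terminal-Bench works out to a 37% relative gain
```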

When to Choose GLM-5 in Ajelix

Not every task needs the heaviest model in the lineup. But when your work hits any of the following scenarios, GLM-5 is the right choice.

  1. Superior Landing Pages & Visual Assets. GLM-5’s stronger code generation, evidenced by its 77.8% on SWE-bench Verified, carries directly into front-end output quality. When generating landing page markup, component layouts, or data visualizations, it produces more structurally sound and visually polished results. Complex CSS interactions, SVG graphics, and multi-section layouts come out cleaner with fewer corrections needed.
  2. Complex File Reasoning & Web Research. With a BrowseComp score of 75.9, GLM-5 maintains coherent reasoning across dense reports, multi-document analyses, and web research tasks that span dozens of sources. It doesn’t lose track of earlier context when conclusions depend on information scattered across a long session.
  3. Complex multi-step workflow execution. GLM-5 scored 89.7 on τ²-Bench, which evaluates realistic multi-turn task completion across domains. In Ajelix, this means the model can sustain a coherent plan across workflows with branching steps: pulling data, reformatting it, generating an output, reviewing it against a set of criteria, and iterating without losing sight of the original goal.
  4. Technical & engineering tasks. Terminal-Bench 2.0 places GLM-5 at 56.2 – a 37% improvement over GLM-4.7 and well ahead of DeepSeek-V3.2. For users working with APIs, scripts, data pipelines, formula logic, or system configurations, this translates to more accurate technical suggestions, better error diagnosis, and higher first-pass success rates on complex technical prompts.
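Under the hood, workflows like these are usually driven by tool-enabled chat requests. A sketch of how such a request might be assembled; the `glm-5` model id, the `web_search` tool, and the OpenAI-compatible payload shape are assumptions for illustration, not documented Ajelix internals:

```python
import json

def build_chat_request(user_prompt, tools, model="glm-5"):
    """Assemble a tool-enabled chat request in the widely used
    OpenAI-compatible chat-completions shape (assumed here)."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_prompt}],
        "tools": tools,           # the model decides when to call a tool
        "tool_choice": "auto",
    }

# Hypothetical tool schema: a web search function the model may invoke.
search_tool = {
    "type": "function",
    "function": {
        "name": "web_search",
        "description": "Search the web and return top results",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}

payload = build_chat_request("Summarize recent MoE research", [search_tool])
print(json.dumps(payload, indent=2))
```

The serving layer then loops: send the payload, execute any tool calls the model returns, append the results as messages, and repeat until the model produces a final answer.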

Why We Added GLM-5

“Our model selection process prioritizes capability breadth, reasoning reliability, and open availability,” explains Arturs. GLM-5 cleared all three bars. Released under an MIT license, it offers full commercial flexibility, which matters for businesses and professionals who rely on Ajelix for production work.

Architecturally, the combination of MoE efficiency and DSA-enabled long context gives GLM-5 an excellent performance-per-token profile. “It’s genuinely competitive with proprietary frontier models on the benchmarks that reflect real agentic work, not just isolated question-answering, which makes it a natural fit for what Ajelix Chat does,” he explains.

GLM-5 was released on Hugging Face on February 11, 2026 and is available via the NVIDIA NIM platform running on B200 hardware with an SGLang inference backend. We’ve integrated it promptly because users working on demanding tasks shouldn’t have to wait for access to the best available open tools.

What This Means for Ajelix Users

“Starting today, Ajelix users can select GLM-5 as their preferred model for tasks that demand the highest level of reasoning and execution capability,” adds Agnese, COO at Ajelix. Whether you’re building complex spreadsheets, analyzing business data, generating professional content, or orchestrating multi-step workflows, GLM-5 provides the intelligence backbone to get it done.

To access GLM-5, simply select it from the model dropdown in Ajelix Chat. The model is available now for Pro & Max plan users.

About Ajelix: Ajelix empowers professionals to work with agentic AI for spreadsheets, data analysis, content creation, app creation, and productivity enhancement. Sign up at chat.ajelix.com
