An open 30B MoE model from NVIDIA with 3B activated parameters that delivers strong reasoning and agentic capabilities.
33.8K Pulls 3 Tags Updated 1 week ago
Qwen 3.5 is a family of open-source multimodal models that delivers exceptional utility and performance.
3.8M Pulls 52 Tags Updated 3 hours ago
As the strongest model in the 30B class, GLM-4.7-Flash offers a new option for lightweight deployment that balances performance and efficiency.
925K Pulls 4 Tags Updated 2 months ago
The most powerful vision-language model in the Qwen model family to date.
2.6M Pulls 59 Tags Updated 5 months ago
NVIDIA Nemotron 3 Super is a 120B open MoE model activating just 12B parameters to deliver maximum compute efficiency and accuracy for complex multi-agent applications.
122.1K Pulls 7 Tags Updated 2 weeks ago
The first installment in the Qwen3-Next series, offering strong parameter efficiency and fast inference.
450.2K Pulls 10 Tags Updated 3 months ago
Nemotron-3-Nano sets a new standard for efficient, open, and intelligent agentic models, now updated with a 4B-parameter variant.
306.3K Pulls 9 Tags Updated 1 week ago
gpt-oss-safeguard-20b and gpt-oss-safeguard-120b are safety reasoning models built upon gpt-oss.
110.4K Pulls 3 Tags Updated 5 months ago
Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models.
25.2M Pulls 58 Tags Updated 5 months ago
OpenAI’s open-weight models designed for powerful reasoning, agentic tasks, and versatile developer use cases.
8.3M Pulls 5 Tags Updated 5 months ago
Magistral is a small, efficient reasoning model with 24B parameters.
1.3M Pulls 5 Tags Updated 9 months ago
DeepSeek-V3.1-Terminus is a hybrid model that supports both thinking mode and non-thinking mode.
515.2K Pulls 8 Tags Updated 6 months ago
DeepSeek-R1 is a family of open reasoning models with performance approaching that of leading models such as o3 and Gemini 2.5 Pro.
81.3M Pulls 35 Tags Updated 8 months ago