DeepSeek Introduces V3.2 AI Model, Claimed To Outperform ChatGPT And Rival Gemini
The DeepSeek-V3.2-Speciale showcased gold-medal performance at the IMO, CMO, ICPC, and IOI 2025 olympiads.


Published: December 2, 2025 at 5:24 PM IST
Hyderabad: Chinese artificial intelligence (AI) company DeepSeek has launched the V3.2 AI model on its platform. The release includes the DeepSeek-V3.2 and DeepSeek-V3.2-Speciale models. The company claims that these models offer better performance than popular AI models such as OpenAI’s GPT-5 and Google’s Gemini 3.0 Pro.
DeepSeek-V3.2 is the standard model, designed to act as a daily AI assistant for general reasoning, while DeepSeek-V3.2-Speciale is a more advanced version of V3.2 built for complex reasoning.
The latest DeepSeek AI models focus on efficiency, safety, reasoning, and specialised capabilities rather than just being general-purpose chatbots.
Notably, the DeepSeek-V3.2-Speciale model achieved gold-medal performance at the International Mathematical Olympiad (IMO), the Chinese Mathematical Olympiad (CMO), the International Collegiate Programming Contest (ICPC) World Finals, and the International Olympiad in Informatics (IOI) in 2025.
DeepSeek V3.2 AI Model: Key Features
The DeepSeek-V3.2 standard variant uses DeepSeek Sparse Attention (DSA), which handles very long contexts with less computing power. It is designed to rival GPT-5 in reasoning while remaining lightweight, making it ideal for everyday use.
> 🚀 Launching DeepSeek-V3.2 & DeepSeek-V3.2-Speciale — Reasoning-first models built for agents!
> 🔹 DeepSeek-V3.2: Official successor to V3.2-Exp. Now live on App, Web & API.
> 🔹 DeepSeek-V3.2-Speciale: Pushing the boundaries of reasoning capabilities. API-only for now.
> 📄 Tech… pic.twitter.com/SC49UdmCZv
> — DeepSeek (@deepseek_ai) December 1, 2025
The standard AI model supports tool calling, meaning it can perform web searches, write code, analyse and handle data, work on maths or science problems, generate images or videos, and connect to personal apps (such as email or calendar), alongside its agent capabilities.
Because the DeepSeek-V3.2 model is trained with a Large-Scale Agentic Task Synthesis Pipeline, it can combine tools for complex workflows. The company has also introduced a “thinking with tools” capability, which shows the model’s reasoning steps during tool use.
The DeepSeek-V3.2 standard model lets users adjust certain settings on their PC or server that control how the AI generates text. These settings are called sampling parameters. DeepSeek recommends setting the temperature to 1.0 and top_p to 0.9 for optimal performance.
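To illustrate what these two parameters actually do, here is a generic sketch of temperature scaling and nucleus (top_p) sampling. This is not DeepSeek’s implementation; the function name and logit values are made up for illustration:

```python
import math
import random

def sample_token(logits, temperature=1.0, top_p=0.9, rng=random):
    """Pick a token index from raw logits using temperature + top_p sampling."""
    # Temperature scaling: values above 1.0 flatten the distribution
    # (more random output), values below 1.0 sharpen it.
    scaled = [l / temperature for l in logits]

    # Softmax to turn logits into probabilities.
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]

    # Nucleus (top_p) filtering: keep only the smallest set of most likely
    # tokens whose cumulative probability reaches top_p, then renormalise.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break

    # Sample from the surviving "nucleus" of tokens.
    kept_total = sum(probs[i] for i in kept)
    r = rng.random() * kept_total
    for i in kept:
        r -= probs[i]
        if r <= 0:
            return i
    return kept[-1]
```

The recommended combination (temperature 1.0, top_p 0.9) keeps the model’s original probability shape intact while cutting off the long tail of unlikely tokens, which is a common default for balancing diversity against coherence.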
Notably, the standard version of V3.2 is released as open source under the MIT License, which allows anyone to use, modify, and share the model and code freely.
The DeepSeek-V3.2-Speciale model is built exclusively for deep reasoning tasks such as maths and science problems, coding, and logical reasoning. According to the company, it surpasses GPT-5 and matches Gemini 3.0 Pro in reasoning benchmarks. It shares the same architecture as the V3.2 model but does not offer tool calling.
The DeepSeek-V3.2-Speciale is also open source under the MIT License and is accessible through an application programming interface (API), where it is designed to serve as a powerful reasoning engine for complex tasks. The model is aimed at developers, researchers, and companies.
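As a rough illustration of how a developer might call such a chat-completion API, the sketch below builds an OpenAI-style request body. The endpoint URL and model identifier are assumptions for illustration only; the real values should be taken from DeepSeek’s API documentation:

```python
import json

# Hypothetical endpoint and model name, for illustration only.
API_URL = "https://api.deepseek.com/chat/completions"

def build_request(prompt: str, model: str = "deepseek-v3.2-speciale") -> dict:
    """Assemble an OpenAI-style chat-completion request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

# Serialise the body; it would be POSTed to API_URL with an
# "Authorization: Bearer <api key>" header using any HTTP client.
body = json.dumps(build_request("Prove that the square root of 2 is irrational."))
```

This request shape follows the widely used OpenAI chat-completion convention; whether DeepSeek’s API matches it exactly, and what the Speciale model is named there, must be verified against the official docs.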
**DeepSeek-V3.2 vs DeepSeek-V3.2-Speciale**

| Feature | DeepSeek-V3.2 | DeepSeek-V3.2-Speciale |
| --- | --- | --- |
| Core mechanism | Uses DeepSeek Sparse Attention (DSA), which handles very long contexts with less compute | Same architecture as V3.2, scaled for maximum reasoning power |
| Performance goal | Rivals GPT-5 in reasoning while staying lightweight and efficient | Surpasses GPT-5 and matches Gemini 3.0 Pro in reasoning benchmarks |
| Best use case | Everyday use: efficient reasoning plus tool integration | Deep reasoning tasks only (maths, science, coding, logic) |
| Tool calling | Supported (web search, coding, data analysis, maths/science, image/video, personal apps) | Not supported |
| Agentic capabilities | Trained with the Large-Scale Agentic Task Synthesis Pipeline; can combine tools for workflows | Focused on reasoning only, not agentic workflows |
| “Thinking with tools” | Shows reasoning steps during tool use | Not supported |
| Sampling parameters | Adjustable by the user; recommended temperature 1.0 and top_p 0.9 | Not applicable (focused on reasoning) |
| License | Open source under the MIT License: free to use, modify, and share | Same MIT License, but intended for specialised reasoning use via the API |

