DeepSeek has just dropped an upgraded version of its already impressive V3 model—and it’s got developers talking. This Chinese AI startup released the V3 and R1 models earlier this year, and they immediately grabbed attention by offering performance that rivals top-tier models from OpenAI and Google—completely open-source and free.

Now they're back with an updated version of the V3 model, DeepSeek-V3-0324, which is already generating buzz for writing hundreds of lines of code without breaking a sweat.
Let’s break it down.
What’s New in DeepSeek-V3-0324?
The big change here is power. The parameter count jumped from 671 billion to 685 billion, giving it more capacity while keeping the efficient Mixture-of-Experts (MoE) architecture. Only about 37 billion parameters activate per token, so it's smart with how it uses resources.
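To make the MoE idea concrete, here is a toy sketch of top-k expert routing. This is illustrative only, not DeepSeek's actual implementation: a small gating layer scores every expert for each token, and only the k highest-scoring experts run, so most of the model's parameters stay inactive on any given token.

```python
import math
import random

random.seed(0)
N_EXPERTS, TOP_K, DIM = 8, 2, 16  # toy sizes, not DeepSeek's

# Gating layer: one weight column per expert
gate_w = [[random.gauss(0, 1) for _ in range(N_EXPERTS)] for _ in range(DIM)]

def route(token, k=TOP_K):
    # One gating score per expert (dot product with its column)
    scores = [sum(t * w for t, w in zip(token, col)) for col in zip(*gate_w)]
    # Keep only the k best-scoring experts; the rest are skipped entirely
    top = sorted(range(N_EXPERTS), key=lambda i: scores[i])[-k:]
    # Softmax over just the chosen experts to get mixing weights
    exp_scores = [math.exp(scores[i]) for i in top]
    total = sum(exp_scores)
    return top, [s / total for s in exp_scores]

token = [random.gauss(0, 1) for _ in range(DIM)]
experts, weights = route(token)
# Only TOP_K of N_EXPERTS experts run for this token
```

Scaled up, this is why a 685B-parameter model can run with the compute footprint of a ~37B one.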
They also switched to the MIT license, which is developer-friendly and makes integration much easier.
Benchmarks also show strong gains:
- MMLU-Pro: 75.9 → 81.2 (+5.3)
- GPQA: 59.1 → 68.4 (+9.3)
- AIME: 39.6 → 59.4 (+19.8)
- LiveCodeBench: 39.2 → 49.2 (+10.0)

This isn’t just benchmark fluff, either. Here are the changes that you will notice when using the new model.
What You’ll Notice When Using It
- It’s much better at solving math problems. You’ll see a clear boost when you give it reasoning-heavy tasks, especially complex ones like AIME-style questions.
- It doesn’t choke on long code generations anymore. You can ask it to write full websites or applications, and it’ll handle 700+ lines of code in one go without crashing.
- The code it generates for websites now looks cleaner and more polished. If you’re into front-end work, the HTML and CSS it spits out will feel much closer to something you’d deploy.
- If you’re working with Chinese content, you’ll notice the writing feels more natural and better structured. Medium to long articles, especially, show better tone and flow.
- Conversations are smoother now. It remembers what you said earlier in the chat and responds with more relevant replies, even across multiple turns.
- Translation and search tasks are also sharper, especially when switching between Chinese and English. The answers feel more complete and less generic.
- It’s more accurate when generating code that involves function calls. So if you’re using it to write Python, JavaScript, or anything else that requires precise logic—it’ll mess up less often.
How Does It Perform?
People have tested it—and the results are impressive.
Petri Kuittinen, a Finnish lecturer, got it to generate a fully responsive landing page for an AI company—958 lines of working code. Jasper Zhang, a Math Olympiad gold medalist, gave it a 2025 AIME problem. It solved it flawlessly.

Apple’s Awni Hannun ran it on a 512GB M3 Ultra Mac: over 20 tokens per second, with peak memory usage of just 381GB, which is solid for a model this size.
We tested it too.
When we asked it to create a Python web app using Flask, including login functionality and hashed password storage, it generated the full code. To our surprise, it worked, too.
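For reference, here is a minimal sketch of the kind of app we prompted for. The route names and in-memory user store are our own illustration, not the model's exact output; password hashing uses Werkzeug, which ships with Flask.

```python
from flask import Flask, request, session
from werkzeug.security import check_password_hash, generate_password_hash

app = Flask(__name__)
app.secret_key = "change-me"  # required for session cookies

users = {}  # in-memory store: username -> password hash (demo only)

@app.route("/register", methods=["POST"])
def register():
    # Store only the salted hash, never the plain-text password
    users[request.form["username"]] = generate_password_hash(
        request.form["password"]
    )
    return "registered", 201

@app.route("/login", methods=["POST"])
def login():
    stored = users.get(request.form["username"])
    if stored and check_password_hash(stored, request.form["password"]):
        session["user"] = request.form["username"]
        return "ok", 200
    return "invalid credentials", 401
```

A real deployment would swap the dict for a database and load the secret key from the environment, but this is the shape of what the model produced in one pass.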

We tried the same on ChatGPT and Gemini. ChatGPT kept restarting the output. Gemini managed to finish it after a few tries, but the code was incomplete and didn’t work without serious fixing.
How to Access the Latest DeepSeek V3?
You can access V3 directly from the DeepSeek website and the mobile app. By default, both use the new DeepSeek-V3-0324 model, so you can hop on and try it right away.

Developers can integrate DeepSeek into their applications and websites through the API, which costs the same as before. The existing endpoint works unchanged: requests with model=deepseek-chat are now served by DeepSeek-V3-0324.
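As a sketch, the API follows the familiar OpenAI chat-completions convention; the base URL and auth header below are assumptions drawn from DeepSeek's docs, so verify them before use, and supply your own DEEPSEEK_API_KEY.

```python
import json
import os
import urllib.request

# Assumed OpenAI-compatible endpoint; check DeepSeek's API docs
API_URL = "https://api.deepseek.com/chat/completions"

def build_request(prompt: str) -> urllib.request.Request:
    payload = {
        "model": "deepseek-chat",  # now routed to DeepSeek-V3-0324
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ.get('DEEPSEEK_API_KEY', '')}",
        },
    )

# To actually call the API (needs a valid key and network access):
# with urllib.request.urlopen(build_request("Hello")) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Because the model name stays deepseek-chat, existing integrations pick up the new model without any code changes.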
To download and run the model locally, grab the weights from the Hugging Face platform.
What’s Next?
Rumors point to an upcoming R2 reasoning model—possibly even sooner than expected. And based on how good V3-0324 is, R2 could make an even bigger splash.
However, not everyone’s thrilled. With its rising influence, DeepSeek is under U.S. government scrutiny over national security and data privacy. There’s talk of banning its apps from official devices. Still, DeepSeek-V3-0324 is proving that open-source AI can be powerful, practical, and cost-effective. If you’re a coder, builder, or just curious about what’s next in AI, you should try it for yourself.