Recently, OpenAI introduced the full version of its o1 model, an AI designed to take time to think before responding. Now, Google has stepped into the arena with its own reasoning model, Gemini 2.0 Flash Thinking. Despite being a newcomer, the model has already claimed the top spot in the LMSYS Chatbot Arena, outperforming all competitors. Here’s everything you need to know.
What is Gemini 2.0 Flash Thinking?
It is a reasoning model, meaning it is designed to approach problems methodically rather than provide instant answers like traditional AI models. It uses step-by-step reasoning to fact-check itself, which makes it potentially well suited to complex, challenging problems, particularly in programming, math, and physics. However, reasoning models are slower and can sometimes take minutes to generate results.
The model is built on the newly released Gemini 2.0 Flash, a compact version designed for speed. This reasoning capability will likely be integrated into Gemini 2.0 Pro eventually, which could mean even longer wait times for results.
Like the o1 model, Gemini 2.0 Flash Thinking is multimodal, meaning it accepts images, videos, and audio in addition to text inputs. However, both models can only output text as of now.
The main difference between the two models?
- Gemini shows you its thinking process, so you can see the step-by-step reasoning and how it arrived at its conclusion.
- OpenAI is not as transparent in its approach, so you don’t see the step-by-step reasoning happening on the screen.
As mentioned before, Gemini 2.0 Flash Thinking has already outranked other models on the LMSYS Chatbot Arena leaderboard, which is quite impressive for such a new model.
How to Try Google Gemini 2.0 Flash Thinking
Currently, Gemini 2.0 Flash Thinking is free and in an experimental testing phase. To try it out, visit AI Studio, Google’s AI prototyping platform, and select the Gemini 2.0 Flash Thinking Experimental model, labeled gemini-2.0-flash-thinking-exp-1219, from the sidebar. It supports inputs of up to 32,000 tokens (roughly 50 to 60 pages of text) and outputs up to 8,000 tokens per response.
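If you would rather experiment through the API than in the AI Studio interface, here is a minimal sketch using Google’s google-generativeai Python SDK. The API key is a placeholder you would generate in AI Studio, and the riddle prompt is just an illustrative example:

```python
# Minimal sketch: calling the Flash Thinking experimental model via the
# google-generativeai SDK (pip install google-generativeai).
# The API key below is a placeholder; error handling is omitted for brevity.
import google.generativeai as genai

genai.configure(api_key="YOUR_AI_STUDIO_API_KEY")  # placeholder key from AI Studio

# Select the experimental model by the identifier shown in the AI Studio sidebar.
model = genai.GenerativeModel("gemini-2.0-flash-thinking-exp-1219")

# An illustrative reasoning-style prompt; any text prompt works.
response = model.generate_content(
    "I have 3 apples. I eat one, buy a dozen more, then give away half of "
    "everything I hold. How many apples are left? Explain your reasoning."
)

print(response.text)  # final text answer (output is capped at 8,000 tokens)
```

Since the model accepts multimodal input, the same generate_content call can also take a list mixing text with media (for example, a PIL image object), though it’s worth verifying in AI Studio which input types are currently exposed for this experimental model.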
I tested 2.0 Flash Thinking with a couple of riddles, and it provided accurate answers within seconds. That isn’t a rigorous enough test to draw conclusions from, but my initial impression is that it’s on par with the OpenAI o1 model.
The launch of Gemini 2.0 Flash Thinking comes amidst a surge in reasoning model development. Companies like DeepSeek and Alibaba’s Qwen team have also entered the arena, releasing their own challengers to OpenAI’s o1.