Finally, the 12 Days of OpenAI campaign has come to an end, leaving us with a mix of major releases and smaller improvements. Let’s break down everything OpenAI announced during these 12 days, one step at a time.
Day 1: o1 Model and ChatGPT Pro
OpenAI kicked off its ’12 Days of OpenAI’ campaign by launching the full version of the o1 model. It’s a reasoning-focused model built to handle complex tasks like coding, data analysis, and advanced math. Think of it as an AI that takes time to think through problems before responding instead of instantly giving answers like traditional AI models. OpenAI claims it makes 34% fewer major mistakes than the preview version while also responding faster.
But that’s not all. OpenAI also introduced ChatGPT Pro, a $200/month subscription that offers unlimited access to all of OpenAI’s models. It also includes o1 pro mode, which reportedly slashes coding errors by 75%. This plan is aimed at professionals who rely heavily on AI.
Check out more details about o1 Model and ChatGPT Pro here.
Day 2: Reinforcement Fine-Tuning for o1-Mini
Day 2 shifted gears to enterprise users, introducing reinforcement fine-tuning for the o1-Mini model. Think of it as a way to train the AI to excel at specific tasks.
For example, legal teams can teach the AI to analyze contracts, or scientists can train it to study gene data. This makes the AI smarter in niche areas instead of relying only on general knowledge. OpenAI claims this approach could make the smaller o1-Mini model perform even better than the full o1 model for specialized use cases. The feature is still in early beta, but businesses can apply to join OpenAI’s research program to try it out.
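To make the idea concrete, here is a minimal, purely illustrative sketch of what reinforcement fine-tuning data and grading could look like: each training example pairs a prompt with a reference answer, and a grader scores the model’s attempts so correct reasoning gets reinforced. The field names and the grader below are hypothetical assumptions, not OpenAI’s actual schema.

```python
import json

# Hypothetical training data: a prompt plus a known-correct reference answer.
# (Field names are illustrative, not OpenAI's real format.)
examples = [
    {
        "prompt": "Which gene is most likely responsible for the symptoms described?",
        "reference_answer": "FOXP2",
    },
]

def exact_match_grader(candidate: str, reference: str) -> float:
    """Return a reward of 1.0 for a correct answer, 0.0 otherwise.

    During reinforcement fine-tuning, rewards like this steer the model
    toward the reasoning paths that produce correct answers."""
    return 1.0 if candidate.strip().lower() == reference.strip().lower() else 0.0

# Training files are typically uploaded as JSONL (one JSON object per line).
jsonl = "\n".join(json.dumps(example) for example in examples)

# Grading a sample model output against the reference:
reward = exact_match_grader("foxp2", examples[0]["reference_answer"])
```

The key design point is that the grader, not a fixed target string in the loss, defines what counts as success, which is why a small model can be pushed to outperform a larger general-purpose one on a narrow task.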
Read more details about Reinforcement Fine-Tuning for o1-Mini here.
Day 3: Sora—AI Video Generation Model
On Day 3, OpenAI unveiled Sora, its much-awaited text-to-video model. It allows users to create 1080p videos up to 20 seconds long from text prompts or images.
The standout feature is the Storyboard View, where you can add prompts at specific points on a timeline. Sora then generates videos that match the sequence. Features like Remix and Recut let you tweak elements without starting over. You can also loop or blend videos for smoother transitions. Sora is available now for ChatGPT Plus and Pro subscribers, with higher limits for Pro users.
You can check more details about Sora—AI Video Generation Model here.
Day 4: Canvas Gets Major Updates
Canvas, OpenAI’s tool for writing and coding, saw big changes. Instead of being a separate beta model, it’s now integrated directly into models like GPT-4o and even o1.
It also got a couple of new features. You can now run Python code directly in Canvas, and developers can fix errors with the click of a button. It even works with custom GPTs, allowing users to create personalized workflows. Perhaps the best part? Canvas is now free for all users, so you don’t need a subscription to access it.
Check more details about Day 4 Canvas Updates here.
Day 5: Apple Intelligence and ChatGPT Integration
OpenAI partnered with Apple to bring ChatGPT to Siri and Apple’s writing tools. This integration lets Siri handle more advanced tasks by connecting with ChatGPT.
For example, if you ask a complex question, Siri can now pass it to ChatGPT for a detailed answer. You can also summarize PDFs or rewrite texts using Apple Intelligence’s writing tool. Additionally, Visual Intelligence allows your iPhone camera to analyze objects in real time—like identifying plants or reading clothing labels. While these features were first announced at Apple’s event, OpenAI highlighted them again during the public release of iOS 18.2, which officially includes these updates.
Check out more details about Apple Intelligence and ChatGPT Integration here.
Day 6: Advanced Voice Mode with Video
ChatGPT’s Advanced Voice Mode is getting a video option that lets ChatGPT see and talk to you. You can show it objects and ask questions about them, like “How do I fix this bike?” ChatGPT analyzes what it sees through the camera and gives a tailored answer. Beyond the camera feed, you can also share your screen and talk to ChatGPT about what’s on it.
It even has short-term memory, so it can remember things it has seen and talk about them. For example, you can ask, “Can you see where my wrench is?” and ChatGPT can reply, “It’s on the table,” if it spotted it earlier. OpenAI also added a fun Santa mode for the holiday season, letting ChatGPT talk to you like Santa. Advanced Voice Mode with video is available for Plus and Pro users.
Check details about Advanced Voice Mode with Video here.
Day 7: Organize Chats with Projects
ChatGPT now lets you group chats into projects. Think of it like folders that let you organize your conversations into categories like Work, Personal, or tasks like Trip Planning and Coding.
But beyond tidiness, projects let you upload files that every chat inside the project can access. For example, in a home maintenance project you could upload your warranty cards, bills, and so on, and all the chats can draw on that info. Plus, you can set custom instructions per project: you could tell ChatGPT to always respond formally in a Work project but keep things casual in Personal chats. Projects are available now for Plus and Pro users, with free access rolling out next year.
Check more details about Projects in ChatGPT here.
Day 8: ChatGPT Search Becomes Free
ChatGPT Search is a feature that lets ChatGPT search the web to provide up-to-date answers. On Day 8, OpenAI made Search free, with no Plus or Pro subscription needed. Even free-tier users can pull real-time data from the web, whether it’s sports scores, stock prices, or breaking news.
The update also introduced rich results, which make ChatGPT feel more like a regular search engine. Now, when you search for a website, you will get the website link. Searching for places will bring lists that include photos, maps, phone numbers, opening times, etc., just like on Google Search. Finally, you can also speak to ChatGPT, and it will read out search results. It’s like having a voice assistant that can browse the web for you.
Check out about Day 8 ChatGPT Search updates here.
Day 9: Developer-Centric Tools
Day 9 focused entirely on developers. OpenAI released API access to the o1 model, which is now 60% more efficient than its preview version, saving time and costs. It also supports structured outputs, returning data like JSON that is easier to connect with other tools. The model can analyze images too, such as spotting errors in tax forms.
The Realtime API got WebRTC support for smoother voice and video interactions, great for voice assistants. Audio costs dropped 60%, and cached audio inputs are 87.5% cheaper, cutting costs for voice-based apps. Another big feature is Preference Fine-Tuning, which lets developers train models to match specific styles or tones. For example, a business could make its AI sound formal in customer service but casual in marketing.
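As a rough sketch of the structured-output support, here is what a request payload with an attached JSON schema could look like, modeled on OpenAI’s Structured Outputs feature. The model name, schema fields, and sample reply below are illustrative assumptions, so check the official API reference before relying on them.

```python
import json

# Hypothetical request payload asking o1 to report tax-form errors as JSON.
# With a strict JSON schema attached, the API constrains the model's reply
# to match the schema exactly.
request_payload = {
    "model": "o1",  # illustrative model name
    "messages": [
        {
            "role": "user",
            "content": "Check this tax form summary for errors: "
                       "Line 12 total: $4,200; sum of entries: $4,150.",
        }
    ],
    "response_format": {
        "type": "json_schema",
        "json_schema": {
            "name": "tax_form_check",
            "strict": True,
            "schema": {
                "type": "object",
                "properties": {
                    "errors_found": {"type": "boolean"},
                    "notes": {"type": "string"},
                },
                "required": ["errors_found", "notes"],
                "additionalProperties": False,
            },
        },
    },
}

# Because the schema is enforced, a reply parses directly into a dict
# with exactly the declared fields (sample reply shown for illustration):
sample_reply = '{"errors_found": true, "notes": "Line 12 total does not match."}'
parsed = json.loads(sample_reply)
```

The practical payoff is that downstream code can read `parsed["errors_found"]` without defensive parsing, since malformed or extra fields are ruled out by the schema rather than by prompt wording.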
Check out more details about APIs released on Day 9.
Day 10: Call or Message ChatGPT
OpenAI introduced two exciting ways to connect with ChatGPT on Day 10—via phone calls and WhatsApp chats. First, users can now call ChatGPT directly using a toll-free number (1-800-CHATGPT) without needing an internet connection. It’s as simple as saving the number to your contacts and dialing in. You get 15 minutes of free voice calls per month, and OpenAI plans to offer extended access soon.
For those who prefer text-based chats, OpenAI also added WhatsApp integration. Save the same number to start chatting instantly. For now, ChatGPT on WhatsApp supports only text: you can’t upload images or other files, and there are no voice calls like in the native app.
Check out more details about ChatGPT call and WhatsApp feature here.
Day 11: Work with Apps Expansion
ChatGPT now works with tools like Xcode, IntelliJ, PyCharm, Android Studio, Apple Notes, Notion, and Quip. It can read open files, analyze code, and suggest fixes, without you needing to copy the code into ChatGPT manually.
The feature also works with Search: you can highlight text in an app and ask ChatGPT to fact-check it using the Search feature announced earlier in the campaign. A key upgrade is voice integration, so you can talk to ChatGPT about code or documents in real time. Work with Apps is available on macOS and will expand to Windows soon.
Check out everything you need to know about the Work with Apps upgrades on Day 11 here.
Day 12: o3 and o3 Mini Models
OpenAI ended the campaign by introducing o3 and o3 Mini, two upgraded reasoning models. The o3 model shows big improvements over o1, boosting accuracy on the SWE-bench Verified coding benchmark from 48.9% to 71.7%. It also performed far better on benchmarks like ARC-AGI, which some see as a step toward AGI-level performance.
The o3 Mini is a lighter, cheaper version aimed at faster, resource-friendly tasks like scripting, data sorting, and quick calculations. Both models are still undergoing safety testing with researchers. OpenAI plans to launch them for developers and enterprise customers in the coming months.
You can read more about the announcement of the o3 models here.
This marks the end of the campaign. Of all the announcements, Sora might be the biggest release, especially since it’s available to anyone with a ChatGPT Plus subscription. Beyond that, the Search and Canvas updates, now open even to free users, will be useful for a lot of people. So, what’s your favorite update from the entire campaign?