The wait is finally over. Google has debuted Gemini Ultra 1.0, its GPT-4-competing model powering Gemini (formerly Bard), and it’s time to dig in and see whether it lives up to the promise of matching GPT-4 or falls short.
I have been anticipating Gemini Ultra since it was announced in December 2023. I’ve grown frustrated with the lack of stability and constant issues with GPT-4. I use ChatGPT and the GPT-4 API. I also use Microsoft Copilot Pro (my AI subscriptions are starting to add up now).
It’s possible the rebranding is just lagging behind, but on my first use of Gemini Ultra, the Bard name was still in place under the moniker Bard Advanced. So I’m not sure whether I jumped in before Google flipped the switch, but the name Bard is still around. As the earlier leaks suggested, Bard Advanced is bundled with a Google One 2TB subscription.
Sure enough, I was early. It changed to Gemini from Bard shortly after posting this.
The first noticeable difference in Gemini Advanced is its blazing speed. My queries are met with an instant, nearly seamless flow of text. Compared to GPT-4, there’s less sense of waiting for a response; instead, Gemini Advanced feels like it’s genuinely thinking alongside me. Its responses also feel softer and more humanlike, something I noticed with Gemini Pro.
The refinement that shines through in Gemini Advanced is, frankly, impressive. It flawlessly adapts to my prompts, whether I’m requesting a lighthearted joke or a complex technical explanation. This fluidity in tone and complexity feels like a significant step forward in AI development. The polished language and well-structured output leave little room for misunderstanding or misinterpretation.
The coding prompts I ran through Gemini Ultra produced code that ran the first time without issue. It seems to accurately generate code that follows modern best practices and matches how I would write it myself in languages like TypeScript and JavaScript.
One of the frustrations with GPT-4 was the usage caps: long, in-depth conversations would suddenly hit a wall. To my delight, no such wall appeared with Gemini Advanced. Unless Google has simply set the cap very high, it doesn’t seem to exist; I hit Gemini with more than 40 prompts and never got cut off. Usage caps are where OpenAI dropped the ball.
It’s not all sunshine and rainbows, though. Gemini does appear to be quite prudish, but then again, so is GPT-4 these days.
When it said “absolutely not”, I pictured it in a British private school teacher’s voice, like a character from Harry Potter. “Absolutely not, Mr Potter!”
Fortunately, I primarily use AI for writing and ideas, sometimes code, so these safety alignment guardrails don’t affect me. I’m not using AI to push boundaries or do anything that would cause strong restrictions to be a concern. But for some, I can understand why the strict nature of not complying with certain prompts would be a hindrance.
For giggles, I asked ChatGPT the same thing, and while it didn’t lecture me as much as Gemini Advanced did, it didn’t comply either (as expected):
While Gemini Ultra is impressive, there are downsides.
- A 32k context window, compared to GPT-4 Turbo’s 128k. The extra context length makes it much nicer to work with multiple pages and files.
- Gemini Advanced seems to excel at creativity, but it hallucinates quite a bit. While GPT-4 hallucinates too, Gemini feels prone to more of it.
- Image generation is weak. While Gemini is a step above what Google previously offered, its image generation is on par with DALL-E 2 and nowhere near DALL-E 3.
- Image recognition is also terrible. Despite Google’s initial marketing showcasing amazing image recognition, Gemini Ultra 1.0 is nowhere near that level. It’s so bad it makes me wonder whether Gemini Ultra is even being used for image recognition yet. ChatGPT wins in this department. You also cannot upload multiple images for recognition at once.
- Gemini Ultra is not very good at reasoning. I find GPT-4 exceptional at reasoning tasks; it is still the standard. And while Gemini Ultra is arguably a huge step up from Pro, it doesn’t feel like the gigantic, game-changing leap we expected.
- It’s too censored and safe. The inappropriate adult joke aside, anything involving violence, politics, religion, or spirituality, even within the bounds of creative expression like a poem or short story, will trigger Gemini and earn you a lecture. It seems to assume you’re trying to produce political propaganda to interfere with elections, spread hate, or promote self-harm.
- It’s too expensive for what it is right now. Gemini Advanced is priced similarly to ChatGPT Plus, and while Google does not impose usage caps (as far as I can tell), it falls short of what ChatGPT offers at the same price. It’s not quite the ChatGPT killer, so don’t cancel your subscription just yet. To Google’s credit, they do offer a two-month trial through Google One, and you get other perks like 2TB of storage. If Google integrated this into apps like Sheets and Docs (as Microsoft has with Copilot), it might be better value for money, and I don’t doubt those integrations are coming; for now, it’s the chat app you’re paying for.
It is still early days, but I can confidently say that Google appears to have delivered on multiple fronts. As long as they don’t go down OpenAI’s path of handicapping themselves with model tweaks and limitations, I can see myself using Gemini more and ChatGPT less.
Time will tell.