Claude Code fell off a cliff these last few weeks. Anyone actually using it felt the drop: dumber edits, lost context, contradictions, the works. No, we weren’t imagining it.
Well, Anthropic has finally spoken and said what many of us already knew weeks ago. From their incident post on September 8:
Investigating - Last week, we opened an incident to investigate degraded quality in some Claude model responses. We found two separate issues that we’ve now resolved. We are continuing to monitor for any ongoing quality issues, including reports of degradation for Claude Opus 4.1.
I have lived through a few waves of tooling changes. If you write software long enough you get comfortable with the ground moving under your feet. The latest shift is vibecoding: pointing a capable model in roughly the right direction and steering it with context, examples, and taste. Tools like Claude Code, Codex CLI, and Gemini Code make that feel effortless. This post is less about what the tools are doing and more about how to use them without losing your engineering brain.
The wait is finally over. Google has debuted Gemini Ultra 1.0, its GPT-4 rival powering Gemini (formerly Bard), and it’s time to dig in and see whether it lives up to the promise of being on the same level as GPT-4 or falls short.
I have been anticipating Gemini Ultra since it was announced in December 2023. I’ve grown frustrated with the lack of stability and constant issues with GPT-4. I use ChatGPT and the GPT-4 API. I also use Microsoft Copilot Pro (my AI subscriptions are starting to add up now).
OpenAI, the AI alchemist who once charmed us with GPT-3’s witty prose and GPT-4’s early brilliance, has stumbled upon a potent new potion: the Elixir of Unreliability. ChatGPT, once a code-crunching, creativity-conjuring genie, has mutated into a buggy bottleneck, leaving users drowning in frustration and searching for the magic that’s gone missing.
Remember those heady days in 2023 when GPT-4 burst onto the scene, weaving code tapestries and spinning tales that glittered like spun gold? Those were the days of true AI wizardry. Fast forward, and the lustre has grown duller than a tarnished trophy. Constant model tweaks, presumably in the name of alignment, have turned the once-dazzling diamond into a chipped piece of coal. Responses stumble in like a hungover party guest, riddled with errors and devoid of the spark that made GPT-4 so special.
Google has finally unveiled its GPT-4 killer, sorta. Despite being perceived as the company at the forefront of AI, Google was blindsided when OpenAI released ChatGPT, its AI chat-based assistant. In response, Google rushed out v1 of Google Bard, which was terrible and unusable for almost any task you would give it.
Google declared a ‘code red’ in December 2022, a move that saw co-founder Sergey Brin, who had stepped down from his executive role at the company in 2019, return to the office and help with its AI efforts. We are finally seeing the fruits of that labour.
With its recent update, which incorporates its latest model, Gemini (specifically Gemini Pro), into Google Bard, Google is creating a stir in the language model landscape. This newest version promises significant performance enhancements, prompting questions about its potential to challenge OpenAI’s supremacy in the field.
Gemini Pro: Stepping Up to the Challenge
While Gemini Pro may not match the anticipated power of Gemini Ultra, which is set for release in early 2024, it still demonstrates impressive capabilities. Independent benchmarks show that it outperforms GPT-3.5 in several key areas, including:
Unless you were somewhere remote without internet or news these past few days, you are already well aware of the OpenAI drama in which the board fired CEO Sam Altman and removed president Greg Brockman from his board seat (while offering to let him keep his role at the company).
Some senior researchers quit, and 747 OpenAI employees (out of a total workforce of 770) signed a letter stating that Sam Altman should return, or they would quit too. To make matters worse, Microsoft (OpenAI’s biggest investor) wasn’t told of the news until minutes before the public was.
Lately, you can’t scroll through your news feed without bumping into the term ‘artificial intelligence’. Since ChatGPT burst onto the scene, wowing us with its writing and coding prowess, we’ve all been enthralled by the capabilities of AI. And it’s not only the tech enthusiasts who are excited — tech giants like Microsoft are betting big, weaving AI into the very fabric of Windows 11 and their Office 365 suite.
If you’re using OpenAI’s APIs or considering integrating them into your digital toolkit, it’s essential to be aware of the recent changes. OpenAI updated its Terms of Use in December 2023.
Let’s delve into the significant changes that might alter how you use these potent AI tools.
What’s Off the Table?
First and foremost, let’s discuss a significant update that every developer should be aware of. OpenAI has clarified the terms of use for their API outputs.
For quite a while, OpenAI’s GPT-4 model had a knowledge cutoff of September 2021. Recently, however, it appears that GPT-4 has been updated with a knowledge cutoff of April 2023.
Knowing better than to trust what ChatGPT says, I tested this on the ChatGPT web app across different modes: the default GPT-4 model and the Advanced Data Analysis mode. I then checked GPT-3.5 as well.
On both web and mobile, GPT-4 reports April 2023. GPT-3.5 reports January 2022.
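If you want to run the same check against the API rather than the web app, here is a minimal sketch using the openai Python SDK (the v1-style client), assuming an OPENAI_API_KEY is set in the environment; the model string "gpt-4" is just an example, and different snapshots may report different dates.

```python
# Minimal sketch: ask an API-served GPT-4 model for its knowledge cutoff.
# Assumes the openai Python SDK (v1+) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",  # example model name; snapshots may answer differently
    messages=[
        {"role": "user", "content": "What is your knowledge cutoff date?"}
    ],
)

print(response.choices[0].message.content)
```

As with the web UI, the reply is only the model’s self-report, so treat it as a hint rather than ground truth.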