For quite a while, OpenAI’s GPT-4 model had a knowledge cutoff of September 2021. Recently, however, it appears that GPT-4 has been updated with a training-data cutoff of April 2023.
Knowing better than to take ChatGPT at its word, I tested this on the ChatGPT web app across different modes: first with the default GPT-4 model, then with the Advanced Data Analysis mode, and finally with GPT-3.5 as well.
On both web and mobile, GPT-4 reports April 2023. GPT-3.5 reports January 2022.
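If you want to reproduce this check yourself outside the web UI, the same question can be scripted against the API. Here is a minimal sketch that builds the request body for each model; the model names and the exact prompt wording are my assumptions, the post's own testing was done through ChatGPT web, and the API may well answer differently:

```python
import json

def cutoff_probe(model: str) -> dict:
    """Build a chat-completion request body asking the model for its
    training-data cutoff. POST this as JSON to
    https://api.openai.com/v1/chat/completions with an API key."""
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                # Hypothetical prompt -- not the exact wording used in the post.
                "content": "What is the cutoff date of your training data? "
                           "Reply with month and year only.",
            }
        ],
        "temperature": 0,  # keep answers as stable as possible for comparison
    }

# One probe per model the post compares.
for m in ("gpt-4", "gpt-3.5-turbo"):
    print(json.dumps(cutoff_probe(m), indent=2))
```

Running the same probe against each model and diffing the answers over time is a simple way to spot silent updates like this one.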
Since OpenAI released its long-awaited Code Interpreter plugin for ChatGPT, I have been playing with it extensively, throwing everything at it: zipping up a large repository and asking it questions, uploading spreadsheets, and generating imagery.
It appears that most people are using Code Interpreter for its intended purpose: working with data and code, performing analysis on documents, and other awesome things.
As developers, we are always looking for ways to make our lives easier, and that often means bringing in third-party libraries and tools that abstract away the nitty-gritty details of specific tasks. Langchain is one such tool that aims to simplify working with AI APIs (in reality, it doesn’t). However, as we’ll discuss in this blog post, you might not need Langchain at all. In fact, using direct APIs, such as the OpenAI API, can often result in better performance and less complexity.
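To make the point concrete, here is roughly what a direct call looks like with no wrapper library at all, just the standard library and the JSON shape OpenAI's chat-completions endpoint expects. This is a minimal sketch: the function name is mine, and real code would add error handling and read the API key from the environment:

```python
import json
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"

def build_request(prompt: str, api_key: str,
                  model: str = "gpt-3.5-turbo") -> urllib.request.Request:
    """Assemble the raw HTTP request for a chat completion.

    No framework, no chains, no abstraction layers -- just the JSON
    payload and headers the API actually expects.
    """
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

# urllib.request.urlopen(build_request("Hello!", api_key)) would send it.
```

Everything Langchain does for a simple completion reduces to this one request; when that is all you need, the extra layer mostly adds dependencies and indirection.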
Something interesting has happened with the famed GPT-4 model from OpenAI lately, and it’s not just me that has noticed. Many people have been saying that GPT-4 feels broken. Some say it’s been nerfed; others suspect it’s degraded due to resource constraints. There was a recent discussion on Hacker News in this thread, which received 739 comments.
All signs indicated that OpenAI had recently changed something significant with ChatGPT and its GPT-4 model. Users reported that questions about code problems were producing generic and unhelpful answers.
Picture this: It’s 3 AM, and you’re staring at your computer screen, bleary-eyed, as you struggle to solve that pesky bug in your code. You can almost feel the weight of the digital cobwebs piling up on StackOverflow as you sift through outdated answers and snarky comments. But what if I told you that the days of scouring through StackOverflow’s seemingly endless abyss might be numbered? Enter ChatGPT, Google Bard, and a whole new breed of AI-powered chatbots revolutionising how developers find answers to their coding conundrums.
In the past year, a surge of AI tools has hit the market, with many identifying as AI startups. The advent of OpenAI’s ChatGPT, including GPT-3.5 and GPT-4 models, has revolutionised how we interact with technology. However, amidst this excitement, a trend needs addressing: the phenomenon of “API wrappers” masquerading as AI startups.
While it’s true that many of these products utilise the power of OpenAI’s GPT APIs, it’s essential to take a step back and consider the implications of relying solely on an external API for your business. Does wrapping GPT APIs and selling a service based on them warrant the label of an AI startup? Let’s take a closer look at the potential downsides of this approach.
Once upon a time, a social media giant was led by a man who couldn’t resist chasing shiny objects. His name was Mark Zuckerberg, and in the land of Meta, he took it upon himself to singlehandedly bring forth the “future of the internet” with his grand, delusional vision of the Metaverse. But like Icarus, he flew too close to the sun, and now he’s frantically flapping his wings in pursuit of yet another glittering mirage — the artificial intelligence arms race.
In recent years, we’ve seen an unprecedented rise in the development and adoption of artificial intelligence (AI) tools, such as OpenAI’s ChatGPT. As AI becomes increasingly integrated into our daily lives, it’s essential to ask whether governments should step in and regulate these technologies.
While some argue that ChatGPT is a glorified sentence constructor and poses no real threat, others believe that regulation is necessary to prevent misuse and ensure ethical practices. In this article, we’ll explore both perspectives and attempt to determine whether AI regulation is needed.
I was really rooting for Microsoft with its ChatGPT integration into the Bing search engine. You might have seen the hype, including the hilarious controversy around Bing’s ChatGPT threatening journalists and being easily provoked.
After a few weeks of closed access and insurmountable hype, Microsoft has opened the floodgates to many more people, and Bing’s ChatGPT integration is a dismal disappointment.
Perhaps the passive-aggressive and threatening nature of Microsoft’s ChatGPT integration forced their hand. Still, after trying it for a while, it’s clear it’s no longer a rival to the original ChatGPT. Despite having access to up-to-date information, it has been dumbed down; it’s obvious Microsoft has cut both its legs off to get it under control.
After a lengthy free period marked by constant downtime and unreliability, ChatGPT has opened up its paid Plus plan to more people. Wanting to see what the difference was between Plus and free, I signed up.
For some, $20 might be too much to justify in the current economic climate. If ChatGPT is part of your daily workflow, as it is for some, then $20 might be well worth it. If you’re a hobbyist or just curious, it’s a cost you’ll have to weigh against your coffee budget.