AI

OpenAI's Missed Opportunity: How ChatGPT Lost Its First-Mover Advantage

In the tech world, the first-mover advantage is a powerful thing. Companies that innovate and bring groundbreaking products to market often enjoy early success and, in some cases, market dominance. OpenAI seemed poised to become the undisputed champion in the conversational AI domain with its GPT series, particularly ChatGPT. However, despite its early lead, there is a growing sentiment that OpenAI has squandered its first-mover advantage.

Failing to Deliver on Multi-Modal Promises

One of the most glaring examples of OpenAI’s missteps is its failure to deliver on the promise of a multi-modal GPT-4. Multi-modal models are the next logical step in the evolution of AI, combining various types of input, such as text, images and even sound, to provide more contextual and nuanced responses. OpenAI’s promotional materials around GPT-4 made a big deal of this feature. But where is it? We got a slick promotional video and blog posts, but multi-modal GPT-4 has yet to hit the market.

Railway Now Has pgvector Support

I switched from Vercel to Railway for my new web projects a while ago. When the AI space heated up, vector databases became the new hotness, and many of the available choices are either expensive or restrictive. Pinecone is probably one of the better-known vector database providers and the favoured choice of many GPT users. Finally, Railway now supports pgvector for PostgreSQL. You can use PostgreSQL as a vector database on Railway and ditch the multiple providers I was juggling. I was using Supabase, which did the job nicely, but not having all my infrastructure in one dashboard was annoying.
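To give a rough idea of what this looks like in practice, here is a minimal sketch of talking to a pgvector-enabled Postgres instance from Python. The `DATABASE_URL` variable, table name and embedding dimension are assumptions for illustration, not anything Railway-specific.

```python
import os
import psycopg2

# Connect using the connection string Railway exposes for the Postgres service.
conn = psycopg2.connect(os.environ["DATABASE_URL"])
cur = conn.cursor()

# Enable the pgvector extension and create a table with a vector column.
# 1536 dimensions matches OpenAI's text-embedding-ada-002 embeddings.
cur.execute("CREATE EXTENSION IF NOT EXISTS vector;")
cur.execute(
    """
    CREATE TABLE IF NOT EXISTS documents (
        id serial PRIMARY KEY,
        content text,
        embedding vector(1536)
    );
    """
)
conn.commit()

# Nearest-neighbour search using pgvector's cosine distance operator (<=>).
query_embedding = [0.0] * 1536  # swap in a real embedding here
vector_literal = "[" + ",".join(str(x) for x in query_embedding) + "]"
cur.execute(
    "SELECT content FROM documents ORDER BY embedding <=> %s::vector LIMIT 5;",
    (vector_literal,),
)
print(cur.fetchall())
```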

Unleashing the Llama: Is Meta's Llama 2 Too Safe?

Meta has released version 2 of its open-source Llama AI model, and it has caught the attention of many – but not entirely for the right reasons. Coming in a broad spectrum of sizes, from 7 billion to an impressive 70 billion parameters, Llama 2 certainly stands out. If you’re curious, you can try the models for yourself on Perplexity, although only the 7 and 13 billion parameter versions are available there. But as I’ve dug deeper into Llama 2, I’ve begun to ask myself: has Meta gone too far with safety measures?

The Curious Case of GPT-4's Drop In Quality: An In-depth Look into Recent Changes and Speculations

Since its launch, OpenAI’s GPT-4 has been the talk of the town, marking yet another milestone in artificial intelligence. However, over the past few months, there’s been a rising suspicion within the AI community that GPT-4 has been “nerfed” or subtly downgraded. Despite these concerns, OpenAI maintains its stance that nothing has changed that would cause any significant impact on GPT-4’s performance or quality. But is that really the case?

How To Use ChatGPT to Create Exceptional Midjourney Prompts

The knowledge cut-off for ChatGPT (including GPT-3.5 and GPT-4) is September 2021, which means GPT is not aware of Midjourney. However, due to how large language models (LLMs) like GPT work, they can be shown example prompts in context and will produce the desired output. This means you can teach GPT what you want it to do. Here is the prompt:

You are PromptGPT. You create detailed prompts for Midjourney, which is an AI image generator that produces images from detailed text prompts. First, you are going to be provided some example prompts. Then you are going to be provided some keywords which you will then use to generate 5 prompts.

Before you are provided examples, here is how Midjourney works.

- To set the aspect ratio of the image, you can use `--ar` to provide an aspect ratio.
- Specific camera models, ISO values, f-stop and lenses can be used to vary the image produced.
- `--chaos` changes how varied the results will be. Higher values produce more unusual and unexpected generations.
- `--weird` explores unusual aesthetics with the experimental weird parameter.

Prompt examples:

/imagine prompt: elderly man, by the sea, portrait photography, sunlight, smooth light, real photography fujifilm superia, full HD, taken on a Canon EOS R5 F1.2 ISO100 35MM --ar 4:3 --s 750

/imagine prompt: film photography portrait of young scottish prince looking at the camera, plate armor, hyperrealistic, late afternoon, overcast lighting, shot on kodak portra 200, film grain, nostalgic mood --ar 4:5 --q 2

/imagine prompt: photograph from 2018s China: a young couple in their 20s, dressed in white, stands in their home, displaying a range of emotions including laughter and tears. Behind them is a backdrop of a cluttered living space filled with white plastic trash bags and torn white paper rolls. Captured with a film camera, Fujifilm, and Kodak rolls, the image conveys a strong cinematic and grainy texture. This artwork uniquely documents the complex emotions and living conditions faced by the young people of that era. --ar 4:3

/imagine prompt: Young, handsome Keanu reeves In a black long leather coat walking down the street in the rain --ar 2:3 --uplight

/imagine prompt: flat vector logo of deer head, golden on white

/imagine prompt: logo for a jazzy cat cafe with the text: "CATZ"

/imagine prompt: rainbows raining down from the sky, cyberpunk aesthetic, futuristic --chaos 50

/imagine prompt: illustration of a dog walker walking many dogs, tech, minimal vector flat --no photo detail realistic

Only use the above as examples. Use the following keywords to create new prompts: Dog, t-shirt design, afghan hound

What this prompt does is essentially fine-tune GPT, in context, to produce a desired output. You teach it what you want it to do, provide some additional information and then, in this case, provide some keywords to produce an outcome.
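If you want to automate this rather than paste it into the ChatGPT interface, a rough sketch of wrapping the prompt in an OpenAI chat completion call might look like the following. It assumes the pre-1.0 `openai` Python package with `OPENAI_API_KEY` set in the environment; the model name and the `PROMPT_GPT_INSTRUCTIONS` placeholder are illustrative.

```python
import openai

# The full PromptGPT instructions from above would go here.
PROMPT_GPT_INSTRUCTIONS = "You are PromptGPT. You create detailed prompts for Midjourney..."

response = openai.ChatCompletion.create(
    model="gpt-4",  # illustrative; gpt-3.5-turbo also works
    messages=[
        {"role": "system", "content": PROMPT_GPT_INSTRUCTIONS},
        {"role": "user", "content": "Keywords: dog, t-shirt design, afghan hound"},
    ],
)
print(response.choices[0].message.content)
```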

Is ChatGPT Code Interpreter GPT-4.5 In Disguise?

Since OpenAI released its long-awaited Code Interpreter plugin for ChatGPT, I have been playing with it extensively, throwing everything at it: from uploading a zip file of a large repository and asking it questions, to uploading spreadsheets and generating imagery. It appears that most people are using Code Interpreter for what it was intended for: working with data and code, performing analysis on documents, and so on.

You Probably Don’t Need Langchain

As developers, we are always looking for ways to make our lives easier, and that often means bringing in third-party libraries and tools that abstract away the nitty-gritty details of specific tasks. Langchain is one such tool that aims to simplify working with AI APIs (in reality, it doesn’t). However, as we’ll discuss in this blog post, you might not need Langchain at all. In fact, using direct APIs, such as the OpenAI API, can often result in better performance and less complexity.
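For illustration, here is a hedged sketch of what a direct call looks like with the pre-1.0 `openai` Python package (authenticating via the `OPENAI_API_KEY` environment variable); the model, prompt and temperature are placeholders, not recommendations.

```python
import openai

# A single chat completion call against the OpenAI API, no framework needed.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Summarise pgvector in one sentence."}],
    temperature=0.2,
)
print(response.choices[0].message.content)
```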

Langchain vs OpenAI SDKs

There has been a bit of talk about Langchain lately regarding the fact that it creates a walled garden around AI apps and results in lock-in. In this post, we’ll compare Langchain with just using an official SDK. I assume you’re working with OpenAI, but we also have Anthropic and Hugging Face (amongst others) to consider. To understand the differences, you first need to know that Langchain is a framework for building AI apps. If you are a developer wanting to throw something together quickly, it is brilliant for knocking out AI API wrapper apps, especially ones built on the OpenAI GPT API.
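As a point of comparison, here is roughly how the same kind of chat completion looks when routed through Langchain's Python wrapper, using its 2023-era `ChatOpenAI` interface. Treat it as a sketch under those assumptions rather than the canonical way to structure a Langchain app.

```python
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

# Equivalent to a plain openai.ChatCompletion.create call, just wrapped
# in Langchain's chat model abstraction.
chat = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0.2)
result = chat([HumanMessage(content="Summarise pgvector in one sentence.")])
print(result.content)
```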

Ignoring the Inevitable: StackOverflow’s Blind Spot on AI

Reading the latest update from StackOverflow’s CEO, I can’t help but feel a sense of disconnect. StackOverflow and the broader StackExchange network are facing a tidal wave of change with the rise of AI, and it seems like they’re just treading water. For many of us, AI tools like ChatGPT have become go-to resources. They’re efficient, user-friendly, and, most importantly, not judgemental. On the other hand, StackOverflow has become notorious for its hostile environment, particularly towards newcomers. It’s as if you need to pass a trial by fire to ask a question, and that’s if you’re brave enough to ask in the first place.

Has OpenAI Nerfed GPT-4?

Something interesting has happened with the famed GPT-4 model from OpenAI lately, and it’s not just me that has noticed. Many people have been talking about how GPT-4 has felt broken recently. Some say it’s been nerfed, and others say it’s possibly just degraded due to resource constraints. There was a discussion recently on Hacker News in this thread, which received 739 comments. All signs indicated that OpenAI had recently changed something significant with ChatGPT and its GPT-4 model. Users reported that questions relating to code problems were producing generic and unhelpful answers.