Let’s talk about déjà vu. But not the cool, mysterious kind—more like the ‘Oh no, not this again’ kind. In the early hours of a Wednesday morning, specifically November 8, 2023, Optus gave us a not-so-gentle reminder that history loves to repeat itself. The entire Optus network crashed—mobile, internet, landline, you name it. If it had the Optus logo, it was about as helpful as a screen door on a submarine.
For quite a while, OpenAI’s GPT-4 model had a knowledge cutoff of September 2021. Recently, however, it appears that GPT-4 has been updated with a new knowledge cutoff of April 2023.
Knowing better than to take what ChatGPT says at face value, I tested this on the ChatGPT web app in different modes. I asked with the default GPT-4 model and the Advanced Data Analysis model, and then checked GPT-3.5 as well.
GPT-4 on both web and mobile says April 2023. For GPT-3.5, it says January 2022.
In April 2023, I wrote about how Neural DSP announced the Cortex Control desktop application, allowing you to control your Quad Cortex remotely using a PC or Mac. Well, Neural DSP has finally released it six months later, in beta form. Technically, it has been 18 months since its existence was first revealed in February 2022.
My first impression of Cortex Control is that it is pretty damn good for a beta release. I have encountered a few little UI quirks, and it has crashed on me once on Windows, but for the most part, it works surprisingly well.
In the tech world, the first-mover advantage is a powerful thing. Companies that innovate and bring groundbreaking products to market often enjoy early success and, in some cases, market dominance. OpenAI seemed poised to become the undisputed champion in the conversational AI domain with its GPT series, particularly ChatGPT. However, despite their early lead, there is a growing sentiment that OpenAI has squandered its first-mover advantage.
Failing to Deliver on Multi-Modal Promises
One of the most glaring examples of OpenAI’s missteps is the failure to deliver on its promises of a multi-modal GPT-4. Multi-modal models are the next logical step in the evolution of AI, combining various types of input like text, images, and even sound to provide more contextual and nuanced responses. OpenAI’s promotional materials around GPT-4 made a big deal of this feature. But where is it? We got a slick promotional video and blog posts, but multi-modal GPT-4 has yet to hit the market.
A while ago, I switched from Vercel to Railway for my new web projects.
When the AI space heated up, vector databases became the new hotness, and a lot of the available choices are quite expensive or restrictive. Pinecone is probably one of the better-known vector database providers and the favoured choice of many GPT users.
Finally, Railway now supports pgvector for PostgreSQL. You can use PostgreSQL as a vector database on Railway and ditch juggling multiple providers, as I was doing. I was using Supabase, which did the job nicely, but not having all my infrastructure in one dashboard was annoying.
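To make that concrete, here is a minimal sketch of using pgvector on a Railway Postgres instance from Node with node-postgres. The documents table, the tiny three-dimensional embeddings, and the DATABASE_URL environment variable are assumptions for illustration, not a copy of my actual setup.

```typescript
// A minimal sketch of pgvector on a Railway-hosted PostgreSQL database.
// Table name, embedding size, and DATABASE_URL are illustrative assumptions.
import { Client } from "pg";

async function main() {
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect();

  // Enable the pgvector extension and create a small table of embeddings.
  await client.query("CREATE EXTENSION IF NOT EXISTS vector");
  await client.query(
    "CREATE TABLE IF NOT EXISTS documents (id serial PRIMARY KEY, content text, embedding vector(3))"
  );

  // Insert an embedding (pgvector accepts the '[x, y, z]' string format).
  await client.query(
    "INSERT INTO documents (content, embedding) VALUES ($1, $2)",
    ["hello world", "[0.1, 0.2, 0.3]"]
  );

  // Nearest-neighbour search using the <-> (Euclidean distance) operator.
  const { rows } = await client.query(
    "SELECT content FROM documents ORDER BY embedding <-> $1 LIMIT 5",
    ["[0.1, 0.2, 0.25]"]
  );
  console.log(rows);

  await client.end();
}

main().catch(console.error);
```

The `<->` operator is pgvector’s Euclidean distance; swap in `<=>` for cosine distance if that suits your embeddings better.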
I recently saw my favourite band, Thrice, at their Brisbane show, and unlike previous Australian tours, they did a VIP thing where you could pay extra for a meet and greet with the band, a Q&A, and a couple of songs.
Naturally, one of the questions that came up at the Q&A (presumably at all of their VIP meet and greets) was the subject of Horizons/West, the long-awaited sister record to Horizons/East, released in 2021.
The prospect of an AI tool that could learn from my blog and writing style and then write like me was tantalising. So, when reword.com was aggressively advertised to me, my curiosity was piqued.
Sadly, reword is a typical GPT wrapper. It’s a great idea, but it’s not anything special. It’s akin to those GPT wrapper tools that let you ask questions about PDF files and other forms of data with a fancy UI.
The realm of web development is teeming with choices, each technology vying for developers’ attention. On one hand, powerful libraries like React have revolutionised how we build web applications. On the other, there are Web Components—although not as “foundational” as one might think, given that they’ve been universally supported by browsers only since 2020. Yet, they are increasingly important in the modern web ecosystem.
With its massive ecosystem and community, React often becomes the yardstick against which other technologies are measured. This is especially true for Web Components. However, this comparative framework is unfair and fosters misleading criticisms stemming from overreliance on libraries like React. This post aims to disentangle these misplaced critiques and highlight why Web Components deserve to be evaluated on their own merits.
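To put that in perspective, this is roughly all it takes to define a Web Component with no library involved; the element name and markup here are made up for illustration.

```typescript
// A minimal native custom element; no framework or build tooling required.
// The element name and markup are illustrative assumptions.
class GreetingCard extends HTMLElement {
  constructor() {
    super();
    // Shadow DOM keeps the element's markup and styles encapsulated.
    this.attachShadow({ mode: "open" });
  }

  connectedCallback(): void {
    const name = this.getAttribute("name") ?? "world";
    this.shadowRoot!.innerHTML = `<p>Hello, ${name}!</p>`;
  }
}

// Register it once, then use <greeting-card name="reader"></greeting-card> anywhere.
customElements.define("greeting-card", GreetingCard);
```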
The Aurelia 2 Task Queue is a robust scheduler designed to address challenges such as timing issues, memory leaks, and race conditions. Unlike its predecessor in Aurelia 1, the Task Queue offers advanced control over synchronous and asynchronous tasks.
Comparison with Aurelia 1
While the term “task queue” may be familiar to those who have worked with Aurelia 1, it’s essential to recognise the fundamental differences in Aurelia 2. The new Task Queue is designed to avoid common pitfalls and offers enhanced capabilities and flexibility. In Aurelia 1, the task queue was known, somewhat notoriously, as the thing you used to execute code after rendering changes.
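As a rough sketch of the kind of control this gives you, a queued task returns a Task object whose result can be awaited or cancelled. The component, delay value, and logging below are my own illustrative assumptions rather than something lifted from the docs, and I am assuming the resolve-based injection available in recent Aurelia 2 versions.

```typescript
// A rough sketch of queueing deferred work on the Aurelia 2 task queue.
// The component, delay value, and logging are illustrative assumptions.
import { IPlatform, resolve } from "aurelia";

export class StatusBanner {
  private readonly platform: IPlatform = resolve(IPlatform);

  showTemporarily(): void {
    // Queue a callback to run after roughly one second.
    const task = this.platform.taskQueue.queueTask(
      () => console.log("hiding the banner again"),
      { delay: 1000 }
    );

    // The task exposes a result promise, so it can be awaited (or cancelled)
    // instead of juggling setTimeout handles and stray promises.
    task.result.then(() => console.log("task finished"));
  }
}
```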
In Aurelia 2, lambda expressions bring a breath of fresh air to templating by enabling developers to write concise and expressive code directly within templates. A significant advantage of lambda expressions is that they allow developers to perform various operations without needing value converters, leading to cleaner and more maintainable code.
This article explores lambda expressions in Aurelia 2, emphasising filtering, sorting, and other operations that can be performed without value converters.
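To give a quick taste before diving in, here is a minimal sketch of a component that filters and sorts a list directly in its template; the component name and data are made up for illustration.

```typescript
// A minimal sketch of lambda expressions in an Aurelia 2 template.
// The component name and data are illustrative assumptions.
import { customElement } from "aurelia";

@customElement({
  name: "user-list",
  // Filter and sort inline with arrow functions; no value converter needed.
  // The \${} is escaped only because the template lives in a TS template literal.
  template: `
    <ul>
      <li repeat.for="user of users.filter(u => u.active).sort((a, b) => a.name.localeCompare(b.name))">
        \${user.name}
      </li>
    </ul>
  `,
})
export class UserList {
  users = [
    { name: "Charlie", active: true },
    { name: "Alice", active: true },
    { name: "Bob", active: false },
  ];
}
```

Because filter returns a new array, the inline sort never mutates the original users collection.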