Aurelia 2, the latest incarnation of the Aurelia framework, is packed with improvements and new features. Among these, the revamped template compiler stands out for its potential to significantly boost your productivity. This article takes a deep dive into the template compiler, focusing on functionality that allows developers to intercept and modify the compilation process of an application’s templates.
Introduction to the Template Compiler

Aurelia’s template compiler operates behind the scenes, processing templates and providing hooks and APIs that allow developers to modify its default behaviour. One critical use case for the template compiler is the ability to preprocess a template before it’s compiled. This could be for reasons such as accessibility validation, the addition of debugging attributes, or even the injection of custom attributes or elements.
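To make that concrete, here is a minimal sketch of such a preprocessing hook, assuming Aurelia 2’s template compiler hooks API (a class decorated with `templateCompilerHooks` exposing a `compiling` callback). The hook class, the attribute it adds, and the exact import path (`aurelia` vs `@aurelia/runtime-html`) are illustrative assumptions, not a definitive recipe:

```typescript
import Aurelia, { templateCompilerHooks } from 'aurelia';
import { MyApp } from './my-app';

// Runs before each template is compiled. Here it stamps a debugging
// attribute on every element; the same spot could be used for
// accessibility validation or injecting custom attributes/elements.
@templateCompilerHooks
class DebugAttributeHooks {
  public compiling(template: HTMLElement): void {
    template.querySelectorAll('*').forEach((el, index) => {
      el.setAttribute('data-compile-index', String(index));
    });
  }
}

// Register the hooks globally so every compiled template passes through them.
Aurelia.register(DebugAttributeHooks).app(MyApp).start();
```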
In Aurelia 2, the @watch decorator allows you to react to data changes in your application, from simple properties to complex expressions. Think of it as computedFrom (if you’re coming from Aurelia 1) but on steroids.
Basics of @watch

The @watch decorator in Aurelia 2 lets you define a function that will execute whenever a specified expression changes. This expression can be a simple property on your view model, a custom attribute, or a more complex expression involving the view model's properties.
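As a rough sketch (the class and property names are mine, and the `watch` import is typically available from `@aurelia/runtime-html` or re-exported from the `aurelia` package), both forms look like this:

```typescript
import { watch } from '@aurelia/runtime-html';

export class PersonViewModel {
  public firstName = 'John';
  public lastName = 'Doe';

  // Simple property: runs whenever firstName changes.
  @watch('firstName')
  firstNameChanged(newValue: string, oldValue: string): void {
    console.log(`firstName: ${oldValue} -> ${newValue}`);
  }

  // Computed expression over the view model: runs when either part changes.
  @watch((vm: PersonViewModel) => `${vm.firstName} ${vm.lastName}`)
  fullNameChanged(newValue: string, oldValue: string): void {
    console.log(`full name: ${oldValue} -> ${newValue}`);
  }
}
```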
Aurelia 2 has some awesome templating features that make creating dynamic and replaceable components a breeze. One of these features is the au-slot tag. This magic tag, combined with expose.bind and repeaters, brings a new level of control and flexibility for developers to create customizable components.
In Aurelia 2 there are two types of slots: `<slot>` and `<au-slot>`. The `<slot>` element is only for Shadow DOM-enabled components, whereas `<au-slot>` is more akin to replaceable parts from Aurelia 1 (it is what they have become in v2).
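Here is a rough sketch of how the pieces fit together, using inline templates so it stays self-contained. The component and property names, and the exact shape of what expose.bind makes available to the projected content (accessed here through $host), are my assumptions based on the au-slot documentation rather than a definitive recipe:

```typescript
import { customElement } from 'aurelia';

// A list component that renders an <au-slot> per item and exposes
// each item to whatever content the consumer projects in.
// Assumption: expose.bind surfaces the bound object to the projection as $host.
@customElement({
  name: 'product-list',
  template: `
    <div repeat.for="product of products">
      <au-slot expose.bind="{ product }">
        <!-- Fallback content if the consumer projects nothing -->
        \${product.name}
      </au-slot>
    </div>
  `,
})
export class ProductList {
  public products = [
    { name: 'Keyboard', price: 89 },
    { name: 'Mouse', price: 49 },
  ];
}

// Consumer: projects its own markup and reads the exposed item via $host.
@customElement({
  name: 'my-app',
  template: `
    <product-list>
      <template au-slot>
        <strong>\${$host.product.name}</strong> costs \${$host.product.price}
      </template>
    </product-list>
  `,
  dependencies: [ProductList],
})
export class MyApp {}
```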
Meta has released version 2 of its open-source Llama AI model, and it has caught the attention of many – but not entirely for the right reasons. Coming in a broad spectrum of sizes, from 7 billion to an impressive 70 billion parameters, Llama 2 certainly stands out.
If you’re curious, you can experience the different models for yourself on Perplexity, although only the 7 and 13 billion parameter models are available there.
But as I’ve dug deeper into Llama 2, I’ve begun to ask myself: has Meta gone too far with safety measures?
Since its launch, OpenAI’s GPT-4 has been the talk of the town, marking yet another milestone in artificial intelligence. However, over the past few months, there’s been a rising suspicion within the AI community that GPT-4 has been “nerfed” or subtly downgraded. Despite these concerns, OpenAI maintains its stance that nothing has changed that would cause any significant impact on GPT-4’s performance or quality. But is that really the case?
In the cutthroat world of web development, trends come and go in the blink of an eye. Yet amidst this constant churn, there has been one relentless narrative: the supposed downfall of PHP and its offspring, WordPress. But here’s the twist: despite the years of criticism, proclamations of their death, and the rise of shinier, ‘cooler’ tools, PHP and WordPress are still standing. Not just standing but thriving.
Let’s face it. PHP has been the favourite whipping boy of developers for years. It’s been derided as messy, outdated, and everything in between. Yet, if PHP is as terrible as its critics claim, how has it survived and flourished in the competitive landscape of web development? The answer lies in its simplicity, flexibility, and resilience.
Have you ever found yourself startled by the uncanny resemblance between the smartphone in your hand and your mate’s, despite them coming from entirely different manufacturers? You are not alone. This unsettling sameness is a symptom of a broader ailment plaguing the tech industry: homogenisation.
Like a relentless tide, homogenisation has washed over the technology landscape, reducing the once vibrant panorama of innovation to a monotonous, grey sea. This is a trend where uniqueness is relinquished in favour of uniformity, where diversity is suppressed for the sake of standardisation. But at what cost?
The knowledge cut-off for ChatGPT (including GPT-3.5 and GPT-4) is September 2021, which means GPT is not aware of Midjourney. However, because of how large language models (LLMs) like GPT work, they can be guided with a few example prompts to produce the desired output.
This means you can teach GPT what you want it to do.
You are PromptGPT. You create detailed prompts for Midjourney, which is an AI image generator that produces images from detailed text prompts. First, you are going to be provided some example prompts. Then you are going to be provided some keywords which you will then use to generate 5 prompts. Before you are provided examples, here is how Midjourney works.

- To set the aspect ratio of the image you can use `--ar` to provide an aspect ratio.
- Specific camera models, ISO values, f-stop and lenses can be used to vary the image produced.
- `--chaos` Change how varied the results will be. Higher values produce more unusual and unexpected generations.
- `--weird` Explore unusual aesthetics with the experimental --weird parameter.

Prompt examples:

/imagine prompt: elderly man, by the sea, portrait photography, sunlight, smooth light, real photography fujifilm superia, full HD, taken on a Canon EOS R5 F1.2 ISO100 35MM --ar 4:3 --s 750

/imagine prompt: film photography portrait of young scottish prince looking at the camera, plate armor, hyperrealistic, late afternoon, overcast lighting, shot on kodak portra 200, film grain, nostalgic mood --ar 4:5 --q 2

/imagine prompt: photograph from 2018s China: a young couple in their 20s, dressed in white, stands in their home, displaying a range of emotions including laughter and tears. Behind them is a backdrop of a cluttered living space filled with white plastic trash bags and torn white paper rolls. Captured with a film camera, Fujifilm, and Kodak rolls, the image conveys a strong cinematic and grainy texture. This artwork uniquely documents the complex emotions and living conditions faced by the young people of that era. --ar 4:3

/imagine prompt: Young, handsome Keanu reeves In a black long leather coat walking down the street in the rain --ar 2:3 --uplight

/imagine prompt: flat vector logo of deer head, golden on white

/imagine prompt: logo for a jazzy cat cafe with the text: "CATZ"

/imagine prompt: rainbows raining down from the sky, cyberpunk aesthetic, futuristic --chaos 50

/imagine prompt: illustration of a dog walker walking many dogs, tech, minimal vector flat --no photo detail realistic

Only use the above as examples. Use the following keywords to create new prompts: Dog, t-shirt design, afghan hound

What this prompt does is essentially fine-tune GPT to produce a desired output. You teach it what you want it to do, provide some additional information and then, in this case, provide some keywords to produce an outcome.
Since OpenAI released its long-awaited Code Interpreter plugin for ChatGPT, I have been playing with it extensively, throwing everything at it: from a zip file of a large repository (and asking it questions about the code) to uploading spreadsheets and generating imagery.
It appears that most people are using Code Interpreter for what it was intended for: working with data and code, performing analysis on documents, and other awesome things.
As developers, we are always looking for ways to make our lives easier, and that often means bringing in third-party libraries and tools that abstract away the nitty-gritty details of specific tasks. Langchain is one such tool that aims to simplify working with AI APIs (in reality, it doesn’t). However, as we’ll discuss in this blog post, you might not need Langchain at all. In fact, using direct APIs, such as the OpenAI API, can often result in better performance and less complexity.
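To illustrate the “direct API” point, here is a minimal sketch of calling OpenAI’s chat completions endpoint with nothing but fetch, assuming a runtime with fetch available and an OPENAI_API_KEY environment variable set; the helper name and model choice are placeholders of mine:

```typescript
// Calls the OpenAI chat completions REST endpoint directly, no Langchain.
async function askOpenAI(prompt: string): Promise<string> {
  const response = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: 'gpt-3.5-turbo',
      messages: [{ role: 'user', content: prompt }],
      temperature: 0.7,
    }),
  });

  if (!response.ok) {
    throw new Error(`OpenAI request failed: ${response.status}`);
  }

  const data = await response.json();
  // The completion text lives on the first choice's message.
  return data.choices[0].message.content;
}

// Usage
askOpenAI('Summarise the benefits of calling the API directly.')
  .then(console.log)
  .catch(console.error);
```

No chains, no wrappers: one request, one response, and you can see exactly what is being sent over the wire.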