As developers, we are always looking for ways to make our lives easier, and that often means bringing in third-party libraries and tools that abstract away the nitty-gritty details of specific tasks. Langchain is one such tool that aims to simplify working with AI APIs (in reality, it doesn’t). However, as we’ll discuss in this blog post, you might not need Langchain at all. In fact, using direct APIs, such as the OpenAI API, can often result in better performance and less complexity.
The Case for Direct APIs
Langchain, like many other abstraction tools, aims to simplify the process of working with AI APIs. However, this simplification often comes at a cost. The abstraction layer can add unnecessary complexity, sometimes leading to performance issues. In the case of Langchain, the abstractions for things like embeddings and chat completions are quite unnecessary.
For instance, let's consider the OpenAI API. OpenAI provides a simple, straightforward interface for interacting with its powerful AI models. If you're working with Node.js, you can easily use the openai package to interact with the API directly.
Here’s an example of how you might use the OpenAI API to create a chat completion:
```js
const { Configuration, OpenAIApi } = require("openai");

// Configure the client with your API key from the environment.
const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
});
const openai = new OpenAIApi(configuration);

async function main() {
  const response = await openai.createChatCompletion({
    model: "gpt-4",
    messages: [{ role: "user", content: "Hello world" }],
  });

  if (!response.data || !response.data.choices) {
    console.error("API did not respond");
    return;
  }

  const responseContent = response.data.choices[0].message?.content;
  const messageId = response.data.id;
  console.log(messageId, responseContent);
}

main();
```
As you can see, the code is quite straightforward. We're simply creating a new instance of OpenAIApi, passing in our API key, and then using that instance to create a chat completion. The Langchain equivalent might be a few lines shorter, but it wouldn't look as different as you might think.
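For comparison, here's roughly what the same call looks like through Langchain's JS library. Treat this as a sketch: Langchain's import paths and message class names have shifted between versions, so the exact identifiers below may not match the release you have installed.

```js
// A rough Langchain (JS) equivalent of the direct call above.
// Import paths and class names vary across Langchain versions -- sketch only.
const { ChatOpenAI } = require("langchain/chat_models/openai");
const { HumanMessage } = require("langchain/schema");

const model = new ChatOpenAI({ modelName: "gpt-4" });

// Inside an async function:
const result = await model.call([new HumanMessage("Hello world")]);
console.log(result.content);
```

It's marginally shorter, but the shape of the code is essentially the same, and you've taken on an extra dependency to get it.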
The response from the OpenAI API is also easy to understand and work with. It includes a choices array containing the AI's responses, and each choice carries a finish_reason, which indicates why the AI stopped generating.
The possible values for finish_reason are:

- stop: the API returned a complete message, or a message terminated by one of the stop sequences provided via the stop parameter.
- length: incomplete model output due to the max_tokens parameter or the model's token limit.
- function_call: the model decided to call a function.
- content_filter: content was omitted due to a flag from OpenAI's content filters.
- null: the API response is still in progress or incomplete.
By working with the OpenAI API directly, we bypass the abstraction layer that Langchain provides. That can mean more performant code, since there's no extra overhead from the abstraction. It also means we have more control over how we interact with the API, which often leads to simpler, more maintainable code.
And similarly, if you're generating embeddings, you'll find that with OpenAI it's just a single, simple endpoint, and storing the resulting vectors in your preferred vector database is equally straightforward.
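To make that concrete, here's a sketch using the same openai client as above. The model name (text-embedding-ada-002, OpenAI's standard embedding model at the time of writing) is an assumption; swap in whichever embedding model you use. The cosine similarity helper is just to show that the comparison a vector database performs for you is not mysterious:

```js
// Sketch: generate an embedding with the same openai client as above.
async function embed(text) {
  const response = await openai.createEmbedding({
    model: "text-embedding-ada-002", // assumed model; use whichever you prefer
    input: text,
  });
  return response.data.data[0].embedding; // an array of floats
}

// Cosine similarity between two embedding vectors -- the same comparison
// a vector database runs for you at scale.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```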
Conclusion
While Langchain and similar tools can be useful in certain situations (especially for creating MVPs), they aren’t always necessary. In many cases, working with the API directly can lead to better performance and less complexity. So, before you reach for that third-party library, take a moment to consider whether you really need it. You might find that the best solution is the simplest one.