Are we approaching peak LLM and do we need to rethink how we use LLMs?

It seems that every week brings a new LLM to the market. At this point, most major vendors have one or more LLMs for enterprises to choose from. With the sheer number of LLMs available today, it can be overwhelming for enterprise developers to know which one to use. When. And why.

For all the good that LLMs bring to applications and outcomes, they are not without tradeoffs, and those tradeoffs are being discussed more and more. Have we reached an inflection point for LLMs? And do we need to rethink how we use them?

LLMs everywhere

Today, every major cloud provider and most enterprise software companies have one or more LLMs that users and their applications can choose from. At the same time, the number of parameters these LLMs leverage has grown into the billions. The general thinking is that bigger is better, but that is not always the case; smaller models can often be more effective in terms of both outcomes and cost.

A question that regularly comes up is: Which LLM should I use?

With the myriad of options to choose from, it can be confusing. Some LLMs are highly differentiated, while others are more general purpose and largely indistinguishable to the average enterprise developer.

This leads to a need for ‘model selection’. Because LLMs are so new and the choices so overwhelming, customers need guidance on which models to use and when, based on the work they are doing and the outcomes they are working toward. Today, there is little help to guide customers in optimizing their work for the best outcome. Solutions are in the works from some providers; however, they are limited to that provider’s own LLM offerings.

LLM Tradeoffs

While we live in a period of LLM excess today, it is not without downsides. LLMs require a phenomenal amount of processing power, energy, and money to build and train. These factors are driving supply shortages of the high-performance chips needed to run these models. And while the chips have gotten more power efficient with each new generation, they still require a tremendous amount of energy to operate. Companies such as Amazon and Google have built generations of increasingly specialized chips that significantly lower the price/performance and power/performance ratios.

Even so, power demand remains an issue; a recent article noted how workloads were moved to a different data center location because of power shortages for these high-power workloads.

And then there is the cost. Today, the full cost of building, training, modeling, and operating an LLM does not get pushed fully to the customer. There are points along the value chain that are still absorbing costs to build market share. While that is not uncommon with new technology, once the true costs are fully realized, customers may rethink their use of LLMs. One could hope that the price/performance of custom silicon comes down far enough before those costs are fully pushed to customers, negating a potential sticker shock.

Have we reached LLM peak? Or do we need to think differently?

All of this leads to the question: Have we reached peak LLM? In short, my take is that we still have a way to go before the market hits that saturation point. As we approach that inflection point, I suspect we will see fewer general purpose LLMs and more specialized LLMs. Think healthcare, patient diagnosis, research, etc. Many of these may center on a specific purpose while others may focus on specific industries.

At the same time, I also believe we need to rethink how we consume LLMs. Today, LLMs are often presented as a broad tool or building block. For enterprises, this is a tall order, as most simply do not have the capability to build out the applications and supporting frameworks needed to leverage LLM building blocks.

One of the bright lights is that enterprise applications like Salesforce, SAP, and many others are already embedding generative AI capabilities into their existing applications, negating the need to build bespoke LLM-based solutions. For enterprises, this means a shorter time to value and a much lower hurdle to leveraging the technology.