Tactics Without Strategy…
Tactics without strategy is the noise before defeat. – Sun Tzu
I've been keeping a watchful eye on the "AI" (LLM) ecosystem over the past year and doing my own research, and my observations have been weighing on me enough to finally write something about my concerns.
Those of us who are in positions to influence adoption, big-picture ecosystems, architectures, and platforms have been badgered constantly over the past few years about things like "AI Strategy," being "AI First," and so on. The immense pressure from both non-technical and technical leadership has made it a ubiquitous talking point for practically every organization that touches technology. That is to say, this is a very pervasive situation we're dealing with!
It should also come as no surprise that this immense pressure has the side effect of pushing things through without a lot of strategic thinking, you know, the whole "agility" rationale for fast-tracking agendas. "AI stacks" are being adopted quickly to keep up with the Joneses, accompanied by a wide spectrum of platitudes about security, longevity, and stability.
Security and finance have been at the forefront of most of the debate surrounding the proliferation of LLMs and their integrations: the promised financial gains (doing more with less, or even cutting FTEs) outweigh the financial burden, and a swath of the security conversation is met with head nods and promises to make "it" (the LLM integrations) secure (because, you know, the needs of the business never usurp security!).
“This is More Than Just an Ecosystem, It’s a Vibe Ecosystem”
After spending some time evaluating the landscape, I'd like to share my observations and the risks I see, which in aggregate I'm calling the eventual "omnichannel rug pull." The "omnichannel rug pull" will be the consolidation and evaporation of key components everyone is seemingly relying on for their "AI strategy" (LLM integration).
Concern 1: MCP Ecosystem
Have you looked at the various MCP servers from the “Awesome MCP Servers” list? There are some obvious indicators that should concern us. Take a moment and click through a handful of them if you’re not familiar with them.
A large number of these are low-quality, template-generated solutions
These are low-effort machinations of people using AI to build AI tooling, with practically no commitment to maintenance, scaling, or security. Some are hello-world equivalents; others raise the question: yes, but should this even be used this way?
Commercialization
Trendy companies have realized that wrapping their existing API in an MCP server is easy advertising. The overhead of maintaining a wrapper over their own API is definitely not going to backfire, and it most assuredly won't be discontinued if there are no monetization gains from maintaining it.
Concern
It’s my belief that we’ll see a rapid contraction of MCP resources and projects. I wouldn’t want to make almost any of those a critical part of an “Agentic AI workflow” without hedging somehow.
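As a concrete illustration of what "hedging" could look like: keep any third-party MCP server behind a narrow interface that your own agent code owns, so the server can be swapped for a direct API call (or a different server) if it's abandoned. The sketch below is a hypothetical Python example; the capability, tool name, and client objects are assumptions for illustration, not any specific SDK.

```python
# A minimal sketch of hedging a single MCP-provided capability behind an
# internal interface. The "issue search" capability and both backends are
# hypothetical; the point is that agent code depends only on the Protocol,
# so an abandoned MCP server can be replaced without touching the workflow.
from typing import Protocol


class IssueSearch(Protocol):
    def search(self, query: str) -> list[dict]: ...


class McpIssueSearch:
    """Backed by a third-party MCP server (client object assumed)."""

    def __init__(self, mcp_client) -> None:
        self._client = mcp_client  # however you construct your MCP session

    def search(self, query: str) -> list[dict]:
        # Hypothetical tool name and payload -- depends entirely on the server.
        return self._client.call_tool("search_issues", {"query": query})


class DirectApiIssueSearch:
    """Fallback that talks to the vendor's plain HTTP API instead."""

    def __init__(self, http_get) -> None:
        self._get = http_get

    def search(self, query: str) -> list[dict]:
        return self._get("/search/issues", params={"q": query})


def triage_agent_step(issues: IssueSearch, query: str) -> list[dict]:
    # Agent logic sees only the interface, never the MCP server.
    return issues.search(query)
```

The specific pattern matters less than the outcome: the MCP server stays replaceable instead of being welded into the workflow.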
Concern 2: LLM Model Ecosystem
Right now we're still in an LLM model arms race. There are multiple commercial entities vying for overall supremacy (Anthropic, OpenAI, Google, Microsoft), but also many others with commercial ties, e.g. Qwen, Mistral, Granite, etc.
There are many, many derivatives and variants, all trying to leave one another in the dust on the various benchmarks for their respective applications.
My concern comes from the fact that at some point, the commercial interests training these models will see diminishing returns on their training investments, cease "refreshing" their models, and inevitably leave them to collect dust. I believe this is partly illustrated by the already tricky question of "which model do we choose?" when trying to build an AI (LLM) agent or product.
Concern
Once you ship with a model, any changes to its idiosyncrasies when upgrading to a successor model, in conjunction with the generally non-deterministic nature of LLMs, can be a real business risk.
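One way to hedge this, sketched below, is to pin the exact model snapshot in configuration and gate any move to a successor model behind a small golden-case regression suite. Everything here (the model identifier, the `complete()` callable, and the checks) is an illustrative assumption rather than any particular vendor's API.

```python
# A minimal sketch of one hedge against model idiosyncrasy drift: pin the
# exact model identifier rather than "latest", and require a candidate
# successor model to pass a golden-case suite before it is adopted.
from dataclasses import dataclass
from typing import Callable

PINNED_MODEL = "some-vendor/model-x-2024-06-01"  # exact snapshot, not "latest"


@dataclass
class GoldenCase:
    prompt: str
    must_contain: str  # crude behavioral check; real suites use richer asserts


GOLDEN_CASES = [
    GoldenCase("Classify: 'refund not received'", "billing"),
    GoldenCase("Classify: 'app crashes on login'", "bug"),
]


def upgrade_is_safe(complete: Callable[[str, str], str], candidate_model: str,
                    min_pass_rate: float = 0.95) -> bool:
    """Run the golden cases against a candidate model before switching."""
    passed = sum(
        case.must_contain in complete(candidate_model, case.prompt)
        for case in GOLDEN_CASES
    )
    return passed / len(GOLDEN_CASES) >= min_pass_rate
```

This doesn't remove the non-determinism, but it turns a silent behavioral change into a failed check you see before customers do.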
Concern 3: LLM-as-a-Service
The vast majority of AI (LLM) adventures will end up at XYZ Cloud Vendor's hosted LLM-as-a-Service option, as those vendors have the advantage of economies of scale. They take on the CapEx risk, and you pay for the use.
My concern is that the long-term viability of the economics of such services is hotly contested. We're still in the market-share-dominance phase, with cloud vendors offering competitive rates for AI (LLM) platforms.
We do know that right now:
- There are not enough data centers
- Not enough power for those data centers
- Not enough GPUs for those data centers
- Not enough memory and storage for those data centers
The rules of supply and demand dictate that as a result, costs on all of these elements will increase until supply improves or demand decreases.
In the case of falling demand, it's bad news once demand drops below a certain level, because any hedging and economic assumptions made for providing the underlying LLM-as-a-Service fall into question. The result will be an increase in consumer prices for those services to recoup committed CapEx, not to mention the likely correlated macro shift in attitudes toward this AI (LLM) cycle.
On the flip side, if supply starts to exceed demand, it's bad news above a certain level, as the economic assumptions behind the current and projected market ROI for committed CapEx turn into a bit of a train wreck. Competition will force LLM providers to compete on price, cut their losses and exit the game, or recoup the sunk CapEx in some other way.
Concern
There’s risk in the LLM-as-a-Service market, and it’s not clear that the economics will be reliable in the long term. Since it’s not likely to be viable to replace an LLM-as-a-Service with your own infrastructure, you will be coupled to the unpredictable behavior of cloud vendors consolidating, eliminating, and optimizing in their own interests.
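If you do end up coupled to hosted LLMs, one modest hedge is to route completion calls through a fallback chain of interchangeable backends so that no single vendor's pricing or product decisions are fatal to your workflow. The sketch below is deliberately generic: the backends are plain callables standing in for real SDK calls, and every name is an assumption.

```python
# A minimal sketch of avoiding hard coupling to a single LLM-as-a-Service:
# business code gets one `complete` function; behind it sits an ordered
# fallback chain (hosted vendor A, hosted vendor B, self-hosted model).
from typing import Callable

Completer = Callable[[str], str]


def with_fallback(backends: list[Completer]) -> Completer:
    """Try each backend in order; a vendor outage or exit isn't fatal."""
    def complete(prompt: str) -> str:
        last_error: Exception | None = None
        for backend in backends:
            try:
                return backend(prompt)
            except Exception as exc:  # kept simple for the sketch
                last_error = exc
        raise RuntimeError("all LLM backends failed") from last_error
    return complete


# Usage (backends are hypothetical callables you wire up elsewhere):
# complete = with_fallback([vendor_a_complete, vendor_b_complete, local_complete])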
Concern 4: Everyone is Selling Shovels
An interesting side effect of "vibe coding" is the sudden influx of AI-oriented SaaS offerings that either help you stitch together other dependencies or are meant to be used outright as a key dependency in your own builds. That is to say: they are selling shovels.
This situation echoes the dot-com era, where the accelerated gold rush was responsible for a sudden expansion, and later collapse, of businesses that were not viable to begin with.
While cloud vendors have a larger moat that lets them experiment, pivot, and even purchase "winners" in this space of AI-building tools, I suspect there will be a fairly quick contraction for many of these offerings. They're largely after the same money, are bound to overlap heavily with one another (and thus lack any special advantage), and face an intense playing field of vibe-coded newcomers.
In short: this is a speedrun for a large batch of companies that, just like any startup, are subject to the same rules of failure, but in a hyper-competitive space.
Concern
Depending on unproven "shovel provider" SaaS tools or services introduces more risk than the short-term competitive advantage they provide. SaaS offerings in the AI (LLM) ecosystem have not had enough time for the market to weed out which ones have long-term viability.
Omnichannel Rug Pull
In summary, I believe we could see a "rug pull" of dependencies across the AI (LLM) ecosystem components that make up the vast majority of the current AI (LLM) economy:
- Tools such as MCP servers that were hastily built or ill-conceived will be discontinued.
- LLM model non-determinism and idiosyncrasies will form a type of lock-in; when models are no longer refreshed or refined, it will be challenging to replace them reliably.
- Being wholly dependent on LLM-as-a-Service will leave organizations as bag holders when cloud vendors consolidate or eliminate offerings.
- AI-oriented SaaS (shovel salesman) offerings will collapse, dot-com style, due to competition and weak fundamentals.