A Silver Lining in Recent LLM Usage Cost Increases
There have been a lot of issues with the amount of “AI slop” being thrown at the wall. It has been exhausting the maintainers of many beloved open source projects, and it is a major source of frustration for development teams everywhere who care about quality.
Cost of “Slop” Goes Up
The recent price hikes (direct, or indirect via restrictions and rate limits) by OpenAI, Anthropic, and GitHub are undoubtedly going to slow down the reckless abandon with which “AI slop” is thrown at the wall.
I suspect these “SlopMaxxers” will attempt to revert to less expensive and less capable models, and the quality of the “slop” will drop even further. The tradeoff of using a less capable model is that they will have to iterate more just to produce passable content, likely negating much of the savings from using a cheaper model.
It’s not hard to prognosticate that costs will continue to rise as the LLM providers recoup their expenses and subsidize usage less. As a result, the spigot on the exhausting output from the “SlopMaxxer” “engineers” will tighten further.
Aside: Is Having Money for Access to LLMs Going to Be a Socioeconomic Advantage?
This is starting to make me wonder in a broader sense if access to capable LLMs is going to become a socioeconomic advantage for a chunk of the economy.
It’s hard to refute that there are at least some advantages to be had by using LLMs as tools. Those with access to the powerful models will, on the surface, appear more capable and efficient simply because of access to the tool.
An analogy for my concern: it’s like comparing a tradesman who can’t afford a power tool to one who can. The tradesman with the power tool will work faster and more efficiently, deliver results to customers sooner, and win more business as a result.
Or another analogy would be disadvantaged students who don’t have access to the same resources as their peers.
Perhaps this is all a stretch, but I can’t help but ponder…
