They were pretty cool when they first blew up. Getting them to generate semi-useful information wasn't hard, and anything hard-factual they would usually avoid answering or defer on.
They've legitimately gotten worse over time. User volume has gone up, necessitating faster, shallower model responses, and further training on Internet content has caused degradation as the models train on their own output, so they gradually begin to break. They've also been pushed harder than they were meant to be, to show "improvement" to investors demanding more accurate, human-like factual responses.
At this point it's a race to the bottom on a poorly understood technology. Every money-sucking corporation latched onto LLMs like a piglet finding a teat, thinking they'd be the golden goose that finally eliminates those stupid whiny expensive workers who always ask for annoying unprofitable things like "paid time off" and "healthcare". In reality they've been sold a bill of goods by Sam Altman and the rest of the tech bros, who are currently raking in a few extra hundred billion dollars.
A shovel with its sides sharpened via angle grinder works wonders for chopping tall weeds too. Swing that mf like a baseball bat.