They seem to have hit a bit of a wall with it, as far as I can see from articles and videos on the subject. The amount of energy it takes to run ChatGPT has become unsustainable, and that's for the rather lacklustre responses it gives now. They can't really improve it without massive increases in computing power and even greater energy costs.
You've likely read the news on the new Chinese AI just launched, and it's shown that many assumptions we had (even those of us who have drilled down a bit into how AI works and aren't judging on the output alone) are not as set in stone as we imagined (well, speaking for myself at least).
If we factor in, for example, the use of AI in creating new AIs (I don't mean self-learning etc. but rather using AI tools to speed up training and other areas where shortcuts can be found), not only are we likely to see leaps we couldn't predict, but as different methods and types of AI feed into each other more, we'll see exponential increases in their power. In fact, I'd be deeply surprised if AI, despite being a fundamental step forward in general, didn't follow the prevalent trend in technology of advancing more rapidly as advances in different but connected areas enable new and novel approaches.
I know very little about it and need to spend some time reading up, but I've only really known anything about the 'traditional' approach of LLMs: using probability and massive training sets and the like to produce output (I know there's more to them than just that, but I believe at their core this is their foundation). But I've read that significant advances are being made in the area of reasoning, and if this represents what I assume it does (more learning needed on my part, though), it could be a whole new kettle of fish, maybe even one big enough to hold a few cans of worms.
Currently the well-known LLM AIs are only intelligent in the field of complex arbitrary pattern matching. They have no understanding of what's asked of them, or of what they answer; they can't create anything genuinely new, they can only take the creations of humans and manipulate them according to statistical transformations based on the probability of a word's occurrence in specific circumstances.
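To make the "probability of a word's occurrence in specific circumstances" idea concrete, here's a deliberately tiny sketch. This is not how any real LLM is built (they use neural networks over tokens, not word counts over a ten-word string); it's just the core statistical intuition in its crudest form, with a made-up corpus:

```python
from collections import Counter, defaultdict

# Toy "training set": the only text the model will ever know.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count bigrams: for each word, how often each successor followed it.
successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def most_likely_next(word):
    """Return the statistically most frequent continuation of `word`,
    or None if the word never appeared in the training text."""
    if word not in successors:
        return None
    return successors[word].most_common(1)[0][0]

# "the" is followed by "cat" twice but "mat" and "fish" only once each,
# so the model "predicts" cat -- no understanding, pure frequency.
print(most_likely_next("the"))
```

The point of the toy: nothing here knows what a cat is. It only knows what tended to come next in the text it was fed, which is the foundation the paragraph above describes, just scaled up astronomically in real systems.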
But something that actually attempts to understand and reason could be the beginning of a possible golden age of AI and mankind (ha! If only!), or could accelerate the rush to our fate.
In the end I think it's likely that a reasoning machine would reflect the values it was given as its criteria for decision making, values imparted both deliberately and unconsciously by its designers/makers. That raises the question of what those inevitably exceptionally wealthy and powerful people will want these things to do for them.
The better these machines get at doing this, the faster it'll happen. It won't usher in anything new or novel unless it benefits those who create and control those AIs.