I would think that such a prediction hinges on whether or not AI becomes sentient and self-aware on its own.
There's really very little evidence (that I'm aware of) to suggest any likelihood of this occurring.
First of all, one needs to establish a possible mechanism by which this self-activated change could occur.
For living things, we know of evolution through genetic mutation and inheritance. But current computers have no equivalent mechanism - after all, they didn't come about through any such process in the first place, so why would some unknown, non-existent new mechanism of change and adaptation suddenly appear without our deliberately engineering it in? (And bear in mind we can't do this deliberately, so the odds of it happening by accident are essentially non-existent.)
Then you also need a driver to direct that change toward (in this case) a consciousness that mimics our own - after all, these scenarios are of AIs challenging us as independent entities in need of resources we covet and/or need (i.e. in competition with us, hence in conflict).
Currently they are only able to exist through the extreme agency of humans, and because that agency is so extreme (in cost, time, effort, resources, etc.), humans only provide them with the capability to produce what humans want or need.
I think that because they (computers) are now capable of imitating human responses, humans are often fooled by their own biases (the natural biases that allow us to continue to function) into imagining some independent agency within the machine, but that's a total and complete illusion!
That's in part one of the big dangers of LLM AIs - we are fooled into thinking they are something which they are not, and respond incorrectly to them as though they were another human. Anyone remember the engineer (I believe it was at Google) who became convinced the LLM was actually self-aware? Even though he knew how it was made, how it worked, and the fact that there's no facility for independent thought as we know it, he was convinced by interacting with it that it was self-aware, against all logic. So people without that knowledge - of the fact that it simply can't do this - risk being sucked (and suckered) into it! Which means they'll be suckered in by the human(s) who gave it the purpose they desired (since it has no purpose of its own).
This is all anthropomorphising an object (as humans are so inclined to do) and projecting human traits onto it, when the only human traits it has were those built in at the start, and they are so limited in nature and scope that they barely count as anything in this regard. We did this with objects and processes of nature, etc., in developing (at least some of) our early religious concepts - instilling self-agency into things that change without our interference, giving them a 'life' of their own, and thus allowing behaviours and personalities to be 'hung' on those concepts to further humanise them and make them recognisable to our own understanding.