Here it is, then: my treatise on AI and its place in our world.
(And can you get to the end without wishing you hadn't? And is that down to style or content?)
Or, ignoring the fancy-sounding but meaningless title (I always wanted to write a treatise - not sure what one is, but it sounds most impressive!), here are some thoughts on aspects of AI learning systems that I often see ignored, or even denied, in all the focus on the 'face' these systems present in our computers - or at least the face we can see, or are told of.
Unfortunately it's a massive subject, most of it resides below the waterline, and only very few get the privilege of seeing that part. Certainly not me; I can only make soundings and try to make sense of the echoes returned. There's so little to actually see as an outsider to the industry and its related organs.
So I'm poorly equipped to present the well-ordered discourse that the seriousness and complexity of the subject deserve. (In fact, the subject is so intimidating to comprehend as anything but a part of the whole that it's hardly surprising misinformation, even when blatantly against all the visible evidence, can be so effective.)
For my failings, and my arrogance in thinking I can overcome them, I can only apologise, and hope that in the scattiness of my bleatings there may at least be food for thought.
But what an industry, eh! Take a gander at how fast it has appeared to grow, almost before our eyes. It has turned from a two-paragraph comment in the middle of some broadsheet - about some obscure AI research breakthrough that means little to most of us, beyond a tag-line saying how this will change our lives when it's finally working, rarely described in any precise fashion, mostly vagaries about more leisure time, better medical facilities, more safety systems, and, and, and... you can probably fill in that list better than I can! - into front-page news we can't escape.
Maybe I should start by dispelling an old view that people seem to cling to, perhaps through a lack of understanding, plus the constant efforts of many media outlets to plug into and exploit the most evocative and emotive parts of this misunderstanding.
AIs, however smart they appear, however much they can respond just as a human might - sometimes so convincingly that they are hard to detect as not being human, at least definitively - are nothing of the sort at all. Maybe you've heard the (true) story of the AI engineer who became convinced the AI he was working on was actually self-aware, conscious, an actual mind in the way we understand a mind to be (i.e. as compared to our own). A real self-determining cognitive entity! He would ask it how it felt about itself and its usage, its origins and nature - all sorts of anthropomorphic communications - and, unsurprisingly, the AI did exactly what it was designed to do: it returned an output, based on the input, that gave the impression of coming from a human, because it was built from human speech (in text form). It knew, from all that human-sourced data, that when someone asks about something in a certain way, most humans respond in a certain way back. And that's exactly what it did!
The fact that this engineer - who understood far more than most about how these things work, and should have been aware of the huge flaw in his thinking - fell for it anyway maybe demonstrates just how powerful this effect is on an average human. It also demonstrates the very important, but much denied or ignored, fact that humans are generally irrational animals, and make irrational decisions much of the time.
There are other, similar cases of techies' 'blindness' to their own field of expertise. One of the early Facebook designers admitted that the psychological manipulation he (and others) created - the first appearance of many of the established tropes of social media: 'likes' and 'dislikes', emojis, AI-driven reading choices ("we know what you want better than you do!") - had been affecting him unknowingly, and to the detriment of his family. He was ignoring his children in favour of making "just one more post", and hadn't even realised it, despite having knowingly engineered the effect himself! One of the most aware people on the planet (as regards the use of biases and manipulations in social media) had fallen into his own 'trap'!
How can the rest of us hope to avoid it, then? (Answer: we can't! And the more we deny that, the more likely we're as trapped by it as anyone!)
One issue seems to be that the more useful an app is, the more likely most people will be in denial about its negative aspects. This seems to me a most powerful effect, and like the most powerful of effects, it rarely sits within the conscious awareness of the viewer, so it can operate directly on the unprotected subconscious. These things - AIs - are already creating social trends that have enormous impacts on the real world, in no uncertain terms. Who can forget what TikTok did for Andrew Tate? I won't comment on the person or his nature and actions; that takes minimal research if necessary, and it's not something you should take my word for anyway. The point is, he was elevated into the world's public eye, especially among younger social media users. And what was directing people to Tate's output? TikTok's recommendation AI, of course! And why? ...
When the aim is to maximise users' online time in an app, an AI can make stunningly accurate (and profitable) decisions very quickly. If it has nothing much to care about bar a simple target, such as the time an account is online, these sorts of problems are not so difficult to solve, and solutions have been in use for some time now. But for an AI to make choices based on, or influenced by, human morals is quite another matter altogether. Such nuanced decisions are exceptionally difficult for many a human to manage without ever failing, and a human intelligence far exceeds even the most advanced AI in development in the quality of its decisions, if not the quantity. The subtleties are many and complex in nature; even to express them as a traditional algorithm would be impossible (otherwise it would have been done by now). So the nature of the danger, at least in this case, is that the people who decide on the AI's training, and its ultimate use, are far better advised to create a simple, reliable and cheap AI that will gain them the most profit possible, at the expense of users. It simply boosts advertising revenues significantly - and who doesn't want that? (I hope the answer is clear to you on that question! Us, of course!)
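To make that concrete, here's a minimal sketch - entirely invented by me, not any real platform's code - of what a 'maximise time online' recommender boils down to once an engagement model has been trained:

```python
import random

def predicted_watch_seconds(user_id: int, item_id: int) -> float:
    # Stand-in for a trained engagement model; in reality this would be a
    # neural network scoring (user, item) pairs. Seeded randomness just
    # makes the sketch runnable.
    return random.Random(hash((user_id, item_id))).uniform(0, 300)

def rank_feed(user_id: int, candidate_items: list[int]) -> list[int]:
    # The entire 'editorial policy' collapses into one line: sort by the one
    # number the model was trained to maximise. Nothing about truth,
    # wellbeing or morality appears anywhere in the objective.
    return sorted(candidate_items,
                  key=lambda item: predicted_watch_seconds(user_id, item),
                  reverse=True)

print(rank_feed(42, list(range(10))))
```

Notice there's no hook anywhere for "but is this good for the user?" - that's the whole point.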
Another fundamental issue with learning systems of this nature is that no one can know exactly why they make a certain decision, or what decision they'll definitely make next. The training - essentially feeding the system data, in the form it needs, for the decisions required of it - is what gives it a framework, within its network of neurons, from which it can unravel the data and extract the required answer. But it's not a precise science. People come up with examples of the sort of answers they'd expect to get and feed them backwards through the network of neurons, adjusting the 'strength' of each connection accordingly, until it gives the correct answer often enough when fed test examples. But part of the problem lies in the fact that, although it may seem to be working, no one can say exactly what it was about the input data that enabled the learning system to make its decisions.
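For the curious, here's a toy version of that 'feed examples backwards, adjust the strengths' loop, for a single artificial neuron. (A deliberately tiny sketch of my own; real systems adjust millions or billions of these numbers, which is exactly why nobody can point at one and say what it 'means'.)

```python
import math, random

def train(examples, steps=10_000, lr=0.1):
    w, b = random.random(), random.random()      # the connection 'strengths'
    for _ in range(steps):
        x, target = random.choice(examples)
        out = 1 / (1 + math.exp(-(w * x + b)))   # neuron's output, 0..1
        err = out - target
        # Nudge each strength a little in the direction that reduces the
        # error - this is the 'feeding backwards' described above.
        grad = err * out * (1 - out)
        w -= lr * grad * x
        b -= lr * grad
    return w, b

# Learn a trivial rule: output 1 when x is positive, 0 otherwise.
w, b = train([(-2, 0), (-1, 0), (1, 1), (2, 1)])
print(w, b)  # the learned 'strengths': two opaque numbers
```

All you get out at the end is numbers. Whether they encode the rule you wanted, or some accidental stand-in for it, the numbers themselves won't tell you.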
Here's a great example. I believe it's true, though I've been unable to find any references to it online just now; but even if it's apocryphal, or only partially true, it demonstrates the above problem nicely.
In the '80s or '90s, London Underground wanted to introduce automated platform-occupancy measurement, and one method they tried was applying a neural network to a camera monitoring a selected test platform. After training it on situations where the platform was at various capacities, and subsequently getting the correct answer every time, they were rightly very pleased with themselves. Until, of course, it stopped working out of the blue!
Nothing about the system had been changed. The platform was the same, the same numbers of passengers were using it - so why would a perfect system suddenly say the platform was jam-packed, regardless of the time of day, even when the station was closed? After much searching, the reason eventually emerged. In those days, the large adverts on Underground platforms were old-style paper posters pasted onto the wall, and the evening before the system started failing, the posters had been replaced with a new set. The network had been 'looking' at one of the previous posters which, because of its position relative to the camera, was obscured by passengers in proportion to how full the platform was. And that worked perfectly - until the poster disappeared. To the AI, not being able to see the poster simply meant the platform was always full of people!
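If the story sounds too neat, the mechanism is easy to parody in a few lines (all numbers invented, obviously):

```python
# Toy illustration of the poster failure mode. Suppose that, during
# training, the poster region's visibility happened to track occupancy
# perfectly. The 'model' then learns exactly that shortcut:
def estimate_occupancy(poster_visible_fraction: float) -> float:
    return 1.0 - poster_visible_fraction   # the learned shortcut

# While the poster is up, this is indistinguishable from a real crowd
# counter: poster 75% visible -> platform reported 25% full. Correct!
print(estimate_occupancy(0.75))

# New poster campaign: the region the model watches no longer matches, so
# visibility reads as zero whatever the crowd is doing...
print(estimate_occupancy(0.0))   # -> 1.0: "platform always full", forever
```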
This should show that what makes an AI do what it does isn't really known and, more to the point, can't be reliably predicted - unlike a more traditional method of programming. There's no way to extract from the AI's neurons the reason why a certain choice was made. Now, the platform-monitoring example above was a very simple AI system, literally pumping out a single number to indicate how full a platform was. Compare that to an essay of human-readable text from something like ChatGPT. The difference in complexity and subtlety is so huge I can't even give an estimate to put it in perspective.
Maybe imagine that single number as representing just one letter of ChatGPT's essay. Take just one word: how many six-letter combinations are there? 26 to the power of 6 - about 309 million raw letter strings, of which some tens of thousands are actual English words. Two words multiplies the possibilities together, so we're squaring a number already in the tens of thousands. Now look at the whole output: how many words? A few hundred, maybe a few thousand?
We're past the billions by the third word, and we're probably still near the start of the first sentence of the output.
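If you'd rather see the arithmetic done properly, a few lines will do it. (The 20,000-word dictionary count is my assumption - large word lists have roughly that many six-letter entries - the rest is just exponentiation.)

```python
six_letter_strings = 26 ** 6            # raw letter combinations
print(f"{six_letter_strings:,}")        # 308,915,776

words = 20_000                          # assumed six-letter word count
print(f"{words ** 2:,}")                # two-word sequences: 400,000,000

essay = words ** 500                    # a modest 500-word output
print(len(str(essay)), "digits")        # about 2,151 digits long
```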
OK, that's all still very rough - and if anything I suspect it under-estimates - but precise figures aren't the point. The point is how much more sophisticated and complex the output from a modern AI is, and thus how much harder it is to understand why it makes the decisions it does. If we struggled to work out how an AI as simple as the London Underground one operated, how much harder will it be with the modern variants? Many thousands of times harder, I'd suggest - probably much, much more.
There are so many highly contentious areas in AI that warrant close inspection, and I've already out-written most people's patience, determination, maybe even interest. So I'll try to close this as quickly as I can, with a little horror story of an idea.
Where, specifically, are the real dangers? An easy and obvious place to look is the global economy. Trading is done across the world 24 hours a day, much of it already controlled by computers; but until recently those tended to run on more traditional algorithms that can be predicted - they follow a fixed path or flowchart and make human-observable decisions. And these trading systems are linked, directly or indirectly, to things like the global supply chains of various commodities, and I'm sure to many other side industries that enable this trade - far more than I know about.
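By 'traditional algorithms' I mean logic like the toy rule below: every branch readable, every decision explainable after the fact. (A made-up moving-average example of my own - not anyone's actual system, and certainly not trading advice.)

```python
def decide(prices: list[float]) -> str:
    short_avg = sum(prices[-5:]) / 5     # average of the last 5 prices
    long_avg = sum(prices[-20:]) / 20    # average of the last 20 prices
    if short_avg > long_avg:
        return "buy"                     # price trending above its norm
    if short_avg < long_avg:
        return "sell"
    return "hold"

print(decide([100.0] * 15 + [101, 102, 103, 104, 105]))  # -> "buy"
```

A learned trading policy replaces those readable branches with millions of weights, and the 'why' of any one decision disappears along with them.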
So the people who can make the best decisions fastest are likely to do better than those who can't. Up to now, this has mostly been fine. Individuals and companies rise and fall, but it tends to average out - not too much change at any one time - and the system continues. But what happens when people start deploying trained and tested AIs that can make split-second decisions that grossly affect competitors? If the first roll-outs of these don't bring the system down almost immediately, by simply bankrupting everyone else through a many-thousandfold increase in efficiency (plus a goodly dose of greed from their owners), then we'll see a rush by everyone else to do the same. More than a rush - a panic: ramp up before you're wiped out! And that leads to an ecosystem of thousands of AIs all battling each other for a bigger share of the profits. Remember, these things have no consciences, no capacity to care, and they'll find the most obscure pressure points on which to press to gain an advantage. The odds that one AI eventually finds something so 'successful' it brings everyone down are far too high, and it's almost impossible to predict when, where or how. And once someone puts their business into the 'hands' of an AI that's 'battling' other AIs, they can't stop! To pull your AI out is to be subsumed by all the others still there. Once we start, there's no stopping it. Sounds extreme? Look at the similar jumps in technology throughout our history: whenever one side pulled out of an arms race, they rarely had much chance of succeeding.
And the possible impact of something like that happening? Well, imagine the whole global economy and its associated supply chains in total chaos, or worse. In an age of "just in time" supply, it would take only days before we'd be fighting each other in the aisles for that last packet of bog roll!
Food? Best get gardening now!
Of course, this is no Nostradamus prediction - who can say? But that's just it: who can say? Not even the experts know, the politicians don't want to know, and the avaricious among us just want it, and they want it yesterday! And what that sort want, they tend to get, one way or another. I honestly wonder how many AIs are already directly involved in our affairs, in a wholly opaque fashion.
Whatever my wailings and moanings, in the end it's too late to stop the use of them. Just as with nuclear fission, once it was dreamed of there was no going back. But where is the thinking on how we will oversee these things, and try to protect ourselves from them - the sort of thinking that a major amount of time, effort and resources went into where nuclear physics and its impact on the world were concerned? I think of the political 'giants' of the last century, and the serious efforts of will, intelligence and imagination that went into the 'building' of the twentieth century; then I look at so many of the excuses-for-politicians we have voted in, in our incompetence and ignorance, and I wonder whether it's only right that we receive our comeuppance from the world in return. We certainly deserve it - no one is innocent indeed!
NB:
So, for those of you who have somehow made it this far: thank you for rewarding my efforts with your time and patience. As a small 'reward', a last thought (I know, I said I would close with the above!).
It seems one of the more contentious parts of how an AI works is its training. After all, what's a learning system that hasn't learnt anything? That, of course, puts special emphasis on the training itself. Who does it, and how? What oversight is there? How is success measured? All important questions, currently lacking answers, because so much AI development happens behind closed doors.
A human brain, though, is a dynamic system, and one of its great powers is its ability to train itself, of its own volition. So who wants to bet me that somewhere across the world there aren't research labs trying to build such a thing? And should they succeed, what would that represent? We have so little handle on how a statically trained AI works - any thoughts on how much more incomprehensible a self-learning system would be? How could we know what its aims really are? Its developers will believe they know, but the very nature of the thing would preclude that in reality. The more complex and human-like in function (if not in psyche) these systems get, the more difficult they will be to definitively fathom - just like people. Being embedded in silicon will be no help in trying; we will move into an era of AI psychiatrists and psychologists. And how often has a seemingly solid, sane, reliable human gone and flipped?
Once again we have a hideously powerful tool that we don't understand, are far from any kind of safe way of handling, and have no protections in place for - no regulation, no oversight - and the weapons of the gods are this time in the hands of merchants.
And finally: what will the AIs that AIs are one day made to build be like?
We don't need to fear an anthropomorphic, self-aware entity in AIs. In fact, if they could have that ability, we could at least talk with them, maybe reason with them. The problem is simply that they expand our power vastly while, reciprocally, becoming less well understood. How can anyone say with any degree of certainty how they will affect us? Those who blindly tout the many massive advantages as somehow excusing any dangers, even making them irrelevant, only show their ignorance as their certainty grows. We should be doubting everything about them. Anyone who isn't should not be trusted - they have ulterior motives! They are here, and here to stay; such is the nature of the beast, and ever has been. I for one worry that so many of the people who should have been looking at, and talking about, all this at least a decade ago seem now to have given up on it as completely above them. Maybe there's part of the problem?
And what will the AIs built by AIs build?
And will we ever know they have?
Boy! Am I glad I'm not paranoid!
Oh no! Is it really that long?