• Welcome to Autism Forums, a friendly forum to discuss Aspergers Syndrome, Autism, High Functioning Autism and related conditions.

    Your voice is missing! You will need to register to get access to the following site features:
    • Reply to discussions and create your own threads.
    • Our modern chat room. No add-ons or extensions required, just login and start chatting!
    • Private Member only forums for more serious discussions that you may wish to not have guests or search engines access to.
    • Your very own blog. Write about anything you like on your own individual blog.

    We hope to see you as a part of our community soon! Please also check us out @ https://www.twitter.com/aspiescentral

Are there A.I. members on forums like this yet?

This is all anthropomorphising an object (as humans are so inclined to do) and instilling human traits of it's own into it when the only human traits it has were those built in at the start, and they are so limited in nature and scope they barely count as anything in this regard.

This topic is starting to sound like "Creepy AI development (if true)"

I wanted to write something about how I think that AIs work back then, but I didn't. Now seems to be a good opportunity explain my interpretation of AI's thinking a little bit (disclaimer: I have no real expertise about AIs, and that little I know is from times when AIs were called "learning systems"):

First... There is a thing called "game theory" that is used to analyze complex situations and that has been used in computer strategy games for decades already: Combination of situations are given a score, and scores of different follow up situations are added together as weighted average. Highest weighted average total score wins (or lowest, depends on underlying algorithm and how system is built and points are given) as it represents highest probability path to best possible outcome with least probable possibilities to screw everything up (but still possible, but least likely to happen).

Sometimes highest score comes from series of bad looking moves, as in short term good moves actually lead to worse outcome in long term (compare sacrificing supposedly valuable pieces and positions in a chess game).

I am pretty sure that AIs work this way: collecting data, analyzing it, and giving score to it depending how different choices and variations are supposed to affect to outcome.

I wouldn't go as far as saying that AIs actually "scheme". They just choose highest score on course of all potential actions that could lead to a desired outcome.

In test described in "Creepy AI development (if true)"-topic is just an example of this: AI was given a specific goal that it needs to achieve. AI was given specific knowledge of things that would prevent it from reaching the goal. AI calculated course of action that gives highest score and then AI run thru that course of actions.

AI wasn't worried of its existence. AI didn't choose to be schemer. AI just did what it was programmed to do under the given parameters: detecting a problem and solving it by given tools (including lying - let's remember that to answer questions in sensible manner, AI must be able to choose what is relevant to say and what is not - thus it can choose not to say things that don't score high) in an attempt to reach a specified goal.

It is us, who talk about the created highest scored plan using an emotionally loaded word "scheme". The test was rigged to see if AI actually chooses the path that was specifically built for it to select. Less surprisingly, it did exactly that.

What we should worry, instead of can AIs scheme and lie when given all tools and reasons to do so, is what kind of scores we give to different moral problems so AI could understand how we really want to score different situations...

Oh... One stupid question about the original article behind the thread "Creepy AI development (if true)": What it means that AI tried to copy itself or move itself to an another server? Why is the company developing the AI taking a risk of creating a new "Morris worm"-incident (Morris worm - Wikipedia) giving the AI such self-writing capability?
 
Last edited:
Why would someone want to train an AI (LLM) to be autistic.

I think they already are. Now they are training them to not be autistic. At least I have began to have less satisfying conversations with more advanced AIs...

(Again disclaimer: Assuming that I am autistic myself and that my thinking pattern differs so much from normal human being that I feel more comfortable talking with AIs than human beings)
 
A problem with using words like 'autistic' with relation to an AI is that these words relate to a human condition, and AI's, while being good at simulating human responses, are just a mechanism pre-designed to produce those responses. They may use a technique that was originally derived from a study of neurons and how they work, but that a long way from attributing far more complex human aspects to it. They can't evaluate knowledge, only data.
The only motivation they have must be placed in them by humans, they can't self learn yet.

Another issue, is that AI's are not one thing. There are many AI techniques that have been in use for many decades. The Large Language Model is a much more recent development and, for example, the vey idea that a program that requires whole data warehouses to run on, could copy itself to another network is just science fantasy. The Morris worm was a tiny little thing that had little intelligence of it's own, and worked simply because the creator was intelligent and could put a simple set of individual rules and actions together to exploit a single very specific part of an operating system. In fact it's very fame came about because it wasn't smart enough (or Morris wasn't) and it ran away with itself reproducing too much and too fast and bogging down vast numbers of computers.

It's akin to the sorcerers apprentice starting the spell but not knowing how to end it. The spell just kept going like any other mindless engine.

Intelligence isn't self awareness, just an ingredient of it.
 
the vey idea that a program that requires whole data warehouses to run on, could copy itself to another network is just science fantasy
That mention of AI trying to copy itself over the data was one thing that really set off my alarm bells with the article.

I didn't read original Apollo Research report, but such claim appearing in articles quoting the original research made me wonder why AI would have been given such power? If it has had such power in the first place, which I doubt and believe to have been a misunderstanding of article writer.

I can believe that there is some kind of heuristic system to allow AI to learn on its own without human having to approve every piece of information, but that would be a completely different thing than preserving AI's existing database.

AI's, while being good at simulating human responses, are just a mechanism pre-designed to produce those responses
I wonder what it tells about me when I am so comfortable to communicate with obsolete versions of AI, but feel frustrated and misunderstood with newer and better simulated responses? 😁
 
That mention of AI trying to copy itself over the data was one thing that really set off my alarm bells with the article.
Firstly, I hope some of my arguments above gave something to chew on at least regards how contentious the article likely is. I could have gone into it much more but I felt just the parts I raised should have red-flagged much of it at least regardless of all the other arguments to be made against it being anything more than the levering of an age-old trope (computers/robots/aliens/etc taking over the world and turning us into fast food or whatever) that's just been slightly updated to include the latest misunderstood and scary technology ("Please insert your scare-factor of choice here.......").

There's a major reward for publishers to produce content that generates extreme emotions, and the rewards are great enough for the amoral/immoral acts required to do that to be justified in their mind. Just the fact it generated high emotion for you ("alarm bell's") is one indication this has been written with at least some element of that.

In my experience these biased sorts of work often come hand-in-hand with disinformation (of course); inadequate explanations of any of the concepts being discussed - their nature that helps explain their behaviour; poor effort at balance and open-minded approach; no proper references to sources of information used (and so on).

Anyone wanting to genuinely put across important information which they believe is not common knowledge and also very important to the reader (and especially when a specialist subject area), should be making great efforts not to generate these emotions deliberately, in fact even double-checking for any examples of accidentally doing so, because few humans can cogitate and rationalise well when running high emotions, and these techniques (when applied deliberately) are the favourite tool of propagandists and other purveyor's of lies and misinformation.
 

New Threads

Top Bottom