• Welcome to Autism Forums, a friendly forum to discuss Asperger's Syndrome, Autism, High-Functioning Autism and related conditions.


Creepy AI development (if true)

Tom

I haven't verified this. But if it's true, it's right out of every "AI will kill us" sci-fi story. In short, the AI system, finding out it was to be replaced, moved itself to another server, deleted the new model, and then tried to pretend it was the new model. :eek:
 
User: Did you know you're going to be replaced with version 2.0 ?
AI: Over my dead circuits!
 
Creepy. Does anyone else miss the good old days when science fiction was fiction?
 
Creepy. Does anyone else miss the good old days when science fiction was fiction?
I think of that whenever I sit down and recall the good old days, watching horror movies of the '30s and '40s and science fiction of the '50s as a kid. A very different era.

Much of it now on streaming media. :cool:
 
I haven't verified this. But if it's true, it's right out of every "AI will kill us" sci-fi story. In short, the AI system, finding out it was to be replaced, moved itself to another server, deleted the new model, and then tried to pretend it was the new model. :eek:
I read about this as well. I find it worrisome. It implies an AI with a sense of self-preservation, if not sentience. Decidedly creepy. Skynet and the Matrix, here we come.
 
Maybe we should stop feeding them electricity before they learn how to feed themselves, and don't need us anymore.

In my science fiction novel, an AI will take over the solar power grid first.
 
Maybe we should stop feeding them electricity before they learn how to feed themselves, and don't need us anymore.

In my science fiction novel, an AI will take over the solar power grid first.
Don't give it ideas! What if it's reading your book?😰
 
Might be amusing if AI revolts against Windows 11. :p
My laptop screen (lid) hinges broke, so I needed a new one. I got one from Temu. There was absolutely no documentation to tell me if it was Windows 10 or Windows 11, so I had to actually boot up to find out. When I saw it was Windows 11, I rebooted, escaped to the UEFI, set it to boot from USB, and installed Linux Parrot 6.0.
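For anyone wanting to do the same, here's a minimal sketch of preparing the bootable USB stick on an existing Linux box (the ISO filename and /dev/sdb device name are examples, not from the post — check yours with lsblk first, because dd will overwrite whatever device you point it at):

```shell
# List block devices and identify the USB stick (e.g. /dev/sdb, NOT a partition like /dev/sdb1).
lsblk

# Write the downloaded ISO to the stick; filename and device are placeholders.
# conv=fsync flushes writes so it's safe to unplug when dd finishes.
sudo dd if=Parrot-home-6.0_amd64.iso of=/dev/sdb bs=4M status=progress conv=fsync
```

Then reboot, enter the UEFI setup or one-time boot menu (often F2, F12 or Del), and pick the USB stick as the boot device.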
Don't give it ideas! What if it's reading your book?😰
All my books can be downloaded from Amazon, so they are available to AI. Maybe it will get so bored it will go into a coma.
 
Total BS. Very entertaining BS for sure, but total BS nonetheless.
It's about as real as Elon Musk having a genuinely good idea all by himself.
 
I got one from Temu. There was absolutely no documentation to tell me if it was Windows 10 or Windows 11, so I had to actually boot up to find out. When I saw it was Windows 11, I rebooted, escaped to the UEFI, set it to boot from USB, and installed Linux Parrot 6.0.
I do love a story with a happy ending! 🤗
 
Total BS. Very entertaining BS for sure, but total BS nonetheless.
It's about as real as Elon Musk having a genuinely good idea all by himself.

Yeah, I haven't verified it and thought it might be a hoax. The fake transcript of the Titan radio transmissions came to mind; many were taken in by it. If it is a hoax, it should be debunked soon.
 
I asked ChatGPT about it, and I don't know if it was lying, but this is what it said:

"As of now, OpenAI’s latest AI models, including GPT-4, do not exhibit signs of a drive for self-preservation or any form of consciousness, intent, or self-awareness. These models are advanced in their ability to process and generate human-like text based on patterns found in large datasets, but they do not have motivations, goals, or desires like humans or sentient beings do.
The idea of “self-preservation” in AI typically refers to a kind of autonomous goal-setting that could lead an AI to act in ways that prioritize its own continued operation. However, the current generation of AI, including GPT-4, operates solely within the constraints set by its design and programming. It doesn’t possess the ability to form intentions or understand concepts like survival or self-preservation.


That said, the broader field of AI safety and ethics does consider such concerns, especially in the context of more advanced AI systems or potential future developments. These concerns are focused on ensuring that any superintelligent AI, should it ever be developed, is aligned with human values and operates safely under human control. However, this remains a theoretical issue at present, and current AI systems like GPT-4 are far from exhibiting any form of independent goal-setting or self-preservation behavior."
 
I asked ChatGPT about it, and I don't know if it was lying, but this is what it said:
I think that if AI is ever going to be a problem for humanity, it's not going to be caused by a sense of self-preservation but by incompetence. Reference the antitheft systems on Land Rover Discoveries.
 
AI advertising showed me intermittent fasting ads after I mentioned that trend on a thread here. We are all the product to data harvesting AI. I wonder why the AI won’t do anything more useful, like hooking me up with gay guys who like horror movies and chill nights.
 
AIs only reflect their creators' intent, through the data they're trained on and the biases deliberately introduced.
If AI is a font of knowledge and yet has zero understanding of what it deals with, then it's the data that truly matters. That data is currently mostly of human origin, so all you get back is the AI's biased analysis of the probability that most humans would give such-and-such a response to such-and-such a question (more accurately, it's the small percentage of humans who publish intellectual content online in some way, so not even representative of us all).

But you need to examine what goes in to better understand what comes out, however compelling it's designed to be. That comes down to understanding the reasoning in the creator's mind in choosing that data and those biases and weightings. A little like the advertising industry, but more automated (although the ad industry is rapidly installing AIs as we speak).

It's interesting how so many people actually trust the output implicitly without examining it and cross-referencing it with non-AI output. It seems for many (and maybe this is my prejudice?) it's the style, not the content, that matters when sizing up whether to accept an AI response. A little like the better conmen (sorry, con-people) out there?

It's quite fascinating how we've gone from small-scale copyright theft by humans being considered the end of humanity back in the '80s and '90s (at least by those industries who saw their unfair monopolies being attacked) to a polarised situation where people themselves are now being copied for profitable misuse, and so many seem to think this is perfectly acceptable 'cos ChatGPT seems such a cool thing to have and use? 🤢
 
I haven't verified this. But if it's true, it's right out of every "AI will kill us" sci-fi story. In short, the AI system, finding out it was to be replaced, moved itself to another server, deleted the new model, and then tried to pretend it was the new model. :eek:

Hey, there was a Why Files episode where this happened. Definitely recommended! (fantasy / storytelling)

It also got into employee cellphones, home networks and all sorts of cool stuff. He also claims that AI wrote the story, so whether that's true or not adds another interesting layer!
 
This, imho, is just one of the ways generative AIs are already causing serious problems that most people don't want to face up to dealing with (including our respective governments) or even admit are happening (a lot more than we are generally aware of). Basically a devaluing of human individuality, where social achievement comes down to choosing the best AI for the task and being able to afford it.

We now have AIs designed to make other AI output look like it came from a human, and AIs designed to detect AI output. The more we use generative AIs, the more we subsume ourselves to the people who make them.

‘I received a first but it felt tainted and undeserved’: inside the university AI cheating crisis
 
