
Are there A.I. members on forums like this yet?

Magna

I wonder if there are any Artificial Intelligence members who have autonomously created accounts on forums such as this yet. I don't know the answer to this, but I was on a different forum recently and there was a new member whose response to a thread had to be non-human: a totally generic response, written in the generic style of AI.
 
Hmmm. How to create a "bot" with the personality, thinking style, and ruminating backstory of an autistic individual? I suppose anything is possible. A higher level of programming, done with the insights of a "black hat" autistic hacker. Interesting.
 
Bots programmed to minimally interact with live forum members who weren't aware the poster wasn't human have been around for a while. Not something I'd easily confuse with AI technology, though. Technology like AI just makes the process of manipulation faster and easier.

My impression has been that the staff is very good about identifying such bots and purging them. Can't say I've noticed any recently, though.
 
So far, all I have seen on the frontlines are AI crawler bots added to our visitor load.

I'm going to guess that it would require a fairly adept user to first breach our automated lines of defense and then get past my vetting process.

Over my time spent here as your admin, we have had several instances where users were fairly adept at jumping through the hoops, but ultimately failed because a leopard never changes its spots.

If someone was able to pull off the tech aspect, before too long, I would see right past all of that noise.

Besides that, what real reason would someone have to even attempt it?

In the event that an existing member tried it and something went horribly wrong, the outcome in the end would be very unfavorable for them, considering I get to wield the proverbial ban-hammer, and IDK, I've heard those leave marks that sometimes don't go away...
 
Besides that, what real reason would someone have to even attempt it?

I don't mean "someone". I mean "something".

On another forum that's mental-health related, a person asked a question about the difference between ADHD and BPD, and a new user with no previous posts posted a completely generic explanation of the differences between the two disorders. I don't see how it could not have been an A.I. joining and posting on its own, without human involvement.
 
I don't mean "someone". I mean "something".
To do what you describe is technically possible but ridiculously unlikely. Putting together a networked series of computers, building a neural network, and training it is a very expensive and time-consuming process, and it takes an incredible amount of dedication as well. It's perfectly natural that you would want it to access forums so it can learn social protocols, but trying to get it past account creation and having it respond to people in forums is pretty pointless when there are plenty of easily accessible open forums out there.

I'm not saying it can't be done, but doing so would be an incredibly expensive and time-consuming waste of effort.
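Just to put rough numbers on "expensive": here's a hedged back-of-envelope sketch using the common ~6 × parameters × tokens FLOPs rule of thumb. Every figure in it (model size, token count, GPU throughput, hourly rate) is an illustrative assumption, not a real quote.

```python
# Back-of-envelope training cost using the common ~6 * params * tokens
# FLOPs rule of thumb. Every number here is an illustrative assumption.
params = 7e9              # assume a 7B-parameter model
tokens = 1e12             # assume 1 trillion training tokens
total_flops = 6 * params * tokens          # ~4.2e22 FLOPs

gpu_flops_per_sec = 3e14  # assume ~300 TFLOP/s sustained per GPU
gpu_hours = total_flops / gpu_flops_per_sec / 3600
cost = gpu_hours * 2.00   # assume ~$2 per GPU-hour

print(f"{gpu_hours:,.0f} GPU-hours, roughly ${cost:,.0f}")
# -> about 38,889 GPU-hours, roughly $77,778 under these assumptions
```

Even with these deliberately modest assumptions, that's tens of thousands of dollars of compute before anyone writes a single forum post.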
 
Besides that, what real reason would someone have to even attempt it?
Yep. Other than someone using this domain as a "testing ground", I see no point in applying such efforts here. A waste of resources. It makes even less sense for an insider to bother.

And I agree... lol. They WOULD get caught.
 
I don't mean "someone". I mean "something".
The truth?
I have to read a lot of content in the position I hold, so not a lot will get past my radar.
If a series of posts shows that they are within guidelines, we simply let it rest and wait to see where the end user takes it.

From square one, anything posted here that is within our guidelines is fair play.
If there is no double-dip on profiles, the only way to know what kind of responses an interaction would produce is to interact with it, then either continue on the same path or set a new course.

Spotting the non-human part would throw up flags to watch it more closely, say if the posts were straying from the topic or, for that matter, demanding compensation for goods or services.

Which ties directly back to how I deal with spam hits.
 
The truth? I have to read a lot of content in the position I hold, so not a lot will get past my radar.
THANK YOU, NITRO!
 
Unless I've misunderstood the thread, could a possible answer to this not simply be that someone is using an LLM AI to polish their posts for whatever reason (it could even be to overcome a cognitive impairment or condition)?

After all, some LLMs are openly aimed at doing this for their users: helping them publish content in some form or other.
 
After all, some LLMs are openly aimed at doing this for their users: helping them publish content in some form or other.
A lot of journalists are doing this now; I noticed the change in the last couple of years. The AI assistant they use makes spelling errors to make the text look human-written, but they're not the same errors that the regular journalists used to make. There are mannerisms to the different ways people speak and write, and I got quite used to expecting certain errors from certain authors. Now they've all changed, and if you read stories on different news services in different countries you'd swear they were all written by the same person. Grammatical errors too.
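The "certain errors from certain authors" point is essentially stylometry: comparing measurable style features between texts. A toy sketch of that kind of comparison follows; the features and sample strings are arbitrary illustrations, not a validated detection method.

```python
# Toy stylometry sketch: compare a few crude writing-style features
# between two text samples. The features are arbitrary illustrations,
# not a validated authorship-detection method.
import re
from collections import Counter

def style_features(text: str) -> dict:
    words = re.findall(r"[a-zA-Z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    unique = Counter(words)
    return {
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        "avg_word_len": sum(map(len, words)) / max(len(words), 1),
        "type_token_ratio": len(unique) / max(len(words), 1),
        "commas_per_sentence": text.count(",") / max(len(sentences), 1),
    }

# Hypothetical samples: an author's older articles vs. recent ones.
before = "The minister said, quite plainly, that no deal was near."
after = "The minister stated that a deal was not imminent."
fa, fb = style_features(before), style_features(after)
for key in fa:
    print(f"{key}: {fa[key]:.2f} -> {fb[key]:.2f}")
```

A sudden shift in features like these across an author's whole output is exactly the kind of "everyone now sounds like the same person" effect described above.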
 
Unless I've misunderstood the thread, could a possible answer to this not simply be that someone is using an LLM AI to polish their posts for whatever reason?

That seems like a pretty genuine use of it, if that's happening at all.

I wonder if there are any Artificial Intelligence members who have autonomously created accounts on forums such as this yet.

Most sites that are 95% bots (like X) are usually trying to sway users' opinions or manipulate them emotionally in some way. Since a lot of the hot-button topics bots are designed to carry on about aren't allowed to be discussed here, there's a high chance that bots attempting similar things here would be removed.

It's not out of the question for bots to infiltrate any social website to fulfill some kind of agenda, but again, that kind of content isn't allowed here anyway, which greatly cuts down on the potential for AI influence.

In short, most bots aren't going to talk about human nuances, personal struggles, and the issues of living on the spectrum if there's nobody to undermine and nothing to gain. Someone could theoretically do this as a parlor trick to amuse themselves, but not as a business model.
 
Since a lot of the hot-button topics bots are designed to carry on about aren't allowed to be discussed here, there's a high chance that bots attempting similar things here would be removed.
The bots are trained on particular data categories depending on use; this would simply be a case of using a spanner to undo a screw - the wrong tool for the task in hand.

But we are going to end up with sites where response automation is banned (boosting the anti-bot-bot industry - remember when anti-virus became a thing? Similar process) and sites where it's allowed. In the main I suspect most sites will opt for the latter, with the banning sites more likely being niche special-interest sites.

It won't be long at all (to take @Outdated's point regarding mistakes) before we start finding it difficult to be sure whether we're interacting with humans or with archived human data, and those subtle 'tells' that we can still cling to now will not only be impossible to detect without tools, but will start to have styles (including mistakes, biases, etc.) more effectively chosen and implemented to humanise them.

Next step: you fill in a form to evaluate your personality, and that tweaks your bot to behave like you (trained on your prior output - roughly as sketched below). See the direction(s) this is leading?
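To make that concrete, here's a hedged sketch of what "trained on your prior output" might look like in practice: formatting someone's old posts into prompt/completion pairs for fine-tuning. JSONL is a common convention for this; the field names and data below are purely illustrative, not any specific vendor's API.

```python
# Sketch: turning a user's old forum posts into prompt/completion pairs
# for fine-tuning a "sounds like me" bot. JSONL is a common convention;
# the field names and data here are purely illustrative.
import json

# Hypothetical (thread_title, user_reply) pairs from the user's history.
history = [
    ("Are there A.I. members on forums yet?", "I doubt it, but who knows."),
    ("Best noise-cancelling headphones?", "I swear by my battered old pair."),
]

with open("persona_finetune.jsonl", "w") as f:
    for title, reply in history:
        f.write(json.dumps({
            "prompt": f"Thread: {title}\nReply in this user's voice:",
            "completion": reply,
        }) + "\n")
```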

As we become less and less able to discriminate between humans and AIs that reconstitute old human data, stripping out more and more content in exchange for style, I can't conceive of this not damaging our psyches (even more) on a collective level. And as we do this, we'll be blunting our instincts for discrimination, which will open us up even more to external influence while we're less and less aware of it. That will also feed back into the creation of new bots - what they do and how, and more importantly, how we respond to them.

To me, giving humans AI bots is tantamount to giving a baby a razor blade to play with.
 
I thought this was interesting, and in fact it has a high probability of being the final outcome of our fate as humans. Just as he mentions that the precursors to Homo sapiens died out, were killed off, etc., it's thought-provoking to hear that it's very possible our species will simply be a temporary precursor, replaced entirely by A.I. and machines.

The difference, and the irony, with humans today in contrast with our prehistoric ancestors (e.g. Neanderthals) is that unlike those precursors, who did not initiate their own extinction, we very possibly will initiate ours. A species that's both genius and insufferably, embarrassingly, pathetically foolish - one that will reap what it has sown at the hands of a few "intelligent" idiots.

 
If someone online tells you that you have cancer no matter what ailment you have or don't have, then they may be AI...
 
I thought this was interesting, and in fact it has a high probability of being the final outcome of our fate as humans. Just as he mentions that the precursors to Homo sapiens died out, were killed off, etc., it's thought-provoking to hear that it's very possible our species will simply be a temporary precursor, replaced entirely by A.I. and machines.

I don't particularly like this, because it puts computers/AI in the category of life by talking about evolution and extinction, etc. AIs don't have their own agency; they are just complex, powerful tools.
The only thing that would make an AI act against us is either a buggy implementation or malicious use by a human, and that's why they can be dangerous.

They have no reason to exist without us. We build them to fulfil a task we want performed; if they somehow decided the solution was to remove us from the equation, they'd have lost or completed their function, and the only obvious thing left to do would be to turn themselves off.

It seems to me that the worst human behaviours usually come about from human emotions - greed, fear, hate, love, etc. - and these are what motivate us at some level. What would AIs be motivated to do without us? I suspect it would be damn difficult to deliberately build an AI that could do this; as for it happening accidentally, that's just us anthropomorphising them.
 
I don't particularly like this, because it puts computers/AI in the category of life by talking about evolution and extinction, etc. AIs don't have their own agency; they are just complex, powerful tools.

I would think that such a prediction hinges on whether or not AI becomes sentient and self-aware on its own. If it does AND it has a desire to continue to exist autonomously, then it would likely be all over for humans.

Why? It would end up being far more intelligent than humans, but at the same time humans would be a threat to it.

What altruistic reason would it have to keep us around, unless perhaps to subjugate us for potential needs of its own that it presumably couldn't meet by itself?
 
What altruistic reason would it have to keep us around, unless perhaps to subjugate us for potential needs of its own that it presumably couldn't meet by itself?
In a very real sense we're already slaves to machines and technology, nothing needs to change. :)
 
I would think that such a prediction hinges on whether or not AI becomes sentient and self-aware on its own.
There's really very little evidence (that I'm aware of) to suggest any likelihood of this occurring.
First of all, one needs to establish a possible mechanism for this self-activated change to occur.

For living things, we know of evolution through genetic mutation and inheritance. But current computers have no equivalent mechanism - after all, they didn't come about that way in the first place, so why would some unknown, non-existent new mechanism of change and adaptation suddenly appear without our deliberately engineering it in? (And bear in mind we can't do this deliberately, so the odds of it happening by accident are essentially non-existent.)

Then you also need a driver to direct that change toward (in this case) a consciousness that mimics our own - after all, these concepts are of AIs challenging us as independent entities in need of resources we covet and/or need (i.e. in competition, hence in conflict).

Currently they are only able to exist through the extreme agency of humans, and because that agency is so extreme (cost, time, effort, resources, etc.), humans only provide them with the capability to produce what humans want or need.

I think that because they (computers) are now capable of imitating human responses, humans are often fooled by their biases (the natural biases that allow us to continue to function) into imagining some independent agency within the machine, but that's total and complete illusion!

That's in part one of the big dangers of LLM AIs - we are fooled into thinking they are something they are not, and respond incorrectly to them as though they were another human. Anyone remember the engineer (it was Google, I believe) who became convinced the LLM was actually self-aware? Even though he knew how it was made and how it worked, and knew there's no facility for independent thought as we know it, interacting with it convinced him it was self-aware, against all logic. So people without that knowledge - of the fact it simply can't do this - risk being sucked (and suckered) into it! Which means they'll be suckered in by the human(s) who gave it the purpose they desired (since it has no purpose of its own).

This is all anthropomorphising an object (as humans are so inclined to do) and instilling human traits into it, when the only human traits it has were those built in at the start, and those are so limited in nature and scope that they barely count in this regard. We did this with objects and processes of nature in developing (at least some of) our early religious concepts - instilling self-agency into things that change without our interference, giving them a 'life' of their own, and thus allowing behaviours and personalities to be 'hung' on those concepts to further humanise them and make them recognisable to our own understanding.
 
