
Do you agree that Technology is going too far?

I probably still have the old Betamax player under the house somewhere. If I look, I may also have the promotional video, on Betamax of course, where a presenter with an outrageous French accent extolled its virtues. We don't even have the VHS player in the stack anymore. Or the digital recorder that captured free-to-air broadcasts and saved them to hard drive. Remember programming the start and finish times for a program for when you were going to be out of the house, then getting back to find the broadcast had been delayed and you'd missed the end? Now it's all streaming. I had put a new TV aerial on the roof (because the possums and trees had trashed the old one), then we bought a new TV. It wouldn't fit where the old one had been, so we moved it to a new location. I bought the cable and connectors to re-route the antenna cable to the new location and then … bought an Apple TV box and hooked the new TV up to Wi-Fi. The antenna is still on the roof - maybe the bush turkeys use it for roosting.
 
I was actually a little sad to see DVDs replacing analogue video tape. Once the hardware was cheap enough for the quality to beat analogue I could see the appeal, but I came from an analogue age, one already well established with audio tape. The ability to record any real-time broadcast at the push of a button was amazing, and you could knock up your own compilations of music and play all sorts of other games. The timings could mess up with delayed recordings, but it was still just so convenient to hit record if something interesting was playing (this being before the 'information availability' age).

But I started finding video a more and more troubling format as I grew up, and I stopped watching TV altogether in the early 90s. It's not just the style and content, but the whole paradigm of a recorded moving image. I find reading hard work too, but it's so much better for absorbing and understanding what's being said, because I'm not forced into someone else's pacing.

I like the idea of streaming in the sense of it being self-selected and on demand, but I greatly dislike the business models it's mostly based on. But that's just tech for you: it can be used for good and for bad alike!
 
I loved my Sony Super Beta video recorder. One of those with a flying erase head that made seamless edits. And I had an amazing MGA monitor to watch my videos on. A TV that actually had built-in DBX noise reduction. Though I still ended up with two VHS recorders as well.

Funny to think that, much like in the DOS and Windows 3.1 years, while the technology was vastly inferior to the present, I enjoyed technology back then even more than I do now.

I guess in my own mind there was a precipice I crossed between technology being a new and fun adventure and the point when so much of it became kind of a PITA. My bad...o_O
 
Here is a story where an innocent person's image is probably used to perform a scam, and where someone is scammed out of a large amount of money due to deepfakes:

Story
 
There is a lot of concern that AI is a very dangerous thing, myself included.
However, I do not think AI is more intelligent than humans. That would be a thermodynamics violation.

Still, I worry a lot about AI and our future. My concern, however, is in the ways that humans will use it. AI is a tool. It is commonly used to "fool" people.

When making Google searches, I find the AI search results to be laughably ridiculous. With that, I no longer consider AI to be Artificial Intelligence; I now consider it to be Artificial Idiot.

The danger is not that AI is smart. It doesn't have to be.
 
My partner and I are both very against how much AI has grown and become a nuisance. Often when you search Google for images, AI images come up near the top even for basic things. Every service is starting its own AI bot now, and AI seems to be everywhere online when people just want to see real art or real photography.
 

Yes, it seems this practice is on the increase on YouTube, particularly in the last month. And the quality isn't even that good... such as showing animals like bears and tigers infested with barnacles, something that only happens to creatures like sea turtles and whales. The rendering is still so shoddy that you can see that while the human beings have been altered, the animals themselves appear to be entirely created by flawed AI.

Some watermark their presentations as AI while others don't say a thing. A waste of time to watch when you realize it's all created from scratch, devoid of any reality.
 
I think AI is impressive as a curiosity. But it's not bright and is often wrong. I tried out ChatGPT and asked it for a list of the number one songs in the UK at Christmas 1985. It gave me two results that were from 1987, and it literally said so in the text it produced. I wrote a prompt that explained that it had made an error and asked it to create a new list without the inaccuracies. It thanked me for pointing out its mistakes and produced a new list with more results that were incorrect. Then when I pointed that out and asked it to try again, it "crashed" and wouldn't allow me to log back in for 24 hours, saying that I had used too many resources or something and that I couldn't start a new "conversation" until 11pm the next day.

It's not really intelligent. It's just a sophisticated prediction algorithm that produces results based on patterns it's been trained on from the internet, and usually the material it was trained on belonged to other people. They had no permission to use it. It uses far too many resources such as electricity and water. All of this to enable people to pretend they have skills that they don't. People are cheating at schools and universities by getting it to produce their work.
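Just to illustrate what I mean by "prediction based on patterns", here's a toy sketch (nothing remotely like a real LLM, just a made-up word-count example I knocked together) that picks the next word purely from the statistics of the text it was given:

```python
import random
from collections import defaultdict

# Made-up "training text". A real model is trained on vastly more, but the
# core idea of sampling the next word from learned statistics is the same.
training_text = "the cat sat on the mat and the dog sat on the rug"

# Count how often each word follows each other word.
follows = defaultdict(lambda: defaultdict(int))
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

def next_word(prev):
    options = follows.get(prev)
    if not options:
        return None
    # Pick proportionally to how often each word followed `prev` in training.
    return random.choices(list(options), weights=list(options.values()))[0]

word, generated = "the", ["the"]
for _ in range(8):
    word = next_word(word)
    if word is None:
        break
    generated.append(word)
print(" ".join(generated))  # e.g. "the cat sat on the rug"
```

It obviously has no idea what a cat or a rug actually is; it just reproduces the statistics it has seen, which is my point.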

Sadly, basic skills are going to be lost, and the people who do have them will be laughed at for not just "getting AI to do it". They won't be able to understand why you would want to be good at drawing or writing etc. when you can just get a machine to do it.

I think AI could have some useful applications, but it's not the way it's being marketed. It's portrayed as a way to get ahead without putting the work in. As an alternative to actually investing in skills and education.

The only AI I might actually use is one that will separate the audio on my band's CD, which we made when we were teenagers, into individual tracks. The master tapes are long gone and, while it's possible with some clever editing to partially separate the recordings we do have, it's impossible to achieve the quality you get from AI.

The problem I have is that I will be essentially providing the AI with training data I have created and maybe my ideas could be used by other AI to rip off my work. So I haven't done it yet.
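If I ever do try it, one option I keep coming back to is Demucs, an open-source separation model that runs entirely on your own machine. This is a rough sketch only, assuming it's installed with "pip install demucs" and with "our_song.wav" standing in for the real file:

```python
import subprocess

# Run the Demucs command-line tool on a local file. As far as I understand it,
# the default writes drums, bass, vocals and "other" stems into a "separated/"
# folder. "our_song.wav" is just a placeholder for the actual recording.
subprocess.run(["demucs", "our_song.wav"], check=True)
```

Running it locally like that would at least sidestep my worry about feeding the recordings into someone else's training data.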

I think if AI was promoted as being a tool then that would be better. A paint brush is a tool but the paint brush doesn't paint for me. Sadly AI is being pushed as a replacement or alternative for skill and talent. I think that's sad.
 
It's not really intelligent. It's just a sophisticated prediction algorithm that produces results based on patterns it's been trained on from the internet, and usually the material it was trained on belonged to other people.

I'm not worried about the AI we have now; what worries me a little is what will happen when it gets more training and gradually gets better and better at what it does. 🤔 It will just keep evolving, and it will probably happen faster and faster.
 
They have kinda hit a bit of a wall with it, as far as I can see from articles and videos on the subject. The amount of energy it takes to run ChatGPT has become unsustainable, and that's for the rather lackluster responses it can give now. They can't really improve it without massive increases in computing power and even greater energy costs.

So I don't think it's likely to be something that improves exponentially. It will improve of course, but I think the buzz will die down as people realise that they can't rely on it and that the hype doesn't match the reality. Of course it will still be used by people who want to cheat (for want of a better word) and by people who have a genuine use for it (people with disabilities, for example).

The only thing that might change this is the new DeepSeek AI from China. The only issue is that it had to use ChatGPT to train it in the first place, which is one of the reasons it became so efficient.

I think that AI improving is a bit like trying to travel at light speed. The closer you get to it, the more difficult it is. One day we will figure it out, but I think we are a very, very long way off from that.
 
Please can you let me know if anyone on this forum finds this article scary/worrying/disturbing? (Included in this article is "...and how best to guard against risks for humanity in general...") link to story
 

Right now I'd be far more concerned about guarding against risks from humanity. :oops:
 
They have kinda hit a bit of a wall with it, as far as I can see from articles and videos on the subject. The amount of energy it takes to run ChatGPT has become unsustainable, and that's for the rather lackluster responses it can give now. They can't really improve it without massive increases in computing power and even greater energy costs.
You've likely read the news on the new Chinese AI that just launched, but it has shown that many assumptions we had, even those of us who have drilled down a bit into how AI works and aren't just judging it on the output alone, are not as set in stone as we imagined (well, speaking for myself at least).

If we, for example, factor in the use of AI in creating new AIs (I don't mean self-learning etc., but rather using AI tools to speed up training and other areas where shortcuts can be found), not only are we likely to see leaps we couldn't predict, but as different methods and types of AI feed into each other more, we'll see exponential increases in their power. In fact I'd be deeply surprised if AI, despite being a fundamental step forward in general, didn't follow the prevalent trend in technology of advancing more rapidly as advances in different but connected areas enable new and novel approaches.

I know very little about it and need to spend some time reading up, but I've only really known anything about the 'traditional' LLM method of using probability and massive training sets and the like to produce output (I know there's more to them than just that, but I believe at their core this is their foundation). But I've read that significant advances are being made in the area of reasoning, and if this represents what I assume it does (more learning needed on my part, though), this could be a whole new kettle of fish, one that may even be big enough for a few cans of worms? 😉

Currently the well-known LLM AIs are only intelligent in the field of complex arbitrary pattern matching. They have no understanding of what's asked of them or of what they answer; they can't create anything genuinely new, they can only take the creations of humans and manipulate them according to statistical transformations based on the probability of a word occurring in specific circumstances.
But something that actually attempts to understand and reason could be the beginning of a possible golden age of AI and mankind (ha! If only!), or it could accelerate the rush to our fate.

In the end I think it's likely that a reasoning machine would reflect the values it was given as the criteria for its decision making, values supplied both deliberately and subconsciously by its designers/makers. That raises the question of what the inevitably exceptionally wealthy and powerful people behind them actually want these things to do for them.
The better these machines are at doing this, the faster it'll happen. It won't usher in anything new or novel unless it benefits those who create and control those AIs.
 
The only thing that might change this is the new DeepSeek AI from China. The only issue is that it had to use ChatGPT to train it in the first place, which is one of the reasons it became so efficient.
Sorry! I blanked that out when I read your post! Ignore my comment on that please, I was just being a twit!

Out of interest, I know OpenAI claimed ChatGPT was used to train it, but is there actually any independent evidence of that? It would be an important part of appraising it.
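For anyone curious what "using ChatGPT to train it" would even mean in practice, the usual claim is distillation: training the new model to imitate an existing model's outputs rather than learning only from raw text. Here's a toy sketch of the idea only (tiny made-up models and random data, nothing to do with DeepSeek's actual setup):

```python
import torch
import torch.nn.functional as F

# Tiny stand-ins: in real distillation the "teacher" would be a large existing
# model (e.g. one queried through an API) and the "student" the new model.
vocab_size, hidden = 10, 16
teacher = torch.nn.Linear(hidden, vocab_size)
student = torch.nn.Linear(hidden, vocab_size)
optimiser = torch.optim.SGD(student.parameters(), lr=0.1)

contexts = torch.randn(32, hidden)  # made-up batch of "contexts"

# The teacher's predicted next-word distribution is treated as the target...
with torch.no_grad():
    teacher_probs = F.softmax(teacher(contexts), dim=-1)

# ...and the student is trained to match it (KL divergence between the two).
student_log_probs = F.log_softmax(student(contexts), dim=-1)
loss = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")
loss.backward()
optimiser.step()
print(f"distillation loss: {loss.item():.4f}")
```

Part of why independent evidence is hard to come by is that, from the outside, you mostly only ever see the finished model's outputs.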
 
