Do you agree that Technology is going too far?

Yeah, it's got a nasty flavour to it. MS showed their intentions unintentionally when they messed up with Co-pilot, having it take screenshots every few seconds or so to build up a training data set without the user's knowledge of what was happening, and then, when outed, claiming it's only screenshots saved to the device's hard drive - but they forgot to mention what happened to that data after it was OCR'ed and analysed by the AI, and all stored where? Oh! It's in MS's data warehouses, what a surprise, must have just slipped their mind! (ho hum)
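
Just to make concrete what that kind of capture loop looks like, here's a rough sketch in Python of the general shape of it - screenshot on a timer, OCR it, keep the text. The libraries (Pillow and pytesseract), the interval and the local text files are purely my own choices for illustration, nothing to do with what MS actually ships:

    import time
    from PIL import ImageGrab      # screen capture (Pillow)
    import pytesseract             # wrapper around the Tesseract OCR engine

    CAPTURE_INTERVAL_SECONDS = 5   # "every few seconds or so"

    def capture_once(index: int) -> None:
        image = ImageGrab.grab()                   # grab the whole screen
        text = pytesseract.image_to_string(image)  # OCR it into searchable text
        # Where this text ends up is the whole argument: kept on the local
        # disk as claimed, or shipped off somewhere else entirely?
        with open(f"capture_{index}.txt", "w", encoding="utf-8") as f:
            f.write(text)

    if __name__ == "__main__":
        for i in range(3):                         # a few iterations for the sketch
            capture_once(i)
            time.sleep(CAPTURE_INTERVAL_SECONDS)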

Essentially data theft in no uncertain terms, but with a neural network the data is stored in such a fashion that even if you know your data is in there, getting it out in a form that would satisfy a court is another matter altogether, and the tech/AI firms know this and that's one reason they feel fine with such 'flexible' ethics. I mean, how can it be stealing if no-one can prove it?

Personally I find it boggling (though not surprising) that companies are allowed to get away with this, for little more reason than that they can, and that there's a lot of money to be made by the people who enabled the behaviour. It says far more about these companies than any of their mealy-mouthed PR messaging ever will.
 
but they forgot to mention what happened to that data after it was OCR'ed and analysed by the AI, and all stored where? Oh! It's in MS's data warehouses, what a surprise, must have just slipped their mind! (ho hum)
This is why Windows 10 is my final stop with MS OSes. I don't really use Windows as a rule, but I do have some things, like my EEPROM programmer, whose software runs only under Windows unless you want to use the CLI under Linux. I prefer the GUI, as it's much easier to do things in a few clicks.
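
For what it's worth, the CLI route under Linux isn't much more than wrapping a single command. A hedged sketch, assuming a TL866-class programmer driven by the open-source minipro tool (which may well not match the hardware being discussed, and with placeholder part and file names), would look something like this:

    import subprocess

    # Hedged sketch: driving an EEPROM programmer from the Linux command line.
    # Assumes a TL866-class programmer and the open-source "minipro" tool;
    # the part number and file names below are just placeholders.

    def write_eeprom(part: str, image_path: str) -> None:
        # minipro -p <part> -w <file> writes an image to the chip
        subprocess.run(["minipro", "-p", part, "-w", image_path], check=True)

    def read_eeprom(part: str, dump_path: str) -> None:
        # minipro -p <part> -r <file> reads the chip back out for verification
        subprocess.run(["minipro", "-p", part, "-r", dump_path], check=True)

    if __name__ == "__main__":
        write_eeprom("AT28C256", "firmware.bin")
        read_eeprom("AT28C256", "readback.bin")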

I will probably find some way of using stuff like that in MacOS. So far Apple haven't thrust their AI onto users and certainly not in such an intrusive way.

I may torrent the occasional movie, mostly because it's quicker than transcoding it from the DVD I own. But still, I don't want MS snooping on my activities on principle. Who knows what they could use that sort of power for in the future. Recent events show that you're basically a coin toss away from finding yourself a target if the politics change where you live. I'm skirting the subject here on purpose as I don't want this post to stray into actual political territory.

What I am saying, though, is that as things stand MS may handle your OCR'd data responsibly, but that could change if some other party forces a change of policy on them in the future.
 
I will probably find some way of using stuff like that in MacOS. So far Apple haven't thrust their AI onto users and certainly not in such an intrusive way.
I didn't pay much more than passing attention at the time so may have some detail wrong, but I believe Apple just recently had to pull their AI news collation app because it had repeatedly attributed 'hallucinated' news articles to providers such as the BBC. Apple dug their heels in for a while but they were clearly unable to resolve it in a timely fashion and had to pull what apparently was touted as a flagship AI product (i.e. major selling point).

How you measure the impact of that against the (imho) despicable Co-pilot lies and (again imho) data theft is going to be difficult, and would probably have to be considered on a case-by-case basis I imagine.
 
I found this link - please, what are people's views? On the tech itself, and are the Chinese people oppressed? Link
The question of oppression would likely be considered a political discussion?

Regarding the link, it's from 3½ years ago - have you looked at anything more recent about how this has panned out?
 
I didn't pay much more than passing attention at the time so may have some detail wrong, but I believe Apple just recently had to pull their AI news collation app because it had repeatedly attributed 'hallucinated' news articles to providers such as the BBC. Apple dug their heels in for a while but they were clearly unable to resolve it in a timely fashion and had to pull what apparently was touted as a flagship AI product (i.e. major selling point).

How you measure the impact of that against the (imho) despicable Co-pilot lies and (again imho) data theft is going to be difficult, and would probably have to be considered on a case-by-case basis I imagine.
Yeah, they had to pull it. I fully expect that the AI news aggregation app will be shelved for a very long time. Generally Apple retires products that fail that spectacularly. It will probably reappear in a few years as an AI-curated news feed, basically the same as you get currently but with more articles based on what you might be interested in.

This is basically what happened with MobileMe when it was a total embarrassment at launch. It limped on for a while as a free service before it was replaced by iCloud. The MobileMe name was considered toxic by that point, and iCloud was a bit less ambitious, at least to begin with.

But usually with Apple they just bin a product that causes them headaches, like the social media features they added to iTunes (Ping), the iPod Hi-Fi, and the butterfly keyboards.
 
But usually with Apple they just bin a product that causes them headaches, like the social media features they added to iTunes (Ping), the iPod Hi-Fi, and the butterfly keyboards.
That's a major difference between Microsoft and Linux-based systems. In Linux, programs are subject to the theory of evolution: the weak die out and only those fit for purpose survive. Microsoft will keep flogging the same dead horse until long after it's started to smell bad.
 
That's a major difference between Microsoft and Linux-based systems. In Linux, programs are subject to the theory of evolution: the weak die out and only those fit for purpose survive. Microsoft will keep flogging the same dead horse until long after it's started to smell bad.
But you can't deny it made them exceptionally wealthy? 🙄😏
 
While I've every respect for Hinton as a scientist, I found this disappointing myself. He attributes anthropomorphic qualities to AIs without any explanation beyond the claim that "they are conscious", never explaining what that means or how it compares to our consciousness; it all seemed predicated on his possibly correct but unexplained views.
He also doesn't define the different types of AI, how they are (or could/would be) used, or the different techniques behind them. It felt like an opinion piece - maybe he just feels a proper explanation is inappropriate for a mass audience, but he seems to be presenting an uncompelling argument regardless, relying more on reputation instead.

To use a simple example of what I think was a poor argument, he claimed technology hasn't led to job losses, using ATMs as an example; but the fact of the matter is that with the introduction of banking automation the number of high street branches has drastically diminished. One of the major drivers of automation is the reduction of costs in the form of employees' wages, which are often among the highest expenses a company has.

Another was the thought experiment of replacing a single neuron with an artificial neuron and claiming this shows AIs can be conscious. But that's simplistic: it conveniently ignores any arguments against that being a linear rule (that if one neuron can be replaced with no loss, then they all can be), and it ignores the fact that the replicated artificial neuron could only come about by copying the original biological neuron, which itself only came to be because of all its neighbouring biological neurons. Maybe he's trying to present an 'explanation for dummies', so to speak, but if so, I think it's a poor one.

AI isn't going to take us over in the foreseeable future; it's just going to empower already powerful humans even further and give them ever greater control, which is going to be one of their primary goals when enabling/creating those AIs.
 
I'm currently holding out as long as possible because every new cell phone seems to come with AI, and I really don't want that forced on me - not for anything I use my phone for, nor for anything I still use my laptop for. Boo.
 
