This is where regulations should be in place. The FDA and the insurance companies are actually quite quick to deny medications and treatments, at least in the US. The tech is in its infancy and currently being used for folks who are paralyzed. Could it help the blind? Parkinson's? Time will tell. As with anything new, it's easy to let the mind run amok with fear-based thoughts and slippery-slope arguments against it.
I see your point. It's no good to throw the baby out with the bathwater. But excessive optimism is another problem: real dangers get lumped in with the alarmism and pessimism, then dismissed so that projects can be pushed through. There is a long line of evidence for that when it comes to "new and exciting" technology (and other endeavors, honestly).
What I don’t like is the hype built around using implants as the next level of consumer electronics. That is the ultimate goal, not a treatment for neurological disorders: Musk wants to create a platform device that can be repurposed.
Consequences might not become dire or obvious until 10 or even 30 years later. No one wants to spend 20 years just testing their tech before releasing it to the public; the paradigm in Silicon Valley is that you test your product by releasing it and seeing what happens. The consumer provides the test data. That shift is part of why digital technology has outpaced other industries in profitability. It might work nicely for an app, where the stakes are low (well, until they aren't, as we've seen with Facebook and Cambridge Analytica), but not for high-stakes products that could result in death or injury. Yet that is exactly what Elon Musk did with Tesla, despite regulatory opposition:
https://www.washingtonpost.com/technology/2022/03/27/tesla-elon-musk-regulation/
From the article:
Regulators have been slow to take action on some software suites that power automated features, in part because they are wary of appearing to stifle emerging technologies, the former officials said. There also are few rules governing these technologies, further hindering efforts at regulation.
Also see:
Tesla's HR Policies to Protect Itself Are Precisely What Exposed Its Wrongdoings
Balan had no illusions about him or Tesla saving the world. On the contrary, the engineer described him as a "difficult personality" and used an NSFW synonym for "detestable person." Despite that, she gave him the benefit of the doubt and wrote him an email in April 2014 saying she wanted to tell him about all the problems she had found. Some days later, Balan was taken to a room believing she would have a meeting with the CEO. Musk was not there, and she said she was forced to resign.
Hansen made whistleblower complaints, and he is also suing the BEV maker for wrongful termination. In his words, "Tesla's tactics are legally questionable, consistent, covert, overt, and they employ them decisively and long-term with the specific intent of siphoning the energy, resources, motivation, and desire of victims who attempt to take a stand based on justice and personal integrity."
There are suggestions that Musk is running Neuralink the same way. He does not appear to be learning from his mistakes, if he even considers them mistakes.
I have zero faith that Musk and other tech moguls are suddenly shifting their business strategy away from what they built their empires on toward humble, long-term, thorough research for medical treatments. Yet Covid may have lowered the barriers to that market anyway: under emergency conditions, Covid treatments and vaccines followed a remarkably similar "release now, test later" pattern.
When Zilis was asked how they were going to handle security, she said they'd somehow make it hack-proof and figure that out later, even as they were already seeking to do test implants in humans. That is not a good sign that they are thoroughly planning for the end uses they're advocating; they are using the same "release now, test later" mindset, as far as they can get away with it. I don't anticipate it will be any easier to regulate Neuralink than Tesla.
Also note, Zilis is so devoted to Musk that she bore children by him via in-vitro fertilization. Although her conditions of employment state that she should have no conflict of interest, Neuralink decided she was not breaching her contract by having children with the company owner, because she said the relationship is not "romantic." (My sick sense of humor finds this comical, but I digress.)
In short, I don't see current regulatory frameworks as an adequate failsafe against the potential abuses when the technology is being developed and implemented by people with a history of abusive practices. My own experiences with regulatory agencies and businesses have reinforced how pervasive this is, though Musk is particularly overt. When money and power are involved, it is hard to get the right thing done. It is even worse to me, setting aside those difficulties, that we'd assign regulatory oversight to insurance companies, which were designed not for oversight but to shift liability and profit from it, and which have also amassed money and power with that model.
More broadly, AI tech is not being adopted because it has great advantages for the public; it is being forcibly integrated into people's lives by the will of a few corporations and groups, like Microsoft, Google, Amazon, and Meta. These companies are incentivized to create dependence on their technology for daily life and infrastructure (such as defense, power, and transportation), and they cause the very problems they then sweep in and claim to solve with more products and services. If this paradigm extends into body modification, I don't think it's unreasonable to say we might want to curb the enthusiasm and err on the side of caution.
Sorry, I'll get off the soapbox now. I've seen this kind of stuff firsthand, so I've thought about it for a long time. Unfortunately, I don't have a silver lining of solutions quite yet, except to continue discussing it with anyone interested.