Those humans only get into power because there's a population of voters and consumers with high degrees of confidence in the face of very little knowledge. Those in positions of power are a symptom, not a cause. That doesn't change the basic thrust of your point though - the ability of the sensible ones to change things is still very limited. Perhaps more so. Those in authority can be susceptible to sustained opposition; the masses are somewhat immune to it.
That sort of moral code is what got us into a population boom over the last 250 years. Industrial revolution. Agricultural revolution. Medical revolution. All involved new technologies that seemed sensible at the time because they saved lives and reduced suffering. But the result was a huge population increase, and that's fundamentally what is damaging the planet - curing diseases, increasing food production and nutrition, fixing injuries, so much innovation and invention, without any concern for the problems that a jump from 2 billion to 10 billion would bring. So I'd argue we've already seen that such moral codes have unintended and unforeseen consequences. In this context, AI is just a continuation of what is already happening. Trying to put shackles on AI is kinda tinkering around the edges of the problem. AI is a risk, but it's the intrinsic human desire to cheat death that has got us where we are today. I'll be long gone before any progress is made on that front.