Robin Hanson argues against treating superhuman general AI as an existential risk.

One point he repeatedly makes is that we already have self-optimizing superhuman intelligences on earth, namely corporations, and they do not appear to be destroying the world.

I don't buy it. Yes, corporations are self-optimizing superhuman intelligences of a sort. But they are severely limited by the slow communication channels inside them: humans exchanging words, emails, and meetings. How much faster can simulated neurons communicate inside a computer than human beings can with one another? The difference is many orders of magnitude, and at some point a difference in quantity becomes a difference in quality. Not to mention that, viewed this way, an AI can grow far larger than any corporation can.
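As a rough back-of-envelope illustration (the figures are my assumptions, not measurements: human speech carries on the order of 40 bits/s, per Coupé et al. 2019, while a commodity 100 Gbit/s datacenter link connects machines):

```python
# Back-of-envelope bandwidth comparison. Both figures are rough assumptions:
# human speech carries information at roughly 40 bits/s (Coupé et al., 2019),
# while a commodity datacenter link moves on the order of 100 Gbit/s.
speech_bps = 40        # assumed human-to-human channel: spoken language
link_bps = 100e9       # assumed machine-to-machine channel: 100 Gbit/s link

ratio = link_bps / speech_bps
print(f"machine channel is ~{ratio:.0e}x faster")  # on the order of 1e9
```

Even granting these numbers an order of magnitude of slack in either direction, the gap stays enormous.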