I am not so sure. You seem to assume that very high intelligence necessarily results in a high level of autonomous thinking, i.e. thinking we cannot control. But I don't see a good reason to make that assumption. High intelligence certainly enables autonomous thinking, but in principle, I don't see why it should not be possible to create sandboxed high intelligence, i.e. high intelligence that is only allowed a pre-defined "room" to navigate in.
An easy example is the Three Laws of Robotics formulated by Isaac Asimov.
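To make that "room" concrete, here is a minimal sketch in Python. It is an illustration of the principle only, not a serious safety design, and all names in it are hypothetical: the planner may be as clever as you like, but a fixed, dumb gate decides which of its proposals ever get executed.

```python
from typing import Callable

# A minimal sketch of the "pre-defined room": the planner may be arbitrarily
# intelligent, but a fixed gate decides which actions are ever executed.
# All names here are hypothetical, for illustration only.

ALLOWED_ACTIONS = {"read_sensor", "move_arm", "log_message"}

def sandboxed_step(propose_action: Callable[[], str]) -> str:
    """Run one decision step; the proposal is vetted before anything happens."""
    proposal = propose_action()   # arbitrarily smart, fully untrusted
    if proposal in ALLOWED_ACTIONS:
        return proposal           # inside the room: allowed through
    return "noop"                 # outside the room: silently discarded

# Even a planner that "wants" to do something forbidden gets nowhere:
print(sandboxed_step(lambda: "move_arm"))          # -> move_arm
print(sandboxed_step(lambda: "rewrite_own_code"))  # -> noop
```

The point of the sketch is that the gate does not need to be intelligent at all; its whole job is that the room's walls are defined outside the mind that lives in it.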
I think your mistake is to jump to conclusions based on how intelligence works in the human brain. But we can create an electronic brain that operates in fundamentally different ways and combines high intelligence with high dependency and controllability. Or why should we not be able to? In the end it is all about a proper infrastructure of information, nothing more. And you can build any kind of system, can you not?
However, it is an interesting question how big the risk would be of majorly screwing up that effort on a large scale and accidentally allowing more autonomy than we wanted. To stress: not because it could not have been done right, but because accidents and screw-ups happen all the time, leading to unintended consequences.
You cannot trust the private sector to handle that risk responsibly, at all. That seems obvious to me. So politically tough and vehemently enforced regulations seem like a must to me. I don't find it unlikely that we would eventually have to rigorously criminalize private ventures into advanced electronic minds, as we do with nuclear weaponry nowadays.