I'm not quite sure where this unpredictability would come from, except from human input (see Civ 6 and its AI bugs). An AI that is unpredictable is useless: it's supposed to do what it's programmed to do, and if it doesn't, it's malfunctioning or bugged. By the way, a learning AI is already in use at NASA. A self-repairing AI would be even more useful, but we're not quite there yet. None of which affects predictability, however.
The decision tree of a sufficiently advanced AI is opaque. Although the builders can tune the outcome in aggregate, they cannot predict what the AI will do in any particular case, and for any individual decision it is at least impractical to find out why the machine decided the way it did. At my new job, we let software make real-world-impacting decisions based on a machine-learning algorithm (it does not have to be NASA; there are plenty of public algorithms available, if you have the data). There are obvious cases where one look at the data tells you what the decision should be. But for the cases at the edge, we do not understand why the decision went one way in one case and the other way in the next.
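Not our actual system, but a minimal scikit-learn sketch on synthetic data of what those edge cases look like in practice: the model is confident where the data is obvious, while the inputs closest to the decision boundary get labels that no human-readable rule accounts for.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Train an ensemble on synthetic data standing in for real decisions.
X, y = make_classification(n_samples=5000, n_features=10, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)
proba = model.predict_proba(X)[:, 1]

# Clear case: the model is essentially certain, and a look at the data agrees.
print("clear case:", proba.max().round(3))

# Edge cases: the two inputs nearest the 50/50 line can receive opposite
# labels, and the "why" is buried in hundreds of trees, not a readable rule.
edge = np.argsort(np.abs(proba - 0.5))[:2]
for i in edge:
    print("edge case :", model.predict(X[i:i + 1])[0], proba[i].round(3))
```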
And I'll disagree that the risky AI is not privately owned. Market trading algorithms, for example, operate at speeds that make them effectively autonomous on human timeframes. And those trading algorithms can and will be privately owned. They're designed with one goal in mind: make the owner more money. They're not 'designed' to care about the aggregate. The owners individually assume that 'the system' can handle any combination of trades they perform. Sure, there are many public pieces of software. But are they robust enough to protect against AI? Why think so?
I admit that there are privately owned algorithms. But these are usually limited to applications with a clearly defined rule set - essentially games, for which you can generate almost perfect data and have a clear definition of success. High-frequency trading is one of these games, and it is indeed linked to real risk. However, it is relatively easy to safeguard (which I believe is already done) by implementing speed bumps that halt trading if the results become too extreme. These bring the decisions back to human time frames.
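To make the speed-bump idea concrete, here is a hedged sketch; the threshold numbers and the Order fields are invented for illustration, not taken from any real exchange's rulebook. The point is only that a dumb, human-auditable guard sits between the fast algorithm and the market.

```python
from dataclasses import dataclass

@dataclass
class Order:
    symbol: str
    quantity: int
    price: float

def passes_speed_bump(order: Order, last_price: float,
                      max_move: float = 0.05, max_quantity: int = 10_000) -> bool:
    """Return True if the order may proceed automatically."""
    price_move = abs(order.price - last_price) / last_price
    if price_move > max_move or order.quantity > max_quantity:
        return False  # pull the decision back into a human time frame
    return True

# Usage: the trading loop only executes automatically when the guard passes.
order = Order("XYZ", 250_000, 9.10)
if not passes_speed_bump(order, last_price=10.00):
    print("Order held for human review")
```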
But for most applications, what is really missing is not the algorithm but the data. If you wanted a system that cares about the aggregate, what would you use as a success indicator? The problem is that most such indicators are much too broad and depend on a lot of parameters outside your model, so your result will not work at all. And even if you find a way to improve the indicators you feed into the algorithm, you might just improve those indicators and not the aggregate wealth. The discrepancy might be immediately obvious in another parameter that your AI does not care about, because it was not told to.
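A toy illustration of that trap (all numbers invented): if the success indicator is "average income of the people the system counts", the easiest way for an optimizer to raise it is to stop counting low earners, which improves the indicator while the aggregate wealth it was supposed to track does not move at all.

```python
incomes = [18_000, 22_000, 35_000, 60_000, 120_000]

indicator_before = sum(incomes) / len(incomes)
counted = [x for x in incomes if x > 25_000]   # "optimize" by excluding low earners
indicator_after = sum(counted) / len(counted)

print(f"indicator: {indicator_before:,.0f} -> {indicator_after:,.0f}")
print(f"aggregate wealth unchanged: {sum(incomes):,}")
```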
The danger of AI is not so much in the algorithm as in the data. If you feed a learning AI a dataset that discriminates based on certain characteristics of people, the AI will learn to discriminate as well. And unlike humans, who are able to self-reflect or simply die after some time, the AI might keep doing that forever. This can become a self-fulfilling prophecy: people from a certain ethnic or cultural background are not given a chance because the data clearly shows they will not succeed, and they do not succeed because they are not given a chance. Even if you are aware of the problem, it is quite hard to prevent such information from leaking into the algorithm, because learning algorithms are very good at finding it through proxies, like names or locations that people of a certain background share.
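Here is a minimal sketch of that proxy leakage, using synthetic data and scikit-learn; the column names and numbers are made up. The protected attribute is deliberately left out of the features, yet the model reproduces the historical bias because a zip-code column encodes it almost perfectly.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000
group = rng.integers(0, 2, n)                    # protected attribute
zip_code = group * 70 + rng.integers(0, 30, n)   # proxy: correlates with group
skill = rng.normal(0, 1, n)                      # legitimate signal
# Historical decisions were biased: group 1 was given fewer chances.
hired = ((skill - 0.8 * group + rng.normal(0, 0.5, n)) > 0).astype(int)

X = np.column_stack([zip_code, skill])           # "group" is NOT a feature
model = LogisticRegression(max_iter=1000).fit(X, hired)
pred = model.predict(X)

print("predicted hire rate, group 0:", pred[group == 0].mean().round(2))
print("predicted hire rate, group 1:", pred[group == 1].mean().round(2))
# The rates differ even though the model never saw "group": the zip code
# leaks it, and the biased labels teach the model to keep discriminating.
```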
But a lot of our data is privately owned. When it comes to our data, we're the product, not the customer. How much of the market value of the various social apps is embedded in the data they hold about us?
I would say: almost all of it.