Tax whatever legal entity Alphabet has in the US on the dollar value of ads shown to Americans, according to their best geotargeting data.
Given the global nature of the internet, this would be extremely hard to set up in a loophole-free way. I would imagine that Alphabet would stop selling ads itself and set up technically independent offshore companies that sell the ads and pay a heavy "license fee" to Google for input on where to place them.
Or maybe tax the American companies that buy the ads directly; there is not much point in buying ads targeting Americans if you don't have a sales presence in the country.
That might work. It would still be quite tricky to handle ads bought by a company in country A from a company in country B but actually targeting customers in country C. You would need a lot of legislation, and international cooperation, to close all the loopholes.
But let's assume this can be successfully implemented. You could make ads so unprofitable that nobody would want to display them (at which point you might as well ban them outright). But as long as there is still profit to be made, this is unlikely to break the hold of Google and Facebook on the advertising market: if the profit margin on your ads is slim, you will try even harder to target them as accurately as possible. And the companies providing the best targeting are those with the most knowledge about the visitors - Google and Facebook.
Well, the first step is allowing researchers access to the sorting algorithms, the same way I am allowed to see how my water supplier tests the water.
All of their power is in the sorting methodology. That's a LOT of power to implicitly control people. And right now, the only people in charge are shareholders.
Another step would be to open up the amounts of advertising spend broken down by selection criteria, even just raw numbers.
I have heard this demand for public scrutiny of the algorithms quite often, and superficially it sounds good. But I believe it is based on misconceptions about how these algorithms work. "Selection criteria" sounds like there are people handcrafting rules for how to roll out ads, and like these criteria are easily understandable to humans. Most likely this is not the case at all. Instead, I strongly suspect that Google and Facebook take more or less generic machine-learning algorithms and throw huge amounts of data and computing power at them, then tune some very technical parameters to get the best results. This means the algorithms are probably very interesting from a technical point of view but, without the data, quite boring socially. Researchers with access to the algorithms would probably conclude that they are very sophisticated and very good at solving certain classes of problems, but without the data, they would not be able to say what the algorithms actually do.
Sharing the data in addition to the algorithms is extremely problematic (it's bad enough what Google knows about you; everyone else knowing all that as well would be worse, in my opinion). But even if you have the data (and are able to process it), you are unlikely to make any discovery like "you need to tune this parameter to make the algorithm less racist", because machine-learning algorithms tend to be black boxes: you put data in and get results out, but it is nigh impossible to say why exactly the output was what it was. I doubt that even the developers of these algorithms are capable of removing racist bias from them.
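To make the point concrete, here is a minimal toy sketch (a plain logistic regression in numpy, nothing resembling what Google or Facebook actually run): the "algorithm" is a few lines of fully inspectable code, yet the learned weights are just numbers whose social meaning depends entirely on what the (here synthetic, anonymous) features in the training data represent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: 200 samples, 5 anonymous features. Whether
# feature 0 encodes something benign or something discriminatory is
# invisible in the code -- it lives entirely in the data.
X = rng.normal(size=(200, 5))
hidden_w = np.array([2.0, -1.0, 0.0, 0.5, 0.0])  # relationship buried in the data
y = (X @ hidden_w + rng.normal(scale=0.1, size=200) > 0).astype(float)

def train_logreg(X, y, lr=0.1, steps=500):
    """The entire public 'algorithm': logistic regression via gradient descent."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))   # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)     # gradient step on log-loss
    return w

w = train_logreg(X, y)
# Auditing the code above tells you it is a competent classifier.
# Auditing the output tells you nothing about what it is classifying:
print(np.round(w, 2))
```

Even with both code and weights on the table, all an auditor sees is an opaque vector of coefficients; judging whether the model is harmful requires knowing what the features and training data mean, which is exactly the part that is hardest to share.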