What struggle? Nobody is threatening their power.
The absence of a rebel column hammering on the palace gates doesn't indicate that a regime is absolutely secure in its power. The leadership are at odds with sectional and regional factions, the state is at odds with the military, the reformists are at odds with the Maoists, they're
all at odds with the populace. The leadership would be tremendously stupid to assume that they were the natural inheritors of a thousand-year empire just because they get to sit behind the biggest desks. The fact that they pursue so many draconian policies suggests that they're very much aware of this.
I do not agree. China is an old civilisation and it is very confident now that it has technically modernised.
I'm not talking about "China" as a grand historical abstraction. I'm talking about the leadership of the Communist Party regime. I'm not really interested in the imagined qualities of some intangible civilisation-ghost.
Consider that China, as a "civilisation", wasn't much less old in 1918. Would we have looked at the warlord cliques scrapping over the corpse of the republic-that-wasn't and pronounced "yes, here is a civilisation confident in its ancient station"? Probs not.
YouTube wants to sell you things here. The CCP wants to regulate your behavior for its own interests in a much more profound way.
That's entirely my point. If the most advanced advertising algorithms in the world end up trying to sell me patio doors for my rented, second-storey flat, why should we expect that the PRC are capable of producing far more detailed and sophisticated algorithms, and then acting on their output in any effective way?
The difference, I'll grant, is that the PRC may be in a position to collect more sweeping information, but it doesn't follow that they're better positioned to act on it. If algorithms see a YouTube history of music videos, movie reviews and clips from the
John Adams miniseries and go "here is a guy who wants to play a lot of online bingo", then it's not obvious that they'd get much more practical use out of my age, religion, or political affiliation.
That's assuming I even reported those things accurately or consistently; as a paper of record has observed, people are really pretty awful at self-reportage.
The one algorithm to rule them all is certainly science fiction, but that doesn't mean that algorithms won't be used for repression, or that they aren't being used for repression already. Given the internet censorship going on in China, I would be surprised if they didn't already have an algorithm that classifies the political content of internet postings. Such algorithms are not perfect and you will always find examples where they fail, but they're better than random guessing. From this they could generate a political reliability score, and once they have that, they could let the score influence many decisions: e.g. anyone whose score falls below the 5% quantile is not even considered for university, and when everything else is equal between applicants, the score is used as a tiebreaker.
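To make the mechanism concrete, here is a purely hypothetical sketch of the pipeline described above: a noisy classifier labels a person's postings, the labels are averaged into a "reliability score", the bottom 5% of scores are excluded outright, and the score breaks ties between otherwise-equal applicants. Every name and number here is an illustrative assumption, not a description of any real system.

```python
# Hypothetical sketch: noisy classifier -> aggregate score -> cutoff/tiebreak.
# All parameters (noise levels, post counts, the 5% cutoff) are assumptions.
import random
import statistics

random.seed(0)

def noisy_classifier(true_sentiment: float) -> float:
    """An imperfect classifier: the true signal plus substantial noise.
    Better than random guessing, but far from reliable on any one post."""
    return true_sentiment + random.gauss(0, 0.5)

def reliability_score(post_sentiments: list[float]) -> float:
    """Aggregate the classifier's per-post verdicts into one score."""
    return statistics.mean(noisy_classifier(s) for s in post_sentiments)

# Simulate a population: each person has an underlying disposition
# and a history of posts that noisily reflect it.
population = []
for _ in range(1000):
    disposition = random.gauss(0, 1)
    posts = [disposition + random.gauss(0, 1) for _ in range(20)]
    population.append(reliability_score(posts))

# Hard cutoff: scores below the 5% quantile are excluded outright.
cutoff = sorted(population)[int(0.05 * len(population))]
excluded = [s for s in population if s < cutoff]

def pick(applicant_a: float, applicant_b: float) -> float:
    """Tiebreaker: between otherwise-equal applicants, the higher score wins."""
    return max(applicant_a, applicant_b)
```

Note that the aggregation step is what makes an error-prone classifier usable at all: averaging twenty noisy verdicts per person washes out much of the per-post noise, which is why "better than random guessing" can still be enough for this kind of gating.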
What does this describe but the automation of how authoritarian regimes already operate? There's no fundamental shift here, just the ability for the state to shift manpower to other tasks. There's not even a guarantee that the algorithm is more effective than manual review, only the expectation that the savings in manpower will outweigh the costs of a less efficient system.
Digital automation is rarely more efficient in itself, only more cost-effective. It's more likely that any serious move towards "social credit" is understood by the leadership as a modernisation of the existing surveillance state rather than as something fundamentally new. They will present it to the public as something novel, but only because they're trying to present it as something other than a surveillance system.
Algorithms wield some power of their own. Credit scores are algorithms that cannot force you to pay your bills on time -- that takes a court decision and law enforcement. But they can encourage you to do so, to avoid potential repercussions in the future when you could be denied services. This is enough to keep many people in line without having to wield the power of the legal system. In the same way, if people believe that their actions are monitored and flow into some kind of algorithm that may have consequences down the line, they might behave in a way that avoids getting on the bad side of that system. To be effective, the system doesn't even need to be accurate; people just need to think it is accurate enough. Of course, there will always be people who see right through it, but for those you can apply more "traditional" repression.
But people aren't stupid. The Chinese populace are reported to be pretty cynical about their government, because they have to live with it, and know how inefficient and corrupt it can be, and how far it acts as a vehicle for factional interests rather than as a coherent force, whether or not that force is applied for the public good. If the Chinese government is going to convince its populace that its new systems can produce any coherent relationship between action and outcome, it is going to have to ensure that there usually is such a correspondence, and that any failure to achieve it appears as the exception, and it's absolutely unclear that the PRC has the means to do that. It's not as if people really trust the credit system, either; they just treat it as a great, dumb beast that has to be appeased.
I agree that this is part of what China is doing. But there is also the fact that it is now possible to collect data on a scale that the Stasi couldn't have imagined in their wet dreams, and China is at the forefront of utilising that for political control. Time will tell whether this leads to much stricter control of the population, whether the repression stays at the same level but requires fewer resources, or whether the system implodes at some point because it cannot maintain the illusion of control.
Most of that data is noise. That's a problem which has bedeviled security agencies for centuries: the majority of intelligence is empty. Human agents are rarely in a position to filter the useful from the irrelevant on the spot, and it's even less probable that an algorithm would be. The archives of security agencies across Europe are full of trivial minutiae, which agents had to spend as much time filtering as they did collecting. Perhaps there's a further algorithm that can help with the filtering, but those results will themselves need to be reviewed and filtered, and those results will need filtering in turn, and there's no real guarantee of anything useful coming out of that process. As above, it's really just a way of automating existing processes, the value of which lies in cost-effectiveness rather than efficiency. Any serious repressive measures are still going to revolve around the direct surveillance of known dissidents.