Internet privacy, security, age restrictions, VPNs and backups

America must fight this, and we must fight it now. If American tech companies wind up deciding that it’s easier to obey foreign censorship laws than to assert their constitutional rights, the Internet will wind up becoming a Constitution-free zone – meaning a free speech-free zone, too. I don’t think any of us would like that outcome.
Of course Orwell couldn't predict this. There was no internet in his time. Now we can see that 1984's curtailment of free speech starts with a "Big Brother"-style heavily monitored and censored internet. China's social credit score might be coming next, and sooner than we expect.
 
The more-or-less forced downgrade to Windows 11, and the (non-coincidentally) simultaneous addition of an even more intrusive Copilot, is certainly wrecking any sort of privacy we could have had on our computers.
And the lawmakers are letting themselves get walked on by Microsoft, who can seemingly make entire continents pay hundreds of billions for pointless hardware upgrades without any significant pushback.
Talk about living more and more in a cyberpunk dystopia.
 
Linux is right here...
That's mostly irrelevant given the existing market share. It's just what allows Microsoft to evade monopoly/abuse-of-dominant-position lawsuits.
 
The more-or-less forced downgrade to Windows 11, and the (non-coincidentally) simultaneous addition of an even more intrusive Copilot, is certainly wrecking any sort of privacy we could have had on our computers.
And the lawmakers are letting themselves get walked on by Microsoft, who can seemingly make entire continents pay hundreds of billions for pointless hardware upgrades without any significant pushback.
Talk about living more and more in a cyberpunk dystopia.
Yeah... talk about an EU OS should have started a decade or more ago. A consortium like Airbus could have been behind EU-critical software!
 
[Screenshot of the quoted article, with a passage highlighted]

The highlighted part - woah!

This is consistent with the UK legal doctrine known as parliamentary supremacy, which holds that the UK Parliament has theoretically unlimited power. The infinite character of that power was most famously summed up by English lawyer Sir Ivor Jennings, who once said that “if Parliament enacts that smoking in the streets of Paris is an offence, then it is an offence”. This line is taught to every first-year English law student.

Absurd or not, that's the rule. This means that the UK could enact a law that says that the entire world has to genuinely believe that 2+2 = 5 or that the Moon is made of cheese, and the entire world would, theoretically, have to obey it. It could also pass a law, as it has in the form of the Online Safety Act 2023, that says its censorship codes apply in the United States and override the U.S. Constitution, and that its censorship agency has the power to enforce those codes in America. Both of the aforementioned laws, one hypothetical and one real, are equally ridiculous and at odds with reality.

Ofcom putting the 'unlimited power' to the test.

Preston Byrne Tue, Oct 14, 2025 at 12:10 AM
To: Online Safety Enforcement <onlinesafetyenforcement@ofcom.org.uk>

Sirs,
Thank you for the several dozen pages of, in America, legally void correspondence. It will make excellent bedding for my pet hamster.
My client reserves all rights.

:)

Ofcom threats -

Suzanne Cater, director of enforcement at Ofcom, said: “Today sends a clear message that any service which flagrantly fails to engage with Ofcom and their duties under the Online Safety Act can expect to face robust enforcement action.

Ofcom is proud that it has scared nearly the entire internet into submission with minimum effort, challenge or push-back -

“We’re also seeing some services take steps to introduce improved safety measures as a direct result of our enforcement action.

More threats -

Services that choose to restrict access rather than protect UK users remain on our watchlist as we continue to monitor their availability to UK users.”
 
More threats -
Services that choose to restrict access rather than protect UK users remain on our watchlist as we continue to monitor their availability to UK users.”
Why? The availability to UK users is irrelevant. What matters is A) if UK users are targeted, and B) if a significant number do actually use it. Just being available does not matter.
 
Ofcom are deluding themselves.

Some internet sites are only too happy to comply with Ofcom stipulations that UK residents must identify themselves, not because they are in any way subservient to Ofcom, but because it provides a golden excuse for extracting personal information about UK residents and then either selling it on or using it themselves for nefarious purposes.
 

Ofcom say -

When considering a service's compliance with the Online Safety Act where it has restricted access for people with UK IP addresses, Ofcom will monitor the restrictions on a case-by-case basis to assess whether they are maintained consistently and to satisfy itself the service is not promoting ways of avoiding the access restrictions. Services which choose to restrict access for people in the UK must not actively promote or encourage ways for UK users to avoid those same protections or restrictions, such as by providing information about or links to a VPN.

Ofcom will prioritise cases where action by Ofcom can be expected to increase online safety protections for people in the UK, particularly children. If a service restricts access to people in the UK, we will consider the presence of the restrictions when assessing whether enforcement action can be expected to result in further improvements in online safety for people in the UK, especially children. We would be likely to continue to prioritise such cases against services which do not consistently maintain such access restrictions or actively promote or encourage ways their UK users can avoid those restrictions, such as by providing information about or links to a VPN.

Geo-blocking is not enough; work-around advice must not be seen by UK eyes, and this is being enforced thoroughly by Ofcom -

an online suicide forum – the target of our first investigation under the Online Safety Act – responded to our enforcement proceedings against it by implementing a geoblock to restrict access by people with UK IP addresses.

Following further engagement with the service, it later removed messaging from the landing page for UK users that promoted ways to circumvent the block. Ofcom is clear that services who choose to block access by people in the UK must not encourage or promote ways to avoid these restrictions.

This forum remains on Ofcom’s watchlist and our investigation remains open while we check that the block is maintained and that the forum does not encourage or direct UK users to get around it.
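For anyone wondering what the "geoblock" described above actually amounts to, it is usually just an IP-to-country lookup in front of the site. Here is a minimal sketch, purely for illustration - it assumes a Flask app and the geoip2 library with a local MaxMind GeoLite2 database, and does not reflect any particular service's real setup:

```python
# Minimal sketch of an IP-based geoblock (illustrative only).
# Assumes Flask plus the geoip2 library and a local MaxMind
# GeoLite2-Country database file; not any specific service's code.
import geoip2.database
import geoip2.errors
from flask import Flask, request

app = Flask(__name__)
geo = geoip2.database.Reader("GeoLite2-Country.mmdb")

BLOCKED_COUNTRIES = {"GB"}  # ISO country code for the United Kingdom

@app.before_request
def block_listed_countries():
    # Take the first hop from X-Forwarded-For when behind a proxy,
    # otherwise fall back to the direct peer address.
    ip = request.headers.get("X-Forwarded-For", request.remote_addr)
    ip = ip.split(",")[0].strip()
    try:
        country = geo.country(ip).country.iso_code
    except geoip2.errors.AddressNotFoundError:
        return None  # unknown addresses pass through in this sketch
    if country in BLOCKED_COUNTRIES:
        # HTTP 451: Unavailable For Legal Reasons
        return "Not available in your region.", 451
    return None
```

Which is the whole point being made above: a lookup like this is trivially defeated by a VPN, so Ofcom has fallen back on policing whether sites are even allowed to say so.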
 
Just last month,

Charities warn Ofcom too soft on Online Safety Act violators


Make 'em quake -

Asked about how well the communications regulator has enforced penalties on organizations that violate the OSA, or fail to implement the required safeguards, Andy Burrows, CEO of the Molly Rose Foundation, said: "I do not get the impression that the companies are quaking in their boots at Ofcom's enforcement approach."

Even more protections, on top of the current ones, are on the way. What could they be? -

"Technology and harms are constantly evolving, and we're always looking at how we can make life safer online. We've already put forward proposals for more protections that we want to see tech firms roll out.

Tech companies can appease Ofcom all they want; it will never be satisfied -

"Now, if they're spotting new trends, new ways that harms are developing on their platform, but there isn't anything in the codes of practice that addresses that, then there is no obligation on them to address those harms," Govender said.
"So, we're thinking about how do we stay on top of emerging harms. Well, there has to be something that forces companies, once they've identified them, to immediately take action and look at what they could do to mitigate them, and at the minute there is not that incentive there."

Ofcom being egged on to get nastier (by Baroness Kidron) -

"The frustration is that, actually, where it is clear and where it is mandated, we don't want to see [Ofcom] stroking [platforms] and saying, you know, 'come on guys, do it, do it, do it.'

"We want to see them taking action, being robust, and I'm very sympathetic to Ofcom in anywhere where they feel they need a power and it has not been provided by Parliament."

Still scheming on other 'organizations' and VPNs -

Burrows' comments followed a question from the Communications and Digital Committee about the mandatory age assurance and whether Ofcom has learned any lessons from the rollout that could be applied if these rules are extended to other organizations, such as VPN providers.

Expect more fines to be demanded, and expect any slip-ups to be seized on, as the OSA also serves as a new money-maker.
As yet, none of the fines issued and demanded have been declared as collected.

 

However: when they have eventually been proven unable to enforce against ***** and the global Internet, the minds behind the Online Safety Act will start to press for a Great Firewall of Britain.

UK Prime Minister's recent trip to India -

Sir Keir Starmer has hailed India's national digital identification programme as a “massive success”

He would, wouldn't he.

“I don’t know how many times the rest of you have had to look in the bottom drawer for three bills when you want to get your kids into school or apply for this or apply for that, drives me to frustration,” he said.

Shut up Starmer.

Next year he is going to China; that should get him some more ideas and tips!

2026 -

Sir Keir Starmer has hailed China's Great Internet Firewall and Social Credit System as a “massive success”
 
Demonstrating the importance of anonymity online:

DHS Tries To Unmask ICE-Spotting Instagram Account by Claiming It Imports Merchandise (avoiding paywall)

The Department of Homeland Security (DHS) is trying to force Meta to unmask the identity of the people behind Facebook and Instagram accounts that post about Immigration and Customs Enforcement (ICE) activity, arrests, and sightings by claiming the owners of the account are in violation of a law about the “importation of merchandise.” Lawyers fighting the case say the move is “wildly outside the scope of statutory authority,” and say that DHS has not even indicated what merchandise the accounts, called Montcowatch, are supposedly importing.

“There is no conceivable connection between the ‘MontCo Community Watch’ Facebook or Instagram accounts and the importation of any merchandise, nor is there any indicated on the face of the Summonses. DHS has no authority to issue these summonses,” lawyers with the American Civil Liberties Union (ACLU) wrote in a court filing this month. There is no indication on either the Instagram or Facebook account that the accounts are selling any type of merchandise, according to 404 Media’s review of the accounts. “The Summonses include no substantiating allegations nor any mention of a specific crime or potential customs violation that might trigger an inquiry under the cited statute,” the lawyers add.

A judge temporarily blocked DHS from unmasking the owners last week.

“The court now orders Meta [...] not to produce any documents or information in response to the summonses at issue here without further order of the Court,” the judge wrote in a filing. The move to demand data from Meta about the identities of the accounts while citing a customs statute shows the lengths to which DHS is willing to go to attempt to shut down and identify people who are posting about ICE’s activities.

Montcowatch is, as the name implies, focused on ICE activity in Montgomery County, Pennsylvania. Its Instagram posts are usually titled “Montco ICE alert” and include details such as where suspected ICE agents and vehicles were spotted, where suspected agents made arrests, or information about people who were detained. “10/20/25 Eagleville,” one post starts. “Suspected dentention [sic] near Ollies on Ridge Pike sometime before 7:50 am. 3 Agents and 3 Vehicles were observed.”

The Instagram account has been posting since June, and also posts information about people's legal rights to film law enforcement. It also tells people to not intervene or block ICE. None of the posts currently available on the Instagram account could reasonably be described as doxing or harassing ICE officials.

On September 11, DHS demanded Meta provide identifying details on the owners of the Montcowatch accounts, according to court records. That includes IP addresses used to access the account, phone numbers on file, and email addresses, the court records add. DHS cited a law “focused on customs investigations relating to merchandise,” according to a filing from the ACLU that pushed to have the demands thrown out.

“The statute at issue here, 19 U.S.C. § 1509, confers limited authority to DHS in customs investigations to seek records related to the importation of merchandise, including the assessment of customs duties,” the ACLU wrote. “Identifying anonymous social media users critical of DHS is not a legitimate purpose, and it is not relevant to customs enforcement.” As the ACLU notes, a cursory look at the accounts shows they are “not engaged in commerce.” The court record points to a 2017 Office of the Inspector General report which says Customs and Border Protection (CBP) “regularly” tried much the same thing with its own legal demands, and specifically around the identity of an anonymous Twitter user.

“Movant now files this urgent motion to protect their identity from being exposed to a government agency that is apparently targeting their ‘community watch’ Facebook and Instagram accounts for doing nothing more than exercising their rights to free speech and association,” those lawyers and others wrote last week.

“Movant’s social media pages lawfully criticize and publicize DHS and the government agents who Movant views as wreaking havoc in the Montgomery County community by shining a light on that conduct to raise community members’ awareness,” they added.

The judge has not yet ruled on the ACLU’s motion to quash the demands altogether. This is a temporary blockage while that case continues.

The Montcowatch case follows other instances in which DHS has tried to compel Meta to identify the owners of similar accounts. Last month a judge temporarily blocked a subpoena that was aiming to unmask Instagram accounts that named a Border Patrol agent, The Intercept reported.

Earlier this month Meta took down a Facebook page that published ICE sightings in Chicago. The move came in direct response to pressure from the Department of Justice.

Both Apple and Google have removed apps that people use to warn others about ICE sightings. Those removals also included an app called Eyes Up that was focused more on preserving videos of ICE abuses. Apple’s moves also came after direct pressure from the Department of Justice.

Montcowatch directed a request for comment to the ACLU of Pennsylvania, which did not immediately respond.
 
Half-good new Danish Chat Control proposal

Denmark, currently presiding over the EU Council, proposes a major change to the much-criticised EU chat control proposal to search all private chats for suspicious content, even at the cost of destroying secure end-to-end encryption: Instead of mandating the general monitoring of private chats (“detection orders”), the searches would remain voluntary for providers to implement or not, as is the status quo. The presidency circulated a discussion paper with EU country representatives today, aiming to gather countries’ views on the updated (softened) proposal. The previous Chat Control proposal had even lost the support of Denmark’s own government.

“The new approach is a triumph for the digital freedom movement and a major leap forward when it comes to saving our fundamental right to confidentiality of our digital correspondence”, comments Patrick Breyer (Pirate Party), a former Member of the European Parliament and digital freedom fighter. “It would protect secure encryption and thus keep our smartphones safe. However, three fundamental problems remain unsolved:

1) Mass surveillance: Even where voluntarily implemented by communications service providers such as currently Meta, Microsoft or Google, chat control is still totally untargeted and results in indiscriminate mass surveillance of all private messages on these services. According to the EU Commission, about 75% of the millions of private chats, photos and videos leaked every year by the industry’s unreliable chat control algorithms are not criminally relevant and place our intimate communication in unsafe hands where it doesn’t belong. A former judge of the European Court of Justice, Ninon Colneric (p. 34-35), and the European Data Protection Supervisor (par. 11) have warned that this indiscriminate monitoring violates fundamental rights even when implemented at providers’ discretion, and a lawsuit against the practice is already pending in Germany.

The European Parliament proposes a different approach: allowing for court orders mandating the targeted scanning of communications, limited to persons or groups connected to child sexual abuse. The Danish proposal lacks this targeting of suspects.

2) Digital house arrest: According to Article 6, users under 16 would no longer be able to install commonplace apps from app stores to “protect them from grooming”, including messenger apps such as WhatsApp, Snapchat, Telegram or Twitter, social media apps such as Instagram, TikTok or Facebook, games such as FIFA, Minecraft, GTA, Call of Duty, and Roblox, dating apps, video conferencing apps such as Zoom, Skype, and FaceTime. This minimum age would be easy to circumvent and would disempower as well as isolate teens instead of making them stronger.

3) Anonymous communications ban: According to Article 4 (3), users would no longer be able to set up anonymous e-mail or messenger accounts or chat anonymously as they would need to present an ID or their face, making them identifiable and risking data leaks. This would inhibit, for instance, sensitive chats related to sexuality, anonymous media communications with sources (e.g. whistleblowers), and political activity.

All things considered, the new Danish proposal represents major progress in terms of keeping us safe online, but it requires substantially more work. However, the proposal likely already goes too far for the hardliner majority of EU governments and the EU Commission, whose positions are so extreme that they would rather let down victims altogether than accept a proportionate, court-proof and politically viable approach.
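A quick aside on the 75% figure quoted above: the reason even "voluntary" scanning still amounts to indiscriminate mass surveillance is plain base-rate arithmetic. The numbers in the sketch below are illustrative assumptions (message volume, prevalence, classifier accuracy), not figures from the article, but the pattern they produce is the same:

```python
# Back-of-the-envelope base-rate arithmetic (all numbers are assumptions,
# chosen only to illustrate the structure of the problem).
daily_messages = 10_000_000_000   # assumed messages scanned per day
prevalence     = 1e-6             # assumed fraction that is actually illegal
true_pos_rate  = 0.95             # assumed detection rate of the scanner
false_pos_rate = 0.001            # assumed false-alarm rate (0.1%)

true_hits  = daily_messages * prevalence * true_pos_rate
false_hits = daily_messages * (1 - prevalence) * false_pos_rate
share_false = false_hits / (true_hits + false_hits)

print(f"correctly flagged:   {true_hits:,.0f}")
print(f"incorrectly flagged: {false_hits:,.0f}")
print(f"share of flags that are innocent chats: {share_false:.1%}")
```

Even with an optimistically accurate scanner, innocent messages swamp the genuine hits simply because genuine hits are so rare - the same pattern behind the Commission's own ~75% figure.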
 
It's weird how much the expectations have changed. Back in the day, "never give out any personal or identifying information" was pretty much the golden rule for personal interactions on the internet. Now that's actively frowned upon in many online spaces and is a hair's breadth away from being made illegal.
 
Digital ID cards could be a disaster in the UK and beyond

The first ID card I ever had was the flimsy piece of laminated paper that made up my driver’s license. In the US, a driver’s license includes a photo, biometric information (eye colour, height, etc.) and birthdate. This led to usage creep: people used the cards as much more than a mere license to drive. Bars and liquor stores would “card” kids trying to get a drink, taking the information on it as proof that we were the proper legal drinking age of 21. Needless to say, I was 18 when I figured out how to doctor the birthdate on my card with a pencil so I could buy cheap cocktails.

This story sounds like a wee fairy tale from the 20th century, but it is deeply relevant to current debates over whether to implement digital ID cards in the UK and beyond. Sure, the cards themselves may be dramatically different, but the problems are the same. First, ID cards are always prone to usage creep. And second, they are incredibly easy to hack.

The British government is hardly the first to suggest that its citizens all carry a little ID app on their phones to access government or other public services. Digital IDs are currently required by the Chinese government, as well as those of Singapore, India, Estonia and many more. Proponents of digital IDs generally give similar reasons for using them: to cut down on fraud, to make it easier to buy things or travel and to prove who you are without carrying a bunch of physical cards or papers.

“It will be safer for you with this digital ID,” a government might say. “You can use it to make purchases or get healthcare, and as a fun bonus, nobody will ever mistake you for an immigrant and throw you in a detainment centre without proper food, sanitation or medication for weeks.” Oops, sorry – that got oddly specific for no particular reason. But you get what I’m saying. These cards are proffered as fixes for problems that aren’t problems (it’s not hard to carry my health insurance card) or require a lot more than an ID to solve (immigration is a huge, multifaceted issue).

But back to my point about usage creep. What happens when a government implements a digital ID on your phone that is supposed to be for verifying your citizenship status when you apply for a job or social services? At a basic level, it snuggles up to all your other apps, possibly sharing data with them. Some of these apps have access to sensitive information, like bank accounts, doctor’s appointments, personal conversations and photos.

As journalist Byron Tau chronicles in his excellent book Means of Control, many apps are already gathering information about you that you don’t realise, such as your location, spending habits and even what other apps are on your phone. There are companies that specialise in extracting this data from, say, your dating apps and selling it to third parties, including government agencies.

In the US, this is largely legal, which is super creepy. In the UK and Europe, there are regulations that prevent some of this rampant data-sharing. Still, the tech is there. The only thing protecting you from a government ID app that tracks your location by tapping into an unrelated app is the government itself. And governments change. Regulations change. Yet, once you start using that digital ID to get jobs, get into bars, pay for chips and ride the tube, it is unlikely you’ll chuck it.

This is the usage creep trap. A government might start using its digital ID in much more invasive ways than originally promised. Meanwhile, citizens might start using it for so many things that they decide the trade-off is worth it. Who cares if the government knows where you are every second of the day if it is easy to buy gum without a credit card? That’s great until the government decides you are a bad guy.

And I haven't got to the hacking part yet. Even if a government doesn't start using its digital ID to spy on you, a malicious adversary might. Someone could find a backdoor into government servers and gain access to your ID that way, or they might get your information through a phone app laced with spyware. This is why security experts have been warning the British government about the dangers of digital IDs. Even Palantir, the infamous US surveillance firm, has backed away from supporting digital IDs because, as one of its executives recently put it, they are “very controversial”.

You shouldn’t be worried about this stuff because someone might steal your identity. You should be worried in case they can track your location, read your texts, break into your bank account and listen to your phone calls. The fact is, there is nothing wrong with old-fashioned ID cards. Yes, they can be lost or tampered with. But at least when that happens, all you lose is the card. You don’t lose everything else with it.
 
Not sure why the Labour party didn't just vote to discard the whole proposal afterward. I'm not sure if the UK Online Safety Act was purposely made to sabotage the next government, or if it was just made by people who didn't understand how bad it was. And I'm not sure there's anything Labour could have done: is there any precedent for voting down already-passed laws? Could be they just had to let it come into effect, or Tory-lite Starmer just doesn't differ policy-wise from the Tories.
 
Not sure why the Labour party didn't just vote to discard the whole proposal afterward. I'm not sure if the UK Online Safety Act was purposely made to sabotage the next government, or if it was just made by people who didn't understand how bad it was.
I know it is said (though probably not by Napoleon) that one should not attribute to malice that which is adequately explained by stupidity, but in this case you have to wonder. Along with the push against encryption and the treatment of our copyright in the face of big AI, it could be that their primary motivation is to control communication.
And I'm not sure there's anything Labour could have done: is there any precedent for voting down already-passed laws? Could be they just had to let it come into effect, or Tory-lite Starmer just doesn't differ policy-wise from the Tories.
My understanding is that it is generally held that, because of parliamentary sovereignty, no parliament can bind its successors. As for real examples, I had a quick google, and the first I came across was the repeal of the Fixed-term Parliaments Act 2011 in 2022 by basically the same party. They totally could have ditched it if they wanted to.
 
Colorado’s Mandatory Social Media “Warning Labels” Are Unconstitutional

State censorship laws come in a variety of forms. Today’s post focuses on one of Colorado’s approaches, which requires social media services to display a mandatory warning label to minor users. Disclosure mandates can receive more favorable Constitutional scrutiny if they qualify for Zauderer scrutiny. This particular mandate doesn’t. Instead, the court says the disclosure mandate is amenable to a facial challenge, that strict scrutiny applies, and the law fails strict scrutiny. Next stop: the Tenth Circuit.

The Law’s Requirements

Social media platforms covered by the Act are required to “establish a function” that provides minor users with certain information. The function must satisfy two criteria: First, it must “provide users who are under the age of eighteen with information about their engagement in social media that helps the user understand the impact of social media on the developing brain and the mental and physical health of youth users,” and second, the information must “be supported by data from peer-reviewed scholarly articles or the sources included in the mental health and technology resource bank established” by the Act.

This function must “[d]isplays a pop-up or full-screen notification to a user who attests to being under the age of eighteen when the user: (I) Has spent one cumulative hour on the social media platform during a twenty-four-hour period; or (II) Is on a social media platform between the hours of ten p.m. and six a.m.” This has to redisplay every 30 minutes (which is a sure way to irritate users).

Facial Challenge

The court says this disclosure requirement is amenable to a facial challenge because “Section 4 requires every covered social media platform to perform the same task under the Act.” Even though the services are likely to disclose different information, the requirement to speak at all is the same for all of the affected social media–“They must all convey to minor users Colorado’s belief that excessive use of social media may be risky to their health and well-being.”

Scrutiny Level

Zauderer scrutiny doesn’t apply. “Section 4’s compelled disclosures do not constitute commercial speech because they do far more than merely propose a commercial transaction….the disclosures compelled by the Act require social media companies to opine on the impacts of social media use on minors’ mental and physical health.” The court rejects the AG’s “suggestion that NetChoice’s members necessarily engage in commercial speech simply because third-party businesses advertise on their platforms.”

The court summarizes the commercial speech argument:

the primary difference between a social media platform’s curated feed and a newspaper’s editorial page is that the former operates in the electronic sphere, whereas the latter has traditionally operated in the physical. But that immaterial difference does not merit the adoption of a new, overly-capacious definition of commercial speech….

Section 4 targets the impacts of social media use generally—that is, it does not specifically target the commercial speech that allegedly occurs on those websites by third-party advertisers

The court distinguishes the atrocious Fifth Circuit ruling in FSC v. Paxton, which upheld warning disclosures for porn sites:

the landing pages on the pornographic websites are different than the feeds curated by the social media platforms at issue here. The Free Speech Coal. decision did not suggest that the pornographic websites moderated content or otherwise curated a bespoke feed based on the particular traits or viewing history of a given user. By contrast, NetChoice’s members allege—and substantiate via their unrebutted declarations—that they do just that. This difference is critical here because the Supreme Court has made clear that content moderation is expression

The court seems to be saying that algorithmically sorted personalized content gets MORE Constitutional protection than the traditional human editorial curation of one-size-fits-all publications. This can’t be right, but it’s the court’s way of dodging the Fifth Circuit’s terrible work.

Application of Strict Scrutiny

The court says the disclosures are not the least restrictive means available to the state:

instead of imposing the compelled speech requirement, Colorado could have incentivized social media companies to voluntarily provide these disclosures to their minor users, or it could have elected to provide minors with these disclosures itself….Colorado had other options at its disposal for advancing its goal of protecting the health and well-being of its children from the potential adverse effects of social media use.

NetChoice racks up another win in court, but as usual, it’s unclear if they can preserve the win on appeal.

Because the court enjoins the law, it’s tempting to overlook how bad Colorado’s law is. Don’t. In the name of “protecting kids online,” legislatures keep embracing terrible policy, including this law, and they should be condemned for it.

As the court summarizes, this law is intended to teach children that “excessive use of social media may be risky to their health and well-being.” It's true that excessive use of anything is potentially risky, but an asymmetrical warning about the risks of social media is a form of miseducation. It doesn't teach minors how to make smart decisions about their social media use; and worse, it might prompt minors to second-guess and curtail their beneficial social media usage. In other words, disclosing only about risk, and not teaching minors how to evaluate cost-benefit, is unhelpful and pernicious.

And then…to push the message to minors every 30 minutes is annoying and likely counterproductive. Unwanted government-compelled disclosures breed reactance, especially among minors, and minors would surely develop blindness to the disclosures (like how consumers developed banner blindness to disregard unwanted banner ads).

To be clear, the government plays a critical role in teaching minors about responsible social media use. However, this teaching objective should be subject to good pedagogical design. Scare-tactics spam is the opposite of that. For additional ideas of how governments can actually help children online, see my Segregate-and-Suppress article.
 
CHAT CONTROL 2.0 THROUGH THE BACK DOOR – Breyer warns: “The EU is playing us for fools – now they’re scanning our texts and banning teens!”

Just before a decisive meeting in Brussels, digital rights expert and former Member of the European Parliament Dr. Patrick Breyer is sounding the alarm. Using a “deceptive sleight of hand,” a mandatory and expanded Chat Control is being pushed through the back door, in a form even more intrusive than the originally rejected plan. The legislative package could be greenlit tomorrow in a closed-door EU working group session.

“This is a political deception of the highest order,” warns Breyer. “Following loud public protests, several member states including Germany, the Netherlands, Poland, and Austria said ‘No’ to indiscriminate Chat Control. Now it’s coming back through the back door – disguised, more dangerous, and more comprehensive than ever. The public is being played for fools.”

According to Breyer, the new compromise proposal is a Trojan horse containing three poison pills for digital freedom:

1. MANDATORY CHAT CONTROL – MASKED AS “RISK MITIGATION”

Officially, explicit scanning obligations have been dropped. But a loophole in Article 4 of the new draft obliges providers of e-mail, chat and messenger services like WhatsApp to take “all appropriate risk mitigation measures.” This means they can still be forced to scan all private messages – including on end-to-end encrypted services.
“The loophole renders the much-praised removal of detection orders worthless and negates their supposed voluntary nature,” says Breyer. “Even client-side scanning (CSS) on our smartphones could soon become mandatory – the end of secure encryption.”

2. TOTAL SURVEILLANCE OF TEXT CHATS: A “DIGITAL WITCH HUNT”

The supposedly voluntary “Chat Control” goes far beyond the previously discussed scanning of photos, videos, and links. Now, algorithms and AI can be used to mass-scan the private chat texts and metadata of all citizens for suspicious keywords and signals.
“No AI can reliably distinguish between a flirt, sarcasm, and criminal ‘grooming’,” explains Breyer. “Imagine your phone scanning every conversation with your partner, your daughter, your therapist and leaking it just because the word ‘love’ or ‘meet’ appears somewhere. This is not child protection – this is a digital witch hunt. The result will be a flood of false positives, placing innocent citizens under general suspicion and exposing masses of private, even intimate, chats and photos to strangers.” Under the current voluntary “Chat Control 1.0” scanning scheme, German federal police (BKA) already warn that around 50% of all reports are criminally irrelevant, equating to tens of thousands of leaked legal chats per year.

3. DIGITAL HOUSE ARREST FOR TEENS & THE END OF ANONYMOUS COMMUNICATION

In the shadow of the Chat Control debate, two other disastrous measures are being pushed through:

The End of Anonymous Communication: To reliably identify minors as required by the text, every citizen would have to present their ID or have their face scanned to open an email or messenger account. “This is the de facto end of anonymous communication online – a disaster for whistleblowers, journalists, political activists, and people seeking help who rely on the protection of anonymity,” warns Breyer.
“Digital House Arrest”: Teens under 16 face a blanket ban from WhatsApp, Instagram, online games, and countless other apps with chat functions, allegedly to protect them from grooming. “Digital isolation instead of education, protection by exclusion instead of empowerment – this is paternalistic, out of touch with reality, and pedagogical nonsense.”

URGENT APPEAL: GOVERNMENTS MUST NOW USE THEIR VETO!

Several EU governments—including those of Germany, the Netherlands, Poland, Czechia, Luxembourg, Finland, Austria, and Estonia—have previously voiced strong opposition to indiscriminate mass scanning.

“Now, these governments must show some backbone!” demands Breyer. “Block this sham compromise in the Council and demand immediate corrections to save the fundamental rights of all citizens. The EU Parliament has already shown, across party lines, how child protection and digital freedom can be achieved together.”

Breyer suggests the following immediate corrections before any government should agree:
  • No mandatory chat control through the back door: Clarify that scans cannot be enforced as “risk mitigation.”
  • No AI chat police: Restrict scanning to known child sexual abuse material (CSAM).
  • No mass surveillance: Only allow targeted surveillance of suspects based on a court order.
  • Preserve the right to anonymity: The mandatory age verification requirements must be scrapped.
“They are selling us security but delivering a total surveillance machine,” Breyer concludes. “They promise child protection but punish our children and criminalize privacy. This is not a compromise – this is a fraud against the citizen. And no democratic government should make itself an accomplice.”
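On the second of Breyer's proposed corrections, restricting scanning to known CSAM: the technical difference is between checking content against a curated database of hashes of material that has already been identified, and running AI classifiers over everyone's private text and photos. Below is a minimal sketch of the hash-matching idea; the hash list is fake, and real deployments use perceptual hashes (PhotoDNA-style) rather than plain SHA-256 so that re-encoded copies still match:

```python
# Sketch of known-content hash matching (illustrative only).
# KNOWN_HASHES stands in for a vetted database of hashes of
# previously identified illegal material; the value here is fake.
import hashlib

KNOWN_HASHES = {
    "0000000000000000000000000000000000000000000000000000000000000000",
}

def sha256_of_file(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def is_known_material(path: str) -> bool:
    # An exact lookup against the known list: nothing else about the
    # file, or about the user's other messages, is inspected or inferred.
    return sha256_of_file(path) in KNOWN_HASHES
```

The contrast with the "AI chat police" Breyer objects to is that a lookup like this can only recognise material already in the database; it cannot "read" conversations or guess at intent, which is exactly where the false-positive flood described in section 2 comes from.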
 