Internet privacy, security, age restrictions, VPNs and backups

Sorry about the text being in an image, but it is what I have...

[attached screenshot]
 
Lawmakers Want to Ban VPNs—And They Have No Idea What They're Doing

Remember when you thought age verification laws couldn't get any worse? Well, lawmakers in Wisconsin, Michigan, and beyond are about to blow you away.

It's unfortunately no longer enough to force websites to check your government-issued ID before you can access certain content, because politicians have now discovered that people are using Virtual Private Networks (VPNs) to protect their privacy and bypass these invasive laws. Their solution? Entirely ban the use of VPNs.

Yes, really.

As of this writing, Wisconsin lawmakers are escalating their war on privacy by targeting VPNs in the name of “protecting children” in A.B. 105/S.B. 130. It’s an age verification bill that requires all websites distributing material that could conceivably be deemed “sexual content” to both implement an age verification system and also to block the access of users connected via VPN. The bill seeks to broadly expand the definition of materials that are “harmful to minors” beyond the type of speech that states can prohibit minors from accessing—potentially encompassing things like depictions and discussions of human anatomy, sexuality, and reproduction.

This follows a notable pattern: As we’ve explained previously, lawmakers, prosecutors, and activists in conservative states have worked for years to aggressively expand the definition of “harmful to minors” to censor a broad swath of content: diverse educational materials, sex education resources, art, and even award-winning literature.

Wisconsin’s bill has already passed the State Assembly and is now moving through the Senate. If it becomes law, Wisconsin could become the first state where using a VPN to access certain content is banned. Michigan lawmakers have proposed similar legislation that has not yet moved through the legislature but that would, among other things, force internet service providers to actively monitor and block VPN connections. And in the UK, officials are calling VPNs "a loophole that needs closing."

This is actually happening. And it's going to be a disaster for everyone.

Here's Why This Is A Terrible Idea

VPNs mask your real location by routing your internet traffic through a server somewhere else. When you visit a website through a VPN, that website only sees the VPN server's IP address, not your actual location. It's like sending a letter through a P.O. box so the recipient doesn't know where you really live.
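If you want to see the mechanics rather than the metaphor, here is a minimal sketch of my own (not from the article): it asks a public IP-echo service what address it sees, once directly and once through a local SOCKS tunnel. The local proxy address is an assumption; a commercial VPN client's local SOCKS interface or an SSH tunnel would stand in for it.

```python
# Minimal sketch: what a website "sees" with and without a tunnel.
# Assumes `pip install requests[socks]` and a SOCKS5 proxy listening on
# 127.0.0.1:1080 (e.g. a VPN client's local proxy, or `ssh -D 1080 user@host`).
import requests

IP_ECHO = "https://api.ipify.org"  # public service that returns your apparent IP

direct_ip = requests.get(IP_ECHO, timeout=10).text

proxies = {
    "http": "socks5://127.0.0.1:1080",
    "https": "socks5://127.0.0.1:1080",
}
tunneled_ip = requests.get(IP_ECHO, proxies=proxies, timeout=10).text

print(f"Address seen directly:       {direct_ip}")
print(f"Address seen through tunnel: {tunneled_ip}")  # the exit server, not you
```

Nothing in the second response tells the site where the person behind the tunnel actually is, which is exactly the problem described next.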

So when Wisconsin demands that websites "block VPN users from Wisconsin," they're asking for something that's technically impossible. Websites have no way to tell if a VPN connection is coming from Milwaukee, Michigan, or Mumbai. The technology just doesn't work that way.

Websites subject to this proposed law are left with this choice: either cease operation in Wisconsin, or block all VPN users, everywhere, just to avoid legal liability in the state. One state's terrible law is attempting to break VPN access for the entire internet, and the unintended consequences of this provision could far outweigh any theoretical benefit.

Almost Everyone Uses VPNs

Let's talk about who lawmakers are hurting with these bills, because it sure isn't just people trying to watch porn without handing over their driver's license.

  1. Businesses run on VPNs. Every company with remote employees uses VPNs. Every business traveler connecting through sketchy hotel Wi-Fi needs one. Companies use VPNs to protect client and employee data, secure internal communications, and prevent cyberattacks.
  2. Students need VPNs for school. Universities require students to use VPNs to access research databases, course materials, and library resources. These aren't optional, and many professors literally assign work that can only be accessed through the school VPN. The University of Wisconsin-Madison’s WiscVPN, for example, “allows UW–Madison faculty, staff and students to access University resources even when they are using a commercial Internet Service Provider (ISP).”
  3. Vulnerable people rely on VPNs for safety. Domestic abuse survivors use VPNs to hide their location from their abusers. Journalists use them to protect their sources. Activists use them to organize without government surveillance. LGBTQ+ people in hostile environments—both in the US and around the world—use them to access health resources, support groups, and community. For people living under censorship regimes, VPNs are often their only connection to vital resources and information their governments have banned.
  4. Regular people just want privacy. Maybe you don't want every website you visit tracking your location and selling that data to advertisers. Maybe you don't want your internet service provider (ISP) building a complete profile of your browsing history. Maybe you just think it's creepy that corporations know everywhere you go online. VPNs can protect everyday users from everyday tracking and surveillance.

It’s A Privacy Nightmare

Here's what happens if VPNs get blocked: everyone has to verify their age by submitting government IDs, biometric data, or credit card information directly to websites—without the added layer of privacy protection a VPN provides.

We already know how this story ends. Companies get hacked. Data gets breached. And suddenly your real name is attached to the websites you visited, stored in some poorly-secured database waiting for the inevitable leak. This has already happened, and it will happen again; it is not a matter of if but when. And when it does, the repercussions will be huge.

Forcing people to give up their privacy to access legal content is the exact opposite of good policy. It's surveillance dressed up as safety.

"Harmful to Minors" Is Not a Catch-All

Here's another fun feature of these laws: they're trying to broaden the definition of “harmful to minors” to sweep in a host of speech that is protected for both young people and adults.

Historically, states have been allowed to prohibit people under 18 years old from accessing certain sexual materials that adults can access under the First Amendment. But the definition of what constitutes “harmful to minors” is narrow — it generally requires that the materials have almost no social value to minors and that they, taken as a whole, appeal to minors’ “prurient sexual interests.”

Wisconsin's bill defines “harmful to minors” much more broadly. It applies to materials that merely describe sex or feature descriptions/depictions of human anatomy. This definition would likely encompass a wide range of literature, music, television, and films that are protected under the First Amendment for both adults and young people, not to mention basic scientific and medical content.

Additionally, the bill’s definition would apply to any websites where more than one third of the site’s material is "harmful to minors." Given the breadth of the definition and its one-third trigger, we anticipate that Wisconsin could argue that the law applies to most social media websites. And it’s not hard to imagine, as these topics become politicised, Wisconsin claiming it applies to websites containing LGBTQ+ health resources, basic sexual education resources, and reproductive healthcare information.

The breadth of the bill’s definition isn't a bug; it's a feature. It gives the state vast discretion to decide which speech is “harmful” to young people and which content is “appropriate.” History shows us those decisions most often harm marginalized communities.

It Won’t Even Work

Let's say Wisconsin somehow manages to pass this law. Here's what will actually happen:

People who want to bypass it will use non-commercial VPNs, open proxies, or cheap virtual private servers that the law doesn't cover. They'll find workarounds within hours. The internet always routes around censorship.

Even in a fantasy world where every website successfully blocked all commercial VPNs, people would just make their own. You can route traffic through cloud services like AWS or DigitalOcean, tunnel through someone else's home internet connection, use open proxies, or spin up a cheap server for less than a dollar.
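To underline how low that bar is, here is a hedged sketch of the cheap-server route (the address is a placeholder from a documentation IP range): a single stock OpenSSH flag turns any rented machine into a private SOCKS proxy that appears on no commercial-VPN blocklist.

```python
# Sketch of a do-it-yourself tunnel using OpenSSH's built-in dynamic
# forwarding. The server below is hypothetical; any VPS you can SSH into works.
import subprocess
import time

VPS = "user@203.0.113.10"  # placeholder (203.0.113.0/24 is reserved for docs)

# -N: run no remote command; -D 1080: open a SOCKS5 proxy on localhost:1080
tunnel = subprocess.Popen(["ssh", "-N", "-D", "1080", VPS])
time.sleep(3)  # crude wait for the tunnel to come up

# ...point a browser, or the earlier requests example, at socks5://127.0.0.1:1080...

tunnel.terminate()
```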

Meanwhile, everyone else (businesses, students, journalists, abuse survivors, regular people who just want privacy) will have their VPN access impacted. The law will accomplish nothing except making the internet less safe and less private for users.

Nonetheless, as we’ve mentioned previously, while VPNs may be able to disguise the source of your internet activity, they are not foolproof—nor should they be necessary to access legally protected speech. Like the larger age verification legislation they are a part of, VPN-blocking provisions simply don't work. They harm millions of people and they set a terrifying precedent for government control of the internet. More fundamentally, legislators need to recognize that age verification laws themselves are the problem. They don't work, they violate privacy, they're trivially easy to circumvent, and they create far more harm than they prevent.

A False Dilemma

People have (predictably) turned to VPNs to protect their privacy as they watched age verification mandates proliferate around the world. Instead of taking this as a sign that mass surveillance isn't popular, lawmakers have decided the real problem is that these privacy tools exist at all, and are now trying to ban them.

Let's be clear: lawmakers need to abandon this entire approach.

The answer to "how do we keep kids safe online" isn't "destroy everyone's privacy." It's not "force people to hand over their IDs to access legal content." And it's certainly not "ban access to the tools that protect journalists, activists, and abuse survivors."

If lawmakers genuinely care about young people's well-being, they should invest in education, support parents with better tools, and address the actual root causes of harm online. What they shouldn't do is wage war on privacy itself. Attacks on VPNs are attacks on digital privacy and digital freedom. And this battle is being fought by people who clearly have no idea how any of this technology actually works.

If you live in Wisconsin—reach out to your Senator and urge them to kill A.B. 105/S.B. 130. Our privacy matters. VPNs matter. And politicians who can't tell the difference between a security tool and a "loophole" shouldn't be writing laws about the internet.
 
Just in case it was not obvious to everyone:

It's far too easy to find leaked passports and driver's licenses online

Passports and driver's licenses are easy to find online, thanks to a dizzying array of websites and apps that require a copy but aren't keeping the data safe.

On several occasions this year, my computer screen has filled up with literally tens of thousands of people's passports and driver's licenses. My job doesn't require me to handle or process these documents, and — before you ask — I'm not a hacker who's broken in somewhere to access them.

But thanks to shoddy cybersecurity and sloppy coding, oftentimes these sensitive government-issued documents are simply left exposed to the open web for anyone to find. Sometimes I find them, sometimes they find me.

Case in point: A cloud storage server containing some 223,000 government-issued IDs was secured this week after the data was left publicly exposed for an unknown amount of time, likely due to a misconfiguration caused by human error. This meant that reams of passport scans were publicly accessible to anyone on the internet who knew where to look — an easily guessable web address was all you needed, no passwords necessary.
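For the curious, this is roughly all "knowing where to look" amounts to. Here is a hedged sketch with a made-up address, since the real server was never named: a plain, unauthenticated request either comes back with data or it doesn't.

```python
# Sketch: checking whether a storage endpoint is world-readable. The URL is
# hypothetical; the point is that no credentials are involved at any step.
import requests

url = "https://example-bucket.storage.example.com/"

resp = requests.get(url, timeout=10)
if resp.status_code == 200:
    print("Publicly readable: anyone with the URL can fetch this data.")
elif resp.status_code in (401, 403):
    print("Locked down: the server is asking for credentials.")
else:
    print(f"Unexpected status: {resp.status_code}")
```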

Anurag Sen, an independent security researcher I've known for many years, reached out to me earlier this month after finding the cloud server packed with passports and driver's licenses from around the world. Sen's speciality is finding data online that shouldn't be there. Exposed data can include highly sensitive information, from U.S. military emails and online tracking data amassed by powerful advertising giants through to the personally identifiable data of regular people. Sen works tirelessly to get the data reported to its owner so it can be secured. On the rare occasion Sen can't figure it out or gets no response, he may reach out to me and we'll try to identify and contact the source of the spill together.

After several days of looking at this cache of exposed passports and driver's licenses, we were both stumped. We couldn't figure out who the customer was, and it wasn't even clear for what purpose the IDs were being stored to begin with. We were left with few options, except to contact the web hosting company storing the customer's data and hope for the best.

This is just the latest in a long list of incidents involving exposed government-issued IDs.

In January, I reported on a similar data breach containing the scans of more than 200,000 driver's licenses, selfies, and other identity documents belonging to customers of an online gift card store. Then, some months later in August, I found a really simple security flaw in the newly launched but popular app called TeaOnHer that allowed anyone to download the IDs of users who had to submit a copy before they could use the app. The bug was so easy to discover that I found it within just 10 minutes of learning of the app. I would be amazed if someone else hadn't found the bug first.

And that's not to forget other major spills that haven't been in my personal orbit. You may have heard of a few: Tea, the original app that preceded TeaOnHer, exposed thousands of its users' IDs, which were subsequently shared on the notorious forum ***** soon after the app's launch. Discord had a data breach of a customer support system involving its trust and safety team, which handles requests and appeals related to age-verification. Car rental giant Hertz disclosed a breach of driver's license data earlier this year, as did crypto exchange giant Coinbase.

Clearly we have a problem.

Nowadays, we can be asked multiple times during our regular daily lives to hand over our IDs, or upload a copy to the internet for, well, reasons. From booking an appointment with your doctor online to picking up mail at your local post office, providing a copy of your passport or driver's license for some kind of service has become the new normal, and in some cases you can't easily opt out. "It's policy," they might say, and that's that.

It's also increasingly necessary to hand over your ID for an even broader set of reasons. Age verification laws around the U.S., parts of Europe, and beyond, require adults to upload a copy of their ID before they can be allowed to access a website or use certain website features, like direct messaging. Plus, there has been an increase in the number of closed communities, such as apps Tea and TeaOnHer, which rely on screening their users by digitally checking their government-issued IDs before allowing them in.

Yet, companies and app developers are not keeping the data they collect safe, and are contributing to the ever-expanding pool of exposed IDs on the internet.

The irony is that exposing so many driver's licenses and passports makes it easier for anyone to use those IDs for fraudulent purposes. That might be someone with the malicious intent to do a little cybercrime, or some hapless kid trying to trick an age verification system into allowing them access to an adult website.

The good news — if there is any — is that the server spilling 223,000 driver's licenses to the web is now secured, thanks to Sen's data breach hunting skills. I contacted DigitalOcean to alert them that one of their customers was leaking data, and the server was locked down soon after.

Since we still don't know who the customer is, hundreds of thousands of people are likely unaware that their personal information was spilled — a responsibility that rests squarely with the customer who exposed the data to the web.

In the end, it really shouldn't be this easy to find driver's licenses and passports online.
 
A Surveillance Mandate Disguised As Child Safety: Why the GUARD Act Won't Keep Us Safe

A new bill sponsored by Sen. Hawley (R-MO), Sen. Blumenthal (D-CT), Sen. Britt (R-AL), Sen. Warner (D-VA), and Sen. Murphy (D-CT) would require AI chatbots to verify all users’ ages, prohibit minors from using AI tools, and implement steep criminal penalties for chatbots that promote or solicit certain harms. That might sound reasonable at first, but behind those talking points lies a sprawling surveillance and censorship regime that would reshape how people of all ages use the internet.

The GUARD Act may look like a child-safety bill, but in practice it’s an age-gating mandate that could be imposed on nearly every public-facing AI chatbot—from customer-service bots to search-engine assistants. The GUARD Act could force countless AI companies to collect sensitive identity data, chill online speech, and block teens from using the digital tools that they rely on every day.

EFF has warned for years that age-verification laws endanger free expression, privacy, and competition. There are legitimate concerns about transparency and accountability in AI, but the GUARD Act’s sweeping mandates are not the solution.

Young People's Access to Legitimate AI Tools Could Be Cut Off Entirely.

The GUARD Act doesn’t give parents a choice—it simply blocks minors from AI companions altogether. If a chat system’s age-verification process determines that a user is under 18, that user must then be locked out completely. The GUARD Act contains no parental consent mechanism, no appeal process for errors in age estimation, and no flexibility for any other context.

The bill’s definition of an AI “companion” is ambiguous enough that it could easily be interpreted to extend beyond general-use LLMs like ChatGPT, causing overcautious companies to block young people from other kinds of AI services too. In practice, this means that under the GUARD Act, teenagers may not be able to use chatbots to get help with homework, seek customer service assistance for a product they bought, or even ask a search engine a question. It could also cut off all young people’s access to educational and creative tools that have quickly become a part of everyday learning and life online.

By treating all young people—whether seven or seventeen—the same, the GUARD Act threatens their ability to explore their identities, get answers to questions free from shame or stigma, and gradually develop a sense of autonomy as they mature into adults. Denying teens access to online spaces doesn’t make them safer; it just keeps them uninformed and unprepared for adult life.

The GUARD Act’s sponsors claim these rules will keep our children safe, but that’s not true. Instead, it will undermine both safety and autonomy by replacing parental guidance with government mandates and building mass surveillance infrastructure instead of privacy controls.

All Age Verification Systems Are Dangerous. This Is No Different.

Teens aren’t the only ones who lose out under the GUARD Act. The bill would require platforms to confirm the ages of all users—young and old—before allowing them to speak, learn, or engage with their AI tools.

Under the GUARD Act, platforms can’t rely on a simple “I’m over 18” checkbox or self-attested birthdate. Instead, they must build or buy a “commercially reasonable” age-verification system that collects identifying information (like a government ID, credit record, or biometric data) from every user before granting them access to the AI service. Though the GUARD Act does contain some data minimization language, its mandate to periodically re-verify users means that platforms must either retain or re-collect that sensitive user data as needed. Both of those options come with major privacy risks.
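To make that retain-or-re-collect trade-off concrete, here is a minimal sketch of my own (nothing like it appears in the bill's text): even a best-case, data-minimizing design that stores only a signed pass/fail attestation has to pull the raw ID back through the pipeline every time the attestation expires. The verifier function, signing key, and re-verification window below are all hypothetical.

```python
# Sketch of a "minimized" age-verification record: the platform keeps a signed
# pass/fail attestation with an expiry, never the ID itself.
import hashlib
import hmac
import time

SERVER_KEY = b"rotate-me-regularly"   # hypothetical signing key
REVERIFY_SECONDS = 90 * 24 * 3600     # assumed re-verification period

def document_says_over_18(doc: bytes) -> bool:
    # Stand-in for a real check (OCR, issuer lookup, face match, ...).
    return True

def issue_attestation(user_id: str, id_document: bytes) -> dict:
    """Inspect the document, then keep only a signed result."""
    is_adult = document_says_over_18(id_document)
    expires = int(time.time()) + REVERIFY_SECONDS
    payload = f"{user_id}:{is_adult}:{expires}".encode()
    sig = hmac.new(SERVER_KEY, payload, hashlib.sha256).hexdigest()
    # The raw document is never written to storage -- only this record is.
    return {"user_id": user_id, "is_adult": is_adult, "expires": expires, "sig": sig}

def attestation_valid(att: dict) -> bool:
    payload = f"{att['user_id']}:{att['is_adult']}:{att['expires']}".encode()
    good = hmac.new(SERVER_KEY, payload, hashlib.sha256).hexdigest()
    # Once this returns False, the user must submit their ID all over again.
    return hmac.compare_digest(att["sig"], good) and time.time() < att["expires"]

att = issue_attestation("user-123", b"<scanned ID bytes>")
print(attestation_valid(att))  # True until the window lapses
```

Even in this best case, the sensitive document is exposed at every issuance, which is the risk the bullet points below spell out.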

EFF has long documented the dangers of age-verification systems:
  • They create attractive targets for hackers. Third-party services that collect users’ sensitive ID and biometric data for the purpose of age verification have been repeatedly breached, exposing millions to identity theft and other harms.
  • They implement mass surveillance systems and ruin anonymity. To verify your age, a system must determine and record who you are. That means every chatbot interaction could feasibly be linked to your verified identity.
  • They disproportionately harm vulnerable groups. Many people—especially activists and dissidents, trans and gender-nonconforming folks, undocumented people, and survivors of abuse—avoid systems that force identity disclosure. The GUARD Act would entirely cut off their ability to use these public AI tools.
  • They entrench Big Tech. Only the biggest companies can afford the compliance and liability burden of mass identity verification. Smaller, privacy-respecting developers simply can’t compete.
As we’ve said repeatedly, there’s no such thing as “safe” age verification. Every approach—whether it’s facial or biometric scans, government ID uploads, or behavioral or account analysis—creates new privacy, security, and expressive harms.

Vagueness + Steep Fines = Censorship. Full Stop.

Though mandatory age-gates provide reason enough to oppose the GUARD Act, the definitions of “AI chatbot” and “AI companion” are also alarmingly vague and broad. In a nutshell, the Act’s definitions of these two terms are so expansive that they could cover nearly any system capable of generating “human-like” responses—including not just general-purpose LLMs like ChatGPT, but also more tailored services like those used for customer service interactions, search-engine summaries, and subject-specific research tools.

The bill defines an “AI chatbot” as any service that produces “adaptive” or “context-responsive” outputs that aren’t fully predetermined by a developer or operator. That could include Google’s search summaries, research tools like Perplexity, or any AI-powered Q&A tool—all of which respond to natural language prompts and dynamically generate conversational text.

Meanwhile, the GUARD Act’s definition of an “AI companion”—a system that both produces “adaptive” or “context-responsive” outputs and encourages or simulates “interpersonal or emotional interaction”—will easily sweep in general-purpose tools like ChatGPT. Courts around the country are already seeing claims that conversational AI tools manipulate users’ emotions to increase engagement. Under this bill, that’s enough to trigger the “AI companion” label, putting AI developers at risk even when they do not intend to cause harm.

Both of these definitions are imprecise and unconstitutionally overbroad. And, when combined with the GUARD Act’s incredibly steep fines (up to $100,000 per violation, enforceable by the federal Attorney General and every state AG), companies worried about their legal liability will inevitably err on the side of prohibiting minors from accessing their chat systems. The GUARD Act leaves them these options: censor certain topics en masse, entirely block users under 18 from accessing their services, or implement broad-sweeping surveillance systems as a prerequisite to access. No matter which way platforms choose to go, the inevitable result for users is less speech, less privacy, and less access to genuinely helpful tools.

How You Can Help

While there may be legitimate problems with AI chatbots, young people’s safety is an incredibly complex social issue both on- and off-line. The GUARD Act tries to solve this complex problem with a blunt, dangerous solution.

In other words, protecting young people’s online safety is incredibly important, but forcing invasive ID checks, criminalizing AI tools, and banning teens from legitimate digital spaces is not the way to do it.

The GUARD Act would make the internet less free, less private, and less safe for everyone. It would further consolidate power and resources in the hands of the bigger AI companies, crush smaller developers, and chill innovation under the threat of massive fines. And it would cut off vulnerable groups’ ability to use helpful everyday AI tools, further stratifying the internet we know and love.

Lawmakers should reject the GUARD Act and focus instead on policies that provide transparency, more options for users, and comprehensive privacy for all. Help us tell Congress to oppose the GUARD Act today.
 
This is again putting me on the same side as people I do not want to be associated with. I think this is the lawyer for the web's primary source of memes and hate speech. While there is a certain irony in the US complaining about other countries' laws being enforced extraterritorially, I kind of do want the OSA to fail.

I know it’s hard for folk to detach themselves from emotion regarding child protection, hate speech or trolling, but it’s important to understand that Ofcom’s duty to deliver online safety is incipient censorship.

Folks in London might be wondering whether my instructions really are to destroy the Online Safety Act or whether this is an exaggeration.

These are my instructions.

Let me explain the situation for you. I represent four American companies who have violated no laws in the United States. For this non-crime, the British authorities are threatening my clients with fines, arrest, and prison terms if they don’t give up their rights.

We explained this to the UK, sending memos and citing applicable US caselaw, on many occasions. Half a dozen at least. The UK, flouting every norm of international law and UK-U.S. treaty arrangement, didn’t listen. They kept coming. They wouldn’t listen to reason. They refused to back off.

My American clients are entitled to peacefully enjoy their constitutional rights when they’re in America. But, for years now, the UK has made it clear that it would never stop hunting them. With the OSA they thought they’d have the power to finally compel their obedience.

There is only one solution to that legal problem. That is to permanently destroy the UK’s ability to threaten my clients, and, by extension, its ability to threaten any American.

We can’t change an Act of Parliament, but we can pull domestic US levers. It is our country, after all.

Wyoming GRANITE is half of the solution – the shield. If Congress steps in and backs up the states, we’ll have the sword too. Then my clients will be hunted no more.

If that happens, the entire global censorship apparatus will collapse. If, at that point, the UK is looking for someone to blame, it need only look in the mirror.

The Full Text of the Wyoming GRANITE Act

The Wyoming GRANITE Act, the first foreign censorship shield bill ever conceived in the history of the United States, was filed for numbering in the State of Wyoming today by Representative Daniel Singh.

I have managed, through my global network of spies, to obtain a copy of the bill in its current draft form.

This proposed law is derived from a proposal I wrote a month ago after being prompted by Reps. Keith Ammon and Calvin Beaulier in New Hampshire, who are working to introduce the GRANITE Act in the Granite State. Exactly one month and one day after I put that proposal down in a blog post, a bill inspired by it was filed in a state legislature.

GRANITE Act—Guaranteeing Rights Against Novel International Tyranny & Extortion.

AN ACT relating to civil procedure; creating the Wyoming GRANITE Act; providing legislative findings; creating a private right of action against foreign censorship threats, attempts and enforcement; establishing personal jurisdiction, venue and alternative service; imposing joint and several liability across foreign states, agencies or instrumentalities, and responsible foreign officials (subject to federal law); providing remedies; providing nonrecognition of certain foreign judgments; providing construction, FSIA savings, severability and an effective date.

Purposes

This act safeguards the constitutional rights of Wyoming residents and entities from the extraterritorial application of foreign censorship laws by providing an in-state forum, clear jurisdictional rules and effective remedies, while respecting federal law.
 
There are seldom any cases in the world where you would wish a Pyrrhic victory on someone.

Anything involving this set of legal proceedings is a glorious exception. May they break Ofcom. And may they burn every last cent they will ever see in the doing.

(The GRANITE Act, on the other hand, is stereotypical American "give them liberty or give them death" imperialist overreach, trying to claim Americans have a right to run roughshod over any other country on the internet because MUH FREEDUMBS)
 
I have read the draft Wyoming Granite Act

The key paragraph is:

No court of this state shall recognize, enforce or give effect to any foreign judgment, order, subpoena, administrative action, fine, penalty, or similar measure that imposes liability or compels action based on expression or association that would be protected by the United States or Wyoming constitutions if adjudicated in the United States.

I have been expecting this for several weeks now.
 
If they actually limit themselves to that, that's fair enough (and much more reasonable than what it sounded like in the post above) - but they should fully expect the same right back.

Such as, say, not giving any weight to any ruling based on repressing things that are protected rights in other countries. Like abortion. Or gender transition.

That's fair game.
 
Quite.

And if Donald Trump decides to sue the BBC in a Florida court, the UK government ought to pass equivalent legislation.

But I doubt they would; their only dilemma is whom to brown-nose.
 
The thing is, unlike the OSA, we do have laws equivalent to the US defamation laws. My understanding is that ours favour the defamed more than the US ones do, partly because of the First Amendment and partly because of who bears the burden of proof in a defence of truth.
 
Yes, but there are other aspects involved there, including the relevant jurisdiction, the partiality of state-appointed judges, and the dubious argument that omitting the middle of his speech constitutes libel.

But I do wonder what is going on in the UK Office for Communications.

Is it that UK policy has been outsourced to a clique that is playing a game with other people's money? Are they deliberately playing a game of advancing big corporates by raising difficulties for small firms?
 
I would love to know how this works. We have seen how age checks can at least attempt to stop children saying they are adults by requiring something that only adults should have, like credit cards, government ID or an online account that has been active for years. How do you stop adults saying they are children?

Roblox blocks children from chatting to adult strangers

Children will no longer be able to chat to adult strangers on Roblox - one of the world's most popular gaming platforms - as part of an expansion of its safety measures.

Mandatory age checks will be introduced for accounts using chat features, starting in December for Australia, New Zealand and the Netherlands, then the rest of the globe from January.

Roblox has faced criticism for allowing youngsters to access inappropriate content and communicate with adults, and is being sued over child safety concerns in several US states.
 
Here is a pretty good table giving data about VPNs. It has a lot of data, but it is still missing price, which seems to be a flaw.

I have mentioned PIA as a cheap option. It is not too bad on the table, but from the "history" link there is this:

Do we now have a potential VPN criminal conglomerate?

As many of you have already read, Private Internet Access has recently been acquired by a company named “Kape Technologies”. Kape Technologies is a huge company that also owns the likes of CyberGhost VPN as well as Zenmate. I decided to read more and found facts that thoroughly shocked me:
The facts about these companies were easy to find; to be honest, I didn’t need to dig deep to find them. I am just truthfully shocked by all this and by how much I didn’t know about these companies beforehand. Personally, given this knowledge, I am not going to support these companies, especially when they potentially have a criminal past and ongoing criminal activities.
 
GrapheneOS accuses Murena & iodé of sabotage, pulls servers from France over police 'threats'

The drama surrounding GrapheneOS has escalated from a debate over security patches to what looks like full-blown corporate and geopolitical warfare. Following heavy scrutiny in the French press, where the OS was branded a “tool for traffickers,” the GrapheneOS team has initiated a scorched-earth response.

They are pulling their server infrastructure out of France, openly naming the competitors they believe are orchestrating a smear campaign, and threatening to ruin the exploit market for everyone by contributing directly to Android (AOSP).

The “unnamed” antagonist — a separate earlier dispute

Earlier this week, we reported that GrapheneOS slammed an unnamed “small company” for spreading libel and misinformation after a failed partnership. That incident remains unrelated to the current French controversy.

Separately, in response to the recent wave of negative French press, GrapheneOS has now publicly speculated that Murena (behind /e/OS) and/or iodé (behind iodéOS) may be involved in orchestrating or encouraging the smear campaign — though no concrete evidence has been presented.

“It’s very possible Murena and/or iodé are involved. Both are French companies selling products with extraordinarily poor privacy/security. Both are useful to official state plans of locally hosted services with encryption backdoors.”

[Screenshot: Murena and iodé named by GrapheneOS]
Both companies sell de-Googled Android devices in the privacy-focused space and have recently targeted similar modular hardware (Murena with the HIROH Phone and SHIFTphone 8; iodéOS also powering the Shift 8). However, GrapheneOS strongly disputes that they are true competitors, arguing that Murena and iodé fail to deliver timely security patches and encourage unlocked bootloaders — resulting in devices that GrapheneOS considers fundamentally insecure and non-private.

The “fake Snapchat” myth & the Anom parallel

The accusations from GrapheneOS come in response to articles by Le Parisien and Le Figaro, which claimed criminals use a version of GrapheneOS equipped with a “fake Snapchat” page that wipes data when accessed.

GrapheneOS categorically denied this, stating that no such feature exists in their code. Instead, they argue French police are likely conflating the official OS with illegal, closed-source forks sold on the black market, similar to the Anom sting operation, where the FBI distributed compromised devices running a custom OS to catch criminals.

“Products using operating systems partially based on our code are not GrapheneOS… There’s no such thing as a fake Snapchat app wiping the device in GrapheneOS.” The team insists that while their code is open source, legitimate GrapheneOS is free and obtained only from their official website and not via “shady dealers” in dark alleys.

Exiting France: “We don’t feel safe”

Perhaps the most significant development is the operational fallout. GrapheneOS announced it is moving all server infrastructure hosted by OVH (a major French cloud provider) out of the country.

While the project is a Canadian non-profit, they have historically used OVH for mirrors and discussion servers. That ends now. The team cited a specific passage in the Le Parisien coverage as a “direct threat” from French law enforcement leadership (OFAC), implying that tech providers who do not provide backdoors will face legal consequences.

“We’re going to be ending the small amount of operations we have in France as we don’t feel the country is safe for open source privacy projects anymore… We don’t want to host servers in France or host servers with OVH anymore.”

The team is moving services to providers in Canada, Germany (Netcup), and the US. They also stated they will no longer travel to France for conferences or hire staff located in the country, citing the French government’s support for “Chat Control” (EU mass surveillance legislation) and “authoritarian” tendencies.

The nuclear option involves helping Google

In a twist of irony, the hostility from French authorities and the alleged smear campaign have pushed GrapheneOS closer to the company they usually criticize: Google.

The team stated that the situation “almost makes us willing to contribute to AOSP [Android Open Source Project] again,” specifically to patch the vulnerabilities that law enforcement agencies are using to exploit non-GrapheneOS devices.

“Google is welcome to reach out.”

By hardening the base Android code, GrapheneOS would effectively “burn” the expensive exploits used by police and forensic firms like Cellebrite, making all Android phones harder to crack, not just Pixels running GrapheneOS.

The privacy phone market is usually a quiet niche, but this has exploded into a major controversy. GrapheneOS is effectively drawing a line in the sand: you either have mathematical security that cannot be broken (which upsets law enforcement), or you have “privacy theater” that remains compliant with state demands (which they accuse Murena and iodé of offering).

Accusing fellow privacy-phone vendors of being state-aligned honey pots is a massive claim, but GrapheneOS has never been one to mince words. By choosing to migrate their servers to Toronto, Germany, and other locations, it’s clear that they would rather leave a major European market’s jurisdiction than compromise on their “no backdoors” policy.

Beyond Criminal Profiling: Why GrapheneOS Represents Digital Freedom, Not Criminality

The intersection of privacy technology and law enforcement suspicion reveals a troubling trend: the criminalization of digital self-defense.

Recent reports from Spain have highlighted an unsettling development in digital privacy: law enforcement officials in Catalonia are reportedly profiling people based on their Google Pixel devices, specifically associating them with criminal activity because drug traffickers increasingly use GrapheneOS. This development raises fundamental questions about digital rights, privacy tools, and the dangerous precedent of treating security-conscious users as inherently suspicious.

The Privacy Paradox: When Security Becomes Suspicious

The irony is palpable. The Spanish region of Catalonia was at the center of the massive Pegasus spyware scandal in 2019, where sophisticated surveillance tools sold exclusively to governments were used to hack phones belonging to Members of the European Parliament. Yet the same region now scrutinizes citizens who choose to protect themselves against such surveillance.

This paradox extends beyond Spain. Privacy-focused tools like GrapheneOS increasingly face suspicion simply for doing what they're designed to do: protecting user data. Similar pressure has been applied to encrypted messaging apps like Signal, with proposed EU "Chat Control" legislation that would compel secure messaging platforms to scan all communication—including those protected by end-to-end encryption.
 