Would an end to anonymous posting improve social media?

  • Total voters
    43
Without fear, yes. I like that term.
 
Would ending anonymous posting improve CivFanatics? No.

Would it improve other social media sites? Ehhhh... I'm inclined to think probably not, or at least not much. Even on platforms like Facebook, which at least theoretically require posting under your actual name, things can get pretty uncivil at times. I think part of it is that the Internet is a big enough pond that it offers effective anonymity. Let me explain by way of example.

If I walk into a bar and start flaming people, trolling them, and doing other things likely to accrue warning points at CivFanatics, the reception is not going to be positive, and I'm likely to be kicked out or banned before long - whether the people there know who I am or not. Even if I'm technically anonymous there, I'm de facto not anonymous - I'd be the guy ruining everyone's evening.

If I do the same thing on Facebook, in a group with a geographically dispersed and sizeable audience, the chances that someone will identify me and confront my bad behavior are quite low, even if I have my actual name attached to my post (and assuming I'm not someone everyone recognizes, like Travis Kelce). Whereas if I do that in the group of a church I hypothetically attend, people would confront me about it and ask why I was being such a jerk, either the next Sunday or before then.

I think that's the bigger factor. There's no social accountability for being a jerk on social media, and that's often the case regardless of whether it's technically anonymous or not. So it brings out the worst even in people who might not show signs of being a jerk in real life.
 
Yeah, I just want to know who you're worried about being canceled by. Friends? People who have a good opinion of you? I think you're missing some necessary precepts here.
That, I think, is the crux of why you do not understand me. I do not feel that rights should only be protected if I am personally threatened by their removal. Especially not fundamental rights like the freedom to express opinions and gather information.

Furthermore, I feel that the free and uncensored exchange of opinions and information is crucial for a healthy and free society. The moment anyone, be it the government or the corporate owners of a forum, steps in and says what is and is not allowed to be said, whose opinions are banned and whose news is fake, we enter a world of censorship and one-mindedness. And that leaves us open to manipulation by whoever holds the reins of moderation.

And that is precisely the sort of thing you do not want in a society because it allows those in power to manipulate the people into acting against their own interests.
 
Still not getting from anyone what these opinions that are potentially at risk are. Vaccine safety concerns? Quebec sovereignty?
Environmental activism and political change that involves substantial wealth transfer are probably top targets.
 
Without fear, yes. I like that term.
I'm sympathetic to that, pretty obviously. We're probably both much more likely than average to share opinions we know beforehand will be contentious.

I just don't think it'll ever be a majority view again in my lifetime.
Still not getting from anyone what these opinions that are potentially at risk are. Vaccine safety concerns? Quebec sovereignty?
I know people who'd probably consider the civ series itself morally abhorrent if they knew of its themes.

You won't find many here, obviously, but they do exist. The civ series sorta has themes which are vaguely Hitlerian, honestly. Zero-sum competition between different civilizations, with organized violence as the most common technique to secure dominance, isn't really all that far from the sorta Darwinist, groups-existing-in-perpetual-competition style of thinking Hitler was known for.

I don't see it as totally implausible that the franchise will be brought up in the future in a way we similarly bring up troublesome depictions in media now, tbh, provided the evolution of morality continues down the current trajectory.
 
End anonymous posts? No. But I think paid subscriptions would cut out a lot of riffraff particularly on boards where you're supposed to get good advice.
Count me skeptical; Something Awful is subscription-based, and it is the indirect forerunner of a certain infamous imageboard.
 
Environmental activism and political change that involves substantial wealth transfer are probably top targets.
Or literally any challenge to the government, the elites, and the narrative of the time. Freedom, like all things, is not a trophy you win and then forget about. It's a struggle that never ceases. For the moment you stop fighting to maintain it, you lose it.
 
Guys, it has more to do with humans being hyper-aggressive apes, similar to chimpanzees: if you get up in another chimp's space, it takes that as being dominated and now has to fight back to reassert its authority, or else it loses its sexual monopoly over females, and therefore its biological imperative to father as many children as possible is in jeopardy.

A perfect example where anonymity doesn't prevent violence or aggression? Road rage! And in the US it leads to shootings on the highway, many of them lethal.
 
You're so right bro, men are inherently violent and so shouldn't be left unattended within 10 miles of children, women or other men lest their primal psychosexual drive renders them unto a frenzy
 
India appears to have been aggressively targeting anonymity and its often-seen companion, end-to-end encryption.

https://tuta.com/blog/apps-banned-india
----------
May 11, 2023
According to the Indian Express, 14 mobile applications that provided end-to-end encrypted (E2EE) messaging services or enabled peer-to-peer (P2P) messaging, namely “Wickrme, Mediafire, Briar, BChat, Nandbox, Conion, IMO, Element, Second line, Zangi, Threema, Crypviser, Enigma, and Safeswiss,” were blocked in India following the recommendation of the Ministry of Home Affairs (MHA) beginning of May.

A source told the Indian Express that “the intelligence agencies also informed the MHA that most of these apps are designed to provide anonymity to their users and their features make it tough to resolve the entities associated with them.”
----------

For people who live in countries where these apps are not banned and who value anonymity, you can think of it as a conveniently provided list of apps to try out.


https://restofworld.org/2024/exporter-whatsapp-encryption-india/
----------
May 2, 2024
IT rules passed by India in 2021 require services like WhatsApp to maintain “traceability” for all messages, allowing authorities to follow forwarded messages to the “first originator” of the text.

Tracing the copies back to their source would require building a new layer of surveillance into WhatsApp. “There is no way to predict which message a government would want to investigate in the future,” the company wrote in a blog post in 2021. “To comply, messaging services would have to keep giant databases of every message you send, or add a permanent identity stamp — like a fingerprint — to private messages with friends, family, colleagues, doctors, and businesses.”
----------
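To make concrete why "traceability" implies mass retention, here's a minimal sketch of the "permanent identity stamp" approach described above. All names here are illustrative, not WhatsApp's actual design: the point is just that tracing a forward to its "first originator" forces the provider to fingerprint and log every message anyone ever sends.

```python
import hashlib

# Hypothetical sketch, not WhatsApp's real protocol: the provider
# fingerprints every message and records the first sender it saw for
# each fingerprint. That record is the "giant database" the company
# warned about.

def message_fingerprint(plaintext: bytes) -> str:
    # The same text forwarded by anyone yields the same fingerprint,
    # which is what makes tracing (and mass surveillance) possible.
    return hashlib.sha256(plaintext).hexdigest()

originator_log = {}  # fingerprint -> first sender seen

def relay(sender: str, plaintext: bytes) -> str:
    fp = message_fingerprint(plaintext)
    originator_log.setdefault(fp, sender)  # only the first sender sticks
    return fp

relay("alice", b"meet at noon")
relay("bob", b"meet at noon")  # a forward: identical fingerprint
print(originator_log[message_fingerprint(b"meet at noon")])  # -> alice
```

Note that this only works if the log covers every message, since no one can predict in advance which message a government will later want traced - which is exactly the objection quoted above.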

It's about more than just revealing identities; it's about what can come along with doing so, such as tying together information from your various real-life and online activities into giant profiles that can be misused to harm you - for instance, because you once spoke out against someone who happens to be in power, or who later comes into power.

Still not getting from anyone what these opinions that are potentially at risk are. Vaccine safety concerns? Quebec sovereignty?
https://www.dailydot.com/debug/india-modi-technology-whatsapp/
----------
November 16, 2023
“There are social problems that predate any sort of technology. Even when we did not have an inkling about how to transfer data, there were social problems. Technology in a lot of ways can amplify social problems, but it is not responsible for social problems. Therefore, social problems need social solutions which technology can help with,” [Neeti Biyani, a senior advisor at the Internet Society] said. “Scanning personal data of 800 million people is not the answer.”

But with the traceability rule coming into effect, not only will government snooping on private conversations be legitimized, the government will have also legal rights to act against people exchanging texts with the slightest hint of dissent or criticism.
----------

Read that whole article for multiple, additional examples.
 
The AI firms want to ban anonymous posting to protect against the AI firms.

Researchers at Microsoft and OpenAI, among others, have proposed "personhood credentials" to counter the online deception enabled by the AI models sold by Microsoft and OpenAI, among others.

"Malicious actors have been exploiting anonymity as a way to deceive others online," explained Shrey Jain, a Microsoft product manager, in a Microsoft Research podcast interview.

"Historically, deception has been viewed as this unfortunate but necessary cost as a way to preserve the internet's commitment to privacy and unrestricted access to information.

"Today, AI is changing the way we should think about malicious actors' ability to be successful in those attacks. It makes it easier to create content that is indistinguishable from human-created content, and it is possible to do so in a way that is only getting cheaper and more accessible."

The answer Microsoft, OpenAI, and various academic researchers propose is personhood credentials – or PHCs – which are essentially cryptographically authenticated identifiers bestowed by some authority on those deemed to be legitimate people.

The idea, described in a research paper [PDF] with more than 30 authors, is similar to the way that Certificate Authorities vouch for the ownership of a website – except that PHCs are supposed to be pseudonymous as a means of providing some measure of privacy.

Beyond some of the corresponding authors' Microsoft and OpenAI affiliations, the other co-authors have ties to: Harvard Society of Fellows, University of Oxford, SpruceID, a16z crypto, UL Research Institutes, Tucows, Collective Intelligence Project, Massachusetts Institute of Technology, Decentralization Research Center, Digital Bazaar, American Enterprise Institute, Center for Human-Compatible AI, University of California, Berkeley, OpenMined, Decentralized Identity Foundation, Goodfire, Partnership on AI, eGovernments Foundation, University of Minnesota Law School, Mina Foundation, ex/ante, School of Information, University of California, Berkeley, Berkman Klein Center for Internet & Society, and Harvard University.

The proposed PHC identifiers are not supposed to be publicly linkable to a specific individual once granted – though presumably unmasking a PHC holder could be done with an appropriate legal demand.

However, the paper is careful to note that PHCs would not actually provide privacy – which remains all but non-existent online thanks to the ubiquity of tracking mechanisms and the incentives to surveil.

"While PHCs preserve user privacy via unlinkable pseudonymity, they are not a remedy for pervasive surveillance practices like tracking and profiling used throughout the internet today," the paper concedes. "Although PHCs prevent linking the credential across services, users should understand that their other online activities can still be tracked and potentially de-anonymized through existing methods."

The research also mentions fingerprinting – only as an inadequate AI defense and a form of biometric identification, not as a privacy threat to PHC holders.

The paper presents more of a general framework than a specific technical implementation. The authors suggest that various organizations – governmental or otherwise – could offer PHCs as a way to accommodate various "roots of trust," to use a term commonly applied to Certificate Authorities. US states, for example, could offer them to anyone with a tax identification number and the corresponding PHC could be biometrically based, or not.

"We are concerned that the internet is inadequately prepared for the challenges highly capable AI may pose," the AI-making authors and associates state. "Without proactive initiatives involving the public, governments, technologists, and standards bodies, there is a significant risk that digital institutions will be unprepared for a time when AI-powered agents, including those leveraged by malicious actors, overwhelm other activity online."

These and related concerns have spurred other initiatives that similarly aspire to authenticate people online with minimized information disclosure. The authors point to the World Wide Web Consortium's (W3C) Verifiable Credentials and Decentralized Identifiers (DIDs), European Union Digital Identity's (EUDI) privacy-preserving digital wallets, and other standards such as British Columbia's Person credential.

The stated goal of PHCs is "to reduce scaled deception while also protecting user privacy and civil liberties." Doing so, however, would require a one-per-person-per-issuer credential limit. The idea is PHCs should not be available in unlimited quantities like email addresses.

Another aim, ironically, is to allow verification delegation to AI agents – so online services can ensure that AI bots have authority to act that comes from a real person.

The authors acknowledge there are still some challenges to overcome – such as ensuring PHCs are equitable, support free expression, don't provide undue power to ecosystem participants, and are sufficiently robust against attacks and errors.

Jacob Hoffman-Andrews, senior staff technologist at the Electronic Frontier Foundation, told The Register that he took a look at the paper and "from the start it's wildly dystopian."

"It provides for governments – or potential hand-wavy other issuers, but in reality, probably governments – to grant people their personhood, which is actually something that governments are historically very bad at," he said.

"Many governments have people who they're responsible for, in one way or another, or who are under their control, that they consider 'lesser people' and they would prefer not to speak online.

"So, while the proposal uses some fancy cryptography to preserve anonymity in an environment where the government grants you a credential to speak online, it doesn't really solve the problem of your government deciding who speaks online or not."

Hoffman-Andrews observed another major problem is that much of the concern about AI is state-sponsored disinformation.

"If you have different governments saying who's a person, who's granted permission to speak online, but those governments also have an interest in deceptive activity at scale, you wind up with institutions in different countries not trusting the personhood of people in other countries and restricting that speech, and just further fragmenting an already deeply fragmented internet."

What's more, Hoffman-Andrews said there are movements in the US, the UK, and elsewhere to limit the ability of children and teenagers to speak online.

He warned: "In a regime where you need a personhood credential to be able to log in, this actually seems like kind of a custom-built choke point for governments to prevent certain people from getting online."
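The "unlinkable pseudonymity" property the paper describes can be sketched in a few lines. This is purely illustrative - the actual proposal involves issuer-signed credentials and heavier cryptography, not a bare HMAC, and all names below are made up - but it shows the core idea: one per-person secret yields pseudonyms that are stable within a service yet unlinkable across services.

```python
import hashlib
import hmac
import secrets

# Illustrative sketch only, not the paper's protocol. A credential
# holds one secret; each service sees a pseudonym derived from that
# secret, so two services cannot link a person by comparing IDs.

class PersonhoodCredential:
    def __init__(self) -> None:
        # In the real proposal this would be bound to an issuer-signed
        # credential (one per person per issuer); here it's just a key.
        self._secret = secrets.token_bytes(32)

    def pseudonym_for(self, service_id: str) -> str:
        # Deterministic per service, but unrelated across services.
        return hmac.new(self._secret, service_id.encode(),
                        hashlib.sha256).hexdigest()

cred = PersonhoodCredential()
a = cred.pseudonym_for("forum.example")
b = cred.pseudonym_for("video.example")
assert a != b                                    # unlinkable across services
assert a == cred.pseudonym_for("forum.example")  # stable within one service
```

Even in this toy version you can see Hoffman-Andrews' objection: whoever issues the credential in the first place decides who gets to have a pseudonym at all.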
 
Just changed my vote from (jokingly) "my only social media is CFC" to NO, definitely NO; this is a very serious matter. More and more I've been realising that I might already have been lynched if some folks knew who I am. I quit posting on FB almost a decade ago because of the lack of anonymity. Some people want their echo chambers without freedom of thought and opinion, but that doesn't mean that sooner or later reality won't clash with them. The right-wing shift in the EU should be waking some of them up, but it isn't; the unwashed majority are voting wrong, and nobody cares to listen to them or understand why they adhere to populist politicians, more than to the ideology those politicians preach.
 
It's funny because the "no" vote doesn't accurately reflect the reasoning. The reasons for "no" range from "this could further harm marginalised people" all the way to "i don't want people to associate my opinions with me as a person". Hmm.
 
It's funny because the "no" vote doesn't accurately reflect the reasoning. The reasons for "no" range from "this could further harm marginalised people" all the way to "i don't want people to associate my opinions with me as a person". Hmm.
Do you mean that neither end of your spectrum could accurately be described as actually "harming social media", but rather as other results that do not directly affect social media?

If that is what you mean, we could talk about Parler, 'cos it really did not help them.
 