What aspects of computing tech should we further criminalise?

What tech should we criminalise more? NOTE: A negative vote is a vote for less regulation


  • Total voters
    19
Well, if you're right, the world's a lot more dangerous than it was 20 years ago.
It really is not:
  • PGP: the original implementation of the algorithm was released in 1991, and the free version I am using has changed little since 1999
  • Criminals and terrorists have had unbreakable encryption basically forever: since 1882 they have had access to encryption as strong as anything we have today, provided they could solve the key-exchange problem offline (as in meeting up in a pub with no one listening)
  • Here are some charts of death rates over time. Can you tell me that what has changed is that people are more at risk from organised criminals and terrorists than from the state?
Spoiler: Mortality by war, terrorism and homicide over time. Please note the Y axes are different.
 
It really is not:
  • PGP: the original implementation of the algorithm was released in 1991, and the free version I am using has changed little since 1999
  • Criminals and terrorists have had unbreakable encryption basically forever: since 1882 they have had access to encryption as strong as anything we have today, provided they could solve the key-exchange problem offline (as in meeting up in a pub with no one listening)
  • Here are some charts of death rates over time. Can you tell me that what has changed is that people are more at risk from organised criminals and terrorists than from the state?
Spoiler: Mortality by war, terrorism and homicide over time. Please note the Y axes are different.

Never came across criminal groups using it in the 90s.
As for unbreakable encryption always being available: nonsense. If nations couldn't come up with unbreakable encryption (and they didn't), criminals and terrorists certainly couldn't.
 
Never came across criminal groups using it in the 90s.
Did you deal with many organised criminals and/or terrorists?
As for unbreakable encryption always being available: nonsense. If nations couldn't come up with unbreakable encryption (and they didn't), criminals and terrorists certainly couldn't.
With a One Time Pad you can encrypt anything as strongly as you like, and they were invented in 1882. That requires that you meet up beforehand in a secure environment and agree on a key scheme. An example of such a key scheme would be "Use page 1 of the Gideon Bible today, page 2 tomorrow, and so on".

This has obvious problems, but if the key scheme is unknown and unguessable it is unbreakable.
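To show how little machinery a one-time pad actually needs, here is a minimal sketch in Python (the message and key-handling are made up for illustration; a real pad would come from the pre-agreed key scheme, not from a generator on the same machine):

```python
import secrets

def otp_encrypt(plaintext: bytes, key: bytes) -> bytes:
    # A one-time pad is unbreakable only if the key is truly random,
    # at least as long as the message, and never reused.
    if len(key) < len(plaintext):
        raise ValueError("key must be at least as long as the message")
    return bytes(p ^ k for p, k in zip(plaintext, key))

# XOR is its own inverse, so decryption is the same operation.
otp_decrypt = otp_encrypt

key = secrets.token_bytes(32)
ciphertext = otp_encrypt(b"meet at the pub", key)
assert otp_decrypt(ciphertext, key) == b"meet at the pub"
```

The entire difficulty is in the last line's assumption: both parties must already hold the same `key`, which is exactly the key-exchange problem discussed above.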
 
Did you deal with many organised criminals and/or terrorists?

Helped prosecute quite a few
With a One Time Pad you can encrypt anything as strongly as you like, and they were invented in 1882. That requires that you meet up beforehand in a secure environment and agree on a key scheme. An example of such a key scheme would be "Use page 1 of the Gideon Bible today, page 2 tomorrow, and so on".

This has obvious problems, but if the key scheme is unknown and unguessable it is unbreakable.

And yet German and Japanese codes were broken in World War 2.
 
Helped prosecute quite a few
Could that be a self selective sample?
And yet German and Japanese codes were broken in World War 2.
The problem is basically the key-exchange procedure. Also, if you want an analogy, quantum computers will break PGP/GPG, which may be the equivalent of what Colossus was to Enigma.
 
Well, I didn't select them. It's a limited sample, of course: drug and other smugglers, VAT fraudsters. No terrorists.
What I meant is that perhaps those who knew how to use encryption never got as far as you.
 
We could at least limit commercial encryption services. In my experience working for C&E, most organised criminals did not have the knowledge to write their own encryption programs.

If they cannot be bothered to google the four lines of Python code it takes to encrypt something, surely they can pay someone to google it for them.
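For the sake of argument, the "four lines" could be something like the following, using the widely available third-party `cryptography` package (this is an illustrative snippet, not any poster's actual code; install with `pip install cryptography`):

```python
from cryptography.fernet import Fernet  # third-party: pip install cryptography

key = Fernet.generate_key()               # random 32-byte key, base64-encoded
f = Fernet(key)
token = f.encrypt(b"shipment arrives tuesday")   # made-up example message
assert f.decrypt(token) == b"shipment arrives tuesday"
```

The point stands either way: strong symmetric encryption is a few lines of copy-pasteable code, so restricting commercial services would mostly inconvenience people who follow the law.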
 
Which is why making encryption easier for anyone to use is not a good thing.
It means it has a downside. It also means everyone else can talk privately. I think that is clearly a greater gain than whatever loss we may get from bad people being able to talk privately.
 
Which is why making encryption easier for anyone to use is not a good thing.

Ironically, the reason encryption is so easy to use is that governments have shown they do not respect the privacy of their citizens. If governments had been responsible, there would not be as much encryption.

Trying to uninvent the wheel to give governments back powers they have repeatedly abused does not seem to be a good idea, no matter from which angle you look at it.
 
A question (not about emails etc., but about info/stuff posted in private accounts on some site):

Wouldn't such sites typically not use a key pair, but hash your input? Or is that limited to input of a special type (such as your password)?
To give a clear example: would hashing often be used for private messages within a forum? (And if so, would the company itself actually have access to the decoded info?)
 
A question (not about emails etc., but about info/stuff posted in private accounts on some site):

Wouldn't such sites typically not use a key pair, but hash your input? Or is that limited to input of a special type (such as your password)?
To give a clear example: would hashing often be used for private messages within a forum? (And if so, would the company itself actually have access to the decoded info?)
There are forums that do encrypt private messages such that the forum cannot read them. They do this by having users provide their public key to the forum. The forum then uses this public key to encrypt the message, delivers the encrypted version, and discards the plaintext version.
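On the hashing side of the question: hashing is one-way, so it suits passwords (which a site only needs to verify, never read back) but not private messages (which the recipient must be able to read). A minimal sketch of salted password hashing using Python's standard library; the password, salt size, and iteration count are illustrative:

```python
import hashlib
import secrets

# Password storage: the site keeps only the salt and the salted hash;
# it can verify a login attempt but cannot recover the password itself.
salt = secrets.token_bytes(16)
stored = hashlib.pbkdf2_hmac("sha256", b"hunter2", salt, 100_000)

def check(attempt: bytes) -> bool:
    # Re-derive the hash from the attempt and compare with what is stored.
    return hashlib.pbkdf2_hmac("sha256", attempt, salt, 100_000) == stored

assert check(b"hunter2")
assert not check(b"wrong-password")
```

A private message, by contrast, needs reversible encryption, which is why the public-key scheme described above is used instead of a hash.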
 
I did not list the tech that was to be banned first: Social media

Montana governor signs into law first-of-its-kind TikTok ban

Montana Governor Greg Gianforte has signed into law a measure to severely restrict the app TikTok, making his state the first to enact a near-total ban on the social media platform in the United States.

The law, slated to take effect on January 1, 2024, would bar TikTok from operating in Montana. It would also prohibit app stores from offering TikTok for download within state lines — a ban that tech companies fear will be impossible to implement and free speech advocates see as a violation of their First Amendment rights.

He also issued a memorandum to the state’s chief information officer calling for the ban to be widened to other social media apps with foreign ties, including the China-based WeChat and Telegram, which was founded by two Russian-born entrepreneurs.

In addition, the memorandum said that, effective June 1, no state employee could download or access social media apps “that provide personal information or data to foreign adversaries” using government-issued devices and networks.​

That last bit sounds like anything that interacts with these ad auctions. They distribute personal information and data to anyone who participates, including foreign governments.
 
Jordan has joined China in showing what can be done

Last week we had China cracking down under a "think of the children" banner; now we have Jordan outlawing speech and a lot of the tech that allows it. Governments are showing they want to be able to control this sort of thing.

The King of Jordan approved a cybercrime bill that will crack down on online speech deemed harmful to national unity, a bill opposition lawmakers and human rights groups have warned against.

The legislation will make certain online posts punishable with prison time and fines.

Rights groups have denounced the "draconian" bill for using "imprecise, vague and undefined terminology" such as "fake news", "promoting, instigating, aiding or inciting immorality", "online assassination of personality", "provoking strife", "undermining national unity" and "contempt for religions".

The bill will additionally target those who publish names or pictures of police officers online and outlaws certain methods of maintaining online anonymity.

The law introduces a range of stringent regulations that carry the potential for imprisonment or substantial fines. Notably, the law takes aim at the utilisation of Virtual Private Networks (VPNs) - tools enabling users to bypass restrictions and conceal their identities - with violations carrying penalties of up to six months of incarceration.

Comprising a total of 41 modifications, these amendments to the 2015 cybercrime law grant authorities the option to block social media platforms, impede website functionality and empower the state to request the removal of specific posts.

The law has been passed against the backdrop of several prosecutions of outspoken journalists and writers, including Jordanian satirist Ahmad Hassan al-Zouabi. An appeals court on Wednesday confirmed the detention of Zouabi over a post he wrote on Facebook last year in which he criticised the government's failure to address the rising fuel prices.

The post read: "Why must the blood of our sons be shed before you acknowledge the situation? Must blood flow before petrol prices recede? Lives have been lost, dear Mr Minister. We are the fuel that keeps your flames burning."

The court ruled that Zouabi "incited sectarian and racist strife" and "incited conflict between the components of the nation".

Journalist Hiba Abu Taha was briefly detained on Tuesday on charges of writing an online post against King Abdullah. Her charge was "defaming an official institution", based on Article 191 of the Penal Code and Article 15 of the Cybercrime Law.

Abu Taha was sentenced to three months of imprisonment and was also fined.

The newly amended cybercrime law introduces provisions allowing the detention of individuals prior to judicial review. Additionally, it places the legal responsibility for comments on social media pages upon their owners. Despite public protests, the parliament has proceeded to endorse this contentious legislation.

In a conversation with MEE, Freihat said: "The intention is to cultivate a climate of police control. We seem to be heading towards constriction of any expression that does not align with the preferences of those in authority. The recent arrests unmistakably convey a message to the people of Jordan: keep silent."
 
Fairly sensible article about age restricting social media:

Donelan’s demands that tech companies cut off access to their tools for under-13s are the equivalent of the misguided practice of abstinence-only teaching in sex education classes at school: you can pretend kids aren’t doing it, but they will anyway. Ducking the conversation entirely will leave them ill-equipped to handle issues as and when they arise online.

Spoiler Article :
It’s easy to talk tough on tech, as Michelle Donelan, the secretary of state for science, innovation and technology has shown this week. In an interview with the Telegraph, Donelan warned that social media platforms could be on the hook for “humongous” fines if they allowed under-13s to remain on their platforms. “If that means deactivating the accounts of nine-year-olds or eight-year-olds, then they’re going to have to do that,” the cabinet minister said.

The approach sounds all well and good in theory. It’s red meat to the pearl-clutching, law and order Tories who believe the world is full of danger, and tech companies are to blame. Don’t get me wrong – there is plenty to lay at the feet of social media companies for the harm they have caused. But the tough talk is part of a wider tendency in our politics that ignores the reality of how we interact with the internet – and demands a degree of censorship that is not only unworkable, but counterproductive.

Certainly, the internet can be a cruel place, and what happens online can have real-world ramifications. Disclosures made following the campaigning of the parents of Molly Russell, the teenager who took her life after being bombarded by online content related to suicide, self-harm and depression, have been chastening. And documents leaked by former Facebook employee Frances Haugen laid bare the access many users had to distressing content and misinformation.

Nevertheless, Donelan’s demands that tech companies cut off access to their tools for under-13s are the equivalent of the misguided practice of abstinence-only teaching in sex education classes at school: you can pretend kids aren’t doing it, but they will anyway. Ducking the conversation entirely will leave them ill-equipped to handle issues as and when they arise online.

Take it from me: I’m in my mid-30s, and grew up on the internet in its wildest days. Between the ages of 10 and 18 I saw things I shouldn’t have, and interacted online with people far beyond my age. Luckily, nothing bad happened – though for some it undoubtedly does – but it was a formative experience. I learned how to keep myself safe through trial and error, and conversations with my peers.

In hindsight, I wish I had been open enough to have similar conversations with members of my family – but I had the sense they would take a similar approach to Donelan and simply ban access to such platforms.

Just as we’re starting to recognise the issues caused by “helicopter parenting”, realising that letting children roam with less strict oversight will ensure they grow up to be independent, rather than dependent, adults – similarly we need to allow them a little more slack for online exploration.

Some platforms already develop child-safe versions of their apps, or allow parental controls to be implemented on accounts, acting as a child monitor of sorts. While far from perfect, these are more practical solutions than simply saying children aren’t allowed to access some of the most attractive areas of the internet.

Perversely, Donelan’s draconian demand that tech companies outlaw younger users from accessing their platforms will probably have the opposite effect to the one intended. If you make it punitively expensive for companies to acknowledge that underage users might be on their platform, they will do their level best to prevent children from accessing it. But kids will still slip the net – and easily so, given that most online age checks consist of little more than asking users to declare their date of birth. Any child able to take away 13 from 2023 will be able to add a few years to their age in an instant. Video age verification, where AI tries to discern the age of an individual using a video of their face, can be fooled by the judicious use of makeup.

And in this imagined future, tech platforms will pretend that children don’t inhabit their digital halls, and turn a blind eye to their existence. The online safety bill’s provisions are such that the companies affected will still largely be self-policing. The communications regulator Ofcom will be empowered to fine companies for not following the rules – but it’ll be up to the companies themselves to disclose where things go wrong. If the punishments are too significant, it’s easier for firms to pretend the problem doesn’t exist rather than risk losing income.

Instead, it’s worth being open and honest. Yes, children will want to access social media platforms. Yes, they will do so whether you want them to or not. Yes, parents should accept that. But they should also have grownup conversations with their children about being online, the possibility for fun – as well as the potential for danger – that comes with exploring the vastness of the internet.

Unfortunately the tack Donelan is taking promises a bad outcome for everyone: the platforms will see no evil, children will speak no evil, and so parents will hear no evil. Instead, we need to understand that it’s impossible to put the genie back in the bottle, and take a more realistic approach. Let children experiment online. But make sure everyone responsible is watching.
 
Posting opinions
 