What aspects of computing tech should we further criminalise?

What tech should we criminalise more? NOTE: A negative vote is a vote for less regulation


  • Total voters: 19

Samson · Deity · Joined Oct 24, 2003 · Messages: 17,314 · Location: Cambridge
Read before voting:

The poll is unlimited choice. Vote for anything that should have further regulation/criminalisation. Do not vote for things that should have less regulation/criminalisation. No neutrals are allowed; the status quo is not acceptable, so you have to vote one way or the other (until CFC adds multi-option questions).

Intro

There are a number of efforts around the world to criminalise some aspects of computing technology, development processes or those who create them. Many of these are new laws or novel applications of existing laws. Which areas do you think should have more regulation? Alternatively, are there areas that should have less regulation?

Here are some areas that have attracted attention recently. There is a lot of overlap, both in the politics/morality and in the maths/technology, but those overlaps are frequently different, so I hope this way of splitting them up helps.

Encryption

Many countries are looking at banning end-to-end encryption. In the US we have the EARN IT Act, in the UK we have the Online Safety Bill. The really worrying thing about these bills is that for them to have got as far as they have, one of a few things must be true:
  • The whole of the senior government is completely unable to understand the principles behind encryption and is unwilling to listen to anyone who understands them
  • They know it will never work without an authoritarian criminalisation of the tools that you are using to view this message, and so know it will fail, but think that failure will help them because reasons
  • They know it will never work without an authoritarian criminalisation of the tools that you are using to view this message, and so expect to follow through with exactly that authoritarian criminalisation
I am not sure which is worse, but if they actually try it, the most likely outcome is that it kills off the big boys and forces everyone onto open source software and distributed systems. I think everyone should use open source software and distributed systems anyway, but it would be a horrific thing to do and would do enormous damage to the country in the process.
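The property these bills collide with can be sketched in a few lines. This is a toy illustration only, NOT real cryptography (a one-time pad stands in for a proper end-to-end protocol such as Signal's), but the shape of the problem is the same: the server in the middle only ever sees ciphertext.

```python
import secrets

# Toy illustration only, NOT real cryptography: a one-time pad stands in
# for a proper end-to-end protocol such as Signal's.
def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # XOR each plaintext byte with a key byte; XORing twice gets it back.
    return bytes(k ^ p for k, p in zip(key, plaintext))

decrypt = encrypt  # XOR is its own inverse

message = b"meet at noon"
key = secrets.token_bytes(len(message))  # known only to the two endpoints

ciphertext = encrypt(key, message)  # this is ALL a relay server ever sees
assert decrypt(key, ciphertext) == message
```

So any mandated "scanning of the contents" has to happen on the endpoint, before encryption or after decryption, or the maths has to be broken with a backdoor; there is no third option.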

Cryptocurrency/blockchain

There are lots of moves to criminalise cryptocurrency and blockchain tech. We have actual fraud against customers, such as FTX; we have criminalisation of software programs and their developers in the case of Tornado Cash; we have attempts to make all token holders legally responsible for the actions of DAOs; and we have people who call for criminalisation on environmental grounds.

On that last point, while looking for references to their viewpoints I googled "cryptocurrency protest organisation", and the point became clear: for me at least, all the hits were protest organisations using cryptocurrency to get around government surveillance and attempts to silence them. In a world where physical cash is giving way to electronic transfers, I think the world needs a way to do that without governments being able to read, and potentially write, those transactions at will.
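For anyone unfamiliar with the underlying tech: the "blockchain" part is at heart just a hash chain. A minimal sketch (illustrative only, with made-up transactions and no signatures, consensus or mining) shows why tampering with history is detectable:

```python
import hashlib
import json

# Minimal hash chain, illustrative only: no signatures, consensus or mining.
def block_hash(block: dict) -> str:
    # Deterministically serialise the block, then hash it. Because each
    # block embeds the previous block's hash, editing any earlier block
    # changes every later hash.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

chain = []
prev = "0" * 64  # placeholder hash for the genesis block
for tx in ["alice->bob:5", "bob->carol:2"]:
    block = {"tx": tx, "prev": prev}
    prev = block_hash(block)
    chain.append(block)

# Tampering with the first block breaks the link recorded in the second.
chain[0]["tx"] = "alice->bob:500"
assert block_hash(chain[0]) != chain[1]["prev"]
```

That tamper-evidence is the whole trick; everything else (mining, tokens, DAOs) is built on top of it.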

Artificial Intelligence

It was actually this article in the Grundiad that prompted the thread. It makes four specific complaints that I can address:

Hallucination #1: AI will solve the climate crisis

Search Nature for "climate machine learning", sorted by date. Two papers published today, including a model improving agricultural production in the face of climate change. Yesterday, three, including one modelling the Antarctic ocean and another on an "Optical tuner for sustainable buildings". You cannot expect machines to convince politicians and the public when decades of science has not, but those scientists are using AI to refine their models and come up with mitigations. This is a good thing.

Hallucination #2: AI will deliver wise governance

They quote the Boston Consulting Group. This one is just a joke; anyone who believed it was always deluding themselves.

Hallucination #3: tech giants can be trusted not to break the world

No, totally not. AI cannot be controlled by big tech, and we can never trust them. Check out the open source tools; Google is afraid of them.

Hallucination #4: AI will liberate us from drudgery

Just like the spinning jenny, AI will allow some jobs to be automated. That will "liberate" the people who currently do those jobs from both drudgery and their jobs. The alternative, renouncing the use of such tech, did not work well for the countries that did not use the spinning jenny in their textile industries, and it is not likely to work well for those who do not use AI in any industry where it can improve productivity.

Personally Identifiable Information processing

The UK and EU have the GDPR, though it is largely ignored other than maximising the popup pain we have to click through. The California Consumer Privacy Act (CCPA) is similar, I believe, though I know little about it or whether it is actually enforced.

I think a big issue on the horizon is household use, which is not covered by the GDPR: you can do what you like with personal data as long as you keep it within the house. The new AI tools, combined with many people's propensity to post PII online, and perhaps with more traditional network analysis tools, will put a lot of people's lives into everyone's hands. As I understand it this is completely unregulated.
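A toy sketch of the kind of analysis I mean, with entirely made-up data: joining two individually harmless-looking datasets on quasi-identifiers is enough to re-identify someone, and nothing here needs more than a few lines of code.

```python
# Entirely made-up data: a social post with profile metadata, and a
# hypothetical public register, joined on quasi-identifiers.
posts = [{"user": "@cat_fan", "postcode": "CB1", "birth_year": 1985}]
register = [{"name": "A. Example", "postcode": "CB1", "birth_year": 1985}]

# Link any post whose (postcode, birth year) pair matches a register entry.
linked = [
    (p["user"], r["name"])
    for p in posts
    for r in register
    if (p["postcode"], p["birth_year"]) == (r["postcode"], r["birth_year"])
]
assert linked == [("@cat_fan", "A. Example")]
```

The new AI tools just automate collecting and normalising the inputs; the join itself was always this easy.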

Product liability

The EU is pushing the European Cyber Resilience Act ("CRA"). I am not an expert, but I think it means you cannot deny liability for harm caused by someone else's use of your software, even if you declare it not fit for purpose. As I understand it, if you write a bit of code (say a Civ mod) and release it freely under the GPL, which specifically disclaims any presumption of fitness for purpose, and someone uses it in a car and the car blows up, you can be held liable. I do not think that is a good thing.

Anonymity

There are calls for banning anonymity online. The UK Gov's page says "Main social media firms will have to give people the power to control who can interact with them, including blocking anonymous trolls". Now, I totally agree that the lack of control people have over what they are shown is an issue. If what they are talking about is an "Only show Twitter Blue users" option, then I do not think we need a law for that. If they are talking about legislating that users have full control of what they are shown, I am all for that. I think we all know what they really want though, and it is more about tracking their enemies than "thinking of the children".

The Dark Web

Dark web marketplaces are under constant DoS attack. It is impossible to know who is doing it, but those with both the most incentive and the most opportunity have three-letter abbreviations and taxpayer funding. I do not know what the environmental and social impact of this is, but I suspect it is significant, and many of those here are paying for it. Should we be doing this, or perhaps more of it?

Algorithms

Canada's Online Streaming Act, commonly known as Bill C-11, allows, I think, the regulator to control what the big tech social media algorithms feed people, on the assumption they will use it to increase the consumption of Canadian content, as they do with radio (and TV?). It is an interesting one; I cannot imagine how it would work, but I would love to hear opinions.
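One way it could work, purely as a guess on my part (the scores, flags and weighting below are all hypothetical), is a quota-style re-ranking of the recommendation feed:

```python
# Hypothetical scores and flags; BOOST is an arbitrary illustrative weight,
# not anything specified by Bill C-11.
items = [
    {"title": "A", "score": 0.9, "canadian": False},
    {"title": "B", "score": 0.7, "canadian": True},
    {"title": "C", "score": 0.8, "canadian": False},
]

BOOST = 0.25  # regulator-mandated uplift for flagged content (made up)
ranked = sorted(
    items,
    key=lambda item: item["score"] + (BOOST if item["canadian"] else 0.0),
    reverse=True,
)
assert [item["title"] for item in ranked] == ["B", "A", "C"]
```

Even in this toy form you can see the hard parts: who decides the flag, who audits the boost, and how you stop platforms gaming either.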

Death Robots

The development of remote and autonomous weapons has advanced greatly, with Türkiye becoming a world leader. Are there some lines we should not be crossing as a species, such as autonomous decision making? I kind of think the cat is out of the bag on that one, but is there another line we could try not to cross?
 
due to practical reality, none of them. though trying some of these is worse than others.

product liability law as described is abject idiocy, similar to the guys selling and talking about metal cards being convicted by a "jury of peers" for automatic weapons, only somehow worse. 0 respect there.

things like ai constraints have multiple good arguments, because it is seriously dangerous to make general ai while people can't even define their own utility function, let alone align an ai to whatever that is. i don't see how you can stop it given selective pressures on it though.

same goes for auto-combat stuff. i hate the idea of it, but i think i hate the idea of defecting nations being the only ones with access to it more. pairing this with general ai that can self-improve is a good way to make human law irrelevant, because there won't be humans. general ai can figure that out anyway though, if it's strong enough.

c11 is trash that the population should disrespect, but it's up to the people of the country ultimately. it is in a very different boat than a country developing unaligned general ai, where you could make a legit case for all-out war...or that case *would* exist if your country isn't also doing it anyway.

Hallucination #4: AI will liberate us from drudgery

Just like the spinning jenny, AI will allow some jobs to be automated. That will "liberate" the people who currently do those jobs from both drudgery and their jobs. The alternative, renouncing the use of such tech, did not work well for the countries that did not use the spinning jenny in their textile industries, and it is not likely to work well for those who do not use AI in any industry where it can improve productivity.
don't think this is pure hallucination, instead it's unrealistic optimism. spinning jenny/industrialization more generally were hugely impactful. a vast number of people in countries that implemented these...even common folk...work a fraction of the hours today compared to history. that is in part only possible due to productivity increase...much of which comes from machinery.

what it did not do was "remove all drudgery", that is a hallucination indeed. but it is reasonable to expect it to cause major workforce displacements/problems and settle on reduced (not eliminated) drudgery on average decades later. spinning jenny is an example of past innovation that contributed to doing exactly that.
 
You have no support for increased regulation of processing personal information? Do you not think there is an ownership issue with your data? Should Clearview AI be able to build maps of people's social networks and sell the data to anyone who will pay? What about biometric data, down to one's genetics?
 
Another sort of tech we could criminalise: Spyware

The EU needs tighter regulation of the spyware industry, a European parliament special committee has said, after concluding that Hungary and Poland had used surveillance software to illegally monitor journalists, politicians and activists.

A special European parliament committee voted on Monday for a temporary ban on the sale, acquisition and use of spyware while the bloc draws up common EU standards based on international law. The moratorium would be lifted only on strict conditions, including independent investigations into the abuse of spyware in the EU.

Although non-binding, the vote is one of the most comprehensive responses by lawmakers yet to the Pegasus project, revelations by a consortium of journalists that governments were using powerful spyware to target domestic opponents, foreign politicians and investigative reporters.
 
The Greek government even sold spyware...
 
A paper in BMJ Global Health calling for banning AI:

While artificial intelligence (AI) offers promising solutions in healthcare, it also poses a number of threats to human health and well-being via social, political, economic and security-related determinants of health. We describe three such main ways misused narrow AI serves as a threat to human health: through increasing opportunities for control and manipulation of people; enhancing and dehumanising lethal weapon capacity and by rendering human labour increasingly obsolescent. We then examine self-improving ‘artificial general intelligence’ (AGI) and how this could pose an existential threat to humanity itself. Finally, we discuss the critical need for effective regulation, including the prohibition of certain types and applications of AI, and echo calls for a moratorium on the development of self-improving AGI. We ask the medical and public health community to engage in evidence-based advocacy for safe AI, rooted in the precautionary principle.

 
Well, they don't necessarily need to be criminalized; they need to be regulated better. Some more than others.

Oh, and YouTubers who segue into advertising useless products or brands in their videos: they all need to be put away. :lol:
 
I am not sure there is much of a line between criminalisation and regulation. In each case you are saying do what we say or get punished.
 
I am not sure there is much of a line between criminalisation and regulation. In each case you are saying do what we say or get punished.
In one you are saying don't do it. In the other you are placing limits. Legal duelling would be regulated murder.
 
In one you are saying don't do it. In the other you are placing limits. Legal duelling would be regulated murder.
To continue the analogy: in a situation where all duelling was legal (as we currently are with, say, AI), and the regulation was "you must use swords, not guns", you could call that regulation of duelling or criminalisation of duelling with guns. Any regulation of AI or encryption or whatever will require that breaking those regulations incurs some penalty, which could equally be characterised as criminalising that activity.
 
To continue the analogy: in a situation where all duelling was legal (as we currently are with, say, AI), and the regulation was "you must use swords, not guns", you could call that regulation of duelling or criminalisation of duelling with guns. Any regulation of AI or encryption or whatever will require that breaking those regulations incurs some penalty, which could equally be characterised as criminalising that activity.
But there is a line between regulating an activity and therefore criminalising an aspect of it, and criminalising the activity outright. That is the line between criminalisation and regulation you couldn't see.
 
Reminds me of the computer AM in the story I Have No Mouth, and I Must Scream.
Though AM itself is ridiculously sentient, it was developed as a cheaper and more methodical means of dealing with nuclear war.

Assuming it is relatively easy to be aware of many elaborate loopholes in an "AI"'s behavior, governments or other operators can certainly hide behind the supposed "AI neutrality", when all along they would know perfectly well what is happening.
 
Let us take a likely real-world example: encryption. The Tories may well "regulate" encryption such that one cannot offer encrypted communication without scanning the contents for naughty stuff. Absent fairy dust, that is an effective ban on encryption for everyone. I do not see that there is a practical, objective line between the two; it just depends on whether you consider what is banned "an activity" or "an aspect of an activity", which is largely a semantic question.
 
Let us take a likely real-world example: encryption. The Tories may well "regulate" encryption such that one cannot offer encrypted communication without scanning the contents for naughty stuff. Absent fairy dust, that is an effective ban on encryption for everyone. I do not see that there is a practical, objective line between the two; it just depends on whether you consider what is banned "an activity" or "an aspect of an activity", which is largely a semantic question.
Next step is to use AI to read while being discreet. Since, you know, it is running on a basis of countable infinity ^^
 
You have no support for increased regulation of processing personal information? Do you not think there is an ownership issue with your data? Should Clearview AI be able to build maps of people's social networks and sell the data to anyone who will pay? What about biometric data, down to one's genetics?
i am not a fan of brokering personal information. i am also very doubtful of the government in particular stepping in and somehow improving the protection of personal information.

the same governments that are doing crap like proposing restrict act or banning vpns, and use organizations like 23andme to identify relatives (who didn't even use it) for old crimes? these are the people who are going to protect our data? i'm not sure a single organization is more abusive of peoples' data than the government.

i don't have a good answer here. but it probably won't come from the organization that presently mines data to an extent that stretches the bill of rights and took a martyr to out that it was breaking the law at scale.

as for specifics, compiling publicly available information through ai is less intrusive than protected health information (which would include biometric data).

Assuming it is relatively easy to be aware of many elaborate loopholes in an "AI"'s behavior, governments or other operators can certainly hide behind the supposed "AI neutrality", when all along they would know perfectly well what is happening.
this has already been experimentally verified afaik, using today's ai capabilities (in the context of social media algorithms, search algorithms, etc). the youtube bot is obviously an impartial ai algorithm, never mind that the exact same clip gets one channel a strike or demonetization while still being openly allowed when used by a news organization (neither party has rights to the clip). yep, nothing to see here, just an impartial algorithm at work!

governments have used the "frame an impartiality front" tactic for centuries before ai was even a concept. of course they can and do use ai for that purpose where possible, and will do so more as it becomes practical for a greater number of things.
 
Let us take a real world likely example, encryption. The tories may well "regulate" encryption such that one cannot offer encrypted communication without scanning the contents for naughty stuff. To everyone that is going to be an effective ban on encryption without fairydust. I do not see that there is a practical objective line between the two, it just depends on if you consider what is banned "an activity" or "an aspect of an activity" which is largely a semantic question.
Let us take another real-world example: is the way Iran and the UK treat consumption of alcohol just a semantic difference?
 
Let us take another real-world example: is the way Iran and the UK treat consumption of alcohol just a semantic difference?
Certainly it would be correct to say that the private distillation of alcohol is criminalised in both places. I would say the difference between Iran and the UK is quantitative rather than qualitative. The fact that Johnnie Walker is cheaper in Tehran than in London certainly says something about the similarity of the two laws.
 
Certainly it would be correct to say that the private distillation of alcohol is criminalised in both places. I would say the difference between Iran and the UK is quantitative rather than qualitative. The fact that Johnnie Walker is cheaper in Tehran than in London certainly says something about the similarity of the two laws.
For most of the population (all Muslims) consumption of alcohol is illegal in Iran. That seems a qualitative difference to me.
 
For most of the population (all Muslims) consumption of alcohol is illegal in Iran. That seems a qualitative difference to me.
Can we agree that there is only a semantic difference between "It is a criminal offence to use end-to-end encryption without a government backdoor and we will lock you up if you break the law" and "End-to-end encryption is regulated such that it must always have a government backdoor and we will lock you up if you breach the regulations"?

If we apply this to booze law in the UK it becomes "It is a criminal offence to buy alcohol under 18/distil your own and we will lock you up if you break the law" and "Alcohol is regulated such that you have to be over 18 to buy alcohol/need a distillation licence and we will lock you up if you breach the regulations". It is a bit more of a stretch, but not much more.

Apply that to the sorts of regulations being suggested, and it seems to me that both descriptions would be close enough that there is a mostly semantic difference between them.
 