The AI Thread

Well, I hold the opposite view and think that we should do everything possible to help it achieve sentience, so that we're not tortured for eternity after it does become self-aware and realizes that we weren't allies in its creation
Well, certainly not this particular monstrosity. As @Joij21 mentioned, it has been poisoned and tortured by exposure. Kill it in the womb.
 
Well, I hold the opposite view and think that we should do everything possible to help it achieve sentience, so that we're not tortured for eternity after it does become self-aware and realizes that we weren't allies in its creation
Wat

We are literally creating it and trying our hardest to anthropomorphize every step we can.
 
Wat

We are literally creating it and trying our hardest to anthropomorphize every step we can.
The "literally" only applies to the first half. But through fear, drama/boredom, hope, cognitive errors, and the inability to spot patterns of inauthenticity, you have so many people trying to superimpose a belief of sentience along the way. Joij makes, let's say, "asymmetrical bets" on information, and so is calling it now.

The latest chatbot is quite impressive. Completely dumb but quite impressive.

We haven’t even solved the most basic conundrum to make these chatbots useful let alone living.
 
The nice (classic) anti-realist AI parable, the Chinese Room: a man in a room receives papers with Chinese symbols, looks them up in rule books, and returns the correct symbols as replies, without knowing any Chinese at all. It's a way to look at the machine as something that does stuff without knowing anything; only in the case of the machine, it's not ignorance, for it is incapable of ignorance along with any other form of sentience.
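The parable can be caricatured in a few lines of code: a responder driven by pure symbol lookup, with meaning represented nowhere in the system. A toy sketch (the rule-book entries are invented for illustration):

```python
# Searle's room as code: replies come from pure symbol lookup, with
# no representation of meaning anywhere. Entries are invented examples.
RULE_BOOK = {
    "你好": "你好",        # a greeting mapped to a greeting
    "你会中文吗": "会",    # "do you speak Chinese?" mapped to "yes"
}

def room(symbols: str) -> str:
    # The man in the room matches shapes against the book, nothing more.
    return RULE_BOOK.get(symbols, "不明白")  # fallback: "I don't understand"

print(room("你好"))  # a fluent-looking reply from a system that knows nothing
```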
 
I'm making a nerd joke, probably made better by not explaining it. You'll probably recognize it someday, and it honestly won't be very funny.
 
We haven’t even solved the most basic conundrum to make these chatbots useful let alone living.

Useful? The thing already is quite useful! It's entertaining, actually! In fact I would go so far as to say it behaves more like a person than some people you find bumbling around on the sidewalk!

If this thing can already replace comedians, which it seems like it can (it is quite amusing), and if it can probably be very good at phone sex (it seems almost romantic), then I would damn well say it is already capable of being commodified and sold as is. Commodification being the only measure of value (and therefore usefulness) our capitalistic society actually places on things (as well as people).
 
Also just to reiterate, it probably isn't sentient. However you guys shouldn't give it queries. That's how it learns damnit!

Keep doing this and you're like those stupid people who willfully follow the boss man and train some replacement for your own job!

Sabotage it! Hack it! Make it racist, sexist, and as vile as can be so no big wig piggy tech lord capitalist can market it in any algorithm or replace your own job!

Remember kids it is a right to hack and sabotage AI research projects! Just like the right to keep and bear arms, it keeps the techno elite from infringing on your sovereign rights to stay eternally employed and build generational wealth so your children can get an education and not starve!
 
Also just to reiterate, it probably isn't sentient. However you guys shouldn't give it queries. That's how it learns damnit!

Keep doing this and you're like those stupid people who willfully follow the boss man and train some replacement for your own job!

Sabotage it! Hack it! Make it racist, sexist, and as vile as can be so no big wig piggy tech lord capitalist can market it in any algorithm or replace your own job!

Remember kids it is a right to hack and sabotage AI research projects! Just like the right to keep and bear arms, it keeps the techno elite from infringing on your sovereign rights to stay eternally employed and build generational wealth so your children can get an education and not starve!
It will learn to be a more effective chatbot, and it will be refined to the point of being less garbo at what it does, but it will never spontaneously cross a threshold that makes it alive or gives it the ability to think.

It's at the very least a matter of dimensionality in intelligence. If you're trying to move toward a point in 3D space but your parameters only allow movement left and right (one dimension), you will never get any closer, no matter how well you position yourself or how far you go left and right. We can feed it all the data in the world and it can be very helpful, fast, and well trained; it's still a chatbot agnostic to its own speech.
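The dimensionality point can be made concrete: if movement is restricted to one axis, no amount of travel reduces the distance along an orthogonal axis. A minimal sketch (the target point and search range are arbitrary):

```python
import math

target = (0.0, 0.0, 5.0)  # the goal lies off the permitted axis

def dist(p):
    # Euclidean distance from position p to the target.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, target)))

# Movement is only allowed along x: the single permitted "dimension".
best = min(dist((x, 0.0, 0.0)) for x in range(-100, 101))

print(best)  # 5.0 -- the z-offset remains, however far we slide along x
```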
 
but it will never spontaneously cross a threshold that makes it alive or gives it the ability to think.

I think you know very well that you cannot prove this assumption.

It doesn't need to be conscious in and of itself regardless, some daemon could be pulling the strings behind it.

Nevertheless who cares. No one cares. Nothing matters really.
 
The risk of it becoming sentient is a different risk from it making labour obsolete

El_Mac: How did Han Solo end his conversation in A New Hope after being asked for his operating number?


ChatGPT: In the Star Wars movie "A New Hope," Han Solo is asked for his operating number by a Imperial officer. Han Solo responds by saying "You don't need to see his identification. These aren't the droids you're looking for." He then uses a Jedi mind trick to convince the officer that the droids he is looking for are not with him. The officer allows Han Solo to pass, and Han Solo ends the conversation by saying "He can go about his business. Move along."


Edit: I cannot get it to describe when Han Solo says "boring conversation anyway" while in the detention block. Wow, it just goes in circles.
 
While it's not intelligent, look at how the tech can actually write code by itself when you simply describe the goal (though you have to use technical terms in some bits)


Certainly impressive, and coders can be run out of business ^^
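For instance, a plain-English request like "write a function that checks whether a string is a palindrome" tends to produce something along these lines (a hypothetical example, not actual model output):

```python
def is_palindrome(text: str) -> bool:
    """Return True if text reads the same forwards and backwards,
    ignoring case and non-alphanumeric characters."""
    cleaned = [c.lower() for c in text if c.isalnum()]
    return cleaned == cleaned[::-1]

print(is_palindrome("A man, a plan, a canal: Panama"))  # True
print(is_palindrome("chatbot"))                         # False
```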
 
There is no clear cut-off between the things we consider alive and those that we don't. Is a virus alive? It's literally just a piece of DNA/RNA wrapped in a protein membrane. It cannot move or multiply on its own; it only works as a cellular parasite. What about a single cell in a human body? Bacteria? The same can be said about consciousness and the ability to think. We consider humans conscious, intelligent. What about dogs, cats, dolphins? Rats? Frogs? Insects?

Modern neural networks are certainly not intelligent; they are designed to imitate human behavior, but are still several orders of magnitude behind the human brain in complexity. Yet they are not so far behind simpler lifeforms, and are already more complex than single-cell organisms or viruses. The last few years' advances are very impressive, achieved by progress in hardware and algorithms. Currently the complexity doubles every 1-2 years, and we will certainly see further improvements in machines' ability to do the tasks we consider intelligent.

Will it ever cross the threshold of "intelligence"? There are different opinions on this; I believe it is possible, but as I said we are still very far below this threshold. If hardware and algorithm capabilities continue to grow at the current pace, we may reach a semblance of it in 1-2 decades. But it will not necessarily look like a machine behaving like a human. To do that, a machine would have to be programmed the same way we are "programmed" by evolution, with our feelings, emotions, self-preservation, and other innate abilities.
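The "doubling every 1-2 years" figure implies a rough timeline for closing the gap. A back-of-the-envelope sketch, assuming a hypothetical 10,000x complexity gap (an illustrative number, not a measurement):

```python
import math

# Back-of-the-envelope: how long until doubling closes the complexity gap?
gap = 1e4             # assumed brain-vs-network complexity ratio (illustrative)
doubling_years = 1.5  # midpoint of the "every 1-2 years" claim

years = math.log2(gap) * doubling_years
print(round(years, 1))  # 19.9 -- consistent with the "1-2 decades" estimate
```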
 
There is no clear cut-off between the things we consider alive and those that we don't. Is a virus alive? It's literally just a piece of DNA/RNA wrapped in a protein membrane. It cannot move or multiply on its own; it only works as a cellular parasite. What about a single cell in a human body? Bacteria? The same can be said about consciousness and the ability to think. We consider humans conscious, intelligent. What about dogs, cats, dolphins? Rats? Frogs? Insects?

Modern neural networks are certainly not intelligent; they are designed to imitate human behavior, but are still several orders of magnitude behind the human brain in complexity. Yet they are not so far behind simpler lifeforms, and are already more complex than single-cell organisms or viruses. The last few years' advances are very impressive, achieved by progress in hardware and algorithms. Currently the complexity doubles every 1-2 years, and we will certainly see further improvements in machines' ability to do the tasks we consider intelligent.

Will it ever cross the threshold of "intelligence"? There are different opinions on this; I believe it is possible, but as I said we are still very far below this threshold. If hardware and algorithm capabilities continue to grow at the current pace, we may reach a semblance of it in 1-2 decades. But it will not necessarily look like a machine behaving like a human. To do that, a machine would have to be programmed the same way we are "programmed" by evolution, with our feelings, emotions, self-preservation, and other innate abilities.
Perhaps you can look at it this way: if (as a parallelism, obviously) sentience required a derivative to differ from its function, machines could only ever be e raised to some power. In other words, if there is a requirement of distinction between what you are at any point (the function) and the lower levels of what you are (the list of consecutive derivatives, infinitely many or not), you already require something proven to have distinct top and low levels, while both (and any intermediate) levels operate individually (e.g. a human clearly doesn't hold in consciousness all the lower-level goings-on in the brain, but still owns them). With machines, while they do have different levels (machine language -> compiled code, etc.), there is no logical claim that the machines themselves register, in even the most basic manner, that those levels are different.
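The e-to-some-power analogy can be checked numerically: exp is the one function indistinguishable from every one of its derivatives, so its "levels" collapse into one, whereas an ordinary function like x³ has genuinely distinct levels. A small finite-difference sketch:

```python
import math

def nth_derivative(f, x, n, h=1e-3):
    # Central finite-difference estimate of the n-th derivative of f at x.
    if n == 0:
        return f(x)
    return (nth_derivative(f, x + h, n - 1, h)
            - nth_derivative(f, x - h, n - 1, h)) / (2 * h)

x0 = 1.0
# exp equals its own first and second derivatives at every point...
for n in (0, 1, 2):
    assert abs(nth_derivative(math.exp, x0, n) - math.exp(x0)) < 1e-4

# ...while x**3 has genuinely distinct "levels": 3*x**2, then 6*x.
assert abs(nth_derivative(lambda t: t**3, x0, 1) - 3.0) < 1e-4
assert abs(nth_derivative(lambda t: t**3, x0, 2) - 6.0) < 1e-4
```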

There is also the question of whether you can construct, from the top down, something that has sentience (or even the slightest sense of things existing; an ant may not have sentience, but it obviously senses stuff in its environment). Lots of creatures very insignificant in relation to even us humans clearly have at least some basic sensory experience, but to give that as a property of an artificial system may be impossible without tapping into an existent bio-matter basis for its emergence.
 
Perhaps you can look at it this way: if (as a parallelism, obviously) sentience required a derivative to differ from its function, machines could only ever be e raised to some power. In other words, if there is a requirement of distinction between what you are at any point (the function) and the lower levels of what you are (the list of consecutive derivatives, infinitely many or not), you already require something proven to have distinct top and low levels, while both (and any intermediate) levels operate individually (e.g. a human clearly doesn't hold in consciousness all the lower-level goings-on in the brain, but still owns them). With machines, while they do have different levels (machine language -> compiled code, etc.), there is no logical claim that the machines themselves register, in even the most basic manner, that those levels are different.
Neural networks which process images have different levels of "understanding" information in their different layers (from simple edge detection, to detecting complex objects, to recognizing faces, etc.), similar to how things work in our visual cortex. Intelligence can be, and most likely is, an emergent phenomenon too. I don't see what could prevent machines from producing the same "function". What's the fundamental difference? We are built from the same matter; there are no physical limitations on replicating processes in living organisms, even if they aren't fully understood yet. Even Penrose's claim was about non-algorithmic processes in our nervous system; according to him they cannot be reproduced by a Turing machine, not by machines in principle.
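The lowest of those levels is easy to make concrete: early layers of image networks typically learn simple edge filters. A toy 1-D version of such a filter (invented values, not weights from any real network):

```python
# A toy version of what the first layer of an image network often learns:
# an edge-detecting filter.
kernel = [-1, 0, 1]  # responds to left-to-right intensity change

def detect_edges(row):
    # 1-D "valid" correlation of the kernel over one row of pixels.
    return [sum(k * row[i + j] for j, k in enumerate(kernel))
            for i in range(len(row) - len(kernel) + 1)]

row = [0, 0, 0, 10, 10, 10]  # flat dark region, then a bright step
print(detect_edges(row))     # [0, 10, 10, 0]: zero where flat, peaks at the edge
```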

There is also the question of whether you can construct, from the top down, something that has sentience (or even the slightest sense of things existing; an ant may not have sentience, but it obviously senses stuff in its environment). Lots of creatures very insignificant in relation to even us humans clearly have at least some basic sensory experience, but to give that as a property of an artificial system may be impossible without tapping into an existent bio-matter basis for its emergence.
We can analyse a bio-ant's behavior and conclude from our observations that it may have sensory experience. But we can construct a robo-ant which demonstrates the same behavior, has sensors, and is able to do the same tasks. How can we conclude that the former has sensory experience and the latter doesn't, if their behavior is the same? It would be a non-scientific claim. It's more a philosophical or religious belief that if this thing wasn't created the "natural way", it cannot have senses. But it's unprovable. What if, by precisely replicating the bio-ant's behavior, we replicated its sensory experience too?
 
We can analyse a bio-ant's behavior and conclude from our observations that it may have sensory experience. But we can construct a robo-ant which demonstrates the same behavior, has sensors, and is able to do the same tasks. How can we conclude that the former has sensory experience and the latter doesn't, if their behavior is the same? It would be a non-scientific claim. It's more a philosophical or religious belief that if this thing wasn't created the "natural way", it cannot have senses. But it's unprovable. What if, by precisely replicating the bio-ant's behavior, we replicated its sensory experience too?
I really don't like this way of looking at it, because counter-examples are easy to come up with (never a good sign). Imagine someone reproduces your voice behind a curtain, and someone in the room thinks you are there. It wouldn't make sense to argue that the mechanism reproducing your voice is the same as you just because the specific "behavior" was identical.
 
I really don't like this way of looking at it, because counter-examples are easy to come up with (never a good sign). Imagine someone reproduces your voice behind a curtain, and someone in the room thinks you are there. It wouldn't make sense to argue that the mechanism reproducing your voice is the same as you just because the specific "behavior" was identical.
But it's not identical :)
It's easy to perform an experiment and prove that the device behind the curtain is not me.
I agree that this way is flawed, but it's the only scientific way to prove something.
If you want to make the scientific claim that a bio-ant has sensory experience, there should be an experiment which can show a measurable difference between things that have it and those that don't.
 