Consciousness: what it is, where it comes from, can machines have it, and why do we care?

It has the potential, and it eventually expresses it later in its/his/her life. And you can't speak "babyish", so how do you know the baby isn't already making fun of your blouse in baby talk?

You need to watch more Doctor Who:


Video proof with a clear scientific explanation, please? Just to be sure it actually happened, ya know.


We’ve had nearly the same alphabet our whole written existence. I’ve never invented a new letter. Have you?

Have you never created a 'secret code' or some new way of writing something down? I did in college - had to, to keep up with the lectures. My anthropology instructor had a habit of saying, "Just write a few notes in the margins" when he handed out the one-page summary of what the lecture would be about that day.

By the end of class, that page was covered by the "few notes in the margins" and I had to invent a couple of new ways to express certain concepts just to save time and space. Add to this that there were times when I'd use French words if it meant a shorter way to write whatever the instructor was saying.

To this day I defy anyone but myself to fully understand any of those notes without asking me what some of it means - not because anthropology at that level is hard, but because of what I had to do to get those notes written down.

Em, do you AGREE with me on the chess topic?

Where did I say it's an "ongoing real-time state" and not a "total sum potential"?
A bird has the potential to fly, but it also spends a lot of time on the ground - so can or can't it fly during those periods when it DOESN'T?

Depends. If the bird is too young to fly, it can't. If it's injured, it can't. If it's a penguin, it can't.

Are non-penguin birds birds because they can't swim underwater like penguins can?

Em... Was it a jab at me?
I was rather serious that I see "consciousness" as "imagination", and that one is best expressed via "inventing new concepts from scratch".
Let's say I kinda missed your point here.

So to you, nobody is "conscious" unless they invent new concepts from scratch? You do realize that many mammals go through a time when they're babies, and their existence is eating, sleeping, eliminating, and eventually learning to walk and interact with other lifeforms, and I'm not even talking about humans?

If you don't think babies are conscious, I sincerely hope you're not a parent. I'll admit that my exposure to human babies is limited (not really into them), but at least I know they're conscious. Most of what I know about mammalian babies is about cats - some of mine were born in the house and were part of my family until they reached old age and died.

Using tools is problematic, because you CAN'T KNOW whether the behaviour was observed somewhere and then passed down over many generations. Or maybe it's instinctual to begin with.
Communication is the primary choice because that way we can literally ASK the subject something and EXPECT them to reply coherently. It's just the easiest way to TEST cause-effect.
Also, when I say "consciousness", I literally mean "something that wasn't pre-programmed", and I actually do treat this topic as if ALL test subjects are "computers".
I never said this is the correct way to do it - but I've yet to see any better way either, so my point stands simply because nobody has challenged it effectively enough yet.

You're never satisfied, are you? Only tool-users are conscious, but OOPS, it doesn't count if they were taught? :huh:

Teach a gorilla sign language, and you can have a conversation with them. Or will you move the goalposts yet again and say, nope, it has to be a spoken human language, even though the animal in question doesn't possess everything physiologically required to communicate in spoken human languages?

How well do you speak fluent cat? You can't, because you lack a tail and ears that can swivel on top of your head. Your tongue lacks the ability to hiss properly.

(Here's how to look at cat photos and tell if they're hissing or yawning: A yawning cat's tongue curls up and in at the end. A hissing cat's tongue curls inward at the sides.)

I've approximated a hiss when communicating with my own cats. It was good enough that they looked at me in shock and then responded appropriately as though their own mother had done it. The hiss in that context meant "Stop what you're doing NOW."
 
This is a new behavior in orcas. They certainly did not learn it from humans or watching Netflix. The first episodes of this date back to 2020. The orcas certainly have "invented" a new pod behavior that can sink boats. They do not seem to be targeting people, just the boats.

A pod of orcas has sunk a yacht in the Strait of Gibraltar

November 7, 2023, 6:27 PM ET



 
If anything, I'm not sure what ELSE could be used as a litmus test for a distinctively non-human intelligence, beyond communication skills.
There are some in the OP.
I never said this is the correct way to do it - but I've yet to see any better way either, so my point stands simply because nobody has challenged it effectively enough yet.
How are you defining better? In the OP I define it as practically useful. If you think your definition is practically useful, what for?
 
There are some in the OP.
Cool. But do YOU realize that "climate change" is precisely a "concept of imagination" that requires the explicit capability of "thinking in a calendar-based time frame"?
In other words, in order to ever assume there is a "climate change" at all, you'd unavoidably NEED:
1. The concept of a calendar and of seasons/years. Otherwise you can't measure any "changes" that happen over time periods longer than the single "day" of "today".
2. The concept of climate way beyond "hot/cold" and "wet/dry". Again, you need to invent actual weather parameters to measure, some of which are very much "abstract".
3. The concept of "bad climate" detached from "bad weather". After all, "climate change" is a total sum of "daily weather over a long period of time", and some of it is unavoidably "pleasant".
All in all, "climate change worry" is a perfect example of "imagination related to inventing detached concepts" - so you're literally using the same parameter that I do, lol.
 
1. I'm going by "total sum potential". A baby human can grow into an Einstein equivalent (in smarts), but a baby penguin CAN'T grow into an eagle (and learn how to fly).
2. I'm going by "species", not by "individuals". Not ALL human babies will grow into an Einstein equivalent, but Einstein was still born as a human baby with that potential.
3. Whales may have the most complex language ever, but until and unless we meaningfully communicate with them about ABSTRACT concepts, it's a useless unverifiable assumption.
4. Your "cat example" isn't abstract at all. I never said animals don't have "simple meaning signals", because they very obviously DO have them. The question here is about abstract concepts.
 
Cool. But do YOU realize that "climate change" is precisely a "concept of imagination" that requires the explicit capability of "thinking in a calendar-based time frame"?
In other words, in order to ever assume there is a "climate change" at all, you'd unavoidably NEED:
1. The concept of a calendar and of seasons/years. Otherwise you can't measure any "changes" that happen over time periods longer than the single "day" of "today".
2. The concept of climate way beyond "hot/cold" and "wet/dry". Again, you need to invent actual weather parameters to measure, some of which are very much "abstract".
3. The concept of "bad climate" detached from "bad weather". After all, "climate change" is a total sum of "daily weather over a long period of time", and some of it is unavoidably "pleasant".
All in all, "climate change worry" is a perfect example of "imagination related to inventing detached concepts" - so you're literally using the same parameter that I do, lol.
Daniel Dennett goes far beyond this, though. Those may be necessary, but they are very far from sufficient. You also need to build a planet-spanning economy. As I said, I do not think this definition is much use, just that it is different from yours.

And then there is Kevin Mitchell and Joseph LeDoux. I have not read the source material for these, but it seems Kevin Mitchell puts the emphasis on some sort of idea of individual agency, and Joseph LeDoux on the ability to verbally report the content of experiences.

Joseph LeDoux's is pretty close to yours; do you accept that the others are different?

Just to be clear, I am not saying your definition is any worse than any others, and I think the worst of all is Daniel Dennett's, as according to him we would basically have to find an alien civilisation to find anything but us conscious. Just that these are all different.
3. Whales may have the most complex language ever, but until and unless we meaningfully communicate with them about ABSTRACT concepts, it's a useless unverifiable assumption.
We know we have not got the answer to exactly what is conscious. Just because a definition requires more science to be done before it is practically useful does not mean the definition is useless. It is something we are working on, and we have not had AI that seems able to decipher complex language for long.
 
1. I'm going by "total sum potential". A baby human can grow into an Einstein equivalent (in smarts), but a baby penguin CAN'T grow into an eagle (and learn how to fly).
2. I'm going by "species", not by "individuals". Not ALL human babies will grow into an Einstein equivalent, but Einstein was still born as a human baby with that potential.
3. Whales may have the most complex language ever, but until and unless we meaningfully communicate with them about ABSTRACT concepts, it's a useless unverifiable assumption.
4. Your "cat example" isn't abstract at all. I never said animals don't have "simple meaning signals", because they very obviously DO have them. The question here is about abstract concepts.

:rolleyes:

A baby human can also grow into someone who is, as the saying goes, "dumber than a box of hair." There are some of these people currently running my province into the ground.

Kindly don't 'splain evolution to me, 'k? :huh: I'm quite aware of how it works. And do tell me where I ever said that a baby penguin ever even thought of growing up to be an eagle and flying. Penguins waddle and swim. Eagles fly. Each is in its own ecological niche, both of which are vanishing at a rate that isn't beneficial for the survival of either type of bird.

Does the fact that penguins don't go on talk shows to discuss the fact that 10,000 of their chicks drowned earlier this year due to a melting ice shelf mean that penguins don't have consciousness?

So whales obviously do have language, but because we haven't managed to learn it, you claim they don't have language?

Is a dog who saves its human in some way just going through a "simple meaning signal"?


Wow. What's it like to live in such a tiny bubble of narcissism?
 
One day when I was ~20 years old, I was sitting around sad and tired at work (can't remember exactly why), and a colleague's dog came into the room.
He sat in front of me, made a comforting sound (difficult to describe when you are not a native English speaker) and kept looking at me in a way I can only describe as heartwarming.

An important experience... from that day on I started holding animals in high regard.
Not that I ignored them before... but you know, there can be more important things as a teen.
 
Wow. What's it like to live in such a tiny bubble of narcissism?
You missed my entire discussion, it seems.
I explicitly said these points:
1. Animals have emotions and can communicate them. Still doesn't make them "conscious", see next point.
2. Emotions are just another layer of "danger/pack" instincts. These may be far more complex than simplistic "fight or flight". Still proves nothing about "consciousness".
3. Human geniuses are born as human babies. The latter clearly don't express their potential to reach the former achievements until many years later, though. Yet they have it from birth.
4. On the other hand, penguins DON'T have the potential of flying like an eagle, so a penguin growing up isn't going to "unleash its eagle potential", because it doesn't have it to begin with.
5. Thus, the potential for "consciousness" can only be measured by observing the "entire life" of a test subject, simply because it's usually only expressed much later in that subject's life.
6. I'm going by "species", because it's easier than to go by individual specimens, and also because I don't consider "consciousness" to be an isolated feature, but rather a subset of society.
 
Just to be clear, I am not saying your definition is any worse than any others, and I think the worst of all is Daniel Dennett's, as according to him we would basically have to find an alien civilisation to find anything but us conscious. Just that these are all different.

We know we have not got the answer to exactly what is conscious. Just because a definition requires more science to be done before it is practically useful does not mean the definition is useless. It is something we are working on, and we have not had AI that seems able to decipher complex language for long.
My definition is easy to verify, and also quite straightforward, and I honestly kinda didn't get how any of these alternatives are "easier to use in practical terms".
Thus, "if it's easier to apply in practical verifiable terms - it's most probably a better theory", or so I'd think.
 
Just as an aside, the actual definition of consciousness is based on being aware of oneself and of one's surroundings. When someone "loses consciousness", they stop being aware of themselves and of what happens around them. When someone "regains consciousness", they are again self-aware and aware of what happens around them.
It's not about intelligence, it's not about imagination, it's not only about mechanical reactions to external stimuli.
 
My definition is easy to verify, and also quite straightforward, and I honestly kinda didn't get how any of these alternatives are "easier to use in practical terms".
Thus, "if it's easier to apply in practical verifiable terms - it's most probably a better theory", or so I'd think.
To be clear, I never said any were better than yours in any sense. The only one I expressed an opinion on was Daniel Dennett's, and that opinion is that it is the least useful.

What I really want, and I am not directing this at you as much as at the scientific/philosophical community, is answers to practical questions that we are likely to have to solve. A theory that helps with that is what I am calling a "better" theory. Being objectively measurable is a very big part of this.

If I had a criticism of your point, it is the ease with which computers may qualify. I would define "practical verifiable terms" as tests that could be applied to a black box and distinguish between there being an undeniably conscious human and a non-conscious computer inside. Looking at how one may do that for some of your tests, I think computers are closer than you think.

Innovation - no. If you teach it to do "1 + 1", it won't ever learn how to do "1 ^ 1", even if you leave it alone for a thousand years. It's simply not in there, period.
I think they may. How do we get from one to the other?

x + 1 is defined by set theory, in 362 pages of Principia Mathematica IIRC
x + n = "+ 1" applied to x, n times
x * n = x added to itself n times
x ^ n = x multiplied by itself n times
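
Just to make that ladder concrete, here is a minimal sketch in Python (my own illustration, nothing from the sources above): each operation is defined purely by repeating the one below it, with the successor function as the only primitive.

```python
# Minimal sketch: bootstrapping arithmetic from a single primitive,
# the successor function succ(x) = x + 1 (Peano-style).

def succ(x: int) -> int:
    return x + 1                      # the only primitive: "add one"

def add(x: int, n: int) -> int:
    result = x
    for _ in range(n):                # x + n: apply succ to x, n times
        result = succ(result)
    return result

def mul(x: int, n: int) -> int:
    result = 0
    for _ in range(n):                # x * n: add x to itself n times
        result = add(result, x)
    return result

def power(x: int, n: int) -> int:
    result = 1
    for _ in range(n):                # x ^ n: multiply by x, n times
        result = mul(result, x)
    return result

assert power(2, 10) == 1024           # sanity check
```

The question upthread is whether a system that has only ever been shown the addition rung could climb to the exponentiation rung on its own.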

We can then compare that to some of the advances in maths they have come up with, for example beating human-devised methods at some kinds of matrix maths, or coming up with, and even solving, some mathematical conjectures. I would not put it past some AI to make the sorts of leaps above.

Sapience would require a gorilla to combine the signs of "group" and "house" into a new sign of "family" - or to do something analogous, where the new sign is NOT taught to it by others.
Not quite what you ask for, but this came out a couple of weeks ago and is close enough that I would not expect such a test to stay safe for long:

AI ‘breakthrough’: neural net has human-like ability to generalize language Paper here

Scientists have created a neural network with the human-like ability to make generalizations about language. The artificial intelligence (AI) system performs about as well as humans at folding newly learned words into an existing vocabulary and using them in fresh contexts, which is a key aspect of human cognition known as systematic generalization.​
The researchers gave the same task to the AI model that underlies the chatbot ChatGPT, and found that it performs much worse on such a test than either the new neural net or people, despite the chatbot’s uncanny ability to converse in a human-like manner.​

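To make "systematic generalization" concrete, here is a toy version of the kind of probe such studies use (a hypothetical sketch of mine, not the paper's actual benchmark): the subject learns a few made-up words, then must interpret a combination of them it has never seen.

```python
# Toy compositional-generalization probe (hypothetical illustration).
# The subject learns primitive words and modifier words separately,
# then is tested on a combination that never appeared in training.

primitives = {"dax": "JUMP", "wif": "WALK"}   # made-up word -> action
modifiers = {"twice": 2, "thrice": 3}         # made-up word -> repetition

def interpret(command: str) -> list[str]:
    """Compose the lexicon: 'dax twice' -> ['JUMP', 'JUMP']."""
    word, *rest = command.split()
    actions = [primitives[word]]
    for mod in rest:
        actions = actions * modifiers[mod]
    return actions

# Training exposure: "dax", "wif twice" -- but never "dax thrice".
# The generalization probe asks for the unseen combination:
assert interpret("dax thrice") == ["JUMP", "JUMP", "JUMP"]
```

A subject that has internalized the composition rule passes the probe; one that merely memorizes seen pairs does not.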

Show me a gorilla that made its own sign language (as opposed to being taught one) - then I might allow some slack in my opinion.
Google's AI just [in 2016] created its own universal 'language' Paper here

Google has previously taught its artificial intelligence to play games, and it's even capable of creating its own encryption. Now, its language translation tool has used machine learning to create a 'language' all of its own.​
In September, the search giant turned on its Google Neural Machine Translation (GNMT) system to help it automatically improve how it translates languages. The machine learning system analyses and makes sense of languages by looking at entire sentences – rather than individual phrases or words.​
Following several months of testing, the researchers behind the AI have seen it blindly translate languages even if it's never studied one of the languages involved in the translation. "An example of this would be translations between Korean and Japanese where Korean⇄Japanese examples were not shown to the system," Mike Schuster, from Google Brain, wrote in a blog post.​
However, the most remarkable feat of the research paper isn't that an AI can learn to translate languages without being shown examples of them first; it was the fact it used this skill to create its own 'language'. "Visual interpretation of the results shows that these models learn a form of interlingua representation for the multilingual model between all involved language pairs," the researchers wrote in the paper.​
An interlingua is a type of artificial language, which is used to fulfil a purpose. In this case, the interlingua was used within the AI to explain how unseen material could be translated.​
 
I'd like to see actual examples, not mere third-party conclusions that I can't verify.
And they taught a computer to "play chess with a language", such a wow achievement.
Alternatively, they totally dismissed the simple possibility that languages are SIMILAR enough to be "learned" by juxtaposition in the first place.
Especially such closely related languages and especially when learning via whole sentences (a tool which I personally always recommend everyone as the best way to learn a language).
Now, if the example was about the AI learning to translate Chinese to Hebrew, and specifically in writing (aka hieroglyphs-to-letters) - well, that'd be a much better proof.
 
Just as an aside, the actual definition of consciousness is based on being aware of oneself and of one's surroundings. When someone "loses consciousness", they stop being aware of themselves and of what happens around them. When someone "regains consciousness", they are again self-aware and aware of what happens around them.
It's not about intelligence, it's not about imagination, it's not only about mechanical reactions to external stimuli.
I explicitly said several times already that I *may* be talking about a different concept simply because of the language differences.
My subject is about "what makes a human different from a dog on the intellectual level" - and this is precisely what I keep talking about all the time here.
Whether this is "consciousness" or something else, well, maybe it would've been clearer to me, if it was in Russian (my primary language), dunno.
 
Now, if the example was about the AI learning to translate Chinese to Hebrew, and specifically in writing (aka hieroglyphs-to-letters) - well, that'd be a much better proof.
If an AI can do this, will you accept it is conscious? I would not, and I would be surprised if this is not done within my lifetime. That paper, from an age ago in AI research, had English↔Japanese:


Figure legend:
A t-SNE projection of the embedding of 74 semantically identical sentences translated across all 6 possible directions, yielding a total of 9,978 steps (dots in the image), from the model trained on English↔Japanese and English↔Korean examples. (a) A bird’s-eye view of the embedding, coloring by the index of the semantic sentence. Well-defined clusters each having a single color are apparent. (b) A zoomed in view of one of the clusters with the same coloring. All of the sentences within this cluster are translations of “The stratosphere extends from about 10km to about 50km in altitude.” (c) The same cluster colored by source language. All three source languages can be seen within this cluster.
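
For anyone who wants to poke at this kind of plot themselves, here is a minimal sketch of how such a projection is produced (the embeddings below are random placeholders; in the paper they come from the translation model's internals):

```python
# Minimal sketch of a t-SNE projection like the figure above.
# NOTE: 'embeddings' here are random placeholders standing in for the
# translation model's internal sentence vectors.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
n_sentences, n_langs, dim = 74, 3, 512

# Placeholder structure: one centre per semantic sentence, with each
# language's version a small perturbation around that centre.
centres = rng.normal(size=(n_sentences, dim))
embeddings = np.concatenate(
    [centres + 0.05 * rng.normal(size=centres.shape) for _ in range(n_langs)]
)

# Project the high-dimensional vectors down to 2-D points.
xy = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(embeddings)
print(xy.shape)  # (222, 2): one 2-D point per (sentence, language) pair
```

If semantically identical sentences from different languages land in the same cluster, that is the "interlingua" claim in visual form.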


If not, what practical test would you propose?
 
(...)
Sapience would require a gorilla to combine the signs of "group" and "house" into a new sign of "family" - or to do something analogous, where the new sign is NOT taught to it by others.

(...)


But Koko warmed to her interviewer quickly, and when Gorney asked Koko where gorillas go when they die, she signed, “Comfortable hole bye."

Koko is actually famous, even made jokes apparently :)
 



Koko is actually famous, even made jokes apparently :)
How did they explain "die" to her, though?
Are we sure it wasn't synonymous with "sleep" or "rest", especially since WE often use it in similar euphemisms?

Also, you missed my point: Not to COMBINE signs one after another, but to CREATE a combo sign.
Though I may find it hard to translate this task into sign language (which IS using combos of separate signs all the time), so I dunno.
 
I explicitly said several times already that I *may* be talking about a different concept simply because of the language differences.
My subject is about "what makes a human different from a dog on the intellectual level" - and this is precisely what I keep talking about all the time here.
Whether this is "consciousness" or something else, well, maybe it would've been clearer to me, if it was in Russian (my primary language), dunno.
What you are talking about would probably be "sapience".
And the difference between a human and a dog would only be a matter of complexity, just like a child and an adult (in fact, dogs and apes do reach the level of intelligence of young humans).
 
Idk - I assume this experiment was all written up in a scientific report - I may look for it later.

"The Gorilla Foundation said it a statement that it “will continue to honor Koko’s legacy and advance our mission” by studying sign language in great apes and pursuing conservation projects in Africa and elsewhere."
 
If not, what practical test would you propose?
Do I look like an AI expert to you?
Though, I did provide a practical test: force a CHESS-and-CHECKERS computer to invent "checkers played via chess pieces", just like humans do when they only have a chess set.
This way, you can see that it's capable of "abstractly" viewing the pieces - a "knight" is suddenly a "checker", and a "chess king" is suddenly a "checkers king" with a very different move set.
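As a minimal sketch of what that re-mapping amounts to in software terms (my own illustration, with made-up names): the physical token gets decoupled from the rules it obeys, so the same piece can carry a different role in each game.

```python
# Hypothetical sketch: one box of physical tokens, two rule sets.
# The "abstract viewing" in the test is exactly this token -> role indirection.
from dataclasses import dataclass

@dataclass(frozen=True)       # frozen makes tokens usable as dict keys
class Token:
    shape: str                # what the piece physically looks like

knight = Token("knight")
king = Token("king")

# The rules consult a role table, not the token's shape.
chess_roles = {knight: "knight (moves in an L)",
               king: "chess king (one square in any direction)"}
checkers_roles = {knight: "checker (diagonal steps, jump captures)",
                  king: "checkers king (crowned, also moves backwards)"}

def describe(token: Token, roles: dict) -> str:
    return f"{token.shape} plays as: {roles[token]}"

print(describe(knight, chess_roles))     # the knight as a knight
print(describe(knight, checkers_roles))  # the SAME token as a checker
```

The test asks whether the machine could introduce that second role table itself, instead of a programmer handing it over.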
 