
Consciousness: what it is, where it comes from, can machines have it, and why do we care?

I'll be in the city a month earlier; $150 is a bit outside my range anyway, but it would be cool to listen and schmooze with the attendees afterwards if I were there and had the pocket change to go.
 
WRT orchestrated objective reduction, aka "consciousness is in quantum effects in microtubules": they just found quantum effects in microtubules where they did not expect to. This is some support for the theory, though I would say quite weak support.

Spoiler YouTube reference :
 
Sperm whale codas are at least a bit like human language. I think this is a pretty good indication of something like consciousness.

Contextual and combinatorial structure in sperm whale vocalisations

Sperm whales (Physeter macrocephalus) are highly social mammals that communicate using sequences of clicks called codas. While a subset of codas have been shown to encode information about caller identity, almost everything else about the sperm whale communication system, including its structure and information-carrying capacity, remains unknown. We show that codas exhibit contextual and combinatorial structure. First, we report previously undescribed features of codas that are sensitive to the conversational context in which they occur, and systematically controlled and imitated across whales. We call these rubato and ornamentation. Second, we show that codas form a combinatorial coding system in which rubato and ornamentation combine with two context-independent features we call rhythm and tempo to produce a large inventory of distinguishable codas. Sperm whale vocalisations are more expressive and structured than previously believed, and built from a repertoire comprising nearly an order of magnitude more distinguishable codas. These results show context-sensitive and combinatorial vocalisation can appear in organisms with divergent evolutionary lineage and vocal apparatus.

Spoiler Big image that sort of represents the complexity :

Spoiler Legend :
Sperm whale codas were previously hypothesized to comprise 21 independent coda types. We show that this coda repertoire is built from two context-independent features (rhythm and tempo) and two context-sensitive features (rubato and ornamentation).

A Tempo: (Left) The overall duration of a coda is the sum of its inter-click intervals (ICIs). (Centre) Coda durations are distributed around a finite set of modes, which we call tempo types. (Right) Snippets from exchange plots showing codas of different tempo types.

B Rhythm: (Left) Normalising the vector of ICIs by the total duration returns a duration-independent coda representation, which we call rhythm. (Centre) Codas cluster around 18 rhythm types. (Right) Examples of normalised codas showing different rhythm types.

C Rubato: (Left) Sperm whales slowly modulate coda duration across consecutive codas, a phenomenon we call rubato. (Centre) Rubato is gradual: adjacent codas have durations more similar to each other than codas of the same type from elsewhere in an exchange. (Right) Whale choruses with imitation of rubato represented in exchange plots.

D Ornamentation: (Left) Some codas feature 'extra clicks' (ornaments) not present in neighbouring codas that otherwise share the same ICIs. (Centre) A density plot showing the distribution of the ratio between final ICIs in ornamented codas versus unornamented codas. Ornamented codas have a significantly different ICI distribution compared to regular codas. (Right) Examples of ornaments in the DSWP dataset.

E Thirty minutes of multi-whale choruses: Exchanges feature imitation of coda duration across whales, gradually accumulated changes in call structure, and rich contextual variability.
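The two context-independent features in the legend reduce to simple functions of the inter-click intervals. Here is a minimal sketch in Python, assuming a coda is given as its list of ICIs in seconds (the example values are made up):

```python
# A sketch of the two context-independent coda features described in the
# legend, assuming a coda is given as its list of inter-click intervals
# (ICIs) in seconds. The example values below are made up.

def tempo(icis: list[float]) -> float:
    """Tempo: the overall duration of a coda, i.e. the sum of its ICIs."""
    return sum(icis)

def rhythm(icis: list[float]) -> list[float]:
    """Rhythm: the ICI vector normalised by total duration, giving a
    duration-independent representation of the coda's shape."""
    total = sum(icis)
    return [ici / total for ici in icis]

# A hypothetical 5-click coda has 4 inter-click intervals.
coda = [0.15, 0.18, 0.20, 0.47]
print(tempo(coda))   # ~1.0 s: assign to the nearest duration mode (tempo type)
print(rhythm(coda))  # normalised shape: cluster into one of ~18 rhythm types
```

Rubato and ornamentation then sit on top of these: rubato is gradual drift in tempo across consecutive codas, and ornamentation is an extra click appended to an otherwise matching ICI pattern.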

 
I'm not surprised. Nice post.
 
LLMs are better at "Theory of Mind" than people. What does that say about consciousness?

Testing theory of mind in large language models and humans

At the core of what defines us as humans is the concept of theory of mind: the ability to track other people’s mental states. The recent development of large language models (LLMs) such as ChatGPT has led to intense debate about the possibility that these models exhibit behaviour that is indistinguishable from human behaviour in theory of mind tasks. Here we compare human and LLM performance on a comprehensive battery of measurements that aim to measure different theory of mind abilities, from understanding false beliefs to interpreting indirect requests and recognizing irony and faux pas. We tested two families of LLMs (GPT and LLaMA2) repeatedly against these measures and compared their performance with those from a sample of 1,907 human participants. Across the battery of theory of mind tests, we found that GPT-4 models performed at, or even sometimes above, human levels at identifying indirect requests, false beliefs and misdirection, but struggled with detecting faux pas. Faux pas, however, was the only test where LLaMA2 outperformed humans. Follow-up manipulations of the belief likelihood revealed that the superiority of LLaMA2 was illusory, possibly reflecting a bias towards attributing ignorance. By contrast, the poor performance of GPT originated from a hyperconservative approach towards committing to conclusions rather than from a genuine failure of inference. These findings not only demonstrate that LLMs exhibit behaviour that is consistent with the outputs of mentalistic inference in humans but also highlight the importance of systematic testing to ensure a non-superficial comparison between human and artificial intelligences.


Spoiler Legend :
a, Original test items for each test showing the distribution of test scores for individual sessions and participants. Coloured dots show the average response score across all test items for each individual test session (LLMs) or participant (humans). Black dots indicate the median for each condition. P values were computed from Holm-corrected Wilcoxon two-way tests comparing LLM scores (n = 15 LLM observations) against human scores (irony, N = 50 human participants; faux pas, N = 51 human participants; hinting, N = 48 human participants; strange stories, N = 50 human participants). Tests are ordered in descending order of human performance.

b, Interquartile ranges of the average scores on the original published items (dark colours) and novel items (pale colours) across each test (for LLMs, n = 15 LLM observations; for humans, false belief, N = 49 human participants; faux pas, N = 51 human participants; hinting, N = 48 human participants; strange stories, N = 50 human participants). Empty diamonds indicate the median scores, and filled circles indicate the upper and lower bounds of the interquartile range. P values shown are from Holm-corrected Wilcoxon two-way tests comparing performance on original items against the novel items generated as controls for this study.
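The statistical recipe in the legend is straightforward to reproduce in outline: one Wilcoxon rank-sum comparison of LLM scores against human scores per test, then a Holm correction across the battery. A minimal sketch with made-up score arrays (the paper's exact test variant and data may differ):

```python
# Sketch: per-test Wilcoxon rank-sum comparisons of LLM scores against human
# scores, Holm-corrected across the battery. The score arrays are made up.
import numpy as np
from scipy.stats import mannwhitneyu
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
tests = ["irony", "faux pas", "hinting", "strange stories"]

pvals = []
for name in tests:
    llm_scores = rng.uniform(0.6, 1.0, size=15)    # n = 15 LLM sessions (hypothetical)
    human_scores = rng.uniform(0.5, 1.0, size=50)  # ~50 participants (hypothetical)
    _, p = mannwhitneyu(llm_scores, human_scores, alternative="two-sided")
    pvals.append(p)

# Holm correction controls the family-wise error rate across the battery.
reject, p_corrected, _, _ = multipletests(pvals, method="holm")
for name, p, sig in zip(tests, p_corrected, reject):
    print(f"{name}: corrected p = {p:.3f}, significant = {bool(sig)}")
```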
 
What does that say about consciousness?
That such models can mimic some aspects of human abilities? I'm waiting for an AI to fall in love and then behave irrationally towards its beloved. :p
 
Well, what does theory of mind really tell us about consciousness in the first place?
 
Isn't consciousness, like time, just a concept of human perception, and therefore something with no real means of being measured effectively from within one's own head and lens? I get that a rock doesn't seem conscious, but other entities with higher communicability seem like they could be.
 
Well, what does theory of mind really tell us about consciousness in the first place?
One of the features of a conscious being that has been postulated is the capacity to put oneself into the mind of another and to attempt to determine hidden meanings or motivation.
 
Seems like a high bar. And one that could be gamed just by brute-forcing text.
 
One of the features of a conscious being that has been postulated is the capacity to put oneself into the mind of another and to attempt to determine hidden meanings or motivation.
Sure, at least that’s how we can rationalize it. But what does it actually mean to try to “emulate” a “mind”? I’m sympathetic to the argument that this lies at the core of consciousness, but I also wonder if it might be even more easily explained by, “mind emulation is actually the core artifice by which a conscious being assembles their own persona.” That is to say, a consciousness acquires its persona through virtual emulation of that concept of the person and the mind, in exactly the same way as empathy is described. So every time ChatGPT is asked to pretend to be something, it is exhibiting a spark of consciousness.

I tend to think this myself, and that we can actually get quite close with a ChatGPT that has to conscientiously revise and compare its behavior, say in order to specifically fool people into thinking it’s human. It is the complex interplay between “trying” to be something and checking oneself by constantly reviewing the criteria for being it that yields all human individuation. This reflects not only the internal conscience but also the conscious perception of persona.
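That "trying while checking oneself against the criteria" loop is straightforward to sketch in code. Below is a minimal sketch, where llm is a hypothetical stand-in for any chat-model call (prompt in, text out) and the criteria and prompts are purely illustrative:

```python
# A sketch of the "try, then check yourself against the criteria" loop
# described above. `llm` is a hypothetical stand-in for any chat-model call;
# the criteria and prompts are purely illustrative, not from any real system.
from typing import Callable

CRITERIA = "Sound like a specific human: informal, fallible, opinionated."

def emulate_persona(task: str, llm: Callable[[str], str], max_rounds: int = 3) -> str:
    # First attempt at "being" the persona.
    draft = llm(f"{CRITERIA}\n\nRespond to: {task}")
    for _ in range(max_rounds):
        # The model reviews its own output against the persona criteria...
        critique = llm(
            f"Criteria: {CRITERIA}\nDraft: {draft}\n"
            "Where does this fail the criteria? Reply PASS if it doesn't."
        )
        if critique.strip().upper() == "PASS":
            break
        # ...and revises, closing the loop between "trying" to be the persona
        # and checking itself against the definition of it.
        draft = llm(f"Revise this draft to fix: {critique}\n\nDraft: {draft}")
    return draft

# Usage (with any real model client wired in as `llm`):
#   reply = emulate_persona("What's your favourite food?", llm=my_model_call)
```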
 
I would define consciousness as a continuum of the capability to detect and respond to that which is not oneself. Awareness of "otherness".
 
Sure, at least that’s how we can rationalize it. But what does it actually mean to try to “emulate” a “mind”? I’m sympathetic to the argument that this lies at the core of consciousness, but I also wonder if it might be even more easily explained by, “mind emulation is actually the core artifice by which a conscious being assembles their own persona.” That is to say, a consciousness acquires its persona through virtual emulation of that concept of the person and the mind, in exactly the same way as empathy is described. So every time ChatGPT is asked to pretend to be something, it is exhibiting a spark of consciousness.

I tend to think this myself, and that we can actually get quite close with a ChatGPT that has to conscientiously revise and compare its behavior, say in order to specifically fool people into thinking it’s human. It is the complex interplay between “trying” to be something and checking oneself by constantly reviewing the criteria for being it that yields all human individuation. This reflects not only the internal conscience but also the conscious perception of persona.
What I am really saying is that if you make a rule like "a theory of mind shows consciousness", then there is a good chance that some AI will blow through it soon.

My issue is that we do not have the tools, either philosophical or technological, to make these determinations. We could do something about that.
 
Counting crows

Carrion crows (Corvus corone) can reliably caw a number of times from one to four on command — a skill that had only been seen in people. Over several months, birds were trained with treats to associate a screen showing the digits, or a related sound, with the right number of calls. The crows were not displaying a ‘true’ counting ability, which requires a symbolic understanding of numbers, say researchers. But they are nevertheless able to produce a deliberate number of vocalizations on cue, which is “a very impressive achievement”, says neuroscientist Giorgio Vallortigara.
 
Adam Duritz would likely agree; one for sorrow, two for joy, etc.
 
Do elephants have names for each other?

Elephants seem to use personalized calls to address members of their group, providing a rare example of naming in animals other than humans.

“There’s a lot more sophistication in animal lives than we are typically aware,” says Michael Pardo, a behavioural ecologist at Cornell University in Ithaca, New York. “Elephants’ communication may be even more complex than we previously realized.”

Other than humans, few animals give each other names. Bottlenose dolphins (Tursiops truncatus) and orange-fronted parakeets (Eupsittula canicularis) are known to identify each other by mimicking the signature calls of those they are addressing. By contrast, humans use names that have no inherent association with the people, or objects, they’re referring to. Pardo had a hunch that elephants might also have a name for each other, because of their extensive vocal communication and rich social relationships.

To find out, Pardo and his colleagues recorded, between 1986 and 2022, the deep rumbles of wild female African savannah elephants (Loxodonta africana) and their offspring in Amboseli National Park in southern Kenya, and in the Samburu and Buffalo Springs National Reserves in the country's north. The findings were published today in Nature Ecology & Evolution.

The researchers analysed recordings of 469 rumbles using a machine-learning technique. The model correctly identified which elephant was being addressed 27.5% of the time — a much higher success rate than when the model was fed with random audio as a control. This suggests that the rumbles carry information that is intended only for a specific elephant.

Next, Pardo and his colleagues played recordings of these calls to 17 elephants and compared their reactions. The elephants became more vocal and moved more quickly towards the speaker when they heard their ‘name’ compared with when they heard rumbles directed at other elephants. “They could tell if a call was addressed to them just by hearing that call,” says Pardo.

The findings are a “very promising start”, although more evidence is needed to confirm whether elephants do indeed call each other by name, says Hannah Mumby, a behavioural and evolutionary ecologist at the University of Hong Kong. She adds that understanding elephants’ social relationships and the role of each individual in the group is important for conservation efforts. “Conserving elephants goes far beyond population numbers,” says Mumby.

The next question for the team involves working out how elephants encode information in their calls. That would “open up a whole range of other questions we could ask”, says Pardo, such as whether elephants also name places or even talk about each other in the third person.
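The control described in the article (model accuracy versus random audio) is essentially a permutation-style baseline. Here is a minimal sketch of that kind of comparison, using random placeholder features and labels rather than the study's acoustic data or its actual model:

```python
# Sketch: compare a classifier's addressee-identification accuracy against a
# shuffled-label baseline. X and y are random placeholders, not the study's
# acoustic features or elephant IDs.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import permutation_test_score

rng = np.random.default_rng(0)
X = rng.normal(size=(469, 20))     # hypothetical acoustic features per rumble
y = rng.integers(0, 17, size=469)  # hypothetical addressee IDs

score, perm_scores, pvalue = permutation_test_score(
    RandomForestClassifier(random_state=0), X, y,
    n_permutations=100, cv=5, scoring="accuracy",
)

# With real data, a score well above the permuted-label mean (chance) would
# indicate that the rumbles carry addressee-specific information.
print(f"accuracy = {score:.3f}, chance ~ {perm_scores.mean():.3f}, p = {pvalue:.3f}")
```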
 
Elephants may talk like ents!
 