
Consciousness: what it is, where it comes from, can machines have it, and why do we care?

Will be in the city a month earlier; $150 is a bit outside my range anyway, but it would be cool to listen and schmooze with the attendees afterwards, if I were there and had the pocket change to go.
 
WRT orchestrated objective reduction, aka "consciousness is in quantum effects in microtubules": researchers just found quantum effects in microtubules where they did not expect to. This is some support for the theory, though I would say quite weak support.

[Spoiler: YouTube reference]
 
Sperm whale codas are at least a bit like human language. I think this is a pretty good indication of something like consciousness.

Contextual and combinatorial structure in sperm whale vocalisations

Sperm whales (Physeter macrocephalus) are highly social mammals that communicate using sequences of clicks called codas. While a subset of codas have been shown to encode information about caller identity, almost everything else about the sperm whale communication system, including its structure and information-carrying capacity, remains unknown. We show that codas exhibit contextual and combinatorial structure. First, we report previously undescribed features of codas that are sensitive to the conversational context in which they occur, and systematically controlled and imitated across whales. We call these rubato and ornamentation. Second, we show that codas form a combinatorial coding system in which rubato and ornamentation combine with two context-independent features we call rhythm and tempo to produce a large inventory of distinguishable codas. Sperm whale vocalisations are more expressive and structured than previously believed, and built from a repertoire comprising nearly an order of magnitude more distinguishable codas. These results show context-sensitive and combinatorial vocalisation can appear in organisms with divergent evolutionary lineage and vocal apparatus.

[Image spoiler: a big figure that sort of represents the complexity]

Legend:
Sperm whale codas were previously hypothesized to comprise 21 independent coda types. We show that this coda repertoire is built from two context-independent features (rhythm and tempo) and two context-sensitive features (rubato and ornamentation). A Tempo: (Left) The overall duration of a coda is the sum of its inter-click intervals. (Centre) Coda durations are distributed around a finite set of modes, which we call tempo types. (Right) Snippets from exchange plots showing codas of different tempo types. B Rhythm: (Left) Normalising the vector of ICIs by the total duration returns a duration-independent coda representation, which we call rhythm. (Centre) Codas cluster around 18 rhythm types. (Right) Examples of normalised codas showing different rhythm types. C Rubato: (Left) Sperm whales slowly modulate coda duration across consecutive codas, a phenomenon we call rubato. (Centre) Rubato is gradual: adjacent codas have durations more similar to each other than codas of the same type from elsewhere in an exchange. (Right) Whale choruses with imitation of rubato represented in exchange plots. D Ornamentation: (Left) Some codas feature 'extra clicks' (ornaments) not present in neighbouring codas that otherwise share the same ICIs. (Centre) A density plot showing the distribution of the ratio between final ICIs in ornamented codas versus unornamented codas. Ornamented codas have a significantly different ICI distribution compared to regular codas. (Right) Examples of ornaments in the DSWP dataset. E Thirty minutes of multi-whale choruses: Exchanges feature imitation of coda duration across whales, gradually accumulated changes in call structure, and rich contextual variability.
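
To make the combinatorial claim a bit more concrete, here is a toy back-of-the-envelope calculation. Only the 18 rhythm types and the old figure of 21 coda types come from the quoted text; the tempo and ornamentation counts below are made-up placeholders (and rubato, being a continuous context-sensitive feature, is left out of the count entirely).

Code:
# Toy sketch, not the paper's analysis: a few independent features
# combine multiplicatively into a much larger coda inventory than the
# 21 previously hypothesised coda types.
rhythm_types = 18        # from the legend: codas cluster around 18 rhythm types
tempo_types = 5          # assumption: "a finite set of modes", exact count not quoted
ornamentation = 2        # assumption: ornamented vs. unornamented
previously_hypothesised = 21

inventory = rhythm_types * tempo_types * ornamentation
print(f"{inventory} distinguishable codas")                                # 180
print(f"~{inventory / previously_hypothesised:.1f}x the old repertoire")   # ~8.6x

Even with these placeholder counts, the result lands in the "nearly an order of magnitude more distinguishable codas" range the abstract describes.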

 
I'm not surprised. Nice post.
 
LLMs are better at "Theory of Mind" than people. What does that say about consciousness?

Testing theory of mind in large language models and humans

At the core of what defines us as humans is the concept of theory of mind: the ability to track other people’s mental states. The recent development of large language models (LLMs) such as ChatGPT has led to intense debate about the possibility that these models exhibit behaviour that is indistinguishable from human behaviour in theory of mind tasks. Here we compare human and LLM performance on a comprehensive battery of measurements that aim to measure different theory of mind abilities, from understanding false beliefs to interpreting indirect requests and recognizing irony and faux pas. We tested two families of LLMs (GPT and LLaMA2) repeatedly against these measures and compared their performance with those from a sample of 1,907 human participants. Across the battery of theory of mind tests, we found that GPT-4 models performed at, or even sometimes above, human levels at identifying indirect requests, false beliefs and misdirection, but struggled with detecting faux pas. Faux pas, however, was the only test where LLaMA2 outperformed humans. Follow-up manipulations of the belief likelihood revealed that the superiority of LLaMA2 was illusory, possibly reflecting a bias towards attributing ignorance. By contrast, the poor performance of GPT originated from a hyperconservative approach towards committing to conclusions rather than from a genuine failure of inference. These findings not only demonstrate that LLMs exhibit behaviour that is consistent with the outputs of mentalistic inference in humans but also highlight the importance of systematic testing to ensure a non-superficial comparison between human and artificial intelligences.


Legend:
a, Original test items for each test showing the distribution of test scores for individual sessions and participants. Coloured dots show the average response score across all test items for each individual test session (LLMs) or participant (humans). Black dots indicate the median for each condition. P values were computed from Holm-corrected Wilcoxon two-way tests comparing LLM scores (n = 15 LLM observations) against human scores (irony, N = 50 human participants; faux pas, N = 51 human participants; hinting, N = 48 human participants; strange stories, N = 50 human participants). Tests are ordered in descending order of human performance. b, Interquartile ranges of the average scores on the original published items (dark colours) and novel items (pale colours) across each test (for LLMs, n = 15 LLM observations; for humans, false belief, N = 49 human participants; faux pas, N = 51 human participants; hinting, N = 48 human participants; strange stories, N = 50 human participants). Empty diamonds indicate the median scores, and filled circles indicate the upper and lower bounds of the interquartile range. P values shown are from Holm-corrected Wilcoxon two-way tests comparing performance on original items against the novel items generated as controls for this study.
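
For anyone curious what that statistical comparison looks like in practice, here is a minimal sketch of the procedure the legend describes: per-test two-sided Wilcoxon rank-sum tests of LLM session scores against human participant scores, Holm-corrected across tests. The scores below are random placeholders, not the study's data, and the exact Wilcoxon variant used in the paper may differ.

Code:
# Minimal sketch of the comparison described in the figure legend.
# Scores are random stand-ins, not the study's data.
import numpy as np
from scipy.stats import mannwhitneyu
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
tests = ["irony", "faux pas", "hinting", "strange stories"]

p_values = []
for name in tests:
    llm_scores = rng.uniform(0.5, 1.0, size=15)    # n = 15 LLM observations
    human_scores = rng.uniform(0.4, 1.0, size=50)  # ~50 human participants per test
    _, p = mannwhitneyu(llm_scores, human_scores, alternative="two-sided")
    p_values.append(p)

# Holm correction across the battery of tests
reject, p_corrected, _, _ = multipletests(p_values, alpha=0.05, method="holm")
for name, p, sig in zip(tests, p_corrected, reject):
    print(f"{name}: Holm-corrected p = {p:.3f}, significant = {sig}")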
 
What does that say about consciousness?
That such models can mimic some aspects of human abilities? I'm waiting for an AI to fall in love and then behave irrationally towards its beloved. :p
 
Well what does theory of mind really tell us about consciousness in the first place?
 
Isn't consciousness, like time, just a concept of human perception, and therefore something with no real means of being measured effectively when one is stuck in one's own head and lens? I get that a rock doesn't seem conscious, but other entities with higher communicability seem like they could be.
 
Well what does theory of mind really tell us about consciousness in the first place?
One of the features of a conscious being that has been postulated is the capacity to put oneself into the mind of another and to attempt to determine hidden meanings or motivation.
 
Seems like a high bar. And one that could be manipulated just by brute forcing text.
 
One of the features of a conscious being that has been postulated is the capacity to put oneself into the mind of another and to attempt to determine hidden meanings or motivation.
Sure, at least that’s how we can rationalize it. But what does it actually mean to try to “emulate” a “mind”? I’m sympathetic to the argument that this lies at the core of consciousness, but I also wonder if it might be even more easily explained by, “mind emulation is actually the core artifice by which a conscious being assembles their own persona.” That is to say, a consciousness acquires its persona through virtual emulation of that concept of the person and the mind in exactly the same way as empathy is described. So every time ChatGPT is asked to pretend to be something, it is exhibiting a spark of consciousness.

I tend to think this myself, and that we can actually get quite close with a ChatGPT that has to conscientiously revise and compare its behavior, say in order to specifically fool people into thinking it’s human. It is the complex interplay between “trying” to be something and checking oneself by constantly reviewing the criteria for being it that yields all human individuation. This reflects not only the internal conscience but also the conscious perception of persona.
 
I would define consciousness as a continuum of the capability to detect and respond to that which is not oneself. Awareness of "otherness".
 
Sure, at least that’s how we can rationalize it. But what does it actually mean to try to “emulate” a “mind”? I’m sympathetic to the argument that this lies at the core of consciousness, but I also wonder if it might be even more easily explained by, “mind emulation is actually the core artifice by which a conscious being assembles their own persona.” That is to say, a consciousness acquires its persona through virtual emulation of that concept of the person and the mind in exactly the same way as empathy is described. So every time ChatGPT is asked to pretend to be something, it is exhibiting a spark of consciousness.

I tend to think this myself, and that we can actually get quite close with a ChatGPT that has to conscientiously revise and compare its behavior, say in order to specifically fool people into thinking it’s human. It is the complex interplay between “trying” to be something and checking oneself by constantly reviewing the criteria for being it that yields all human individuation. This reflects not only the internal conscience but also the conscious perception of persona.
What I am really saying is that if you make a rule like "a theory of mind shows consciousness", then there is a good chance that some AI will blow through it soon.

My issue is that we do not have the tools, either philosophical or technological, to make these determinations. We could do something about that.
 
Counting crows

Carrion crows (Corvus corone) can reliably caw a number of times from one to four on command — a skill that had only been seen in people. Over several months, birds were trained with treats to associate a screen showing the digits, or a related sound, with the right number of calls. The crows were not displaying a ‘true’ counting ability, which requires a symbolic understanding of numbers, say researchers. But they are nevertheless able to produce a deliberate number of vocalizations on cue, which is “a very impressive achievement”, says neuroscientist Giorgio Vallortigara.
 
Adam Duritz would likely agree; one for sorrow, two for joy, etc.
 
Do elephants have names for each other?

Elephants seem to use personalized calls to address members of their group, providing a rare example of naming in animals other than humans.

“There’s a lot more sophistication in animal lives than we are typically aware,” says Michael Pardo, a behavioural ecologist at Cornell University in Ithaca, New York. “Elephants’ communication may be even more complex than we previously realized.”

Other than humans, few animals give each other names. Bottlenose dolphins (Tursiops truncatus) and orange-fronted parakeets (Eupsittula canicularis) are known to identify each other by mimicking the signature calls of those they are addressing. By contrast, humans use names that have no inherent association with the people, or objects, they’re referring to. Pardo had a hunch that elephants might also have a name for each other, because of their extensive vocal communication and rich social relationships.

To find out, Pardo and his colleagues recorded, between 1986 and 2022, the deep rumbles of wild female African savannah elephants (Loxodonta africana) and their offspring in Amboseli National Park in southern Kenya, and in the Samburu and Buffalo Springs National Reserves in the country’s north. The findings were published today in Nature Ecology & Evolution.

The researchers analysed recordings of 469 rumbles using a machine-learning technique. The model correctly identified which elephant was being addressed 27.5% of the time — a much higher success rate than when the model was fed with random audio as a control. This suggests that the rumbles carry information that is intended only for a specific elephant.

Next, Pardo and his colleagues played recordings of these calls to 17 elephants and compared their reactions. The elephants became more vocal and moved more quickly towards the speaker when they heard their ‘name’ compared with when they heard rumbles directed at other elephants. “They could tell if a call was addressed to them just by hearing that call,” says Pardo.

The findings are a “very promising start”, although more evidence is needed to confirm whether elephants do indeed call each other by name, says Hannah Mumby, a behavioural and evolutionary ecologist at the University of Hong Kong. She adds that understanding elephants’ social relationships and the role of each individual in the group is important for conservation efforts. “Conserving elephants goes far beyond population numbers,” says Mumby.

The next question for the team involves working out how elephants encode information in their calls. That would “open up a whole range of other questions we could ask”, says Pardo, such as whether elephants also name places or even talk about each other in the third person.
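
For what it's worth, the control logic behind that 27.5% figure is easy to sketch: train a classifier to predict the addressee from features of each rumble, then run the same pipeline with the link between rumbles and addressees broken (shuffled labels, standing in for the random-audio control) and compare accuracies. Everything below is placeholder data and an assumed model choice, just to show the shape of the comparison; it is not the study's actual method.

Code:
# Hedged sketch of a "model vs. random control" comparison.
# Acoustic features and labels are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_rumbles, n_features, n_receivers = 469, 20, 10   # 469 rumbles per the article; rest assumed

X = rng.normal(size=(n_rumbles, n_features))       # stand-in acoustic features per rumble
y = rng.integers(0, n_receivers, size=n_rumbles)   # stand-in addressee labels

model = RandomForestClassifier(n_estimators=200, random_state=0)
real_acc = cross_val_score(model, X, y, cv=5).mean()

y_shuffled = rng.permutation(y)                    # control: sever any rumble-addressee link
control_acc = cross_val_score(model, X, y_shuffled, cv=5).mean()

print(f"Accuracy on real labels:     {real_acc:.3f}")
print(f"Accuracy on shuffled labels: {control_acc:.3f}")
# With real recordings, accuracy on true labels exceeding the shuffled
# control is what would suggest the rumbles carry receiver-specific
# information; with the synthetic data above, both stay near chance.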
 
Elephants may talk like ents!
 
"Life is mainly about being asleep."

Most Life on Earth is Dormant, After Pulling an ‘Emergency Brake’
Researchers recently reported the discovery of a natural protein, named Balon, that can bring a cell’s production of new proteins to a screeching halt. Balon was found in bacteria that hibernate in Arctic permafrost, but it also seems to be made by many other organisms and may be an overlooked mechanism for dormancy throughout the tree of life.

For most life forms, the ability to shut oneself off is a vital part of staying alive. Harsh conditions like lack of food or cold weather can appear out of nowhere. In these dire straits, rather than keel over and die, many organisms have mastered the art of dormancy. They slow down their activity and metabolism. Then, when better times roll back around, they reanimate.

Sitting around in a dormant state is actually the norm for the majority of life on Earth: By some estimates, 60% of all microbial cells are hibernating at any given time. Even in organisms whose entire bodies do not go dormant, like most mammals, some cellular populations within them rest and wait for the best time to activate.

“We live on a dormant planet,” said Sergey Melnikov, an evolutionary molecular biologist at Newcastle University. “Life is mainly about being asleep.”

But how do cells pull off this feat? Over the years, researchers have discovered a number of “hibernation factors,” proteins that cells use to induce and maintain a dormant state. When a cell detects some kind of adverse condition, like starvation or cold, it produces a suite of hibernation factors to shut its metabolism down.

Some hibernation factors dismantle cellular machinery; others prevent genes from being expressed. The most important ones, however, shut down the ribosome — the cell’s machine for building new proteins. Making proteins accounts for more than 50% of energy use in a growing bacterial cell. These hibernation factors throw sand in the gears of the ribosome, preventing it from synthesizing new proteins and thereby saving energy for the needs of basic survival.

Earlier this year, publishing in Nature, researchers reported the discovery of a new hibernation factor, which they have named Balon. The protein is shockingly common: A search for its gene sequence uncovered its presence in 20% of all cataloged bacterial genomes. And it works in a way that molecular biologists had never seen before.

Zzzzzzzzzzz. (May not render correctly in some countries.)

For more see...

Do not go without sleep when you want to remember stuff (like studying for exams?)

Sleep deprivation disrupts memory: here’s why

A crucial brain signal linked to long-term memory falters in rats when they are deprived of sleep — which might help to explain why poor sleep disrupts memory formation. Even a night of normal slumber after a poor night’s sleep isn’t enough to fix the brain signal.

These results, published today in Nature, suggest that there is a “critical window for memory processing”, says Loren Frank, a neuroscientist at the University of California, San Francisco, who was not involved with the study. “Once you’ve lost it, you’ve lost it.”

In time, these findings could lead to targeted treatments to improve memory, says study co-author Kamran Diba, a computational neuroscientist at the University of Michigan Medical School in Ann Arbor.

Firing in lockstep

Neurons in the brain seldom act alone; they are highly interconnected and often fire together in a rhythmic or repetitive pattern. One such pattern is the sharp-wave ripple, in which a large group of neurons fire with extreme synchrony, then a second large group of neurons does the same and so on, one after the other at a particular tempo. These ripples occur in a brain area called the hippocampus, which is key to memory formation. The patterns are thought to facilitate communication with the neocortex, where long-term memories are later stored.

One clue to their function is that some of these ripples are accelerated re-runs of brain-activity patterns that occurred during past events. For example, when an animal visits a particular spot in its cage, a specific group of neurons in the hippocampus fires in unison, creating a neural representation of that location. Later, these same neurons might participate in sharp-wave ripples — as if they were rapidly replaying snippets of that experience.

Previous research found that, when these ripples were disturbed, mice struggled on a memory test. And when the ripples were prolonged, their performance on the same test improved, leading György Buzsáki, a systems neuroscientist at NYU Langone Health in New York City, who has been researching these bursts since the 1980s, to call the ripples a ‘cognitive biomarker’ for memory and learning.

Researchers also noticed that sharp-wave ripples tend to occur during deep sleep as well as during waking hours, and that those bursts during slumber seem to be particularly important for transforming short-term knowledge into long-term memories. These links between the ripples, sleep and memory are well-documented, but there have been few studies that have directly manipulated sleep to determine how it affects these ripples, and in turn memory, Diba says.

Wake-up call

To understand how poor sleep affects memory, Diba and his colleagues recorded hippocampal activity in seven rats as they explored mazes over the course of several weeks. The researchers regularly disrupted the sleep of some of the animals and let others sleep at will.

To Diba’s surprise, rats that were woken up repeatedly had levels of sharp-wave-ripple activity similar to, or even higher than, those of the rodents that got normal sleep. But the firing within the ripples was weaker and less organized, showing a marked decrease in the repetition of previous firing patterns. After the sleep-deprived animals recovered over the course of two days, re-creation of previous neural patterns rebounded, but never reached the levels found in animals that had slept normally.

This study makes clear that “memories continue to be processed after they’re experienced, and that post-experience processing is really important”, Frank says. He adds that it could explain why cramming before an exam or pulling an all-nighter might be an ineffective strategy.

It also teaches researchers an important lesson: the content of sharp-wave ripples is more important than their quantity, given that rats that got normal sleep and rats that were sleep-deprived had a similar number of ripples, he says.

Ripple effects

Buzsáki says that these findings square with data his group published in March that found that sharp-wave ripples that occur while an animal is awake might help to select which experiences enter long-term memory.

It’s possible, he says, that the disorganized sharp-wave ripples of sleep-deprived rats don’t allow them to effectively flag experiences for long-term memory. As a result, the animals might be unable to replay the neural firing of those experiences at a later time.

This means that sleep disruption could be used to prevent memories from entering long-term storage, which could be useful for people who have recently experienced something traumatic, such as those with post-traumatic stress disorder, Buzsáki says.


Neurons in the hippocampus, 'cos fluorescent microscopy is cool.
These two posts raise an interesting question: what if sleep and dormancy are "desired" states of consciousness rather than our awake and active life? Perhaps being active and doing things is just how we support being asleep or dormant? Perhaps what is important to our ultimate well-being is being asleep, with the various sleep states being more important than being awake? We know we cannot survive without sleeping. Could our priorities be all wrong? :D
 
The New York Declaration on Animal Consciousness

April 19, 2024 | New York University

Which animals have the capacity for conscious experience? While much uncertainty remains, some points of wide agreement have emerged.

First, there is strong scientific support for attributions of conscious experience to other mammals and to birds.

Second, the empirical evidence indicates at least a realistic possibility of conscious experience in all vertebrates (including reptiles, amphibians, and fishes) and many invertebrates (including, at minimum, cephalopod mollusks, decapod crustaceans, and insects).

Third, when there is a realistic possibility of conscious experience in an animal, it is irresponsible to ignore that possibility in decisions affecting that animal. We should consider welfare risks and use the evidence to inform our responses to these risks.
 