Consciousness: what it is, where it comes from, can machines have it, and why do we care?

Certainly it suggests some primitive "aesthetic sense" imho, grouping objects of similar colour or shape together, not unlike the games we teach very young children...
 
All of these speak to consciousness being spread irregularly along a wide continuum of increasing complexity among life forms. Evolution seems to reward complexity (not in volume of life, but in proximity to the top of the food chain). Giving pigs a paintbrush is different from bower birds building beautiful structures to attract mates.
 
As I watched this video of crows playing in the snow, I thought about this thread and to what extent this kind of play is a sign of improved consciousness. The video shows individual adult crows sliding down windshields and rolling around in the snow because they are enjoying it. That is different from sibling pups, kittens, or other young animals using play to develop skills and capabilities.

 
As I watched this video of crows playing in the snow, I thought about this thread and to what extent this kind of play is a sign of improved consciousness. The video shows individual adult crows sliding down windshields and rolling around in the snow because they are enjoying it. That is different from sibling pups, kittens, or other young animals using play to develop skills and capabilities.

I have on a number of occasions sat watching birds in an updraft, such as you get when wind hits a cliff. It is hard to interpret the behaviour of the birds as anything but playing; they do not seem to gain anything, as they do not get anywhere, but it sure looks fun. I agree it says something about their brains that is something like consciousness.
 
Their spatial thinking is far better than yours. Crows are wicked smart for birds. Counting is not usually an avian strong suit, but crows can get to three. I forget where I read that.
 
As I watched this video of crows playing in the snow, I thought about this thread and to what extent this kind of play is a sign of improved consciousness. The video shows individual adult crows sliding down windshields and rolling around in the snow because they are enjoying it. That is different from sibling pups, kittens, or other young animals using play to develop skills and capabilities.


Exactly. Reminds me of these:


 
Easy to anthropomorphize, right? :)
 
Exactly. Reminds me of these:

Dogs on sleds could have learned it from people, but the dogs, like the crows, seem to be sliding just for the fun of it. I wonder if penguins slide for fun? Spontaneous adult play among animals seems some level above instinct.
 
AI consciousness: scientists say we urgently need answers

Could artificial intelligence (AI) systems become conscious? A trio of consciousness scientists say that, at the moment, no one knows — and they are expressing concern about the lack of inquiry into the question.

In comments to the United Nations, three leaders of the Association for Mathematical Consciousness Science (AMCS) call for more funding to support research on consciousness and AI. They say that scientific investigations of the boundaries between conscious and unconscious systems are urgently needed, and they cite ethical, legal and safety issues that make it crucial to understand AI consciousness. For example, if AI develops consciousness, should people be allowed to simply switch it off after use?

Such concerns have been mostly absent from recent discussions about AI safety, such as the high-profile AI Safety Summit in the United Kingdom, says AMCS board member Jonathan Mason, a mathematician based in Oxford, UK, and one of the authors of the comments. Nor did US President Joe Biden’s executive order seeking responsible development of AI technology address issues raised by conscious AI systems, Mason notes.

“With everything that’s going on in AI, inevitably there’s going to be other adjacent areas of science which are going to need to catch up,” Mason says. Consciousness is one of them.

The other authors of the comments were AMCS president Lenore Blum, a theoretical computer scientist at Carnegie Mellon University in Pittsburgh, Pennsylvania, and board chair Johannes Kleiner, a mathematician studying consciousness at the Ludwig Maximilian University of Munich in Germany.

Not science fiction

It is unknown to science whether there are, or will ever be, conscious AI systems. Even knowing whether one has been developed would be a challenge, because researchers have yet to create scientifically validated methods to assess consciousness in machines, Mason says. “Our uncertainty about AI consciousness is one of many things about AI that should worry us, given the pace of progress,” says Robert Long, a philosopher at the Center for AI Safety, a non-profit research organization in San Francisco, California.

Such concerns are no longer just science fiction. Companies including OpenAI — the firm that created the chatbot ChatGPT — are aiming to develop artificial general intelligence, a deep-learning system that’s trained to perform a wide range of intellectual tasks similar to those humans can do. Some researchers predict that this will be possible in 5–20 years. Even so, the field of consciousness research is “very undersupported”, says Mason. He notes that to his knowledge, there has not been a single grant offer in 2023 to study the topic.

The resulting information gap is outlined in the AMCS leaders’ submission to the UN High-Level Advisory Body on Artificial Intelligence, which launched in October and is scheduled to release a report in mid-2024 on how the world should govern AI technology. The AMCS leaders’ submission has not been publicly released, but the body confirmed to the authors that the group’s comments will be part of its “foundational material” — documents that inform its recommendations about global oversight of AI systems.

Understanding what could make AI conscious, the AMCS researchers say, is necessary to evaluate the implications of conscious AI systems to society, including their possible dangers. Humans would need to assess whether such systems share human values and interests; if not, they could pose a risk to people.

What machines need

But humans should also consider the possible needs of conscious AI systems, the researchers say. Could such systems suffer? If we don’t recognize that an AI system has become conscious, we might inflict pain on a conscious entity, Long says: “We don’t really have a great track record of extending moral consideration to entities that don’t look and act like us.” Wrongly attributing consciousness would also be problematic, he says, because humans should not spend resources to protect systems that don’t need protection.

Some of the questions raised by the AMCS comments to highlight the importance of the consciousness issue are legal ones: should a conscious AI system be held accountable for a deliberate act of wrongdoing? And should it be granted the same rights as people? The answers might require changes to regulations and laws, the coalition writes.

And then there is the need for scientists to educate others. As companies devise ever-more capable AI systems, the public will wonder whether such systems are conscious, and scientists need to know enough to offer guidance, Mason says.

Other consciousness researchers echo this concern. Philosopher Susan Schneider, the director of the Center for the Future Mind at Florida Atlantic University in Boca Raton, says that chatbots such as ChatGPT seem so human-like in their behaviour that people are justifiably confused by them. Without in-depth analysis from scientists, some people might jump to the conclusion that these systems are conscious, whereas other members of the public might dismiss or even ridicule concerns over AI consciousness.

To mitigate the risks, the AMCS comments call on governments and the private sector to fund more research on AI consciousness. It wouldn’t take much funding to advance the field: despite the limited support to date, relevant work is already underway. For example, Long and 18 other researchers have developed a checklist of criteria to assess whether a system has a high chance of being conscious. The paper [referenced in the OP], published in the arXiv preprint repository in August and not yet peer reviewed, derives its criteria from six prominent theories explaining the biological basis of consciousness.
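To get a feel for what a checklist-style assessment might look like in practice, here is a rough sketch in Python. To be clear, the indicator names and the simple fraction-satisfied scoring rule below are invented for illustration only; they are not the actual criteria or method from the paper the article describes.

```python
# Hypothetical sketch of a checklist-style assessment, loosely inspired by
# the idea of deriving indicator criteria from theories of consciousness.
# Every indicator name and the scoring rule are invented for illustration.

INDICATORS = {
    "recurrent_processing": False,
    "global_workspace": False,
    "unified_agency": False,
    "higher_order_representation": False,
    "embodied_feedback_loop": False,
    "learned_attention_schema": False,
}

def satisfied_fraction(indicators):
    """Return the fraction of checklist indicators a system satisfies."""
    return sum(indicators.values()) / len(indicators)

# A made-up candidate system that satisfies two of the six indicators.
candidate = dict(INDICATORS, recurrent_processing=True, global_workspace=True)
print(f"{satisfied_fraction(candidate):.2f}")  # prints 0.33
```

The point of such a checklist is not a yes/no verdict but a graded estimate: the more theory-derived indicators a system satisfies, the higher the chance it warrants serious consideration.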

“There’s lots of potential for progress,” Mason says.
 
Could artificial intelligence (AI) systems become conscious?
It all depends upon how one defines consciousness. For sure AI can get close to (or surpass) mimicking human brain power and capabilities in decision making. Can AI fall in love? Get emotional when a dog dies? Experience grief? To answer the question properly we need to define just what consciousness encompasses.
 
I’m going humans-only. I’d define consciousness as broader than monkeys being able to use simple tools, or dolphins learning to jump through a hoop to get a fish.

Do they have an awareness that extends beyond their instincts for survival? I don’t think they do.

Computers too, they’re just machines that are put together and follow sets of instructions, even so-called learning computers. All that’s changed in computing is that we can feed even more instructions to them.
the hubris, perhaps, is believing that human brains are doing anything special to separate themselves from these things on the spectrum beyond merely being more complex (aka a "bigger program").

even in the definition of "awareness" used here, things are casually disregarded. when animals demonstrate empathy for other animals, even risking their life to save them, is this all in service to some baseline survival instinct?

if we accept that it is, what makes you believe your own survival instinct is different, that things you "want" are meaningfully different? that sufficiently capable machine circuitry can't surpass it, when we've already observed it far surpassing the best humans in the world in other domains?

we are far too ready to define consciousness with confidence, considering our current (in)ability to model it.
 
It all depends upon how one defines consciousness. For sure AI can get close to (or surpass) mimicking human brain power and capabilities in decision making. Can AI fall in love? Get emotional when a dog dies? Experience grief? To answer the question properly we need to define just what consciousness encompasses.

And devise some test for it. An AI can certainly simulate falling in love, experiencing grief, and so on. But how do we separate simulated love from real love? Does the distinction even matter when we cannot tell the difference?
 
And devise some test for it. An AI can certainly simulate falling in love, experiencing grief, and so on. But how do we separate simulated love from real love? Does the distinction even matter when we cannot tell the difference?
Testing for "consciousness" moves it into the arena of science and out of the broader arena of our experience. Such moves are naturally a limitation and imply that what is true can only be found through "reason". I think that such testing can be useful to "point to" important things, but maybe not reveal all things.
 
Consciousness: the state of awareness. The process and the result of falling asleep/waking up. A feature of all animals, including humans. Does it apply to computers? You tell me... We have to switch on the computer and run the program to make its neural networks respond to feedback from the human operator, or to changing conditions within the analysed space-time. That could be analogous to a human gaining consciousness in the morning.

Intelligence: the ability to acquire knowledge and skills. (Oxford definition) This one is easier to apply to animals and computers, since the process of acquiring and storing recurring patterns is not materially different when it comes to both groups. Chemistry is different, yes, but the process of recognising a pattern to then store it on an SSD/in the brain is essentially analogous.

In short, consciousness is an on/off switch; intelligence is how we gain knowledge.

Does anyone understand the distinction differently? Does anyone disagree with how the distinction is formulated, or with whether it exists at all?

People use intelligence and consciousness interchangeably in this thread, or at least that's the impression I get. It's hard to tell what is meant where, so this is an attempt at clarity.
 
Consciousness: the state of awareness. The process and the result of falling asleep/waking up. A feature of all animals, including humans. Does it apply to computers? You tell me... We have to switch on the computer and run the program to make its neural networks respond to feedback from the human operator, or to changing conditions within the analysed space-time. That could be analogous to a human gaining consciousness in the morning.

Intelligence: the ability to acquire knowledge and skills. (Oxford definition) This one is easier to apply to animals and computers, since the process of acquiring and storing recurring patterns is not materially different when it comes to both groups. Chemistry is different, yes, but the process of recognising a pattern to then store it on an SSD/in the brain is essentially analogous.

In short, consciousness is an on/off switch; intelligence is how we gain knowledge.

Does anyone understand the distinction differently? Does anyone disagree with how the distinction is formulated, or with whether it exists at all?

People use intelligence and consciousness interchangeably in this thread, or at least that's the impression I get. It's hard to tell what is meant where, so this is an attempt at clarity.
What state of awareness equals consciousness?
You say "all animals"; does this include insects, fish, worms, etc.?

Even when people are asleep, they do not lose all awareness. They can be woken up by many different outside stimuli that alert the brain to what is going on. In REM sleep the brain keeps the body acting out the activity simulated in the brain. Isn't that a level of awareness?
 
What state of awareness equals consciousness?

The state of being awake. Even "slightly awake" is good enough to be characterised as conscious. (If that's what you're asking)

You say "all animals"; does this include insects, fish, worms, etc.?

I was pretty clear. What makes you ask? As far as I can tell, fish, worms and insects are born/wake up and go to sleep/die, thereby gaining or losing consciousness.

Even when people are asleep, they do not lose all awareness. They can be woken up by many different outside stimuli that alert the brain to what is going on. In REM sleep the brain keeps the body acting out the activity simulated in the brain. Isn't that a level of awareness?

Yes, a very low level of awareness.

I'd appreciate if you answer my previous questions too.
 
Does anyone understand the distinction differently? Does anyone disagree how distinction is formulated and whether it exists?
This one?

Intelligence and consciousness are entirely different. Intelligence tends to be defined as connected to both knowledge and capability, consciousness as awareness. I would separate consciousness from the awake/sleep states. I see it as an innate property of matter that goes down to the chemical and subatomic levels of things. From that level it is a continuum stretching through all matter and all living things. At the most primitive/basic levels it is the chemical and quantum properties that enable matter to interact and change. Chemical bonds happen because there is an intrinsic awareness of nearby atoms that causes change. If one chooses to begin at life rather than matter, then even one-celled critters are aware of their surroundings and can respond to them.
 
Male and female dung beetles coordinate to roll balls, researchers find

Spider dung beetles thought to be the only example of animals other than humans working together without knowing the object's destination

There comes a time in a dung beetle’s life when the only hope of overcoming an obstacle without losing their prized ball is a partner who can pull off a decent headstand.

When their path is blocked, pairs of dung beetles carefully coordinate their actions, with males grabbing the dung ball from above, and females going into a headstand to push the ball off the ground with their legs, researchers say.

The unusual, cooperative behaviour between spider dung beetles is believed to be a unique example of animals other than humans working together to move objects around without knowing their final destination.

While ants coordinate to haul food to their nests, and social spiders cooperate to carry prey to their shelters, both know where they are heading and when they have arrived. With dung beetles, couples start rolling their dung balls with no idea where they will stop.

“It’s the first species that’s been recorded that can coordinate transport in this way,” said Dr Claudia Tocco, who studies animal behaviour at Lund University in Sweden. “They don’t know where they are going, but they can still coordinate to move the object together.”

On flat ground, pairs of beetles rolled no faster than single males, but faced with hurdles, the couples surged ahead. When challenged by walls up to 9cm tall, males started to climb and drag the ball up, as females worked themselves into a headstand and pushed with their legs to help the ball off the ground. The females then held on to the ball as the male pulled it up – lifting about 10 times his body weight.

At times, males clung to walls with only one claw and females repeatedly saved them from falls when they lost grip, the scientists wrote in Proceedings of the Royal Society B. Pairs of beetles were faster than single ones and more efficient over obstacles, which would be “extremely beneficial” in the forest, the authors noted.

But while pairing up makes sense, how the beetles coordinate their actions remains a mystery. “How does a beetle with a brain smaller than a grain of rice communicate? And how do they coordinate with each other in performing this task?” said Tocco. “They don’t know where they are going.”
 