Consciousness: Is It Possible?

So consciousness, by your definition, is driven by a bunch of matter with a shared form acting in unison to self-perpetuate. Check.

The thing missing from this discussion is not what consciousness is, but that consciousness is experienced. There is literally a totality of your own universe being experienced by you right now, based on the parameters of your physical existence. We consider that experience synonymous with the consciousness itself. There's nothing that says, logically speaking, that we need to be present and experience consciousness. In fact, all we would need to be is the implied understanding of the man in the Chinese Room, existing without experiencing what we are experiencing. That's what a lot of folks in this very thread think a simulated intelligence would be.

But we know we aren't, for the very reason that we can actually experience that we are experiencing our existence.

So we have this experience; where does the experience come from? Well, it comes from a bunch of different, separate things acting in coordination. And in aggregate, those things are experiencing something.
Each of us knows that we experience existence. But how do you know anybody else does? The only reason is that they appear to be essentially just like us. In particular, we can observe that people behave like us, and they seem to be physically similar to us. That's it; that's the entirety of the body of evidence that leads us to conclude that anybody but ourselves is conscious and, equally important, worthy of moral consideration.

Behaviorally, the logic is straightforward: if it quacks like a duck, it's probably a duck.

Physically, people are not identical. They are similar, but many things can be said to be similar. Indeed, just about everything is similar to everything else in at least one attribute. There needs to be a reason for saying that the similarities are relevant. So what is the relevant criterion for deciding whether a physical trait bears on consciousness and moral consideration? The only answer I can think of is behavior, which is essentially the first argument.

Therefore, the only reason we know that other people are conscious is their behavior. In light of that, how can we judge a machine by any other standard to determine if it is conscious? If we accept that we are sufficiently confident in other people's consciousness to say we know that they are, then we must apply the same standard to any machine purported to be conscious.

Now, I don't mean to imply that a rudimentary Turing test is sufficient to prove consciousness. That just proves that a program behaves like a human some of the time - it takes more to prove that it behaves like a human all of the time. But through rigorous mathematics and a functional model of the brain and of the machine, it should be possible to prove or disprove that the machine experiences existence the same way we do.
 
Each of us knows that we experience existence. But how do you know anybody else does? The only reason is that they appear to be essentially just like us. In particular, we can observe that people behave like us, and they seem to be physically similar to us. That's it; that's the entirety of the body of evidence that leads us to conclude that anybody but ourselves is conscious and, equally important, worthy of moral consideration.

Behaviorally, the logic is straightforward: if it quacks like a duck, it's probably a duck.

Physically, people are not identical. They are similar, but many things can be said to be similar. Indeed, just about everything is similar to everything else in at least one attribute. There needs to be a reason for saying that the similarities are relevant. So what is the relevant criterion for deciding whether a physical trait bears on consciousness and moral consideration? The only answer I can think of is behavior, which is essentially the first argument.

Therefore, the only reason we know that other people are conscious is their behavior. In light of that, how can we judge a machine by any other standard to determine if it is conscious? If we accept that we are sufficiently confident in other people's consciousness to say we know that they are, then we must apply the same standard to any machine purported to be conscious.

Now, I don't mean to imply that a rudimentary Turing test is sufficient to prove consciousness. That just proves that a program behaves like a human some of the time - it takes more to prove that it behaves like a human all of the time. But through rigorous mathematics and a functional model of the brain and of the machine, it should be possible to prove or disprove that the machine experiences existence the same way we do.

Why is the bolded part necessary? The point is not that it has the same consciousness, but any consciousness at all. I enjoyed your post very much and it was spot on. I just lean towards consciousness as being unique for all, and behavior is key in understanding how consciousness works in each individual.
 
I know you're using "they happen to have evolved that way because they were useful" as a shorthand, but it's perhaps a little confusing. Wings would be quite useful, or a second pair of hands, or perhaps an extra eye on the back of the head, but they haven't evolved (well, at least bats and spiders consider them useful). On the other hand, the heightened capacity for pain in higher organisms seems like a bit of a drag. I mean, give me intellect, but I don't need any mental problems. Thank you, Evolution.

By the way, when you say useful, you are implying purpose. How would you define it? Survival? That can hardly be achieved without the other species of plants and animals developing in harmony. No one survives all by himself; there is a massive interdependence. And for that you need some sort of design, I am afraid...
I am not sure design is needed. The various relationships, interdependence, and proximity of living things with each other and their local environments might well be sufficient to make the biome appear to be harmoniously designed. I think that "purpose" is more of an influence than a design. That purpose would be ever greater complexity in our ability to experience.
 
Software is only as efficient as its designer. It seems that you are equating the way the brain works strictly with software.
I'm not sure what your point is here, but it seems relevant to reiterate what I said to Hygro: that behavior is the relevant part of consciousness and moral worth. It must be, because we have no better standard. And a description of behavior is what software is.

AI will never have consciousness if it has to wait on software, unless that software is written to give the AI free will, free from the restraint of pre-written software. That would be the "magic" moment that people think may happen. Anything that is designed will only act in the manner that the software dictates. That is one of the hang-ups I have with evolution. If you stick with the strict definition of software, software cannot evolve on its own, even if the hardware can. Software does not just "happen". Hardware may evolve and show signs of some software activity, but that software is not the controlling factor; it is just the way the hardware acts because of how it evolved.
Our brains are not free from the "restraint" of their inner workings. We can't magic away jealousy, sorrow, or suffering, even if we may sometimes want to. Our brains, though not designed, do function only one way. Now, they can learn. However, computer programs can be written to be able to learn too; learning algorithms are useful in pattern recognition software, for example. So a properly written computer program could match the adaptive capabilities of the human mind.
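To make the idea of "software that learns" concrete, here is a minimal, hypothetical sketch (not any specific system discussed in this thread): a classic perceptron that learns the logical AND function from examples. Its future behavior is shaped by its past mistakes rather than being spelled out rule by rule.

```python
# Hypothetical illustration: a perceptron that learns logical AND.
# The code never changes, but the parameters (and thus the behavior) do.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Return weights and bias learned from (inputs, target) pairs."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            predicted = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            error = target - predicted
            # Each mistake nudges the parameters, so the program's
            # behavior tomorrow differs from its behavior today.
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(samples)

def predict(x1, x2):
    return 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0

print([predict(x1, x2) for (x1, x2), _ in samples])  # [0, 0, 0, 1]
```

Before training, the program answers every query the same way; after seeing examples, it reproduces AND correctly - a toy version of the adaptivity being discussed.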

There is not much evolutionary pressure on cells and plants, but they also show signs of awareness. I doubt evolutionary pressure has much to do with it. Mutation does not work under pressure; nature and events select the survivors, but mutation happens well before that. As far as we know, the sun's genetic makeup may not support consciousness, but that is about all we know.
Plants may have something that is somehow analogous to pain, but there is no evidence of their self awareness, which would be a prerequisite to consciousness. To say that what little function plants have is consciousness would undermine the relevance of the term.

Specifically, I would say that consciousness is required for moral consideration. It's a relevant similarity (to allude to my answer to Hygro again) for establishing that another being has the same basic freedoms that you demand for yourself. Saying plants are conscious undermines that use, unless you treat plants with the same respect you do humans.

Consciousness is more than mere software. Besides, as long as that software is pre-determined, it will never create consciousness. Perhaps we conflate consciousness with software because it gives the illusion of control?
No, that's backwards. Rather, I think you're conflating free will with control over our own minds, and therefore saying that software cannot be conscious because it cannot generally change itself. However, we don't actually have control over how we think or much of what we think either, and yet it is still meaningful to talk about free will.
 
Why is the bolded part necessary? The point is not that it has the same consciousness, but any consciousness at all. I enjoyed your post very much and it was spot on. I just lean towards consciousness as being unique for all, and behavior is key in understanding how consciousness works in each individual.
I would say it is necessary insofar as calling something that is very different from our own consciousness would undermine the usefulness of the term. Another being has consciousness because they are like us in the relevant ways and we have consciousness. However, I do agree that people look at the world differently, and those differences do not diminish their consciousness. Therefore, I would say that those differences are not relevant to a precise definition of consciousness.

In other words, I agree with your point that each of our "consciousness [is] unique for all," but I don't think saying that is in contradiction to the part that you bolded. We're using similar language to communicate different points.
 
I'm not sure what your point is here, but it seems relevant to reiterate what I said to Hygro: that behavior is the relevant part of consciousness and moral worth. It must be, because we have no better standard. And a description of behavior is what software is.

Do we hold that behavior controls us? Software is written to control the hardware. Are you saying that consciousness evolved to control the brain? Then we project this on to free will and say that we have no control over anything? Personality is who we are. Behavior is how we project who we are. We still have control over how we act. If we had no control we would not be any more civilized than any other animal. There are animals who have civilized behaviors and cannot act in any other manner, or they will not survive.

Our brains are not free from the "restraint" of their inner workings. We can't magic away jealousy, sorrow, or suffering, even if we may sometimes want to. Our brains, though not designed, do function only one way. Now, they can learn. However, computer programs can be written to be able to learn too; learning algorithms are useful in pattern recognition software, for example. So a properly written computer program could match the adaptive capabilities of the human mind.

I would contend that our actions, the results of what our brain is telling us how to act, can be controlled by choice. Our brain is continuously reasoning out the results of future actions. Most humans, though, fall into habitual routines that decrease the need for such reasoning.

I am not saying that software cannot be written to learn. But is learning software the same as controlling software? It seems to me that they are two different types of programming, unable to learn and control at the same time. Even if they could, it would be because they were designed that way, and we are ruling out design.

Plants may have something that is somehow analogous to pain, but there is no evidence of their self awareness, which would be a prerequisite to consciousness. To say that what little function plants have is consciousness would undermine the relevance of the term.

It seems to me that proving whether plants are self-aware would be as futile as proving whether God exists. We are self-aware of each other. Until we can communicate with plants in any clear way, we have only plant behavior to determine how plants operate.

Specifically, I would say that consciousness is required for moral consideration. It's a relevant similarity (to allude to my answer to Hygro again) for establishing that another being has the same basic freedoms that you demand for yourself. Saying plants are conscious undermines that use, unless you treat plants with the same respect you do humans.

Behavior is only as utilitarian as the entity showing such behavior. If we can determine there is a hint of behavior in any entity, then that entity may have a hint of consciousness. That does not mean that all consciousness is the same. Consciousness is only undermined when you try to apply it broadly and in the same manner over all entities showing a hint of consciousness. It seems to me that respect and good will should be handed out regardless of what one deserves?

No, that's backwards. Rather, I think you're conflating free will with control over our own minds, and therefore saying that software cannot be conscious because it cannot generally change itself. However, we don't actually have control over how we think or much of what we think either, and yet it is still meaningful to talk about free will.

I do not concede that the brain or consciousness evolved. That seems to be the majority concession though. We have no control over our personality, but we do have control over our behavior. That is the process of reasoning. If humans cannot reason, then why do we use logic and debate? That we have the potential to receive thoughts beyond our control has very little to do with what we do with those thoughts, other than we get free "ammo" to work with in our thought processes. I am not even denying that consciousness is unadaptable, but since we have no control over that, the next best thing is learning how to control one's behavior.

I would say it is necessary insofar as calling something that is very different from our own consciousness would undermine the usefulness of the term. Another being has consciousness because they are like us in the relevant ways and we have consciousness. However, I do agree that people look at the world differently, and those differences do not diminish their consciousness. Therefore, I would say that those differences are not relevant to a precise definition of consciousness.

In other words, I agree with your point that each of our "consciousness [is] unique for all," but I don't think saying that is in contradiction to the part that you bolded. We're using similar language to communicate different points.

Does the difference have to do with personality and behavior? Each personality is unique, but behavior seems to manifest similarly depending on the entity, i.e. human, plant, or dolphin. Comparing behavior does not give the full picture of consciousness; that is why we have personality tests to determine patterns of behavior and even profile people. Patterns of behavior do not necessarily define consciousness, but they allow us to predict what may or may not happen. There is a thin line between controlling free will and dismissing it as non-existent.

If personality is unique, other than being utilitarian, how can we say consciousness is the same in each human, much less the same in every other entity that shows any signs of behavior indicating a hint of consciousness?
 
Do we hold that behavior controls us? Software is written to control the hardware. Are you saying that consciousness evolved to control the brain? Then we project this on to free will and say that we have no control over anything? Personality is who we are. Behavior is how we project who we are. We still have control over how we act. If we had no control we would not be any more civilized than any other animal. There are animals who have civilized behaviors and cannot act in any other manner, or they will not survive.
I do not concede that the brain or consciousness evolved. That seems to be the majority concession though. We have no control over our personality, but we do have control over our behavior. That is the process of reasoning. If humans cannot reason, then why do we use logic and debate? That we have the potential to receive thoughts beyond our control has very little to do with what we do with those thoughts, other than we get free "ammo" to work with in our thought processes. I am not even denying that consciousness is unadaptable, but since we have no control over that, the next best thing is learning how to control one's behavior.
We have some control over how we act, but we have very little control over our internal motivations, what you describe as personality. Our personalities are largely immutable, and in particular immutable enough to make the analogy with software, which you are right to point out is also largely immutable.

You're also right that most software isn't adaptive enough to be able to say that it can control how it acts. But it could be.

I say analogy above, but in a way it's more than that. Software, abstractly, is a description of behavior. The relevant parts of minds are behavior. Therefore our minds are software. Note, I'm using behavior here more broadly than you are. By behavior I mean not only how we act, but how we think too. (Bolded because this difference in terminology comes up several times.)

I would contend that our actions, the results of what our brain is telling us how to act, can be controlled by choice. Our brain is continuously reasoning out the results of future actions. Most humans, though, fall into habitual routines that decrease the need for such reasoning.
I would object to your word choice here. We cannot control what our brain is telling us to do, because we are our brain. We can, however, control what we defer to our subconscious, which seems to be what you are describing. This is a semantic objection, but it helps to speak the same language.

I am not saying that software cannot be written to learn. But is learning software the same as controlling software? It seems to me that they are two different types of programming, unable to learn and control at the same time. Even if they could, it would be because they were designed that way, and we are ruling out design.
What do you mean by controlling software? What kind of controlling software cannot be written?

The reason I brought up learning software is that it is a type of software that changes its behavior over time, as it learns. This is similar to the way the human mind is movable when presented with a logical argument, for example. Learning software is adaptive software, potentially as adaptive as the human mind.
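As a toy illustration of that point (purely hypothetical code, not a claim about any real system), here is a program whose source never changes but whose observable behavior does, because it accumulates experience:

```python
# Hypothetical illustration: the same code path gives different answers
# over time, because the answer depends on accumulated history.

from collections import Counter

class HabitLearner:
    """Predicts an action by remembering which one it has seen most often."""

    def __init__(self):
        self.seen = Counter()

    def observe(self, action):
        self.seen[action] += 1

    def predict(self):
        # Identical code on every call; only the stored history differs.
        if not self.seen:
            return None
        return self.seen.most_common(1)[0][0]

learner = HabitLearner()
print(learner.predict())  # None: no experience yet
for action in ["tea", "coffee", "coffee"]:
    learner.observe(action)
print(learner.predict())  # coffee: same code, changed behavior
```

This is adaptivity in the minimal sense being argued here: the "controlling" logic is fixed, yet what the program does is shaped by what it has encountered.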

It seems to me that proving whether plants are self-aware would be as futile as proving whether God exists. We are self-aware of each other. Until we can communicate with plants in any clear way, we have only plant behavior to determine how plants operate.
As I replied to Hygro, we have only each other's behavior to determine how we operate too. So behavior is the right standard to judge plants by. Plants clearly don't behave like they are self-aware, nor do they have anatomy that might function like the part of ourselves that is responsible for self-awareness; that is, plants don't have brains.

Self-awareness, defined as having a mental concept of oneself, is difficult to test for, but it's not impossible, and it is possible to objectively class things as self-aware, not self-aware, or inconclusive. Plants are not self-aware. Humans are. Dolphins are. Canines? Dubious, though there is some evidence that they aren't.

Behavior is only as utilitarian as the entity showing such behavior. If we can determine there is a hint of behavior in any entity, then that entity may have a hint of consciousness. That does not mean that all consciousness is the same. Consciousness is only undermined when you try to apply it broadly and in the same manner over all entities showing a hint of consciousness. It seems to me that respect and good will should be handed out regardless of what one deserves?
It is true that it's better to give a rock human rights than to deny them to an actual human, but it does not follow that we should give rocks human rights. For things that might or might not have moral standing, it is certainly better to give them moral standing than not, but even better would be to think longer and put them in one category or the other.

Does the difference have to do with personality and behavior? Each personality is unique, but behavior seems to manifest similarly depending on the entity, i.e. human, plant, or dolphin. Comparing behavior does not give the full picture of consciousness; that is why we have personality tests to determine patterns of behavior and even profile people. Patterns of behavior do not necessarily define consciousness, but they allow us to predict what may or may not happen. There is a thin line between controlling free will and dismissing it as non-existent.
I would say that patterns of behavior do define consciousness, but people's actions do not, as you correctly point out. As per above, I define behavior to include the process of thinking, not just acting.

If personality is unique, other than being utilitarian, how can we say consciousness is the same in each human, much less the same in every other entity that shows any signs of behavior indicating a hint of consciousness?
All people have a sufficiently similar mind to be called conscious. We think differently, but not so differently to call it something else. Not so differently that we should be given different moral consideration because of it.
 
Software is only as efficient as its designer.

Why must that be the case? We have plenty of examples of software doing things humans cannot do. What exactly is the limit here?
 
I know that you're using "designed" as a shorthand, but it's perhaps not helpful to use the word here. Our brains are not designed as pattern-recognition machines. They happen to have evolved that way because they were useful. They could have evolved in a different direction, say incredibly powerful numeric integrators [totally made that up, if you couldn't tell ;)]. That our brains are made up of connections of neurons doesn't mean that's the only way to get a computer, as you well know. And, I doubt that pattern recognition in itself is a prerequisite for consciousness.

Of course, but the point is that our brains are what allow us to be conscious, for whatever reason. That's why I bring them up: what does the sun have that would allow it to be conscious? I don't see any such instrument or tool, or anything even close, which is why I ask how the sun could possibly be conscious. If it is, where is that consciousness coming from? What is the sun's equivalent of the human brain? Is it something we haven't discovered yet? I want to understand the thinking behind "the sun is conscious", because I don't understand it.

I think it's premature to say that we know everything there is to know about the sun, however robust astrophysics is these days. Do we know a heck of a lot? Certainly. But we can't even be sure that some birds and cetaceans aren't conscious - and they share not only evolutionary history with us, but also some of the same exact wetware! How can we be so confident to say that there is not any analogous structure in stars?

Maybe we will be able to rule that out some day, but I really don't think we're there yet.

As far as we know, stars contain no such thing, so it'd be pure speculation to suggest that they might. I mean, you can easily say "black holes are made of milk", and there is no evidence to the contrary... and we don't know much about them, so maybe? But yeah, it's a completely random guess; it doesn't really help answer the question "what would make the sun conscious?"

I meant chauvinistic in the sense that you're looking for consciousness and neural nets similar to our own, rather than leaving open the possibility that there could be drastically different substrates or phenomena that could have the same result. It's like expecting all alien life to be humanoid, when not even all life on earth is humanoid.

That's why I am asking for a brain equivalent in the sun - what is there that would make consciousness possible. We have our brains - what does the sun have?
 
@Perfection
I tend to view the whole of our experience as a series of superimposed sensation.
For instance: I cannot determine what to think. Try it.
As you will notice, you can only decide what to think after you are already thinking about exactly that. I can only decide to think the word "house" after I am already thinking the word "house" and am thinking to decide to think the word "house".
While we experience some sensations as external (the view of a red car) and others as internal (thoughts), both appear to be out of our hands.
So there would indeed be no "central consciousness", but just a flow of sensation we have the pleasure of riding.
Hm, I actually very much like that; I never quite explained it that way. Goes to show why I see free will as an illusion.
My post should not be seen as negating free will. In any case, you just bring up a case where we do not feel we are choosing (probably because we aren't), but it doesn't mean there aren't other cases where we do choose. If I ask you to think of three words and then write one down, the three words might not be chosen, but the written word at least strongly seems to be chosen.
 
Why must that be the case? We have plenty of examples of software doing things humans cannot do. What exactly is the limit here?

I did not say "limit"; I said "as efficient". If software is more efficient than it was designed to be, then the software figured out something the designer did not include. It has already figured things out on its own?

Unless you are just saying that a designer can design software that goes beyond what the designer can do. That is the definition of all tools.

If one is designing a consciousness just to be a tool, then that is different than designing one that can think for itself. It seems to me that humans have a consciousness that allows them to think for themselves.
 
Of course, but the point is that our brains are what allow us to be conscious, for whatever reason. That's why I bring them up: what does the sun have that would allow it to be conscious? I don't see any such instrument or tool, or anything even close, which is why I ask how the sun could possibly be conscious. If it is, where is that consciousness coming from? What is the sun's equivalent of the human brain? Is it something we haven't discovered yet? I want to understand the thinking behind "the sun is conscious", because I don't understand it.



As far as we know, stars contain no such thing, so it'd be pure speculation to suggest that they might. I mean, you can easily say "black holes are made of milk", and there is no evidence to the contrary... and we don't know much about them, so maybe? But yeah, it's a completely random guess; it doesn't really help answer the question "what would make the sun conscious?"

That's why I am asking for a brain equivalent in the sun - what is there that would make consciousness possible. We have our brains - what does the sun have?
Is our brain responsible for our consciousness, or is it the electrochemical activity that is our consciousness? Could the structure of the brain be just a very efficient vehicle for electrochemical activity?
 
Sorry, reply took a while. But I think (hope) it makes some decent points :)

Well, for starters, you would have to justify why there are still separate, independent consciousnesses. If there can only be one consciousness per system, then the existence of multiple interacting independent consciousnesses in a network starts getting pretty strange. You would have to introduce other constraints, and suddenly the hypothesis I offer (every aggregation of a self-contained system is conscious in its own capacity) is simpler than one that denies it.
I didn't know the existence of separate, independent consciousnesses within us was fact? The only actual observation of consciousness is, after all, the human experience. This experience is what consciousness functionally is. And therefore, pretty much by definition, we have only one experience per human at a time.
At best we have the appearance of separate consciousnesses, due to this experience being able to assume different modes with their own characteristics. I guess one could also speak of moods, though that may not capture the idea 100%.
Wouldn't the safest assumption be that something IS conscious if it's telling you it is and responding like it is? It might be emotionally easier to swallow that a machine can't be conscious unless we build it some kind of special soul box, and then we can have human-like slaves guilt-free. But in terms of playing it safe with our morals, I'd take the opposite position.
Now it appears we are talking about two different kinds of likelihood: probability as such, and the probability of moral damage.
Assuming I am right, treating simulated consciousness as the real thing would itself be very immoral, as it would deprive humankind of much potential for better lives (guilt-free slavery). But yes, I assume that if I were wrong, the harm would be greater than the harm of letting such an opportunity go.
Still, the more moral decision also depends on probabilities as such (as a rule of thumb, the possibility of something bad happening becomes less bad with decreasing likelihood). Now I think I see where you are coming from. What we both seem to agree on is that we don't know how consciousness actually comes about; you then conclude that hence appearance should be our decision basis. Now why do you say that? Because such appearance correlates with consciousness as we so far know it.

But it correlates with more than that. It also correlates with brains.
Whereas you are satisfied with one factor, I want it to correlate with both.
Now who is going further out on a limb?

But there is more than just that.

Viewing a red car on my computer screen has nothing to do with seeing an actual car. It is even possible to make the car on the screen appear 3-dimensional (and I am not talking about movie-theater 3D or mere 3D graphics; there are far more advanced ways that track the movement of your eyes and adapt the picture accordingly, totally tricking your brain). It appears to me a fundamental truth of all existence that simulations and actual physical realities are always two very different pairs of shoes. Which makes sense, because simulating means cutting out conditions which usually correlate with a phenomenon - just what you propose to do.
And applying this general observation to consciousness tells me that it is very likely that the mere simulation of it won't be the actual thing.

But perhaps all that is a bit too abstract, so let's get real. How would the simulation compare to our brain? In physical terms, it would be extremely different. Whereas in the case of our brain there seems to be a direct link between electric patterns and consciousness, in the case of a computer, electric patterns would first be translated by complex algorithms to finally result in something appearing like consciousness. This is very far removed from where we know consciousness to actually take place, and is the concrete result of your abstract decision to limit yourself to one factor correlating with consciousness instead of accounting for both.
In any case, you just bring up a case where we do not feel we are choosing (probably because we aren't), but that doesn't mean there aren't other cases where we do choose. If I ask you to think of three words and then write one down, the three words might not be chosen, but the written word at least strongly seems to be chosen.
Yes, I agree that the case I picked is one where the lack of free will is felt exceptionally well. However, I'd argue that the pattern I reasoned with also holds true for cases where it is not felt so much.
So let's follow the situation you suggested step by step to really capture what is going on.
You ask me to choose one word out of three. This question functions as an input which immediately triggers a reaction in my brain. Did I choose this reaction? That reaction causes a new reaction in my brain. Did I choose this one? And another one, and another one, until eventually a decision manifests and I write down the word of my choosing.
"My choosing" - yes, I did choose. But where was I actually free while doing so?
 
Well, in my experience it can feel like I list options and then feel that I choose in my mind. I don't merely feel a series of non-choices.
 
I have an addition to make: I maintain that those chain reactions taking place in my brain are not of my choosing. But they are potentially a representation of my identity, as much as how my brain works is potentially a representation of my identity. So in a way, my identity is choosing. And by extension, I am choosing. That is a POV one can take, I suppose.
Though honestly, I find that to be more about expressing reality in a way that suits our experience rather than simply expressing what is going on. And what is actually going on is what I argued in an earlier thread of mine (though in a horribly inept way, due to some particular substances I consumed while creating it): that the concept of will itself isn't an actual thing, but just us confirming our own bias. That an actually accurate description of what is going on in our heads is us enjoying a hell of a mental ride, except that we experience it as being the stream itself instead of just the guy sitting in a boat and missing a paddle. A guy who looks down at the stream and says, "What do you mean I am not in charge? I AM the stream. True story."

Maybe I am missing something; maybe there is a crucial piece that doesn't fit. But so far it is the best approach I have encountered (though that may be my own confirmation bias ;)).
 
So, something to remember about computer consciousness:
Unless we get into deep speculation, we have a model of consciousness that we know works: our own. It's a giant leap of faith to assume that other people are conscious, but we made remarkable progress once we made that allowance. After that, we decided that neuroanatomy was incredibly important. One feature of our neuroanatomy is that we have thousands (if not millions) of sensory inputs, spread across a wide variety of modalities (changes in air pressure, photons, tactile pressure, chemical senses, etc.). I'm quite convinced that consciousness requires sensory inputs across a variety of modalities (i.e., sight AND sound at the same time). Now, you can maintain consciousness through a temporary cessation of some inputs, but I don't know if you can maintain it through a complete cessation.

Consciousness IS experience, coupled with interpretation and prediction. You cannot have consciousness of something you cannot experience, though this becomes less true as we use analogy and imagery. A color-blind person cannot experience red (I should know), but we can all imagine radio waves despite never experiencing them. We can nearly convince ourselves that we perceive them, because the analogies are so strong. A non-colorblind person actually has more consciousness than I do, because they have more neuron types than I do. Now, maybe, because we have equal numbers of brain neurons, I have some consciousness that they don't. But that's a hardware problem. The non-colorblind person would definitely have more consciousness if they had both more neurons and more sensory inputs.

So, can a computer be conscious? I don't doubt it. The complexity of its sensory inputs and its modality integration will be decidedly important to the scale and intensity of its consciousness. The sheer level of modality integration required will probably demand computing that can only be measured on the 'supercomputer' scale for many years to come, but still ...

One neat part about consciousness is our ability to migrate it. Think about your left foot right now. Now think about what you ate for breakfast. The brain is highly integrated, but a clever disruption of selected parts of your neuroanatomy could have entirely prevented either of those thoughts (completely) without preventing the other. What that shows is that neuroanatomy really is decidedly important to the human style of consciousness. And, we assume, to other animal consciousness.
 
Is our brain responsible for our consciousness, or is it the electrochemical activity that is our consciousness? Could the structure of the brain just be a very efficient vehicle for electrochemical activity?

I'm not sure, but the brain seems to be the vehicle responsible for the phenomenon, fully or partially. The point though is that I don't see anything similar in the sun - so I'm curious what Hygro has to say about that. I have no idea if he thinks that our brains have nothing to do with this, if the sun has a brain, if consciousness is something else entirely, or what.
Here's my take: the brain is responsible for the nature of our consciousness, as it builds and directs the energy-information aggregation network that spawns a conscious experience. The sun doesn't have a brain, but it does have an aggregated energy process from which a sun-relevant consciousness would spawn. Is its process similar to a brain, in that it makes calculations and semantic judgments? Maybe, but lord knows what the hell a billions-of-years-old star whose life process seems fairly predetermined by physics has to "think" about. Maybe a lot. Maybe nothing, and its consciousness is literally just the pseudo-cognizant experience of nothing other than what it is to be nuclearly fusing.


@Terx, Souron, and Timtofly, forgive me for being absent in responding.
Aren't the energy processes in the sun just nuclear fusion, though? Would a nuclear fusion plant be conscious, once we build one?

I'm trying to understand how you get from 1. energy
to 2. therefore consciousness
 