The AI Thread

Iirc Arnold in T2 decided to do stuff (also) for vaguely sentimental reasons (?).
It's not like I have watched that movie more than once, though. Maybe even less than once :D

Proving that something is not sentient clearly isn't trivial. But you don't have reason to argue that (say) a lego brick is sentient (despite the harm it can do).
 
Iirc Arnold in T2 decided to do stuff (also) for vaguely sentimental reasons (?).
It's not like I have watched that movie more than once, though. Maybe even less than once :D

Proving that something is not sentient clearly isn't trivial. But you don't have reason to argue that (say) a lego brick is sentient (despite the harm it can do).
Sentient and sentimental are not the same. And T2 is semi-crap anyway; T1 is the only one. In T1 Arnold has only a goal, and everything it does is in pursuit of that goal, using different methods and strategies, just as these AGI applications do. It even mimics Sarah's mother's voice in possibly the most chilling scene, just like...

 
Doesn't sentience imply you have a sense that stuff actually exists? That's not the same as being a thermostat (which identifies input without having a sense of being). An ant, for example, is sentient, though it's not self-aware.

Chatgpt would pass the Turing Test - which is another example of why the Turing Test has nothing to do with sentience (let alone intelligence).

I feel I should repeat, though, that I find chatgpt to be excellent. I just don't see it as very likely that real AI is possible with such machines. Chatgpt-like machines will already be hugely helpful to humans.
 
Doesn't sentience imply you have a sense that stuff actually exists? That's not the same as being a thermostat (which identifies input without having a sense of being). An ant, for example, is sentient, though it's not self-aware.
I do not think so. The definition I am aware of is the ability to sense the environment and respond to that information.
 
Yes, it's what I meant, more or less :) To have a sense of the environment (that sense doesn't have to translate neatly to a human one, but it has to be there in some form). A thermostat does not sense the environment; it merely has a sensor that picks up a change, without translation or a personal reaction. It's strictly mechanical (like a picked-up pebble falling if you let go of it).
An ant, on the other hand, afaik has a sense of its environment, despite not (afaik) operating as a distinct being nor having self-awareness.
 
Yes, it's what I meant, more or less :) To have a sense of the environment (that sense doesn't have to translate neatly to a human one, but it has to be there in some form). A thermostat does not sense the environment; it merely has a sensor that picks up a change, without translation or a personal reaction. It's strictly mechanical (like a picked-up pebble falling if you let go of it).
An ant, on the other hand, afaik has a sense of its environment, despite not (afaik) operating as a distinct being nor having self-awareness.
Can you define these terms objectively? The simplest thing I would be confident in calling minimally sentient is a bacterium, and I am not sure that has a more complex internal model of its environment than a modern smart thermostat, just molecular processing capable of detecting a concentration gradient and moving relative to that, and other such detect/respond pairs.
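To make the "detect/respond pair" idea concrete, here is a toy sketch (my own illustration, not a model of how either system is actually built): both the thermostat and the gradient-following bacterium reduce to reading a value and picking one of a couple of responses.

```python
# Toy "detect/respond pairs": read a sensor value, pick a response.

def thermostat_step(temperature, setpoint=20.0):
    """Detect a temperature reading, respond by switching the heater."""
    return "heater_on" if temperature < setpoint else "heater_off"

def bacterium_step(concentration_here, concentration_before):
    """Detect a chemical gradient, respond by continuing or tumbling."""
    return "keep_going" if concentration_here > concentration_before else "tumble"

print(thermostat_step(18.5))        # heater_on
print(bacterium_step(0.7, 0.4))     # keep_going
```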
 
I suspect there is no qualitative difference between human intelligence, animal intelligence (be it a dolphin, an ant or even a bacterium) and machine intelligence (be it chatGPT or a thermostat); it all comes down to quantity. Quantity of information being processed, quantity of sensors to perceive the environment, quantity of processes running in parallel, etc. Enough complexity and refinement may lead to all the things we consider exclusive to humans and maybe other higher animals, things like feelings, self-awareness, empathy, art, curiosity, etc., but ultimately they are nothing but subprocesses, or subgoals of the ultimate goal, which in the case of living beings is surviving long enough to get DNA replicated; therefore all other beings have them to some degree or in some way, even if in some primitive modality or in negligible amounts. (Well, it is hard to think a thermostat has goals, but you get the point.)

Or whatever, see Monty Python's Meaning of Life for further information.
 
Can you define these terms objectively? The simplest thing I would be confident in calling minimally sentient is a bacterium, and I am not sure that has a more complex internal model of its environment than a modern smart thermostat, just molecular processing capable of detecting a concentration gradient and moving relative to that, and other such detect/respond pairs.
There is a small bit in the Feynman book I am reading, on how a similar organism (some type of fungus) reacts, and whether or not it is as automatic as implied. Feynman (I can't establish if he is right, of course) noted that they don't always seem to react the same way, e.g. they don't change direction at the same angle/speed when met with an obstacle. Wouldn't something like a thermostat just react in one way or (at best) a finite number of ways that are already either programmed or easy to define?

As for defining my own terms, by "translation" I mean that imo the being has to form (consciously or not) some version of its environment which is distinct from the environment itself - which is, roughly, another way of saying that something like a thermostat isn't in (sentient=)contact with its environment.
Of course it is easy to define translation in humans; no one is of the view that we pick up the environment as "it is". E.g. there's no reason to expect the cosmos to actually be 3d itself.

I suspect there is no qualitative difference between human intelligence, animal intelligence (be it a dolphin, an ant or even a bacterium) and machine intelligence (be it chatGPT or a thermostat); it all comes down to quantity. Quantity of information being processed, quantity of sensors to perceive the environment, quantity of processes running in parallel, etc. Enough complexity and refinement may lead to all the things we consider exclusive to humans and maybe other higher animals, things like feelings, self-awareness, empathy, art, curiosity, etc., but ultimately they are nothing but subprocesses, or subgoals of the ultimate goal, which in the case of living beings is surviving long enough to get DNA replicated; therefore all other beings have them to some degree or in some way, even if in some primitive modality or in negligible amounts. (Well, it is hard to think a thermostat has goals, but you get the point.)

Or whatever, see Monty Python's Meaning of Life for further information.
But a rock will also move, in free fall. You don't seem to be of the view that it moves because it sensed there is a lack of material to stand on. So why argue the analogous thing for a thermostat? Imo in both cases the external trigger forces a change, without that change being picked up as a sense.
 
There is a small bit in the Feynman book I am reading, on how a similar organism (some type of fungus) reacts, and whether or not it is as automatic as implied. Feynman (I can't establish if he is right, of course) noted that they don't always seem to react the same way, e.g. they don't change direction at the same angle/speed when met with an obstacle. Wouldn't something like a thermostat just react in one way or (at best) a finite number of ways that are already either programmed or easy to define?
Many possible ways, and how easy this behaviour is to define is one way to describe the complexity of the system. Fungi are quite sophisticated organisms (they are more like us than they are like plants), but it is possible to imagine a big computer having a comparable level of complexity. In this case it could react in as many different ways, and be as difficult to define, as the fungus.

Talking of insects, they have just fully mapped an insect brain; it is not that complex.

[Image: fruit fly larva brain connectome map (science.add9330-fa.jpg)]


To build a picture of the fruit fly larva connectome, the team used thousands of slices of the larva’s brain imaged with a high-resolution electron microscope, to reconstruct a map of the fly’s brain - and painstakingly annotated the connections between neurons. As well as mapping the 3016 neurons, they mapped an incredible 548,000 synapses.

The researchers also developed computational tools to identify likely pathways of information flow and different types of circuit patterns in the insect's brain. They found that some of the structural features are similar to state-of-the-art deep learning architecture.
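As a rough illustration of what "identifying pathways of information flow" in a mapped connectome can mean computationally (a toy sketch with made-up neuron labels, nothing like the researchers' actual analysis pipeline): the wiring is stored as a directed graph and routes from sensory to motor neurons are traced with a simple breadth-first search.

```python
from collections import deque

# Toy connectome: neuron -> neurons it synapses onto (made-up labels, not real data)
connectome = {
    "sensory_1": ["inter_1", "inter_2"],
    "inter_1":   ["inter_2", "motor_1"],
    "inter_2":   ["motor_1"],
    "motor_1":   [],
}

def find_pathway(graph, source, target):
    """Breadth-first search for one shortest pathway of information flow."""
    queue = deque([[source]])
    visited = {source}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None

print(find_pathway(connectome, "sensory_1", "motor_1"))
# ['sensory_1', 'inter_1', 'motor_1']
```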
 
My sense is that complexity isn't the crucial factor here; I have no reason to assume a powerful computer won't be more "complex" than a very simple organism. But it still needs to form a sense of the environment. E.g. my old Amstrad could react when I typed the BASIC command "run", but obviously wasn't sensing it as anything; a digital computer is after all only a formal logic system.
Speaking of which, a formal logic system isn't even "aware" of its own complexity (shown by the Gödel etc. theorems). While humans aren't either, the chasm there is that we very obviously do have some level of being conscious of stuff - no reason to think computers have or will get that.
 
A brain is just a bunch of cells converting frequency of inputs into a frequency of output. That does not stop it being sentient.
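As a cartoon of that "frequency in, frequency out" picture (a toy rate-coded unit of my own, not a claim about how real neurons actually compute): a weighted sum of input firing rates is pushed through a saturating function to give an output firing rate.

```python
import math

def output_rate(input_rates, weights, max_rate=100.0):
    """Cartoon rate-coded neuron: weighted sum of input frequencies (Hz),
    squashed into an output frequency between 0 and max_rate."""
    drive = sum(r * w for r, w in zip(input_rates, weights))
    return max_rate / (1.0 + math.exp(-0.1 * drive))  # logistic squashing

print(output_rate([20.0, 50.0], [0.5, -0.2]))  # -> 50.0, halfway up the curve
```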
 
There are views that it may be (if at all similar to machines) crucially analog-based, or rather that consciousness may require analog parts. Of course, analog computers do exist; they aren't good for pure computation, though.
I won't be surprised if actual bio matter is needed. Then again, maybe some natural phenomena tie to that ultimately, at even more subatomic levels (?).
 
There are views that it may be (if at all similar to machines) crucially analog-based, or rather that consciousness may require analog parts. Of course, analog computers do exist; they aren't good for pure computation, though.
I won't be surprised if actual bio matter is needed. Then again, maybe some natural phenomena tie to that ultimately, at even more subatomic levels (?).
The brain is definitely an analog as opposed to a digital computer. I am not sure how that is particularly relevant to the question of sentience.

[EDIT] I have just looked at the link, and perhaps you mean quantum as opposed to classical, rather than digital/analog? It is quite possible that quantum effects could give us the complexity required for consciousness to pop out. Quantum computers are coming on though.
 
Penrose approaches it from the formal logic angle. I haven't read much of the book, though, so can't comment :/
There are many youtube videos where he speaks of the matter, however!

If I had to guess (and I don't have to, but my ties to literature compel me :D ), it would have to do with analog systems not being separated from the full continuum, thus not fragmented by incompleteness effects beyond any sub-level (or something equally funky :) ) (goes without saying, this is not a good guess)
 
Penrose approaches it from the formal logic angle. I haven't read much of the book, though, so can't comment :/
There are many youtube videos where he speaks of the matter, however!
I have just looked at the link, and perhaps you mean quantum as opposed to classical, rather than digital/analog? It is quite possible that quantum effects could give us the complexity required for consciousness to pop out. Quantum computers are coming on though.
 
I have just looked at the link, and perhaps you mean quantum as opposed to classical, rather than digital/analog? It is quite possible that quantum effects could give us the complexity required for consciousness to pop out. Quantum computers are coming on though.
I have heard him speak of both (quantum, analog), but he DEFINITELY focuses on why the brain likely being analog would prevent consciousness in current computers. There are many lectures of his on that on youtube; I will find one and edit this post with the link.

Edit: Here is a very brief video: (changed the video, but I hate this interviewer)

And here are more, with the same prompt: https://www.youtube.com/results?search_query=penrose+consciousness+analog
 
All the Ahnolds were programmed, or reprogrammed: the T2 version by John Connor, the T3 one by Kate Brewster; hacked, it still came back to Brewster's. The only one acting on its own was the latest, which in CFC was practically seen by r16. It developed a sense of hunting after it got bored.
 
I have just looked at the link, and perhaps you mean quantum as opposed to classical, rather than digital/analog? It is quite possible that quantum effects could give us the complexity required for consciousness to pop out. Quantum computers are coming on though.
It is a curious turn of events to see quantum identified with analog and classical identified with digital. Wouldn't the opposite be more logical?
 
It is a curious turn of events to see quantum identified with analog and classical identified with digital. Wouldn't the opposite be more logical?
I do not see that. There are more degrees of freedom in an analog computer than in a digital one, and more degrees of freedom in a quantum state than in a classical one.
 
But analog = continuous, same as classical physics, where space, time and energy are continuous.
Meanwhile digital = discontinuous, same as quantum physics, where space, time and energy are quantized, so discontinuous.
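To put a toy number on the continuous/discrete contrast (an illustration only; which physical picture is "really" continuous is exactly what is being debated here): an analog quantity can sit anywhere in a range, while a digital or quantized one is forced onto a grid of allowed levels.

```python
def quantize(x, levels=8, lo=0.0, hi=1.0):
    """Force a continuous value in [lo, hi] onto one of `levels` discrete steps."""
    step = (hi - lo) / (levels - 1)
    return lo + round((x - lo) / step) * step

analog_value = 0.3723              # continuous: any real number in [0, 1]
digital_value = quantize(analog_value)
print(analog_value, digital_value) # 0.3723 vs the nearest allowed level (~0.4286)
```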
 