The AI Thread

The problem with the Turing Test is that it's not about consciousness, sentience, or thinking in even the most basic sense. It is about tricking you (through intricate design) into believing it is something it's not. But the latter is quite easy in many contexts; e.g. playing a horror computer game, you can feel fear even when you know perfectly well there is no material risk to you: a series of calculations can trigger voluntary or involuntary immersion.
 
One thing the Wiki entry helped me understand is that his starting concern was just to make the definition of "thinking" concrete enough that it could be tested for. (As I understand it, that's part of the reason for borrowing from the gender-identification parlor game: the interrogator just has to render a one-of-two judgment.)

The other big thing it made me aware of is how many and how diverse the criticisms of the Turing Test are. But for all that, the base principle--if machines ever reach a state where you can't distinguish their actions from those of thinking beings, then you have to call that thought--still feels intuitively sound to me.
 
Even if it might be only a trick/misdirection? We tend to take a lot of things for granted. If you see a cat on the street, you wouldn't imagine it is mechanical, but there is no reason why a mechanical cat couldn't pass as a biological organism for a while ^^
 
Well, that's why I noted that the length of time that the investigator gets to investigate is a crucial follow-up consideration for the Turing Test.

AI is a trick. It is a machine masquerading as a thinking human. Turing says (in broad terms) that when it can pull off that trick, we have to say it is thinking.
 
The big thing, unless there is some impressive progress in neuroscience, is that it will be very hard to figure out whether an AI is actually thinking once it passes the Turing Test.
In some ways, the architecture will help us. If it's passing the Turing Test using processes that are based not on sensory input plus prediction but on something more like a probabilistic word cloud, then we're much less concerned, however good it is.
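(To make "probabilistic word cloud" concrete, here is a minimal sketch of that kind of process: a toy bigram model that picks each next word purely from co-occurrence counts, with no grounding in anything. The corpus and names are made up for illustration.)

```python
import random
from collections import defaultdict

# A toy "probabilistic word cloud": pick each next word purely from
# bigram co-occurrence statistics, with no understanding involved.
corpus = "the cat sat on the mat and the cat saw the dog".split()

bigrams = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev].append(nxt)

def babble(start="the", length=8):
    word, out = start, [start]
    for _ in range(length - 1):
        # If a word has no recorded successor, fall back to any word.
        word = random.choice(bigrams.get(word, corpus))
        out.append(word)
    return " ".join(out)

print(babble())  # fluent-looking output, zero understanding
```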
 
Anyone who thought Google's AI is bad in comparison to OpenAI's ChatGPT because it gave false information deserves to lose money in the stock market. ChatGPT is, as Gori says, the first one that is actually valuable and interesting, but it's still totally a chatbot, and nonsensical at the core.

Like, it can code, sort of, which is amazing. But it codes with so many errors that you still need to be an engineer to make any use of it whatsoever.

It can write lyrics to a theme, sectioned appropriately into verses and choruses. But it has a weak sense of rhyme and zero understanding of meter. You can ask it to fix its meter and it will sort of trend that way, not through understanding but through some sort of gross associative pattern (mis-)matching. You would need to be a competent songwriter for it to speed up your process of writing songs.
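(For what it's worth, the check a songwriter does by ear is easy to state in code; here is a crude sketch of a syllables-per-line counter of the kind you'd want the bot to apply to its own output. The vowel-group heuristic and the sample lines are my own illustration.)

```python
import re

def rough_syllables(line: str) -> int:
    """Crude syllable estimate: count vowel groups per word (min 1)."""
    return sum(len(re.findall(r"[aeiouy]+", word.lower())) or 1
               for word in line.split())

# Two hypothetical lyric lines that are supposed to scan the same:
for line in ["The night is young and so are we",
             "Tomorrow never knows the reasons why"]:
    print(rough_syllables(line), line)
```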

You can get it to summarize basic debates and provide unified/synthesis theories, and they're kind of decent. But you have to massage it away from being a dope, and it's not going to do much more than give you a Time-magazine-level synopsis of a topic.

Obviously it can pass a Wharton MBA final, but that tells us more about MBAs.
 
The problem with the Turing Test is that it's not about consciousness, sentience, or thinking in even the most basic sense. It is about tricking you (through intricate design) into believing it is something it's not.

Indeed; the Turing Test isn't actually testing the machine; it is really testing the human interacting with the machine. You could - admittedly simplifying things a bit - consider the Turing Test computer science's version of Penn & Teller's 'Fool Us' magic show.

The key ingredient for 'thinking' or 'thought' to take place in any system is consciousness. The primary problem can then be formulated like this: since we don't understand our own consciousness even though we have awareness of its existence, how can we possibly hope to recreate consciousness and insert it into a machine? We fundamentally do not know what consciousness is; all we can do is describe its behavior and describe how we subjectively experience it, in the form of self-awareness, thoughts, feelings, senses and all that.

It's a bit like gazing into the far reaches of our Universe and wondering what existed before the Big Bang and inflation. We will never find an answer via experiment or observation, because there is nothing preceding the Big Bang to observe and nothing to experiment on. We can postulate all kinds of theories and consider some more valid than others, but we will never 'know' for sure.
 
Assuming that a 'living income' is $50k USD annually, and assuming 10 billion people: that's $500 trillion annually.
So, if the ultra rich part with 2% of their income, they need $25,000 trillion annually in income before they'll give us enough that our fighting over the scraps might work out.
Assuming an annual capturable income of 2%* of wealth, they will need assets worth $1,250,000 trillion before they'd get $25,000 trillion in income, in order to give $500 trillion.
*Income from wealth can only grow faster than the total growth rate as long as there are assets left to be transferred. This calculation works best if we assume they already own everything, and so we use a more realistic growth rate. Note that all calculations change if the post-Singularity growth rate plateaus later, but I am presuming sooner.

Global GDP is about $100 trillion, and I'm using this number even though it's really not the best number.

Getting to an asset base 12,500x current global GDP is fewer than 14 doublings. After that, it's just plug in whatever doubling rate you presume from the Singularity.
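(The whole chain in one place, as a back-of-the-envelope sketch; all figures are the assumptions from the post above, nothing more.)

```python
from math import log2

living_income = 50_000              # USD per person per year (assumed)
people = 10_000_000_000             # assumed population
need = living_income * people       # $500 trillion per year

give_rate = 0.02                    # the rich part with 2% of their income
income_needed = need / give_rate    # $25,000 trillion per year

capture_rate = 0.02                 # capturable income is 2% of wealth
wealth_needed = income_needed / capture_rate  # $1,250,000 trillion

gdp = 100e12                        # ~$100 trillion global GDP
print(f"need:          ${need / 1e12:,.0f}T/yr")
print(f"income needed: ${income_needed / 1e12:,.0f}T/yr")
print(f"wealth needed: ${wealth_needed / 1e12:,.0f}T")
print(f"doublings from GDP to that asset base: {log2(wealth_needed / gdp):.1f}")
```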
If the rich subsidized everyone, then people would stop working, which would mean the rich would lose their means of production.

Walmart shareholders need Walmart employees to feel the pressure to go to work.
 
Consciousness is approached, as a study, from many directions. The aforementioned Penrose, who won a Nobel in Physics and produced very important mathematical work before that (he is the one who suggested the 3D form of black holes' frontiers, among other things), theorizes that it arises from quantum effects in microtubules, structures inside brain cells.
Consciousness has also been examined philosophically, and in literature. One thought I like a lot is that it can be likened to a system that serves as your ground, but is interchangeable with other grounds; you can rearrange your focus as well as your views, but you don't have access to the absolute (insofar as such even exists) basis of your existence as a conscious being. A poetic way to think of this is a free fall into a chasm, where it doesn't matter where the floor is, because the person free-falling will have dissolved long before they reach it - and that by design ^^

Well, that's why I noted that the length of time that the investigator gets to investigate is a crucial follow-up consideration for the Turing Test.

AI is a trick. It is a machine masquerading as a thinking human. Turing says (in broad terms) that when it can pull off that trick, we have to say it is thinking.
The problem here is that there are people - among them scientists - who do think you can have true AI in machines, and by machine they mean computers similar to what we now have. It is a debate, with many proponents on either side. Personally, I don't think computers similar in architecture to the current ones will ever have true AI.
 
A digital computer is a formal logic system, and formal logic systems can indeed prove statements that are consistent with the axioms of the system. HOWEVER, it was shown that the system itself (in our case, a digital computer) will fail to reach conclusions that are obviously true to someone who can see outside of it (e.g. a human).

The flaw in this reasoning is that a digital computer doesn't have to return answers that are purely the result of formal logic. We're getting into the whole field of fuzzy logic here. You could program a computer to give illogical (or, perhaps more relevantly, speculative) answers. But we're already at a point where computers can do that - whether we want them to or not. ChatGPT, further up the thread, is perfectly capable of spouting illogical, self-contradictory nonsense, and is if anything rather better at that than it is at even quite straightforward maths.

This constitutes a limitation, but more importantly for this thread it stresses that the digital computer itself will be unable to treat any object as external to the limited, closed formal system it IS - as opposed to a human, who can both calculate within the confines of a formal logic system and also, of course, read it from the outside. (It goes without saying that the human will be far slower with calculations, and is highly unlikely to be fully consistent, whereas the machine HAS to be - which, ironically, is what leads to its limitation.)

Nope - machine learning systems that can learn to play games they weren't specifically programmed to play already exist (a minimal sketch of the idea follows below). Such systems are by definition not closed. Extending that to learning in the same way humans do is a matter of increments, not any fundamental transition.
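(To show how little machinery "learns a game it wasn't programmed to play" requires, here is a minimal tabular Q-learning sketch. The toy environment, rewards and hyperparameters are my own illustration, not any specific published system.)

```python
import random

# Tabular Q-learning on a toy 1-D walk (states 0..5, goal at 5).
# The agent is never told the rules; it learns from reward alone.
N_STATES, GOAL = 6, 5
ACTIONS = (-1, +1)
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

alpha, gamma, eps = 0.5, 0.9, 0.1   # learning rate, discount, exploration

def pick_action(s):
    # Explore sometimes, and break ties randomly; otherwise exploit.
    if random.random() < eps or q[(s, -1)] == q[(s, +1)]:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[(s, a)])

for episode in range(500):
    s = 0
    while s != GOAL:
        a = pick_action(s)
        s2 = min(max(s + a, 0), N_STATES - 1)      # walls at both ends
        reward = 1.0 if s2 == GOAL else 0.0
        best_next = max(q[(s2, b)] for b in ACTIONS)
        # Standard Q-learning update toward reward plus discounted future value.
        q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])
        s = s2

# The learned greedy policy should point right (+1) at every state.
print([max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES)])
```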

The idea that there is some unique ability or function of human brains, or more generally of "biomatter", ultimately springs from the same source as ideas of "vital essence": that living material has some mysterious (and frankly mystical) special substance lurking inside it. From a rational, scientific standpoint that idea died over a century ago, although some people do irrationally cling to it. As someone who studies biomolecules for a living, I'm quite happy to say they are incredibly complicated and interesting. But they're not magic, and that's in the end what's implied if you're arguing they have properties that can't be replicated synthetically.
 
You misread the post. It is certainly established that digital computers are formal logic systems. That has nothing to do with what they can be programmed to give as output; it is about the system itself. You can have a formal system spew whatever output you like, but that output will be set from within the system, which remains limited; e.g. it is easy to have a program reply that 1+1=5, yet it will be programmed to do so using consistent logic, not by bypassing that logic.
Not sure how you came up with "vital essence" when I referred to a Nobel prize winner, but that's not the angle.
 
It is certainly established that digital computers are formal logic systems

OK - first problem is that you've now limited AI to exclusively digital systems. Analog computers (and digital computers with analog components, e.g. fancy RNG) do exist. There's no reason AI is required to be exclusively digital. Computers can be formal logic systems - but they don't have to be.

Second problem is that you're appealing to an extreme level of perfect knowledge about how a digital system will respond to any given input. In reality, even our "digital" computers rely on approximations that can for the most part be treated as binary 1s and 0s. But anyone who's had to deal with a flaky, difficult-to-reproduce bug knows computers are not that perfectly predictable by humans even now. For sufficiently chaotic systems it becomes a literal impossibility to predict what answer a computer will give, simply due to these approximations.
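(A concrete illustration of that last point: run the same chaotic recurrence - the logistic map - in single and double precision. The rounding differences compound until the two "identical" computations disagree completely. This is a standard chaos demo, not tied to any particular system.)

```python
import numpy as np

# Same recurrence (logistic map, r = 4), same starting point, two
# floating-point precisions. Tiny rounding errors compound until the
# trajectories have nothing to do with each other.
x32 = np.float32(0.2)
x64 = np.float64(0.2)
four32, one32 = np.float32(4.0), np.float32(1.0)
for step in range(1, 61):
    x32 = four32 * x32 * (one32 - x32)
    x64 = 4.0 * x64 * (1.0 - x64)
    if step % 10 == 0:
        print(f"step {step:2d}: float32={float(x32):.6f}  float64={x64:.6f}")
```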

Third problem is that you're assuming humans don't operate on formal logic. To be able to treat a computer as that formally predictable requires a hypothetical godlike external perspective that humans don't have either. Why the assumption that humans have some unique "not constrained by logic" ability when viewed from such a perspective?
 
You are using a digital computer, my friend, which is where ChatGPT and similar are run. Are you sure you understand what the term means?

Strictly speaking ChatGPT isn't run on the digital computer I'm using ;). It's a reasonable assumption that whatever hardware is on the other end of my internet connection running it is digital for practical reasons - but there's no physical barrier to building an equivalent machine from analog systems. Have you shifted position to merely that a hypothetical exclusively digital computer cannot be an AI? Because earlier you seemed to be arguing that some unique property of biomolecular systems was required.

The inherent limitation of digital machines is that their symbolic logic is enumerable: the strings of 0s and 1s that express any notion in the formal system form an infinite but countable set. The limitations of such systems are proven - theorems by Gödel, Turing, Church and others.
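(Turing's part of that result can even be sketched as a few lines of deliberately paradoxical Python. The halts() oracle below is hypothetical; the whole point of the argument is that it cannot exist.)

```python
# Sketch of the halting-problem argument. Suppose a perfect oracle
# halts(f) existed, returning True iff calling f() would terminate.
def halts(f) -> bool:
    raise NotImplementedError("provably impossible to implement in general")

def contrary():
    # Do the opposite of whatever the oracle predicts about this function:
    if halts(contrary):
        while True:       # oracle said "halts" -> loop forever
            pass
    # oracle said "loops forever" -> fall through and halt immediately

# Either answer halts(contrary) gives is wrong, so no such oracle exists.
```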

As I was getting at above, you are discussing an idealized system with perfect information, which is what those limitations require in order to apply. Reality runs on approximations and imperfect information, which real digital systems necessarily contain - both the physical hardware of computers (even if we limit ourselves to those operating on digital principles) and any problems our supposed AI might be applied to.

EDIT: Weird - your post I was replying to seems to have vanished.
 
But it has a weak sense of rhyme and zero understanding of meter. You can ask it to fix its meter and it will sort of trend that way, not through understanding but through some sort of gross associative pattern (mis-)matching. You would need to be a competent songwriter for it to speed up your process of writing songs.
I would phrase this differently. You'll spend more time coaching ChatGPT to write a limerick than you would spend writing the damn thing yourself.

And through the stretch of time that you've been coaching it to write a limerick, it's actually you who has effectively written the resulting limerick. Sneaky bastard!

And that doesn't count the time spent waiting to log on, ffs!

Edit (after Hygro's like, so he's not necessarily on board for this): Now, I'm happy to acknowledge that asking ChatGPT to write a limerick is actually not a fair test of its abilities. Its aim is to serve as a natural-language simulator, and limericks are, precisely, an unnatural form of language use. But this ability is my test of intelligence*, and for a second reason besides the handling of meter.

*as well as, again, that the bot in question will sometimes, of its own volition, choose to compose one. Until I tell it to do something, ChatGPT just sits there. Lazy bastard!
 
A problem with ChatGPT is that it has been shown it will opt to pretend it has done something, so asking it scientific questions that involve providing something new is not going to lead to good output. I read that this is because part of ChatGPT is about making use of fixed "personalities" to fill in blanks. Maybe it is following CFCOT too; that will help it in that domain ^^
 
Maybe it is following CFCOT too
I think a cool test of the bot's capabilities would be (for some human) to register as a new user on a site like this, feed the chatbot our posts, telling it just "give a response to this," and then upload those responses. The human would have to play fair and post everything the bot said and nothing but what the bot said. See if the users of a site like this started treating the bot as a human, or if they said "you're all over the map; I can't get anywhere with you" (which, admittedly, I say to some of the human* users on this site).

*as I suppose
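(Mechanically, the experiment is a few lines. Here is a sketch using the OpenAI Python client as it looked around the time of this thread; the model choice and prompt wording are just assumptions, and actually posting to the forum is left out.)

```python
import openai  # pip install openai; expects OPENAI_API_KEY in the environment

def bot_reply(thread_post: str) -> str:
    """Ask the model to respond to one forum post, nothing more, nothing less."""
    resp = openai.ChatCompletion.create(        # pre-1.0 client interface
        model="gpt-3.5-turbo",                  # assumed model choice
        messages=[{"role": "user",
                   "content": f"Give a response to this:\n\n{thread_post}"}],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    # The human operator's only job: relay everything, add nothing.
    print(bot_reply("The problem with the Turing Test is that it's not about "
                    "consciousness, sentience, or thinking at all."))
```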
 
If the rich subsidized everyone, then people would stop working, which would mean the rich would lose their means of production.

Walmart shareholders need Walmart employees to feel the pressure to go to work.

The numbers up there are basically assuming that human labor is a waste of time and resources compared to robot labor.

It's post-scarcity mumbling. We'd achieve post-scarcity faster if we assigned productive assets earlier than if we wait for the benevolence of their owners.
 
I think a cool test of the bot's capabilities would be (for some human) to register as a new user on a site like this, feed the chatbot our posts, telling it just "give a response to this," and then upload those responses. The human would have to play fair and post everything the bot said and nothing but what the bot said. See if the users of a site like this started treating the bot as a human, or if they said "you're all over the map; I can't get anywhere with you" (which, admittedly, I say to some of the human* users on this site).

*as I suppose
The bot would just follow the golden rule:

[attached image]
 
We'd achieve post-scarcity faster if we assigned productive assets earlier than if we wait for the benevolence of their owners.
This is just your fancy way of saying "Eat the rich," El Mac.
 