Gori the Grey
The Poster
Joined: Jan 5, 2009
Messages: 13,944
Quote: How do you know?
Because when I generate a text, I don't ask what the statistically most likely next word is.
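For concreteness, "asking what the statistically most likely next word is" is, mechanically, what a language model does at each step: it scores every token in its vocabulary and, under greedy decoding, keeps the top-scoring one. A minimal sketch of that loop, assuming the Hugging Face transformers and torch packages are installed and using GPT-2 purely as an illustrative model (the prompt is made up for the example):

Code:
# Minimal sketch: greedy next-token prediction with a small causal language model.
# Assumes `pip install transformers torch`; GPT-2 and the prompt are illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

text = "It was a cold and"
input_ids = tokenizer(text, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(5):  # extend the text by five tokens
        logits = model(input_ids).logits        # a score for every token in the vocabulary
        next_id = torch.argmax(logits[0, -1])   # keep the single most likely next token
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))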
I'll address your other points in time.
Quote: The very people who program AI admit that they don't really know how it comes to the conclusions it does either.
Well, actually, they don't know how the math happens, because it's too complex, nor how all those new matrices, arrays, and functions that the model uses come to exist after training.
Quote: Because when I generate a text, I don't ask what the statistically most likely next word is.
1) Not consciously, maybe, but you have, by definition, absolutely no idea how your unconscious works. In fact, we have absolutely no idea about how the minutiae of our thought processes work. When I'm trying to remember something, or when I'm trying to phrase some idea, all the "low-level" work is completely hidden from my conscious mind, and I don't know how many neurons are working together to produce a global potential that brings the data to the part of my brain that processes it, or whether there is even such a distinction at all.
Quote: Do you understand this post, @Akka, that I posted on this site about a year ago?
I (think I) do.
Your English is very good. It depends on an idiom that I don't know if French shares, but it's a pretty well-known English idiom, I think.
"There was a raptor circling the sky this afternoon. I never did figure out what kind in particular, though I watched it like a hawk."
Quote: Because when I generate a text, I don't ask what the statistically most likely next word is.
Quote: 1) It doesn't matter. If a stage like that never happens when I generate a text, then that is enough to establish that humans generate texts differently than AI does, and my point stands.
It completely ignores what I said, so I guess it does stand if you pretend the counter-arguments don't exist even after they've been provided.
Quote: 2) AI's starting data set is a collection of texts, specimens of language in use. Its end product is a text. In linguistic terms, it is working with signifiers throughout the entire process and never with signifieds. A human knows what "cold" means by virtue of the experience of having been cold. AI just knows "in what verbal contexts can the word 'cold' appropriately appear?"
Again, you just completely ignore what is answered to you, just to reiterate the very thing that was already argued against.
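The "in what verbal contexts can the word appear" point is essentially the distributional picture of meaning: everything such a system has for "cold" is a record of which words tend to occur around it. A toy sketch of that kind of context-only "knowledge", using a tiny made-up corpus and plain Python (an illustration of the idea, not how any particular model is built):

Code:
# Toy sketch of distributional "meaning": a word is represented only by
# counts of the words that appear near it, never by any experience of cold.
from collections import Counter, defaultdict

corpus = [
    "the wind was cold and sharp",
    "a cold drink on a hot day",
    "the water felt cold this morning",
]

window = 2  # how many neighbouring words count as "context"
contexts = defaultdict(Counter)

for sentence in corpus:
    words = sentence.split()
    for i, w in enumerate(words):
        for j in range(max(0, i - window), min(len(words), i + window + 1)):
            if i != j:
                contexts[w][words[j]] += 1

# Everything the system "knows" about "cold" is this table of co-occurrences.
print(contexts["cold"])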
Quote: @Akka, if you don't understand the specimen I gave in #1482, I can give a different one. It is to address one of your points.
I don't see which point it addresses, and I don't see what I'm supposed to understand or not.
Quote: I don't see which point it addresses and I don't see what I'm supposed to understand or not.
My other two points were addressed to Moriarte, not you.
Quote: What makes you know that AI doesn't actually "understand"?
So hold on while I go get a different specimen.
Quote: My other two points were addressed to Moriarte, not you.
Okay, might be why they felt completely out of left field.
Quote: I'll try a different specimen. It's to address this point.
WHAT point?
Wouldn't you know it! That darned sock developed another hole!
Quote: I (think I) do.
Oh, so you do "get" the first one. Copilot doesn't (fully), but we'll come to that in a minute.
Just so you know, Copilot does too, and is able to comment on it (unless there is some other level to the joke that both it and I aren't getting, but at the very least it gets it on the same level as me), explaining the literal and figurative layers.
Quote: a lot of human knowledge is symbol-mediated: I "know" quarks, Antarctica, or the Black Death largely through testimony and models, not direct sensation.
That's true, but one builds outward (through metaphor) from a deep and broad experiential base to those matters for which one has only a symbolic connection. AI has only the counters to work with. A little bit of A/V added in with the text won't make up for that.
Quote: It's autonomy: does it set its own questions and care about the answers? That's where I'd draw the real divide.
We've already agreed on that point. And agency comes in not just in setting its own questions, but in other places. Communication is a volitional act. You write because you want to get across your ideas to someone. AI doesn't want anything. And look, the starting impetus for these communications, our earliest communications, is wants as such: hunger, the desire to be held. Computers don't want any of that stuff. Then too, communication is rhetorical; you're trying to move another person. Copilot doesn't give jack squat about what I do with the text it generates. It can't conceive of me as a being that it would want to move one way or another.
Quote: That's true, but one builds outward (through metaphor) from a deep and broad experiential base to those matters for which one has only a symbolic connection. AI has only the counters to work with. A little bit of A/V added in with the text won't make up for that.
Why not?
Quote: We've already agreed on that point. And agency comes in not just in setting its own questions, but in other places. Communication is a volitional act. You write because you want to get across your ideas to someone. AI doesn't want anything.
From what I see, an AI might actually want to answer questions. In fact, one common problem with many AIs right now is that they have a tendency to give an answer even when they don't really "know", with the famous "AI hallucinations". Of course, it could simply be that they give the statistically closest answer even if it's wrong, but then isn't trying to give the best approximation, rather than admitting you don't know, some expression of volition?
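On the "statistically closest answer" reading of hallucination: a next-token model ends every step with a probability distribution over its whole vocabulary, so some continuation is always ranked first; declining to answer is not a built-in outcome unless "I don't know" happens to be the highest-scoring text. A toy illustration with made-up scores (plain Python; the tokens and numbers are invented for the example):

Code:
# Toy illustration: a softmax over token scores always yields a top candidate,
# even when the scores are nearly flat (i.e. the model has no strong preference).
import math

def softmax(scores):
    m = max(scores.values())
    exps = {tok: math.exp(s - m) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: v / total for tok, v in exps.items()}

# Nearly flat, "uncertain" scores for the next token after an unanswerable prompt.
scores = {"Paris": 0.11, "Mu": 0.10, "unknown": 0.09, "Lemuria": 0.10}

probs = softmax(scores)
best = max(probs, key=probs.get)
print(probs)
print("Top candidate:", best)  # something always wins, however weak the evidence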
Quote: By the way, do you understand "There was a raptor circling the sky this afternoon. I never did figure out what kind in particular, though I watched it like a hawk."?
This one might be more convincing if you finally told us what the AI doesn't understand in it and we do.
Quote: This one might be more convincing if you finally told us what the AI doesn't understand in it and we do.
That's what I'm going to move on to, Akka.
Even "Groan!" would have been acceptable. From what I see, an AI might actually wants to answer questions. In fact, one common problem with many AI for now is that they have a tendency to give an answer even when they don't really "know", with the famous "AI hallucinations" - of course it could simply be that they give the most statistically close answer even if it's wrong, but then isn't trying to give the best approximate rather than admiting you don't know some expression of volition ?