1) reformulate it, while keeping the meaning,
It can do that. When I asked it my question about Kirk's bad-faith treatment of Ms Rachel's citation of "love your neighbor," it correctly indicated that I was interested in "rhetorical or ideological" reasons for why a person might distort someone's quotation, even though my question had not used either of those words. Its adding those two things made my question more precise than my own formulation of it. (One possible real-life answer is that Kirk hates Ms Rachel because she rebuffed his romantic advances; but it knows to rule out that kind of answer because I generalized ("a right-wing commentator") and because of other aspects of my phrasing.)
2) actually gives a relevant answer for it.
It can do that. The answers it gave to my initial question and to my follow-up were both relevant. As I said, it came down harder on Kirk's bad-faith argumentation than I myself did!
That's not some simplistic "let's use this word, then let's consult which word will statistically comes after".
It does both (1) and (2) precisely by asking, "in treatments of my topic within my dataset, which word is statistically most likely to follow the previous word?"
It depends what you mean by "simplistic." Being able to do that on a huge data set is not something a human would find easy to do at all; but the process (of statistically calculating probabilities) couldn't be more simple.
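To make that statistical idea concrete, here is a toy sketch in Python. I am not claiming this is how any actual model is built (real systems use neural networks trained over tokens, not word-count tables); it only illustrates the bare notion of "which word most often follows the previous one in my data." The tiny corpus is invented for the example.

```python
# Toy illustration of "which word is statistically most likely to follow?"
# NOT how a real LLM works -- just the bare statistical idea of next-word frequency.
from collections import Counter, defaultdict

# Invented miniature "dataset" of medical-advice sentences.
corpus = [
    "if you are feeling faint sit down",
    "if you feel faint lie down",
    "if someone is feeling faint help them sit",
]

# For each word, count which word follows it and how often.
next_word_counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current, following in zip(words, words[1:]):
        next_word_counts[current][following] += 1

def most_likely_next(word):
    """Return the most frequent word after `word` in the corpus."""
    counts = next_word_counts[word]
    return counts.most_common(1)[0][0] if counts else None

print(most_likely_next("if"))       # 'you' (appears twice vs. 'someone' once)
print(most_likely_next("feeling"))  # 'faint'
```

The point of the toy is only this: "pick the statistically likeliest next word" is, as a procedure, extremely simple; what is hard is doing it over an enormous amount of data.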
Reformulating and answering require understanding a point,
They do not, and AI does not "understand" the texts that it produces. Because it mimics the products of human thought so well, we attribute to it the only capacity from which such products had previously been known to emerge.
Our minds do work differently from generative AI.
Let's take my request for advice about what to do if feeling faint. There are millions of such treatments on the web. I type in "what should I do if feeling faint."
It will likely pick as its first word "If." The logic in medical situations is "If/then" logic: If [symptom X], then [treatment Y]. It will likely pick as its second word "you," then "are," then "feeling," then "faint." It has choices: it might instead go "If," "you," "feel," "faint." Where it comes down on those little choices will be a function of where the majority of treatments in its dataset come down.
If I had typed "what to do if feeling faint?" it might have opted for "one" or "someone" as its second word. (Because people are sometimes looking up medical advice not for themselves but for someone else.) Probably most pre-AI-composed treatments of the topic start "If someone" for this reason. So "someone" is likely to have the higher probability unless I use the word "I" in my question.
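Here is the same toy idea extended to that point: whether "you" or "someone" wins depends on how the question was phrased. The question-and-answer pairs are invented for illustration; a real model draws on vastly more data, but the underlying move is the same kind of frequency comparison.

```python
# Toy sketch: which second word ("you" vs. "someone") wins depends on the
# phrasing of the question. The pairs below are invented for illustration.
from collections import Counter

qa_pairs = [
    ("what should I do if feeling faint", "If you feel faint, sit down."),
    ("what should I do if I feel dizzy",  "If you feel dizzy, lie down."),
    ("what to do if feeling faint",       "If someone feels faint, have them sit."),
    ("what to do when someone faints",    "If someone faints, lay them flat."),
]

def likely_second_word(question):
    """Most common second word among answers to similarly phrased questions."""
    personal = " i " in f" {question.lower()} "      # does the question say "I"?
    counts = Counter(
        answer.split()[1]
        for q, answer in qa_pairs
        if (" i " in f" {q.lower()} ") == personal   # same phrasing style
    )
    return counts.most_common(1)[0][0]

print(likely_second_word("what should I do if feeling faint"))  # 'you'
print(likely_second_word("what to do if feeling faint"))        # 'someone'
```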
I don't know about "otherworldly," but the human mind does go about its tasks in a way that is "fundamentally different" from how generative AI does.