I think it remains to be seen how valuable GPT-based products and services will be, but even the bare "provide an input prompt, get an output" format can do a lot: chatbots, spambots, writing assistants, question answering, and unscripted/flexible video game dialog are a few uses that come to mind. All in all, though, I'm inclined to think GPT-3 itself is not that useful on its own; what's more useful is starting with a big model like it and "fine-tuning" it to be good at a specific task (question answering, etc.). You can't fine-tune GPT-3 itself, since Microsoft/OpenAI aren't releasing the weights, but you can do it with plenty of very similar models if you can pony up for some GPUs.
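For what it's worth, here's a rough sketch of what that fine-tuning might look like with an openly available relative of GPT-3 (GPT-2 here, via Hugging Face's transformers library). The corpus path and hyperparameters are placeholders I made up, not a recipe:

```python
# Hypothetical sketch: fine-tuning GPT-2 (a freely available GPT-3 relative)
# on a custom text corpus. Path and hyperparameters are placeholders.
from transformers import (
    GPT2LMHeadModel,
    GPT2TokenizerFast,
    TextDataset,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Plain-text training corpus, e.g. a file of Q&A pairs.
dataset = TextDataset(
    tokenizer=tokenizer,
    file_path="qa_corpus.txt",  # placeholder path
    block_size=128,
)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="gpt2-finetuned",
        num_train_epochs=1,
        per_device_train_batch_size=4,
    ),
    data_collator=collator,
    train_dataset=dataset,
)
trainer.train()
```

The point is just that the heavy lifting (the pretrained weights) is already done; the fine-tuning pass only nudges the model toward your task.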
I'm a little skeptical of the usefulness of the debate about in what sense these models "think", since it seems to be basically a debate about the meaning of the word "think". My own take is that GPT-3 et al. can be seen as a loose model of cognition, where cognition amounts to correlations learned from text corpora and is implemented with a lot of matrix math: matrices that encode linguistic features and word associations are applied successively to an input to transform it into a (mostly) sensible output.
It's not really "thinking" or "smart", but it produces interesting results. And it's impressive how much mileage the field has gotten out of this matrix-math paradigm, in natural language processing, in computer vision, and everywhere else deep learning has been applied successfully.
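To make the "successive matrix transformations" picture concrete, here's a toy numpy sketch. The matrices here are random stand-ins for learned weights, and a real transformer interleaves these multiplies with attention and other machinery:

```python
# Toy illustration of the "successive matrix transformations" idea.
# All numbers are made up; in a real model the matrices are learned.
import numpy as np

rng = np.random.default_rng(0)

d = 8                   # embedding dimension (toy size)
x = rng.normal(size=d)  # stand-in for an input token embedding

# A stack of weight matrices (random here, learned in practice).
layers = [rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(4)]

for W in layers:
    x = np.maximum(W @ x, 0.0)  # linear map plus a ReLU nonlinearity

print(x)  # the transformed representation after four "layers"
```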
The issue of actual thinking isn't an afterthought, though, because it bears on developments that are hard (or, in a sense, impossible) to trigger otherwise. It's difficult to get an AI to reach a false conclusion, recognize it as false, and still keep it around (consciously or not) as raw material for something productive, whereas in humans it's very common for a mistake to lead to a breakthrough later on, indirectly or in a different question entirely.
An example, used in a variant by Socrates, that is easy to recognize from real life:
Suppose you're asked to solve a math problem in a test, say to prove something foundational like the Pythagorean theorem, and you haven't studied for it but are intelligent, so you try to come up with an answer anyway. There are three possible outcomes (one of which is trivial):
1) You actually manage to prove the theorem (trivial)
2) You build a line of reasoning that you believe is leading to a proof, but in the end you get stuck (somewhere between trivial and important)
3) You build such a line of reasoning and think you actually did prove it, but you have proved something else: errors in how you identified what was asked, or in a step of the reasoning, lead to something you believe is a proof of the Pythagorean theorem but isn't (important).
Socrates' point is that everyone who believes they answered correctly did, at any rate, identify what they produced as correct, and that belief is a reality in their mental world. And in some cases (obviously a tiny minority) the person will have stumbled onto parts of a proof of something else, while the original thing to be proven wasn't needed anyway (the Pythagorean theorem has already been proven).
Computers are likely unable to make use of this, or at least that's what I've heard a couple of notable (internationally known) AI researchers say in lectures.