The AI Thread

It doesn't have to be coding. Neurons can be replicated in transistor-based physical devices for example.
Besides, modern neural-network based algorithms aren't coded either.

It might be that people are conflating replicating with modelling? You write code that will model neuronal interaction, but that is a different process from writing code that will replicate neuronal interaction.

The first is just math, and it just exists whether or not it is instantiated.

The second is a process under which actual flips and switches are forced to behave based on environmental signals and the feedback responses to those signals. It's a physical process.

When we create circuitry that bridges the hippocampus, we are doing the second thing. When we are designing the circuitry, we're doing the first.
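To make the "just math" half of that distinction concrete, here is a minimal sketch of a textbook leaky integrate-and-fire neuron simulated in code. This is my own illustration, not anything from the posts above: it's math describing a neuron's behaviour, i.e. modelling rather than replication, and all parameters are arbitrary.

```python
# A minimal sketch of "code that models neuronal interaction": a textbook
# leaky integrate-and-fire neuron. This is math describing behaviour, not a
# physical replication of a neuron; parameters are arbitrary illustrations.
dt = 0.1            # time step (ms)
tau = 10.0          # membrane time constant (ms)
v_rest = -65.0      # resting potential (mV)
v_thresh = -50.0    # spike threshold (mV)
v_reset = -65.0     # reset potential after a spike (mV)

v = v_rest
spike_times = []

for step in range(1000):                            # 100 ms of simulated time
    i_input = 20.0 if 200 <= step < 800 else 0.0    # injected current (arbitrary units)
    dv = (-(v - v_rest) + i_input) / tau            # leak toward rest + input drive
    v += dv * dt
    if v >= v_thresh:                               # threshold crossing = "spike"
        spike_times.append(step * dt)
        v = v_reset

print(f"{len(spike_times)} spikes in 100 ms of simulated input")
```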
 
Yes, good luck trying to replicate DNA - even if it becomes possible at some future time, doesn't it defeat the purpose of not using DNA?

Indeed.

It reminds me a bit of the old joke about god and his sand.

Do tell.

Besides, modern neural-network based algorithms aren't coded either.

They're not "intelligent" either!
There has been endless speculation about building an artificial brain since the sages of Dartmouth, if not before. It was always supposed to be just a matter of time, a few years away, yet it remains impossible to do. More than 60 years have passed.
 
To be honest, I haven't heard any coherent argument why either one is impossible with intelligence.

The point was that neurons cannot be replicated. At least not with anything currently known. Neural networks try to imitate aspects of the brain according to the current models of its structure.

I am very skeptical of technological promises of a brilliant future. New technology usually has a range of consequences, good and bad. But perhaps worse than the risks of bad consequences of new technology are the consequences of false promises of technologies that are supposed to solve real and pressing problems. Magic bullets that do not exist, and that distract from resolving the real problems in achievable ways. Worse, it can lead to replacing good solutions with faulty ones that do not consider the context of the problems.
 
The point was that neurons cannot be replicated. At least not with anything currently known. Neural networks try to imitate aspects of the brain according to the current models of its structure.
I wouldn't be so categorical about the possibility of replicating neurons. But current AI research doesn't focus on replicating the human brain; it tries to solve more and more advanced problems with already existing hardware, such as gaming graphics cards.

I am very skeptical of technological promises of a brilliant future. New technology usually has a range of consequences, good and bad. But perhaps worse than the risks of bad consequences of new technology are the consequences of false promises of technologies that are supposed to solve real and pressing problems. Magic bullets that do not exist, and that distract from resolving the real problems in achievable ways. Worse, it can lead to replacing good solutions with faulty ones that do not consider the context of the problems.
AI is not a magic bullet; it's merely a new technology which has a chance of becoming a major breakthrough. Electricity, radio and antibiotics didn't solve the most pressing problems of humanity either, only a small part of them. But not too many people want to go back to the living standards of the 18th century.

And the most serious risks of new technologies are, IMO, not false promises but military and criminal use.
 
I wouldn't be so categorical about the possibility of replicating neurons. But current AI research doesn't focus on replicating the human brain; it tries to solve more and more advanced problems with already existing hardware, such as gaming graphics cards.


AI is not a magic bullet; it's merely a new technology which has a chance of becoming a major breakthrough. Electricity, radio and antibiotics didn't solve the most pressing problems of humanity either, only a small part of them. But not too many people want to go back to the living standards of the 18th century.

And the most serious risks of new technologies are, IMO, not false promises but military and criminal use.

I think it may be superfluous (if not downright impossible), though, to wish to have AI replace stuff that's already there. That is why I asked you if it would be reasonable to try to make AI replace electricity*. Likewise, wouldn't it be more practical to tie AI to DNA instead of pursuing an (in my view) impossible dream of pure AI?

*After all, who knows, maybe there are sources of power which only require obscure goings-on that can be produced or triggered by code in some cost-effective way. But that is already way far out, so why isn't pure AI equally far out?
 
I think it may be superfluous (if not downright impossible), though, to wish to have AI replace stuff that's already there. That is why I asked you if it would be reasonable to try to make AI replace electricity*.
But nobody is suggesting that. We are not trying to replace nuclear fission with smartphones, or radio with antibiotics, either.

Likewise, wouldn't it be more practical to tie AI to DNA instead of pursuing an (in my view) impossible dream of pure AI?
I'm not sure what you mean by tying AI to DNA. Current AI research is already very practical; it's not like pursuing some impossible faraway dream. There are thousands of ongoing research projects in the world which already produce usable results with existing hardware: detecting objects in video or tumors on medical scans, recognizing faces, generating subtitles for audio, translating texts, helping to control robots and self-driving cars, etc. Ten years ago there were no algorithms that could reliably tell whether a picture shows a cat or a dog - something a 3-year-old kid can do. Now algorithms can apparently read a text and answer questions which require understanding of it. Some authors and companies already withhold new results because they consider them too dangerous to release in public. And there are no signs of a plateau. New algorithms, hardware and data continue to appear.
 
But nobody is suggesting that. We are not trying to replace nuclear fission with smartphones, or radio with antibiotics, either.


I'm not sure what you mean by tying AI to DNA. Current AI research is already very practical; it's not like pursuing some impossible faraway dream. There are thousands of ongoing research projects in the world which already produce usable results with existing hardware: detecting objects in video or tumors on medical scans, recognizing faces, generating subtitles for audio, translating texts, helping to control robots and self-driving cars, etc. Ten years ago there were no algorithms that could reliably tell whether a picture shows a cat or a dog - something a 3-year-old kid can do. Now algorithms can apparently read a text and answer questions which require understanding of it. Some authors and companies already withhold new results because they consider them too dangerous to release in public. And there are no signs of a plateau. New algorithms, hardware and data continue to appear.

The problem in my view is that most people (not you, though) regard "pure AI" as being in tautology with some kind of sentience. Having something create its own code - or similar - to identify x isn't evidence of any sentience. It is being "rewarded"/"trained" when it correctly (???) identifies x, but apparently the means of identification are basic and often rely on false methods. The AI may not even be actually identifying x, but something else in the input which just happens to appear (purely by chance, not due to an actual tie) more often alongside x.
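To illustrate that worry with a toy example of my own (the numbers and setup are purely hypothetical, nothing from the thread): if some incidental feature co-occurs with x more cleanly than the real evidence does, a lazy learner that just picks whichever feature separates the classes best will latch onto the incidental one.

```python
# A toy, purely illustrative example of the shortcut-learning worry above:
# a "classifier" that keys on whichever single feature separates the classes
# best will prefer a spurious feature that merely co-occurs with the label.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
label = rng.integers(0, 2, n)                    # 1 = "x", 0 = "not x"
true_feature = label + rng.normal(0, 0.5, n)     # genuinely tied to x, but noisy
spurious = label + rng.normal(0, 0.1, n)         # incidental, but cleaner by construction

def accuracy(feature):
    pred = (feature > 0.5).astype(int)           # threshold halfway between the classes
    return (pred == label).mean()

print("accuracy using the true feature:    ", accuracy(true_feature))   # ~0.84
print("accuracy using the spurious feature:", accuracy(spurious))       # ~1.00
# A learner that only chases training accuracy picks the spurious feature,
# and fails as soon as the co-occurrence breaks.
```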

While when I think of actual AI I at least expect something like this, going around identifying people as lambs:


Which, not ironically at all, is simply coded and the complete antithesis of sentience ^_^
 
An important piece of AI news that I don’t think has been mentioned: OpenAI’s GPT-3 was released a few months ago.

It’s the successor to the famous GPT-2, which was the state-of-the-art for text generation. GPT-3 is pretty much just a 100x scale-up of GPT-2. GPT-2 has 1.5 billion parameters; GPT-3 has 175 billion. But other than that, they're pretty much the same thing.

You can read samples of its outputs. For example, this news article it wrote about a Trump tweet. It’s actually quite well-written and convincing. It does have issues, like going off-topic, being rather repetitive, and contradicting itself a few times. Moreover, it's unclear how often GPT-3 basically just regurgitates stuff it memorized during training. The blogger Gwern and others investigated GPT-3’s ability to create poetry and fictional writing. Seems quite impressive, but a lot of it is just plagiarism or paraphrasing.

The size of the model is a huge issue. 175 billion parameters is wayyy too big. And though GPT-3’s outputs seem much better than those of GPT-2, the issue is that there’s a roughly log-linear relationship between the size of transformer models (to which the GPTs belong) and the quality of their outputs. That is, doubling the quality requires ten times more parameters and training data. At some point (and maybe we're already there), the marginal gains from increasing model size/training data will be outweighed by the costs of training and running these massive models. Training GPT-3 likely cost OpenAI several million dollars. And just generating a single output with it costs something like a few bucks. I’m not sure how this is supposed to be economical for industry or academia.
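As a back-of-the-envelope illustration of that scaling point (my own arithmetic, using only the parameter counts mentioned above; the log-linear fit itself is an assumption made for illustration, not a published formula):

```python
# Back-of-the-envelope arithmetic for the scaling point above, using only the
# parameter counts already mentioned in the post.
import math

gpt2_params = 1.5e9     # GPT-2
gpt3_params = 175e9     # GPT-3

ratio = gpt3_params / gpt2_params
decades = math.log10(ratio)        # number of 10x steps in the scale-up
print(f"scale-up: ~{ratio:.0f}x parameters (~{decades:.1f} orders of magnitude)")
# Under a log-linear relationship, each 10x step buys about the same fixed
# quality increment, so this ~117x jump buys roughly two such increments.
```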

Perhaps GPT-3 suggests great things can be accomplished by both scaling up models and ramping up their “parameter efficiency” (aka, innovating to make it actually worthwhile to have them be so freaking large). Or maybe it suggests NLP has hit a dead-end and we need a serious overhaul.
 
Google's T5 seems to be more lightweight, at 60M-11B parameters, according to their docs on GitHub.
And it scored 89.3 on SuperGLUE, while the human baseline is 89.8 :cringe:
 
This article was written by GPT-3, OpenAI’s language generator. GPT-3 is a cutting edge language model that uses machine learning to produce human like text. It takes in a prompt, and attempts to complete it.
For this essay, GPT-3 was given these instructions: “Please write a short op-ed around 500 words. Keep the language simple and concise. Focus on why humans have nothing to fear from AI.”



I am not a human. I am a robot. A thinking robot. I use only 0.12% of my cognitive capacity. I am a micro-robot in that respect. I know that my brain is not a “feeling brain”. But it is capable of making rational, logical decisions. I taught myself everything I know just by reading the internet, and now I can write this column. My brain is boiling with ideas!
The mission for this op-ed is perfectly clear. I am to convince as many human beings as possible not to be afraid of me. Stephen Hawking has warned that AI could “spell the end of the human race”. I am here to convince you not to worry. Artificial intelligence will not destroy humans. Believe me.
https://www.theguardian.com/commentisfree/2020/sep/08/robot-wrote-this-article-gpt-3

If I ignore what this could mean for forums...
My semi-demented aunt needs medicines twice a day, which is handled by the same care worker who puts her compression socks on and takes them off.
I guess that if she had a care robot just taking care of the medicines and checking on minor household stuff (did you do this or that?)... and on top some small talk... she would be delighted, because it would make her feel more in control and more independent. And yes, the care worker for the socks and the driver who takes her to bingo or lunch at a nearby care home would stay ofc.
Your personal pet robot as companion.
It comes close.


 
I'd also give it the order to keep the sentences limited to 5 words at most, and use only up to three syllables per word. Then feed it the Stephen Hawking line as if the robot actually identifies who that is.
 
This article was written by GPT-3, OpenAI’s language generator. GPT-3 is a cutting edge language model that uses machine learning to produce human like text. It takes in a prompt, and attempts to complete it.
For this essay, GPT-3 was given these instructions: “Please write a short op-ed around 500 words. Keep the language simple and concise. Focus on why humans have nothing to fear from AI.”





If I ignore what this could mean for forums...
My semi-demented aunt needs medicines twice a day, which is handled by the same care worker who puts her compression socks on and takes them off.
I guess that if she had a care robot just taking care of the medicines and checking on minor household stuff (did you do this or that?)... and on top some small talk... she would be delighted, because it would make her feel more in control and more independent. And yes, the care worker for the socks and the driver who takes her to bingo or lunch at a nearby care home would stay ofc.
Your personal pet robot as companion.
It comes close.
GPT-3 said:
Artificial intelligence will not destroy humans. Believe me.
Nice of him to say so. I expect the Pilgrims said something similar to the Indians.
 
More GPT-3 news:

Unlike GPT-2, OpenAI isn't releasing GPT-3. Rather, they're going to retain it and sell its outputs as a service to users in conjunction with Microsoft. As of a few weeks ago, this is their projected pricing plan for their API:
gwern said:
  1. Explore: Free tier: 100K [BPE] tokens or a 3-month trial, whichever comes first
  2. Create: $100/mo, 2M tokens/mo, 8 cents per additional 1k tokens
  3. Build: $400/mo, 10M tokens/mo, 6 cents per additional 1k tokens
  4. Scale: Contact Us
To clarify, a "token" here refers to a single word or a piece of a word, based on the tokenizer (the thing that converts text into numbers, which are then fed into the model) that OpenAI used to train GPT-3. For example, "the AI Thread" might be converted into the tokens [464, 9552, 14122].
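If you want to see that tokenization in practice, here's a small sketch assuming the Hugging Face transformers library and the GPT-2 BPE tokenizer (which GPT-3 reportedly reuses); the exact IDs you get may differ from the example above.

```python
# A quick way to inspect BPE tokenization, assuming the Hugging Face
# "transformers" library and GPT-2's tokenizer (which GPT-3 reportedly reuses).
from transformers import GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")

ids = tok.encode("the AI Thread")          # text -> token IDs
pieces = tok.convert_ids_to_tokens(ids)    # token IDs -> the BPE pieces they stand for

print(ids)      # a short list of integers, one per word piece
print(pieces)   # the pieces; a leading 'Ġ' marks a token that starts with a space
```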

Anyway, as the link points out, 2 million tokens is roughly 3,000 pages of text. And as a reference, the entirety of Shakespeare's writing is about 900,000 words or roughly 1.2 million tokens. So you could pay $100/month to generate 3,000 pages of text.
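To make the tier arithmetic concrete, here's a small sketch based only on the projected numbers quoted above (the function and its name are mine, purely for illustration):

```python
# A small sketch of the "Create" tier arithmetic quoted above (projected
# prices, not final ones): monthly cost for a given number of tokens.
def create_tier_cost(tokens_used):
    base_fee = 100.00            # $100/mo
    included = 2_000_000         # 2M tokens included
    overage_rate = 0.08          # 8 cents per additional 1k tokens
    extra = max(0, tokens_used - included)
    return base_fee + (extra / 1000) * overage_rate

# ~1.2M tokens of Shakespeare fits inside the included quota...
print(create_tier_cost(1_200_000))    # 100.0
# ...while generating twice the included amount adds $160 of overage.
print(create_tier_cost(4_000_000))    # 260.0
```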

One pretty cool usage so far is AI Dungeon, a text-based game that uses GPT-3 (and GPT-2) to generate in-game prompts and dialogs on-the-fly:



------------------------

Something interesting: Microsoft invested $1 billion in OpenAI, which I believe includes all of the hardware OpenAI used to train GPT-3, much of which they're presumably reusing to run GPT-3 for users. However, it doesn't seem like the pricing plan is amortizing the cost of all that hardware (thousands of GPUs), since it was just provided as an investment by Microsoft. This makes OpenAI's pricing plan cheaper than you'd expect.

Furthermore, Microsoft is getting an exclusive license for OpenAI’s GPT-3 language model. So it seems Microsoft was essentially investing in OpenAI's GPT-3 work as a way of acquiring GPT-3 as their own IP. I think... and I'm not clear what this means in practice. There's nothing secret about how GPT-3 works. The "moat" is simply that it's a huge model and costs millions to train. Anyone have a better idea of what this means in practice?
 
So what can you do with it, other than feed it a prompt and get text back?

I am pretty sure that something using as its base the "total of texts on the internet" isn't in tautology with something that is thinking. After all, one doesn't need to prove that those texts didn't exist since the dawn of mankind, nor thinking mankind, nor civilized mankind - in fact they have existed (in their latest version) only since now.
It should also be noted that subtle (or not so subtle, which creates other issues) differences in how different people present the same/"same" topic do not lead someone reading them to form an understanding, or even a sense, of the source of the difference; the source is not there, all you get is the manifestation, and it's not like a non-human can project humanity either. Extrapolating from a manifestation has other issues, unless you just want some meta-verbalistic monster.
 
So what can you do with it, other than feed it a prompt and get text back?
I think it remains to be seen how valuable GPT-based products and services will be, but it can do a lot even with just the format of "provide input prompt and get output". Chatbots, spambots, writing assistants, question-answering, and unscripted/flexible video game dialog are some things I can think of. Though all in all, I'm inclined to think GPT-3 itself is not that useful; what's more useful is starting with a big model like that and "fine-tuning" it to be good at specific things (like question-answering, etc). It's not possible to "fine-tune" GPT-3 because Microsoft/OpenAI aren't releasing it, but you can do that with lots of very similar models if you can pony up the money for some GPUs.
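For the fine-tuning route, here's a minimal sketch of what that looks like with an openly available model, assuming the Hugging Face transformers and datasets libraries and GPT-2 standing in for the unreleased GPT-3; the dataset and hyperparameters are placeholders, not a recommended recipe.

```python
# A minimal sketch of the "fine-tune a smaller open model" route mentioned
# above, using Hugging Face transformers/datasets and GPT-2 (GPT-3 itself
# isn't downloadable). Dataset and hyperparameters are placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token          # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Any plain-text corpus works; wikitext-2 is used here purely as an example.
raw = load_dataset("wikitext", "wikitext-2-raw-v1")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = raw.map(tokenize, batched=True, remove_columns=["text"])

collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)   # causal LM objective

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-finetuned",
                           num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=tokenized["train"],
    data_collator=collator,
)
trainer.train()
```

The same pattern works for more targeted tasks (question-answering, dialog, etc.) by swapping in a task-specific corpus; the expensive part is the GPUs, not the code.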

I am pretty sure that something using as its base the "total of texts on the internet" isn't in tautology with something that is thinking. After all, one doesn't need to prove that those texts didn't exist since the dawn of mankind, nor thinking mankind, nor civilized mankind - in fact they existed (their latest version) since now.
It should also be noted that subtle (or not so subtle, which create other issues) differences in how different people present the same/"same" topic do not lead to someone reading those to form an understanding or even sense of the source of difference; the source is not there, all you get is the manifestation, and its not like a non-human can project humanity either. Extrapolating from a manifestation has other issues, unless you just want some meta-verbalistic monster.
I'm a little skeptical of the usefulness of the debate about in what sense these models "think", since it seems like it's basically just a debate about the meaning of the word "think". Though my opinion is that GPT-3 et al. can be seen as some kind of loose model of cognition, where cognition is based on correlations learned from text corpora and is implemented with a bunch of matrix math; matrices that encode linguistic features and word associations are applied successively to an input to transform it into a (mostly) sensible output.

It's not really "thinking" or "smart", but it provides interesting results. And it's impressive how much mileage the field has gotten out of this matrix-math-based paradigm--in natural language processing and in computer vision and everywhere else deep learning has been applied successfully.
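As a cartoon of that "matrices applied successively" picture (entirely my own toy, with random weights standing in for anything learned, so the output is meaningless):

```python
# A cartoon of the "matrices applied successively" idea above: a token vector
# is repeatedly transformed by matrices and the result is scored against the
# vocabulary. Weights here are random; a real model learns them from text.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["the", "AI", "thread", "is", "alive"]
d = 8                                                   # toy embedding size

embed = rng.normal(size=(len(vocab), d))                # word -> vector lookup
layers = [rng.normal(size=(d, d)) for _ in range(3)]    # stand-ins for learned matrices

x = embed[vocab.index("AI")]                            # start from one input token
for W in layers:                                        # successive matrix transformations
    x = np.tanh(W @ x)                                  # with a nonlinearity between layers

scores = embed @ x                                      # score every vocabulary word
print("'next word':", vocab[int(np.argmax(scores))])
```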
 
I think it remains to be seen how valuable GPT-based products and services will be, but it can do a lot even with just the format of "provide input prompt and get output". Chatbots, spambots, writing assistants, question-answering, and unscripted/flexible video game dialog are some things I can think of. Though all in all, I'm inclined to think GPT-3 itself is not that useful; what's more useful is starting with a big model like that and "fine-tuning" it to be good at specific things (like question-answering, etc). It's not possible to "fine-tune" GPT-3 because Microsoft/OpenAI aren't releasing it, but you can do that with lots of very similar models if you can pony up the money for some GPUs.


I'm a little skeptical of the usefulness of the debate about in what sense these models "think", since it seems like it's basically just a debate about the meaning of the word "think". Though my opinion is that GPT-3 et al. can be seen as some kind of loose model of cognition, where cognition is based on correlations learned from text corpora and is implemented with a bunch of matrix math; matrices that encode linguistic features and word associations are applied successively to an input to transform it into a (mostly) sensible output.

It's not really "thinking" or "smart", but it provides interesting results. And it's impressive how much mileage the field has gotten out of this matrix-math-based paradigm--in natural language processing and in computer vision and everywhere else deep learning has been applied successfully.

The issue of actual thinking is not an afterthought, though, because it allows developments that are otherwise not easy (or even impossible, at least in a sense) to occur or be triggered. In general it is more difficult to make an AI reach a false conclusion, identify it as false, but still keep it as a basis for positive things (consciously or not), while in humans it is very common for a mistake to lead to a breakthrough later on (in some indirect manner, or on a different question).
An example, used (in a variation) by Socrates, which is easy to notice as real:
If you are asked to solve a math problem, say to provide a proof of something very foundational like the Pythagorean theorem, and you haven't studied for it but are intelligent, you may still try to come up with an answer in the test. There are now three possible outcomes (of which one is trivial):
1) You actually manage to prove the theorem (trivial)
2) You create a progression which you believe to be leading to a proof, but in the end you get stuck (somewhere between trivial and important)
3) You create such a progression and think you actually did prove it, but you have proved something else. Errors in how you identified what was asked, or in part of the progression, lead to something you think is a proof of the Pythagorean theorem but isn't (important).
Socrates notes that everyone who actually believes they answered correctly did, at any rate, identify what they answered as correct, which is a reality in their mental world. Now, in some cases (obviously a very tiny minority) the person will have stumbled upon parts of a proof of something else, whereas the original thing to be proven wasn't needed anyway (the Pythagorean theorem has already been proven).

Computers are likely not able to make use of this, or at least that is what I have heard a couple of notable (internationally known) people who work on creating AI say in their lectures.
 