The AI Thread

And there is the more fundamental question of what exactly real intelligence is supposed to be.
They probably mean intelligence not created by humans; however, artificial intelligence nowadays is often only partly created by humans. It seems just a question of time before it has surpassed humans in everything; the abilities it can achieve today are impressive.
 
Hi, I have a question:

Is the gist of the idea behind current AI to have the AI find some way to program for itself whatever would make it react to some lesson in the manner (supposedly) intended? That is what I get from the articles, but I want to ask why human access to that self-programmed code is not the norm.
 
Most current "AI" are classifiers, such that you give it a training set of predictors and responses and it will build an algorithm that will assign a probability of each response to a given set of predictors.

The classic "hello world" type example of ML is to use a CNN to classify the MNIST dataset of hand written characters. Here the predictors are the pixels of the input characters and the responses are the characters they represent. This is expandable, with surprisingly little change, to classifying customers to advertise to and prisoners to keep in jail.
 
Most current "AI" are classifiers, such that you give it a training set of predictors and responses and it will build an algorithm that will assign a probability of each response to a given set of predictors.

The classic "hello world" type example of ML is to use a CNN to classify the MNIST dataset of hand written characters. Here the predictors are the pixels of the input characters and the responses are the characters they represent. This is expandable, with surprisingly little change, to classifying customers to advertise to and prisoners to keep in jail.

But why is it the norm (from what I have heard) that you (the human) can't access the algorithm that was built? I mean, since it is an actual algorithm, it should be easy to reproduce.
 
It differs by the algorithm used. For example, random forests are actually a great and easy solution to many problems, and they are quite easy to interpret. The problem largely comes from convolutional neural networks (CNNs), aka deep learning. These are great for image processing and have shown a lot of value in many fields where there are loads of interconnected predictors of unknown significance. They consist of many layers of interconnected nodes (see images below), and learning consists of changing the weight of each inter-node connection. It is very difficult to interpret how the training data affects these weights, and how these weights affect the output.
Spoiler CNN architecture :
[Images: Neural Abstraction Pyramid diagram; typical CNN architecture diagram]
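As an aside, the interpretability gap is easy to see in code: a random forest exposes per-feature importances directly, while a CNN's raw weights offer nothing comparable. A minimal sketch with scikit-learn, on made-up data with hypothetical feature names:

```python
# Random forests are easy to interrogate: scikit-learn reports how much
# each predictor contributed. Data and feature names here are invented.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

for name, importance in zip(["age", "income", "visits", "tenure"],
                            forest.feature_importances_):
    print(f"{name}: {importance:.3f}")  # how much each predictor mattered
```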
 
I'm a little unsure what exactly you're referring to. I think there are two ways one might say a deep learning model is not "accessible".

(1) The model is a "black box". This is what Samson was describing. It's often said deep learning models are "black boxes" because they take some input, spit out an output, and what exactly happens in between is usually pretty unclear. As Samson said, the underlying issue is the models consist of millions (nowadays, even billions) of connections with weights (aka parameters) between the connections. Before you train the model, the weights are just randomly initialized. But during training, the weights are gradually updated little by little such that the model can eventually do the task you're trying to teach it to do (e.g., classify pictures, answer questions, etc). When all is said and done, you have a model that uses all these learned weights to successively process the input in order to give you some output (e.g., the classification of a picture, the answer to a question, etc). But it's very opaque and hard to interpret what's going on.
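(A toy sketch of that "updated little by little" idea, using a single made-up weight instead of millions, just to make the mechanics concrete:)

```python
# One weight, gradient descent on a squared error. Real models do this
# simultaneously for millions or billions of weights, which is why the
# result is so hard to interpret.
import random

w = random.uniform(-1.0, 1.0)       # random initialization
target = 3.0                        # the function we want to learn: y = 3 * x
for step in range(200):
    x = random.uniform(-1.0, 1.0)   # a training example
    error = w * x - target * x      # model output minus desired output
    gradient = 2 * error * x        # d(error^2)/dw
    w -= 0.1 * gradient             # nudge the weight a little
print(w)                            # close to 3.0 after training
```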

That being said, the inner workings of a deep learning model typically aren't totally inscrutable. Often you can more or less analyze what's going on inside, even if the model is huge. But it's complicated, surgical, rather subjective, and not many people do it (or know how). There are also techniques for trying to discern how different pieces of training data impacted the model, but that's an ongoing research area and, like Samson said, it's difficult.

(2) A simpler case would be one where the model itself literally cannot be accessed. For example, GPT-3 (a very powerful and currently very popular model) is only accessible if OpenAI has let you use their API. And when you say "since it is an actual algorithm, it should be easy to reproduce", this isn't true in the case of GPT-3, because GPT-3 is so massive that it took hundreds of graphics cards and millions of dollars to create. The algorithm that was used is well known and theoretically completely reproducible, but it takes a lot of resources to apply it at the scale you'd need to create a second GPT-3.

(well, a few people have already replicated it more or less, but it wasn't exactly easy)
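For concreteness, API-only access looked roughly like this at the time, assuming the openai Python package of that era and an API key granted by OpenAI (the key and prompt below are placeholders):

```python
# You send text to OpenAI's servers and get text back; the model's
# weights never leave their machines.
import openai

openai.api_key = "sk-..."  # placeholder; issued by OpenAI, not self-hostable

response = openai.Completion.create(
    engine="davinci",  # the GPT-3 model family
    prompt="Explain Turing completeness in one sentence:",
    max_tokens=50,
)
print(response.choices[0].text)
```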
 
I can't look into this currently, since I need some background first, so it (if anything at all) is further down the road for me. But I was reacting to some views I have read on current AI. For example, I recall someone claiming (in some article) that, in the process of such deep learning, the AI may actually come up with new models which could be of use to science, but that those wouldn't be available to us. It made an impression, since such a thing would at least require the random formation of a subsystem within a vast system (tied to machine language, ultimately). Yet to be really useful to humans it would need to feature some very specific dynamics, and those don't seem likely to arise randomly; they would have to be triggered by some code, and it would make little sense not to use that code to produce such a subsystem as its own thing, instead of as an unseen cog turning inside an AI project.
In other words, I was impressed by the claim that specifically organized subsystems (which make sense to a human on what is termed the meta-level, i.e. examination from outside the system) could be produced semi-randomly, as a monstrous epiphenomenon of an AI project. But more than likely the claim itself was unrealistic (?).
One such specific claim was that the AI might come up (on its own) with some Newtonian physics-based model of the solar system, or some variation which also works up to a point (e.g. a Ptolemaic one), and use it (I suppose an isomorphism of it, tied to the things it was asked to guess correctly) as a guide for making correct predictions on some level, while the human wouldn't be able to tell that this had happened at all.
 
Adrian de Wynter, Turing Completeness and Sid Meier’s Civilization.
Abstract:
We prove that three strategy video games from the Sid Meier’s Civilization series: Sid Meier’s Civilization: Beyond Earth, Sid Meier’s Civilization V, and Sid Meier’s Civilization VI, are Turing complete. We achieve this by building three universal Turing machines, one for each game, using only the elements present in the games, and using their internal rules and mechanics as the transition function. The existence of such machines implies that, under the assumptions made, the games are undecidable. We show constructions of these machines within a running game session, and we provide a sample execution of an algorithm, the three-state Busy Beaver, with one of our machines.

https://arxiv.org/abs/2104.14647v1
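To make the abstract's terms concrete: a Turing machine is just a tape, a head, and a transition table. The paper builds these out of Civ tiles and units; the sketch below does the same in a few lines of Python, running a three-state Busy Beaver (one of the known champion tables; sources write it in slightly different variants):

```python
# A minimal Turing machine simulator running a 3-state Busy Beaver.
from collections import defaultdict

# (state, symbol) -> (symbol to write, head move, next state); "H" halts.
RULES = {
    ("A", 0): (1, +1, "B"), ("A", 1): (1, -1, "C"),
    ("B", 0): (1, -1, "A"), ("B", 1): (1, +1, "B"),
    ("C", 0): (1, -1, "B"), ("C", 1): (1, +1, "H"),
}

tape = defaultdict(int)  # unbounded tape, initially all zeros
head, state, steps = 0, "A", 0
while state != "H":
    write, move, state = RULES[(state, tape[head])]
    tape[head] = write
    head += move
    steps += 1

print(steps, sum(tape.values()))  # 13 steps, 6 ones written: the Sigma(3) = 6 record
```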
 
Explain like I'm 34?
 
Civ5+6 are "computers".
Turing-completeness means roughly that the system is able to compute anything what a computer could compute.
So in theory you could do any type of calculation in Civ5+6 (if you really wanted, which I doubt anyone does).
 
Civ5+6 are "computers".
Turing-completeness means roughly that the system is able to compute anything what a computer could compute.
So in theory you could do any type of calculation in Civ5+6 (if you really wanted, which I doubt anyone does).
What I do not get is how this makes Civ a computer, rather than a recording device for a real computer, which is the player.

They have a worker and a pillaging unit recording the state via the presence or absence of an improvement and/or road on certain tiles, with the resources gained from the tiles "counting" the result, and the player following a table of instructions. If this makes Civ Turing complete, is a piece of paper Turing complete, or a pile of matchsticks?

You could automate it with the API, and I do not know what that involves in these versions, but in Civ 4 that would be equivalent to saying Python is Turing complete, and we already know that.

Spoiler Table player follows :
[Image: the instruction table the player follows (civ-Turing.png)]
 
If this makes Civ Turing complete, is a piece of paper Turing complete, or a pile of matchsticks?

They're not machines, but if you discard that requirement, then yeah, I'd guess so.
It's about having a medium to which you can give instructions to compute anything. Pen and paper would count, I guess, as would your fingers.
Not all machines qualify (obviously), e.g. your washing machine, loads of stuff, I guess.

It's funny how they kept their version of the Turing machine as close as possible to the original one, lol: https://en.wikipedia.org/wiki/Turing_machine#Description

As for the Civ4/Python analogy, that is not the same, since not everything in Python is exposed in Civ4 itself.
 
First law of probability: the RNG* is never fair! Which leads to the conclusion that someone programmed it that way, and therefore:
We live in a sophisticated simulation.

*RNG - Random Number Generator ;)
 
A wrong conclusion, which civ players only believe until they are good enough that nothing hinges on a single unit produced and the law of large numbers prevails.
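A quick simulation makes the point; the 75% combat odds here are a made-up example:

```python
# Individual battles feel streaky, but over many rolls the observed win
# rate converges to the true probability (the law of large numbers).
import random

WIN_CHANCE = 0.75
for n in (10, 100, 10_000, 1_000_000):
    wins = sum(random.random() < WIN_CHANCE for _ in range(n))
    print(f"{n:>9} battles: observed win rate {wins / n:.3f}")
```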
 
The ending (with the university guy speaking to Penrose) is not very good, but the lecture is about consciousness likely not being computable. The technical bit of it is about Turing and (apparently) a way he tried to find, in a 1939 paper, around the limitations placed on machines by the Gödel sentence (a way around the infinite axiom schemas, etc.).
Penrose is certainly against the view that real AI (as in a machine that has "understanding") can exist.


There is also a nice reference to ordinals, using Herakles and the Hydra.
 
Explain like I'm 34?
In a similar vein, it is possible to use some simple rules, like those in Conway's Game of Life, to create other programs and outputs.
A digital clock is fairly difficult.

Tetris has already been done. Civ could be implemented too; it would be a ridiculously difficult task, but it's doable, because Turing-completeness guarantees that it can be done (but not how!).

This video might help you understand a bit more.
Let’s BUILD a COMPUTER in CONWAY's GAME of LIFE
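(For reference, the rule set that video builds its computer on is tiny. A minimal NumPy sketch of one Game of Life step, plus the glider pattern used as a "signal" in such constructions:)

```python
# One Game of Life update step on a wrapping grid.
import numpy as np

def step(grid: np.ndarray) -> np.ndarray:
    # Count the eight neighbours of every cell via wrapped shifts.
    neighbours = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # A live cell survives with 2 or 3 neighbours; a dead cell is born with 3.
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(np.uint8)

# A glider: the classic moving pattern.
grid = np.zeros((8, 8), dtype=np.uint8)
grid[0, 1] = grid[1, 2] = grid[2, 0] = grid[2, 1] = grid[2, 2] = 1
for _ in range(4):
    grid = step(grid)   # after 4 steps the glider has moved one cell diagonally
print(grid.sum())       # still 5 live cells: the glider persists
```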
 