What else is necessary for real intelligence... that's another question.
And there is the more fundamental question of what exactly real intelligence is supposed to be.
They probably mean intelligence not created by humans; however, artificial intelligence nowadays is often only partly created by humans. It seems just a question of time before it has surpassed humans in everything; the abilities it can achieve today are impressive.
Most current "AI" are classifiers, such that you give it a training set of predictors and responses and it will build an algorithm that will assign a probability of each response to a given set of predictors.Hi, I have a question:
Is the gist of the idea used for current AI to have the AI find some way to program for itself whatever would make it react to some lesson in the manner (supposedly) intended? That is what I get from the articles; however, I want to ask why access by a human to that computer-programming-for-itself code is not the norm.
Most current "AI" are classifiers, such that you give it a training set of predictors and responses and it will build an algorithm that will assign a probability of each response to a given set of predictors.
The classic "hello world" type example of ML is to use a CNN to classify the MNIST dataset of hand written characters. Here the predictors are the pixels of the input characters and the responses are the characters they represent. This is expandable, with surprisingly little change, to classifying customers to advertise to and prisoners to keep in jail.
But why is the norm (from what I have heard) that you (the human) can't have access to the algorithm built? I mean, since it is an actual algorithm, it is easy to reproduce.

It differs by the algorithm used. For example, random forests are actually a great and easy solution to many problems, and they are quite easy to interpret. The problem largely comes from convolutional neural networks (CNNs), a.k.a. deep learning. These are great for image processing, and have shown a lot of value in many fields where there are loads of interconnected predictors of unknown significance. They consist of many layers of interconnected nodes, and the learning consists of changing the weights of each inter-node connection. It is very difficult to interpret how the training data affects these weights, and how these weights affect the output.
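To make the interpretability contrast concrete, here is a hedged sketch with scikit-learn on a made-up table of predictors (the column names and data are invented purely for illustration): a random forest can report directly which predictors it leaned on, which is exactly the summary that is hard to pull out of a CNN's millions of weights.

```python
# Random forest interpretability sketch (assumes scikit-learn, numpy, pandas).
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical predictors; only "age" and "visits" actually drive the response.
X = pd.DataFrame({
    "age":    rng.integers(18, 80, size=1000),
    "visits": rng.integers(0, 50, size=1000),
    "noise":  rng.normal(size=1000),
})
y = ((X["age"] > 40) & (X["visits"] > 10)).astype(int)

forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Probability of each response for a given set of predictors...
print(forest.predict_proba(X.head(3)))

# ...and, unlike a CNN, a direct summary of which predictors mattered.
for name, importance in zip(X.columns, forest.feature_importances_):
    print(f"{name}: {importance:.2f}")
```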
I'm a little unsure what exactly you're referring to. I think there are two ways one might say a deep learning model is not "accessible".
(1) The model is a "black box". This is what Samson was describing. It's often said deep learning models are "black boxes" because they take some input, spit out an output, and what exactly happens in between is usually pretty unclear. As Samson said, the underlying issue is the models consist of millions (nowadays, even billions) of connections with weights (aka parameters) between the connections. Before you train the model, the weights are just randomly initialized. But during training, the weights are gradually updated little by little such that the model can eventually do the task you're trying to teach it to do (e.g., classify pictures, answer questions, etc). When all is said and done, you have a model that uses all these learned weights to successively process the input in order to give you some output (e.g., the classification of a picture, the answer to a question, etc). But it's very opaque and hard to interpret what's going on.
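A stripped-down illustration of that "randomly initialized, then updated little by little" idea, using plain NumPy on a toy one-weight problem; nothing here is specific to any real model, it only shows the mechanics.

```python
# Toy gradient descent: one weight, randomly initialized, updated little by little.
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(size=100)           # inputs
y = 3.0 * x                        # the "right answer" the model should learn

w = rng.normal()                   # the weight starts out random
for step in range(200):
    error = w * x - y              # how wrong the current weight is
    grad = 2 * np.mean(error * x)  # direction that reduces the squared error
    w -= 0.1 * grad                # small update toward a better weight

print(w)                           # ends up close to 3.0; a real network does this
                                   # for millions or billions of weights at once
```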
That being said, the inner workings of a deep learning model typically aren't totally inscrutable. Often you can more or less analyze what's going on on the inside, even if the model is huge. But it's complicated, surgical, rather subjective, and not many people do it (or know how). There are also techniques to try to discern how different pieces of training data impacted the model. But it's an ongoing research area and like Samson said, it's difficult (and not many people do it/know how).
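As one concrete (and simple) example of that kind of analysis, you can ask which input pixels a classifier's decision is most sensitive to by taking gradients of the output with respect to the input. A sketch, assuming PyTorch; the tiny model here is an untrained stand-in, since the point is only the mechanics.

```python
# One simple way to peek inside: a gradient-based "saliency map" showing which
# pixels the model's decision hinges on. The model below is an untrained
# placeholder classifier; a real analysis would use a trained one.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))

image = torch.rand(1, 1, 28, 28, requires_grad=True)  # stand-in input image
scores = model(image)
top_class = scores.argmax(dim=1)

scores[0, top_class].backward()        # gradient of the winning score w.r.t. the pixels
saliency = image.grad.abs().squeeze()  # large values = pixels the decision is sensitive to
print(saliency.shape)                  # torch.Size([28, 28])
```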
(2) A simpler case would be one where the model itself literally cannot be accessed. For example, GPT-3 (a very powerful and currently very popular model) is only accessible if OpenAI has let you use their API. And when you say "I mean, since it is an actual algorithm, it is easy to reproduce", this isn't true in the case of GPT-3 because GPT-3 is so massive that it required hundreds of graphics cards and millions of dollars to create. The algorithm that was used is well-known and theoretically completely reproducible, but it takes a lot of resources to apply that algorithm at the scale you'd need to create a second GPT-3.
(well, a few people have already replicated it more or less, but it wasn't exactly easy)
Adrian de Wynter, Turing Completeness and Sid Meier’s Civilization.
Abstract:
We prove that three strategy video games from the Sid Meier’s Civilization series: Sid Meier’s Civilization: Beyond Earth, Sid Meier’s Civilization V, and Sid Meier’s Civilization VI, are Turing complete. We achieve this by building three universal Turing machines–one for each game–using only the elements present in the games, and using their internal rules and mechanics as the transition function. The existence of such machines imply that under the assumptions made, the games are undecidable. We show constructions of these machines within a running game session, and we provide a sample execution of an algorithm–the three-state Busy Beaver–with one of our machines.
https://arxiv.org/abs/2104.14647v1
Explain like I'm 34?

What I do not get is how this makes Civ a computer, rather than a recording device for a real computer, which is the player.

Civ5+6 are "computers".
Turing-completeness means, roughly, that the system is able to compute anything that a computer could compute.
So in theory you could do any type of calculation in Civ5+6 (if you really wanted, which I doubt anyone does).
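For a sense of what "running an algorithm on a Turing machine" means outside the game, here is the three-state Busy Beaver from the abstract simulated directly in Python. The transition table is a 3-state, 2-symbol machine that achieves the Busy Beaver record of six 1s; the paper's point is that the games' rules and elements can stand in for this kind of table and tape.

```python
# A tiny Turing machine simulator running the three-state Busy Beaver mentioned
# in the abstract: starting from a blank tape, it writes six 1s and then halts.
from collections import defaultdict

# (state, symbol read) -> (symbol to write, head move, next state)
RULES = {
    ("A", 0): (1, +1, "B"), ("A", 1): (1, -1, "C"),
    ("B", 0): (1, -1, "A"), ("B", 1): (1, +1, "B"),
    ("C", 0): (1, -1, "B"), ("C", 1): (1, +1, "HALT"),
}

tape = defaultdict(int)   # unbounded tape, initially all 0s
head, state, steps = 0, "A", 0

while state != "HALT":
    write, move, state = RULES[(state, tape[head])]
    tape[head] = write
    head += move
    steps += 1

print(f"halted after {steps} steps with {sum(tape.values())} ones on the tape")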
If this makes Civ Turing complete, is a piece of paper Turing complete, or a pile of matchsticks?
In a similar vein, it is possible to use some simple rules, like those in Conway's Game of Life, to create other programs and outputs.
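For what it's worth, those simple rules fit in a few lines. A NumPy sketch of one Game of Life update step, with a glider as the example input; gliders and similar patterns are the building blocks people use to assemble "computers" inside Life.

```python
# One update step of Conway's Game of Life: each cell looks at its eight
# neighbours, then survives, dies, or is born according to two simple rules.
import numpy as np

def step(grid: np.ndarray) -> np.ndarray:
    # Count live neighbours by summing the eight shifted copies of the grid.
    neighbours = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # Survival: a live cell with 2 or 3 neighbours stays alive.
    # Birth:    a dead cell with exactly 3 neighbours comes alive.
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(int)

# A glider: this five-cell pattern crawls across the grid as the rules repeat.
world = np.zeros((10, 10), dtype=int)
world[1, 2] = world[2, 3] = world[3, 1] = world[3, 2] = world[3, 3] = 1

for _ in range(4):
    world = step(world)
print(world.sum())   # still five live cells, shifted one step diagonally
```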