warpus
I've read claims that quantum tunnelling might play a part in sentience, but I don't think there's any proof for that.
Playing Atari with Deep Reinforcement Learning
As opposed to other AIs, such as IBM's Deep Blue or Watson, which were developed for a pre-defined purpose and only function within that scope, DeepMind claims that its system is not pre-programmed: it learns from experience, using only raw pixels as data input.[24] They test the system on video games, notably early arcade games such as Space Invaders or Breakout.[24][25] Without any changes to the code, the AI begins to understand how to play the game, and after some time plays more efficiently than any human ever could.[25] DeepMind's AI is currently applied to games made in the 1970s and 1980s, with work being done on more complex 3D games such as Doom, which first appeared in the early 1990s.[25]
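The "learns from experience" part can be illustrated with a minimal sketch of the Q-learning update at the heart of this kind of system. Everything here is illustrative: the toy one-state problem, the action names, and the table of values are made up for the example, and DeepMind's actual system replaces the table with a convolutional network reading raw Atari pixels (plus tricks like experience replay), but the core idea of nudging value estimates toward observed rewards is the same.

```python
import random

# Toy problem: in state 0, action "right" pays reward 1 and ends the
# episode; action "stay" pays 0. (Names and setup are invented for
# illustration; a real DQN learns from pixels, not a hand-made table.)
ACTIONS = ["stay", "right"]
q = {0: {a: 0.0 for a in ACTIONS}}  # Q-table: state -> action -> value

def update(state, action, reward, next_q_max, alpha=0.5, gamma=0.9):
    """One Q-learning backup: move Q(s,a) toward r + gamma * max Q(s',.)."""
    target = reward + gamma * next_q_max
    q[state][action] += alpha * (target - q[state][action])

random.seed(0)
for _ in range(200):
    action = random.choice(ACTIONS)          # explore by acting randomly
    reward = 1.0 if action == "right" else 0.0
    update(0, action, reward, next_q_max=0.0)  # episode ends after one step

# After training, the greedy policy reads the table: pick the best action.
policy = max(q[0], key=q[0].get)
```

Nothing about the game's rules is written into the code; the preference for "right" emerges purely from the rewards observed, which is the sense in which the Wikipedia excerpt says the system is "not pre-programmed".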
JUST what I thought. Firaxis should hire these people to make a murderously hard Civ game some day.
And I am actually sure this will be the future of AI. Constructing an AI from scratch is an immensely clumsy, lengthy, and error-prone process.
However, what this also means is that future AI design will probably be highly centralized, with a few (or one) ultimate super-algorithms that give you the one super AI for any task after a simulation run.
^Interesting. However (in my view):
1) The machine pixel-hunts and identifies something which, to its code, corresponds to what we would call the notion of an 'endgame'. This does not mean that the machine has such a notion. It merely uses its code in a set way, i.e. it is not able to differentiate between what winning the game is and what examining what winning the game is (i.e. it would not function with less linear games).
2) Some early games had more complicated winning conditions too. That Doom is the first with 3D does not at all make it more complicated than a vast number of 80s computer games. That said, I doubt this machine would be able to identify change in an environment which keeps altering in smaller areas of it (the 3D screen is not all of the game's world, just a subset of it, a room etc.).
3) I do not see how this routine of identifying pixels (and formulating in code some endgame tactic) leads to any kind of consciousness or intelligence. It is too linear and set. Intelligence seems to always presuppose something underlying whatever function one has in immediate consciousness!
What if all consciousness is is the illusion of consciousness? If the illusion is as good as the real thing and you can't tell them apart, wouldn't they be one and the same?
Kyriakos said: Depends on what level of illusion you are referring to.
E.g.:
1) the illusion is that we are 'conscious'
Kyriakos said: But by direct experience
Don't take it to mean that I believe this - but this.
Maybe the distinction between "illusion of consciousness" and "consciousness" is such that it doesn't make a difference.
If you're analyzing consciousness you can't rely on your personal experience, can you? It seems like a huge conflict of interest. How are you going to wrap your head around your own head?