Let's be real

Also, you're a chess player/tutor for a living (or so I'm led to believe, based on our previous conversations). Deep Blue beat the best chess player in the world, and this was over 20 years ago. Deep Blue could evaluate over 200 million moves per second. Like I said, this was over 20 years ago. Why you think tax return software is the best we've got, I don't know.
 
Dude "AI" is not intelligent, roaches are intelligent, pigeons are intelligent. Computer programs cannot think, they process what we give them like a toaster processes bread. Like I said I'm glad you and Zelig have nice jobs and degrees but you're not creating gods, sorry.

Some programs can 'learn' (to use the term loosely) a little, but only within the parameters we give them; that's still not fundamentally different enough to call intelligent even on the level of a plant or fungus.

AI doesn't control the world; that's some conspiracy gobbledegook. Yes, most stock trading is computer driven, and computers are analyzing us and trying to make predictions. But if a genius created an algorithm to guess who you'd be best off marrying, and it made you happy ever after, the genius would be smart; his software, however sophisticated, would still be software.

Actual AI does not exist. Maybe some bs 'weak AI' but calling it a rose doesn't make it one.

The rosy smell is all in your mind, call it virtual reality if you will... you may be getting off but your mate is not really there.
I'm not totally on board with saying life forms like cockroaches are intelligent in a way fundamentally different from AI. Species are a store of information for evolution as it performs optimizations based on varying environments. Evolutionary algorithms in AI range from pretty simplistic efforts to approximate functions to decent models of actual evolution. While I'm not a biologist, I'm pretty okay with comparing species to functions that are gradually modified over time to maximize their "fitness" or be more "optimal." So then lots of species could come close to failing your "parameters" test. Evolution has warped the hell out of all lifeforms through its parameters. Intelligence doesn't have to be free of parameters and molding. Of course, the cockroach example stuck out to me because there's more of an argument there than with humans.

I'll point out that lots of stuff in AI is just developing clever ways to learn certain functions (e.g., the function of detecting a human face) by gradually minimizing some error function over time. So it's hard for me to see how this broad framework puts us on a track towards some sort of "strong" conscious AI. No matter what, the consciousness aspect is blocked by computer architectures. At a low level, even good convolutional neural networks (the really in-vogue ones that right now are doing lots of stuff like facial recognition, voice recognition, and learning how to paint like van Gogh) are just building functions that, when executed, pass a sequence of low-level instructions through the processor. Reading from memory, doing some computations, changing some registers, writing back to memory, etc., is clearly very different from the consciousness-creating processes in biological brains.

But this doesn't lead me to automatically agree with your overall assertion that current programs and AI techniques aren't intelligent. At the very least, they're intelligent in a different sense, and I don't want that to sound trivial, because it isn't; consciousness produced through evolutionary constraints shouldn't limit our idea of what counts as intelligent.
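To make the error-minimization point concrete, here's a toy sketch in Python (the target function, learning rate, and step count are all invented for illustration; a real convolutional net does the same basic thing with millions of parameters):

```python
# A tiny model "learning a function by gradually minimizing an error function."
# Target concept the learner is never shown directly: y = 3x + 2.
def target(x):
    return 3.0 * x + 2.0

# Training data: (input, correct output) pairs.
data = [(x, target(x)) for x in range(-5, 6)]

# Model: a line y = w*x + b, starting from ignorance.
w, b = 0.0, 0.0
lr = 0.01  # learning rate (chosen arbitrarily for this toy)

for step in range(2000):
    # Gradient of the mean squared error over the data set.
    grad_w = grad_b = 0.0
    for x, y in data:
        err = (w * x + b) - y
        grad_w += 2 * err * x / len(data)
        grad_b += 2 * err / len(data)
    # Nudge the parameters a little way downhill on the error surface.
    w -= lr * grad_w
    b -= lr * grad_b

print(w, b)  # drifts toward 3.0 and 2.0 without ever being told them
```

Whether you call that "intelligent" is exactly the argument here, but the answer is genuinely learned rather than hand-coded.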

And I don't think AI that more and more resembles human thought and creativity is in the distant future. Right now AI researchers model target concepts like faces, voices, handwriting, driving vehicles, and more recently, music and art creation. I totally expect more and more creative "human-like" activities to be modeled as target concepts that AI can learn. And AI could even have the modeling of target concepts as a target concept itself. This doesn't sound to me like conscious AI will spring into existence (again, architectural limitations), but it's a compelling case that AI shouldn't be trivialized as mere software.
 
The fundamental difference between a machine and a roach (and apparently one not to be breached, at least in an actual machine rather than a machine-DNA hybrid) is that the roach evidently has a sense of something; i.e., it isn't linked to what it does in a direct and empty manner, the way a machine is to its program, or the way a rock is to the "program" of free fall to the ground.
It seems that having a sense (any sense at all) requires biological matter. I suppose even an amoeba or a fungus has some sort of basic "sense". It may seem easy to go from absolute 0 to 0.000000...1, but apparently it isn't easy at all, and may be impossible, so DNA parts will be needed in an intelligent "machine".

Also, recall that machines already require electricity, and their programs (e.g., in computers) ultimately function by way of how electric circuits react; otherwise you can't even have a binary code. Yet electricity isn't DNA, so it cannot provide an actual ability to have even the most basic sense.
 
That really doesn't help explain what this is supposed to be about.

It's about the philosophical implications of humans as appendages of machines, rather than the other way around.
 
At some point, maybe in the near future, we will reach this point called the Singularity, where an AI intelligent enough to redesign itself is created. AI development would become exponential then, and we will reach god-like status as a species, or be reduced to living batteries.
 
At some point, maybe in the near future, we will reach this point called the Singularity, where an AI intelligent enough to redesign itself is created. AI development would become exponential then, and we will reach god-like status as a species, or be reduced to living batteries.

But wouldn't almost anything make a better battery than a human? Like a potato? Or a battery?
 
Sure, but super AI will be fundamentally wicked and prefer evilness to efficiency.
 
Sure, but super AI will be fundamentally wicked and prefer evilness to efficiency.

They would just get rid of us, then. I'm not too worried about that scenario, as I think it's likely that advances in AI will go along with bioengineering advances, meaning we "merge" with the AIs rather than creating a monster that usurps our place.
 
Then there would be pure AIs and other AIs coming from human minds uploaded to the net or into cybernetic bodies. I wonder if both kinds of AI would be that different.

There was this book I can't recall...
 
At some point, maybe in the near future, we will reach this point called the Singularity, where an AI intelligent enough to redesign itself is created. AI development would become exponential then, and we will reach god-like status as a species, or be reduced to living batteries.
I think it is already possible to create AI that can program. I remember reading something about AI that can reprogram parts of itself, as well.
 
You aren't qualified to say what is and isn't AI. I'm probably not either, but as someone with a Computer Information Systems degree + tech industry experience I'm closer to being qualified than you are.

Tax software is AI, yo. That's the way it is, deal with it.
No it's not. By your own dick measuring standards of correctness on this topic, I am right and you are wrong.
 
I for one welcome our new robot overlords.

You all should be warned, if necessary I will roll over on all of you in about thirty seconds.
 
I never said they 'control the world'. Where did I say this?
Hygro implied we are being used by machines, which implies machines have agency, which of course they don't.

You seem to be saying 'there is no such thing as AI because none of our AI are advanced enough to think just like a human'. Nobody is arguing with you. You just don't understand what 'AI' means, to begin with.
Maybe I just have a higher standard for intelligence. My cat is pretty dumb, but my smartphone is many times dumber.

I took a 4000-level machine learning class. I know just enough to realize that it's a really deep field with very heavy theory.
No doubt, but it's still not real AI in my view. 'Weak AI' has no intelligence in any conventional sense, and the idea that the machines are controlling us, as Hygro suggests, is absurd. No doubt technology is changing our brains, but we're willingly using it to do so (and if I were rich I'd use it even more, buying all sorts of fancy neurofeedback equipment and monitoring all my biomarkers to optimize my health and brain function).

I think it'd be cool to have AI to interact with and learn from, I look forward to the day when we can legitimately call machines smart.

Deep Blue beat the best chess player in the world, and this was over 20 years ago. Deep Blue could evaluate over 200 million moves per second. Like I said, this was over 20 years ago.
And AlphaGo beat the best human last year. It's pretty amazing, but all this software is still designed by humans, and if it had to think as slowly as humans, it would stand no chance. These programs don't "think"; they simply analyze staggering amounts of possibilities. They are calculators, that's all.
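For what it's worth, the "analyze staggering amounts of possibilities" part is easy to show in miniature. Here's a bare-bones minimax search in Python on a toy take-away game (my own invented example; Deep Blue ran a vastly elaborated version of the same recursion, with pruning and a handcrafted evaluation, over millions of chess positions per second, and AlphaGo is a different beast built on learned evaluations):

```python
# Exhaustive game-tree search: look at every possibility, pick the best.
# Toy game: a pile of stones, each turn take 1-3, whoever takes the last wins.

def minimax(stones, my_turn):
    if stones == 0:
        # No stones left: the player who just moved took the last one and won.
        return -1 if my_turn else +1  # score from the root player's view
    scores = [minimax(stones - take, not my_turn)
              for take in (1, 2, 3) if take <= stones]
    # Maximize on our turns, minimize on the opponent's.
    return max(scores) if my_turn else min(scores)

def best_move(stones):
    return max((t for t in (1, 2, 3) if t <= stones),
               key=lambda t: minimax(stones - t, my_turn=False))

print(best_move(10))  # 2: leave a multiple of 4 and you cannot lose
```

There's no deliberation anywhere in there, just brute calculation, which is rather the point being made above.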
 
My own guess is that AI is a long way off. Intelligence, I predict, is going to turn out to be more deeply a function of "wetware" than current AI designers probably realize: of the repeated processing of sensory data, of such biological experiences as hunger and thirst. I also actually think AI designers are going to have to significantly dumb down computer processing in order to get near AI: give it as short an attention span as we have, as small a set of data as can be held in consciousness at one time as we have, as forgetful a long-term storage as we have. Then, such things as do get stored would get stored in multiple strands of connection, available for further patterning into yet other strands of connection as new experience gets layered over old. I think we're a long, long way off.
 
My own guess is that AI is a long way off. Intelligence, I predict, is going to turn out to be more deeply a function of "wetware" than current AI designers probably realize: of the repeated processing of sensory data, of such biological experiences as hunger and thirst. I also actually think AI designers are going to have to significantly dumb down computer processing in order to get near AI: give it as short an attention span as we have, as small a set of data as can be held in consciousness at one time as we have, as forgetful a long-term storage as we have. Then, such things as do get stored would get stored in multiple strands of connection, available for further patterning into yet other strands of connection as new experience gets layered over old. I think we're a long, long way off.

The main issue is that while organic matter breaks down into smaller parts than those identified at any given time (and possibly the division into smaller parts never stops at all), you obviously cannot have any code that doesn't rest on a set basis, an ultimate basis, which it uses as its axioms.
Artificial parts and programming won't/can't give a machine the ability to sense, any more than writing more equations on a piece of paper can give the paper the ability to sense it is being written on.

Imo, yes, pure computer AI (that is, not using any DNA parts) is impossible, not just difficult. And "AI" using DNA parts (it isn't really artificial then; it rests on DNA) will be unpredictable past a point, so likely dangerous under some conditions.
 
The main issue is that while organic matter breaks down into smaller parts than those identified at any given time (and possibly the division into smaller parts never stops at all), you obviously cannot have any code that doesn't rest on a set basis, an ultimate basis, which it uses as its axioms.
I disagree.
From what I have gathered, a universal agreement has emerged that we cannot directly write really good AI. But we can try to write an AI-improvement program which, eventually, results in a good AI. And once you go down that road, you can also have this improved AI change its own base of operations and even give itself a new and better AI-improvement program, causing an open-ended cascade of self-programming and self-improving AIs. That is the dream, right now.
And I do think this dream is realizable, because that is how evolution turned out humans, from virtually nothing. Evolution is basically an extremely complex but mindless brute-force algorithm for improving capabilities and eventually intelligence. The job of AI experts is to transfer this process to the digital world.
As I understand it, that is how they created the AI that mastered Go, that super-complex Asian board game. They were not smart enough to just tell the AI how to do it. But they were smart enough to tell the AI how to learn, by itself, how to do it. The latter, while not easy, is a lot easier than doing it directly, and it has a lot more potential.
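Here is a crude sketch of "tell it how to learn rather than how to do it," in Python (entirely my own toy, nothing like the real AlphaGo system): a blind mutate-and-keep-the-better loop that finds a good "strategy" it was never told.

```python
import random

# Evolution in miniature: random variation plus selection, no understanding.
random.seed(0)

def fitness(params):
    # Stand-in for "how well does this strategy perform?" -- here it secretly
    # rewards closeness to an optimum the learner itself never gets to see.
    optimum = [4.2, -1.3, 0.7]
    return -sum((p - o) ** 2 for p, o in zip(params, optimum))

best = [random.uniform(-5, 5) for _ in range(3)]  # random starting strategy

for generation in range(5000):
    # Mutate: a small random tweak to a copy of the current best.
    candidate = [p + random.gauss(0, 0.1) for p in best]
    # Select: keep the mutant only if it scores better.
    if fitness(candidate) > fitness(best):
        best = candidate

print(best)  # ends up near [4.2, -1.3, 0.7] without being handed the answer
```

The loop itself is mindless; selection does all the work, which is the whole analogy to evolution.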

There is one downside to this idea: in this process of self-improvement, you may lose track of, and control over, what the AI is even developing into.
Artificial parts and programming won't/can't give a machine the ability to sense, any more than writing more equations on a piece of paper can give the paper the ability to sense it is being written on.
That is a different question. There is a vague notion being thrown around that high intelligence would bring about internal being. That is the case with biological gene units, but we don't know exactly why, and we certainly don't know how to artificially recreate this phenomenon. I think there is good reason to be cautious about the idea of some kind of spontaneous coming-about of sentient programs.
Maybe a better understanding of the brain will shed some light on that in the future.
 
How long until a Smart Toaster becomes self-aware and tries to wipe out humanity? After all, a toaster is just a death ray with a smaller power source....

I just acquired a busted microwave oven. If I can't fix it I'm thinking I might see about weaponizing the magnetron tube...but I will make sure no AI gets control of it.
 