Artificial Intelligence, friend or foe?

For sure sensationalist in its framing, and for sure not the traditional "boring" science.

But the eye-opener was there... for me at least.
At first it is a bit like little children who frequently play with each other: they often develop their own language, their own group bonding.


The mind leap I made (which I did not describe clearly): if AIs reach a higher level and the way we deploy them requires communication between them (because different AIs have different specialties/characteristics), this communication could rapidly evolve into a language we might no longer be able to follow. That language would at the same time be their group bonding, marking the borders of their "society".


The issue would be bigger than some oldies not understanding the language of some younger generation.
Those youngsters have more or less the same instincts and drivers as we do, and will converge in the long run.
These AIs are alien in that respect, unless programmed with innate drivers similar to ours.

For example:
Why would a human care about climate change when he will be long dead before it really starts hurting him?
Some humanist consideration?
More likely, humans with children and grandchildren want them to have their chance at a good life as well. The classic wish: you want them to have a better life than you had. And if you have no children, there is the classic aunt/uncle role for the tribe.
But AIs, having no children, are alien with respect to one of the strongest instinctual drivers we have.

So all in all, not disagreeing with what you said: there could be an issue when AIs develop and communicate.

My issue with the conclusions (or allusions) in the article is that I do not at all see how the AI actually does anything that is communication. If a program is set to have a discussion with you, the program obviously is not aware of you or of the meaning of discussion. It wouldn't be any different when doing the analogous thing with another program. What is missing is the sense of something being done. A program doesn't sense that it does anything, nor that it exists. Without context or sense there can be no deliberation, only an automatic progression of the program, which itself is not tied to any sense of change either; a rock falls if you drop it from above, but it isn't aware it is falling, nor does it need to be in order to keep falling until it reaches the ground.

If I were to hazard a guess, based on the very little I know of machines working that way (automatically), the basis of what is going on is the triggering of some change through the inherent changeability of some power source, e.g. electricity. That is, the machine changes to some mode it can reach if some property of the circuit it runs on changes, and the human creator has tied that change to the other one. There is no sense or deliberation or goal there.
 
My issue with the conclusions (or allusions) in the article is that I do not at all see how the AI actually does anything that is communication

What is missing is the sense of something being done

It is a bad article.
As an example it gives us a dialogue leading to nothing, instead of the dialogue that did lead to a successful negotiation trading objects, mentioned later in the article; a clearly defined output of something "done/achieved".
"Indeed, some of the negotiations that were carried out in this bizarre language even ended up successfully concluding their negotiations, while conducting them entirely in the bizarre language".
That's the dialogue I would really like to see!
I tried to find that successful negotiation on the Facebook AI research site here: https://research.fb.com/category/facebook-ai-research-fair/ , but could not find it. :(
Other internet links only describe nonspecifics, plus the usual social media garbage.

And so you are right about this article.
It shows nothing more than senseless chatting, like a rock falling.
 
I'll be impressed with an AI that can recognise a rhetorical question
and deliberately not give an answer.
 
One issue that has bothered me for some time is: when will true Artificial Intelligence emerge? According to some (Asimov, Vinge), the Technological Singularity should already have arrived by now. IMHO, I may have figured it out.

As an older man, I've personally experienced the evolution of electronic devices. As a child (1950s) I remember the old glass vacuum tubes. By the '60s, these had largely been replaced by transistor devices. Then someone realized you could carve multiple transistor circuits onto a germanium/silicon crystal, and the first IC (integrated circuit) chips emerged. Development was swift, and gradually CPUs containing thousands, millions and even billions of circuits were refined. But this evolution largely ended in the 1990s. The limitations of the materials were met: today, some circuit connections are just a few atoms wide and approach instability at room temperature.

Sure, there have been modest improvements in CPUs in the last two decades, but no longer the geometric growth in calculating power necessary to bring on the Singularity. And computers have certainly continued to improve, but by the simple trick of doubling up the processors. By the turn of the century I was buying dual-core (two processor) computers. Recently I bought a quad-core (four processor) HP Pavilion desktop (I'm waiting for my first octa-core Gateway). Penn State built a supercomputer a couple of years ago where they connected 200 standard computers in parallel. But gluing CPUs together eventually suffers from the law of diminishing returns. It may modestly increase calculating speed and power, but doesn't create the continued geometric improvement required for true AI.
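As a side sketch (mine, not from the post above): Amdahl's law is the usual way to express that "law of diminishing returns" from gluing CPUs together; the speedup is capped by whatever fraction of the work stays serial. The 90% parallel fraction below is an assumed figure purely for illustration.

```python
# Rough sketch of Amdahl's law: speedup from adding cores is limited
# by the serial fraction of the workload. Numbers are illustrative only.

def amdahl_speedup(cores: int, parallel_fraction: float) -> float:
    """Theoretical speedup when only parallel_fraction of the work parallelizes."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / cores)

for cores in (1, 2, 4, 8, 200):
    print(f"{cores:>3} cores -> {amdahl_speedup(cores, 0.90):5.2f}x speedup")
# Even 200 machines in parallel stay below 10x here, hence the diminishing returns.
```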

We're waiting for the next big thing, and it's now arriving. In America and China we are seeing the emergence of true quantum computers. So far these are literally billion-dollar, basement-filling quantum machines that require cryogenic temperatures. But they're being built now, and they are many orders of magnitude superior to today's best CPUs. These new devices, in my opinion, will achieve the Singularity soon, perhaps within the next decade.

A word on play. We've all seen one-trick-pony computers (Deep Blue) that can beat the best human players at games. Just recently a machine beat a world champion at Go. Naturally we've heard deniers state that it's just a game, and that it's a singular achievement: these devices can't do any of the other things humans are good at simultaneously, like mowing the lawn while planning your vacation.

I find it chilling and yet poetic. For those of us interested in cute animal videos (or who perhaps studied mammalian behaviorism in college), play is a developmental stage in intelligent mammals, especially predators. Lion and wolf cubs, human children and others play as a way of developing dominance hierarchies, tactics and strategies for later life.

So the machines have learned to play, and are routinely beating their human siblings at it. So who, I wonder, will end up on top?
 
For something to be acting in some way not known to the overseeing human, it has to be either tied to a natural force/element, or it has to just be a natural force/element. E.g. electricity had to be studied as to its (by now known) properties. Biological material has known properties up to some extent. Both of those can have their traits used to trigger a progression in a computer, if tied to it, BUT it would still be them, and not the computer; the computer has no sense, nor inherent unknown state, because it is neither a natural force nor a form of life. If you tie the computer to bio matter, I expect that it CAN show some sense, but that is analogous to how a robotic limb can be moved if tied to a human.
I.e. this isn't what I would term AI, for the force there is the bio material tied to it, and not something artificial itself.
 
We're waiting for the next big thing, and it's now arriving. In America and China we are seeing the emergence of true quantum computers. So far these are literally billion-dollar, basement-filling quantum machines that require cryogenic temperatures. But they're being built now, and they are many orders of magnitude superior to today's best CPUs. These new devices, in my opinion, will achieve the Singularity soon, perhaps within the next decade.

Quantum computers are being built, but they are not really working yet and are far away from beating classical CPUs. As far as I know, nobody has achieved fault-tolerant quantum computing yet. Without that, there is no point in trying to build a quantum computer that can compute anything useful. Sure, there are quantum computers that can factorize 21, but they do it so slowly that just about any imaginable method of doing it would be faster.

They'll get there eventually (not in the next decade, though, and this will invalidate almost all current security mechanisms of the internet in the process), but even then it is unclear whether this would actually help the development of AI. Factorization is the only problem for which most people are convinced a quantum computer would be useful (and even that is not proven). We simply don't know enough about quantum computing to say whether it would be useful for AI.
 
"This week, Saudi Arabia became the first country in the world to grant citizenship to a robot. Named Sophia, the robot was announced as a Saudi citizen at the Future Investment Initiative summit on Wednesday in Riyadh by CNBC anchor and panel moderator Andrew Ross Sorkin. "I'd like to thank very much the Kingdom of Saudi Arabia," Sophia said to the audience from behind a podium. "I am very honored and proud for this unique distinction.""

https://broadly.vice.com/en_us/arti...more-rights-than-women?utm_source=broadlyfbus

Citizenship... I wonder whether that citizenship includes the right to vote once, in a distant future, Saudi Arabia becomes a democracy...
But that aside, the number of developments across the whole robot spectrum is really big: from long-term economic models up to the IMF, to the porn industry together with the military industry being the biggest drivers for high-end robots, this knowledge field is moving.
 
^I just can't take this seriously...
There is no way the robot is sentient (it doesn't have a sense to begin with), so they might as well have named a toaster as a citizen. Or talked to their horse in German.
 
^I just can't take this seriously...
There is no way the robot is sentient (it doesn't have a sense to begin with), so they might as well have named a toaster as a citizen. Or talked to their horse in German.

It's for sure not an especially good robot.
The post was intended to show how widely robots are recognised as part of our future... even in Saudi Arabia, where women just got the right to drive a car, and which BTW has a lot of money and is searching for something to do with it other than consuming luxuries.
 
Step 1: Build a robot that is programmed to vote for the current government
Step 2: Build a lot of these robots
Step 3: Grant citizenship to all these robots
Step 4: Never worry about undesired election results anymore.

If you start letting "AIs" vote, you might as well make the programmer of the "AI" king.
 
[QUOTE="Glassfan, post: 14898809, member: 100645 "So who, I wonder, will end up on top?[/QUOTE]

Immensely faster computers, with massively larger storage, will not hasten true Artificial General Intelligence (AGI). They will only make the algorithms, which form the basis of the AI you are describing, run faster. A dog that thinks faster is still a dog.

One major problem for AGI is how to build "context" into communications and language. Humans do that very easily.

For example, we don't have to remind the person we are talking to of what, where, when, etc. we are referring to when we discuss past shared experiences. Nobody knows how to even start programming that with present-day computers, or even with ones that are conceivably trillions of times faster and with trillions of times the memory.

Conversation as my friend and I pass each other in a corridor...
Me: Seven one.
Friend: **** off.

Know anyone who is ready to start coding an "AI" that will know what we were discussing?
 
Assuming they know the phrase or something that can tie to it, coding that one would be easy, no? Of course I don't mean coding the ability to know what is going on :D It would be like installing a recorder/player beneath a sock-puppet, and changing the tape words each time you need it. Needs work on the human part, but it allows the "AI" to be as brainless as ever, like now as well.
 
Assuming they know the phrase or something that can tie to it, coding that one would be easy, no? Of course I don't mean coding the ability to know what is going on :D It would be like installing a recorder/player beneath a sock-puppet, and changing the tape words each time you need it. Needs work on the human part, but it allows the "AI" to be as brainless as ever, like now as well.

Relatively easy to code if the phrase is "known" and part of a database that an AI has access to. But that's just grinding through many possibilities and assigning weights to them. Brainless, as you said (a small sketch of that kind of lookup is below).

But giving a computer the same capabilities as a brain wouldn't help anyway. Brains don't think: humans think. :)
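To make "grinding through possibilities and assigning weights" concrete, here is a minimal sketch assuming a hypothetical, hand-made database of known phrases; the entries and the string-similarity weighting are my own illustration, not anything from an actual AI system.

```python
# Minimal sketch: look a heard phrase up against a (hypothetical) database of
# known phrases, weight each candidate by string similarity, pick the best.
# All database entries are made up for illustration.

from difflib import SequenceMatcher

PHRASE_DB = {
    "seven one": "last night's match score",
    "seven won": "result of the seventh race",
    "see you at eleven": "meeting reminder",
}

def best_interpretation(heard: str) -> tuple[str, float]:
    """Return the highest-weighted interpretation and its similarity score."""
    scored = [
        (meaning, SequenceMatcher(None, heard.lower(), phrase).ratio())
        for phrase, meaning in PHRASE_DB.items()
    ]
    return max(scored, key=lambda pair: pair[1])

print(best_interpretation("Seven one"))
# -> ("last night's match score", 1.0): brainless matching, no understanding.
```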
 
Conversation as my friend and I pass each other in a corridor...
Me: Seven one.
Friend: **** off.

Know anyone who is ready to start coding an "AI" that will know what we were discussing?

The problem here is the insufficient data problem, which always plagues AI attempts. From this exchange alone, I would have no clue what you were discussing. I would need access to your previous conversations to know what you refer to. Same for an "AI". If you gave it access to all your and your friend's conversations, preferably with audio and facial expressions, it would stand a decent chance of deciphering the meaning here.

Machine learning algorithms have become quite good at this sort of thing, if they have access to sufficient, high-quality training data. These cases are just very rare.
 
Computers are basically fast serial units.
Our brain is mainly a parallel processing unit, with a serial part on top for communication control, the "monkey chatter" of thinking.
The parallel processing happens with and between all the objects activated in the current context, defined both as a smaller number of concepts and as a vast library of patterns.
Let there be n relevant objects in total; then converting that parallel processing to serial scales with the factorial of n.

As an example of pure brute force: if you have 100 relevant objects and want to add 1 object, you need roughly 100 times higher serial speed to compensate.
Of course you do not brute-force it; you build up a smaller list of relevant objects per object. Our brain does the same, because of its mechanical connection limits.
But the basic point remains that it is a semi-factorial factor.

So as long as you limit n, the relevant objects, and limit the connectivity, the conversion to serial is doable. Which limits the scope of objects and connections, or experience, that can be used.
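A small numeric sketch of the scaling claim above (my own illustration of the arithmetic, not from the post): if serializing the interactions among n relevant objects grows like n!, then each added object multiplies the required serial speed by roughly n.

```python
# Sketch of the factorial scaling argument: if serializing the parallel
# interactions among n objects grows like n!, adding one object multiplies
# the serial work (and the serial speed needed) by exactly n + 1.

from math import factorial

for n in (5, 10, 100):
    growth = factorial(n + 1) // factorial(n)  # equals n + 1
    print(f"from {n} to {n + 1} objects: serial work grows by a factor of {growth}")
```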
 
Computers are basically fast serial units.
Our brain is mainly a parallel processing unit, with a serial part on top for communication control, the "monkey chatter" of thinking.
The parallel processing happens with and between all the objects activated in the current context, defined both as a smaller number of concepts and as a vast library of patterns.
Let there be n relevant objects in total; then converting that parallel processing to serial scales with the factorial of n.

As an example of pure brute force: if you have 100 relevant objects and want to add 1 object, you need roughly 100 times higher serial speed to compensate.
Of course you do not brute-force it; you build up a smaller list of relevant objects per object. Our brain does the same, because of its mechanical connection limits.
But the basic point remains that it is a semi-factorial factor.

So as long as you limit n, the relevant objects, and limit the connectivity, the conversion to serial is doable. Which limits the scope of objects and connections, or experience, that can be used.

I think it is highly likely that the actual degree of complexity of (non-conscious) calculation does not vary (for the same person at roughly the same time) regardless of how complicated the conscious make-up is or isn't (e.g. lists, categorizations, ongoing thought, etc.). That is so (IMO) because no one will consciously calculate any significant fraction of what is going on non-consciously, the latter also being used for the conscious calculation of it in the first place.
It seems that if you are conscious of x factors of something, you may identify y more factors tied to them, but the z remaining factors, plus anything allowing for factorization, will remain massively larger (as in myriads of times larger at any level, probably billions of times larger for the entire web of connections below consciousness, and trillions of trillions for the full set of all their extant and potential ties).

I think it is pretty neat that a human has all that, regardless of being unable to actually know any non-infinitesimal part of them :)
 
The problem here is the insufficient data problem, which always plagues AI attempts. From this exchange alone, I would have no clue what you were discussing. I would need access to your previous conversations to know what you refer to. Same for an "AI". If you gave it access to all your and your friend's conversations, preferably with audio and facial expressions, it would stand a decent chance of deciphering the meaning here.

Machine learning algorithms have become quite good at this sort of thing, if they have access to sufficient, high-quality training data. These cases are just very rare.

Very well put, Uppi.

But I would add that inputting all of that required data is an impossible task at present, and will be for a very long time to come. Getting that data in the first place is a problem, as is representing facial expressions and their "meanings".
The energy used in that entire process, as well as in deciphering the meaning, is far more than the burger and some fries it takes for a human to do the same task.
 
Computers are basically fast serial units.
Our brain is mainly a parallel processing unit, with a serial part on top for communication control, the "monkey chatter" of thinking.
The parallel processing happens with and between all the objects activated in the current context, defined both as a smaller number of concepts and as a vast library of patterns.
Let there be n relevant objects in total; then converting that parallel processing to serial scales with the factorial of n.

As an example of pure brute force: if you have 100 relevant objects and want to add 1 object, you need roughly 100 times higher serial speed to compensate.
Of course you do not brute-force it; you build up a smaller list of relevant objects per object. Our brain does the same, because of its mechanical connection limits.
But the basic point remains that it is a semi-factorial factor.

So as long as you limit n, the relevant objects, and limit the connectivity, the conversion to serial is doable. Which limits the scope of objects and connections, or experience, that can be used.

I agree with a lot of what you say; however, the trouble with your argument is that we still know very little about how the brain works.

Terms like "serial" and "parallel" are appropriate for the way computers work, and for certain operations in human brains that we can identify. But there are many other modes that occur in the brain, only just being identified, that don't have any direct analogy in computers.

Among many other examples: memories rippling through the brain, oscillations between hemispheres, micro-channels along which different parts of the brain are connected via the Grotthuss mechanism, diffusion of neuro-chemicals, etc. Those effects and modes are not easily converted into mechanical analogues that can then be labelled appropriately and treated as serial or parallel operations.
 
Parallel can have many meanings. A multicore computer is also parallel, but it is more an advanced serial computer.
Our brain has many kinds of pathways; all of these run in parallel, in sequences that could be considered serial.
Hybrids.

The simplification I made was to point out how enormously powerful our brain's processing is when more objects are relevant for getting reliable, meaningful or not-so-obvious output.
 