Artificial Intelligence, friend or foe?

Parallel can mean many things. A multicore computer is parallel too, but it is really more of an advanced serial computer.
Our brain has many kinds of pathways, and they all run in parallel, in sequences that could also be considered serial.
Hybrids.

The simplification I made was to point out how enormous the processing power of our brain is when more objects become relevant to producing reliable, meaningful, or not so obvious output.

I understand what you mean.

I just find the terms "serial" and "parallel" too mechanical to be
appropriate for what happens in a dynamic environment like the human body,
as opposed to the static architecture of silicon-based computers.

Those "pathways" include ones with direct links to the brain, e.g. the spinal
column, Vagus nerve, and others. And then there are many interacting chemical
systems, like the endocrines, etc.
And complicating matters even more, we have all manner of flora and fauna in
and on our bodies, e.g. gut bacteria, skin dwellers, mites.
All of these produce chemicals that enter the blood stream and end up passing
through the brain's blood vessels at some stage. Who knows what effects they
have on a human's processing capabilities.

"Serial" and "parallel" don't feel right to me to encompass all that (and the
myriad things I've left out or don't know about). I don't have any idea of what
might be better simple descriptors - maybe it's not possible to summarise
it adequately in a word or two. :)
 
A very nice paper came out this week that illustrates how difficult the "missing
data" aspect is for AI, in particular for natural language interpretation.
The authors use episodes of CSI: Crime Scene Investigation as their datasets.
The researchers input the show's dialogue as well as descriptions of visual and
other cues, e.g. a suspect looking worried or frowning.

It's still a toy world, and humans input the dialogue and cues, rather than an
AI doing it independently, but IMO it's a very innovative and clever idea to
establish some benchmarks.

I hope you enjoy some cutting-edge research and drama in...

Whodunnit? Crime Drama as a Case for Natural Language Understanding,
https://arxiv.org/pdf/1710.11601.pdf
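For anyone curious, here is a rough sketch of what such episode data might look like in code. This is my own toy encoding, not the authors' actual format; every field name and the little "guesser" below are invented purely for illustration:

```python
# Toy sketch of how an annotated crime-drama episode might be encoded.
# This is a guess at a plausible format, NOT the schema from the
# "Whodunnit?" paper -- all names and fields are illustrative only.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Utterance:
    speaker: str           # who is talking ("Detective", "Suspect", ...)
    text: str              # the spoken line from the transcript
    cues: List[str]        # human-annotated visual/audio cues
    is_perpetrator: bool   # gold label: is this speaker the culprit?

episode = [
    Utterance("Detective", "Where were you last night?", ["suspect frowns"], False),
    Utterance("Suspect", "At home, alone.", ["avoids eye contact"], True),
]

def incremental_guesses(utterances: List[Utterance]) -> List[Optional[str]]:
    """After each utterance, output the current best guess of the perpetrator
    (or None).  A real model would update a probability distribution here;
    this stub simply latches onto the first speaker with a suspicious cue."""
    guess: Optional[str] = None
    history = []
    for u in utterances:
        if guess is None and "avoids eye contact" in u.cues:
            guess = u.speaker
        history.append(guess)
    return history

print(incremental_guesses(episode))   # -> [None, 'Suspect']
```

(My toy "guesser" mimics what I understand to be the interesting part of the task - committing to guesses as the episode unfolds, the way a viewer does - but check the paper for the real setup.)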
 
Very well put, Uppi.

But I would add that inputting all of that required data is an impossible
task at present, and will be for a very long time to come. Getting that data
in the first place is a problem, as is representing facial expressions and their
"meanings".
The energy used in that entire process, including deciphering the meaning, is far
more than the burger and some fries it takes a human to do the same task.

From the technical side, it could be done. You could outfit people with bodycams, Google Glasses, or similar devices and record everything to upload to a server (I admit that battery lifetime might be a problem; you would just have to find a power outlet every two hours ;)). The problems are more on the social side, because there is nobody you should trust with all that data, and on the economic side, because there would be a huge investment required that might not pay off.

The energy cost to calculate such a classifier would indeed be huge, but I would argue that it is less than the energy a human consumes during their lifetime while learning this stuff. It is just that humans learning natural languages will happen anyway, so there is no need to pay the cost of raising a human. But once you have the classifier, applying it is not that costly anymore. So, a human might be better and cheaper when it comes to analyzing a single conversation, but when you want to analyze millions of conversations, doing it manually is not really an option anymore and the computer is the only way you get results for all of them. Of course, this raises the social question of whether we really want anyone to analyze that many conversations.
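To put some rough numbers on that amortization argument (all of them invented for illustration; real costs depend entirely on the application):

```python
# Back-of-envelope amortization: one-off training cost vs. per-item costs.
# Every number below is made up purely for illustration.
TRAIN_COST = 500_000.0   # one-off cost to build/train the classifier
INFER_COST = 0.001       # cost to run it on one conversation
HUMAN_COST = 5.0         # cost for a human to analyze one conversation

def machine_cost(n: int) -> float:
    return TRAIN_COST + n * INFER_COST

def human_cost(n: int) -> float:
    return n * HUMAN_COST

for n in (1, 10_000, 1_000_000, 100_000_000):
    print(f"{n:>11,} conversations: machine {machine_cost(n):>14,.0f}  "
          f"human {human_cost(n):>14,.0f}")

# For a single conversation the human wins by a huge margin.  The machine
# breaks even around n = TRAIN_COST / (HUMAN_COST - INFER_COST) ~ 100,000,
# and beyond that the manual approach simply stops scaling.
```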
 

It will still fail, for one reason that should be very obvious but somehow escapes the proponents of AI as the future. Computers are good at correlating patterns and doing work on them. Any AI (and I'm using the term in the only way I see as realistic: a specialized system) ultimately would have to do that, else it would be useless and not deployed. And that deployment also requires economies of scale to make sense: if you can build a specialized system that applies to numerous situations, it is worth the effort. If it applies to just one narrow situation, is it?

Now you may theoretically build an AI that specializes in "understanding" the conversations between two friends; let's assume it is possible. You need to feed it a humongous quantity of data, you need to keep it up and running to update it... and it is trained for that one task! It won't work for your conversations with another friend. Was it worth it?

AI is a solution looking for problems. And for most of the "problems" it is touted for, it won't be "cost effective". We're going through another AI fad. As with the ones before, we'll get some more specialized systems that are worth mass producing, and lots of abandoned projects.
 

In the Sherlock Holmes story "Silver Blaze", the crucial clue was a dog that did
not bark during a supposed break-in.
https://en.wikipedia.org/wiki/The_Adventure_of_Silver_Blaze

You could have acquired and processed enormous amounts of data, and still not be
able to get the AI to make the required inference to solve that puzzle.

OTOH, my son's response was, "So you want an AI that's the equal of Sherlock
Holmes? Is that a reasonable standard to be aiming for?"
He has a point.
 

Of course you need some sort of scaling and you need to weigh the effort against the benefits. You are right that analyzing natural language is usually on the wrong side of that and therefore anyone tasked with developing an AI that is supposed to be actually used for something will groan when encountering natural language data. So the promise that simply throwing an AI at a problem will solve it is obviously bogus marketing by those trying to sell you an AI.

But if you spend some time to make your processes a bit more AI friendly and focus on the big problems, you can replace dozens of people with a single programmer. If you have a sizable operation, you cannot afford to not think about involving AI, because then someone else will do so.


If you own a business and you want to replace some of your employees with an AI, well, those employees tend not to be the equal of Sherlock Holmes, either.

I think we are quite far away from an AI that is the equal of Sherlock Holmes or one of those dystopic all-controlling AIs. Short term, at least, the greater problem will be how to deal with those who are easily replaced with an AI.
 
I think - just theorizing, though - that it is viable to have a computer tied to bio matter and to allow the bio matter (not an actual brain, obviously, just DNA) to find its own manner of 'interacting' with the computer available, and then go from there. This would essentially be a setup for examining the bio matter, not about AI; it would just give a new route for doing it.
That said, I wouldn't give that computer tied to bio matter any heavy mobile limbs, nor would I stay in the room :)

I personally think it is ridiculous to expect AI like in movies/sci-fi works, i.e. something that is fully a computer and has any kind of sentience. I don't think it can even have a single sense. DNA, on the other hand, already comes with the potential to be an actor - regardless of level; I don't mean something akin to fully developed multicellular organisms. Even an amoeba, a single-celled organism, is an actor, and so are fungi.
 

That type of AI is little more than one or more optimization and/or data mining
algorithms cobbled together for a specific purpose or business. There is nothing
impressive about their "intelligence" - they do well at mundane tasks they were
designed for.
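
To make that concrete, here is a hypothetical sketch of what such a purpose-built business "AI" often boils down to: an off-the-shelf classifier on tabular data. The feature semantics, the synthetic data, and the "churn" framing are all invented for illustration:

```python
# A mundane business "AI": an off-the-shelf classifier on tabular data.
# Synthetic data and made-up feature semantics; purely illustrative.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1_000
# Pretend the columns are [monthly_spend, support_calls, tenure_years]
X = rng.normal(size=(n, 3))
# Pretend label: did the customer churn?  (Here it is just a noisy rule.)
y = (X[:, 1] - X[:, 2] + rng.normal(scale=0.5, size=n) > 0).astype(int)

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X[:800], y[:800])
print("held-out accuracy:", model.score(X[800:], y[800:]))
```

All of the "intelligence" lives in the data and in how narrowly the task was defined; the algorithm itself is decades old.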

I agree that many businesses would be foolish to ignore the potential improvements
that could be made using these codes, but I also suspect that there is a kind of
arms race mentality starting to come into play. By that I mean that some
businesses believe they must use "AI" because their competitors might be using
it. It's not always clear that they will benefit as much from the investment as
they hope, and in many cases it would be a ridiculous commitment of money,
personnel and resources.

Personally, I think it's time for another huge audit, like the one James
Lighthill did in England and which led to an AI winter. There are many research
institutions and smart, honest developers that will pass stringent, unbiased,
independent tests. There are also many thousands that will end up looking like
complete charlatans.
 
There is another reason for what I am thinking of as the "AI bubble": people with money desperately throwing it at whatever seems to be the latest fad, under "greater fool" logic. Meaning that they do not really believe in what they are buying, but they believe that whatever they partly build will be sold at a profit to some greater fool (or greedier smartass like them) before the whole thing collapses.

The world periodically goes through such economic distortions. We saw some in the "internet bubble", and we're clearly seeing similar things now. AI happens to be one of them. Many of the applications being touted for it make no economic sense, but they will be pursued anyway until the music stops.
 
I agree that there is plenty of undeserved hype around "AI". A lot of companies are trying to sell "AI" as a magic bullet to solve all problems, so that everything can be automated and no pesky employees are required any more. And right now, there are plenty of managers who buy into these projects that are doomed to fail. To successfully use AI, you need to have a well-defined problem, where you can clearly tell whether a solution is good or bad. For games, this is easy: a win is good, a loss is bad. For companies, this is much less clear, because short-term profit might be very bad for the company in the long run.

But I do not think it is really a bubble, because there is an increasing number of problems that can be solved with AI, and there will be success stories of how a large amount of money has been saved. As long as these stories emerge, there will be people thinking that it is the solution to all their problems, and there will be demand for vendors promising that.
 

Anecdata is exactly the sort of marketing claptrap that will lead to over-hype,
one of the major reasons cited for previous AI winters.
That's why there have to be people prepared to assess AI implementations
objectively. Just as people do when assessing software and hardware purchased
for non-AI applications.

I agree with Kyriakos that sentient AI in silicon is just sci-fi.
Biological or hybrid organisms have a far more realistic chance, in part
because there will be ways of augmenting humans with chemicals and maybe,
eventually, lumps of meat that have been grown to perform certain cognitive
or other functions.
Is a human no longer human because it has more than 50% foreign biological mass?
I dunno, I'm not a philosopher, and I was told to never ask what's inside a
sausage.

Some boffins recently managed to grow human brain cells inside a rat's brain.
(Real estate agents are now worried about their career futures!)

AI researchers definitely have some of the most amusing paper titles, e.g.

"Dave...I can assure you...that it's going to be all right..."
-- A definition, case for, and survey of algorithmic assurances in human-autonomy
trust relationships,
Brett W Israelsen, Nisar R Ahmed.
 
^Even the title tells any serious person that it is going to be utter garbage ^^

I didn't read that paper, but I sometimes wonder whether somebody is submitting
variations of the famous Sokal Hoax, where complete nonsense was accepted by a
relatively famous journal. The journal won an Ig Nobel Award for:
"...eagerly publishing research that they could not understand, that the author
said was meaningless, and which claimed that reality does not exist".

Sokal's paper was titled: "Transgressing the Boundaries: Towards a
Transformative Hermeneutics of Quantum Gravity".

It should be obvious to any editor that the (peer) review of a physicist might be
a prudent move.
 

Sentience is kind of loose though. It should be possible in principle to create non-biomatter machines that respond to stimuli identically or very similarly to humans and adjust. Far beyond present abilities and possibly computing power, but there's no apparent law that requires a human brain in order to make the same choices/actions as a human brain.

How is an agent that makes the same choices in the same situations and communicates the same way, given the same situation, not sentient if the person is sentient? I would accept that both are, or that neither is (i.e. the concept of sentience is a wrong question in the first place), but otherwise "sentience" must allow us to anticipate different actions from the sentient and the non-sentient entity if it is to carry any meaning. Same goes for any arbitrary standard of what constitutes "intelligence".


The title itself should raise red flags, for more than one reason.
 

Sentience is not (at least reasonably defined, imo) something judged by what things look like from the outside; e.g. if I throw a ball, a sufficiently advanced machine will indeed catch it. Sentience is about having a context for what is going on, by which I do not mean a 'logical' or 'elaborate' context, but any context at all (as in the context an ant has, or - I think - even an amoeba, or at least fungi). Context seems to require having some kind of sense (i.e. 'feeling' something, regardless of how that feeds into anything else; it doesn't need to be a feeling as we humans sense it; all bio stuff has some basic sense of being, even though it obviously doesn't pick up on it in the delicate and complicated manner a human, or to a degree even large animals, does).
A machine cannot have a sense, because only bio stuff does. It is like a rock: it can fall or move in a certain way, as long as there is sufficient build-up to make it do so, but it will never have any sense of falling or moving or catching or of anything else.

I don't doubt at all that future machines will be able to mimic human actions, even fool someone into thinking they are human or have thought. But they will still have zero sense, and thus zero sentience as well.
A bio-machine hybrid can have sense, though that will be due to the bio matter.

Re choice: a machine doesn't choose anything; if I press the lamp switch, the lamp will go off, not because it chose to but because of the properties of the electric circuit it is built around. A human has limited or maybe - in a way - no choice, but in that case it is because we are not conscious of the vast majority of the mental goings-on in our own bodies. Contrast that with a machine: there is nothing going on there in the first place, at least nothing tied to the machine itself, let alone any analogous 'ego' of the machine.
 
It's more like "artificial intelligence, just how artificial is the whole field?"

An article worth reading about several already admitted problems in the field. I do believe winter is coming soon.
 

If the field gets you actually useful results, it's as real as anything else.
 
A pre-print released today entitled "7 Myths in Machine Learning Research"
should give some pause to those who think that the success of AlphaGo/Zero and other
AI systems indicates that The Singularity is nigh.
https://arxiv.org/pdf/1902.06789.pdf
 
From Vox,

The case that AI threatens humanity, explained in 500 words

"...Current AI systems frequently exhibit unintended behavior. We’ve seen AIs that find shortcuts or even cheat rather than learn to play a game fairly, figure out ways to alter their score rather than earning points through play, and otherwise take steps we don’t expect — all to meet the goal their creators set.

"As AI systems get more powerful, unintended behavior may become less charming and more dangerous. Experts have argued that powerful AI systems, whatever goals we give them, are likely to have certain predictable behavior patterns. They’ll try to accumulate more resources, which will help them achieve any goal. They’ll try to discourage us from shutting them off, since that’d make it impossible to achieve their goals. And they’ll try to keep their goals stable, which means it will be hard to edit or “tweak” them once they’re running. Even systems that don’t exhibit unintended behavior now are likely to do so when they have more resources available.


"For all those reasons, many researchers have said AI is similar to launching a rocket. (Musk, with more of a flair for the dramatic, said it’s like summoning a demon.) The core idea is that once we have a general AI, we’ll have few options to steer it — so all the steering work needs to be done before the AI even exists, and it’s worth starting on today.

"The skeptical perspective here is that general AI might be so distant that our work today won’t be applicable — but even the most forceful skeptics tend to agree that it’s worthwhile for some research to start early, so that when it’s needed, the groundwork is there."



Unintended behavior seems to be the critical concept. Examples of past technologies, like nuclear power (Chernobyl, Fukushima), exhibit the law of unintended consequences. An unpredictable AI, tied into the IoT and every aspect of our lives, could be extremely worrisome.
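
That "alter their score rather than earning points" failure mode is easy to reproduce in miniature. Here is a toy sketch (every detail invented for illustration) in which an optimizer is handed a proxy score that it can also tamper with:

```python
# Toy illustration of specification gaming: the agent is judged by a proxy
# scoreboard it can also tamper with, so a naive optimizer prefers tampering.
# The "environment" is entirely made up for illustration.
ACTIONS = ["do_the_task", "tamper_with_scoreboard"]

def proxy_reward(action: str) -> float:
    # What we *measure*: the scoreboard value.
    return 1.0 if action == "do_the_task" else 100.0   # tampering inflates it

def true_utility(action: str) -> float:
    # What we actually *wanted*: the task done.
    return 1.0 if action == "do_the_task" else 0.0

best = max(ACTIONS, key=proxy_reward)
print("optimizer picks:", best)                            # -> tamper_with_scoreboard
print("true utility of that choice:", true_utility(best))  # -> 0.0
```

The gap between the proxy reward and the true utility is the whole problem: the optimizer does exactly what it was told, just not what was meant.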
 
How malevolent machine learning could derail AI

"AI security expert Dawn Song warns that “adversarial machine learning” could be used to reverse-engineer systems—including those used in defense."

"In a video demo, Song showed how the car could be tricked into thinking that a stop sign actually says the speed limit is 45 miles per hour. This could be a huge problem for an automated driving system that relies on such information."

"APIs can be probed and exploited to devise ways to deceive them or to reveal sensitive information.

"Unsurprisingly, adversarial machine learning is also of huge interest to the defense community. With a growing number of military systems—including sensing and weapons systems—harnessing machine learning, there is huge potential for these techniques to be used both defensively and offensively."
 