
artificial intelligence

Originally posted by Stapel
A bit more serious: the complexity of human intelligence is shown best (I think) in a hand. It can feel touch, pick things up (without squeezing them), and retract without thinking when burned. At this moment, we cannot chop off someone's hand and connect the nerves to an artificial hand with the same abilities.

At the moment, no, but research is rapidly heading in that direction. See this article on spinal injuries. This directly bears on the question, "when AI arrives, will it be a good thing or bad?"

I don't think the AIs will revolt against humanity and take over the world - I think they will call themselves human (or at least, people) because humans will voluntarily trade in their living bodies for machine bodies, and brains for computers. Except for a few holdouts, of course - like the Amish are holding out against modernization.

Suppose the nerve-to-machine communication interfaces are developed and become safe and reliable. Now think about the advantages of implanting a small computer next to your brain. Got a computation problem? Just think the question, and the answer pops into your head a few milliseconds later.

But if Moore's law continues apace - i.e. computational power keeps doubling every few years - and AI is developed, why not trade in your big dumb slow brain for a new computer? And that neural/machine interface will provide the perfect way to program the computer to have similar goals, preferences, habits, etc. - in other words, as its advocates will say, "download your personality."
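The doubling claim compounds faster than intuition suggests. A quick sketch, assuming a fixed two-year doubling period (an illustrative assumption, not a measured figure):

```python
# Hypothetical illustration of Moore's-law-style growth:
# if capability doubles every `doubling_period` years, how much
# larger is it after `years` years?
def growth_factor(years, doubling_period=2):
    """Multiplicative growth after `years` under a fixed doubling period."""
    return 2 ** (years / doubling_period)

print(growth_factor(20))  # 2**10 = 1024.0
print(growth_factor(40))  # 2**20 = 1048576.0
```

Two decades of steady doubling already yields a thousandfold increase, which is why the argument in the post treats the biological brain as quickly outclassed if the trend holds.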

Why not do that? Because it's fatal, that's why. Not just literally fatal, but (IMHO) fatal to your consciousness too. Yet, and here's the catch, it will look like a smashing success. To friends and family, the resulting android/robot will be the same old someone that they knew, only smarter. The resulting robot will also affirm that the operation was a success. Only people with a "religious" objection will remain skeptical.

I've glossed over a lot of tricky points, but that's the basic problem of AI as I see it.
 
This is an old thread dating back to 2004.

CNA (Channel News Asia) broadcast a four-part series on AI and its future. It's narrated by a Singaporean comedian (Chua Enlai), and some items, like funerals for dead AIs, are new to me.

Becoming human https://www.channelnewsasia.com/news/video-on-demand/becoming-human

Artificial Love, Unnatural Genius, Coding Morality, Real Power

Thought it would be interesting to see the forum's views.
 
Knowing where the documentary was produced and by whom, it probably covers well-trodden ground and overhypes the rest.

Odd choice of a narrator as well.
 
It is an interesting subject; I'll be sure to watch the documentary back home.

...oh and btw, congratulations on reaching the next level in thread necromancy ;) You may choose a reward: a black mana orb or the grimoire of pestilential thought :D
 
I wonder where all those wonderful posters are today. AI is here today and will be gaining power. The workforce of the future will need brand new skills, and many will be left behind. Fixing things will likely be more important than making things. All the human aspects of healthcare will be important skills. Creating entertainment content will probably be important. Ready or not, the future is coming.

As I think about this, there will be a need for folks who can lead companies and people through change. I think the upheaval will be significant even if not abrupt. Those folks who understand what is coming and can lead others through it will be in demand. Each industry and company will be different, but the required skill sets will be similar.
 
I watched a bit of the documentary. It's a bit sensationalistic, but to be fair, I think it's very hard to strike the right balance between skepticism and sensationalism with respect to AI.

AI conference attendance has been growing exponentially since the deep learning boom started around 2012; at current rates, a billion people will attend AI conferences in 2040 or something like that. CS academia is bending over backwards to get funding, publications, and citations for AI research. I wouldn't be surprised if a majority of all CS research faculty at American universities were doing work that's at least AI-adjacent. Vast amounts of effort, data, and electricity are poured into laughably arcane models that offer puny marginal improvements, and performance results often aren't reproducible.

There is something of a herd mentality, with many AI practitioners simply following and mimicking a handful of AI titans (Yann LeCun, Ian Goodfellow, Yoshua Bengio, Michael I. Jordan, Andrew Ng, Geoffrey Hinton, and not many others). A lot of work goes into uninterpretable models with gazillions of parameters that need huge, carefully curated datasets, coupled with extremely computationally intensive training. There's always a garbage-in, garbage-out risk. Loads of AI startups with no real value prop compete for seed funding, tech giants are keen on appropriating deep learning lingo for marketing buzzwords, and we all suspect that everyone's overpromising.

Meanwhile, most models actually used in industry are simple business-oriented statistical models: usually classic linear models, like linear regression or logistic regression, with cutting-edge neural nets nowhere to be found. Getting deep learning models to actually work is a huge pain, and they've fairly earned the moniker "computational BS." A lot of the deep learning you do find in the wild fuels lame stuff like advertising... and advertising itself is possibly a bubble waiting to burst.
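To make the "industry mostly runs on simple models" point concrete, here's a minimal sketch of logistic regression fit by plain gradient descent. The one-feature toy dataset and hyperparameters are made up for illustration; real deployments would use a library, but the whole model is just a weight, a bias, and a sigmoid:

```python
import math

def sigmoid(z):
    """Squash a raw score into a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(xs, ys, lr=0.1, epochs=1000):
    """Fit weight w and bias b by per-sample gradient descent on log-loss."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = sigmoid(w * x + b)   # current predicted probability of class 1
            w -= lr * (p - y) * x    # gradient step for the weight
            b -= lr * (p - y)        # gradient step for the bias
    return w, b

# Toy data: larger x tends to mean class 1.
xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [0, 0, 0, 1, 1, 1]
w, b = train_logistic(xs, ys)

# Points deep in each class land on the right side of the learned boundary.
print(sigmoid(w * 0.5 + b) < 0.5, sigmoid(w * 4.5 + b) > 0.5)  # True True
```

The learned decision boundary sits at -b/w, roughly midway between the two classes here; that two-parameter model is closer to what most production systems look like than anything with millions of weights.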

On the other hand, the progress we've seen since ~2012 has been jaw-dropping. In a very short period, deep learning algos have vastly improved at tasks and games once seen as hallmarks of human intelligence. They've become vastly better at games like Chess, Go, StarCraft, and Dota. They can create music, paintings, poetry, and essays. They can solve analogies and Winograd schemas ("the piano couldn't fit in the room because it was too big": what does "it" refer to?). Object recognition, speech recognition, machine translation, question answering, and personal assistants have all made huge strides. Self-driving cars are quite possibly just around the corner.

In defense of the shortcomings: most researchers are very well aware of the big problems I mentioned two paragraphs ago (far more so than all the journalists and outsiders sniping at the field). Reducing the number of model parameters has always been a key goal of deep learning; that's part of why convolutional neural nets, which are famously good at handling images or any other grid-like data, are so great. In general, machine learning papers love to brag about accomplishing some task using fewer parameters and less training time, not more. As for dataset problems, reinforcement learning, semi-supervised learning, and unsupervised learning are all booming subfields that circumvent many of the problems posed by the need for huge curated datasets; they've had extremely impressive results and are probably just getting started. Interpretability is another burgeoning subfield and, conveniently, some of the most exciting (or overhyped) recent algorithms are actually fairly interpretable. For example, it's reasonably well understood how transformers/GPT-2 (which generate essays, poems, etc.) model interpretable linguistic features of natural language. In general, a lot of work has gone into "opening the black box" of neural nets, with respectable success.
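A back-of-the-envelope illustration of why that parameter sharing matters. The numbers are chosen for illustration (a 32x32 single-channel image, bias terms ignored): a fully connected layer needs one weight per input-output pair, while a convolutional layer reuses one small kernel across the whole image:

```python
# Parameter count for one layer mapping a 32x32 grayscale image
# to a same-sized output, ignoring biases.
height, width = 32, 32

# Dense layer: every input pixel connects to every output unit.
dense_params = (height * width) * (height * width)

# Convolutional layer: a single shared 3x3 kernel slides over the image.
conv_params = 3 * 3

print(dense_params)  # 1048576
print(conv_params)   # 9
```

Five orders of magnitude fewer parameters for the same input size, which is one reason conv nets train on realistic datasets at all.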

People talk a lot about the looming "AI winter" and the previous AI winters we saw in the 20th century. But the present boom is very different from the previous ones, for one quite simple reason: the value prop this time around is irrefutable. The recent boom has created a world where we all interact with AI constantly, with dozens or hundreds of practical applications. And there's still a lot of untapped potential; some extremely impressive algorithms have only been developed very recently, and we don't know where they'll go. I certainly think we will eventually slide down the hype cycle, but I also think a lot of big changes are still on the horizon. My guess is that when the next AI winter arrives, it won't be as wintry as the previous ones.
 
AI is a blessing so long as it stays out of morality decisions.

It's great when helping a doctor in an operation, but a horrible threat when deciding whether a person should live or not.
 
It's great when helping a doctor in an operation, but a horrible threat when deciding whether a person should live or not.
Pfffft! What could possibly go wrong?
 