The most critical issue your society is facing.

I think in modern society, and maybe further back, there is this tendency to project blame onto others. If everyone stood back for a moment and just asked themselves, "Is what I'm doing with my life making the world better?", and then, if they decided it wasn't, changed to something that does, we would see a big shift towards realizing a better world.
 
Yes. Well. "If everyone...." And what's your definition of "better"?

Most people just get through life the best they can. Caring for their families. And making what they consider the best decisions, at the time.
 
Most people just get through life the best they can. Caring for their families.

Maybe that's part of the problem. As a single guy with a tiny extended family, I sometimes get annoyed at the way caring for one's family often carries a subtext of "stuff everyone else; as long as I look after my family, that's all I'm here for".

It's time the concept of family got extended to, well, humanity I guess.
 
Yes. But even as a single man myself, I can still appreciate what a day-to-day struggle life can be for a great many people. And simply how busy family people can be.
 
The most critical issue my society and all societies are facing hardly bothers us at all, yet. It probably won't make the front page of the newspaper more than a few handfuls of times in the next decade or two. Naturally this means we're mostly not facing it. Trouble is, if we wait until the issue is obviously serious, that's probably way too late. And I'm not talking about climate change.

The problem is that we - and by "we" I mean human beings - may be making ourselves obsolete.
Every year, computers surpass human abilities in new ways. A program written in 1956 was able to prove mathematical theorems, and found a more elegant proof for one of them than Russell and Whitehead had given in Principia Mathematica.[1] By the late 1990s, 'expert systems' had surpassed human skill for a wide range of tasks.[2] In 1997, IBM's Deep Blue computer surpassed human ability in chess.[3] In 2011, IBM's Watson computer beat the best human players at a much more complicated game: Jeopardy![4] Recently, a robot named Adam was programmed with our scientific knowledge about yeast, then posed its own hypotheses, tested them, and assessed the results.[5][6]

We may one day design a machine that surpasses human skill at designing artificial intelligences. After that, this machine could improve its own intelligence faster and better than humans can, which would make it even more skilled at improving its own intelligence. By this method the machine could become vastly more intelligent than the smartest human being on Earth: an 'intelligence explosion' resulting in a machine superintelligence.[7][8][9][10]
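The feedback loop above can be sketched in a few lines of code. This is a toy model, purely illustrative: the numbers (`gain`, `cycles`) are arbitrary assumptions of mine, not predictions. The point is the shape of the curve, not its values.

```python
# Toy model of recursive self-improvement: each cycle the machine
# redesigns itself, and the smarter it already is, the larger the
# improvement it can make next time. (Illustrative assumptions only.)

def self_improvement(level=1.0, gain=0.05, cycles=10):
    """Return the history of 'intelligence levels' over the cycles."""
    history = [level]
    for _ in range(cycles):
        level = level * (1 + gain * level)  # improvement scales with current level
        history.append(level)
    return history
```

Run it and each jump is bigger than the last: growth that starts out looking linear quietly turns super-exponential, which is the "explosion" in "intelligence explosion".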

The consequences of an intelligence explosion would be enormous, because intelligence is powerful.[11][12] Intelligence is what caused humans to dominate the planet in the blink of an eye (on evolutionary timescales). Intelligence is what allows us to eradicate diseases, and what gives us the potential to eradicate ourselves with nuclear war. Intelligence gives us superior strategic skills, superior social skills, superior economic productivity, and the power of invention.

A machine that surpassed us in all these domains could significantly reshape our world, for good or bad. Thus, intelligence explosion scenarios demand our attention.

One of the early warning signs, I expect, will be economic. As machines become better than humans at various tasks - medical diagnosis, science, engineering, math, whatever - the economic value of that sort of human labor will fall. In most cases it will fall to such low levels that even low-status jobs will pay better. The problem is that low-status jobs - flipping burgers, waitressing, whatever - will also become mechanized. Now of course, some jobs, by their very definition as stipulated by the customers, require a human being to perform them. But we can't all be prostitutes.

The provision of basic life necessities to vast numbers of people who can't earn the wages to purchase them will become a very pressing political question, to put it mildly.

Then there are the military implications of artificially intelligent autonomous vehicles to consider.

Which leads to the obvious question: who will hold the power in such a future? Will it be presidents, generals, or programmers? But any of those answers implies the highly optimistic idea that one or more humans will be in control. In practice, few programs do exactly what the programmer (never mind his boss) intended. Just look at all the updates and bug-fixes. And that's in programs directly written by humans. When the human merely programmed the machine that programmed the machine that ... (etc etc) programmed the machine, any claim to control the result threatens to be a bad joke.

But, no way! We're special! Computers utterly fail at writing poetry, or philosophy, or just generally learning from experience! Well, yeah, for now. But if we look back to the time (say the 70s and 80s) when computers were just beginning to be important, we can find pundits listing things that computers will "never be able to do" - many of which have been done.

Our uber-special brains are the products of millions of years of evolution. Random mutations, occurring at a rate not guided by results, occasionally improving body-types in many survival-relevant ways other than just intelligence. Whole orders and phyla being killed off now and then by asteroids or volcanoes. With no intelligent agent in charge. By contrast, modern "evolutionary algorithm" design methods optimize mutation rates, can select for intelligence alone, don't scrap experiments just when things are getting good, and can occasionally tweak the results to get past traps like local optima. And can run many "generations" per second. With today's technology. Oh, and we have a good working model to copy (selectively!) from, if only we can understand how it works.
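The contrast drawn above can be made concrete with a minimal evolutionary algorithm. The sketch below (a toy "OneMax" optimizer; all names and parameters are my own, not from any particular library) selects on a single fitness measure with a deliberately chosen mutation rate and elitism - exactly the knobs natural evolution lacks:

```python
import random

def evolve(genome_len=32, pop_size=20, generations=200, mut_rate=None, seed=0):
    """Minimal evolutionary algorithm: maximize the number of 1-bits.

    Unlike natural evolution, the mutation rate is chosen deliberately,
    selection acts on fitness alone, and the best designs always survive.
    """
    rng = random.Random(seed)
    if mut_rate is None:
        mut_rate = 1.0 / genome_len  # a standard deliberately-tuned rate
    fitness = lambda genome: sum(genome)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]       # selection: keep the fittest half
        children = [[(1 - bit) if rng.random() < mut_rate else bit
                     for bit in p] for p in parents]
        pop = parents + children             # elitism: the best never get scrapped
    return max(pop, key=fitness)
```

A few hundred generations of this run in a fraction of a second and reliably drive the population to near-perfect fitness; natural evolution had none of these shortcuts.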
 
While I do believe that Artificial Intelligence will cause many disruptions, I don't think it will be a catastrophic event for mankind as we will have prepared for it by then.

I find the depiction of AI in popular culture extremely childish. The scenario basically goes: people invent AI, people enslave robots, then the robots kill all humans.

This scenario is roughly equivalent to the crusaders having nuclear weapons in an alternate timeline. Absurd, you say? Crusaders didn't have all the other techs required to build nukes, you say?

Exactly. The path to AI isn't linear or one track. The world isn't a civ game where you can beeline to a specific tech and ignore the rest. By the time we (probably) develop AI, we will also more than likely be extensively bio-engineering ourselves or directly interfacing with computers, blurring the line between what is human and what is AI.

Similarly, our culture will be growing and adapting. Just as the Crusaders would've nuked most of the Middle East because that's how they rolled, of course we would enslave robots if given that tech right now. But today, thankfully, we don't nuke every country that crosses us. Hopefully, our cultural, societal, economic and political systems and institutions will have grown up to the point that we could rationally handle AI when it comes.

That being said, the current nightmare depictions of our coming AI overlords actually do help our culture adapt. We're thinking about the problem before we even have it. It's effing amazing to be human.

While I know this doesn't address most of the implications you raised, I hope it does a decent enough job explaining why I don't think any of those implications will be show-stoppers long term. I also believe that by then we will be living in a world of plenty, as the prerequisite technologies that enable AI and genetic engineering will also be applied to many of our pressing material needs.
 
@hobbs,

In the interest of not derailing this thread, mind if I post your excellent post as part of the opening of a new one?
 
That seems a very promising thread. :goodjob: And for my part I'm willing to argue that the problem is not AI as a threat to humans, but society adapting to a reduced need for "economically productive" human labour. So far that adaptation has been done by expanding the categories of property so as to transform more human activities into services that can be traded. The whole "intellectual property" mess is a consequence of that.
The transformation of labour is the most critical issue for many modern societies.
 
I see AI as more of a positive development than a threat at this point. Unlike people, most machines can usually be counted on to do what they are supposed to do. They don't come with desire or selfishness, so it's unlikely they would try to take over the planet. In industry, AI acts as a production multiplier. One person directing robots can accomplish a lot more. In battle, AI acts as a force multiplier and reduces risk to personnel. Of course, society needs to adapt to machines doing more of the labor. People need to get smarter along with the machines.
 
Does Catalonia have any sort of military/paramilitary with which it can enforce a secessionist vote?
Does Spain have any sort of military that they can realistically use to prevent it? They're not the Soviet Union, they can't just roll in the tanks every time things get hairy.
 
Yes, they do have a military that could be used to prevent it.
 
"Realistically". As in, that would create such a political poopstorm as to be a very bad idea indeed. This isn't the Ruskies rolling tanks into Budapest or Prague, Madrid might not have the clout to make that stick even if they could convince the rest of Spain to get behind it.
 
Ace99 is right...
We've seen secession in Europe recently, and it wasn't the USSR...

Yugoslavia...

Yes, it creates a poop storm.

Anyhow, it's a moot point. For Spain to even worry about it, Catalonia would have to HAVE some sort of capability to defend its secession. That's how secession works. The unit wanting to leave will probably have to do so by force.
 