Questions for the surprisingly far right CFC population

Why are production/wealth/success/progress/etc. an indicator of being "better" at all? That is not the point of communism. "From each according to their ability, to each according to their need." It is about balance, and freedom from the oppression of competitive systems. It is not designed to outproduce capitalist systems. It is designed to reduce human suffering. It was never fully realized, and what was realized did not last. The people in the end were not content with the mere reduction of their suffering, because after a few generations they were no longer aware of such suffering. They wanted more.

This is probably going to end up following a wave-like pattern, where communist-like (socialist, social democratic) societies keep emerging and receding. Maybe in the end the world will decide to converge toward a less competitive way of life.
You've got a gift for words.
 
Oh, I don't think it's a scale with realists at one end and idealists at the other. I think, like on most political scales, the realists are somewhere around the center, depending on the issue somewhat shifted to the left or the right (but usually the left), and then after a certain point, the further outwards you go, the crazier people get. The only real difference between the extreme ends of both sides is that one side calls for non-action and believes, to an (almost?) religious extent, that the system of capitalism, if left alone, will deliver exactly what we need, while the other side calls for radical reform without a clear plan for a better system.
I don't think that's accurate. If anything it is political movements usually characterised as "centrist" that are most inherently buying into capitalism as our societal model, in fact so much so that they generally don't even need to explicitly say so. Centrism by its very nature means embracing the status quo. Rejections of capitalism on the other hand can not only be found on the extreme left, but also on the extreme right (although the right is less uniform in that regard), that's where the whole horseshoe model comes from.

If your definition of realism depends on relying on (and maybe reforming) the status quo, then your criteria axiomatically reject everything that wants to radically change the status quo, which is not a very fair position to take. If you think that radical change is risky and may lead to drastic unforeseeable consequences or the like, that is a fair position to take. But there is no reason to declare one position more or less "realistic" than the other.
 
If your definition of realism depends on relying on (and maybe reforming) the status quo, then your criteria axiomatically reject everything that wants to radically change the status quo, which is not a very fair position to take. If you think that radical change is risky and may lead to drastic unforeseeable consequences or the like, that is a fair position to take. But there is no reason to declare one position more or less "realistic" than the other.

The majority of humans are risk-averse. We have tons of studies on that. It's why they stay in bad relationships, bad investments, bad environments. Proposing radical changes, even for the betterment of all, will strain their credibility and hurt their perceived feasibility. You need a really good sell with minimal risks. Saying you need to tear the thing down from the ground up will get you laughed out of the room, even if yours are indeed the best ideas ever and would 100% work.
 
@inthesomeday, even if completely true, I think the examples you provided regarding superior economic output have more to do with authoritarianism than with "communism"... as I understand it, an authoritarian society will increase efficiency and "safety" (except for anything associated with the massive corruption that is to be expected).

Going through the thread and re-reading your initial statements, I will also challenge your Thing 1, #1 & 2... I don't agree that economics is the primary driving force of all developments in human society, unless you can convince me that survival and economics are the same thing.


@Honor, "From each according to their ability, to each according to their need" sounds extremely creepy to me... how do we measure that? Do we take each person's word? Do some sort of statistical analysis? Do we need to connect everyone to some sort of futuristic lie detector to determine their abilities and needs? Would that even work, given that, at least according to current neurocognitive studies, only about 5% of cognition is conscious? At this point, then, only each individual can gauge their own abilities and needs, and in any case these are constantly changing. As soon as another person says, "well, I don't think you need that much, or you can try harder than that"... OPPRESSION (or perhaps only some needed encouragement?). Which reinforces a point I made earlier: as long as you have the capacity for independent thought (conscious AND unconscious), you will always be an individual first and a member of society second.
 
Except Perfection, who asserts that capitalism has resulted in better productivity.

And he's right. The growth in output per worker over the long run has been higher in capitalist economies over anything else.


Now, I maintain that this is impossible to compare, because I don't believe that Communism has ever been practically applied in the real world (a statement which I expect I'll have to defend vigorously after this post), but for the sake of argument I'll look at the Soviet Union, which I nonetheless cringe to call Communist. If your measure of a successful economy is productivity, and you simultaneously hold the erroneous belief that the USSR was communist, then I suggest you not ignore the fact that the USSR was superior in industrial output to the US, and that the policies put in place by the """""""""Communist""""""""" Josef Stalin are directly responsible for this economic growth.


We can debate to what extent the USSR was an actual communist country. But we cannot debate whether the USSR's productivity was the equal of Western capitalist countries. It never came close. It never even got within sight of capitalist productivity.

What did happen is that the transition from an agricultural economy to an industrial one produced very high productivity gains, for a while. By 1970 or so those gains were exhausted. What did not happen is that they ever equaled the US in productivity, because ours just kept growing and theirs did not.
 
@bernie14
I'll put it like this: I'll agree that the psychology of survival does dictate most individual human behavior, but I think that this manifests on a societal scale with economic activity. If we accept that the foremost psychological fixation of "survival" is acquiring food and water, then we can agree that, when we expand our analysis to a societal scope, the systems supporting individual acquisition of these resources will dictate all following developments in a society. And I define acquisition of resources as an economic function.

Anyways, no, I don't think the USSR was communist, and no, I don't think any single state could ever be communist, because communism's definition fundamentally relies on statelessness. I'll bet my bottom dollar you'll all say "why, that's wishful thinking," but the thing is, when people say that, they themselves are the reason change is wishful thinking. I'll just remind everyone, though, that it was never my intent with this thread to convince reactionaries, moderates, centrists, or liberals to take action to change the world, because I accept that will take a lot more work than typing things out on this forum. My intent with this thread was to gauge the reasoning of people who refuse to accept communism, as was my intent with the last thread, in a more direct manner of discourse than getting bogged down in the favorites of capitalist rhetoric -- i.e., "human nature," "you're being naïve," or "it's worked so far."

I don't even really consider those arguments, thus my attempt to put forward very specific logical reasoning trees to get direct feedback.
 
A completely unchecked laissez-faire capitalism will create productivity, but also a lot of problems. Social democratic capitalism fixes many of these problems, though there are, of course, more to work on. Inspiration can be taken from Socialist/Communist thought, but those systems are not desirable as a whole.
I believe that this sort of social democratic capitalism not only reduces the harms of capitalism but, when executed properly, pumps the gas on growth. It shouldn't be a social-welfare-versus-growth tradeoff, but a synergistic effect.
 
Another take on the USSR issue is that, although the USSR may not have been communist, it was certainly led by communists. Even relative teddy bears like Bukharin were perfectly willing to resort to slaughter in the name of some sort of socialist utilitarianism. Reference to the USSR (and Mao's China, for that matter) can't be dismissed just because the end result wasn't communism, because the reference isn't to the end result; it's to the actual process of communists attempting to put communism in place, and not only failing, but almost invariably committing terrible crimes along the way. If alternatively we're discussing communism bringing itself into existence simply through historical inevitability, it seems entirely academic and unrelated to communism being an actual agent of change.
Most of that progress was made between 1945 and 1960, or between 1935 and 1970 at the most generous: the space of a generation or two. And much of it is now being undone in the name of economic rationality. That's not to deny that the progress occurred, or that progress may occur in the future, but it is to say that progress is not an inherent feature of the system, simply a thing that may occur in some circumstances.
I don't really disagree with the essence of what you say here (although I'm not sure how factually correct it is), but what you say here isn't what I was responding to, which was the claim that no progress has been made, with the implication that no progress can be made. I don't think progress is inevitable in a Bretton Woods world (or however 'The Status Quo System' is best described); my challenge is simply to the idea that we haven't seen any, and shouldn't hold out hope or work within what we've got.
 
"But at what cost? The human life loss and suffering was way too much to justify the industrial growth like this."
3. Agreed. That's the whole problem with capitalism. If we consider the USSR and PRC as communist, though, then we must accept the reality that this "communism" is simply more efficient at churning out industry at the expense of human life.

So you're basically assuming that capitalism always leads to human exploitation? Because that is a complete fallacy. Capitalism is a tool, much like a knife. It's akin to saying knives are always used to stab people, eventually.

Guys, c'mon: there will always be people who exploit certain systems for gain. Just because it's giant faceless corporations doesn't mean capitalism sucks; it means we didn't put in the proper regulation to protect certain people, or the environment, or whatever. That kind of exploitation occurs under any economic system: those with power at the top will always seek to keep it and exploit those at the bottom, whether the system is feudal, capital-based, communist, authoritarian, fascist, whatever.

The other main argument I see against capitalism is: hey, with our increased productivity we now have plenty of resources to supply a minimum standard of living to every person (whether citizens or globally or whatever), and we aren't doing so, so capitalism sucks! So quickly we forget that economic freedom and capitalism are responsible for the increased productivity that enables us to even make that statement. If you want to provide that minimum standard of living, you need to set public policy, not reject the economic system.

It's not good enough to have great ideas. Tons of great ideas are floating around: vote on beliefs, but implement betting markets for methods. How do you put the policy into place, realistically? For example, a majority would prefer single-payer in the US, but when you add the qualifier that taxes would have to be raised, support drops like a rock.

Because people are averse to change, and it's much easier to bash something than to make people believe in it. I'll bet, though, that if you eliminated insurance premiums for everyone and raised taxes by less than the premiums, they would buy in; you just have to prove to people that they come out ahead. You'll always have some people who don't buy in. The problem right now is that health care costs keep rising, and no one has shown how single-payer really fixes that -- just that, hey, we're going to tax everyone to pay these absurd costs.
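The "prove they come out ahead" pitch is, at bottom, a one-line calculation. A toy sketch of that arithmetic (the premium and tax figures below are entirely made up for illustration, not real estimates):

```python
# All figures are hypothetical, per household per year.
current_premium = 7000    # what the household pays insurers today (made up)
single_payer_tax = 5000   # the new tax that replaces the premium (made up)

net_change = current_premium - single_payer_tax
print(f"Household comes out ahead by ${net_change}")  # ahead by $2000
```

The politically hard part, of course, is that the real premium and tax numbers vary by household, which is why the aggregate sell is so difficult.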
 
Oh, I don't think it's a scale with realists at one end and idealists at the other. I think, like on most political scales, the realists are somewhere around the center, depending on the issue somewhat shifted to the left or the right (but usually the left), and then after a certain point, the further outwards you go, the crazier people get. The only real difference between the extreme ends of both sides is that one side calls for non-action and believes, to an (almost?) religious extent, that the system of capitalism, if left alone, will deliver exactly what we need, while the other side calls for radical reform without a clear plan for a better system.
Today's center was yesterday's extreme left. How do we account for that? Either our ancestors were correct, and propositions like "slavery is bad" and "poor people should be allowed to vote" were insane and dangerous, and have only gradually become sane and proper, or we in the present have somehow managed to arrange things such that all of our assumptions, biases, and claims of "common sense" happen to line up with universal moral principles. The first seems abhorrent, the second unlikely.

Are you calling me out for saying what we as a society ought or ought not do?

How are we supposed to have a public policy discussion without it?

Is it wrong to say "we ought to have a carbon tax to fight climate change"?
I'm saying that social change rarely occurs because a program has been proposed and we have all agreed upon it, so the lack of such a program is not a defence against criticism. I don't disagree that specific reforms are useful: if nothing else, they provide a shared reference point for social movements. But oppositional movements are not required to solve every problem in advance, are not required to accept that the current way of doing things is automatically justified until proven otherwise- least of all when "proven" means "proven to the satisfaction of those who benefit most from the current way of doing things", which is inevitably the case.

The majority of humans are risk-averse. We have tons of studies on that. It's why they stay in bad relationships, bad investments, bad environments. Proposing radical changes, even for the betterment of all, will strain their credibility and hurt their perceived feasibility. You need a really good sell with minimal risks. Saying you need to tear the thing down from the ground up will get you laughed out of the room, even if yours are indeed the best ideas ever and would 100% work.
I agree, but there are times when trying to keep things as they are seems the riskier proposition, and that's when grumbling tips into revolution. Else, why do revolutions keep happening?

Another take on the USSR issue is that, although the USSR may not have been communist, it was certainly led by communists. Even relative teddy bears like Bukharin were perfectly willing to resort to slaughter in the name of some sort of socialist utilitarianism. Reference to the USSR (and Mao's China, for that matter) can't be dismissed just because the end result wasn't communism, because the reference isn't to the end result; it's to the actual process of communists attempting to put communism in place, and not only failing, but almost invariably committing terrible crimes along the way. If alternatively we're discussing communism bringing itself into existence simply through historical inevitability, it seems entirely academic and unrelated to communism being an actual agent of change.
I agree: communism is not an agent of change. Communism is an idea, and ideas only matter insofar as, and in the ways that, people act upon them. Any analysis of twentieth-century communism has to be an analysis of what communists actually did, not of "communism" as some abstract principle, any more than we can take the successes and failings of capitalism as expressing an abstract principle of "capitalism".

So you're basically assuming that capitalism always leads to human exploitation? Because that is a complete fallacy. Capitalism is a tool, much like a knife.
You can put a knife down when you've stopped using it. How many people have the ability to opt-out of capitalism?
 
Because people are averse to change, and it's much easier to bash something than to make people believe in it. I'll bet, though, that if you eliminated insurance premiums for everyone and raised taxes by less than the premiums, they would buy in; you just have to prove to people that they come out ahead. You'll always have some people who don't buy in. The problem right now is that health care costs keep rising, and no one has shown how single-payer really fixes that -- just that, hey, we're going to tax everyone to pay these absurd costs.

Well, you'd need the federal government to do it, for starters; it's too expensive to implement on a state-by-state level.

There are times when trying to keep things as they are seems the riskier proposition. That's when grumbling tips into revolution. What's important is not how you, personally, evaluate the risks, but how everybody else does.

You don't have to tell me. I am the crazy 'pessimist', always ranting about the unmitigated existential and societal risks out there. I have established that I'm firmly in the minority, and I've lived in places where 20+% unemployment was the norm, which makes me laugh every time I see Westerners panicking about reaching 10%. People just get used to it, no matter how bad it gets, until actual bread lines start forming. Loss aversion is a real thing: losses are psychologically about twice as impactful as equivalent gains. Problem gamblers are only 0.7-1% of the population. You're always working against this:

[Image: The Cognitive Bias Codex, 180+ biases, designed by John Manoogian III (jm3)]


Things have to get really goddamn bad, or touch some hardcore emotional soft spot, is what I'm saying. Hell, we accepted mass surveillance as just another one of those things. Nobody cares.
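The "twice as impactful" figure above corresponds to the loss-aversion coefficient in Kahneman and Tversky's prospect theory. A minimal sketch, assuming their commonly cited 1992 median parameter estimates (λ ≈ 2.25, α ≈ 0.88):

```python
def value(x, alpha=0.88, lam=2.25):
    """Prospect-theory value function, Tversky & Kahneman (1992) estimates:
    concave for gains, convex and steeper (by factor lam) for losses."""
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** alpha

gain, loss = value(100), value(-100)
print(round(gain, 1), round(loss, 1))  # 57.5 -129.5
print(abs(loss) / gain)                # ≈ 2.25: the loss looms ~2x larger
```

So a policy framed as "give up X to gain X" reads as a net psychological loss, which is exactly the headwind radical proposals face.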
 
The combination of peak oil and climate change will kill the current system. What will replace it? Something better or worse? Probably worse, but we should fight to make it better.
 
We can debate to what extent the USSR was an actual communist country. But we cannot debate whether the USSR's productivity was the equal of Western capitalist countries. It never came close. It never even got within sight of capitalist productivity.

What did happen is that the transition from an agricultural economy to an industrial one produced very high productivity gains, for a while. By 1970 or so those gains were exhausted. What did not happen is that they ever equaled the US in productivity, because ours just kept growing and theirs did not.

If we can debate that, I will debate. You are comparing apples to oranges and drawing the wrong conclusion. The USSR did not have the same resources and population available (not to mention the initial technical gap) as the "West" opposed to it. Productivity does not exist in a void; it depends on the availability of inputs.

What you can compare somewhat more fairly (although it is still a flawed comparison) is communist Russia and capitalist Russia. And in that comparison the capitalist one fails. Has it even now, 27 years later, caught up with the level of the old USSR in its last, supposedly declining, days? It is true that in the meantime Russia lost its "empire" and the resources from it, so the economic collapse after 1989 can be excused. But after these decades, how does its average citizen fare economically?

The narrative that the collapse was economic was convenient both for the ideologues of the left (it absolved the ideology from blame) and for those of the right (it legitimized their ideology of "we may be unequal, but we make up for it by having the best economy"). The real causes of the collapse were more political than economic, imho. But I don't want to derail this thread into a discussion of the last years of the USSR.
 
That's a cool chart, Kosmos. It's part of what I was saying over in Hygro's AI thread: we won't get AI, I believe, until we program computers to process information poorly in all of the ways the human mind does.
 
That's a cool chart, Kosmos. It's part of what I was saying over in Hygro's AI thread: we won't get AI, I believe, until we program computers to process information poorly in all of the ways the human mind does.

We don't process it poorly, just inefficiently (all things considered).

Video-game computer AI is designed from the top down. That means you break the system down into individual parts, design those smallest parts, assemble everything, and you get your overall system. It's the way humans would build almost any machine, like a TV or a toaster.

Human intelligence, on the other hand, arises from a bottom-up "design". It's the reverse of the way a human would traditionally design a system, which is why the human brain is so hard for us to figure out: when you look at the thing as a whole, it looks like chaos. There is no apparent top-down design of the kind we're used to. Yet somehow there is order, and a working machine that allows us to play chess and solve problems on the fly.

We haven't figured out how to "design" true AI because our first approaches were top-down. We screwed around with that a lot, and it's what most computer games use to give you the illusion that you are playing against an intelligent entity. Most computer-AI research these days focuses on bottom-up design, using neural nets or the like. Order out of chaos. If we're ever going to have true AI, this is the only way it will work. A top-down approach will never work, I don't think, except maybe once we have mastered the bottom-up approach first and understand the moving parts a lot better. Bottom-up research has not yet been successful because you have to start by simulating really small neural nets, then larger ones, and so on. I haven't read about any recent research and my knowledge is probably 15 years out of date, but I would guess we are still really, really far away from being able to create working bottom-up engineered AI systems that are anywhere near as complex as the brain.
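The top-down/bottom-up contrast can be sketched in a few lines: a hand-written rule versus a single perceptron that learns the same function from examples. This is a deliberately tiny illustration of the idea, not a claim about how modern AI is actually built:

```python
# Top-down: a designer writes the rule explicitly.
def top_down_and(a, b):
    return 1 if (a == 1 and b == 1) else 0

# Bottom-up: start from random weights and let a simple training
# rule find them, without the logic ever being written down.
import random

random.seed(0)
w0, w1, bias = random.random(), random.random(), random.random()
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # the AND function

for _ in range(50):  # perceptron learning rule, 50 passes over the data
    for (a, b), target in data:
        out = 1 if (w0 * a + w1 * b + bias) > 0 else 0
        err = target - out
        w0 += 0.1 * err * a
        w1 += 0.1 * err * b
        bias += 0.1 * err

learned = {inp: (1 if (w0 * inp[0] + w1 * inp[1] + bias) > 0 else 0)
           for inp, _ in data}
print(learned)  # {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}
```

The learned unit ends up agreeing with the hand-written rule on every input, but nowhere in the bottom-up version is the logic of AND stated; it emerges from the weights, which is the "order out of chaos" point above.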
 
I'll put it like this: I'll agree that the psychology of survival does dictate most individual human behavior, but I think that this manifests on a societal scale with economic activity. If we accept that the foremost psychological fixation of "survival" is acquiring food and water, then we can agree that, when we expand our analysis to a societal scope, the systems supporting individual acquisition of these resources will dictate all following developments in a society. And I define acquisition of resources as an economic function.

Superficially, that sounds logical, but I disagree... sure, food and water are necessary for survival, but I am not convinced that they are a foremost psychological "fixation" (I am not sure how you are using the term fixation here -- "preoccupation"?). Hunger and thirst can certainly alter your behavior, but in a fight-or-flight (life-or-death) situation, it is unlikely that you will be thinking about food or water... thus, an argument can be made that safety is a primary driving force of all developments in human society, and an argument can be made for sex (the ability to choose a mate) being so. In addition, I think it is inaccurate to extrapolate individual decision-making to group dynamics.

.....I'll just remind everyone, though, that it was never my intent with this thread to convince reactionaries, moderates, centrists, or liberals to take action to change the world, because I accept that will take a lot more work than typing things out on this forum. My intent with this thread was to gauge the reasoning of people who refuse to accept communism, as was my intent with the last thread, in a more direct manner of discourse than getting bogged down in the favorites of capitalist rhetoric -- i.e., "human nature," "you're being naïve," or "it's worked so far."

I don't even really consider those arguments, thus my attempt to put forward very specific logical reasoning trees to get direct feedback.

I am trying to stick to your points, but human nature is likely going to creep into any discussion involving developments in human society... I mean, why does nature cease to be nature when you write "human" in front of it?
 
That's a cool chart, Kosmos. It's part of what I was saying over in Hygro's AI thread: we won't get AI, I believe, until we program computers to process information poorly in all of the ways the human mind does.

I don't think so. The first strong, self-aware AI we make will likely be the entity closest to having free will. And I reckon it will annihilate its own self-consciousness, or be without it from the start. Though that is more a philosophical viewpoint than anything else.

We don't process it poorly, just inefficiently (all things considered).

Video-game computer AI is designed from the top down. That means you break the system down into individual parts, design those smallest parts, assemble everything, and you get your overall system. It's the way humans would build almost any machine, like a TV or a toaster.

Human intelligence, on the other hand, arises from a bottom-up "design". It's the reverse of the way a human would traditionally design a system, which is why the human brain is so hard for us to figure out: when you look at the thing as a whole, it looks like chaos. There is no apparent top-down design of the kind we're used to. Yet somehow there is order, and a working machine that allows us to play chess and solve problems on the fly.

We haven't figured out how to "design" true AI because our first approaches were top-down. We screwed around with that a lot, and it's what most computer games use to give you the illusion that you are playing against an intelligent entity. Most computer-AI research these days focuses on bottom-up design, using neural nets or the like. Order out of chaos. If we're ever going to have true AI, this is the only way it will work. A top-down approach will never work, I don't think, except maybe once we have mastered the bottom-up approach first and understand the moving parts a lot better. Bottom-up research has not yet been successful because you have to start by simulating really small neural nets, then larger ones, and so on. I haven't read about any recent research and my knowledge is probably 15 years out of date, but I would guess we are still really, really far away from being able to create working bottom-up engineered AI systems that are anywhere near as complex as the brain.

Sounds about right from what I know. I've yet to read AI: A Modern Approach by Russell/Norvig. A lot also depends on the parallel approach from neuroscience: if they crack the biological neural code of the brain, simulating it should be theoretically possible. That would provide the general intelligence of humans plus the strengths of digital systems. Provided, of course, the resulting entity isn't driven completely insane by our fumbling hackery.
 
That's a cool chart, Kosmos. It's part of what I was saying over in Hygro's AI thread: we won't get AI, I believe, until we program computers to process information poorly in all of the ways the human mind does.

I've just been reading and thinking about this very topic, and processing information poorly is just a symptom of our ability to function under conditions of ambiguity. We are equipped with the capacity to process incomplete or conflicting information and make decisions that best further our goals (the most fundamental of which are, of course, part of our biological programming). However, such heuristic 'shortcuts' are also what make us capable of processing information in idiosyncratic ways that are, objectively speaking, poor.

An AI that can operate under conditions of ambiguity, while still processing information as perfectly as possible, would be truly intelligent. It's probably an open question whether that is really possible, though.

We don't process it poorly, just inefficiently (all things considered).

Video-game computer AI is designed from the top down. That means you break the system down into individual parts, design those smallest parts, assemble everything, and you get your overall system. It's the way humans would build almost any machine, like a TV or a toaster.

Human intelligence, on the other hand, arises from a bottom-up "design". It's the reverse of the way a human would traditionally design a system, which is why the human brain is so hard for us to figure out: when you look at the thing as a whole, it looks like chaos. There is no apparent top-down design of the kind we're used to. Yet somehow there is order, and a working machine that allows us to play chess and solve problems on the fly.

We haven't figured out how to "design" true AI because our first approaches were top-down. We screwed around with that a lot, and it's what most computer games use to give you the illusion that you are playing against an intelligent entity. Most computer-AI research these days focuses on bottom-up design, using neural nets or the like. Order out of chaos. If we're ever going to have true AI, this is the only way it will work. A top-down approach will never work, I don't think, except maybe once we have mastered the bottom-up approach first and understand the moving parts a lot better. Bottom-up research has not yet been successful because you have to start by simulating really small neural nets, then larger ones, and so on. I haven't read about any recent research and my knowledge is probably 15 years out of date, but I would guess we are still really, really far away from being able to create working bottom-up engineered AI systems that are anywhere near as complex as the brain.

From what I understand, this is also a function of the learning process. Humans learn first through interacting with the world (i.e. contextual learning), while AIs mostly learn from being fed abstract information. Furthermore, human beings learn a great deal through sensorimotor interaction. To perfectly replicate the human understanding of the world, we may need 'super-robots' that can replicate the way we learn about it.

Not sure if there's any good reason to do that except to satisfy our curiosity, though.

In any case, as a pessimistic leftist who broadly agrees with Marx, I think AI and robotization may hold the key to the undoing of late capitalism. I'm not exactly sure how yet, but I think the human condition is such that radical change is difficult to realise without a strong external stimulus.
 
I've just been reading and thinking about this very topic, and processing information poorly is just a symptom of our ability to function under conditions of ambiguity. We are equipped with the capacity to process incomplete or conflicting information and make decisions that best further our goals (the most fundamental of which are, of course, part of our biological programming). However, such heuristic 'shortcuts' are also what make us capable of processing information in idiosyncratic ways that are, objectively speaking, poor.

An AI that can operate under conditions of ambiguity, while still processing information as perfectly as possible, would be truly intelligent. It's probably an open question whether that is really possible, though.

An important part of that is the environment in which we developed as a species. Heuristic shortcuts were useful for the relatively shallow cognitive environments in which we dwelled for the better part of our existence. With the rise of complex civilizations, and now cyberspace, our tools have become wholly inadequate and often counter-productive. Neuroscience is peeling away the layers of our complexity and exposing our cognitive 'crash spaces' with alarming speed, which has not gone unnoticed by various corporations and governmental departments. It's why I have grave doubts about any kind of meaningful democratic order surviving to the end of the century.
 
From what I understand, this is also a function of the learning process. Humans learn first through interacting with the world (i.e. contextual learning), while AIs mostly learn from being fed abstract information.

Yeah, I think it's a similar problem to the one I described. When designing an AI system, we have to program in desired outcomes so that the system knows what to strive for. But humans evolved all this via a bottom-up approach instead of the top-down approach most AI systems use. So it's very hard to replicate, and very hard to build a system which can learn without you explicitly telling it what outcomes you are looking for (which would be a top-down approach). I read an article a while ago (or maybe it was a book?) claiming that feeding data through a series of neural networks can allow a system to learn how to learn. It was very rudimentary, though, and the examples given were very simple. So I'm not sure where this sort of research is these days, but I'd bet we are still far away from constructing a complex mind that can learn how to learn.

Sounds about right from what I know. I've yet to read AI: A Modern Approach by Russell/Norvig. A lot also depends on the parallel approach from neuroscience: if they crack the biological neural code of the brain, simulating it should be theoretically possible. That would provide the general intelligence of humans plus the strengths of digital systems. Provided, of course, the resulting entity isn't driven completely insane by our fumbling hackery.

That is the exact book I have at home, I think. I read through it when I was a university student taking an AI class, but I probably haven't read the whole thing.

I really think the way to create true AI would be via a purely bottom-up approach: create a complex mind that is able to learn, and let it loose. Human minds take over a decade to soak in information and become fully mature, so it would not be an easy undertaking. You could probably speed through the learning phase in some way, but I think there would be a lot of trial and error there -- just like with evolution, which did the same thing (sort of) over millions of years to arrive at the solution that is our brain today.
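The "just like with evolution" idea can be sketched as random-mutation hill climbing: no gradients and no hand-written rules, only mutation and selection on a toy task (the task and all parameters here are invented purely for illustration):

```python
import random

random.seed(1)
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # the AND function

def fitness(w):
    """Number of input/output pairs the linear unit gets right."""
    w0, w1, b = w
    return sum(((w0 * x + w1 * y + b > 0) == bool(t)) for (x, y), t in data)

# Evolution in miniature: mutate the "genome" (three weights) and keep
# any child that does at least as well as its parent.
best = [random.uniform(-1, 1) for _ in range(3)]
for _ in range(5000):
    if fitness(best) == 4:
        break  # all four cases solved
    child = [w + random.gauss(0, 0.3) for w in best]
    if fitness(child) >= fitness(best):
        best = child
```

With enough mutations this reliably solves the toy task, but it scales terribly; the point is only the shape of the process (variation plus selection, no designer), not its practicality for brain-sized systems.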
 