Warfare

o la la

[attached image: 20-03-2024e.jpeg]


The original is dated 2020 and has suffered a size reduction on the way.
 
Computers can make smarter decisions than us about just about everything (what we should eat for lunch, perhaps even how we should reply to our girlfriend's text), but that doesn't make them intelligent any more than an '80s pocket calculator is intelligent.

Intelligent enough to lift crates and take them to the opposite side of the warehouse. Intelligent enough to translate text better than the overwhelming majority of individual human translators. Intelligent action is not a high bar to cross. You can wave your hand and say "nah, this isn't 100% human intelligence". And while you maintain that stance, capitalists will quietly lay off 10 million people working jobs AI systems can already emulate. There are at least 10 big companies I know of that are preparing, or are already in the process of, mass production of next-gen home and industrial robots sometime in 2024/25. Including military contractors. So, yeah, we can talk a little more about how AI intelligence pales in comparison to the almighty human, or we can shift towards the real-world implications of the synthesis of robotics and intelligent software.

You want to hear the arguments these companies put forward? Here is one from Figure:

As automation continues to integrate with human life at scale, we can predict that the labor-based economy as we know it will transform. Robots that can think, learn, reason, and interact with their environments will eventually be capable of performing tasks better than humans. Today, manual labor compensation is the primary driver of goods and services prices, accounting for ~50% of global GDP (~$42 trillion/yr), but as these robots “join the workforce,” everywhere from factories to farmland, the cost of labor will decrease until it becomes equivalent to the price of renting a robot, facilitating a long-term, holistic reduction in costs. Over time, humans could leave the loop altogether as robots become capable of building other robots — driving prices down even more. This will change our productivity in exciting ways. Manual labor could become optional and higher production could bring an abundance of affordable goods and services, creating the potential for more wealth for everyone.

We will have the chance to create a future with a significantly higher standard of living, where people can pursue the lives they want.

We believe humanoids will revolutionize a variety of industries, from corporate labor roles (3+ billion humans), to assisting individuals in the home (2+ billion), to caring for the elderly (~1 billion), and to building new worlds on other planets. However, our first applications will be in industries such as manufacturing, shipping and logistics, warehousing, and retail, where labor shortages are the most severe. In early development, the tasks humanoids complete will be structured and repetitive, but over time, and with advancements in robot learning and software, humanoids will expand in capability and be able to tackle more complex job functions. We will not place humanoids in military or defense applications, nor any roles that require inflicting harm on humans. Our focus is on providing resources for jobs that humans don’t want to perform.

 
A) We have globalization, where rich people become richer and poor people become poorer.
B) The biggest corporations are global, control the major production power, and will benefit much more from robotics than small businesses.

So, how is this supposed to happen? Any ideas?

"We will have the chance to create a future with a significantly higher standard of living, where people can pursue the lives they want"
 
And while you maintain that stance, capitalists will quietly lay off 10 million people working jobs AI systems can already emulate.
Anecdotally, AI in software is having a real and detrimental impact on the quality of code delivered.

Capitalists will fire who they want to fire. Quality isn't the problem there. Intelligence isn't the goal. Only the perception of value.
 
Intelligent enough to lift crates and take them to the opposite side of the warehouse. Intelligent enough to translate text better than the overwhelming majority of individual human translators.
I don't see how this is fundamentally different from computers replacing humans doing math tasks 50 years ago.

I never said "AI" pales in comparison to human intelligence; computers are obviously vastly superior to humans in many aspects, but that doesn't make them intelligent. They're still just tools to be used to make human life easier.

To put it crudely, interacting with an "AI" "girlfriend" isn't a sexual relationship; it's still just masturbation with a fancy human-speech-generating calculator.
 
Capitalists will fire who they want to fire.

Capitalists will hire whoever is cheaper to hire. Capital is in constant search of a bargain. That's why US production moved to China in the past. No amount of patriotism is able to stand in the way of cost savings. Right now humans are more appealing. Ten years ago a robot's price tag would have been in the millions, with no fancy generative AI. Today, a stumbling humanoid robot performing a range of mundane office tasks or working a warehouse costs, by some estimations (Goldman Sachs), $50-150k to make. Let's say it's $200k for early adopters. How much does a human cost a corp over their lifetime? Of course, a robot will require maintenance, but overall it is easy to see that the costs of human and robot labor are approaching parity. We are quickly approaching the point when it will become economically necessary to employ a fleet of robots in order for big corps to outcompete their peers on cost. A small fleet at first. Robots, like any mass technology, will continue to become cheaper. At the end of this train of thought there are millions of unemployed. Unless... there is something else to this "formula". Here's a chart of the cost of industrial robots (in USD) to get a feel for the technology's proliferation:

Spoiler Chart:
[chart image: Screenshot 2024-03-20 at 19.00.08.png]
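
And a quick back-of-envelope sketch of the parity argument. To be clear, the wage and maintenance figures below are my own assumptions for illustration; only the $200k price tag comes from the estimate above:

ROBOT_CAPEX = 200_000        # assumed early-adopter robot price, USD (from above)
ROBOT_MAINTENANCE = 15_000   # assumed upkeep per year, USD (my guess)
HUMAN_ANNUAL_COST = 60_000   # assumed wage + overhead per year, USD (my guess)

cumulative_robot = ROBOT_CAPEX
cumulative_human = 0
for year in range(1, 11):
    cumulative_robot += ROBOT_MAINTENANCE
    cumulative_human += HUMAN_ANNUAL_COST
    if cumulative_robot <= cumulative_human:
        print(f"The robot is cheaper from year {year} on")  # year 5 with these numbers
        break

Swap in your own numbers; the point is only that the crossover year is finite, and not far off.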
 
"We will have the chance to create a future with a significantly higher standard of living, where people can pursue the lives they want"
It's just marketing of a product.

These algorithms are tools like any other; they can be used to improve people's lives or to exploit them. Like any tools, it depends on how they're used.
 
Capitalists will hire whoever is cheaper to hire.
Well, sure, but given an existing workforce that directly implies firings :)

I don't agree that humans and robots are approaching parity in things where actual finesse or lateral thinking is required (except in areas where automation already provides dividends, and there the resulting job losses are once again because these aren't necessarily jobs we should have to work in the first place, but capitalism mandates that we do or die, so).
 
And what makes you think that intelligence isn't, at its core, just pattern recognition? At what point exactly does an AI stop just aping intelligence and start to actually become intelligent?
AI is not even aping intelligence. That's the point. Intelligence as a concept requires at the very least an actual sense of self and understanding of the concept of the existence of concepts. Modern "AI" does not have that. All it has is the ability to gather information and process it in such a way as to generate things that match the same pattern.

When you, for example, have a "conversation" with an AI, it's not a real conversation, because the "AI" does not actually understand what it is saying. All it knows is that the input it receives from you is statistically likely to be matched by a certain type of output, and it uses the data it has gathered to generate something that matches this statistical model. But by that definition a phonebook has intelligence, because it knows to translate your input of a name and address into a phone number.
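
To illustrate what I mean by matching a statistical model, here's a toy sketch (nothing remotely at the scale of a real LLM; the corpus here is made up for illustration):

import random
from collections import defaultdict

# Count which word follows which in a tiny corpus.
corpus = "the cat sat on the mat the dog sat on the rug".split()
following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

# "Converse" by repeatedly emitting a statistically likely next word.
word = "the"
output = [word]
for _ in range(8):
    word = random.choice(following.get(word, corpus))
    output.append(word)
print(" ".join(output))  # fluent-looking output, zero understanding of cats

A real system is this scaled up by orders of magnitude, with far cleverer statistics, but the principle is the same: predict what plausibly comes next.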

Furthermore, I get the feeling that you see this as some sort of metaphysical discussion about whether we can create a machine that thinks. And I do not understand why this is the case, especially since the answer is obvious. Of course we can. Intelligence, sentience, and all that stuff are just emergent properties of a very complex mechanism that we call the brain. But this is not what modern "AI" is. It's not even what it's trying to be. Although I am sure some tech company out there will have a pamphlet that says otherwise to attract venture capital.

Modern "AI" is just a very advanced data analysis tool based on statistics that has some very real uses and a lot of overblown hype.
 
Furthermore, I get the feeling that you see this as some sort of metaphysical discussion about whether we can create a machine that thinks.
No, I'm more on the metaphysical question of what intelligence is and how we try very hard to think of ourselves as fundamentally different/better.

You repeated the argument that AI has no intelligence because it doesn't understand the concepts it manipulates and simply correlates large amounts of data. I'll repeat my point: how do you know that's not how our own intelligence works? Don't we grasp concepts through definitions and an accumulation of examples? I'm not necessarily saying that AI is actually at that level yet, but I'm musing that it might just be a matter of complexity and not of fundamental difference.

Even if our intelligence actually doesn't work like that, who is to say that it's not simply a different form of intelligence, one that can potentially be just as capable as ours?
Modern "AI" is just a very advanced data analysis tool based on statistics that has some very real uses and a lot of overblown hype.
And I'm not convinced that "very advanced data analysis" isn't just "true" intelligence, provided it reaches a sufficient level of complexity - an emergent quality, as you say.
 
I'll repeat my point: how do you know that's not how our own intelligence works? Don't we grasp concepts through definitions and an accumulation of examples?
Because they fundamentally don't work in the same way. Neuroscience is a developing field and we don't know enough to say we understand everything about the brain, but that means any "what ifs" are equally fruitless. We don't know, so we can't say yes or no. There is no evidence to support your idea.

On the other hand, we know exactly how computers work, because we invented them, and they don't rely on weird things we don't fully understand yet (like, say, the LHC, or something like quantum computing, which I mentioned earlier). Computers operate on a binary system, the hardware is well understood (silicon and gold on printed circuit boards), and we program new and better ways for the hardware to work every year. We know how bits are flipped (well, outside of some of the more automagical hardware optimisations, like whatever was in Intel's architecture that led to some hilariously bad exploits), we know how the processor literally processes the operations sent to it, we know how the data bus works, and so on.
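
A trivial sketch of the kind of determinism I mean (illustrative only):

value = 0b1010_1100     # some well-defined machine state
mask = 0b0000_0100      # choose exactly one bit to flip
flipped = value ^ mask  # XOR toggles the masked bits, the same way every single time
print(f"{value:08b} -> {flipped:08b}")  # 10101100 -> 10101000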

Even if you translate this to a philosophical environment, you can't get around the base assumption that human brains do not work like this. Our neural pathways respond differently to repetition, for example, whereas a computer's don't (and can't, by design). We've developed tons of different ways for a computer to cache data, but that fundamentally doesn't work the same way as the neurons in our brains do.

They're different models, essentially. This isn't like human to monkey to horse to bird. This is like a carbon-based lifeform versus a lifeform based on some other element, conceptually. The "brain" of a computer is an entirely different world to the brain of a human.

Does that mean they can't one day do what we do? No, they might. But they'd do it in a different way to how we do it, short of revolutionising how computers are put together and how they function. Which means this whole tangent about AI, LLMs, generative "AI", and the like is all based on a hypothetical "one day anything will be possible" kind of tech-based utopia. I appreciate the idealism personally, but I don't see the tech industry heading that way (if money can't be made out of it).
 
They're different models, essentially. This isn't like human to monkey to horse to bird. This is like a carbon-based lifeform versus a lifeform based on some other element, conceptually. The "brain" of a computer is an entirely different world to the brain of a human.
Do you think that if we ever meet aliens and they don't think like us, does that mean they don't have intelligence?
By the way, this issue was explored in the film "Arrival": how to establish contact with beings who do not think like us.

"If it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck."
 
Do you think that if we ever meet aliens and they don't think like us, does that mean they don't have intelligence?
No, that's not what I think.

But there is a difference between "having what we conceptually describe as intelligence" and "having human intelligence in an applicable manner that allows them to render humans redundant", which is the real-world tech focus of such generative technologies that we're currently seeing.

Pick an argument, stick to the argument. Alien intelligence is not the same as generative AI. Generative AI is not the same as the field of study of actual AI. And so on, and so forth. They are separate and distinct things (that all apply to "warfare" differently, e.g. actual AI is basically a myth in real terms, whereas generative AI as a product is a great way for the people in charge to absolve themselves of the responsibility of a human decision in, say, tactical or strategic choices).
 
"If it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck."

As a large language model, I cannot look or swim like a duck. However, I can output the word "quack". Quack quack quack
 
Do you think that if we ever meet aliens and they don't think like us, does that mean they don't have intelligence?
By the way, this issue was explored in the film "Arrival": how to establish contact with beings who do not think like us.

"If it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck."
Computers aren't beings. They may be able to regurgitate duck facts and duck sounds, but they're just ducking around. Toss one in a pond and see how ducky it is.

This is magical thinking. Are we gonna argue that a calculator knows what zero means, or that my dictionary really knows the meaning of the word love?
 
Computers aren't beings. They may be able to regurgitate duck facts and duck sounds, but they're just ducking around. Toss one in a pond and see how ducky it is.

This is magical thinking. Are we gonna argue that a calculator knows what zero means, or that my dictionary really knows the meaning of the word love?

To become a human being, a child must be raised as a human being. We literally teach our children to be human beings. Without this, a person cannot become what he is. See: feral children. After a certain number of years, children raised by animals have no chance to even come close to human consciousness. They're already programmed.
That's what man is doing with neural networks now, trying to create AGI. It is just a matter of model size, computing power, and time. The question is not whether we can create artificial intelligence, but when.
It's not magical thinking. It's the scientific method, big data, and chaos theory.
 
That's what man is doing with neural networks now, trying to create AGI. It is just a matter of model size, computing power, and time. The question is not whether we can create artificial intelligence, but when.
Maybe we will, but not yet.

Computers now are just computers, same as in the '80s:
10 PRINT "I IS TELLIGENT"
20 GOTO 10

It's not magical thinking. It's the scientific method, big data, and chaos theory.
It's magical thinking to think that we have anything like human intelligence at this moment. Someday, yes, I hope so, since humans don't seem smart enough to solve their own problems.
 