This is a tool where a sufficiently functional version can be available to both sides in an asymmetric war. When was the last time that happened?
Firearms? Technicals?
Computers can make smarter decisions than us about just about everything (what we should eat for lunch, perhaps even how we should reply to our girlfriend's text), but that doesn't make them intelligent any more than an '80s pocket calculator is intelligent.
As automation continues to integrate with human life at scale, we can predict that the labor-based economy as we know it will transform. Robots that can think, learn, reason, and interact with their environments will eventually be capable of performing tasks better than humans. Today, manual labor compensation is the primary driver of goods and services prices, accounting for ~50% of global GDP (~$42 trillion/yr), but as these robots “join the workforce,” everywhere from factories to farmland, the cost of labor will decrease until it becomes equivalent to the price of renting a robot, facilitating a long-term, holistic reduction in costs. Over time, humans could leave the loop altogether as robots become capable of building other robots — driving prices down even more. This will change our productivity in exciting ways. Manual labor could become optional and higher production could bring an abundance of affordable goods and services, creating the potential for more wealth for everyone.
We will have the chance to create a future with a significantly higher standard of living, where people can pursue the lives they want.
We believe humanoids will revolutionize a variety of industries, from corporate labor roles (3+ billion humans), to assisting individuals in the home (2+ billion), to caring for the elderly (~1 billion), and to building new worlds on other planets. However, our first applications will be in industries such as manufacturing, shipping and logistics, warehousing, and retail, where labor shortages are the most severe. In early development, the tasks humanoids complete will be structured and repetitive, but over time, and with advancements in robot learning and software, humanoids will expand in capability and be able to tackle more complex job functions. We will not place humanoids in military or defense applications, nor any roles that require inflicting harm on humans. Our focus is on providing resources for jobs that humans don’t want to perform.
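The economics in that pitch come down to one comparison: the fully loaded hourly cost of a human worker versus the effective hourly cost of renting or amortizing a humanoid robot. As a back-of-envelope illustration only, here is a minimal sketch of that comparison; every number in it (wage, overhead, robot price, lifetime, utilization, upkeep) is an assumption made up for the example, not a figure from the pitch.

```python
# Back-of-envelope comparison: human labor cost vs. amortized humanoid robot cost.
# All inputs are illustrative assumptions, not vendor numbers.

def human_cost_per_hour(wage: float, overhead_rate: float = 0.3) -> float:
    """Fully loaded hourly cost: base wage plus benefits/overhead."""
    return wage * (1.0 + overhead_rate)

def robot_cost_per_hour(
    purchase_price: float,   # assumed up-front cost of the robot
    lifetime_years: float,   # assumed useful life before replacement
    hours_per_year: float,   # utilization (a robot can run well beyond 40 h/week)
    annual_upkeep: float,    # assumed electricity + maintenance + software fees
) -> float:
    """Amortized hourly cost of owning or renting the robot."""
    total_cost = purchase_price + annual_upkeep * lifetime_years
    total_hours = hours_per_year * lifetime_years
    return total_cost / total_hours

if __name__ == "__main__":
    human = human_cost_per_hour(wage=20.0)                      # ~$26/h loaded
    robot = robot_cost_per_hour(
        purchase_price=50_000.0, lifetime_years=5.0,
        hours_per_year=5_000.0, annual_upkeep=5_000.0,
    )                                                           # ~$3/h amortized
    print(f"human ≈ ${human:.2f}/h, robot ≈ ${robot:.2f}/h")
```

Under those made-up numbers the robot works out to roughly a tenth of the human cost per hour, which is the entire argument; with lower utilization, a shorter lifetime, or tasks the robot simply can't do, the gap shrinks or flips.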
"Intelligent enough to lift crates and take 'em to the opposite side of the warehouse. Intelligent enough to translate text better than the overwhelming majority of individual human translators. Intelligent action is not a high bar to cross. You can wave your hand and say 'nah, this isn't 100% human intelligence', and while you maintain that stance, capitalists will quietly put on leave 10 million people working jobs AI systems can already emulate. There are at least 10 companies I know of now, big companies, which are in the process of preparing mass production of next-gen home and industrial robots sometime in 2024/25, including military contractors. So, yeah, we can talk a little more about how AI intelligence pales in comparison to the almighty human, or we can shift towards the real-world implications of the synthesis of robotics and intelligent software."
Anecdotally, AI in software is having a real and detrimental impact on the quality of code delivered.
"Intelligent enough to lift crates and take 'em to the opposite side of the warehouse. [...]"
I don't see how this is fundamentally different from computers replacing humans doing math tasks 50 years ago.
Capitalists will fire who they want to fire.
It's just marketing of a product.
A) We have globalization, where rich people become richer and poor people become poorer.
B) The biggest corporations are global, control the major production power, and will benefit much more from robotics than small businesses will.
So, how is this supposed to happen? Any ideas?
"We will have the chance to create a future with a significantly higher standard of living, where people can pursue the lives they want"
Well, sure, but given an existing workforce, that directly implies firings. Capitalists will hire whoever is cheaper to hire.
You want to hear arguments these companies put forward? Here is one from Figure 1 (the pitch quoted above).
The trend indicates that by about 2032 they'll be paying you to take them off their cybernetic hands. This is a pitch for investor money.
"And what makes you think that intelligence isn't, at its core, just pattern recognition? At which point exactly does an AI stop just aping intelligence and start to actually become intelligent?"
AI is not even aping intelligence. That's the point. Intelligence as a concept requires, at the very least, an actual sense of self and an understanding of the concept of the existence of concepts. Modern "AI" does not have that. All it has is the ability to gather information and process it in such a way as to generate things that match the same pattern.
"Furthermore, I get the feeling that you seem to see this as being some sort of metaphysical discussion about whether we can create a machine that thinks."
No, I'm more on the metaphysical discussion of what intelligence is, and how we try very hard to think of ourselves as fundamentally different and better.
"Modern 'AI' is just a very advanced data analysis tool based on statistics that has some very real uses and a lot of overblown hype."
And I'm not convinced that "very advanced data analysis" isn't just "true" intelligence, provided it reaches a sufficient level of complexity: an emergent quality, as you say.
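The "very advanced data analysis tool based on statistics" framing can be made concrete with a toy example. A bigram (Markov) text generator just counts which word follows which in its training text and samples from those counts; it reproduces the surface pattern of the input with no notion of what any word means. Modern models are vastly more sophisticated, but this illustrative sketch (corpus and names made up for the example) shows the bare-bones version of "generating things that match the pattern":

```python
# Toy bigram ("what word tends to follow what") text generator.
# Pure pattern statistics: no concepts, no understanding, just counts and sampling.
import random
from collections import defaultdict

def train(text: str) -> dict[str, list[str]]:
    """Record, for every word, the words observed to follow it."""
    words = text.split()
    table: dict[str, list[str]] = defaultdict(list)
    for current, following in zip(words, words[1:]):
        table[current].append(following)
    return table

def generate(table: dict[str, list[str]], start: str, length: int = 10) -> str:
    """Walk the table, sampling each next word from whatever followed the current word."""
    out = [start]
    for _ in range(length):
        followers = table.get(out[-1])
        if not followers:            # dead end: this word was never followed by anything
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the robot lifts the crate and the robot moves the crate across the warehouse"
print(generate(train(corpus), start="the"))
```

Whether scaling that kind of statistical machinery up eventually amounts to "true" intelligence, or emergent understanding, is exactly what is being argued here; the sketch only shows what the mechanism looks like at its simplest.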
"You repeated the argument that AI has no intelligence because it doesn't understand the concepts it manipulates and simply correlates large amounts of data. I'll repeat my point: how do you know that's not how our own intelligence works? Don't we grasp concepts through definitions and the accumulation of examples? I'm not necessarily saying that AI is actually at that level yet, but I'm musing that it might just be a case of complexity and not of fundamental difference."
Because they fundamentally don't work in the same way. We don't know enough about neuroscience, as a developing field, to say we understand everything about it, but this means any "what ifs" are equally fruitless. We don't know, so we can't say yes or no. There is no evidence to support your idea.
"Because they fundamentally don't work in the same way. [...] There is no evidence to support your idea."
Do you think that if we ever meet aliens and they don't think like us, does that mean they don't have intelligence?
On the other hand, we know exactly how computers work, because we invented them, and they don't rely on weird things we don't fully understand yet (like, say, the LHC or something like quantum computing, which I mentioned earlier). Computers operate on a binary system, the hardware is well understood (silicon and gold on printed circuit boards), and we program new and better ways for the hardware to work every year. We know how bits are flipped (well, outside of some of the more automagical hardware optimisations, like whatever was in Intel's architecture that led to some hilariously bad exploits), we know how the processor literally processes the operations sent to it, we know how the data bus works, and so on.
Even if you translate this to a philosophical environment, you can't get around the base assumption that human brains do not work like this. Our neural pathways respond differently to repetition, for example, whereas a computer doesn't (and can't, by design). We've developed tons of different ways for a computer to cache data, but that fundamentally doesn't work in the same way as the neurons in our brains do (a short sketch below makes this contrast literal).
They're different models, essentially. This isn't like human to monkey to horse to bird; this is, conceptually, like comparing a carbon-based lifeform to a lifeform based on some other element. The "brain" of a computer is an entirely different world from the brain of a human.
Does that mean they can't one day do what we do? Not necessarily. But they'd do it in a different way to how we do it, short of revolutionising how computers are put together and how they function. Which means this whole tangent about AI, LLMs, generative "AI", and the like is all based on a hypothetical "one day anything will be possible" kind of tech-based utopia. I appreciate the idealism personally, but I don't see the tech industry heading that way (not if money can't be made out of it).
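To make the determinism and caching points above literal: in code, a bit flip is an explicit, exact operation, and a cache is an explicit lookup table that hands back the same stored value on every repeat; nothing about it strengthens with use the way a neural pathway does. A minimal illustrative sketch (the functions and numbers are made up for the example):

```python
# Two of the points above, made literal: bit flips are explicit and exact,
# and a "cache" is just a lookup table we wrote down, not a pathway that
# strengthens with repetition. (Illustrative sketch only.)

def flip_bit(value: int, position: int) -> int:
    """Flip exactly one bit; the result is fully determined by the inputs."""
    return value ^ (1 << position)

_cache: dict[int, int] = {}

def slow_square(n: int) -> int:
    """A stand-in for an expensive computation, with an explicit cache in front."""
    if n not in _cache:
        _cache[n] = n * n          # computed once, stored verbatim
    return _cache[n]               # every repeat is just a dictionary lookup

print(bin(flip_bit(0b1010, 0)))          # 0b1011 -- the same answer every time
print(slow_square(12), slow_square(12))  # second call hits the cache; nothing "learns"
```

Run it twice or a thousand times: the answers and the mechanism are identical, which is the sense in which we "know how it works" all the way down.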
"Do you think that if we ever meet aliens and they don't think like us, does that mean they don't have intelligence?"
No, that's not what I think.
"Do you think that if we ever meet aliens and they don't think like us, does that mean they don't have intelligence?"
"If it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck." Computers aren't beings. They may be able to regurgitate duck facts and duck sounds, but they're just ducking around. Toss one in a pond and see how ducky it is.
By the way, this issue was discussed in the film "Arrival": how to establish contact with beings who do not think like us.
This is magical thinking. Are we going to argue that a calculator knows what zero means, or that my dictionary really knows the meaning of the word "love"?
"That's what man is doing with neural networks now, trying to create AGI. It is just a matter of model size, computing power, and time. The question is not whether we can create artificial intelligence, but when we will create it."
Maybe we will, but not yet.
"It's not magical thinking. It's the scientific method, big data, and chaos theory."
It's magical thinking to think that we currently have anything like humans at this moment; someday, yes, I hope so, as humans don't seem smart enough to solve their own problems.