The AI Thread

If humankind had a history of distributing the resources/profits generated by labor-saving devices toward meeting the basic needs of the entire populace . . .

. . . and again if there were a history of people investing the leisure time freed up by labor-saving devices into efforts of self-actualization . . .

I could totally get on board for this optimistic view of the potential impact of AI.

There is always the hope that our natural inclination of giving about 2% of our income to charity will eventually be sufficient! Someday, the top 0.1% will have incomes such that their 2% donation will provide sufficient resources to the unwashed to allow 'reasonable' lifestyles! Holodecks and replicators for all!
 
Certainly no one can see revolution happening; it's perfectly fine to keep having some people living with 10,000 times the fortune of others, with the latter left to die of poverty.
Tech can also make the upper class more exposed, since the critical resource for an attack becomes non-material.
 
Can you carry that math forward, El Mac?

What do we think is the cost of a dignified subsistence for 9 billion people? Or the cost for one, and I can multiply by 9 billion myself.

Then, what is the total wealth of the world's top 0.1 percent? Do we know that?

And what would it need to be for 2% of it to answer Figure A?

And finally, do the wealthiest give 2% of their income/wealth to charity? esp. when the "charity" is just deadbeats self-actualizing (that last makes it seem like a loaded question, but I do want an answer to the first of those questions, in its unsnarkified state).

I'm trying to construct a target date for when I'll be able to retire, and self-actualize.
 
Assuming that a 'living income' is $50k USD annually, and assuming 10 billion people, that's $500 trillion annually.
So, the ultra-rich need $25,000 trillion annually in income before they'll give us enough that our fighting over the scraps might work out.
Assuming an annual capturable income of 2%* of wealth, they will need assets worth $1,250,000 trillion before they'd get $25,000 trillion in income in order to give $500 trillion.
*Income from wealth can only grow faster than the overall growth rate as long as there are assets left to be transferred. This calculation works best if we assume they already own everything, so we use a more realistic growth rate. Note that all calculations change if the post-Singularity growth rate plateaus later, but I am presuming sooner.

Global GDP is about $100 trillion, and I'm using that number even though it's really not the best one.

Getting 12,500x higher than we have now is only about 14 doublings. After that, just plug in whatever doubling rate you presume from the Singularity.
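A quick sanity-check of those figures, as a minimal Python sketch under the same assumptions (a $50k living income, 10 billion people, a 2% giving rate, 2% capturable income from wealth, and today's rough $100 trillion as the base):

```python
import math

# Assumptions from the post above; dollar outputs are in trillions USD.
living_income = 50_000           # 'living income' per person per year, USD
population = 10_000_000_000      # 10 billion people
giving_rate = 0.02               # share of income the ultra-rich donate
return_on_wealth = 0.02          # capturable annual income as a share of wealth
baseline_trillions = 100         # rough global GDP today, in $ trillions

needed = living_income * population / 1e12            # $500 trillion per year
required_income = needed / giving_rate                # $25,000 trillion per year
required_wealth = required_income / return_on_wealth  # $1,250,000 trillion
doublings = math.log2(required_wealth / baseline_trillions)  # ~13.6

print(f"Living income for everyone:   ${needed:,.0f}T/yr")
print(f"Income the donors would need: ${required_income:,.0f}T/yr")
print(f"Assets that income implies:   ${required_wealth:,.0f}T")
print(f"Doublings from ~$100T:        {doublings:.1f}")
```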
 
The only time trickle-down works is if someone is eating and then (as in a parable by Kafka) leaves the table to go eat the scraps that fell while he was eating. But eventually the scraps run out, no new ones are produced, and so he dies.
 
I still resent the term 'Artificial Intelligence' and how casually it is often used by corporations to market their smart tech.

Human intelligence and AI have pretty much nothing in common, as far as we can tell from our current understanding of our own brains.
 
I"m in that camp (though I'm not a highly important person). I think thinking requires wetware: bodily experience.
 
And I don't want to hear about any repurposing of your sex-clones, Kyr.;)
 
Do you believe that there are problems that machines will never be able to solve that we can, or that whatever capacities machines can acquire will never fit the definition of thinking/intelligence?
 
I'd phrase it more like the latter, I think.

In part because I wouldn't want to limit intelligence to "problem-solving." I've been trying to get ChatGPT to write a limerick. It can't. That's not a "problem" to be solved. But it is something I can do with my mind that it can't.

Thanks, El Mac, for doing that math. I'm cynical enough about human nature to believe that if we reach that level of global wealth, the wealthy will make a game of who can better avoid coughing up his 2%. But it's nice to get a feel for the figures that would be involved. Didn't Musk say, "Give me a figure for curing global hunger" and they did, and he instead went and spent 10X that on Twitter? Some story along those lines?

We're wretches, we humans. We can outthink machines, yes. But possibly not outlove them.

Edit: Samson's like added after only the first line. He's not on the hook for liking any of the rest of this post.
 
It's stupid math, obviously. Eventually we should be rich enough that welfare will be casual, as long as 'wants' and 'needs' don't scale faster than production. But it's all sci-fi at that point.
 
Do you believe that there are problems that machines will never be able to solve that we can, or that whatever capacities machines can acquire will never fit the definition of thinking/intelligence?
Aren't Goedel sentences exactly such cases (proven to exist), where the machine is unable to identify the block as something emerging from the system itself? By contrast, any human will pick up that the system can be viewed from outside the system, and thus that the block is only a property of the system.
And computers have to be consistent, so it's not like you can build one that doesn't feature this block.
The digital computer is by definition a formal logic system => it cannot avoid being blind to anything outside itself. Humans can't see outside themselves either, but for a human any "outside" is not so easy to define as the very limited confines of an actual formal system (e.g., for a human, an "outside" could be an impression of something without form, size, position, etc.).
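For reference, the standard fixed-point form of the result being invoked here (textbook material, not anything specific to this thread): for any consistent formal system F strong enough to express arithmetic, there is a sentence G such that

```latex
% G "says" of itself that it is not provable in F:
G \;\leftrightarrow\; \neg\,\mathrm{Prov}_F(\ulcorner G \urcorner)
% If F is consistent, F proves neither G nor its negation,
% yet G is true when read from outside F.
```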
 
Computers so far have no capacity for abstract thought, or for insight derived not from data but from creative and often irrational self-made propositions. Without that, we have no Isaac Newton, no Einstein, no Bohr, no Faraday, no Mozart, no Alan Turing, no Sid Meier, no you. Machines are excellent at one thing: performing billions or trillions of calculations in very short time frames.

That is partly why we can program very good chess computers: chess is one of the only games where all information about the board and the position of the pieces is fully available to both players from start to finish. The computer thus does not have to account for hidden information known only to its opponent when calculating which moves give it the most advantageous position, based on a set of pre-programmed variables. If you set your chess computer to a low difficulty level, it does not somehow become 'less intelligent'. Instead, it stops itself from calculating x moves ahead and assigning a value to each combination of moves, and maybe calculates only 1 move ahead, choosing the move with the highest calculated value (see the sketch below). That's why it usually announces its move much faster at the lowest difficulty level: it performs far fewer calculations than it is actually capable of. ;)
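To make the difficulty-setting point concrete, here is a minimal sketch of depth-limited minimax, the kind of search described above. The state interface (legal_moves, apply, evaluate, is_over) is hypothetical, invented for illustration, and not any real chess engine's API; real engines add alpha-beta pruning, move ordering, and far more.

```python
def minimax(state, depth, maximizing):
    """Best achievable evaluation, looking `depth` plies (half-moves) ahead."""
    if depth == 0 or state.is_over():
        return state.evaluate()  # static score from pre-programmed variables
    values = (
        minimax(state.apply(move), depth - 1, not maximizing)
        for move in state.legal_moves()
    )
    return max(values) if maximizing else min(values)

def choose_move(state, difficulty):
    """Lower difficulty = shallower search = fewer calculations = faster reply."""
    return max(
        state.legal_moves(),
        key=lambda move: minimax(state.apply(move), difficulty - 1, maximizing=False),
    )
```

With difficulty set to 1, the engine just evaluates each immediate reply and picks the highest-scoring one, exactly the "calculate 1 move ahead" behaviour described.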
 
It's stupid math, obviously. Eventually we should be rich enough that welfare will be casual, as long as 'wants' and 'needs' don't scale faster than production. But it's all sci-fi at that point.
it would be nice to be post-scarcity, but it's probably best not to plan on depending on that any time soon, yeah.

It is probably an emergent property of biomatter.
we've been trying to find what process causes this "emergence", so far without success. i'm not convinced it requires "biomatter" specifically, or that there's any reason it would need to.

it is probably worth hammering home that even if ai is developed sufficiently for general intelligence, it will indeed still not be "human" regardless.

even so, if we observe machines capable of making decisions and behaving at the same or a higher level than people, to the point that the "intelligence" can't be experimentally distinguished, it's probably a mistake to pretend that doesn't count, and likely a mistake to believe that's the limit of the AI in question.
 
The time when we own nothing of value is much closer, given exponential economic trends.
that requires some assumptions that aren't 100% safe to make, but possibly. still, it's better for post-scarcity to be a pleasant surprise than something we depend on.
 
even so, if we observe machines capable of making decisions and behaving at the same or a higher level than people, to the point that the "intelligence" can't be experimentally distinguished, it's probably a mistake to pretend that doesn't count, and likely a mistake to believe that's the limit of the AI in question.
(had to rewrite this, because I am not currently in the mood to reread Goedel and related theorems ^^)
A digital computer is a formal logic system, and formal logic systems can indeed prove things that are consistent with the basis of the system. HOWEVER, it was shown that the system itself (in our case, a digital computer) will fail to reach conclusions that are obviously true for someone who can see it from outside (e.g., a human). This constitutes a limitation, but more importantly for this thread it stresses that the digital computer will be unable to treat any object as external to the limited, closed formal system it IS. A human, by contrast, can both calculate within the confines of a formal logic system and, of course, read it from the outside. (It goes without saying that the human will be far slower with calculations, and is highly unlikely to be fully consistent, whereas the machine HAS to be, which ironically is what leads to its limitation.)

Not to disappoint, though, here is something you don't see every day: Lord Penrose with Joe Rogan ^^


As for why formal logic systems are "limited and closed": formalization is only possible with countable infinity (integers, i.e., digits, hence "digital" computer).
 
if we observe machines capable of making decisions and behaving at the same or a higher level than people, to the point that the "intelligence" can't be experimentally distinguished, it's probably a mistake to pretend that doesn't count,
To this I was going to say "The Turing Test. And the very thing that's giving these new chatbots their buzz is that they've gotten closer to passing it."

But to be sure, I went to Wiki "Turing Test," and boy did I learn interesting things about it that aren't just part of common knowledge. First: "The Turing test is inspired by a parlour game where humans compete to see whether a man can pass as a woman." and therefore, according to Jack Halberstam: "Turing's point in introducing the sexual guessing game was to show that imitation makes even the most stable of distinctions (i.e., gender) unstable." Boy have our views of that last point changed of late!

In its more generalized form (which is what you invoked), I like it as a test. That in turn raises other issues, like how long the interrogation should last.

ChatGPT gets way closer than any previous chatbot, but you can still trip it up. (And Bard can trip up to the tune of 100 billion dollars!)

The Gori test is "Can it write a creditable limerick?" (Though of course few humans can do that, actually.) And I think we'll need other tests as well: "What projects does it take up unprompted?" In other words, does it ever decide on its own that it would like to write a limerick? For that last, you have to give it something like what is built into biomatter-thinkers: e.g., drives of various sorts.
 