The AI Thread

It's the same math, which is why a GPU can do it too. But dedicated AI cards do it through very specific fixed-function circuits, while a GPU does it with extra overhead because it has to do other things as well (the 32- and 64-bit formats you mentioned, for example). It's the same story as with dedicated Bitcoin-mining ASICs: any CPU or GPU can mine too, just far less efficiently.
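
To make "the same math" concrete, here is a minimal sketch (my own illustration, not any vendor's actual pipeline): the core operation is a matrix multiply, and AI silicon essentially hard-wires cheap low-precision versions of it, which we can imitate in software with int8 quantization:

```python
# Illustrative only: simulate the low-precision matmul that AI accelerators
# hard-wire, versus the general-purpose fp32 matmul a GPU also handles.
import numpy as np

a = np.random.randn(4, 4).astype(np.float32)
b = np.random.randn(4, 4).astype(np.float32)

full = a @ b  # the general-purpose fp32 path

# Crude symmetric int8 quantization: the kind of arithmetic tensor units fix in silicon
scale_a = np.abs(a).max() / 127
scale_b = np.abs(b).max() / 127
qa = np.round(a / scale_a).astype(np.int8)
qb = np.round(b / scale_b).astype(np.int8)
approx = (qa.astype(np.int32) @ qb.astype(np.int32)) * (scale_a * scale_b)

print("max abs error:", np.max(np.abs(full - approx)))  # small error, much cheaper circuits
```

Same multiply-accumulate either way; the ASIC just refuses to be general about it.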

BTW, lately there is an anti-AI paranoia on YouTube. Any minimally weird or surprising video gets accused of being AI in the comments. People have gone from taking the most evidently AI-generated videos as real to just the opposite:

 
Well, that's one effect of AI improving: it becomes harder and harder to know what is real and what is fabricated, with all the bad consequences that will have for society.
 
And for now it is reasonably easy to tell the difference, at least for people familiar with AI image generation. Soon it will be difficult or impossible. Better not to think about it. :scared:
 
They are really similar to the GPU you have but they have more memory
Remember this is part of his economic analysis. The GPUs in your computer don't cost $50,000 each, with one dedicated usage (that nobody can find a way to squeeze $50,000 worth of business usefulness out of).
I really think this idea that AI is something magical that only mega corps can do is playing into their hands.
If anybody is debunking the idea that AI is "magical" it's this guy.
Or is the problem that AI does not always produce a perfect novel work every time? Is that what he expected it to do?
Yes, a big part of his point is that every worthwhile work product is, to a degree, "bespoke," suited to a specific circumstance. So something that can only produce probabilistic results, beige soup in Moriarte's terms, is rarely actually useful in the business world. People spend more time adjusting AI's products to the circumstances actually at hand than they would spend building the thing directly.
 
Remember this is part of his economic analysis. The GPUs in your computer don't cost $50,000 each, with one dedicated usage (that nobody can find a way to squeeze $50,000 worth of business usefulness out of).

The 50k ones are called accelerators, not GPUs. Just so we don’t confuse each other further…

I would say there is an inequality issue here. AI is unevenly useful across the fabric of society, and disruptive to some professions. Let's take professional translation. I have been translating for decades, and I'm pretty decent at it. One morning I woke up to find that I am an idiot child compared to AI translations. I just don't have the dictionary in my head to make characterisations as precise as AI's. And the speed is not even in the same ballpark.

Since this wasn't my bread and butter, I welcomed the speed and efficiency of AI translation with my whole heart. Less work for me = great. But other people's livelihoods depended on the art of conveying meaning between languages. Today, hundreds of thousands of those people across the globe have been forced to change profession. That's the disruption.

That's just one example of usefulness for millions, while disruptive for hundreds of thousands. I have more examples if you need them: when it comes to my investing hobby, AI has allowed me to virtually employ an office full of analysts, statisticians and lawyers, which is worth more than a one-time $50k expenditure.

I don't agree that nobody can break even on the $50k investment in an accelerator. That's an unnecessary generalisation. Besides, not many people have to, when one can pay 20 bucks a month for a subscription.
 
That's just one example of usefulness for millions, while disruptive for hundreds of thousands.
And the thing is, as AI becomes better, it will eventually be usefulness for millions, while disruptive for hundreds of millions.
 
Did you read the article, @Moriarte? It's long, but it addresses those issues. It actually takes translators as one of its examples (of a case where good-enough is good enough). And it shows how charging only $20 for use is unsustainable. He probably should have included in his article a concession to your point that some individuals can wrest more than $50,000 value from these products. (Good on you that you're one of them. Hygro too). But the whole system depends on the businesses that buy the use of these things getting that kind of enhanced value out of every employee. And that's just not happening. So the whole thing is not financially sustainable.
 
Did you read the article, @Moriarte?

Kind of? I read the first 10 pages over morning coffee (but lost interest quickly). I find his knowledge of computers and AI shallow and his argumentation sensationalist. But yeah, let's address some of those points.

OpenAI hit $13B in 2025 revenue, and costs per token are dropping 10-20% yearly via efficiency gains in models like o1. If $20 subs are loss leaders, why haven't providers like Claude or ChatGPT collapsed yet? They're subsidizing to capture market share, much like AWS did early on. In my opinion, hyperscalers can afford to subsidise the project for another 50 years, given the amount of capital they command across their other cloud businesses. What we have is hardly the over-leveraged situation of the early internet startups of the late '90s.

Yes, not every business employee extracts $50K+ value, but the "whole system" doesn't need universal ROI - niche users (e.g., in finance, research, some creative fields) subsidise the masses, similar to how free tiers fund premium cloud services. Tools like GitHub Copilot boost dev productivity by 55% in studies, meaning firms with 100 engineers could see millions in savings, offsetting accelerator costs.

And then there are cost curves: as models compress (via distillation), inference drops below $0.01/query, making $20 subs viable long-term.
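
A back-of-envelope check of those claims, taken together, using only the numbers assumed in this thread (the 10-20% yearly decline, a sub-$0.01 query, a $20 sub); the starting cost and the usage figure are my own placeholders:

```python
# All inputs are assumptions from the discussion or placeholders for illustration.
cost_per_query = 0.02   # placeholder starting inference cost ($)
decline = 0.15          # midpoint of the claimed 10-20% yearly efficiency gain

year = 2025
while cost_per_query >= 0.01:
    cost_per_query *= 1 - decline
    year += 1
print(f"crosses $0.01/query around {year}")

queries_per_month = 900          # placeholder heavy user: ~30 queries/day
serving_cost = queries_per_month * 0.01
print(f"serving cost ~ ${serving_cost:.2f}/mo against a $20 sub")
# ~$9/mo of inference against $20 of revenue (training and capex excluded,
# which is exactly the part the article says sinks the economics)
```

Whether the excluded training and capital costs swamp that margin is the actual disagreement.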

So, yeah, I do acknowledge that some of the worries are real, but my final analysis is vastly different (I suspect he brings a doomsday scenario by the end of his tirade).
 
(I suspect he brings a doomsday scenario by the end of his tirade).
Yes. I think he actually brings it throughout (just by calling it a bubble). I'm not qualified to judge his financial analysis, nor yours. The market will tell us. Maybe you're right, that the parties that can make super-effective use of it can subsidize everybody else's use of it. I guess we'll see. He doesn't factor that possibility into his analysis. It would be interesting to hear how he would respond to that perspective. If he proves to be correct, there will be no question about the fact.

The most global analysis, I think, would look like this. Something that calculates the % of GDP that emerges from intellectual labor. Then something that calculates the gains in productivity in that sector from having this tool available. I work in that sector. This tool doesn't make me measurably more effective at the kind of work I do. (Sorry to be so vague; I'm on guard against revealing too much personal information online). So maybe I'm extrapolating too much from my own case. But he does cite that study about coders that shows their actual productivity goes down when they use the tool (while their felt productivity goes up). https://www.theregister.com/2025/07/11/ai_code_tools_slow_down/?ref=wheresyoured.at
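
That proposed global analysis, reduced to a Fermi sketch (every input below is a placeholder invented to show the shape of the calculation, not a measurement):

```python
# Shape of the "most global" analysis; all inputs are placeholders.
world_gdp = 105e12           # rough world GDP ($)
intellectual_share = 0.30    # placeholder: fraction of GDP from intellectual labor
productivity_gain = 0.05     # placeholder: net gain from the tool; the coding study
                             # linked above suggests this can come out negative

annual_value = world_gdp * intellectual_share * productivity_gain
print(f"implied annual value: ${annual_value / 1e12:.2f}T")
```

The sign of that last input is the whole debate.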

Maybe intellectual laborers will catch up with its capacities. I keep wracking my brain for writing tasks I could offload to this thing, on Hygro's principle that if I offload those, I free up more time for the parts that really demand human thinking. But I keep not finding any. That may be because I've had 35 years working without this thing and have just become set in my ways. Or, again, maybe my field doesn't matter because there will be some fields that will see so much gain as to offset the costs of the tool, and then everyone gets to use it.

But you yourself reported suddenly being charged $200/month for your favorite, instead of $20. The investors will at some point want to see this thing making businesses the kind of efficiency gains that have been promised.
 
The investors will at some point want to see this thing making businesses the kind of efficiency gains that have been promised.

There is another possibility: investors care more about profits than delivering on the promise of efficiency gains. A wild and unlikely idea, I know... Furthermore, subsidising this thing by hyperscalers and a US government interested in maintaining an edge over one global adversary might mean indefinite pouring of both corporate and government finance, just to keep that lead from slipping away. Just like the space race, AI is not merely a commercial race. It is at the heart of the struggle for world dominance. (There are not many things the Chinese can't simply copy; AI is one of them.)

I agree that Mr. Market is the ultimate deus ex machina, and if investors decide that further gains are bogus, we will have a tide greater than the Great Depression itself breaking everything in its way. Personally, in a rapidly de-globalising world I would bet on the second outcome over the first: pouring funds like there is no tomorrow, both government and commercial, despite any shortcomings.

Consider another angle. Investors, as we both know, enjoy rates of return. And what rates of return can be had when the top 7 companies in America, the leaders of the AI revolution, are each worth a trillion dollars or more? Nvidia is actually worth $4 trillion (!) right now. So, in order to showcase 25% annual growth, Nvidia would have to add $1 trillion to its own market capitalisation over the course of 2026. My point is, we might be on a rail track heading full speed towards the abyss: nothing can stop that train, and nothing can make Nvidia (and the others) keep growing at the percentage rates they managed over the last few years.
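
The arithmetic behind that, as a quick sketch (the $4T figure is from this post; the 25% rate is the hypothetical):

```python
# Compounding the post's example: a $4T company "showcasing" 25% yearly growth.
cap = 4.0e12     # market capitalisation per the post ($)
growth = 0.25    # hypothetical annual growth rate to maintain

for year in range(2026, 2030):
    added = cap * growth
    cap += added
    print(f"{year}: must add ${added / 1e12:.2f}T, reaching ${cap / 1e12:.2f}T")
# The dollars to be conjured each year compound too:
# $1.00T in 2026 grows to nearly $2T per year by 2029.
```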

The process of reaching the end of the track can take years, however; the excess capital locked up in the top companies allows for extensive postponing of the inevitable. But if 300 years of stock market history teach us anything, it's this: we ARE going to fudge it all up in the end. And then rise again.
 
The moment that the UK government started talking enthusiastically about investing in data centers and AI, I concluded that it was all about to crash.

I can only hope the crash occurs before they put too much public money in it.
 
investors care more about profits than delivering on the promise of efficiency gains.
Ok, but the businesses who are buying the tool are looking for it to deliver efficiency. What they've been sold on is the idea that it can replace intellectual laborers the way the robotic assembly line replaced physical laborers. Should they conclude that it does not after all, then the market for this product collapses. If that study I drew from his piece (because I'd heard it referenced elsewhere) is borne out by future studies--that people get 19% less work done when using it--companies are going to start forbidding their workers from using it.*

And it would also then start to feel as though there were little point in being ahead of China in the AI race.

What it seems good at is producing an exact-average version of any kind of text you ask it to make. Is there so much demand for that?

*a funny story about Gori the Grey's work inefficiency with AI. I'm in charge of arranging a party every year. I like making the poster for it using MS Paint. I have no real artistic skills, but the stakes aren't very high. This year, I wanted that image of Shakespeare that forms my avatar (drawn from the first collected edition of his works) (but without the fool's hat), in the shape of party balloons. His chin would have to be skinnier and his brow wider. You get the idea. I knew I couldn't achieve that in Paint. I thought to myself, "Oh, yeah, Copilot has a DALL-E style art capacity." I gave it every command I could think of to get it to achieve that effect. I dumped hours and hours into it. Eventually, I went into Paint and did something else. Now, maybe you'll just say it's a learning curve. Hygro or Thorgaleg could crunch it out in a minute. But, measured against earlier years' posters, I was way more than 19% less efficient.
 
If that study I drew from his piece (because I'd heard it referenced elsewhere) is borne out by future studies--that people get 19% less work done when using it--companies are going to start forbidding their workers from using it.*

I can show you 10 studies showing insane potential, and 10 studies showing harm. Studies are neither here nor there. This is a nuanced question with many variables. The demand is real. The demand for hardware is booked 5+ years ahead out of Taiwan. The Taiwanese, at the TSMC factory, simply describe demand for accelerators as "insane". The demand for chatbots is equally huge, to the point that they think they can get away with raising the price from $20 to $200 while risking losing clients. That should tell you everything you need to know about the demand OpenAI sees from the inside. My own experience is also positive, to put it mildly. I do agree there are some professions which are as yet unreachable for AI. Also, there are very many peculiarities about individual people.

What it seems good at is producing an exact-average version of any kind of text you ask it to make. Is there so much demand for that?

I suddenly start to wonder if you do realise that you are pretty high up, intellectually.

Hygro or Thorgaleg could crunch it out in a minute.

Some of those graphical assignments are not so simple. I don't know if you've noticed, but the average AI is terrible at rendering words inside a picture. It bothered me why that is. AI explained to me that apparently that sort of rendering is so compute-intensive, there is just no point in doing it accurately. I mean... how hard could it be, right? Draw me a yacht that says "Moriarte" on it. Nope. Gibberish instead of words almost every time. I am sure in time someone will come up with an algorithm to solve this, but not yet, afaik. Although it has been a while since I did drawings with AI. Maybe they've got better...

Spoiler: today's news from the world's biggest AI hardware factory: [attached screenshot, 2025-10-17]

 

I was about to make a post about the "economic viability of AI", but I'm reading that that's exactly the topic being discussed on this page :clap:

Contrary to the position of @Moriarte, I think it is a big issue if the majority of users get the service for free.
There are of course subsidies left and right. Decision makers (all of them) are pushing for it.
Energy being subsidised too (at the cost of our biosphere) makes the marginal cost look cheap alright.
But there are also huge investments in physical infrastructure that probably needs careful maintenance and constant upgrading.
So. Long term financial equilibrium strikes me as an impossibility.
Unless...

I just read that ChatGPT is trying out AI-produced porn (or "erotica"?) to fund their GPU farms.
Will it work?
 
I suddenly start to wonder if you do realise that you are pretty high up, intellectually.
Oh, I think about this a lot. That I can be critical of the text that AI produces because I am myself an above-average producer of texts. (I'm among the people Anthropic stole from).

One of the things that I think AI does very, very well in its writing is to make use of transitional expressions, and I both 1) remember the stage in my own writing development when I learned the importance of them and 2) have interactions with my high-school-aged nephews who are just learning their importance in structuring the totality of the thought they want to express. To them, the texts that AI produces look spectacularly sophisticated, largely for this reason, I think. I know 1) that that is because it was trained on above-average writing* and 2) that my nephews can learn (because I did) how to make their thoughts equally sophisticated-sounding (and actually cogent) if they work at it (devote one draft just to looking for places where transitional expressions should go). I now know the importance of transitional expressions in actually thinking clearly in the first place (or actually in the second place, because you usually add them in your second draft and adjust the original expression of your sentences to their presence).

But that just makes me shift gears to my other concern: that, long term, use of this thing will make people dumber. If my nephews use AI to write their essays (and one has admitted to doing so), they'll never themselves learn the form of thinking that writing is.

*
in other words, AI can come at the proper use of transitional expressions probabilistically.

Edit: I'll have a little more to say on this point.

Oh, and thank you. I have the highest regard for your intelligence as well.
 
So. Long term financial equilibrium strikes me as an impossibility.

Yes, within the current framework of land-based data centers consuming power (and water), I agree: it does look stretched. AI companies are utilising night-time grids and abandoned factories near sources of renewable energy, but even that will soon not be enough at the current rate of acceleration in AI energy consumption. They say that at the current rate, by 2030, 10% of the electricity consumed by humanity will be AI-related. This is obviously unsustainable growth, which will hurt many other human endeavours. However, there might be a solution in the making.

A few start-ups (notably StarCloud) are testing the possibility of building orbital data centers, which would completely change the economics and free up resources on the surface of the planet for the other energy-related things we need. In orbit every solar panel is more potent, as there is no atmospheric loss. Furthermore, unlike on the surface of the planet, a properly chosen orbit can yield near-24/7 sun. Besides nearly free energy, deep space has a background temperature of about -270°C, which helps with cooling (in vacuum heat must be shed by radiators rather than air or water, but radiating into that cold background is efficient), potentially making every single accelerator 2x or more potent. That's just how transistors work: the better you cool them, the faster they calculate.
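
For what it's worth, a rough sketch of the energy side of that claim (the solar constant and surface peak are textbook values; both capacity factors are my assumptions, not StarCloud's figures):

```python
# Rough daily energy yield per m² of panel, orbit vs ground (assumed factors).
solar_constant = 1361   # W/m² above the atmosphere (textbook value)
ground_peak = 1000      # W/m² clear-sky peak at the surface
orbit_cf = 0.99         # assumed: dawn-dusk orbit, near-constant sunlight
ground_cf = 0.20        # assumed: typical capacity factor for ground solar

orbit_kwh = solar_constant * orbit_cf * 24 / 1000
ground_kwh = ground_peak * ground_cf * 24 / 1000
print(f"orbit ~ {orbit_kwh:.1f} kWh/m²/day vs ground ~ {ground_kwh:.1f} kWh/m²/day")
# roughly 32 vs 5 kWh/m²/day, i.e. ~6-7x more energy per unit of panel area
```

Launch cost and radiator mass are the terms this sketch leaves out, and they are the hard part.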

There are far fewer regulatory considerations when building a data center in space. Modular orbital data centers exploiting today's launch costs (which have dropped some 90% in recent years thanks to SpaceX and others pushing launch tech) seem like low-hanging fruit, and I hear more and more chatter about this among the likes of Bezos, Musk and Zuckerberg. It seems doable on paper, and before the end of the year StarCloud plans to prove the concept by launching an Nvidia H100 GPU with one of the launch providers to see if they can make it work.

Their point is: one can quickly deploy enormous data centers in space (multiple square kilometres of them), driving the energy cost of AI down.

Lasers will transmit data between Earth, space-based data centers and satellites.
 
Oh, I think about this a lot. That I can be critical of the text that AI produces because I am myself an above-average producer of texts. [...]
Please highlight the transitional expressions you used in the above. :)
 
But then I just shift gears to my other concern: that, long term, use of this thing will make people dumber. If my nephews use AI to write their essays (and one has admitted to doing so), they'll never themselves learn the form of thinking that writing is.

I have exactly the same concern for my children (and nephews/nieces), and I am scratching my head over how to approach this. I feel that reading classical world literature together and discussing it is the only "angle of attack" left coming from my end. Then there are the teachers, who I see have been taking this seriously lately: banning smartphones in schools and adjusting the curriculum to be more proactive in class, as opposed to assigning homework, which is, no doubt, going to be AI-generated by students. I am cautiously hopeful.
 