The AI Thread

my other concern: that, long term, use of this thing will make people dumber.

Quite. I am firmly convinced that many very useful existing
devices have made us dumber, and I think AI is just one more.

car ownership - lost skill to plan trips involving buses, trains, etc.
mobile phone - lost skill to schedule activities around fixed hours
word processor and Grammarly - lost skill to properly formulate thoughts into sentences the first time, before typing
recorded music - fewer people skilled in musical instruments and singing
Microsoft Project - lost skill to plan projects
automatic translation - reduced motivation to become fluent in other languages

I could go on and on as if competing for bore of the year, but the AI can do that too.
 
This all started with bronze axes. Most people can't make good stone ones anymore.
 
I believe Socrates* worried that writing would make people dumber (less capable of retaining things in memory).

Maybe that's apocryphal.

*as we meet him in Plato's written texts, ironically.
 
From the article:

Socrates worried that writing would weaken people’s memories and encourage only superficial understanding: not wisdom but “the conceit of wisdom” – an argument that is strikingly similar to many critiques of AI.

Hey, my memory turned out to be good!
 
There is another possibility: investors care more about profits than about delivering on the promise of efficiency gains. A wild and unlikely idea, I know... Furthermore, subsidising this thing on behalf of hyperscalers and a US government interested in maintaining an edge over one global adversary might mean indefinite pouring of both corporate and government finance just to keep that lead from slipping away. Just like the space race, AI is not merely a commercial race. It is at the heart of the struggle for world dominance. (There aren't many things the Chinese can't simply copy - AI is one of them.)

I agree that Mr. Market is the ultimate deus ex machina, and if investors decide that further gains are bogus - we will have a tide greater than the Great Depression itself breaking everything in its way. Personally, in a rapidly de-globalising world I would bet on the former over the latter - pouring funds like there is no tomorrow, both government and commercial, despite any shortcomings, is the more likely outcome.

Consider another angle. Investors, as we both know, enjoy rates of return. And what rates of return can be had when the top 7 companies in America, the leaders of the AI revolution, are each worth a trillion dollars or more? Nvidia is actually worth $4 (!) trillion right now. So, in order to showcase 25% annual growth, Nvidia would have to add $1 trillion to its own market capitalisation over the course of 2026. My point is, we might be on a rail track heading full speed towards the abyss - nothing can stop that train, and nothing can make Nvidia (and the others) grow at the same percentage rates they did over the last few years.
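
To make that treadmill concrete, here is a rough back-of-envelope sketch in Python, using only the figures above (a $4 trillion base and 25% annual growth); the year range and variable names are illustrative assumptions, not a forecast.

Spoiler Back-of-envelope (Python) :

# Market cap Nvidia would need to add each year to sustain 25% annual
# growth from a $4 trillion base (figures from the post above; illustrative).
base_cap = 4.0   # starting market capitalisation, in trillions of dollars
growth = 0.25    # assumed annual growth rate

cap = base_cap
for year in range(2026, 2031):
    added = cap * growth   # dollars that must be added this year
    cap += added
    print(f"{year}: add ${added:.2f}T -> total ${cap:.2f}T")

# Output: 2026 needs $1.00T, 2027 needs $1.25T, and so on - the required
# addition itself compounds at 25% a year.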

The process of reaching the end of the track can take years, however; the excess capital locked in the top companies can allow for extensive postponing of the inevitable. But if 300 years of stock market history teaches us anything, it's this: we ARE going to fudge it all up in the end. And then rise again.

The dollar amounts are kind of scary, but at least it gives companies something to do other than buy back their own stock.

Meta set like $50 billion on fire trying to make The Metaverse happen in the recent past.
 
Humans are wired for verbal communication, honed over millions of years of evolution. Our brains are optimised for processing spoken language - think of the complex neural machinery behind speech recognition, tone, and context, which we develop naturally as kids. Verbal exchange was central to survival: coordinating hunts, sharing stories, passing knowledge. Writing, by contrast, is a recent invention, barely 5,000 years old, and requires explicit learning. It’s not "natural" in the same way.

Socrates’ beef with writing was that it’s static: a one-way street that can’t talk back or adapt like a conversation. He argued it weakens memory and understanding, as it lets people lean on external records instead of internalising knowledge through dialogue. In Phaedrus, he likens writing to a painting: it looks alive but stays silent when questioned.

Spoiler Socrates speaks his piece :

SOCRATES: You know, Phaedrus, that’s the strange thing about writing, which makes it truly analogous to painting. The painter’s products stand before us as though they were alive, but if you question them, they maintain a most majestic silence. It is the same with written words; they seem to talk to you as though they were intelligent, but if you ask them anything about what they say, from a desire to be instructed, they go on telling you just the same thing forever. And once a thing is put in writing, the composition, whatever it may be, drifts all over the place, getting into the hands not only of those who understand it, but equally of those who have no business with it; it doesn’t know how to address the right people, and not address the wrong. And when it is ill-treated and unfairly abused it always needs its parent to come to its help, being unable to defend or help itself.


But writing’s not just a dead end. It’s a tool that scales knowledge across generations and distances, something speech can’t do. It’s less interactive, sure, but it forces clarity and structure - verbal communication can be messy, fleeting, and easily distorted. Writing also lets us refine ideas over time, unlike the improvisational nature of talk.

Our brains still crave the dynamic, social cues of verbal exchange - tone, rhythm, emotion - which is why we’re drawn to podcasts, voice notes, or even talking to AI. I know I like talking to AI every now and then instead of punching in words and letters with my fingers, reading answers with my eyes - the exchange definitely becomes a lot more lively, enjoyable: the spring unwinds. Writing’s powerful but feels less human because it lacks those cues. Socrates might’ve had a point about its limitations, but it’s not a dead branch - it’s more like a trade-off: we lose some vitality but gain permanence and reach. Evolution didn’t prep us for scrolling X or reading PDFs, yet here we are, adapting. Verbal’s still king for connection; writing’s the champ for precision.
 
Is the AWS failure because of AI-driven layoffs? (Paywalled, and I have not found the full version.)

When AWS engineers woke up on Monday, they never expected that their Slack access would be revoked by noon.

But by the afternoon, nearly 40% of AWS’s DevOps employees were cut in a single internal strike.

An email memo, which was briefly posted on the internal wiki before being taken down, blamed the cuts on strategic automation initiatives.

Translation: AI is assuming the kind of DevOps work humans have been performing over the last decade.
 
More points, though it may not be all AI's fault.
  • It is a fact that there have been 27,000+ Amazonians impacted by layoffs between 2022 and 2024, continuing into 2025. It's hard to know how many of these were AWS versus other parts of its Amazon parent, because the company is notoriously tight-lipped about staffing issues.
  • Internal documents reportedly say that Amazon suffers from 69 percent to 81 percent regretted attrition across all employment levels. In other words, "people quitting who we wish didn't."
  • The internet is full of anecdata of senior Amazonians lamenting the ham-fisted approach of their Return to Office initiative; experts have weighed in, citing similar concerns.
 
It is rather ridiculous that the article writer mentioned Socrates while being obviously unaware of the context - and more so that she did it in an uncouth style, as a put-down.
Then again, her prose overall comes across as quite crass :)

ChatGPT's purpose isn't to make you more intelligent - nothing can, as long as we are referring to IQ. It is there so that you can search for sources faster, instead of doing it directly through Google. ChatGPT is known to "hallucinate", even on subjects where it shouldn't (and where antagonistic Chinese LLMs do not).
Lastly, the issue of relying on technology to the point of not bothering to learn - because you assume the source of what you like will be there forever, etc. - is neither new nor examined with any seriousness in this Guardian article. "Computation without computers?" as some director exclaimed in an Asimov story on this very topic.
 
It is there so that you can search for sources faster, instead of doing it directly through Google.
I'm gravitating toward the position that it's a glorified search engine.

I think I shared my story about how it operates that way. Some people are arguing Charlie Kirk should be canonized. That made me wonder what he would be the patron saint of. School shootings? So I did a search--on Google with AI and DuckDuckGo without AI: "Are any saints patrons of something because of the circumstances of their martyrdom?" Google with AI gave me like 8 cases (incl. St. Lawrence, of cooks--and of comedians), and in its search results was a webpage that had given it five of those results (because the page was dedicated to exactly that point). DuckDuckGo gave me only search results, and they did not include that site dedicated to that point, just pages about saints generally.

So AI is better at turning the prompts you give it into a fuller search of the full-text material available on the web, and presenting the results in a way that is a natural-language answer to your natural-language question.

That said, please consult the following recent experience, which I posted in the Civ 7 discussion forum:


It still gets really basic things really wrong.

(Apologies. I don't know what I did wrong that means that citation to the other thread doesn't collapse.)
 
AI is pretty decent at shopping, if you know how to approach it. I have a long-standing gripe with the chocolate industry going to crap - most chocolates on sale today contain lecithin, GMOs, soy or a combination of all three. But I need real chocolates as gifts for the New Year party! So I fire up Grok the other day and give it the Mission: find me 10 factories in Switzerland, Belgium and Austria which still make and sell real chocolate without additives and ship direct to EU countries. Bam, 15 seconds later I get a list of 10 factories with links to their online shops, a small summary for each factory, and prices for comparison organised in a neat table. 10 minutes later I make my choice and pay for it. No time wasted finding shops and browsing through dozens of websites to dig out bits of information, which would likely take me an hour or more and would make me go off my rocker, so I'd postpone the whole endeavour indefinitely. I could granulate the information further by introducing variables such as user feedback and whatnot, but I decided that would be a waste of my and the AI's time, with unclear, subjective results.

The reason why AI works so well for these sorts of tasks is that it works with clean information, with direct sources; it doesn't operate on reddit rumours and doesn't use its generative aspect (aside from generating the final post-search answer for me to read, which is fine). The moment you give AI freedom to think and create - you're in trouble. It will hallucinate and fill the gaps according to its own ability and wild reasoning. Once you start unwinding the reasoning steps it takes - and I did this on several occasions - you will quickly understand that at this point a human needs to hand-hold the AI and use it mainly as an aggregator. If one understands coding logic, one can benefit from delegating coding tasks. However, if you set vague tasks - "give me code for a game resembling Wolfenstein 3D but in a surreal dream world where furniture comes alive and attacks…" - you're in for a rude awakening.

By the way, as a spectator to the AI progress of the last several years, I'd say we are in a completely different realm of AI usefulness than we were a year ago. Should AI become a personified creator of even tiny things at the end of the road? I am not sure I'd like that. I'd like AI to stay in the capacity of a second brain, a looking glass or a crystal ball I can take out of my pocket to reach my destination faster. If we aim higher than that, we might not enjoy the result.
 
In the Quarter Pounder vs Filet-O-Fish thing, my guess is that on the web, that information is often given in tables (as in the case of McDonald's own nutritional info), and maybe AI is less good at extracting meaningful forms of proximity from tables than it is from sentences, since it was trained mostly on sentences.

Anyway, there's something we've recently raised that I don't want to lose track of, as I grasp at every shiny object that comes along (and on top of that, we're long overdue for more consideration of the "domains of think"; I haven't forgotten). But first, this other reflection.

So, based on the Socrates thing, people can scoff at skeptics like myself: "Every advance in technology is met by skeptics. Sir Philip Sidney famously denigrated the printing press, and Socrates badmouthed literacy itself. Would we really be better off without books and even without writing?" With the implication being that information technologies march on to an ever-more-glorious future, and dinosaurs/Luddites like myself will appear as laughable several centuries from now as Sidney and Socrates appear to us today for the resistances they put up.

Against that line of thinking, I would push back just this little bit. Those two previous advances--writing and the press--prevailed in part because of the social magnification of thinking that they allowed. Writing means that, if you weren't present in Athens on some particular afternoon in 420 BC, you can still benefit from the wise words that Socrates shared with his interlocutors that day. Millions can. The press means that, even if you can't afford a scribe to hand-copy a manuscript, you can still benefit from someone's thinking, because that thinking can be more cheaply disseminated. Millions can. Internet ditto, on steroids even.

Is AI a mechanism for the better social dissemination of knowledge? It seems like it's a bunch of individuals punching prompts into various AIs and then either making some workplace use of what they get in return, or sharing what AI generates for others' amusement (mostly the amusement of "look what a computer program can produce"). Now, of course not every technological advance has to work on the model of writing and print in order to have some value. But is AI magnifying the broader human capacity for knowledge, or just enhancing individuals' knowledge (I mean, of course, on those occasions when it does in fact do that, rather than give incorrect information about the sugars in a Filet-O-Fish sandwich)?

I'd like AI to stay in the capacity of a second brain, a looking glass or a crystal ball I can take out of my pocket to reach my destination faster. If we aim higher than that, we might not enjoy the result.
But this brings us back to the economic analysis. All the money is being pumped into this on the grounds that its uses are way more exalted than that: that this thing will do our very thinking for us. (And if Samson's post bears out, then Amazon has concluded that AI can do 40% of its previous DevOps team's thinking; that will be a meaningful fact if it proves accurate.)
 
Is AI a mechanism for the better social dissemination of knowledge?

I'd say AI is a mechanism that enables more efficient work with information arrays. Sometimes it enables work with information arrays that were simply too large to work with at speed before AI came around. That is my subjective experience, not based on any public definition.

But is AI magnifying the broader human capacity for knowledge, or just enhancing individuals' knowledge (I mean, of course, on those occasions when it does in fact do that, rather than give incorrect information about the sugars in a Filet-O-Fish sandwich)?

Both, I think.

Pre-AI, deep domain knowledge lived in a few brains or paywalled journals. Now a farmer in Kenya optimizes irrigation with satellite data + GPT, and a hobbyist in Ohio reverse-engineers rocket nozzles from public telemetry. The total usable knowledge in the human system grows super-linearly.

Researchers use AI to screen 10⁶ molecules in a day instead of 10². The rate at which humanity converts curiosity into verified fact has jumped by orders of magnitude. That is magnification of capacity, not just individual recall.

Spoiler Link :



Yes, the Filet-O-Fish sugar blunder spreads fast - but so does the correction. The same network that hallucinated 14 g can be cited, debunked, and pinned in seconds. The collective knowledge self-corrects faster than any pre-digital institution could.


But this brings us back to the economic analysis. All the money is being pumped into this on the grounds that its uses are way more exalted than that: that this thing will do our very thinking for us.

Ehh... I am not so sure the big money is pumped in because its owners expect to outsource thinking. That is a radical thought, definitely not my first thought (but then again, I am not big money). As a thought exercise: if I were big money, I would probably be aiming to become the entity that does the thinking for everybody. Not create an entity that does the thinking for me and everyone else. In other words, I believe the good old human intellect will serve us for quite a while going forward. The big money compete on who puts their hands on the weapon first - the weapon which eventually comes out of the AI labs, the weapon we agreed to call AGI. In this arms race, if you don't put the money upfront, you don't get the ticket. So the fear of missing out pushes everyone to write blank checks on AI, even though no one has a firm idea what's around the corner.
 
The rate at which humanity converts curiosity into verified fact has jumped by orders of magnitude.
This seems a fair characterization of its abilities.

Not create an entity that does the thinking for me and everyone else.
The way I realize I should have said it is "complete tasks that previously required thinking human beings for their completion." From the Amazon article Samson posted, Amazon believes that AI can handle "deployments, incident response, and postmortems" as well as or better than humans, and has fired a bunch of workers as a result. That's what CEOs are after in investing in AI. My shorthand for that was "thinking," but I really do need that other formulation, particularly in the context of this discussion.

I wonder if we could place the kind of thinking task those workers had previously done somewhere on our "domains of think" chart. Because if so, we'll have to say that that's a form of thinking that AI does well. (Well, unless Amazon collapses and hires these workers back.)
 
Amazon's move (automating deployments, incident response, and postmortems) shows AI reliably handling structured operational reasoning - not creativity or judgment under ambiguity, but rule-based, procedural cognition with clear inputs, outputs, and feedback loops. On the "domains of think" chart, this lands in "Executable Logic" or "Process Orchestration" - the kind of thinking that's algorithmically bounded. It's not general reasoning; it's runnable reasoning. So yes, AI does this form of thinking well, and that's exactly why it's replacing people. But it also defines the boundary: as long as the task stays inside that bounded domain, AI wins. The moment it requires context shifts, ethical weighting, or novel framing - Amazon will need humans again. Or collapse trying.

[Image: "domains of think" chart]

So... #2 and #3 in the yellow zone? And yes - that’s exactly the kind of thinking CEOs want to replace. It’s thinking, but thinking that runs like software.

It does not cross into:
  • Human red zone (reflection, creativity, embodied intuition, dialectics, or motivation)
  • Or even the shared confluence zone (abduction, analogy, heuristics)
But! In other news:

Spoiler Not all goes to plan!!! (press PLAY) :


That's what CEOs are after in investing in AI.

That’s not all CEOs want. They don’t seek to replace human thinking - they aim to erase the necessity of human mediation. Amazon’s layoffs aren’t the goal; they’re fallout from ontological compression: collapsing ritualized cognition (deployments, postmortems) into self-running code. These weren’t thinking tasks; they were biological middleware. CEOs want the moment when asking “why was a human ever needed?” feels natural. Not cost-cutting - metaphysical purge. Yet this purge, allegedly, serves creation. Freed capital flows to the red zone: reflection, synthesis, vision. The machine inherits the drudgery of reason; the human, distilled, ascends to meaning. This isn’t replacement. It’s sublimation.

Note, I am not arguing this from the position of the machine or from the CEO's vantage point. Merely trying to point out what we will be dealing with when confronting CEOs on ethical grounds. Devil's advocate, if you will.
 
I already put this thread on unwatch because I really can't follow all that's been written here - pretty informative, I am sure, but almost as dense as Frank Herbert's Dune, and if it's not science fiction then my attention span dies.
Just came here to say 9gag is dying as it's riddled with AI posts... good riddance, I guess. I believe the rest of social media will follow suit very soon.
 
Just came here to say 9gag is dying as it's riddled with AI posts... good riddance, I guess. I believe the rest of social media will follow suit very soon.
Way back, @Hygro expressed the wish that more posters here would use AI to up their posting game. I held off on saying that if I got the sense that posts here were produced by AI, I'd bail. I held off because he was very clear that he thought the site would benefit from posters using AI during the research phase of composing a post and not during the composition phase, and that does seem to me as legitimate as, say, using a search engine and its results while composing a post.

But what you report here doesn't surprise me. I have access to Copilot. If I want to "interact" with that, nothing is stopping me. I come here to interact with humans, not with some other poster's AI. And that's part of why I've put some emphasis (and mean to put more) on interestedness in defining "think." I actually come to CFC precisely to do my thinking and to hear others thinking. At some point, I'll use the word "advance." (Well, here I find myself using it.) I come here to advance my thinking. So, there are a bunch of topics in the world that matter to me, that I have an interest in. I've used Kirk's death as an example. How America responds to Kirk's death matters to me, because it's a marker of the kind of society I'm living in. So when that event takes place, I like the opportunity to work through my thoughts on it, in part by hearing other people's thoughts on it. I think something, initially, on my own. Then I listen to what other people think, and one or another of the comments advances my thinking--gives me a perspective I hadn't thought of, gives me a verbal formulation I hadn't thought of. Now the totality of my thought on the topic is larger, richer, more precise.

Copilot has no actual thoughts on Kirk's death, in part because, until you ask it a question, it isn't thinking anything, and in part because it has no interest of its own in how Kirk gets memorialized, no skin in the game. You can give it a prompt telling it to canonize Kirk and it will; you can give it a prompt to vilify Kirk and it will (within its politeness limits). Its thoughts can't advance (because it has no starting thoughts), and so it's not authentically involved in the mental struggle to advance thought on that topic.
 
I occasionally use the AI in Google Translate when I have doubts about how to build a sentence. So you can already bail. :p
 