My reading of Wray's argument is significantly different from yours. He mentions paperwork, but he also specifically mentions doctors being under pressure to increase throughput.
You appear to agree with our hypothetical doctor's boss that increasing throughput is desirable, that it makes sense to apply a quantitative measure of efficiency to a doctor's work, and that the real problem is doctors doing paperwork rather than seeing patients.
Now, I said more *or better* throughput. If a doctor is stuck spending 30% of the day in the office dealing with insurance paperwork that an AI can automate easily and safely, you free them from that. They still have another 20% of overhead they can't save, so their patient time goes from 50% to 80%. Both outcomes are good: either more patients, or more time with the same patients, and both cost the doctor the same workday (i.e. "free," economically speaking).
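As a tiny sketch of that arithmetic (the percentage splits are the post's hypothetical, not real data):

```python
# Hypothetical split of a doctor's workday, using the numbers above.
insurance_paperwork = 0.30   # insurance work an AI could automate
fixed_overhead = 0.20        # admin time that can't be saved either way

patients_before = 1.0 - insurance_paperwork - fixed_overhead  # 50% with patients
patients_after = 1.0 - fixed_overhead                         # 80% once AI absorbs the paperwork

print(f"patient time: {patients_before:.0%} -> {patients_after:.0%}")
```

The freed 30 points can go to more patients or longer visits; the split between the two doesn't change the cost to the doctor's day.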
Wray also says this:
This is also happening on university campuses, of course. Professors reduce their office hours—or skip them entirely—and send students to the much cheaper teaching assistants as the efficiency fairies work to preserve more time for faculty to spend doing all the paperwork required by a burgeoning administrator staff that has nothing better to do than to create new paperwork requirements.
But this is also where AI is a great use case to free up time to go back to office hours. A ChatGPT window will do a lot, but the real sauce is an agentic system that takes the paperwork's who/what/where/when/why, does it all for you, and lets you just review and sign off (or make edits) right in Slack, Discord, WhatsApp, text, etc. And the administrators will be doing the same, so it's a closed system. It costs a little money and runs a lot of computers, but it frees humans to return to the core product, aka value, aka in the university's case the core mission: educating young people's minds to be critical thinkers with advanced tools of thought.
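For concreteness, here's a minimal sketch of that draft-then-sign-off loop. Everything in it is hypothetical: there's no real Slack or Discord API here, just plain functions standing in for the agent and the chat surface.

```python
# Hypothetical sketch of an agentic paperwork loop: the agent drafts,
# the human only reviews, edits, and signs off from a chat message.
from dataclasses import dataclass

@dataclass
class FormDraft:
    fields: dict        # the who/what/where/when/why of the paperwork
    status: str = "draft"

def draft_form(context: dict) -> FormDraft:
    # In a real system an LLM agent would fill this from calendars,
    # email, and institutional records; here we just copy the context.
    return FormDraft(fields=dict(context))

def human_review(draft: FormDraft, edits: dict) -> FormDraft:
    # The human's only job: skim, correct, sign. This is the step
    # that would happen "in literally Slack, Discord, WhatsApp, text."
    draft.fields.update(edits)
    draft.status = "signed-off"
    return draft

draft = draft_form({"who": "Dr. Lee", "what": "travel reimbursement",
                    "when": "2025-03-01", "why": "conference"})
final = human_review(draft, {"why": "invited talk"})
print(final.status, final.fields)
```

The design point is the inversion of effort: the machine does the filling-in, the human does the two-minute review instead of the thirty-minute form.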
AI is specifically good for the bullsht work that's killing ADHD people like me. The prof still has to write a grant to a real board; AI can help 50% there with the boilerplate and time management of that task, but that's not the full use case, which requires more critical attention. The prof and team still need to write their own scientific papers. AI can help there, but the real work is left.
But that's the fun stuff! Writing the true nature of the research for the paper is the fun part. That's why you do it!
There's that joke: "I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes." Agreed. (Also, if you don't do your own laundry your body atrophies and ADHD symptoms intensify, but that's an aside.)
But the cool thing is, AI isn't doing art and writing! Take music: 30% of new music uploaded is AI-generated, but it only gets 0.5% of streams, and 70% of those streams are bots running criminal streaming fraud. AI isn't replacing art, not yet, not meaningfully. AI isn't replacing writers.
What it replaces is all the fatiguing grind that's stopping us. No, not the laundry; laundry's a nice break. It's replacing boilerplate, data repetition, all the high-attention, tedious, mindless stuff that keeps creatives from finishing. It's also raising the stakes, so it's not only good: there's some mammon in it, there's some Jevons paradox. But a writer can now use AI to organize and automate parts of their paperwork life that were a huge pain before. It's great at taxes! Great at scheduling, great at building spreadsheets, and great at reading them. Great at a lot of things. It's great at math, and it's great at coherent arguments, two things it was terrible at three years ago.
But it's not the best at coherent arguments, so it's still fun to be an arguer! I wish more people at CFC used AI behind the scenes to help them perfect their own arguments (but not copy + pasting!!) just to level this place up. It's low-hanging fruit. We've got a good mix of views and personalities, and people are only here "for fun," so it's still going to be them. Just like I love when people research their own posts, even just a top Google search or Wikipedia. Sources are nice; you don't need to cite them as long as the post is good. And everyone loves when someone had a great college class and for like two years they have amazing topical zingers and in-depth arguments; it raises the bar and the fun for basically everyone. There's no reason not to use AI search and critiques to really drill in before writing whatever. The point is, all tools / experiences are good behind the scenes here; none replace the debate.
You won't see it from the outside, but the use cases are very diverse (just like HTTP), and they're strongest away from that initial hype, where it showed it could produce art-like images and output a story. Those elements are sideshows; they can't replace artistry. The tools to authentically augment artistry are still in the works, and currently it lacks any ability to see a specific vision through precisely, a critical tech breakthrough that sits outside the current GenAI paradigm.
The analogy with writing is apropos. In making messianic promises about the technology's transformative potential, the LLM prophets are (mostly) inappropriately applying engineering terms to humanistic processes they don't understand.
Everything I've said across many huge posts has told you the industry is trying to apply humanistic, natural-language interfaces to solve engineering and business-process problems. So it's really the opposite (and I know some real nerds who were hyping AI imagery as valid for consumption, ew): the real thing isn't applying engineering to human stuff, it's applying a human-interfacing thing to engineering / business / impersonal stuff.
I'll give you an example: in biotech they do a ton of work that generates a ton of data. Right now, making that data human-usable takes as much time and human effort as generating it. But a complex tool-using multi-agent system cuts that time and effort down 90% for the cost of 1-2 employees. If the bubble crashes, everyone pays full cost for their own chips, it's never subsidized again, and there are no meaningful updates to the tech from today, then it's reducing human effort 90% for the cost of 10 employees. But in today's real world, where it's 2, you still have to read the results once they're applied, and the freed resources shift into science, aka data generation, which uses more of this, while funding stays the same. So instead of doubling research, the math says +25%-33%, and realistically it boosts research "only" 20%.
That's incredible.
At a standard 100-employee biotech startup, that means hiring another 30 scientists.
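A back-of-envelope version of that math. The 90% analysis cut and the ~2-employee tool cost are from the argument above; the review burden per unit of new data is my own guessed parameter, chosen to show how the boost lands in the quoted range rather than at a naive doubling.

```python
# Back-of-envelope sketch: all parameters are illustrative assumptions.
TOTAL_FTE = 100            # headcount budget, fixed (funding stays the same)
analysis_per_gen = 1.0     # pre-AI: 1 FTE of analysis per FTE of data generation
ai_cut = 0.9               # AI removes 90% of hands-on analysis effort
review_per_gen = 0.4       # guessed: humans still read/review the AI's output
tool_cost_fte = 2          # agentic system priced like ~2 employees

# Pre-AI: generation plus matching analysis fills the whole budget.
g_before = TOTAL_FTE / (1 + analysis_per_gen)          # 50 FTE of generation

# Post-AI: each unit of generation still needs residual analysis plus review.
human_per_gen = 1 + analysis_per_gen * (1 - ai_cut) + review_per_gen
g_after = (TOTAL_FTE - tool_cost_fte) / human_per_gen  # ~65 FTE of generation

boost = g_after / g_before - 1
print(f"research boost: {boost:+.0%}")
```

With these made-up numbers the boost comes out around +31%, inside the +25%-33% window; a heavier review burden or a pricier tool drags it down toward the realistic ~20%.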
DO U LIKE MEDICINE? Yes
It's also just the trend line for capitalism, but for the trend line to continue we need constant breakthrough technologies. This is a breakthrough technology. Breakthrough technologies in my lifetime that have reached *me*: LLMs, the internet, probably weird advances in concrete that I take for granted, mRNA vaccines, GPUs, and I'm probably forgetting about four others. All of these things are required for the real economy to grow 3% a year (or 8% if we had good governance).
I've noticed a trend among the extra-hard skeptics. A few trends, actually, but I really want to highlight this one: most of the people I know who think AI sucks think it sucks because they believe we already have much better technology than we actually do. Almost all of the internal business applications of LLMs, which is their revolutionary power right now, are things people thought we already had and did 20 years ago, especially 10. But we didn't! The things people assumed were in place didn't exist. We lacked the theory, we lacked the computing power, we lacked putting a million engineers on the job.
Like, people think Google was using semantics like this for search 10 years ago, or that Facebook was using it to find your interests. Nope, mostly just more basic relationships and extensive tagging. People think big tech has been spying on everyone and using ML algorithms to figure out what everyone does versus everyone else, to know everything about you. Nope. Dead people showing as alive, duplicates; Google, with all my data, thought I was a middle-aged woman for a while when I was a young man (they had access to my Facebook URL history and I had no clear idea why). LLMs are already sussing out that I'm Hygro and suggesting many correct social media platforms to sniff me out. Using an LLM may cost more in compute than more basic crawlers, but they get it closer to right: more comprehensive, better organized, better asterisked ("this might be someone else with the same name"), and they can just iterate on people and save that info. We still aren't being meaningfully tracked the way people imagine (people imagine perfect tech and perfect implementation based on the current possibilities they read about, like "surely if I can imagine what they made, they already did the thing." Nope).
The reasons this stuff is so big and important and new and exciting are myriad, but it's worth reminding the huge portion of people who think the computer tech from movies is real: one of the biggest reasons it's so big is how little we actually had, and how much this one thing does that nothing did before.