The AI Thread

Did you really just swap AI for BitTorrent like that wasn’t awesome?
I did not think it was awesome; I swapped them to illustrate the point. I can make a useful resource by robbing others, but that does not make the robbing right.
 
You definitely want your APIs stateless but do you want your agent stateless?

My only experience with this, other than daydreaming about various software, is that I'm doing a lot of refactoring of my company's legacy web app using Augment Code, which I think uses MCP to run its Claude agent, with its index of your code base as the objects it retrieves (along with web search, etc.).

There are two modes, "agent mode" and "chat". I think neither is stateless, although chat mode acts like you're just sending the LLM everything, like a 2024 ChatGPT convo but with 2025 coding-agent skills (creating working files and showing you git-style changes).

Agent mode is the same but keeps going and going, which weirdly makes it cheaper, I guess because you are charged per message you send, which maybe has to resend the whole context, whereas its own runs do not? Or maybe it's just priced that way for other reasons. Using agent mode like it's chat costs the same as chat but is way more effective.

You can give it much more complex instructions, and it will just run. So, skill issue related to the above: you have to demand that it doesn't do any coding, but instead writes long reports, then references those reports to make a plan, then writes the plan, then executes. And you have to interrupt it frequently, which is expensive, but you know... keep it writing reports that it references, and it can code with guidance.

I guess my point is that it's leagues above using chat (or Claude chat) for coding, because it keeps the conversation and its searches in state while you curse at it and tell it it should be better. And I think it keeps the cost down by keeping state and spinning off smaller chunks for xyz in the agent rules pipeline.

But this is all just what it feels like as a user, not someone building it. And the article truthy linked obviously slaps and makes MCP sound bloated and useless, so I'm curious what kind of agents and control he has in mind.

Like, I could imagine an actually efficient agent that wasn't just a 25k-token prompt (cough, Claude, cough) with code listeners below, but instead code listeners above, sending JSON to and from stateless agents doing defined tasks. That should be more compute-efficient and "safer", but harder to code and harder to make "alive" like my Augment agent refactoring 60 pages of legacy code to help me switch our 120 nested navigation pages from dropdowns to a simple sidebar.
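The "code listeners above" idea can be sketched in a few lines. This is purely illustrative (the agent, the task schema, and the `dispatch` function are all made up, and the LLM call is stubbed out): an orchestrator routes JSON task specs to stateless agent functions, each of which receives everything it needs in the payload and returns JSON.

```python
import json

# Hypothetical sketch: an orchestrator ("code listener above") dispatches
# JSON task specs to stateless agent functions. No state lives in an agent;
# everything it needs arrives in the task payload.

def summarize_agent(task: dict) -> dict:
    # Stand-in for an LLM call; a real agent would send task["input"]
    # to a model with a task-specific prompt.
    text = task["input"]
    return {"task": task["id"], "result": text[:40]}

AGENTS = {"summarize": summarize_agent}

def dispatch(task_json: str) -> str:
    task = json.loads(task_json)
    agent = AGENTS[task["kind"]]      # route on the declared task kind
    return json.dumps(agent(task))    # output is JSON, like the input

out = dispatch(json.dumps({"id": 1, "kind": "summarize",
                           "input": "Refactor 120 nested navigation pages"}))
print(out)
```

The trade-off the post describes shows up even here: each agent is trivially testable and restartable, but the orchestrator has to carry all the "aliveness" itself.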
Admittedly, this is starting to exceed my understanding of how MCP works under the hood, but yeah, perhaps for coding agents you want it to be stateful as a way to track convo history, documents it's editing, etc.

Otoh, for a lot of LLM-powered tool calling and such, I'd prefer a stateless function (the LLM decides to call a tool, the tool call gets executed, results are added to the convo on my end; the tool itself is fully stateless). And as for the "conversation state", in stuff I've worked on, that's all stored in our postgres tables, and my own code fetches and appends to it as the convo goes on.
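A minimal sketch of that stateless pattern, with names that are illustrative rather than a real API: the tool is a pure function, and the caller owns the conversation history (here just a list; per the post, it would be postgres rows).

```python
# Minimal sketch of the stateless-tool pattern: the tool is a pure function,
# and the caller owns the conversation history. weather_tool and handle_turn
# are hypothetical names, not a real library API.

def weather_tool(city: str) -> str:
    # Fully stateless: same input, same output, no memory between calls.
    return f"Sunny in {city}"

conversation = []  # in production: rows in a postgres table

def handle_turn(user_msg: str) -> None:
    conversation.append({"role": "user", "content": user_msg})
    # Pretend the LLM decided to call the tool with an argument it extracted.
    result = weather_tool("Toronto")
    conversation.append({"role": "tool", "content": result})

handle_turn("What's the weather in Toronto?")
print(len(conversation), conversation[-1]["content"])
```

Because the tool holds no state, it can be retried, scaled out, or swapped without touching the conversation store.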

Perhaps a big difference here is whether you're using an MCP-powered app as a user vs trying to build an MCP-powered app as a developer who's totally cool persisting state in postgres or wherever on your own. In which case, the stateful-ness of MCP gets frustrating to deal with.

Agent mode is the same but keeps going and going, which weirdly makes it cheaper, I guess because you are charged per message you send, which maybe has to resend the whole context, whereas its own runs do not?
That is odd; dunno why it's cheaper in Augment. Fwiw, the whole context has to be resent to the LLM provider either way: regardless of what Augment is doing vis-à-vis MCP state, the Claude API being called inside Augment is stateless (and Augment is presumably managing context caching with Claude to keep costs reasonable).
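To make the "resend the whole context" point concrete, here's a toy sketch (the token counting is a crude word-count proxy, and the chat loop is made up): with a stateless chat API, every turn resends the entire history, so per-turn input size grows with the conversation, which is exactly what provider-side prompt caching exists to cheapen.

```python
# Toy sketch of why per-turn cost grows with a stateless chat API: every
# turn resends the whole history. Token accounting here is a crude
# word-count proxy, purely for illustration.

history = []

def count_tokens(messages):
    # Crude proxy: one "token" per whitespace-separated word.
    return sum(len(m["content"].split()) for m in messages)

def send_turn(user_msg: str) -> int:
    history.append({"role": "user", "content": user_msg})
    tokens_sent = count_tokens(history)   # the full history goes every time
    history.append({"role": "assistant", "content": "ok"})
    return tokens_sent

a = send_turn("refactor the sidebar please")
b = send_turn("now remove the dropdowns")
print(a, b)  # the second turn resends the first turn's tokens too
```

Prompt caching doesn't change what gets sent, only what the provider charges for the prefix it has already seen.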
 

People reading AI summaries on Google search instead of news stories, media experts warn

Experts warn that AI summaries can be inaccurate and are cutting into consumption of actual news

Some news publishers say the AI-generated summaries that now top many Google search results are resulting in fewer people actually reading the news — and experts are still flagging concerns about the summaries' accuracy.

When Google rolled out its AI Overview feature last year, its mistakes — including one suggestion to use glue to make pizza toppings stick better — made headlines. One expert warns concerns about the accuracy of the feature's output won't necessarily go away as the technology improves.

"It's one of those very sweeping technological changes that has changed the way we ... search, and therefore live our lives, without really much of a big public discussion," said Jessica Johnson, a senior fellow at McGill University's Centre for Media, Technology and Democracy.

"As a journalist and as a researcher, I have concerns about the accuracy."

While users have flagged mistakes in the AI-powered summaries, there is no academic research yet defining the extent of the problem. A report released by the BBC earlier this year examining AI chatbots from Google, Microsoft, OpenAI and Perplexity found "significant inaccuracies" in their summaries of news stories, although it didn't look at Google AI Overviews specifically.

In small font at the bottom of its AI summaries, Google warns users that "AI responses may include mistakes."

The company maintains the accuracy of the AI summaries is on par with other search features, like those that provide featured snippets, and said in a statement that it's continuing to "make improvements to both the helpfulness and quality of responses."

Leon Mar, director of media relations and issue management at CBC, said the public broadcaster "has not seen a significant change in search referral traffic to its news services' digital properties that can be attributed to AI summaries."

But he warned that users should be "mindful" of the varying accuracy of these summaries.

AI has 'fundamental problem'

Chirag Shah, a professor at the University of Washington's information school specializing in AI and online search, said the error rate is due to how AI systems work.

Generative AI can't think or understand concepts the way people do. Instead, it makes predictions based on massive amounts of training data. Shah said that "no checking" takes place after the systems retrieve the information from documents and before results are generated.

"What if those documents are flawed?" he said. "What if some of them have wrong information, outdated information, satire, sarcasm?"

A human being would know that someone who suggests adding glue to a pizza is telling a joke, Shah said. But an artificial intelligence system would not.

It's a "fundamental problem" that can't be solved by "more computation and more data and more time," he said.

AI changing how we search

As Google integrates AI into its popular search function, other AI companies' generative AI systems, such as OpenAI's ChatGPT, are increasingly being used as search engines themselves, despite their flaws.

Search engines were originally designed to help users find their way around the internet, Shah said. Now, the goal of those who design online platforms and services is to get the user to stay in the same system.

"If that gets consolidated, that's essentially the end of the free web," he said. "I think this is a fundamental and a very significant shift in the way not just search but the web, the internet, operates. And that should concern us all."

A study by the Pew Research Center from earlier this year found users were less likely to click on a link when their search resulted in an AI summary. While users clicked on a link 15 per cent of the time in response to a traditional search result, they only clicked on a link eight per cent of the time if an AI summary was included.

That's cause for alarm for news publishers, both in Canada and abroad.

"Zero clicks is zero revenue for the publisher," said Paul Deegan, CEO of News Media Canada, which represents Canadian news publishers.

Last month, a group of independent publishers submitted a complaint to the U.K.'s Competition and Markets Authority saying that AI overviews are causing them significant harm.

Alfred Hermida, a professor at the University of British Columbia's journalism school, said Google used to be a major source of traffic for news outlets by providing users with a list of news articles relevant to their search queries to click on.

But Hermida said, "when you have most people who are casual news consumers, that AI summary may be enough."

He noted Google has been hit with competition cases in the past, including one that saw the company lose an antitrust suit brought forward by the U.S. Department of Justice over its dominance in search.

In a post last week, Google's head of search, Liz Reid, said "organic click volume" from searches to websites has been "relatively stable year-over-year," and claimed this contradicts "third-party reports that inaccurately suggest dramatic declines in aggregate traffic — often based on flawed methodologies, isolated examples, or traffic changes that occurred prior to the roll out of AI features in Search."

'One-two punch'

Clifton van der Linden, an associate professor and director of the Digital Society Lab at McMaster University in Hamilton, noted that if users bypass a link to a news site due to an AI-generated summary, that "compounds an existing problem" in Canadian media, which is dealing with a ban on news links on Facebook and Instagram.

The Liberal government under Justin Trudeau passed the Online News Act in 2023 to require Meta and Google to compensate news publishers for the use of their content. In response, Meta blocked news content from its platforms in Canada, while Google has started making payments under the legislation.

The future of that legislation seems uncertain. Prime Minister Mark Carney indicated last week he is open to repealing it.

Between Meta pulling news links and the emergence of AI search engines, Johnson says Canadian media has experienced a "one-two punch."

"The point is, and other publishers have raised this, what's the point of me producing this work if no one's going to pay for it, and they might not even see it?"
https://www.cbc.ca/news/science/ai-summaries-news-google-1.7607762
 