The AI Thread

The way I understood the article is that the AI didn't use any actual game code. The emulation is based on the AI watching the game being played for a very short time and then replicating what it saw... I think the word hallucinate applies. I remember when I was a kid playing Doom 2 through the night and making it farther than I'd ever been. Then I went to bed for a very restless sleep as my mind kept replaying the game. This still happens sometimes when I play until too late; I'd call it a hallucination.
Yes, but you're a human being with a mind and were seeing images in your mind. That's what hallucination is. An algorithm generating images on a screen isn't hallucination.

Hallucination is a pretty standard term in AI/ML.

Though in this headline it's indeed out of place. Only improperly generated parts of the video can be called hallucinations, not the whole process of generation.
I'd forgotten this usage. I think that's kind of silly too, but accepted jargon is accepted jargon I suppose. But yes, this story isn't using it in that sense anyway.
 
It sounds cool, OK? ...geez, some people :lol:
 
It’s a hallucination when the AI generates false information in place of truth. It’s a great use of the word that matches how people hallucinate meaning all the time. I figured Manfred was complaining that generating Doom wouldn’t count as a hallucination, since that was the purpose.

Frankly I don’t know why any of you wouldn’t know ChatGPT lingo by now, there are approximately 0% of you that benefit from not using it for your work.
 
Frankly I don’t know why any of you wouldn’t know ChatGPT lingo by now, there are approximately 0% of you that benefit from not using it for your work.
That is a slightly odd way to phrase it, but there are lots of people who could be potentially harmed by using Corporate Generative AI in their work. Anyone who is producing stuff that they hope to hold the copyright on should be particularly careful, as should anyone with secret or personal data.
 
Frankly I don’t know why any of you wouldn’t know ChatGPT lingo by now, there are approximately 0% of you that benefit from not using it for your work.
The jargon is cool, the benefit is arguable. I know plenty of developers who are using it, but we then lose the causal chain in understanding why something works the way it does. "The bot did it" has become a common phrase at work.

Corporate are pushing it pretty hard, which in general is another warning sign (why do corporate care how many accounts are using Copilot, except to make a % meter go up, so they can look good to their bosses?). For me it provides next-to-no benefit. Any code it could write for me is trivial enough for me to write myself in the same amount of time (after getting the prompts accurate enough), and my value is in both design and root cause analysis, which any iteration of generative AI available to me pretty much sucks at.

"here's the most contextual result from Google" rarely, if ever, means it's the actual result in my line of work. I have to evaluate the result within my own codebases, plural, knowing the soft behavioural links between each one and the ramifications of changing one that depends on the other. Generative AI can't handle that amount of lateral thinking. Or, arguably, any thinking. It just regurgitates within parameters, and that in of itself is a use (like I said, some people use it, and some people even use it well, for concepting more than concrete code). But like I already said, it has very little, if any, benefit for me personally.
 
The jargon is cool, the benefit is arguable. I know plenty of developers who are using it, but we then lose the causal chain in understanding why something works the way it does. "The bot did it" has become a common phrase at work.

Corporate are pushing it pretty hard, which in general is another warning sign (why do corporate care how many accounts are using Copilot, except to make a % meter go up, so they can look good to their bosses?). For me it provides next-to-no benefit. Any code it could write for me is trivial enough for me to write myself in the same amount of time (after getting the prompts accurate enough), and my value is in both design and root cause analysis, which any iteration of generative AI available to me pretty much sucks at.

"here's the most contextual result from Google" rarely, if ever, means it's the actual result in my line of work. I have to evaluate the result within my own codebases, plural, knowing the soft behavioural links between each one and the ramifications of changing one that depends on the other. Generative AI can't handle that amount of lateral thinking. Or, arguably, any thinking. It just regurgitates within parameters, and that in of itself is a use (like I said, some people use it, and some people even use it well, for concepting more than concrete code). But like I already said, it has very little, if any, benefit for me personally.

Once the hard work is done and it is completely obvious what the code should look like, it does save me some typing. I am not sure how much time I would have spent typing it out myself, but probably longer (I am not very good at typing).

But, yeah, even if it helps me write code a little bit faster, the benefits to me personally are marginal. It is not like I get paid more when I develop faster.
 
It’s a hallucination when the AI generates false information in place of truth. It’s a great use of the word that matches how people hallucinate meaning all the time. I figured Manfred was complaining that generating Doom wouldn’t count as a hallucination, since that was the purpose.

Frankly I don’t know why any of you wouldn’t know ChatGPT lingo by now, there are approximately 0% of you that benefit from not using it for your work.
I use it at work to generate nonsense to amuse a colleague with. It doesn't seem useful for much else. (Also I can't imagine refuse collectors, cleaners, gardeners etc would get much mileage out of it.)

Given that "hallucination" (you know, in the tradiational sense) is completely about perceptions, and language models don't perceive anything, I don't think it matches very well at all. It's just generating incorrect information.
 
AI can fight conspiracy theories

Researchers have shown that artificial intelligence (AI) could be a valuable tool in the fight against conspiracy theories, by designing a chatbot that can debunk false information and get people to question their thinking.

In a study published in Science on 12 September, participants spent a few minutes interacting with the chatbot, which provided detailed responses and arguments, and experienced a shift in thinking that lasted for months. This result suggests that facts and evidence really can change people’s minds.


Dialogues with AI durably reduce conspiracy beliefs even among strong believers.
(Left) Average belief in participant’s chosen conspiracy theory by condition (treatment, in which the AI attempted to refute the conspiracy theory, in red; control, in which the AI discussed an irrelevant topic, in blue) and time point for study 1. (Right) Change in belief in chosen conspiracy from before to after AI conversation, by condition and participant’s pretreatment belief in the conspiracy.
 
Which means that LLMs have the capability to convince people: not only to reduce conspiracy beliefs, but also to vote for the "right" candidate or support the "correct" political system, depending on their training.
They will be used in election campaigns, for speechwriting assistance, etc. very soon, if not already.
 
The new GPT model is insane
 
Has anyone experimented with competing brands - Grok, Llama, etc.?

ChatGPT has been helpful over the past one and a half years, and some of its features (like multilingual dictation and answers) have been invaluable to me. Also, "threads" is a useful feature, neatly compartmentalising conversations so that I can come back and pick up the train of thought where I left off.

However, ChatGPT's story arc has been that of a cheap, annoying little propagandist and a censor, naturally, so I am looking around for alternatives. Preferably open source, but any will do.
 
Claude is probably the best competitor. I haven’t bothered to pay for it yet but I should.
 
Has anyone read this book? I've just started it; the beginning is interesting. Basically, the questions are not new, but the writing is engaging.
[Book cover: Nexus]

 
Do AI models produce more original ideas than researchers?

An ideas generator powered by artificial intelligence (AI) came up with more original research ideas than did 50 scientists working independently, according to a preprint posted on arXiv this month.

The human and AI-generated ideas were evaluated by reviewers, who were not told who or what had created each idea. The reviewers scored AI-generated concepts as more exciting than those written by humans, although the AI’s suggestions scored slightly lower on feasibility.

But scientists note the study, which has not been peer-reviewed, has limitations. It focused on one area of research and required human participants to come up with ideas on the fly, which probably hindered their ability to produce their best concepts.

 
Both AI and human ideas are... human ideas, aren't they?

A human had to come up with an idea (or logic, formula), then that idea trickled down to AI's database of parameters. So, if AI comes up with an idea based on human logic, can it claim originality?

AI aggregates and structures information way better than most humans I talked to. That can also create an appearance of originality.

And I suppose it can come up with novel combinations sometimes, while brute forcing some scientific problem.

One overarching theme I noticed when AI (an LLM) is answering my questions: the AI extracts most of the information for an answer from the way my question was phrased. What happens, basically, is that I ask the question and the AI, in turn, demonstrates why my question contains the answer. While very, very impressive, it's hard to make an original breakthrough that way, to break the invisible wall, so to speak, because the AI is programmed to concentrate on the narrow corridor of the prompt-response framework.
 
AI models censor LGBTQ+ content

“Most of the time when I’m using ChatGPT, I’m trying to troll it into saying something offensive,” says natural-language-processing researcher Eddie Ungless. He’s one of the scientists investigating the safety systems implemented by AI companies to protect users from undesirable content — with sometimes undesirable results. One finding is that the safeguards gloss over the subtleties of how certain terms, such as ‘queer’, are used in different contexts, erasing LGBTQ+ content entirely from training data and leaving models with a patchy version of reality.

Bigger chatbots tell more lies

A study of newer, bigger versions of three major artificial intelligence (AI) chatbots shows that they are more inclined to generate wrong answers than to admit ignorance when compared with previous models. The study also found that people aren’t very good at spotting the bad answers, meaning users are likely to overestimate the abilities of chatbots such as OpenAI’s GPT, Meta’s LLaMA and BLOOM. “That looks to me like what we would call bullshitting,” says philosopher of science Mike Hicks of AI’s questionable behaviour. “It’s getting better at pretending to be knowledgeable.”
 