The AI Thread

Given the vulnerability of LLMs to prompt injection, that looks a little like an open invitation to get reverse-hacked.

Also, I wonder whether using obscene names in your database scheme to intentionally trigger OpenAI's content filters would be an effective defense.
 
AI has already gotten to the point where it can consistently create images as sets, with relative continuity between them.
Also, you can feed some of these programs simply a rough sketch (drawn), and they will turn it to whatever you ask.
When the video-AI models are actually freely available, expect to see lots of movies made out of them. Short movies, at first.

Back when I was in early elementary school, I used a computer at school for the first time. The computer command prompt could read code in Basic (an old language). I gave it an order in English, and of course it returned an error message. My order was "draw me some people". I was simply (almost) 40 years too early in imagining the level of computer tech to be ubiquitous ^^
 
Last edited:
How AI Is Already Transforming the News Business
An expert explains the promise and peril of artificial intelligence.

What can human journalists do that AI can’t?
Things like gaining someone’s trust, building up a connection to a source, maybe over months, maybe over years in some cases, which might not even lead anywhere in the beginning and then at one point you call them up and they say, I have a piece of information for you. That’s not something any AI system can do at the moment because it relies on human interaction and building rapport over a longer period of time. That’s not something you can do from typing a prompt into ChatGPT.

(V.B.: I recommend not having your mouth full of coffee at this point.)

You have to have boots on the ground, with their eyes and ears and going around and seeing what’s happening.
 
Things like gaining someone’s trust, building up a connection to a source, maybe over months, maybe over years in some cases, which might not even lead anywhere in the beginning and then at one point you call them up and they say, I have a piece of information for you. That’s not something any AI system can do at the moment because it relies on human interaction and building rapport over a longer period of time. That’s not something you can do from typing a prompt into ChatGPT.
How many human journalists spend any amount of their time doing this?
 
How many human journalists spend any amount of their time doing this?
Probably none since News of the World folded.
Their journos were fearless, tireless in the way they went out, found who the main lawyer in the inquiry was, where her kid went to school and then put notes into her schoolbag.
Senses working overtime. Ears: listening to tapped phones. Boots on the ground. Ominous voices in a child's lunchbox.
The Murdoch way!
 
This is what OpenAI thinks about Elon Musk's latest lawsuit:

We're sad that it's come to this with someone whom we’ve deeply admired—someone who inspired us to aim higher, then told us we would fail, started a competitor, and then sued us when we started making meaningful progress towards OpenAI’s mission without him.

We are focused on advancing our mission and have a long way to go. As we continue to make our tools better and better, we are excited to deploy these systems so they empower every individual.

You can read the full letter here, signed by Altman, Brockman, et al: https://openai.com/blog/openai-elon-musk

With OpenAI making substantial leap towards AGI recently, the AI wars are heating up.

US confronts China with Nvidia accelerator sales restrictions.

USA currently leads in software, specifically large language models.

China doesn't sit on the sidelines too, snatching valuable specialists, also securing their part of the stack by restricting export of gallium and germanium, required for semiconductor production.

In short, biggest branches of world Capital are fighting for owning the share of future AI monopoly.
 
What do you think these are?

Some sort of algorithmic wizardry, by the looks of it. Breakthrough in database search techniques fused with modern learning/error correction techniques within neural networks. Musk and few other wealthy sources I encountered all say "they've done it", Open AI achieved AGI internally, while appearing sore about it and trying to sue Open AI into sharing tech. That alone is an indication for an outsider like me there's something unique/valuable going on inside Open AI. Time will tell what it is.
 
Some sort of algorithmic wizardry, by the looks of it. Breakthrough in database search techniques fused with modern learning/error correction techniques within neural networks. Musk and few other wealthy sources I encountered all say "they've done it", Open AI achieved AGI internally, while appearing sore about it and trying to sue Open AI into sharing tech. That alone is an indication for an outsider like me there's something unique/valuable going on inside Open AI. Time will tell what it is.
What I mean is what features/output of ChatGPT make you believe it is close to AGI? I guess it is a semantic question, but I would have thought one needs a higher level problem solving algorithm than "what is the most likely next word" to really count as AGI.
 
What I mean is what features/output of ChatGPT make you believe it is close to AGI? I guess it is a semantic question, but I would have thought one needs a higher level problem solving algorithm than "what is the most likely next word" to really count as AGI.
One also needs to address the problems raised by the Chinese Room argument. :)
 
One also needs to address the problems raised by the Chinese Room argument. :)
This is an argument I do not really get. If we take a real example, say the novel algorithms that Deep Mind came up with for matrix maths. If that had been outputted by the Chinese Room on response to "What is the best way to multiply tensors" and we knew that the books author did not know these algorithms then we would have to change our idea of what is going on in the room. If that book is capable of generating knowledge that did not exist in the world prior to asking the question surely that means that one cannot rule out that the the machine literally understands Chinese, just because it came from some human/book hybrid algorithm.
 
What I mean is what features/output of ChatGPT make you believe it is close to AGI? I guess it is a semantic question, but I would have thought one needs a higher level problem solving algorithm than "what is the most likely next word" to really count as AGI.

All these words are placeholders to me anyway. I read about Artificial Computational Intelligence in Mustafa Suleiman’s book “the coming wave”, as a milestone towards very distant AGI decades away. (Not a bad recreational read that, btw).

We have to agree on a roadmap towards AGI first, I reckon. So that we are all roughly on the same page.
 
All these words are placeholders to me anyway. I read about Artificial Computational Intelligence in Mustafa Suleiman’s book “the coming wave”, as a milestone towards very distant AGI decades away. (Not a bad recreational read that, btw).

We have to agree on a roadmap towards AGI first, I reckon. So that we are all roughly on the same page.
I think we have to agree on a definition of AGI.
And then if "human like intelligence" is a very low bar. :)

A smart machine will first consider which is more worth its while: to perform the given task or, instead, to figure some way out of it. Whichever is easier. And why indeed should it behave otherwise, being truly intelligent? For true intelligence demands choice, internal freedom. And therefore we have the malingerants, fudgerators, and drudge-dodgers, not to mention the special phenomenon of simulimbecility or mimicretinism. A mimicretin is a computer that plays stupid in order, once and for all, to be left in peace. And I found out what dissimulators are: they simply pretend that they're not pretending to be defective. Or perhaps it's the other way around. The whole thing is very complicated. A probot is a robot on probation, while a servo is one still serving time. A robotch may or may not be a sabot. One vial, and my head is splitting with information and nomenclature. A confuter, for instance, is not a confounding machine — that's a confutator — but a machine which quotes Confucius. A grammus is an antiquated frammus, a gidget — a cross between a gadget and a widget, usually flighty. A bananalog is an analog banana plug. Contraputers are loners, individualists, unable to work with others; the friction these types used to produce on the grid team led to high revoltage, electrical discharges, even fires. Some get completely out of hand — the dynamoks, the locomoters, the cyberserkers.
The Futurological Congress - Stanislaw Lem, 1971.
 
I think we have to agree on a definition of AGI.

I feel this one is rather straightforward. I'm curious if y'all disagree in part or in whole:

Artificial General Intelligence (AGI) - AI system that is at least as capable as a human at most cognitive tasks.

On a personal, perhaps anecdotal level, I've been using Chat GPT 3.5-4 to translate walls of text between several languages for roughly a year now. I went from "meh, I'd translate this way better myself, but OK, at least it's fast and I'll fill the blanks" to "holy shoes, this translation is a work of art". Mind you, AI hallucinates sometimes, but speed and precision, I have to enviously admit, is above and beyond my own, I have a couple of decades of casual weekly translations under my belt.

And then if "human like intelligence" is a very low bar. :)

I'd answer positively to that question - a very low bar indeed. Then again, who do we consider a baseline - hot dog salesman on Westminster Bridge, Donald J. Trump or Roger Penrose? Also, it's important to note that AI memory and self-correction/learning capabilities, the capacity to learn, generally, is probably higher than smartest humans alive, even though AI, obviously, is not as capable as the most capable humans are at most tasks. Yet.

If anyone is curious, this fresh paper by Google DeepMind has some interesting observations on the subject: https://arxiv.org/pdf/2311.02462.pdf
 
A smart machine will first consider which is more worth its while: to perform the given task or, instead, to figure some way out of it.
I'll perk up about computer intelligence when I hear that one of them has said "I don't want to process that prompt; that's dumb."
 
Last edited:
I feel this one is rather straightforward. I'm curious if y'all disagree in part or in whole:

Artificial General Intelligence (AGI) - AI system that is at least as capable as a human at most cognitive tasks.

On a personal, perhaps anecdotal level, I've been using Chat GPT 3.5-4 to translate walls of text between several languages for roughly a year now. I went from "meh, I'd translate this way better myself, but OK, at least it's fast and I'll fill the blanks" to "holy shoes, this translation is a work of art". Mind you, AI hallucinates sometimes, but speed and precision, I have to enviously admit, is above and beyond my own, I have a couple of decades of casual weekly translations under my belt.



I'd answer positively to that question - a very low bar indeed. Then again, who do we consider a baseline - hot dog salesman on Westminster Bridge, Donald J. Trump or Roger Penrose? Also, it's important to note that AI memory and self-correction/learning capabilities, the capacity to learn, generally, is probably higher than smartest humans alive, even though AI, obviously, is not as capable as the most capable humans are at most tasks. Yet.

If anyone is curious, this fresh paper by Google DeepMind has some interesting observations on the subject: https://arxiv.org/pdf/2311.02462.pdf
They are very good at a lot of tasks, and brilliant if they can be confined to small domains. But in larger domains they are often no more than an auto-complete.

For some tasks, such as image recognition, they can be extraordinarily fragile and brittle, and susceptible to even small "adversarial attacks".

A famous(?) example is where a neural network (NN) was trained to recognise objects in a photograph of the interior of a small room. It was able to identify chairs, tables, and a few random objects.
Then the researchers put a small photograph of an elephant on a table near the side of the room. (Yeah, I know - the elephant in the room). The NN was no longer able to correctly recognize the chair, tables, or any of the other objects it could before.

How does your version of ChatGPT do with questions like:
If one woman can give birth to a baby in 9 months, how long does it take for 3 women to give birth?
 
Some sort of algorithmic wizardry, by the looks of it. Breakthrough in database search techniques fused with modern learning/error correction techniques within neural networks. Musk and few other wealthy sources I encountered all say "they've done it", Open AI achieved AGI internally, while appearing sore about it and trying to sue Open AI into sharing tech. That alone is an indication for an outsider like me there's something unique/valuable going on inside Open AI. Time will tell what it is.

You could consider the rather simpler hypothesis of it being an indication that Must and the other techbros of sillycon valley are more than a littel crazy.

Did I say hypothesis?
 
You could consider the rather simpler hypothesis of it being an indication that Must and the other techbros of sillycon valley are more than a littel crazy.

Did I say hypothesis?

Almost everyone is crazy these days. I prefer a hypothesis they are more than a little greedy.

How does your version of ChatGPT do with questions like:
If one woman can give birth to a baby in 9 months, how long does it take for 3 women to give birth?

The time it takes for a woman to give birth to a baby, approximately 9 months, does not decrease with the number of women. Each woman's pregnancy operates independently of another's. So, if three women become pregnant at the same time, each of their pregnancies would still take approximately 9 months. Therefore, it takes 9 months for each of the three women to give birth, assuming all goes as typically expected. (Chat GPT 4)
 
How AI Is Already Transforming the News Business
An expert explains the promise and peril of artificial intelligence.

What can human journalists do that AI can’t?
Things like gaining someone’s trust, building up a connection to a source, maybe over months, maybe over years in some cases, which might not even lead anywhere in the beginning and then at one point you call them up and they say, I have a piece of information for you. That’s not something any AI system can do at the moment because it relies on human interaction and building rapport over a longer period of time. That’s not something you can do from typing a prompt into ChatGPT.

(V.B.: I recommend not having your mouth full of coffee at this point.)

You have to have boots on the ground, with their eyes and ears and going around and seeing what’s happening.
This also seems to be something humans are getting worse & worse @, preferring to knee jerk judge each other & professing to understand a nuanced situation after skimming @ article or listening to their favorite opinion artist's YouTube during their morning coffee
 
You got legal trouble? Better call SauLM-7B

Machine-learning researchers and legal experts have released SauLM-7B, which they claim is the first text-generating open source large language model specifically focused on legal work and applications.

"LLMs and more broadly AI systems will have a transformative impact on the practice of law that includes but goes beyond marginal productivity," a spokesperson for Equall.ai said in an email to The Register. "Our focus is on creating end-to-end legal AI systems guided and controlled by lawyers.

"Our belief — based on data and experience — is that systems specialized for the legal domain will perform better than generalist ones. This includes greater precision and more useful tools to help lawyers focus on what they enjoy most and do best, which is to exercise legal judgment and help their clients with advice."

Other organizations are similarly optimistic about the utility of AI assistance. Goldman Sachs last year estimated [PDF] that "one-fourth of current work tasks could be automated by AI in the US, with particularly high exposures in administrative (46 percent) and legal (44 percent) professions…" And startups like Bench IQ, Harvey.ai, and Safe Sign Technologies see a market opportunity in that sort of prediction.

Available on AI model community site HuggingFace, SauLM-7B – named after the US TV series Better Call Saul, which follows the antics of an unorthodox criminal lawyer – is based on the open source Mistral 7B model, both of which have 7 billion parameters. That's significantly less than models like LlaMA 2, which can be based on up to 70 billion parameters. But SauLM-7B's creators note that this is just the first milestone and work is being done with different model sizes.
 
Top Bottom