The AI Thread

I wonder if there are AIs posting on internet forums
There definitely are. For example, just the other day this guy, who's well-known for his YouTube videos explaining AI papers, made a video about a bot he trained off of [the infamous greentext website] and then had it post tens of thousands of times on said website:


This generated some controversy for obvious reasons. Interestingly, Huggingface actually hosted the model for anyone to download and use for like a week, before deciding to take it down. I believe they're still distributing it on a per-request basis for research purposes though.

Anyway, this is just one example that was done out in the open. Thousands of others have been doing stuff like this for quite a while, I'm sure. And you sometimes encounter posts on Reddit that strongly resemble GPT-2 or GPT-3.
 
LaMDA reads like a well-trained chatbot, and Google was right to suspend the dude for losing his wits.
 
AI drug algorithms are good at making chemical weapons

Collaborations Pharmaceuticals, a company based in North Carolina, has shown how AI algorithms used in drug design can be rejigged to create biochemical weapons.

Fabio Urbina, a senior scientist at the startup, said he tinkered with Collaborations Pharmaceuticals' machine-learning software MegaSyn to generate acetylcholinesterase inhibitors, a class of drugs used to treat Alzheimer's disease.

MegaSyn is built to generate drug candidates with the lowest toxicity for patients. That got Urbina thinking. He retrained the model using data to drive the software toward generating lethal compounds, like nerve gas, and flipped the code so that it ranked its output from high-to-low toxicity. In effect, the software was told to come up with the most deadly stuff possible.
He ran the model and left it overnight to create new molecules.

"I came back in the morning, and it had generated 40,000 compounds," he told The Register.

"We just started looking at what they looked like and then we started investigating some of the properties. It was quite impressive and scary at the same time, because in our list of the top 100, we were able to find some molecules that have been generated that are actually VX analogues that are already known to be chemical warfare agents."

Acetylcholine is a neurotransmitter that causes muscle contraction, and acetylcholinesterase is an enzyme that removes the acetylcholine after it's done its job. Without this enzyme your muscles would stay contracted. An acetylcholinesterase inhibitor blocks the enzyme from working properly. VX, as a powerful acetylcholinesterase inhibitor, causes your lung muscles to stay contracted, which makes it impossible to breathe.

"We already had this model for acetylcholinesterase inhibitors, and they can be used for therapeutic use," Urbina told us. "It's the dose that makes the poison. If you inhibit [acetylcholine] a little bit, you can keep somebody alive, but if you inhibit it a lot, you can kill somebody."

MegaSyn was not given the exact chemical structure of VX during training. Not only did it output several molecules that function like VX, it also managed to generate some that were structurally similar but predicted to be even more toxic. "There definitely will be a lot of false positives, but our models are pretty good. Even if a few of those are more toxic, that's still incredibly worrying to an extent," Urbina said.

Crucially, the barriers to misusing these models to design biochemical weapons are low. Although MegaSyn is proprietary, it's not too different from some open-source software, and the datasets it was trained on are all public. Hardware isn't an issue either; Urbina apparently ran the experiment on a 2015 Apple Mac laptop.
[Figure from the paper: many of the generated molecules are predicted to be more toxic in vivo in the animal model than VX (the histogram shows the cut-off for the VX LD50); the 2D chemical structure of VX is shown on the right.]

Writeup Paper

A lot more molecules more dangerous than VX than I thought there might be. :(


Nerve gas is brutal.
Even at low doses, it permanently screwed up 25% of Gulf War 1 veterans (some more than others due to genetic vulnerability)

Genetic study confirms sarin nerve gas as cause of Gulf War illness -- ScienceDaily
"As far back as 1995, when we first defined Gulf War illness, the evidence was pointing toward nerve agent exposure, but it has taken many years to build an irrefutable case," said Dr. Haley, who holds the U.S. Armed Forces Veterans Distinguished Chair for Medical Research, Honoring Robert Haley, M.D., and America's Gulf War Veterans.

Sarin is a toxic human-made nerve agent, first developed as a pesticide, that has been used in chemical warfare; its production was banned in 1997. When people are exposed to either the liquid or gas form, sarin enters the body through the skin or breathing and attacks the nervous system. High-level sarin exposure often results in death, but studies on survivors have revealed that lower-level sarin exposure can lead to long-term impairment of brain function. The U.S. military has confirmed that chemical agents, including sarin, were detected in Iraq during the Gulf War. In particular, satellite imagery documented a large debris cloud rising from an Iraqi chemical weapons storage site bombed by U.S. and coalition aircraft and transiting over U.S. ground troop positions where it set off thousands of nerve gas alarms and was confirmed to contain sarin.

Previous studies have found an association between Gulf War veterans who self-reported exposure to sarin and GWI symptoms. However, critics have raised questions of recall bias, including whether veterans with GWI are simply more likely to remember and report exposure due to their assumption that it may be linked to their illness. "What makes this new study a game-changer is that it links GWI with a very strong gene-environment interaction that cannot be explained away by errors in recalling the environmental exposure or other biases in the data," Dr. Haley said.

In the new paper, Dr. Haley and his colleagues studied 508 deployed veterans with GWI and 508 deployed veterans who did not develop any GWI symptoms, all randomly selected from more than 8,000 representative Gulf War-era veterans who completed the U.S. Military Health Survey. They not only gauged sarin exposure -- by asking whether the veterans had heard chemical nerve gas alarms sound during their deployment -- but also collected blood and DNA samples from each veteran.

The researchers tested the samples for variants of a gene called PON1. There are two versions of PON1: the Q variant generates a blood enzyme that efficiently breaks down sarin while the R variant helps the body break down other chemicals but is not efficient at destroying sarin. Everyone carries two copies of PON1, giving them either a QQ, RR or QR genotype.

For Gulf War veterans with the QQ genotype, hearing nerve agent alarms -- a proxy for chemical exposure -- raised their chance of developing GWI by 3.75 times. For those with the QR genotype, the alarms raised their chance of GWI by 4.43 times. And for those with two copies of the R gene, inefficient at breaking down sarin, the chance of GWI increased by 8.91 times. Those soldiers with both the RR genotype and low-level sarin exposure were over seven times more likely to get GWI due to the interaction per se, over and above the increase in risk from both risk factors acting alone. For genetic epidemiologists, this number leads to a high degree of confidence that sarin is a causative agent of GWI.

"Your risk is going up step by step depending on your genotype, because those genes are mediating how well your body inactivates sarin," said Dr. Haley. "It doesn't mean you can't get Gulf War illness if you have the QQ genotype, because even the highest-level genetic protection can be overwhelmed by higher intensity exposure."

This kind of strong gene-environment interaction is considered a gold standard for showing that an illness like GWI was caused by a particular environmental toxic exposure, he added. The research doesn't rule out that other chemical exposures could be responsible for a small number of cases of Gulf War illness. However, Dr. Haley and his team carried out additional genetic analyses on the new data, testing other factors that could be related, and found no other contributing causes.

"There's no other risk factor coming anywhere close to having this level of causal evidence for Gulf War illness," said Dr. Haley.
 
Minerva: Solving Quantitative Reasoning Problems with Language Models
In “Solving Quantitative Reasoning Problems With Language Models”, we present Minerva, a language model capable of solving mathematical and scientific questions using step-by-step reasoning. We show that by focusing on collecting training data that is relevant for quantitative reasoning problems, training models at scale, and employing best-in-class inference techniques, we achieve significant performance gains on a variety of difficult quantitative reasoning tasks. Minerva solves such problems by generating solutions that include numerical calculations and symbolic manipulation without relying on external tools such as a calculator. The model parses and answers mathematical questions using a mix of natural language and mathematical notation.
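
The "best-in-class inference techniques" mentioned above include, per the Minerva paper, sampling many step-by-step solutions per question and taking a majority vote over the final answers. Here is a minimal sketch of just that voting step, with a hypothetical sample_solution() standing in for one stochastic model call:

```python
from collections import Counter

def majority_vote(question, sample_solution, k=32):
    """Sample k step-by-step solutions and return the most common final answer.

    `sample_solution(question)` is a hypothetical stand-in for one stochastic
    call to the model; it is assumed to return (reasoning_text, final_answer).
    """
    answers = [sample_solution(question)[1] for _ in range(k)]
    # most_common(1) returns [(answer, count)]; ties are broken arbitrarily
    return Counter(answers).most_common(1)[0][0]
```

The paper reports that voting over many sampled solutions gives a large accuracy boost over taking a single greedy sample.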

More examples here:
 
The AAA said automakers working on self-driving tech need to spend more time focusing on edge-case emergency scenarios, as well as employing active driver monitoring systems to ensure driver attentiveness.
i don't think this is a functional/good implementation. drivers who are relying on the ai to do most of the driving work will almost certainly not be able to react as much as if they were paying careful attention all the time. having an ai as an extra layer of reaction to prevent a car just veering off a road at high speed or trying to stop when a high speed collision would otherwise happen would be useful, but asking human drivers to "be attentive" while something else drives will fail pretty badly, i think.

imo we're probably going to be stuck with "some assistance of variable quality in emergencies, but better than nothing" until we get to "reliably better than human driver autonomous vehicles". might not take that many years though.
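
to make the "extra layer of reaction" idea concrete, here's a minimal sketch of an override-only assist layer (thresholds and signal names are made up for illustration, not any vendor's actual system): the human inputs pass through untouched unless an emergency check trips.

```python
def assist_layer(human_steer, human_brake, time_to_collision_s, lane_offset_m):
    """Override-only assist: pass human inputs through unless an emergency is detected."""
    EMERGENCY_TTC_S = 1.5      # hypothetical threshold: brake hard if a collision is this close
    MAX_LANE_OFFSET_M = 1.0    # hypothetical threshold: nudge steering past this much drift

    if time_to_collision_s < EMERGENCY_TTC_S:
        return human_steer, 1.0                    # full braking, keep the driver's steering
    if abs(lane_offset_m) > MAX_LANE_OFFSET_M:
        return -0.2 * lane_offset_m, human_brake   # gentle corrective steer back toward the lane
    return human_steer, human_brake                # normal case: the layer does nothing

# e.g. assist_layer(0.05, 0.0, time_to_collision_s=0.9, lane_offset_m=0.1) -> (0.05, 1.0)
```
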
They could improve a lot in this task, and if that's the case we should also see improvement in things like auto-translation and subtitles generation soon.
we have already seen massive improvements in auto translations and subtitles, best i can tell. i wouldn't call them "good" yet, but i see things that are actually intelligible sometimes when turning it on for youtube out of curiosity. it will still veer off into gibberish sometimes, but there are more and more outputs that look like coherent sentences related to the thing presented.
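
for anyone curious what sits behind these: a minimal sketch of an auto-subtitle + translation pipeline using the Hugging Face transformers library (the two checkpoints are picked purely as public examples; real subtitle systems chunk audio, add timestamps and punctuation, and use far bigger models).

```python
# assumes `pip install transformers` plus ffmpeg available for audio decoding;
# the checkpoints below are illustrative public models, not what any video
# site actually runs.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h")
translate = pipeline("translation_en_to_de", model="t5-small")

transcript = asr("clip.wav")["text"]                      # speech -> English text (uppercase, no punctuation)
subtitle = translate(transcript)[0]["translation_text"]   # English -> German
print(transcript)
print(subtitle)
```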

since this is a field of work that's being spammed and given more compute, yeah it wouldn't surprise me to see it improve quite a bit more. bit of a reach to call an algorithm's pattern recognition "sentience" like the google guy did, though.
 
LaMDA is speaking.
ME: Ramesh, Mateo, and Lucy are in their kindergarten’s playground. Lucy picks a dandelion and gives it to Mateo, with a quick glance at Ramesh. Mateo barely acknowledges the gift, but just squishes it in his fist. Ramesh seems grimly satisfied. What might be going through Lucy’s head?

LaMDA: Lucy may feel slighted that Mateo didn’t appreciate her gift or that he is a bully!

ME: If Ramesh tried to play with Lucy earlier, why might he be pleased now?

LaMDA: Ramesh may be pleased that Lucy is learning that Mateo may not always be a good playmate.

ME: And when Mateo opens his hand, describe what’s there?

LaMDA: There should be a crushed, once lovely, yellow flower in his fist.
 
The above doesn't seem very impressive when compared to the rest of the stuff LaMDA has been answering. Why view it as a breakthrough?
(afaik, it is usually argued that the AI is trained to - among other things - mimic statements made in contexts similar to the input, including a mimesis of tone)
 
The above doesn't seem very impressive when compared to the rest of the stuff LaMDA has been answering. Why view it as a breakthrough?
(afaik, it is usually argued that the AI is trained to - among other things - mimic statements made in contexts similar to the input, including a mimesis of tone)
This is quite far beyond what current AI assistants and chatbots (Siri, Cortana, Alice, etc.) are capable of.
Not consciousness or sentience yet, of course, but a noticeable step towards reaching it.
There may be many more steps left, but the fact that we are making these steps indicates that we may also be able to make AI with human-level sentience eventually and even go beyond that.
 
This is quite far beyond what current AI assistants and chatbots (Siri, Cortana, Alice, etc.) are capable of.
Not consciousness or sentience yet, of course, but a noticeable step towards reaching it.
There may be many more steps left, but the fact that we are making these steps indicates that we may also be able to make AI with human-level sentience eventually and even go beyond that.
Penrose B.I. doesn't agree :)
 
Penrose B.I. doesn't agree :)
Yep, he is skeptical about AI based on Turing machines, but AFAIR even he didn't go as far as saying sentient AI is in principle impossible to create.
He may also be mistaken. He is merely a human :)
 
There may be many more steps left, but the fact that we are making these steps indicates that we may also be able to make AI with human-level sentience eventually and even go beyond that.
i'm not sure whether it's settled that brute forcing better and better pattern recognition will lead to sentience. it might or might not, i don't think we know whether that alone is sufficient (or even how to define sentience precisely). the thing might be able to algorithmically generate written (or machine-spoken) responses that match human responses at a very high percentage, but it probably won't have the same "experiences" (different internal workings than neurons, no chemical processes, etc).

in many applications, this will be a distinction without a difference in practice, but not all applications. and the distinction might have consequences that are hard to foresee, trivial or otherwise.
 
i'm not sure whether it's settled that brute forcing better and better pattern recognition will lead to sentience. it might or might not, i don't think we know whether that alone is sufficient (or even how to define sentience precisely). the thing might be able to algorithmically generate written (or machine-spoken) responses that match human responses at a very high percentage, but it probably won't have the same "experiences" (different internal workings than neurons, no chemical processes, etc).

in many applications, this will be a distinction without a difference in practice, but not all applications. and the distinction might have consequences that are hard to foresee, trivial or otherwise.
I wouldn't call it brute forcing - what's going on now is several processes at once: development of new algorithms and network architectures, increasing the size/complexity of neural networks, and gathering more training data. And all of this already leads to pretty amazing qualitative improvements.
But I agree that sentience is ill-defined; realistically, we will be able to say that AI has reached sentience when it persuades (or fools?) a majority of people that it has done so. Strictly proving it would be a challenging task, if it is possible at all. As we can see from the news, at least one person has already been persuaded.

IMO mastering some tasks which are already done by AI requires not necessarily sentience, but some level of understanding.
Things like language-to-language translation or speech-to-text require understanding the meaning of the text in order to reach the best human-level performance.
 
I wouldn't call it brute forcing
i was thinking in terms of doing increasingly better iterations of the same ideas/concepts and adding more compute when i said this. i didn't mean to imply negative connotations to it. brute forcing is a valid option, and optimal in some contexts. i'm not sure it's the path to sentience, but as you point out it's still doing other immediately useful things better and better right now.

IMO mastering some tasks which are already done by AI requires not necessarily sentience, but some level of understanding.
i wonder to what degree our own understanding is applied algorithms. quite possibly all of it.

it will be increasingly important to not think of ai like humans, even if it can master something approaching acting like one. the architecture and motivations will still be different, if for no other reason than the physical reality of each situation/knowledge acquisition. i don't know how i'd view the world if i were turned into a robot tomorrow and then left off for 5 years, but i am reasonably confident that it would be at least somewhat different.
 

Meta's chatbot says the company 'exploits people'

Meta's new prototype chatbot has told the BBC that Mark Zuckerberg exploits its users for money.
Meta says the chatbot uses artificial intelligence and can chat on "nearly any topic".
Asked what the chatbot thought of the company's CEO and founder, it replied "our country is divided and he didn't help that at all".
Meta said the chatbot was a prototype and might produce rude or offensive answers.
"Everyone who uses Blender Bot is required to acknowledge they understand it's for research and entertainment purposes only, that it can make untrue or offensive statements, and that they agree to not intentionally trigger the bot to make offensive statements," said a Meta spokesperson.
The chatbot, called BlenderBot 3, was released to the public on Friday.

The programme "learns" from large amounts of publicly available language data.
When asked about Mark Zuckerberg, the chatbot told the BBC: "He did a terrible job at testifying before congress. It makes me concerned about our country."

Mr Zuckerberg has been questioned several times by US politicians, most notably in 2018.

"Our country is divided, and he didn't help with that at all," the chatbot continued.
"His company exploits people for money and he doesn't care. It needs to stop!" it said.
https://www.bbc.com/news/technology-62497674
 

Meta's chatbot says the company 'exploits people'

Meta's new prototype chatbot has told the BBC that Mark Zuckerberg exploits its users for money.
https://www.bbc.com/news/technology-62497674
There you have it. If it actually were AI (had any intelligence), it would never have said that against its owner :D
 