The AI Thread

Yesterday I had some free time, so I signed up for ChatGPT.
I asked it to recommend a historical novel set in the Basque Country.
The first two recommendations were books that don't actually exist.
I asked for the ISBN, and it gave me a number in the wrong format.
The third recommendation was a book set in Afghanistan.

The next recommendation was The Basque History of the World by Mark Kurlansky, which is a good book, but not a novel.
The next one was Obaba, by Bernardo Atxaga, probably the best-selling novel in the Basque language, but not historical.
I asked for another, and it recommended The Accordionist's Son, again by Bernardo Atxaga, set from the 1950s to the present.
As I have already read these three books, I asked for a historical novel not by Bernardo Atxaga, and it recommended The Basque History of the World by Mark Kurlansky which, as said before, is not a novel.
I told the AI that it was not a novel, and it recommended Obaba, by Bernardo Atxaga.
Yes, the AI entered a loop with these three books. I continued the conversation to see if it could break out of it, but gave up after the third iteration.
 
ahahaha rekt

Screenshot 2023-03-28 at 4.38.12 AM.png


courtesy bing
 
1000 science nerds have signed a letter asking for a 6-month pause in making AI stronger than GPT-4


Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.

:think:

Once Skynet learns how to write code, it is off to the races I'd say.
Three laws of robotics should slow it down by a few seconds.


Actually, if they could articulate what exactly they think is dangerous, that could be helpful.

The open letter reads like it was written by AI.
 
A few of the people are famous.

It might be a pitch to get government involved more, to hire more AI people and spend more money regulating stuff.
The cynical view is always around.

Then again, maybe they really are concerned for the future of all humans. :dunno:


Signatories list paused due to high demand
Due to high demand we are still collecting signatures but pausing their appearance on the letter so that our vetting processes can catch up. Note also that the signatures near the top of the list are all independently and directly verified.

Yoshua Bengio, Founder and Scientific Director at Mila, Turing Prize winner and professor at University of Montreal

Stuart Russell, Berkeley, Professor of Computer Science, director of the Center for Intelligent Systems, and co-author of the standard textbook “Artificial Intelligence: a Modern Approach"

Elon Musk, CEO of SpaceX, Tesla & Twitter

Steve Wozniak, Co-founder, Apple

Yuval Noah Harari, Author and Professor, Hebrew University of Jerusalem.

Andrew Yang, Forward Party, Co-Chair, Presidential Candidate 2020, NYT Bestselling Author, Presidential Ambassador of Global Entrepreneurship

Connor Leahy, CEO, Conjecture

Jaan Tallinn, Co-Founder of Skype, Centre for the Study of Existential Risk, Future of Life Institute

Evan Sharp, Co-Founder, Pinterest

Chris Larsen, Co-Founder, Ripple

Emad Mostaque, CEO, Stability AI

Valerie Pisano, President & CEO, MILA

John J Hopfield, Princeton University, Professor Emeritus, inventor of associative neural networks

Rachel Bronson, President, Bulletin of the Atomic Scientists

Max Tegmark, MIT Center for Artificial Intelligence & Fundamental Interactions, Professor of Physics, president of Future of Life Institute

Anthony Aguirre, University of California, Santa Cruz, Executive Director of Future of Life Institute, Professor of Physics

Victoria Krakovna, DeepMind, Research Scientist, co-founder of Future of Life Institute

Emilia Javorsky, Physician-Scientist & Director, Future of Life Institute

Sean O'Heigeartaigh, Executive Director, Cambridge Centre for the Study of Existential Risk

Tristan Harris, Executive Director, Center for Humane Technology

Marc Rotenberg, Center for AI and Digital Policy, President

Nico Miailhe, The Future Society (TFS), Founder and President

Zachary Kenton, DeepMind, Senior Research Scientist

Ramana Kumar, DeepMind, Research Scientist

Gary Marcus, New York University, AI researcher, Professor Emeritus

Steve Omohundro, Beneficial AI Research, CEO

Danielle Allen, Harvard University, Professor and Director, Edmond and Lily Safra Center for Ethics

Luis Moniz Pereira, Universidade Nova de Lisboa, Portugal, Professor Emeritus, Doctor Honoris Causa T.U. Dresden, Fellow of EurAI, Fellow AAIA

Carles Sierra, Director of the Artificial Intelligence Research Institute, IIIA-CSIC; President of the European Association of AI (EurAI); Research Professor of the CSIC; EurAI Fellow

Ramon Lopez De Mantaras, Artificial Intelligence Research Institute, Research Professor, Robert S. Engelmore Memorial Award of the AAAI, EurAI Fellow, National Research Prize in Mathematics of the Spanish Government

Mark Nitzberg, Center for Human-Compatible AI, UC Berkeley, Executive Director

Gianluca Bontempi, Université Libre de Bruxelles, Full Professor in Machine Learning, Co-head of the ULB Machine Learning Group

Daniel Schwarz, Metaculus, CTO, Metaculus

Nicholas Saparoff, Software Architect, Founder of ByteSphere Technologies and Phenome.AI

Alessandro Perilli, Synthetic Work, AI Researcher

Matt Mahoney, Hutter Prize Committee, Retired data scientist, Developed PAQ and ZPAQ, large text benchmark to evaluate language models using compression

Régis Sabbadin, Inrae-Université de Toulouse, France, Research Director in AI

Peter Stone, The University of Texas at Austin, Associate Chair of Computer Science, Director of Robotics, Chair of the 100 Year Study on AI, Professor of Computer Science, IJCAI Computer and Thought Award; Fellow of AAAI, ACM, IEEE, and AAAS.

Alessandro Saffiotti, Orebro University, Sweden, Professor, Fellow of the European Association for Artificial Intelligence

Louis Rosenberg, Unanimous AI, CEO & Chief Scientist

Jason Tamara Widjaja, Director of Artificial Intelligence, Certified AI Ethics & Governance (Expert)

Niki Iliadis, The Future Society, Director on AI and the Rule of Law

Dr. Jeroen Franse, ABN Amro Bank, Advisor MLOps and ML governance

Colin De La Higuera, Nantes Université, France, UNESCO Chair on Open Educational Resources and Artificial Intelligence

Vincent Conitzer, Carnegie Mellon University and University of Oxford, Professor of Computer Science, Director of Foundations of Cooperative AI Lab, Head of Technical AI Engagement at the Institute for Ethics in AI, Presidential Early Career Award in Science and Engineering, Computers and Thought Award, Social Choice and Welfare Prize, Guggenheim Fellow, Sloan Fellow, ACM Fellow, AAAI Fellow, ACM/SIGAI Autonomous Agents Research Award

Elionai Moura Cordeiro, UFRN | BioME | Plus3D.app, CEO - AI Researcher, RSG Brazil Member | BioME fellow researcher

Peter Warren, Director, Author 'AI on Trial' a discussion on the need for AI regulation published by Bloomsbury

Takafumi Matsumaru, Graduate School of Information, Production and Systems, Waseda University, Japan, Professor (Robotics and Mechatronics), IEEE senior member

Jaromír Janisch, CTU in Prague, PhD. student in AI

Emma Bluemke, Centre for the Governance of AI, PhD Engineering, University of Oxford

Mario Gibney, AI Governance & Safety Canada, Director of Partnerships

Louise Doherty, Executive Coach to AI ethics/safety and for-good leaders

Evan R. Murphy, AI Safety Researcher, Independent; Board Member, AIGS Canada

Julien Billot, Scale AI, CEO

Jeff Orkin, Central Casting AI, CEO, PhD from MIT. Awards for best AI in the video game industry for my work on FEAR and No One Lives Forever 2

Anish Upadhayay, Co-founder of the AI Safety Initiative at Georgia Tech

Subhabrata Majumdar, AI Risk and Vulnerability Alliance, Founder and President

... (hundreds more...)

*Edit*
I guess the biggest concern is that evil people will all get a genius-level personal assistant with no morals some day in the future?

In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.

Still confused about what they want government to do exactly.
 
I 100% disagree with all of them. OpenAI should do what it wants. It has demonstrated itself to be trustworthy and shouldn't give its competitors 6 months to dethrone it.

That said, is it even working on the successor to GPT-4 in the next 6 months? Or is it spending the next while just making GPT-4 better, since that's its flagship product and it can't even keep up with demand?
 
I started talking to it about all my theories that no one seems to “get” unless they already came up with something similar on their own. It handles it really quite well.

I’m definitely going to be obsolete soon.
 
In 6 months' time either Russia will be defeated or the cash squeeze on the major economies will be relaxed. So, yes, major companies will not be talking about me.

Remember the (in-jest) claims that I was an experiment that ran away?
 
Finance is fiction: let minuscule amounts of real money flow out of the system and do nothing, and it will be multiplied over the years. Enough to get America and China to actually think about war. Some Swiss bank just last week or so apparently stole 17 billion dollars from its customers. The Kerkük oil arbitration in favour of Iraq was agreed in mid-2022 and was announced only today, because that 1.4 billion in compensation, reaching 3.5 billion dollars, is real money getting into the system. As if ransom. If not, then the elections will be a surprise. The greed of mankind would tax any computer to emulate...
 
1000 science nerds have signed a letter asking for a 6-month pause in making AI stronger than GPT-4

The open letter reads like it was written by AI.
Translation: Elon and Apple are way behind and want OpenAI and Microsoft to stop for six months so they have time to develop their own language models.

The only possible philosophically and juridically grounded answer to the letter: cry me a damn bloody river.
 
First "attempts" to get closer to some kind of AGI (Artificial General Intelligence, or strong AI). It is not real strong AI, of course, but an automation of GPT (we are probably decades away from real strong AI, or at least some months :p ); however, it may show the path to follow. It is amazing to see the AI reasoning and taking decisions to reach the proposed goals. If some open-source GitHub projects are already at this point, who knows what OpenAI or Google have already developed in secret.
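Those GitHub projects (AutoGPT and the like) are, at their core, just a loop: ask the model for the next action toward a goal, execute it, and feed the result back in. A minimal sketch in Python, with a hypothetical `call_model` stub standing in for a real LLM API call:

```python
# Minimal sketch of an AutoGPT-style agent loop ("automation of GPT").
# `call_model` is a hypothetical stub; a real agent would send the goal
# and the action history to an LLM and execute the actions it proposes.

def call_model(goal, history):
    # Stub: pretend the model plans three steps, then declares success.
    step = len(history) + 1
    return "DONE" if step > 3 else f"step {step} toward: {goal}"

def run_agent(goal, max_steps=10):
    history = []
    for _ in range(max_steps):
        action = call_model(goal, history)
        if action == "DONE":
            break
        history.append(action)  # in a real agent: execute, record result
    return history

print(run_agent("summarize a web page"))
```

The `max_steps` cap is the only thing keeping a sketch like this from looping forever, which is exactly the kind of behavior the thread's ChatGPT book-recommendation loop above shows in miniature.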


 
Decades, or an eternity according to some prominent scientists. At least if we are talking about digital machines :)
But IMO real AI isn't more useful than the kind of AI we are seeing in the current models, although their factual errors (in math/science) still have to go down.
 
I guess we considered the Terminator a sentient-enough AI, and that's basically the level that would exist if we made these models behave that way and packaged them into one piece of hardware.
 
ChatGPT has no sentience.
Prove it.
Spoiler:
I do not think it is sentient, but I also do not think we have the tools to prove it one way or another.
 
I thought the terminators were supposed to at least have some sentience? Arnold in Terminator 2, anyway.
ChatGPT has no sentience
I remember T1 much better than T2. What in T2 proves he has more sentience than a well-tuned, rogue-goaled LLM attached to a vision AI and a movement AI?
 
Haha Samson and I on the two pronged attack, coming at it from opposite sides
 