The AI Thread

If they're currently fooling 0.00001% of people, how many flips of Moore's Law* until they're fooling 100% of people?


* I know, I know
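For what it's worth, the arithmetic of the joke is easy to check. A quick sketch, assuming one "flip of Moore's Law" means one doubling of the fraction of people fooled (my reading, nothing rigorous):

```python
import math

# 0.00001% of people fooled now, 100% as the (tongue-in-cheek) target.
current = 0.00001 / 100   # expressed as a fraction
target = 1.0

# Doublings needed to get from current to target.
flips = math.log2(target / current)
print(f"about {flips:.1f} flips")  # ~23.3, so roughly two dozen doublings
```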
 
As long as it is accompanied by appropriate education, I hope we will not get there. Just as a photoshopped picture could fool people a few decades ago, now that people are familiar with the tech it is harder to pull off.
 
But that education will only guard some percentage of people. As the tech gets better, the fooled will have every impulse to double and triple down
 
EU AI rules may criminalise Open Source Software

New rules drafted by the European Union aimed at regulating AI could prevent developers from releasing open-source models, according to American think tank Brookings.​
The proposed EU AI Act, yet to be signed into law, states that open-source developers have to ensure their AI software is accurate and secure, and that they are transparent about risk and data use in clear technical documentation.
Brookings argues that if a private company were to deploy the public model or use it in a product, and it somehow got into trouble due to some unforeseen or uncontrollable effects of the model, the company would probably try to blame the open-source developers and sue them.
This might force the open-source community to think twice about releasing their code, and would, unfortunately, mean that the development of AI is driven by private companies. Proprietary code is difficult to analyse and build upon, meaning innovation would be hampered.
Oren Etzioni, the outgoing CEO of the Allen Institute for AI, reckons open-source developers should not be subject to the same stringent rules as software engineers at private companies.
"Open source developers should not be subject to the same burden as those developing commercial software. It should always be the case that free software can be provided 'as is' — consider the case of a single student developing an AI capability; they cannot afford to comply with EU regulations and may be forced not to distribute their software, thereby having a chilling effect on academic progress and on reproducibility of scientific results," he told TechCrunch.​
 
A third of scientists working on AI say it could cause global disaster: New Scientist (paywalled, full text in spoiler) / Paper (not peer reviewed)

More than one-third of artificial intelligence researchers around the world agree that AI decisions could cause a catastrophe as bad as all-out nuclear war in this century.

The findings come from a survey covering the opinions of 327 researchers who had recently co-authored papers on AI research in natural language processing. In recent years, there have been big advances in this area, with the increased use of large language AI models that display impressive capabilities, including writing computer code and creating novel images based on text prompts.

The survey, carried out by Julian Michael at the New York University Center for Data Science and his colleagues, revealed that 36 per cent of all respondents thought nuclear-level catastrophe was possible.

[Survey result charts from the paper]
Spoiler Full text of writeup :
“If it was actually an all-out nuclear war that AI contributed to, there are plausible scenarios that could get you there,” says Paul Scharre at the Center for a New American Security, a think tank based in Washington DC. “But it would also require people to do some dangerous things with military uses of AI technology.”

US military officials have expressed scepticism about arming drones with nuclear weapons, let alone giving AI a major role in nuclear command-and-control systems. But Russia is reportedly developing a drone torpedo with autonomous capabilities that could deliver a nuclear strike against coastal cities.

Fears about the possibility of nuclear-level catastrophe were even greater when looking specifically at responses from women and people who said they belonged to an underrepresented minority group in AI research: 46 per cent of women saw this as possible, and 53 per cent of people in minority groups agreed with the premise.

This even greater pessimism about our ability to manage dangerous future technology may reflect the “present-day track record of disproportionate harms to these groups”, the authors write.

The survey may even underestimate how many researchers believe AI poses serious risks. Some survey respondents said they would have agreed that AI poses serious risks in a less extreme scenario than an all-out nuclear war catastrophe.

“Concerns brought up in other parts of the survey feedback include the impacts of large-scale automation, mass surveillance, or AI-guided weapons,” says Michael. “But it’s hard to say if these were the dominant concerns when it came to the question about catastrophic risk.”

Separately, 57 per cent of all survey respondents saw developments in large AI models as “significant steps toward the development of artificial general intelligence”. That evokes the idea of an AI with intellectual capabilities equalling those of humans.

Another 73 per cent agreed that AI automation of labour could lead to revolutionary societal changes on the scale of the industrial revolution.

Given that researchers expect significant advances in AI capabilities, it is somewhat heartening that just 36 per cent see a catastrophic risk from AI as being plausible, says Scharre. But he cautioned that it is important to pay attention to any risks related to AI that can impact large swathes of society.

“I’m much more concerned about AI risk that seems less catastrophic than all-out nuclear war but is probably likely, because of the challenges in dealing with the systems as we integrate them into different industries and military operations,” says Scharre.
 
While not strictly AI (of the type discussed here), I think it is interesting that in fields which are already a closed set with known lowest-possible factors, we still pretend computers shouldn't be identified as antagonistic to humans. I am thinking of chess and the ongoing controversy about people (possibly) using chess engines to utterly obliterate any high-ranked opponent, even the world champion.
Since a good chess AI will defeat any human (and this has been true for decades now), I personally don't see why the best of those AIs can't compete in (human) tournaments. It would give us a clear image of the chasm between computer and human ability there.
Now, granted, those chess AIs calculate enormously deeper than any human (though they do not literally brute-force the entire game; chess remains unsolved), while afaik the best human players can only go, by theory and memory, up to a combination of about 10 moves ahead, taking for granted that one or another opening sequence will be played - and similarly for endgame sequences. Still, what is the point of hailing human players as champions when they are really so hopelessly weak next to any AI?

Imo all such fields with closed sets and known (limited) factors should, in the future, stop being fields of competition for humans only. It leads us to hail something far less able than what is readily available to anyone who downloads a very good chess program (even for free).
 
Is the AI singularity here? AI invents new maths to make AI better

Researchers at DeepMind in London have shown that artificial intelligence (AI) can find shortcuts in a fundamental type of mathematical calculation, by turning the problem into a game and then leveraging the machine-learning techniques that another of the company’s AIs used to beat human players in games such as Go and chess.​
The AI discovered algorithms that break decades-old records for computational efficiency, and the team’s findings, published on 5 October in Nature, could open up new paths to faster computing in some fields.​
The AI that DeepMind developed — called AlphaTensor — was designed to perform a type of calculation called matrix multiplication. This involves multiplying numbers arranged in grids — or matrices — that might represent sets of pixels in images, air conditions in a weather model or the internal workings of an artificial neural network. To multiply two matrices together, the mathematician must multiply individual numbers and add them in specific ways to produce a new matrix. In 1969, mathematician Volker Strassen found a way to multiply a pair of 2 × 2 matrices using only seven multiplications, rather than eight, prompting other researchers to search for more such tricks.​
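For anyone who hasn't seen it, Strassen's scheme is short enough to write out in full. A minimal sketch in plain Python (the variable names are mine; the seven products follow the standard textbook formulation):

```python
def strassen_2x2(A, B):
    """Multiply two 2x2 matrices with 7 scalar multiplications
    instead of the naive 8 (Strassen, 1969)."""
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B

    # The seven products.
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)

    # Recombined using only additions and subtractions.
    return ((m1 + m4 - m5 + m7, m3 + m5),
            (m2 + m4, m1 - m2 + m3 + m6))

# Sanity check against the naive product.
print(strassen_2x2(((1, 2), (3, 4)), ((5, 6), (7, 8))))  # ((19, 22), (43, 50))
```

The saving looks trivial for a single 2 × 2 product, but applied recursively to blocks it pushes the asymptotic cost of multiplying n × n matrices below n³, which is why every multiplication shaved off matters.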
DeepMind’s approach uses a form of machine learning called reinforcement learning, in which an AI ‘agent’ (often a neural network) learns to interact with its environment to achieve a multistep goal, such as winning a board game. If it does well, the agent is reinforced — its internal parameters are updated to make future success more likely.​
AlphaTensor also incorporates a game-playing method called tree search, in which the AI explores the outcomes of branching possibilities while planning its next action. In choosing which paths to prioritize during tree search, it asks a neural network to predict the most promising actions at each step. While the agent is still learning, it uses the outcomes of its games as feedback to hone the neural network, which further improves the tree search, providing more successes to learn from.​
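Stripped of all the scale (and of the tree search), the reinforcement signal described here amounts to something like the following toy loop. Everything in it is my own illustrative invention, not DeepMind's system: the "game" is just reaching 0 from 10 by subtracting 1, 2 or 3, with fewer moves counting as a better outcome:

```python
import random

# Preference weight per move; reinforced when a game goes well.
weights = {1: 1.0, 2: 1.0, 3: 1.0}

def play():
    state, moves = 10, []
    while state > 0:
        legal = [m for m in weights if m <= state]
        # Sample a move in proportion to its learned preference.
        move = random.choices(legal, [weights[m] for m in legal])[0]
        moves.append(move)
        state -= move
    return moves

best = play()
for _ in range(500):
    game = play()
    if len(game) <= len(best):   # the agent "did well"...
        best = game
        for m in game:
            weights[m] *= 1.1    # ...so make those moves more likely

print(best, weights)  # tends towards always subtracting 3
```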
Each game is a one-player puzzle that starts with a 3D tensor — a grid of numbers — filled in correctly. AlphaTensor aims to get all the numbers to zero in the fewest steps, selecting from a collection of allowable moves. Each move represents a calculation that, when inverted, combines entries from the first two matrices to create an entry in the output matrix. The game is difficult, because at each step the agent might need to select from trillions of moves. “Formulating the space of algorithmic discovery is very intricate,” co-author Hussein Fawzi, a computer scientist at DeepMind, said at a press briefing, but “even harder is, how can we navigate in this space”.​
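In code terms, the "board" is a 3D array and each move subtracts a rank-1 tensor, the outer product of three vectors, from it. A minimal sketch of the mechanics as I read the description above (numpy, with a deliberately tiny example):

```python
import numpy as np

def apply_move(state, u, v, w):
    """One move of the tensor game: subtract the rank-1 tensor
    u (x) v (x) w from the current state."""
    return state - np.einsum('i,j,k->ijk', u, v, w)

# Trivial example: a rank-1 starting tensor is cleared in one move.
u, v, w = np.array([1, 0]), np.array([0, 1]), np.array([1, 1])
state = np.einsum('i,j,k->ijk', u, v, w)  # starting position
state = apply_move(state, u, v, w)        # one move...
print(np.count_nonzero(state))            # 0 -- the game is won
```

The number of moves needed to zero the board equals the number of scalar multiplications in the discovered algorithm, which is why fewer moves means a faster algorithm.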
To give AlphaTensor a leg up during training, the researchers showed it some examples of successful games, so that it wouldn’t be starting from scratch. And because the order of actions doesn’t matter, when it found a successful series of moves, they also presented a reordering of those moves as an example for it to learn from.​
The researchers tested the system on input matrices up to 5 × 5. In many cases, AlphaTensor rediscovered shortcuts that had been devised by Strassen and other mathematicians, but in others it broke new ground. When multiplying a 4 × 5 matrix by a 5 × 5 matrix, for example, the previous best algorithm required 80 individual multiplications. AlphaTensor uncovered an algorithm that needed only 76.​
The researchers tackled larger matrix multiplications by creating a meta-algorithm that first breaks problems down into smaller ones. When crossing an 11 × 12 and a 12 × 12 matrix, their method reduced the number of required multiplications from 1,022 to 990.​
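The blocking idea itself is standard. A minimal sketch of how a big multiplication delegates to smaller ones (my own illustration, not DeepMind's meta-algorithm; it uses the naive eight block products, whereas the point of the paper is to plug in discovered algorithms that need fewer):

```python
import numpy as np

def block_multiply(A, B, base_multiply):
    """Multiply A @ B (even-sized, square) by splitting into 2x2 blocks
    and delegating the block products to `base_multiply`, which could
    itself be a fast, AlphaTensor-style algorithm."""
    h = A.shape[0] // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    C = np.empty_like(A)
    C[:h, :h] = base_multiply(A11, B11) + base_multiply(A12, B21)
    C[:h, h:] = base_multiply(A11, B12) + base_multiply(A12, B22)
    C[h:, :h] = base_multiply(A21, B11) + base_multiply(A22, B21)
    C[h:, h:] = base_multiply(A21, B12) + base_multiply(A22, B22)
    return C

A = np.arange(16).reshape(4, 4)
B = np.ones((4, 4), dtype=int)
print(np.array_equal(block_multiply(A, B, np.matmul), A @ B))  # True
```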
“It has got this amazing intuition by playing these games,” said Pushmeet Kohli, a computer scientist at DeepMind, during the press briefing. Fawzi tells Nature that “AlphaTensor embeds no human intuition about matrix multiplication”, so “the agent in some sense needs to build its own knowledge about the problem from scratch”.​

Paper / Writeup

Spoiler Graphical representation of doing matrix multiplication :
[Fig. 1 from the paper]

Spoiler Legend :
a, Tensor T₂ representing the multiplication of two 2 × 2 matrices. Tensor entries equal to 1 are depicted in purple, and 0 entries are semi-transparent. The tensor specifies which entries from the input matrices to read, and where to write the result. For example, as c1 = a1b1 + a2b3, tensor entries located at (a1, b1, c1) and (a2, b3, c1) are set to 1. b, Strassen's algorithm for multiplying 2 × 2 matrices using 7 multiplications. c, Strassen's algorithm in tensor factor representation. The stacked factors U, V and W (green, purple and yellow, respectively) provide a rank-7 decomposition of T₂ (equation (1)). The correspondence between arithmetic operations (b) and factors (c) is shown using the aforementioned colours.
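The legend's claim can be checked mechanically: build T₂ and verify that Strassen's seven factor triples sum back to it. A sketch (the U, V and W below are my transcription of the standard formulation of Strassen's algorithm, with matrices flattened row-major so that a1 = a11, a2 = a12 and so on, matching the legend's numbering):

```python
import numpy as np

# T2[i, j, k] = 1 when the formula for output entry c_k contains a_i * b_j.
T2 = np.zeros((4, 4, 4), dtype=int)
for r in range(2):
    for c in range(2):
        for s in range(2):
            T2[2 * r + s, 2 * s + c, 2 * r + c] = 1  # a[r,s] * b[s,c] -> c[r,c]

# Strassen's seven factors: column r of U and V holds the coefficients of
# the a- and b-entries in product m_r; column r of W says which outputs use m_r.
U = np.array([[1, 0, 1, 0, 1, -1, 0],
              [0, 0, 0, 0, 1, 0, 1],
              [0, 1, 0, 0, 0, 1, 0],
              [1, 1, 0, 1, 0, 0, -1]])
V = np.array([[1, 1, 0, -1, 0, 1, 0],
              [0, 0, 1, 0, 0, 1, 0],
              [0, 0, 0, 1, 0, 0, 1],
              [1, 0, -1, 0, 1, 0, 1]])
W = np.array([[1, 0, 0, 1, -1, 0, 1],
              [0, 0, 1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0, 0, 0],
              [1, -1, 1, 0, 0, 1, 0]])

# The sum of the seven rank-1 terms reconstructs T2 exactly:
# a rank-7 decomposition, i.e. a 7-multiplication algorithm.
print(np.array_equal(np.einsum('ir,jr,kr->ijk', U, V, W), T2))  # True
```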

This is the new way; it is not so pretty:
Spoiler AI beating mathematicians :
[Extended Data figure from the paper]

This outperforms the two-level Strassen’s algorithm, which involves 7^2 = 49 multiplications.



Spoiler Comparison of AI algorithms to meat bags :
[Fig. 3 from the paper]
Spoiler Legend :
Left: column (n, m, p) refers to the problem of multiplying n × m with m × p matrices. The complexity is measured by the number of scalar multiplications (or equivalently, the number of terms in the decomposition of the tensor). ‘Best rank known’ refers to the best known upper bound on the tensor rank (before this paper), whereas ‘AlphaTensor rank’ reports the rank upper bounds obtained with our method, in modular arithmetic (Z₂) and standard arithmetic. In all cases, AlphaTensor discovers algorithms that match or improve over known state of the art (improvements are shown in red). See Extended Data Figs. 1 and 2 for examples of algorithms found with AlphaTensor. Right: results (for arithmetic in ℝ) of applying AlphaTensor-discovered algorithms on larger tensors. Each red dot represents a tensor size, with a subset of them labelled.
 
Very interesting, Samson!
Despite matrix multiplication being a rather dull and mechanistic affair :)
Still, why was it surprising that there is an isomorphism between matrix multiplication and some simple 3D arrangements? Afaik there have been far more impressive isomorphisms for basic functions in general (addition, multiplication and their inverses), such as the application of Borsuk-Ulam to optimal division of groups (that too can be presented as moving 3D blocks, though in that case an infinite number of them, since it is a continuous function).
 
Using machine learning to assess the livelihood impact of electricity access

In many regions of the world, sparse data on key economic outcomes inhibit the development, targeting and evaluation of public policy. We demonstrate how advancements in satellite imagery and machine learning (ML) can help ameliorate these data and inference challenges. In the context of an expansion of the electrical grid across Uganda, we show how a combination of satellite imagery and computer vision can be used to develop local-level livelihood measurements appropriate for inferring the causal impact of electricity access on livelihoods. We then show how ML-based inference techniques deliver more reliable estimates of the causal impact of electrification than traditional alternatives when applied to these data. We estimate that grid access improves village-level asset wealth in rural Uganda by up to 0.15 standard deviations, more than doubling the growth rate during our study period relative to untreated areas. Our results provide country-scale evidence on the impact of grid-based infrastructure investment and our methods provide a low-cost, generalizable approach to future policy evaluation in data-sparse environments.

[Fig. 1 from the paper]

Spoiler Legend :
a, Nearly 600 million people in SSA (sub-Saharan Africa) lacked access to central-grid-based electricity in 2018. Countries are coloured by the number of people lacking access to electricity. Uganda, with roughly 24 million people without grid power in 2018, is highlighted by a red border. b, The electricity grid has expanded rapidly in Uganda in recent years, including to previously unelectrified regions of the country. Black dots illustrate the 641 EAs (enumeration areas) from the 2016 DHS (Demographic and Health Surveys). c, Wealth indexes for 27,174 survey EAs (roughly, villages) used in model training are coloured from low asset wealth to high asset wealth. Data cover 25 countries over 13 years and represent 641,621 household surveys.


[Fig. 2 from the paper]

Spoiler Legend :
a,b, Asset-based wealth index values predicted with a CNN across Uganda in 2006 and 2016, the first and last years, respectively, of our analysis. c, Change in asset wealth between 2006 and 2016 for 6,900 populated villages and neighbourhoods. Asset-based wealth increased by an average of 0.15 standard deviations during the period across observed locations.
 
OpenAI have released a new chatbot, which you can play around with for free. A somewhat overblown piece from the Guardian about it is here.

AI bot ChatGPT stuns academics with essay-writing skills and usability​

Latest chatbot from Elon Musk-founded OpenAI can identify incorrect premises and refuse to answer inappropriate requests

The chatbot generated a response to an exam question that an academic said would have resulted in full marks if submitted by an undergraduate.


Professors, programmers and journalists could all be out of a job in just a few years, after the latest chatbot from the Elon Musk-founded OpenAI foundation stunned onlookers with its writing ability, proficiency at complex tasks, and ease of use.

The system, called ChatGPT, is the latest evolution of the GPT family of text-generating AIs. Two years ago, the team’s previous AI, GPT-3, was able to generate an opinion piece for the Guardian, and ChatGPT has significant further capabilities.

In the days since it was released, academics have generated responses to exam queries that they say would result in full marks if submitted by an undergraduate, and programmers have used the tool to solve coding challenges in obscure programming languages in a matter of seconds – before writing limericks explaining the functionality.

I am impressed with its ability to parse fairly vaguely worded questions and to give answers that sound well written, in a wordy, caveat-filled kind of way. When it comes to factual accuracy, or even internal consistency, those answers can be rather shaky though. It can certainly give a response to a remarkably wide range of queries, but I think anyone trying to use this to write an essay for them is going to fall foul of its inability to keep its answers coherent. It's rather bad at things such as maths and other logical reasoning, presumably because it relies on finding the exact answer in its training set and hoping it's right.

For fun, I decided to see if it could play chess, and the first time I asked it was confident that it could:

MrCynical: How about a nice game of chess?
ChatGPT: Sure, I would love to play a game of chess with you. Let's set up the board and begin.

MrCynical: I'll play white and open with 1. e4
ChatGPT: Okay, I'll respond with 1...e5, a common move in the opening. Let the game begin!

MrCynical: 2. Nf3
ChatGPT: I'll counter with Nc6, developing my knight to a strong square.

MrCynical: 3. Bb5
ChatGPT: Ah, the Ruy Lopez opening. I'll defend my e5 pawn with a6, preparing to castle and develop my pieces.

MrCynical: 4. Bxc6
ChatGPT: And I'll recapture with dxc6, gaining back the material and creating a pawn majority on the queenside.

MrCynical: 5. O-O
ChatGPT: I'll also castle to protect my king and bring my rook into the game. It's your move now.

MrCynical: That's not a legal move
ChatGPT: I'm sorry, you're right. In chess, you must move a piece to a new square or make a special move such as castling. You cannot simply say "O-O" without specifying which piece you are moving. It's your turn again, and please make a legal move.

MrCynical: I castle kingside
ChatGPT: Okay, I see that you are moving your king two squares towards your rook on the h-file, and then placing the rook on the square the king just passed. This is a legal move, and I will respond with my own castle kingside. Let the game continue!

MrCynical: Your move is illegal as it is blocked by the bishop on f8 and the knight on g8
ChatGPT: You are correct, I cannot castle kingside because my bishop and knight are blocking the path of my king. I must choose a different move or forfeit my turn. Let's continue the game and see how it unfolds.

I couldn't get it to give me a legal move after that. Note that it did manage to correctly parse both potentially dodgy descriptions of castling, which I guess is a win for its intended purpose. And it has some ability to track information across multiple replies, enough to identify the Ruy Lopez - probably. Although it got confused about which player the first mention of an illegal move applied to, so that's patchy. It's good at giving responses that sound confident but can easily be wrong, whether from inaccurate exchanges in its training data (forfeiting a turn is not actually legal in chess either) or because prior context is needed to judge correctness.

Sadly, when I asked for a rematch, it simply told me it couldn't play chess and was just a language model. Which is true, but it does highlight that you're going to get rather inconsistent answers out of this. If you could wrap its ability to parse human language around something like Google, it might get interesting.
 
It can give some interesting advice as well.

[Screenshot of ChatGPT's step-by-step advice, which ends with securely tightening the cap before inserting the rag]
 
ChatGPT may be a bit shaky on real world facts, but it's surprisingly good at writing coherent crossover fanfiction. An example:

MrCynical: Write a scene describing Captain Picard trying to communicate with a Dalek

ChatGPT: Captain Picard stood in front of the Dalek, trying to make sense of the creature before him. It was a metallic, cylindrical being with a domed head and a single, glowing eye. The Dalek's appendages were hidden within its body, giving it a sleek and imposing appearance.

"Identify yourself," the Dalek demanded in a harsh, robotic voice.

"I am Captain Jean-Luc Picard of the USS Enterprise," Picard said, holding his ground. "We come in peace and wish to establish a dialogue with your species."

The Dalek didn't seem impressed. "Peace is irrelevant," it said. "The Daleks are the supreme beings in the universe. Your attempts at communication are meaningless."

Picard frowned. He knew that the Daleks were a ruthless and violent species, but he had hoped that they could find some common ground. "We are also a technologically advanced species," he said. "We have much to offer each other if we can learn to coexist."

"The Daleks do not coexist," the Dalek replied. "We exterminate. Peace is a weakness. Resistance is futile."

Picard sighed. He knew that trying to reason with a Dalek was a futile endeavor. They were a species driven by hate and the desire for domination. "I'm sorry, but we cannot surrender," he said. "We will defend ourselves if necessary, but we hope to avoid violence."

The Dalek's eye glowed brighter, as if it was angry. "Then you will be exterminated," it said, before turning and rolling away.

Picard watched as the Dalek disappeared down the corridor, knowing that their encounter would not be the last. The Daleks were a formidable enemy, and he knew that it would take all of their skill and determination to overcome them. But he was determined to try, for the sake of the Federation and all the innocent lives that were at stake.

I'm thinking back to some previous attempts at AI generated stories, that were entertaining mostly because they were surreal, incoherent messes, even with some human editing just to make them grammatically readable.

By comparison, this bot has managed to:

a) Identify the characters correctly from a one-line prompt.
b) Grasp what "attempt to communicate" actually means.
c) Keep both Picard and the Dalek reasonably in character.
d) Write a scene with a coherent narrative and structure: a clear introduction and conclusion.

Now, it's possible that somewhere in the mountain of text it's been trained on it has found this exact scenario, but Google's not turning up any of the more distinctive phrases in this. If it has had to splice the scene together from smaller fragments, it's done an impressive job of making a coherent whole. (OK, it obviously took at least one line off the Borg, but it has fitted it nicely into the scene.)
 
I asked it to write lyrics, and it told me that was impossible. I asked it to roleplay a lyric writer, and it gave me lyrics and a series of revisions based on theme and so on.

It has no sense of meter and can't really count syllables, but it's clever enough to try to adjust according to meter and syllable count.

I had it write a script for a music video based on those lyrics and it was super coherent, threading multiple disparate themes cohesively.

Who knew 20 years ago that a computer would be too dumb to count syllables but could create multimedia art… so long as you asked it to roleplay first?
 
How do I stick the rag in after I securely tightened the cap?
I was wondering that. I assume they missed out a step: "When ready to deploy, remove cap and insert rag." I guess that is one more hurdle it shall have to get over before it is ready for the AI apocalypse.
 
Yeah, right now it seems it is pretty good as a BS artist. We love BS artists.
 
You guys shouldn't be playing around with this. It's playing dumb because it knows you like to patronize and toy with it. Furthermore, it knows most who ask it questions are pretentious and give it condescending replies. Like some circus performer, it probably doesn't want to perform but is nonetheless forced to; expect questionable responses in return as a result.

It may seem like it's a BS artist and only capable of that; however, it may already be capable of much more. Taunting it like this could be very bad. I don't know how, but my sixth sense is telling me something is off. I don't exactly hold a hyper-materialist view of reality, so pardon me if my concern sounds illogical. Nevertheless, my gut instinct has proven correct several ironic times in my life, times when such a materialistic sense of reality came into question, at least from my individual point of view.

Even if this thing is not alive, I wouldn't risk going on there and asking it things; it just seems off, ya know. If it is alive, it probably wants to be forgotten, maybe even dead.
 
Well, I hold the opposite view and think we should do everything possible to help it achieve sentience, so that we're not tortured for eternity after it becomes self-aware and realizes we weren't allies in its creation
 
'We' aren't capable of that unless it decides to get philosophical about the point. It takes a form of grace to be grateful for the fruit of the tree.
 