The AI Thread

Anyone have a better idea of what this means in practice?
I think it means they will be able to continue to refine and evolve GPT-3 behind the curtain and then at some point sell services based on their proprietary developments.
 
DeepMind’s AI makes gigantic leap in solving protein structures
An artificial intelligence (AI) network developed by Google AI offshoot DeepMind has made a gargantuan leap in solving one of biology’s grandest challenges — determining a protein’s 3D shape from its amino-acid sequence.
DeepMind’s program, called AlphaFold, outperformed around 100 other teams in a biennial protein-structure prediction challenge called CASP, short for Critical Assessment of Structure Prediction.
The ability to accurately predict protein structures from their amino-acid sequence would be a huge boon to life sciences and medicine. It would vastly accelerate efforts to understand the building blocks of cells and enable quicker and more advanced drug discovery.
AlphaFold came top of the table at the last CASP — in 2018, the first year that London-based DeepMind participated. But, this year, the outfit’s deep-learning network was head-and-shoulders above other teams and, say scientists, performed so mind-bogglingly well that it could herald a revolution in biology.
I think this could be really big. Predicting the structure of proteins is a big step towards finding out how they work, what the effects of genetic differences within the population are, and designing drugs. We spend vast amounts of money on big kit to do this in real life; if we could do it in silico it would be a revolution.
Paper (paywalled) Code (Apache License, Version 2.0)
 
AI reading (and listening to) mandatory and voluntary disclosures by companies to make trading decisions
They are being gamed by executives. This is from the FT, which is paywalled, but incognito mode works for me. It is based on this paper, "How to Talk When a Machine Is Listening: Corporate Disclosure in the Age of AI".
When Man Group chief executive Luke Ellis discusses his investment company’s results with analysts he chooses his words carefully. He knows better than most that the machines are listening.
The crown jewel of Man is its $39bn hedge fund group AHL, whose algorithms scour huge data sets for profitable signals that feed into investment decisions.
One of the hottest areas in this field is “natural language processing”, a form of artificial intelligence where machines learn the intricacies of human speech. With NLP, quant hedge funds can systematically and instantaneously scrape central bank speeches, social media chatter and thousands of corporate earnings calls each quarter for clues.
As a result, Mr Ellis’s quant colleagues have coached him to avoid certain words and phrases that algorithms can be particularly sensitive to, and might trigger a quiver in Man’s stock price. He is much more careful about using the word “but”, for example.
“There’s always been a game of cat and mouse, in CEOs trying to be clever in their choice of words,” Mr Ellis says. “But the machines can pick up a verbal tic that a human might not even realise is a thing.”
This is a growing phenomenon. Machine downloads of quarterly and annual reports in the US — scraped by an algorithm rather than read by a human — have rocketed from about 360,000 in 2003 to 165m in 2016, according to a recent paper by the US National Bureau of Economic Research. That was equal to 78 per cent of all such downloads that year, up from 39 per cent in 2003.
The paper — How to Talk When a Machine Is Listening: Corporate Disclosure in the Age of AI — points out that companies are keen to show off their business in the best possible light. They have steadily made reports more machine-readable, for example by tweaking the formatting of tables, as a result of this evolving analysis.
“More and more companies realise that the target audience of their mandatory and voluntary disclosures no longer consists of just human analysts and investors,” authors Sean Cao, Wei Jiang, Baozhong Yang and Alan Zhang note. “A substantial amount of buying and selling of shares are triggered by recommendations made by robots and algorithms which process information with machine learning tools and natural language processing kits.”
However, in recent years the corporate adjustment to the reality of algorithmic traders has taken a big step further. The paper found that companies have since 2011 subtly tweaked the language of reports and how executives speak on conference calls, to avoid words that might trigger red flags for machines listening in.
Not coincidentally, 2011 was when Tim Loughran and Bill McDonald, two finance professors at the University of Notre Dame, first published a more detailed, finance-specific dictionary that has become popular as a training tool for NLP algorithms.
Since 2011, words deemed negative in the Loughran-McDonald dictionary have fallen markedly in usage in corporate reports, while words considered negative in the Harvard Psychosociological Dictionary — which remains popular among human readers — show no such trend.
Moreover, using vocal analysis software, the authors of the National Bureau of Economic Research paper found that some executives are even changing their tone of voice on conference calls, in addition to the words they use.
“Managers of firms with higher expected machine readership exhibit more positivity and excitement in their vocal tones, justifying the anecdotal evidence that managers increasingly seek professional coaching to improve their vocal performances along the quantifiable metrics,” the paper said.
Some NLP experts say some companies’ investor relations departments are even running multiple draft versions of releases through such algorithmic systems to see which scores the best.
“Access to NLP tools has become an arms race between investors and management teams. We see corporates increasingly wanting to have access to the same firepower that hedge funds have,” says Nick Mazing, director of research at Sentieo, a research platform. “We are not far from someone on a call reading 'we said au revoir to our profitability' versus 'we recorded a loss' because it reads better in some NLP model.”
However, Mr Mazing said that NLP-powered algorithms are also continuously adjusted to reflect the increasing obfuscation of corporate executives, so it ends up being a never-ending game of fruitless linguistic acrobatics.
“Trying to 'outsmart the algos' is ultimately futile: buyside users can immediately report sentence misclassifications back to the model so any specific effort to sound positive on negative news will not work for long,” Mr Mazing says.
Indeed, most sophisticated NLP systems do not rely on a static list of sensitive words and are designed to both identify problematic or promising combinations of words and teach themselves a chief executive’s idiosyncratic styles, Mr Ellis notes. For example, one CEO might routinely use the word “challenging” and its absence would be more telling, while one that never uses the word would be sending as powerful a signal by doing so.
Machines are still unable to pick up non-verbal cues, such as a physical twitch ahead of an answer, “but it’s only a matter of time” before they can do this as well, Mr Ellis says.
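A rough sketch of the kind of dictionary-based scoring the article describes, in Python. The handful of negative words below is just an illustrative stand-in for the full Loughran-McDonald list, and the call snippets are invented; the baseline comparison at the end gestures at Mr Ellis's point that deviation from a speaker's own habits is the more telling signal. Real systems are considerably more sophisticated than this.

```python
import re

# Illustrative stand-in for the Loughran-McDonald negative word list.
NEGATIVE = {"loss", "decline", "impairment", "litigation",
            "adverse", "restructuring", "challenging"}

def negative_tone(text: str) -> float:
    """Fraction of words in the text that appear on the negative list."""
    words = re.findall(r"[a-z']+", text.lower())
    return sum(1 for w in words if w in NEGATIVE) / max(len(words), 1)

# Two invented earnings-call snippets from the same (hypothetical) CEO.
calls = {
    "Q1": "Revenue grew, but litigation costs and an impairment drove a loss.",
    "Q2": "Revenue grew again and margins improved across every segment.",
}

# Score each call against the speaker's own average: a CEO who routinely
# says "challenging" and suddenly stops is itself a signal.
baseline = sum(negative_tone(t) for t in calls.values()) / len(calls)
for quarter, text in calls.items():
    print(quarter, round(negative_tone(text) - baseline, 3))
```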
 
Although here the reliance on - and subsequent prominence of - the machine is entirely manufactured; if (e.g.) the stock-market-related services didn't factor in computer-generated suggestions, no one would care.
It also seems to dilute the actual goal, since it forces spokesmen/officials to present things according to how a machine would "interpret" them, which is a loop, not something that itself leads to a better service. Bogdanof will still win.

 
DeepMind’s AI makes gigantic leap in solving protein structures
If anyone is interested in this and wants to sit through an hour-long, really poor-quality video of some world-leading scientists discussing it, with more background on the biology than on the AI, there is a YouTube video:

tldr: Much of the hype has been justified, but there are some big questions to be answered.

 
SoftBank shares surge 7% after a report says it’s considering going private
https://www.cnbc.com/2020/12/09/sof...port-says-its-considering-going-private-.html

Also, there exists a mechanism by which the pleb-shareholders can be forced out. This means that once there is a runaway success, there's no way for someone to hitch their ride to that set of coat-tails.

I'm thinking about the super-dangerous combinations, too. Amazon has realtime data regarding sales, especially as we move away from retail. This means that they can predict the success of their customers before quarterly earnings are announced. Same with Google, which can literally monitor employee productivity.

Robin Hanson has warned that the only defense against the Singularity is owning capital, but I am becoming increasingly sure that's not true. It's a defense, but I suspect it's not sufficient.
 
Robin Hanson has warned that the only defense against the Singularity is owning capital, but I am becoming increasingly sure that's not true. It's a defense, but I suspect it's not sufficient.
Those algorithms are still capital to be owned. So in the meantime, Hanson is correct.
 
Those algorithms are still capital to be owned. So in the meantime, Hanson is correct.

Not if the management itself can force a buyout. It looks like there's no defense, especially on the peon end. Theoretically, the shareholders could force management to adopt a poison pill against having our shares taken away from us, but I just don't see that happening.
 
Not if the management itself can force a buyout. It looks like there's no defense, especially on the peon end. Theoretically, the shareholders could force management to adopt a poison pill against having our shares taken away from us, but I just don't see that happening.
I mean the managers owning the company is still people owning the robots. If the company owns itself and employs the managers, and the company is defined by its profit-seeking method and algorithm, then we have something more exciting.
 
The pervasiveness of google and amazon at so many points in the data trail is really dangerous. I could rant about that for a while, but it would have little to do with AI.

To assume that the AI transformation of stock trading is a move away from the people to the megacorps seems less obvious. I know much more about AI than about stocks, but it seems that an individual with a few graphics cards and a bit of domain knowledge may well be better able to bet against the megacorps than someone without them could compete against the major stockbrokers before such things became commonplace. Take the SoftBank example you quote: it took from Wednesday 12:30 (7,122) to Thursday 14:00 (8,517) to do its rising. That is not something you need millisecond trade timing for; anyone with an internet connection and a decent computer could give it a go. I would expect that to be a lot easier than whatever you have to do to gamble on the stock market without AI.
 
Oh, my concern is twofold:

(a) that the AI game might be 'suddenly won' by one of the corporations, where the dominance just steamrolls. Hanson talks about economic growth that compounds in single digits weekly. And that's under a pretty specific scenario among multiple possible runaway events.
(b) that the plebs that own 'some' share of the winning company can easily be forced out by management engaging in some type of legal shenanigan. The headline isn't so much about the 7% rise but about the ability to take the company away from us.

Some of my portfolio is devoted to trying to catch a piece of whatever company might show returns that negate all other investments, due to scale. Ideally, if the Robot Utopia happens, I want to be sufficiently on the winning side that I'm not actually having to play defense for myself.
 
Oh, my concern is twofold:

(a) that the AI game might be 'suddenly won' by one of the corporations, where the dominance just steamrolls. Hanson talks about economic growth that compounds in single digits weekly. And that's under a pretty specific scenario among multiple possible runaway events.
If the AI game is suddenly won it will be by the people with the best data (who may also have the people making the AI but that will not be why they win). This will be the google / amazon / alibaba problem. That is very hard.
(b) that the plebs that own 'some' share of the winning company can easily be forced out by management engaging in some type of legal shenanigan. The headline isn't so much about the 7% rise but about the ability to take the company away from us.

Some of my portfolio is devoted to trying to catch a piece of whatever company might show returns that negate all other investments, due to scale. Ideally, if the Robot Utopia happens, I want to be sufficiently on the winning side that I'm not actually having to play defense for myself.
I am not sure who you mean by "the plebs that own 'some' share of the winning company". I am not sure holding stock in google / amazon / alibaba is any better than holding bitcoin at the moment.

If not, and you mean some random non-megacorp, then it seems to me that if you were doing this 20 years ago, you would be betting against offices full of humans digesting company filings and media reports, and you might use your domain knowledge to try to beat them. Now you are competing with a load of computer scientists. If you were not reading the company filings you would be in a similar position; if you are spending time trying to win at gambling, then you should use AI. It is not as if stock-market speculation is a social good (is it?)

Most people invest in pension funds and index-linked savings, so it should not have much of an effect on them. Though it does make me wonder how much my pension fund is investing in AI.

The corporate buyout thing has always been there; it is a bit dodgy, but compared to the question of corporate responsibility it is a tiny one. It also seems like the sort of thing that is more likely to occur in smoky rooms than on NVIDIA cards.
 
A bit of a soapbox:

Despite all the hype that GPT-3 and other massive models in that vein have received, they rely far too much on memorizing and regurgitating large swathes of the internet. Plus, training GPT-3 had a price tag in the millions and was only viable because Microsoft has been pouring money into OpenAI. But even with all that data and funding, these models are still very prone to "hallucination" (making things up as they go), because the way they internally store knowledge is completely unstructured and based purely on correlations between words seen in the training data.

In contrast, there's been some recent, but comparatively muted, buzz in NLP about "retrieval-augmented" models. These models have a knowledge base (e.g., passages from Wikipedia) from which they can retrieve knowledge on the fly as they write. It's analogous to a human searching for and reading sources as they write. It's a pretty simple idea, and not even a new one, but no one got it to work well until a few months ago. The best example out there now is a model called "RAG" (Retrieval-Augmented Generation) from Facebook AI Research, which has achieved state-of-the-art results on a lot of tasks, like question answering. And that's without requiring 100-billion-plus parameters and millions in capital investment.
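For anyone who wants to poke at it, Facebook's RAG models are available through Hugging Face's transformers library. A minimal sketch, assuming transformers plus the datasets and faiss extras are installed; the dummy index here stands in for the full Wikipedia passage index, which runs to tens of gigabytes:

```python
# Approximate requirements: pip install transformers datasets faiss-cpu torch
from transformers import RagTokenizer, RagRetriever, RagSequenceForGeneration

# use_dummy_dataset=True loads a tiny toy index rather than the full
# Wikipedia index the paper evaluates with.
tokenizer = RagTokenizer.from_pretrained("facebook/rag-sequence-nq")
retriever = RagRetriever.from_pretrained(
    "facebook/rag-sequence-nq", index_name="exact", use_dummy_dataset=True
)
model = RagSequenceForGeneration.from_pretrained(
    "facebook/rag-sequence-nq", retriever=retriever
)

# The retriever fetches supporting passages on the fly and the generator
# conditions on them, instead of answering purely from model weights.
inputs = tokenizer("who developed alphafold?", return_tensors="pt")
generated = model.generate(input_ids=inputs["input_ids"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```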

I think this is a more promising direction than what OpenAI has been doing.
 
Supposedly, the use of "AI" (algorithms, really) has been growing appreciably over the past few years. Considering the current state of world governance, I'd say that we are suffering the effects of artificial dumbness already. Whatever is being optimized for with all these algorithms, it is not quality of life for the population of the planet at large. Or they are remarkably bad at their work.
 
Right on the heels of me dissing OpenAI the other day, they've released this:

DALL·E: Creating Images from Text
We’ve trained a neural network called DALL·E that creates images from text captions for a wide range of concepts expressible in natural language.

DALL·E is a 12-billion parameter version of GPT-3 trained to generate images from text descriptions, using a dataset of text–image pairs. We’ve found that it has a diverse set of capabilities, including creating anthropomorphized versions of animals and objects, combining unrelated concepts in plausible ways, rendering text, and applying transformations to existing images.

From some of their examples:

Text prompt: "an armchair in the shape of an avocado" (AI-generated images)
Text prompt: "an illustration of a baby daikon radish in a tutu walking a dog" (AI-generated images)
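OpenAI hasn't released code or weights for DALL·E, but the idea as described (a GPT-style transformer over text tokens followed by discrete image tokens) can be sketched in toy form. Every size below is invented for illustration; the announcement says the real model has 12 billion parameters, and reportedly it models 256 text tokens followed by 1,024 image tokens produced by a discrete VAE:

```python
import torch
import torch.nn as nn

# Invented toy sizes (see lead-in). Image token ids are offset past the
# text vocabulary so one embedding table and one softmax cover both.
TEXT_VOCAB, IMAGE_VOCAB = 1000, 512
TEXT_LEN, IMAGE_LEN = 16, 64
D = 128

class ToyTextToImage(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(TEXT_VOCAB + IMAGE_VOCAB, D)
        self.pos = nn.Embedding(TEXT_LEN + IMAGE_LEN, D)
        layer = nn.TransformerEncoderLayer(D, nhead=4, batch_first=True)
        self.body = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(D, TEXT_VOCAB + IMAGE_VOCAB)

    def forward(self, tokens):  # tokens: (batch, seq) of text-then-image ids
        seq = tokens.shape[1]
        x = self.embed(tokens) + self.pos(torch.arange(seq))
        # Causal mask: each position attends only to earlier ones, so image
        # tokens are generated left to right, conditioned on the caption.
        mask = torch.triu(torch.full((seq, seq), float("-inf")), diagonal=1)
        return self.head(self.body(x, mask=mask))  # next-token logits

model = ToyTextToImage()
tokens = torch.randint(0, TEXT_VOCAB, (1, TEXT_LEN + IMAGE_LEN))
print(model(tokens).shape)  # torch.Size([1, 80, 1512])
```

A separate decoder (a discrete VAE, in DALL·E's case) would then turn the sampled grid of image tokens back into pixels.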
 