Statistical Models > Experts?

Taliesin

Puttin' on the Ritz
Joined: Jun 11, 2003 · Messages: 4,906 · Location: Montréal
http://www.law.yale.edu/intruders/5493.htm

A pretty long article describing the rising success of relatively simple statistical models compared to expert opinion. It mostly deals with legal matters, with a bit at the end about wine-tasting.

Some excerpts:
Six years ago, Ted Ruger, a law professor at the University of Pennsylvania, attended a seminar at which two political scientists, Andrew Martin and Kevin Quinn, made a bold claim. They said that by using just a few variables concerning the politics of the case, they could predict how the US Supreme Court justices would vote.

Analysing historical data from 628 cases previously decided by the nine Supreme Court justices at the time, and taking into account six factors, including the circuit court of origin and the ideological direction of that lower court’s ruling, Martin and Quinn developed simple flowcharts that best predicted the votes of the individual justices. For example, they predicted that if a lower court decision was considered “liberal”, Justice Sandra Day O’Connor would vote to reverse it. If the decision was deemed “conservative”, on the other hand, and came from the 2nd, 3rd or Washington DC circuit courts or the Federal circuit, she would vote to affirm.
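The flowchart described for Justice O'Connor is essentially a small decision rule. A sketch of it in Python (the function name, inputs, and the fallback case are my own illustration, not the actual Martin–Quinn model):

```python
def predict_oconnor_vote(lower_court_direction, circuit):
    """Predict 'reverse' or 'affirm' from two of the six factors
    the article mentions.  Purely illustrative."""
    if lower_court_direction == "liberal":
        return "reverse"
    # Circuits for which a "conservative" lower-court ruling
    # predicted an affirm vote, per the excerpt.
    affirm_circuits = {"2nd", "3rd", "DC", "Federal"}
    if lower_court_direction == "conservative" and circuit in affirm_circuits:
        return "affirm"
    return "reverse"  # assumed default; the article doesn't say
```

The striking part is how shallow the tree is: two or three questions per justice, and it still beat the experts.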

Ruger wasn’t buying it. As he sat in that seminar room, he didn’t like the way these political scientists were describing their results. “They actually used the nomenclature of prediction,” he told me. “[But] like a lot of legal or political science research, it was retrospective in nature.”

After the seminar he went up to them with a suggestion: why didn’t they run the test forward? As the men talked, they decided to run a horse race, to create “a friendly interdisciplinary competition” to compare the accuracy of two different ways to predict the outcome of Supreme Court cases. In one corner stood the predictions of the political scientists and their flow charts, and in the other, the opinions of 83 legal experts – esteemed law professors, practitioners and pundits who would be called upon to predict the justices’ votes for cases in their areas of expertise. The assignment was to predict in advance the votes of the individual justices for every case that was argued in the Supreme Court’s 2002 term.
The experts lost. For every argued case during the 2002 term, the model predicted 75 per cent of the court’s affirm/reverse results correctly, while the legal experts collectively got only 59.1 per cent right. The computer was particularly effective at predicting the crucial swing votes of Justices O’Connor and Anthony Kennedy. The model predicted O’Connor’s vote correctly 70 per cent of the time while the experts’ success rate was only 61 per cent.
And, interestingly:
Several studies have shown that the most accurate way to exploit traditional expertise is merely to add the expert evaluation as an additional factor in the statistical algorithm. Ruger’s Supreme Court study, for example, suggested that a computer that had access to human predictions would rely on the experts to determine the votes of the more liberal members of the court (Stephen Breyer, Ruth Bader Ginsburg, David Souter and John Paul Stevens, in this case) – because the unaided experts outperformed the super-crunching algorithm in predicting the votes of these justices.
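The "expert opinion as one more input" idea can be sketched like this. The article doesn't give the actual combination rule, so this is a guess at the simplest version it implies: defer to the experts only where they outperformed the algorithm.

```python
def combined_prediction(justice, model_vote, expert_vote):
    """Illustrative only: use the expert's call for the justices
    where unaided experts beat the algorithm, per the excerpt."""
    defer_to_experts = {"Breyer", "Ginsburg", "Souter", "Stevens"}
    return expert_vote if justice in defer_to_experts else model_vote
```

In a real system the expert call would just be another column fed to the fitted model, and the data would decide how much weight it gets.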

The article goes on to discuss the release of sex offenders as an example of increasing reliance on computer models. The issues are obvious: On the one hand, it does seem to be generally more effective to put the machine in charge; on the other hand, the machine's decision-making process isn't always politically correct (e.g. the recidivism calculus penalises any sex offender with at least one male victim); and with no human discretion at all, it's possible that somebody could be penalised for innocuous behaviour that for some reason flags one of the machine's factors.

I think the first case cited, the Supreme Court competition, is the most striking thing about the article, but it's all interesting. How accurate should a machine be before we make it an ultimate arbiter over human freedom? Should human discretion be removed from decision-making, if it were shown that trust in a statistical model yielded superior overall results?
 
Very interesting. Removing the irrational aspects of predicting human behavior will certainly improve the impartiality (provided the programming is not biased) of some decisions. But I have always liked, even preferred, the human element in life and the persuasiveness of passion. :)
 
Yeah, the same thing applies to trading stocks on the market - it all works well and good until 'irrational, emotional human behavior' shows its head and throws the whole model out of whack. So you need an expertise combo of logic, psychology & a little bit of AI giving you indications. Then it all depends on how you interpret, and act.
 
I know a chap who has a model that accurately 'predicts' the outcome of past US elections based on certain data that are available before the election. He has a prediction for the next one that is entirely modelled, so we'll wait and see what happens.
 
I think the problem with these kinds of models is that as soon as they get more popular, people will change the way they make their decisions so that they don't fit into the mold of the model ;)
 
I think the problem with these kinds of models is that as soon as they get more popular, people will change the way they make their decisions so that they don't fit into the mold of the model ;)

I believe most well-crafted models will take this sort of thing into account - it's called ____ bias, I think.
 
I believe most well-crafted models will take this sort of thing into account - it's called ____ bias, I think.

I'm sure most do, I just imagine that as these models get more popular and the average joe learns more about them, correcting that bias will be something of a moving target...
 
You just learned of these models. Is this going to affect your decision making processes?

Well, the ones specifically mentioned don't really come down to my decision, so I guess I can't really comment on that. If I had read, e.g., that a model predicted that my riding in the next federal election would go a certain way because university students (who are well represented here) typically vote for one party, I might take a harder look at whether I want to vote for that party, to make sure that I really believed what they had to say and wasn't just going with the crowd.

Of course, I should do that anyhow, but having someone tell me I'm more likely to do something because of a model almost makes me want to do the opposite, just to prove to myself that I'm more complex than that! :lol:
 
How long until psychohistory? :D
 
Regression analysis 1
Experts 0

Yay statistics! :mischief:

(Not that this "flowchart" is exactly linear regression, but the model could easily fit into a regression format. And a success rate of 75% is pretty darn amazing.)
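(To make that aside concrete: a flowchart rule like O'Connor's can be recast as a logistic regression on indicator variables. Toy sketch with hand-picked coefficients chosen to mimic the rule - nothing here is fitted to real data or taken from the study:)

```python
import math

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

def p_reverse(lower_court_liberal, favorable_circuit):
    """Probability of a 'reverse' vote from two 0/1 indicator
    features.  Coefficients are invented for illustration."""
    z = -1.0 + 4.0 * lower_court_liberal - 2.0 * favorable_circuit
    return logistic(z)
```

A liberal lower-court ruling pushes the probability of reversal up; a conservative ruling from one of the "affirm" circuits pushes it down - same behavior as the flowchart, just expressed as a regression.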
 