Test your knowledge of Probability

What do you tell your patient is the probability that she actually has breast cancer?

  • About 1%: 15 votes (15.2%)
  • About 5%: 5 votes (5.1%)
  • About 10%: 41 votes (41.4%)
  • About 15%: 1 vote (1.0%)
  • About 25%: 1 vote (1.0%)
  • About 50%: 3 votes (3.0%)
  • About 75%: 7 votes (7.1%)
  • About 90%: 14 votes (14.1%)
  • About 95%: 8 votes (8.1%)
  • About 100%: 4 votes (4.0%)

Total voters: 99
Ori's numbers are right ...
  • 720 women with cancer return positive test
  • 80 women with cancer return negative test
  • 6,944 women without cancer return positive test
  • 92,256 women without cancer return negative test
So, of the 7664 women who returned a positive test, only 720 (or 9.4%) actually have cancer.
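
For anyone who wants to check the arithmetic, here is a minimal Python sketch, assuming the figures used in this thread (0.8% base rate, 90% sensitivity, 7% false positive rate):

Code:
# Reproduce the natural-frequency breakdown above.
population = 100_000
base_rate = 0.008        # P(cancer), as assumed in this thread
sensitivity = 0.90       # P(positive | cancer)
false_positive = 0.07    # P(positive | no cancer)

with_cancer = population * base_rate         # 800
true_pos = with_cancer * sensitivity         # 720
false_neg = with_cancer - true_pos           # 80
without_cancer = population - with_cancer    # 99,200
false_pos = without_cancer * false_positive  # 6,944
true_neg = without_cancer - false_pos        # 92,256

print(true_pos, false_neg, false_pos, true_neg)
print(true_pos / (true_pos + false_pos))     # ~0.094, i.e. about 9.4%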

I wonder how much this test costs. If you have a 99.2% chance of being cancer free before the test ... and a 90% chance of being cancer free after the test (assuming the test is positive) ... is the test worth it?

Consider Ori's 100,000 women. Each one takes the test and it costs $100 (say). That is a $10m cost to the community. The results:
  • 720 women who have cancer are told they have cancer and start treatment
  • 80 women who have cancer are told they don't have cancer and ... (fill in the gap)
  • 6,944 women who don't have cancer are told they do ... panic and fear ensue
  • 92,256 women who don't have cancer are told they don't have cancer
So, for $10m cost to the community, you scare the @#%@ out of 7,000 women and you give 80 women a false sense of security ... and you treat and save the lives of 720 women.
Spoiler :
All of this is based on the above statistics being correct (I have no idea if they are or not).
 
Nice - so nearly all the positives are false positives, because nearly all the testees don't have cancer :crazyeye:
 
Good thing I can read ... I did the analysis correctly, but had read the first probability (of a woman having BC) as 0.8 out of 1 (80%), not 0.8% (0.008 out of 1), and just thought the OP was using random numbers. As a result I ended up choosing 75% :wallbash:

OP, where did you get these probabilities? Because as you said, that seems to be an alarmingly large number of false positives.
 
OP, where did you get these probabilities? Because as you said, that seems to be an alarmingly large number of false positives.
I got it from 'Contingencies (page 37 onwards)' (actuarial magazine in the US) but that article was quoting Gerd Gigerenzer ...

I'll give you a couple examples relating to medical care. In the U.S. and many European countries, women who are 40 years old are told to participate in mammography screening. Say that a woman takes her first mammogram and it comes out positive. She might ask the physician, "What does that mean? Do I have breast cancer? Or are my chances of having it 99%, 95%, or 90% or only 50%? What do we know at this point?" I have put the same question to radiologists who have done mammography screening for 20 or 25 years, including chiefs of departments. A third said they would tell this woman that, given a positive mammogram, her chance of having breast cancer is 90%.

However, what happens when they get additional relevant information? The chance that a woman in this age group has cancer is roughly 1%. If a woman has breast cancer, the probability that she will test positive on a mammogram is 90%. If a woman does not have breast cancer the probability that she nevertheless tests positive is some 9%. In technical terms you have a base rate of 1%, a sensitivity or hit rate of 90%, and a false positive rate of about 9%. So, how do you answer this woman who's just tested positive for cancer? As I just said, about a third of the physicians thinks it's 90%, another third thinks the answer should be something between 50% and 80%, and another third thinks the answer is between 1% and 10%. Again, these are professionals with many years of experience. It's hard to imagine a larger variability in physicians' judgments — between 1% and 90% — and if patients knew about this variability, they would not be very happy. This situation is typical of what we know from laboratory experiments: namely, that when people encounter probabilities — which are technically conditional probabilities — their minds are clouded when they try to make an inference.


http://www.edge.org/3rd_culture/gigerenzer03/gigerenzer_print.html (about 40% of the way down)
 
So, can you give the correct formula that shows how to plug these numbers in? There were multiple formulae provided in this thread, but I'm not sure which one is right, which one is wrong, etc.

Also, what is the probability of the doctor doing the screening being a male or a female that likes females? If it's anywhere close to the percentage of forumers that fall into the same category (the original post asked the reader to be the doctor), then you really have to wonder about the seriousness/professionalism of the testing.
 
You can just have a look at Ori's explanation, which is good without a formula.

Otherwise, in probability you have the following:
p(A knowing B) * p(B) = p(A and B), or otherwise written p(A/B)p(B)=p(A&B)
You also have p(A) = p(A/B)p(B)+p(A/not B)p(not B)

Using that and the events C (for cancer), S+ (for positive scan) and S- (for negative scan), you can have the following:
p(C/S+)=p(C&S+)/p(S+)=p(S+/C)*p(C)/p(S+)
and with p(S+) = p(S+/C)p(C)+p(S+/not C)p(not C) you have
p(C/S+)=p(C&S+)/p(S+)=p(S+/C)*p(C)/(p(S+/C)p(C)+p(S+/not C)p(not C))
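
If it helps to see that last formula with numbers plugged in, here is a short Python sketch; the inputs are just the figures assumed earlier in this thread and Gigerenzer's rounded figures, nothing more authoritative:

Code:
def posterior(p_c, p_pos_given_c, p_pos_given_not_c):
    # p(C/S+) = p(S+/C)p(C) / (p(S+/C)p(C) + p(S+/not C)p(not C))
    numerator = p_pos_given_c * p_c
    return numerator / (numerator + p_pos_given_not_c * (1 - p_c))

print(posterior(0.008, 0.90, 0.07))  # ~0.094, this thread's figures
print(posterior(0.01, 0.90, 0.09))   # ~0.092, Gigerenzer's rounded figures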
 
Yay :cool: how do I get my cookie? :P
Sure - here you go.
So, can you give the correct formula that shows how to plug these numbers in? There were multiple formulae provided in this thread, but I'm not sure which one is right, which one is wrong, etc.
Ori's post had it right and then ParadigmShifter followed up with the link to Bayes' Theorem. It's basically conditional probability ... what is the chance of A given B.
I agree with JujuLautre

800 out of 100,000 women in that age range would have BC; of these, 720 would have a true positive mammogram
99,200 out of 100,000 women would not have BC; of these, 6,944 would have a false positive mammogram
-> so the chance that a positive is a true positive is 720/(6,944+720), i.e. 720/7,664, or about 9.4%
Do you want another one?

I have a deck of 52 standard playing cards. I draw 2 cards.
a) What is the probability that I have an Ace?

I look at both cards and I tell you that I have at least 1 Ace.
b) What is the probability that I have another Ace?
 
So you like probability, huh!

Try this:

You are throwing a needle on your wooden floor made of parallel planks. We assume that the final position of the needle is completely random. The width of the planks is 10 cm, and the needle is shorter than this. What is the probability that the needle will meet or cross one of the seams between the boards?
I've seen this before with a piece of paper ... but don't I need the length of the needle? Or I can just assign it a constant (n).

Ok, I drop the needle on the floor. It will cut across a gap if the distance g from the middle of the needle to the nearest gap is less than (n/2) sin(a), where a is the angle between the needle and a line parallel to the gap (through the middle of the needle). The angle a can be anywhere between 0 and 180 degrees (or 0 and pi radians).

That should get you started :).

The thing that I love about this question is that you can use this approach to work out the value of pi. The really sad thing about my life is that I have actually performed that experiment :crazyeye:
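
If anyone wants to repeat that experiment without crawling around on the floor, here is a rough Monte Carlo sketch; the needle length of 7 cm is just an illustrative choice. For a needle of length n on planks of width w (with n <= w) the crossing probability works out to 2n/(pi*w), so the crossing frequency gives you an estimate of pi.

Code:
import math, random

def buffon(trials=1_000_000, needle=7.0, width=10.0):
    # Crossing condition from the post above: the needle crosses a seam when the
    # distance g from its midpoint to the nearest seam is at most (needle/2)*sin(a),
    # where a is the angle between the needle and the seams.
    crossings = 0
    for _ in range(trials):
        g = random.uniform(0, width / 2)   # distance from midpoint to nearest seam
        a = random.uniform(0, math.pi)     # angle between needle and the seams
        if g <= (needle / 2) * math.sin(a):
            crossings += 1
    return crossings / trials

p = buffon()
print(p)                      # theory: 2*7 / (pi*10), about 0.446
print(2 * 7.0 / (10.0 * p))   # estimate of pi from the crossing frequency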
 
I don't know. If women really got breast cancer at the same rate that we lose our CR3 swords to archers with max collateral, you'd think there would be a lot more breast cancer in the world.
 
Just the other day I had a discussion where I brought up that applied probability doesn't get the attention at school it deserves. We'd get less deliberately sloppy science, ridiculous marketspeak and - most importantly - less ill-informed whining on gaming forums if a good part of the population would just point and laugh...

EDIT: Ah, the Monte Carlo method... useful enough when you don't have all the information you'd like to have or deterministic maths would be unwieldy - but terribly overused. If I had a penny for every time I've seen supposed scientists use error-prone and complex simulation where very basic maths would have given exact results I'd be rich enough to buy myself a cookie.
 
Do you want another one?

I have a deck of 52 standard playing cards. I draw 2 cards.
a) What is the probability that I have an Ace?
Ok, someone pm'd for an answer ... let's see if I can answer part a) ...

1] I draw two cards. My outcomes are:

A A
A xA
xA A
xA xA

The chance of getting A A is 4/52 and 3/51 or 12 / 2652
The chance of getting A xA is 4/52 and 48/51 or 192 / 2652
The chance of getting xA A is 4/52 and 48/51 or 192 / 2652
The chance of getting xA xA is 48/52 and 47/51 or 2256 / 2652

So the chance of getting at least 1 Ace is 396 / 2652 (14.9%)
I look at both cards and I tell you that I have at least 1 Ace.
b) What is the probability that I have another Ace?
As it turns out, I have given enough information above to work out this part too.
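
For anyone who wants to check it, here is a small sketch that just re-uses the four outcomes enumerated above; part b) falls straight out of the same counts, so skip it if you'd rather work it out yourself first.

Code:
from fractions import Fraction as F

# The four ordered outcomes from the post above, over 52*51 = 2652 ordered draws.
p_AA   = F(4, 52) * F(3, 51)     # 12 / 2652
p_AxA  = F(4, 52) * F(48, 51)    # 192 / 2652
p_xAA  = F(48, 52) * F(4, 51)    # 192 / 2652
p_xAxA = F(48, 52) * F(47, 51)   # 2256 / 2652

p_at_least_one = p_AA + p_AxA + p_xAA
print(p_at_least_one, float(p_at_least_one))   # 396/2652, ~0.149

# Part b): P(both Aces | at least one Ace) = P(both) / P(at least one)
print(p_AA / p_at_least_one, float(p_AA / p_at_least_one))   # 12/396 = 1/33, ~0.03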
 
You can also use this method ...

P(at least 1 Ace) = 1 - P(No Aces) = 1 - 2256 / 2652.

This is particularly useful if I change the question to:

What is the probability of having an Ace if I draw 20 cards from a 52 card deck? No one wants to use the 'brute force and stupidity' method I outlined above when there are 20 cards involved.
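
A quick sketch of the complement method scaled up to the 20-card version, using binomial coefficients instead of multiplying twenty fractions by hand:

Code:
from math import comb

def p_at_least_one_ace(drawn, deck=52, aces=4):
    # P(at least 1 Ace) = 1 - P(no Aces) = 1 - C(deck - aces, drawn) / C(deck, drawn)
    return 1 - comb(deck - aces, drawn) / comb(deck, drawn)

print(p_at_least_one_ace(2))    # 1 - 2256/2652, ~0.149, same as above
print(p_at_least_one_ace(20))   # ~0.867, drawing 20 cards nearly always gives an Ace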
 
Difficult to come by. For a screening test you want something with as high a sensitivity as possible so as not to miss true cases, but that almost invariably comes with a lower specificity. If you then screen populations with a low cancer rate (young age groups), you end up with a reasonable false negative rate but a very high false positive rate...
 
The accuracy isn't necessarily a problem. Screening tests are supposed to be relatively cheap, risk- and hassle-free. They're just there to test whether it's worth it to run more accurate tests that are expensive, invasive or cause other health concerns.
 