Studies Show ... IT’S DIFFERENT — BUT IS IT ‘SIGNIFICANT’?

by David Schoenfeld with Margaret Wahl on Mon, 2003-12-01 15:29

Part 3 in a series


Much of statistical analysis in medicine seems to boil down to analyzing the role of chance.

An observed difference between two groups being compared in a clinical trial or laboratory experiment can be either meaningful or meaningless, depending on the likelihood that the observed difference is due merely to chance. A significant result is one that’s probably not due to chance.

It’s the job of the statistician to figure out just how big a role chance plays.

The probability that a specific difference would be observed by chance alone is called the p value, and this value can be calculated by applying a mathematical formula.

If the probability (p value) of finding this difference by chance alone is small, we say the observed difference is "significant." If it’s large, we say the result is "not significant."

Let’s take an entirely fanciful example. Say investigators want to test whether passive exercise prolongs life in patients with ALS. They find, after studying the participants for six months, that four more people are surviving in the exercise group than in the no-exercise group. They calculate the p value for this trial to be .46.

Can this finding be considered significant?

Certainly not. The probability of seeing a difference in survival this large by chance alone was 46 percent. That’s much too high, and no conclusions about passive exercise can be drawn from this study.
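For readers who want to see the arithmetic behind a number like .46, here is a minimal sketch of how a p value for a two-group comparison might be computed. The survival counts below are made up for illustration (the article doesn’t report them), and the resulting p value depends entirely on those assumed numbers.

```python
# Illustrative only: hypothetical survival counts, not data from the article.
from scipy.stats import fisher_exact

# 2x2 table: rows = exercise group / no-exercise group,
# columns = survived / died after six months (assumed numbers).
table = [
    [26, 24],  # exercise group: 26 survived, 24 died
    [22, 28],  # no-exercise group: 22 survived, 28 died
]

# Fisher's exact test gives the probability of seeing a difference at least
# this large if chance alone were at work -- the p value.
odds_ratio, p_value = fisher_exact(table)
print(f"p value: {p_value:.2f}")
```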

Generally, if the calculated p value is more than 5 percent, the finding isn’t considered significant. This means that, if chance alone were at work, a difference this large would be seen more than 5 percent of the time.

By contrast, if the p value works out to be 5 percent or less, the result is significant. This p value means that, if chance alone were operating, you would see the observed result 5 percent of the time or less.

If you read medical or scientific papers, you’ll often see results reported as "p ≤ .05" (p is less than or equal to .05) or "p ≤ .01" (p is less than or equal to .01).

The paper’s authors are letting readers know that these results wouldn’t be seen by chance more than 5 percent or 1 percent of the time, respectively.
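The decision rule itself is simple enough to write down in a few lines. This sketch just restates the convention described above (a cutoff of .05, or .01 for a stricter standard); the cutoff value is a choice, not something the data dictate.

```python
# A small sketch of the significance rule described above.
def is_significant(p_value: float, cutoff: float = 0.05) -> bool:
    """Return True if the result would be reported as significant at the cutoff."""
    return p_value <= cutoff

print(is_significant(0.46))  # False: the fanciful exercise trial above
print(is_significant(0.03))  # True: would be reported as "p <= .05"
```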

So, why do we care about significance?

If chance alone can explain the findings, then the drug or treatment being tested probably isn’t worth the patient’s investment of time, money or any risk associated with the experimental therapy.

Of course, there are situations in which a trial looks promising and just fails to meet the significance test. In those cases, it may be wise to do a larger or longer trial to see whether a significant result might be obtained.

Investigators often have to repeat clinical trials or pool the results of several trials before making recommendations to patients.
