OK now I know what you mean by power.
First of all, there are two errors that can occur when you perform a hypothesis test. The first is a Type I error, which has a probability of alpha. This is when you reject the null when it is actually true, so you say there is a difference in OS rate with treatment when in fact there isn't. Quite a serious error. The second is a Type II error, which has a probability of beta, where you don't reject the null when it is false, so you say there is no difference in OS rate with treatment when there actually is. Not as serious, but it might result in the company canning a useful drug. Type II errors are only relevant when you don't reject the null (ie when you have big p-values).
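To make that concrete, here's a rough simulation in Python. The 50% vs 60% survival rates, 100 patients per arm and alpha = 0.05 are made-up numbers purely for illustration, not from any real trial. When the null is true, rejecting it is a Type I error, so the rejection rate comes out around alpha; when the null is false, failing to reject is a Type II error, and that rate is beta.

```python
# Rough simulation of Type I and Type II error rates for a two-sample
# z-test on survival proportions (treatment vs control). All numbers are
# hypothetical, chosen only to illustrate the two error types.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def two_prop_pvalue(x1, n1, x2, n2):
    """Two-sided z-test p-value for a difference in two proportions."""
    p_pool = (x1 + x2) / (n1 + n2)
    se = np.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (x1 / n1 - x2 / n2) / se
    return 2 * norm.sf(np.abs(z))

def rejection_rate(p_control, p_treat, n=100, alpha=0.05, sims=10_000):
    """Fraction of simulated trials in which the null is rejected."""
    x_c = rng.binomial(n, p_control, sims)   # control arm successes
    x_t = rng.binomial(n, p_treat, sims)     # treatment arm successes
    pvals = two_prop_pvalue(x_t, n, x_c, n)
    return np.mean(pvals < alpha)

# Null true (no real difference): rejecting is a Type I error, rate ~ alpha
print("Type I error rate :", rejection_rate(0.50, 0.50))
# Null false (real difference): failing to reject is a Type II error (beta)
print("Type II error rate:", 1 - rejection_rate(0.50, 0.60))
```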
There is a trade-off between alpha and beta: for a fixed sample size, as one goes up the other goes down.
Power is tied to the Type II error: it's the probability of rejecting the null when it is in fact false, so the statistician has made the right decision and committed no error, which gives them some confidence in their conclusion. Power equals 1 minus beta, so the higher the power, the lower the beta and the better the test. So how do you increase the power by lowering beta? Well, you can increase alpha (not recommended), or you can increase the sample size, which shrinks the standard deviation of the sampling distribution, meaning a larger sample is a better representation of the population than a small one. The more your sample represents the population, the sounder your hypothesis test is.
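Here's a quick sketch of both levers, using a normal approximation for the same kind of two-proportion test. Again, the 0.50 vs 0.60 rates, the sample sizes and the alpha values are assumed numbers for illustration only.

```python
# Minimal sketch: power of a two-sided two-proportion z-test under a normal
# approximation, showing how it rises with sample size and with alpha.
import numpy as np
from scipy.stats import norm

def approx_power(p1, p2, n, alpha=0.05):
    """Approximate power with n patients per arm."""
    se = np.sqrt(p1 * (1 - p1) / n + p2 * (1 - p2) / n)
    z_crit = norm.ppf(1 - alpha / 2)
    shift = abs(p2 - p1) / se
    # Power = chance the z statistic lands in the rejection region when
    # the alternative (a real difference) is true
    return norm.sf(z_crit - shift) + norm.cdf(-z_crit - shift)

# Bigger sample -> smaller standard error -> higher power (lower beta)
for n in (50, 100, 200, 400):
    print(f"n = {n:>3} per arm -> power ~ {approx_power(0.50, 0.60, n):.2f}")

# Loosening alpha also buys power, but at the cost of more Type I errors
for alpha in (0.01, 0.05, 0.10):
    print(f"alpha = {alpha:.2f}     -> power ~ {approx_power(0.50, 0.60, 100, alpha):.2f}")
```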
Make sense? Anything else?