High School: Statistics and Probability
Making Inferences and Justifying Conclusions HSS-IC.B.5
5. Use data from a randomized experiment to compare two treatments; use simulations to decide if differences between parameters are significant.
Students should understand that a randomized experiment can be used to compare two treatments. A good way to compare the two treatments statistically is through a t-test. (If your students are confused, just do Mr. T impressions: "I pity the fool who doesn't use a t-test for these types of problems!")
The t-test spits out a number that, when used along with the table provided further below, can help us decide whether the difference between two treatment options is significant. The most commonly used α value is 0.05, but you can change that at will in your classroom.
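If your class has computers handy, the whole comparison can be run in a few lines. Here's a minimal sketch in Python using SciPy; the treatment data are invented purely for illustration:

```python
# A minimal sketch: comparing two hypothetical treatment groups with a t-test.
# The data below are invented for illustration; swap in real experiment results.
from scipy import stats

treatment_a = [12.1, 14.3, 11.8, 13.5, 12.9, 14.0]   # e.g., scores under treatment 1
treatment_b = [10.4, 11.2, 12.0, 10.9, 11.5, 10.1]   # e.g., scores under treatment 2

alpha = 0.05  # the significance level chosen ahead of time

# equal_var=False gives the unpooled (Welch) version of the test,
# which matches the t formula discussed next.
t_stat, p_value = stats.ttest_ind(treatment_a, treatment_b, equal_var=False)

print(f"t = {t_stat:.3f}, two-tailed p = {p_value:.4f}")
print("Significant at alpha = 0.05" if p_value < alpha else "Not significant at alpha = 0.05")
```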
Our t value is given by the following, where x̄1 and x̄2 are the averages of the two treatment samples:

$$ t = \frac{\bar{x}_1 - \bar{x}_2}{\sqrt{\frac{s_1^2}{n_1} + \frac{s_2^2}{n_2}}} $$

In the equation, s1 and s2 are the standard deviations of each treatment and n1 and n2 are the sample sizes of each treatment type. It looks scary and overly complicated, but as long as your students plug the right numbers in the right spots, they should be fine.
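To take some of the scariness out of the formula, students can plug summary statistics in by hand and check their arithmetic with a few lines of code. A sketch, again with made-up numbers:

```python
# Sketch: plugging summary statistics into the t formula by hand.
# The means, standard deviations, and sample sizes are made up for illustration.
import math

x1_bar, x2_bar = 13.1, 11.0   # sample means of the two treatments
s1, s2 = 1.2, 0.9             # sample standard deviations
n1, n2 = 6, 6                 # sample sizes

t = (x1_bar - x2_bar) / math.sqrt(s1**2 / n1 + s2**2 / n2)
print(f"t = {t:.3f}")
```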
We can find the intersection of the p value and the degrees of freedom on the table. The degrees of freedom for the two samples taken together comes from the Welch–Satterthwaite formula, rounded down to the nearest whole number:

$$ df = \frac{\left(\frac{s_1^2}{n_1} + \frac{s_2^2}{n_2}\right)^2}{\frac{\left(s_1^2/n_1\right)^2}{n_1 - 1} + \frac{\left(s_2^2/n_2\right)^2}{n_2 - 1}} $$

Once we know that, we can find the appropriate value on the gargantuan table. (Your students should get used to looking up values in tables. They'll be doing a lot of that.)
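Here's a quick sketch of that degrees-of-freedom calculation, assuming the Welch–Satterthwaite rule above and the same made-up summary statistics:

```python
# Sketch: Welch–Satterthwaite degrees of freedom, rounded down to a whole number.
import math

s1, s2 = 1.2, 0.9   # sample standard deviations (illustrative values)
n1, n2 = 6, 6       # sample sizes

a, b = s1**2 / n1, s2**2 / n2
df = (a + b)**2 / (a**2 / (n1 - 1) + b**2 / (n2 - 1))
print(f"df = {df:.2f}; use {math.floor(df)} when reading the table")
```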
Before we go comparing values to tables, though, we should know whether we're performing a one- or two-tailed test. (If it helps them, tell your students to think of Sonic and Tails.)
If we start out suspecting that the difference runs in a particular direction (say, that one treatment works better than the other), we use a one-tail (Sonic) test. If we just want to know whether the two treatments differ at all, in either direction, we use a two-tail (Tails) test.
Students should know, though, that using a one-tail test is sometimes considered unethical, especially in medicine, because it tests for only one outcome (say, greater effectiveness) rather than both possible outcomes (greater and lesser effectiveness).
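One way to make the one-tail versus two-tail distinction concrete is to compute both p values from the same test statistic. A sketch using SciPy's t distribution, with placeholder values for t and the degrees of freedom:

```python
# Sketch: one-tailed vs. two-tailed p values from the same test statistic.
from scipy import stats

t_stat = 2.0   # placeholder test statistic
df = 10        # placeholder degrees of freedom

p_one_tail = stats.t.sf(t_stat, df)            # area in one tail beyond t
p_two_tail = 2 * stats.t.sf(abs(t_stat), df)   # double it to cover both tails

print(f"one-tailed p = {p_one_tail:.4f}")
print(f"two-tailed p = {p_two_tail:.4f}")
```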
Then, we go back to our numbers. Yes, there are a lot of them, so let's see what we've got. We have our t value, our α value (the significance level that was given to us), and our degrees of freedom. What do we do with all of them?
Let's say we have two samples, normally distributed, with a significance level of 0.05 (our α) and our test statistic (the t value) comes out to be 2.34 with 4 degrees of freedom. We would look at the table to find the degrees of freedom and find where our test statistic value falls in with respect to the p values. In this case, our t falls between 2.132 and 2.776.
So for a one-tail test, our p value is greater than 0.025 but less than 0.05. For a two-tail test, all we do is multiply our p value by 2, which means that in our case, it's greater than 0.05 and less than 0.1. Then, we compare this value to α.
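If you'd like to double-check the table (or skip it entirely), SciPy can reproduce both the bracketing critical values and an exact p value. This sketch assumes the same t = 2.34 and 4 degrees of freedom from the example:

```python
# Sketch: checking the table lookup for t = 2.34 with 4 degrees of freedom.
from scipy import stats

t_stat, df = 2.34, 4

# Critical values the table lists for one-tail areas of 0.05 and 0.025
print(stats.t.ppf(1 - 0.05, df))    # roughly 2.132
print(stats.t.ppf(1 - 0.025, df))   # roughly 2.776

# Exact p values, no bracketing needed
p_one = stats.t.sf(t_stat, df)      # should land between 0.025 and 0.05
p_two = 2 * p_one                   # should land between 0.05 and 0.10
print(f"one-tailed p = {p_one:.4f}, two-tailed p = {p_two:.4f}")
```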
If p < α, we reject the null hypothesis, the assumption that there's no real difference between the treatments. If p > α, we fail to reject it (which isn't quite the same as proving it true). Do you remember that hypothesis?
Think of it this way: Tails's two tails are identical, so there's no difference between them. Both tests start from that "no difference" assumption; the two-tail (Tails) test looks for a difference in either direction, while the one-tail (Sonic) test only looks in the one direction we predicted.
Continuing with our example from before, p > α for the two-tail test, so we fail to reject the hypothesis that the two treatments are the same; the difference isn't significant at the 0.05 level. For the one-tail test, though, p < α, so if we had predicted the direction of the difference ahead of time, we'd reject that hypothesis and call the difference significant. Same data, different verdicts, which is exactly why the choice of test has to be made before peeking at the results.
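The decision step itself is just a comparison of p with α. A tiny sketch that reproduces the example's two verdicts, using illustrative p values in the ranges found above:

```python
# Sketch: comparing p values to alpha for the worked example.
alpha = 0.05
p_one_tail = 0.040   # illustrative value between 0.025 and 0.05
p_two_tail = 0.080   # illustrative value between 0.05 and 0.10

for name, p in [("one-tail", p_one_tail), ("two-tail", p_two_tail)]:
    if p < alpha:
        verdict = "reject the null hypothesis (significant difference)"
    else:
        verdict = "fail to reject the null hypothesis (no significant difference)"
    print(f"{name}: p = {p:.3f} -> {verdict}")
```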
Students must be able to tell which test to run, know how to run it, and find out if there is a significant difference or not.
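The standard also asks for simulations as a way to judge whether a difference is significant. One option is a randomization (permutation) test, which needs no t-table at all; here's a minimal sketch with the same kind of made-up data:

```python
# Sketch: a randomization (permutation) test for the difference in means.
# Data are invented; the idea is to see how often randomly regrouping the
# results produces a difference at least as large as the one we observed.
import random

treatment_a = [12.1, 14.3, 11.8, 13.5, 12.9, 14.0]
treatment_b = [10.4, 11.2, 12.0, 10.9, 11.5, 10.1]

def mean(xs):
    return sum(xs) / len(xs)

observed = mean(treatment_a) - mean(treatment_b)
pooled = treatment_a + treatment_b
n_a = len(treatment_a)

count = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    diff = mean(pooled[:n_a]) - mean(pooled[n_a:])
    if abs(diff) >= abs(observed):   # two-sided: a difference in either direction
        count += 1

print(f"observed difference = {observed:.2f}")
print(f"estimated p value = {count / trials:.4f}")
```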