(g) Replicate the random assignment process, under the null distribution, once:

  • Check the Show Shuffle Options box. Notice the cards have been set up to match the cards you used.
  • Press Shuffle to examine an outcome resulting from one random assignment of the subjects to the treatments under the null hypothesis that the yawn seed didn't make a difference.
    • Notice that the applet does what we suggested: it shuffles the 50 cards and deals out 34 for the "yawn seed" group and 16 for the "no yawn seed" group, separating blue cards (yawners) from green cards (non-yawners) within each group.
    • Group A will now contain some of its original blue cards (with the dark blue top edge) and some of the blue cards that used to be in Group B, and likewise for the green cards and for Group B.
    • Check the Show table box on the left. The applet also determines the 2×2 table for the simulated results. Take a screen capture of this simulated "could have been" two-way table ("Most Recent Shuffled Two-way Table") and paste it into your lab report.

The applet also places a dot on the dotplot of the difference in conditional proportions of yawners (blue cards) for the re-randomized table. (You may need to scroll over to the right.)
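If it helps to see the shuffle-and-deal step concretely, here is a minimal Python sketch of one random re-assignment under the null hypothesis. The 14 blue / 36 green split among the 50 cards is an assumption standing in for the Mythbusters counts; the applet's actual cards match the data you entered.

```python
import random

# 50 cards: blue = yawner, green = non-yawner.
# The 14/36 split is assumed here; use the counts from your own data.
cards = ["blue"] * 14 + ["green"] * 36

random.shuffle(cards)                     # one re-randomization under the null
seed_group, no_seed_group = cards[:34], cards[34:]

# Rebuild the simulated "could have been" 2x2 table
table = {
    "yawn seed":    {"yawned": seed_group.count("blue"),
                     "did not yawn": seed_group.count("green")},
    "no yawn seed": {"yawned": no_seed_group.count("blue"),
                     "did not yawn": no_seed_group.count("green")},
}
print(table)
```

Note that the group sizes (34 and 16) and the total number of yawners (here 14) never change from shuffle to shuffle; only how the yawners are split between the two groups varies.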

(h) Verify the calculation of the difference in conditional proportions for this first table, showing the details in your lab report.
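As a check on your hand calculation: the difference in conditional proportions is (yawners in the yawn seed group ÷ 34) minus (yawners in the no yawn seed group ÷ 16). The sketch below assumes counts of 10 and 4 yawners for illustration; substitute the counts from your own shuffled table.

```python
# Counts of yawners in each group (assumed for illustration;
# replace with the counts from your simulated 2x2 table)
seed_yawners, seed_n = 10, 34
control_yawners, control_n = 4, 16

diff = seed_yawners / seed_n - control_yawners / control_n
print(round(diff, 3))  # prints 0.044 for these assumed counts
```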

(i) Press Shuffle again to produce a second simulation of the random assignment process under the null hypothesis.

  • Make a second screen capture of this new simulated two-way table and paste into your lab report.
  • Is the difference in conditional proportions this time the same as that obtained in (h)? Did you expect them to be the same? Explain.

STOP: Ask now if you are not clear about what the applet is doing.

(k) But we still need to look at a large number of such repetitions to see the long-term pattern in the results:

  • Change the Number of Shuffles to 998 to run a total of 1000 repetitions, and press Shuffle.
  • In the Count Samples box, enter the observed value of the statistic (0.044) from the Mythbusters data we gave you and press Count. The applet will shade in red all simulated differences that are at least as extreme as the observed value (0.044 or an even bigger difference).
  • Make a screen capture of the dotplot of the 1000 "could have been" differences in conditional proportions and paste into your lab report.
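The full 1000-repetition process the applet carries out can be sketched in a few lines of Python. The 0.044 observed statistic comes from the study; the 14 yawners among 50 subjects is an assumed stand-in for the Mythbusters counts.

```python
import random

def shuffled_diff():
    """One re-randomization under the null: shuffle all 50 outcomes,
    deal 34 to the yawn seed group, and return the difference in
    conditional proportions of yawners."""
    cards = [1] * 14 + [0] * 36       # 1 = yawner (blue); 14/36 split assumed
    random.shuffle(cards)
    seed, control = cards[:34], cards[34:]
    return sum(seed) / 34 - sum(control) / 16

observed = 0.044                      # observed statistic from the study
diffs = [shuffled_diff() for _ in range(1000)]

# Approximate p-value: fraction of shuffles at least as extreme as observed
p_value = sum(d >= observed for d in diffs) / 1000
print(p_value)
```

The shaded dots in the applet correspond to the simulated differences counted in the numerator of `p_value` here.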
(l) What values are shaded for the p-value and why? [Hint: If you had shaded them yourself, how would you know which ones to shade?]
