Proportions
How can an observational study design lead to faulty conclusions when analyzing the relationship between playing violent video games and aggressive behavior?
Observer bias may skew data interpretation toward expecting an association, due to pre-held beliefs about games' impact on aggression.
Desensitization theory proves the narrative, forcing researchers to look only for confirming evidence of video game effects.
Data mining bias treats isolated statistical anomalies as though they were representative of significant trends in normal player behavior.
A violent content rating system guarantees objective assessment, removing any risk of subjectivity in measuring game aggression levels.
What is power in the context of hypothesis testing?
Probability of NOT making a Type II error
Probability of NOT making a Type I error
Probability of making a Type II error
Probability of making a Type I error
In statistical testing, what does 'power' refer to?
The probability of rejecting a true alternative hypothesis.
The probability of correctly rejecting a false null hypothesis.
The probability of failing to reject a false null hypothesis (Type II error).
The probability of correctly accepting a true null hypothesis.
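Power, the probability of correctly rejecting a false null hypothesis, equals 1 minus the Type II error rate (β). A minimal Monte Carlo sketch of this relationship, assuming a two-sided one-sample z-test with known σ (the true mean, sample size, and seed here are illustrative assumptions, not taken from any question above):

```python
import math
import random

random.seed(0)

def simulate_power(true_mean, null_mean=0.0, sigma=1.0, n=25, trials=10_000):
    """Estimate power: the fraction of samples drawn from
    N(true_mean, sigma) whose two-sided z-test rejects H0: mu = null_mean
    at alpha = 0.05 (critical value 1.96)."""
    z_crit = 1.96
    rejections = 0
    for _ in range(trials):
        sample = [random.gauss(true_mean, sigma) for _ in range(n)]
        z = (sum(sample) / n - null_mean) / (sigma / math.sqrt(n))
        if abs(z) > z_crit:
            rejections += 1
    return rejections / trials

power = simulate_power(true_mean=0.5)  # H0 is false here, so rejecting is correct
type_ii_rate = 1 - power               # beta = P(Type II error) = 1 - power
```

Running the same simulation with `true_mean=0.0` (H0 actually true) instead estimates the Type I error rate, which should come out near the chosen α of 0.05.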
In the context of testing hypotheses about means, where populations are normally distributed but standard deviations are unknown and samples are small, what assumption must be made for proper use of Student's t-distribution?
The population mean is known prior to collecting sample data.
Sample sizes must exceed thirty regardless of underlying distributions.
The populations follow any distribution as long as it’s symmetric.
The population standard deviations are equal across groups.
What type of error occurs if a statistical test fails to reject a null hypothesis that is actually false?
A Type I error occurs.
A sampling error occurs.
A Type II error occurs.
An experimental design error occurs.
Given two statistical tests with equal sample sizes and significance levels but differing power, which will have narrower confidence intervals?
The test with lower power, since lower power implies higher sample variability, requiring wider confidence intervals.
The test with lower power, which is less prone to Type II errors due to stricter criteria for rejecting the null hypothesis.
The test with higher power.
Both tests will have equal-width confidence intervals, since they share the same sample sizes and significance levels.
What do we call an incorrect decision to not reject the null hypothesis when it is actually false?
Significance level
Power of the test
Type I error
Type II error
When a significance test leads to a small p-value, which of the following is a correct interpretation?
The data occurred by random chance with a probability equal to the p-value.
There is strong evidence against the null hypothesis in favor of the alternative.
The probability that the null hypothesis is true is equal to the p-value.
The null hypothesis is definitively proven false.
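A small p-value is the probability, computed under the assumption that the null hypothesis is true, of data at least as extreme as what was observed; it is evidence against H0, not the probability that H0 is true. A minimal sketch, assuming a two-sided one-sample z-test with known σ (the sample values here are hypothetical illustrations):

```python
import math

def two_sided_p_value(sample_mean, null_mean, sigma, n):
    """p-value for a two-sided z-test: the probability, assuming H0 is
    true, of a sample mean at least this far from null_mean."""
    z = (sample_mean - null_mean) / (sigma / math.sqrt(n))
    # standard normal tail probability via the error function
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# An observed mean 3 standard errors from the null gives a small p-value,
# i.e. strong evidence against H0 -- but it does not "prove" H0 false.
p = two_sided_p_value(sample_mean=10.5, null_mean=10.0, sigma=1.0, n=36)
```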
Why should we be cautious when interpreting correlations in studies analyzing data from social media usage and self-reported happiness levels?
Since everyone uses social media differently, no valid correlations between its usage and happiness levels can exist at all.
Social media algorithms ensure unbiased sampling which guarantees causal relationships can be inferred from correlations observed.
Self-reported happiness levels are always accurate reflections of emotions, making them reliable indicators for establishing causation with social media use.
The relationship might be influenced by confounding factors like personality traits or life events that aren’t accounted for in the analysis.
What influences whether an observed difference will be considered statistically significant in hypothesis testing?
Preference for one outcome over another by researchers
Population size alone without regard to sample characteristics
Convenience of obtaining samples rather than randomness
Sample size, variability within data, and effect size
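The joint effect of sample size, variability, and effect size can be seen directly in the standardized test statistic: a larger |z| means a smaller p-value and a result more likely to be judged statistically significant. A minimal sketch, assuming a one-sample z-test (all numeric values are illustrative assumptions):

```python
import math

def z_statistic(effect_size, sigma, n):
    """Standardized one-sample z-statistic: observed effect divided by
    the standard error. Larger |z| -> smaller p-value."""
    return effect_size / (sigma / math.sqrt(n))

base = z_statistic(effect_size=0.5, sigma=2.0, n=25)
bigger_n = z_statistic(effect_size=0.5, sigma=2.0, n=100)     # more data
bigger_effect = z_statistic(effect_size=1.0, sigma=2.0, n=25) # larger effect
noisier = z_statistic(effect_size=0.5, sigma=4.0, n=25)       # more variability
```

Increasing the sample size or the effect size pushes the statistic further from zero, while greater variability pulls it back toward zero, which is why all three jointly determine significance.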