Sampling Distributions
What is generally considered an unbiased estimator for the population variance?
Sample variance (s²)
Population standard deviation (σ)
Population variance (σ²)
Sample standard deviation (s)
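To see why dividing by n − 1 (Bessel's correction) makes the sample variance s² unbiased, here is a minimal NumPy simulation; the population parameters, sample size, and seed are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative population; its variance is the target we try to estimate.
population = rng.normal(loc=10, scale=3, size=100_000)
true_var = population.var()  # population variance (divide by N)

# Draw many small samples and average each estimator across them.
n, trials = 5, 20_000
samples = rng.choice(population, size=(trials, n))
biased = samples.var(axis=1, ddof=0).mean()    # divide by n
unbiased = samples.var(axis=1, ddof=1).mean()  # divide by n - 1

print(f"population variance:      {true_var:.3f}")
print(f"mean of ddof=0 estimates: {biased:.3f}  (systematically low)")
print(f"mean of ddof=1 estimates: {unbiased:.3f}  (close to target)")
```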
What is it called when groups are used to collect samples and every member of each selected group is included in the sample?
Convenience sampling
Stratified sampling
Cluster sampling
Simple random sampling
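For contrast with the other designs listed, a short Python sketch of cluster sampling; the group names and members are hypothetical. Whole groups are selected at random, and every member of a chosen group enters the sample.

```python
import random

random.seed(1)

# Hypothetical population organized into pre-existing groups (e.g., classrooms).
clusters = {
    "room_a": ["ana", "ben", "carl"],
    "room_b": ["dee", "eli"],
    "room_c": ["fay", "gus", "hal", "ivy"],
    "room_d": ["jo", "kai"],
}

# Cluster sampling: randomly pick whole groups, then include every
# member of each selected group in the sample.
chosen_rooms = random.sample(list(clusters), k=2)
sample = [person for room in chosen_rooms for person in clusters[room]]
print(chosen_rooms, sample)
```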
A statistic that uses every data point in its calculation and does not systematically overestimate or underestimate the parameter it estimates is called what?
Unbiased estimator
Biased estimator
Statistically significant indicator
Variance reducer
A researcher uses bootstrap methods to generate multiple resamples from observed data; these resamples are then used to calculate what kind of point estimates?
Conservatively biased point estimates due to excessive smoothing applied across resampling iterations.
Inherently precise point estimates guaranteed by repeated sampling regardless of underlying distribution shape or spread.
Directly biased point estimates because bootstrap methods inherently increase variability beyond original data levels.
Bias-corrected point estimates based on recentering resampled statistics around original statistic values.
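A minimal sketch of the recentering idea in the last option: estimate the bias as the mean of the bootstrap replicates minus the original statistic, then subtract it. The function name, data, and seed below are illustrative, not a standard API.

```python
import numpy as np

rng = np.random.default_rng(42)
data = rng.exponential(scale=2.0, size=50)  # illustrative skewed sample

def bias_corrected(x, stat, n_boot=5_000):
    """Recenter: corrected = original - (mean of bootstrap replicates - original)."""
    original = stat(x)
    idx = rng.integers(0, len(x), size=(n_boot, len(x)))  # resample with replacement
    boot_stats = np.array([stat(x[i]) for i in idx])
    bias_estimate = boot_stats.mean() - original
    return original - bias_estimate  # equivalently 2*original - boot mean

print("plain estimate:         ", np.std(data))
print("bias-corrected estimate:", bias_corrected(data, np.std))
```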
Why might a researcher use bootstrapping when calculating point estimates if concerned about outliers in their data?
To reduce bias introduced by outliers through resampling methods.
To create more outliers for robustness checks.
To lower variability within each bootstrap sample created.
To ensure that all outliers are included in every sample taken.
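One way to see the mechanism behind the first option: because each bootstrap resample draws with replacement, any single point (an outlier included) is left out of a given resample with probability (1 − 1/n)ⁿ ≈ 37%, so the bootstrap distribution exposes how much one extreme value drives the estimate. The data and seed here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
data = np.append(rng.normal(50, 5, size=29), 500.0)  # one gross outlier

# Each row of idx is one resample drawn with replacement.
n_boot = 10_000
idx = rng.integers(0, len(data), size=(n_boot, len(data)))
boot_means = data[idx].mean(axis=1)

print("sample mean:            ", data.mean())
print("bootstrap mean of means:", boot_means.mean())
print("bootstrap std (spread flags outlier influence):", boot_means.std())
```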
What measure would typically serve as an unbiased estimate of a population proportion?
Quadratic mean
Interquartile range
Sample proportion
Mean deviation
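A quick check that the sample proportion p̂ = successes / n is unbiased for p; the true proportion, sample size, and seed are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
p, n, trials = 0.30, 40, 20_000  # illustrative values

# p_hat = successes / n, averaged over many independent samples.
p_hats = rng.binomial(n, p, size=trials) / n
print(f"mean of sample proportions: {p_hats.mean():.4f} (target {p})")
```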
What effect does high variability in a dataset have on Type I errors in significance testing compared to low-variability datasets?
High variability leads to lower p-values, which reduces Type I errors in proportion to the increase in variability.
High variability decreases Type I errors through increased sensitivity in detecting non-zero effects.
High variability increases Type I errors due to larger standard errors, making it harder to detect true effects.
High variability has no effect on Type I errors, as they are independent of dataset variability levels.
What kind of sample aims at giving all individuals an equally likely chance to be chosen?
Simple random sample (SRS)
Cluster sample (CS)
Stratified random sample (STRS)
Systematic sample (SYS)
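A minimal Python sketch of a simple random sample; the sampling frame is hypothetical. random.sample makes every subset of size k equally likely, so every individual has the same chance of selection.

```python
import random

random.seed(2)
population = [f"unit_{i:03d}" for i in range(100)]  # hypothetical frame

# SRS: every size-10 subset is equally likely to be chosen.
srs = random.sample(population, k=10)
print(srs)
```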
In constructing confidence intervals for a mean with unknown standard deviation, what could introduce bias into your interval estimate?
Increasing your confidence level from 90% to 95%, thus widening your interval.
Calculating margin of error using critical values corresponding to your desired confidence level.
Assuming normality when sampling from a heavily skewed distribution with small n.
Using t-distribution with degrees of freedom equal to n - 1.
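For reference, a t-based interval for a mean with unknown standard deviation, assuming SciPy is available; the approximate-normality assumption flagged in the third option is exactly what this construction leans on when n is small. Data and seed are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
x = rng.normal(loc=100, scale=15, size=25)  # illustrative sample

n = len(x)
mean = x.mean()
se = x.std(ddof=1) / np.sqrt(n)        # estimated standard error
t_crit = stats.t.ppf(0.975, df=n - 1)  # 95% level, n - 1 degrees of freedom
lo, hi = mean - t_crit * se, mean + t_crit * se
print(f"95% CI for the mean: ({lo:.2f}, {hi:.2f})")
```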
In the context of estimating population parameters, why would you consider using trimmed means rather than arithmetic means?
Trimmed means lessen the influence of extreme scores, thus providing a more robust estimator.
Arithmetic means always give the most precise estimate regardless of the extremes in data.
Using arithmetic means avoids the need to adjust or analyze underlying distribution patterns.
Trimming extremes simplifies calculation but significantly reduces the estimator's precision.
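A small illustration of the first option using SciPy's trim_mean; the data are made up. Trimming 10% from each tail drops the extreme score before averaging.

```python
import numpy as np
from scipy import stats

data = np.array([12, 14, 15, 15, 16, 17, 18, 19, 20, 95])  # one extreme score

print("arithmetic mean:  ", data.mean())                 # pulled up by the 95
print("10% trimmed mean: ", stats.trim_mean(data, 0.10))  # 95 is trimmed away
```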