The Effects of a Small Sample Size Limitation


Determining the true value of a parameter, or testing a hypothesis, across an entire population is often impractical or impossible, so researchers instead study a smaller group drawn from it, called a sample. A sample size that is too small reduces the power of the study and increases the margin of error, which can render the study meaningless. Researchers may be compelled to limit the sample size for economic and other reasons. To ensure meaningful results, they usually choose the sample size based on the required confidence level and margin of error, as well as on the expected variability among individual results.

Small Sample Size Decreases Statistical Power

The power of a study is its ability to detect an effect when there is one to be detected. It depends in part on the size of the effect: large effects are easier to detect, so they increase the power of the study.

The power of the study is also a gauge of its ability to avoid Type II errors. A Type II error occurs when a study fails to reject the null hypothesis even though the alternative hypothesis is actually true, a false negative. A sample size that is too small decreases the power of the study and so increases the likelihood of a Type II error.
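The relationship between sample size and Type II errors can be illustrated with a simulation. The sketch below is not from the article; it assumes a two-sided z-test for a proportion at the 95 percent confidence level, a true proportion of 0.6 against a null hypothesis of 0.5, and a fixed random seed, all chosen purely for illustration:

```python
import random

def type_ii_rate(n, true_p=0.6, null_p=0.5, alpha_z=1.96, trials=2000):
    """Estimate the Type II error rate: the fraction of simulated
    studies of size n that fail to reject H0 (p = null_p) even
    though the true proportion is true_p."""
    rng = random.Random(42)  # fixed seed so the estimate is repeatable
    misses = 0
    for _ in range(trials):
        hits = sum(rng.random() < true_p for _ in range(n))
        p_hat = hits / n
        se = (null_p * (1 - null_p) / n) ** 0.5
        z = (p_hat - null_p) / se
        if abs(z) < alpha_z:  # failed to reject a false null hypothesis
            misses += 1
    return misses / trials

beta_small = type_ii_rate(20)    # small sample: high Type II error rate
beta_large = type_ii_rate(200)   # larger sample: much lower rate
```

Since power equals one minus the Type II error rate, the larger sample has substantially more power to detect the same effect.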

Calculating Sample Size

To determine a sample size that will provide the most meaningful results, researchers first determine the preferred margin of error (ME), the maximum amount they want the results to deviate from the true population value. It's usually expressed as a percentage, as in plus or minus 5 percent. Researchers also choose a confidence level before beginning the study. This number corresponds to a Z-score, which can be obtained from tables. Common confidence levels are 90 percent, 95 percent and 99 percent, corresponding to Z-scores of 1.645, 1.96 and 2.576 respectively. Finally, researchers estimate the expected variability of the results, expressed as a proportion (SD). For a new study with no prior data, it's common to choose 0.5, the most conservative value, since it yields the largest required sample size.

Having determined the margin of error, Z-score and expected variability, researchers can calculate the ideal sample size using the following formula:

Sample Size = (Z-score)² × SD × (1 − SD) / ME²
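As a quick sketch of the formula in practice, the function below (not part of the article) plugs in the common choices mentioned above, a 95 percent confidence level, an expected variability of 0.5 and a 5 percent margin of error, and rounds up, since a sample can't include a fraction of a respondent:

```python
import math

def sample_size(z, sd, me):
    """Required sample size: n = z^2 * sd * (1 - sd) / me^2,
    rounded up to the next whole respondent."""
    return math.ceil(z**2 * sd * (1 - sd) / me**2)

# 95% confidence (z = 1.96), sd = 0.5, 5% margin of error
n = sample_size(1.96, 0.5, 0.05)  # 385
```

Raising the confidence level to 99 percent (z = 2.576) with the same inputs pushes the requirement up to 664 respondents, which shows how quickly tighter requirements inflate the sample.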

Effects of Small Sample Size

In the formula, the required sample size grows with the square of the Z-score and shrinks with the square of the margin of error. Consequently, if researchers cut the sample size without changing anything else, they must accept either a lower confidence level, which corresponds to a smaller Z-score, or a larger margin of error, or both.
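Rearranging the formula to solve for the margin of error makes this trade-off concrete. This sketch (not from the article) holds the confidence level at 95 percent and the variability at 0.5, and compares two sample sizes:

```python
def margin_of_error(z, sd, n):
    """Margin of error for a given sample size, from the
    rearranged formula: ME = z * sqrt(sd * (1 - sd) / n)."""
    return z * (sd * (1 - sd) / n) ** 0.5

me_100 = margin_of_error(1.96, 0.5, 100)    # ~0.098, i.e. about ±9.8%
me_1000 = margin_of_error(1.96, 0.5, 1000)  # ~0.031, i.e. about ±3.1%
```

Shrinking the sample from 1,000 to 100 roughly triples the margin of error, because the margin scales with the inverse square root of the sample size.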

In short, when researchers are constrained to a small sample size for economic or logistical reasons, they may have to settle for less conclusive results. Whether this matters depends ultimately on the size of the effect they are studying. For example, a small sample would give more meaningful results in a poll of people living near an airport about the negative effects of air traffic, where the effect is large and consistent, than it would in a poll of their education levels, where responses vary widely.

