Standard deviation


The standard deviation of a data set is a measure of how "spread out" the data points are in general. Unlike range, which only measures the difference between the maximum and minimum, standard deviation measures how far data points typically lie from the mean, taking the whole data set into account.

Formula

For an entire population, the standard deviation is the square root of the variance. Explicitly, for a dataset $X = \{ x_1, x_2, x_3, \dots, x_n \}$ with mean $\overline{x}$, the formula for population standard deviation is \[\sigma = \sqrt{\frac{1}{n}\sum_{i=1}^n (x_i - \overline{x})^2}.\] However, if $X$ is only a sample, then not only does the formula for variance change due to Bessel's correction (replacing $\frac{1}{n}$ with $\frac{1}{n-1}$), but an unbiased estimate of the standard deviation ceases to equal the square root of the unbiased estimate of the variance. Usually, a good approximation when $X$ is a sample is \[s = \sqrt{\frac{1}{n - \frac{3}{2}}\sum_{i=1}^n (x_i - \overline{x})^2}.\] Conventionally, $s$ denotes sample standard deviation, while $\sigma$ denotes population standard deviation.
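As a worked example of the population formula (using an illustrative dataset), consider the population $X = \{2, 4, 4, 4, 5, 5, 7, 9\}$, which has $n = 8$ and mean $\overline{x} = 5$. The squared deviations from the mean are $9, 1, 1, 1, 0, 0, 4, 16$, which sum to $32$, so \[\sigma = \sqrt{\frac{1}{8}\sum_{i=1}^8 (x_i - 5)^2} = \sqrt{\frac{32}{8}} = 2.\]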

This article is a stub. Help us out by expanding it.