Experiments in physics designed to determine parameters
in the functional relationship between quantities *x* and *y*
involve a series of measurements of *x* and the corresponding *y*.
In many cases not only are there measurement errors
σ_{y_j} for each *y*_{j}, but also measurement errors
σ_{x_j} for each *x*_{j}. Most
physicists treat the problem as if all the
σ_{x_j} = 0,
using the
standard least squares method. Such a procedure loses accuracy
in the determination of the unknown parameters contained in
the function *y* = *f* (*x*) and it gives estimates of
errors which are smaller than the true errors.

The standard least squares method of Section 15 should be
used only when all the
σ_{x_j} <<
σ_{y_j}.
Otherwise one must replace the weighting factors
1 / σ_{i}^{2}
in Eq. (24) with
(σ_{j}^{eff})^{-2}
where

(36)   (σ_{j}^{eff})^{2} = σ_{y_j}^{2} + (∂*f* / ∂*x*)^{2} σ_{x_j}^{2}

Eq. (24) then becomes

(37)   *S* = Σ_{j} [*y*_{j} − *f*(*x*_{j})]^{2} / [σ_{y_j}^{2} + (∂*f* / ∂*x*)_{j}^{2} σ_{x_j}^{2}]

A proof is given in Ref. 7.
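As a concrete illustration, Eqs. (36) and (37) can be evaluated numerically. The sketch below is not from the original text; the data, error bars, and the linear model are hypothetical, chosen only to show how the effective variance enters the weighted sum of squares:

```python
import numpy as np

# Hypothetical data: both x and y carry measurement errors.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 3.9, 6.2, 7.8])
sigma_x = np.array([0.1, 0.1, 0.1, 0.1])
sigma_y = np.array([0.2, 0.2, 0.2, 0.2])

def f(x, a1, a2):
    # Model y = a1 + a2*x, so df/dx = a2 everywhere.
    return a1 + a2 * x

def sigma_eff_sq(a2):
    # Eq. (36): effective variance combines both error sources.
    return sigma_y**2 + a2**2 * sigma_x**2

def S(a1, a2):
    # Eq. (37): sum of squares weighted by the effective variances.
    return np.sum((y - f(x, a1, a2))**2 / sigma_eff_sq(a2))
```

Note that if all σ_{x_j} were set to zero here, S would reduce to the standard weighted sum of squares and the resulting fit would understate the true errors, as described above.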

We see that the standard least squares computer programs may
still be used. In the case where
*y* = α_{1} + α_{2}*x* one may use what
are called linear regression programs, and where *y* is a polynomial
in *x* one may use multiple polynomial regression programs.
The usual procedure is to guess starting values for
∂*f* / ∂*x* and then solve for the parameters
α_{j}* using Eq. (30)
with
σ_{j} replaced by
σ_{j}^{eff}. Then new
[∂*f* / ∂*x*]_{j} can
be evaluated and the procedure repeated.
Usually only two iterations are necessary. The effective
variance method is exact in the limit that
∂*f* / ∂*x* is constant over the region
σ_{x_j}. This
means it is always exact for linear regressions.
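The iteration just described can be sketched with an ordinary weighted linear regression routine. The data and error bars below are hypothetical, and `np.polyfit` stands in for the linear regression programs mentioned above:

```python
import numpy as np

# Hypothetical data with measurement errors on both x and y.
x = np.array([0.5, 1.5, 2.5, 3.5, 4.5])
y = np.array([1.1, 2.0, 2.8, 4.1, 5.0])
sigma_x = np.full_like(x, 0.2)
sigma_y = np.full_like(y, 0.1)

# Fit y = a1 + a2*x; for a linear model df/dx is just the slope a2.
a2 = 0.0                        # starting guess for df/dx
for _ in range(3):              # usually two iterations are enough
    var_eff = sigma_y**2 + a2**2 * sigma_x**2   # Eq. (36)
    w = 1.0 / np.sqrt(var_eff)  # polyfit weights multiply the unsquared residuals
    a2, a1 = np.polyfit(x, y, 1, w=w)           # coefficients: [slope, intercept]

# a1, a2 now hold the effective-variance fit parameters.
```

Because the model is linear here, ∂*f* / ∂*x* is constant over each σ_{x_j}, so the iteration converges immediately, consistent with the remark that the method is always exact for linear regressions.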