
    Constructing an Estimate by the Method of Maximum Likelihood. Methods for Obtaining Estimates. Estimating the Parameter of the Exponential Distribution

    In addition to the method of moments, set out in the previous section, there are other methods of point estimation of unknown distribution parameters. Among them is the method of maximum likelihood, proposed by R. Fisher.

    A. Discrete random variables. Let X be a discrete random variable that, as a result of n trials, took the values x1, x2, ..., xn. Suppose that the form of the distribution law of X is known, but the parameter θ that determines this law is unknown. It is required to find a point estimate of θ.

    Denote by p(xi; θ) the probability that, as a result of a trial, the variable X takes the value xi (i = 1, 2, ..., n).

    The likelihood function of a discrete random variable X is the following function of the argument θ:

    L(x1, x2, ..., xn; θ) = p(x1; θ) p(x2; θ) ... p(xn; θ),

    where x1, x2, ..., xn are fixed numbers.

    As a point estimate of the parameter θ one takes the value θ* = θ*(x1, x2, ..., xn) at which the likelihood function reaches its maximum. The estimate θ* is called the maximum-likelihood estimate.

    The functions L and ln L reach their maximum at the same value of θ, so instead of finding the maximum of L it is more convenient to find the maximum of ln L.

    The function ln L is called the log-likelihood function. The maximum point of ln L with respect to the argument θ can be found, for example, as follows:

    1) find the derivative d ln L / dθ;

    2) equate the derivative to zero (this is the likelihood equation) and solve it for θ, obtaining the critical point θ*;

    3) find the second derivative; if the second derivative at θ = θ* is negative, then θ* is a maximum point.

    The maximum point θ* found in this way is taken as the maximum-likelihood estimate of the parameter θ.

    The maximum-likelihood method has a number of advantages: maximum-likelihood estimates are, generally speaking, consistent (though they may be biased), are asymptotically normally distributed (for large n, approximately normal), and have the smallest variance among asymptotically normal estimates; if an efficient estimate θ* exists for the estimated parameter θ, then the likelihood equation has a unique solution θ*; the method makes the fullest use of the sample data about the estimated parameter, so it is especially useful in the case of small samples.

    The drawback of the method is that it often involves laborious computation.

    Note 1. The likelihood function is a function of the argument θ; the maximum-likelihood estimate is a function of the independent arguments x1, x2, ..., xn.

    Note 2. The maximum-likelihood estimate does not always coincide with the estimate found by the method of moments.

    Example 1. Find, by the method of maximum likelihood, a point estimate of the parameter λ of the Poisson distribution

    p(xi; λ) = λ^xi · e^(−λ) / xi!,

    where m is the number of trials in one experiment and xi is the number of occurrences of the event in the i-th experiment (i = 1, 2, ..., n), each experiment consisting of m trials.

    Solution. We form the likelihood function, taking into account that θ = λ:

    L = p(x1; λ) p(x2; λ) ... p(xn; λ) = λ^(x1 + x2 + ... + xn) · e^(−nλ) / (x1! x2! ... xn!).

    We find the log-likelihood function:

    ln L = (Σ xi) ln λ − nλ − Σ ln(xi!).

    We write the likelihood equation by equating the first derivative to zero:

    d ln L / dλ = (Σ xi)/λ − n = 0.

    We find the critical point by solving the resulting equation for λ:

    λ = (Σ xi)/n = x̄.

    We find the second derivative with respect to λ:

    d² ln L / dλ² = −(Σ xi)/λ².

    It is easy to see that at λ = x̄ the second derivative is negative; therefore λ = x̄ is a maximum point, and so the sample mean must be taken as the maximum-likelihood estimate of the parameter λ of the Poisson distribution: λ* = x̄.
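    The derivation above can be checked numerically; the sketch below, with a made-up sample of counts, confirms that the Poisson log-likelihood is largest at the sample mean:

```python
import math

# Hypothetical sample: counts of an event observed in n = 8 experiments.
sample = [3, 1, 4, 2, 0, 2, 3, 1]

def poisson_log_likelihood(lam, xs):
    """ln L(lambda) = (sum x_i) ln(lambda) - n*lambda - sum ln(x_i!)."""
    return (sum(xs) * math.log(lam)
            - len(xs) * lam
            - sum(math.log(math.factorial(x)) for x in xs))

lam_mle = sum(sample) / len(sample)  # the sample mean, as derived above

# The log-likelihood at the sample mean beats nearby candidate values.
for lam in (lam_mle - 0.5, lam_mle + 0.5):
    assert poisson_log_likelihood(lam_mle, sample) > poisson_log_likelihood(lam, sample)
```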

    Example 2. Find, by the method of maximum likelihood, an estimate of the parameter p of the binomial distribution, if in n1 independent trials the event A occurred x1 = m1 times, and in n2 independent trials the event A occurred x2 = m2 times.

    Solution. We form the likelihood function, taking into account that θ = p:

    L = C(n1, m1) p^m1 (1 − p)^(n1 − m1) · C(n2, m2) p^m2 (1 − p)^(n2 − m2).

    We find the log-likelihood function:

    ln L = ln[C(n1, m1) C(n2, m2)] + (m1 + m2) ln p + (n1 + n2 − m1 − m2) ln(1 − p).

    We find the first derivative with respect to p:

    d ln L / dp = (m1 + m2)/p − (n1 + n2 − m1 − m2)/(1 − p).

    We find the critical point by equating the first derivative to zero and solving the resulting equation for p:

    p = (m1 + m2)/(n1 + n2).

    We find the second derivative with respect to p:

    d² ln L / dp² = −(m1 + m2)/p² − (n1 + n2 − m1 − m2)/(1 − p)².

    It is easy to verify that the second derivative is negative; hence p = (m1 + m2)/(n1 + n2) is a maximum point, and it must be taken as the maximum-likelihood estimate of the unknown probability p of the binomial distribution:

    p* = (m1 + m2)/(n1 + n2).
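    A minimal numerical sketch of this result, with hypothetical counts n1, m1, n2, m2:

```python
import math

# Hypothetical data: event A occurred m1 times in n1 trials and m2 times in n2 trials.
n1, m1 = 50, 18
n2, m2 = 30, 12

p_mle = (m1 + m2) / (n1 + n2)  # pooled relative frequency, as derived above

def log_lik(p):
    """Log-likelihood up to the constant term ln[C(n1,m1) * C(n2,m2)]."""
    return (m1 + m2) * math.log(p) + (n1 + n2 - m1 - m2) * math.log(1 - p)

# The pooled frequency beats nearby candidate values of p.
assert log_lik(p_mle) > log_lik(p_mle - 0.05)
assert log_lik(p_mle) > log_lik(p_mle + 0.05)
```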

    B. Continuous random variables. Let X be a continuous random variable that, as a result of n trials, took the values x1, x2, ..., xn. Suppose that the form of the distribution density f(x) is known, but the parameter θ that determines this density is unknown.

    The likelihood function of a continuous random variable X is the following function of the argument θ:

    L(x1, x2, ..., xn; θ) = f(x1; θ) f(x2; θ) ... f(xn; θ),

    where x1, x2, ..., xn are fixed numbers.

    The maximum-likelihood estimate of an unknown parameter of the distribution of a continuous random variable is sought in the same way as in the case of a discrete variable.

    Example 3. Find, by the method of maximum likelihood, an estimate of the parameter λ of the exponential distribution

    f(x; λ) = λ e^(−λx)   (0 < x < ∞),

    if as a result of n trials the random variable X, distributed according to the exponential law, took the values x1, x2, ..., xn.

    Solution. We form the likelihood function, taking into account that θ = λ:

    L = f(x1; λ) f(x2; λ) ... f(xn; λ) = λⁿ e^(−λ(x1 + x2 + ... + xn)).

    We find the log-likelihood function:

    ln L = n ln λ − λ Σ xi.

    We find the first derivative with respect to λ:

    d ln L / dλ = n/λ − Σ xi.

    We write the likelihood equation by equating the first derivative to zero:

    n/λ − Σ xi = 0.

    We find the critical point by solving the resulting equation for λ:

    λ = n / Σ xi = 1/x̄.

    We find the second derivative with respect to λ:

    d² ln L / dλ² = −n/λ².

    It is easy to see that at λ = 1/x̄ the second derivative is negative; consequently λ = 1/x̄ is a maximum point, and so the maximum-likelihood estimate of the parameter λ of the exponential distribution is the reciprocal of the sample mean: λ* = 1/x̄.
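    The estimate λ* = 1/x̄ can be illustrated on simulated data (the true λ below is an arbitrary choice):

```python
import random

random.seed(42)
true_lam = 2.0
# Simulate n draws from the exponential density f(x) = lam * exp(-lam * x).
sample = [random.expovariate(true_lam) for _ in range(10_000)]

lam_mle = len(sample) / sum(sample)  # equals 1 / sample mean, as derived above
```

    With n = 10 000 the estimate lands close to the true value, in line with the consistency of maximum-likelihood estimates.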

    Remark. If the distribution density f(x) of a continuous random variable X is determined by two unknown parameters θ1 and θ2, the likelihood function is a function of the two independent arguments θ1 and θ2:

    L = f(x1; θ1, θ2) f(x2; θ1, θ2) ... f(xn; θ1, θ2),

    where x1, x2, ..., xn are the observed values of X. One then finds the log-likelihood function and, to find its maximum, sets up and solves the system

    ∂ ln L / ∂θ1 = 0,   ∂ ln L / ∂θ2 = 0.

    Example 4. Find, by the method of maximum likelihood, estimates of the parameters a and σ of the normal distribution

    f(x) = (1/(σ√(2π))) e^(−(x − a)²/(2σ²)),

    if as a result of n trials the variable X took the values x1, x2, ..., xn.

    Solution. We form the likelihood function, taking into account that θ1 = a and θ2 = σ:

    L = (1/(σ√(2π)))ⁿ · e^(−Σ(xi − a)²/(2σ²)).

    We find the log-likelihood function:

    ln L = −n ln σ − (n/2) ln(2π) − Σ(xi − a)²/(2σ²).

    We find the partial derivatives with respect to a and with respect to σ:

    ∂ ln L / ∂a = Σ(xi − a)/σ²,   ∂ ln L / ∂σ = −n/σ + Σ(xi − a)²/σ³.

    Equating the partial derivatives to zero and solving the resulting system of two equations for a and σ², we obtain:

    a = (Σ xi)/n = x̄,   σ² = Σ(xi − x̄)²/n.

    So the desired maximum-likelihood estimates are a* = x̄ and σ* = √(Σ(xi − x̄)²/n). Note that the first estimate is unbiased, while the second is biased.
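    A short sketch of these two formulas on a hypothetical sample (note the divisor n, not n − 1, in the variance):

```python
import math

sample = [27.1, 28.0, 26.5, 27.9, 28.3, 27.2]  # hypothetical observations
n = len(sample)

a_mle = sum(sample) / n                                           # a* = sample mean
sigma_mle = math.sqrt(sum((x - a_mle) ** 2 for x in sample) / n)  # biased: divisor n
```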

    The maximum-likelihood method is one of the most widely used methods in statistics and econometrics. To apply it, one must know the law of distribution of the random variable under study.

    Suppose there is a random variable Y with a given distribution law f(y). The parameters of this law are unknown and must be found. In the general case Y may be multidimensional, i.e. consisting of several one-dimensional variables y1, y2, y3, ..., yn.

    Suppose that Y is one-dimensional and that its individual values are numbers. Each of them (y1, y2, y3, ..., yn) is viewed as a realization not of a single random variable Y but of n random variables Y1, Y2, Y3, ..., Yn. That is:

    y1 is a realization of the random variable Y1;

    y2 is a realization of the random variable Y2;

    y3 is a realization of the random variable Y3;

    yn is a realization of the random variable Yn.

    The parameters of the distribution law of the vector Y, consisting of the random variables Y1, Y2, Y3, ..., Yn, are represented as a vector θ consisting of k parameters: θ1, θ2, ..., θk. The variables Y1, Y2, Y3, ..., Yn may be distributed either with identical parameters or with different ones; some parameters may coincide while others differ. The specific answer to this question depends on the problem the researcher is solving.

    For example, in the problem of determining the parameters of the distribution law of a random variable Y whose realizations are the values y1, y2, y3, ..., yn, one assumes that each of these values is distributed in the same way as Y itself. In other words, every value yi is described by the same distribution f(yi) with the same parameters θ: θ1, θ2, ..., θk.

    Another example is finding the parameters of a regression equation. In this case each value yi is viewed as a random variable with its "own" distribution parameters, which may partially coincide with the distribution parameters of the other random variables or may differ entirely. The use of maximum likelihood to find the parameters of a regression equation is discussed below.

    Within the maximum-likelihood method, the available set of values y1, y2, y3, ..., yn is treated as fixed and unchanging. That is, the law f(yi) is a function of the given value yi and of the unknown parameters θ. Consequently, for n observations of the random variable Y there are n laws f(yi).

    The unknown parameters of these distribution laws are treated as variable quantities. They may vary, but for the given set of values y1, y2, y3, ..., yn certain specific parameter values are the most likely. In other words, the question is posed this way: what must the parameters θ be so that the values y1, y2, y3, ..., yn are the most probable?

    To answer it, one must find the law of the joint distribution of the random variables Y1, Y2, Y3, ..., Yn, namely f(y1, y2, y3, ..., yn). If we assume that the variables Y1, Y2, Y3, ..., Yn are independent, then it equals the product of the n laws f(yi) (the product of the probabilities of the appearance of these values for discrete random variables, or the product of the distribution densities for continuous random variables).

    To emphasize that the desired parameters θ are treated as variables, we introduce an additional argument into the notation of the distribution law, the vector of parameters θ: f(yi; θ).

    With this notation, the law of joint distribution of independent values with parameters θ is written as

    L(y1, y2, ..., yn; θ) = f(y1; θ) f(y2; θ) ... f(yn; θ). (2.51)

    The resulting function (2.51) is called the maximum-likelihood function and is denoted L.

    Let us stress once more that in the maximum-likelihood function the values y are fixed, and the parameter vector is variable (in a particular case, a single parameter). Often, to simplify the search for the unknown parameters, the likelihood function is logarithmed, yielding the log-likelihood function ln L.

    The subsequent solution by maximum likelihood amounts to finding values of θ at which the likelihood function (or its logarithm) reaches a maximum. The values θ* found are called maximum-likelihood estimates.

    The techniques for finding maximum-likelihood estimates are quite diverse. In the simplest case, the likelihood function is continuously differentiable and has a maximum at the point for which ∂ ln L / ∂θ = 0.

    In more complex cases, the maximum of the likelihood function cannot be found by differentiating and solving the likelihood equation, and other algorithms, including iterative ones, are required to locate it.

    Parameter estimates obtained by maximum likelihood are:

    • consistent, i.e. as the number of observations grows, the difference between the estimate and the true parameter value approaches zero;
    • invariant: if an estimate θ* of the parameter θ has been obtained and q(θ) is a continuous function, then q(θ*) is the estimate of q(θ). In particular, if by maximum likelihood we estimate the variance of some indicator (σi²), then the square root of the resulting estimate is the maximum-likelihood estimate of the standard deviation (σi);
    • asymptotically efficient;
    • asymptotically normally distributed.

    The last two statements mean that the estimates obtained by maximum likelihood exhibit the properties of efficiency and normality as the sample size grows infinitely large.
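    The invariance property can be sketched as follows: taking the square root of the maximum-likelihood variance estimate yields the maximum-likelihood estimate of the standard deviation (the sample is hypothetical):

```python
import math

sample = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]  # hypothetical observations
n = len(sample)

mean = sum(sample) / n
var_mle = sum((x - mean) ** 2 for x in sample) / n  # ML estimate of the variance

# By invariance, the ML estimate of the standard deviation is simply the square root.
sigma_mle = math.sqrt(var_mle)
```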

    To find the parameters of a multiple linear regression of the form

    y = β0 + β1 x1 + ... + βp xp + ε,

    it is necessary to know the distribution laws of the dependent variables Yi or of the random residuals εi. Let the variable Yi be distributed according to the normal law with parameters μi, σi. Each observed value yi has, in accordance with the definition of regression, a mathematical expectation μi = M(Yi) equal to its theoretical value, provided the values of the regression parameters in the population are known:

    μi = β0 + β1 xi1 + ... + βp xip,

    where xi1, ..., xip are the values of the independent variables in the i-th observation. When the prerequisites for applying least squares hold (the assumptions for constructing a classical normal linear model), the random variables Yi have identical variance σ².

    The variance of yi is determined by the formula

    D(yi) = M[yi − M(yi)]². (2.52)

    Transforming this formula and using the Gauss–Markov conditions, the zero mathematical expectation of the random residuals and the constancy of their variances, one can pass from formula (2.52) to

    D(yi) = D(εi) = σ².

    In other words, the variance of the random variable yi and that of the corresponding random residual coincide.

    We denote the sample estimate of the mathematical expectation of the random variable Yi by ŷi, and the estimate of its variance (constant across observations) by Sy².

    If we assume that the individual observations yi are independent, we obtain the maximum-likelihood function

    L = ∏ (1/(Sy√(2π))) e^(−(yi − ŷi)²/(2Sy²)). (2.53)

    In this function the divisor (√(2π))ⁿ is a constant and does not affect the location of the maximum. Therefore, to simplify the calculations, it can be omitted. With this remark in mind, after taking logarithms, function (2.53) takes the form

    ln L = −n ln Sy − Σ(yi − ŷi)²/(2Sy²).

    In accordance with the maximum-likelihood method, we find the derivatives of the log-likelihood function with respect to the unknown parameters. To find the extremum, we equate the resulting expressions to zero. After transformations we obtain the system

    ∂ ln L / ∂βj = 0, j = 0, 1, ..., p;   Sy² = (1/n) Σ(yi − ŷi)². (2.54)

    This system corresponds to the system obtained by the method of least squares: maximum likelihood and least squares give the same results when the least-squares prerequisites hold. The last expression in system (2.54) gives an estimate of the variance of the random variable yi or, what is the same, of the variance of the random residuals. As noted above (see formula (2.23)), the unbiased estimate of the variance of the random residuals equals

    S² = Σ ei² / (n − p − 1).

    The analogous estimate obtained by maximum likelihood (as follows from system (2.54)) is calculated by the formula

    Sy² = Σ ei² / n,

    i.e. it is biased.

    We have considered the use of maximum likelihood to find the parameters of a linear multiple regression under the assumption that yi is normally distributed. Another approach to finding the parameters of the same regression is to construct the maximum-likelihood function for the random residuals εi. For them a normal distribution with parameters (0, σε) is likewise assumed. It is easy to verify that the results of the solution in this case coincide with the results obtained above.
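    That maximum likelihood reproduces the least-squares solution can be sketched with the ordinary least-squares formulas on hypothetical data; the checks at the end are exactly the normal equations of system (2.54):

```python
# Hypothetical data for a simple regression y = b0 + b1*x + eps.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 8.1, 9.8]
n = len(xs)

x_bar = sum(xs) / n
y_bar = sum(ys) / n
b1 = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
      / sum((x - x_bar) ** 2 for x in xs))
b0 = y_bar - b1 * x_bar

residuals = [y - (b0 + b1 * x) for x, y in zip(xs, ys)]
sigma2_ml = sum(e * e for e in residuals) / n  # ML variance estimate: divisor n (biased)

# The normal equations hold: sums of residuals and of x * residuals are zero.
assert abs(sum(residuals)) < 1e-9
assert abs(sum(x * e for x, e in zip(xs, residuals))) < 1e-9
```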

    Until now we assumed that an estimate of the unknown parameter was given and studied its properties in order to use them when constructing a confidence interval. In this section we consider the question of how to construct estimates.

    Method of moments

    Suppose it is required to estimate an unknown parameter, in general a vector θ = (θ1, ..., θk). It is assumed that the form of the distribution function F(x; θ) is known up to the parameter θ.

    In this case all moments of the random variable become functions of θ.

    The method of moments requires the following steps:

    1) compute k "theoretical" moments;

    2) from the sample, construct k moments of the same kind; in the present context these will be the sample moments;

    3) equating the "theoretical" moments to the sample moments of the same kind, arrive at a system of equations for the components of the estimated parameter;

    4) solving the resulting system (exactly or approximately), find the desired estimates; these are, of course, functions of the sample values.

    We have outlined the procedure based on the initial moments, theoretical and sample. It remains valid for another choice of moments, initial, central, or absolute, which is determined by the convenience of solving system (25.1) or its analogue.

    Let us turn to the consideration of examples.

    Example 25.1. Let the random variable X be distributed uniformly on the interval [a; b], where a and b are unknown parameters. A sample (x1, ..., xn) of size n is drawn from the distribution of X. It is required to estimate a and b.

    In this case the distribution is determined by the density

    f(x) = 1/(b − a),   x ∈ [a; b].

    1) Compute the first two initial "theoretical" moments:

    α1 = (a + b)/2,   α2 = (a² + ab + b²)/3.

    2) Compute the first two initial sample moments from the sample:

    m1 = (1/n) Σ xi = x̄,   m2 = (1/n) Σ xi².

    3) Set up the system of equations:

    (a + b)/2 = m1,   (a² + ab + b²)/3 = m2.

    4) From the first equation express b through a:

    b = 2m1 − a,

    and substitute into the second equation; as a result we arrive at the quadratic equation

    a² − 2m1·a + (4m1² − 3m2) = 0,

    solving which we find the two roots

    a = m1 ± √(3(m2 − m1²)).

    The corresponding values b = 2m1 − a are

    b = m1 ∓ √(3(m2 − m1²)).

    Since, by the meaning of the problem, a < b must hold, we choose as the solution of the system and as the estimates of the unknown parameters

    a* = m1 − √(3(m2 − m1²)),   b* = m1 + √(3(m2 − m1²)).

    Noticing that m2 − m1² is nothing but the sample variance s², we finally obtain

    a* = x̄ − √(3s²),   b* = x̄ + √(3s²).

    If we had chosen the mathematical expectation and the variance as the "theoretical" moments, we would have arrived at the system (together with the inequality a < b)

    (a + b)/2 = x̄,   (b − a)²/12 = s²,

    which is linear and solved more easily than the previous one. The answer, of course, coincides with the one already obtained.

    Finally, we note that our systems always have a solution, and a unique one. The resulting estimates are, of course, consistent, but they do not possess the property of unbiasedness.
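    A quick simulation check of the estimates a* = x̄ − √(3s²), b* = x̄ + √(3s²); the true endpoints below are chosen arbitrarily:

```python
import math
import random

random.seed(1)
a_true, b_true = 2.0, 5.0
sample = [random.uniform(a_true, b_true) for _ in range(20_000)]

n = len(sample)
x_bar = sum(sample) / n
s2 = sum((x - x_bar) ** 2 for x in sample) / n  # sample variance

a_est = x_bar - math.sqrt(3 * s2)
b_est = x_bar + math.sqrt(3 * s2)
```

    With a large sample both estimates land close to the true endpoints, consistent with the consistency of moment estimates.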

    Method of maximum likelihood

    As before, we study a random variable X whose distribution is specified either by the probabilities of its values, if X is discrete, or by a distribution density, if X is continuous, where θ is an unknown vector parameter. Let (x1, ..., xn) be a sample of values of X. It is natural to take as an estimate the value of the parameter at which the probability of obtaining the sample already at hand is greatest.

    The expression

    L(x1, ..., xn; θ) = f(x1; θ) ... f(xn; θ)

    is called the likelihood function. It is the joint distribution (or joint density) of a random vector with n independent coordinates, each of which has the same distribution (density) as X.

    As the estimate of the unknown parameter one takes the value θ̂ that delivers the maximum of L, considered as a function of θ for fixed values x1, ..., xn. The estimate θ̂ is called the maximum-likelihood estimate. Note that θ̂ depends on the sample size n and on the sample values

    x1, ..., xn

    and is therefore itself a random variable.

    Finding the maximum point of the function L is a separate task, which is simplified if the function is differentiable with respect to the parameter.

    In this case it is convenient to consider the logarithm of L instead of the function itself, since the extremum points of a function and of its logarithm coincide.

    The methods of differential calculus allow one to find the points suspected of being extrema and then to determine at which of them the maximum is attained.

    To this end we first consider the system of equations

    ∂ ln L / ∂θj = 0,   j = 1, ..., k,

    whose solutions are the points suspected of being extrema. Then, by the well-known technique, computing the values of the second derivatives

    ∂² ln L / ∂θi ∂θj,

    we find the maximum point by the sign of the determinant composed of these values.

    The estimates obtained by the method of maximum likelihood are consistent, although they may be biased.

    Consider examples.

    Example 25.2. Suppose some random experiment is performed whose outcome may be a certain event A, whose probability p = P(A) is unknown and is to be estimated.

    We introduce a random variable X by the equalities

    X = 1, if the event A occurs,

    X = 0, if the event A does not occur (i.e. the event Ā occurs).

    The distribution of the random variable X is given by the equality

    P(X = x) = pˣ (1 − p)^(1−x),   x ∈ {0, 1}.

    The sample in this case is a finite sequence (x1, ..., xn), where each xi may be 0 or 1.

    The likelihood function is

    L = p^(x1 + ... + xn) (1 − p)^(n − (x1 + ... + xn)).

    We find its maximum point with respect to p, for which we compute the derivative of the logarithm:

    d ln L / dp = m/p − (n − m)/(1 − p).

    Here m = x1 + ... + xn; this number equals the number of ones, the "successes", in the sample sequence.

    We equate the resulting derivative to zero

    m/p − (n − m)/(1 − p) = 0

    and solve the resulting equation:

    p = m/n.

    Since the derivative changes sign from "+" to "−" as p increases from 0 to 1, the point p̂ = m/n is the maximum point of the function L, and hence the maximum-likelihood estimate of the parameter p. Note that the ratio m/n is the relative frequency of the event A in the n trials performed.

    Since m is the number of "successes" in a sequence of n independent trials (in the Bernoulli scheme), M(m/n) = p, so the estimate is unbiased. By Bernoulli's law of large numbers, m/n tends in probability to p, so the estimate is also consistent.
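    A minimal sketch of this estimate on a hypothetical 0/1 sample:

```python
import math

sample = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]  # hypothetical outcomes of n = 10 trials
n = len(sample)
m = sum(sample)                           # number of "successes"

p_mle = m / n                             # the relative frequency, as derived above

def log_lik(p):
    return m * math.log(p) + (n - m) * math.log(1 - p)

# The frequency beats nearby candidate values of p.
assert log_lik(p_mle) > log_lik(0.5)
assert log_lik(p_mle) > log_lik(0.7)
```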

    Example 25.3. Let us construct estimates of the unknown mathematical expectation and variance of a normally distributed random variable X with parameters (a, σ).

    Solution.

    Under the conditions of the example, the random variable is determined by the distribution density

    f(x; a, σ) = (1/(σ√(2π))) e^(−(x − a)²/(2σ²)).

    We write down at once the logarithm of the likelihood function:

    ln L = −n ln σ − (n/2) ln(2π) − Σ(xi − a)²/(2σ²).

    We set up the system of equations for finding the extremum points:

    ∂ ln L / ∂a = Σ(xi − a)/σ² = 0,   ∂ ln L / ∂σ = −n/σ + Σ(xi − a)²/σ³ = 0.

    From the first equation we find â = x̄; substituting the value found into the second, we find σ̂² = (1/n) Σ(xi − x̄)².

    We compute the second derivatives of ln L at the point (â, σ̂):

    A = ∂² ln L / ∂a²,   B = ∂² ln L / ∂a ∂σ,   C = ∂² ln L / ∂σ².

    Since the determinant AC − B² > 0 while A < 0, the point found is indeed a maximum point of the likelihood function.

    Note that the estimate â is the sample mean (an unbiased and consistent estimate of the mathematical expectation), while σ̂² is the sample variance (a biased estimate of the variance).

    This method consists in taking as the point estimate of a parameter the value at which the likelihood function reaches its maximum.

    For a random time to failure with probability density f(t; α), the likelihood function is determined by formula (12.11):

    L = f(t1; α) f(t2; α) ... f(tN; α),

    i.e. it is the joint probability density of N independent measurements of the random variable τ with probability density f(t; α).

    If the random variable is discrete and takes the values z1, z2, ... with probabilities p1(α), p2(α), ... respectively, then the likelihood function is taken in a different form, namely as a product of those probabilities whose indices indicate which values were observed.

    Maximum-likelihood estimates of the parameter are determined from the likelihood equation (12.12):

    ∂ ln L / ∂α = 0.

    The value of the maximum-likelihood method is revealed by the following two propositions:

    If an efficient estimate exists for the parameter, the likelihood equation (12.12) has a unique solution.

    Under certain general conditions of an analytic nature imposed on the function f(t; α), the solution of the likelihood equation converges to the true value of the parameter.

    Consider an example of using the maximum-likelihood method for the parameters of the normal distribution.

    Example:

    Given: f(t; m, σ) = (1/(σ√(2π))) e^(−(t − m)²/(2σ²)); ti (i = 1..n) is a sample from a population with this distribution density.

    It is required to find the maximum-likelihood estimates.

    The likelihood function:

    L = ∏ (1/(σ√(2π))) e^(−(ti − m)²/(2σ²));

    ln L = −n ln(σ√(2π)) − Σ(ti − m)²/(2σ²).

    The likelihood equations:

    ∂ ln L / ∂m = Σ(ti − m)/σ² = 0;

    ∂ ln L / ∂σ = −n/σ + Σ(ti − m)²/σ³ = 0.

    The solution of these equations is: m* = (1/n) Σ ti, the statistical mean; σ*² = (1/n) Σ(ti − m*)², the statistical variance. The estimate σ*² is biased; an unbiased estimate is

    s² = (1/(n − 1)) Σ(ti − m*)².

    The main disadvantage of the maximum-likelihood method is the computational difficulty arising in the solution of the likelihood equations, which, as a rule, are transcendental.

    Method of moments.

    This method was proposed by K. Pearson and is the very first general method of point estimation of unknown parameters. It is still widely used in practical statistics, since it often leads to a relatively simple computational procedure. The idea of the method is that the distribution moments, which depend on the unknown parameters, are equated to the empirical moments. Taking the number of moments equal to the number of unknown parameters, and writing the corresponding equations, we obtain the required number of equations. Most often the first two statistical moments are computed: the sample mean t̄ = (1/N) Σ ti and the sample variance S² = (1/N) Σ(ti − t̄)². Estimates obtained by the method of moments are not the best in terms of efficiency. However, they are very often used as first approximations.

    Consider an example of using the method of moments.

    Example: Consider the exponential distribution

    f(t) = λ e^(−λt),   t > 0, λ > 0;

    ti (i = 1..N) is a sample from a population with this distribution density. It is required to find an estimate of the parameter λ.

    We set up the equation: the theoretical mean 1/λ is equated to the statistical mean t̄. Thus 1/λ = t̄, whence λ* = 1/t̄.
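    On a hypothetical sample of failure times this moment equation gives (the values are illustrative):

```python
# Hypothetical observed times to failure.
sample = [0.8, 0.3, 1.1, 0.5, 0.9, 0.4, 0.6, 1.2]

t_bar = sum(sample) / len(sample)  # statistical mean
lam_est = 1 / t_bar                # from the moment equation 1/lambda = t_bar
```

    Here the moment estimate coincides with the maximum-likelihood estimate λ* = 1/t̄ obtained earlier for the exponential law.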

    Quantile method.

    This is an empirical method, like the method of moments. It consists in equating a quantile of the theoretical distribution to the empirical quantile. If several parameters are to be estimated, the corresponding equations are written for several quantiles.

    Consider the case when the distribution law F(t; α, β) has two unknown parameters α, β. Let the function F(t; α, β) have a continuously differentiable density taking positive values for all admissible parameter values α, β. If the trials are carried out according to a plan with r >> 1, then the moment of occurrence of the i-th failure can be regarded as the empirical quantile of level i/N, i = 1, 2, ..., where F*(t) is the empirical distribution function. If t_l and t_r, the moments of occurrence of the l-th and r-th failures, are known exactly, the values of the parameters α and β could be found from the equations

    F(t_l; α, β) = l/N,   F(t_r; α, β) = r/N.
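    A sketch of the idea for the simplest one-parameter case, the exponential law (the failure times are made up): matching the theoretical median ln 2 / λ to the empirical median yields an estimate of λ.

```python
import math

# Hypothetical ordered failure times; the sample median serves as
# the empirical quantile of level 0.5.
times = sorted([0.21, 0.48, 0.55, 0.80, 1.02, 1.37, 1.90, 2.45])
n = len(times)
t_med = (times[n // 2 - 1] + times[n // 2]) / 2  # empirical median

# For the exponential law F(t) = 1 - exp(-lam * t) the theoretical
# median equals ln(2) / lam; equating it to the empirical median gives:
lam_quant = math.log(2) / t_med
```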

    The essence of the point estimation problem

    Point estimation of distribution parameters

    A point estimate implies finding a single numerical value that is taken as the value of the parameter. It is appropriate to determine such an estimate when the volume of experimental data (ED) is sufficiently large. Moreover, there is no single notion of a sufficient volume of ED; its value depends on the kind of parameter being estimated (we shall return to this issue when studying methods of interval estimation of parameters; for now, a sample containing at least 10 values will be considered sufficient). When the volume of ED is small, point estimates may differ significantly from the true parameter values, which makes them unsuitable for use.

    The problem of point estimation of parameters in its typical formulation is as follows.

    Given: a sample of observations (x1, x2, ..., xn) of a random variable X. The sample size n is fixed.

    The form of the distribution law of X is known, for example, in the form of a distribution density f(Θ, x), where Θ is an unknown (in general, vector) distribution parameter. The parameter is a non-random quantity.

    It is required to find an estimate Θ* of the parameter Θ of the distribution law.

    Restrictions: the sample is representative.

    There are several methods for solving the problem of point estimation of parameters; the most commonly used are the methods of maximum likelihood, of moments, and of quantiles.

    The method was proposed by R. Fisher in 1912. It is based on studying the probability of obtaining the sample of observations (x1, x2, ..., xn). This probability equals

    f(x1, Θ) f(x2, Θ) ... f(xn, Θ) dx1 dx2 ... dxn.

    The joint probability density

    L(x1, x2, ..., xn; Θ) = f(x1, Θ) f(x2, Θ) ... f(xn, Θ), (2.7)

    considered as a function of the parameter Θ, is called the likelihood function.

    As the estimate Θ* of the parameter Θ one should take the value that maximizes the likelihood function. To find the estimate, one replaces Θ by Θ* in the likelihood function and solves the equation

    dL / dΘ* = 0.

    To simplify the calculations, one passes from the likelihood function to its logarithm ln L. This transformation is admissible because the likelihood function is positive and reaches its maximum at the same point as its logarithm. If the distribution parameter is a vector quantity

    Θ* = (q1, q2, ..., qn),

    then the maximum-likelihood estimates are found from the system of equations

    d ln L(q1, q2, ..., qn) / dq1 = 0;

    d ln L(q1, q2, ..., qn) / dq2 = 0;

    . . . . . . . . .

    d ln L(q1, q2, ..., qn) / dqn = 0.

    To verify that the optimum point corresponds to the maximum of the likelihood function, one must find the second derivative of this function. If the second derivative at the optimum point is negative, then the parameter values found maximize the function.

    Thus, finding maximum-likelihood estimates includes the following steps: constructing the likelihood function (or its natural logarithm); differentiating the function with respect to the desired parameters and setting up the system of equations; solving the system of equations to find the estimates; determining the second derivative of the function, checking its sign at the optimum point of the first derivative, and drawing conclusions.
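    The steps listed above can be sketched numerically; here a crude grid search stands in for solving the likelihood equation analytically (the exponential model and the sample are illustrative):

```python
import math

sample = [1.2, 0.7, 2.3, 0.4, 1.9, 1.1]  # hypothetical observations, exponential model

def neg_log_lik(lam):
    # Step 1: the log-likelihood for f(x; lam) = lam * exp(-lam * x), negated.
    return -(len(sample) * math.log(lam) - lam * sum(sample))

# Steps 2-3 done numerically: a grid search over lam in (0, 5]
# instead of solving d ln L / d lam = 0 in closed form.
grid = [k / 1000 for k in range(1, 5001)]
lam_hat = min(grid, key=neg_log_lik)

# The analytic solution of the likelihood equation is 1 / x_bar.
assert abs(lam_hat - len(sample) / sum(sample)) < 1e-3
```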

    Solution. The likelihood function for a sample of ED of size n:

    L(x1, x2, ..., xn; m, S) = ∏ (1/(S√(2π))) e^(−(xi − m)²/(2S²)).

    The logarithm of the likelihood function:

    ln L = −n ln(S√(2π)) − Σ(xi − m)²/(2S²).

    The system of equations for finding the parameter estimates:

    ∂ ln L / ∂m = Σ(xi − m)/S² = 0;   ∂ ln L / ∂S = −n/S + Σ(xi − m)²/S³ = 0.

    From the first equation it follows that

    Σ(xi − m) = 0,

    or finally

    m* = (1/n) Σ xi.

    Thus, the arithmetic mean is the maximum-likelihood estimate of the mathematical expectation.

    From the second equation one can find

    S² = (1/n) Σ(xi − m*)².

    The empirical variance is biased. After eliminating the bias,

    S₀² = (1/(n − 1)) Σ(xi − m*)².

    The actual values of the parameter estimates: m = 27.51, s² = 0.91.

    To verify that the estimates obtained maximize the likelihood function, we take the second derivatives.

    The second derivatives of the function ln L(m, S), regardless of the parameter values, are less than zero; therefore the parameter values found are maximum-likelihood estimates.

    The maximum-likelihood method yields consistent, efficient (when efficient estimates exist, the solution obtained gives them), sufficient, asymptotically normally distributed estimates. The method can yield both biased and unbiased estimates; bias can be eliminated by introducing corrections. The method is especially useful for small samples.