
    Mathematical expectation. Expected value. Simplest properties of the mathematical expectation

    Characteristics of a discrete random variable and their properties. Mathematical expectation, variance, standard deviation

    The distribution law completely characterizes a random variable. However, when the distribution law cannot be found, or is not required, one can restrict oneself to finding quantities called the numerical characteristics of the random variable. These quantities determine some average value around which the values of the random variable are grouped, and the degree of their dispersion around that average value.

    The mathematical expectation of a discrete random variable is the sum of the products of all possible values of the random variable by their probabilities: M(X) = x1p1 + x2p2 + … + xnpn.

    The mathematical expectation exists if the series on the right side of the equality converges absolutely.

    From the point of view of probability, the mathematical expectation is approximately equal to the arithmetic mean of the observed values of the random variable.

    Example. The distribution law of a discrete random variable is known. Find the mathematical expectation.

    X  x1   x2   x3   x4
    p  0.2  0.3  0.1  0.4

    Solution: M(X) = x1·0.2 + x2·0.3 + x3·0.1 + x4·0.4.

    9.2 Properties of mathematical expectation

    1. The mathematical expectation of a constant is equal to that constant: M(C) = C.

    2. A constant factor can be taken outside the sign of the mathematical expectation: M(CX) = C·M(X).

    3. The mathematical expectation of the product of two independent random variables is equal to the product of their mathematical expectations: M(XY) = M(X)·M(Y).

    This property is valid for an arbitrary number of random variables.

    4. The mathematical expectation of the sum of two random variables is equal to the sum of the mathematical expectations of the terms: M(X + Y) = M(X) + M(Y).

    This property is also valid for an arbitrary number of random variables.

    Let n independent trials be carried out, in each of which event A occurs with probability p.

    Theorem. The mathematical expectation M(X) of the number of occurrences of event A in n independent trials is equal to the product of the number of trials and the probability of occurrence of the event in each trial: M(X) = np.

    Example. Find the mathematical expectation of a random variable Z if the mathematical expectations of X and Y are known: M(X) = 3, M(Y) = 2, Z = 2X + 3Y.

    Solution: M(Z) = M(2X + 3Y) = 2M(X) + 3M(Y) = 2·3 + 3·2 = 12.

    9.3 Dispersion of a discrete random variable

    However, the mathematical expectation alone cannot fully characterize a random process. In addition to the mathematical expectation, one must introduce a quantity that characterizes the deviation of the values of the random variable from the mathematical expectation.

    This deviation is equal to the difference between the random variable and its mathematical expectation. The mathematical expectation of the deviation itself is zero: some possible deviations are positive, others are negative, and on average they cancel each other out.



    The variance (dispersion) of a discrete random variable is the mathematical expectation of the squared deviation of the random variable from its mathematical expectation: D(X) = M[(X − M(X))²].

    In practice, this method of calculating the variance is inconvenient, since it leads to cumbersome calculations when the random variable takes many values.

    Therefore, a different method is used.

    Theorem. The variance is equal to the difference between the mathematical expectation of the square of the random variable X and the square of its mathematical expectation: D(X) = M(X²) − M²(X).

    Proof. Taking into account that the mathematical expectation M(X), and hence the square of the mathematical expectation M²(X), are constant values, we can write:
    D(X) = M[(X − M(X))²] = M[X² − 2X·M(X) + M²(X)] = M(X²) − 2M(X)·M(X) + M²(X) = M(X²) − M²(X).

    Example. Find the variance of a discrete random variable given by the distribution law.

    X   x1    x2    x3    x4
    X²  x1²   x2²   x3²   x4²
    p   0.2   0.3   0.1   0.4

    Solution: D(X) = M(X²) − M²(X).

    9.4 Properties of dispersion

    1. The variance of a constant is zero: D(C) = 0.

    2. A constant factor can be taken out of the variance sign by squaring it: D(CX) = C²·D(X).

    3. The variance of the sum of two independent random variables is equal to the sum of the variances of these variables: D(X + Y) = D(X) + D(Y).

    4. The variance of the difference of two independent random variables is also equal to the sum of the variances of these variables: D(X − Y) = D(X) + D(Y).

    Theorem. The variance of the number of occurrences of an event A in n independent trials, in each of which the probability p of the occurrence of the event is constant, is equal to the product of the number of trials and the probabilities of occurrence and non-occurrence of the event in each trial: D(X) = npq, where q = 1 − p.

    9.5 Standard deviation of a discrete random variable

    The standard deviation of a random variable X is the square root of its variance: σ(X) = √D(X).

    Theorem. The standard deviation of the sum of a finite number of mutually independent random variables is equal to the square root of the sum of the squares of the standard deviations of these variables: σ(X1 + … + Xn) = √(σ²(X1) + … + σ²(Xn)).

    The expected value is the average value of a random variable (of the probability distribution of a stationary random variable) as the number of samples or measurements (sometimes called the number of trials) tends to infinity.

    The arithmetic mean of a one-dimensional random variable over a finite number of trials is usually called an estimate of the mathematical expectation. As the number of trials of a stationary random process tends to infinity, the estimate of the mathematical expectation tends to the mathematical expectation.

    Expectation is one of the basic concepts in probability theory.


    Definition

    Let a probability space (Ω, 𝔄, P) and a random variable X defined on it be given. That is, by definition, X: Ω → ℝ is a measurable function. If the Lebesgue integral of X over the space Ω exists, then it is called the mathematical expectation, or the mean (expected) value, and is denoted M[X] or E[X].

    M[X] = ∫_Ω X(ω) P(dω).

    Basic formulas for mathematical expectation

    M[X] = ∫_{−∞}^{+∞} x dF_X(x), x ∈ ℝ.

    The mathematical expectation of a discrete distribution

    If P(X = x_i) = p_i with Σ_{i=1}^{∞} p_i = 1,

    then it follows directly from the definition of the Lebesgue integral that

    M[X] = Σ_{i=1}^{∞} x_i p_i.

    The mathematical expectation of an integer-valued random variable

    If P(X = j) = p_j, j = 0, 1, …, with Σ_{j=0}^{∞} p_j = 1,

    then its mathematical expectation can be expressed in terms of the generating function of the sequence (p_j):

    P(s) = Σ_{k=0}^{∞} p_k s^k

    as the value of the first derivative at unity: M[X] = P′(1). If the mathematical expectation of X is infinite, then lim_{s→1} P′(s) = ∞, and we write P′(1) = M[X] = ∞.

    Now let us take the generating function Q(s) of the tail sequence (q_k) of the distribution:

    q_k = P(X > k) = Σ_{j=k+1}^{∞} p_j;   Q(s) = Σ_{k=0}^{∞} q_k s^k.

    This generating function is related to the previously defined function P(s) by the property Q(s) = (1 − P(s)) / (1 − s) for |s| < 1. From this, by the mean value theorem, it follows that the mathematical expectation is simply equal to the value of this function at unity:

    M[X] = P′(1) = Q(1)
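As a concrete check (my own example, not from the text): for a geometric distribution p_j = (1 − q)·q^j the generating function is P(s) = (1 − q)/(1 − qs) and the known mean is q/(1 − q), so the truncated series Σ j·p_j and a one-sided numerical derivative of P at s = 1 should agree. A minimal sketch:

```python
# Sketch: checking M[X] = P'(1) numerically for a geometric distribution
# p_j = (1 - q) * q**j, j = 0, 1, 2, ..., whose generating function is
# P(s) = (1 - q) / (1 - q*s) and whose known mean is q / (1 - q).

def P(s, q=0.4):
    """Generating function of the geometric distribution with parameter q."""
    return (1 - q) / (1 - q * s)

def mean_from_series(q=0.4, terms=200):
    """Direct series M[X] = sum of j * p_j, truncated at `terms`."""
    return sum(j * (1 - q) * q**j for j in range(terms))

q = 0.4
h = 1e-6
# One-sided numerical derivative of P at s = 1, approaching from the left.
derivative = (P(1, q) - P(1 - h, q)) / h
print(mean_from_series(q))  # ~ q/(1-q) = 0.666...
print(derivative)           # ~ the same value
```

For q = 0.4 both numbers come out near 2/3, illustrating M[X] = P′(1).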

    The mathematical expectation of an absolutely continuous distribution

    M[X] = ∫_{−∞}^{+∞} x f_X(x) dx.

    The mathematical expectation of a random vector

    Let X = (X1, …, Xn)^⊤: Ω → ℝⁿ be a random vector. Then, by definition,

    M[X] = (M[X1], …, M[Xn])^⊤,

    that is, the mathematical expectation of a vector is determined componentwise.

    The mathematical expectation of the transformation of a random variable

    Let g: ℝ → ℝ be a Borel function such that the random variable Y = g(X) has a finite mathematical expectation. Then the following formulas are valid for it:

    M[g(X)] = Σ_{i=1}^{∞} g(x_i) p_i,

    if X has a discrete distribution;

    M [g (X)] = ∫ - ∞ ∞ g (x) f X (x) dx, (\ displaystyle M \ left = \ int \ limits _ (- \ infty) ^ (\ infty) \! G (x ) f_ (X) (x) \, dx,)

    if X has an absolutely continuous distribution.

    If the distribution P^X of the random variable X is of general form, then

    M[g(X)] = ∫_{−∞}^{+∞} g(x) P^X(dx).

    In the special case when g(X) = X^k, the expected value M[g(X)] = M[X^k] is called the k-th moment of the random variable.

    Simplest properties of mathematical expectation

    • The mathematical expectation of a number is the number itself:
    M[a] = a, where a ∈ ℝ is a constant;
    • The mathematical expectation is linear:
    M[aX + bY] = aM[X] + bM[Y], where X, Y are random variables with finite mathematical expectation and a, b ∈ ℝ are arbitrary constants;
    • The mathematical expectation is monotone: if 0 ⩽ X ⩽ Y almost surely, then 0 ⩽ M[X] ⩽ M[Y], and if X = Y almost surely, then M[X] = M[Y];
    • For independent random variables X and Y, M[XY] = M[X]·M[Y].

    The mathematical expectation (mean value) of a random variable X given on a discrete probability space is the number m = M[X] = Σ x_i p_i, provided the series converges absolutely.


    Properties of the mathematical expectation of a random variable

    1. The mathematical expectation of a constant is equal to itself: M[C] = C, where C is a constant;
    2. The constant factor can be taken out of the expectation sign: M[CX] = C·M[X];
    3. The mathematical expectation of the sum of random variables is equal to the sum of their mathematical expectations: M[X + Y] = M[X] + M[Y];
    4. The mathematical expectation of the product of independent random variables is equal to the product of their mathematical expectations: M[XY] = M[X]·M[Y], if X and Y are independent.

    Dispersion properties

    1. The variance of a constant is zero: D(C) = 0.
    2. A constant factor can be taken out of the variance sign by squaring it: D(k·X) = k²·D(X).
    3. If the random variables X and Y are independent, then the variance of the sum is equal to the sum of the variances: D(X + Y) = D(X) + D(Y).
    4. If the random variables X and Y are dependent: D(X + Y) = D(X) + D(Y) + 2·M[(X − M[X])(Y − M[Y])], where the last term is twice the covariance of X and Y.
    5. The computational formula for the variance holds:
      D(X) = M(X²) − (M(X))²

    Example. The mathematical expectations and variances of two independent random variables X and Y are known: M(X) = 8, M(Y) = 7, D(X) = 9, D(Y) = 6. Find the mathematical expectation and variance of the random variable Z = 9X − 8Y + 7.
    Solution. Based on the properties of the mathematical expectation: M(Z) = M(9X − 8Y + 7) = 9·M(X) − 8·M(Y) + M(7) = 9·8 − 8·7 + 7 = 23.
    Based on the properties of the variance (for independent X and Y the variances add even for a difference, since the constant factor is squared): D(Z) = D(9X − 8Y + 7) = 9²·D(X) + (−8)²·D(Y) + 0 = 81·9 + 64·6 = 729 + 384 = 1113.
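The bookkeeping above can be sketched in a few lines (the helper names are mine, not standard):

```python
# Sketch: applying the rules M[aX + bY + c] = a*M[X] + b*M[Y] + c and,
# for independent X and Y, D[aX + bY + c] = a**2 * D[X] + b**2 * D[Y].

def mean_of_linear(a, mx, b, my, c):
    return a * mx + b * my + c

def var_of_linear(a, dx, b, dy):
    # The sign of the coefficients does not matter: variance scales by the square.
    return a**2 * dx + b**2 * dy

mz = mean_of_linear(9, 8, -8, 7, 7)   # 9*8 - 8*7 + 7 = 23
dz = var_of_linear(9, 9, -8, 6)       # 81*9 + 64*6 = 1113
print(mz, dz)
```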

    Algorithm for calculating the expected value

    Properties of a discrete random variable: all its values can be numbered with natural numbers, and each value is assigned a nonzero probability.
    1. Multiply the pairs x_i and p_i one by one.
    2. Add up the products x_i·p_i of all pairs.
      For example, for n = 4: m = Σx_i p_i = x1p1 + x2p2 + x3p3 + x4p4
    The distribution function of a discrete random variable is stepwise; it jumps at those points whose probabilities are positive.

    Example #1.

    x_i  1    3    4    7    9
    p_i  0.1  0.2  0.1  0.3  0.3

    We find the mathematical expectation by the formula m = Σx_i p_i.
    Mathematical expectation M[X]:
    M[X] = 1·0.1 + 3·0.2 + 4·0.1 + 7·0.3 + 9·0.3 = 5.9
    We find the variance by the formula d = Σx_i² p_i − M[X]².
    Variance D[X]:
    D[X] = 1²·0.1 + 3²·0.2 + 4²·0.1 + 7²·0.3 + 9²·0.3 − 5.9² = 7.69
    Standard deviation σ(X):
    σ = √D[X] = √7.69 ≈ 2.77
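The computations of Example #1 can be scripted directly from the definitions; a minimal sketch:

```python
# Sketch: M[X], D[X] and sigma for the distribution series of Example #1,
# computed straight from the definitions.

import math

def mean(xs, ps):
    return sum(x * p for x, p in zip(xs, ps))

def variance(xs, ps):
    m = mean(xs, ps)
    return sum(x**2 * p for x, p in zip(xs, ps)) - m**2

xs = [1, 3, 4, 7, 9]
ps = [0.1, 0.2, 0.1, 0.3, 0.3]

m = mean(xs, ps)        # 5.9
d = variance(xs, ps)    # 7.69
sigma = math.sqrt(d)    # ~ 2.77
print(m, d, sigma)
```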

    Example #2. A discrete random variable has the following distribution series:

    x  −10   −5    0     5     10
    p   a    0.32  2a    0.41  0.03

    Find the value a, the mathematical expectation, and the standard deviation of this random variable.

    Solution. We find the value a from the normalization condition Σp_i = 1:
    Σp_i = a + 0.32 + 2a + 0.41 + 0.03 = 0.76 + 3a = 1
    0.76 + 3a = 1, or 3a = 0.24, whence a = 0.08.
    With a = 0.08 the probabilities are (0.08, 0.32, 0.16, 0.41, 0.03), so
    m = −10·0.08 − 5·0.32 + 0·0.16 + 5·0.41 + 10·0.03 = −0.05,
    D = (−10)²·0.08 + (−5)²·0.32 + 0²·0.16 + 5²·0.41 + 10²·0.03 − (−0.05)² = 29.25 − 0.0025 = 29.2475,
    σ = √29.2475 ≈ 5.41.
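A short script mirroring this solution (a sketch, not a library routine):

```python
# Sketch: finishing Example #2 - find a from the normalization condition,
# then the mean and the standard deviation.

import math

xs = [-10, -5, 0, 5, 10]
# probabilities are [a, 0.32, 2a, 0.41, 0.03]; they must sum to 1
a = (1 - (0.32 + 0.41 + 0.03)) / 3                  # 0.24 / 3 = 0.08
ps = [a, 0.32, 2 * a, 0.41, 0.03]

m = sum(x * p for x, p in zip(xs, ps))              # -0.05
d = sum(x**2 * p for x, p in zip(xs, ps)) - m**2    # 29.2475
sigma = math.sqrt(d)                                # ~ 5.41
print(a, m, sigma)
```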

    Example #3. Determine the distribution law of a discrete random variable, given its variance, where x1 = 6; x2 = 9; x3 = x; x4 = 15
    p1 = 0.3; p2 = 0.3; p3 = 0.1; p4 = 0.3
    d(x) = 12.96

    Solution.
    Here we need to write out the formula for finding the variance d(x):
    d(x) = x1²p1 + x2²p2 + x3²p3 + x4²p4 − m(x)²
    where the expectation m(x) = x1p1 + x2p2 + x3p3 + x4p4.
    For our data:
    m(x) = 6·0.3 + 9·0.3 + x3·0.1 + 15·0.3 = 9 + 0.1·x3
    12.96 = 6²·0.3 + 9²·0.3 + x3²·0.1 + 15²·0.3 − (9 + 0.1·x3)²
    or −9/100·(x3² − 20·x3 + 96) = 0
    Accordingly, we need to find the roots of this quadratic equation, and there are two of them:
    x3 = 8, x3 = 12
    We choose the root that satisfies the condition x2 < x3 < x4: x3 = 12.

    Distribution law of a discrete random variable
    x 1 = 6; x 2 = 9; x 3 = 12; x 4 = 15
    p 1 = 0.3; p 2 = 0.3; p 3 = 0.1; p 4 = 0.3
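The quadratic and the final variance check of Example #3 can be verified in code (a sketch, assuming the root we want is the one lying between x2 = 9 and x4 = 15):

```python
# Sketch: solve the quadratic x3^2 - 20*x3 + 96 = 0 from Example #3 and
# verify that the chosen root gives variance 12.96.

import math

a, b, c = 1.0, -20.0, 96.0
disc = b * b - 4 * a * c
roots = sorted([(-b - math.sqrt(disc)) / (2 * a),
                (-b + math.sqrt(disc)) / (2 * a)])   # 8 and 12
# assumption: pick the root with 9 < x3 < 15
x3 = roots[1]

xs = [6, 9, x3, 15]
ps = [0.3, 0.3, 0.1, 0.3]
m = sum(x * p for x, p in zip(xs, ps))               # 10.2
d = sum(x**2 * p for x, p in zip(xs, ps)) - m**2     # 12.96
print(x3, m, d)
```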

    The mathematical expectation and the probability distribution of a random variable


    The mathematical expectation: definition

    One of the most important concepts in mathematical statistics and probability theory, characterizing the distribution of values or probabilities of a random variable. It is usually expressed as the weighted average of all possible values of the random variable. It is widely used in technical analysis, the study of number series, and the study of continuous and long-running processes. It is important in assessing risks and predicting price indicators when trading in financial markets, and it is used in developing strategies and methods of gaming tactics in the theory of gambling.

    The mathematical expectation is the mean value of a random variable; in probability theory it is considered together with the probability distribution of the random variable.

    The mathematical expectation is a measure of the mean value of a random variable in probability theory. The mathematical expectation of a random variable x is denoted M(x).



    The mathematical expectation is, in probability theory, the weighted average of all possible values that this random variable can take.


    The mathematical expectation is the sum of the products of all possible values of a random variable by the probabilities of these values.

    The mathematical expectation is the average benefit from one decision or another, provided that such a decision can be considered within the framework of the theory of large numbers and over a long distance.


    The mathematical expectation is, in the theory of gambling, the amount of the winnings that a player can earn or lose, on average, per bet. In the language of gamblers this is sometimes called the "player's edge" (if it is positive for the player) or the "house edge" (if it is negative for the player).

    The mathematical expectation is the probability of a win multiplied by the average win, minus the probability of a loss multiplied by the average loss.


    The mathematical expectation of a random variable in mathematical theory

    One of the important numerical characteristics of a random variable is the mathematical expectation. Let us introduce the concept of a system of random variables: consider a collection of random variables that are the results of the same random experiment. To each possible value of the system there corresponds a certain probability satisfying the Kolmogorov axioms. A function defined for all possible values of the random variables is called a joint distribution law. This function allows one to calculate the probabilities of any events; in particular, the joint distribution law of two random variables taking values from given sets is specified by the corresponding probabilities.


    The term "mathematical expectation" was introduced by Pierre-Simon, Marquis de Laplace (1795). It originated from the concept of the "expected value of a payoff", which first appeared in the 17th century in the theory of gambling in the works of Blaise Pascal and Christiaan Huygens. The first complete theoretical understanding and assessment of the concept, however, was given by Pafnuty Lvovich Chebyshev (mid-19th century).


    The distribution law of a random numerical variable (the distribution function and the distribution series, or the probability density) completely describes the behavior of a random variable. But in a number of problems it is enough to know a few numerical characteristics of the quantity under study (for example, its average value and the possible deviation from it) in order to answer the question posed. The main numerical characteristics of random variables are the mathematical expectation, the variance, the mode, and the median.

    The mathematical expectation of a discrete random variable is the sum of the products of its possible values by the corresponding probabilities. The mathematical expectation is sometimes called the weighted average, since it is approximately equal to the arithmetic mean of the observed values of the random variable over a large number of experiments. From the definition it follows that the mathematical expectation is no less than the smallest possible value of the random variable and no greater than the largest. The mathematical expectation of a random variable is a non-random (constant) quantity.


    The mathematical expectation has a simple physical meaning: if a unit mass is distributed along a straight line, either by placing some mass at individual points (for a discrete distribution) or by "smearing" it with a certain density (for an absolutely continuous distribution), then the point corresponding to the mathematical expectation is the coordinate of the "center of gravity" of the line.


    The average value of a random variable is a certain number that serves, as it were, as its "representative" and replaces it in rough approximate calculations. When we say "the average lamp lifetime is 100 hours" or "the mean point of impact is displaced 2 m to the right of the target", we are indicating a certain numerical characteristic of a random variable that describes its location on the numerical axis, i.e. a "characterization of position".

    Among the characteristics of position in probability theory, the most important role is played by the mathematical expectation of a random variable, which is sometimes called simply the mean value of the random variable.


    Consider a random variable X with possible values x1, x2, …, xn having probabilities p1, p2, …, pn. We need to characterize by some number the position of the values of the random variable on the x-axis, taking into account that these values have different probabilities. For this purpose it is natural to use the so-called "weighted average" of the values x_i, where each value x_i is weighted in proportion to the probability of that value. Thus we will calculate the mean of the random variable X, which we denote M[X]:


    This weighted average is called the mathematical expectation of the random variable. Thus we have introduced one of the most important concepts of probability theory, the concept of the mathematical expectation: the mathematical expectation of a random variable is the sum of the products of all possible values of the random variable by the probabilities of these values.

    The mathematical expectation of a random variable X is connected in a peculiar way with the arithmetic mean of the observed values of the random variable over a large number of experiments. This dependence is of the same type as the dependence between frequency and probability, namely: with a large number of experiments, the arithmetic mean of the observed values of a random variable approaches (converges in probability to) its mathematical expectation. From the connection between frequency and probability one can deduce, as a consequence, a similar connection between the arithmetic mean and the mathematical expectation. Indeed, consider a random variable X characterized by a distribution series:


    Suppose N independent experiments are performed, in each of which the variable X takes a certain value. Suppose the value x1 appeared m1 times, the value x2 appeared m2 times, and, generally, the value x_i appeared m_i times. Let us calculate the arithmetic mean of the observed values of X, which, in contrast to the mathematical expectation M[X], we denote M*[X]:

    As the number of experiments N increases, the frequencies m_i/N will approach (converge in probability to) the corresponding probabilities. Consequently, the arithmetic mean M*[X] of the observed values of the random variable will, as the number of experiments increases, approach (converge in probability to) its mathematical expectation. The connection just stated between the arithmetic mean and the mathematical expectation is the content of one of the forms of the law of large numbers.

    We already know that all forms of the law of large numbers state the fact that certain averages are stable over a large number of experiments. Here we are speaking of the stability of the arithmetic mean of a series of observations of the same quantity. With a small number of experiments the arithmetic mean of their results is random; with a sufficient increase in the number of experiments it becomes "almost non-random" and, stabilizing, approaches a constant value, the mathematical expectation.


    The property of the stability of averages over a large number of experiments is easy to verify experimentally. For example, when weighing a body in a laboratory on an accurate balance, we get a new value each time as a result of weighing; to reduce the observational error, we weigh the body several times and use the arithmetic mean of the values obtained. It is easy to see that with a further increase in the number of experiments (weighings), the arithmetic mean reacts to this increase less and less, and with a sufficiently large number of experiments it practically ceases to change.

    It should be noted that the most important characteristic of the position of a random variable - the mathematical expectation - does not exist for all random variables. It is possible to compose examples of such random variables for which the mathematical expectation does not exist, since the corresponding sum or integral diverges. However, for practice, such cases are not of significant interest. Usually the random variables we deal with have a limited range of possible values ​​and, of course, have a mathematical expectation.


    In addition to the most important of the characteristics of the position of a random variable - the mathematical expectation - other characteristics of the position are sometimes used in practice, in particular, the mode and median of a random variable.


    The mode of a random variable is its most probable value. Strictly speaking, the term "most probable value" applies only to discrete quantities; for a continuous quantity, the mode is the value at which the probability density is maximal. The figures show the mode for discrete and continuous random variables, respectively.


    If the distribution polygon (distribution curve) has more than one maximum, the distribution is called "multimodal".



    Sometimes there are distributions that have a minimum, not a maximum, in the middle. Such distributions are called "anti-modal".


    In the general case, the mode and the mathematical expectation of a random variable do not coincide. In the particular case when the distribution is symmetric and modal (i.e., has a mode) and a mathematical expectation exists, the expectation coincides with the mode and the center of symmetry of the distribution.

    Another characteristic of position is often used, the so-called median of a random variable. This characteristic is usually used only for continuous random variables, although it can formally be defined for a discrete variable as well. Geometrically, the median is the abscissa of the point at which the area bounded by the distribution curve is bisected.


    In the case of a symmetric modal distribution, the median coincides with the mathematical expectation and mode.

    The mathematical expectation is the mean value of a random variable, a numerical characteristic of the probability distribution of the random variable. In the most general way, the mathematical expectation of a random variable X(w) is defined as the Lebesgue integral with respect to the probability measure P on the original probability space:


    The mathematical expectation can also be calculated as the Lebesgue integral of x with respect to the probability distribution P_X of the variable X:


    In a natural way, you can define the concept of a random variable with an infinite mathematical expectation. Return times in some random walks are typical examples.

    Using the mathematical expectation, many numerical and functional characteristics of the distribution are determined (as the mathematical expectation of the corresponding functions of a random variable), for example, a generating function, a characteristic function, moments of any order, in particular, variance, covariance.

    The mathematical expectation is a characteristic of the location of the values of a random variable (the mean value of its distribution). In this capacity, the mathematical expectation serves as a "typical" parameter of the distribution, and its role is similar to the role of the static moment, the coordinate of the center of gravity of a mass distribution, in mechanics. The mathematical expectation differs from the other location characteristics by which a distribution is described in general terms (medians, modes) in the greater importance that it and the corresponding scattering characteristic, the variance, have in the limit theorems of probability theory. The meaning of the mathematical expectation is revealed most fully by the law of large numbers (Chebyshev's inequality) and the strong law of large numbers.

    The mathematical expectation of a discrete random variable

    Let there be some random variable that can take one of several numerical values (for example, the number of points in a throw of a die can be 1, 2, 3, 4, 5, or 6). In practice, the question often arises for such a quantity: what value does it take "on average" over a large number of trials? What will be our average income (or loss) from each risky operation?


    Let's say there is some kind of lottery, and we want to understand whether it is profitable to participate in it (or even to participate repeatedly, regularly). Let's say every fourth ticket wins, the prize is 300 rubles, and the price of any ticket is 100 rubles. Over an infinitely large number of participations, this is what happens: in three quarters of the cases we lose, and every three losses cost 300 rubles. In every fourth case we win 200 rubles (the prize minus the ticket price). So over four participations we lose 100 rubles on average, and over one, 25 rubles on average. In total, the average rate of our ruin will be 25 rubles per ticket.

    We throw a die. If it is not rigged (no shift of the center of gravity, etc.), then how many points will we get on average per throw? Since each outcome is equally probable, we simply take the arithmetic mean and get 3.5. Since this is an AVERAGE, there is no need to be indignant that no particular throw gives 3.5 points; the die has no face with such a number!

    Now let's summarize our examples:


    Let's look at the table just shown. On the left is the distribution table of a random variable: the value X can take one of n possible values (shown in the top row), and there can be no other values. Below, each possible value is labeled with its probability. On the right is the formula, where M(X) is called the mathematical expectation. The meaning of this value is that with a large number of trials (a large sample), the average value tends to this mathematical expectation.

    Let's go back to the same die. The mathematical expectation of the number of points in a throw is 3.5 (calculate it yourself using the formula if you don't believe it). Say you threw it a couple of times and got 4 and 6: the average is 5, far from 3.5. You throw once more and get 3; now the average is (4 + 6 + 3)/3 ≈ 4.33, still far from the mathematical expectation. Now do this crazy experiment: roll the die 1000 times! Even if the average is not exactly 3.5, it will be close to it.
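This "crazy experiment" is easy to simulate; a minimal sketch with a fixed seed so the run is reproducible:

```python
# Sketch: roll a fair die many times and watch the running mean
# approach M[X] = 3.5.

import random

random.seed(42)  # fixed seed for a reproducible run

def average_roll(n):
    """Average number of points over n simulated rolls of a fair die."""
    return sum(random.randint(1, 6) for _ in range(n)) / n

for n in (3, 100, 100_000):
    print(n, average_roll(n))
# with a few rolls the mean wanders; with 100 000 it sits very close to 3.5
```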

    Let's calculate the mathematical expectation for the lottery described above. The table will look like this:


    Then the mathematical expectation will be, as we established above:


    Another thing is that doing the same "on the fingers", without the formula, would be difficult if there were more options. Say, 75% losing tickets, 20% ordinary winning tickets, and 5% tickets with an extra-large prize.

    Now some properties of the mathematical expectation.

    Proving this is simple:


    A constant factor is allowed to be taken out of the sign of the mathematical expectation, that is:


    This is a special case of the linearity property of the mathematical expectation.

    Another consequence of the linearity of the mathematical expectation:

    that is, the mathematical expectation of the sum of random variables is equal to the sum of the mathematical expectations of the random variables.

    Let X, Y be independent random variables, then:

    This is also easy to prove: XY is itself a random variable, and if the original variables could take n and m values respectively, then XY can take nm values. The probability of each value is computed using the fact that the probabilities of independent events multiply. As a result, we get:
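    Both linearity and the product rule for independent variables can be verified numerically on small hand-made distributions (the distributions below are illustrative, not from the text):

```python
from itertools import product

# Two independent discrete random variables given as {value: probability}.
X = {1: 0.2, 2: 0.5, 3: 0.3}
Y = {0: 0.4, 10: 0.6}

def expectation(dist):
    """M(X) = sum of value * probability over the distribution table."""
    return sum(value * p for value, p in dist.items())

# E[X + Y] and E[XY] computed from the joint distribution
# (independence: each joint probability is the product of the marginals).
e_sum = sum((x + y) * px * py for (x, px), (y, py) in product(X.items(), Y.items()))
e_prod = sum(x * y * px * py for (x, px), (y, py) in product(X.items(), Y.items()))

assert abs(e_sum - (expectation(X) + expectation(Y))) < 1e-9   # M(X+Y) = M(X)+M(Y)
assert abs(e_prod - expectation(X) * expectation(Y)) < 1e-9    # M(XY) = M(X)M(Y)
```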


    The mathematical expectation of a continuous random variable

    Continuous random variables have a characteristic called the distribution density (probability density). In essence, it expresses the fact that a random variable takes some values from the set of real numbers more often and others less often. For example, consider the following graph:


    Here X is the random variable itself and f(x) is its distribution density. Judging by this graph, in experiments the value of X will often be a number close to zero; the chances of it exceeding 3 or falling below -3 are purely theoretical.


    For example, suppose there is a uniform distribution:



    This is quite consistent with intuition. Say, if we generate many random real numbers uniformly distributed on the segment [0; 1], their arithmetic mean should be about 0.5.
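    That intuition can be checked by numerically integrating x * f(x) for the uniform density on [0; 1], where f(x) = 1 on the segment (a rough midpoint-rule sketch):

```python
# Midpoint Riemann sum for the integral of x * f(x) over [0, 1],
# with f(x) = 1 (uniform density on the segment).
n = 100_000
dx = 1.0 / n
mean = sum((i + 0.5) * dx * 1.0 * dx for i in range(n))
print(round(mean, 4))  # 0.5
```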

    The properties of the mathematical expectation (linearity, etc.) that apply to discrete random variables apply here as well.

    Relationship between mathematical expectation and other statistical indicators

    In statistical analysis, alongside the mathematical expectation there is a system of interrelated indicators that reflect the homogeneity of phenomena and the stability of processes. Variation indicators often have no independent meaning and are used for further data analysis. The exception is the coefficient of variation, which characterizes the homogeneity of the data and is a valuable statistic in its own right.


    The degree of variability or stability of processes in statistical science can be measured using several indicators.

    The most important indicator characterizing the variability of a random variable is the variance, which is closely and directly related to the mathematical expectation. This parameter is actively used in other kinds of statistical analysis (hypothesis testing, analysis of cause-and-effect relationships, etc.). Like the mean linear deviation, the variance reflects the spread of the data around the mean.


    It is useful to translate the language of symbols into the language of words. It turns out that the variance is the mean square of the deviations. That is, first the mean is computed, then the difference between each original value and the mean is taken and squared, the squares are summed, and the sum is divided by the number of values in the population. The difference between an individual value and the mean measures the deviation. It is squared so that all deviations become positive numbers and so that positive and negative deviations do not cancel each other when summed. Then we simply take the arithmetic mean of the squared deviations. Mean, squared, deviations: the key to the magic word "variance" lies in just three words.

    However, the variance in its pure form, unlike the arithmetic mean or an index, is not used on its own. It is rather an auxiliary, intermediate indicator needed for other kinds of statistical analysis. It does not even have a normal unit of measurement: judging by the formula, it is the square of the unit of the original data.
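    The recipe "mean of the squared deviations" looks like this in code (the sample values are made up for illustration):

```python
data = [3, 7, 7, 19]  # made-up sample values

mean = sum(data) / len(data)             # 9.0
deviations = [x - mean for x in data]    # differences from the mean
variance = sum(d * d for d in deviations) / len(data)  # mean of squared deviations
std_dev = variance ** 0.5                # back to the original units

print(variance)  # 36.0
print(std_dev)   # 6.0
```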

    Let us measure a random variable N times; for example, we measure the wind speed ten times and want to find the average value. How is the average related to the distribution function?

    Or we roll a die a large number of times. The number of points rolled on each throw is a random variable that can take any natural value from 1 to 6. The arithmetic mean of the rolled points over all throws is also a random variable, but for large N it tends to a quite specific number, the mathematical expectation Mx. In this case Mx = 3.5.

    How does this value arise? Suppose that in N trials, 1 point was rolled n1 times, 2 points n2 times, and so on. Then the number of outcomes in which one point was rolled is:


    Likewise for the outcomes when 2, 3, 4, 5 and 6 points are rolled.


    Suppose now that we know the distribution law of a random variable x, that is, we know that a random variable x can take values ​​x1, x2, ..., xk with probabilities p1, p2, ..., pk.

    The mathematical expectation Mx of a random variable x is:


    The mathematical expectation is not always a reasonable estimate of a random variable. For example, to estimate the average wage it is more reasonable to use the median, that is, the value such that the number of people earning less than the median wage equals the number earning more.

    The probability p1 that the random variable x is less than the median x1/2 and the probability p2 that it is greater are equal to each other, both being 1/2. The median is not uniquely determined for all distributions.
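    The wage example can be illustrated with Python's statistics module (the wage figures are invented):

```python
from statistics import mean, median

# Hypothetical monthly wages: one very high salary skews the mean upward.
wages = [300, 320, 350, 380, 400, 420, 5000]

print(mean(wages))    # pulled up by the outlier, over 1000
print(median(wages))  # 380, a more typical wage
```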


    The standard deviation (root-mean-square deviation) in statistics is the degree to which observed data deviate from the AVERAGE value. It is denoted s or σ. A small standard deviation indicates that the data cluster around the mean, while a large one indicates that the original data lie far from it. The standard deviation equals the square root of a quantity called the variance, which is the average of the summed squared deviations of the original data from the mean. Thus the root-mean-square deviation of a random variable is the square root of its variance:


    Example. In a target-shooting test, calculate the variance and standard deviation of the random variable:


    Variation is the variability of the value of a trait among units of a population. The individual numerical values of a trait found in the studied population are called variants. The insufficiency of the average value for fully characterizing the population makes it necessary to supplement averages with indicators that assess their typicality by measuring the variability (variation) of the trait under study. The coefficient of variation is calculated by the formula:


    The range of variation (R) is the difference between the maximum and minimum values of the trait in the studied population. This indicator gives only the most general idea of the variability of the trait, since it shows the difference between the extreme values alone. That dependence on the extreme values gives the range of variation an unstable, random character.


    Average linear deviation is the arithmetic mean of the absolute (modulo) deviations of all values ​​of the analyzed population from their average value:


    Expected value in the theory of gambling

    The mathematical expectation is the average amount of money a gambler can win or lose on a given bet. This is a very important concept for the player, because it is fundamental to the assessment of most game situations. Expectation is also an optimal tool for analyzing basic card layouts and game situations.

    Let's say you are flipping a coin with a friend, each time betting $1 regardless of what comes up. Tails you win, heads you lose. The odds of tails are one to one, and you bet $1 against $1. Thus your mathematical expectation is zero, because mathematically you cannot expect to be ahead or behind after two tosses or after 200.


    Your hourly gain is zero. The hourly gain is the amount of money you expect to win in an hour. You can flip a coin 500 times within an hour, but you will neither win nor lose, because your chances are neither positive nor negative. From a serious player's point of view, such a betting scheme is not bad; it is simply a waste of time.

    But suppose someone wants to bet $ 2 against your $ 1 in the same game. Then you immediately have a positive expectation of 50 cents from each bet. Why 50 cents? On average, you win one bet and lose the second. Bet the first dollar and lose $ 1, bet the second and win $ 2. You bet $ 1 twice and are $ 1 ahead. So each of your one dollar bets gave you 50 cents.


    If you toss the coin 500 times in an hour, your hourly winnings will be $250, because on average you lose $1 250 times and win $2 250 times: $500 minus $250 equals $250 of total winnings. Note that the expected value, the average amount won per bet, is 50 cents: you won $250 by betting a dollar 500 times, which equals 50 cents per bet.
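    The whole calculation fits in a few lines:

```python
# Coin game from the text: you bet $1, the opponent bets $2, fair coin.
p = 0.5
ev_per_bet = p * 2 + (1 - p) * (-1)   # 0.5 dollars per $1 bet
hourly = 500 * ev_per_bet             # 500 flips per hour -> $250

print(ev_per_bet, hourly)  # 0.5 250.0
```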

    Expectation has nothing to do with short-term results. Your opponent, who decided to bet $ 2 against you, could beat you on the first ten tosses in a row, but you, having a 2: 1 betting advantage, all other things being equal, under any circumstances, earn 50 cents from every $ 1 bet. It makes no difference whether you win or lose one bet or several bets, but only if you have enough cash to calmly compensate for the costs. If you continue to bet in the same way, then over a long period of time your winnings will come up to the sum of your expectations in individual throws.


    Every time you make a bet with the best outcome (a bet that can be profitable over the long run), when the odds are in your favor, you are sure to win something on it, and it does not matter if you lose it or not in this hand. Conversely, if you make a bet with the worst outcome (a bet that is not profitable in the long run), when the odds are not in your favor, you are losing something regardless of whether you win or lose in the given hand.

    You make a bet with the best outcome when your expectation is positive, and it is positive when the odds are on your side. A bet with the worst outcome has negative expectation, which happens when the odds are against you. Serious gamblers bet only with the best outcome; with the worst, they fold. What does it mean for the odds to be in your favor? It means you stand to win more than the true odds warrant. The true odds of tails are 1 to 1, but you are getting 2 to 1 thanks to the ratio of the bets. In this case the odds are in your favor, and you definitely get the best outcome, with a positive expectation of 50 cents per bet.


    Here's a more complex example of expected value. Your buddy writes the numbers from one to five and bets $ 5 against your $ 1 that you will not determine the hidden number. Should you agree to such a bet? What is the expectation here?

    On average, you will guess wrong four times out of five, so the odds against guessing the number are 4 to 1: on average you lose a dollar per attempt. However, the payout is 5 to 1 against odds of only 4 to 1, so the odds are in your favor; you can take the bet and expect the better outcome. If you make this bet five times, on average you lose $1 four times and win $5 once, earning $1 over the five attempts, a positive expected value of 20 cents per bet.
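    The same arithmetic in code:

```python
# Guess a number from 1..5: win $5 with probability 1/5, lose $1 otherwise.
p_win = 1 / 5
ev = p_win * 5 + (1 - p_win) * (-1)
print(round(ev, 2))  # 0.2 dollars per attempt
```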


    A player who is going to win more than he bets, as in the example above, catches the odds. Conversely, he ruins the odds when he expects to win less than he bets. A player making a bet can have either positive or negative expectation, which depends on whether he catches or ruins the odds.

    If you bet $50 to win $10 with odds of winning of 4 to 1, you have a negative expectation of $2, because on average you win $10 four times and lose $50 once: over five bets you are down $10, or $2 per bet. And if you bet $30 to win $10 with the same 4 to 1 odds of winning, you have a positive expectation of $2, because you again win $10 four times and lose $30 once, for a $10 profit over five bets. These examples show that the first bet is bad and the second is good.
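    A small helper makes the comparison explicit (the function name is ours, not from the text):

```python
def bet_ev(p_win, win_amount, lose_amount):
    """Expected value of a single bet: average win minus average loss."""
    return p_win * win_amount - (1 - p_win) * lose_amount

# Both bets win $10 with probability 4/5 (odds of winning 4 to 1):
bad_bet = bet_ev(0.8, 10, 50)   # risk $50 -> about -$2 per bet on average
good_bet = bet_ev(0.8, 10, 30)  # risk $30 -> about +$2 per bet on average
print(bad_bet, good_bet)
```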


    Expectation is at the center of any game situation. When a bookmaker encourages football fans to bet $11 to win $10, he has a positive expectation of 50 cents on each such bet. If a casino pays even money on the pass line in craps, the casino's positive expectation is approximately $1.40 for every $100, because the game is structured so that everyone who bets on this line loses 50.7% of the time and wins 49.3% of the time on average. Undoubtedly, it is this seemingly minimal positive expectation that brings colossal profits to casino owners around the world. As Vegas World casino owner Bob Stupak noted, "one thousandth of a percent of negative expectation over a long enough distance will ruin the richest man in the world."


    Mathematical expectation when playing poker

    The game of poker is the most illustrative example of using the theory and properties of mathematical expectation.


    Expected value in poker is the average benefit of a particular decision, provided that such a decision can be considered within the framework of the law of large numbers and the long run. A successful poker game consists of always making moves with positive expectation.

    The mathematical meaning of expectation in poker is that we constantly deal with random variables when making decisions (we do not know which cards our opponent holds or which cards will come on later betting rounds). We must consider each decision from the point of view of the law of large numbers, which states that with a sufficiently large sample the average value of a random variable tends to its mathematical expectation.


    Among the particular formulas for calculating the mathematical expectation, the following is most applicable in poker:

    When playing poker, the expected value can be calculated for both bets and calls. In the first case, fold equity should be taken into account; in the second, the pot odds. When evaluating the expectation of a move, remember that a fold always has zero expectation. Thus, folding will always be more profitable than any move with negative expectation.
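    As a hedged sketch of such a calculation (the function, its name, and the numbers are illustrative assumptions, not a formula from the text):

```python
def call_ev(equity, pot, call_cost):
    """Sketch: EV of calling a bet, given your share of wins (equity).
    Winning gains the current pot; losing costs the call amount.
    All names and figures here are illustrative assumptions."""
    return equity * pot - (1 - equity) * call_cost

# Hypothetical spot: 30% equity, $100 in the pot, $20 to call.
ev = call_ev(0.30, 100, 20)
fold_ev = 0  # folding always has zero expectation, as noted above
print(ev > fold_ev)  # the call is profitable in this hypothetical spot
```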

    Expectation tells you what you can expect (profit or loss) for every dollar you risk. Casinos make money because the mathematical expectation from all the games that are practiced in them is in favor of the casino. With a sufficiently long series of games, one can expect that the client will lose his money, since the "probability" is in favor of the casino. However, professional casino players limit their games to short periods of time, thereby increasing the odds in their favor. The same goes for investing. If your expectation is positive, you can make more money by making many trades in a short period of time. Expectation is your percentage of profit per win multiplied by average profit minus your probability of loss multiplied by average loss.


    Poker can also be viewed in terms of mathematical expectation. You may assume a certain move is profitable, but in some cases it may be far from the best, because another move is more profitable. Say you hit a full house in five-card draw. Your opponent bets. You know that if you raise, he will call, so raising looks like the best tactic. But if you do raise, the two remaining players will definitely fold, whereas if you just call, you are fully confident that the two players behind you will call as well. When you raise you win one unit; when you just call you win two. Thus calling gives you the higher positive mathematical expectation and is the better tactic.

    The mathematical expectation can also give an idea of which tactics are less profitable in poker and which are more. For example, if when playing a certain hand you estimate that your losses will average 75 cents including the antes, then this hand should be played, because this is better than folding when the ante is $1.


    Another important reason to understand the essence of mathematical expectation is that it gives you peace of mind whether or not you win a particular bet: if you made a good bet or folded at the right time, you know you earned or saved a certain amount of money that a weaker player could not have saved. It is much harder to fold if you are upset that your opponent drew a stronger hand. With all this, the money you save by not playing, instead of betting, is added to your winnings for the night or the month.

    Just remember that if you swapped hands, your opponent would call you, and (as you will see in the article "The Fundamental Theorem of Poker") this is just one of your advantages. You should be happy when this happens. You can even learn to enjoy a losing hand, because you know that other players in your position would have lost much more.


    As mentioned in the coin-game example at the beginning, the hourly rate of return is related to the expected value, a concept especially important for professional players. Going to play poker, you should mentally estimate how much you can win per hour of play. In most cases you will have to rely on intuition and experience, but some mathematics helps too. For example, you are playing draw lowball and you see three players each bet $10 and then draw two cards, which is a very bad tactic; you can figure that every time they bet $10 they lose about $2. Each of them does this eight times an hour, meaning all three lose about $48 per hour. You are one of the four remaining players, who are roughly equal in skill, so these four players (you among them) get to divide that $48, yielding a profit of $12 per hour each. Your hourly rate in this case is simply your share of the money lost by the three bad players in an hour.
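    The hourly-rate estimate from this example:

```python
# Three weak players each make 8 bad $10 bets per hour,
# losing about $2 per bet (figures from the example above).
lost_per_hour = 3 * 8 * 2        # $48 given away each hour
strong_players = 4               # roughly equal players sharing it
hourly_rate = lost_per_hour / strong_players
print(hourly_rate)  # 12.0 dollars per hour
```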

    Over a long period of time, the player's total payoff is the sum of his mathematical expectations in individual hands. The more you play with positive expectation, the more you win; conversely, the more hands with negative expectation you play, the more you lose. As a consequence, you should choose games that maximize your positive expectation or avoid negative ones, so that you can maximize your hourly winnings.


    Positive mathematical expectation in game strategy

    If you know how to count cards, you may have an edge over the casino, provided they do not notice and throw you out. Casinos love drunken gamblers and cannot stand card counters. An edge allows you to win more times than you lose over the long run. Good money management, using expectation calculations, can help you extract more from your edge and reduce losses. Without an edge, you would be better off giving the money to charity. In exchange trading, the edge comes from the trading system, which creates more profit than losses, price spreads, and commissions. No money management will save a bad trading system.

    A positive expectation is defined as a value greater than zero. The larger this number, the stronger the positive expectation. If the value is less than zero, the expectation is negative, and the larger the modulus of the negative value, the worse the situation. If the result is zero, the expectation is break-even. You can win only when you have a positive mathematical expectation and a reasonable system of play; playing by intuition leads to disaster.


    Expectation and exchange trading

    The mathematical expectation is a widely used and popular statistical indicator in exchange trading on financial markets. First of all, this parameter is used to analyze the success of a trade. It is not hard to guess that the higher this value, the more reason to consider the studied trade successful. Of course, the analysis of a trader's work cannot be done with this parameter alone, but the calculated value, combined with other methods of assessing the quality of work, can significantly improve the accuracy of the analysis.


    The mathematical expectation is often calculated in the services of monitoring trading accounts, which allows you to quickly evaluate the work done on the deposit. As exceptions, one can cite strategies that use “sitting out” of unprofitable trades. A trader may be lucky for some time, and therefore, there may be no losses in his work at all. In this case, it will not be possible to navigate only by expectation, because the risks used in the work will not be taken into account.

    In trading on the market, expectation is most often used when predicting the profitability of a trading strategy or when predicting a trader's income based on the statistical data of his previous trades.

    In terms of money management, it is very important to understand that when making trades with negative expectation, there is no money management scheme that can definitely bring high profits. If you continue to play on the stock exchange under these conditions, then no matter how you manage your money, you will lose your entire account, no matter how large it was in the beginning.

    This axiom is not only true for games or trades with negative expectation, it is also true for games with equal odds. Therefore, the only case where you have a chance to benefit in the long term is when you make deals with a positive expected value.


    The difference between negative expectation and positive expectation is the difference between life and death. It doesn't matter how positive or how negative the expectation is; what matters is whether it is positive or negative. Therefore, before considering money management issues, you must find a game with positive expectation.

    If you don't have such a game, then no amount of money management in the world will save you. On the other hand, if you have a positive expectation, you can, through good money management, turn it into an exponential growth function. It doesn't matter how little positive expectation is! In other words, it doesn't matter how profitable a single contract trading system is. If you have a system that wins $ 10 per contract on a single trade (after deducting commissions and slippage), you can use money management techniques to make it more profitable than a system that shows an average profit of $ 1000 per trade (after deduction of commissions and slippage).


    What matters is not how profitable the system was, but how certain it can be said that the system will show at least minimal profit in the future. Therefore, the most important preparation a trader can make is to make sure that the system shows a positive mathematical expectation in the future.

    In order to have a positive mathematical expectation in the future, it is very important not to restrict the degrees of freedom of your system. This is achieved not only by eliminating or reducing the number of parameters to be optimized, but also by reducing the number of system rules as much as possible. Every parameter you add, every rule you make, every tiny change you make to the system reduces the number of degrees of freedom. Ideally, you need to build a fairly primitive and simple system that will consistently generate small profits in almost any market. Again, it is important to understand that it does not matter how profitable the system is, as long as it is profitable. The money you earn in trading will be earned through effective money management.

    A trading system is simply a tool that gives you a positive mathematical expectation so that money management can be used. Systems that work (show at least minimal profit) in only one or a few markets, or have different rules or parameters for different markets, most likely will not work in real time for long enough. The problem with most tech-savvy traders is that they spend too much time and effort optimizing the various rules and parameter values ​​of the trading system. This gives completely opposite results. Instead of spending energy and computer time increasing the profits of the trading system, focus your energy on increasing the level of reliability of making the minimum profit.

    Knowing that money management is just a numerical game that requires the use of positive expectations, a trader can stop looking for the "holy grail" of stock trading. Instead, he can start testing his trading method, find out how logically this method is, whether it gives positive expectations. The right money management methods applied to any, even mediocre trading methods, will do the rest of the work themselves.


    For any trader to succeed in his work, three key tasks must be solved: ensure that the number of successful deals exceeds the inevitable mistakes and miscalculations; set up the trading system so that the opportunity to earn arises as often as possible; achieve a stable positive result from your operations.

    And here the mathematical expectation can help us working traders. This term is one of the key ones in probability theory. With it, one can give an average estimate of a random value. The mathematical expectation of a random variable is like a center of gravity, if you imagine all possible values as points with masses equal to their probabilities.


    As applied to a trading strategy, to assess its effectiveness, the mathematical expectation of profit (or loss) is most often used. This parameter is defined as the sum of the products of the given levels of profit and loss and the probability of their occurrence. For example, the developed trading strategy assumes that 37% of all transactions will bring profit, and the rest - 63% - will be unprofitable. At the same time, the average income from a successful deal will be $ 7, and the average loss will be $ 1.4. Let's calculate the mathematical expectation of trading using the following system:

    What does this number mean? It says that, following the rules of this system, we will receive on average $1.708 from each closed trade. Since this efficiency estimate is greater than zero, the system may well be used for real trading. If the calculated expectation turns out to be negative, this indicates an average loss, and such trading will lead to ruin.
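    The calculation behind the $1.708 figure:

```python
# Strategy from the example: 37% winners averaging $7,
# 63% losers averaging $1.40 per trade.
p_win, avg_win = 0.37, 7.0
p_loss, avg_loss = 0.63, 1.4

ev = p_win * avg_win - p_loss * avg_loss
print(round(ev, 3))  # 1.708 dollars per closed trade
```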

    The amount of profit per trade can also be expressed as a relative value in the form of%. For example:

    - percentage of income for 1 transaction - 5%;

    - percentage of successful trading operations - 62%;

    - percentage of loss per 1 deal - 3%;

    - percentage of unsuccessful transactions - 38%;

    That is, the average trade will generate 1.96%.
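    And the percentage version, using the four figures listed above:

```python
# Relative version: income 5% on 62% of trades, loss 3% on 38% of trades.
ev_pct = 0.62 * 5 - 0.38 * 3
print(round(ev_pct, 2))  # 1.96 percent per average trade
```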

    It is possible to develop a system that, despite a prevalence of losing trades, gives a positive result, since its mathematical expectation is greater than zero.

    However, expectation alone is not enough. It is difficult to make money if the system gives very few trading signals: its profitability would be comparable to bank interest. Let each trade yield only $0.50 on average; but what if the system produces 1000 trades per year? That is a very serious amount in a relatively short time. It logically follows that another distinguishing feature of a good trading system is a short holding period for positions.



    - the number of boys among 10 newborns.

    It is quite clear that this number is not known in advance: among the next ten children born there may be 0, 1, 2, …, or 10 boys, one and only one of these options.

    And, to stay in shape, a little physical education:

    - the distance of a long jump (in some units).

    Even a master of sports cannot predict it :)

    But what are your hypotheses?

    2) Continuous random variable - takes all numerical values ​​from some finite or infinite range.

    Note: in the (Russian-language) educational literature, the abbreviations DSV (discrete random variable) and NSV (continuous random variable) are popular.

    First, let's analyze a discrete random variable, then - continuous.

    Distribution law of a discrete random variable

    - this is the correspondence between the possible values of this quantity and their probabilities. Most often, the law is written in a table:

    The term distribution series is also quite common, but in some situations it sounds ambiguous, so I will stick with "law".

    And now a very important point: since a random variable necessarily takes one of its values, the corresponding events form a full group, and the sum of the probabilities of their occurrence equals one:

    or, if written collapsed:

    So, for example, the law of distribution of probabilities of points dropped on a die is as follows:

    No comments.

    You might be under the impression that a discrete random variable can only take on "good" integer values. Let's dispel the illusion - they can be anything:

    Example 1

    Some game has the following winning distribution law:

    ... you've probably dreamed of such tasks for a long time :) I'll tell you a secret - me too. Especially after finishing work on field theory.

    Solution: since a random variable can take only one of three values, the corresponding events form full group, which means that the sum of their probabilities is equal to one:

    Let's expose the "partisan" (the unknown probability):

    - thus, the probability of winning conventional units is 0.4.

    Check: the probabilities sum to one, which is what we needed to verify.

    Answer:

    It is not uncommon that the distribution law has to be drawn up independently. For this, one uses the classical definition of probability, the multiplication/addition theorems for event probabilities, and other tools of probability theory:

    Example 2

    The box contains 50 lottery tickets, among which 12 are winning, with 2 of them winning 1,000 rubles each, and the rest - 100 rubles each. Draw up the distribution law of a random variable - the size of the payoff, if one ticket is taken at random from the box.

    Solution: as you have noticed, the values of a random variable are customarily arranged in ascending order. Therefore, we start with the smallest winnings, namely 0 rubles (a losing ticket).

    There are 50 - 12 = 38 such tickets in total, so by the classical definition:
    P(X = 0) = 38/50 = 0.76
    - the probability that a ticket drawn at random turns out to be a losing one.

    The remaining cases are simple. The probabilities of winning 100 and 1,000 rubles are:
    P(X = 100) = 10/50 = 0.2 and P(X = 1000) = 2/50 = 0.04

    Check: 0.76 + 0.2 + 0.04 = 1 - and this check is a particularly pleasant moment of such tasks!

    Answer: the required distribution law of the payoff:

    X (rubles):  0     100    1000
    p:           0.76  0.2    0.04
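The same lottery calculation can be sketched in Python (the counts 50, 12 and 2 come straight from the problem statement):

```python
from fractions import Fraction

TOTAL = 50                # tickets in the box
WINNING = 12              # winning tickets
BIG = 2                   # of them, tickets paying 1000 rubles
SMALL = WINNING - BIG     # the rest pay 100 rubles
LOSING = TOTAL - WINNING  # 38 losing tickets pay nothing

# Distribution law of the payoff X, by the classical definition of probability.
law = {
    0:    Fraction(LOSING, TOTAL),  # 38/50 = 0.76
    100:  Fraction(SMALL, TOTAL),   # 10/50 = 0.2
    1000: Fraction(BIG, TOTAL),     # 2/50  = 0.04
}

# The pleasant check: the probabilities form a complete group.
assert sum(law.values()) == 1
for payoff, p in law.items():
    print(payoff, float(p))
```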

    The next task for independent solution:

    Example 3

    The probability that the shooter will hit the target is. Draw up the distribution law of a random variable - the number of hits after 2 shots.

    ... I knew you missed these :) Remember the multiplication and addition theorems... Solution and answer at the end of the lesson.
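When checking your own solution to Example 3, note that the hit probability is not reproduced in the text above, so p = 0.7 in this sketch is an assumed placeholder - substitute your problem's value:

```python
p = 0.7    # assumed hit probability (placeholder, not from the problem)
q = 1 - p  # miss probability

# Number of hits X in 2 independent shots, via the
# multiplication theorem (independent events) and the
# addition theorem (mutually exclusive ways to get exactly one hit).
law = {
    0: q * q,      # miss, miss
    1: 2 * p * q,  # hit then miss, or miss then hit
    2: p * p,      # hit, hit
}
print(law)
```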

    The distribution law completely describes a random variable, but in practice it is often enough (and sometimes even more useful) to know only some of its numerical characteristics.

    The mathematical expectation of a discrete random variable

    In simple terms, this is the average expected value over many repetitions of the trials. Let a random variable take the values x1, x2, ..., xn with probabilities p1, p2, ..., pn respectively. Then the mathematical expectation of this random variable equals the sum of the products of all its values by the corresponding probabilities:

    M(X) = x1·p1 + x2·p2 + ... + xn·pn

    or, in collapsed form:

    M(X) = Σ xi·pi

    Let's calculate, for example, the mathematical expectation of the random variable - the number of points rolled on a die:

    M(X) = 1·(1/6) + 2·(1/6) + 3·(1/6) + 4·(1/6) + 5·(1/6) + 6·(1/6) = 21/6 = 3.5
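The same dice computation, in a couple of lines of Python (exact fractions keep the arithmetic clean):

```python
from fractions import Fraction

# M(X) = sum of value * probability over all faces of a fair die.
expectation = sum(x * Fraction(1, 6) for x in range(1, 7))
print(expectation)         # 7/2
print(float(expectation))  # 3.5
```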

    Now let's remember our hypothetical game:

    The question arises: is it profitable to play this game at all? ... Who has what impressions? You can't tell "offhand"! But this question is easily answered by calculating the expected value - essentially the weighted average of the winnings, weighted by their probabilities:

    Thus, the mathematical expectation of this game is negative - the game is losing.

    Don't trust the impressions - trust the numbers!

    Yes, here you can win 10 or even 20-30 times in a row, but in the long run we will inevitably be ruined. I would not advise you to play such games :) Well, maybe only for fun.

    From all of the above it follows that the mathematical expectation is itself NOT a random quantity - it is a fixed number.

    Creative assignment for self-study:

    Example 4

    Mr. X plays European roulette using the following system: he constantly bets 100 rubles on "red". Draw up the distribution law of a random variable - his winnings per spin. Calculate the mathematical expectation of the winnings and round it to the nearest kopeck. On average, how much does the player lose for every hundred he bets?

    Reference: European roulette contains 18 red, 18 black, and 1 green sector ("zero"). If "red" comes up, the player is paid double his bet; otherwise the bet goes to the casino's income.
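After you have drawn up the law yourself, a short sketch can check the expectation. The sector counts are taken from the reference above; the per-spin winnings of +100 / -100 rubles follow from the payout rule:

```python
from fractions import Fraction

RED, BLACK, ZERO = 18, 18, 1
pockets = RED + BLACK + ZERO  # 37 sectors in European roulette
bet = 100                     # rubles placed on "red"

p_win = Fraction(RED, pockets)            # 18/37: red comes up, gain +100
p_lose = Fraction(BLACK + ZERO, pockets)  # 19/37: the bet is lost, -100

# Mathematical expectation of the winnings per spin.
expectation = bet * p_win - bet * p_lose
print(expectation)                   # -100/37
print(round(float(expectation), 2))  # -2.7
```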

    There are many other roulette systems, for which you can create your own probability tables. But this is a case where we do not need any distribution laws or tables, because it has been established for certain that the player's mathematical expectation will be exactly the same. From system to system, the only thing that changes is