
    Statistical mathematical expectation. The mathematical expectation of a discrete random variable.

    The concept of mathematical expectation can be considered using the example of throwing a die. With each throw, the number of points rolled is recorded; these are natural numbers in the range 1 - 6.

    After a certain number of throws, simple calculations give the arithmetic mean of the points rolled.

    Like the occurrence of any particular value in this range, this mean is itself a random value.

    What happens if the number of throws is increased many times over? With a large number of throws, the arithmetic mean of the points approaches a specific number, which in probability theory is called the mathematical expectation.

    So, the mathematical expectation is understood as the average value of a random variable. It can also be presented as the weighted sum of the values of the random variable, with the probabilities of those values as weights.

    This concept has several synonyms:

    • mean;
    • average value;
    • indicator of the central tendency;
    • first moment.

    In other words, it is nothing more than a number around which the values ​​of a random variable are distributed.

    In various spheres of human activity, approaches to understanding the mathematical expectation will be slightly different.

    It can be viewed as:

    • the average benefit received from making a decision, when such a decision is considered from the point of view of the law of large numbers;
    • the possible amount of winning or losing (in gambling theory), calculated on average per bet. In gambling slang this is called the "player's edge" (positive for the player) or the "house edge" (negative for the player);
    • the probability of profit multiplied by the average profit, minus the probability of loss multiplied by the average loss.

    Not every random variable has a mathematical expectation. It is absent for those variables for which the corresponding sum or integral diverges.

    Mathematical expectation properties

    Like any statistical parameter, the mathematical expectation has a number of basic properties:

    • the expectation of a constant is equal to that constant: M(C) = C;
    • a constant factor can be taken out of the expectation sign: M(kX) = k·M(X);
    • the expectation of a sum equals the sum of the expectations: M(X + Y) = M(X) + M(Y);
    • for independent random variables, the expectation of a product equals the product of the expectations: M(XY) = M(X)·M(Y).

    These properties are examined in detail below.


    Basic formulas for mathematical expectation

    The calculation of the mathematical expectation can be performed both for random variables characterized by discreteness (formula 1) and for those characterized by continuity (formula 2), as sketched in the code below:

    1. M(X) = ∑ xi·pi (i = 1, …, n), where xi are the values of the random variable and pi are their probabilities;
    2. M(X) = ∫ x·f(x) dx, taken over the whole real line, where f(x) is the given probability density.
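
    Both formulas can be illustrated with a short Python sketch: an exact weighted sum handles the discrete case, and a simple midpoint Riemann sum approximates the continuous integral (the exponential density and the integration interval below are illustrative assumptions).

```python
# A minimal sketch of formulas (1) and (2).
import math

def discrete_expectation(values, probs):
    """M(X) = sum of x_i * p_i for a discrete random variable."""
    assert abs(sum(probs) - 1.0) < 1e-9, "probabilities must sum to 1"
    return sum(x * p for x, p in zip(values, probs))

def continuous_expectation(density, a, b, steps=100_000):
    """Approximate M(X) = integral of x * f(x) dx over [a, b] (midpoint rule)."""
    dx = (b - a) / steps
    return sum((a + (i + 0.5) * dx) * density(a + (i + 0.5) * dx) * dx
               for i in range(steps))

# Discrete example: a fair die has expectation 3.5.
print(discrete_expectation([1, 2, 3, 4, 5, 6], [1/6] * 6))

# Continuous example: exponential density f(x) = e^(-x) on [0, inf), expectation 1.
print(round(continuous_expectation(lambda x: math.exp(-x), 0.0, 50.0), 4))
```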

    Examples of calculating the expected value

    Example A.

    Let us find the average height of the dwarfs in the tale of Snow White. Each of the 7 dwarfs has a known height: 1.25; 0.98; 1.05; 0.71; 0.56; 0.95 and 0.81 m.

    The calculation algorithm is quite simple:

    • we find the sum of all values of the height (the random variable):
      1.25 + 0.98 + 1.05 + 0.71 + 0.56 + 0.95 + 0.81 = 6.31;
    • the resulting sum is divided by the number of dwarfs:
      6.31 : 7 ≈ 0.90.

    Thus, the average height of the dwarfs in the fairy tale is about 90 cm. In other words, this is the mathematical expectation of a dwarf's height.

    Example B.

    A discrete random variable takes the values 4, 6 and 10 with probabilities 0.2, 0.3 and 0.5. Working formula: M(x) = 4·0.2 + 6·0.3 + 10·0.5 = 7.6.
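
    As a quick check, both worked examples can be reproduced in a few lines of Python (Example A is an ordinary arithmetic mean, i.e. an expectation with equal weights of 1/7):

```python
# Example A: equal weights give the ordinary arithmetic mean.
heights = [1.25, 0.98, 1.05, 0.71, 0.56, 0.95, 0.81]
print(round(sum(heights) / len(heights), 2))                 # 0.9 m

# Example B: a weighted sum of values by their probabilities.
values, probs = [4, 6, 10], [0.2, 0.3, 0.5]
print(round(sum(x * p for x, p in zip(values, probs)), 2))   # 7.6
```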

    Practical implementation of mathematical expectation

    The calculation of the statistical indicator of mathematical expectation is resorted to in various fields of practice, first of all in the commercial sphere. Indeed, Huygens introduced this indicator in connection with determining the chances of an event being favorable or, on the contrary, unfavorable.

    This parameter is widely used to assess risks, especially when it comes to financial investments.
    So, in entrepreneurship, the calculation of the mathematical expectation acts as a method for assessing risk when calculating prices.

    This indicator can also be used to assess the effectiveness of certain measures, for example in labor protection, since it makes it possible to estimate the probability of an event occurring.

    Another area of application of this parameter is management. It can also be used in product quality control: for example, with the help of the mathematical expectation one can estimate the likely number of defective parts produced.

    The mathematical expectation is indispensable in the statistical processing of results obtained in the course of scientific research. It makes it possible to estimate the probability of a desirable or undesirable outcome of an experiment or study depending on the degree to which the goal is achieved: achieving the goal is associated with gain and benefit, while failing to achieve it is associated with loss.

    Using mathematical expectation in Forex

    This statistical parameter can also be applied practically when conducting operations in the foreign exchange market. It can be used to analyze the success of trading transactions: the higher its value, the more successful the trading is considered to be.

    It is also important to remember that the mathematical expectation should not be regarded as the only statistical parameter for analyzing a trader's performance. Using several statistical parameters together with the average value considerably increases the accuracy of the analysis.

    This parameter has proven itself well in the monitoring of trading accounts, where it allows a quick assessment of the work carried out on a deposit account. When a trader's activity is successful and losses are avoided, it is not recommended to rely solely on the mathematical expectation: in that case risks are not taken into account, which reduces the effectiveness of the analysis.

    Research conducted on traders' tactics shows that:

    • the most effective are tactics based on random entries;
    • the least effective are tactics based on structured entries.

    In achieving positive results, it is equally important:

    • money management tactics;
    • exit strategies.

    Using an indicator such as the mathematical expectation, one can estimate the expected profit or loss per dollar invested. For all the games practiced in a casino this indicator is in favor of the house; this is what allows the casino to make money. Over a long series of games, the likelihood that a client will lose money increases significantly.

    The games of professional players are limited to short time intervals, which increases the likelihood of winning and reduces the risk of losing. The same pattern is observed when performing investment operations.

    An investor can earn a significant amount with a positive expectation and a large number of transactions in a short time period.

    The expectation can be expressed as the probability of profit (PW) times the average profit (AW) minus the probability of loss (PL) times the average loss (AL): E = PW·AW - PL·AL.

    As an example, consider the following: position - $12.5 thousand, portfolio - $100 thousand, deposit risk - 1%. Trades are profitable in 40% of cases with an average profit of 20%; in the case of a loss, the average loss is 5%. The expected value of a trade is then 12,500·(0.4·0.2 - 0.6·0.05) = $625.
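
    A minimal sketch of this calculation, using only the numbers from the example above:

```python
# E = PW*AW - PL*AL, scaled by the position size.
position = 12_500              # position size, $
p_win, avg_win = 0.40, 0.20    # 40% of trades win 20% on average
p_loss, avg_loss = 0.60, 0.05  # 60% of trades lose 5% on average

expected_value = position * (p_win * avg_win - p_loss * avg_loss)
print(expected_value)          # 625.0
```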

    In addition to distribution laws, random variables can also be described by numerical characteristics.

    The mathematical expectation M(x) of a random variable is its mean value.

    The mathematical expectation of a discrete random variable is calculated by the formula

    M(x) = x1·p1 + x2·p2 + … + xn·pn,

    where xi are the values of the random variable and pi are their probabilities.

    Consider the properties of the expected value:

    1. The mathematical expectation of a constant is equal to the constant itself:

    M(C) = C

    2. If a random variable is multiplied by a number k, its mathematical expectation is multiplied by the same number:

    M(kx) = kM(x)

    3. The mathematical expectation of a sum of random variables is equal to the sum of their mathematical expectations:

    M(x1 + x2 + … + xn) = M(x1) + M(x2) + … + M(xn)

    4. M(x1 - x2) = M(x1) - M(x2)

    5. For independent random variables x1, x2, …, xn the mathematical expectation of the product is equal to the product of their mathematical expectations:

    M(x1·x2·…·xn) = M(x1)·M(x2)·…·M(xn)

    6. M(x - M(x)) = M(x) - M(M(x)) = M(x) - M(x) = 0

    Let us calculate the mathematical expectation for the random variable from Example 11 (it takes the values 0, 1, 2 with probabilities 1/4, 1/2, 1/4):

    M(x) = 0·1/4 + 1·1/2 + 2·1/4 = 1.

    Example 12. Let the random variables x1 and x2 be given by the following distribution laws:

    x1 (Table 2):
    values:        -0.1   -0.01   0     0.01   0.1
    probabilities:  0.1    0.2    0.4   0.2    0.1

    x2 (Table 3):
    values:        -20   -10    0     10    20
    probabilities:  0.3   0.1   0.2   0.1   0.3

    Calculate M(x1) and M(x2):

    M(x1) = (-0.1)·0.1 + (-0.01)·0.2 + 0·0.4 + 0.01·0.2 + 0.1·0.1 = 0

    M(x2) = (-20)·0.3 + (-10)·0.1 + 0·0.2 + 10·0.1 + 20·0.3 = 0

    The mathematical expectations of both random variables are the same - they are equal to zero. However, the nature of their distributions is different. The values of x1 differ little from the mathematical expectation, while the values of x2 differ from it considerably, and the probabilities of such deviations are not small. These examples show that the average value alone does not tell us which deviations from it occur, either upward or downward. For instance, with the same average annual precipitation in two areas, one cannot say that the areas are equally favorable for agricultural work; similarly, the average wage does not allow one to judge the proportion of highly and poorly paid workers. Therefore a numerical characteristic is introduced - the dispersion (variance) D(x), which characterizes the degree of deviation of a random variable from its mean value:

    D(x) = M[(x - M(x))²].   (2)

    Variance is the mathematical expectation of the squared deviation of a random variable from its mathematical expectation. For a discrete random variable, the variance is calculated by the formula:

    D(x) = ∑ (xi - M(x))²·pi.   (3)

    It follows from the definition of variance that D(x) ≥ 0.

    Dispersion properties:

    1. The variance of a constant is zero:

    D(C) = 0

    2. If a random variable is multiplied by a number k, the variance is multiplied by the square of this number:

    D(kx) = k²D(x)

    3. D(x) = M(x²) - M²(x)

    4. For pairwise independent random variables x1, x2, …, xn the variance of the sum is equal to the sum of the variances:

    D(x1 + x2 + … + xn) = D(x1) + D(x2) + … + D(xn)

    Let us calculate the variance for the random variable from Example 11.

    The mathematical expectation is M(x) = 1. Therefore, by formula (3):

    D(x) = (0 - 1)²·1/4 + (1 - 1)²·1/2 + (2 - 1)²·1/4 = 1/4 + 1/4 = 1/2

    Note that it is easier to calculate the variance using property 3:

    D(x) = M(x²) - M²(x).

    Let us calculate the variance for the random variables x1 and x2 from Example 12 using this formula. The mathematical expectations of both random variables are zero.

    D(x1) = 0.01·0.1 + 0.0001·0.2 + 0.0001·0.2 + 0.01·0.1 = 0.001 + 0.00002 + 0.00002 + 0.001 = 0.00204

    D(x2) = (-20)²·0.3 + (-10)²·0.1 + 10²·0.1 + 20²·0.3 = 120 + 10 + 10 + 120 = 260
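
    The same numbers can be checked with a short script that uses property 3, D(x) = M(x²) - M²(x); the helper functions below are only an illustration:

```python
# Mean and variance of a discrete distribution given as values and probabilities.
def mean(values, probs):
    return sum(x * p for x, p in zip(values, probs))

def variance(values, probs):
    m = mean(values, probs)
    return mean([x * x for x in values], probs) - m * m   # M(x^2) - M(x)^2

x1, p1 = [-0.1, -0.01, 0, 0.01, 0.1], [0.1, 0.2, 0.4, 0.2, 0.1]
x2, p2 = [-20, -10, 0, 10, 20], [0.3, 0.1, 0.2, 0.1, 0.3]

print(round(mean(x1, p1), 10), round(variance(x1, p1), 5))   # 0.0  0.00204
print(round(mean(x2, p2), 10), round(variance(x2, p2), 1))   # 0.0  260.0
```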

    The closer the variance value is to zero, the smaller the scatter of the random variable relative to the mean value.

    The quantity σ(x) = √D(x) is called the standard deviation.

    The mode Md of a discrete random variable x is the value of the random variable that has the highest probability.

    The mode Md of a continuous random variable x is the real number defined as the point of maximum of the probability density f(x).

    The median Mn of a continuous random variable x is the real number satisfying the equation

    P(x < Mn) = P(x > Mn) = 1/2.

    Basic numerical characteristics of discrete and continuous random variables: mathematical expectation, variance and standard deviation. Their properties and examples.

    The distribution law (distribution function and distribution series or probability density) completely describe the behavior of a random variable. But in a number of problems it is enough to know some of the numerical characteristics of the investigated quantity (for example, its average value and possible deviation from it) in order to answer the question posed. Consider the main numerical characteristics of discrete random variables.

    Definition 7.1. The mathematical expectation of a discrete random variable is the sum of the products of its possible values by the corresponding probabilities:

    M(X) = x1·p1 + x2·p2 + … + xn·pn.   (7.1)

    If the number of possible values of the random variable is infinite, then M(X) = ∑ xi·pi, provided that the resulting series converges absolutely.

    Remark 1. The mathematical expectation is sometimes called the weighted average, since it is approximately equal to the arithmetic mean of the observed values of the random variable over a large number of experiments.

    Remark 2. From the definition of the mathematical expectation it follows that its value is not less than the smallest possible value of the random variable and not more than the largest.

    Remark 3. The mathematical expectation of a discrete random variable is a non-random (constant) quantity. We will see later that the same is true for continuous random variables.

    Example 1. Find the mathematical expectation of the random variable X - the number of standard parts among three selected from a batch of 10 parts, among which 2 are defective. Let us compose the distribution series for X. It follows from the problem statement that X can take the values 1, 2, 3, with probabilities 1/15, 7/15 and 7/15 respectively. Then M(X) = 1·1/15 + 2·7/15 + 3·7/15 = 36/15 = 2.4.
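
    The distribution series and the expectation in Example 1 can be verified with combinations (8 standard parts, 2 defective, 3 parts drawn):

```python
# Hypergeometric probabilities for the number of standard parts among 3 drawn.
from math import comb

total = comb(10, 3)   # all ways to pick 3 parts out of 10
probs = {k: comb(8, k) * comb(2, 3 - k) / total for k in (1, 2, 3)}

print(probs)                                  # P(1) = 1/15, P(2) = P(3) = 7/15 (as decimals)
print(sum(k * p for k, p in probs.items()))   # 2.4
```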

    Example 2. Determine the mathematical expectation of the random variable X - the number of coin tosses up to and including the first appearance of the coat of arms. This variable can take an infinite number of values (the set of possible values is the set of natural numbers). Its distribution series is:

    X:  1     2        …   k        …
    p:  0.5   (0.5)^2  …   (0.5)^k  …

    Then M(X) = 1·0.5 + 2·(0.5)^2 + … + k·(0.5)^k + … = 2 (in the calculation, the formula for the sum of an infinitely decreasing geometric progression is used twice).
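
    Both the partial sums of the series and a direct simulation confirm the value M(X) = 2:

```python
import random

# Partial sums of k * (0.5)^k approach the exact expectation 2.
partial = sum(k * 0.5 ** k for k in range(1, 51))
print(partial)                        # very close to 2

# The same value estimated by simulating tosses until the first coat of arms.
random.seed(0)
trials = 100_000
tosses = 0
for _ in range(trials):
    count = 1
    while random.random() < 0.5:      # tails with probability 0.5: keep tossing
        count += 1
    tosses += count
print(tosses / trials)                # close to 2
```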

    Mathematical expectation properties.

    1) The mathematical expectation of a constant is equal to the constant itself:

    M(C) = C.   (7.2)

    Proof. Considering C as a discrete random variable taking a single value C with probability p = 1, we get M(C) = C·1 = C.

    2) A constant factor can be taken out of the sign of the mathematical expectation:

    M(CX) = C·M(X).   (7.3)

    Proof. If the random variable X is given by the distribution series x1, x2, …, xn with probabilities p1, p2, …, pn, then

    M(CX) = Cx1·p1 + Cx2·p2 + … + Cxn·pn = C(x1·p1 + x2·p2 + … + xn·pn) = C·M(X).

    Definition 7.2. Two random variables are called independent if the distribution law of one of them does not depend on what values the other has taken. Otherwise the random variables are dependent.

    Definition 7.3. The product of independent random variables X and Y is the random variable XY whose possible values are the products of all possible values of X by all possible values of Y, with the corresponding probabilities equal to the products of the probabilities of the factors.

    3) The mathematical expectation of the product of two independent random variables is equal to the product of their mathematical expectations:

    M(XY) = M(X)M(Y). (7.4)

    Proof. To simplify the calculations, we restrict ourselves to the case when X takes only the values x1, x2 with probabilities p1, p2 and Y takes only the values y1, y2 with probabilities q1, q2. Then

    M(XY) = x1y1·p1q1 + x2y1·p2q1 + x1y2·p1q2 + x2y2·p2q2 = y1q1(x1p1 + x2p2) + y2q2(x1p1 + x2p2) = (y1q1 + y2q2)(x1p1 + x2p2) = M(X)·M(Y).

    Remark 1. Similarly, this property can be proved for a larger number of possible values ​​of the factors.

    Remark 2. Property 3 is valid for the product of any number of independent random variables, which is proved by the method of mathematical induction.

    Definition 7.4. The sum of random variables X and Y is the random variable X + Y whose possible values are the sums of each possible value of X with each possible value of Y; the probabilities of such sums are equal to the products of the probabilities of the terms (for dependent random variables, the product of the probability of one term by the conditional probability of the other).

    4) The mathematical expectation of the sum of two random variables (dependent or independent) is equal to the sum of the mathematical expectations of the terms:

    M (X + Y) = M (X) + M (Y). (7.5)

    Proof.

    Consider again the random variables from the proof of property 3. The possible values of X + Y are x1 + y1, x1 + y2, x2 + y1, x2 + y2. Let us denote their probabilities by p11, p12, p21 and p22 respectively. Then

    M(X + Y) = (x1 + y1)p11 + (x1 + y2)p12 + (x2 + y1)p21 + (x2 + y2)p22 =

    = x1(p11 + p12) + x2(p21 + p22) + y1(p11 + p21) + y2(p12 + p22).

    Let us prove that p11 + p12 = p1. Indeed, the event that X + Y takes the value x1 + y1 or x1 + y2, whose probability is p11 + p12, coincides with the event X = x1 (whose probability is p1). In the same way one proves that p21 + p22 = p2, p11 + p21 = q1, p12 + p22 = q2. Hence,

    M(X + Y) = x1p1 + x2p2 + y1q1 + y2q2 = M(X) + M(Y).

    Remark. Property 4 implies that the mathematical expectation of the sum of any number of random variables is equal to the sum of the mathematical expectations of the terms.

    Example. Find the mathematical expectation of the sum of the number of points obtained by throwing five dice.

    Let us find the mathematical expectation of the number of points obtained by throwing one die:

    M(X1) = (1 + 2 + 3 + 4 + 5 + 6)·1/6 = 3.5.

    The same number is the mathematical expectation of the number of points on any of the dice. Therefore, by property 4, M(X) = 5·3.5 = 17.5.
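
    A simulation sketch of the five-dice example; the sample mean of the sum settles near the theoretical value 17.5:

```python
import random
random.seed(1)

trials = 200_000
total = sum(sum(random.randint(1, 6) for _ in range(5)) for _ in range(trials))
print(total / trials)   # approximately 17.5
```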

    Dispersion.

    In order to have an idea of the behavior of a random variable, it is not enough to know only its mathematical expectation. Consider two random variables X and Y given by distribution series of the form

    X:  49    50    51
    p:  0.1   0.8   0.1

    Y:  0     100
    p:  0.5   0.5

    We find M(X) = 49·0.1 + 50·0.8 + 51·0.1 = 50 and M(Y) = 0·0.5 + 100·0.5 = 50. As we can see, the mathematical expectations of both variables are equal; but while M(X) describes the behavior of X well, being its most probable value (and the other values differ little from 50), the values of Y deviate substantially from M(Y). Therefore, along with the mathematical expectation, it is desirable to know how much the values of a random variable deviate from it. The variance is used to characterize this.

    Definition 7.5. The dispersion (scattering) of a random variable is the mathematical expectation of the square of its deviation from its mathematical expectation:

    D(X) = M(X - M(X))².   (7.6)

    Let us find the variance of the random variable X (the number of standard parts among those selected) from Example 1 of this lecture. We calculate the squared deviation of each possible value from the mathematical expectation:

    (1 - 2.4)² = 1.96; (2 - 2.4)² = 0.16; (3 - 2.4)² = 0.36. Hence,

    D(X) = 1.96·1/15 + 0.16·7/15 + 0.36·7/15 = 5.6/15 ≈ 0.37.

    Remark 1. In determining the variance, it is not the deviation from the mean itself that is evaluated, but its square. This is done so that deviations of different signs do not compensate each other.

    Remark 2. It follows from the definition of variance that this quantity takes only non-negative values.

    Remark 3. There is a more convenient formula for calculating the variance, the validity of which is proved in the following theorem:

    Theorem 7.1. D(X) = M(X²) - M²(X).   (7.7)

    Proof.

    Using the fact that M(X) is a constant, and the properties of the mathematical expectation, we transform formula (7.6) as follows:

    D(X) = M(X - M(X))² = M(X² - 2X·M(X) + M²(X)) = M(X²) - 2M(X)·M(X) + M²(X) =

    = M(X²) - 2M²(X) + M²(X) = M(X²) - M²(X), as required.

    Example. Let us calculate the variances of the random variables X and Y discussed at the beginning of this section.

    D(X) = (49²·0.1 + 50²·0.8 + 51²·0.1) - 50² = 2500.2 - 2500 = 0.2.

    D(Y) = (0²·0.5 + 100²·0.5) - 50² = 5000 - 2500 = 2500.

    So the variance of the second random variable is more than ten thousand times greater than the variance of the first. Thus, even without knowing the distribution laws of these variables, we can assert from the known variances that X deviates little from its mathematical expectation, while for Y this deviation is quite significant.
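
    A quick check of Theorem 7.1 on these two distributions:

```python
# D = M(X^2) - M(X)^2 for a discrete distribution.
def mean_and_variance(values, probs):
    m1 = sum(x * p for x, p in zip(values, probs))
    m2 = sum(x * x * p for x, p in zip(values, probs))
    return m1, m2 - m1 * m1

print(mean_and_variance([49, 50, 51], [0.1, 0.8, 0.1]))   # (50.0, ~0.2)
print(mean_and_variance([0, 100], [0.5, 0.5]))            # (50.0, 2500.0)
```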

    Dispersion properties.

    1) The variance of a constant C is zero:

    D(C) = 0.   (7.8)

    Proof. D(C) = M((C - M(C))²) = M((C - C)²) = M(0) = 0.

    2) A constant factor can be taken out of the variance sign by squaring it:

    D(CX) = C²D(X).   (7.9)

    Proof. D(CX) = M((CX - M(CX))²) = M((CX - CM(X))²) = M(C²(X - M(X))²) = C²D(X).

    3) The variance of the sum of two independent random variables is equal to the sum of their variances:

    D(X + Y) = D(X) + D(Y).   (7.10)

    Proof. D(X + Y) = M((X + Y)²) - (M(X) + M(Y))² = M(X² + 2XY + Y²) - (M(X) + M(Y))² = M(X²) + 2M(X)M(Y) + M(Y²) - M²(X) - 2M(X)M(Y) - M²(Y) = (M(X²) - M²(X)) + (M(Y²) - M²(Y)) = D(X) + D(Y).

    Corollary 1. The variance of the sum of several mutually independent random variables is equal to the sum of their variances.

    Corollary 2. The variance of the sum of a constant and a random variable is equal to the variance of the random variable.

    4) The variance of the difference of two independent random variables is equal to the sum of their variances:

    D(X - Y) = D(X) + D(Y).   (7.11)

    Proof. D(X - Y) = D(X) + D(-Y) = D(X) + (-1)²D(Y) = D(X) + D(Y).

    The variance gives the mean square of the deviation of a random variable from its mean; to estimate the deviation itself, a quantity called the standard deviation is used.

    Definition 7.6. The standard deviation σ of a random variable X is the square root of its variance:

    σ(X) = √D(X).

    Example. In the previous example, the standard deviations of X and Y are equal to σ(X) = √0.2 ≈ 0.45 and σ(Y) = √2500 = 50, respectively.

    The mathematical expectation and the probability distribution of a random variable


    The mathematical expectation: the definition

    One of the most important concepts in mathematical statistics and probability theory, characterizing the distribution of the values of a random variable. It is usually expressed as the weighted average of all possible values of the random variable. It is widely used in technical analysis, the study of number series, and the study of continuous and long-running processes. It is important in risk assessment and in predicting price indicators when trading in financial markets, and it is used in developing strategies and methods of play in the theory of gambling.

    The mathematical expectation is the mean value of a random variable when its probability distribution is considered in probability theory.

    The mathematical expectation is a measure of the mean value of a random variable in probability theory. The mathematical expectation of a random variable x is denoted M(x).

    The mathematical expectation is, in probability theory, the weighted average of all possible values that a random variable can take.

    The mathematical expectation is the sum of the products of all possible values of a random variable by the probabilities of these values.

    The mathematical expectation is the average benefit from one decision or another, provided that such a decision can be considered within the framework of the law of large numbers and a long distance.

    The mathematical expectation is, in the theory of gambling, the amount of winnings that a player can earn or lose, on average, on each bet. In the language of gamblers this is sometimes called the "player's edge" (if it is positive for the player) or the "house edge" (if it is negative for the player).

    The mathematical expectation is the probability of a win multiplied by the average win, minus the probability of a loss multiplied by the average loss.


    The mathematical expectation of a random variable in mathematical theory

    One of the important numerical characteristics of a random variable is the mathematical expectation. Let us introduce the concept of a system of random variables: a collection of random variables that are the results of the same random experiment. To each possible set of values of the system there corresponds a certain probability satisfying the Kolmogorov axioms. A function defined for all possible values of the random variables is called their joint distribution law; it allows one to calculate the probabilities of any events connected with them. In particular, the joint distribution law of two random variables that take values from finite sets is given by the probabilities of all pairs of these values.


    The term "mathematical expectation" was introduced by Pierre Simon the Marquis de Laplace (1795) and originated from the concept of "expected value of a payoff", which first appeared in the 17th century in the theory of gambling in the works of Blaise Pascal and Christian Huygens. However, the first full theoretical understanding and assessment of this concept was given by Pafnutii Lvovich Chebyshev (mid-19th century).


    The distribution law of random numerical values ​​(distribution function and distribution series or probability density) fully describe the behavior of a random variable. But in a number of problems it is enough to know some of the numerical characteristics of the investigated quantity (for example, its average value and possible deviation from it) in order to answer the question posed. The main numerical characteristics of random variables are mathematical expectation, variance, mode, and median.

    The mathematical expectation of a discrete random variable is the sum of the products of its possible values ​​by the corresponding probabilities. Sometimes the mathematical expectation is called the weighted average, since it is approximately equal to the arithmetic mean of the observed values ​​of a random variable for a large number of experiments. From the definition of the mathematical expectation it follows that its value is not less than the smallest possible value of a random variable and not more than the largest. The mathematical expectation of a random variable is a non-random (constant) value.


    The mathematical expectation has a simple physical meaning: if a unit mass is distributed along a straight line, either by placing point masses at certain points (for a discrete distribution) or by "smearing" it with a certain density (for an absolutely continuous distribution), then the point corresponding to the mathematical expectation is the coordinate of the "center of gravity" of the line.


    The average value of a random variable is a certain number that is, as it were, its "representative" and replaces the variable in rough approximate calculations. When we say "the average lamp operating time is 100 hours" or "the mean point of impact is displaced 2 m to the right of the target", we are indicating a certain numerical characteristic of the random variable that describes its location on the numerical axis, i.e. a "characteristic of position".

    From the characteristics of the position in the theory of probability, the most important role is played by the mathematical expectation of a random variable, which is sometimes called simply the mean value of a random variable.


    Consider a random variable X with possible values x1, x2, ..., xn and probabilities p1, p2, ..., pn. We need to characterize by some number the position of the values of the random variable on the abscissa axis, taking into account that these values have different probabilities. For this purpose it is natural to use the so-called "weighted average" of the values xi, in which each value xi is weighted in proportion to the probability of that value. We will denote this mean of the random variable X by M|X|:

    M|X| = (x1·p1 + x2·p2 + … + xn·pn) / (p1 + p2 + … + pn) = ∑ xi·pi,

    since p1 + p2 + … + pn = 1.

    This weighted average is called the mathematical expectation of the random variable. Thus, we have introduced one of the most important concepts of probability theory - the concept of the mathematical expectation. The mathematical expectation of a random variable is the sum of the products of all possible values of the random variable by the probabilities of these values.

    The mathematical expectation of a random variable X is related in a peculiar way to the arithmetic mean of the observed values of the random variable over a large number of experiments. This relationship is of the same kind as the relationship between frequency and probability: with a large number of experiments, the arithmetic mean of the observed values of a random variable approaches (converges in probability to) its mathematical expectation. From the existence of a relationship between frequency and probability one can deduce, as a consequence, the existence of a similar relationship between the arithmetic mean and the mathematical expectation. Indeed, consider a random variable X characterized by a distribution series of values x1, x2, ..., xk with probabilities p1, p2, ..., pk.


    Suppose that N independent experiments are performed, in each of which the variable X takes a certain value. Suppose the value x1 appeared m1 times, the value x2 appeared m2 times, and in general the value xi appeared mi times. Let us calculate the arithmetic mean of the observed values of X, which, in contrast to the mathematical expectation M|X|, we denote M*|X|:

    M*|X| = (x1·m1 + x2·m2 + … + xk·mk) / N = ∑ xi·(mi/N).

    As the number of experiments N increases, the frequencies mi/N will approach (converge in probability to) the corresponding probabilities pi. Consequently, the arithmetic mean M*|X| of the observed values of the random variable will approach (converge in probability to) its mathematical expectation as the number of experiments grows. The connection between the arithmetic mean and the mathematical expectation stated above constitutes one of the forms of the law of large numbers.

    We already know that all forms of the law of large numbers state the fact that certain averages are stable over a large number of experiments. Here we are speaking of the stability of the arithmetic mean of a series of observations of the same quantity. With a small number of experiments the arithmetic mean of their results is random; with a sufficient increase in the number of experiments it becomes "almost non-random" and, stabilizing, approaches a constant value - the mathematical expectation.


    The property of stability of averages with a large number of experiments is easy to verify experimentally. For example, weighing a body in a laboratory on an accurate balance, we get a new value each time as a result of weighing; to reduce the observation error, we weigh the body several times and use the arithmetic mean of the values ​​obtained. It is easy to see that with a further increase in the number of experiments (weighings), the arithmetic mean reacts to this increase less and less, and with a sufficiently large number of experiments, it practically ceases to change.

    It should be noted that the most important characteristic of the position of a random variable - the mathematical expectation - does not exist for all random variables. It is possible to compose examples of such random variables for which the mathematical expectation does not exist, since the corresponding sum or integral diverges. However, for practice, such cases are not of significant interest. Usually the random variables we deal with have a limited range of possible values ​​and, of course, have a mathematical expectation.


    In addition to the most important of the characteristics of the position of a random variable - the mathematical expectation - other characteristics of the position are sometimes used in practice, in particular, the mode and median of a random variable.


    The mode of a random variable is its most probable value. The term "most probable value", strictly speaking, applies only to discontinuous quantities; for a continuous quantity, the mode is the value at which the probability density is maximum. The figures show the mode for discontinuous and continuous random variables, respectively.


    If the distribution polygon (distribution curve) has more than one maximum, the distribution is called "polymodal".



    Sometimes there are distributions that have a minimum, not a maximum, in the middle. Such distributions are called "anti-modal".


    In the general case, the mode and the mathematical expectation of a random variable do not coincide. In the particular case, when the distribution is symmetric and modal (i.e., has a mode) and there is a mathematical expectation, then it coincides with the mode and the center of symmetry of the distribution.

    Another characteristic of the position is often used - the so-called median of a random variable. This characteristic is usually used only for continuous random variables, although it can be formally defined for a discontinuous variable. Geometrically, the median is the abscissa of the point at which the area bounded by the distribution curve is halved.


    In the case of a symmetric modal distribution, the median coincides with the mathematical expectation and mode.

    The mathematical expectation is the average value of a random variable, a numerical characteristic of its probability distribution. In the most general way, the mathematical expectation of a random variable X(w) is defined as the Lebesgue integral with respect to the probability measure P in the original probability space:

    M(X) = ∫ X(w) P(dw), taken over the whole space of elementary outcomes.

    The mathematical expectation can also be calculated as the Lebesgue integral of x with respect to the probability distribution PX of the variable X:

    M(X) = ∫ x PX(dx).


    In a natural way, you can define the concept of a random variable with an infinite mathematical expectation. Return times in some random walks are typical examples.

    Using the mathematical expectation, many numerical and functional characteristics of the distribution are determined (as the mathematical expectation of the corresponding functions of a random variable), for example, a generating function, a characteristic function, moments of any order, in particular, variance, covariance.

    The mathematical expectation is a characteristic of the location of the values of a random variable (the average value of its distribution). In this capacity it serves as a "typical" parameter of the distribution, and its role is similar to the role of the static moment - the coordinate of the center of gravity of a mass distribution - in mechanics. It differs from the other location characteristics by which a distribution is described in general terms, the median and the mode, in the greater importance that it and the corresponding scattering characteristic, the variance, have in the limit theorems of probability theory. The meaning of the mathematical expectation is revealed most fully by the law of large numbers (Chebyshev's inequality) and by the strong law of large numbers.

    The mathematical expectation of a discrete random variable

    Let there be some random variable that can take one of several numerical values ​​(for example, the number of points when throwing a dice can be 1, 2, 3, 4, 5, or 6). In practice, for such a value, the question often arises: what value does it take "on average" with a large number of tests? What will be our average income (or loss) from each of the risky operations?


    Let's say there is some kind of lottery and we want to understand whether or not it is profitable to participate in it (or even to participate repeatedly, regularly). Suppose every fourth ticket wins, the prize is 300 rubles, and the price of any ticket is 100 rubles. Over an infinitely large number of participations, this is what happens. In three quarters of the cases we lose, and every three losses cost 300 rubles. In every fourth case we win 200 rubles (the prize minus the cost of the ticket). So over four participations we lose 100 rubles on average, i.e. 25 rubles per participation. On average, our rate of ruin is 25 rubles per ticket.

    We throw a die. If it is not rigged (no shift of the center of gravity, etc.), how many points will we get on average per throw? Since each outcome is equally probable, we simply take the arithmetic mean and get 3.5. Since this is an AVERAGE, there is no need to be indignant that no particular throw will give 3.5 points - the die simply has no face with such a number!

    Now let's summarize our examples:


    Let's look at the general picture. On the left is the distribution table of a random variable: the value X can take one of n possible values (listed in the top row), and there can be no other values; below each possible value is its probability. On the right is the formula, where M(X) is called the mathematical expectation: M(X) = x1·p1 + x2·p2 + … + xn·pn. The meaning of this value is that with a large number of trials (a large sample), the average value tends to this very mathematical expectation.

    Let's go back to the same die. The mathematical expectation of the number of points per throw is 3.5 (calculate it yourself with the formula if you don't believe it). Say you threw it a couple of times and got 4 and 6; on average that is 5, which is far from 3.5. You threw it one more time and got 3, so the average is (4 + 6 + 3) / 3 = 4.333... - still far from the mathematical expectation. Now do this crazy experiment - roll the die 1000 times! Even if the average is not exactly 3.5, it will be close to it.
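
    The "crazy experiment" can be done in code: the running average of die rolls drifts toward 3.5 as the number of throws grows.

```python
import random
random.seed(42)

for n in (3, 10, 100, 1_000, 100_000):
    rolls = [random.randint(1, 6) for _ in range(n)]
    print(n, round(sum(rolls) / n, 3))
# Small samples wander; the larger ones settle near the expectation 3.5.
```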

    Let's calculate the mathematical expectation for the lottery described above. The table looks like this:

    winnings:     -100 (losing ticket)    +200 (winning ticket)
    probability:   0.75                    0.25

    Then the mathematical expectation is, as we established above:

    M(X) = (-100)·0.75 + 200·0.25 = -25 rubles per ticket.


    Another matter is that doing the same thing "on the fingers", without a formula, would be difficult if there were more options. Say, suppose there were 75% losing tickets, 20% winning tickets and 5% super-winning tickets.
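
    For such a three-outcome lottery the same formula still works; since only the probabilities (75% / 20% / 5%) are given above, the net payoffs in the sketch below (-100, +200 and +900 rubles) are purely illustrative assumptions.

```python
# Expected value of one ticket: sum of payoff * probability over all outcomes.
outcomes = [(-100, 0.75), (200, 0.20), (900, 0.05)]   # (net payoff, probability)

expectation = sum(payoff * prob for payoff, prob in outcomes)
print(expectation)   # -75 + 40 + 45 = 10 rubles per ticket under these assumptions
```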

    Now some properties of the mathematical expectation.

    The mathematical expectation of a constant is equal to that constant: M(C) = C. Proving this is simple: a constant takes its single value with probability 1, so the weighted sum reduces to C·1 = C.

    A constant factor is allowed to be taken out of the sign of the mathematical expectation, that is:

    M(kX) = k·M(X).

    This is a special case of the linearity property of the mathematical expectation.

    Another consequence of the linearity of the mathematical expectation:

    M(X + Y) = M(X) + M(Y),

    that is, the mathematical expectation of the sum of random variables is equal to the sum of the mathematical expectations of the random variables.

    Let X and Y be independent random variables; then:

    M(XY) = M(X)·M(Y).

    This is also easy to prove: XY is itself a random variable, and if the initial variables could take n and m values respectively, then XY can take n·m values. The probability of each value is calculated using the fact that the probabilities of independent events are multiplied. As a result we get

    M(XY) = ∑ xi·yj·pi·qj = (∑ xi·pi)·(∑ yj·qj) = M(X)·M(Y).
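
    A numerical illustration with two small, arbitrarily chosen independent distributions: building the product distribution from all n·m pairs gives the same number as multiplying the two expectations.

```python
from itertools import product

x_vals, x_probs = [1, 2, 3], [0.2, 0.5, 0.3]   # illustrative distribution of X
y_vals, y_probs = [10, 20], [0.4, 0.6]         # illustrative distribution of Y

m_x = sum(v * p for v, p in zip(x_vals, x_probs))
m_y = sum(v * p for v, p in zip(y_vals, y_probs))

# All n*m products x*y, each with probability p(x) * q(y).
m_xy = sum(xv * yv * xp * yp
           for (xv, xp), (yv, yp) in product(zip(x_vals, x_probs),
                                             zip(y_vals, y_probs)))
print(m_x * m_y, m_xy)   # both approximately 33.6
```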


    The mathematical expectation of a continuous random variable

    Continuous random variables have such characteristic as distribution density (probability density). It, in fact, characterizes the situation that a random variable takes some values ​​from the set of real numbers more often, some less often. For example, consider the following graph:


    Here X is the random variable itself and f(x) is its distribution density. Judging by this graph, in experiments the value of X will most often be a number close to zero, while the chances of it exceeding 3 or falling below -3 are rather purely theoretical. For a continuous random variable the mathematical expectation is defined, as above, by the integral M(X) = ∫ x·f(x) dx.


    For example, suppose there is a uniform distribution on the segment [0; 1]: f(x) = 1 for x in [0; 1] and f(x) = 0 outside it. Then

    M(X) = ∫ x·1 dx over [0; 1] = 1/2.

    This is quite consistent with intuitive understanding: if we generate many random real numbers uniformly distributed on the segment [0; 1], their arithmetic mean should be about 0.5.
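
    A quick check of this: the sample mean of uniformly distributed numbers on [0; 1] is close to 0.5.

```python
import random
random.seed(7)

n = 1_000_000
print(sum(random.random() for _ in range(n)) / n)   # approximately 0.5
```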

    The properties of the mathematical expectation - linearity, etc., applicable for discrete random variables, are applicable here as well.

    Relationship between mathematical expectation and other statistical indicators

    In statistical analysis, along with the mathematical expectation, there is a system of interdependent indicators reflecting the homogeneity of phenomena and the stability of processes. Variation indicators often have no independent meaning and are used for further data analysis. The exception is the coefficient of variation, which characterizes the homogeneity of the data, which is a valuable statistic.


    The degree of variability or stability of processes in statistical science can be measured using several indicators.

    The most important indicator characterizing the variability of a random variable is the variance, which is closely and directly related to the mathematical expectation. This parameter is actively used in other types of statistical analysis (hypothesis testing, analysis of cause-and-effect relationships, etc.). Like the mean linear deviation, the variance also reflects the measure of the spread of the data around the mean value.


    It is useful to translate the language of symbols into the language of words. It turns out that the variance is the mean square of the deviations: first the average is calculated, then the difference between each original value and the average is taken, squared, summed, and divided by the number of values in the population. The difference between an individual value and the mean reflects the measure of deviation. It is squared so that all deviations become exclusively positive numbers and so that positive and negative deviations do not cancel each other out when summed. Then, from the squared deviations, we simply take the arithmetic mean. Average - squared - deviations. The key to the magic word "variance" lies in just these three words.

    However, in its pure form, unlike the arithmetic mean or an index, the variance is not used. It is rather an auxiliary and intermediate indicator that is needed for other types of statistical analysis. It does not even have a normal unit of measurement: judging by the formula, it is the square of the unit of measurement of the original data.
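
    The "average - squared - deviations" recipe, spelled out on a small made-up sample (the numbers are purely illustrative):

```python
data = [12, 15, 11, 14, 18, 13]

mean = sum(data) / len(data)
squared_deviations = [(x - mean) ** 2 for x in data]
variance = sum(squared_deviations) / len(data)   # population variance
std_dev = variance ** 0.5                        # back to the original units

print(round(mean, 2), round(variance, 2), round(std_dev, 2))
```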

    Let us measure a random variable N times, for example, we measure the wind speed ten times and want to find the average value. How is the mean related to the distribution function?

    Or we will roll the dice a large number of times. The number of points that will drop out on the die with each roll is a random variable and can take any natural values ​​from 1 to 6. The arithmetic mean of the dropped points, calculated for all dice rolls, is also a random value, but for large N it tends to a very specific number - the mathematical expectation Mx... In this case, Mx = 3.5.

    How does this value arise? Suppose that in N trials 1 point came up n1 times, 2 points came up n2 times, and so on. Then the fraction of outcomes in which one point fell is

    n1 / N, which for large N approaches the probability p1 = 1/6,

    and likewise for the outcomes in which 2, 3, 4, 5 and 6 points are rolled. The arithmetic mean of the points, 1·(n1/N) + 2·(n2/N) + … + 6·(n6/N), therefore tends to 1·1/6 + 2·1/6 + … + 6·1/6 = 3.5.


    Suppose now that we know the distribution law of the random variable x, that is, we know that the random variable x can take the values x1, x2, ..., xk with probabilities p1, p2, ..., pk.

    The mathematical expectation Mx of the random variable x is then

    Mx = x1·p1 + x2·p2 + … + xk·pk.


    The mathematical expectation is not always a reasonable estimate of a random variable. For example, to estimate the average wage it is more reasonable to use the concept of the median, that is, the value x1/2 such that the number of people receiving less than the median wage and the number receiving more are the same.

    The probability p1 that the random variable x is less than x1/2 and the probability p2 that it is greater than x1/2 are the same and equal to 1/2. The median is not determined unambiguously for all distributions.


    The root-mean-square, or standard, deviation in statistics is the degree to which observed data or sets deviate from the AVERAGE value. It is denoted by σ or s. A small standard deviation indicates that the data cluster around the mean, while a large one indicates that the initial data lie far from it. The standard deviation is equal to the square root of a quantity called the variance, which is the average of the summed squared differences between the initial data and the mean. The standard deviation of a random variable is the square root of the variance:

    σ = √D(x).


    Example. Under test conditions when shooting at a target, calculate the variance and standard deviation of a random variable:


    Variation is the variability of the value of a trait across the units of a population. The individual numerical values of a trait found in the studied population are called variants of values. The insufficiency of the average value for a complete characterization of the population makes it necessary to supplement the averages with indicators that allow one to assess how typical these averages are by measuring the variability (variation) of the trait under study. The coefficient of variation is calculated by the formula:

    V = σ / x̄ · 100%, where x̄ is the arithmetic mean.


    The range of variation (R) is the difference between the maximum and minimum values of the trait in the studied population: R = x_max - x_min. This indicator gives only the most general idea of the variability of the trait under study, since it shows the difference only between the limiting values of the variants. Its dependence on the extreme values of the trait gives the range of variation an unstable, random character.


    The average linear deviation is the arithmetic mean of the absolute (modulo) deviations of all values of the analyzed population from their average value:

    d = ∑ |xi - x̄| / n.
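
    The variation indicators just described can be computed together in a short sketch on an arbitrary illustrative sample:

```python
data = [20, 23, 19, 25, 22, 21, 24]   # illustrative sample

n = len(data)
mean = sum(data) / n
variation_range = max(data) - min(data)                 # R = x_max - x_min
mean_linear_dev = sum(abs(x - mean) for x in data) / n
variance = sum((x - mean) ** 2 for x in data) / n
std_dev = variance ** 0.5
coef_variation = std_dev / mean * 100                   # in percent

print(variation_range, round(mean_linear_dev, 2),
      round(std_dev, 2), round(coef_variation, 1))
```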


    Expected value in the theory of gambling

    The mathematical expectation is the average amount of money a gambler can win or lose on a given bet. This is a very important concept for the player, because it is fundamental to the assessment of most game situations. Expectation is also an optimal tool for analyzing basic card layouts and game situations.

    Let's say you are playing a coin with a friend, betting $ 1 equally each time, regardless of what comes up. Tails - you win, heads - you lose. The odds of coming up tails are one-to-one, and you bet $ 1 to $ 1. Thus, your mathematical expectation is zero, because mathematically speaking, you cannot know whether you will be leading or losing after two tosses or after 200.


    Your hourly gain is zero. An hourly win is the amount of money you expect to win in an hour. You can flip a coin 500 times within an hour, but you will not win or lose, because your chances are neither positive nor negative. From the point of view of a serious player, such a betting system is not bad. But this is simply a waste of time.

    But suppose someone wants to bet $ 2 against your $ 1 in the same game. Then you immediately have a positive expectation of 50 cents from each bet. Why 50 cents? On average, you win one bet and lose the second. Bet the first dollar and lose $ 1, bet the second and win $ 2. You bet $ 1 twice and are $ 1 ahead. So each of your one dollar bets gave you 50 cents.


    If the coin is tossed 500 times in one hour, your hourly winnings will be $250, because on average you lose $1 on 250 tosses and win $2 on 250 tosses. $500 minus $250 equals $250, which is the total winnings. Note that the expected value, which is the amount you win on average on one bet, is 50 cents: you won $250 by betting a dollar 500 times, which equals 50 cents per bet.

    Expectation has nothing to do with short-term results. Your opponent, who decided to bet $2 against your $1, could beat you on the first ten tosses in a row, but with a 2-to-1 betting advantage you earn, all other things being equal, 50 cents on every $1 bet under any circumstances. It makes no difference whether you win or lose one bet or several, as long as you have enough cash to cover the swings calmly. If you keep betting in the same way, then over a long period of time your winnings will approach the sum of your expectations over the individual throws.


    Every time you make a bet with the best outcome (a bet that can turn out to be profitable over the long run), when the odds are in your favor, you will definitely win something on it, and it does not matter if you lose it or not in this hand. Conversely, if you make a bet with the worst outcome (a bet that is not profitable in the long run), when the odds are not in your favor, you are losing something regardless of whether you win or lose in the given hand.

    You make a bet with the best outcome if your expectation is positive, and it is positive when the odds are on your side. A bet with the worst outcome has a negative expectation, which happens when the odds are against you. Serious gamblers bet only with the best outcome; with the worst outcome, they fold. What does it mean for the odds to be in your favor? You may end up winning more than the real odds would warrant. The real odds of tails coming up are 1 to 1, but due to the ratio of the bets you are being paid 2 to 1. In this case the odds are in your favor, and you definitely get the best outcome, with a positive expectation of 50 cents per bet.


    Here is a more complex example of expected value. A buddy writes down a number from one to five and bets $5 against your $1 that you will not guess the hidden number. Should you agree to such a bet? What is the expectation here?

    On average you will be wrong four times out of five, so the odds against your guessing the number are 4 to 1: on any single attempt, the odds are that you lose a dollar. However, you are being paid 5 to 1 on a 4-to-1 shot, so the odds are in your favor; you can take the bet and hope for the best outcome. If you make this bet five times, on average you lose $1 four times and win $5 once. Over the five attempts you therefore earn $1, a positive expected value of 20 cents per bet.
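
    Both of the bets discussed above reduce to the same one-line calculation:

```python
def expectation(p_win, win_amount, loss_amount):
    """Average result of one bet: P(win) * win - P(lose) * loss."""
    return p_win * win_amount - (1 - p_win) * loss_amount

print(expectation(0.5, 2, 1))   # coin paid 2 to 1            -> 0.50 dollars per bet
print(expectation(0.2, 5, 1))   # guess 1 of 5, paid 5 to 1   -> 0.20 dollars per bet
```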


    A player who is going to win more than he bets, as in the example above, catches the odds. Conversely, he ruins the odds when he expects to win less than he bets. A player making a bet can have either positive or negative expectation, which depends on whether he catches or ruins the odds.

    If you bet $50 to win $10 with a 4-to-1 chance of winning, you get a negative expectation of $2, because on average you win $10 four times and lose $50 once, a net loss of $10 over five bets, or $2 per bet. But if you bet $30 to win $10 with the same 4-to-1 chance of winning, you have a positive expectation of $2, because you again win $10 four times and lose $30 once, a net profit of $10 over five bets. These examples show that the first bet is bad and the second is good.


    Expectation is at the heart of every game situation. When a bookmaker encourages football fans to bet $11 to win $10, he has a positive expectation of 50 cents on every $10. If a casino pays even money on the pass line in craps, the casino's positive expectation is approximately $1.40 for every $100 bet, because the game is structured so that a player on this line loses 50.7% of the time and wins 49.3% of the time. Undoubtedly, it is this seemingly minimal positive expectation that brings colossal profits to casino owners around the world. As Vegas World casino owner Bob Stupak remarked, "A one-thousandth of one percent negative probability over a long enough distance will ruin the richest man in the world."


    Mathematical expectation when playing Poker

    The game of poker is the most illustrative and instructive example of the use of the theory and properties of mathematical expectation.


    Expected Value in Poker is the average benefit from a particular solution, provided that such a solution can be considered within the framework of the theory of large numbers and long distance. A successful poker game is about always accepting moves with positive expectation.

    The mathematical meaning of the mathematical expectation when playing poker is that we often come across random variables when making a decision (we do not know which cards are in our opponent's hands, which cards will come on subsequent betting rounds). We must consider each of the solutions from the point of view of the theory of large numbers, which states that with a sufficiently large sample, the average value of a random variable will tend to its mathematical expectation.


    Among the particular formulas for calculating the mathematical expectation, the following is the most applicable in poker:

    EV = (probability of winning)·(amount won) - (probability of losing)·(amount lost).

    When playing poker, the expected value can be calculated both for bets and for calls. In the first case, fold equity should be taken into account; in the second, the pot odds being offered. When evaluating the mathematical expectation of a move, remember that a fold always has zero expectation. Thus, discarding the cards will always be a more profitable decision than any move with a negative expectation.
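
    As a sketch of the call case, the pot size, the bet to call and the winning probability below are hypothetical numbers chosen only for illustration:

```python
def call_ev(pot, bet_to_call, win_prob):
    """Net EV of a call: win the current pot (which already includes the
    opponent's bet) with probability win_prob, otherwise lose the call."""
    return win_prob * pot - (1 - win_prob) * bet_to_call

print(call_ev(pot=100, bet_to_call=20, win_prob=0.25))   # +10.0: calling is profitable
print(call_ev(pot=100, bet_to_call=20, win_prob=0.10))   # -8.0: folding (EV = 0) is better
```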

    Expectation tells you what you can expect (profit or loss) for every dollar you risk. Casinos make money because the expectation of all the games practiced in them is in favor of the casino. Over a sufficiently long series of games one can expect the client to lose his money, since the "probability" is in favor of the casino. However, professional casino players limit their games to short periods of time, thereby increasing the odds in their favor. The same goes for investing: if your expectation is positive, you can make more money by making many trades in a short period of time. Expectation is your probability of profit multiplied by the average profit, minus your probability of loss multiplied by the average loss.


    Poker can also be viewed in terms of mathematical expectation. You may assume that a certain move is profitable, but in some cases it may turn out to be far from the best, because another move is more profitable. Let's say you hit a full house in five-card draw poker. Your opponent bets. You know that if you raise, he will call, so raising looks like the best tactic. But if you do raise, the two remaining players will certainly fold, whereas if you just call, you can be completely sure that the two players behind you will call as well. When you raise you win one extra bet, but by just calling you win two. Thus, calling gives you the higher positive mathematical expectation and is the better tactic.

    The mathematical expectation can also give an idea of which poker tactics are less profitable and which are more profitable. For example, if, when playing a certain hand, you believe that your losses will average 75 cents including the antes, then this hand should be played, because that is better than folding when the ante is $1.


    Another important reason for understanding the essence of mathematical expectation is that it gives you a sense of calm whether or not you won a particular bet: if you made a good bet or folded at the right time, you know that you have earned or saved a certain amount of money that a weaker player could not have saved. It is much harder to fold if you are upset that your opponent drew into a stronger hand. With all this, the money you saved by not playing, instead of betting, is added to your winnings for the night or the month.

    Just remember that if you and your opponent swapped hands, he would call you, and, as you will see in the article "The Fundamental Theorem of Poker", this is just one of your advantages. You should be pleased when this happens. You can even learn to enjoy losing a hand, because you know that other players in your place would have lost much more.


    As mentioned in the coin-game example at the beginning, the hourly rate of return is related to the expected value, and this concept is especially important for professional players. When you go to play poker, you should mentally estimate how much you can win in an hour of play. In most cases you will have to rely on intuition and experience, but you can also use some math. For example, you are playing draw lowball and you see three players bet $10 and then draw two cards, which is very bad tactics; you can figure that every time they bet $10 they lose about $2. Each of them does this eight times an hour, which means that all three lose about $48 per hour. You are one of the remaining four players, who are roughly equal in skill, so these four players (you among them) must split that $48, which gives each a profit of $12 per hour. Your hourly rate in this case is simply your share of the money lost by the three bad players in an hour.
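    The same estimate as a quick check (all the figures come from the example above):

        weak_players, loss_per_bet, bets_per_hour = 3, 2.0, 8
        pool = weak_players * loss_per_bet * bets_per_hour   # $48 given away per hour
        print(pool / 4)                                      # split four ways: $12 per hour each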

    Over the long run, a player's total winnings are the sum of his mathematical expectations in individual hands. The more hands you play with positive expectation, the more you win; conversely, the more hands with negative expectation you play, the more you lose. Consequently, you should choose games that maximize your positive expectation or neutralize the negative one, so that you can maximize your hourly winnings.


    Positive mathematical expectation in game strategy

    If you know how to count cards, you may have an edge over the casino, provided they do not notice it and throw you out. Casinos love drunk gamblers and cannot stand card counters. An edge allows you to win more often than you lose over time. Good money management based on expectation calculations can help you extract more from your edge and cut your losses. Without an edge you would be better off giving the money to charity. In exchange trading, the edge comes from a trading system that generates more profit than the losses, spreads and commissions take away. No money management will save a bad trading system.

    A positive expectation is defined as a value greater than zero, and the larger this number, the stronger the expectation. If the value is less than zero, the expectation is negative, and the larger the absolute value of the negative number, the worse the situation. If the result is zero, the expectation is break-even. You can only win when you have a positive mathematical expectation and a sensible system of play; playing on intuition leads to ruin.


    Expectation and exchange trading

    The mathematical expectation is a widely used and popular statistical indicator in exchange trading on the financial markets. First of all, it is used to analyze how successful trading is. It is easy to guess that the higher this value, the more reason to consider the trading under study successful. Of course, a trader's performance cannot be analyzed with this parameter alone, but the calculated value, combined with other methods of assessing the quality of work, can significantly improve the accuracy of the analysis.


    The mathematical expectation is often calculated in trading-account monitoring services, which makes it possible to quickly assess the work done on a deposit. Strategies that "sit out" losing trades are an exception: a trader may be lucky for a while, so his record may show no losses at all. In that case it is impossible to judge by the expectation alone, because the risks taken in the work are not accounted for.

    In trading on the market, expectation is most often used when predicting the profitability of a trading strategy or when predicting a trader's income based on the statistical data of his previous trades.

    In terms of money management, it is very important to understand that when making trades with negative expectation there is no money-management scheme that can reliably produce high profits. If you keep trading on the exchange under such conditions, then no matter how you manage your money you will lose your entire account, however large it was at the start.

    This axiom is not only true for games or trades with negative expectation, it is also true for games with equal odds. Therefore, the only case where you have a chance to benefit in the long term is when you make deals with a positive expected value.


    The difference between negative expectation and positive expectation is the difference between life and death. It doesn't matter how positive or how negative the expectation is; what matters is whether it is positive or negative. Therefore, before considering money management issues, you must find a game with positive expectation.

    If you do not have such a game, then no amount of money management in the world will save you. On the other hand, if you have a positive expectation, you can, through good money management, turn it into an exponential growth function. It does not matter how small that positive expectation is! In other words, it does not matter how profitable a trading system is on a per-contract basis. If you have a system that wins $10 per contract per trade (after commissions and slippage), money-management techniques can make it more profitable than a system that averages $1,000 per trade (after commissions and slippage).
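    The claim that even a small edge can be compounded into exponential growth can be illustrated with a toy simulation; the fixed-fraction sizing, the win rate and the payoff ratio below are assumptions made up for the sketch:

        import random

        def simulate(trades: int, p_win: float, win_r: float, loss_r: float,
                     risk_fraction: float, start: float = 10_000.0) -> float:
            """Compound a fixed fraction of equity on each trade of a
            positive-expectation game and return the final equity."""
            equity = start
            for _ in range(trades):
                stake = equity * risk_fraction
                if random.random() < p_win:
                    equity += stake * win_r    # win: gain win_r times the stake
                else:
                    equity -= stake * loss_r   # loss: lose loss_r times the stake
            return equity

        random.seed(1)
        # A 55% win rate at 1:1 payoffs is a small edge, yet risking 2% of equity
        # per trade grows the account roughly exponentially over 1,000 trades.
        print(round(simulate(1_000, p_win=0.55, win_r=1.0, loss_r=1.0,
                             risk_fraction=0.02)))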


    What matters is not how profitable the system has been, but how confidently one can say that the system will show at least a minimal profit in the future. Therefore, the most important preparation a trader can make is to verify that the system will have a positive mathematical expectation in the future.

    In order to have a positive mathematical expectation in the future, it is very important not to restrict the degrees of freedom of your system. This is achieved not only by eliminating or reducing the number of parameters to be optimized, but also by cutting back the number of system rules as much as possible. Every parameter you add, every rule you introduce, every tiny change you make to the system reduces the number of degrees of freedom. Ideally, you want a fairly primitive and simple system that consistently generates a small profit in almost any market. Again, it is important to understand that it does not matter how profitable the system is, as long as it is profitable. The money you earn in trading will be earned through effective money management.

    A trading system is simply a tool that gives you a positive mathematical expectation so that money management can be applied. Systems that work (show at least a minimal profit) in only one or a few markets, or that have different rules or parameters for different markets, will most likely not work in real time for long. The problem with most technically minded traders is that they spend too much time and effort optimizing the various rules and parameter values of the trading system, which produces exactly the opposite result. Instead of spending energy and computer time on increasing the profits of the trading system, direct your energy toward increasing the reliability of obtaining a minimal profit.

    Knowing that money management is just a numbers game that requires a positive expectation, a trader can stop searching for the "holy grail" of exchange trading. Instead, he can start testing his trading method, find out how logically sound it is and whether it gives a positive expectation. Proper money-management methods, applied to any trading method, even a mediocre one, will do the rest of the work.


    For any trader to succeed, it is necessary to solve three key tasks: to ensure that the number of successful deals exceeds the inevitable mistakes and miscalculations; to set up the trading system so that the opportunity to earn arises as often as possible; to achieve a stable positive result from operations.

    And here mathematical expectation can help us working traders. This term is one of the key concepts in probability theory. It gives an average estimate of a random value. The mathematical expectation of a random variable is like a center of gravity, if you imagine all possible probabilities as points with different masses.


    As applied to a trading strategy, the mathematical expectation of profit (or loss) is most often used to assess its effectiveness. This parameter is defined as the sum of the products of the given levels of profit and loss and the probabilities of their occurrence. For example, suppose the developed trading strategy assumes that 37% of all operations will be profitable and the remaining 63% unprofitable, while the average income from a successful deal is $7 and the average loss is $1.4. Then the mathematical expectation of trading with this system is: MO = 0.37 × $7 − 0.63 × $1.4 = $2.59 − $0.882 ≈ $1.708.

    What does this number mean? It says that, following the rules of this system, we will receive on average $1.708 from each closed trade. Since this efficiency estimate is greater than zero, the system can be used for real trading. If the calculated expectation turns out to be negative, it indicates an average loss, and such trading will lead to ruin.

    The amount of profit per trade can also be expressed as a relative value, in percent. For example:

    - percentage of income per 1 deal - 5%;

    - percentage of successful trading operations - 62%;

    - percentage of loss per 1 deal - 3%;

    - percentage of unsuccessful transactions - 38%;

    Then the expectation is 0.62 × 5% − 0.38 × 3% = 3.1% − 1.14% = 1.96%. That is, the average trade will generate 1.96%.

    It is possible to develop a system that, despite the prevalence of unprofitable trades, will give a positive result, since its MO > 0.
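    A quick sketch of such a system, with a win rate deliberately below 50% (the numbers are invented for the illustration):

        # Only 30% of trades win, but winners are four times larger than losers.
        p_win, avg_win, avg_loss = 0.30, 4.0, 1.0
        mo = p_win * avg_win - (1 - p_win) * avg_loss
        print(round(mo, 2))   # 0.30*4 - 0.70*1 = +0.50 per unit risked, so MO > 0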

    However, expectation alone is not enough. It is difficult to make money if the system gives very few trading signals; in that case its profitability will be comparable to bank interest. Suppose each transaction yields on average only $0.50, but what if the system generates 1,000 transactions per year? That would be a very substantial amount in a relatively short time. It logically follows that another distinguishing feature of a good trading system is a short position-holding period.



    Expected value

    The dispersion (variance) of a continuous random variable X, whose possible values belong to the entire Ox axis, is determined by the equality:
    D(X) = ∫_{−∞}^{+∞} x²·f(x) dx − [M(X)]²

    Service purpose. The online calculator is designed for problems in which either the distribution density f(x) or the distribution function F(x) is given (see example). In such problems you usually need to find the mathematical expectation and the standard deviation and to plot the graphs of f(x) and F(x).

    Instruction. Select the type of source data: the distribution density f(x) or the distribution function F(x).

    The distribution density f (x) is given:

    The distribution function F (x) is given:

    A continuous random variable is given by a probability density of the Rayleigh form
    (the Rayleigh distribution law is used in radio engineering). Find M(X) and D(X).
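    A numerical sketch of this kind of problem, assuming the standard Rayleigh density f(x) = (x/σ²)·exp(−x²/(2σ²)) for x ≥ 0 with σ = 1 (both the parameterization and the value of σ are assumptions for the illustration; the exact answers are M(X) = σ·√(π/2) and D(X) = (2 − π/2)·σ²):

        import math
        from scipy.integrate import quad

        sigma = 1.0
        f = lambda x: (x / sigma**2) * math.exp(-x**2 / (2 * sigma**2))  # Rayleigh density

        m, _ = quad(lambda x: x * f(x), 0, math.inf)       # M(X)  = ∫ x·f(x) dx
        m2, _ = quad(lambda x: x**2 * f(x), 0, math.inf)   # M(X²) = ∫ x²·f(x) dx
        d = m2 - m**2                                      # D(X)  = M(X²) − [M(X)]²

        print(m, math.sqrt(math.pi / 2) * sigma)           # both ≈ 1.2533
        print(d, (2 - math.pi / 2) * sigma**2)             # both ≈ 0.4292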

    The random variable X is called continuous if its distribution function F(x) = P(X < x) is continuous and has a derivative.
    The distribution function of a continuous random variable is used to calculate the probability that the random variable falls within a given interval:
    P(α < X < β) = F(β) − F(α),
    and for a continuous random variable it does not matter whether the boundaries of this interval are included or not:
    P(α < X < β) = P(α ≤ X < β) = P(α ≤ X ≤ β).
    The distribution density of a continuous random variable is the function
    f(x) = F′(x), the derivative of the distribution function.

    Distribution density properties

    1. The distribution density of a random variable is non-negative (f(x) ≥ 0) for all values of x.
    2. Normalization condition:
    ∫_{−∞}^{+∞} f(x) dx = 1

    The geometric meaning of the normalization condition: the area under the distribution density curve is equal to one.
    3. The probability that the random variable X falls in the interval from α to β can be calculated by the formula
    P(α < X < β) = ∫_{α}^{β} f(x) dx

    Geometrically, the probability of a continuous random variable X falling into the interval (α, β) is equal to the area of ​​a curvilinear trapezoid under the distribution density curve based on this interval.
    4. The distribution function is expressed in terms of the density as follows:
    F(x) = ∫_{−∞}^{x} f(t) dt

    The value of the distribution density at a point x is not equal to the probability that the random variable takes this value; for a continuous random variable one can only speak of the probability of falling into a given interval.