
Iterative methods for solving systems of nonlinear equations: theory. The iteration method. The simple iteration method for solving systems of nonlinear equations

Purpose of the service. The online calculator is designed to find the roots of an equation by the iteration method.

The solution is produced in Word format.

Rules for entering a function

Examples:
x^2/(1+x)
cos²(2x + π) ≡ (cos(2*x+pi))^2
x + (x-1)^(2/3)

One of the most effective methods for the numerical solution of equations is the iteration method. The essence of the method is as follows. Let the equation f(x) = 0 be given.
We replace it with the equivalent equation

x = φ(x). (1)

Let us choose an initial approximation to the root, x_0, and substitute it into the right-hand side of equation (1). We then obtain a number

x_1 = φ(x_0). (2)


Substituting now the number x_1 instead of x_0 into the right-hand side of (2), we obtain the number x_2 = φ(x_1). Repeating this process, we get the sequence of numbers

x_n = φ(x_{n-1}), n = 1, 2, ... (3)

If this sequence converges, that is, the limit ξ = lim x_n exists, then, passing to the limit in equality (3) and assuming the function φ(x) to be continuous, we find

lim x_n = lim φ(x_{n-1}),

or ξ = φ(ξ).

Thus, the limit ξ is a root of equation (1) and can be computed by formula (3) to any degree of accuracy.
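As a quick illustration of the scheme x_n = φ(x_{n-1}), here is a minimal Python sketch (the test equation x = cos x, the tolerance, and the function names are our illustrative choices, not from the source):

```python
import math

def fixed_point_iteration(phi, x0, eps=1e-6, max_iter=100):
    """Iterate x_n = phi(x_{n-1}) until successive values differ by less than eps."""
    x_prev = x0
    for n in range(1, max_iter + 1):
        x_next = phi(x_prev)
        if abs(x_next - x_prev) < eps:
            return x_next, n
        x_prev = x_next
    raise RuntimeError("iteration did not converge within max_iter steps")

# Example: solve x = cos(x); here |phi'(x)| = |sin x| < 1 near the root,
# so the process converges.
root, steps = fixed_point_iteration(math.cos, x0=0.5)
```

The stopping test on |x_n - x_{n-1}| is exactly the practical criterion discussed below.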


Fig. 1a. Fig. 1b.


Fig. 2. |φ′(x)| > 1: a divergent process

In Figs. 1a and 1b, |φ′(x)| < 1 in a neighborhood of the root and the iteration process converges. However, in the case |φ′(x)| > 1 the iteration process can be divergent (see Fig. 2).

Sufficient conditions for the convergence of the iteration method

Theorem 7. Let the function φ(x) be defined and differentiable on a segment [a, b], with all its values φ(x) ∈ [a, b], and let |φ′(x)| ≤ q < 1 for x ∈ [a, b]. Then the iteration process x_n = φ(x_{n-1}) converges regardless of the initial value x_0 ∈ [a, b], and the limiting value is the unique root of the equation x = φ(x) on [a, b].
Proof. Consider two successive approximations x_n = φ(x_{n-1}) and x_{n+1} = φ(x_n) and take their difference: x_{n+1} - x_n = φ(x_n) - φ(x_{n-1}). By Lagrange's mean value theorem, the right-hand side can be represented as

φ′(c_n)(x_n - x_{n-1}),

where c_n lies between x_{n-1} and x_n. Then we get

|x_{n+1} - x_n| ≤ |φ′(c_n)| · |x_n - x_{n-1}| ≤ q |x_n - x_{n-1}|.

Setting n = 1, 2, ..., we obtain

|x_2 - x_1| ≤ q |x_1 - x_0|
|x_3 - x_2| ≤ q |x_2 - x_1| ≤ q^2 |x_1 - x_0|
...
|x_{n+1} - x_n| ≤ q^n |x_1 - x_0|. (4)


From (4), by virtue of the condition q < 1, it follows that the sequence {x_n} converges to some number ξ, that is, lim x_n = ξ. Therefore, by the continuity of the function φ(x),

ξ = lim x_n = lim φ(x_{n-1}) = φ(ξ),

or ξ = φ(ξ), q.e.d.
For the error of the root ξ, the following estimate can be obtained.
We have x_n = φ(x_{n-1}).
Further, ξ - x_n = ξ - φ(x_{n-1}) = φ(ξ) - φ(x_{n-1}).
Now φ(x_{n-1}) = φ(x_n) - φ′(c)(x_n - x_{n-1}), so that
ξ - x_n = φ(ξ) - φ(x_n) + φ′(c)(x_n - x_{n-1}).
Applying the mean value theorem to the difference φ(ξ) - φ(x_n), we get

ξ - x_n = φ′(c_1)(ξ - x_n) + φ′(c)(x_n - x_{n-1}),

whence

|ξ - x_n| ≤ q |ξ - x_n| + q |x_n - x_{n-1}|.


From here

|ξ - x_n| ≤ (q / (1 - q)) |x_n - x_{n-1}|, (5)

whence it is seen that for q close to 1 the difference |ξ - x_n| can be very large even though |x_n - x_{n-1}| < ε, where ε is the prescribed tolerance. In order to compute ξ with accuracy ε, it is necessary to ensure

|x_n - x_{n-1}| ≤ ((1 - q) / q) ε. (6)

Then, substituting (6) into (5), we obtain |ξ - x_n| ≤ ε.

If q is very small, then instead of (6) one can use the simpler criterion

|x_n - x_{n-1}| < ε.

The convergence of the iteration method is linear, with convergence coefficient α = q. Indeed, we have
ξ - x_n = φ(ξ) - φ(x_{n-1}) = φ′(c)(ξ - x_{n-1}), hence |ξ - x_n| ≤ q |ξ - x_{n-1}|.

Remark. Suppose that in some neighborhood of the root ξ ∈ (a, b) of the equation x = φ(x) the derivative φ′(x) preserves a constant sign and the inequality |φ′(x)| ≤ q < 1 holds. Then, if φ′(x) is positive, the successive approximations x_n = φ(x_{n-1}) converge to the root monotonically.
If φ′(x) is negative, the successive approximations oscillate around the root.
Consider a way to bring the equation f(x) = 0 to the form x = φ(x).
The function φ(x) must be chosen so that |φ′(x)| is small in a neighborhood of the root.
Let m_1 and M_1 be the smallest and largest values of the derivative f′(x),

0 < m_1 ≤ f′(x) ≤ M_1. (7)

Replace the equation f(x) = 0 with the equivalent equation

x = x - λ f(x).

Put φ(x) = x - λ f(x). Let us choose the parameter λ so that in a neighborhood of the root ξ the inequality

0 ≤ |φ′(x)| = |1 - λ f′(x)| ≤ q < 1

holds. Hence, on the basis of (7), we obtain

0 ≤ 1 - λ M_1 ≤ 1 - λ m_1 ≤ q.

Then, choosing λ = 1/M_1, we get

q = 1 - m_1/M_1 < 1.

If λ = 1/f′(x), then the iterative formula x_n = φ(x_{n-1}) turns into Newton's formula

x_n = x_{n-1} - f(x_{n-1}) / f′(x_{n-1}).

    Iteration method in Excel

In cell B2 we enter the beginning of the interval, a; in cell B3, the end of the interval, b. Row 4 is reserved for the table heading. The iteration process itself is organized in cells A5:D5.

    The process of finding the zeros of a function by the iteration method consists of the following stages:

    1. Get a template using this service.
2. Refine the interval endpoints in cells B2, B3.
    3. Copy the iteration rows down until the required precision is reached (column D).
    Note: column A - iteration number, column B - root of equation X, column C - value of function F (X), column D - precision eps.

Example. Find the root of the equation e^(-x) - x = 0, x ∈ [0, 1], ε = 0.001. (8)
Solution.
We represent equation (8) in the form x = x - λ(e^(-x) - x).
Find the extreme value of the derivative of the function f(x) = e^(-x) - x:
f′(x) = -(e^(-x) + 1), and max f′(x) ≈ -1.37 on [0, 1]. Taking λ = 1/max f′(x) ≈ -0.73, we solve the equation

x = x + 0.73 (e^(-x) - x).

The values of the successive approximations are given in the table.

n   x_i     f(x_i)
1   0.0     1.0
2   0.73    -0.2481
3   0.5489  0.0287
4   0.5698  -0.0042
5   0.5668  0.0006
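The same computation is easy to reproduce outside Excel. A short Python sketch of the scheme above; the tabulated values serve as a check:

```python
import math

def f(x):
    return math.exp(-x) - x

lam = 0.73           # lambda chosen from max |f'(x)| = e^(-x) + 1 ≈ 1.37 on [0, 1]
x, eps = 0.0, 0.001
while abs(f(x)) > eps:
    x = x + lam * f(x)   # x_{n+1} = x_n + 0.73 (e^{-x_n} - x_n)

# The iterates pass through 0.73, 0.5489, 0.5698, ... and stop near x ≈ 0.5668,
# matching the table above.
```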

    MINISTRY OF EDUCATION AND SCIENCE OF UKRAINE

SUMY STATE UNIVERSITY

    Department of Informatics

    COURSE WORK

IN THE COURSE:

    Numerical Methods

    "Iterative methods for solving systems of nonlinear equations"


1. Methods for solving systems of nonlinear equations. General information

    2.1 Simple iteration method

    2.2 Aitken transformation

    2.3 Newton's method

    2.3.1 Modifications of Newton's method

    2.3.2 Quasi-Newtonian Methods

    2.4 Other iterative methods for solving systems of nonlinear equations

    2.4.1 Picard method

    2.4.2 Gradient descent method

    2.4.3 Relaxation method

    3. Implementation of iterative methods programmatically and using the Maple mathematical package

    3.1 Simple iteration method

    3.2 Gradient descent method

    3.3 Newton's method

    3.4 Modified Newton's method

    List of used literature


    1. Methods for solving nonlinear equations. General information.

Let us be given a system of equations

f_i(x_1, x_2, ..., x_n) = 0, i = 1, 2, ..., n, (1.1)

where f_i are some nonlinear operators. The system can also be written in matrix (vector) form:

F(x) = 0. (1.2)

Its solution is a value x* for which F(x*) = 0.

The computational problem consists in finding some or all solutions of system (1.1) of n nonlinear algebraic or transcendental equations with n unknowns.

Let us denote by x the column vector (x_1, x_2, ..., x_n)^T and write the system of equations in the form of formula (1.2): F(x) = 0, where F = (f_1, f_2, ..., f_n)^T.

Such systems of equations can arise directly, for example, in the design of physical systems, or indirectly. For instance, when solving the problem of minimizing some function G(x), it is often necessary to determine the points at which the gradient of this function is equal to zero. Setting F = grad G, we obtain a nonlinear system.

Unlike systems of linear algebraic equations, for whose solution both direct (exact) and iterative (approximate) methods can be used, systems of nonlinear equations can be solved only by approximate, iterative methods. These produce a sequence of approximations x^(0), x^(1), ..., x^(k), ... If the iterative process converges, the limiting value is the solution of the given system of equations.

For a complete understanding of the methods for finding a solution of the system, it is necessary to clarify the concept of the "rate of convergence". If, for a sequence x_n converging to the limit x*, the relation

||x_{n+1} - x*|| ≤ C ||x_n - x*||^k

holds (k a positive real number, C a constant), then k is called the rate (order) of convergence of the sequence.


    2. Iterative methods for solving systems of nonlinear equations

    2.1 Simple iteration method

The method of simple iteration (successive approximations) is one of the basic methods of computational mathematics and is used to solve a wide class of equations. Let us present a description and justification of this method for systems of nonlinear equations of the form

f_i(x_1, x_2, ..., x_n) = 0, i = 1, 2, ..., n.

Let us bring the system of equations to the special form:

x_i = φ_i(x_1, x_2, ..., x_n), i = 1, 2, ..., n, (2.1)

or, in vector form,

X = Φ(X). (2.2)

Moreover, the transition to this form is admissible only under the condition that Φ is a contraction mapping.

Using some initial approximation X^(0) = (x_1^(0), x_2^(0), ..., x_n^(0)),

construct the iterative process X^(k+1) = Φ(X^(k)). Calculations continue until the condition ||X^(k+1) - X^(k)|| < ε is met. Then the solution of the system of equations is the fixed point of the mapping Φ.
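As an illustration of the vector scheme X^(k+1) = Φ(X^(k)), here is a Python sketch (the particular 2×2 system, the max-norm, and the tolerance are our illustrative assumptions, not from the source):

```python
import math

def simple_iteration_system(Phi, X0, eps=1e-8, max_iter=200):
    """Iterate X^{k+1} = Phi(X^k) until the max-norm of the step is below eps."""
    X = X0
    for _ in range(max_iter):
        X_new = Phi(X)
        if max(abs(a - b) for a, b in zip(X_new, X)) < eps:
            return X_new
        X = X_new
    raise RuntimeError("no convergence")

# Illustrative 2x2 system already brought to the form X = Phi(X):
#   x = 0.3 + 0.1*sin(y),   y = 0.2 + 0.1*cos(x)
# The partial derivatives of Phi are bounded by 0.1 < 1, so Phi is a contraction.
Phi = lambda X: (0.3 + 0.1 * math.sin(X[1]), 0.2 + 0.1 * math.cos(X[0]))
x, y = simple_iteration_system(Phi, (0.0, 0.0))
```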

Let us justify the method in a certain norm ||·|| of the space.

Let us state a convergence theorem; when its conditions are fulfilled, the iteration finds a solution of the system.

Theorem (on convergence). Let:

1) the vector function Φ(X) be defined in the domain S;

2) Φ(X) satisfy in S the inequality ||Φ(X′) - Φ(X″)|| ≤ q ||X′ - X″|| with constant q < 1;

3) the initial approximation X^(0) satisfy an inequality guaranteeing that the successive approximations do not leave S.

Then the iterative process X^(k+1) = Φ(X^(k)) converges to a limit X*, the solution of the system of equations, with ||X* - X^(k)|| ≤ (q^k/(1 - q)) ||X^(1) - X^(0)||.

Remark. The inequality of condition 2) is the Lipschitz condition for the vector function Φ(X) in the domain S with constant q < 1 (the contraction condition). It shows that Φ is a contraction operator in the domain S, i.e., the principle of contraction mappings applies to equation (2.2). The assertions of the theorem mean that equation (2.2) has a solution in the domain S, and the successive approximations converge to this solution at the rate of a geometric progression with ratio q.

Proof. Since X^(1) = Φ(X^(0)), by assumption 3) the first approximation belongs to S. Let us show that X^(k) ∈ S for k = 2, 3, ..., and that neighboring approximations satisfy the inequality

||X^(k+1) - X^(k)|| ≤ q^k ||X^(1) - X^(0)||. (2.3)

We argue by induction. For k = 1 the statement is true. Suppose that the approximations up to X^(k) belong to S and that inequality (2.3) holds for them. Then, taking condition 2) of the theorem into account,

||X^(k+1) - X^(k)|| = ||Φ(X^(k)) - Φ(X^(k-1))|| ≤ q ||X^(k) - X^(k-1)|| ≤ q^k ||X^(1) - X^(0)||,

which completes the inductive step.

The study of various phenomena or processes by mathematical methods is carried out using a mathematical model. A mathematical model is a formalized description of the object under study by means of systems of linear, nonlinear or differential equations, systems of inequalities, a definite integral, a polynomial with unknown coefficients, etc. The mathematical model should capture the most important characteristics of the object under study and reflect the connections between them.

After the mathematical model is drawn up, one proceeds to the formulation of the computational problem. It is established which characteristics of the mathematical model are the initial (input) data, which are the parameters of the model, and which are the output data. The resulting problem is analyzed from the point of view of the existence and uniqueness of a solution.

    At the next stage, a method for solving the problem is selected. In many specific cases, it is not possible to find a solution to the problem in an explicit form, since it is not expressed in terms of elementary functions. Such problems can be solved only approximately. Computational (numerical) methods mean approximate procedures that make it possible to obtain a solution in the form of specific numerical values. Computational methods, as a rule, are implemented on a computer. To solve the same problem, various computational methods can be used, so you need to be able to evaluate the quality of different methods and the effectiveness of their application for a given problem.

Then, to implement the chosen computational method, an algorithm and a computer program are drawn up. It is important for a modern engineer to be able to transform a problem into a form convenient for implementation on a computer and to build an algorithm for solving it.

Currently, software packages are widely used, both those that implement the most general methods for solving a wide range of problems (for example, Mathcad,
MatLAB) and those that implement methods for solving special problems.

    The calculation results are analyzed and interpreted. If necessary, the parameters of the method are adjusted, and sometimes the mathematical model, and a new cycle of solving the problem begins.

1.1. Formulation of the problem

Let a function f(x) be given; it is required to find all or some of the values x* for which f(x*) = 0.

The value x* at which f(x*) = 0 is called a root (or a solution) of the equation. The function f is often assumed to be twice continuously differentiable in a neighborhood of the root.

A root x* of the equation is called simple if the first derivative of the function at that point is not zero, i.e. f′(x*) ≠ 0; if f′(x*) = 0, the root is called a multiple root.

Geometrically, a root of the equation is a point of intersection of the graph of the function with the abscissa axis. Fig. 1 shows the graph of a function with four roots: two simple and two multiple.

Most methods for solving the equation are focused on finding simple roots.

    1.2. The main stages of finding a solution

In the process of approximately finding the roots of the equation, two stages are usually distinguished: localization (or separation) of the root, and root refinement.

Localization of a root consists in determining a segment that contains one and only one root. There is no universal root-localization algorithm. Sometimes it is convenient to localize the root using a graph or a table of function values. The presence of a root on a segment is indicated by a difference in the signs of the function at the ends of the segment. This is based on the following theorem.

Theorem. If a function is continuous on a segment and takes values of different signs at its ends, then the segment contains at least one root of the equation.

However, a root of even multiplicity cannot be localized in this way, since in the vicinity of such a root the function has constant sign. At the root-refinement stage, the approximate value of the root is computed to a given accuracy. The approximate value is refined using various iterative methods, whose essence is the successive computation of values approximating the root.

1.3. Half-division (bisection) method

The half-division method is the simplest and most reliable way to solve a nonlinear equation. Suppose a preliminary analysis has shown that a root of the equation lies on the segment [a, b], the function f is continuous there, and it takes values of different signs at the ends of the segment, i.e. f(a)·f(b) < 0.

Divide the segment in half at the point c = (a + b)/2 and compute the value of the function f(c). If f(c) = 0, then c is the required root, and the problem is solved. Otherwise f(c) has a definite sign, and the function takes values of different signs either at the ends of [a, c] or at the ends of [c, b]. Denote that segment by [a_1, b_1]; its length is half the length of [a, b]. We proceed in the same way with [a_1, b_1]. As a result, we obtain either a root or a new segment [a_2, b_2], etc. (Fig. 2).

Let x_n be the middle of the n-th segment. The length of the n-th segment equals (b - a)/2^n, and since the root also lies in it,

|x_n - x*| ≤ (b - a)/2^n. (1)

Termination criterion. From relation (1) it follows that, for a given accuracy ε of the approximation, the computation ends when (b - a)/2^n ≤ ε. Thus, the number of iterations can be determined in advance. The middle of the last segment is taken as the approximate value of the root.

Example. We will find the root approximately, with a given accuracy. This problem is equivalent to solving an equation, i.e. to finding the zero of a function. Take a segment as the starting one; at its ends the function takes values of different signs. Let us find the number of divisions of the segment required to achieve the required accuracy.

Therefore, no later than at the 6th division we find the root with the required accuracy. The calculation results are presented in Table 1.

Table 1

a_n          1.0000   1.0000   1.0000   1.1250   1.1250   1.1406   1.1406
b_n          2.0000   1.5000   1.2500   1.2500   1.1875   1.1875   1.1562
x_n          1.5000   1.2500   1.1250   1.1875   1.1406   1.1562   1.1484
sign f(a_n)  -        -        -        -        -        -        -
sign f(b_n)  +        +        +        +        +        +        +
f(x_n)       5.5938   0.7585   -0.2959  0.1812   -0.0691  0.0532   -0.0078
b_n - a_n    1.0000   0.5000   0.2500   0.1250   0.0625   0.0312   0.0156
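The half-division procedure is only a few lines of code. A Python sketch (the function names and the test equation are illustrative, not from the source):

```python
def bisection(f, a, b, eps):
    """Half-division method: f must be continuous with f(a)*f(b) < 0."""
    if f(a) * f(b) > 0:
        raise ValueError("f must have opposite signs at a and b")
    while (b - a) / 2 > eps:
        c = (a + b) / 2
        if f(c) == 0:
            return c
        # keep the half on which the sign change (and hence the root) lies
        if f(a) * f(c) < 0:
            b = c
        else:
            a = c
    return (a + b) / 2

# Illustrative use: a root of x^3 + x - 3 = 0 on [1, 2]
root = bisection(lambda x: x**3 + x - 3, 1.0, 2.0, 1e-6)
```

Since the interval is halved at each step, the number of iterations needed for accuracy ε is known in advance: about log2((b - a)/ε).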

1.4. Simple iteration method

Let the equation

f(x) = 0 (1)

be replaced by the equivalent equation

x = φ(x). (2)

Let us choose an initial approximation x_0 in some way and compute the value of the right-hand side at it, obtaining the refined value x_1 = φ(x_0). Substituting x_1 into the right-hand side of equation (2), we obtain a new approximation x_2 = φ(x_1), etc. Continuing this process indefinitely, we obtain a sequence of approximations to the root:

x_{n+1} = φ(x_n), n = 0, 1, 2, ... (3)

Formula (3) is the calculation formula of the simple iteration method.

If the sequence converges, i.e. there exists the limit

ξ = lim x_n, (4)

and the function φ is continuous, then, passing to the limit in (3) and taking (4) into account, we obtain ξ = φ(ξ).

Thus ξ is a root of equation (2), and hence of equation (1).

Convergence of the method. The convergence of the simple iteration method is established by the following theorem.

Theorem. Let the function φ be defined and differentiable on a segment [a, b], with all its values φ(x) ∈ [a, b]. Then, if the condition |φ′(x)| ≤ q < 1 is satisfied on [a, b]:

1) the iteration process converges regardless of the initial value x_0 ∈ [a, b];

2) the limiting value ξ is the only root of the equation x = φ(x) on the segment [a, b].

Proof. Since x_{n+1} = φ(x_n) and ξ = φ(ξ), we can write

ξ - x_{n+1} = φ(ξ) - φ(x_n).

By the mean value theorem (if a function is differentiable on an interval, the slope of the chord drawn between two points of its graph equals the derivative of the function at some intermediate point between them), the right-hand side can be written as φ′(c)(ξ - x_n), where c is some intermediate point of the root-search interval.

Denoting q = max |φ′(x)| over the whole search interval, we can rewrite the previous equality as the inequality

|ξ - x_{n+1}| ≤ q |ξ - x_n|.

Likewise |ξ - x_n| ≤ q |ξ - x_{n-1}|, and so on. Continuing these estimates, we obtain

|ξ - x_n| ≤ q^n |ξ - x_0|,

where n is a natural number. Thus, for the method to converge, the inequality q < 1 must be satisfied.

If we require that the true value of the root differ from the approximate value by at most ε, then the approximations must be computed until the inequality

|x_n - x_{n-1}| ≤ ((1 - q)/q) ε

is satisfied (see the derivation below); then |ξ - x_n| ≤ ε. For small q, the simpler criterion |x_n - x_{n-1}| < ε suffices.

Derivation of the inequality. Consider two successive approximations x_n = φ(x_{n-1}) and x_{n+1} = φ(x_n); hence x_{n+1} - x_n = φ(x_n) - φ(x_{n-1}).

Using the mean value theorem and the condition |φ′(x)| ≤ q, we get

|x_{n+1} - x_n| ≤ q |x_n - x_{n-1}|.

On the other hand, |ξ - x_n| ≤ |ξ - x_{n+1}| + |x_{n+1} - x_n| ≤ q |ξ - x_n| + q |x_n - x_{n-1}|, whence

|ξ - x_n| ≤ (q/(1 - q)) |x_n - x_{n-1}|.

Let us pass to the limit in equality (3); by the continuity of the function φ we obtain ξ = φ(ξ), that is, ξ is a root of equation (2). There are no other roots: if η were another root, then |ξ - η| = |φ(ξ) - φ(η)| ≤ q |ξ - η| with q < 1, which is possible only if ξ = η. Thus the root is unique.

The theorem is proved.

Reducing the equation to the form x = φ(x)
so that the inequality |φ′(x)| < 1 holds

In the general case, a suitable iterative form can be obtained by an equivalent transformation of the original equation, for example by multiplying it by a coefficient λ: λf(x) = 0. Adding x to both sides and denoting φ(x) = x + λf(x), we can require the fulfillment of the sufficient condition |φ′(x)| = |1 + λf′(x)| ≤ q < 1, from which the required value of λ is determined. Since the condition must hold on the entire segment, the largest value of |f′(x)| on this segment should be used for the selection, i.e.

|λ| ≤ 2 / max |f′(x)|,

and this relation defines the range of admissible values of the coefficient λ.

Usually one takes |λ| = 1 / max |f′(x)|, with the sign of λ opposite to that of f′(x).

Figs. 3-6 show four cases of the mutual arrangement of the lines y = x and y = φ(x) and the corresponding iterative processes. Figs. 3 and 4 correspond to the case |φ′(x)| < 1, and the iterative process converges: if 0 < φ′(x) < 1 (Fig. 3), the convergence is one-sided (monotonic), and if -1 < φ′(x) < 0 (Fig. 4), the convergence is two-sided (oscillatory). Figs. 5 and 6 correspond to the case |φ′(x)| > 1, and the iterative process diverges; here, too, there can be one-sided (Fig. 5) and two-sided (Fig. 6) divergence.

Method error. The error estimate (5), |ξ - x_n| ≤ (q/(1 - q)) |x_n - x_{n-1}|, was proved above.

Termination criterion. It follows from estimate (5) that the calculations must be continued until the inequality |x_n - x_{n-1}| ≤ ((1 - q)/q) ε is satisfied. If q ≤ 1/2, the criterion simplifies to |x_n - x_{n-1}| ≤ ε.

Example 1. Let us use the simple iteration method to solve an equation with a given accuracy ε. We transform the equation to the form x = φ(x).

It is easy to verify that a root of the equation lies on the segment in question: the function, computed at the ends of the segment, takes values of different signs, so there is a root inside the segment. The location of the root is illustrated in Fig. 7.

Let us calculate the first and second derivatives of the function. Since the derivative increases monotonically on the segment, it takes its maximum value q at the right end of the segment, and the estimate |φ′(x)| ≤ q < 1 is valid. Thus the convergence condition is met, and the termination criterion can be used. Table 2 shows the approximations obtained by the calculation formula from the chosen initial approximation.

Table 2

0.8415  0.8861  0.8712  0.8774  0.8765

The termination criterion is fulfilled at the fifth approximation. The convergence is two-sided; the qualitative character of such convergence is shown in Fig. 4. The last approximation, 0.8765, is the approximate value of the root with the required accuracy.

Example 2. Solve an equation on a given segment by the simple iteration method with an accuracy of 0.025. The original equation is reduced to the form x = φ(x); the value of λ is selected by the formula given above, and the upper boundary of the given segment is taken as the initial approximation.

0.8  0.78

Since the difference between the successive approximations, 0.02, is already below the required accuracy 0.025, the process stops.

1.5. Newton's method (tangent method)

Newton's method is the most efficient method for solving nonlinear equations. Let the root x* lie on a segment [a, b], i.e. f(a)·f(b) < 0. We assume that the function f is continuous on the segment and twice continuously differentiable inside it. Choose an initial point x_0 and draw the tangent to the graph of the function at the point (x_0, f(x_0)) (Fig. 8).

The tangent equation has the form

y = f(x_0) + f′(x_0)(x - x_0).

The first approximation x_1 is obtained as the abscissa of the point of intersection of this tangent with the Ox axis, that is, by setting y = 0:

x_1 = x_0 - f(x_0)/f′(x_0).

Proceeding in the same way with the point (x_1, f(x_1)), then with (x_2, f(x_2)), etc., we obtain a sequence of approximations, with

x_{n+1} = x_n - f(x_n)/f′(x_n). (6)

Formula (6) is the calculation formula of Newton's method.

Newton's method can be considered a special case of the simple iteration method with φ(x) = x - f(x)/f′(x).
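Formula (6) is easy to try in code. A minimal Python sketch (the test function, its derivative, and the tolerance are illustrative choices, not from the source):

```python
def newton(f, df, x0, eps=1e-10, max_iter=50):
    """Newton's method: x_{n+1} = x_n - f(x_n)/df(x_n)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < eps:
            return x
    raise RuntimeError("no convergence")

# Illustrative use: the root of x^3 + x - 3 = 0 starting from x0 = 1
root = newton(lambda x: x**3 + x - 3, lambda x: 3 * x**2 + 1, x0=1.0)
```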

Method convergence. The convergence of Newton's method is established by the following theorem.

Theorem. Let x* be a simple root of the equation f(x) = 0, and let the function f be twice continuously differentiable in some neighborhood of this root. Then there is a (possibly small) neighborhood of the root such that, for an arbitrary choice of the initial approximation x_0 from this neighborhood, the iteration sequence defined by formula (6) does not leave the neighborhood, and the estimate

|x* - x_{n+1}| ≤ C |x* - x_n|^2 (7)

is valid, i.e. the convergence is quadratic.

The convergence of Newton's method depends on how close the initial approximation is to the root.

Choice of the initial approximation. Let [a, b] be a segment containing the root. If, as the initial approximation, we choose the end of the segment at which f(x_0)·f″(x_0) > 0, then iterations (6) converge, and monotonically. Fig. 8 corresponds to the case when the right end of the segment was chosen as the initial approximation.

Method error. Estimate (7) is inconvenient for practical use. In practice, the following error estimate is used:

|x* - x_{n+1}| ≤ |x_{n+1} - x_n|. (8)

Termination criterion. Estimate (8) allows us to formulate the following criterion for terminating the iterations of Newton's method: for a given accuracy ε, the calculations must be carried out until the inequality |x_{n+1} - x_n| < ε is satisfied.

Example. Calculate the negative root of an equation by Newton's method to within 0.0001. After separating the root, one can verify that it is localized in an interval on which f′ and f″ preserve their signs, so the initial approximation can be taken as x_0 = -11.

x_n        f(x_n)    f′(x_n)   |x_{n+1} - x_n|
-11        -5183     -         0.6662
-10.3336   307.3     4276.8    0.0718
-10.2618   3.496     4185.9    0.0008
-10.261    0.1477    -         -

The required negative root is x* ≈ -10.261.

All men by nature desire to know. (Aristotle, Metaphysics)

    Numerical Methods: Solving Nonlinear Equations

The problems of solving equations constantly arise in practice. For example, in economics, when developing a business, one wants to know when the profit reaches a certain value; in medicine, when studying the action of a drug, it is important to know when the concentration of a substance reaches a given level; and so on.

In optimization problems, it is often necessary to determine the points at which the derivative of a function vanishes, which is a necessary condition for a local extremum.

In statistics, when constructing estimates by the least squares method or the maximum likelihood method, it is also necessary to solve nonlinear equations and systems of equations.

Thus a whole class of problems arises that is related to finding solutions of nonlinear equations such as f(x) = 0.

In the simplest case, we have a function f defined on the segment (a, b) and taking certain values.

To every value x from this segment we can associate a number f(x); this is a functional dependence, a key concept in mathematics.

We need to find the values x* at which f(x*) = 0; such values are called the roots of the function.

Visually, we need to determine the point where the graph of the function intersects the abscissa axis.

    Halving method

    The simplest method for finding the roots of an equation is the halving method, or dichotomy.

    This method is intuitively clear and everyone would act in a similar way when solving a problem.

    The algorithm is as follows.

Suppose we have found two points a and b such that f(a) and f(b) have different signs; then between these points there is at least one root of the function.

Divide the segment in half and introduce the middle point c = (a + b)/2.

Then either f(a)·f(c) ≤ 0 or f(c)·f(b) ≤ 0.

We keep that half of the segment for which the values at the ends have different signs. Now we divide this segment in half again and keep the part on whose boundaries the function has different signs, and so on, until the required accuracy is achieved.

    Obviously, we will gradually narrow down the area where the root of the function is located, and, therefore, we will define it with a certain degree of accuracy.

    Note that the described algorithm is applicable to any continuous function.

    The advantages of the halving method include its high reliability and simplicity.

    The disadvantage of this method is the fact that before you start using it, you need to find two points, the values ​​of the function in which have different signs. Obviously, the method is inapplicable for roots of even multiplicity and also cannot be generalized to the case of complex roots and to systems of equations.

The order of convergence of the method is linear: at each step the interval containing the root is halved, so the more iterations are done, the more accurately the root is determined.

Newton's method: theoretical foundations

The classical Newton (tangent) method consists in the following: if x_n is some approximation to the root of the equation f(x) = 0, then the next approximation is defined as the root of the tangent to the function f(x) drawn at the point x_n.

The equation of the tangent to the function f(x) at the point x_n has the form:

y = f(x_n) + f′(x_n)(x - x_n).

In the tangent equation we put y = 0 and x = x_{n+1}.

Then the algorithm of sequential computation in Newton's method is as follows:

x_{n+1} = x_n - f(x_n)/f′(x_n).

The convergence of the tangent method is quadratic: the order of convergence is 2.

    Thus, the convergence of Newton's tangent method is very fast.

    Remember this wonderful fact!

    The method is generalized to the complex case without any changes.

    If the root is a root of the second multiplicity or higher, then the order of convergence drops and becomes linear.
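The drop to linear convergence at a multiple root is easy to observe numerically. A small illustrative experiment (the function (x - 1)^2, which has a double root at x = 1, is our choice, not from the text):

```python
def newton_step(f, df, x):
    return x - f(x) / df(x)

# f has a double root at x = 1, so f'(x*) = 0 there and
# Newton's method loses its quadratic convergence.
f = lambda x: (x - 1) ** 2
df = lambda x: 2 * (x - 1)

x = 2.0
errors = []
for _ in range(10):
    x = newton_step(f, df, x)
    errors.append(abs(x - 1))
# For this double root each step exactly halves the error:
# errors = 1/2, 1/4, 1/8, ... (linear convergence with ratio 1/2).
```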

Exercise 1. Using the tangent method, find the solution of the equation on the segment (0, 2).

Exercise 2. Using the tangent method, find the solution of the equation on the interval (1, 3).

Among the disadvantages of Newton's method is its locality: it is guaranteed to converge from an arbitrary starting approximation only under additional conditions on the function; otherwise convergence is guaranteed only in some neighborhood of the root.

Another disadvantage of Newton's method is the need to calculate the derivative at each step.

    Visualization of Newton's method

Newton's method (the tangent method) is applied if the equation f(x) = 0 has a root and the following conditions are met:

1) the function y = f(x) is defined and continuous on [a; b];

2) f(a)·f(b) < 0 (the function takes values of different signs at the ends of the segment [a; b]);

3) the derivatives f′(x) and f″(x) preserve their sign on the segment [a; b] (that is, the function f(x) either increases or decreases on [a; b] while maintaining the direction of its convexity).

The main idea of the method is as follows: on the segment [a; b] a number x_0 is chosen at which f(x_0) has the same sign as f″(x_0), i.e. the condition f(x_0)·f″(x_0) > 0 is satisfied. Thus a point with abscissa x_0 is selected at which the tangent to the curve y = f(x) on the segment [a; b] intersects the Ox axis. As the point x_0, it is convenient first to choose one of the ends of the segment.

    Let's consider Newton's method with a specific example.

    Let us be given an increasing function y = f (x) = x 2 -2, continuous on the segment (0; 2), and having f "(x) = 2 x > 0 and f "" (x) = 2 > 0 .

    Figure 1. f(x) = x 2 - 2

    The equation of the tangent line in general form is:

    y - y 0 = f'(x 0)(x - x 0).

    In our case: y - y 0 = 2x 0 (x - x 0). As the point x 0 we select the point B 1 (b; f(b)) = (2; 2). We draw the tangent to the function y = f(x) at the point B 1 and denote the intersection point of the tangent and the axis Ox by x 1. We get the equation of the first tangent: y - 2 = 2*2(x - 2), i.e. y = 4x - 6.

    Intersection point of the tangent and the axis Ox: x 1 = 6/4 = 1.5.

    Figure 2. The result of the first iteration

    Next we find the point where the perpendicular to the axis Ox through the point x 1 meets the curve y = f(x); this gives the point B 2 = (1.5; 0.25). We again draw the tangent to the function y = f(x) at the point B 2 and denote the intersection point of the tangent and the axis Ox by x 2.

    Equation of the second tangent line: y - 0.25 = 2*1.5(x - 1.5), i.e. y = 3x - 4.25.

    Intersection point of the tangent and the axis Ox: x 2 = 4.25/3 ≈ 1.4167.

    Figure 3. Second iteration of Newton's method

    Then we find the point where the perpendicular to the axis Ox through the point x 2 meets the curve y = f(x), obtaining the point B 3, and so on.

    Figure 4. The third step of the tangent method

    The first root approximation is determined by the formula:

    x 1 = x 0 - f(x 0)/f'(x 0) = 2 - 2/4 = 1.5.

    The second root approximation is determined by the formula:

    x 2 = x 1 - f(x 1)/f'(x 1) = 1.5 - 0.25/3 ≈ 1.4167.

    The third root approximation is determined by the formula:

    x 3 = x 2 - f(x 2)/f'(x 2) ≈ 1.4142.

    Thus, the i-th approximation of the root is determined by the formula:

    x i = x i-1 - f(x i-1)/f'(x i-1).

    Calculations are carried out until the decimal places required in the answer coincide, or until a given precision e is reached, that is, until the inequality |x i - x i-1 | < e is fulfilled.

    In our case, let's compare the approximation obtained at the third step with the real answer calculated on the calculator:

    Figure 5. Root of 2 calculated on a calculator

    As you can see, already at the third step we got an error of about 0.000002.

    Thus, you can calculate the value of the square root of 2 to any degree of accuracy. This remarkable method was invented by Newton and allows one to find the roots of very complicated equations.

    Newton's Method: A C++ Application

    In this article, we will automate the process of calculating the roots of equations by writing a console application in C++. We will develop it in Visual C++ 2010 Express, a free and very convenient C++ development environment.

    Launch Visual C++ 2010 Express. The start window of the program will appear. In the left corner, click "Create Project".

    Figure 1. Visual C++ 2010 Express Start Page

    In the menu that appears, select "Win32 Console Application", enter the name of the application "Newton_Method".

    Figure 2. Project creation

    // Newton_Method.cpp: defines the entry point for the console application

    #include "stdafx.h"
    #include <iostream>
    #include <cmath>

    using namespace std;

    double f (double x) // returns the value of the function f(x) = x^2 - 2
    {
        return x * x - 2;
    }

    double df (double x) // returns the value of the derivative f'(x) = 2x
    {
        return 2 * x;
    }

    double d2f (double x) // returns the value of the second derivative f''(x) = 2
    {
        return 2;
    }

    int _tmain (int argc, _TCHAR* argv[])
    {
        int exit = 0, i; // variables for exit and for the iteration counter
        double x0, xn;   // successive approximations to the root
        double a, b, eps; // segment boundaries and required precision

        do
        {
            i = 0;
            cout << "Please input a, b\n=>";
            cin >> a >> b; // enter the boundaries of the segment on which we search for the root

            cout << "\nPlease input epsilon\n=>";
            cin >> eps; // enter the required accuracy of the calculations

            if (a > b) // if the user confused the boundaries of the segment, swap them
            {
                double t = a; a = b; b = t;
            }

            if (f(a) * f(b) > 0) // if the signs of the function at the ends coincide, there is no root
            {
                cout << "\nError! No roots in this interval\n";
            }
            else
            {
                if (f(a) * d2f(a) > 0) x0 = a; // the condition f(x0)*f''(x0) > 0 selects the starting point
                else x0 = b;

                xn = x0 - f(x0) / df(x0); // calculate the first approximation
                cout << ++i << "-th iteration = " << xn << "\n";

                while (fabs(x0 - xn) > eps) // keep iterating until the required precision is reached
                {
                    x0 = xn;
                    xn = x0 - f(x0) / df(x0); // Newton's formula itself
                    cout << ++i << "-th iteration = " << xn << "\n";
                }

                cout << "\nRoot = " << xn << "\n";
            }

            cout << "\nExit?=>";
            cin >> exit;
        } while (exit != 1); // repeat until the user enters exit = 1

        return 0;
    }

    Let's see how it works. Click on the green triangle in the upper left corner of the screen, or press F5.

    If the compilation error "Error LNK1123: failure during conversion to COFF: file invalid or corrupt" occurs, it can be fixed either by installing Visual Studio 2010 Service Pack 1, or by disabling incremental linking in the project settings (Properties -> Linker).

    Figure 4. Resolving the project's linker error

    We will search for the roots of the function f(x) = x 2 - 2.

    First, let's check the operation of the application on "wrong" input data: on the segment [3; 5] there are no roots, so our program should return an error message.

    We have an application window:

    Figure 5. Entering input data

    Let's enter the segment boundaries 3 and 5 and the accuracy 0.05. The program, as expected, reported that there are no roots on this segment.

    Figure 6. Error "There are no roots on this segment!"

    We are not going to exit yet, so at the "Exit?" prompt we enter "0".

    Now let's check the operation of the application on the correct input data. Let's introduce a segment and an accuracy of 0.0001.

    Figure 7. Calculation of the root with the required accuracy

    As we can see, the required accuracy was achieved already at the 4th iteration.

    To exit the application, enter "1" at the "Exit?" prompt.

    Secant method

    To avoid calculating the derivative, Newton's method can be simplified by replacing the derivative with an approximate value computed from the two previous points:

    f'(x n) ≈ (f(x n) - f(x n-1)) / (x n - x n-1).

    The iterative process then takes the form:

    x n+1 = x n - f(x n)(x n - x n-1) / (f(x n) - f(x n-1)).

    This is a two-step iterative process, because it uses the two previous approximations to find the next one.

    The order of convergence of the secant method is lower than that of the tangent method and, in the case of a simple root, equals p = (1 + √5)/2 ≈ 1.618.

    This remarkable value is the golden ratio, the positive root of p 2 = p + 1.

    Let us verify this, assuming for convenience that the error of the n-th approximation is e n = x n - x*.

    Then, up to infinitesimals of higher order, e n+1 ≈ C e n e n-1.

    Dropping the remainder, we obtain a recurrence relation whose solution is naturally sought in the form e n ≈ A e n-1 ^ p.

    After substitution, we have: A^(1+p) e n-1 ^ (p^2) = C A e n-1 ^ (p+1), whence p 2 = p + 1 and p = (1 ± √5)/2.

    For convergence, p must be positive, therefore p = (1 + √5)/2 ≈ 1.618.

    Since knowledge of the derivative is not required, with the same amount of computation the secant method (despite its lower order of convergence) can achieve greater accuracy than the tangent method.

    Note that near the root one has to divide by a small number, and this leads to a loss of accuracy (especially in the case of multiple roots). Therefore, one chooses a relatively small e, carries out the calculations, and continues them while the modulus of the difference of neighboring approximations decreases.

    As soon as it begins to grow, the calculations are stopped and the last iteration is discarded.

    This procedure for deciding when to stop the iterations is called Garwick's criterion.

    Parabola method

    Consider a three-step method, in which the approximation x n+1 is determined from the three previous points x n, x n-1 and x n-2.

    To do this, we replace, similarly to the secant method, the function f(x) with an interpolation parabola passing through the points x n, x n-1 and x n-2.

    In Newton's form, it is:

    f(x) ≈ f(x n) + f(x n; x n-1)(x - x n) + f(x n; x n-1; x n-2)(x - x n)(x - x n-1).

    The point x n+1 is defined as the root of this polynomial that is closer in absolute value to the point x n.

    The order of convergence of the parabola method (approximately 1.84) is higher than that of the secant method, but lower than that of Newton's method.

    An important difference from the previously considered methods is that even if f(x) is real for real x and the starting approximations are chosen real, the parabola method can lead to a complex root of the original problem.

    This method is very convenient for finding the roots of high degree polynomials.

    Simple iteration method

    The problem of finding solutions of equations can be formulated either as the problem of finding roots, f(x) = 0, or as the problem of finding a fixed point, x = φ(x).

    Let φ map the interval into itself and be a contraction: |φ(x 1) - φ(x 2)| ≤ q |x 1 - x 2| with q < 1 (in particular, the contraction property, as is easy to see, implies that φ is continuous).

    By Banach's theorem, there exists a unique fixed point x* = φ(x*).

    It can be found as the limit of the simple iterative procedure

    x n+1 = φ(x n),

    where the initial approximation x 0 is an arbitrary point of the interval.

    If the function φ is differentiable, then a convenient criterion of contraction is the number q = sup |φ'(x)| < 1. Indeed, by Lagrange's theorem,

    φ(x 1) - φ(x 2) = φ'(ξ)(x 1 - x 2), ξ between x 1 and x 2.

    Thus, if the derivative is everywhere less than one in absolute value, the mapping is a contraction.

    The condition q < 1 is essential: for example, for φ(x) = x + e^(-x) on (0, +∞) there is no fixed point, although 0 < φ'(x) < 1 everywhere. The rate of convergence depends on the value of q: the smaller q, the faster the convergence.

    Exercise:

    1) Using the iteration method, solve the system of nonlinear equations with an accuracy of 0.001.

    2) Using Newton's method, solve the system of nonlinear equations with an accuracy of 0.001.

    Task No. 1. Using the iteration method, solve a system of nonlinear equations with an accuracy of 0.001.

    Theoretical part.

    The iteration method is a way of solving mathematical problems numerically. Its essence is an algorithm that, from a known approximation (approximate value) of the desired quantity, finds the next, more accurate approximation. It is applicable when the sequence of approximations produced by the algorithm converges.

    This method is also called the method of successive approximations, the method of repeated substitutions, the method of simple iterations, etc.

    Newton's method, Newton's algorithm (also known as the tangent method), is an iterative numerical method for finding a root (zero) of a given function. The method was first proposed by the English physicist, mathematician and astronomer Isaac Newton (1643-1727). The search for a solution is carried out by constructing successive approximations and is based on the principle of simple iteration. The method has quadratic convergence. An improvement of the method is the combined method of chords and tangents. Newton's method can also be used to solve optimization problems, in which one needs to find a zero of the first derivative, or of the gradient in the case of a multidimensional space.

    Justification

    To solve the equation f(x) = 0 numerically by the simple iteration method, it must be reduced to the form x = φ(x), where φ is a contraction mapping.

    For the best convergence of the method, the condition φ'(x*) = 0 should hold at the root x*. The solution is sought in the form φ(x) = x + α(x) f(x); then:

    φ'(x*) = 1 + α(x*) f'(x*) = 0.

    Assuming that the approximation point is "close enough" to the root and that the given function is continuous, the final formula for α is: α(x) = -1/f'(x).

    With this in mind, the function φ is defined by the expression: φ(x) = x - f(x)/f'(x).

    In a neighborhood of the root, this function performs a contraction mapping, and the algorithm for finding a numerical solution of the equation reduces to the iterative calculation procedure:

    x n+1 = x n - f(x n)/f'(x n).

    Job options

    №1. 1)
    2)

    №2. 1)
    2)

    №3. 1)
    2)

    №4. 1)
    2)

    №5. 1)
    2)

    №6. 1)
    2)

    №7. 1)
    2)

    №8. 1)
    2)

    №9. 1)
    2)

    №10.1)
    2)

    №11.1)
    2)

    №12.1)
    2)

    №13.1)
    2)

    №14.1)
    2)

    №15.1)
    2)

    №16.1)
    2)

    №17.1)
    2)

    №18.1)
    2)

    №19.1)
    2)

    №20.1)
    2)

    №21. 1)
    2)

    №22. 1)
    2)

    №23. 1)
    2)

    №24. 1)
    2)

    №25. 1)
    2)

    №26. 1)
    2)

    №27. 1)
    2)

    №28. 1)
    2)

    №29. 1)
    2)

    №30. 1)
    2)

    Sample assignment

    №1. 1)
    2)

    An example of solving a system of nonlinear equations by the iteration method



    Let's rewrite this system as:

    We carry out the separation of the roots graphically (Fig. 1). From the graph we see that the system has one solution, contained in the region D: 0 < x < 0.3; -2.2 < y < -1.8.

    Let us make sure that the iteration method is applicable for refining the solution of the system; to that end, we write it in the following form:

    Writing the system as x = φ1(x, y), y = φ2(x, y), we have in the domain D:

    |∂φ1/∂x| + |∂φ1/∂y| < 1;

    |∂φ2/∂x| + |∂φ2/∂y| < 1.

    Thus, the convergence conditions are satisfied.

    Table 2

    x        y
    0.15     -2       -0.45    -0.4350  -0.4161  -0.1384
    0.1616   -2.035   -0.4384  -0.4245  -0.4477  -0.1492
    0.1508   -2.0245  -0.4492  -0.4342  -0.4382  -0.1461
    0.1539   -2.0342  -0.4461  -0.4313  -0.4470  -0.1490
    0.1510   -2.0313  -0.4490  -0.4341  -0.4444  -0.1481
    0.1519   -2.0341  -0.4481  -0.4333  -0.4469  -0.1490
    0.1510   -2.0333  -0.449   -0.4341  -0.4462  -0.1487
    0.1513   -2.0341  -0.4487  -0.4340  -0.4469  -0.1490
    0.1510   -2.0340

    For the initial approximations we take x 0 = 0.15, y 0 = -2.

    (see Table 2). The answer is then written: x ≈ 0.151; y ≈ -2.034.

    An example of solving a system of nonlinear equations by Newton's method

    We carry out the separation of the roots graphically (Fig. 2). To build the graphs of the functions, we compile a table of values of the functions entering the first and second equations (Table 3).

    The values for x can be chosen from the following conditions: from the first equation, -1 ≤ 1.2x + 0.4 ≤ 1, i.e. -1.16 ≤ x ≤ 0.5; from the second equation, i.e. ... Thus, ... .

    The system has two solutions. Let us refine one of them, belonging to the region D: 0.4 < x < 0.5; -0.76 < y < -0.73. For the initial approximation we take x 0 = 0.4, y 0 = -0.75. We have:


    Table 3

    x          -1.1   -1     -0.8   -0.6   -0.2    -0.4   0.2     0.4    0.5
    x^2        1.21   1      0.64   0.36   0.04    0.16   0.04    0.16   0.25
    0.8x^2     0.97   0.8    0.51   0.29   0.032   0.13   0.032   0.13   0.2
    1-0.8x^2   0.03   0.2    0.49   0.71   0.97    0.87   0.97    0.87   0.8
               0.02   0.13   0.33   0.47   0.65    0.58   0.67    0.65   0.58   0.53
               ±0.14  ±0.36  ±0.57  ±0.69  ±0.81   ±0.76  ±0.82   ±0.81  ±0.76  ±0.73
    1.2x       -1.32  -1.2   -0.96  -0.72  -0.24   -0.48  0.24    0.48   0.6
    0.4+1.2x   -0.92  -0.8   -0.56  -0.32  0.16    -0.08  0.4     0.64   0.88
    2x-y       -1.17  -0.93  -0.59  -0.33  0.16    -0.08  0.41    0.69   2.06   1.08   1.57
               -1.03  -1.07  -1.01  -0.87  -0.56   -0.72  -0.41   -0.29  -1.26  -1.28  -0.57

    We carry out the refinement of the roots using Newton's method:

    x n+1 = x n - (f1 ∂f2/∂y - f2 ∂f1/∂y)/J;

    y n+1 = y n - (f2 ∂f1/∂x - f1 ∂f2/∂x)/J;

    where J = (∂f1/∂x)(∂f2/∂y) - (∂f1/∂y)(∂f2/∂x) is the Jacobian, and all functions and derivatives are evaluated at the point (x n, y n).


    All calculations are carried out according to Table 4.

    Table 4

    0.10     0.017    -0.0060  0.0247   -0.0027  -0.0256  0.0001   0.0004
    0.2701   0.0440   -0.0193  0.0794   -0.0080  -0.0764  -0.0003  0.0013
    2.6197   3.2199   2.9827   3.1673
    -0.0208  -2.25    0.1615   -2.199   0.1251   -2.1249  0.1452   -2.2017
    -1.1584  0.64     -1.523   0.8      -1.4502  0.7904   -1.4904  0.7861
    0.1198   -0.0282  -0.0131  0.059    -0.0007  -0.0523  -0.0002  0.0010
    0.9988   0.0208   0.9869   -0.1615  0.9921   -0.1251  -0.9894  -0.1452
    0.55     0.733    1.6963   1.7165
    0.128    0.8438   0.2      0.8059   0.1952   0.7525   0.1931   0.8079
    0.4      -0.75    0.50     -0.733   0.4940   -0.7083  0.4913   -0.7339  0.4912  -0.7335

    Answer: x ≈ 0.491, y ≈ -0.734.

    Control questions

    1) Present on the graph the possible cases of solving a system of two nonlinear equations.

    2) Formulate the statement of the problem of solving a system of n nonlinear equations.

    3) Give the iterative formulas of the simple iteration method in the case of a system of two nonlinear equations.

    4) Formulate a theorem on the local convergence of Newton's method.

    5) List the difficulties encountered when using Newton's method in practice.

    6) Explain how you can modify Newton's method.

    7) Draw in the form of block diagrams the algorithm for solving systems of two nonlinear equations by the methods of simple iteration and Newton.


    Laboratory work No. 3