

    The cross-correlation function (CCF) is an estimate of the correlation properties between two random processes f1 and f2 represented by field observations on two profiles, two traces, etc.

    The CCF is calculated by the formula:

    B12(m) = Σ (f1,i − f̄1)(f2,i+m − f̄2) / (n−m), (4.7)

    where n is the number of points in each realization, i.e. on each profile, trace, etc., and f̄1, f̄2 are the mean values of the observed data on these profiles (traces).

    When the mean values are equal to zero, formula (4.7) simplifies to:

    B12(m) = Σ f1,i f2,i+m / (n−m). (4.8)

    At m = 0, the CCF value is formed from the products of the field values at the same observation samples on the two profiles (traces).

    At m = ±1, the CCF value is formed from the field values shifted by one sample. We will assume that a shift of the subsequent profile (f2) by one sample to the left relative to the previous profile (f1) corresponds to a positive shift m = +1, and a shift to the right corresponds to m = −1.

    Since different field values are multiplied at +m and at −m, the CCF, unlike the ACF, is not an even function: B12(m) ≠ B12(−m).

    At m = ±2, the CCF value is formed from the field values shifted by two samples, and so on.

    In practice, the normalized CCF is often used:

    r12(m) = B12(m)/(σ1 σ2), (4.9)

    where σ1 and σ2 are the standard deviations of the field values on the first and second profiles (traces).
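    As a minimal numerical sketch of formula (4.7) and the normalized CCF (all profile data, names, and values below are illustrative, not from the text): the same anomaly appears on a second profile shifted by three samples, and the position of the CCF extremum recovers that shift.

```python
import numpy as np

# Hypothetical profiles: the same anomaly, shifted by 3 samples on the
# second profile, plus weak independent noise (illustrative data only).
rng = np.random.default_rng(0)
i = np.arange(100)
anomaly = np.exp(-((i - 50) / 2.0) ** 2)
f1 = anomaly + 0.02 * rng.standard_normal(100)
f2 = np.roll(anomaly, 3) + 0.02 * rng.standard_normal(100)

def ccf(x, y, m):
    """B12(m): centered products averaged over the n-|m| overlapping points."""
    n = len(x)
    x0, y0 = x - x.mean(), y - y.mean()
    if m >= 0:
        return float(np.dot(x0[:n - m], y0[m:]) / (n - m))
    return float(np.dot(x0[-m:], y0[:n + m]) / (n + m))

shifts = np.arange(-10, 11)
r = [ccf(f1, f2, m) / (f1.std() * f2.std()) for m in shifts]  # normalized CCF
best = int(shifts[int(np.argmax(r))])
print(best)  # extremum at m = +3: the anomaly is shifted by 3 samples
```

    The sign convention matches the text: a positive extremum position corresponds to the shift of the anomaly on the second profile relative to the first.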

    The CCF has found application in solving three main problems of geophysical data processing:

    1) Estimation of the correlation properties of a signal when the noise is uncorrelated between profiles (traces) and the signal shape changes little from profile to profile (trace to trace). This usually holds in practice, since the distance between profiles is chosen so that the signals are correlated from profile to profile while the noise is not; in seismic surveys, geophone spacings are likewise chosen so that irregular noise waves are uncorrelated between adjacent traces. Writing the observations as signal plus noise, f1 = s1 + q1 and f2 = s2 + q2, the noise terms average out and the CCF reduces to

    B12(m) ≈ Σ s1,i s2,i+m / (n−m),

    i.e. if the waveforms coincide, this last sum is equal to the ACF of the signal.

    Therefore, the CCF estimates the correlation properties of the signal more reliably than the ACF.

    2) Estimation of the signal strike from the positive extrema of the CCF. Positive CCF extrema indicate a correlation of the signal between profiles (traces), since the value of the argument m at which a CCF extremum is reached corresponds to the shift of the signal on the next profile relative to its position on the previous one. Thus, the signal shift from profile to profile is determined from the positions of the positive CCF extrema, which yields an estimate of the signal strike.

    In the case of signals (anomalies) with different strikes, the CCF has two or more positive extrema.

    Figure 4.2a shows the results of observations of a physical field on five profiles and the corresponding CCF graphs, from which the strike of the signals is determined; it corresponds to a shift of two samples from profile to profile.

    In the case of interference of two signals, as shown in Fig. 4.2b, two positive extrema are fixed at two different shifts, which subsequently, when the data are summed over several profiles along the signal strike, makes it possible to separate the signals clearly over the survey area.

    3) Finally, a sharp shift of the CCF extrema for some pair of profiles compared with the extrema of adjacent pairs allows the CCF to be used to detect faults in the field distribution, as shown in Fig. 4.2c. Faults whose strike is close to that of the survey profiles are usually mapped from such a shift of the CCF extrema.

    When processing seismic records, computing the CCF between adjacent traces provides an estimate of the total static and kinematic corrections, determined by the abscissa of the positive CCF extremum. With the kinematics known, i.e. the velocity characteristics of the time section, it is then straightforward to determine the static correction.

    6.2. Cross-correlation functions of signals.

    The cross-correlation function (CCF) of two different signals describes both the degree of similarity of the shapes of the two signals and their mutual position along the coordinate (the independent variable). Generalizing formula (6.1.1) for the autocorrelation function to two different signals s(t) and u(t), we obtain the following scalar product of the signals:

    Bsu(τ) = ∫ s(t) u(t+τ) dt. (6.2.1)

    The cross-correlation of signals characterizes a certain correlation of the phenomena and physical processes reflected in these signals, and can serve as a measure of the "stability" of this relationship when the signals are processed separately in different devices. For finite-energy signals the CCF is also finite, and

    |Bsu(τ)| ≤ ||s(t)||·||u(t)||,

    which follows from the Cauchy-Bunyakovsky inequality and the independence of signal norms from a shift along the coordinate.

    Changing the variable to t = t−τ in formula (6.2.1), we obtain:

    Bsu(τ) = ∫ s(t−τ) u(t) dt = ∫ u(t) s(t−τ) dt = Bus(−τ).

    It follows that the parity condition does not hold for the CCF, Bsu(τ) ≠ Bsu(−τ), and the CCF values are not required to have a maximum at τ = 0.

    Fig. 6.2.1. Signals and their CCF.

    This can be clearly seen in Fig. 6.2.1, which shows two identical signals centered at the points 0.5 and 1.5. Calculation by formula (6.2.1) with gradually increasing values of τ means successive shifts of the signal s2(t) to the left along the time axis (for each value of s1(t), the values of s2(t+τ) are taken for the product under the integral). At τ = 0 the signals are orthogonal and B12(τ) = 0. The maximum of B12(τ) is observed at τ = 1, when the signal s2(t+τ) coincides completely with s1(t).

    The same CCF values are obtained from formulas (6.2.1) and (6.2.1") at the same mutual position of the signals: shifting u(t) by the interval τ to the right of s(t) along the time axis is equivalent to shifting s(t) by the same interval to the left of u(t), i.e. Bsu(τ) = Bus(−τ).

    Fig. 6.2.2. Cross-covariance functions of signals.

    Fig. 6.2.2 shows examples of the CCF for a rectangular signal s(t) and two identical triangular signals u(t) and v(t). All signals have the same duration T, with the signal v(t) shifted forward by the interval T/2.

    The signals s(t) and u(t) occupy the same position in time, and the "overlap" area of the signals is maximal at τ = 0, which is fixed by the function Bsu. At the same time, the function Bsu is sharply asymmetric: since the shape of u(t) is asymmetric while that of s(t) is symmetric (relative to the signal centers), the "overlap" area changes differently depending on the direction of the shift (the sign of τ as its magnitude grows from zero). When the initial position of u(t) is moved to the left along the time axis (ahead of s(t), the signal v(t)), the shape of the CCF remains unchanged and shifts to the right by the same amount (the function Bsv in Fig. 6.2.2). If the functions in (6.2.1) are interchanged, the new function Bvs is the mirror image of Bsv with respect to τ = 0.

    Taking these features into account, the total CCF is calculated, as a rule, separately for positive and negative delays:

    Bsu(τ) = ∫ s(t) u(t+τ) dt. Bus(τ) = ∫ u(t) s(t+τ) dt. (6.2.1")

    Cross-correlation of noisy signals. For two noisy signals u(t) = s1(t) + q1(t) and v(t) = s2(t) + q2(t), applying the method used to derive formula (6.1.13) with the copy of the signal s(t) replaced by the signal s2(t), it is easy to obtain the cross-correlation formula in the form:

    Buv(τ) = Bs1s2(τ) + Bs1q2(τ) + Bq1s2(τ) + Bq1q2(τ). (6.2.2)

    The last three terms on the right-hand side of (6.2.2) decay to zero as τ increases. Over long signal intervals the expression can be written in the form

    Buv(τ) = Bs1s2(τ) + ms1·mq2 + mq1·ms2 + mq1·mq2, (6.2.3)

    where ms1, ms2, mq1, mq2 are the mean values of the corresponding signals and noises.

    With zero mean values of the noise and statistical independence of the noise from the signals, we have:

    Buv(τ) → Bs1s2(τ).

    CCF of discrete signals. All the properties of the CCF of analog signals also hold for the CCF of discrete signals, as do the features of discrete signals described above for the discrete ACF (formulas 6.1.9-6.1.12). In particular, at Δt = const = 1 for signals x(k) and y(k) with K samples:

    Bxy(n) = Σk xk×yk-n. (6.2.4)

    When normalized in units of power:

    Bxy(n) = (1/K)·Σk xk×yk-n. (6.2.5)
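    Formula (6.2.4) can be implemented directly and checked against NumPy's correlation routine (illustrative sample values): with NumPy's convention, np.correlate(x, y, 'full')[K−1+n] equals the sum over k of x(k)·y(k−n).

```python
import numpy as np

# Direct implementation of the discrete CCF (6.2.4) with zero padding.
x = np.array([1.0, 3.0, 2.0, 0.0, -1.0])
y = np.array([0.0, 1.0, 2.0, 1.0, 0.0])
K = len(x)

def Bxy(n):
    """Sum over k of x[k]*y[k-n], y extended by zeros (energy units)."""
    return sum(x[k] * y[k - n] for k in range(K) if 0 <= k - n < K)

full = np.correlate(x, y, mode="full")     # lags -(K-1) .. K-1
for n in range(-(K - 1), K):
    assert np.isclose(Bxy(n), full[K - 1 + n])
print("matches np.correlate")
```

    For the power normalization (6.2.5), the same sums are simply divided by the number of samples K.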

    Estimation of periodic signals in noise. A noisy signal can be evaluated by cross-correlation with a "reference" signal by trial and error, adjusting the cross-correlation function to its maximum value.

    For a signal u(k) = s(k) + q(k) whose noise is statistically independent of the signal and has a mean tending to zero, the cross-correlation function (6.2.2) with a signal template p(k) (with q2(k) = 0) takes the form:

    Bup(k) = Bsp(k) + Bqp(k).

    Since Bqp(k) → 0 as N increases, Bup(k) → Bsp(k). Obviously, the function Bup(k) has its maximum when p(k) = s(k). By varying the shape of the template p(k) and maximizing the function Bup(k), one can obtain an estimate of s(k) in the form of the optimal shape of p(k).
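    The template search described above can be sketched as follows (hypothetical pulse shape, position, and noise level; an illustration rather than the text's own computation): the maximum of Bup locates the hidden pulse.

```python
import numpy as np

# A pulse hidden in noise, and a template with the pulse's shape.
rng = np.random.default_rng(2)
N = 400
s = np.zeros(N)
s[150:170] = np.hanning(20)              # "true" pulse at position 150
u = s + 0.2 * rng.standard_normal(N)     # noisy observation u = s + q
p = np.hanning(20)                        # template p(k)

# B_up at every template position: a sliding scalar product <u, p>.
B_up = np.correlate(u, p, mode="valid")
k_hat = int(np.argmax(B_up))
print(k_hat)  # near 150: the template aligns with the hidden pulse
```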

    Function of cross-correlation coefficients. The function of cross-correlation coefficients is a quantitative indicator of the degree of similarity of the signals s(t) and u(t). Like the function of autocorrelation coefficients, it is calculated through the centered values of the functions (to compute the cross-covariance it is sufficient to center only one of them) and is normalized by the product of the standard deviations of s(t) and u(t):

    ρsu(τ) = Csu(τ)/(σs·σu). (6.2.6)

    As the shift τ varies, the correlation coefficient takes values from −1 (complete inverse correlation) to 1 (complete similarity, one hundred percent correlation). At shifts τ where ρsu(τ) = 0, the signals are independent of each other (uncorrelated). The cross-correlation coefficient makes it possible to establish the presence of a relationship between signals regardless of the physical nature of the signals and of their magnitude.
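    A minimal sketch of the coefficient at zero shift, with illustrative sample values: centering and dividing by the product of standard deviations yields 1 for a perfectly linearly related signal and −1 for an inverted copy.

```python
import numpy as np

s = np.array([1.0, 2.0, 4.0, 3.0, 5.0])
u = 2.0 * s + 1.0      # perfectly linearly related signal
w = -s                 # inverted copy

def rho(x, y):
    """Cross-correlation coefficient at zero shift."""
    xc, yc = x - x.mean(), y - y.mean()
    return float(np.dot(xc, yc) / (len(x) * x.std() * y.std()))

print(rho(s, u))   # ~ 1.0  (complete direct correlation)
print(rho(s, w))   # ~ -1.0 (complete inverse correlation)
```

    Note that the result is unchanged by any rescaling or offset of the signals, which is exactly the unit-independence stated above.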

    When the CCF of noisy discrete signals of limited length is calculated by formula (6.2.4), values |ρsu(n)| > 1 may occur.

    For periodic signals the concept of the CCF is usually not applied, except for signals with the same period, for example the input and output signals of a system under study.

    SIGNALS AND LINEAR SYSTEMS

    Signals and linear systems. Correlation of signals

    Topic 6. SIGNAL CORRELATION

    The utmost fear and the utmost zeal of courage alike upset the stomach and cause diarrhea.

    Michel Montaigne. French jurist-thinker, 16th century.

    What a thing! Two functions have a 100% correlation with a third and are orthogonal to each other. Well, the Almighty had His jokes during the creation of the World.

    Anatoly Pyshmintsev. Novosibirsk geophysicist of the Ural school, XX century.

    1. Autocorrelation functions of signals. The concept of autocorrelation functions (ACF). ACF of time-limited signals. ACF of periodic signals. Autocovariance functions (ACVF). ACF of discrete signals. ACF of noisy signals. ACF of code signals.

    2. Cross-correlation functions of signals (CCF). Cross-correlation function (CCF). Cross-correlation of noisy signals. CCF of discrete signals. Estimation of periodic signals in noise. Function of cross-correlation coefficients.

    3. Spectral densities of correlation functions. Spectral density of the ACF. Signal correlation interval. Spectral density of the CCF. Calculation of correlation functions using the FFT.

    Introduction

    Correlation, and its special case for centered signals, covariance, is a method of signal analysis. Here is one way of using the method. Assume there is a signal s(t) that may (or may not) contain some sequence x(t) of finite length T whose time position interests us. To search for this sequence, a time window of length T is slid along the signal s(t), and the scalar products of the signals s(t) and x(t) are calculated at each window position. In this way we "apply" the sought signal x(t) to the signal s(t), sliding along its argument, and estimate the degree of similarity of the signals at each comparison point by the value of the scalar product.
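    The sliding-window scalar product just described can be written out directly; in this sketch the sought sequence is simply cut out of a random signal, so the search must find its original position (all data are illustrative).

```python
import numpy as np

rng = np.random.default_rng(3)
s = rng.standard_normal(500)     # the signal to be searched
x = s[200:250].copy()            # the sequence x(t) we are looking for
T = len(x)

# Scalar product <s(window), x> at every window position.
scores = np.array([np.dot(s[k:k + T], x) for k in range(len(s) - T + 1)])
found = int(np.argmax(scores))
print(found)  # 200: maximum similarity where the window matches x exactly
```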


    Correlation analysis makes it possible to establish, in signals (or in series of digital signal data), the presence of a relationship between changes in the signal values along the independent variable: large values of one signal (relative to its mean) are associated with large values of the other signal (positive correlation), or small values of one are associated with large values of the other (negative correlation), or the data of the two signals are not related at all (zero correlation).

    In the functional space of signals, this degree of connection can be expressed in normalized units of the correlation coefficient, i.e. as the cosine of the angle between the signal vectors; accordingly, it takes values from 1 (complete coincidence of the signals) to −1 (complete opposition) and does not depend on the value (scale) of the units of measurement.

    In the autocorrelation version, the same technique is used to compute the scalar product of the signal s(t) with its own copy sliding along the argument. Autocorrelation makes it possible to estimate the average statistical dependence of the current signal samples on their previous and subsequent values (the so-called correlation radius of the signal values), and also to detect the presence of periodically repeating elements in the signal.

    Correlation methods are of particular importance in the analysis of random processes to identify non-random components and evaluate the non-random parameters of these processes.

    Note that there is some confusion in the terms "correlation" and "covariance". In the mathematical literature, the term "covariance" is applied to centered functions, and "correlation" to arbitrary ones. In the technical literature, and especially in the literature on signals and signal processing methods, exactly the opposite terminology is often used. This is not of fundamental importance, but when getting acquainted with literary sources, it is worth paying attention to the accepted purpose of these terms.

    6.1. Autocorrelation functions of signals.

    The concept of the autocorrelation function of a signal. The autocorrelation function (ACF, also CF, correlation function) of a finite-energy signal s(t) is a quantitative integral characteristic of the signal shape. It reveals the nature and parameters of the mutual temporal dependence of the samples within the signal, which always exists for periodic signals, as well as the interval and degree of dependence of the sample values at the current moment on the signal's prehistory. The ACF is defined by the integral of the product of two copies of the signal s(t) shifted relative to each other by a time τ:

    Bs(τ) = ∫ s(t) s(t+τ) dt = ⟨s(t), s(t+τ)⟩ = ||s(t)||·||s(t+τ)||·cos φ(τ). (6.1.1)

    As follows from this expression, the ACF is the scalar product of the signal and its copy as a function of the shift τ. Accordingly, the ACF has the physical dimension of energy, and at τ = 0 the ACF value is equal to the signal energy and is the maximum possible (the cosine of the angle of interaction of the signal with itself is equal to 1):

    Bs(0) = ∫ s(t)² dt = Es.

    The ACF is an even function, which is easy to verify by changing the variable to t = t−τ in expression (6.1.1):

    Bs(τ) = ∫ s(t−τ) s(t) dt = Bs(−τ).
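    Both properties, the maximum Bs(0) = Es and the evenness Bs(n) = Bs(−n), are easy to confirm numerically for a discrete signal (arbitrary illustrative samples, zero-padded sums in energy units).

```python
import numpy as np

s = np.array([0.0, 1.0, 3.0, 2.0, -1.0, 0.5])
K = len(s)

def Bs(n):
    """Discrete ACF: sum over k of s[k]*s[k-n], zeros outside the array."""
    return sum(s[k] * s[k - n] for k in range(K) if 0 <= k - n < K)

energy = float(np.dot(s, s))
assert Bs(0) == energy               # maximum equals the signal energy
for n in range(1, K):
    assert Bs(n) == Bs(-n)           # even function
    assert abs(Bs(n)) <= energy      # |Bs(n)| never exceeds Es
print("ACF maximum and evenness confirmed")
```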

    The ACF maximum, equal to the signal energy at τ = 0, is always positive, and the modulus of the ACF does not exceed the signal energy at any shift. This follows directly from the properties of the scalar product (and from the Cauchy-Bunyakovsky inequality):

    ⟨s(t), s(t+τ)⟩ = ||s(t)||·||s(t+τ)||·cos φ(τ),

    cos φ(τ) = 1 at τ = 0, ⟨s(t), s(t+τ)⟩ = ||s(t)||·||s(t)|| = Es,

    cos φ(τ) < 1 at τ ≠ 0, ⟨s(t), s(t+τ)⟩ = ||s(t)||·||s(t+τ)||·cos φ(τ) < Es.

    As an example, Fig. 6.1.1 shows two signals, a rectangular pulse and a radio pulse of the same duration T, together with the shapes of their ACFs. The amplitude of the radio-pulse oscillations is set equal to the amplitude of the rectangular pulse, so the signal energies are also equal, which is confirmed by the equal values of the central maxima of the ACFs. For a finite pulse duration, the ACF duration is also finite and equal to twice the pulse duration (when a copy of a finite pulse is shifted by the interval of its duration, either left or right, its product with the original becomes zero). The oscillation frequency of the ACF of the radio pulse is equal to the filling frequency of the radio pulse (the side minima and maxima of the ACF occur each time the copy of the radio pulse is successively shifted by half the period of its filling oscillation).

    Given the evenness, the ACF is usually plotted only for positive values of τ. In practice, signals are usually defined on an interval of positive argument values from 0 to T. The sign +τ in expression (6.1.1) means that, as τ increases, the copy of the signal s(t+τ) shifts to the left along the t axis and goes beyond 0. For digital signals this requires a corresponding extension of the data into the region of negative argument values. Since in calculations the interval over which τ is defined is usually much shorter than the interval over which the signal is defined, it is more practical to shift the copy of the signal to the left along the argument axis, i.e. to use s(t−τ) instead of s(t+τ) in expression (6.1.1):

    Bs(τ) = ∫ s(t) s(t−τ) dt. (6.1.1")

    For finite signals, as the shift τ increases, the temporal overlap of the signal with its copy decreases, and with it the cosine of the interaction angle and the scalar product as a whole tend to zero:

    Bs(τ) → 0 as |τ| → ∞.

    The ACF calculated from the centered values of the signal s(t) is the autocovariance function of the signal:

    Cs(τ) = ∫ [s(t) − ms][s(t+τ) − ms] dt, (6.1.2)

    where ms is the mean value of the signal. The covariance functions are related to the correlation functions by a fairly simple relation:

    Cs(τ) = Bs(τ) − ms².
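    The relation Cs = Bs − ms² can be checked numerically for the power-normalized discrete case; the sketch below uses cyclic shifts, for which the identity is exact (illustrative random data with a nonzero mean).

```python
import numpy as np

rng = np.random.default_rng(4)
s = rng.standard_normal(64) + 2.0     # signal with mean ms near 2
ms = s.mean()

def B(n):
    """Power-normalized ACF, cyclic shift."""
    return float(np.mean(s * np.roll(s, -n)))

def C(n):
    """The same for the centered signal: the autocovariance."""
    sc = s - ms
    return float(np.mean(sc * np.roll(sc, -n)))

for n in range(8):
    assert np.isclose(C(n), B(n) - ms ** 2)
print("Cs(n) = Bs(n) - ms^2 confirmed")
```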

    ACF of time-limited signals. In practice, the signals investigated and analyzed are defined on a certain interval. To compare the ACFs of signals defined on intervals of different length, a modification of the ACF normalized by the interval length finds practical application. For example, for a signal defined on the interval [0, T]:

    Bs(τ) = (1/T)·∫[0,T] s(t) s(t+τ) dt. (6.1.3)

    The ACF can also be calculated for weakly damped signals of infinite energy, as the mean value of the scalar product of the signal and its copy as the interval on which the signal is defined tends to infinity:

    Bs(τ) = lim(T→∞) (1/T)·∫[0,T] s(t) s(t+τ) dt. (6.1.4)

    The ACF defined by these expressions has the physical dimension of power and is equal to the average mutual power of the signal and its copy as a function of the shift of the copy.

    ACF of periodic signals. The energy of periodic signals is infinite, so the ACF of a periodic signal is calculated over one period T, averaging the scalar product of the signal and its shifted copy within the period:

    Bs(τ) = (1/T)·∫[0,T] s(t) s(t−τ) dt. (6.1.5)

    A mathematically more rigorous expression:

    Bs(τ) = lim(T→∞) (1/T)·∫[0,T] s(t) s(t−τ) dt.

    At τ = 0, the value of the ACF normalized by the period is equal to the average power of the signal within the period. The ACF of a periodic signal is itself periodic with the same period T. Thus, for the signal s(t) = A·cos(ω0t+φ0) with T = 2π/ω0 we have:

    Bs(τ) = (1/T)·∫[0,T] A·cos(ω0t+φ0)·A·cos(ω0(t−τ)+φ0) dt = (A²/2)·cos(ω0τ). (6.1.6)

    The result obtained does not depend on the initial phase of the harmonic signal, which is typical of any periodic signal and is one of the properties of the ACF. Autocorrelation functions can therefore be used to check for the presence of periodic properties in arbitrary signals. An example of the autocorrelation function of a periodic signal is shown in Fig. 6.1.2.
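    Result (6.1.6) and its independence from the initial phase can be verified numerically by averaging over one period (a sketch; the amplitude, period, and sampling grid are illustrative).

```python
import numpy as np

A, w0 = 1.5, 2 * np.pi                 # amplitude and frequency, period T = 1
t = np.linspace(0.0, 1.0, 10000, endpoint=False)
dt = t[1] - t[0]

def acf_period(phi0, tau):
    """(1/T) * integral over one period of s(t)*s(t - tau)."""
    s = A * np.cos(w0 * t + phi0)
    s_shift = A * np.cos(w0 * (t - tau) + phi0)
    return float(np.sum(s * s_shift) * dt)

for phi0 in (0.0, 0.7, 2.1):           # different initial phases
    for tau in (0.0, 0.1, 0.25, 0.4):
        assert np.isclose(acf_period(phi0, tau),
                          (A ** 2 / 2) * np.cos(w0 * tau), atol=1e-6)
print("Bs(tau) = (A^2/2) cos(w0 tau) for any initial phase")
```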

    Autocovariance functions (ACVF) are calculated similarly, from the centered values of the signal. A remarkable feature of these functions is their simple relationship with the variance σs² of the signals (the square of the standard deviation of the signal values from the mean). As is known, the variance is equal to the average signal power, from which it follows that:

    |Cs(τ)| ≤ σs², Cs(0) = σs² ≡ ||s(t)||². (6.1.7)

    The ACVF values normalized by the variance give the function of autocorrelation coefficients:

    rs(τ) = Cs(τ)/Cs(0) = Cs(τ)/σs² ≡ cos φ(τ). (6.1.8)

    This function is sometimes called the "true" autocorrelation function. Owing to the normalization, its values do not depend on the units (scale) in which the signal values s(t) are represented, and they characterize the degree of linear relationship between the signal values as a function of the shift τ between samples. The values rs(τ) ≡ cos φ(τ) can vary from 1 (complete direct correlation of the samples) to −1 (inverse correlation).

    Fig. 6.1.3 shows an example of the signals s(k) and s1(k) = s(k) + noise, with the corresponding coefficient functions rs and rs1. As can be seen from the graphs, the coefficient functions confidently reveal the presence of periodic oscillations in the signals. The noise in the signal s1(k) lowered the amplitude of the periodic oscillations without changing their period. This is confirmed by the curve Cs/σs1², i.e. the ACVF of the signal s(k) normalized (for comparison) by the variance of the signal s1(k), where it can be clearly seen that the noise pulses, with complete statistical independence of their samples, increased the value Cs1(0) relative to Cs(0) and somewhat "blurred" the function of autocovariance coefficients. This is because rs(τ) for noise signals tends to 1 at τ → 0 and fluctuates about zero at τ ≠ 0, with fluctuation amplitudes that are statistically independent and depend on the number of signal samples (they tend to zero as the number of samples grows).

    ACF of discrete signals. With a data sampling interval Δt = const, the ACF is calculated at shifts τ = nΔt and is usually written as a discrete function of the shift number n:

    Bs(nΔt) = Δt·Σk sk×sk-n. (6.1.9)

    Discrete signals are usually given as numerical arrays of a certain length with sample numbering k = 0, 1, …, K−1 at Δt = 1, and the discrete ACF in units of energy is calculated one-sidedly, taking the array length into account. If the entire signal array is used and the number of ACF samples is equal to the number of array samples, the calculation is performed by the formula:

    Bs(n) = [K/(K−n)]·Σk sk×sk-n. (6.1.10)

    The factor K/(K−n) in this formula is a correction for the gradual decrease in the number of multiplied and summed values as the shift n increases. Without this correction, a trend of the summed mean values appears in the ACF of non-centered signals. When measuring in units of signal power, the factor K/(K−n) is replaced by the factor 1/(K−n).

    Formula (6.1.10) is used quite rarely, mainly for deterministic signals with a small number of samples. For random and noisy signals, the decrease of the denominator (K−n) and of the number of multiplied samples as the shift increases leads to growing statistical fluctuations in the ACF estimate. Under these conditions, greater reliability is provided by calculating the ACF in units of signal power by the formula:

    Bs(n) = (1/K)·Σk sk×sk-n, sk-n = 0 at k−n < 0, (6.1.11)

    i.e. with normalization by the constant factor 1/K and with extension of the signal by zero values (to the left for shifts k−n, or to the right when shifts k+n are used). This estimate is biased, but it has a smaller variance than that of formula (6.1.10). The difference between the normalizations of (6.1.10) and (6.1.11) can be clearly seen in Fig. 6.1.4.
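    The difference between the two normalizations can be seen on a pure-noise signal (an illustrative sketch): both estimates are built from the same raw sums, but the 1/(K−n) estimate fluctuates much more strongly at large shifts than the biased 1/K estimate.

```python
import numpy as np

rng = np.random.default_rng(5)
s = rng.standard_normal(200)
K = len(s)

def raw(n):
    """Sum over k of s[k]*s[k-n] over the available overlap."""
    return sum(s[k] * s[k - n] for k in range(n, K))

B_unbiased = np.array([raw(n) / (K - n) for n in range(K - 1)])  # 1/(K-n), power units
B_biased = np.array([raw(n) / K for n in range(K - 1)])          # (6.1.11), 1/K

# Fluctuations of the tail (large shifts, few overlapping samples):
print(float(np.std(B_unbiased[150:])), float(np.std(B_biased[150:])))
```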

    Formula (6.1.11) can be regarded as an averaging of the sum of products, i.e. as an estimate of the mathematical expectation:

    Bs(n) = M{sk·sk-n} ≅ (1/K)·Σk sk×sk-n. (6.1.12)

    In practice, the discrete ACF has the same properties as the continuous ACF. It is also even, and its value at n = 0 is equal to the energy or power of the discrete signal, depending on the normalization.

    ACF of noisy signals. A noisy signal is written as the sum v(k) = s(k) + q(k). In the general case the noise need not have a zero mean value, and the power-normalized autocorrelation function of a digital signal containing K samples is written in the following form:

    Bv(n) = (1/K)·⟨s(k)+q(k), s(k−n)+q(k−n)⟩ =

    = (1/K)·[⟨s(k), s(k−n)⟩ + ⟨s(k), q(k−n)⟩ + ⟨q(k), s(k−n)⟩ + ⟨q(k), q(k−n)⟩] =

    = Bs(n) + M{sk·qk-n} + M{qk·sk-n} + M{qk·qk-n}. (6.1.13)

    With statistical independence of the useful signal s(k) and the noise q(k), the mathematical expectation factorizes,

    M{sk·qk-n} = M{sk}·M{qk-n} = ms·mq,

    and the following formula can be used:

    Bv(n) = Bs(n) + 2·ms·mq + mq². (6.1.13")

    An example of a noisy signal and its ACF, in comparison with the ACF of the noise-free signal, is shown in Fig. 6.1.5.

    It follows from formulas (6.1.13) that the ACF of a noisy signal consists of the ACF of its signal component with a superimposed noise function that decays to the value 2·ms·mq + mq². For large K, when the noise fluctuations average out, Bv(n) ≈ Bs(n). This makes it possible not only to detect with the ACF periodic signals that are almost completely hidden in noise (noise power much greater than the signal power), but also to determine their period and their shape within the period with high accuracy, and for single-frequency harmonic signals also their amplitude, using expression (6.1.6).
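    A sketch of this effect (illustrative amplitudes: noise power roughly ten times the signal power): the period of a sine wave buried in noise is recovered from the position of an ACF side maximum, searched beyond the zero-lag lobe.

```python
import numpy as np

rng = np.random.default_rng(6)
N, period = 50000, 40
t = np.arange(N)
s = 0.3 * np.sin(2 * np.pi * t / period)   # signal power 0.045
v = s + rng.standard_normal(N)             # noise power 1.0

def B(n):
    """Power-normalized discrete ACF."""
    return float(np.dot(v[:N - n], v[n:]) / N)

lags = np.arange(period // 2, period + period // 2)  # skip the zero-lag lobe
acf = np.array([B(n) for n in lags])
peak = int(lags[np.argmax(acf)])
print(peak)  # near 40: the hidden period shows up in the ACF
```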

    Table 6.1.

    Barker signal | signal ACF
    1, 1, 1, -1, -1, 1, -1 | 7, 0, -1, 0, -1, 0, -1
    1, 1, 1, -1, -1, -1, 1, -1, -1, 1, -1 | 11, 0, -1, 0, -1, 0, -1, 0, -1, 0, -1
    1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1 | 13, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1

    Code signals are a kind of discrete signal. On a certain interval of the code word M·Δt they can have only two amplitude values: 0 and 1, or 1 and −1. When extracting codes at a significant noise level, the shape of the ACF of the code word is of particular importance. From this standpoint, the best codes are those whose ACF side-lobe values are minimal over the entire length of the code-word interval for the maximum value of the central peak. Among such codes are the Barker codes shown in Table 6.1. As can be seen from the table, the amplitude of the central peak of the code is numerically equal to the value of M, while the amplitude of the side lobes at n ≠ 0 does not exceed 1.
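    The ACF of the 13-element Barker code from Table 6.1 can be computed directly; the central peak equals 13 and no side lobe exceeds 1 in magnitude.

```python
import numpy as np

barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1])
acf = np.correlate(barker13, barker13, mode="full")   # two-sided ACF
center = len(barker13) - 1                             # index of zero shift
print(int(acf[center]))                                # 13
print(int(np.max(np.abs(np.delete(acf, center)))))     # 1
```

    This minimal side-lobe structure is exactly what makes Barker codes robust when extracted from noise by correlation.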

    6.2. Mutual correlation functions of signals.

    The cross-correlation function (CCF) of different signals describes both the degree of similarity of the shapes of two signals and their mutual position along the coordinate (independent variable). Generalising formula (6.1.1) for the autocorrelation function to two different signals s(t) and u(t), we obtain the following scalar product of the signals:

    Bsu(τ) = ∫ s(t)·u(t+τ) dt. (6.2.1)

    Mutual correlation of signals characterises a certain degree of connection between the phenomena and physical processes represented by these signals, and can serve as a measure of the "stability" of that connection when the signals are processed separately in different devices. For finite-energy signals the CCF is also finite, with

    |Bsu(τ)| ≤ ||s(t)||·||u(t)||,

    which follows from the Cauchy-Bunyakovsky inequality and the invariance of signal norms under coordinate shifts.

    Changing the variable to t = t−τ in formula (6.2.1), we obtain:

    Bsu(τ) = ∫ s(t−τ)·u(t) dt = ∫ u(t)·s(t−τ) dt = Bus(−τ).

    It follows that the parity condition does not hold for the CCF, Bsu(τ) ≠ Bsu(−τ), and the CCF values are not required to have a maximum at τ = 0.

    This can be clearly seen in Fig. 6.2.1, where two identical signals with centres at the points 0.5 and 1.5 are shown. Calculation by formula (6.2.1) with gradually increasing values of τ amounts to successive shifts of the signal s2(t) to the left along the time axis (for each value of s1(t), the values s2(t+τ) are taken for the integrand product). At τ = 0 the signals do not overlap (they are orthogonal) and B12(0) = 0. The maximum of B12(τ) is observed at τ = 1, when the shift of s2(t) to the left brings the signals s1(t) and s2(t+τ) into complete coincidence.
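    The situation of Fig. 6.2.1 can be reproduced numerically (a sketch; the Gaussian pulse shape and the sampling grid are assumed, since the figure itself is not available): two identical pulses centred at 0.5 and 1.5 give a CCF maximum at the shift τ = 1 that aligns them, not at τ = 0.

```python
import numpy as np

dt = 0.001
t = np.arange(0.0, 3.0, dt)
s1 = np.exp(-((t - 0.5) / 0.1) ** 2)     # pulse centred at 0.5
s2 = np.exp(-((t - 1.5) / 0.1) ** 2)     # identical pulse centred at 1.5

# B12(tau) = integral s1(t) * s2(t + tau) dt, evaluated on the sample grid:
# np.correlate(s2, s1, 'full')[i] = sum_n s2[n + lag] * s1[n], lag = i - (N-1)
N = len(t)
B12 = np.correlate(s2, s1, mode='full') * dt
lags = np.arange(-(N - 1), N)
tau_max = lags[int(np.argmax(B12))] * dt  # shift giving complete overlap
```

    At zero lag the pulses do not overlap, so B12(0) is essentially zero, while the peak sits at τ ≈ 1.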

    The same CCF values are obtained from formulas (6.2.1) and (6.2.1') at the same mutual position of the signals: shifting the signal u(t) by an interval τ to the right along the time axis relative to s(t) is equivalent to shifting the signal s(t) to the left relative to u(t), i.e. Bsu(τ) = Bus(−τ).

    Fig. 6.2.2 shows examples of the CCF for a rectangular signal s(t) and two identical triangular signals u(t) and v(t). All signals have the same duration T, the signal v(t) being shifted forward by the interval T/2.

    The signals s(t) and u(t) occupy the same time interval, and the signal "overlap" area is maximal at τ = 0, which the function Bsu registers. At the same time Bsu is sharply asymmetric: with the asymmetric shape of u(t) against the symmetric shape of s(t) (relative to the signal centres), the overlap area changes differently for shifts in the two directions (the sign of τ) as |τ| grows from zero. When the initial position of u(t) is shifted to the left along the time axis (ahead of s(t), giving the signal v(t)), the shape of the CCF is unchanged and is shifted to the right by the same amount (the function Bsv in Fig. 6.2.2). If the functions in (6.2.1) are interchanged, the new function Bvs is the mirror image of Bsv with respect to τ = 0.

    Taking into account these features, the total CCF is calculated, as a rule, separately for positive and negative delays:

    Bsu(τ) = ∫ s(t)·u(t+τ) dt. Bus(τ) = ∫ u(t)·s(t+τ) dt. (6.2.1')

    Cross-correlation of noisy signals. For two noisy signals u(t) = s1(t) + q1(t) and v(t) = s2(t) + q2(t), applying the method used to derive formulas (6.1.13), with the copy of the signal s(t) replaced by the signal s2(t), it is easy to obtain the cross-correlation formula in the form:

    Buv(t) = Bs1s2(t) + Bs1q2(t) + Bq1s2(t) + Bq1q2(t). (6.2.2)

    The last three terms on the right-hand side of (6.2.2) decay to zero as τ increases. For long signal registration intervals the expression can be written in the form:

    Buv(τ) = Bs1s2(τ) + M(s1)·M(q2) + M(q1)·M(s2) + M(q1)·M(q2). (6.2.3)

    For zero mean values of the noise and statistical independence of the noise from the signals, the following holds:

    Buv(τ) → Bs1s2(τ).

    CCF of discrete signals. All properties of the CCF of analog signals also hold for the CCF of discrete signals, along with the features of discrete signals described above for the discrete ACF (formulas 6.1.9-6.1.12). In particular, at Δt = const = 1 for signals x(k) and y(k) with the number of samples K:

    Bxy(n) = Σk xk·yk-n. (6.2.4)

    When normalised in units of power:

    Bxy(n) = (1/K) Σk xk·yk-n. (6.2.5)
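    A direct sketch of (6.2.4) and (6.2.5) (illustrative code; the sum is restricted to indices where both samples exist):

```python
import numpy as np

def ccf(x, y, n):
    """Bxy(n) = sum over k of x[k] * y[k-n] (terms with k-n outside 0..K-1 dropped)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    K = len(x)
    return float(sum(x[k] * y[k - n] for k in range(K) if 0 <= k - n < K))

def ccf_power(x, y, n):
    """The power-normalised form (1/K) * Bxy(n)."""
    return ccf(x, y, n) / len(x)

x = [1.0, 2.0, 3.0, 4.0]
y = [0.0, 1.0, 0.0, 0.0]      # unit sample at k = 1: Bxy(n) picks out x[n+1]
```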

    Estimation of Periodic Signals in Noise . A noisy signal can be evaluated for cross-correlation with a "reference" signal by trial and error, with the cross-correlation function adjusted to its maximum value.

    For a signal u(k) = s(k) + q(k) with noise statistically independent of the signal, the cross-correlation (6.2.2) with a signal template p(k) (for which q2(k) = 0) takes the form:

    Bup(k) = Bsp(k) + Bqp(k) = Bsp(k) + M(q)·M(p).

    Since the estimate of M(q) tends to zero as N increases, Bup(k) → Bsp(k). Obviously, the function Bup(k) has a maximum when p(k) = s(k). By varying the shape of the template p(k) and maximising the function Bup(k), one obtains an estimate of s(k) in the form of the optimal shape of p(k).
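    The trial-template idea can be sketched as follows (an illustration only; the candidate frequencies, noise level, and record length are assumed): the template whose cross-correlation with the noisy record is largest gives the estimate of the hidden signal.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 4000
k = np.arange(N)
s = np.sin(2 * np.pi * 0.01 * k)            # hidden tone, normalised frequency 0.01
u = s + rng.normal(0.0, 1.5, N)             # observed noisy signal u(k) = s(k) + q(k)

candidates = [0.005, 0.008, 0.01, 0.012, 0.02]
scores = []
for f in candidates:
    p = np.sin(2 * np.pi * f * k)           # trial template p(k)
    scores.append(np.dot(u, p) / N)         # Bup at zero shift
best_f = candidates[int(np.argmax(scores))] # template maximising Bup
```

    Mismatched templates are nearly orthogonal to the hidden tone over a long record, so only the matching frequency yields a large score.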

    The function of cross-correlation coefficients is a quantitative indicator of the degree of similarity of the signals s(t) and u(t). As with the function of autocorrelation coefficients, it is calculated from the centred values of the functions (to compute the mutual covariance it suffices to centre only one of them) and is normalised to the product of the standard deviations of s(t) and u(t):

    rsu(τ) = Csu(τ)/(σs·σu). (6.2.6)

    The values of the correlation coefficients at shifts τ can range from −1 (complete inverse correlation) to 1 (complete similarity, one hundred percent correlation). At shifts τ where rsu(τ) is zero, the signals are mutually uncorrelated. The cross-correlation coefficient makes it possible to establish the presence of a connection between signals regardless of their physical nature and magnitude.

    When the CCF of noisy discrete signals of limited length is calculated by formula (6.2.4), there is a nonzero probability of obtaining values |rsu(n)| > 1.
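    A compact sketch of (6.2.6) at zero shift (illustrative code: both sequences are centred and the result is normalised by the product of their standard deviations):

```python
import numpy as np

def corr_coeff(x, y):
    """Cross-correlation coefficient r_xy at zero shift, in [-1, 1]."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc = x - x.mean()
    yc = y - y.mean()
    return float(np.sum(xc * yc) / np.sqrt(np.sum(xc ** 2) * np.sum(yc ** 2)))

a = np.array([0.0, 1.0, 2.0, 3.0])
r_scaled = corr_coeff(a, 2.0 * a + 5.0)   # scale and offset do not matter -> +1
r_inverse = corr_coeff(a, -a)             # complete inverse correlation -> -1
```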

    For periodic signals the concept of the CCF is usually not applied, except for signals with the same period, e.g. the input and output signals of a system under study.

    6.3. Spectral densities of correlation functions.

    Spectral density of the ACF can be determined from the following simple considerations.

    In accordance with expression (6.1.1), the ACF is the scalar product of the signal and its copy shifted by the interval τ, for −∞ < τ < ∞:

    Bs(τ) = ⟨s(t), s(t−τ)⟩.

    The scalar product can be expressed in terms of the spectral densities of the signal and its copy, whose product is the mutual power spectral density:

    ⟨s(t), s(t−τ)⟩ = (1/2π) ∫ S(ω) Sτ*(ω) dω.

    A shift of the signal along the abscissa by the interval τ corresponds, in the spectral representation, to multiplying the spectrum by exp(−jωτ), and the conjugate spectrum by the factor exp(jωτ):

    Sτ*(ω) = S*(ω)·exp(jωτ).

    With this in mind, we get:

    Bs(τ) = (1/2π) ∫ S(ω) S*(ω) exp(jωτ) dω =

    = (1/2π) ∫ |S(ω)|² exp(jωτ) dω. (6.3.1)

    But the last expression is the inverse Fourier transform of the energy spectrum of the signal (spectral energy density). Therefore, the energy spectrum of the signal and its autocorrelation function are related by the Fourier transform:

    Bs(τ) ⇔ |S(ω)|² = Ws(ω). (6.3.2)

    Thus, the spectral density of the ACF is nothing but the spectral power density of the signal, which, in turn, can be determined by the direct Fourier transform through the ACF:

    |S(ω)|² = ∫ Bs(τ) exp(−jωτ) dτ. (6.3.3)

    The last expression imposes certain restrictions on the shape of the ACF and on the way it is truncated in duration.

    Fig. 6.3.1. Spectrum of a non-existent ACF.

    The energy spectrum of a signal is always non-negative: signal power cannot be negative. Therefore the ACF cannot have the shape of a rectangular pulse, since the Fourier transform of a rectangular pulse is the sign-alternating sinc function. Nor may the ACF contain discontinuities of the first kind (jumps): given the evenness of the ACF, any symmetric jump at the coordinates ±τ splits the ACF into the sum of a continuous function and a rectangular pulse of duration 2τ, with the corresponding appearance of negative values in the energy spectrum. An example of the latter is shown in Fig. 6.3.1 (as is customary for even functions, only the right-hand side of each graph is shown).
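    The Fourier-pair relation (6.3.2) between the ACF and the power spectrum is easy to verify numerically in the discrete setting (a sketch using the DFT convention, in which the corresponding ACF is the cyclic one):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=256)

# Power spectrum |S(w)|^2 via the FFT, then the inverse transform back:
S = np.fft.fft(x)
acf_from_spectrum = np.fft.ifft(np.abs(S) ** 2).real

# Direct cyclic ACF: B(n) = sum_k x[k] * x[(k+n) mod N]
acf_direct = np.array([np.dot(x, np.roll(x, -n)) for n in range(len(x))])
```

    The two arrays agree to machine precision, which is the discrete form of the Wiener-Khinchin relation.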

    The ACFs of sufficiently extended signals are usually computed over a limited interval (correlation lags from −T/2 to T/2). Truncating the ACF, however, amounts to multiplying it by a rectangular selection pulse of duration T, which in the frequency domain corresponds to convolving the true power spectrum with the sign-alternating function sinc(ωT/2). On the one hand, this smooths the power spectrum, which is often useful, for example when studying signals at a significant noise level. On the other hand, it can substantially underestimate the magnitude of energy peaks if the signal contains harmonic components, and it can produce negative power values on the flanks of peaks and jumps. An example of these effects is shown in Fig. 6.3.2.

    Fig. 6.3.2. Calculation of the energy spectrum of a signal from ACFs of different lengths.

    As is known, the power spectra of signals have no phase characteristic, and signals cannot be reconstructed from them. Consequently the ACF, being the time-domain representation of the power spectrum, likewise carries no information about the phase characteristics of a signal, and a signal cannot be reconstructed from its ACF. Signals of the same shape shifted in time have the same ACF; moreover, signals of different shapes can have similar ACFs if their power spectra are close.

    Let us rewrite equation (6.3.1) in the form

    ∫ s(t) s(t−τ) dt = (1/2π) ∫ S(ω) S*(ω) exp(jωτ) dω,

    and substitute τ = 0 into this expression. The resulting equality is well known as Parseval's equality:

    ∫ s²(t) dt = (1/2π) ∫ |S(ω)|² dω.

    It allows the energy of a signal to be computed either in the time domain or in the frequency domain.
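    A discrete-form sketch of Parseval's equality (the DFT analogue: sum of x² equals (1/N) times the sum of |X|²):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=128)

energy_time = float(np.sum(x ** 2))                   # energy in the time domain
X = np.fft.fft(x)
energy_freq = float(np.sum(np.abs(X) ** 2)) / len(x)  # same energy from the spectrum
```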

    The signal correlation interval is a numerical parameter estimating the width of the ACF, i.e. the range of argument shifts over which the signal values remain significantly correlated.

    Assume the signal s(t) has an approximately uniform energy spectrum of value W0 with an upper cutoff frequency ωb (the shape of a centred rectangular pulse, as, for example, signal 1 in Fig. 6.3.3 with fb = 50 Hz in the one-sided representation). Then the ACF of the signal is determined by the expression:

    Bs(τ) = (W0/π) ∫[0, ωb] cos(ωτ) dω = (W0·ωb/π)·sin(ωbτ)/(ωbτ).

    The correlation interval τk of the signal is taken as the width of the central peak of the ACF from the maximum to the first zero crossing. For a rectangular spectrum with upper cutoff frequency ωb, the first zero crossing corresponds to sinc(ωbτ) = 0 at ωbτ = π, whence:

    τk = π/ωb = 1/(2fb). (6.3.4)

    The correlation interval is the smaller, the higher the upper cutoff frequency of the signal spectrum. For signals with a smooth roll-off at the upper cutoff frequency, the role of the parameter ωb is played by the average width of the spectrum (signal 2 in Fig. 6.3.3).

    The power spectral density of statistical noise in a single measurement is a random function Wq(ω) with mean value Wq(ω) → σq², where σq² is the noise variance. In the limit, for a uniform spectral distribution of the noise from 0 to ∞, the noise ACF tends to Bq(τ) → σq² at τ = 0 and Bq(τ) → 0 at τ ≠ 0, i.e. statistical noise is uncorrelated (τk → 0).
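    A quick numerical sketch of this property (illustration; σq = 2 is assumed): the ACF estimate of sample-to-sample independent noise is close to σq² at zero lag and close to zero elsewhere.

```python
import numpy as np

rng = np.random.default_rng(4)
sigma_q = 2.0
q = rng.normal(0.0, sigma_q, 100_000)      # white (uncorrelated) noise

def acf_at(x, n):
    """Biased ACF estimate at lag n for a centred sequence."""
    x = x - x.mean()
    return float(np.dot(x[n:], x[:len(x) - n]) / len(x))

b0 = acf_at(q, 0)     # close to sigma_q**2 = 4
b5 = acf_at(q, 5)     # close to 0: the noise samples are uncorrelated
```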

    Practical calculations of the ACF of finite signals are usually limited to the shift interval τ = (0, (3-5)τk), in which, as a rule, the main autocorrelation information is concentrated.

    The spectral density of the CCF can be obtained from the same considerations as for the ACF, or directly from formula (6.3.1) by replacing the spectral density S(ω) with the spectral density of the second signal, U(ω):

    Bsu(τ) = (1/2π) ∫ S*(ω) U(ω) exp(jωτ) dω. (6.3.5)

    Or, with the order of the signals interchanged:

    Bus(τ) = (1/2π) ∫ U*(ω) S(ω) exp(jωτ) dω. (6.3.5')

    The product S*(ω)U(ω) is the mutual energy spectrum Wsu(ω) of the signals s(t) and u(t); correspondingly, U*(ω)S(ω) = Wus(ω). Hence, as with the ACF, the cross-correlation function and the mutual power spectral density of the signals are related by Fourier transforms:

    Bsu(τ) ⇔ Wsu(ω) ≡ W*us(ω). (6.3.6)

    Bus(τ) ⇔ Wus(ω) ≡ W*su(ω). (6.3.6')

    In the general case, except for the spectra of even functions, the failure of evenness for the CCF implies that the mutual energy spectra are complex functions:

    U(ω) = Au(ω) + j·Bu(ω), V(ω) = Av(ω) + j·Bv(ω),

    Wuv = AuAv + BuBv + j·(BuAv − AuBv) = Re Wuv(ω) + j·Im Wuv(ω).

    Fig. 6.3.4 clearly shows the features of CCF formation on the example of two signals of the same shape shifted relative to each other.

    Fig. 6.3.4. Formation of the CCF.

    The shapes of the signals and their mutual arrangement are shown in view A. The modulus and argument of the spectrum of s(t) are shown in view B; the modulus of the spectrum of u(t) is identical to |S(ω)|. The same view shows the modulus of the mutual power spectrum S(ω)U*(ω). As is known, when complex spectra are multiplied, their moduli multiply and their phase angles add, while for the conjugate spectrum U*(ω) the phase angle changes sign. If the first signal in the CCF formula (6.2.1) is s(t), and u(t) is delayed relative to s(t) along the time axis, then the phase angles of S(ω) grow toward negative values with increasing frequency (ignoring the periodic wrapping of values by 2π), while the phase angles of U*(ω), smaller in absolute value, grow (because of the conjugation) toward positive values. The result of multiplying the spectra (view C in Fig. 6.3.4) is the subtraction of the U*(ω) angles from the S(ω) angles; the phase angles of S(ω)U*(ω) remain in the region of negative values, which shifts the whole CCF (and its peak values) to the right of zero along the τ axis by a certain amount (for identical signals, by their mutual time shift). As the initial position of u(t) is moved toward s(t), the phase angles of S(ω)U*(ω) decrease, reaching zero when the signals coincide completely, while the function Bsu(τ) shifts toward τ = 0, in the limit turning into the ACF (for identical signals s(t) and u(t)).

    As is known, deterministic signals whose spectra do not overlap, so that their mutual energy is zero, are orthogonal to each other. The connection between energy spectra and correlation functions shows another side of signal interaction: if the spectra of two signals do not overlap and their mutual energy spectrum is zero at all frequencies, then their CCF is zero for any mutual time shift τ. This means that such signals are uncorrelated. This holds for deterministic as well as for random signals and processes.

    Computing correlation functions using the FFT is, especially for long numerical series, tens to hundreds of times faster than successive shifting in the time domain for large correlation intervals. The essence of the method follows from formulas (6.3.2) for the ACF and (6.3.6) for the CCF. Since the ACF can be regarded as the special case of the CCF of a signal with itself, we consider the procedure for the CCF of signals x(k) and y(k) with K samples each. It includes:

    1. Computation of the signal spectra via the FFT: x(k) → X(k), y(k) → Y(k). If the numbers of samples differ, the shorter sequence is zero-padded to the length of the longer one.

    2. Computation of the mutual power density spectrum Wxy(k) = X*(k)·Y(k).

    3. Inverse FFT Wxy(k) → Bxy(k).

    We note some features of the method.

    The inverse FFT, as is known, computes a cyclic convolution of the functions x(k) and y(k). If the number of samples of each function equals K, the number of complex samples of their spectra also equals K, as does the number of samples of the product Wxy(k). Accordingly, the number of samples of Bxy(k) after the inverse FFT is also K, and it repeats cyclically with period K. Meanwhile, for linear correlation of the complete signal arrays by formula (6.2.5), one half of the CCF alone occupies K points, and the full two-sided CCF occupies 2K points. Consequently, with the inverse FFT, owing to the cyclicity of the convolution, the main period of the CCF is overlaid by its side periods, as in an ordinary cyclic convolution of two functions.

    Fig. 6.3.5 shows an example of two signals and the CCF values computed by linear convolution (B1xy) and by cyclic convolution via the FFT (B2xy). To eliminate the overlap of the side periods, the signals must be padded with zeros, in the limit up to double the number of samples; the FFT result then (graph B3xy in Fig. 6.3.5) exactly reproduces the result of the linear convolution (up to normalisation for the increased number of samples).

    In practice, the number of zeros appended to the signals depends on the nature of the correlation function. The minimum number of zeros is usually taken equal to the significant informational part of the functions, i.e. about (3-5) correlation intervals.
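    The three steps, together with the zero-padding needed to suppress the cyclic overlap, can be sketched as follows (an illustration; padding to 2K is used here, and the result is checked against direct linear correlation):

```python
import numpy as np

rng = np.random.default_rng(5)
K = 64
x = rng.normal(size=K)
y = rng.normal(size=K)

M = 2 * K                              # zero-padded length removes cyclic overlap
X = np.fft.fft(x, M)                   # step 1: FFT of zero-padded signals
Y = np.fft.fft(y, M)
Wxy = np.conj(X) * Y                   # step 2: mutual power spectrum X*(k) Y(k)
cyclic = np.fft.ifft(Wxy).real         # step 3: inverse FFT -> cyclic CCF

# Reorder the cyclic result into lag order -(K-1)..(K-1) and compare with
# the linear CCF  B(n) = sum_k y[k+n] * x[k]  computed by direct summation.
fft_ccf = np.concatenate([cyclic[M - K + 1:], cyclic[:K]])
linear_ccf = np.correlate(y, x, mode='full')
```

    With the padding in place the FFT route reproduces the linear CCF exactly, at O(M log M) cost instead of O(K²).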

    Literature

    1. Baskakov. Radio circuits and signals: A textbook for universities. - M.: Higher School, 1988.

    19. Applied analysis of time series. - M.: Mir, 1982. - 428 p.

    25. Sergienko. Digital signal processing: A textbook for universities. - St. Petersburg: Piter, 2003. - 608 p.

    33. Digital signal processing. A practical approach. - M.: Williams, 2004. - 992 p.


    Copyright © 2008 Davydov A.V.

    Cross-correlation addresses the problem of the dependence between anomaly graphs plotted along parallel profiles, or between observations made by different instruments at different times, etc. The measure of dependence is expressed by the integral

    Rxy(τ) = ∫ x(t)·y(t+τ) dt, (11.13)

    where τ is the shift of the graph of the second function.

    The function calculated from the discrete values of the field on two adjacent profiles is called the cross-correlation function (CCF) and is computed by the formula

    B(m) = (1/N) Σi [Z1(xi) − Z̄1]·[Z2(xi+m) − Z̄2],

    where Z1(xi) is the field value on the first profile at the point xi; Z2(xi+m) is the field value on the second profile at the point xi+m; Z̄1 and Z̄2 are the average field values on the neighbouring profiles.
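    A hypothetical sketch of the inter-profile CCF (names and numbers are illustrative, not from the manual): the same anomaly crossing two neighbouring profiles with a lateral offset produces a CCF maximum at the shift m equal to that offset.

```python
import numpy as np

def profile_ccf(z1, z2, m):
    """B(m): mean product of centred field values Z1(x_i) and Z2(x_{i+m})."""
    z1 = np.asarray(z1, dtype=float) - np.mean(z1)
    z2 = np.asarray(z2, dtype=float) - np.mean(z2)
    if m >= 0:
        prod = z1[:len(z1) - m] * z2[m:]
    else:
        prod = z1[-m:] * z2[:len(z2) + m]
    return float(prod.mean())

# The anomaly appears 3 points further along on the second profile:
base = np.array([0, 0, 1, 3, 6, 3, 1, 0, 0, 0, 0, 0, 0], dtype=float)
z1 = base
z2 = np.roll(base, 3)
best_m = max(range(-5, 6), key=lambda m: profile_ccf(z1, z2, m))
```

    The shift at which B(m) peaks traces the oblique strike of the anomalous body across the profiles.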

    As a result of cross-correlation, an anomalous body elongated obliquely to the profiles can be traced. The correlation of magnetic anomaly maps with various geophysical and geological maps is often done visually. The interprofile correlation of the magnetic field along the profiles is reminiscent of the correlation method for separating a useful signal against the background of noise, known in seismic exploration as the method of controlled directional reception.

    The manual "Atlas of Correlation Functions of Gravitational and Magnetic Anomalies of Regularly Shaped Bodies" (O.A. Odekov, G.I. Karataev, O.K. Basov, B.A. Kurbansakhatov) /25/ is devoted to the development of correlation methods for interpreting anomalies. The atlas contains graphs of correlation functions for bodies of regular shape, for which the theoretical curves are given in the atlas of D.S. Mikov. The graphs are preceded by a text on the theory and practice of correlation studies, the questions of the practical application of the ACF are carefully developed.

    Autocorrelation plots for anomalies Z (they also apply to anomalies H) are given for three levels. Cross-correlation plots are given for combinations of different types of anomalies. The text summarises recommendations on the advisability of using autocorrelation plots in the processing and interpretation of initial magnetic anomalies.

    Autocorrelation and cross-correlation are comparatively recent methods of statistical research. They have hardly been considered in the literature of recent years, and the available accounts of their essence and application are little more than annotations. It appears that in the processing of large volumes of field observations these methods will find their rightful place. A.K. Malovichko wrote of the problem of applying correlation functions to the interpretation of magnetic anomalies: "Much attention is paid to this problem in the modern geophysical literature, although on the whole it appears debatable. In such interpretation, the possibilities of studying the fields functionally, on the basis of the Coulomb law and well-known formulas, are ignored" /25/.


    In problems connected with the study of transient processes, correlation theory joins with the theory of Fourier transforms. The integrals in the correlation functions are of convolution type, so the development of the theory naturally proceeds via spectral representations, frequency responses, and energy spectra.

    The tasks of magnetic exploration solved by correlation methods of analysis are described in the book by S.A. Serkerova /29/.