On Finite Time Delay Dependent Stability of Linear Discrete Delay Systems: Numerical Solution Approach

In this paper, a possible solution of the basic nonlinear quadratic matrix equation is proposed. The solution is crucial in the formulation of particular criteria for the delay-dependent finite-time stability of discrete time-delay systems represented as x(k+1) = A0 x(k) + A1 x(k−h). The time delay-dependent criteria have been derived. In addition, the significance of the nonlinear discrete polynomial matrix equation is explained. Using a mathematical formalism based on the Traub and Bernoulli algorithms, it was concluded that the computation of the dominant solvent of the matrix polynomial equation, unlike traditional numerical procedures, does not guarantee convergence in all cases. We present one particular and one general solution, valid when the discrete matrix equation is written in its factored form. Numerical computations are performed to illustrate the suggested results.


Introduction
To investigate the stability of a control system, the Lyapunov method is widely used in the control system community. In some cases, however, Lyapunov stability is insufficient to describe the dynamical behavior of special classes of systems or to give satisfactory conclusions about different types of stability. This is the case for finite-time stability, in which requirements are set on the states of the system. In these situations there are constraints on the system state trajectories: they have to stay within predefined bounds and must not exceed them. Consequently, the trajectories of the system need to be investigated only over a finite time interval. Because the stability is examined over a limited time frame, the described stability concept is named finite-time stability (FTS). In that sense, the system is stable if its states do not exceed the predefined bounds on some fixed time interval. This stability concept was introduced in the era of modern control systems [1][2][3][4][5][6], and it is still widely used today.
Time delay is often present in electrical, mechanical, chemical, and other systems. The described latency can potentially bring such systems into instability, or its appearance can degrade performance during the transient process. Using linear matrix inequalities (LMI) and Lyapunov-like functionals, together with other approaches, many results for time-delay systems have been reported in the recently published literature.
In this article, we present the concept of finite-time stability of discrete time-delay systems previously investigated in [7].
A novel discrete Lyapunov-like functional with a discrete convolution of the delayed states was used for the stability investigations in that paper. The methodology was to combine the Lyapunov-like approach with Jensen's discrete inequality. The novel sufficient stability conditions were presented in the form of algebraic inequalities, which require solving a particular quadratic discrete matrix equation for the matrix A.

Notations:
Matrix transposition is denoted by the superscript "T".

Problem formulation
A linear discrete time system with a state delay is analyzed. The system is described as:

x(k+1) = A0 x(k) + A1 x(k−h),  (1)

with a known vector function of the initial conditions:

x(θ) = ψ(θ),  θ ∈ {−h, −h+1, …, 0},  (2)

where A0 and A1 are known constant matrices and h is a constant delay.
The initial condition ψ(θ) is the a priori known vector function for each θ ∈ {−h, …, 0}. The linear discrete time-delay system (1), which satisfies the initial condition (2), is said to be finite-time stable with respect to {α, β, N}, α < β, if its states remain within the bound β over the first N steps whenever the initial condition is within the bound α.

Theorem 1. The linear discrete time-delay system (1) with A0 = I, I being the identity matrix, is finite-time stable with respect to {α, β, N}, α < β, if there exist two positive scalars μ and ε satisfying the algebraic conditions derived in [7], with the matrix A a solution of the quadratic matrix equation (7), as in [7].
The unavoidable problem, in order to examine the finite-time stability of the system given by (1), is to solve the discrete matrix polynomial equation (7) for the matrix A.
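As a complement to the analytic criteria, finite-time stability with respect to {α, β, N} can be probed by direct simulation of system (1). The sketch below is illustrative only: the matrices A0 and A1, the delay h, the initial sequence ψ, the horizon N and the bound β are assumed values, not taken from the paper, and the initial condition is assumed to already satisfy the bound α.

```python
# Empirical finite-time stability check for x(k+1) = A0 x(k) + A1 x(k-h).
# All numerical data below are illustrative assumptions.

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def vec_add(a, b):
    return [x + y for x, y in zip(a, b)]

def sq_norm(v):
    return sum(x * x for x in v)

def finite_time_stable(A0, A1, h, psi, N, beta):
    """Simulate x(k+1) = A0 x(k) + A1 x(k-h) from the initial sequence
    psi = [x(-h), ..., x(0)] and report whether ||x(k)||^2 stays below
    beta for every k = 0, ..., N."""
    traj = list(psi)                      # traj[k + h] holds x(k)
    for _ in range(N):
        x_next = vec_add(mat_vec(A0, traj[-1]), mat_vec(A1, traj[-1 - h]))
        traj.append(x_next)
    return all(sq_norm(x) <= beta for x in traj[h:])

A0 = [[0.5, 0.1], [0.0, 0.4]]
A1 = [[0.1, 0.0], [0.05, 0.1]]
h = 2
psi = [[0.1, 0.0], [0.1, 0.1], [0.2, 0.1]]   # x(-2), x(-1), x(0)
print(finite_time_stable(A0, A1, h, psi, N=30, beta=1.0))
```

A failed check (a trajectory leaving the β-bound within N steps) only disproves finite-time stability for the chosen initial sequence; the analytic conditions of Theorem 1 cover all admissible initial conditions at once.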
We denote by M_mn the linear space of all complex matrices of type (m, n), and by M_n the linear space of all complex square matrices of order n. We use 1 to denote the identity matrix in M_n, n ∈ ℕ. If ambiguity is possible, we write 1_n ∈ M_n for the identity matrix.
In order to be able to formulate the results of the paper we need some definitions.
The system is described by the matrices A0, A1 and A2, which belong to the space M_n, and by the initial conditions ψ(·). Matrix A0 is usually invertible, so that the system can be described in an equivalent form. We are interested in studying the stability criterion of this dynamical system.
The main tool in our research is the following equivalent representation of the dynamical system.
Lemma 1. The dynamical system described by equation (1) is equivalent to the delay-free dynamical system (10) obtained by stacking the current and the delayed states.

Proof. The proof is rather obvious and is omitted.

We can easily develop the stability criterion of the dynamical system described by (10). It is a linear system without time delay, and the system is stable if and only if all the eigenvalues of the matrix A_eq are less than or equal to 1 in modulus.
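One standard way to realize the delay-free representation of Lemma 1 is to stack the current and delayed states into z(k) = (x(k), x(k−1), …, x(k−h)), so that (1) becomes z(k+1) = A_eq z(k) with a block-companion matrix. Since equation (10) is not reproduced in the text, the sketch below assumes this standard lifting for system (1) with a single delay; the matrices and initial data are illustrative.

```python
# Block-companion lifting of x(k+1) = A0 x(k) + A1 x(k-h) into
# z(k+1) = A_eq z(k).  A0, A1, h and the initial data are assumptions.

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def build_Aeq(A0, A1, h):
    n = len(A0)
    N = n * (h + 1)
    Aeq = [[0.0] * N for _ in range(N)]
    for i in range(n):
        for j in range(n):
            Aeq[i][j] = A0[i][j]            # A0 acts on x(k)
            Aeq[i][h * n + j] = A1[i][j]    # A1 acts on x(k-h)
    for m in range(1, h + 1):               # shift: block m copies block m-1
        for i in range(n):
            Aeq[m * n + i][(m - 1) * n + i] = 1.0
    return Aeq

A0 = [[0.5, 0.1], [0.0, 0.4]]
A1 = [[0.1, 0.0], [0.05, 0.1]]
h = 1
x0, x_m1 = [1.0, 0.0], [0.0, 1.0]           # x(0) and x(-1)
z_next = mat_vec(build_Aeq(A0, A1, h), x0 + x_m1)
# First block of z(1) must equal A0 x(0) + A1 x(-1); second block is x(0).
x1 = [a + b for a, b in zip(mat_vec(A0, x0), mat_vec(A1, x_m1))]
print(z_next == x1 + x0)
```

The lifted system has dimension n(h+1), so the eigenvalue test on A_eq trades the delay for a larger, but delay-free, state.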
Lemma 2. The dynamical system (10) is stable if and only if every eigenvalue of the matrix A_eq is at most 1 in modulus.

The primary interest of this short paper is an algorithm for the numerical solution of the following quadratic equation (11) over the set M_n. The case when the matrix A is invertible is simpler, since in this case we can multiply from the left by A^(−1) and obtain a simplified equation. This equation is somewhat easier to handle, and some stronger results can be given in this case.
Definition 2. Let X ∈ M_n be such that P(X) = 0. Then we call X a (right) solvent of (11).
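Checking whether a candidate X is a right solvent amounts to evaluating the residual P(X). Since the exact coefficients of (11) are not reproduced in the text, the sketch below assumes the monic quadratic P(X) = X² + A1 X + A2 purely for illustration, with illustrative coefficient matrices.

```python
# Residual check for a candidate right solvent, assuming the monic
# quadratic P(X) = X^2 + A1 X + A2 (an illustrative form, not the
# paper's exact equation (11)).

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def residual(X, A1, A2):
    """Largest entry of P(X) = X^2 + A1 X + A2 in absolute value."""
    n = len(X)
    XX, A1X = mat_mul(X, X), mat_mul(A1, X)
    return max(abs(XX[i][j] + A1X[i][j] + A2[i][j])
               for i in range(n) for j in range(n))

A1 = [[-3.0, 0.0], [0.0, -3.0]]
A2 = [[2.0, 0.0], [0.0, 2.0]]
# Both diag(1, 2) and diag(2, 1) are solvents of X^2 - 3X + 2*1 = 0.
print(residual([[1.0, 0.0], [0.0, 2.0]], A1, A2))  # → 0.0
```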
The set of all solvents of the equation (11) is denoted S(P). Consequently, the following holds.

Proof. Using direct computation, we obtain the stated identity, which finishes the proof.

Theorem 1. In general, only the stated inclusion holds, where X is a solution of the equation (6). The first part of the Lemma is a direct consequence of Lemma 3. However, there are cases in which S(P) = ∅, i.e., there is no solution of the equation P(X) = 0. The nonexistence of a solution can be proved using Gröbner basis techniques.
Assuming there exists a solution, we obtain a system of algebraic equations. Using the Buchberger algorithm, we find that the Gröbner basis reduces to 1, which clearly shows there is no solution.
The second part is obtained by direct computation, using Lemma 4. Clearly, the solutions which can be constructed using the previous Theorem include only those solutions which represent operators of a simple structure.
However, a matrix equation can have solutions which do not belong to this class.
Example. Consider the following case. Obviously, we have at least two solutions; however, choosing any two linearly independent vectors x1 and x2, we cannot construct the first solution.
Theorem 2. Assume that the stated conditions hold, where λ1 and λ2 are the solutions of the associated scalar quadratic equation. Proof. Writing the defining equations for the two solvents and subtracting them, we obtain the first part of the statement.
Let λ be a solution of the scalar quadratic equation. We then have the stated identity, which finishes the second part of the statement.
The previous Theorem takes a much simpler form if we assume that the leading coefficient equals the identity, A = 1. Theorem 3. (Monic case) Assume that the stated kernel condition holds. Then the corresponding decomposition follows, where λ1 and λ2 are the solutions of the associated scalar quadratic equation.

Numerical solution
The presented numerical algorithm was developed in the classical literature.
The numerical algorithm used to solve the equation P(X) = 0 is a modification of Newton's algorithm, adapted to solving nonlinear equations over M_n.
In order to develop the algorithm, we first expand P at a point close to the solution X.
If we equate the expansion with zero and neglect the terms of order (ΔX)², we obtain a linear equation for ΔX. Its left-hand side defines a linear operator acting on the space M_n, which is easily recognized as the Fréchet derivative of the nonlinear operator P. The Newton method goes as follows: choose X0 and compute the sequence X_{k+1} = X_k + ΔX_k, where ΔX_k is the solution of the linearized equation at X_k. The computation ends when the requested precision is achieved. However, there are some serious issues to be discussed which are connected with the computation of ΔX_k.
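The Newton iteration just described can be sketched as follows. Since the exact form of P is not reproduced in the text, the sketch assumes the monic quadratic P(X) = X² + A1 X + A2, for which each step requires solving the Sylvester equation (X_k + A1) Δ + Δ X_k = −P(X_k). Here the Sylvester step is solved by brute-force vectorization and Gaussian elimination, which is adequate only for small n; the Schur-based method discussed below is preferable in general. All matrices are illustrative.

```python
# Newton iteration for the assumed monic quadratic P(X) = X^2 + A1 X + A2.
# Each step solves (X + A1) D + D X = -P(X) for the correction D.

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def P(X, A1, A2):
    return mat_add(mat_add(mat_mul(X, X), mat_mul(A1, X)), A2)

def gauss_solve(M, b):
    """Solve M y = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(M, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for j in range(c, n + 1):
                M[r][j] -= f * M[c][j]
    y = [0.0] * n
    for r in range(n - 1, -1, -1):
        y[r] = (M[r][n] - sum(M[r][j] * y[j] for j in range(r + 1, n))) / M[r][r]
    return y

def newton_step(X, A1, A2):
    n = len(X)
    A = mat_add(X, A1)                       # left Sylvester coefficient
    R = P(X, A1, A2)
    # Vectorized (row-major) linear system for D: A D + D X = -R.
    M = [[0.0] * (n * n) for _ in range(n * n)]
    rhs = [0.0] * (n * n)
    for i in range(n):
        for j in range(n):
            row = i * n + j
            rhs[row] = -R[i][j]
            for k in range(n):
                M[row][k * n + j] += A[i][k]   # from (A D)[i][j]
                M[row][i * n + k] += X[k][j]   # from (D X)[i][j]
    d = gauss_solve(M, rhs)
    D = [[d[i * n + j] for j in range(n)] for i in range(n)]
    return mat_add(X, D)

A1 = [[-3.0, 0.0], [0.0, -3.0]]
A2 = [[2.0, 0.0], [0.0, 2.0]]
X = [[3.0, 0.0], [0.0, 3.0]]                 # initial guess
for _ in range(12):
    X = newton_step(X, A1, A2)
print(X[0][0])   # ≈ 2.0: iterates approach the dominant solvent diag(2, 2)
```

Starting from a sufficiently large X0, the iterates in this example approach the dominant solvent; as noted below, each step is well defined only while the Sylvester equation remains uniquely solvable.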
We recognize that the linearized equation is actually a Sylvester equation. As given in [1], the Sylvester equation has a unique solution if and only if its two coefficient matrices have no eigenvalues in common; here, this means that the remaining solutions of P(λ) = 0 must not be eigenvalues of X.

If X ∈ S(P) then, according to the factorization given in Lemma 4, the Sylvester equation has a solution if the n eigenvalues of X do not belong to the remaining n solutions of P(λ) = 0, and if, for example, A is nonsingular, in which case the remaining polynomial has degree n and cannot be identically equal to zero.
This simple observation emphasizes the idea of maximal and minimal solutions. Definition 4. The solution X of the equation P(X) = 0 is called maximal (minimal) if all the eigenvalues of X are greater (smaller) in modulus than the remaining solutions of P(λ) = 0. The maximal solution is also known in the literature as the dominant solvent.
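For a 2×2 solvent, dominance in the sense of Definition 4 can be checked by comparing eigenvalue moduli; the eigenvalues of a 2×2 matrix are the roots of λ² − tr(X) λ + det(X). The candidate solvent and the remaining roots of P(λ) = 0 below are illustrative assumptions.

```python
# Dominance check for a 2x2 candidate solvent: every eigenvalue of X
# must exceed, in modulus, every remaining root of P(lambda) = 0.
# The matrix X and the list of remaining roots are illustrative.
import cmath

def eig2(X):
    """Eigenvalues of a 2x2 matrix via the characteristic polynomial."""
    tr = X[0][0] + X[1][1]
    det = X[0][0] * X[1][1] - X[0][1] * X[1][0]
    disc = cmath.sqrt(tr * tr - 4.0 * det)
    return [(tr + disc) / 2.0, (tr - disc) / 2.0]

def is_dominant(X, remaining_roots):
    return (min(abs(l) for l in eig2(X)) >
            max(abs(r) for r in remaining_roots))

X = [[2.0, 0.0], [0.0, 2.0]]        # candidate dominant solvent
print(is_dominant(X, [1.0, 1.0]))   # remaining roots assumed to be 1, 1
```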
Lemma 5. Let X be the solution of the equation P(X) = 0; then the Fréchet derivative F_X is regular. Using a continuity argument, one can claim that there exists some neighborhood of X in which the Fréchet derivative is nonsingular. This gives the argument for using the Newton method to solve such equations.
However, as is known, the existence of the dominant solution is not always guaranteed.

Numerical solution: the Sylvester equation
Numerical methods for the Sylvester equation have a long history and can be traced back to classical papers.
Here we give the method only for completeness. First, we compute the Schur decompositions of the two coefficient matrices.
Suppose these are given with α and β unitary matrices and G, H upper triangular. Let also X = γ K γ* be the Schur decomposition of X, with γ unitary and K upper triangular.
Then, substituting these decompositions, our equation becomes triangular; by equating the columns we end up with a triangular system of equations, which can be solved easily.
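The column-by-column elimination described above can be illustrated directly in the case where both coefficient matrices are already upper triangular, i.e. after the Schur reductions have been applied. We solve A X + X B = C for upper-triangular A and B; the matrices below are illustrative, with C constructed from a known X so the recovery can be checked.

```python
# Triangular Sylvester solve A X + X B = C, assuming A and B are
# already upper triangular (the post-Schur case).  Column j satisfies
# (A + B[j][j] I) x_j = c_j - sum_{i<j} B[i][j] x_i,
# which is itself an upper-triangular system.

def solve_triangular(T, b):
    """Solve T y = b for upper-triangular T by back substitution."""
    n = len(b)
    y = [0.0] * n
    for i in range(n - 1, -1, -1):
        y[i] = (b[i] - sum(T[i][j] * y[j] for j in range(i + 1, n))) / T[i][i]
    return y

def triangular_sylvester(A, B, C):
    n = len(A)
    cols = []                                 # cols[j] is column j of X
    for j in range(n):
        rhs = [C[i][j] - sum(B[k][j] * cols[k][i] for k in range(j))
               for i in range(n)]
        T = [[A[r][c] + (B[j][j] if r == c else 0.0) for c in range(n)]
             for r in range(n)]
        cols.append(solve_triangular(T, rhs))
    return [[cols[j][i] for j in range(n)] for i in range(n)]

def mm(P, Q):
    n = len(P)
    return [[sum(P[i][k] * Q[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[2.0, 1.0], [0.0, 3.0]]
B = [[1.0, 0.5], [0.0, 2.0]]
X_true = [[1.0, 2.0], [0.0, 1.0]]
C = [[a + b for a, b in zip(r1, r2)]
     for r1, r2 in zip(mm(A, X_true), mm(X_true, B))]
X = triangular_sylvester(A, B, C)
print(X)   # → [[1.0, 2.0], [0.0, 1.0]], recovering X_true
```

Each column solve is well posed exactly when −B[j][j] is not an eigenvalue of A, which is the eigenvalue-disjointness condition stated earlier.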

Modification of the Newton method

Because the derivation of the Newton method neglects the terms of order higher than one in ΔX, it is obvious that in some cases X_{k+1} can be a worse approximation to the solution than X_k was. In order to avoid such a scenario, line searches are usually implemented in the Newton method. In this case the Newton method reads as follows: choose X0, then compute X_{k+1} = X_k + t_k ΔX_k, where we can assume that 0 < t_k ≤ 1. Taking t_k = 1 reduces to a normal Newton step, and if the correction vanishes we have arrived at the solution. The best value of t_k can be found by numerically locating the global minimum of the residual along the search direction. It has been shown that this modification of the Newton method converges quadratically, as the original Newton method does.

Conclusion

The discrete Lyapunov-like functional with a discrete convolution of delayed states was used for the investigation of finite-time stability of a particular class of discrete time-delay systems. The methodology used in the previous studies was to combine the Lyapunov-like approach with Jensen's discrete inequality. The novel sufficient stability conditions were presented in the form of algebraic inequalities, which require solving a particular quadratic (nonlinear) discrete matrix equation for a certain matrix A.

In general, this paper provides one possible solution of this nonlinear matrix equation. The matrix equation was formulated in a way that allows the calculation to be performed using its factored form. These results present a natural extension of the contributions presented in [7][8][9][10][11][12]. Some numerical methods have been presented in order to show their applicability to this problem. A numerical example is also included to demonstrate the computation with the original, suggested procedure.