Solution of matrix equations. Some properties of operations on matrices. Matrix expressions

24.09.2019

Let there be a square matrix A of order n.

The matrix A⁻¹ is called the inverse of the matrix A if A·A⁻¹ = E, where E is the identity matrix of order n.

The identity matrix is a square matrix in which all elements on the main diagonal (running from the upper left corner to the lower right corner) are ones and all other elements are zeros, for example:

An inverse matrix can exist only for square matrices, i.e. matrices that have the same number of rows and columns.

Inverse Matrix Existence Condition Theorem

For a matrix to have an inverse, it is necessary and sufficient that it be nondegenerate.

The matrix A = (A1, A2, ..., An) is called nondegenerate if its column vectors are linearly independent. The number of linearly independent column vectors of a matrix is called the rank of the matrix. Therefore, we can say that for an inverse matrix to exist, it is necessary and sufficient that the rank of the matrix equal its dimension, i.e. r = n.

Algorithm for finding the inverse matrix

  1. Write the matrix A in the table used for solving systems of equations by the Gauss method, and append the identity matrix E to it on the right (in place of the right-hand sides of the equations).
  2. Using Jordan transformations, reduce the matrix A to a matrix consisting of unit columns; the matrix E must be transformed simultaneously.
  3. If necessary, rearrange the rows (equations) of the last table so that the identity matrix E appears under the matrix A of the original table.
  4. Read off the inverse matrix A⁻¹, which stands in the last table under the matrix E of the original table.
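The four steps above can be sketched in code. This is a minimal illustration, not part of the original text: the 2×2 matrix is a made-up example, and pivot selection with a row swap stands in for the row rearrangement of step 3.

```python
def inverse_gauss_jordan(a):
    """Invert a square matrix by row-reducing the augmented table [A | E]."""
    n = len(a)
    # Step 1: write A and append the identity matrix E on the right.
    aug = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
           for i, row in enumerate(a)]
    for col in range(n):
        # Step 3 (merged in): rearrange rows so the pivot is nonzero.
        pivot = max(range(col, n), key=lambda r: abs(aug[r][col]))
        if abs(aug[pivot][col]) < 1e-12:
            raise ValueError("matrix is degenerate, no inverse exists")
        aug[col], aug[pivot] = aug[pivot], aug[col]
        # Step 2: Jordan transformation - turn this column into a unit column.
        p = aug[col][col]
        aug[col] = [x / p for x in aug[col]]
        for r in range(n):
            if r != col:
                f = aug[r][col]
                aug[r] = [x - f * y for x, y in zip(aug[r], aug[col])]
    # Step 4: the right half of the table is now the inverse matrix.
    return [row[n:] for row in aug]

A = [[2.0, 1.0], [5.0, 3.0]]
print([[round(x, 6) for x in row] for row in inverse_gauss_jordan(A)])
# [[3.0, -1.0], [-5.0, 2.0]]
```

A quick way to convince yourself the result is right is the same check the text uses: multiplying A by the returned matrix gives the identity matrix E.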
Example 1

For matrix A, find the inverse matrix A -1

Solution: We write down the matrix A and append the identity matrix E to it on the right. Using Jordan transformations, we reduce the matrix A to the identity matrix E. The calculations are shown in Table 31.1.

We check the correctness of the calculations by multiplying the original matrix A by the inverse matrix A⁻¹.

The result of the matrix multiplication is the identity matrix, so the calculations are correct.

Answer:

Solution of matrix equations

Matrix equations can look like:

AX = B, XA = B, AXB = C,

where A, B, C are given matrices, X is the desired matrix.

Matrix equations are solved by multiplying the equation by the appropriate inverse matrix.

For example, to find the matrix X from the equation AX = B, multiply the equation by A⁻¹ on the left: A⁻¹AX = A⁻¹B, and since A⁻¹A = E, we get X = A⁻¹B.

Therefore, to solve the equation, find the inverse matrix A⁻¹ and multiply it by the matrix B on the right-hand side of the equation.

Other equations are solved similarly.

Example 2

Solve the equation AX = B if

Solution: Since the inverse of the matrix equals (see example 1)
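The matrices of Example 2 were given as images and are not reproduced in the text, so the sketch below uses invented A and B purely to show the method X = A⁻¹B:

```python
import numpy as np

# Hypothetical data (the original A and B from Example 2 are not in the text).
A = np.array([[1.0, 2.0],
              [3.0, 5.0]])
B = np.array([[3.0],
              [7.0]])

# AX = B  =>  multiply on the left by the inverse:  X = A^(-1) B.
X = np.linalg.inv(A) @ B

# Check: substituting X back must reproduce B.
assert np.allclose(A @ X, B)
print(X.ravel())
```

The equations XA = B and AXB = C are handled the same way, multiplying by the inverse on the appropriate side: X = B·A⁻¹ and X = A⁻¹·C·B⁻¹ respectively.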

Matrix method in economic analysis

Along with other techniques, matrix methods also find application here. These methods are based on linear and vector-matrix algebra and are used to analyze complex and multidimensional economic phenomena. Most often they are applied when it is necessary to compare the performance of organizations and their structural divisions.

In the process of applying matrix methods of analysis, several stages can be distinguished.

At the first stage, a system of economic indicators is formed, and on its basis a matrix of initial data is compiled: a table whose rows contain the numbers of the systems under comparison (i = 1, 2, ..., n) and whose columns contain the numbers of the indicators (j = 1, 2, ..., m).

At the second stage, the largest of the available indicator values in each column is found and taken as a unit.

After that, all the values in that column are divided by this highest value, forming a matrix of standardized coefficients.

At the third stage, all components of the matrix are squared. If the indicators differ in significance, each matrix entry is assigned a weighting coefficient k, whose value is determined by an expert.

At the last, fourth stage, the resulting rating values Rj are sorted in ascending or descending order.

The matrix methods described above should be used, for example, in the comparative analysis of various investment projects, as well as in assessing other economic performance indicators of organizations.
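The four stages can be sketched as follows. This is an illustrative reading of the text, with invented data and weights, and it assumes (as the text implies but does not state) that each organization's rating is the square root of the weighted sum of its squared standardized coefficients:

```python
import numpy as np

# Stage 1: matrix of initial data - rows i are organizations, columns j are indicators.
data = np.array([[120.0, 0.8, 15.0],
                 [100.0, 0.9, 12.0],
                 [150.0, 0.7, 18.0]])

# Stage 2: take each column's largest value as a unit and divide the column by it.
standardized = data / data.max(axis=0)

# Stage 3: square the components and apply expert weighting coefficients k.
weights = np.array([0.5, 0.3, 0.2])      # assumed expert weights
weighted = weights * standardized ** 2

# Stage 4: ratings (assumed formula), sorted so the best organization comes first.
ratings = np.sqrt(weighted.sum(axis=1))
ranking = np.argsort(-ratings)           # organization indices, best first
print(ratings.round(3), ranking)
```

With these invented numbers, the third organization leads on two of the three standardized indicators and comes out first in the ranking.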

A matrix is a mathematical object written as a rectangular table of numbers that admits algebraic operations (addition, subtraction, multiplication, etc.) with other objects of the same kind. The rules for performing operations on matrices are designed so that it is convenient to write systems of linear equations. A matrix is usually denoted by a capital Latin letter and set off with round brackets "(...)" (square brackets "[...]" and double vertical lines "||...||" are also found). The numbers that make up the matrix (its elements) are denoted by the same letter as the matrix itself, but lowercase. Each matrix element has two subscripts (a_ij): the first, "i", is the number of the row the element is in, and the second, "j", is the column number.

Matrix operations

Multiplication of a matrix A by a number λ (notation: λA) is the operation of constructing a matrix B whose elements are obtained by multiplying each element of the matrix A by this number, that is, each element of the matrix B is

b_ij = λ·a_ij

Matrix addition A + B is the operation of finding a matrix C all of whose elements are the pairwise sums of the corresponding elements of A and B, that is, each element of the matrix C is

c_ij = a_ij + b_ij

Matrix subtraction A − B is defined similarly:

c_ij = a_ij − b_ij

There is also a zero matrix Θ whose addition does not change a matrix:

A + Θ = A

Matrix multiplication (notation: AB, rarely with a multiplication sign) is the operation of computing a matrix C whose elements are the sums of the products of the elements in the corresponding row of the first factor and column of the second:

c_ij = Σ_k a_ik·b_kj

The first factor must have as many columns as the second has rows. If A has dimension m×n and B has dimension n×k, then their product AB = C has dimension m×k. Matrix multiplication is not commutative. This can be seen at least from the fact that if the matrices are not square, they can be multiplied in only one order, not the other. For square matrices, the result of the multiplication depends on the order of the factors.

Only square matrices can be raised to a power.
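Both points above are easy to see numerically (a small sketch with made-up matrices):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 1],
              [1, 0]])

# Square matrices of the same order multiply in either order,
# but the results differ: multiplication is not commutative.
print(A @ B)   # [[2 1] [4 3]]
print(B @ A)   # [[3 4] [1 2]]
assert not np.array_equal(A @ B, B @ A)

# A non-square example: [2x3] times [3x1] works, the reverse does not.
C = np.array([[1, 0, 2],
              [0, 1, 1]])
v = np.array([[1], [2], [3]])
print(C @ v)   # [[7] [5]]
```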

Identity matrix

For square matrices, there is an identity matrix E such that multiplying a matrix by it does not change the result:

EA = AE = A

The identity matrix has ones only on the main diagonal; all other elements are zero.

For some square matrices one can find the so-called inverse matrix.

The inverse matrix A⁻¹ is such that multiplying the matrix by it yields the identity matrix:

AA − 1 = E

The inverse matrix does not always exist. Matrices for which the inverse exists are called nondegenerate, and those for which it does not are called degenerate. A matrix is nondegenerate if all of its rows (columns) are linearly independent as vectors. The maximum number of linearly independent rows (columns) is called the rank of the matrix. The determinant of a matrix is a normalized skew-symmetric multilinear functional on the rows of the matrix. A matrix is degenerate if and only if its determinant is zero.

Matrix Properties

1. A + (B + C) = (A + B) + C

2. A + B = B + A

3. A(BC) = (AB)C

4. A(B + C) = AB + AC

5. (B + C)A = BA + CA

9. A symmetric matrix A is positive definite (A > 0) if the values of all its leading principal minors satisfy A_k > 0.

10. A symmetric matrix A is negative definite (A < 0) if the matrix (−A) is positive definite, that is, if for any k the leading principal minor of order k, A_k, has the sign (−1)^k.

Systems of linear equations

A system of m equations with n unknowns

a11·x1 + a12·x2 + … + a1n·xn = b1
a21·x1 + a22·x2 + … + a2n·xn = b2
…
am1·x1 + am2·x2 + … + amn·xn = bm

can be represented in matrix form, and then the whole system can be written as: AX = B
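The matrix form can be illustrated with a concrete system (an invented 2×2 example; `np.linalg.solve` solves AX = B directly, without forming the inverse explicitly):

```python
import numpy as np

# The system   x1 + 2*x2 = 5
#            3*x1 + 4*x2 = 11   in matrix form A X = B:
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([5.0, 11.0])

X = np.linalg.solve(A, B)   # solves A X = B

# Check: substituting X back must reproduce the right-hand sides.
assert np.allclose(A @ X, B)
print(X)
```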

Matrix operations

Let a_ij be the elements of matrix A, and b_ij the elements of matrix B.

Multiplication of a matrix A by a number λ (notation: λA) is the operation of constructing a matrix B whose elements are obtained by multiplying each element of the matrix A by this number, that is, each element of the matrix B is b_ij = λ·a_ij

Let's write the matrix A

Multiply the first element of matrix A by 2

Matrix addition A + B is the operation of finding a matrix C all of whose elements are the pairwise sums of the corresponding elements of the matrices A and B, that is, each element of the matrix C is

c_ij = a_ij + b_ij

A + B. Let's write the matrices A and B.

Perform the addition of the first elements of the matrices.

Drag the formula across the remaining cells, first horizontally and then vertically (or vice versa).

Matrix subtraction A − B is defined similarly to addition; it is the operation of finding a matrix C whose elements are

c_ij = a_ij − b_ij

Addition and subtraction are allowed only for matrices of the same size.

There is a zero matrix Θ such that its addition to another matrix A does not change A, i.e.

A+Θ=A

All elements of the zero matrix are equal to zero.

So, the online services for solving matrix problems:

The matrix service allows you to perform elementary transformations of matrices.
If your task requires a more complex transformation, this service should be used as a constructor.

Example. Given matrices A and B, find C = A⁻¹·B + Bᵀ.

  1. First find the inverse matrix A1 = A⁻¹, using the service for finding the inverse matrix;
  2. Then, having found the matrix A1, perform the matrix multiplication A2 = A1·B, using the service for matrix multiplication;
  3. Perform the matrix transposition A3 = Bᵀ (service for finding the transposed matrix);
  4. Finally, find the sum of matrices C = A2 + A3 (service for calculating the sum of matrices) - and we get an answer with the most detailed solution!
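The same four steps can be chained in a few lines of code. A sketch with made-up square matrices (the original A and B were not given in the text):

```python
import numpy as np

# Hypothetical input matrices.
A = np.array([[2.0, 1.0],
              [1.0, 1.0]])
B = np.array([[1.0, 0.0],
              [2.0, 3.0]])

A1 = np.linalg.inv(A)   # step 1: A1 = A^(-1)
A2 = A1 @ B             # step 2: A2 = A1 * B
A3 = B.T                # step 3: A3 = B^T
C = A2 + A3             # step 4: C = A2 + A3
print(C)
```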

Product of matrices

This is a two-step online service:

  • Enter the first factor, matrix A
  • Enter the second factor, matrix or column vector B

Multiplication of a matrix by a vector

The product of a matrix and a vector can be found using the Matrix multiplication service
(the first factor is the given matrix, the second factor is the column composed of the elements of the given vector).

Inverse matrix

This is a two-step online service:

  • Enter the matrix A whose inverse you need to find
  • Get an answer with a detailed solution for finding the inverse matrix

Matrix determinant

This is a one-step online service:

  • Enter the matrix A whose determinant you need to find

Matrix transposition

Here you can follow the matrix transposition algorithm and learn how to solve such problems yourself.
This is a one-step online service:

  • Enter the matrix A that needs to be transposed

Matrix rank

This is a one-step online service:

  • Enter the matrix A whose rank you need to find

Matrix eigenvalues and eigenvectors

This is a one-step online service:

  • Enter the matrix A whose eigenvectors and eigenvalues you need to find

Matrix exponentiation

This is a two-step online service:

  • Enter the matrix A to be raised to the power
  • Enter an integer exponent q

This topic is one of the most hated among students. Worse, probably, only determinants.

The trick is that the very concept of an inverse element (and I am not talking only about matrices now) refers us to the operation of multiplication. Even in the school curriculum multiplication is considered a tricky operation, and matrix multiplication is a whole separate topic, to which I have devoted an entire section and a video tutorial.

Today we will not go into the details of matrix calculations. Just remember: how matrices are denoted, how they are multiplied and what follows from this.

Review: Matrix Multiplication

First of all, let's agree on notation. A matrix $A$ of size $\left[ m\times n \right]$ is simply a table of numbers with exactly $m$ rows and $n$ columns:

\[A=\underbrace{\left[ \begin{matrix} {{a}_{11}} & {{a}_{12}} & ... & {{a}_{1n}} \\ {{a}_{21}} & {{a}_{22}} & ... & {{a}_{2n}} \\ ... & ... & ... & ... \\ {{a}_{m1}} & {{a}_{m2}} & ... & {{a}_{mn}} \\ \end{matrix} \right]}_{n}\]

To avoid accidentally mixing up rows and columns (believe me, in an exam you can confuse a one with a two, never mind a row with a column), just take a look at the picture:

Determination of indexes for matrix cells

What's happening? If we place the standard coordinate system $OXY$ in the upper left corner and direct the axes so that they cover the entire matrix, then each cell of this matrix can be uniquely associated with the coordinates $\left(x;y \right)$ - this will be the row number and column number.

Why is the coordinate system placed exactly in the upper left corner? Yes, because it is from there that we begin to read any texts. It's very easy to remember.

Why is the $x$ axis pointing down and not to the right? Again, it's simple: take the standard coordinate system (the $x$ axis goes to the right, the $y$ axis goes up) and rotate it so that it encloses the matrix. This is a 90 degree clockwise rotation - we see its result in the picture.

In general, we figured out how to determine the indices of the matrix elements. Now let's deal with multiplication.

Definition. The matrices $A=\left[ m\times n \right]$ and $B=\left[ n\times k \right]$, when the number of columns in the first matches the number of rows in the second, are called consistent.

And it is in that order. One can be pedantic and say that the matrices $A$ and $B$ form an ordered pair $\left( A;B \right)$: if they are consistent in this order, it does not at all follow that the pair $\left( B;A \right)$ is also consistent.

Only consistent matrices can be multiplied.

Definition. The product of consistent matrices $A=\left[ m\times n \right]$ and $B=\left[ n\times k \right]$ is the new matrix $C=\left[ m\times k \right]$ whose elements ${{c}_{ij}}$ are calculated by the formula:

\[{{c}_{ij}}=\sum\limits_{s=1}^{n}{{{a}_{is}}\cdot {{b}_{sj}}}\]

In other words: to get the element ${{c}_{ij}}$ of the matrix $C=A\cdot B$, take the $i$-th row of the first matrix and the $j$-th column of the second matrix, multiply their elements pairwise, and add up the results.

Yes, that's a harsh definition. Several facts immediately follow from it:

  1. Matrix multiplication is, generally speaking, non-commutative: $A\cdot B\ne B\cdot A$;
  2. However, multiplication is associative: $\left(A\cdot B \right)\cdot C=A\cdot \left(B\cdot C \right)$;
  3. And even distributive: $\left(A+B \right)\cdot C=A\cdot C+B\cdot C$;
  4. And distributive again: $A\cdot \left(B+C \right)=A\cdot B+A\cdot C$.

The distributivity of multiplication had to be described separately for the left and right multiplier-sum just because of the non-commutativity of the multiplication operation.
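The element formula above can be implemented directly. A plain-Python sketch, with the consistency check and the inner summation written out; the test matrices are invented:

```python
def matmul(A, B):
    """Product of consistent matrices: c_ij = sum over k of a_ik * b_kj."""
    m, n = len(A), len(A[0])
    n2, p = len(B), len(B[0])
    if n != n2:
        raise ValueError("matrices are not consistent in this order")
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C = [[1, 0], [0, 2]]

print(matmul(A, B))   # [[19, 22], [43, 50]]

# Associativity holds, commutativity does not:
assert matmul(matmul(A, B), C) == matmul(A, matmul(B, C))
assert matmul(A, B) != matmul(B, A)
```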

If, nevertheless, it turns out that $A\cdot B=B\cdot A$, such matrices are called permutable.

Among all matrices there are special ones: those that, when multiplied by any matrix $A$, give $A$ again:

Definition. A matrix $E$ is called an identity matrix if $A\cdot E=A$ or $E\cdot A=A$. In the case of a square matrix $A$ we can write:

\[A\cdot E=E\cdot A=A\]

The identity matrix is ​​a frequent guest in solving matrix equations. And in general, a frequent guest in the world of matrices. :)

And it is because of this $E$ that someone came up with all the fun that will be described next.

What is an inverse matrix

Since matrix multiplication is a very time-consuming operation (you have to multiply a bunch of rows and columns), the concept of an inverse matrix is ​​also not the most trivial. And it needs some explanation.

Key Definition

Well, it's time to know the truth.

Definition. The matrix $B$ is called the inverse of the matrix $A$ if

\[A\cdot B=B\cdot A=E\]

The inverse matrix is denoted by ${{A}^{-1}}$ (not to be confused with an exponent!), so the definition can be rewritten as:

\[A\cdot {{A}^{-1}}={{A}^{-1}}\cdot A=E\]

It would seem that everything is extremely simple and clear. But when analyzing such a definition, several questions immediately arise:

  1. Does an inverse matrix always exist? And if not always, then how to determine: when it exists and when it does not?
  2. And who said that such a matrix is ​​exactly one? What if for some original matrix $A$ there is a whole crowd of inverses?
  3. What do all these "reverses" look like? And how do you actually count them?

As for the calculation algorithms - we will talk about this a little later. But we will answer the rest of the questions right now. Let us arrange them in the form of separate assertions-lemmas.

Basic properties

Let's start with what the matrix $A$ must look like in order for ${{A}^{-1}}$ to exist. We will now make sure that both of these matrices must be square and of the same size: $\left[ n\times n \right]$.

Lemma 1. Given a matrix $A$ and its inverse ${{A}^{-1}}$. Then both of these matrices are square and have the same order $n$.

Proof. Everything is simple. Let the matrix $A=\left[ m\times n \right]$ and ${{A}^{-1}}=\left[ a\times b \right]$. Since the product $A\cdot {{A}^{-1}}=E$ exists by definition, the matrices $A$ and ${{A}^{-1}}$ are consistent in that order:

\[\begin{align} & \left[ m\times n \right]\cdot \left[ a\times b \right]=\left[ m\times b \right] \\ & n=a \end{align}\]

This is a direct consequence of the matrix multiplication algorithm: the coefficients $n$ and $a$ are "transit" and must be equal.

At the same time, the reverse product is also defined: ${{A}^{-1}}\cdot A=E$, so the matrices ${{A}^{-1}}$ and $A$ are also consistent in that order:

\[\begin{align} & \left[ a\times b \right]\cdot \left[ m\times n \right]=\left[ a\times n \right] \\ & b=m \end{align}\]

Thus, without loss of generality, we may assume that $A=\left[ m\times n \right]$ and ${{A}^{-1}}=\left[ n\times m \right]$. But by definition $A\cdot {{A}^{-1}}={{A}^{-1}}\cdot A$, so the dimensions of the matrices coincide exactly:

\[\begin{align} & \left[ m\times n \right]=\left[ n\times m \right] \\ & m=n \end{align}\]

So it turns out that all three matrices - $A$, $((A)^(-1))$ and $E$ - are square in size $\left[ n\times n \right]$. The lemma is proven.

Well, that's already good. We see that only square matrices are invertible. Now let's make sure the inverse matrix is always unique.

Lemma 2. Given a matrix $A$ and its inverse ${{A}^{-1}}$. Then this inverse matrix is unique.

Proof. Let's argue by contradiction: suppose the matrix $A$ has at least two inverses, $B$ and $C$. Then, by definition, the following equalities hold:

\[\begin{align} & A\cdot B=B\cdot A=E; \\ & A\cdot C=C\cdot A=E. \\ \end{align}\]

By Lemma 1, all four matrices $A$, $B$, $C$ and $E$ are square of the same order $\left[ n\times n \right]$. Therefore, the product $B\cdot A\cdot C$ is defined.

Since matrix multiplication is associative (but not commutative!), we can write:

\[\begin{align} & B\cdot A\cdot C=\left( B\cdot A \right)\cdot C=E\cdot C=C; \\ & B\cdot A\cdot C=B\cdot \left( A\cdot C \right)=B\cdot E=B; \\ & B\cdot A\cdot C=C=B\Rightarrow B=C. \\ \end{align}\]

We are left with the only possible option: the two inverse matrices coincide. The lemma is proven.

The above reasoning repeats almost verbatim the proof of the uniqueness of the inverse element for real numbers $b\ne 0$. The only significant addition is taking the dimensions of the matrices into account.

However, we still do not know anything about whether any square matrix is ​​invertible. Here the determinant comes to our aid - this is a key characteristic for all square matrices.

Lemma 3. Given a matrix $A$. If the inverse matrix ${{A}^{-1}}$ exists, then the determinant of the original matrix is nonzero:

\[\left| A \right|\ne 0\]

Proof. We already know that $A$ and ${{A}^{-1}}$ are square matrices of size $\left[ n\times n \right]$, so each of them has a determinant: $\left| A \right|$ and $\left| {{A}^{-1}} \right|$. The determinant of a product equals the product of the determinants:

\[\left| A\cdot B \right|=\left| A \right|\cdot \left| B \right|\Rightarrow \left| A\cdot {{A}^{-1}} \right|=\left| A \right|\cdot \left| {{A}^{-1}} \right|\]

But by definition $A\cdot {{A}^{-1}}=E$, and the determinant of $E$ is always equal to 1, so

\[\begin{align} & A\cdot {{A}^{-1}}=E; \\ & \left| A\cdot {{A}^{-1}} \right|=\left| E \right|; \\ & \left| A \right|\cdot \left| {{A}^{-1}} \right|=1. \\ \end{align}\]

The product of two numbers is equal to one only if each of these numbers is different from zero:

\[\left| A \right|\ne 0;\quad \left| {{A}^{-1}} \right|\ne 0.\]

So it turns out that $\left| A \right|\ne 0$. The lemma is proven.

In fact, this requirement is quite logical. Now we will analyze the algorithm for finding the inverse matrix - and it will become completely clear why, in principle, no inverse matrix can exist with a zero determinant.

But first, let's formulate an "auxiliary" definition:

Definition. A degenerate matrix is a square matrix of size $\left[ n\times n \right]$ whose determinant is zero.

Thus, we can assert that any invertible matrix is nondegenerate.
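The connection between the determinant and invertibility is easy to see numerically. A small sketch with one nondegenerate and one degenerate matrix (both invented):

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [5.0, 2.0]])   # det = 1, nondegenerate
D = np.array([[1.0, 2.0],
              [2.0, 4.0]])   # proportional rows, det = 0, degenerate

print(np.linalg.det(A))  # 1.0 up to rounding
print(np.linalg.det(D))  # 0.0 up to rounding

np.linalg.inv(A)  # works fine
try:
    np.linalg.inv(D)
except np.linalg.LinAlgError as e:
    # a zero determinant means no inverse exists
    print("no inverse:", e)
```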

How to find the inverse matrix

Now we will consider a universal algorithm for finding inverse matrices. In fact, there are two generally accepted algorithms, and today we will consider the second one as well.

The one we consider now is very efficient for matrices of size $\left[ 2\times 2 \right]$ and, in part, of size $\left[ 3\times 3 \right]$. But starting from size $\left[ 4\times 4 \right]$ it is better not to use it. Why - you will understand in a moment.

Algebraic complements

Get ready. Now there will be pain. No, don't worry: a beautiful nurse in a skirt and lace stockings is not coming to give you an injection. Everything is much more prosaic: algebraic complements and Her Majesty the adjugate matrix are coming your way.

Let's start with the main thing. Let there be a square matrix $A=\left[ n\times n \right]$ whose elements are ${{a}_{ij}}$. Then for each such element one can define an algebraic complement:

Definition. The algebraic complement ${{A}_{ij}}$ to the element ${{a}_{ij}}$ in the $i$-th row and $j$-th column of the matrix $A=\left[ n\times n \right]$ is a construction of the form

\[{{A}_{ij}}={{\left( -1 \right)}^{i+j}}\cdot M_{ij}^{*}\]

where $M_{ij}^{*}$ is the determinant of the matrix obtained from the original $A$ by deleting that same $i$-th row and $j$-th column.

Once again. The algebraic complement to the matrix element with coordinates $\left( i;j \right)$ is denoted ${{A}_{ij}}$ and is calculated according to the following scheme:

  1. First, delete the $i$-th row and the $j$-th column from the original matrix. We get a new square matrix, and we denote its determinant by $M_{ij}^{*}$.
  2. Then multiply this determinant by ${{\left( -1 \right)}^{i+j}}$ - at first this expression may seem mind-blowing, but in fact we are just working out the sign in front of $M_{ij}^{*}$.
  3. Count it up - we get a specific number. That is, an algebraic complement is just a number, not some new matrix.

The determinant $M_{ij}^{*}$ itself is called the complementary minor to the element ${{a}_{ij}}$. And in this sense, the above definition of the algebraic complement is a special case of a more complex definition - the one we considered in the lesson about the determinant.
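The scheme above (delete row $i$ and column $j$, take the determinant, attach the sign ${{\left( -1 \right)}^{i+j}}$) fits in two lines of code. A sketch using 0-based indices and the 2×2 matrix from the worked example below:

```python
import numpy as np

def cofactor(A, i, j):
    """Algebraic complement A_ij of the element a_ij (0-based i, j)."""
    minor = np.delete(np.delete(A, i, axis=0), j, axis=1)  # cross out row i, column j
    return (-1) ** (i + j) * np.linalg.det(minor)          # sign times the minor M*_ij

A = np.array([[3.0, 1.0],
              [5.0, 2.0]])
# For a 2x2 matrix the minors are 1x1 determinants, i.e. plain numbers:
print(cofactor(A, 0, 0))   # 2.0  (= +|2|)
print(cofactor(A, 0, 1))   # -5.0 (= -|5|)
```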

Important note. Actually, in "adult" mathematics, algebraic additions are defined as follows:

  1. Take $k$ rows and $k$ columns in a square matrix. At their intersection we get a matrix of size $\left[ k\times k \right]$ - its determinant is called a minor of order $k$ and is denoted ${{M}_{k}}$.
  2. Then cross out these "selected" $k$ rows and $k$ columns. Once again we get a square matrix - its determinant is called the complementary minor and is denoted $M_{k}^{*}$.
  3. Multiply $M_{k}^{*}$ by ${{\left( -1 \right)}^{t}}$, where $t$ is (attention now!) the sum of the numbers of all the selected rows and columns. This will be the algebraic complement.

Take a look at the third step: there is actually a sum of $2k$ terms! Another thing is that for $k=1$ we get only 2 terms - these are the same $i+j$, the "coordinates" of the element ${{a}_{ij}}$ for which we are looking for the algebraic complement.

So today we use a slightly simplified definition. But as we will see later, it will be more than enough. Much more important is the following:

Definition. The adjugate matrix $S$ of a square matrix $A=\left[ n\times n \right]$ is the new matrix of size $\left[ n\times n \right]$ obtained from $A$ by replacing each ${{a}_{ij}}$ with the algebraic complement ${{A}_{ij}}$:

\[S=\left[ \begin{matrix} {{A}_{11}} & {{A}_{12}} & ... & {{A}_{1n}} \\ {{A}_{21}} & {{A}_{22}} & ... & {{A}_{2n}} \\ ... & ... & ... & ... \\ {{A}_{n1}} & {{A}_{n2}} & ... & {{A}_{nn}} \\ \end{matrix} \right]\]

The first thought that arises at the moment of realizing this definition is “this is how much you have to count in total!” Relax: you have to count, but not so much. :)

Well, all this is very nice, but why is it necessary? But why.

Main theorem

Let's go back a little. Remember, Lemma 3 stated that an invertible matrix $A$ is always nondegenerate (that is, its determinant is nonzero: $\left| A \right|\ne 0$).

So, the converse is also true: if the matrix $A$ is not degenerate, then it is always invertible. And there is even a scheme for finding ${{A}^{-1}}$. Check it out:

Inverse matrix theorem. Let a square matrix $A=\left[ n\times n \right]$ be given with nonzero determinant: $\left| A \right|\ne 0$. Then the inverse matrix ${{A}^{-1}}$ exists and is calculated by the formula:

\[{{A}^{-1}}=\frac{1}{\left| A \right|}\cdot {{S}^{T}}\]

And now - the same thing, but legibly. To find the inverse matrix, you need to:

  1. Calculate the determinant $\left| A \right|$ and make sure it is nonzero.
  2. Build the adjugate matrix $S$, i.e. compute all ${{n}^{2}}$ algebraic complements ${{A}_{ij}}$ and put each in place of ${{a}_{ij}}$.
  3. Transpose the matrix $S$ and multiply it by the number $1/\left| A \right|$.

And that's it! The inverse matrix ${{A}^{-1}}$ has been found. Let's look at examples.

A task. Find the inverse matrix:

\[\left[ \begin{matrix} 3 & 1 \\ 5 & 2 \\ \end{matrix} \right]\]

Solution. Let's check the invertibility. Let's calculate the determinant:

\[\left| A \right|=\left| \begin{matrix} 3 & 1 \\ 5 & 2 \\ \end{matrix} \right|=3\cdot 2-1\cdot 5=6-5=1\]

The determinant is nonzero, so the matrix is invertible. Let's build the adjugate matrix:

Let's calculate the algebraic complements:

\[\begin{align} & {{A}_{11}}={{\left( -1 \right)}^{1+1}}\cdot \left| 2 \right|=2; \\ & {{A}_{12}}={{\left( -1 \right)}^{1+2}}\cdot \left| 5 \right|=-5; \\ & {{A}_{21}}={{\left( -1 \right)}^{2+1}}\cdot \left| 1 \right|=-1; \\ & {{A}_{22}}={{\left( -1 \right)}^{2+2}}\cdot \left| 3 \right|=3. \\ \end{align}\]

Pay attention: the determinants |2|, |5|, |1| and |3| are determinants of matrices of size $\left[ 1\times 1 \right]$, not absolute values. That is, if those determinants had been negative numbers, the "minus" would not be removed.

In total, the adjugate matrix, and with it the inverse, look like this:

\[{{A}^{-1}}=\frac{1}{\left| A \right|}\cdot {{S}^{T}}=\frac{1}{1}\cdot {{\left[ \begin{array}{*{35}{r}} 2 & -5 \\ -1 & 3 \\ \end{array} \right]}^{T}}=\left[ \begin{array}{*{35}{r}} 2 & -1 \\ -5 & 3 \\ \end{array} \right]\]

That's all. The problem is solved.

Answer. $\left[ \begin{array}{*{35}{r}} 2 & -1 \\ -5 & 3 \\ \end{array} \right]$

A task. Find the inverse matrix:

\[\left[ \begin{array}{*{35}{r}} 1 & -1 & 2 \\ 0 & 2 & -1 \\ 1 & 0 & 1 \\ \end{array} \right]\]

Solution. Again, we consider the determinant:

\[\begin{align} & \left| \begin{array}{*{35}{r}} 1 & -1 & 2 \\ 0 & 2 & -1 \\ 1 & 0 & 1 \\ \end{array} \right|=\left( 1\cdot 2\cdot 1+\left( -1 \right)\cdot \left( -1 \right)\cdot 1+2\cdot 0\cdot 0 \right)- \\ & -\left( 2\cdot 2\cdot 1+\left( -1 \right)\cdot 0\cdot 1+1\cdot \left( -1 \right)\cdot 0 \right)= \\ & =\left( 2+1+0 \right)-\left( 4+0+0 \right)=-1\ne 0. \\ \end{align}\]

The determinant is nonzero - the matrix is invertible. But now comes the grind: we have to compute as many as 9 (nine, damn it!) algebraic complements, each containing a $\left[ 2\times 2 \right]$ determinant. Off we go:

\[\begin{matrix} {{A}_{11}}={{\left( -1 \right)}^{1+1}}\cdot \left| \begin{matrix} 2 & -1 \\ 0 & 1 \\ \end{matrix} \right|=2; \\ {{A}_{12}}={{\left( -1 \right)}^{1+2}}\cdot \left| \begin{matrix} 0 & -1 \\ 1 & 1 \\ \end{matrix} \right|=-1; \\ {{A}_{13}}={{\left( -1 \right)}^{1+3}}\cdot \left| \begin{matrix} 0 & 2 \\ 1 & 0 \\ \end{matrix} \right|=-2; \\ ... \\ {{A}_{33}}={{\left( -1 \right)}^{3+3}}\cdot \left| \begin{matrix} 1 & -1 \\ 0 & 2 \\ \end{matrix} \right|=2; \\ \end{matrix}\]

In short, the adjugate matrix will look like this:

Therefore, the inverse matrix will be:

\[{{A}^{-1}}=\frac{1}{-1}\cdot {{\left[ \begin{matrix} 2 & -1 & -2 \\ 1 & -1 & -1 \\ -3 & 1 & 2 \\ \end{matrix} \right]}^{T}}=\left[ \begin{array}{*{35}{r}} -2 & -1 & 3 \\ 1 & 1 & -1 \\ 2 & 1 & -2 \\ \end{array} \right]\]

Well, that's all. Here is the answer.

Answer. $\left[ \begin{array}{*{35}{r}} -2 & -1 & 3 \\ 1 & 1 & -1 \\ 2 & 1 & -2 \\ \end{array} \right]$

As you can see, at the end of each example, we performed a check. In this regard, an important note:

Don't be lazy to check. Multiply the original matrix by the found inverse - you should get $E$.

It is much easier and faster to perform this check than to look for an error in further calculations, when, for example, you solve a matrix equation.
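The whole theorem (determinant, adjugate matrix of algebraic complements, transpose, divide) fits in a few lines of code. A sketch that reproduces the 3×3 example above, with the check built in:

```python
import numpy as np

def inverse_via_adjugate(A):
    """A^(-1) = (1/|A|) * S^T, where S is the matrix of algebraic complements."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    det = np.linalg.det(A)
    if abs(det) < 1e-12:
        raise ValueError("determinant is zero: the matrix is degenerate")
    S = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            S[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return S.T / det

A = [[1, -1, 2],
     [0, 2, -1],
     [1, 0, 1]]
inv = inverse_via_adjugate(A)
print(inv.round(6))

# The check from the text: the original matrix times the inverse must give E.
assert np.allclose(np.array(A) @ inv, np.eye(3))
```

Note the double loop over all n² cofactors, each an (n−1)×(n−1) determinant: this is exactly why the method stops being pleasant beyond 3×3.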

Alternative way

As I said, the inverse matrix theorem works fine for sizes $\left[ 2\times 2 \right]$ and $\left[ 3\times 3 \right]$ (in the latter case it is already not so "pretty"), but for matrices of larger sizes the sadness begins.

But don't worry: there is an alternative algorithm with which you can calmly find the inverse even of a $\left[ 10\times 10 \right]$ matrix. But, as is often the case, to consider this algorithm we need a little theoretical background.

Elementary transformations

Among the various transformations of the matrix, there are several special ones - they are called elementary. There are exactly three such transformations:

  1. Multiplication. You can take the $i$-th row (column) and multiply it by any number $k\ne 0$;
  2. Addition. Add to the $i$-th row (column) any other $j$-th row (column) multiplied by any number $k\ne 0$ (of course, $k=0$ is also possible, but what would be the point? Nothing would change);
  3. Permutation. Take the $i$-th and $j$-th rows (columns) and swap them.
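The three elementary row operations look like this in code (a sketch with numpy arrays and 0-based row indices; the matrix is the one from the 3×3 task below, and the particular numbers k are invented):

```python
import numpy as np

M = np.array([[1.0, 5.0, 1.0],
              [3.0, 2.0, 1.0],
              [6.0, -2.0, 1.0]])

# 1. Multiplication: multiply row 1 by k = 2 (k must be nonzero).
M[1] *= 2.0

# 2. Addition: add (-2) times row 0 to row 2.
M[2] += -2.0 * M[0]

# 3. Permutation: swap rows 0 and 2.
M[[0, 2]] = M[[2, 0]]

print(M)
```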

Why these transformations are called elementary (for large matrices they do not look so elementary) and why there are only three of them - these questions are beyond the scope of today's lesson. Therefore, we will not go into details.

Something else is important: we will have to perform all these manipulations on the augmented matrix. Yes, you heard right. Now there will be one more definition - the last one in today's lesson.

Augmented matrix

Surely at school you solved systems of equations by the addition method: subtract one row from another, multiply some row by a number, and so on.

So: now everything will be the same, but already “in an adult way”. Ready?

Definition. Let a matrix $A=\left[ n\times n \right]$ and an identity matrix $E$ of the same size $n$ be given. Then the augmented matrix $\left[ A\left| E \right. \right]$ is the new $\left[ n\times 2n \right]$ matrix that looks like this:

\[\left[ A\left| E \right. \right]=\left[ \begin{array}{rrrr|rrrr} {{a}_{11}} & {{a}_{12}} & ... & {{a}_{1n}} & 1 & 0 & ... & 0 \\ {{a}_{21}} & {{a}_{22}} & ... & {{a}_{2n}} & 0 & 1 & ... & 0 \\ ... & ... & ... & ... & ... & ... & ... & ... \\ {{a}_{n1}} & {{a}_{n2}} & ... & {{a}_{nn}} & 0 & 0 & ... & 1 \\ \end{array} \right]\]

In short: we take the matrix $A$, append the identity matrix $E$ of the right size to it on the right, and separate them with a vertical line for beauty - there's your augmented matrix. :)

What's the catch? And here's what:

Theorem. Let the matrix $A$ be invertible. Consider the augmented matrix $\left[ A\left| E \right. \right]$. If, using elementary row transformations, we bring it to the form $\left[ E\left| B \right. \right]$, i.e. by multiplying, subtracting and rearranging rows obtain the matrix $E$ on the left in place of $A$, then the matrix $B$ that appears on the right is the inverse of $A$:

\[\left[ A\left| E \right. \right]\to \left[ E\left| B \right. \right]\Rightarrow B=A^{-1}\]

It's that simple! In short, the algorithm for finding the inverse matrix looks like this:

  1. Write the augmented matrix $\left[ A\left| E \right. \right]$;
  2. Perform elementary row transformations until $E$ appears on the left in place of $A$;
  3. Of course, something will also appear on the right - a certain matrix $B$. This will be the inverse;
  4. Profit! :)
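The four steps above translate almost line for line into code. Below is a minimal Python sketch (the function name is my own, not from the text) that builds the augmented matrix $[A|E]$ and applies the three elementary row operations until the left half becomes $E$; exact `Fraction` arithmetic avoids any rounding issues:

```python
from fractions import Fraction

def invert(a):
    """Invert a square matrix by Gauss-Jordan elimination on [A | E]."""
    n = len(a)
    # Step 1: build the augmented matrix [A | E].
    aug = [[Fraction(x) for x in row] + [Fraction(i == j) for j in range(n)]
           for i, row in enumerate(a)]
    for col in range(n):
        # Permutation: find a row with a nonzero pivot and swap it up.
        pivot = next(r for r in range(col, n) if aug[r][col] != 0)
        aug[col], aug[pivot] = aug[pivot], aug[col]
        # Multiplication: scale the pivot row so the pivot becomes 1.
        p = aug[col][col]
        aug[col] = [x / p for x in aug[col]]
        # Addition: subtract multiples of the pivot row from all other rows.
        for r in range(n):
            if r != col and aug[r][col] != 0:
                k = aug[r][col]
                aug[r] = [x - k * y for x, y in zip(aug[r], aug[col])]
    # The left half is now E; the right half is the inverse.
    return [row[n:] for row in aug]

A = [[1, 5, 1], [3, 2, 1], [6, -2, 1]]  # the matrix from the first example below
assert invert(A) == [[4, -7, 3], [3, -5, 2], [-18, 32, -13]]
```

Unlike the hand computation, the sketch always normalizes the pivot to 1 immediately; the result is the same, the order of elementary operations just differs.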

Of course, much easier said than done. So let's look at a couple of examples: for the sizes $\left[ 3\times 3 \right]$ and $\left[ 4\times 4 \right]$.

Problem. Find the inverse matrix:

\[\left[ \begin{array}{*{35}{r}} 1 & 5 & 1 \\ 3 & 2 & 1 \\ 6 & -2 & 1 \\\end{array} \right]\]

Solution. We compose the augmented matrix:

\[\left[ \begin{array}{rrr|rrr} 1 & 5 & 1 & 1 & 0 & 0 \\ 3 & 2 & 1 & 0 & 1 & 0 \\ 6 & -2 & 1 & 0 & 0 & 1 \\\end{array} \right]\]

Since the last column of the original matrix is filled with ones, we subtract the first row from the rest:

\[\begin{align} & \left[ \begin{array}{rrr|rrr} 1 & 5 & 1 & 1 & 0 & 0 \\ 3 & 2 & 1 & 0 & 1 & 0 \\ 6 & -2 & 1 & 0 & 0 & 1 \\\end{array} \right]\begin{matrix} \downarrow \\ -1 \\ -1 \\\end{matrix}\to \\ & \to \left[ \begin{array}{rrr|rrr} 1 & 5 & 1 & 1 & 0 & 0 \\ 2 & -3 & 0 & -1 & 1 & 0 \\ 5 & -7 & 0 & -1 & 0 & 1 \\\end{array} \right] \\ \end{align}\]

There are no more ones left, except in the first row. But we don't touch it, otherwise the ones we just removed will start to "multiply" in the third column.

But we can subtract the second row twice from the last one - this gives a one in the lower left corner:

\[\begin{align} & \left[ \begin{array}{rrr|rrr} 1 & 5 & 1 & 1 & 0 & 0 \\ 2 & -3 & 0 & -1 & 1 & 0 \\ 5 & -7 & 0 & -1 & 0 & 1 \\\end{array} \right]\begin{matrix} \ \\ \downarrow \\ -2 \\\end{matrix}\to \\ & \to \left[ \begin{array}{rrr|rrr} 1 & 5 & 1 & 1 & 0 & 0 \\ 2 & -3 & 0 & -1 & 1 & 0 \\ 1 & -1 & 0 & 1 & -2 & 1 \\\end{array} \right] \\ \end{align}\]

Now we can subtract the last row from the first and twice from the second - in this way we will “zero out” the first column:

\[\begin{align} & \left[ \begin{array}{rrr|rrr} 1 & 5 & 1 & 1 & 0 & 0 \\ 2 & -3 & 0 & -1 & 1 & 0 \\ 1 & -1 & 0 & 1 & -2 & 1 \\\end{array} \right]\begin{matrix} -1 \\ -2 \\ \uparrow \\\end{matrix}\to \\ & \to \left[ \begin{array}{rrr|rrr} 0 & 6 & 1 & 0 & 2 & -1 \\ 0 & -1 & 0 & -3 & 5 & -2 \\ 1 & -1 & 0 & 1 & -2 & 1 \\\end{array} \right] \\ \end{align}\]

Multiply the second row by −1, then subtract it 6 times from the first and add it once to the last:

\[\begin{align} & \left[ \begin{array}{rrr|rrr} 0 & 6 & 1 & 0 & 2 & -1 \\ 0 & -1 & 0 & -3 & 5 & -2 \\ 1 & -1 & 0 & 1 & -2 & 1 \\\end{array} \right]\begin{matrix} \ \\ \left| \cdot \left(-1 \right) \right. \\ \ \\\end{matrix}\to \\ & \to \left[ \begin{array}{rrr|rrr} 0 & 6 & 1 & 0 & 2 & -1 \\ 0 & 1 & 0 & 3 & -5 & 2 \\ 1 & -1 & 0 & 1 & -2 & 1 \\\end{array} \right]\begin{matrix} -6 \\ \updownarrow \\ +1 \\\end{matrix}\to \\ & \to \left[ \begin{array}{rrr|rrr} 0 & 0 & 1 & -18 & 32 & -13 \\ 0 & 1 & 0 & 3 & -5 & 2 \\ 1 & 0 & 0 & 4 & -7 & 3 \\\end{array} \right] \\ \end{align}\]

It remains only to swap rows 1 and 3:

\[\left[ \begin{array}{rrr|rrr} 1 & 0 & 0 & 4 & -7 & 3 \\ 0 & 1 & 0 & 3 & -5 & 2 \\ 0 & 0 & 1 & -18 & 32 & -13 \\\end{array} \right]\]

Done! On the right is the required inverse matrix.

Answer. $\left[ \begin{array}{*{35}{r}} 4 & -7 & 3 \\ 3 & -5 & 2 \\ -18 & 32 & -13 \\\end{array} \right]$
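The answer is easy to check: by definition, $A\cdot A^{-1}$ must give the identity matrix $E$. A quick sanity check in Python (plain nested lists, no libraries):

```python
A = [[1, 5, 1], [3, 2, 1], [6, -2, 1]]
B = [[4, -7, 3], [3, -5, 2], [-18, 32, -13]]  # the answer found above

# Multiply A by B: entry (i, j) is the dot product of row i of A and column j of B.
product = [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
           for i in range(3)]
assert product == [[1, 0, 0], [0, 1, 0], [0, 0, 1]]  # A·B = E, as required
```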

Problem. Find the inverse matrix:

\[\left[ \begin{matrix} 1 & 4 & 2 & 3 \\ 1 & -2 & 1 & -2 \\ 1 & -1 & 1 & 1 \\ 0 & -10 & -2 & -5 \\\end{matrix} \right]\]

Solution. Again we compose the augmented matrix:

\[\left[ \begin{array}{rrrr|rrrr} 1 & 4 & 2 & 3 & 1 & 0 & 0 & 0 \\ 1 & -2 & 1 & -2 & 0 & 1 & 0 & 0 \\ 1 & -1 & 1 & 1 & 0 & 0 & 1 & 0 \\ 0 & -10 & -2 & -5 & 0 & 0 & 0 & 1 \\\end{array} \right]\]

Let's sigh a little at how much we'll have to compute now ... and start counting. To begin with, we "zero out" the first column by subtracting row 1 from rows 2 and 3:

\[\begin{align} & \left[ \begin{array}{rrrr|rrrr} 1 & 4 & 2 & 3 & 1 & 0 & 0 & 0 \\ 1 & -2 & 1 & -2 & 0 & 1 & 0 & 0 \\ 1 & -1 & 1 & 1 & 0 & 0 & 1 & 0 \\ 0 & -10 & -2 & -5 & 0 & 0 & 0 & 1 \\\end{array} \right]\begin{matrix} \downarrow \\ -1 \\ -1 \\ \ \\\end{matrix}\to \\ & \to \left[ \begin{array}{rrrr|rrrr} 1 & 4 & 2 & 3 & 1 & 0 & 0 & 0 \\ 0 & -6 & -1 & -5 & -1 & 1 & 0 & 0 \\ 0 & -5 & -1 & -2 & -1 & 0 & 1 & 0 \\ 0 & -10 & -2 & -5 & 0 & 0 & 0 & 1 \\\end{array} \right] \\ \end{align}\]

There are too many "minuses" in rows 2-4. Multiply all three rows by −1, and then "burn out" the third column by subtracting row 3 from the rest:

\[\begin{align} & \left[ \begin{array}{rrrr|rrrr} 1 & 4 & 2 & 3 & 1 & 0 & 0 & 0 \\ 0 & -6 & -1 & -5 & -1 & 1 & 0 & 0 \\ 0 & -5 & -1 & -2 & -1 & 0 & 1 & 0 \\ 0 & -10 & -2 & -5 & 0 & 0 & 0 & 1 \\\end{array} \right]\begin{matrix} \ \\ \left| \cdot \left(-1 \right) \right. \\ \left| \cdot \left(-1 \right) \right. \\ \left| \cdot \left(-1 \right) \right. \\\end{matrix}\to \\ & \to \left[ \begin{array}{rrrr|rrrr} 1 & 4 & 2 & 3 & 1 & 0 & 0 & 0 \\ 0 & 6 & 1 & 5 & 1 & -1 & 0 & 0 \\ 0 & 5 & 1 & 2 & 1 & 0 & -1 & 0 \\ 0 & 10 & 2 & 5 & 0 & 0 & 0 & -1 \\\end{array} \right]\begin{matrix} -2 \\ -1 \\ \updownarrow \\ -2 \\\end{matrix}\to \\ & \to \left[ \begin{array}{rrrr|rrrr} 1 & -6 & 0 & -1 & -1 & 0 & 2 & 0 \\ 0 & 1 & 0 & 3 & 0 & -1 & 1 & 0 \\ 0 & 5 & 1 & 2 & 1 & 0 & -1 & 0 \\ 0 & 0 & 0 & 1 & -2 & 0 & 2 & -1 \\\end{array} \right] \\ \end{align}\]

Now it's time to "fry" the last column of the original matrix: subtract row 4 from the rest:

\[\begin{align} & \left[ \begin{array}{rrrr|rrrr} 1 & -6 & 0 & -1 & -1 & 0 & 2 & 0 \\ 0 & 1 & 0 & 3 & 0 & -1 & 1 & 0 \\ 0 & 5 & 1 & 2 & 1 & 0 & -1 & 0 \\ 0 & 0 & 0 & 1 & -2 & 0 & 2 & -1 \\\end{array} \right]\begin{matrix} +1 \\ -3 \\ -2 \\ \uparrow \\\end{matrix}\to \\ & \to \left[ \begin{array}{rrrr|rrrr} 1 & -6 & 0 & 0 & -3 & 0 & 4 & -1 \\ 0 & 1 & 0 & 0 & 6 & -1 & -5 & 3 \\ 0 & 5 & 1 & 0 & 5 & 0 & -5 & 2 \\ 0 & 0 & 0 & 1 & -2 & 0 & 2 & -1 \\\end{array} \right] \\ \end{align}\]

The final flourish: "burn out" the second column by subtracting row 2 from rows 1 and 3:

\[\begin{align} & \left[ \begin{array}{rrrr|rrrr} 1 & -6 & 0 & 0 & -3 & 0 & 4 & -1 \\ 0 & 1 & 0 & 0 & 6 & -1 & -5 & 3 \\ 0 & 5 & 1 & 0 & 5 & 0 & -5 & 2 \\ 0 & 0 & 0 & 1 & -2 & 0 & 2 & -1 \\\end{array} \right]\begin{matrix} 6 \\ \updownarrow \\ -5 \\ \ \\\end{matrix}\to \\ & \to \left[ \begin{array}{rrrr|rrrr} 1 & 0 & 0 & 0 & 33 & -6 & -26 & 17 \\ 0 & 1 & 0 & 0 & 6 & -1 & -5 & 3 \\ 0 & 0 & 1 & 0 & -25 & 5 & 20 & -13 \\ 0 & 0 & 0 & 1 & -2 & 0 & 2 & -1 \\\end{array} \right] \\ \end{align}\]

And again, the identity matrix on the left, so the inverse on the right. :)

Answer. $\left[ \begin{matrix} 33 & -6 & -26 & 17 \\ 6 & -1 & -5 & 3 \\ -25 & 5 & 20 & -13 \\ -2 & 0 & 2 & -1 \\\end{matrix} \right]$


Inverse Matrix Properties

  • $\det A^{-1}=\dfrac{1}{\det A}$, where $\det$ denotes the determinant.
  • $(AB)^{-1}=B^{-1}A^{-1}$ for two square invertible matrices $A$ and $B$.
  • $(A^{T})^{-1}=(A^{-1})^{T}$, where $(...)^{T}$ denotes the transposed matrix.
  • $(kA)^{-1}=k^{-1}A^{-1}$ for any coefficient $k\ne 0$.
  • $E^{-1}=E$.
  • If it is necessary to solve a system of linear equations $Ax=b$ ($b$ is a non-zero vector), where $x$ is the desired vector, and if $A^{-1}$ exists, then $x=A^{-1}b$. Otherwise, either the dimension of the solution space is greater than zero, or there are no solutions at all.
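The second property is the one most often gotten wrong: the order of the factors flips. Here is a small numeric check on a pair of 2×2 matrices, using the textbook formula $\begin{pmatrix}a&b\\c&d\end{pmatrix}^{-1}=\frac{1}{ad-bc}\begin{pmatrix}d&-b\\-c&a\end{pmatrix}$; the particular matrices are arbitrary examples of mine:

```python
from fractions import Fraction

def inv2(m):
    """Inverse of a 2x2 matrix via the ad - bc formula."""
    (a, b), (c, d) = m
    det = Fraction(a * d - b * c)
    return [[d / det, -b / det], [-c / det, a / det]]

def mul2(x, y):
    """Product of two 2x2 matrices."""
    return [[sum(x[i][k] * y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [3, 5]]   # det = -1, invertible
B = [[2, 1], [1, 1]]   # det = 1, invertible

# (AB)^-1 must equal B^-1 * A^-1 (note the reversed order) ...
assert inv2(mul2(A, B)) == mul2(inv2(B), inv2(A))
# ... and in general it does NOT equal A^-1 * B^-1:
assert inv2(mul2(A, B)) != mul2(inv2(A), inv2(B))
```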


Ways to find the inverse matrix

If the matrix is invertible, then to find the inverse matrix you can use one of the following methods:

Exact (direct) methods

Jordan-Gauss method

Let's take two matrices: $A$ itself and the identity matrix $E$. Bring the matrix $A$ to the identity matrix by the Gauss-Jordan method, applying transformations by rows (transformations by columns may also be applied). After applying each operation to the first matrix, apply the same operation to the second. When the reduction of the first matrix to the identity form is complete, the second matrix will equal $A^{-1}$.

When using the Gauss method, the first matrix is multiplied from the left by one of the elementary matrices $\Lambda_i$ (a transvection or a diagonal matrix with ones on the main diagonal, except for one position):

\[\Lambda_1 \cdot \dots \cdot \Lambda_n \cdot A = \Lambda A = E \quad \Rightarrow \quad \Lambda = A^{-1},\]

\[\Lambda_m = \begin{bmatrix} 1 & \dots & 0 & -a_{1m}/a_{mm} & 0 & \dots & 0 \\ & & & \dots & & & \\ 0 & \dots & 1 & -a_{m-1,m}/a_{mm} & 0 & \dots & 0 \\ 0 & \dots & 0 & 1/a_{mm} & 0 & \dots & 0 \\ 0 & \dots & 0 & -a_{m+1,m}/a_{mm} & 1 & \dots & 0 \\ & & & \dots & & & \\ 0 & \dots & 0 & -a_{nm}/a_{mm} & 0 & \dots & 1 \end{bmatrix}.\]

The second matrix after applying all the operations will equal $\Lambda$, that is, it will be the desired inverse. The complexity of the algorithm is $O(n^3)$.

Using the matrix of cofactors (adjugate matrix)

The inverse of the matrix $A$ can be represented in the form

\[A^{-1} = \frac{\operatorname{adj}(A)}{\det(A)},\]

where $\operatorname{adj}(A)$ is the adjugate matrix (the matrix composed of the cofactors of the corresponding elements of the transposed matrix).

The complexity of the algorithm depends on the complexity $O_{\det}$ of the algorithm for calculating the determinant and is equal to $O(n^2) \cdot O_{\det}$.
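As a sketch of this method (pure Python with exact integer arithmetic; the function names are mine), here is the cofactor formula applied to the 3×3 matrix from the first worked example. Note that the naive Laplace expansion used for the determinant is $O(n!)$, so this is for illustration only, not for large matrices:

```python
from fractions import Fraction

def minor(m, i, j):
    """Matrix m with row i and column j removed."""
    return [row[:j] + row[j + 1:] for k, row in enumerate(m) if k != i]

def det(m):
    """Determinant by cofactor (Laplace) expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det(minor(m, 0, j)) for j in range(len(m)))

def inverse_by_adjugate(m):
    n = len(m)
    d = det(m)
    # adj(A)[i][j] is the cofactor of element (j, i) - note the transpose.
    adj = [[(-1) ** (i + j) * det(minor(m, j, i)) for j in range(n)]
           for i in range(n)]
    return [[Fraction(x, d) for x in row] for row in adj]

A = [[1, 5, 1], [3, 2, 1], [6, -2, 1]]
assert det(A) == 1
assert inverse_by_adjugate(A) == [[4, -7, 3], [3, -5, 2], [-18, 32, -13]]
```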

Using LU/LUP decomposition

The matrix equation $AX = I_n$ for the inverse matrix $X$ can be viewed as a collection of $n$ systems of the form $Ax = b$. Denote the $i$-th column of the matrix $X$ by $X_i$; then $AX_i = e_i$, $i = 1, \ldots, n$, because the $i$-th column of the matrix $I_n$ is the unit vector $e_i$. In other words, finding the inverse matrix reduces to solving $n$ equations with the same matrix and different right-hand sides. After computing the LUP decomposition (time $O(n^3)$), solving each of the $n$ equations takes $O(n^2)$ time, so this part of the work also takes $O(n^3)$ time.

If the matrix $A$ is nonsingular, then we can compute the LUP decomposition for it: $PA = LU$. Let $PA = B$ and $B^{-1} = D$. Then, from the properties of the inverse matrix, we can write $D = U^{-1}L^{-1}$. If we multiply this equality by $U$ and $L$, we obtain two equalities of the form $UD = L^{-1}$ and $DL = U^{-1}$. The first of these equalities is a system of $n^2$ linear equations, for $\frac{n(n+1)}{2}$ of which the right-hand sides are known (from the properties of triangular matrices). The second is also a system of $n^2$ linear equations, for $\frac{n(n-1)}{2}$ of which the right-hand sides are known (also from the properties of triangular matrices). Together they form a system of $n^2$ equalities. Using these equalities, we can recursively determine all $n^2$ elements of the matrix $D$. Then from the equality $(PA)^{-1} = A^{-1}P^{-1} = B^{-1} = D$ we obtain the equality $A^{-1} = DP$.

When the plain LU decomposition is used, no permutation of the columns of the matrix $D$ is required, but the solution may diverge even if the matrix $A$ is nonsingular.

The complexity of the algorithm is O(n³).
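The "n systems with the same matrix" idea is easy to demonstrate with NumPy (assuming it is available; `np.linalg.solve` performs an LU factorization with partial pivoting internally, via LAPACK): solve $AX_i = e_i$ for each unit vector $e_i$ and assemble the solutions into columns:

```python
import numpy as np

A = np.array([[1.0, 5.0, 1.0],
              [3.0, 2.0, 1.0],
              [6.0, -2.0, 1.0]])  # the matrix from the first worked example
n = A.shape[0]

# Solve A x = e_i for each column e_i of the identity matrix;
# the solutions are exactly the columns of A^{-1}.
columns = [np.linalg.solve(A, np.eye(n)[:, i]) for i in range(n)]
A_inv = np.column_stack(columns)

assert np.allclose(A_inv, [[4, -7, 3], [3, -5, 2], [-18, 32, -13]])
```

In production code the factorization would be computed once and reused for all $n$ right-hand sides; that reuse is precisely what gives the $O(n^3) + n \cdot O(n^2)$ operation count above.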

Iterative Methods

Schulz method

\[\begin{cases} \Psi_k = E - AU_k, \\ U_{k+1} = U_k \sum\limits_{i=0}^{n} \Psi_k^i \end{cases}\]
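For $n = 1$ this recurrence becomes the classical Newton-Schulz iteration $U_{k+1} = U_k(2E - AU_k)$. A minimal NumPy sketch (assuming NumPy is available; the starting guess $U_0 = A^T/\|AA^T\|$ is one of the recommendations discussed below, and the iteration count is chosen generously for this small example):

```python
import numpy as np

def schulz_inverse(A, iterations=50):
    """Newton-Schulz iteration: the n = 1 case of the scheme above."""
    E = np.eye(A.shape[0])
    # Initial approximation U_0 = A^T / ||A A^T|| ensures rho(Psi_0) < 1.
    U = A.T / np.linalg.norm(A @ A.T)
    for _ in range(iterations):
        U = U @ (2 * E - A @ U)   # U_{k+1} = U_k (2E - A U_k)
    return U

A = np.array([[1.0, 5.0, 1.0],
              [3.0, 2.0, 1.0],
              [6.0, -2.0, 1.0]])  # the matrix from the first worked example
U = schulz_inverse(A)
assert np.allclose(U, [[4, -7, 3], [3, -5, 2], [-18, 32, -13]])
```

The convergence is quadratic once $\|\Psi_k\|$ becomes small, but, as the discussion of the initial approximation below makes clear, the first iterations may improve the result very slowly when $\rho(\Psi_0)$ is close to 1.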


Choice of Initial Approximation

The problem of choosing the initial approximation in the iterative matrix-inversion processes considered here does not allow us to treat them as independent universal methods competing with direct inversion methods based, for example, on the LU decomposition of matrices. There are some recommendations for choosing $U_0$ that ensure the fulfillment of the condition $\rho(\Psi_0) < 1$ (the spectral radius of the matrix is less than unity), which is necessary and sufficient for the convergence of the process. However, in this case, first, one is required to know from above an estimate for the spectrum of the inverted matrix $A$ or of the matrix $AA^T$; namely, if $A$ is a symmetric positive definite matrix and $\rho(A) \le \beta$, then one can take $U_0 = \alpha E$, where $\alpha \in \left(0, \frac{2}{\beta}\right)$; if $A$ is an arbitrary nonsingular matrix and $\rho(AA^T) \le \beta$, then one can take $U_0 = \alpha A^T$, where also $\alpha \in \left(0, \frac{2}{\beta}\right)$. Of course, the situation can be simplified: using the fact that $\rho(AA^T) \le \|AA^T\|$, one can put $U_0 = \frac{A^T}{\|AA^T\|}$. Secondly, with such a choice of the initial matrix there is no guarantee that $\|\Psi_0\|$ will be small (it may even happen that $\|\Psi_0\| > 1$), and a high order of convergence rate will not show up immediately.
