Readers of this article will learn about the efficient Schwarz-Rutishauser algorithm for the QR factorization of large matrices, as well as eigendecomposition based on the QR algorithm. The accompanying open-source QR decomposition project, developed in Anaconda Python 3.6, implements the modified Gram-Schmidt, Householder Reflections, and Schwarz-Rutishauser algorithms. The project's demo application performs the QR decomposition of real or complex matrices using each of these methods, surveys the Schwarz-Rutishauser algorithm's performance compared to the other algorithms, and outputs a performance summary report.
Introduction
QR decomposition is one of the fundamental and, arguably, most interesting linear algebra operations, with well-known applications in many fields of science and engineering. Research on QR decomposition dates back to the beginning of the 20th century. An entire class of QR decomposition algorithms is effectively used in mathematics to solve the linear least-squares (LS) problem, to solve systems of linear equations, and to reduce the dimensionality of various matrices, and it is an essential step of eigenvalue computation and singular value decomposition (SVD) methods [1], etc.
Over the past few decades, the rapidly growing computer science and IT fields have driven researchers' interest in efficient methods for the QR factorization of large matrices, since factorization is an essential step and sub-process in a vast number of other algorithms formulated to solve many real-world problems on modern computers:
- Transformation of graphical images and digital photographs, including rotation, stretching, distortion, and 3D perspective, performed while solving the SVD, in which the QR decomposition is an essential step. Shape-transformation filters, based on an implementation of the SVD algorithm, are often used in a variety of graphics apps, such as Adobe Photoshop, Affinity Photo, and GIMP.
- Determining the unique (i.e., "singular") characteristics of electrical signals via several interpolation-based methods, widely used in digital signal processing to recognize noise and other artifacts and to recover data corrupted by a distorted signal. All these methods are based on the vector orthogonality property. To find the impactful outlier features of a signal, the set of each parameter's values is decomposed into multiple local subsets. This decomposition is computed as an orthogonal projection of each parameter's vector onto the corresponding axes in an ambient Hilbert space ℍ [2]. Many existing outlier and artifact analysis methods rely on the vector orthogonal projection, which can easily be computed based on the QR decomposition approach.
- Analysis of vibration levels in industrial machines and mechanisms with a number of principal component analysis (PCA) methods, used to represent a multitude of parameters in terms of factors that might be explanatory when detecting vibration and acoustic environmental interference [15].
- Acquiring knowledge and insights from large statistical datasets to find trends that reveal what has the most critical impact on various economic indicators, such as sales volumes, annual stock-market closing prices, and other estimates conducted by trading companies to run their business effectively, aiming to increase popularity and reduce the potential cost of the goods, digital content, and services merchandised to billions of online customers worldwide [1,3,16]. In this case, QR decomposition methods are used as a data-normalization technique, eliminating redundant and uninformative data from datasets and thus providing a better understanding of the statistical knowledge and trends revealed.
Moreover, the QR factorization process yields an orthogonal matrix of factors, a special kind of functional matrix in which the most common or similar factor values are aggregated into multiple areas. Such matrix factorization benefits effective data clustering when solving various classification problems. It's used as an alternative to many AI-based supervised learning algorithms that perform clustering based on large datasets and continuous learning.
Today, there is a series of linear algebra methods widely used for the QR factorization of real or complex matrices, including the modified Gram-Schmidt process, Householder reflections, and Givens rotations.
Each of these methods has its benefits and disadvantages. The modified Gram-Schmidt and Householder reflections methods are computationally "heavy" when applied to complex matrix factorization, whilst the Givens rotations method incurs precision loss during its final iterations and thus becomes inefficient for large matrices.
In this article, I will focus the discussion on an optimization of the modified Gram-Schmidt orthogonalization method, proposed by Schwarz and Rutishauser in the mid-60s. To demonstrate its efficiency, I've developed the code and a demo app in Python 3.6 implementing the modified Gram-Schmidt, Householder reflections, and Schwarz-Rutishauser algorithms [1,4], surveying the Schwarz-Rutishauser algorithm's performance speed-up compared to the other two methods being discussed.
Background
QR Decomposition Fundamentals
First, let's take a short glance at the QR decomposition procedure. The entire QR factorization process relies on the decomposition of any real or complex matrix A into the product of an orthogonal matrix Q and an upper triangular matrix R, such that A = QR.
Geometrically, the square matrix Q is a column-oriented multivector, the vectors of which form an orthogonal basis of A. In turn, the second matrix R, of an arbitrary shape, is an upper triangular rotation matrix that, multiplied by the column vectors in Q, rotates each of these vectors to a 90° angle in the coordinates of the space ℝⁿ. This normally causes each of Q's vectors to become orthogonal to a corresponding vector in A, hence its orthonormal basis.
Herein, our goal is to compute such matrices Q and R that their product is approximately equal to the given matrix A, i.e., Q·R ≈ A.
Finding the orthogonal matrix Q is the most essential step. It's possible to obtain the matrix Q via several known methods, such as:
- Modified Gram-Schmidt Process
- Householder Reflections
- Givens Rotations
Normally, the computation of the orthogonal Q yields the factorization of the matrix A [1,4].
The orthogonalized matrix Q leads to the factorization of A [4], and both full and partial factorizations exist. QR decomposition can be applied not only to square but also to rectangular matrices, even if a matrix does not have full rank. Mostly, the full factorization of A gives a matrix Q with the same number of columns as A [5]. The factorization of A is a useful feature, applied whenever the values in A must be represented in terms of their factors [1,3].
While obtaining the matrix Q, each of its column vectors becomes orthogonal to the span of the other, previously computed vectors. These column vectors form an orthonormal basis of A, which is the same as mapping each vector of the Euclidean space ℝⁿ onto a different coordinate system of that space. The term "orthonormal" means that all orthogonal column vectors in Q are normalized, i.e., divided by their Euclidean norm [1,2,6].
Since we've already obtained the orthogonal Q, the second matrix R can easily be computed as the product of Q's transpose and the given matrix A, per equation (1.1): R = QᵀA.
The resultant R is a matrix all of whose sub-diagonal elements are 0s. Specifically, the computation of R amounts to the Gaussian elimination of A, reducing it to row-echelon form [1,4,5,6]. In this case, the scalar elements rᵢₖ and rₖₖ are multiplied by the corresponding unit basis vector eₖ of the sub-space ℝ in an ambient Hilbert space ℍ [2]. Mostly, it yields the elimination of those elements in R that follow its diagonal element.
By definition, an ambient Hilbert space is a fundamental kind of Euclidean space whose vectors have orthogonality and collinearity properties. Also, it's possible to apply a variety of linear algebra operations to these vectors, including the vector-by-rotation-matrix multiplication that rotates a vector, pivoting it by a specific angle and thus mapping it onto a different coordinate system of another Euclidean space.
The upper triangular matrix R, mentioned above, can effectively be used for solving the linear least-squares (LS) problem, as well as for finding a single non-trivial solution of the linear equation system Ax = b [1], which is fundamental for computing the eigenvalues and eigenvectors of A, i.e., A's eigendecomposition.
The matrix R must have the upper triangular form: in each column, the elements following the corresponding diagonal element must be eliminated, i.e., equal to 0.
Finally, the matrix Q must satisfy the orthogonality condition (1.2), shown below: Q is such an orthogonal matrix that the product of Q's transpose and Q always equals, or approximates, an identity matrix I with 1's on its diagonal, QᵀQ ≈ I (with the conjugate transpose in the complex case).
If the condition above is met, then the matrix Q is orthogonal and has been properly computed. This is used as a verification technique to determine whether the orthogonalization of A was done correctly. At the same time, condition (1.2) is also the stop criterion of the QR decomposition algorithm.
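For example, a minimal NumPy sketch of this verification (my own illustration, not part of the project's sources; Q and R are assumed to have been produced by any of the methods discussed below) might look as follows:
import numpy as np

def verify_qr(A, Q, R, tol=1e-8):
    """Check condition (1.2) and the reconstruction Q.R ~ A."""
    n = np.shape(Q)[1]
    # Q's (conjugate) transpose times Q must approximate the identity matrix I.
    orthogonal = np.allclose(np.conj(Q).T.dot(Q), np.eye(n), atol=tol)
    # The product of the factors must reconstruct the original matrix A.
    reconstructs = np.allclose(Q.dot(R), A, atol=tol)
    return orthogonal and reconstructs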
Additionally, from the equations (1.1) and (1.2), the matrices E and V of eigenvalues and eigenvectors can also be computed. Finding the eigenvalues E = diag{eᵢ ∈ E} is typically done by obtaining the product of the matrices R and Q: E = R·Q.
Alternatively, the eigenvalues can also be computed by obtaining the product of Q's transpose, A, and Q: E = QᵀAQ.
Using the latter equation provides better convergence of the eigendecomposition QR algorithm when the decomposition is performed with the iteration-based method.
Basically, to compute the eigenvalues E and the corresponding orthogonal eigenvectors V, the simple iteration method is applied. This method is thoroughly discussed in the succeeding paragraph, Eigenvalues And Eigenvectors Computation (QR-Algorithm), below.
The QR decomposition of a trivial 3-by-3 real matrix A can be depicted as follows.
Initially, we have a square matrix A (3x3), the values of which are integers:
The matrix A has three column vectors a₁(7; -5; 4) (red), a₂(3; 8; 7) (blue), and a₃(1; 3; -6) (green), projected in the coordinates of the 3-dimensional Euclidean space ℝ³ and plotted in the diagram below:
As the result of performing the QR decomposition of the multivector A, discussed above, we have two matrices, Q and R. The first matrix Q consists of three orthogonal vectors q₁(-0.73; 0.52; -0.42), q₂(-0.20; -0.77; -0.59), q₃(-0.64; -0.35; 0.68), each having a corresponding vector in A, as shown below:
Q-vectors:
In this case, all orthogonal vectors in Q form an orthonormal basis, which is the factorization of the multivector A. As we've already discussed, any proper factorization of A must satisfy the condition that each orthogonal vector qᵢ must be orthogonal (i.e., perpendicular) to the previously computed vectors. The angle between these vectors must always be 90°, and their scalar dot product approximately equal to 0, such that qᵢ₋₁ · qᵢ ≈ 0. If the product of any previous vector and the current vector qᵢ ∈ Q, i = 1..n, is equal to 0, it means that this is a proper factorization of the matrix A. The length |q| of each normalized vector qᵢ ∈ Q must be equal to 1.
Here's the proof:
In this case, all vectors qᵢ ∈ Q, i = 1..n, shown in the diagram above, are outlined with the same color as the vectors aᵢ ∈ A, i = 1..n. Initially, the first vector q₁(-0.73; 0.52; -0.42) is the normalized corresponding vector a₁(7; -5; 4):
|a₁| = √(7² + (-5)² + 4²) = √(49 + 25 + 16) = √90 ≈ 9.4868
q₁ = (-(7 ÷ 9.4868); -(-5 ÷ 9.4868); -(4 ÷ 9.4868)) = (-0.73; 0.52; -0.42)
That is, we compute the vector a₁'s Euclidean norm and then divide each of a₁'s components by its norm |a₁| (the sign is flipped because the algorithms below return the negative matrices -Q and -R).
The next vector q₂(-0.20; -0.77; -0.59) is likewise orthogonal to q₁(-0.73; 0.52; -0.42), since their scalar product q₁ · q₂ is equal to 0:
q₁ · q₂ = (-0.7379)⋅(-0.209) + 0.527⋅(-0.7724) + (-0.4216)⋅(-0.5998) = 0.1542 - 0.4071 + 0.2529 ≈ 0
The same computation can be applied to the vectors q₂(-0.20; -0.77; -0.59) and q₃(-0.64; -0.35; 0.68):
q₂ · q₃ = (-0.6418)⋅(-0.209) + (-0.3544)⋅(-0.7724) + 0.6801⋅(-0.5998) = 0.1341 + 0.2737 - 0.4079 ≈ 0
Finally, the vectors q₁ and q₃ are orthogonal as well, since their scalar product is approximately equal to 0:
q₁ · q₃ = (-0.7379)⋅(-0.6418) + 0.527⋅(-0.3544) + (-0.4216)⋅0.6801 = 0.4736 - 0.1868 - 0.2867 = 0.0001 ≈ 0
As you can see from the calculations above, all vectors q₁, q₂, q₃ are orthogonal, and thus the factorization of the multivector A is correct. Alternatively, the same orthogonality check is done by obtaining the product of the matrix Q's transpose and Q, which must always be an identity matrix I, with 1's on its diagonal.
Unlike the orthogonal Q, the vectors of the other matrix R are not orthogonal. These vectors are obtained column by column as the result of the scalar multiplication of each column vector qₖ ∈ Q by a corresponding vector qᵢ from the span of previously obtained vectors in Q(k). The product of these vectors yields the value of the diagonal element rᵢₖ = qᵢ · qₖ, which is then multiplied by the corresponding unit basis vectors e₁ = (1; 0; 0), e₂ = (0; 1; 0), and e₃ = (0; 0; 1) of each sub-space in an ambient Hilbert space ℍ, respectively, to eliminate all elements following the corresponding diagonal element rᵢₖ in the i-th row and k-th column of R. Initially, the diagonal element r₁₁ ∈ R is simply obtained as the norm |q₁| of the first column vector q₁ ∈ Q, so that r₁₁ = |q₁|. This vector operation is applied while using the Schwarz-Rutishauser method and is quite similar to the multiplication of Q's transpose, or its conjugate, by the multivector A in the modified Gram-Schmidt orthogonalization method. Finally, we have the resultant vectors r₁(-9.48; -0.94; 3.37), r₂(0; -11; 1.07), and r₃(0; 0; -5.78) in the rows of the upper triangular matrix R:
R-vectors:
The scalar multiplication of vectors, discussed above, yields the Gaussian elimination of the multivector A to row-echelon form. The resultant matrix R provides a single non-trivial solution of the equation system on A, avoiding the complex matrix inversion operation.
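To double-check the worked example, here is a minimal NumPy sketch (my own illustration, not part of the project's sources) that decomposes the same 3-by-3 matrix and verifies both the reconstruction and the orthogonality condition; note that np.linalg.qr chooses its own sign convention, so its Q and R may differ from the hand-computed vectors above by a factor of -1:
import numpy as np

# The example matrix A with the column vectors a1(7; -5; 4), a2(3; 8; 7), a3(1; 3; -6).
A = np.array([[ 7.0, 3.0,  1.0],
              [-5.0, 8.0,  3.0],
              [ 4.0, 7.0, -6.0]])

Q, R = np.linalg.qr(A)

print(Q)                                   # columns match q1, q2, q3 up to sign
print(R)                                   # rows match r1, r2, r3 up to sign
print(np.allclose(Q.dot(R), A))            # True: Q.R reconstructs A
print(np.allclose(Q.T.dot(Q), np.eye(3)))  # True: condition (1.2) holds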
Eigendecomposition of A Matrix
Eigendecomposition of square matrices is one of the most useful applications of the QR decomposition. As the result of performing the eigendecomposition of the matrix A, we obtain a resultant vector of eigenvalues and a matrix of eigenvectors, E and V, respectively. The QR algorithm is the very first approach for solving the eigendecomposition of A [7].
Geometrically, the eigenvalues of the matrix A can easily be obtained as the product of the matrices R and Q, as mentioned above. Alternatively, A's eigenvalues can be computed as the product of Q's transpose, A, and Q [8].
In most cases, the multiplication of Q's transpose by A and Q yields a resultant matrix E with the eigenvalues eᵢ ∈ diag{E} on its diagonal. As mentioned above, obtaining this product is similar to multiplying the rotation matrix R by the orthogonal Q, which rotates each of Q's vectors counterclockwise, so that all vectors in Q are "perpendicular", and thus "orthogonal", to the column vectors of the matrix A being factorized. While rotated by R, the length (i.e., "magnitude") of each column vector in Q is stretched. Finally, the extent by which a vector qᵢ ∈ Q has been stretched is the corresponding eigenvalue eᵢ of the vector aᵢ ∈ A [10].
The matrix of eigenvectors V can be computed as the dot product of the matrices V and Q, producing the new matrix V at each iteration. Initially, the matrix V is simply an identity matrix with 1's on its diagonal [10,11,12].
According to the QR algorithm, the resultant matrices E and V are computed iteratively, approximating the resultant eigenvalue and eigenvector matrices. Using the QR algorithm provides the ability to compute the matrices E and V within ~O(30 × n) iterations at most, where n is the number of columns in A [8,9].
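The following minimal sketch (my own illustration, using np.linalg.qr as a stand-in for any of the QR routines implemented below) shows the iteration just described, with V accumulating the product of the Q factors; it approximates the real eigenvalues on E's diagonal, while the complex case is handled separately in the next section:
import numpy as np

def qr_eig_iteration_sketch(A, n_iters=30):
    """E converges toward a matrix holding the eigenvalue estimates on its
    diagonal; V accumulates the product of the Q factors (the eigenvector
    estimate described above, exact for symmetric matrices)."""
    E = np.array(A, dtype=float)
    V = np.eye(E.shape[0])
    for _ in range(n_iters * E.shape[1]):
        Q, R = np.linalg.qr(E)
        E = np.transpose(Q).dot(E).dot(Q)  # same update as E = Q^T.E.Q above
        V = V.dot(Q)                       # eigenvector accumulation V = V.Q
    return np.diag(E), V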
Eigenvalues And Eigenvectors Computation (QR-Algorithm)
The eigenvalues E and eigenvectors V of the matrix A can easily be computed based on the Schur matrix decomposition algorithm, discussed below. The entire QR algorithm is trivial and can be formulated as follows [8,9]:
1. Compute the real eigenvalues of A by solving the Schur decomposition, using the iteration-based algorithm;
2. Compute the complex eigenvalues of A by solving the A's diagonal sub-matrix polynomials (i.e., quadratic equations);
3. Compute the eigenvectors of A by reducing A to row-echelon form with the Jordan-Gaussian method and solving the linear equation system of A;
A fragment of code implementing the QR algorithm discussed is listed below:
def qr_eig_solver(A, n_iters, qr):
E = qr_simple(A, n_iters, qr)
E = qr_eig_vals_complex_rt(E)
E = np.flip(np.sort(E))
V = qr_eig_vecs_complex_rt(A, E)
return E, V
Step #1: Compute 'Real' Eigenvalues eᵢ ∈ ℝ:
First, we must compute the matrix A's 'real' eigenvalues over the field ℝ. This is typically done by using the iteration-based method that relies on solving the QR decomposition within d iterations. Initially, it assigns the matrix A to the resultant matrix E (E = A). To compute the eigenvalues of A as the diagonal elements of E, it performs d iterations. At each i-th iteration, it computes the QR decomposition of E, obtaining the resultant matrices Q and R. Then, it computes the new matrix E as the product of Q's transpose, E, and Q (equivalently, the product of R and Q), iteratively approximating the values of the diagonal elements of E. It proceeds with the next, (i+1)-th iteration, computing the same decomposition of the updated E, until all diagonal elements of E are the eigenvalues of A [8,9].
An entire iteration-based algorithm is quite trivial and can be formulated as follows:
1. Initialize matrix E, equal to A (E=A);
2. Perform each of the d iterations i = 1..d, and do the following:
2.1. Compute the QR-decomposition of matrix E, which is [Q, R]=QR(E);
2.2. Compute the new matrix E by finding the product of matrices Q's-transpose, E, and Q;
3. Return the matrix E of A's 'real' eigenvalues;
The fragment of code illustrating an implementation of the iteration-based QR algorithm is listed, below:
def qr_simple(A, n_iters, qr):
E = np.array(A, dtype=float)
for i in range(np.shape(A)[1] * n_iters):
Q,R = qr(E, float);
E = np.transpose(Q).dot(E).dot(Q)
return E
Step #2: Compute 'Complex' Eigenvalues eᵢ ∈ ℂ:
Next to the 'real' eigenvalues of A, it computes the 'complex' eigenvalues eᵢ ∈ ℂ, if any. To do that, it obtains the complex roots of the polynomial of each 2-by-2 sub-matrix E* on the diagonal of the matrix E from the preceding step. The roots of each sub-matrix E*'s polynomial are a pair of complex eigenvalues. Since each diagonal sub-matrix E* has rank 2, we simply need to solve a quadratic equation, which can be written as [12,14]:
λ² - tr(E*)·λ + det(E*) = 0
where λ denotes an eigenvalue of the 2-by-2 sub-matrix, tr(E*) is the sum of the diagonal elements of E*, and det(E*) is the determinant of E*.
To solve the equation above, we first need to obtain the average of the two sub-matrix E*'s diagonal elements, μ = avg{diag{eᵢ ∈ E*}}, as well as the determinant σ = det(E*), and finally substitute the μ and σ values into the formula below:
λ₁,₂ = μ ± √(μ² - σ)
After solving the equation, the two complex roots λ₁ and λ₂ are assigned to the matrix E's diagonal elements at the i-th and (i+1)-th positions, respectively. This process is applied to each i-th diagonal sub-matrix of E, until all complex eigenvalues eᵢ ∈ E over the field ℂ, on the diagonal of E, have finally been computed.
Finally, both the 'real' eigenvalues, from step 1 and the 'complex' eigenvalues of A, are combined into the resultant vector of A's eigenvalues:
def qr_eig_vals_complex_rt(E):
n = np.shape(E)[1]
if n >= 2:
es = np.zeros(n, dtype=complex)
i = 0
while i < n - 1:
mu = (E[i][i] + E[i+1][i+1]) / 2
det = E[i][i] * E[i+1][i+1] - E[i][i+1] * E[i+1][i]
dt_root = cmath.sqrt(mu**2 - det)
e1 = mu + dt_root; e2 = mu - dt_root
if np.iscomplex(e1):
if e1.real == e2.real:
e2 = np.conj(e2)
es[i] = e1; es[i+1] = e2
if es[i] < es[i+1]:
es[[i]] = es[[i+1]]
i = i + 1
i = i + 1
E = np.diag(E)
es = [E[i] if i in np.argwhere(es == 0)[:,0]
else es[i] for i in range(n)]
return es
Step #3: Compute Eigenvectors of A:
The final step is to compute the eigenvectors of A by solving the linear equation system of A, reducing the matrix A to row-echelon form with Jordan-Gaussian elimination. This is typically done by solving the system of n linear equations for each eigenvalue eₖ ∈ E and extracting the corresponding eigenvector vₖ from the last, n-th column of A. When the eigenvalue eₖ ∈ ℂ is complex, the last column of A is rotated by a 90° angle, counterclockwise, to obtain the k-th complex eigenvector vₖ ∈ V of A [13,14]:
Let A be a 'real' or 'complex' square matrix of shape (n x n), and E the vector of A's n eigenvalues.
For each of A's eigenvalues eₖ ∈ E, k = 1..n, solve the linear equation system using the Jordan-Gaussian method:
1. Subtract the k-th eigenvalue eₖ from each of A's diagonal elements aⱼⱼ ∈ A, such that aⱼⱼ = aⱼⱼ - eₖ;
2. For each row aᵢ ∈ A, i = 1..n, do the elimination to row-echelon form, as follows:
2.1. Determine the i-th basis element αᵢ = aᵢᵢ;
2.2. Check whether the value of αᵢ differs from 0 and 1. If so, divide each element in the i-th row aᵢ ∈ A by the basis element αᵢ: aᵢ = aᵢ / αᵢ; otherwise, proceed with the next step 2.3;
2.3. Check if the value of αᵢ is equal to 0. If so, exchange the i-th and (i+1)-th rows of A; otherwise, go to the next step;
2.4. For each row aⱼ ∈ A, j = 1..n, other than aᵢ (j ≠ i), take its element θ = aⱼ,ᵢ in the i-th column, and re-compute the values of its elements aⱼ,ᵣ, r = 1..n, such that: aⱼ,ᵣ = aⱼ,ᵣ - aᵢ,ᵣ · θ;
2.5. Check if aᵢ is the final row in A, i = n. If not, return to step 2.1; otherwise, proceed with the next step;
2.6. Assign the value 1.0 to the last diagonal element aₙₙ ∈ A, so that aₙₙ = 1.0;
3. Extract the values of all elements in the last, n-th column aₙ ∈ A as the k-th eigenvector vₖ ∈ V of A;
4. Normalize the eigenvector vₖ ∈ V, dividing it by its Euclidean norm |vₖ|;
5. If the k-th eigenvalue eₖ ∈ ℂ is complex, rotate each complex value vₖᵢ ∈ vₖ by a 90° angle, such that:
A fragment of code, illustrating the computation of the eigenvectors of A, is listed below:
def rot_complex(val):
v_rr = val.real * math.cos(math.pi / 2) + val.imag * math.sin(math.pi / 2)
v_ri = val.real * math.sin(math.pi / 2) + val.imag * math.cos(math.pi / 2)
return v_rr + 1.j * v_ri
def qr_eig_vecs_complex_rt(A, E):
(m,n) = np.shape(A)
V = np.identity(n, dtype=complex)
for (e,z) in zip(E, range(m)):
Av = np.array(A, dtype=complex)
for j in range(m):
Av[j][j] = Av[j][j] - e
for i in range(m-1):
alpha = Av[i][i]
if alpha != 0 and alpha != 1:
for j in range(m):
Av[i][j] = Av[i][j] / alpha
Av[[i, i+1]] = Av[[i+1, i]] if alpha == 0 else Av[[i, i+1]]
for k in range(m):
theta = Av[k][i]
if i != k:
for j in range(i,m):
Av[k][j] = Av[k][j] - Av[i][j] * theta
V[:,z] = Av[:,m-1]; V[m-1][z] = 1.0
nV = lin.norm(V[:,z])
V[:,z] = np.array([v / nV for v in V[:,z]], dtype=complex)
V[:,z] = [rot_complex(v) for v in V[:,z]] if e.imag != 0 else V[:,z]
z = z + 1
return V
Singular Value Decomposition (SVD)
Computing the SVD (Singular Value Decomposition) is the next essential application of the QR decomposition and of finding the eigenvalues and eigenvectors of a real or complex matrix A.
The SVD computation is basically applied whenever the goal is to obtain the factorization of a square matrix A in its orthonormal eigenspace, revealing the most common latent or explanatory factors while solving various data classification and clustering problems, as well as finding the singular values of parameters in many scientific and engineering processes, such as the acoustic-level analysis or digital signal processing mentioned above.
Obviously, the SVD decomposition of A into the diagonal matrix Σ of singular values, as well as the left and right singular vector matrices U and V, can be expressed as A = U·Σ·Vᵀ.
According to this, SVD is a decomposition of the matrix A into the product of the matrix U of left singular vectors, the diagonal matrix Σ of singular values, and V's transpose, the matrix of right singular vectors, respectively.
Solving the SVD decomposition is performed in the number of steps formulated below:
Step #1: Obtain the Symmetric Matrix AᵀA of A:
A properly solved SVD decomposition of this kind requires a symmetric matrix, for which we aim to compute the singular values and vectors while performing A's factorization. In this case, the symmetric matrix of A is computed as the product of A's transpose and A, i.e., A' = AᵀA.
As you've already noticed, the symmetric matrix A' obtained by taking the product of A's transpose and A is, in all cases, a square symmetric matrix. The method, as formulated here, is applied to square matrices A; the SVD of rectangular matrices requires additional handling that is not covered by this approach.
Since we've obtained the symmetric matrix of A, let's continue with the succeeding SVD decomposition's step, below.
Step #2: Compute Diagonal Matrix Ξ£ of Singular Values:
The SVD decomposition algorithm solely relies on the matrix QR decomposition, which yields the diagonal eigenvalue matrix E, as well as the matrix V of the corresponding eigenvectors of A. The specific methods for computing A's eigenvalues and eigenvectors have been discussed above. To compute the diagonal singular value matrix Σ, the square root of each eigenvalue eᵢ ∈ E, i = 1..n, is taken and placed onto the matrix Σ's diagonal, after which an inverse matrix of Σ is obtained, as shown below:
Herein, all values in Σ must also be arranged in descending order, starting with the largest eigenvalue, assigned to the first diagonal element s₁₁ = max{eᵢ ∈ E}. The resultant inverse matrix Σ⁻¹ is obtained by dividing 1.0 by each diagonal element sᵢᵢ ∈ Σ.
Finally, all diagonal elements in the inverse matrix Σ⁻¹ become the singular values of AᵀA.
Since we've computed the matrix AᵀA's singular value matrix Σ = Σ⁻¹, we proceed with the left and right singular vector computation, discussed in the paragraphs below.
Step #3: Compute AᵀA's Right Singular Vectors:
The right singular vectors of AᵀA are normally obtained by finding a single non-trivial solution of the linear equation system, given both the diagonal vector of eigenvalues e = diag{eᵢ ∈ E} and an identity matrix I with 1's on its diagonal:
The non-trivial solution of the linear equation system above is basically found by using the well-known Jordan-Gaussian elimination approach, which reduces the matrix AᵀA to row-echelon form, thus yielding an upper triangular matrix.
Then, each of the right singular vectors is normalized by dividing all of its components by the vector's scalar length L, using the following formula:
After that, the right singular vectors vᵢ ∈ V, i = 1..m, are placed along the corresponding rows of the matrix V, and V is transposed, so that Vᵀ is finally the matrix of the right singular vectors of the matrix A.
Step #4: Compute AᵀA's Left Singular Vectors:
Since we've computed the matrices of singular values Σ and right singular vectors Vᵀ, we can now compute A's left singular vectors matrix U, which is normally done by obtaining the product of the matrices A, Vᵀ, and the diagonal Σ, such that:
This computation results in A's left singular vectors matrix U, whose columns are the left singular vectors.
Finally, let's verify whether A's SVD decomposition has been done properly. Similar to the QR decomposition, this is done by obtaining the product of U, Σ, and Vᵀ, which must always be approximately equal to the given matrix A.
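Since the SVD code itself is covered in the article referenced below rather than in this project, here is a minimal NumPy sketch of the steps above for a square real matrix (my own illustration; the function name svd_via_eig and the use of np.linalg.eigh in place of the QR-based eigensolver are assumptions made for brevity):
import numpy as np

def svd_via_eig(A):
    """SVD of a square real matrix A via the eigendecomposition of A^T.A."""
    AtA = A.T.dot(A)                     # Step 1: the symmetric matrix A^T.A
    evals, evecs = np.linalg.eigh(AtA)   # eigenvalues/eigenvectors of A^T.A
    order = np.argsort(evals)[::-1]      # Step 2: sort in descending order
    evals, evecs = evals[order], evecs[:, order]
    sigma = np.sqrt(evals)               # singular values are sqrt(eigenvalues)
    Vt = evecs.T                         # Step 3: right singular vectors
    U = A.dot(evecs) / sigma             # Step 4: U = A.V.diag(1/sigma)
    return U, np.diag(sigma), Vt

A = np.array([[7.0, 3.0, 1.0], [-5.0, 8.0, 3.0], [4.0, 7.0, -6.0]])
U, S, Vt = svd_via_eig(A)
print(np.allclose(U.dot(S).dot(Vt), A))  # True: U.S.V^T reconstructs A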
The process of solving the singular value decomposition (SVD), illustrating the specific step-by-step SVD computations, has been thoroughly discussed in one of my previous articles "Singular Values Decomposition (SVD) In C++11 By An Example", Arthur V. Ratz, CodeProject, November 2018.
Modified Gram-Schmidt Orthogonalization
The Gram-Schmidt process was one of the first successful methods for the orthogonalization of real or complex matrices [1,4,5,6]. It is the easiest method for computing the QR factorization of a given matrix. The other methods, such as the Householder reflections, based on specular reflection, are typically even more complex. That's why let's briefly discuss the modified Gram-Schmidt orthogonalization method first, and then proceed with the explanation of the Schwarz-Rutishauser algorithm.
This method is primarily used for recursively finding the matrix Q of A, based on the orthogonal projection of each vector aₖ ∈ A, k = 1..n, onto the span of already computed vectors qᵢ ∈ Q(k), i = 1..k. Each of the new orthogonal vectors qₖ ∈ Q is computed by subtracting the sum of the previously found orthogonal projections from the corresponding vector aₖ ∈ A. The other, upper triangular matrix R can easily be found as the product of Q's transpose and the matrix A, using equation (1.1) above. According to the modified Gram-Schmidt method, the Gaussian elimination of each row of A into R is mostly done in-place, while finding the corresponding column vector in Q.
The orthogonal projection of a vector a onto a vector q is obtained by using equation (2), below: proj(a, q) = (⟨a, q⟩ / ⟨q, q⟩) · q.
Normally, the modified Gram-Schmidt process yields the computation of the matrix Q, which can be algebraically expressed as:
, or alternatively:
Each new vector qₖ ∈ Q is finally normalized by dividing it by its Euclidean norm |qₖ|.
A fragment of code in Python, listed below, implements the modified Gram-Schmidt orthogonalization process:
def conj_transpose(a):
return np.conj(np.transpose(a))
def proj_ab(a,b):
return a.dot(conj_transpose(b)) / lin.norm(b) * b
def qr_gs(A, type=complex):
A = np.array(A, dtype=type)
(m, n) = np.shape(A)
(m, n) = (m, n) if m > n else (m, m)
Q = np.zeros((m, n), dtype=type)
R = np.zeros((m, n), dtype=type)
for k in range(n):
pr_sum = np.zeros(m, dtype=type)
for j in range(k):
pr_sum += proj_ab(A[:,k], Q[:,j])
Q[:,k] = A[:,k] - pr_sum
Q[:,k] = Q[:,k] / lin.norm(Q[:,k])
if type == complex:
R = conj_transpose(Q).dot(A)
else: R = np.transpose(Q).dot(A)
return -Q, -R
The modified Gram-Schmidt orthogonalization process has some disadvantages, such as numerical instability, as well as a notably high computational complexity, above O(2mn²), when applied to large matrices. Also, it requires a modification to determine a matrix's leading dimension when performing the decomposition of rectangular m-by-n matrices [4].
Schwarz-Rutishauser Algorithm
As you've probably noticed, the Schwarz-Rutishauser algorithm discussed here is entirely based on the modified Gram-Schmidt orthogonalization, whose typically high complexity is caused by the computation of the orthogonal projection of each vector aₖ ∈ A onto the vectors qᵢ ∈ Q(k). Thus, we need a slightly different orthogonalization method, rather than computing multiple sums of sums for each aₖ vector's projection onto the span Q(k).
In this case, the complexity of the orthogonal projection operator alone is approximately O(mn). It negatively impacts the overall performance of the modified Gram-Schmidt process itself [4]. The Schwarz-Rutishauser algorithm is applied as an algorithm-level optimization of the modified Gram-Schmidt process.
Specifically, the Schwarz-Rutishauser algorithm is a modification of the modified Gram-Schmidt matrix orthogonalization method, proposed by H. R. Schwarz, H. Rutishauser, and E. Stiefel in their research work "Numerik symmetrischer Matrizen" (Stuttgart, 1968) [6]. The research was motivated by the goal of reducing the complexity of the existing modified Gram-Schmidt projection-based method, improving its performance and numerical stability [5] over the popular reflection- and rotation-based methods.
The main idea behind the Schwarz-Rutishauser algorithm is that A's orthogonalization complexity can be drastically reduced, compared to either the modified Gram-Schmidt O(2mn²) or the Householder O(2mn² - 0.6n³) methods [5].
To provide a better understanding of the Schwarz-Rutishauser algorithm, let's take a closer look at equations (2) and (3.1), shown above.
While computing the sum of a's projections onto the span Q(k) vectors, we must divide the scalar product of the vectors ⟨a, q⟩ by the squared norm |qₖ|² to normalize their product [2]. However, this is not necessary, since each vector qᵢ ∈ Q(k) has already been normalized at one of the previous k-th algorithm's steps.
Because of that, equation (3.1) can be simplified by eliminating the division of the sum of projections by the squared norm [5], as in the equation below:
Since R is the product of Q's transpose and A, we can easily compute the i-th element of each column vector rₖ as:
Herein, the scalar product of the qᵢ and aₖ vectors is multiplied by the unit basis vector eₖ of the corresponding hyperplane in the ambient Hilbert space ℍ [2].
Then, let's substitute equation (4.2) into (4.1), and we'll finally have:
Since we require that Q is initially equal to A (Q = A), the computation of the vectors qₖ and rₖ can be done recursively, by using equations (4.2) and (4.3) above [5]:
According to vector arithmetic, to get each new k-th column vector qₖ ∈ Q, we subtract the product of each of the previous span vectors qᵢ ∈ Q(k) and rᵢₖ from the current vector qₖ, which is also called the rejection of aₖ. Let's recall that qₖ is a vector that initially equals the original vector aₖ (qₖ = aₖ), and the sum operator can finally be dropped from equation (4.3).
In this case, we also need to compute rₖₖ, the k-th diagonal element of R, equal to the norm |qₖ| (rₖₖ = |qₖ|), and then normalize the new k-th column vector qₖ ∈ Q, dividing it by its norm, equal to the current diagonal element rₖₖ from the preceding step [5].
Finally, we can briefly formulate the Schwarz-Rutishauser algorithm:
1. Initialize the matrix Q equal to A (Q = A), and R as a matrix of 0s;
2. For each k-th column vector qₖ ∈ Q, k = 1..n, do the following:
2.1. For each vector qᵢ of the span Q(k), i = 1..k, that is already known:
- Get the i-th element rᵢₖ of the k-th column vector rₖ ∈ R, equation (5.1);
- Update the k-th column vector qₖ ∈ Q, equation (5.2);
2.2. Proceed with the previous step 2.1 to orthogonalize the vector qₖ and obtain the k-th column vector qₖ;
3. Compute the diagonal element rₖₖ as the qₖ-vector's norm, rₖₖ = |qₖ|, equation (5.3);
4. Normalize the k-th column vector qₖ ∈ Q, dividing it by its norm, equal to rₖₖ from the previous step 3;
5. Proceed with the orthogonalization of A, performed in steps 1-4, to compute A's orthonormal basis Q and the upper triangular matrix R;
6. Once done, return the negative resultant matrices -Q and -R from the procedure;
As you can see from the fragment of code below, the implementation of the Schwarz-Rutishauser algorithm is drastically simplified compared to the modified Gram-Schmidt process. Moreover, it provides the ability to simultaneously compute the vectors of the matrices R and Q, in-place [5].
A fragment of the code, demonstrating an implementation of the Schwarz-Rutishauser algorithm is listed below:
def qr_gs_modsr(A, type=complex):
A = np.array(A, dtype=type)
(m,n) = np.shape(A)
Q = np.array(A, dtype=type)
R = np.zeros((n, n), dtype=type)
for k in range(n):
for i in range(k):
R[i,k] = np.transpose(Q[:,i]).dot(Q[:,k]);
Q[:,k] = Q[:,k] - R[i,k] * Q[:,i];
R[k,k] = lin.norm(Q[:,k]); Q[:,k] = Q[:,k] / R[k,k];
return -Q, -R
Numerical Stability and Complexity
Here are some of the potential issues with the Householder reflections and modified Gram-Schmidt orthogonalization methods that increase their complexity and thus negatively impact their performance:
- The complex conjugate transpose computation.
- The column-by-column sum of orthogonal projections computation.
- The new upper-triangular matrix R computation as the inner product of H and previous R.
Obtaining the Householder matrix is mainly based on the specular reflection of each column vector in A, which rotates each vector by a 90° angle to find an orthonormal basis of A. In the complex matrix case, the complex conjugate transpose computation is required. The complex conjugate transpose operation has a larger complexity compared to the entire modified Gram-Schmidt orthogonalization method. Thus, the complexity of the Householder reflections method is normally O(n) times greater when performing the decomposition of non-real large matrices.
The overall complexity of the Householder reflections method differs mostly by O(-0.6n³) to O(-0.6n⁴) from the modified Gram-Schmidt orthogonalization method's complexity, although it does not always provide a sufficient performance speed-up on large matrices of the real type.
Both the Householder reflections and modified Gram-Schmidt orthogonalization methods have a different complexity in the real and complex matrix cases, proportional to O(2mn²) and O(2mn³), respectively. The complexity of the non-real matrix decomposition is n times greater in the columns dimension, caused by the complex conjugate transpose computation.
Let's take a look at the Schwarz-Rutishauser algorithm's complexity. In most cases, it evaluates to O(mn²) and is reduced at least twofold (2x) for both the real and complex matrix cases [4,5].
The estimated complexity of the Schwarz-Rutishauser and the other QR decomposition algorithms is visualized in the comparison chart below:
As you can see, the overall complexity of the Schwarz-Rutishauser algorithm (navy) has been reduced by almost 61% compared to the Householder reflections (red), and by 50% compared to the modified Gram-Schmidt orthogonalization process (green).
The execution time of the code implementing the different QR decomposition algorithms discussed is shown in the chart below:
Matrix A (848 x 931):
In this case, the execution time of the code implementing the various QR algorithms was empirically assessed by performing the QR decomposition of two randomly generated matrices A of an arbitrary shape (848 x 931), in the real and complex cases. In the real case, the Householder reflections algorithm provides the best execution wall-time, and thus a sufficient performance speed-up compared to the orthogonalization-based methods.
Despite this, in the complex matrix case, the Schwarz-Rutishauser algorithm provides the best execution time, which is 2.33x faster than the Householder reflections and 1.27x faster than the modified Gram-Schmidt orthogonalization.
Numerical stability is another factor having an impact on QR decomposition efficiency. In many cases, approximating the highest-precision floating-point values incurs numerical instability caused by persistent rounding error and arithmetic overflow [1]. Mostly, it largely depends on the numerical representation of the matrix values (e.g., real or complex), as well as on the matrix shape and the quantity of its values. Arithmetic overflow is mostly incurred as the result of real-value multiplication, exponentially increasing the number of digits following the decimal point of the resultant floating-point value. In turn, the division of two real values causes the floating-point value to be simplified, i.e., to have a decreased number of decimal digits. This normally leads to a persistent precision-loss issue.
According to best practices, to avoid significant precision loss while performing the QR decomposition, it's recommended to represent the elements as rational or multi-precision numbers, based on one of the vast number of existing multi-precision libraries (such as mpmath or GMPY) for Python, Node.js, and other programming languages supporting high-precision floating-point arithmetic functionality.
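For illustration, here is a minimal sketch (my own, not part of the project's sources) of how a single Schwarz-Rutishauser projection/rejection step could be carried out with 50-digit arithmetic using mpmath; the sample vectors q_i and q_k are hypothetical:
from mpmath import mp, mpf, fsum, sqrt

mp.dps = 50  # work with 50 significant decimal digits

# A previously normalized column q_i and the column q_k currently being reduced.
q_i = [mpf('0.3333333333333333'), mpf('0.6666666666666666'), mpf('0.6666666666666667')]
q_k = [mpf('1.0'), mpf('2.5'), mpf('-0.5')]

# r_ik = <q_i, q_k>, accumulated with mpmath's accurate summation.
r_ik = fsum(a * b for a, b in zip(q_i, q_k))

# Rejection step: q_k = q_k - r_ik * q_i.
q_k = [b - r_ik * a for a, b in zip(q_i, q_k)]

# Diagonal element r_kk = |q_k|.
r_kk = sqrt(fsum(c * c for c in q_k))
print(r_ik, r_kk)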
Using the Code
As I've already discussed, the project includes code written in Anaconda Python 3.8 with the latest NumPy library, implementing all three QR decomposition methods introduced in this article. First, I've developed the code implementing the Householder reflections algorithm. Compared to the modified Gram-Schmidt and Schwarz-Rutishauser algorithms, the implementation of the Householder reflections is much "heavier" than both of these methods. It's mainly based on implementing different procedures to compute the Householder matrix for the real and complex matrix cases, respectively. Also, in the complex case, the code must compute a matrix's conjugate transpose and determine the leading dimension for rectangular matrices. The code in Python implementing the Householder reflections algorithm is listed below:
qr_hh_solver.py
import numpy as np
import numpy.linalg as lin
def qr_hh(A, type=complex):
(m,n) = np.shape(A)
cols = max(m, n) if m > n else min(m, n)
E = np.eye(m, n, dtype=type)
I = np.eye(m, m, dtype=type)
Q = np.eye(cols, cols, dtype=type)
R = np.array(A, dtype=type)
for k in range(min(m, n)):
nsigx = -np.sign(R[k,k])
r_norm = lin.norm(R[k:m,k])
u = R[k:m,k] - nsigx * r_norm * E[k:m,k]
v = u / lin.norm(u);
w = 1.0
if type == complex:
v_hh = np.conj(v)
r_hh = np.conj(R[k:m,k])
w = r_hh.dot(v) / v_hh.dot(R[k:m,k])
H = I[k:m,k:m] - (1.0 + w) * np.outer(v, v_hh)
Q_k = np.eye(cols, cols, dtype=type)
Q_k[k:cols,k:cols] = H;
Q = Q.dot(np.conj(np.transpose(Q_k)))
else:
H = I[k:m,k:m] - (1.0 + w) * np.outer(v, v)
Q_kt = np.transpose(Q[k:cols,:])
Q[k:cols,:] = np.transpose(Q_kt.dot(H))
R[k:m,k:n] = H.dot(R[k:m,k:n])
if type != complex:
Q = np.transpose(Q)
return Q, R
Another piece of Python code, implementing the well-known modified Gram-Schmidt orthogonalization algorithm, is shown below. In this case, the modified Gram-Schmidt implementation is more compact and thus more attractive, performing the same matrix QR decomposition in a smaller number of steps. However, it still requires the functionality for determining the leading dimension in the rectangular matrix case and for obtaining the complex conjugate transpose. Similar to the Householder reflections implementation above, it's still based on different fragments of code for the real and complex matrix decomposition cases. As well, the modified Gram-Schmidt implementation has a somewhat lower complexity.
qr_gs_solver.py
import numpy as np
import numpy.linalg as lin
def conj_transpose(a):
return np.conj(np.transpose(a))
def proj_ab(a,b):
return a.dot(conj_transpose(b)) / lin.norm(b) * b
def qr_gs(A, type=complex):
A = np.array(A, dtype=type)
(m, n) = np.shape(A)
(m, n) = (m, n) if m > n else (m, m)
Q = np.zeros((m, n), dtype=type)
R = np.zeros((m, n), dtype=type)
for k in range(n):
pr_sum = np.zeros(m, dtype=type)
for j in range(k):
pr_sum += proj_ab(A[:,k], Q[:,j])
Q[:,k] = A[:,k] - pr_sum
Q[:,k] = Q[:,k] / lin.norm(Q[:,k])
if type == complex:
R = conj_transpose(Q).dot(A)
else: R = np.transpose(Q).dot(A)
return -Q, -R
Next, I've implemented the efficient Schwarz-Rutishauser algorithm. As you can see from the code listed below, the entire implementation is significantly reduced, since it's no longer necessary to implement separate code for the decomposition of real and complex matrices. Unlike the two previous QR algorithm implementations, it allows obtaining the matrices Q and R simultaneously, performing the specific computations in-place, rather than computing the product of Q's transpose and A. As well, it does not require the complex conjugate transpose computation, which drastically decreases the complexity of large matrix decomposition.
qr_gs_schwrt_solver.py
import numpy as np
import numpy.linalg as lin
def qr_gs_modsr(A, type=complex):
A = np.array(A, dtype=type)
(m,n) = np.shape(A)
Q = np.array(A, dtype=type)
R = np.zeros((n, n), dtype=type)
for k in range(n):
for i in range(k):
R[i,k] = np.transpose(Q[:,i]).dot(Q[:,k]);
Q[:,k] = Q[:,k] - R[i,k] * Q[:,i];
R[k,k] = lin.norm(Q[:,k]); Q[:,k] = Q[:,k] / R[k,k];
return -Q, -R
Finally, I've implemented the eigendecomposition QR algorithm that computes the eigenvalues and eigenvectors of a square 'real' matrix A:
qr_eigen_solver.py
import math
import cmath
import numpy as np
import numpy.linalg as lin
def qr_eig_solver(A, n_iters, qr):
E = qr_simple(A, n_iters, qr)
E = qr_eig_vals_complex_rt(E)
E = np.flip(np.sort(E))
V = qr_eig_vecs_complex_rt(A, E)
return E, V
def qr_simple(A, n_iters, qr):
E = np.array(A, dtype=float)
for i in range(np.shape(A)[1] * n_iters):
Q,R = qr(E, float);
E = np.transpose(Q).dot(E).dot(Q)
return E
def qr_eig_vals_complex_rt(E):
n = np.shape(E)[1]
if n >= 2:
es = np.zeros(n, dtype=complex)
i = 0
while i < n - 1:
mu = (E[i][i] + E[i+1][i+1]) / 2
det = E[i][i] * E[i+1][i+1] - E[i][i+1] * E[i+1][i]
dt_root = cmath.sqrt(mu**2 - det)
e1 = mu + dt_root; e2 = mu - dt_root
if np.iscomplex(e1):
if e1.real == e2.real:
e2 = np.conj(e2)
es[i] = e1; es[i+1] = e2
if es[i] < es[i+1]:
es[[i]] = es[[i+1]]
i = i + 1
i = i + 1
E = np.diag(E)
es = [E[i] if i in np.argwhere(es == 0)[:,0]
else es[i] for i in range(n)]
return es
def rot_complex(val):
v_rr = val.real * math.cos(math.pi / 2) + val.imag * math.sin(math.pi / 2)
v_ri = val.real * math.sin(math.pi / 2) + val.imag * math.cos(math.pi / 2)
return v_rr + 1.j * v_ri
def qr_eig_vecs_complex_rt(A, E):
(m,n) = np.shape(A)
V = np.identity(n, dtype=complex)
for (e,z) in zip(E, range(m)):
Av = np.array(A, dtype=complex)
for j in range(m):
Av[j][j] = Av[j][j] - e
for i in range(m-1):
alpha = Av[i][i]
if alpha != 0 and alpha != 1:
for j in range(m):
Av[i][j] = Av[i][j] / alpha
Av[[i, i+1]] = Av[[i+1, i]] if alpha == 0 else Av[[i, i+1]]
for k in range(m):
theta = Av[k][i]
if i != k:
for j in range(i,m):
Av[k][j] = Av[k][j] - Av[i][j] * theta
V[:,z] = Av[:,m-1]; V[m-1][z] = 1.0
nV = lin.norm(V[:,z])
V[:,z] = np.array([v / nV for v in V[:,z]], dtype=complex)
V[:,z] = [rot_complex(v) for v in V[:,z]] if e.imag != 0 else V[:,z]
z = z + 1
return V
To evaluate the performance of all the QR algorithms implemented, I've developed a demonstration app, the source code of which is listed below. The demo app's code generates random real and complex matrices of an arbitrary shape. It performs the QR decomposition with each of the algorithms and sanity-checks whether the decomposition was done correctly. Then, it estimates the code execution wall-time for each QR algorithm to determine which implementation provides the better performance speed-up. At the end of its execution, the demo app outputs a performance summary report, displaying the "winner" and "loser" QR algorithm for matrices of both the real and complex types:
qr_eigen_factorization.py
import math
import cmath
import time
import random
import pandas as pd
import numpy as np
import numpy.linalg as linalg
from qr_hh_solver import *
from qr_gs_solver import *
from qr_gs_schwrt_solver import *
from qr_eigen_solver import *
iter_max = 30
mat_shape = { 'min': 6, 'max': 32 }
mat_shape_perf = { 'min': 750, 'max': 950 }
qr_alg = [ { 'alg': qr_gs, 'name': 'Gram-Schmidt ' },
{ 'alg': qr_gs_modsr, 'name': 'Schwarz-Rutishauser' },
{ 'alg': qr_hh, 'name': 'Householder ' } ]
mat_types = [ 'real ', 'complex' ]
checkup_status = [ 'failed', 'passed' ]
checkup_banner = "\n[ Verification %s... ]"
stats_banner = "%s Matrix A Statistics:\n"
qr_test_banner = "\nQR Factorization Of A `%s` Matrix Using %s Algorithm:"
eig_test_banner = "\nEigenvalues And Eigenvectors <E,V> of A `%s` Real Matrix (%dx%d):"
survey_banner = "Matrix: %s WINS: [ %s : %d secs ] LOOSES: [ %s : %d secs ]"
perf_stats = "%s : [ type: `%s` exec_time: %d secs verification: %s ]"
app_banner = "QR Factorization v.0.0.2 CPOL License (C) 2022 by Arthur V. Ratz"
def perf(A, qr, type=complex):
t_d = time.time(); Q,R = qr(A, type); \
return Q, R, (time.time() - t_d)
def rand_matrix(rows, cols, type=complex):
np.set_printoptions(precision=8)
if type == complex:
return np.reshape(\
np.random.uniform(1, 10, rows * cols) + \
np.random.uniform(-10, 10, rows * cols) * 1j, (rows, cols))
else: return np.reshape(10 * np.random.uniform(\
0.01, 0.99, rows * cols), (rows, cols))
def print_array(arr, alias, type=complex):
fmt = 'complexfloat' \
if type == complex else 'float'
np.set_printoptions(precision=3, \
suppress=True, formatter=fmt)
if isinstance(arr, complex):
eps = np.finfo(complex).eps; tol = 100
arr = [np.real(m) if np.imag(m)<tol*eps else m for m in arr]
arr = [np.asscalar(np.real_if_close(m)) for m in arr]
if len(np.shape(arr)) == 2:
print("\nMatrix %s (%dx%d):" % \
(alias, len(arr), len(arr[0])),"\n")
else:
print("\nVector %s (len = %d):" % \
(alias, len(arr)),"\n")
pd.set_option('precision', 3); \
df = pd.DataFrame(arr)
df = df.to_string(index=False).replace('j','i')
print(df)
def check(M1, M2):
v1 = np.reshape(M1,-1)
v2 = np.reshape(M2,-1)
if len(v1) != len(v2):
return False
else: return 0 == len(np.where(np.array(\
[ format(c1, '.4g') == format(c2, '.4g') \
for c1,c2 in zip(v1, v2) ]) == False)[0])
def logo():
print(app_banner)
print(''.join(['=' for p in range(len(app_banner))]))
def qr_demo(s, qr, type=complex):
print("QR-Decomposition of Matrix A:")
print("=============================")
print(qr_test_banner % (\
"Complex" if type == complex \
else "Real", s.replace(' ', '')))
rows = np.random.randint(\
mat_shape['min'], mat_shape['max'])
cols = np.random.randint(\
mat_shape['min'], mat_shape['max'])
A1 = rand_matrix(rows, cols, type); print_array(A1, "A1")
Q,R,T = perf(A1, qr, type)
status_qr = check(A1, Q.dot(R))
print_array(Q, "Q", type)
print_array(R, "R", type)
print(checkup_banner % (checkup_status[status_qr]),"\n")
if type == complex:
return status_qr
rows = min(rows, cols)
print(eig_test_banner % ("Real", rows, rows))
A2 = np.array(A1[:rows,:rows], dtype=type)
E,V = qr_eig_solver(A2, iter_max, qr)
print_array(E, "E")
print_array(V, "V")
return status_qr
def qr_perf():
print ("\nPerformance Assessment:")
print ("=======================================")
rows = np.random.randint(\
mat_shape_perf['min'], mat_shape_perf['max'])
cols = np.random.randint(\
mat_shape_perf['min'], mat_shape_perf['max'])
d = np.random.randint(5)
cols += d if random.uniform(0,1) > 0.5 else -d
A = [ rand_matrix(rows, cols, float), \
rand_matrix(rows, cols, complex) ]
print ("\nMatrix A (%d x %d):" % (rows, cols))
print ("============================\n")
exec_time = np.zeros((len(mat_types), len(qr_alg)))
survey = np.zeros((len(mat_types), 1), dtype=object)
status = 0
for s_j in range(len(mat_types)):
for s_i in range(len(qr_alg)):
qr, name = \
qr_alg[s_i]['alg'], qr_alg[s_i]['name']; \
Q, R, exec_time[s_j][s_i] = perf(A[s_j], \
qr, float if s_j % len(mat_types) == 0 else complex); \
status = checkup_status[check(A[s_j], Q.dot(R))]
print(perf_stats % (name, mat_types[s_j].replace(' ','').lower(), \
exec_time[s_j][s_i], status))
if status == "failed": break
if status == "failed":
print_array(A[s_j], "A")
print_array(Q, "Q")
print_array(R, "R")
print("\n*** FAILURE!!! ****\n")
break
wr_time, lr_time = \
np.min(exec_time[s_j]), \
np.max(exec_time[s_j])
wi = np.where(exec_time[s_j] == wr_time)[0][0]
li = np.where(exec_time[s_j] == lr_time)[0][0]
s_w = qr_alg[wi]['name']; s_l = qr_alg[li]['name']
survey[s_j] = { 'alg_w': s_w, 'tm_w': wr_time, \
'alg_l': s_l, 'tm_l': lr_time }
print("\n")
for s_j in range(len(mat_types)):
print(survey_banner % (mat_types[s_j].lower(),
survey[s_j][0]['alg_w'], survey[s_j][0]['tm_w'], \
survey[s_j][0]['alg_l'], survey[s_j][0]['tm_l']))
def main():
logo()
np.random.seed(int(time.time()))
status = 0
for s_j in range(len(qr_alg)):
for s_i in range(len(mat_types)):
status = qr_demo(qr_alg[s_j]['name'], \
qr_alg[s_j]['alg'], float if s_i % 2 else complex)
if status == False: break
if status == False:
print("\n*** FAILURE!!! ****\n"); break
qr_perf(); print("\n")
if __name__ == "__main__":
main()
Points of Interest
In conclusion, the Schwarz-Rutishauser algorithm, introduced in this article, is extremely fast and efficient when applied to large real and complex matrices of an arbitrary shape, whose size is far beyond 10³ elements in each dimension. The overall complexity of the Schwarz-Rutishauser algorithm is just O(mn²), which is up to O(n) times less than that of the modified Gram-Schmidt process and is very close to the complexity of the matrix inner product operation. Finally, it's considered to have much better numerical stability and is attractive for parallelization, scaling the code execution across multiple cores of modern symmetric CPUs.
References
- "QR Decomposition" - From Wikipedia, the free encyclopedia
- "Projection (Linear Algebra)" - From Wikipedia, the free encyclopedia
- "Factorization" - From Wikipedia, the free encyclopedia
- "ECE133A - Applied Numerical Computing (6. QR Factorization)", Prof. L. Vandenberghe, University of California, Los Angeles, USA, Fall 2019
- "Algorithms for the QR-Decomposition" by Walter Gander, Seminar fuer Angewandte Mathematik, Eidgenoessische Technische Hochschule, CH-8092 Zuerich, April 1980
- "Numerik symmetrischer Matrizen", H. R. Schwarz, H. Rutishauser and E. Stiefel, Stuttgart, 1968
- "Eigendecomposition of a matrix" - From Wikipedia, the free encyclopedia
- "Math 108b: Advanced Linear Algebra, Lecture 5: The Schur Decomposition", Padraic Bartlett, University of California, Santa Barbara, 2014
- "Solving large scale eigenvalue problems", Lecture 4, March 14, 2018: The QR algorithm, Peter Arbenz, Computer Science Department, ETH Zurich
- "Facts About Eigenvalues" By Dr. David Butler, The University of Adelaide, Australia
- "Eigenvalue-Polynomials", September 7, 2017, MIT, Massachusetts, USA
- "Complex Eigenvalues of Real Matrices", Alex Freire, Department of Mathematics, University of Tennessee, USA, 2005
- "Computing Eigenvalues and/or Eigenvectors; Part 2, The Power method and QR-algorithm", Tom Lyche Centre of Mathematics for Applications, Department of Informatics, University of Oslo, November 16, Norway, 2008
- "Eigenvectors, Eigenvalues and Orthogonality", RiskPrep.com, July 22, 2021
- "Reducing vibration using QR decomposition and unconstrained optimization for a multi-rotor aircraft", Bechhoefer, Eric Robert, Simmonds Precision Products, Inc., United States (US) Patent USP2009037512463, Feb 16, 2005
- "The Netflix Prize and Singular Value Decomposition", NJIT, CS301, Spring 2021
History
- 11th December, 2021 - Article "Can QR Decomposition Be Actually Faster? Schwarz-Rutishauser Algorithm" was published.
- 17th December, 2021 - Article's contents and representation of the basic ideas have been revised.
- 20th December, 2021 - QR-decomposition visualization example and performance optimization content were added.
- 27th December, 2021 - QR-algorithm for computing the real matrices eigenspace and solving the SVD was explained.