Milestone 6 - Solving Systems of Linear Equations
Sparse Matrix Format
The spatial discretization that we derived in milestone 5 is compact, which means that the operators only couple the solution at each degree of freedom with the solutions at neighboring degrees of freedom. As a consequence, we obtain a very sparse system matrix $A$ (many of its entries are zeros). The sparsity of the system matrix $A$ is exactly the same as the sparsity of the Jacobian $J$ that we computed in the assignment of milestone 5, because we are only adding values to the diagonal entries of $J$; for an implicit Euler step, for instance, $A = I - \Delta t \, J$.
Utilizing a sparse matrix format can significantly enhance efficiency due to the abundance of zero entries in $A$. This format optimizes storage space allocation by only storing the non-zero elements.
Sparse matrix formats offer additional advantages beyond storage savings. They can also accelerate various matrix operations. To illustrate this, consider a matrix-vector multiplication involving a dense matrix and a dense vector. In this case, $N^2$ multiplications and additions are required, where $N$ represents the matrix size. In contrast, employing a sparse matrix and a dense vector for the matrix-vector multiplication requires only $N_{nz}$ multiplications and additions, where $N_{nz}$ denotes the number of non-zero elements.
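This operation count can be illustrated with SciPy. The matrix below is a random illustrative one (not the EBM system): both products give the same vector, but the sparse product only touches the $N_{nz}$ stored entries.

```python
import numpy as np
from scipy import sparse

# Illustrative random matrix (not the EBM system matrix).
N = 1000
A_dense = sparse.random(N, N, density=0.005, random_state=42).toarray()
A_sparse = sparse.csc_matrix(A_dense)
x = np.ones(N)

# A dense product touches all N^2 entries; the sparse one only N_nz.
print("N^2  =", N * N)         # 1000000
print("N_nz =", A_sparse.nnz)  # roughly 5000 stored entries

# Both products give the same vector (up to rounding).
assert np.allclose(A_dense @ x, A_sparse @ x)
```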
Remark: Our spatial discretization operator couples each interior node with itself and four other nodes, each node at the north pole with all the nodes at the north pole and all the nodes at the neighboring colatitude, and each node at the south pole with all the nodes at the south pole and all the nodes at the neighboring colatitude.
Therefore, the number of non-zeros of the matrix scales only linearly with the number of degrees of freedom, $N_{nz} = \mathcal{O}(N)$, which is much smaller than the $N^2$ entries stored by a dense format.
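A minimal sketch of this scaling, using the standard 5-point Laplacian on a hypothetical $n \times n$ grid (not the spherical EBM grid), shows $N_{nz}$ growing like $5N$ while dense storage grows like $N^2$:

```python
import scipy.sparse as sp

# Standard 5-point Laplacian on a hypothetical n-by-n grid (not the
# spherical EBM grid): each interior node couples to itself + 4 neighbors.
for n in (10, 20, 40):
    N = n * n                                  # total number of nodes
    I = sp.identity(n, format="csc")
    T = sp.diags([1., -2., 1.], [-1, 0, 1], shape=(n, n), format="csc")
    A = sp.kron(I, T) + sp.kron(T, I)          # 2D operator via Kronecker products
    print(f"N = {N:5d}:  N_nz = {A.nnz:6d}  (about 5N),  N^2 = {N * N}")
```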
There are many different sparse matrix formats available in the literature. Depending on the application, some formats can offer more advantages than others. In this course, we will use the Compressed Sparse Column (CSC) format. In the CSC format we store a sparse matrix using only three one-dimensional vectors:
$V$: A vector of real numbers of size $N_{nz}$ that contains the values of the non-zero entries of the matrix in ascending column order and, within each column, in ascending row order.
$R$: A vector of integers of size $N_{nz}$ that contains the row index for each entry in $V$.
$C$: A vector of integers of size $N+1$ that contains, for each column, the index in $V$ and $R$ where that column starts. The last entry of $C$ contains the fictitious index in $V$ after the last valid index. In other words, it contains $N_{nz}$ in zero-based programming languages (e.g., Python) and $N_{nz}+1$ in one-based programming languages (e.g., Julia).
Example: We want to store the matrix
$$A = \begin{pmatrix} 5 & 0 & 0 & 0 \\ 0 & 8 & 0 & 6 \\ 0 & 0 & 3 & 0 \\ 0 & 4 & 0 & 9 \end{pmatrix}$$
in CSC sparse format, where $N = 4$ and $N_{nz} = 6$. We have:

In a zero-based language (e.g., Python):
$$V = (5, 8, 4, 3, 6, 9), \quad R = (0, 1, 3, 2, 1, 3), \quad C = (0, 1, 3, 4, 6).$$

In a one-based language (e.g., Julia):
$$V = (5, 8, 4, 3, 6, 9), \quad R = (1, 2, 4, 3, 2, 4), \quad C = (1, 2, 4, 5, 7).$$
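In Python, SciPy exposes these three vectors as the attributes data ($V$), indices ($R$), and indptr ($C$). A minimal sketch with an illustrative $4 \times 4$ matrix:

```python
import numpy as np
from scipy import sparse

# Illustrative 4x4 matrix with 6 non-zero entries.
A = np.array([[5., 0., 0., 0.],
              [0., 8., 0., 6.],
              [0., 0., 3., 0.],
              [0., 4., 0., 9.]])

A_csc = sparse.csc_matrix(A)
print("V (values)       :", A_csc.data)     # [5. 8. 4. 3. 6. 9.]
print("R (row indices)  :", A_csc.indices)  # [0 1 3 2 1 3]
print("C (column starts):", A_csc.indptr)   # [0 1 3 4 6]
```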
Note: We can convert a dense matrix to CSC sparse format in Python using the SciPy 2-D sparse array package for numerical data:

```python
from scipy import sparse

sparse_jacobian = sparse.csc_matrix(dense_jacobian)
```

Similarly, we can convert a dense matrix to CSC sparse format in Julia using the function sparse from the library SparseArrays:

```julia
using SparseArrays: sparse

sparse_jacobian = sparse(dense_jacobian)
```

Solving Sparse Linear Systems
There is a myriad of methods to solve linear systems efficiently. The choice of the most efficient variant depends on the size of the system, the sparsity of the matrix, the shape of the matrix, and its special properties (symmetry, etc.). For very large problem sizes, iterative linear system solvers are typically used, which solve the problem approximately up to a (user-defined) accuracy threshold. For problem sizes that are relatively small, such as our 2D EBM with moderately sized grids, it is possible to use so-called direct solvers.
Direct methods are the most reliable methods for solving moderate-size linear systems since they obtain an exact solution to the problem, up to machine-precision rounding/conditioning errors. The most well-known direct method is Gaussian elimination, which reduces the system to a triangular one using row operations. Once the system is triangular, the solution can be found via backward substitution. Gaussian elimination is considered a very expensive method since it requires $\mathcal{O}(N^3)$ floating point operations. Many other alternatives are available in the literature, such as the Cholesky decomposition (for Hermitian, positive-definite matrices), the QR decomposition, and the LU decomposition (both for general nonsymmetric systems). In the next section, the LU decomposition is briefly described. LU decomposition is very effective when the system matrix does not change (or changes only very rarely) while the right-hand-side vector changes all the time. As remarked above, in our EBM approach, the Jacobian of the operator stays constant throughout the simulation, while the right-hand-side vector changes in every time step - an ideal scenario for LU decomposition.
LU Decomposition
The idea of LU decomposition is to factorize the system matrix $A$ as the product of non-singular lower- and upper-triangular matrices $L$ and $U$,
$$A = L U.$$
Depending on the structure of the regular matrix $A$, this is not always directly possible. However, a proper reordering of the matrix by rows or columns, also called permutation, is sufficient to enable the LU factorization of general nonsingular matrices,
$$P A = L U,$$
where $P$ is known as the permutation matrix, containing only $0$ and $1$ entries at the right positions to perform the row/column swaps.
Once the matrix is factorized, the linear system $A x = b$ can be solved in two steps:

1. Solve the linear system $L y = P b$ for $y$.
2. Solve the linear system $U x = y$ for $x$.
Since the matrices $L$ and $U$ are triangular, the two linear solves can be done by forward and backward substitution, respectively. Therefore, the solving process is very cheap, requiring approximately $2N^2$ operations. The expensive part is the factorization of the matrix, which requires around $\tfrac{2}{3}N^3$ operations. As a consequence, the LU factorization is a feasible option when the factorized matrix can be used for several linear solves with different right-hand sides, and when there is enough storage for $L$, $U$, $P$, and the variables needed to factorize the original system.
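The two triangular solves can be inspected in SciPy via splu, sketched here on an illustrative tridiagonal system (not the EBM Jacobian). Note that SuperLU permutes both rows and columns, i.e., it computes $P_r A P_c = L U$:

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import splu

# Illustrative tridiagonal system (not the EBM Jacobian).
n = 5
A = sparse.diags([-1., 2., -1.], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

lu = splu(A)      # the expensive factorization step
x = lu.solve(b)   # the two cheap triangular solves (forward, then backward)

# Reconstruct the permutation matrices and verify P_r A P_c = L U.
Pr = sparse.csc_matrix((np.ones(n), (lu.perm_r, np.arange(n))))
Pc = sparse.csc_matrix((np.ones(n), (np.arange(n), lu.perm_c)))
assert np.allclose((Pr @ A @ Pc).toarray(), (lu.L @ lu.U).toarray())
assert np.allclose(A @ x, b)
```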
Note: The LU decomposition of a sparse matrix can be done in Python using the subpackage linalg of scipy.sparse. To perform the factorization, use the function factorized:

```python
from scipy.sparse.linalg import factorized

solve = factorized(sparse_jacobian)
```

To solve the system for a particular right-hand side rhs and store the solution in the array sol, we can use the newly defined function solve:

```python
sol = solve(rhs)
```
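As a sketch of the scenario described above (factor once, then solve many times with changing right-hand sides), the following uses a hypothetical stand-in matrix assumed to have the implicit-Euler form $I - \Delta t\,J$:

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import factorized

# Hypothetical stand-in for the constant system matrix (illustrative only).
n = 100
dt = 0.1
J = sparse.diags([1., -2., 1.], [-1, 0, 1], shape=(n, n), format="csc")
A = sparse.identity(n, format="csc") - dt * J  # assumed implicit-Euler form

solve = factorized(A)   # factor once: the expensive part

# Many cheap solves with different right-hand sides (e.g., time steps).
u = np.ones(n)
for _ in range(10):
    rhs = u.copy()
    u = solve(rhs)      # reuses the stored LU factorization

assert np.allclose(A @ u, rhs)  # the last solve satisfies A u = rhs
```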
The LU decomposition of a sparse matrix in Julia can be done using the LinearAlgebra library. To perform the factorization, use the function lu:
```julia
using LinearAlgebra: lu

lu_decomposition = lu(sparse_jacobian)
```

After this command, the variable lu_decomposition stores the factorized matrices $L$ and $U$, together with the permutation information. To solve the system for a particular right-hand side rhs and store the solution in the array sol, use the function ldiv!:

```julia
using LinearAlgebra: ldiv!

ldiv!(sol, lu_decomposition, rhs)
```
These Julia commands work for dense or sparse matrices.
Created by Gregor Gassner and Andrés Rueda-Ramírez with contributions by Simone Chiocchetti, Daniel Bach, Sophia Horak, Philipp Baasch, Benjamin Bolm, Erik Faulhaber, and Luca Sommer. Last modified: April 02, 2026. Website built with Franklin.jl and the Julia programming language.