Sparse Days at CERFACS.

(Linear Algebra and Grid Computing)

Partially supported by the Région Midi-Pyrénées

 

A two-day meeting on sparse matrix matters at CERFACS in Toulouse, France, on June 24th-25th, 2002.


Click here to see some pictures.

Click here to download the report on the conference (in French).


 

Programme for June 24th


 
14.00 - 14.45 Erik Boman: Support Theory and Preconditioning
14.45 - 15.30 Sivan Toledo: Support preconditioners in TAUCS
15.30 - 16.00 Coffee break
16.00 - 16.45 Doron Chen: Subset preconditioning
16.45 - 17.30 Yousef Saad: Crout Versions of ILUT
20.30 - 23.30 Dinner

Programme for June 25th


 
09.45 - 10.30 Michele Benzi: Preconditioners for indefinite problems
10.30 - 11.15 Jose Mas Marí: AISM: A factorized sparse approximate inverse preconditioner (slides available)
11.15 - 12.00 Esmond Ng: Reducing communication latency in applying incomplete factor preconditioner
12.00 - 14.00 Lunch
14.00 - 14.45 Masha Sosonkina: Using numerical values in sparse matrix reorderings for ILU and ARMS preconditioners (slides available)
14.45 - 15.30 Hussein Hoteit: Linear systems and mixed finite elements in hydrogeology (slides available)
15.30 - 16.00 Coffee Break
16.00 - 16.45 Sebastien Lacroix: On iterative solution of linear systems in reservoir simulation
16.45 - 17.30 Robert Scheichl: Decoupling and parallel preconditioning for sedimentary basin simulations.

 
 

Seminars will be organized during the sparse week to stimulate informal discussion among CERFACS researchers and visitors.

Wednesday, June 26, Filomena D'Almeida: Condition numbers in projection methods for integral equations.

 

The list of participants is available.
 

Some Abstracts

Support preconditioners in TAUCS
Sivan Toledo (with Doron Chen)

Tel-Aviv University


TAUCS is a library of linear solvers that includes implementations of several support preconditioners. More specifically, TAUCS includes implementations of Vaidya's augmented maximum-spanning-tree preconditioners, Vaidya's augmented maximum-weight-basis preconditioners, Vaidya's recursive support preconditioners, Gremban-Miller support-tree preconditioners, and generalizations of Gremban-Miller's preconditioners.

In addition, TAUCS includes several supplementary subroutines related to support preconditioning. A multifrontal supernodal Cholesky factorization allows rapid construction of support preconditioners even when the preconditioner is quite dense. A subroutine that quickly computes the number of nonzeros in each column of the Cholesky factor of a matrix allows us to search quickly for a preconditioner with a given number of nonzeros or a given factorization time. A routine for converting a linear system with negative off-diagonal coefficients to an equivalent system without negative off-diagonal elements allows us to apply maximum-spanning-tree and support-tree preconditioners to a wider class of linear systems. A routine for directly constructing the inverses of the Cholesky factors of a matrix might allow us to create support preconditioners that can be applied quickly in parallel.

The talk will describe these facilities and how they can be used to construct effective linear solvers. I will describe both what we already know about the behavior of these preconditioners and what we do not know about them. For example, Vaidya's recursive preconditioners can be shown theoretically to be very effective, but this has not yet been demonstrated in practice.
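As an informal illustration of the spanning-tree idea behind the first of these preconditioners, here is a simplified, unaugmented sketch assuming SciPy is available (this is not the TAUCS implementation): keep the diagonal of a diagonally dominant matrix together with the off-diagonal entries lying on a maximum-weight spanning tree of its graph, factor the result, and use it as a preconditioner for conjugate gradients.

import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.sparse.linalg import splu, cg, LinearOperator

def mst_preconditioner(A):
    """Keep the diagonal of A plus the off-diagonal entries on a
    maximum-weight spanning tree of the matrix graph, factor the result,
    and return an operator that applies M^{-1}."""
    A = sp.csr_matrix(A)
    offdiag = (A - sp.diags(A.diagonal())).tocsr()
    offdiag.eliminate_zeros()
    # Maximum-weight spanning tree of |a_ij| = minimum spanning tree of 1/|a_ij|.
    w = abs(offdiag).tocsr()
    w.data = 1.0 / w.data
    tree = minimum_spanning_tree(w)          # each tree edge stored once
    mask = (tree != 0)
    kept = offdiag.multiply(mask) + offdiag.multiply(mask.T)
    M = sp.diags(A.diagonal()) + kept        # same diagonal as A, tree off-diagonals
    lu = splu(sp.csc_matrix(M))
    return LinearOperator(A.shape, matvec=lu.solve)

# Toy example: 2-D Laplacian (diagonally dominant, SPD), preconditioned CG.
n = 30
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
A = (sp.kron(sp.eye(n), T) + sp.kron(T, sp.eye(n))).tocsr()
b = np.ones(A.shape[0])
x, info = cg(A, b, M=mst_preconditioner(A))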



Subset Preconditioning
Doron Chen (with Sivan Toledo and Noga Alon)

Tel-Aviv University


The talk will describe a new approach to combinatorial preconditioning of symmetric positive-definite matrices. Our approach, which we term subset preconditioning, is a generalization of Vaidya preconditioners, which have already been proven effective for DD-SPSD (diagonally-dominant, symmetric positive semi-definite) matrices. Our results so far are essentially existence proofs for certain preconditioners; we do not yet have efficient-enough algorithms and complete running-time analyses to make this approach practical. Still, we believe that the new results will direct further research toward general and effective preconditioners.

There are cases in which, given an n-by-n matrix A, we can cheaply compute a factorization A = UU^T, where U is n-by-m with m > n. We call such factorizations "natural factorizations". Since U is neither orthogonal nor triangular, the factorization is not useful for direct solution of linear systems. However, we can sometimes use the natural factorization of A to construct subset preconditioners. A subset preconditioner with respect to a natural factorization A = UU^T is M = VV^T, where the columns of V are a subset of U's columns. Our goal is to find a subset such that VV^T is an effective preconditioner for A.
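A standard instance of such a natural factorization (our illustration here, not necessarily the one used in the talk) is the weighted graph Laplacian, which is DD-SPSD. Writing e_i for the i-th unit vector,

\[ A \;=\; \sum_{(i,j)\in E} w_{ij}\,(e_i - e_j)(e_i - e_j)^T \;=\; U U^T, \qquad U = \bigl[\sqrt{w_{ij}}\,(e_i - e_j)\bigr]_{(i,j)\in E} \in \mathbb{R}^{n\times m},\quad m = |E|. \]

Keeping only the n-1 columns of U that correspond to a spanning tree gives V, and M = VV^T is the Laplacian of that tree; choosing the tree of maximum total weight recovers Vaidya's basic maximum-spanning-tree preconditioner.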

The talk will describe two strategies for selecting subsets of the columns of U. Using one strategy, we will show that for every matrix A = UU^T, there is a subset preconditioner VV^T such that V is n-by-n and the condition number of the preconditioned system is bounded by nm.

A second strategy, based on choosing V to be the convex hull of the columns of U, provides a bound of m on the condition number of the preconditioned system.



Crout versions of ILU
Yousef Saad

University of Minnesota


Standard ILU factorization preconditioners often utilize a delayed-update Gaussian Elimination (GE) algorithm, sometimes known as the IKJ variant of GE. This leads to severe inefficiencies when the fill-in allowed becomes substantial. The main problem comes from the requirement to process rows in a certain order in the main elimination step, leading to expensive searches.

Another variant of Gaussian elimination, which was advocated for symmetric positive definite matrices [Jones-Plassmann, TOMS, 1995], is shown to be quite effective when extended to the nonsymmetric case. An important advantage of this algorithm is that it is easily amenable to the use of more rigorous dropping strategies such as those developed in [Bollhoefer, 2001]. We will describe the algorithm and some of its variations, and report on some experiments. The tests confirm that the method computes better preconditioners and uses less storage than other implementations such as ILUT.
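For illustration only, here is a dense sketch of the Crout ordering that this variant exploits (no pivoting, no dropping; the actual Crout ILU adds sparse data structures and the dropping strategies mentioned above). At step k, the k-th row of U and the k-th column of L are produced using only rows and columns computed at earlier steps, with no update of the trailing submatrix.

import numpy as np

def crout_lu(A):
    """Dense Crout-ordered LU without pivoting: A = L @ U, L unit lower
    triangular.  In an incomplete variant, dropping is applied to each
    new row of U and column of L as it is produced."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    L = np.eye(n)
    U = np.zeros_like(A)
    for k in range(n):
        # k-th row of U (including the diagonal entry)
        U[k, k:] = A[k, k:] - L[k, :k] @ U[:k, k:]
        # k-th column of L (unit diagonal already set)
        L[k+1:, k] = (A[k+1:, k] - L[k+1:, :k] @ U[:k, k]) / U[k, k]
    return L, U

A = np.array([[4., 1., 0.], [1., 4., 1.], [0., 1., 4.]])
L, U = crout_lu(A)
print(np.allclose(L @ U, A))   # True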



Preconditioners for indefinite problems
Michele Benzi (with Miroslav Tuma, Academy of Sciences of the Czech Republic, Prague, CZ )

Department of Mathematics and Computer Science
Emory University
Atlanta, GA 30322
USA


Symmetric indefinite systems of linear equations arise in a number of scientific and technical fields, including structural analysis, fluid dynamics, acoustics, electromagnetism, magnetohydrodynamics, and optimization. Furthermore, several algorithms for large sparse symmetric eigenproblems require the repeated solution of indefinite systems of equations.

In this talk I will present some ideas for constructing preconditioners for Krylov subspace methods applied to general symmetric indefinite systems. This problem is considerably more difficult than in the positive definite case, and little has been done towards the development of preconditioners of broad applicability (as opposed to special-purpose methods for special classes of indefinite systems, such as saddle-point problems).

The main focus of the talk is on computing symmetric indefinite preconditioners to be used with the simplified QMR algorithm. I will present both incomplete factorization and factored approximate inverse preconditioners based on the Bunch-Kaufman decomposition. Issues such as shifts, scalings and reorderings will be touched upon. Numerical experiments will be used to illustrate the potential (as well as the limitations) of these algorithms.
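As a minimal reminder (our illustration) of why 2x2 pivots arise in the symmetric indefinite case: the matrix

\[ \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \]

admits no nonzero 1x1 pivot, so a symmetric factorization must keep it as a 2x2 diagonal block. Bunch-Kaufman produces A = P L D L^T P^T with P a permutation, L unit lower triangular, and D block diagonal with 1x1 and 2x2 blocks; incomplete versions of these factors (or of their inverses) yield symmetric indefinite preconditioners of the kind discussed above.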

Methods that destroy the symmetry of the coefficient matrix will also be considered and compared with symmetry-preserving ones.



AISM: A factorized sparse approximate inverse preconditioner
Jose Mas Marí (with R. Bru, J. Cerdán and J. Marín)

Departament de Matemàtica Aplicada,
Universitat Politècnica de València,
46022 València, Spain
emails: {rbru, jcerdan, jmarinma, jmasm}@mat.upv.es


Let Ax = b be a large, sparse, nonsymmetric system of linear equations. AISM is a sparse approximate inverse preconditioning technique for this class of systems. We show how the matrix A_0^{-1} - A^{-1}, where A_0 is a nonsingular matrix whose inverse is known or easy to compute, can be factorized in the form U\Omega V^T using the Sherman-Morrison formula. When this factorization process is done incompletely, an approximate factorization may be obtained and used as a preconditioner for Krylov iterative methods. For A_0 = sI, where I is the identity matrix and s is a positive scalar, the existence of the preconditioner for M-matrices is proved. In addition, some numerical experiments obtained for a representative set of matrices are presented. Results show that our approach is comparable with other existing approximate inverse techniques.
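A sketch of the identity underlying such a factorization, in our own notation (the precise choice of updates in the paper may differ): write A = A_0 + \sum_{k=1}^{n} x_k y_k^T and set A_k = A_{k-1} + x_k y_k^T. The Sherman-Morrison formula gives

\[ A_k^{-1} \;=\; A_{k-1}^{-1} \;-\; \frac{A_{k-1}^{-1} x_k\, y_k^T A_{k-1}^{-1}}{1 + y_k^T A_{k-1}^{-1} x_k}, \]

and summing these rank-one corrections over k telescopes to

\[ A_0^{-1} - A^{-1} \;=\; \sum_{k=1}^{n} \frac{u_k v_k^T}{r_k} \;=\; U \Omega V^T, \qquad u_k = A_{k-1}^{-1} x_k,\quad v_k = A_{k-1}^{-T} y_k,\quad r_k = 1 + y_k^T A_{k-1}^{-1} x_k, \]

with \Omega = diag(1/r_1, ..., 1/r_n). Carrying out this process incompletely, as the abstract notes, yields the approximate factors used as a preconditioner.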

Supported by Spanish DGI Grant BFM2001-2641



Using numerical values in sparse matrix reorderings for ILU and ARMS preconditioners
Masha Sosonkina


It is becoming widely accepted that reordering techniques used for direct methods are also beneficial for the convergence of Krylov subspace methods with incomplete LU-type factorizations as preconditioners. Reorderings such as reverse Cuthill-McKee and minimum degree are based on the connectivity of an (undirected) graph corresponding to the matrix. Reorderings that also consider numerical values, such as minimum fill, are rather expensive to construct; thus, approximations of them are typically used.

In this talk, I show some experiments with inexpensive reordering techniques based on matrix values. These reorderings are based on shortest-path-type algorithms whose edge weights are derived from matrix values. In particular, I will show how a few weighting schemes affect iterative convergence when the Algebraic Recursive Multilevel Solver (ARMS) preconditioner is used for solving challenging symmetric and nonsymmetric linear systems. I will also discuss the issue of using matrix values in determining the block structure in the ARMS preconditioner.
A few comparisons with other reordering techniques will be presented.
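A minimal sketch of the general idea, with a hypothetical weighting chosen purely for illustration (not necessarily one of the schemes in the talk): derive positive edge lengths from the matrix entries, compute shortest-path distances from a seed unknown, and number the unknowns by increasing distance before building the ILU or ARMS preconditioner.

import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import dijkstra

def value_based_ordering(A, seed=0):
    """Order unknowns by shortest-path distance from `seed`, where the
    length of edge (i, j) is derived from the matrix values.  The weight
    1/|a_ij| used here is only an illustrative choice: strong couplings
    give short edges, so strongly coupled unknowns are numbered together."""
    A = sp.csr_matrix(A)
    G = (A - sp.diags(A.diagonal())).tocsr()
    G.eliminate_zeros()
    G.data = 1.0 / np.abs(G.data)          # hypothetical value-based edge length
    dist = dijkstra(G, directed=False, indices=seed)
    return np.argsort(dist)

# Apply the permutation symmetrically before building the preconditioner.
n = 20
A = sp.random(n, n, density=0.2, random_state=0) + sp.eye(n) * n
p = value_based_ordering(A)
A_perm = A.tocsr()[p, :][:, p]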



Linear systems and mixed finite elements in hydrogeology
Hussein Hoteit (with Bernard Philippe)

INRIA / IRISA, Rennes


Contaminant transport and water flow in an underground system usually occur in heterogeneous media. Because analytical solutions of the corresponding models usually do not exist, numerical methods must be considered. The resulting linear systems, which must be solved, are characterized by a large number of unknowns and by high condition numbers due to the possible occurrence of discontinuous physical parameters.

In this talk, we analyze the numerical reliability of two numerical methods: the Mixed Finite Element (MFE) and Mixed-Hybrid Finite Element (MHFE) methods. The condition numbers of the resulting linear systems are investigated under the influence of two factors: the mesh discretization and the medium heterogeneity. We show that, unlike the MFE, the MHFE suffers from the presence of ill-shaped discretized elements. We also show that the condition number of the systems generated by both methods in heterogeneous media grows linearly with the ratio of the extreme values of the hydraulic conductivity. Furthermore, it is found that the MHFE can accumulate numerical errors when large jumps in the conductivity tensor take place. Finally, we compare running times for both algorithms through various numerical experiments.



On Iterative Solution of Linear Systems in Reservoir Simulation
Sebastien Lacroix (with Yuri Vassilevski, Y. Achdou, R. Masson, P. Quandalle and M.F. Wheeler)

IFP


We discuss the iterative solution of linear systems arising in the process of modeling multi-phase flows in porous media (e.g., in reservoir simulations). In order to guarantee robustness it is usually necessary to use fully implicit time-stepping schemes to resolve the coupled multicomponent nonlinear flow equations numerically. However, these schemes are very expensive in subsurface flow simulation and require the solution of a nonlinear system at each time step. The fully implicit solution algorithm is based upon a two-level iterative solution of the coupled nonlinear partial differential equations at each time step. An inexact Newton method is used to approximate the Jacobian. In the case of multi-phase flow, the resulting linear system is large, sparse, non-symmetric, and ill-conditioned. Hence, an efficient and robust iterative solver is a very important component of a given simulator.

We address here several approaches to the iterative solution. They all share the advantage of a low arithmetic complexity per iteration and a good convergence rate. In addition, all the solvers are parallel. Some of them exploit the rectangular structure of the underlying grid; others may be considered grid independent, since they rely only on physical properties of the linear system. The first approach consists of combining powerful block-relaxation methods with an algebraic multigrid correction on agglomerated systems, in order to take advantage of the multigrid idea without the excessive cost of coarse-grid creation. This approach also proves to be more robust than algebraic multigrid applied directly to the full system. As a point of comparison we introduce and couple enhanced incomplete factorizations such as the reduced ILU(0) (RS/ILU(0)) or ILU(0) combined with a minimum discarded fill (MDF) reordering strategy. All these solvers are then compared to the ones commonly used in reservoir simulation.

We also discuss the importance of a so-called two-stage preconditioner, which can be characterized as follows: the first stage consists in decoupling the pressure unknown from the saturations and in preconditioning the diagonal pressure block of the system matrix independently of the saturation blocks; the second stage consists in applying a global preconditioner to the residual resulting from the first stage, thus taking care of the recoupling between saturations and pressure. We consider different types of decouplings and stress their influence on resolving the whole system.

Finally, we report our experience with these solvers and compare them through numerical experiments.
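A minimal sketch of such a two-stage preconditioner, assuming a hypothetical ordering in which the pressure unknowns come first, and with sparse direct/incomplete factorizations standing in for the AMG and block-relaxation components described above:

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu, spilu, LinearOperator, gmres

def two_stage_preconditioner(A, n_p):
    """Two-stage (pressure-first) preconditioner for a system whose first
    n_p unknowns are pressures.  Stage 1 solves the pressure block alone
    (a direct solve here, standing in for AMG); stage 2 applies a global
    incomplete factorization to the residual left by the first stage."""
    A = sp.csc_matrix(A)
    App = splu(sp.csc_matrix(A[:n_p, :n_p]))   # stage-1 pressure-block solver
    Mglob = spilu(A)                           # stage-2 global ILU-type solver

    def apply(r):
        x1 = np.zeros_like(r)
        x1[:n_p] = App.solve(r[:n_p])          # stage 1: pressure correction
        r2 = r - A @ x1                        # recouple: update the residual
        return x1 + Mglob.solve(r2)            # stage 2: global correction

    return LinearOperator(A.shape, matvec=apply)

# Toy block system with a Laplacian-like pressure block and weak coupling.
n = 100
Lap = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
C = 0.1 * sp.random(n, n, density=0.05, random_state=1)
S = 2.0 * sp.eye(n) + 0.1 * sp.random(n, n, density=0.05, random_state=2)
A = sp.bmat([[Lap, C], [C.T, S]], format="csc")
b = np.ones(2 * n)
x, info = gmres(A, b, M=two_stage_preconditioner(A, n))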



Decoupling and Parallel Preconditioning for Sedimentary Basin Simulations
R. Scheichl (with R. Masson, P. Quandalle and J. Wendebourg)


The simulation of sedimentary basins aims at reconstructing a basin's historical evolution in order to provide quantitative predictions about the phenomena leading to hydrocarbon accumulations. The kernel of this simulation is the numerical solution of a complex system of time-dependent, three-dimensional partial differential equations of mixed parabolic-hyperbolic type. A discretisation (Finite Volumes + Implicit Euler) and linearisation (Newton) of this system leads to very ill-conditioned, strongly non-symmetric, large systems of linear equations with three unknowns per mesh element: pressure, geostatic load, and hydrocarbon saturation.

The preconditioning which we will present for these systems consists of three stages. First of all, the equations for pressure and saturation are locally decoupled on each element. This decoupling aims not only at reducing the coupling, but also at concentrating the elliptic part of the system in the "pressure block", which is then preconditioned in the second stage by efficient methods such as AMG. The third stage finally consists in "recoupling" the equations (e.g. block Gauss-Seidel, combinative techniques).

In almost all our numerical tests on real problems taken from case studies, we observed a considerable reduction of the CPU time for the linear solver, up to a factor of 4.3 with respect to ILU(0) preconditioning (which is used at the moment in the commercial code). The performance of the preconditioner shows no degradation with respect to the number of elements, the size of the time step, high migration ratios, or strong heterogeneities and anisotropies in the porous media. Parallel tests show no degradation with respect to the number of processors either, and lead to a speedup of 2.8 on 4 processors.


 
 

Last modified on June 13th, 2002.