Sparse Days Meeting 2004 at CERFACS.

June 2nd-3rd, 2004

===

 

A two-day meeting on sparse matrix matters at CERFACS in Toulouse, France, June 2nd-3rd, 2004.




 

Programme of the 2nd of June (Wednesday)


 
14.00 - 14.30 Yousef Saad: Complete pivoting ILU: a multilevel framework
14.30 - 15.00 Nancy Nichols: Inexact Gauss-Newton methods with applications in numerical weather and ocean prediction  Slides
15.00 - 15.30 Jasper van den Eshof: Preconditioning Lanczos approximations to the matrix exponential  Slides
15.30 - 16.00 Coffee break
16.00 - 16.30 Chen Greif: Augmented Lagrangian techniques for solving saddle point linear systems
16.30 - 17.00 Michael Hagemann: Symmetric weighted matchings for LDL^T preconditioning of symmetric indefinite matrices  Slides
20.30 - 23.30 Dinner

Programme of the 3rd of June (Thursday)


 
10.00 - 11.00 Gene Golub: Adaptive methods for updating/downdating page ranks  Slides
11.00 - 11.30 Coffee break
11.30 - 12.00 Stig Skelboe: Stiff systems of ODEs and sparse matrix techniques  Slides
12.00 - 12.30 Mike Osborne: V-invariant methods, generalised least squares problems, and the Kalman filter
12.30 - 14.00 Lunch
14.00 - 14.30 Etienne Loute: Sparse Gaussian elimination as a computational paradigm
14.30 - 15.00 Omer Meshar: A new sparse out-of-core symmetric indefinite factorization algorithm  Slides
15.00 - 15.30 Coffee break
15.30 - 16.00 Jennifer Scott: A numerical evaluation of sparse direct solvers for the solution of large, sparse, symmetric linear systems of equations  Slides
16.00 - 16.30 Dror Irony: The snap-back pivoting method for symmetric banded indefinite matrices  Slides

 

Seminars will be organized during the sparse week to stimulate informal discussion among CERFACS researchers and visitors.

Wednesday, June 2nd: Arno Swart: Internal waves in enclosed geometries  Slides

 

The list of participants is available.
 

Some Abstracts

Internal waves in enclosed geometries
Arno Swart

Utrecht University
The Netherlands
email: swart@math.uu.nl



In this talk I will propose a discretisation and regularisation scheme for the equation describing internal waves in closed geometries. The governing equation is, in terms of the streamfunction, a wave equation in spatial coordinates. The solutions to the equation together with its boundary conditions are extremely sensitive to the shape of the boundary. For this reason the problem is often termed 'ill-posed'. A further challenge is the fact that solutions may have a fractal structure. Numerical solutions will be compared with analytical solutions and laboratory experiments.


Complete pivoting ILU: a multilevel framework
Yousef Saad

University of Minnesota
email: saad@cs.umn.edu



This talk will discuss a preconditioning method based on combining two-sided permutations with a multilevel approach. The nonsymmetric permutation exploits a greedy strategy to put large entries of the matrix on the diagonal of the upper leading submatrix. The method can be regarded as a complete pivoting version of the incomplete LU factorization. This leads to an effective incomplete factorization preconditioner for general nonsymmetric, irregularly structured, sparse linear systems. The algorithm is implemented in a multilevel fashion and borrows from the Algebraic Recursive Multilevel Solver (ARMS) framework. Numerical experiments will be reported on problems that are known to be difficult to solve by iterative methods.
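
As background to the kind of greedy strategy described above, here is a minimal Python sketch (our illustration, not Saad's algorithm): rows are matched to columns in order of decreasing entry magnitude, so that the matched entries form a large diagonal for the leading block, while unmatched rows and columns would be deferred to the next level of the multilevel scheme.

```python
import numpy as np

def greedy_diagonal_permutation(A):
    """Greedily match rows to columns so that large entries of A land on
    the diagonal of the permuted leading submatrix (illustration only;
    dense for simplicity, whereas the real method works on sparse A)."""
    n = A.shape[0]
    # Visit all entries in order of decreasing magnitude.
    order = np.argsort(-np.abs(A), axis=None)
    rows, cols = np.unravel_index(order, A.shape)
    row_used, col_used = set(), set()
    match = []
    for i, j in zip(rows, cols):
        if i not in row_used and j not in col_used and A[i, j] != 0:
            match.append((i, j))
            row_used.add(i)
            col_used.add(j)
    # Matched rows/columns come first; the rest would form the next level.
    p = [i for i, _ in match] + [i for i in range(n) if i not in row_used]
    q = [j for _, j in match] + [j for j in range(n) if j not in col_used]
    return np.ix_(p, q)

# Example: after permuting, the largest entries sit on the diagonal.
A = np.array([[0.1, 5.0, 0.0],
              [3.0, 0.2, 0.0],
              [0.0, 0.1, 4.0]])
print(A[greedy_diagonal_permutation(A)])   # diagonal becomes 5, 4, 3
```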


Inexact Gauss-Newton methods with applications in numerical weather and ocean prediction
Nancy Nichols (with Serge Gratton from CERFACS, and Amos S. Lawless from Reading)

The University of Reading
Reading, UK
email: n.k.nichols@reading.ac.uk



For the very large nonlinear systems that arise in meteorology and oceanography, the available observations are not sufficient to initialize a numerical forecasting model. Data assimilation is a technique for combining the measured observations with the model predictions in order to generate accurate estimates of the expected system states - both current and future. Four-dimensional variational assimilation techniques (4D-Var) are attractive because they deliver the statistically best linear unbiased estimate of the model solution given the available observations and their error covariances. The optimal estimates minimize an objective function that measures the mismatch between the model predictions and the observed system states, weighted by the inverses of the covariance matrices. The model equations are treated as strong constraints.
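
For reference, the strong-constraint 4D-Var objective function described here has the standard form below (our notation: x_b is the background state, B and R_i the background- and observation-error covariances, H_i the observation operators, and x_i the model state at time i obtained by propagating x_0 with the model M):

```latex
J(x_0) = \tfrac{1}{2}\,(x_0 - x_b)^T B^{-1} (x_0 - x_b)
       + \tfrac{1}{2}\sum_{i=0}^{N} \bigl(y_i - H_i(x_i)\bigr)^T R_i^{-1}
         \bigl(y_i - H_i(x_i)\bigr),
\qquad x_i = M_{0 \to i}(x_0).
```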

Gradient methods are typically used to solve the resulting large-scale constrained optimization problem. Currently popular is the 'incremental 4D-Var' procedure, in which a sequence of linearly constrained, convex, quadratic cost functions is minimized. We show here that this procedure approximates a Gauss-Newton method for treating nonlinear least squares problems. We review the known convergence theory for this method and then investigate the effects of approximations on the convergence of the procedure. Specifically, we consider the effects of truncating the inner iterations and of using approximate linear constraints in the inner loop. To illustrate the behaviour of the method, we apply incremental 4D-Var schemes to a discrete numerical model of the one-dimensional nonlinear shallow water equations.
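
The outer/inner structure just described can be illustrated with a minimal sketch (our notation, not the authors' code): the outer loop relinearizes the residual, and the inner loop solves the resulting quadratic problem only approximately, here by a few CG steps on the normal equations, which is one way to realize the truncation studied in the talk.

```python
import numpy as np

def inexact_gauss_newton(f, jac, x0, n_outer=10, n_inner=5):
    """Gauss-Newton for min ||f(x)||^2 with truncated inner iterations,
    mirroring the outer/inner structure of incremental 4D-Var (sketch).
    f(x) returns the residual vector, jac(x) its Jacobian."""
    x = x0.copy()
    for _ in range(n_outer):                 # outer loop: relinearize
        r, J = f(x), jac(x)
        # Inner loop: CG on the normal equations J^T J s = -J^T r,
        # truncated after n_inner steps (the approximation of interest).
        s = np.zeros_like(x)
        g = J.T @ r                          # gradient of the quadratic at s = 0
        d = -g
        for _ in range(n_inner):
            if g @ g < 1e-30:
                break
            Jd = J @ d
            alpha = (g @ g) / (Jd @ Jd)
            s = s + alpha * d
            g_new = g + alpha * (J.T @ Jd)
            d = -g_new + ((g_new @ g_new) / (g @ g)) * d
            g = g_new
        x = x + s                            # outer update
    return x

# Toy usage: a small nonlinear least squares problem.
f = lambda x: np.array([x[0]**2 - 2.0, x[0]*x[1] - 1.0, x[1] - 0.5])
jac = lambda x: np.array([[2*x[0], 0.0], [x[1], x[0]], [0.0, 1.0]])
print(inexact_gauss_newton(f, jac, np.array([1.0, 1.0])))
```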


Preconditioning Lanczos approximations to the matrix exponential
Jasper van den Eshof (with Marlis Hochbruck)

Heinrich Heine University
Duesseldorf
email: eshof@am.uni-duesseldorf.de



The Lanczos method is an iterative procedure to compute an orthogonal basis for the Krylov subspace generated by a symmetric matrix A and a starting vector v. An interesting application of this method is the computation of the matrix exponential exp(-t A)v. This vector plays an important role in the solution of parabolic equations where A results from some form of discretization of an elliptic operator.
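
For concreteness, the standard (unpreconditioned) Lanczos approximation to exp(-tA)v that the talk starts from can be sketched as follows; the preconditioned variant proposed in the talk modifies this scheme and is not shown here.

```python
import numpy as np
from scipy.linalg import expm  # dense exponential of the small matrix T_m

def lanczos_expv(A, v, t, m):
    """Approximate exp(-t*A) @ v with m Lanczos steps (A symmetric).
    Sketch without breakdown handling or reorthogonalization."""
    n = v.size
    V = np.zeros((n, m))
    alpha, beta = np.zeros(m), np.zeros(m - 1)
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(m):
        w = A @ V[:, j]
        alpha[j] = V[:, j] @ w
        w -= alpha[j] * V[:, j]
        if j > 0:
            w -= beta[j - 1] * V[:, j - 1]
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            V[:, j + 1] = w / beta[j]
    # exp(-tA) v  ~  ||v|| * V_m * exp(-t T_m) * e_1
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return np.linalg.norm(v) * (V @ expm(-t * T)[:, 0])

# Example: 1D Laplacian; compare with the exact dense result.
n = 200
A = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
v = np.random.rand(n)
print(np.linalg.norm(lanczos_expv(A, v, 0.1, 30) - expm(-0.1*A) @ v))
```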

In the present talk we will argue that for these applications the convergence behavior of this method can be unsatisfactory. We will propose a modified method that resolves this by a simple preconditioned transformation, at the cost of an inner-outer iteration. A priori error bounds are presented that are independent of the norm of A. This shows that the worst-case convergence speed is independent of the mesh width in the spatial discretization of the elliptic operator.

We discuss, furthermore, a posteriori error estimation and the tuning of the coupling between the inner and outer iteration. We conclude with several numerical experiments with the proposed method.


Augmented Lagrangian techniques for solving saddle point linear systems
Chen Greif (with Gene Golub and Jim Varah)

The University of British Columbia
Vancouver, Canada
email: greif@cs.ubc.ca



We perform an algebraic analysis of a generalization of the augmented Lagrangian method for solution of saddle point linear systems. It is shown that in cases where the (1,1) block is singular, specifically semidefinite, a low-rank perturbation that minimizes the condition number of the perturbed matrix while maintaining sparsity is an effective approach. The vectors used for generating the perturbation are columns of the constraint matrix that form a small angle with the null-space of the original (1,1) block. Block preconditioning techniques of a similar flavor are also discussed and analyzed, and the theoretical observations are illustrated and validated by numerical results.
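
For reference, the classical augmented Lagrangian modification that this work generalizes replaces the (1,1) block as below (standard identity; both systems have the same solution for any weight matrix W, and the talk concerns choosing a low-rank, sparsity-preserving W when A is singular):

```latex
\begin{pmatrix} A & B^T \\ B & 0 \end{pmatrix}
\begin{pmatrix} x \\ y \end{pmatrix}
=
\begin{pmatrix} f \\ g \end{pmatrix}
\quad\longrightarrow\quad
\begin{pmatrix} A + B^T W B & B^T \\ B & 0 \end{pmatrix}
\begin{pmatrix} x \\ y \end{pmatrix}
=
\begin{pmatrix} f + B^T W g \\ g \end{pmatrix}.
```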


Symmetric weighted matchings for LDL^T preconditioning of symmetric indefinite matrices
Michael Hagemann (with Olaf Schenk)

Departement Informatik,
Universität Basel, Klingelbergstr. 50-70, 4056 Basel
Switzerland
email: michael.hagemann@unibas.ch, olaf.schenk@unibas.ch



The benefit of non-symmetric reorderings based on maximum weighted matchings for the factorization and preconditioning of general linear systems is by now well known, and such reorderings are an important ingredient of current state-of-the-art direct solvers.

In this talk we present symmetric preconditioning methods for symmetric indefinite matrices based on maximum weighted matchings, following reports by Duff, Gilbert and Pralet [1,2]. The emerging structure is exploited in a modified incomplete LDL^T factorization scheme that uses static 1x1 and 2x2 diagonal block pivoting. The resulting symmetric incomplete factorizations are used as preconditioners for Krylov-subspace linear solvers.

The objective of the reordering is to maximize the magnitude of the diagonal elements or of the elements directly alongside the diagonal. For a large class of matrices, such reorderings allow the factorization method to choose a numerically satisfactory 1x1 or 2x2 pivot.
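
The flavor of this preprocessing step can be shown with a small sketch (our illustration; the talk's symmetrized matchings and the 1x1/2x2 block pivoting built on them are not shown). It uses SciPy's sparse bipartite matching routine, assuming SciPy >= 1.6 and no explicitly stored zeros in the matrix:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import min_weight_full_bipartite_matching

def matching_permutation(A):
    """Column permutation putting large entries of sparse A on the
    diagonal via a maximum-weight matching: maximizing prod |a_ij|
    equals minimizing the sum of -log(|a_ij| / max|a_ij|) >= 0."""
    C = sp.csr_matrix(A, copy=True)
    C.data = -np.log(np.abs(C.data) / np.abs(C.data).max())
    row, col = min_weight_full_bipartite_matching(C)
    p = np.empty_like(col)
    p[row] = col          # row i is matched to column p[i]
    return p

# A[:, p] has the matched (large) entries on its diagonal.
A = sp.csr_matrix(np.array([[0.0, 3.0], [2.0, 0.1]]))
print(A[:, matching_permutation(A)].toarray())
```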

We show the effectiveness of the resulting method on a number of real-world examples ranging from non-linear elasticity to non-linear optimization. Comparisons with direct methods and other iterative solution methods show the robustness and competitiveness of the approach.

As a case study, we show the application of the preconditioner to a series of non-linear KKT optimization matrices [3], where it provides favorable performance with a set of standard parameters.

[1] I. S. Duff and J. R. Gilbert, Maximum-weighted matching and block pivoting for symmetric indefinite systems. Abstract book of Householder Symposium XV, pp. 73-75, June, 2002.

[2] I. S. Duff and S. Pralet, Experiments in preprocessing and scaling symmetric problems for multifrontal solutions. Technical Report WN/PA/04/17, CERFACS, France, 2004.

[3] A. Wächter and L. T. Biegler, On the Implementation of a Primal-Dual Interior Point Filter Line Search Algorithm for Large-Scale Nonlinear Programming. Research Report, IBM T. J. Watson Research Center, Yorktown Heights, NY, March 2004.



Adaptive methods for updating/downdating page ranks
Gene Golub

Computer Science Department
Stanford University
Stanford, CA 94305
USA
email: golub@sccm.stanford.edu



Google's PageRank algorithm for web search involves computing the principal eigenvector of a stochastic matrix describing the hyperlink structure of the World Wide Web. Since the web is a highly dynamic structure, with new pages constantly being added or removed, updating is an important problem in web search. We address the problem of efficiently recomputing the principal eigenvector of the web hyperlink matrix after small perturbations in the link structure of the web.

We present an algorithm that is motivated by the empirical observation that the convergence patterns of web pages in the PageRank algorithm have a nonuniform distribution. Specifically, many pages converge to their true PageRank quickly, while relatively few pages take a much longer time to converge. This algorithm, called Adaptive PageRank, avoids redundant computations associated with the PageRanks of pages that have already converged.
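
A minimal sketch of this idea (our illustration, not the authors' implementation) freezes the PageRank values of pages once they stop changing; in a real implementation the matrix rows of converged pages would be skipped entirely, which is where the savings come from.

```python
import numpy as np

def adaptive_pagerank(P, alpha=0.85, tol=1e-10, eps=1e-8):
    """Power iteration for PageRank that freezes converged components.
    P is column-stochastic: P[i, j] = 1/outdeg(j) if page j links to i."""
    n = P.shape[0]
    x = np.full(n, 1.0 / n)
    active = np.ones(n, dtype=bool)
    while True:
        # Full mat-vec shown for clarity; the adaptive saving comes from
        # recomputing only the rows of still-active pages.
        x_new = alpha * (P @ x) + (1.0 - alpha) / n
        x_new[~active] = x[~active]            # frozen pages keep their value
        active &= np.abs(x_new - x) >= eps * np.abs(x_new)
        if np.abs(x_new - x).sum() < tol or not active.any():
            return x_new / x_new.sum()
        x = x_new

# Tiny 3-page web: 0 -> 1, 0 -> 2, 1 -> 2, 2 -> 0.
P = np.array([[0.0, 0.0, 1.0],
              [0.5, 0.0, 0.0],
              [0.5, 1.0, 0.0]])
print(adaptive_pagerank(P))
```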

We show empirically that Adaptive PageRank speeds up the computation of PageRank even in the standard case, and that it is much more effective in the context of updates.


Stiff systems of ODEs and sparse matrix techniques
Stig Skelboe

Department of Computer Science
University of Copenhagen (DIKU)
Universitetsparken 1, 2100 Copenhagen
Denmark
email: stig@diku.dk



Sparse matrix techniques have played a prominent role in solution algorithms for stiff systems of ODEs for many years. This talk describes how classical techniques for reordering sparse matrices into block-triangular form can be used to partition systems of ODEs into loosely coupled subsystems.

The objective of the partitioning is to permit the numerical integration of one time step to be performed as the solution of a sequence of small subsystems. This reduces the computational complexity compared to solving one large system and permits efficient parallel execution under appropriate conditions. The subsystems are integrated using methods based on low order backward differentiation formulas [1].
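
A sketch of the partitioning step (our illustration; thresholding of weak couplings, needed for genuinely loose coupling, is omitted): the strongly connected components of the Jacobian's dependency graph are exactly the diagonal blocks of the block-triangular form, and a topological order of these blocks gives an order in which the subsystems can be integrated one after another.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import connected_components

def partition_ode_system(J):
    """Split an ODE system x' = f(x) into subsystems from the sparsity
    pattern of the Jacobian J (J[i, j] != 0: x_i' depends on x_j).
    Returns index sets of the diagonal blocks, ordered so that each
    subsystem depends only on previously integrated ones."""
    G = sp.csr_matrix((np.abs(J) > 0).astype(int))
    n_blocks, labels = connected_components(G, directed=True,
                                            connection='strong')
    # Condensation edges: the block of i must come after the block of j.
    succ = [set() for _ in range(n_blocks)]
    indeg = np.zeros(n_blocks, dtype=int)
    r, c = G.nonzero()
    for bi, bj in zip(labels[r], labels[c]):
        if bi != bj and bi not in succ[bj]:
            succ[bj].add(bi)
            indeg[bi] += 1
    # Kahn's algorithm for a topological order of the blocks.
    order, stack = [], [b for b in range(n_blocks) if indeg[b] == 0]
    while stack:
        b = stack.pop()
        order.append(b)
        for nb in succ[b]:
            indeg[nb] -= 1
            if indeg[nb] == 0:
                stack.append(nb)
    return [np.flatnonzero(labels == b) for b in order]

# Two one-way coupled subsystems: x0' depends on x0; x1' on x0 and x1.
J = np.array([[1.0, 0.0],
              [1.0, 1.0]])
print(partition_ode_system(J))   # [array([0]), array([1])]
```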

The chemical reaction kinetics equations from air pollution models are chosen as an example, since the solution of these equations accounts for a substantial fraction of the total computational effort [2].

[1] S. Skelboe, Accuracy of decoupled implicit integration formulas, SIAM Journal on Scientific Computing, Volume 21, Number 6, 2000, pp. 2206-2224.

[2] S. Skelboe and Z. Zlatev, Exploiting the natural partitioning in the numerical solution of ODE systems arising in atmospheric chemistry, Lecture Notes in Computer Science 1196, Springer, 1997. Proceedings of the First Workshop on Numerical Analysis and Applications (WNAA-96), Rousse, Bulgaria, June 24-27, 1996.



V-invariant methods, generalised least squares problems, and the Kalman filter
Mike Osborne (with Inge Soderkvist, Department of Mathematics, Lulea University of Technology, Sweden)

Centre for Mathematics and its Applications
Australian National University
Canberra, A.C.T. 0200
email: Mike.Osborne@maths.anu.edu.au



V-invariant methods for the generalised least squares problem extend the techniques based on orthogonal factorization for ordinary least squares to problems with multiscaled, even singular, covariances. These methods are summarised briefly here, and their ability to handle multiple scales is indicated. An application to a class of Kalman filter problems derived from generalised smoothing splines is considered. Evidence of severe ill-conditioning of the covariance matrices is demonstrated in several examples, which suggests that this is an appropriate application for the V-invariant techniques.
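
For context, the generalised least squares problem referred to is, in standard form (our notation; the inverse is only formal when V is singular, which is precisely the case the V-invariant methods are designed to handle):

```latex
\min_{x}\; (y - Ax)^T V^{-1} (y - Ax),
\qquad y = Ax + \varepsilon, \quad V = \operatorname{cov}(\varepsilon),
```

with ordinary least squares recovered for V = I.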


Sparse Gaussian elimination as a computational paradigm
Etienne Loute

CORE, Université Catholique de Louvain
34 voie du Roman pays, B-1348 Louvain-la-Neuve
Belgium
email: loute@core.ucl.ac.be



An abstraction of the Gaussian elimination algorithm for sparse, symmetric positive definite (pds) matrices, with its accompanying graph-theory paradigm, is proposed as a computational model for some classes of large structured problems. The elimination tree (e-tree) concept is an essential aspect of this model, as it embodies the possibilities for parallelization.
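
As a concrete anchor for the e-tree concept, here is a short sketch of the classical algorithm (Liu's, with path compression) for computing the elimination tree from the sparsity pattern of a symmetric matrix; this is the standard construction, not the speaker's code.

```python
import numpy as np
import scipy.sparse as sp

def elimination_tree(A):
    """Elimination tree of a sparse symmetric (pds) matrix.
    parent[j] is the e-tree parent of column j, or -1 for a root."""
    A = sp.csc_matrix(A)
    n = A.shape[0]
    parent = np.full(n, -1)
    ancestor = np.full(n, -1)        # path-compressed ancestors
    for j in range(n):
        for i in A.indices[A.indptr[j]:A.indptr[j + 1]]:
            i = int(i)
            while i != -1 and i < j: # climb from each entry above the diagonal
                next_i = ancestor[i]
                ancestor[i] = j      # compress the path towards j
                if next_i == -1:     # reached a root: j becomes its parent
                    parent[i] = j
                i = next_i
    return parent

# Arrowhead matrix: every column hangs off the last one.
A = np.array([[4, 0, 0, 1],
              [0, 4, 0, 1],
              [0, 0, 4, 1],
              [1, 1, 1, 4.0]])
print(elimination_tree(A))           # [3 3 3 -1]
```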

The model is a natural framework for block variants of the Cholesky factorization applied to sparse block structured pds matrices. An application to block banded structured systems with variable band width is presented. The underlying graphs are interval graphs and a minimum height e-tree is polynomially computable.

As a second use of the model, we present nested applications of primal and dual decomposition for block structured linear programs (LP). A master problem and subproblems acting as master/subproblem or as pure subproblem are nested according to the e-tree ordering on a row-block or column-block intersection graph of the problem block structure. The nested decomposition algorithms are described in terms of special subsets of vertices of the intersection graph, related to the e-tree ordering.

The model can also be applied to the solution of sparse block structured unsymmetric systems of linear equations, in particular to basis factorization for block structured LP within the revised simplex algorithm. An e-tree applied to the row-block intersection graph dictates to which blocks the columns must belong. Rules for assigning columns to pivot positions in blocks are derived from the model. These assignments depend on the blocks which own the columns. A column change, as in a simplex iteration, induces an amount of updating work proportional to the e-tree height.


A new sparse out-of-core symmetric indefinite factorization algorithm
Omer Meshar (with Sivan Toledo)

Tel-Aviv University
Israel
email: omerm@post.tau.ac.il



We present a new out-of-core sparse symmetric-indefinite factorization algorithm. The most significant innovation of the new algorithm is a dynamic partitioning method for the sparse factor. This partitioning method results in very low input-output traffic and allows the algorithm to run at high computational rates even though the factor is stored on a slow disk.

Our implementation compares well with both high-performance in-core sparse symmetric-indefinite codes and with a high-performance out-of-core sparse Cholesky code. More specifically, the new code provides a capability that none of these existing codes has: it can factor symmetric indefinite matrices whose factors are larger than main memory; it is somewhat slower, but not by much. For example, it factors, on a conventional 32-bit workstation, an indefinite finite-element matrix whose factor size is about 10 GB in less than an hour.


A numerical evaluation of sparse direct solvers for the solution of large, sparse, symmetric linear systems of equations
Jennifer Scott (with Nicholas I. M. Gould, Rutherford Appleton Laboratory, and Yifan Hu, Wolfram Research)

Rutherford Appleton Laboratory
UK
email: J.A.Scott@rl.ac.uk



In recent years a number of solvers for the direct solution of large sparse, symmetric linear systems of equations have been developed. These include solvers that are designed for the solution of positive-definite systems as well as those that are principally intended for solving indefinite problems. The available choice can make it difficult for users to know which solver is the most appropriate for their applications.

We report on the use of performance profiles as a tool for evaluating and comparing the performance of serial sparse direct solvers on an extensive set of large test problems taken from a range of practical applications. Our aim is to make recommendations as to the efficacy of the various packages.
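
Performance profiles, in the sense of Dolan and Moré, are easy to state: for each solver s, rho_s(tau) is the fraction of test problems on which solver s is within a factor tau of the best solver. A minimal sketch with made-up data (our illustration, not the paper's results):

```python
import numpy as np

def performance_profile(T, taus):
    """Dolan-Moré performance profiles.  T[i, s] is the cost (e.g. CPU
    time) of solver s on problem i, np.inf where the solver failed;
    assumes at least one solver succeeds on every problem.
    Returns rho[k, s] = fraction of problems on which solver s is within
    a factor taus[k] of the best solver."""
    best = T.min(axis=1, keepdims=True)        # best cost per problem
    ratios = T / best                          # performance ratios
    return np.array([(ratios <= tau).mean(axis=0) for tau in taus])

# Hypothetical timings: 3 problems x 2 solvers, one failure (inf).
T = np.array([[1.0, 2.0],
              [3.0, 1.5],
              [2.0, np.inf]])
print(performance_profile(T, taus=[1.0, 2.0, 4.0]))
```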


The snap-back pivoting method for symmetric banded indefinite matrices
Dror Irony (with Sivan Toledo)

Tel-Aviv University
Israel
email: irony@post.tau.ac.il



The four existing stable factorization methods for symmetric indefinite matrices suffer serious defects when applied to banded matrices. Partial pivoting (row or column exchanges) maintains a band structure in the reduced matrix and the factors, but destroys symmetry completely once an off-diagonal pivot is used. Two-by-two block pivoting maintains symmetry at all times, but quickly destroys the band structure. Gaussian reduction to tridiagonal also maintains symmetry but destroys the band structure. Orthogonal reductions to tridiagonal maintain both symmetry and the band structure, but are too expensive for linear-equation solvers.

We propose a new pivoting method, which we call snap-back pivoting. When applied to banded symmetric matrices, it maintains the band structure (like partial pivoting does), it keeps the reduced matrix symmetric (like 2-by-2 pivoting and reductions to tridiagonal) and the factors mostly symmetric (unlike any previous method), and it is fast.

In snap-back pivoting, if the next diagonal element is too small, the next pivoting step might be unsymmetric, leading to unsymmetry in the next row and column of the factors. But both the reduced matrix and the factors snap back to symmetry once the next step is completed.



 
 

Last modified on May 17th, 2004.