Qualitative computing

General presentation

For a scientist or an engineer, it is of crucial importance to have confidence in the results of large codes and in the robustness of the models used. Indeed, the emergence of supercomputers has allowed numerical simulation to replace physical experiments on a massive scale, even for problems at the frontiers of instability. However, tools which provide information on the quality and validity of computed results are very rarely used in industry. To fill this gap, the Qualitative Computing Group has been set up within the Parallel Algorithms Project. Its objectives are to:
  • explore problems at the edge of computability,
  • provide engineers and scientists with analysis tools, both quantitative and qualitative, to help them extract the appropriate information from seemingly wrong results,
  • solve industrial problems which have at least one of the following characteristics: physically unstable model, pathological numerical behaviour, very large scale, critical flow of information, self-evolving complexity.
The activity draws upon well-recognized expertise in the field of eigenproblems for matrices and operators, and more generally in nonlinear problems with algebraic singularities. Industrial collaborations are ongoing with EADS, EDF-GDF and CNES.

Scientific description

It is well known that a computation becomes unstable (in particular, computed results differ significantly from exact ones) in the neighbourhood of a singularity. Moreover, a singularity is not generic: the perturbations generated by finite-precision arithmetic are such that one always solves a regular problem. The properties of the computation are nevertheless ruled by the underlying singularity, and the distance to the mathematical singularity, as viewed by the computer, is an essential parameter for interpreting the computation. The role of singularities should not be dismissed as a mathematical oddity or a modelling clumsiness: they play an essential role in the area of theoretical physics concerned with the reduction of theories, and they underlie some of the most difficult and intensively studied problems in physics today.

On the other hand, singularities of the same mathematical nature may influence the computation to varying degrees, depending on their condition numbers, which are themselves governed by parameters of the problem, such as the departure from normality of a matrix (itself depending on a physical parameter such as the Reynolds number in CFD, the resistivity in MHD, etc.). This influence, local at first, can gradually become global as the relevant parameter evolves. It results in the emergence of new collective phenomena when certain singularities which are separated in exact arithmetic are nevertheless viewed as connected in finite-precision computation, as well as in physical reality. This is the essence of a new trend in the interpretation of some divergence phenomena, in CFD for example (based on pseudospectra rather than exact spectra, amongst other tools).
The standard tool for investigating the stability properties of a solution process is linear perturbation theory, but in difficult cases its domain of validity may lie below the natural threshold imposed by machine precision.
There is at present no general theory which describes the effects of large perturbations (beyond linear asymptotics) on the solution of general nonlinear equations in any number of variables. One way to circumvent this theoretical gap is to use the computer itself to experiment with the effects of instability, by working on samples of randomly perturbed problems. From a statistical analysis of the samples of data and solutions computed by means of a reliable algorithm, one can retrieve the essential characteristics of the singularities as they are seen by the computer.
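As an illustration of this experimental approach, here is a minimal Python sketch (the test matrix and sample size are illustrative, not taken from PRECISE): a defective eigenvalue of Jordan order k, subjected to random perturbations of size t, scatters the computed eigenvalues at distance of order t**(1/k), so the statistics of the sample reveal the structure of the underlying singularity.

    import numpy as np

    # A 2x2 Jordan block: a defective (non-generic) eigenvalue at 0.
    A = np.array([[0.0, 1.0],
                  [0.0, 0.0]])

    rng = np.random.default_rng(0)
    t = 1e-8                                  # perturbation size
    eigs = []
    for _ in range(1000):
        E = rng.standard_normal((2, 2))
        E *= t / np.linalg.norm(E)            # normalize so that ||E|| = t
        eigs.append(np.linalg.eigvals(A + E))

    radii = np.abs(np.concatenate(eigs))
    # For a Jordan block of order k, the eigenvalues of A + E lie at
    # distance O(t**(1/k)) from the exact eigenvalue; here k = 2.
    print("mean |lambda| =", radii.mean(), "   t**0.5 =", t**0.5)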
Qualitative Computing is proposed as a rigorous theory of computability in finite precision. Some of its salient features are:
  • study of the stability and reliability of numerical methods, specifically in the neighbourhood of algebraic singularities,
  • design of reliable codes for computing some eigenelements of very large nonnormal matrices (based on the Arnoldi-Tchebycheff method), which are robust to high nonnormality,
  • influence of high nonnormality on the reliability of iterative methods in linear algebra,
  • identification of physical applications giving birth to highly nonnormal matrices, in CFD in particular,
  • software tools for assessing the quality (including reliability and robustness) of numerical methods,
  • understanding of Krylov methods in finite precision,
  • a self-evolving theory of Information and Complexity set in the space of quaternions and octonions over R.

The toolbox PRECISE

PRECISE (PRecision Estimation and Control In Scientific and Engineering computing) is a MATLAB toolbox which offers a variety of software tools to assess the quality of numerical methods and software. It includes condition numbers, backward errors, an arithmetic/numeric quality index, distances to singularity and to the dangerous border, perturbed spectra, and spectral portraits. It has been upgraded into commercial-quality software (Fortran 77) in the framework of PINEAPL, is available as freeware, and is distributed by CERFACS and NAG over the web.
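The spectral portrait mentioned above maps, over a grid of the complex plane, the quantity log10(||A|| / sigma_min(A - zI)); its level curves bound the pseudospectra of A. A minimal dense-matrix sketch in Python (pure NumPy; this is not the PRECISE implementation, and the test matrix is illustrative):

    import numpy as np

    def spectral_portrait(A, re, im):
        """log10(||A|| / sigma_min(A - z I)) on a grid of points z."""
        normA = np.linalg.norm(A, 2)
        n = A.shape[0]
        P = np.empty((len(im), len(re)))
        for i, y in enumerate(im):
            for j, x in enumerate(re):
                z = complex(x, y)
                smin = np.linalg.svd(A - z * np.eye(n),
                                     compute_uv=False)[-1]
                P[i, j] = np.log10(normA / smin)
        return P

    # A highly nonnormal test matrix (large departure from normality).
    A = np.triu(np.ones((20, 20)), 1) + np.diag(np.linspace(-1.0, 1.0, 20))
    P = spectral_portrait(A, np.linspace(-2, 2, 60), np.linspace(-2, 2, 60))

Displaying the array P as a contour map then shows how far the spectrum of A can spread under perturbations of a given size.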

Eigensolvers

Besides PRECISE, the heart of our software production is eigensolvers.
  • ARNCHEB is a Fortran 77 code for the computation of some eigenvalues of large, sparse, nonnormal matrices. It implements the Arnoldi-Tchebycheff method (a minimal sketch of the underlying Arnoldi process follows this list). Two versions are available upon request from CERFACS: one for real matrices and one for complex matrices.
  • A parallel code has been designed for the computation of spectral portraits on distributed-memory machines using PVM. The spectral portrait is the tool of choice for gaining insight into the spectrum of highly nonnormal matrices. It is available as module 2 of PRECISE.
  • ISABeL, a hybrid eigensolver developed with IRISA.
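As a reminder of the kernel on which these codes are built, here is a minimal Arnoldi process in Python (the Chebyshev acceleration and the restarting strategy of ARNCHEB are deliberately omitted):

    import numpy as np

    def arnoldi(A, v, m):
        """m steps of the Arnoldi process: A @ V[:, :m] = V @ H.

        V (n x (m+1)) has orthonormal columns and H ((m+1) x m) is
        upper Hessenberg; the eigenvalues of H[:m, :m] (the Ritz
        values) approximate some eigenvalues of A.
        """
        n = len(v)
        V = np.zeros((n, m + 1))
        H = np.zeros((m + 1, m))
        V[:, 0] = v / np.linalg.norm(v)
        for j in range(m):
            w = A @ V[:, j]
            for i in range(j + 1):              # modified Gram-Schmidt
                H[i, j] = V[:, i] @ w
                w -= H[i, j] * V[:, i]
            H[j + 1, j] = np.linalg.norm(w)
            if H[j + 1, j] < 1e-12:             # "happy breakdown": an exact
                return V[:, :j + 1], H[:j + 1, :j + 1]  # invariant subspace
            V[:, j + 1] = w / H[j + 1, j]
        return V, H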

Impact on industry

Here are some of the collaborations already well established with industry:
  • Aerospatiale, Division Avions (Aircraft Division), Toulouse, since February 1990, in the framework of the "aircraft of the future" (avion du futur) programme: computation of eigenvalues of very large and highly nonnormal matrices (order > 100000) with significant spectral instability.
  • ONERA Chatillon (Helicopter Division), for the study of vibrations of helicopter blades, since 1993.
  • Thomson-CSF, Laboratoire Central de Recherches, since March 1992, in electromagnetism (electromagnetic waveguides), acoustics and optoelectronics (coupled partial differential equations, modelling of unstable physical phenomena, qualitative and/or quantitative results).
  • CNES Toulouse, since September 1993: study of the quality of numerical codes used for solving nonlinear least-squares problems arising in orbitography.
  • Electricité de France, Clamart, since 1999, on inner-outer iterations.

The European Project PINEAPL (1996-1998)

The PINEAPL project (Parallel Industrial NumErical Applications and Portable Libraries, Fourth Framework Project 20018) was a coordinated effort to produce a general-purpose library of parallel numerical software suitable for a wide range of computationally intensive industrial applications, and to port several application codes which use this library to parallel computers. PRECISE was translated into Fortran to allow larger problems to be handled, in line with industrial needs. It was used to test each item of numerical software produced during the project and distributed by NAG. NAG is the leader of the consortium, which includes British Aerospace, CERFACS, Thomson-CSF, CPS (Napoli), the Danish Hydraulic Institute, IBM SEMEA, the University of Manchester, Math-Tech, and PIAGGIO.

Collaboration with CNES on the Jason Project (1999-2000)

The GPS phase measurement can be used for the precise localisation of beacons, satellites, etc. Unfortunately, this phase measurement is only known up to an additive term called the real ambiguity. Experiments show that, in order to obtain good accuracy on the localisation, one can take advantage of the fact that certain linear combinations of real ambiguities are known to yield integer numbers. This leads to a mixed least-squares problem with real and integer variables which is difficult and expensive to solve in practice. The role of CERFACS was to help CNES make a sensible algorithmic choice for the estimation of integer ambiguities, in the context of the search for optimal orbit parameters for a low-orbit satellite with GPS phase measurements, in order to achieve centimetre-level accuracy.
We studied the LAMBDA method, which provides the least-squares estimates for the ambiguities that are the best unbiased estimates. To overcome its impractically slow convergence, an exponential speed-up was achieved for the search algorithm. With such a speed-up, the LAMBDA method is suited to processing data from regional networks of GPS stations, as well as from low Earth orbiting satellites.
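To make the underlying problem concrete: given the real-valued least-squares solution a_hat of the ambiguities and its covariance Q, the integer step minimizes the quadratic form (a_hat - a)^T Q^{-1} (a_hat - a) over integer vectors a. A brute-force Python sketch follows (all names are illustrative; the decorrelation and pruned search that make LAMBDA practical are precisely what is omitted here):

    import itertools
    import numpy as np

    def integer_ls(a_hat, Qinv, radius=2):
        """Integer least squares by enumerating a box of candidates
        around the rounded float solution. Exponential in the
        dimension: LAMBDA replaces this enumeration by a decorrelating
        change of variables plus a pruned search."""
        center = np.rint(a_hat).astype(int)
        best, best_val = None, np.inf
        for offset in itertools.product(range(-radius, radius + 1),
                                        repeat=len(a_hat)):
            a = center + np.array(offset)
            r = a_hat - a
            val = r @ Qinv @ r
            if val < best_val:
                best, best_val = a, val
        return best

    # Strongly correlated ambiguities: componentwise rounding can fail.
    Q = np.array([[1.0, 0.9],
                  [0.9, 1.0]])
    a_hat = np.array([1.4, 2.6])
    print(integer_ls(a_hat, np.linalg.inv(Q)))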

Inner-outer iterations and software coupling

Embedded linear solvers are increasingly used in large-scale scientific computing. An important issue is to understand how to tune the level of accuracy of the inner solver in order to guarantee the convergence of the outer solver at optimal overall cost. When the outer solver is a Newton-like method then, as one might expect, the accuracy of the inner solver needs to be increased as the global convergence proceeds. On the contrary, and quite surprisingly, when the outer iteration is a Krylov-type method, only the first iterates need to be computed with full accuracy, and this accuracy can be relaxed as the convergence proceeds. This astonishing robustness of Krylov methods to inaccuracy in the matrix-vector products is potentially of great interest for large-scale industrial computations (domain decomposition, multipole methods, etc.). A workshop on this theme was organized with CNES, EDF and EADS at CERFACS on 11-12 September 2000.
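A toy illustration of this relaxation strategy in Python, using SciPy's GMRES with a deliberately inexact operator (the test matrix and tolerances are illustrative, and this is a sketch of the idea, not the strategy studied by the group): the perturbation added to each matrix-vector product is allowed to grow in inverse proportion to the current residual norm, and the outer Krylov iteration still reaches its target accuracy.

    import numpy as np
    from scipy.sparse.linalg import LinearOperator, gmres

    rng = np.random.default_rng(0)
    n = 200
    A = np.eye(n) + 0.5 * rng.standard_normal((n, n)) / np.sqrt(n)
    b = rng.standard_normal(n)

    state = {"res": 1.0}        # current relative residual norm

    def inexact_matvec(x):
        # Perturbation of size ~ eps / ||r_k||: it GROWS as the outer
        # residual decreases, as described in the paragraph above.
        eta = 1e-10 / max(state["res"], 1e-16)
        return A @ x + eta * np.linalg.norm(x) * rng.standard_normal(n)

    def track(rnorm):           # called by GMRES at each inner iteration
        state["res"] = rnorm

    # 'rtol' requires a recent SciPy (older versions call it 'tol').
    x, info = gmres(LinearOperator((n, n), matvec=inexact_matvec), b,
                    rtol=1e-8, callback=track, callback_type="pr_norm")
    print("true residual:", np.linalg.norm(b - A @ x) / np.linalg.norm(b))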

Computation with hypercomplex numbers

We compare, from the point of view of computing efficiency, the computing power of the four real division algebras: the real and the complex numbers, the quaternions and the octonions. Our experiments support the conjecture that computation with octonions is to Biology what computation with quaternions is to Physics.
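For reference, arithmetic in these algebras is easy to set down explicitly; below is a Python sketch of the Hamilton product for quaternions (an illustration only; the octonions would be obtained by one more Cayley-Dickson doubling of this construction):

    import numpy as np

    def qmul(p, q):
        """Hamilton product of quaternions p = (w, x, y, z) and q.
        Associative but not commutative; the octonions, one
        Cayley-Dickson doubling further, lose associativity too."""
        w1, x1, y1, z1 = p
        w2, x2, y2, z2 = q
        return np.array([
            w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2,
        ])

    i = np.array([0.0, 1.0, 0.0, 0.0])
    j = np.array([0.0, 0.0, 1.0, 0.0])
    print(qmul(i, j), qmul(j, i))    # i*j = k but j*i = -k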

Convergence of Krylov methods in finite precision

Since the 1980s, Krylov methods have been the most heavily used methods to solve the two problems
Ax = b   and   Ax = λx, x ≠ 0,
where the matrix A is very large.
However, the understanding of their numerical behaviour is far from satisfactory.
We are studying a radically new viewpoint on this long-standing enigma. We reinterpret the "happy breakdown" as a singular event in the algorithm, to be sought for rather than avoided, because of the information it provides. A new interpretation of convergence in finite precision is derived.
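A toy numerical illustration of the breakdown event itself (pure NumPy; the reinterpretation is the subject of the research, not encoded here): if the starting vector lies in an invariant subspace of dimension k, the Arnoldi subdiagonal entry vanishes at step k, and the Krylov space at that point carries exact eigen-information.

    import numpy as np

    A = np.diag([1.0, 2.0, 3.0, 4.0])
    b = np.array([1.0, 1.0, 0.0, 0.0])     # lies in a 2D invariant subspace

    V = [b / np.linalg.norm(b)]
    for j in range(4):
        w = A @ V[j]
        for v in V:                        # orthogonalize against the basis
            w -= (v @ w) * v
        h = np.linalg.norm(w)
        print(f"step {j + 1}: h = {h:.2e}")
        if h < 1e-12:                      # happy breakdown at step 2
            break
        V.append(w / h)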

Other topics under investigation

The recent results of the Qualitative Computing Group cover a large spectrum of topics in Numerical Linear Algebra and Applied Functional Analysis, as exemplified by the list below:
  • Sensitivity analysis by means of Fréchet derivatives for algorithms such as Arnoldi, or for matrix factorizations such as LU, QR, Cholesky and polar (see the sketch after this list).
  • Chaotic behaviour exhibited in finite precision by the power method, even though its convergence is proved in exact arithmetic.
  • Normwise and homotopic perturbations, in relation to uncertain and inexact computation respectively.
  • The definitions of pseudospectra of closed operators in Banach spaces.
  • Inexact Krylov methods.
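As an example related to the first item above, the derivative of a simple eigenvalue lambda in the direction E is y^T E x / (y^T x), where x and y are the right and left eigenvectors; a small y^T x is precisely the signature of an ill-conditioned eigenvalue of a nonnormal matrix. A minimal Python check against a finite difference (the test matrices are illustrative):

    import numpy as np

    rng = np.random.default_rng(1)
    n = 6
    A = rng.standard_normal((n, n))
    E = rng.standard_normal((n, n))

    lam, X = np.linalg.eig(A)
    mu, Y = np.linalg.eig(A.T)            # left eigenvectors of A
    k = np.argmax(lam.real)               # follow one simple eigenvalue
    j = np.argmin(np.abs(mu - lam[k]))    # matching left eigenvector
    x, y = X[:, k], Y[:, j]

    # First-order (derivative-based) prediction of the perturbation.
    dlam = (y @ E @ x) / (y @ x)

    h = 1e-7                              # finite-difference step
    lam_h = np.linalg.eig(A + h * E)[0]
    fd = (lam_h[np.argmin(np.abs(lam_h - lam[k]))] - lam[k]) / h
    print(dlam, fd)                       # the two estimates should agree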

Selected talks

  • Lecture given at CNES (CCT Calcul Scientifique et Modélisation), December 8, 1999, by V. Frayssé and L. Giraud (in French).
  • Intensive training course given at CNES, "Outils de programmation efficace et robuste pour le logiciel scientifique" (Tools for efficient and robust programming of scientific software), March 27-28, 2000, by V. Frayssé and L. Giraud (in French).