In this talk we shall give an introduction to error analysis, concentrating particularly on backward error analysis and conditioning, and discuss the implementation of these ideas in modern software, using the numerical linear algebra package LAPACK as a prime example.
The aim of an error analysis is to assess the accuracy of the computed solution to a problem produced by a numerical algorithm, whereas conditioning is concerned with measuring the sensitivity of the solution to perturbations in the data. A forward error analysis asks how close the computed solution is to the exact solution, while a backward error analysis asks how well the computed solution satisfies the problem to be solved. For many problems, particularly those of linear algebra, it is more natural to perform a backward error analysis. A knowledge of the backward error, together with a knowledge of the sensitivity of the problem, allows us to bound the forward error.
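To make the backward error idea concrete: for a linear system Ax = b, a common normwise backward error measure is eta(x_hat) = ||b - A*x_hat|| / (||A||*||x_hat|| + ||b||), and the forward error is then bounded roughly by the condition number of A times eta. The following is a minimal pure-Python sketch of this measure; the 2x2 matrix and the perturbed "computed" solution are invented purely for illustration.

```python
# Normwise backward error eta = ||b - A x_hat|| / (||A|| ||x_hat|| + ||b||),
# everything in the infinity norm. A small eta means x_hat exactly solves a
# nearby system (A + dA) x = b + db, even when the forward error is larger.

def inf_norm_vec(v):
    return max(abs(x) for x in v)

def inf_norm_mat(A):
    return max(sum(abs(a) for a in row) for row in A)

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def backward_error(A, x_hat, b):
    r = [bi - yi for bi, yi in zip(b, matvec(A, x_hat))]  # residual b - A x_hat
    return inf_norm_vec(r) / (inf_norm_mat(A) * inf_norm_vec(x_hat) + inf_norm_vec(b))

A = [[1.0, 2.0], [3.0, 4.0]]
b = [3.0, 7.0]                      # exact solution is x = [1, 1]
x_hat = [1.0 + 1e-8, 1.0 - 1e-8]    # a slightly perturbed "computed" solution

print(backward_error(A, x_hat, b))  # tiny: x_hat solves a nearby system
```

The same quantity is what LAPACK's expert driver routines report (in various norms) to let users judge the quality of a computed solution.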
LAPACK, which stands for Linear Algebra PACKage, is a state-of-the-art software package for the numerical solution of problems in dense and banded linear algebra that allows users to assess the accuracy of their solutions.
For branch-free codes, automatic or algorithmic differentiation yields derivatives of outputs with respect to inputs, whose values are obtained to working accuracy by variations of the chain rule. We present the basic mathematical methods and corresponding software tools, and indicate the cost in terms of storage, runtime, and manpower. When there are few dependent variables but many independents, the reverse or adjoint mode has an amazingly low operations count, but poses a challenge for memory management.
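The forward mode of the chain-rule propagation mentioned above can be illustrated with dual numbers: each value carries its derivative, and overloaded arithmetic applies the chain rule operation by operation. This small sketch is our own illustration, not any particular AD tool:

```python
# Minimal forward-mode automatic differentiation via dual numbers.
# Dual(v, d) represents a value v together with its derivative d; the
# overloaded operators propagate derivatives by the chain rule.

class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)  # product rule
    __rmul__ = __mul__

def f(x):
    return 3 * x * x + 2 * x + 1   # f'(x) = 6x + 2

x = Dual(2.0, 1.0)   # seed dx/dx = 1
y = f(x)
print(y.val, y.dot)  # 17.0 14.0
```

The reverse (adjoint) mode instead propagates sensitivities backwards through a recorded evaluation trace, which is why one gradient of a scalar function costs only a small multiple of one function evaluation, at the price of storing that trace.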
Many practical codes contain branches causing kinks or even discontinuities of the functional relation between input and output variables. For example, optimal design in CFD calculations typically involves an iterative solver for some kind of state equations, such as the Navier-Stokes equations, whose solution is fed into one (or several) performance criteria. Once the gradient of this objective function with respect to certain parameters has been computed with acceptable accuracy, an outer optimization step can be performed, leading to the kind of nested iteration that forms the theme of this conference.
We will show that the crucial task of appropriately coordinating the stopping criterion of the inner iteration with the progress made by the outer "design" iteration must take into account so-called sensitivity residuals, which can be evaluated quite cheaply. Moreover, certain modifications of the adjoint code are necessary to keep the storage requirement constant with respect to the number of inner iterations. This new forward-adjoint iteration should facilitate the calculation of sensitivities for rather complex practical problems.
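The basic idea of carrying sensitivities along with an inner fixed-point iteration can be sketched in one scalar example (our own illustration, not the scheme of the talk): while iterating x_{k+1} = g(x_k, p) to a fixed point, one simultaneously iterates the derivative d_{k+1} = g_x(x_k, p) * d_k + g_p(x_k, p), which converges to the sensitivity dx*/dp of the fixed point.

```python
import math

# "Piggyback" forward sensitivity iteration for a scalar fixed point.
# Here g(x, p) = (x + p/x)/2 (Heron's iteration), whose fixed point is
# sqrt(p), so the exact sensitivity is d sqrt(p)/dp = 1/(2 sqrt(p)).

def g(x, p):
    return 0.5 * (x + p / x)

def g_x(x, p):          # partial derivative of g w.r.t. the state x
    return 0.5 * (1.0 - p / (x * x))

def g_p(x, p):          # partial derivative of g w.r.t. the parameter p
    return 0.5 / x

p, x, d = 2.0, 1.0, 0.0
for _ in range(50):
    # update state and sensitivity together, using the current state
    x, d = g(x, p), g_x(x, p) * d + g_p(x, p)

print(x, d)   # ~ sqrt(2), ~ 1/(2 sqrt(2))
```

Since g is a contraction near the solution, the derivative recurrence converges to g_p/(1 - g_x) evaluated at the fixed point, i.e. the exact sensitivity; the coordination question in the talk is how accurately each of these coupled iterations must be resolved.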
The remarkable robustness of Krylov methods with respect to inexact matrix-vector products is a strong property that has recently been emphasised. In the context of embedded iterative solvers with an outer Krylov scheme, it is possible to monitor the inner accuracy and relax it as outer convergence proceeds.
We extend the relaxation strategy proposed in [1] to the context of domain decomposition methods for partial differential equations solved with the Schur complement method.
Numerical experiments on a heterogeneous and anisotropic problem show that it is possible to save a significant number of matrix-vector products by using a tuned relaxation strategy to control the accuracy of the local subproblems.
[1] A. Bouras and V. Frayssé, A relaxation strategy for inexact matrix-vector products for Krylov methods, CERFACS TR/PA/00/15, 2000.
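The flavour of such a relaxation strategy can be sketched as follows (function name, cap, and the exact formula are our illustrative assumptions, not the tuned strategy of the talk): the inner solve at outer step k need only reach a tolerance that grows as the outer residual norm shrinks toward the target.

```python
# Relaxation in the spirit of Bouras and Frayssé: the inner tolerance
# eta_k = min(eta_max, outer_tol / ||r_k||) is loosened as the outer
# Krylov residual norm ||r_k|| approaches the outer tolerance, so the
# local subproblems are solved less and less accurately.

def inner_tolerance(outer_residual_norm, outer_tol=1e-10, eta_max=1e-2):
    return min(eta_max, outer_tol / outer_residual_norm)

for r_norm in (1.0, 1e-3, 1e-6, 1e-9):
    print(r_norm, inner_tolerance(r_norm))
```

Early outer steps thus demand accurate inner solves, while late steps tolerate very inexact matrix-vector products, which is where the savings in the experiments come from.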
In this talk we consider the iterative solution of linear (and other) systems of equations via two-stage iterative methods. These methods seem particularly well suited for implementation on parallel computers, since the second-stage iteration may be adapted to achieve a sensible compromise between the accuracy of intermediate solutions and the additional cost due to communication overhead. We present elements of the mathematical convergence theory for such methods, take a brief look at asynchronous variants, and report on several numerical experiments on different computer architectures.
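To fix ideas, here is a minimal sketch of one possible two-stage scheme; the particular splittings and the test matrix are our own illustration, not taken from the talk. The outer stage uses the splitting A = M - N with M the lower triangle of A (a Gauss-Seidel-type splitting), and each outer step solves M y = N x + b only approximately, by a few Jacobi sweeps on M.

```python
# Two-stage iteration for A x = b: outer splitting A = M - N (M = lower
# triangle of A, including the diagonal), inner stage = a few Jacobi
# sweeps on M y = c, warm-started at the current outer iterate.

def two_stage(A, b, inner_sweeps=2, outer_iters=100):
    n = len(b)
    x = [0.0] * n
    for _ in range(outer_iters):
        # right-hand side of the outer step: c = N x + b, where N = M - A
        # is minus the strict upper triangle of A
        c = [b[i] - sum(A[i][j] * x[j] for j in range(i + 1, n))
             for i in range(n)]
        # inner stage: inexact solve of M y = c by Jacobi sweeps
        y = x[:]
        for _ in range(inner_sweeps):
            y = [(c[i] - sum(A[i][j] * y[j] for j in range(i))) / A[i][i]
                 for i in range(n)]
        x = y
    return x

A = [[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]]
b = [2.0, 4.0, 10.0]
print(two_stage(A, b))  # approaches the exact solution [1, 2, 3]
```

The parameter `inner_sweeps` is exactly the knob mentioned in the abstract: on a parallel machine, fewer sweeps mean cheaper, less accurate intermediate solutions between communication steps, and the convergence theory quantifies how inexact the inner stage may be.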