High performance computing

CONTEXT

Recent advances in computer science and highly parallel algorithms make LES an efficient tool for the study of complex flows. The resources available today make it possible to tackle fully complex geometries that cannot be installed in laboratory facilities. At the same time, recent combustion instability studies performed at CERFACS demonstrate the need to extend the computational domain in order to handle acoustics or burner/burner interactions.

Extending the domain makes it possible to predict the interaction between acoustics and combustion correctly, without any assumption. Two kinds of acoustic modes can develop in combustion chambers and must be distinguished:

Longitudinal modes. The boundary conditions for the flow (pressure, velocity, temperature) are easy to measure experimentally and to impose numerically. In terms of acoustics, however, it is practically impossible to determine experimentally the impedance to impose in numerical computations. It is therefore necessary to extend the computational domain upstream and downstream up to well-characterized acoustic boundary conditions, for example acoustic decoupling, a choked nozzle, or the atmosphere.
An accompanying animation shows the importance of the outlet impedance for thermo-acoustic instability predictions.

[Martin, 2006] C.E. Martin, L. Benoit, Y. Sommerer, F. Nicoud and T. Poinsot. Large-Eddy Simulation and Acoustic Analysis of a Swirled Staged Turbulent Combustor. AIAA Journal, Vol. 44, No. 4, April 2006.
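For reference, the quantities involved are standard linear acoustics definitions (recalled here for clarity, not results from the study above): the boundary is characterized by its reduced impedance $z$ and the associated reflection coefficient $R$,

\[
z = \frac{\hat{p}}{\rho c \, \hat{u}}, \qquad R = \frac{z - 1}{z + 1},
\]

where $\hat{p}$ and $\hat{u}$ are the complex amplitudes of the acoustic pressure and velocity at the boundary and $\rho c$ is the characteristic impedance of the medium. A rigid wall ($\hat{u} = 0$) gives $R = 1$, while a pressure node such as an open atmosphere gives $R = -1$; extending the domain up to such well-characterized sections is what removes the ambiguity on the impedance.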

Azimuthal modes. The periodicity assumption ordinarily used for azimuthal combustion chambers neglects the flame/flame interactions and the coupling between combustion and azimuthal modes. The only way to model this phenomenon correctly is to compute the entire combustion chamber, usually composed of tens of burners.

[Staffelbach, 2005] G. Staffelbach, L.M.Y. Gicquel, and T. Poinsot. Highly parallel large eddy simulations of multiburner configurations in industrial gas turbines. In The Cyprus International Symposium on Complex Effects in Large Eddy Simulation, Limassol, Cyprus, 2005.

Another topic concerns non-periodic computations in annular chambers. For example, the computation of the ignition sequence of a gas turbine needs to include all the burners in the computational domain, together with the ignition devices, making an LES of the full chamber necessary.

[Boileau, 2005] M. Boileau, J.-B. Mossa, B. Cuenot, T. Poinsot, D. Bissières, and C. Bérat. Toward LES of an ignition sequence in a full helicopter combustor. In First Workshop INCA 2005, SNECMA, Villaroche, France, 2005.

Finally, of course, it is important to add physical models such as liquid fuel injection, thermal coupling at the boundaries, or radiative coupling.
All these arguments show the necessity of improving the efficiency of the numerical codes in order to minimize the restitution time for the user.

AVBP PARALLEL EFFICIENCY

Since the beginning of its development, AVBP has been written for massively parallel computers using the MPI library. The recent evolution of supercomputers enables new scientific studies, as described in the previous paragraph, and increases the need for parallel-efficient codes. Tests have been performed using AVBP on some of the most powerful computers to date:

IBM Thomas J. Watson Research Center, Blue Gene Solution - number 2 on the 26th top500 list - tests up to 5120 processors.

IBM Rochester BlueGene/L - number 8 on the 24th top500 list - tests up to 2048 processors. IBM Red Book (Chapter 8.4):
http://www.redbooks.ibm.com/abstracts/sg246686.html?Open

Commissariat à l'Énergie Atomique (CEA) Bull Tera-10 - number 62 on the 26th top500 list, most powerful French computer - tests up to 1899 processors.

Barcelona Supercomputing Center MareNostrum IBM JS20 - number 4 on the 24th top500 list, most powerful European computer.

The following figure shows the excellent scalability of AVBP. The tests are performed at constant problem size (strong scaling), which is representative of real user requirements: the goal is to reduce the restitution time of the computation for a given problem size.

Figure: AVBP speedup.


Moreover, the CPU consumption and the memory usage are constantly optimized, in collaboration with the parallel algorithms team of CERFACS or with vendors for specific computer architectures (for example IBM for the BlueGene computations). Parallel partitioning is employed in order to minimize the memory usage.
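To illustrate the message-passing pattern that underlies such domain-decomposed solvers, here is a minimal C sketch of a ghost-cell (halo) exchange between neighbouring MPI ranks. It is an illustration of the general technique only, not AVBP code: AVBP works on partitioned unstructured meshes, whereas this example uses a 1-D field and a hypothetical local size NLOC.

/* Minimal sketch of the halo-exchange pattern used by domain-decomposed
 * solvers (illustrative only). Each rank owns a slice of a 1-D field and
 * swaps one ghost cell with its left and right neighbours. */
#include <mpi.h>
#include <stdio.h>

#define NLOC 8  /* cells owned by each rank (hypothetical size) */

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* local array with one ghost cell on each side */
    double u[NLOC + 2];
    for (int i = 1; i <= NLOC; i++)
        u[i] = (double)(rank * NLOC + i);  /* fill owned cells */

    int left  = (rank == 0)        ? MPI_PROC_NULL : rank - 1;
    int right = (rank == size - 1) ? MPI_PROC_NULL : rank + 1;

    /* send my first owned cell left, receive my right ghost from the
     * right neighbour, and vice versa */
    MPI_Sendrecv(&u[1],        1, MPI_DOUBLE, left,  0,
                 &u[NLOC + 1], 1, MPI_DOUBLE, right, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    MPI_Sendrecv(&u[NLOC],     1, MPI_DOUBLE, right, 1,
                 &u[0],        1, MPI_DOUBLE, left,  1,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    printf("rank %d: left ghost = %g, right ghost = %g\n",
           rank, u[0], u[NLOC + 1]);

    MPI_Finalize();
    return 0;
}

Each partition stores ghost copies of the data owned by its neighbours, which is why the quality of the partitioning directly affects the per-processor memory reported below.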

For example, the 40-million-cell configuration uses only 24 MB of memory per processor when running on 4096 processors, and the speedup is 4078: the parallel efficiency is 99.5%.
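These figures are consistent with the usual strong-scaling definitions; as a worked check using only the numbers quoted above (and taking the speedup as referenced to a single processor):

\[
E = \frac{S}{N} = \frac{4078}{4096} \approx 0.995,
\qquad
4096 \times 24\ \text{MB} \approx 98\ \text{GB in total},
\]

i.e. about 2.5 kB of memory per cell for the 40-million-cell mesh.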

EXAMPLES OF CALCULATIONS

Ignition sequence of a helicopter gas turbine. Image and movie of the computation in the gallery. 19 million cells - 2048 IBM BlueGene processors and 128 processors on CINES computers.
Collaboration between CERFACS, IBM, TURBOMECA (Safran group) and CINES.

The processes controlling flame ignition and propagation in helicopter combustion chambers are critical phenomena that govern the performance of these systems: being able to start a helicopter engine at high altitude and low temperature is essential.
The prediction of these phenomena has long been out of reach of Computational Fluid Dynamics tools. In 2005, the capabilities of Large Eddy Simulation tools were combined with the power of the IBM Blue Gene server to tackle this problem: one of the most advanced LES solvers, developed jointly by CERFACS and IFP, was ported to IBM Blue Gene and run on a high-resolution mesh in order to compute ignition and flame propagation in the combustor of a helicopter turboshaft engine from Turbomeca (Safran group).

An extremely refined mesh was built (19 million tetrahedra): all 20 burners of the engine are computed.
25 ms of physical time were simulated on the IBM Blue Gene server with 2048 processors. The restitution time is about 30 hours.
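As an order-of-magnitude estimate derived from these figures, the run represents

\[
2048 \times 30\ \text{h} \approx 6.1 \times 10^{4}\ \text{CPU hours}
\]

for 25 ms of physical time, i.e. roughly $2.5 \times 10^{3}$ CPU hours per millisecond simulated.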


Combustion begins near the two ignition devices, placed between burners on opposite sides of the chamber (180 degrees apart). It then propagates towards the other burners. The flame must then move and stabilize through the high-velocity zones induced by the air injection and dilution jets.
Figures: mesh partition and snapshot of the computation.

Full annular combustion chamber of an industrial gas turbine.

[Staffelbach, 2005] G. Staffelbach et al. Highly parallel large eddy simulations of multiburner configurations in industrial gas turbines. In The Cyprus International Symposium on Complex Effects in Large Eddy Simulation, Limassol, Cyprus, 2005.


 
40 million cells - 5120 IBM BlueGene processors
DESIRE European research project - Collaboration between CERFACS and IBM


Combustion instabilities pose a threat to the proper behavior of industrial gas turbines and shorten their life cycle. One of the most interesting cases where combustion instabilities have been encountered is annular combustion chambers.

Gas turbines with annular combustion chambers may exhibit azimuthal acoustic modes. The complexity of these chambers and the long wavelength of such modes make laboratory tests very difficult or even impossible. One solution to study these modes and their effects in realistic cases is the use of Large Eddy Simulation. The first step towards simulating this case is taken here by demonstrating the feasibility of a full-chamber LES.

This case uses a 40-million-cell mesh to account for twenty-four burners blowing into a common annular chamber. It represents the most complex reactive case simulated at CERFACS using LES so far and validates the whole computational chain, from grid generation (done with CentaurSoft on a single burner and extended to the entire annular geometry using the CERFACS tool HIP, developed by J. Muller) to the visualisation of the results.
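To illustrate the sector-replication idea behind this mesh-extension step, here is a minimal C sketch that rotates one hypothetical node of a single-burner sector around the engine axis to generate its twenty-four copies. It shows only the coordinate transformation; the actual HIP tool must also duplicate the element connectivity and merge the periodic interfaces, which is not shown here.

/* Illustrative sketch (not the actual HIP tool): replicating a
 * single-burner sector mesh NSECTORS times around the engine axis. */
#include <math.h>
#include <stdio.h>

#define NSECTORS 24  /* burners in the annular chamber */

/* Rotate point (x, y) about the z (engine) axis by 'angle' radians. */
static void rotate_z(double x, double y, double angle,
                     double *xr, double *yr)
{
    *xr = x * cos(angle) - y * sin(angle);
    *yr = x * sin(angle) + y * cos(angle);
}

int main(void)
{
    const double pi = acos(-1.0);
    /* one hypothetical node of the single-sector mesh (metres) */
    const double x0 = 0.30, y0 = 0.00, z0 = 0.05;

    for (int s = 0; s < NSECTORS; s++) {
        double angle = 2.0 * pi * s / NSECTORS;  /* 15 degrees per sector */
        double x, y;
        rotate_z(x0, y0, angle, &x, &y);
        printf("sector %2d: node at (%7.4f, %7.4f, %7.4f)\n", s, x, y, z0);
    }
    return 0;
}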

Piston engine. Image and movie of the computation in the gallery.

10 million cells - 1024 IBM BlueGene processors
Collaboration between CERFACS and IBM


Stricter standards on pollutant emissions lead to the development of new combustion concepts based on direct-injection (DI) engines. In such engines aerodynamics plays a key role and the design of the intake pipes is crucial, requiring significant optimization, especially with CFD. For such flows, classical turbulence methods lack accuracy: the revolution introduced by Large Eddy Simulation (LES) methods in the last ten years now allows a precise computation of the flows, but the size of the models makes them impossible to run on most computers with classical architectures.
This study shows how one of the most advanced LES solvers (AVBP: www.cerfacs.fr/cfd/LES.php), developed jointly by CERFACS and Institut Français du Pétrole, was ported to the IBM Blue Gene server and run on a high-resolution mesh in order to compute a typical Diesel intake geometry.

An extremely refined mesh (10 million tetrahedra), composed of a plenum, pipes, cylinder and silencer, was built with CentaurSoft.

Figures: piston intake geometry.
10 ms of physical time were simulated on the IBM Blue Gene server with 1024 processors.

Instantaneous velocity fields exhibit many structures in the valve jets and show that the high-resolution LES reveals flow features which had never been computed before.

Cut (1) shows that the flow in the pipes is highly detached at specific locations, depending on the pipe shape. From cut (2) to cut (3), the two jets merge into a single one, animated by a quasi-solid-body rotation with a precessing motion.
