Numerical methods and HPC
Open Access

This article has an erratum.

Oil Gas Sci. Technol. – Rev. IFP Energies nouvelles
Volume 74, 2019
Article Number E2
Number of page(s) 2
Published online 09 July 2019

The use of High-Performance Computing (HPC) by industrial companies is one of the most important pillars of digital product design. High-performance computing is at the heart of digital technology and is a critical element in the modeling and simulation of increasingly complex physical phenomena that are inaccessible at human scale. More and more demanding applications, such as human brain behavior simulation or CO2 storage, therefore require high-performance computing, with ever-increasing computing power and the processing of very large volumes of data.

HPC offers hardware and software tools that support strategic decision making. The current trend in hardware architectures is toward heterogeneous, multi-core systems with ever more cores per CPU, with the aim of reaching Exascale computing power. A definition of exascale limited to machines capable of a rate of 10^18 flops, however, is of interest to only a few scientific domains. The main issues arising from such systems are: i) HPC System Architecture and Components, ii) System Software and Management, iii) Programming Environments, iv) Code Optimization, v) Energy and Resiliency, vi) Balance of Compute, I/O and Storage Performance, vii) Mathematics and Algorithms for HPC systems, and viii) Big Data and HPC Usage Models.

The use of currently proven approaches on multi-core CPUs and accelerators (e.g. GPUs, many-core CPUs) has resulted in significant performance gains for several applications. In this topical issue, “Numerical methods and HPC”, several of these issues are addressed: for example, programming environments, I/O and storage performance, Big Data and HPC usage, and mathematics and algorithms dedicated to HPC systems.

Since applications that employ high-performance computing resources usually outlive the hardware architectures they run on, we emphasize here programming environments, which include programming models and languages. To allow portable performance of applications at extreme scales, several programming models are presented, such as PyCOMPSs [1], which supports both homogeneous and heterogeneous architectures, including Xeon Phi and GPUs.
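The gist of such task-based models can be illustrated with a minimal sketch using only the Python standard library: independent block products of a blocked linear-algebra kernel are submitted as tasks to a pool. This mimics the spirit of PyCOMPSs, not its API; a real PyCOMPSs runtime additionally tracks inter-task data dependencies and distributes tasks across nodes and accelerators.

```python
from concurrent.futures import ThreadPoolExecutor

def block_multiply(a, b):
    """Multiply two dense matrix blocks given as lists of lists."""
    n, m, p = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def parallel_block_multiplies(pairs):
    """Run the independent block products of a blocked algorithm in
    parallel (illustrative stand-in for a task-based runtime)."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda ab: block_multiply(*ab), pairs))
```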

Once the different parts of an application are designed and implemented, program optimization must be carried out. This issue is addressed in [2] through the Adaptive Code Refinement (ACR) language extension and compiler tools. The idea is to allow less accurate approximations in computations where high precision is not required, in order to reduce computation time.
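The effect, if not the compiler machinery, of such adaptive versions can be sketched in a few lines: an expensive evaluation is used only where the input demands it, and a cheap approximation elsewhere. The function names and threshold below are ours, purely for illustration.

```python
import math

def adaptive_apply(values, exact, approx, threshold):
    """Apply the expensive function `exact` only where |v| exceeds the
    threshold; elsewhere the cheap `approx` is deemed accurate enough.
    Illustrates the idea behind ACR-generated adaptive versions."""
    return [exact(v) if abs(v) > threshold else approx(v) for v in values]

# Small-angle example: sin(x) is well approximated by x below the threshold.
result = adaptive_apply([0.001, 1.0], math.sin, lambda v: v, 0.1)
```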

Regarding parallel I/O, the authors of [3] present an efficient and fully parallel middleware that optimizes I/O for legacy seismic applications working with a well-known standard data format. The application treats extremely large data sets that are broken into files ranging in size from hundreds of GiB to tens of TiB and larger. Parallel I/O for these files is complex because of the amount of data combined with the varied and multiple access patterns within individual files.
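One basic decomposition any such middleware must perform is to split a file of fixed-size records into per-reader byte ranges aligned on record boundaries, so that each process can read its share independently. The sketch below shows only this decomposition step, under our own naming; it is not the API of [3].

```python
def record_ranges(file_size, record_size, n_readers):
    """Split a file of fixed-size records into (offset, length) byte
    ranges, one per reader, aligned on record boundaries and balanced
    to within one record."""
    n_records = file_size // record_size
    base, extra = divmod(n_records, n_readers)
    ranges, start = [], 0
    for r in range(n_readers):
        count = base + (1 if r < extra else 0)  # spread the remainder
        ranges.append((start * record_size, count * record_size))
        start += count
    return ranges
```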

Thereafter, advanced mathematical methods and algorithms, an essential ingredient for producing robust applications on modern high-performance architectures, are addressed. The contributions on numerical methods in this topical issue can be roughly divided into two categories.

In the first category, traditional methods are revisited, extended and applied to new problems. Examples include multi-scale mixed finite element methods suitable for parallel implementations of porous-media flow simulations [4], and the transport of multi-component multiphase mixtures coupled with reactive geochemistry, again with a view to efficient parallelization [5]. This category also covers the design of a new theoretical formalism that classifies a large number of known finite volume methods [6] and, at the opposite end, the very pragmatic application of MUSCL reconstructions to improve the accuracy of chemical tracer calculations [7].
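As a reminder of what a MUSCL reconstruction does, the following minimal 1-D sketch builds limited slopes with the classical minmod limiter and reconstructs left/right states at cell faces. It illustrates the general technique, not the specific scheme of [7].

```python
def minmod(a, b):
    """Minmod slope limiter: zero at extrema, smaller slope otherwise."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def muscl_faces(u, dx):
    """Second-order MUSCL reconstruction: for each face between cells i
    and i+1, return the (left, right) states extrapolated from limited
    cell slopes. Boundary cells keep zero slope for simplicity."""
    n = len(u)
    slopes = [0.0] * n
    for i in range(1, n - 1):
        slopes[i] = minmod((u[i] - u[i - 1]) / dx, (u[i + 1] - u[i]) / dx)
    return [(u[i] + 0.5 * dx * slopes[i],
             u[i + 1] - 0.5 * dx * slopes[i + 1]) for i in range(n - 1)]
```

On smooth linear data the reconstruction is exact at interior faces, while near extrema the limiter drops the slope to zero to avoid spurious oscillations.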

The second category includes more prospective, emerging methods with a stronger numerical emphasis. The unified formulation in thermodynamics, which automatically handles the appearance and disappearance of phases, is quite recent and is the subject of several research works. The authors of [8] put forward a new version using fugacities as unknowns and provide a numerical comparison with standard formulations. The design of numerical schemes that rely on a natural energy-decay property to guarantee stability is an original research direction, reviewed in [9]. The treatment of discrete fracture matrix models through dual virtual element methods, advocated in [10], is part of a broader trend of recent years. Finally, the a priori error estimates proposed in [11] provide an in-depth analysis of the timely topic of coupling a poro-elastic body embedded in an elastic environment; this is the first a priori convergence analysis for the fixed-stress iterative coupling.
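Schematically, such unified formulations replace the if/else phase switch by a complementarity condition; in common notation (ours, not that of [8], which works with fugacities as unknowns), for a possibly absent gas phase of saturation $S_g$ and computed mole fractions $y_i$:

```latex
\min\Bigl(S_g,\; 1 - \sum_i y_i\Bigr) = 0
\quad\Longleftrightarrow\quad
S_g \ge 0, \quad 1 - \sum_i y_i \ge 0, \quad S_g \Bigl(1 - \sum_i y_i\Bigr) = 0,
```

so that either the gas phase is absent ($S_g = 0$) or it is present with a composition summing to one, without any branching in the nonlinear solver.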

Users of HPC resources can clearly benefit from the advanced features mentioned above in their applications, particularly in industrial and engineering applications as well as emerging applications using Big Data. In this topical issue, two kinds of applications are presented, both taking advantage of multi-GPU and multi-core systems.

The difficulty in simulating human brain behavior lies in finding efficient ways to handle and compute on the huge volume of data, particularly in the computation of the voltage along the neuron morphologies, which is the most time-consuming step. This issue is addressed in [12], where several approaches that use multi-GPU systems efficiently are presented.
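On an unbranched cable, the implicit voltage update reduces to one tridiagonal linear system per time step, classically solved by the Thomas algorithm; it is variants of this kernel, applied to many branched morphologies at once, that GPU approaches like those in [12] accelerate. The serial sketch below (our own setup and naming) shows the kernel itself.

```python
def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a, main diagonal b,
    super-diagonal c and right-hand side d (all length-n lists; a[0] and
    c[-1] are unused). Returns the solution as a list."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):           # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):  # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

The recurrence is inherently sequential along a branch, which is precisely why efficient GPU versions need algorithmic reformulations rather than a line-by-line port.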

For real-time applications based on multi-rate co-simulation, a new approach for multi-core systems is proposed in [13]. Co-simulation allows system designers to simulate a whole system made up of a number of interconnected subsystem simulators. The approach exploits the parallelism of the co-simulation, in which dependent functions perform different computational tasks, and addresses their scheduling. Mutual exclusion constraints between functions belonging to the same simulator are handled via acyclic orientation of mixed graphs, for which both an exact algorithm and a heuristic are proposed.
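The dependency-driven part of such scheduling can be sketched with Kahn's topological-sort algorithm: each co-simulation function runs only after the functions whose outputs it consumes. This is a generic illustration under our own naming, not the algorithm of [13], which additionally orients the undirected mutual-exclusion edges.

```python
from collections import deque

def topo_schedule(deps):
    """Order functions so each runs after its dependencies (Kahn's
    algorithm). `deps` maps a function name to the set of names it
    depends on; raises on a dependency cycle."""
    indeg = {f: len(d) for f, d in deps.items()}
    users = {f: [] for f in deps}
    for f, d in deps.items():
        for p in d:
            users[p].append(f)
    ready = deque(sorted(f for f, k in indeg.items() if k == 0))
    order = []
    while ready:
        f = ready.popleft()
        order.append(f)
        for u in users[f]:
            indeg[u] -= 1
            if indeg[u] == 0:
                ready.append(u)
    if len(order) != len(deps):
        raise ValueError("dependency cycle")
    return order
```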


  • Amela R., Ramon-Cortes C., Ejarque J., Conejero J., Badia R.M. (2018) Executing linear algebra kernels in heterogeneous distributed infrastructures with PyCOMPSs, Oil Gas Sci. Technol. - Rev. IFP Energies nouvelles 73, 47.
  • Schmitt M., Bastoul C., Helluy P. (2018) A language extension set to generate adaptive versions automatically, Oil Gas Sci. Technol. - Rev. IFP Energies nouvelles 73, 52.
  • Fisher M.A., Conbhuí P.Ó., Brion C.Ó., Acquaviva J.-T., Delaney S., O’Brien G.S., Dagg S., Coomer J., Short R. (2018) ExSeisDat: A set of parallel I/O and workflow libraries for petroleum seismology, Oil Gas Sci. Technol. - Rev. IFP Energies nouvelles 73, 74.
  • Puscas M.A., Enchéry G., Desroziers S. (2018) Application of the mixed multiscale finite element method to parallel simulations of two-phase flows in porous media, Oil Gas Sci. Technol. - Rev. IFP Energies nouvelles 73, 38.
  • Ahusborde E., Amaziane B., El Ossmani M. (2018) Improvement of numerical approximation of coupled multiphase multicomponent flow with reactive geochemical transport in porous media, Oil Gas Sci. Technol. - Rev. IFP Energies nouvelles 73, 73.
  • Schneider M., Gläser D., Flemisch B., Helmig R. (2018) Comparison of finite-volume schemes for diffusion problems, Oil Gas Sci. Technol. - Rev. IFP Energies nouvelles 73, 82.
  • Braconnier B., Preux C., Douarche F., Bourbiaux B. (2019) MUSCL scheme for Single Well Chemical Tracer Test simulation, design and interpretation, Oil Gas Sci. Technol. - Rev. IFP Energies nouvelles 74, 10.
  • Ben Gharbia I., Flauraud É. (2019) Study of compositional multiphase flow formulation using complementarity conditions, Oil Gas Sci. Technol. - Rev. IFP Energies nouvelles 74, 43.
  • Cancès C. (2018) Energy stable numerical methods for porous media flow type problems, Oil Gas Sci. Technol. - Rev. IFP Energies nouvelles 73, 78.
  • Fumagalli A., Keilegavlen E. (2019) Dual Virtual Element Methods for Discrete Fracture Matrix model, Oil Gas Sci. Technol. - Rev. IFP Energies nouvelles 74, 41.
  • Girault V., Wheeler M.F., Almani T., Dana S. (2019) A priori error estimates for a discretized poro-elastic–elastic system solved by a fixed-stress algorithm, Oil Gas Sci. Technol. - Rev. IFP Energies nouvelles 74, 24.
  • Valero-Lara P., Martínez-Pérez I., Sirvent R., Peña A.J., Martorell X., Labarta J. (2018) Simulating the behavior of the Human Brain on GPUs, Oil Gas Sci. Technol. - Rev. IFP Energies nouvelles 73, 63.
  • Saidi S.E., Pernet N., Sorel Y. (2019) A method for parallel scheduling of multi-rate co-simulation on multi-core platforms, Oil Gas Sci. Technol. - Rev. IFP Energies nouvelles 74, 49.

© M.F. Wheeler et al., published by IFP Energies nouvelles, 2019

This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
