Design Justification File

The Design Justification File (DJF) is generated and reviewed at all stages of the development and review processes. It contains the documents that describe the trade-offs, design-choice justifications, verification plan, validation plan, validation testing specification, test procedures, test results, evaluations and any other documentation called for to justify the design of the software. A wide range of ideas for NEPTUNE has been considered both by UKAEA and by NEPTUNE grantees, as indicated by the titles in Table 5.1, and the choices described in the DDF will be found supported by the minutes of meetings which form part of many reports. To see what has been considered before, it is recommended that the access-controlled github repository ref. [7] be cloned to a local directory and its subdirectories reports and reports/ukaea_reports be indexed using software such as DocFetcher ref. [46] or Recoll ref. [47].

A number of algorithmic approaches were ruled out of consideration very early and are discussed only briefly, or not at all, in the supporting documentation. The reasons for rejection relate to the stringent demands of the NEPTUNE project, which may be regarded as a ‘corner case’ for other fusion modelling projects, ie. they should not adopt ideas that would make them incompatible with NEPTUNE. These currently deprecated approaches are described in the following section, after which the detailed reports are listed.

Deprecated Approaches

A prime consideration for edge modelling is to represent accurately the surface normal, not just the surface itself, since good power-handling demands that plasma flow in close to tangency, typically within \(2^{\circ}\). Unless there is to be special coding at boundaries, the educational material for finite-difference modellers explains why the finite-difference approach is unsatisfactory here, and the same objection also applies to AMR-type meshes, although in view of the unexpected success of AMR in fluid and PIC codes, AMR was carefully discussed, as detailed in ref. [48]. The most serious objection appeared to be the need to represent plasma diffusion, which is strongly anisotropic owing to the applied magnetic field. (Moreover, if hanging nodes are allowed in the FEM, then AMR behaviour may be reproduced by FEM codes, whereas the converse is not true.)

Similar objections apply to Lattice Boltzmann methods (LBM). There is the simple objection that, since their complexity arises from treating diffusive transport, why not code the diffusive operators directly? Equally, when the dissipation operator is mathematically extremely complex, as in many plasma transport problems, why not use a more accurate representation of velocity-space effects than is provided by a small number of LBM constants? Although proponents claim that apparently fundamental inefficiencies in the approach, such as the need for explicit time advance, may be overcome, the ideas needed have invariably been explored already in the finite-volume or finite-element context, so that as LBM approaches the efficiency of, say, FEM, it increasingly resembles it algorithmically.

Smoothed particle hydrodynamics (SPH) may be eliminated as a general approach to plasma modelling on purely mathematical grounds, namely that for problems of small dimensionality it is provably more accurate to reconstruct a function from a mesh-based approximation than from a set of point samples; see for example Niederreiter’s textbook ref. [49]. For supposing \(N_S\) samples are taken, Monte-Carlo sampling has an error \(\propto N_S^{-1/2}\), ie. an error exponent of \(1/2\), and the exponent never reaches unity even for quasi-Monte-Carlo sampling, whereas the spacing of samples on a regular \(d\)-dimensional lattice is \(h \propto N_S^{-1/d}\), so that a \(p\)-th order scheme has error exponent \(p/d\); thus taking \(p>d\) ensures that a mesh-based approximation is always more accurate than one based on more randomised sampling. In practice the situation is not as clear cut as this argument indicates, since spectral schemes may exhibit gross error if \(h\) exceeds the smallest scale of the function, whereas Monte-Carlo errors increase more gradually with decreasing \(N_S\). Exactly where equality of error bound occurs depends on details of the problem, but in practice it seems to require at least \(d>4\), so that particle methods may become attractive for treating kinetic effects in 5-D or 6-D position-plus-velocity phase spaces.
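To make the exponent comparison explicit, the two error bounds under the idealised scalings above are

\[
E_{\mathrm{MC}} \propto N_S^{-1/2}, \qquad E_{\mathrm{mesh}} \propto h^{p} \propto N_S^{-p/d},
\]

so a \(p\)-th order mesh scheme beats plain Monte-Carlo whenever \(p/d > 1/2\), ie. \(d < 2p\), and beats even the near-unity error exponent of quasi-Monte-Carlo whenever \(p > d\); the practical caveats just noted are what shrink this theoretical range to the observed crossover at around \(d > 4\).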

Both SPH and LBM may have niches where mesh production is difficult, such as strong surface deformation or cracking; however, these niches do not seem at present to need occupation in order to design a fusion reactor. Gentler deformations due to thermal expansion and melting may be handled by moving finite-element methods, which have been introduced into existing FEM packages as required. Moreover, where rapid boundary changes are indeed concerned, the envisaged combined particle-and-mesh NEPTUNE software would in any case seem to offer an excellent foundation for handling the whole range of shape variation.

Lastly, ‘particle shaping’ has been mentioned in the context of particle methods (regarded here as methods directly capturing some aspect of velocity space, ie. not SPH). To most mathematicians and physicists, giving a particle a ‘shape’ would seem unnatural, and the only benefit that it appears to offer, namely a reduced level of noise, is more efficiently obtained by filtering in physical space alone, in the NEPTUNE context by smoothing any related mesh-based quantities.

UKAEA (Internal) Reports

Table 5.1: NEPTUNE report titles. For full filenames, prepend the string ‘CD-EXCALIBUR-FMS’. Source is in subdirectory ‘tex’.

No. | Key (bibtex) | Title | Filename (PDF) | Source (.tex file)
0001 | sciplan ref. [4] | ExCALIBUR Fusion Modelling System Science Plan | 0001 | sciplan/rp1
0004 | y12acts ref. [50] | ExCALIBUR Fusion Modelling System Activities Y1-2 | 0004 | -
0010 | y1re111b ref. [51] | NEPTUNE: Report on Y1 2020 External Workshop (REPORT1) | 0010-M1.1.1b | -
0011 | y1re121 ref. [52] | Year One Summary Report | 0011-1.00-M1.2.1 | t12/rp1
0012 | y1re211 ref. [48] | Options for Geometry Representation | 0012-1.00-M2.1.1 | t21/rp1
0013 | y1re231 ref. [53] | Options for Particle Algorithms | 0013-1.01-M2.3.1 | t23/rp1
0014 | y1re311 ref. [54] | NEPTUNE: Report on system requirements | 0014-1.00-M3.1.1 | -
0015 | y1re331 ref. [32] | NEPTUNE: Background information and user requirements for design patterns | 0015-1.00-M3.3.1 | -
0016 | y1re351 ref. [55] | Benchmarking requirements for NEPTUNE and available tools | 0016-1.00-M3.5.1 | -
0018 | y1re111a ref. [56] | NEPTUNE: Report on Y1 2019 Internal Workshop | 0018-M1.1.1a | -
0020 | charter ref. [57] | EXCALIBUR NEPTUNE Charter | 0020 | chart/rp
0021 | pappeqs3 ref. [45] | Equations for EXCALIBUR/NEPTUNE Proxyapps | 0021-1.20-M1.2.1 | rp21/rpc
0022 | y2re312 ref. [58] | Report on user frameworks for tokamak multiphysics | 0022-M3.1.2 | t31/rp2
0023 | y2re332 ref. [33] | Report on design patterns specifications and prototypes | 0023-M3.3.2 | t33/rp2
0024 | y2re313 ref. [59] | Report on user layer design for Uncertainty Quantification | 0024-M3.1.3 | t31/rp3
0025 | y2grant ref. [60] | ExCALIBUR Fusion Modelling use case: contract award recommendation report | 0025-M1.3.1 | docx
0026 | y2re333 ref. [31] | Design patterns evaluation report | 0026-M3.3.3 | t33/rp3
0027 | y3act ref. [61] | ExCALIBUR Fusion Model SPF Research Plan Y3 | 0027-M1.5.1 | y3act/genpdf
0030 | y2re141 ref. [62] | Winter 2020-21 Workshop | 0030-M1.4.1 | t14/rp1
0030a | y2close ref. [63] | ExCALIBUR NEPTUNE Project analysis to date: close out Y2 | 0030a-M1.6.1 | docx
0031 | y2re251 ref. [64] | Select techniques for MOR (Model Order Reduction) | 0031-M2.5.1 | t25/rp1
0032 | y2d33 ref. [30] | Module Guide | 0032-D3.3 | d33/rpc
0033 | y2d34 ref. [2] | Development Plan | 0033-D3.4 | d34/rpc
0034 | y2re221 ref. [65] | Performance of spectral-hp element methods for the referent plasma models | 0034-M2.2.1 | t22/rp1
0035 | y2re241 ref. [66] | Assessment of which UQ methods are required to make NEPTUNE software actionable | 0035-M2.4.1 | t24/rp1
0036 | y2re271 ref. [67] | Identification of suitable preconditioner techniques | 0036-M2.7.1 | t27/rp1
0037 | y2re281 ref. [68] | Selection of the physics models | 0037-M2.8.1 | t28/rp1
0038 | y2re261 ref. [69] | Identification of a preferred overall numerical scheme | 0038-M2.6.1 | t26/rp1
0039 | y3re321 ref. [70] | Survey of code generators and their suitability for NEPTUNE | 0039-M3.2.1 | t32/rp1
0040 | y3re51 ref. [71] | Management of external research. Supports UQ Procurement | 0040-M5.1 | t51/rpc
0041 | y3re322 ref. [72] | Survey of Domain Specific Languages | 0041-M3.2.2 | t32/rp2
0042 | y3re314 ref. [3] | Specification and Integration of Scientific Software | 0042-M3.1.4 | t31/rp4
0043 | y3re242 ref. [73] | Selection of techniques for Uncertainty Quantification | 0043-M2.4.2 | t24/rp2
0044 | y3re252 ref. [74] | Selection of techniques for Model Order Reduction | 0044-M2.5.2 | t25/rp2
0045 | y3re272 ref. [75] | Identification of suitable preconditioner techniques | 0045-M2.7.2 | t27/rp2
0046 | y3re212 ref. [76] | Surface mesh generation | 0046-M2.1.2 | t21/rp2
0047 | y3re222 ref. [77] | Finite Element Models: Performance | 0047-M2.2.2 | t22/rp2
0048 | y3re232 ref. [78] | Options for Particle Algorithms | 0048-M2.3.2 | t23/rp2
0049 | y3d32 ref. [79] | Domain-Specific Language (DSL) and Performance Portability Assessment | 0049-D3.2 | d32/rpc
0050 | y3d35 ref. [80] | Verification and Benchmarks Methodology | 0050-D3.5 | d35/rpc
0051 | y3re61 ref. [81] | Finite Element Models: Complementary Activities I | 0051-M6.1 | t61/rpc
0052 | y3re71 ref. [82] | Literature review for Call T/AW086/21: Mathematical Support for Software Implementation | 0052-M7.1 | t71/report
0053 | y3re72 ref. [83] | Code coupling and benchmarking | 0053-M7.2 | t72/rpc
0054 | y2d31 ref. [84] | Software Specification Web-site | 0054-D3.1 | d31/rpc
0055 | y3re181 ref. [1] | Report of NEPTUNE Workshop 7 October 2021 | 0055-M1.8.1 | t18/rp1
0056 | y3grant ref. [85] | ExCALIBUR Fusion Modelling use case: contract award recommendation report | 0056-M1.7.1 | docx
0057 | y3re262 ref. [86] | Fluid Referent Models | 0057-M2.6.2 | t26/rp2
0058 | y3re282 ref. [87] | Technical report on Physics model selection | 0058-M2.8.2 | t28/rp2
0059 | y45act ref. [88] | ExCALIBUR Fusion Model System Y4-Y6 | 0059-M1.9.1 | y45act/genpdf
0060 | y3close ref. [89] | Analysis to Date: Close out Y3 | 0060-M1.10 | t110/rpc
0061 | y3re42 ref. [90] | 2-D Model of Neutral Gas and Impurities | 0061-M4.2 | t42/rpc
0062 | y3re43 ref. [91] | High-dimensional Models Complementary Actions 2 | 0062-M4.3 | t43/rpc
0063 | y3re52 ref. [92] | Selection of techniques for Uncertainty Quantification | 0063-M5.2 | t52/rpc
0064 | y3re62 ref. [93] | Finite Element Models Complementary Actions 2 | 0064-M6.2 | t62/rpc
0065 | y3re73 ref. [94] | Software Support Complementary Actions 2 | 0065-M7.3 | t73/rpc
0066 | y3re41 ref. [95] | Support High-dimensional Procurement | 0066-M4.1 | t41/rpc

Brief Survey of Reports to end FY 2020/21

For Activity 2, UKAEA produced three milestone reports, namely

  • CD/EXCALIBUR-FMS/0012-M2.1.1 - Options for Geometry Representation

  • CD/EXCALIBUR-FMS/0013-M2.3.1 - Options for Particle Algorithms

  • CD/EXCALIBUR-FMS/0031-M2.5.1 - Select techniques for MOR (Model Order Reduction)

together with the Activity 1 Report (‘Equations document’)
CD/EXCALIBUR-FMS/0021-1.00-M1.2.1 - Equations for NEPTUNE/ExCALIBUR proxyapps

The first two (Reports 12 and 13) describe the problems presented by the fusion use case: the first in respect of geometry and strong magnetic field, the second in respect of the generally, but not invariably, low collisionality of the edge plasma. They set out issues that needed to be addressed urgently, together with possible lines of research. Report 31 discusses in depth a wide range of options for research into Model Order Reduction, drawing attention to the possibility of producing scalable algorithms by borrowing ideas from the field of Data Assimilation. Report 21 sets out the equations to be studied using the first six proxyapps, beginning with relatively simple models of anisotropic transport and advancing to complex models of plasma-neutral interaction.

For Activity 3, UKAEA produced six milestone reports, namely

  • CD/EXCALIBUR-FMS/0022-M3.1.2 - User frameworks for tokamak multiphysics

  • CD/EXCALIBUR-FMS/0024-M3.1.3 - User layer design for uncertainty quantification

  • CD/EXCALIBUR-FMS/0023-M3.3.2 - Design patterns specifications and prototypes

  • CD/EXCALIBUR-FMS/0026-M3.3.3 - Design patterns evaluation

  • CD/EXCALIBUR-FMS/0032-D3.3 - Module Guide

  • CD/EXCALIBUR-FMS/0033-D3.4 - Development Plan

and there were three earlier milestone reports from FY2019/20, namely

  • CD/EXCALIBUR-FMS/0014-M3.1.1 - NEPTUNE: Report on system requirements

  • CD/EXCALIBUR-FMS/0015-M3.3.1 - NEPTUNE: Background information and user requirements for design patterns

  • CD/EXCALIBUR-FMS/0016-M3.5.1 - Benchmarking requirements for NEPTUNE and available tools

These reports are concerned principally with assessing the state of the art in software design, with particular emphasis on the design of scientific software and due attention to Exascale applicability. Selected textbooks and the wider literature were examined for the factors important to successful software developments. This examination threw up the importance of (1) software frameworks, described in Report 22 as an integrated set of software artefacts that collaborate to provide a reusable architecture for a family of related applications, (2) software layering, treated in Report 24 as a technique more widely useful than merely enabling ‘separation of concerns’, and (3) design patterns, treated in Reports 15, 23 and 26 as an approach to reusing and communicating reliable software structures. The importance of building a community, and techniques for doing so, were also described in Report 22.

Report 32 describes how the concept of module or class should be integrated into a structure of frameworks, layers and design patterns, so that a large, complex code can be partitioned into manageable segments. The utility of the Unified Modeling Language (UML 2) for describing not just software structure, but also code use within a wider engineering structure, through model-based systems engineering (MBSE), was noted. Report 33 attempts a preliminary synthesis of the material presented in the previous Activity 3 reports into a plan for the NEPTUNE software life-cycle, with focus on the subdivision of the plan into documents expected to be arranged as a web-site. Report 24 also includes an introduction to a wide range of uncertainty quantification (UQ) techniques.

Brief Survey of Reports FY 2021/22

For Deliverable 4, UKAEA produced three milestone reports, namely

  • CD/EXCALIBUR-FMS/0066-M4.1 Support High-dimensional Procurement

  • CD/EXCALIBUR-FMS/0061-M4.2 2-D Model of Neutral Gas and Impurities

  • CD/EXCALIBUR-FMS/0062-M4.3 High-dimensional Models Complementary Actions 2

The first (Report 66) supports the usefulness of the call for development of higher-dimensional elements, suitable for solution of a continuum kinetic model of plasma, by demonstrating Nektar++ solution of a \(1d1v\) model. The second (Report 61) considers the use of the Julia language as a means of coding particle models of plasma and neutral gas for HPC, with broadly favourable conclusions. The third and final report (Report 62) outlines critical physics expected to require treatment using particle-based and/or Monte-Carlo methods. Its principal content examines how best to treat inter-processor communication of particle-based information at the Exascale. There is also a brief description of a proxyapp for the exploration of \(1d1v\) solution by particles.

For Deliverable 5, UKAEA produced two milestone reports, namely

  • CD/EXCALIBUR-FMS/0040-M5.1 Management of external research. Supports UQ Procurement

  • CD/EXCALIBUR-FMS/0063-M5.2 Selection of techniques for Uncertainty Quantification

The first (Report 40) begins with a reminder of the high level of ‘noise’ in tokamak data and of the kinds of comparison with simulation that are required. It is pointed out that although spline interpolants are provably optimal, they may perform poorly for classes of functions relevant to the tokamak edge. The main content is the use of the VECMA toolkit for UQ of BOUT++ and Nektar++ for two 2-D fluid dynamical models. There is also a derivation of simplified models by dimension reduction, by integration in one coordinate and/or by use of the Lie derivative. The report finishes with a summary of UQ-related PhD projects sponsored by NEPTUNE.

The second (Report 63) pursues the use of splines for NEPTUNE, indicating their utility in the case of noisy data, and explores the use of Gaussian processes, including the derivation of key formulae and a discussion of their strengths relative to splines. There is a brief comparison with neural-network surrogates in an annex.

For Deliverable 6, UKAEA produced two milestone reports, namely

  • CD/EXCALIBUR-FMS/0051-M6.1 Finite Element Models: Complementary Activities I

  • CD/EXCALIBUR-FMS/0064-M6.2 Finite Element Models Complementary Actions 2

The first (Report 51) details the application of the Nektar++ spectral/hp finite-element software to the classic problem of two-dimensional vertical natural convection, physically a model for the heat transfer that takes place within the cavity of a double-glazing unit, and also relevant to heat transport in a plasma. A brief survey of results from the literature is followed by a numerical investigation showing transitions between conducting, laminar convective and turbulent regimes. Small extensions to the existing Nektar++ framework, aimed at extracting engineering-relevant quantities such as the maximum near-wall temperature, are given. The second (Report 64) builds on these results with a quantitative comparison to the well-established MIT numerical convection benchmark, together with the reproduction of some detailed flow-fields from a recent publication; excellent agreement between results from Nektar++ and those from the literature is obtained in both cases. Also included in this report is a preliminary study of a numerical implementation of discrete exterior calculus, with a demonstration of spectral convergence for a simple test problem, work motivated by the favourable properties of such schemes when coupled to particle kinetic codes.

For Deliverable 7, UKAEA produced three milestone reports, namely

  • CD/EXCALIBUR-FMS/0052-M7.1 Literature review for Call T/AW086/21: Mathematical Support for Software Implementation

  • CD/EXCALIBUR-FMS/0053-M7.2 Code coupling and benchmarking

  • CD/EXCALIBUR-FMS/0065-M7.3 Software Support Complementary Actions 2

The first of these (Report 52) is a literature review performed to support the Call T/AW086/21: Mathematical Support for Software Implementation. It surveys recent advances in algorithm development for hyperbolic and elliptic equations: for hyperbolic problems, developments in IMEX schemes, Deferred Correction methods, Asymptotic Preserving (AP) methods and Variable Stepsize, Variable Order (VSVO) timestepping; for elliptic problems, AP methods and nested solvers.

The second (Report 53) provides a commentary on the present state of Exascale hardware and software, and discusses the tools and technologies available for benchmarking and code coupling. The hardware and software landscape of HPC systems is becoming increasingly diverse, with a proliferation of vendors and of different technologies. To perform well at Exascale, software will likely need to be able to target multiple heterogeneous systems. In such an environment, it is crucial for developers to have access to benchmarking infrastructure to measure performance and highlight regressions. This environment, and the separation-of-concerns approach taken to navigate it, necessitates the development of discrete proxyapps which will need to be drawn together into a single software suite; thus the issue of code coupling will also be important at Exascale.

The final report (Report 65) describes aspects of coordination within the NEPTUNE project not covered in previous reports, namely the development of a GitHub repository for infrastructure code and project planning; the development of a project website for knowledge transfer within NEPTUNE; and a description of collaborations arising from NEPTUNE-related interactions, including the Fusion Modelling Use Case working group established to create a connection between Project NEPTUNE and the wider ExCALIBUR programme.

Parallelism Abstraction

To exploit parallelism most effectively on any given architecture, data must be arranged in arrays to which the same operations can be applied over many (\(N_{adj}\)) adjacent elements. The arrangement of the data describing, say, the magnetic field or a particle distribution function can nonetheless make a large difference to the ultimate speed of execution, which can depend sensitively on \(N_{adj}\). A good API could therefore be defined at the array level, taking away from the developer the decision as to whether the data is arranged as \(n_x \times n_y \times n_z\) or \(n_z \times n_x \times n_y\), ie. as to which array index runs fastest. Further, an extremely large first index \(n_x\) might for example be factored so that the first index is of order \(64\) to exploit caching, whereas the final index might be used to map array contents to different nodes of the machine. One such generalised indexing scheme is sketched below.

Address of an array element (general indexing, indices starting at 0):

    address = I(0)*INC(0) + I(1)*INC(1) + I(2)*INC(2) + I(3)*INC(3) + ...

Suppose the logical indices are I = 0,...,N0-1; J = 0,...,N1-1; K = 0,...,N2-1; L = 0,...,N3-1, and let N(i) denote the largest value taken by index i.

Storage order (I,J,K,L,...), ie. I runs fastest:

    INC(0)=1, INC(1)=N0, INC(2)=N0*N1, INC(3)=N0*N1*N2
    N(0)=N0-1, N(1)=N1-1, N(2)=N2-1, N(3)=N3-1

Storage order (L,I,J,K,...), ie. L runs fastest:

    INC(0)=1, INC(1)=N3, INC(2)=N0*N3, INC(3)=N0*N1*N3
    N(0)=N3-1, N(1)=N0-1, N(2)=N1-1, N(3)=N2-1

This is the permutation (0123) -> (3012): the fastest index has increment 1, and the labels 3, 30, 301, ... give the successive increments as products of the corresponding extents (N3, N0*N3, N0*N1*N3). Set N(i)=1 to suppress dependence on index i.
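As an illustration only, the scheme above maps onto a small strided-indexing helper in C++. The type StridedIndex4 and its interface are hypothetical, not a proposed NEPTUNE API, and the convention differs slightly from the relabelling above in that the logical indices stay fixed while the increments are permuted:

#include <array>
#include <cstddef>
#include <iostream>

// Sketch of the generalised indexing above: increments INC(i) are built
// from a permutation of the extents, so the caller's logical (i,j,k,l)
// never changes while the storage order (which index runs fastest) does.
struct StridedIndex4 {
    std::array<std::size_t, 4> inc{}; // increments INC(i), one per logical index

    // extent[i] = Ni; perm lists the logical indices from fastest to slowest,
    // e.g. {0,1,2,3} stores (I,J,K,L,...) and {3,0,1,2} stores (L,I,J,K,...).
    StridedIndex4(const std::array<std::size_t, 4>& extent,
                  const std::array<int, 4>& perm) {
        std::size_t stride = 1;
        for (int pos = 0; pos < 4; ++pos) {
            inc[perm[pos]] = stride;     // fastest index gets increment 1
            stride *= extent[perm[pos]]; // then products of extents, as above
        }
    }

    // Address of logical element (i,j,k,l), independent of storage order.
    std::size_t operator()(std::size_t i, std::size_t j,
                           std::size_t k, std::size_t l) const {
        return i * inc[0] + j * inc[1] + k * inc[2] + l * inc[3];
    }
};

int main() {
    const std::array<std::size_t, 4> extent{4, 5, 6, 7};   // N0,N1,N2,N3
    StridedIndex4 ijkl(extent, {0, 1, 2, 3}); // INC = {1, N0, N0*N1, N0*N1*N2}
    StridedIndex4 lijk(extent, {3, 0, 1, 2}); // INC = {N3, N0*N3, N0*N1*N3, 1}
    // Same logical element, two different storage addresses:
    std::cout << ijkl(1, 2, 3, 4) << " " << lijk(1, 2, 3, 4) << "\n";
}

Factoring an oversize first extent down to a cache-friendly length, or reserving the final index for mapping to nodes, then becomes a choice of extents and permutation rather than an edit to every loop.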

In a physics modelling code, it seems reasonable that the physics should have a say in how the data is arranged, with the particular implication that all information relating to a given position in space should be stored as close together as possible. However, particularly for edge physics, this may conflict with an array-level API. There are two main problems, namely that at a given spatial point (1) some species may be represented as particles, others as finite elements and some as both, and (2) not all species need be present. Issue (1) arises when the species collisionality varies, so that a fluid representation and a high-dimensional representation accounting more accurately for non-Maxwellian effects are needed in different spatial regions, with the two representations allowed to overlap. Situation (2) may occur with a neutral species that becomes fully ionised with distance into the plasma, or when, say, singly-charged ions of a certain species are present only in the divertor. The problem is intensified when \(p\)-adaptive finite elements are used, such that adjacent elements may have different orders of polynomial discretisation. It may also be desirable, when working with ensembles, to have samples from different solutions but for the same spatial region physically close together in storage.

The plasma-physical constraint may be met by domain decomposition in position space, so that within each subdomain fluid species can be represented by one set of arrays, one per species, and particles or other high-dimensional representations by another set or sets of arrays. The optimality of this arrangement, and certainly the best size of subdomain, depends on machine architecture. For example, on a node with both conventional CPU cores and a GPU, it might be good to store finite elements adjacent to the CPU and use the GPU for particles. Another option might be to take the localisation concept to its extreme and arrange together quantities that are close in the 6-D position-plus-velocity phase space, perhaps using a hierarchy of elements in velocity space. Fluid species might be represented by pointers in these elements without too much wastage of store, even if only one species requires a high-dimensional representation.
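A minimal sketch of this per-subdomain arrangement, with hypothetical type names (FluidField, ParticleBuffer, Subdomain) that are illustrative rather than any actual NEPTUNE interface, might read as follows; a species carrying both representations in an overlap region simply appears in both containers, and an absent species has no entry at all:

#include <string>
#include <vector>

// One field array per fluid species; coefficients of the finite-element
// representation are stored contiguously per subdomain.
struct FluidField {
    std::string species;
    std::vector<double> dofs;
};

// Structure-of-arrays particle storage for species needing a kinetic
// (high-dimensional) representation on this subdomain.
struct ParticleBuffer {
    std::string species;
    std::vector<double> x, y, z;    // positions
    std::vector<double> vx, vy, vz; // velocities
    std::vector<double> w;          // statistical weights
};

// A subdomain owns only the species actually present there: a neutral
// species that is fully ionised further in simply has no entry.
struct Subdomain {
    std::vector<FluidField> fluids;        // candidates for CPU-side storage
    std::vector<ParticleBuffer> particles; // candidates for the GPU; may
                                           // overlap the fluid list
};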

Since the main work of a NEPTUNE solver is expected to be the numerical inversion of a large matrix to obtain field values at a new time or iteration, there is even a question mark over how much weight should be attached to the localisation constraint. At the Exascale, the matrix, and especially its preconditioner, must be virtual in the sense that it will be too large for all its coefficients to be stored simultaneously, given the size of the field discretisation. Hence ease of computation of the matrix coefficients may matter more for performance.
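A matrix-free (or ‘virtual’) operator is the standard way to express this trade-off: coefficients are recomputed inside the apply routine instead of being stored. The C++ sketch below is illustrative only, with a deliberately trivial 1-D stencil standing in for the real discretised operator:

#include <cstddef>
#include <vector>

// 'Virtual' matrix: apply() recomputes each coefficient on the fly, so
// storage is O(n) for the vectors rather than O(n^2) (or O(bandwidth*n))
// for an assembled matrix. coeff() stands in for element-level assembly.
struct VirtualOperator {
    std::size_t n;

    double coeff(std::size_t i, std::size_t j) const {
        // Illustrative 1-D Laplacian stencil; in a real solver this would
        // evaluate the discretised PDE operator from mesh and field data.
        if (i == j) return 2.0;
        if (i + 1 == j || j + 1 == i) return -1.0;
        return 0.0;
    }

    // y = A*x without ever forming A; iterative (Krylov) solvers need
    // only this operation plus a similarly virtual preconditioner.
    void apply(const std::vector<double>& x, std::vector<double>& y) const {
        for (std::size_t i = 0; i < n; ++i) {
            double s = 0.0;
            const std::size_t jlo = (i == 0 ? 0 : i - 1);
            const std::size_t jhi = (i + 2 < n ? i + 2 : n);
            for (std::size_t j = jlo; j < jhi; ++j) // banded: neighbours only
                s += coeff(i, j) * x[j];
            y[i] = s;
        }
    }
};

Whether such recomputation wins at the Exascale then depends, as argued above, on the cost of evaluating the coefficients relative to the memory traffic an assembled matrix would incur.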