
Nuclear data activities for medium mass and heavy nuclei at Los Alamos
M. R. Mumpower1,∗, T. M. Sprouse1, T. Kawano1, M. W. Herman1, A. E. Lovell1, G. W. Misch1, D. Neudecker1, H. Sasaki1, I. Stetcu1, and P. Talou1
1Los Alamos National Laboratory, Los Alamos, NM, 87545, USA
Abstract. Nuclear data are critical for many modern applications, from stockpile stewardship to cutting-edge
scientific research. Central to these pursuits is a robust pipeline for nuclear modeling as well as data assimilation
and dissemination. We summarize a small portion of the ongoing nuclear data efforts at Los Alamos for medium
mass to heavy nuclei. We begin with an overview of the NEXUS framework and show how one of its modules
can be used for model parameter optimization using Bayesian techniques. The mathematical framework affords
the combination of different measured data in determining model parameters and their associated correlations.
It also has the advantage of being able to quantify outliers in data. We exemplify the power of this procedure
by highlighting the recently evaluated 239Pu cross section. We further showcase the success of our tools and
pipeline by covering the insight gained from incorporating the latest nuclear modeling and data in astrophysical
simulations as part of the Fission In R-process Elements (FIRE) collaboration.
1 Introduction
Nuclear data have a profound impact on modern society
[1]. These data serve as a foundation for nuclear energy,
nuclear security, and the wide variety of research that is
carried out in nuclear astrophysics from the composition
of neutron stars to the impact of kilonova light curves [2–
5].
Despite their importance, a robust pipeline for nuclear
data assimilation and dissemination remains on the
horizon. Currently, major evaluated databases of nuclear
reactions, decays and structure can be found in ENDF
[6] and ENSDF [7]. Inputs valuable for reaction model-
ing codes can be found in the Reference Input Parame-
ter Library (RIPL) [8] and experimental reaction data can
be found in EXFOR [9]. The Atomic Mass Evaluation
(AME) [10, 11] and NuBase [12] provide targeted infor-
mation regarding ground state and decay properties.
The disparate nature of these sources, coupled with
the difficulty of extracting information, complicates the
use of these databases in modern applications, particu-
larly in cutting-edge science. In this contribution, we
provide a brief overview of the nuclear data activities at
Los Alamos National Laboratory (LANL). We discuss the
NEXUS framework which represents a first step towards
providing a pipeline between nuclear data, nuclear mod-
eling efforts and applications. We showcase the utility of
this platform by highlighting its use in a recent evaluation
of 239Pu cross sections as well as in the scientific Fission
In R-process Elements (FIRE) collaboration. We end with
a discussion of recent tools developed at LANL intended
to advance the use of nuclear data in astrophysical appli-
cations.
∗e-mail: mumpower@lanl.gov
2 NEXUS
The Los Alamos Python package, NEXUS, is a data-
agnostic framework intended to furnish access to nuclear
properties. The phrase ‘data agnostic’ means that it does
not matter how the data were generated or parsed. The out-
put format of the data is also irrelevant to the framework’s
function. Instead, the focus of the code is on a consistent
object-oriented representation of various physical quanti-
ties and the relationships between them. This approach
is in stark contrast to the major databases which revolve
around the transmission format of data.
As an example of an object in NEXUS, a reaction cross
section may, in its simplest form, be represented by two
arrays: the incident particle energy and the
cross section at each energy. Additional information may
be warranted, in which case the base object may be ex-
tended. There may be uncertainties reported on each en-
ergy point as well as uncertainties in the cross section val-
ues. For particular applications associated metadata may
also need to be affiliated with attributes, for example, pro-
viding the physical units of each of the arrays. The focal
point is on the representation of the data in code, not on
how the information will be transmitted.
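The object extension described above might be sketched as follows. This is an illustrative sketch only; the class names and attributes are hypothetical and do not reflect the actual NEXUS API.

```python
# Hypothetical sketch of a cross-section object and its extension;
# not the actual NEXUS implementation.
from dataclasses import dataclass, field
from typing import Optional
import numpy as np


@dataclass
class CrossSection:
    """Simplest representation: two parallel arrays."""
    energy: np.ndarray  # incident particle energy grid
    value: np.ndarray   # cross section at each energy

    def __post_init__(self):
        if len(self.energy) != len(self.value):
            raise ValueError("energy and value arrays must match in length")


@dataclass
class CrossSectionWithUncertainty(CrossSection):
    """Base object extended with point-wise uncertainties and unit metadata."""
    energy_unc: Optional[np.ndarray] = None
    value_unc: Optional[np.ndarray] = None
    units: dict = field(default_factory=lambda: {"energy": "MeV", "value": "b"})
```

Note that nothing in the object concerns itself with file formats; a separate marshaller would handle writing such an object to disk.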
The transmission of data is nevertheless supported in
NEXUS. This revolves around the concept of marshalling, a
generalization of object serialization (which deals only
with string representations). Each object in NEXUS has a
corresponding marshaller object, which handles its
transformation into various representations.
A marshaller may translate one object type into another,
or it may convert an object from its code representation
to a form that can be saved on a computer hard disk, e.g.
ASCII or JSON. The power of this approach means that
arXiv:2210.12136v1 [nucl-th] 21 Oct 2022