Constrained dynamics of localized excitations causes a non-equilibrium phase transition in an atomistic model of glass formers

Dynamic facilitation theory assumes short-ranged dynamic constraints to be the essential feature of supercooled liquids and draws many of its conclusions from the study of kinetically constrained models. While deceptively simple, these models predict the existence of trajectories that maintain a high overlap with their initial state over many structural relaxation times. We use molecular dynamics simulations combined with importance sampling in trajectory space to test this prediction by counting long-lived particle displacements. For observation times longer than the structural relaxation time, exponential tails emerge in the probability distribution of this number. Reweighting trajectories towards low mobility corresponds to a phase transition into an inactive phase. While the dynamics in these two phases is drastically different, structural measures show only slight differences. We discuss the choice of dynamic order parameter and give a possible explanation for the microscopic origin of the effective dynamic constraints.

Since crystallization occurs through nucleation, virtually any liquid can be supercooled below its melting temperature. But some liquids never become crystals. Their viscosity increases dramatically until, at some point, internal relaxation can no longer keep up with the cooling; they fall out of equilibrium, reaching a state we call a glass (for reviews see Refs. [1,2]). While in principle protocol-dependent, the temperature range at which this transition occurs is narrow and nearly a material property. The idea that the glass transition is, or is determined by, a thermodynamic transition has influenced theoretical studies of glasses for decades. However, experimentally determined structure factors of supercooled liquids show little to no change on approaching the glass transition. If not global structure, what is the origin of slow dynamics?
The arguably most striking feature of supercooled liquids is the emergence of dynamical heterogeneity (see Ref. [3] for a review) below an onset temperature: while large regions of the liquid are jammed, structural relaxation continues through regions that are less rigid. This phenomenon has been observed directly in, e.g., colloidal glasses [4] and granular systems [5]. While simple liquids above the onset temperature are well described by a mean-field theory [6], dynamical heterogeneity leads to different environments for different particles. This observation forms the foundation for dynamical facilitation theory [7], a theory of the glass transition as a dynamical phenomenon; see Ref. [8] for a review and references. While the interplay between structure and particle dynamics is complicated, the glass transition is largely independent of these details, and the dynamics is dominated by effective constraints restricting the accessible regions in space-time. The glass transition is controlled by the number of excitations marking locally weak or soft regions able to reorganize, and not by a thermodynamic variable.
Crucial for dynamical facilitation theory is the notion of a mobility field coarse-grained both in time and space.
A convenient caricature of mobility fields are spin-like excitations on a lattice [9]. The effect of the crowded environment on particle motion is mimicked by kinetic constraints, i.e., for a spin to change its state it must be facilitated by one (or more) neighboring excited spin(s). These dynamical rules suffice to give rise to dynamical heterogeneity and a dramatic slow-down of relaxation in the absence of thermodynamic transitions.
Structural relaxation is the decorrelation of particle positions with their initial positions over time. In a dense liquid particle motion is strongly hindered by surrounding particles leading mainly to vibrational motion, short-lived excursions, and rare, collective, long-lived particle displacements that could be described as "cage breaks" [10]. One approach to glassy dynamics is to find strategies to predict long-time motion from short-time vibrations probing the local structure [11,12]. In this paper we pursue a different route and focus on the long-lived displacements as recorders of excitations in the sense of the simple lattice models. We introduce a binary mobility field but use molecular dynamics to determine its time evolution instead of postulated, and necessarily idealized, dynamics.
The paper is organized as follows: In Sec. I we give a brief reminder on kinetically constrained models. In Sec. II we combine the technique introduced in Ref. [13] to record excitations in atomistic models of glass formers with the fundamental ideas outlined in Refs. [14] and [15]. We thus confirm predictions made previously by the study of kinetically constrained lattice models. In particular, for observation times much longer than the relaxation time we find exponential tails in the distribution of mobile particles and a phase transition upon varying a field that couples to mobility. Such dynamical phase transitions [16] have been studied analytically and numerically for kinetic constrained lattice models [17,18], spin-glasses [19], and have also been found in atomistic glass formers [20]. In Sec. III we then study some structural and dynamical properties of the inactive phase in more detail before coming to the conclusions in Sec. IV.

I. KINETICALLY CONSTRAINED LATTICE MODELS
We start with a brief reminder on two popular variants of kinetically constrained lattice models [21]: the Fredrickson-Andersen (FA) [9] and the East model [22]. Both models have a trivial energy function E({n_l}) = J Σ_l n_l, where J sets the energy scale and n_l is either 1 or 0 indicating whether site l is excited or not. Excitations correspond to regions of the supercooled liquid where particles are unjammed and mobile. The effects of local jamming are incorporated by kinetic constraints: in the FA model a site can change state only if a neighboring site is excited, i.e., n_{l±1} = 1; in the East model the site l+1 must be excited. The evolution of the mobility field {n_l} is assumed to be Markovian. A simple choice for the rates is the following: annihilation (n_l: 1 → 0) of an excitation occurs with rate 1, defining the time scale; detailed balance then implies the rate e^{-J/T} for the creation of an excitation (if the constraint is fulfilled). The equilibrium concentration of excitations is

c ≃ e^{-J/T̃}   (1)

with respect to the reduced temperature 1/T̃ ≡ 1/T - 1/T_0. The temperature T_0 marks the onset of dynamical heterogeneity. As a result of the constraints, excitations move along lines. Fluctuations in excitations lead to coalescing and branching of excitation lines. These changes of state, or "kinks", provide the means to detect excitations through particle motion as discussed in the next section. However, the absence of detectable motion does not signify the absence of excitations, which, without other excitations reaching out, are quiescent as a direct consequence of the kinetic constraints. The phase transition that we detail in this paper is a transition between two phases of markedly different concentrations of kinks or fluctuations.
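The constrained dynamics described above can be illustrated with a minimal Monte Carlo sketch of the East model. This is our own illustration, not the authors' code: the lattice size, number of sweeps, and the single-spin-flip Metropolis discretization of the rates are all assumptions.

```python
import math
import random

def run_east_model(L=100, T=0.6, T0=1.0, J=1.0, sweeps=2000, seed=1):
    """Metropolis sketch of the East model.

    Annihilation (1 -> 0) is always accepted (rate 1 sets the time
    scale); creation (0 -> 1) is accepted with probability
    e^{-J/T~}, where 1/T~ = 1/T - 1/T0 is the reduced temperature.
    The kinetic constraint: site l may flip only if site l+1
    (periodic) is excited."""
    rng = random.Random(seed)
    p_create = math.exp(-J * (1.0 / T - 1.0 / T0))
    c_eq = p_create / (1.0 + p_create)       # exact product-measure concentration
    n = [1 if rng.random() < c_eq else 0 for _ in range(L)]
    for _ in range(sweeps * L):
        l = rng.randrange(L)
        if n[(l + 1) % L] != 1:              # East constraint not fulfilled
            continue
        if n[l] == 1:
            n[l] = 0                         # annihilation
        elif rng.random() < p_create:
            n[l] = 1                         # creation
    return n
```

Because the constraint only gates whether a flip is attempted, detailed balance with respect to the trivial product measure is preserved, so the excitation concentration stays near its equilibrium value while the relaxation pathways become strongly heterogeneous.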

II. DYNAMICAL PHASE TRANSITION
A. Excited particles
We have performed extensive molecular dynamics simulations of the Kob-Andersen binary mixture [23], a popular model for atomistic glass formers [24,25] (see appendix A for details). It is composed of 80% large (A) and 20% small (B) particles. Simulations are run at temperature T = 0.6, well below the onset temperature T_0 ≈ 0.88 of heterogeneous dynamics. For comparison, T ≈ 0.435 is an estimate of the glass transition temperature for this system [23,25]. We have chosen the higher temperature, and therefore only moderately supercooled state point, to be able to run millions of trajectories over a couple of structural relaxation times. For that state point the structural relaxation time is τ_α ≈ 24.5, as measured by the decay F(τ_α) = 1/e of the intermediate scattering function (see Fig. 1a). Particle motion is measured on the length scale 2π/|q| ≈ 1.058 corresponding to the peak position of the pair distribution function. Fig. 1a demonstrates that at T = 0.6 the structural relaxation for the system sizes considered here shows no finite-size effects (see also Ref. [26]). Kinetically constrained models are minimal models incorporating the crucial ingredient of hindered motion in dense, nearly jammed liquids. What they have not provided so far is a concrete prescription for how to map a set of particle positions {r_l} onto a binary mobility field {n_l}.
To establish such a mapping we follow Ref. [13] and use long-lived particle displacements of a given length a as recorders of excitations. To this end we define the single particle indicator function

h_l(t) ≡ Θ(|r̄_l(t + Δt) - r̄_l(t)| - a),

where Θ(x) is the unit step function, so that h_l = 1 if the lth particle has moved farther than the distance a in the time interval Δt and h_l = 0 otherwise. In the following we will call particles with h_l = 1 mobile, or excited, to emphasize their role as recorders of underlying excitations. To distinguish non-trivial particle displacements from mere vibrations we use the inherent structure positions {r̄_l} [27]. The inherent structure of a configuration is obtained by steepest descent to the nearest minimum of the potential energy landscape. Even though there is a hierarchy of motion on different length scales [13], here we focus on the single length a = 0.3. The commitment time Δt = 1.5 is then chosen to be large enough that a particle can commit to a new position but small enough that we do not count multiple jumps. The instantaneous density of mobile particles is

ĉ(t) ≡ (1/N) Σ_l h_l(t),

with mean c̄ ≡ ⟨ĉ⟩ = ⟨h_l⟩. In Fig. 1b we demonstrate that the temperature dependence of this mean density indeed follows the prediction Eq. (1) for excitations. The fitted onset temperature T_0 ≈ 0.88 agrees excellently with previous estimates [13,28]. The fitted energy is J ≈ 3.9 [49].
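The indicator above is straightforward to evaluate on stored inherent-structure positions; a minimal sketch (our own, with minimum-image handling for periodic boundaries omitted):

```python
import numpy as np

def mobility_field(r_bar_t, r_bar_t_dt, a=0.3):
    """Binary mobility field from inherent-structure positions.

    r_bar_t, r_bar_t_dt: (N, 3) arrays of inherent-structure
    positions at times t and t + dt.  h_l = Theta(|dr_l| - a):
    1 if particle l has committed to a displacement longer than a,
    0 otherwise.  Also returns the instantaneous density c^."""
    dr = np.linalg.norm(r_bar_t_dt - r_bar_t, axis=1)
    h = (dr > a).astype(int)
    c_hat = h.mean()          # instantaneous density of mobile particles
    return h, c_hat
```

Using inherent-structure (quenched) positions rather than real positions is what filters out vibrational motion, so the threshold a = 0.3 picks out genuine cage breaks.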

B. Low mobility tails
Our objects of interest are trajectories X ≡ {r_l(t) : 0 ≤ t ≤ t_obs} of fixed length t_obs. To quantify the amount of mobility in a trajectory we count excited particles through the order parameter

C[X] ≡ Σ_{i=1}^{K} Σ_{l=1}^{N} h_l(t_i)   (5)

with equally spaced t_i = iΔt and observation time t_obs = KΔt; the corresponding intensive order parameter is c ≡ C/(NK). In Fig. 1 we show distributions p(c) for different system sizes and different observation times obtained through umbrella sampling combined with replica exchange (see appendix B). For trajectories in which motion decorrelates on a time scale much shorter than t_obs we would sum over many independent events; following the central limit theorem, the probability distribution p(c) then approaches a Gaussian. Indeed, for moderate observation times τ_α < t_obs ≪ t*_obs, larger than the relaxation time τ_α but below a cross-over time t*_obs, we find such Gaussian distributions (see t_obs = 75 ≈ 3τ_α in Fig. 1). Increasing t_obs beyond t*_obs we observe two effects: for small c exponential tails emerge and the shape of p(c) becomes non-concave. The physical picture is that highly constrained dynamics facilitates the creation of extended immobile regions, i.e., compared to uncorrelated dynamics it is easier to "remove" mobility from a given space-time volume.
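Given the binary field of the previous subsection stored for every time slice, the order parameter is a plain double sum; a sketch under the assumption that one trajectory is stored as a (K, N) array:

```python
import numpy as np

def order_parameter_C(h_field):
    """Order parameter C[X] of one stored trajectory.

    h_field: (K, N) binary array with h_field[i, l] = h_l(t_i) for
    the K time slices t_i = i*dt.  Returns the extensive count
    C = sum_i sum_l h_l(t_i) and the intensive c = C/(N K), whose
    distribution p(c) is the quantity histogrammed in Fig. 1."""
    K, N = h_field.shape
    C = int(h_field.sum())
    return C, C / (N * K)
```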
The explanation why the tails of p(c) are exponential is as follows [14]. Assuming that excitations are non-interacting, the probability to find at t = 0 an immobile region of ℓ particles is proportional to e^{-γℓ} with a geometric factor γ independent of N and t_obs. Combining this probability with C ≈ K(N - ℓ)c̄ for the case that this "bubble" persists leads to ln p(c) ≃ (γ/c̄)Nc plus an offset. In Ref. [13], evidence is presented that the assumption of non-interacting excitations is indeed a good approximation. Of course, not all bubbles span the entire trajectory connecting the initial with the final state. The temporal extent t*_obs of a typical bubble grows proportionally to the mean persistence time, i.e., the mean time a particle remains at its initial inherent structure position.
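Spelled out, the bubble argument is a one-line substitution (using the intensive c = C/(NK) and the mean density c̄):

```latex
p(c) \propto e^{-\gamma \ell}, \qquad
C \approx K\,(N-\ell)\,\bar c
\;\Rightarrow\;
\ell = N - \frac{C}{K\bar c} = N\left(1 - \frac{c}{\bar c}\right),
\qquad\text{hence}\qquad
\ln p(c) \simeq -\gamma N\left(1 - \frac{c}{\bar c}\right)
= \frac{\gamma N}{\bar c}\,c + \mathrm{const}.
```

The slope of ln p(c) in the tail thus grows linearly with N but is independent of t_obs, which is the signature visible in Fig. 1.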
The order parameter Eq. (5) is purely dynamical. Since in a crystal particle mobility is low and motion is restricted to defects, such an order parameter cannot discriminate between a low-activity amorphous and a low-activity crystalline phase. Therefore, we also monitor the structure through the orientational order parameter ψ_6(l) defined in Eq. (D1). To allow local fluctuations in structure but prevent global long-range order we use the mean value ψ̄_6 averaged over all particles.

C. Active-inactive phase transition

So far we have established that trajectories contributing to the exponential tails in Fig. 1 are those which remember their initial conditions and do not relax within the observation time t_obs. At least formally, and in computer simulations, we can apply a bias in the space of trajectories to stabilize these low mobility trajectories. The fact that the distributions p(c) are non-concave implies a phase transition between an active phase corresponding to the liquid, in which motion is plentiful, and an inactive phase of low particle mobility. To provide a link with traditional thermodynamics, and the Ising model in particular, imagine the {h_l} to be spins on a lattice with the order parameter C taking the role of the magnetization [29]. Below the critical temperature the system undergoes a first-order transition between a disordered phase of low magnetization and an ordered phase of high magnetization upon applying a field (which we will call s in the following). While the statistical treatment is analogous, the underlying physics of a supercooled liquid is of course different from the Ising model. In our case the lattice extends over space and time, and the interactions between "spins" are due to short-ranged forces, geometrical confinement, and the resulting temporal correlations of particle motion.
In Fig. 2 we plot the mean fraction of excited particles ⟨c⟩_s and the susceptibility χ(s) ≡ -∂⟨c⟩_s/∂s vs. the biasing field s. The plot shows that the density of mobile particles drops from c̄ ≈ 0.11 at s = 0 to about c_in ≈ 0.01 for s ≳ s*. For small s we can expand the mean, ⟨c⟩_s ≈ c̄ - κs with κ ≡ [⟨C²⟩ - ⟨C⟩²]/(NK). The linear behavior around s = 0 in Fig. 2, therefore, reflects the Gaussian nature of the liquid phase. The coexistence field s* is obtained from the peak position of χ(s), maximizing the fluctuations of the order parameter. Increasing either the number of particles N or the observation time t_obs sharpens the transition. However, space and time are not symmetric. At least in part, this asymmetry reflects that we employ periodic boundary conditions in space, whereas trajectories can have quite distinct initial and final states. This leads to temporal boundary effects enhancing the mobility at the beginning and the end of the trajectory [16]. For fixed N one expects to leading order s* = s*_N + O(1/t_obs), as shown in the upper inset of Fig. 2. From the fits we estimate the limiting coexistence field s*_∞ ≈ 5 × 10^{-4} to be small but nonzero. Finally, in the lower inset of Fig. 2 we demonstrate the finite-size scaling of the peak values χ* ≡ χ(s*) plotted vs. the space-time volume Nt_obs = NKΔt. The dashed line corresponds to χ* ≈ t_obs(0.065 + 1.08 × 10^{-6} Nt_obs), which suggests a first-order transition with a diverging susceptibility and a discontinuous jump of ⟨c⟩_s at s*_∞ in the limit of large N and/or t_obs.
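The s-ensemble averages are reweighted expectation values with trajectory weight e^{-sC}. The following sketch shows the bookkeeping for ⟨c⟩_s and χ(s) from a set of samples; it is illustrative only, since near s* the biased ensemble is dominated by rare low-mobility trajectories and plain reweighting fails — which is precisely why the text resorts to umbrella sampling with replica exchange.

```python
import numpy as np

def s_ensemble(c_samples, N, K, s):
    """Reweight unbiased samples of c = C/(N K) into the s-ensemble.

    Each trajectory is weighted by exp(-s C); returns <c>_s and the
    susceptibility chi(s) = [<C^2>_s - <C>_s^2]/(N K) = -d<c>_s/ds."""
    C = N * K * np.asarray(c_samples, dtype=float)
    logw = -s * C
    w = np.exp(logw - logw.max())     # shift for numerical stability
    w /= w.sum()
    C1 = float(np.sum(w * C))
    C2 = float(np.sum(w * C * C))
    return C1 / (N * K), (C2 - C1 * C1) / (N * K)
```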

D. Choice of order parameter
To check whether the transition into an inactive phase is robust with respect to the way we measure mobility in trajectories, we have also considered the dynamical order parameter

K[X] ≡ Σ_{i=1}^{K} Σ_{l=1}^{N} |r_l(t_i) - r_l(t_{i-1})|².   (8)

This order parameter was used in Ref. [20] to demonstrate for the first time a transition between a high activity and a low activity phase in an atomistic model. It sums over the short-time mean-square displacements of particles, which obscures the separation of vibrations from reorganization events that lead to structural relaxation. Nevertheless, as demonstrated in Fig. 3, an abrupt transition from high to low activity can still be observed. To be specific, we define a new ensemble

⟨A⟩_σ ≡ ⟨A e^{-σK[X]}⟩ / ⟨e^{-σK[X]}⟩,

where we denote the biasing field coupling to K by σ and A is any observable. As an illustration, for N = 216 and t_obs = 450 the mean ⟨k⟩_σ is plotted in Fig. 3. It resembles the curves shown in Fig. 2 and drops abruptly around σ* ≈ 0.0077. Moreover, we find exponential tails in the probability distribution of k plotted in the inset of Fig. 3. In addition to ⟨k⟩_σ we determine the density of mobile particles ⟨c⟩_σ, which closely follows the former. However, while the activity drops by a factor of less than two, the number of excited particles is reduced by more than one order of magnitude. The comparison with the curve ⟨c⟩_s obtained in the C-ensemble shows that the mean fraction of mobile particles approaches the same value (within uncertainties) in both ensembles. We therefore conclude that both measures C and K prepare the same inactive phase through applying a biasing field in trajectory space. The transition is more pronounced in C since vibrational motion, which does not cease in the inactive phase evolving at the same temperature as the active phase, contributes significantly to Eq. (8).
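Evaluated on stored configurations, the activity is again a simple sum; a sketch assuming one trajectory is stored as a (K+1, N, 3) array of real (not inherent-structure) positions:

```python
import numpy as np

def order_parameter_K(positions):
    """Activity K[X]: summed squared displacements between slices.

    positions: (K+1, N, 3) array of positions at times t_0, ..., t_K.
    Returns K[X] = sum_i sum_l |r_l(t_i) - r_l(t_{i-1})|^2 and the
    intensive k = K[X]/(N K).  Vibrations contribute here, which is
    why the drop at sigma* is much milder than the drop of c."""
    dr = np.diff(positions, axis=0)          # (K, N, 3) slice-to-slice steps
    K_total = float(np.sum(dr ** 2))
    n_slices, N = dr.shape[0], dr.shape[1]
    return K_total, K_total / (N * n_slices)
```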

III. PROPERTIES OF THE INACTIVE PHASE
A. Nucleation of activity
An important consequence of first-order phase transitions is nucleation: a system crossing the transition line, although the new phase is favored, has to pay a penalty for interfaces, and one has to wait for a large enough nucleus to appear spontaneously. Translated to the present case, one might ask what happens if at time t = t_obs we turn off the field s for a trajectory prepared at s ≳ s*. In the picture of facilitated dynamics a new excitation can only appear close to an existing excitation. Since both the density of excitations and of kinks, as recorded through ⟨c⟩_s, are drastically reduced in the inactive phase, we have to wait a certain time before excitations "percolate" through the system and it returns to the liquid phase. The nucleation of activity, therefore, is conceptually different from, say, the nucleation of a crystal, which is determined by a single large barrier.
To demonstrate this "melting" of jammed configurations we prepare trajectories at s = 0.01. From these trajectories we then take a single configuration, randomize velocities, and run 10,000 unbiased trajectories out of this initial configuration (see also the isoconfigurational ensemble [30]). In Fig. 4a a single trajectory is shown. Clearly, the system remains inactive for many structural relaxation times (in this example t_w ≈ 15τ_α) as measured by ĉ, and then, suddenly, becomes active again with large fluctuations of ĉ. In Fig. 4b a different trajectory out of the same initial configuration shows a more gradual transition from inactive to active. In Figs. 4c and 4d we show distributions of waiting times t_w for two different initial states. These distributions are clearly non-exponential, which is consistent with a variable-step process as excitations reach out and reconnect. A detailed comparison of these distributions with predictions from kinetically constrained models is left for a future study.

B. Local structure
In the introduction we emphasized that global structural differences between liquid and glass are at most minuscule. However, the fact that the system remains jammed for particle configurations taken from trajectories prepared at s ≳ s* indicates that there is a structural difference between these configurations and configurations typically visited in the liquid phase. To make this more quantitative we sample trajectories at fixed s and compare the structures as measured by three different methods, see Fig. 5. The pair distribution function g_AA(r) for the large (A) particles shown in Fig. 5a demonstrates that liquid and inactive phase are globally indistinguishable. Small differences are seen for the small (B) particles in Fig. 5b. Beyond these simple two-point functions we also consider the histogram of the bond-order parameter ψ_6 as defined in appendix D, plotted in Fig. 5c. This order parameter is a convenient measure for long-range order: for every particle it quantifies its local order, with ψ_6 = 1 for a particle in a perfect crystal. All measures clearly show that the inactive phase is amorphous.
In Fig. 5d we show that the potential energy per particle and the density of mobile particles are uncorrelated in both phases. While in the inactive phase the potential energy of particles is typically lower, the mean difference of ≈ 0.1 is much less than the vibrational contribution ≈ 1.5T separating real-space potential energies from the inherent state energies. Moreover, as demonstrated in Fig. 5d, there is still an overlap of potential energies between both phases. Hence, we conclude that particles are not trapped energetically but rather due to geometrical constraints. Differences in structure are picked up by the pair distribution function for the small (B) particles plotted in Fig. 5b. It has been shown that the peaks of g_BB(r) for binary mixtures can be assigned to certain local structures: two bonded B particles sharing m common A neighbors [31]. Of particular interest is the second peak, corresponding to m = 3, since it indicates icosahedral coordination shells. Fig. 5b shows that in the inactive phase this local structure occurs more often compared to s = 0. This is consistent with recent observations of short-ranged structures in supercooled binary mixtures [32,33]. Slow relaxation is attributed to the reorganization of particles bound in these structures. Moreover, the drop of ≈ 0.1 in the potential energy of inherent states agrees quantitatively with the drop associated with the formation of these structures [33] (albeit for a slightly different model).

C. Dynamical facilitation
We finally study the behavior of facilitation when going from the active to the inactive phase. First, we note that the fraction of excited particles as plotted in Fig. 6 is independent of temperature in the inactive phase. This indicates that the dynamics in the inactive phase is decoupled from the externally fixed temperature. Second, we study the degree to which particle motion is facilitated. Different measures have been reported in the literature, including a mobility transfer function [34] and the facilitation volume [13]. In the spirit of a mobility transfer we consider the set of newly excited particles, for which the binary indicator function w_l(t_i) ≡ h_l(t_i)[1 - h_l(t_{i-1})] equals 1. We follow a single particle along a trajectory and count the number of newly excited particles that have been created in a sphere with radius r around the tagged particle, under the condition that the tagged particle itself had been excited in the preceding time slice. We define the transfer function μ(s) as the mean of this count in the biased ensemble. The ratio μ(s)/μ(0) is plotted in the inset of Fig. 6 using r = 1.5, roughly corresponding to the first coordination shell. It shows that the probability that a particle becomes excited close to an already mobile particle increases in the inactive phase. Putting all observations together, the following picture of dynamics in the inactive phase emerges: The persistence time exceeds the observation time and most particles maintain a high overlap with their initial position. However, some activity continues in isolated regions. The fraction of these mobile particles is decoupled from the temperature. Particles do not become mobile (or immobile) at random but are facilitated by existing mobile particles in their vicinity. Turning off the s-field, these remaining mobile particles are the seeds from which excitations can reconnect before the system returns to its fluid state.
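The counting procedure behind μ can be sketched as follows. This is our own illustration of the measure described in the text, not the authors' implementation; minimum-image handling for periodic boxes and the exact normalization used in Fig. 6 are omitted.

```python
import numpy as np

def mobility_transfer(h, pos, r=1.5):
    """Mean number of new excitations near previously excited particles.

    h:   (K, N) binary mobility field over K time slices.
    pos: (K, N, 3) particle positions.
    For each slice i >= 1, w = h[i]*(1 - h[i-1]) marks newly excited
    particles; for every tagged particle excited at t_{i-1} we count
    the new excitations within distance r and return the mean count."""
    counts, tagged = 0.0, 0
    K, _ = h.shape
    for i in range(1, K):
        w = h[i] * (1 - h[i - 1])                 # newly excited at t_i
        for l in np.flatnonzero(h[i - 1]):        # tagged: excited at t_{i-1}
            d = np.linalg.norm(pos[i] - pos[i, l], axis=1)
            near = (d > 0) & (d < r)              # exclude the tagged particle
            counts += float(np.sum(w[near]))
            tagged += 1
    return counts / tagged if tagged else 0.0
```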

IV. CONCLUSIONS
The separation between fast intra-basin vibrations and slow, activated transitions between inherent structures (or meta-basins [35]) is the essence of the energy landscape paradigm [2,36]. It implies a time evolution that is dominated by rare thermal fluctuations that carry the system from one minimum over a barrier into a neighboring minimum [37]. However, there is mounting evidence that such a mechanism competes with, or is even shadowed by, relaxation that occurs through channels that present only low energetic barriers but which are rare and found through "surging" particle motion: examples are string-like motion [38] and participation maps of low-frequency modes [30,39]. Dynamical facilitation theory postulates "excitations" to be the fundamental objects describing such structural weaknesses, and facilitation to be the dominant mechanism at low concentrations of excitations.
The energy landscape picture assumes that the way the landscape is sampled is governed by temperature. From the evidence presented here we arrive at a seemingly different perspective: the concentration of excitations determines relaxation. At constant temperature the system can be forced into an inactive glassy phase by either removing excitations or suppressing fluctuations, leading to "frozen" excitations that are quiescent. In this inactive phase particles vibrate around local energy minima, the statistics of which are consistent with the externally fixed temperature. In contrast, transitions between local minima, or inherent states, are rare and decoupled from this temperature. It appears that these jammed states, in which excitations are arrested, can be created rather easily through local particle rearrangements that do not affect the global structure. In Ref. [40] it is shown that these states are mechanically more stable than fluid states at a lower temperature. Here we have demonstrated that the melting of jammed states is not consistent with a single crossing of a large free energy barrier but is rather a multi-step process. We attribute this multi-step process to the "unfreezing" of excitations, an interpretation that is supported by an increased degree of facilitation in these jammed states. These observations, together with the emergence of exponential tails equivalent to those observed in kinetically constrained models, lead us to the conclusion that the transition is indeed caused by local dynamic constraints. The precise pathways and microscopic mechanisms of the particle rearrangements underlying the active-inactive transition are left for future studies.
As a final note we emphasize that the active-to-inactive transition is reminiscent of the glass transition. The fundamental difference is that the transition demonstrated in this paper is controlled by a field coupling to a dynamical observable, while the experimental glass transition is controlled by the rate of temperature decrease. The connection between the two remains to be quantified.

Appendix B: Trajectory sampling
Just as standard Monte Carlo simulations [43] perform importance sampling of configurations, the methods we employ perform importance sampling of trajectories. Specifically, we harvest new trajectories using moves from transition path sampling [44,45]. These moves preserve the equilibrium weight P_0[X] of trajectories. Hence, accepting or rejecting a move X → X′ according to the usual Metropolis criterion generates an ensemble of trajectories with weight

P[X] ∝ P_0[X] e^{-w(C[X])}.

Here, C[X] is a dynamical order parameter that maps the trajectory X to a real number, see Eqs. (5) and (8), and w(x) is the weight function.
For a single trajectory we store K + 1 configurations at times t_i = iΔt with i = 0, ..., K. We employ the "massive stochastic collision" thermostat, i.e., all velocities are randomized at these times and the center-of-mass velocity is subtracted. The use of a stochastic thermostat allows us to perform transition path sampling with so-called 'half moves' as described in detail in Ref. [45]. In order to efficiently sample probability distributions of the order parameter we use the quadratic form

w(x) = α(x - x_0)²   (B1)

centered around x_0. To speed up the sampling of trajectory space and decrease the correlations of subsequently generated trajectories we use replica exchange between N_rep = 8 replicas with different values of {x_0}. Hence, a single cycle consists of two consecutive steps: (i) every replica generates a new trajectory, which is either accepted (and replaces the previous trajectory) or rejected, and (ii) trajectories are swapped between replicas. To obtain good mixing we attempt 8 5 swaps between all of the replicas, not only neighbors. Data has been acquired from two independent runs, i.e., two independent seed trajectories (except K = 50, which is from one run). For a single run we let the trajectories relax for N_x cycles and then record N_t trajectories. Table I shows an overview of the data gathered to produce Figs. 1 and 2.
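The umbrella bias and the replica-swap acceptance can be sketched as follows. The function names and the curvature parameter alpha are our own illustrative choices; the acceptance rule is the standard Metropolis criterion for exchanging configurations between two biased ensembles.

```python
import math
import random

def umbrella(x, x0, alpha=1.0):
    """Quadratic bias w(x) = alpha * (x - x0)^2, as in Eq. (B1)."""
    return alpha * (x - x0) ** 2

def swap_accept(x_i, x_j, x0_i, x0_j, alpha=1.0, rng=random):
    """Metropolis acceptance for swapping trajectories (with order
    parameters x_i, x_j) between two umbrella replicas centered at
    x0_i and x0_j:
        min(1, exp(-[w_i(x_j) + w_j(x_i) - w_i(x_i) - w_j(x_j)]))."""
    d = (umbrella(x_j, x0_i, alpha) + umbrella(x_i, x0_j, alpha)
         - umbrella(x_i, x0_i, alpha) - umbrella(x_j, x0_j, alpha))
    return d <= 0 or rng.random() < math.exp(-d)
```

Swapping between all replica pairs rather than only neighbors, as described above, simply means proposing such moves for arbitrary (i, j).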

Appendix C: MBAR
To calculate distributions and expectation values from raw data we use Shirts and Chodera's multistate Bennett acceptance ratio (MBAR) method [46] and its extension to path ensembles [47]. For completeness we briefly summarize the method. We solve

e^{-f_i} = Σ_{j,n} e^{-w_i(x_jn)} / [ Σ_k N_k e^{f_k - w_k(x_jn)} ]

self-consistently for the set of "free energies" {f_i}. Here, w_i(x) is the weight function corresponding to replica i, x_jn is the value of the order parameter C along the nth trajectory of replica j, and N_k is the number of trajectories from replica k. While in principle MBAR provides an error estimate, it requires independent samples, whereas consecutive trajectories obtained using the method described in the previous section are highly correlated. The statistical inefficiency of replica j is g_j = 1 + 2τ_j, where τ_j is the correlation time (in units of samples) of some representative observable (here we use c). One possibility to obtain independent samples is to subsample the data series with stride g_j. Here we use all data but weigh replicas according to their relative statistical inefficiencies. Errors are estimated by splitting the data into chunks of N_t = 10,000 trajectories and calculating the standard error of expectation values. Expectation values of an observable A are calculated through

⟨A⟩_w = Σ_{j,n} A(x_jn) e^{f - w(x_jn)} / [ Σ_k N_k e^{f_k - w_k(x_jn)} ],

where w(x) and f can correspond to one of the replicas, i.e., w = w_i and f = f_i. The advantage of MBAR is that we can employ an in principle arbitrary weight function w(x) (given sufficient statistical weight e^{-w(x)} of the sampled data) with

e^{-f} = Σ_{j,n} e^{-w(x_jn)} / [ Σ_k N_k e^{f_k - w_k(x_jn)} ].   (C1)

Probabilities p_i = ⟨χ_i⟩ are calculated as the expectation value of an indicator function χ_i(x), which is 1 if the value x falls into bin i and 0 otherwise. In particular, the distributions shown in Fig. 1 correspond to the unbiased ensemble P_0[X] with w(x) = 0.
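The self-consistent equations above can be iterated directly; a generic log-space sketch (our own, not the authors' code — in practice one would use an established MBAR implementation):

```python
import numpy as np

def mbar_free_energies(u, n_samples, tol=1e-10, max_iter=10_000):
    """Self-consistent MBAR iteration for the replica free energies.

    u[i, n] = w_i(x_n): bias of replica i evaluated at pooled sample n
    (all replicas' samples concatenated); n_samples[j] = number of
    samples drawn from replica j.  Iterates
        exp(-f_i) = sum_n exp(-u[i,n]) / sum_k N_k exp(f_k - u[k,n])
    in log space until convergence, pinning f_0 = 0."""
    R, _ = u.shape
    f = np.zeros(R)
    N = np.asarray(n_samples, dtype=float)
    for _ in range(max_iter):
        # log of the per-sample denominator sum_k N_k exp(f_k - u[k,n])
        log_den = np.logaddexp.reduce(np.log(N)[:, None] + f[:, None] - u, axis=0)
        f_new = -np.logaddexp.reduce(-u - log_den[None, :], axis=1)
        f_new -= f_new[0]
        if np.max(np.abs(f_new - f)) < tol:
            return f_new
        f = f_new
    return f
```

For a replica whose bias differs from the reference by a constant c, the converged free-energy difference is exactly c, which provides a quick sanity check.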
The curves shown in Fig. 2 for the mean value ⟨c⟩_s are obtained by using the weight function w(x) = sx in Eq. (C1). In order to sample trajectories at fixed s we use this weight function instead of Eq. (B1) for a chain of replicas with different s values ranging from s = 0 to s = 0.01. To obtain a set of independent trajectories we keep only every 1000th trajectory for analysis.

Appendix D: Orientational order
To quantify orientational order we follow Ref. [48]. For each particle k a complex vector

q_6m(k) ≡ (1/N_b) Σ_{k′} Y_6m(θ_{kk′}, φ_{kk′})

is defined, where the Y_lm are spherical harmonics and the angles θ_{kk′} and φ_{kk′} describe the orientation of the displacement vector between particles k and k′ with respect to a fixed reference frame. The sum runs over all N_b neighbors in the first coordination shell with radius 1.42. The normalized scalar product of the q-vectors is

S(k, k′) ≡ Σ_{m=-6}^{6} q_6m(k) q*_6m(k′) / [ (Σ_{m=-6}^{6} |q_6m(k)|²)^{1/2} (Σ_{m=-6}^{6} |q_6m(k′)|²)^{1/2} ].
The average of S(k, k′) over the neighbors of k,

ψ_6(k) ≡ (1/N_b) Σ_{k′} S(k, k′),   (D1)

is the bond order parameter. It is ψ_6 = 1 for a particle in a perfect crystal and acquires a broad distribution with mean 0.2-0.3 for particles in a disordered environment.
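Given the q-vectors, the final averaging step is a normalized inner product. In this sketch the computation of q_6m from spherical harmonics and the neighbor lists are assumed done elsewhere, and we average the real part of S(k, k′), which is one common convention.

```python
import numpy as np

def psi6_from_q(q, neighbors):
    """Bond-order parameter psi_6(k) from precomputed q-vectors.

    q: (N, 13) complex array with q[k, m+6] = q_6m(k).
    neighbors: list of index sequences, neighbors[k] = first-shell
    neighbors of particle k.  S(k, k') is the normalized scalar
    product of the q-vectors; psi_6(k) averages Re S(k, k') over
    the neighbors of k."""
    norm = np.linalg.norm(q, axis=1)
    S = (q @ q.conj().T).real / np.outer(norm, norm)   # S(k, k') matrix
    return np.array([S[k, nb].mean() for k, nb in enumerate(neighbors)])
```

For identically oriented local environments all q-vectors coincide, S(k, k′) = 1, and ψ_6 = 1, reproducing the perfect-crystal limit quoted above.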