Mathematics of small stochastic reaction networks: A boundary layer theory for eigenstate analysis

We analyze the stochastic dynamics of a reversible bimolecular reaction A + B ↔ C called the “trivalent reaction.” This reaction is of a fundamental nature and is part of many biochemical reaction networks. The stochastic dynamics is given by the stochastic master equation, which is difficult to solve except when the equilibrium state solution is desired. We present a novel way of finding the eigenstates of this system of difference-differential equations, using perturbation analysis of ordinary differential equations arising from approximation of the difference equations. The time evolution of the state probabilities can then be expressed in terms of the eigenvalues and the eigenvectors.


I. INTRODUCTION
The elementary chemical reaction which we will refer to as the "trivalent reaction" holds foundational importance as a repeating unit within reaction network models of many biochemical processes. Most existing studies of this reaction have concentrated on the deterministic approximation and on its equilibrium state stochastic solution. The deterministic analysis does not reflect the underlying stochasticity of the reaction and is often insufficient to completely understand the temporal progression of the process. For this reason we aim to understand the stochastic dynamics of such reactions, which describes their more general behavior.
Earlier works on the stochastic model for some elementary bimolecular reactions are due to Renyi 3 and Darvey et al., 2 where the idea of probability generating functions was used and a closed form solution for the equilibrium state was provided. McQuarrie 4 presents a comprehensive review of the stochastic models for several such chemical reactions. Laurenzi 5 gave an exact closed form solution of the stochastic master equation for the trivalent reaction by using the Laplace transform. Recently, Lee and Kim 1 have provided an exact solution for a slightly more general class of single-reaction systems. However, the resulting expression is complex and inefficient to evaluate for large numbers of molecules.
In this paper we present a perturbation theoretic approach, valid for relatively large numbers of molecules, to describe the eigenstates of the stochastic master equation for the trivalent reaction; it yields an efficient way to calculate the probability of any specific state of the reaction as a function of time.

A. Mass action kinetics
Given that the numbers of particles A, B, and C at time t are α − n, β − n, and n, respectively, the mean number of particles n should evolve roughly according to the deterministic rate equation dn/dt = k_f(α − n)(β − n) − k_r n, or (to make contact with the notation used subsequently) dn/dτ = (α − n)(β − n) − κn, where τ = k_f t and κ = k_r/k_f. The law of mass action gives the following relation in dynamic equilibrium: [A][B]/[C] = κ = k_r/k_f, where [·] stands for concentration. This could be restated in terms of numbers of molecules of A and B (after adjusting κ for the volume) as (α − n)(β − n) = κn. In the special case when α = β, we have (α − n)² = κn ⇔ n² − (2α + κ)n + α² = 0. Define x = n/α ≈ (n + 1)/(α + 2) and μ = κ/α.
Then the law of mass action above implies (1 − x)² = μx up to corrections of order 1/α. For large values of α this suggests the continuum condition (1 − x)² = μx. The mass action solutions are given by x± = ((2 + μ) ± √(μ² + 4μ))/2, and the mass action condition could be expressed in the above notation as μx = (1 − x)². For all μ > 0 the value of x+ > 1, so this root is less important than x− ∈ (0, 1), which is the actual mass action solution. Note the special point μ = 1/2, for which x− = 1/2. This symmetric situation will be the default when parameters are required.
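As a quick numerical check of these formulae (a Python sketch; the function name is ours, the paper publishes no code), the two roots can be computed directly:

```python
# Mass-action fixed points of A + B <-> C with alpha = beta, in the scaled
# variables x = n/alpha and mu = kappa/alpha: the condition mu*x = (1 - x)^2
# is the quadratic x^2 - (2 + mu)*x + 1 = 0.
import math

def mass_action_roots(mu):
    """Return (x_minus, x_plus), the roots of x^2 - (2 + mu)*x + 1 = 0."""
    disc = math.sqrt(mu * mu + 4.0 * mu)
    return ((2.0 + mu) - disc) / 2.0, ((2.0 + mu) + disc) / 2.0
```

For μ = 1/2 this gives x− = 1/2, and since the product of the two roots equals the constant term 1, x+ = 1/x− > 1 whenever x− < 1.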

A. Chemical master equation
Let α and β be the initial numbers of molecules of A and B, respectively. By conservation of mass, if n is the number of molecules of C at time t, starting initially from 0, then α − n and β − n are the numbers of molecules of A and B at time t, respectively. Thus a single variable, namely n, is sufficient to represent the state of the system at time t. Let p(n) denote the probability that the reaction is in state n at time t.
The chemical master equation (CME) for the trivalent reaction is given by the following relation (see a recent work by Laurenzi 5 for a brief derivation; for a rigorous derivation and complete description of the underlying assumptions see Gillespie 6 and references therein): dp(n)/dt = k_f(α − n + 1)(β − n + 1)p(n − 1) − [k_f(α − n)(β − n) + k_r n]p(n) + k_r(n + 1)p(n + 1), together with the conditions p(−1) = 0 = p(α + 1). On using τ = k_f t, κ = k_r/k_f and on rewriting the scaled τ as t, i.e., t ← τ, we get the following simplification for the CME: ṗ(n) = (α − n + 1)(β − n + 1)p(n − 1) − [(α − n)(β − n) + κn]p(n) + κ(n + 1)p(n + 1). (2)
Note that the rescaling of time does not affect the calculation of eigenvectors in Secs. II B-IV E, so there is no harm in writing t in place of τ. The CME could be written in terms of the transition rate matrix W (the generator of the continuous-time Markov process) which, from Eq. (2), is clearly a tridiagonal matrix, with super-diagonal entries b_i = κ(i + 1) and with sub-diagonal and diagonal entries read off from the gain and loss terms of Eq. (2), for i = 0, 1, · · ·, α.
Matrix W could be represented in terms of birth and death operators that account for the inflow and outflow of probability in a particular state. It can be verified that every column of this matrix sums to zero, which reflects the conservation of total probability.
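A minimal sketch of this generator in Python (our own construction from the scaled CME, Eq. (2); not code from the paper) makes the zero-column-sum property easy to verify:

```python
import numpy as np

def generator_matrix(alpha, beta, kappa):
    """Tridiagonal generator W of the scaled CME on states n = 0..alpha
    (assuming alpha <= beta). Column n carries the total outflow
    -[(alpha-n)(beta-n) + kappa*n]; the off-diagonals carry the inflows."""
    N = alpha + 1
    W = np.zeros((N, N))
    for n in range(N):
        W[n, n] = -((alpha - n) * (beta - n) + kappa * n)
        if n > 0:
            W[n, n - 1] = (alpha - n + 1) * (beta - n + 1)  # birth n-1 -> n
        if n < alpha:
            W[n, n + 1] = kappa * (n + 1)                   # death n+1 -> n
    return W
```

Each column's diagonal outflow is exactly balanced by the inflows it feeds into its neighbors, so `W.sum(axis=0)` vanishes identically.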
Let the pair (λ, P_λ), where P_λ = [p_λ(0), . . . , p_λ(α)]^T, denote an eigenvalue-eigenvector pair for the above system, defined by the relation W P_λ = λP_λ. When used in Eq. (2), this leads to the three-term recurrence relation (α − n + 1)(β − n + 1)p_λ(n − 1) − [(α − n)(β − n) + κn]p_λ(n) + κ(n + 1)p_λ(n + 1) = λp_λ(n), (3) with boundary conditions p_λ(−1) = p_λ(α + 1) = 0. This recurrence relation could be solved for an eigenvalue-eigenvector pair recursively. However, this method is not numerically efficient and does not give a closed form solution for the eigenvector. We present a matrix theoretic approach to the same problem, which takes advantage of the underlying structure in the recurrence relation and is numerically more stable and efficient.

B. Eigenvalues from matrix theory
From previous discussion, an eigenstate of the above system can be defined as a pair of eigenvalue λ and eigenvector P λ that satisfies the relation W P λ = λP λ .
The special form of the matrix W (a tridiagonal matrix) lends an easier way of evaluating the eigenvalues and eigenvectors. The matrix W defined above is diagonally similar to a symmetric matrix Ŵ, i.e., W = DŴ D^{−1} for some diagonal matrix D. Indeed, by construction, D = diag(d_0, d_1, · · ·, d_α) provides such a diagonal matrix, where d_i is defined by the first order recurrence relation d_i² = d_{i−1}² (α − i + 1)(β − i + 1)/(κi), with d_0 = 1. (4) Every real symmetric matrix is diagonalizable in the sense that there exists an orthogonal matrix Q (see p. 104 of Horn and Johnson 10 ) such that Ŵ = QΛQ^T, where the diagonal matrix Λ = diag(λ_0, λ_1, · · ·, λ_α) and the λ_i are the eigenvalues of Ŵ. We will assume henceforth that these eigenvalues are arranged in increasing order of magnitude.
Once these eigenvalues and associated eigenvectors are known, the solution of the master equation could be given by P(t) = DQ e^{Λt} Q^T D^{−1} P(0), where the matrix e^{Λt} is easily calculated as the diagonal matrix e^{Λt} = diag(e^{tλ_0}, e^{tλ_1}, · · ·, e^{tλ_α}).
On solving the recurrence relation in Eq. (4), we get the following elements for the matrix D: d_i = √(α! β!/((α − i)!(β − i)! κ^i i!)). In the case when α = β, we get d_i = (α!/(α − i)!)/√(κ^i i!). The symmetric matrix Ŵ has the same diagonal as W, with off-diagonal entries √((α − i)(β − i)κ(i + 1)); in the case when α = β, these reduce to (α − i)√(κ(i + 1)). The symmetric tridiagonal form of the matrix Ŵ lends itself to faster and more accurate algorithms for eigenvalue calculation. We have used steps of the implicit shift QR algorithm to find these eigenvalues. 9 Similarity of W and Ŵ implies that W has exactly the same eigenvalues.
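In SciPy the symmetrized eigenvalue computation can be sketched as follows (we use the library routine scipy.linalg.eigh_tridiagonal in place of the paper's implicit-shift QR steps; the function name cme_eigenvalues is ours):

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

def cme_eigenvalues(alpha, kappa):
    """Eigenvalues of W for alpha = beta via the symmetric similar matrix
    W_hat: same diagonal as W, off-diagonal entries
    (alpha - i) * sqrt(kappa * (i + 1)). Returned in descending order,
    so the equilibrium eigenvalue 0 comes first."""
    n = np.arange(alpha + 1)
    diag = -((alpha - n) ** 2 + kappa * n).astype(float)
    i = np.arange(alpha)
    off = (alpha - i) * np.sqrt(kappa * (i + 1))
    vals, _ = eigh_tridiagonal(diag, off)
    return np.sort(vals)[::-1]
```

By similarity these are also the eigenvalues of W: the largest is zero (the equilibrium mode) and all others are strictly negative.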
The eigenvector associated with an eigenvalue can then be found by solving the recurrence relation in Eq. (3) for that eigenvalue (accumulating the transformations in the QR algorithm would also have given us the eigenvectors; however, the numerical accuracy is not as good).
Once the eigenvectors have been found, the solution of the CME with initial condition P(0) = [p(0), p(1), . . . , p(α)]^T could be given more explicitly as p(k, t) = Σ_i c_i e^{λ_i t} p_{λ_i}(k), with coefficients c_i determined by the initial condition. (6) If only the first few eigenvectors of W are known, then the pseudo-inverse could be used to express the solution of the CME as P(t) ≈ S e^{Λ_S t} S† P(0), (7) where S is a rectangular matrix whose columns are the first few eigenvectors of W, Λ_S is the diagonal matrix of the corresponding eigenvalues, and S† denotes the pseudo-inverse of S. It should be noted that, for all sufficiently large times t > 0, the first few terms suffice to provide a good approximation to the solution p(k, t) in Eq. (6), as higher eigenstates make very small contributions due to their large negative eigenvalues. This fact allows us to focus on just the first few eigenstates for the calculation of an approximate solution to the CME.
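A sketch of this spectral propagation, including the truncated pseudo-inverse variant of Eq. (7), is below (our own illustration; the names and the use of numpy.linalg.pinv are assumptions, not the paper's code):

```python
import numpy as np

def evolve_cme(W, p0, t, k=None):
    """Spectral solution of dP/dt = W P. With k given, keep only the k
    slowest eigenpairs (eigenvalues closest to 0) and use the pseudo-inverse
    S-dagger in place of the full inverse, as in Eq. (7)."""
    vals, vecs = np.linalg.eig(W)
    order = np.argsort(-vals.real)            # slowest modes first
    vals, vecs = vals[order], vecs[:, order]
    if k is not None:
        vals, vecs = vals[:k], vecs[:, :k]
    coeff = np.linalg.pinv(vecs) @ p0         # expansion coefficients
    return (vecs @ (np.exp(vals * t) * coeff)).real
```

With all eigenpairs retained the propagation conserves total probability; with a truncated basis it approximates the solution well once the fast modes have decayed.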
A motivating idea behind finding an approximate analytic solution to the trivalent reaction in the context of a large reaction network is the operator splitting method for solving differential equations. Our proposed analysis could be used to solve the CME for more complex reaction networks involving a bimolecular reaction, in conjunction with the splitting method that has been suggested by Jahnke and Altintan. 11 The central notion in this strategy is to subdivide the reactions in a large reaction network into subsets of uncoupled reaction channels, to which either an explicit analytic solution or a stochastic simulation algorithm could be applied over small time intervals. The solution of the entire stochastic system then evolves as a flow of nested compositions of two or more operators working over small time intervals. Jahnke and Altintan 11 also show that the error due to the splitting is comparable to the natural fluctuations arising in any stochastic system. As explicit solutions already exist for several reactions such as monomolecular, catalytic, and auto-catalytic reactions, 12 this technique could be used with the stochastic simulation algorithm for solving larger reaction networks.
An alternative approach for solving the CME for large reaction networks is the finite state projection suggested by Munsky and Khammash. 13 In this method the solution of the CME evolves from a smaller subset of the state space to a larger one, and the error accrued in the process is shown to be within permissible limits. However, this strategy suffers from the computational expense of calculating the exponential of the generator matrix or a submatrix, particularly in the parameter regime of medium to large but finite α (and thus particle number) studied in this paper. Indeed, calculating such matrix exponentials approximately for larger systems is one of the potential benefits of the perturbation theory method proposed in this paper, so perturbation theory methods could be used to extend the domain of applicability of finite state projection methods.
We believe that an explicit (albeit approximate) solution for a bimolecular reaction could be beneficial as it augments the set of reactions for which an analytic solution is known. This could be used together with existing methods such as operator splitting and/or finite state projection for efficient solutions of larger reaction systems.

C. Numerical evaluation of eigenstates
Figure 1(a) displays the plots of −λ_i/α², i = 0, 1, . . . , α for α = 1, 2, . . . , 100 when α = β and μ = 0.5, and Figure 1(b) gives a plot of the reciprocal of the differences of consecutive eigenvalues for α = 100. We observe the following from these two plots.
1. There is a zero eigenvalue, and all other eigenvalues of the system are nonpositive. This signifies, according to Eq. (6), that as time progresses the contributions of higher eigenstates diminish, and the system evolves to the equilibrium solution associated with the zero eigenvalue and its eigenvector.
2. The eigenvalues for a given α interlace with those for the next α. This follows from the interlacing theorem for bordered matrices (see p. 185 of Horn and Johnson 10 ): the jth nonzero eigenvalue of W ∈ R^{N×N} is sandwiched between the jth and (j + 1)st eigenvalues of W ∈ R^{(N+1)×(N+1)}. This implies that the solution of the CME for higher α values could be seen as a perturbation of the solutions for the preceding α values.
3. For higher α values, the plots of −λ_i/α² approach horizontal lines. This motivates us to fit extrapolation curves for the ith eigenvalues which are quadratic in α for large α. Moreover, Figure 1(b) shows a peak in the middle, indicating a dense distribution of eigenvalues near the median. It also signifies a transition in the behavior of the differences between consecutive eigenvalues, which decrease to the left of the peak and increase to the right. This suggests the need for two different kinds of extrapolation curves to fit the eigenvalues before and after the peak. We perform these curve fittings in Sec. IV E.
Once the eigenvalues of the matrix W are found, the associated eigenvectors could be found by using these eigenvalues in Eq. (3) and solving the resulting recurrence relation starting from the initial conditions p_λ(−1) = 0 and p_λ(0) = 1.
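In code, this forward substitution reads as follows (a sketch for the α = β case; it is simply Eq. (3) solved for p_λ(n + 1)):

```python
import numpy as np

def eigenvector_by_recurrence(alpha, kappa, lam):
    """Solve the three-term recurrence of Eq. (3) for alpha = beta,
      (alpha-n+1)^2 p(n-1) - [(alpha-n)^2 + kappa*n] p(n)
                           + kappa*(n+1) p(n+1) = lam * p(n),
    starting from p(-1) = 0, p(0) = 1."""
    p = np.zeros(alpha + 1)
    p[0] = 1.0
    for n in range(alpha):
        prev = p[n - 1] if n > 0 else 0.0
        p[n + 1] = ((lam + (alpha - n) ** 2 + kappa * n) * p[n]
                    - (alpha - n + 1) ** 2 * prev) / (kappa * (n + 1))
    return p
```

For λ = 0 this reproduces the equilibrium eigenvector, and for the slowest nonzero eigenvalue it produces a vector with exactly one sign change, consistent with Sturm oscillation theory for symmetric tridiagonal matrices.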
Figures 2(a)-2(c) are plots of the first three eigenvectors on a log scale for α = 50 and κ = α/2 = 25, evaluated from the recurrence relation. Experiments suggest accumulation of error toward the right end of the plot; however, this error is very small in relative terms.

D. Continuum differential equation
In Secs. III and IV, we present a novel approach to find approximations to the eigenvectors using perturbation theory. The basic idea is to construct a continuum differential equation from the recurrence relation, Eq. (3), and solve the resulting boundary value problem, which happens to be a singular perturbation problem. For this, we first derive the difference equation from the recurrence relation.
We define the forward difference operators Δp(n) = p(n + 1) − p(n) and Δ²p(n) = p(n + 2) − 2p(n + 1) + p(n). These could be reversed to express shifted functions in terms of unshifted ones and forward difference operators: p(n + 1) = (1 + Δ)p(n) and p(n + 2) = (1 + 2Δ + Δ²)p(n). Starting with the recurrence relation in Eq. (3), written with all shifts forward, and using the above forward differences, we collect terms to obtain a difference equation for p_λ(n) in powers of Δ. To convert this difference equation to an approximating differential equation in x ≈ n/α, we use the approximations Δ ≈ (1/α) d/dx and Δ² ≈ (1/α²) d²/dx². After simplification, dividing both sides by α, and taking the special case β = α for simplicity, a large-α approximation to the recursion equation is given by an ordinary differential equation (ODE) eigenvalue problem, Eq. (9), with unknown eigenvalues η, together with boundary conditions on p at x = 0 and x = 1. Accurate extrapolation formulae for the eigenvalues of the recurrence relation will be very helpful in setting η so as to satisfy the boundary conditions. We will describe this in Sec. IV E.

A. Ground state solution
The ground state solution corresponds to the lowest eigenstate, λ = 0, or equivalently η = 0. In this case, integrating both sides of the boundary value problem (BVP) in Eq. (9) with respect to x gives a first order linear equation whose solution involves two constants, C and C₂. Using the condition p(1) = 0 gives C₂ = 0, leaving p_ground(x) determined up to the constant C, which could be taken as a normalization constant so that the sum of the state probabilities in steady state is 1. We observe that there is a strong peak in p_ground(x) at x ≈ x−. As α → ∞, we expect p(x) to approach a delta function at x = x− (which depends on μ).
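The continuum ground state can be checked against the exact discrete steady state, which for this birth-death chain follows from detailed balance (a Python sketch of our own; working in log space avoids overflow for large α):

```python
import numpy as np

def equilibrium_distribution(alpha, kappa):
    """Exact steady state of the CME for alpha = beta, via detailed balance:
    p(n)/p(n-1) = (alpha - n + 1)^2 / (kappa * n). Normalized to sum to 1."""
    logp = np.zeros(alpha + 1)
    for n in range(1, alpha + 1):
        logp[n] = logp[n - 1] + 2 * np.log(alpha - n + 1) - np.log(kappa * n)
    p = np.exp(logp - logp.max())   # stabilize before normalizing
    return p / p.sum()
```

For α = 100 and μ = 0.5 the distribution indeed peaks within a couple of states of n = x−α = 50, in agreement with the strong peak of p_ground(x) at x ≈ x−.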

IV. BOUNDARY LAYER ANALYSIS OF HIGHER EIGENSTATES
In this section, we present a perturbation theoretic approach to find the approximate solution of the BVP, Eq. (9), in the case when η ≠ 0. It should be noted that conversion of Eq. (9) into Sturm-Liouville form guarantees a total of α nonzero eigenvalues for the above problem, in strictly increasing order of magnitude. The eigenvector associated with the kth eigenvalue has k zeroes, so the right boundary condition changes sign alternately according to (−1)^k. Our numerical experiments with the recurrence relation, Eq. (3), also suggest similar behavior.
It is not possible to solve this boundary value problem analytically in closed form for nonzero values of η, and standard numerical methods also struggle. However, perturbation methods, especially the boundary layer approach, can be used successfully to provide a closed form approximate solution to this problem. Indeed, this is a boundary layer problem because of the small coefficient μ/α of the highest order derivative for large α.
We apply boundary layer analysis to this problem, aiming to find an approximation to the solution. The narrow regions in the interval [0, 1] where p′(x) and p′′(x) change very rapidly in comparison to p(x) are called the boundary layers, in contrast to the outer layer where this change is not so rapid. We can recast Eq. (11) in a standard form, Eq. (12), with small parameter ε = μ/α. Following are some salient points of this boundary value problem: 1. It has a regular singularity at x = 0. The basic idea behind boundary layer analysis of a singularly perturbed differential equation (see Bender and Orszag 8 ) is to generate a solution composed of an outer solution P(x, ε) and a boundary layer solution P(X, ε), where X = x/ε is called the stretched variable. The outer solution is given as an asymptotic power series in ε, P(x, ε) = Σ_k ε^k P_k(x), and the boundary layer solution by a power series P(X, ε) = Σ_k ε^k P_k(X). The boundary layer solution is asymptotically matched to the outer solution so that it satisfies appropriate initial conditions and decays exponentially to zero as X goes to infinity (cf. the review paper 7 ). For most practical purposes, finding the leading order approximation, i.e., the first few terms in the power series, provides a good enough approximation to the solution. In this paper we mainly concentrate on finding the leading outer solution P_0(x) and boundary layer correction P_0(X), unless it is imperative to find higher order terms.

A. Outer layer solution
The outer layer is the region where terms involving the small parameter ε = μ/α could be neglected. Doing so yields a first order differential equation; for the first few lower eigenvalues, the term η/α ∼ O(ε) can also be neglected. On rearranging, integrating with respect to x on both sides, and combining terms, we get the outer solution, Eq. (14), where x+, x− = ((2 + μ) ± √(μ² + 4μ))/2 are the zeroes of the denominator μx − (1 − x)².
The constant C is found by applying the left or right boundary conditions providing two outer layer solutions to the left and right of x − (obviously, in the case when x − ∈ (0, 1)).
If we assume that lim_{x→0} p_out(x) = lim_{x→1} p_out(x) = 1/α, we get the corresponding values of C, where "state" refers to the index of the eigenvalue.
To conform to the calculations in Sec. IV B for the purpose of asymptotic matching, however, let us denote the left and right outer solutions by Eqs. (15) and (16), respectively. A simple calculation suffices to verify that these two outer solutions are the same as Eq. (14) once the two values of C are taken into account.

B. Boundary layer
The nature and complexity of the boundary layer largely depend on the coefficient functions f(x, ε) and g(x) in Eq. (12). The coefficient function f(x, ε) has a simple zero at x = x−(ε). In a small neighborhood around x = x−, we use linear approximations of f and g, introducing the coefficients β and γ, which for large α determine the index ν = β/γ − 1 used below. In the small neighborhood of x− we thus have a simplified ODE with linear coefficients. Note that the value of ν is crucial: numerical experiments suggest that for the smaller eigenvalues ν takes nearly integral values. The following solution, which is based on Bender and Orszag, 8 assumes that ν is non-integer.
Clearly, γ = η/(x+ − x−) > 0. Following the discussion in Bender and Orszag 8 (p. 456), we have only an internal boundary layer at x = x−. Defining the stretched variable X by δX = x − x− and taking δ = √ε, with the linear approximations of f and g from above, the Liouville transformation P = e^{−γX²/4} W followed by the substitution Z = √γ X leads to the Weber equation. The solution of this equation is given in terms of the parabolic cylinder functions D_ν(Z) and D_ν(−Z) (when ν is non-integral and positive) as W = C₁D_ν(Z) + C₂D_ν(−Z), where Z = √γ X = (x − x−)√(αγ/μ) and ν = β/γ − 1. The constants C₁ and C₂ are found by applying asymptotic matching with the outer solutions in Eqs. (15) and (16) obtained in Sec. IV A.
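The parabolic cylinder functions are available as scipy.special.pbdv, so the Weber-equation part of the layer solution can be evaluated directly (a sketch; the matching constants c1, c2 are inputs here, to be fixed by asymptotic matching as described above):

```python
import numpy as np
from scipy.special import pbdv  # pbdv(v, x) returns (D_v(x), D_v'(x))

def weber_solution(Z, nu, c1, c2):
    """General solution c1*D_nu(Z) + c2*D_nu(-Z) of the Weber equation,
    valid when nu is not a non-negative integer (otherwise D_nu(Z) and
    D_nu(-Z) are linearly dependent and a further correction is needed)."""
    Z = np.atleast_1d(np.asarray(Z, dtype=float))
    return np.array([c1 * pbdv(nu, z)[0] + c2 * pbdv(nu, -z)[0] for z in Z])
```

For ν = 0 the combination reduces to D_0(Z) = e^{−Z²/4}, a convenient sanity check, though integer ν is precisely the degenerate case flagged below.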
Note that in the case when ν is a positive integer, D ν (Z) and D ν (−Z) are no longer linearly independent. This leads to an overdetermined system (see Bender and Orszag 8 for details) and further correction is needed to find the correct structure of the solution.

C. Uniform approximation
The uniform approximation over the entire interval is generally defined as p_unif(x) = p_out(x) + p_in(x) − p_match(x), where p_out(x), p_in(x), and p_match(x) are the outer, inner, and matched solutions, respectively.
Uniform approximation for a problem related to Eq. (12) has been provided by Bender and Orszag 8 (p. 458) using asymptotic matching. A detailed and rigorous proof is provided by Wong and Yang 16 for a slightly more general formulation of the problem with boundary conditions y(x_l) = A and y(x_r) = B.
On combining the above results in Eqs. (15)-(17) with the help of the results provided by Bender and Orszag 8 and Wong and Yang, 16 we get the uniform approximation, Eq. (18); evaluating the integrals involved yields its explicit form ("state" refers to the index of the eigenvalue). See Bender and Orszag 8 for a detailed derivation.

D. Perturbation method for eigenvectors
The uniform approximation in Eq. (18) could be used to find the eigenvector associated with a given eigenvalue; we call this the perturbation method. The nth element of the eigenvector is found by evaluating Eq. (18) at x = n/α. For the sake of comparison with the recurrence method, each eigenvector is normalized by the magnitude of its largest element. Figure 4(a) shows a plot of the first eigenvector by the recurrence method (red dashes) and the perturbation method (blue dots) for α = 200. It is observed from several such plots that the eigenvector found by the perturbation method is slightly shifted leftward from the one found by the recurrence method.
We believe that this shift results from our initial approximation (n + 1)/(α + 2) ≈ n/α = x, which we used to get a manageable differential equation from the recurrence relation. In doing so we are disregarding higher order terms in 1/α: (n + 1)/(α + 2) = x + (1 − 2x)/α + O(1/α²). It is also observed from experiments that this horizontal shift could be minimized by a small rightward shift in the values of x used in evaluating Eq. (18). The expression x = (n − 0.2k)/α − 7.0k/α² for the nth element, where k is the index of the eigenvalue, incorporates such an empirical shift. Table I gives the relative error, Eq. (19), in the calculation of the first five eigenvectors for different α values by the perturbation method. In every row of the table, the first line gives the error with the empirical shift and the second line, in parentheses, the error without any shift. It is observed that the empirical shift leads to a reduction in error; better accuracy could be achieved by taking more terms in the shift. The plots in Figures 4(b)-4(d) compare the eigenvectors found by solving the recurrence relation (red) to those obtained by the perturbation method with the empirical shift (blue) for the first three higher eigenstates. The plots by the perturbation calculation agree well with those by the recurrence calculation once the empirical shift is incorporated. The plots in Figure 5 illustrate, very schematically, a possible application of our perturbation theory eigenvector results to compute time-varying CME solutions according to Eq. (6), or the pseudoinverse approximation thereof, Eq. (7), of Sec. II B.
An initial condition consisting of a Gaussian probability distribution is evolved forward in time using the 10 lowest-lying eigenstates (out of a total of α = 100), using (a) the exact solutions (i.e., arbitrary-precision, without use of perturbation theory) from the recurrence equation or, alternatively, (b) their perturbation theory approximations. There is essentially no difference in solution quality, and both solutions converge quickly in simulated time to the equilibrium distribution given by the zero-eigenvalue eigenvector, which can be accurately computed by perturbation theory (Figure 3(a)). Errors such as small negative probabilities at early times in both histories are due to the projection down to 10 out of 100 eigenvectors (a), rather than to errors in the perturbation theory approximation of those eigenvectors (b). Including all 100 exact eigenvectors eliminates these small constraint violations but otherwise does not substantially change the results (plots not shown).
The Gaussian initial condition illustrated in Figure 5 could be propagated still more accurately with higher order terms in perturbation theory, which we believe would require a deeper investigation and understanding of the spectrum shown in Figure 1(b) to the left of the spectral density peak, i.e., for longer time scales, than we yet have. (Indeed, it is possible that the peak or scale-transition eigenvalues correspond roughly to those eigenvectors whose effective support first reaches the ends of the interval.) Improvements in this direction could yield competitive predictive CME-solving algorithms. For example, initial conditions given by globally low-order moment approximations (e.g., Gaussian) could be evolved forward accurately as in the foregoing example, perhaps within an operator splitting framework.

E. Extrapolation on eigenvalues
We have seen in Secs. II-IV D that our approach for solving the BVP by the perturbation method relies on the values of the parameters α and μ. There is another parameter, namely η = λ/α, that is unknown, and we need a reliable method to evaluate it. The first idea that comes to mind is direct calculation of the eigenvalues followed by η_i = λ_i/α. However, doing so for large values of α (as α → ∞) is not always practical. For this reason, we have used extrapolation to find the η_i corresponding to the first few higher eigenstates.
In the foregoing examples, we have used η values found by applying extrapolation to the collection of (α, η_i) points obtained from the matrix method of Sec. II B. The motivation for choosing a particular functional form for the extrapolation comes from observing the plots of η_i as functions of α shown in Figures 6(a) and 6(b), and from Figure 1. We observe that as α → ∞, η_1 and η_5 approach horizontal lines. This behavior holds for the low lying eigenvalues; however, it does not hold for the higher ones. For the last few eigenvalues, η is a linear function of α with positive slope, as in Figure 6(c).
The form of extrapolation suggested by such plots (as in Figures 6(a), 6(b), and 1) falls mainly into two types: the first has a term containing 1/α, and the second has inverse exponential powers of α. After several experiments we have chosen an extrapolating function of this form for η_1 and, in general, for η_i, where u = √(μ² + 4μ), a, b, c, d, e are extrapolation parameters, and i is the index of the eigenvalue.
On applying various extrapolation fittings to the η_i values for a dataset of triplets (α, u, i), where α = 100, 101, . . . , 800 and u corresponds to μ = 0.1, 0.2, . . . , 1.0, we find the extrapolation functions that best describe the observed data (see Table II). We remarked in Sec. IV B that the foregoing derivation of the uniformly asymptotic expression is valid only when the values of the parameter ν are non-integer. Nevertheless, numerical observations suggest that the limiting values of ν as α → ∞ can indeed be very close to integer values (see Table III).

V. CONCLUDING REMARKS AND DISCUSSION
In this paper we have presented a stochastic analysis of the trivalent reaction A + B ↔ C. Unlike the deterministic analysis, which can only describe the equilibrium ground state of the reaction, stochastic analysis of the master equation explains not only the ground state but the higher eigenstates too. Our approach uses the boundary layer structure in the singularly perturbed boundary value problem that is derived as a continuum limit of the difference-differential master equation. Numerical experiments suggest that the asymptotic approximation achieved by boundary layer analysis conforms well, for the smaller nonzero eigenstates, to the actual eigenvectors found by recursion from the difference equation. Better accuracy could be obtained by taking more terms in the approximation, but this leads to further complications in the calculations and in the asymptotic matching of the outer and inner solutions. The analysis in this paper has been carried out assuming that the initial numbers of molecules of A and B are the same, that is, α = β. A very similar approach could be applied in the more general case when (for example, and without loss of generality) α ≥ β. Similar analysis could be applied to other bimolecular reactions such as A + B ↔ C + D, A + B ↔ 2C, and 2A ↔ C as well.
The perturbation theoretic approach that we have developed for the calculation of approximate eigenvectors is so far only accurate in the case of the first few eigenvalues. Its accuracy is increased if we adopt an empirical shift in the molecule number axis, possibly representing higher order effects that we have not calculated. For the completeness of this method, it will be necessary to extend the perturbation theory method to higher eigenstates also. One essential question that should be addressed is the orthonormalization of the eigenvectors, because the state probability function p(k, t) in Eq. (6) is expressed in terms of the rows of the orthogonal matrix Q. Complete orthonormalization is possible only when all the eigenvectors of the system have been calculated.
The major motivation for developing the perturbation theoretic approach for large N, the maximum number of molecules in the system, is the following. For large N we are able to approximate the steady-state probability distribution over states, and also the slowest eigenvectors (which are those that dominate convergence to equilibrium after an initial transient), all with an amount of computation that is essentially constant in N. Our method achieves this goal. That is much better scaling than what is obtained by other recursive eigenvalue calculations of which we are aware, and it is the general advantage of a 1/N-expansion perturbation theory approach.
We also discovered empirically, using arbitrary-precision arithmetic, a peak in the density of eigenvalues (Figure 1(b)) that marks an effective transition between slower and faster time scales. While we do not fully understand the reason for this phenomenon, it may provide an important advantage to future multiple-time-scale approaches in solving the master equation for small reaction networks by delimiting the set of eigenvectors required for an accurate and computationally feasible slow time scale approximation.

ACKNOWLEDGMENTS
E.M. would like to acknowledge useful discussions and preliminary calculations with Carl Bender. This work has been supported in part by National Institutes of Health (NIH) R01 GM086883 and P50 GM76516.