Slack reactants: A state-space truncation framework to estimate quantitative behavior of the chemical master equation

State space truncation methods are widely used to approximate solutions of the chemical master equation. While most methods of this kind focus on truncating the state space directly, in this work, we propose modifying the underlying chemical reaction network by introducing slack reactants that indirectly truncate the state space. More specifically, slack reactants introduce an expanded chemical reaction network and impose a truncation scheme based on desired mass conservation laws. This network structure also allows us to prove inheritance of special properties of the original model, such as irreducibility and complex balancing. We use the network structure imposed by slack reactants to prove the convergence of the stationary distribution and first arrival times. We then provide examples comparing our method with the stationary finite state projection and finite buffer methods. Our slack reactant system appears to be more robust than some competing methods with respect to calculating first arrival times.


Introduction
Chemical reaction networks (CRNs) are a fundamental tool in the modeling of biological systems, providing a concise representation of known chemical or biological dynamics. A CRN is defined by a family of chemical reactions of the form $\alpha_1 X_1 + \cdots + \alpha_n X_n \to \beta_1 X_1 + \cdots + \beta_n X_n$, where $\alpha_i$ and $\beta_i$ are the numbers of copies of species $X_i$ consumed and produced in the reaction, respectively. The classical approach to modeling a CRN is to consider the concentrations $c(t) = (c_1(t), c_2(t), \ldots, c_n(t))$, where $c_i(t)$ is the concentration of species $X_i$ at time $t$, and to use a system of nonlinear differential equations to describe the evolution of the concentrations.
Suppose we are interested in studying the typical enzyme-substrate system [30], given by the CRN in (1). Given an initial condition in which the molecular counts of both D and P are 0, a natural question to ask is how long the system takes to produce the first copy of P. Similarly, there exists a time in the future at which S and D are fully depleted, resulting in a chemically inactive system, and one can ask when this occurs. These are both quantities that classical deterministic modeling leaves unanswered: by considering only continuous concentrations, there is no well-defined way to address modeling questions at the level of single molecules, as the model assumes that all the reactions simultaneously occur within infinitesimally small time intervals.
Instead, by explicitly considering each individual copy of a molecule, we may formulate a continuous-time Markov chain. This stochastic modeling is especially important when the system contains species in low copy numbers, in which case the influence of intrinsic noise is magnified [34,11,2]. Rather than deterministic concentrations $c(t)$, we consider the continuous-time Markov chain $X(t) = (X_1(t), X_2(t), \ldots, X_n(t))$ describing the molecular count of each species $X_i$ at time $t$.
The questions regarding the enzyme-substrate system (1), such as the time of the first production of P, simply correspond to first passage times to various sets of states. For a continuous-time Markov process $X$, the first passage time to a set $K$ of system states is formally defined as $\tau = \inf\{t \ge 0 : X(t) \in K\}$. One can directly compute $E(\tau)$ by using the transition rate matrix $A$. With few exceptions (such as CRNs with only zero- or first-order reactions [28]), most chemical reaction networks of any notable complexity have an intractably large or infinite state space; that is, they exhibit the curse of dimensionality. This direct approach can, therefore, suffer from the high dimensionality of the transition rate matrix.
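As a concrete illustration of this direct approach, the sketch below computes $E(\tau)$ for a simple birth-death chain by solving the linear system $A_K m = -\mathbb{1}$, where $A_K$ is the generator restricted to the states outside the target set $K$. The birth-death rates and the truncation bound here are our own illustrative choices, not taken from the paper.

```python
import numpy as np

# Hedged sketch: mean first passage time for a birth-death chain
# (birth rate k1, death rate k2 * x), truncated at x <= M, target set K.
# Solves A_K m = -1; m[x] is then E[tau | X(0) = x].
def mean_first_passage_time(k1, k2, M, K):
    states = list(range(M + 1))
    A = np.zeros((M + 1, M + 1))
    for x in states:
        if x < M:
            A[x, x + 1] += k1          # birth: x -> x + 1
        if x > 0:
            A[x, x - 1] += k2 * x      # death: x -> x - 1
        A[x, x] = -A[x].sum()          # diagonal: negative total outflow
    keep = [x for x in states if x not in K]
    Q = A[np.ix_(keep, keep)]          # generator restricted outside K
    m = np.linalg.solve(Q, -np.ones(len(keep)))
    return {x: m[i] for i, x in enumerate(keep)}

mfpt = mean_first_passage_time(k1=1.0, k2=0.5, M=50, K={10})
print(mfpt[0])   # expected hitting time of state 10 starting from 0
```

For a pure-birth chain ($\kappa_2 = 0$) this recovers the exact value $E(\tau) = k/\kappa_1$ for hitting state $k$ from 0, a useful sanity check on the construction.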
An alternative approach is to estimate the mean first passage time by generating sample trajectories with stochastic simulation algorithms such as the Gillespie algorithm [15]. This overcomes the curse of dimensionality, since a single trajectory need only keep track of its current population counts. Nevertheless, there remain circumstances under which it is more efficient to numerically evaluate the exact solution of the chemical master equation; in particular, when the Markov process rarely visits $K$, significantly long simulations may be required to sample enough trajectories to estimate the mean first passage time [29].
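For comparison, a minimal Gillespie-type simulation of the same illustrative birth-death chain estimates the mean first passage time by sampling trajectories; the rates, target state, and function name are hypothetical.

```python
import random

# Hedged sketch of the Gillespie algorithm for a birth-death chain
# (birth rate k1, death rate k2 * x), returning the first passage time
# to `target`, or None if the trajectory is censored at t_max.
def gillespie_fpt(k1, k2, x0, target, t_max=1e6, rng=None):
    rng = rng or random.Random(0)
    t, x = 0.0, x0
    while x != target:
        rates = [k1, k2 * x]               # birth, death propensities
        total = sum(rates)
        if total == 0.0 or t > t_max:
            return None                    # stuck or censored
        t += rng.expovariate(total)        # exponential waiting time
        # choose the reaction proportionally to its propensity
        if rng.random() < rates[0] / total:
            x += 1
        else:
            x -= 1
    return t

samples = [gillespie_fpt(1.0, 0.5, 0, 5, rng=random.Random(s)) for s in range(200)]
est = sum(samples) / len(samples)
print(est)  # Monte Carlo estimate of E[tau] for hitting x = 5 from x = 0
```

As the text notes, when the target set is rarely visited the number of sampled events per trajectory grows quickly, which is exactly the regime where the direct linear-algebra approach becomes preferable.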
Fortunately, a broad class of state space reduction methods has recently been developed that allows for direct treatment of the transition rate matrix. These methods are based on truncation-and-augmentation of the state space [35,36,32,16], the finite buffer method [7,8], and linear programming [25,26]; a recent review summarizes truncation-based methods [27]. The stationary finite state projection (sFSP) [16] and the finite buffer method [7,8] are examples of such truncation-based methods, which we will describe in detail below. They all satisfy provable error estimates on the probability distributions when compared with the original distribution. On the other hand, each of these methods has potential limitations for estimating mean first passage times and other quantitative features. For instance, the sFSP depends on the choice of a designated state, which can significantly alter the estimate of first passage times.
In this paper we provide a new state space reduction algorithm, the slack reactant method, for stochastically modeled CRNs. In this algorithm we generate a new CRN from an existing CRN by adding one or more new species, so that the associated stochastic system satisfies mass conservation relations and is confined to finitely many states. In order to ensure dynamics equivalent to the original system, we define a mild form of non-mass action kinetics for the new system. Since the state space reduction is implemented using a fully determined CRN, we can study the new network using well-known tools of CRN theory, such as deficiency zero and Lyapunov functions, as long as they extend to this form of kinetics.
In Section 3.2 we provide an algorithm to produce a slack variable network given a desired set of mass conservation relations. In addition to its theoretical uses, this algorithm allows one to apply existing software packages for chemical reaction networks, such as CERENA [23], StochDynTools [19], and FEEDME [24], to generate quantities such as the moment dynamics of the associated stochastic processes, using the network structures as inputs.
We employ classical Markov chain truncation approaches to prove convergence theorems for slack networks. For fixed time $t$, if the probability density of each slack system with conservation quantity $N$ converges to its stationary distribution uniformly in $N$, then the stationary distribution of the slack system converges to the original stationary distribution as $N$ tends to infinity. We further prove that, under a uniform tail condition on the first passage times, the mean first passage time of the original system can be approximated with slack systems confined to a sufficiently large truncated state space. Finally, we show that the existence of a Lyapunov function for the original system guarantees that all the sufficient conditions for first passage time convergence are satisfied. We also show that this truncation method is natural in the sense that, if the original system is complex balanced, a slack system admits the same stationary distribution as the original system up to a normalizing constant.

This paper is outlined as follows. In Section 3, we introduce the slack reactant method and include several illustrative examples. In Section 4, we demonstrate that the slack method compares favorably with other state space truncation methods (sFSP, the finite buffer method) when calculating mean first passage times. We prove convergence of the mean first passage time, and other properties, in Section 5. In Section 7, we use slack reactants to estimate mean first passage times for practical CRN models such as a Lotka-Volterra population model and a model of protein synthesis with a slow toggle switch.

Stochastic chemical reaction networks
A chemical reaction network (CRN) is a directed graph that describes the evolution of a biochemical system governed by a set of species ($\mathcal{S}$) and reactions ($\mathcal{R}$). Each node in the graph represents a complex, and nodes are connected by directed edges, each representing a reaction that transforms one complex into another. Each reaction consists of a source complex, a product complex, and a reaction rate constant. For example, the reaction transforming complex $\nu$ into complex $\nu'$ at rate $\kappa$ is written $\nu \xrightarrow{\kappa} \nu'$. A complex, such as $\nu$, is defined by the number of copies of each species $S_i \in \mathcal{S}$. That is, $\nu = (\nu_1, \nu_2, \ldots, \nu_d)$ represents the complex $\sum_{i=1}^{d} \nu_i S_i$, where the $\nu_i \ge 0$ are the stoichiometric coefficients indicating how many copies of each species $S_i \in \mathcal{S}$ belong to complex $\nu$. The full CRN is thus defined by $(\mathcal{S}, \mathcal{C}, \mathcal{R}, \Lambda)$, where $\mathcal{C}$ and $\Lambda$ denote the set of complexes and the set of reaction propensities, respectively.
When intrinsic noise plays a significant role in the system dynamics, we use a continuous-time Markov chain $X(t) = (X_1(t), X_2(t), \ldots, X_d(t))$ to model the copy numbers of the species $S_i$ of a reaction network. The stochastic model treats individual reactions as discrete transitions between integer-valued states of the system. The probability density of $X(t)$ is denoted by
$$p_{x_0}(x, t) = P\big(X(t) = x \mid X(0) = x_0\big),$$
where $x_0$ is the initial state. We occasionally write $p(x)$ for this probability, omitting the initial state $x_0$ when the context allows. For each state $x$, the probability density $p(x)$ obeys the chemical master equation (CME), which gives the time evolution of $p(t)$ as a linear system of ordinary differential equations [14]:
$$\frac{d}{dt}\, p(t) = p(t) A.$$
Here, the entry $A_{ij}$ ($i \neq j$) is the rate at which the $i$-th state transitions to the $j$-th state. Letting $x$ and $x'$ be the $i$-th and $j$-th states, the transition rate from $x$ to $x'$ is
$$A_{ij} = \sum_{\nu \to \nu' \,:\, \nu' - \nu = x' - x} \lambda_{\nu \to \nu'}(x),$$
where $\lambda_{\nu \to \nu'}$ is the reaction intensity of a reaction $\nu \to \nu'$. The diagonal elements of $A$ are defined as $A_{ii} = -\sum_{j \neq i} A_{ij}$. Normally, reaction intensities satisfy the standard property that $\lambda_{\nu \to \nu'}(x) > 0$ only if $x_i \ge \nu_i$ for each $i$, so that a reaction fires only when enough copies of its reactants are present. A typical choice for $\lambda_{\nu \to \nu'}$ is mass-action kinetics, which defines
$$\lambda_{\nu \to \nu'}(x) = \kappa_{\nu \to \nu'} \prod_{i=1}^{d} \frac{x_i!}{(x_i - \alpha_i)!}\, \mathbb{1}_{\{x_i \ge \alpha_i\}}$$
for $x = (x_1, x_2, \ldots, x_d)$ and $\nu = (\alpha_1, \alpha_2, \ldots, \alpha_d)$, where $\kappa_{\nu \to \nu'}$ is the rate constant. Here $\mathbb{1}_{\{n \ge m\}}$ denotes the indicator function, equal to 1 when $n \ge m$ and 0 otherwise.
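The stochastic mass-action intensity can be sketched as follows; the helper name and the example reaction $2A + B \to \cdots$ are our own illustrations.

```python
# Hedged sketch of the stochastic mass-action intensity
#   lambda(x) = kappa * prod_i x_i! / (x_i - alpha_i)! * 1{x_i >= alpha_i},
# where alpha is the source complex nu of the reaction.
def mass_action_intensity(kappa, alpha, x):
    rate = kappa
    for ai, xi in zip(alpha, x):
        if xi < ai:
            return 0.0                    # indicator 1{x_i >= alpha_i}
        for j in range(ai):               # falling factorial x_i!/(x_i - a_i)!
            rate *= (xi - j)
    return rate

# e.g. a reaction with source complex 2A + B, kappa = 3, at state (x_A, x_B) = (4, 2):
print(mass_action_intensity(3.0, (2, 1), (4, 2)))  # -> 3 * (4*3) * 2 = 72.0
```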

Construction of Slack Networks
In this Section, we introduce the method of slack reactants, which adds new species to an existing CRN so that the state space of the associated stochastic process is truncated to a finite subset of states. This model reduction accurately approximates the original system when the size of the truncation is large. We begin with a simple example to demonstrate the main idea of the method.

Slack reactants for a simple birth-death model
Consider the mass-action 1-dimensional birth-death model
$$\emptyset \underset{\kappa_2}{\overset{\kappa_1}{\rightleftarrows}} X. \quad (5)$$
For the associated stochastic process $X$, the mass-action assumption defines the reaction intensities as $\lambda_{\emptyset \to X}(x) = \kappa_1$ and $\lambda_{X \to \emptyset}(x) = \kappa_2 x$ for the two reactions in (5). Note that the count of species $X$ can reach any positive integer value, as the birth reaction $\emptyset \to X$ can occur an unlimited number of times. Therefore, the state space of this system consists of infinitely many states. Consider instead the CRN
$$Y \underset{\kappa_2}{\overset{\kappa_1}{\rightleftarrows}} X,$$
where we have introduced the slack reactant $Y$. This new network admits $X + Y$ as a conservation law, since each reaction decreases one of the species by one while increasing the other by one.
Since the purpose of this new network is to approximate the qualitative behavior of the original system (5), we minimize the contribution of the slack reactant $Y$ in the associated stochastic model. Hence, we assign $Y$ a special reaction intensity: instead of $\lambda_{Y \to X}(x, y) = \kappa_1 y$ under mass-action, we choose
$$\lambda_{Y \to X}(x, y) = \kappa_1 \mathbb{1}_{\{y \ge 1\}},$$
and we keep the same intensity $\lambda_{X \to Y}(x, y) = \kappa_2 x$ for the reaction $X \to Y$. By forcing $Y$ to have "zero-order kinetics," we ensure that the computed rates remain the same throughout the chemical network except on the imposed boundary. This choice of reaction intensities not only preserves the conservation law $X(t) + Y(t) = X(0) + Y(0)$ but also prevents the slack reactant $Y$ from reaching negative counts, via the indicator term $\mathbb{1}_{\{y \ge 1\}}$.
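A sketch of this construction for the slack birth-death system: we build the generator on the states $x = 0, \ldots, N$ (with $y = N - x$ implicit), solve $\pi A = 0$, and compare against a truncated Poisson distribution, which is consistent with the paper's later claim that complex balanced systems keep a Poissonian stationary distribution. Function names and parameter values are illustrative.

```python
import numpy as np
from math import factorial

# Hedged sketch: generator of the slack system Y -> X (rate k1 * 1{y >= 1})
# and X -> Y (rate k2 * x) under X + Y = N, indexed by x = 0..N.
def slack_bd_stationary(k1, k2, N):
    A = np.zeros((N + 1, N + 1))
    for x in range(N + 1):
        if x < N:                     # Y -> X fires only while y = N - x >= 1
            A[x, x + 1] = k1
        if x > 0:
            A[x, x - 1] = k2 * x      # X -> Y, same intensity as X -> 0
        A[x, x] = -A[x].sum()
    # solve pi A = 0 together with the normalization sum(pi) = 1
    M = np.vstack([A.T, np.ones(N + 1)])
    b = np.zeros(N + 2)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(M, b, rcond=None)
    return pi

k1, k2, N = 2.0, 1.0, 15
pi = slack_bd_stationary(k1, k2, N)
ref = np.array([(k1 / k2) ** x / factorial(x) for x in range(N + 1)])
ref /= ref.sum()
max_err = np.abs(pi - ref).max()
print(max_err)   # the slack system's stationary law matches a truncated Poisson(k1/k2)
```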

Algorithm for constructing a network with slack reactants
In general, any conservation law can be encoded in a similar fashion using slack reactants, embedding the desired truncation directly in the CRN. We have found this perspective to be advantageous when studying complex CRNs: rather than thinking about the software implementation of the truncation, it is often easier to design the truncation in terms of slack reactants and then implement the already-truncated system exactly. We now provide an algorithm to automatically determine the slack reactants required for a specified truncation. It is often the case that a "natural" formulation arises (typically by replacing zero-order complexes with slack reactants), but when that is not the case, one can still find a useful approximation by following our algorithm.
Consider any CRN $(\mathcal{S}, \mathcal{C}, \mathcal{R}, \Lambda)$. Suppose we wish to apply a number of conservation bounds to reduce the state space of the associated chemical master equation, i.e., a collection of inequalities of the form
$$w_i \cdot X \le N_i \quad (8)$$
for $i = 1, 2, \ldots, m$. Collecting the conservation vectors $\{w_i\}_{i=1}^m$ into the rows of the $m \times d$ matrix $W$ and the integers $\{N_i\}$ into the $m \times 1$ vector $N$, we may concisely write
$$W X \preceq N, \quad (9)$$
where $\preceq$ denotes the element-wise less-than-or-equal operator and $X$ is the column vector of species $\{X_i\}$.
We augment the $d \times 1$ vector $X$ with an $m \times 1$ vector $Y$ of slack reactants to convert the inequality to an equality:
$$\begin{pmatrix} W & I \end{pmatrix} \begin{pmatrix} X \\ Y \end{pmatrix} = N, \quad (10)$$
where $I$ is the $m \times m$ identity matrix. This matrix equation encodes $m$ conservation laws among the $X_i$'s and the slack reactants $Y_i$'s. In other words, we inject the variables $Y = (Y_1, Y_2, \ldots, Y_m)$ into the existing reactions in such a way as to enforce these augmented conservation laws.
To do so, we need to properly define several CRN quantities. Let $S$ be the $|\mathcal{C}| \times |\mathcal{R}|$ connectivity matrix: if the $r$-th reaction establishes an edge from the $i$-th complex to the $j$-th complex, then $-S_{ir} = S_{jr} = 1$ and the other entries in column $r$ are 0. Let $C$ be the $|\mathcal{S}| \times |\mathcal{C}|$ complex matrix. Each column of $C$ corresponds to a unique complex and each row corresponds to a species; the entry $C_{ij}$ is the multiplicity of $X_i$ in the $j$-th complex (which may be 0). The stoichiometry matrix is simply $\Gamma = CS$, whose $i$-th column is the reaction vector of the $i$-th reaction. Note that the CRN admits a conservation law if and only if $r^T \Gamma = 0$ for some nonzero vector $r = (r_1, r_2, \ldots, r_d)^T$.
Now consider a modified network where we inject $Y_1, Y_2, \ldots, Y_m$ into the complexes. Let $D_{ij}$ denote the (unknown) multiplicity of $Y_i$ in the $j$-th complex. We require the following to be true:
$$(WC + D)\, S = 0. \quad (11)$$
Then, using the same connectivity matrix $S$ and the new complex matrix $\tilde{C} = \begin{pmatrix} C \\ D \end{pmatrix}$, we obtain a new CRN that enforces the conservation laws (10). To find $D$ satisfying (11), we choose
$$D = U - WC \quad (12)$$
for an $m \times |\mathcal{C}|$ matrix $U$ whose $i$-th row has the form $u_i \mathbb{1}^T = u_i (1, 1, \ldots, 1)$ for some positive integer $u_i$. Since $\mathbb{1}^T S = 0$, this choice guarantees (11).
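Assuming the construction $D = U - WC$ above, with $u_i$ taken as the maximum entry of the $i$-th row of $WC$ (the choice recommended in Section 3.3 to keep $D$ sparse), the algorithm can be sketched as follows; applied to the birth-death example of Section 3.1 it recovers the slack network $Y \rightleftarrows X$.

```python
import numpy as np

# Hedged sketch of construction (12): given the complex matrix C (|S| x |C|),
# connectivity matrix S (|C| x |R|), and conservation matrix W (m x |S|),
# take U with constant rows u_i and set D = U - W C, so that the augmented
# complex matrix [C; D] satisfies (W C + D) S = U S = 0.
def slack_complex_matrix(C, S, W):
    WC = W @ C
    u = WC.max(axis=1)                    # u_i = max entry of row i (Sec. 3.3)
    D = u[:, None] * np.ones_like(WC) - WC
    assert (((WC + D) @ S) == 0).all()    # augmented conservation law holds
    return np.vstack([C, D])              # complex matrix of the slack network

C = np.array([[0, 1]])                    # complexes: empty set, X
S = np.array([[-1, 1], [1, -1]])          # reactions: 0 -> X and X -> 0
W = np.array([[1]])                       # conservation bound X <= N
print(slack_complex_matrix(C, S, W))
# -> [[0 1]
#     [1 0]]  i.e. the slack network Y <-> X: Y enters the empty complex only
```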
A new network can be generated from the complex matrix $\tilde{C}$ and the connectivity matrix $S$. We call this network a regular slack network, with species $\tilde{\mathcal{S}}$ and complexes $\tilde{\mathcal{C}}$. For the $j$-th reaction $\nu_j \to \nu_j' \in \mathcal{R}$, corresponding to the $j$-th column of the matrix $CS$, we denote by $\tilde{\nu}_j \to \tilde{\nu}_j' \in \tilde{\mathcal{R}}$ the reaction corresponding to the $j$-th column of the matrix $\tilde{C}S$. That is, $\tilde{\nu}_j \to \tilde{\nu}_j'$ is the reaction obtained from $\nu_j \to \nu_j'$ by adding slack reactants.
We model the slack network by the stochastic process
$$X^N(t) = (X_1^N(t), X_2^N(t), \ldots, X_d^N(t)),$$
where each entry represents the count of the corresponding original species in the new network. Note that the count of each slack reactant $Y_i$ is fully determined by the species counts $X_i^N$ because of the conservation law(s). As such, we do not explicitly model the $Y_i$'s.
The intensity function $\lambda^N_{\tilde{\nu} \to \tilde{\nu}'}$ of $X^N$ for a reaction $\tilde{\nu} \to \tilde{\nu}' \in \tilde{\mathcal{R}}$ is defined as
$$\lambda^N_{\tilde{\nu} \to \tilde{\nu}'}(x) = \lambda_{\nu \to \nu'}(x)\, \mathbb{1}_{\{W(x + \nu' - \nu) \preceq N\}}, \quad (13)$$
where $\nu \to \nu'$ is the corresponding reaction in $\mathcal{R}$. We then denote by $(\tilde{\mathcal{S}}, \tilde{\mathcal{C}}, \tilde{\mathcal{R}}, \Lambda^N)$ the new system with slack reactants obtained from the algorithm, where $\Lambda^N$ is the collection of intensities $\{\lambda^N_{\tilde{\nu} \to \tilde{\nu}'} : \tilde{\nu} \to \tilde{\nu}' \in \tilde{\mathcal{R}}\}$. We refer to this system with slack reactants as a slack system.
Here we highlight that the connectivity matrix $S$ and the complex matrix $C$ of the original network are preserved within the new network with slack reactants. Thus the original network and the new network obtained from the algorithm have the same connectivity. This identical connectivity allows qualitative behavior of the original network that depends only on $S$ and $C$ to be inherited by the new network with slack reactants. We particularly exploit the inheritance of accessibility and of the Poissonian stationary distribution in Section 6.
Remark 3.1. A single conservation relation, such as $w \cdot X + y = N$ with a non-negative vector $w$, is sufficient to truncate the given state space to a finite state space. Hence, in this manuscript, we mainly consider slack networks obtained from a single conservation vector $w$ and a single slack reactant $Y$.

Remark 3.2. Although we primarily think of bounding species counts from above, we could also bound species counts from below by choosing negative integers for the entries of $w$.
Example 3.1. We illustrate our slack algorithm with an example CRN consisting of two species and five reactions, indicated by edge labels on the network (14). We enumerate the complexes in the order $\emptyset$, $A$, $B$, and order the reactions according to their labels on the network; this determines the connectivity matrix $S$ and the complex matrix $C$. Suppose we set a conservation bound $A + B \le N$ for some $N > 0$, so that $W = (1 \;\; 1)$. Then (12) yields the slack network (15), with the conservation relation $X_A + X_B + Y = N$; the network (16) satisfies the same conservation relation. Here we explain why network (15) is preferred to (16): it is less intrusive. Let $X = (X_A, X_B, X_Y)$ and $\tilde{X} = (\tilde{X}_A, \tilde{X}_B, \tilde{X}_Y)$ be the stochastic processes associated with networks (15) and (16), respectively. Suppose the initial state of both $X$ and $\tilde{X}$ is $(0, 0, N)$. $X$ can reach the state $(0, N, 0)$ by firing the reaction $Y \to B$ $N$ times; this state corresponds to the state $(0, N)$ in the original network (14). On the other hand, $\tilde{X}$ cannot reach the state $(0, N, 0)$: the states $(0, 0, N)$ and $(0, N - 1, 1)$ are the only states from which $\tilde{X}$ could jump to $(0, N, 0)$, but no reaction in (16) can drive the system from those states to $(0, N, 0)$.
Consequently, one state that is accessible in the original network (14) is lost in the system associated with network (16). However, it can be shown that the stochastic process associated with network (15) preserves all the states of the original network. This occurs mainly because the matrix D for network (15) is sparser than the matrix D for network (16). We discuss how to minimize the effect of slack reactants in Section 3.3.

Optimal Slack CRNs for effective approximation of the original systems
The algorithm we introduced to construct a network with slack reactants is valid for any user-defined conservation bounds (8); its outcome is the matrix D in (12), which specifies the stoichiometric coefficient of each slack reactant in each complex.
As we showed in Example 3.1, to minimize the 'intrusiveness' of a slack network, we can simplify the network by setting as many $D_{ij} = 0$ as possible. To do so, we choose the entries of $u$ such that $u_i$ is the maximum entry of the $i$-th row of $WC$. We can further reduce the effect of the slack reactants by removing 'redundant' stoichiometric coefficients of slack reactants. For example, for the CRN (17), the algorithm generates the new CRN (18) with a single slack reactant. The dotted arrows correspond to the reaction vectors that are turned off (i.e., the associated reaction intensity is zero) at the boundary of the state space of the slack system, as described in Section 3.3.
However, by breaking up the connectivity, we can also generate the alternative network (19). The network in (18) is more intrusive than the network in (19) in the sense of accessibility. At any state where $Y = 0$, the system associated with (18) remains unchanged forever, because no reaction can take place. In contrast, the reaction $A \to Y$ in (19) can always occur even when $Y = 0$. Hence (19) preserves the accessibility of the original system associated with (17), since any state of $A$ is accessible from any other state in the original reaction system (17). We refer to a system with slack reactants generated by canceling redundant slack reactants in this way as an optimized slack system. In Section 6, we explore the accessibility of an optimized reaction network with slack reactants in a more general setting.

Finally, we can make a network with slack reactants yield a better approximation of a given CRN by choosing an optimized conservation relation in (8). First, we assume that only a single conservation law and a single slack reactant are added to the given CRN; for the purpose of truncating the state space to finitely many states, a single conservation law suffices, as all species can be bounded by $N$ as in (8). Let this single conservation law be
$$w_1 X_1 + w_2 X_2 + \cdots + w_d X_d \le N.$$
Then the matrix $W$ in (9) is a single row vector $(w_1, w_2, \ldots, w_d)$, which we denote by $w$. By the definition (13) of the intensities for a network with slack reactants, some reactions are turned off near the boundary of the truncated state space, namely those reactions $\nu \to \nu'$ with $(\nu' - \nu) \cdot w > 0$.
Hence we optimize the estimation with slack reactants by minimizing the intrusiveness of these turned-off reactions: we choose $w$ to minimize the number of reactions in $\{\nu \to \nu' \in \mathcal{R} : (\nu' - \nu) \cdot w > 0\}$.
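Under this criterion, a brute-force search over small positive integer vectors $w$ can be sketched as follows; the candidate range and the example reaction vectors are hypothetical, chosen only to illustrate the counting.

```python
import itertools
import numpy as np

# Hedged sketch: among candidate conservation vectors w with positive integer
# entries, pick one minimizing the number of reactions with (nu' - nu) . w > 0,
# i.e. the reactions turned off at the truncation boundary.
def least_intrusive_w(reaction_vectors, max_entry=3):
    d = len(reaction_vectors[0])
    best = None
    for w in itertools.product(range(1, max_entry + 1), repeat=d):
        turned_off = sum(1 for g in reaction_vectors if np.dot(g, w) > 0)
        if best is None or turned_off < best[1]:
            best = (w, turned_off)
    return best

# reaction vectors nu' - nu for a hypothetical 2-species network:
# birth of X1, death of X1, dimerization 2X1 -> X2, and its reverse
gammas = [(1, 0), (-1, 0), (-2, 1), (2, -1)]
w, n_off = least_intrusive_w(gammas)
print(w, n_off)  # -> (1, 2) 1
```

Note that every positive $w$ leaves the birth reaction $(1, 0)$ turned off, as it must: some reaction has to be suppressed at the boundary for the truncation to be finite. The search simply ensures no other reaction is suppressed unnecessarily.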

Comparison to Other Truncation-Based Methods
In this Section we demonstrate that our method can resolve limitations, observed in other state space truncation methods, in calculating mean first passage times; we focus on the sFSP and the finite buffer method. Both methods require the user to make decisions about the state space truncation that may introduce variability in the results. While all methods converge to the true result as the size of the state space increases, we show that our method is less dependent on user-defined quantities. This minimizes additional decision-making on the part of the user, which can otherwise lead to suboptimal results, especially when the solution of the original system is not known.

Comparison to the sFSP method
A well-known state truncation algorithm is the Finite State Projection (FSP) method [32]. For a given continuous-time Markov chain, the associated FSP model is restricted to a finite state space; if the process escapes this truncated space, it is sent to a designated absorbing state (see Figure 1B). For a fixed time $t$, the probability density function of the original system can be approximated by the associated FSP model with sufficiently many states. The long-term dynamics of the original system, however, are not well approximated, because the probability flow of the FSP model leaks to the designated absorbing state in the long run.
To fix this limitation of FSP, Gupta et al. proposed the stationary Finite State Projection (sFSP) method [16]. This method also projects the original state space onto a finite state space, as the FSP method does, but instead of creating a designated absorbing state, all outgoing transitions from the projected finite state space are redirected to a single state $x^*$ 'inside' the finite state space (Figure 1C). The sFSP has been frequently used to estimate the long-term distribution of discrete stochastic models. However, if the truncated state space is not sufficiently large, this method can fail to provide an accurate estimate of the first passage time. To demonstrate this, we consider the following simple 2-dimensional model. In the network shown in Figure 3A, two $X_1$ proteins dimerize into protein $X_2$ while $X_1$ is produced at a relatively high rate. The state space of the original model is the full 2-dimensional non-negative integer grid. We estimate the time until the system reaches one of the two states indicated in red in Figure 3C, using the alternative methods.
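The sFSP redirect construction described above can be sketched as follows; the function name, the one-dimensional birth-death rates, and the choice $x^* = 0$ are our own illustrations, not the two-species model of Figure 3.

```python
import numpy as np

# Hedged sketch of the sFSP construction: transitions leaving the truncated
# state space are redirected to a designated interior state x_star.
def sfsp_generator(states, rate_fn, x_star):
    index = {x: i for i, x in enumerate(states)}
    A = np.zeros((len(states), len(states)))
    for x in states:
        i = index[x]
        for x_next, rate in rate_fn(x):
            # a transition out of the truncation is sent to x_star instead
            j = index[x_next] if x_next in index else index[x_star]
            A[i, j] += rate
        A[i, i] -= A[i].sum()          # diagonal makes each row sum to zero
    return A

N = 20
def rates(x):                          # births at rate 5, deaths at rate x
    return [(x + 1, 5.0), (x - 1, 1.0 * x)]

A = sfsp_generator(list(range(N + 1)), rates, x_star=0)
print(np.abs(A.sum(axis=1)).max())     # a valid generator: row sums are zero
```

The redirected mass is visible in the last row: the birth attempt at $x = N$ contributes rate 5 to the entry leading back to $x^* = 0$, which is exactly the mechanism that can bias first passage time estimates when $x^*$ is chosen poorly.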
For the sFSP, we project the original state space onto the rectangle defined by $X_1 \le N$ and $X_2 \le N$ for some $N > 0$, and we fix the origin $(0, 0)$ as the designated state $x^*$ (Figure 3C). If the process associated with the sFSP model escapes the rectangle, it is immediately sent to the designated state. We also consider the slack network shown in Figure 3B, where we introduce the conservation law $X_1 + 2X_2 + Y = N$ for some $N > 0$. Let $\tau = \inf\{t \ge 0 : X_1 = 1 \text{ and } X_2 \in \{1, 2\}\}$ be the first passage time we wish to estimate.
By using the inverse of the truncated transition rate matrix for each method, as detailed in [9], we obtain the mean of $\tau$ for different values of $N$. We also obtain an 'almost true' mean of $\tau$ from $10^5$ Gillespie simulations of the original process. As shown in Figure 3B, the slack network model provides a more accurate mean first passage time estimate for truncation sizes between 100 and 400 when the designated return state of the sFSP is $(0, 0)$.
The inaccurate sFSP estimate is due to the choice of the return state. The sFSP model escapes the confined space often, because the production rate of $X_1$ is relatively high. Each time it returns to the origin, it is more likely to visit the target states $\{X_1 = 1 \text{ and } X_2 \in \{1, 2\}\}$ than the original process is.
Figure 3 shows that the mean first passage time computed with the sFSP depends significantly on the location of the chosen designated state. One of the two states shown is a particularly poor choice for the sFSP, which illustrates that, without prior knowledge of the system, it can be difficult to know which designated states will perform well.
We display the behavior of individual trajectories of the original model, the slack network, and the sFSP model in Figure 3D. The trajectory plots show that within the time interval $[0, 500]$, almost half of the 100 samples from both the original model and the slack network model stay far away from the target states, while all 100 sample trajectories from the sFSP model stay close to the target states. We also illustrate this point in Figure 3E with heat maps of the three models at $t = 500$; only in the case of the sFSP are the probability densities concentrated at the target states.

Comparison to the finite buffer method
The finite buffer method was proposed to estimate the stationary probability landscape via state space truncation together with a novel state space enumeration algorithm [7,8]. For a given stochastic model associated with a CRN, the finite buffer method sets inequalities among the species, as in (8), called buffer capacities. Then, at each state $x$, the transition rate of a reaction $\nu \to \nu'$ is set to zero if at least one of the inequalities fails at $x + \nu' - \nu$. We note that the algorithm for generating a slack network, described in Section 3.2, uses the same inequalities; thus, the finite buffer method and the slack reactant method truncate the state space in the same way. However, as we showed in Section 3.3, this type of truncation can create 'additional' absorbing states. These additional absorbing states change the accessibility between states, so mean first passage times cannot be accurately estimated. The regular slack systems, in contrast, preserve the network structure of the original network. Hence we are able to prove, as already noted, that regular slack networks inherit the accessibility of the original network whenever the original network is 'weakly reversible', as defined below.
We demonstrate this disparity between the finite buffer and slack methods with the following network. Consider the mass-action system (14) with a fixed initial state $x_0 = (a_0, b_0)$. We are interested in estimating the mean first passage time to a target state $x_T = (10, 10)$. Note that the state space is irreducible (i.e., every state is accessible from every other state), as the network consists of a union of cycles. This condition, being a union of cycles, is precisely what is meant by weak reversibility [13,5]. Thus, the original stochastic system has no absorbing state, and the target state $x_T$ is accessible from every state.
To use the finite buffer method on this network, we set $2X_A + X_B \le N$ as the buffer capacity, where $X = (X_A, X_B)$ is the associated stochastic process. (Here we choose $N > 30$ so that the state space contains the target state $x_T$.) Hence, when $X$ satisfies $2X_A + X_B = N$, the reactions $\emptyset \to A$, $B \to A$, and $\emptyset \to B$ cannot fire, as they would push $2X_A + X_B$ beyond the buffer capacity. We now demonstrate that the resulting system has a new absorbing state. By first using the reaction $A \to B$ to deplete all of $A$, and then $\emptyset \to B$, every state can reach the state $(0, N)$ in finite time with positive probability. The state $(0, N)$ is an absorbing state, because no reaction can fire there: the reactions $A \to \emptyset$ and $A \to B$ require at least one copy of $A$, and every other reaction leads to a state exceeding the buffer capacity. Therefore, the finite buffer method has introduced a new absorbing state, not present in the original model, from which $x_T$ is not accessible with positive probability.

Now we show that the explicit network structure of our slack formulation preserves the accessibility of the original system. We consider the same inequality $2X_A + X_B \le N$ as above, with $N > 30$, and generate the slack network (20) using the algorithm of Section 3.2. Note that the associated stochastic process $X = (X_A, X_B, Y)$ admits the conservation relation
$$2X_A + X_B + Y = N.$$
The state $(0, N, 0)$ cannot be reached, as the only state from which the process could jump to $(0, N, 0)$ is $(0, N - 1, 2)$, which violates the conservation law.
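The absorbing-state argument can be checked mechanically. The sketch below assumes network (14) consists of the five reactions named in this section ($\emptyset \to A$, $A \to \emptyset$, $A \to B$, $B \to A$, $\emptyset \to B$); rate constants are irrelevant for accessibility, so only the stoichiometry is used.

```python
# Hedged sketch: enumerate the buffer-truncated state space of network (14)
# under 2*X_A + X_B <= N and flag absorbing states, i.e. states at which
# every reaction is disallowed by the buffer capacity or by lack of reactants.
reactions = [((0, 0), (1, 0)),   # 0 -> A
             ((1, 0), (0, 0)),   # A -> 0
             ((1, 0), (0, 1)),   # A -> B
             ((0, 1), (1, 0)),   # B -> A
             ((0, 0), (0, 1))]   # 0 -> B

def absorbing_states(N):
    states = [(a, b) for a in range(N + 1) for b in range(N + 1)
              if 2 * a + b <= N]
    found = []
    for (a, b) in states:
        def allowed(rxn):
            (sa, sb), (pa, pb) = rxn
            na, nb = a - sa + pa, b - sb + pb      # state after firing
            return a >= sa and b >= sb and 2 * na + nb <= N
        if not any(allowed(r) for r in reactions):
            found.append((a, b))
    return found

print(absorbing_states(40))   # -> [(0, 40)]: the predicted absorbing state (0, N)
```

The search confirms that $(0, N)$ is the unique absorbing state created by the buffer capacity, matching the argument above.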
As we highlighted in Section 3.3, slack networks preserve the connectivity of the original network (14); hence the network (20) is also weakly reversible. Thus, the state space of the stochastic process associated with the slack network is irreducible by Corollary 6.1. This implies that there is no absorbing state and that $x_T$ is accessible from every state, unlike for the stochastic process associated with the finite buffer truncation.

Convergence Theorems for slack networks
In this section we establish theoretical results on the convergence of properties of a slack network to those of the original network. (Proofs of the theorems below are provided in Appendix B.) Many of these results rely on theorems of Hart and Tweedie [18], who studied when the probability density function of a truncated Markov process converges to that of the original Markov process. We employ the same idea of proof to show the convergence of a slack network to the original network.
By assuming "uniform mixing," we show the convergence of the stationary distribution of the slack system to the stationary distribution of the original system as the conservation quantity $N$ grows. Furthermore, we show the convergence of the mean first passage times of the slack network to the true mean first passage times. In particular, all of these conditions hold when there is a Lyapunov function for the original system.
In this Section, assume that a given CRN (S, C, R, Λ) is well defined for all time t and let ( S, C, R, Λ N ) be an associated slack network obtained with a single conservative quantity w•X ≤ N .We denote by X and X N the associated stochastic processes the original CRN and the slack network respectively.We fix the initial state for both systems, i.e., X(0) = X N (0) = x 0 for some x 0 and for each N .(This means we can only consider slack systems where N is large enough so that w • x(0) < N .)Assume that both the original and slack systems are irreducible, and denote by S and S N the state spaces for each respectively.(In Section 6, we prove accessibility properties that the slack system can inherit from the original system.) Notice that every state in S N satisfies our conservation inequality.That is, for every x ∈ S N we have w • x ≤ N .It is possible that S N = S N +1 for some N .For simplicity, we assume that the truncated state space is always enlarged with respect to the conservative quantity N , that is S N ⊂ S N +1 for each N .(For the general case, we could simply consider a subsequence N k such that As defined in Section 2, λ ν→ν ∈ Λ is the intensity of a reaction ν → ν for the associated stochastic system X.We also denote by λ N ν→ν ∈ Λ N the intensity of a reaction ν → ν for the associated stochastic system X N .Finally we let p(x, t) and p N (x, t) be the probability density function of X and X N , respectively.We begin with the convergence of the probability density functions of the slack network to the original network with increasing N .Theorem 5.1.For any x ∈ S N and t ≥ 0, we have A Markov process defined on a finite state space admits a stationary distribution.Hence X N admits a stationary distribution π N .If the slack system satisfies the condition of "uniform mixing", that is the convergence rate of p N (x, t) − π N (x) 1 is uniform in N , then we have the following result: Theorem 5.2.Suppose X admits a stationary distribution π.Suppose 
further that there exists a positive function h(t), which is independent of N , such that ∥p N (•, t) − π N ∥ 1 ≤ h(t) and lim t→∞ h(t) = 0. Then lim N →∞ ∥π − π N ∥ 1 = 0.

We now consider convergence of the mean first passage time of X N . Recall that we assumed both stochastic processes have the same initial state X(0) = X N (0) = x 0 and that both state spaces S and S N are irreducible. Hence, for any K ⊆ S, each state in S N is accessible to K for sufficiently large N .
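To make the truncation concrete, the finite state space S N = {x : w • x ≤ N } can be enumerated directly for low-dimensional systems. The following sketch (for a hypothetical two-species system with w = (1, 1); the helper name is ours, not from the paper) also illustrates the nesting S N ⊂ S N +1 assumed above:

```python
from itertools import product

def truncated_state_space(w, N):
    """Enumerate S_N = {x in Z_{>=0}^d : w . x <= N} by brute force.
    Assumes every weight w_i is a positive integer."""
    ranges = [range(N // wi + 1) for wi in w]  # coordinate-wise upper bounds
    return {x for x in product(*ranges)
            if sum(wi * xi for wi, xi in zip(w, x)) <= N}

S3 = truncated_state_space((1, 1), 3)
S4 = truncated_state_space((1, 1), 4)
assert len(S3) == 10   # all (x1, x2) with x1 + x2 <= 3
assert S3 < S4         # the nesting S_N subset of S_{N+1}
```

Brute-force enumeration is only feasible in low dimension, but it suffices for the two-species examples considered later.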
Theorem 5.3. For a subset K of the state space of X, let τ and τ N be the first passage times to K for X and to K ∩ S N for X N , respectively. Assume the following conditions:

1. X admits a stationary distribution π, and
2. lim N →∞ ∥π − π N ∥ 1 = 0.

Then for any t ≥ 0, lim N →∞ P (τ N > t) = P (τ > t). If we further assume that

3. E(τ ) < ∞, and
4. there exists g(t) such that P (τ N > t) ≤ g(t) for all N and ∫ 0 ∞ g(t)dt < ∞,

then lim N →∞ E(τ N ) = E(τ ).

Remark 5.1. To obtain convergence of higher moments of the first passage time, we need only replace conditions 3 and 4 with their analogues for the higher moments, respectively.
We now show that if a Lyapunov function exists for the original system, the conditions in Theorem 5.2 and Theorem 5.3 hold. The Lyapunov function approach was proposed by Meyn and Tweedie [31] and has been used to study long-term dynamics of Markov processes [6,16,4,17], especially exponential ergodicity. Gupta et al [16] use a linear Lyapunov function to show that the stationary distribution of an sFSP model converges to the stationary distribution of the original stochastic model, and they use the Lyapunov function to explicitly compute the convergence rate. In particular, we show that Lyapunov functions exist for the examples we consider in Section 7.
Theorem 5.4. Suppose there exists a function V and positive constants C and D such that

1. V (x) ≥ 1 for all x ∈ S,
2. V is an increasing function in the sense that V (x N +1 ) ≥ V (x N ) for each x N +1 ∈ S N +1 \ S N and x N ∈ S N , and
3. Σ ν→ν′∈R λ ν→ν′ (x)(V (x + ν′ − ν) − V (x)) ≤ −CV (x) + D for all x ∈ S.
Then the conditions in Theorem 5.3 hold.
Remark 5.2. The conditions (21) hold if a Lyapunov function satisfying the conditions in Theorem 5.4 exists. Thus, the convergence of the higher moments of the first passage time also follows.
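The drift inequality in condition 3 of Theorem 5.4 can be exhibited numerically for a concrete system. As a minimal sketch, take the birth-death chain ∅ ⇌ X with illustrative rates (birth rate 2, unit per-capita death rate, so the steady state is c* = 2; these rates are our assumption, not taken from the paper) and V (x) = e^x. A finite grid check is of course not a proof, but it displays explicit constants C and D:

```python
import math

# Birth-death chain: birth at rate 2, death at rate x (illustrative rates).
def drift(x):
    """Generator applied to V(y) = e^y at state x:
    sum over reactions of lambda_r(x) * (V(x + zeta_r) - V(x))."""
    V = math.exp
    return 2.0 * (V(x + 1) - V(x)) + x * (V(x - 1) - V(x))

C = 0.5 * (1.0 - math.exp(-1))  # any C in (0, 1 - 1/e) works for this chain
# Smallest D making drift(x) <= -C V(x) + D on the grid:
D = max(drift(x) + C * math.exp(x) for x in range(200))
assert 0 < D < 100
# Condition 3 of Theorem 5.4, checked numerically on the grid:
assert all(drift(x) <= -C * math.exp(x) + D + 1e-9 for x in range(200))
```

Here the drift equals e^x (2(e − 1) − x(1 − 1/e)), which is eventually dominated by −C e^x, so a finite D exists; the grid computation merely makes C and D explicit.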

Inheritance of slack networks
As we showed in Section 4.2, not all state space truncations preserve accessibility of states in the original system. (For the example in Section 4.2, the truncation created a new absorbing state.) Thus, it is desirable to obtain reduced models that are guaranteed to maintain the accessibility of the original system to predetermined target states. In this Section, we show that under mild conditions, both a regular slack system and an optimized slack system preserve the original accessibility. The proofs of the theorems introduced in this section are in Appendix C. The key to these results is the condition of weak reversibility.

Definition 6.1. A reaction network is weakly reversible if each connected component of the network is strongly connected. That is, if there is a path of reactions from a complex ν to ν′, then there is a path of reactions from ν′ to ν.
We note that the weak reversibility condition applies to the network graph of the CRN. The network graph consists of complexes (nodes) and reactions (edges). Weak reversibility is a sufficient condition for irreducibility of the associated mass-action stochastic process. Indeed, the sufficiency of weak reversibility holds even under general kinetics as long as condition (4) is satisfied [33]. Hence irreducibility of a regular slack network follows, since it preserves weak reversibility of the original network and the kinetics modeling the regular slack system satisfies (4).

Corollary 6.1. Let (S, C, R, Λ) be a weakly reversible CRN with intensity functions Λ = {λ y→y′ } satisfying (4). Then the state space of the associated stochastic process with a regular slack network (S̃, C̃, R̃, Λ̃ N ) is a union of closed communication classes for any N .
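Weak reversibility of a network graph can be checked algorithmically: every connected component must be strongly connected. A minimal sketch (complexes as arbitrary labels; the function name is ours):

```python
def is_weakly_reversible(complexes, reactions):
    """Check weak reversibility: each connected component of the reaction
    graph (nodes = complexes, directed edges = reactions) is strongly
    connected. `reactions` is a list of (source, target) complex pairs."""
    fwd = {c: set() for c in complexes}   # forward adjacency
    rev = {c: set() for c in complexes}   # reversed adjacency
    und = {c: set() for c in complexes}   # undirected adjacency
    for a, b in reactions:
        fwd[a].add(b); rev[b].add(a)
        und[a].add(b); und[b].add(a)

    def reach(start, adj):
        seen, stack = {start}, [start]
        while stack:
            for v in adj[stack.pop()]:
                if v not in seen:
                    seen.add(v); stack.append(v)
        return seen

    visited = set()
    for c in complexes:
        if c in visited:
            continue
        comp = reach(c, und)          # one connected component
        visited |= comp
        # strongly connected iff c reaches all of comp and all of comp reaches c
        if reach(c, fwd) < comp or reach(c, rev) < comp:
            return False
    return True

# A <-> B <-> C is weakly reversible; A -> B -> C is not.
assert is_weakly_reversible("ABC", [("A","B"),("B","A"),("B","C"),("C","B")])
assert not is_weakly_reversible("ABC", [("A","B"),("B","C")])
```

Since a regular slack network has the same graph structure as the original network, this check gives the same answer for both.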
In case the original network is not weakly reversible, we can still guarantee that optimized slack systems have the same accessibility as the original system, provided all species have a degradation reaction (S i → ∅).

Theorem 6.1. Let (S, C, R, Λ) be a reaction network such that {S i → ∅ : S i ∈ S} ⊂ R. Suppose that the stochastic process X associated with (S, C, R, Λ) and beginning at the point x 0 is irreducible. Let X N be the stochastic process associated with an optimized slack network (S̃, C̃, R̃, Λ̃ N ) such that X N (0) = x 0 for every N large enough. Then, for any subset K of the state space of X, there exists N 0 such that X N reaches K almost surely for N ≥ N 0 .

This theorem follows from the fact that a slack system only differs from the original system when it runs out of slack reactants. However, in an optimized slack system, degradation reactions remain allowable even with no slack reactants. Hence, our proof of Theorem 6.1 relies on the presence of all degradation reactions.
A slack network may also inherit its stationary distribution from the original reaction system. When the original system admits a stationary distribution in a product form of Poissons under the complex balance condition, a slack system inherits the same form of the stationary distribution as well. A reaction system is complex balanced if the associated deterministic mass-action system admits a steady state c * at which, for each complex η, the total inflow and outflow rates balance, Σ ν→η∈R λ ν→η (c * ) = Σ η→ν′∈R λ η→ν′ (c * ), where λ is the deterministic mass-action rate [12]. If a reaction system is complex balanced, then its associated stochastic mass-action system admits a stationary distribution corresponding to a product of Poisson distributions centered at the complex balanced steady state [3]. The following lemma shows that the complex balancing of the original network is inherited by a regular slack network.

Lemma 6.1. Suppose that (S, C, R, Λ) is a reaction network whose mass-action deterministic model admits a complex balanced steady state c * . Then any regular slack network (S̃, C̃, R̃, Λ̃ N ) with slack reactants Y 1 , • • • , Y m also admits a complex balanced steady state at c̃ = (c * , 1, 1, . . ., 1).

Remark 6.1. Note that a regular slack network also preserves the deficiency of the original network. The deficiency δ of a reaction network is an index defined by δ = n − ℓ − s, where n is the number of complexes, ℓ is the number of connected components, and s is the rank of the stoichiometry matrix of the reaction network. Deficiency characterizes the connectivity of the network structure, and surprisingly it can also determine the long-term behavior of the system dynamics regardless of the system parameters [12,20,3]. A regular slack network and the original network have the same number of complexes n and the same connectivity matrix S, which implies they have the same number of connected components ℓ. Furthermore, using the notation from Section 3.2, the stoichiometry matrix is Γ = CS for the original network and Γ̃ = C̃S for a slack network, which means that they have the same rank s. Together, these imply that the original network and its regular slack network have the same deficiency.
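The deficiency δ = n − ℓ − s can be computed mechanically from the complexes and reactions. A minimal sketch (the helper and the toy networks below are ours, chosen for illustration): for the birth-death network ∅ ⇌ X and its regular slack version Y ⇌ X in species (X, Y), the deficiency is the same, as Remark 6.1 asserts.

```python
import numpy as np

def deficiency(complexes, reactions):
    """delta = n - l - s for a CRN.
    complexes: list of composition vectors (tuples);
    reactions: list of (i, j) index pairs into `complexes`."""
    n = len(complexes)
    # l: connected components of the undirected reaction graph (union-find)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]; i = parent[i]
        return i
    for i, j in reactions:
        parent[find(i)] = find(j)
    l = len({find(i) for i in range(n)})
    # s: rank of the stoichiometry matrix with columns nu' - nu
    cols = [np.array(complexes[j]) - np.array(complexes[i]) for i, j in reactions]
    s = np.linalg.matrix_rank(np.column_stack(cols)) if cols else 0
    return n - l - s

# Original network 0 <-> X: complexes {0, X} in one species
orig = deficiency([(0,), (1,)], [(0, 1), (1, 0)])
# Regular slack network Y <-> X: complexes {Y, X} in species (X, Y)
slack = deficiency([(0, 1), (1, 0)], [(0, 1), (1, 0)])
assert orig == slack == 0
```

Both networks have n = 2, ℓ = 1, s = 1, hence δ = 0, consistent with the deficiency-preservation argument above.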
Since the complex balancing is inherited with the same steady state values for the X i , we have the following stochastic analog: inheritance of the Poissonian stationary distribution for regular slack systems.

Theorem 6.2. Let X be the stochastic mass-action system associated with a complex balanced (S, C, R, Λ) with initial condition X(0) = x 0 . Let X N be the stochastic system associated with a regular slack system (S̃, C̃, R̃, Λ̃ N ) with X N (0) = x 0 . Then, for the state space S N of X N , there exists a constant M N > 0 such that π N (x) = M N π(x) for each x ∈ S N , where π and π N are the stationary distributions of X and X N , respectively.
We demonstrate Lemma 6.1 and Theorem 6.2 with a simple example.

Example 6.1. Consider the two networks (22) and (23). Let X and X N be the systems associated with (22) and (23), respectively, where N is the conservation quantity such that X N (t) ≤ N . Under mass-action kinetics, the complex balanced steady state of (22) is c * = 2. Under mass-action kinetics, c̃ = (2, 1) is a complex balanced steady state of (23). Now let π and π N be the stationary distributions of X and X N , respectively. By Theorem 6.4 of Anderson et al [3], π is a product form of Poissons such that π(x) = e −2 2 x /x! for each state x.
By plugging π into the chemical master equation (3) of X N , one can verify for each x that π is a stationary solution of the chemical master equation of X N . Since the state space of X N is finite, we can choose M N so that M N π sums to one over it. Then π N = M N π is the stationary distribution of X N .
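This truncated-Poisson claim is easy to verify numerically. As a sketch, we take illustrative rates κ 1 = 2 for the birth reaction and κ 2 = 1 for degradation (our assumption; the example only fixes c* = κ 1 /κ 2 = 2), build the generator of the slack birth-death chain on {0, . . ., N }, and check that the renormalized truncated Poisson(2) solves πQ = 0:

```python
import math
import numpy as np

N = 8                       # conservation bound X <= N
kappa1, kappa2 = 2.0, 1.0   # illustrative rates giving c* = kappa1/kappa2 = 2

# Generator of the slack birth-death chain on {0, ..., N}:
# birth at rate kappa1 while slack remains (x < N), death at rate kappa2 * x.
Q = np.zeros((N + 1, N + 1))
for x in range(N + 1):
    if x < N:
        Q[x, x + 1] = kappa1
    if x > 0:
        Q[x, x - 1] = kappa2 * x
    Q[x, x] = -Q[x].sum()

# Truncated-and-renormalized Poisson(2), as predicted by Theorem 6.2
pi = np.array([2.0 ** x / math.factorial(x) for x in range(N + 1)])
pi /= pi.sum()
assert np.allclose(pi @ Q, 0.0, atol=1e-12)   # pi Q = 0: stationarity
```

The check succeeds because the slack system merely switches the birth reaction off at the boundary x = N, which leaves the detailed-balance relations of the interior states untouched.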

Applications of Slack Networks
In this Section, we demonstrate the utility of slack reactants in computing mean first passage times for two biological examples. For both examples, we compute the mean first passage time via the matrix inversion approach shown in Appendix A.

A Lotka-Volterra model with migration
Consider the Lotka-Volterra model with migration shown in Figure 4A. In this model, species A is the prey and species B is the predator. Clearly, the state space of this model is infinite: all (A, B) such that A ≥ 0, B ≥ 0. We will use slack reactants to determine the expected time to extinction of either species. More specifically, let K = {(A, B) : A = 0 or B = 0}. We will calculate the mean first arrival time to K from an initial condition (A(0), B(0)). (Our choice for the simulations is shown in Figure 4.) To generate our slack network, we choose a conservative bound w • (A, B) ≤ N with w = (1, 1). As we discussed in Section 3.3, this w minimizes the intrusiveness of slack reactants because the number of reactions ν → ν′ such that (ν′ − ν) • w > 0 is minimized. Using the algorithm introduced in Section 3.2, we generate a regular slack network (24) with a slack reactant Y . As the slack reactant Y in the reactions B + Y ⇌ 2Y ⇌ A + Y can be canceled, we further generate the optimized slack network shown in Figure 4A. We let A(0) + B(0) + Y (0) = N , which is the conservation quantity of the new network.
Let τ be the first passage time from our initial condition to K. First, we examine the accessibility of the set K. Because our reaction network contains creation and destruction of all species (i.e., B ⇌ ∅ ⇌ A), the original model is irreducible and any state is accessible to K. Furthermore, Theorem 6.1 guarantees that the stochastic model associated with the optimized slack network is also accessible to K from any state.
Next, by showing there exists a Lyapunov function satisfying the conditions of Theorem 5.4 for the original model, we are guaranteed that the first passage times of our slack network converge to the true first passage times. (See Appendix E.1 for more detail.) Therefore, as the plot in Figure 4B shows, the mean first extinction time of the slack network converges to that of the original model as N increases. The mean first passage time of the original model was obtained by averaging 10⁹ sample trajectories. These trajectories were computed serially on a commodity machine and took 4.6 hours to run. In contrast, the mean first passage times of the slack systems were computed analytically on the same computer and took at most 13 seconds. Figure 4 also shows that using only 10³ samples is misleading, as the simulation average has not yet converged to the true mean first passage time. Finally, as expected from Theorem 5.1, the probability density of the slack network converges to that of the original network (see Figure 4C).
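For comparison, the simulation baseline above can be sketched as a Gillespie (SSA) estimate of the mean first arrival time to K = {(a, b) : a = 0 or b = 0}. The reaction list and rate constants below are placeholders of our own (a generic predator-prey system with extinction, not the Figure 4 parameters), purely to illustrate the estimator:

```python
import random

# Placeholder reactions: (stoichiometry change (dA, dB), propensity(a, b))
reactions = [
    ((1, 0),  lambda a, b: 0.3 * a),       # A -> 2A   (prey birth)
    ((-1, 1), lambda a, b: 0.05 * a * b),  # A + B -> 2B (predation)
    ((0, -1), lambda a, b: 0.5 * b),       # B -> 0    (predator death)
    ((-1, 0), lambda a, b: 0.3 * a),       # A -> 0    (prey death)
]

def first_arrival_time(a, b, rng):
    """One SSA trajectory; returns the first time a == 0 or b == 0."""
    t = 0.0
    while a > 0 and b > 0:
        rates = [f(a, b) for _, f in reactions]
        t += rng.expovariate(sum(rates))               # exponential waiting time
        (da, db), _ = rng.choices(reactions, weights=rates)[0]
        a, b = a + da, b + db
    return t

rng = random.Random(1)
times = [first_arrival_time(5, 5, rng) for _ in range(200)]
mean_tau = sum(times) / len(times)
```

A Monte Carlo average of this kind converges slowly (error ~ 1/sqrt(samples)), which is exactly why the analytic slack computation in Figure 4B is so much cheaper than the 10⁹-trajectory baseline.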

Protein synthesis with slow toggle switch
We now consider a protein synthesis model with a toggle switch; see Figure 5A. Protein species X and Z may be created, but only when their respective genes D X or D Z are in the active (unoccupied) state, D X 0 and D Z 0 . Each protein acts as a repressor for the other by binding at the promoter of the opposite gene and forcing it into the inactive (occupied) state (D X 1 and D Z 1 ). In this system we consider only one copy of each gene, so that D X 0 + D X 1 = D Z 0 + D Z 1 = 1 for all time. Thus, we focus primarily on the state space of protein numbers (X, Z).
The deterministic form of such systems is often referred to as a "bi-stable switch," as it is characterized by the steady states (X *, 0) (X "on" and Z "off") and (0, Z * ) (X "off" and Z "on"). The stochastic form of the toggle switch has been shown to exhibit a multi-modal long-term probability density due to switching between these two deterministic states, driven by rapid and significant changes in the numbers of proteins X and Z by synthesis or degradation (depending on the state of the promoters) [1]. Figure 5C shows that the associated stochastic system admits a tri-modal long-term probability density. Thus the system transitions from one mode to the other modes and, for the kinetic parameters chosen in Figure 5, rarely leaves the region R = {(X, Z) | 0 ≤ X ≤ 30 or 0 ≤ Z ≤ 30}. Significant departures from a stable region of a genetic switch may be associated with irregular and diseased cell fates. As such, the first passage time of this system out of R may indicate the appearance of an unexpected phenotype in a population. Because this event is rare, estimating first passage times with direct stochastic simulation, such as the Gillespie algorithm [15], is complicated by the long time taken to exit the region. As in the previous example, slack systems provide a valuable tool for direct calculation of mean first passage times. In this example, we consider the time a trajectory beginning at state (X, Z, D X 0 , D Z 0 ) = (0, 0, 1, 1) enters the target set K = {(X, Z) | X > 30 and Z > 30} = R c , and compute τ , the first passage time to K.
Since the species corresponding to the status of the promoters (D X 0 , D X 1 , D Z 0 and D Z 1 ) are already bounded, we use the conservative bound X + Z ≤ N to generate a regular slack network, shown in Figure 5B, with the algorithm introduced in Section 3.2. The original toggle switch model is irreducible (because of the degradation reactions X → ∅, Z → ∅ and the protein synthesis reactions). Moreover, by Theorem 6.1, the degradation reactions guarantee that the slack system is also accessible to K from any state.
As shown in Figure 5D, the mean first passage time of the slack system appears to be converging to approximately 3.171 × 10⁹. To prove that the limit of the slack system is actually the original mean first passage time, we construct a Lyapunov function satisfying the conditions of Theorem 5.4. See Appendix E.2 for more details about the construction of the Lyapunov function.
Our slack reactant framework appears to provide more robust estimates of the mean first passage time than the sFSP method and the finite buffer method. In particular, in Section 5 we provide theorems showing that the slack system approximates the probability density and the mean first passage time of the original system. Because slack networks preserve network properties, such as weak reversibility, the slack system is also likely to have the same accessibility to a target state as the original model (see Section 6). In particular, we note that weak reversibility guarantees our slack truncation does not introduce absorbing states.
In Section 6, we show that this truncation method is natural in the sense that the stationary distributions of the original and slack systems are identical up to multiplication by a constant when the original system is complex balanced. Finally, in Section 7, we use slack networks to calculate first passage times for two biological examples. We expect that this new theoretical framework for state space truncation will be useful in the study of other biologically motivated stochastic chemical reaction systems.

A First passage time for Markov processes with finitely many states
The Markov chain associated with a slack network always has a finite state space. There are many different methods to analytically derive the mean first passage time of a Markov chain with a finite state space [22,21,37,9]. In this paper, we use the Laplace transform method, which is also used in [37]. For a continuous time Markov process, let S be the finite state space and let Q be the transition rate matrix, i.e., Q ij is the transition rate from state i to state j. For a subset K = {i 1 , i 2 , . . ., i k } ⊂ S, we define a truncated transition matrix Q K that is obtained by removing the i j th rows and columns from Q for j = 1, 2, . . ., k. Then the mean first passage time to the set K starting from the i-th state is the i-th entry of −Q −1 K 1, where 1 is a column vector with each entry equal to 1.
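The formula −Q K −1 1 translates directly into a few lines of linear algebra. A minimal sketch (the helper name is ours), validated on a pure-death chain where the mean hitting time of 0 from state n is the harmonic number 1 + 1/2 + • • • + 1/n:

```python
import numpy as np

def mean_first_passage_times(Q, K):
    """Mean first passage times to the set K for a finite-state CTMC with
    transition rate matrix Q, via the truncated-generator formula -Q_K^{-1} 1."""
    keep = [i for i in range(Q.shape[0]) if i not in set(K)]
    Q_K = Q[np.ix_(keep, keep)]                      # remove rows/columns of K
    tau = -np.linalg.solve(Q_K, np.ones(len(keep)))  # solves Q_K tau = -1
    return dict(zip(keep, tau))                      # state index -> E[tau]

# Pure-death chain on {0,...,4}: from state x, jump to x-1 at rate x; K = {0}.
n = 4
Q = np.zeros((n + 1, n + 1))
for x in range(1, n + 1):
    Q[x, x - 1] = x
    Q[x, x] = -x
tau = mean_first_passage_times(Q, {0})
assert abs(tau[4] - (1 + 1/2 + 1/3 + 1/4)) < 1e-12   # harmonic number H_4
```

For the slack systems in Section 7 the same routine applies verbatim once the truncated state space is indexed, which is why the analytic computation takes seconds rather than hours.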

B Proofs of Convergence Theorems
In this section, we provide the proofs of the theorems introduced in Section 5. We use the same notation and assumptions as in Section 5.
Proof of Theorem 5.1: We employ the main idea of the proof of Theorem 2.1 in Hart and Tweedie [18]. Let a state x and a time t be fixed. We consider N large enough that x ∈ S N −1 .
We use an FSP model on S N −1 of the original system X with designated absorbing state x * ∈ S c N −1 . Let p FSP N −1 be the probability density function of this FSP model. Let T N be the first time for X N to hit S N \ S N −1 . We generate a coupling of X N and the FSP model restricted to S N −1 such that they move together until T N and move independently after T N . Then (26) follows, where we used the fact that after T N the FSP process is absorbed at x * . Note that p FSP N −1 (x, t) monotonically increases in N and converges to p(x, t), as N → ∞, for each x ∈ S N . Then by the monotone convergence theorem, the relevant sum converges to p(S, t) = 1. Therefore, taking lim N →∞ on both sides of (26), the result follows.

Proof of Theorem 5.2: This proof is a slight generalization of the proof of Theorem 3.3 in Hart and Tweedie [18]. Since the convergence of ∥p N (•, t) − π N ∥ 1 is independent of N , for any ε > 0 we choose t 0 sufficiently large such that ∥p N (•, t 0 ) − π N ∥ 1 < ε for all N . The result then follows by the triangle inequality. This can be formally proved as follows: for each x ∈ K c ,

P (X(t) = x) = P (X(t) = x, τ > t) + P (X(t) = x, τ ≤ t) = P (X̃(t) = x) + P (X(t) = x, τ ≤ t) ≥ P (X̃(t) = x).
In the same way, we can prove that P (X̃ N (t) = x) < P (X N (t) = x) for each x ∈ K c . Let ε > 0 be arbitrary. Then lim N →∞ ∥π − π N ∥ 1 = 0 implies that for any subset U we have |π N (U ) − π(U )| < ε for sufficiently large N . We use this property and (28), combined with the 'monotonicity' of the chemical master equation, to show the result. First of all, note that we are assuming X(0) = X N (0) = x 0 . Hence there is a constant γ 1 > 0 such that P (X(0) = x) ≤ γ 1 π(x) for all x. Moreover, by choosing ε sufficiently small in (29), we have min N π N (x 0 ) ≥ π(x 0 ) − ε > 0. Thus there exists γ 2 such that P (X N (0) = x) ≤ γ 2 π N (x) for all sufficiently large N and for each x. Note that the chemical master equation is a system of ordinary differential equations, and P (X(t) = •) and π are solutions to the system with the initial conditions P (X(0) = •) and π, respectively. Hence, by the monotonicity of the system of ordinary differential equations, it follows that P (X(t) = x) ≤ γ 1 π(x) for any x and t.
In the same way, for any N it follows that P (X N (t) = x) ≤ γ 2 π N (x) for any x and t.
The detailed proof of the monotonicity of the chemical master equation is given in Lemma 3.2 of Enciso and Kim [10]. Secondly, there exists a finite set K̃ such that π(K̃ c ) < ε, because π is a probability distribution. Furthermore, by (29), π N (K̃ c ) ≤ π(K̃ c ) + ε < 2ε for any sufficiently large N .
We prove Theorem 5.3 by using the 'absorbing' Markov processes X̃ and X̃ N coupled to X and X N , respectively.
Proof of Theorem 5.3: We first break up the term P (τ > t) − P (τ N > t). Let ε > 0 be an arbitrarily small number, and let K̃ be the finite set found in Lemma B.1. Then we apply the triangle inequality. Note that by the same proof as for Theorem 5.1, we have the corresponding convergence. Since the summation over x ∈ K c ∩ K̃ is finite, taking N → ∞ in (32) yields (33). Now, to show the convergence of the mean first passage times, note that the integrand is bounded by P (τ > t) + g(t). Conditions 3 and 4 in Theorem 5.3 imply that E(τ ) < ∞ and ∫ 0 ∞ g(t)dt < ∞. Hence the dominated convergence theorem and (33) give the result.

The existence of a special Lyapunov function ensures that the conditions in Theorem 5.3 hold. In order to use the absorbing Markov processes, as we did in the previous proof, we define a Lyapunov function for X̃ based on a given Lyapunov function V . Let λ̃ ν→ν′ and λ̃ N ν→ν′ denote the intensities of a reaction ν → ν′ for X̃ and X̃ N , respectively. For a given function V such that V (x) ≥ 1 for any x ∈ S, we define Ṽ so that Ṽ (x) = V (x) and Ṽ (x + ν′ − ν) ≥ V (x + ν′ − ν) for every reaction. Moreover, λ̃ ν→ν′ (x) = 0 ≤ λ ν→ν′ (x) if x ∈ K, by the definition of X̃, for each reaction ν → ν′. From (34) and (35), therefore, we conclude that (36) holds with C̃ = C and D̃ = max{C + 1, D}.

Proof of Theorem 5.4: In this proof, we show that all the conditions in Theorem 5.3 are met using the given function V . First, we show that X admits a stationary distribution. This follows straightforwardly from Theorem 3.2 in Hart and Tweedie [18], because condition 3 in Theorem 5.4 means that V is a Lyapunov function for X.
The condition basically means that every outward reaction ν → ν′ (i.e., x ∈ S N and x + ν′ − ν ∈ S c N ) in R gives a non-negative drift, as V (x + ν′ − ν) − V (x) ≥ 0. We use condition 2 in Theorem 5.4 to show that V is also a Lyapunov function for X N for any N .
We denote by λ N ν̃→ν̃′ the intensity of a reaction ν̃ → ν̃′ in (S̃, C̃, R̃, Λ̃ N ). Suppose that a reaction ν̃ → ν̃′ ∈ R̃ is obtained from ν → ν′ ∈ R by adding a slack reactant. That is, q(ν̃′ − ν̃) = ν′ − ν, where q is the projection function such that q(x 1 , . . ., x d , x d+1 , . . ., x d+r ) = (x 1 , . . ., x d ). Then, by the definition (13) of the intensity in a slack network, we have λ N ν̃→ν̃′ (x) = 0 when x + q(ν̃′ − ν̃) ∉ S N , because X N is confined to S N . Furthermore, condition 2 in Theorem 5.4 gives (37). In case x + q(ν̃′ − ν̃) ∈ S N ⊆ S, we have λ N ν̃→ν̃′ (x) = λ ν→ν′ (x) by the definition (13). Hence, by condition 3 and (37), the drift bound holds for any x. Thus V is also a Lyapunov function for X N . Hence Theorem 6.1 (the Foster-Lyapunov criterion for exponential ergodicity) in Meyn and Tweedie [31] implies that for each N there exist β > 0 and η > 0, which depend only on C and D, such that the convergence to π N is exponentially fast. This guarantees that the condition in Theorem 5.2 holds with h(t) = βV (x 0 )e −ηt . Hence we have lim N →∞ ∥π − π N ∥ 1 = 0. Now, to show that the first passage time τ has finite mean, we apply the Foster-Lyapunov criterion to X̃. Since (36) meets the conditions of Theorem 6.1 in Meyn and Tweedie [31], the probability of X̃ converges in time to its stationary distribution exponentially fast. That is, for any subset U , there exist β̃ > 0 and η̃ > 0 such that the exponential bound holds, where π̃ is the stationary distribution of X̃. This, in turn, implies E(τ ) < ∞, where the second inequality follows as π̃(K c ) = 0, which holds because X̃ is eventually absorbed in K as the original process X is irreducible.
Finally, we show that there exists g(t) such that P (τ N > t) ≤ g(t) for all N and ∫ 0 ∞ g(t)dt < ∞. By the same reasoning used to derive (36), we derive from (38) an analogous drift bound for X̃ N . Hence, in the same way as for (39), we have the exponential ergodicity of X̃ N by Theorem 6.1 in Meyn and Tweedie [31], and we then derive the corresponding bound, where π̃ N is the stationary distribution of X̃ N , which also has zero probability on K c as X̃ N is eventually absorbed in K. Since β̃ and η̃ depend only on C̃ and D̃, we let g(t) = β̃V (x 0 )e −η̃t , which satisfies P (τ N > t) ≤ g(t) for any N and ∫ 0 ∞ g(t)dt < ∞.

Hence by Lemma C.1, for each reaction ν̃ i → ν̃′ i such that q(ν̃ i ) = ν i and q(ν̃′ i ) = ν′ i , we have w • (q(ν̃′ i ) − q(ν̃ i )) > 0 for each i.
Finally, since q(ν̃ i ) = ν i and q(ν̃′ i ) = ν′ i for each i, (41) and (42) imply that if N k ≥ N 0 , then X N k can re-enter S N k from 0 with positive probability. Consequently, we have constructed a sequence of reactions along which X N k can re-enter S N k from x k for any k such that N k ≥ N 0 . Since each S N k is a communication class, x k ∈ S N k , which, in turn, contradicts the assumption that x k ∉ S N k . Hence the claim holds, so that S N is closed for any N large enough.

D Proofs for Lemma 6.1 and Theorem 6.2

Proof of Lemma 6.1: In the deterministic model x̃(t) = (x 1 (t), . . ., x d (t), y 1 (t), . . ., y m (t)) associated with (S̃, C̃, R̃, Λ̃ N ), if we fix y i (t) ≡ 1 for all i = 1, 2, . . ., m, then x(t) follows the same ODE system as the deterministic model x(t) associated with the original network (S, C, R, Λ). Therefore c̃ = (c * , 1, 1, . . ., 1) is a complex balanced steady state for x̃(t). It has been shown that the existence of a single positive complex balanced steady state implies that all other positive steady states are complex balanced [12], completing the proof.
Proof of Theorem 6.2: Let w i and N i be the conservative vectors and conservative quantities of the slack network (S̃, C̃, R̃, Λ̃ N ), respectively. Then π is a stationary solution of the chemical master equation of X. In particular, as shown in Theorem 6.4 of Anderson et al [3], π satisfies the stochastic complex balance (43) for X, for each complex ν * and state x. Let q be the one-to-one projection function that we defined in the proof of Theorem 6.1. Note that each reaction ν̃ → ν̃′ ∈ R̃ is defined so that it satisfies the conservation law (44), where q(ν̃) = ν and q(ν̃′) = ν′. Then π also satisfies the stochastic complex balance for X N , because (43) and (44) imply the analogous balance for each complex ν̃ * and state x. Summing over each ν̃ * , it follows that π N is a stationary solution of the chemical master equation of X N . Since the state spaces of X and X N differ, M N is the constant such that the sum of M N π(x) over the state space of X N is one. Then π N = M N π.

E Lyapunov Functions for Example Systems

E.1 Lotka-Volterra with Migration
We construct a Lyapunov function satisfying the conditions of Theorem 5.4 for the Lotka-Volterra model with migration (Figure 4A). First, for both the original model and the slack network, we assume A(0) = a 0 and B(0) = b 0 , which do not depend on N . Let V (x) = e w•x with w = (1, 1). Conditions 1 and 2 in Theorem 5.4 obviously hold. Note that the slack network of Figure 4A

E.2 Protein Synthesis with Slow Toggle Switch
We also make use of the Lyapunov approach of Theorem 5.4 to prove that the first passage time of the slack system in Figure 5B converges to the original first passage time as the truncation size N goes to infinity. Let X N = (X, Z, D X 0 , D X 1 , D Z 0 , D Z 1 ) be the stochastic system associated with the slack system. Recall that the slack network admits a conservation relation w • X ≤ N with w = (1, 1, 0, 0, 0, 0). Hence we define a Lyapunov function V (x) = e x+z where x = (x, z, dx 0 , dx 1 , dz 0 , dz 1 ). By the definition of V , it is obvious that conditions 1 and 2 in Theorem 5.4 hold. So we show that condition 3 in Theorem 5.4 holds.

Figure 3: Comparison between an sFSP model and a slack network for the same reaction system. A. Original 2-dimensional system and its slack network. B. Mean first passage time as a function of the truncated state space size, comparing sFSP and the slack reactant method. C. The state space truncations corresponding to the sFSP method and the slack method for this model. Target states for the first passage time are indicated in red. D. Mean of 100 sample trajectories obtained by Gillespie simulations for the original, the slack and the sFSP systems. The red lines indicate the mean of the trajectories that touched the target states within [0, 500], while the black lines are the mean of the trajectories that did not touch the target states during this time. E. Probability density heat maps of the stochastic processes modeled under the original, the slack and the sFSP models, respectively, at t = 500.

Figure 4: Calculating Mean Time to Extinction of a Lotka-Volterra Model with Migration Using Slack Reactants. A. The reaction network for the Lotka-Volterra model with migration (left) and the Lotka-Volterra model with a slack reactant Y (right). The parameters are κ 1 = 0.1, κ 2 = 0.1, κ 3 = 0.2, κ 4 = 0.6, κ 5 = 0.2 and κ 6 = 0.2. B. Convergence of the mean time to extinction of the slack network (blue) to that of the true network (solid red). C. Heatmaps of the probability density at time 1000 of the original model and the slack system with various truncation sizes.