Exact and efficient hybrid Monte Carlo algorithm for accelerated Bayesian inference of gene expression models from snapshots of single-cell transcripts

Single cells exhibit a significant amount of variability in transcript levels, which arises from slow, stochastic transitions between gene expression states. Elucidating the nature of these states and understanding how transition rates are affected by different regulatory mechanisms require state-of-the-art methods to infer underlying models of gene expression from single cell data. A Bayesian approach to statistical inference is the most suitable method for model selection and uncertainty quantification of kinetic parameters using small data sets. However, this approach is impractical because current algorithms are too slow to handle typical models of gene expression. To solve this problem, we first show that time-dependent mRNA distributions of discrete-state models of gene expression are dynamic Poisson mixtures, whose mixing kernels are characterized by a piecewise deterministic Markov process. We combined this analytical result with a kinetic Monte Carlo algorithm to create a hybrid numerical method that accelerates the calculation of time-dependent mRNA distributions by 1000-fold compared to current methods. We then integrated the hybrid algorithm into an existing Monte Carlo sampler to estimate the Bayesian posterior distribution of many different, competing models in a reasonable amount of time. We demonstrate that kinetic parameters can be reasonably constrained for modestly sampled data sets if the model is known a priori. If there are many competing models, Bayesian evidence can rigorously quantify the likelihood of a model relative to other models from the data. We demonstrate that Bayesian evidence selects the true model and outperforms approximate metrics typically used for model selection.

Static assays only provide population snapshots of transcript levels and cannot follow the dynamics of transcription in a single cell through time.
To this end, static assays have relied upon mathematical models to infer dynamic properties of gene expression in single cells from the measured snapshot of transcript levels; see Ref. 14 for a review. Inference requires (1) appropriate models of stochastic gene expression, (2) numerical methods to calculate the time-dependent mRNA distribution in a population of cells given any underlying model and associated parameters, and (3) a way to quantify the likelihood that measured data were sampled from the calculated distribution. We recently developed a Bayesian approach (BayFISH) that uses this likelihood to infer best-fitting parameters from single cell data and quantifies their uncertainty using the posterior distribution.15,16 Although Bayesian inference is the most complete and rigorous approach, it requires significantly more computation than other approximate methods, e.g., maximum likelihood.
Here, we developed a hybrid numerical method that accelerates the calculation of time-dependent mRNA distributions by 1000-fold compared to standard methods. We integrated this method into BayFISH and, for the first time, one can estimate the Bayesian posterior distribution of many competing models of gene expression in a reasonable amount of time. The Bayesian evidence rigorously quantifies the likelihood of a model relative to other models given the data, and we show that Bayesian evidence selects the true model and outperforms approximate metrics, e.g., the Bayesian Information Criterion (BIC) or the Akaike Information Criterion (AIC), typically used for model selection. Our accelerated Bayesian inference represents a significant advance over existing methods used for inferring gene expression models from snapshots of single cell transcripts.

II. CONNECTING MODELS OF GENE EXPRESSION TO SINGLE CELL DATA
Our inference method uses data from single-molecule RNA Fluorescence In Situ Hybridization (smFISH) but could include single cell data from other static assays. The smFISH technique labels transcripts with fluorescent DNA oligos and measures both the number of mature mRNAs (m) and the number of gene loci with high-activity transcription sites (TSs); see Fig. 1(a). A typical smFISH data set is a histogram h = h(⃗ω), where ⃗ω = (m, TS) ∈ Ω and Ω is the set of all possible states that can be measured in a cell.
A broad spectrum of measured gene expression profiles in bacteria and eukaryotes is well-explained by discrete-state gene expression models.17 In this article, we adopt a two-allele, 3-state model [Fig. 1(b)] as a case study for modeling stochastic gene expression in eukaryotes and for testing our method of accelerated Bayesian inference. We further focus on dynamic smFISH experiments that perturb gene expression (e.g., induction) and then measure mRNA distributions at different times after induction to infer dynamics and kinetic parameters. Induction can change one or more of the model parameters [Fig. 1(c)]. The smFISH data from an induction experiment consist of a joint histogram h = h(⃗ω, tℓ), where the tℓ are independent observations made at different times before and after induction. If the changed parameters are unknown a priori, then one should evaluate all possible induction models, which leads to a combinatorial explosion in model space. For example, there are 2^8 = 256 candidate induction models for the 3-state model shown in Fig. 1(b), of which the model shown in Fig. 1(c) is one. In Sec. V, we will consider 2^6 = 64 candidate models where the same two parameters (δ and β0) are known a priori to not change upon induction.
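For concreteness, the binary model labeling can be enumerated programmatically; this short sketch (the parameter names and their ordering are illustrative, not necessarily the paper's exact convention) reproduces the 256 → 64 counting:

```python
from itertools import product

# Each two-allele, 3-state model has 8 kinetic parameters; one binary digit
# per parameter marks whether induction changes it (1) or not (0).
# Parameter ordering below is illustrative, not the paper's convention.
params = ["k01", "k10", "k12", "k21", "b0", "b1", "b2", "delta"]

all_models = ["".join(map(str, bits)) for bits in product([0, 1], repeat=8)]
# 2^8 = 256 candidate induction models in total.

# Restricting beta_0 (index 4) and delta (index 7) to be non-inducible
# leaves 2^6 = 64 candidate models of the form xxxx0xx0.
restricted = [m for m in all_models if m[4] == "0" and m[7] == "0"]
```

The ground-truth model of Fig. 1(c), labeled 00110000, is one member of the restricted set.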
A likelihood approach is used to connect mathematical models of stochastic gene expression to single cell data. Formally, the likelihood L is the probability that a candidate model M and its associated parameter set ⃗θ would generate a given set of data (h). The number of parameters (i.e., the dimension of ⃗θ) is determined by the model structure M. Mathematically, the likelihood L is a function of the joint probability distribution P(⃗ω, tℓ | ⃗θ, M) of a candidate model M and its associated parameters ⃗θ at the discrete observation times,

L(h | ⃗θ, M) = ∏_{tℓ∈Φ} Mℓ ∏_{⃗ω∈Ω} [P(⃗ω, tℓ | ⃗θ, M)]^{h(⃗ω, tℓ)},    (1)

where Φ is the set of observation times and Mℓ is the multinomial coefficient associated with each h(⃗ω, tℓ) that arises because the data were not ordered. In our Bayesian inference work flow [Fig. 1(d)], each candidate model M in the class of possible models {M} will require a large number (≥10^6) of Monte Carlo steps where, at each step, numerical simulations calculate the time-dependent mRNA distributions and evaluate the likelihood that different parameter sets ⃗θ for that model generated the observed data. Our previous software15,16 took days to perform the likelihood calculations for one model, which highlights the challenge of using Bayesian inference to evaluate hundreds of models and perform model selection. Below, we develop a hybrid method that accelerates both the numerical simulation and the likelihood calculations and that (in contrast to standard methods) scales with the number of multicore processors, thus allowing for efficient parallelization.
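In practice an implementation accumulates the logarithm of this likelihood. A minimal sketch (the histogram layout is hypothetical; this is not the BayFISH code) of the multinomial log-likelihood:

```python
from math import lgamma, log

def log_likelihood(hist, probs):
    """Log-likelihood that the model probabilities `probs` generated the
    histogram `hist`, summed over observation times.

    hist[t][w]  - counts of cells observed in state w at time t
    probs[t][w] - model probability P(w, t | theta, M)
    Both are dicts keyed by time, then by observable state (m, TS).
    """
    logL = 0.0
    for t, h in hist.items():
        n = sum(h.values())
        # log multinomial coefficient: the data are unordered
        logL += lgamma(n + 1) - sum(lgamma(c + 1) for c in h.values())
        # sum of h(w, t) * log P(w, t) over observed states
        logL += sum(c * log(probs[t][w]) for w, c in h.items() if c > 0)
    return logL
```

For a single time point with two equally likely states observed once each, this returns log 2 + 2 log(1/2) = log(1/2).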

III. A NOVEL HYBRID METHOD TO CALCULATE THE TIME EVOLUTION OF DISCRETE-STATE MODELS
While exact time-dependent solutions exist for two-state models,19-21 it is hard to generalize this analysis to models with more states. It is therefore necessary to solve the general time-dependent problem using numerical simulations. There are two classes of numerical procedures to solve the time evolution of a discrete-state model for a given set of parameters. The first class forward-evolves the chemical master equations (CMEs), which are a system of infinitely many coupled ordinary differential equations (ODEs) that describe the joint probability distributions P(⃗ω, t) as a function of time.22,23 To be numerically feasible, a truncation scheme (e.g., only consider mRNA levels below a maximum M) is used to reduce the infinite size of the dynamical system. While this approach delivers accurate estimates of the temporal evolution of the truncated CME, there are two shortcomings. First, the number of ODEs scales as S^2 M, where S is the number of genetic states for each allele; the ODE system becomes unwieldy for mammalian cells, where the number of observed mRNAs per cell can be large. Second, the truncated ODE system is stiff and parallelizes poorly (see Sec. III). The second class of procedures uses kinetic Monte Carlo methods to generate stochastic sample trajectories; although each simulation is computationally less expensive, it comes at the cost of having to sample over many runs to achieve equivalent accuracy to the CME.

FIG. 1. Single cell data and models of gene expression. (a) The single-molecule RNA FISH (smFISH) technique provides information on the localization and numbers of mature mRNAs (m) in single cells, including clusters of nascent transcripts produced at transcription sites (TSs) at active genetic loci. (b) A diploid, two-allele 3-state genetic model where κij is the transition rate between genetic states, βi is the mRNA synthesis rate of each state, and δ is the mRNA degradation rate. (c) Induction changes one or more parameters from an unstimulated (U) to a stimulated value (S). Here, we show one of the many possible induction models M, labeled in binary (00110000). (d) Schematic of the Bayesian inference work flow.
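As a concrete illustration of the first class of methods, a truncated CME for a single gene state with constant synthesis rate can be forward-integrated directly. This toy sketch (forward Euler with illustrative rates; the full model couples S^2 genetic states to every mRNA level) shows the per-mRNA-level bookkeeping that becomes unwieldy for large M:

```python
# Truncate the CME at a maximum mRNA count M_max and forward-integrate.
# Single gene state with constant synthesis beta; illustrative rates.
beta, delta, M_max = 20.0, 1.0, 100

def cme_rhs(P):
    """dP_m/dt = beta*(P_{m-1} - P_m) + delta*((m+1)*P_{m+1} - m*P_m)."""
    dP = [0.0] * (M_max + 1)
    for m in range(M_max + 1):
        dP[m] = -(beta + delta * m) * P[m]
        if m > 0:
            dP[m] += beta * P[m - 1]
        if m < M_max:
            dP[m] += delta * (m + 1) * P[m + 1]
    return dP

P = [0.0] * (M_max + 1)
P[0] = 1.0                      # all cells start with zero mRNA
dt, t_end = 1e-3, 10.0          # forward Euler; small dt for stability
for _ in range(int(t_end / dt)):
    dP = cme_rhs(P)
    P = [p + dt * d for p, d in zip(P, dP)]

mean = sum(m * p for m, p in enumerate(P))
# mean approaches beta/delta = 20 at stationarity (Poisson limit)
```

Every time step touches all M_max + 1 probabilities; with two alleles and S genetic states the state vector grows by a factor of S^2.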
In this article, we develop a hybrid simulation method (the Poisson Mixture with a Piecewise Deterministic Markov Switching Rate or PM-PDMSR), which leverages analytical results and the efficiency of the kinetic Monte Carlo method. The key result is that the mRNA distribution can be exactly calculated for any realization (trajectory) of the genetic state, s(t); see the Appendix. Once transient initial conditions have burned off (t ≫ 1/δ), where δ is the mRNA degradation rate, the mRNA (NmRNA) distribution is always Poisson, P(NmRNA = m) = λ(t)^m e^{−λ(t)}/m!, with a dynamic rate λ(t) satisfying the piecewise ODE

dλ(t)/dt = β_{s(t)} − δλ(t),    (2)

with an initial condition λ(0) = 0. Given any trajectory s(t), we can exactly compute the mRNA distribution P(m|s(t)); see Figs. 2(a) and 2(b). Our goal, however, is to determine the joint distribution P(⃗ω, t), which requires us to generate Ns sample paths of s(t) that cover P(s, t). The sample paths in the small genetic state space (S^2-dimensional) are efficiently generated using standard kinetic Monte Carlo methods. After accumulating a large number of sample paths Ns generated by the underlying model, the mixture of the Poisson distributions recovers the joint mRNA distribution via the convolution

P(m, s = i, t) ≈ (1/Ns) Σ_{k=1}^{Ns} [λk(t)^m e^{−λk(t)}/m!] δ_{sk(t),i},    (3)

where λk(t) is the solution of (2) subject to the kth sample path of the genetic switching trajectory sk(t) and δi,j is the Kronecker delta. A detailed description of the hybrid simulator is given in the Appendix. We evaluated the efficiency of the hybrid simulator relative to the CME in performing a single step of the Bayesian inference work flow, i.e., simulating the joint distribution P(⃗ω, t) and calculating the likelihood L that this joint distribution produced a given data set (h). We benchmarked the simulators on diverse classes of discrete-state models, parameter sets, and data sets; see Fig. 3. The hybrid simulator is up to 10^3 times more efficient for models with more genetic states, S = 3 and 4.
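A minimal single-allele sketch of the PM-PDMSR idea (illustrative rates; not the published implementation): kinetic Monte Carlo generates the switching trajectory, while λ is propagated exactly between switches because (2) has an exponential solution over each constant-state episode:

```python
import math, random

def pm_pdmsr(kappa, beta, delta, t_obs, n_paths=2000, seed=1):
    """Hybrid PM-PDMSR sketch for a single-allele model.

    kappa[i][j]: switching rate i -> j; beta[i]: synthesis rate in state i;
    delta: degradation rate.  Between switching events dlambda/dt =
    beta_s - delta*lambda integrates exactly to
        lambda(t0 + dt) = beta_s/delta + (lambda(t0) - beta_s/delta)*exp(-delta*dt).
    Returns lambda at t_obs for each sample path; the mRNA distribution is
    the mixture of Poisson(lambda_k) over the paths.
    """
    rng = random.Random(seed)
    S = len(beta)
    lambdas = []
    for _ in range(n_paths):
        t, s, lam = 0.0, 0, 0.0
        while True:
            rates = [kappa[s][j] for j in range(S)]
            total = sum(rates)
            dt = rng.expovariate(total) if total > 0 else float("inf")
            if t + dt >= t_obs:
                dt = t_obs - t   # exact lambda update to the observation time
                lam = beta[s] / delta + (lam - beta[s] / delta) * math.exp(-delta * dt)
                break
            # exact lambda update over the constant-state interval, then switch
            lam = beta[s] / delta + (lam - beta[s] / delta) * math.exp(-delta * dt)
            t += dt
            r, acc = rng.random() * total, 0.0
            for j in range(S):
                acc += rates[j]
                if r <= acc:
                    s = j
                    break
        lambdas.append(lam)
    return lambdas

# Two-state telegraph example with hypothetical rates (OFF <-> ON at rate 1)
kappa = [[0.0, 1.0], [1.0, 0.0]]
lams = pm_pdmsr(kappa, beta=[0.0, 10.0], delta=1.0, t_obs=20.0)
mean_m = sum(lams) / len(lams)   # E[m] = E[lambda] ~ p_on*beta/delta = 5
```

The full method additionally bins the λ_k by genetic state to recover the joint distribution; no per-mRNA-level bookkeeping is ever needed.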
The efficiency gain of the hybrid simulator originates from the fact that P(m|s(t)) is solved exactly in mRNA space (and is independent of the size of M) and that P(s, t) is sampled efficiently in genetic-state space via kinetic Monte Carlo techniques. The accelerated hybrid method achieved equivalent accuracy to the CME; see Fig. S1 of the supplementary material. Finally, we tested the scaling of efficiency of different simulators on a modern multicore workstation, which can execute up to 64 parallel threads. We found that the hybrid method runs well in parallel, i.e., the total time needed for a fixed computational task distributed over T threads scales as 1/T. Surprisingly, the CME method exhibited stiff scaling such that the total time stayed constant and did not decrease with increasing threads; see Fig. S2 of the supplementary material.

IV. BAYESIAN INFERENCE AND UNCERTAINTY QUANTIFICATION OF MODEL PARAMETERS
Equipped with an efficient simulator of the time-dependent joint probability distribution and likelihood calculation for any model and parameter set, we first turned our attention to uncertainty quantification of the model parameters ⃗θ for a fixed model M. Given a likelihood, Bayesian inference uses the Bayes formula to update any prior beliefs P(⃗θ|M) and calculate the posterior distribution P(⃗θ|h, M) of the parameters ⃗θ given the data h and a fixed model M,

P(⃗θ|h, M) = P(h|⃗θ, M) P(⃗θ|M) / P(h|M).    (4)

As done previously, we resorted to Markov chain Monte Carlo (MCMC) with a Metropolis-Hastings (MH) sampler to estimate the posterior distribution P(⃗θ|h, M); see the Appendix and Ref. 15. We assumed that the prior P(⃗θ|M) is log-uniform. At each MCMC step, the MH sampler randomly proposes a nearby parameter set, computes the ratio of its posterior probability P(⃗θ|h, M) to that of the current parameter set, and probabilistically accepts or rejects the proposal with a prescribed criterion that depends only on the ratio of the likelihood values. The denominator P(h|M) in (4) is identical for any parameter set ⃗θ and cancels during the calculation of the ratio.
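A bare-bones random-walk MH sampler in log-parameter space, applied here to a toy Poisson-rate likelihood rather than the gene-expression likelihood, illustrates how the normalizing constant P(h|M) never needs to be evaluated:

```python
import math, random

def metropolis_hastings(log_post, theta0, n_steps=20000, step=0.3, seed=7):
    """Random-walk Metropolis-Hastings in log-parameter space (a log-uniform
    prior is flat in log theta, so the symmetric proposal needs no Jacobian
    correction).  `log_post` returns the unnormalized log posterior; the
    evidence P(h|M) cancels in the acceptance ratio."""
    rng = random.Random(seed)
    x = [math.log(t) for t in theta0]
    lp = log_post([math.exp(v) for v in x])
    chain = []
    for _ in range(n_steps):
        prop = [v + rng.gauss(0.0, step) for v in x]
        lp_prop = log_post([math.exp(v) for v in prop])
        if math.log(rng.random()) < lp_prop - lp:   # accept/reject
            x, lp = prop, lp_prop
        chain.append([math.exp(v) for v in x])
    return chain

# Toy check (not the gene-expression likelihood): infer a Poisson rate
# from counts; the posterior should concentrate near the sample mean 5.25.
data = [4, 6, 5, 7, 5, 4, 6, 5]
def log_post(theta):
    lam = theta[0]
    return sum(k * math.log(lam) - lam for k in data)   # up to a constant
chain = metropolis_hastings(log_post, [1.0])
est = sum(c[0] for c in chain[5000:]) / len(chain[5000:])
```

The same sampler applies unchanged to the smFISH likelihood; only `log_post` (and its cost per evaluation, which the hybrid simulator reduces) differs.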
We benchmarked our approach on two synthetic data sets that were generated by sampling (N = 100 or 1000 cells per time point for a total of 4 time points) from a two-allele, 3-state induction model, where the induction stimulus decreased the downward transition rates; see Methods. Here, the model was known a priori, and our goal was to infer the kinetic parameters and perform uncertainty quantification by comparing their posterior distributions [Figs. 4(a) and 4(b)] to the ground truth (GT) parameters used to generate the sampled synthetic data set (Fig. S3 of the supplementary material). Our method constrained the posterior parameter distribution around the ground truth, and a 10-fold increase in the number of sampled cells dramatically reduced the uncertainty in the inferred parameters. This observation also holds for a synthetic data set generated by a different two-allele, 3-state induction model; see Fig. S4 of the supplementary material.

Fitted models in systems biology often exhibit "sloppiness," where the goodness of the fit remains unchanged when several parameters are perturbed in a coordinated direction. Such directions in parameter space, called eigenparameters, are the principal components of the likelihood function in the high-dimensional parameter space.34 A common way to visualize the eigenparameters is to project the high-dimensional posteriors onto the subspace spanned by any two of the bare parameters;34,35 see Figs. S5 and S6 of the supplementary material. For example, our results show that simultaneously increasing the ON rate and OFF rate (and, thus, leaving mean transcript levels unchanged) results in a similar goodness of the fit. We also show that the posterior distribution is far from the asymptotic Gaussian limit, even when the number of samples N per time point is as large as 10^3.
In this non-Gaussian regime, it is necessary to consider the full posterior distributions for parameter uncertainty quantification, in contrast to heuristic approaches that consider only the covariance matrix of the posterior chain.36

The Journal of Chemical Physics

V. MODEL SELECTION USING THE FULL BAYESIAN FRAMEWORK
Knowing that our method of accelerated Bayesian inference can reliably constrain the kinetic parameters for a given model, we turned our attention to the harder problem of model selection. The goal was to identify the correct model from the 64 possible types of two-allele, 3-state induction models given the same synthetic data set as in Fig. 4, which was sampled from a ground-truth model and its parameters. We reduced the number of candidate models from 256 to 64 by keeping β0 and δ constant upon induction, i.e., always 0 in the binary notation such that M = xxxx0xx0. Our choice of noninducible parameters stems from our previous work that used a Bayesian approach to infer models of gene expression in stimulated neurons.15,16 It was known that the mRNA degradation rate (δ) did not change, and our analysis showed that the inferred basal transcription rate (β0) did not change upon stimulation. We therefore chose to keep these parameters unchanged to mimic our previous case study. Bayesian analysis naturally provides a quantitative measure of the likelihood of any model M, i.e., the probability of the model to reproduce the experimentally observed data h. This measure, referred to as the marginalized likelihood or evidence,37,38 is the denominator of (4),

P(h|M) = ∫ P(h|⃗θ, M) P(⃗θ|M) d⃗θ.    (5)

The evidence is simply the probability that a model M produced the data h and is equal to the sum of the probabilities of the model (i.e., the likelihood) over all sets of parameters that could have produced the data. The evidence is a convolution of the likelihood with the prior P(⃗θ|M), which quantifies the belief regarding the initial parameter distributions. The dimensionality of ⃗θ does not have to be identical for two different models, and this prior inherently penalizes models with too many parameters; see Sec. VI.
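On a toy problem, the evidence (5) can be estimated by brute force, averaging the likelihood over draws from the prior. This sketch (one Poisson observation with a flat prior, both illustrative choices rather than the paper's setup) has a closed-form answer to check against:

```python
import math, random

def evidence_prior_mc(log_like, prior_sample, n=200_000, seed=3):
    """Direct Monte Carlo estimate of the evidence (5):
    P(h|M) = integral of P(h|theta,M) P(theta|M) dtheta
           ~ average of the likelihood over draws from the prior.
    Workable only for toy problems; real models need better estimators."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        acc += math.exp(log_like(prior_sample(rng)))
    return acc / n

# Toy model: one Poisson observation k = 3, flat prior on [0, 10]
# (illustrative; the paper uses log-uniform priors).
k = 3
def log_like(lam):
    if lam <= 0.0:
        return float("-inf")
    return k * math.log(lam) - lam - math.log(math.factorial(k))

Z = evidence_prior_mc(log_like, lambda rng: rng.uniform(0.0, 10.0))
# Exact: (1/10) * integral_0^10 lam^3 e^{-lam}/3! dlam ~ 0.0990
```

Averaging over the prior is wasteful when the posterior is concentrated, which is why the paper resamples the posterior instead (Sec. V).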
The complexity of each model M increases with the total number of 1's in the binary notation because there will be two values (before induction and after induction) to be inferred for each inducible parameter.
The evidence for a model M is not calculated during the MCMC sampling of the posterior distribution and has to be computed separately. Computing the evidence is a sophisticated problem, [39][40][41][42] and we adopted an Importance Sampler of the Harmonic Mean Estimator (IS-HME) proposed by Robert and Wraith, 43 which resamples the posterior distribution estimated by the MCMC to compute the evidence of each model; see Appendix. We first carried out the MCMC calculations of posterior distributions for each of the 64 possible types of two-allele, 3-state induction models for the data sets described in Fig. 4 and Fig. S4 of the supplementary material. We then used IS-HME to compute the evidence of each model given the underlying data set. We compared the IS-HME evidence to maximum likelihood metrics used for model selection, such as the Bayesian Information Criterion (BIC) and Akaike Information Criterion (AIC). 15 Both BIC and AIC are approximations to the Bayesian evidence and become equivalent in the limit of large sample sizes; see Sec. VI.
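A one-dimensional sketch of the IS-HME idea (a toy posterior with known evidence; the Gaussian instrumental density and 2.5σ truncation radius are illustrative choices): the posterior samples are reused, and the instrumental density φ has support inside the posterior bulk so the harmonic-mean estimator has finite variance:

```python
import math, random

def is_hme_evidence(post_samples, log_post_kernel):
    """Importance-sampled harmonic mean estimator (in the spirit of Robert
    and Wraith): reuse posterior samples theta_i and an instrumental
    density phi supported inside the posterior bulk,
        1/Z ~ (1/M) * sum_i phi(theta_i) / [L(theta_i) * pi(theta_i)].
    1-D version; phi is a Gaussian fit truncated at 2.5 sigma."""
    M = len(post_samples)
    mu = sum(post_samples) / M
    sd = math.sqrt(sum((x - mu) ** 2 for x in post_samples) / M)
    mass = math.erf(2.5 / math.sqrt(2))        # Gaussian mass kept by truncation
    inv_z = 0.0
    for x in post_samples:
        if abs(x - mu) <= 2.5 * sd:            # phi = 0 outside the truncation
            log_phi = (-0.5 * ((x - mu) / sd) ** 2
                       - math.log(sd * math.sqrt(2 * math.pi) * mass))
            inv_z += math.exp(log_phi - log_post_kernel(x))
    return M / inv_z

# Toy posterior kernel L*pi with known evidence: L = exp(-x^2/2),
# pi = Uniform(-5, 5), so Z = sqrt(2*pi)/10 up to negligible tail mass.
rng = random.Random(11)
samples = [rng.gauss(0.0, 1.0) for _ in range(20000)]
log_kernel = lambda x: -0.5 * x * x - math.log(10.0)
Z = is_hme_evidence(samples, log_kernel)
# Z should be close to sqrt(2*pi)/10 ~ 0.2507
```

No new likelihood evaluations are needed beyond those already stored in the MCMC chain, which is what makes the estimator cheap enough to run for all 64 models.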
Our results demonstrate that the IS-HME evidence P(h|M) of the ground truth model dominates over other models (≥95%) when using Bayesian inference on the larger data set (N = 1000 cells sampled per time point); see Fig. 5. The BIC approximation also selected the ground-truth model (although incorrect models exhibited significant probabilities, e.g., >5%), whereas the AIC failed to select the correct model. When the sample size dropped to N = 100 cells per time point, even IS-HME evidence could not reliably select the ground-truth model with this underpowered data set.

VI. DISCUSSION
Piecewise-deterministic Markov processes (PDMPs) have become a useful, coarse-grained description of stochastic gene dynamics, where the underlying discrete variable s(t) captures the stochastic dynamics of gene states and the continuous variable λ(t) captures the first moment of downstream gene products. [44][45][46][47][48][49][50] The key insight of our manuscript was proving that the time-dependent mRNA distribution of any underlying s(t) is asymptotically a Poisson distribution with a rate λ(t) and that the time-dependent joint probability distributions of discrete-state models are dynamic Poisson mixtures, whose mixing kernels are characterized by a PDMP. This significantly expands upon a related framework, which only considered the stationary distribution of discrete-state models. 51 More generally, our analysis helps bridge a gap between mechanistic discrete-state models and statistical models used in single cell analysis. For example, Wang et al. recently proposed a statistical model of gene expression which postulated that mRNA distributions are Poisson mixtures, 52 and our work justifies this assumption.
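The Poisson-mixture structure has a simple observable consequence via the law of total variance: the mixture is always super-Poissonian unless the mixing kernel is degenerate. A small numerical check with an illustrative discrete mixing kernel:

```python
# For a Poisson mixture P(m) = sum_k w_k * Poisson(m; lambda_k), the law of
# total variance gives Var[m] = E[lambda] + Var[lambda] >= E[m]: the PDMP
# switching adds super-Poissonian noise on top of the Poisson floor.
lams = [1.0, 4.0, 9.0]          # illustrative mixing atoms of rho(lambda)
w = [0.25, 0.5, 0.25]           # illustrative mixing weights

e_lam = sum(wi * li for wi, li in zip(w, lams))
var_lam = sum(wi * (li - e_lam) ** 2 for wi, li in zip(w, lams))

mean_m = e_lam                   # E[m] = E[lambda]
var_m = e_lam + var_lam          # Var[m] = E[lambda] + Var[lambda]
fano = var_m / mean_m            # Fano factor > 1 for a nontrivial mixture
```

A measured Fano factor above 1 is thus consistent with, though not proof of, an underlying switching process of this kind.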
We used our insight to develop a hybrid method that calculates the time-dependent joint distribution more efficiently than standard numerical methods that forward-integrate the chemical master equation (CME). The efficiency arises because our method analytically solves the mRNA distribution and rapidly samples many paths s(t) of discrete-switching events using a kinetic Monte Carlo algorithm. We benchmarked the hybrid method and showed that it is O(10^3) more efficient than previous methods that directly integrate the CME. Furthermore, the hybrid method runs more efficiently in parallel on a multicore processor than it does on a single processor. The stiff CME integrators ran more slowly in parallel, and this sublinear scaling persisted for different integrators. We suspect that the slowdown arises from the competing memory demands of stiff CME integrators running on a multicore processor. While there is room to improve stiff integration and parallelization, we note that current approaches are fundamentally limited compared to the hybrid method because they must integrate the CME over a large number of mRNA states, e.g., 0-1000 mRNAs per cell.
We incorporated the hybrid algorithm into BayFISH and were able, for the first time, to use a full Bayesian framework for model selection and uncertainty quantification of parameters from single-cell smFISH data. We adopted the Bayesian framework for model selection because it naturally quantifies "Occam's factor"37,40 and, thus, avoids overfitting. For example, the top models ranked by Bayesian evidence are not the most complex models with the largest maximum likelihood: for an overly complex model, the prior parameter density P(⃗θ|M) entering (5) is small due to normalization. Thus, Bayesian evidence will favor a model that is complex enough to have a large likelihood but not so complex that it decreases the prior parameter density. We note that when the data sample size is large enough that the posterior distributions P(⃗θ|h, M) can be approximated by a multivariate normal distribution, the logarithm of the evidence converges asymptotically to the Schwarz index (commonly known as the Bayesian Information Criterion, BIC).38,40 A similar asymptotic criterion is the Akaike Information Criterion (AIC),53 which aims to minimize the information loss measured by the Kullback-Leibler divergence of the theoretically predicted joint probability distribution from the sampled distribution. Our results show that the posterior distribution estimated from modest data sets can deviate from multivariate normal distributions (see Figs. S5 and S6 of the supplementary material), which suggests that AIC and BIC can underperform in model selection relative to Bayesian evidence. Here, we benchmarked the ability of Bayesian evidence and the BIC and AIC metrics to select the correct model from synthetic data sets generated by a ground-truth model and parameters. Figure 5 shows that while the BIC (but not the AIC) ranked models similarly to Bayesian evidence for the larger data set (N = 1000 cells per time point), BIC requires an even larger sample size to confidently converge to the correct model.
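The two asymptotic criteria are simple functions of the maximized likelihood; a sketch of the penalty arithmetic (the sample size is illustrative):

```python
import math

def bic(log_l_max, k, n):
    """Schwarz index: BIC = k*ln(n) - 2*ln(L_max), the large-sample
    asymptotic of -2*log(evidence) for a model with k free parameters."""
    return k * math.log(n) - 2.0 * log_l_max

def aic(log_l_max, k):
    """Akaike criterion: AIC = 2*k - 2*ln(L_max); its penalty does not
    grow with the sample size n."""
    return 2.0 * k - 2.0 * log_l_max

# With n = 4000 cells (1000 per time point at 4 times, as in the larger
# synthetic data set), one extra inducible parameter must raise ln(L_max)
# by ln(4000)/2 ~ 4.15 to improve BIC, but only by 1.0 to improve AIC,
# which is one reason AIC tends to select overly complex models here.
delta_bic_threshold = math.log(4000) / 2.0
delta_aic_threshold = 1.0
```

Both thresholds are crude compared to the evidence (5), which integrates the likelihood over the full prior rather than evaluating it at a single point.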
This is an important issue because most biology labs are ill-equipped to generate and analyze large smFISH data sets, and their sample sizes are typically N = 100-1000 cells per time point. Our work demonstrates why Bayesian inference should be used for modestly sampled data sets. We show that N = 100 cells per time point is sufficient for parameter inference and uncertainty quantification if one has high confidence in the underlying model; see Fig. 4. However, if the goal of the smFISH experiments is model selection, then these smaller data sets are underpowered and the experimentalist needs to increase data sampling by at least 10-fold; see Figs. 5(a) and 5(c). Here, we only considered one round of experiments followed by Bayesian inference, but multiple cycles of data collection and analysis are becoming the norm. Our framework quantifies certainty in both models and parameters using Bayesian evidence and posterior distributions. Future work can complete the data collection and analysis cycle by using the evidence and posterior distributions to rationally dictate the next round of experiments, 54 i.e., different sampling times and densities, which are most informative for constraining models and/or parameters.
In this article, we adopted a Markov chain Monte Carlo algorithm with Metropolis-Hastings sampling (i.e., MCMC-MH) to compute the posterior distributions of the model parameters.15,16,36 However, there is room to further improve the speed of Bayesian inference. First, Hamiltonian Monte Carlo (HMC) algorithms are more efficient at sampling posterior distributions in high-dimensional parameter spaces because they use local sensitivity, i.e., the partial derivatives of the likelihoods with respect to the model parameters.55-57 Second, although PM-PDMSR is efficient at generating sample paths in the space (s, λ), evaluating the convolution to calculate the joint distribution (3) is the rate-limiting step in the likelihood calculation. Thus, transforming the experimental data h into the mixing kernel of the Poisson mixtures ρs(λ) would accelerate Bayesian inference. Third, one could use low-order moments of PM-PDMSR and the experimental data to formulate sufficient statistics for likelihood-free approximate Bayesian computation,35 thus replacing the explicit calculation of the likelihood L. Finally, our proposed PM-PDMSR provides an orders-of-magnitude more efficient algorithm to evaluate the likelihoods compared to CME calculations. In this article, we demonstrated that such an efficiency gain makes expensive Bayesian calculations feasible. If one has a large enough data set that the posterior distribution is in the Gaussian limit (e.g., N = 10^4 cells per time point), then model selection could be achieved by the asymptotic BIC, which only needs the maximum likelihood. In this regime, however, PM-PDMSR is still O(10^3) more efficient and scalable at estimating the maximum likelihood of complex models when compared to CME methods.

SUPPLEMENTARY MATERIAL
See the supplementary material for Figs. S1-S10.

Figure S1-Accuracy of PM-PDMSR: The likelihood calculated by using PM-PDMSR (L_PM-PDMSR) is plotted against the likelihood calculated using CME integration (L_CME). The summary error ⟨ε⟩ is computed according to Eq. (A23) in the manuscript.

Figure S2-Scaling analysis of CME and PM-PDMSR runtimes: We parallelized both CME and PM-PDMSR using Python's multiprocessing module. Simultaneously, {1, 2, 4, 8, 16, 32} cores are utilized to process the same batch (1024) of synthetic data for each of the 2-, 3-, and 4-state models described in Appendix 2a. We report the average time per thread to process each data set (panels A and C) and the total time of all threads to process the entire batch (panels B and D). PM-PDMSR, described in Fig. 3, is parallel (total time ∼ 1/number of cores utilized simultaneously), whereas the CME suffers from a stiff scaling such that multiprocessing on a computer, even one equipped with multiple CPUs, is not significantly more efficient than running a single thread. The machine we used for this benchmarking is equipped with 32 cores and can process 64 threads simultaneously (two Intel Xeon CPU E5-2698 v3 at 2.30 GHz).

Figure S3-Joint probability distribution of the best-fit parameters: The joint distribution of the best-fit parameters to synthetic data sets with N = 100 (panel A) and 1000 (panel B) cells per time point, M = 00110000.

Figure S4-Parameter inference and uncertainty quantification using the Bayesian posterior distribution: We benchmarked the hybrid method by running Bayesian inference on a synthetic data set sampled (N cells at 4 different time points) from a known model M = 11000000 and "ground-truth" (GT) parameter set. Posterior distributions (panels A and B) and joint distributions of best-fit parameters (panels C and D) for N = 100 and 1000 cells per time point, respectively.

APPENDIX

We illustrate our central theoretical results using a single-allele model.
However, the results generalize to multiple-allele models because the states of a multiple-allele model can be relabeled as internal states of a single-allele model.

a. Central theoretical result I
Given a trajectory of the genetic state s(t), the total number of mRNAs, N_mRNA(t), is the sum of two random variables, N_mRNA^initial(t) and N_mRNA^new(t). N_mRNA^initial(t) is the number of initial mRNAs that remain at time t. Its probability distribution is a binomial mixture weighted by the initial mRNA distribution P_{m,0} := P(N_mRNA^initial(t = 0) = m),

P(N_mRNA^initial(t) = n) = Σ_{m=0}^∞ P_{m,0} (m choose n) e^{−nδt} (1 − e^{−δt})^{m−n} Θ(m − n).    (A1)

Here, Θ(⋅) is the Heaviside step function. N_mRNA^new(t) is the number of new mRNAs that were synthesized after t > 0 and still remain at time t. Its probability distribution is a Poisson distribution with rate λ(t),

P(N_mRNA^new(t) = n) = λ(t)^n e^{−λ(t)}/n! =: q_n(λ(t)),    (A2)

where λ(t) satisfies the equation dλ(t)/dt = β_{s(t)} − δλ(t) with an initial condition λ(0) = 0.
Proof. We denote the probability that the total number of mRNAs N_mRNA at time t equals m by Pm(t). For a given trajectory of the genetic state s(t), the temporal evolution satisfies the chemical master equation for all m ∈ Z≥0,

d/dt Pm(t) = β_{s(t)} [P_{m−1}(t) − Pm(t)] + δ[(m + 1) P_{m+1}(t) − m Pm(t)],    (A3)

with a boundary condition P_{−1} = 0. We prove central theoretical result I by using the probability generating function defined by

G(z, t) := Σ_{m=0}^∞ Pm(t) z^m,    (A4)

where Pm(t) = ∂z^m G(z, t)|_{z=0}/m!. We first solve for G(z, t) over an interval of time when s(t) is constant. We will then extend our analysis to piecewise intervals of time with different values of the constant s, similar to an s(t) generated by a genetic state model. To begin, we apply the operator ∂t to G(z, t) and use (A3) to obtain the partial differential equation (PDE)

∂t G(z, t) = δ(1 − z) ∂z G(z, t) + βs(z − 1) G(z, t).    (A5)

This linear PDE can be solved using the method of characteristics,58 and the general solution is

G(z, t) = [Σ_{m=0}^∞ P_{m,0} (1 + (z − 1)e^{−δt})^m] exp[(βs/δ)(z − 1)(1 − e^{−δt})],    (A6)

where P_{m,0} := Pm(t = 0) is the initial mRNA distribution of the system. For reasons that will become apparent below, we label the initial-distribution-dependent part by G_init(z, t) and the remaining factor by G_new(z, t),

G_init(z, t) := Σ_{m=0}^∞ P_{m,0} [1 + (z − 1)e^{−δt}]^m,    (A7)
G_new(z, t) := exp[(βs/δ)(z − 1)(1 − e^{−δt})].    (A8)

The above solution applies to a constant s(t). We now consider a piecewise-constant trajectory for any genetic state s(t). To specify the discrete state and the switching events, we label s(t) by the ordered pairs (ti, si) for i = 0, 1, . . ., N before an observation time t. The gene starts in state s0 at t0 := 0, switches to s1 at t1, etc., until the final switching event to sN at time tN. Our aim is to compute the generating function G(z, t) after N switching events (t ≥ tN).
The solution G(z, t | t ≤ t1) is identical to (A6) with s = s0 before the first switching event. At t = t1, the generating function is

G(z, t1) = G_init(z, t1) exp[(βs0/δ)(z − 1)(1 − e^{−δt1})].    (A9)

Note that after t1 and before t2, the genetic state changes to s1 and only the transcription rate in (A3) changes, from βs0 to βs1. The initial condition of the generating function for this period (t1 ≤ t ≤ t2) is precisely G(z, t1) in the above equation. Matching the "initial condition" G(z, t1), we arrive at

G(z, t | t1 ≤ t ≤ t2) = G_init(z, t) exp{(z − 1)[(βs0/δ)(e^{−δ(t−t1)} − e^{−δt}) + (βs1/δ)(1 − e^{−δ(t−t1)})]}.    (A10)

We iterate this procedure for each piecewise "episode" until t = tN, and for t ≥ tN,

G(z, t | t ≥ tN) = G_init(z, t) exp{(z − 1)[Σ_{i=0}^{N} (βsi/δ)(e^{−δ(t−t_{i+1})} − e^{−δ(t−ti)})]},  t_{N+1} := t.    (A11)

The total solution G(z, t | t ≥ tN) is factorized into two terms, G_init(z, t) and G_new(z, t), for any N and t. The probability generating function of the sum of two independent random variables is the product of the generating functions of the random variables. This hints that we can define two random variables, X1 and X2, which have generating functions G_init(z, t) and G_new(z, t), respectively. Our next task is to identify the variables X1 and X2 and their probability distributions. For X1, we expand G_init(z, t) to arrive at

G_init(z, t) = Σ_{m=0}^∞ P_{m,0} [1 + (z − 1)e^{−δt}]^m.    (A12)

Recall that the generating function of a Binomial(m, p) distribution is

E[z^X] = [1 + (z − 1)p]^m.    (A13)

The probability distribution of X1 is therefore identified as a binomial mixture with a temporally decaying parameter p = exp(−δt) and a mixing kernel defined by the initial distribution P_{m,0}. The physical meaning of X1(t) is the number of initial mRNA molecules that remain at time t, i.e., N_mRNA^initial(t). These mRNA molecules can only degrade, with decay rate δ. Each mRNA decays independently and, at time t, there is a probability exp(−δt) that a specific mRNA has not degraded. Importantly, when t ≫ 1/δ, this distribution concentrates at m = 0 (see Corollary I).
The total mRNA is N(t) = X1(t) + X2(t) = N initial mRNA(t) + X2(t), so X2(t) is identified as the number of new mRNA molecules that were synthesized after t = 0 and have not degraded by time t. We refer to this variable as N new mRNA(t). The exponent of G new(z, t) in (A11) contains λ(t), the piecewise solution of the following ODE for a given genetic trajectory s(t):

dλ(t)/dt = βs(t) − δλ(t), λ(0) = 0, (A14)

so that P(N new mRNA(t) = m) = qm(λ(t)), where qm(λ(t)) is the probability density function of a Poisson distribution with rate λ(t), as in (A2). ◽

Corollary I. The transient time scale for the initial distribution is O(1/δ). When t ≫ O(1/δ), the mRNA distribution converges to a Poisson with a dynamic rate parameter λ(t).
Proof. Physically, the time scale of degradation of each initially populated mRNA is 1/δ, so on time scales much longer than this, the initial population will have fully degraded. Mathematically, the probability that the initial mRNA molecules have not fully decayed is

P(N initial mRNA(t) > 0) = 1 − Σ_{m≥0} P_{m,0}(1 − e^{−δt})^m. (A16)

In the asymptotic limit t ≫ 1/δ, exp(−δt) ≪ 1, so

P(N initial mRNA(t) > 0) ≈ 1 − Σ_{m≥0} P_{m,0}(1 − m e^{−δt}) = ⟨N initial mRNA(0)⟩ e^{−δt}.

Here, ⟨N initial mRNA(0)⟩ is the first moment of the initial distribution. P(N initial mRNA(t) > 0) decays exponentially fast, and we can bound this probability to be smaller than ε when t > δ^{−1} log(⟨N initial mRNA(0)⟩/ε). ◽
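The ODE dλ/dt = βs(t) − δλ for the dynamic Poisson rate has a closed-form solution on every interval of constant genetic state, which is what makes the hybrid scheme exact. A minimal Python sketch (the helper names are ours, not from the paper's code):

```python
import math

def propagate_lambda(lam0, beta_s, delta, dt):
    """Exact update of dλ/dt = β_s − δλ over a window of length dt
    in which the genetic state (and hence β_s) is constant."""
    lam_inf = beta_s / delta                      # fixed point β_s/δ of the ODE
    return lam_inf + (lam0 - lam_inf) * math.exp(-delta * dt)

def lambda_of_t(switches, delta, t):
    """Piecewise solution λ(t) for a trajectory s(t) given as a list of
    (t_i, beta_{s_i}) switching events with t_0 = 0 and λ(0) = 0."""
    lam = 0.0
    for (t_i, beta_i), (t_next, _) in zip(switches, switches[1:] + [(t, None)]):
        t_end = min(t_next, t)
        if t_end > t_i:
            lam = propagate_lambda(lam, beta_i, delta, t_end - t_i)
        if t_end >= t:
            break
    return lam
```

For a single constant state this reproduces λ(t) = (βs/δ)(1 − e^{−δt}), the exponent appearing in G new(z, t).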

b. Central theoretical result II
At long times t ≫ 1/δ, the mRNA distribution asymptotically converges to a Poisson mixture regardless of the initial mRNA distribution and the genetic switching trajectory s(t),

P(N mRNA(t) = m, s(t) = i) = ∫0^∞ qm(λ) ρi(λ, t) dλ, (A18)

where the joint probability density ρi(λ, t) satisfies the forward Kolmogorov equation

∂t ρi(λ, t) = −∂λ[(βi − δλ)ρi(λ, t)] + Σ_{j≠i}[κji ρj(λ, t) − κij ρi(λ, t)]. (A19)

The initial condition ρi(λ, t = 0) is defined by

ρi(λ, t = 0) = P(s(0) = i) δ(λ),

where δ(λ) is the Dirac delta distribution at λ = 0.
Proof. The solution of (A14) subject to random switching events of s(t) is a random process. Formally, λ(t) and the discrete switching states s(t) jointly comprise a piecewise deterministic Markov process (PDMP).[59][60][61] The forward Kolmogorov equation describing the temporal evolution of the joint probability distribution is (A19).47,49 Therefore,

P(N mRNA(t) = m, s(t) = i) = ∫0^∞ P(N mRNA(t) = m | λ(t) = ℓ, s(t) = i) ρi(ℓ, t) dℓ.

We then use central theoretical result I and Corollary I to show that P(N mRNA(t) = m | λ(t) = ℓ, s(t) = i) = ℓ^m e^{−ℓ}/m! asymptotically when t ≫ 1/δ, which completes the proof. ◽

c. Efficient numerical method for sampling ρi(λ, t)

Because λ is continuous, solving the forward Kolmogorov equation (A19) is as complex as solving the full CME; both are infinite-dimensional systems. Instead, we used an efficient kinetic Monte Carlo simulation44,49 to generate a large number of sample paths and estimate the asymptotic joint distribution P(N mRNA(t) = m, s(t) = i) when t ≫ 1/δ using (A18). The pseudocode of this algorithm is shown in Algorithm 1.

Algorithm 1.
Require: Initial state λ(t = 0) = 0 and s(t = 0) = s0; kinetic rates κij (switching rate from discrete state i → j), βk (transcription rates), and δ (degradation rate); N discrete observation times T := {tℓ}, ℓ = 1, . . ., N.
Ensure: An exact sample path of the random process (λ(t), s(t)) at the N discrete times T.

To compute the time-dependent joint distribution of genetic states and mRNAs, we generated Ns sample paths {λk(t), sk(t)}, k = 1, . . ., Ns, with Algorithm 1. We then estimated ρi(λ, t) by

ρi(λ, t) ≈ (1/Ns) Σ_{k=1}^{Ns} δ_{i,sk(t)} δ(λ − λk(t)),

where δi,j is the Kronecker delta function, equal to 1 if i = j and 0 otherwise, and inserted this estimate into (A18). We determined that Ns = 10^5 is a sufficient number of sample paths to reproduce the joint distribution obtained by forward-integrating the CME. We refer to our method as the Poisson Mixture with a Piecewise Deterministic Markov Switching Rate (PM-PDMSR).
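A minimal sketch of Algorithm 1 in Python (our own illustrative implementation, with hypothetical function and argument names): between switching events, λ(t) is advanced with the closed-form solution of (A14), and waiting times are drawn exactly from the exponential distribution, so each path is an exact sample of the PDMP.

```python
import math
import random

def pm_pdmsr_path(kappa, beta, delta, s0, obs_times, rng=random):
    """One exact sample path of the PDMP (λ(t), s(t)), recorded at obs_times.
    kappa[i][j] is the switching rate i -> j (diagonal entries 0); beta[i] is
    the transcription rate of state i; dλ/dt = beta[s] − δλ between switches."""
    t, lam, s = 0.0, 0.0, s0
    samples = []
    for t_obs in sorted(obs_times):
        while True:
            total = sum(kappa[s])                 # exit rate of state s
            # exponential waiting time; memorylessness lets us redraw past t_obs
            t_next = t + (rng.expovariate(total) if total > 0 else math.inf)
            t_stop = min(t_next, t_obs)
            # exact λ update over [t, t_stop], where s is constant
            lam = beta[s] / delta + (lam - beta[s] / delta) * math.exp(-delta * (t_stop - t))
            t = t_stop
            if t_next > t_obs:
                break
            # choose the destination state with probability kappa[s][j] / total
            u, acc = rng.random() * total, 0.0
            for j, rate in enumerate(kappa[s]):
                acc += rate
                if u <= acc:
                    s = j
                    break
        samples.append((t_obs, lam, s))
    return samples
```

Applying the Poisson mixing of (A18) to the recorded pairs (λ, s), either by drawing m ∼ Poisson(λ) per path or by averaging qm(λ) over paths, then yields the joint distribution of mRNA counts and genetic states.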
The goal was to compute the joint distribution of genetic states and mRNAs before and after induction. As in many experimental systems,62,63 the joint distribution is at stationarity before induction. Upon induction, we assume that some model parameters change, which drives the time evolution of the joint probability distribution P(N mRNA(t) = m, s(t) = i) toward a new stationary state. We label the kinetic rates (κij and βk) before and after induction by U (Unstimulated) and S (Stimulated). To use PM-PDMSR to estimate the stationary distribution before induction, we first solved for the marginal stationary distribution of the genetic state, p*i for i = 1, . . ., S, of an S-state model. We initiated Ns p*i sample paths in PM-PDMSR at λ = βi/δ and state s = i and ran them for t = 10/δ so that the Poisson mixture relaxes to stationarity. Upon induction at t = 10/δ, we changed the model parameters κij^U → κij^S and βk^U → βk^S for i, j, k ∈ {1, 2, . . ., S} and continued simulating the temporal evolution of the joint probability distribution after induction using PM-PDMSR. This is valid because our previous proofs and arguments for (A18) apply even when the kinetic rate constants change upon induction.
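The marginal stationary distribution p*i of the discrete switching process, used here to initialize the pre-induction ensemble, can be obtained from the generator matrix of the switching rates. A small sketch (our own helper, not the paper's code):

```python
import numpy as np

def stationary_state_probs(kappa):
    """Stationary distribution p* of the discrete switching process. Build the
    generator Q (Q[i][j] = kappa[i][j] for i != j, rows summing to zero) and
    solve p* Q = 0 subject to the normalization Σ_i p*_i = 1."""
    kappa = np.asarray(kappa, dtype=float)
    Q = kappa - np.diag(kappa.sum(axis=1))
    # replace one (redundant) balance equation with the normalization constraint
    A = np.vstack([Q.T[:-1], np.ones(len(Q))])
    b = np.zeros(len(Q)); b[-1] = 1.0
    return np.linalg.solve(A, b)
```

For a 2-state switch with rates κ01 = 2 and κ10 = 1, this gives p* = (1/3, 2/3), i.e., detailed balance κ01 p*0 = κ10 p*1.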

Testing the speed and accuracy of the simulators
We benchmarked the efficiency of PM-PDMSR against the "gold-standard" simulator, i.e., forward integration of a truncated CME.22,23 Both simulators were embedded into BayFISH and evaluated on their ability to perform a single Monte Carlo step, i.e., to simulate the time-dependent joint distribution for a given model and parameter set and to calculate the likelihood of a synthetic data set generated by the same model and parameters. This single-step benchmarking was performed for 1024 diverse parameter sets within each of three classes of two-allele, discrete-state models of increasing complexity (2-state, 3-state, and 4-state induction models).
a. Generating diverse models, parameters, and synthetic data sets Each model had a randomly chosen subset of parameters that took one set of values (κij^U, βk^U) before induction, from t = 0 to t = 20, and different values (κij^S, βk^S) after induction, from t = 20 to t = 22. The genetic switching rates κij ranged between 10^−2 and 10^2, and the transcription rates βk ranged between 0 and 200.
For each instance of a model, we randomly generated the switching rate constants in logarithmic space (log10 κij ∼ Unif(−2, 2) if |i − j| = 1) and the transcription rate constants in linear space (βk ∼ Unif(0, 200)). Note that not all parameters were random or changed upon induction: we fixed β0 = 0 and the mRNA degradation rate δ = 1, and we constrained βi ≤ βj if i < j. For the purpose of benchmarking algorithm speed and efficiency, we considered a complex induction model M for each discrete model class: M = 11010 (2-state), M = 11110110 (3-state), and M = 11111101110 (4-state).
The synthetic data for each model and parameter set were generated by running 1000 independent trajectories of a standard continuous-time Markov chain simulation. We collected the statistics of the trajectories at four discrete times, tℓ = 20, 20.5, 21, and 22 (N = 1000 cells per time point). The measured allele activity was marginalized: an allele is scored as active (TS = 1) when its internal state is s > 0. The synthetic data set therefore consists of histograms at the discrete times tℓ, h(m, TS, tℓ): the number of trajectories with m mRNAs and TS active transcription sites (which can be 0, 1, or 2 for a two-allele system) at time tℓ. For each model class (2-, 3-, and 4-state models), we repeated this process 1024 times to test diverse parameter combinations.
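The marginalization from per-allele internal states to the measured activity TS can be sketched as follows (a minimal illustration with a hypothetical per-cell record layout, not the paper's code):

```python
from collections import Counter

def histogram_snapshot(cells):
    """Build h(m, TS) at one observation time from per-cell records
    (m, s_allele1, s_allele2): TS counts the alleles whose internal
    state s > 0, so TS ∈ {0, 1, 2} for a two-allele system."""
    h = Counter()
    for m, s1, s2 in cells:
        ts = int(s1 > 0) + int(s2 > 0)
        h[(m, ts)] += 1
    return h
```

Repeating this at each observation time tℓ yields the synthetic histograms h(m, TS, tℓ) used in the likelihood.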

b. CME simulators
Given an induction model and associated parameters, we forward propagated the CME using the same parameter values used to create the synthetic data sets in Sec. IV. We truncated the number of mRNA at 500 (i.e., no transcription event occurs once the system reaches N mRNA = 500) and integrated with an absolute error tolerance of 10^−5. The truncation number M = 500 was motivated by data sets in animals showing that mRNA populations can be as large as O(500). [24][25][26][27] We tested different software platforms, including MATLAB, Python (SciPy), and the research software ACME,23 to forward integrate the same stiff CME. Python's stiff integrator (using the backward differentiation formula, BDF) outperformed the other integrators and software platforms. Thus, Python (with SciPy) was chosen as the platform for direct integration of the CME in the following analysis.
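To illustrate this setup, a truncated CME for a single-allele, 2-state gene can be forward-integrated with SciPy's BDF method as follows. This is a simplified sketch with illustrative parameter values, not the benchmark configuration (here the truncation is M = 60 for speed):

```python
import numpy as np
from scipy.integrate import solve_ivp

def cme_rhs(t, p, kappa01, kappa10, beta, delta, M):
    """Right-hand side of the truncated CME for a 1-allele, 2-state gene:
    p is the flattened distribution over (state s in {0, 1}, mRNA m in 0..M);
    transcription at rate beta occurs only in state 1, with no synthesis at m = M."""
    p = p.reshape(2, M + 1)
    dp = np.zeros_like(p)
    m = np.arange(M + 1)
    # degradation: m -> m-1 at rate delta * m
    dp -= delta * m * p
    dp[:, :-1] += delta * m[1:] * p[:, 1:]
    # transcription in the active state, truncated at m = M
    dp[1, :-1] -= beta * p[1, :-1]
    dp[1, 1:] += beta * p[1, :-1]
    # genetic switching 0 <-> 1
    dp[0] += -kappa01 * p[0] + kappa10 * p[1]
    dp[1] += kappa01 * p[0] - kappa10 * p[1]
    return dp.ravel()

M = 60
p0 = np.zeros(2 * (M + 1)); p0[0] = 1.0          # start: inactive state, zero mRNA
sol = solve_ivp(cme_rhs, (0.0, 10.0), p0, method="BDF", atol=1e-8, rtol=1e-6,
                args=(1.0, 1.0, 20.0, 1.0, M))
p_final = sol.y[:, -1].reshape(2, M + 1)
```

At t = 10 ≫ 1/δ the mean mRNA approaches the stationary value β⟨s⟩/δ = 20 × 0.5 = 10, which provides a quick consistency check on the truncation and tolerances.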

c. Comparison of the PM-PDMSR and CME simulators
We incorporated the PM-PDMSR and CME simulators into modified BayFISH software to evaluate their speed and accuracy for one Monte Carlo step. Both algorithms were implemented in C++ and compiled with Intel's icc compiler. All PM-PDMSR and CME simulations were carried out on the same machine with an Intel Xeon CPU E5-2695 v3 at 2.30 GHz. We computed the joint distributions for the models and parameter sets used to create the synthetic data sets. These joint distributions and the corresponding synthetic data h(m, TS, tℓ) were then used to compute the likelihood L of the generated data using (1). The execution time of a Monte Carlo step for each simulator, for each model class and generated parameter set, was recorded and is presented in Fig. 3. We also compared the accuracy of the calculated likelihoods of each synthetic data set by computing the average error of the PM-PDMSR likelihood relative to the more accurate CME likelihood. The relative accuracy of CME vs PM-PDMSR is presented in the supplementary material.
To test the parallelization of each simulator, we simultaneously ran 32 simulations on a 32-core machine (two Intel Xeon CPUs E5-2698 v3 at 2.30 GHz) and recorded the execution time; see Fig. 3. PM-PDMSR on 32 parallel threads takes the same amount of time as a single thread. By contrast, the CME on 32 parallel threads takes 32 times longer than a single thread; see Fig. S2 of the supplementary material. This suboptimal scaling held true on the multiple machines we tested, and our parallel runs used Python's multiprocessing functionality (Pool). We suspect that the slowdown of the CME is due to the high memory demand of the BDF integrator. Results of a more detailed scaling analysis with different numbers of parallel threads are presented in Fig. S2 of the supplementary material.

b. Bayesian analysis
The model class we considered for Bayesian inference is the set of two-allele, 3-state models with β0^U = β0^S = 0. The remaining parameters are free, but depending on the model structure, some of the perturbed (S) parameters may be constrained to their unperturbed (U) values. Combinatorially, there are in total 2^6 = 64 models, as there are six biophysical parameters θ⃗ := (κ01, κ12, κ21, κ10, β1, β2). We adopted a plain Markov chain Monte Carlo algorithm to sample the posterior distribution P(θ⃗|h, M), using the Metropolis-Hastings sampler. [64][65][66] Specifically, we performed random jumps in the logarithmic space of the parameters (log κij and log βk). The jump kernel was chosen to be uniform, Unif(−D, D), where the metaparameter D (diffusivity) globally regulates the width of the isotropic diffusion kernel. For each model structure, we adjusted D such that the acceptance rate of the Metropolis-Hastings sampler was between 0.2 and 0.3.67 We assumed a prior that is uniform in the logarithmic space log θi.
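This sampler can be sketched in a few lines (our own simplified code; `log_post` stands for the log posterior, evaluated in practice via the simulator and the likelihood (1)):

```python
import math
import random

def mh_chain(log_post, theta0, D, n_steps, rng=random):
    """Metropolis-Hastings with isotropic Unif(−D, D) jumps in log10-parameter
    space; the symmetric kernel reduces the acceptance ratio to the posterior ratio."""
    x = [math.log10(v) for v in theta0]           # current point, log10 coordinates
    lp = log_post([10.0**v for v in x])
    chain, accepted = [], 0
    for _ in range(n_steps):
        prop = [v + rng.uniform(-D, D) for v in x]
        lp_prop = log_post([10.0**v for v in prop])
        # accept with probability min(1, exp(Δ log posterior))
        if rng.random() < math.exp(min(0.0, lp_prop - lp)):
            x, lp = prop, lp_prop
            accepted += 1
        chain.append(list(x))
    return chain, accepted / n_steps
```

The metaparameter D would then be tuned until the returned acceptance rate falls in the 0.2-0.3 window.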
Before running the production MCMC samplers, we randomized 400 initial guesses of the model parameters and evolved each chain for 5 × 10^4 MCMC iterations. Most of the chains converged to a unique parameter region, and the likelihood in this region was significantly higher than that of the few chains trapped in (presumably) local maxima. We then independently initiated 32 MCMC chains with different random seeds, starting from the parameter values that maximized the likelihood in the test runs. We collected a total chain length (summed over all 32 chains) of >10^7 samples to accurately approximate the posterior distribution P(θ⃗|h, M).
c. Computing the evidence P(h|M) from the posterior distributions P(θ⃗|h, M) As described in Ref. 43, the evidence P(h|M) is computed by the algebraic identity

1/P(h|M) = ∫ [φ(θ⃗′) / (P(h|θ⃗′, M) P(θ⃗′|M))] P(θ⃗′|h, M) dθ⃗′,

with an importance sampler φ(θ⃗′) satisfying the normalization condition

∫ φ(θ⃗′) dθ⃗′ = 1.

In this work, the posterior distributions were exclusively unimodal. Therefore, we chose the importance sampler to be proportional to an indicator function on an ellipsoid located in the high-posterior-density region. We first ranked the posterior chain by likelihood and selected the top 20% of parameter sets to construct the importance sampler. We performed a principal component analysis on the selected samples to compute the mean θ̄k, variance σk^2, and eigenvectors êk, k = 1, 2, . . ., 6, in the six-dimensional parameter space. We used the eigenvalues and eigenvectors to construct an ellipsoid centered at the mean, with axis lengths set by the eigenvalues along the eigenvectors,

E := {θ⃗ : Σ_{k=1}^{6} [R(θ⃗ − θ̄⃗)]k^2 / (α σk^2) ≤ 1},

where Ri,j := (êi)j is the linear transformation onto the eigenbasis. We tuned the metaparameter α such that precisely 20% of the points of the posterior chains lay inside the ellipsoid E. These samples were then used to compute the evidence. One must also specify the prior distribution P(θ⃗|M) to compute the evidence. For simplicity, we imposed a uniform prior in the logarithmic space of the parameters, bounded by (10^−16, 10^4).
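Under the stated choices (an indicator-function sampler on a PCA ellipsoid capturing the top 20% of posterior samples, with densities expressed in log10-parameter coordinates), the estimator can be sketched as follows. This is our own illustrative implementation with hypothetical names, not the paper's code:

```python
import numpy as np
from scipy.special import gammaln

def log_evidence(samples, log_like, log_prior):
    """Estimate log P(h|M) from posterior samples via the identity
    E_post[ φ(θ)/(L(θ)π(θ)) ] = 1/P(h|M), with φ the normalized indicator
    of a PCA ellipsoid. log_like and log_prior must be densities with
    respect to the log10-parameter coordinates."""
    X = np.log10(np.asarray(samples, float))      # log10-parameter coordinates
    ll = np.array([log_like(s) for s in samples])
    lp = np.array([log_prior(s) for s in samples])
    top = X[ll >= np.quantile(ll, 0.8)]           # top 20% by likelihood
    mean = top.mean(axis=0)
    evals, evecs = np.linalg.eigh(np.cov(top, rowvar=False))
    Y = (X - mean) @ evecs                        # coordinates in the eigenbasis
    r2 = (Y**2 / evals).sum(axis=1)
    alpha = np.quantile(r2, 0.2)                  # tune α: 20% of samples inside E
    inside = r2 <= alpha
    # volume of E: unit-ball volume times the product of semi-axes sqrt(α λ_k)
    d = X.shape[1]
    logV = d / 2 * np.log(np.pi) - gammaln(d / 2 + 1) + 0.5 * np.sum(np.log(alpha * evals))
    # 1/Z ≈ mean over ALL posterior samples of φ/(Lπ), with φ = 1_E / V(E)
    terms = -(ll[inside] + lp[inside]) - logV
    log_inv_Z = np.logaddexp.reduce(terms) - np.log(len(samples))
    return -log_inv_Z
```

Because the ellipsoid sits in the high-posterior-density region, the summands φ/(Lπ) vary little across the included samples, which keeps the variance of this estimator small.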
The evidence was then estimated from the Np samples {θ⃗k} of the posterior chains as

1/P(h|M) ≈ (1/Np) Σ_{k=1}^{Np} 1_{θ⃗k∈E} / [V(E) P(h|θ⃗k, M) P(θ⃗k|M)],

where 1_{θ⃗k∈E} is the characteristic function, equal to 1 if θ⃗k is in the ellipsoid E and 0 otherwise, and V(E) is the volume of the ellipsoid. In Fig. 5, we present the normalized probability among all 64 linear 3-state models we considered,

P̄(h|Mi) := P(h|Mi) / Σ_{j=1}^{64} P(h|Mj).
d. Estimating the evidence P(h|M) from the Bayesian information criterion and Akaike information criterion The Schwarz index is an asymptotic approximation of the Bayesian evidence P(h|M) when the sample size is large.68 Given a model Mi, the Bayesian Information Criterion (BIC) is defined from its Schwarz index as

BIC_i := mi ln N − 2 ln L_i^max,

where L_i^max and mi are the maximum likelihood and the number of free parameters of model Mi, respectively, and N is the sample size (the number of data points). Thus, to estimate the normalized probability P̄ using BIC, we use

P̄_BIC(h|Mi) := exp(−BIC_i/2) / Σ_j exp(−BIC_j/2).

The Akaike Information Criterion (AIC) is another commonly adopted information criterion. The motivation of AIC is to minimize the information loss, measured by the Kullback-Leibler (KL) divergence of the prediction from the data. It is derived to be53

AIC_i := 2mi − 2 ln L_i^max. (A32)

We remark that the negative logarithm of the likelihood function (1) converges to N × KL(h(ω)/N ∥ P(ω|Mi, θ⃗)) only when N ≫ 1, such that the multinomial coefficients in (1) can be expanded by the Stirling approximation. In most biological cases, the sample size is far from this regime, and the Kullback-Leibler divergence is a poor approximation to the likelihood function (1).
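For concreteness, the BIC-based normalized model probability can be computed as follows (a small sketch with our own function names):

```python
import numpy as np

def normalized_probs_bic(logL_max, n_params, N):
    """Normalized model probabilities from BIC = m*ln(N) − 2*ln(L_max):
    P̄_i ∝ exp(−BIC_i / 2), normalized over the model set."""
    logL_max = np.asarray(logL_max, float)
    n_params = np.asarray(n_params, float)
    bic = n_params * np.log(N) - 2.0 * logL_max
    w = np.exp(-(bic - bic.min()) / 2.0)       # subtract the min for stability
    return w / w.sum()
```

For two models with equal maximum likelihood but 1 vs 2 free parameters at N = 100, the extra parameter is penalized by a factor exp(−ln(100)/2) = 0.1, giving probabilities 10/11 and 1/11.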