Analytic modeling of neural tissue: II. Nonlinear membrane dynamics

Computational modeling of neuroactivity plays a central role in our effort to understand brain dynamics and in advancing neural engineering applications such as deep brain stimulation, neuroprosthetics, and magnetic resonance electrical impedance tomography. However, analytic solutions do not capture the fundamental nonlinear behavior of an action potential. What is needed is a method that is not constrained to only linearized models of neural tissue. Therefore, the objective of this study is to establish a robust, straightforward process for modeling neurodynamic phenomena that preserves their nonlinear features. To address this, we turn to decomposition methods from homotopy analysis, which have emerged in recent decades as powerful tools for solving nonlinear differential equations. We solve the nonlinear ordinary differential equations of three landmark models of neural conduction (the Ermentrout–Kopell, FitzHugh–Nagumo, and Hindmarsh–Rose models) using George Adomian's decomposition method. For each variable, we construct a power series solution equivalent to a generalized Taylor series expanded about a function. The first term of the decomposition series comes from the models' initial conditions. All subsequent terms are recursively determined from the first. We show rapid convergence, achieving a maximal error below 10⁻¹² with only eight terms. We extend the region of convergence with one-step analytic continuation so that our complete solutions are decomposition splines. We show that this process can yield solutions for single- and multi-variable models and can characterize a single action potential or complex bursting patterns. Finally, we show that the accuracy of this decomposition approach compares favorably to an established polynomial method, B-spline collocation.
The strength of this method, besides its stability and ease of computation, is that, unlike perturbation, we make no changes to the models’ equations; thus, our solutions are to the problems at hand, not simplified versions. This work validates decomposition as a viable technique for advanced neural engineering studies.


ARTICLE scitation.org/journal/adv

using MREIT to detect neural activity directly, exploring the conductivity changes that result from active membrane dynamics. It is therefore appropriate to focus our analytic power on models of the active membrane, which are highly nonlinear. Most biologic phenomena (indeed, most engineering phenomena) are characterized by nonlinear differential equations, which most analytic methods cannot solve directly; the equations must instead be simplified through, e.g., linearization or a dramatic restriction of the parameter space. That is, the solutions are to problems different from the original models, curtailing their applicability. Sometimes such changes are perfectly sensible because the nonlinear behavior is not the salient detail under scrutiny, as when Rall and Agmon-Snir used the cable equation to model neurons and consider impulse transmission along large distances6 or when, in the first study of this series, Schwartz et al. held the membrane as purely resistive to consider a snapshot of the electromagnetic field distribution throughout a bidomain.7 In this study, we will look at three increasingly complex models of membrane conduction. The first is the theta model by Ermentrout and Kopell.8,9 Using only one variable, this quadratic integrate-and-fire model describes bursting in Aplysia neurons. Next we consider the model by FitzHugh10 and Nagumo et al.,11 which shows the all-or-nothing phenomenon of an action potential once the transmembrane voltage has surpassed a certain threshold. The final model is by Hindmarsh and Rose12 and adds a third variable that allows for the description of oscillatory burst discharge. The nonlinear nature of these models is key to the frontier problems of neural engineering, e.g., MREIT; hence, our solution method must include it.
Over the last 30 or so years, a rich scholarship comprising the broad field of homotopy analysis has been developed13 with the goal of a unified method for solving all sorts of mathematic models of the natural world (linear, nonlinear, deterministic, and stochastic), and it remains largely unexplored in the neural engineering community. Broadly, these techniques are recursive, starting with the initial or boundary conditions and creating an analytic series solution. Specifically, in this work, we will use the decomposition method developed by Adomian14 but with modifications added later, which we will note. For convenience, we will simply refer to it as the Adomian decomposition method (ADM). The ADM creates a power series solution, called a decomposition series, which is equivalent to a Taylor series, by decomposing the unknown function into an infinite series and defining the first term by the initial condition. The nonlinear components are decomposed into their own infinite series of Adomian polynomials, which also start with the initial condition. Each term in the decomposition series depends on the one before it; hence, starting from the initial conditions, all terms are recursively calculated. The reviews by Adomian15 and Rach16 are both excellent primers on this method.
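To make the recursive structure concrete, here is a minimal Python sketch (ours, not the paper's Mathematica code) of the Adomian polynomials for the simplest nonlinearity, 𝒩(u) = u². For a power nonlinearity, the derivative definition of A_n reduces to the coefficients of a Cauchy product of the decomposition series with itself.

```python
# Adomian polynomials A_n for the polynomial nonlinearity N(u) = u^2.
# For a power nonlinearity, the definition
#   A_n = (1/n!) d^n/dlam^n N(sum_k u_k lam^k) |_{lam=0}
# reduces to the Cauchy-product coefficients. A minimal sketch, not the
# paper's Appendix A code.

def adomian_poly_u2(u, n):
    """A_n for N(u) = u^2, given component values u = [u_0, u_1, ...]."""
    return sum(u[i] * u[n - i] for i in range(n + 1))

u = [2.0, 3.0, 5.0]          # example component values u_0, u_1, u_2
A0 = adomian_poly_u2(u, 0)   # u_0^2
A1 = adomian_poly_u2(u, 1)   # 2 u_0 u_1
A2 = adomian_poly_u2(u, 2)   # 2 u_0 u_2 + u_1^2
print(A0, A1, A2)
```

Note how A_n depends only on u_0 through u_n, which is what makes the term-by-term recursion self-starting.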
The ADM has gained some traction in the realm of biomedical phenomena, having been applied to nonlinear models of cellular population growth,17 cellular systems and aging,18 and infectious diseases and immune response.19 Adomian20 and Adomian et al.21 discussed a limited time- and space-dependent version of the FitzHugh-Nagumo model, which had only one state variable, as well as the Hodgkin-Huxley model,21 to describe impulse propagation down an axon. These studies are largely theoretic, serving more as proposals, without worked concrete examples, error analysis, or analytic continuation, all of which we address here. Furthermore, we will only consider models described by ordinary differential equations, neglecting spatial dependence.
We will start with a brief description of the ADM in Sec. II, continue with detailed analyses of our three models in Sec. III, compare the Adomian spline solutions to cardinal basis spline solutions in Sec. IV, and end with remarks on future directions.

A. Operator notation
Consider an inhomogeneous nonlinear ordinary differential equation

ℱ{u(t)} = g(t),    (1)

whose operator ℱ = ℒ + ℛ + 𝒩 is the sum of linear and nonlinear terms, where ℒ = dⁿ/dtⁿ is the highest-order differential operator, which we assume to be easily invertible; ℛ is the remaining part(s) of the linear operator; 𝒩 is the nonlinear operator; and g is a given function. Hereafter, we will neglect the curly brackets around arguments with only one variable. We assume that ℒ⁻¹ℒu = u − Φ, where ℒ⁻¹ is the n-fold definite integral operator from 0 to t, whence we determine Φ by the initial value(s). Thus, if ℒ = d/dt, then ℒ⁻¹ = ∫₀ᵗ dt and Φ = u(0). Applying ℒ⁻¹ throughout Eq. (1) and solving for u yield

u = Φ + ℒ⁻¹g − ℒ⁻¹ℛu − ℒ⁻¹𝒩u.    (2)
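The action of ℒ⁻¹ is easy to see when functions are represented by their power-series coefficients. The following sketch (our own illustration, not code from the paper) shows that, for ℒ = d/dt, integrating the derivative from 0 to t recovers u − u(0):

```python
# Represent u(t) by power-series coefficients [c_0, c_1, ...], u = sum c_k t^k.
# L = d/dt, and L^{-1} is the definite integral from 0 to t,
# so L^{-1} L u = u - u(0) = u - Phi.

def L(c):        # differentiate: d/dt of sum c_k t^k
    return [k * c[k] for k in range(1, len(c))]

def Linv(c):     # integrate from 0 to t (lower limit 0, so no constant term)
    return [0.0] + [c[k] / (k + 1) for k in range(len(c))]

u = [1.0, 2.0, 3.0]    # u(t) = 1 + 2t + 3t^2, so Phi = u(0) = 1
back = Linv(L(u))      # expect u - u(0) = 0 + 2t + 3t^2
print(back)
```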

B. Decomposition series
The ADM assumes that we can decompose the solution into an infinite series,14

u = ∑_{n=0}^∞ u_n,    (3)

with the first term coming from the initial condition. The nonlinear terms 𝒩u are themselves decomposed into the Adomian polynomials,

𝒩u = ∑_{n=0}^∞ A_n,    (4)

which are defined as22

A_n = (1/n!) [dⁿ/dλⁿ 𝒩(∑_{k=0}^∞ u_k λᵏ)]_{λ=0}.    (5)

See Appendix A for an example of the Mathematica code for generating A_n. The components of u are determined from the recursion scheme,23

u_0 = Φ + ℒ⁻¹g,  u_{n+1} = −ℒ⁻¹ℛu_n − ℒ⁻¹A_n.    (6)

From Eq. (5), we see that A_0 comes from the initial condition u_0; A_1 from u_0 and u_1; A_2 from u_0, u_1, and u_2; and so on. Thus, all components in Eq. (3) are determined. The solution converges quickly,24 so in practice, the m-term partial sum will suffice, given here as

φ_m(t) = ∑_{n=0}^{m−1} u_n(t).    (7)

C. One-step analytic continuation

The decomposition series φ_m(t) is equivalent to a generalized Taylor series25 about the function u_0(t),26 whose radius of convergence r can be insufficient to characterize, e.g., rapidly varying neural dynamics.27 The solution28,29 to this shortcoming, built into the ADM itself, is a self-starting, simple-to-use, and accurate one-step method of analytic continuation of φ_m(t) in terms of its family of elemental functions φ_m^(k)(t): the values at the end of each element are the initial conditions for the following element.27 The resultant spline of N + 1 elements is conveniently expressed as30

u(t) ≈ ∑_{k=0}^N φ_m^(k)(t) Π_k(t),    (8)

where Π is the boxcar function.31 Appendix B shows our code for generating the spline elements. Neither the intervals' durations h_k = t_{k+1} − t_k nor the orders m of their partial sums need to be the same.28,29 Indeed, the spline can be optimized either by (1) fixing each m while varying the intervals' durations as h_k = λr^(k), where the dilation constant λ < 1 and the radii of convergence r^(k) are estimated from the series terms,32 or by (2) fixing h_k = h while varying m to be the smallest order required to bring the interval's maximum truncation error at or below a set tolerance ε_tol.
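The one-step continuation is easiest to see on a toy problem where everything is known in closed form. The sketch below (ours, not the paper's Appendix B code) solves u′ = u², u(0) = 1, whose exact solution u = 1/(1 − t) has a radius of convergence of 1 about t = 0, so a single series cannot reach t = 0.9 accurately; marching element by element and re-seeding each element with the previous endpoint can.

```python
# One-step analytic continuation of an ADM series: a toy sketch.
# Model: u' = u^2, u(0) = 1, exact solution u(t) = 1/(1 - t).
# On an element starting with value c, the ADM recursion
# u_{n+1} = L^{-1} A_n gives u_n(s) = c^{n+1} s^n, so the m-term partial
# sum is the geometric series c * sum_{n<m} (c s)^n.

def adm_element(c, h, m):
    """m-term ADM partial sum at the end (s = h) of one element."""
    return c * sum((c * h) ** n for n in range(m))

def adm_spline(t_end, h, m):
    c, t = 1.0, 0.0                    # initial condition u(0) = 1
    while t < t_end - 1e-12:
        c = adm_element(c, h, m)       # endpoint value seeds the next element
        t += h
    return c

u_approx = adm_spline(0.9, h=0.05, m=12)
u_exact = 1.0 / (1.0 - 0.9)            # = 10
print(u_approx, u_exact)
```

Each element's length stays well inside the local radius of convergence 1/c, which is exactly the h_k = λr^(k) strategy in miniature.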

A. Ermentrout-Kopell model
Let us begin with a single-variable model from Ermentrout and Kopell8,9 that can describe parabolic bursting behavior in Aplysia neurons. The theta model, as it is also known, is canonic for type I membrane dynamics, characterized by a wide range of firing frequencies,34 because all other type I models can be reduced thereto,35,36 and is given as

q dθ/dt = (1 − cos θ) + (1 + cos θ)η,    (10)

where θ is the voltage, the constant η represents the input current, and q is the membrane time constant, which we can set to unity without loss of generality. When η > 0, via quadrature, the analytic solution in closed form is found to be34

θ(t) = 2 arctan(√η tan(√η t)).    (11)

To approach the θ model with the ADM, let us rearrange Eq. (10), conveniently isolating the nonlinear component,

dθ/dt = (1 + η) + (η − 1) cos θ.    (12)

We see that our operators are ℒθ = dθ/dt, 𝒩θ = cos θ, g = 1 + η, and ℛ = 0, so Eq. (12) becomes

θ = θ(0) + ℒ⁻¹(1 + η) + (η − 1) ℒ⁻¹𝒩θ.

We write θ = ∑_{n=0}^∞ θ_n and 𝒩θ = ∑_{n=0}^∞ A_n(cos θ), and, after applying ℒ⁻¹ throughout, each θ_n is found from our recursion scheme of Eq. (6),

θ_0 = θ(0) + (1 + η)t,  θ_{n+1} = (η − 1) ℒ⁻¹A_n(cos θ).

Thus, our decomposed θ(t) is fully determined, completing our analysis. Because this model has a known analytic solution in closed form, we can easily verify our solution by hand. The first few A_n(cos θ) values are

A_0 = cos θ_0,  A_1 = −θ_1 sin θ_0,  A_2 = −θ_2 sin θ_0 − (θ_1²/2) cos θ_0,

and, integrating these term by term with θ(0) = 0, we can see emerging in the decomposition series the Taylor series representation of the a priori known solution θ(t) = 2 arctan(√η tan(√η t)), which is expressed37 in terms of the Bernoulli numbers B.38
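Because the closed-form solution is available, it is worth checking numerically that it does satisfy the theta model. The following sketch (our own sanity check, not from the paper) compares a central-difference derivative of the closed-form solution against the right-hand side of the model; η = 0.25 is an illustrative value, not one used in the paper.

```python
# Numeric sanity check that the closed-form solution
#   theta(t) = 2*arctan(sqrt(eta) * tan(sqrt(eta) * t)), theta(0) = 0,
# satisfies the theta model (time constant q set to unity):
#   d(theta)/dt = (1 - cos(theta)) + (1 + cos(theta)) * eta.
import math

eta = 0.25  # illustrative input current, eta > 0

def theta(t):
    s = math.sqrt(eta)
    return 2.0 * math.atan(s * math.tan(s * t))

t, h = 0.5, 1e-6
lhs = (theta(t + h) - theta(t - h)) / (2.0 * h)   # central difference
rhs = (1.0 - math.cos(theta(t))) + (1.0 + math.cos(theta(t))) * eta
print(abs(lhs - rhs))   # agreement to finite-difference accuracy
```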

B. FitzHugh-Nagumo model
First devised as an oscillator10 and then as an equivalent circuit,11 the FitzHugh-Nagumo model consists of two coupled nonlinear differential equations given as39

dV/dt = V − V³/3 − W + σ,    (19)

dW/dt = ϕ(V + α − βW),    (20)

where V is the transmembrane potential, W is the recovery variable, σ is the stimulating current, and ϕ, β, and α are positive constants. The operators of the V Eq. (19) are ℒV = dV/dt, ℛ{V, W} = −V + W, 𝒩V = V³/3, and g = σ; the W Eq. (20) is linear. When we apply ℒ⁻¹ to Eqs. (19) and (20), we respectively get

V = V(0) + σt + ℒ⁻¹(V − V³/3 − W),

W = W(0) + ϕαt + ℒ⁻¹ϕ(V − βW).

The recursion schemes are

V_0 = V(0) + σt,  V_{n+1} = ℒ⁻¹(V_n − A_n/3 − W_n),

and

W_0 = W(0) + ϕαt,  W_{n+1} = ϕℒ⁻¹(V_n − βW_n),

where A_n = A_n(V³) are the Adomian polynomials of the cubic nonlinearity. However, except for a few highly restricted cases, e.g., Demina and Kudryashov's meromorphic formulation,40 we have no a priori solution, so we must verify our results with the error e = ℱ{φ_m(t)} − g(t), which comes from Eq. (1). If we write our m-term partial sums as v_m(t) = ∑_{n=0}^{m−1} V_n(t) and w_m(t) = ∑_{n=0}^{m−1} W_n(t), then the infinity norms of the errors ∥e∥∞ from the V Eq. (19) and the W Eq. (20) (denoted by subscript) are, respectively, given as

∥e_v∥∞ = sup_t |dv_m/dt − v_m + v_m³/3 + w_m − σ|,

∥e_w∥∞ = sup_t |dw_m/dt − ϕ(v_m + α − βw_m)|.

Figure 1 shows ∥e_v∥∞ for the primitive element k = 0, whose r^(0) = 1.8, for λ = 0.05, 0.1, and 0.2, and, as expected, our solution converges more quickly as λ is decreased. Note that, by inspection, we can see that the error decreases linearly; since this is a logarithmic scale, the rates of convergence are exponential. The ∥e_w∥∞ (not shown) looks the same as ∥e_v∥∞ for all intervals, which is expected since each interval length h_k corresponds to the local r^(k).
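The residual-error check is straightforward to implement when each series term is stored as a list of power-series coefficients. The sketch below is our own illustration: it uses the common form of the FitzHugh-Nagumo system with illustrative constants (not the values of Table I) and evaluates the residual of the V equation for an 8-term decomposition series.

```python
# ADM for a FitzHugh-Nagumo system, verified through the residual
# e = F{phi_m} - g. Illustrative parameters, not those of Table I; we assume
# the common form dV/dt = V - V^3/3 - W + sigma, dW/dt = phi*(V + alpha - beta*W).
# Series components are stored as power-series coefficient lists in t.

def pmul(a, b):
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def padd(*ps):
    out = [0.0] * max(len(p) for p in ps)
    for p in ps:
        for i, v in enumerate(p):
            out[i] += v
    return out

def pscale(a, s):
    return [s * v for v in a]

def pint(a):   # definite integral from 0 to t
    return [0.0] + [v / (k + 1) for k, v in enumerate(a)]

def peval(a, t):
    return sum(v * t ** k for k, v in enumerate(a))

phi, alpha, beta, sigma = 0.08, 0.7, 0.8, 0.5   # illustrative constants
V = [[0.0, sigma]]        # V_0 = V(0) + sigma*t, with V(0) = 0
W = [[0.0, phi * alpha]]  # W_0 = W(0) + phi*alpha*t, with W(0) = 0
m = 8
for n in range(m - 1):
    # Adomian polynomial of V^3: Cauchy triple product over i + j + k = n
    A = [0.0]
    for i in range(n + 1):
        for j in range(n + 1 - i):
            A = padd(A, pmul(pmul(V[i], V[j]), V[n - i - j]))
    V.append(pint(padd(V[n], pscale(A, -1.0 / 3.0), pscale(W[n], -1.0))))
    W.append(pint(pscale(padd(V[n], pscale(W[n], -beta)), phi)))

vm, wm = padd(*V), padd(*W)                     # m-term partial sums
vmp = [k * vm[k] for k in range(1, len(vm))]    # dv_m/dt

def residual(t):  # e_v(t) = dv_m/dt - (v_m - v_m^3/3 - w_m + sigma)
    v, w = peval(vm, t), peval(wm, t)
    return peval(vmp, t) - (v - v ** 3 / 3.0 - w + sigma)

err = max(abs(residual(0.01 * i)) for i in range(21))  # sup over [0, 0.2]
print(err)
```

Increasing m drives this residual down exponentially, mirroring the behavior reported in Fig. 1.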
Let us now use our solution to construct splines (shown in Fig. 2) depicting an action potential using v_10(t) and w_10(t) and varying the interval length. We take the inputs (listed in Table I) from FitzHugh's work.39 The bottom panel shows the full dynamic behavior of the two state variables. Right above it, we zoom into a sample of the v_10(t) spline to show the varying interval lengths. Above that, we zoom in even further to focus on just one element, v_10^(10)(t), to show how quickly the curve converges as we add terms.

C. Hindmarsh-Rose model
Hindmarsh and Rose first modified the two-state-variable FitzHugh-Nagumo model of action potentials to allow for long interspike intervals, to resemble the behavior of real neurons more closely.41 Later, they added a third state variable, which allows for a qualitative description of bursting behavior,12 which we now consider. The model is given as

dX/dt = Y − aX³ + bX² − Z + I,    (24)

dY/dt = c − dX² − Y,    (25)

dZ/dt = r[s(X − x₁) − Z],    (26)

where X is the transmembrane potential, Y reflects the spiking Na⁺ and K⁺ currents, Z is an adaptive current that terminates them, and a, b, c, d, r, s, x₁, and I are constants. The nonlinear terms of Eqs. (24) and (25) are decomposed as ∑_{n=0}^∞ (−aA_n(X³) + bA_n(X²)) and ∑_{n=0}^∞ A_n(X²), respectively, and we apply ℒ⁻¹ throughout Eqs. (24)-(26) to obtain the integral forms of the solution.

The recursion schemes follow Eq. (6) as before, with the first components X_0, Y_0, and Z_0 seeded by the initial conditions X(0), Y(0), and Z(0) together with the integrated constant terms. Writing our m-term partial sums as x_m(t) = ∑_{n=0}^{m−1} X_n(t), y_m(t) = ∑_{n=0}^{m−1} Y_n(t), and z_m(t) = ∑_{n=0}^{m−1} Z_n(t), we again verify our results through the residual error. Figure 3 shows ∥e_x∥∞ at intervals k = 266 and 342, whose radii of convergence are r^(266) = 3.5 and r^(342) = 0.57, respectively. Once again, by inspection, we can recognize exponential convergence rates from the (approximately) linear shape on the log scale. Furthermore, we see a faster convergence in the interval with the smaller radius. The ∥e_y∥∞ and ∥e_z∥∞ are similar (not shown).
Using inputs from the work by Hindmarsh and Rose12 (listed in Table II), we now construct splines (plotted in Fig. 4) that show a burst of spikes. This time, however, we hold the interval length fixed at h = 0.1 and allow the order of each element to vary, maintaining a truncation error at or below a set tolerance ε_tol. In the top panel of Fig. 4, we zoom in to a section of the y_m(t) spline to see individual elements labeled with their respective orders.
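The fixed-interval, variable-order strategy can be sketched on the same toy problem used earlier (u′ = u², u(0) = 1, exact solution 1/(1 − t)); this is our own illustration, not the Hindmarsh-Rose computation itself. Each element keeps adding series terms until the newest term falls at or below ε_tol, so the recorded order grows where the solution steepens.

```python
# Fixed interval length h, variable order m per element: grow each element's
# series until the last added term is at or below eps_tol. A toy sketch for
# u' = u^2, u(start) = c, whose ADM terms on one element are u_n(s) = c^{n+1} s^n.

def adaptive_element(c, h, eps_tol, m_max=64):
    total, m = c, 1                    # u_0 = c
    while m < m_max:
        term = c ** (m + 1) * h ** m   # u_m evaluated at the element's end
        if abs(term) <= eps_tol:
            break
        total += term
        m += 1
    return total, m

c, t, orders = 1.0, 0.0, []
h, eps_tol = 0.05, 1e-12
while t < 0.9 - 1e-12:
    c, m = adaptive_element(c, h, eps_tol)
    orders.append(m)
    t += h

# The exact value at t = 0.9 is 10; orders grow as u(t) = 1/(1 - t) steepens.
print(orders[0], orders[-1], c)
```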

A. Meromorphic FitzHugh-Nagumo model
Having shown how to construct spline solutions with the ADM (A-splines), we now validate this method through comparison to the method of collocation and cardinal basis splines (B-splines). For convenience, we will solve a version of the FitzHugh-Nagumo model (inputs in Table III) for which we have the analytic solutions in closed form.40

B. B-spline collocation
We adopt the approach given by Pitolli;42 for a thorough treatment of B-spline theory and applications, please see the works by Schumaker43 and Prenter.44 Briefly, we partition the time interval [0, T] into uniformly spaced knots △_δ := {lδ, 0 ≤ l ≤ N}, where N = T/δ and δ is the dilate of the knot time interval. We use spline curves for our approximating functions,

v_δ(t) = ∑_l ν_l B_l^m(t),  w_δ(t) = ∑_l ω_l B_l^m(t),    (42)

where ν_l and ω_l are unknown coefficients and B_l^m(t) are the B-spline functions, piecewise polynomials of order m interleaved at the knots, recursively defined as45

B_l^0(t) = 1 for t_l ≤ t < t_{l+1} and 0 otherwise,

B_l^m(t) = ((t − t_l)/(t_{l+m} − t_l)) B_l^{m−1}(t) + ((t_{l+m+1} − t)/(t_{l+m+1} − t_{l+1})) B_{l+1}^{m−1}(t).

Our knot vector is equally spaced, with t = 0 having multiplicity m + 1.46 The solution then involves solving for ν_l and ω_l through the method of collocation: we substitute Eq. (42) into Eq. (39) (summation over l ∈ [0, N] is understood) and evaluate our B-splines at the collocation points △_τ := {t_p = pτ, 0 ≤ p ≤ P}, where P = T/τ ≤ N. We solve this nonlinear system of 2(P + 1) equations in 2(m + N) unknowns by the Gauss-Newton method in Mathematica 13.1 (Wolfram Research, Inc., Champaign, IL). A detailed description of the method can be found in Ref. 42.
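The Cox-de Boor recursion above is compact enough to state in a few lines of code. The sketch below (a standard implementation in Python, not Pitolli's code) builds a cubic basis on [0, 1] with end-knot multiplicity m + 1, as in the text, and checks the partition-of-unity property at an interior point.

```python
# Cox-de Boor recursion for the B-spline basis, using half-open knot
# intervals. A standard sketch; here m denotes the polynomial degree,
# matching the text's "cubic splines, i.e., m = 3" convention.

def bspline(l, m, knots, t):
    """B_l^m(t): degree-m B-spline basis function on the given knot vector."""
    if m == 0:
        return 1.0 if knots[l] <= t < knots[l + 1] else 0.0
    left = right = 0.0
    d1 = knots[l + m] - knots[l]
    if d1 > 0.0:
        left = (t - knots[l]) / d1 * bspline(l, m - 1, knots, t)
    d2 = knots[l + m + 1] - knots[l + 1]
    if d2 > 0.0:
        right = (knots[l + m + 1] - t) / d2 * bspline(l + 1, m - 1, knots, t)
    return left + right

# Cubic basis on [0, 1]; the end knots t = 0 and t = 1 have multiplicity m + 1.
m = 3
knots = [0.0] * (m + 1) + [0.25, 0.5, 0.75] + [1.0] * (m + 1)
n_basis = len(knots) - m - 1                      # number of basis functions
total = sum(bspline(l, m, knots, 0.3) for l in range(n_basis))
print(n_basis, total)                             # the basis sums to 1
```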

C. Example results
For our numeric example, we look at a small interval [0, 1] and use cubic splines, i.e., m = 3. All inputs are summarized in Table III. In this example, we want to show that the ADM is at least as accurate as the B-spline collocation method by comparing the infinity norms of the errors, which we define as e_B = v − v_δ and e_A = v − v_m, at various discretizations h = δ/3. We list in Table IV the errors for an increasingly fine mesh. As expected, the errors decrease as the time steps shorten. At all levels, the A-spline is around an order of magnitude more accurate than the B-spline approximation.
Finally, as a further check of the ADM, following the same steps as in Sec. III B but this time with the meromorphic formulation and inputs, the first few terms of our decomposition series reveal the Taylor series expansions of our known solutions.

V. CONCLUSION
The solutions we presented here represent a new way for neural engineers to approach key biophysical models. They are just a tiny fraction of the analytic possibilities with the ADM. For instance, besides analytic continuation, the solutions' validity can also be expanded through, e.g., diagonal Padé approximants47 or iterated Shanks transforms.48 For the purposes of MREIT, generalizations worth considering include partial differential equations49 for