Path Integrals: From Quantum Mechanics to Photonics

The path integral formulation of quantum mechanics, i.e., the idea that the evolution of a quantum system is determined by a sum over all possible trajectories taking the system from its initial to its final state, is perhaps the most elegant and universal framework developed in theoretical physics, second only to the Standard Model of particle physics. In this tutorial, we retrace the steps that led to the creation of this remarkable framework, discuss its foundations, and present some classical examples of problems that can be solved using the path integral formalism, as a way to introduce readers to the topic and help them become familiar with the formalism. We then focus on the use of path integrals in optics and photonics, and discuss in detail how they have been used to approach several problems, ranging from the propagation of light in inhomogeneous media, to parametric amplification, and quantum nonlinear optics in arbitrary media. To complement this, we also briefly present the Path Integral Monte Carlo (PIMC) method, a valuable computational resource from condensed matter physics, and discuss its potential applications and advantages in photonics.

The underlying concepts of the path integral approach are sometimes considered difficult to grasp. Indeed, from a philosophical standpoint, its underpinnings are extraordinary: in describing a system, all possible paths between its initial and final state must be taken into account mathematically. This idea of summing over all paths has previously been characterised ontologically as "everything that can happen does happen" 40 .
As befits the subject, the history of path integrals is rather circuitous and does not follow a straight line from Feynman's seminal work on the subject, published in 1948 1 , to the present day. Already in the 1920s, the mathematician Norbert Wiener had developed a method to treat Brownian motion and diffusion using a technique of integrating over paths, with formal similarities to Feynman's eventual construction but in a purely classical context 41-43 . During the early part of the following decade, the general ideas of de Broglie and Schrödinger, that waves can be associated with particle dynamics 44 , motivated Dirac to publish a paper in the Physikalische Zeitschrift der Sowjetunion (Physical Journal of the Soviet Union) setting the stage for future developments by proposing the Lagrangian as a more natural basis for a theory of quantum mechanics, as opposed to a Hamiltonian-based method, which he argued was less fundamental due to its nonrelativistic form 45 . This led him to propose the key idea that a quantum mechanical transition amplitude for a particle is given by a phase factor controlled by the action along that particle's path 46 . It also led him to state that the classical path of a quantum system can be interpreted as resulting from the constructive interference of all such paths. In other terms, the action S of a given system counts, de facto, the number of waves of the path in units of Planck's constant h, and, therefore, for each path, the phase of the related wave 47 is given by exp(i2πS/h) = exp(iS/ħ). This allows the evaluation of the interference pattern of the particle dynamics, and essentially represents what is nowadays commonly understood as a path integral 2,5 .
The pivotal step in the development of the theory occurred in the 1940s, when Feynman formulated his version of quantum mechanics, a 'third way' following the earlier, well-known Schrödinger and Heisenberg alternatives 4 . Based on integration over all paths between initial and final physical states, with each path contributing an action-dependent phase as Dirac had proposed, Feynman's formulation culminated in his important 1948 paper entitled Space-time approach to non-relativistic quantum mechanics 1 .
From a conceptual point of view, one of the most interesting consequences of path integrals is that they can provide a deep understanding of the relation between quantum and classical mechanics, as the limit h → 0 emerges naturally from the formalism as the classical limit of the theory 3 .
Although the concepts underlying the path integral formalism may at first appear quite alien, they are deeply profound. Even if its utility were limited, the beauty of the idea would still merit a wide audience. The fact is, however, that the path integral has immense value as a practical computational technique in the physical sciences, and it can be applied to solve problems in many diverse areas, even beyond physics. Two examples showing the remarkable breadth of its applicability are its use in quantum gravity, where the "sum over histories" is interpreted as a sum over all different spacetime configurations interpolating between an initial and final state of the universe 13 , and in financial market modelling, where the formalism has proven useful, since the time dependence of asset prices can be represented by fluctuating paths 5 .
The universality of the technique has allowed scientists to tackle many problems and gain tremendous physical insight into them. In statistical physics, for example, path integrals provided the basic framework for the first formulation of the renormalisation group transformation, and they are widely employed to study systems with random distributions of impurities 6 . In particle physics, they allow one to understand and properly account for the presence of instantons 48 . In quantum field theory, they provide the natural framework for quantising gauge fields 11,12,49 . In chemical, atomic, and nuclear physics, on the other hand, they have been applied to various semiclassical schemes for scattering theory 6 . Through path integrals, topological and geometrical features of classical and quantum fields can be readily investigated and used to create novel forms of perturbative and nonperturbative analysis of fundamental processes of Nature 15,50,51 . In addition, path integrals allow one to re-interpret established results, such as, for example, the BCS theory of superconductivity 7,52 , from a novel, more insightful perspective.
Explicit, analytical solutions to problems formulated in terms of path integrals, however, are scarce, and only available for very simple systems, such as a free particle or the ubiquitous harmonic oscillator 2 . The complexity of the path integral formalism, in fact, grows very rapidly to overwhelming levels of difficulty even for apparently simple problems. As an example, the hydrogen atom, the simplest real atomic system, required nearly 40 years to be fully solved in terms of path integrals 53 .
On the other hand, it is amidst complex and computationally challenging problems that path integrals show their true potential, providing a simple, insightful and intuitive perspective on the physical principles regulating such processes. To achieve this in practice, numerical techniques, such as Monte Carlo methods 19,23,54-58 , and the computational power of modern supercomputers are crucial.
Most practical applications of path integral approaches to the numerical simulation of quantum particles concern systems at finite temperature, in thermodynamic equilibrium described by one of the statistical ensembles 24 . Finite-temperature equilibrium involves the dynamics of the constituent particles and the exchange of energy with an environment or heat bath. These are the central factors in condensed matter physics, with phase transitions, conductivities and other processes related to interactions between the constituent particles. Typically, one assumes the canonical ensemble, where both the number of particles and the volume they occupy are kept constant at a given temperature T, but other ensembles can be chosen where needed.
For a many-particle system at finite temperature there is no wave function, but the mixed state can be described by a density matrix, which, it turns out, can be written in terms of path integrals in imaginary time 2,24 . Expectation values of observables are then evaluated using the trace of the density matrix, which in the position basis means finite closed-loop paths. Thus, in terms of path integrals, each particle propagates in imaginary time from a position in space back to the same position, the time period being inversely proportional to the temperature. With Metropolis Monte Carlo it is then possible to sample particle paths with the correct weight at a predefined temperature, and to collect enough data for the expectation values of the relevant operators to converge. This approach, based on imaginary-time propagation, is called the Path Integral Monte Carlo (PIMC) method. David Ceperley and coworkers have carried out seminal development work and a sizable number of PIMC simulations of various many-particle systems, ranging from superfluid He 17 and neutron matter 59 to electrons and hydrogen in extreme conditions 60 , including both bosonic and fermionic particles.
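The imaginary-time sampling just described can be illustrated with a minimal sketch, not a production-grade PIMC code (no staging moves, no permutation sampling for quantum statistics). A single distinguishable particle in a 1D harmonic well is assumed, with m = ω = ħ = 1; the discretised primitive action and all parameter values below are our illustrative choices.

```python
import math
import random

def pimc_harmonic(beta=4.0, n_beads=32, n_sweeps=30000, step=0.5, seed=1):
    """Minimal PIMC for a 1D harmonic oscillator (m = omega = hbar = 1).
    The imaginary-time path is a closed loop of n_beads points x_k, with
    time step eps = beta / n_beads and primitive Euclidean action
    S = sum_k [ (x_{k+1} - x_k)^2 / (2 eps) + eps V(x_k) ]."""
    rng = random.Random(seed)
    eps = beta / n_beads
    x = [0.0] * n_beads
    x2_sum, n_samples = 0.0, 0

    def local_action(k, xk):
        # Pieces of the Euclidean action that involve bead k only; the path
        # is periodic in imaginary time (trace of the density matrix).
        prev, nxt = x[k - 1], x[(k + 1) % n_beads]
        kinetic = ((xk - prev) ** 2 + (nxt - xk) ** 2) / (2.0 * eps)
        return kinetic + eps * 0.5 * xk ** 2          # V(x) = x^2 / 2

    for sweep in range(n_sweeps):
        for k in range(n_beads):
            trial = x[k] + rng.uniform(-step, step)
            delta = local_action(k, trial) - local_action(k, x[k])
            if delta <= 0.0 or rng.random() < math.exp(-delta):
                x[k] = trial                          # Metropolis acceptance
        if sweep >= n_sweeps // 5:                    # discard equilibration
            x2_sum += sum(xi * xi for xi in x) / n_beads
            n_samples += 1
    return x2_sum / n_samples

x2 = pimc_harmonic()
exact = 0.5 / math.tanh(0.5 * 4.0)   # <x^2> = (1/2) coth(beta*omega/2) at beta = 4
print(f"PIMC <x^2> = {x2:.3f}  (exact continuum value {exact:.3f})")
```

With the primitive action used here, the estimate approaches the analytic ⟨x²⟩ = (1/2) coth(βω/2) as the number of beads grows and statistics accumulate.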
In recent years, one of the authors of this tutorial (TTR) has contributed significantly to extending PIMC simulations to new, though simple, quantum systems, such as small atoms 61,62 , molecules 63-66 , a chemical reaction 67 and quantum dots 68-70 , with the ultimate goal of providing a more accurate description of their electronic structure and related properties, including many-body effects and their temperature dependence 62,71,72 . In this context, PIMC has proven to be a reliable and accurate method for calculating the electric polarisabilities of atomic and molecular systems, leading to accurate estimates of the optical properties of both individual small quantum systems and collections of them, in the form of dilute gases 66,73,74 .
A different approach, based on a real-, rather than imaginary-, time path integral (RTPI), has recently been proposed as a way to describe the full quantum dynamics of a quantum system at zero temperature, and to also characterise the evolution of its eigenstates 69,73,75-78 . A combination of PIMC and RTPI therefore offers a comprehensive tool to study the properties of complex systems and their classical and quantum evolution. This feature in particular might prove very useful not only in chemistry and condensed matter physics, where these techniques have flourished in the past decades, but also as a viable means to understand and design the properties of materials of interest for photonics.
A fully integrated simulation platform that allows exact control of both the electronic and photonic properties of matter, without the need to resort to approximations or effective theories, would in fact constitute a tremendous resource for the optimisation of integrated photonic systems.
It is interesting to note that, throughout the last 30 years, path integrals have been used to describe several problems in classical and quantum optics, such as the propagation of light in gradient-index media 31 , the estimation of the channel capacity of classical and quantum fibre-based communication networks 26 , parametric amplification 37,38 , light-matter interaction beyond the rotating wave approximation 27 , decoherence and dephasing in nonlinear spectroscopy 29 , and the effect of retardation in radiative damping 28 . Path integrals have also been employed to link the nonparaxial propagation of light with different models of quantum gravity 32 . All these works share a common thread: they employ nonrelativistic path integrals to calculate the propagator (i.e., the Green's function) of the electromagnetic field in different contexts, and use this information to solve the problem at hand. A different approach, based on path integrals in quantum field theory and Feynman diagrams, has recently been introduced as a viable way to handle classical 79 and quantum 39 optical phenomena in arbitrary media.
However, the benefits of path integrals in photonics, namely their ability to calculate both the properties of matter and its interaction with light exactly, without approximations on either the matter or the light side, and the new physical insight this could bring to photonics, remain to date uncharted territory.
In this tutorial, we aim to introduce the concepts and methods of path integrals to readers unfamiliar with the field, and to provide researchers in optics and photonics with a reference point for both analytical and numerical methods involving path integrals, in the hope that this will provide a powerful and practical toolkit for tackling challenging problems in photonics. An accurate description of the diverse interactions between light and matter, emerging from the interplay of fundamental particles and fields, naturally calls for the use of quantum physics, and its degree of complexity grows very quickly. Path integrals are a natural way to study these interactions, and they actively take advantage of the complexity of the problem. Using them in photonics might then lead to novel methods to exploit complicated light-matter dynamics in photonic systems.
The tutorial is split into two main parts. Part 1, comprising Sections II through VI, covers the basics of the path integral approach in physics, and presents examples of how it can be used to solve problems in classical and quantum optics, as well as how to employ PIMC to determine the optical properties of materials. In particular, Sect. II covers the fundamentals of the path integral approach in physics, treating core concepts such as the principle of least action, classical and quantum probabilities, and a brief description of the mathematics of integration over an infinite number of paths. Basic examples, comprising the dynamics of a free quantum particle, the quantum harmonic oscillator, and diffraction from a double slit, are covered in Sect. III. To conclude Part 1, two examples of the use of path integrals in classical and quantum optics are presented: the propagation of light in an inhomogeneous medium, and how this can be related to the physics of a harmonic oscillator with a time-dependent frequency, in Sect. IV; and the investigation of degenerate parametric down conversion in Sect. V (based on Ref. 37). Finally, Sect. VI briefly discusses how PIMC can be used to predict the optical properties of matter, and presents some perspectives on the use of this computational resource for photonics.
Part 2, on the other hand, comprising Sections VII through XI, deals with the basics of path integrals in quantum field theory (QFT), and presents an application of this framework to the dynamics of the electromagnetic field in arbitrary media. In particular, Sect. VII briefly introduces the concept of the path integral for quantum fields, and uses the simple case of a scalar field as an example to calculate the relevant quantities and establish the formalism. Sect. VIII then discusses how to include nonlinear interactions in the formalism, and introduces Feynman diagrams. The results of these two sections are then intuitively and qualitatively generalised to the case of a vector field in Sect. IX, as a reference point for Sect. X, where they are applied to the particular case of an electromagnetic field propagating in a dispersive medium of arbitrary shape. The last section of Part 2, Sect. XI, then presents two explicit examples of how path integrals can be used in quantum optics. The first shows how to describe the onset of spontaneous parametric down conversion (SPDC) in lossy media through path integrals, while the second deals with the calculation of the rate of spontaneous emission of a quantum emitter surrounded by a dispersive medium.
In the spirit of the educational purpose of a tutorial, and given the mathematical complexity of path integrals, especially concerning the concepts introduced in Part 2, we also provide, in Appendix A, a step-by-step guide to path integral calculations for the explicit case of the electromagnetic field in arbitrary media. We hope this will serve as a good reference and guide for better understanding the techniques and methods presented below.
Finally, conclusions and future perspectives are given in Sect. XII.

II. FUNDAMENTALS OF PATH INTEGRALS
A. Probability Amplitudes: Classical vs. Quantum

Quantum physics is an abstract theory, whose specific features beyond classical physics are typically only observable spectroscopically. A good starting point for finding the underlying differences between the two seemingly different worlds of classical and quantum physics is their different interpretation of the concept of probability. This, in fact, turns out to be a direct manifestation of the wave nature of quantum particles, and thus the fundamental issue that we need to incorporate in the study of the dynamics of quantum systems.
The necessity of a change in viewpoint concerning probability, and the consequent definition, for quantum physics, of a complex-valued probability amplitude, emerges very clearly within the context of the least action principle. Let us consider the situation depicted in Fig. 1, where a classical [panel (a)] and a quantum [panel (b)] particle are evolving from an initial time t_a to a final time t_b .
For the classical, deterministic system, although many different paths joining t_a with t_b are available, only the stationary path of least action [red line in Fig. 1(a)] gives a significant contribution to its dynamics and determines, ultimately, its equations of motion.
For a quantum system, on the other hand, its intrinsic wave nature (and, ultimately, the uncertainty principle) prevents it from following one single path, and the classical path [blue tube in Fig. 1(b)] must be interpreted as the one with maximum constructive interference from all possible paths.
To understand this better, let us consider a classical particle first propagating from a point x_a to a point x_c , and subsequently to a third point x_b . If we denote with P(c, a) ≡ P(x_a → x_c) the probability for the particle to propagate from x_a to x_c , and, similarly, with P(b, c) ≡ P(x_c → x_b) the probability for the particle to propagate from x_c to x_b , the (classical) conditional probability for the particle to propagate from x_a to x_b by going through x_c reads

P(b, a) = ∑_c P(b, c) P(c, a),    (1)

where the summation over c takes into account all possible alternatives for the intermediate state c. Note that the definition of conditional probability given above, i.e., P(b, a), differs slightly from the usual one, which reads P(a|b). This notation, however, is fully equivalent to the traditional one, and will turn out to be of more practical use for the purpose of this work.
The above definition can be readily generalised to the case of continuous variables, i.e., to probability densities, by promoting P(a) and P(b) to probability density functions, and interpreting a and b as two sets of coordinates {x_a} and {x_b} occupied by the particle at given times t_a and t_b , respectively 80 . In this case, the summation over all possible alternatives c in Eq. (1) becomes an integral over the set of coordinates {x_a}, i.e.,

P(b) = ∫_a P(b, a) P(a) dx_a ,    (2)

where the subscript a on the integral indicates that the integration has to take into account all possible values of the integration coordinate x_a .

Now, let us extend the concepts introduced above to the case of a quantum particle with wave nature. To do that, let us first rewrite Eq. (1) for the probability amplitude ψ associated to the quantum particle as

ψ(b, a) = ∑_c ψ(b, c) ψ(c, a).    (3)

The probability amplitude ψ(b, a) defined above constitutes the basic quantity from which the dynamics of a quantum particle can be derived, and it is usually referred to in the literature as the kernel (or propagator, or Green's function) of the quantum system at hand. The conventional symbol for it in path integral language is K(b, a), and we adopt this notation for the rest of this manuscript. The kernel defined above has a simple physical interpretation: it can be thought of as the impulse response of the system at hand 2,81 . Moreover, if we know the probability amplitude of the system at a given initial state ψ(a), we can immediately evolve it to a final state characterised by a probability amplitude ψ(b) using the relation

ψ(b) = ∫_a K(b, a) ψ(a) dx_a .    (4)

Finally, the experimentally observable classical probability distributions are found as the squares of the absolute values of the probability amplitudes, P(a) = |ψ(a)|² and P(b) = |ψ(b)|², at times t_a (initial state) and t_b (final state), respectively. With this definition, the probability amplitudes appearing above can be readily interpreted as the wave function of the quantum system. Equation (3), in particular,
hints at the interpretation of the wave function of a quantum system as the sum (or, better said, interference) of all possible paths linking the initial and final states of the considered evolution.
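The operational difference between the classical composition rule (adding probabilities) and the quantum one (adding amplitudes, then squaring) can be made concrete with a two-path toy calculation; the action values below are arbitrary illustration numbers, not taken from any physical system.

```python
import cmath

# Two alternative paths from a to b, each carrying a unit-magnitude amplitude
# with phase set by its action: psi = exp(i S / hbar).  S values are arbitrary.
hbar = 1.0
S1, S2 = 0.3, 2.8
psi1 = cmath.exp(1j * S1 / hbar)
psi2 = cmath.exp(1j * S2 / hbar)

# Classical rule, Eq. (1): the probabilities of the alternatives add.
p_classical = abs(psi1) ** 2 + abs(psi2) ** 2

# Quantum rule, Eq. (3): the amplitudes add first, and squaring produces an
# interference (cross) term 2 cos((S2 - S1)/hbar) absent classically.
p_quantum = abs(psi1 + psi2) ** 2

print(p_classical, p_quantum)  # 2.0 vs 2 + 2 cos(2.5) ≈ 0.398
```

The two rules only agree when the cross term averages to zero, which is precisely what happens in the classical limit of rapidly varying phases.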

B. Lagrangian, Action and Path Integral
Contrary to the canonical formulation of quantum mechanics, which bases its premises on the Hamiltonian function H of the system, and therefore on the concept of total energy 82 , the path integral formalism starts from the Lagrangian function, generally defined as the difference between the kinetic and potential energy of the system 83 , i.e., L = T − V. For this reason, the path integral formalism is often referred to as the third formulation of quantum mechanics, the first being the matrix mechanics developed by Heisenberg, Born and Jordan in 1925 84-86 , and the second the familiar Hamiltonian formulation developed by Schrödinger in 1926 87 .
Although the Hamiltonian and Lagrangian formulations of physical problems are essentially equivalent in physical content, the latter is more elegant and, thanks also to its natural appearance in the path integral formalism, has been adopted as the natural framework for more advanced theories, such as QFT 8,88 , particle physics 89 , and string theory 51 .
For simplicity of notation, we work in one-dimensional space with coordinate x; the generalisation to three dimensions is straightforward. For a particle of mass m moving along the path x(t) with velocity ẋ(t) in a potential V(x, t), the classical Lagrangian is

L(ẋ, x, t) = (m/2) ẋ² − V(x, t).    (5)

As a way to both find the classical equation of motion and concurrently incorporate the wave nature of the particle dynamics, we define the action along the path x(t) from t_a to t_b as

S(b, a) = ∫_{t_a}^{t_b} L(ẋ, x, t) dt.    (6)

In Lagrangian mechanics the action is a quantity related to the path, whose optimisation leads to the equations of motion. This procedure, i.e., optimising the path x(t) such that δS = 0, is called "the principle of least action", and leads to the Euler-Lagrange equation 83,90

(d/dt)(∂L/∂ẋ) − ∂L/∂x = 0.    (7)

If we now substitute Eq. (5) into the equation above, we find the differential equation

m ẍ = −∂V/∂x,    (8)

which we immediately recognise as Newton's classical equation of motion. This constitutes the fundamental ingredient for defining the classical and quantum probabilities as in Eqs. (1) and (3). However, for the dynamics of a classical system, the path-based approach above is redundant, and the traditional approach based on Newton's equation of motion is favourable in most cases. Similarly, in quantum mechanics the Schrödinger equation is the best approach for simple problems. There are, however, several sophisticated cases where Eqs. (3) and (4) are more practical and easier to handle. For these more complicated scenarios we need to find the kernel K(b, a), and the right way to do that is to incorporate into the "classical" path-based approach above the information on the wave nature of quantum particles, i.e., to introduce interference between paths. Following the ideas of Dirac 45 and Feynman 1,4 , we consider the action S as a (classical) measure of the path length, and Planck's constant h as the wavelength. We can then assign a wave exp[i2πS(x(t), a)/h] to the path x(t) and follow the phase of the waves to find the interference effects. In particular, we want to take into account the contributions of all possible waves of the form exp[iS(b, a)/ħ] to the probability amplitude K(b, a).
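The passage from the Lagrangian to Newton's equation via the Euler-Lagrange equation can be checked symbolically. The sketch below uses sympy, with a harmonic potential V(x) = kx²/2 chosen for concreteness; the symbol names are ours.

```python
import sympy as sp

t = sp.symbols('t')
m, k = sp.symbols('m k', positive=True)
x = sp.Function('x')

# Lagrangian L = T - V with the harmonic potential V(x) = k x^2 / 2.
L = m * sp.diff(x(t), t) ** 2 / 2 - k * x(t) ** 2 / 2

# Euler-Lagrange equation, Eq. (7): d/dt (dL/d xdot) - dL/dx = 0.
xdot = sp.diff(x(t), t)
eom = sp.diff(sp.diff(L, xdot), t) - sp.diff(L, x(t))

# Newton's equation for this potential: m x'' + k x = 0.
newton = m * sp.diff(x(t), t, 2) + k * x(t)
print(sp.simplify(eom - newton))
```

The printed difference vanishes, confirming that the Euler-Lagrange equation reproduces m ẍ = −∂V/∂x for this potential.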
The sum (or integral) over the contributions of all possible paths is called the path integral, i.e.,

K(b, a) = ∫_a^b Dx(t) exp[iS(b, a)/ħ],    (9)

where the notation Dx(t) indicates integration over all paths from a = (x_a, t_a) to b = (x_b, t_b), which, following Ref. 2, can be defined as

∫_a^b Dx(t) exp[iS(b, a)/ħ] = lim_{ε→0} (1/A(ε)) ∫ ··· ∫ exp[iS(b, a)/ħ] (dx_1/A(ε)) ··· (dx_{N−1}/A(ε)),    (10)

where A(ε) is a suitable normalisation factor that ensures that the limit properly converges (an example is given in the next section, but the explicit form of A(ε) might change depending on the problem at hand), and ε comes from the discretisation of the time interval in the action S(b, a) into N finite steps, i.e., t_b − t_a = Nε, with t_0 = t_a and t_N = t_b . This, in turn, implies a discretisation of the paths x(t), which are now defined by the discrete set of points x_k = x(t_k), with t_k = t_a + kε. This discretisation procedure allows us to consider each of the N − 1 integrals above as a standard Riemann/Lebesgue integral in the variable x_k . Then, when all N − 1 integrals have been computed, the limit ε → 0, together with the correct definition of the normalisation factor A(ε), ensures convergence of the path integral ∫_a^b Dx(t), and justifies its definition as an integration over all possible paths. Notice, moreover, that the same line of reasoning will allow us, in Sect. VII, to define the path integral for fields.
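A property underlying this time-slicing definition is that the kernel composes: integrating the product of two short-time kernels over the intermediate coordinate reproduces the kernel for the combined interval, so the nested integrals collapse consistently. A numerical check is easiest in imaginary time, where the free-particle kernel is a real Gaussian (m = ħ = 1; the grid and parameter values are our choices):

```python
import numpy as np

def k_free(x1, x0, tau):
    """Euclidean (imaginary-time) free-particle kernel, m = hbar = 1."""
    return np.sqrt(1.0 / (2.0 * np.pi * tau)) * np.exp(-(x1 - x0) ** 2 / (2.0 * tau))

# Composition: integrating over the intermediate point x_c must reproduce
# the kernel for the doubled time step, K(x_b, x_a; 2 eps).
eps = 0.4
xa, xb = -0.3, 1.1
xc = np.linspace(-20.0, 20.0, 20001)          # intermediate-point grid

vals = k_free(xb, xc, eps) * k_free(xc, xa, eps)
rhs = 0.5 * ((vals[1:] + vals[:-1]) * np.diff(xc)).sum()   # trapezoidal rule
lhs = k_free(xb, xa, 2.0 * eps)
print(lhs, rhs)   # the two values agree to high accuracy
```

The same composition property, iterated N − 1 times, is what makes the recursive Gaussian integrations of the next section work.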
The explicit form of the path integral above is usually determined by the form of the potential term V(x) appearing in the Lagrangian (5). For some potential functions there are analytical, closed-form, exact solutions to Eq. (9), but one must resort to numerical methods, with possible approximations, in more general cases, such as multi-dimensional or many-body problems. In what follows, we present some of the exact propagators.
The above result, obtained for a simple one dimensional system, can be readily extended to three dimensional and many particle systems.While the former is straightforward to work out, in the latter case, the quantum statistics of fermions and bosons needs to be taken into account explicitly, which makes the problem of finding the correct generalisation of the path integral to the many body case less trivial 91 .
In addition to that, the path integral approach also allows for an easy way to simulate the evolution of the density matrix for finite-temperature equilibrium systems. An example of this will be discussed below in Sect. V.

III. BASIC EXAMPLES FOR QUANTUM PARTICLES
For the evaluation of the wave function from the integral equation (4), we need to calculate explicitly the kernel from the path integral, Eq. (9). In this section we therefore consider the simplest kernels, with examples following the book of Feynman and Hibbs 2 .
In case the integrand is an exponential of a quadratic function (a Gaussian integral), the kernel can always be evaluated by recursive use of the basic Gaussian integral formula 81

∫_{−∞}^{∞} dx e^{−ax² + bx} = √(π/a) e^{b²/(4a)}.    (11)
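Equation (11) is easy to verify numerically for real a > 0; the quadrature parameters below are our choices.

```python
import math

def gauss_integral(a, b, lo=-30.0, hi=30.0, n=300000):
    # Midpoint rule for the integral of exp(-a x^2 + b x) over a wide interval,
    # wide enough that the truncated tails are negligible.
    dx = (hi - lo) / n
    total = 0.0
    for i in range(n):
        xi = lo + (i + 0.5) * dx
        total += math.exp(-a * xi * xi + b * xi)
    return total * dx

results = []
for a, b in [(1.0, 0.0), (0.5, 1.2), (2.0, -0.7)]:
    exact = math.sqrt(math.pi / a) * math.exp(b * b / (4.0 * a))
    results.append((gauss_integral(a, b), exact))
    print(a, b, results[-1])
```

In the path integral, the same formula is applied with complex coefficients; the analytic continuation is what produces the complex normalisation factors discussed below.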
Another useful result to keep in mind is that, for given endpoints x_a and x_b , the contributions from paths other than the classical one interfere destructively and vanish. Thus, only the classical action S_cl(x_b, x_a) contributes, leading to a major simplification for the kernel, i.e.,

K(b, a) = F(t_b − t_a) exp[iS_cl(x_b, x_a)/ħ].    (12)

As long as the action involves the path variables only up to second order, therefore, the exact propagator takes the form above: the exponential of the classical action can be factored out of the path integral, leaving at most a prefactor F(t_b − t_a) to calculate.
To illustrate how one arrives at the result above, let us consider a general Lagrangian quadratic in x and ẋ, of the form

L(ẋ, x, t) = a(t)ẋ² + b(t)ẋx + c(t)x² + d(t)ẋ + e(t)x + f(t),    (13)

where {a, b, c, d, e, f} are (arbitrary, but well-behaved) time-dependent coefficients. The action corresponding to this Lagrangian is then the integral of Eq. (13) with respect to time between two fixed end points t_a and t_b , as given by Eq. (6).
Let us now assume that x_cl(t) is the classical path between the specified end points, i.e., the path for which δS = 0 holds. We can then represent x(t) in terms of deviations from the classical path x_cl(t) by introducing the function y(t) as

x(t) = x_cl(t) + y(t).    (14)

This substitution means that, instead of defining a point on the path by its distance from an arbitrary coordinate axis, we measure instead its deviation y(t) from the classical path. Moreover, at each time t ∈ [t_a, t_b], the variables x and y differ only by the fixed quantity x_cl(t), and therefore dx_i = dy_i for each point t_i . In general, then, it follows that Dx(t) = Dy(t). Notice that, as a consequence of Eq. (14), y(t_a) = 0 = y(t_b), as at the endpoints the path x(t) coincides with the classical path x_cl(t).
If we then use the change of variables defined in Eq. (14), the Lagrangian (13) can be written as the sum of three terms,

L = L_cl + L_mix + L_y ,    (15)

where L_cl (L_y) is just Eq. (13) with {x, ẋ} → {x_cl, ẋ_cl} ({x, ẋ} → {y, ẏ}), and

L_mix = 2a(t)ẋ_cl ẏ + b(t)(ẋ_cl y + ẏ x_cl) + 2c(t) x_cl y + d(t)ẏ + e(t)y.    (16)

Similarly, the action can be written as the sum of three terms, namely the classical action S_cl(b, a), the action relative to the deviation y(t), i.e., S_y(b, a), and the mixed action S_mix(b, a). Because y(t_a) = 0 = y(t_b), however, all the terms linear in y result in a vanishing integral (after an integration by parts, these linear terms vanish precisely because x_cl extremises the action). Thus, only the second-order terms in y give a nonzero contribution, and the total action can be written as

S(b, a) = S_cl(b, a) + S_y(b, a).    (17)

Notice how S_cl(b, a) does not depend on the deviation y(t); the corresponding exponential can therefore be treated as a constant with respect to the path integration Dy(t). The kernel can then be written in the form

K(b, a) = exp[iS_cl(b, a)/ħ] ∫_0^0 Dy(t) exp[iS_y(b, a)/ħ],    (18)

where the notation ∫_0^0 Dy(t) is a reminder that all paths y(t) obey the boundary conditions y(t_a) = 0 = y(t_b). The path integral above can then be written as a function of the time interval (t_b − t_a) only, i.e.,

∫_0^0 Dy(t) exp[iS_y(b, a)/ħ] = F(t_b − t_a).    (19)

This, ultimately, allows us to write the kernel in the simplified form

K(b, a) = F(t_b − t_a) exp[iS_cl(b, a)/ħ],    (20)

which is equivalent to that of Eq. (12).
A. Path Integrals for a Free Quantum Particle

The first example concerns the simplest quantum system: a quantum particle of mass m, freely propagating without experiencing any interaction. Following the assumptions made in the previous section, we discuss the case of a one-dimensional free particle. The generalisation to an arbitrary number of dimensions can be readily done, since the dynamics of a free quantum particle in D dimensions can be seen as the product of the independent evolution of D one-dimensional particles 46 .
The Lagrangian of a free particle of mass m is given by

L = (m/2) ẋ²,    (21)

and the equation of motion deriving from the Euler-Lagrange equation (7) is simply m ẍ = 0. The corresponding action S = ∫_{t_a}^{t_b} L(ẋ, x, t) dt can be readily calculated explicitly by means of integration by parts, and has the form

S_cl(b, a) = m (x_b − x_a)² / [2(t_b − t_a)],    (22)

where x_{a,b} = x(t_{a,b}).

[FIG. 2. Real (blue, solid line) and imaginary (red, dashed line) parts of the kernel K(x_b, x_a) of a free quantum particle, as given by Eq. (30), as a function of x = x_b − x_a , for a fixed time interval t_b − t_a = T. For this plot, m = 1 = ħ has been assumed, which corresponds to measuring time in square metres rather than seconds; moreover, T = 1 has been used.]

To calculate the path integral for a free particle, we need to consider all possible paths the particle can take from the initial state (t_a, x_a) to the final state (t_b, x_b). To do so, we divide the time interval T = t_b − t_a into N smaller intervals of length ε = t_{i+1} − t_i (such that T = Nε), and calculate the action corresponding to the particle evolution within each interval, so that

S(b, a) = ∑_{i=0}^{N−1} m (x_{i+1} − x_i)² / (2ε),    (23)

with x_0 ≡ x_a and x_N ≡ x_b . We then substitute this result into Eq. (9) and evaluate the path integral over the set of N distinct segments, where

A(ε) = √(2πiħε/m)    (24)

is a factor included to ensure that the integral converges. This factor, however, is not merely a normalisation factor: since it is complex, it contributes to the overall phase of the path integral. This is discussed in great detail in Ref. 2. Finally, taking the limit N → ∞, we arrive at the following expression for the propagator of a free particle

K(b, a) = lim_{N→∞} (1/A) ∫ ··· ∫ exp[ (im/2ħε) ∑_{i=0}^{N−1} (x_{i+1} − x_i)² ] (dx_1/A) ··· (dx_{N−1}/A).    (25)

The integrals appearing above are Gaussian in the variables x_i , and can then be calculated one after another. To see how, let us first calculate explicitly the integral with respect to x_1 ; once we have this result, we can perform the other integrations in cascade in the same manner.
First, notice that the relevant terms in the action depending explicitly on x_1 (i.e., those obtained by setting i = 1 and i = 2 in the expression above) give rise to the following term, and that, in particular, the first term does not depend on the integration variable x_1. Integrating the above quantity with respect to x_1 then yields an expression in which, to pass from the second to the third line, we have employed the change of variables X = ix_1 and used Eq. (11).
Next, we take into account the terms in the action depending explicitly on x_2, i.e., S_2 = S_1 + (x_3 − x_2)², and we integrate with respect to x_2. We can do so by simply taking the result of the integral of S_1 given in Eq. (27), multiplying it by the appropriate factor, and integrating again, this time over x_2. The result is similar to that of Eq. (27), except that (x_2 − x_0)² becomes (x_3 − x_0)², and 2ε → 3ε. It is now clear that we can solve the set of N integrals in Eq. (25) by recursively applying terms of the form (28) and then performing Gaussian integration with respect to the variable x_i. After N − 1 steps, we are left with the result above. If we now notice that Nε = T = t_b − t_a, it is easy to see that the limit operation in Eq. (25) can be readily performed, leading us to the final form for the path integral of a free particle. The functional form of the real and imaginary parts of the kernel above for a constant time interval t_b − t_a = T is shown in Fig. 2. It is worth commenting on this result a bit. From it, in fact, we see how the quantum dynamics of a free particle (but, more generally, of an arbitrary quantum system) couples with its classical dynamics described by the action (22). However, this result also allows us to shed light on the essential difference between quantum and classical dynamics. While the classical principle of least action localises the propagation of a particle on a specific trajectory x(t) (i.e., the path of minimal action), the propagator K(x(t), x_a) describing the propagation of a particle from the initial state x_a along the classical trajectory x(t) yields instead a complex wave function ψ(x, t), which delocalises in time during propagation (obviously within the constraints of the Heisenberg principle).
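As a complement to the derivation above, the recursion over the intermediate points x_i can be checked numerically. The following Python sketch (ours, not part of the original text; m = 1 = ħ) uses the fact that integrating the product of two Gaussian slice factors exp[iα₁(x_1 − x_0)²] exp[iα₂(x_2 − x_1)²] over x_1 yields a new Gaussian with exponent coefficient α₁α₂/(α₁ + α₂); folding N slices, each with α = m/(2ħε), must then reproduce the coefficient m/(2ħT) appearing in the free-particle kernel.

```python
# Numerical sketch of the recursive Gaussian integration behind the
# free-particle propagator, with m = 1 = hbar. Each time slice contributes
# a factor exp[i*m*(x_{j+1}-x_j)^2/(2*hbar*eps)], i.e. a Gaussian with
# exponent coefficient alpha = m/(2*hbar*eps). Integrating two such factors
# over the shared intermediate point obeys
#     alpha_new = alpha1*alpha2/(alpha1 + alpha2),
# so folding N slices should give m/(2*hbar*N*eps) = m/(2*hbar*T).

m = hbar = 1.0
T, N = 1.0, 500
eps = T / N
alpha_slice = m / (2.0 * hbar * eps)

alpha = alpha_slice
for _ in range(N - 1):
    # merge one more time slice into the accumulated Gaussian
    alpha = alpha * alpha_slice / (alpha + alpha_slice)

print(alpha)                 # accumulated exponent coefficient
print(m / (2.0 * hbar * T))  # closed-form value m/(2*hbar*T) = 0.5
```

The recursion collapses N slices into a single Gaussian whose exponent depends only on the total time T, exactly as in the closed-form kernel.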

B. Refraction of Photons at an Interface
With the above notation, consider now space divided into two parts by a planar interface C, and let A and B be two points located on opposite sides of the interface, as shown in Fig. 3. We then assume constant but different potentials on the two sides of the interface. On both sides we expect the paths of the photons to be those of a free particle. With this assumption, we can project the path connecting the points A and B onto a one-dimensional subspace, for simplicity. Notice, however, that although we will perform the calculations in this one-dimensional subspace, we still need to think in terms of three-dimensional space when considering the deflection of light rays at the interface C.
Without loss of generality, we can assume that the total energy is conserved on both sides of the interface, as well as while passing through it. However, since the two sides might have different values of the potential (i.e., different refractive indices), a particle passing through the interface C needs to change its velocity from ẋ_a to ẋ_b, to account for this variation in potential energy. The change of velocity, moreover, occurs at some position x_c on the interface.
As shown in Fig. 3, in case the straight line from A to B is not perpendicular to the interface, the observed path becomes deflected at x_c. Now, we can write the action for the path with the coordinate x_c as a free parameter, to be optimised following the principle of least action. For the case of a photon refracting at interface C, therefore, the "minimum optical path length" from A to B is found by optimising the coordinate x_c. Following Eq. (12), the kernel for the refraction of a quantum particle at a planar interface C is given, up to an inessential constant F ≡ F(t_b − t_a), by the expression above. We leave it as an exercise for the reader to work out the explicit expression of the normalisation constant F ≡ F(t_b − t_a).
In the equations above, the two different constant velocities ẋ_a and ẋ_b of the photon follow from the two different refractive indices. In optics, the optimisation of the optical path length is called Fermat's principle. If we consider the problem from a geometrical-optics perspective, in fact, it is easy to see how the optimisation of x_c leads directly to the celebrated Snell's law of refraction 92, i.e., n_a sin θ_a = n_b sin θ_b, where n_{a,b} = 1/ẋ_{a,b} are the refractive indices of the two media separated by interface C (which can be expressed as the inverse velocity of the particle on each side of C), and θ_{a,b} are the angles that the optical rays emerging from A and B, respectively, make with the normal to the interface. The angle difference between the two sides of the interface C is reminiscent of the different wavelengths that a photon with velocity ẋ_a and one with velocity ẋ_b experience. In this simple example, the classical and quantum approaches give identical explanations for the observations. We thus see that the "quantum corrections", though not absent, may give not only a small but even a vanishing contribution.
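The connection between path optimisation and Snell's law can be verified numerically. The sketch below is our own; the geometry, with A = (0, 1), B = (L, −1), and the interface along y = 0, is hypothetical. It minimises the optical path length with a golden-section search and checks that the optimal crossing point x_c satisfies n_a sin θ_a = n_b sin θ_b.

```python
import math

# Minimise the optical path length n_a*|AC| + n_b*|CB| over the crossing
# point x_c on the interface, and verify Snell's law at the minimum.
# Geometry (hypothetical): A = (0, 1) in medium a, B = (L, -1) in medium b,
# interface along y = 0.

n_a, n_b, L = 1.0, 1.5, 2.0

def optical_path(xc):
    return n_a * math.hypot(xc, 1.0) + n_b * math.hypot(L - xc, 1.0)

# golden-section search for the minimising crossing point x_c in [0, L]
lo, hi = 0.0, L
phi = (math.sqrt(5.0) - 1.0) / 2.0
for _ in range(200):
    d = phi * (hi - lo)
    x1, x2 = hi - d, lo + d
    if optical_path(x1) < optical_path(x2):
        hi = x2
    else:
        lo = x1
xc = 0.5 * (lo + hi)

sin_a = xc / math.hypot(xc, 1.0)            # sine of the incidence angle
sin_b = (L - xc) / math.hypot(L - xc, 1.0)  # sine of the refraction angle
print(n_a * sin_a, n_b * sin_b)             # the two sides of Snell's law
```

At the minimising x_c the two printed values coincide, which is Fermat's principle producing Snell's law.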

C. Diffraction from a Double Slit
This is the well-known "classic experiment" for demonstrating the wave nature of light or, if conducted with quantum particles like electrons, for exposing the wave nature of particle dynamics 93. The results of this section can then be thought of as valid both for a photon and for a quantum particle. Experimentally, to prove the wave nature of light one would need to perform this experiment with monochromatic light, while mono-energetic particles are needed to unravel the wave nature of particle dynamics. From a path integral perspective, the interference pattern typical of such experiments naturally arises when the possible paths the particle can take to traverse the double slit are explicitly taken into account.
Our first task is then to construct the kernel for this problem. To this aim, let us consider the situation depicted in Fig. 4, where a particle is emitted from a suitable source located at point A and propagates to a detector (or a collection thereof) placed at B, through a screen C with two slits C_1 and C_2 carved in it. Without loss of generality, we can assume the evolution of the particle is that of a free particle, and that the only potential it encounters is represented by the double-slit structure (which, as a matter of fact, acts as a transfer function for the particle).
To reach the detector at B, the particle takes a time τ to travel from A to the screen C (and, in particular, to one of the slits), passes through the screen, and then arrives at B after a time T − τ, where T = t_b − t_a is the total time the particle takes to go from A to B. However, since we don't know exactly the time at which the particle arrives on the screen, we need to integrate over all possible times. In other terms, since we don't know the exact path the particle will take to reach C from A, we need to integrate over all possible paths it might undertake. With this in mind, and remembering that a = (x_a, t_a), b = (x_b, t_b), and c_{1,2} = (x_{c_{1,2}}, t_{c_{1,2}}) represent the position and time at points A, B, and C_{1,2}, respectively, the kernel for the propagation of a particle through a two-slit screen is given by a sum of two terms, where the first term accounts for the particle passing through the slit C_1, while the second one accounts for it passing through the slit C_2. From the expression above, it is clear that interference must occur, since the probability of detecting the particle at point B involves the modulus square of a sum of amplitudes. In fact, we can arrive at the same conclusion by considering the case above in terms of the probability amplitudes and observed probabilities of Sect. II A. In particular, let us have a look at the probability distribution of the particle observed on the screen at B for the different cases. As can be clearly seen, the probabilities do not simply add up as P = P_1 + P_2. If, on the other hand, we first sum the probability amplitudes, i.e., ψ = ψ_1 + ψ_2, and then calculate the probability distribution as P = |ψ|², we get a third term, responsible for adding interference on top of the sum of probabilities P_1 + P_2 in Fig. 4 (d), thus leading to the correct result of Fig. 4 (a). The reader familiar with wave theory will immediately recognise that the probability distributions shown in Fig. 4 (a) and (d) occur for classical waves as well. Specifically, distribution (a) occurs for the interference of correlated (i.e., first-order coherent) classical waves, while distribution (d) results from incoherent classical wave mixing. This is yet another indicator of the wave nature of quantum particles 94.
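The difference between summing amplitudes and summing probabilities can be made concrete with the free-particle kernel. The Python sketch below is ours; the slit positions and times are hypothetical, and m = 1 = ħ. It builds ψ_1 and ψ_2 as products of free kernels through each slit and compares |ψ_1 + ψ_2|² with |ψ_1|² + |ψ_2|² at the symmetric detector point, where constructive interference doubles the incoherent sum.

```python
import cmath
import math

m = hbar = 1.0

def K(xb, xa, t):
    # free-particle kernel: sqrt(m/(2*pi*i*hbar*t)) * exp(i*m*(xb-xa)^2/(2*hbar*t))
    pref = cmath.sqrt(m / (2j * math.pi * hbar * t))
    return pref * cmath.exp(1j * m * (xb - xa) ** 2 / (2.0 * hbar * t))

xa = 0.0             # source position
x1, x2 = 0.5, -0.5   # slit positions (hypothetical geometry)
tau, T = 1.0, 2.0    # time to reach the screen, and total time

xb = 0.0             # detector at the symmetric point
psi1 = K(xb, x1, T - tau) * K(x1, xa, tau)   # amplitude via slit C1
psi2 = K(xb, x2, T - tau) * K(x2, xa, tau)   # amplitude via slit C2

P_coherent = abs(psi1 + psi2) ** 2           # |psi1 + psi2|^2, with interference
P_sum = abs(psi1) ** 2 + abs(psi2) ** 2      # P1 + P2, no interference
print(P_coherent, P_sum)  # at the symmetric point, P_coherent = 2 * P_sum
```

Scanning xb over the screen instead of fixing it at the symmetric point would trace out the full fringe pattern of Fig. 4 (a) versus the smooth sum of Fig. 4 (d).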
FIG. 5. Real (blue, solid line) and imaginary (red, dashed line) parts of the kernel K(x, 0) of a harmonic oscillator, as defined by Eq. (52), assuming that the initial condition for the oscillator is x_a = 0, and for a given time interval T = 1. Analogously to the convention adopted in Fig. 2, m = 1 = h has been assumed to plot this figure as well.

D. Path Integral for the Harmonic Oscillator
As our next example, we consider a simple harmonic oscillator, described by the Lagrangian L(ẋ, x, t) = (m/2)(ẋ² − ω²x²), where m is the mass of the oscillator and ω its characteristic resonance frequency. Using the Euler-Lagrange Eq. (7), the equation of motion reads ẍ + ω²x = 0, and the action S evaluated along the classical trajectory takes a closed form, in which T = t_b − t_a and sinc(x) = sin(x)/x. To calculate the path integral for the harmonic oscillator, we take a slightly different approach than the one taken above, which allows us to perform the calculations in an easier and more intuitive manner. Let us then assume that x_cl(t) and ẋ_cl(t) represent the classical path of the oscillator and its velocity, respectively. We can then express any other possible path taken by the oscillator as a deviation from the classical path, i.e., x(t) = x_cl(t) + y(t) and ẋ(t) = ẋ_cl(t) + ẏ(t), and choose the appropriate boundary conditions on y(t) and ẏ(t), i.e., y(t_a) = 0 = y(t_b) and ẏ(t_a) = 0 = ẏ(t_b), as required by the principle of least action.
If we apply this change of variables to the Lagrangian above, we get three different terms: the first has the form given by Eq. (36) and represents the Lagrangian for the classical trajectory, the second is the Lagrangian (36) for the deviation, and the third amounts to a vanishing term (by virtue of the Euler-Lagrange equations) and can therefore be neglected 2. Using this result, we can then factor the propagator K(x_2, x_1) for the harmonic oscillator into two terms, where S_cl = ∫_{t_a}^{t_b} dt L(x_cl, ẋ_cl), and F(T) is a function that depends upon the time interval solely. Because of our choice of boundary conditions for the deviations y(t) and ẏ(t), namely that they must both be zero at the endpoints, we can significantly simplify the calculation of the path integral above if we represent the various paths y(t) as a Fourier series. The requirement that y(t_a) = 0 = y(t_b) and ẏ(t_a) = 0 = ẏ(t_b), in fact, corresponds to saying that both y(t) and ẏ(t) are T-periodic functions, which can then be represented as a Fourier series. This representation of y makes it possible to specify a path through the coefficients a_n instead of through the values of y at any particular time t. This can be seen as a linear transformation of coordinates, whose Jacobian J is a dimensionless constant independent of ω, m, or h. Moreover, we don't really need to evaluate J explicitly, since we can always recover the correct normalisation factor at the end of our calculation by requiring that, in the limit ω → 0, where the Lagrangian of the harmonic oscillator reduces to that of a free particle, we find the appropriate normalisation coefficient for a free particle.
For this reason, we omit J from the following calculations, and restore the correct normalisation factor only at the very end. Before substituting Eq. (41) into Eq. (40), a couple more assumptions are needed in order to easily compute the path integral. First, we truncate the Fourier series to a finite number N of terms, so that we can calculate the integral recursively, as we did for the case of a free particle. We will then take the limit N → ∞ at the end of the calculation.
Putting everything together, we obtain the expression for the term F(T) after performing the trigonometric integrals 81. Notice, once more, that the integrals above are all Gaussian in the integration variables a_n. As we did for the case of the free particle, therefore, we can perform them individually, and then obtain the final result recursively. The result of a single integration over a_n is a factor of the form f_n(ω, T). Since there are no linear terms in a_n in any of the integrals above, the final result of the path integration will simply be proportional to the product of N independent terms f_n(ω, T), one for each value of n. We now need to take the limit of this expression for N → ∞. To do that, let us first rewrite the product using Eq. (47). The first product does not depend on ω and, together with the Jacobian and the terms 2/εT deriving from the various integrations, can be collected into an overall normalisation factor C. The second product, on the other hand, admits a finite limit. Putting everything together and evaluating C = [m/(2πihT)]^{1/2} from the free-particle limit (42), we get the final expression for the term F(T) and, after substituting this result into Eq. (39), we obtain the final form of the path integral for the harmonic oscillator, where S_cl is given by Eq. (37). The real and imaginary parts of the kernel K(b, a) for the harmonic oscillator are shown in Fig. 5.
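The infinite product appearing in this limit is the classical product representation of the sine function, ∏_{n≥1}[1 − (ωT/nπ)²] = sin(ωT)/(ωT) = sinc(ωT). A quick numerical check of this limit (ours, with a large but finite truncation):

```python
import math

# Partial product of  prod_{n=1..N} [1 - (w*T/(n*pi))^2]  versus its limit
# sin(w*T)/(w*T). The truncation error scales like 1/N, so a large N is used.
w, T = 1.3, 1.0
N = 200000

prod = 1.0
for n in range(1, N + 1):
    prod *= 1.0 - (w * T / (n * math.pi)) ** 2

print(prod, math.sin(w * T) / (w * T))
```

The two printed numbers agree to several decimal places, confirming the sinc limit used for F(T).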

IV. AN EXAMPLE FROM CLASSICAL OPTICS: PATH INTEGRAL DESCRIPTION OF LIGHT DYNAMICS IN AN INHOMOGENEOUS MEDIUM
We now take a look at how path integrals can be used to solve problems outside quantum mechanics, and apply this formalism to describe the propagation of the electromagnetic field inside a weakly inhomogeneous medium, using the case of gradient-index (GRIN) media as an explicit reference. This problem was solved by C. Gómez-Reino and J. Liñares in 1987 31. In their work, Gómez-Reino and Liñares first represent the electromagnetic field in a GRIN medium as a superposition of optical rays, and use this picture to calculate the propagator as a path integral over the rays' trajectories. They then provide an explicit expression for it, parametrised in terms of the so-called paraxial and field rays of an arbitrary (paraxial) optical system 92.
Here, we take a different approach, with which we want to show how the free propagation of light in a medium can be seen, essentially, as the evolution of a massive quantum particle in a harmonic oscillator potential with a suitably defined frequency, which, in general, can be z-dependent. We will identify such a massive particle with a photon propagating inside the medium, and all the possible trajectories the particle can take with the possible optical rays linking the initial (z = 0) and final propagation planes in the medium. We will then calculate the diffraction kernel by means of path integrals, essentially following the results of Sect. III D, and show how our calculations naturally suggest a representation of the diffraction kernel in terms of Hermite-Gaussian functions.
Without loss of generality, and for the sake of simplicity of exposition, we consider light propagating in a (1+1)-dimensional GRIN medium, characterised by a refractive index profile in which n_0 = n(0, z) is the background index and g(z) is a smooth-enough function that describes the evolution of the refractive index along the z axis.

A. Diffraction Kernel as a Path Integral
Let us first assume paraxial propagation of light in a medium described by the refractive index given by Eq. (53). In general, if we know the field distribution at an initial plane z = 0 to be E(x_a, y_a), we can calculate the field distribution at a plane z > 0 by means of the diffraction integral 95, where K(x_b, x_a, z) is the diffraction kernel, i.e., the Green's function of the paraxial equation, with the boundary condition that K(x, x_a, z) → δ(x − x_a) for z → 0. If we imagine the electromagnetic field propagating in the medium described by n(x, z) as a bundle of optical rays, then the diffraction kernel can be interpreted as a path integral over all the possible trajectories of the optical rays contained in the field, where λ̄ = λ/2π = 1/k, and the action functional S for an optical ray propagating in the medium described by Eq. (53) is defined accordingly. The Lagrangian L for an optical ray propagating in a medium with refractive index n(x, z) can be written as in Eq. (58). To justify our starting assumption, i.e., that the electromagnetic field in the medium can be seen as a collection of rays, we can use different arguments. One possibility would be to decompose the field into its plane-wave components and consider each plane wave as an optical ray 95. Another, more inspiring, possibility is to notice that the paraxial equation (55) is formally equivalent to the Schrödinger equation for a quantum particle of mass n_0 in a potential n(x, z), where the propagation direction z plays the role of time, and λ̄ = 1/k plays the role of h. Thanks to this formal analogy, we can identify a single optical ray with a (massive) photon propagating in the medium, and then easily understand how the diffraction kernel can be seen as a path integral over all the possible trajectories that such a quantum particle can take when evolving from the initial state to the final one. In the form above, the Lagrangian is of little use, since it provides no analytical solution for the trajectory of the
optical ray and, by extension, does not really allow for easy handling of the path integral in Eq. (56). To circumvent this problem, we can however assume that the medium is weakly inhomogeneous, so that Δn = n(x, z) − n_0 ≪ n_0 holds, and consider only rays propagating in a small region around the z-axis, which corresponds to assuming ẋ ≪ 1 (this is equivalent to assuming that the paraxial approximation holds). With these assumptions, we can Taylor expand the square root in Eq. (58) and the refractive index profile, obtaining the Lagrangian to leading order in ẋ and x. Notice that the optical Lagrangian is quadratic in both x and ẋ, and can therefore be interpreted as the Lagrangian of a (shifted) harmonic oscillator with mass n_0 and z-dependent resonance frequency Ω(z) = g(z). Thanks to this analogy, we can calculate the kernel K(x_b, x_a, z) in the same manner we did for the harmonic oscillator in the previous section, with the difference that now we need to account for the fact that the frequency of the oscillator varies with z. In particular, we can employ the same trick of writing the trajectories as x(z) = x_cl(z) + y(z), with the boundary condition that the deviation y(z) is zero at the endpoints z = 0 and z = z, and we can then write the propagator in analogy with Eq. (39). Following this line of reasoning, and representing the classical action S_cl(x_a, x_b, z) in terms of the so-called paraxial (H_1(z)) and field (H_2(z)) rays of a general optical system, we obtain, in our (1+1)-dimensional model, a result similar to the (2+1)-dimensional one obtained by Gómez-Reino and Liñares 31, namely 96, in which the dot stands for the derivative with respect to z, and H_{1,2}(z) are two independent solutions of the corresponding equation of motion. This result, however, is not particularly insightful, and the solution presented above (or its two-dimensional counterpart presented in Ref.
31) is quite hard to link intuitively to known results. For this reason, we present below a much clearer and more intuitive approach, which we hope will help the reader appreciate the universality and versatility of path integrals beyond quantum mechanics.
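Although the explicit equation of motion for H_{1,2}(z) is not reproduced here, for the GRIN profile above it takes the harmonic form Ḧ + g²(z)H = 0 (our assumption, consistent with the oscillator analogy of this section). A short Python sketch integrating this equation with a standard RK4 scheme for a constant gradient g, and checking against the exact cosine solution:

```python
import math

# RK4 integration of the assumed paraxial-ray equation H'' + g(z)^2 H = 0
# for a constant gradient g, with H(0) = 1, H'(0) = 0, so that the exact
# solution is the "paraxial ray" H1(z) = cos(g*z).
g = 2.0
dz, steps = 1e-3, 1000   # integrate up to z = 1

def deriv(H, V):
    # first-order system: H' = V, V' = -g^2 * H
    return V, -(g ** 2) * H

H, V = 1.0, 0.0
for _ in range(steps):
    k1h, k1v = deriv(H, V)
    k2h, k2v = deriv(H + 0.5 * dz * k1h, V + 0.5 * dz * k1v)
    k3h, k3v = deriv(H + 0.5 * dz * k2h, V + 0.5 * dz * k2v)
    k4h, k4v = deriv(H + dz * k3h, V + dz * k3v)
    H += dz * (k1h + 2 * k2h + 2 * k3h + k4h) / 6.0
    V += dz * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0

print(H, math.cos(g * 1.0))  # numerical vs exact solution at z = 1
```

Replacing the constant g with a z-dependent g(z) inside `deriv` would give the field and paraxial rays of a genuinely inhomogeneous profile, for which no closed form is generally available.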

B. Paraxial Propagation as a Harmonic Oscillator
Let us go back to the analogy with the Lagrangian (59) and calculate the path integral deriving from it. The fact that the frequency of the oscillator is now z-dependent does not really allow us to repeat one-to-one the calculations of Sect. III D. In particular, after we introduce the deviations y(z), we cannot represent F(z) in terms of a Fourier series anymore, since now the oscillator frequency depends on z as well. Instead, we just follow the same line of reasoning that we used to calculate the path integral for a free particle in Sect. III A, namely we divide the propagation "interval" Z = z_b − z_a into N smaller intervals of length ε = z_{i+1} − z_i, such that Z = Nε, so that we can write the action corresponding to the Lagrangian (59) with x_0 ≡ x_a, x_N ≡ x_b, and Ω_i² ≡ g²(z_i). This allows us to approximate the path integral in Eq. (56) using Eq. (24), and evaluate it first over a finite set of N trajectories, before recovering the actual result by taking the limit N → ∞. By doing this, moreover, we gain the advantage that in each infinitesimal interval ε the frequency Ω_i of the oscillator is constant, so that we can use the results for the standard oscillator within each interval of length ε. If we then introduce suitable auxiliary quantities, we can rewrite the path integral in Eq. (56) as N nested Gaussian integrals, and the diffraction kernel can be calculated accordingly. The explicit expression for K_N can be obtained by recursively applying a Gaussian integral identity to Eq. (66). This procedure leads to a quite compact result, in which a_N, p_N, and q_N are quantities that depend on β and α_i, and whose explicit expressions can be found in Ref. 97. We can then take the limit of this result for N → ∞ to get the final form of the propagator, where the proper limit for the quantities a_N, p_N, and q_N has been taken as instructed in Ref.
97 (see also the analogous discussion for the simple harmonic oscillator in Sect. III D), and the quantities s(z) and γ(z) are determined from the solution of a differential equation for ξ(z) = s(z) exp[iγ(z)], with Ω(z) ∈ ℝ. First of all, notice the similarity between this result and the form of the propagator for a harmonic oscillator, as given by Eq. (52). In fact, the result above reduces to the propagator of a harmonic oscillator with constant frequency ω by means of the substitution s(z) = n_0/ω and γ(z) = ωz. We can then interpret the propagation of light in a medium described by the refractive index (53) as being given, essentially, by the propagator of a harmonic oscillator with a suitably chosen z-dependent frequency (which depends on the longitudinal properties of the refractive index). The propagator, moreover, is uniquely determined by the amplitude and phase of the solution of a harmonic-oscillator-like equation of motion for ξ(z), where the z-dependent profile of the medium determines, again, the characteristic frequency of the oscillator. Following Ref. 97, we can make this connection more evident by rewriting the propagator above in terms of the eigenstates of the refractive index potential given by Eq. (53). This can be done by first rewriting sin φ(z) and cos φ(z) as complex exponentials, and then using Mehler's formula 98, with Z = exp(−iφ(z)) and with X and Y given in terms of x_b, n_0, γ_b, and λ̄, and of x_a, n_0, γ_a, and λ̄, respectively, to transform the resulting expression in terms of Hermite polynomials and finally obtain the kernel (we generalise z = 0 to a generic z = z_a for convenience), where ψ_n(x, z) are the eigenfunctions of a harmonic oscillator with z-dependent frequency Ω(z), and their explicit expression is given in terms of s(z) and γ(z), solutions of Eq.
(71). For light propagating in vacuum, g(z) = 0 (and, therefore, Ω(z) = 0), and the above expression of the propagator reduces to the well-known resolution of the diffraction kernel in terms of Hermite-Gaussian eigenstates of the paraxial equation 99. The expression above is moreover fully equivalent to that found by C. Gómez-Reino and J. Liñares 31, but it gives more insight into how path integrals can be used to easily solve complicated problems in optics, such as the propagation of light in a GRIN medium, whose analytical solution, even in the simplest cases, is not readily available. By establishing the analogy with a time-dependent harmonic oscillator, on the other hand, path integrals give a rather elegant and remarkably simple result, which can be intuitively hinted at using simple arguments, such as the propagation of light in free space.
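Mehler's formula, used above to resum the propagator into Hermite-Gaussian modes, can itself be checked numerically. In the form we assume here it reads ∑_n (ρⁿ/(2ⁿ n!)) H_n(X)H_n(Y) = (1 − ρ²)^{−1/2} exp[(2XYρ − (X² + Y²)ρ²)/(1 − ρ²)], with H_n the physicists' Hermite polynomials; the sketch below (ours) verifies it for real arguments:

```python
import math

# Numerical check of Mehler's formula:
#   sum_n (rho^n / (2^n n!)) H_n(x) H_n(y)
#     = (1 - rho^2)^(-1/2) * exp[(2*x*y*rho - (x^2 + y^2)*rho^2)/(1 - rho^2)]
rho, x, y = 0.4, 0.3, -0.7

def hermite(n, t):
    # physicists' Hermite polynomials: H_{k+1}(t) = 2 t H_k(t) - 2 k H_{k-1}(t)
    h0, h1 = 1.0, 2.0 * t
    if n == 0:
        return h0
    for k in range(1, n):
        h0, h1 = h1, 2.0 * t * h1 - 2.0 * k * h0
    return h1

lhs = sum(rho ** n / (2.0 ** n * math.factorial(n))
          * hermite(n, x) * hermite(n, y) for n in range(60))
rhs = math.exp((2 * x * y * rho - (x * x + y * y) * rho ** 2)
               / (1 - rho ** 2)) / math.sqrt(1 - rho ** 2)
print(lhs, rhs)
```

The series converges quickly for |ρ| < 1, which is exactly the regime in which the mode expansion of the kernel is useful.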

V. AN EXAMPLE FROM QUANTUM OPTICS: PATH INTEGRAL DESCRIPTION OF DEGENERATE PARAMETRIC AMPLIFIERS
Path integrals, in the form defined by Eq. (9), have also been used to approach various problems in nonlinear optics. In this case, as we will see throughout this section, the integral over all the possible trajectories of the quantum particle is replaced, in the formulation originally proposed by Hillery and Zubairy in 1982 38, by an integral over all possible configurations in the complex plane spanned by coherent states.
As an explicit example, we consider here the case of parametric amplification, whereby a signal incident on an optically nonlinear material is amplified 100,101. A pump field impinging on a material characterised by a χ^(2) nonlinearity (see Fig. 6) can combine with a signal field, leading to a depleted pump and an amplified signal (plus an idler field, in order to conserve energy).
In this section, the contents of which are based on Ref. 38, we apply the tools presented so far in this tutorial to the case of the degenerate parametric amplifier, thus providing an example of how path integrals can prove useful for solving problems in quantum optics.
For the sake of simplicity, we consider a single mode of the radiation field (the generalisation of this formalism to multimode fields is covered explicitly in Ref. 38), and we represent the field using coherent states, a natural choice here, as the Hamiltonians under consideration will be expressed in terms of creation and annihilation operators (â† and â, respectively). This assumption, in particular, will allow us to define the path integral in terms of coherent states, i.e., as a path integral in the field's phase space.

A. Propagator as Path Integral Over Coherent States
We begin by defining the time-evolution operator Û(t_b, t_a), which evolves the system's state at time t_a into the state at time t_b as |ψ(t_b)⟩ = Û(t_b, t_a)|ψ(t_a)⟩ 82. Denoting the time-dependent Hamiltonian of the system by H(t), the time-evolution operator is then Û(t_b, t_a) = T̂ exp[−i ∫_{t_a}^{t_b} H(t′) dt′], where T̂ is the time-ordering operator 102.
If the electromagnetic field is represented in terms of coherent states, we can readily give a definition of the propagator using the time-evolution operator defined above, where |α⟩ represents the set of coherent states, defined as the eigenstates of the annihilation operator, i.e., â|α⟩ = α|α⟩ 103.
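Two textbook properties of coherent states underpin everything that follows: they are eigenstates of â, and their overlap ⟨α|β⟩ = exp(α*β − |α|²/2 − |β|²/2) never vanishes (they form an overcomplete set). Both can be checked in a truncated Fock basis with the expansion |α⟩ = e^{−|α|²/2} ∑_n (αⁿ/√(n!)) |n⟩; a small Python sketch (ours):

```python
import cmath
import math

alpha, beta = 0.7 + 0.2j, -0.3 + 0.5j
NMAX = 60   # Fock-space truncation (ample for these amplitudes)

def coeffs(a):
    # Fock coefficients of |a>: c_n = e^{-|a|^2/2} a^n / sqrt(n!)
    norm = math.exp(-abs(a) ** 2 / 2.0)
    return [norm * a ** n / math.sqrt(math.factorial(n)) for n in range(NMAX)]

ca, cb = coeffs(alpha), coeffs(beta)

# (i) eigenstate property: coefficients of a|alpha> are sqrt(n+1)*c_{n+1},
# which must equal alpha*c_n for every n
maxdiff = max(abs(math.sqrt(n + 1) * ca[n + 1] - alpha * ca[n])
              for n in range(NMAX - 1))
print(maxdiff)

# (ii) overlap from the Fock sum versus the closed form
overlap = sum(ca[n].conjugate() * cb[n] for n in range(NMAX))
closed = cmath.exp(alpha.conjugate() * beta
                   - abs(alpha) ** 2 / 2 - abs(beta) ** 2 / 2)
print(overlap, closed)
```

The non-zero overlap of distinct coherent states is precisely what forces the overcomplete resolution of the identity used in the path-integral construction below.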

FIG. 6. Pictorial representation of optical parametric amplification.
A nonlinear material with second-order susceptibility (χ^(2)) is impinged upon by pump photons at angular frequency ω_p and signal photons at ω_s. Due to the excitation of the medium to a virtual energy level by the pump (shown at the bottom of the figure), the signal photon stimulates the emission of photons at ω_s and, due to energy conservation, produces an idler field at ω_i. For the case of a degenerate parametric amplifier, the signal and idler fields have the same frequency, i.e., ω_s = ω_i.
If we introduce the notation |α, t⟩ ≡ Û(t, 0)|α⟩, we can rewrite the expression above in a form that will prove useful in the remainder of this section. To understand why this form is useful for our purposes, let us show how it naturally appears when calculating expectation values of operators and, more generally, correlation functions, in the so-called P-representation. We assume that at time t = 0 the density matrix of the field can be written in diagonal form, where P(α) is the Glauber-Sudarshan P-function 104,105. In this representation, the expectation value of any operator Ô(t) in the Heisenberg picture can then be written accordingly. Using this expression in combination with the completeness relation of coherent states, where d²α = d Re α d Im α spans the complex plane defined by coherent states 103, any normal-ordered correlation function can then be expressed in terms of the propagator (76). For the simple case of ⟨â(t)⟩, this can be readily shown explicitly, where, to pass from the second to the third line, we have employed the completeness relation for coherent states. For correlation functions containing more creation and annihilation operators, more propagators, calculated at different times, will appear in the corresponding expressions 38. The propagator is then clearly an important quantity for evaluating such expectation values.
We now show how to write the propagator as a path integral. To start with, let us assume that the Hamiltonian of our system is normal-ordered, i.e., H = H(â†, â, t), and, as we did in Sect. III A, divide the evolution interval t_b − t_a into N slices of length ε = (t_b − t_a)/N. We can mirror this choice directly in Eq. (76) by inserting the completeness relation (79) N times and setting t → t_j = t_{j−1} + ε for each entry. The quantities ⟨α_j, t_j|α_{j−1}, t_{j−1}⟩ appearing in the resulting expression can be readily evaluated using the time-evolution operator defined above: noticing that t_j − t_{j−1} = ε ≪ 1, we can Taylor expand the time-evolution operator. With this result at hand, the individual terms ⟨α_j, t_j|α_{j−1}, t_{j−1}⟩ can be brought, after a simple algebraic manipulation 38, into a compact exponential form. Now that we have an expression for the propagator in terms of N discrete "trajectories" (i.e., N different coherent states |α_j⟩), we can take the limit N → ∞ and arrive at the definition of the path integral in the coherent-state representation, following the same line of reasoning used in Sect. III. The details of this calculation are reported in Ref. 38, and we refer the interested reader to it. The final result is Eq. (85), with the integration measure taken to mean an integral over all coherent states parameterised by τ, with α(t_a) ≡ α_a and α(t_b) ≡ α_b.

B. Propagator for Quadratic Hamiltonians
We now apply the above results, focusing on the class of Hamiltonians that are at most quadratic in the creation and annihilation operators. This class of Hamiltonians is of particular interest in quantum optics, since it describes second-order nonlinear phenomena within the so-called undepleted pump approximation 101, and it can sometimes also be used to describe harmonic generation in third-order nonlinear systems 101,106.
The most general quadratic Hamiltonian can be written in terms of f(t) and g(t), arbitrary but well-behaved time-dependent functions. The reader might notice a similarity between this Hamiltonian and that of a forced harmonic oscillator 2.
For this class of Hamiltonians, the integrals appearing in Eq. (85) are all Gaussian, and can be performed using the methods described in Sect. III. In particular, to calculate the path integral with the Hamiltonian defined above, one first needs to discretise the paths over the coherent states α (in the same manner in which we discretised the trajectories for the harmonic oscillator in Sect. III), then express the exponential appearing in Eq. (85) explicitly in terms of Re{α} and Im{α}, and notice that the corresponding integrals are Gaussian in both the real and the imaginary parts of α. After having calculated a single term, one can then calculate the remaining integrals iteratively, as we have done for the examples in Sect. III. After a lengthy but straightforward calculation, which is partially covered in Appendix A of Ref. 38, the propagator for a quadratic Hamiltonian assumes a Gaussian form in which α = {α_a, α_b}, t = {t_a, t_b}, and the functions F and Σ are defined in terms of an auxiliary function X(t), constrained by a differential equation with initial condition X(t_a) = 0. The functions Y(t) and Z(t) are instead defined in terms of X(t).

C. Propagator for Degenerate Parametric Amplification

A degenerate parametric amplifier is characterised by the quadratic Hamiltonian 101 given in Eq. (93), where ω is the angular frequency of the mode and κ is a coupling constant. This Hamiltonian can clearly be seen to fall into the class of Hamiltonians given in Eq. (86) if we identify ω(t) = ω, f(t) = κe^{2iωt}, and g(t) = 0. Although parametric amplification is formally a 3-wave process involving, as depicted in Fig.
6, a pump, a signal, and an idler field, it is common practice in nonlinear optics experiments to work within the so-called undepleted pump approximation 101,106, which treats the pump mode as a classical (bright) field, whose number of photons does not change significantly (i.e., the pump field remains undepleted) during the nonlinear interaction. Within this approximation, then, the Hamiltonian becomes effectively quadratic in the signal and idler modes, and can be described by Eq. (93) above. The explicit form of the propagator given below is then valid within this approximation. A fully quantised analysis of parametric amplification beyond the undepleted pump approximation in terms of path integrals has been presented by Hillery and Zubairy in Ref. 37.
In this case, with the form of ω(t), f(t), and g(t) given above, the functions X(t), Y(t), and Z(t) can be written in an explicit form (in particular, Eq. (90) can be solved analytically), giving Z(t) = 0 and closed-form expressions for X(t) and Y(t) 38. These time-dependent quantities can then be directly substituted into Eq. (87), finally yielding the propagator for the degenerate parametric amplifier:
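As a sanity check on the squeezing dynamics this propagator encodes, one can integrate the Heisenberg equations of motion numerically. The sketch below is illustrative only: it assumes the convention Ĥ = ω â†â + κ e^{2iωt} â² + κ e^{−2iωt} (â†)² with h = 1, and writes â(t) = μ(t) â(0) + ν(t) â†(0), for which the Bogoliubov condition |μ|² − |ν|² = 1 must hold and the mean photon number generated from vacuum is |ν(t)|² = sinh²(2κt):

```python
import numpy as np

def dpa_bogoliubov(omega, kappa, t_final, n_steps=4000):
    """Integrate the Heisenberg-picture Bogoliubov coefficients
    a(t) = mu(t) a(0) + nu(t) a^dag(0) for the degenerate parametric
    amplifier H = w a^dag a + k e^{2iwt} a^2 + k e^{-2iwt} a^dag^2 (h = 1)."""
    mu, nu = 1.0 + 0j, 0.0 + 0j
    dt = t_final / n_steps

    def rhs(t, mu, nu):
        # da/dt = -i[a, H] = -i w a - 2i k e^{-2iwt} a^dag
        ph = np.exp(-2j * omega * t)
        return (-1j * omega * mu - 2j * kappa * ph * np.conj(nu),
                -1j * omega * nu - 2j * kappa * ph * np.conj(mu))

    t = 0.0
    for _ in range(n_steps):  # classical fourth-order Runge-Kutta step
        k1 = rhs(t, mu, nu)
        k2 = rhs(t + dt/2, mu + dt/2*k1[0], nu + dt/2*k1[1])
        k3 = rhs(t + dt/2, mu + dt/2*k2[0], nu + dt/2*k2[1])
        k4 = rhs(t + dt, mu + dt*k3[0], nu + dt*k3[1])
        mu += dt/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        nu += dt/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        t += dt
    return mu, nu

mu, nu = dpa_bogoliubov(omega=1.0, kappa=0.3, t_final=2.0)
# commutator preservation: |mu|^2 - |nu|^2 = 1
print(abs(mu)**2 - abs(nu)**2)
# mean photon number from vacuum vs the analytic sinh^2(2*kappa*t)
print(abs(nu)**2, np.sinh(2*0.3*2.0)**2)
```

The exponential growth of |ν|² is the hallmark of parametric amplification of the vacuum (squeezed-vacuum generation) discussed above.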

VI. PREDICTING OPTICAL PROPERTIES OF MATTER USING PATH INTEGRALS
The examples discussed above only deal with a single particle, either freely propagating, or interacting with a suitable potential. In all these cases, the complexity of the problem is low enough to allow analytical solutions of the path integrals. As soon as many-body effects are taken into account, as is the case, for example, for atoms and molecules, handling the path integral analytically becomes impossible, and numerical techniques for its evaluation are necessary. A particularly successful computational method for this task has been Path Integral Monte Carlo (PIMC). The interested reader can download the PIMC++ open-source code for Path Integral Monte Carlo simulations developed by David Ceperley's group, available, together with its documentation, at https://pimc.soft112.com.
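To make the idea concrete, the following toy sketch (unrelated to PIMC++; a minimal illustration with m = ω = h = k_B = 1 assumed) samples closed imaginary-time paths of a single harmonic oscillator with the Metropolis algorithm and estimates the thermal expectation value ⟨x²⟩:

```python
import numpy as np

def pimc_x2(beta=4.0, M=32, n_sweeps=10000, step=0.5, seed=1):
    """Toy path-integral Monte Carlo for a 1D harmonic oscillator
    (m = omega = h = kB = 1): sample closed imaginary-time paths
    x_0 ... x_{M-1} with the Metropolis rule and estimate <x^2>."""
    rng = np.random.default_rng(seed)
    dtau = beta / M
    x = np.zeros(M)
    est, n_meas = 0.0, 0
    for sweep in range(n_sweeps):
        for k in range(M):              # single-bead Metropolis updates
            xl, xr = x[(k - 1) % M], x[(k + 1) % M]
            xnew = x[k] + step * rng.uniform(-1, 1)

            def s(xk):                  # local piece of the primitive action
                return ((xk - xl)**2 + (xr - xk)**2) / (2*dtau) \
                       + dtau * 0.5 * xk**2

            if rng.random() < np.exp(s(x[k]) - s(xnew)):
                x[k] = xnew
        if sweep > n_sweeps // 5:       # discard equilibration sweeps
            est += np.mean(x**2)
            n_meas += 1
    return est / n_meas

print(pimc_x2())
```

For β = 4 the exact value is (1/2)coth(β/2) ≈ 0.52, and the estimate should land close to it, up to statistical and imaginary-time-discretisation errors; production codes such as PIMC++ add far more sophisticated multi-bead moves and estimators on top of the same principle.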
In this section, we present some of the results obtained using PIMC to predict the exact linear and nonlinear optical properties of simple systems as a function of frequency. These results, recently demonstrated by Tiihonen et al. 107, constitute one of the first attempts to use PIMC to investigate the optical properties of matter. There, however, only very simple systems were studied, such as the hydrogen (H) and hydrogen-like (Li⁺ and Be²⁺) atoms, the helium atom He and ion He⁺, the hydrogen molecule H₂ and molecular ion H₂⁺, the hydrogen-helium (HeH⁺) and hydrogen-deuterium (HD⁺) molecules, and the positronium atom. These systems have been studied employing different methods, including finite-field simulations for static polarisabilities 65, polarisability estimators for simulations without the external field 74, static field-gradient polarisabilities 66, and, finally, dynamic polarisabilities and van der Waals coefficients 73. The last one, in particular, is of great interest, as the macroscopic electric susceptibility is constructed starting from the dynamic, frequency-dependent polarisabilities.
The results presented in the aforementioned references, and summarised in this section, represent an exceptional benchmark to gauge the capabilities of PIMC in providing accurate and multidimensional information about the optical properties of matter, and hint at the potential impact PIMC could have in photonics, as a new modelling platform to calculate exactly the optical response of exotic materials, such as 2D materials, the understanding of which is still in its infancy 108. A great advantage PIMC would give, for example, is the possibility to investigate the nonlinear properties of 2D materials far from equilibrium, a physical regime that is currently poorly understood. This could be of particular importance for the nonlinear optical properties of 2D materials, since for such materials non-equilibrium dynamics can be easily reached at relatively low optical powers.

A. Polarisability in PIMC
The optical response of matter 95, frequently described in terms of the matter polarisation vector P, is completely determined by the electron dynamics only within the limits of the Born-Oppenheimer (BO) approximation, in which it is implicitly assumed that the nuclear dynamics occurs on a much longer timescale and can therefore be neglected 91. In general, however, when thermal effects are explicitly taken into account, this approximation might not be valid anymore, as the role of the nuclei becomes more prominent as the temperature of the system is increased. To properly take these effects into account, then, an approach able to go beyond BO is needed to get the correct results. PIMC, then, is the most viable, if not the only possible, approach to systematically study the optical properties of materials in this particular regime (which, for example, includes the calculation of such properties at room temperature).
While optics makes extensive use of the (linear and nonlinear) susceptibility tensor to describe the optical properties of a material 101, from a PIMC perspective it is better to work with polarisabilities, as they can be defined quite easily in terms of path integrals. For example, the ij-component of the linear (i.e., dipole) polarisability tensor, namely α_ij, can simply be calculated as the (functional) derivative of the expectation value (i.e., first-order correlator) of the dipole moment µ oriented, say, along the i-direction, with respect to an external electric field oriented along the j-direction 107, where the expectation value of an operator Ô is defined with respect to the density matrix operator of the system in the so-called imaginary time representation 107, with N a suitable normalisation constant, S(T) = (Ĥ₀ − Q · E)/k_B T the temperature-dependent action of the system (described by the Hamiltonian Ĥ₀) interacting with the electromagnetic field E, and Q the electric moment operator 107. Notice that, to represent the expectation value above as a path integral, it is computationally more convenient to transform the complex phase factor exp(iS/h), naturally arising from path integrals, into an exponentially decaying term exp(−S/h) by means of a Wick rotation 109 (i.e., a change of reference frame from real to imaginary time, namely t → iτ), since it is much easier to deal numerically with real-valued, exponentially decaying terms, rather than complex-valued, rapidly oscillating ones. In fact, the everywhere-positive exponential function can be considered as a probability distribution for sampling the imaginary-time paths with the Metropolis Monte Carlo algorithm in the canonical (NVT) ensemble. PIMC makes use of several different strategies to compute the quantities defined above. The zero-field polarisability estimators, for example, are Hellmann-Feynman-type operators 107, whose expectation values give the
polarisabilities of various types and orders, without the need to include an external electric field in the simulation.
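The idea behind the alternative, finite-field strategy is easy to demonstrate even without any Monte Carlo machinery: apply a small static field ±E, compute the induced dipole ⟨x⟩, and take a centred finite difference. The sketch below is a deliberately simple stand-in, not a PIMC calculation: it diagonalises a 1D harmonic "atom" on a grid (h = m = ω = q = 1 assumed, for which the exact static polarisability is α = 1/ω² = 1):

```python
import numpy as np

def ground_state_dipole(E, n=400, L=10.0):
    """Ground-state <x> of H = p^2/2 + x^2/2 - E*x on a uniform grid,
    via finite-difference diagonalisation (h = m = omega = q = 1)."""
    x = np.linspace(-L/2, L/2, n)
    dx = x[1] - x[0]
    # three-point-stencil second-derivative matrix (Dirichlet boundaries)
    lap = (np.diag(np.full(n-1, 1.0), -1) - 2*np.eye(n)
           + np.diag(np.full(n-1, 1.0), 1)) / dx**2
    H = -0.5*lap + np.diag(0.5*x**2 - E*x)
    w, v = np.linalg.eigh(H)
    psi0 = v[:, 0]                       # normalised grid wavefunction
    return float(np.sum(x * psi0**2))    # discrete <x>

# centred finite difference: alpha = d<x>/dE at E = 0
dE = 1e-3
alpha = (ground_state_dipole(+dE) - ground_state_dipole(-dE)) / (2*dE)
print(alpha)   # close to the exact value 1/omega^2 = 1
```

In actual PIMC finite-field runs, the exact diagonalisation above is replaced by thermal path sampling in the presence of the field, but the finite-difference logic is the same.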
Computation of dynamic multipolar polarisabilities, on the other hand, is more sophisticated: it requires one to first determine, through PIMC, the multipole-multipole correlation function in the so-called imaginary time representation, then analytically continue it to obtain the spectral function in the real-time domain (a highly non-trivial task, as it is an ill-posed numerical problem), and finally compute both the real and imaginary parts of the polarisability.

B. From Polarisability to Susceptibility
Assuming a very low density gas phase of these small atoms or molecules, one can evaluate the corresponding macroscopic susceptibility from the atomic and molecular polarisabilities. At higher densities, interactions and chemical reactions of these moieties will change the polarisabilities, the composition, or both, which consequently leads to density-dependent susceptibilities.
With increasing density and, in particular, in the case of liquids and solids, where several electrons interact strongly, the Fermion Sign Problem (FSP) emerges. This is the notorious challenge in the Monte Carlo evaluation of essential expectation values of identical fermions, and it remains, in part, an open problem. The problem emerges from the evaluation of the difference of two large contributions with opposite signs, resulting from the sign of the density matrix (or wave function). There are practical solutions to the FSP in PIMC simulations for many cases 20-23,107, but no general, robust approach yet. Most of the solutions are based on finding approximate or iterative nodal surfaces of the density matrix.
The real-time path integral (RTPI) approach 74-77 may offer a remedy in the future, as it works directly with the wave function with explicit sign, even though only at zero Kelvin. However, since in most cases of interest electrons at room temperature can be well described by a zero-Kelvin model, this is not an issue. Moreover, including temperature in RTPI should, in principle, be possible.
Overall, the FSP represents a challenging problem, and sits at the forefront of current research in PIMC methods. The interested reader can find in Refs. 20-23, and references therein, a good starting point to delve deeper into this fascinating, yet challenging, problem. In the examples that we are going to present below, however, we have purposely selected simple systems, with no more than two electrons in each moiety, where at low temperatures we can assume the system to be in its singlet ground state, thus enabling us to directly label the light fermions with opposite spins and avoid the FSP entirely.

FIG. 7. Real (top) and imaginary (bottom) part of the dynamic polarisability as a function of frequency for hydrogen (H, left) and helium (He, right) atoms at T = 2000 K, within the BO approximation. In these plots, a blue line represents the dipole polarisability α₁(ω), while a green line represents the quadrupole polarisability α₂(ω). The black dots represent the reference simulations of polarisabilities calculated at T = 0 K. Dotted lines correspond to big time steps, i.e., Δτ = 0.1 for hydrogen and Δτ = 0.025 for helium, while solid lines correspond to small time steps, namely Δτ = 0.05 for hydrogen and Δτ = 0.0125 for helium. In PIMC, time steps are defined, in imaginary time, as Δτ = 1/(M k_B T), where k_B is the Boltzmann constant, T is the temperature of the system, and M is the so-called Trotter number, which controls the accuracy of the discretisation of the propagator exp(−Δτ Ĥ). In these plots, the frequency is measured in Hartree (1 Ha = 2 Ry = 27.2 eV). To get a better grasp of the span of the frequency axis of the plots above, a frequency of ω = 0.1 Ha corresponds to a wavelength λ = 456 nm, while a frequency of ω = 1 Ha corresponds to a wavelength λ = 45 nm. This figure has been taken from Ref. 107 with permission of the author.

C. Some Examples
In this section we present some results concerning simple systems, such as one- and two-electron atoms and molecules, obtained using PIMC. In particular, we focus our attention on the electric polarisability, to show the capabilities of this computational method, and how it could be used to provide a new, interesting platform for photonics, to calculate the optical properties of interesting materials beyond the standard approaches and approximations frequently used in optics 95,101. In both examples below, the polarisability is calculated from the Kubo-like formula for the time-dependent optical susceptibility 91, where the square brackets denote a commutator, the angled brackets denote instead the thermal average [see Eq. (97)], Θ(t) is the Heaviside step function 81, Q is a multipole operator (representing dipole or quadrupole interactions, for the cases presented below), and G_ret(t) is the retarded Green's function of the perturbation generating the optical susceptibility. The link between susceptibility and polarisability is better understood in Fourier space, where the relation given in Ref. 73 holds, with the spectral function A(ω) related to the susceptibility via the Kramers-Kronig relation 95, i.e., A(ω) = Im{χ(ω)}. PIMC is then used to calculate the expectation value in Eq. (98) (or, equivalently, the retarded Green's function), and from it both the real and imaginary parts of the polarisability are then calculated. Notice, moreover, that the real part of the polarisability Re{α(ω)} corresponds to the actual optical response of the material, while its imaginary part Im{α(ω)} is, basically, the spectral function A(ω).
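The Kramers-Kronig link between A(ω) and the real part of the response can be checked numerically. The sketch below assumes a single-resonance Lorentz model (a convenient stand-in for PIMC data) and reconstructs Re{χ} from Im{χ} through the principal-value integral Re χ(ω) = (2/π) P∫₀^∞ ω′ Im χ(ω′)/(ω′² − ω²) dω′:

```python
import numpy as np

# single-resonance Lorentz model (omega0 = omega_p = 1, gamma = 0.2 assumed)
w0, wp, gamma = 1.0, 1.0, 0.2

def im_chi(w):
    return wp**2 * gamma * w / ((w0**2 - w**2)**2 + (gamma*w)**2)

def re_chi_exact(w):
    return wp**2 * (w0**2 - w**2) / ((w0**2 - w**2)**2 + (gamma*w)**2)

def re_chi_kk(w, w_max=200.0, dw=0.002):
    """Kramers-Kronig reconstruction of Re{chi} from Im{chi}.
    The midpoint grid straddles the pole at w' = w symmetrically,
    so the principal value is captured by plain summation."""
    w_grid = (np.arange(int(w_max/dw)) + 0.5) * dw
    integrand = w_grid * im_chi(w_grid) / (w_grid**2 - w**2)
    return 2.0/np.pi * np.sum(integrand) * dw

print(re_chi_kk(0.7), re_chi_exact(0.7))   # the two values agree closely
```

The same transform, applied to a PIMC-computed spectral function instead of the analytic Lorentzian, yields the real part of the polarisability discussed in the text.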
The first example we present is given in Fig. 7, which depicts the real (top) and imaginary (bottom) part of the dipole (blue line) and quadrupole (green line) polarisabilities α₁(ω) and α₂(ω) for the case of atomic hydrogen (left) and helium (right) at T = 2000 K. The simulations have been performed using PIMC with two distinct characteristic time steps Δτ, one large (dotted green and blue lines in Fig. 7) and one small (solid green and blue lines in Fig. 7), to rule out possible numerical artefacts 73. The results have then been compared with polarisability data available in the literature, obtained using a T = 0 K approach with methods other than PIMC 110. This reference is indicated by black dots in Fig. 7. For low frequencies, i.e., in the region 0 < ω < 0.4 Ha (corresponding to wavelengths λ > 114 nm), PIMC simulations reproduce the standard T = 0 K result very well. This essentially means that, for the hydrogen atom, the eigenstates at T = 2000 K are essentially the same as those calculated at T = 0 K, and, as a consequence, the low-frequency polarisability in Fig. 7 fits perfectly the results obtained at zero temperature 110.
As the frequency is increased beyond ω = 0.5 Ha (corresponding, for the case of hydrogen, to its ionisation energy), the zero-temperature approach fails to faithfully describe the optical properties of the system. In this high-frequency range, i.e., from the near infrared to the deep UV, PIMC instead produces a more accurate and reliable result. This is an important result. Usually, the models used in photonics to describe the optical properties of various materials are based on a zero-temperature approach. Although this is perfectly fine (and enough) for most materials, since the wavelength at which non-equilibrium effects start to become non-negligible is either in the far IR or the deep UV, this is not the case for 2D materials, which can easily be driven out of equilibrium in the wavelength range of interest for different applications (typically ranging from THz to visible). In this case, traditional, zero-temperature-based methods would fail to correctly describe the influence of out-of-equilibrium effects on the optical properties of 2D materials, while PIMC is able to account for these contributions exactly, as it naturally takes them into account.
The second example deals with a slightly more complicated system, namely molecular hydrogen H₂. Figure 8 depicts the real (top) and imaginary (bottom) part of the dipole (left) and quadrupole (right) polarisabilities α₁(ω) and α₂(ω), respectively, for several different values of the ambient temperature T. A comparison with reference simulations obtained at T = 0 K from Ref. 110 is also shown (black dots in Fig. 8).
Notice, in particular, the deviation from the zero-temperature behaviour in the small-frequency regime (insets in the top part of Fig. 8). This discrepancy is due to the fact that, in the case of molecules, the nuclear contribution (mainly nuclear vibrations induced by finite temperature) cannot be neglected, as it significantly changes the behaviour of the polarisability at low frequencies. This change amounts only to a shift of the value of Re{α₁(ω)} for the dipole polarisability (top left inset in Fig. 8), while it radically changes the form of Re{α₂(ω)} at low frequencies for the case of the quadrupole polarisability (top right inset in Fig. 8). Notice, moreover, that in this case temperature also has a significant effect in shaping the imaginary part of the quadrupole polarisability at very low frequencies (lower right inset in Fig. 8).
Although they show the optical properties of very simple systems, such as H, He and H₂, the examples presented in this section offer good insight into the capabilities and potential of PIMC for calculating the optical properties of even more complicated systems. The large frequency range PIMC can span (from THz to the deep UV) represents an enormous advantage for testing the current models for susceptibilities and polarisabilities of photonic materials, and could in particular provide tremendous insight into the nonlinear properties of exotic materials, such as epsilon-near-zero materials, as it provides a single, ab-initio method to calculate the optical response in an exact manner, without introducing approximations or different models for different frequency regimes.
Besides that, the examples provided in this section also help the reader get a better grasp of the breadth of path integrals, showing how they not only represent an elegant, alternative formulation of quantum mechanics, but also how their use in atomic and molecular physics can be pivotal for the accurate description of the properties of matter.

VII. PATH INTEGRAL FOR CLASSICAL AND QUANTUM FIELDS
In the previous sections, we have seen how it is possible to describe the dynamics of a quantum particle (or a collection thereof) in terms of path integrals. In this section, instead, we extend this formalism to the case of quantum fields, where the building blocks for the path integral are not particles and all the trajectories they can possibly take to evolve from the initial to the final state, but rather all the possible configurations of a field, which serve the same purpose.
A quantum field can be seen as a continuous collection of harmonic oscillators, one for each point in space where the field is defined. As such, the results presented for the harmonic oscillator in Sect. III D can serve as guidance to better understand this section and, more generally, the properties of a QFT. Some of the language and tools used here, however, are slightly different from those introduced above, and it is worthwhile to introduce some key concepts and methods proper to QFT before specialising our attention to the electromagnetic field. To do that in the most beginner-friendly manner, we use the case of a scalar field as a guide to construct the path integral, and show how interactions can be introduced in such a theory, and what the consequences of doing that are. Then, we will qualitatively and intuitively extend these results to the case of a vector field, and use those results to look at the linear and nonlinear dynamics of the electromagnetic field in an arbitrary medium.
This section follows closely the notation and line of reasoning of Ref. 109. For a more formal discussion of path integrals in field theory, we invite the interested reader to consult Refs. 8, 9, 88, and 109. For the sake of simplicity, natural units c = h = m_e = 1 are implicitly assumed. If needed, these constants can easily be reinserted in the final results of calculations by means of simple dimensional analysis. Moreover, summation over repeated indices (Einstein summation convention) is also understood.

A. Path Integral for a Scalar Field
To start with, let us then consider a free scalar field φ(x) (where x is a shorthand for {r, t}), whose Lagrangian density is given by

L₀ = −(1/2) η^{μν} ∂_μφ ∂_νφ − (1/2) m² φ²,   (100)

where η_{μν} = diag(−1, 1, 1, 1) is the Minkowski flat-space metric tensor 109, and the subscript 0 indicates that the Lagrangian is that of the free field, i.e., φ(x) is not interacting with either itself or its environment. Before going any further, it is instructive to discuss the physical meaning of the parameter m appearing in the above equation. In QFT, it is customary to give m the meaning of mass of the scalar field φ, as it could represent a scalar massive boson such as the Higgs boson 89, or even a scalar fermion in more complicated theories involving dark matter 111. In optics, on the other hand, we frequently refer to the case m = 0, as the electromagnetic field is composed of photons, which are massless bosons 89. However, when propagating inside a medium, the electromagnetic field acquires a mass, proportional to the refractive index of the medium 92. This can easily be proven by recalling that the dispersion relation for a monochromatic electromagnetic field propagating inside a homogeneous medium characterised by a refractive index n is given by 95 k² = ω²n². The dispersion relation for a massive scalar field φ, on the other hand, can be readily calculated from its equation of motion (i.e., the Klein-Gordon equation 109), which can be readily derived from Eq.
(100) using the Euler-Lagrange equations for fields, which leads to (∇² − ∂²_t − m²)φ = 0. If we now assume plane-wave solutions for φ, the dispersion relation for a massive scalar field can be written as k² = ω² − m². Notice now that, upon defining m² = ω²(1 − n²), the dispersion relation for a massive scalar field φ matches that of an electromagnetic field inside a homogeneous medium. This allows us to relate the mass (squared) of the scalar field to the medium's susceptibility 95: since n² = ε = 1 + χ, we have m² = ω²(1 − n²) = −ω²χ, i.e., a mass term controlled by the susceptibility. In this sense, therefore, while a massless scalar field describes a (scalar 112) electromagnetic field in vacuum, a massive one describes instead the (scalar) electromagnetic field in a homogeneous medium.
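The chain of identifications above can be checked symbolically. The short sketch below substitutes a one-dimensional plane wave into the Klein-Gordon equation and recovers k² = ω² − m² (natural units assumed):

```python
import sympy as sp

x, t, k, w, m = sp.symbols('x t k omega m', real=True)

phi = sp.exp(sp.I * (k*x - w*t))     # 1D plane-wave ansatz
# Klein-Gordon operator applied to phi: (d^2/dx^2 - d^2/dt^2 - m^2) phi
kg = sp.diff(phi, x, 2) - sp.diff(phi, t, 2) - m**2 * phi

# dividing out the plane wave leaves the dispersion relation
disp = sp.simplify(kg / phi)          # -> -k**2 + omega**2 - m**2
print(disp)
```

Setting disp = 0 gives k² = ω² − m², and substituting m² = ω²(1 − n²) reproduces k² = ω²n², exactly as argued in the text.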
We now proceed to the definition of the path integral for the scalar field φ. To do so, let us notice that the Lagrangian density in Eq. (100) has a similar structure to that of a harmonic oscillator, as given in Eq. (36). The first term in Eq. (100), in fact, is quadratic in the derivative of the field, and can be associated with its "kinetic energy", while the second term, quadratic in the field, is its "potential energy". Analogously to what we have done in the previous sections, we can introduce the action of the field as

S₀[φ, J] = ∫ d⁴x [L₀ + J(x) φ(x)],   (102)

where d⁴x = dt d³r and we have introduced the source term J(x) (analogous to the external force acting on a harmonic oscillator 2), which, for the sake of this tutorial, can be imagined to simply be a convenient mathematical tool, introduced to make calculations easier. The physical meaning of J(x), as its name suggests, is to represent a source of the field. The x-dependence of the field φ(x) is henceforth, if not specified otherwise, implicitly assumed, i.e., we define φ ≡ φ(x), to make our notation less cumbersome. We can then define the path integral for a free scalar field in the same manner as we did in Eq. (9), thus obtaining

Z₀(J) = ∫ Dφ exp(i S₀[φ, J]),   (103)

where the functional measure Dφ(x) plays the same role as the sum over all possible trajectories Dx(t) in Eq.
(9), with the conceptual difference, however, that in this case the sum (integral) is extended to all possible configurations of the function φ(x). Practically, this means that instead of slicing time into small intervals (as we did in the previous sections for the case of quantum particles), we divide the volume of spacetime V into elementary cells, i.e., V = ⋃_k V_k, with V_k = δ⁴, and assume that in each V_k the field φ is constant, i.e., φ(t_i, x_j, y_n, z_m) = φ(x_k) ≡ φ_k (this implies that, since the field φ is a function of 4 variables, each taking N possible values, this discretisation of spacetime results in a total of N⁴ field values). The integration measure can then be written as Dφ = ∏_k dφ_k. In doing so, each field configuration φ_k becomes an independent integration variable, and the integral ∫Dφ reduces to a usual Riemann/Lebesgue integral in N⁴ dimensions.
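This reduction of ∫Dφ to an ordinary multidimensional integral is easiest to appreciate in a finite-dimensional Gaussian toy model. The sketch below (a Euclidean, i.e., Wick-rotated, quadratic "action" S = ½φᵀAφ is assumed, so that the integrand is real and decaying) checks the closed form ∫dⁿφ e^{−S} = (2π)^{n/2}/√(det A) against brute-force quadrature for n = 2:

```python
import numpy as np

A = np.array([[2.0, 0.5],
              [0.5, 1.0]])            # positive-definite "action" matrix

# analytic result for the n-dimensional Gaussian "partition function"
n = A.shape[0]
Z_exact = (2*np.pi)**(n/2) / np.sqrt(np.linalg.det(A))

# brute-force quadrature over a discretised "field configuration" space
h = 0.02
grid = np.arange(-8, 8, h)
p1, p2 = np.meshgrid(grid, grid, indexing='ij')
S = 0.5*(A[0, 0]*p1**2 + 2*A[0, 1]*p1*p2 + A[1, 1]*p2**2)
Z_num = np.sum(np.exp(-S)) * h**2

print(Z_exact, Z_num)   # the two values agree to high accuracy
```

In the field-theory case, A becomes the (huge but still finite, after discretisation) matrix representing the quadratic part of the action, which is precisely why the Gaussian integration tricks of the previous sections carry over.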
The above definition of the path integral is valid both for a classical and for a quantum field. For the latter, the field φ(x) needs to be equipped with a suitable set of commutation relations, which define the quantum nature and structure of the algebra behind it. To do that, we can revert to the analogy with the harmonic oscillator, and use it to find a straightforward definition of such quantities. First, we discretise the field φ(x) and associate a harmonic oscillator to each point in space. Then, for each oscillator, we introduce the usual commutation relations between position and momentum, i.e., [q_a(t), p_b(t)] = iδ_ab, where δ_ab simply indicates that the oscillators are mutually decoupled. If we then take the continuum limit (to revert back to the field φ(x)), the commutation relations become those of the field φ itself with its canonically conjugate momentum π(x), i.e., [φ(r, t), π(r′, t)] = iδ³(r − r′). Equation (103), moreover, has implicitly been defined such that the time interval used for the time integration in the action spans the whole real axis, i.e., −∞ < t < ∞. Moreover, we require that Z₀(J) is normalised in such a way that Z₀(0) = 1.
For a classical field, Z₀(J) represents its partition function, from which, in analogy with statistical mechanics, it is possible to derive all the relevant quantities of the classical field, such as energy, momentum, entropy, and Green's function.
For a quantum field, it represents the vacuum-to-vacuum correlation function, which allows one to calculate transition probabilities and correlation functions of field operators with respect to the ground state (or vacuum), which is typically a stable and well-defined state of the QFT at hand. In other terms, Z₀(J) for a quantum field only describes a QFT at equilibrium (i.e., at zero temperature). To deal with non-equilibrium systems, a further step in the definition of the path integral is needed, which is beyond the scope of this tutorial. The interested reader is referred to Ref. 113 for further information about non-equilibrium QFTs.

B. The Propagator for a Free Scalar Field
To get familiar with the formalism of QFT, in this section we calculate the propagator for the free scalar field, which constitutes, to a certain extent, the QFT analogue of the free quantum particle described in Sect. III A. To this aim, let us first introduce the Fourier transform of the field, where kx = k · r − ωt. If we substitute this expression of φ into Eq. (102), we arrive, after some straightforward calculations, at an expression for the action in Fourier space, in which k² = |k|² − ω², and J(k) is the Fourier transform of J(x). The trick to use here for calculating the path integral (103) with the action above is the same as the one we employed in Sect. III A: namely, we need to discretise the integral appearing in the action S₀, and consider a discrete set of field configurations, such that ∫Dφ can be written as a product of integrals. If we do so, we can then use the fact that the integrand above is Gaussian in φ(k) and repeatedly apply Gaussian integration to it to get the final result. Alternatively, we can change the path-integration variable, by performing the change of variables χ(k) = φ(k) − J(k)/(k² + m²), such that Dχ = Dφ, to isolate, in the action above, a term that depends only on the field χ from a term independent of it. If we now substitute this expression into Eq. (103), together with Dχ = Dφ from the change of field variables, the first term can be brought out of the path integral, as it is independent of χ. Notice, moreover, that the remaining path integral can now be evaluated using a generalisation of the Gaussian integration formula (see Appendix A), and that it essentially amounts to a normalisation factor (which, for convenience, will henceforth be neglected). We then arrive at the final result for the path integral, which can be written in a form very similar to that of a harmonic oscillator, where a small imaginary part ε ≪ 1 has been introduced to regularise the
ω-integral and allow its calculation as a contour integral in the complex plane 81. The quantity G(x − y) is the propagator (Green's function) for the free scalar field, which solves the corresponding equation of motion (derived from the Euler-Lagrange equations of the field). The Green's function can also be interpreted as the vacuum-to-vacuum two-point correlation function ⟨0| T φ̂(x) φ̂(y) |0⟩ between the field at point x and the field at point y 109, where T is the usual time-ordering operator; the two-point correlation function itself admits a definition in terms of path integrals. From this definition, in analogy with the derivative of the exponential of a function, we can express the field inside the path integral through the symbol δ/δJ(y), which stands for the functional derivative, a generalisation of the concept of derivative to functionals 5, defined by an operative relation. With this in mind, then, we can formally introduce in Eq. (113) the substitution φ(x) → −i δ/δJ(x), thus obtaining an alternative definition for the propagator of a free field.
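For concreteness, the closed forms this calculation leads to can be summarised as follows (written here in the metric and ε-prescription conventions suggested by Ref. 109 and by the k² + m² denominators above; overall signs may differ in other conventions):

```latex
Z_0(J) = \exp\!\left[\frac{i}{2}\int d^4x\, d^4y\; J(x)\, G(x-y)\, J(y)\right],
\qquad
G(x-y) = \int \frac{d^4k}{(2\pi)^4}\, \frac{e^{ik(x-y)}}{k^2 + m^2 - i\varepsilon},
```

with the propagator solving the sourced Klein-Gordon equation, \((\nabla^2 - \partial_t^2 - m^2)\, G(x-y) = \delta^{(4)}(x-y)\) up to a convention-dependent sign, consistent with the equation of motion quoted in Sect. VII A.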

VIII. NONLINEAR INTERACTIONS THROUGH PATH INTEGRALS
The results above are suitable for describing the free propagation of a field, such as, for example, the electromagnetic field in either free space (i.e., vacuum), or inside a homogeneous or inhomogeneous medium. However, neither the Lagrangian (100) nor the path integral (103) takes into account the possible self-interactions of the field, or any other form of interaction (such as the field-environment interaction, for example). In general, field interactions are very important, because they allow us to gain valuable insight into how the field behaves in different environments. The self-interaction of a field, in particular, plays a big role in photonics, where it is simply known as nonlinear optics. If we then want to use path integrals to describe quantum nonlinear optics, we need a way to include the effect of interactions in our theory. In QFT this is typically done by adding to the Lagrangian density a so-called interaction term, i.e.,

L = L₀ + L_int,   (118)

where L_int contains information about the nature of all the interactions taking place in the considered system. For the sake of simplicity of exposition, and because they play an essential role in nonlinear optics, let us focus our attention on the case of a self-interacting field, where the interaction part is polynomial in the field φ, and therefore the interaction Lagrangian can be written as 90

L_int = −(λ/n!) φⁿ,   (119)

where λ is a suitable coupling constant describing the strength of the interaction. Notice that the first nontrivial interaction is given by n = 3, as any quadratic term in the Lagrangian density can be absorbed in the free Lagrangian as a "shift in the potential energy". Inserting Eq.
(118) into Eq. (103) gives the formal expression of the path integral for the interacting theory. In general, Z(J) cannot be evaluated analytically, because the integral in the exponent is intrinsically non-Gaussian, due to the presence of the n > 2 polynomial terms provided by L_int. However, we can use the following argument to write Z(J) in a form that can be treated with perturbation theory. First, notice that the interaction Lagrangian L_int as defined in Eq. (119) is a polynomial of order n in φ. Formally, then, if we employ Eq. (115), we can replace each occurrence of φ in L_int with the corresponding derivative with respect to J. Thanks to Eq. (121), the interaction term L_int then no longer depends on the path-integration variable φ. This allows us to bring this term outside the path integral, and therefore rewrite Eq. (120) accordingly, with Z₀(J) given by Eq. (103).
If we now assume that the interaction is "small enough" (with respect to a certain energy scale, typical of the problem at hand) that we can treat it perturbatively, i.e., if we set λ ≪ 1, we can expand the exponential in a power series in λ (truncated at the desired perturbation order) and write the path integral for the interacting theory accordingly, where we have also written Z₀(J) in terms of its power-series expansion, for a reason that will become clear in a moment.
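The double expansion at work here can be tried out on a zero-dimensional toy model, in which the "path integral" is an ordinary integral and the functional derivative δ/δJ becomes d/dJ. The sketch below assumes, for convenience, a φ⁴ interaction with a real (Euclidean-style) weight, so that Z₀(J) ∝ e^{J²/2} and the substitution is simply φ → d/dJ (for n = 3 the procedure is identical, but the first-order term vanishes at J = 0); it verifies the O(λ) term against the Gaussian moment ⟨φ⁴⟩ = 3:

```python
import sympy as sp

J, lam, phi = sp.symbols('J lam phi', real=True)

# free generating "functional" of the 0-dimensional toy model
Z0 = sp.exp(J**2 / 2)

# phi^4 interaction via the source-derivative trick:  phi -> d/dJ
Z_trick = Z0 - lam/sp.factorial(4) * sp.diff(Z0, J, 4)   # truncated at O(lam)

# the same O(lam) term from the ordinary integral, evaluated at J = 0:
# <phi^4> for a unit Gaussian is 3, so Z(0) = 1 - 3*lam/4!
moment4 = sp.integrate(phi**4 * sp.exp(-phi**2/2),
                       (phi, -sp.oo, sp.oo)) / sp.sqrt(2*sp.pi)
Z_exact_at_0 = 1 - lam/sp.factorial(4) * moment4

print(sp.simplify(Z_trick.subs(J, 0) - Z_exact_at_0))   # -> 0
```

The agreement of the two O(λ) terms is precisely the mechanism that lets the interaction be pulled outside the (functional) integral in the field-theory case.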
The expression above allows us to treat the path integral for the interacting theory in a perturbative manner, up to an arbitrary order in λ. To choose the appropriate one, in fact, one needs only to truncate the summation at the desired order V, so that only terms proportional to λ^V (or lower powers) appear in the perturbative series expansion.
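Schematically, the manipulation described above can be sketched as follows (normalisation, metric, and sign conventions are left implicit here, and the self-interaction is taken, for definiteness, as L_int = λφ^n/n!):

```latex
Z(J) = \int \mathcal{D}\varphi\,
  e^{\,i\int d^4x\,\left[\mathcal{L}_0 + \mathcal{L}_{\mathrm{int}}(\varphi) + J\varphi\right]}
     = \exp\!\left[i\int d^4x\,
       \mathcal{L}_{\mathrm{int}}\!\left(-i\frac{\delta}{\delta J(x)}\right)\right] Z_0(J),
```

and, expanding both exponentials,

```latex
Z(J) \simeq \sum_{V=0}^{\infty}\frac{1}{V!}
  \left[\frac{i\lambda}{n!}\int d^4x
  \left(-i\frac{\delta}{\delta J(x)}\right)^{\!n}\right]^{V}
  \sum_{P=0}^{\infty}\frac{1}{P!}
  \left[-\frac{i}{2}\int d^4y\, d^4z\; J(y)\,G(y-z)\,J(z)\right]^{P},
```

where each order V carries nV functional derivatives acting on the 2P sources of the P-th order term of Z_0(J), which is precisely the counting used in the next section.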

A. Feynman Diagrams
In general, calculating Eq. (123) analytically is not possible, and even writing down the terms contributing to Z(J) at a given perturbation order might be cumbersome. In fact, at order λ^V in Eq. (123), Z(J) contains nV functional derivatives through the term (δ/δJ)^n, which act on the 2P different source terms J deriving from the P-th order expansion of Z_0(J). This, in general, amounts to (2P)!/(2P − nV)! different ways the nV functional derivatives can act on the 2P source terms. It is not difficult to see from this example, then, how the number of required integrals grows very rapidly with increasing V and P.
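As a quick sanity check of this combinatorial growth, the counting formula above can be evaluated numerically (a minimal sketch; the helper name `count_terms` is ours):

```python
from math import factorial

def count_terms(n: int, V: int, P: int) -> int:
    """Number of ordered ways nV functional derivatives can act on 2P
    source terms, i.e. (2P)! / (2P - nV)!, assuming 2P >= nV."""
    if 2 * P < n * V:
        return 0  # not enough sources to absorb all the derivatives
    return factorial(2 * P) // factorial(2 * P - n * V)

# The tadpole-like case discussed in the text: n = 3, V = 1, P = 2
print(count_terms(3, 1, 2))   # 4!/1! = 24 orderings
# One order higher in the perturbation: n = 3, V = 2, P = 3
print(count_terms(3, 2, 3))   # 6!/0! = 720 orderings
```

Distinct diagram topologies are far fewer than these raw orderings; the symmetry factors (1/2, 1/6, ...) discussed below compensate exactly for the algebraically equivalent arrangements.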
An elegant and quite insightful way to keep track of all these quantities, and to focus on their physical meaning rather than on their complex mathematical expressions, is given by the so-called Feynman diagrams. An example of them, for the case n = 3, is reported in Fig. 9.
To understand their meaning, a comprehensive set of rules connecting the various parts of these diagrams with the corresponding integrals and mathematical quantities appearing in Eq. (123) can be defined, following for example Refs. 8,9,109, as follows:
• to represent a propagator −iG(x − y) we use a wiggly line joining the initial point (x) with the final point (y) of the free evolution of the field φ;
• to represent the interaction between fields, we use a so-called vertex, i.e., a filled dot, which joins n lines, thus fulfilling the φ^n nature of the interaction;
• to each vertex, moreover, we associate the quantity iλ∫d^4x, which contains the coupling constant λ defining the interaction. In addition, we require that energy and momentum conservation holds at each vertex. The vertex diagram will be of particular importance for the case of the electromagnetic field, as its vector counterpart is the basic quantity that describes second-order nonlinear processes in arbitrary media (see Sect. X);
• finally, to represent the presence of sources J(x) we use a circle, placed at one end of a propagator, which is associated with the quantity i∫d^4x J(x).
With these simple rules, the terms at different perturbation orders in Eq. (123) can be conveniently expressed in terms of diagrams. To illustrate how this works, let us consider two examples:

1. Feynman Diagram for the Free Propagator
To obtain the free theory from Eq. (123), we simply set V = 0 (which corresponds to no interaction). Following the rules stated above, the path integral Z_0(J) can then be diagrammatically written out, up to the term in J(x_1), J(x_2) plus higher orders (124). If we then want to calculate the propagator, we simply take the functional derivatives of the diagram above, i.e., remove the source terms J(x_1) and J(x_2) and replace them with the labels x and y for the initial and final points of the propagator. Notice how the action of the functional derivatives δ/δJ(x), namely removing a source term from Z_0(J) and replacing x_1 (x_2) with x (y), now appears much clearer and more intuitive.
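In formulas, the diagrammatic procedure just described amounts to the following schematic relation (factors of i and the overall normalisation depend on the conventions adopted; cf. Eqs. (113) and (117)):

```latex
G(x-y) \;\propto\;
\left.\frac{\delta}{\delta J(x)}\,\frac{\delta}{\delta J(y)}\,
Z_0(J)\right|_{J=0},
```

i.e., the free propagator is the two-point correlation function obtained by stripping two sources from Z_0(J) and relabelling their attachment points as x and y.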
2. First Order Perturbation Diagram for n = 3

As a second example, let us consider an n = 3 interacting theory at first perturbation order (i.e., V = 1), for the simple cases of P = 2 and P = 3, as reported in Fig. 9 (a) and (b), respectively.
The case P = 2 represents the simplest perturbative calculation we can do on Z(J) for n = 3, and corresponds to the well-known tadpole diagram of QFT, i.e., the case where two propagators interact at a single vertex V [Fig. 9 (a)]. Notice that this diagram contains 2P − 3V = 1 external source, and can be arranged in 2 different ways, by exchanging the roles of the two propagators G_1 and G_2. Both of these configurations, however, are algebraically equivalent, and correspond to the same term in the first-order perturbative expansion Z_1(J), which for this simple case can be written following the rules above, with the last term G_2(x − x) accounting for the loop appearing in Fig. 9 (a). The factor 1/2 in front of the integral compensates for the 2 equivalent ways diagram (a) can be written.
The subscripts 1 and 2 in the propagators have been added to make it easier to recognise the corresponding terms in Fig. 9, and have no physical meaning. For the case of three propagators meeting at a vertex (P = 3 and V = 1), the actual computation of Z_1(J) would be rather cumbersome, as it requires performing three successive functional derivatives on an object that has six current terms and three propagators. If we instead rely on the corresponding Feynman diagram, depicted in Fig. 9 (b), we can readily write the explicit expression of Z_1(J). The diagram in Fig. 9 (b) can be arranged in 3! = 6 different ways, by interchanging the roles of the three source terms J_k. All these different arrangements, however, are algebraically equivalent, and correspond to the same form of the first-order perturbation Z_1(J) written above; the factor 1/6 accounts for this symmetry.

IX. GENERALISATION TO VECTOR FIELDS
The results obtained in the previous section for a scalar field remain valid, if properly generalised, for vector fields as well, and, in particular, for the electromagnetic field. The rigorous way to generalise those results to vector fields, however, is beyond the scope of this tutorial; the interested reader is referred to Refs. 2,5,8-10,88,109 for further and more formal details. In what follows, we just give an intuitive and qualitative generalisation.

A. Path Integral for Free Vector Fields
For a vector field, in fact, Z_0(J) can be qualitatively written in a manner similar to Eq. (110). To do that, we assume that the source terms are now vectorial in nature, i.e., J → J_µ, which makes the Green's function become a two-index object, i.e., we need to formally make the substitution G(x − y) → G_µν(x − y). This allows us to immediately extend the result of Eq. (110) to the case of a vector field, by simply introducing the dyadic Green's function, where G_0(r, r′; t, t′) is the Green's function of the wave equation, and where n^2 is a possibly space-dependent parameter describing the properties of the vector field. For the electromagnetic field, for example, n^2 is the refractive index profile of the medium the field is propagating through. We can then write the partition function for the noninteracting vector field accordingly. Using Eqs. (113) and (117) we can also write G_µν(x − y) as a two-point correlation function, where the shorthand J = 0 means that all the components of the current need to be set to zero after the calculation. We will make use of this result in the next section, for deriving the propagator of the effective free electromagnetic field in an arbitrary medium.

B. Path Integral for Interacting Vector Fields
Analogously, Eq. (123) can be qualitatively extended to the case of interacting vector fields by promoting the source terms to vector fields (i.e., J → J_µ) and the propagators to tensor fields (i.e., G(x − y) → G_µν(x − y)), thus obtaining an expression in which λ_{α} represents a rank-n tensor, whose indices contract the n indices appearing in the functional derivatives, and J(y)·G(y − z)·J(z) = J_µ(y) G_µν(y − z) J_ν(z). In general, moreover, λ_{α} could also be space-dependent. This result will be used in the next section as a starting point to analyse nonlinear quantum effects for photons propagating in arbitrary media.

X. PATH INTEGRAL FORMALISM FOR CLASSICAL AND QUANTUM ELECTRODYNAMICS IN ARBITRARY MEDIA
We now use the formalism introduced above to describe the propagation and interaction properties of an electromagnetic field in a dispersive medium of arbitrary geometry. The content of this section follows the line of reasoning of Refs. 39,79, and is reported here with a higher level of detail, in order to help the reader adapt the formalism developed in the previous sections to the case of the electromagnetic field.
Let us then start by making a series of assumptions to construct a suitable Lagrangian density, which is the starting point for constructing the path integral. First, we choose to work in the so-called Weyl gauge 114, where the scalar potential is zero (which corresponds to assuming that the medium we consider contains no free carriers). This, practically, corresponds to the usual assumption that both the electric and magnetic fields are fully determined by the vector potential, i.e., E = −∂A/∂t and B = ∇ × A. Then, we assume that light-matter interaction can be fully described within the framework of the dipole approximation 115.
We can then describe the evolution of an electromagnetic field in a dispersive medium of arbitrary shape in terms of the Huttner-Barnett Lagrangian density 116. Let us first understand the physical meaning of all the elements in this expression. L_HB describes the interaction of three fields: the electromagnetic field (represented by E ≡ E(r,t) and B ≡ B(r,t)), the matter (polarisation) field P ≡ P(r,t), and the reservoir field F(ω) ≡ F(r,t; ω), which essentially models all the possible decay channels present in the material that could affect its optical properties.
The first two lines of Eq. (133) are the free Lagrangian densities of these three fields, respectively.
The third line, instead, contains all the relevant interactions, with the first term being the traditional dipole interaction term, and the second one representing the polarisation-reservoir interaction, that models losses in the system.
The geometry of the system under investigation is contained in the shape factor g(x), which equals 1 inside the region of interest, and zero otherwise.
To complete the picture, the medium is characterised by a resonant frequency ω_0, static polarisability β(x), mass density (per unit frequency) ρ(x), and spectral coupling f(ω), which accounts for the frequency distribution of the various decay channels of the medium. The dot in Eq. (133), moreover, represents differentiation with respect to time.
Notice that, although L_HB(r,t) contains interaction terms, from the point of view of the electromagnetic field it is still considered a free Lagrangian density, as no terms proportional to powers of E or B (or, equivalently, of the vector potential A) higher than two are present.
We can then substitute Eq. (133) into Eq. (103) to obtain the full path integral of the Huttner-Barnett model as Z(J, J_P, J_F) = ∫ DA DP DF exp(iS_HB), supplemented by the source couplings, where J is the source term for the electromagnetic field, J_P the source term for the matter polarisation field, J_F the source term for the reservoir field, and S_HB = ∫ d^4x L_HB is the Huttner-Barnett action.
Written in this way, Z(J, J_P, J_F) allows one to describe, in a fully quantum manner, not only the dynamics of photons (through the current J), but also those of polaritons (through the current J_P), i.e., quantum excitations of the matter polarisation field, and to account also for the quantum features of the loss reservoir, through the current J_F. However, for the purposes of this tutorial, we shall focus our attention on photons only, and therefore set J_P = 0 = J_F, leaving J as the only free parameter. In this limit, the path integrals over P and F(ω) can be performed explicitly, resulting in an effective path integral for the electromagnetic field propagating in a medium macroscopically described by an effective dielectric constant. We call the system formed by the electromagnetic field plus the effective medium the dressed electromagnetic field.
The full procedure on how to do that can be found in Refs.39,79, and it is summarised in Appendix A, for completeness.

A. Dressed Free Electromagnetic Field
After performing the path integration with respect to P and F(ω) as illustrated in Appendix A, the partition function in Eq. (134) reduces to an effective partition function Z_eff(J), where the effective action S_eff(A) has an explicit form in which the tensor quantity Γ(r, t − τ) is directly connected with the dielectric function of the effective medium the "dressed" electromagnetic field is propagating through. An exhaustive discussion of the physical meaning and origin of Γ(r, t − τ) is given in Ref. 79.
If we substitute into the action above the expressions of the electric and magnetic fields in terms of the vector potential A, i.e., E = −∂_t A and B = ∇ × A, we can see that S_eff(A) is at most quadratic in A and ∂_t A, meaning that, de facto, it describes the free evolution of the dressed field. This means that the path integral in Eq. (135) can be reduced to a form similar to Eq. (130). In the present case, however, it is more convenient to write the path integral in Fourier space, as a function of the frequency ω, rather than of time. By doing this we obtain an expression in which dR = d^3r d^3r′, R = r − r′, and G_µν(R, ω) is the dyadic Green's function of the dressed field in Fourier space, which is a solution of the equation of motion for the dressed field, with ε(r, ω) = 1 + g(r)Γ(r, ω)/ε_0 (Γ here being the Fourier transform of the Γ appearing in Eq. (136)). Alternatively, the Green's function can also be derived from Eq. (137) by using Eq. (131) as the correlation function between the electromagnetic field at points r and r′.

B. Nonlinear Quantum Electrodynamics

In photonics, the nonlinear interactions of the electromagnetic field in a medium are often expressed in terms of the induced polarisation, which is, in general, written as a power series in the electric field 101, where χ_n is the n-th order susceptibility tensor, containing the linear (n = 1) and nonlinear (n ≥ 2) response of the medium to the electric field.
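For a single matter resonance, the Huttner-Barnett reduction yields an effective dielectric function of the familiar Lorentz-oscillator type. The sketch below (with purely illustrative, dimensionless parameters of our own choosing, not taken from the text) shows how such an ε(ω) behaves, with the medium's absorption encoded in its imaginary part:

```python
# Illustrative Lorentz-type effective dielectric function,
# eps(w) = 1 + wp^2 / (w0^2 - w^2 - i*gamma*w), with hypothetical
# dimensionless parameters (resonance w0, strength wp, damping gamma).
def epsilon(w, w0=1.0, wp=0.5, gamma=0.1):
    return 1.0 + wp**2 / (w0**2 - w**2 - 1j * gamma * w)

eps = epsilon(0.9)          # probe just below the resonance
print(eps.real, eps.imag)   # Im(eps) > 0 encodes the losses of the medium
print(epsilon(0.0).real)    # static limit: 1 + wp^2/w0^2 = 1.25 here
```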
This form of self-interaction can be generated with an interaction Lagrangian similar to that of Eq. (119). The lowest-order nonlinear effect described by such an interaction Lagrangian is the interaction of three electromagnetic fields (photons), i.e., a second-order nonlinear process 101. We can then write the path integral for the nonlinear interaction in analogy with Sect. VIII and expand it perturbatively in powers of the various χ_n. For the particular case of nonlinear optics, moreover, the magnitude of the various nonlinear effects gets progressively smaller as n increases, i.e., |χ_{n+1}| ≪ |χ_n| holds for any n.
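To give a feeling for how strong this hierarchy is, the sketch below compares successive terms of the polarisation expansion, P_{n+1}/P_n ~ (χ_{n+1}/χ_n) E, using typical textbook orders of magnitude for the susceptibilities and the field strength (all three values are illustrative assumptions, not data from the text):

```python
# Ratio of successive polarisation terms, with illustrative
# textbook orders of magnitude (assumed, not measured values).
chi2 = 1e-12   # m/V     -- typical second-order susceptibility (assumed)
chi3 = 1e-24   # m^2/V^2 -- typical third-order susceptibility (assumed)
E = 1e7        # V/m     -- moderately intense optical field (assumed)

ratio_2_over_1 = chi2 * E            # relative to chi1 ~ 1
ratio_3_over_2 = (chi3 / chi2) * E
print(ratio_2_over_1, ratio_3_over_2)   # both << 1, justifying perturbation theory
```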
If we now expand the path integral in powers of χ using Eq. (132) as guidance, we can decompose it into the sum of three contributions: Z_0(J), the free theory given by Eq. (137); Z^(k)(J), which accounts for the k-th order nonlinearity, i.e., contains terms proportional to χ_k only; and Z_cross(J), which describes the combined interaction of several orders of nonlinearity (for example, it contains terms of the form χ_k χ_n, with k ≠ n). Since |χ_{n+1}| ≪ |χ_n| holds, however, we can safely neglect this last term 39, as it contains terms that are O(|χ|^k). The explicit expression of the term Z^(k)(J) closely resembles that of the perturbative expansion for an interacting vector field theory, defined in Eq. (132). This result can be used to calculate the cross section σ^(N)(r_1, …, r_N) for a given N-th order nonlinear event to take place, simply as the vacuum N-point correlation function of the electromagnetic field, where σ_0 is a suitable dimensional constant. The cross section above can also be interpreted as the probability to generate N photons through an N-th order nonlinear process. In this case, the probability is given simply by the modulus squared of the quantity defined above, where, again, P_0 is a suitable dimensional constant that transforms |A|^2 into a probability density.

XI. APPLICATIONS OF PATH INTEGRALS IN QUANTUM OPTICS
We now present two examples of how this formalism can be used to describe linear and nonlinear quantum optical phenomena in arbitrary media, hoping that this will give the reader better insight into the potential applications of path integrals in photonics.
In this section we then present the solution, by means of path integrals, of two well-known problems: (1) the generation of spontaneous parametric down conversion (SPDC) in a lossy medium, and (2) the calculation of the spontaneous emission rate of an atom. The two examples make use of the dressed electromagnetic field introduced in Sect. X A, in the interacting and free case, respectively.

A. Spontaneous Parametric Down Conversion in Lossy Media
Let us consider only the term n = 3 in Eq. (140); the interaction Lagrangian then reduces to a three-field coupling. This interaction term, as is well known within the nonlinear optics community, is responsible for second-order nonlinear phenomena, such as second-harmonic generation (SHG), sum- and difference-frequency generation (SFG/DFG), and spontaneous parametric down conversion (SPDC) 101. As can be seen, L_int involves the interaction of three fields, conventionally called pump, signal, and idler. A very common assumption (and experimental working condition) is to stimulate the aforementioned phenomena by means of a very bright pump beam, whose quantum properties are typically neglected, and whose intensity (i.e., number of photons) does not change significantly during the interaction 106. This approximation is called the undepleted pump approximation. If we accept this approximation, we can rewrite the interaction Lagrangian above in a form that allows us to easily calculate the path integral. In doing so, essentially, we make use of the undepleted pump approximation to eliminate the pump field from the dynamics (i.e., we consider only the dynamics of the signal and idler photons), by defining an effective nonlinear susceptibility χ̃_µν = χ_µντ A^(p)_τ, so that Eq. (146) becomes 39 an expression in which the superscripts {p, s, i} stand for pump, signal and idler field, respectively (mimicking the usual notation of nonlinear optics). Substituting the interaction Lagrangian above into Eq. (142) with k = 3 (the order of the nonlinearity in the Lagrangian) and n = 1 (first-order perturbation term), we can write down the explicit expression of the partition function for second-order nonlinear optical processes. In terms of Feynman diagrams, the path integral involves all the permutations, to be understood as all the possible ways to order the three lines (and the corresponding labels) appearing in the Feynman diagram. The dashed line in the diagram represents the dressed vacuum state |0⟩ ≡ |α_pump; 0_signal; 0_idler⟩, i.e., the effective quantum vacuum seen by the signal and idler photons.
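Schematically, the undepleted-pump reduction described above can be written as follows (index placement and prefactors depend on the conventions of Eq. (146)):

```latex
\mathcal{L}_{\mathrm{int}}
= \chi_{\mu\nu\tau}\, A^{(s)}_{\mu} A^{(i)}_{\nu} A^{(p)}_{\tau}
\;\longrightarrow\;
\tilde{\chi}_{\mu\nu}\, A^{(s)}_{\mu} A^{(i)}_{\nu},
\qquad
\tilde{\chi}_{\mu\nu} \equiv \chi_{\mu\nu\tau}\, A^{(p)}_{\tau},
```

where the classical pump amplitude A^(p) is treated as a fixed external parameter rather than a dynamical field, so that only the signal and idler fields remain as path-integral variables.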
From Eq. (148), we can calculate the probability for an SPDC event to occur using Eqs. (144) and (145) with N = 2, in terms of the biphoton propagator X_µν(x − y), whose explicit expression involves the frequencies ω_{s,i} of the signal and idler fields, respectively. The probability of detecting an SPDC event is then completely determined by the propagators of the signal and idler fields. As an explicit example, let us consider a lossy, isotropic 1D medium of length L. Since the medium is isotropic, χ_2(r, ω) ≡ χ. Moreover, let us assume that the pump beam can be modelled as a plane wave of frequency ω_p propagating along the x-direction; the effective nonlinear susceptibility then takes a correspondingly simple explicit form. To calculate the biphoton propagator, we need the explicit expression of the Green's function. For a lossy, isotropic, 1D material the dyadic Green's function can be calculated explicitly using Eq. (111) (with m = 0, k^2 = κ^2 − ω^2, and using the residue theorem to perform the ω-integral) 81, where κ(ω) = k(ω) + iγ(ω) is the complex wave vector of the field in the medium. Substituting Eqs. (151) and (152) into Eq. (149) we obtain a result in which Γ = γ_s + γ_i is the total loss seen by the signal (γ_s ≡ γ(ω_s)) and the idler (γ_i ≡ γ(ω_i)), and ∆k = k(ω_p) − k(ω_s) − k(ω_i) is the phase-mismatch parameter. One can readily see that in the limit Γ = 0, the equation above reproduces the well-known result from nonlinear optics 101, that P_SPDC ∝ sinc^2(∆kL/2). The general case Γ ≠ 0, instead, is depicted in Fig. 10 as a function of the dimensionless quantity x = ΓL, for different values of the phase-mismatch parameter ∆k.
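In the lossless limit quoted above, the phase-matching behaviour P_SPDC ∝ sinc^2(∆kL/2) can be evaluated directly (a minimal sketch, normalised so that the phase-matched value is 1):

```python
import math

def p_spdc_lossless(dk: float, L: float) -> float:
    """Normalised SPDC probability in the lossless limit: sinc^2(dk*L/2)."""
    x = dk * L / 2.0
    if x == 0.0:
        return 1.0
    return (math.sin(x) / x) ** 2

print(p_spdc_lossless(0.0, 1.0))          # 1.0 at perfect phase matching
print(p_spdc_lossless(2 * math.pi, 1.0))  # ~ 0 at the first zero, dk*L = 2*pi
```

The probability decreases monotonically from the phase-matched value up to the first zero of the sinc, reproducing the standard nonlinear-optics phase-matching curve.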

B. Spontaneous Emission Decay Rate of a Quantum Emitter in an Arbitrary Medium
The spontaneous emission decay rate of a quantum emitter embedded in a homogeneous environment can be written as follows 117: Γ_sp = 2πcΓ_0 (...), where Γ_0 is the free-space spontaneous emission decay rate (the Einstein A coefficient), ω_0 is the characteristic frequency of the quantum emitter, here modelled as a two-level system with dipole moment µ, G(r, r; ω) is the dyadic Green's function of the electromagnetic field in which the emitter is embedded, calculated at the position of the emitter itself, and Tr{A} = Σ_i A_ii is the trace operation 81. Equation (155) is a general result that holds for different systems, such as atoms in free space 117, cavity QED 118, and even metamaterials 119.
If we consider a homogeneous surrounding medium characterised by the complex-valued dielectric function ε = ε_1 + iε_2, and assume as well that the dipole moment of the emitter is aligned along the z axis, Eq. (155) simplifies considerably 120,121. We can arrive at the same result as both Eq. (155) and Eq. (156) using path integrals. To show that, we can use the interacting theory of the dressed electromagnetic field described in Sect. XI A, but rather than calculating the actual nonlinear interaction of the field with itself (i.e., second-order nonlinear processes), we turn our attention to the nonlinear corrections to the free propagator Z_0(J). These corrections are known in QFT as loop corrections or, sometimes, vacuum bubbles 8,10,88,109, and are represented by Feynman diagrams of the form Z_corr = (loop on a propagator) + higher-order terms.
This term can be obtained from Eq. (148) by simply identifying the two source terms J_µ and J_ν with each other, and corresponds to an integral over the loop propagator, where ω_0 is the transition frequency of the quantum emitter. Notice that in this integral we have used λ_αα(x) rather than χ̃_αα(x), since the interaction vertex in this case does not necessarily represent an actual nonlinear optical interaction, but rather the quantum emitter which, initially in its excited state, emits a photon that gets absorbed at a later time (whence the loop appearing in the Feynman diagram above). Moreover, the dipole moment µ of the quantum emitter is aligned along the α direction. If, without any loss of generality, we assume that α → z, that the quantum emitter is localised at the vertex, and that it is characterised by a transition frequency ω_0, the interaction vertex becomes λ_αα(x, ω) = |µ|^2 δ(ω − ω_0). In QFT, these loop corrections need to be properly renormalised, as they amount to a sort of self-energy of the vacuum (in our case, a self-energy of the dressed electromagnetic field), which typically diverges. However, while the real part of the Green's function above actually diverges, its imaginary part remains bounded, and is in fact the spontaneous emission rate of the quantum emitter. To prove that, let us first write down the explicit expression of the loop propagator G_zz(0, ω_0), which is now a scalar quantity, using Eq. (111), with m^2 = ω_0^2(ε_1 + iε_2), i.e., the "mass" of the dressed photon in the medium described by the complex dielectric function ε_1 + iε_2. Since the environment surrounding the quantum emitter is homogeneous and isotropic, we can perform the integral in spherical coordinates, readily integrating away the angular degrees of freedom, which amount to an overall 4π multiplicative factor. Separating the real and imaginary parts of the fraction in the integrand then leads to two integrals (162). Both
integrals can be readily evaluated: the first gives Re{G_zz(0, ω_0)} → ∞, while the second remains finite and, apart from an inessential multiplicative factor, looks very similar to the result in Eq. (156).
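The finiteness of the imaginary part can be sketched explicitly. With the standard Feynman prescription m^2 → m^2 − i0 and the Sokhotski-Plemelj identity (a schematic computation, with overall prefactors left implicit):

```latex
\frac{1}{k^2 - m^2 - i0}
= \mathcal{P}\,\frac{1}{k^2 - m^2} + i\pi\,\delta(k^2 - m^2),
```

so that

```latex
\operatorname{Im}\int_0^\infty \frac{k^2\,dk}{k^2 - m^2 - i0}
= \pi \int_0^\infty k^2\,\delta(k^2 - m^2)\,dk
= \frac{\pi}{2}\,m,
```

which stays bounded (written here for real m; for complex m^2 = ω_0^2(ε_1 + iε_2) the pole simply moves off the real axis), while the principal-value piece diverges linearly with the momentum cutoff and must be renormalised away.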

XII. CONCLUSIONS AND FUTURE PERSPECTIVES
In this tutorial, we have introduced the reader to the concept and framework of path integrals. We have discussed their origin, their historical development, and their interpretation as sums over all possible trajectories, and, in Sect. II, we have provided the foundations upon which the very concept of path integral is rooted. To help the reader familiarise themselves with the formalism, and with the kind of mathematics needed to deal with path integrals, in Sect. III we have presented some standard introductory textbook examples, namely the free particle (Sect. III A), the refraction of a light beam at an interface (Sect. III B), the double-slit experiment (Sect. III C), and the quantum harmonic oscillator (Sect. III D). All these examples have been discussed extensively, pointing out the procedures and techniques used to arrive at the analytical form of the kernel for those special cases.
Starting from Sect. IV, we have then built upon this knowledge, and presented a series of less trivial applications of path integrals in both classical and quantum optics, with the twofold aim of presenting an alternative derivation, based on path integrals, of results known in optics, and of helping the reader contextualise this framework in a familiar environment. In particular, we have discussed the propagation of light through an inhomogeneous medium in Sect. IV, while in Sect. V we have shown how this framework can also be employed to solve problems of quantum nonlinear optics, discussing explicitly the case of parametric amplification.
The first part of our tutorial concludes with Sect. VI, where we give the reader an idea of the computational capabilities of PIMC, and briefly discuss how it could be used in photonics. Within the current photonics ecosystem, and considering the present challenges faced by the discipline, we believe that having at one's disposal a computational framework that allows the calculation of complex light-matter interaction scenarios, and the derivation of the optical properties of complex materials in an exact way, without the need to resort to any particular regime of approximation, could represent a very valuable resource for photonics. This, in essence, is the very aim of our tutorial: to introduce the photonics community to this elegant, but also very powerful, formalism, and to stimulate cross-fertilisation between these two seemingly different disciplines.
The second part of our tutorial, starting with Sect. VII, gives the reader a very basic introduction to the use of path integrals in QFT. Although at first glance this might seem too advanced for the spirit of this tutorial, we nevertheless included this topic for essentially two reasons. First, since many of the modern problems in theoretical physics, such as quantum gravity, string theory, topological field theories, and so on, are formulated in terms of path integrals, introducing the reader to the basic formalism of path integrals in QFT will greatly help them, should they be interested in reading more about these topics. Secondly, Sects. VII-IX set the stage for Sect. X, where we present a unified way to describe the quantum nonlinear dynamics of the electromagnetic field in arbitrarily shaped media in terms of path integrals, as an example of the potential this framework has to offer to photonics, not only from a purely computational perspective, like the prediction of the optical properties of materials granted by PIMC, but also as a viable and simple analytical tool for complex problems in photonics.
With this tutorial, we hope to have sparked the curiosity and interest of the reader in the topic, and to have conveyed the message that path integrals do not only represent an elegant formulation of quantum mechanics, but also provide the language of modern theoretical physics, and can be applied in many different areas of science. For the specific case of photonics, as extensively discussed in this tutorial, path integrals could represent a valuable analytical and computational resource for investigating complex and multi-physics photonic devices, which would otherwise require the assembly of model systems from different disciplines, a task that is not always simple to carry out. The universality of path integrals, the ease with which the formalism adapts to very different situations, and their remarkable ability to treat very complex problems in an accessible way, could represent a significant step forward towards this goal.
5. Derivation of the quadratic form for I_P

In the form above, the exponent of I_F(P) contains a term proportional to P^2. This term can be summed with the corresponding quadratic term appearing in the free part of the matter action S_mat(P) in the expression of I_P(A) given by Eq. (A3). As we did for I_F, one can then integrate by parts the term proportional to Ṗ^2 and introduce an extra integral with respect to time, in order to write the above quantity as a quadratic form (P, B̂P) + (c, P), where c(t) = −ig(x)Ȧ, (A36) and where the expression for the operator B̂ ≡ B̂(t, t′) is obtained by collecting the terms sandwiched between P(t′) and P(t). Notice how the form of the operator B̂ is very similar to that of the operator Â introduced when calculating I_F. The extra term appearing in it, i.e., −ig(x)G(t − t′, x)/ρ, however, makes B̂ intrinsically integro-differential, which, in turn, makes the determination of its corresponding Green's function quite complicated (we discuss it briefly in the next subsection, but the interested reader is referred to Ref. 79 for more details).
Once we have transformed the exponent Φ(P) into a quadratic form, we can again discretise the path integral, solve it using Gaussian integration, and then take the continuum limit, to obtain a result in which Γ(t − t′, x) is the Green's function of the operator B̂.
6. Expression and physical meaning of Γ(t − t′, x)

The Green's function of the operator B̂ just defined can be found in the following way: first, notice that the prefactor ig(x) in the definition of B̂, i.e., Eq. (A37), can be treated, for the purpose of finding the Green's function, as a "numerical" constant, and can therefore be neglected during the calculation (we can put it back at the end). Using the definition of the Green's function 81 we then know that Γ(t − t′, x) is a solution of the equation B̂ Γ(t − t′, x) = δ(t − t′).
It should be noted that in this case the operator B̂ is not simply a differential operator, as was the case earlier in (A13), but contains an extra source term. The Green's function of B̂ is then found as the solution of an integro-differential equation involving G(Ω, x), the Fourier transform of G(t − t′, x). After a lengthy but straightforward calculation 79, one arrives at the explicit form of the Fourier transform of Γ(t − t′, x), in which λ_F(Ω) is the Fourier transform of the Feynman propagator D_F(t − t′, ω).
The above result can be interpreted as an effective dielectric constant for the "dressed" electromagnetic field. To see this, let us consider the full expression of the effective Lagrangian coming from L_em, as given in Eq. (A5), together with the result given by I_P(A). Combining these two terms and remembering that Ȧ = −E and ∇ × A = B, we obtain, in Fourier space, an effective Lagrangian written in terms of the electric and magnetic fields, since this form provides an easier understanding of the interpretation of Γ(t − t′, x). From L_eff we can calculate the displacement vector D(Ω, x) as follows 90: D(Ω, x) = ∂L_eff/∂E* = ε_0 E(Ω, x) + g(x)Γ(Ω, x)E(Ω, x). (A42) If we then recall the usual constitutive relation for the displacement vector 95, i.e., D = ε_0 ε E, we can identify ε(Ω, x) = 1 + g(x)Γ(Ω, x)/ε_0. It should now appear clear that Γ(t − t′, x) represents the dielectric function of an effective medium through which the electromagnetic field is propagating. In particular, it contains information on both the loss channels present in the medium (through the reservoir field) and the possible interaction channels that the medium has with the electromagnetic field (through the matter polarisation). After the integration with respect to F and P, the partition function takes a form in which Ȧ·Γ·Ȧ = Ȧ_µ(t) Γ_µν(t − t′, x) Ȧ_ν(t′), and we have restored the current-dependent term, because we are now going to perform the path integration with respect to the vector potential A. The result above then matches the form of the effective partition function introduced in Eq. (135).

FIG. 1. Pictorial representation of the evolution of a classical (left) and quantum (right) system from an initial state at time t_a to a final state at time t_b over a certain potential landscape. (a) While many different paths link the initial and final states of the classical system, once the initial conditions have been fixed, only the path of minimal action (red line in the figure) is the one actually followed by the system. (b) For a quantum system, on the other hand, the classical path (blue tube) minimising the action is interpreted as the path along which the interference of all the contributing paths linking t_a and t_b is maximal.

FIG. 3. Schematic representation of a ray of light being refracted at a planar interface. A light source located at A emits a photon, which is detected at point B by a suitable detector. Between A and B lies a planar interface C, separating the two media in which the photon propagates. In the language of path integrals, refraction can be interpreted as the change in the photon's velocity along the path that minimises the action with respect to the contact point x_c on the interface C.

FIG. 4. Left panel: pictorial representation of the double-slit experiment. A quantum particle is emitted from a source located at A and reaches a detector (or an array thereof) located at point B by passing through a screen C, in which two slits, C_1 and C_2, have been carved. Right panel: sketch of the possible probability distributions that could be observed at the detector plane B. (a) Interference pattern P revealing the wave nature of the quantum particle. (b) Probability distribution P_2 obtained by closing slit C_1. (c) Probability distribution P_1 obtained by closing slit C_2. (d) Estimate of the total probability distribution P_1 + P_2, obtained by simply summing the distributions of cases (b) and (c), as would be valid for a classical particle, for which no interference occurs.
The panels show the probability distribution P observed when both slits are open and an interference pattern forms [panel (a)], the probability distribution P_2 observed when slit C_1 has been closed [panel (b)], the probability distribution P_1 observed when slit C_2 has been closed [panel (c)], and finally the probability distribution P_1 + P_2 obtained by summing the results of the observations in (b) and (c).
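The contrast between P and P_1 + P_2 is captured by the path-integral rule that amplitudes, not probabilities, are summed. A minimal sketch, with two representative paths through C_1 and C_2 whose actions S_1, S_2 are chosen arbitrarily (in units of ħ):

```python
import cmath

hbar = 1.0  # natural units (illustrative)

def amplitude(S):
    """Phase contribution exp(iS/hbar) of a single path with action S."""
    return cmath.exp(1j * S / hbar)

def detector_probabilities(S1, S2):
    """Probabilities for the two-slit configuration of FIG. 4."""
    A1, A2 = amplitude(S1), amplitude(S2)
    P1 = abs(A1) ** 2        # slit C2 closed: only the path through C1
    P2 = abs(A2) ** 2        # slit C1 closed: only the path through C2
    P = abs(A1 + A2) ** 2    # both slits open: the amplitudes interfere
    return P, P1, P2

# Paths half a wave out of phase: destructive interference at this point
P, P1, P2 = detector_probabilities(0.0, cmath.pi)
print(round(P, 12), P1 + P2)  # 0.0 vs 2.0 -- P is not P1 + P2
```

Scanning S_2 − S_1 across the detector plane reproduces the fringes of panel (a), while the classical sum P_1 + P_2 of panel (d) has no such modulation.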

FIG. 8. Real (top) and imaginary (bottom) parts of the dynamic polarisability as a function of frequency for molecular hydrogen H_2 at various temperatures. The left panel presents the results for the dipole polarisability α_1(ω), while the right panel contains the results for the quadrupole polarisability α_2(ω). The black dots represent the reference polarisabilities calculated at T = 0 K. Dotted lines correspond to large time steps, while continuous lines correspond to smaller time steps. For these simulations, the time step has been adapted to the chosen temperature and varies from ∆τ = 0.02 to ∆τ = 0.05. In these plots, the frequency is measured in Hartree (1 Ha = 2 Ry = 27.2 eV). To give a sense of the span of the frequency axis, a frequency of ω = 0.1 Ha corresponds to a wavelength λ = 456 nm, while ω = 1 Ha corresponds to λ = 45 nm. This figure has been taken from Ref. 107 with permission of the author.

FIG. 9. (a) Relevant Feynman diagram for a scalar φ³ interacting QFT in the simple case V = 1, P = 2. As can be seen, the diagram contains one source term, corresponding to the current J(z) in Eq. (126), and two propagators, G_1 (black wiggly line) and G_2 (red wiggly line), the latter looping around the vertex. Exchanging the roles of G_1 and G_2 leaves the diagram, and hence the value of Z_1[J] in Eq. (126), unchanged. This diagram is known in QFT as a tadpole diagram. (b) Relevant Feynman diagram for the case V = 1 and P = 3. This diagram can be arranged in 3! = 6 different ways, by cycling through the currents J_{1,2,3} and the propagators G_1 (black wiggly line), G_2 (blue wiggly line), and G_3 (red wiggly line), without changing the value of the corresponding partition function. This diagram will be of particular importance for the case of the electromagnetic field, as its vector counterpart is the basic quantity describing second-order nonlinear processes in arbitrary media (see Sect. X).
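The counting of the 3! = 6 equivalent arrangements can be made explicit with a few lines of bookkeeping; the labels below simply mirror the currents and propagators of the caption:

```python
from itertools import permutations
from math import factorial

currents = ("J1", "J2", "J3")
propagators = ("G1", "G2", "G3")

# Each arrangement attaches one propagator to each current at the vertex;
# permuting the propagators generates every labelling of the same diagram.
arrangements = {tuple(zip(currents, p)) for p in permutations(propagators)}

print(len(arrangements), factorial(3))  # 6 6
```

Since all six labellings give the same value of the partition function, this multiplicity is precisely the combinatorial factor that accompanies the diagram.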

If we now introduce the quantity v(ω) = f(ω)/(ε_0 ω²), the term in the curly brackets of Eq. (A31) shows that the primary effect of the reservoir field is to introduce a frequency-dependent shift in the resonance frequency ω_0 of the material. The I_P-integral now reads I_P = ∫ DP exp[i Φ(P)], where the phase Φ(P) is quadratic in the polarisation and involves the kernel G(t − t′, x) acting on P(t′) (A35).
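Although the intermediate expressions are only sketched here, the Gaussian structure of I_P can be illustrated numerically. In the following minimal sketch, the integral is Wick-rotated so that it becomes a convergent real Gaussian, and a simple tridiagonal matrix stands in for the discretised kernel G(t − t′, x); neither choice reproduces the actual calculation of this appendix:

```python
import math

# Discretised sketch of a Gaussian functional integral such as
# I_P = \int DP exp[i Phi(P)] with quadratic Phi. After a Wick rotation
# it becomes a convergent real Gaussian,
#   \int dP exp(-(1/2) P^T K P) = (2*pi)^(N/2) / sqrt(det K).
# The tridiagonal kernel K below (discrete second derivative plus a
# "mass" term) is an illustrative stand-in for the true kernel
# G(t - t', x), not the expression derived in the appendix.

N, dt, m = 50, 0.1, 1.0
diag = 2.0 / dt + m * dt       # on-diagonal entry of K
off = -1.0 / dt                # off-diagonal entry of K

# det K via the three-term recurrence for leading principal minors of a
# tridiagonal matrix: D_k = diag*D_{k-1} - off^2 * D_{k-2}
d_prev2, d_prev1 = 1.0, diag
for _ in range(N - 1):
    d_prev2, d_prev1 = d_prev1, diag * d_prev1 - off * off * d_prev2

log_I = 0.5 * N * math.log(2.0 * math.pi) - 0.5 * math.log(d_prev1)
print(d_prev1 > 0.0)  # kernel positive definite -> the Gaussian converges
```

The same determinant structure is what ultimately produces the effective action for A once the quadratic fields F and P have been integrated out.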