A short introduction to the Lindblad Master Equation

The theory of open quantum systems is one of the most essential tools for the development of quantum technologies. Furthermore, the Lindblad (or Gorini-Kossakowski-Sudarshan-Lindblad) Master Equation plays a key role, as it is the most general generator of Markovian dynamics in quantum systems. In this paper, we present this equation together with its derivation and methods for solving it. The presentation tries to be as self-contained and straightforward as possible, to be useful to readers with no previous knowledge of this field.


I. INTRODUCTION
Open quantum system techniques are vital for many studies in quantum mechanics [1][2][3]. This is because closed quantum systems are just an idealisation of real systems¹, as in Nature nothing can be perfectly isolated. In practical problems, the interaction of the system of interest with the environment cannot be avoided, and we require an approach in which the environment can be effectively removed from the equations of motion.
The general problem addressed by Open Quantum Theory is sketched in Figure 1. In the most general picture, we have a total system that constitutes a closed quantum system by itself. We are mostly interested in a subsystem of the total one (which we call just "system" instead of "total system"). Therefore, the whole system is divided into our system of interest and an environment. The goal of Open Quantum Theory is to infer the equations of motion of the reduced system from the equation of motion of the total system. For practical purposes, the reduced equations of motion should be easier to solve than the full dynamics. Because of this requirement, several approximations are usually made in the derivation of the reduced dynamics.
FIG. 1: A total system divided into the system of interest, "System", and the environment.

* manzano@onsager.ugr.es
¹ The same happens with closed classical systems.
The purpose of this paper is to provide basic knowledge about the Lindblad Master Equation.
In Section II, the mathematical requirements are introduced, while Section III gives a brief review of the quantum mechanical concepts that are required to understand the paper. Section IV includes a description of a mathematical framework, the Fock-Liouville space, that is especially useful for this problem. In Section V, we define the concept of CPT-maps, derive the Lindblad Master Equation from two different approaches, and discuss several properties of the equation. Finally, Section VI is devoted to solving the master equation using different methods. To illustrate the techniques for solving the Lindblad equation, an example consisting of a two-level system with decay is analysed throughout, illustrating the content of every section. The problems proposed are solved using Mathematica notebooks that can be found at [20].

II. MATHEMATICAL BASIS
The primary mathematical tool in quantum mechanics is the theory of Hilbert spaces. This mathematical framework allows extending many results from finite linear vector spaces to infinite ones. In any case, this tutorial deals only with finite systems and, therefore, the expressions 'Hilbert space' and 'linear space' are equivalent. We assume that the reader is skilled in operating in Hilbert spaces. To go deeper into the field of Hilbert spaces, we recommend the book by Debnath and Mikusiński [21]. If the reader needs a brief review of the main concepts required for understanding this paper, we recommend Nielsen and Chuang's quantum computing book [22]. Some basic knowledge of calculus, such as integration, differentiation, and the solution of simple differential equations, is also required. To help the reader, we have made a glossary of the most used mathematical terms. It can also be used as a checklist of terms the reader should be familiar with.

Glossary:
• H represents a Hilbert space, usually the space of pure states of a system.
• |ψ⟩ ∈ H represents a vector of the Hilbert space H (a column vector).
• ⟨ψ| represents a vector of the dual space of H (a row vector).
• B(H) represents the space of bounded operators B : H → H acting on the Hilbert space.
• O† ∈ B(H) is the Hermitian conjugate of the operator O ∈ B(H).
• H ∈ B(H) is a Hermitian operator iff H = H†.
• Tr[B] represents the trace of operator B.
• ρ(H) represents the space of density matrices, meaning the space of bounded, positive operators acting on H with trace 1.
• |ρ⟩⟩ is a vector in the Fock-Liouville space.
• ⟨⟨A|B⟩⟩ = Tr[A†B] is the scalar product of operators A, B ∈ B(H) in the Fock-Liouville space.
• L̃ is the matrix representation of a superoperator in the Fock-Liouville space.

III. (VERY SHORT) INTRODUCTION TO QUANTUM MECHANICS
The purpose of this chapter is to refresh the main concepts of quantum mechanics necessary to understand the Lindblad Master Equation. Of course, this is NOT a full quantum mechanics course. If a reader has no background in this field, just reading this chapter would be insufficient to understand the remainder of this tutorial. Therefore, if the reader is unsure of his/her capacities, we recommend first going through a quantum mechanics course or carefully reading an introductory book. There are many great quantum mechanics books on the market. For beginners, we recommend Sakurai's book [23] or Nielsen and Chuang's quantum computing book [22]. For more advanced students looking for a solid mathematical description of quantum mechanical methods, we recommend Galindo and Pascual [24]. Finally, for a more philosophical discussion, you should go to Peres' book [25].
We start stating the quantum mechanics postulates that we need to understand the derivation and application of the Lindblad Master Equation. The first postulate is related to the concept of a quantum state.
Postulate 1 Associated to any isolated physical system, there is a complex Hilbert space H, known as the state space of the system. The state of the system is entirely described by a state vector, which is a unit vector of the Hilbert space (|ψ⟩ ∈ H).
As quantum mechanics is a general theory (or a set of theories), it does not tell us which Hilbert space is the proper one for each system. This is usually decided system by system. A natural question to ask is whether there is a one-to-one correspondence between unit vectors and physical states, that is, whether every unit vector corresponds to a realisable physical state. This is resolved by the following corollary, which is a primary ingredient of quantum computation theory (see Ref. [22], Chapter 7).
Corollary 1 All unit vectors of a finite Hilbert space correspond to possible physical states of a system.
Unit vectors are also called pure states. If we know the pure state of a system, we have all physical information about it, and we can calculate the probabilistic outcomes of any potential measurement (see the next postulate). This is a very improbable situation, as experimental settings are not perfect and, in most cases, we have only imperfect information about the state. Most generally, we may know that a quantum system is in one state of a set {|ψᵢ⟩} with probabilities pᵢ. Therefore, our knowledge of the system is given by an ensemble of pure states described by the set {|ψᵢ⟩, pᵢ}. If more than one pᵢ is different from zero, the state is not pure anymore, and it is called a mixed state. The mathematical tool that describes our knowledge of the system in this case is the density operator (or density matrix).
Density matrices are bounded operators that fulfil two mathematical conditions:

1. A density matrix ρ has unit trace (Tr[ρ] = 1).
2. A density matrix is a positive operator (ρ ≥ 0).

Any operator fulfilling these two properties is considered a density operator. It can be proved trivially that density matrices are also Hermitian. In a given basis, a density matrix reads

ρ = Σᵢⱼ ρᵢⱼ |i⟩⟨j|,

where the diagonal elements are called populations (ρᵢᵢ ∈ ℝ₀⁺ and Σᵢ ρᵢᵢ = 1), while the off-diagonal elements are called coherences (ρᵢⱼ ∈ ℂ and ρᵢⱼ = ρ*ⱼᵢ). Note that this notation is basis-dependent.

Box 1. State of a two-level system (qubit)
The Hilbert space of a two-level system is just the two-dimensional linear space H₂. Examples of this kind of system are spin-1/2 particles and two-level atoms. We can define a basis of it by the orthonormal vectors {|0⟩, |1⟩}. A pure state of the system is any unit vector of H₂. It can always be expressed as |ψ⟩ = a|0⟩ + b|1⟩ with a, b ∈ ℂ s.t. |a|² + |b|² = 1.
A mixed state is therefore represented by a positive, unit-trace operator ρ ∈ B(H₂).
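The distinction between pure and mixed states can be checked numerically. The following numpy sketch (illustrative, not part of the Mathematica notebooks of Ref. [20]; the state amplitudes and mixing probabilities are arbitrary choices) builds both kinds of states and verifies the density-matrix conditions; the purity Tr[ρ²] equals 1 only for the pure state.

```python
import numpy as np

# Basis states |0> and |1> as column vectors
ket0 = np.array([[1.0], [0.0]], dtype=complex)
ket1 = np.array([[0.0], [1.0]], dtype=complex)

# A pure state |psi> = a|0> + b|1> with |a|^2 + |b|^2 = 1
a, b = 1 / np.sqrt(2), 1j / np.sqrt(2)
psi = a * ket0 + b * ket1
rho_pure = psi @ psi.conj().T                  # |psi><psi|

# A mixed state: |0> with probability 3/4, |1> with probability 1/4
rho_mixed = 0.75 * (ket0 @ ket0.conj().T) + 0.25 * (ket1 @ ket1.conj().T)

for rho in (rho_pure, rho_mixed):
    assert np.isclose(np.trace(rho), 1.0)             # unit trace
    assert np.all(np.linalg.eigvalsh(rho) >= -1e-12)  # positivity
    assert np.allclose(rho, rho.conj().T)             # hermiticity

# Purity Tr[rho^2] distinguishes pure (= 1) from mixed (< 1) states
print(np.trace(rho_pure @ rho_pure).real)    # 1.0
print(np.trace(rho_mixed @ rho_mixed).real)  # 0.625
```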
Once we know the state of a system, it is natural to ask about the possible outcomes of experiments (see Ref. [23], Section 1.4).
Postulate 2 All possible measurements in a quantum system are described by a Hermitian operator or observable.
Due to the Spectral Theorem, we know that any observable O has a spectral decomposition of the form

O = Σᵢ aᵢ |aᵢ⟩⟨aᵢ|,

with aᵢ ∈ ℝ the eigenvalues of the observable and |aᵢ⟩ their corresponding eigenvectors. The probability of obtaining the result aᵢ when measuring the property described by observable O in a state |ψ⟩ is given by

P(aᵢ) = |⟨aᵢ|ψ⟩|².

After the measurement, we obtain the state |aᵢ⟩ if the outcome aᵢ was measured. This is called the post-measurement state.
This postulate allows us to calculate the possible outputs of a measurement, the probability of these outcomes, and the post-measurement state. A measurement usually changes the state, as the state can only remain unchanged if it was already in an eigenstate of the observable.
It is possible to calculate the expectation value of the outcome of a measurement defined by operator O in a state |ψ⟩ by just applying the simple formula

⟨O⟩ = ⟨ψ|O|ψ⟩.

With a little algebra, we can translate this postulate to mixed states. In this case, the probability of obtaining an output aᵢ that corresponds to an eigenvector |aᵢ⟩ is

P(aᵢ) = Tr[|aᵢ⟩⟨aᵢ| ρ],

and the expectation value of operator O is

⟨O⟩ = Tr[O ρ].

Box 2. Measurement in a two-level system.
A possible test to perform in our minimal model is to measure the energy of the system, assuming that both states have different energies. The observable corresponding to this measurement would be

O = E₀|0⟩⟨0| + E₁|1⟩⟨1|.

This operator has two eigenvalues {E₀, E₁} with two corresponding eigenvectors {|0⟩, |1⟩}.
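The measurement formulas of Postulate 2 can be checked directly. In this numpy sketch (illustrative; the energy values E₀ = 0, E₁ = 1 and the test state are arbitrary choices, not from the paper), we compute the outcome probability P(E₁) = |⟨1|ψ⟩|² and verify that ⟨ψ|O|ψ⟩ and Tr[Oρ] agree.

```python
import numpy as np

# Energy observable of the two-level system (assumed values E0 = 0, E1 = 1)
E0, E1 = 0.0, 1.0
ket0 = np.array([1.0, 0.0], dtype=complex)
ket1 = np.array([0.0, 1.0], dtype=complex)
O = E0 * np.outer(ket0, ket0.conj()) + E1 * np.outer(ket1, ket1.conj())

# Test state |psi> = (|0> + |1>)/sqrt(2)
psi = (ket0 + ket1) / np.sqrt(2)

# Probability of the outcome E1: P = |<1|psi>|^2
p1 = abs(np.vdot(ket1, psi)) ** 2

# Expectation value, pure-state form <psi|O|psi> and mixed-state form Tr[O rho]
expval_pure = np.vdot(psi, O @ psi).real
rho = np.outer(psi, psi.conj())
expval_mixed = np.trace(O @ rho).real

print(p1)                         # 0.5
print(expval_pure, expval_mixed)  # both 0.5
```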
Another natural question to ask is how quantum systems evolve. The time-evolution of a pure state of a closed quantum system is given by the Schrödinger equation (see [24], Section 2.9).
Postulate 3 The time evolution of a pure state of a closed quantum system is given by the Schrödinger equation

i d|ψ(t)⟩/dt = H |ψ(t)⟩,

where H is the Hamiltonian of the system, a Hermitian operator acting on the Hilbert space of the system state (from now on we avoid including Planck's constant by selecting units such that ħ = 1).
The Hamiltonian of a system is the operator corresponding to its energy, and writing it down can be non-trivial. The Schrödinger equation can be formally solved in the following way: if at t = 0 the state of the system is given by |ψ(0)⟩, at time t it will be |ψ(t)⟩ = e^{−iHt}|ψ(0)⟩.
As H is a Hermitian operator, the operator U = e −iHt is unitary. This gives us another way of phrasing Postulate 3.

Postulate 3'
The evolution of a closed system is given by a unitary operator acting on the Hilbert space of the system,

|ψ(t)⟩ = U(t)|ψ(0)⟩,  with U(t) = e^{−iHt}, U U† = U†U = 1.

It is easy to prove that unitary operators preserve the norm of vectors and, therefore, transform pure states into pure states. As we did with the state of a system, it is reasonable to wonder if any unitary operator corresponds to the evolution of a real physical system. The answer is yes.
Lemma 1 All unitary evolutions of states belonging to a finite Hilbert space can be implemented in several physical realisations, such as photons and cold atoms.
The proof of this lemma can be found at [22].
The time evolution of a mixed state can be calculated just by combining Eqs. (10) and (1), giving the von Neumann equation

ρ̇(t) = −i [H, ρ(t)] ≡ L ρ(t),   (13)

where we have used the commutator [A, B] = AB − BA, and L is the so-called Liouvillian superoperator. It is easy to prove that Hamiltonian dynamics does not change the purity of a system:

d/dt Tr[ρ²] = 2 Tr[ρ̇ ρ] = −2i Tr[Hρρ − ρHρ] = 0,

where we have used the cyclic property of the trace. This result illustrates that the mixedness of a state does not change under Hamiltonian evolution.
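The invariance of the purity under the von Neumann equation can be verified numerically. This numpy sketch (illustrative; the random Hamiltonian, state, and time t are arbitrary choices) evolves a mixed state with the exact solution ρ(t) = U ρ U† and checks that Tr[ρ²] is unchanged.

```python
import numpy as np

# Random Hermitian Hamiltonian and a random mixed density matrix
rng = np.random.default_rng(0)
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
H = (A + A.conj().T) / 2                 # Hermitian
B = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
rho = B @ B.conj().T
rho /= np.trace(rho)                     # positive, unit trace

# U = exp(-iHt) built from the spectral decomposition of H
t = 1.3
w, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T

rho_t = U @ rho @ U.conj().T             # solution of the von Neumann equation

purity0 = np.trace(rho @ rho).real
purity_t = np.trace(rho_t @ rho_t).real
print(purity0, purity_t)                 # equal: unitary evolution preserves purity
```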
Box 3. Time evolution of a two-level system.
The evolution of our isolated two-level system is described by its Hamiltonian

H = E₀|0⟩⟨0| + E₁|1⟩⟨1|.

As the states |0⟩ and |1⟩ are Hamiltonian eigenstates, if at t = 0 the atom is in the excited state |ψ(0)⟩ = |1⟩, after a time t the state will be |ψ(t)⟩ = e^{−iHt}|1⟩ = e^{−iE₁t}|1⟩.
As the system was already in an eigenvector of the Hamiltonian, its time evolution consists only of adding a global phase to the state, without changing its physical properties. (If an excited state does not change, why do atoms decay?) Without losing any generality, we can fix the energy of the ground state to zero, obtaining

H = E |1⟩⟨1|,

with E ≡ E₁. To make the model more interesting, we can include a driving that coherently switches between both states. The total Hamiltonian would then be

H = E |1⟩⟨1| + Ω (|0⟩⟨1| + |1⟩⟨0|),

where Ω is the driving frequency. By using the von Neumann equation (13), we can calculate the populations (ρ₀₀, ρ₁₁) as a function of time. The system is then driven between the states, and the populations present Rabi oscillations, as shown in Fig. 2. Finally, as we are interested in composite quantum systems, we need to postulate how to work with them.
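The Rabi oscillations of the driven two-level system in Box 3 can be reproduced with a short numpy sketch (illustrative, not the paper's notebooks; the parameter values are arbitrary, with E = 0 chosen so the drive is resonant, in which case the excited population follows cos²(Ωt)).

```python
import numpy as np

# Driven two-level system: H = E|1><1| + Omega(|0><1| + |1><0|)
E, Omega = 0.0, 1.0   # illustrative values; E = 0 puts the drive on resonance
H = np.array([[0, Omega], [Omega, E]], dtype=complex)

rho0 = np.array([[0, 0], [0, 1]], dtype=complex)   # start in the excited state |1>

def evolve(rho, H, t):
    """Exact solution of the von Neumann equation: rho(t) = U rho U^dagger."""
    w, V = np.linalg.eigh(H)
    U = V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T
    return U @ rho @ U.conj().T

# Rabi oscillations: on resonance the excited population is cos^2(Omega t)
for t in np.linspace(0, np.pi, 5):
    p1 = evolve(rho0, H, t)[1, 1].real
    print(f"t={t:.3f}  rho_11={p1:.3f}")   # oscillates between 1 and 0
```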

Postulate 4
The state space of a composite physical system, composed of N subsystems, is the tensor product of the state spaces of its components, H = H₁ ⊗ H₂ ⊗ ⋯ ⊗ H_N. The state of the composite physical system is given by a unit vector of H. Moreover, if each subsystem i is prepared in the state |ψᵢ⟩ ∈ Hᵢ, the total state is given by |ψ⟩ = |ψ₁⟩ ⊗ |ψ₂⟩ ⊗ ⋯ ⊗ |ψ_N⟩.
The symbol ⊗ represents the tensor product of Hilbert spaces, vectors, and operators. If we have a composite mixed state where each component is prepared in the state ρᵢ, the total state is given by ρ = ρ₁ ⊗ ρ₂ ⊗ ⋯ ⊗ ρ_N. States that can be expressed in the simple form |ψ⟩ = |ψ₁⟩ ⊗ |ψ₂⟩, in some specific basis, are very particular, and they are called separable states. (For this discussion, we use a bipartite system as an example. The extension to a general multipartite system is straightforward.) In general, an arbitrary state has to be described as |ψ⟩ = Σᵢⱼ cᵢⱼ |ψᵢ⟩ ⊗ |ψⱼ⟩ (or ρ = Σᵢⱼ pᵢⱼ ρᵢ ⊗ ρⱼ for mixed states). Non-separable states are called entangled states.
Now that we know how to compose systems, we may be interested in going the other way around. If we have a system belonging to a bipartite Hilbert space of the form H = H_a ⊗ H_b, we may want to study some properties of the subsystem corresponding to one of the subspaces. To do so, we define the reduced density matrix. If the state of our system is described by a density matrix ρ, the reduced density operator of subsystem a is defined by

ρ_a ≡ Tr_b[ρ],

where Tr_b is the partial trace over subspace b, defined as [22]

Tr_b[|a₁⟩⟨a₂| ⊗ |b₁⟩⟨b₂|] ≡ |a₁⟩⟨a₂| Tr[|b₁⟩⟨b₂|].

The concepts of reduced density matrix and partial trace are essential in the study of open quantum systems. If we want to calculate the equations of motion of a system affected by an environment, we should trace out this environment and deal only with the reduced density matrix of the system. This is the main idea of the theory of open quantum systems.

Box 4. Two two-level atoms
If we have two two-level systems, the total Hilbert space is given by H = H₂ ⊗ H₂. A basis of this Hilbert space is given by the set {|00⟩, |01⟩, |10⟩, |11⟩} (with |ij⟩ ≡ |i⟩ ⊗ |j⟩). If both systems are in their ground state, we can describe the total state by the separable vector |ψ⟩ = |00⟩. A more complex, but still separable, state can be formed if both systems are in superposition, e.g. |ψ⟩ = ½ (|0⟩ + |1⟩) ⊗ (|0⟩ + |1⟩).
An entangled state would be

|ψ_E⟩ = (|01⟩ + |10⟩)/√2.

This state cannot be separated into a direct product of the subsystems. If we want to obtain a reduced description of subsystem 1 (or 2), we have to use the partial trace. To do so, we first need to calculate the density matrix corresponding to the pure state |ψ_E⟩,

ρ_E = |ψ_E⟩⟨ψ_E| = ½ (|01⟩⟨01| + |01⟩⟨10| + |10⟩⟨01| + |10⟩⟨10|).

We can now calculate the reduced density matrix of subsystem 1 by using the partial trace,

ρ₁ = Tr₂[ρ_E] = ½ (|0⟩⟨0| + |1⟩⟨1|) = 1/2.
From this reduced density matrix, we can calculate all the measurement statistics of subsystem 1.
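The partial trace of Box 4 can be implemented in a few lines. This numpy sketch (illustrative; the `partial_trace_b` helper and the qubit-pair reshape convention are our own choices, not from the paper's notebooks) traces out the second qubit of the entangled state and recovers the maximally mixed reduced state.

```python
import numpy as np

def partial_trace_b(rho, dim_a, dim_b):
    """Reduced density matrix of subsystem a: Tr_b[rho] on H_a (x) H_b."""
    r = rho.reshape(dim_a, dim_b, dim_a, dim_b)  # indices (i, k, j, l)
    return np.einsum('ikjk->ij', r)              # sum over the b indices

# Entangled state |psi_E> = (|01> + |10>)/sqrt(2), basis {|00>,|01>,|10>,|11>}
psi = np.array([0, 1, 1, 0], dtype=complex) / np.sqrt(2)
rho = np.outer(psi, psi.conj())

rho1 = partial_trace_b(rho, 2, 2)
print(rho1.real)   # maximally mixed: [[0.5, 0], [0, 0.5]]
```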

IV. THE FOCK-LIOUVILLE HILBERT SPACE. THE LIOUVILLE SUPEROPERATOR
In this section, we revise a useful framework for both analytical and numerical calculations. It is clear that some linear combinations of density matrices are valid density matrices (as long as they preserve positivity and unit trace). Because of that, we can create a Hilbert space of density matrices just by defining a scalar product. This is clear for finite systems, because in that case a linear space with a scalar product and a Hilbert space are the same thing, and it also happens to be true for infinite spaces. This allows us to define a linear space of matrices, converting the matrices effectively into vectors (ρ → |ρ⟩⟩). This is called the Fock-Liouville space (FLS). The scalar product of two matrices φ and ρ is usually defined as ⟨⟨φ|ρ⟩⟩ ≡ Tr[φ†ρ]. The Liouville superoperator from Eq. (13) is now an operator acting on the Hilbert space of density matrices. The main utility of the FLS is to allow a matrix representation of the evolution operator.
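A convenient way to build the matrix representation of the Liouvillian is vectorisation. In the row-stacking convention, vec(AρB) = (A ⊗ Bᵀ) vec(ρ), so L̃ = −i(H ⊗ 1 − 1 ⊗ Hᵀ). The sketch below (illustrative; the convention and the Hamiltonian values are our own choices, not from the paper's notebooks) checks this against the direct commutator.

```python
import numpy as np

# Two-level Hamiltonian with driving (illustrative values E, Omega)
E, Omega = 1.0, 0.5
H = np.array([[0, Omega], [Omega, E]], dtype=complex)
I = np.eye(2)

# Row-stacking vectorisation: |rho>> = (rho_00, rho_01, rho_10, rho_11)^T.
# In this convention vec(A rho B) = (A kron B^T) |rho>>, so the Liouvillian
# L rho = -i[H, rho] becomes the 4x4 matrix:
L = -1j * (np.kron(H, I) - np.kron(I, H.T))

# Check against the direct commutator on a test density matrix
rho = np.array([[0.7, 0.2 - 0.1j], [0.2 + 0.1j, 0.3]], dtype=complex)
lhs = (L @ rho.reshape(-1)).reshape(2, 2)
rhs = -1j * (H @ rho - rho @ H)
print(np.allclose(lhs, rhs))   # True
```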
Box 5. Time evolution of a two-level system.
The density matrix of our system (3) can be expressed in the FLS as

|ρ⟩⟩ = (ρ₀₀, ρ₀₁, ρ₁₀, ρ₁₁)ᵀ.

The time evolution of a mixed state is given by the von Neumann equation (13). The Liouvillian superoperator can now be expressed as a matrix, where each row is calculated just by observing the output of the operation −i[H, ρ] on the computational basis of the density-matrix space. The time evolution of the system now corresponds to the matrix equation d|ρ⟩⟩/dt = L̃|ρ⟩⟩.

V. CPT-MAPS AND THE LINDBLAD MASTER EQUATION.

A. Completely positive maps
The problem we want to study is to find the most general Markovian transformation between density matrices. Until now, we have seen that quantum systems can evolve in two ways: by coherent evolution (Postulate 3) and by collapsing after a measurement (Postulate 2). Many efforts have been made to unify these two kinds of evolution [16], without a definite answer so far. It is reasonable to ask what is the most general transformation that can be performed on a quantum system, and what dynamical equation describes this transformation.
We are looking for maps that transform density matrices into density matrices. We define ρ(H) as the space of all density matrices in the Hilbert space H. Therefore, we are looking for a map of this space onto itself, V : ρ(H) → ρ(H).
To ensure that the output of the map is a density matrix, it should fulfil the following properties:

• Trace-preserving: Tr[Vρ] = Tr[ρ] = 1.
• Completely positive (see below).

Any map that fulfils these two properties is called a completely positive and trace-preserving map (CPT-map). The first property is quite apparent, and it does not require more thinking. The second one is a little more complicated, and it requires an intermediate definition.
This definition is based on the idea that, as density matrices are positive, any physical map should transform positive matrices into positive matrices. One could naively think that this condition should be sufficient to guarantee the physical validity of a map. It is not. As we know, there exist composite systems, and our density matrix could be the partial trace of a more complicated state. Because of that, we need to impose a more general condition.
To prove that not all positive maps are completely positive, we need a counterexample. A canonical example of an operation that is positive but fails to be completely positive is matrix transposition. If we have a Bell state of the form |ψ_B⟩ = (|01⟩ + |10⟩)/√2, its density matrix can be expressed as

ρ_B = ½ (|01⟩⟨01| + |01⟩⟨10| + |10⟩⟨01| + |10⟩⟨10|),

with the matrix representation (in the basis {|00⟩, |01⟩, |10⟩, |11⟩})

        ( 0 0 0 0 )
ρ_B = ½ ( 0 1 1 0 )
        ( 0 1 1 0 )
        ( 0 0 0 0 )

which is positive. It is easy to check that the transformation 1 ⊗ T₂, meaning that we transpose only the matrix of the second subsystem, leads to a non-positive matrix. The transposition moves the coherences to the anti-diagonal corners, giving the total matrix

                 ( 0 0 0 1 )
(1 ⊗ T₂) ρ_B = ½ ( 0 1 0 0 )
                 ( 0 0 1 0 )
                 ( 1 0 0 0 )

which has −1/2 as an eigenvalue. This example illustrates how the non-separability of quantum mechanics restricts the operations we can perform on a subsystem. By imposing these two conditions, we can derive a unique master equation as the generator of any possible Markovian CPT-map.
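The transposition counterexample can be checked numerically. This numpy sketch (illustrative; the reshape/transpose implementation of the partial transposition is our own choice) builds the Bell state, applies the partial transposition to the second qubit, and exhibits the negative eigenvalue.

```python
import numpy as np

# Bell state |psi_B> = (|01> + |10>)/sqrt(2), basis {|00>,|01>,|10>,|11>}
psi = np.array([0, 1, 1, 0], dtype=complex) / np.sqrt(2)
rho = np.outer(psi, psi.conj())

# Partial transposition of the second qubit: with indices rho[i,k,j,l],
# the map (1 (x) T_2) swaps k and l
rho_pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)

eigs_rho = np.linalg.eigvalsh(rho)
eigs_pt = np.linalg.eigvalsh(rho_pt)
print(np.round(eigs_rho, 3))   # [0, 0, 0, 1]: rho is positive
print(np.round(eigs_pt, 3))    # [-0.5, 0.5, 0.5, 0.5]: the partial transpose is not
```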

B. Derivation of the Lindblad Equation from microscopic dynamics
The most common derivation of the Lindblad master equation is based on Open Quantum Theory. The Lindblad equation is then an effective motion equation for a subsystem that belongs to a more complicated system. This derivation can be found in several textbooks like Breuer and Petruccione's [2] as well as Gardiner and Zoller's [1]. Here, we follow the derivation presented in Ref. [26]. Our initial point is displayed in Figure 3. A total system belonging to a Hilbert space H T is divided into our system of interest, belonging to a Hilbert space H, and the environment living in H E .
The evolution of the total system is given by the von Neumann equation (13).
As we are interested in the dynamics of the system without the environment, we trace over the environment degrees of freedom to obtain the reduced density matrix of the system, ρ(t) = Tr_E[ρ_T]. To separate the effect of the total Hamiltonian on the system and the environment, we divide it in the form H_T = H_S ⊗ 1_E + 1_S ⊗ H_E + α H_I, with H_S ∈ B(H), H_E ∈ B(H_E), and H_I ∈ B(H_T), and with α a measure of the strength of the system-environment interaction. Therefore, we have a part acting on the system, a part acting on the environment, and the interaction term. Without losing any generality, the interaction term can be decomposed in the following way:

H_I = Σᵢ Sᵢ ⊗ Eᵢ,   (34)

with Sᵢ ∈ B(H) and Eᵢ ∈ B(H_E). To better describe the dynamics of the system, it is useful to work in the interaction picture (see Ref. [24] for a detailed explanation of the Schrödinger, Heisenberg, and interaction pictures). In the interaction picture, density matrices evolve with time due to the interaction Hamiltonian, while operators evolve with the system and environment Hamiltonians. An arbitrary operator O ∈ B(H_T) is represented in this picture by the time-dependent operator Ô(t), and its time evolution is

Ô(t) = e^{i(H_S + H_E)t} O e^{−i(H_S + H_E)t}.   (35)

The time evolution of the total density matrix is given in this picture by

dρ̂_T(t)/dt = −iα [Ĥ_I(t), ρ̂_T(t)].   (36)

This equation can be easily integrated to give

ρ̂_T(t) = ρ̂_T(0) − iα ∫₀ᵗ ds [Ĥ_I(s), ρ̂_T(s)].   (37)

This formula is exact, but it still has the complication of calculating an integral in the total Hilbert space. It is also troublesome that the state ρ̂_T(t) depends on the integration of the density matrix over all previous times. To avoid that, we can insert Eq. (37) into Eq. (36), giving

dρ̂_T(t)/dt = −iα [Ĥ_I(t), ρ̂_T(0)] − α² ∫₀ᵗ ds [Ĥ_I(t), [Ĥ_I(s), ρ̂_T(s)]].   (38)

By applying this method one more time, we obtain

dρ̂_T(t)/dt = −iα [Ĥ_I(t), ρ̂_T(0)] − α² ∫₀ᵗ ds [Ĥ_I(t), [Ĥ_I(s), ρ̂_T(0)]] + O(α³).   (39)

After this substitution, the integration over the previous states of the system is included only in the terms that are O(α³) or higher. At this moment, we perform our first approximation, by considering that the strength of the interaction between the system and the environment is small. Therefore, we can drop the high-order terms in Eq. (39).
Under this approximation we have

dρ̂_T(t)/dt = −iα [Ĥ_I(t), ρ̂_T(0)] − α² ∫₀ᵗ ds [Ĥ_I(t), [Ĥ_I(s), ρ̂_T(s)]].   (40)

We are interested in finding an equation of motion for ρ, so we trace over the environment degrees of freedom,

dρ̂(t)/dt = −iα Tr_E [Ĥ_I(t), ρ̂_T(0)] − α² ∫₀ᵗ ds Tr_E [Ĥ_I(t), [Ĥ_I(s), ρ̂_T(s)]].   (41)

This is not a closed time-evolution equation for ρ̂(t), because the time derivative still depends on the full density matrix ρ̂_T(t). To proceed, we need to make two more assumptions. First, we assume that at t = 0 the system and the environment are in a separable state of the form ρ_T(0) = ρ(0) ⊗ ρ_E(0). This means that there are no correlations between the system and the environment. This may be the case if the system and the environment have not interacted at previous times, or if the correlations between them are short-lived. Second, we assume that the initial state of the environment is thermal, meaning that it is described by a density matrix of the form ρ_E(0) = exp(−H_E/T)/Tr[exp(−H_E/T)], where T is the temperature and the Boltzmann constant is set to k_B = 1. Using these assumptions, and the expansion of H_I (34), we can calculate the first term on the r.h.s. of Eq. (41),

Tr_E [Ĥ_I(t), ρ̂_T(0)] = Σᵢ (Ŝᵢ(t) ρ̂(0) Tr[Êᵢ(t) ρ_E(0)] − ρ̂(0) Ŝᵢ(t) Tr[ρ_E(0) Êᵢ(t)]).   (42)
To calculate the explicit value of this term, we may use that ⟨Eᵢ⟩ = Tr[Eᵢ ρ_E(0)] = 0 for all values of i. This looks like a strong assumption, but it is not: if our total Hamiltonian does not fulfil it, we can always rewrite it by shifting Eᵢ → Eᵢ − ⟨Eᵢ⟩ 1_E and absorbing the term α Σᵢ ⟨Eᵢ⟩ Sᵢ into the system Hamiltonian, which is changed just by the addition of an energy shift that does not affect the system dynamics. Because of that, we can assume ⟨Eᵢ⟩ = 0 for all i. Using the cyclic property of the trace, it is then easy to prove that the term in Eq. (42) is equal to zero, and the equation of motion (41) reduces to

dρ̂(t)/dt = −α² ∫₀ᵗ ds Tr_E [Ĥ_I(t), [Ĥ_I(s), ρ̂_T(s)]].   (43)

This equation still includes the entire state of the system and environment. To unravel the system from the environment, we have to make a more restrictive assumption. As we are working in the weak-coupling regime, we may suppose that the system and the environment are uncorrelated during the whole time evolution. Of course, this is only an approximation: due to the interaction Hamiltonian, some correlations between system and environment are expected to appear. On the other hand, we may assume that the correlation timescale (τ_corr) and the relaxation timescale of the environment (τ_rel) are much smaller than the typical system timescale (τ_sys), as the coupling strength is very small (α ≪ 1). Under this strong assumption, we can take the environment state to be always thermal and decoupled from the system state, ρ̂_T(t) = ρ̂(t) ⊗ ρ̂_E(0). Eq. (43) then transforms into

dρ̂(t)/dt = −α² ∫₀ᵗ ds Tr_E [Ĥ_I(t), [Ĥ_I(s), ρ̂(t) ⊗ ρ̂_E(0)]].   (44)

The equation of motion is now independent for the system and local in time. It is still non-Markovian, as it depends on the initial preparation time of the system. We can obtain a Markovian equation by realising that the kernel of the integration decays for s ≫ τ_corr, so we can extend the upper limit of the integration to infinity with no real change in the outcome. By doing so, and by changing the integration variable s → t − s, we obtain the famous Redfield equation [?]:

dρ̂(t)/dt = −α² ∫₀^∞ ds Tr_E [Ĥ_I(t), [Ĥ_I(t − s), ρ̂(t) ⊗ ρ̂_E(0)]].   (45)
It is known that this equation does not warrant the positivity of the map, and it sometimes gives rise to density matrices that are non-positive. To ensure complete positivity, we need to perform one further approximation, the rotating wave approximation. To do so, we use the spectrum of the superoperator H̃ defined by H̃ A ≡ [H_S, A], ∀A ∈ B(H).
The eigenvectors of this superoperator form a complete basis of the space B(H) and, therefore, we can expand the system part of the interaction operators from Eq. (34) in this basis,

Sᵢ = Σ_ω Sᵢ(ω),   (46)

where the operators Sᵢ(ω) fulfil

[H_S, Sᵢ(ω)] = −ω Sᵢ(ω),   (47)

with ω the eigenvalues of H̃. It is easy to take also the Hermitian conjugate,

[H_S, Sᵢ†(ω)] = +ω Sᵢ†(ω).   (48)

To apply this decomposition, we need to change back to the Schrödinger picture the part of the interaction Hamiltonian acting on the system Hilbert space. This is done through Ŝ_k(t) = e^{iH_S t} S_k e^{−iH_S t}. Using the eigen-expansion (46), we arrive at

Ŝ_k(t) = Σ_ω e^{−iωt} S_k(ω),  Ŝ_k†(t) = Σ_ω e^{iωt} S_k†(ω).   (49)

To combine this decomposition with the Redfield equation (45), we first expand the commutators:

dρ̂(t)/dt = α² ∫₀^∞ ds Tr_E [Ĥ_I(t − s) ρ̂(t) ⊗ ρ̂_E(0) Ĥ_I(t) − Ĥ_I(t) Ĥ_I(t − s) ρ̂(t) ⊗ ρ̂_E(0)] + h.c.   (50)
We now apply the eigenvalue decomposition in terms of S_k(ω) for Ĥ_I(t − s) and in terms of S_k†(ω′) for Ĥ_I(t). By using the permutation property of the trace and the fact that [H_E, ρ_E(0)] = 0, and after some non-trivial algebra, we obtain

dρ̂(t)/dt = Σ_{ω,ω′} Σ_{k,l} e^{i(ω′−ω)t} Γ_{kl}(ω) [S_l(ω) ρ̂(t) S_k†(ω′) − S_k†(ω′) S_l(ω) ρ̂(t)] + h.c.,   (51)

where the effect of the environment has been absorbed into the factors

Γ_{kl}(ω) ≡ α² ∫₀^∞ ds e^{iωs} Tr[Ê_k†(s) E_l ρ_E(0)],   (52)

where we write the environment operators of the interaction Hamiltonian in the interaction picture (Ê_l(t) = e^{iH_E t} E_l e^{−iH_E t}). At this point, we can already perform the rotating wave approximation. Considering the time dependence in Eq. (51), we conclude that the terms with |ω − ω′| ≫ α² oscillate much faster than the typical timescale of the system evolution; therefore, they do not contribute to the evolution of the system. In the low-coupling regime (α → 0), we can consider that only the resonant terms, ω = ω′, contribute to the dynamics and remove all the others. Applying this approximation, Eq. (51) reduces to

dρ̂(t)/dt = Σ_ω Σ_{k,l} Γ_{kl}(ω) [S_l(ω) ρ̂(t) S_k†(ω) − S_k†(ω) S_l(ω) ρ̂(t)] + h.c.   (53)

To divide the dynamics into Hamiltonian and non-Hamiltonian parts, we now decompose the operators Γ_{kl} into Hermitian and non-Hermitian parts, Γ_{kl}(ω) = ½ γ_{kl}(ω) + i π_{kl}(ω), with

γ_{kl}(ω) ≡ Γ_{kl}(ω) + Γ*_{lk}(ω),  π_{kl}(ω) ≡ (1/2i) (Γ_{kl}(ω) − Γ*_{lk}(ω)).   (54)

With these definitions, we can separate the Hermitian and non-Hermitian parts of the dynamics and transform back to the Schrödinger picture:

dρ(t)/dt = −i [H_S + H_Ls, ρ(t)] + Σ_ω Σ_{k,l} γ_{kl}(ω) ( S_l(ω) ρ(t) S_k†(ω) − ½ {S_k†(ω) S_l(ω), ρ(t)} ).   (55)

The Hamiltonian dynamics is now influenced by a term H_Ls = Σ_ω Σ_{k,l} π_{kl}(ω) S_k†(ω) S_l(ω). This is usually called the Lamb shift Hamiltonian, and its role is to renormalise the system energy levels due to the interaction with the environment. Eq. (55) is the first version of the Markovian master equation, but it is not in the Lindblad form yet.
It can be easily proved that the matrix formed by the coefficients γ_{kl}(ω) is positive, as they are the Fourier transform of the positive functions Tr[Ê_k†(s) E_l ρ_E(0)]. Therefore, this matrix can be diagonalised: we can find a unitary operator O s.t.

O γ(ω) O† = diag(γ₁(ω), …, γ_N(ω)),   (56)

with γ_k(ω) ≥ 0.
We can now write the master equation in diagonal form,

dρ(t)/dt = −i [H_S + H_Ls, ρ(t)] + Σ_ω Σ_k γ_k(ω) ( L_k(ω) ρ(t) L_k†(ω) − ½ {L_k†(ω) L_k(ω), ρ(t)} ),   (57)

where the new operators L_k(ω) are the linear combinations of the S_l(ω) defined by O. This is the celebrated Lindblad (or Lindblad-Gorini-Kossakowski-Sudarshan) Master Equation.
In the simplest case, there will be only one relevant frequency ω, and the equation can be further simplified to

dρ(t)/dt = −i [H, ρ(t)] + Σᵢ γᵢ ( Lᵢ ρ(t) Lᵢ† − ½ {Lᵢ† Lᵢ, ρ(t)} ).   (58)

The operators Lᵢ are usually referred to as jump operators.
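The jump-operator form can be illustrated with the paper's running example, a two-level system with decay. In this numpy sketch (illustrative; H = 0, γ = 1, and the Runge-Kutta integrator are our own choices), the single jump operator L = σ⁻ = |0⟩⟨1| produces the expected exponential decay ρ₁₁(t) = e^{−γt}.

```python
import numpy as np

# Two-level system with decay: jump operator L = sigma_minus = |0><1|,
# decay rate gamma; H = 0 for simplicity (illustrative parameters)
gamma = 1.0
sm = np.array([[0, 1], [0, 0]], dtype=complex)   # sigma_minus

def lindblad_rhs(rho):
    """Right-hand side of the Lindblad equation with a single jump operator."""
    diss = sm @ rho @ sm.conj().T \
        - 0.5 * (sm.conj().T @ sm @ rho + rho @ sm.conj().T @ sm)
    return gamma * diss                          # H = 0, so no -i[H, rho] term

# Fourth-order Runge-Kutta integration from the excited state |1>
rho = np.array([[0, 0], [0, 1]], dtype=complex)
dt, T = 0.001, 2.0
for _ in range(int(T / dt)):
    k1 = lindblad_rhs(rho)
    k2 = lindblad_rhs(rho + 0.5 * dt * k1)
    k3 = lindblad_rhs(rho + 0.5 * dt * k2)
    k4 = lindblad_rhs(rho + dt * k3)
    rho = rho + (dt / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

# The excited population decays exponentially: rho_11(t) = exp(-gamma t)
print(rho[1, 1].real, np.exp(-gamma * T))
```

Note that the trace stays equal to one during the whole evolution, as it must for a CPT generator.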

C. Derivation of the Lindblad Equation as a CPT generator
The second way of deriving the Lindblad equation starts from the following question: what is the most general (Markovian) way of mapping density matrices onto density matrices? This is usually the approach taken by quantum information researchers, who look for general transformations of quantum systems. We analyse this problem following mainly Ref. [28].
To start, we need to know what is the form of a general CPT-map.

Lemma 2 Any map V : B(H) → B(H) that can be written in the form Vρ = Σ_l V_l ρ V_l†, with V_l ∈ B(H), is positive.
The proof of the lemma requires a little algebra and the fact that an operator ρ is positive iff ⟨φ|ρ|φ⟩ ≥ 0 for all |φ⟩ ∈ H. For any |ψ⟩ ∈ H,

⟨ψ| Vρ |ψ⟩ = Σ_l ⟨ψ| V_l ρ V_l† |ψ⟩ = Σ_l ⟨φ_l| ρ |φ_l⟩ ≥ 0,

where |φ_l⟩ ≡ V_l†|ψ⟩. Therefore, if ρ is positive, the output of the map is also positive.

End of the proof.
This is a sufficient condition for the positivity of a map, but it is not a necessary one: there could be maps that cannot be written in this form but are still positive. To go further, we need a more general condition, and it comes in the form of the next theorem.

Theorem 1 (Choi) A map V : B(H) → B(H) is completely positive if and only if it can be written in the form Vρ = Σ_l V_l ρ V_l†.

The proof of this theorem requires some algebra.

Proof
The 'if' implication is a trivial consequence of the previous lemma. To prove the converse, we need to extend the dimension of our system by the use of an auxiliary system. If d is the dimension of the Hilbert space of pure states, H, we define a new Hilbert space H_A of the same dimension. We define a maximally entangled pure state in the bipartition H_A ⊗ H as

|Γ⟩ = (1/√d) Σᵢ |i_A⟩ ⊗ |i⟩,   (60)

where {|i⟩} and {|i_A⟩} are arbitrary orthonormal bases of H and H_A. We can extend the action of our original map V, which acts on B(H), to the extended Hilbert space by defining the map

V₂ ≡ 1_A ⊗ V.   (61)

Note that the idea behind this map is to leave the auxiliary subsystem invariant while applying the original map to the original system. This map is positive because V is completely positive. This may appear trivial, but as explained before, complete positivity is a more restrictive property than positivity, and we are looking for a condition that ensures complete positivity.
We can now apply the extended map to the density matrix corresponding to the maximally entangled state (60), obtaining

V₂ (|Γ⟩⟨Γ|) = (1/d) Σ_{i,j} |i_A⟩⟨j_A| ⊗ V(|i⟩⟨j|).   (62)

Now we can use the maximal entanglement of the state |Γ⟩ to relate the original map V to the action V₂(|Γ⟩⟨Γ|), by taking matrix elements with respect to H_A:

⟨i_A| V₂ (|Γ⟩⟨Γ|) |j_A⟩ = (1/d) V(|i⟩⟨j|).   (63)
To relate this operation to the action of the map on an arbitrary vector |ψ⟩ ∈ H_A ⊗ H, we can expand it in this basis as

|ψ⟩ = Σ_{i,j} c_{ij} |i_A⟩ ⊗ |j⟩.   (64)

We can also define an operator V_ψ ∈ B(H) s.t. it transforms |Γ⟩ into |ψ⟩, (1_A ⊗ V_ψ)|Γ⟩ = |ψ⟩. Its explicit action is written as

V_ψ |i⟩ = √d Σⱼ c_{ij} |j⟩.   (65)

At this point, we have related the vectors in the extended space H_A ⊗ H to operators acting on H. This can only be done because the vector |Γ⟩ is maximally entangled. We go now back to our extended map V₂. Its action on |Γ⟩⟨Γ| is given by Eq. (62), and as it is a positive map, its output can be expanded as

V₂ (|Γ⟩⟨Γ|) = Σ_l |v_l⟩⟨v_l|,   (66)

with |v_l⟩ ∈ H_A ⊗ H. The vectors |v_l⟩ can be related to operators V_l ∈ B(H) as in Eq. (65), (1_A ⊗ V_l)|Γ⟩ = |v_l⟩.
Based on this result, we can calculate the product of an arbitrary basis vector |i⟩_A ∈ H_A with |v_l⟩,

⟨i|_A |v_l⟩ = (1/√d) V_|v_l⟩ |i⟩.
This is the last ingredient we need for the proof.
We come back to the original question: we want to characterise the map V. We do so by applying it to an arbitrary basis element |i⟩⟨j| of B(H). Combining Eq. (62) with the previous results,

V (|i⟩⟨j|) = d ⟨i|_A V₂ (|Γ⟩⟨Γ|) |j⟩_A = d Σ_l ⟨i|_A |v_l⟩ ⟨v_l| |j⟩_A = Σ_l V_|v_l⟩ |i⟩⟨j| V_|v_l⟩†.

As |i⟩⟨j| is an arbitrary element of a basis, any operator can be expanded in this basis. Therefore, it is straightforward to prove that

Vρ = Σ_l V_l ρ V_l†, with V_l ≡ V_|v_l⟩.

End of the proof.
Thanks to Choi's Theorem, we know the general form of CP-maps, but there is still an issue to address. As density matrices should have trace one, we need to require any physical map to also be trace-preserving. This requirement gives us a new constraint that completely defines all CPT-maps, and it comes from the following theorem.

Theorem. A completely positive map of the form Vρ = Σ_l V_l ρ V_l† (70) is also trace-preserving if and only if the operators V_l satisfy

Σ_l V_l† V_l = 1. (71)

Proof.
We have already proved that this is a completely positive map; we only need to prove that it is also trace-preserving and that all trace-preserving maps fulfil Eq. (71). The 'if' direction is quite simple, applying the cyclic permutation and linearity properties of the trace:

Tr [Vρ] = Tr [Σ_l V_l ρ V_l†] = Tr [(Σ_l V_l† V_l) ρ] = Tr [ρ].
We also have to prove that any map of the form (70) is trace-preserving only if the operators V_l fulfil (71). We start by noting that if the map is trace-preserving, by applying it to an arbitrary element |i⟩⟨j| of a basis of B(H) we should obtain

Tr [V (|i⟩⟨j|)] = Tr [|i⟩⟨j|] = δ_{ij}.

As the map has the form given by (70), we can calculate this same trace in an alternative way,

Tr [Σ_l V_l |i⟩⟨j| V_l†] = Σ_k Σ_l ⟨k| V_l |i⟩⟨j| V_l† |k⟩ = ⟨j| (Σ_l V_l† V_l) |i⟩,

where {|k⟩} is an arbitrary basis of H. As both equalities should hold, we obtain

⟨j| (Σ_l V_l† V_l) |i⟩ = δ_{ij},

and therefore condition (71) should be fulfilled.
End of the proof.
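The condition just proved can be checked numerically for a standard example. The sketch below (our own illustration, not taken from the text) uses the amplitude-damping channel of a qubit, with a decay probability p that is an arbitrary choice:

```python
import numpy as np

# Operators of the amplitude-damping channel (a standard textbook example).
p = 0.3  # decay probability (illustrative value)
K0 = np.array([[1, 0], [0, np.sqrt(1 - p)]], dtype=complex)
K1 = np.array([[0, np.sqrt(p)], [0, 0]], dtype=complex)
kraus = [K0, K1]

# Completeness relation (71): sum_l K_l^dagger K_l = identity.
completeness = sum(K.conj().T @ K for K in kraus)
print(np.allclose(completeness, np.eye(2)))  # True

# Apply the CPT map V(rho) = sum_l K_l rho K_l^dagger to a test state.
rho = np.array([[0.2, 0.1j], [-0.1j, 0.8]], dtype=complex)
rho_out = sum(K @ rho @ K.conj().T for K in kraus)

# The trace is preserved and the output remains positive.
print(abs(np.trace(rho_out).real - 1) < 1e-12)            # True
print(np.all(np.linalg.eigvalsh(rho_out) >= -1e-12))      # True
```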
Operators V_l of a map fulfilling condition (71) are called Kraus operators. Because of that, CPT-maps are sometimes also called Kraus maps, especially when they are presented as a collection of Kraus operators. Both concepts are ubiquitous in quantum information science. Kraus operators can also be time-dependent, as long as they fulfil relation (71) for all times. At this point, we already know the form of CPT-maps, but we do not yet have a master equation, that is, a continuous set of differential equations. This means that we know how to perform an arbitrary operation on a system, but we do not have an equation describing its time evolution. To obtain one, we need to find a time-independent generator L such that our CPT-map can be expressed as

V(t) = e^{Lt}, (76)

meaning that dρ/dt = Lρ. The following calculation finds the explicit expression of L. We start by choosing an orthonormal basis {F_i}_{i=1}^{d²} of the bounded space of operators, B(H). To be orthonormal, it should satisfy the condition

⟨F_i|F_j⟩ ≡ Tr [F_i† F_j] = δ_{ij}. (77)

Without any loss of generality, we select one of the elements of the basis to be proportional to the identity, F_{d²} = (1/√d) 1_H. It is trivial to prove that the norm of this element is one, and it is easy to see from Eq. (77) that all the other elements of the basis should have trace zero.
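For d = 2 such a basis can be realised with the normalised Pauli matrices. The following sketch (our own illustration, not from the text) checks the orthonormality condition (77), the unit norm of the identity element, and the tracelessness of the remaining elements:

```python
import numpy as np

# Normalised Pauli basis of B(H) for d = 2: F_i = sigma_i / sqrt(d),
# with F_4 proportional to the identity, as chosen in the text.
d = 2
paulis = [np.array(m, dtype=complex) for m in
          ([[0, 1], [1, 0]],        # sigma_x
           [[0, -1j], [1j, 0]],     # sigma_y
           [[1, 0], [0, -1]],       # sigma_z
           [[1, 0], [0, 1]])]       # identity
F = [p / np.sqrt(d) for p in paulis]

# Orthonormality: Tr[F_i^dagger F_j] = delta_ij.
gram = np.array([[np.trace(Fi.conj().T @ Fj) for Fj in F] for Fi in F])
print(np.allclose(gram, np.eye(4)))                           # True

# All basis elements except F_4 are traceless.
print(all(abs(np.trace(Fi)) < 1e-12 for Fi in F[:3]))         # True
```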
The closure relation of this basis is 1_{B(H)} = Σ_i |F_i⟩⟨F_i|. Therefore, the Kraus operators can be expanded in this basis by using the Fock-Liouville notation,

V_l = Σ_i F_i ⟨F_i|V_l⟩. (79)

As the map V(t) is of the form (59), we can apply (79) to obtain 4

V(t) ρ = Σ_{i,j} c_{i,j}(t) F_i ρ F_j†,
where we have absorbed the summation over the Kraus operators into the terms c_{i,j}(t) = Σ_l ⟨F_i|V_l⟩⟨V_l|F_j⟩. We now go back to the original problem by applying this expansion to the time derivative of Eq. (76). Introducing these coefficients in Eq. (82), we obtain an equation with no explicit dependence on time. As we are already summing over all the Kraus operators, it is useful to define a new operator, which we then apply to Eq. (82).
At this point, we want to separate the dynamics of the density matrix into a Hermitian part (equivalent to the von Neumann equation) and an incoherent part. We split the operator F into its Hermitian and anti-Hermitian parts.
where we have used the notation H for the Hermitian part for obvious reasons. If we take this definition to Eq. (86), we obtain a new expression for the time derivative. We now define the last operator of this proof, G₂, in terms of which the time derivative can be rewritten. Until now, we have imposed the complete positivity of the map, as we have required it to be written in terms of Kraus operators, but we have not used the trace-preserving property. We impose this property now, and by using the cyclic property of the trace, we obtain a new condition that G₂ should fulfil. By applying this condition, we arrive at the Lindblad master equation

dρ/dt = −i [H, ρ] + Σ_{i,j} g_{i,j} ( F_i ρ F_j† − ½ {F_j† F_i, ρ} ).

Finally, by definition the coefficients g_{i,j} can be arranged to form a Hermitian, and therefore diagonalisable, matrix. By diagonalising it, we obtain the diagonal form of the Lindblad master equation,

dρ/dt = −i [H, ρ] + Σ_k γ_k ( L_k ρ L_k† − ½ {L_k† L_k, ρ} ).
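As an illustration of the diagonal form, the following sketch implements the right-hand side of the equation for a two-level system; the Hamiltonian, jump operator, and rate are our own illustrative choices, not taken from the text:

```python
import numpy as np

def lindblad_rhs(rho, H, jumps, rates):
    """Diagonal-form Lindblad generator:
    drho/dt = -i[H, rho] + sum_k g_k (L_k rho L_k^dag - 1/2 {L_k^dag L_k, rho})."""
    out = -1j * (H @ rho - rho @ H)              # coherent (von Neumann) part
    for g, L in zip(rates, jumps):
        LdL = L.conj().T @ L
        out += g * (L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL))
    return out

# Illustrative two-level example: driving sigma_x, decay via sigma_-.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sm = np.array([[0, 1], [0, 0]], dtype=complex)   # lowering operator |0><1|

H = 1.0 * sx
rho = np.array([[0, 0], [0, 1]], dtype=complex)  # excited state
drho = lindblad_rhs(rho, H, [sm], [0.5])

# The generator is trace-preserving (Tr drho/dt = 0) and Hermiticity-preserving.
print(abs(np.trace(drho)) < 1e-12)               # True
print(np.allclose(drho, drho.conj().T))          # True
```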

D. Properties of the Lindblad Master Equation
Some interesting properties of the Lindblad equation are: • Under Lindblad dynamics, if all the jump operators are Hermitian, the purity of the system fulfils d/dt Tr [ρ²] ≤ 0. The proof is given in Appendix A.

• The Lindblad Master Equation is invariant under unitary transformations of the jump operators,

√γ_i L_i → √γ_i' L_i' = Σ_j v_{ij} √γ_j L_j,

with v representing a unitary matrix. It is also invariant under inhomogeneous transformations of the form

L_i → L_i + a_i 1,
H → H + (1/2i) Σ_j γ_j ( a_j* L_j − a_j L_j† ) + b 1,

where a_i ∈ C and b ∈ R. The proof of this can be found in Ref. [2] (Section 3).
• Thanks to the previous properties, it is possible to choose traceless jump operators without loss of generality.
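The first two properties can be checked numerically. The sketch below (our own illustration; the σ_z dephasing operator and the random operators are arbitrary choices) verifies the purity inequality for a Hermitian jump operator and the unitary-mixing invariance of the dissipator. The Hamiltonian part is omitted from the purity check because it does not change Tr ρ²:

```python
import numpy as np

rng = np.random.default_rng(0)

def dissipator(rho, jumps):
    # D(rho) = sum_k (L_k rho L_k^dag - 1/2 {L_k^dag L_k, rho}), rates absorbed in L_k.
    out = np.zeros_like(rho)
    for L in jumps:
        LdL = L.conj().T @ L
        out += L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL)
    return out

# A random qubit density matrix (positive, unit trace).
B = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
rho = B @ B.conj().T
rho /= np.trace(rho)

# 1) Hermitian jump operator (sigma_z dephasing): purity cannot increase,
#    d/dt Tr[rho^2] = 2 Tr[rho * D(rho)] <= 0.
sz = np.diag([1.0, -1.0]).astype(complex)
dpurity = 2 * np.trace(rho @ dissipator(rho, [sz])).real
print(dpurity <= 1e-12)                                       # True

# 2) Unitary mixing of the jump operators leaves the dissipator invariant.
L1 = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
L2 = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
v, _ = np.linalg.qr(rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)))
mixed = [v[0, 0] * L1 + v[0, 1] * L2, v[1, 0] * L1 + v[1, 1] * L2]
print(np.allclose(dissipator(rho, [L1, L2]), dissipator(rho, mixed)))  # True
```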
Box 6. A master equation for a two-level system with decay.
Continuing our example of a two-level atom, we can make it more realistic by including the possibility of decay through the emission of a photon. This emission happens due to the interaction of the atom with the surrounding vacuum state a . The complete quantum system would in this case be the 'atom+vacuum' system, and its time evolution would be given by the von Neumann equation (13), where H represents the total 'atom+vacuum' Hamiltonian. This system belongs to an infinite-dimensional Hilbert space, as the radiation field has infinitely many modes. If we are interested only in the time-dependent state of the atom, we can derive a Markovian master equation for the reduced density matrix of the atom (see for instance Refs. [1,2]). The master equation we will study is

dρ/dt = −i [H, ρ] + Γ ( σ⁻ ρ σ⁺ − ½ {σ⁺σ⁻, ρ} ), (96)

where Γ is the coupling between the atom and the vacuum.
In the Fock-Liouville space (following the same ordering as in Eq. (3)), we can write the Liouvillian corresponding to evolution (96) as a matrix. Expressing explicitly the set of differential equations, the first of them reads

ρ̇₀₀ = iΩρ₀₁ − iΩρ₁₀ + Γρ₁₁.

a This is why atoms decay.
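The Liouvillian of Eq. (96) can be built explicitly as a 4 × 4 matrix. The sketch below (our own illustration) uses row-major vectorisation, vec(AρB) = (A ⊗ Bᵀ) vec(ρ), with the ordering (ρ₀₀, ρ₀₁, ρ₁₀, ρ₁₁) — which may differ from the ordering of Eq. (3) — and assumes H = Ω σ_x, consistent with the ρ̇₀₀ equation quoted above:

```python
import numpy as np

Omega, Gamma = 1.0, 0.5                            # illustrative parameters
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sm = np.array([[0, 1], [0, 0]], dtype=complex)     # sigma_- = |0><1|
sp = sm.conj().T                                   # sigma_+

H = Omega * sx
n = sp @ sm                                        # sigma_+ sigma_-

# Liouvillian in vectorised form: coherent part plus dissipator.
Lmat = (-1j * (np.kron(H, I2) - np.kron(I2, H.T))
        + Gamma * (np.kron(sm, sm.conj())
                   - 0.5 * (np.kron(n, I2) + np.kron(I2, n.T))))

# First row reproduces drho_00/dt = i*Omega*rho_01 - i*Omega*rho_10 + Gamma*rho_11.
print(np.allclose(Lmat[0], [0, 1j * Omega, -1j * Omega, Gamma]))  # True

# Trace preservation: vec(1)^T L = 0, i.e. Tr(drho/dt) = 0 for every rho.
print(np.allclose(I2.flatten() @ Lmat, 0))                         # True
```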

A. Integration
To calculate the time evolution of a system determined by a master equation of the form (96), we need to solve a set of coupled equations with as many equations as elements of the density matrix. In our example, this means solving a set of four equations, but the dimension of the problem increases exponentially with the system size. Because of this, dimension-reduction techniques are required for bigger systems.
There are several canonical algorithms for solving such systems of ordinary differential equations. This can be done analytically only for a few simple systems, by using sophisticated techniques such as damping bases [29]. In most cases, we have to rely on approximate numerical methods. One of the most popular approaches is the 4th-order Runge-Kutta algorithm (see, for instance, [30] for an explanation of the algorithm). By integrating the equations of motion, we can calculate the density matrix at any time t.
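A minimal sketch of such a 4th-order Runge-Kutta integration for the decaying two-level atom follows; the Hamiltonian H = Ω σ_x, the parameters, and the step size are our own illustrative choices:

```python
import numpy as np

Omega, Gamma, dt, steps = 1.0, 0.5, 0.001, 10000   # illustrative values, t_final = 10
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sm = np.array([[0, 1], [0, 0]], dtype=complex)      # sigma_-
H, n = Omega * sx, sm.conj().T @ sm

def rhs(rho):
    """Right-hand side of the master equation (96)."""
    return (-1j * (H @ rho - rho @ H)
            + Gamma * (sm @ rho @ sm.conj().T - 0.5 * (n @ rho + rho @ n)))

rho = np.array([[0, 0], [0, 1]], dtype=complex)     # start in the excited state
for _ in range(steps):
    # Classic 4th-order Runge-Kutta step.
    k1 = rhs(rho)
    k2 = rhs(rho + 0.5 * dt * k1)
    k3 = rhs(rho + 0.5 * dt * k2)
    k4 = rhs(rho + dt * k3)
    rho = rho + (dt / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

# For this linear, trace-free RHS the trace stays ~1 up to rounding errors.
print(abs(np.trace(rho).real - 1) < 1e-8)           # True
print(0 <= rho[1, 1].real <= 1)                     # excited population is physical
```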
The steady state of a system can be obtained by evolving it for a long time (t → ∞). Unfortunately, this method presents two difficulties. First, if the dimension of the system is big, the computing time becomes huge: for systems beyond a few qubits, it takes too long to reach the steady state. Even worse is the problem of the stability of the algorithms used to integrate the differential equations. Due to small errors in the calculation of derivatives by the use of finite differences, the trace of the density matrix may not remain equal to one. This error accumulates during the propagation of the state, giving non-physical results after a finite time. One solution to this problem is the use of algorithms specifically designed to preserve the trace, such as the Crank-Nicolson algorithm [31]. The problem with this kind of algorithm is that it consumes more computational power than Runge-Kutta, and it is therefore not useful for calculating the long-time behaviour of big systems. An analysis of different methods and their advantages and disadvantages can be found in Ref. [32].
Box 7. Time dependency of the two-level system with decay.
In this box we show some results of solving Eq. (96) and calculating the density matrix as a function of time. A Mathematica notebook solving this problem can be found at [20]. To illustrate the time behaviour of this system, we calculate the evolution for different parameters. In all cases, we start with the atom in the excited state, ρ₁₁ = 1, with no coherence between the states, meaning ρ₀₁ = ρ₁₀ = 0. If the decay parameter Γ is equal to zero, the problem reduces to solving the von Neumann equation, and the result is displayed in Figure 2. The other extreme case is a system with no coherent dynamics (Ω = 0) but with decay. In this case, we observe an exponential decay of the population of the excited state. Finally, we can calculate the dynamics of a system with both coherent driving and decay. In this case, both behaviours coexist, and there are oscillations and decay. In both cases, the blue lines represent ρ₁₁ and the orange ones ρ₀₀.

B. Diagonalisation
As we have discussed before, in the Fock-Liouville space the Liouvillian corresponds to a matrix that is, in general, complex, non-Hermitian, and non-symmetric. By diagonalising it we can calculate both the time evolution and the steady state of the density matrix. For most purposes, in the short-time regime integrating the differential equations may be more efficient than diagonalising, because the high dimensionality of the Liouvillian makes the diagonalisation process very costly in computing power. On the other hand, to calculate the steady state, diagonalisation is the most used method, due to the problems of integrating the equations of motion discussed in the previous section.
Let us first see how to use diagonalisation to calculate the time evolution of a system. As the Liouvillian matrix is non-Hermitian, we cannot apply the spectral theorem to it, and it may have different left and right eigenvectors. For a specific eigenvalue Λ_i, we can obtain the eigenvectors |Λ_i^R⟩ and ⟨Λ_i^L| s.t.

L |Λ_i^R⟩ = Λ_i |Λ_i^R⟩,
⟨Λ_i^L| L = Λ_i ⟨Λ_i^L|.
An arbitrary state can be expanded in the eigenbasis of L as [1,33]

|ρ(0)⟩ = Σ_i c_i |Λ_i^R⟩, with c_i = ⟨Λ_i^L|ρ(0)⟩.

Therefore, the state of the system at a time t can be calculated in the form

|ρ(t)⟩ = Σ_i c_i e^{Λ_i t} |Λ_i^R⟩.

Note that in this case, to calculate the state at a time t, we do not need to integrate over the interval [0, t], as we have to do if we use a numerical solution of the differential set of equations. This is an advantage when we want to calculate the long-time behaviour. Furthermore, to calculate the steady state of a system, we can look at the eigenvector that has zero eigenvalue, as this is the only one that survives when t → ∞. For any finite system, Evans' Theorem ensures the existence of at least one zero eigenvalue of the Liouvillian matrix [34,35]. The eigenvector corresponding to this zero eigenvalue is the steady state of the system. In exceptional cases, a Liouvillian can present more than one zero eigenvalue due to the presence of symmetries in the system [26,27,36]. This is a non-generic case, and for most purposes we can assume the existence of a unique fixed point in the dynamics of the system. Therefore, diagonalisation can be used to calculate the steady state without calculating the full evolution of the system. This can be done even analytically for small systems, and when numerical approaches are required, this technique gives better precision than integrating the equations of motion. The spectrum of Liouvillian superoperators has been analysed in several recent papers [33,37].

Box 8. Spectrum analysis of the Liouvillian for the two-level system with decay.
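A numerical counterpart of this spectral analysis (our own sketch, not the notebook of Ref. [20]) diagonalises the Liouvillian of the decaying two-level atom — assuming H = Ω σ_x and row-major vectorisation, as in the earlier sketches — and extracts the steady state as the eigenvector of eigenvalue zero:

```python
import numpy as np

Omega, Gamma = 1.0, 0.5                            # illustrative parameters
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sm = np.array([[0, 1], [0, 0]], dtype=complex)     # sigma_-
H, n = Omega * sx, sm.conj().T @ sm

# Liouvillian in vectorised (Fock-Liouville) form.
Lmat = (-1j * (np.kron(H, I2) - np.kron(I2, H.T))
        + Gamma * (np.kron(sm, sm.conj())
                   - 0.5 * (np.kron(n, I2) + np.kron(I2, n.T))))

# The steady state is the right eigenvector with eigenvalue (closest to) zero.
vals, vecs = np.linalg.eig(Lmat)
k = np.argmin(np.abs(vals))
rho_ss = vecs[:, k].reshape(2, 2)
rho_ss /= np.trace(rho_ss)                          # normalise to unit trace

print(np.allclose(Lmat @ rho_ss.flatten(), 0))      # it is a fixed point: True
print(np.allclose(rho_ss, rho_ss.conj().T))         # Hermitian: True
print(np.all(np.linalg.eigvalsh(rho_ss) >= -1e-12)) # positive: True
```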

VII. ACKNOWLEDGEMENTS
The author wants to acknowledge the Spanish Ministry and the Agencia Española de Investigación (AEI) for financial support under grant FIS2017-84256-P (FEDER funds).