Thursday 11 January 2018

On single-valued solutions of differential equations

This post is about the issue of solving a nonlinear matrix equation that I raised on MathOverflow. This matrix equation determines the existence of single-valued solutions of certain meromorphic differential equations. The motivating examples are the BPZ differential equations that appear in two-dimensional CFT. For more details on these examples, see my recent article with Santiago Migliaccio on the analytic bootstrap equations of non-diagonal two-dimensional CFT.

 

Relation between diagonal and non-diagonal solutions

(This paragraph is adapted from Appendix A of the cited article.)
Let \(D^+\) and \(D^-\) be two meromorphic differential operators of order \(n\) on the Riemann sphere. Let a non-diagonal solution of \((D^+,D^-)\) be a single-valued function \(f\) such that \(D^+f = \bar{D}^- f=0\), where \(\bar D^-\) is obtained from \(D^-\) by \(z\to\bar z,\partial_z \to \partial_{\bar z}\). Let a diagonal solution of \(D^+\) be a single-valued function \(f\) such that \(D^+f =\bar D^+f=0\).
We assume that \(D^+\) and \(D^-\) have singularities at two points \(0\) and \(1\). Let \((\mathcal F^\epsilon_i)\) and \((\mathcal{G}^\epsilon_i)\) be bases of solutions of \(D^\epsilon f=0\) that diagonalize the monodromies around \(0\) and \(1\) respectively. In the case of \((\mathcal F^+_i)\) this means \[\begin{aligned} D^+ \mathcal{F}^+_i = 0 \quad , \quad \mathcal{F}^+_i \left(e^{2\pi i}z\right) = \lambda_i \mathcal{F}^+_i(z)\ .\end{aligned}\] We further assume that our bases are such that \[\begin{aligned} \forall \epsilon,\bar{\epsilon}\in\{+,-\}\, , \quad \left\{ \begin{array}{l} \mathcal{F}^\epsilon_i(z) \mathcal{F}^{\bar\epsilon}_j(\bar z) \ \text{has trivial monodromy around } z=0 \ \ \iff \ \ i=j\ , \\ \mathcal{G}^\epsilon_i(z) \mathcal{G}^{\bar\epsilon}_j(\bar z) \ \text{has trivial monodromy around } z=1 \ \ \iff \ \ i=j\ . \end{array}\right. \label{tmo}\end{aligned}\] For \(\epsilon \neq \bar{\epsilon}\) this is a rather strong assumption, which implies that the operators \(D^+\) and \(D^-\) are closely related to one another. This assumption implies that a non-diagonal solution \(f^0\) has expressions of the form \[\begin{aligned} f^0(z,\bar z) = \sum_{i=1}^n c^0_i \mathcal{F}_i^+(z) \mathcal{F}_i^-(\bar z) = \sum_{i=1}^n d^0_i \mathcal{G}^+_i(z) \mathcal{G}_i^-(\bar z)\ , \label{fz}\end{aligned}\] for some structure constants \((c^0_i)\) and \((d^0_i)\). Similarly, a diagonal solution \(f^\epsilon\) of \(D^\epsilon\) has expressions of the form \[\begin{aligned} f^\epsilon(z,\bar z) = \sum_{i=1}^n c^\epsilon_i \mathcal{F}_i^\epsilon(z) \mathcal{F}_i^\epsilon(\bar z) = \sum_{i=1}^n d^\epsilon_i \mathcal{G}^\epsilon_i(z) \mathcal{G}_i^\epsilon(\bar z)\ . \label{fe}\end{aligned}\] We now claim that
if \(D^+\) and \(D^-\) have diagonal solutions, and if moreover \((D^+,D^-)\) has a non-diagonal solution, then the non-diagonal structure constants are geometric means of the diagonal structure constants, \[\begin{aligned} (c^0_i)^2 \propto c^+_ic^-_i\ , \label{ccc} \end{aligned}\] where \(\propto\) means equality up to an \(i\)-independent prefactor.
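Before proving this claim, let us illustrate the trivial monodromy assumption. Suppose for instance that the bases behave near \(z=0\) as \(\mathcal{F}^\epsilon_i(z) \underset{z\to 0}{\sim} z^{a^\epsilon_i}\left(1+O(z)\right)\), so that the monodromy around \(0\) acts on \(\mathcal{F}^\epsilon_i\) as multiplication by \(e^{2\pi i a^\epsilon_i}\). Under \(z\to e^{2\pi i}z\) we have \(\bar z\to e^{-2\pi i}\bar z\), so the product \(\mathcal{F}^\epsilon_i(z)\mathcal{F}^{\bar\epsilon}_j(\bar z)\) picks up the factor \(e^{2\pi i\left(a^\epsilon_i-a^{\bar\epsilon}_j\right)}\), and the assumption around \(z=0\) amounts to \(a^\epsilon_i - a^{\bar\epsilon}_j\in\mathbb{Z} \iff i=j\).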
The proof of this statement is simple, bordering on the trivial. We introduce the size-\(n\) matrices \(M^\epsilon\) such that \[\begin{aligned} \mathcal{F}^\epsilon_i = \sum_{j=1}^n M^\epsilon_{i,j} \mathcal{G}^\epsilon_j \ .\end{aligned}\] Inserting this change of bases in our expression for a diagonal solution, we must have \[\begin{aligned} j\neq k \implies \sum_{i=1}^n c^\epsilon_i M_{i,j}^\epsilon M_{i,k}^\epsilon = 0\ .\end{aligned}\] For a given \(\epsilon\), this is a system of \(\frac{n(n-1)}{2}\) linear equations for \(n\) unknowns \(c^\epsilon_i\). One way to write the solution is \[\begin{aligned} c^\epsilon_i \propto (-1)^i\det_{\substack{ i'\neq i \\ j \neq 1}} \left( M^\epsilon_{i',1}M^\epsilon_{i',j} \right) = (-1)^i \left(\prod_{i'\neq i} M^\epsilon_{i',1}\right) \det_{\substack{ i'\neq i \\ j \neq 1}} \left( M^\epsilon_{i',j} \right)\ .\end{aligned}\] Similarly, inserting the change of bases in the expression of a non-diagonal solution, we find \[\begin{aligned} j\neq k \implies \sum_{i=1}^n c^0_i M_{i,j}^+ M_{i,k}^- = 0\ .\end{aligned}\] We will write two expressions for the solution of these linear equations, \[\begin{aligned} c^0_i &\propto (-1)^i \det_{\substack{ i'\neq i \\ j \neq 1}} \left( M^-_{i',1}M^+_{i',j}\right) = (-1)^i \left(\prod_{i'\neq i} M^-_{i',1}\right) \det_{\substack{ i'\neq i \\ j \neq 1}} \left( M^+_{i',j} \right)\ , \\ &\propto (-1)^i \det_{\substack{ i'\neq i \\ j \neq 1}} \left( M^+_{i',1}M^-_{i',j}\right) = (-1)^i \left(\prod_{i'\neq i} M^+_{i',1}\right) \det_{\substack{ i'\neq i \\ j \neq 1}} \left( M^-_{i',j} \right)\ .\end{aligned}\] Writing \((c^0_i)^2\) as the product of the above two expressions, we obtain the announced relation.
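As a sanity check, here is a minimal numerical sketch (in Python with numpy, using random illustrative matrices rather than actual monodromy data) in the simplest case \(n=2\). We take \(M^+\) and \(M^-\) to differ by row and column rescalings, \(M^+_{i,j} = \rho_i M^-_{i,j}\sigma_j\), so that, as the checks below confirm, all the solutions we need do exist. The sketch verifies that the determinant formulas solve the relevant linear systems, and that \((c^0_i)^2 \propto c^+_i c^-_i\).

# Minimal numerical sketch of the geometric-mean relation for n = 2.
# All matrices are random and purely illustrative.
import numpy as np

rng = np.random.default_rng(seed=1)
Mm = rng.uniform(1.0, 2.0, (2, 2))                      # M^-
rho, sig = rng.uniform(1.0, 2.0, 2), rng.uniform(1.0, 2.0, 2)
Mp = rho[:, None] * Mm * sig[None, :]                   # M^+, related to M^- by rescalings

# For n = 2 the determinant formulas above reduce to:
cp = np.array([-Mp[1, 0] * Mp[1, 1],  Mp[0, 0] * Mp[0, 1]])   # diagonal constants of D^+
cm = np.array([-Mm[1, 0] * Mm[1, 1],  Mm[0, 0] * Mm[0, 1]])   # diagonal constants of D^-
c0 = np.array([-Mm[1, 0] * Mp[1, 1],  Mm[0, 0] * Mp[0, 1]])   # non-diagonal constants

# They solve the corresponding linear systems (j != k):
print(np.isclose(cp @ (Mp[:, 0] * Mp[:, 1]), 0.0))            # True
print(np.isclose(cm @ (Mm[:, 0] * Mm[:, 1]), 0.0))            # True
print(np.isclose(c0 @ (Mp[:, 0] * Mm[:, 1]), 0.0))            # True
print(np.isclose(c0 @ (Mp[:, 1] * Mm[:, 0]), 0.0))            # True

# Geometric-mean relation: (c^0_i)^2 / (c^+_i c^-_i) does not depend on i.
print((c0**2 / (cp * cm)) / (c0[0]**2 / (cp[0] * cm[0])))     # [1. 1.]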

Existence of solutions

We assume that a solution is single-valued if and only if it has trivial monodromies around \(0\) and \(1\). Then the existence of diagonal and non-diagonal solutions depends on our matrices \(M^\epsilon\). Let us rewrite our solutions in terms of these matrices and their inverses: \[\begin{aligned} c^\epsilon_i \propto \frac{ N^\epsilon_{i,1} }{ M^\epsilon_{i,1} }\quad , \quad c^0_i \propto \frac{ N^+_{i,1} }{ M^-_{i,1} } \propto \frac{N^-_{i,1}}{M^+_{i,1}}\ ,\end{aligned}\] where we define \(N^\epsilon\) as the transpose of the inverse of \(M^\epsilon\). This rewriting assumes that the matrix elements of \(M^\epsilon\) do not vanish: otherwise, we can have special solutions, which we will ignore.
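Here is a quick symbolic check (Python with sympy, for a generic matrix of size three, purely illustrative) that the ratio \(N^\epsilon_{i,1}/M^\epsilon_{i,1}\) reproduces the determinant formula of the previous section up to an \(i\)-independent prefactor.

import sympy as sp

M = sp.Matrix(3, 3, lambda i, j: sp.Symbol(f"m{i}{j}"))
N = M.inv().T                                     # N = transpose of the inverse

# Determinant formula of the previous section (second index fixed to 1, i.e. python index 0)
c_det = [(-1)**i
         * sp.Mul(*[M[k, 0] for k in range(3) if k != i])
         * M.minor_submatrix(i, 0).det()
         for i in range(3)]
c_ratio = [N[i, 0] / M[i, 0] for i in range(3)]

# The two expressions agree up to an i-independent prefactor:
prefactors = [sp.cancel(c_det[i] / c_ratio[i]) for i in range(3)]
print(sp.simplify(prefactors[0] - prefactors[1]),
      sp.simplify(prefactors[1] - prefactors[2]))   # 0 0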
Our expressions for the solutions depend on the choice of a particular second index, which we took to be \(1\). The condition for solutions to actually exist is that they do not depend on this choice. In the case of diagonal solutions, the condition is \[\begin{aligned} \frac{N^\epsilon_{i_1,j_1}N^\epsilon_{i_2,j_2}}{N^\epsilon_{i_2,j_1}N^\epsilon_{i_1,j_2}} = \frac{M^\epsilon_{i_1,j_1}M^\epsilon_{i_2,j_2}}{M^\epsilon_{i_2,j_1}M^\epsilon_{i_1,j_2}}\ .\end{aligned}\] In the case of non-diagonal solutions, the condition is \[\begin{aligned} \frac{N_{i_1,j_1}^+}{N_{i_2,j_1}^+} \frac{M_{i_1,j_2}^+}{M_{i_2,j_2}^+} = \frac{N_{i_1,j_2}^-}{N_{i_2,j_2}^-} \frac{M_{i_1,j_1}^-}{M_{i_2,j_1}^-}\ .\end{aligned}\] Summing over \(i_1\) in this equation, and using \(\sum_{i} N^\epsilon_{i,j} M^\epsilon_{i,j'} = \delta_{j,j'}\), leads to \(\frac{\delta_{j_1,j_2}}{N^+_{i_2,j_1}M^+_{i_2,j_2}} = \frac{\delta_{j_1,j_2}}{N^-_{i_2,j_2}M^-_{i_2,j_1}}\). We call this the resummed condition, and write it as \[\begin{aligned} \forall i,j, \ \ M^+_{i,j} \left((M^+)^{-1}\right)_{j,i} = M^-_{i,j} \left((M^-)^{-1}\right)_{j,i}\ .\end{aligned}\] If we sum over \(i\) or \(j\), we obtain identities that automatically hold. So we have \((n-1)^2\) independent equations. This matches the number of compatibility conditions in our original system of \(n(n-1)\) linear equations for the \(n\) coefficients \((c^0_i)\).
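As a small illustration of the last point (Python with numpy, one random matrix of size four, illustrative values only): the row and column sums of the matrix \(K_{i,j} = M_{i,j}\left(M^{-1}\right)_{j,i}\) are automatically equal to \(1\), so only \((n-1)^2\) of its entries are independent.

import numpy as np

rng = np.random.default_rng(seed=2)
n = 4
M = rng.uniform(1.0, 2.0, (n, n))
K = M * np.linalg.inv(M).T                  # K_{ij} = M_{ij} (M^{-1})_{ji}

print(np.allclose(K.sum(axis=0), 1.0))      # True: summing over i gives (M^{-1} M)_{jj} = 1
print(np.allclose(K.sum(axis=1), 1.0))      # True: summing over j gives (M M^{-1})_{ii} = 1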
Let us call two matrices \(M^1,M^2\) equivalent if there are vectors \((\rho_i),(\sigma_j)\) such that \(M^1_{i,j}=\rho_i M^2_{i,j}\sigma_j\). Then \(M_{i,j}(M^{-1})_{j,i}\) is invariant under this equivalence. Modulo equivalence, the resummed condition has two simple universal solutions:
  • \(M^+ = M^-\): then non-diagonal solutions exist if and only if diagonal solutions exist.
  • \(M^+\) is the transpose of the inverse of \(M^-\): then \(c_i^0\) is an \(i\)-independent constant and non-diagonal solutions always exist, but we do not know whether diagonal solutions exist.
Actually, for matrices of size two, these solutions are equivalent to each other, and give the general solution of our conditions. And diagonal solutions always exist.
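The following numerical check (Python with numpy, a random \(3\times 3\) matrix with illustrative values) confirms that \(M_{i,j}\left(M^{-1}\right)_{j,i}\) is unchanged when \(M\) is replaced by an equivalent matrix or by the transpose of its inverse, so that both universal solutions indeed satisfy the resummed condition.

import numpy as np

rng = np.random.default_rng(seed=3)
n = 3
M = rng.uniform(1.0, 2.0, (n, n))
rho, sig = rng.uniform(1.0, 2.0, n), rng.uniform(1.0, 2.0, n)

def K(M):
    return M * np.linalg.inv(M).T             # K_{ij} = M_{ij} (M^{-1})_{ji}

M_equivalent = rho[:, None] * M * sig[None, :]    # rho_i M_{ij} sigma_j
M_inverse_transpose = np.linalg.inv(M).T

print(np.allclose(K(M), K(M_equivalent)))         # True
print(np.allclose(K(M), K(M_inverse_transpose)))  # True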

Matrices of size three

By a direct calculation, the condition for a diagonal solution to exist is \[\begin{aligned} \det \frac{1}{M} = 0 \ , \end{aligned}\] where \(\frac{1}{M}\) is the matrix whose coefficients are the inverses of those of \(M\).
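A numerical illustration of this condition (Python with numpy, size three, made-up values): if we build \(M\) so that \(\frac{1}{M}\) has rank two, the homogeneous system for the \(c_i\) acquires a nontrivial solution (vanishing smallest singular value), while a generic \(M\) has none.

import numpy as np

rng = np.random.default_rng(seed=4)

def diagonal_system(M):
    # coefficient matrix of the equations sum_i c_i M_{i,j} M_{i,k} = 0 for j < k
    n = M.shape[0]
    return np.array([M[:, j] * M[:, k] for j in range(n) for k in range(j + 1, n)])

# M whose elementwise inverse 1/M has rank two (and nonzero entries)
R = (np.outer(rng.uniform(1, 2, 3), rng.uniform(1, 2, 3))
     + np.outer(rng.uniform(1, 2, 3), rng.uniform(1, 2, 3)))
M_special = 1.0 / R
M_generic = rng.uniform(1.0, 2.0, (3, 3))

for M in (M_special, M_generic):
    A = diagonal_system(M)
    smallest_singular_value = np.linalg.svd(A, compute_uv=False)[-1]
    print(np.isclose(np.linalg.det(1.0 / M), 0.0),
          np.isclose(smallest_singular_value, 0.0))
# prints "True True" for M_special, then "False False" for M_generic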
We compute \[\begin{aligned} M = \begin{pmatrix} a & b & c \\ d & e & f \\ g & h & i \end{pmatrix} \implies M_{ij}M^{-1}_{ji} = \begin{pmatrix} t_1-t_2 & t_3-t_4 & t_5 - t_6 \\ t_5-t_4 & t_1-t_6 & t_3-t_2 \\ t_3-t_6 & t_5-t_2 & t_1-t_4 \end{pmatrix}\ ,\end{aligned}\] where we introduced the combinations \[\begin{aligned} t_1 = \frac{aei}{\det M} \ , \ t_2 = \frac{afh}{\det M} \ , \ t_3 = \frac{bfg}{\det M} \ , \ t_4 = \frac{bdi}{\det M} \ , \ t_5 = \frac{cdh}{\det M}\ , \ t_6 = \frac{ceg}{\det M}\ ,\end{aligned}\] which obey the relations \[\begin{aligned} t_1 +t_3 +t_5 - (t_2+t_4+t_6) = 1\quad , \quad t_1t_3t_5 = t_2t_4t_6\ .\end{aligned}\] We also introduce the quadratic combination \[\begin{aligned} \kappa = t_1t_3 +t_3t_5 + t_5t_1 - t_2t_4-t_4t_6 - t_2t_6 = t_1t_3t_5 \det \frac{1}{M}\det M\ , \end{aligned}\] where \[\begin{aligned} \det \frac{1}{M} \det M = \frac{1}{t_1}+\frac{1}{t_3}+\frac{1}{t_5} -\left( \frac{1}{t_2}+\frac{1}{t_4}+\frac{1}{t_6}\right) \ .\end{aligned}\] If two matrices \(M^+\) and \(M^-\) have the same \(M_{ij}M^{-1}_{ji}\), then \(t_i^+ = t_i^- + c\) for some \(c\), which implies in particular, using the relation \(t_1t_3t_5 = t_2t_4t_6\) for both sets of \(t_i\), \[\begin{aligned} c(\kappa^+ + \kappa^-) = 0\ .\end{aligned}\] We assume that all the coefficients of \(M^+\) and \(M^-\) are nonzero, and distinguish two cases:
  • If \(c=0\), then \(t^+_i=t^-_i\), which implies that \(M^+\) and \(M^-\) are equivalent.
  • If \(c\neq 0\), then \(\det\frac{1}{M^+}=0\iff \det\frac{1}{M^-}=0\). In the special case where \(M^+\) is equivalent to the inverse transpose of \(M^-\), we have \(c=\kappa^+=-\kappa^-\).
In both cases, assuming that non-diagonal solutions exist, one of the two equations has diagonal solutions if and only if the other one does.
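These formulas are easy to check numerically. The following sketch (Python with numpy, one random \(3\times 3\) matrix with illustrative entries) verifies the expression of \(M_{ij}M^{-1}_{ji}\) in terms of the \(t_i\), the two relations they obey, the formula for \(\kappa\), and, in the special case where \(M^+\) is the transpose of the inverse of \(M^-\), the relations \(t^+_i = t^-_i + c\) with \(c = \kappa^+ = -\kappa^-\).

import numpy as np

rng = np.random.default_rng(seed=5)
M = rng.uniform(1.0, 2.0, (3, 3))

def ts(M):
    # t_1, ..., t_6 as defined above
    (a, b, c), (d, e, f), (g, h, i) = M
    return np.array([a*e*i, a*f*h, b*f*g, b*d*i, c*d*h, c*e*g]) / np.linalg.det(M)

def kappa(t):
    t1, t2, t3, t4, t5, t6 = t
    return t1*t3 + t3*t5 + t5*t1 - t2*t4 - t4*t6 - t2*t6

t1, t2, t3, t4, t5, t6 = t = ts(M)
K = M * np.linalg.inv(M).T
K_from_t = np.array([[t1 - t2, t3 - t4, t5 - t6],
                     [t5 - t4, t1 - t6, t3 - t2],
                     [t3 - t6, t5 - t2, t1 - t4]])
print(np.allclose(K, K_from_t))                                         # True
print(np.isclose(t1 + t3 + t5 - (t2 + t4 + t6), 1.0))                   # True
print(np.isclose(t1 * t3 * t5, t2 * t4 * t6))                           # True
print(np.isclose(kappa(t),
                 t1 * t3 * t5 * np.linalg.det(1.0 / M) * np.linalg.det(M)))  # True

# Special case: M^+ is the transpose of the inverse of M^-.
t_plus, t_minus = ts(np.linalg.inv(M).T), t
c = t_plus - t_minus
print(np.allclose(c, c[0]))                                             # True: constant shift
print(np.isclose(c[0] * (kappa(t_plus) + kappa(t_minus)), 0.0))         # True
print(np.isclose(c[0], kappa(t_plus)), np.isclose(c[0], -kappa(t_minus)))   # True True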

Conclusion

There is an intriguing piece of mathematics to be explored here, with a wealth of special cases where some coefficients of our matrices vanish. The relevance to two-dimensional CFT is however apparently not that great, because in order to solve CFTs with Virasoro symmetry, second-order BPZ equations are typically enough.
