#### Degenerate fields in \(d=2\) CFT

What makes \(d=2\) CFT solvable in many cases is the existence of degenerate primary fields.

When working on conformal field theory, your life is very different depending on whether the dimension is two or not. In \(d=2\) you have an infinite-dimensional symmetry algebra, the Virasoro algebra, and in some important cases such as minimal models you can classify your CFTs and solve them analytically. In \(d\neq 2\), your symmetry algebra is finite-dimensional, and you mostly have to make do with numerical results. This not only makes you code a lot, but also incites you to make technical assumptions that are physically restrictive, such as unitarity.

Three years and three major revisions after it first appeared on Arxiv and GitHub (why GitHub? see this blog post), my review article on two-dimensional conformal field theory may be mature enough to appear in book form. But with which publisher?

To answer this question, I should first say why I would want to have a book in the first place, since the text is already on Arxiv.

In two-dimensional conformal field theory, correlation functions are partly (and sometimes completely) determined by the properties of the fields under symmetry transformations. In particular, correlation functions of primary fields are relatively simple, because by definition primary fields are killed by the annihilation modes of the symmetry algebra. On top of that, there exist degenerate primary fields that are killed not only by the annihilation modes, but also by some combinations of creation modes. As a result, correlation functions that involve degenerate primary fields sometimes obey nontrivial differential equations, for example BPZ equations. Usually, these equations are deduced from the relevant combinations of creation modes, called null vectors.
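As a concrete illustration (not taken verbatim from the review, and with \(\Delta\) denoting the conformal dimension of the degenerate field \(V_{(2,1)}\)), the simplest nontrivial example is the level-two null vector and the second-order BPZ equation that follows from it:

```latex
% Level-2 null vector of the degenerate representation R_{(2,1)},
% built on a primary state |\Delta> with L_{n>0}|\Delta> = 0:
\left( L_{-2} - \frac{3}{2(2\Delta + 1)}\, L_{-1}^2 \right) |\Delta\rangle = 0\ .
% Inserting V_{(2,1)}(z) in a correlation function with primary fields
% V_{\Delta_i}(z_i), and using local Ward identities to rewrite the
% action of L_{-1} and L_{-2}, yields the BPZ equation:
\left( \frac{3}{2(2\Delta + 1)}\, \partial_z^2
 - \sum_i \left( \frac{1}{z - z_i}\, \partial_{z_i}
 + \frac{\Delta_i}{(z - z_i)^2} \right) \right)
 \left\langle V_{(2,1)}(z) \prod_i V_{\Delta_i}(z_i) \right\rangle = 0\ .
```

The differential operator is obtained from the null vector by the substitutions \(L_{-1}\to\partial_z\) and \(L_{-2}\to\sum_i\big(\frac{1}{z-z_i}\partial_{z_i}+\frac{\Delta_i}{(z-z_i)^2}\big)\).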

Determining null vectors in representations of a symmetry algebra is often complicated, as the algebraic structures of the relevant algebras and of their representations can themselves be complicated. Even in the case of the Virasoro algebra, it is not easy to explicitly determine null vectors. It is however much easier to determine which representations do have null vectors, using the fusion product. For example, if we know degenerate representations \(R_{(1,1)}\) and \(R_{(2,1)}\) with null vectors at levels \(1\) and \(2\) respectively, we can deduce that the fusion product \(R_{(2,1)}\times R_{(2,1)}\) is degenerate and contains \(R_{(1,1)}\). The remainder of \(R_{(2,1)}\times R_{(2,1)}\) must therefore be a degenerate representation, which can be identified as \(R_{(3,1)}\), and has a null vector at level \(3\). (See Section 2.3.1 of my review article for more details.)
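These fusion rules are simple enough to compute mechanically. Here is a minimal Python sketch of the generic fusion rules for Virasoro degenerate representations; the function name is mine, and I assume a generic central charge (at special values of \(c\), the product may truncate further):

```python
def degenerate_fusion(rs1, rs2):
    """Labels (r, s) of the degenerate representations appearing in the
    fusion product R_{(r1,s1)} x R_{(r2,s2)}, at generic central charge.
    R_{(r,s)} has its null vector at level r*s.
    Both indices run from |r1-r2|+1 to r1+r2-1 in steps of 2."""
    (r1, s1), (r2, s2) = rs1, rs2
    return {(r, s)
            for r in range(abs(r1 - r2) + 1, r1 + r2, 2)
            for s in range(abs(s1 - s2) + 1, s1 + s2, 2)}

# The example from the text: R_{(2,1)} x R_{(2,1)} = R_{(1,1)} + R_{(3,1)}
print(sorted(degenerate_fusion((2, 1), (2, 1))))  # [(1, 1), (3, 1)]
```

Iterating such identities determines the whole family \(R_{(r,s)}\) of degenerate representations, together with the levels \(rs\) of their null vectors, without computing a single null vector explicitly.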

An important idea is therefore that it is not the structures of the algebras and representations that matter, but rather the structure of the category of representations, in other words their fusion products. This idea has in particular been developed in the works of Fuchs, Runkel and Schweigert. But how does this help us compute correlation functions, and determine the differential equations that they obey? In other words, can we determine differential equations from fusion products, without computing null vectors?

In the tense negotiations between the German consortium DEAL and Elsevier, there is a new twist: on February 13th, Elsevier announced that it was restoring the access of the affected German institutions to its journals.

Elsevier’s two explanations for this maneuver fall short of being convincing. The first explanation, given to Nature, is that “it is customary [...] to retain access to content after a contracted period is concluded and as long as renewal discussions are ongoing”. Why then cut off access in January, and restore it in February?

The debate about green versus gold open access leaves aside a more fundamental difference: that between legal open access and pirate open access. This difference is essential because, as Bjorn Brembs put it,

> In terms of making the knowledge of the world available to the people who are the rightful owners, [pirate] Alexandra Elbakyan has single-handedly been more successful than all [legal] open access advocates and activists over the last 20 years combined.

With Sci-Hub, pirate open access is so successful that one might wonder whether legal open access is still needed. The obvious argument that pirate open access is parasitic and therefore unsustainable, because someone has to pay for scientific journals, is easily disposed of: with up-to-date tools, journals could cost orders of magnitude less than they currently do, and be financed by modest institutional subsidies. A better reason why pirate open access is not enough is that it is subject to technical and legal challenges. This makes it potentially precarious, and unsuited to uses such as content mining.

Wikipedia has been widely used by academics for several years now. As a theoretical physicist, I often use it as a quick reference for mathematical terminology and results. Wikipedia is useful in spite of its many gaps and flaws: there was no general article on two-dimensional conformal field theory until I started one recently, the article on minimal models is itself minimal, and googling conformal blocks sends you to a discussion on StackExchange, since there is nothing on Wikipedia.

The paradox is that many academics see these gaps and flaws in the coverage of their own favourite subjects, yet do nothing to correct them. Let me discuss three possible reasons for this passivity: fear of Wikipedia, lack of time, and laziness.

#### The jungle outside the ivory tower

Attracting and retaining academic contributors has long been recognized as a challenge by Wikipedians, to the extent that there are guidelines on how to do it.

Now that I have published my first article in SciPost, let me comment on that experience.

#### Open peer review!

The main reason I was attracted to SciPost in the first place is that it practises open peer review, which means that the referee reports are publicly viewable. (The referees can choose to remain anonymous.) If one wants to improve the communication of research results, publishing referee reports is the obvious first step, as it requires no extra work, and has potentially large benefits for the quality of the process. Actually, publishing reports on a rejected article can even save some work if the article is later submitted elsewhere. (SciPost however erases reports on rejected articles.)

Have you ever wondered why this apparently interesting new paper on arXiv was only four or five pages long? Why it had this unreadable format with two columns in fine print, with formulas that sometimes straddle both columns, and with these cramped figures? Why the technical details were relegated to appendices or future work, if not omitted altogether? And why so much of the already meager text was devoted to boastful hot air?

Most physics researchers do not wonder for long, and immediately recognize a paper that is destined to be submitted to Physical Review Letters. That journal’s format is easy to recognize, as it has barely changed in 50 years, since a time when page limits had the rationale of saving ink and paper. That rationale having now evaporated, the awful format has nevertheless survived as a signal of prestige. Because, you see, Physical Review Letters is supposed to be physics’ top journal, which means that publishing there is supposed to be good for one’s career.

While the conformal bootstrap method has recently enjoyed the wide popularity that it deserves, its applications have been mostly restricted to unitary conformal field theories. (By definition, in a unitary theory, there is a positive definite scalar product on the space of states, such that the dilatation operator is self-adjoint.) Unitarity brings the technical advantage that three-point structure constants are real, so squared structure constants are positive, leading to bounds on allowed conformal dimensions. However, dealing with non-unitary theories using similar methods is surely possible, at the expense of having the signs of squared structure constants as extra discrete variables. And unitarity is sometimes assumed even in cases where it brings no discernible technical benefit, such as in studies of torus partition functions, where multiplicities are positive integers whether the theory is unitary or not.

So it is refreshing that, in their recent article, Esterlis, Fitzpatrick and Ramirez apply the conformal bootstrap method to non-unitary theories.

On page 10 of its 2015 activity report, the CEA’s Service de Valorisation de l’Information publishes the costs of its electronic journal subscriptions for the years 2014, 2015 and 2016, including, for 2015, an estimate of the cost per downloaded article. Here I would like to circulate and comment on these figures.

ArXiv has not changed much since it started in 1991, and it is only starting to consider the obvious next steps: allowing comments on articles, followed by full-fledged open peer review. Scientists have not all been waiting idly for the sloth to make its move, and a few have tried to build systems for doing that. Here I will discuss a recent attempt, called SciPost.

#### A strong editorial college

The most distinctive feature of SciPost is its editorial college, made of well-known theoretical physicists. These people do not just lend their names to the project. Given how SciPost functions, they have a lot of work:

Two years ago, when posting a review article on Arxiv, I did the experiment of putting it in the public domain. The idea was to allow anyone to distribute and even to modify it, in the hope of increasing the circulation and usefulness of the article, as I explained in this blog post.

Putting the text in the public domain also has potential drawbacks:

- losing revenue,
- losing control.

The potential loss of control is a priori more worrisome. Could my reputation be damaged if someone did something bad with my text? In order to find out, I had to wait until people actually did something with my text.

#### Enter Amazon

My review article is now available for sale on Amazon, in the Kindle format, at the price of about $9. I had nothing to do with that edition; I guess it was done by an Amazon robot.