
- What makes a Good Theoretical Physicist?
The answer to this question, of course, depends on what one means by 'good'. I will choose a low bar for this qualification - let 'good' mean 'reasonably competent'. By reasonably competent I mean capable of obtaining tenure, funding, publications, students, etc. - or equivalent. The main virtue of this identification is that it allows me to at once claim to be a good theoretical physicist! It also makes it natural to state that one can hope to become a good theoretical physicist without being a genius. A maximalist counterexample to the approach just taken would perhaps be posed by the Landau minimum, a famously exacting set of criteria the great scientist used for selecting his students, many of whom became justly well known later. Mathematical ability, abstract reasoning skills and a deep familiarity with physics seem to be the three requirements for becoming a good theoretical physicist. (I have heard that it is also required to have a good working knowledge of the Greek and Latin alphabets - so as not to run out of indices on vectors, or more generally, tensors!*). These skills can be learned and developed during our formal education - but see what Nobel Laureate Gerard 't Hooft has to say about learning theoretical physics without going to school for it.

I think it is important to note that there are many species of theorists. They occupy various niches in the physics ecosystem. And they are all necessary for the success of the enterprise, in my opinion. There are theorists who do only theory and no experiments (surprise!). They can be found, for example, in high-energy physics, and in - most of - condensed matter physics. But there are theorists who also do experiments, in areas such as quantum optics (that is indeed one of the attractions of the field, that the same person can do both experiment and theory). 
There are theorists whose work does not have immediate experimental implications (string theorists), while others are more strongly coupled to laboratory science (high-energy phenomenologists). There are pen-and-paper theorists; then there are those who use sophisticated computational techniques. And so on. For myself and my students, I try to engineer a mix of possibilities. I think doing theory that has been verified by experiment shows that the theorist is able to relate to the real world. I personally - perhaps because I have been trained as an experimentalist - naturally interpret and suggest experiments and speak the language of experimentalists. Some of the most satisfying and impactful science I have done has emerged from such engagement with experiment. In fact, in my early days as a professor I was told - not sure how true it is - that no funding agency would take me seriously unless I collaborated with an experimentalist. For the record, I have been funded both collaboratively and individually. Having only experimental collaborations on a theoretical resume, however, can lead to the charge of being a mere 'calculator' - someone who always directly follows experiment, and lacks original and independent theoretical ideas of her/his own. Doing work independently of specific experiments shows understanding of the theoretical aspects of the relevant physics. In case the work involves proposing an experiment, it leads, rather than follows, existing experiments. Still more independent of experiment is work that introduces a new theoretical technique (e.g. density functional theory) or paradigm (e.g. Landau theory, the renormalization group). Just as a reminder, the examples in parentheses are all Nobel-prize-winning achievements, but more mundane instances could be substituted for them. 
On the other hand, having only 'pure' theory on the resume can lead to an accusation of being a fantasizer, someone who believes in equations rather than in phenomena that can be experimentally observed - though Steven Weinberg famously warned us that the problem is not that we do not take our theories seriously but that we do not take them seriously enough. In my opinion, having both accomplishments - standalone theory as well as that coupled to experiment - brings balance to a theorist's oeuvre. Likewise, a combination of analytic and numerical techniques shows versatility. Finally, some single-author papers amongst a slew of collaborative work give an impression of intellectual independence. I would perhaps add that in my experience being a good theoretical physicist requires an appropriate mixture of faith and doubt. If faith dominates, i.e. if one believes every idea that comes to mind, a lot of time will be wasted. This kind of attitude is warned against by sayings in the business such as 'most ideas in theoretical physics are wrong' or 'a good exercise for a young theoretical physicist is to give up one pet idea before breakfast every day'. On the other hand, if doubt rules everything, too much skepticism will make progress impossible. Finding the right combination, of course, is an art.

* It's a coincidence that a week after writing this I am visiting the University of Crete and living the experience in reverse: because of my physicist's exposure to the Greek alphabet, I can read all the signage in Greece without much trouble. It helps to pick out the flights, etc., though Google Translate has to help with the rest.
- Philosophy and Science: Time in Special Relativity
Whether philosophy has much to say to science is a well-debated topic. I have heard it mentioned that science is what we know and philosophy is what we don't know. That could already imply a relation, one that acts out across the frontier of knowledge. I am not a trained philosopher - not even a trained philosopher of science - so I cannot present any technical arguments here on the subject. But I would like to discuss a specific example which has always intrigued me. This has to do with the nature of time and space in special relativity. Let's just consider time. Wise people since the beginning of time (sic) have said very profound things about its nature, including Aristotle, St. Augustine, Berlioz, Bergson, John Wheeler, Stephen Hawking, and, most recently, Carlo Rovelli. But in introducing time into special relativity Einstein does not take any of this into account. He defines time in the simplest, least philosophical, most practical, way possible: time is what is measured by a clock. I think this is a remarkable definition. First of all, it is the reverse of the usual statement - a clock is what measures time. Einstein simply runs this definition backwards! I wonder if he found it convenient because so many of his heuristic arguments about time in relativity involved clocks (and those about space involved rulers). Second, Einstein's definition seems to imply that if clocks did not exist to measure it, time would not make sense. This points to the importance of measurement in special relativity, something that is not talked about - perhaps since relativity is a classical theory - as much as, for example, the role of measurement in quantum mechanics. Third, it is remarkable that special relativity does not seem to require any more deeply philosophical, or sophisticated, notion of time, than the threadbare definition used by Einstein. 
From his definition, we obtain everything the theory has to give us - time dilation, the relativity of simultaneity, the transverse Doppler effect, etc. It is in fact the existence of some of these effects that shook philosophy to its roots! Though Einstein posited the view of time taken by relativity and debated it famously with philosophers like Bergson, the fact remains that special relativity uses such plain definitions - and only simple algebra - that even high school students can (and do) follow its basics. There does not seem to be any need for any philosophy, at least in this case.
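Einstein's operational definition pays off immediately in computation. A minimal sketch (the function names and the numerical example are my own, purely illustrative) of the time dilation that follows from comparing a moving clock with a stationary one:

```python
import math

C = 299_792_458.0  # speed of light in m/s (exact SI value)

def lorentz_gamma(v):
    """Lorentz factor gamma = 1/sqrt(1 - v^2/c^2) for speed v in m/s."""
    beta = v / C
    if not 0.0 <= beta < 1.0:
        raise ValueError("speed must satisfy 0 <= v < c")
    return 1.0 / math.sqrt(1.0 - beta * beta)

def dilated_time(proper_time, v):
    """Time elapsed on a stationary clock while a clock moving at
    speed v ticks off proper_time seconds (time dilation)."""
    return proper_time * lorentz_gamma(v)
```

At v = 0.6c the factor is exactly 1.25: one second on the moving clock corresponds to 1.25 seconds on the lab clock - the high-school algebra the post refers to, and nothing more.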
- The Same Only Different: The Role and Reach of Analogies in Physics
Analogies form an important part of human thought. They seem to be an evolutionary mechanism that leverages previous experience in the understanding of newly encountered phenomena. Not surprisingly, analogical learning is a big deal in cognitive science and - wait for the buzzword - AI. In the Greek language, ana logos translates roughly to 'same logic'. In physics, analogies are plentiful. A few examples: "light is like sound", "the atom is like a planetary system", "the atomic nucleus is like a drop of liquid". There is a mechanical aspect to these analogies, and maybe even a pictorial aspect. Another type of analogy is mathematical (I mean formulaic). This kind of analogy does not have a mechanical component. Instead, it has a strong pictorial flavor, as the analogy rests on identifying similar symbols. For example, if we write down the energy of a pendulum, we find there are two contributions. One contribution, the potential energy, is proportional to the square of the position of the pendulum. The second contribution, the kinetic energy, is proportional to the square of the pendulum's momentum. Now if we also write down the energy carried by an electromagnetic wave, there are again two contributions, this time proportional to the squares of the electric and magnetic fields, respectively. The energy of the wave thus has the same mathematical form as that of the pendulum, with the electric (magnetic) field being analogous to the pendulum position (momentum). What the analogy implies: an electromagnetic wave (read light) is like a pendulum. How that turns out in practice: the electric and magnetic fields oscillate at the frequency of the wave. This is a useful insight, as a wave is somewhat more complicated - and less tangible - than a pendulum. Since the behavior of a pendulum is familiar and easier to understand, the analogy gives us insight into the behavior of electromagnetic waves. 
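Written out in standard SI notation (the braced labels are mine), the two energies described above are, term by term,

```latex
E_{\text{pendulum}} \;=\; \underbrace{\tfrac{1}{2}\,m\omega^{2}x^{2}}_{\text{potential}} \;+\; \underbrace{\frac{p^{2}}{2m}}_{\text{kinetic}},
\qquad
u_{\text{EM}} \;=\; \underbrace{\tfrac{1}{2}\,\varepsilon_{0}E^{2}}_{\text{electric}} \;+\; \underbrace{\frac{B^{2}}{2\mu_{0}}}_{\text{magnetic}} .
```

Matching term against term gives the dictionary x ↔ E, p ↔ B: both systems are harmonic oscillators, which is why the fields oscillate at the wave's frequency.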
In fact, the analogy allows us to connect mechanics to optics in far-reaching ways. A humble contribution to this area from my research group has been the realization of the mechanical analog of an optical laser. At a higher level of mathematical sophistication is another example: Bill Unruh's discovery that the equations for sound waves in a moving fluid resemble those for light waves in curved spacetime. This insight sparked the field of analog gravity, which simulates, e.g., black holes and Hawking radiation in the laboratory. These phenomena are otherwise not accessible to experimentalists. Why does Nature repeat herself, and give us the benefit of these insights into apparently unrelated systems? Why do such varied and different phenomena follow similar mathematical models? No one knows. We may ask how much novelty can be derived from an analogy. It may be suspected that the solution to any problem which can be found using analogy cannot teach us anything fundamentally new. It merely allows us to understand the new in terms of the old. But then the new simply becomes the latest manifestation of the old paradigm. It does not change our thinking fundamentally. A counterexample to this argument is Bohr's atom. Although it was solved using the analogy of the solar system, it ushered in quantum mechanics, which - though similar in some aspects - is fundamentally different from classical mechanics. This is because Bohr departed from the model of a classical planet going around the sun by quantizing the angular momentum of the electron orbiting the nucleus. This assumption went beyond the analogy. In other words, the analogy was not complete. Incomplete or - in a fruitful sense - superficial analogies are therefore fertile sources of new paradigms. They give us access to the fundamentally new by allowing a grip on the part of the problem that is similar to what we already know. 
For those who wish to take a highly detailed, entertaining and informative five hundred page dive into the topic, including delicious subjects such as 'banalogies' (page 143) and Einstein as a superb analogizer (page 452): Surfaces and Essences: Analogy as the Fuel and Fire of Thinking by Douglas Hofstadter and Emmanuel Sander. The first author is well known for his cult book Gödel, Escher, Bach.
- Quantum Physics: Impressions and Facts
Quantum physics is on a lot of people's minds these days. It is, of course, one of the two theories that have resisted integration into a single scaffolding of natural laws - the other being gravity. But even considered on its own, quantum physics seems to be a subject whose foundations are mired in controversy. Every few days there appears a paper proclaiming that there is no problem with quantum mechanics, another saying that there is a problem and it can be fixed as prescribed, and yet another stating that there is a problem and it is far from being solvable. Every few months, there is a new book on quantum physics to placate a hunger that seems as perennial as the one fed by titles on optimal management practices, financial success and weight loss. But quantum now also figures prominently in the news as an identified source of next-generation technological innovation. Governments have been pouring tens of billions of dollars (USD) into quantum research and entrepreneurship over the last few years. There is talk and quantification of 'quantum supremacy', the point at which a quantum device becomes clearly more capable than the best classical methods. So I thought it may not be a bad time to reconsider some common impressions about quantum mechanics and compare them to - sometimes subtle - rejoinders from state-of-the-art knowledge. Here's a list of ten: i) The quantum wavefunction is not a physical object, as only its absolute magnitude squared - in other words, the probability - can be measured. But see this paper about experimental measurement of the quantum wavefunction, already more than a decade old. ii) It is not possible to beat the Heisenberg Uncertainty Principle. But see this paper. iii) Quantum mechanics is non-local, therefore it can be used to transfer information faster than the speed of light. But see the no-communication theorem. iv) Quantum physics only applies to small objects. But see this and this. v) Quantum jumps are completely random. 
But see this amazing experiment performed at Yale in the group of Michel Devoret, following the theoretical proposal of Howard Carmichael. It shows that quantum jumps are unpredictable on long time scales, but are deterministically predictable on short time scales. They can even be reversed! vi) Quantum teleportation can be used to transport material objects. No, just the quantum state of the object. vii) Quantum mechanics implies that all physical variables are allowed to take on discrete values. No, depending on the context, physical variables can be continuous in quantum theory, such as position for a free - quantum - particle. viii) Quantum physics says everything is random. No, the evolution of any quantum system - before measurement - is given by Schrodinger's equation, which is deterministic. ix) The existence of the multiverse has been proven. No, but the concept has received some Oscars. x) It is necessary to fully understand quantum mechanics before using it. No, a large number of physicists use it while disagreeing about the fundamental interpretation, if any, of the theory. It is currently not clear what a full understanding of quantum physics entails.
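Point (viii) above is easy to check numerically. A minimal sketch (a toy example of my own: a single qubit with a fixed Hamiltonian, hbar set to 1) showing that Schrodinger evolution is deterministic - the same initial state always evolves to exactly the same final state, with total probability conserved:

```python
import numpy as np

OMEGA = 2.0 * np.pi                         # Rabi frequency (arbitrary units)
SIGMA_X = np.array([[0, 1], [1, 0]], dtype=complex)
H = 0.5 * OMEGA * SIGMA_X                   # H = (Omega/2) sigma_x, hbar = 1

def evolve(psi0, t):
    """Solve the Schrodinger equation exactly: psi(t) = exp(-i H t) psi0."""
    evals, evecs = np.linalg.eigh(H)        # diagonalize the Hamiltonian
    U = evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T
    return U @ psi0                         # unitary, hence norm-preserving

psi0 = np.array([1, 0], dtype=complex)      # start in state |0>
```

Calling evolve(psi0, t) twice with the same arguments gives the same state both times; randomness enters only when a measurement is made on the result.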
- Music Of The Spheres
This post is about the connection of physicists - and physics - to music. While writing, I felt it divided naturally into two sections. The Music of the Physicists It has been noticed that several well known physicists were also enthusiastic musicians. Perhaps the best known case is that of Einstein who famously played his violin whenever he was stuck on a problem and needed inspiration. In his biography of the great scientist (page 292), Walter Isaacson recounts that when Einstein landed in America for the first time (April 2, 1921), he had his pipe in one hand and his violin case in the other. Further digging readily reveals more examples: Galileo played keyboard and lute, Planck piano and organ, Heisenberg piano, Satyen Bose esraj, Feynman bongos, Bhabha violin, Fabiola Gianotti piano, Stephon Alexander saxophone. Some of these are referred to in this article. Some relevant pictures can be found here. It may be pointed out as counterexamples that Newton and Schrodinger, for example, were not interested in practical music. I think it will take further surveying to establish if physicists incline to music any more or less than people in other professions (Politics: Lincoln played the violin, Paderewski piano; Computers: Steve Jobs played guitar, Don Knuth is an organist and a composer; Sports: Shaq has released four rap albums, Oscar de la Hoya is a Grammy-nominated singer; etc...). The Physics of Music Musical thinking has guided physics' search for natural laws. Pythagoras and Ptolemy searched for harmonies in the physical universe. The scientific thought of Kepler and Newton was also influenced by this approach. The modern study of sound probably started with Galileo, followed by Mersenne and D'Alembert, whose name is associated with the wave equation. Around the same time, Euler wrote three volumes on acoustics. 
The field received a solid foundation with Helmholtz's On the sensations of tone as a physiological basis for the theory of music and Lord Rayleigh's The Theory of Sound. Wikipedia provides a readable and compact history of the subject. But it does not mention the Nobel laureate C. V. Raman, who published extensively on acoustics, including some of the earliest papers [e.g. Proc. Indian Acad. Sci. A1 179-188 (1935)] analyzing the nontrivial physics of membranophones like tabla and mridangam (e.g. the production of a definite pitch by loading the skins with resin - the black discs located at the center of the drumhead). In his book A Beautiful Question Nobel laureate Frank Wilczek points out (page 170) that the equations that govern the behavior of atoms do not look very different from the equations that describe musical instruments. This is of course because they both deal with waves - acoustic waves for the instruments and waves of probability for the atoms. Theories currently advancing the frontier of physics have retained this paradigm at their core: considering strings and membranes to be the fundamental constituents of the universe. A recent, well illustrated, and detailed book on the topic by Tsuji and Muller that I found informative: Physics and Music: Essential Connections and Illuminating Excursions. See how well you do on the quiz in Section 1.1 - you can access it in the preview material accessible online.
- How Much Should I Publish?
I'm an academic, and papers are the currency of my trade. A significant part of my professional effort goes into trying to put out publications high in value as well as in number. Metrics like the h-index grade this enterprise, attempting to include both quality and quantity in their definitions. Let's talk about quantity. Especially in a world where detail and nuance are submerged by a tsunami of information, and few people have the time to judge the essential quality of a work (which is difficult - if not impossible - to reduce to a number), quantity has acquired a quality all its own. Considerations of quantity are very important for those treading the academic path - for graduate students under pressure to land good postdoctoral positions, for postdocs trying to find faculty jobs, for junior faculty trying to obtain tenure, or associate professors aiming for promotion to full professorship. For faculty evaluation and advancement, department heads, deans and provosts are increasingly taking decisions that are 'data-driven', meaning quantity is important. But once a member of the faculty has reached full professorship, and is not under existential pressure to publish a lot, the question arises whether (s)he should take a more considered approach to publishing. Should such professors publish even more freely, not limiting themselves to topics that are fashionable, or to projects that would necessarily lead to invited talks at conferences? This kind of work could open up new areas, enable hitherto unsuspected applications, or introduce disruptive new paradigms. Or should they limit themselves to publishing only work that is obviously relevant, seriously aimed at the big impact-factor journals, and certainly not in the niche or on the fringe? This would ensure all work is well motivated, the funding has obvious justification, and investigations avoid scientific cul-de-sacs. For guidance, we can consider some well known scientists and their publication records. 
Peter Higgs, with the eponymous boson, famously admitted he would not be considered productive enough to hold a faculty position today; he published no more than ten papers after the groundbreaking work for which he was awarded the Nobel prize. Grigory Perelman, who famously proved the Poincaré conjecture, and Andrew Wiles, who famously proved Fermat's last theorem, have low h-indices, of around 7 and 17 respectively (mine is 24, just for comparison). Going back a little further, Feynman had about 70 papers in all, Onsager about the same; but many of them ended up being classics. Going back even further, Gauss was notorious for not publishing until he had polished his work to a high shine. "Few," was his motto, "but ripe." On the other hand, there have been many prolific authors such as Cauchy (800 papers and five textbooks), Faraday (about 500 papers), and Euler (more than 800 papers). An article in Nature discusses hyperprolific authors, i.e. those who publish a paper every five days. A number of factors enabling such high academic metabolic rates have been identified in the write-up, including, but not limited to: leading long-range collaborations, working in multiple research areas, and sleeping fewer hours per day. Of course, this is a personal choice; I have colleagues who have flourished using either approach. For myself, I have relaxed a little in favor of publishing more niche work. I am not intelligent or far-seeing enough to accurately decide ahead of time which of my work, if any, will be the most relevant. And I am curious to see if anyone will pick up, often many years later, a thread that I initially started. Of course, I try not to publish incrementally, or with what I have heard referred to as the MPU (Minimal Publishable Unit). But if I can complete something that I think represents an advance in knowledge worth communicating, I will definitely try to get it published. 
There have been occasions on which a paper was initially ignored by the physics community but eventually turned out to be phenomenally important: a sleeping beauty (see Table 1 for "Beauty coefficients" and "awakening years"). I will probably never write a paper like that - but hope springs eternal!
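Since the h-index does so much work in this post, here is its definition in executable form (a minimal sketch; the function name is mine): the largest h such that the author has at least h papers with at least h citations each.

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank          # this paper still clears the bar
        else:
            break             # counts are sorted, so no later paper can
    return h
```

A record of five papers cited [10, 8, 5, 4, 3] times yields h = 4, while a Perelman-style record of a few monumental papers scores low on this metric - which is exactly the point made above.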
- Toy Story
Since ancient times toys have exploited, knowingly or not, the principles of physics. They are simultaneously pure fun as well as a great segue to initiating physics discussions. It always amazes me how much the public in general, and kids in particular, are excited by physics toys and demonstrations. Once in a while I get the chance to do some outreach with physics toys, such as at the E3 fair at RIT. This year it will be held on March 31. Most of the attendees will be primary school children. I am really looking forward to it, as in the past I have received a very enthusiastic response from the audience. Here are some of my favorite physics toys and demonstrations (the sorting categories are approximate): i) Toys based on rotation: rattlebacks, tippy tops, gyroscopes, daruma tumbler, somersault doll. You might enjoy this picture of two great scientists, Niels Bohr and Wolfgang Pauli, having fun with a tippy top. ii) Yoyo tricks. iii) Toys based on levitation: the levitron and its horizontal cousin the halitron. Some cool effects based on magnetic levitation. iv) Toys based on springs: the slinky. v) Toys based on fluid dynamics: Cretan pottery. The toys and tricks start around the 6:00 mark. vi) Toys based on optics. vii) Optical illusions. viii) Liquid nitrogen ice cream. ix) A nice potpourri. A start on the literature on physics toys can be made with these sources: i) The Historian's Toybox. ii) Toys in physics lectures and demonstrations - A brief review. iii) Toys and physics. iv) Physics Toys Effectiveness of Undergraduates' Understanding Physics Principles. Before I close, I cannot resist making a remark about how important 'play' is in physics research. If we are stuck with a theoretical problem, we are told to go and 'play' with the equations. This represents a process of manipulating and rearranging the symbols representing the physical system. 
If we are stuck with an experimental problem, such as optimizing a laser for light output, we are told to go and 'play' with the laser's control knobs. Play thus seems to be integral to discovery and solution-finding in physics and probably in all of science if not in all of human endeavor. Artists and musicians certainly indulge in it.
- Pictures or Words?
What follows is probably a futile protest against pictures and how we are often reminded that they are worth a thousand words. Since my youngest days I have found algebra endlessly fascinating. There is a great quote about how algebra is amazing because it gives you back much more (knowledge of all the solutions) than what you put in (the unknowns) - it's the ultimate mathematical investment! In my own little way, formulas talk to me. When I flip through a scientific paper, I usually decide to read it based on how interesting the formulas and equations look. I once even constructed a little theory to try and justify my mathematical aesthetics: I loved language, and hence words, and hence letters, and hence formulas. Convinced? Now don't get me wrong, I was reasonably good at geometry when I was young. But I always felt I was just jumping through mathematical hoops, proving one theorem after another in the syllabus (nonetheless, perhaps a bow to Euclid would be appropriate here. I love him for his quote about how anything asserted without proof can also be refuted without proof). I never felt I saw the light until Cartesian geometry showed up in the curriculum - how amazing that now I could forget about the figures and just manipulate analytic formulas! Even though, in a pinch, I can parse fairly involved diagrams, they are usually not fun for me. I think this is mainly because my spatial orientation skills are poor. I often think I must be in a minority, as most scientists I know are very good at visualization. In fact I have several colleagues who decide to read a paper based on the diagrams or plots. I also happen to have collaborators whose papers contain especially beautiful diagrams. Of course I smile at the irony when I teach courses like freshman mechanics and find myself telling the students that "The key to solving the problem is starting with a good diagram." They probably don't know what is entertaining their professor as he says this. 
Some great geometers in physics: i) Newton (I remember struggling to understand even a few proofs from his Principia as an undergraduate; it's full of diagrams, and he draws a tangent every time he takes a derivative) ii) Einstein (changed the geometry of spacetime) iii) Gauss (one of the pioneers of differential geometry) iv) Feynman (he of the eponymous diagrams) v) Roger Penrose (check out the tiles named after him). For more, look here. Some great analysts: i) Lagrange (who famously had no diagrams in his book on celestial mechanics) ii) Julian Schwinger (whose analytical methods were a counterpoint to Feynman's diagrammatics) iii) Lars Onsager (probably, judging on the basis of his famous solution of the two-dimensional Ising model) iv) C. N. Yang (just because he has no diagrams named after him). I am curious about extending these lists, especially the second one. And also about learning whether some people are in both camps - or neither. Are there modalities of doing physics other than pictures and words?
- The Secret to Solving the Problem
There is often no recipe - ChatGPT included - for cracking open an unsolved problem other than creative thinking. Where does human creativity come from? I have never carried out a formal study of this topic. But a general curiosity about the subject made me pick up a copy of The Psychology of Invention in the Mathematical Field by Jacques Hadamard. Hadamard was a great mathematician. As a physicist I have encountered his work in the form of the Cauchy-Hadamard theorem, and the Hadamard matrices, which form the basis of the Hadamard gate in quantum computation. I have appreciated many times his famous saying about how useful complex analysis is when solving problems with real variables. The actual remark can be found in the book quoted above, on page 123. Hadamard was interested in mathematics not only from a technical perspective but also from a pedagogical and psychological viewpoint. He had been motivated to investigate the question of mathematical creativity after hearing a talk by Poincaré on the subject. What I found interesting about Hadamard's book: i) the nuanced comparisons between discovery and invention ii) the suggestion that scientific truth is born from poetic emotion iii) a 'creativity' questionnaire, assembled by the psychologists Claparède and Flournoy (Appendix I). Although aimed at mathematicians, I think it's fun and interesting for anyone to take. iv) statements about their own creative process by Einstein (Appendix II), Gauss, Hermite, Helmholtz, Poincaré, Mozart, Norbert Wiener, Pólya, Paul Valéry (the poet), etc. v) Hadamard's admission that he needs to walk in order to think - at last, a trait that I share with the great man! vi) extensive discussions about the role played by the unconscious in creative discovery vii) the refrain that typically creative inspiration comes only after a period of fruitless labor. Hadamard's work has apparently been confirmed and extended by subsequent researchers. 
His book was published in 1945 - it's an oldie, but a goldie.
- The Quantum Flow of Light
Let's start this blog with a story: one of my favorite writers of fiction is Gabriel Garcia Marquez, and one of my favorite short stories by him is Light is Like Water. Long after I read this story I began noticing a number of papers in the physics literature which investigated the flow of light as a liquid. Of course, light, being a wave, shows some of the behaviors we commonly associate with water, such as diffraction, interference and even the formation of vortices. But what I found interesting about these papers was that they were reporting the behavior of light as a quantum fluid. Quantum fluids, as opposed to classical fluids, can show superfluid behavior. For example, they can flow around an obstacle without being scattered, that is, without rippling. Now we know that photons, which make up light, are massless. Also, photons pretty much do not interact with each other. In order for light to behave like a quantum fluid, photons have to somehow 'acquire' mass, and also begin interacting with each other. Interestingly, a number of experiments have been devised where both these steps have been implemented. One way to endow photons with mass is to confine them between two highly reflecting mirrors. Roughly speaking, because the photons bounce back and forth between the mirrors many times before they leak out, light takes longer to travel through the distance between the two mirrors than it would if the mirrors were absent. Thus, each photon seems to travel at a speed lower than the speed of light, which implies that it has acquired mass. (As I said, this is a rough argument; technically we say that 'the dispersion relation between the photon frequency and wavenumber is rendered particle-like by the optical cavity'.) In addition, these photons can be made to interact with each other by including a material medium (between the two mirrors) which interacts strongly with the photons, leading to an effective interaction between the photons themselves. 
This medium could be made of atoms or molecules, for instance. Using these two 'tricks', several phenomena, well known from other superfluid systems (such as liquid Helium and atomic Bose-Einstein Condensates), have been observed for light. These include flow of light without loss around a defect for low light speeds, and vortices and solitons. While these phenomena have been observed earlier in other systems, the fact that they can be seen using light implies new possibilities and applications for transporting light (and therefore information) without the usual scattering. For those who have journal access, more details can be found in I. Carusotto and C. Ciuti, Reviews of Modern Physics 85, 299 (2013). A recent review is available here for free.
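The 'rough argument' about acquired mass can be made quantitative with the cavity dispersion relation mentioned above (a standard textbook result; the notation is mine). For a cavity whose resonance frequency is ω_c, a photon with in-plane wavenumber k has energy

```latex
\hbar\,\omega(k) \;=\; \hbar\sqrt{\omega_c^{2} + c^{2}k^{2}}
\;\approx\; \underbrace{\hbar\,\omega_c}_{m_{*}c^{2}} \;+\; \underbrace{\frac{\hbar^{2}k^{2}}{2m_{*}}}_{\text{kinetic}}
\qquad (ck \ll \omega_c),
\qquad
m_{*} \;=\; \frac{\hbar\,\omega_c}{c^{2}} .
```

The expansion has exactly the form of a massive particle's energy - rest energy plus kinetic energy - with an effective mass m* that, for optical frequencies, is many orders of magnitude lighter than an electron.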