
  • This Year's Physics Nobel: Playing with the Fastest Camera in the World

    This post covers the Nobel prize in physics awarded this year to Pierre Agostini, Ferenc Krausz and Anne L'Huillier "for experimental methods that generate attosecond pulses of light for the study of electron dynamics in matter".

    Background

    In the early days of quantum mechanics it became clear that electrons inside atoms could occupy only certain discrete - quantized - energies (unlike, for example, a stone tied to a string, which can be rotated at any energy). The electrons could be sent - excited - from one energy to another by using photons of the appropriate energy. For a long time, in atomic physics laboratories, lamps were used to supply these photons. The basic idea was to apply a current through a gas contained in a glass jar. The electrons in this current would collide with electrons contained in the atoms in the gas and excite them. The excited electrons would then de-excite to a state of lower energy by emitting light. This light would escape through the jar and could be used for exciting other atoms. Then, in 1960, Ted Maiman invented the laser. The laser eventually became the preferred source of photons for exciting electrons in atoms and molecules. This was because the photons coming out of a laser (unlike those from the sun or from a lamp) travelled along the same direction in space (like in a laser pointer), were of a well defined frequency (pure in color), and this color could be tuned very precisely.

    Pulsed lasers

    If a laser emits light without interruption, it is said to be in a continuous wave (CW in the jargon) mode of operation. When it turns on and off, it is said to be pulsed. Pulses can be useful, for example, since they concentrate the laser energy into short but very intense bursts, as opposed to the constant, but lower, level of energy in CW mode. The difference between the two is a little bit like that between sprinting and distance running.
    An example of the usefulness of pulsed operation is laser ablation, a technique for using light to melt or remove material from a surface. This is useful for metal machining, semiconductor lithography (i.e. making computer chips), inertial confinement fusion, treatment of atrial fibrillation (irregular heartbeat), tattoo removal, etc.

    How short can the laser pulses be made? If we operate a laser in CW mode, but block and unblock the light, we can effectively get pulsed operation. Similar effects can be obtained by switching the power supply to the laser on and off. Techniques such as these can create pulses as short as nanoseconds (10^-9 s), i.e. a billionth of a second in duration. More sophisticated techniques, such as mode-locking (the addition of CW modes of different frequencies), can create picosecond (10^-12 s - a million-millionth of a second) or femtosecond (10^-15 s - a million-billionth of a second) long pulses.

    The Nobel work

    The Nobel prize this year was given for the development of techniques which produce attosecond (10^-18 s - a billion-billionth of a second!) long pulses. To understand how this was done, we need to realize that shorter-in-time pulses can be made by adding more waves with different frequencies (some of you may be familiar with this Fourier principle). The technique for generating attosecond pulses is to send femtosecond pulses into a gas. If these pulses are weak, then the same frequencies come out of the gas as are sent in, and the output pulse length is still femtoseconds. (For those who are familiar, this is the case of linear response - the output light field is simply proportional to the input field.) But if the input femtosecond pulses are intense, extra frequencies show up in the output. This is the case of nonlinear response - the output field also has contributions from the square, the cube, etc. of the input field. This leads to output frequencies double, triple, etc. of the input - a process generally called high-harmonic generation. The presence of these extra frequencies in the output pulse shortens it from femtoseconds to attoseconds. What a cool method.

    A camera for electrons

    Another cool thing about pulsed lasers is that we can think of them as strobe lights (usually seen at parties!) for electrons in atoms. To see (pun intended) how this works, let us remind ourselves that an electron in the hydrogen atom orbits the nucleus in about 150 attoseconds. So if we shine a pulse of adequate photon energy we can remove (ionize) the electron from the atom. The probability of ionization is lower when the electron is closer to the nucleus, since it feels a stronger attraction. Now imagine shining a stream of pulses, each a few attoseconds long, on the hydrogen atom. If we do not detect ionization, that means the electron was close to the nucleus. If we detect ionization, it means the electron was far away. In this way attosecond pulses allow us to 'see' what the electron is up to. Since how electrons move plays a central role in atomic and molecular physics, in chemistry, and in solid state physics, attosecond pulses are a very useful investigative tool. There are also possible applications to areas such as fast optical switching and data transfer.

    Two notes before I end: i) For similar work - using femtosecond pulses to 'track' atoms and follow chemical reactions in real time - Ahmed Zewail was awarded the Nobel prize in Chemistry in 1999. ii) The pulse detection techniques have amusing names: FROG (frequency-resolved optical gating), SPIDER (spectral phase interferometry for direct electric-field reconstruction), and RABBITT (reconstruction of attosecond beating by interference of two-photon transitions), for example. Looks like the optical physicists are having their fun.
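    The Fourier principle mentioned above - adding more waves of different frequencies makes a shorter pulse - is easy to demonstrate numerically. Below is a small, purely illustrative Python sketch (a toy model of mode-locking of my own devising, not the laureates' technique): it superposes equal-amplitude harmonics of a fundamental and measures how the central intensity burst narrows as more frequency components are added.

```python
import math

def central_burst_width(n_modes, n_points=8001):
    """Superpose n_modes equal-amplitude cosine waves (a toy model of
    mode-locking) over one fundamental period and return the full width
    at half maximum of the central intensity spike."""
    ts = [-0.5 + i / (n_points - 1) for i in range(n_points)]
    intensity = [
        sum(math.cos(2 * math.pi * k * t) for k in range(1, n_modes + 1)) ** 2
        for t in ts
    ]
    half_max = max(intensity) / 2.0
    # walk outward from t = 0, where all the modes add in phase
    lo = hi = n_points // 2
    while lo > 0 and intensity[lo - 1] >= half_max:
        lo -= 1
    while hi < n_points - 1 and intensity[hi + 1] >= half_max:
        hi += 1
    return ts[hi] - ts[lo]

# Adding more frequency components makes the burst shorter:
w5, w50 = central_burst_width(5), central_burst_width(50)
assert w50 < w5 / 5  # the 50-mode pulse is several times narrower
```

    The width of the central burst shrinks roughly as 1/N with the number of modes N, which is the same bookkeeping behind why the extra harmonics generated in the gas shorten femtosecond pulses toward attoseconds.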

  • Doesn't (anti) matter

    This post is about an experiment in the news this week, which might have taken the first step towards elucidating one of the greatest mysteries in physics: the observable universe seems to be dominated by matter rather than anti-matter. This is a mystery because the laws of physics make no distinction between matter and anti-matter. So where is all the anti-matter hiding? Before we describe this experiment, let us describe what anti-matter is.

    History

    In 1928, Paul Dirac successfully combined the theories of quantum mechanics and special relativity by proposing a new equation, which was named after him. When he solved the equation, he found electrons could be described as having positive as well as negative energies. He predicted that the negative energies corresponded to anti-electrons, which were later called positrons. Thus was born the concept of anti-matter. The positron was discovered in 1932. Discovery of other anti-particles, such as the anti-proton in 1955 and the anti-neutron in 1956, followed. Generally, if a particle and its antiparticle meet, they annihilate: all mass disappears, and is replaced by the equivalent energy. In an electron-positron collision, both particles disappear and photons are released. Note: Antimatter has percolated into the public consciousness. For example, in Dan Brown's novel Angels and Demons a stolen canister of antimatter plays a central role. Also, since annihilation is a source of energy, matter-anti-matter engines have been seriously discussed for space travel applications.

    Availability

    Antimatter seems to be rare in the universe, though. A few antiparticles come to Earth from outer space, in the form of cosmic ray showers. Some positrons are emitted in radioactivity (this is the basis of PET scans in medicine). But if we want a regular supply, we have to make our own, such as at the LEP (Large Electron-Positron Collider), which has been dismantled, and the CEPC (Circular...), which has been proposed.
    Thus, matter seems to dominate in abundance over antimatter. The laws of physics do not offer any clear hint about why this imbalance exists. Experimentally, therefore, it would be interesting to find out if antimatter shows any characteristics different from matter. This might provide a clue to the difference in their perceived presence in the universe. The idea behind the experiment I will now discuss is to see how antimatter responds to gravity. Do antiparticles fall to the Earth under gravity? Are they indifferent to it? Or, as some people have suggested, do they display 'anti-gravity'?

    The experiment

    The idea is simply to first trap a number of antiparticles and then let them out of the trap to see what happens in the presence of gravity. There are several things to be kept in mind here. First, the trap has to be 'contactless'; that is, it cannot be built out of any regular material, because the antiparticles would annihilate on contact with that material (since it would be made of matter). In the experiment, the trap was made from a magnetic field. Second, the antiparticles cannot carry net electrical charge, as that would make them very sensitive to stray electric and magnetic fields, which would mask the effects of gravity. This is the reason the experiment could not be done with positively charged positrons or negatively charged anti-protons, which have been available for some time now. The two had to be combined into electrically neutral anti-hydrogen first, which recently became possible to do in large quantities (hundreds, which gives a large enough detection signal) and for long times (hours, required to accumulate enough anti-particles). Finally, since the trapped anti-hydrogen is not at zero temperature, the atoms are actually moving around in all directions inside the trap. So when the trap is switched off, some atoms go up, some to the side, and some fall down below.
    The outcome

    Roughly speaking, the test was carried out by switching off the trap and detecting how many antihydrogen atoms left from the top and how many from the bottom. If gravity has the expected effect, then more should leave from the bottom. They do. The data are consistent with a theory that includes gravity and do not agree with theories that neglect gravity or include antigravity (Fig. 5 in the paper, which shows this, is quite fun to look at).

    The next step

    I tried to choose my words carefully above. The agreement of the data with regular gravity is at the level of consistency, not precise quantitative agreement. The authors say the next step is to explore this aspect further. In other words, the present study shows that antiparticles see gravity much like regular particles (i.e. the sign of the acceleration is the same for both). The aim of the future study would be to find out exactly how much gravity is seen by antiparticles (i.e. whether the magnitude is the same as that for regular particles).

  • A Physicist Thinks About Biology

    When I was young I had problems with biology. It started with the fact that my memory was not very capacious; as a result, I was not good at memorizing things. I had sort of a meltdown around the tenth grade, when I found it almost physically difficult to memorize endless names of cells, organs, components of the nervous system, etc. It was a relief when I could drop biology in the eleventh grade and focus on physics, chemistry, mathematics and engineering drawing. These subjects did not seem to require memorization; one could start from a few laws and derive the required results. Relatively more memory might be required for chemistry, but I found the subject manageable, and in fact quite fascinating.

    While in graduate school for physics, and having to deal, as a young adult, with the quirks of human behavior at both personal and professional levels, I became interested in psychology, anthropology, sociology, etc. These subjects, however interesting, were quite complex in themselves. Following some natural threads in my reading, I found myself getting more fascinated by animal behavior. It took me a bit of time before I realized that what I was implementing was a classic physics move - confronted by a complex system, I was trying to find a simpler system that showed some of the same phenomena. Basically, I was studying animal behavior as it seemed to be a more tractable subject than human behavior (but related). Over time, I have made, somewhat randomly, forays into the animal kingdom that turned out to be interesting. Below I share some of these:

    i) I was fascinated by the behavior of horses. An amazing manifestation of equine jealousy, which I read about in Laura Hillenbrand's outstanding book Seabiscuit (a famous horse in American racing history; the movie based on the book was nominated for an Oscar): horses often train in pairs and round off the day with a practice race - the one which loses often refuses to eat dinner that day (!).
    Another manifestation of jealousy/competitiveness: Seabiscuit would often come from behind in races, pull level with the leader, look him/her in the eye, and then pull ahead. Whoa! Bonus observation: When small groups of horses travel in the wild, they walk in a line with a set hierarchy: the alpha male comes last. The first in line is his lead mare, then her children, then the other mares and their children. This should not be taken to mean that female horses are weaker than male horses. Male and female horses are (mostly) raced together, and females often win.

    ii) I ran into the writings of Frans de Waal and was immediately sucked into the similarities between ape and human behavior. Politics, emotional blackmail, violence, intrigue, sex, friendship, the battle for resources - they are all there in the chimpanzee kingdom! Also check out this amazing experiment with Capuchin monkeys (which are not apes), who display moral outrage in ways we are accustomed to think of as perhaps exclusively human.

    iii) I was bitten by a dog when I was very young, about 6 years old. The bite and the following injections into my stomach, to prevent rabies, gave me a fear of dogs for a long time. I would instinctively shrink even if I saw one coming towards me on a leash. About 15 years ago, I was asked to dogsit for some friends whom I could not refuse. I ended up with 4 dogs over a long weekend and loved it. Partly responsible for this transition was a dignified, handsome, and affectionate black labrador by the name of Othello. I could see the dogs were very sensitive to my state of mind. If I came into the house in a good mood, they would put their paws on my chest. If I came in with a bad temper, one sniff and they would find the corner farthest away from me. In turn, they had feelings of their own. One of them even threw a tantrum at me, shaking with rage, when I did not pet her after her meal! Needless to say, I pacified her right away.
    A few years ago, I happened to encounter the incredible videos of Cesar Millan, the Dog Whisperer, on YouTube. I became addicted to viewing his demonstrations and explanations of dog psychology. To me, they represented vital knowledge I had been missing for years. Now I try to use Cesar's principles every chance I get and connect with dogs much better. Cesar's material is also, to me, a working example of how empirical knowledge can be used to build a phenomenological model and explain a complex system - a paradigm of physics (and probably of science in general). Previously, I used to joke that I am only interested in Nature at the level of atoms, but thanks to our animal relatives I am now hooked on larger organisms.

  • Probing the Pudding

    This post is about proof - an object usually found in the pudding - and its importance. The word 'proof' comes from the Latin 'probare', which means to test, and which hopefully explains the title of the post. Below I will first discuss the importance of proof in mathematics, keeping in mind that I am not a professional mathematician. Then I will discuss the importance of proof in physics, keeping in mind that I am a barely competent theoretical physicist.

    Proof in Mathematics

    Mathematical proof has existed in several forms since antiquity (in the form of plausibility arguments, etc.). The first person to exploit it at great depth and range was likely Euclid. A brief look at the Elements (proofs from which all of us learnt in high school) will convince most people that the author knew the concept of proof inside out. This included its limitations, resulting in probably one of my all time favorite quotes ("What has been asserted..."). Proofs start with axioms (assumptions) and then logically infer conclusions. Many methods of proof exist: mathematical induction, reductio ad absurdum, contraposition, computational, etc. There is apparently even a subject called proof theory, where proofs are considered to be formal mathematical objects and manipulated accordingly. I know nothing about it. In this context it might be appropriate to state that Gödel's Incompleteness Theorem is often misinterpreted to imply that nothing can be proved in mathematics. That is not correct; what it says is that some things cannot be proved. This is an involved topic, and I will probably write about it in a separate post. For now, let's note that it was ironic that one of the greatest logicians of all time convinced himself that someone was trying to poison him and starved himself to death. (Or was his suspicion correct?)
    Proof in Physics

    I may be allowed to begin this section by confessing that mathematicians often get upset at physicists for not being rigorous enough with their mathematics, and in particular for not supplying proof for their assertions. When I was a postdoc, I actually attended a seminar in the mathematics department aimed at addressing - if not redressing - this injustice. The snide answer to the mathematicians' complaint is that physicists do not need proof, they have experiment (or another one, perhaps more applicable to theoretical physicists: too much rigor can lead to rigor mortis). The honest answer is that physicists need proofs and use them fairly often.

    There are various kinds of proofs in physics. I will discuss two types in this post. The first type involves proofs which are essentially limited to the mathematical formalism. The most important of these are called theorems. These theorems basically help in calculating quantities of physical interest. Examples are the Parallel Axis theorem (used for determining the moments of inertia of rigid bodies) and the Quantum Regression theorem (used for determining correlation functions in quantum mechanics). Another type of proof, though relying on mathematics, says something profound about physics. Examples in this category are the No-Cloning theorem (which shows that unknown quantum states cannot be cloned) and the Penrose-Hawking singularity theorems (which specify conditions for the appearance of gravitational singularities, such as those inside black holes). Notwithstanding these examples, it generally seems difficult to prove the existence of natural laws or phenomena. For example, as far as I know, no one can mathematically prove that the sun - or anything for that matter - exists.

    To conclude on a more positive note, a short list of alternative proof styles I have encountered as an academic physicist over 25 years:

    Proof by intimidation - You say the same thing again, but this time in a louder voice.
    Proof by erasure - You erase from the board what you wrote so quickly that no one can question it.
    Proof by tautology - You say the thing is well known to those who know it well.
    Proof by absenteeism - During the talk you say we can discuss this afterwards, and afterwards you make yourself scarce.
    Proof by transference - You say the proof is trivial and is left as an exercise for the questioner.

  • Exploring the (Meta)Verse

    Science quite often influences, or even gives birth to, poetry. This includes science written as poetry, poets complaining about or making fun of scientists, scientists having fun with science, and other variations. Here are some examples, collected over time, in no particular order:

    Mathematicians

    i) Bhaskaracharya (1114-1185), who wrote his Lilavati (apparently named after his daughter), a collection of mathematical problems, in verse. My Sanskrit is no longer good enough - if it ever was - to follow the original. Translations are available in other languages; some of these do not preserve the poetry.

    ii) Lewis Carroll (Charles Lutwidge Dodgson; 1832-1898) lectured on mathematics at Christ Church, Oxford. He is famous for writing books like Alice in Wonderland and Through the Looking Glass. In the latter appears the poem The Walrus and the Carpenter, which is amusing of its own accord, but will also be relevant below.

    Astronomers

    i) Eratosthenes (~250 BC), a Greek savant, calculated, among other things, the circumference of the Earth. His sieve, for finding prime numbers, is also famous. He also wrote poems such as Hermes and Erigone.

    ii) Omar Khayyam (1048-1131), a polymath from Persia, made contributions to both science and poetry. He wrote one of my favorite poems of all time, the Rubaiyyat (Edward Fitzgerald translation), which I can quote here without copyright worries as it is in the public domain. A sample quatrain about the confusing nature of the universe: "Myself when young did eagerly frequent Doctor and Saint, and heard great argument About it and about: but evermore Came out by the same door where in I went."

    iii) Walt Whitman (1819-1892), the poet whose work When I Heard the Learn'd Astronomer is often quoted by those miffed with science [I should perhaps say astronomy specifically :-)].
    iv) W H Williams (1881-1959), a physicist at Berkeley who was around at the time Eddington was visiting the place, wrote The Einstein and the Eddington, and read it out at the farewell party for Eddington. I find it very amusing, and a masterful adaptation of Lewis Carroll's The Walrus and the Carpenter, mentioned above. I first came across this poem in the book Truth and Beauty by S. Chandrasekhar.

    Physicists

    i) Lucretius (99-55 BC), the scientist who famously discussed the constitution of the universe in terms of atoms and the void in his long poem On the Nature of Things.

    ii) James Clerk Maxwell (1831-1879), the unifier of electromagnetic theory, wrote poetry extensively and often amusingly. A number of his poems - some of them about physics - are available here.

    iii) William Wordsworth (1770-1850), the famous English romantic poet, gets a mention here for his lines on Newton in The Prelude.

    iv) John Updike (1932-2009). Famous American writer. Try his short poem on neutrinos.

    Chemists

    Humphry Davy (1778-1829), the English chemist who invented the miner's lamp and discovered Michael Faraday, was friends with poets such as Wordsworth, Lord Byron and Coleridge. He also wrote quite a bit himself. Here's one, about breathing nitrous oxide.

    Biologists/doctors

    i) Erasmus Darwin (1731-1802). I checked to see if Charles Darwin had written any poetry. I could not find anything by him, but in the process learnt that his physician grandfather, Erasmus, wrote a lot of it. There's a long one, called The Botanic Garden.

    ii) Ronald Ross (1875-1932), the British medical doctor who discovered that malaria was transmitted by mosquitoes and received the Nobel prize in Medicine in 1902 for it. He also wrote a fair bit of poetry, including a poem composed after making his malarial discovery, which is quoted here. (Apparently the book The Calcutta Chromosome is partially based on Ross.)

  • Deciding on Gödel

    I had mentioned in an earlier post that I would likely dedicate a separate piece to Gödel. Here it is. This post relies on several books and publications; a good one is Kurt Gödel: The Genius of Metamathematics by William D. Brewer.

    In brief:

    Life: There were three main phases to Gödel's life (1906-1978).

    i) Birth and early upbringing in Brno. I felt like an illiterate on discovering this information: when I visited Brno (Czechia) earlier this year, I did not know Gödel was born there (so was Milan Kundera, and I didn't know that either; I just knew about Mendel). So I did not visit his house. What a miss. As a child Gödel indulged in ceaseless questioning, demonstrating what would be his lifelong conviction - that every fact about the universe should have a rational explanation.

    ii) Higher education in Vienna. In Vienna, Gödel completed his undergraduate degree, then his PhD (under Hans Hahn) and then his Habilitation, a kind of postdoctoral work required for becoming a professor. Gödel started as a physicist, possibly due to reading Goethe on optics in high school. At the University of Vienna, a charismatic professor by the name of Philipp Furtwängler taught him a course on number theory and inspired him to change over to mathematics. Furtwängler was paralyzed from the neck down and lectured from a wheelchair - shades of Stephen Hawking.

    iii) The remainder of his life at the Institute for Advanced Study in Princeton. At the IAS Gödel became close friends with Einstein. Funny story: Einstein and Morgenstern (an economist mentioned in my post on von Neumann) went with Gödel for his US citizenship interview. Gödel had studied up, months in advance, on local governance and the US constitution. He believed he had found a logical flaw in the constitution which would allow the country to become a dictatorship, and tried to explain it to the judge. Nonetheless, he was granted his citizenship.
    Gödel became well known to the general public after the publication of Douglas Hofstadter's classic Gödel, Escher, Bach in 1979.

    Work:

    i) Metamathematics - This subject involves the foundations of mathematics.

    A. Completeness theorem (1930): This was his PhD work. He showed (roughly speaking) that every statement which logically follows from the axioms of a system can be proved using those axioms.

    B. Incompleteness theorems (1931): This was his Habilitation work. a) His first incompleteness theorem showed (roughly speaking) that no axiomatic system can prove all truths about natural numbers using just those axioms. b) His second incompleteness theorem showed (roughly speaking) that no such system can prove its own consistency. You can get a flavor of the terminology and notation involved in the discussions here. I am not a trained logician, and I cannot claim to have followed the proofs in detail. What I was able to follow was the trick by which Gödel made truths verifiable in arithmetic: he assigned every logical symbol used in statements about natural numbers (not, plus, times, equals...) a number. Then any symbolic logical statement could be expressed as a sequence of these numbers. Further, he used these numbers as the powers of the smallest prime numbers. So if there are 5 numbers in the logical statement, say 4, 6, 11, 3, 4, he uses the first 5 primes: 2, 3, 5, 7, 11. He then raises each prime to the corresponding power and multiplies them to get 2^4 3^6 5^11 7^3 11^4. Such numbers are today called Gödel numbers. Gödel numbers can be factorized uniquely into their prime factors - this is guaranteed by the fundamental theorem of arithmetic, whose essentials go back to Euclid. Gödel numbers can be large (try calculating the one above), depending on the complexity of the corresponding logical statements. But the cool thing is that we can now - after Gödelization - manipulate logical statements about numbers by using arithmetical operations on the Gödel numbers! This is a self-referential system, which can be used (roughly speaking) to verify truths.
    What Gödel showed is that in implementing this process you will sometimes end up with unprovable statements, by considering statements akin to the liar's 'this statement is false' (Gödel's version is, roughly, 'this statement is not provable'). (The link is to a useful video explanation.)

    ii) Computation

    Gödel proposed a definition of a computable function, and investigated theorems that sped up computation. He believed no machine could equal the human mind. I will not write more on this as computer science is far from my specialty.

    iii) General relativity

    Gödel proposed a rotating universe which allowed for time travel. This model consisted of an exact solution of Einstein's equations of general relativity, and was presented in a paper contributed to Einstein's Festschrift when he turned 70. There does not seem to be any experimental evidence for such a universe, as the one we have does not appear to be rotating; also, Gödel's universe, unlike ours, does not expand. We will stop here, since any post on Gödel must by definition be incomplete.
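    The Gödel numbering described above is simple enough to try out in a few lines of code. The sketch below is purely illustrative (the assignment of numbers to symbols is arbitrary, and it handles only short sequences of positive codes): it builds the Gödel number of a code sequence and recovers the sequence by prime factorization.

```python
from math import prod

PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]  # enough for 10 symbols

def godel_number(codes):
    """Encode a sequence of positive symbol codes as one number:
    raise the i-th prime to the i-th code and multiply them all."""
    return prod(p ** c for p, c in zip(PRIMES, codes))

def decode(n):
    """Recover the code sequence by unique prime factorization
    (the fundamental theorem of arithmetic); assumes positive codes."""
    codes = []
    for p in PRIMES:
        exponent = 0
        while n % p == 0:
            n //= p
            exponent += 1
        if exponent == 0:
            break
        codes.append(exponent)
    return codes

# The example from the post: the code sequence 4, 6, 11, 3, 4
n = godel_number([4, 6, 11, 3, 4])
assert n == 2**4 * 3**6 * 5**11 * 7**3 * 11**4
assert decode(n) == [4, 6, 11, 3, 4]
```

    The round trip works because the factorization is unique, which is exactly why Gödel chose primes: arithmetic on the single number n faithfully mirrors manipulations of the original symbol sequence.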

  • More about Bardeen

    As promised, this is a post about the biography of the only person to ever win two Nobel prizes in physics: John Bardeen. The book is True Genius: The Life and Science of John Bardeen: The Only Winner of Two Nobel Prizes in Physics by Lillian Hoddeson and Vicki Daitch. The book is basically aimed at recounting the biographical details of Bardeen's life. But it also considers, in light of those facts, why Bardeen is not very well known to the general public (say as compared to Einstein), as well as the nature of genius: what does it take to win one - or more - Nobel prizes?

    Highlights that struck me as I read the book:

    The midwest had a profound role to play in Bardeen's life: he was born and initially educated in Madison (finishing high school at the age of 15; already attending courses at the local university before that), and he eventually became a professor at the University of Illinois at Urbana-Champaign. The northeast also made major contributions to his career: he went to graduate school in Princeton (wanted to work with Einstein, but the man wasn't taking students); postdoc-ed at Harvard (with van Vleck, who also grew up in Madison, and whose course on quantum mechanics Bardeen had earlier attended at Wis-Mad); worked for the Navy during the war years in Washington DC (where he got to interview Einstein, who had come up with a new design for a torpedo); and the first Nobel - for the transistor - came from his work at Bell Labs in New Jersey.

    Bardeen's father was the Dean of the School of Medicine at the University of Wisconsin-Madison. Bardeen's mother, who passed away in her forties due to cancer, was a school teacher who worked on implementing the philosopher John Dewey's program of teaching children to set their own problems, and then find solutions using a combination of creativity and collaboration. This strategy was strongly reflected in Bardeen's work, especially the theory of superconductivity, which brought him the second Nobel.
    The book confirms Bardeen's misgivings about his first Nobel, which I had referred to in my previous post. His discomfiture arose for several reasons: he did not consider the invention of the transistor to be a deep physics contribution; he was hot on the trail of solving superconductivity; and he was uncomfortable receiving the Nobel (1956) before his PhD advisor Eugene Wigner (1963). A nice coda to the story is mentioned in the book - Bardeen's wife, who was beginning to lose her hearing, received one of the first transistorized hearing aids.

    The book reports that the conversation about not bringing all three children to the first Nobel ceremony did indeed occur, but between the King of Sweden and Mrs. Bardeen (and not John Bardeen, as I had suggested earlier). The dispute with Josephson is also mentioned, including a follow-up that I had missed: Bardeen invited Josephson to UIUC as a postdoc, and Josephson came.

    A substantial part of the book is centered around the functioning of the solid state physics division at Bell, the hiring at and functioning of several prominent American universities, and the lore of superconductivity. I will not say much about these topics, except to give the authors full marks for their treatment of the relevant technologies and science, as far as I could judge; and to mention the hilariously named Institute of Retarded Study (vis-a-vis the Institute for Advanced Study in Princeton, which housed Einstein, Gödel, etc.) on the fourth floor of the physics building at UIUC, where the graduate students, including Bardeen's student Bob Schrieffer (the S in BCS), worked.

    Why is Bardeen relatively unknown to the public? The book suggests (the first chapter is devoted to the topic) that this is because he was not a nonconforming, unstable, self-trained, solitary, mad-inventor type. He was soft spoken and modest, educated at institutions of higher learning, a family man, and very collaborative.
The book memorably mentions one of his students, Ravindra Bhatt, recalling Bardeen's lack of self-promotion and proposing the concept of a 'Bardeen number': the ratio of genuine content to showy display. What does it take to 'raise a genius'? The 15-page epilogue to the book addresses this subject. I will leave it to you to read it - if you want to win a Nobel prize you should be prepared to do some work!

  • One is Not Enough: Double awardees of the Nobel Prize

    The Nobel Prize is probably the top recognition in the fields of Physics, Chemistry, Physiology or Medicine, Literature and Peace. Certainly a tremendous amount of attention and prestige are associated with an award of the Prize. Very few people - limited to a maximum of three a year in any discipline - receive a Nobel. The club of Nobel laureates is therefore quite exclusive. In this post I will discuss a club which is even more exclusive - the collection of individuals who have been awarded two Nobel prizes. There are just five of them. These are such high achievers that even a Nobel prize could not stop them! Marie Curie was the first person to win two Nobel prizes. So far she is the only scientist to win the Nobel for two different sciences. She received a quarter of the first one, in Physics, shared with her husband Pierre Curie (the two of them got half the Prize; Henri Becquerel got the other half), for her studies of radioactivity, in 1903. The second one, in Chemistry, she won alone, for the discovery of the elements radium and polonium; it was awarded in 1911. There are several biographies of her, and also a recent biographical drama film. John Bardeen is the only person to have won two Nobel prizes for physics. The first one he shared with William Shockley and Walter Brattain, for the invention of the transistor, in 1956. The second he shared with Leon Cooper and Robert Schrieffer for their ('BCS') theory of superconductivity, in 1972. I have downloaded his biography and will probably post a blog on it once I finish reading. I have read two interesting stories about Bardeen's prizes, both unconfirmed (if anyone has a definitive source, please let me know; I will, of course, look for them in the book). The first one is that he was apprehensive about receiving the prize for the transistor in 1956 as he had solved superconductivity - along with Cooper and Schrieffer - recently, and wanted to receive the Prize for that accomplishment. 
He accepted after he was reassured that the first Nobel would not interfere with the second. The second story was that he brought only one of his three children to the ceremony in Stockholm in 1956. The Swedish king reproached Bardeen for this, and Bardeen replied that he would bring all three children with him the next time. He did. Frederick Sanger won the Nobel prize for Chemistry twice. The first time he was the sole winner, for his work on insulin, in 1958. The second time he shared half the Prize with Walter Gilbert (Paul Berg received the other half), for studies of nucleic acids, in 1980. His obituary in Science provides a good overview of his contributions; the one in PNAS is also quite good. A fairly extensive interview, with interesting comments on his playing fullback in soccer, is available here. Karl Barry Sharpless won the Nobel prize in chemistry twice. The first time, in 2001, he received half the Prize for "chirally catalysed oxidation reactions" (Ryoji Noyori and William S. Knowles shared the other half, for chirally catalysed hydrogenation reactions). The second time he shared it with Carolyn Bertozzi and Morten Meldal for "the development of click chemistry and bioorthogonal chemistry", in 2022. Sharpless talks about how he works here. Linus Pauling was the only person to win two unshared Nobels. The first one was in Chemistry, awarded for his work on the chemical bond, in 1954. The second one was for Peace, in the context of de-escalation of the nuclear arms race, in 1962. Pauling describes his career and experiences in this interview. Has anyone won the Nobel prize three times? No individual has - only the Red Cross.

  • Bhabha: Science, Administration and Art

    Homi Jehangir Bhabha was a physicist who made important contributions at the international (there are processes and equations named after him in physics) and national (he was the head of the Indian nuclear program and the founder of the Tata Institute of Fundamental Research and the Bhabha Atomic Research Centre) levels. This post is a discussion of the recent and quite extensive (722 pages) biography of Bhabha by Bakhtiar Dadabhoy. Science Bhabha obtained his PhD at Cambridge under Ralph Fowler, one of the early physicists to apply quantum theory to astrophysics. Bhabha made fundamental contributions to cosmic ray physics (Bhabha-Heitler theory), collisions between electrons and positrons (Bhabha scattering, nowadays regularly used to calibrate beams in particle accelerators), and spinning particles (Bhabha-Corben equations and Bhabha equations). His contributions were substantial enough for him to be nominated five times for the Nobel prize in physics, which, however, he was never awarded. Administration Bhabha was visiting India, intending to return to the West, when the Second World War broke out. He stayed on in India and took the opportunity to set up a first-rate research institute: the Tata Institute of Fundamental Research (TIFR). I spent a summer at TIFR in my senior year in college, under the Visiting Students Research Program, working on a particle accelerator. During this time I had the privilege of meeting Prof. Virendra Singh, who is quoted often in the book. He was the Director then, and his younger son was my classmate at IIT Bombay. Bhabha also set up the Bhabha Atomic Research Centre (BARC). After he had heard of the discovery of atomic fission, he suspected that America was building a bomb. After the bomb was deployed, Bhabha, along with Meghnad Saha and S. S. Bhatnagar, pushed for an Indian atomic energy program and eventually a nuclear weapons program. Bhabha was also, along with the statistician P. C. 
Mahalanobis, an early pioneer of high-performance computing in India. Eventually he became too mired in administration to continue active research and teaching. Interestingly, the book says he gave instructions around this time (1956) that he should be called "Dr." and not "Prof." Bhabha, as he could no longer discharge the duties of the professoriat! Music Bhabha was very sensitive to music. The book says playing music was all his parents had to do to stop him from crying as a child. As an adult, Bhabha astonished his European colleagues by pointing out subtle features in, e.g., Beethoven's quartets, which they were unaware of. Art and Literature Bhabha was a painter of distinction himself. With the permission of Nehru, he used 1% of the TIFR budget to acquire works of art. He commissioned artists like M. F. Husain to paint on campus (he also tried to contract Picasso). He played an important role in the early days of the magazine Marg, which was established by the writer Mulk Raj Anand. Conclusion Dadabhoy's biography is quite detailed, especially about the setting up of TIFR, BARC and India's nuclear arms program. It educated me about Bhabha's intimate correspondence with Pauli, his close relationship with Nehru, his untimely death in an air crash in 1966, and the conspiracy theories that followed. There is ample space given in the book to the development of related characters, such as Raman, Saha, Schrodinger, Born, Bhatnagar, Sarabhai, etc. I was impressed with how well the physics is handled in the book, given that Dadabhoy is not trained in the discipline. All in all, a substantial and informative read.

  • Standing by the Science

    This post is about a variety of attitudes that some well-known scientists have taken to the discoveries they have made. These attitudes range from wishy-washiness to extreme confidence. The examples have been chosen at random, so please do not be offended if your favorite stories have not been included (I would be happy to be informed of them). Also, I will not discuss the reasons for the responses given by the scientists in every case. That's a bigger task than I have time and space for. We will start with Einstein, who will in fact appear again later. The first story is about his prediction of the deflection of light by a gravitational field. After Eddington (more on him later, too) had confirmed the effect, Einstein was asked how he would have felt if experiment had contradicted his theory. He famously said he would have felt sorry for the Lord, since (he believed) the theory was correct. Not exactly short on confidence, was he? Next is Subrahmanyan Chandrasekhar, who found in his studies that there was a critical mass, above which stars evolved into neutron stars or black holes and below which they became white dwarfs. This idea was opposed and ridiculed by Eddington, who believed all stars eventually became white dwarfs. Chandra was young at that time (about 24 years of age) and relatively unknown, while Eddington was older (about 52), and famous after his confirmation of Einstein's theory. The two scientists maintained friendly relations, but rather than fight the opposition to the end, Chandra wrote his results up in a book and changed his field to another topic in astrophysics. Eddington passed away in 1944, and in 1983 Chandra was recognized with a Nobel prize in physics. By then his theory was well confirmed by experiment. Now comes Brian Josephson, who received the Nobel prize for physics for predicting the behavior of supercurrents in the presence of a tunneling barrier. 
His original prediction was initially met with strenuous - but polite - opposition from John Bardeen (the account provided here is quite amusing), one of the original pioneers of the theory of superconductivity, and the only person in history to receive two Nobel prizes in physics. This was substantial resistance (sic), but Josephson persisted with his claims. Experiments eventually showed Bardeen to be wrong and he graciously withdrew his objections. Next we have Dan Shechtman, who won the Nobel prize for chemistry in 2011 for the discovery of quasicrystals (structures that are ordered, but not periodic). His original observations were not believed to be correct and he had to leave his research group as well as face opposition and ridicule from scientists as prominent as Linus Pauling (Pauling said there were no such things as quasi-crystals, only quasi-scientists - ouch!). Shechtman persisted, went through a roller-coaster ride trying to get his results published, and was eventually proved to be correct. Einstein again. This time the prediction in question was that of gravitational waves. Although they came out of the mathematics of his theory of general relativity, Einstein went back and forth between positing their existence and denying it. You can find quotes from him to this effect here. Gravitational waves were finally detected in 2015, with the discovery announced in 2016. Murray Gell-Mann famously made sense of particle physics by introducing the notion of quarks. But for a long while he referred to them as mathematical conveniences rather than as real particles (among other things they seemed to imply the existence of fractional electric charge). The physical existence of quarks was eventually experimentally established. That's all for now, but I have a feeling I missed some of the other juicy stories...

  • Making Waves with Atoms

    Background One of the revelations of quantum mechanics is that the same objects can behave as particles as well as waves. Light was the first object verified to show this behavior. Young, Maxwell, Hertz and others showed that light behaved like a wave; Planck, Einstein and others showed light behaved like it was made of particles. Later, de Broglie, Schrodinger and others showed that material particles like electrons and atoms also displayed wave-like behavior. Einstein, following work by Bose, predicted in 1924 a situation where a collection of atoms behaved like a giant (matter) wave: a Bose-Einstein condensate (BEC). A BEC was realized in the laboratory in 1995 - the year I joined graduate school - leading to a Nobel prize in 2001. It is coming up to 30 years since BEC was made; in this post I will describe three exciting areas of research that use BECs as a platform. Interferometry One of the fundamental characteristics of waves is that they interfere. This can be seen for water waves in a pond by dropping stones in nearby locations. Likewise, matter waves also interfere. This was famously first demonstrated in an experiment at MIT, where two BECs were combined to yield bright and dark fringes in the locations of the atoms. Matter wave interferometry is a technique developed mostly in the 1990s, where one starts with a BEC, divides it into two pieces, and then recombines them. Before the recombination, one of the two pieces is made to interact with some force which we desire to measure. The force shifts the interference pattern relative to when it is absent, and this change can be used to obtain information about the force. Atoms move much more slowly than photons, and hence are more sensitive to forces that vary slowly. Thus, using matter wave interferometry, we can measure the acceleration of a vehicle, and also important constants of physics such as the gravitational constant and the fine structure constant. 
Recent matter wave interferometers have opened up exciting prospects for using entanglement to increase resolution. Atomtronics Electronics - relying on the flow of electrons through wires - has revolutionized society. Notwithstanding its crucial presence in computers, cell phones and various other devices, electronics technology has now started approaching its limits. One problem facing the field stems from the fact that currents flowing in metallic wires face resistance, which leads to heating. The speed and power of modern-day computers are practically limited by how many circuit components can be packed into a certain volume of space without overheating the device. One way out of this problem is to look for high-temperature superconductors - materials which do not offer resistance to the flow of electric current. This is an exciting and ongoing, but still unsuccessful, pursuit. The challenge is that superconductivity shows up only at low temperatures or high pressures, and not under the ambient conditions where machines typically operate. Another way out is to use the fact that BECs display superfluidity (though BEC-formation is neither necessary nor sufficient for superfluidity!) - wherein neutral atoms can flow without friction. This has led to the burgeoning field of 'atomtronics', where matter waves flow without generating heat, in various kinds of circuits. Analogous to regular electronics, elements like transistors and diodes can also be set up using neutral atoms. Although the atoms themselves are at very low temperatures (nanokelvins!), atomtronics works at room temperature, as the atoms are well isolated from their surroundings. The challenge is that these circuits require high vacuum for the isolation, and as yet these vacuum setups can hardly match the miniaturization, complexity or portability of standard electronic chips. Still, this is an exciting area of research, one that I have recently entered. 
Turbulence One of the great fundamental unsolved problems in classical fluid mechanics is that of turbulence. It is also a topic of practical importance, e.g. in aircraft design and weather forecasting. Turbulence consists of flow whose properties (pressure, velocity) are irregular (chaotic). It can be seen in a variety of systems, such as when we drag something (our finger) through water. If we move the finger slowly, the liquid flows past it smoothly. This is called laminar flow. If we move the finger fast enough, irregular flow, bubbles and froth result. This is turbulent flow. A cool video demonstration is available here. A full understanding of classical turbulence, from the underlying Navier-Stokes equation, still eludes us. For example, we cannot always predict the speed at which laminar flow becomes turbulent. Enter BEC into this picture. Dragging an obstacle through a BEC results in quantum turbulence, since the BEC represents the quantum mechanical manifestation of the wave nature of matter (in other words, it's a quantum fluid). Similar studies have been carried out earlier in superfluid Helium. In contrast to classical turbulence, quantum turbulence i) occurs in a superfluid, so there is no damping to convert kinetic energy to heat, and ii) involves shed structures, such as vortices, whose circulation is quantized (because the quantum wavefunction describing the superfluid must be single-valued) and hence not arbitrary. Typically energy considerations lead the circulation to take on its lowest possible value, i.e. a single quantum. These interesting differences, which can sometimes simplify the phenomena, have contributed to making quantum turbulence an interesting frontier of contemporary physics research.
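To put a rough number on the interferometric force-sensing idea described above, here is a minimal back-of-the-envelope sketch in Python. It assumes the standard phase formula for a light-pulse Mach-Zehnder atom interferometer, delta_phi = k_eff * a * T^2; the 780 nm wavelength (the rubidium D2 line) and the pulse separation T are illustrative choices of mine, not details from the post:

```python
import math

def fringe_shift(a, T, wavelength=780e-9):
    """Phase shift (radians) accumulated by a light-pulse Mach-Zehnder
    atom interferometer under constant acceleration a (m/s^2), with
    time T (s) between pulses: delta_phi = k_eff * a * T**2.
    k_eff = 4*pi/wavelength for a two-photon transition."""
    k_eff = 4 * math.pi / wavelength
    return k_eff * a * T**2

# Illustrative numbers: Earth's gravity, 100 ms between pulses.
phi = fringe_shift(a=9.81, T=0.1)
```

For Earth's gravitational acceleration and a modest T of 100 ms, the accumulated phase comes out to over a million radians, which gives a sense of why such devices can resolve minute changes in acceleration; the quadratic dependence on T is also why longer interrogation times (drop towers, fountains) pay off so handsomely.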

  • The Story is the Thing

    Nowadays, when I get to talk to younger students studying science, especially seniors in high school or starting undergraduates, I ask them to give some attention to reading literature (flash fiction, longer stories, novels, podcasts - anything they like). That sounds like strange advice - how would reading literature help science students? The mathematicians would say our business is proving (bounds and) theorems - why should we read flash fiction? The physicists would say our job involves making mathematical models and solving equations - how will going through stories help us? The chemists would say our tasks are basically looking at reactions and analyzing processes - what's the point of scanning novels? The computer scientists would say our mandate is to write code and develop algorithms - how would listening to podcasts aid our endeavors? In fact, for some science students, the minute they can stop taking courses in literature (and the humanities in general) is a moment of relief. Literature to them is not attractive, as it involves subjective considerations (where a lot, if not everything, seems to boil down to personal opinion), while science is all about quantitative phenomena (and facts which exist independently of anyone's personal opinion). The connection between science and literature arises from the fact that the technical documents we have to assemble while performing science - grant proposals, journal and review articles, reports, books, applications for tenure and/or promotion - all share the structure of stories (I would claim that even a piece of good computer code tells a story). I have quoted documents from my experience in academia, but I suspect that similar/analogous objects exist in other places as well - in industry or government jobs, for example. And in an increasingly competitive world, it has become imperative that we tell the most compelling stories possible. 
I should clarify right away that in no way am I suggesting that we should misrepresent or fabricate any material. We have to tell the true story - the truth as we can best determine it - in each case, sticking conscientiously to the facts. But within those constraints there is usually scope for arranging the ingredients in the most compelling way. And the more compelling the story, the better the chance that the grant proposal will be awarded, the paper will be accepted, the book will sell, tenure will be given, and promotion secured. All this, I believe, is linked to the fact that storytelling arises from a biological necessity that human beings have, to establish narratives that make sense of the infinitude of facts and impressions that life throws at us. Let us take the example of a physics research article. In my opinion the abstract should itself be a short story - a condensed version of what is to come in the main body of the paper. The main paper should set up by posing the problem and its importance (the basis of conflict in the story), describing previous unsuccessful or partially successful attempts in the area (deepening the crisis), describing the approach of the authors (proposing a rescue plan), fleshing out the work in detail (revealing the plot in measured steps) and delivering a punchline (the climax of the story). I have not covered every detail, of course, but hopefully enough to convey the general idea. Of course, one may ask if such an approach is superficial. Shouldn't grants be funded on the quality of the science rather than the cosmetic polish of their writing? Shouldn't papers be judged on the depth and reach of their results, rather than the way in which these results are arranged? A practical answer, which I have heard from several program officers who manage research portfolios, is that funding agencies receive many competitive proposals that contain great science. Typically, there are many more competitive proposals than can be funded. 
In this situation, any and all legitimate techniques for gaining a competitive advantage can be, should be, and are used. A more philosophical answer is that while superficial work may achieve short-term success, in the long term science with depth will likely survive and continue to be impactful. Coming back to the original advice mentioned at the beginning of this post, I believe telling good stories is an art. How does one learn this art? A quick review of online interviews with successful authors will reveal some advice they often repeat: a good way to become a good writer is to read a lot. Their point, I think, is that one should try to absorb the examples by osmosis. Accessible analyses of writing techniques are also available nowadays. These can clarify important concepts like foreshadowing, world-building and micro-plotting, which can be productively translated into the scientific/technical context. Literature also improves our communication skills. In this context, an interesting anecdote: a decade ago, there was a lot of emphasis in the United States on STEM (Science, Technology, Engineering and Mathematics) education. I remember reading that employers found that to get STEM graduates to communicate and collaborate effectively, English majors had to be hired into the team. So my advice to the science majors is perhaps not completely misplaced - at the very least they will be humanized by their reading.

Responsible comments are welcome at mb6154@gmail.com. All material is under copyright ©.

© 2023 by Stories from Science. Powered and secured by Wix
