Saturday, May 3, 2008

Atomic islands in the Fermi sea


When the founders of Quantum Mechanics talked about the wave functions of electrons in atoms and solids, I doubt they imagined that those wave functions would ever be imaged in experiments. Scanning tunneling microscopy (STM) has opened up that possibility, and the images obtained are seriously beautiful. (A technical aside: strictly it is not the wave function that is imaged but the density of electronic states, but let's not be picky.) Electron waves break against triangular islands, atomic mountains rise out of the sea of electrons, and cliffs drop sheerly down to the ocean when one type of nano-structure ends and another begins.

In fact, from an aesthetic point of view, the maps of these electronic densities of states rival the astrophysical images I talked about in my last blog entry. Indeed, I think that the pictures from the Colloquium yesterday were as good as any in a talk about galaxies. They definitely added to the presentation, helping to give an intuitive picture of what was going on. My personal favourite was an image that showed what one of my colleagues referred to as "the STM spin bombing the surface". As he said, most appropriate, given that Hamburg was hard hit by Allied bombs during WWII.

Joking aside, I thought it a very good Colloquium. Starting from STM basics, Prof. Wiesendanger showed why it is even more interesting to do spin-polarized STM. The patterns of anti-ferromagnetic strips imaged in the experiments of his group reveal details of the microscopic interactions that lead to the structures on these surfaces. But even more impressive was the ability to image single atoms on the surface. Indeed, Prof. Wiesendanger's group has a recent paper in Science where they obtain magnetization ("hysteresis") curves for these isolated atoms, and from those curves glean information on the spin-dependent interactions between the atom and a nearby magnetic "cliff".

Personally, I found it pretty wonderful to see elementary Quantum Mechanics in action (e.g. solutions for a particle in a triangular box being realized in some of these systems). But at another level, understanding the magnetic (i.e. spin-spin interaction) properties of these systems---and using STM tips to manipulate those spins---potentially opens up a variety of new technologies. Like Manasvita said, exploring fundamental science leads to exciting new technologies, and not necessarily in ways that you'd expect.
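
(If you want to play with the idea yourself, here is a little toy sketch of my own, nothing from the talk: the standing-wave probability densities for a particle in a 1D box. The triangular islands in the STM images are essentially a two-dimensional cousin of the same physics.)

    import numpy as np

    # Particle in a 1D infinite square well of width L:
    #   psi_n(x) = sqrt(2/L) * sin(n*pi*x/L),  n = 1, 2, 3, ...
    # |psi_n(x)|^2 is the standing-wave pattern an STM-style map would trace out.
    L = 1.0
    x = np.linspace(0.0, L, 400)

    def density(n, x, L=L):
        """Probability density |psi_n|^2 for the n-th level."""
        psi = np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)
        return psi ** 2

    for n in (1, 2, 3):
        rho = density(n, x)
        peaks = np.sum((rho[1:-1] > rho[:-2]) & (rho[1:-1] > rho[2:]))
        print(f"level n={n}: {peaks} peaks in |psi|^2")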

So, did you have a favorite picture? (You can see some of them again by clicking on the link above that takes you to the Wiesendanger group website in Hamburg.) Does seeing images like this make Quantum Mechanics more real for you?

Saturday, April 19, 2008

Musings on disciplinary culture


After yesterday's colloquium I found myself thinking not so much about the science, but about the issue of disciplinary culture. Here I refer to the distinction between Astronomy and Physics. At many institutions (including ours) these are in the same Department. But at many others (e.g. Ohio State for one) they exist separately.

Having heard quite a few Astronomy talks over the years I have reached the following (tentative) conclusions. I realize that generalization from a small sample is dangerous (maybe I even learnt that from astronomers?) but I'm gonna do it anyway. I'll be interested to hear others' comments on these issues:

(1) Astronomers know how to make their talks look good. Yesterday's talk looked fabulous. And indeed it was astrophysics talks (actually cosmology talks) 10 years ago that made me realize I needed to raise my own game in Colloquia and make talks that were funnier, more broadly appealing, and better looking. That meant I had to throw out my equation-covered transparencies of the time....

But this also has to do with how data is represented. Astronomy data sets can be pretty complex (and seem to be becoming more so). My impression is that astronomers spend quite a bit of time thinking about how to make their data visually interesting. (Think about all those false colour images from telescopes.) Maybe those of us in other fields could learn from that, and do better on presenting our complex data sets in ways that aid understanding through visualization.

(2) Astronomers can be pretty anthropomorphic. (That's the $10 word for talking about inanimate objects, or even animals, as if they were human.) Yesterday we had Andromeda (who was referred to multiple times as "she") "eating her last meal a billion years ago". Still, anthropomorphism is not limited to astronomers. How many times in my 253 lectures did I say "The charge wants to move to the left because it can feel the electric field"? Encouraging students to sympathize with the physical entities in your problem can be an effective (if rather deceptive!) teaching strategy, I think. "If you were a quantum particle in this box, what would you do?"

(3) Language is tricky. Earlier this week we had the discussion about "Neutral iron" in our joint astro-nuclear seminar. It turned out that, to astronomers---or at least to that particular speaker---"neutral iron" means anything with an ionization state less than Fe5+. Then there are "metals", which are not really metals at all, and "baryons", which are anything that isn't dark matter. And so on and so on. (Someone may need to correct my limited knowledge of astronomy lingo, but you get the idea.) Indeed, this gap in language can be VERY confusing, and having someone who can translate across the divide (which, I must say, our astronomers do a pretty good job of) is crucial to communication.

(4) Astronomy talks tend to be more qualitative. Don't you often get the sense they are saying: "Well, we did this simulation, and it kind-of looks like this data, so we think we're onto something". Sometimes I feel like I am listening to a biology talk. This seems to go with the astronomy culture of "If I get agreement between theory and data to within an order of magnitude I'm happy". Which is not a wrong way to approach the problem, because astrophysical systems (like biological ones) are very complicated, with lots of stuff going on and quantities varying over several decades. So if you can get it to within an order of magnitude you are probably indeed pretty happy. But it's quite different to what I'm used to in my sub-field, where if you only get the order of magnitude right that's sucky agreement.

Monday, February 11, 2008

The last unknown angle in neutrino mixing?

Kilometre-long tunnels in the ground, six nuclear reactors, 200 scientists, giant acrylic vessels containing 20 tonnes of liquid scintillator. It sounds like the plot of a James Bond movie, but actually it is the Daya Bay experiment.

This experiment aims to measure the neutrino mixing angle theta_{13}. With two of the three mixing angles pretty well constrained by data on atmospheric and solar neutrino oscillations, this angle is the least well-known of all the parameters in the neutrino-mixing matrix. (Well, except for maybe one...but more on that later.) Daya Bay will either measure it, or---if it is very small---improve the present limit by a factor of 15. In order to do this one wants to optimize the distance so that electron-type neutrinos are at their "oscillation minimum", i.e. the maximum number of them have disappeared. The best distance turns out to be about 2 km. (You might want to ask why you don't want to go to the "next minimum" in the oscillation pattern, the one corresponding to 2 Pi, rather than Pi.) So what you need is a copious source of electron-type anti-neutrinos, and a big chunk of matter which will convert some of them back into positrons through inverse beta decay.
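
(If you want to see where that 2 km comes from, here is a back-of-the-envelope sketch. The numbers are my own rough choices, not the collaboration's: a two-flavour survival probability with the atmospheric mass-squared splitting and a typical reactor anti-neutrino energy of about 4 MeV.)

    import numpy as np

    # Two-flavour electron anti-neutrino survival probability (schematic):
    #   P(L) = 1 - sin^2(2*theta13) * sin^2(1.267 * dm2[eV^2] * L[km] / E[GeV])
    dm2 = 2.5e-3          # eV^2, roughly the "atmospheric" mass-squared splitting
    E = 4e-3              # GeV, i.e. a typical ~4 MeV reactor anti-neutrino
    sin2_2theta13 = 0.1   # placeholder value; measuring this is the whole point

    L = np.linspace(0.1, 5.0, 500)   # baseline in km
    P = 1.0 - sin2_2theta13 * np.sin(1.267 * dm2 * L / E) ** 2

    print("baseline of maximum disappearance: %.1f km" % L[np.argmin(P)])
    # prints roughly 2 km, which is why the "far" detectors sit at about that distance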

Now nuclear reactors are just such a source, because for each GW-hour a typical Uranium-based nuclear reactor produces about 10^{20} electron anti-neutrinos. So in the Daya Bay "far" detector they expect to see about 90 neutrino events per day. Which sounds like a lot until you realize that the "far detector" consists of four vessels, each containing 20 t of liquid scintillator. That scintillator ensures that when an anti-neutrino strikes a proton, converting it to a neutron and a positron, both the neutron and the positron can be detected, with a time gap between them that is an (almost) unique signature of the process.

Other experiments have tried this approach before, but one limiting factor has been knowledge of the neutrino flux from the reactors. Daya Bay will solve this problem by taking a ratio of the neutrino rate between "near" detectors and "far" detectors. That technique, combined with the large volume of the detector, and with the proximity to 17.4 GW of thermal power that is feeding China's growing energy needs, means that Daya Bay can "nail" theta_{13}. If, that is, they can ensure that what is going on at the near and far detectors is essentially identical.
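
(Schematically, the reason the ratio helps is that the poorly known reactor flux Phi cancels out. In the simplest picture the rate in each detector is

N ~ Phi x (number of target protons) x (efficiency) x P_survival(L) / L^2,

so the far/near ratio depends only on detector-related quantities, the known baselines, and the oscillation physics you are after; the flux Phi drops out. Of course that is exactly why the remaining detector-related differences, the systematics, become the whole ball game.)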

So this experiment becomes all about controlling systematics. Is the scintillator in the near and far detectors exactly the same? Do the PMTs in the near and the far detectors function in the same way? (This is where regular calibration of detectors becomes key.) And it becomes about removing backgrounds: especially in the "far" detector where the count rate is a factor of ten lower. So when Dr Liu spoke on this topic last Thursday he spent some time explaining the cuts they make in their data to remove background events, e.g. cosmic rays activating Lithium-9 that then beta decays to produce a coincidence signal very like that seen when one of the reactor anti-neutrinos interacts with the target. Obviously it is a very delicate business to remove _only_ those events, and none of the signal, and to do so consistently between the near and far detectors. But if those background events are not properly removed they will mess up the measurement of the ratio that is the goal of this experiment.

Which might prompt the question: why all this effort? The millions of dollars (or RMB) on digging tunnels, the millions of dollars more on detector development here in the US, the 200 people working hard to design the best experiment they can. Part of the answer is that theta_{13} is "the gateway to CP violation": the possibility that there is a complex phase in the neutrino-mixing matrix like the one in the quark CKM matrix. And CP violation is a hot topic right now in particle physics. But is that the reason the Chinese government is footing most of the bill for this experiment? And that the US Department of Energy is making a significant commitment to it too? While I like to think that the US and Chinese governments care about CP violation, I don't think they care that much.

Wednesday, February 6, 2008

Are we there yet?

Jefferson Lab was built largely to probe the "transition region": that domain of energies and momentum transfers where a description of nuclei in terms of protons and neutrons gives way to a description based on the fundamental QCD degrees of freedom of quarks and gluons.

At very large energies we know that the interactions of quarks and gluons can be computed in perturbation theory (pQCD: perturbative Quantum Chromodynamics). This allowed theorists to predict (already in the 1970s) how various quantities that JLab has now measured should behave in the limit of large energies/momentum transfer. The only problem is that no-one really knew how large was large. Some of these predictions have been borne out, e.g. those for the disintegration of a deuteron by a high-energy photon. But, for the most part, the "asymptotic region", where these predictions based on pQCD become correct, has proven elusive. JLab appears to be stuck in a transition region that covers more kinematic territory than many originally thought.

One observable for which pQCD makes a definite prediction is the "form factor" of a pion. This function encodes how different the pion's charge distribution is from the charge distribution of a point particle. (Yes, you guessed it, the charge distribution of a point particle is that all the charge is at a single point.) The form factor can be accessed via electron-pion scattering experiments. But it is very very difficult to build a target out of pions. (Yes, that was me exercising my gift for understatement.) So experimentalists have cleverly figured out how to get at the pion form factor in experiments where the pion is "electro-produced" in the interaction of a beam of electrons with a proton target. It turns out that protons (and neutrons for that matter too) fluctuate into a pion-nucleon state in ways that are governed by quantum mechanics (think Heisenberg Uncertainty Principle). So if you come in with an electron and hit the pion in that "virtual state" you can knock it away from the proton and detect it in a detector. Look at that: you scattered an electron from a pion target! Pion form factor here we come!

But the problem is that there are a bunch of other mechanisms by which pions can be produced when the electron-proton interaction takes place. So the experimentalists, such as Dr Gaskell who gave Monday's talk, need a model of these other processes in order to isolate the piece of the reaction they are interested in: the piece where the electron interacts with the pion in that pion-nucleon virtual state. With such a model in hand they can subtract off the other stuff and extract numbers for the pion form factor at a variety of momentum transfers where they have measured the pion electro-production process.

The results they get are intriguing, if vaguely disappointing. They show that the pion form factor is behaving with the power of momentum transfer predicted by pQCD (should go like 1/Q^2). But the pre-factor is off by a factor of a few. In astronomy a prediction of the right power law and a coefficient within an order of magnitude would be a success. However, in this case it has been the cause of some head-scratching by theorists, who have built a variety of different models to try and understand the additional processes (i.e. processes beyond those predicted by pQCD) that are taking place in the regime probed by the JLab experiments. Some of these models suggest that data on the pion form factor taken at an upgraded, 12 GeV, JLab will show the beginnings of an approach to the pQCD pion form factor. But at best it seems that 12 GeV JLab will provide only a glimpse of the promised land where the quarks and gluons inside the pion play together under the benevolent rule of pQCD. So let me ask the question: will all this effort and experimental ingenuity have been worth it if what we get out of this program are some very nice measurements of the pion's form factor in the "transition region", i.e. at Q^2's where pQCD does not apply?
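
(For the curious, here is roughly what that asymptotic pQCD prediction amounts to numerically. This is a sketch using the standard leading-order formula, with the f_pi ~ 93 MeV convention and a rough fixed value of alpha_s; the real analyses are more careful about the running coupling.)

    import math

    # Leading-order asymptotic pQCD prediction:  Q^2 * F_pi(Q^2) -> 16*pi*alpha_s*f_pi^2
    f_pi = 0.093      # GeV, pion decay constant (~93 MeV convention)
    alpha_s = 0.3     # rough value of the strong coupling at a few GeV^2

    Q2F = 16.0 * math.pi * alpha_s * f_pi ** 2
    for Q2 in (2.0, 4.0, 6.0):   # GeV^2
        print(f"Q^2 = {Q2} GeV^2: pQCD gives Q^2*F_pi ~ {Q2F:.2f} GeV^2, i.e. F_pi ~ {Q2F / Q2:.3f}")
    # The measured values of Q^2*F_pi in this regime differ from this by the
    # "factor of a few" mentioned above.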

Friday, February 1, 2008

Deep thoughts

How exactly do you measure a half life that is > 10^25 years? Very very carefully.
(a) You go deep underground, so as to stop Cosmic Rays masquerading as the decay products you're looking for in your detector. Try 8000' under rock in South Dakota. That oughta do it.
(b) You use ultra-high-purity materials: materials with almost no radioactivity of their own, to stop those decay products from looking like the decay you're looking for. Try materials that are literally a billion times freer of Uranium and Thorium than the dust in our offices. That might be good enough.
(c) Try collecting a tonne of germanium. Then you might have enough nuclei in there that a few will decay in the way you're interested in. And for good measure, you can use the germanium you've collected as a solid-state detector too. Fortunately germanium will work well in that regard. If you can get enough. And pay enough to the Russians to purify it for you.
(d) Wait several years. At least you know that the longer you wait the better the lower bound you'll set on neutrinoless double-beta decay.

So if you do all that you might, just might, see neutrinoless double-beta decay. (Something to think about: how does your chance of seeing NDBD depend on the different things you do in (a)-(d)?)

But neutrinoless double-beta decay---if seen---would be a revelation. It would tell us that neutrinos are Majorana particles: they are their own anti-particles. Personally I think that'd be worth a Nobel prize.

And the rate of neutrinoless double-beta decay is proportional to the square of a linear combination of neutrino masses. So if we see it we learn something concrete regarding the absolute mass scale of the neutrinos: a quantity for which we at present only have upper bounds. (From cosmology and from studies of the end point in tritium beta decay.) Indeed, even if the next generation of neutrinoless double-beta decay experiments sees NOTHING that will still pretty much rule out the "inverted hierarchy" of neutrino masses. It's good when seeing nothing still gets you something.
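
(Schematically, and sweeping all the nuclear physics into one symbol, the relation is

1/T_{1/2} = G x |M_{nucl}|^2 x (m_{bb}/m_e)^2, with m_{bb} = |sum_i U_{ei}^2 m_i|,

where G is a calculable phase-space factor, M_{nucl} is the hard-to-calculate nuclear matrix element, and the U_{ei} are elements of the neutrino-mixing matrix. That m_{bb} is the "linear combination of neutrino masses" in question, which is why a measured half-life, or even a limit on one, translates into information about the absolute mass scale.)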

But to get anything in terms of physics results will take a long time. In the meantime we learn that everything is radioactive. Dust, plastic, air. Everything.

You know, one experiment of this type---one based in Europe---had a lot of background events coming from the surface of their crystals. The lines indicated Polonium-210 contamination on the surfaces. I heard today that people think that this background comes about because the people handling the crystals are smokers, and cigarettes are full of polonium. So smoking really is bad for your health: you get lots of radioactivity on you as a result.

But when a neutrinoless double-beta decay guy says "lots of radioactivity" he probably means 1 decay per kilogram per 20 days or something like that. Because these guys are the ultimate purists. You have to be when you want to find something that only happens once a year in one nucleus out of all the nuclei in a couple of hundred kilograms of stuff. Like I said, delayed gratification. Big time.

Wednesday, January 30, 2008

A nu probe of strongly interacting systems

Since they interact with matter only via the weak force, neutrinos provide a probe of the internal structure of nucleons and nuclei that is complementary to the electromagnetic probes used at, say, JLab. The Minerva experiment will attempt to exploit this complementarity so that we learn more about both neutrino interactions with nuclei and the internal structure of neutrons and protons.

Minerva is in fact a detector that will see events generated "parasitically" from the beam being used for the MINOS experiment. In MINOS (which we had a colloquium on last year) neutrinos are shot from Fermilab to Minnesota in an effort to measure theta_{13}---a key undetermined parameter in our description of neutrino mixing. You need a lot of neutrinos to do that measurement, and a small fraction of these neutrinos can scatter off the iron, carbon, etc. in the Minerva detector. The fraction that do this is small enough that it will not affect MINOS' ability to do its work, hence the term "parasitic". So Minerva hopes to provide us---essentially for free, OK, for about $10M---with (a) more information on neutrino-nucleus cross sections and (b) data on neutrinos scattering from nucleons. (a) actually means that Minerva's relationship to MINOS is symbiotic, not parasitic, since uncertainties in these very neutrino-nucleus cross sections limit MINOS' ability to model their detectors. And lack of understanding of how the neutrinos interact with MINOS' detectors is in turn a significant systematic error in the limit that MINOS places on theta_{13}.

As for (b), at high energies and momentum transfers we think that the neutrinos scatter almost all the time from individual neutrons and protons inside the nuclei. (Why?) Consequently we can look at events where a neutrino converts a neutron in the nucleus into a muon and a proton. Dr Schulte explained how Minerva could use such events to infer the behavior of the "nucleon axial form factor" with momentum transfer (Q^2). This quantity tells us something about how "axial charge" is distributed inside neutrons and protons. Since we now know (thanks to JLab) that the electric and magnetic form factors of the proton fall off differently at Q^2s above 1 GeV^2 it would be interesting to know which one the axial form factor is like---or if it is different from both G_E and G_M.
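
(For reference, the axial form factor is usually parametrized, at least as a first guess, by a dipole, just like the textbook electric form factor. Here is a minimal sketch with commonly quoted parameter values; I am not claiming this is what Minerva will find, testing the shape is the point.)

    def G_A(Q2, g_A=1.27, M_A=1.0):
        """Dipole ansatz for the nucleon axial form factor.
        Q2 in GeV^2; g_A is the axial coupling, M_A the 'axial mass' in GeV."""
        return g_A / (1.0 + Q2 / M_A ** 2) ** 2

    for Q2 in (0.0, 0.5, 1.0, 2.0):
        print(f"Q^2 = {Q2} GeV^2: dipole G_A ~ {G_A(Q2):.3f}")
    # The interesting question is whether the measured fall-off really follows this
    # dipole shape at larger Q^2, or departs from it the way G_E/G_M did at JLab.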

Minerva will not deliver data for a few years, which is partly because it is still under development, and partly because neutrino experiments take a while. They don't call them weak interactions for nothing. Even when you have several tons of material it takes a long time to collect enough events to measure these cross sections. Neutrino physicists must be good at delayed gratification.

Which leads me to ask: would you be prepared to put up with waiting this long for your data? If you thought it was important? Can you think of other reasons why we might want to know about neutrino interactions with nucleons and nuclei at high energies?

Thursday, January 24, 2008

When you're a jet...

you may not be a jet all the way. At least not at the Relativistic Heavy-Ion Collider. "Jets" in this instance refers not to the local NFL team, but instead to streams of hadrons that are formed in the high-energy collisions that take place inside this machine. At high energies the formation of these jets is a prediction of perturbative Quantum Chromodynamics (QCD). Last Thursday's speaker was using the modifications in the patterns of formation of these jets at high temperatures and densities to try and learn about QCD in these extreme conditions. In particular "away-side jet suppression", where jets do not appear as regularly in collisions of gold nuclei as they do in collisions of protons, suggests that the smashing together of the roughly 400 protons and neutrons in two gold nuclei heats up the QCD vacuum a lot more than a proton-proton collision does. Yes, I know this is not a surprise, but the point is that the results are so different that there seems to be a phase transition to a "new state of matter": the so-called quark-gluon plasma, which some have dubbed "the perfect liquid" because of its low viscosity.
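
(If you want a number to hang the word "suppression" on, one standard way to quantify it, though not necessarily the exact observable used in the correlation analyses discussed in the talk, is the nuclear modification factor

R_AA(p_T) = [yield of high-p_T particles in Au+Au] / [<N_coll> x yield in p+p],

where <N_coll> is the expected number of independent nucleon-nucleon collisions. If a gold-gold collision were just a pile of independent proton-proton collisions R_AA would be 1; the measured values for high-p_T hadrons at RHIC come out well below that.)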

I found it interesting to hear Dr Frantz say that predictions from pQCD do a good job of describing much of the behaviour of the jets in the proton-proton collisions. But when it comes to the gold-gold collisions, the hot "QCD plasma" (or, more conservatively, "QCD fluid") that we think gets formed in the collision buffets the jets, especially those that have to traverse a lot of the plasma. Those jets lose a lot of energy to the plasma, and so we tend not to detect them as often. I liked the way that Dr Frantz could test this hypothesis by looking at how photons escape from this plasma. After all, if the jet is interacting with, i.e. being accelerated by, the stuff in the plasma, then it should spit off photons too. But once a photon has been spat off, it will tend to get out of the plasma, since it does not carry "QCD charge". So photons, once emitted, can be photons all the way to the detector. And the data Dr Frantz and his colleagues collected bore out this hypothesis.

The future of this research would seem to be at the Large Hadron Collider, which has a heavy-ion program lined up for a period a few years from now when they hope to be done with discovering the Higgs boson, supersymmetry, and extra dimensions. (Yes, I am joking, although I am sure they hope to find all that.) Because the energies at the LHC will be much higher than those at RHIC they can test whether the jets continue to be suppressed in higher-energy collisions. Because QCD is an asymptotically free theory one would expect that the interactions of higher-energy jets with the plasma will be smaller than those of the 4-10 GeV jets they've looked at so far at RHIC. But this is asymptotic freedom, so it may take a while to get there. I guess once the LHC does this experiment we'll have more idea of what energy we have to go to before jets really are jets all the way.
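
(A rough illustration of why "it may take a while": at one loop the strong coupling only falls off logarithmically with the scale. The numbers below are my own ballpark choices for Lambda_QCD and the number of quark flavours, just to show the trend.)

    import math

    def alpha_s(Q, Lam=0.2, n_f=5):
        """One-loop running of the strong coupling; Q and Lam in GeV."""
        return 12.0 * math.pi / ((33.0 - 2.0 * n_f) * math.log(Q ** 2 / Lam ** 2))

    for Q in (5.0, 10.0, 50.0, 100.0):
        print(f"Q = {Q:5.1f} GeV: alpha_s ~ {alpha_s(Q):.2f}")
    # Even a factor of ten or twenty in jet energy only roughly halves the coupling,
    # so higher-energy jets will still talk to the plasma quite a bit.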

Wednesday, January 23, 2008

Weak is the new strong

In the family of fundamental forces weak interactions are the child we spend time apologizing for because their gifts are not immediately obvious. Electromagnetism gets to bind atoms and be responsible for all of condensed-matter physics. The strong force keeps nuclei together, and governs the interactions that make stars shine. And gravity binds us to the Earth and keeps the celestial spheres moving in their orderly (or not-so-orderly) paths.

But the weak force we usually pass over with a few words about nuclear beta decay, neutrinos, and so forth. We know they are a little peculiar (ooh look, how cute, parity violation) but when pressed, we may have trouble explaining what their purpose in life is. Yesterday's Colloquium provided a nice corrective to this view by explaining how weak interactions are providing a unique window on aspects of proton structure and Standard-Model physics.

These results come out of the parity-violation program at Jefferson Lab which was initiated by a paper written 20 years ago that proposed looking at electrons that scatter from the proton via the weak force (specifically via the exchange of a Z0-boson). The trick was that these electrons would couple to the quarks inside the proton differently than would electrons that scattered from the proton via the exchange of photons. Photon exchange is the usual (i.e. dominant) process by which protons and electrons interact---it mediates the electromagnetic force. But it turns out that one can combine information from photon-exchange and Z0-exchange experiments to infer the distribution of strange quarks inside the proton---something that would be impossible with only electromagnetic-scattering data.

The trick is distinguishing the electrons that scatter via the weak force (Z0-exchange let's say) from those that scatter in the "regular" way (photon exchange). Here the fact that the weak interaction violates parity comes to our aid, since we can eliminate the dominant contribution from the "regular" electrons by constructing the difference of cross sections for electrons with opposite spins. The resulting "asymmetry" has recently been measured at a variety of kinematic points by the G0 collaboration. As a consequence we have important new information on the distribution of strange quarks inside the proton. So we now know for sure (well, at the 68% confidence level) that a proton is not two up quarks and a down quark, like you have been taught. Instead it has a bunch of other stuff in it too, of which the non-zero results measured by G0 are just one manifestation.
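
(In case you are wondering how big an effect this is: the measured quantity is, schematically and leaving out the proton-structure details,

A_PV = (sigma_R - sigma_L)/(sigma_R + sigma_L) ~ [G_F Q^2 / (4 sqrt(2) pi alpha)] x (form-factor combinations),

and the prefactor G_F Q^2 / (4 sqrt(2) pi alpha) is of order 10^{-5} for Q^2 around 0.1 GeV^2. In other words, parts per million, which is why these are such heroic measurements.)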

But the parity-violating electron scattering people did not stop there. They realized that their new probe gave them access to the "weak charge" of the proton. In just the same way as the charge of the proton determines the scattering of low-energy electrons from it, so too the "weak charge" of the proton determines the low-momentum-transfer result for the parity-violating asymmetry measured by G0 and other groups. (The information on proton structure that these experiments were actually after was obtained at values of the momentum transfer that are not "low"---where details of the proton's weak charge distribution become important.) A new experiment at JLab aims to do the world's best measurement of the proton's weak charge. And since this number---Qweak---is predicted by the Standard Model, they can test if that prediction is correct and perhaps see contributions of beyond-the-standard-model stuff to the proton's weak charge.
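
(Part of what makes Qweak such a sharp test is an accident of the Standard Model: at tree level the proton's weak charge is

Q_W^p = 1 - 4 sin^2(theta_W) ~ 0.07,

i.e. it is almost zero because sin^2(theta_W) happens to lie close to 1/4. A small number is easy to perturb, so even a modest new-physics contribution would show up as a relatively large fractional shift. The actual Standard-Model prediction includes radiative corrections on top of this tree-level estimate.)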

Personally I find it amazing that differences in cross sections at the level of 1 ppm are being measured with a few-per-cent-level accuracy. But perhaps more interesting is that the weak force is coming into its own, not just as a peculiarity but as a tool. Since the mass-scale of the weak force is set by the gauge bosons and so is of order 100 GeV, these experiments use 100-GeV-scale physics to learn about proton structure. And ultimately the (we hope) closeness of that 100 GeV scale to the mass of particles that are beyond the standard model is what gives experiments like Qweak the chance to upset the four-force picture of interactions we all grew up hearing about. That would truly be the revenge of the under-appreciated child.