Wednesday, May 16, 2018

"Active learning" or "research-based teaching" in upper level courses

This past spring Carl Wieman came to Rice's Center for Teaching Excellence to give us this talk about improving science pedagogy.  (This video shows a very similar talk given at UC Riverside.)  He is very passionate about this, and argues strongly that making teaching more of an active, inquiry-based or research-question-based experience is generally a big improvement over traditional lecture.  I've written previously that I think this is a complicated issue.

Does anyone in my readership have experience applying this approach to upper-level courses?  For a specific question relevant to my own teaching, have any of you taught or taken a statistical physics course presented in this mode?  I gather that PHYS 403 at UBC and PHYS 170 at Stanford have been done this way.  I'd be interested in learning about how that was implemented and how it worked - please feel free to post in comments or email me.

(Now that the semester is over and some of my reviewing responsibilities are more under control, the frequency of posting should go back up.)

Wednesday, May 02, 2018

Short items

A couple of points of interest:
  • Bill Gates apparently turned down an offer from the Trump administration to be presidential science advisor.  It's unclear if this was a serious offer or an off-hand remark.   Either way it underscores what a trivialized and minimal role OSTP appears to be playing in the present administration.  It's a fact of modern existence that there are many intersections between public policy and the need for technical understanding of scientific issues (in the broad sense that includes engineering).   While an engaged and highly functional OSTP doesn't guarantee good policy (because science is only one of many factors that drive decision-making), the US is doing itself a disservice by running a skeleton crew in that office.  
  • Phil Anderson has posted a document (not a paper submitted for publication anywhere, but more of an essay) on the arxiv with the sombre title, "Four last conjectures".  These concern: (1) the true ground state of solids made of atoms that are hard-core bosons, suggesting that at sufficiently low temperatures one could have "non-classical rotational inertia" - not exactly a supersolid, but similar in spirit; (2) a discussion of a liquid phase of (magnetic) vortices in superconductors in the context of heat transport; (3) an exposition of his take on high temperature superconductivity (the "hidden Fermi liquid"), where one can have non-Fermi-liquid scattering rates for longitudinal resistivity, yet Fermi liquid-like scattering rates for scattering in the Hall effect; and (4) a speculation about an alternative explanation (that, in my view, seems ill-conceived) for the accelerating expansion of the universe.   The document is vintage Anderson, and there's a melancholy subtext given that he's 94 years old and is clearly conscious that he likely won't be with us much longer.
  • On a lighter note, a paper (link goes to publicly accessible version) came out a couple of weeks ago explaining how yarn works - that is, how the frictional interactions between a zillion constituent short fibers lead to thread acting like a mechanically robust object.  Here is a nice write-up.

Sunday, April 29, 2018

What is a quantum point contact? What is quantized conductance?

When we teach basic electrical phenomena to high school or college physics students, we usually talk about Ohm's Law, in the form \(V = I R\), where \(V\) is the voltage (how much effort it takes to push charge, in some sense), \(I\) is the current (the flow rate of the charge), and \(R\) is the resistance.  This simple linear relationship is a good first guess about how you might expect conduction to work.  Often we know the voltage and want to find the current, so we write \(I = V/R\), and the conductance is defined as \(G \equiv 1/R\), so \(I = G V\). 

In a liquid flow analogy, voltage is like the net pressure across some pipe, current is like the flow rate of liquid through the pipe, and the conductance characterizes how the pipe limits the flow of liquid.  For a given pressure difference between the ends of the pipe, there are two ways to lower the flow rate of the liquid:  make the pipe longer, and make the pipe narrower.  The same idea applies to electrical conductance of some given material - making the material longer or narrower lowers \(G\) (increases \(R\)).   
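As a minimal numerical sketch of that geometric scaling (the conductivity value is just a representative number for copper, not something from the post):

```python
# Classical, Ohmic conductance of a uniform conductor: G = sigma * A / L.
def conductance(sigma, area, length):
    """sigma in S/m, area in m^2, length in m; returns G in siemens."""
    return sigma * area / length

sigma_cu = 5.96e7  # S/m, room-temperature conductivity of copper (assumed value)
G_base = conductance(sigma_cu, area=1e-6, length=1.0)
G_longer = conductance(sigma_cu, area=1e-6, length=2.0)      # doubling length halves G
G_narrower = conductance(sigma_cu, area=0.5e-6, length=1.0)  # halving area halves G
```

Doubling the length or halving the cross-section each cut the conductance in half, exactly like the pipe analogy.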

Does anything special happen when the conductance becomes small?  What does "small" mean here - small compared to what?  (Physicists love dimensionless ratios, where you compare some quantity of interest with some characteristic scale - see here and here.  I thought I'd written a long post about this before, but according to google I haven't; something to do in the future.)  It turns out that there is a combination of fundamental constants that has the same units as conductance:  \(e^2/h\), where \(e\) is the electronic charge and \(h\) is Planck's constant.  Interestingly, evaluating this numerically gives a characteristic conductance of about 1/(26 k\(\Omega\)).   The fact that \(h\) is in there tells you that this conductance scale is important if quantum effects are relevant to your system (not when you're in the classical limit of, say, a macroscopic, long spool of wire that happens to have \(R \sim 26~\mathrm{k}\Omega\)).
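A quick numerical check of that characteristic scale, using the SI values of the constants:

```python
# The conductance scale e^2/h and the corresponding ~26 kOhm resistance scale.
e = 1.602176634e-19   # C, elementary charge
h = 6.62607015e-34    # J*s, Planck constant

G_scale = e**2 / h    # ~3.87e-5 S
R_scale = 1 / G_scale # ~25.8 kOhm -- the "about 26 kOhm" scale
G0 = 2 * e**2 / h     # conductance quantum including spin degeneracy
print(R_scale)        # ~25813 ohms
print(1 / G0)         # ~12906 ohms per spin-degenerate channel
```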
Example of a quantum point contact, from here.

Conductance quantization can happen when you make the conductance approach this characteristic magnitude by having the conductor be very narrow, comparable to the spatial spread of the quantum mechanical electrons.  We know electrons are really quantum objects, described by wavefunctions, and those wavefunctions can have some characteristic spatial scale depending on the electronic energy and how tightly the electron is confined.  You can then think of the connection between the two conductors like a waveguide, so that only a handful of electronic "modes" or "channels" (compatible with the confinement of the electrons and what the wavefunctions are required to do) actually link the two conductors.  (See figure.) Each spatial electronic mode that connects between the two sides has a conductance of \(G_{0} \equiv 2e^{2}/h\), where the 2 comes from the two possible spin states of the electron.  

Conductance quantization in a 2d electron system,
from here.
A junction like this in a semiconductor system is called a quantum point contact.  In semiconductor devices you can use gate electrodes to confine the electrons, and when the constriction reaches the appropriate spatial scale you can see steps in the conductance near integer multiples of \(G_{0}\), the conductance quantum.  A famous example of this is shown in the figure here.

In metals, because the density of (mobile) electrons is very high, the effective wavelength of the electrons is much shorter, comparable to the size of an atom, a fraction of a nanometer.  This means that constrictions between pieces of metal have to reach the atomic scale to see anything like conductance quantization.  This is, indeed, observed.

For a very readable review of all of this, see this Physics Today article by two of the experimental progenitors of this.  Quantized conductance shows up in other situations when only a countable number of electronic states are actually doing the job of carrying current (like along the edges of systems in the quantum Hall regime, or along the edges of 2d topological materials, or in carbon nanotubes).   

Note 1:  It's really the "confinement so that only a few allowed waves can pass" that gives the quantization here.  That means that other confined wave systems can show the analog of this quantization.  This is explained in the PT article above, and an example is conduction of heat due to phonons.

Note 2:  What about when \(G\) becomes comparable to \(G_{0}\) in a long, but quantum mechanically coherent system?  That's a story for another time, and gets into the whole scaling theory of localization.  

Wednesday, April 25, 2018

Postdoc opportunity

While I have already spammed a number of faculty colleagues about this, I wanted to point out a competitive, endowed postdoctoral opportunity at Rice, made possible through the Smalley-Curl Institute.  (I am interested in hiring a postdoc in general, but the endowed opportunity is a nice one to pursue as well.)

The endowed program is the J Evans Attwell Welch Postdoctoral Fellowship.  This is a competitive, two-year fellowship, and it additionally includes travel funds and research supplies/minor equipment resources.  The deadline for applications is this coming July 1, 2018, with an anticipated start date around September 2018.

I'd be delighted to work with someone on an application for this, and I am looking for a great postdoc in any case.  The best applicant would be a strong student who is interested in working on (i) noise and transport measurements in spin-orbit systems including 2d TIs; (ii) nanoscale studies (incl noise and transport) of correlated materials and non-Fermi liquids; and/or (iii) combined electronic and optical studies down to the molecular scale via plasmonic structures.  If you're a student finishing up and are interested, please contact me, and if you're a faculty member working with possible candidates, please feel free to point out this opportunity.

Saturday, April 21, 2018

The Einstein-de Haas effect

Angular momentum in classical physics is a well-defined quantity tied to the motion of mass about some axis - its value (magnitude and direction) depends on a particular choice of coordinates.  When we think about some extended object spinning around an axis with some angular velocity \(\mathbf{\omega}\), we can define the angular momentum associated with that rotation by \(\mathbf{I}\cdot \mathbf{\omega}\), where \(\mathbf{I}\) is the "inertia tensor" that keeps track of how mass is distributed in space around the axis.  In general, conservation of angular momentum in isolated systems is a consequence of the rotational symmetry of the laws of physics (Noether's theorem).

The idea of quantum particles possessing some kind of intrinsic angular momentum is a pretty weird one, but it turns out to be necessary to understand a huge amount of physics.  That intrinsic angular momentum is called "spin", but it's *not* correct to think of it as resulting from the particle being an extended physical object actually spinning.  As I learned from reading The Story of Spin (cool book by Tomonaga, though I found it a bit impenetrable toward the end - more on that below), Kronig first suggested that electrons might have intrinsic angular momentum and used the intuitive idea of spinning to describe it; Pauli pushed back very hard on Kronig about the idea that there could be some physical rotational motion involved - the intrinsic angular momentum is some constant on the order of \(\hbar\).  If it were the usual mechanical motion, dimensionally this would have to go something like \(m r v\), where \(m\) is the mass, \(r\) is the size of the particle, and \(v\) is a speed; as \(r\) gets small, even at scales we know to be much larger than any intrinsic size of the electron, \(v\) would have to exceed \(c\), the speed of light.  Pauli pounded on Kronig hard enough that Kronig didn't publish his ideas, and later that year Goudsmit and Uhlenbeck independently proposed intrinsic angular momentum, calling it "spin".
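Pauli's dimensional argument is easy to check numerically; the radius below is the classical electron radius, used only as a generous illustrative stand-in, since the electron is known to be pointlike down to far smaller scales:

```python
# If the electron's hbar-scale spin angular momentum were mechanical, L ~ m*r*v
# would require a surface speed v ~ hbar / (m * r).
hbar = 1.054571817e-34   # J*s, reduced Planck constant
m_e = 9.1093837015e-31   # kg, electron mass
c = 2.99792458e8         # m/s, speed of light

r = 2.8e-15              # m, classical electron radius (illustrative choice)
v = hbar / (m_e * r)
print(v / c)  # ~138 -- the required speed is over a hundred times c
```

Shrinking \(r\) further only makes the contradiction worse, which was Pauli's point.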

Because of its weird intrinsic nature, when we teach undergrads about spin, we often don't emphasize that it is just as much angular momentum as the classical mechanical kind.  If you somehow do something to the spins in a system, that can have mechanical consequences.  I've written about one example before, a thought experiment described by Feynman and approximately implemented in micromechanical devices.  A related concept is the Einstein-de Haas effect, where flipping spins again exerts some kind of mechanical torque.  A new preprint on the arxiv shows a cool implementation of this, using ultrafast laser pulses to demagnetize a ferromagnetic material.  The sudden change of the spin angular momentum of the electrons results, through coupling to the atoms, in the launching of a mechanical shear wave as the angular momentum is dumped into the lattice.   The wave is then detected by time-resolved x-ray measurements.  Pretty cool!

(The part of Tomonaga's book that was hard for me to appreciate deals with the spin-statistics theorem, the quantum field theory statement that fermions have spins that are half-integer multiples of \(\hbar\) while bosons have spins that are integer multiples.  There is a claim that even Feynman could not come up with a good undergrad-level explanation of the argument.  Have any of my readers ever come across a clear, accessible hand-wave proof of the spin-statistics theorem?)

Tuesday, April 10, 2018

Chapman Lecture: Using Topology to Build a Better Qubit

Yesterday, we hosted Prof. Charlie Marcus of the Niels Bohr Institute and Microsoft for our annual Chapman Lecture on Nanotechnology.   He gave a very fun, engaging talk about the story of Majorana fermions as a possible platform for topological quantum computing. 

Charlie used quipu to introduce the idea of topology as a way to store information, and made a very nice heuristic argument about how topology encodes information in a global rather than a local sense.  That is, if you have a big, loose tangle of string on the ground, and you do local measurements of little bits of the string, you really can't tell whether it's actually tied in a knot (topologically nontrivial) or just lying in a heap.  This hints at the idea that local interactions (measurements, perturbations) can't necessarily disrupt the topological state of a quantum system.

The talk was given a bit of a historical narrative flow, pointing out that while there had been a lot of breathless prose written about the long search for Majoranas, etc., in fact the timeline was actually rather compressed.  In 2001, Alexei Kitaev proposed a possible way of creating effective Majorana fermions, particles that encode topological information, using semiconductor nanowires coupled to a (non-existent) p-wave superconductor.   In this scheme, Majorana quasiparticles localize at the ends of the wire.  You can get some feel for the concept by imagining string leading off from the ends of the wire, say downward through the substrate and off into space.  If you could sweep the Majoranas around each other somehow, the history of that wrapping would be encoded in the braiding of the strings, and even if the quasiparticles end up back where they started, there is a difference in the braiding depending on the history of the motion of the quasiparticles.   Theorists got very excited about the braiding concept and published lots of ideas, including how one might do quantum computing operations by this kind of braiding.

In 2010, other theorists pointed out that it should be possible to implement the Majoranas in much more accessible materials - InAs semiconductor nanowires and conventional s-wave superconductors, for example.  One experimental feature that could be sought would be a peak in the conductance of a superconductor/nanowire/superconductor device, right at zero voltage, that should turn on above a threshold magnetic field (in the plane of the wire).  That's really what jumpstarted the experimental action.  Fast forward a couple of years, and you have a paper that got a ton of attention, reporting the appearance of such a peak.  I pointed out at the time that that peak alone is not proof, but it's suggestive.  You have to be very careful, though, because other physics can mimic some aspects of the expected Majorana signature in the data.

A big advance was the recent success in growing epitaxial Al on the InAs wires.  Having atomically precise lattice registry between the semiconductor and the aluminum appears to improve the contacts significantly.   Note that this can be done in 2d as well, opening up the possibility of many investigations into proximity-induced superconductivity in gate-able semiconductor devices.  This has enabled some borrowing of techniques from other quantum computing approaches (transmons).

The main take-aways from the talk:

  • Experimental progress has actually been quite rapid, once a realistic material system was identified.
  • While many things point to these platforms as really having Majorana quasiparticles, the true unambiguous proof in the form of some demonstration of non-Abelian statistics hasn't happened yet.  Getting close.
  • Like many solid-state endeavors before, the true enabling advances here have come from high quality materials growth.
  • If this does work, scale-up may actually be do-able, since this does rely on planar semiconductor fabrication for the most part, and topological qubits may have a better ratio of physical qubits to logical qubits than other approaches.
  • Charlie Marcus remains an energetic, engaging speaker, something I first learned when I worked as the TA for the class he was teaching 24 years ago. 

Thursday, March 29, 2018

E-beam evaporators - recommendations?

Condensed matter experimentalists often need to prepare nanoscale thickness films of a variety of materials.  One approach is to use "physical vapor deposition" - in a good vacuum, a material of interest is heated to the point where it has some nonzero vapor pressure, and that vapor collides with a substrate of interest and sticks, building up the film.  One way to heat source material is with a high voltage electron beam, the kind of thing that used to be used at lower intensities to excite the phosphors on old-style cathode ray tube displays.  

My Edwards Auto306 4-pocket e-beam system is really starting to show its age.  It's been a great workhorse for quick things that don't require the cleanroom.  Does anyone out there have recommendations for a system (as inexpensive as possible of course) with similar capabilities, or a vendor you like for such things?  

Wednesday, March 28, 2018

Discussions of quantum mechanics

In a sure sign that I'm getting old, I find myself tempted to read some of the many articles, books, and discussions about interpretations of quantum mechanics that seem to be flaring up in number these days.  (Older physicists seem to return to this topic, I think because there tends to be a lingering feeling of dissatisfaction with just about every way of thinking about the issue.)

To be clear, the reason people refer to interpretations of quantum mechanics is that, in general, there is no disagreement about the results of well-defined calculations, and no observed disagreement between such calculations and experiments.   

There are deep ontological questions here about what physicists mean by something (say the wavefunction) being "real".  There are also fascinating history-of-science stories that capture the imagination, with characters like Einstein criticizing Bohr about whether God plays dice, Schroedinger and his cat, Wigner and his friend, Hugh Everett and his many worlds, etc.  Three of the central physics questions are:
  • Quantum systems can be in superpositions.  We don't see macroscopic quantum superpositions, even though "measuring" devices should also be described using quantum mechanics.  Is there some kind of physical process at work, not described by the ordinary Schroedinger equation, that collapses superpositions?
  • What picks out the classical states that we see?  
  • Is the Born rule a consequence of some underlying principle, or is that just the way things are?
Unfortunately, real life is very busy right now, but I wanted to collect some recent links and some relevant papers in one place, if people are interested.

From Peter Woit's blog, I gleaned these links:
Going down the google scholar rabbit hole, I also found these:
  • This paper has a clean explication of the question of whether decoherence due to interactions with large numbers of degrees of freedom really solves the outstanding issues.
  • This is a great review by Zurek about decoherence.
  • This is a subsequent review looking at these issues.
  • And this is a review of "collapse theories", attempts to modify quantum mechanics beyond Schroedinger time evolution to kill superpositions.
No time to read all of these, unfortunately.

Wednesday, March 14, 2018

Stephen Hawking, science communicator

An enormous amount has already been written by both journalists and scientists (here too) on the passing of Stephen Hawking.  Clearly he was an incredibly influential physicist with powerful scientific ideas.  Perhaps more important in the broad scheme of things, he was a gifted communicator who spread a fascination with science to an enormous audience, through his books and through the careful, clever use of his celebrity (as here, here, here, and here).   

While his illness clearly cost him dearly in many ways, I don't think it's too speculative to argue that it was a contributor to his success as a popularizer of science.  Not only was he a clear, expository writer with a gift for conveying a sense of the beauty of some deep ideas, but he was in some ways a larger-than-life heroic character - struck down physically in the prime of life, but able to pursue exotic, foundational ideas through the sheer force of his intellect.   Despite taking on some almost mythic qualities in the eyes of the public, he also conveyed that science is a human endeavor, pursued by complicated, interesting people (willing to do things like place bets on science, or even reconsider their preconceived ideas).

Hawking showed that both science and scientists can be inspiring to a broad audience.  It is rare that top scientists are able to do that, through a combination of their skill as communicators and their personalities.  In physics, besides Hawking, the ones that best spring to mind are Feynman (anyone who can win a Nobel and also have their anecdotes described as the Adventures of a Curious Character is worth reading!) and Einstein.

Sometimes there's a bias that gifted science communicators who care about public outreach are self-aggrandizing publicity hounds and not necessarily serious intellects (not that the two have to be mutually exclusive).  The outpouring of public sympathy on the occasion of Hawking's passing shows how deep an impact he had on so many.  Informing and inspiring people is a great legacy, and hopefully more scientists will be successful on that path thanks to Hawking.   

Wednesday, March 07, 2018

APS March Meeting, day 3 and summary thoughts

Besides the graphene bilayer excitement, three other highlights from today:

David Cobden of the University of Washington gave a very nice talk about 2d topological insulator response of 1T'-WTe2.  Many of the main results are in this paper (arxiv link).    This system in the single-layer limit has very clear edge conduction while the bulk of the 2d layer is insulating, as determined by a variety of transport measurements.  There are also new eye-popping scanning microwave impedance microscopy results from Yongtao Cui's group at UC Riverside that show fascinating edge channels, indicating tears and cracks in the monolayer material that are otherwise hard to see. 

Steve Forrest of the University of Michigan gave a great presentation about "How Organic Light Emitting Diodes Revolutionized Displays (and maybe lighting)".  The first electroluminescent organic LED was reported about thirty years ago, and it had an external quantum efficiency of about 1%, for two main reasons.  First, when an electron and a hole come together in the device, they only have a 1-in-4 chance of producing a singlet exciton, the kind that can readily decay radiatively.  Second, it isn't trivial to get light out of such a device because of total internal reflection.  Adding in the right kind of strong spin-orbit-coupling molecule, it is possible to harvest the triplets radiatively as well and thus get nearly 100% internal quantum efficiency.  In real devices, there can be losses due to light trapped in waveguided modes, but you can create special substrates to couple that light into the far field.  Similarly, you can create modified substrates to avoid losses due to unintentional plasmon modes.  The net result is that you can have OLEDs with about 70% external quantum efficiencies.   OLED displays are a big deal - the global market was about $20B/yr in 2017, and they will likely displace LCD displays.  OLED-based lighting is also on the way.  It's an amazing technology, and the industrial scale-up is very impressive.
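The efficiency bookkeeping above can be sketched as a product of loss factors; the individual yield and outcoupling numbers here are illustrative assumptions chosen to land near the quoted 1% and 70%, not figures from the talk:

```python
# External quantum efficiency as a chain of multiplicative loss factors.
def external_qe(spin_fraction, radiative_yield, outcoupling):
    return spin_fraction * radiative_yield * outcoupling

# Early fluorescent OLED: only 1 in 4 excitons is a singlet, and most light
# is trapped by total internal reflection.
early = external_qe(spin_fraction=0.25, radiative_yield=0.2, outcoupling=0.2)

# Phosphorescent emitter (strong spin-orbit coupling makes triplets emissive)
# plus substrates engineered against waveguided and plasmonic losses.
modern = external_qe(spin_fraction=1.0, radiative_yield=1.0, outcoupling=0.7)
print(early, modern)  # ~0.01 vs ~0.7
```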

Barry Stipe from Western Digital also gave a neat talk about the history and present state of the hard disk drive.  Despite the growth of flash memory, 90% of all storage in cloud data centers remains in magnetic hard disks, for capacity and speed.  The numbers are really remarkable.  If you scale all the parts of a hard drive up by a factor of a million, the disk platter would be 95 km in diameter, a bit would be about the size of your finger, and the read head would be flying above the surface at an altitude of 4 mm, and to get the same data rate as a drive, the head would have to be flying at 0.1 c.  I hadn't realized that they now hermetically seal the drives and fill them with He gas.  The He is an excellent thermal conductor for cooling, and because it has a density 1/7 that of air, the Reynolds number is lower for a given speed, meaning less turbulence, meaning they can squeeze additional, thinner platters into the drive housing.  Again, an amazing amount of science and physics, plus incredible engineering.
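The scale-up arithmetic is fun to check; the flying height and linear disk speed below are assumed round numbers consistent with the factors quoted in the talk:

```python
# Scale a hard drive's dimensions up by a factor of a million.
scale = 1e6
c = 2.99792458e8   # m/s, speed of light

platter_diameter = 95e-3   # m; a 3.5" drive platter is roughly 95 mm across
fly_height = 4e-9          # m; assumed head-disk spacing implied by the "4 mm"
disk_speed = 30.0          # m/s; assumed linear speed of the surface under the head

print(platter_diameter * scale / 1e3)  # ~95 km platter
print(fly_height * scale * 1e3)        # ~4 mm flying altitude
print(disk_speed * scale / c)          # ~0.1 c
```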

Some final thoughts (as I can't stay for the rest of the meeting):

  • In the old days, some physicists seemed to generate an intellectual impression by cultivating resemblance to Einstein.  Now, some physicists try to generate an intellectual impression by cultivating resemblance to Paul McEuen.
  • After many years of trying, the APS WiFi finally works properly and well!  
  • This was the largest March Meeting ever (~ 12000 attendees).  This is a genuine problem, as the meeting is growing by several percent per year, and this isn't sustainable, especially in terms of finding convention centers and hotels that can host.  There are serious discussions about what to do about this in the long term - don't be surprised if a survey is sent to some part of the APS membership about this.

Superconductivity in graphene bilayers - why is this exciting and important?

As I mentioned here, the big story of this year's March Meeting is the report, in back-to-back Nature papers this week (arxiv pdf links in this sentence), of both Mott insulator and superconductivity in graphene bilayers.  I will post more here later today after seeing the actual talk on this (See below for some updates), but for now, let me give the FAQ-style report.  Skip to the end for the two big questions:
Moire pattern from twisted bilayer
graphene, image from NIST.

  • What's the deal with graphene?  Graphene is the name for a single sheet of graphite - basically an atomically thin hexagonal chickenwire lattice of carbon atoms.  See here and here.  Graphene is the most popular example of an enormous class of 2d materials.  The 2010 Nobel Prize in physics was awarded for the work that really opened up that whole body of materials for study by the physics community.  Graphene has some special electronic properties:  It can easily support either electrons or holes (effective positively charged "lack of electrons") for conduction (unlike a semiconductor, it has no energy gap, but it's a semimetal rather than a metal), and the relationship between kinetic energy and momentum of the charge carriers looks like what you see for massless relativistic things in free space (like light).
  • What is a bilayer?  Take two sheets of graphene and place one on top of the other.  Voila, you've made a bilayer.  The two layers talk to each other electronically.  In ordinary graphite, the layers are stacked in a certain way (Bernal stacking), and a Bernal bilayer acts like a semiconductor.  If you twist the two layers relative to each other, you end up with a Moire pattern (see image) so that along the plane, the electrons feel some sort of periodic potential.
  • What is gating?  It is possible to add or remove charge from the graphene layers by using an underlying or overlying electrode - this is the same mechanism behind the field effect transistors that underpin all of modern electronics.
  • What is actually being reported? If you have really clean graphene and twist the layers relative to each other just right ("magic angle"), the system becomes very insulating when you have just the right number of charge carriers in there.  If you add or remove charge away from that insulating regime, the system apparently becomes superconducting at a temperature below 1.7 K.
  • Why is the insulating behavior interesting?  It is believed that the insulating response in the special twisted case is because of electron-electron interactions - a Mott insulator.  Think about one of those toys with sliding tiles.  You can't park two tiles in the same location, so if there is no open location, the whole set of tiles locks in place.  Mott insulators usually involve atoms that contain d electrons, like NiO or the parent compounds of the high temperature copper oxide superconductors.  Mott response in an all carbon system would be realllllly interesting.  
  • Why is the superconductivity interesting?  Isn't 1.7 K too cold to be useful?  The idea of superconductivity-near-Mott has been widespread since the discovery of high-Tc in 1987.  If that's what's going on here, it means we have a new, highly tunable system to try to understand how this works.  High-Tc remains one of the great unsolved problems in (condensed matter) physics, and insights gained here have the potential to guide us toward greater understanding and maybe higher temperatures in those systems.  
  • Why is this important?  This is a new, tunable, controllable system to study physics that may be directly relevant to one of the great open problems in condensed matter physics.  This may be generalizable to the whole zoo of other 2d materials as well. 
  • Why should you care?  It has the potential to give us deep understanding of high temperature superconductivity.  That could be a big deal.  It's also just pretty neat.  Take a conductive sheet of graphene, and another conducting sheet of graphene, and if you stack them juuuuuust right, you get an insulator or a superconductor depending on how many charge carriers you stick in there.  Come on, that's just wild.
Update:  A few notes from seeing the actual talk.
  • Pablo painted a picture:  In the cuprates, the temperature (energy) scale is hundreds of Kelvin, and the size scale associated with the Mott insulating lattice is fractions of a nm (the spacing between Cu ions in the CuO2 planes).  In ultracold atom optical lattice attempts to look at Mott physics, the temperature scale is nK (and cooling is a real problem), while the spatial scale between sites is more like a micron.  In the twisted graphene bilayers, the temperature scale is a few K, and the spatial scale is about 13.4 nm (for the particular magic angle they use).
  • The way to think about what the twist does:  In real space, it creates a triangular lattice of roughly Bernal-stacked regions (the lighter parts of the Moire pattern above).  In reciprocal space, the Dirac cones at the K and K' points of the two lattices become separated by an amount given by \(k_{\theta} \approx K \theta\), where \(\theta\) is the twist angle, and we've used the small angle approximation.  When you do that and turn on interlayer coupling, you hybridize the bands from the upper and lower layers.  This splits off the parts of the bands that are close in energy to the Dirac point, and at the magic angles those bands can be very very flat (like bandwidths of ~ 10 meV, as opposed to multiple eV of the full untwisted bands).  Flat bands = tendency to localize.   The Mott phase then happens if you park exactly one carrier (one hole, for the superconducting states in the paper) per Bernal-patch-site.
  • Most persuasive reasons they think it's really a Mott insulating state and not something else, besides the fact that it happens right at half-filling of the twist-created triangular lattice:  Changing the angle by a fraction of a degree gets rid of the insulating state, and applying a magnetic field (in plane or perpendicular) makes the system become metallic, which is the opposite of what tends to happen in other insulating situations.  (Generally magnetic fields tend to favor localization.)
  • They see spectroscopic evidence that the important number of effective carriers is determined not by the total density, but by how far away they gate the system from half-filling.
  • At the Mott/superconducting border, they see what looks like Josephson-junction response, as if the system breaks up into superconducting regions separated by weak links.  
  • The ratio of superconducting Tc to the Fermi temperature is about 0.5, which makes this about as strongly coupled (and therefore likely to be some weird unconventional superconductor) as you ever see.
  • Pablo makes the point that this could be very general - for any combo of van der Waals layered materials, there are likely to be magic angles.  Increasing the interlayer coupling increases the magic angle, and could then increase the transition temperature.
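If you want to check the geometry quoted above for yourself, the standard small-angle estimate for the moiré period of two identical twisted lattices is \(\lambda \approx a/(2 \sin(\theta/2))\).  A minimal sketch (graphene's lattice constant \(a \approx 0.246\) nm is textbook; everything else is just that one formula):

```python
import math

def moire_period(a_nm, theta_deg):
    """Moire superlattice period for two identical lattices twisted
    by an angle theta: lambda = a / (2 sin(theta/2))."""
    theta = math.radians(theta_deg)
    return a_nm / (2 * math.sin(theta / 2))

# Graphene lattice constant ~0.246 nm, magic twist angle ~1.05 degrees
print(round(moire_period(0.246, 1.05), 1))  # 13.4 (nm)
```

Reassuringly, this reproduces the ~13.4 nm superlattice scale mentioned above, roughly 50 times the Cu-Cu spacing in the cuprates.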
Comments by me:
  • This is very exciting, and has great potential.  Really nice work.
  • I wonder what would happen if they used graphite as a gate material rather than a metal layer, given what I wrote here.   It should knock the disorder effects down a lot, and given how flat the bands are, that could really improve things.
  • There are still plenty of unanswered questions.  Why does the superconducting state seem more robust on the hole side of charge neutrality as well as on the hole side of half-filling?  This system is effectively a triangular lattice - that's a very different beast than the square lattice of the cuprates or the pnictides.  That has to matter somehow.  Twisting other 2d materials (square lattice MXenes?) could be very interesting.
  • I predict there will be dozens of theory papers in the next two months trying to predict magic twist angles for a whole zoo of systems.

APS March Meeting 2018, day 2

Day 2 of the meeting was even more scattered than usual for me, because several of my students were giving talks, all in different sessions spread around.  That meant I didn't have a chance to stay too long on any one topic.   A few highlights:

Jeff Urban from LBL gave an interesting talk about different aspects of the connection between electronic transport and thermal transport.  The Wiedemann-Franz relationship is a remarkably general expression based on a simple idea - when charge carriers move, they transport some (thermal) energy as well as charge, so thermal conductivity and electrical conductivity should be proportional to each other.  There are a bunch of assumptions that go into the serious derivation, though, and you could imagine scenarios where you'd expect large deviations from W-F response, particularly if scattering rates of carriers have some complicated energy dependence.  Urban spoke about hybrid materials (e.g., mixtures of inorganic components and conducting polymers).  He then pointed out a paper I'd somehow missed last year about apparent W-F violation in the metallic state of vanadium dioxide.  VO2 is a "bad metal", with an anomalously low electrical conductivity.  Makes me wonder how W-F fares in other badly metallic systems.
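For concreteness, the quantitative statement of W-F is that the Lorenz ratio \(\kappa/(\sigma T)\) should equal the Sommerfeld value \(L_0 = (\pi^2/3)(k_{\mathrm{B}}/e)^2\).  A minimal sketch of that check (the sample numbers below are illustrative, roughly copper-like, not from any of the talks):

```python
import math

K_B = 1.380649e-23       # Boltzmann constant, J/K
E_CHARGE = 1.602176634e-19  # elementary charge, C

# Sommerfeld value of the Lorenz number
L0 = (math.pi**2 / 3) * (K_B / E_CHARGE)**2
print(f"L0 = {L0:.3e} W Ohm / K^2")  # ~2.44e-8

def lorenz_ratio(kappa, sigma, T):
    """Measured Lorenz ratio; W-F is obeyed when this is close to L0."""
    return kappa / (sigma * T)

# Illustrative good-metal numbers: kappa ~ 400 W/m/K, sigma ~ 6e7 S/m, 300 K
print(lorenz_ratio(400.0, 6e7, 300.0) / L0)  # order unity when W-F holds
```

The reported violations are cases where this ratio comes out far from one.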

Ali Hussain of the Abbamonte group at Illinois gave a nice talk about (charge) density fluctuations in the strange metal phase (and through the superconducting transition) of the copper oxide superconductor BSCCO.  The paper is here.  They use a particular technique (momentum-resolved electron energy loss spectroscopy) and find that it is peculiarly easy to create particle-hole excitations over a certain low energy range in the material, almost regardless of the momentum of those excitations.  There are also systematics with how this works as a function of doping (carrier concentration in the material), with optimally doped material having particularly temperature-independent response. 

Albert Fert spoke about spin-Hall physics, and the conversion of spin currents into charge currents and vice versa.  One approach is the inverse Edelstein effect (IEE).  You have a stack of materials, where a ferromagnetic layer is on the top.  Driving the ferromagnetic layer into FMR, you can pump a spin current vertically downward (say) into the stack.  Then, because of Rashba spin-orbit coupling, that vertical spin current can drive a lateral charge current (leading to the buildup of a lateral voltage) in a two-dimensional electron gas living at an interface in the stack.  One can use the interface between Bi and Ag (see here).  One can get better results if there is some insulating spacer to keep free conduction electrons not at the interface from interfering, as in LAO/STO structures.  Neat stuff, and it helped clarify for me the differences between the inverse spin Hall effect (3d charge current from 3d spin current) and the IEE (2d charge current from 3d spin current). 

Alexander Govorov of Ohio also gave a nice presentation about the generation of "hot" electrons from excitation of plasmons.  Non-thermally distributed electrons and holes can be extremely useful for a variety of processes (energy harvesting, photocatalysis, etc.). At issue is what the electronic distribution really looks like.  Relevant papers are here and here.  There was a nice short talk similar in spirit by Yonatan Dubi earlier in the day.

Monday, March 05, 2018

APS March Meeting 2018, day 1

As I explained yesterday, my trip to the APS is even more scattered than in past years, but I'll try to give some key points.  Because of meetings and discussions with some collaborators and old friends, I didn't really sit down and watch entire sessions, but I definitely saw and heard some interesting things.

Markus Raschke of Colorado gave a nice talk about the kinds of ultrafast and nonlinear spectroscopy you can do if you use a very sharp gold tip as a plasmonic waveguide.  The tip has a grating milled onto it a few microns away from the sharp end, so that hitting the grating with a pulsed IR laser excites a propagating surface plasmon mode that is guided down to the really sharp point.  One way to think about this:  When you use the plasmon mode to confine light down to a length scale \(\ell\) comparable to the radius of curvature of the sharp tip, then you effectively probe a wavevector \(k_{\mathrm{eff}} \sim 2\pi/\ell\).  If \(\ell\) is a couple of nm, then you're dealing with \(k_{\mathrm{eff}}\) values associated in free space with x-rays (!).  This lets you do some pretty wild optical spectroscopies.  Because the waveguiding is actually effective over a pretty broad frequency range, that means that you can get very short pulses down there, and the intense electric field can lead to electron emission, generating the shortest electron pulses in the world.  
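To see why that "(!)" is earned, here's my own back-of-envelope check (not the speaker's numbers): the energy of a free-space photon carrying a wavevector \(2\pi/\ell\) is \(E = \hbar c k\), and for \(\ell \sim 2\) nm that lands in the soft x-ray range.

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J s
C = 2.99792458e8        # speed of light, m/s
EV = 1.602176634e-19    # J per eV

ell = 2e-9                           # confinement scale, ~2 nm
k_eff = 2 * math.pi / ell            # effective wavevector, 1/m
E_photon_eV = HBAR * C * k_eff / EV  # free-space photon with that k
print(f"k_eff = {k_eff:.2e} 1/m, photon energy ~ {E_photon_eV:.0f} eV")
# Hundreds of eV - soft x-ray territory - reached with a near-IR plasmon mode.
```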

Andrea Young of UCSB gave a very pretty talk about looking at even-denominator fractional quantum Hall physics in extremely high quality bilayer graphene.  Using ordinary metal electrodes apparently limits how nice the effects can be in the bilayer, because the metal is polycrystalline and that disorder in local work function can actually matter.   By using graphite as both the bottom gate and the top gate (that is, a vertical stack of graphite/boron nitride/bilayer graphene/boron nitride/graphite), it is possible to tune both the filling fraction (ratio of carrier density to magnetic field) in the bilayer and the vertical electric field across the bilayer (which can polarize the states to sit more in one layer or the other).  Capacitance measurements (e.g., between the top gate and the bottom gate, or between either gate and the bilayer) can show extremely clean quantum Hall data.

Sankar Das Sarma of Maryland spoke about the current status of trying to use Majorana fermions in semiconductor wire/superconductor electrode structures for topological quantum computing.  For a review of the topic overall, see here.   This is the approach to quantum computing that Microsoft is backing.  The talk was vintage Das Sarma, which is to say, full of amusing quotes, like "Physicists' record at predicting technological breakthroughs is dismal!" and "Just because something is obvious doesn't mean that you should not take it seriously."  The short version:  There has been great progress in the last 8 years, from the initial report of possible signatures of effective Majorana fermions in individual InSb nanowires contacted by NbTiN superconductors, to very clean looking data involving InAs nanowires with single-crystal, epitaxial Al contacts.  However, it remains very challenging to prove definitively that one has Majoranas rather than nearly-look-alike Andreev bound states.

In case you are interested in advanced (beyond-first-year) undergraduate labs and how to do them well, you should check out the University of Minnesota's site, as well as the ALPhA group from the AAPT.   There is also an analogous group working on projects to integrate computation into the undergraduate physics curriculum.

One potentially very big physics news story that I heard about during the day, but won't be here to see the relevant talk: [Update:  Hat tip to a colleague who pointed out that there is a talk tomorrow morning that will cover this!]  There are back-to-back brand new papers in Nature today by Yuan Cao et al. from the Jarillo-Herrero group at MIT.  (The URLs don't work yet for the articles, but I'll paste in what Nature has anyway.)  The first paper apparently shows that when you take two graphene layers and rotationally offset them from graphite-like stacking by 1.05 degrees (!), the resulting bilayer is alleged to be a Mott insulator.  The idea appears to be that the lateral Moire superlattice that results from the rotational offset gives you very flat minibands, so that electron-electron interactions are enough to lock the carriers into place when the number density of carriers is tuned correctly.  The second paper apparently (since I can't read it yet) shows that as the carrier density is tuned away from the Mott insulator filling, the system becomes a superconductor (!!), with a critical temperature of 1.7 K.  This isn't particularly high, but the idea of tuning carrier density away from a Mott state and getting superconductivity is basically the heart of our (incomplete) understanding of the copper oxide high temperature superconductors.  This is very exciting, as summarized in this News and Views commentary and this news report.  

Sunday, March 04, 2018

APS March Meeting 2018

It's that time of year again:  the running of the physicists, otherwise known as the annual APS March Meeting, a gathering of thousands of (mostly condensed matter) physicists.  These are (sarcasm mode on) famously rowdy conferences (/sarcasm).  This year the meeting is in Los Angeles.  I came to the 1998 March Meeting in LA, having just accepted a fall '98 postdoctoral position at Bell Labs, and shortly after the LA convention center had been renovated.   At the time, the area around the convention center was really a bit of a pit - very few restaurants, few close hotels, and quite a bit of vacant and/or low-end commercial property.  Fast forward 20 years, and now the area around the meeting looks a lot more like a sanitized Times Square, with big video advertisements and tons of high end flashy stores.

Anyway, I will try again to write up some of what I see until I have to leave on Thursday morning, though this year between DCMP business, department chair constraints, and other deadlines, I might be more concise or abbreviated.  (As I wrote last year, if you're at the meeting and you don't already have a copy, now is the perfect time to swing by the Cambridge University Press exhibit at the trade show and pick up my book :-) ).

Thursday, February 22, 2018

Vibranium and its properties

Fictional materials can be a fun starting point for thinking about and maybe teaching about material properties.  Back in 2015 I touched on this here, when I mentioned a few of my favorite science fictional materials (more here, here, and here).  

With the release of Black Panther (BP), we now have much more information about the apparent properties of vibranium in the Marvel Cinematic Universe.   

Vibranium is pretty amazing stuff - like many fictional materials, it sometimes seems to have whatever properties are necessary to the story.  As a physicist I'm not qualified to talk about its putative medicinal properties mentioned in BP, but its physical properties are fun to consider.  Vibranium appears to be a strong, light, silvery metal (see here), and it also has some remarkable abilities in terms of taking macroscopic kinetic energy (e.g., of a projectile) and either dissipating it (look at the spent bullets in the previously linked video) or, according to BP, storing that energy for later release.  At the same time, Captain America's vibranium shield is able to bounce around with incredibly little dissipation of energy, prompting the Spider-Man quote at right.

In the spirit of handwaving physics, I think I've got this figured out.  

In all solids, there is some coupling between the deformation of the atomic lattice and the electronic states of the material (here is a nice set of slides about this).  When we talk about lattice vibrations, this is the electron-phonon coupling, and it is responsible for the transfer of energy from the electrons to the lattice (that is, this is why the actual lattice of atoms in a wire gets warm when you drive electrical current through the material).  The e-ph coupling is also responsible for the interaction that pairs up electrons in conventional superconductors.  If the electron-phonon coupling is really strong, the deformation of the lattice can basically trap the electron - this is polaron physics.  In some insulating materials, where charge is distributed asymmetrically within the unit cell of the crystal, deformation of the material can lead to big displacements of charge, with a corresponding buildup of a voltage across the system - this is piezoelectricity.  

The ability of vibranium to absorb kinetic energy, store it, and then later discharge it with a flash, suggests to me that lattice deformation ends up pumping energy into the electrons somehow.  Moreover, that electronically excited state must somehow be metastable for tens of seconds.  Ordinary electronic excitations in metals are very short-lived (e.g., tens of femtoseconds for individual excited quasiparticles to lose their energy to other electrons).  Gapped-off collective electronic states (like the superconducting condensate) can last very long times.  We have no evidence that vibranium is superconducting (though there are some interesting maglev trains in Wakanda).  That makes me think that what's really going on involves some topologically protected electronic states.  Clearly we need to run experiments (such as scanning SQUID, scanning NV center, or microwave impedance microscopy) to search for the presence of edge currents in percussively excited vibranium films to test this idea.

Thursday, February 15, 2018

Physics in the kitchen: Jamming

Last weekend while making dinner, I came across a great example of emergent physics.  What you see here are a few hundred grams of vacuum-packed arborio rice:
The rice consists of a few thousand oblong grains whose only important interactions here are a mutual "hard core" repulsion.  A chemist would say they are "sterically hindered".  An average person would say that the grains can't overlap.  The vacuum packing means that the whole ensemble of grains is being squeezed by the pressure of the surrounding air, a pressure of around 101,000 N/m² (14.7 pounds per square inch).  The result is readily seen in the right-hand image:  The ensemble of rice forms a mechanically rigid rectangular block.  Take my word for it, it was hard as a rock. 
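To get a feel for why the block is so rigid, a rough estimate helps: the squeezing force on one face of the bag is just \(F = PA\).  (The package dimensions below are my guess for a ~500 g bag, not measured values.)

```python
P_ATM = 101325.0  # standard atmospheric pressure, Pa (N/m^2)

# Guessed face dimensions for the rice package: 10 cm x 15 cm
area = 0.10 * 0.15   # m^2
force = P_ATM * area  # net inward force on that face, N
print(f"~{force:.0f} N on the large face")
# ~1.5 kN: comparable to a ~150 kg mass resting on the bag.
```

No wonder the grains lock up.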

However, as soon as I cut a little hole in the plastic packaging and thus removed the external pressure on the rice, the ensemble of rice grains lost all of its rigidity and integrity, and was as soft and deformable as a beanbag, as shown here. 

So, what is going on here?  How come this collection of little hard objects acts as a single mechanically integral block when squeezed under pressure?  How much pressure does it take to get this kind of emergent rigidity?  Does that pressure depend on the size and shape of the grains, and whether they are deformable? 

This onset of collective resistance to deformation is called jamming.  This situation is entirely classical, and yet the physics is very rich.  This problem is clearly one of classical statistical physics, since it is only well defined in the aggregate and quantum mechanics is unimportant.  At the same time, it's very challenging, because systems like this are inherently not in thermal equilibrium.  When jammed, the particles are mechanically hindered and therefore can't explore lots of possible configurations.   It is possible to map out a kind of phase diagram of how rigid or jammed a system is, as a function of free volume, mechanical load from the outside, and temperature (or average kinetic energy of the particles).   For good discussions of this, try here (pdf), or more technically here and here.   Control over jamming can be very useful, as in this kind of gripping manipulator (see here for video).  

Tuesday, February 13, 2018

Rice Cleanroom position

In case someone out there is interested, Rice is hiring a cleanroom research scientist.  The official job listing is here.  To be clear:  This is not a soft money position.

The Cleanroom Facility at Rice University is a shared equipment facility for enabling micro- and nanofabrication research in the Houston metropolitan area. Current equipment includes deposition, lithography, etching and a number of characterization tools. This facility attracts users from the George R. Brown School of Engineering and the Wiess School of Natural Science and regional universities and corporations whose research programs require advanced fabrication and patterning at the micro- and nanoscale. A new state of the art facility is currently being constructed and is expected to be in operation in summer 2018. Additionally, with new initiatives in Molecular Nanotechnology, the Rice University cleanroom is poised to see significant growth in the next 5-10 years. This job announcement seeks a motivated individual who can lead, manage, teach and grow this advanced facility.

The job responsibilities of a Cleanroom Research Scientist include conducting periodic and scheduled maintenance and safety check of equipment and running qualification and calibration recipes. The incumbent will be expected to maintain the highest safety standards, author and update standard operation procedures (SOPs), maintain and calibrate processes for all equipment. The Cleanroom Research Scientist will help facilitate new equipment installation, contact vendors and manufacturers and work in tandem with them to resolve equipment issues in a timely and safe manner. Efficient inventory management of parts, chemicals and supplies will be required. The Cleanroom Scientist will also oversee personal one-to-one training of users. Additionally, the incumbent will help develop cleanroom laboratory short courses that provide lectures to small groups of students. The incumbent will also coordinate with technical staff members in Rice SEA (Shared Equipment Authority).

Saturday, February 10, 2018

This week in the arxiv

Back when my blogging was young, I had a semi-regular posting of papers that caught my eye that week on the condensed matter part of the arxiv.  As I got busy doing many things, I'd let that fall by the wayside, but I'm going to try to restart it at some rate.  I generally haven't had the time to read these in any detail, and my comments should not be taken too seriously, but these jumped out at me.

arxiv:1802.01045 - Sangwan and Hersam; Electronic transport in two-dimensional materials
If you've been paying any attention to condensed matter and materials physics in the last 14 years, you've noticed a huge amount of work on genuinely two-dimensional materials, often exfoliated from the bulk as in the scotch tape method, or grown by chemical vapor deposition.  This looks like a nice review of many of the relevant issues, and contains lots of references for interested students to chase if they want to learn more.

arxiv:1802.01385 - Fröhlich; Chiral Anomaly, Topological Field Theory, and Novel States of Matter
While quite mathematical (relativistic field theory always has a certain intimidating quality, at least to me), this also looks like a reasonably pedagogical introduction of topological aspects of condensed matter.  This is not for the general reader, but I'm hopeful that if I put in the time and read it carefully, I will gain a better understanding of some of the topological discussions I hear these days about things like axion insulators and chiral anomalies.

arXiv:1802.01339 - Ugeda et al.; Observation of Topologically Protected States at Crystalline Phase Boundaries in Single-layer WSe2
arXiv:1802.02999 - Huang et al.; Emergence of Topologically Protected Helical States in Minimally Twisted Bilayer Graphene
arXiv:1802.02585 - Schindler et al.; Higher-Order Topology in Bismuth
Remember back when people didn't think about topology in the band structure of materials?  Seems like a million years ago, now that a whole lot of systems (often 2d materials or interfaces between materials) seem to show evidence of topologically special edge states.   These are three examples just this week of new measurements (all using scanning tunneling microscopy as part of the tool-set, to image edge states directly) reporting previously unobserved topological states at edges or surface features.

Sunday, February 04, 2018

New readers: What is condensed matter physics? What is special about the nanoscale?

If you're a new reader, perhaps brought here by the mention of this blog in the Washington Post, welcome!   Great to have you here.  Just a couple of quick FAQs to get you oriented:

What is condensed matter physics?  Condensed matter (once known as "solid state") is a branch of physics that deals with the properties of matter consisting of large numbers of particles (usually atoms, or electrons plus the rest of the atoms) in "condensed" states like liquids and solids - basically the materials that make up an awful lot of the stuff you interact with all the time.  New properties can emerge when you bring lots of particles together.  See here for an example involving plastic balls, or here (pdf) for a famous essay about this general point.  Condensed matter physicists are often interested in identifying the different types of states or phases that can arise, and understanding transitions between those states (like how does water boil, or how does magnetism turn on in iron as its temperature is lowered from the melting point, or how does a ceramic copper oxide suddenly start letting electricity flow without resistance below some particular temperature).  Hard condensed matter typically deals with systems where quantum mechanics is directly important (electronic, magnetic, and optical properties of materials, for example), while soft condensed matter describes systems where the main actors (while quantum deep down like all matter) are not acting in a quantum way - examples include the jamming of grains of sand when you build a sand castle, or the spontaneous alignment of rod-like molecules in the liquid crystal display you're using to read this.

While particle physics tries to look at the tiniest bits of stuff, condensed matter hits on some of the same (literally the same concepts and math) deep ideas about symmetry, and often has direct implications for technologies that affect your daily life.   Understanding this stuff has given us things like the entire electronics industry, the telecommunications industry, and soon probably quantum computers.  

A powerful concept in physics in general and condensed matter in particular is universality.  For example, materials built out of all kinds of different ingredients can be mechanically rigid solids; there is something universal about mechanical rigidity that makes it emerge independent of the microscopic details.  Another example:  Lots of very different systems (metallic lead; waxy crystals of buckyball molecules with some alkaline metal atoms in between; ceramic copper oxides; hydrogen sulfide gas under enormous pressure) can conduct electricity without resistance at low temperatures - why and how is superconductivity an emergent property?

What is special about the nanoscale?  Because it's about collective properties, traditional condensed matter physics often uses a lot of nice approximations to describe systems, like assuming they're infinite in extent, or at least larger than lots of physically important scales.   When you get down to the nanoscale (recall that a typical atom is something like 0.3 nanometers in diameter), a lot of the typical approximations can fail.  As the size of the material or system becomes small compared to the length scales associated with various physical processes, new things can happen and the properties of materials can change dramatically.  Tightly confined liquids can act like solids.  Colorless materials can look brilliantly chromatic when structured on small scales.  Two electrical insulators brought together can produce a nanoscale-thick metallic layer.   We now have different techniques for structuring materials on the nanoscale and for seeing what we're doing down there, where the building blocks are often far smaller than the wavelengths of light.  Investigations at the nanoscale are tied to some of the most active topics in condensed matter, and verge into the interdisciplinary boundaries with chemistry, biology, materials science, electrical engineering, and chemical engineering.   That, and it's fun.

Please browse around through the archives, and I hope you find it interesting.

Friday, February 02, 2018

Why science blogging still matters

Nature has a piece up about science blogging.  It's pretty much on target.  I'm a bit surprised that there wasn't more discussion of blogging vs. twitter vs. other social media platforms, or the interactions between blogs and formal journalism.

Monday, January 29, 2018

Photonics West

A significant piece of my research program is optics-related, and thanks to an invited talk, I'm spending a couple of days at the SPIE Photonics West meeting in San Francisco, a mix of topics from the very applied (that is, details of device engineering and manufacturing) to the fundamental.   It's fun seeing talks on subjects outside of my wheelhouse.

A couple of items of interest from talks so far today:

  • Andrew Rickman gave a talk about integrated Si photonics, touching on his ideas on why, while it's grown, it hasn't taken off in the same crazy exponential way as Moore's Law(s) in the microelectronics world.  On the economic side, he made a completely unsurprising argument:  For that kind of enormous growth, one needs high volume manufacturing with very high yield, and a market that is larger than just optical telecommunications.  One challenge of Si-based photonics is that Si is an indirect band gap material, so that for many photonic purposes (including many laser sources and detectors) it needs to be integrated with III-V semiconductors like InP.  Similarly, getting optical signals on and off of chips usually requires integration with macroscopically large optical fibers.   His big pitch, presumably the basis for his recent founding of Rockley Photonics, is that you're better off making larger Si waveguides (say micron-scale, rather than the 220 nm scale, a standard size set by certain mode choices) - this allegedly gives you much more manufacturing dimensional fault tolerance, easier integration with both III-V and fiber, good integration with electroabsorption modulators, etc. One big market he's really interested in is cloud computing, where apparently people are now planning for the transition from 100 Gb/s to 400 Gb/s (!) for communication within racks and even on boards.  That is some serious throughput.
  • Min Gu at Royal Melbourne Institute of Technology spoke about work his group has been doing trying to take advantage of the superresolution approach of STED microscopy, but for patterning.   In STED, a diffraction limited laser spot first illuminates a target area (with the idea of exciting fluorescence), and then a spot from a second laser source, in a mode that looks donut-shaped, also hits that location, depleting the fluorescence everywhere except at the location of the "donut hole".  The result is an optical imaging method with resolution at the tens of nm level.  Gu's group has done work combining the STED approach with photopolymerization to do optical 3d printing of tiny structures.  They've been doing a lot with this, including making gyroid-based photonic crystals that can act as helicity-resolved beamsplitters for circularly polarized light.  It turns out that you can make special gyroid structures with broken symmetries, so that these photonic crystals support topologically protected (!) modes analogous to Weyl fermions.
  • Venky Narayanamurti gave a talk about how to think about research and its long-standing demarcation into "basic" and "applied".  This drew heavily from his recent book (which is now on my reading list).   The bottom line:  In hindsight, Vannevar Bush didn't necessarily do a good thing by intellectually partitioning science and engineering into "basic" vs. "applied".  Narayanamurti would prefer to think in terms of invention and discovery, defined such that "Invention is the accumulation and creation of knowledge that results in a new tool, device, or process that accomplishes a particular specific purpose; discovery is the creation of new knowledge and facts about the world."  Neither of these are scheduled activities like development.  Research is "an unscheduled quest for new knowledge and the creation of new inventions, whose outcome cannot be predicted in advance, and in which both science and engineering are essential ingredients."  He sounded a very strong call that the US needs to change the way it is thinking about funding of research, and held up China as an example of a country that is investing enormous resources in scientific and engineering research.

Monday, January 22, 2018

In condensed matter, what is a "valley", and why should you care?

One big challenge of talking about condensed matter physics to a general audience is that there are a lot of important physical concepts that don't have easy-to-point-to, visible consequences.  One example of this is the idea of "valleys" in the electronic structure of materials. 

To explain the basic concept, you first have to get across several ideas:

You've heard about wave-particle duality.  A free particle in quantum mechanics can be described by a wavefunction that really looks like a wave, oscillating in space with some spatial frequency (\(k = 2\pi/\lambda\), where \(\lambda\) is the wavelength).  Momentum is proportional to that spatial frequency (\(p = \hbar k\)), and there is a relationship between kinetic energy and momentum (a "dispersion relation") that looks simple.  In the low-speed limit, K.E. \(= p^2/2m\), and in the relativistic limit, K.E. \( = pc \).

In a large crystal (let's ignore surfaces for the moment), atoms are arranged periodically in space.  This arrangement has lower symmetry than totally empty space, but can still have a lot of symmetries in there.  Depending on the direction one considers, the electron density can have all kinds of interesting spatial periodicities.  Because of the interactions between the electrons and that crystal lattice, the dispersion relation \(E(\mathbf{k})\) becomes direction-dependent (leading to spaghetti diagrams).  Some kinetic energies don't correspond to any allowed electronic states, meaning that there are "bands" in energy of allowed states, separated by gaps.  In a semiconductor, the highest filled (in the limit of zero temperature) band is called the valence band, and the lowest unoccupied band is called the conduction band.
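To make the "bands separated by gaps" idea concrete, here is a minimal toy model (a 1D chain with alternating hopping strengths - a deliberately simplified illustration, not the band structure of any real material).  The periodic modulation splits the spectrum into two bands with a gap of \(2|t_1 - t_2|\) at the zone boundary:

```python
import math

def toy_bands(k, a=1.0, t1=1.0, t2=0.6):
    """Two bands of a 1D chain with alternating hoppings t1 and t2:
    E(k) = +/- |t1 + t2 exp(i k a)|.  A gap of 2|t1 - t2| opens at
    the zone boundary k = pi/a; for t1 = t2 the gap closes."""
    phase = complex(math.cos(k * a), math.sin(k * a))
    mag = abs(t1 + t2 * phase)
    return -mag, +mag

# Gap at the zone boundary:
lower, upper = toy_bands(math.pi)
print(round(upper - lower, 6))  # 0.8, i.e. 2*|t1 - t2|
```

Sweeping \(k\) from \(0\) to \(\pi/a\) and plotting both branches gives a baby version of the spaghetti diagrams mentioned above.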

Depending on the symmetry of the material, the lowest energy states in the conduction band might not be near where \(|\mathbf{k}| = 0\).  Instead, the lowest energy electronic states in the conduction band can be at nonzero \(\mathbf{k}\).  These are the conduction band valleys.  In the case of bulk silicon, for example, there are 6 valleys (!), as in the figure.
[Figure caption: The six valleys in the Si conduction band, where the axes here show the different components of \(\mathbf{k}\), and the blue dot is at \(\mathbf{k}=0\).]

One way to think about the states at the bottom of these valleys is that there are different wavefunctions that all have the same kinetic energy, the lowest energy they can have while still being in the conduction band, but their actual spatial arrangements (how the electron probability density is arranged in the lattice) differ subtly. 

In the case of graphene, I'd written about this before.  There are two valleys in graphene, and the states at the bottom of those valleys differ subtly in how charge is arranged between the two "sublattices" of carbon atoms that make up the graphene sheet.  What is special about graphene, and why some other materials are getting a lot of attention, is that you can do calculations about the valleys using the same math that gets used when talking about spin, the internal angular momentum of particles.  Instead of being in one graphene valley or the other, you can speak of having "pseudospin" up or down. 
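
To make the pseudospin math tangible, here is a minimal sketch (my illustration, not from the post) of the standard low-energy graphene Hamiltonian near valley K (\(\tau = +1\)) or K' (\(\tau = -1\)), which acts on the sublattice pseudospin through the same Pauli matrices used for real spin, \(H = \hbar v_F (\tau k_x \sigma_x + k_y \sigma_y)\); the numerical \(k\) values are arbitrary choices for illustration:

```python
# Eigenvalues of the two-valley Dirac Hamiltonian for graphene:
# both valleys give the same conical dispersion E = +/- hbar*v_F*|k|.
import numpy as np

sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_y = np.array([[0, -1j], [1j, 0]])

def valley_energies(kx, ky, tau, hbar_vF=1.0):
    """Energies in valley tau = +1 (K) or -1 (K'), in units of hbar*v_F."""
    H = hbar_vF * (tau * kx * sigma_x + ky * sigma_y)
    return np.linalg.eigvalsh(H)   # sorted eigenvalues of the Hermitian matrix

E_K = valley_energies(0.3, 0.4, tau=+1)    # here |k| = 0.5
E_Kp = valley_energies(0.3, 0.4, tau=-1)
print(E_K, E_Kp)
```

The energies are identical in the two valleys; what differs is the eigenvectors, i.e., how the wavefunction weight is distributed between the two sublattices, which is exactly the pseudospin structure described above.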

Once you start thinking of valley-ness as a kind of internal degree of freedom of the electrons that, like spin, is conserved in many processes, then you can consider all sorts of interesting ideas.  You can talk about "valley ferromagnetism", where available electrons all hang out in one valley.  You can talk about the "valley Hall effect", where carriers of differing valleys tend toward opposite transverse edges of the material.   Because of spin-orbit coupling, these valley effects can link to actual spin physics, and therefore are of interest for possible information processing and optoelectronic ideas.

Saturday, January 13, 2018

About grants: What is cost sharing?

In addition to science, I occasionally use this forum as a way to try to explain to students and the public how sponsored research works in academia.  Previously I wrote about the somewhat mysterious indirect costs.  This time I'd like to discuss cost sharing.

Cost sharing is what it sounds like - when researchers at a university propose a research project, and the funding agency or foundation wants to see the university kick in funding as well (beyond obvious things like the lab space where the investigators work).  Many grants, such as NSF single-investigator awards, expressly forbid explicit cost sharing.  That has certain virtues:  To some extent, it levels the playing field, so that particularly wealthy universities don't have an even larger advantage.  Agencies would all like to see their money leveraged as far as possible, and if cost sharing were unrestricted on grants, you could imagine a situation where wealthy institutions would effectively have an incentive to try to buy their way to grant success by offering big matching funds.   

In other programs, such as the NSF's major research instrumentation program, cost sharing is mandated, but the level is set at a fixed percentage of the total budget.  Similarly, some foundations make it known that they expect university matching at a certain percentage level.  While that might be a reach for some smaller, less-well-off universities when the budget is large, at least it's well-defined.    

Sometimes agencies try to finesse things, forbidding explicit cost sharing but still trying to get universities to put "skin in the game".  For the NSF materials research science and engineering center program, for example, cost sharing is forbidden (in the sense that explicit promises of $N in matching or institutional funding are not allowed), but proposals are required to include a discussion of "organizational commitment":  "Provide a description of the resources that the organization will provide to the project, should it be funded. Resources such as space, faculty release time, faculty and staff positions, capital equipment, access to existing facilities, collaborations, and support of outreach programs should be discussed, but not given as dollar equivalents."

First and foremost the science and broader impacts drive the merit review, but there's no question that an institution that happens to be investing synergistically with the topic of such a proposal would look good.

The big challenge for universities is grants where cost sharing is not forbidden, and no guidance is given about expectations.  There is a game theory dilemma at work, where institutions try to guess what level of cost sharing is really needed to be competitive.   

So where does the money for cost sharing come from on the university side?  Good question.  The details depend on the university.  Departments, deans, and the central administration typically have some financial resources that they can use to support cost sharing, but how these responsibilities get arranged and distributed varies.  

For the open-ended cost sharing situations, one question that comes up is, how much is too much?  As I'd discussed before, university administrations often argue that research is already a money-losing proposition, in the sense that the amount of indirect costs that they bring in does not actually come close to covering the true expenses of supporting the research enterprise.  That would argue in favor of minimizing cost sharing offers, except that schools really do want to land some of these awards.  (Clearly there are non-financial or indirect benefits to doing research, such as scholarly reputation, or universities would stop supporting that kind of work.)  It would be very interesting if someone would set up a rumor-mill-style site, so that institutions could share with peers roughly what they are offering up for certain programs - it would be revealing to see what it takes to be competitive.