Harvard Astronomy 201b

Archive for April, 2013|Monthly archive page

ARTICLE: Stellar feedback in galaxies and the origin of galaxy-scale winds (Hopkins et al. 2012)

In Journal Club 2013 on April 27, 2013 at 4:21 pm

Summary by Kate Alexander

Handout: Download Here

Link to paper: Hopkins et al. (2012)

Abstract

Feedback from massive stars is believed to play a critical role in driving galactic super-winds that enrich the intergalactic medium and shape the galaxy mass function, mass–metallicity relation and other global galaxy properties. In previous papers, we have introduced new numerical methods for implementing stellar feedback on sub-giant molecular cloud (sub-GMC) through galactic scales in numerical simulations of galaxies; the key physical processes include radiation pressure in the ultraviolet through infrared, supernovae (Type I and Type II), stellar winds (‘fast’ O star through ‘slow’ asymptotic giant branch winds), and HII photoionization. Here, we show that these feedback mechanisms drive galactic winds with outflow rates as high as ∼10–20 times the galaxy star formation rate. The mass-loading efficiency (wind mass-loss rate divided by the star formation rate) scales roughly as \dot{M}_{wind}/\dot{M}_* \propto V_c^{-1} (where V_c is the galaxy circular velocity), consistent with simple momentum-conservation expectations. We use our suite of simulations to study the relative contribution of each feedback mechanism to the generation of galactic winds in a range of galaxy models, from Small Magellanic Cloud-like dwarfs and Milky Way (MW) analogues to z ∼ 2 clumpy discs. In massive, gas-rich systems (local starbursts and high-z galaxies), radiation pressure dominates the wind generation. By contrast, for MW-like spirals and dwarf galaxies the gas densities are much lower and sources of shock-heated gas such as supernovae and stellar winds dominate the production of large-scale outflows. In all of our models, however, the winds have a complex multiphase structure that depends on the interaction between multiple feedback mechanisms operating on different spatial scales and time-scales: any single feedback mechanism fails to reproduce the winds observed. We use our simulations to provide fitting functions to the wind mass loading and velocities as a function of galaxy properties, for use in cosmological simulations and semi-analytic models. These differ from typically adopted formulae with an explicit dependence on the gas surface density that can be very important in both low-density dwarf galaxies and high-density gas-rich galaxies.

Introduction

Galaxy evolution cannot be properly understood without accounting for strong feedback from massive stars. Specifically, in cosmological models that don’t include feedback processes, the star formation rates in simulated galaxies are much too high, as gas quickly cools and collapses. Additionally, these simulations find that the total amount of gas present in galactic disks is too high. Both of these problems can be solved by including local outflows and galactic superwinds that remove baryons from the disks, slowing star formation and bringing simulations in line with observations. Such winds are difficult to include in simulations, however, because they have their origins in stellar feedback processes, which occur on small scales. Most simulations are either too low resolution to properly model these processes, or they make simplifying assumptions about the physics that prevent accurate modeling of winds. Thus, although we have seen observational evidence of such winds in real galaxies (for example, Coil et al. 2011; Hall et al. 2012), until recently simulations have not been able to generate galactic winds from first principles and have instead added them in manually. Hopkins, Quataert, and Murray for the first time present a series of numerical simulations that successfully reproduce galactic winds that are consistent with observations for a wide range of galaxy types. Unlike previous work, their simulations have both sufficiently high resolution to focus on small-scale processes in giant molecular clouds (GMCs) and star forming regions and the physics to account for multiple types of stellar feedback, not just thermal heating from supernovae. These simulations are also discussed in two companion papers (Hopkins et al. 2011 and Hopkins et al. 2012), which focus on the star formation histories and properties of the galactic ISM of simulated galaxies and outline rigorous numerical tests of the models. The 2011 paper was discussed in the 2011 Ay 201b journal club and is summarized nicely here.

Key Points

  1. Simulations designed to study stellar feedback processes have, for the first time, succeeded in reproducing galactic winds capable of removing material from galaxies at several times the star formation rate when multiple feedback mechanisms are included. They also reproduce the observed inverse scaling of wind mass loading with galactic circular velocity, \dot{M}_{wind}/\dot{M}_* \propto V_c^{-1}.
  2. Radiation pressure is the primary mechanism for the generation of winds in massive, gas-rich galaxies like local starburst galaxies and high redshift galaxies, while supernovae and stellar winds that shock-heat gas are more important in less gas-rich Milky Way-like galaxies and dwarf galaxies.
  3. The wind mass loading and velocity are shown to depend on the gas surface density, an effect which has not previously been quantified.

Models and Methods

The authors used the parallel TreeSPH code GADGET-3 (Springel 2005) to perform their simulations. The simulations include stars, gas, and dark matter and account for cooling, star formation, and stellar feedback. The types of stellar feedback mechanisms they include are local deposition of momentum from radiation pressure, supernovae, and stellar winds; long-range radiation pressure from photons that escape star forming regions; shock heating from supernovae and stellar winds; gas recycling; and photoheating of HII regions. The populations of young, massive stars responsible for most of these feedback mechanisms are evolved using standard stellar population models.

These feedback mechanisms are considered for four different standard galaxy models, each containing a bulge, a disk consisting of stars and gas, and a dark matter halo. These four models are:

  1. HiZ: a massive, starburst galaxy at a redshift of 2, with properties chosen to resemble those of non-merging submillimeter galaxies.
  2. Sbc: a gas-rich spiral galaxy, with properties chosen to resemble those of luminous infrared galaxies (LIRGs).
  3. MW: a Milky Way-like spiral galaxy.
  4. SMC: a dwarf galaxy, with properties similar to those of the Small Magellanic Cloud.

Simulations were run for each of these models at a range of resolutions (ranging from 10 pc to sub-pc smoothing lengths) to ensure numerical convergence before settling on a standard resolution. The standard simulations include about 10^8 particles with masses of 500M_{\odot} and have smoothing lengths of about 1-5 pc. (For more details, see the companion papers Hopkins et al. 2011 and Hopkins et al. 2012 or the appendix of this paper). The authors then ran a series of simulations with one or more feedback mechanisms turned off, to assess the relative importance of each mechanism to the properties of the winds generated in the standard model containing all of the feedback mechanisms.

Results

When all feedback mechanisms are included, the simulations produce the galaxy morphologies seen below. The paper also considers what each galaxy would look like in the X-ray, which traces the thermal emission from the outflows.


The range of morphologies produced with all feedback mechanisms active for the four different model galaxies studied (from left to right: HiZ, Sbc, MW, and SMC). The top two rows show mock images of the galaxies in visible light and the bottom two rows show the distribution of gas at different temperatures (blue=cold molecular gas, pink=warm ionized gas, yellow=hot X-ray emitting gas). Taken from figure 1 of the paper.

Wind properties and dependence on different feedback mechanisms

As shown above, all four galaxy models have clear outflows when all of these feedback mechanisms are included. When individual feedback mechanisms are turned off to study the relative importance of each mechanism in the different models, the strength of the outflows diminishes. For the HiZ and Sbc models, radiation pressure is shown to be the most important contributing process, while for the MW and SMC models gas heating (from supernovae, stellar wind shock heating, and HII photoionization heating) is more important. The winds are found to consist mostly of warm (2000 K < T < 4×10^5 K, largely ionized) and hot, diffuse (T > 4×10^5 K) gas, with small amounts of colder (T < 2000 K, mostly molecular) gas. Particles in the wind have a range of velocities, in contrast to simpler simulations that often assume a wind with a single, constant speed.

For the purpose of studying galaxy formation, the most important property of the wind is the total mass outflow rate, \dot{M}_{wind}. This is often expressed in terms of the galactic wind mass-loading efficiency, defined as M_{wind}/M_{new} = \int\dot{M}_{wind}\,dt / \int\dot{M}_*\,dt, where \dot{M}_{wind} is the wind outflow rate and \dot{M}_* is the star formation rate. The galactic mass-loading efficiency for each galaxy model is shown below. By comparing the mass-loading efficiency produced by simulations with all feedback mechanisms turned on (the “standard model”) to simulations with some feedback mechanisms turned off, the importance of each mechanism becomes clear. While the standard model cannot be replicated without all of the feedback mechanisms turned on, radiation pressure is clearly much more important than heating for the HiZ case and less important than heating for the MW and SMC cases. The Sbc case is intermediate, with radiation pressure and heating being of comparable importance.


Galactic wind-mass loading efficiency for each of the four galaxy models studied, taken from figure 8 of the paper.

Derivation of a new model for predicting the wind efficiency of a galaxy

After doing some basic plotting of the wind mass-loading efficiency versus global properties of the galaxy models studied (such as star formation rate and total galaxy mass), the authors explore whether there exists a better model for predicting what the wind mass-loading efficiency should be for a given galaxy. After studying the relations between the wind mass loss rate \dot{M}_{wind} and a range of galaxy properties as a function of radius R, time t, and model type, they conclude that the mass loss rate is most directly dependent on the star formation rate \dot{M}_*, the circular velocity of the galaxy V_c(R), and the gas surface density \Sigma_{gas}. They find that the mass-loading efficiency can be described by:

\left<\frac{\dot{M}_{wind}}{\dot{M}_*}\right>_R \approx 10\eta_1\left(\frac{V_c(R)}{100 \text{km/s}}\right)^{-(1+\eta_2)}\left(\frac{\Sigma_{gas}(R)}{10 M_{\odot}\text{pc}^{-2}}\right)^{-(0.5+\eta_3)}

where \eta_1\sim 0.7-1.5, \eta_2\sim\pm0.3, and \eta_3\sim\pm0.15 are the uncertainties from the fits of individual simulated galaxies to the model. This relationship is plotted below along with instantaneous and time-averaged properties of simulated galaxies. The dependence of the wind mass loss rate on the star formation rate and the circular velocity of the galaxy match previous results and are easily understandable in terms of conservation of momentum, but the dependence on the surface density of the gas initially seems more surprising. Hopkins et al. posit that this is due to the effects of density on supernovae remnants: for low-density galaxies the expanding hot gas from the supernova will sweep up material with little resistance, increasing its momentum over time, while for high-density galaxies radiative cooling of this gas becomes more important, so it will impart less momentum to swept up material. Therefore supernovae in denser environments contribute less to the wind, all other factors being equal, introducing a dependence of the wind mass loss rate on gas surface density.

Figure: the fitted mass-loading relation compared with the instantaneous and time-averaged properties of the simulated galaxies. Taken from the paper.
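To get a feel for the numbers implied by the fitting function above, here is a minimal Python sketch that evaluates it for a few rough, assumed combinations of circular velocity and gas surface density; the fiducial values below are illustrative guesses rather than the paper's actual model parameters.

```python
# Illustrative evaluation of the fitted mass-loading relation quoted above:
# <Mdot_wind/Mdot_*> ~ 10 * eta1 * (Vc / 100 km/s)^-(1+eta2) * (Sigma_gas / 10 Msun/pc^2)^-(0.5+eta3)

def mass_loading(v_c_kms, sigma_gas, eta1=1.0, eta2=0.0, eta3=0.0):
    """Wind mass-loss rate divided by star formation rate."""
    return (10.0 * eta1
            * (v_c_kms / 100.0) ** (-(1.0 + eta2))
            * (sigma_gas / 10.0) ** (-(0.5 + eta3)))

# Rough, assumed values of circular velocity [km/s] and gas surface density [Msun/pc^2]
examples = {"SMC-like dwarf": (50, 10), "MW-like spiral": (200, 10), "HiZ clumpy disc": (300, 1000)}
for name, (vc, sigma) in examples.items():
    print(f"{name:16s}: Mdot_wind/Mdot_* ~ {mass_loading(vc, sigma):.2f}")
```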

Discussion and Caveats

The results from this paper are fairly robust, as the detailed treatment of multiple feedback mechanisms allows the authors to avoid making some of the simplifying assumptions that are often necessary in galaxy simulations (artificially turning off cooling to slow star formation rates, etc). The combination of high resolution simulations and more realistic physics does a good job of confirming previous numerical work and observational results. Needless to say, however, there is still room for improvement.

One major caveat of these results is that all of the model galaxies are assumed to exist in isolation, with no surrounding intergalactic medium (IGM). In reality, galactic outflows will interact with the IGM and hot coronal gas already present in a halo around the galaxy, which affects the structure of the wind. Additionally, feedback effects from black holes and AGN are not discussed, nor are galactic inflows. Comparisons between observations and simulations have shown that AGN-driven winds cannot alone explain the observed star formation rates in real galaxies (Keres et al. 2009), but they may still be an important contributing factor.

Furthermore, the authors note that their method of modeling radiation pressure is “quite approximate” and could be improved. Cosmic rays are not included and the scattering and absorption of UV and IR photons has been simplified. Computational limits (unresolvable processes) also place constraints on the robustness of the results.

Many of the quantities discussed here are easily derivable from high-resolution simulations, but harder to estimate from observations of real galaxies or simulations that have lower resolution. A good discussion of how simulations compare to the observed galaxy population can be found in Keres et al. 2009. Measurements of hydrogen-alpha emission in galaxies can be used to infer their star formation rate and measurements of their X-ray halos can be used to infer the mass-loss rate from galactic winds, but this requires high-quality observational data that becomes increasingly difficult to capture for galaxies at non-zero redshift (Martin 1998). Depending on the resolution of a simulation or telescope, determining quantities like the galactic rotation curve and gas surface density may not be directly possible. When seeking to apply these results to understand the formation history of galaxies in observational data, these limitations should be taken into account.

References and Further Reading

Hopkins, P. F., Quataert, E., & Murray, N. 2012, MNRAS, 421, 3522

Hopkins, P. F., Quataert, E., & Murray, N. 2011, MNRAS, 417, 950 (Paper I)

Hopkins, P. F., Quataert, E., & Murray, N. 2012, MNRAS, 421, 3488 (Paper II)

Keres, D. et al., 2009, MNRAS, 396, 2332

Martin, C. L., 1998, ApJ, 513, 156

Details on radio observations

In Uncategorized on April 26, 2013 at 9:18 pm

How were the observations done?

If you are a theorist, you probably don’t like words like “bandpass”, and hate words like “calibration.”  If you are an observer, you probably know and love these words.  But even theorists have to understand how observations are done: they’re what keep science science.  Here I want to briefly explain some aspects of the paper’s discussion of how the data were taken that were opaque on a first read.

First of all, given that the NH_3 rest frequency is roughly 24 GHz, we can find the energy of the transition: E=h\nu = 10^{-4}\;\rm{eV}.  This corresponds to an excitation temperature of 1.2 K, which we can also translate into a velocity via v\sim (E/m)^{1/2} \sim 24\; {\rm m/s}.  A typical molecular cloud temperature is on the order of 20 K, high enough that there should be many molecules in the upper energy state (think of it as a toy-model 2 level system) and hence many upper to lower transitions occurring (leading to emission at 24 GHz).
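For readers who want to check these order-of-magnitude numbers, here is a short Python sketch that reproduces them from the constants involved (the NH3 mass is approximated as 17 atomic mass units).

```python
import numpy as np

h = 6.626e-34           # Planck constant [J s]
k = 1.381e-23           # Boltzmann constant [J/K]
m_NH3 = 17 * 1.661e-27  # approximate NH3 molecular mass [kg]

nu = 24e9               # rest frequency of the line [Hz]
E = h * nu              # transition energy [J]

print(E / 1.602e-19)       # ~1e-4 eV
print(E / k)               # equivalent temperature, ~1.2 K
print(np.sqrt(E / m_NH3))  # velocity scale (E/m)^(1/2), ~24 m/s
```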

A 1 MHz window means the observations were done between 24 GHz − 0.5 MHz and 24 GHz + 0.5 MHz, and dividing this interval by 256 gives the 3.9 kHz per channel quoted.   This can be translated into a velocity by thinking of it using the small-z redshift formula z\approx v/c. Here, we have a fractional frequency shift \Delta \nu / \nu \simeq v/c, and setting \Delta \nu / \nu = 3.9 kHz / 24 GHz leads to the velocity quoted (0.049 km/s).

The baselines given (35 meters up to 1 km) determine the spatial resolution: just good old Carroll and Ostlie diffraction limit, \theta = 1.22 \lambda/D, where \theta is the angular resolution, \lambda the wavelength of light, and D the diameter of the aperture (here, the baseline).  24 GHz corresponds to \lambda = 1.25 cm, so with a 1 km baseline the angular resolution is about 3'' (for comparison, roughly 60 times coarser than the ~0.05'' resolution of the Hubble Space Telescope at 500 nm).
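The same back-of-the-envelope arithmetic for the channel width and the diffraction limit can be written out explicitly; this sketch just reproduces the velocity and angular resolutions quoted above.

```python
import numpy as np

c = 2.998e8   # speed of light [m/s]
nu = 24e9     # observing frequency [Hz]

# Velocity resolution per channel: 1 MHz window split into 256 channels (~3.9 kHz each)
dnu = 1e6 / 256.0
print(c * dnu / nu / 1e3)        # ~0.049 km/s

# Diffraction-limited angular resolution for the longest (1 km) baseline
lam = c / nu                     # ~1.25 cm
theta = 1.22 * lam / 1e3         # [rad]
print(np.degrees(theta) * 3600)  # ~3 arcseconds
```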

The neper is a unit of optical depth; 0.0689 is optically thin (and measures the optical depth due to the Earth’s atmosphere, since the authors note this corresponds to fair weather; there is negligible optical depth from sources in space at this frequency).  It is interesting to note that the optical depth is given at 22 GHz, but the observations are made at 24 GHz; this is because 22 GHz is a standard fiducial frequency to allow comparison with the weather conditions under which other radio observations might have been made.

Now, calibration—theorists’ bane.  The paper’s discussion is compact and therefore somewhat confusing to those uninitiated into the secrets of radio astronomy, but understanding it is worthwhile for the insight it can yield into what actually goes on at these remote “large arrays” in the desert (alien storage?).

Possible very large array user? Image credit: http://www.123rf.com

In a radio interferometric observation, what you actually measure is the amplitude and phase of the signal in each channel (here, there are 256): i.e., at each different frequency corresponding to the channels in the window about 24 GHz.  (Note: interferometric means that phases are being compared between signal received at spatially separated points on Earth: this allows the difference in path length from the source to two antennae to be computed, and hence the direction to the source.)  The amplitude and phase at these different frequencies are desirable because they give (eventually) intensity as a function of frequency, or equivalently a spectral energy distribution (SED).  This in turn allows inferences about the properties of the source (e.g. dust grain size from SEDs of debris disks, cf. problem set 2, problem 4 (\beta describes the shape of the SED)).

There will be some intrinsic noise in both amplitude and phase, which you want to eliminate.  For a source that has a flat spectrum, you know any bumps you see at a particular frequency are due to noise in the channel at that frequency.  This is the purpose of having an amplitude calibrator: typically quasars are used because they have flat spectra.

Now, the amplitude is measured by the amount of electrical current coming from a given channel: the more photons at that frequency hit the antenna, the more current.  But astronomers care about flux, not current.  The absolute flux calibrator is a source with a known flux. Measuring the voltage this source produces allows the conversion factor between voltage and flux to be deduced.  Using a source with known flux also allows one to calibrate the bandpass: the flux should be zero outside the window, but to know this is because the window is working, you have to be sure the flux is not just coincidentally zero at frequencies outside the window anyway.

Finally, the astute reader may notice that noise is reported as mJy/beam.  The noise should be linear in the area of the beam, so a larger beam would have more noise.  Thus two sets of observations done with different beam sizes would have different noise levels due to this; reporting noise/beam allows direct comparison of how good observations done with different beam sizes are.

ARTICLE: On the Density of Neutral Hydrogen in Intergalactic Space

In Journal Club, Journal Club 2013 on April 20, 2013 at 12:25 am

Author: Yuan-Sen Ting, April 2013

 

In the remembrance of Boston bombing victims: M. Richard, S. Collier, L. Lu, K. Campell

“Weeds do not easily grow in a field planted with vegetable.
Evil does not easily arise in a heart filled with goodness.”

Dharma Master Cheng Yen.

 

Related links:

 

  1. For more information and interactive demonstrations of the concepts discussed here, see the interactive online software module I developed for the Harvard AY201b course.

 

Introduction

 

Although this seminal paper by Gunn & Peterson (1965) comprises only four pages (not to mention, it is in single column and has a large margin!), the authors suggested three ideas that are still being actively researched by astronomers today, namely

  1. Lyman alpha (Lyα) forest
  2. Gunn-Peterson trough
  3. Cosmic Microwave Background (CMB) polarization

Admittedly, they put them in a slightly different context, and the current methods on these topics are much more advanced than what they studied, but one could not ask for more from a four-page note! In the following, we will give a brief overview of these topics, the initial discussion in Gunn & Peterson (1965), and the current thinking/results on them. Most of the results are collated from the few review articles/papers in the reference list and we will refer to them herein.

 

Lyman alpha forest

 

Gunn and Peterson propose using Lyα absorption in the spectra of distant quasars to study the neutral hydrogen in the intergalactic medium (IGM). The quasar acts as the light source, like shining a flashlight across the Universe, and provides a characteristic, relatively smooth spectrum. As the light travels from the quasar to us, intervening neutral hydrogen clouds along the line of sight absorb the quasar continuum at the HI Lyα transition (1215.67 Å) in the rest frame of each absorbing cloud. Because the characteristic quasar spectrum is well understood, it is relatively easy to isolate and study this absorption.

More importantly, in our expanding Universe, the spectrum is continuously redshifted. Therefore the intervening neutral hydrogen at different redshifts will produce absorption at many different wavelengths blueward of the quasars’ Lyα emission. After all these absorptions, the quasar spectrum will have a “forest” of lines (see figure below), hence the name Lyα forest.

 


Figure 1. Animation showing a quasar spectrum being continuously redshifted due to cosmic expansion, with various Lyman absorbers acting at different redshifts. From the interactive module.
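As a toy illustration of the redshifting described above, the following sketch computes where Lyα absorbers at a few assumed redshifts land in the observed spectrum of a hypothetical z = 3 quasar; every absorber appears blueward of the quasar's own Lyα emission.

```python
LYA = 1215.67  # rest wavelength of HI Lyman alpha [Angstrom]

z_quasar = 3.0
print(f"quasar Lya emission observed at {LYA * (1 + z_quasar):.0f} A")

# Intervening clouds at z_abs < z_quasar absorb at shorter observed wavelengths
for z_abs in (2.0, 2.5, 2.9):
    print(f"absorber at z = {z_abs}: line observed at {LYA * (1 + z_abs):.0f} A")
```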

 

For a quasar at high redshift, along the line of sight, we are sampling a significant fraction of the Hubble time. Therefore, by studying the Lyα forests, we are essentially reconstructing the history of structure formation. To put our discussion in context, there are two types of Lyα absorbers:

  1. Low column density absorbers: these are believed to be the sheetlike-filamentary neutral hydrogen distributed in the web of dark matter formed in the ΛCDM cosmology. In this cosmic web, the dark matter filaments are sufficiently shallow to avoid gravitational collapse of the gas but deep enough to prevent these clouds of the inter- and proto-galactic medium from dispersing.
  2. High column density absorbers: these are mainly due to intervening galaxies/proto-galaxies, such as the damped Lyα systems. As opposed to the low column density absorbers, galaxies are generally more metal rich than the filamentary gas, so the Lyα absorption due to these systems is usually accompanied by characteristic metal absorption lines. There is also an obvious damping wing visible in the Voigt profile of the Lyα line, hence the name damped Lyα system.
Although the high column density absorbers are extremely important for detecting high redshift galaxies, our discussion here focuses on the IGM, so for the rest of this post we will only discuss the low column density absorbers.

 

Low column density absorbers

 

So, what can we learn about neutral hydrogen in the Universe by looking at Lyα forests? With a large sample of high redshift quasar spectra, we can bin up the absorption lines due to intervening clouds of the same redshift. This allows us to study the properties of gas at that distance (at that time in the Universe’s history), and learn about the evolution of the IGM. In this section, we summarize some of the crucial properties that are relevant in our later discussion:

 


Figure 2. Animation illustrating the effect of the redshift index on the quasar absorption spectrum. A higher index means more closely spaced absorption near the emission peak. From the interactive module.

 

It turns out that the properties of the IGM can be rather well described by power laws. If we define n(z) as the number density of clouds as a function of redshift z, and N(ρ) as the number density of clouds as a function of column density ρ, both are observed to follow power laws of the form n(z) \propto (1+z)^{\gamma} and N(\rho) \propto \rho^{-\delta} [see Rauch (1998) for the measured indices].

 

Note that, in order to study these distributions, ideally, high resolution spectroscopy is needed (FWHM < 25 km s^{-1}) such that each line is resolved.

There is an upturn in the redshift power law index at high redshift. This turns out to be relevant to the Gunn-Peterson trough, which we will discuss later. It has also been shown, via cosmological hydrodynamic simulations, that the ΛCDM model is able to reproduce the observed power laws.

But we can learn about more than just gas density. The Doppler widths of the Lyα lines are consistent with the photoionization temperature (about 10^4 K). This suggests that the neutral hydrogen gas in the Lyα clouds is in photoionization equilibrium. In other words, the photoionization (i.e. heating) due to the UV background photons is balanced by cooling processes via thermal bremsstrahlung, Compton cooling and the usual recombination. Photoionization equilibrium allows us to calculate how many photons have to be produced, either from galaxies or quasars, in order to maintain the gas at the observed temperature. Based on this number of photons, we can deduce how many stars were formed at each epoch in the history of the Universe.
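A minimal sketch of photoionization equilibrium may help make this concrete: ionizations balance recombinations, Γ n_HI = α(T) n_e n_p. The photoionization rate and gas density below are assumed, illustrative values; only the case-B recombination coefficient at ~10^4 K is a standard number.

```python
# Photoionization equilibrium for a highly ionized hydrogen gas:
#   Gamma * n_HI = alpha_B(T) * n_e * n_p, with n_e ~ n_p ~ n_H when ionization is nearly complete.
alpha_B = 2.6e-13   # case-B recombination coefficient at ~1e4 K [cm^3 s^-1]
Gamma = 1e-12       # assumed HI photoionization rate from the UV background [s^-1]
n_H = 1e-4          # assumed total hydrogen number density [cm^-3]

x_HI = alpha_B * n_H / Gamma   # equilibrium neutral fraction
print(x_HI)                    # ~2.6e-5: a tiny neutral fraction, which matters for the next section
```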

We’ve been relying on the assumption that these absorbers are intergalactic gas clouds. How can we be sure they’re not actually just outflows of gas from galaxies? This question has been tackled by studying the two-point correlation function of the absorption lines. In a nutshell, the two-point correlation function measures the clustering of the absorbers. Since galaxies are highly clustered, at least at low redshift, if the low density absorbers were related to the outflows of galaxies, one would expect clustering of these absorption signals. Studies of the Lyα forest suggest that the low column density absorbers are not clustered as strongly as galaxies, favoring the interpretation that they are filamentary structures or intergalactic gas [see Rauch (1998) for details].

However this discussion is only valid in the relatively recent Universe. When we look further away, and therefore at earlier times, there are fewer ionizing photons and correspondingly more neutral hydrogen atoms. With more and more absorption, eventually the lines in the forest become so compact and deep that the “forest” becomes a “trough”.

 

Gunn-Peterson trough

 

In their original paper in 1965, Gunn and Peterson estimated how much absorption there should be in a quasar spectrum due to the amount of hydrogen in the Universe. The amount of absorption can be quantified by the optical depth, which is a measure of transparency: a higher optical depth indicates more absorption by neutral hydrogen. The cosmological model they used is now obsolete, but their reasoning and derivation are still valid; one can still show that the optical depth is

\tau_{GP} = \frac{\pi e^2}{m_e c}\,\frac{f \lambda\, n_{HI}}{H(z)}

With a modern twist on the cosmological model, the optical depth of neutral hydrogen at each redshift can be estimated to be

\tau_{GP} \approx 1.8\times10^5\, h^{-1}\, \Omega_m^{-1/2}\left(\frac{\Omega_b h^2}{0.02}\right)\left(\frac{1+z}{7}\right)^{3/2}\frac{n_{HI}}{n_H}

where z is the redshift, λ is the transition wavelength, e is the electric charge, m_e is the electron mass, f is the oscillator strength of the electronic transition from N=1 to 2, H(z) is the Hubble parameter at redshift z, h is the dimensionless Hubble parameter, Ω_b is the baryon density of the Universe, Ω_m is the matter density, n_HI is the number density of neutral hydrogen and n_H is the total number density of both neutral and ionized hydrogen.

Since the optical depth is proportional to the density of neutral hydrogen, and the transmitted flux ratio decreases exponentially with the optical depth, even a tiny neutral fraction, n_HI/n_H = 10^{-4} (i.e. one neutral hydrogen atom in ten thousand hydrogen atoms), will give rise to complete Lyα absorption. In other words, the transmitted flux should be zero. This complete absorption is known as the Gunn-Peterson trough.
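To see how quickly the absorption saturates, here is a small sketch that plugs a few neutral fractions into the optical depth scaling above (with assumed round cosmological parameters, h = 0.7, Ω_m = 0.3, Ω_b h² = 0.02) and converts to transmitted flux.

```python
import numpy as np

def tau_GP(z, x_HI, h=0.7, Om=0.3, Obh2=0.02):
    """Gunn-Peterson optical depth for a given neutral fraction x_HI = n_HI/n_H."""
    return 1.8e5 / h / np.sqrt(Om) * (Obh2 / 0.02) * ((1 + z) / 7.0) ** 1.5 * x_HI

z = 6.0
for x_HI in (1e-5, 1e-4, 1e-3):
    tau = tau_GP(z, x_HI)
    print(f"x_HI = {x_HI:.0e}: tau ~ {tau:.1f}, transmitted flux ~ {np.exp(-tau):.1e}")
```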

In 1965, Gunn and Peterson performed their calculation assuming that most hydrogen atoms in the Universe are neutral. From their calculation, they expected to see a trough in the quasar (redshift z∼3) data they were studying. This was not the case.

In order to explain the observation, they found that the neutral hydrogen density has to be five orders of magnitude smaller than expected. They came to the striking conclusion that most of the hydrogen in the IGM exists in another form — ionized hydrogen. Lyman series absorptions occur when a ground state electron in a hydrogen atom jumps to a higher state. Ionized hydrogen has no electrons, so it does not absorb photons at Lyman wavelengths.

 


Figure 3. Animation illustrating Gunn-Peterson trough formation. The continuum blueward of Lyα is drastically absorbed as the neutral hydrogen density increases. From the interactive module.

 

Gunn and Peterson also showed that since the IGM density is low, radiative ionization (instead of collisional) is the dominant source to maintain such a level of ionization. What an insight!

 

The history of reionization

 

In modern astronomy language, we refer to Gunn and Peterson’s observation by saying the Universe was “reionized”. Combined with our other knowledge of the Universe, we now know the baryonic medium in the Universe went through three phases.

  1. In the beginning of the Universe (z > 1100), all matter is hot, ionized, optically thick to Thomson scattering and strongly coupled to photons. In other words, the photons are being scattered all the time; the Universe is opaque.
  2. As the Universe expands and cools adiabatically, the baryonic medium eventually recombines: electrons become bound to protons and the Universe becomes neutral. With matter and radiation decoupled, the CMB radiation is free to propagate.
  3. At some point, the Universe gets reionized and most neutral hydrogen becomes ionized. It is believed that the first phase of reionization is confined to the Stromgren spheres of the photoionizing sources. Eventually, these spheres grow and overlap, completing the reionization, as sources of photoionizing photons slowly etch away the remaining dense, neutral, filamentary hydrogen.

 

When and how reionization happened in detail are still active research areas that we know very little about. Nevertheless, studying the Gunn-Peterson trough can tell us when reionization completed. The idea is simple. As we look at quasars further and further away, there should be a redshift where the quasar sources were emitting before reionization completed. These quasar spectra should show Gunn-Peterson troughs.

 

Pinning down the end of reionization

 

The first confirmed Gunn-Peterson trough was discovered by Becker et al. (2001). As shown in the figure below, the average transmitted flux in some parts of the quasar spectrum is zero.

 



Figure 4. The first confirmed detection of the Gunn-Peterson trough (top panel, where the continuum goes to zero flux due to intervening absorbers), from a quasar beyond z = 6. The bottom panel shows that a quasar at redshift below 6 retains non-zero flux across its spectrum. Adapted from Becker et al. (2001).

 



Figure 5. The close up spectra of the redshift z > 6 quasar. In the regions corresponding to absorbers at redshift 5.95 < z < 6.16 (i.e. the middle regions), the transmitted flux is consistent with zero. This indicates a sharp increase in the neutral fraction at z > 6. Note that, near the quasar (the upper end), there is transmitted flux. This is mostly because the neutral gas near the quasar is ionized by the quasar emission, so the Gunn-Peterson trough is suppressed; this is also known as the proximity effect. In the regions corresponding to lower redshift, there is also apparent transmitted flux, indicating that the neutral fraction decreases significantly.

 

Could this be due to the dense interstellar medium of a galaxy along the line of sight, rather than IGM absorbers? A few lines of evidence refute that idea. First, metal lines typically seen in galaxies (even in the early Universe) are not observed at the same redshift as the Lyman absorptions. Second, astronomers have corroborated this finding using several other quasars. We see these troughs not just at a few particular wavelengths in some quasar spectra (as you might expect for scattered intervening galaxies); rather, the average optical depth grows steadily beyond a redshift of z ∼ 6, exceeding a simple extrapolation of its growth at lower redshifts (see figure below). The rising optical depth is due to the neutral hydrogen fraction rising dramatically beyond z > 6.

 


Figure 6. Quasar optical depth as a function of redshift. The observed optical depth exceeds the predicted trend (solid line), suggesting reionization. Adapted from Becker et al. (2001).

 

Similar studies using high redshift (z > 5) quasars confirm a dramatic increase in the IGM neutral fraction, from n_HI/n_H ≈ 10^{-4} at z < 6 to n_HI/n_H > 10^{-3}–0.1 at z ∼ 6. At z > 6, complete absorption troughs begin to appear.

 


Figure 7. A more recent compilation of Figure 6 from Fan et al. (2006). Note that the sample variance increases rapidly with redshift. This could be explained by the fact that high redshift star-forming galaxies, which likely provide most of the UV photons for reionization, are highly clustered, therefore reionization is clumpy in nature.

 


Figure 8. A similar plot, but translating the optical depth into volume-averaged neutral hydrogen fraction of the IGM. The solid points and error bars are measurements based on 19 high redshift quasars. The solid line is inferred from the reionization simulation from Gnedin (2004) which includes the overlapping stage to post-overlapping stage of the reionization spheres.

 

But it’s important to note that, because even a very small amount of neutral hydrogen in the IGM can produce optical depth τ≫1 and cause the Gunn-Peterson trough, this observational feature saturates very quickly and becomes insensitive to higher densities. That means that the existence of a Gunn-Peterson trough by itself does not prove that the object is observed prior to the start of reionization. Instead, this technique mostly probes the later stages of cosmic reionization. So the question now is: when does reionization get started?

 

The start of reionization

 

Gunn and Peterson told us not only how to probe the end of reionization through the Gunn-Peterson trough, but also how to probe the early stages of reionization using the polarization of the CMB. They discussed the following scenario: if the Universe starts to be reionized at a certain redshift, then light from more distant background sources should experience Thomson scattering.

Thomson scattering is the scattering of photons by free electrons. Recall that starlight scattered by a surrounding nebula becomes polarized, because of the angular dependence of Thomson scattering. By the same reasoning, the CMB, after Thomson scattering off the ionized IGM, is polarized.

As opposed to the case of the Gunn-Peterson trough, Thomson scattering results from light interacting with the ionized fraction of hydrogen, not the neutral fraction. Therefore, the large scale polarization of the CMB can be regarded as a complementary probe of reionization. The CMB polarization detected by the Wilkinson Microwave Anisotropy Probe (WMAP) suggests a significant ionization fraction extending to much earlier in cosmic history, to z ∼ 11±3 (see figure below). This implies that reionization is a process which starts at some point between z ∼ 8–14 and ends at z ∼ 6–8, and so appears to be quite extended in time. This is in contrast to recombination, when electrons became bound to protons, which occurred over a very narrow range in time (z = 1089±1).

In summary, a very crude chart on reionization using Gunn-Peterson trough and CMB polarization is shown below.

 


Figure 9. The volume-averaged neutral fraction of the IGM versus redshift measured using various techniques, including CMB polarization and Gunn-Peterson troughs.

 

So far, we have left out the discussion of Lyβ, Lyγ absorptions as well as the cosmic Stromgren sphere technique as shown in the figure above. We will now fill in the details before proceeding to the suspects of reionization.

 

How about Lyman beta and Lyman gamma

 

Talking only about the Lyα forest can be an over-simplification. Besides Lyα, other neutral hydrogen transitions such as Lyβ and Lyγ play a crucial role, and sometimes we can even study the helium Lyα line (note that Lyα only designates the transition from electronic state N=1 to 2). We first look at the advantages of, and impediments to, including Lyβ and Lyγ absorption lines in the study of the IGM.

  1. Recall the formula for the optical depth: τ ∝ f λ. Due to the decrease in both oscillator strength and wavelength, for the same neutral hydrogen density the Gunn-Peterson optical depths of Lyβ and Lyγ are 6.2 and 17.9 times smaller than that of Lyα (a short calculation reproducing these ratios is sketched after this list). Having a smaller optical depth means that the absorption lines only saturate at higher neutral fraction/density. Note that once a line is saturated, a large change in column density induces only a small change in apparent line width; in other words, we are on the flat part of the curve of growth, and column densities become relatively difficult to measure. Since Lyβ and Lyγ have smaller optical depths, they can provide more stringent constraints on the IGM ionization state and density when Lyα is saturated. Especially in the case of the Gunn-Peterson trough, as shown in the figures above, the ability to detect Lyβ and Lyγ is crucial for studying reionization.

    Figure 10. The left-hand panel shows the Voigt profile of the Lyα absorption line. The animation shows that when the line is saturated, a large change in column density only induces a small change in the equivalent width (right-hand panel). This degeneracy makes the measurement of column density very difficult when the absorption line is saturated. From the interactive module.

  2. Note that, for the same IGM clouds, Lyα absorption will also be accompanied by Lyβ and Lyγ absorption. Since Lyα has a larger optical depth, it has broader wings in the Voigt profile and a higher chance of blending with other lines. Therefore, simultaneous fits to higher order Lyman lines can help deblend these lines.

  3. As the redshift of the quasar increases, there is a substantial region where Lyβ overlaps with Lyα, and so forth. Within these regions it is difficult to disentangle which absorption lines are due to Lyα and which are due to higher order transitions. Due to this limitation, studies of Lyγ and higher order lines are rare.

    Figure 11. Animation showing the Lyα forest including Lyα and Lyβ absorption lines. One can see that there is substantial overlap of Lyα absorption lines onto the Lyβ region. Therefore, the study of higher order absorptions is very difficult in practice. From the interactive module.
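Here is the short calculation promised in item 1: using standard HI oscillator strengths and rest wavelengths for the first three Lyman lines, the ratio of Gunn-Peterson optical depths (τ ∝ f λ at fixed neutral density) reproduces the factors of roughly 6.2 and 17.9 quoted above.

```python
# tau_GP ∝ f * lambda at fixed neutral hydrogen density, so the ratios follow directly
# from standard HI atomic data (oscillator strength, rest wavelength in Angstrom).
lines = {
    "Lya": (0.4164, 1215.67),
    "Lyb": (0.0791, 1025.72),
    "Lyg": (0.0290, 972.54),
}
f_a, lam_a = lines["Lya"]
for name, (f, lam) in lines.items():
    print(f"tau_Lya / tau_{name} = {(f_a * lam_a) / (f * lam):.1f}")
```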

 

Probing even higher neutral fraction

 

In some of the plots above, we have seen that as the neutral fraction increases, Lyα starts to saturate and can only give a moderate lower bound on the optical depth and the neutral fraction. For higher neutral fraction, we have to use Lyβ and Lyγ. It is also clear that even Lyγ will get saturated very quickly. Are there other higher density probes?

Yes, there are other probes! At high neutral fraction, a quasar in this kind of environment resembles a scaled-up version of an O/B type star in a dense neutral hydrogen region (although the latter medium is mostly molecular, and the former is atomic). Luminous quasars will produce a highly ionized HII region and create so-called cosmic Stromgren spheres. These cosmic Stromgren spheres can have physical sizes of about 5 Mpc owing to the extreme luminosity of the quasars.

Measurements of cosmic Stromgren spheres are sensitive to a much larger neutral fraction than Lyγ can probe. On the flip side of the coin, however, they are subject to many systematics that one can easily imagine, including the three-dimensional geometry of the quasar emission, the age of the quasar, the clumping factor of the surrounding material, etc. This explains the large error bar in the reionization chart above.

 

How about helium absorption

 

Although hydrogen is the most abundant element in the Universe, we also know that helium constitutes a large fraction of the baryonic matter. A natural question to ask: how about helium?

As neutral helium has a similar ionization potential and recombination rate to neutral hydrogen, its reionization is believed to happen at roughly the same time as that of neutral hydrogen. On the other hand, singly ionized helium (HeII) has a higher ionization potential, and its reionization is observed to happen later, at about z ∼ 3. The fact that ionized helium has a higher ionization potential is crucial in disentangling the UV background sources; we will discuss this in detail when we talk about the suspects. In this section, we discuss some of the potential and impediments of using helium Lyα.

  1. Helium and hydrogen have different atomic masses. Both bulk motion and thermal motion in the gas contribute to line broadening, but bulk motions are insensitive to atomic mass whereas thermal velocities go as 1/√μ, where μ is the mean atomic mass. By comparing the broadening of the HeII 304 Å and HI 1215 Å lines, it is in principle possible to separate the contributions of bulk and thermal motions to the line broadening (a minimal sketch of this idea appears after this list). Theories of IGM kinematics can therefore, in principle, be tested, although the usefulness of this approach remains to be seen.
  2. Since ionized helium has a higher ionization potential (Lyman limit at 228 Å) than neutral hydrogen (Lyman limit at 912 Å), the relative ionization fraction is a sensitive probe of the spectral shape of the UV background. For instance, quasars provide harder spectra than soft stellar sources and are able to doubly ionize helium.
  3. Although the helium Lyα forest is awesome, only a small fraction of all quasars are suitable for a search for HeII absorption. The short (304 Å) wavelength of the HeII Lyα line requires the quasar to be significantly redshifted for the line to reach wavelengths where we have sensitive high resolution spectrographs. Furthermore, the HeII Lyα line has a shorter wavelength than the neutral hydrogen Lyman limit. As shown in the interactive module, bound-free absorption beyond the Lyman limit, especially in the presence of a high column density absorber, creates a large blanketing of lines. This renders the large majority of quasars useless for a HeII search even if they are redshifted enough.
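Here is the minimal sketch promised in item 1 of separating thermal from bulk broadening. The observed widths add in quadrature, b_obs² = b_bulk² + 2kT/(μ m_H), so measuring both the HI (μ = 1) and HeII (μ = 4) widths of the same absorber gives two equations for the two unknowns. The line widths used below are assumed, illustrative numbers, not measurements.

```python
import numpy as np

k = 1.381e-23    # Boltzmann constant [J/K]
m_H = 1.673e-27  # hydrogen atom mass [kg]

def thermal_bulk(b_H_kms, b_He_kms):
    """Solve b_obs^2 = b_bulk^2 + 2kT/(mu m_H) for T and b_bulk using HI (mu=1) and HeII (mu=4) widths."""
    bH2 = (b_H_kms * 1e3) ** 2
    bHe2 = (b_He_kms * 1e3) ** 2
    T = (bH2 - bHe2) / (2 * k / m_H * (1 - 0.25))   # subtracting the two equations eliminates b_bulk
    b_bulk = np.sqrt(bHe2 - 2 * k * T / (4 * m_H)) / 1e3
    return T, b_bulk

T, b_bulk = thermal_bulk(20.0, 15.0)   # assumed HI and HeII line widths [km/s]
print(f"T ~ {T:.0f} K, bulk broadening ~ {b_bulk:.1f} km/s")
```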

 

The suspects — who ionized the gas?

 

We have now discussed the techniques for catching the suspects, so we can return to the main question about reionization: who did it? As we have discussed earlier in this post, the IGM is believed to have been reionized early in the Universe’s history via photoionization. Who produced all the high energy, ionizing photons? There are two natural suspects:

  1. Star-forming galaxies
  2. Quasars

It has been argued that soft sources like star-forming galaxies must be the dominant contributors, because there were not many quasars in the early Universe. The Sloan Digital Sky Survey (SDSS) has shown a steep decline in the number of quasars beyond the peak at z = 2.5. If most quasars formed after this epoch, could they play a significant role in the reionization of the Universe, which seems to have been completed as early as z ∼ 6?

Taking a closer look at this problem, Faucher-Giguère et al. (2008) made an independent estimate of the photoionization contribution from quasars. In their study, they consider the Lyα forest optical depth at redshifts 2 < z < 4.2, measured from 86 high resolution quasar spectra.

The idea behind estimating the fractional contribution from quasars was described earlier in this post; we elaborate on it here. Only quasars can produce the very high energy (∼50 eV) photons necessary to doubly ionize helium, so the luminosity of sources at ∼50 eV directly tells us the contribution from quasars. By assuming a spectral index for the spectral energy distribution, one can then extrapolate to lower energies and infer the photoionization rate in the energy range relevant to HI ionization. They show that at most only about 20 per cent of these photons could come from quasars.
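A toy version of that extrapolation: take the quasar emissivity near the HeII ionization edge (~54.4 eV), assume a power-law spectrum J_ν ∝ ν^(-α), and scale it down to the HI ionization edge (13.6 eV). The spectral index used here is an assumed, typical value, not the one adopted in the paper.

```python
# Extrapolating a power-law quasar spectrum from the HeII edge down to the HI edge.
alpha = 1.6                          # assumed quasar extreme-UV spectral index
ratio = (13.6 / 54.4) ** (-alpha)    # emissivity at 13.6 eV relative to 54.4 eV
print(ratio)                         # ~9 for this index
```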

Besides showing that quasars cannot be the main suspect, Faucher-Giguère et al. also use the Lyα forest to derive the star formation rate indirectly. We have discussed that the absorbers are in photoionization equilibrium. With this assumption, the authors use the Lyα optical depth to derive the photoionization rate at different redshifts, turn that into the UV emissivity of star-forming galaxies, and then infer the star formation rate in galaxies at different times in the history of the Universe.

The figure below shows that the derived hydrogen photoionization rate is remarkably flat over the history of the Universe. This suggests that there should be a roughly constant source of photoionizing photons over a large range of cosmic history. This is only possible if the star formation rate in galaxies was already high in the early Universe, at high redshift (z ∼ 2.5–4), since quasars only begin to contribute significantly at z ≲ 2.5. As the figure shows, such a star formation rate is consistent with the simulation of Hernquist & Springel (2003).

 


Figure 12. Observations of the hydrogen photoionization rate compared to the best-fitting model from Hernquist & Springel (2003). This suggests that the combination of quasars and stars alone may account for the photoionization. Adapted from Faucher-Giguère et al. (2008).

 

But there is another way to trace the history of star formation: directly counting the number and size of galaxies at different redshifts, using photometric surveys. It turns out that the results from these direct surveys, as performed by Hopkins & Beacom (2006) for example, are in tension with the indirect approach. The photometric surveys suggest that the star formation rate decreases beyond z > 2.5 [Prof. Hans-Walter Rix once wittily quoted this as the Universe getting tenured at redshift z ∼ 2.5], as in the figure below. This result says that both of the major photoionization machines (quasars and stars in galaxies) would be in decline at z > 2.5.

 


Figure 13. Like Figure 12 above, but now using the models of Hopkins & Beacom (2006) where the star formation history is inferred from photometric surveys.

 

If the photometric surveys are right, Faucher-Giguère et al. argue, the Universe could not have provided enough photons to reionize the intergalactic medium. So why are there two different observational results?

To sum up this section, by studying Lyα forests, Faucher-Giguère et al. argue that

  1. Star-forming galaxies are the major sources of photoionization, although quasars can contribute as much as 20 per cent.
  2. The star formation rate has to be higher than the one estimated observationally from the photometric surveys to provide the required photons.

 

How the suspects did it

 

Although the Faucher-Giguère et al. results suggest that star-forming galaxies are the major sources of photoionization, exactly how these suspects actually did it is not clear. The discrepancy between direct and indirect tracers of the star formation history of the Universe might be reconcilable. Observations of star-forming galaxies are plagued with uncertainties, many of which are still areas of active research. Among the major uncertainties are:

  1. The star formation rate of galaxies at high redshift. We cannot observe galaxies that are too faint and far away, so how many photons could they provide? Submillimetre observations in the near future could shed more light on this, thanks to their amazing negative K-correction and their ability to observe gas and dust in high redshift galaxies.
  2. Starburst galaxies create dust, and dust obscures observations. Infrared and submillimetre observations are crucial to detect these faint galaxies. It is believed that perhaps most of the ionizing photons come from these unassuming, hard-to-detect galaxies, because they are likely individually small and dim but collectively numerous.
  3. The IGM clumping factor. In other words, how concentrated is the material surrounding the galaxies? This factor affects the fraction of ionizing photons that escape into the IGM.

In short, it is possible that star formation did indeed begin far back in cosmic history, at z > 2.5, but that the photometric surveys underestimate it because of these uncertainties. In this case, all the results from simulations, Lyα forests, and photometric surveys could fit together.

 

Any other suspects on the loose

 

We now have two suspects in custody. The remaining question: are there any suspects on the loose? To answer this question, some other interesting proposals have been raised, among them:

  1. High-energy X-ray photons from supernova remnants or microquasars at high redshift. This is possible, but their contribution might overproduce the soft X-ray background that we observe today.
  2. Neutrinos are believed to gain their mass through the seesaw mechanism, and one of the potential ingredients of this mechanism is the sterile neutrino. Reionization by decaying sterile neutrinos cannot be completely ruled out yet. That said, since there is no confirmed detection of a sterile neutrino in particle physics experiments, one should not take this possibility too seriously.
In summary, we cannot be 100 per cent sure, but suspects other than quasars and stellar sources are unlikely.

 

So what have we learned?

 

To summarize what we have discussed in this post:

  1. Non-detection can be a good thing. The non-detection of the Gunn-Peterson trough in the early days demonstrated that the IGM is mostly in an ionized state.
  2. The Lyα forest gives us a 3D picture of the IGM.
  3. The Gunn-Peterson trough provides a direct probe of the end stage of reionization.
  4. CMB polarization suggests when reionization began, and shows that reionization is a process extended in time.
  5. Star-forming galaxies are likely responsible for reionization, but exactly how they did it is a question still under investigation.

 

References

 

  1. Becker R. H. et al., 2001, AJ, 122, 2850
  2. Fan X., Carilli C. L., Keating B., 2006, ARA&A, 44, 415
  3. Faucher-Giguère C.-A., Lidz A., Hernquist L., Zaldarriaga M., 2008, ApJ, 688, 85
  4. Faucher-Giguère C.-A., Prochaska J. X., Lidz A., Hernquist L., Zaldarriaga M., 2008, ApJ, 681, 831
  5. Gunn J. E., Peterson B. A., 1965, ApJ, 142, 1633
  6. Hernquist L., Springel V., 2003, MNRAS, 341, 1253
  7. Hopkins A. M., Beacom J. F., 2006, ApJ, 651, 142
  8. Rauch M., 1998, ARA&A, 36, 267

 

Glossary

 

  1. ΛCDM
  2. Curve of growth
  3. FWHM (Full width half maximum)
  4. K-correction
  5. Quasars
  6. Seesaw mechanism
  7. Sterile neutrino
  8. Stromgren sphere
  9. Voigt profile

ARTICLE: Interpreting Spectral Energy Distributions from Young Stellar Objects

In Journal Club, Journal Club 2013 on April 15, 2013 at 8:02 pm

Posted by: Meredith MacGregor

1. Introduction

When discussing young stellar objects and protoplanetary disks (as well as many other topics in astronomy), astronomers continually throw out the term ‘SED’.  In early April, I attended a conference titled ‘Transformational Science with ALMA: From Dust to Rocks to Planets– Formation and Evolution of Planetary Systems.’  And, I can attest to the fact that the term ‘SED’ has come up in a very significant fraction of the contributed talks.  For those not intimately familiar with this sub-field, this rampant use of abbreviations can be confusing, making it difficult to glean any useful take-aways from a talk.  So, in addition to summarizing the article by Robitaille et al. (2007), the goal of this post is to give a bit of an introduction to the terminology and motivation for the growing field of star and planetary system formation.

SEDs: What They Are and Why We Care So Much

The abbreviation ‘SED’ stands for ‘Spectral Energy Distribution.’  If you want to sound like an expert, this should be pronounced exactly as it appears (analogous to the non-acronym ‘said’).  A SED is essentially a graph of flux versus wavelength.  In the context of the Robitaille et al. article, we are most interested in the SEDs for young stars and the envelopes and disks surrounding them.  So, why exactly does the flux from a young stellar object (YSO) vary with wavelength?  As it turns out, different regions of the YSO emit at different wavelengths.  This means that when we observe at different wavelengths, we are actually probing distinct regions of the star and its surrounding disk and envelope. By tracing out the entire SED for a YSO, we can determine what the geometry, structure, and constituents are for that object.

Before we go too far, it is worth taking a moment to clarify the terminology used to classify young stars.  A young stellar object or YSO is typically defined as any star in the earliest stages of development.  YSOs are almost always found in or near clouds of interstellar dust and gas.  The broad YSO class of objects is then divided into two sub-classes: protostars and pre-main sequence stars.  Protostars are heavily embedded in dust and gas and are thus invisible at optical wavelengths.  Astronomers typically use infrared, sub-millimeter, and millimeter telescopes to explore this stage of stellar evolution, during which a star acquires the bulk of its material via infall and accretion of surrounding material.  Pre-main sequence (PMS) stars are defined to be low mass stars in the post-protostellar phase of evolution.  These stars have yet to enter the hydrogen-burning phase of their life and are still surrounded by remnant accretion disks called ‘protoplanetary’ disks.  A more detailed summary of this classification scheme can be found here.  However, because the evolution of young stars is far from well understood, astronomers often use these terms interchangeably.

SEDs for pre-main sequence stars are often seen to have a bump or excess in the infrared. This bump is generally interpreted as being due to thermal emission from warm dust and gas surrounding the central star. As an illustrative example, let’s consider a protoplanetary disk around a young star. The image below is taken from a review article by Dullemond & Monnier (2010) and shows a graphical representation of the emission spectrum from such a disk.


A graphical representation of the emission spectrum from a protoplanetary disk and the telescopes that can be used to probe different regions of the disk. Taken from Dullemond & Monnier (2010).

In this case, emission in the near-infrared traces the warmer inner regions of the disk.  As you move into submillimeter wavelengths, you start to probe the outer, cooler regions of the disk.  By modeling the observed SED of a pre-main sequence star, you can determine which components of the source are contributing to the observed flux.  The second figure below is taken from a paper by Guarcello et al. (2010).  The left panel of this figure shows the observed SED (solid line) of a BWE star (in line with other ridiculous astronomy acronyms, this stands for ‘blue stars with excesses’) and the unreddened photospheric flux we would expect to see if the star did not have a disk (dashed line).  The right panel shows a model fit to these data.  The authors describe the observed SED using a model with four distinct components, each represented by a colored line in the figure: (1) emission from the reddened photosphere of the central star, (2) radiation scattered into the line of sight from dust grains in the disk, (3) emission from a collapsing outer envelope, and (4) thermal emission from a circumstellar disk.  The sum of these four components makes up the complete SED for this BWE star.


Left panel: The observed SED (solid line) of a BWE star and the unreddened photospheric flux we would expect to see if the star did not have a disk (dashed line). Right panel: A four component model fit to this data. Taken from Guarcello et al. (2010).

Describing and Classifying the Evolution of a Protostar

As a protostar evolves towards the Zero Age Main Sequence (ZAMS), the system geometry (and thus the SED) will evolve as well. Therefore, the stage of evolution of a protostar is often classified according to both the general shape and the features of the SED.  A graphical overview of the four stages of protostellar evolution is shown below (Andrea Isella’s thesis, 2006).  Class 0 objects are characterized by a very embedded central core in a much larger accreting envelope.  The mass of the central core grows in Class I objects and a flattened circumstellar accretion disk develops.  For Class II objects, the majority of circumstellar material is now found in a disk of gas and dust.  Finally, for Class III objects, the emission from the disk becomes negligible and the SED resembles a pure stellar photosphere.  The distinction between these different classes was initially defined by the slope of the SED (termed the ‘spectral index’) at infrared wavelengths.  Class I sources typically have SEDs that rise in the far- and mid-infrared, while Class II sources have flat or falling SEDs in the mid-infrared.  However, this quantitative distinction is not always clear cut and is not an effective way to unambiguously distinguish between the different object classes (Protostars and Planets V, pages 127-128).


A graphical overview of the four stages of protostar evolution taken from Andrea Isella’s thesis (2006). A typical SED of each class is shown in the left column and a cartoon of the corresponding geometry is shown in the right column.
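
To make the ‘spectral index’ idea concrete, here is a quick sketch (my own, using made-up photometry rather than any real source) of how one might estimate \alpha = d\log(\lambda F_\lambda)/d\log\lambda from a handful of infrared fluxes.  The exact wavelength range and the class boundaries vary between authors, so treat this purely as an illustration:

```python
import numpy as np

def ir_spectral_index(wavelengths_um, flux_jy):
    """Estimate alpha = d log(lambda * F_lambda) / d log(lambda) from IR photometry.

    wavelengths_um : wavelengths in microns (roughly 2-25 um is typical)
    flux_jy        : flux densities F_nu in Jy at those wavelengths
    """
    lam = np.asarray(wavelengths_um, dtype=float)
    fnu = np.asarray(flux_jy, dtype=float)
    lam_flam = fnu / lam          # lambda*F_lambda is proportional to nu*F_nu = F_nu*c/lambda
    alpha, _ = np.polyfit(np.log10(lam), np.log10(lam_flam), 1)
    return alpha

# Hypothetical photometry (illustrative numbers only, not a real source)
wavelengths = [2.2, 3.6, 4.5, 8.0, 24.0]     # microns
fluxes      = [0.05, 0.08, 0.12, 0.30, 1.5]  # Jy

alpha = ir_spectral_index(wavelengths, fluxes)
# A rising SED (alpha > 0) points toward Class I; flat or falling (alpha < 0), Class II
print(f"IR spectral index alpha = {alpha:.2f}")
```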

2. The Article Itself

One common method of fitting SEDs is to assume a given gas and circumstellar dust geometry and set of dust properties (grain size distribution, composition, and opacity), and then use radiative transfer models to predict the resulting SED and find a set of parameters that best reproduce the observations.  However, fitting SEDs by trial and error is a time consuming way to explore a large parameter space.  The problem is even worse if you want to consider thousands of sources.  So, what’s to be done?  Enter Robitaille et al.  In order to attempt to make SED fitting more efficient, they have pre-calculated a large number of radiative transfer models that cover a reasonable amount of parameter space.  Then, for any given source, one can compare the observed SED to this set of models to quickly find the set of parameters that best explains the observations.

Let’s Get Technical

The online version of this fitting tool draws from 20,000 combinations of physical parameters and 10 viewing angles (if you are particularly curious, the online tool is available here).  A brief overview of the parameter space covered is as follows:

  • Stellar mass between 0.1 and 50 solar masses
  • Stellar ages between 10^3 and 10^7 years
  • Stellar radii and temperatures (derived directly from stellar mass using evolutionary tracks)
  • Disk parameters (disk mass, accretion rate, outer radius, inner radius, flaring power, and scale height) and envelope parameters (the envelope accretion rate, outer radius, inner radius, cavity opening angle, and cavity density) sampled randomly within ranges dictated by the age of the source

However, there are truly a vast number of parameters that could be varied in models of YSOs.  Thus, for simplicity, the authors are forced to make a number of assumptions.  Here are some of the biggest assumptions involved:

  1. All stars form via accretion through a disk and an envelope.
  2. The gas-to-dust ratio in the disk is 100.
  3. The apparent size of the source is not larger than the given aperture.

The last constraint is not required, but it allows a number of model SEDs to be cut out and thus speeds up the process.  Furthermore, the authors make a point of saying that the results can always be scaled to account for varying gas-to-dust ratios, since only the dust is taken into account in the actual radiative transfer calculations.

Does This Method Really Work?

If this tool works well, it should be able to correctly reproduce previous results.  In order to test this out, the authors turn to the Taurus-Auriga star forming region.  They select a sample of 30 sources from Kenyon & Hartmann (1995) that are spatially resolved, meaning that there is prior knowledge of their evolutionary stage from direct observations (i.e. there is a known result that astronomers are fairly certain of to compare the model fits against).  When fitting their model SEDs to the observed SEDs for this particular star forming region, the authors throw in a few additional assumptions:

  1. All sources are within a distance range of 120 – 160 pc (helps to rule out models that are too faint or too luminous).
  2. The foreground interstellar extinction is no more than A_v = 20.
  3. None of the sources appeared larger than the apertures used to measure fluxes.

The authors then assign an arbitrary cut-off in chi-squared for acceptable model fits: \chi^2 - \chi^2_\text{best} < 3.  Here, \chi^2_\text{best} is the \chi^2 of the best-fit model for each source.  Robitaille et al. acknowledge that this cut-off has no statistical justification: ‘Although this cut-off is arbitrary, it provides a range of acceptable fits to the eye.’  After taking Jim Moran’s Noise and Data Analysis class, I for one would like to see the authors try a Markov Chain Monte Carlo (MCMC) analysis of their 14-dimensional space (for more detail on MCMC methods see this review by Persi Diaconis).  That might make the analysis a bit less ‘by eye’ and more ‘by statistics.’
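
To make the grid-comparison step concrete, here is a minimal sketch (mine, not the authors’ code) of what the fitter is doing under the hood: compute \chi^2 for every pre-computed model SED against the observed fluxes and keep the models that survive the (admittedly arbitrary) \chi^2 - \chi^2_\text{best} < 3 cut.  The toy grid and noise level below are invented for illustration:

```python
import numpy as np

def select_models(obs_flux, obs_err, model_grid, delta_chi2=3.0):
    """Compare one observed SED to a grid of pre-computed model SEDs.

    obs_flux   : observed fluxes at N wavelengths, shape (N,)
    obs_err    : 1-sigma uncertainties, shape (N,)
    model_grid : model fluxes at the same wavelengths, shape (M, N)
    Returns the index of the best-fit model and the indices of all 'acceptable'
    models satisfying chi^2 - chi^2_best < delta_chi2.
    """
    resid = (model_grid - obs_flux) / obs_err   # shape (M, N)
    chi2 = np.sum(resid**2, axis=1)             # one chi^2 per model
    best = int(np.argmin(chi2))
    acceptable = np.where(chi2 - chi2[best] < delta_chi2)[0]
    return best, acceptable

# Toy grid: 1000 hypothetical models sampled at 5 wavelengths
rng = np.random.default_rng(0)
grid = rng.lognormal(mean=0.0, sigma=1.0, size=(1000, 5))
obs = grid[42] * (1 + 0.1 * rng.normal(size=5))   # 'observe' model 42 with 10% noise
err = 0.1 * obs

best, ok = select_models(obs, err, grid)
print(f"best-fit model: {best}, number of acceptable models: {len(ok)}")
```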

The upshot of this study is that for the vast majority of the sources considered, the best-fit values obtained by this new SED fitting tool are close to the previously known values. Check.

It is also worth mentioning here that there are many other sets of SED modeling codes.  One set of codes of particular note is that written by Paola D’Alessio (D’Alessio et al., 1998; D’Alessio et al., 1999; D’Alessio et al., 2001).  These codes were the most frequently used in the results presented at the ALMA conference I attended.  The distinguishing feature of the D’Alessio models is that they solve for the detailed hydrostatic vertical disk structure in order to account for observations of ‘flared’ disks around T Tauri stars (flaring refers to an increase in disk thickness at larger radii).

But, Wait! There are Caveats!

Although the overall conclusion is that this method fits SEDs with reasonable accuracy, there are a number of caveats that are raised.  First of all, the models tend to overestimate the mid-IR fluxes for DM Tau and GM Aur (two sources known to have inner regions cleared of dust).  The authors explain that this is most likely due to the fact that their models for central holes assume that there is no dust remaining in the hole.  In reality, there is most likely a small amount of dust that remains.  Second, the models do not currently account for the possibility of young binary systems and circumbinary disks (relevant for CoKu Tau 1).

The paper also addresses estimating parameters such as stellar temperature, disk mass, and accretion rate from SED fits.  And, yes, you guessed it, these calculations raise several more issues.  For very young objects, it is difficult to disentangle the envelope and disk, making it very challenging to estimate a total disk mass.  To make these complications clearer, the set of two plots below from the paper show calculated values for the disk mass plotted against the accepted values from the literature.  It is easily seen that the disk masses for the embedded sources are the most dissimilar from the literature values.


Two plots from Robitaille et al. (2007) that show calculated values for the disk mass plotted against the accepted values from the literature. It is easily seen that the disk masses for the embedded sources (right) are the most dissimilar from the literature values.

Furthermore, even if the disk can be isolated, the dust mass in the disk is affected by the choice of dust opacity.  That’s a pretty big caveat!  A whole debate was started at the ALMA conference over exactly this issue, and the authors have simply stated the problem and swept it under the rug in just one sentence.  In 2009, David Hogg conducted a survey of the rho Ophiuchus region and used the models of Robitaille et al. (2007) to determine the best-fit dust opacity index, \beta, for this group of sources.  Hogg found that \beta actually decreases for Class II protostars, a possible indication of the presence of larger grains in the disk.  Robitaille et al. also mention that the calculated accretion rates from SED fitting are systematically larger than what is presented in the literature.  The authors conclude that future models should include disk emission inside the dust destruction radius, the radius inside which it is too hot for dust to survive.  A great example of the complications that arise from a disk with a central hole can be seen in LkCa 15 (Espaillat et al., 2010; Andrews et al., 2011).  The figure below shows the observed and simulated SEDs for the source (left) as well as the millimeter image (right).  The double ansae (bright peaks or ‘handles’ apparent on either side of the disk) seen in the millimeter contours are indicative of a disk with a central cavity.


Left: The observed and simulated SEDs for Lk Ca 15. The sharp peak seen at 10 microns is due to silicate grains within the inner hole of the disk. Right: The millimeter image of the disk. (Espaillat et al., 2010; Andrews et al., 2011)

In this case, a population of sub-micron sized dust within the hole is needed in order to produce the observed silicate feature at 10 microns.  Furthermore, an inner ring is required to produce the strong near-IR excess shortward of 10 microns.  A cartoon image of the predicted disk geometry is shown below.  To make things even more complicated, the flux at shorter wavelengths appears to vary inversely with the flux at longer wavelengths over time (Espaillat et al., 2010).  This phenomenon is explained by changing the height of the inner disk wall over time.


A cartoon image of the predicted disk geometry for Lk Ca 15 showing the outer ring, silicate grains within the hole, and the inner ring. (Espaillat et al., 2010)
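
Coming back to the dust-opacity caveat above: the reason the choice of opacity matters so much is that the standard optically thin estimate of the dust mass, M_{dust} = F_\nu d^2 / (\kappa_\nu B_\nu(T)), is directly proportional to 1/\kappa_\nu, and \kappa_\nu \propto \nu^\beta is uncertain at the factor-of-a-few level.  Here is a rough sketch of that estimate (my own numbers; the opacity normalization, temperature, and flux are typical assumed values, not taken from any of the papers discussed here):

```python
import numpy as np

h = 6.626e-27    # erg s
c = 2.998e10     # cm/s
k_B = 1.381e-16  # erg/K
Msun = 1.989e33  # g

def planck(nu, T):
    """Planck function B_nu(T) in erg s^-1 cm^-2 Hz^-1 sr^-1."""
    return 2 * h * nu**3 / c**2 / (np.exp(h * nu / (k_B * T)) - 1.0)

def dust_mass(flux_jy, dist_pc, T=20.0, beta=1.0, kappa0=10.0, nu0=1e12, nu=230e9):
    """Optically thin dust mass (in grams) from a (sub)mm flux.

    kappa0 : assumed dust opacity (cm^2 per gram of dust) at nu0 = 1000 GHz
    beta   : opacity index, kappa proportional to nu**beta
    """
    F = flux_jy * 1e-23                 # Jy -> erg s^-1 cm^-2 Hz^-1
    d = dist_pc * 3.086e18              # pc -> cm
    kappa = kappa0 * (nu / nu0)**beta
    return F * d**2 / (kappa * planck(nu, T))

# Hypothetical disk: 100 mJy at 1.3 mm (230 GHz), 140 pc away, T_dust = 20 K
for beta in (1.0, 0.5):
    m = dust_mass(0.1, 140.0, beta=beta) / Msun
    print(f"beta = {beta}: dust mass ~ {m:.1e} Msun")
```

Even this toy example shows the inferred dust mass changing by a factor of a few between the two assumed values of \beta, which is exactly why the opacity debate matters.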

Finally, Robitaille et al. discuss how well parameters are constrained given different combinations of data points for two example sources: AA Tau and IRAS 04361+2547.  In both sources, if only IRAC (Infrared Array Camera on the Spitzer Space Telescope) fluxes obtained between 3.6 and 8 microns are used, the stellar mass, stellar temperature, disk mass, disk accretion rate, and envelope accretion rate are all poorly constrained.  Things are particularly bad for AA Tau in this scenario, where only using IRAC data results in ~5% of all SED models meeting the imposed goodness of fit criterion (yikes!).  Adding in optical data to the mix helps to rule out models that have low central source temperatures and disk accretion rates.  Adding data at wavelengths longer than ~ 20 microns helps to constrain the evolutionary stage of the YSO, because that is where any infrared excess is most apparent.  And, adding submillimeter data helps to pin down the disk mass, since the emission at these longer wavelengths is dominated by the dust.  This just goes to show how necessary it is to obtain multi-wavelength data if we really want to understand YSOs, disks, and the like.

3. Sources:

Where to look if you want to read more about anything mentioned here…

A model of the emergence of coherence

In Uncategorized on April 14, 2013 at 4:50 pm

Download PDF description of the Goodman et al. (1998) model here

Ostriker’s 1964 Isothermal Cylinder model

In Uncategorized on April 14, 2013 at 4:43 pm

Particularly impressive is that, if you read the fine print, Jerry Ostriker was an NSF graduate fellow (meaning he was a grad student!) when he wrote this paper, which solves differential equations analytically at a level of technical virtuosity undoubtedly beyond anyone reared in the age of Mathematica.  Ostriker obtains solutions for polytropic cylinders, of which an isothermal cylinder is the case where n=\infty.  One has

P=K_n \rho ^{1+1/n},

which leads to the fundamental equation

K_n (n+1) \nabla ^2 \rho ^{1/n} = -4\pi G\rho.

Using the transformation

r \equiv \bigg[ \frac{(n+1)K_n}{4\pi G \lambda ^{1-1/n} } \bigg]^{1/2} \xi,

\rho \equiv \lambda \theta ^n,

one has a version of the Lane-Emden equation

\frac {1}{\xi}\frac{d}{d\xi} \big(\xi \frac{d\theta}{d\xi} \big) =\theta''+\frac{1}{\xi}\theta'=-\theta ^n.

For n=0, 1, and \infty there is a closed-form solution; for other n, a power series solution.  In the particular case of an isothermal cylinder, the EOS is that of an ideal gas, and one has

r\equiv \bigg[ \frac{K_I}{4\pi G \lambda} \bigg]^{1/2}\xi

and

\rho=\lambda e^{-\psi}.

Using the fundamental equation noted above and manipulating yields

\psi''+\frac{1}{\xi}\psi'=e^{-\psi}.

Letting z=-\psi +2\ln \xi and t=\sqrt{2}\ln\xi, we find

2\frac {d^2z}{dt^2}+e^z=0.

Letting z=\ln y the equation may be integrated to give

\psi(\xi)=2\ln\big(1+\frac{1}{8} \xi^2\big).

Using these results in expressions Ostriker provides in the paper that give general forms for density and mass, one has

\rho= \frac{\rho_0}{\big(1+\xi^2/8 \big)^2}

and M(\xi)=\frac{2 k_B T}{\mu m_0 G} \frac{1}{1+8/\xi^2}.

The expression for density should look familiar from the Pineda paper: it gets integrated to yield his eqn. (1) for the surface density of the cylinder.
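
As a quick sanity check (my own, not part of Ostriker’s paper), one can integrate \psi'' + \psi'/\xi = e^{-\psi} numerically outward from the axis, using the small-\xi behavior \psi \approx \xi^2/4, and verify that it reproduces the closed-form solution \psi = 2\ln(1+\xi^2/8):

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(xi, y):
    """y = [psi, dpsi/dxi] for the isothermal cylinder equation psi'' + psi'/xi = exp(-psi)."""
    psi, dpsi = y
    return [dpsi, np.exp(-psi) - dpsi / xi]

# Start slightly off axis to avoid the 1/xi singularity; near the axis psi ~ xi^2/4
xi0 = 1e-6
sol = solve_ivp(rhs, (xi0, 20.0), [xi0**2 / 4, xi0 / 2],
                dense_output=True, rtol=1e-10, atol=1e-12)

xi = np.linspace(0.5, 20.0, 5)
numeric = sol.sol(xi)[0]
analytic = 2 * np.log(1 + xi**2 / 8)
print(np.max(np.abs(numeric - analytic)))   # should be vanishingly small
```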

ARTICLE: The Acceleration of Cosmic Rays in Shock Fronts (Bell 1978)

In Journal Club 2013 on April 9, 2013 at 5:20 am

Cartoon picture of Bell’s diffusive shock acceleration, courtesy of Dr. Mark Pulupa’s space physics illustration: http://sprg.ssl.berkeley.edu/~pulupa/illustrations/

Summary By Pierre Christian

Disclaimer

WordPress does not do equations very well!!! For ease of viewing, I attach here the PDF file for this paper review. It is identical in content to the one on WordPress, but since it is generated via LaTeX, the equations are much easier to read. On the other hand, the WordPress version has prettier pictures – take your pick… Here’s the Latex-ed version: Latex-ed Version

Here is the handout for said Journal Club: Handout

Bell’s paper can be accessed here: The Acceleration of Cosmic Rays in Shock Fronts

Introduction: Cosmic Rays in Context

In order to appreciate Bell’s acceleration mechanism, it is important for us to first learn some background information about the cosmic rays themselves. These are highly energetic particles, mostly protons (with a dash of heavier elements), so energetic that at the highest energies a single proton carries as much energy as the fastest tennis serves. What can produce such energetic particles?

Our first clue comes from the composition of these cosmic rays. First, there is no electrically neutral cosmic ray particle. Electromagnetic forces must therefore be important in their production. Second, there is an overabundance of iron and other heavy elements in the cosmic ray population compared to the typical ISM composition (Drury, 2012).

The favored cosmic ray production sites are supernova remnants (Drury, 2012). Why not extra-galactic sources? Well, it has been shown that the cosmic ray intensity is higher in the inner region of the Milky Way and decreases as we move radially outward through the disk. In addition, the cosmic ray intensity is lower in the Magellanic Clouds than in the Milky Way. Supernova remnants are also attractive candidates because their strong shocks amplify magnetic fields to large values and, thanks to stellar nucleosynthesis, they naturally contain an overabundance of iron and other heavy elements. I should also mention that a recent Fermi result conclusively shows that supernova remnants produce cosmic rays.

The observed cosmic ray spectrum is given in the figure below. We can see that the bulk of cosmic rays follow a broken power law. In particular, there are two breaks in this power law, one at 3 \times 10^{15} eV, called the ‘knee’ and one at 3 \times 10^{18} eV, called the ‘ankle’. Bell’s contribution is the derivation of a cosmic ray power law spectrum via acceleration in the shocks of supernova remnants. Bell’s model did not explain why the ‘ankle’ and the ‘knee’ exist, and to my knowledge the reason for their presence is still an open question. One explanation is that galactic accelerators cannot efficiently produce cosmic rays to arbitrarily high energies. The knee marks the point where galactic accelerators reach their energetic limits. The ankle marks the point where the galactic cosmic ray intensity falls below the intensity of cosmic rays from extragalactic sources, the so called ultra high-energy (UHE) cosmic rays (Swordy, 2001).


Observed cosmic ray spectrum from many experiments. Originally published by Swordy (2001), and modified by Dr. William Hanlon of the University of Utah (http://www.physics.utah.edu/~whanlon/spectrum.html). This image shows the three power law regimes and the corresponding two breaks: the knee at 3 x 10^{15} eV and the ankle at 3 x 10^{18} eV.

Bell’s Big Ideas

Bell saw the potential of using the large bulk kinetic energy produced in objects such as supernova remnants to power the acceleration of particles to cosmic ray energies. In order to harness the energy in bulk fluid motions, Bell needed a mechanism to transfer this energy to individual particles. As cited in Bell’s paper, Jokipii and Fisk had, in the late sixties and early seventies, worked out mechanisms that use shocks to accelerate particles. Bell modified and perfected Jokipii’s and Fisk’s mechanisms into something applicable to the acceleration of cosmic ray particles in strong magnetized shocks.

General Idea
The general idea of Bell’s mechanism is that a particle with a gyroradius much larger than the shock’s width can move between the upstream region (the region the shock is moving into) and the downstream region (the region the shock has already passed through). Every crossing increases the particle’s energy slightly, and after many crossings the particle is accelerated up to cosmic ray energies. This requires the particle to be confined within a certain region around the shock; there must be a mechanism that keeps the particle bouncing back and forth across the shock.

Confining
As a shock wave wades through the ISM, it produces turbulence in its wake. In the frame of the shock, the bulk kinetic energy of the upstream plasma is processed into smaller scales (turbulence and thermal) in the downstream region. This turbulence cascades down to smaller scales (via a magnetic Kolmogorov spectrum) before being dissipated far downstream. This magnetic turbulence can scatter fast moving charged particles and deflect them. In particular, a particle trying to escape downstream can be deflected by this turbulence back upstream.

If a charged particle sees some magnetic inhomogeneity (a random fluctuation) in the plasma, it can scatter off that inhomogeneity. It is also known that, much like how a particle moving through air produces sound waves, fast moving charged particles in an MHD fluid produce Alfven waves. In the upstream region, energetic particles will excite Alfven waves. Now, a particle moving much faster than the Alfven speed will essentially see these waves as magnetostatic inhomogeneities. Furthermore, there is a resonant interaction that can scatter a particle whose Larmor radius is comparable to the wavelength of the inhomogeneities. In the upstream region of the shock, energetic particles will then scatter off Alfven waves that they themselves generate, deflecting them back into the downstream region.

At this point I would like to point out that the Alfven wave scatterings are elastic. Bell used confusing wording that made it seem that Alfven waves decrease the energy of the upstream particles enough so that they get overtaken by the shock. This is not true; magnetostatic interactions necessarily conserve energy because static magnetic fields can do no work (the force is always perpendicular to the velocity). What happens is that, in the frame of the shock, particles scatter off Alfven waves through a multitude of small angle scatterings, so that the angle between the particle motion and the background magnetic field undergoes a random walk. After several steps in this random walk, some particles will be scattered backwards towards the downstream direction.

Energy Source
Bell emphasizes that there are two different scattering centers in the process. The downstream scattering centers are the turbulence excited by the shock while the upstream scattering centers are the Alfven waves produced by the energetic particles. What is important is that these two waves are moving at different speeds in the frame of the shock. The energy powering the particle acceleration is exactly harnessed from the speed differential between the upstream and downstream scattering centers. Look at the ‘Intuitive Derivation of Equation (4)’ section for more details on this process.

The Power Law Spectra of Cosmic Rays

Bell found the differential energy spectrum of cosmic rays to be:
N(E) dE = \frac{\mu - 1} {E_0} \left( \frac{E} {E_0} \right) ^{-\mu} dE \; ,
where E_0 is the energy of the particle as it is injected in the acceleration process and \mu is defined to be:
\mu = \frac{2 u_2 + u_1} {u_1 - u_2} + O\left(\frac{u_1 - u_2} {c} \right) \; .
Where O\left(\frac{u_1 - u_2} {c} \right) denotes terms of order \left(\frac{u_1 - u_2} {c} \right) and higher.
Now, u_1 and u_2 are the velocities of the upstream and downstream scattering centers, respectively. They are given by:
u_1 = v_s - v_A \; ,
u_2 = \frac{v_s} {\chi} + v_W \; ,
where v_A is the Alfven speed, v_W the mean velocity of scattering centers downstream, v_s the shock velocity, and \chi the factor by which the gas is compressed at the shock (\chi = 4 for high Mach number shocks obeying Rankine-Hugoniot jump conditions). Combining these equations we get:
\mu = \frac{ (2 + \chi) + \chi(2 v_W / v_s - 1/M_A)} {(\chi - 1) - \chi (v_W/v_s + 1/M_A)} \; ,
where M_A is the Alfven Mach number of the shock. In particular, for typical shock conditions this gives us a slope of \sim -2 to \sim -2.5, in agreement with the observed power law of the cosmic ray spectrum.
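
As a quick numerical check of this formula (my own sketch, with illustrative values of M_A and v_W/v_s rather than numbers from the paper), plugging in a strong-shock compression ratio \chi = 4 shows the index hovering right around 2–2.5:

```python
def bell_mu(chi=4.0, M_A=10.0, vW_over_vs=0.0):
    """Bell (1978) spectral index mu in terms of the compression ratio chi,
    the Alfven Mach number M_A, and the downstream scattering-centre speed v_W/v_s."""
    num = (2 + chi) + chi * (2 * vW_over_vs - 1 / M_A)
    den = (chi - 1) - chi * (vW_over_vs + 1 / M_A)
    return num / den

# Illustrative values only: a strong shock (chi = 4) at a few Alfven Mach numbers
for M_A in (5.0, 10.0, 100.0):
    for vW in (0.0, 0.05):
        print(f"M_A = {M_A:5.1f}, v_W/v_s = {vW:.2f}:  mu = {bell_mu(M_A=M_A, vW_over_vs=vW):.2f}")
```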

Damping of Alfven Waves

Bell notices that Alfven waves are damped by two important processes: collision of charged particles with neutral particles and the sound cascade. The first effect can be explained by noting that the magnetic field is flux frozen only to the charged particles in the fluid. If there is a significant amount of neutral particles, the charged particles that are ‘waving’ along with the Alfven waves can collide and scatter with these neutral particles (because the neutral particles do not ‘wave’ along with the magnetic field). This transfers energy from the hydromagnetic Alfven waves to heating the fluid, effectively damping the Alfven waves. Bell notes that the spectral index he derived is only good up to an energy cutoff, above which the spectrum becomes more steep. This energy cutoff is:
\frac{E_{crit}}{\rm{GeV}} = 0.07 \left( \frac{100 u_1} {c} \right)^{4/3} \left(\frac{n_e}{cm^{-3}} \right)^{-1/3}\left(\frac{n_H}{cm^{-3}} \right)^{-2/3} \left( \frac{f(0, p) - f_0(p)} {f_{gal}(p)} \right)^{2/3} \; .
Note that f_0(p), the background particle distribution, is exactly f_{gal}(p) (the Galactic particle distribution) if the shock is moving through undisturbed interstellar gas. This condition fails for objects that generate multiple shock fronts. Also note that, due to compression in the shockwave, f(0,p) can be many times the galactic value.

The second damping mechanism is due to the sound cascade. Alfven waves can interact with and lose energy to magnetosonic waves of shorter wavelengths. This requires the sound speed to be less than the Alfven speed:
T < 3100 \left( \frac{B} {3 \mu G} \right)^2 \left( \frac{n_e} {cm^{-3}} \right) ^{-1} K \; .
However, this damping mechanism does not completely remove the Alfven waves, but merely limits the wave intensity. If this process is important, it will allow particles upstream to travel further from the shock before crossing back downstream. Bell does not believe that this will hamper the acceleration process in most physical cases.

Injection Problem

Throughout this paper, Bell assumed that the accelerated particles are sufficiently energetic to be able to pass through the shock. In order for this assumption to be valid, the gyroradius of the particle needs to be larger than the shock width. The typical thermal energy of particles in the fluid is much too low to satisfy this condition. Therefore, there is a need for a pre-acceleration phase where thermal particles become energized enough for them to participate in Bell’s acceleration (Bell, 1978).

A lot of work has been put into answering this question and to my knowledge it is still an open problem. One method due to Malkov and Volk is to note that even when the gyroradius of the particle is smaller than the shock width, there is still a leaking of mildly suprathermal particles from downstream to upstream. These particles will then excite Alfven waves and accelerate in much the same way as Bell’s diffusive shock acceleration (Malkov & Voelk, 1995).

What About Electrons?

Why do we observe a deficit of electrons in the cosmic ray composition? There are two primary reasons. The first is that the injection problem is much more severe for an electron. Recall that the gyroradius is:
r_{gyro} = \frac{mv_{\perp}} {|q|B} \; .
As a consequence of electrons being much less massive than protons (protons are heavier by a factor of about 2000), they have a much smaller gyroradius. In particular, this means that an electron needs to be much more energetic than a proton for it to be able to pass through a shock.

The second reason for the electron deficit is efficient radiative cooling. Due to its tiny mass, both inverse Compton and synchrotron cooling are extremely efficient at taking kinetic energy away from an electron. In particular, close to a shock, where magnetic fields can be amplified to many times their background value, synchrotron radiation becomes extremely effective. As such, not only is it more difficult for electrons to join in the acceleration process, electrons also lose kinetic energy much more efficiently than protons or other heavier nuclei.
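
To get an order-of-magnitude feel for how punishing synchrotron losses are for electrons, here is a rough sketch using the textbook cooling time t \sim 6\pi m_e c / (\sigma_T \gamma B^2); the Lorentz factor and field strength below are illustrative values I picked, not numbers from Bell’s paper:

```python
import numpy as np

m_e = 9.109e-28      # electron mass, g
c = 2.998e10         # cm/s
sigma_T = 6.652e-25  # Thomson cross-section, cm^2
yr = 3.156e7         # s

def t_sync_electron(gamma, B_gauss):
    """Synchrotron cooling time t ~ 6*pi*m_e*c / (sigma_T * gamma * B^2), for beta ~ 1."""
    return 6 * np.pi * m_e * c / (sigma_T * gamma * B_gauss**2)

# A ~5 GeV electron (gamma ~ 1e4) in a shock-amplified field of ~100 microgauss
gamma, B = 1e4, 100e-6
print(f"electron synchrotron cooling time ~ {t_sync_electron(gamma, B) / yr:.1e} yr")
# At the same total energy a proton cools roughly (m_p/m_e)^4 ~ 1e13 times more slowly,
# so synchrotron losses are essentially irrelevant for the protons.
```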

However, it should be noted that there are methods for electrons to participate in the acceleration. One of the cutest ones involves electrons hitching a ride on ions. It turns out that the acceleration timescale of ions is comparable to their electron stripping timescale in typical shock conditions. So, ions that still harbor electrons can be accelerated via Bell’s diffusive shock acceleration before the electrons, now also possessing a large kinetic energy due to the acceleration, get stripped from the ions and join in the acceleration party.

Oblique Shocks


Figure 2 of Bell’s paper, showing oblique shock geometry.

What if the shock normal is not parallel to the magnetic field? In general, since the plasma can move both parallel and perpendicular to the magnetic field lines, we have to include the electric field in the description. However, Bell claimed that we can Lorentz transform to a frame where the shock front is stationary and all bulk fluid motion is parallel to the magnetic field, as long as the angle between the shock’s normal and the magnetic field is less than cos^{-1} (v_s / c) (this is argued more carefully in Drury, 1983). In this frame, the electric field vanishes and we can reuse much of our previous calculation. If the velocities downstream and upstream are \vec{w}_1 and \vec{w}_2 respectively (see Figure 2 of Bell’s paper, reproduced preceding this paragraph), the energy increase due to one crossing is:

E_{k+1} = E_{k} \left( \frac{1 + \vec{v}_{k1} \cdot (\vec{w}_1 - \vec{w}_2)/c^2} {1 + \vec{v}_{k2} \cdot (\vec{w}_1 - \vec{w}_2)/c^2} \right ) \; .
Since \vec{w}_1 and \vec{w}_2 are not parallel, the increase per crossing is smaller than that of the parallel shock. However, the particle’s gyration around magnetic field lines allows it to cross the shock multiple times when it drifts close to the shock (remember, the particle’s gyration radius must be large compared to the shock’s width for it to participate in the acceleration process!). In total, Bell claims that these two effects cancel each other out, resulting in the same power law spectrum.

Conclusion

In this paper, Bell described a novel method by which the bulk kinetic energy in shocks around supernova remnants accelerates particles to cosmic ray energies. The analytically calculated particle energy power law spectrum agrees with the observed spectrum. However, the study of cosmic ray acceleration is still far from over. An issue left open by this paper is the ‘seed’ of the accelerated particles. Because under standard SNR conditions thermal particles do not possess enough energy to participate in the acceleration mechanism, Bell’s process requires a pre-acceleration mechanism. This injection mechanism is still unknown, and we would like to direct interested readers to Malkov & Voelk, 1995 for a hypothesis. The subject of ultra high energy cosmic rays (UHECRs) is also left undiscussed. In particular, although most observed cosmic rays are accelerated by the Milky Way’s SNRs, these UHECRs are thought to originate from extragalactic sources. Both the site of their acceleration and the mechanism for said acceleration are still a mystery.

Derivation of Equation (4)

Equation (4) in Bell’s paper is a cornerstone equation which Bell’s argument rests upon. For some reason he did not show the derivation of this equation. The easiest way to derive it is to follow Bell’s advice and perform Lorentz transforms of the energy in the rest frame of the scattering center of the new region. This Lorentz boost will be in the direction parallel to the shock’s normal. For a single upstream to downstream crossing, the particle energy is increased by:
E = \frac{E' + vp'} {\sqrt{1 - v^2/c^2}} = \frac{E' + v \gamma m v_{k1}} {\sqrt{1 - v^2/c^2}} = E' \frac{1 + v v_{k1}/c^2} {\sqrt{1 - v^2/c^2}}
where E' is the energy in the original region (upstream) and E is the energy in the downstream region; v_{k1} is the velocity at which the particle is crossing from upstream to downstream; and v is the difference of the velocity between scattering centers upstream and downstream parallel to the shock’s normal (the direction of the Lorentz boost), given by v = (u_1 - u_2) \cos \theta_{k1} , where \theta_{k1} is the angle the motion makes with the shock’s normal. Putting this together gives:
E_{downstream} = E_{upstream} \left( \frac{1 + v_{k1}(u_1 - u_2) \cos \theta_{k1}/c^2} {\sqrt{1 - ((u_1 - u_2) \cos \theta_{k1})^2/c^2}} \right) \; .
Now, equation (4) in Bell’s paper is the total energy change of the particle if it crosses from upstream to downstream and to upstream again. This means we have to once again perform a Lorentz boost, this time from downstream to upstream. This is the exact same procedure, but now we solve for E' instead of E (inverse Lorentz transformation) since we are measuring everything in the frame of the upstream scattering centers. This gives:
E_{final} = E_{initial} \left( \frac{1 + v_{k1}(u_1 - u_2) \cos \theta_{k1}/c^2} {\sqrt{1 - ((u_1 - u_2) \cos \theta_{k1})^2/c^2}} \right) \left(\frac{\sqrt{1 - ((u_1 - u_2) \cos \theta_{k1})^2/c^2}} {1 + v_{k2}(u_1 - u_2) \cos \theta_{k2}/c^2} \right )
= E_{initial} \left( \frac{1 + v_{k1}(u_1 - u_2) \cos \theta_{k1}/c^2} {1 + v_{k2}(u_1 - u_2) \cos \theta_{k2}/c^2} \right ) \; ,
which is exactly equation (4)!

A More Intuitive Derivation of Equation (4)

Although the previous derivation is valid, I think it is not very physically illuminating. Here is a more intuitive derivation of the amount of energy increase a particle receives due to crossing from upstream to downstream. Note that we will not be using Lorentz transforms for this derivation, so it is only accurate to the lowest order in (u_1- u_2)/c .

What is the amount of momentum increase a particle receives due to crossing from upstream to downstream? Suppose p' is the momentum of the particle after it crosses and p is the momentum of the particle before it crosses:
p' = p + \Delta p
= p + \gamma m \Delta v
= p + \gamma m (u_1 - u_2) \cos \theta_{k1} \; .
Now, what is the change of particle energy due to this increase in momentum?
\Delta E = \int \frac{dp}{dt} dl = \int dp v_{k1} \sim v_{k1} \Delta p \;.
The change of energy therefore is:
E' = E + v_{k1} \Delta p
= E + v_{k1} \gamma m (u_1 - u_2) \cos \theta_{k1}
= E ( 1 + v_{k1} (u_1 - u_2) \cos \theta_{k1}/c^2) \; ,
where we have used E = \gamma m c^2 (so that \gamma m = E/c^2) on the last line. This is the amount of energy the particle possesses after it crosses from the upstream to the downstream region. The increase in energy is:
\Delta E = E \, (v_{k1} (u_1 - u_2) \cos \theta_{k1}/c^2) \; .
Why is this derivation more illuminating than the Lorentz boosts? For one, it is easy to see where the extra energy comes from. It is obvious from our \Delta E equation that it comes from the difference in velocity of the scattering centers upstream and downstream (u_1 - u_2) . Therefore, cosmic ray acceleration is literally a process where energetic particles steal energy from the bulk fluid motion!!!

Another illumination comes from the fact that this equation is exactly the same if we apply it backwards, from downstream to upstream, as long as we change v_{k1} to the particle velocity going back upstream and \theta_{k1} to the angle the motion makes with the shock’s normal (\theta now defined in the downstream region). Now, u_1-u_2 will have to be changed to u_2-u_1 , giving a minus sign. However, the angle must be inverted, \cos\theta \rightarrow - \cos\theta , netting another minus sign. The energy change is positive whether the particle crosses from upstream to downstream or downstream to upstream. Particles always gain and never lose energy when they cross a shock!!! In particular, the final energy of a particle after many crossings is just its initial energy multiplied by a product of these (1 + v_{k} (u_1 - u_2) \cos \theta_{k}/c^2) factors!

To get to equation (4), we have to perform two crossings, one upstream to downstream and another downstream to upstream.
Some coordinate change trickery is also required. In particular, \theta_{k1} and \theta_{k2} are both measured in the upstream frame, so the extra minus sign from \cos\theta \rightarrow - \cos\theta is not present. Therefore:
E_{final} = E_{initial} (1 + v_{k1} (u_1 - u_2) \cos \theta_{k1}/c^2) (1 - v_{k2} (u_1 - u_2) \cos \theta_{k2}/c^2) \; ,
where the minus sign comes from the u_2 - u_1 term of the second crossing. It is now easy to see that this is exactly equation (4) if we Taylor expand its denominator to linear order in (u_1 - u_2)/c . Since equation (7) in Bell’s paper only goes to linear order in (u_1 - u_2)/c , this approximation gives us the same differential energy spectrum as equation (9) of Bell’s paper.
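
If you prefer to see the power law emerge rather than derive it, here is a toy test-particle Monte Carlo (entirely my own sketch, not from the paper). It uses the standard first-order Fermi estimates of the mean fractional energy gain per round trip, \sim (4/3)(u_1-u_2)/c, and the escape probability per cycle, \sim 4u_2/c, and recovers a differential spectrum close to E^{-2} for a strong shock:

```python
import numpy as np

rng = np.random.default_rng(1)

c = 1.0
u1, u2 = 0.02 * c, 0.005 * c                  # scattering-centre speeds; compression ratio 4
gain_per_cycle = (4.0 / 3.0) * (u1 - u2) / c  # mean fractional energy gain per round trip
p_escape = 4.0 * u2 / c                       # probability of being swept away downstream per cycle

n_particles = 200_000
energies = np.ones(n_particles)               # inject every particle at E0 = 1
alive = np.ones(n_particles, dtype=bool)

while alive.any():
    escaped = rng.random(n_particles) < p_escape   # some particles escape this cycle...
    alive &= ~escaped
    energies[alive] *= 1.0 + gain_per_cycle        # ...the survivors gain a little energy

# Fit the differential spectrum N(E) over roughly two decades in energy
bins = np.logspace(0.1, 2.0, 30)
counts, edges = np.histogram(energies, bins=bins)
centers = np.sqrt(edges[:-1] * edges[1:])
good = counts > 0
slope, _ = np.polyfit(np.log10(centers[good]),
                      np.log10(counts[good] / np.diff(edges)[good]), 1)
print(f"measured spectral index ~ {slope:.2f} (expect about -2 for a strong shock)")
```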

Derivation of Equations (1) & (13)

In this section we shall derive Equation (1) of the paper:
\frac{\partial n} {\partial t} + u_2 \frac{\partial n} {\partial x} = \frac{\partial} {\partial x} \left( D(x) \frac{\partial n} {\partial x} \right) \; .
This equation describes the evolution of the particle density, n(x, t) in the downstream region. The two important mechanisms are advection due to the fluid flow (with advection velocity u_2 ) and diffusion.

Start with a flux continuity equation of the particle density n with its flux, n u_2 .
\frac{\partial n} {\partial t} + \nabla \cdot (n \vec{u_2}) = \rm{Source} - \rm{Sink} \; .
Perform a product rule:
\frac{\partial n} {\partial t} + n \nabla \cdot \vec{u_2} + \vec{u_2} \cdot \nabla n = \rm{Source} - \rm{Sink} \; .
Now, n \nabla \cdot \vec{u_2} = 0 since the advection velocity downstream is u_2 regardless of position, so:
\frac{\partial n} {\partial t} + \vec{u_2} \cdot \nabla n = \rm{Source} - \rm{Sink} \; .
This takes care of advection. What about diffusion? If we know how diffusion affects \partial n / \partial t , we can put diffusion into the Source – Sink term.

Let us consider again a flux continuity equation, but only considering diffusion processes (no advection or other source/sink terms) this time:
\frac{\partial n} {\partial t} + \nabla \cdot \vec{J} = 0 \; .
By the phenomenological Fick’s first law:
\vec {J} = -D(\vec{x}) \nabla n \; ,
where D(\vec{x}) is called the diffusion coefficient and can vary with position in the shock. Plugging this into the previous equation gives:
\frac{\partial n} {\partial t} = \nabla \cdot (D(\vec{x}) \nabla n) \; .
We can interpret this equation as an extra contribution to \partial n / \partial t due to diffusion. In particular, this is the Source − Sink term due to diffusion. The complete continuity equation with both advection and diffusion is therefore:
\frac{\partial n} {\partial t} + \nabla \cdot (n \vec{u_2}) = \nabla \cdot (D(\vec{x}) \nabla n) \; .
Now, since things do not vary wildly perpendicular to the shock’s normal, the only important terms in these derivatives are the ones parallel to the shock’s normal (the x component in Figure (1) of the paper). Similarly, \vec{u_2} = u_2 \hat{x} and D(\vec{x}) = D(x) , since we assume that the advection velocity and diffusion coefficient do not vary perpendicular to the shock’s normal.
Therefore, the continuity equation becomes:
\frac{\partial n} {\partial t} + u_2 \frac{\partial n} {\partial x} = \frac{\partial} {\partial x} \left( D(x) \frac{\partial n} {\partial x} \right) \; ,
equation (1) of Bell’s paper. Note that I only need to change the advection velocity from u_2 to u_1 if I want to find a similar equation in the upstream region, giving equation (13) of the paper.
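
Purely as an illustration of what equation (1) describes (this is not from Bell’s paper), here is a minimal explicit finite-difference sketch of the advection–diffusion equation with a constant diffusion coefficient; the grid, velocity, and initial particle distribution are all invented:

```python
import numpy as np

# Explicit finite-difference sketch of  dn/dt + u2 dn/dx = d/dx( D dn/dx )
# with a constant D, just to watch advection and diffusion compete.
nx, L = 400, 10.0
dx = L / nx
x = dx * np.arange(nx)

u2 = 1.0        # downstream advection velocity (arbitrary units)
D = 0.5         # diffusion coefficient, taken constant for simplicity
dt = 0.4 * min(dx / u2, dx**2 / (2 * D))   # respect advective and diffusive stability limits

n = np.exp(-(x - 2.0)**2)   # initial blob of energetic particles

for _ in range(2000):
    adv = -u2 * (n - np.roll(n, 1)) / dx                          # upwind advection
    diff = D * (np.roll(n, -1) - 2 * n + np.roll(n, 1)) / dx**2   # centred diffusion
    n = n + dt * (adv + diff)
    n[0], n[-1] = n[1], n[-2]   # crude zero-gradient boundaries

print(f"the blob's peak has advected from x = 2.0 to x = {x[np.argmax(n)]:.2f} while spreading out")
```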

References

Bell, A. R. 1978, MNRAS, 182, 443

Drury, L. O’C. 2012, Astroparticle Physics, 39, 52

Drury, L. O. 1983, Reports on Progress in Physics, 46, 973

Malkov, M. A., & Voelk, H. J. 1995, A&A, 300, 605

Swordy, S. P. 2001, Space Sci. Rev., 99, 85

Further Readings

Blandford, R., & Eichler, D. 1987, Phys. Rep., 154, 1

Schure, K. M., Bell, A. R., Drury, L. O’C., & Bykov, A. M. 2012, Space Sci. Rev., 173, 491

Skilling, J. 1975a, MNRAS, 172, 557

Skilling, J. 1975b, MNRAS, 173, 245

Skilling, J. 1975c, MNRAS, 173, 255

Wentzel, D. G. 1974, ARA&A, 12, 71


Cas A, a supernova remnant where shocks are accelerating cosmic rays, courtesy of wikipedia.

Module Prototype: Director’s Cut of a WorldWide Telescope Tour

In Special Topics Modules on April 2, 2013 at 4:46 pm

W5: Multigenerational Star Formation

This post, prepared by Alyssa Goodman and used in class on 4/2/13, is intended to give AY201b students an idea of  how ready their interactive module  should be when it is presented, and the level of detail to offer in a presentation.
Click here to view original WWT Tour, or here for Tour description.  Click here to download “Director’s Cut.” In the (Windows or) web version of WWT, go to “Explore, Open…, Tour” and select the file that you’ve downloaded in order to view the Tour. Or (warning, beta!) watch the Director’s Cut tour in WWT/HTML5.

 Background: Who are the original Tour’s authors?

  • Xavier Koenig: A finishing graduate student at the Harvard-Smithsonian Center for Astrophysics when this Tour was created.  His PhD thesis concerned analysis of Spitzer Space Telescope observations of the star-forming region W5.  As of 2013, Dr. Koenig is a postdoctoral fellow at Yale University.

  • Lori Allen: Thesis advisor at the Center for Astrophysics to Xavier Koenig when this Tour was made.  Today, Dr. Allen is Deputy Director of the Kitt Peak National Observatory.

  • Sanjana Sharma: High-School student at the Winsor School in Boston when this Tour was made.  Sharma was an intern with the WorldWide Telescope Ambassadors group at the Center for Astrophysics before moving to New Haven, where she is in the class of 2014 at Yale.

 

What points are raised in the WWT Tour narration that could be explored more deeply by an interested viewer? (The text in purple, concerning how we know stars’ ages, now appears as clickable links within the Tour, as a prototype.)

  • small and faint (how small (angle), how faint, #’s)

  • faint diffuse glow, hot gas  (how hot, what does this region look like at other wavelengths)

  • red glow(=warm dust, how warm, and how do we know?)

  • one burst of star formation can cause another (triggered star formation)

  • may have been 3 successive generations of star formation (how do we know which generation is which?)

  • stars…dispersed over time (how fast, how much time? how do we know?)

  • large clusters (what’s the definition of a “cluster” and is it different for young stars?)

  • changing light they emit to infer the presence of multiple objects (how?)

  • disk…that could maybe form planets (discuss how & whether this “hostile” environment matters)

  • comet-shaped tail that glows in the infrared (how does that happen?)

  • pillars compressed from outside…squeezed on inside by internal gravity (how, which forces do what on what time scales?)

  • brand-new stars are emerging (how do we know they are new?)

  • comparison of pillars/mountains W5/Eagle nebula same scale (angular/linear?…turns out both, as these sources are coincidentally at similar distances from us!)

 Additional Resources

  • Video: Spitzer “Hidden Universe” interviews with Allen & Koenig about W5
  • PhD Thesis: Xavier Koenig’s thesis (PDF)

  • Journal Article: Koenig et al. 2008, Clustered and Triggered Star formation in W5: Observations with Spitzer (ADS link)  [Abstract: We present images and initial results from our extensive Spitzer Space Telescope imaging survey of the W5 H II region with the Infrared Array Camera (IRAC) and Multiband Imaging Photometer for Spitzer (MIPS). We detect dense clusters of stars, centered on the O stars HD 18326, BD +60 586, HD 17505, and HD 17520. At 24 μm, substantial extended emission is visible, presumably from heated dust grains that survive in the strongly ionizing environment of the H II region. With photometry of more than 18,000 point sources, we analyze the clustering properties of objects classified as young stars by their IR spectral energy distributions (a total of 2064 sources) across the region using a minimal-spanning-tree algorithm. We find ~40%-70% of infrared excess sources belong to clusters with >=10 members. We find that within the evacuated cavities of the H II regions that make up W5, the ratio of Class II to Class I sources is ~7 times higher than for objects coincident with molecular gas as traced by 12CO emission and near-IR extinction maps. We attribute this contrast to an age difference between the two locations and postulate that at least two distinct generations of star formation are visible across W5. Our preliminary analysis shows that triggering is a plausible mechanism to explain the multiple generations of star formation in W5 and merits further investigation.]

ARTICLE: An L1551 Extravaganza: Three Articles

In Journal Club 2013 on April 1, 2013 at 11:55 am

Wide-Field Near-Infrared Imaging of the L1551 Dark Cloud by Masahiko Hayashi and Tae-Soo Pyo

Observations of CO in L1551 – Evidence for stellar wind driven shocks by Ronald L. Snell, Robert B. Loren & Richard L. Plambeck

Multiple Bipolar Molecular Outflows from the L1551 IRS5 Protostellar System by Po-Feng Wu, Shigehisa Takakuwa, and Jeremy Lim

Summary by Fernando Becerra, Lauren Woolsey, and Walker Lu

Introduction

Young Stellar Objects and Outflows

In the early stages of star formation, Young Stellar Objects (YSOs) produce outflows that perturb the surrounding medium, including their parental gas cloud. The current picture of star formation indicates that once gravity has overcome pressure support, a central protostar forms, surrounded by an infalling envelope and a rotationally supported gas disk. In this context, outflows are powered by the release of gravitational potential energy liberated by matter accreting onto the protostar. Outflows are highly energetic and often spatially extended phenomena, and they are observable over a wide range of wavelengths from the X-ray to the radio. Early studies of molecular outflows (predominantly traced by CO emission lines, e.g. Snell et al. 1980, see below) have shown that most of their momentum is deposited in the surrounding medium, so outflows provide a record of the mass loss history of the protostar. In contrast, the optical and near-infrared (NIR) emission traces active hot shocked gas in the flow.

Interactions with the surrounding medium: Herbig-Haro objects, bow shocks and knots

When outflows interact with the medium surrounding a protostar, emission can often be produced.  One example of this is emission from Herbig-Haro (HH) objects, which can be defined as “small nebulae in star-forming regions as manifestations of outflow activity from newborn stars”.  The most common pictures show an HH object as a well-collimated jet ending in a symmetric bow shock.  Bow shocks are regions where the jet accelerates the ambient material.  The shock strength should be greatest at the apex of the bow, where the shock is normal to the outflow, and should decline in the wings, where the shocks become increasingly oblique.  Another interesting feature we can distinguish is knots.  Their origin is still unknown, but a few theories have been developed over the years.  Knots may form because the protostar produces periodic bursts of emission, or emission of varying intensity.  They may also form through interactions between the jet and the surrounding Interstellar Medium (ISM), or because different regions of the jet have different velocities.

An exceptional case: The L1551 region

The L1551 system is an example of a region in which multiple protostars exhibiting outflows are seen, along with several HH objects and knots. This system has been catalogued for over fifty years (Lynds 1962), but ongoing studies of the star formation and dynamical processes continue to the present day (e.g. Hayashi and Pyo 2009; Wu et al. 2009). L1551 is a dark cloud with a diameter of ~20′ (~1 pc) located at the south end of the Taurus molecular cloud complex. The dark cloud is associated with many young stellar objects. These YSOs show various outflow activities and characteristics such as optical and radio jets, Herbig-Haro objects, molecular outflows, and infrared reflection nebulae. We will start by giving a broad view of the region based on Hayashi and Pyo 2009, and then we will focus on a subregion called L1551 IRS 5 following Snell et al. 1980 and Wu et al. 2009.

Paper I: An overview of the L1551 Region (Hayashi and Pyo 2009)

The L1551 region is very rich in YSOs, outflows, and their interaction with the ISM. The most prominent of the YSOs in this region are HL Tau, XZ Tau, LkHα 358, HH 30, L1551 NE, and L1551 IRS 5 (see Fig. 1), arrayed roughly north to south and concentrated in the densest part (diameter ~10′) of the cloud. The authors based their study on observations using two narrow-band filters, [Fe II] (\lambda_c = 1.6444 μm, \Delta\lambda = 0.026 μm) and H_2 (\lambda_c = 2.116 μm, \Delta\lambda = 0.021 μm), and two broad-band filters, H (\lambda_c = 1.64 μm, \Delta\lambda = 0.28 μm) and K_s (\lambda_c = 2.14 μm, \Delta\lambda = 0.31 μm). The choice of [Fe II] and H_2 is motivated by previous studies suggesting that the [Fe II] line has a higher velocity than the H_2 line, and thus arises in jet ejecta directly accelerated near the central object, while the H_2 emission may originate in shocked regions. In the particular case of bow shocks, regions of higher excitation near the apex are traced by [Fe II], while H_2 is preferentially found along the bow wings. The total sky coverage was 168 arcmin^2, focused on four regions of the densest part of the L1551 dark cloud, including HL/XZ Tau, HH 30, L1551 IRS 5, some HH objects to the west, L1551 NE, and part of HH 262 (see Fig. 1).


Figure 1: An overview of L1551 (Figure 1 of Hayashi and Pyo 2009)

HL/XZ Region

Some of the features the authors identify in this region are:

  • A faint [Fe II] jet emanating from HL Tau to its northeast and southwest. The H_2 emission is hard to identify in the northeast part, but significant H_2 emission blobs are detected in the southwest part (denoted “H2 jet” in Fig. 2)
  • A diffuse feature is also distinguished to the north-northeast of XZ Tau, which may be related to the outflow from one member of the XZ Tau binary.
  • A continuum arc from HL Tau to the north, then bending to the east (“cont arc” in Fig. 2), is also identified.  This arc may be a dust density discontinuity where enhanced scattering is observed, although it is not clear whether the arc is related to activity at HL Tau or XZ Tau.
  • Another arc feature to the south of HL Tau, curving to the southeast, can be identified.  Two H_2 features are located in the arc and indicated by arrows in Fig. 2.  These may be shocked regions in the density discontinuity.
  • Other H_2 features can be distinguished: “A” (interpreted as a limb-brightened edge of the XZ Tau counter-outflow) and “B”, “C”, “a” (blobs driven by the LkH\alpha 358 outflow and interacting with the southern outflow bubble of XZ Tau).

Figure 2: HL/XZ Region (Figure 2 of Hayashi and Pyo 2009)

HH 30 Region

HH 30 is a Herbig-Haro (HH) object including its central star, which is embedded in an almost edge-on flared disk.  Although this object doesn’t show clear signs of large-scale [Fe II] or H_2 emission (see Fig. 3), a spectacular jet was detected in the [S II] emission line in previous studies.  Nevertheless, the authors identify two faint small-scale features in the [Fe II] frame: one to the northeast (corresponding to the brightest part of the [S II] jet) and one to the south-southeast (corresponding to a reflection nebula).


Figure 3: HH 30 Region (Figure 3 of Hayashi and Pyo 2009)

L1551 NE

L1551 NE is a deeply embedded object associated with a fan-shaped infrared reflection nebula opening toward the west-southwest seen in the broad-band Ks continuum emission. It has an opening angle of 60^o. The most important features in this region are:

  • A needle-like feature connecting L1551 NE and HP2 is distinguished from the continuum-substracted [Fe II] image, associated with an [Fe II] jet emanating from L1551 NE.
  • A diffuse red patch at the southwest end of the nebula (denoted as HP1) is dominated by H_2 emission
  • Five isolated compact features are detected in the far-side reflection nebula: HP3 and HP3E ([Fe II] emission), HP4 (both [Fe II] and H_2 emission) and HP5 and HP6 (H_2 emission). All of them are aligned on a straight line that is extrapolated from the jet connecting NE and HP2, naturally assigned to features on the counter-jet.
  • Comparing these data to previous observations in [S II] and the radio, velocities of 160-190 km/s are deduced for HP2, and 140-190 km/s for HP4 and HP5.  With radial velocities in the range 100-130 km/s for these knots, the inclination of the jet axis is estimated to be 45^o–60^o.

L1551 IRS-5

L1551 IRS 5 is a protostellar binary system with a spectacular molecular outflow (Snell et al. 1980; see below) and a pair of jets emanating from each of the binary protostars. A conspicuous fan-shaped infrared reflection nebula is seen in Fig. 4, widening from IRS 5 toward the southwest. At the center of this nebula, the two [Fe II] jets appear as two filaments elongated from IRS 5 to its west-southwest; the northern jet is the brighter of the two. Knots A, B and C located farther west and west-southwest of PHK3 (associated with H_2 line emission) have significant [Fe II] emission.


Figure 4: A close-up of IRS-5 (Figure 5 of Hayashi and Pyo 2009)

A counter-jet, seen only in the [Fe II] frame, can be distinguished to the northeast of IRS 5.  Considering its good alignment with the northern jet, it can be interpreted as the receding part of the jet.  Based on a brightness comparison between the two jets, and transforming H-band extinction to visual extinction, the authors deduce a total visual extinction of A_v = 20-30 mag.  Besides the counter-jet, the authors also detect the northern and southern edges of the reflection nebula that delineate the receding-side outflow cone of IRS 5.

A brief summary of the HH objects detected in the IRS5 region:

  • HH 29: Consistent with a bow shock; its [Fe II] emission features are compact, while the H_2 emission is diffuse.  The two emissions are relatively separate.
  • HH 260: Consistent with a bow shock, with a compact [Fe II] emission knot located at the apex of a parabolic H_2 emission feature.
  • HP7: Its [Fe II] and H_2 emission suggest it is also a bow shock driven by an outflow either from L1551 IRS 5 or NE.
  • HH 264: A prominent H_2 emission loop located in the overlapping molecular outflow lobes of L1551 IRS 5 and NE.  Its velocity gradients are consistent with slower material surrounding a high-velocity (~ -200 km/s in radial velocity) wind axis from L1551 IRS 5 (or that from L1551 NE).
  • HH 102: A loop feature dominated by H_2 emission (with no [Fe II] emission), similar to HH 264.  Considering that the major axes of the two elliptical features are consistent with the extrapolated axis of the HL Tau jet, it is suggested that they might be holes with wakes in the L1551 IRS 5 and/or NE outflow lobe(s) that were bored by the collimated flow from HL Tau.

Comparison of Observations

Near-infrared [Fe II] and H_2 emission show different spatial distributions in most of the objects analyzed here.  On one hand, the [Fe II] emission is confined to narrow jets or relatively compact knots.  On the other hand, the H_2 emission is generally diffuse or extended compared with the [Fe II] emission, with none of the H_2 features showing the well-collimated morphology seen in [Fe II].
These differences can be understood based on the conditions that produce different combinations of [Fe II] and H_2 emission:

  • Case of spatially associated [Fe II] and H_2 emission: Generally requires fast dissociative J shocks (Hollenbach & McKee 1989; Smith 1994; Reipurth et al. 2000).
  • Case of strong H_2 emission without detectable [Fe II] emission: Better explained by non-dissociative C shocks.

The interpretation of the differences in [Fe II] and H_2 emission as the result of distinct types of shocks is supported by observational evidence showing that the [Fe II] emission usually has a much higher radial velocity than the H_2 emission. In the case of HH 29, HH 260 and HP 7, the [Fe II] emission arises at the bow tips, where the shock velocity is fast (~50 km/s) and dissociative, whereas the H_2 emission occurs along the trailing edges, where the shock is slower (~20 km/s).

Paper II: Landmark Observations of Snell et al. 1980

One of the original papers in the study of L1551 was written by Snell, Loren and Plambeck (1980). In this paper, the authors use 12CO to map what they find to be a double-lobed structure extending from the infrared source IRS-5 (see Figures 1, 4). The system is also associated with several Herbig-Haro objects, small patches of shocked nebulosity that appear within the first few thousand years after a star forms. The infrared source itself is consistent with a B star reddened by 20 magnitudes of dust extinction along the line of sight, at a distance of 160 pc (Snell 1979). By studying these outflows, we are able to better understand the evolution of YSOs.

Observations

Snell et al. (1980) made their observations using the 4.9 meter antenna at the Millimeter Wave Observatory in Texas. Specifically, they observed the J = 1-0 and J = 2-1 transitions of ^{12}CO and ^{13}CO. Additionally, they made J = 1-0 observations with the NRAO 11 meter antenna. They found asymmetries in the spectral lines, shown below in Figure 5. To the northeast of IRS-5, the high-velocity side of the line shows a broad feature, while to the southwest of IRS-5 a similar broad feature appears on the low-velocity side of the line. No such features were found to the NW, SE, or at the central position of IRS-5.

Snell et al. 1980, figure 4

Figure 5: 12CO and 13CO 1-0 transition lines; top is NE of central source, bottom is SW of source (Figure 4 of Snell et al. 1980)

The J = 2-1 ^{12}CO transition is enhanced relative to the J = 1-0 transition, suggesting that the ^{12}CO emission is not optically thick; if the emission were optically thick, the J = 1-0 line would be expected to be at least as strong, since it arises from lower-lying levels. The observations also suggest an excitation temperature for the 2-1 transition of T_{ex} ~ 8-35 K. This equals the gas temperature only if the gas is in local thermodynamic equilibrium, but it does set a rough minimum temperature. The ^{13}CO emission in the 1-0 transition is roughly 40 times weaker than the same transition of ^{12}CO, which further suggests both isotopologues are optically thin in this region (if the ^{12}CO is already optically thin, the much weaker ^{13}CO line must be even more so). The geometry of the asymmetries in the line profiles seen to the NE and SW, combined with the distance to L1551, suggests lobes that extend out 0.5 pc in both directions.
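
For orientation, a quick small-angle conversion between angular and physical scales at the 160 pc distance adopted for L1551 (my own arithmetic):

```python
import numpy as np

# Small-angle conversion between sky angle and physical size at 160 pc.
d_pc = 160.0
arcmin_to_rad = np.pi / (180.0 * 60.0)

def arcmin_to_pc(theta_arcmin, d_pc=d_pc):
    """Physical size (pc) subtended by an angle (arcmin) at distance d_pc."""
    return theta_arcmin * arcmin_to_rad * d_pc

print(f"1 arcmin -> {arcmin_to_pc(1.0):.3f} pc")                 # ~0.047 pc
print(f"0.5 pc   -> {0.5 / arcmin_to_pc(1.0):.1f} arcmin on the sky")  # ~10.7'
```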

Interpretations

Column density

The authors make a rough estimate of the column density of the gas in these broad velocity features by making the following assumptions:

  • the ^{12}CO emission observed is optically thin
  • the excitation temperature is 15 K
  • the ratio of CO to H2 is a constant 5 \times 10^{-5}

With these assumptions, the authors find a column density of 10^{20} cm^{-2}. This is far below the column implied by the region's measured extinction of A_V = 20 magnitudes (Snell 1979), consistent with the broad-line gas being only the material the outflow has swept up around the star(s).
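
As a quick consistency check of my own (using the standard Bohlin et al. 1978 gas-to-extinction calibration, which is not part of Snell et al.'s analysis), the swept-up column corresponds to a tiny amount of extinction compared with the A_V = 20 mag toward the core:

```python
# Rough consistency check with the standard gas-to-extinction ratio
# N_H / A_V ~ 1.9e21 cm^-2 mag^-1 (Bohlin et al. 1978); this calibration is
# my addition, not something used explicitly by Snell et al. (1980).
N_H2 = 1e20            # cm^-2, column density of H2 quoted above
N_H = 2.0 * N_H2       # each H2 molecule carries two hydrogen nuclei
A_V = N_H / 1.9e21     # equivalent visual extinction in magnitudes

print(f"A_V implied by the swept-up gas: ~{A_V:.2f} mag")
# ~0.1 mag, versus the ~20 mag measured toward the core: the broad-line gas
# is only a thin, swept-up skin of the total column.
```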

Stellar wind and bow shocks

The model that Snell et al. (1980) suggest is a bipolar wind that sweeps out material into two “bubble-like” lobes, creating a dense shell and possibly an ionization front that shocks the gas (More on shocks). The physical proximity of the Herbig-Haro (HH) objects to the southwest lobe emanating from IRS-5 suggests a causal relationship. Previous work found that the optical spectra of the HH objects resemble the spectra expected of shocked gas (Dopita 1978; Raymond 1979).

There is evidence that the CO lobes are the result of a strong stellar wind; the authors illustrate this with the schematic shown in Figure 6. They suggest that the wind creates a bow shock and a shell of swept-up material (More on outflows in star-forming regions). The broad velocity features in the CO emission line wings reach up to 15 km/s, suggesting the shell is moving outward at that speed. The Herbig-Haro objects HH29 and HH102 have radial velocities of approximately 50 km/s in the same direction as the expansion of the SW lobe (Strom, Grasdalen and Strom 1974). Additionally, Cudworth and Herbig (1979) measured the transverse velocities of HH28 and HH29 and found that the objects are moving away from IRS-5 at 150 to 170 km/s. To have reached these velocities, the HH objects must have been accelerated, most likely by a strong stellar wind with a speed above 200 km/s. The bipolar geometry of the outflow suggests a thick accretion disk around the young star.

Snell et al. 1980, figure 5

Figure 6: Schematic drawing of stellar outflow (Figure 5 of Snell et al. 1980)

Mass-loss rate

The average density in the region away from the central core is 10^{3} {\rm ~cm}^{-3} (Snell 1979), so the extent and density of the shell imply a swept-up mass of 0.3 to 0.7 solar masses. With the measured velocity of ~15 km/s assumed to be constant over the lifetime of the shell, and with the shell at its measured distance of 0.5 pc from the star, the shell was created roughly 30,000 years ago. With this age, the authors determined a mass-loss rate using the lower end of the assumed swept-up mass and the observed volume of the shell. They found a mass-loss rate of 8 \times 10^{-7} {\rm ~M}_{Sun}{\rm ~yr}^{-1}, which can be compared to other stars using a chart like that shown in Figure 7. This is not meant to constrain the processes that produce the mass loss in the IRS-5 system, but simply to provide context for the stellar wind observed. The low-mass main-sequence star(s) that will eventually emerge from the IRS-5 system will have much lower mass-loss rates, and studies of mass loss from other YSOs suggest that this source is at the high end of the expected range.
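
The quoted numbers can be roughly reproduced with a simple momentum-conservation estimate; this is my reconstruction of the bookkeeping, with the 200 km/s wind speed taken from the discussion above:

```python
# Order-of-magnitude reconstruction of the quoted mass-loss rate assuming the
# wind's momentum has been transferred to the swept-up shell (my bookkeeping,
# not necessarily the paper's exact calculation).
pc = 3.086e18          # cm
km_s = 1e5             # cm/s
yr = 3.156e7           # s
Msun = 1.989e33        # g

R_shell = 0.5 * pc     # extent of each lobe
v_shell = 15 * km_s    # shell expansion speed from the CO line wings
v_wind = 200 * km_s    # assumed stellar-wind speed (value quoted above)
M_shell = 0.3 * Msun   # lower end of the swept-up mass estimate

t_dyn = R_shell / v_shell                       # kinematic age of the shell
Mdot = (M_shell * v_shell) / (v_wind * t_dyn)   # wind momentum = shell momentum

print(f"Shell age: {t_dyn / yr:.1e} yr")                  # ~3e4 yr
print(f"Mass-loss rate: {Mdot / Msun * yr:.1e} Msun/yr")  # ~7e-7 Msun/yr
# Consistent with the ~8e-7 Msun/yr quoted above, given the rounded inputs.
```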

Cranmer - Ay201a - Stellar Winds

Figure 7: A representative plot of the different types of stellar wind, presented by Steve Cranmer in lectures for Ay201a, Fall 2012

Snell et al. (1980) suggest observational tests of this wind-driven shock model:

  • H2 emission from directly behind the shock
  • FIR emission from dust swept up in the shell; this is a possibly significant source of cooling
  • radio emission from the ionized gas in the wind itself, close to IRS-5, observable with the VLA or a similar facility; an upper limit of 21 mJy at 6 cm for this region was determined by Gilmore (1978), which constrains how much of the wind can be ionized

The results of some newer observations that support this wind model are presented in the following section.

Paper III: A new look at IRS-5 by Wu et al. 2009

Wu et al. (2009) focus on the outflows in L1551 IRS5, the same region studied by Snell et al. (1980), but at higher angular resolution (~3 arcsec; <1000 AU) and with a much smaller field of view (~1 arcmin; ~0.05 pc or 10,000 AU). Using the Submillimeter Array (SMA) shortly after it was formally dedicated, the authors detected the CO(2-1) line and millimeter continuum emission from this low-mass star-forming system. The mm continuum, which comes mostly from thermal dust emission, is used to estimate the dust mass. The CO(2-1) line is used to trace the outflows around the binary (or triple) protostellar system at the center, revealing complex kinematics that suggest the presence of three possible bipolar outflows. The authors construct a cone-shaped outflow cavity model to explain the X-shaped component, and either a precessing outflow or a winding outflow driven by orbital motion to explain the S-shaped component. The third, compact central component is interpreted as material newly entrained by high-velocity jets.

Important concepts

There are several concepts related to radio interferometry that merit some discussion:

1. Extended emission filtered out by the interferometer
This is the well-known ‘missing flux problem’, unique to interferometry. There is a maximum angular scale above which structure cannot be detected by an interferometer, and this scale is set by the minimum projected baseline (i.e., the projected distance between a pair of antennas) in the array. In the channel maps (Fig. 3 of Wu et al. 2009), there is a big gap between 5.8 km/s and 7.1 km/s. This does not mean there is no CO gas at these velocities; rather, the CO distribution there is very extended and homogeneous and is unfortunately filtered out. This effect applies to all channels.
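
A common rule of thumb for the scale that gets filtered out is \theta_{max} \sim \lambda / B_{min}. The sketch below evaluates this for the CO(2-1) line; the 16 m minimum baseline is an assumed, illustrative number, not the actual SMA configuration used by Wu et al.:

```python
import numpy as np

# Rough estimate of the largest angular scale an interferometer can recover,
# theta_max ~ lambda / B_min. The minimum baseline below is assumed for
# illustration only.
c = 2.998e8                      # m/s
nu = 230.538e9                   # Hz, CO(2-1) rest frequency
B_min = 16.0                     # m, assumed shortest projected baseline

lam = c / nu                     # wavelength in metres (~1.3 mm)
theta_max_rad = lam / B_min
theta_max_arcsec = np.degrees(theta_max_rad) * 3600.0
print(f"Largest recoverable scale: ~{theta_max_arcsec:.0f} arcsec")
# Emission smoother than this (like the ambient cloud near the systemic
# velocity) is resolved out, producing the gap seen in the channel maps.
```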

2. Visibility vs. image
The data directly obtained from an interferometer are called visibility data, in which an amplitude and phase are stored for each baseline. The amplitude, as the name suggests, measures the flux, while the phase encodes the position of the emission relative to the phase center (a reference direction on the sky). Mathematically, the visibilities and the image are related through a Fourier transform: Fourier transforming the sampled visibilities gives the ‘dirty’ image, which is the true sky brightness convolved with the point spread function, also called the ‘dirty beam’, set by the (u, v) coverage. For more information, see this online course.
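
As a toy illustration of that Fourier relationship (entirely synthetic, not a real reduction pipeline), the sketch below forms a ‘dirty’ image by sampling only part of the (u, v) plane of a model sky:

```python
import numpy as np

# Toy visibility <-> image demo: sample the Fourier transform of a model sky
# on a sparse set of (u, v) cells, then inverse-transform to get the dirty
# image, which equals the true sky convolved with the dirty beam.
n = 128
sky = np.zeros((n, n))
sky[64, 64] = 1.0          # a point source
sky[40, 80] = 0.5          # a second, fainter source

vis = np.fft.fft2(sky)                         # "true" visibilities on a full grid

rng = np.random.default_rng(0)
sampling = rng.random((n, n)) < 0.15           # keep ~15% of (u,v) cells: sparse coverage
dirty_image = np.fft.ifft2(vis * sampling).real
dirty_beam = np.fft.ifft2(sampling.astype(float)).real   # response to a unit point source

print("Peak of dirty image:", dirty_image.max())
```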

3. Channel maps and P-V diagram
In radio observations, the velocity of spectral lines plays an important role by providing kinematic information about the ISM. Radio data normally have (at least) three dimensions: two spatial (e.g. R.A. and Decl.) and one in frequency or velocity. Velocity alone can be used to identify outflows, turbulence, or infall through analysis of the line profile, or, if the spatial resolution allows (as in this paper), it can be combined with the spatial distribution of the emission in the form of channel maps or a P-V (position-velocity) diagram to reveal the three-dimensional structure. For outflows, we expect to see gas at velocities well separated from the systemic velocity, and a symmetric pattern on the red- and blue-shifted sides is even more persuasive. For efforts to visualize the 3-D datacube in a fancier way, see this astrobite.
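
For concreteness, here is a minimal sketch (using a synthetic cube, not the SMA data) of how a P-V diagram can be extracted from a datacube by collapsing a narrow strip along an assumed outflow axis:

```python
import numpy as np

# Minimal sketch of a position-velocity (P-V) diagram from a (velocity, dec,
# ra) datacube, built here from synthetic data rather than real observations.
nv, ny, nx = 40, 64, 64
v = np.linspace(-10, 10, nv)                      # km/s relative to systemic
cube = np.random.normal(0, 0.05, (nv, ny, nx))    # noise placeholder for real data

# Fake bipolar outflow: blue-shifted emission on one side of centre,
# red-shifted on the other, along a horizontal strip.
cube[5:12, 30:34, 10:30] += 1.0
cube[28:35, 30:34, 34:54] += 1.0

strip = cube[:, 30:34, :]          # narrow strip along the assumed outflow axis
pv = strip.sum(axis=1)             # collapse across the strip width -> (velocity, position)

print("P-V diagram shape (velocity, position):", pv.shape)
# Plotting pv with velocity on one axis and offset on the other reveals the
# symmetric red/blue pattern expected for a bipolar outflow.
```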

The classical image of low-mass star formation

Fig. 8 shows a diagram of a simplified four-step model of star formation (Fig. 7 of Shu et al. 1987). First, dense gas collapses to form a core; second, a disk forms because of the conservation of angular momentum; third, a pair of outflows emerges along the rotational axis; finally, a stellar system is left in place. During this process, bipolar outflows are naturally formed when the wind breaks through the surrounding gas. Therefore, bipolar outflows are useful tools for indirectly probing the properties of protostellar systems.

Shu et al. 1987, figure 7

Figure 8: The formation of a low-mass star (see Shu et al. 1987)

Protostar candidates in L1551 IRS5

Two protostellar components, each with its own ionized jet and circumstellar disk, have been found in this source. In addition, Lim & Takakuwa (2006) found a third protostellar candidate, as seen in Fig. 9. In this paper the authors investigated the possible connection between these protostars and the outflows.

Lim and Takakuwa 2006

Figure 9: Two 7mm continuum peaks in the north and south represent the binary system in IRS 5. Arrows show the direction of jets from each of the protostars. A third protostellar candidate is found close to the northern protostar, marked by a white cross. (see Lim and Takakuwa 2006)

Three outflow components

Based on CO(2-1) emission, the authors found three distinct structures. Here the identification was not only based on morphology, but also on the velocity (see Fig. 5 and Fig. 9 of Wu et al. 2009). In other words, it is based on information in the 3-d datacube, as shown in 3 dimensions by the visualization below.

L551IRS5outflows

Figure 10: 3-D Datacube of the observations. Arrows mark the outflows identified in this paper. Red/blue colors indicate the red/blue-shifted components. The solid arrows mark the X-shaped component; the broad arrows are the S-shaped component; the narrow arrows are the compact central component. There is an axes indicator at the lower-left corner (x – R.A., y – Decl., z- velocity). (visualization by Walker Lu)

  • The X-shaped component

The first one is an X-shaped structure, with its morphology and velocity shown in the paper. Four arms make up an hourglass-like structure with an opening angle of ~90 degrees. The northwest and southwest arms are blue-shifted with respect to the systemic velocity, and the northeast and southeast arms are red-shifted. This velocity trend is the same as that of the large-scale bipolar outflow (Snell et al. 1980; Fig. 6 above). However, the two blue-shifted arms, i.e. the NW and SW arms, are far from perfect: the SW arm is barely seen, while the NW arm consists of two components and presents a different velocity pattern. This component coincides well with the U-shaped infrared emission found to the SW of IRS5 (see Hayashi & Pyo 2009, or Fig. 4 above).

Outflow components

Figure 11: Components of the Outflow. Coordinates are offset from the pointing center. (Figure 7 of Wu et al. 2009)

  • The S-shaped component

The second component is an S-shaped structure. It extends along the symmetry axis of the X-shaped component and of the large-scale outflow, but in the opposite orientation. As the name implies, this component is twisted like an ‘S’, although the western arm is not very prominent.

  • The compact central component

The third component is a compact, high-velocity outflow very close to the protostars. The authors fitted a 2-D Gaussian to this component and made an integrated blue/red-shifted intensity map, which shows a likely outflow feature with the same projected orientation as the X-shaped component and the large-scale outflow.

Modeling the outflows

  • A Cone-shaped cavity model for the X-shaped component

The authors then move on to construct outflow models for these components. For the X-shaped component, a cone-shaped outflow cavity model is proposed (see Fig. 12 and compare with Fig. 11). By carefully selecting the opening angle of the cone and the position angle of its axis, plus assuming a Hubble-like radial expansion, this model can reproduce the X-shaped morphology and the velocity pattern. The origin of this cone is attributed to a high-velocity, well-collimated wind, followed by a low-velocity, wide-angle wind that excavates the cone-shaped cavity. Therefore, what we see as the X-shaped structure is actually the inner walls of the cavity. However, this model cannot incorporate the NW arm into the picture.
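
To make the geometry concrete, here is a toy version of such a cone-plus-Hubble-law model; all parameter values are illustrative assumptions rather than the best-fit numbers from the paper:

```python
import numpy as np

# Toy cone-shaped outflow cavity with Hubble-like expansion (speed
# proportional to distance from the protostar). Parameter values are
# illustrative, not the fit from Wu et al. (2009).
half_opening = np.radians(45.0)   # half of the ~90 deg opening angle quoted above
incl = np.radians(50.0)           # assumed inclination of the cone axis to the line of sight
v0 = 2.0                          # km/s per unit distance (assumed Hubble-like scaling)

# Sample points on the cone surface (z along the cone axis).
phi = np.linspace(0, 2 * np.pi, 200)     # azimuth around the axis
r = np.linspace(0.1, 1.0, 50)            # distance from the apex (arbitrary units)
R, PHI = np.meshgrid(r, phi)
x = R * np.sin(half_opening) * np.cos(PHI)
y = R * np.sin(half_opening) * np.sin(PHI)
z = R * np.cos(half_opening)

# Rotate about the y-axis by the inclination so that z_los is the line of sight.
z_los = z * np.cos(incl) + x * np.sin(incl)
x_sky = -z * np.sin(incl) + x * np.cos(incl)

# Hubble-like expansion: the velocity vector points away from the apex with
# |v| = v0 * R, so its line-of-sight component is simply v0 * z_los.
v_los = v0 * z_los

print(f"Projected velocities span {v_los.min():.1f} to {v_los.max():.1f} km/s")
# Plotting sky position (x_sky or y) against v_los for this lobe, plus its
# point-symmetric counter-lobe (same pattern with the sign of v_los flipped),
# sketches the X-shaped velocity structure of the cavity walls.
```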

Outflow Models

Figure 12: Models from the paper (Figure 10 of Wu et al. 2009)

  • Entrained material model of the compact central component

For the compact central component, the authors argue that it is material that has been entrained recently by the jets. After comparing the momentum of this component and that of the optical jets, they found that the jets are able to drive this component. Moreover, the X-ray emission around this component indicates strong shocks, which could be produced by the high-velocity jets as well. Finally, the possibility of infalling motion instead of outflow is excluded, because the velocity gradient is larger than the typical value and rotation should dominate over infall at this scale.

  • Three models of the S-shaped component

For the S-shaped component, the authors present the boldest idea in this paper. They offer three possible explanations at one go, then analyze their pros and cons over four pages. But before we proceed, do we need to consider other possibilities, such as the S-shaped component not being interconnected at all but consisting of two separate outflows from different objects, or the western arm not actually being part of the outflow but some background or foreground emission? Although the model can reproduce the velocity pattern, it has so many adjustable parameters that we could use it to reproduce almost anything we like (which reminds me of the story of ‘drawing an elephant with four parameters’). Anyway, let’s consider the three explanations individually.

  1. Near and far sides of the outflow cavity walls? This possibility is excluded because it cannot be incorporated into the cone-shaped cavity model described above, and it cannot explain the S-shaped morphology.
  2. A precessing outflow? The outflow should be driven by a jet, and if the jet precesses, the outflow will be twisted as well. The authors considered two scenarios, a straight jet and a bent jet, and found that with a finely tuned precession angle and period the bent-jet model best reproduces the velocity pattern along the symmetry axis. The orbital motion between the third protostellar candidate, thought to be the driving source of this jet, and the northern protostellar component is therefore proposed to cause the precession of the jet and thus of the outflow. A schematic image is shown in Fig. 13.
  3. A winding outflow due to orbital motion? The difference from the previous explanation is that in a precessing outflow the driving source itself is swung around by its companion protostar, so the outflow is point-symmetric, whereas in a winding outflow the driving source is unaffected but the outflow is dragged by the gravity of the companion as it propagates, so it has mirror symmetry with respect to the circumstellar disk plane. Again, if we fine-tune the parameters in this model, we can reproduce the velocity pattern.

S-shaped component

Figure 13: The best-fit model for the S-shaped component, a bent precessing jet. Note the velocity patterns for red/blue lobes are not symmetric (Figure 11 of Wu et al. 2009)

A problem here, however, is that although both the precessing jet and the winding outflow models are assumed to be symmetric, the authors use asymmetric velocity patterns to fit the two arms of the S-shaped component (see Figs. 12 and 13 in the paper). In the winding outflow model, for instance, in order to best fit the observed velocities the authors start the eastern arm at a net velocity of 2 km/s at the center while starting the western arm at ~1.2 km/s. This means the two arms start at different velocities at the center.

Discussion

The interpretation of the X-shaped and S-shaped structures in this paper rests on an analysis of the kinematics and comparison with toy models. However, the robustness of the conclusions is subject to several questions: for example, how do we explain the uniqueness of the NW arm of the X-shaped structure? Is the X-shaped structure really a single bipolar outflow system, or just two crossing outflows? Why is the compact central component filtered out around the systemic velocity? Is the S-shaped structure really a twisted outflow, or is it two outflow lobes from two separate protostars?

All of these questions may trace back to the missing-flux problem discussed above. Observations from a single-dish telescope could be combined with the interferometric data to: 1) find the front and back walls of the outflow cavity, given sufficient sensitivity, and confirm that the X-shaped component is interconnected; 2) detect the extended structure around the systemic velocity and thus verify the nature of the compact central component; and 3) recover at least part of the flux in the SW arm of the X-shaped component and the western arm of the S-shaped component, and so better constrain the models.

Conclusions

Using radio and infrared observations, these three papers together provide an integrated view of jets and outflows around YSOs in L1551. The near-infrared observations of Hayashi & Pyo (2009) searched for [Fe II] and H2 features produced by shocks, and found quite different configurations among the YSOs in this region. Some have complicated IR emission, such as HL/XZ Tau, while others like L1551 NE and IRS5 have well-collimated jets traced by [Fe II]. Among them, L1551 IRS5 is particularly interesting because it shows two parallel jets. The pioneering work of Snell et al. (1980) revealed a bipolar molecular outflow traced by 12CO from IRS 5, interpreted as being created by a strong stellar wind from the young star. The high-angular-resolution observations of Wu et al. (2009) confirm this outflow component and reveal two more possible bipolar outflows originating from the binary (or triple) system in IRS 5. All of these observations show us that jets and outflows are essential in star formation, not only by carrying away angular momentum so that YSOs can continue accreting, but also by stirring up ambient gas and feeding turbulence into the ISM, which might determine the core mass function, as mentioned in Alves et al. 2007.

References

Cudworth, K. M. and Herbig, G. 1979, AJ, 84, 548.
Dopita, M. 1978, ApJ Supp., 37, 117.
Gilmore, W. S. 1978, Ph.D. thesis, University of Maryland.
Hayashi, M. and Pyo, T.-S. 2009, ApJ, 694, 582-592.
Hollenbach, D. and McKee, C. F. 1989, ApJ, 342, 306-336.
Lim, J. and Takakuwa, S. 2006, ApJ, 653, 1.
Lynds, B. T. 1962, ApJ Supp., 7, 1.
Raymond, J. C. 1979, ApJ Supp., 39, 1.
Reipurth, B., et al. 2000, AJ, 120, 3.
Pineda, J. et al. 2011, ApJ Lett., 739, 1.
Shu, F. H., Adams, F. C., Lizano, S. 1987, ARA&A, 25, 23-81.
Smith, M. D. 1994, A&A, 289, 1.
Snell, R. L. 1979, Ph.D. thesis, University of Texas.
Strom, S. E., Grasdalen, G. L. and Strom, K. M. 1974, ApJ, 191, 111.
Vrba, F. J., Strom, S. E. and Strom, K. M., 1976, AJ, 81, 958.
Wu, P.-F., Takakuwa, S., and Lim, J. 2009, ApJ, 698, 184-197.

ARTICLE: A Filament Runs Through It: EVLA observations of B5

In Journal Club, Journal Club 2013 on April 1, 2013 at 4:37 am

“EVLA OBSERVATIONS OF THE BARNARD 5 STAR-FORMING CORE: EMBEDDED FILAMENTS REVEALED”

by JAIME E. PINEDA, ALYSSA A. GOODMAN, HÉCTOR G. ARCE, PAOLA CASELLI, STEVEN LONGMORE, and STUARTT CORDER

Post by Zachary Slepian

0. Main points

Like a good poem, this paper is short but thought-provoking. For a cosmologist, perhaps the most interesting take-aways relate to star formation. First, the paper helps with a long-standing problem (trace it back to Larson’s Laws!) in how stars form—lots of molecular clouds have velocity dispersions too high to allow star formation (or to be explained by thermal motions alone).  But Pineda et al. find that in at least one dense core (within a molecular cloud), velocity dispersions are small enough to make it likely that a Jeans-like instability could form stars.  As evidence for this, the authors find that the Jeans length is on the order of the separation between a young stellar object in the core and a starless condensation to its north, the latter perhaps being a precursor for another star.  The fact that these two objects are separated by a Jeans length suggests they both stem from Jeans-like collapse and fragmentation.
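
For a sense of the numbers involved, here is a back-of-the-envelope Jeans length for a cold dense core; the temperature and density are round values assumed for illustration, not the measured B5 values:

```python
import numpy as np

# Ballpark Jeans length for a cold, dense core. The temperature and density
# below are assumed round numbers, not the values measured for B5.
k_B = 1.381e-16      # erg/K
m_H = 1.673e-24      # g
G = 6.674e-8         # cm^3 g^-1 s^-2
pc = 3.086e18        # cm

T = 10.0             # K, assumed gas temperature
mu = 2.33            # mean molecular weight per particle (H2 + He)
n_H2 = 1e5           # cm^-3, assumed H2 number density

c_s = np.sqrt(k_B * T / (mu * m_H))       # isothermal sound speed (~0.19 km/s)
rho = 2.8 * m_H * n_H2                    # mass density, 2.8 m_H per H2 including He
lambda_J = c_s * np.sqrt(np.pi / (G * rho))

print(f"c_s = {c_s / 1e5:.2f} km/s, Jeans length = {lambda_J / pc:.2f} pc")
# ~0.06 pc (~1e4 AU) for these inputs, the same order as the YSO-condensation
# separation discussed above.
```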

The paper’s second major conclusion is that there is a 5000 AU long filament at the center of the core.  This is interesting on its own, because the filament turns out to be well fit by Ostriker’s 1964 model of an isothermal cylinder in hydrostatic equilibrium, in contrast to a different filament observed in 2011 in another star-forming region, which is not fit by this model and is also much smaller than Pineda et al.’s.   More significantly, though, the filament Pineda et al. find is an additional piece of evidence for the efficacy of a Jeans-like instability, as it could easily have resulted from Jeans collapse.

Ultimately, the low velocity dispersion is the common factor between these two pieces of evidence for Jeans collapse: it is only in the absence of turbulent support that Jeans collapse occurs in cores, and the low velocity dispersion implies this absence.

In what follows, I 1) offer some questions, with linked answers on separate pages, to help you think through the paper’s text and figures, and 2) take-aways summarizing the main point of each figure. Details on how the observations were done, which aren’t essential to understanding the physics,  are here, while I go through Ostriker’s 1964 isothermal filament model here.  I close with some recommended further reading and a linked summary of Alyssa and collaborators’ model of how coherence emerges in cores.

1. Introduction

As noted earlier, molecular clouds previously have been found to have “supersonic” velocity dispersions—what does this mean? (1)

These motions must be dissipated to allow collapse and star formation.  Why?  After all, such motions will increase the Jeans length.   But can’t we just treat them as producing an extra effective pressure, and presume that above the Jeans length these regions still collapse? (2)
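
(One way to make the ‘effective pressure’ idea concrete, as a rough sketch rather than anything from the paper: replace the sound speed in the thermal Jeans length with an effective dispersion, \lambda_J \approx \sigma_{eff}\sqrt{\pi/(G\rho)} with \sigma_{eff}^2 = c_s^2 + \sigma_{NT}^2, where \sigma_{NT} is the non-thermal, turbulent dispersion. A supersonic \sigma_{NT} then inflates \lambda_J well above its purely thermal value.)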

The technique used to study these dense cores is NH_3 mapping, specifically the (1,1) transition.  Where is another place you can find NH_3? Why use NH_3 rather than our more popular friend CO?  What the heck is the (1,1) transition? (3)

The authors note that previous work with the Green Bank Telescope (GBT) has already mapped the ammonia in B5—why do it again? Hint: they used an interferometer, the Expanded Very Large Array (EVLA), which has better resolution.  Why would they want better resolution if they are interested in studying velocity dispersions? (4)

2. Results

When in a rush, it is apparently standard astronomy practice to scan an article’s figures, introduction, and conclusion—after all, astronomers, unlike particle physicists, are visual people.  So I’ll focus my discussion on guiding you through the figures.

Figure 1

Figure 1 from Pineda et al. 2011.

. . . explains why this paper was done even though observations had already been made of B5 with the GBT.  Compare left to right panel: what is the difference? (5)

The figures say they are integrated intensity, but the right-hand panel is in “mJy beam inverse km/s”.  Help!  That does not look like units of intensity.  Try resolving this yourself before clicking the answer! (6)

  • Take-aways: The regions with greater integrated intensity are those with higher density, so this figure is showing us where the structures are likely to form or have formed (e.g. the filament), since the densest regions will collapse first (\tau_J\propto 1/\sqrt{\rho}).  The right panel has zoomed in on the regions with subsonic velocity dispersions, which are bounded in the left panel by the orange contour.

Figure 2

pineda2011_fig2

Figure 2 from Pineda et al. 2011.

What is the most important part of this figure? I’ll give you a hint: it is black and white, and not so exciting looking! (7)

What are the two panels actually showing?  This is not so obvious! (8)

  • Take-aways: the left panel shows that the filament is of order the Jeans length long, and also that the dense starless condensation and young stellar object (YSO) are separated by roughly a Jeans length.  Hence, Jeans collapse is the likely culprit.  The right panel shows the velocity dispersion: regions with darker blue have smaller \sigma_v and hence more coherent velocities; they become less coherent near the YSO, possibly due to feedback.

Figure 3

Figure 3 from Pineda et al. 2011.

Is Figure 3 redundant? (9)

Why do the authors do two different histograms in Figure 3?  Why divide into bits of the region near the YSO and not near it? Why might the red histogram (bits of the region near the YSO) have a higher centroid value of \sigma_v and width than that for bits of the region not near the YSO? Extra credit: can you use Figure 3 to estimate how many other stars we should eventually expect to form near the YSO already observed (assume there are no stellar outflows or winds)? (10)

And why do they use the criterion of two beam widths as the cut between objects close to the YSO and far from the YSO? (11)

Finally, in the caption for Figure 3, they note \mu=2.33. Why?  There’s a very simple answer! (12)

  • Take-aways: This Figure is a different way of seeing the same information as the right panel of Figure 2.  It shows that the velocity dispersion is by and large subsonic, emphasizing that the velocity is fairly coherent, especially in regions away from the YSO.  The red histogram in the Figure emphasizes that near the YSO the velocity dispersion is higher, though still subsonic, likely due to feedback as noted earlier, e.g. radiation from the YSO or interaction between an outflow or stellar wind and the dense surrounding gas (a quick numerical check of the sonic threshold follows below).
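
For reference, here is the arithmetic behind \mu = 2.33 and the sonic threshold it implies; the 10 K temperature is an assumed round number:

```python
import numpy as np

# The "subsonic" benchmark: isothermal sound speed for molecular gas at an
# assumed 10 K. The mean molecular weight mu = 2.33 follows from counting
# particles in gas that is all H2 with one He atom per ten H nuclei:
# mass per H nucleus = (1 + 0.4) m_H, particles per H nucleus = 0.5 + 0.1.
k_B = 1.381e-16     # erg/K
m_H = 1.673e-24     # g

mu = (1.0 + 4.0 * 0.1) / (0.5 + 0.1)      # = 2.33
T = 10.0                                   # K, assumed core temperature
c_s = np.sqrt(k_B * T / (mu * m_H)) / 1e5  # km/s

print(f"mu = {mu:.2f}, c_s = {c_s:.2f} km/s")
# ~0.19 km/s: velocity dispersions below this are "subsonic" in the sense
# used in Figure 3.
```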

Figure 4

Figure 4 from Pineda et al. 2011.

Is perhaps the most difficult figure at first glance.  A radius of zero on the horizontal axis marks the center of the filament; + and – move right and left within the yellow box around the filament in Figure 1. What is the key point of panel a? (13)

Panel b is perhaps more interesting, because it shows the filament is isothermal.  The model with p=4, which is Ostriker’s predicted value for p in eqn. (1) if the filament is isothermal, is the better fit to the filament.  What is the role of the blue curve, the beam response, and why should we care about that? (14)

  • Take-aways: Imagine the filament as a cylinder. The top panel shows that as one moves radially outward through concentric shells of the cylinder, the velocity dispersions remain subsonic and roughly constant until one is well away from the filament, which is isothermal and resolved.

Pineda et al. note that their filament contrasts with the ones detected by Herschel in other star-forming regions (Arzoumanian et al. 2011), which are not isothermal, and are fit with p=2 instead (see below).
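
(For reference, my summary of the fitting form being discussed: the Plummer-like profile usually fit to filament cross-sections is \rho(r) = \rho_c\left[1 + (r/R_{flat})^2\right]^{-p/2}, where \rho_c is the central density and R_{flat} the radius of the flat inner plateau. Ostriker’s 1964 hydrostatic isothermal cylinder is the p = 4 case, with R_{flat}^2 = 2 c_s^2/(\pi G \rho_c), while the Herschel filaments of Arzoumanian et al. (2011) prefer the shallower p \approx 2.)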

Showing how a Herschel filament is better fit with a p=2, non-isothermal profile. From Arzoumanian et al. 2011, Fig. 4.

3. Handouts from JC discussion and further reading

Alyssa and collaborators proposed a model of why coherence emerges in “COHERENCE IN DENSE CORES. II. THE TRANSITION TO COHERENCE”, which I go through here.

How does a turbulently-supported core turn into, for instance, a filament? Stella Offner knows!  See “Observing Turbulent Fragmentation in Simulations: Predictions for CARMA and ALMA” by Stella S.R. Offner, John Capodilupo, Scott Schnee, and Alyssa A. Goodman.

For the discovery of the sharp transition to a dense, coherent (velocity) core, alluded to in the paper, see “Direct observation of a sharp transition to coherence in Dense Cores” by Jaime E. Pineda, Alyssa A. Goodman, Héctor G. Arce, Paola Caselli, Jonathan B. Foster, Philip C. Myers, and Erik W. Rosolowsky.

For extremely recent (March 2013) discussion of core formation and filaments in another region, see “Cores, filaments, and bundles: hierarchical core formation in the L1495/B213 Taurus region” by A. Hacar, M. Tafalla, J. Kauffmann, and A. Kovacs.

For the Herschel filament observations alluded to in the paper, see “Characterizing interstellar filaments with Herschel in IC 5146” by D. Arzoumanian et al., 2011.

For context to “COHERENCE IN DENSE CORES. II. THE TRANSITION TO COHERENCE”, you may also want to see the companion paper.

“COHERENT DENSE CORES. I. NH3 OBSERVATIONS” by J.A. Barranco and A.A. Goodman.