Harvard Astronomy 201b

Archive for the ‘Uncategorized’ Category

Details on radio observations

In Uncategorized on April 26, 2013 at 9:18 pm

How were the observations done?

If you are a theorist, you probably don’t like words like “bandpass”, and hate words like “calibration.”  If you are an observer, you probably know and love these words.  But even theorists have to understand how observations are done: they’re what keep science science.  Here I want to briefly explain some aspects of the paper’s discussion of how the data were taken that are opaque on a first read.

First of all, given that the NH_3 rest frequency is roughly 24 GHz, we can find the energy of the transition: E=h\nu = 10^{-4}\;\rm{eV}.  This corresponds to an excitation temperature of 1.2 K, which we can also translate into a velocity via v\sim (E/m)^{1/2} \sim 24\; {\rm m/s}.  A typical molecular cloud temperature is on the order of 20 K, high enough that there should be many molecules in the upper energy state (think of it as a toy-model 2 level system) and hence many upper to lower transitions occurring (leading to emission at 24 GHz).
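These order-of-magnitude conversions are easy to check numerically; here is a quick sketch (SI constants, rounded):

```python
h = 6.626e-34           # Planck constant [J s]
k_B = 1.381e-23         # Boltzmann constant [J/K]
eV = 1.602e-19          # 1 eV in joules
m_NH3 = 17 * 1.661e-27  # ammonia mass, ~17 amu [kg]

nu = 24e9               # NH3 rest frequency, roughly [Hz]
E = h * nu              # transition energy [J]
T_ex = E / k_B          # equivalent temperature [K]
v = (E / m_NH3) ** 0.5  # equivalent velocity [m/s]

print(E / eV)   # ~1e-4 eV
print(T_ex)     # ~1.2 K
print(v)        # ~24 m/s
```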

A 1 MHz window means the observations were done between 24 GHz − 500 kHz and 24 GHz + 500 kHz, and dividing this interval by 256 gives the 3.9 kHz per channel quoted.  This can be translated into a velocity by thinking of it using the small-z redshift formula z\approx v/c. Here, we have a fractional frequency shift \Delta \nu / \nu \simeq v/c, and setting \Delta \nu / \nu = 3.9 kHz/24 GHz leads to the velocity quoted (0.049 km/s).
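The channel-width arithmetic can likewise be sketched in a few lines:

```python
c = 2.998e5          # speed of light [km/s]
nu0 = 24e9           # observing frequency [Hz]
window = 1e6         # total bandwidth [Hz]
n_chan = 256

d_nu = window / n_chan   # width per channel [Hz]
d_v = c * d_nu / nu0     # small-z Doppler conversion [km/s]
print(d_nu, d_v)         # ~3906 Hz per channel, ~0.049 km/s
```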

The baselines given (35 meters up to 1 km) determine the spatial resolution: just good old Carroll and Ostlie diffraction limit, \theta = 1.22 \lambda/D, where \theta is the angular resolution, \lambda the wavelength of light, and D the diameter of the aperture (here, the baseline).  24 GHz has \lambda = 1.25 cm, so with a 1 km baseline the angular resolution is about 3’’ (for comparison, this is roughly 60 times coarser than the ~0.05’’ angular resolution of the Hubble Space Telescope at 500 nm).
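A quick check of the diffraction limit for both quoted baselines, plus the HST comparison:

```python
import math

c = 2.998e8        # speed of light [m/s]
lam = c / 24e9     # wavelength at 24 GHz [m], ~1.25 cm

def resolution_arcsec(baseline_m):
    """Diffraction limit theta = 1.22 lambda / D, converted to arcsec."""
    theta = 1.22 * lam / baseline_m
    return math.degrees(theta) * 3600.0

print(resolution_arcsec(35.0))                # shortest baseline: ~90''
print(resolution_arcsec(1000.0))              # longest baseline:  ~3''
print(1.22 * 500e-9 / 2.4 * 206265)           # HST, 2.4 m mirror at 500 nm: ~0.05''
```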

The neper is a unit of optical depth; 0.0689 nepers is optically thin (and measures the optical depth due to the Earth’s atmosphere, since the authors note this corresponds to fair weather; there is negligible optical depth from sources in space at this frequency).  It is interesting that the optical depth is given at 22 GHz while the observations are made at 24 GHz; this is because 22 GHz is a standard fiducial frequency that allows comparison with the weather conditions under which other radio observations might have been made.

Now, calibration—theorists’ bane.  The paper’s discussion is compact and therefore somewhat confusing to those uninitiated into the secrets of radio astronomy, but understanding it is worthwhile for the insight it can yield into what actually goes on at these remote “large arrays” in the desert (alien storage?).

Possible very large array user? Image credit: www.123rf.com

In a radio interferometric observation, what you actually measure is the amplitude and phase of the signal in each channel (here, there are 256): i.e., at each different frequency corresponding to the channels in the window about 24 GHz.  (Note: interferometric means that phases are being compared between signal received at spatially separated points on Earth: this allows the difference in path length from the source to two antennae to be computed, and hence the direction to the source.)

The amplitude and phase at these different frequencies are desirable because they give (eventually) intensity as a function of frequency, or equivalently a spectral energy distribution (SED).  This in turn allows inferences about the properties of the source (e.g. dust grain size from SEDs of debris disks, cf. problem set 2, problem 4, where \beta describes the shape of the SED).

There will be some intrinsic noise in both amplitude and phase, which you want to eliminate.  For a source that has a flat spectrum, you know any bumps you see at a particular frequency are due to noise in the channel at that frequency.  This is the purpose of having an amplitude calibrator: typically quasars are used because they have flat spectra.

Now, the amplitude is measured by the amount of electrical current coming from a given channel: the more photons at that frequency hit the antenna, the more current.  But astronomers care about flux, not current.  The absolute flux calibrator is a source with a known flux; measuring the current this source produces allows the conversion factor between current and flux to be deduced.  Using a source with known flux also allows one to calibrate the bandpass: the flux should be zero outside the window, but to know this is because the window is working, you have to be sure the flux is not just coincidentally zero at frequencies outside the window anyway.

Finally, the astute reader may notice that noise is reported in mJy/beam.  The noise should be linear in the area of the beam, so a larger beam would have more noise.  Two sets of observations done with different beam sizes would thus have different noise levels for this reason alone; reporting noise per beam allows direct comparison of the quality of observations made with different beam sizes.


A model of the emergence of coherence

In Uncategorized on April 14, 2013 at 4:50 pm

Download PDF description of the Goodman et al. (1998) model here

Ostriker’s 1964 Isothermal Cylinder model

In Uncategorized on April 14, 2013 at 4:43 pm

Particularly impressive is that, if you read the fine print, Jerry Ostriker was an NSF graduate fellow (meaning he was a grad student!) when he wrote this paper, which solves differential equations analytically at a level of technical virtuosity undoubtedly beyond anyone reared in the age of Mathematica.  Ostriker obtains solutions for polytropic cylinders, of which an isothermal cylinder is the case where n=\infty.  One has

P=K_n \rho ^{1+1/n},

which leads to the fundamental equation

K_n (n+1) \nabla ^2 \rho ^{1/n} = -4\pi G\rho.

Using the transformation

r \equiv \bigg[ \frac{(n+1)K_n}{4\pi G \lambda ^{1-1/n} } \bigg]^{1/2} \xi,

\rho \equiv \lambda \theta ^n,

one has a version of the Lane-Emden equation

\frac {1}{\xi}\frac{d}{d\xi} \big(\xi \frac{d\theta}{d\xi} \big) =\theta''+\frac{1}{\xi}\theta'=-\theta ^n.

For n=0, 1, and infinity there is a closed form solution, for other n a power series solution.  In the particular case of an isothermal cylinder, the EOS is that for an ideal gas, and one has

r\equiv \bigg[ \frac{K_I}{4\pi G \lambda} \bigg]^{1/2}\xi

and

\rho=\lambda e^{-\psi}.

Using the fundamental equation noted above and manipulating yields

\psi''+\frac{1}{\xi}\psi'=e^{-\psi}.

Letting z=-\psi +2\ln \xi and t=\sqrt{2}\ln\xi, we find

2\frac {d^2z}{dt^2}+e^z=0.

Letting z=\ln y the equation may be integrated to give

\psi(\xi)=2\ln\big(1+\frac{1}{8} \xi^2\big).

Using these results in expressions Ostriker provides in the paper that give general forms for density and mass, one has

\rho= \frac{\rho_0}{\big(1+\xi^2/8 \big)^2}

and M(\xi)=\frac{2 k_B T}{\mu m_0 G} \frac{1}{1+8/\xi^2}.
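The quoted solution \psi(\xi)=2\ln(1+\xi^2/8) can be verified numerically against the isothermal equation \psi''+\frac{1}{\xi}\psi'=e^{-\psi} by finite differences:

```python
import math

def psi(xi):
    """Ostriker's isothermal-cylinder solution."""
    return 2.0 * math.log(1.0 + xi * xi / 8.0)

# Finite-difference check that psi'' + psi'/xi = exp(-psi):
h = 1e-5
for xi in (0.5, 1.0, 3.0, 10.0):
    d1 = (psi(xi + h) - psi(xi - h)) / (2 * h)            # psi'
    d2 = (psi(xi + h) - 2 * psi(xi) + psi(xi - h)) / (h * h)  # psi''
    lhs = d2 + d1 / xi
    rhs = math.exp(-psi(xi))
    assert abs(lhs - rhs) < 1e-4
print("psi satisfies the isothermal Lane-Emden equation")
```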

The expression for density should look familiar from the Pineda paper: it gets integrated to yield his eqn. (1) for the surface density of the cylinder.

Answer to question (14)

In Uncategorized on March 28, 2013 at 7:19 am

The blue curve, the beam response, shows that the beam has the resolution needed to distinguish the two different models.

Answer to question (13)

In Uncategorized on March 28, 2013 at 7:18 am

That even near the filament the velocity is pretty much sub-sonic—the upper dotted line is the sound speed. This further supports the claim that Jeans collapse may be the mechanism behind the filament, although there may have been post-processing or feedback after the filament formed, so we should take this with a grain of salt.

Answer to question (12)

In Uncategorized on March 28, 2013 at 7:17 am

Molecular cloud—particles are mostly molecular rather than atomic hydrogen, meaning \mu\sim 2.  The .33 accounts for heavier species, such as helium.  \mu=2.33 corresponds to assuming typical Milky Way elemental abundances.

Answer to question (11)

In Uncategorized on March 28, 2013 at 7:16 am

At first glance the cut seems arbitrary. If the thought is that the YSO is heating a region around it, leading to bigger velocity dispersions, the size of that region should be dictated by something like the physics of a Stromgren sphere, and have nothing to do with the beam size.  A similar comment applies to the case where the region is heated by a stellar wind or outflow.

However, if the beam has width 6”, and we assume that represents the full-width at half maximum, then one sigma is about 2.5” (since FWHM = 2.35\sigma), and 12” corresponds to roughly two beam widths (almost 5\sigma) of the Gaussian beam profile.  So, while technically structure >6” would be resolved, anything within 12” of the beam center will be smoothed by a Gaussian profile.  Hence there is potential for contamination of the black histogram by the YSO if one is closer to it than 12”.

Answer to question (10)

In Uncategorized on March 28, 2013 at 7:14 am

Radiation from the YSO, interaction between outflow/stellar wind and gas.

 

Answer to question (9)

In Uncategorized on March 28, 2013 at 7:13 am

To some extent, yes—it shows the velocity dispersions are mostly subsonic, which is also evident from Figure 2b if you calculate c_{s,ave}=0.2\;\rm{km/s} and see that most of the region has lower values than this. But Figure 3 shows you just how much of the region doesn’t (i.e. is supersonic), and that most of the bits that are supersonic are near the young stellar object (YSO).

Answer to question (8)

In Uncategorized on March 28, 2013 at 7:12 am

The left-hand panel shows the centroid velocity: in each little beam-size cell, one has a Gaussian-esque line profile, with some centroid whose velocity can be calculated using the line’s red or blue shift.  The right-hand panel shows, in each little beam-size cell, what the width of this profile is.  Interestingly, one can compare the sigma of the centroid velocities in the boxed region of the left panel occupied by the filament with the sigma in each beam-sized cell and use this ratio to test the geometry of the structure in the region.  This is essentially because of Larson’s Laws: Philip Mocz’s post on Larson’s laws explains how this ratio depends on the geometry, and calculates it for several idealized cases.  The most relevant one for us is the long sheet, which in 2-d projection would look similar to a filament.  The ratio predicted for such a geometry is 2.67.  Estimating by eye the needed quantities from Figure 2 (try this yourself), we find \sigma_v/\Delta \sigma_v\sim 3—offering somewhat independent confirmation that we have a filament.  Note this calculation will be somewhat sensitive to how you choose the region over which to calculate the ratio—but we already know what we are looking for (the filament), so we can choose the region accordingly. However, this is why I qualified this with “somewhat” above.

Answer to question (7)

In Uncategorized on March 28, 2013 at 7:11 am

The Jeans length drawn on the figure, between the YSO and the starless condensation—it is one of the key pieces of evidence that Jeans collapse may be happening.

Answer to question (6)

In Uncategorized on March 28, 2013 at 7:10 am

mJy (millijansky) is a unit of flux density: flux per unit frequency.  Integrated intensity is therefore mJy times Hz (or 1/s).  One can convert Hz to km/s by determining what velocity is implied by a given frequency shift \Delta\nu/\nu, and that is what has been done here, with \nu \approx 23.7 GHz.

Answer to question (5)

In Uncategorized on March 28, 2013 at 7:07 am

Resolution!  And thus probing details of velocity dispersion. The GBT image does not really resolve the filament, and only marginally shows the starless condensation—which is separated from the young stellar object by a Jeans length, a key piece of evidence for the efficiency of Jeans collapse in the region.

 

Answer to question (4)

In Uncategorized on March 28, 2013 at 7:05 am

Because the GBT couldn’t resolve spatial variations in the velocity dispersion or column density.  Resolving the former offers more information on the problem of supersonic vs. subsonic dispersions, already discussed as key for star formation, and column density spatial variations probe whether there are structures along the line of sight—as indeed the authors find (the filament!).

Answer to question (3)

In Uncategorized on March 28, 2013 at 7:04 am

In your bathroom, if you clean it—NH_3 is actually just ammonia!  Ammonia is common in regions near the galactic center (Kaifu et al. 1975). And CO  becomes optically thick before ammonia—and optically thick is optically useless when you want to find densities!  Ammonia allows study of denser regions—exactly what is needed to probe star formation.

The (1,1) transition is actually quite exotic: no mere rotational line here.  Ammonia is shaped like a triangular pyramid, with the three H’s at the base and the N on top.  The N can quantum mechanically tunnel through the potential barrier of the base, inverting the pyramid.  So the (1,1) transition is also known as an “inversion” transition.

You might wonder why there is a potential barrier at all—after all, each H atom is neutral, and so is the N on top.  But, if you were an electron on one of the H’s, where would you want to be? Certainly far from the other two Hs’ electrons!  Hence each H’s electron will spend most of its time outside the base, meaning the triangle formed by the H’s will be slightly positively charged on the inside. Similarly, the electron on the N will want to be as far from the other three electrons as it can be, so it will hover above the N, meaning the bit of the N facing the pyramid’s triangular base will be slightly positively charged.  Ergo, potential barrier.

Making simple assumptions, we can estimate the energy of the inversion.  Assuming the distance to the base’s center for each H’s electron is (a+l), with a the Bohr radius and l the ammonia bond length, 1 angstrom, and that the N’s electrons are (a+l) above this center, we calculate the potential where the 7 protons in N are.  Converting to energy and thence frequency, we obtain \nu\sim 1 \;\rm{GHz}. Incidentally, to go further one could use the WKB approximation to estimate the tunneling probability.  Given a minimum flux per beam width to which the telescope is sensitive (14 mJy here is the noise), one could then place a lower bound on the column density observable with this transition by a particular instrument, and assuming isotropy, one could get a density from the column density.

Answer to question (2)

In Uncategorized on March 28, 2013 at 7:02 am

“Inevitably when large enough” is the key.  One can indeed treat the supersonic motions as contributing an additional pressure that raises the Jeans length—the problem is, they raise it above the typical size of a dense core!  Hence for dense cores to collapse, the turbulence must be dissipated so that the Jeans length drops below the size of the core.
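A back-of-the-envelope sketch of this argument; the core density and turbulent dispersion below are assumed illustrative values, not numbers from the paper:

```python
import math

G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
m_H = 1.674e-27      # hydrogen mass [kg]
pc = 3.086e16        # parsec [m]

# assumed illustrative values for a dense core region
n = 1e5 * 1e6                  # number density: 1e5 cm^-3, in m^-3
rho = 2.33 * m_H * n           # mass density [kg/m^3], mu = 2.33
c_s = 0.2e3                    # thermal sound speed [m/s]
sigma_turb = 1.0e3             # supersonic turbulent dispersion [m/s]

def jeans_length_pc(c_eff):
    """Jeans length lambda_J = c_eff * sqrt(pi / (G rho)), in pc."""
    return c_eff * math.sqrt(math.pi / (G * rho)) / pc

L_thermal = jeans_length_pc(c_s)
L_turb = jeans_length_pc(math.hypot(c_s, sigma_turb))
print(L_thermal, L_turb)   # ~0.07 pc vs ~0.36 pc: turbulent "pressure"
                           # pushes the Jeans length above a ~0.1 pc core
```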

Answer to question (1)

In Uncategorized on March 28, 2013 at 7:00 am

Thermal velocity dispersions mean you have a spectral line with some width, and the width is given by thermal broadening, so that \sigma_T=\sqrt{k_B T/\mu m_H} from the Equipartition Theorem. This also happens to be the sound speed!  Is it mere coincidence that thermal velocities are on order the sound speed? No!  Thermal motions are (no surprise) set by the temperature.  The sound speed is set by pressure, since sound waves are just pressure-density waves, and the pressure is also ultimately set by the temperature. So it’s not coincidence that thermal motions are sub-sonic, and supersonic motions cannot be explained by thermal broadening.
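A quick check that the thermal dispersion (equivalently the isothermal sound speed) is ~0.2 km/s at a typical cloud temperature, assuming \mu=2.33:

```python
import math

k_B = 1.381e-23   # Boltzmann constant [J/K]
m_H = 1.674e-27   # hydrogen mass [kg]
mu = 2.33         # mean molecular weight for molecular gas
T = 10.0          # assumed typical cloud temperature [K]

sigma_T = math.sqrt(k_B * T / (mu * m_H))   # 1D thermal dispersion = sound speed
print(sigma_T)   # ~190 m/s, i.e. ~0.2 km/s
```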

ARTICLE: The mass function of dense molecular cores and the origin of the IMF (2007)

In Journal Club, Journal Club 2013, Uncategorized on March 12, 2013 at 9:30 pm

The mass function of dense molecular cores and the origin of the IMF

by Joao Alves, Marco Lombardi & Charles Lada
Summary by Doug Ferrer

Introduction

One of the main goals of researching the ISM is understanding the connection between the number and properties of stars and the properties of the surrounding galaxy. We want to be able to look at a stellar population and deduce what sort of material it came from, and the reverse–predict what sort of stars we should expect to form in some region given what we know about its properties. The basics of this connection have been known for a while (eg. Bok 1977). Stars form from the gravitational collapse of dense cores of molecular clouds. Thus the properties of stars are the properties of these dense cores modulated by the physical processes that happen during this collapse.

One of the key items we would like to be able to derive from this understanding of star formation is the stellar initial mass function (IMF)–the number of stars of a particular mass as a function of mass. Understanding how the IMF varies from region to region and across time would be extremely useful in many areas of astrophysics, from cosmology to star clusters. In this paper, Alves et al. attempt to explain the origin of the IMF and provide evidence for this explanation by examining the molecular cloud complex in the Pipe nebula. We will look at some background on the IMF, then review the methods used by Alves et al. and asses the implications for star formation.

The IMF and its Origins

The IMF describes the probability density for any given star to have a mass M, or equivalently the number of stars in a given region with a mass of M. Early work done by Salpeter (1955) showed that the IMF for relatively high mass stars ( M > 1 M_{sun}) follows a power law with index \alpha \approx -1.3 to -1.4.  For lower masses, the current consensus is for a break below 1 M_{sun}, and another break below .1 M_{sun} , with a peak around .3 M_{sun}. A cartoon of this sort of IMF is shown in Fig. 1.  The actual underlying distribution may in fact be log-normal. This is consistent with stars being formed from collapsing over-densities caused by supersonic turbulence within molecular clouds (Krumholz 2011). This is not particularly strong evidence, however, as the log-normal distribution can result from any process that depends on the product of many random variables.
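A minimal sketch of such an IMF: a log-normal joined continuously to a power law above 1 M_{sun}. The parameters below (peak mass, width, slope) are illustrative assumptions, not a published fit:

```python
import math

def imf_chabrier_like(m, m_c=0.2, sigma=0.55, alpha=1.35):
    """dN/dlog(m): log-normal below 1 M_sun joined continuously to a
    power law above it. Illustrative parameters, not Chabrier's fit."""
    def lognormal(x):
        return math.exp(-(math.log10(x) - math.log10(m_c)) ** 2 / (2 * sigma ** 2))
    if m <= 1.0:
        return lognormal(m)
    return lognormal(1.0) * m ** (-alpha)   # continuous at 1 M_sun

# peaks near m_c, falls off toward both brown-dwarf and high masses
print(imf_chabrier_like(0.02), imf_chabrier_like(0.2), imf_chabrier_like(2.0))
```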

Example of the IMF

Figure 1: An illustration of the accepted form of the IMF. There is a power-law region for masses greater than 1 M_{sun}, and a broad peak at lower masses. Image from Krumholz (2011), produced using Chabrier (2003).

The important implication of these results is that there is a characteristic mass scale for star formation of around .2-.3 M_{sun} . There are two (obvious) ways to explain this:

  1. The efficiency of star formation peaks at certain scales.
  2. There is a characteristic scale for the clouds that stars form from.

There has been quite a lot of theoretical work examining option 1 (see Krumholz 2011 for a relatively recent, accessible review). There are many different physical processes at play in star formation–turbulent MHD, chemistry, thermodynamics, radiative transfer, and gravitational collapse.  Many of these processes are not separately well understood, and each is occurring in its complex, and highly non-linear regime.  We are not even close to a complete description of the full problem. Thus it would not be at all surprising if there were some mass scale, or even several mass scales that are singled out by the star formation process, even if none is presently known. There is some approximate analytical work showing that feedback from stellar outflows may provide such a scale (eg. Shu et al. 1987) . More recent work (e.g. Price and Bate 2008) has shown that magnetic fields cause significant effects on the structure of collapsing cloud cores in numerical simulations, and may reduce or enhance fragmentation depending on magnetic field strength and mass scale.

Nevertheless, the authors are skeptical of the idea that star-formation is not a scale-free process. Per Larson (2005), they do not believe that turbulence or magnetic fields are likely to be very important for the smallest, densest clouds that stars ultimately form from. Supersonic turbulence is quickly dissipated on these scales, and magnetic fields are dissipated through ambipolar diffusion–the decoupling of neutral molecules from the ionic plasma. Thus Larson argues that thermal support is the most important process in small cores, and the Jeans analysis will be approximately correct.

The authors thus turn to option 2. It is clear that if the dense cores of star forming clouds already follow a distribution like the IMF, then there will be no need for option 1 as an explanation. Unfortunately though, the molecular cloud mass function (Fig. 2) does not at first glance show any breaks at low mass and has too shallow a power-law index.  But what if we look at only the smallest, densest clumps?

Cloud mass Function

Figure 2: The cumulative cloud mass function (A_K is proportional to mass) for several different cloud complexes from Lombardi et al. (2008). While this is not directly comparable to the IMF, the important takeaway is that there are no breaks at low mass.

Observations

Observations using dense gas emission tracers like C^{18}O and H^{13}CO^+ produce mass distributions more like the stellar IMF (eg. Tachihara et al. 2002, Onishi 2002). However, there are many systematic uncertainties in emission-based analysis. To deal with these issues, the authors instead probed dense cloud mass using wide-field extinction mapping (the subject of this paper).  An extinction map was constructed of the nearby Pipe nebula using the NICER method of Lombardi & Alves (2001), which we have discussed in class. This produced the extinction map shown in Fig. 3 below.

NICER Extinction map of the pipe nebula

Figure 3: NICER extinction map of the Pipe nebula. A few dense regions are visible, but the noisy, variable background makes it difficult to separate out individual cores in a consistent way.

Data Analysis

The NICER map of the Pipe nebula reveals a complex filamentary structure with very high dynamic range in column density (~ 10^{21} - 10^{22.5} cm^{-2}). It is difficult to assign cores to regions in such a data set in a coherent way using standard techniques (Alves et al. 2007)–how do we determine what is a core and what is substructure of a core? Since it is precisely the number and distribution of cores that we are interested in, we cannot use a biased method to identify the cores. To avoid this, the authors used a technique called multiscale wavelet decomposition. Since the authors do not give much information on how this works, we will give a brief overview following the description of a similar technique from Portier-Fozzani et al. (2001).

Wavelet Decomposition

Wavelet analysis is a hybrid of Fourier and coordinate space techniques. A wavelet is a function that  has a characteristic frequency, position and spatial extent, like the one in Fig.  4. Thus if we convolve a wavelet of a given frequency and length with a signal, it will tell us how the spatial frequency of the signal varies with position. This is the type of analysis used to produce a spectrogram in audio visualization.

Wavelet example

Figure 4: A wavelet. This one is the product of a sinusoid with a Gaussian envelope. This choice is a perfect tradeoff between spatial and frequency resolution, but other wavelets may be ideal for some other resolution goal. Note that \Delta x \Delta k \ge 1/2.
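To make the idea concrete, here is a toy version of this kind of analysis: a Gaussian-windowed sinusoid convolved against a signal whose frequency changes halfway through. This is a sketch of the general technique, not the authors' actual algorithm:

```python
import math

def wavelet(t, freq, width):
    """A real Morlet-like wavelet: sinusoid times Gaussian envelope."""
    return math.exp(-t * t / (2 * width * width)) * math.cos(2 * math.pi * freq * t)

def wavelet_response(signal, dt, freq, width):
    """|signal convolved with wavelet| at each position (brute force)."""
    half = int(4 * width / dt)       # truncate the envelope at 4 sigma
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for k in range(-half, half + 1):
            j = i + k
            if 0 <= j < len(signal):
                acc += signal[j] * wavelet(k * dt, freq, width)
        out.append(abs(acc) * dt)
    return out

# toy signal: 2 Hz in the first half, 8 Hz in the second
dt = 0.01
sig = [math.sin(2 * math.pi * (2 if i < 500 else 8) * i * dt) for i in range(1000)]
r2 = wavelet_response(sig, dt, freq=2, width=0.5)
r8 = wavelet_response(sig, dt, freq=8, width=0.5)
# r2 peaks where the signal oscillates at 2 Hz, r8 where it is at 8 Hz:
# the transform localizes frequency content in position.
```

The same machinery in 2D, with wavelets of several spatial scales, is what lets the authors pick dense structures out of a variable background.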

The authors used this technique to separate out dense structures from the background. They performed the analysis using wavelets with spatial scales close to three different length scales separated by factors of 2 (.08 pc, .15 pc, .3 pc). Then they combined the structures identified at each length scale into trees based on spatial overlap, and called only the top level of these trees separate cores. The resulting identified cores are shown in Fig. 5.  While the cores are shown as circles in Fig. 5, this method does not assume spherical symmetry, and the actual identified core shapes are filamentary, reflecting the qualitative appearance of the image.

Dense cores in pipe nebula

Figure 5: Dense cores identified in the Pipe nebula (circles) from Alves et al. (2007). The radii of the circles are proportional to the core radii determined by the wavelet decomposition analysis.

Results

After obtaining what they claim to be a mostly complete sample of cores, the authors calculate the mass distribution for them. This is done by converting dust extinction to a dust column density. This gives a dust mass for each clump, which can then be extrapolated to a total mass by assuming a set dust fraction.  The result is shown in Fig. 6 below. The core mass function they obtain is qualitatively similar in shape to the IMF, but scaled by a factor of ~4 in mass. The analysis is only qualitative, and no statistics are done or functions fit to the data. The authors claim that this result evinces a universal star formation efficiency of ~30%, in good agreement with the efficiency calculated analytically by Shu et al. (2004) and with numerical simulations. This is again only a qualitative similarity, however.

We should also note that the IMF is hypothesized to be a log-normal distribution. This sort of distribution can arise out of any process that depends multiplicatively on many independent random factors. Thus the fact that dense cores have a mass function that is a scaled version of the IMF is not necessarily good evidence that they share a simple causal link, in the same way that two variables both being normally distributed does not mean that they are in any way related.
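This point is easy to demonstrate numerically: multiply together many independent random factors, and the log of the product is a sum of i.i.d. terms, hence approximately normal by the central limit theorem, regardless of what the factors represent:

```python
import math, random, statistics

random.seed(42)

def product_sample(n_factors=50):
    """Product of many independent random factors."""
    x = 1.0
    for _ in range(n_factors):
        x *= random.uniform(0.5, 2.0)
    return x

# The log of the product is a sum of i.i.d. terms, so by the central
# limit theorem it is ~normal; the product itself is ~log-normal.
logs = [math.log(product_sample()) for _ in range(5000)]
mu = statistics.mean(logs)
sd = statistics.stdev(logs)
within_1sigma = sum(abs(v - mu) < sd for v in logs) / len(logs)
print(within_1sigma)   # ~0.68, as expected for a normal distribution
```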

Figure 6: The mass function of dense molecular cores (points) and the IMF (solid grey line). The dotted gray line is the IMF with mass argument scaled by a factor of 4. The authors note the qualitative agreement, but do not perform any detailed analysis.

Conclusions

Based on this, the authors conclude that there is no present need for a favored mass scale in star formation as an explanation of the IMF.  Everything can be explained by the distribution produced by dense cloud cores as they are collapsing.  There are a few points of caution, however. This is a study of only one cluster, and the data are analyzed using an opaque algorithm that is not publicly available. Additionally, the distributions are not compared statistically (such as with a KS test), so we have only qualitative similarity.  It would be interesting to see these results replicated for a different cluster using a more transparent statistical method.

References

Bok, B. J., Sim, M. E., & Hawarden, T. G. 1977, Nature, 266, 145

Krumholz, M. R. 2011, American Institute of Physics Conference Series, 1386, 9

Lombardi, M., Lada, C. J., & Alves, J. 2008, A&A, 489, 143

Krumholz, M. R., & Tan, J. C. 2007, ApJ, 654, 304

Alves, J., Lombardi, M., & Lada, C. J. 2007, A&A, 462, L17

Lombardi, M., Alves, J., & Lada, C. J. 2006, A&A, 454, 781

Larson, R. B. 2005, MNRAS, 359, 211

Shu, F. H., Li, Z.-Y., & Allen, A. 2004, Star Formation in the Interstellar Medium: In Honor of
David Hollenbach, 323, 37

Onishi, T., Mizuno, A., Kawamura, A., Tachihara, K., & Fukui, Y. 2002, ApJ, 575, 950

Tachihara, K., Onishi, T., Mizuno, A., & Fukui, Y. 2002, A&A, 385, 909

Portier-Fozzani, F., Vandame, B., Bijaoui, A., Maucherat, A. J., & EIT Team 2001, Sol. Phys.,
201, 271

Shu, F. H., Adams, F. C., & Lizano, S. 1987, ARA&A, 25, 23

Salpeter, E. E. 1955, ApJ, 121, 161

ARTICLE: Turbulence and star formation in molecular clouds

In Journal Club, Journal Club 2013, Uncategorized on February 5, 2013 at 4:43 pm

Read the Paper by R.B. Larson (1981)

Summary by Philip Mocz

Abstract

Data for many molecular clouds and condensations show that the internal velocity dispersion of each region is well correlated with its size and mass, and these correlations are approximately of power-law form. The dependence of velocity dispersion on region size is similar to the Kolmogorov law for subsonic turbulence, suggesting that the observed motions are all part of a common hierarchy of interstellar turbulent motions. The regions studied are mostly gravitationally bound and in approximate virial equilibrium. However, they cannot have formed by simple gravitational collapse, and it appears likely that molecular clouds and their substructures have been created at least partly by processes of supersonic hydrodynamics. The hierarchy of subcondensations may terminate with objects so small that their internal motions are no longer supersonic; this predicts a minimum protostellar mass of the order of a few tenths of a solar mass. Massive ‘protostellar’ clumps always have supersonic internal motions and will therefore develop complex internal structures, probably leading to the formation of many pre-stellar condensation nuclei that grow by accretion to produce the final stellar mass spectrum. Molecular clouds must be transient structures, and are probably dispersed after not much more than 10^7 yr.

How do stars form in the ISM? The simple theoretical picture of Jeans collapse — that a large diffuse uniform cloud starts to collapse and fragments into a hierarchy of successively smaller condensations as the density rises and the Jeans mass decreases — is NOT consistent with observations. Firstly, astronomers see complex structure in molecular clouds:  filaments, cores, condensations, and structures suggestive of hydrodynamical processes and turbulent flows. In addition, the data presented in this paper show that the observed linewidths of regions in molecular clouds are far from thermal. The observed line widths suggest that the ISM is supersonically turbulent on all but the smallest scales. The ISM stages an interplay between self-gravity, turbulent (non-thermal) pressure, and feedback from stars (with the fourth component, thermal pressure, not being dominant on most scales). Larson proposes that protostellar cores are created by supersonic turbulent compression, which causes density fluctuations, and gravity becomes dominant in only the densest (and typically subsonic) parts, making them unstable to collapse. Larson uses internal velocity dispersion measurements of regions in molecular clouds from the literature to support his claim.

Key Observational Findings:

Data for regions in molecular clouds with scales 0.1<L<100 pc follow:

(1) A power-law relation between velocity dispersion σ  and the size of the emitting region, L:

\sigma \propto L^{0.38}

Such power-law forms are typical of turbulent velocity distributions. More detailed studies today find \sigma\propto L^{0.5}, suggesting compressible, supersonic Burgers turbulence.

(2) Approximate virial equilibrium:

2GM/(\sigma^2L)\sim 1

meaning the regions are roughly in self-gravitational equilibrium.

(3) An inverse relation between average molecular hydrogen H_2 density, n, and length scale L:

L \propto n^{-1.1}

which means that the column density nL is nearly independent of size, indicative of 1D shock-compression processes which preserve column densities.

Note: These three laws are not independent; they are algebraically linked, so that any one law can be derived from the other two. The three laws are thus mutually consistent.

The Data

Larson compiles data on individual molecular clouds, clumps, and density enhancements of larger clouds from previous radio observations in the literature. The important parameters are:

  • L, the maximum linear extent of the region
  • variation of the radial velocity V across the region
  • variation of linewidth \Delta V across the region
  • mass M obtained without the virial theorem assumption
  • column density of hydrogen n

Larson digs through papers that investigate optically thin line emissions such as ^{13}CO to determine the variations in V and \Delta V, and consequently calculate the three-dimensional velocity dispersion σ  due to all motions present (as indicated by the dispersions \sigma(V) and \sigma(\Delta V)) in the cloud region (assuming isotropic velocity distributions). Both \sigma(V) and \sigma(\Delta V) are needed to obtain the three-dimensional velocity dispersion for a length-scale because the region has both local velocity dispersion and variation in bulk velocity across the region. The two dispersions add in quadrature: \sigma = \sqrt{\sigma(\Delta V)^2 + \sigma(V)^2}.

To estimate the mass, Larson assumes that the ratio of the column density of ^{13}CO to the column density of H_2 is 2\cdot 10^{-6} and that H_2 comprises 70% of the total mass.

Note that for a molecular cloud with temperature 10 K the thermal velocity dispersion is only 0.32 km/s, while the observed velocity dispersions \sigma are much larger, typically 0.5-5 km/s.
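As a sanity check, the 3D thermal dispersion \sqrt{3kT/\mu m_H} can be evaluated directly; the mean molecular weight \mu = 2.33 (molecular gas including helium) is an assumed value here, not one quoted by Larson:

```python
import math

# 3D thermal velocity dispersion sigma_th = sqrt(3 k T / (mu m_H)).
# mu = 2.33 (H2 gas including He) is an assumed mean molecular weight.
k_B = 1.380649e-23   # J/K
m_H = 1.6735e-27     # kg
T = 10.0             # K, typical molecular cloud temperature
mu = 2.33

sigma_th = math.sqrt(3 * k_B * T / (mu * m_H)) / 1e3   # km/s
print(f"sigma_th ≈ {sigma_th:.2f} km/s")   # ≈ 0.33 km/s, close to the 0.32 quoted
```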

(1) Turbulence in Molecular Clouds

A log-log plot of velocity dispersion \sigma versus region length L is shown in Figure 1. below:


Figure 1. 3D internal velocity dispersion versus the size of a region follows a power-law, as expected for turbulent flows. The \sigma_s arrow shows the expected dispersion due to thermal pressure only. The letters in the plot represent different clouds (e.g. O = Orion).

The relation is fit with the line

\sigma({\rm km~s}^{-1}) = 1.10 L({\rm pc})^{0.38}

which is similar to the \sigma \propto L^{1/3} power-law for subsonic Kolmogorov turbulence. Note, however, that the motions in the molecular clouds are mostly supersonic. A characteristic trait of turbulence is that there is no preferred length scale, hence the power-law.

Possible subdominant effects modifying the velocity dispersion include stellar winds, supernova explosions, and expansion of HII regions, which may explain some of the scatter in Figure 1.

(2) Gravity in Molecular Clouds

A plot of the ratio 2GM/(\sigma^2 L) for each cloud region, which is expected to be ~1 if the cloud is in virial equilibrium, is shown in Figure 2 below:

Figure 2. Most regions are near virial equilibrium (2GM/(\sigma^2L)\sim 1). The large scatter is mostly due to uncertainties in the estimates of physical parameters.

Most regions are near virial equilibrium. The scatter in the figure is large, but that is expected given the simplifying assumptions about geometric factors in the virial equilibrium equation.

If the turbulent motions in a region dissipate, causing it to contract while remaining in approximate virial equilibrium, then L should decrease and \sigma should increase; this should produce some of the intrinsic scatter in Figure 1 (the L-\sigma relation). A few observed regions in Figure 1 do have unusually high velocity dispersions, indicating a significant amount of gravitational contraction.

(3) Density Structure in Molecular Clouds

The \sigma \propto L^{0.38} relation implies that smaller regions need higher densities to be gravitationally bound (assuming \rho \sim M/L^3 and virial equilibrium 2GM/(\sigma^2L)\sim 1, these together imply \rho \propto L^{-1.24}). This is observed. The correlation between the density of H_2 in a region and the size of the region is shown in Figure 3 below:


Figure 3. An inverse relation is found between region density and size.

The relationship found is:

n({\rm cm}^{-3}) = 3400\, L({\rm pc})^{-1.1}

This means that the column density nL is proportional to L^{-0.1}, i.e. nearly scale invariant. Such a distribution could result from shock-compression processes that preserve the column density of the compressed regions. Larson also suggested that observational selection effects may have limited the range of column densities explored (for example, the CO density must exceed a critical threshold for the line to be excited). Modern observations, such as those by Lombardi, Alves, & Lada (2010), show that while the column density across a sample of regions and clouds appears constant, a constant column density does not describe individual clouds well (the probability distribution function of column densities follows a log-normal distribution, as is also predicted by detailed theoretical studies of turbulence).
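The exponent bookkeeping behind the virial estimate (\rho \propto L^{-1.24}) and the near-constant observed column density can be checked in a few lines:

```python
# Exponent bookkeeping: sigma ∝ L^p with p = 0.38, virial equilibrium
# M ∝ sigma^2 L, rho = M / L^3, plus the observed n ∝ L^-1.1.
p = 0.38
mass_exp = 2 * p + 1      # M ∝ L^1.76 from virial equilibrium
rho_exp = mass_exp - 3    # rho ∝ L^-1.24 (virial prediction)
col_exp = -1.1 + 1        # n*L ∝ L^-0.1 (observed, nearly scale invariant)
print(rho_exp, col_exp)   # ≈ -1.24 and -0.1
```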

Possible Implications for Star Formation and Molecular Clouds

  • Larson essentially uses relations (1) and (2) to derive a minimum mass and size for molecular clouds by requiring the velocity dispersion \sigma to be subsonic. The smallest observed clouds have M\sim M_{\rm \odot} and L\sim 0.1 pc and subsonic internal velocities; these clouds may be protostars. The transition from supersonic to subsonic defines a possible minimum clump mass and size, M\sim 0.25M_{\rm \odot} and L\sim 0.04 pc, which may collapse with high efficiency, without fragmentation, to produce low-mass stars of mass comparable to the initial cloud. Hence the IMF should turn down for masses below this minimum clump mass. Such a downturn is observed. Simple Jeans collapse fragmentation theory does not predict a turnover at this mass scale.
  • Regions that would form massive stars have a hard time collapsing due to the supersonic turbulent velocities. Hence their formation mechanism likely involves accretion/coalescence of clumps. Thus massive stars are predicted to form as members of clusters/associations, as is usually observed.
  • The above two arguments imply that the low-mass slope of the IMF should be deducible from cloud properties such as temperature and the magnitude of the turbulent velocities. The high-mass end is more complicated.
  • The associated timescale for the molecular clouds is \tau=L/\sigma, found to be \tau({\rm yr}) \sim 10^6L({\rm pc})^{0.62}. Hence the timescales are less than 10 Myr for most clouds, meaning that molecular clouds have short lifetimes and are transient.
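The timescale scaling follows directly from Larson's \sigma-L fit in Figure 1; a quick numerical check:

```python
# tau = L / sigma with Larson's fit sigma(km/s) = 1.10 L(pc)^0.38,
# so tau ∝ L^0.62.  1 pc / (1 km/s) = 3.086e13 s ≈ 0.98 Myr.
PC_PER_KMS_IN_YR = 3.086e13 / 3.156e7   # ≈ 9.8e5 yr

def tau_yr(L_pc):
    sigma = 1.10 * L_pc ** 0.38         # km/s
    return (L_pc / sigma) * PC_PER_KMS_IN_YR

print(f"tau(1 pc)  ≈ {tau_yr(1.0):.1e} yr")    # ≈ 9e5 yr
print(f"tau(30 pc) ≈ {tau_yr(30.0):.1e} yr")   # ≈ 7e6 yr, under 10 Myr
```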

Philip’s Comments

Larson brings to attention the importance of turbulence for understanding the ISM in this seminal paper. His arguments are simple and are supported by data which are clearly incongruous with the previous simplified picture of Jeans collapse in a uniform, thermally dominated medium. It is amusing and inspiring that Larson could dig through the literature to find all the data that he needed; he was able to synthesize the vast data in a way the observers missed and build a new coherent picture. Like most good papers, Larson’s work fundamentally changes our understanding of an important topic while also provoking new questions for future study. What is the exact nature of the turbulence? What drives and/or sustains it? In what regimes does turbulence cease to be important? ISM physics is still an area of active research.

Much ISM research to this day has roots that trace back to Larson’s paper. One important thing Larson did not address in this paper is the role of magnetic fields in the ISM, which we know today contribute a significant amount to the ISM’s energy budget and can be a source of additional instabilities, turbulence, and wave speeds. Also, there was not much data available at the time on the dense, subsonic molecular cloud cores in which thermal velocity dispersion dominates and the important physical processes are different, so Larson only theorizes loosely about their role in star formation.

Larson’s estimate for molecular cloud lifetimes (~10 Myr) is relatively short compared to galaxy lifetimes and much shorter than what most models at the time predicted, which provoked a lot of debate in the field. Older theories, in which molecular clouds are built up by random collisions and coalescence of smaller clouds, predict that giant molecular clouds take over 100 Myr to form. Turbulence speeds up this timescale, Larson argues, since turbulent motion is not random but systematic on larger scales.

The Plot Larson Did Not Make

Larson assumed spherical geometry for the molecular cloud regions in this paper to keep things simple. Yet he briefly mentions a way to estimate region geometry. He did not apply this correction to the data, and unfortunately does not list the relevant observational parameters (\sigma (V), \sigma (\Delta V)) that would let the reader make the calculation. Such a correction would likely have reduced the scatter in the region size L and steepened the \sigma vs. L relation, bringing it closer to what we observe today.  His argument for the geometrical correction, fleshed out in more detail here, goes like this.

Let’s say the region’s shape is some 3D manifold, M. First, let’s suppose M is a sphere of radius 1. Then the average distance between points along a line of sight through the center is:

\langle \ell\rangle_{\rm los} = \frac{\int |x_1-x_2|\,dx_1\,dx_2}{\int 1 \,dx_1\,dx_2}= 2/3

where the integrals run over x_1, x_2 \in [-1, 1].

The average distance between points inside the whole volume is:

\langle \ell\rangle_{\rm vol} =\frac{\int \sqrt{(x_1-x_2)^2+(y_1-y_2)^2+(z_1-z_2)^2}r_1^2 r_2^2 \sin\theta_1\sin\theta_2 dr_1 dr_2 d\theta_1 d\theta_2 d\phi_1 d\phi_2}{\int r_1^2 r_2^2 \sin\theta_1\sin\theta_2 dr_1 dr_2 d\theta_1 d\theta_2 d\phi_1 d\phi_2}= 36/35

where the integrals run over r_1, r_2 \in [0,1], \theta_1, \theta_2 \in [0,\pi], and \phi_1, \phi_2 \in [0,2\pi].

Thus the ratio between these two characteristic lengths is:

\langle \ell\rangle_{\rm vol}/\langle \ell\rangle_{\rm los}=54/35

which is a number Larson quotes.

Now, if the velocity scales as a power-law \sigma \propto L^{0.38}, then one would expect:

\sigma = (\langle \ell\rangle_{\rm vol}/\langle \ell\rangle_{\rm los})^{0.38} \sigma(\Delta V)

We also have the relation

\sigma = \sqrt{\sigma(\Delta V)^2 + \sigma(V)^2}

These two relations above allow you to solve for the ratio

\sigma(V)/\sigma(\Delta V) = 0.62

Larson observes this ratio for regions smaller than about 10 pc, meaning the assumption that they are nearly spherical is good. For larger regions, however, Larson sees much larger ratios, ~1.7, which are expected for more sheetlike geometries. For example, if the geometry is 10 wide by 10 long by 2 deep, one can calculate that the expected ratio is \sigma (V)/\sigma (\Delta V) = 2.67.
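Both the 54/35 geometric factor and the 0.62 ratio for the spherical case are easy to verify numerically; a short Monte Carlo sketch (sample size chosen only for speed):

```python
import math
import random

# Monte Carlo check of <l>_vol / <l>_los = 54/35 for a unit sphere, and
# the implied sigma(V)/sigma(DeltaV) for a spherical region.
random.seed(1)

def point_in_sphere():
    # Rejection-sample a uniform point inside the unit sphere.
    while True:
        p = [random.uniform(-1, 1) for _ in range(3)]
        if sum(x * x for x in p) <= 1:
            return p

N = 100_000
vol = sum(math.dist(point_in_sphere(), point_in_sphere()) for _ in range(N)) / N
los = sum(abs(random.uniform(-1, 1) - random.uniform(-1, 1)) for _ in range(N)) / N
print(vol / los)                  # ≈ 54/35 ≈ 1.543

r = (54 / 35) ** 0.38             # sigma = r * sigma(DeltaV)
print(math.sqrt(r ** 2 - 1))      # ≈ 0.62, the ratio in the text
```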

It would have been interesting to see a plot of \sigma (V)/\sigma (\Delta V) as a function of L, which Larson does not include, to learn how geometry changes with length scale. The largest regions deviate most from spherical geometry, which is perhaps why Larson did not include large ~1000 pc regions in his study.

~~~


The North America Nebula Larson mentions in his introduction. Complex hydrodynamic processes and turbulent flows are at play, able to create structures with sizes less than the Jeans length. (Credit and Copyright: Jason Ware)

Reference:

Larson (1981) – Turbulence and star formation in molecular clouds

Lombardi, Alves, & Lada (2010) – 2MASS wide field extinction maps. III. The Taurus, Perseus, and California cloud complexes

H II Region Line Diagnostics

In Uncategorized on May 5, 2011 at 6:37 pm

Intro/References: I will discuss the five main diagnostics from line ratios and continuum fluxes for H II regions, which can also be applied to other photoionized emission regions including active galactic nuclei, planetary nebulae, and other emission-line sources such as Wolf-Rayet stars.  For a more detailed description of photoionization modeling, statistical mechanics / thermodynamics, and quantum mechanics, please see Chapter 18 of Draine, this link (on which the summary and figures below are based), and the all-time best reference, AGN2: Astrophysics of Gaseous Nebulae and Active Galactic Nuclei, 2nd Ed., by Osterbrock and Ferland.

Motivation: Line ratios and relative continuum fluxes of photoionized emission regions give much physical insight into the properties of the nebula and its vicinity: the amount and type of dust extinction along the line of sight; the number density, temperature, physical radius / volume, mass, and metallicity of the gas; and even the temperature of the ionizing source.  These indicators may also give further clues about the surrounding environment, including but certainly not limited to the star formation rate and history, the metallicity and mass of the host stellar population / galaxy / AGN, and the depletion rates and abundances of the dust and gas.

(1) Dust Extinction and Reddening from Balmer Decrement

Photoionization of hydrogen is fairly well understood, as it relies on already-quantified atomic properties such as the photoionization cross-section, the recombination rate, the cascade matrix toward lower energies after recombination, and the oscillator strengths / Einstein A coefficients of the various transitions.  Consider the following table of theoretical emission line strengths j_{n'n} for various Balmer transitions (relative to Hβ):

where Case A and Case B represent the limits of two idealized situations, so that actual H II regions must lie between these two extremes.  Namely, Case A assumes the optically thin limit, where hydrogen emission from all energy levels (Lyman, Balmer, etc.) can escape the H II region unabsorbed, while Case B assumes all transitions more energetic than Lyα are absorbed and re-radiated via Lyα and longer wavelengths.  This is why the Hα emission line strength is greater in Case B.  Nevertheless, the line ratios between Hβ and Hγ, for example, vary by only ~10% despite the temperature changing by a factor of two and despite not knowing in which optical depth regime the H II region exists.  Thus, this Balmer decrement of decreasing relative line strengths toward more energetic transitions is fairly insensitive to temperature and number density across the parameter space of H II regions.

If the actual line strengths of an H II region are measured, then since we know the theoretical Balmer decrement we can infer the amount of dust reddening toward the H II region.  If only two Balmer lines are measured, the amount of dust reddening combined with a theoretical dust extinction curve gives a dereddened spectrum corrected for dust extinction.  Additional Balmer lines can be used to constrain the type and wavelength dependence of the actual dust extinction.  This information has its own merit for inferring the dust properties along the line of sight, and possibly in the vicinity of the H II region itself, but perhaps more importantly it gives us corrected emission line ratios which can be used for the other diagnostics below.
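A minimal dereddening sketch based on the Balmer decrement follows. The intrinsic Case B ratio Hα/Hβ ≈ 2.86 (T ~ 10^4 K) and the extinction-curve coefficients k(Hα) ≈ 2.53, k(Hβ) ≈ 3.61 are assumed standard values not taken from this post, and the observed ratio is invented for illustration:

```python
import math

# Dereddening via the Balmer decrement.  Assumed inputs: intrinsic Case B
# Ha/Hb ≈ 2.86, extinction-curve values k(Ha) ≈ 2.53, k(Hb) ≈ 3.61.
K_HA, K_HB = 2.53, 3.61
INTRINSIC_RATIO = 2.86

def ebv_from_decrement(observed_ratio):
    """Color excess E(B-V) implied by an observed Ha/Hb flux ratio."""
    return 2.5 / (K_HB - K_HA) * math.log10(observed_ratio / INTRINSIC_RATIO)

def deredden(flux, k_lambda, ebv):
    """Correct a flux for extinction A_lambda = k_lambda * E(B-V) magnitudes."""
    return flux * 10 ** (0.4 * k_lambda * ebv)

ebv = ebv_from_decrement(4.0)      # made-up observed Ha/Hb ratio of 4.0
print(f"E(B-V) ≈ {ebv:.2f} mag")   # ≈ 0.34 mag
```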

(2) Electron Number Density and Temperature from Forbidden Lines

The forbidden transitions of [OIII] and [S II] can serve as excellent diagnostics of both temperature and density of the gas.  Theoretically, observations of [OII] and [NII] can also constrain these properties, but in reality the transitions of [OII] are too closely spaced to be resolved while the [NII] lines are typically contaminated by Hα.  Consider the energy diagrams of [OIII] and [S II] below, where you can see that all the important listed transitions are forbidden due to violation of one or more of the selection rules of quantum mechanics: ΔL = ±2 (such as transitioning from a D state to an S state or vice versa), ΔJ = ±2 (which is intrinsically tied to ΔL), and/or no change in parity, π_f = π_i (both initial and final states in these cases have even parity).  Of course, these transitions are only forbidden for electric dipole radiation, and they do occur in nature when densities are sufficiently low that the timescale for collisional deexcitation is longer than the lifetime before spontaneous decay via these forbidden transitions.

Now consider the relative population of ions in the various electronic configurations and the measured transitions that arise from these upper energy levels.  For example, in [OIII] the ratio of the λ4363 intensity to either (or the sum) of the λ4959, λ5007 intensities allows you to infer the relative populations of the ^1S_0 and ^1D_2 states.  Because these two levels are widely separated in energy, their relative population is almost completely dictated by the temperature of the gas, so the ratio of these [OIII] line intensities can be used to infer the temperature with only a slight dependence on number density.  Conversely, the upper energy levels that give rise to the λ6716 and λ6731 transitions in [S II] are closely spaced in energy, so their relative population is controlled by collisional excitation / deexcitation between the two levels, which depends significantly on the electron density.  Thus, the relative intensities of these two transitions can be used to infer the number density of electrons.  Combining observations of both pairs of transitions can further refine the precision of the electron density and temperature via detailed photoionization modeling that relies on thermodynamic and quantum mechanical inputs such as Boltzmann and Saha statistics, Einstein A coefficients, collision strengths and cross sections, and the incident radiation.
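A sketch of the [OIII] temperature diagnostic, using the standard fitting formula from Osterbrock & Ferland's AGN2 for the line ratio; inverting it by bisection recovers T from a measured ratio:

```python
import math

# [O III] temperature diagnostic, fitting formula from Osterbrock & Ferland:
#   (I4959 + I5007) / I4363 = 7.90 exp(3.29e4/T) / (1 + 4.5e-4 n_e / sqrt(T))
def oiii_ratio(T, n_e=100.0):
    return 7.90 * math.exp(3.29e4 / T) / (1 + 4.5e-4 * n_e / math.sqrt(T))

def temperature_from_ratio(R, n_e=100.0, lo=5e3, hi=3e4):
    """Invert oiii_ratio by bisection (the ratio falls as T rises)."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if oiii_ratio(mid, n_e) > R:
            lo = mid    # ratio too large -> true temperature above mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Round trip: the ratio produced at 10^4 K maps back to 10^4 K.
print(round(temperature_from_ratio(oiii_ratio(1.0e4))))   # 10000
```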

(3) Mass and Radius from Hβ Flux

Once the temperature and density of the gas are ascertained, other physical properties can be modeled from atomic physics: the recombination rate α_B, the probability of subsequently cascading via Hβ emission, P_Hβ, the effective Hβ recombination coefficient α_Hβ = α_B P_Hβ, and finally the Hβ emissivity ε_Hβ = n_e n_p α_Hβ hν_Hβ.  (Historically, Hβ is used instead of Hα because it is in the green part of the optical spectrum, where the sensitivity of photographic film peaks, and because Hα is typically contaminated on either side by transitions of [N II] in real HII regions.)

The emissivity then gives the following relationship between the Hβ luminosity, the flux, and the radius R of the (assumed spherical) region at distance d:

L_Hβ = 4π d^2 F_Hβ = (4π/3) R^3 ε_Hβ

Since n_e is known from the forbidden transitions, ε_Hβ can be calculated from the gas temperature (also inferred from the forbidden transitions), the total F_Hβ can be measured and corrected for dust extinction via the Balmer decrement, and the ratio of radius to distance, R/d, can be measured from the angle the HII region subtends on the sky; the absolute values of d and R then follow.  Once the radius of and distance to the HII region are known, the mass of ionized hydrogen within the HII region is simply given by:

M_HII = (4π/3) R^3 n_e m_H

where, as stated, we already know the parameters on the right-hand side.
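This chain can be sketched numerically. The Case B effective recombination coefficient α_Hβ ≈ 3.03 × 10^-14 cm^3 s^-1 at 10^4 K is a standard Osterbrock value, and the flux, density, and angular size below are invented inputs for illustration only:

```python
import math

# Order-of-magnitude sketch of the distance/radius/mass chain.
# alpha_Hb ≈ 3.03e-14 cm^3 s^-1 at 1e4 K (Case B, Osterbrock) is assumed;
# n_e, F_Hb, and theta are made-up example inputs.
alpha_Hb = 3.03e-14      # cm^3 s^-1
h_nu_Hb = 4.09e-12       # erg per Hb photon
pc = 3.086e18            # cm
m_H = 1.67e-24           # g

n_e = 100.0              # cm^-3, e.g. from the [S II] doublet
F_Hb = 1e-11             # erg s^-1 cm^-2, dereddened Hb flux
theta = 1e-4             # rad, angular radius = R/d

eps_Hb = n_e ** 2 * alpha_Hb * h_nu_Hb          # erg s^-1 cm^-3 (n_p ≈ n_e)
# L = 4 pi d^2 F = (4/3) pi R^3 eps with R = theta d  =>  d = 3F/(eps theta^3)
d = 3 * F_Hb / (eps_Hb * theta ** 3)            # cm
R = theta * d                                   # cm
M = (4 / 3) * math.pi * R ** 3 * n_e * m_H      # g, ionized hydrogen mass
print(f"d ≈ {d / pc:.0f} pc, R ≈ {R / pc:.2f} pc, M ≈ {M / 1.989e33:.1f} Msun")
```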

(4) Zanstra Temperature of Ionizing Source from Continuum Flux

Using a simple Stromgren sphere analysis, the temperature of the central ionizing star can be inferred from the ratio of continuum flux (e.g. in the visual V filter) to line flux (e.g. Hβ) in a manner that is independent of the distance to the ionizing source and HII region.  The actual relation is the following inequality:

because we must make the assumption that we are in the optically thick regime, i.e. Case B, where all ionizing photons are absorbed within the HII region and reradiated at longer wavelengths.  In the equation, f is the fraction of the photon flux capable of ionizing hydrogen, and it can be modeled simply while simultaneously fitting the temperature of the ionizing source, T_*.  So to infer the temperature of the star, you must measure the broadband continuum V magnitude and know the response of the V filter with respect to frequency.  You must also use the emissivity, which you have already constrained from the ratios of forbidden lines.  Nevertheless, none of these input parameters requires assuming the distance to the HII region, so the Zanstra temperature of the ionizing source is distance independent.

(5) Metallicity / Abundances from Line Ratios

By comparing line ratios from ions of different elements, the metallicity / chemical abundances of the gas can be estimated.  Granted, there is an implicit correction for excitation and ionization, but empirical observations have shown a measured correspondence between certain line ratios and metallicities.  For example, the R23 ratio, defined by

R23 = ([OII]λ3727 + [OIII]λλ4959,5007) / Hβ

has been found empirically to correlate with the oxygen-to-hydrogen abundance.

Note that the relationship is double valued because of ionization and excitation effects.  Nevertheless, observations of line ratios of other various ions can help constrain the abundances.  Moreover, detailed photoionization modeling relying on the observed ratios of line intensities of the same ion can be used to derive the actual corrections for excitation and ionization.

The Orion Bar

In Uncategorized on May 1, 2011 at 1:43 am

1. Overview of the Orion Bar

The Hubble image below shows the Orion A complex, which harbors two large HII regions: M43 to the northeast (top left) and the brighter Orion Nebula (M42, center). The HII region within the Orion Nebula is carved out by the Trapezium cluster, which is extremely dense (stellar density of 560 pc^-3 – compare that with our local density of ~1 pc^-3) and dominated by four stars, the brightest of which, θ1 Ori C (spectral type O7V), produces ~80% of the ionizing photons. M43 is ionized primarily by a single star, NU Ori (spectral type B0.5).

HST optical image of the Orion Nebula (Credit: NASA, ESA, M. Robberto, and the Hubble Space Telescope Orion Treasury Project Team)

The Orion A complex has been useful in understanding HII regions because of its proximity to us (~414 ± 7 pc), which allows its structure to be studied with high spatial resolution. The HII region within the Orion Nebula has broken out of the molecular cloud, creating a champagne flow. M42 has a particularly bright photodissociation region (PDR), known as the Orion Bar, which is visible to the Southeast of the Trapezium stars.

HST optical image of M42, with the Orion bar visible as a bright ridge in the bottom left (Credit: NASA, C.R. O'Dell and S.K. Wong (Rice University))

The Orion Bar stands out as a bright ridge to the Southeast of the Trapezium cluster, but its prominence is actually a consequence of limb brightening, i.e. our peculiar viewing angle. The Orion Nebula is bounded on multiple sides by an ionization front, but we happen to see the bar edge-on, causing it to appear brighter.

2. Structure from Radio Continuum Observations

Dense, hot regions of ionized hydrogen are bright in the radio continuum, as scattering of electrons off of H+ ions produces free-free emission. Felli et al. (1993) (ADS Link) mapped the Orion A complex in the radio continuum using the VLA in several configurations. A particularly nice map of the free-free emission at 20 cm in the Orion Nebula HII region is given in their Fig. 3d, reproduced below:

Fig. 3d from Felli et al. (1993). The Orion Bar is particularly prominent in this radio continuum map of the HII region of the Orion Nebula (20 cm, 6.2" resolution). The positions of several bright stars, including the four brightest Trapezium stars, are marked. The contours range from 95.0 to 300.7 mJy/beam.

Felli et al. further demonstrate that the radio continuum emission correlates well with Hα emission, as we should expect. Bruce Draine (§28.2) calculates a maximum line-of-sight rms electron density of approximately 3200 cm^-3 based on the Felli et al. (1993) peak emission measure (the integrated square of the electron density along the line of sight) of 5 × 10^6 cm^-6 pc and an assumed diameter of 0.5 pc for the HII region. For an explanation of the emission measure, see this link or recall §5.3 of Rybicki & Lightman, which discusses free-free absorption. The basic idea is that the absorption coefficient due to free-free absorption is proportional to the product of the ion and electron densities. If the medium is neutral overall, then this is simply the square of the electron density, so the optical depth of an ionized cloud due to free-free absorption is proportional to the integrated square electron density.
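Draine's number is easy to reproduce: for a region of line-of-sight depth L, the rms density is just \sqrt{EM/L}:

```python
import math

# rms electron density from the emission measure EM = ∫ n_e^2 dl:
# n_e,rms = sqrt(EM / L), using the Orion numbers quoted above.
EM = 5e6       # cm^-6 pc, peak emission measure (Felli et al. 1993)
L = 0.5        # pc, assumed line-of-sight depth of the HII region

n_e = math.sqrt(EM / L)
print(f"n_e ≈ {n_e:.0f} cm^-3")   # ≈ 3162, i.e. the ~3200 cm^-3 quoted
```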

3. Progression of Species in the Photodissociation Region

The chemistry and structure of the photodissociation region (PDR, also called the photon-dominated region) is dominated by the effects of the intense incident ultraviolet radiation from the O and B stars in the HII region. As the binding energy of the H2 molecule is lower than that of the electron in the hydrogen atom, HII regions are enveloped by a region of atomic hydrogen. In this region, the UV flux is great enough to photodissociate H2, but the recombination rate is high enough to keep the ionized fraction low. Deeper in the cloud (i.e. farther away from the HII region), the UV flux has been sufficiently attenuated, such that most hydrogen is bound in H2. The interface between the regions dominated by atomic hydrogen and fully ionized hydrogen is called the ionization front, while the boundary between the atomic hydrogen and the molecular hydrogen is called the dissociation front. Figure 31.2 of Draine’s ISM textbook, reproduced below, gives a diagrammatic sketch of the structure of a PDR:

The progression of species refers to the progression of chemical species that dominate as one travels away from the HII region and into the molecular cloud. The key variable changing along this path is the intensity of the UV flux. This has several effects: the ionized medium at first gives way to neutral atomic species, and deeper into the cloud molecules such as H2, CO, O2 and PAHs (polycyclic aromatic hydrocarbons) become stable; the temperature drops as the incident radiation becomes more attenuated (with increasing optical depth measured from the ionization front); and, assuming pressure balance, the density of the gas must increase into the cloud as the temperature drops. The assumption of pressure equilibrium is not exact, however, as HI flows from the dissociation front towards the ionization front. As this flow becomes smaller, the assumptions of pressure balance and steady state become more accurate.

4. Comparison with Theoretical Models

Tielens et al. (1993) (ADS Link) compare the observed structure of the Orion Bar to theoretical models of what a photodissociation region should look like, in what is a short and eminently readable paper. The paper presents observations of the 1-0 S(1) H2 line, the J=1-0 CO rotational line, and the carbon-hydrogen stretching mode of PAHs. Both the H2 and CO lines are caused by decay of excited ro-vibrational states. They are thus dependent on the presence of UV radiation; as one travels deeper into the molecular cloud, the increasing attenuation of the UV flux suppresses the excitation of these excited states. Going in the opposite direction, towards the HII region, the density of molecular hydrogen and CO becomes lower, and the 1-0 S(1) H2 and CO J=1-0 lines are no longer prominent. The regions in which the H2 and CO ro-vibrational lines are observed should thus be determined by the interplay between the density of these species and the strength of the UV flux. For more on the origin of ro-vibrational transitions in H2, see this link and the introductory paragraph of Laine et al. (2010) (Thanks to Tanmoy for discussions on this and for this last paper). Tielens et al. (1993) present a false-color picture showing the PAH, H2 and CO emission observed from the Orion Bar. The diagram has the same orientation as the above pictures of the Orion Bar, with the HII region to the upper right and the molecular cloud to the bottom left. Thus, molecular hydrogen density and UV optical depth increase from the top right to the bottom left. The separation of the peaks of each type of emission is clearly visible:

Figure 1 from Tielens et al. (1993) showing PAH emission (blue), 1-0 S(1) H2 emission (green) and the CO J=1-0 transition (red).

Tielens et al. (1993) use the spatial separation between the PAH 3.3 μm, H2 and CO line peaks to map UV penetration in the Orion Bar. By then assuming a hydrogen column density to visual extinction ratio N_H/A_V = 1.9 × 10^21 cm^-2 mag^-1 and an estimate of the viewing angle, they determine a gas density of 1-5 × 10^4 cm^-3. The observed spatial distribution of these three emission mechanisms is compared to a PDR model, which treats the PDR as a homogeneous slab of constant density 5 × 10^4 cm^-3. The model takes into account chemical composition, energy balance, radiative transfer and line cooling. The modeled intensity of emission along a cut through the Orion Bar (rightward is deeper into the molecular cloud) is compared to observation:

The authors argue that the agreement between the modeled and observed emission features favors UV pumping as the mechanism driving the excitation of the CO and H2 ro-vibrational lines. As mentioned in the ESO link above, these lines are also observed in post-shock regions, where the gas has been collisionally excited. In order for the emission to be shock induced, however, the shock velocity would have to exceed 10 km/s, which would evaporate the bar on a timescale of 10^3 years. The authors also note that one weakness of their model is that it does not include clumping, which is likely to be important in the Orion Bar and is traced by CO and CS maps. Finally, other molecular tracers, such as HCN, can be used to probe the denser regions of PDRs.

5. References

Draine, Bruce T., Physics of the Interstellar and Intergalactic Medium. Princeton, NJ: Princeton University Press, 2011.

Felli M., Churchwell E., Wilson T. L., Taylor G. B. 1993, A&AS 98, 137-164. (ADS Link)

Rybicki G.B., Lightman A.P., Radiative Processes in Astrophysics, 2nd Ed. Weinheim, Germany: Wiley-VCH Verlag, 2004.

Tielens A.G.G.M., Meixner M.M., van der Werf P.P., Bregman J., Tauber J.A., Stutzki J., Rank D. 1993, Science 262, 86-89. (ADS Link)

van der Werf P.P., Goss W.M. 1989, A&A 224, 209-224. (ADS Link)

van der Werf P.P., Stutzki J., Sternberg A., Krabbe A. 1996, A&A 313, 633-648. (ADS Link)

The Void IGM at z < 6: Key Properties, How We Know

In Uncategorized on April 28, 2011 at 4:14 am

As Nathan described, the IGM transitioned from an HI dominated phase to an HII dominated phase at z \approx 6. Here I will go into a bit more detail about the ionization, temperature, density and magnetic field of the void IGM post-reionization, and how these properties are determined. The term “void” IGM is meant to exclude particularities of the intracluster medium, which is beyond the scope of this brief posting.

Temperature

The temperature of the IGM is dictated by a balance of adiabatic cooling from the Hubble expansion and photoheating by the UV photons that keep the IGM ionized. One simple way to obtain the IGM temperature is to measure Doppler broadening of narrow Ly\alpha absorption lines along the quasar sight-lines. This is most appropriately done with a large sample of absorption lines (along various sight-lines) at the lowest end of the absorber N_{HI} column density distribution, N_{HI} \approx 10^{13} \textrm{cm}^{-2} (see e.g. Ricotti et al. 2000). These considerations minimize potential for inflated linewidth estimates due to redshift broadening arising from the physical extent of the HI overdensity system. The result obtained is:

T_{HI} \sim 10^{4}\ \textrm{K}, \ b_{T} \sim 10\ \textrm{km/s} \gtrsim H \times N_{HI}/\overline{n}_{HI}

Where the last expression is a back of the envelope estimate of the redshift broadening (H being the Hubble parameter, \overline{n}_{HI} being a guess of the density based on the average value that accounts for all the baryons in \Omega_b).
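As a check, the Doppler parameter b = \sqrt{2kT/m} for hydrogen at 10^4 K can be evaluated directly (pure thermal broadening only):

```python
import math

# Doppler parameter b = sqrt(2 k T / m_H) for hydrogen: at T ~ 1e4 K
# this reproduces the ~10 km/s widths of narrow Lya forest lines.
k_B = 1.380649e-16   # erg/K
m_H = 1.6735e-24     # g

def b_kms(T):
    return math.sqrt(2 * k_B * T / m_H) / 1e5   # cm/s -> km/s

print(f"b(1e4 K) ≈ {b_kms(1e4):.0f} km/s")   # ≈ 13 km/s
```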

Ionization Fraction

With a temperature estimate in hand, the ionization fraction can be calculated from radiative transfer, balancing ionization rate and collisional recombination rate (dependent on T through the radiative recombination coefficient \alpha(T)).

\dot{n}_{HII} = n_{HI}\int_{\nu_{0}}^{\infty}\frac{4\pi J_{\nu}\sigma_{H}(\nu)d\nu}{h\nu}, \ \sigma_{H} \sim \sigma_{0}(\nu/\nu_0)^{-3}

\dot{n}_{HI} = n_e\, n_{HII}\,\alpha(T) \approx n_{HII}^2\,\alpha(T)

A crude estimate of the first integral can be obtained by knowing approximately the UV ionizing background, yielding t_{ion} \sim 5 \times 10^{12} \ s (see for example some simple calculations and general IGM discussion by Piero Madau). Again using the \overline{n}_{HI} consistent with \Omega_b and using \alpha(10^4 \textrm{K}), balancing the rates of ionization/recombination yields n_{HI}/n_{HII} \sim 10^{-4}.
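A rough numerical version of this balance follows; the recombination coefficient α(10^4 K) ≈ 4 × 10^-13 cm^3/s, today's mean hydrogen density, and the choice of z ≈ 5 are all assumed round numbers, not values from the post:

```python
# Order-of-magnitude photoionization equilibrium for the void IGM:
#   n_HI / n_HII ≈ n_e * alpha(T) * t_ion.
# alpha, n_H0, and z below are rough assumed inputs.
alpha = 4e-13                  # cm^3/s, recombination coefficient at ~1e4 K
t_ion = 5e12                   # s, ionization timescale from the UV background
n_H0 = 1.9e-7                  # cm^-3, mean hydrogen density today
z = 5.0
n_e = n_H0 * (1 + z) ** 3      # cm^-3, mean density at redshift z

ratio = n_e * alpha * t_ion
print(f"n_HI/n_HII ~ {ratio:.0e}")   # ~1e-4: the void IGM is highly ionized
```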

More on line-of-sight observations

Virtually all information about the IGM at z < 6 is gleaned from line-of-sight observations, most frequently in the optical. Conclusive detection of metals generally requires identification of multiple lines, so that the intervening systems’ redshifts can be determined. Doublets are particularly useful in this context (especially MgII \lambda=2795\textrm{\AA}, 2802\textrm{\AA} and CIV \lambda=1548\textrm{\AA}, 1551\textrm{\AA}). Various common absorber classifications are listed in Table 1 below. See Fig. 1 below for a sample quasar spectrum showing many of the features listed in Table 1 (spectrum from Schneider “Extragalactic Astronomy and Cosmology”).

system classification    absorber           HI column density (\textrm{cm}^{-2})
narrow Ly\alpha          HI                 N_{HI} < 10^{17}
Lyman limit              HI                 10^{17} < N_{HI} < 10^{20}
damped Ly\alpha          HI                 N_{HI} > 10^{20}
metal                    CIV, MgII, etc.    10^{17} < N_{HI} < 10^{21}

Figure 1: Example of a quasar spectrum, showing some common absorption features. The source is at redshift ~2. Figure from Schneider "Extragalactic Astronomy & Cosmology".

IGM Magnetic Fields

Very little is currently known about magnetic fields in the IGM. Recently, lower bounds on the magnitude of the IGM magnetic field have been derived using line-of-sight blazar gamma-ray data (e.g. Tavecchio et al. 2010). The premise of these B-field limits is that the conversion of gamma-rays emitted by the blazar into e^{+}/e^{-} pairs on their way to Earth is sensitive to the IGM B-field, and the effect can be observed in high-energy blazar spectra. The present lower limits on B_{IGM} are \sim 10^{-17}-10^{-15} Gauss: not too stringent a lower bound!

The Warm Hot Intergalactic Medium

Simulations suggest that the z<6 IGM has a large-scale filamentary structure (the “cosmic web“), and quasar line of sight data have provided evidence for this WHIM. By analyzing quasar spectra in the X-ray with Chandra, Prof. Julia Lee and collaborators identified OVIII absorption thought to trace hot T \sim 10^6 \ K conditions in filaments overdense relative to the cosmic average baryon density by a factor \delta \sim 5-50 (Fang et al. 2002, see Fig. 2).

Figure 2: Detection of Lyman-alpha analogue absorption from seven times ionized oxygen in the WHIM, from Fang et al. 2002.


Accretion Luminosity

In Uncategorized on April 27, 2011 at 5:05 pm

Let m = \dot{M} t be the orbiting mass entering and leaving the ring.  dE = \frac{dE}{dr} dr = \frac{d}{dr} \left(-G \frac{M_* m}{2 r}\right) dr = G \frac{M_* \dot{M} t }{2 r^2} dr.  In the steady state, conservation of energy requires that the energy radiated by an annulus in the accretion disk be equal to the difference in energy flux between its inner and outer edges.  Therefore dE = dL_{\rm ring} t.  Equating the two sides gives: dL_{\rm ring} = G \frac{M_*\dot{M}}{2 r^2} dr. The Stefan-Boltzmann relation gives dL_\text{ring} = 4 \pi r \sigma T^4 dr.

Therefore we find that \sigma T^4 = \frac{G M_* \dot{M}}{8 \pi r^3}.  We did not take into account the fact that the star has a finite size which introduces a boundary layer into the disk.  A more thorough analysis (e.g. in Lynden-Bell & Pringle 1974) then adds an additional factor, giving: \sigma T^4 = \frac{G M_* \dot{M}}{8 \pi r^3} \left(1- \sqrt{\frac{R_*}{r}} \right)
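The resulting temperature profile is easy to evaluate numerically. A minimal sketch, using illustrative T Tauri parameters (the stellar mass, accretion rate, and stellar radius below are assumptions for the example, not values from the text):

```python
import math

# Steady accretion disk effective temperature,
#   sigma T^4 = G M_* Mdot / (8 pi r^3) * (1 - sqrt(R_*/r)).
G     = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
sigma = 5.670e-8         # Stefan-Boltzmann constant [W m^-2 K^-4]
M_sun = 1.989e30         # solar mass [kg]
R_sun = 6.957e8          # solar radius [m]
yr    = 3.156e7          # year [s]

def disk_T(r, M_star=0.5 * M_sun, Mdot=1e-8 * M_sun / yr, R_star=2.0 * R_sun):
    """Disk effective temperature [K] at radius r [m], for r > R_star."""
    flux = G * M_star * Mdot / (8.0 * math.pi * r**3) * (1.0 - math.sqrt(R_star / r))
    return (flux / sigma) ** 0.25

# Far from the boundary layer, T falls roughly as r^(-3/4).
print(f"T(10 R_sun) ~ {disk_T(10 * R_sun):.0f} K")
```

Note that the boundary-layer factor (1 - \sqrt{R_*/r}) drives T to zero at the stellar surface, so the temperature actually peaks slightly outside R_*.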

Isolation Mass

In Uncategorized on April 27, 2011 at 3:45 pm

If we assume that all planets and planetesimals move on roughly circular orbits, then there is a finite supply of planetesimals for a growing planet to accrete.  The mass at which the growing planet has consumed all of its nearby planetesimal supply is called the isolation mass M_\text{iso}.

The “feeding zone” \delta a_\text{max} of a planet accreting planetesimals is of order the Hill radius.  Thus we can write \delta a_\text{max} = C r_H, where C is a constant of order unity, and r_H is the Hill radius.  Furthermore, we note that the mass within this feeding zone can be calculated from the surface density of the disk \Sigma_p: (2 \pi a) (2 \delta a_\text{max}) \Sigma_p \propto M_\text{planet}^{1/3}

Finally we can set the isolation mass to be the mass at which the planet mass is equal to the mass of the planetesimals in its feeding zone: M_\text{iso} = 4 \pi a C (\frac{M_\text{iso}}{3 M_*})^{1/3} a \Sigma_p.  We can solve this equation to find that M_\text{iso} = \frac{8}{\sqrt{3}} \pi^{3/2} C^{3/2} M_*^{-1/2} \Sigma_p^{3/2} a^3.*

*See discussion in Armitage 2007
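For concreteness, the isolation-mass formula can be evaluated for rough minimum-mass-solar-nebula conditions at 1 AU. A sketch under stated assumptions (the choice C = 2\sqrt{3} and the solid surface density of ~10 g/cm^2 are illustrative, not from the text):

```python
import math

M_sun   = 1.989e33        # solar mass [g]
M_earth = 5.972e27        # Earth mass [g]
AU      = 1.496e13        # astronomical unit [cm]

def M_iso(a, Sigma_p, C=2.0 * math.sqrt(3.0), M_star=M_sun):
    """Isolation mass [g] from M_iso = (8/sqrt(3)) pi^(3/2) C^(3/2) M_*^(-1/2) Sigma_p^(3/2) a^3.
    a in cm, Sigma_p in g/cm^2; C ~ 2 sqrt(3) is a commonly assumed feeding-zone width."""
    return (8.0 / math.sqrt(3.0)) * math.pi**1.5 * C**1.5 \
        * M_star**-0.5 * Sigma_p**1.5 * a**3

# ~10 g/cm^2 of solids at 1 AU around a solar-mass star
m = M_iso(1.0 * AU, 10.0)
print(f"M_iso(1 AU) ~ {m / M_earth:.2f} M_earth")
```

The answer comes out below a tenth of an Earth mass, which is the usual argument that terrestrial planets cannot finish growing by planetesimal accretion alone and must subsequently merge via giant impacts.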

Special Topics

In Uncategorized on April 26, 2011 at 1:25 pm

According to our end of class survey, here are topics that might warrant (more) discussion/inclusion in a future version of AY201b…

Kolmogorov & Burgers Turbulence
“Kolmogorov” turbulence refers to turbulence in an incompressible medium.  “Burgers turbulence” refers to turbulence in a compressible medium (cf. Burgers equation).
Example of use: “Kolmogorov-Burgers Model of Star-forming Turbulence”, Boldyrev 2002.

Dissociative Recombination
A process where a positive molecular ion recombines with an electron, and as a result, the neutral molecule dissociates.  Dissociative recombination is important in determining the chemical composition of the molecular interstellar medium, as it easily changes the balance amongst molecular species.
Typical example of dissociative recombination: CH_3^+ + e^- \rightarrow CH_2 + H

X-wind
The model of Frank Shu et al. for the formation of stars like the Sun.

Fig. 3 of Shang 2007 (reference below). Schematic drawing of a generalized picture of the X-wind. The deadzone is opened probably by continuous magnetic activities that result in reconnection events near the magnetic “Y” points and the current sheets. Adapted from Fig. 1 in Shu et al. (1997)

Recent Reference (and source of figure above): “Jets and Molecular Outflows from Herbig-Haro Objects,” Shang 2007.
Use in meteoritics: The X-wind model offers one explanation for the origin and nature of the chondritic meteorites found in our Solar System.

And, here are some topics that we’re betting will become (even) more important in the coming decade (in order of decreasing spatial scale)…

  • 21-cm tomography of the Early Universe
  • Magnetic fields in the IGM
  • Improved Extragalactic Star-Formation “Prescriptions” (e.g. Kennicutt-Schmidt), based on our understanding of the process near-er by
  • The role of magnetic fields in the low-density ISM (see WISE images…)
  • Numerical Models of star-forming regions that include ALL of these: gravity, heating/cooling, chemistry, radiative transfer, and magnetic fields.
  • AMR Simulations spanning pc to AU (star formation to planet formation)
  • Dense core “fragmentation”
  • The role of chemistry in core and planet formation
  • Time evolution of disk/planet formation (accretion, planet buildup, planet migration)
  • The relationship between planetary atmospheres and their formation environment
  • Laboratory measurements of the low-density, low-gravity, behavior of gases & solids (dust)
  • More…

Future Instruments

In Uncategorized on April 26, 2011 at 12:09 am

The next twenty years promises to be an exciting time for studying the ISM. Each of the following telescopes is described in detail on a separate wordpress page (click on the acronym) and on their official websites (click the full name).

Atacama Large Millimeter Array: ALMA

ALMA will begin early science observations with Cycle 0 in September, 2011 and should be completed in 2013. The high spatial resolution of ALMA will allow astronomers to image young planets embedded in disks around nearby stars.

The James Webb Space Telescope: JWST

JWST is an exciting space-based infrared observatory that promises to acquire a wealth of photometric and spectroscopic information. For studies of the ISM, JWST will be particularly useful for mapping the distribution of dust and for observing obscured systems such as young stellar objects and circumstellar disks (see Gardner et al. 2006).

Thirty Meter Telescope: TMT

TMT will conduct near-UV, optical, and near-infrared observations of young stellar objects, protoplanetary disks, and hot, young Jovian planets. The large primary mirror of the telescope and the adaptive optics system will allow TMT to produce high-resolution images of star and planet formation that include small-scale details that are unobservable with current telescopes.

Giant Magellan Telescope: GMT

GMT has the same strong science case as TMT, but will be a ~25m telescope in the southern hemisphere. The main differences between GMT and TMT are shown in the table below.

Comparison of GMT and TMT

Comparison of GMT and TMT. GMT information from http://www.gmto.org/tech_overview. TMT information from http://www.tmt.org/observatory/telescope.

The Astro2010 Decadal Survey identified U.S. participation in a Giant Segmented Mirror Telescope (either GMT, TMT, or E-ELT) as Priority 3 for large, ground-based missions (after the Large Synoptic Survey Telescope and a Mid-Scale Innovations Program). As part of the process, the National Academy of Sciences conducted independent cost estimates of the telescope optics and instruments for GMT and TMT. The resulting estimates at 70% confidence are $1.1 billion for GMT construction and $1.4 billion for TMT construction. These cost estimates assume that the telescopes will begin science operations with adaptive optics and three instruments in spring 2024 for GMT and between 2025 and 2030 for TMT. Although both the TMT website and the GMT website indicate science observation start dates in 2018, the Decadal Survey estimates are probably more realistic.

The Giant Magellan Telescope

In Uncategorized on April 26, 2011 at 12:05 am
GMT at Twilight (GMTO Corporation)

An artist's conception of the Giant Magellan Telescope at twilight. Note the truck at the lower right for scale. Image copyright Giant Magellan Telescope - GMTO Corporation.

The Giant Magellan Telescope is a collaboration between the Carnegie Institution for Science, Harvard University, the Smithsonian Astrophysical Observatory, Texas A&M University, the Korea Astronomy and Space Science Institute, the University of Texas at Austin, the Australian National University, the University of Arizona, Astronomy Australia Ltd. and the University of Chicago. GMT should be completed around 2018.

The Telescope

The primary mirror of GMT will be composed of seven circular segments 8.4m in diameter arranged as shown in the figure below. In order to properly focus the light, the outer six segments are shaped asymmetrically like potato chips. The resolving power of GMT will be equivalent to the resolving power of a 24.5 meter telescope. The secondary mirror (also pictured below) consists of an adaptive shell for each of the primary mirror segments and will be controlled by the adaptive optics system to correct for atmospheric turbulence over a field of view 10′-20′ in diameter.

Artist's conception of GMT primary mirror (Giant Magellan Telescope - GMTO Corporation)

An artist's conception of the primary and secondary mirrors of GMT. Image copyright Giant Magellan Telescope - GMTO Corporation.

The Site

The chosen site for GMT is Cerro Las Campanas in Chile. Cerro Las Campanas, pictured below, is located at an altitude of >2550 meters and has dry weather, dark skies, and good seeing. For more information about the site, see the GMT site selection page.

GMT on Cerro Las Campanas (GMTO Corporation)

An artist's conception of GMT on the peak of Cerro Las Campanas in Chile. Image from GMTO Corporation.

Instruments

GMT’s instruments will be placed behind the central primary mirror. There will be a large (6m x 5m) space directly behind the mirror for large instruments and a rotating platform for smaller instruments. See the technical overview page for more information about instrument mounting.

The proposed first generation instruments for GMT are shown in the table below from the GMT Progress Report SPIE Conf. 7012-46. According to the report, three instruments will be selected for first light.

GMT Instrument Concepts (Progress on the GMT, Johns)

GMT Instrument Concepts. (Table 6 from SPIE 7012-46, Progress on the GMT by Matt Johns at Carnegie Observatories)

Science Goals

The science goals that will be addressed by GMT include:

  • Detection and characterization of exoplanets
  • Study of dark matter and dark energy
  • Observations of stellar populations and the origin of elements
  • Observations of black hole growth
  • Study of galaxy formation
  • Observations of the epoch of reionization

The Thirty Meter Telescope

In Uncategorized on April 25, 2011 at 11:14 pm
Artist's Impression of TMT from NASA.

Artist's Impression of the Thirty Meter Telescope from NASA.

The Thirty Meter Telescope (TMT) is a collaboration between the Association of Canadian Universities for Research in Astronomy, the California Institute of Technology, the University of California, the National Astronomical Observatory of Japan, the National Astronomical Observatories of the Chinese Academy of Sciences, and the Department of Science and Technology of India. According to the TMT Timeline, First Light should occur in October 2017 and the first science should be conducted in June 2018.

The Telescope

The thirty meter primary mirror of TMT will be segmented into 492 1.44m hexagonal segments as shown in the image below. After hitting the primary mirror, the light will be reflected onto a tiltable 3.1m secondary mirror and then onto a 3.5m x 2.5m elliptical tertiary mirror that will send the light into the instruments on the Nasmyth platforms. TMT will have two Nasmyth platforms with space for eight instruments total.

TMT Primary Mirror (TMT Collaboration)

An artist's conception of the segmented primary mirror of TMT. The 1.44m hexagonal segments will be placed only 2.5mm apart. The elliptical tertiary mirror is shown at the center of the primary mirror. Note the tiny person in the upper left for scale. (TMT Collaboration)

The Site

TMT design operations are based in Pasadena, CA, but the selected telescope site is within the 36-acre “Area E” on the summit of Mauna Kea in Hawaii as shown on the map below. Mauna Kea is a well-established site for observatories due to the high-quality seeing, dry conditions, and typical lack of cloud cover. Once constructed, the TMT complex would consist of a dome 56m in height and 66m wide, 5 acres of roads, and 1.44 acres of buildings.

Proposed Site for TMT (UH and USGS)

Proposed Site for TMT in Area E on the summit of Mauna Kea. For reference, the locations of existing telescopes are indicated by the numbered yellow circles. Map produced by UH and USGS.

Instruments

In addition to the Narrow Field Infrared Adaptive Optics System (NFIRAOS), TMT will have three first light instruments:

  1. Wide Field Optical Spectrometer (WFOS): Spectroscopy and imaging without AO at near-ultraviolet and optical wavelengths (0.3-1.0 microns) over a >40 square arcminute FOV.
  2. InfraRed Imaging Spectrometer (IRIS): Integral-field spectroscopy and diffraction-limited imaging at near-infrared wavelengths (0.8-2.5 microns).
  3. InfraRed Multi-object Spectrometer (IRMS): Slit spectroscopy and diffraction-limited imaging at near-infrared wavelengths (0.8-2.5 microns) over a 2′ diameter FOV.

Science Goals

As explained in the TMT Science Case, the science goals for TMT are:

  • Spectroscopy of the first galaxies
  • Observations of the formation of large-scale structure
  • Detection and investigation of central black holes
  • Observations of star and planet formation
  • Characterization of exoplanet atmospheres
  • Direct detection of exoplanets

The Atacama Large Millimeter Array

In Uncategorized on April 25, 2011 at 10:32 pm
8 of the ALMA Antennas (ESO/NAOJ/NRAO)

Eight of the 12m ALMA Antennas. Image from ALMA (ESO/NAOJ/NRAO)

The Array

The construction of ALMA began in 2003 and should be finished in 2013. Although the array is still under construction, ALMA is currently accepting proposals for Fall 2011 using 16 antennas and four of the ten receiver bands. More information on the Early Science Cycle 0 Call for Proposals is available on the ALMA website. The deadline for submission of Notices of Intent is April 29, so get writing!

When completed, ALMA will consist of 50 12-m antennas. Like the antennas in the Very Large Array and the Submillimeter Array, the ALMA antennas will be mobile to allow for different observing configurations and consequently different spatial resolutions. In addition to the 50 12-m antennas, ALMA will also consist of 12 7-m antennas. Those 7-m antennas and four of the 12-m antennas will make up the Atacama Compact Array (ACA) and will remain in roughly the same position for all observations to increase ALMA’s ability to map large scale structures.

The Site

The Atacama Large Millimeter Array is currently under construction on the Chajnantor plain in the Atacama desert in Chile. Since the site is at an altitude of 5000 meters and quite dry (precipitable water vapor ~ 1 mm), the atmospheric transparency should be excellent for submillimeter observations. The figure below displays a plot of the atmospheric transmission at Chajnantor and the ALMA Observation Bands. Logically, the ALMA Observation Bands were chosen to fit between the major absorption features of water and oxygen.

Atmospheric Transmission at Chajnantor

Atmospheric Transmission at Chajnantor. The colored bands indicate the ALMA Observing Bands. The red bands (3, 6, 7, and 9) will be available first. The primary sources of absorption are H2O (22.2, 183, 325, 380, 448, 475, 557, 621, 752, 988, and 1097 GHz) and oxygen (50-70 GHz and 118 GHz).

Science Goals

ALMA will achieve three main goals:

  1. Detect line emission from CO or CII in under 24 hours from galaxies at a redshift of z=3.
  2. Observe gas kinematics in protostars and protoplanetary disks within 150 pc.
  3. Acquire high dynamic range images at high angular resolution (0.1″).

Extrasolar Planets and Protoplanetary Disks

ALMA will be particularly useful for detecting extrasolar planets and stars during the early stages of formation. The figure below shows a simulation by Wolf & D’Angelo 2005 of possible ALMA observations of embedded Jovian planets. The 1 Jupiter mass and 5 Jupiter mass planets are clearly visible at both 50 pc and 100 pc!

Simulation of ALMA observations of an embedded planet by Wolf & D'Angelo 2005

Simulation of ALMA observations of an embedded planet. The dot in the lower left represents the combined beam size. Left: 1 Jupiter mass planet around a 0.5 Solar mass star in a 0.01 Solar mass disk. Right: 5 Jupiter mass planet around a 2.5 Solar mass star, Top: Distance of 50 pc, Bottom: Distance of 100 pc. Figure 2 from Wolf & D'Angelo 2005.

AGB Stars and Interstellar Dust Grains

ALMA will also advance studies of interstellar dust grains by allowing scientists to create high resolution (<0.1") images of the dust condensation zones around AGB stars at distances of a few hundred parsecs. By comparing the angular sizes of CO envelopes around evolved stars to the known distances of those stars, astronomers will be able to determine the physical size of CO emitting regions. The distances to other evolved stars could then be estimated by comparing the observed angular size of the CO emitting regions around stars of unknown distances to the newly discovered characteristic physical size of CO emitting regions. Once the distances to a large number of evolved stars have been determined, astronomers could then map out the distribution of AGBs.

Other Research Areas

Since many molecular transitions occur at submm wavelengths, ALMA will be sensitive to the presence of a wide range of molecules. Additionally, ALMA will be able to measure the radii and rotation rates of stars and monitor the activity of the Sun.

Observations

ALMA will conduct observations in ten different bands from 84 GHz to 720 GHz at resolutions between 6 mas and 0.7″. The resolution is frequency- and baseline-dependent; the angular resolution becomes coarser with shorter baselines and lower frequencies. Within a given band, ALMA will produce a data cube containing up to 8192 frequency channels with widths between 3.8 kHz and 2 GHz. In the most compact configuration, ALMA will have baselines between ~18m and ~125m. In the extended configuration, the baselines will be between ~36m and ~400m. See the ALMA Capabilities page for more detailed information about planning observations with ALMA.
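These numbers follow from the usual order-of-magnitude interferometry relations: \theta \approx \lambda / B for the angular resolution and \Delta v = c \, \Delta\nu / \nu for the velocity width of a channel. A quick sketch (the example frequency and baseline are assumptions for illustration, and this ignores the details of the synthesized beam):

```python
c = 2.998e8   # speed of light [m/s]

def resolution_mas(freq_GHz, baseline_m):
    """Approximate angular resolution theta ~ lambda/B, in milliarcseconds."""
    lam = c / (freq_GHz * 1e9)
    return (lam / baseline_m) * 206265e3   # radians -> mas

def channel_kms(freq_GHz, dnu_kHz):
    """Velocity width of one channel, dv = c * dnu/nu, in km/s."""
    return (c / 1e3) * (dnu_kHz * 1e3) / (freq_GHz * 1e9)

# e.g. a ~400 m baseline observing in Band 6 (~230 GHz)
print(f"theta(230 GHz, 400 m) ~ {resolution_mas(230, 400):.0f} mas")
print(f"dv(230 GHz, 3.8 kHz)  ~ {channel_kms(230, 3.8):.4f} km/s")
```

The first number lands near the coarse (0.7″) end of the quoted resolution range, as expected for a compact-ish configuration; the 6 mas figure requires the much longer baselines of the full array.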

The James Webb Space Telescope

In Uncategorized on April 25, 2011 at 10:27 pm
Schematic of JWST from NASA

Schematic of JWST. The large sunshield (blue) blocks radiation from the Sun, Earth, and Moon from reaching the science instruments in the ISIM (Integrated Science Instrument Module). Image from NASA.

JWST aboard Ariane 5

An artist's conception of JWST folded and ready for launch aboard an Ariane 5 rocket. Image from Arianespace/ESA/NASA.

The 6.5 meter James Webb Space Telescope (named for former Apollo-era NASA Administrator James Webb) is scheduled to be launched from the Ariane 5 launch site in French Guiana in 2014. Unlike Hubble, which is in Earth orbit, JWST will orbit around the Earth-Sun L2 Lagrange point. The decision to send JWST to L2 was motivated by the need to cool the spacecraft in order to conduct observations in the infrared. Although parts of JWST are actively cooled, the remainder of the spacecraft will be passively cooled by placing the spacecraft in deep space and deploying a tennis court sized shield to block light from the Sun, Earth, and Moon.

Comparison of Hubble and JWST primary mirrors from NASA

Size comparison of JWST's 6.5 m diameter primary mirror to Hubble's 2.4 m diameter mirror. Graphic from NASA.

Due to their large size, the sunshield and the primary mirror of the telescope will be folded to fit inside the payload compartment of the Ariane 5 ECA launch vehicle. The figure on the right compares the JWST’s large primary mirror to Hubble’s 2.4 m mirror and the figure on the far right displays JWST in launch configuration. After launch, the telescope mirror will magically unfold and the solar panels will be deployed as shown in the deployment animation below.

Once the telescope has unfolded, JWST will conduct observations to address four key science goals:

  1. “The End of the Dark Ages: First Light and Reionization”
  2. “Assembly of Galaxies”
  3. “The Birth of Stars and Protoplanetary Systems”
  4. “Planetary Systems and the Origin of Life”

JWST will address those goals using four instruments attached to the Integrated Science Instrument Module(ISIM):

  1. Mid-InfraRed Instrument (MIRI)
  2. Near-InfraRed Camera (NIRCam)
  3. Near-InfraRed Spectrograph (NIRSpec)
  4. Fine Guidance Sensor Tunable Filter Imager(FGS-TFI)

As the name suggests, MIRI is sensitive to mid-infrared wavelengths between 5 and 27 micrometers (or 29 micrometers for spectroscopy). MIRI will be actively cooled to 7K and used for wide-field broadband imagery and medium-resolution spectroscopy. More information about MIRI is available from the University of Arizona and the Space Telescope Science Institute.

NIRCam serves the dual role of acquiring high angular resolution images at 0.6-5 microns over a 2.2’x2.2′ field of view and conducting wavefront sensing using the Optical Telescope Element wavefront sensor. Although JWST will not have to contend with the Earth’s atmosphere, minor differences in the shape and position of the primary mirror segments could introduce distortions and phase variations in the wavefronts received by each segment. The Optical Telescope Element wavefront sensor will be used to monitor such distortions and reshape and realign the mirror segments to correct the wavefront errors.

For science observations, NIRCam will be operated in one of three imaging modes (survey, small source, or coronagraphic) or in medium-resolution spectroscopy mode. More information about the NIRCam imaging modes and filters is available from the Space Telescope Science Institute.

NIRSpec will use a “microshutter array” to acquire simultaneous 0.6-5 micron spectra of 100 objects over a 3’x3′ field of view. In addition to the novel “Micro-Shutter Assembly” (MSA) mode, NIRSpec may also be operated in Fixed Slit mode or Integral Field Unit mode, with the spectral resolutions shown in the following table from STSCI.

NIRSpec Instrument Modes from STSCI

Description of NIRSpec Instrument Modes from the Space Telescope Science Institute.

The Fine Guidance Sensor component of FGS-TFI consists of a 1-5 micron broadband guide camera capable of finding a guide star at 95% probability anywhere in the sky. FGS will be used to monitor JWST’s pointing throughout the mission and to properly deploy the primary mirror during the unfolding phase of the mission. The second half of FGS-TFI, the Tunable Filter Imager is a science instrument that will be used to acquire narrow-band images between 1.6-4.9 micrometers at R~100 resolution over a wide 2.2’x2.2′ field of view. TFI is being built by the Canadian Space Agency.

The Magellanic Stream

In Uncategorized on April 24, 2011 at 9:59 pm

The Magellanic Stream (MS) is an extended HI stream encircling the Milky Way (MW). It contains at most a handful of stars, but it has more than 10^8 solar masses of neutral hydrogen. First understood as a remnant of the Magellanic Clouds (MCs) some 30+ years ago (Wannier & Wrixon 1972; Mathewson et al. 1974), the Magellanic Stream has since been studied heavily, both for its insights into extragalactic gas replenishment and hierarchical merging scenarios, and as a way to understand its progenitor(s?): the Large Magellanic Cloud (LMC) and the Small Magellanic Cloud (SMC). As observational techniques and telescope sensitivity have improved, the observed size of the MS has increased from ~100˚ to potentially ~200˚ (Nidever et al. 2010).

From Nidever et al. (2010); original image from Mellinger 2009

Today, the MS is understood in the context of the larger Magellanic system, which includes — in addition to the MS — the LMC, the SMC, a “Bridge” of gas connecting the two interacting galaxies, and a “Leading Arm” (LA) of diffuse HI clouds extending from the LMC/SMC in the opposite direction as the MS.

History of History: The Debate over the Formation

Until recently, the formation of the MS was understood as either one or a combination of two physical processes: tidal stripping of the LMC/SMC by the MW (e.g., Murai & Fujimori 1980; Gardiner & Noguchi 1996), or ram pressure stripping of the LMC/SMC pair as they plunged through the MW hot halo (e.g., Meurer et al. 1985; Mastropietro et al. 2005). The first of these processes necessarily implied that the LMC/SMC had long been kept in a tight orbit (~2-2.5 Gyr) around the MW, long enough to strip the requisite amount of gas to form the Magellanic Stream.  Even in the ram-pressure-dominated case, close proximity is required for the LMC-SMC pair to be disturbed by passing through the hot halo. If either of these formation models were correct, then the MS is a relatively young feature, formed ~1.5 Gyr ago in the temporal vicinity of its pericentric passage.

However, Hubble Space Telescope (HST) proper motion studies by Kallivayalil et al. (2006a, 2006b) have led to revised models for the past orbital motions of the LMC/SMC pair. Besla et al. (2007) used these new velocities (~ 80 km/s higher than previous measurements) as priors in a backward-integration orbital model to show that the Magellanic Clouds could at most have made one orbit about the MW, and that with a Lambda-CDM-motivated NFW dark matter profile, the MCs are on their first passage around the MW. Such an orbit would rule out the traditional formation models, so Besla et al. (2010) argue that the Magellanic Stream is a product of only LMC-SMC interaction. In an N-body+SPH simulation, they manage to reproduce many key observational features of the Magellanic system, including the absence of stars from the MS, the projected location of the MS, and the asymmetry between the MS and the LA. HI column densities are qualitatively similar to observations, but they do not reproduce them faithfully. Line-of-sight velocities have a much larger spread than in reality. Besla et al. suggest that the inclusion of metal cooling, ionization, and interactions with the hot halo may allow for a more realistic reproduction of HI density gradients across the stream.

Original Caption (from Besla et al. 2010): Fig. 2 Stellar surface brightness, H i gas column densities, and line-of-sight velocities of the simulated Magellanic system. Top panel: the resulting stellar distribution is projected in Magellanic coordinates (N08), a variation of the Galactic coordinate system where the Stream is straight. The distribution is color-coded in terms of V-band surface brightness. The past orbit of the LMC/SMC is indicated by the blue lines. Middle panel: the H i gas column densities of the simulated stream range from 1018 to 1021 cm−2 , as expected (Putman et al. 2003). The white circle indicates the observed extent of the LMC’s H i disk: the simulated LMC is more extended than observed, indicating ram pressure likely plays a role to truncate the disk. In both the top and middle panels, the solid white line indicates the past orbit of the SMC according to the old theoretically derived PMs (GN96) which was a priori chosen to trace the Stream on the plane of the sky. The true orbits (determined by all PM measurements) for the LMC/SMC are indicated by the yellow lines. Bottom panel: the line-of-sight velocities along the simulated stream are plotted and color-coded based on H i column density, as in the middle panel. The white line is a fit to the observed data (N08). The LMC disk is too extended, causing a larger velocity spread than observed. The line-of-sight velocities along the past orbits of the LMC/SMC are indicated by the yellow lines, which do not follow the true velocities along the Stream (e.g., B07, Figure 20). The Stream is kinematically distinct from the orbits of the Clouds.
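The backward-integration idea can be illustrated with a toy model: take the satellite's present-day position and velocity, flip the velocity, and integrate it as a test particle in a static NFW halo. This is a heavily simplified sketch (no dynamical friction, no LMC-SMC mutual interaction); the halo parameters, initial conditions, and function names below are illustrative assumptions, not the values used by Besla et al. (2007):

```python
import math

G = 4.301e-6              # gravitational constant [kpc (km/s)^2 / Msun]
rho_s, r_s = 6.5e6, 21.5  # assumed NFW scale density [Msun/kpc^3] and radius [kpc]

def nfw_accel(x, y):
    """Acceleration [(km/s)^2 / kpc] from the NFW mass enclosed within r."""
    r = math.hypot(x, y)
    m_enc = 4.0 * math.pi * rho_s * r_s**3 * (
        math.log(1.0 + r / r_s) - (r / r_s) / (1.0 + r / r_s))
    a_over_r = -G * m_enc / r**3
    return a_over_r * x, a_over_r * y

def leapfrog(x, y, vx, vy, dt=1e-3, n=6000):
    """Kick-drift-kick leapfrog; the time unit kpc/(km/s) is ~0.98 Gyr."""
    radii = []
    for _ in range(n):
        ax, ay = nfw_accel(x, y)
        vx += 0.5 * dt * ax; vy += 0.5 * dt * ay
        x += dt * vx; y += dt * vy
        ax, ay = nfw_accel(x, y)
        vx += 0.5 * dt * ax; vy += 0.5 * dt * ay
        radii.append(math.hypot(x, y))
    return radii

# "Backward" integration: in a static potential, flipping the present-day
# velocity and integrating forward traces the orbit back in time.
r_hist = leapfrog(50.0, 0.0, 0.0, -300.0)  # 50 kpc out, 300 km/s tangential
print(f"r spans {min(r_hist):.0f}-{max(r_hist):.0f} kpc over ~5.9 Gyr")
```

Even this toy version shows the qualitative point: a fast satellite in an NFW halo spends most of its time far from pericenter, so a high measured velocity pushes the models toward wide, long-period (or first-passage) orbits.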

Observations, meanwhile, continue to drive competing theories of MS formation. Canonical values for the MW circular velocity are ~ 220 km/s. However, recent astrometric parallax measurements suggest that the circular velocity of the MW ought to be revised upward to ~ 250 km/s (e.g. Reid et al. 2009). Using this higher circular velocity, Diaz & Bekki (2011a) provide a model that reproduces many observational features of the MS, in which the LMC and SMC are independently bound to the MW for at least 5 Gyr, and only more recently became mutually bound. Like Besla et al., this model relies on the LMC tidally stripping SMC gas to form the MS, but it does so in a bound orbit. This model, however, relies on an unrealistic isothermal dark matter halo, which is used to artificially institute a flat circular velocity profile. A more cosmologically motivated NFW profile would likely not change the fundamental result, though: a bound orbit remains plausible, given the uncertainties in even the most recent observations. Moreover, Diaz & Bekki (2011b) introduce a semi-analytic term for the drag caused by the hot halo, and they are able to better reproduce LA kinematics while leaving the MS itself relatively unchanged. This hints that more realistic treatments of the hydrodynamic effects of the hot halo may indeed be able to account for the diffuse, filamentary, and cloud-like structure of the MS and the LA.

While the formation theories continue to disagree on fundamental points (such as whether the LMC and SMC spent the majority of a Hubble time bound or unbound to the MW), they increasingly also point to a consensus in which a combination of gravitational, hydrodynamic, and stellar feedback effects are responsible for the formation of the Magellanic Stream, and also for the particulars of the content of the MS’s interstellar medium. For example, Nidever et al. (2008) propose a different, but not contradictory model of formation based on observations of large outflows from supergiant shells in the LMC. In this scenario, SNe explosions push gaseous material to larger radii, allowing ram pressure and/or gravitational forces to do the remaining work of removing the gas into the MS and LA. Such a proposal can coexist with either the picture of Besla et al. or of Diaz & Bekki.

Streaming Forward: A Reservoir of Science and Cold Gas

The debate over the formation of the Magellanic Stream does not take place in a vacuum (pun intended). Understanding its history allows for a better understanding of its present and future. While the above theories and observations are primarily focused on the dynamics of the HI stream/clouds in the context of interacting galaxies, many current avenues of research use analysis of the stream as a probe of the surrounding medium. Fox et al. (2010) present spectroscopic measurements of the MS in order to determine the metallicity and ionization of the gas, using background quasars as backlights. The most interesting results are the high levels of ionization in the gas, which the authors argue can only be explained by a multi-phase plasma model, where the ionization is the result of collisions between clouds and the warm-hot medium. These high ionization levels suggest that the Magellanic Stream may not survive long enough to replenish the MW and thus allow for more star formation; instead, the MS filaments may be merely transitory features that subsequently dissolve into the coronal plasma. Meanwhile, metallicity measurements of [O/H] = –1.00, comparable to the metallicity of the SMC, provide additional evidence that the MS is formed from LMC stripping of the SMC, rather than MW stripping of both the LMC and SMC.

Original Caption (from Fox et al. 2010): Figure 1. H I map of the MS color coded by velocity and centered on the South Galactic Pole, using 21 cm data from Hulsbosch & Wakker (1988) and Morras et al. (2000) with a sensitivity of log N(H I) ≈ 18.30. The positions of our two sight lines are marked.

What Can be Learned?

The Magellanic Stream will clearly continue to have much to offer both observational and theoretical astrophysicists in the upcoming decade. As the modeling discussed in the historical section above demonstrates, the addition of more realistic hydrodynamics will potentially allow for remarkably close replication of the actual observables of the LMC/SMC dwarf galaxy pair. It is important, however, to remain cautious of any tendency to over-fit the specifics of one system. Chaotic effects will render certain features difficult if not impossible to replicate, and in the attempt to reproduce them, it is possible to stray from understanding the basic physics into adopting unneeded prescriptions. Hopefully, instead, the Magellanic Stream will serve as a proof-of-concept for these types of mergers in general. Another possibility is that improved models will place constraints on the halo shape and symmetry, rather than relying on an assumed profile as a prior. In the shorter term, however, the Stream is more likely to remain a probe of the warm-hot halo, since those predictions can be compared directly with other observations, whereas the dark matter halo cannot be observed directly.

The Very High Redshift Universe (in HI)

In Uncategorized on April 21, 2011 at 2:18 pm

An excellent Schematic, from Loeb 06

Definitive Review (121 pages) is  Furlanetto, Oh & Briggs 2006.

Selected Key points/Figures from Furlanetto et al. 2006

21-cm “tomography”

Figure attributed to "Chang and Oh, in preparation" by Furlanetto et al. 2006

21-cm “forest”

C.L. Carilli, N.Y. Gnedin, F. Owen, Astrophys. J. 577 (2002) 22–30

Nearby Galaxies & the Kennicutt-Schmidt Relation

In Uncategorized on April 19, 2011 at 1:59 pm

(AG’s handwritten notes will be merged into this post after the class meeting on 4/19/11.)

Galaxy/ISM “Evolution”

Figure 1 of Galametz et al. 2011

Kennicutt-Schmidt Relations

Good intro: from TAUVEX web site.

Schmidt 1959 Paper on “The Rate of Star Formation”

Kennicutt 1998 Review Paper (see Figure 9, shown below)

from Kennicutt 1998

Current Understanding of Kennicutt-Schmidt Relations

(reproduced from Goodman & Rosolowsky NSF Proposal, 2008)

The very last line of Marten Schmidt’s 1959 paper reads: “the mean density of a galaxy may determine, as a first approximation, its present gas content and thus its evolution stage.”  And, as a “first approximation,” the relationship between the star formation rate and the gas density,

\rho_{\rm SFR} \propto \rho_{\rm gas}^{n},

that Schmidt put forward has held up remarkably well (Kennicutt 2007).  We seek here to see how far beyond “first approximation” studies of nearby star forming regions can presently help us go in the extragalactic context, and how much a more refined view could help in future studies of galaxy evolution.

The modern version of the Schmidt Law is largely due to the work of Kennicutt and collaborators, who study the relation between “star formation rate” and “surface density” in nearby galaxies.  Many different indicators are used by Kennicutt et al., and by others, to measure each of these quantities, and we focus on the vagaries of their interpretation below.  Suffice it to say here that the “Kennicutt Law” holds over more than 4 orders of magnitude in surface density, and has a scatter about the relationship of roughly 1-2 orders of magnitude (Figure 6).  The Kennicutt Law is:

\Sigma_{\rm SFR} = a \Sigma_{\rm gas}^{q},

where a is a proportionality constant and the exponent q is typically of order 1.4 (Kennicutt 2008). If gas scale heights are assumed not to vary much from galaxy to galaxy, then re-writing the Schmidt law (which uses volume densities) in terms of surface densities gives exponent q=1.5, making it the same as the Kennicutt Law to within the uncertainties (Krumholz & Thompson 2007).
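The scaling can be sketched numerically. A minimal illustration, assuming the normalization a commonly quoted from Kennicutt (1998) — treat that value as illustrative given the 1-2 dex observed scatter:

```python
def sigma_sfr(sigma_gas, a=2.5e-4, q=1.4):
    """Kennicutt law sketch: Sigma_SFR = a * Sigma_gas**q.

    sigma_gas : gas surface density in Msun / pc^2
    returns   : SFR surface density in Msun / yr / kpc^2
    a is the normalization commonly quoted from Kennicutt (1998);
    it is illustrative only -- the observed scatter spans 1-2 dex.
    """
    return a * sigma_gas ** q

# The slope q is what matters here: a factor of 100 in gas surface
# density changes the SFR surface density by 100**1.4 (~630x).
ratio = sigma_sfr(1000.0) / sigma_sfr(10.0)
print(ratio)
```

The super-linear slope (q > 1) is exactly the "non-linear efficiency function" discussed below.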

The K-S relation effectively implies that the efficiency function with which stars form from gas in galaxies is unchanging to within two orders of magnitude, after the many billions of years the galaxies in the sample have had to evolve.   That “efficiency function,” however, is not linear, in that q ≠ 1.

This non-linearity has led others to propose two kinds of revisions to K-S ideas, both of which rely upon the important fact (Lada 1992) that studies of local (Milky Way) star forming regions clearly show that stars only form in the densest regions of molecular clouds.

The first kind of “revised” K-S relation investigates how the SFR depends on the surface density of gas above a higher density threshold than just the ~100 cm^{-3} needed to produce ^{12}CO emission[1]. Gao & Solomon (2004) observed HCN, which is excited at densities above a few \times 10^4 cm^{-3}, in a large sample of galaxies, and they derived the relation:

L_{\rm IR} \propto L_{\rm HCN}

This empirical relation has (slightly) less scatter than one using CO only, and, perhaps more importantly, has a linear power-law slope, suggesting that the surface density of HCN may be a linear determinant of the star formation rate.  The Gao & Solomon work inspired Wu et al. (2005) to test the relationship between L_{\rm IR} and L_{\rm HCN} in Milky Way molecular clouds, and the linear relationship was shown to continue right down to the scale of local GMCs.  Many in the extragalactic community rejoiced at these results, which could have meant that the quest for the “perfect” tracer of star-forming gas in a galaxy ended at “whatever it is that emits in HCN.”  But, as we explain at the close of this section, the story is, alas, not that simple.

The second kind of revised K-S relation acknowledges that density may not be the only determinant of fecundity in molecular gas. Blitz, Rosolowsky, Wong and collaborators have put forward the idea that pressure, rather than density, is likely to be more fundamental (Blitz & Rosolowsky 2006 and references therein).  The idea that pressure is critical (cf. Bertoldi & McKee 1992) is supported by analysis of nearby star-forming regions, where it is clear that dense star-forming cores are often pressure-bound by the weight of the cloud around them (Lada et al. 2008) rather than being confined only by their own self-gravity.[2]

In §2.4 below, we lay out a plan for measuring properties of Milky Way clouds that should be able to test both of these physically-motivated “revisions” to empirical K-S laws, as well as other ideas.  First, though, let us consider what modern theory predicts.  Krumholz and McKee (2005) can “predict” (explain) the observed K-S relations with three premises they state as:

  1. star formation occurs in virialized molecular clouds that are supersonically turbulent;
  2. the density distribution within these clouds is log-normal, as expected for supersonic isothermal turbulence; and
  3. stars form in any subregion of a cloud that is so overdense that its gravitational potential energy exceeds the energy in turbulent motions.

Our own work long ago as well as many others’ (cf. Larson 1981) has shown that #1 is clearly true.  Our recent work has shown that #2 is true for at least one well-studied local star-forming region (see §1.2, and Figure 1).  Our work on dendrograms allows us to find the “subregions” #3 is talking about, and to quantify the ratio of turbulent to gravitational energy with a virial parameter (see §1.3.2, and Figures 4 and 5).

Additional recent theoretical work, motivated by Gao & Solomon’s HCN results, predicts the origin of the K-S relations seen not only in ^{12}CO, but also in a host of other spectral lines.  Krumholz and Thompson (2007) and Narayanan et al. (2008) have investigated how the shapes of K-S relations change based on the molecular line tracer used to probe gas surface density.  Both groups’ work points out a very key, but somewhat subtle, feature of molecular line observations that is often ignored in the “K-S” community (but not in the Milky Way star-formation community!).  The relationship between observed emission in a spectral line and the density of the emitting region depends on how far above or below the “critical density” required to excite the transition the emitting region is.

Narayanan et al. (2008) clearly show that emission from a region which is nearly all above the critical density (e.g. CO under nearly any conditions) will give K-S slopes q>1, and emission from regions where much material is below the critical density of the tracer used will give K-S relations with slopes q<1, due to the inclusion of significant amounts of sub-thermally excited matter.

Krumholz & Thompson give an intuitive explanation of how HCN gives a slope of unity.  If a K-S relation has a slope of 1.5, then a factor of (\Sigma_{\rm gas})^1 comes from the amount of gas available for star formation, and a factor of (\Sigma_{\rm gas})^{0.5} comes from the dependence of the free-fall time on density.  The models of Krumholz & McKee (2005), and of several others, assume that all “bound” gas (#3 above) collapses on a free-fall time, so that over time this process gives an exponent of q=1.5 in equation 1, for a galaxy with a finite reservoir of gas and a constant efficiency of turning gas into stars.  Krumholz & Thompson argue that if a tracer (like HCN) has its critical density near or above the average density of star-forming gas, then the “free-fall” factor goes away, leaving Gao & Solomon’s linear relation, because the emission is coming from regions that all have the same free-fall time.

Very recently, Bussmann et al. (2008) found q=0.79±0.09 for a sample of more than 30 nearby galaxies observed in the (high critical density) HCN (3-2) line; that sub-linear slope was predicted in advance by the models of Narayanan et al. (2008).  However, a soon-to-be-published extensive observational study of massive-star-forming clumps within the Milky Way by Wu et al. (2008) finds a more linear (q≈1) relation for high-density tracers, including HCN (3-2).  Wu et al. also find that the five different dense gas tracers for which they construct K-S relations within the Milky Way rise steeply (and not really as a power law) below an infrared luminosity threshold of ~10^{4.5} L_\odot, and that above this threshold each gives a slightly different (near-unity) slope and offset.


[1] Usually, CO is taken to be an indicator of “gas” in K-S relations.  Kennicutt (Kennicutt 1998) and others have experimented with HI+CO and find slightly tighter correlations, but others (Blitz & Rosolowsky 2006) find systematic effects to be at the origin of the tightened correlations.

[2] We note that this question, about how exactly cores “connect” to their environment is so interesting on its own that an entirely separate NSF proposal from this one has been submitted to address it.

References  for Above, beyond Schmidt & Kennicutt

  • Krumholz, M. R. & McKee, C. F. 2005, A General Theory of Turbulence-regulated Star Formation, from Spirals to Ultraluminous Infrared Galaxies, ApJ, 630, 250-268
  • Krumholz, M. R. & Thompson, T. A. 2007, The Relationship between Molecular Gas Tracers and Kennicutt-Schmidt Laws, ApJ, 669, 289-298
  • Narayanan, D., Cox, T. J., Shirley, Y., Dave, R., Hernquist, L. & Walker, C. K. 2008, Molecular Star Formation Rate Indicators in Galaxies, ApJ, 684, 996-1008
  • Wu, J., Evans, N. J., II, Gao, Y., Solomon, P. M., Shirley, Y. L. & Vanden Bout, P. A. 2005, Connecting Dense Gas Tracers of Star Formation in our Galaxy to High-z Star Formation, ApJ, 635, L173-L176
  • Wu, J., Evans, N. J., Shirley, Y. L. & Knez, C. 2010, The Properties of Massive, Dense Clumps: Mapping Surveys of HCN and CS, ApJS, 188, 313.

Additional Sample Relevant Recent K-S work:

  • “CARMA Survey Toward Infrared-Bright Nearby Galaxies (STING): Molecular Gas Star Formation Law in NGC 4254,” Rahman et al. 2011 (See Figure 7 for an example of inter-comparison of star formation tracers)
  • “On the relation between the Schmidt and Kennicutt–Schmidt star formation laws and its implications for numerical simulations”, Schaye & Dalla Vecchia 2007. (Different K-S laws can be derived based on assuming different effective equations of state, but authors conclude that this does not give deep physical insight.)

Key (Open) Questions in the Study of the ISM of Other Galaxies, and of the Intergalactic Medium

In Uncategorized on April 14, 2011 at 1:42 pm
  1. How similar is the ISM in nearby galaxies to that of the Milky Way?  In what ways is it different?  (see SAGE Survey paper Journal Club discussion; also see Fukui & Kawamura review).

    Color image of H I 21 cm emission from Deul & van der Hulst (1987) with CO clouds overlaid as green dots. All molecular clouds lie in regions of H I overdensity. The area of the molecular cloud has been scaled to represent the relative masses of the clouds. The coincidence of molecular clouds with H I overdensity is evidence that clouds form out of the atomic gas. From Engargiola, Plambeck, Rosolowsky and Blitz 2003.

  2. How do ISM properties depend on galaxy type (and vice-versa!)?  (e.g. elliptical galaxies are well-known to have virtually no gas–why is that?)

    Spitzer data for elliptical and S0 galaxies. Circles represent observations, triangles upper limits. Filled symbols refer to elliptical galaxies with de Vaucouleurs classification parameter T < –3; open symbols refer to S0 galaxies with T > –3. Red (blue) symbols have optical colors U-V < 1.1 (U-V > 1.1); galaxies with unknown colors are plotted with green symbols. (from Temi, Brighenti & Mathews 2009)

  3. How efficient is the ISM at forming stars in other galaxies?  Does this depend only on surface density (Kennicutt-Schmidt relation) or on more (e.g. galaxy mass, metallicity, size, etc.)? Link to nice PDF of PPT on K-S within galaxies.
  4. Is the IMF really “Universal,” and if so, what does that mean? (and how to measure it?…)
  5. How do ISM properties depend on redshift?  What affects “galaxy evolution” over various redshift ranges? (e.g. metallicity of galaxies increases over time as stars “pollute” the ISM with elements heavier than helium…when does that start?  What have big surveys like SDSS told us about this? What can we learn from “deep” observations like the HDF and HUDF?)

    Cosmic star formation rate (per unit comoving volume, h = 0.6, q0 = 0.5) as a function of redshift (the ‘Madau’ plot, Madau et al 1996). The black symbols (with error bars) denote the star formation history deduced from (non-extinction corrected) UV data (Steidel et al 1999 and references therein). Upward pointing dotted green arrows with open boxes mark where these points move when a reddening correction is applied. The green, four arrow symbol is derived from (non-extinction corrected) Hα NICMOS observations (Yan et al 1999). The red, three arrow symbol denotes the lower limit to dusty star formation obtained from SCUBA observations of HDF (N) (Hughes et al 1998). The continuous line marks the total star formation rate deduced from the COBE background and an ‘inversion’ with a starburst SED (Lagache et al 1999b). The filled hatched blue and yellow boxes denote star formation rates deduced from ISOCAM (CFRS field, Flores et al 1999b) and ISOPHOT-FIRBACK (Puget et al 1999, Dole et al 1999). The light blue, dashed curve is model ‘A’ (no ULIRGs) and the red dotted curve model ‘E’ (with ULIRGs) of Guiderdoni et al (1998). Reproduced from Genzel & Cesarsky, ARA&A, 2000.

  6. What is the nature of the Intergalactic Medium under different conditions?  (e.g. in galaxy clusters, where hot X-ray haloes & cooling flows are important vs. in “empty” voids)
  7. What can be learned from long line-of-sight observations, e.g. of distant quasars? (Overview)
    1. Lyman-alpha forest
    2. metallicity variations with redshift (how long did it take the first stars to pollute the ISM?)
    3. Gunn-Peterson effect (ADS link to paper)

      Redshift distributions of galaxies and C IV absorbers in the field of Q1422+2309. The top panel shows the number of objects observed at each redshift. The bottom panel shows the implied overdensity as a function of redshift after smoothing our raw redshifts by a Gaussian of width z = 0.008. The good correspondence of features in the bottom panel shows that C IV systems are preferentially found within galaxy overdensities. (from Adelberger, Steidel, Shapley & Pettini 2003)

  8. What can be learned from direct observations of neutral hydrogen in the “intergalactic” medium before there were even many galaxies?  (New telescopes may be able to detect neutral hydrogen structures).
    1. Probes of the Epoch of Reionization
    2. “Tomography” of HI in the Early Universe

      The transition from the neutral IGM left after the Universe recombined, at z ≈ 1,100, to the fully ionized IGM observed today is termed cosmic reionization. After recombination, when the CMB radiation was released, hydrogen in the IGM remained neutral until the first stars and galaxies formed, at z ≈ 15–30. These primordial systems released energetic ultraviolet photons capable of ionizing local bubbles of hydrogen gas. As the abundance of these early galaxies increased, the bubbles increasingly overlapped and progressively larger volumes became ionized. This reionization process ended at z ≈ 6–8, ~1 Gyr after the Big Bang. At lower redshifts, the IGM remains highly ionized by radiation provided by star-forming galaxies and the gas accretion onto supermassive black holes that powers quasars. (from Robertson et al., Nature, 2010)

Course Notes

In Uncategorized on April 12, 2011 at 11:26 pm

Stromgren Sphere: An example “chalkboard derivation”

(updated for 2013)


The Stromgren sphere is a simplified analysis of the size of HII regions. Massive O and B stars emit many high-energy photons, which ionize their surroundings and create HII regions. We assume that such a star is embedded in a uniform medium of neutral hydrogen. A sphere of radius r around this star will become ionized; this r is called the “Stromgren radius.” The volume of the ionized region will be such that the rate at which ionized hydrogen recombines equals the rate at which the star emits ionizing photons (i.e., all of the ionizing photons are “used up” re-ionizing hydrogen as it recombines).

The recombination rate density is \alpha n^2, where \alpha is the recombination coefficient (in \mathrm{cm}^3~\mathrm{s}^{-1}) and n=n_e=n_\mathrm{H} is the number density (assuming fully ionized gas and only hydrogen, the electron and proton densities are equal). The total rate of ionizing photons (in photons per second) in the volume of the sphere is N^*. Setting the rates of ionization and recombination equal to one another, we get

\frac43 \pi r^3 \alpha n^2 = N^*, and solving for r,

r = ( \frac {3N^*} {4\pi\alpha n^2})^{\frac13}

Typical values for the above variables are N^* \sim 10^{49}~\mathrm{photons~s}^{-1}, \alpha \sim 3\times 10^{-13}\; \mathrm{cm}^3 \; \mathrm s^{-1} and n \sim 10\; \mathrm {cm}^{-3}, implying Stromgren radii of 10 to 100 pc. See the journal club (2013) article for discussion of Stromgren’s seminal 1939 paper.
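The derivation above can be checked with a few lines of arithmetic. A minimal sketch in cgs units, using the typical values just quoted:

```python
import math

PC_IN_CM = 3.086e18  # one parsec, in centimeters

def stromgren_radius_cm(n_star, alpha, n):
    """Solve (4/3) pi r^3 alpha n^2 = N* for r (all quantities in cgs)."""
    return (3.0 * n_star / (4.0 * math.pi * alpha * n ** 2)) ** (1.0 / 3.0)

# Typical values from the text:
# N* ~ 1e49 photons/s, alpha ~ 3e-13 cm^3/s, n ~ 10 cm^-3
r_pc = stromgren_radius_cm(1e49, 3e-13, 10.0) / PC_IN_CM
print(r_pc)  # ~14 pc, inside the quoted 10-100 pc range
```

Note the weak (2/3-power) dependence on density: a factor of 10 in n moves r by only a factor of ~4.6.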

How do we know there is an ISM?

(updated for 2013)

Early astronomers pointed to 3 lines of evidence for the ISM:

  • Extinction. The ISM obscures the light from background stars. In 1919, Barnard (JC 2011, 2013) called attention to these “dark markings” on the sky, and put forward the (correct) hypothesis that these were the silhouettes of dark clouds. A good rule of thumb for the amount of extinction present is 1 magnitude of extinction per kpc (for typical, mostly unobscured lines-of-sight).
  • Reddening. Even when the ISM doesn’t completely block background starlight, it scatters it. Shorter-wavelength light is preferentially scattered, so stars behind obscuring material appear redder than normal. If a star’s true color is known, its observed color can be used to infer the column density of the ISM between us and the star. Robert Trumpler first used measurements of the apparent “cuspiness” and the brightnesses of star clusters in 1930 to argue for the presence of this effect. Reddening of stars of “known” color is the basis of NICER and related techniques used to map extinction today.
  • Stationary Lines. Spectral observations of binary stars show doppler-shifted lines corresponding to the radial velocity of each star. In addition, some of these spectra exhibit stationary (i.e. not doppler-shifted) absorption lines due to stationary material between us and the binary system. Johannes Hartmann first noticed this in 1904 when investigating the spectrum of \delta Orionis: “The calcium line at \lambda 3934 [angstroms] exhibits a very peculiar behavior. It is distinguished from all the other lines in this spectrum, first by the fact that it always appears extraordinarily weak, but almost perfectly sharp… Closer study on this point now led me to the quite surprising result that the calcium line… does not share in the periodic displacements of the lines caused by the orbital motion of the star.”

Helpful References: Good discussion of the history of extinction and reddening, from Michael Richmond.

A Sense of Scale

(updated for 2013)


How dense (or not) is the ISM?

  • Dense cores: n \sim 10^5 ~{\rm cm}^{-3}
  • Typical ISM: n \sim 1 ~{\rm cm}^{-3}
  • This room: 1 mol / 22.4L \sim 3 \times 10^{19}~ {\rm cm}^{-3}
  • XHV (eXtremely High Vacuum) — best human-made vacuum: n \sim 3 \times 10^{4}~ {\rm cm}^{-3}
  • Density of stars in the Milky Way: 2.8~{\rm stars/pc}^3 \approx 0.125~M_\odot/{\rm pc}^3 = 8.5 \times 10^{-24} ~{\rm g / cm}^3 \sim 5~{\rm cm}^{-3}

In other words, most of the ISM is at a density far below the densities and pressures we can reproduce in the lab. Thus, the details of most of the microphysics in the ISM are still poorly understood. We also see that the density of stars in the Galaxy is quite small – only a few times the average particle density of the ISM.
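The stellar-density conversion in the last bullet can be verified directly (cgs constants as commonly tabulated):

```python
MSUN_G = 1.989e33   # solar mass, g
PC_CM = 3.086e18    # parsec, cm
M_H = 1.67e-24      # hydrogen atom mass, g

# 0.125 Msun/pc^3 (the bullet above) expressed in cgs...
rho_stars = 0.125 * MSUN_G / PC_CM ** 3
# ...and as an equivalent number density of hydrogen atoms
n_equiv = rho_stars / M_H
print(rho_stars, n_equiv)  # ~8.5e-24 g/cm^3, ~5 cm^-3
```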

See also the interstellar cloud properties table and conversions between angular and linear scale.

Density of the Milky Way’s ISM

(updated for 2013)

How do we know that n \sim 1 ~{\rm cm}^{-3} in the ISM? From the rotation curve of the Milky Way (and some assumptions about the mass ratio of gas to gas+stars+dark matter), we can infer

M_{\rm gas} = 6.7 \times 10^{9} M_\odot

Maps of HI and CO reveal the extent of our galaxy to be

D = 40 kpc

h = 140 pc (scale height of HI)

This implies an approximate volume of

V = \pi D^2 h / 4 = 5 \times 10^{66} ~{\rm cm}^{3}

which yields a density of

\rho = 2.5 \times 10^{-24} ~{\rm g cm}^{-3}
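The chain of steps above fits in a few lines (cgs units; constants as commonly tabulated):

```python
import math

MSUN_G = 1.989e33   # solar mass, g
PC_CM = 3.086e18    # parsec, cm
M_H = 1.67e-24      # hydrogen atom mass, g

m_gas = 6.7e9 * MSUN_G              # total gas mass
d = 40.0e3 * PC_CM                  # disk diameter (40 kpc)
h = 140.0 * PC_CM                   # HI scale height
volume = math.pi * d ** 2 * h / 4.0 # cylindrical disk volume, cm^3

rho = m_gas / volume                # mass density, g/cm^3
n = rho / M_H                       # number density, cm^-3
print(volume, rho, n)               # ~5e66 cm^3, ~2.5e-24 g/cm^3, n ~ 1
```

Dividing by the hydrogen mass recovers the n ~ 1 cm^{-3} quoted at the top of this subsection.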

Density of the Intergalactic Medium

(updated for 2013)

From cosmology observations, we know the universe to be very nearly flat (\Omega = 1). This implies that the mean density of the universe is \rho = \rho_{\rm crit} = \frac{3 H_0^2}{8 \pi G} = 7 \times 10^{-30} ~{\rm g~ cm}^{-3} \Rightarrow n<4.3 \times 10^{-6}~{\rm cm}^{-3}.

This places an upper limit on the density of the Intergalactic Medium.
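The critical density is easy to recompute; note that it scales as H_0^2, so the quoted value depends on the assumed Hubble constant (the choice of H0 values below is ours, for illustration):

```python
import math

G = 6.674e-8        # gravitational constant, cgs
KM_CM = 1.0e5       # cm per km
MPC_CM = 3.086e24   # cm per Mpc

def rho_crit(h0_km_s_mpc):
    """Critical density rho_crit = 3 H0^2 / (8 pi G), in g/cm^3."""
    h0 = h0_km_s_mpc * KM_CM / MPC_CM   # H0 converted to s^-1
    return 3.0 * h0 ** 2 / (8.0 * math.pi * G)

# The ~7e-30 g/cm^3 quoted above corresponds to H0 near 60 km/s/Mpc;
# H0 = 70 km/s/Mpc gives ~9e-30 g/cm^3 instead.
print(rho_crit(60.0), rho_crit(70.0))
```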

Composition of the ISM

(updated for 2013)

  • Gas: by mass, gas is 60% Hydrogen, 30% Helium. By number, gas is 88% H, 10% He, and 2% heavier elements
  • Dust: The term “dust” applies roughly to any molecule too big to name. The size distribution is biased towards small (0.2~\mu {\rm m}) particles, with an approximate distribution N(a) \propto a^{-3.5}. The density of dust in the galaxy is \rho_{\rm dust} \sim .002 M_\odot ~{\rm pc}^{-3} \sim 0.1 \rho_{\rm gas}
  • Cosmic Rays: Charged, high-energy (anti)protons, nuclei, electrons, and positrons. Cosmic rays have an energy density of 0.5 ~{\rm eV ~ cm}^{-3}. The equivalent mass density (using E = mc^2) is 9 \times 10^{-34}~{\rm g cm}^{-3}
  • Magnetic Fields: Typical field strengths in the MW are 1 \mu G \sim 0.2 ~{\rm eV~cm}^{-3}. This is strong enough to confine cosmic rays.

Bruce Draine’s List of constituents in the ISM:

(updated for 2013)

  1. Gas
  2. Dust
  3. Cosmic Rays*
  4. Photons**
  5. B-Field
  6. Gravitational Field
  7. Dark Matter

*cosmic rays are highly relativistic, super-energetic ions and electrons

**photons include:

  • The Cosmic Microwave Background (2.7 K)
  • starlight from stellar photospheres (UV, optical, NIR,…)
  • h\nu from transitions in atoms, ions, and molecules
  • “thermal emission” from dust (heated by starlight, AGN)
  • free-free emission (bremsstrahlung) in plasma
  • synchrotron radiation from relativistic electrons
  • \gamma-rays from nuclear transitions

His list of “phases” from Table 1.3:

  1. Coronal gas (Hot Ionized Medium, or “HIM”): T> 10^{5.5}~{\rm K}. Shock-heated from supernovae. Fills half the volume of the galaxy, and cools in about 1 Myr.
  2. HII gas: Ionized mostly by O and early B stars. Called an “HII region” when confined by a molecular cloud, otherwise called “diffuse HII”.
  3. Warm HI (Warm Neutral Medium, or “WNM”): atomic, T \sim 10^{3.7}~{\rm K}. n\sim 0.6 ~{\rm cm}^{-3}. Heated by starlight, photoelectric effect, and cosmic rays. Fills ~40% of the volume.
  4. Cool HI (Cold Neutral Medium, or “CNM”). T \sim 100~{\rm K}, n \sim 30 ~{\rm cm}^{-3}. Fills ~1% of the volume.
  5. Diffuse molecular gas. Where HI self-shields from UV radiation to allow H_2 formation on the surfaces of dust grains in cloud interiors. This occurs at 10~{\rm to}~50~{\rm cm}^{-3}.
  6. Dense Molecular gas. “Bound” according to Draine (though maybe not). n \gtrsim 10^3 ~{\rm cm}^{-3}. Sites of star formation.  See also Bok Globules (JC 2013).
  7. Stellar Outflows. T=50-1000 {\rm K}, n \sim 1-10^6 ~{\rm cm}^{-3}. Winds from cool stars.

These phases are fluid and dynamic, and change on a variety of time and spatial scales. Examples include growth of an HII region, evaporation of molecular clouds, the interface between the ISM and IGM, cooling of supernova remnants, mixing, recombination, etc.

Topology of the ISM

(updated for 2013)


A grab-bag of properties of the Milky Way

  • HII scale height: 1 kpc
  • CO scale height: 50-75 pc
  • HI scale height: 130-400 pc
  • Stellar scale height: 100 pc in spiral arm, 500 pc in disk
  • Stellar mass: 5 \times 10^{10} M_\odot
  • Dark matter mass: 5 \times 10^{10} M_\odot
  • HI mass: 2.9 \times 10^9 M_\odot
  • H2 mass (inferred from CO): 0.84 \times 10^9 M_\odot
  • HII mass: 1.12 \times 10^9~M_\odot
  • -> total gas mass = 6.7 \times 10^9~M_\odot (including He).
  • Total MW mass within 15 kpc: 10^{11} M_\odot (using the Galaxy’s rotation curve). About 50% dark matter.

So the ISM is a relatively small constituent of the Galaxy (by mass).

The Sound Speed

(updated for 2013)

The speed of sound is the speed at which pressure disturbances travel in a medium. It is defined as

c_s \equiv \sqrt{\frac{\partial P}{\partial \rho}} ,

where P and \rho are pressure and mass density, respectively. For a polytropic gas, i.e. one defined by the equation of state P \propto \rho^\gamma, this becomes c_s=\sqrt{\gamma P/\rho}. \gamma is the adiabatic index (ratio of specific heats), and \gamma=5/3 describes a monatomic gas.

For an isothermal gas where the ideal gas equation of state P=\rho k_B T / (\mu m_{\rm H}) holds, c_s=\sqrt{k_B T/(\mu m_{\rm H})}. Here, \mu is the mean molecular weight (a factor that accounts for the chemical composition of the gas), and m_{\rm H} is the hydrogen atomic mass. Note that for pure molecular hydrogen \mu=2. For molecular gas with ~10% He by mass and trace metals, \mu \approx 2.3 is often used.

A gas can be approximated as isothermal if the sound wave period is much longer than the (radiative) cooling time of the gas, as any increase in temperature due to compression by the wave will be immediately followed by radiative cooling to the original equilibrium temperature, well before the next compression occurs. Many astrophysical situations in the ISM are close to isothermal, so the isothermal sound speed is often used. For example, in conditions where temperature and density are decoupled, such as H II regions (where the gas temperature is set by the ionizing star’s spectrum), the gas is very close to isothermal.
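Plugging in numbers for the isothermal formula (cgs constants; the warm-gas temperature and \mu below are illustrative values we have chosen, not from the text):

```python
import math

K_B = 1.381e-16   # Boltzmann constant, erg/K
M_H = 1.67e-24    # hydrogen atom mass, g

def isothermal_cs_kms(t_k, mu):
    """Isothermal sound speed c_s = sqrt(k_B T / (mu m_H)), in km/s."""
    return math.sqrt(K_B * t_k / (mu * M_H)) / 1.0e5

# Cold molecular gas: T ~ 10 K, mu ~ 2.3 (H2 plus He) -> ~0.2 km/s
cs_cold = isothermal_cs_kms(10.0, 2.3)
# Warm atomic gas (illustrative): T ~ 5000 K, mu ~ 1.3 -> ~6 km/s
cs_warm = isothermal_cs_kms(5000.0, 1.3)
print(cs_cold, cs_warm)
```

The ~0.2 km/s value for cold molecular gas is the reference point against which supersonic turbulent linewidths in molecular clouds are usually compared.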

Hydrogen “Slang”

(updated for 2013)

Lyman limit: the minimum energy needed to remove an electron from a Hydrogen atom. A “Lyman limit photon” is a photon with at least this energy.

E = 13.6~{\rm eV} = 1~{\rm `Rydberg'} = hcR_{\rm H} ,

where R_{\rm H}=1.097 \times 10^{7} {\rm m}^{-1} is the Rydberg constant, which has units of 1/\lambda. This energy corresponds to the Lyman limit wavelength as follows:

E = h\nu = hc/\lambda \Rightarrow \lambda=912~{\rm \AA} .

Lyman series: transitions to and from the n=1 energy level of the Bohr atom. The first line in this series was discovered in 1906 using UV studies of electrically excited hydrogen gas.

Balmer series: transitions to and from the n=2 energy level. Discovered in 1885; since these are optical transitions, they were more easily observed than the UV Lyman series transitions.

There are also other named series corresponding to higher n. Examples include Paschen (n=3), Brackett (n=4), and Pfund (n=5). The lowest energy (longest wavelength) transition of a series is designated \alpha, the next lowest energy is \beta, and so on. For example, the transition from n=2 to 1 is Lyman alpha, or {\rm Ly}\alpha, while the transition from n=7 to 4 is Brackett gamma, or {\rm Br}\gamma. The wavelength of a given transition can be computed via the Rydberg equation:

\frac{1}{\lambda}=R_{\rm H} \big|\frac{1}{n_f^2}-\frac{1}{n_i^2}\big| ,

where n_i and n_f are the initial and final energy levels of the electron, respectively. See this handout for a pictorial representation of the low n transitions in hydrogen. Note that the Lyman (or Balmer, Paschen, etc.) limit can be computed by inserting n_i=\infty in the above equation.
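The Rydberg equation above reproduces the named lines and limits directly (passing n_i = infinity gives the series limit):

```python
R_H = 1.097e7  # Rydberg constant, m^-1

def wavelength_angstrom(n_i, n_f):
    """Rydberg equation: 1/lambda = R_H |1/n_f^2 - 1/n_i^2|."""
    inv_lambda_m = R_H * abs(1.0 / n_f ** 2 - 1.0 / n_i ** 2)
    return 1.0e10 / inv_lambda_m  # convert meters to Angstroms

print(wavelength_angstrom(2, 1))             # Lyman alpha: ~1216 A
print(wavelength_angstrom(3, 2))             # H alpha:     ~6563 A
print(wavelength_angstrom(float("inf"), 1))  # Lyman limit: ~912 A
```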

The Lyman continuum corresponds to the region of the spectrum near the Lyman limit, where the spacing between energy levels becomes comparable to spectral line widths and so individual lines are no longer distinguishable. Such continua exist for each series of lines.

Chemistry

(updated for 2013)


See Draine Table 1.4 for elemental abundances for the Sun (and thus presumably for the ISM near the Sun).

By number: {\rm H:He:C} = 1:0.1:3 \times 10^{-4} ;

by mass: {\rm H:He:C} = 1:0.4:3.5 \times 10^{-3} .

However, these ratios vary with position in the galaxy, especially for heavier elements (which depend on stellar processing). For example, the abundance of heavy elements (Z ≥ 6, i.e. carbon and heavier) at the sun’s position is about half that in the Galactic center. Even though metals account for only 1% of the mass, they dominate most of the important chemistry, ionization, and heating/cooling processes. They are essential for star formation, as they allow molecular clouds to cool and collapse.

Dissociating molecules takes less energy than ionizing atoms, in general. For example:

E_{I,{\rm H}}=13.6~{\rm eV}

E_{D,{\rm H}_2}=4.52~{\rm eV} \Rightarrow \lambda=2743~{\rm \AA} (UV transition)

E_{D,{\rm CO}}=11.2~{\rm eV},

where E_I and E_D are the ionization and dissociation energies, respectively. We can see that it is much easier to dissociate molecular hydrogen than to ionize atomic hydrogen; in other words, atomic H will survive a harsher radiation field than molecular H. The above numbers thus set the structure of molecular clouds in the interstellar radiation field: a large amount of molecular gas must gather together in order to survive, via the process of self-shielding, in which a thick enough column of gas exists such that at some distance below the surface of the cloud all of the energetic photons have already been absorbed. Note that the high dissociation energy of CO is a result of the triple bond between the carbon and oxygen atoms. CO is a very important coolant in molecular clouds.
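The energy-to-wavelength conversions quoted above (e.g. 4.52 eV ⇒ 2743 Å) all follow from \lambda = hc/E, with hc ≈ 12398 eV·Å:

```python
HC_EV_ANGSTROM = 12398.4  # h*c expressed in eV * Angstrom

def photon_wavelength_angstrom(e_ev):
    """Wavelength (Angstroms) of a photon of energy e_ev: lambda = hc/E."""
    return HC_EV_ANGSTROM / e_ev

print(photon_wavelength_angstrom(4.52))   # H2 dissociation: ~2743 A
print(photon_wavelength_angstrom(13.6))   # H ionization:    ~912 A
print(photon_wavelength_angstrom(11.2))   # CO dissociation: ~1107 A
```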

Measuring States in the ISM

(updated for 2013)


There are two primary observational diagnostics of the thermal, chemical, and ionization states in the ISM:

  1. Spectral Energy Distribution (SED; broadband low-resolution)
  2. Spectrum (narrowband, high-resolution)

SEDs

Very generally, if a source’s SED is blackbody-like, one can fit a Planck function to the SED and derive the temperature and column density (if one can assume LTE). If an SED is not blackbody-like, the emission is the sum of various processes, including:

  • thermal emission (e.g. dust, CMB)
  • synchrotron emission (power law spectrum)
  • free-free emission (thermal for a thermal electron distribution)

Spectra

Quantum mechanics combined with chemistry can predict line strengths. Ratios of lines can be used to model “excitation”, i.e. what physical conditions (density, temperature, radiation field, ionization fraction, etc.) lead to the observed distribution of line strengths. Excitation is controlled by

  • collisions between particles (LTE often assumed, but not always true)
  • photons from the interstellar radiation field, nearby stars, shocks, CMB, chemistry, cosmic rays
  • recombination/ionization/dissociation

Which of these processes matter where? In class (2011), we drew the following schematic.

A schematic of several structures in the ISM

Key

A: Dense molecular cloud with stars forming within

  • T=10-50~{\rm K};~n>10^3~{\rm cm}^{-3} (measured, e.g., from line ratios)
  • gas is mostly molecular (low T, high n, self-shielding from UV photons, few shocks)
  • not much photoionization due to high extinction (but could be complicated ionization structure due to patchy extinction)
  • cosmic rays can penetrate, leading to fractional ionization: X_I=n_i/(n_H+n_i) \approx n_i/n_H \propto n_H^{-1/2}, where n_i is the ion density (see Draine 16.5 for details). Measured values for X_e (the electron-to-neutral ratio, which is presumed equal to the ionization fraction) are about X_e \sim 10^{-6}~{\rm to}~10^{-7}.
  • possible shocks due to impinging HII region – could raise T, n, ionization, and change chemistry globally
  • shocks due to embedded young stars w/ outflows and winds -> local changes in T, n, ionization, chemistry
  • time evolution? feedback from stars formed within?

B: Cluster of OB stars (an HII region ionized by their integrated radiation)

  • 7000 < T < 10,000 K (from line ratios)
  • gas primarily ionized due to photons beyond Lyman limit (E > 13.6 eV) produced by O stars
  • elements other than H have different ionization energy, so will ionize more or less easily
  • HII regions are often clumpy; this is observed as a deficit in the average value of n_e inferred from continuum radiation over the entire region as compared to the value of n_e derived from line ratios. In other words, certain regions are denser (in ionized gas) than others.
  • The above introduces the idea of a filling factor, defined as the ratio of filled volume to total volume (in this case the filled volume is that of ionized gas)
  • dust is present in HII regions (as evidenced by observations of scattered light), though the smaller grains may be destroyed
  • significant radio emission: free-free (bremsstrahlung), synchrotron, and recombination line (e.g. H76a)
  • chemistry is highly dependent on n, T, flux, and time

C: Supernova remnant

  • gas can be ionized in shocks by collisions (high velocities required to produce high energy collisions, high T)
  • e.g. if v > 1000 km/s, T > 10^6 K
  • atom-electron collisions will ionize H, He; produce x-rays; produce highly ionized heavy elements
  • gas can also be excited (e.g. vibrational H2 emission) and dissociated by shocks

D: General diffuse ISM

  • UV radiation from the interstellar radiation field produces ionization
  • ne best measured from pulsar dispersion measure (DM), an observable. {\rm DM} \propto \int n_e dl
  • role of magnetic fields depends critically on X_I (B-fields do not directly affect neutrals, though their effects can be felt through ion-neutral collisions)

Energy Density Comparison

(updated for 2013)


See Draine table 1.5. The primary sources of energy present in the ISM are:

      1. The CMB (T_{\rm CMB}=2.725~{\rm K})
      2. Thermal IR from dust
      3. Starlight (h\nu < 13.6~{\rm eV})
      4. Thermal kinetic energy (3/2 nkT)
      5. Turbulent kinetic energy (1/2 \rho \sigma_v^2)
      6. Magnetic fields (B^2 / 8 \pi )
      7. Cosmic rays

All of these terms have energy densities within an order of magnitude of 1 ~{\rm eV ~ cm}^{-3}. With the exception of the CMB, this is not a coincidence: because of the dynamic nature of the ISM, these processes are coupled together and thus exchange energy with one another.
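As a rough check of this near-equipartition, we can evaluate a few of these terms for representative field values. The numbers chosen here (B = 5 μG, a warm medium with n = 1 cm^-3 and T = 8000 K, and a 10 km/s turbulent dispersion) are illustrative choices, not values from Draine’s table:

```python
import math

K_B = 1.381e-16         # Boltzmann constant, erg/K
A_RAD = 7.566e-15       # radiation constant, erg cm^-3 K^-4
M_H = 1.673e-24         # hydrogen mass, g
ERG_PER_EV = 1.602e-12

def to_ev_per_cm3(u_erg):
    return u_erg / ERG_PER_EV

u_cmb = A_RAD * 2.725**4                 # CMB: a T^4
u_thermal = 1.5 * 1.0 * K_B * 8000.0     # (3/2) n k T for warm gas
u_magnetic = (5e-6)**2 / (8 * math.pi)   # B^2 / (8 pi) for B = 5 uG
u_turb = 0.5 * (1.0 * M_H) * (1e6)**2    # (1/2) rho sigma_v^2, sigma_v = 10 km/s

for name, u in [("CMB", u_cmb), ("thermal", u_thermal),
                ("magnetic", u_magnetic), ("turbulent", u_turb)]:
    print(f"{name:9s}: {to_ev_per_cm3(u):.2f} eV/cm^3")
```

All four come out between ~0.2 and ~1 eV/cm^3, as claimed.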

Relevant Velocities in the ISM

(updated for 2013)


Note: it’s handy to remember that 1 km/s ~ 1 pc / Myr.

  • Galactic rotation: 18 km/s/kpc (e.g. 180 km/s at 10 kpc)
  • Isothermal sound speed: c_s =\sqrt{\frac{kT}{\mu}}
    • For H, this speed is 0.3, 1, and 3 km/s at 10 K, 100 K, and 1000 K, respectively.
  • Alfvén speed: The speed at which magnetic fluctuations propagate, v_A = B / \sqrt{4 \pi \rho}. Alfvén waves are transverse waves propagating along the direction of the magnetic field.
    • Note that v_A = {\rm const} if B \propto \rho^{1/2}, which is observed to be true over a large portion of the ISM.
    • Interstellar B-fields can be measured using the Zeeman effect. Observed values range from 5~\mu {\rm G} in the diffuse ISM to 1 mG in dense clouds. For specific conditions:
      • B = 1~\mu{\rm G}, n = 1 ~{\rm cm}^{-3} \Rightarrow v_A = 2~{\rm km~s}^{-1}
      • B = 30~\mu {\rm G}, n = 10^4~{\rm cm}^{-3} \Rightarrow v_A = 0.4~{\rm km~s}^{-1}
      • B = 1~{\rm mG}, n = 10^7 {\rm cm}^{-3} \Rightarrow v_A = 0.5~{\rm km~s}^{-1}
    • Compare to the isothermal sound speed, which is 0.3 km/s in dense gas at 20 K.
      • c_s \approx v_A in dense gas
      • c_s < v_A in diffuse gas
  • Observed velocity dispersion in molecular gas is typically about 1 km/s, and is thus supersonic. This is a signature of the presence of turbulence. (see the summary of Larson’s seminal 1981 paper)
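A minimal sketch evaluating both characteristic speeds for conditions like those quoted above:

```python
import math

K_B = 1.381e-16  # erg/K
M_H = 1.673e-24  # g

def sound_speed(T, mu=M_H):
    """Isothermal sound speed c_s = sqrt(k T / mu), cm/s; mu is the mean particle mass."""
    return math.sqrt(K_B * T / mu)

def alfven_speed(B, n, m=M_H):
    """Alfven speed v_A = B / sqrt(4 pi rho), cm/s, with rho = n * m."""
    return B / math.sqrt(4 * math.pi * n * m)

print(f"c_s(100 K, H)        = {sound_speed(100.0) / 1e5:.1f} km/s")       # ~1 km/s
print(f"v_A(1 uG, n=1 cm^-3) = {alfven_speed(1e-6, 1.0) / 1e5:.1f} km/s")  # ~2 km/s
```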

Introductory remarks on Radiative Processes and Equilibrium

(updated for 2013)


The goal of the next several sections is to build an understanding of how photons are produced by, are absorbed by, and interact with the ISM. We consider a system in which one or more constituents are excited under certain physical conditions to produce photons, then the photons pass through other constituents under other conditions, before finally being observed (and thus affected by the limitations and biases of the observational conditions and instruments) on Earth. Local thermodynamic equilibrium is often used to describe the conditions, but this does not always hold. Remember that our overall goal is to turn observations of the ISM into physics, and vice-versa.

The following contribute to an observed Spectral Energy Distribution:

      • gas: spontaneous emission, stimulated emission (e.g. masers), absorption, scattering processes involving photons + electrons or bound atoms/molecules
      • dust: absorption; scattering (the sum of these two -> extinction); emission (blackbody modified by wavelength-dependent emissivity)
      • other: synchrotron, brehmsstrahlung, etc.

The processes taking place in our “system” depend sensitively on the specific conditions of the ISM in question, but the following “rules of thumb” are worth remembering:

      1. Very rarely is a system actually in a true equilibrium state.
      2. Except in HII regions, transitions in the ISM are usually not electronic.
      3. The terms Upper Level and Lower Level refer to any two quantum mechanical states of an atom or molecule where E_{\rm upper}>E_{\rm lower}. We will use k to index the upper state, and j for the lower state.
      4. Transitions can be induced by photons, cosmic rays, collisions with atoms and molecules, and interactions with free electrons.
      5. Levels can refer to electronic, rotational, vibrational, spin, and magnetic states.
      6. To understand radiative processes in the ISM, we will generally need to know the chemical composition, ambient radiation field, and velocity distribution of each ISM component. We will almost always have to make simplifying assumptions about these conditions.

Thermodynamic Equilibrium

(updated for 2013)


Collisions and radiation generally compete to establish the relative populations of different energy states. Randomized collisional processes push the distribution of energy states to the Boltzmann distribution, n_j \propto e^{-E_j / kT}. When collisions dominate over competing processes and establish the Boltzmann distribution, we say the ISM is in Thermodynamic Equilibrium.

Often this only holds locally, hence the term Local Thermodynamic Equilibrium or LTE. For example, the fact that we can observe stars implies that energy (via photons) is escaping the system. While this cannot be considered a state of global thermodynamic equilibrium, localized regions in stellar interiors are in near-equilibrium with their surroundings.

But the ISM is not like stars. In stars, most emission, absorption, scattering, and collision processes occur on timescales very short compared with dynamical or evolutionary timescales. Due to the low density of the ISM, interactions are much rarer, which makes it difficult to establish equilibrium. Furthermore, many additional processes disrupt equilibrium (such as energy input from hot stars, cosmic rays, the X-ray background, and shocks).

As a consequence, in the ISM the level populations in atoms and molecules are not always in their equilibrium distribution. Because of the low density, most photons are created from (rare) collisional processes (except in locations like HII regions where ionization and recombination become dominant).

Spitzer Notation

(updated for 2013)

We will use the notation from Spitzer (1978). See also Draine, Ch. 3. We represent the density of a state j as

n_j(X^{(r)}), where

      • n: particle density
      • j: quantum state
      • X: element
      • (r): ionization state
      • For example, HI = H^{(0)}

In his book, Spitzer defines something called “Equivalent Thermodynamic Equilibrium” or “ETE”. In ETE, n_j^* gives the “equivalent” density in state j. The true (observed) value is n_j. He then defines the ratio of the true density to the ETE density to be

b_j = n_j / n_j^*.

This quantity approaches 1 when collisions dominate over ionization and recombination. For LTE, b_j = 1 for all levels. The level population is then given by the Boltzmann equation:

\frac{n_j^\star(X^{(r)})}{n_k^\star(X^{(r)})} = (\frac{g_{rj}}{g_{rk}})~e^{ -(E_{rj} - E_{rk}) / kT },

where E_{rj} and g_{rj} are the energy and statistical weight (degeneracy) of level j, ionization state r. The exponential term is called the “Boltzmann factor” and determines the relative probability for a state.

The term “Maxwellian” describes the velocity distribution of particles in a 3-D gas; the “Maxwell-Boltzmann” distribution is the special case of the Boltzmann distribution applied to the kinetic (translational) energies of particles.

Using our definition of b and dropping the “r” designation,

\frac{n_k}{n_j} = \frac{b_k}{b_j} (\frac{g_k}{g_j})~e^{-h \nu_{jk} / kT }

Where \nu_{jk} is the frequency of the radiative transition from k to j. We will use the convention that E_k > E_j, such that E_{jk}=h\nu_{jk} > 0.

To find the fraction of atoms of species X^{(r)} excited to level j, define:

\sum_k n_k^\star (X^{(r)}) = n^\star(X^{(r)})

as the particle density of X^{(r)} in all states. Then

\frac{ n_j^* (X^{(r)}) } { n^* (X^{(r)})} = \frac{ g_{rj} e^{-E_{rj} / kT} } {\sum_k g_{rk} e^{ -E_{rk} / kT} }

Define f_r, the “partition function” for species X^{(r)}, to be the denominator of the RHS of the above equation. Then we can write, more simply:

\frac{n_j^\star}{n^\star} = \frac{g_{rj}}{f_r} e^{-E_{rj}/kT}

to be the fraction of particles that are in state j. By computing this for all j we now know the distribution of level populations for ETE.
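The ETE level populations are straightforward to compute once the energies and degeneracies are known. A sketch with a made-up three-level ladder (the g and E values are purely illustrative, not a real species):

```python
import math

K_B_EV = 8.617e-5  # Boltzmann constant, eV/K

def level_fractions(levels, T):
    """Boltzmann fractions n_j/n for a list of (degeneracy g, energy E in eV).
    The denominator is the partition function f = sum_k g_k exp(-E_k/kT)."""
    weights = [g * math.exp(-E / (K_B_EV * T)) for g, E in levels]
    f = sum(weights)
    return [w / f for w in weights]

# A made-up three-level ladder (g and E purely illustrative)
levels = [(1, 0.0), (3, 0.01), (5, 0.05)]
for T in (10.0, 100.0, 1000.0):
    print(T, [f"{x:.3f}" for x in level_fractions(levels, T)])
```

At low T everything sits in the ground state; as kT approaches the level spacings, the upper levels fill in proportion to their Boltzmann factors and degeneracies.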

The Saha Equation

(updated for 2013)


How do we deal with the distribution over different states of ionization r? In thermodynamic equilibrium, the Saha equation gives:

\frac{ n^\star(X^{(r+1)}) n_e } { n^\star (X^{(r)}) } = \frac{ f_{r+1} f_e}{f_r},

where f_r and f_{r+1} are the partition functions as discussed in the previous section. The partition function for electrons is given by

f_e = 2\big( \frac{2 \pi m_e k T} {h^2} \big) ^{3/2} = 4.829 \times 10^{15} (\frac{T}{K})^{3/2}

For a derivation of this, see pages 103-104 of this handout from Bowers and Deeming.

If f_r and f_{r+1} are approximated by the first terms in their sums (i.e. if the ground state dominates their level populations), then

\frac{ n^\star ( X^{ (r+1) } ) n_e } {n^\star ( X^{ (r) } ) } = 2 \big(\frac{ g_{r+1,1} }{g_{ r,1}}\big) \big( \frac{ 2 \pi m_e k T} {h^2} \big)^{3/2} e^{-\Phi_r / kT},

where \Phi_r=E_{r+1,1}-E_{r,1} is the energy required to ionize X^{(r)} from the ground (j = 1)  level. Ultimately, this is just a function of n_e and T. This assumes that the only relevant ionization process is via thermal collision (i.e. shocks, strong ionizing sources, etc. are ignored).
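For pure hydrogen (where the statistical-weight prefactor is 2 g_{r+1,1}/g_{r,1} = 2 \cdot 1/2 = 1), the Saha equation can be solved for the ionization fraction x = n_p/n_H with n_e = n_p. A sketch:

```python
import math

K_B = 1.381e-16           # erg/K
H_PLANCK = 6.626e-27      # erg s
M_E = 9.109e-28           # electron mass, g
CHI_H = 13.6 * 1.602e-12  # H ionization energy, erg

def saha_ratio(T):
    """n_p n_e / n_HI in thermal equilibrium (cm^-3); for H the factor
    2 g_{r+1,1}/g_{r,1} = 2 * (1/2) = 1."""
    return (2 * math.pi * M_E * K_B * T / H_PLANCK**2) ** 1.5 * math.exp(-CHI_H / (K_B * T))

def ionization_fraction(n_H, T):
    """Solve x^2/(1-x) = S/n_H for x = n_p/n_H (pure H, n_e = n_p)."""
    s = saha_ratio(T) / n_H
    return (-s + math.sqrt(s * s + 4 * s)) / 2  # positive root of x^2 + s x - s = 0

for T in (3000.0, 5000.0, 10000.0):
    print(f"T = {T:6.0f} K: x = {ionization_fraction(1.0, T):.3f}")
```

Note that at ISM-like densities (n = 1 cm^-3), hydrogen flips from mostly neutral to essentially fully ionized between a few thousand and ten thousand K in strict thermodynamic equilibrium — one reason the Saha equation must be applied to the ISM with caution.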

Important Properties of Local Thermodynamic Equilibrium

(updated for 2013)

For actual local thermodynamic equilbrium (not ETE), the following are important to keep in mind:

      • Detailed balance: transition rate from j to k = rate from k to j (i.e. no net change in particle distribution)
      • LTE is equivalent to ETE when b_j = 1 or \frac{b_j}{b_k} = 1
      • LTE is only an approximation, good under specific conditions.
      • The radiation field produced is not the blackbody radiation required for true thermodynamic equilibrium.
      • Radiation is usually much weaker than the Planck function, which means upper levels are underpopulated relative to LTE.
      • The LTE assumption does not automatically make the Saha equation applicable, since radiative processes (not collisions) dominate the ionization balance in many ISM cases where LTE otherwise holds.

Definitions of Temperature

(updated for 2013)


The term “temperature” describes several different quantities in the ISM, and in observational astronomy. Only under idealized conditions (i.e. thermodynamic equilibrium, the Rayleigh Jeans regime, etc.) are (some of) these temperatures equivalent. For example, in stellar interiors, where the plasma is very well-coupled, a single “temperature” defines each of the following: the velocity distribution, the ionization distribution, the spectrum, and the level populations. In the ISM each of these can be characterized by a different “temperature!”

Brightness Temperature

T_B = the temperature of a blackbody that reproduces a given specific intensity at a specific frequency, such that

B_\nu(T_B) = \frac{2 h \nu^3}{c^2} \frac{1}{{\rm exp}(h \nu / kT_B) - 1}

Note: units for B_{\nu} are {\rm erg~cm^{-2}~s^{-1}~Hz^{-1}~ster^{-1}}.

This is a fundamental concept in radio astronomy. Note that the above definition assumes that the index of refraction in the medium is exactly 1.

Effective Temperature

T_{\rm eff} (also called T_{\rm rad}, the radiation temperature) is defined by

\int_\nu B_\nu d\nu = \sigma T_{{\rm eff}}^4 ,

which is the integrated intensity of a blackbody of temperature T_{\rm eff}. \sigma = (2 \pi^5 k^4)/(15 c^2 h^3)=5.669 \times 10^{-5} {\rm erg~cm^{-2}~s^{-1}~K^{-4}} is the Stefan-Boltzmann constant.

Color Temperature

T_c is defined by the slope (in log-log space) of an SED. Thus T_c is the temperature of a blackbody that has the same ratio of fluxes at two wavelengths as a given measurement. Note that T_c = T_b = T_{\rm eff} for a perfect blackbody.

Kinetic Temperature

T_k is the temperature that a particle of gas would have if its Maxwell-Boltzmann velocity distribution reproduced the width of a given line profile. It characterizes the random velocity of particles. For a purely thermal gas, the line profile is given by

I(\nu) = I_0~e^{-(\nu-\nu_{jk})^2/2\sigma_\nu^2},

where \sigma_{\nu}=\frac{\nu_{jk}}{c}\sqrt{\frac{kT_k}{\mu}} in frequency units, or

\sigma_v=\sqrt{\frac{kT_k}{\mu}} in velocity units.

In the “hot” ISM T_k is characteristic, but when \Delta v_{\rm non-thermal} > \Delta v_{\rm thermal} (where \Delta v are the Doppler full widths at half-maxima [FWHM]) then T_k does not represent the random velocity distribution. Examples include regions dominated by turbulence.

T_k can be different for neutrals, ions, and electrons because each can have a different Maxwellian distribution. For electrons, T_k = T_e, the electron temperature.

Ionization Temperature

T_I is the temperature which, when plugged into the Saha equation, gives the observed ratio of ionization states.

Excitation Temperature

T_{\rm ex} is the temperature which, when plugged into the Boltzmann distribution, gives the observed ratio of two energy states. Thus it is defined by

\frac{n_k}{n_j}=\frac{g_k}{g_j}~e^{-h\nu_{jk}/kT_{\rm ex}}.

Note that in stellar interiors, T_k = T_I = T_{\rm ex} = T_c. In this room, T_k = T_I = T_{\rm ex} \sim 300K, but T_c \sim 6000K.

Spin Temperature

T_s is a special case of T_{\rm ex} for spin-flip transitions. We’ll return to this when we discuss the important 21-cm line of neutral hydrogen.

Bolometric temperature

T_{\rm bol} is the temperature of a blackbody having the same mean frequency as the observed continuum spectrum. For a blackbody, T_{\rm bol} = T_{\rm eff}. This is a useful quantity for young stellar objects (YSOs), which are often heavily obscured in the optical and have infrared excesses due to the presence of a circumstellar disk.

Antenna temperature

T_A is a directly measured quantity (commonly used in radio astronomy) that incorporates radiative transfer and possible losses between the source emitting the radiation and the detector. In the simplest case,

T_A = \eta T_B( 1 - e^{-\tau}),

where \eta is the telescope efficiency (a numerical factor from 0 to 1) and \tau is the optical depth.
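In this simplest case, the two limits are easy to see numerically (η = 0.9 is an assumed efficiency, chosen for illustration):

```python
import math

def antenna_temperature(T_B, tau, eta=0.9):
    """T_A = eta * T_B * (1 - exp(-tau)); eta = 0.9 is an assumed telescope efficiency."""
    return eta * T_B * (1 - math.exp(-tau))

# Optically thick: T_A -> eta * T_B; optically thin: T_A ~ eta * T_B * tau
print(antenna_temperature(100.0, 10.0))   # saturates near eta * T_B
print(antenna_temperature(100.0, 0.01))   # scales as eta * T_B * tau
```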

Excitation Processes: Collisions

(updated for 2013)


Collisional coupling means that the gas can be treated in the fluid approximation, i.e. we can treat the system on a macrophysical level.

Collisions are of key importance in the ISM:

      • cause most of the excitation
      • can cause recombinations (electron + ion)
      • lead to chemical reactions

Three types of collisions

      1. Coulomb force-dominated (r^{-1} potential): electron-ion, electron-electron, ion-ion
      2. Ion-neutral: induced dipole in neutral atom leads to r^{-4} potential; e.g. electron-neutral scattering
      3. neutral-neutral: van der Waals forces -> r^{-6} potential; very low cross-section

We will discuss (3) and (2) below; for ion-electron and ion-ion collisions, see Draine Ch. 2.

In general, we will parametrize the interaction rate between two bodies A and B as follows:

{\frac{\rm{reaction~rate}}{\rm{volume}}} = <\sigma v>_{AB} n_A n_B

In this equation, <\sigma v>_{AB} is the collision rate coefficient in \rm{cm}^3 \rm{s}^{-1}. <\sigma v>_{AB}= \int_0^\infty \sigma_{AB}(v) f_v~dv, where \sigma_{AB} (v) is the velocity-dependent cross section and f_v~dv is the particle velocity distribution, i.e. the probability that the relative speed between A and B is v. For the Maxwellian velocity distribution,

f_v~dv = 4 \pi \left(\frac{\mu'}{2\pi k T}\right)^{3/2} e^{-\mu' v^2/2kT} v^2~dv,

where \mu'=m_A m_B/(m_A+m_B) is the reduced mass. The center of mass energy is E=1/2 \mu' v^2, and the distribution can just as well be written in terms of the energy distribution of particles, f_E dE. Since f_E dE = f_v dv, we can rewrite the collision rate coefficient in terms of energy as

<\sigma v>_{AB}=\left(\frac{8kT}{\pi\mu'}\right)^{1/2} \int_0^\infty \sigma_{AB}(E) \left(\frac{E}{kT}\right) e^{-E/kT} \frac{dE}{kT}.

These collision coefficients can occasionally be calculated analytically (via classical or quantum mechanics), and can in other situations be measured in the lab. The collision coefficients often depend on temperature. For practical purposes, many databases tabulate collision rates for different molecules and temperatures (e.g., the LAMDA database).
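The thermal average can also be done numerically. As a sanity check of the formula above: for a constant (hard-sphere) cross section the integral reduces to <\sigma v> = \sigma <v> with <v> = (8kT/\pi\mu')^{1/2}. A sketch:

```python
import math

K_B = 1.381e-16  # erg/K

def rate_coefficient(sigma_of_E, T, mu, n=4000, x_max=30.0):
    """<sigma v> = sqrt(8kT/(pi mu)) * Int sigma(E) (E/kT) e^{-E/kT} d(E/kT),
    evaluated with the trapezoid rule in x = E/kT."""
    pref = math.sqrt(8 * K_B * T / (math.pi * mu))
    dx = x_max / n
    total = 0.0
    for i in range(n + 1):
        x = i * dx
        w = 0.5 if i in (0, n) else 1.0
        total += w * sigma_of_E(x * K_B * T) * x * math.exp(-x)
    return pref * total * dx

# Sanity check: a constant hard-sphere cross section gives <sigma v> = sigma * <v>
M_H = 1.673e-24
mu = M_H / 2                 # reduced mass for two H atoms
sigma0 = 1e-15               # cm^2, the hard-sphere value from the text
k_num = rate_coefficient(lambda E: sigma0, 100.0, mu)
k_ana = sigma0 * math.sqrt(8 * K_B * 100.0 / (math.pi * mu))
print(k_num, k_ana)          # agree to better than 1%
```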

For more details, see Draine, Chapter 2. In particular, he discusses 3-body collisions relevant at high densities.

Neutral-Neutral Interactions

(updated for 2013)


Short range forces involving “neutral” particles (neutral-ion, neutral-neutral) are inherently quantum-mechanical. Neutral-neutral interactions are very weak until electron clouds overlap (\sim 1~\AA \sim 10^{-8}~{\rm cm}). We can therefore treat these particles as hard spheres. The collisional cross section for two species is a circle of radius r_1 + r_2, since that is the closest two particles can get without touching.

\sigma_{nn} \sim \pi (r_1 + r_2)^2 \sim 10^{-15}~{\rm cm}^2

What does that collision rate imply? Consider the mean free path:

mfp = \ell_c \approx (n_n \sigma_{nn})^{-1} = \frac{10^{15}} {n_H}~{\rm cm}

This is roughly 70 AU in typical ISM conditions (n_H = 1 {\rm cm^{-3}})

In gas at temperature T, the mean particle velocity is given by the 3-d kinetic energy: \frac{1}{2} m_n v^2 = \frac{3}{2} kT, or

v = \sqrt{\frac{3 kT}{m_n}}, where m_n is the mass of the neutral particle. The mean free path and velocity allow us to define a collision timescale:

\tau_{nn} \sim \frac{l_c}{v} \sim \left(\frac{3 kT}{m_n}\right)^{-1/2} (n_n \sigma_{nn})^{-1} \approx 2 \times 10^3~n_n^{-1}~T^{-1/2}~{\rm years}.

      • For (n,T) = (1~{\rm cm^{-3}, 80~K}), the collision time is about 220 years
      • For (n,T) = (10^4~{\rm cm^{-3}, 10~K}), the collision time is about 3 weeks
      • For (n,T) = (1~{\rm cm^{-3}, 10^4~K}), the collision time is about 20 years

So we see that density matters much more than temperature in determining the frequency of neutral-neutral collisions.

Ion-Neutral Reactions

(updated for 2013)


In Ion-Neutral reactions, the neutral atom is polarized by the electric field of the ion, so that interaction potential is

U(r) \approx \vec{E} \cdot \vec{p} = \frac{Z e} {r^2} ( \alpha \frac{Z e}{r^2} ) = \alpha \frac{Z^2 e^2}{r^4},

where \vec{E} is the electric field due to the charged particle, \vec{p} is the induced dipole moment in the neutral particle (determined by quantum mechanics), and \alpha is the polarizability, which defines \vec{p}=\alpha \vec{E} for a neutral atom in a uniform static electric field. See Draine, section 2.4 for more details.

This interaction can take strong or weak forms. We distinguish between the two cases by considering b, the impact parameter. Recall that the reduced mass of a 2-body system is \mu' = m_1 m_2 / (m_1 + m_2). In the weak regime, the interaction energy is much smaller than the kinetic energy of the reduced mass:

\frac{\alpha Z^2 e^2}{b^4} \ll\frac{\mu' v^2}{2} .

In the strong regime, the opposite holds:

\frac{\alpha Z^2 e^2}{b^4} \gg\frac{\mu' v^2}{2}.

The spatial scale which separates these two regimes corresponds to b_{\rm crit}, the critical impact parameter. Setting the two sides equal, we see that b_{\rm crit} = \big(\frac{2 \alpha Z^2 e^2}{\mu' v^2}\big)^{1/4}

The effective cross section for ion-neutral interactions is

\sigma_{ni} \approx \pi b_{\rm crit}^2 = \pi Z e (\frac{2 \alpha}{\mu'})^{1/2} (\frac{1}{v})

Deriving an interaction rate is trickier than for neutral-neutral collisions because n_i \ne n_n in general. So, let’s leave out an explicit n and calculate a rate coefficient instead, in {\rm cm}^3 {\rm s}^{-1}.

k = <\sigma_{ni} v> (although really \sigma_{ni} \propto 1/v, so k is largely independent of v). Combining with the equation above, we get the ion-neutral scattering rate coefficient

k = \pi Z e (\frac{2 \alpha}{\mu'})^{1/2}

As an example, for C^+ - H interactions we get k \approx 2 \times 10^{-9} {\rm cm^{3} s^{-1}}. This is about the rate for most ion-neutral exothermic reactions. This gives us

\frac{{\rm rate}}{{\rm volume}} = n_i n_n k.

So, if n_i = n_n = 1, the average time \tau between collisions is 16 years. Recall that, for neutral-neutral collisions in the diffuse ISM, we had \tau of a few hundred years. Ion-neutral collisions are much more frequent in most parts of the ISM due to the larger interaction cross section.
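Putting in numbers for C^+ – H (taking the standard polarizability of atomic hydrogen, \alpha \approx 6.67 \times 10^{-25}~{\rm cm}^3), the formula above gives k of order 10^{-9} cm^3/s, consistent with the quoted value; the exact Langevin rate, 2\pi e (\alpha/\mu')^{1/2}, gives \approx 2 \times 10^{-9}. A sketch:

```python
import math

E_ESU = 4.803e-10    # electron charge, esu
M_H = 1.673e-24      # g
ALPHA_H = 6.67e-25   # polarizability of atomic H, cm^3

def ion_neutral_rate(Z, alpha, mu):
    """k = pi Z e sqrt(2 alpha / mu), cm^3/s (the velocity-independent coefficient above)."""
    return math.pi * Z * E_ESU * math.sqrt(2 * alpha / mu)

mu_CH = 12 * M_H / 13              # reduced mass of C+ (12 m_H) and H
k = ion_neutral_rate(1, ALPHA_H, mu_CH)
print(f"k ~ {k:.1e} cm^3/s")       # order 1e-9

tau_yr = 1.0 / (1.0 * 2e-9) / 3.156e7  # using the quoted k = 2e-9 and n_n = 1 cm^-3
print(f"tau ~ {tau_yr:.0f} yr")        # ~16 yr
```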

The Virial Theorem


(Transcribed by Bence Beky). See also these slides from lecture

See Draine pp 395-396 and appendix J for more details.

The Virial Theorem provides insight about how a volume of gas subject to many forces will evolve. Let’s start with virial equilibrium. For a surface S,

0 = \frac12 \frac{\mathrm D^2I}{\mathrm Dt^2} = 2\Gamma + 3\Pi + \mathscr M + W + \frac1{4\pi}\int_S(\mathbf r \cdot \mathbf B)\, \mathbf B \cdot \mathrm d \mathbf s - \int_S \left(p+\frac{B^2}{8\pi}\right)\mathbf r \cdot \mathrm d\mathbf s,

see Spitzer pp. 217–218. Here I is the moment of inertia:

I = \int \varrho r^2 \mathrm dV

\Gamma is the bulk kinetic energy of the fluid (macroscopic kinetic energy):

\Gamma = \frac12 \int \varrho v^2 \mathrm dV,

\Pi is \frac23 of the random kinetic energy of thermal particles (molecular motion), or \frac13 of random kinetic energy of relativistic particles (microscopic kinetic energy):

\Pi = \int p \mathrm dV,

\mathscr M is the magnetic energy within S:

\mathscr M = \frac1{8\pi} \int B^2 \mathrm dV

and W is the total gravitational energy of the system if masses outside S don’t contribute to the potential:

W = - \int \varrho \mathbf r \cdot \nabla \Phi \mathrm dV.

Among all these terms, the most used ones are \Gamma, \mathscr M and W. Most often the equation is just quoted as 2\Gamma+W=0. Note that the virial theorem always holds; apparent inapplicability arises only when important terms are omitted.

This kind of simple analysis is often used to determine how bound a system is, and predict its future, e.g. collapse, expansion or evaporation. Specific examples will show up later in the course, including instability analyses.

The virial theorem as Chandrasekhar and Fermi formulated it in 1953 is the following:

\underbrace {2T_m} _{2\Gamma} + \underbrace {2T_k} _{3\Pi} + \underbrace {\Omega} _{W} + \mathscr M = \underbrace {0} _{\frac {\mathrm D^2 I} {\mathrm D t^2}}.

This uses a different notation but expresses the same idea, which is very useful in terms of the ISM.

Radiative Transfer


The specific intensity of a radiation field is defined as the energy rate density with respect to frequency, cross sectional area, solid angle and time:

\mathrm dI_\nu = \frac {\mathrm dE} {\mathrm d\nu \mathrm dA \mathrm d\Omega \mathrm dt}

[\mathrm dI_\nu] = 1 \;\mathrm{erg} \; \mathrm {Hz}^{-1} \; \mathrm {cm}^{-2} \; \mathrm {sr}^{-1} \; \mathrm s ^{-1}

in cgs units. It is important to note that specific intensity does not change during the propagation of a ray, no matter what its geometry is, unless there is extinction or emission in the path.

Specific intensity is a function of position, frequency, direction and time. Integrating over all directions, we get the specific flux density, which is a function of position, frequency and time:

F_\nu = \int_{4\pi} I_\nu \cos \theta \mathrm d\Omega

[F_\nu] = 1 \;\mathrm{erg} \; \mathrm {Hz}^{-1} \; \mathrm {cm}^{-2} \; \mathrm s ^{-1}

where \theta is the angle between the ray and the normal to \mathrm dA (for a small, distant source one usually takes \theta = 0).

A conventional unit of specific flux density, especially in radioastronomy, is jansky, named after the American radio astronomer Karl Guthe Jansky:

1 \; \mathrm {Jy} = 10^{-23} \;\mathrm{erg} \; \mathrm {Hz}^{-1} \; \mathrm {cm}^{-2} \; \mathrm s ^{-1} = 10^{-26} \;\mathrm{W} \; \mathrm {Hz}^{-1} \; \mathrm {m}^{-2}

The specific energy density of a radiation field is

u_\nu = \frac1c \int_{4\pi} I_\nu \mathrm d\Omega

[u_\nu] = 1 \;\mathrm{erg} \; \mathrm {Hz}^{-1} \; \mathrm {cm}^{-3}

The mean specific intensity is the specific intensity at a given position and time averaged over all directions:

J_\nu = \frac1{4\pi} \int_{4\pi} I_\nu \mathrm d\Omega = \frac {cu_\nu} {4\pi} \left( = \frac {F_\nu}{4\pi} \textrm{ if } \theta=0 \right)

[J_\nu] = 1 \;\mathrm{erg} \; \mathrm {Hz}^{-1} \; \mathrm {cm}^{-2} \; \mathrm {s}^{-1}

The above “specific” quantities all have their frequency integrated counterparts: intensity, flux density, energy density and mean intensity.

The emission and absorption coefficients j_\nu and \alpha_\nu are defined as the coefficients in the differential equations governing the change of intensity in a ray transversing some medium:

\mathrm dI_\nu = ( j_\nu - \alpha_\nu I_\nu) \mathrm ds.

The emissivity \varepsilon_\nu and opacity \kappa_\nu are defined by

j_\nu = \frac {\varepsilon_\nu \varrho} {4\pi}, \alpha_\nu = \varrho \kappa_\nu.

To find the integral equation determining the specific intensity as a function of path, we define the source function S_\nu and the differential optical depth \mathrm d\tau_\nu by

S_\nu = \frac {j_\nu} {\alpha_\nu}; \mathrm d\tau_\nu = \alpha_\nu \mathrm ds.

It is left as an exercise to the reader that these lead to the Radiative Transfer Equation

I_\nu (\tau_\nu) = I_\nu (0) e^{-\tau_\nu} + \int_0^{\tau_\nu} e^{-(\tau_\nu-\tau'_\nu)} S_\nu(\tau'_\nu) \mathrm d \tau'_\nu

If the source function is constant in a medium, then the two limiting cases are

      • optically thin \tau_\nu \to 0: I_\nu (\tau_\nu) \approx I_\nu(0)(1-\tau_\nu) + S_\nu \tau_\nu
      • optically thick \tau_\nu \to \infty : I_\nu (\tau_\nu) \to S_\nu
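Both limits follow directly from the formal solution; a quick numerical check with a constant source function:

```python
import math

def transfer_constant_source(I0, S, tau):
    """Formal solution I(tau) = I0 e^{-tau} + S (1 - e^{-tau}) for constant S."""
    return I0 * math.exp(-tau) + S * (1 - math.exp(-tau))

I0, S = 10.0, 2.0
print(transfer_constant_source(I0, S, 1e-3))  # thin: barely changed from I0
print(transfer_constant_source(I0, S, 20.0))  # thick: saturates at S
```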


The Planck function, named after the German physicist Max Karl Ernst Ludwig Planck, is

B_\nu (T) = \frac {2h\nu^3}{c^2} \cdot \frac1 {e^{\frac{h\nu}{kT}}-1}

Planck’s law says that a black body, that is, an optically thick object in thermal equilibrium, will emit radiation of specific intensity given by the Planck function, also known as the blackbody function. Any radiation with this specific intensity is called blackbody radiation. A similar concept is thermal emission, which in fact is defined as S_\nu=B_\nu, and therefore only implies blackbody radiation if the emitting medium is optically thick. Note that in case of thermal emission, one can substitute the definition of source function to write j_\nu=\alpha_\nu B_\nu, which is known as Kirchhoff’s law.

The brightness temperature T_b of a radiation of specific intensity I_\nu at a given frequency \nu is defined as the temperature that a black body would have to have in order to have the same specific intensity:

I_\nu = B_\nu (T_b)

There are two asymptotic approximations of the blackbody radiation: if h\nu \ll kT, that is, small frequency or high temperature, we have the Rayleigh–Jeans approximation for the specific intensity:

I_\nu^{\mathrm {RJ}} (T) = \frac {2\nu^2}{c^2} \cdot kT

This is valid in most areas of radio astronomy, except for example some cases of thermal dust emission.

In the other limit, where h\nu \gg kT, that is, large frequency or low temperature, we have the Wien approximation, named after the German physicist Wilhelm Carl Werner Otto Fritz Franz Wien, who derived it in 1896:

I_\nu^{\mathrm {W}} (T) = \frac {2h\nu^3}{c^2} \cdot e^{-\frac{h\nu}{kT}}

Now assume the background radiation of brightness temperature T_\mathrm{bg} traverses a medium of temperature T, source function S_\nu=B_\nu(T), and optical depth \tau_\nu. In the Rayleigh–Jeans regime B\propto T, therefore the differential and integral forms of the radiative transfer equation become

\frac {\mathrm dT_\mathrm b}{\mathrm d\tau_\nu} = T - T_\mathrm b

T_\mathrm b = T_\mathrm{bg} e^{-\tau_\nu} + T ( 1-e^{-\tau_\nu})

where T_\mathrm b is the brightness temperature of the radiation leaving the medium.

This can be applied to dust emission observations assuming an “emissivity-modified” blackbody radiation. Then the contribution of particles of a given linear size a is

F_\lambda = N_a \frac {\pi a^2} {D^2} Q_\lambda B_\lambda (T)

where N_a is the number of such particles, D is the distance to the observer, and Q_\lambda is the emissivity. Recall that the blackbody intensity with respect to wavelength is

B_\lambda (T) = \frac {2hc^2} {\lambda ^5} \cdot \frac1 {e^{\frac{hc}{\lambda kT}} - 1 }.

We refer the reader to Hildebrand 1983.

It turns out that the emissivity in far infrared follows a power law:

Q_\mathrm{FIR} \propto \lambda ^{-\beta}

      • \beta=0 for blackbody
      • \beta=1 for amorphous lattice-layer materials
      • \beta=2 for metals and crystalline dielectrics

The observed flux will be

F_\textrm{observed} = \sum_a N_a \frac {\pi a^2} {D^2} Q_\lambda B_\lambda (T).

The same emission can be a result of different values of T, Q, and N.

To determine the total mass of dust, we write

M_\mathrm{dust} = \frac {4\varrho_\mathrm{dust} F_\lambda D^2} {3B_\lambda(T_\mathrm{dust})} \cdot \left\langle \frac a {Q_\lambda} \right\rangle,

where \langle\cdot\rangle is an average weighted appropriately.

If \tau\ll1, then emission essentially depends on grain surface area, and we can write

F_\nu = \kappa_\nu B_\nu \frac {M_\mathrm{dust}} {D^2}.

Hildebrand 1983 gives us an empirical formula for dust opacity, implicitly assuming a gas-to-dust ratio of 100, which can be extended with the power law described above to arrive at

\kappa_\nu = 0.1\;\frac{\mathrm{cm}^2}{\mathrm g} \cdot \left( \frac \nu {1200\;\mathrm{GHz}} \right)^\beta.

A typical modern value for interstellar dust is \beta=1.7, though its value is still debated. It can be determined from the SED slope in the Rayleigh–Jeans regime, where

F_\nu \propto \nu^{\beta+2}.
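Putting the pieces together, the optically thin dust mass follows from F_\nu, D, T, and the opacity power law. A sketch in cgs units; the 1 Jy, 230 GHz, 100 pc, 20 K source below is an invented example, and \beta=1.7 is the fiducial value quoted above:

```python
import math

# Physical constants (cgs)
h, c, k = 6.626e-27, 2.998e10, 1.381e-16

def planck_nu(nu, T):
    """Blackbody intensity B_nu(T) in erg s^-1 cm^-2 Hz^-1 sr^-1."""
    return (2 * h * nu**3 / c**2) / math.expm1(h * nu / (k * T))

def kappa_nu(nu, beta=1.7):
    """Hildebrand-style opacity kappa = 0.1 (nu / 1200 GHz)^beta cm^2/g,
    implicitly assuming a gas-to-dust ratio of 100."""
    return 0.1 * (nu / 1.2e12)**beta

def dust_mass(F_nu, D, nu, T, beta=1.7):
    """Optically thin mass estimate M = F_nu D^2 / (kappa_nu B_nu(T)), cgs."""
    return F_nu * D**2 / (kappa_nu(nu, beta) * planck_nu(nu, T))

# Illustrative: a 1 Jy source at 230 GHz, 100 pc away, T_dust = 20 K.
pc, Jy = 3.086e18, 1e-23
M = dust_mass(1.0 * Jy, 100 * pc, 2.3e11, 20.0)
print(M / 1.989e33, "solar masses")
```

This comes out to a fraction of a solar mass, a plausible dense-core scale, though the answer is very sensitive to the assumed T and \beta, as the text warns.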

ISM of the Milky Way


The ISM in the Milky Way can be divided into cold stuff and hot stuff. The cold stuff is dust and gas. Katherine will talk briefly about the importance of cooling via CO emission. Hot stuff will be discussed by Ragnhild and Vicente (also see Spitzer 1958), and Tanmoy already talked about supernovae. Suggested reading is Chapters 5 and 19 of Draine. In particular, if you ever need a reference for term symbols (uppercase Greek letters with subscripts and superscripts), see page 39.

Cold ISM


The molecular gas is mostly composed of H_2 molecules. This, however, can rarely be observed directly: H_2 has no permanent dipole moment, so its rotational transitions are strongly forbidden. (As exceptions, UV absorption lines can be observed in hot H_2 gas, vibrational lines can be observed in shocked ISM, and NIR lines can be observed in “cold” hot ISM.) Instead of observing hydrogen lines, trace species are used as proxies to infer hydrogen density. The choice of preferred trace species depends on their abundance. To quantify this, we define the critical density as

n_\mathrm{critical} = \frac {A_\mathrm{ul}} {\gamma_\mathrm{ul}},

where A is the Einstein coefficient and \gamma is the collision rate for a given transition. This of course will depend on temperature, as the collision rate is the product of the temperature-independent collisional cross section \sigma and the temperature-dependent particle velocity v. For a detailed explanation of critical density, see pp.~81–87 of Spitzer, and Chapter 19 of Draine.

Typically assumed values are T\sim100\;\mathrm K, v\sim10^5\;\frac{\mathrm{cm}}{\mathrm s}, \sigma\sim10^{-15}\;\mathrm{cm}^{2}, \gamma_\mathrm{ul}\sim10^{-10}\;\frac{\mathrm{cm}^3}{\mathrm s}. The Einstein coefficient for the J=1\to0 transition of CO is A_{10}\sim6\cdot10^{-8}\;\mathrm s^{-1}, yielding a critical density of n_\mathrm{critical}\sim6\cdot10^2\;\mathrm {cm}^{-3}. Spitzer gives us fiducial values for critical densities around T=100\;\mathrm K:

      • CO, J = 1 \to 0, \lambda = 2.6 mm, A = 6 \cdot 10^{-8} \rm s^{-1}, n_{\rm crit} = 4 \cdot 10^3 {~\rm cm^{-3}}
      • NH_3, J = 1 \to 1, \lambda = 12.65 mm, A = 1.7\cdot10^{-7} {\rm s^{-1}}, n_{\rm crit} = 1.1\cdot10^4 {~\rm cm^{-3}}
      • CS, J=1\to 0, \lambda = 6.12 mm, A = 1.8\cdot10^{-6} {\rm s}^{-1}, n_{\rm crit} = 1.1\cdot10^5 {\rm ~cm^{-3}}
      • HCN, J=1\to 0, \lambda = 3.38 mm, A = 2.5 \cdot10^{-5} {\rm s^{-1}}, n_{\rm crit} = 1.6\cdot10^6 {\rm~ cm^{-3}}
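The n_\mathrm{critical}=A/\gamma estimate from the previous paragraph can be reproduced directly, using the fiducial \sigma and v quoted above (note this gives somewhat different numbers than Spitzer's tabulated values, which use transition-specific collision rates):

```python
def n_crit(A_ul, sigma=1e-15, v=1e5):
    """Critical density n_crit = A_ul / gamma_ul in cm^-3, with
    gamma_ul = sigma * v (~1e-10 cm^3/s for the fiducial values)."""
    return A_ul / (sigma * v)

print(n_crit(6e-8))     # CO J=1-0: ~6e2 cm^-3
print(n_crit(1.7e-7))   # NH3 (1,1): ~1.7e3 cm^-3
```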

In practice, the following tracers are used for cold clouds in the range of 3\;\mathrm K \lessapprox T \lessapprox 100 \; \mathrm K:

      • Low density (n = 10 cm^{-3}): 12CO
      • Dark cloud (n = 300 cm^{-3}): 13CO, OH
      • Dense core (n = 10^3 cm^{-3}): C18O, CS
      • Dense core (n = 5 \cdot 10^3 cm^{-3}): NH3, N2H+, CS
      • Very dense (n = 10^8 cm^{-3}): OH masers
      • Very, very dense (n = 10^{10} cm^{-3}): H2O masers

See handout on dense core multi-line half power contours from Myers 1991, and note that sizes do not proceed as critical densities would suggest.

As a reminder, electronic energy levels are much further apart, resulting in higher frequency transition lines. Vibrational levels are a few orders of magnitude more tightly spaced, and rotational energy levels are closer still. Here we give a brief overview of rotational line structure; see Chapter 5.1.5 of Draine for more details.

Quantum mechanically, a diatomic molecule has energy levels identified by J=0,1,2,\ldots with energy E_{\rm rot} = \frac{J(J+1) \hbar^2}{2I_v} = B_v J (J+1) (which would have J^2 instead in the quasiclassical theory). Here I_v=m_\mathrm r r_v^2 is the moment of inertia, r_v is the distance between the atoms, and m_\mathrm r is their reduced mass. B_v is the rotation constant. The v index signifies that these values are valid for a given vibrational state only. Therefore the transition energy between the states J-1 and J is

\Delta E_\mathrm{rot} = 2J B_v.

In case of ^{12}C^{16}O, B_v=5.75\cdot10^{16} in some mysterious units (equivalently, B_v/h \approx 57.6\;\mathrm{GHz}, half the 1\to0 frequency). For the J=1\to0 transition,

\Delta E_\mathrm{rot}=4.7\cdot10^{-4}\;\mathrm{eV}, corresponding to \nu=115.271\;\mathrm{GHz}, or \frac{h\nu}k=5.5\;\mathrm K. The J=2\to1 and J=3\to2 transitions have twice and three times higher energy differences, respectively.

We can estimate which transition gives maximum emission (in terms of number of photons) at a given temperature:

E_\mathrm{rot} = B_v J (J+1)

k T_\mathrm{rot} = B_v J (J+1)

J_\mathrm{max} \approx \sqrt {\frac {kT_\mathrm{rot}}{B_v}}

      • The SMA and ALMA are sensitive to J=4-7
      • KAO and SOFIA are sensitive to J=20-40

Note that 1\to0 at 2.6 mm is more visible from Earth than 2\to1 due to atmospheric extinction.
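The rotational ladder is easy to explore numerically. A sketch using B_v/h \approx 57.6 GHz for CO (half the 115.27 GHz 1\to0 frequency quoted above):

```python
import math

k_B = 1.381e-16   # erg/K
h = 6.626e-27     # erg s
B_CO = 57.6e9     # rotational constant of 12C16O divided by h, in Hz

def line_freq(J, B=B_CO):
    """Frequency of the J -> J-1 rotational transition: nu = 2 J B."""
    return 2 * J * B

def J_max(T, B=B_CO):
    """Level of peak emission: k T ~ h B J(J+1) -> J_max ~ sqrt(kT / hB)."""
    return math.sqrt(k_B * T / (h * B))

print(line_freq(1) / 1e9)   # ~115.2 GHz (CO 1-0)
print(J_max(100))           # ~6 at T = 100 K
```

At T \sim 100 K this gives J_\mathrm{max} \sim 6, consistent with the SMA/ALMA range quoted above.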

Collisional Excitation


In LTE, the transition rates for a collisional u \rightleftarrows l transition are

C_{ul} = n\gamma_{ul}

C_{lu} = C_{ul} \frac{g_u}{g_l} e^{-\frac{h\nu}{kT_K}}

where \gamma_{ul}=\langle \sigma_{ul} v \rangle is the rate coefficient for the transition, \sigma is the collisional cross section and v is the particle velocity.

In case of Maxwellian velocity distribution, we have

\gamma_{ul} = \frac 4 {\sqrt \pi} \left( \frac \mu {2kT_K} \right)^{\frac32} \int_0^\infty \sigma_{ul} v^3 e^{-\frac{\frac12\mu v^2}{kT_K}} \mathrm dv

where \mu is the reduced mass. For neutral–neutral transitions, \gamma is typically 10^{-11}\sim10^{-10}\;\mathrm{cm}^3\;\mathrm s^{-1}, and for ion–neutral collisions, \gamma\sim10^{-9}\;\mathrm{cm}^3\;\mathrm s^{-1}.

In equilibrium, we have

\dot n_u = n_l C_{lu} - n_u C_{ul} - n_u A_{ul}

0 = n_l C_{ul} \frac{g_u}{g_l} e^{-\frac{h\nu}{kT_K}} - n_u C_{ul} - n_u A_{ul}

0 = (n-n_u) \frac{g_u}{g_l} e^{-\frac{h\nu}{kT_K}} - n_u - n_u \frac {A_{ul}}{C_{ul}}

n \frac{g_u}{g_l} e^{-\frac{h\nu}{kT_K}} = n_u \frac{g_u}{g_l} e^{-\frac{h\nu}{kT_K}} + n_u + n_u \frac {n_\mathrm{crit}}{n}

\frac {n_u} n = \frac {\frac{g_u}{g_l} e^{-\frac{h\nu}{kT_K}}} {\frac{g_u}{g_l} e^{-\frac{h\nu}{kT_K}} + 1 + \frac {n_\mathrm{crit}}{n}}

where n_\mathrm{crit}=\frac{A_{ul}}{\gamma_{ul}} is the critical density (so that \frac{A_{ul}}{C_{ul}}=\frac{n_\mathrm{crit}}{n}). When n\gg n_\mathrm{crit}, collisions dominate over spontaneous emission, resulting in

\frac {n_u} {n_l} = \frac {g_u}{g_l} e^{-\frac{h\nu}{kT_K}}.

However, in case of n\ll n_\mathrm{crit}, spontaneous emission dominates the decay, so each collisional excitation results in an emission:

\frac {n_u} {n_l} = \frac n {n_\mathrm{crit}} \frac {g_u}{g_l} e^{-\frac{h\nu}{kT_K}}.

In case of optically thin emission,

F_\mathrm{line} = \frac {h\nu}{4\pi} A_{ul} \Omega \int n_u \mathrm ds,

where \Omega is the apparent solid angle of the source. Then

\textrm {if } n\ll n_\mathrm {crit}: \quad F_\mathrm{line} = \frac {h\nu}{4\pi} \gamma_{ul} \Omega \frac{g_u}{g_l} e^{-\frac{h\nu}{kT_K}} \int n^2 \mathrm ds \propto n^2

\textrm {if } n\gg n_\mathrm {crit}: \quad F_\mathrm{line} = \frac {h\nu}{4\pi} A_{ul} \Omega \frac{g_u}{g_l} e^{-\frac{h\nu}{kT_K}} \int n \mathrm ds \propto n
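The level-population formula above interpolates smoothly between these limits. A sketch for CO J=1\to0, using g_u/g_l = 3 (degeneracy 2J+1), the h\nu/k = 5.5 K quoted earlier, an assumed T_K = 20 K, and the fiducial n_\mathrm{crit} \sim 6\cdot10^2 cm^{-3}:

```python
import math

def upper_fraction(n, n_crit, gu_gl, h_nu_over_kT):
    """n_u / n from the two-level balance:
    n_u/n = x / (x + 1 + n_crit/n), with x = (g_u/g_l) exp(-h nu / k T)."""
    x = gu_gl * math.exp(-h_nu_over_kT)
    return x / (x + 1.0 + n_crit / n)

# CO J=1-0 at T_K = 20 K: g_u/g_l = 3, h nu / k T = 5.5/20.
for n in (1e1, 6e2, 1e5):
    print(n, upper_fraction(n, 6e2, 3.0, 5.5 / 20.0))
```

Well below n_\mathrm{crit} the upper level is nearly empty (subthermal excitation); well above it, the ratio saturates at the LTE value.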

Recombination of Ions with Electrons


For more details, see Draine Chapter 14.

In HII regions, most recombination happens radiatively: X^+ + e \to X + h \nu.

An electron with kinetic energy E can recombine to any level of hydrogen. The energy of the photon emitted is then given by h\nu = E + I_{nl}, where I_{nl} = 13.6\;{\rm eV} / n^2 is the binding energy of quantum state nl.

There are two extreme cases in recombination (see Baker and Menzel 1938):

      • Case A: The medium is optically thin to ionizing radiation. Appropriate in shock-heated regions where the density is very low.
      • Case B: Optically thick to ionizing radiation. When an atom recombines to the n=1 state, the emitted Lyman photon is immediately reabsorbed, so recombinations to n=1 do not change the ionization state. This is called the “on the spot” approximation.

Optical tracers of recombination: H-\alpha and other lines

Radio: Radio recombination lines (see Draine 10.7). Rydberg states are very high (n>100) hydrogen energy levels populated by recombination. Spontaneous decay gives

\nu_{n\alpha} = \frac{2n+1}{[n(n+1)]^2} \frac{I_H}{h} \sim 6.479 \big(\frac{100.5}{n+0.5}\big)^3 {\rm GHz}

A popular line is \nu_{166\alpha} = 1425 {\rm MHz}. This is often observed because of its proximity to the 1420 MHz line of HI.
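The exact expression above is a one-liner; it reproduces the 166\alpha frequency:

```python
I_H_over_h = 3.288e15  # Rydberg frequency of hydrogen I_H / h, in Hz

def nu_alpha(n):
    """Frequency of the H n-alpha recombination line (n+1 -> n):
    nu = (I_H / h) * (2n+1) / [n(n+1)]^2."""
    return I_H_over_h * (2 * n + 1) / (n * (n + 1))**2

print(nu_alpha(166) / 1e6)  # ~1425 MHz, right next to the 1420 MHz HI line
```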

Radio recombination lines often involve masing.

Star Formation in Molecular Clouds

Topics to be covered include:

      • The Jeans Mass
      • Free Fall Time
      • Virial Theorem
      • Instabilities
      • Magnetic Fields
      • Non-Isolated Systems
      • “Turbulence”

An overview of the steps of star formation.

Basic properties of a GMC


Mass: 10^{5-6} M_\odot

Lifetime: Uncertain. Probably 10 Myr (maybe as long as 100 Myr). “Lifetime” is not easily defined or inferred, since clouds are constantly fragmenting, exchanging mass with their surroundings, etc.

We roughly describe the hierarchy and fragmentation of clouds via the following terms:

      • Clump. 10-100 M_\odot. 1pc. The progenitor of stellar clusters
      • Core. 1-10 M_\odot. 0.1 pc. The progenitor of individual stars, and small stellar systems.
      • Star. The end-product of fragmentation and collapse. 1 M_\odot.

Note that, across these scales, density increases by tens of orders of magnitude:

\frac{\rho_{\rm star}}{\rho_{\rm core}} \propto \bigg(\frac{R_{\rm core}}{R_{\rm star}} \bigg)^3 = \bigg(\frac{.1 \times 3 \times 10^{18}\;{\rm cm}}{7 \times 10^{10}\;{\rm cm}} \bigg)^3 \sim 10^{20}

Figure: Schematic of how a GMC fragments into clumps, cores, and stars

The Jeans Mass


For further details, see Draine ch 41.

Let’s analyze the stability of a sphere of gas, where thermal pressure is balanced by self-gravity.

Start with the basic hydro equations (conservation of mass, momentum, and Poisson’s equation for the gravitational potential):

\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \vec{v}) = 0

\frac{\partial v}{\partial t} + (\vec{v} \cdot \nabla) \vec{v} = -\frac{1}{\rho}\nabla P - \nabla \phi

\nabla^2 \phi = 4 \pi G \rho

Consider an equilibrium solution \rho_0(\vec{r}), P_0(\vec{r}), etc., such that time derivatives are zero. Let’s perturb that solution slightly, and analyze when that perturbation grows unstably.

\vec{v} = \vec{v_0} + \vec{v_1}, \rho = \rho_0 + \rho_1, P = P_0 + P_1, \phi = \phi_0 + \phi_1.

The linear hydro equations, to first order in the perturbations, are

\frac{\partial \rho_1}{\partial t} + \vec{v_0} \cdot \nabla \rho_1 + \vec{v_1} \cdot \nabla \rho_0 = -\rho_1 \nabla \cdot \vec{v_0} - \rho_0 \nabla \cdot \vec{v_1}

\frac{\partial \vec{v_1}}{\partial t} + (\vec{v_0} \cdot \nabla) \vec{v_1} + (\vec{v_1} \cdot \nabla) \vec{v_0} = \frac{\rho_1}{\rho_0^2} \nabla P_0 - \frac{1}{\rho_0} \nabla P_1 -\nabla \phi_1

\nabla^2 \phi_1 = 4 \pi G \rho_1

Let’s restrict our attention to an isothermal gas, so the equation of state is P = \rho c_s^2, where c_s is the isothermal sound speed. Then, the momentum equation becomes

\frac{\partial \vec{v_1}}{\partial t} + (\vec{v_0} \cdot \nabla) \vec{v_1} + (\vec{v_1} \cdot \nabla)\vec{v_0} = -c_s^2 \nabla(\frac{\rho_1}{\rho_0}) - \nabla \phi_1

Jeans took these equations and added:

      • Uniform density to start with (\nabla \rho_0 = 0)
      • Stationary gas (v_0 = 0)
      • Gradient-free equilibrium potential (\nabla \phi_0 = 0)

Then, the solution becomes (after taking \nabla \cdot the momentum equation)

\frac{\partial^2 \rho_1}{\partial t^2} = c_s^2 \nabla^2 \rho_1 + (4 \pi G \rho_0) \rho_1

Now consider plane wave perturbations

\rho_1 \propto \exp[i (\vec{k} \cdot \vec{r} - \omega t)]

\omega^2 = k^2 c_s^2 - 4 \pi G \rho_0

Define k_J^2 = 4 \pi G \rho_0 / c_s^2, so

\omega^2 = (k^2 - k_J^2) c_s^2

\omega is real if and only if k \geq k_J. Otherwise, \omega is imaginary, and there is exponential growth of the instability. This then leads to a Jeans Length:

\lambda_J = 2 \pi / k_J = \bigg(\frac{\pi c_s^2}{G \rho_0} \bigg)^{1/2}

Gas is “Jeans Unstable” when \lambda > \lambda_J. Exponentially growing perturbations will cause the gas to fragment into parcels of size \lambda \sim \lambda_J.

Converting the Jeans length into a radius (assuming a sphere) yields

M_J = 0.32 M_\odot \big(\frac{T}{10K}\big)^{3/2} \big(\frac{m_H}{\mu}\big)^{3/2} \big(\frac{10^6 cm^{-3}}{n_H}\big)^{1/2}

This defines a “preferred mass” for substructures within a cloud.

Let’s plug in values for a dense core: T=10~K,~\mu = 2.33~{\rm amu},~n_H = 2 \times 10^5~{\rm cm}^{-3}. This yields M_J = 0.2~M_\odot. If we instead plug in numbers appropriate for the mean conditions in a GMC, T=50~{\rm K},~\mu=2.33~{\rm amu},~n_{\rm H}=200~{\rm cm}^{-3}, we get M_J=70~M_\odot.
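Both numbers follow from the scaling relation above. A quick sketch:

```python
def jeans_mass(T, mu, n_H):
    """Jeans mass in solar masses, using the scaling from the text:
    M_J = 0.32 (T / 10 K)^{3/2} (m_H / mu)^{3/2} (1e6 cm^-3 / n_H)^{1/2}.
    T in K, mu in amu, n_H in cm^-3."""
    return 0.32 * (T / 10.0)**1.5 * (1.0 / mu)**1.5 * (1e6 / n_H)**0.5

print(jeans_mass(10, 2.33, 2e5))   # dense core: ~0.2 M_sun
print(jeans_mass(50, 2.33, 200))   # mean GMC conditions: ~70 M_sun
```

Note the strong T dependence: a factor of 5 in temperature moves M_J by more than a factor of 10 at fixed density.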

Note that, once gravitational collapse and heating set in, our isothermal sphere assumptions are no longer valid.

Collapse Timescale


For large scales, the growth time for the Jeans instability is

\tau_J \sim \frac{1}{k_J c_s} \sim \frac{1}{\sqrt{4 \pi G \rho_0}} = 2.3 \times 10^4 yr \big(\frac{10^6 cm^{-3}}{n_H} \big)^{1/2}

For n_H = 1000, this is about 0.7 Myr. Compared to a “free fall time” (collapse timescale for a pressure-less gas),

\tau_{ff} = \big( \frac{3 \pi} {32 G \rho_0} \big)^{1/2} = 4.4 \times 10^4 yr \big( \frac{10^6 cm^{-3}}{n_H} \big)^{1/2}

For n_H=1000 this is 1.4 Myr — slightly longer than growth time.
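These two timescales share the same density scaling, so their ratio is a pure number. A sketch using the fiducial forms above:

```python
def tau_jeans(n_H):
    """Jeans growth time in yr: 2.3e4 (1e6 cm^-3 / n_H)^{1/2}."""
    return 2.3e4 * (1e6 / n_H)**0.5

def tau_ff(n_H):
    """Pressureless free-fall time in yr: 4.4e4 (1e6 cm^-3 / n_H)^{1/2}."""
    return 4.4e4 * (1e6 / n_H)**0.5

print(tau_jeans(1000) / 1e6)           # ~0.7 Myr
print(tau_ff(1000) / 1e6)              # ~1.4 Myr
print(tau_ff(1000) / tau_jeans(1000))  # ~1.9, i.e. pi * sqrt(3/8)
```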

The Jeans Swindle


There is a sinister flaw in Jeans’s analysis. Assuming \nabla \phi_0 = 0 implies that \nabla^2 \phi_0 = 0. The only way to satisfy this everywhere is for \rho_0 = 0. However, more rigorous analysis verifies that Jeans’s approach still yields approximately correct results.

More Realistic Treatment


The Bonnor–Ebert sphere describes the largest mass an isothermal gas sphere can have in a pressurized medium while staying in hydrostatic equilibrium.

Numerical Star Formation Simulations


(Notes from Guest Lecture by Stella Offner)

We start by writing down the conservation laws for mass, momentum and energy, each in the form of “rate of change of conserved quantity + flux = source density”:

\frac {\partial \varrho}{\partial t} + \nabla \cdot (\varrho v) = 0

\frac {\partial(\varrho v)}{\partial t} + \nabla \cdot (\varrho v^2 + p) = - \varrho \nabla \phi

\frac {\partial(\varrho E)}{\partial t} + \nabla \cdot \left[ (\varrho E + p) v \right] = - \varrho v \nabla \phi

Also, the Poisson equation for gravity is

\nabla^2 \phi = 4 \pi G \varrho

The initial conditions are determined by T, \varrho(\mathbf r), \mu and bulk \mathbf v(\mathbf r), which further determine the total mass, total angular momentum, turbulence and other global properties.

Grid-based codes

One type of star formation simulation is grid-based. Many of these feature adaptive mesh refinement, increasing spatial resolution in areas where parameters vary on smaller scales. Examples of such codes are Orion, Ramses, Athena, Zeus and Enzo.

This algorithm stores \varrho and \mathbf v values at nodes indexed by j. The values at time step n+1 are calculated from those at time step n by discretized versions of the conservation equations. For instance, a discretized mass equation can be written as

\frac {\varrho_j^{n+1}-\varrho_j^n}{\Delta t} + \frac{\varrho_{j+1}^n v_{j+1}^n - \varrho_{j-1}^n v_{j-1}^n} {2\Delta x} = 0

First, a homogeneous grid is created, with resolution satisfying the Truelove condition, also called the Jeans condition:

\Delta x \leqslant J \lambda_\mathrm J

where the empirical value for the Jeans number J is \frac14. Then the simulation is carried on, repartitioning a square by adding nodes if deemed necessary based on the spatial variation of \varrho or \mathbf v. The timestep is global, but it can also be adaptively changed, as long as it satisfies the Courant condition

\Delta t \leqslant C \cdot \min \left( \frac {\Delta x}{v+c_\mathrm s} \right)

ensuring that gas particles do not move more than half a cell in one time step.
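A toy sketch of the discretized mass update and the Courant time step, in plain Python (no grid refinement; periodic boundaries are an assumption for simplicity, and note that the bare centered-difference form shown in the text is not a stable scheme for production use):

```python
def step_density(rho, v, dt, dx):
    """One explicit update of the discretized mass equation from the text
    (centered differences, periodic boundaries). Real codes use more
    sophisticated, stable schemes; this just illustrates the discretization."""
    n = len(rho)
    flux = [rho[j] * v[j] for j in range(n)]
    return [rho[j] - dt * (flux[(j + 1) % n] - flux[(j - 1) % n]) / (2 * dx)
            for j in range(n)]

def courant_dt(v, c_s, dx, C=0.5):
    """Largest allowed global time step from the Courant condition,
    with C = 1/2 so gas moves at most half a cell per step."""
    return C * min(dx / (abs(vj) + c_s) for vj in v)

rho = [1.0, 1.1, 1.0, 0.9, 1.0]
v = [0.1] * 5
dt = courant_dt(v, c_s=1.0, dx=1.0)
print(step_density(rho, v, dt, dx=1.0))
```

One nice property is visible even in the toy version: with periodic boundaries the update conserves total mass exactly, because the fluxes telescope.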

Particle-Based Codes

The other family of codes is particle-based, also referred to as smoothed particle hydrodynamics (SPH) codes. Here individual pointlike particles are traced. Example codes are Gadget, Gasoline and Hydra.

To solve the equations, density has to be smoothed with a smoothing length h that has to be larger than the characteristic distance between the particles. Usually we want approximately twenty particles within the smoothing radius. The formula for smoothing is

\langle \varrho (\mathbf r) \rangle = \sum_\mathrm{particles} m \omega (\mathbf r-\mathbf r',h)

where an example of the smoothing kernel is

\omega (\mathbf r- \mathbf r', h) = \frac1{\pi h^3} \cdot

      • 1-\frac32u^2+\frac34u^3 ~\textrm{if } 0 \leqslant u \leqslant 1
      • \frac14(2-u)^3 ~\textrm{if } 1 < u \leqslant 2
      • 0 ~\textrm{if } u > 2

where u = \frac{|\mathbf r-\mathbf r'|}h.
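The cubic spline kernel and the density sum are short enough to write out. A sketch using one-dimensional particle positions for brevity (real codes use 3D vectors and a neighbor tree):

```python
import math

def w_cubic(r, h):
    """Standard cubic spline SPH kernel in 3D, u = r / h; the 1/(pi h^3)
    prefactor normalizes the kernel to integrate to 1 over all space."""
    u = r / h
    if u <= 1.0:
        f = 1 - 1.5 * u**2 + 0.75 * u**3
    elif u <= 2.0:
        f = 0.25 * (2 - u)**3
    else:
        f = 0.0
    return f / (math.pi * h**3)

def density(r, positions, masses, h):
    """Smoothed density <rho(r)> = sum_particles m * w(|r - r'|, h)."""
    return sum(m * w_cubic(abs(r - p), h) for p, m in zip(positions, masses))

pos = [0.0, 0.3, 0.5, 0.9, 1.4]
print(density(0.5, pos, [1.0] * len(pos), h=1.0))
```

The compact support (the kernel vanishes beyond 2h) is what makes the neighbor search a local operation.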

The Lagrangian of the system is

L = \sum_i \frac12 m_i v_i^2 - \sum_i m_i \phi(\mathbf r_i).

The Euler–Lagrange equations governing the motion of the particles are

\frac {\partial L}{\partial \mathbf r_i} - \frac {\mathrm d}{\mathrm dt} \frac {\partial L}{\partial \dot {\mathbf r}_i} = 0.

It turns out that solving these equations leads to

\frac {\partial v_i}{\partial t} = -\sum_j m_j \left( \frac {p_i}{\varrho_i^2} + \frac {p_j}{\varrho_j^2} + Q_{ij} \right) \nabla_i \omega(\mathbf r_i-\mathbf r_j, h),

where \mathbf Q is a viscosity term added artificially for more accurate modeling of transient behavior. The nature of the simulation requires that particles are stored in a data structure that facilitates easy search based on their position, for example a tree. As particles move, this structure needs to be updated.

The Jeans condition for these kinds of simulations is that

M_\mathrm {min} \leqslant M_\mathrm J

where M_\mathrm {min} is the minimum resolved fragmentation mass, equal to the typical total mass within smoothing length from any particle.

Comparison

The advantages of the grid-based codes are that they are

      • accurate for simulating shocks,
      • more stable at instabilities.

Their disadvantages are that

      • grid orientation may cause directional artifacts,
      • grid imprinting might appear at the edges.

Also, they are more likely to break if there’s a bug in the code. On the other hand, particle-based codes have the advantages of being

      • inherently Lagrangian,
      • inherently Galilei invariant, that is, describing convection and flows well,
      • accurate with gravity,
      • good with general geometries.

At the same time, they suffer from

      • resolution problems in low density regions,
      • the need for an artificial viscosity term,
      • the need for post processing to extract information on density and momentum distribution,
      • statistical noise.

See slides for the effect of too large J for a grid-based simulation, too small h for a particle-based simulation, for examples involving shock fronts, Kelvin–Helmholtz instability and Rayleigh–Taylor instability, and star formation. Note that the simulated IMF matches theory and observations within an order of magnitude, but this is very sensitive to the initial temperature of the molecular cloud.

Some thoughts on Jeans scales


Recall the equation for the Jeans Mass:

M_J = \frac1{8} \big( \frac{\pi k T}{G \mu} \big)^{3/2} \frac1{\rho^{1/2}} \propto \frac{T^{3/2}}{\rho^{1/2}}

In common units, this is

M_J = 0.32 M_\odot \big(\frac{T}{10K} \big)^{3/2} \big(\frac{m_H}{\mu}\big)^{3/2} \big( \frac{10^6 {\rm cm^{-3}}}{n_H} \big)^{1/2}

      • 10K, ~2.33 {\rm amu}, ~ n_H = 2 \times 10^5 {\rm cm^{-3}} \to 0.2 M_\odot
      • 50K, ~2.33 {\rm amu}, ~ n_H = 200 {\rm cm^{-3}} \to 70 M_\odot

This raises a question: what Jeans mass do we choose if a gas cloud is hierarchical, with many different densities and temperatures? This motivates the idea of turbulent fragmentation, wherein a collapsing cloud fragments at several scales as it collapses. We will return to this.

Also recall that the Jeans growth timescale for structures much larger than the Jeans size is

\tau_J = \frac1{k_J c_s} = \frac1{\sqrt{4 \pi G \rho_0}} = \frac{2.3 \times 10^4 {\rm yr}}{\sqrt{n_H / 10^6 {\rm cm^{-3}}}}

for n_H = 1000, ~ \tau_J \sim 0.7 {\rm Myr}

Compare this to the pressureless freefall time:

\tau_{ff} = \big(\frac{3 \pi}{32 G \rho_0} \big)^{1/2} = \frac{4.4 \times 10^4 {\rm yr}}{\sqrt{n_H / 10^6 {\rm cm^{-3}}}}

From which we see

\frac{\tau_{ff}}{\tau_J} = \pi \sqrt{3/8} = 1.92

The Jeans growth time is about 1/2 the free-fall time

Finally, compare this to the crossing time in a region with n = 1000 {\rm cm^{-3}}. The sound speed is c_s = \sqrt{kT / \mu}, which is 0.27 km/s for molecular hydrogen, and .08 km/s for 13CO.

However, note that the observed linewidth in 13CO for such regions is of the order 1 km/s. Clearly, non-thermal energies dominate the motion of gas in the ISM.

Recall that the jeans length is \lambda_J = \big( \frac{\pi c_s^2}{G \rho} \big)^{1/2}

Plugging in the thermal H2 sound speed yields \lambda_J = 1 pc. The crossing time for CO gas moving at 1 km/s is thus 1Myr in this region.

So, in a cloud with n \sim 10^3 and T \sim 20K, the Jeans growth time, free fall time, and crossing time are all comparable.

There is a problem with our derivation, though! We assumed the relevant sound speed that sets the Jeans length is the thermal hydrogen sound speed. However, these clouds are not supported thermally, as the non-thermal, turbulent velocity dispersion dominates the observed linewidths. This suggests we use an equivalent Jeans length where we replace the sound speed by the turbulent linewidth.

This is often done in the literature, but it’s sketchy. Using turbulent linewidths in a virial analysis assumes that turbulence acts like thermal motion, that is, that it provides an isotropic pressure. This may not be the case, as turbulent motions can be partially ordered in a way that provides little support against gravity.

The question of how a cloud behaves when it is not thermally supported leads to a discussion of Larson’s Laws (JC 2013).

Larson’s Legacy

see these slides

Stellar Winds in Star Forming Regions


Guest Lecture by Hector Arce. See this post for notes. See also this movie

Introduction to shocks


Shocks occur when density perturbations are driven through a medium at a speed greater than the sound speed in that medium. When that happens, sound waves cannot propagate and dissipate these perturbations. Overdense material then piles up at a shock front.

Collisions in the shock front will alter the density, temperature, and pressure of the gas. The relevant size scale over which this happens (i.e., the size of the shock front) is the size scale over which particles communicate with each other via collisions:

l = (\sigma n)^{-1}. If n=10^2 {\rm cm^{-3}}, ~\sigma \sim (10^{-9} {\rm cm})^2 \to l = 0.01 {\rm pc}

A shock

An excellent reference offering a three-page summary of important shock basics (including jump conditions) is available in this copy of a handout from Jonathan Williams’ ISM class at the IfA in Hawaii.

Shock De-Jargonification

See this handout and this list of examples.

Shock(ing facts about) viscosity


A perfect shock is a discontinuity, but real shocks are smoothed out by viscosity. Viscosity is the resistance of a fluid to deformation (it is the equivalent of friction in a fluid).

Inviscid or ideal fluids have zero viscosity. Colder gases are less viscous, because collisions are less frequent.

Simulations include numerical viscosity by accident: it is the result of computational approximations and discretization that violate Euler’s equations and lead to a momentum flow that acts like true viscosity. However, the behavior of numerical viscosity can be unrealistic and of inappropriate magnitude.

True shocks are a few mean free paths thick, which is typically less than the resolution of simulations. Thus, simulations also add artificial viscosity to smooth out and resolve shock fronts.

Rankine Hugoniot Jump Conditions


This section is based on the 2013 Wikipedia entry for “Rankine-Hugoniot Conditions,” archived as a PDF here.

The Jump conditions describe how properties of a gas change on either side of a shock front. They are obtained by integrating the Euler Equations over the shock front. As a reminder, the mass, momentum, and energy equations in 1 dimension are

      • \frac{\partial \rho}{\partial t} = - \frac{\partial}{\partial x} (\rho u)
      • \frac{\partial (\rho u)} {\partial t} = - \frac{\partial}{\partial x} (\rho u^2 + P)
      • \frac{\partial (\rho E)}{\partial t} = - \frac{\partial}{\partial x} \big[ \rho u (e + 1/2 u^2 + P / \rho) \big]

Where E = e + 1/2 u^2 is the fluid specific energy, and e is the internal (non-kinetic) specific energy.

We supplement these equations with an equation of state. For adiabatic shocks (a bad term, but it describes shocks where radiative losses are small), the equation of state is P = (\gamma - 1) \rho e.

Generally, a jump condition is a condition that holds at a near-discontinuity. It ignores the details of how a quantity changes across the discontinuity, and instead describes the quantity on either side.

The general conservation law for some quantity w is

\frac{d w}{d t} + \frac{d}{dx} f(w) = 0

A shock is a jump in w at some point x = x_s(t). We call x_1 the point just upstream of the shock, and x_2 the point just downstream.

A schematic defining the notation for the shock jump conditions

Integrating the conservation equation over the shock,

\frac{d}{dt} \big ( \int_{x_1}^{x_s(t)} w dx + \int_{x_s(t)}^{x_2} w dx \big ) = - \int_{x_1}^{x_2} \frac{d}{dx} f(w) dx

w_1 \frac{dx_s}{dt} - w_2 \frac{dx_s}{dt} + \int_{x_1}^{x_s(t)} w_t dx + \int_{x_s(t)}^{x_2} w_t dx = -f(w) |_{x_1}^{x_2}

This holds because \frac{ dx_1}{dt} = 0, \frac{dx_2}{dt} = 0 in the frame moving with the shock

Let x_1 \to x_s(t), ~ x_2 \to x_s(t), so that

\int_{x_1}^{x_s(t)} w_t dx \to 0, ~ \int_{x_s(t)}^{x_2} w_t dx \to 0

and in the limit S(w_1 - w_2) = f(w_1) - f(w_2)

Where S = \frac{dx_s(t)}{dt} = \text{characteristic shock speed}

S = \frac{f(w_1) - f(w_2)}{w_1 - w_2}

The upstream and downstream characteristic speeds are constrained via the Lax entropy condition,

\frac{d}{dw}f(w_1) \geq S \geq \frac{d}{dw}f(w_2)

Moving away from the generic w formalism and to the Euler equations, we find

      • S(\rho_2 - \rho_1) = \rho_2 u_2 - \rho_1 u_1
      • S(\rho_2 u_2 - \rho_1 u_1) = (\rho_2 u_2^2 + P_2) - (\rho_1 u_1^2 + P_1)
      • S(\rho_2 E_2 - \rho_1 E_1) = \big[ \rho_2 u_2 (e_2 + 1/2 u_2^2 + P_2 / \rho_2) \big] - \big[ \rho_1 u_1 (e_1 + 1/2 u_1^2 + P_1 / \rho_1) \big]

These are the Rankine Hugoniot conditions for the Euler Equations.

In going to a frame co-moving with the shock (i.e. v = S - u), one can show that

S = u_1 + c_1 \sqrt{1 + \frac{\gamma + 1}{2 \gamma} \big(\frac{P_2}{P_1} - 1\big)}

where c_1 is the upstream sound speed c_1 = \sqrt{\gamma P_1 / \rho_1}.

For a stationary shock, S = 0 and the 1D Euler equations become

\rho_1 u_1 = \rho_2 u_2

\rho_1 u_1^2 + P_1 = \rho_2 u_2^2 + P_2

\rho_1 u_1( e_1 + 1/2 u_1^2 + P_1 / \rho_1) = \rho_2 u_2 (e_2 + 1/2 u_2^2 + P_2 / \rho_2)

After more substitution and defining the specific enthalpy h = P / \rho + e, so that 2(h_2 - h_1) = (P_2 - P_1) (\frac1{\rho_1} + \frac1{\rho_2}), we obtain

      • \frac{\rho_2}{\rho_1} = \frac{P_2/P_1 (\gamma + 1) + (\gamma - 1)}{(\gamma + 1) + P_2 / P_1 (\gamma - 1)} = \frac{u_1}{u_2}
      • \frac{P_2}{P_1} = \frac{ \rho_2 / \rho_1 (\gamma + 1) + (\gamma - 1)}{ (\gamma + 1) + \rho_2 / \rho_1 (\gamma - 1)}
      • \frac{\rho_2}{\rho_1} \to \frac{\gamma + 1}{\gamma - 1} \textrm{ as } P_2/P_1 \to \infty

For strong (adiabatic) shocks and a monatomic gas (\gamma = 5/3),

\frac{\rho_2}{\rho_1} \to 4
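The density jump condition above is a one-line function, and the famous factor of 4 falls out in the strong-shock limit:

```python
def density_jump(p_ratio, gamma=5.0 / 3.0):
    """rho_2/rho_1 across a shock as a function of P_2/P_1, from the
    Rankine-Hugoniot jump condition in the text."""
    g = gamma
    return (p_ratio * (g + 1) + (g - 1)) / ((g + 1) + p_ratio * (g - 1))

print(density_jump(1.0))   # no pressure jump, no shock: 1
print(density_jump(1e6))   # strong shock: approaches (gamma+1)/(gamma-1) = 4
```

Note that no matter how strong an adiabatic shock is, the compression saturates at (\gamma+1)/(\gamma-1); only radiative shocks can compress further.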

Introduction to Instabilities


See this handout

Rayleigh-Taylor Instability


The Rayleigh-Taylor instability occurs when a more dense fluid rests on top of a less dense fluid, in the presence of a gravitational potential \phi = g z (though note that accelerations mimic gravity, so many driven jets, etc exhibit the RT instability without gravity)

Notation for the RT instability

To derive the instability, we will solve the momentum and continuity equations while satisfying boundary conditions at the a/b interface

The momentum equation is

\rho \frac{DV}{Dt} = \rho [ \frac{dv}{dt} + v \cdot \nabla v] = - \nabla P - \frac1{8 \pi} \nabla B^2 + \frac1{4 \pi} B \cdot \nabla B - \rho \nabla \phi

The continuity equation is

\frac{D \rho}{D t} = \frac{d \rho}{d t} + v \cdot \nabla \rho = -\rho \nabla \cdot v

To satisfy the continuity equation, we introduce a velocity potential

v = - \nabla \psi

The density is constant (in time) above and below the interface, so the continuity equation implies \nabla^2 \psi = 0

A solution satisfying \psi \to 0 for large |z| is

      • \psi = \psi_a = k_a e^{i \omega t - kz} sin(kx) above
      • \psi = \psi_b = k_b e^{i \omega t + kz} sin(kx) below

Where k = 2 \pi / \lambda is the wave number

Take the real part of \psi as the physical answer. Let B = 0, and ignore v \cdot \nabla v in the momentum equation. By integrating the momentum equation over space, we find

\rho \frac{d\psi}{d t} = P + \rho g z

which holds separately above and below the interface

The boundary conditions are that v_z,~ P need to be continuous across the interface

We can find the z of the interface, z_i, by integrating v_z over time, so

      • z_i = - \frac1{i \omega} \frac{d \psi}{dz}
      • z_i = \frac{k}{i \omega} \psi above
      • z_i = - \frac{k}{i \omega} \psi below

To first order in \psi

If v_z is continuous across the boundary then z_i must be the same above and below so

k \psi_a = - k \psi_b at z = z_i

This implies that k_a = -k_b

Continuity of pressure across the interface gives

P = i \omega \rho \psi - \rho g z

Using the equation for z_i and noting P_a = P_b at z=0 gives

\omega^2 = g k \frac{\rho_b - \rho_a}{\rho_b + \rho_a}

If \rho_b < \rho_a then \omega is imaginary, and the instability grows with time.
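The dispersion relation above directly gives the e-folding growth rate of unstable modes. A sketch in cgs units; the water-over-air example and g = 980 cm/s^2 are illustrative assumptions:

```python
import math

def omega_squared(k, rho_above, rho_below, g=980.0):
    """RT dispersion relation: omega^2 = g k (rho_b - rho_a)/(rho_b + rho_a);
    a negative omega^2 means an unstable, growing mode."""
    return g * k * (rho_below - rho_above) / (rho_below + rho_above)

def growth_rate(k, rho_above, rho_below, g=980.0):
    """e-folding growth rate (1/s) when the dense fluid sits on top, else 0."""
    w2 = omega_squared(k, rho_above, rho_below, g)
    return math.sqrt(-w2) if w2 < 0 else 0.0

# Water (above) over air (below), wavelength ~ 1 cm, so k = 2 pi / 1 cm:
print(growth_rate(2 * math.pi, rho_above=1.0, rho_below=1.2e-3))
```

Shorter wavelengths (larger k) grow faster in this incompressible analysis; in real fluids, viscosity and surface tension cut off the smallest scales.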

Ionization fronts, HII regions, Stellar Winds, and Disk Structure


Although these 4 objects exist at different scales, their evolutions are coupled. Let’s explore this connection.

Ionization fronts are caused by stars with large fluxes of ionizing photons. We see them around HII regions. We observe emission from several ionized species, especially as they recombine and cool.

We would like to know how ionization fronts form, expand, and emit. We would also like to know their lifetime, and how different ionization fronts map to different stars in a crowded environment.

The expansion of HII regions is influenced by the time dependent evolution of both the ionization front and the stellar wind with the surrounding medium. Draine 37.2 calculates the expansion of an HII region into a uniform medium

Stellar Winds

We talked about bi-polar flows, but these are a special type of stellar wind. Other winds are less collimated. Different kinds of winds are associated with stars at different evolutionary states:

Pre main sequence stars: T-tauri winds

Main Sequence winds (like the solar wind)

Post-Main Sequence winds / planetary nebulae

Disk Structure

Disks around stars are accretion disks. The structure of a gravitationally bound disk is governed by Keplerian rotation. However, accretion disks have a temperature structure giving rise to thermal forces (plus magnetic/turbulent forces) that can change their internal structure and dynamics

Ionization Fronts


See chapter 20 of Shu's Physics of Astrophysics

Note that the mean free path of photons in an ionized gas is much larger than the mean free path in neutral gas:

      • Neutral Gas: the cross section for interaction is the size of an atom, or \sigma \sim 10^{-17} {\rm cm^{2}}.
      • Ionized Gas: interaction is via Thomson scattering off free electrons, with \sigma \sim 10^{-24} {\rm cm^{2}}
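To make the contrast concrete, here is a quick estimate of the photon mean free path \lambda = 1/(n\sigma) in each case, assuming an illustrative density of n = 1 cm^{-3}:

```python
PC = 3.086e18             # cm per parsec
n = 1.0                   # particle density, cm^-3 (illustrative assumption)
sigma_neutral = 1e-17     # cm^2, atomic-scale cross section
sigma_ionized = 6.65e-25  # cm^2, Thomson cross section

mfp_neutral_pc = 1.0 / (n * sigma_neutral) / PC  # ~0.03 pc: opaque on cloud scales
mfp_ionized_pc = 1.0 / (n * sigma_ionized) / PC  # ~5e5 pc: effectively transparent
```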

We already discussed the Stromgren radius, which sets the size of an idealized HII region by balancing the stellar ionizing photon flux with the rate at which the ionized volume recombines and thus quenches this flux:

R_s = (\frac{3}{4 \pi} \frac{N_\star}{n_0^2 \alpha})^{1/3}

However, this ionized region will have a pressure much higher than the cool, neutral exterior. Thus, HII regions expand over time, and both pressure and ionization fronts are driven into the neutral medium.
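Plugging numbers into the Strömgren formula for a typical O star (the values below are illustrative assumptions, not from the text):

```python
import math

N_star = 1e49      # ionizing photon rate, s^-1 (typical O star; assumed)
n0 = 10.0          # ambient hydrogen density, cm^-3 (assumed)
alpha_B = 2.6e-13  # case-B recombination coefficient at ~10^4 K, cm^3 s^-1

# R_s = (3 N_star / (4 pi n0^2 alpha))^(1/3)
R_s = (3.0 * N_star / (4.0 * math.pi * n0**2 * alpha_B))**(1.0 / 3.0)  # cm
R_s_pc = R_s / 3.086e18   # ~15 pc
```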

Expansion of HII regions


Stage 1

{graphic}

      • Stage begins when a star turns on in a neutral medium
      • Rapid expansion of ionization front into static HI cloud
      • Very little fluid motion (photons travel much faster than shock waves!)
      • This stage ends when the radius approaches the Stromgren radius, and the ionizing photons are quenched by recombinations

Stage 2

      • The hot ionized HII region is over-pressured compared to the ambient exterior. This drives a radiative shock
      • Sweeps up HI gas into a shell. The ionization front is interior to this shell
      • Ends either when the star explodes as a supernova, or when the radius reaches R_final, where recombinations balance ionizing photons and the interior and exterior are in pressure balance
      • Realistically, inhomogeneities in the surrounding medium mean that the HII region will evolve in more complicated ways than this
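The Stage 2 expansion has a classic analytic solution (Spitzer 1978; see Draine 37.2): R(t) = R_s (1 + 7 c_i t / 4 R_s)^{4/7}, with c_i ~ 10 km/s the ionized-gas sound speed. A sketch, with an assumed initial Strömgren radius:

```python
def hii_radius(t, R_s, c_i=1.0e6):
    """Spitzer expansion of an HII region: R(t) in cm, t in s, c_i in cm/s."""
    return R_s * (1.0 + 7.0 * c_i * t / (4.0 * R_s))**(4.0 / 7.0)

R_s = 15.0 * 3.086e18        # assumed 15 pc initial Stromgren radius, cm
t_myr = 3.156e13             # 1 Myr in seconds
R1 = hii_radius(t_myr, R_s)  # the region has grown well beyond R_s
```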

Stellar Winds


See Lamers and Cassivelli 1999

Wind driving mechanisms:

For cool stars (T ~ 5000 K), winds are driven by the pressure expansion of the hot corona. This is like our own Sun.

For hot stars (T = 10,000 – 100,000 K at the surface), the lack of strong convection inhibits the creation of a hot corona. The surfaces of these stars are not hot enough for particles to escape the gravitational potential; instead, these stars drive winds directly via radiation pressure. These winds travel at v, and have mass loss rates of \sim 10^{-4} M_\odot {\rm yr^{-1}}

The wind mechanism for pre main sequence stars is debated, but probably involves some interaction between an accretion disk and magnetic field

Disk Structure


The initial evidence for accretion disks around young stars was the SED, which shows an excess of IR emission due to accretion-powered luminosity.

The pioneering paper in this field is Adams, Lada and Shu 1987. See also the review paper by Lada 1999

Young stars were initially divided into evolutionary classes via the shapes of their IR SEDs:

Class 0: SED looks like a single, cold blackbody. Emission is due entirely to a cold core, with no central source

Class 0 YSO

Class I: A large infrared excess, with a rising SED in the infrared. Strong IR excess (compared to the blackbody of a hot, compact central source) is due to a massive disk and outflow

Class I YSO

Class II: A flat IR SED, as the flux of the central source rises, and the disk dissipates

Class III: A falling IR SED. The accretion disk is mostly depleted, and adds only a small infrared excess on top of the SED of the central object.

Class III YSO
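These classes are commonly quantified by the infrared spectral index \alpha = d\log(\lambda F_\lambda)/d\log\lambda, measured between roughly 2 and 25 microns (exact class boundaries vary by author; this is a hedged sketch, not the definitive scheme):

```python
import math

def ir_spectral_index(lam1, F1, lam2, F2):
    """alpha = d log(lambda F_lambda) / d log(lambda), between two wavelengths (microns)."""
    return (math.log10(lam2 * F2) - math.log10(lam1 * F1)) / \
           (math.log10(lam2) - math.log10(lam1))

# SED rising into the IR (Class I behavior): positive alpha
alpha_rising = ir_spectral_index(2.2, 1.0, 25.0, 10.0)

# SED falling steeply into the IR (Class III behavior): strongly negative alpha
alpha_falling = ir_spectral_index(2.2, 1.0, 25.0, 1e-3)
```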

Disks in the context of Star and Planet Formation


For excellent references on star and planet formation, see “Protostars and Planets V”, as well as Stahler and Palla’s “The Formation of Stars”

What is a protostar?

I like the wikipedia entry:

“A contracting mass of gas that represents an early stage in the formation of a star, before nucleosynthesis has begun.”

Since fusion is a negligible energy source in a protostar, its luminosity comes from gravitational contraction. From the virial theorem, half of the gravitational potential energy gain from contraction is converted into kinetic energy, while the other half is radiated away. In other words, for a homogeneous sphere, E_{\rm rad} = \frac{3}{10} \frac{GM^2}{R}.

For a 1 solar mass star contracted to 500 R_\odot,~ E = 2 \times 10^{45} erg.
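A quick check of this number from the formula above:

```python
G = 6.674e-8       # gravitational constant, cgs
M_sun = 1.989e33   # g
R_sun = 6.957e10   # cm

# E_rad = (3/10) G M^2 / R for a 1 M_sun protostar contracted to 500 R_sun
E_rad = 0.3 * G * M_sun**2 / (500.0 * R_sun)   # erg; ~2e45, as quoted
```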

What is a planet?

Wikipedia says: “A celestial body moving in an elliptic orbit around a star.” This isn't quite precise enough: it could also describe an asteroid, KBO, binary star, brown dwarf, etc.

How do planets form?


See this post

Key and Open Questions Regarding the Intergalactic Medium


See also this post

The Kennicutt-Schmidt relation empirically links the gas surface density and star formation rate of galaxies. What is the physical explanation for this? In particular, is there a more prescriptive form or interpretation of the KS relation that is more astrophysical (and perhaps more accurate at predicting star formation rates)?

Not all galaxies are created equal(ly)

What is the origin of spiral galaxies? Is the spiral density wave theory of Lin and Shu correct, or might the GMC perturbation theory of D’Onghia and Hernquist be more apropos? Is the formation of grand-design spirals different from that of flocculent spirals?

Where do ellipticals come from? They are most likely the remnants of mergers:

      • They have no ordered rotation (suggests random mixing)
      • Old stars, little or no dust/gas
      • Often at the center of clusters
      • Always have central Black Holes

“Young” merger galaxies (i.e. before a galaxy has undergone several mergers and formed an elliptical) have extreme star formation rates, ranging from 100s to 1000s of stars per year (by comparison, the SFR in the Milky Way is ~1 M_\odot/yr).

Significant Variations in the Gas/Dust Ratio and Metallicity

This is true both within galaxies and among galaxies.

How does this affect the K-S relationship?

What are the Kennicutt-Schmidt relations?

See also this post

The “Schmidt Law” is due to Marten Schmidt (1959): “The mean density of a galaxy may determine, as a first approximation, its present gas content and thus its evolution stage”

\dot{M} \propto n_{\rm gas}^2

Here, n is the volume density.

The “Kennicutt Law” is similar to the Schmidt law, but uses the surface density:

\Sigma_{\rm SFR} = a \Sigma_{\rm gas}^q

where, typically, q ≈ 1.4 (the Schmidt law above would give q = 1.5). Notably, q is not 1, which would imply a constant star formation efficiency.
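As a sketch, adopting the Kennicutt (1998) zero point a ≈ 2.5 × 10^{-4} (an assumed normalization, with \Sigma_{\rm gas} in M_\odot pc^{-2} and \Sigma_{\rm SFR} in M_\odot yr^{-1} kpc^{-2}):

```python
def sigma_sfr(sigma_gas, a=2.5e-4, q=1.4):
    """Kennicutt law: Sigma_gas [M_sun/pc^2] -> Sigma_SFR [M_sun/yr/kpc^2]."""
    return a * sigma_gas**q

# Because q > 1, doubling the gas surface density more than doubles the SFR:
boost = sigma_sfr(20.0) / sigma_sfr(10.0)   # = 2**1.4, about 2.6
```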

Some tracers of star formation rate:

      • UV flux (from unobscured massive young stars)
      • H-alpha flux (also from young stars)
      • FIR flux – reprocessed starlight (but that also depends on gas surface density, so it's risky to use in the KS law. That doesn't mean people don't do it!)

Some tracers of surface density:

      • HI
      • CO
      • HI+CO
      • HCN and other tracers that preferentially probe high surface densities. Gao and Solomon 2004 quote \log L_{\rm IR} = (1.00 \pm 0.05) \log L_{\rm HCN} + 2.9

A big issue in using the KS law is how “independent” the tracers of surface density and star formation rate are. Also, the KS relationship extends over many orders of magnitude. Because of this, it's unclear whether KS says something important about the conversion of gas into stars, or rather something trivial like “bigger galaxies have more gas, and form more stars”.

The ISM at very high redshift


See this post

Special Topics


See this post

How do planets form?

In Uncategorized on April 12, 2011 at 1:15 am

This is NOT a question to which we yet “know” the answer. Instead, we know that the answer isn't likely to be exactly the same for all planets (e.g. maybe gas giants form differently from smaller rocky planets?).  We also think we know about some of the processes that may be important.

Here’s a quick list of considerations (many of which are shown in this “game” video!):

1.  Disk formation. It is widely agreed that the starting point for planet formation is the disk that forms as material accretes from a molecular cloud core onto a disk around a forming star. (This illustration of “10-steps to star formation,” shows where disk formation fits into the larger star/disk/planet formation process.)  One very popular analytic theory of disks around young stars, and their associated outflows, is called the “X-wind” model.  The X-wind model is due to Frank Shu and colleagues, and it assigns a key role to the magnetic field, in slowing down the rotation of the disk and in generating bipolar outflows.  (This link illustrates the difference between “X-wind” and “D-wind” (or “disk-wind”) models.  In the D-wind, the outflow comes from the disk, rather than the X-point, where the magnetic field of the star connects to the disk.)

1a. Dust v. Gas. Circumstellar disks contain plenty of gas AND dust.  Some theories focus on the sticking of dust grains together into “planetesimals” as the first important step in planet formation, while others focus on instabilities that can cause the fragmentation of the whole disk into over-dense blobs that could be the seeds for future planets.  Also, the relative amounts and distribution of dust and gas (due to their differing opacities) will affect the internal (e.g. “dead zone”, cf. Gammie 1996) and external (e.g. flaring, e.g. van Boekel et al. 2005) structure of the disk.  These arrangements change over time (see “Time Evolution”, below).

from "Dynamics of Protoplanetary Disks," Phil Armitage, ARA&A, 2011.

2. Onset of planet formation. It is NOT clear when in the lifetime of a circumstellar disk planets begin to form.  It is fair to say that there are many competing theories! This video gives an overview of what happens, according to a “consensus” view…details, however, are not shown.  It’s also fair to say that it’s widely believed in 2011 that planet formation takes place roughly at the same time as the central star (or its disk) forms, rather than fully afterwards.

2a. Role of migration. Under many theories, it is far easier to form planets farther from the star, where material has a better chance of sticking together (see “Snow line”, below), and there’s more of it.  So, many theories (Wikipedia) rely upon forming planets at large distances and then letting them “migrate” inward (or outward) due to angular momentum exchange within the disk, as or after they form.  In some theories, many “early” planets crash into the star having been gravitationally drawn there as they migrate inward, and the solar systems we see are just what’s “leftover” when migration ends.

3. Turbulence in disks. In some theories, turbulence is harmful to planet formation, because it can increase the velocity dispersion amongst solid particles, potentially inhibiting growth to larger “planetesimals” through frequent, destructive collisions.  In other theories, turbulence is helpful, because it can create vortices which offer regions of relatively reduced velocity dispersion, therefore making it easier for particles to stick to each other.

Schematic Overview of Key Processes in Protoplanetary Disks, from Armitage 2011 ARA&A.

4. “Snow line.” Ice can be very sticky.  (Witness wet hands trying to pick up ice cubes w/o any “sticking” issues!)  It is widely believed that ice helps dust stick together.  So, in order for dust particles to stick (at least a little) when they collide (note that they also tend to break apart), a nice icy coating helps.  So, a key question for any disk is “how far from the star do you have to be before key molecules are solids, rather than gases”?  That distance is called the “snow line,” and for our solar system it’s about 2.7 AU, illustrating that it is unlikely that the Earth (at 1 AU) formed where we find it now.   The solid particles that form beyond the snow line can grow to be either rocky planets, or the gravitational seeds for gas giants.
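A back-of-the-envelope snow-line estimate: a blackbody in equilibrium around the Sun has T ≈ 278 K (a/AU)^{-1/2}; setting T equal to an assumed water-ice condensation temperature of ~170 K recovers the ~2.7 AU quoted above.

```python
import math

def t_equilibrium(a_au, L_lsun=1.0):
    """Blackbody equilibrium temperature (K) at a_au AU from a star of luminosity L_lsun L_sun."""
    return 278.0 * L_lsun**0.25 / math.sqrt(a_au)

T_ice = 170.0                 # K, rough ice condensation temperature (assumed)
a_snow = (278.0 / T_ice)**2   # AU; ~2.7 for the Sun
```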

from “The Genesis of Planets” by Doug Lin–illustration by Don Dixon ©Scientific American/Nature Publishing Group 2008

5. Time Evolution.  The structure of circumstellar protoplanetary disks changes over time.  There may be a very long period of planet formation/migration, but eventually, the material will be used up, and will also potentially erode due to radiation from the central star.  The figure below, from Williams & Cieza’s 2011 Annual Reviews article, gives a nice breakdown of key phases, and also shows the nature of gas and solids clearly.

The Evolution of a Circumstellar Disk

Full caption:  The evolution of a typical disk. The gas distribution is shown in blue and the dust in brown. (a) Early in its evolution, the disk loses mass through accretion onto the star and FUV photoevaporation of the outer disk. (b) At the same time, grains grow into larger bodies that settle to the mid-plane of the disk. (c) As the disk mass and accretion rate decrease, EUV-induced photoevaporation becomes important, the outer disk is no longer able to resupply the inner disk with material, and the inner disk drains on a viscous timescale (~10^5 yr). An inner hole is formed, accretion onto the star ceases, and the disk quickly dissipates from the inside out. (d) Once the remaining gas photoevaporates, the small grains are removed by radiation pressure and Poynting-Robertson drag. Only large grains, planetesimals, and/or planets are left. This debris disk is very low mass and is not always detectable.

5a. Debris disks.  As shown above, once all of a solar system’s eventual planets have formed and the gas is largely gone, there is still leftover particulate matter.  The disk of “leftovers” (e.g. asteroids and comets), which can grind themselves into smaller and smaller pieces through collisions, is known as a “debris disk.”  Here’s an artistic video “showing” the debris disk around the star b-Pic (based on observational data).  These debris disks are low-mass, but bright enough to be detected in the sub-mm.  Their appearance depends strongly on inclination, as shown in this figure compiled for the JCMT/SCUBA-2 “Debris Disk Legacy Survey”:

FIGURE 1: Debris disks seen with SCUBA, including (l-to-r) τ Ceti, ε Eridani, Vega (α Lyr), Fomalhaut (α PSa) and η Corvi. The disks are shown to the same physical scale i.e. as if all at one distance; actual distances are 3 to 18 pc. Sketches at the bottom demonstrate the disk orientations, and the star symbols are at the stellar positions. The spectral types and stellar ages are (l-to-r) G8 V / 10 Gyr, K2 V / 0.85 Gyr, A0 V / ~ 0.4 Gyr, A3 V / 0.3 Gyr and F2 V / ~ 1 Gyr. The images are at 850 μm except for η Corvi at 450 μm.

Suggested reading:

  • The Smithsonian Submillimeter Array has blazed a path (to soon be followed by ALMA) in observing Protoplanetary Disks, and the Protoplanetary Disks Research Group web page at the Harvard-Smithsonian Center for Astrophysics has links to several key publications.
  • Fantastic 2008 Scientific American article on the “Genesis of Planets” by Doug Lin…here’s an excerpt from its introduction: “The study of planet formation lies at the intersection of astrophysics, planetary science, statistical mechanics and nonlinear dynamics. Broadly speaking, planetary scientists have developed two leading theories. The sequential accretion scenario holds that tiny grains of dust clump together to create solid nuggets of rock, which either draw in huge amounts of gas, becoming gas giants such as Jupiter, or do not, becoming rocky planets such as Earth. The main drawback of this scenario is that it is a slow process and that gas may disperse before it can run to completion.
    The alternative, gravitational-instability scenario holds that gas giants take shape in an abrupt whoosh as the prenatal disk of gas and dust breaks up—a process that replicates, in miniature, the formation of stars. This hypothesis remains contentious because it assumes the existence of highly unstable conditions, which may not be attainable. Moreover, astronomers have found that the heaviest planets and the lightest stars are separated by a “desert”—a scarcity of intermediate bodies. The disjunction implies that planets are not simply little stars but have an entirely different origin.
    Although researchers have not settled this controversy, most consider the sequential-accretion scenario the most plausible of the two.”

Formation of planetesimals

In Uncategorized on April 11, 2011 at 9:09 pm

Introduction

Terrestrial planets are thought to be built up through the collisions of many smaller objects. Our story begins in the remains of the stellar accretion disk which, for a sun-like star, is 99% gaseous hydrogen and helium by mass.  Terrestrial planets (and, it is thought, the cores of giant planets) are created from the remaining 1% of the mass, which consists of tiny solid grains.  Collisions of these micrometer-sized dust grains can eventually create planets like Earth.

Read the rest of this entry »

The Magnetorotational Instability

In Uncategorized on April 11, 2011 at 12:53 am

Introduction

The MRI is an instability that arises from the action of the magnetic field in a differentially rotating system (i.e. a disk), and can lead to large scale mixing and turbulence very quickly (MRI grows on a dynamical timescale t_{MRI} \propto 1/\Omega where \Omega is the rotational frequency). The necessary conditions for the MRI to develop are the following:

  • There is a weak poloidal magnetic field (i.e. the field points in a direction normal to the disk)
  • The disk rotates differentially, where \frac{d\Omega}{dR} < 0

Given that magnetic fields are ubiquitous and that astrophysical disks (which rotate differentially) are commonplace, the MRI arises in a huge diversity of astrophysical settings (including X-ray binaries, the Galactic disk, and protoplanetary disks).
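For a protoplanetary disk in Keplerian rotation both conditions are easily met (d\Omega/dR < 0 everywhere), and the growth time is tiny compared to the disk lifetime; a sketch at 1 AU (illustrative numbers assumed):

```python
import math

G, M_sun, AU, YR = 6.674e-8, 1.989e33, 1.496e13, 3.156e7

Omega = math.sqrt(G * M_sun / AU**3)   # Keplerian angular frequency at 1 AU, s^-1
t_mri_yr = (1.0 / Omega) / YR          # ~0.16 yr: MRI grows on a dynamical time
dlnOmega_dlnR = -1.5                   # Keplerian rotation: dOmega/dR < 0, so MRI-unstable
```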

 

Read the rest of this entry »

GMC Formation and Spiral Spurs

In Uncategorized on April 7, 2011 at 5:17 am

Fig 1. M51: Composite Hubble Image (Strong m=2 response and interarm spur features)

There are several related questions that we want to address:

  1. Galaxy evolution. How does a Milky Way type galaxy evolve (in isolation)?
  2. Spiral Structure. What is it, how does it form, where does it occur, when, and why?
  3. Star Formation. We may have a good idea of what stars are, but: how do they form? where? when?

We know stars form in dense, cold regions of the ISM. These condensations are themselves part of larger, slightly less dense regions which we call giant molecular clouds (GMCs). Once a galaxy forms GMCs, however, they strongly affect both the structure and subsequent evolution of the galaxy. In particular,

  1. The cloud formation rate sets the overall rate and nature of star formation.
  2. Clouds change the balance of ISM phases (cold/warm/hot as well as molecular/atomic).
  3. Clouds can modify the galactic dynamics, perhaps inducing or preventing further spiral structure.

Giant molecular clouds also influence the formation of each individual star, since the GMC properties (mass, density spectrum, magnetic field, turbulent velocity field, angular momentum) are precisely the initial conditions for star formation. It seems, then, that it would be advantageous to understand how such structures form. There are two basic mechanisms:

  1. Bottom-up / “coagulation” – smaller clouds build up over time (via collisions) into GMC size objects.
  2. Top-down – we must invoke some large scale disk instability or mechanism to allow clouds to condense from the diffuse ISM.

The most obvious choice is gravity. As a long range force it will naturally bring the ISM together over large scales into increasingly dense regions. However, there are other important effects (and consequences): differential rotation (shear), magnetic fields (magnetic pressure and tension forces), turbulence (multiscale perturbations of gas properties), and stellar spiral arms (which induce local variations in the ISM density, velocity, and magnetic field).

In a series of papers from 2002–2010, Ostriker, Shetty, and Kim explore the competition among these processes and their role in the formation of giant molecular clouds, particularly within features dubbed “spiral spurs” or “feathers”. They leverage the standard strength of numerical simulations, namely the ability to disentangle the physics of highly nonlinear systems by selectively turning certain processes on or off, isolating their contributions, and determining which are dominant in various regimes. They solve the magnetohydrodynamics equations for gas, including a source term for self-gravity as well as an externally imposed spiral perturbation, modeled as a rigidly rotating potential with some fixed pattern speed:

Fig 2. Equations of MHD and gas-self gravity (1-4) with the external spiral potential (5).

Their suite of simulations progressed from 2D to 3D, and from local shearing periodic box (a small patch of a spiral arm comoving with the disk) to global simulations of the entire disk. There are several important effects of spirals worth mentioning:

  1. The characteristic timescale for self-gravity condensations is given by t_J = c_s / G \Sigma . So, in an arm, where there is a natural enhancement of the surface density \Sigma, the condensation process is more efficient.
  2. In a spiral arm there is a local shear reduction. Specifically, for a flat rotation curve with V(r) = Const we have for the local gradient in the angular velocity d \ln{\Omega} / d \ln{R} = \Sigma / \Sigma_0 - 2 where \Sigma_0 is the azimuthal average. For instance, for greater than a factor of two overdensity we can actually reverse the direction of the shear. The important point: this allows more time for condensations to grow before they are sheared out.
  3. Consider the dispersion relation for a shearing disk + magnetic fields, in the weak shear limit. Then the instability criterion reduces to exactly that of the 2D Jeans analysis (+ thick disk gravity), in the absence of either rotation or magnetic fields! That is, the presence of a B field removes the stabilizing effect of galactic rotation. Additionally, because magnetic tension forces share angular momentum between neighboring condensations, the effect is to resist epicyclic motions across field lines, and contracting regions are able to grow. This is the so called “MJI” or magneto-Jeans instability.

MJI initially develops in the densest region of the arm and is then convected downstream, out of the arm. The interarm shear then creates the characteristic “spur” shape, which naturally have a trailing sense due to the background differential rotation.

Fig 3. Final density structure of Kim & Ostriker (2002) shearing box MHD simulations, shown in the frame comoving with the spiral pattern.

Ostriker et al. find this to be an efficient mechanism for GMC formation. Several quantitative predictions can then be made based on the simulation results. For instance:

  1. The spur spacing is ~ few times the Jeans length L_J = c_s^2 / G \Sigma .
  2. The surface density enhancement in spurs drives Toomre’s Q parameter for the gas (which scales as 1 / \Sigma ) to be locally unstable.
  3. Fragmentation into ~ Jeans mass (10^6 - 10^7 M_{\odot}) clumps along the length of the arm is associated with GMCs.
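Plugging illustrative arm values into these scalings (the numbers below are assumptions, not taken from the papers) shows that the associated mass \Sigma L_J^2 indeed lands in the quoted GMC range:

```python
G, M_sun, PC = 6.674e-8, 1.989e33, 3.086e18

c_s = 6.0e5                    # effective sound speed, cm/s (assumed)
Sigma = 12.0 * M_sun / PC**2   # arm gas surface density ~12 M_sun/pc^2 (assumed)

L_J = c_s**2 / (G * Sigma)     # Jeans length L_J = c_s^2 / (G Sigma), cm
M_J = Sigma * L_J**2 / M_sun   # ~6e6 M_sun, within the quoted 1e6-1e7 clump range
```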

To conclude, consider the image at the top of this page of the “grand-design spiral” galaxy M51, taken with HST (H\alpha, V, I) in 2002, which provided strong observational motivation for the studies described herein. The reader is left to draw their own conclusions!


Supernova Remnants – in Theory and Practice

In Uncategorized on April 4, 2011 at 3:30 pm

Supernova Remnants Discussion – Important Topics

See also these handwritten notes

Stage 0: The Supernova Explosion

  • 1e53 erg carried away by neutrinos!
  • Ejecta properties: Mass ~ 1 solar mass, v ~ 10,000 km/s, Kinetic Energy ~ 1e51 erg.
  • Typical peak absolute visual magnitude ~ -18, duration ~ 3 months, energy radiated in supernova ~ 1e49 erg
  • Light curves: Type I (no hydrogen) – faster ejecta, powered by decay of radioactive Ni; Type II – slower ejecta, powered by recombination of Hydrogen
  • Primary emission mechanism – BB radiation from thermal ejecta

Stage I: The Ejecta-dominated Phase

  • Free expansion, Hubble flow model
  • Importance of B-fields, formation of shocks
  • Velocity structure of the ejecta
  • Primary emission mechanism – shocked ejecta: thermal bremsstrahlung (X-rays); shocked ISM: radio synchrotron
  • Validity – as long as Ejecta mass >> swept-up ISM mass

Stage II: The Sedov-Taylor Phase

  • Echoes of thunder
  • S-T model: validity, dimensional analysis, homologous model (blast wave position and velocity), Sedov solution (without derivation) + plots & caveats, post-shock temperature
  • Primary emission mechanism – Optical: forbidden lines ([O I], [Fe], [Mg I]) in type I SNe; X-rays: free-free from shocked ejecta + auroral lines; Radio: synchrotron from forward shock
  • Validity – t < cooling time of ejecta, non-spherical remnants
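The homologous blast-wave scaling R ≈ 1.15 (E t^2/\rho)^{1/5} is easy to evaluate; a sketch for typical SNR parameters (the values below are assumptions):

```python
E = 1.0e51                    # explosion energy, erg
n, mu, m_H = 1.0, 1.4, 1.67e-24
rho = mu * m_H * n            # ambient density, g/cm^3

t = 1.0e3 * 3.156e7           # remnant age of 1000 yr, in seconds
R_cm = 1.15 * (E * t**2 / rho)**0.2   # Sedov-Taylor shock radius, cm
R_pc = R_cm / 3.086e18                # ~5 pc

v_shock = 0.4 * R_cm / t      # v = dR/dt = (2/5) R/t, cm/s (~2000 km/s)
```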

Stage III: The Snowplow Phase

  • Radiative cooling + timescales, shell formation
  • Instabilities
  • Primary emission mechanism – Optical: forbidden lines [O III], [S II]; X-rays: free-free from shocked ISM; Radio: synchrotron from forward shock
  • Effect of internal pressure on evolution

Resources

Books:
Physical Processes in the Interstellar Medium, Lyman Spitzer, Wiley-Interscience Pub., 1978, section 10.2 (shocks) and 12.2 (supernovae).
Physics of the ISM and IGM, Bruce Draine, PUP, 2011, section 39.1

Online:
http://www.astronomy.ohio-state.edu/~ryden/ast825/ch5-6.pdf
http://wapedia.mobi/en/Supernova_remnant

Numerical Simulations of explosion:
http://qso.lanl.gov/~clf/

Does the IMF come from the CMF?

In Uncategorized on March 31, 2011 at 2:55 am

Does the IMF come from the CMF?

The IMF (Initial Mass Function) is the distribution of masses at which stars were formed. It has been measured empirically for stars in a variety of environments. This excerpt from McKee and Ostriker 2007 describes our current knowledge of the IMF (bolding is mine):

Read the rest of this entry »

Notes on Héctor Arce’s Guest Lecture on Outflows

In Uncategorized on March 29, 2011 at 10:00 pm

Slides, Part 1

Introduction

Driving mechanism is MHD winds (not going to talk much about launching mechanism)

Manifestations

HH knots in the optical; NIR 2.12 micron H_2 emission (can trace slower ~50 km/s material than optical HH knots)

CO emission (redshifted & blueshifted) traces “molecular outflow” made of entrained and accelerated gas

History

  • Herbig & Haro’s observations 1950’s
  • shock-like spectra 1970’s (Schwartz 1975)
  • proper motions (fast!) late 70’s early 80’s
  • 1980’s observations showed very small extents ~0.1 pc
  • HST images 1990s, showing complex internal structure
  • “giant HH flows” found to be common in 1990’s, e.g. PV Ceph
  • episodicity appreciated/measured 1990’s/2000’s

Blackboard Discussion, Part 1

Basic Shock Physics

Radiative shock: the shock cools by emitting radiation more efficiently than by adiabatic expansion (t_rc < t_ac); typically v ~ a few 10’s to a few 100’s of km/s, n >~ 10^4 to 10^5 cm^-3

Non-radiative shocks: …  typical of the early phases of supernova remnants; n < 10^3 to 10^4 cm^-3; v ~ 10^3 km/s; mostly detected in X-rays.  Also: AGN jets are non-radiative, UNLIKE proto-stellar jets

n_e = electron density, from the [S II] 6716, 6731 Angstrom lines

Jet velocity from emission lines: shock line emission (strengths & line shapes) plus shock models can give velocities

proper motions can also of course give velocities, but they take longer

X_e=ionization fraction (line ratios & models)

reviews: Hartigan et al. 1994; Bacciotti et al. 1996, 1997, 1999, 2000; Hartigan et al. 2000 (Protostars and Planets IV)

Mass loss rate = v_jet • (n_e/X_e) • (pi r^2) • (mu m_H), where n_e/X_e converts the electron density to a total density

typically 10^-8 to 10^-9 solar masses/year

best done with lines that are close in wavelength, so that reddening is not very different from one line to the next
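A sketch of this mass-loss estimate, with the total hydrogen density recovered from the electron density and ionization fraction (all values below are assumed, illustrative numbers, not from the lecture):

```python
import math

mu, m_H = 1.4, 1.67e-24
v_jet = 2.0e7           # jet speed, 200 km/s (assumed)
n_e, x_e = 100.0, 0.1   # electron density (cm^-3) and ionization fraction (assumed)
r = 1.0e15              # jet radius, cm (assumed)

n_H = n_e / x_e                                 # total hydrogen density, cm^-3
Mdot = mu * m_H * n_H * v_jet * math.pi * r**2  # mass flux through the jet cross section, g/s
Mdot_msun_yr = Mdot * 3.156e7 / 1.989e33        # ~2e-9 M_sun/yr, in the quoted range
```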

drawing of anatomy of (jet) shock

Spectral-Line Mapping of Bipolar Flows

drawing…how outflow mapping is done

N_CO=column density

X_CO=abundance ratio, ~10^-4

M_H_2 = mu • m_H • N_CO / X_CO

Total mass = Sum over x, y, z directions of M_H_2

P=M(v) • v

E=1/2 M(v_o) • v_o^2

 

Magnetic Fields and Spiral arms in M51

In Uncategorized on March 24, 2011 at 9:54 pm

Paper Discussion by Gongjie Li

 Read the Paper by Fletcher et al. (2011)

ABSTRACT

We use new multiwavelength radio observations, made with the VLA and Effelsberg telescopes, to study the magnetic field of the nearby galaxy M51 on scales from 200 pc to several kpc. Interferometric and single-dish data are combined to obtain new maps at λλ3, 6 cm in total and polarized emission, and earlier λ20 cm data are reduced. We compare the spatial distribution of the radio emission with observations of the neutral gas, derive radio spectral index and Faraday depolarization maps, and model the large-scale variation in Faraday rotation in order to deduce the structure of the regular magnetic field. We find that the λ20 cm emission from the disc is severely depolarized and that a dominating fraction of the observed polarized emission at λ6 cm must be due to anisotropic small-scale magnetic fields. Taking this into account, we derive two components for the regular magnetic field in this galaxy; the disc is dominated by a combination of azimuthal modes, m = 0, 2, but in the halo only an m = 1 mode is required to fit the observations. We discuss how the observed arm–interarm contrast in radio intensities can be reconciled with evidence for strong gas compression in the spiral shocks. In the inner spiral arms, the strong arm–interarm contrasts in total and polarized radio emission are roughly consistent with expectations from shock compression of the regular and turbulent components of the magnetic field. However, the average arm–interarm contrast, representative of the radii r > 2 kpc where the spiral arms are broader, is not compatible with straightforward compression: lower arm–interarm contrasts than expected may be due to resolution effects and decompression of the magnetic field as it leaves the arms. We suggest a simple method to estimate the turbulent scale in the magneto-ionic medium from the dependence of the standard deviation of the observed Faraday rotation measure on resolution. We thus obtain an estimate of 50 pc for the size of the turbulent eddies.

1) Brief introduction on M51:

M51, the Whirlpool Galaxy, is a spiral galaxy 23 ± 4 Mly away from us. It was the first external galaxy in which polarized radio emission was detected, and one of the few external galaxies where optical polarization has been studied.

2) Main points of the paper:

  1. The authors mapped the polarized emission of M51
  2. The authors used Faraday rotation and depolarization to introduce a new method to estimate the size of the turbulence cell
  3. The authors fit the polarization angles using a superposition of azimuthal magnetic field modes. They noticed the difference in the dominant modes between the disk and the halo of M51.
  4. The authors calculated the arm and interarm contrast in the B field, and gave an explanation for the long-standing arm–interarm contrast problem.

In detail…

1) The M51 maps

The authors compared the arm–interarm contrasts of gas with those of magnetic fields, and discussed for the first time the interaction of the magnetic fields with the shock fronts in detail. The polarization of the light is obtained using the Stokes parameters (covered in Ay150).

As shown in Fig. 1, the emission at wavelengths of 3 cm and 6 cm corresponds with the optical spiral arms, and as shown in Fig. 4, the polarized radio emission corresponds with the CO line emission. This ordering arises because the shock first compresses the gas and magnetic fields (traced by polarized emission), then molecules form (traced by CO), and finally thermal emission is generated (traced by infrared). There are systematic shifts between the spiral ridges seen in polarized and total radio emission, integrated CO line emission, and infrared. (For more details, see Patrikeev et al. 2006.)

The magnetic field strength is calculated from the synchrotron radiation. Specifically, the authors assume equipartition between the energy densities of the magnetic field and cosmic rays, a proton-to-electron ratio of 100, and a path-length through the synchrotron-emitting regions of 1 kpc. The resulting estimates of the total field strength, obtained by applying the revised formulae of Beck & Krause (2005), are shown in Fig. 8.

2) Faraday rotation and depolarization

Faraday depolarization is caused by Faraday dispersion due to turbulent magnetic fields. The authors introduce a new method to estimate the size of the turbulent cells using Faraday depolarization.

The rotation measure (RM) dispersion within a beam of linear diameter D is related to the dispersion within a single cell by \sigma_{RM, D} \simeq N^{-1/2} \sigma_{RM} = \sigma_{RM} \frac{d}{D}, where d is the size of a cell and N = (D/d)^2 is the number of cells within the beam. Because the internal Faraday dispersion within a cell is determined by turbulence in the magneto-ionic interstellar medium, the dispersion within a cell can be written as \sigma_{RM} = 0.81 \langle n_e \rangle B_r (Ld)^{1/2}, where B_r is the strength of the random field component along the line of sight and L is the total path-length through the ionized gas (see details in Burn 1966; Sokoloff et al. 1998).

Combining these two equations, the size of the turbulence cell can be determined:

d \simeq \left[ \frac{D \, \sigma_{RM, D}}{0.81 \langle n_e \rangle B_r L^{1/2}} \right]^{2/3}

or d \simeq 50 \, {\rm pc} \left( \frac{D}{600 \, {\rm pc}} \right)^{2/3} \left( \frac{\sigma_{RM, D}}{15 \, {\rm rad \, m^{-2}}} \right)^{2/3} \left( \frac{\langle n_e \rangle}{0.1 \, {\rm cm^{-3}}} \right)^{-2/3} \left( \frac{B_r}{20 \, \mu {\rm G}} \right)^{-2/3} \left( \frac{L}{1 \, {\rm kpc}} \right)^{-1/3}.

The authors thus estimate the turbulent cell size to be about 50 pc.
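As a sanity check, the scaling relation above is easy to evaluate numerically. The sketch below (our own illustration; the function name is ours, and the defaults are the fiducial values quoted above) reproduces the 50 pc estimate:

```python
def turbulent_cell_size(D_pc=600.0, sigma_rm=15.0, n_e=0.1, B_r=20.0, L_kpc=1.0):
    """Turbulent cell size d [pc] from the scaling relation above.

    D_pc: beam diameter [pc], sigma_rm: RM dispersion within the beam [rad/m^2],
    n_e: mean electron density [cm^-3], B_r: random field strength [uG],
    L_kpc: path length through the ionized gas [kpc].
    """
    return (50.0
            * (D_pc / 600.0) ** (2.0 / 3.0)
            * (sigma_rm / 15.0) ** (2.0 / 3.0)
            * (n_e / 0.1) ** (-2.0 / 3.0)
            * (B_r / 20.0) ** (-2.0 / 3.0)
            * L_kpc ** (-1.0 / 3.0))

print(turbulent_cell_size())  # fiducial values give 50 pc
```

Note the weak (-1/3) dependence on the path length: even an uncertain L changes d only modestly.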

3) Regular magnetic field structure

The authors fit the polarization angles using a superposition of azimuthal magnetic field modes \exp(im\phi) with integer m, where \phi is the azimuthal angle in the galaxy's plane, measured anticlockwise from the north end of the major axis. They found that in the disc the field can be described as a combination of m = 0, 2, while in the halo it is described by m = 1. The origin of the halo field is unclear.
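To make the mode decomposition concrete, here is a toy Python sketch (our own illustration, not the authors' fitting procedure, which also accounts for Faraday rotation and projection geometry): we build a synthetic azimuthal signal containing m = 0 and m = 2 components and recover the mode amplitudes by Fourier projection around the ring.

```python
import math

N = 36                                       # azimuthal samples around a ring
phi = [2.0 * math.pi * k / N for k in range(N)]
signal = [1.0 + 0.5 * math.cos(2 * p - 0.3) for p in phi]  # m = 0 plus m = 2

def mode_amplitude(m):
    """Amplitude of the azimuthal mode exp(i m phi) in the sampled signal."""
    c = sum(s * math.cos(m * p) for s, p in zip(signal, phi)) / N
    d = sum(s * math.sin(m * p) for s, p in zip(signal, phi)) / N
    return abs(c) if m == 0 else 2.0 * math.hypot(c, d)

for m in range(4):
    print(m, round(mode_amplitude(m), 3))    # only m = 0 and m = 2 are nonzero
```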

4) Arm-interarm contrast

The arm-interarm contrast in the strength of the B field is a long-standing problem.

1970: Roberts and Yuan suggested that the magnetic field increases in proportion to the gas density at the spiral shock.

1988: Tilanus et al. noticed from observations that the synchrotron-emitting interstellar medium is not compressed by the shocks.

1974 and 2009: Mouschovias et al. (1974) and Mouschovias et al. (2009) suggested that only a moderate increase in synchrotron emission is expected because of the Parker instability (an instability driven by magnetic fields and cosmic rays), with the magnetic field compressed into loops on scales of 500-1000 pc. However, no such periodic pattern of loops has been observed.

In this paper, the arm-interarm contrasts in gas density and radio emission were compared to a model in which a regular and an isotropic random magnetic field are compressed by shocks along the spiral arms. The inner arm region (r < 1.6 kpc) is consistent with the model; however, the region at r > 2 kpc is not. The authors argue that this is because the random field is isotropic in the arms but becomes anisotropic through decompression as it enters the interarm regions in the outer galaxy, which produces an increase in polarized emission in the interarm regions.

2. (From Draine)

Influence of magnetic fields:

The ratio of the magnetic energy density to the kinetic energy density is (v_A/\sigma_v)^2, where v_A = B/\sqrt{4 \pi \rho} is the Alfvén speed and \sigma_v is the three-dimensional velocity dispersion. If the magnetic energy density is comparable to the kinetic energy density, the magnetic field contributes significantly to supporting the cloud against self-gravity, and the virial equilibrium equation underestimates the mass. However, the mass estimate only increases by a factor of \sqrt{2}, so there is no qualitative change.
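To put numbers on this ratio: with v_A = B/\sqrt{4\pi\rho} in CGS units, a quick sketch (the field strength, density, and velocity dispersion below are illustrative molecular-cloud values of our choosing, not from Draine) shows that a ~10 μG field is dynamically significant in typical cloud gas:

```python
import math

M_H = 1.6726e-24    # proton mass [g]
MU = 2.8            # assumed mean mass per particle in molecular gas [units of m_H]

def alfven_speed_kms(B_uG, n_cm3):
    """v_A = B / sqrt(4 pi rho) in km/s, for field B [uG] and density n [cm^-3]."""
    B = B_uG * 1e-6                   # gauss
    rho = MU * M_H * n_cm3            # g cm^-3
    return B / math.sqrt(4.0 * math.pi * rho) / 1e5

def energy_ratio(B_uG, n_cm3, sigma_v_kms):
    """Magnetic-to-kinetic energy density ratio (v_A / sigma_v)^2."""
    return (alfven_speed_kms(B_uG, n_cm3) / sigma_v_kms) ** 2

# e.g. a 10 uG field in n = 100 cm^-3 gas with sigma_v = 2 km/s:
print(alfven_speed_kms(10.0, 100.0))   # ~1.3 km/s
print(energy_ratio(10.0, 100.0, 2.0))  # ~0.4: the field contributes appreciably
```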

Detection of magnetic fields:

a) Zeeman effects

Measuring the splitting of a spectral line into several components in the presence of a static magnetic field

Examples: Crutcher et al. (2010)

b) Chandrasekhar-Fermi method

Measuring the dispersion in directions of polarization over the map (if magnetic field is strong enough to resist substantial distortion by the turbulence, dispersion is small)

Examples:

  1. Crutcher (2004)
  2. Novak et al. (2009)

Disadvantage:

The local effects of the magnetic field are smeared along the line of sight, so the net magnetic field is underestimated if the field is "tangled".

ARTICLE: The “true” column density distribution in star-forming molecular clouds

In Uncategorized on March 24, 2011 at 3:03 am

Read the paper by A.A. Goodman, J.E. Pineda, and S.L. Schnee (2008)

Summary by Bekki Dawson

Abstract

We use the COMPLETE Survey’s observations of the Perseus star-forming region to assess and intercompare the three methods used for measuring column density in molecular clouds: near-infrared (NIR) extinction mapping; thermal emission mapping in the far-IR; and mapping the intensity of CO isotopologues. Overall, the structures shown by all three tracers are morphologically similar, but important differences exist among the tracers. We find that the dust-based measures (NIR extinction and thermal emission) give similar, log-normal, distributions for the full (~20 pc scale) Perseus region, once careful calibration corrections are made. We also compare dust- and gas-based column density distributions for physically meaningful subregions of Perseus, and we find significant variations in the distributions for those (smaller, ~few pc scale) regions. Even though we have used 12CO data to estimate excitation temperatures, and we have corrected for opacity, the 13CO maps seem unable to give column distributions that consistently resemble those from dust measures. We have edited out the effects of the shell around the B-star HD 278942 from the column density distribution comparisons. In that shell’s interior and in the parts where it overlaps the molecular cloud, there appears to be a dearth of 13CO, which is likely due either to 13CO not yet having had time to form in this young structure and/or destruction of 13CO in the molecular cloud by the HD 278942’s wind and/or radiation. We conclude that the use of either dust or gas measures of column density without extreme attention to calibration (e.g., of thermal emission zero-levels) and artifacts (e.g., the shell) is more perilous than even experts might normally admit. And, the use of 13CO data to trace total column density in detail, even after proper calibration, is unavoidably limited in utility due to threshold, depletion, and opacity effects. 
If one’s main aim is to map column density (rather than temperature or kinematics), then dust extinction seems the best probe, up to a limiting extinction caused by a dearth of sufficient background sources. Linear fits among all three tracers’ estimates of column density are given, allowing us to quantify the inherent uncertainties in using one tracer, in comparison with the others.


ARTICLE: The Galactic Distribution of OB Associations in Molecular Clouds

In Uncategorized on March 4, 2011 at 5:19 am

Read the Paper by J.P. Williams and C.F. McKee (1997)

Summary by Vicente Rodriguez Gomez

Abstract

Molecular clouds account for half of the mass of the interstellar medium interior to the solar circle and for all current star formation. Using cloud catalogs of two CO surveys of the first quadrant, we have fitted the mass distribution of molecular clouds to a truncated power law in a similar manner as the luminosity function of OB associations in the companion paper to this work. After extrapolating from the first quadrant to the entire inner Galaxy, we find that the mass of cataloged clouds amounts to only 40% of current estimates of the total Galactic molecular mass. Following Solomon & Rivolo, we have assumed that the remaining molecular gas is in cold clouds, and we normalize the distribution accordingly. The predicted total number of clouds is then shown to be consistent with that observed in the solar neighborhood where cloud catalogs should be more complete. Within the solar circle, the cumulative form of the distribution is \mathcal{N}_c(>M) = 10^5 [(M_u/M)^{0.6} - 1], where \mathcal{N}_c is the number of clouds, and M_u = 6 \times 10^6 M_\odot is the upper mass limit. The large number of clouds near the upper cutoff to the distribution indicates an underlying physical limit to cloud formation or destruction processes. The slope of the distribution corresponds to d\mathcal{N}_c/dM \propto M^{-1.6}, implying that although numerically most clouds are of low mass, most of the molecular gas is contained within the most massive clouds.

The distribution of cloud masses is then compared to the Galactic distribution of OB association luminosities to obtain statistical estimates of the number of massive stars expected in any given cloud. The likelihood of massive star formation in a cloud is determined, and it is found that the median cloud mass that contains at least one O star is ~10^5 M_\odot. The average star formation efficiency over the lifetime of an association is about 5% but varies by more than 2 orders of magnitude from cloud to cloud and is predicted to increase with cloud mass. O stars photoevaporate their surrounding molecular gas, and even with low rates of formation, they are the principal agents of cloud destruction. Using an improved estimate of the timescale for photoevaporation and our statistics on the expected numbers of stars per cloud, we find that 10^6 M_\odot giant molecular clouds (GMCs) are expected to survive for about 3 \times 10^7 yr. Smaller clouds are disrupted, rather than photoionized, by photoevaporation. The porosity of H II regions in large GMCs is shown to be of order unity, which is consistent with self-regulation of massive star formation in GMCs. On average, 10% of the mass of a GMC is converted to stars by the time it is destroyed by photoevaporation.


The Hot ISM (Evidence and Observations)

In Uncategorized on March 3, 2011 at 6:59 pm

The hot ISM, also called coronal gas, refers to very hot, low-density gas which has been shock-heated by fast stellar winds and blast waves from novae and supernovae. Its temperature and density are T \gtrsim 10^{5.5} K and n_H \sim 0.004 \, {\rm cm^{-3}}, and it is believed to fill about half of the volume of the galactic disk, as well as much of the volume above and below the disk through chimney flows. Beyond the galaxy, much of the IGM is believed to be at T \gtrsim 10^6 K.


Top Ten List for HII Regions

In Uncategorized on March 3, 2011 at 6:36 pm
  1. Caused by ionizing radiation from stars, principally types O and B. (Fluxes) [1a. Planetary nebulae, caused by young white dwarfs, are also HII regions. (Look.)]

ARTICLE: Stellar Kinematics of Young Clusters in Turbulent Hydrodynamic Simulations

In Uncategorized on March 3, 2011 at 6:30 pm

Read the paper by S.S.R. Offner, C.E. Hansen, and Mark R. Krumholz (2009)

Summary by Aaron Bray and Gongjie Li

ABSTRACT

The kinematics of newly formed star clusters are interesting both as a probe of the state of the gas clouds from which the stars form, and because they influence planet formation, stellar mass segregation, cluster disruption, and other processes controlled in part by dynamical interactions in young clusters. However, to date there have been no attempts to use simulations of star cluster formation to investigate how the kinematics of young stars change in response to variations in the properties of their parent molecular clouds. In this Letter, we report the results of turbulent self-gravitating simulations of cluster formation in which we consider both clouds in virial balance and those undergoing global collapse. We find that stars in these simulations generally have velocity dispersions smaller than that of the gas by a factor of ∼5, independent of the dynamical state of the parent cloud, so that subvirial stellar velocity dispersions arise naturally even in virialized molecular clouds. The simulated clusters also show large-scale stellar velocity gradients of ∼0.2–2 km s−1 pc−1 and strong correlations between the centroid velocities of stars and gas, both of which are observed in young clusters. We conclude that star clusters should display subvirial velocity dispersions, large-scale velocity gradients, and strong gas–star velocity correlations regardless of whether their parent clouds are in virial balance, and, conversely, that observations of these features cannot be used to infer the dynamical state of the parent gas clouds.


Why is CO an important coolant in the (very) cold ISM?

In Uncategorized on March 2, 2011 at 9:56 pm

Cooling mechanisms are very important for facilitating the collapse of molecular clouds, the formation of stars, and radiative equilibrium in the ISM. As discussed in class, molecular hydrogen is a poor radiator: as a homonuclear molecule it has no dipole moment, so H2 can only radiate through forbidden transitions, and the rates are too low. Because molecular hydrogen doesn't even have a 21 cm analogue, its cooling rates are lower than those of atomic hydrogen. However, as discussed in class, at high temperatures (i.e., in shocks) its high abundance makes H2 the dominant coolant. Consequently, heavy molecules play an important role in transforming thermal energy into radiation that can escape the region, through collisions (mostly with hydrogen) and various emission mechanisms. At the low temperatures of the cold ISM, the energy imparted to a CO molecule by an inelastic collision is only enough to excite a rotational transition.

So why CO? Well, it is both abundant and able to radiate from inelastic collisions at low temperatures and densities.

In general, the cooling rate of a specific species is determined by the amount of material, the level populations, and the physics of the transitions:

\Lambda_{CO} = X_{CO} (2J+1) \frac{e^{-E_J/(k_B T)}}{Z} A_{J,J-1} \Delta E_{J,J-1}

At the most general level we might try to understand the equation by area of study. The abundances, X_{CO}, are determined by the chemistry of the environment, as various processes conspire with the temperature, densities, etc. to form molecules. The level populations, (2J+1) \frac{e^{-E_J/(k_B T)}}{Z}, are covered by statistical mechanics. To calculate the transition energy, \Delta E_{J,J-1} = E_J - E_{J-1}, and Einstein A coefficient, A_{J,J-1}, we need to use quantum mechanics.
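The statistical-mechanics piece is easy to sketch numerically. Assuming LTE and a rigid rotor (the CO rotation constant used below, ~57.6 GHz, is the standard value; the function names are ours), we can evaluate the fractional level populations:

```python
import math

K_B = 1.3807e-16    # Boltzmann constant [erg/K]
H = 6.6261e-27      # Planck constant [erg s]
B_CO = 57.6e9       # CO rotation constant [Hz]

def level_energy(J):
    """Rigid-rotor level energy E_J = h B J (J + 1) [erg]."""
    return H * B_CO * J * (J + 1)

def lte_fraction(J, T, J_max=100):
    """Fractional LTE population (2J+1) exp(-E_J / k_B T) / Z of level J."""
    boltz = lambda j: (2 * j + 1) * math.exp(-level_energy(j) / (K_B * T))
    Z = sum(boltz(j) for j in range(J_max))   # partition function
    return boltz(J) / Z

# At 10 K only the lowest few rotational levels carry any population:
print([round(lte_fraction(J, 10.0), 3) for J in range(5)])
```

Multiplying each fraction by A_{J,J-1} \Delta E_{J,J-1} and the abundance then gives the cooling rate term by term.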

To answer the question at hand, let's focus on the chemistry and quantum mechanics by first looking at the transition energy. Treating the diatomic molecule CO as a rigid rotator, the energy of a particular level J is set by the rotation constant B:

\Delta E_{J,J-1} = hB\,J(J+1) - hB\,(J-1)J = 2hBJ \propto B

where B = \hbar/(4\pi m_r r_0^2), with m_r the reduced mass of the molecule and r_0 the bond length. We're working in the very cold ISM, so we'll assume that the CO stays in the lowest vibrational state.
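We can check that this rigid-rotor picture reproduces CO's observed J = 1-0 line near 115 GHz. A back-of-the-envelope sketch (the bond length ~1.128 Å is the standard value; small differences from the measured frequency reflect the rigid-rotor idealization):

```python
import math

HBAR = 1.0546e-27    # reduced Planck constant [erg s]
AMU = 1.6605e-24     # atomic mass unit [g]

def rotation_constant_hz(m1_amu, m2_amu, r0_cm):
    """Rotation constant B = hbar / (4 pi m_r r0^2) [Hz] for a rigid diatomic rotor."""
    m_r = m1_amu * m2_amu / (m1_amu + m2_amu) * AMU   # reduced mass [g]
    return HBAR / (4.0 * math.pi * m_r * r0_cm ** 2)

B_CO = rotation_constant_hz(12.0, 16.0, 1.128e-8)     # 12C16O, r0 ~ 1.128 Angstrom
nu_10 = 2.0 * B_CO                                    # J = 1-0 frequency: Delta E = 2hB
print(B_CO / 1e9, nu_10 / 1e9)                        # ~58 GHz and ~116 GHz
```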

Furthermore, the carbon and oxygen atoms in CO are connected by a triple bond with a dissociation energy of 11.2 eV (1120 Å). The strength of this bond means that most starlight won't break up a CO molecule, and so CO is often the most abundant heavy molecule (Solomon & Klemperer 1972). Furthermore, CO's reduced mass is large compared to that of, say, molecular hydrogen. This means that the transition energies of these rotational levels are small, so they can be excited at low temperatures (think tens of kelvin). See sections 5.1.4 and 5.1.7 of Draine's book for a nice discussion.

Even with the small dipole moment of CO (0.112 Debye), optical depth effects are important. When a line is optically thick the photons just bounce from molecule to molecule and there is no net cooling effect. However, turbulence in the medium and velocity gradients will Doppler shift the photons so they may escape through the line wings. In addition, the geometry of the local medium now matters in the optically thick regime since photons emitted near to the cloud’s ‘boundary’ will be more likely to escape.

In brief, the reasons that CO is such a dominant coolant in the cold ISM is because it is abundant and also because it can radiate at low densities and temperatures.

But what happens when you heat the ISM just a bit?

Molecules besides CO play an important role in line cooling. For instance, water has a much larger dipole moment (1.85 D) than CO (0.112 D), and consequently at higher temperatures water is a dominant cooling mechanism. This can be seen from the dipole dependence, \mu^2, in the equation for the Einstein A coefficient:

A_{J+1,J} = \frac{512 \pi^4 B^3 \mu^2}{3 h c^3} \frac{(J+1)^4}{2J+3}
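Plugging in numbers for CO's J = 1-0 transition (B ≈ 57.6 GHz, μ = 0.112 D; note that we take the upper-state degeneracy, 2J + 3, in the denominator) recovers roughly the accepted A ≈ 7 × 10^-8 s^-1, and shows why water, with a dipole moment squared ~270 times larger, radiates so much more readily:

```python
import math

H = 6.6261e-27    # Planck constant [erg s]
C = 2.9979e10     # speed of light [cm/s]

def einstein_A(B_hz, mu_debye, J):
    """A_{J+1,J} for a rigid rotor: 512 pi^4 B^3 mu^2 / (3 h c^3) * (J+1)^4 / (2J+3)."""
    mu = mu_debye * 1e-18   # debye -> esu cm (CGS)
    return (512.0 * math.pi ** 4 * B_hz ** 3 * mu ** 2 / (3.0 * H * C ** 3)
            * (J + 1) ** 4 / (2 * J + 3))

A_CO = einstein_A(57.6e9, 0.112, 0)   # CO J = 1-0
print(A_CO)                           # ~7e-8 s^-1
print((1.85 / 0.112) ** 2)            # H2O/CO dipole-squared ratio, ~270
```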

Complicating this analysis is the chemistry, since the abundance of a species depends strongly on the local conditions. Of course, this complication can be useful when coupling observations to models of the chemistry (e.g., Jiménez-Serra et al. 2009).

Abundances of oxygen-bearing molecules as a function of visual extinction (in magnitudes) for a model of a photon-dominated region that includes grain-surface chemistry. Figure from Kaufman 2009.

In brief, ion-neutral reactions dominate at the lowest temperatures, so the most abundant oxygen-bearing species (besides CO) are O, OH, O2, and H2O. Above a transition temperature of ~400 K, neutral-neutral reactions produce lots of water:

O + H_2 \longrightarrow OH + H
OH + H_2 \longrightarrow H_2O + H

By 500 K most of the gas-phase oxygen is in either H2O or CO, so in a slightly warmer environment the cooling is dominated by water.

Cooling rates per molecular Hydrogen molecule for gas temperatures of 40 and 100 K. (Figure 2 from Neufeld, Lepp, and Melnick 1995)

Calculations show that cooling from CO dominates in the low temperature, low density regime of molecular clouds.

Another way of plotting the fractional contribution of various coolants is through contour plots over temperature and density. Effectively, this calculation takes the previous figure and expands it to two dimensions. Looking at the figure below, the dominance of CO in the low-temperature, low-density regime, as well as the transition to water cooling, becomes apparent.

Fractions of the total cooling rate from important coolants. Image from the SWAS science page, adapted from Neufeld, Lepp, and Melnick 1995.

So what might this mean to me?

The moral of the story is that if you are doing a calculation involving gas cooling or heating processes, you'll want to include coolants such as CO and treat the chemistry and radiative transfer correctly. That is, you want the abundances and cooling rates to be right, or your answers might be wrong! Some current topics include the formation of stars in low-metallicity regions (e.g., Jappsen et al. 2009) and modeling CO chemistry in giant molecular clouds (e.g., Glover & Clark 2011).

Hopefully this discussion helps to answer the question, although you'll have to find answers to any questions it raised on your own. Some places to start searching are listed below.
References

Download the class handout.

Page by: Katherine Rosenfeld

ESO – ALMA System Specifications

In Uncategorized on March 2, 2011 at 2:25 am

ESO – ALMA System Specifications.  Good to have these on-hand when thinking about the molecular spectroscopy of the future!