Harvard Astronomy 201b


The Void IGM at z < 6: Key Properties, How We Know

In Uncategorized on April 28, 2011 at 4:14 am

As Nathan described, the IGM transitioned from an HI-dominated phase to an HII-dominated phase at z \approx 6. Here I will go into a bit more detail about the ionization, temperature, density and magnetic field of the void IGM post-reionization, and how these properties are determined. The term “void” IGM is meant to exclude particularities of the intracluster medium, which is beyond the scope of this brief posting.

Temperature

The temperature of the IGM is dictated by a balance of adiabatic cooling from the Hubble expansion and photoheating by the UV photons that keep the IGM ionized. One simple way to obtain the IGM temperature is to measure Doppler broadening of narrow Ly\alpha absorption lines along quasar sight-lines. This is most appropriately done with a large sample of absorption lines (along various sight-lines) at the lowest end of the absorber N_{HI} column density distribution, N_{HI} \approx 10^{13} \textrm{cm}^{-2} (see e.g. Ricotti et al. 2000). These considerations minimize the potential for inflated linewidth estimates due to redshift broadening arising from the physical extent of the HI overdensity system. The result obtained is:

T_{HI} \sim 10^{4} \ \textrm{K}, \ b_{T} \sim 10 \ \textrm{km/s} \gtrsim H \times N_{HI}/\overline{n}_{HI}

where the last expression is a back-of-the-envelope estimate of the redshift broadening (H is the Hubble parameter, and \overline{n}_{HI} is an estimate of the density based on the average value that accounts for all the baryons in \Omega_b).
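As a sanity check on these numbers, the thermal Doppler parameter b = \sqrt{2kT/m} can be evaluated directly. A minimal Python sketch (the 10^4 K temperature is the estimate quoted above):

```python
import math

# Physical constants in cgs units
K_B = 1.380649e-16    # Boltzmann constant [erg/K]
M_H = 1.6735575e-24   # mass of a hydrogen atom [g]

def thermal_b(temperature_k, mass_g=M_H):
    """Thermal Doppler b-parameter, b = sqrt(2 k T / m), returned in km/s."""
    return math.sqrt(2.0 * K_B * temperature_k / mass_g) / 1.0e5

b_kms = thermal_b(1.0e4)  # ~13 km/s for 10^4 K hydrogen, consistent with b_T ~ 10 km/s
```

Note that b scales as \sqrt{T}, so linewidths constrain the temperature only to within a factor of a few unless the sample is large.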

Ionization Fraction

With a temperature estimate in hand, the ionization fraction can be calculated from radiative transfer, balancing the photoionization rate against the radiative recombination rate (dependent on T through the radiative recombination coefficient \alpha(T)).

\dot{n}_{HII} = n_{HI}\int_{\nu_{0}}^{\infty}\frac{4\pi J_{\nu}\sigma_{H}(\nu)d\nu}{h\nu}, \ \sigma_{H} \sim \sigma_{0}(\nu/\nu_0)^{-3}

\dot{n}_{HI} = n_{e}n_{HII}\alpha(T) \approx n_{HII}^2\alpha(T), \ \textrm{since} \ n_{e} \approx n_{HII} \ \textrm{for highly ionized hydrogen}

A crude estimate of the ionization integral can be obtained by knowing approximately the UV ionizing background, yielding an ionization timescale t_{ion} \sim 5 \times 10^{12} \ s (see for example some simple calculations and general IGM discussion by Piero Madau). Again using the \overline{n}_{HI} consistent with \Omega_b and using \alpha(10^4 \textrm{K}), balancing the rates of ionization and recombination yields n_{HI}/n_{HII} \sim 10^{-4}.
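The order of magnitude of this neutral fraction can be reproduced in a few lines. All numbers below are rough illustrative assumptions (mean density scaled to a representative Ly\alpha-forest redshift, a recombination coefficient appropriate to 10^4 K), not fitted values:

```python
# Photoionization equilibrium: n_HI * Gamma = n_e * n_HII * alpha(T),
# so with n_e ~ n_HII ~ n_H the neutral fraction is n_HI/n_HII ~ n_H * alpha / Gamma.
N_H0 = 1.9e-7      # mean hydrogen density today [cm^-3], from Omega_b (assumed)
Z = 3.0            # representative Lya-forest redshift (assumed)
ALPHA = 4.0e-13    # recombination coefficient at T ~ 1e4 K [cm^3 s^-1] (assumed)
T_ION = 5.0e12     # ionization timescale from the text [s]

n_h = N_H0 * (1.0 + Z) ** 3            # mean hydrogen density at redshift Z
gamma = 1.0 / T_ION                    # photoionization rate [s^-1]
neutral_fraction = n_h * ALPHA / gamma  # ~10^-5 to 10^-4
```

The result is sensitive to the assumed density, so order-of-magnitude agreement with n_{HI}/n_{HII} \sim 10^{-4} is all that should be expected.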

More on line-of-sight observations

Virtually all information about the IGM at z < 6 is gleaned from line-of-sight observations, most frequently in the optical. Conclusive detection of metals generally requires identification of multiple lines, so that the intervening systems’ redshifts can be determined. Doublets are particularly useful in this context (especially MgII \lambda=2795\textrm{\AA}, 2802\textrm{\AA}, and CIV \lambda=1548\textrm{\AA}, 1551\textrm{\AA}). Various common absorber classifications are listed in Table 1 below. See Fig. 1 below for a sample quasar spectrum showing many of the features listed in Table 1 (spectrum from Schneider “Extragalactic Astronomy and Cosmology”).

Table 1: Common absorber classifications.

system classification | absorber | corresponding HI column density (\textrm{cm}^{-2})
narrow Ly\alpha | HI | N_{HI} < 10^{17}
Lyman limit | HI | 10^{17} < N_{HI} < 10^{20}
damped Ly\alpha | HI | N_{HI} > 10^{20}
metal | CIV, MgII, etc. | 10^{17} < N_{HI} < 10^{21}

Figure 1: Example of a quasar spectrum, showing some common absorption features. The source is at redshift ~2. Figure from Schneider "Extragalactic Astronomy & Cosmology".

IGM Magnetic Fields

Very little is currently known about magnetic fields in the IGM. Recently, lower bounds on the magnitude of the IGM magnetic field have been derived using line-of-sight blazar gamma-ray data (e.g. Tavecchio et al. 2010). The premise of these B-field limits is that gamma-rays emitted by the blazar convert to e^{+}/e^{-} pairs on their way to Earth; the deflection of these pairs by the IGM B-field alters the cascade emission seen in the high-energy blazar spectra. The present lower limits on B_{IGM} are \sim 10^{-17}-10^{-15} Gauss…not too stringent a lower bound!

The Warm Hot Intergalactic Medium

Simulations suggest that the z < 6 IGM has a large-scale filamentary structure (the “cosmic web”), and quasar line-of-sight data have provided evidence for this WHIM. By analyzing quasar spectra in the X-ray with Chandra, Prof. Julia Lee and collaborators identified OVIII absorption thought to trace hot T \sim 10^6 \ K conditions in filaments overdense relative to the cosmic average baryon density by a factor \delta \sim 5-50 (Fang et al. 2002, see Fig. 2).

Figure 2: Detection of Lyman-alpha analogue absorption from seven times ionized oxygen in the WHIM, from Fang et al. 2002.


Accretion Luminosity

In Uncategorized on April 27, 2011 at 5:05 pm

Let m = \dot{M} t be the orbiting mass entering and leaving the ring.  dE = \frac{dE}{dr} dr = \frac{d}{dr} \left(-G \frac{M_* m}{2 r}\right) dr = G \frac{M_* \dot{M} t }{2 r^2} dr.  In the steady state, conservation of energy requires that the energy radiated by an annulus in the accretion disk be equal to the difference in energy flux between its inner and outer edges.  Therefore dE = dL_{\rm ring} t.  Equating the two gives: dL_{\rm ring} = G \frac{M_*\dot{M}}{2 r^2} dr. The Stefan-Boltzmann relation gives dL_\text{ring} = 4 \pi r \sigma T^4 dr, where the factor 4 \pi r (rather than 2 \pi r) accounts for radiation from both faces of the disk.

Therefore we find that \sigma T^4 = \frac{G M_* \dot{M}}{8 \pi r^3}.  We did not take into account the fact that the star has a finite size which introduces a boundary layer into the disk.  A more thorough analysis (e.g. in Lynden-Bell & Pringle 1974) then adds an additional factor, giving: \sigma T^4 = \frac{G M_* \dot{M}}{8 \pi r^3} \left(1- \sqrt{\frac{R_*}{r}} \right)
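For concreteness, this temperature profile can be evaluated numerically. The stellar parameters below (a roughly solar-mass T Tauri star accreting at 10^{-7} M_\odot/yr) are illustrative assumptions, not values from the text:

```python
import math

# cgs constants
G = 6.674e-8          # gravitational constant
SIGMA_SB = 5.6704e-5  # Stefan-Boltzmann constant
M_SUN = 1.989e33      # solar mass [g]
R_SUN = 6.957e10      # solar radius [cm]
YR = 3.156e7          # year [s]

def disk_temperature(r, m_star, mdot, r_star):
    """Effective temperature at radius r (cgs) from
    sigma T^4 = G M_* Mdot / (8 pi r^3) * (1 - sqrt(R_*/r))."""
    sigma_t4 = G * m_star * mdot / (8.0 * math.pi * r**3) * (1.0 - math.sqrt(r_star / r))
    return (sigma_t4 / SIGMA_SB) ** 0.25

# Assumed T Tauri parameters, for illustration only:
m_star = 1.0 * M_SUN
mdot = 1.0e-7 * M_SUN / YR   # accretion rate [g/s]
r_star = 2.0 * R_SUN
T = disk_temperature(10.0 * r_star, m_star, mdot, r_star)  # a few hundred K
```

Far from the boundary layer the profile approaches the familiar T \propto r^{-3/4} scaling.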

Isolation Mass

In Uncategorized on April 27, 2011 at 3:45 pm

If we assume that all planets and planetesimals move on roughly circular orbits, then there is a finite supply of planetesimals for a growing planet to accrete.  The mass at which the growing planet has consumed all of its nearby planetesimal supply is called the isolation mass M_\text{iso}.

The “feeding zone” \delta a_\text{max} of a planet accreting planetesimals is of order the Hill radius.  Thus we can write \delta a_\text{max} = C r_H, where C is a constant of order unity, and r_H = a (M_\text{planet}/3 M_*)^{1/3} is the Hill radius.  Furthermore, we note that the mass within this feeding zone can be calculated from the surface density of the disk \Sigma_p, and scales with planet mass through the Hill radius: (2 \pi a) (2 \delta a_\text{max}) \Sigma_p \propto M_\text{planet}^{1/3}

Finally we can set the isolation mass to be the mass at which the planet mass is equal to the mass of the planetesimals in its feeding zone: M_\text{iso} = 4 \pi a C (\frac{M_\text{iso}}{3 M_*})^{1/3} a \Sigma_p.  We can solve this equation to find that M_\text{iso} = \frac{8}{\sqrt{3}} \pi^{3/2} C^{3/2} M_*^{-1/2} \Sigma_p^{3/2} a^3.*

*See discussion in Armitage 2007
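The formula can be evaluated numerically. A common choice is C = 2\sqrt{3}; the surface density \Sigma_p = 10 \ \mathrm{g/cm^2} at 1 AU is an assumed minimum-mass-solar-nebula-like value, not a number from the text:

```python
import math

M_SUN = 1.989e33    # [g]
M_EARTH = 5.972e27  # [g]
AU = 1.496e13       # [cm]

def isolation_mass(sigma_p, a, m_star=M_SUN, c=2.0 * math.sqrt(3.0)):
    """M_iso = (8/sqrt(3)) pi^(3/2) C^(3/2) M_*^(-1/2) Sigma_p^(3/2) a^3, in g (cgs inputs)."""
    return (8.0 / math.sqrt(3.0)) * math.pi**1.5 * c**1.5 \
        * m_star**-0.5 * sigma_p**1.5 * a**3

# Assumed MMSN-like planetesimal surface density of 10 g/cm^2 at 1 AU:
m_iso = isolation_mass(sigma_p=10.0, a=AU)  # a few percent of an Earth mass
```

The steep \Sigma_p^{3/2} a^3 dependence is why isolation masses are small in the terrestrial-planet region but can reach many Earth masses in the outer disk.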

ARTICLE: Evolution of the Intergalactic Opacity: Implications for the Ionizing Background, Cosmic Star Formation, and Quasar Activity

In Journal Club, Journal Club 2011 on April 26, 2011 at 4:30 pm

Read the paper by Faucher-Giguère et al. (2008)

Summary by: Aaron Meisner & Ragnhild Lunnan

Abstract:

“We investigate the implications of the intergalactic opacity for the evolution of the cosmic UV luminosity density and its sources. Our main constraint is our measurement of the Ly\alpha forest opacity at redshifts 2 \leq z  \leq 4.2 from 86 high-resolution quasar spectra. In addition, we impose the requirements that H I must be reionized by z=6 and He II by z \sim 3 and consider estimates of the hardness of the ionizing background from H I-to-He II column density ratios. The derived hydrogen photoionization rate is remarkably flat over the Ly\alpha forest redshift range covered. Because the quasar luminosity function is strongly peaked near z \sim 2, the lack of redshift evolution indicates that star-forming galaxies likely dominate the photoionization rate at z \gtrsim 3. Combined with direct measurements of the galaxy UV luminosity function, this requires only a small fraction f_{esc} \sim 0.5\% of galactic hydrogen-ionizing photons to escape their source for galaxies to solely account for the entire ionizing background. Under the assumption that the galactic UV emissivity traces the star formation rate, current state-of-the-art observational estimates of the star formation rate density appear to underestimate the total photoionization rate at z \sim 4 by a factor of \sim 4, are in tension with recent determinations of the UV luminosity function, and fail to reionize the universe by z \sim 6 if extrapolated to arbitrarily high redshift. A theoretical star formation history peaking earlier fits the Ly\alpha forest photoionization rate well, reionizes the universe in time, and is in better agreement with the rate of z \sim 4 gamma-ray bursts observed by Swift. Quasars suffice to doubly ionize helium by z \sim 3 and likely contribute a nonnegligible and perhaps dominant fraction of the hydrogen-ionizing background at their z \sim 2 peak.”

Big Questions

  • What dominates the hydrogen photoionizing background: stars or quasars? How does the answer vary with redshift?
  • What is the global rate of star formation and how does it evolve with redshift?
  • Do Ly\alpha opacity-based star formation rate estimates agree with inferred star formation histories based on other types of data?
  • Is the star formation history inferred (from Ly\alpha opacity or otherwise) consistent with reionization by z = 6?
  • What is the escape fraction, f_{esc}, of hydrogen photoionizing photons for galaxies and quasars?

Relation to Gunn & Peterson 1965

In an ancillary paper, the authors use a sample of previously published Keck (Vogt et al. 1994, Sheinis et al. 2002) and Magellan (Bernstein et al. 2003) high-redshift quasar spectra to constrain the IGM Ly\alpha opacity \tau(z) by measuring intervening neutral hydrogen absorption. Whereas Gunn & Peterson 1965 discussed (what they considered a lack of) Ly\alpha absorption in the spectrum of a single quasar, 3C 9 at z \sim 2, Faucher-Giguere et al. 2008 use a sample of 86 objects, most with appreciably higher redshifts (see Fig. 1). The resulting determination of \tau(z) is substantially improved relative to previous results, in part because the quasar sample employed is a factor of \sim 2 larger than in any prior analysis.

Figure 1: Quasar sample used by Faucher-Giguere et al. 2008 to determine the IGM neutral hydrogen opacity. Blue refers to Keck HIRES data, red to Keck HIRES+ESI data, and black to all HIRES+ESI+Magellan MIKE data.

Relation to AY201B themes: star formation, dust obscuration

The authors use the IGM Ly\alpha opacity to constrain the total UV emissivity as a function of redshift, which can in turn constrain the amount of star formation as a function of redshift, quantified as \dot{\rho}_{\star}^{com}(z), the comoving star formation rate density (units of solar masses per year per comoving cubic megaparsec). This means of constraining star formation history is complementary to more direct approaches, for example those using photometric surveys to trace the amount of UV light and hence star formation as a function of redshift (see “Comparison to prior work” below).

Inferring the absolute scale of the star formation rate as a function of redshift requires appropriate corrections for the absorption of UV photons between their emission and their entry into the diffuse IGM. Thus, dust extinction must be accounted for. If dust extinction in galaxies evolves with redshift, this impacts determinations not only of the absolute scale of star formation, but also of its evolution. The redshift evolution of dust corrections is poorly constrained at present, and the authors therefore assume a dust correction that is independent of redshift.

The 10 second derivation of key star formation results

The authors go into tremendous detail in explicitly stating and justifying the assumptions necessary to get from their starting point, \tau(z), to the eventual star formation results \dot{\rho}_{\star}^{com}(z) (hence why the paper cites a total of 200 references!). The basic idea is that \tau(z) translates into a certain breakdown of neutral versus ionized hydrogen as a function of z, which implies a particular photoionization rate \Gamma(z). \Gamma(z) requires a particular intensity J_{\nu} of the photoionizing background, which in turn derives from an intrinsic emissivity of astrophysical sources \epsilon_{\nu}^{com}(z). At least some of these sources must be young stars, so that the star formation history \dot{\rho}_{\star}^{com}(z) can be inferred from the emissivity. Of course, each step in this reasoning involves many cosmological assumptions. However, the collective assumptions made by the authors allow a remarkably simple (“10 second”) derivation to proceed between \tau(z) and \dot{\rho}_{\star}^{com}(z):

1/\tau(z) \propto \Gamma(z) \propto J_{\nu} \propto \epsilon_{\nu}^{com}(z)/(1+z) \propto \dot{\rho}_{\star}^{com}(z)/(1+z)

  • \tau(z) is the effective optical depth as given in eq. (2), \tau(z)=-\ln(\langle F \rangle (z)), where F is the transmission
  • \Gamma(z) is the photoionization rate
  • J_{\nu} is the ionizing background intensity
  • \epsilon_{\nu}^{com}(z) is the comoving specific emissivity
  • \dot{\rho}_{\star}^{com}(z) is the comoving star formation rate (SFR) density

Note that the first proportionality involves implicit z dependence, but all z dependence in the remaining proportionalities is explicit. That’s why the authors’ plots of inferred \Gamma(z), \epsilon(z), \dot{\rho}_{\star}^{com}(z) look identical up to a factor of (1+z), as shown in Fig. 2 below.

One thing to notice immediately in writing the last proportionality is that the authors are assuming that all of the emissivity is due to star formation in galaxies, neglecting quasars. Other more subtle assumptions are also involved, for example the last proportionality would also be violated if dust corrections evolved with redshift.
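Under these assumptions, the chain can be sketched in a few lines of Python. The optical depth values below are purely illustrative placeholders (and the implicit z dependence of the first proportionality, noted above, is ignored); the point is only that each inferred quantity is the previous one times a simple factor:

```python
# 1/tau ∝ Gamma ∝ J_nu ∝ eps/(1+z) ∝ SFR density/(1+z), up to omitted constants.
redshifts = [2.0, 3.0, 4.0]
tau = [0.15, 0.35, 0.80]  # illustrative effective optical depths (assumed, not the paper's)

gamma = [1.0 / t for t in tau]                                  # photoionization rate
emissivity = [g * (1.0 + z) for g, z in zip(gamma, redshifts)]  # eps ∝ Gamma * (1+z)
sfr_density = list(emissivity)                                  # SFR density ∝ eps
```

This makes explicit why the inferred \Gamma(z), \epsilon(z), and \dot{\rho}_{\star}^{com}(z) curves share the same shape up to factors of (1+z).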

 

Figure 2: Fig. 1, Fig. 11, and Fig. 12 of Faucher-Giguere et al. 2008. In all cases the black points with black error bars are the quantities inferred by the authors; the fact that the shape traced by the points in successive plots is so similar (the same up to factors of (1+z)) arises from the simple chain of proportionalities discussed above.

Comparison to prior work

Throughout the text, the authors make extensive comparison of their star formation results to those of one particular paper, Hopkins & Beacom 2006. This paper is considered by the authors to be the most thorough SFR history analysis to date, as it makes use of essentially all cosmological probes of star formation other than Ly\alpha opacity, going so far as to include astrophysical neutrino limits from Super-Kamiokande. However, the results of Hopkins & Beacom 2006 have known deficiencies; in particular, their best-fit star formation rate falls rapidly beyond z \sim 3, so fast in fact that star formation fails to reionize the Universe by z = 6.

The results of Faucher-Giguere et al. 2008 are in tension with those of Hopkins & Beacom 2006 insofar as the Ly\alpha opacity-inferred SFR declines less rapidly at high z (see bottom panel of Fig. 2 above, where “H&B (2006) fit” shows the SFR history of Hopkins & Beacom 2006). The SFR history of Faucher-Giguere et al. 2008 remains constant enough out to z \gtrsim 4 that the authors claim its extrapolation can reionize the Universe in time. Hence, this disagreement with prior work actually appears to be a virtue of the Ly\alpha opacity-inferred SFR.

Remaining issues: quasar contribution

As noted in \S5.2.1, when the quasar contribution to the hydrogen photoionizing background is estimated from the bolometric luminosity function of Hopkins et al. 2007, quasars alone overproduce the photoionizing background at z \lesssim 3. That is why, in Fig. 5 of the text, the quasar contribution has been scaled down by factors ranging from 0.1 to 0.35.

However, the quasar contribution (see Fig. 5 of the paper) falls off too rapidly with z to explain the Ly\alpha opacity measurements over the full redshift range considered. In \S5.5.2 the authors use an alternative to the bolometric luminosity function to estimate the quasar contribution to the hydrogen photoionizing background. To do so, they impose the constraint that He II be reionized by z = 3. Since only quasars can contribute the \sim 50 eV photons necessary to doubly ionize helium, this requirement roughly fixes the normalization of the quasar luminosity function at the \sim 50 eV energy. Assuming a quasar spectral index then allows the authors to infer the quasar fraction of the photoionizing background at energies relevant to H I ionization. The result is that perhaps \sim 20\% of the H I photoionizing background derives from quasar emission. Clearly, there are disagreements yet to be resolved.

Important Findings

  • The photoionizing background must be dominated by star formation for z\gtrsim 3, and star formation may dominate over quasars at all redshifts considered.
  • The quasar bolometric luminosity function overproduces the photoionizing background with quasar emission alone, while He II reionization arguments suggest quasars make up some non-negligible (but less than unity) fraction \sim20% of the photoionizing background.
  • f_{esc} need only be \sim0.5% for galaxies in order to match the photoionizing background. This f_{esc} value is lower than but possibly consistent with previous f_{esc} estimates.
  • The inferred SFR history in this work peaks earlier than the SFR history of prior works, allowing star formation to reionize the Universe in time, by z = 6.


ARTICLE: Spitzer Survey of the Large Magellanic Cloud: Surveying the Agents of a Galaxy’s Evolution (SAGE). I. Overview and Initial Results

In Journal Club, Journal Club 2011 on April 26, 2011 at 3:54 pm

ADS Article Link

Paper Summary by Sukrit Ranjan and Gregory Green

Abstract

We are performing a uniform and unbiased imaging survey of the Large Magellanic Cloud (LMC; \sim 7^{\circ} \times 7^{\circ}) using the IRAC (3.6, 4.5, 5.8, and 8 μm) and MIPS (24, 70, and 160 μm) instruments on board the Spitzer Space Telescope in the Surveying the Agents of a Galaxy’s Evolution (SAGE) survey, these agents being the interstellar medium (ISM) and stars in the LMC. This paper provides an overview of the SAGE Legacy project, including observing strategy, data processing, and initial results. Three key science goals determined the coverage and depth of the survey. The detection of diffuse ISM with column densities > 1.2 \times 10^{21} \, \mathrm{H} / \mathrm{cm^2} permits detailed studies of dust processes in the ISM. SAGE’s point-source sensitivity enables a complete census of newly formed stars with masses > 3 \, \mathrm{M_{\odot}} that will determine the current star formation rate in the LMC. SAGE’s detection of evolved stars with mass-loss rates > 1 \times 10^{-8} \, \mathrm{M_{\odot}} \, \mathrm{yr^{-1}} will quantify the rate at which evolved stars inject mass into the ISM of the LMC. The observing strategy includes two epochs in 2005, separated by 3 months, that both mitigate instrumental artifacts and constrain source variability. The SAGE data are nonproprietary. The data processing includes IRAC and MIPS pipelines and a database for mining the point-source catalogs, which will be released to the community in support of Spitzer proposal cycles 4 and 5. We present initial results on the epoch 1 data for a region near N79 and N83. The MIPS 70 and 160 μm images of the diffuse dust emission of the N79/N83 region reveal a similar distribution to the gas emissions, especially the H I 21 cm emission. The measured point-source sensitivity for the epoch 1 data is consistent with expectations for the survey. The point-source counts are highest for the IRAC 3.6 μm band and decrease dramatically toward longer wavelengths, consistent with the fact that stars dominate the point-source catalogs and the dusty objects detected at the longer wavelengths are rare in comparison. The SAGE epoch 1 point-source catalog has \sim 4 \times 10^6 sources, and more are anticipated when the epoch 1 and 2 data are combined. Using Milky Way (MW) templates as a guide, we adopt a simplified point-source classification to identify three candidate groups – stars without dust, dusty evolved stars, and young stellar objects – that offer a starting point for this work. We outline a strategy for identifying foreground MW stars, which may comprise as much as 18% of the source list, and background galaxies, which may comprise \sim 12% of the source list.

Overview

Whew! What a mouthful of text! What does that abstract boil down to?

This article presents an overview and preliminary results from the SAGE survey, a survey of the Large Magellanic Cloud (LMC) in the infrared from 3.6 to 160 μm using the Spitzer Space Telescope. The goal of the survey was to obtain a dataset that could be used to study star formation, the ISM, and mass loss from evolved stars.

This paper does not directly answer science questions. Rather, the purpose of this paper is to report the status of the survey to the community so the community can plan follow up work. What is the nature and quality of the data? What sources have been identified, and how many of them? What are the limits of the survey? Using this information, other scientists can then pursue science questions, likely related to the broad science goals of the survey.

Science Objectives

The SAGE survey was built around three broad science goals:

  • Studying the properties of the diffuse ISM, in particular the properties of dust. SAGE aims to detect emission corresponding to column densities of 1.2 \times 10^{21} cm^{-2} of H or greater. This will allow SAGE to measure the dust-to-gas ratio across the ISM and look for variations. This traces the degree of dust-gas mixing in the LMC. One can imagine using this as a proxy for processes like turbulence. Further, by using color ratios and fitting the SED for different grain size distributions, SAGE will be able to study the spatial variation in grain size distribution in the LMC. In particular, SAGE aims to get high-resolution data on Polycyclic Aromatic Hydrocarbon (PAH) emission. This emission traces the smallest dust particles, which are important because of their role in heating. Thanks to their small size relative to the photon reabsorption length and processes like secondary and Auger electron emission, these small dust grains are particularly efficient at heating their surrounding gas via the photoelectric effect. Finally, SAGE will have the resolution and spectral range sufficient to distinguish stars from ISM, and identify individual regions of the ISM such as HII regions, molecular clouds, and photodissociation regions. This census will greatly improve the sample size of objects available to researchers studying these topics.
  • Obtaining a complete census of young stellar objects (YSOs). SAGE is sensitive to newly-formed stars with mass exceeding 3 \mathrm{M_{\odot}}. The survey should be able to obtain an unbiased, complete census of all such objects throughout the LMC. This should include both massive star formation regions (traced by HII regions) and lower mass star formation regions (traced by Taurus complexes). The two epochs of SAGE data should also allow variability studies for YSOs, looking for objects like FU Orionis stars, a class of stars that exhibit abrupt increases in luminosity on timescales of a year. Current models to explain these stars revolve around accretion from disks onto pre-main-sequence objects.
  • Studying mass loss in evolved stars. SAGE can detect mass-loss rates down to 10^{-8} \mathrm{M_{\odot}} / \mathrm{yr}. This should allow a complete census of evolved stars with appreciable mass loss rates throughout the LMC. The two epochs of SAGE data should also inform studies of variability in mass loss rates, since mass loss processes in evolved stars are also thought to occur on timescales of around a year.

Figure 2 from the paper, showing the discovery space of SAGE.

Observational Approach

The SAGE survey aims to understand and characterize the ISM, star formation, and mass outflows. These phenomena are permeated with dust, which impedes observations in the visible due to extinction. However, dust extinction in the near-IR is minimal, while in the far-IR the presence of dust becomes an asset, as irradiated dust reradiates in that wavelength regime. This motivates the use of IR observations to achieve SAGE’s science goals.

SAGE builds on a number of IR surveys, both ground-based (2MASS) and space-based (IRAS). Thanks to improvements in telescope design and detector sensitivity, Spitzer combines a greater wavelength range than IRAS with freedom from the atmospheric absorption that 2MASS had to contend with. These advances allow SAGE to target fainter – and more numerous – dusty sources in the LMC that were out of the reach of previous surveys. Specific goals of SAGE include detecting IR point sources down to the spatial resolution limit of Spitzer, and mapping dust emission from HII regions, molecular clouds, and other features of the ISM at high SNR.

SAGE observed the LMC using Spitzer’s IRAC and MIPS cameras. A 7^{\circ} \times 7^{\circ} region corresponding to the LMC was imaged by the telescope in 7 different bands, from the near to far IR. IRAC imaged in the near-IR in the 3.6, 4.5, 5.8 and 8.0 \mathrm{\mu m} bands (291 hours) and MIPS in the mid and far IR in the 24, 70 and 160 \mathrm{\mu m} bands (217 hours). Individual tiles underwent absolute photometric calibration using a network of pre-selected standard stars and were then mosaicked together to form the final survey. Data were taken in two epochs separated by 3 months to allow variability studies. The combined epoch 1 and 2 data have gaps smaller than a pixel for MIPS. The survey achieved an angular resolution of 2” for IRAC, corresponding to a 0.5 pc spatial resolution at the distance of the LMC. Similarly, MIPS achieved a resolution of 6”, which corresponds to 1.5 pc in the LMC.
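The quoted angular-to-physical conversions follow from the small-angle formula; a quick check, using the 50 kpc LMC distance quoted below:

```python
def physical_resolution_pc(theta_arcsec, distance_pc):
    """Physical scale subtended by an angle at a given distance (small-angle approximation)."""
    return theta_arcsec * distance_pc / 206265.0  # 206265 arcseconds per radian

D_LMC_PC = 5.0e4  # distance to the LMC [pc]
irac_res = physical_resolution_pc(2.0, D_LMC_PC)  # ~0.5 pc
mips_res = physical_resolution_pc(6.0, D_LMC_PC)  # ~1.5 pc
```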

The LMC has a number of properties that make it useful for exploring the SAGE science goals. First, it is an observationally opportune target. Its close proximity (50 kpc) relative to other galaxies allows for high photon flux and high SNR. A viewing angle of 35^{\circ} means the LMC is nowhere near edge-on, making it much easier to distinguish features. The combination of these properties allows features to be resolved in much greater detail than is possible when imaging more distant galaxies. Further, lines of sight to the LMC are clear of the Small Magellanic Cloud (SMC) and Milky Way (MW). The expected number of substantial clouds per line of sight is small, so if a cloud is detected along a line of sight, it is the only one. Not having to separate clouds along the line of sight greatly simplifies the analysis. Further, all clouds are located at approximately the same distance, making comparisons between lines of sight much easier.

Scientifically, previous surveys such as IRAS and 2MASS have shown that the LMC possesses features at all spatial scales, offering a rich diversity of structures to study and promising the opportunity for a wide variety of science. The low metallicity of the LMC (Z \approx 0.3-0.5 \, Z_{\odot}) corresponds to the mean metallicity of the universe at redshift z \sim 1.5, the era of peak star formation. Studying the LMC thus offers insight into what star formation and galaxy evolution might have been like during that key epoch.

Preliminary Results

The authors chose two HII regions within the LMC, N79 and N83, to investigate qualitatively in this paper in order to demonstrate the utility of the SAGE survey. N79 and N83 were chosen because they had not been studied at Spitzer wavelengths, contain both young and old stellar populations, and happened to be some of the first regions processed by the SAGE pipeline. Some preliminary results are sketched out below.

The authors compared dust emission in several bands to various tracers. They found that diffuse HI emission (from 21-cm maps) is well traced by the 70- and 160-μm bands, while the Hα and CO J=1-0 lines are traced by the 24 μm band (the origins of the HI, Hα and CO maps are given in the paper). The latter two lines have strong peaks in the HII regions of N79 and N83. The corresponding 24 μm emission is most likely from warm (~120 K) dust heated by young, massive stars. By contrast, 8 μm emission, which traces PAHs, is largely absent from bright star-forming regions, indicating that PAHs are destroyed by the intense UV radiation. See Fig. 9 from the paper, which is reproduced below:

Point-source classification was carried out by dividing up regions of color-color space based on Milky-Way templates. Objects were divided into three groups:

  1. Young Stellar Objects (YSOs)
  2. Stars without dust – main-sequence stars and red giants
  3. Dusty evolved stars – O-rich and C-rich AGB stars, OH/IR stars and carbon stars

The 3.6, 8.0 and 24 μm bands were chosen for the color-color diagram, as they provide the widest range of objects. Longer wavelengths contain too few point sources, while use of just the shorter bands would provide a smaller range of colors. The classification is shown in Fig. 10 from the paper:

Figure 12 of the paper plots these classified objects on various choices of color-magnitude diagrams. The authors comment that many of the point sources classified as young stellar objects are actually background galaxies. As noted in class by Alyssa, a more sophisticated (and accurate) method of classifying point sources is to work in the higher-dimensional space of point-source fluxes in several observed bands. In this larger space, different populations of objects separate out more clearly, while in color-magnitude space one is effectively taking a projection of the higher-dimensional space, in which different populations – e.g. YSOs and background galaxies – may be projected onto one another. Figure 12 in the paper shows several choices of color-magnitude space (the last two panels are reproduced below):

In these two panels, the point sources classified as YSOs separate out nicely from the stars without dust and dusty evolved stars, which in the right panel cluster along the main-sequence.

Follow-up Work

The SAGE paper that we read for Journal Club serves to introduce the survey and present some of its capabilities through the preliminary analysis sketched out above. More detailed results from two subsequent papers are outlined in what follows.

The first paper by Meixner et al. (2009), and titled “Measuring the lifecycle of baryonic matter in the Large Magellanic Cloud with the Spitzer SAGE-LMC survey” (ADS Link), summarizes certain SAGE results from a 2008 IAU symposium on the Magellanic system. The paper summarizes efforts to put lower bounds on the total rate of star formation and mass loss from AGB stars within the LMC.  The authors first present a map of dust column density derived from 160 μm emission, which is optically thin throughout the LMC, and thus a good tracer of the total dust mass of the galaxy. This map is compared to the column density of HI (from 21 cm emission) and CO. An “infrared excess map” is obtained by subtracting the total gas column density from HI and CO from that determined by dust emission. The infrared excess seems to follow the HI density, and the authors propose three possible origins for the excess: Hgas not traced by CO emission, HI not detected by 21 cm emission, or variations in the gas-to-dust ratio. The total ISM mass is estimated at 10^9 \, \mathrm{M_{\odot}}, as compared to a total stellar mass of 3 \times 10^9 \, \mathrm{M_{\odot}}. The authors summarize the work of Whitney et al. (2008) (ADS Link), who fit the most massive of ~1000 YSOs detected by the survey to a Kroupa (2001) (ADS Link) initial mass function and infer a total star formation rate of 0.1 \mathrm{M_{\odot}} \, / \mathrm{yr}. Whitney et al. note, however, that the UV flux from the LMC gives a higher star formation rate, and that their catalogue of YSOs in the LMC is by no means complete. Meixner et al. note that more recent work presented at the proceedings finds five times as many YSOs using the same color cuts as Whitney et al. Finally, the mass-loss rate of AGB stars is estimated by comparing the 8 μm emission to scaled atmospheric models. 
The 8 μm excess has previously been correlated with AGB mass-loss rates, and the resulting relation is applied to the entire sample of AGB stars, giving a total mass-loss rate of > 8.7 \times 10^{-3} \, \mathrm{M_{\odot}} / \mathrm{yr}.
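To make the IMF-extrapolation step concrete, here is a hedged sketch: if only the most massive young stars are counted, a Kroupa (2001) broken power law lets one scale the observed mass up to the total mass formed, and hence to a star-formation rate. The Kroupa segments are standard, but the 8 M_sun cut, the 1000 M_sun of massive stars, and the ~1 Myr formation timescale below are purely illustrative assumptions, not the values Whitney et al. actually used.

```python
# Sketch (not the authors' actual fit): extrapolating a total star-formation
# rate from the mass in massive YSOs using a Kroupa (2001) broken power-law IMF.
import numpy as np

# Kroupa (2001) segments: dN/dm ∝ m^-alpha on each mass interval (in Msun)
EDGES = np.array([0.01, 0.08, 0.5, 150.0])
ALPHAS = np.array([0.3, 1.3, 2.3])

def _segment_coeffs():
    """Normalization constants making dN/dm continuous across the breaks."""
    coeffs = [1.0]
    for i in range(1, len(ALPHAS)):
        m = EDGES[i]
        coeffs.append(coeffs[-1] * m ** (ALPHAS[i] - ALPHAS[i - 1]))
    return np.array(coeffs)

def mass_fraction_above(m_cut):
    """Fraction of total stellar mass formed in stars with m > m_cut."""
    coeffs = _segment_coeffs()
    seg_mass = np.zeros(len(ALPHAS))
    mass_above = 0.0
    for i, (a, c) in enumerate(zip(ALPHAS, coeffs)):
        lo, hi = EDGES[i], EDGES[i + 1]
        # integral of m * c * m^-a dm = c/(2-a) * (hi^(2-a) - lo^(2-a))
        def mint(x0, x1):
            return c / (2.0 - a) * (x1 ** (2.0 - a) - x0 ** (2.0 - a))
        seg_mass[i] = mint(lo, hi)
        if m_cut < hi:
            mass_above += mint(max(m_cut, lo), hi)
    return mass_above / seg_mass.sum()

# Illustrative numbers: if 1000 Msun formed in stars above 8 Msun over ~1 Myr,
# the implied total SFR is (1000 / f) / 1e6 Msun/yr (roughly a few 1e-3).
f = mass_fraction_above(8.0)
total_sfr = (1000.0 / f) / 1e6
```

With these assumptions roughly a fifth of the stellar mass sits above the cut, so the correction factor is a modest ~5; the real uncertainty lies in YSO completeness, as the paper itself notes.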

The second paper, by Bernard et al. (2008) (ADS Link), which is cited by the Meixner et al. (2009) paper above with regard to excess dust emission in the 160 μm band, deals with dust properties in the LMC. The authors find that dust temperatures in the LMC range from 12.1 to 37.4 K, with the warmest dust residing in the stellar bar. The authors find two types of IR excess. The first type, concerning disparities in column-density measures, was described above. The second “excess” occurs in the MIR, where the spectral energy distribution of the LMC departs significantly from that of the Milky Way, as shown in Fig. 5 of the paper:

The departure is not limited to the LMC; it is also present in the SMC. The authors consider a flatter size distribution of very small grains (VSGs) in the LMC to be the most likely explanation of the infrared excess. In all, however, they find the relative abundances of dust species in the LMC to be very similar to those in the MW.

Special Topics

In Uncategorized on April 26, 2011 at 1:25 pm

According to our end of class survey, here are topics that might warrant (more) discussion/inclusion in a future version of AY201b…

Kolmogorov & Burgers Turbulence
“Kolmogorov” turbulence refers to turbulence in an incompressible medium.  “Burgers turbulence” refers to turbulence in a compressible medium (cf. Burgers equation).
Example of use: “Kolmogorov-Burgers Model for Star-forming Turbulence”, Boldyrev 2002.

Dissociative Recombination
A process where a positive molecular ion recombines with an electron, and as a result, the neutral molecule dissociates.  Dissociative recombination is important in determining the chemical composition of the molecular interstellar medium, as it easily changes the balance amongst molecular species.
Typical example of dissociative recombination: CH_3^+ + e^- \rightarrow CH_2 + H

X-wind
The model of Frank Shu et al. for the formation of stars like the Sun.

Fig. 3 of Shang 2007 (reference below). Schematic drawing of a generalized picture of the X-wind. The dead zone is probably opened by continuous magnetic activity that results in reconnection events near the magnetic “Y” points and the current sheets. Adapted from Fig. 1 in Shu et al. (1997).

Recent Reference (and source of figure above): “Jets and Molecular Outflows from Herbig-Haro Objects,” Shang 2007.
Use in meteoritics: The X-wind model offers one explanation for the origin and nature of the chondritic meteorites found in our Solar System.

And, here are some topics that we’re betting will become (even) more important in the coming decade (in order of decreasing spatial scale)…

  • 21-cm tomography of the Early Universe
  • Magnetic fields in the IGM
  • Improved Extragalactic Star-Formation “Prescriptions” (e.g. Kennicutt-Schmidt), based on our understanding of the process near-er by
  • The role of magnetic fields in the low-density ISM (see WISE images…)
  • Numerical Models of star-forming regions that include ALL of these: gravity, heating/cooling, chemistry, radiative transfer, and magnetic fields.
  • AMR Simulations spanning pc to AU (star formation to planet formation)
  • Dense core “fragmentation”
  • The role of chemistry in core and planet formation
  • Time evolution of disk/planet formation (accretion, planet buildup, planet migration)
  • The relationship between planetary atmospheres and their formation environment
  • Laboratory measurements of the low-density, low-gravity, behavior of gases & solids (dust)
  • More…

Future Instruments

In Uncategorized on April 26, 2011 at 12:09 am

The next twenty years promises to be an exciting time for studying the ISM. Each of the following telescopes is described in detail on a separate wordpress page (click on the acronym) and on their official websites (click the full name).

Atacama Large Millimeter Array: ALMA

ALMA will begin early science observations with Cycle 0 in September 2011 and should be completed in 2013. The high spatial resolution of ALMA will allow astronomers to image young planets embedded in disks around nearby stars.

The James Webb Space Telescope: JWST

JWST is an exciting space-based infrared observatory that promises to acquire a wealth of photometric and spectroscopic information. For studies of the ISM, JWST will be particularly useful for mapping the distribution of dust and for observing obscured systems such as young stellar objects and circumstellar disks (see Gardner et al. 2006).

Thirty Meter Telescope: TMT

TMT will conduct near-UV, optical, and near-infrared observations of young stellar objects, protoplanetary disks, and hot, young Jovian planets. The large primary mirror of the telescope and the adaptive optics system will allow TMT to produce high-resolution images of star and planet formation that include small-scale details that are unobservable with current telescopes.

Giant Magellan Telescope: GMT

GMT has the same strong science case as TMT, but will be a ~25m telescope in the southern hemisphere. The main differences between GMT and TMT are shown in the table below.

Comparison of GMT and TMT

Comparison of GMT and TMT. GMT information from http://www.gmto.org/tech_overview. TMT information from http://www.tmt.org/observatory/telescope.

The Astro2010 Decadal Survey identified U.S. participation in a Giant Segmented Mirror Telescope (either GMT, TMT, or E-ELT) as Priority 3 for large, ground-based missions (after the Large Synoptic Survey Telescope and a Mid-Scale Innovations Program). As part of the process, the National Academy of Sciences conducted independent cost estimates of the telescope optics and instruments for GMT and TMT. The resulting estimates, at 70% confidence, are $1.1 billion for GMT construction and $1.4 billion for TMT construction. These estimates assume that the telescopes will begin science operations with adaptive optics and three instruments in spring 2024 for GMT and between 2025 and 2030 for TMT. Although both the TMT website and the GMT website indicate science observation start dates in 2018, the Decadal Survey estimates are probably more realistic.

The Giant Magellan Telescope

In Uncategorized on April 26, 2011 at 12:05 am
GMT at Twilight (GMTO Corporation)

An artist's conception of the Giant Magellan Telescope at twilight. Note the truck at the lower right for scale. Image copyright Giant Magellan Telescope - GMTO Corporation.

The Giant Magellan Telescope is a collaboration between the Carnegie Institution for Science, Harvard University, the Smithsonian Astrophysical Observatory, Texas A&M University, the Korea Astronomy and Space Science Institute, the University of Texas at Austin, the Australian National University, the University of Arizona, Astronomy Australia Ltd. and the University of Chicago. GMT should be completed around 2018.

The Telescope

The primary mirror of GMT will be composed of seven circular segments 8.4m in diameter arranged as shown in the figure below. In order to properly focus the light, the outer six segments are shaped asymmetrically like potato chips. The resolving power of GMT will be equivalent to the resolving power of a 24.5 meter telescope. The secondary mirror (also pictured below) consists of an adaptive shell for each of the primary mirror segments and will be controlled by the adaptive optics system to correct for atmospheric turbulence over a field of view 10′-20′ in diameter.
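As a rough illustration of what “the resolving power of a 24.5 meter telescope” means, the diffraction limit θ ≈ 1.22 λ/D can be evaluated directly. The H-band wavelength below is an arbitrary illustrative choice, not a quoted GMT specification:

```python
# Back-of-the-envelope diffraction limit (Rayleigh criterion) for an
# aperture equivalent to GMT's 24.5 m resolving power, vs. Hubble's 2.4 m.
import math

def diffraction_limit_arcsec(wavelength_m, diameter_m):
    """theta = 1.22 * lambda / D, converted from radians to arcseconds."""
    theta_rad = 1.22 * wavelength_m / diameter_m
    return math.degrees(theta_rad) * 3600.0

theta_gmt = diffraction_limit_arcsec(1.65e-6, 24.5)  # H band, ~0.017 arcsec
theta_hst = diffraction_limit_arcsec(1.65e-6, 2.4)   # ~10x coarser
```

The ratio of the two is just the ratio of apertures, which is why mirror diameter is the headline number for these projects (assuming the adaptive optics can actually reach the diffraction limit).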

Artist's conception of GMT primary mirror (Giant Magellan Telescope - GMTO Corporation)

An artist's conception of the primary and secondary mirrors of GMT. Image copyright Giant Magellan Telescope - GMTO Corporation.

The Site

The chosen site for GMT is Cerro Las Campanas in Chile. Cerro Las Campanas, pictured below, is located at an altitude of >2550 meters and has dry weather, dark skies, and good seeing. For more information about the site, see the GMT site selection page.

GMT on Cerro Las Campanas (GMTO Corporation)

An artist's conception of GMT on the peak of Cerro Las Campanas in Chile. Image from GMTO Corporation.

Instruments

GMT’s instruments will be placed behind the central primary mirror. There will be a large (6m x 5m) space directly behind the mirror for large instruments and a rotating platform for smaller instruments. See the technical overview page for more information about instrument mounting.

The proposed first generation instruments for GMT are shown in the table below from the GMT Progress Report SPIE Conf. 7012-46. According to the report, three instruments will be selected for first light.

GMT Instrument Concepts (Progress on the GMT, Johns)

GMT Instrument Concepts. (Table 6 from SPIE 7012-46, Progress on the GMT by Matt Johns at Carnegie Observatories)

Science Goals

The science goals that will be addressed by GMT include:

  • Detection and characterization of exoplanets
  • Study of dark matter and dark energy
  • Observations of stellar populations and the origin of elements
  • Observations of black hole growth
  • Study of galaxy formation
  • Observations of the epoch of reionization

The Thirty Meter Telescope

In Uncategorized on April 25, 2011 at 11:14 pm
Artist's Impression of TMT from NASA.

Artist's Impression of the Thirty Meter Telescope from NASA.

The Thirty Meter Telescope (TMT) is a collaboration between the Association of Canadian Universities for Research in Astronomy, the California Institute of Technology, the University of California, the National Astronomical Observatory of Japan, the National Astronomical Observatories of the Chinese Academy of Sciences, and the Department of Science and Technology of India. According to the TMT Timeline, First Light should occur in October 2017 and the first science should be conducted in June 2018.

The Telescope

The thirty meter primary mirror of TMT will be segmented into 492 1.44m hexagonal segments as shown in the image below. After hitting the primary mirror, the light will be reflected onto a tiltable 3.1m secondary mirror and then onto a 3.5m x 2.5m elliptical tertiary mirror that will send the light into the instruments on the Nasmyth platforms. TMT will have two Nasmyth platforms with space for eight instruments total.

TMT Primary Mirror (TMT Collaboration)

An artist's conception of the segmented primary mirror of TMT. The 1.44m hexagonal segments will be placed only 2.5mm apart. The elliptical tertiary mirror is shown at the center of the primary mirror. Note the tiny person in the upper left for scale. (TMT Collaboration)

The Site

TMT design operations are based in Pasadena, CA, but the selected telescope site is within the 36-acre “Area E” on the summit of Mauna Kea in Hawaii as shown on the map below. Mauna Kea is a well-established site for observatories due to the high-quality seeing, dry conditions, and typical lack of cloud cover. Once constructed, the TMT complex would consist of a dome 56m in height and 66m wide, 5 acres of roads, and 1.44 acres of buildings.

Proposed Site for TMT (UH and USGS)

Proposed Site for TMT in Area E on the summit of Mauna Kea. For reference, the locations of existing telescopes are indicated by the numbered yellow circles. Map produced by UH and USGS.

Instruments

In addition to the Narrow Field Infrared Adaptive Optics System (NFIRAOS), TMT will have three first light instruments:

  1. Wide Field Optical Spectrometer (WFOS): Spectroscopy and imaging without AO at near-ultraviolet and optical wavelengths (0.3-1.0 microns) over a >40 square arcminute FOV.
  2. InfraRed Imaging Spectrometer (IRIS): Integral-field spectroscopy and diffraction-limited imaging at near-infrared wavelengths (0.8-2.5 microns).
  3. InfraRed Multi-object Spectrometer (IRMS): Slit spectroscopy and diffraction-limited imaging at near-infrared wavelengths (0.8-2.5 microns) over a 2′ diameter FOV.

Science Goals

As explained in the TMT Science Case, the science goals for TMT are:

  • Spectroscopy of the first galaxies
  • Observations of the formation of large-scale structure
  • Detection and investigation of central black holes
  • Observations of star and planet formation
  • Characterization of exoplanet atmospheres
  • Direct detection of exoplanets

The Atacama Large Millimeter Array

In Uncategorized on April 25, 2011 at 10:32 pm
8 of the ALMA Antennas (ESO/NAOJ/NRAO)

Eight of the 12m ALMA Antennas. Image from ALMA (ESO/NAOJ/NRAO)

The Array

The construction of ALMA began in 2003 and should be finished in 2013. Although the array is still under construction, ALMA is currently accepting proposals for Fall 2011 using 16 antennas and four of the ten receiver bands. More information on the Early Science Cycle 0 Call for Proposals is available on the ALMA website. The deadline for submission of Notices of Intent is April 29, so get writing!

When completed, ALMA will consist of 50 12-m antennas. Like the antennas in the Very Large Array and the Submillimeter Array, the ALMA antennas will be mobile to allow for different observing configurations and consequently different spatial resolutions. In addition to the 50 12-m antennas, ALMA will also include 12 7-m antennas. Those 7-m antennas and four of the 12-m antennas will make up the Atacama Compact Array (ACA) and will remain in roughly the same position for all observations to increase ALMA’s ability to map large scale structures.

The Site

The Atacama Large Millimeter Array is currently under construction on the Chajnantor plain in the Atacama desert in Chile. Since the site is at an altitude of 5000 meters and quite dry (precipitable water vapor ~ 1 mm), the atmospheric transparency should be excellent for submillimeter observations. The figure below displays a plot of the atmospheric transmission at Chajnantor and the ALMA Observation Bands. Logically, the ALMA Observation Bands were chosen to fit between the major absorption features of water and oxygen.

Atmospheric Transmission at Chajnantor

Atmospheric Transmission at Chajnantor. The colored bands indicate the ALMA Observing Bands. The red bands (3, 6, 7, and 9) will be available first. The primary sources of absorption are H2O (22.2, 183, 325, 380, 448, 475, 557, 621, 752, 988, and 1097 GHz) and oxygen (50-70 GHz and 118 GHz).

Science Goals

ALMA will achieve three main goals:

  1. Detect line emission from CO or C II from galaxies at a redshift of z=3 in less than 24 hours.
  2. Observe gas kinematics in protostars and protoplanetary disks within 150 pc.
  3. Acquire high dynamic range images at high angular resolution (0.1″).

Extrasolar Planets and Protoplanetary Disks

ALMA will be particularly useful for detecting extrasolar planets and stars during the early stages of formation. The figure below shows a simulation by Wolf & D’Angelo 2005 of possible ALMA observations of embedded Jovian planets. The 1 Jupiter mass and 5 Jupiter mass planets are clearly visible at both 50 pc and 100 pc!

Simulation of ALMA observations of an embedded planet by Wolf & D'Angelo 2005

Simulation of ALMA observations of an embedded planet. The dot in the lower left represents the combined beam size. Left: 1 Jupiter mass planet around a 0.5 Solar mass star in a 0.01 Solar mass disk. Right: 5 Jupiter mass planet around a 2.5 Solar mass star, Top: Distance of 50 pc, Bottom: Distance of 100 pc. Figure 2 from Wolf & D'Angelo 2005.

AGB Stars and Interstellar Dust Grains

ALMA will also advance studies of interstellar dust grains by allowing scientists to create high resolution (<0.1") images of the dust condensation zones around AGB stars at distances of a few hundred parsecs. By comparing the angular sizes of CO envelopes around evolved stars to the known distances of those stars, astronomers will be able to determine the physical size of CO emitting regions. The distances to other evolved stars could then be estimated by comparing the observed angular size of the CO emitting regions around stars of unknown distances to the newly discovered characteristic physical size of CO emitting regions. Once the distances to a large number of evolved stars have been determined, astronomers could then map out the distribution of AGBs.
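The distance argument above amounts to the small-angle relation size [AU] = θ [arcsec] × d [pc], calibrated on stars of known distance and then inverted. A minimal sketch, with entirely hypothetical envelope sizes and distances:

```python
# Hedged sketch of the standard-size distance method described above.
# The envelope sizes and distances are made-up illustrative numbers.

def physical_size_au(angular_size_arcsec, distance_pc):
    # Small-angle relation: 1 AU subtends 1 arcsec at 1 pc, by definition.
    return angular_size_arcsec * distance_pc

def distance_pc(characteristic_size_au, angular_size_arcsec):
    # Invert the relation once a characteristic physical size is calibrated.
    return characteristic_size_au / angular_size_arcsec

# Calibration star: a CO envelope subtending 20" at a known 250 pc -> 5000 AU
size = physical_size_au(20.0, 250.0)
# A star of unknown distance whose envelope subtends 10" -> inferred 500 pc
d = distance_pc(size, 10.0)
```

The method stands or falls on whether CO-emitting regions really do have a characteristic physical size, which is exactly what the proposed ALMA calibration observations would test.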

Other Research Areas

Since many molecular transitions occur at submm wavelengths, ALMA will be sensitive to the presence of a wide range of molecules. Additionally, ALMA will be able to measure the radii and rotation rates of stars and monitor the activity of the Sun.

Observations

ALMA will conduct observations in ten different bands from 84 GHz to 720 GHz at resolutions between 6 mas and 0.7″. The resolution is frequency- and baseline-dependent: the synthesized beam grows (i.e., the resolution becomes coarser) at shorter baselines and lower frequencies. Within a given band, ALMA will produce a data cube containing up to 8192 frequency channels with widths between 3.8 kHz and 2 GHz. In the most compact configuration, ALMA will have baselines between ~18m and ~125m. In the extended configuration, the baselines will be between ~36m and ~400m. See the ALMA Capabilities page for more detailed information about planning observations with ALMA.
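The frequency/baseline scaling can be checked with the back-of-the-envelope relation θ ≈ λ/B_max (ignoring the exact beam-weighting factor, so treat the numbers as order-of-magnitude), using the configurations and band edges quoted above:

```python
# Rough synthesized-beam estimate: theta ~ lambda / B_max, in arcseconds.
C = 299792458.0          # speed of light, m/s
ARCSEC_PER_RAD = 206265.0

def beam_arcsec(freq_ghz, max_baseline_m):
    wavelength = C / (freq_ghz * 1e9)
    return (wavelength / max_baseline_m) * ARCSEC_PER_RAD

compact_low = beam_arcsec(84.0, 125.0)     # lowest band, compact config: ~6"
extended_high = beam_arcsec(720.0, 400.0)  # highest band, ~400 m config: ~0.2"
```

The 6 mas figure quoted above requires the much longer baselines of the full array; the early-science configurations listed here only reach the arcsecond-to-subarcsecond regime.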

The James Webb Space Telescope

In Uncategorized on April 25, 2011 at 10:27 pm
Schematic of JWST from NASA

Schematic of JWST. The large sunshield (blue) blocks radiation from the Sun, Earth, and Moon from reaching the science instruments in the ISIM (Integrated Science Instrument Module). Image from NASA.

JWST aboard Ariane 5

An artist's conception of JWST folded and ready for launch aboard an Ariane 5 rocket. Image from Arianespace/ESA/NASA.

The 6.5 meter James Webb Space Telescope (named for former Apollo-era NASA Administrator James Webb) is scheduled to be launched from the Ariane 5 launch site in French Guiana in 2014. Unlike Hubble, which is in Earth orbit, JWST will orbit around the Earth-Sun L2 Lagrange point. The decision to send JWST to L2 was motivated by the need to cool the spacecraft in order to conduct observations in the infrared. Although parts of JWST are actively cooled, the remainder of the spacecraft will be passively cooled by placing the spacecraft in deep space and deploying a tennis court sized shield to block light from the Sun, Earth, and Moon.

Comparison of Hubble and JWST primary mirrors from NASA

Size comparison of JWST's 6.5 m diameter primary mirror to Hubble's 2.4 m diameter mirror. Graphic from NASA.

Due to their large size, the sunshield and the primary mirror of the telescope will be folded to fit inside the payload compartment of the Ariane 5 ECA launch vehicle. The figure on the right compares the JWST’s large primary mirror to Hubble’s 2.4 m mirror and the figure on the far right displays JWST in launch configuration. After launch, the telescope mirror will magically unfold and the solar panels will be deployed as shown in the deployment animation below.

Once the telescope has unfolded, JWST will conduct observations to address four key science goals:

  1. “The End of the Dark Ages: First Light and Reionization”
  2. “Assembly of Galaxies”
  3. “The Birth of Stars and Protoplanetary Systems”
  4. “Planetary Systems and the Origin of Life”

JWST will address those goals using four instruments attached to the Integrated Science Instrument Module(ISIM):

  1. Mid-InfraRed Instrument (MIRI)
  2. Near-InfraRed Camera (NIRCam)
  3. Near-InfraRed Spectrograph (NIRSpec)
  4. Fine Guidance Sensor Tunable Filter Imager(FGS-TFI)

As the name suggests, MIRI is sensitive to mid-infrared wavelengths between 5 and 27 micrometers (or 29 micrometers for spectroscopy). MIRI will be actively cooled to 7K and used for wide-field broadband imagery and medium-resolution spectroscopy. More information about MIRI is available from the University of Arizona and the Space Telescope Science Institute.

NIRCam serves the dual role of acquiring high angular resolution images at 0.6-5 microns over a 2.2’x2.2′ field of view and conducting wavefront sensing using the Optical Telescope Element wavefront sensor. Although JWST will not have to contend with the Earth’s atmosphere, minor differences in the shape and position of the primary mirror segments could introduce distortions and phase variations in the wavefronts received by each segment. The Optical Telescope Element wavefront sensor will be used to monitor such distortions and reshape and realign the mirror segments to correct the wavefront errors.

For science observations, NIRCam will be operated in one of three imaging modes (survey, small source, or coronagraphic) or in medium-resolution spectroscopy mode. More information about the NIRCam imaging modes and filters is available from the Space Telescope Science Institute.

NIRSpec will use a “microshutter array” to acquire simultaneous 0.6-5 micron spectra of 100 objects over a 3’x3′ field of view. In addition to the novel “Micro-Shutter Assembly” (MSA) mode, NIRSpec may also be operated in Fixed Slit mode or Integral Field Unit mode with the spectral resolutions shown in the following table from STSCI.

NIRSpec Instrument Modes from STSCI

Description of NIRSpec Instrument Modes from the Space Telescope Science Institute.

The Fine Guidance Sensor component of FGS-TFI consists of a 1-5 micron broadband guide camera capable of finding a guide star with 95% probability anywhere in the sky. FGS will be used to monitor JWST’s pointing throughout the mission and to properly deploy the primary mirror during the unfolding phase of the mission. The second half of FGS-TFI, the Tunable Filter Imager, is a science instrument that will be used to acquire narrow-band images between 1.6-4.9 micrometers at R~100 resolution over a wide 2.2’x2.2′ field of view. TFI is being built by the Canadian Space Agency.

The Magellanic Stream

In Uncategorized on April 24, 2011 at 9:59 pm

The Magellanic Stream (MS) is an extended HI stream encircling the Milky Way (MW). It contains at most a handful of stars, but more than 10^8 solar masses of neutral hydrogen. First understood as a remnant of the Magellanic Clouds (MCs) some 30+ years ago (Wannier & Wrixon 1972; Mathewson et al. 1974), the Magellanic Stream has since been studied heavily, both for its insights into extragalactic gas replenishment and hierarchical merging scenarios, and as a window on its progenitor(s?) – the Large Magellanic Cloud (LMC) and the Small Magellanic Cloud (SMC). As observational techniques and telescope sensitivity have improved, the observed size of the MS has increased from ~100˚ to potentially ~200˚ (Nidever et al. 2010).

From Nidever et al. (2010); original image from Mellinger 2009

Today, the MS is understood in the context of the larger Magellanic system, which includes — in addition to the MS — the LMC, the SMC, a “Bridge” of gas connecting the two interacting galaxies, and a “Leading Arm” (LA) of diffuse HI clouds extending from the LMC/SMC in the opposite direction as the MS.

History of History: The Debate over the Formation

Until recently, the formation of the MS was understood as either one or a combination of two physical processes: tidal stripping of the LMC/SMC by the MW (e.g., Murai & Fujimoto 1980; Gardiner & Noguchi 1996), or ram pressure stripping of the LMC/SMC pair as they plunged through the MW hot halo (e.g., Meurer et al. 1985; Mastropietro et al. 2005). The first of these processes necessarily implied that the LMC/SMC had long been kept in a tight (~2-2.5 Gyr) orbit around the MW, leaving enough time to strip the requisite amount of gas to form the Magellanic Stream. Even in the ram-pressure-dominated case, close proximity is required for the LMC-SMC pair to be disturbed by passing through the hot halo. If either of these formation models were correct, the MS would be a relatively young feature, formed ~1.5 Gyr ago, around the time of its pericentric passage.

However, Hubble Space Telescope (HST) proper motion studies by Kallivayalil et al. (2006a, 2006b) have led to revised models for the past orbital motions of the LMC/SMC pair. Besla et al. (2007) used these new velocities (~80 km/s higher than previous measurements) as priors in a backward-integration orbital model to show that the Magellanic Clouds could at most have made one orbit about the MW, and that with a Lambda-CDM-motivated NFW dark matter profile, the MCs are on their first passage around the MW. Such an orbit would rule out the traditional formation models, so Besla et al. (2010) argue that the Magellanic Stream is a product of LMC-SMC interaction alone. In an N-body+SPH simulation, they manage to reproduce many key observational features of the Magellanic system, including the absence of stars from the MS, the projected location of the MS, and the asymmetry between the MS and the LA. HI column densities are qualitatively similar to observations, but are not reproduced faithfully, and the line-of-sight velocities have a much larger spread than in reality. Besla et al. suggest that the inclusion of metal cooling, ionization, and interactions with the hot halo may allow for a more realistic reproduction of HI density gradients across the stream.

Original Caption (from Besla et al. 2010): Fig. 2 Stellar surface brightness, H i gas column densities, and line-of-sight velocities of the simulated Magellanic system. Top panel: the resulting stellar distribution is projected in Magellanic coordinates (N08), a variation of the Galactic coordinate system where the Stream is straight. The distribution is color-coded in terms of V-band surface brightness. The past orbit of the LMC/SMC is indicated by the blue lines. Middle panel: the H i gas column densities of the simulated stream range from 1018 to 1021 cm−2 , as expected (Putman et al. 2003). The white circle indicates the observed extent of the LMC’s H i disk: the simulated LMC is more extended than observed, indicating ram pressure likely plays a role to truncate the disk. In both the top and middle panels, the solid white line indicates the past orbit of the SMC according to the old theoretically derived PMs (GN96) which was a priori chosen to trace the Stream on the plane of the sky. The true orbits (determined by all PM measurements) for the LMC/SMC are indicated by the yellow lines. Bottom panel: the line-of-sight velocities along the simulated stream are plotted and color-coded based on H i column density, as in the middle panel. The white line is a fit to the observed data (N08). The LMC disk is too extended, causing a larger velocity spread than observed. The line-of-sight velocities along the past orbits of the LMC/SMC are indicated by the yellow lines, which do not follow the true velocities along the Stream (e.g., B07, Figure 20). The Stream is kinematically distinct from the orbits of the Clouds.

Observations, meanwhile, continue to drive competing theories of the MS formation. Canonical values for the MW circular velocity are ~220 km/sec, but recent astrometric parallax measurements suggest that it ought to be revised upward to ~250 km/sec (e.g. Reid et al. 2009). Using this higher circular velocity, Diaz & Bekki (2011a) provide a model that reproduces many observational features of the MS, in which the LMC and SMC were independently bound to the MW for at least 5 Gyr and only more recently became mutually bound. Like that of Besla et al., this model relies on the LMC tidally stripping SMC gas to form the MS, but it does so in a bound orbit. The model relies on an unrealistic isothermal dark matter halo, used to artificially impose a flat circular velocity profile; a more cosmologically-motivated NFW profile, however, would likely not change the fundamental result that a bound orbit remains plausible, given the uncertainties in even the most recent observations. Moreover, Diaz & Bekki (2011b) introduce a semi-analytic term for the drag caused by the hot halo, and are able to better reproduce LA kinematics while leaving the MS itself relatively unchanged. This hints that more realistic treatments of the hydrodynamic effects of the hot halo may indeed be able to account for the diffuse, filamentary, cloud-like structure of the MS and the LA.

While the formation theories continue to disagree on fundamental points (such as whether the LMC and SMC spent the majority of a Hubble time bound or unbound to the MW), they increasingly also point to a consensus in which a combination of gravitational, hydrodynamic, and stellar feedback effects are responsible for the formation of the Magellanic Stream, and also for the particulars of the content of the MS’s interstellar medium. For example, Nidever et al. (2008) propose a different, but not contradictory model of formation based on observations of large outflows from supergiant shells in the LMC. In this scenario, SNe explosions push gaseous material to larger radii, allowing ram pressure and/or gravitational forces to do the remaining work of removing the gas into the MS and LA. Such a proposal can coexist with either the picture of Besla et al. or of Diaz & Bekki.

Streaming Forward: A Reservoir of Science and Cold Gas

The debate over the formation of the Magellanic Stream does not take place in a vacuum (pun intended): understanding its history allows for a better understanding of its present and future. While the above theories and observations are primarily focused on the dynamics of the HI stream/clouds in the context of interacting galaxies, many current avenues of research use the stream as a probe of the interstellar medium. Fox et al. (2010) present spectroscopic measurements of the MS, using background quasars as backlights, to determine the metallicity and ionization of the gas. The most interesting result is the high level of ionization in the gas, which the authors argue can only be explained by a multi-phase plasma model in which the ionization results from collisions between clouds and the warm-hot medium. These high ionization levels suggest that the Magellanic Stream may not survive long enough to replenish the MW and thus fuel further star formation; instead, the MS filaments may be merely transitory features that subsequently dissolve into the coronal plasma. Meanwhile, metallicity measurements of [O/H] = –1.00, comparable to the metallicity of the SMC, provide additional evidence that the MS is formed from LMC stripping of the SMC, rather than MW stripping of both the LMC and SMC.

Original Caption (from Fox et al. 2010): Figure 1. H I map of the MS color coded by velocity and centered on the South Galactic Pole, using 21 cm data from Hulsbosch & Wakker (1988) and Morras et al. (2000) with a sensitivity of log N(H I) ≈ 18.30. The positions of our two sight lines are marked.

What Can be Learned?

The Magellanic Stream will clearly continue to have much to offer both observational and theoretical astrophysicists in the coming decade. As the modeling discussed in the historical section above demonstrates, the addition of more realistic hydrodynamics will potentially allow for remarkably close replication of the actual observables of the LMC/SMC dwarf galaxy pair. It is important, however, to remain cautious about over-fitting the specifics of one system: chaotic effects will render certain features difficult if not impossible to replicate, and in attempting to reproduce them it is possible to stray from understanding the basic physics into adopting unneeded prescriptions. Hopefully, instead, the Magellanic Stream will serve as a proof-of-concept for these types of mergers in general. With improved models, it may also become possible to place constraints on the halo shape and symmetry, rather than assuming a profile as a prior. In the shorter term, however, the Stream is most likely to remain a probe of the warm-hot halo, where model predictions can be compared directly with observations, something not possible for the dark matter halo.

The Very High Redshift Universe (in HI)

In Uncategorized on April 21, 2011 at 2:18 pm

An excellent schematic, from Loeb 2006

The definitive review (121 pages) is Furlanetto, Oh & Briggs 2006.

Selected Key points/Figures from Furlanetto et al. 2006

21-cm “tomography”

Figure attributed to "Chang and Oh, in preparation" by Furlanetto et al. 2006

21-cm “forest”

C.L. Carilli, N.Y. Gnedin, F. Owen, Astrophys. J. 577 (2002) 22–30

ARTICLE: On the Density of Neutral Hydrogen in Intergalactic Space

In Journal Club, Journal Club 2011 on April 21, 2011 at 2:07 pm

Read the (classic) paper by Gunn & Peterson (1965).

Summary by Ragnhild Lunnan & Aaron Meisner.

The starting point of the Gunn & Peterson analysis is the discovery of the quasar 3C 9, which at a redshift of 2 has the Ly-α line redshifted into the visible spectrum. The main idea can be summarized by a few sentences from the introduction:

“Consider, however, the fate of photons emitted to the blue of Ly-α. As we move away from the source along the line of sight, the source becomes redshifted to observers locally at rest in the expansion, and for one such observer, the frequency of any such photon coincides with the rest frequency of Ly-α in his frame and can be scattered by neutral hydrogen in his vicinity.”

Derivation of the Gunn-Peterson Trough

To calculate this effect is a relatively straightforward exercise in radiative transfer (and cosmology). Start with a standard metric: ds^2 = dt^2 - a(t)^2(du^2 + u^2 d\gamma^2)

The probability of scattering a photon in a proper length interval dl = a(t) du is given by dp = n(t)\sigma(\nu_s) dl, where n(t) is the number density of neutral hydrogen at time t, and \sigma(\nu_s) is the cross-section for the Ly-α transition. Here \nu_s is the frequency of the photon in the local rest frame of the scattering gas, i.e. \nu_s = \nu (1+z), where z is the redshift at which the photon is scattered (less than the redshift z_0 of the quasar). Plugging in \sigma(\nu) = \frac{\pi e^2}{m c} f g(\nu - \nu_a), together with an expression for dl/dz from the cosmology, the total optical depth is found by integrating over redshift. This yields a relation between the optical depth and the number density of neutral hydrogen.

(While the 1965 cosmological model used in the paper is outdated, the structure of the argument remains the same. The q_0 = 1/2 model in the paper corresponds to a cosmology where \Omega = \Omega_m = 1.)
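
As a rough illustration of how strong this effect is, for a uniform IGM the redshift integral collapses to \tau \approx \frac{\pi e^2}{m_e c} f \, n_{HI} \lambda_\alpha / H(z). Here is a minimal numerical sketch of that estimate; the cosmology (H_0 = 70 km/s/Mpc, \Omega_m = 0.3) and the mean hydrogen density are modern assumed values, not the 1965 model used in the paper:

```python
import math

# Back-of-envelope Gunn-Peterson optical depth for a uniform IGM (cgs):
#   tau = (pi e^2 / m_e c) * f * n_HI * lambda_alpha / H(z)
E_ESU   = 4.803e-10    # electron charge, esu
M_E     = 9.109e-28    # electron mass, g
C       = 2.998e10     # speed of light, cm/s
F_LYA   = 0.4162       # Ly-alpha oscillator strength
LAM_LYA = 1215.67e-8   # Ly-alpha wavelength, cm
H0      = 70 * 1.0e5 / 3.086e24   # 70 km/s/Mpc in s^-1
OMEGA_M = 0.3

def hubble(z):
    """Matter-dominated H(z); adequate for z > 1."""
    return H0 * math.sqrt(OMEGA_M * (1 + z) ** 3)

def tau_gp(z, x_hi=1.0):
    """Optical depth for neutral fraction x_hi at the mean cosmic density."""
    n_h = 1.9e-7 * (1 + z) ** 3                          # cm^-3
    sigma_int = math.pi * E_ESU**2 / (M_E * C) * F_LYA   # cm^2 Hz
    return sigma_int * x_hi * n_h * LAM_LYA / hubble(z)

print(f"tau(z=3, fully neutral) ~ {tau_gp(3.0):.0e}")
print(f"tau(z=3, x_HI = 1e-4)   ~ {tau_gp(3.0, 1e-4):.1f}")
```

A fully neutral IGM gives \tau \sim 10^5; even a neutral fraction of 10^{-4} still gives \tau > 1, which is why the absence of a trough forces the extreme ionization Gunn & Peterson inferred.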

Interpretation: Hydrogen must be ionized!

Gunn & Peterson point out, however, that given the modest suppression of flux bluewards of Ly-α in the spectrum of 3C 9, the inferred density of neutral hydrogen is extremely small. This lack of a “Gunn-Peterson trough” (as the effect later became known) is reconciled as follows:

“We are thus led to the conclusion that either the present cosmological ideas about the density are grossly incorrect, and that space is very nearly empty, or that the matter exists in some other form. [...] It is possible that [the assumption that intergalactic space is filled with hydrogen gas] is still valid but that essentially all of the hydrogen is ionized; this conclusion can be defended if we are allowed to make the intergalactic hydrogen temperature high enough.”

The authors further argue that collisional ionization is too inefficient and so unlikely to be the culprit, but radiative ionization can do the job provided that the temperature of the IGM is high enough.

Constraints on the Epoch of Reionization

Finally, it is pointed out that ionized hydrogen will lead to non-negligible optical depth due to Thomson scattering, for very distant objects. The measurement of the Thomson optical depth in the CMB, in fact, places constraints on the mean redshift of reionization.

More directly, though, observations of the (lack of) Gunn-Peterson troughs in quasars place lower limits on the end of reionization: if no trough is observed at z=6, for example, reionization must have been essentially complete at that redshift.

(The first claim of a detected Gunn-Peterson trough was made by Becker et al. in 2001, for a quasar at z ~ 6.2.)

Minimum Mass Scales for Formation of Massive Star-Forming Galaxies

In Journal Club, Journal Club 2011 on April 20, 2011 at 9:23 pm

Overview

This article describes a result from the recent paper by Amblard et al. (2011). This paper studied the Cosmic Infrared Background (CIB), the emission from background, sub-resolution submillimeter galaxies, using the Herschel Space Observatory. By taking a power spectrum of the CIB and comparing it to models of the dark matter distribution, they were able to find a lower bound on the halo mass scale required to form a large, star-forming galaxy.

Instrument

The observatory used for this study was the Herschel Space Observatory. Herschel is a 3.5-meter ESA telescope sensitive to the far-IR and sub-mm wavelength ranges. Its orbit about the Sun-Earth L2 point allows it to avoid the thermal cycling and changing fields of view that challenge telescopes in Earth orbit. The specific instrument used was SPIRE, a sensitive photometer with passbands at 250, 350 and 500 microns. All three channels were used in this study.

Figure 1: Herschel Space Observatory. Figure from http://en.wikipedia.org/wiki/Herschel_Space_Observatory

Cosmic Infrared Background & Observations

Faint, sub-mm galaxies are responsible for more than 85% of the extragalactic sub-mm light; this background emission is termed the Cosmic Infrared Background. Low-resolution observations (such as those of Herschel SPIRE) cannot resolve these star-forming galaxies individually, but their clustering should still be visible as variations in the CIB intensity (Amblard et al. 2011).

This paper uses these variations to study large-scale structure in the universe. To this end, the authors conduct a 13.5-hour survey of a 218’ by 218’ field in the “Lockman Hole”. This region has very little Galactic dust, so the extragalactic CIB emission is more readily detected. The authors implement a number of sophisticated techniques to remove foreground objects and instrumental effects, creating a map of the CIB – the first to date.

Power Spectrum and Galaxy Clustering Model

So what does this map tell us? Nothing – yet. To get useful information out of this map, a few more steps are needed. First, the authors take a power spectrum of the data, which measures the amplitude of intensity fluctuations at different angular separations. If sources are tightly clustered, there is more power at small separations, which corresponds to high k, where k = 1/separation.
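
The map-to-power-spectrum step can be sketched in a few lines: FFT the map, square the Fourier amplitudes, and average in annuli of constant |k|. In this toy version the map is just random noise standing in for a CIB map, and only the field size (~218') echoes the survey; everything else is illustrative:

```python
import numpy as np

# Toy azimuthally-averaged power spectrum of a 2D intensity map.
rng = np.random.default_rng(0)
npix, size_deg = 128, 3.63              # ~218' x 218' field
sky = rng.normal(size=(npix, npix))     # stand-in for a CIB map

ft = np.fft.fftshift(np.fft.fft2(sky))
power2d = np.abs(ft) ** 2               # 2D power

# |k| (in inverse degrees) for each Fourier pixel
freqs = np.fft.fftshift(np.fft.fftfreq(npix, d=size_deg / npix))
kx, ky = np.meshgrid(freqs, freqs)
kk = np.hypot(kx, ky)

# azimuthal average: bin the 2D power by |k|
nbins = 20
edges = np.linspace(0.0, kk.max(), nbins + 1)
which = np.digitize(kk.ravel(), edges)
counts = np.bincount(which, minlength=nbins + 2)
sums = np.bincount(which, weights=power2d.ravel(), minlength=nbins + 2)
pk = sums[1:nbins + 1] / np.maximum(counts[1:nbins + 1], 1)  # P(k) per bin
```

For real data one would also correct for the instrument beam and subtract shot noise, as the authors do, before fitting models to `pk`.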

To proceed further, we must make some key assumptions. First, the paper assumes that these galaxies trace the dark matter distribution; therefore, clustering of galaxies traces clustering of dark matter. Second, we must assume something about how that dark matter is distributed: the authors adopt the Navarro-Frenk-White profile for the dark matter halo density. Finally, the authors assume the concordance cosmological model, which allows, for example, conversion of redshift into distance.

Finally, the authors construct a model for the population of galaxies within a given dark matter halo, the Halo Occupation Distribution (HOD). This model divides galaxies into two types: central, high mass star-forming galaxies occupying the halo center, and small “satellite” galaxies. The model assumes a threshold mass M_{min} required for forming a central galaxy, so the number of central galaxies is N_{cen}=H(M-M_{min}), where H is the Heaviside step function. Note this model assumes any one halo will form at most one large star-forming galaxy. The number of satellite galaxies is taken to be N_{sat}=(\frac{M}{M_1})^{\alpha}, where M_1 and \alpha are parameters to be determined. M_1 corresponds to the mass scale required to form one satellite galaxy in addition to the central galaxy, and is constrained to be 10-25 times M_{min} based on numerical simulations.
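
The occupation functions above are simple enough to write down directly. In this sketch the parameter values are only illustrative placeholders of the order quoted in the paper (M_min from the averaged fit, M_1 in the 10-25 x M_min range, and \alpha set to 1):

```python
import numpy as np

# Illustrative Halo Occupation Distribution (HOD) as described above.
M_MIN = 3e11        # Msun; central-galaxy threshold (order of the fit result)
M_1 = 15 * M_MIN    # Msun; mass scale for one satellite (10-25 x M_MIN)
ALPHA = 1.0         # satellite slope; a fit parameter, value illustrative

def n_central(m):
    """Heaviside step: one central galaxy above M_min, none below."""
    return np.where(np.asarray(m) >= M_MIN, 1.0, 0.0)

def n_satellite(m):
    """Power-law satellite count, N_sat = (M / M_1)^alpha."""
    return (np.asarray(m) / M_1) ** ALPHA

halos = np.array([1e11, 5e11, 1e13])   # sample halo masses, Msun
print(n_central(halos))
print(n_satellite(halos))
```

Note that `n_central` caps the number of large star-forming galaxies at one per halo, exactly the assumption the text describes.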

Using these assumptions, the authors are able to derive a model power spectrum for the CIB, which they then fit to the data. Figure 2 below presents the clustering power spectrum and the fits. In modeling the power spectrum, the authors divide it as P(k,z)=P_{1h}(k,z)+P_{2h}(k,z), where P_{1h} is the 1-halo term (correlations between galaxies in the same halo, dominant at small scales) and P_{2h} is the 2-halo term (correlations between galaxies in different halos, dominant at large scales).

Figure 2: Clustering power spectrum and fits at 500 microns, taken from (Amblard et al 2011: Supplemental Information). The blue line represents shot noise (subtracted off). The green lines represent power spectrum fits. The dashed green line represents the 1-halo term, the dash-dotted green line represents the 2-halo term, and the solid green line represents the total clustering power spectrum.

Key Result: Minimum Mass Scale

In executing these fits, the authors find that for every waveband, M_{min} \sim 10^{11} M_{sun}. Averaged over all wavebands, they find M_{min} \approx 3 \times 10^{11} M_{sun}, with an uncertainty of \pm 0.4 in \log_{10}(M_{min}/M_{sun}). This result empirically sets a minimum mass scale for the dark matter halo required to form a central, massive star-forming galaxy.

This is huge. Since these galaxies are thought to be the most active sites of star formation in the known universe, this gives “the preferred mass scale of active star formation in the universe”. Higher mass halos are of course allowed, but tend to be much rarer. The authors speculate that this minimum constraint is set by photoionization feedback: any smaller, and the gravity well of the dark matter halo would be insufficient to keep the galaxy together against radiation pressure, and it would fragment into smaller galaxies. However, this interpretation should be treated with caution. This result runs far ahead of theory; existing semi-analytic galaxy formation models predict a mass scale of order 10 times larger or more. This result provides a key empirical (though heavily model-dependent) input for theorists to consider and work towards in developing their models, and improving our understanding of galaxy formation processes in the Universe.

Nearby Galaxies & the Kennicutt-Schmidt Relation

In Uncategorized on April 19, 2011 at 1:59 pm

(AG’s handwritten notes will be merged into this post after the class meeting on 4/19/11.)

Galaxy/ISM “Evolution”

Figure 1 of Galametz et al. 2011

Kennicutt-Schmidt Relations

Good intro: from TAUVEX web site.

Schmidt 1959 Paper on “The Rate of Star Formation”

Kennicutt 1998 Review Paper (see Figure 9, shown below)

from Kennicutt 1998

Current Understanding of Kennicutt-Schmidt Relations

(reproduced from Goodman & Rosolowsky NSF Proposal, 2008)

The very last line of Marten Schmidt’s 1959 paper reads: “the mean density of a galaxy may determine, as a first approximation, its present gas content and thus its evolution stage.”  And, as a “first approximation,” the relationship between the star formation rate and the gas density,

\rho_{\rm SFR} \propto \rho_{\rm gas}^{n}

that Schmidt put forward has held up remarkably well (Kennicutt 2007).  We seek here to see how far beyond “first approximation” studies of nearby star-forming regions can presently help us go in the extragalactic context, and how much a more refined view could help in future studies of galaxy evolution.

The modern version of the Schmidt Law is largely due to the work of Kennicutt and collaborators, who study the relation between “star formation rate” and “surface density” in nearby galaxies.  Many different indicators are used by Kennicutt et al., and by others, to measure each of these quantities, and we focus on the vagaries of their interpretation below.  Suffice it to say here that the “Kennicutt Law” holds over more than 4 orders of magnitude in surface density, and has a scatter about the relationship of roughly 1-2 orders of magnitude (Figure 6).  The Kennicutt Law is:

\Sigma_{\rm SFR} = a \, \Sigma_{\rm gas}^{q}

where a is a proportionality constant and the exponent q is typically of order 1.4 (Kennicutt 2008). If gas scale heights are assumed not to vary much from galaxy to galaxy, then re-writing the Schmidt law (which is expressed in volume densities) in terms of surface densities gives exponent q = 1.5, making it the same as the Kennicutt Law to within the uncertainties (Krumholz & Thompson 2007).

The K-S relation effectively implies that the efficiency function with which stars form from gas in galaxies is unchanging to within two orders of magnitude, after the many billions of years the galaxies in the sample have had to evolve.  That “efficiency function,” however, is not linear, in that q ≠ 1.

This non-linearity has led others to propose two kinds of revisions to K-S ideas, both of which rely upon the important fact (Lada 1992) that studies of local (Milky Way) star forming regions clearly show that stars only form in the densest regions of molecular clouds.

The first kind of “revised” K-S relation investigates how the SFR depends on the surface density of gas above a higher density threshold than the \sim 100~{\rm cm}^{-3} needed to produce ^{12}CO emission[1]. Gao & Solomon (2004) observed HCN, which is excited at densities above a few \times 10^{4}~{\rm cm}^{-3}, in a large sample of galaxies, and they derived the (linear) relation:

L_{\rm IR} \propto L_{\rm HCN}

This empirical relation has (slightly) less scatter than one using CO only, and, perhaps more importantly, has a linear power-law slope, suggesting that the surface density of gas traced by HCN may be a linear determinant of the star formation rate.  The Gao & Solomon work inspired Wu et al. (2005) to test the relationship between L_{\rm IR} and L_{\rm HCN} in Milky Way molecular clouds, and the linear relationship was shown to continue right down to the scale of local GMCs.  Many in the extragalactic community rejoiced at these results, which could have meant that the quest for the “perfect” tracer of star-forming gas in a galaxy ended at “whatever it is that emits in HCN.”  But, as we explain at the close of this section, the story is, alas, not that simple.

The second kind of revised K-S relation acknowledges that density may not be the only determinant of fecundity in molecular gas. Blitz, Rosolowsky, Wong and collaborators have put forward the idea that pressure, rather than density, is likely to be more fundamental (Blitz & Rosolowsky 2006 and references therein).  The idea that pressure is critical (cf. Bertoldi & McKee 1992) is supported by analysis of nearby star-forming regions, where it is clear that dense star-forming cores are often pressure-bound by the weight of the cloud around them (Lada et al. 2008) rather than being confined only by their own self-gravity.[2]

In §2.4 below, we lay out a plan for measuring properties of Milky Way clouds that should be able to test both of these physically-motivated “revisions” to empirical K-S laws, as well as other ideas.  First, though, let us consider what modern theory predicts.  Krumholz and McKee (2005) can “predict” (explain) the observed K-S relations with three premises they state as:

  1. star formation occurs in virialized molecular clouds that are supersonically turbulent;
  2. the density distribution within these clouds is log-normal, as expected for supersonic isothermal turbulence; and
  3. stars form in any subregion of a cloud that is so overdense that its gravitational potential energy exceeds the energy in turbulent motions.

Our own work long ago as well as many others’ (cf. Larson 1981) has shown that #1 is clearly true.  Our recent work has shown that #2 is true for at least one well-studied local star-forming region (see §1.2, and Figure 1).  Our work on dendrograms allows us to find the “subregions” #3 is talking about, and to quantify the ratio of turbulent to gravitational energy with a virial parameter (see §1.3.2, and Figures 4 and 5).

Additional recent theoretical work, motivated by Gao & Solomon’s HCN results, predicts the origin of the K-S relations seen not only in ^{12}CO, but also in a host of other spectral lines.  Krumholz and Thompson (2007) and Narayanan et al. (2008) have investigated how the shapes of K-S relations change based on the molecular line tracer used to probe gas surface density.  Both groups’ work points out a very key, but somewhat subtle, feature of molecular line observations that is often ignored in the “K-S” community (but not in the Milky Way star-formation community!): the relationship between observed emission in a spectral line and the density of the emitting region depends on how far the density of that region lies above or below the “critical density” required to excite the transition.

Narayanan et al. (2008) clearly show that emission from a region which is nearly all above the critical density (e.g. CO under nearly any conditions) will give K-S slopes q>1, and emission from regions where much material is below the critical density of the tracer used will give K-S relations with slopes q<1, due to the inclusion of significant amounts of sub-thermally excited matter.

Krumholz & Thompson give an intuitive explanation of how HCN gives a slope of unity.  If a K-S relation has a slope of 1.5, then a factor of \Sigma_{\rm gas}^{1} comes from the amount of gas available for star formation, and a factor of \Sigma_{\rm gas}^{0.5} comes from the dependence of the free-fall time on density.  Krumholz & McKee’s (2005) model of the K-S relationship, like several others, assumes that all “bound” gas (#3 above) collapses on a free-fall time, so that over time this process gives an exponent of q=1.5 in equation 1, for a galaxy with a finite reservoir of gas and a constant efficiency of turning gas into stars.  Krumholz & Thompson argue that if a tracer (like HCN) has its critical density near or above the average density of star-forming gas, then the “free-fall” factor goes away, leaving Gao & Solomon’s linear relation, because the emission is coming from regions that all have the same free-fall time.
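
In symbols (assuming a roughly constant gas scale height, so that \rho \propto \Sigma_{\rm gas}), the argument in the preceding paragraph reads:

\dot{\Sigma}_\star \propto \frac{\Sigma_{\rm gas}}{t_{\rm ff}}, \qquad t_{\rm ff} \propto \rho^{-1/2} \propto \Sigma_{\rm gas}^{-1/2} \ \Rightarrow\ \dot{\Sigma}_\star \propto \Sigma_{\rm gas}^{1.5}

whereas a tracer whose critical density sits at or above the typical density of the emitting gas picks out material with a single characteristic t_{\rm ff}, so the free-fall factor is constant and \dot{\Sigma}_\star \propto \Sigma_{\rm gas}.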

Very recently, Bussmann et al. (2008) found q=0.79±0.09 for a sample of more than 30 nearby galaxies observed in the (high critical density) HCN (3-2) line: that sub-linear slope was predicted in advance by the models of Narayanan et al. 2008.  However, a soon-to-be-published extensive observational study of massive-star-forming clumps within the Milky Way by Wu et al. (2008) finds a more linear (q≈1) relation for high-density tracers, including HCN (3-2).  Wu et al. also find that the five different dense gas tracers for which they construct K-S relations within the Milky Way rise steeply (and not really as a power law) below an infrared luminosity threshold of \sim 10^{4.5}~L_\odot, and that above this threshold each gives a slightly different (near-unity) slope and offset.


[1] Usually, CO is taken to be an indicator of “gas” in K-S relations.  Kennicutt (1998) and others have experimented with HI+CO and find slightly tighter correlations, but others (Blitz & Rosolowsky 2006) find systematic effects to be at the origin of the tightened correlations.

[2] We note that this question, about how exactly cores “connect” to their environment is so interesting on its own that an entirely separate NSF proposal from this one has been submitted to address it.

References  for Above, beyond Schmidt & Kennicutt

  • Krumholz, M. R. & McKee, C. F. 2005, A General Theory of Turbulence-regulated Star Formation, from Spirals to Ultraluminous Infrared Galaxies, ApJ, 630, 250-268
  • Krumholz, M. R. & Thompson, T. A. 2007, The Relationship between Molecular Gas Tracers and Kennicutt-Schmidt Laws, ApJ, 669, 289-298
  • Narayanan, D., Cox, T. J., Shirley, Y., Dave, R., Hernquist, L. & Walker, C. K. 2008, Molecular Star Formation Rate Indicators in Galaxies, ApJ, 684, 996-1008
  • Wu, J., Evans, N. J., II, Gao, Y., Solomon, P. M., Shirley, Y. L. & Vanden Bout, P. A. 2005, Connecting Dense Gas Tracers of Star Formation in our Galaxy to High-z Star Formation, ApJ, 635, L173-L176
  • Wu, J., Evans, N. J., Shirley, Y. L. & Knez, C. 2010, The Properties of Massive, Dense Clumps: Mapping Surveys of HCN and CS, ApJS, 188, 313.

Additional Sample Relevant Recent K-S work:

  • “CARMA Survey Toward Infrared-Bright Nearby Galaxies (STING): Molecular Gas Star Formation Law in NGC 4254,” Rahman et al. 2011 (see Figure 7 for an example of inter-comparison of star formation tracers)
  • “On the relation between the Schmidt and Kennicutt–Schmidt star formation laws and its implications for numerical simulations”, Schaye & Dalla Vecchia 2007. (Different K-S laws can be derived based on assuming different effective equations of state, but authors conclude that this does not give deep physical insight.)

Key (Open) Questions in the Study of the ISM of Other Galaxies, and of the Intergalactic Medium

In Uncategorized on April 14, 2011 at 1:42 pm
  1. How similar is the ISM in nearby galaxies to that of the Milky Way?  In what ways is it different?  (see SAGE Survey paper Journal Club discussion; also see Fukui & Kawamura review).

    Color image of H I 21 cm emission from Deul & van der Hulst (1987) with CO clouds overlaid as green dots. All molecular clouds lie in regions of H I overdensity. The area of the molecular cloud has been scaled to represent the relative masses of the clouds. The coincidence of molecular clouds with H I overdensity is evidence that clouds form out of the atomic gas. From Engargiola, Plambeck, Rosolowsky and Blitz 2003.

  2. How do ISM properties depend on galaxy type (and vice-versa!)?  (e.g. elliptical galaxies are well-known to have virtually no gas–why is that?)

    Spitzer data for elliptical and S0 galaxies. Circles represent observations, triangles upper limits. Filled symbols refer to elliptical galaxies with de Vaucouleurs classification parameter T < –3; open symbols refer to S0 galaxies with T > –3. Red (blue) symbols have optical colors U–V < 1.1 (U–V > 1.1); galaxies with unknown colors are plotted with green symbols. (from Temi, Brighenti & Mathews 2009)

  3. How efficient is the ISM at forming stars in other galaxies?  Does this depend only on surface density (Kennicutt-Schmidt relation) or on more (e.g. galaxy mass, metallicity, size, etc.)? Link to nice PDF of PPT on K-S within galaxies.
  4. Is the IMF really “Universal,” and if so, what does that mean? (and how to measure it?…)
  5. How do ISM properties depend on redshift?  What affects “galaxy evolution” over various redshift ranges? (e.g. metallicity of galaxies increases over time as stars “pollute” the ISM with elements heavier than helium…when does that start?  What have big surveys like SDSS told us about this? What can we learn from “deep” observations like the HDF and HUDF?)

    Cosmic star formation rate (per unit comoving volume, h = 0.6, q0 = 0.5) as a function of redshift (the ‘Madau’ plot, Madau et al 1996). The black symbols (with error bars) denote the star formation history deduced from (non-extinction corrected) UV data (Steidel et al 1999 and references therein). Upward pointing dotted green arrows with open boxes mark where these points move when a reddening correction is applied. The green, four arrow symbol is derived from (non-extinction corrected) Hα NICMOS observations (Yan et al 1999). The red, three arrow symbol denotes the lower limit to dusty star formation obtained from SCUBA observations of HDF (N) (Hughes et al 1998). The continuous line marks the total star formation rate deduced from the COBE background and an ‘inversion’ with a starburst SED (Lagache et al 1999b). The filled hatched blue and yellow boxes denote star formation rates deduced from ISOCAM (CFRS field, Flores et al 1999b) and ISOPHOT-FIRBACK (Puget et al 1999, Dole et al 1999). The light blue, dashed curve is model ‘A’ (no ULIRGs) and the red dotted curve model ‘E’ (with ULIRGs) of Guiderdoni et al (1998). Reproduced from Genzel & Cesarsky, ARA&A, 2000.

  6. What is the nature of the Intergalactic Medium under different conditions?  (e.g. in galaxy clusters, where hot X-ray haloes & cooling flows are important vs. in “empty” voids)
  7. What can be learned from long line-of-sight observations, e.g. of distant quasars? (Overview)
    1. Lyman-alpha forest
    2. metallicity variations with redshift (how long did it take the first stars to pollute the ISM?)
    3. Gunn-Peterson effect (ADS link to paper)

      Redshift distributions of galaxies and C IV absorbers in the field of Q1422+2309. The top panel shows the number of objects observed at each redshift. The bottom panel shows the implied overdensity as a function of redshift after smoothing our raw redshifts by a Gaussian of width z = 0.008. The good correspondence of features in the bottom panel shows that C IV systems are preferentially found within galaxy overdensities. (from Adelberger, Steidel, Shapley & Pettini 2003)

  8. What can be learned from direct observations of neutral hydrogen in the “intergalactic” medium before there were even many galaxies?  (New telescopes may be able to detect neutral hydrogen structures).
    1. Probes of the Epoch of Reionization
    2. “Tomography” of HI in the Early Universe

      The transition from the neutral IGM left after the Universe recombined, at z ≈ 1,100, to the fully ionized IGM observed today is termed cosmic reionization. After recombination, when the CMB radiation was released, hydrogen in the IGM remained neutral until the first stars and galaxies formed, at z ≈ 15–30. These primordial systems released energetic ultraviolet photons capable of ionizing local bubbles of hydrogen gas. As the abundance of these early galaxies increased, the bubbles increasingly overlapped and progressively larger volumes became ionized. This reionization process ended at z ≈ 6–8, ~1 Gyr after the Big Bang. At lower redshifts, the IGM remains highly ionized by radiation provided by star-forming galaxies and the gas accretion onto supermassive black holes that powers quasars. (from Robertson et al., Nature, 2010)

Course Notes

In Uncategorized on April 12, 2011 at 11:26 pm

Stromgren Sphere: An example “chalkboard derivation”

(updated for 2013)


The Stromgren sphere is a simplified analysis of the size of HII regions. Massive O and B stars emit many high-energy photons, which ionize their surroundings and create HII regions. We assume that such a star is embedded in a uniform medium of neutral hydrogen. A sphere of radius r around this star will become ionized; this radius is called the “Stromgren radius”. The volume of the ionized region will be such that the rate at which ionized hydrogen recombines equals the rate at which the star emits ionizing photons (i.e. all of the ionizing photons are “used up” re-ionizing hydrogen as it recombines).

The recombination rate density is \alpha n^2, where \alpha is the recombination coefficient (in \mathrm{cm}^3~\mathrm{s}^{-1}) and n=n_e=n_\mathrm{H} is the number density (assuming fully ionized gas and only hydrogen, the electron and proton densities are equal). The total rate of ionizing photons (in photons per second) in the volume of the sphere is N^*. Setting the rates of ionization and recombination equal to one another, we get

\frac43 \pi r^3 \alpha n^2 = N^*, and solving for r,

r = \left( \frac{3 N^*}{4 \pi \alpha n^2} \right)^{1/3}

Typical values for the above variables are N^* \sim 10^{49}~\mathrm{photons~s}^{-1}, \alpha \sim 3\times 10^{-13}\; \mathrm{cm}^3 \; \mathrm s^{-1} and n \sim 10\; \mathrm {cm}^{-3}, implying Stromgren radii of 10 to 100 pc. See the journal club (2013) article for discussion of Stromgren’s seminal 1939 paper.
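
Plugging the typical values above into the formula for r is a one-liner; this sketch just evaluates it in cgs and converts to parsecs:

```python
import math

# Stromgren radius r = (3 N* / (4 pi alpha n^2))^(1/3), cgs units,
# using the typical values quoted in the text.
N_STAR = 1e49      # ionizing photons per second
ALPHA  = 3e-13     # recombination coefficient, cm^3 s^-1
N_H    = 10.0      # hydrogen number density, cm^-3
PC_CM  = 3.086e18  # centimeters per parsec

r_cm = (3 * N_STAR / (4 * math.pi * ALPHA * N_H**2)) ** (1 / 3)
print(f"Stromgren radius ~ {r_cm / PC_CM:.0f} pc")
```

For n = 10 cm^{-3} this gives roughly 14 pc; varying n over realistic values spans the 10 to 100 pc range quoted above.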

How do we know there is an ISM?

(updated for 2013)

Early astronomers pointed to 3 lines of evidence for the ISM:

  • Extinction. The ISM obscures the light from background stars. In 1919, Barnard (JC 2011, 2013) called attention to these “dark markings” on the sky, and put forward the (correct) hypothesis that these were the silhouettes of dark clouds. A good rule of thumb for the amount of extinction present is 1 magnitude of extinction per kpc (for typical, mostly unobscured lines-of-sight).
  • Reddening. Even when the ISM doesn’t completely block background starlight, it scatters it. Shorter-wavelength light is preferentially scattered, so stars behind obscuring material appear redder than normal. If a star’s true color is known, its observed color can be used to infer the column density of the ISM between us and the star. Robert Trumpler first used measurements of the apparent “cuspiness” and the brightnesses of star clusters in 1930 to argue for the presence of this effect. Reddening of stars of “known” color is the basis of NICER and related techniques used to map extinction today.
  • Stationary Lines. Spectral observations of binary stars show Doppler-shifted lines corresponding to the radial velocity of each star. In addition, some of these spectra exhibit stationary (i.e. not Doppler-shifted) absorption lines due to stationary material between us and the binary system. Johannes Hartmann first noticed this in 1904 when investigating the spectrum of \delta Orionis: “The calcium line at \lambda 3934 [angstroms] exhibits a very peculiar behavior. It is distinguished from all the other lines in this spectrum, first by the fact that it always appears extraordinarily weak, but almost perfectly sharp… Closer study on this point now led me to the quite surprising result that the calcium line… does not share in the periodic displacements of the lines caused by the orbital motion of the star”

Helpful References: Good discussion of the history of extinction and reddening, from Michael Richmond.

A Sense of Scale

(updated for 2013)


How dense (or not) is the ISM?

  • Dense cores: n \sim 10^5 ~{\rm cm}^{-3}
  • Typical ISM: n \sim 1 ~{\rm cm}^{-3}
  • This room: 1 mol / 22.4L \sim 3 \times 10^{19}~ {\rm cm}^{-3}
  • XHV (eXtremely High Vacuum) — best human-made vacuum: n \sim 3 \times 10^{4}~ {\rm cm}^{-3}
  • Density of stars in the Milky Way: 2.8~{\rm stars/pc}^3 \approx 0.125~M_\odot/{\rm pc}^3 = 8.5 \times 10^{-24} ~{\rm g / cm}^3 \sim 5~{\rm cm}^{-3}

In other words, most of the ISM is at a density far below the densities and pressures we can reproduce in the lab. Thus, the details of most of the microphysics in the ISM are still poorly understood. We also see that the density of stars in the Galaxy is quite small – only a few times the average particle density of the ISM.
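
The unit conversion in the last bullet of the list above can be checked directly; the constants below are standard values:

```python
# Converting the Milky Way's stellar mass density to cgs and to an
# equivalent number of hydrogen atoms per cm^3.
M_SUN = 1.989e33   # g
PC_CM = 3.086e18   # cm per parsec
M_H   = 1.67e-24   # hydrogen atom mass, g

rho = 0.125 * M_SUN / PC_CM**3   # g cm^-3
n_equiv = rho / M_H              # equivalent H atoms per cm^3
print(f"{rho:.1e} g/cm^3 ~ {n_equiv:.1f} H cm^-3")
```

The result, ~8.5e-24 g/cm^3 or about 5 H atoms cm^{-3}, matches the bullet above and is indeed only a few times the typical ISM particle density.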

See also the interstellar cloud properties table and conversions between angular and linear scale.

Density of the Milky Way’s ISM

(updated for 2013)

How do we know that n \sim 1 ~{\rm cm}^{-3} in the ISM? From the rotation curve of the Milky Way (and some assumptions about the mass ratio of gas to gas+stars+dark matter), we can infer

M_{\rm gas} = 6.7 \times 10^{9} M_\odot

Maps of HI and CO reveal the extent of our galaxy to be

D = 40 kpc

h = 140 pc (scale height of HI)

This implies an approximate volume of

V = \pi D^2 h / 4 = 5 \times 10^{66} ~{\rm cm}^{3}

which yields a density of

\rho = 2.5 \times 10^{-24} ~{\rm g~cm}^{-3},

or, dividing by the mass of a hydrogen atom, n \approx 1.5~{\rm cm}^{-3}, consistent with the value quoted above.
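
A quick numerical check of the estimate above (gas mass over disk volume, in cgs):

```python
import math

# Mean ISM density of the Milky Way from the numbers in the text.
M_SUN = 1.989e33          # g
PC    = 3.086e18          # cm
KPC   = 1e3 * PC

m_gas = 6.7e9 * M_SUN     # total gas mass, g
d, h  = 40 * KPC, 140 * PC   # disk diameter and HI scale height

vol = math.pi * d**2 * h / 4        # disk volume, cm^3
rho = m_gas / vol                   # mean density, g cm^-3
n_h = rho / 1.67e-24                # hydrogen atoms per cm^3

print(f"V = {vol:.1e} cm^3, rho = {rho:.1e} g/cm^3, n ~ {n_h:.1f} cm^-3")
```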

Density of the Intergalactic Medium

(updated for 2013)

From cosmology observations, we know the universe to be very nearly flat (\Omega = 1). This implies that the mean density of the universe is \rho = \rho_{\rm crit} = \frac{3 H_0^2}{8 \pi G} = 7 \times 10^{-30} ~{\rm g~ cm}^{-3} \Rightarrow n<4.3 \times 10^{-6}~{\rm cm}^{-3}.

This places an upper limit on the density of the Intergalactic Medium.
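
Evaluating \rho_{\rm crit} = \frac{3 H_0^2}{8 \pi G} numerically reproduces the figure above; note the post’s value corresponds to a Hubble constant near 60 km/s/Mpc, which is what this sketch assumes:

```python
import math

# Critical density of the universe, rho_crit = 3 H0^2 / (8 pi G), cgs.
G   = 6.674e-8                 # cm^3 g^-1 s^-2
H0  = 60 * 1.0e5 / 3.086e24    # 60 km/s/Mpc in s^-1 (matches ~7e-30 above)
M_H = 1.67e-24                 # hydrogen atom mass, g

rho_crit = 3 * H0**2 / (8 * math.pi * G)   # g cm^-3
n_max = rho_crit / M_H                     # upper limit on IGM H density
print(f"rho_crit = {rho_crit:.1e} g/cm^3 -> n < {n_max:.1e} cm^-3")
```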

Composition of the ISM

(updated for 2013)

  • Gas: by mass, gas is 60% Hydrogen, 30% Helium. By number, gas is 88% H, 10% He, and 2% heavier elements
  • Dust: The term “dust” applies roughly to any molecule too big to name. The size distribution is biased towards small (0.2~\mu{\rm m}) particles, with an approximate distribution N(a) \propto a^{-3.5}. The density of dust in the galaxy is \rho_{\rm dust} \sim 0.002~ M_\odot ~{\rm pc}^{-3} \sim 0.1 \rho_{\rm gas}
  • Cosmic Rays: Charged, high-energy (anti)protons, nuclei, electrons, and positrons. Cosmic rays have an energy density of 0.5 ~{\rm eV ~ cm}^{-3}. The equivalent mass density (using E = mc^2) is 9 \times 10^{-34}~{\rm g cm}^{-3}
  • Magnetic Fields: Typical field strengths in the MW are a few \mu G, corresponding to an energy density \sim 0.2 ~{\rm eV ~cm}^{-3}. This is strong enough to confine cosmic rays.
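
The magnetic energy density in the last bullet follows from u = B^2 / 8\pi; a field of a few \mu G (B = 3 \mu G assumed in this sketch) is what it takes to reach \sim 0.2 eV cm^{-3}, comparable to the cosmic-ray energy density above:

```python
import math

# Magnetic energy density u = B^2 / (8 pi), in cgs (gauss -> erg/cm^3).
B = 3e-6                    # field strength in gauss (a few microgauss)
ERG_PER_EV = 1.602e-12      # erg per eV

u = B**2 / (8 * math.pi)    # erg cm^-3
print(f"u_B = {u / ERG_PER_EV:.2f} eV/cm^3")
```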

Bruce Draine’s List of constituents in the ISM:

(updated for 2013)

  1. Gas
  2. Dust
  3. Cosmic Rays*
  4. Photons**
  5. B-Field
  6. Gravitational Field
  7. Dark Matter

*cosmic rays are highly relativistic, super-energetic ions and electrons

**photons include:

  • The Cosmic Microwave Background (2.7 K)
  • starlight from stellar photospheres (UV, optical, NIR,…)
  • h\nu from transitions in atoms, ions, and molecules
  • “thermal emission” from dust (heated by starlight, AGN)
  • free-free emission (bremsstrahlung) in plasma
  • synchrotron radiation from relativistic electrons
  • \gamma-rays from nuclear transitions

His list of “phases” from Table 1.3:

  1. Coronal gas (Hot Ionized Medium, or “HIM”): T> 10^{5.5}~{\rm K}. Shock-heated from supernovae. Fills half the volume of the galaxy, and cools in about 1 Myr.
  2. HII gas: Ionized mostly by O and early B stars. Called an “HII region” when confined by a molecular cloud, otherwise called “diffuse HII”.
  3. Warm HI (Warm Neutral Medium, or “WNM”): atomic, T \sim 10^{3.7}~{\rm K}. n\sim 0.6 ~{\rm cm}^{-3}. Heated by starlight, photoelectric effect, and cosmic rays. Fills ~40% of the volume.
  4. Cool HI (Cold Neutral Medium, or “CNM”). T \sim 100~{\rm K}, n \sim 30 ~{\rm cm}^{-3}. Fills ~1% of the volume.
  5. Diffuse molecular gas. Where HI self-shields from UV radiation to allow H_2 formation on the surfaces of dust grains in cloud interiors. This occurs at 10~{\rm to}~50~{\rm cm}^{-3}.
  6. Dense Molecular gas. “Bound” according to Draine (though maybe not). n \gtrsim 10^3 ~{\rm cm}^{-3}. Sites of star formation. See also Bok Globules (JC 2013).
  7. Stellar Outflows. T=50-1000 {\rm K}, n \sim 1-10^6 ~{\rm cm}^{-3}. Winds from cool stars.

These phases are fluid and dynamic, and change on a variety of time and spatial scales. Examples include growth of an HII region, evaporation of molecular clouds, the interface between the ISM and IGM, cooling of supernova remnants, mixing, recombination, etc.

Topology of the ISM

(updated for 2013)


A grab-bag of properties of the Milky Way

  • HII scale height: 1 kpc
  • CO scale height: 50-75 pc
  • HI scale height: 130-400 pc
  • Stellar scale height: 100 pc in spiral arm, 500 pc in disk
  • Stellar mass: 5 \times 10^{10} M_\odot
  • Dark matter mass: 5 \times 10^{10} M_\odot
  • HI mass: 2.9 \times 10^9 M_\odot
  • H2 mass (inferred from CO): 0.84 \times 10^9 M_\odot
  • HII mass: 1.12 \times 10^9~M_\odot
  • -> total gas mass = 6.7 \times 10^9~M_\odot (including He).
  • Total MW mass within 15 kpc: 10^{11} M_\odot (using the Galaxy’s rotation curve). About 50% dark matter.

So the ISM is a relatively small constituent of the Galaxy (by mass).

The Sound Speed

(updated for 2013)

The speed of sound is the speed at which pressure disturbances travel in a medium. It is defined as

c_s \equiv \sqrt{\frac{\partial P}{\partial \rho}} ,

where P and \rho are pressure and mass density, respectively. For a polytropic gas, i.e. one defined by the equation of state P \propto \rho^\gamma, this becomes c_s=\sqrt{\gamma P/\rho}. \gamma is the adiabatic index (ratio of specific heats), and \gamma=5/3 describes a monatomic gas.

For an isothermal gas where the ideal gas equation of state P=\rho k_B T / (\mu m_{\rm H}) holds, c_s=\sqrt{k_B T/(\mu m_{\rm H})}. Here, \mu is the mean molecular weight (a factor that accounts for the chemical composition of the gas), and m_{\rm H} is the hydrogen atomic mass. Note that for pure molecular hydrogen \mu=2. For molecular gas with ~10% He by mass and trace metals, \mu \approx 2.3 is often used.

A gas can be approximated as isothermal if the sound-wave period is much longer than the (radiative) cooling time of the gas, as any increase in temperature due to compression by the wave will be erased by radiative cooling well before the next compression occurs. Many astrophysical situations in the ISM are close to isothermal, so the isothermal sound speed is often used. For example, in conditions where temperature and density are independent, such as H II regions (where the gas temperature is set by the ionizing star’s spectrum), the gas is very close to isothermal.
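A short sketch of the isothermal sound speed for a couple of representative cases. The \mu = 0.6 value for ionized gas is an assumption on my part (standard for fully ionized H+He), not from the notes:

```python
# Isothermal sound speed c_s = sqrt(k_B T / (mu m_H)), CGS constants.
import math

K_B = 1.381e-16       # erg/K
M_H = 1.673e-24       # g

def c_s_iso(T, mu):
    """Isothermal sound speed in km/s for temperature T [K] and mean mol. weight mu."""
    return math.sqrt(K_B * T / (mu * M_H)) / 1e5

print(c_s_iso(10, 2.3))    # cold molecular gas (mu = 2.3): ~0.2 km/s
print(c_s_iso(8000, 0.6))  # ionized HII-region gas (mu ~ 0.6, assumed): ~10 km/s
```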

Hydrogen “Slang”

(updated for 2013)

Lyman limit: the minimum energy needed to remove an electron from a Hydrogen atom. A “Lyman limit photon” is a photon with at least this energy.

E = 13.6~{\rm eV} = 1~{\rm `Rydberg'} = hcR_{\rm H} ,

where R_{\rm H}=1.097 \times 10^{7} {\rm m}^{-1} is the Rydberg constant, which has units of 1/\lambda. This energy corresponds to the Lyman limit wavelength as follows:

E = h\nu = hc/\lambda \Rightarrow \lambda=912~{\rm \AA} .

Lyman series: transitions to and from the n=1 energy level of the Bohr atom. The first line in this series was discovered in 1906 using UV studies of electrically excited hydrogen gas.

Balmer series: transitions to and from the n=2 energy level. Discovered in 1885; since these are optical transitions, they were more easily observed than the UV Lyman series transitions.

There are also other named series corresponding to higher n. Examples include Paschen (n=3), Brackett (n=4), and Pfund (n=5). The lowest energy (longest wavelength) transition of a series is designated \alpha, the next lowest energy is \beta, and so on. For example, the transition from n=2 to 1 is Lyman alpha, or {\rm Ly}\alpha, while the transition from n=7 to 4 is Brackett gamma, or {\rm Br}\gamma. The wavelength of a given transition can be computed via the Rydberg equation:

\frac{1}{\lambda}=R_{\rm H} \big|\frac{1}{n_f^2}-\frac{1}{n_i^2}\big| ,

where n_i and n_f are the initial and final energy levels of the electron, respectively. See this handout for a pictorial representation of the low n transitions in hydrogen. Note that the Lyman (or Balmer, Paschen, etc.) limit can be computed by inserting n_i=\infty in the above equation.
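The Rydberg equation is easy to evaluate directly; a minimal sketch reproducing {\rm Ly}\alpha, {\rm H}\alpha, and the Lyman limit:

```python
# Hydrogen transition wavelengths from the Rydberg equation,
# 1/lambda = R_H |1/n_f^2 - 1/n_i^2|.
R_H = 1.097e7   # Rydberg constant, m^-1

def wavelength_angstrom(n_i, n_f):
    inv_lambda = R_H * abs(1 / n_f**2 - 1 / n_i**2)   # m^-1
    return 1e10 / inv_lambda                           # Angstroms

print(wavelength_angstrom(2, 1))             # Ly-alpha: ~1216 A
print(wavelength_angstrom(3, 2))             # H-alpha: ~6563 A
print(wavelength_angstrom(float('inf'), 1))  # Lyman limit: ~912 A
```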

The Lyman continuum corresponds to the region of the spectrum near the Lyman limit, where the spacing between energy levels becomes comparable to spectral line widths and so individual lines are no longer distinguishable. Such continua exist for each series of lines.

Chemistry

(updated for 2013)


See Draine Table 1.4 for elemental abundances for the Sun (and thus presumably for the ISM near the Sun).

By number: {\rm H:He:C} = 1:0.1:3 \times 10^{-4} ;

by mass: {\rm H:He:C} = 1:0.4:3.5 \times 10^{-3} .

However, these ratios vary with position in the galaxy, especially for the heavier elements (which depend on stellar processing). For example, the abundance of heavy elements (Z ≥ 6, i.e. carbon and heavier) at the Sun’s position is about half that in the Galactic center. Even though metals account for only 1% of the mass, they dominate most of the important chemistry, ionization, and heating/cooling processes. They are essential for star formation, as they allow molecular clouds to cool and collapse.

Dissociating molecules takes less energy than ionizing atoms, in general. For example:

E_{I,{\rm H}}=13.6~{\rm eV}

E_{D,{\rm H}_2}=4.52~{\rm eV} \Rightarrow \lambda=2743~{\rm \AA} (UV transition)

E_{D,{\rm CO}}=11.2~{\rm eV},

where E_I and E_D are the ionization and dissociation energies, respectively. We can see that it is much easier to dissociate molecular hydrogen than to ionize atomic hydrogen; in other words, atomic H will survive a harsher radiation field than molecular H. The above numbers thus set the structure of molecular clouds in the interstellar radiation field; a large amount of molecular gas needs to gather together in order to allow it to survive via the process of self-shielding, in which a thick enough column of gas exists such that at some distance below the surface of the cloud all of the energetic photons have already been absorbed. Note that the high dissociation energy of CO is a result of the triple bond between the carbon and oxygen atoms. CO is a very important coolant in molecular clouds.

Measuring States in the ISM

(updated for 2013)


There are two primary observational diagnostics of the thermal, chemical, and ionization states in the ISM:

  1. Spectral Energy Distribution (SED; broadband low-resolution)
  2. Spectrum (narrowband, high-resolution)

SEDs

Very generally, if a source’s SED is blackbody-like, one can fit a Planck function to the SED and derive the temperature and column density (if one can assume LTE). If an SED is not blackbody-like, the emission is the sum of various processes, including:

  • thermal emission (e.g. dust, CMB)
  • synchrotron emission (power law spectrum)
  • free-free emission (thermal for a thermal electron distribution)

Spectra

Quantum mechanics combined with chemistry can predict line strengths. Ratios of lines can be used to model “excitation”, i.e. what physical conditions (density, temperature, radiation field, ionization fraction, etc.) lead to the observed distribution of line strengths. Excitation is controlled by

  • collisions between particles (LTE often assumed, but not always true)
  • photons from the interstellar radiation field, nearby stars, shocks, CMB, chemistry, cosmic rays
  • recombination/ionization/dissociation

Which of these processes matter where? In class (2011), we drew the following schematic.

A schematic of several structures in the ISM

Key

A: Dense molecular cloud with stars forming within

  • T=10-50~{\rm K};~n>10^3~{\rm cm}^{-3} (measured, e.g., from line ratios)
  • gas is mostly molecular (low T, high n, self-shielding from UV photons, few shocks)
  • not much photoionization due to high extinction (but could be complicated ionization structure due to patchy extinction)
  • cosmic rays can penetrate, leading to fractional ionization: X_I=n_i/(n_H+n_i) \approx n_i/n_H \propto n_H^{-1/2}, where n_i is the ion density (see Draine 16.5 for details). Measured values for X_e (the electron-to-neutral ratio, which is presumed equal to the ionization fraction) are about X_e \sim 10^{-6}~{\rm to}~10^{-7}.
  • possible shocks due to impinging HII region – could raise T, n, ionization, and change chemistry globally
  • shocks due to embedded young stars w/ outflows and winds -> local changes in T, n, ionization, chemistry
  • time evolution? feedback from stars formed within?

B: Cluster of OB stars (an HII region ionized by their integrated radiation)

  • 7000 < T < 10,000 K (from line ratios)
  • gas primarily ionized due to photons beyond Lyman limit (E > 13.6 eV) produced by O stars
  • elements other than H have different ionization energy, so will ionize more or less easily
  • HII regions are often clumpy; this is observed as a deficit in the average value of n_e from continuum radiation over the entire region as compared to the value of n_e derived from line ratios. In other words, certain regions are denser (in ionized gas) than others.
  • The above introduces the idea of a filling factor, defined as the ratio of filled volume to total volume (in this case the filled volume is that of ionized gas)
  • dust is present in HII regions (as evidenced by observations of scattered light), though the smaller grains may be destroyed
  • significant radio emission: free-free (bremsstrahlung), synchrotron, and recombination line (e.g. H76a)
  • chemistry is highly dependent on n, T, flux, and time

C: Supernova remnant

  • gas can be ionized in shocks by collisions (high velocities required to produce high energy collisions, high T)
  • e.g. if v > 1000~{\rm km/s}, T > 10^6~{\rm K}
  • atom-electron collisions will ionize H, He; produce x-rays; produce highly ionized heavy elements
  • gas can also be excited (e.g. vibrational H2 emission) and dissociated by shocks

D: General diffuse ISM

  • UV radiation from the interstellar radiation field produces ionization
  • n_e is best measured from the pulsar dispersion measure (DM), an observable: {\rm DM} \propto \int n_e dl
  • role of magnetic fields depends critically on X_I (B-fields do not directly affect neutrals, though their effects can be felt through ion-neutral collisions)

Energy Density Comparison

(updated for 2013)


See Draine table 1.5. The primary sources of energy present in the ISM are:

      1. The CMB (T_{\rm CMB}=2.725~{\rm K})
      2. Thermal IR from dust
      3. Starlight (h\nu < 13.6~{\rm eV})
      4. Thermal kinetic energy (3/2 nkT)
      5. Turbulent kinetic energy (1/2 \rho \sigma_v^2)
      6. Magnetic fields (B^2 / 8 \pi )
      7. Cosmic rays

All of these terms have energy densities within an order of magnitude of 1 ~{\rm eV ~ cm}^{-3}. With the exception of the CMB, this is not a coincidence: because of the dynamic nature of the ISM, these processes are coupled together and thus exchange energy with one another.

Relevant Velocities in the ISM

(updated for 2013)


Note: it’s handy to remember that 1 km/s ~ 1 pc / Myr.

  • Galactic rotation: 18 km/s/kpc (e.g. 180 km/s at 10 kpc)
  • Isothermal sound speed: c_s =\sqrt{\frac{kT}{\mu m_{\rm H}}}
    • For H, this speed is 0.3, 1, and 3 km/s at 10 K, 100 K, and 1000 K, respectively.
  • Alfvén speed: The speed at which magnetic fluctuations propagate. v_A = B / \sqrt{4 \pi \rho}. Alfvén waves are transverse waves along the direction of the magnetic field.
    • Note that v_A = {\rm const} if B \propto \rho^{1/2}, which is observed to be true over a large portion of the ISM.
    • Interstellar B-fields can be measured using the Zeeman effect. Observed values range from 5~\mu {\rm G} in the diffuse ISM to 1 mG in dense clouds. For specific conditions:
      • B = 1~\mu{\rm G}, n = 1 ~{\rm cm}^{-3} \Rightarrow v_A = 2~{\rm km~s}^{-1}
      • B = 30~\mu {\rm G}, n = 10^4~{\rm cm}^{-3} \Rightarrow v_A = 0.4~{\rm km~s}^{-1}
      • B = 1~{\rm mG}, n = 10^7 {\rm cm}^{-3} \Rightarrow v_A = 0.5~{\rm km~s}^{-1}
    • Compare to the isothermal sound speed, which is 0.3 km/s in dense gas at 20 K.
      • c_s \approx v_A in dense gas
      • c_s < v_A in diffuse gas
  • Observed velocity dispersion in molecular gas is typically about 1 km/s, and is thus supersonic. This is a signature of the presence of turbulence. (see the summary of Larson’s seminal 1981 paper)
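The Alfvén-speed cases above can be reproduced in a few lines. The mean molecular weights used to turn n into \rho (\mu = 1.4 for diffuse atomic gas with He, \mu = 2.3 for molecular gas) are assumptions, not from the list above:

```python
# Alfven speed v_A = B / sqrt(4 pi rho), CGS, with rho = n * mu * m_H.
import math

M_H = 1.67e-24   # g

def v_alfven(B_gauss, n_cm3, mu=1.4):
    """Alfven speed in km/s; mu is an assumed mean molecular weight."""
    rho = n_cm3 * mu * M_H
    return B_gauss / math.sqrt(4 * math.pi * rho) / 1e5

print(v_alfven(1e-6, 1, mu=1.4))      # diffuse ISM: ~2 km/s
print(v_alfven(30e-6, 1e4, mu=2.3))   # dense molecular gas: ~0.4 km/s
print(v_alfven(1e-3, 1e7, mu=2.3))    # very dense gas: ~0.5 km/s
```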

Introductory remarks on Radiative Processes and Equilibrium

(updated for 2013)


The goal of the next several sections is to build an understanding of how photons are produced by, are absorbed by, and interact with the ISM. We consider a system in which one or more constituents are excited under certain physical conditions to produce photons, then the photons pass through other constituents under other conditions, before finally being observed (and thus affected by the limitations and biases of the observational conditions and instruments) on Earth. Local thermodynamic equilibrium is often used to describe the conditions, but this does not always hold. Remember that our overall goal is to turn observations of the ISM into physics, and vice-versa.

The following contribute to an observed Spectral Energy Distribution:

      • gas: spontaneous emission, stimulated emission (e.g. masers), absorption, scattering processes involving photons + electrons or bound atoms/molecules
      • dust: absorption; scattering (the sum of these two -> extinction); emission (blackbody modified by wavelength-dependent emissivity)
      • other: synchrotron, bremsstrahlung, etc.

The processes taking place in our “system” depend sensitively on the specific conditions of the ISM in question, but the following “rules of thumb” are worth remembering:

      1. Very rarely is a system actually in a true equilibrium state.
      2. Except in HII regions, transitions in the ISM are usually not electronic.
      3. The terms Upper Level and Lower Level refer to any two quantum mechanical states of an atom or molecule where E_{\rm upper}>E_{\rm lower}. We will use k to index the upper state, and j for the lower state.
      4. Transitions can be induced by photons, cosmic rays, collisions with atoms and molecules, and interactions with free electrons.
      5. Levels can refer to electronic, rotational, vibrational, spin, and magnetic states.
      6. To understand radiative processes in the ISM, we will generally need to know the chemical composition, ambient radiation field, and velocity distribution of each ISM component. We will almost always have to make simplifying assumptions about these conditions.

Thermodynamic Equilibrium

(updated for 2013)


Collisions and radiation generally compete to establish the relative populations of different energy states. Randomized collisional processes push the distribution of energy states to the Boltzmann distribution, n_j \propto e^{-E_j / kT}. When collisions dominate over competing processes and establish the Boltzmann distribution, we say the ISM is in Thermodynamic Equilibrium.

Often this only holds locally, hence the term Local Thermodynamic Equilibrium or LTE. For example, the fact that we can observe stars implies that energy (via photons) is escaping the system. While this cannot be considered a state of global thermodynamic equilibrium, localized regions in stellar interiors are in near-equilibrium with their surroundings.

But the ISM is not like stars. In stars, most emission, absorption, scattering, and collision processes occur on timescales very short compared with dynamical or evolutionary timescales. Due to the low density of the ISM, interactions are much more rare. This makes it difficult to establish equilibrium. Furthermore, many additional processes disrupt equilibrium (such as energy input from hot stars, cosmic rays, X-ray background, shocks).

As a consequence, in the ISM the level populations in atoms and molecules are not always in their equilibrium distribution. Because of the low density, most photons are created from (rare) collisional processes (except in locations like HII regions where ionization and recombination become dominant).

Spitzer Notation

(updated for 2013)

We will use the notation from Spitzer (1978). See also Draine, Ch. 3. We represent the density of a state j as

n_j(X^{(r)}), where

      • n: particle density
      • j: quantum state
      • X: element
      • (r): ionization state
      • For example, HI = H^{(0)}

In his book, Spitzer defines something called “Equivalent Thermodynamic Equilibrium” or “ETE”. In ETE, n_j^* gives the “equivalent” density in state j. The true (observed) value is n_j. He then defines the ratio of the true density to the ETE density to be

b_j = n_j / n_j^*.

This quantity approaches 1 when collisions dominate over ionization and recombination. For LTE, b_j = 1 for all levels. The level population is then given by the Boltzmann equation:

\frac{n_j^\star(X^{(r)})}{n_k^\star(X^{(r)})} = (\frac{g_{rj}}{g_{rk}})~e^{ -(E_{rj} - E_{rk}) / kT },

where E_{rj} and g_{rj} are the energy and statistical weight (degeneracy) of level j, ionization state r. The exponential term is called the “Boltzmann factor”‘ and determines the relative probability for a state.

The term “Maxwellian” describes the velocity distribution of a 3-D gas. “Maxwell-Boltzmann” is a special case of the Boltzmann distribution for velocities.

Using our definition of b and dropping the “r” designation,

\frac{n_k}{n_j} = \frac{b_k}{b_j} (\frac{g_k}{g_j})~e^{-h \nu_{jk} / kT }

Where \nu_{jk} is the frequency of the radiative transition from k to j. We will use the convention that E_k > E_j, such that E_{jk}=h\nu_{jk} > 0.

To find the fraction of atoms of species X^{(r)} excited to level j, define:

\sum_k n_k^\star (X^{(r)}) = n^\star(X^{(r)})

as the particle density of X^{(r)} in all states. Then

\frac{ n_j^* (X^{(r)}) } { n^* (X^{(r)})} = \frac{ g_{rj} e^{-E_{rj} / kT} } {\sum_k g_{rk} e^{ -E_{rk} / kT} }

Define f_r, the “partition function” for species X^{(r)}, to be the denominator of the RHS of the above equation. Then we can write, more simply:

\frac{n_j^\star}{n^\star} = \frac{g_{rj}}{f_r} e^{-E_{rj}/kT}

to be the fraction of particles that are in state j. By computing this for all j we now know the distribution of level populations for ETE.
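A minimal sketch of these level-population formulas, applied to a two-level system with the statistical weights and energy splitting of the HI spin-flip states (g = 1, 3; E \approx 5.9 \times 10^{-6} eV). These inputs are illustrative values I've supplied, not from the notes:

```python
# ETE/LTE level populations via the Boltzmann factor:
# n_j*/n* = (g_j / f_r) exp(-E_j / kT), with f_r the partition function.
import math

K_B_EV = 8.617e-5    # Boltzmann constant, eV/K

def level_fractions(g, E_eV, T):
    """Fractional populations n_j*/n* for levels with weights g and energies E_eV."""
    boltz = [gj * math.exp(-Ej / (K_B_EV * T)) for gj, Ej in zip(g, E_eV)]
    f_r = sum(boltz)                     # the partition function
    return [b / f_r for b in boltz]

# A 21-cm-like system at T = 100 K: E << kT, so populations -> g_j / sum(g),
# i.e. close to [0.25, 0.75].
print(level_fractions([1, 3], [0.0, 5.9e-6], 100.0))
```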

The Saha Equation

(updated for 2013)


How do we deal with the distribution over different states of ionization r? In thermodynamic equilibrium, the Saha equation gives:

\frac{ n^\star(X^{(r+1)}) n_e } { n^\star (X^{(r)}) } = \frac{ f_{r+1} f_e}{f_r},

where f_r and f_{r+1} are the partition functions as discussed in the previous section. The partition function for electrons is given by

f_e = 2\big( \frac{2 \pi m_e k T} {h^2} \big) ^{3/2} = 4.829 \times 10^{15} \big(\frac{T}{\rm K}\big)^{3/2}~{\rm cm}^{-3}

For a derivation of this, see pages 103-104 of this handout from Bowers and Deeming.

If f_r and f_{r+1} are approximated by the first terms in their sums (i.e. if the ground state dominates their level populations), then

\frac{ n^\star ( X^{ (r+1) } ) n_e } {n^\star ( X^{ (r) } ) } = 2 \big(\frac{ g_{r+1,1} }{g_{ r,1}}\big) \big( \frac{ 2 \pi m_e k T} {h^2} \big)^{3/2} e^{-\Phi_r / kT},

where \Phi_r=E_{r+1,1}-E_{r,1} is the energy required to ionize X^{(r)} from the ground (j = 1)  level. Ultimately, this is just a function of n_e and T. This assumes that the only relevant ionization process is via thermal collision (i.e. shocks, strong ionizing sources, etc. are ignored).
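As a sketch, the ground-state-only Saha equation for pure hydrogen (g_{r,1} = 2, g_{r+1,1} = 1, \Phi = 13.6 eV) shows how abruptly collisional ionization sets in with T. Keep in mind the LTE caveat below: at ISM densities the equilibrium assumed here is rarely realized:

```python
# Saha ionization balance for pure hydrogen in the ground-state approximation.
# The prefactor 2.4145e15 cm^-3 K^-3/2 is (2 pi m_e k / h^2)^{3/2}.
import math

K_B_EV = 8.617e-5            # eV/K

def saha_x(T, n_cm3):
    """Equilibrium ionization fraction x = n_e/n for pure hydrogen."""
    # RHS of the Saha equation divided by total density n:
    # 2 (g_HII/g_HI) = 2 * (1/2) = 1 times the thermal de Broglie factor.
    A = 2.4145e15 * T**1.5 * math.exp(-13.6 / (K_B_EV * T)) / n_cm3
    # solve x^2 / (1 - x) = A for x in [0, 1]
    return (-A + math.sqrt(A * A + 4 * A)) / 2

# At n = 1 cm^-3 the neutral-to-ionized transition is sharp and occurs
# at surprisingly low T (low density favors ionization):
print(saha_x(3000, 1.0))   # mostly neutral (~0.07)
print(saha_x(4000, 1.0))   # essentially fully ionized
```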

Important Properties of Local Thermodynamic Equilibrium

(updated for 2013)

For actual local thermodynamic equilbrium (not ETE), the following are important to keep in mind:

      • Detailed balance: transition rate from j to k = rate from k to j (i.e. no net change in particle distribution)
      • LTE is equivalent to ETE when b_j = 1 or \frac{b_j}{b_k} = 1
      • LTE is only an approximation, good under specific conditions.
      • The radiation intensity produced is not the blackbody illumination you’d have in true thermodynamic equilibrium.
      • The radiation field is usually much weaker than the Planck function, so radiation alone cannot maintain the equilibrium level populations.
      • LTE assumption does not mean the Saha equation is applicable since radiative processes (not collisions) dominate in many ISM cases where LTE is applicable.

Definitions of Temperature

(updated for 2013)


The term “temperature” describes several different quantities in the ISM, and in observational astronomy. Only under idealized conditions (i.e. thermodynamic equilibrium, the Rayleigh Jeans regime, etc.) are (some of) these temperatures equivalent. For example, in stellar interiors, where the plasma is very well-coupled, a single “temperature” defines each of the following: the velocity distribution, the ionization distribution, the spectrum, and the level populations. In the ISM each of these can be characterized by a different “temperature!”

Brightness Temperature

T_B = the temperature of a blackbody that reproduces a given flux density at a specific frequency, such that

B_\nu(T_B) = \frac{2 h \nu^3}{c^2} \frac{1}{{\rm exp}(h \nu / kT_B) - 1}

Note: units for B_{\nu} are {\rm erg~cm^{-2}~s^{-1}~Hz^{-1}~ster^{-1}}.

This is a fundamental concept in radio astronomy. Note that the above definition assumes that the index of refraction in the medium is exactly 1.

Effective Temperature

T_{\rm eff} (also called T_{\rm rad}, the radiation temperature) is defined by

\int_\nu B_\nu d\nu = \sigma T_{{\rm eff}}^4 ,

which is the integrated intensity of a blackbody of temperature T_{\rm eff}. \sigma = (2 \pi^5 k^4)/(15 c^2 h^3)=5.669 \times 10^{-5} {\rm erg~cm^{-2}~s^{-1}~K^{-4}} is the Stefan-Boltzmann constant.

Color Temperature

T_c is defined by the slope (in log-log space) of an SED. Thus T_c is the temperature of a blackbody that has the same ratio of fluxes at two wavelengths as a given measurement. Note that T_c = T_B = T_{\rm eff} for a perfect blackbody.

Kinetic Temperature

T_k is the temperature that a particle of gas would have if its Maxwell-Boltzmann velocity distribution reproduced the width of a given line profile. It characterizes the random velocity of particles. For a purely thermal gas, the line profile is given by

I(\nu) = I_0~e^{\frac{-(\nu-\nu_{jk})^2}{2\sigma^2}},

where \sigma_{\nu}=\frac{\nu_{jk}}{c}\sqrt{\frac{kT_k}{\mu m_{\rm H}}} in frequency units, or

\sigma_v=\sqrt{\frac{kT_k}{\mu m_{\rm H}}} in velocity units, where \mu is the weight of the particle producing the line in units of m_{\rm H}.

In the “hot” ISM T_k is characteristic, but when \Delta v_{\rm non-thermal} > \Delta v_{\rm thermal} (where \Delta v are the Doppler full widths at half-maxima [FWHM]) then T_k does not represent the random velocity distribution. Examples include regions dominated by turbulence.

T_k can be different for neutrals, ions, and electrons because each can have a different Maxwellian distribution. For electrons, T_k = T_e, the electron temperature.

Ionization Temperature

T_I is the temperature which, when plugged into the Saha equation, gives the observed ratio of ionization states.

Excitation Temperature

T_{\rm ex} is the temperature which, when plugged into the Boltzmann distribution, gives the observed ratio of two energy states. Thus it is defined by

\frac{n_k}{n_j}=\frac{g_k}{g_j}~e^{-h\nu_{jk}/kT_{\rm ex}}.

Note that in stellar interiors, T_k = T_I = T_{\rm ex} = T_c. In this room, T_k = T_I = T_{\rm ex} \sim 300K, but T_c \sim 6000K.

Spin Temperature

T_s is a special case of T_{\rm ex} for spin-flip transitions. We’ll return to this when we discuss the important 21-cm line of neutral hydrogen.

Bolometric temperature

T_{\rm bol} is the temperature of a blackbody having the same mean frequency as the observed continuum spectrum. For a blackbody, T_{\rm bol} = T_{\rm eff}. This is a useful quantity for young stellar objects (YSOs), which are often heavily obscured in the optical and have infrared excesses due to the presence of a circumstellar disk.

Antenna temperature

T_A is a directly measured quantity (commonly used in radio astronomy) that incorporates radiative transfer and possible losses between the source emitting the radiation and the detector. In the simplest case,

T_A = \eta T_B( 1 - e^{-\tau}),

where \eta is the telescope efficiency (a numerical factor from 0 to 1) and \tau is the optical depth.
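A trivial sketch of the antenna-temperature relation above, showing its optically thin (T_A \approx \eta T_B \tau) and optically thick (T_A \to \eta T_B) limits:

```python
# Antenna temperature T_A = eta * T_B * (1 - exp(-tau)).
import math

def t_antenna(T_B, tau, eta=1.0):
    """Antenna temperature in K for brightness temperature T_B, optical depth tau."""
    return eta * T_B * (1 - math.exp(-tau))

print(t_antenna(100.0, 0.01))  # optically thin: T_A ~ T_B * tau = ~1 K
print(t_antenna(100.0, 10.0))  # optically thick: T_A -> eta * T_B = ~100 K
```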

Excitation Processes: Collisions

(updated for 2013)


Collisional coupling means that the gas can be treated in the fluid approximation, i.e. we can treat the system on a macrophysical level.

Collisions are of key importance in the ISM:

      • cause most of the excitation
      • can cause recombinations (electron + ion)
      • lead to chemical reactions

Three types of collisions

      1. Coulomb force-dominated (r^{-1} potential): electron-ion, electron-electron, ion-ion
      2. Ion-neutral: induced dipole in neutral atom leads to r^{-4} potential; e.g. electron-neutral scattering
      3. neutral-neutral: van der Waals forces -> r^{-6} potential; very low cross-section

We will discuss (3) and (2) below; for ion-electron and ion-ion collisions, see Draine Ch. 2.

In general, we will parametrize the interaction rate between two bodies A and B as follows:

{\frac{\rm{reaction~rate}}{\rm{volume}}} = <\sigma v>_{AB}~n_A n_B

In this equation, <\sigma v>_{AB} is the collision rate coefficient in \rm{cm}^3 \rm{s}^{-1}. <\sigma v>_{AB}= \int_0^\infty \sigma_{AB}(v) f_v~dv, where \sigma_{AB} (v) is the velocity-dependent cross section and f_v~dv is the particle velocity distribution, i.e. the probability that the relative speed between A and B is v. For the Maxwellian velocity distribution,

f_v~dv = 4 \pi \left(\frac{\mu'}{2\pi k T}\right)^{3/2} e^{-\mu' v^2/2kT} v^2~dv,

where \mu'=m_A m_B/(m_A+m_B) is the reduced mass. The center of mass energy is E=1/2 \mu' v^2, and the distribution can just as well be written in terms of the energy distribution of particles, f_E dE. Since f_E dE = f_v dv, we can rewrite the collision rate coefficient in terms of energy as

<\sigma v>_{AB}=\left(\frac{8kT}{\pi\mu'}\right)^{1/2} \int_0^\infty \sigma_{AB}(E) \left(\frac{E}{kT}\right) e^{-E/kT} \frac{dE}{kT}.

These collision coefficients can occasionally be calculated analytically (via classical or quantum mechanics), and can in other situations be measured in the lab. The collision coefficients often depend on temperature. For practical purposes, many databases tabulate collision rates for different molecules and temperatures (e.g., the LAMDA database).

For more details, see Draine, Chapter 2. In particular, he discusses 3-body collisions relevant at high densities.
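The Maxwellian average of \sigma v can be evaluated numerically from the energy-space integral above. For a constant (hard-sphere) cross section the answer reduces analytically to \sigma \sqrt{8kT/\pi\mu'}, which makes a good sanity check; the \sigma = 10^{-15}~{\rm cm}^2 H–H case below is illustrative:

```python
# Maxwellian-averaged rate coefficient <sigma v> via trapezoidal integration
# of (8kT/pi mu')^{1/2} * Int sigma(E) (E/kT) exp(-E/kT) d(E/kT).
import math

K_B = 1.381e-16    # erg/K
M_H = 1.673e-24    # g

def rate_coefficient(sigma_of_E, T, mu, n_steps=20000, x_max=50.0):
    """<sigma v> in cm^3/s; sigma_of_E takes energy in erg."""
    kT = K_B * T
    pre = math.sqrt(8 * kT / (math.pi * mu))
    dx = x_max / n_steps
    total = 0.0
    for i in range(n_steps + 1):
        x = i * dx                                 # x = E / kT
        w = 0.5 if i in (0, n_steps) else 1.0      # trapezoid end weights
        total += w * sigma_of_E(x * kT) * x * math.exp(-x) * dx
    return pre * total

# Constant hard-sphere cross section, H-H collisions at 100 K:
k = rate_coefficient(lambda E: 1e-15, 100.0, M_H / 2)
print(f"{k:.2e} cm^3/s")   # ~2e-10, matching sigma * sqrt(8kT/pi mu')
```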

Neutral-Neutral Interactions

(updated for 2013)


Short-range forces involving “neutral” particles (neutral-ion, neutral-neutral) are inherently quantum-mechanical. Neutral-neutral interactions are very weak until the electron clouds overlap (\sim 1~\AA \sim 10^{-8}~{\rm cm}). We can therefore treat these particles as hard spheres. The collisional cross section for two species is a circle of radius r_1 + r_2, since that is the closest the two particles can get without touching.

\sigma_{nn} \sim \pi (r_1 + r_2)^2 \sim 10^{-15}~{\rm cm}^2

What does that collision rate imply? Consider the mean free path:

mfp = \ell_c \approx (n_n \sigma_{nn})^{-1} = \frac{10^{15}} {n_H}~{\rm cm}

This is about 100 AU in typical ISM conditions (n_H = 1 {\rm cm^{-3}})

In gas at temperature T, the mean particle speed follows from the 3-D kinetic energy: \frac12 m_n v^2 = \frac32 kT, or

v = \sqrt{\frac{3 kT}{m_n}}, where m_n is the mass of the neutral particle. The mean free path and velocity allow us to define a collision timescale:

\tau_{nn} \sim \frac{l_c}{v} \sim \left(\frac{3 kT}{m_n}\right)^{-1/2} (n_n \sigma_{nn})^{-1} \approx 2 \times 10^3~n_n^{-1}~T^{-1/2}~{\rm years}.

      • For (n,T) = (1~{\rm cm^{-3}, 80~K}), the collision time is about 200 years
      • For (n,T) = (10^4~{\rm cm^{-3}, 10~K}), the collision time is about 3 weeks
      • For (n,T) = (1~{\rm cm^{-3}, 10^4~K}), the collision time is about 20 years

So we see that density matters much more than temperature in determining the frequency of neutral-neutral collisions.
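The collision timescale can be scripted as a sanity check. This sketch uses the hard-sphere \sigma \sim 10^{-15}~{\rm cm}^2 and the rms speed v = \sqrt{3kT/m_{\rm H}}; note that different mean-speed conventions shift the answer by factors of order unity:

```python
# Neutral-neutral collision timescale tau = 1 / (n sigma v), CGS.
import math

K_B = 1.381e-16        # erg/K
M_H = 1.673e-24        # g
SIGMA_NN = 1e-15       # cm^2, hard-sphere cross section
YEAR = 3.156e7         # s

def tau_nn_years(n, T):
    """Collision time in years for density n [cm^-3], temperature T [K]."""
    v = math.sqrt(3 * K_B * T / M_H)      # rms speed, cm/s
    return 1.0 / (n * SIGMA_NN * v) / YEAR

print(tau_nn_years(1, 80))       # diffuse cold gas: ~2e2 yr
print(tau_nn_years(1e4, 10))     # dense core: ~0.06 yr (weeks)
```

Density clearly matters far more than temperature, since tau scales as n^{-1} but only T^{-1/2}.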

Ion-Neutral Reactions

(updated for 2013)


In Ion-Neutral reactions, the neutral atom is polarized by the electric field of the ion, so that interaction potential is

U(r) \approx \vec{E} \cdot \vec{p} = \frac{Z e} {r^2} ( \alpha \frac{Z e}{r^2} ) = \alpha \frac{Z^2 e^2}{r^4},

where \vec{E} is the electric field due to the charged particle, \vec{p} is the induced dipole moment in the neutral particle (determined by quantum mechanics), and \alpha is the polarizability, which defines \vec{p}=\alpha \vec{E} for a neutral atom in a uniform static electric field. See Draine, section 2.4 for more details.

This interaction can take strong or weak forms. We distinguish between the two cases by considering b, the impact parameter. Recall that the reduced mass of a 2-body system is \mu' = m_1 m_2 / (m_1 + m_2). In the weak regime, the interaction energy is much smaller than the kinetic energy of the reduced mass:

\frac{\alpha Z^2 e^2}{b^4} \ll\frac{\mu' v^2}{2} .

In the strong regime, the opposite holds:

\frac{\alpha Z^2 e^2}{b^4} \gg\frac{\mu' v^2}{2}.

The spatial scale which separates these two regimes corresponds to b_{\rm crit}, the critical impact parameter. Setting the two sides equal, we see that b_{\rm crit} = \big(\frac{2 \alpha Z^2 e^2}{\mu' v^2}\big)^{1/4}

The effective cross section for ion-neutral interactions is

\sigma_{ni} \approx \pi b_{\rm crit}^2 = \pi Z e (\frac{2 \alpha}{\mu'})^{1/2} (\frac{1}{v})

Deriving an interaction rate is trickier than for neutral-neutral collisions because n_i \ne n_n in general. So, let’s leave out an explicit n and calculate a rate coefficient instead, in {\rm cm}^3 {\rm s}^{-1}.

k = <\sigma_{ni} v> (although really \sigma_{ni} \propto 1/v, so k is largely independent of v). Combining with the equation above, we get the ion-neutral scattering rate coefficient

k = \pi Z e (\frac{2 \alpha}{\mu'})^{1/2}

As an example, for C^+ - H interactions we get k \approx 2 \times 10^{-9} {\rm cm^{3} s^{-1}}. This is about the rate for most ion-neutral exothermic reactions. This gives us

\frac{{\rm rate}}{{\rm volume}} = n_i n_n k.

So, if n_i = n_n = 1, the average time \tau between collisions is 16 years. Recall that, for neutral-neutral collisions in the diffuse ISM, we had \tau \sim 500 years. Ion-neutral collisions are much more frequent in most parts of the ISM due to the larger interaction cross section.
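The rate coefficient above is easy to evaluate. The polarizability of atomic hydrogen, \alpha \approx 6.7 \times 10^{-25}~{\rm cm}^3, is an input I've supplied (it is not quoted in the notes), so the result is a sketch that lands at the same order of magnitude as the k \approx 2 \times 10^{-9}~{\rm cm^3~s^{-1}} quoted above:

```python
# Langevin-style ion-neutral rate coefficient k = pi Z e sqrt(2 alpha / mu'), CGS.
import math

E_CHARGE = 4.803e-10    # esu
M_H = 1.673e-24         # g
ALPHA_H = 6.67e-25      # cm^3, polarizability of atomic H (assumed input)

def langevin_k(Z, alpha, mu):
    """Ion-neutral rate coefficient in cm^3/s."""
    return math.pi * Z * E_CHARGE * math.sqrt(2 * alpha / mu)

mu_CH = (12 * M_H) * M_H / (12 * M_H + M_H)   # C+ - H reduced mass, ~(12/13) m_H
k = langevin_k(1, ALPHA_H, mu_CH)
print(f"k   = {k:.1e} cm^3/s")                # ~1.4e-9, same order as 2e-9
print(f"tau = {1.0 / k / 3.156e7:.0f} yr")    # ~20 yr at n_i = n_n = 1 cm^-3
```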

The Virial Theorem


(Transcribed by Bence Beky). See also these slides from lecture

See Draine pp 395-396 and appendix J for more details.

The Virial Theorem provides insight about how a volume of gas subject to many forces will evolve. Let’s start with virial equilibrium. For a surface S,

0 = \frac12 \frac{\mathrm D^2I}{\mathrm Dt^2} = 2\Gamma + 3\Pi + \mathscr M + W + \frac1{4\pi}\int_S(\mathbf r \cdot \mathbf B)\,\mathbf B \cdot \mathrm d \mathbf s - \int_S \left(p+\frac{B^2}{8\pi}\right)\mathbf r \cdot \mathrm d\mathbf s,

see Spitzer pp. 217–218. Here I is the moment of inertia:

I = \int \varrho r^2 \mathrm dV

\Gamma is the bulk kinetic energy of the fluid (macroscopic kinetic energy):

\Gamma = \frac12 \int \varrho v^2 \mathrm dV,

\Pi is \frac23 of the random kinetic energy of thermal particles (molecular motion), or \frac13 of random kinetic energy of relativistic particles (microscopic kinetic energy):

\Pi = \int p \mathrm dV,

\mathscr M is the magnetic energy within S:

\mathscr M = \frac1{8\pi} \int B^2 \mathrm dV

and W is the total gravitational energy of the system if masses outside S don’t contribute to the potential:

W = - \int \varrho \mathbf r \cdot \nabla \Phi \mathrm dV.

Among all these terms, the most commonly used are \Gamma, \mathscr M and W, and most often the equation is quoted simply as 2\Gamma+W=0. Note that the virial theorem itself always holds; it only appears to fail when important terms are omitted.

This kind of simple analysis is often used to determine how bound a system is, and predict its future, e.g. collapse, expansion or evaporation. Specific examples will show up later in the course, including instability analyses.

The virial theorem as Chandrasekhar and Fermi formulated it in 1953 is the following:

\underbrace {2T_m} _{2\Gamma} + \underbrace {2T_k} _{3\Pi} + \underbrace {\Omega} _{W} + \mathscr M = \underbrace {0} _{\frac {\mathrm D^2 I} {\mathrm D t^2}}.

This uses a different notation but expresses the same idea, which is very useful in terms of the ISM.
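As a quick illustration of the truncated 2\Gamma + W = 0 form, here is a sketch that checks whether a toy cloud is bound, lumping bulk and thermal motions into one 1D velocity dispersion (so the kinetic energy is \frac32 M \sigma^2) and using W = -\frac35 G M^2/R for a uniform sphere. The cloud numbers are made up for illustration.

```python
import math

G = 6.674e-8           # cgs gravitational constant
M_sun = 1.989e33       # g
pc = 3.086e18          # cm

# Toy cloud: 1e4 M_sun, R = 5 pc, 1D velocity dispersion 1 km/s
M = 1e4 * M_sun
R = 5 * pc
sigma = 1e5            # cm/s

kinetic = 1.5 * M * sigma**2       # thermal + bulk kinetic energy
W = -0.6 * G * M**2 / R            # uniform-sphere gravitational energy

# Virial-style boundedness check: bound if 2T < |W|
alpha_vir = 2 * kinetic / abs(W)
print(alpha_vir, "bound" if alpha_vir < 1 else "unbound")
```

Here alpha_vir < 1, so this toy cloud is bound and would be expected to contract or fragment.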

Radiative Transfer


The specific intensity of a radiation field is defined as the energy rate density with respect to frequency, cross sectional area, solid angle and time:

\mathrm dI_\nu = \frac {\mathrm dE} {\mathrm d\nu \mathrm dA \mathrm d\Omega \mathrm dt}

[\mathrm dI_\nu] = 1 \;\mathrm{erg} \; \mathrm {Hz}^{-1} \; \mathrm {cm}^{-2} \; \mathrm {sr}^{-1} \; \mathrm s ^{-1}

in cgs units. It is important to note that specific intensity does not change during the propagation of a ray, no matter what its geometry is, unless there is extinction or emission in the path.

Specific intensity is a function of position, frequency, direction and time. Integrating over all directions, we get the specific flux density, which is a function of position, frequency and time:

F_\nu = \int_{4\pi} I_\nu \cos \theta \mathrm d\Omega

[F_\nu] = 1 \;\mathrm{erg} \; \mathrm {Hz}^{-1} \; \mathrm {cm}^{-2} \; \mathrm s ^{-1}

where \theta is usually assumed to be zero.

A conventional unit of specific flux density, especially in radio astronomy, is the jansky, named after the American radio astronomer Karl Guthe Jansky:

1 \; \mathrm {Jy} = 10^{-23} \;\mathrm{erg} \; \mathrm {Hz}^{-1} \; \mathrm {cm}^{-2} \; \mathrm s ^{-1} = 10^{-26} \;\mathrm{W} \; \mathrm {Hz}^{-1} \; \mathrm {m}^{-2}

The specific energy density of a radiation field is

u_\nu = \frac1c \int_{4\pi} I_\nu \mathrm d\Omega

[u_\nu] = 1 \;\mathrm{erg} \; \mathrm {Hz}^{-1} \; \mathrm {cm}^{-3}

The mean specific intensity is the specific intensity at a given position and time averaged over all directions:

J_\nu = \frac1{4\pi} \int_{4\pi} I_\nu \mathrm d\Omega = \frac {cu_\nu} {4\pi} \left( = \frac {F_\nu}{4\pi} \textrm{ if } \theta=0 \right)

[J_\nu] = 1 \;\mathrm{erg} \; \mathrm {Hz}^{-1} \; \mathrm {cm}^{-2} \; \mathrm {s}^{-1}

The above “specific” quantities all have their frequency integrated counterparts: intensity, flux density, energy density and mean intensity.

The emission and absorption coefficients j_\nu and \alpha_\nu are defined as the coefficients in the differential equations governing the change of intensity in a ray traversing some medium:

\mathrm dI_\nu = ( j_\nu - \alpha_\nu I_\nu) \mathrm ds.

The emissivity \varepsilon_\nu and opacity \kappa_\nu are defined by

j_\nu = \frac {\varepsilon_\nu \varrho} {4\pi}, \alpha_\nu = \varrho \kappa_\nu.

To find the integral equation determining the specific intensity as a function of path, we define the source function S_\nu and the differential optical depth \mathrm d\tau_\nu by

S_\nu = \frac {j_\nu} {\alpha_\nu}; \mathrm d\tau_\nu = \alpha_\nu \mathrm ds.

It is left as an exercise to the reader to show that these lead to the Radiative Transfer Equation

I_\nu (\tau_\nu) = I_\nu (0) e^{-\tau_\nu} + \int_0^{\tau_\nu} e^{-(\tau_\nu-\tau'_\nu)} S_\nu(\tau'_\nu) \mathrm d \tau'_\nu

If the source function is constant in a medium, then the two limiting cases are

      • optically thin \tau_\nu \to 0: I_\nu (\tau_\nu) \approx I_\nu(0) (1 - \tau_\nu) + S_\nu \tau_\nu
      • optically thick \tau_\nu \to \infty : I_\nu (\tau_\nu) \to S_\nu
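These limits are easy to verify numerically. Below is a sketch that integrates \mathrm dI_\nu/\mathrm d\tau_\nu = S_\nu - I_\nu with constant source function and compares against the formal solution; the numbers chosen for I_\nu(0) and S_\nu are arbitrary.

```python
import math

# Forward-Euler integration of dI/dtau = S - I with constant source
# function S, compared against the analytic solution
# I(tau) = I0 * exp(-tau) + S * (1 - exp(-tau)).
def integrate_rte(I0, S, tau, n=100000):
    dtau = tau / n
    I = I0
    for _ in range(n):
        I += (S - I) * dtau
    return I

I0, S = 10.0, 2.0
for tau in (0.01, 1.0, 10.0):
    analytic = I0 * math.exp(-tau) + S * (1 - math.exp(-tau))
    print(tau, integrate_rte(I0, S, tau), analytic)
```

At small \tau the result stays near I_\nu(0); at large \tau it converges to S_\nu, as in the two limits above.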


The Planck function, named after the German physicist Max Karl Ernst Ludwig Planck, is

B_\nu (T) = \frac {2h\nu^3}{c^2} \cdot \frac1 {e^{\frac{h\nu}{kT}}-1}

Planck’s law says that a black body, that is, an optically thick object in thermal equilibrium, will emit radiation of specific intensity given by the Planck function, also known as the blackbody function. Any radiation with this specific intensity is called blackbody radiation. A similar concept is thermal emission, which in fact is defined as S_\nu=B_\nu, and therefore only implies blackbody radiation if the emitting medium is optically thick. Note that in the case of thermal emission, one can substitute the definition of the source function to write j_\nu=\alpha_\nu B_\nu, which is known as Kirchhoff’s law.

The brightness temperature T_\mathrm b of radiation with specific intensity I_\nu at a given frequency \nu is defined as the temperature that a black body would have to have in order to have the same specific intensity:

I_\nu = B_\nu (T_\mathrm b)

There are two asymptotic approximations to blackbody radiation: if h\nu \ll kT, that is, low frequency or high temperature, we have the Rayleigh–Jeans approximation for the specific intensity:

I_\nu^{\mathrm {RJ}} (T) = \frac {2\nu^2}{c^2} \cdot kT

This is valid in most areas of radio astronomy, except for example some cases of thermal dust emission.

In the other limit, where h\nu \gg kT, that is, high frequency or low temperature, we have the Wien approximation, named after the German physicist Wilhelm Carl Werner Otto Fritz Franz Wien, who derived it in 1896:

I_\nu^{\mathrm {W}} (T) = \frac {2h\nu^3}{c^2} \cdot e^{-\frac{h\nu}{kT}}
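To see where the Rayleigh–Jeans form starts to fail, one can compare it against the full Planck function. The sketch below uses the CO J=1\to0 frequency and T = 20 K as an illustrative pair (so h\nu/kT \approx 0.28), choices made here for concreteness rather than taken from the text.

```python
import math

h = 6.626e-27; k_B = 1.381e-16; c = 2.998e10  # cgs constants

def planck(nu, T):
    # Full Planck function B_nu(T); expm1 avoids precision loss at small x
    x = h * nu / (k_B * T)
    return (2 * h * nu**3 / c**2) / math.expm1(x)

def rayleigh_jeans(nu, T):
    return 2 * nu**2 * k_B * T / c**2

nu, T = 115.27e9, 20.0   # CO J=1->0 frequency, a cold-cloud temperature
rel_err = rayleigh_jeans(nu, T) / planck(nu, T) - 1
print(rel_err)  # RJ overestimates by ~15% here, since h nu / kT ~ 0.28
```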


Now assume the background radiation of brightness temperature T_\mathrm{bg} traverses a medium of temperature T, source function S_\nu=B_\nu(T), and optical depth \tau_\nu. In the Rayleigh–Jeans regime B\propto T, therefore the differential and integral forms of the radiative transfer equation become

\frac {\mathrm dT_\mathrm b}{\mathrm d\tau_\nu} = T - T_\mathrm b

T_\mathrm b = T_\mathrm{bg} e^{-\tau_\nu} + T ( 1-e^{-\tau_\nu})

where T_\mathrm b is the brightness temperature of the radiation leaving the medium.
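The integral form can be evaluated directly; the sketch below runs the CMB background (T_\mathrm{bg} = 2.73 K) through a 50 K cloud at a few optical depths (toy values chosen for illustration).

```python
import math

# T_b = T_bg * exp(-tau) + T * (1 - exp(-tau)),
# valid in the Rayleigh-Jeans regime.
def t_brightness(T_bg, T, tau):
    return T_bg * math.exp(-tau) + T * (1 - math.exp(-tau))

T_bg, T = 2.73, 50.0
for tau in (0.1, 1.0, 5.0):
    print(tau, t_brightness(T_bg, T, tau))
```

For \tau \to 0 the background passes through unchanged; for \tau \gg 1 the emergent brightness temperature saturates at the cloud temperature T.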

This can be applied to dust emission observations assuming an “emissivity-modified” blackbody radiation. Then the contribution of particles of a given linear size a is

F_\lambda = N_a \frac {\pi a^2} {D^2} Q_\lambda B_\lambda (T)

where N_a is the number of such particles, D is the distance to the observer, and Q_\lambda is the emissivity. Recall that the blackbody intensity with respect to wavelength is

B_\lambda (T) = \frac {2hc^2} {\lambda ^5} \cdot \frac1 {e^{\frac{hc}{\lambda kT}} - 1 }.

We refer the reader to Hildebrand 1983.

It turns out that the emissivity in far infrared follows a power law:

Q_\mathrm{FIR} \propto \lambda ^{-\beta}

      • \beta=0 for blackbody
      • \beta=1 for amorphous lattice-layer materials
      • \beta=2 for metals and crystalline dielectrics

The observed flux will be

F_\textrm{observed} = \sum_a N_a \frac {\pi a^2} {D^2} Q_\lambda B_\lambda (T).

The same emission can be a result of different values of T, Q, and N.

To determine the total mass of dust, we write

M_\mathrm{dust} = \frac {4\varrho_\mathrm{dust} F_\lambda D^2} {3B_\lambda(T_\mathrm{dust})} \cdot \left\langle \frac a {Q_\lambda} \right\rangle,

where \langle\cdot\rangle is an average weighted appropriately.

If \tau\ll1, then emission essentially depends on grain surface area, and we can write

F_\nu = \kappa_\nu B_\nu \frac {M_\mathrm{dust}} {D^2}.

Hildebrand 1983 gives us an empirical formula for dust opacity, implicitly assuming a gas-to-dust ratio of 100, which can be extended with the power law described above to arrive at

\kappa_\nu = 0.1\;\frac{\mathrm{cm}^2}{\mathrm g} \cdot \left( \frac \nu {1200\;\mathrm{GHz}} \right)^\beta.

A typical modern value for interstellar dust is \beta=1.7, but its value is still debated. It can be determined from the SED slope in the Rayleigh–Jeans regime, where

F_\nu \propto \nu^{\beta+2}.
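This slope can be checked numerically by combining the opacity power law above with the Planck function. The sketch below works per unit M_\mathrm{dust}/D^2, and the frequencies (deep in the Rayleigh–Jeans regime) and temperature are illustrative choices.

```python
import math

h = 6.626e-27; k_B = 1.381e-16; c = 2.998e10  # cgs constants

def planck_nu(nu, T):
    return (2 * h * nu**3 / c**2) / math.expm1(h * nu / (k_B * T))

def kappa(nu, beta):
    # Hildebrand-style opacity, extended with the Q ∝ nu^beta power law
    return 0.1 * (nu / 1200e9)**beta   # cm^2 / g

def flux(nu, T, beta):
    # Optically thin dust flux per unit (M_dust / D^2)
    return kappa(nu, beta) * planck_nu(nu, T)

# In the Rayleigh-Jeans regime the log-log slope should approach beta + 2
beta, T = 1.7, 20.0
nu1, nu2 = 10e9, 15e9
slope = math.log(flux(nu2, T, beta) / flux(nu1, T, beta)) / math.log(nu2 / nu1)
print(slope)   # close to beta + 2 = 3.7
```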

ISM of the Milky Way


The ISM in the Milky Way can be divided into cold stuff and hot stuff. The cold stuff is dust and gas. Katherine will talk briefly about the importance of cooling via CO emission. Hot stuff will be discussed by Ragnhild and Vicente (also see Spitzer 1958), and Tanmoy already talked about supernovae. Suggested reading is Chapters 5 and 19 of Draine. In particular, if you ever need a reference for term symbols (uppercase Greek letters with subscripts and superscripts), see page 39.

Cold ISM


The molecular gas is mostly composed of H_2 molecules. This, however, can rarely be observed directly: H_2 has no permanent dipole moment, and therefore no dipole-allowed rotational transitions. (As exceptions, UV absorption lines can be observed in hot H_2 gas, vibrational lines can be observed in shocked ISM, and NIR lines can be observed in “cold” hot ISM.) Instead of observing hydrogen lines, trace species are used as proxies to infer hydrogen density. The choice of preferred trace species depends on their abundance. To quantify this, we define the critical density as

n_\mathrm{critical} = \frac {A_\mathrm{ul}} {\gamma_\mathrm{ul}},

where A is the Einstein coefficient and \gamma is the collision rate for a given transition. This of course will depend on temperature, as the collision rate is the product of the temperature-independent collisional cross section \sigma and the temperature-dependent particle velocity v. For a detailed explanation of critical density, see pp.~81–87 of Spitzer, and Chapter 19 of Draine.

Typically assumed values are T\sim100\;\mathrm K, v\sim10^5\;\frac{\mathrm{cm}}{\mathrm s}, \sigma\sim10^{-15}\;\mathrm{cm}^{2}, \gamma_\mathrm{ul}\sim10^{-10}\;\frac{\mathrm{cm}^3}{\mathrm s}. The Einstein coefficient for the J=1\to0 transition of CO is A_{10}\sim6\cdot10^{-8}\;\mathrm s^{-1}, yielding a critical density of n_\mathrm{critical}\sim6\cdot10^2\;\mathrm {cm}^{-3}. Spitzer gives us fiducial values for critical densities around T=100\;\mathrm K:

      • CO, J = 1 \to 0, \lambda = 2.6 mm, A = 6 \cdot 10^{-8} \rm s^{-1}, n_{\rm crit} = 4 \cdot 10^3 {~\rm cm^{-3}}
      • NH_3, J = 1 \to 1, \lambda = 12.65 mm, A = 1.7\cdot10^{-7} {\rm s^{-1}}, n_{\rm crit} = 1.1\cdot10^4 {~\rm cm^{-3}}
      • CS, J=1\to 0, \lambda = 6.12 mm, A = 1.8\cdot10^{-6} {\rm s}^{-1}, n_{\rm crit} = 1.1\cdot10^5 {\rm ~cm^{-3}}
      • HCN, J=1\to 0, \lambda = 3.38 mm, A = 2.5 \cdot10^{-5} {\rm s^{-1}}, n_{\rm crit} = 1.6\cdot10^6 {\rm~ cm^{-3}}
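The CO estimate quoted in the paragraph above follows directly from the fiducial numbers:

```python
# n_crit = A_ul / gamma_ul with gamma_ul ~ sigma * v, using the fiducial
# values for T ~ 100 K quoted above.
sigma = 1e-15        # collisional cross section [cm^2]
v = 1e5              # typical relative velocity [cm/s]
gamma_ul = sigma * v               # ~1e-10 cm^3/s
A_10 = 6e-8                        # CO J=1->0 Einstein A [1/s]
n_crit = A_10 / gamma_ul           # [cm^-3]
print(n_crit)   # ~6e2 cm^-3
```

The table values differ from this back-of-the-envelope number because each species has its own measured collision rate coefficient.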

In practice, the following tracers are used for cold clouds in the range of 3\;\mathrm K \lessapprox T \lessapprox 100 \; \mathrm K:

      • Low density (n\sim10\;{\rm cm^{-3}}): ^{12}CO
      • Dark cloud (n\sim300\;{\rm cm^{-3}}): ^{13}CO, OH
      • Dense core (n\sim10^3\;{\rm cm^{-3}}): C^{18}O, CS
      • Dense core (n\sim5\cdot10^3\;{\rm cm^{-3}}): NH_3, N_2H^+, CS
      • Very dense (n\sim10^8\;{\rm cm^{-3}}): OH masers
      • Very very dense (n\sim10^{10}\;{\rm cm^{-3}}): H_2O masers

See handout on dense core multi-line half power contours from Myers 1991, and note that sizes do not proceed as critical densities would suggest.

As a reminder, electronic energy levels are much further apart, resulting in higher frequency transition lines. Vibrational levels are a few orders of magnitude more tightly spaced, and rotational energy levels are closer still. Here we give a brief overview of rotational line structure; see Chapter 5.1.5 of Draine for more details.

Quantum mechanically, a diatomic molecule has energy levels identified by J=0,1,2,\ldots with energy E_{\rm rot} = \frac{J(J+1) \hbar^2}{2I_v} = B_v J (J+1) (which would have J^2 instead in the quasiclassical theory). Here I_v=r_v^2 m_\mathrm r is the moment of inertia, r_v is the distance between the atoms, and m_\mathrm r is their reduced mass. B_v is the rotation constant. The v index signifies that these values are valid for a given vibrational state only. Therefore the transition energy between the states J-1 and J is

\Delta E_\mathrm{rot} = 2J B_v.

In the case of ^{12}C^{16}O, B_v/h \approx 57.6\;\mathrm{GHz}. For the J=1\to0 transition,

\Delta E_\mathrm{rot}=4.7\cdot10^{-4}\;\mathrm{eV}, corresponding to \nu=115.271\;\mathrm{GHz}, or \frac{h\nu}k=5.5\;\mathrm K. The J=2\to1 and J=3\to2 transitions have twice and three times higher energy differences, respectively.
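The rotational ladder \nu(J \to J-1) = 2 B_v J / h is easy to tabulate; the sketch below uses the standard spectroscopic value B_v/h \approx 57.635 GHz for CO.

```python
# Rotational transition frequencies nu(J -> J-1) = 2 * (B_v / h) * J.
B_over_h = 57.635e9   # rotation constant of 12C16O in frequency units [Hz]
for J in (1, 2, 3):
    nu = 2 * B_over_h * J
    print(f"J={J}->{J-1}: {nu / 1e9:.2f} GHz")
```

The J=1\to0 line lands at the familiar 115.27 GHz, with the higher transitions at integer multiples.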

We can estimate which transition gives maximum emission (in terms of number of photons) at a given temperature:

E_\mathrm{rot} = B_v J (J+1)

k T_\mathrm{rot} = B_v J (J+1)

J_\mathrm{max} \approx \sqrt {\frac {kT_\mathrm{rot}}{B_v}}

      • The SMA and ALMA are sensitive to J=4-7
      • The KAO and SOFIA are sensitive to J=20-40

Note that 1\to0 at 2.6 mm is more visible from Earth than 2\to1 due to atmospheric extinction.

Collisional Excitation


In LTE, the transition rates for a collisional u \rightleftarrows l transition are

C_{ul} = n\gamma_{ul}

C_{lu} = C_{ul} \frac{g_u}{g_l} e^{-\frac{h\nu}{kT_K}}

where \gamma_{ul}=\langle \sigma_{ul} v \rangle is the rate coefficient for the transition, \sigma_{ul} is the collisional cross section and v is the particle velocity.

In the case of a Maxwellian velocity distribution, we have

\gamma_{ul} = \frac 4 {\sqrt \pi} \left( \frac \mu {2kT_K} \right)^{\frac32} \int_0^\infty \sigma_{ul} v^3 e^{-\frac{\frac12\mu v^2}{kT_K}} \mathrm dv

where \mu is the reduced mass. For neutral-neutral state transitions, \gamma is typically 10^{-11}\sim10^{-10}\;\mathrm{cm}^3\;\mathrm s^{-1}, and for an ionizing transition, \gamma\sim10^{-9}\;\mathrm{cm}^3\;\mathrm s^{-1}.

In equilibrium, we have

\dot n_u = n_l C_{lu} - n_u C_{ul} - n_u A_{ul}

0 = n_l C_{ul} \frac{g_u}{g_l} e^{-\frac{h\nu}{kT_K}} - n_u C_{ul} - n_u A_{ul}

0 = (n-n_u) \frac{g_u}{g_l} e^{-\frac{h\nu}{kT_K}} - n_u - n_u \frac {n_\mathrm{crit}}{n}

n \frac{g_u}{g_l} e^{-\frac{h\nu}{kT_K}} = n_u \frac{g_u}{g_l} e^{-\frac{h\nu}{kT_K}} + n_u + n_u \frac {n_\mathrm{crit}}{n}

\frac {n_u} n = \frac {\frac{g_u}{g_l} e^{-\frac{h\nu}{kT_K}}} {\frac{g_u}{g_l} e^{-\frac{h\nu}{kT_K}} + 1 + \frac {n_\mathrm{crit}}{n}}

where n_\mathrm{crit}=\frac{A_{ul}}{\gamma_{ul}} is the critical density (so that \frac{A_{ul}}{C_{ul}} = \frac{n_\mathrm{crit}}{n}). When n\gg n_\mathrm{crit}, collisions dominate over spontaneous emission, resulting in

\frac {n_u} {n_l} = \frac {g_u}{g_l} e^{-\frac{h\nu}{kT_K}}.

However, in the case of n\ll n_\mathrm{crit}, spontaneous emission dominates the decay, so each collisional excitation results in an emitted photon:

\frac {n_u} {n_l} = \frac n {n_\mathrm{crit}} \frac {g_u}{g_l} e^{-\frac{h\nu}{kT_K}}.
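Both limits follow from the single interpolating expression n_u/n_l = (g_u/g_l)\, e^{-h\nu/kT_K} / (1 + n_\mathrm{crit}/n), which comes from the same rate balance. A quick numerical check, with toy values for the degeneracies and excitation energy:

```python
import math

# Two-level population ratio n_u/n_l as a function of density,
# interpolating between the subthermal and LTE limits.
def ratio(n, n_crit, gu_gl, h_nu_over_kT):
    return gu_gl * math.exp(-h_nu_over_kT) / (1 + n_crit / n)

gu_gl, x, n_crit = 3.0, 0.5, 1e3   # toy values
lte = gu_gl * math.exp(-x)
print(ratio(1e7, n_crit, gu_gl, x), lte)   # n >> n_crit: approaches LTE
print(ratio(1.0, n_crit, gu_gl, x))        # n << n_crit: suppressed by ~n/n_crit
```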

In case of optically thin emission,

F_\mathrm{line} = \frac {h\nu}{4\pi} A_{ul} \Omega \int n_u \mathrm ds,

where \Omega is the apparent solid angle of the source. Then

\textrm {if } n\ll n_\mathrm {crit}: \quad F_\mathrm{line} = \frac {h\nu}{4\pi} \gamma_{ul} \Omega \frac{g_u}{g_l} e^{-\frac{h\nu}{kT_K}} \int n^2 \mathrm ds \propto n^2

\textrm {if } n\gg n_\mathrm {crit}: \quad F_\mathrm{line} = \frac {h\nu}{4\pi} A_{ul} \Omega \frac{g_u}{g_l} e^{-\frac{h\nu}{kT_K}} \int n \mathrm ds \propto n

Recombination of Ions with Electrons


For more details, see Draine Chapter 14.

In HII regions, most recombination happens radiatively: X^+ + e \to X + h \nu.

An electron with kinetic energy E can recombine to any level of hydrogen. The energy of the emitted photon is then given by h\nu = E + I_{nl}, where I_{nl} = 13.6\;{\rm eV} / n^2 is the binding energy of quantum state nl.

There are two extreme cases in recombination (see Baker and Menzel 1938):

      • Case A: The medium is optically thin to ionizing radiation. Appropriate in shock-heated regions where density is very low.
      • Case B: Optically thick to ionizing radiation. When an atom recombines to the n=1 state, the emitted Lyman continuum photon is immediately reabsorbed, so recombinations to n=1 do not change the ionization state. This is called the “on the spot” approximation.

Optical tracers of recombination: H-\alpha and other lines

Radio: Radio recombination lines (see Draine 10.7). Rydberg states are very high (n>100) hydrogen energy levels, populated by recombination. Spontaneous n+1 \to n (\alpha) decay gives

\nu_{n\alpha} = \frac{2n+1}{[n(n+1)]^2} \frac{I_H}{h} \sim 6.479 \big(\frac{100.5}{n+0.5}\big)^3 {\rm GHz}

A popular line is \nu_{166\alpha} = 1425 {\rm MHz}. This is often observed because of its proximity to the 1420 MHz line of HI.
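The H166\alpha frequency can be checked against the exact Rydberg expression \nu = R_H c\,(1/n^2 - 1/(n+1)^2) rather than the large-n approximation above:

```python
# Frequency of the H n-alpha recombination line from the Rydberg formula.
Ry_c = 3.28805e15   # Rydberg frequency I_H / h [Hz]

def nu_alpha(n):
    return Ry_c * (1.0 / n**2 - 1.0 / (n + 1)**2)

print(nu_alpha(166) / 1e6, "MHz")   # H166alpha, near the 1420 MHz HI line
```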

Radio recombination lines often involve masing.

Star Formation in Molecular Clouds

Topics to be covered include:

      • The Jeans Mass
      • Free Fall Time
      • Virial Theorem
      • Instabilities
      • Magnetic Fields
      • Non-Isolated Systems
      • “Turbulence”

An overview of the steps of star formation.

Basic properties of a GMC


Mass: 10^{5-6} M_\odot

Lifetime: Uncertain. Probably 10 Myr (maybe as long as 100 Myr). “Lifetime” is not easily defined or inferred, since clouds are constantly fragmenting, exchanging mass with their surroundings, etc.

We roughly describe the hierarchy and fragmentation of clouds via the following terms:

      • Clump. 10-100 M_\odot. 1pc. The progenitor of stellar clusters
      • Core. 1-10 M_\odot. 0.1 pc. The progenitor of individual stars, and small stellar systems.
      • Star. The end-product of fragmentation and collapse. 1 M_\odot.

Note that, across these scales, density increases by tens of orders of magnitude:

\frac{\rho_{\rm star}}{\rho_{\rm core}} \propto \bigg(\frac{R_{\rm core}}{R_{\rm star}} \bigg)^3 = \bigg(\frac{.1 \times 3 \times 10^{18}\;{\rm cm}}{7 \times 10^{10}\;{\rm cm}} \bigg)^3 \sim 10^{20}

(Figure: Schematic of how a GMC fragments into clumps, cores, and stars.)

The Jeans Mass


For further details, see Draine ch 41.

Let’s analyze the stability of a sphere of gas, where thermal pressure is balanced by self-gravity.

Start with the basic hydro equations (conservation of mass, momentum, and Poisson’s equation for the gravitational potential):

\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \vec{v}) = 0

\frac{\partial v}{\partial t} + (\vec{v} \cdot \nabla) \vec{v} = -\frac{1}{\rho}\nabla P - \nabla \phi

\nabla^2 \phi = 4 \pi G \rho

Consider an equilibrium solution \rho_0(\vec{r}), P_0(\vec{r}), etc., such that time derivatives are zero. Let’s perturb that solution slightly, and analyze when that perturbation grows unstably.

\vec{v} = \vec{v_0} + \vec{v_1}, \rho = \rho_0 + \rho_1, P = P_0 + P_1, \phi = \phi_0 + \phi_1.

The linearized hydro equations, to first order in the perturbations, are

\frac{\partial \rho_1}{\partial t} + \vec{v_0} \cdot \nabla \rho_1 + \vec{v_1} \cdot \nabla \rho_0 = -\rho_1 \nabla \cdot \vec{v_0} - \rho_0 \nabla \cdot \vec{v_1}

\frac{\partial \vec{v_1}}{\partial t} + (\vec{v_0} \cdot \nabla) \vec{v_1} + (\vec{v_1} \cdot \nabla) \vec{v_0} = \frac{\rho_1}{\rho_0^2} \nabla P_0 - \frac{1}{\rho_0} \nabla P_1 -\nabla \phi_1

\nabla^2 \phi_1 = 4 \pi G \rho_1

Let’s restrict our attention to an isothermal gas, so the equation of state is P = \rho c_s^2, where c_s is the isothermal sound speed. Then, the momentum equation becomes

\frac{\partial \vec{v_1}}{\partial t} + (\vec{v_0} \cdot \nabla) \vec{v_1} + (\vec{v_1} \cdot \nabla)\vec{v_0} = -c_s^2 \nabla(\frac{\rho_1}{\rho_0}) - \nabla \phi_1

Jeans took these equations and added:

      • Uniform density to start with (\nabla \rho_0 = 0)
      • Stationary gas (\vec{v_0} = 0)
      • Gradient-free equilibrium potential (\nabla \phi_0 = 0)

Then, after taking the divergence of the momentum equation, the perturbation obeys

\frac{\partial^2 \rho_1}{\partial t^2} = c_s^2 \nabla^2 \rho_1 + (4 \pi G \rho_0) \rho_1

Now consider plane wave perturbations

\rho_1 \propto \exp[i (\vec{k} \cdot \vec{r} - \omega t)]

\omega^2 = k^2 c_s^2 - 4 \pi G \rho_0

Define k_J^2 = 4 \pi G \rho_0 / c_s^2, so

\omega^2 = (k^2 - k_J^2) c_s^2

\omega is real if and only if k \geq k_J. Otherwise, \omega is imaginary, and there is exponential growth of the instability. This then leads to a Jeans Length:

\lambda_J = 2 \pi / k_J = \bigg(\frac{\pi c_s^2}{G \rho_0} \bigg)^{1/2}

Gas is “Jeans Unstable” when \lambda > \lambda_J. Exponentially growing perturbations will cause the gas to fragment into parcels of size \lambda \sim \lambda_J.

Converting the Jeans length into a mass (assuming a sphere of diameter \lambda_J) yields

M_J = 0.32 M_\odot \big(\frac{T}{10K}\big)^{3/2} \big(\frac{m_H}{\mu}\big)^{3/2} \big(\frac{10^6 cm^{-3}}{n_H}\big)^{1/2}

This defines a “preferred mass” for substructures within a cloud.

Let’s plug in values for a dense core: T=10~K,~\mu = 2.33~{\rm amu},~n_H = 2 \times 10^5~{\rm cm}^{-3}. This yields M_J = 0.2~M_\odot. If we instead plug in numbers appropriate for the mean conditions in a GMC, T=50~{\rm K},~\mu=2.33~{\rm amu},~n_{\rm H}=200~{\rm cm}^{-3}, we get M_J=70~M_\odot.
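Both estimates come straight from the scaling formula above:

```python
def jeans_mass(T, mu_amu, n_H):
    # Scaling form quoted above, in solar masses; m_H/mu = 1/mu when
    # mu is expressed in amu.
    return 0.32 * (T / 10.0)**1.5 * (1.0 / mu_amu)**1.5 * (1e6 / n_H)**0.5

print(jeans_mass(10, 2.33, 2e5))   # dense core: ~0.2 M_sun
print(jeans_mass(50, 2.33, 200))   # mean GMC conditions: ~70 M_sun
```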

Note that, once gravitational collapse and heating set in, our isothermal sphere assumptions are no longer valid.

Collapse Timescale


For large scales, the growth time for the Jeans instability is

\tau_J \sim \frac{1}{k_J c_s} \sim \frac{1}{\sqrt{4 \pi G \rho_0}} = 2.3 \times 10^4 yr \big(\frac{10^6 cm^{-3}}{n_H} \big)^{1/2}

For n_H = 1000, this is about 0.7 Myr. Compared to a “free fall time” (collapse timescale for a pressure-less gas),

\tau_{ff} = \big( \frac{3 \pi} {32 G \rho_0} \big)^{1/2} = 4.4 \times 10^4 yr \big( \frac{10^6 cm^{-3}}{n_H} \big)^{1/2}

For n_H=1000 this is 1.4 Myr — slightly longer than the growth time.
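Both timescales follow directly from \rho_0. The sketch below assumes \rho = 1.4\, m_H n_H (counting hydrogen nuclei, with a factor 1.4 for helium); that assumption is what reproduces the 2.3\times10^4 yr normalization quoted above.

```python
import math

G = 6.674e-8; m_H = 1.673e-24   # cgs
yr = 3.156e7

def timescales(n_H):
    rho = 1.4 * m_H * n_H        # mass density, with helium correction
    tau_J = 1.0 / math.sqrt(4 * math.pi * G * rho)    # Jeans growth time
    tau_ff = math.sqrt(3 * math.pi / (32 * G * rho))  # free-fall time
    return tau_J, tau_ff

tau_J, tau_ff = timescales(1e3)
print(tau_J / yr / 1e6, tau_ff / yr / 1e6, tau_ff / tau_J)
```

This returns ~0.7 Myr and ~1.4 Myr at n_H = 10^3, with the density-independent ratio \tau_{ff}/\tau_J = \pi\sqrt{3/8} \approx 1.92.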

The Jeans Swindle


There is a sinister flaw in Jeans’s analysis. Assuming \nabla \phi_0 = 0 implies that \nabla^2 \phi_0 = 0. The only way to satisfy this everywhere is for \rho_0 = 0. However, more rigorous analysis verifies that Jeans’s approach still yields approximately correct results.

More Realistic Treatment


The Bonnor–Ebert mass is the largest mass an isothermal gas sphere can have in a pressurized medium while remaining in hydrostatic equilibrium.

Numerical Star Formation Simulations


(Notes from Guest Lecture by Stella Offner)

We start by writing down the conservation laws for mass, momentum and energy, each in the form of “rate of change of conserved quantity + flux = source density”:

\frac {\partial \varrho}{\partial t} + \nabla (\varrho v) = 0

\frac {\partial(\varrho v)}{\partial t} + \nabla (\varrho v^2 + p) = - \varrho \nabla \phi

\frac {\partial(\varrho E)}{\partial t} + \nabla \cdot \left[ (\varrho E + p) v \right] = - \varrho v \cdot \nabla \phi

Also, the Poisson equation for gravity is

\nabla^2 \phi = 4 \pi G \varrho

The initial conditions are determined by T, \varrho(\mathbf r), \mu and bulk \mathbf v(\mathbf r), which further determine the total mass, total angular momentum, turbulence and other global properties.

Grid-based codes

One type of star formation simulation is grid-based. Many of these feature adaptive mesh refinement, increasing spatial resolution in areas where parameters vary on smaller scales. Examples of such codes are Orion, Ramses, Athena, Zeus and Enzo.

This algorithm stores \varrho and \mathbf v values at nodes indexed by j. The values at time step n+1 are calculated from those at time step n by discretized versions of the conservation equations. For instance, a discretized mass equation can be written as

\frac {\varrho_j^{n+1}-\varrho_j^n}{\Delta t} + \frac{\varrho_{j+1}^n v_{j+1}^n - \varrho_{j-1}^n v_{j-1}^n} {2\Delta x} = 0
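A single forward-Euler update of this discretization, on a periodic 1D grid, shows the scheme's exact mass conservation. This is a toy sketch only: production codes use more stable upwind or Riemann-solver fluxes, since this centered scheme goes unstable over many steps.

```python
import math

# One forward-Euler step of the discretized continuity equation above,
# on a periodic 1D grid with a sinusoidal density perturbation.
N, dx, dt = 64, 1.0, 0.1
rho = [1.0 + 0.1 * math.sin(2 * math.pi * j / N) for j in range(N)]
v = [0.5] * N                                 # uniform velocity field

flux = [rho[j] * v[j] for j in range(N)]
rho_new = [
    rho[j] - dt * (flux[(j + 1) % N] - flux[(j - 1) % N]) / (2 * dx)
    for j in range(N)
]
# The centered flux differences telescope to zero on a periodic grid,
# so total mass is conserved to machine precision.
print(sum(rho), sum(rho_new))
```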

First, a homogeneous grid is created, with resolution satisfying the Truelove condition, also known as the Jeans condition:

\Delta x \leqslant J \lambda_\mathrm J

where the empirical value for the Jeans number J is \frac14. Then the simulation is carried out, refining cells by adding nodes where deemed necessary based on the spatial variation of \varrho or \mathbf v. The timestep is global, but it can also be adaptively changed, as long as it satisfies the Courant condition

\Delta t \leqslant C \cdot \min \left( \frac {\Delta x}{v+c_\mathrm s} \right)

ensuring that gas particles do not move more than half a cell in one time step.

Particle-Based Codes

The other family of codes is particle-based, also referred to as smoothed particle hydrodynamics (SPH) codes. Here individual pointlike particles are traced. Example codes are Gadget, Gasoline and Hydra.

To solve the equations, density has to be smoothed with a smoothing length h that has to be larger than the characteristic distance between the particles. Usually we want approximately twenty particles within the smoothing radius. The formula for smoothing is

\langle \varrho (\mathbf r) \rangle = \sum_\mathrm{particles} m \omega (\mathbf r-\mathbf r',h)

where one example of the smoothing kernel is the cubic spline

\omega (\mathbf r- \mathbf r', h) = \frac1{\pi h^3} \cdot

      • 1-\frac32u^2+\frac34u^3 ~\textrm{if } 0 \leqslant u \leqslant 1
      • \frac14(2-u)^3 ~\textrm{if } 1 < u \leqslant 2
      • 0 ~\textrm{if } u > 2

where u = \frac{|\mathbf r-\mathbf r'|}h.
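A standard concrete choice for such a piecewise kernel is the M4 cubic spline; the sketch below implements it and checks its normalization, \int \omega \, \mathrm dV = 1 (the 1/\pi h^3 prefactor is the standard 3D normalization).

```python
import math

# M4 cubic spline smoothing kernel in 3D, with u = r / h.
def w(r, h):
    u = r / h
    if u <= 1:
        return (1 - 1.5 * u**2 + 0.75 * u**3) / (math.pi * h**3)
    elif u <= 2:
        return 0.25 * (2 - u)**3 / (math.pi * h**3)
    return 0.0

# Check normalization: integral of w * 4 pi r^2 dr from 0 to 2h
h, n = 1.0, 100000
dr = 2 * h / n
total = sum(w((i + 0.5) * dr, h) * 4 * math.pi * ((i + 0.5) * dr)**2 * dr
            for i in range(n))
print(total)   # ~1
```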

The Lagrangian of the system is

L = \sum_i \frac12 m_i v_i^2 - \sum_i m_i \phi(\mathbf r_i).

The Euler–Lagrange equations governing the motion of the particles are

\frac {\partial L}{\partial \mathbf r_i} - \frac {\mathrm d}{\mathrm dt} \frac {\partial L}{\partial \dot {\mathbf r}_i} = 0.

It turns out that solving these equations leads to

\frac {\partial v_i}{\partial t} = -\sum_j m_j \left( \frac {p_i}{\varrho_i^2} + \frac {p_j}{\varrho_j^2} + Q_{ij} \right) \nabla_i \omega(\mathbf r_i-\mathbf r_j, h),

where \mathbf Q_{ij} is a viscosity term added artificially for more accurate modeling of transient behavior. The nature of the simulation requires that particles are stored in a data structure that facilitates easy search based on their position, for example a tree. As particles move, this structure needs to be updated.

The Jeans condition for this kind of simulation is that

M_\mathrm {min} \leqslant M_\mathrm J

where M_\mathrm {min} is the minimum resolved fragmentation mass, equal to the typical total mass within smoothing length from any particle.

Comparison

The advantages of grid-based codes are that they are

      • accurate for simulating shocks,
      • more stable at instabilities.

Their disadvantages are that

      • grid orientation may cause directional artifacts,
      • grid imprinting might appear at the edges.

Also, they are more likely to break if there’s a bug in the code. On the other hand, particle-based codes have the advantages of being

      • inherently Lagrangian,
      • inherently Galilean invariant, that is, describing convection and flows well,
      • accurate with gravity,
      • good with general geometries.

At the same time, they suffer from

      • resolution problems in low density regions,
      • the need for an artificial viscosity term,
      • the need for post processing to extract information on density and momentum distribution,
      • statistical noise.

See slides for the effect of too large J for a grid-based simulation, too small h for a particle-based simulation, for examples involving shock fronts, Kelvin–Helmholtz instability and Rayleigh–Taylor instability, and star formation. Note that the simulated IMF matches theory and observations within an order of magnitude, but this is very sensitive to the initial temperature of the molecular cloud.

Some thoughts on Jeans scales


Recall the equation for the Jeans Mass:

M_J = \frac1{8} \big( \frac{\pi k T}{G \mu} \big)^{3/2} \frac1{\rho^{1/2}} \propto \frac{T^{3/2}}{\rho^{1/2}}

In common units, this is

M_J = 0.32 M_\odot \big(\frac{T}{10K} \big)^{3/2} \big(\frac{m_H}{\mu}\big)^{3/2} \big( \frac{10^6 {\rm cm^{-3}}}{n_H} \big)^{1/2}

      • 10K, ~2.33 {\rm amu}, ~ n_H = 2 \times 10^5 {\rm cm^{-3}} \to 0.2 M_\odot
      • 50K, ~2.33 {\rm amu}, ~ n_H = 200 {\rm cm^{-3}} \to 70 M_\odot

This raises a question — what Jeans mass do we choose if a gas cloud is hierarchical, with many different densities and temperatures? This motivates the idea of turbulent fragmentation, wherein a collapsing cloud fragments at several scales as it collapses. We will return to this later.

Also recall that the Jeans growth timescale for structures much larger than the Jeans size is

\tau_J = \frac1{k_J c_s} = \frac1{\sqrt{4 \pi G \rho_0}} = \frac{2.3 \times 10^4 {\rm yr}}{\sqrt{n_H / 10^6 {\rm cm^{-3}}}}

for n_H = 1000, ~ \tau_J \sim 0.7 {\rm Myr}

Compare this to the pressureless freefall time:

\tau_{ff} = \big(\frac{3 \pi}{32 G \rho_0} \big)^{1/2} = \frac{4.4 \times 10^4 {\rm yr}}{\sqrt{n_H / 10^6 {\rm cm^{-3}}}}

From which we see

\frac{\tau_{ff}}{\tau_J} = \pi \sqrt{3/8} = 1.92

The Jeans growth time is about half the free-fall time.

Finally, compare this to the crossing time in a region with n = 1000 {\rm cm^{-3}}. The sound speed is c_s = \sqrt{kT / \mu}, where \mu is the mean particle mass; this is 0.27 km/s for molecular hydrogen, and 0.08 km/s for ^{13}CO.

However, note that the observed linewidth in 13CO for such regions is of the order 1 km/s. Clearly, non-thermal energies dominate the motion of gas in the ISM.

Recall that the Jeans length is \lambda_J = \big( \frac{\pi c_s^2}{G \rho} \big)^{1/2}

Plugging in the thermal H_2 sound speed yields \lambda_J \approx 1 pc. The crossing time for CO gas moving at 1 km/s is thus ~1 Myr in this region.
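These numbers can be reproduced directly. A sketch in cgs, taking \rho = \mu m_H n with the toy values T = 20 K, \mu = 2.33 amu, n = 10^3 cm^{-3}:

```python
import math

G = 6.674e-8; k_B = 1.381e-16; m_H = 1.673e-24   # cgs constants
pc = 3.086e18; yr = 3.156e7

T, mu, n = 20.0, 2.33, 1e3               # K, amu, cm^-3 (toy cloud values)
c_s = math.sqrt(k_B * T / (mu * m_H))    # isothermal sound speed
rho = mu * m_H * n
lambda_J = math.sqrt(math.pi * c_s**2 / (G * rho))   # Jeans length

v_turb = 1e5                             # observed ~1 km/s linewidth
t_cross = lambda_J / v_turb              # turbulent crossing time
print(c_s / 1e5, lambda_J / pc, t_cross / yr / 1e6)
```

This gives c_s \approx 0.27 km/s, \lambda_J \approx 1 pc, and a crossing time just under 1 Myr, consistent with the estimates above.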

So, in a cloud with n \sim 10^3 and T \sim 20K, the Jeans growth time, free fall time, and crossing time are all comparable.

There is a problem with our derivation, though! We assumed the relevant sound speed that sets the Jeans length is the thermal hydrogen sound speed. However, these clouds are not supported thermally, as the non-thermal, turbulent velocity dispersion dominates the observed linewidths. This suggests we use an equivalent Jeans length where we replace the sound speed by the turbulent linewidth.

This is often done in the literature, but it’s sketchy. Using turbulent linewidths in a virial analysis assumes that turbulence acts like thermal motion — i.e., it provides an isotropic pressure. This may not be the case, as turbulent motions can be partially ordered in a way that provides little support against gravity.

The question of how a cloud behaves when it is not thermally supported leads to a discussion of Larson’s Laws (JC 2013).

Larson’s Legacy

see these slides

Stellar Winds in Star Forming Regions


Guest Lecture by Hector Arce. See this post for notes. See also this movie

Introduction to shocks


Shocks occur when density perturbations are driven through a medium at a speed greater than the sound speed in that medium. When that happens, sound waves cannot propagate and dissipate these perturbations. Overdense material then piles up at a shock front.

Collisions in the shock front will alter the density, temperature, and pressure of the gas. The relevant size scale over which this happens (i.e., the size of the shock front) is the size scale over which particles communicate with each other via collisions:

l = (\sigma n)^{-1}. If n=10^2 {\rm cm^{-3}}, ~\sigma \sim (10^{-9} {\rm cm})^2 \to l = 0.01 {\rm pc}

A shock

An excellent reference offering a three-page summary of important shock basics (including jump conditions) is available in this copy of a handout from Jonathan Williams' ISM class at the IfA in Hawaii.

Shock De-Jargonification

See this handout and this list of examples.

Shock(ing facts about) viscosity


A perfect shock is a discontinuity, but real shocks are smoothed out by viscosity. Viscosity is the resistance of a fluid to deformation (it is the fluid analog of friction).

Inviscid or ideal fluids have zero viscosity. Colder gases are less viscous, because collisions are less frequent.

Simulations include numerical viscosity by accident: it results from computational approximations/discretization that violate Euler's equations and lead to a momentum flow that acts like true viscosity. However, the behavior of numerical viscosity can be unrealistic and of inappropriate magnitude.

True shocks are a few mean free paths thick, which is typically less than the resolution of simulations. Thus, simulations also add artificial viscosity to smooth out and resolve shock fronts.

Rankine Hugoniot Jump Conditions


This section is based on the 2013 Wikipedia entry for “Rankine-Hugoniot Conditions,” archived as a PDF here.

The jump conditions describe how the properties of a gas change on either side of a shock front. They are obtained by integrating the Euler equations over the shock front. As a reminder, the mass, momentum, and energy equations in one dimension are

      • \frac{d \rho}{d t} = - \frac{d}{dx} (\rho u)
      • \frac{d \rho u} {d t} = - \frac{d}{dx} (\rho u^2 + P)
      • \frac{d \rho E}{d t} = - \frac{d}{dx} \big[ \rho u (e + 1/2 u^2 + P / \rho) \big]

Where E = e + 1/2 u^2 is the fluid specific energy, and e is the internal (non-kinetic) specific energy.

We supplement these equations with an equation of state. For adiabatic shocks (a misleading term, but it describes shocks where radiative losses are small), the equation of state is P = (\gamma - 1) \rho e

Generally, a jump condition is a condition that holds at a near-discontinuity. It ignores the details of how a quantity changes across the discontinuity, and instead relates the quantity on either side.

The general conservation law for some quantity w is

\frac{d w}{d t} + \frac{d}{dx} f(w) = 0

A shock is a jump in w at some point x = x_s(t). We call x_1 the point just upstream of the shock, and x_2 the point just downstream.

A schematic defining the notation for the shock jump conditions

Integrating the conservation equation over the shock,

\frac{d}{dt} \big ( \int_{x_1}^{x_s(t)} w dx + \int_{x_s(t)}^{x_2} w dx \big ) = - \int_{x_1}^{x_2} \frac{d}{dx} f(w) dx

w_1 \frac{dx_s}{dt} - w_2 \frac{dx_s}{dt} + \int_{x_1}^{x_s(t)} w_t dx + \int_{x_s(t)}^{x_2} w_t dx = -f(w) |_{x_1}^{x_2}

This holds because \frac{dx_1}{dt} = 0 and \frac{dx_2}{dt} = 0 in the frame moving with the shock.

Now let x_1 \to x_s(t) and x_2 \to x_s(t), so that

\int_{x_1}^{x_s(t)} w_t dx \to 0, ~ \int_{x_s(t)}^{x_2} w_t dx \to 0

and in the limit S(w_1 - w_2) = f(w_1) - f(w_2)

Where S = \frac{dx_s(t)}{dt} is the characteristic shock speed

S = \frac{f(w_1) - f(w_2)}{w_1 - w_2}

The upstream and downstream characteristic speeds bracket the shock speed (the Lax entropy condition):

\frac{df}{dw}(w_1) \geq S \geq \frac{df}{dw}(w_2)

Moving away from the generic w formalism and to the Euler equations, we find

      • S(\rho_2 - \rho_1) = \rho_2 u_2 - \rho_1 u_1
      • S(\rho_2 u_2 - \rho_1 u_1) = (\rho_2 u_2^2 + P_2) - (\rho_1 u_1^2 + P_1)
      • S(\rho_2 E_2 - \rho_1 E_1) = \big[ \rho_2 u_2 (e_2 + 1/2 u_2^2 + P_2 / \rho_2) \big] - \big[ \rho_1 u_1 (e_1 + 1/2 u_1^2 + P_1 / \rho_1) \big]

These are the Rankine Hugoniot conditions for the Euler Equations.

In going to a frame co-moving with the shock (i.e. v = S - u), one can show that

S = u_1 + c_1 \sqrt{1 + \frac{\gamma + 1}{2 \gamma} \big( \frac{P_2}{P_1} - 1 \big)}

where c_1 is the upstream sound speed, c_1 = \sqrt{\gamma P_1 / \rho_1}

For a stationary shock, S = 0 and the 1D Euler equations become

\rho_1 u_1 = \rho_2 u_2

\rho_1 u_1^2 + P_1 = \rho_2 u_2^2 + P_2

\rho_1 u_1( e_1 + 1/2 u_1^2 + P_1 / \rho_1) = \rho_2 u_2 (e_2 + 1/2 u_2^2 + P_2 / \rho_2)

After more substitution, and defining the specific enthalpy h = P / \rho + e so that 2(h_2 - h_1) = (P_2 - P_1) \big( \frac1{\rho_1} + \frac1{\rho_2} \big),

      • \frac{\rho_2}{\rho_1} = \frac{P_2/P_1 (\gamma + 1) + (\gamma - 1)}{(\gamma + 1) + P_2 / P_1 (\gamma - 1)} = \frac{u_1}{u_2}
      • \frac{P_2}{P_1} = \frac{\rho_2/\rho_1 (\gamma + 1) - (\gamma - 1)}{(\gamma + 1) - \rho_2/\rho_1 (\gamma - 1)}
      • \frac{\rho_2}{\rho_1} = \frac{(\gamma + 1) M_1^2}{(\gamma - 1) M_1^2 + 2}, where M_1 = u_1 / c_1 is the upstream Mach number

For strong (adiabatic) shocks and a monatomic gas (\gamma = 5/3),

\frac{\rho_2}{\rho_1} \to 4
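The density jump as a function of the pressure jump is easy to evaluate numerically; this sketch checks that the strong-shock limit (\gamma + 1)/(\gamma - 1) = 4 falls out of the first jump relation above:

```python
# Rankine-Hugoniot density jump rho2/rho1 as a function of the pressure
# jump x = P2/P1, for an adiabatic shock in an ideal gas.
def density_jump(x, gamma=5.0/3.0):
    """rho2/rho1 (= u1/u2) across a shock with pressure ratio x = P2/P1."""
    return ((gamma + 1) * x + (gamma - 1)) / ((gamma - 1) * x + (gamma + 1))

for x in (1.0, 10.0, 1e4):
    print(f"P2/P1 = {x:8.0f}  ->  rho2/rho1 = {density_jump(x):.3f}")
# x = 1 gives no jump (ratio 1); x -> infinity approaches
# (gamma + 1)/(gamma - 1) = 4 for a monatomic gas.
```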

Introduction to Instabilities


See this handout

Rayleigh-Taylor Instability


The Rayleigh-Taylor instability occurs when a denser fluid rests on top of a less dense fluid in the presence of a gravitational potential \phi = g z (though note that accelerations mimic gravity, so many driven jets, etc. exhibit the RT instability without gravity).

Notation for the RT instability

To derive the instability, we will solve the momentum and continuity equations while satisfying boundary conditions at the a/b interface

The momentum equation is

\rho \frac{DV}{Dt} = \rho [ \frac{dv}{dt} + v \cdot \nabla v] = - \nabla P - \frac1{8 \pi} \nabla B^2 + \frac1{4 \pi} B \cdot \nabla B - \rho \nabla \phi

The continuity equation is

\frac{D \rho}{D t} = \frac{d \rho}{d t} + v \cdot \nabla \rho = -\rho \nabla \cdot v

To satisfy the continuity equation, we introduce a velocity potential

v = - \nabla \psi

The density is constant (in time) above and below the interface, so the continuity equation implies \nabla^2 \psi = 0

A solution satisfying \psi \to 0 at large |z| is

      • \psi = \psi_a = k_a e^{i \omega t - kz} \sin(kx) above
      • \psi = \psi_b = k_b e^{i \omega t + kz} \sin(kx) below

Where k = 2 \pi / \lambda is the wave number

Take the real part of \psi as the physical answer. Let B = 0, and ignore v \cdot \nabla v in the momentum equation. By integrating the momentum equation over space, we find

\rho \frac{d\psi}{d t} = P + \rho g z

which holds separately above and below the interface

The boundary conditions are that v_z,~ P need to be continuous across the interface

We can find the z of the interface, z_i, by integrating v_z over time, so

      • z_i = - \frac1{i \omega} \frac{d \psi}{dz}
      • z_i = \frac{k}{i \omega} \psi above
      • z_i = - \frac{k}{i \omega} \psi below

To first order in \psi

If v_z is continuous across the boundary then z_i must be the same above and below so

k \psi_a = - k \psi_b at z = z_i

This implies that k_a = -k_b

Continuity of pressure across the interface gives

P = i \omega \rho \psi - \rho g z

Using the equation for z_i and noting P_a = P_b at z=0 gives

\omega^2 = g k \frac{\rho_b - \rho_a}{\rho_b + \rho_a}

If \rho_b < \rho_a (the denser fluid on top), then \omega is imaginary and the perturbation grows exponentially with time.
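The dispersion relation \omega^2 = g k (\rho_b - \rho_a)/(\rho_b + \rho_a) is simple to evaluate; here is a sketch with illustrative, assumed values for g, k, and the two densities:

```python
# Rayleigh-Taylor dispersion relation, with fluid "a" above and "b" below.
# When rho_a > rho_b (heavy fluid on top), omega^2 < 0 and perturbations
# grow as exp(|omega| t).
import math

def rt_omega_squared(g, k, rho_above, rho_below):
    """omega^2 for an interface perturbation of wavenumber k."""
    return g * k * (rho_below - rho_above) / (rho_below + rho_above)

g = 980.0                    # gravitational acceleration [cm/s^2] (assumed)
k = 2 * math.pi / 10.0       # wavenumber for a 10 cm wavelength [cm^-1]
w2 = rt_omega_squared(g, k, rho_above=2.0, rho_below=1.0)  # heavy on top

if w2 < 0:
    growth_rate = math.sqrt(-w2)
    print(f"unstable: growth rate = {growth_rate:.1f} s^-1")
```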

Ionization fronts, HII regions, Stellar Winds, and Disk Structure


Although these four phenomena exist at different scales, their evolutions are coupled. Let's explore this connection.

Ionization fronts are caused by stars with large fluxes of ionizing photons. We see them around HII regions, and we observe emission from several ionized species, especially as they recombine and cool.

We would like to know how ionization fronts form, expand, and emit. We would also like to know their lifetime, and how different ionization fronts map to different stars in a crowded environment.

The expansion of HII regions is influenced by the time-dependent interaction of both the ionization front and the stellar wind with the surrounding medium. Draine 37.2 calculates the expansion of an HII region into a uniform medium.

Stellar Winds

We talked about bi-polar flows, but these are a special type of stellar wind. Other winds are less collimated. Different kinds of winds are associated with stars at different evolutionary states:

Pre main sequence stars: T-tauri winds

Main Sequence winds (like the solar wind)

Post-Main Sequence winds / planetary nebulae

Disk Structure

Disks around stars are accretion disks. The structure of a gravitationally bound disk is governed by Keplerian rotation. However, accretion disks have a temperature structure giving rise to thermal forces (plus magnetic/turbulent forces) that can change their internal structure and dynamics

Ionization Fronts


See chapter 20 of Shu's The Physics of Astrophysics

Note that the mean free path of photons in an ionized gas is much larger than the mean free path in neutral gas:

      • Neutral Gas: the cross section for interaction is the size of an atom, \sigma \sim 10^{-17} {\rm cm^{2}}.
      • Ionized Gas: interaction is via Thomson scattering off free electrons, \sigma \sim 10^{-24} {\rm cm^{2}}

We already discussed the Stromgren radius, which sets the size of an idealized HII region by balancing the stellar ionizing photon flux with the rate at which the ionized volume recombines and thus quenches this flux:

R_s = (\frac{3}{4 \pi} \frac{N_\star}{n_0^2 \alpha})^{1/3}

However, this ionized region will have a pressure much higher than the cool, neutral exterior. Thus, HII regions expand over time, and both pressure and ionization fronts are driven into the neutral medium.
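Plugging representative numbers into the Strömgren formula gives a feel for the scales involved. The values below (N_\star \sim 10^{49} ionizing photons/s for an O star, n_0 = 100 cm^-3, and a case-B recombination coefficient \alpha \sim 2.6 \times 10^{-13} cm^3/s at T \sim 10^4 K) are illustrative assumptions, not numbers from the text:

```python
# Back-of-the-envelope Stromgren radius R_s = (3 N / (4 pi n0^2 alpha))^(1/3).
import math

pc = 3.086e18      # parsec [cm]
N_star = 1e49      # ionizing photon rate [s^-1] (typical O star, assumed)
n0 = 1e2           # ambient H density [cm^-3] (assumed)
alpha = 2.6e-13    # case-B recombination coefficient [cm^3 s^-1] at ~1e4 K

R_s = (3.0 * N_star / (4.0 * math.pi * n0**2 * alpha)) ** (1.0 / 3.0)
print(f"R_s = {R_s:.2e} cm = {R_s/pc:.1f} pc")   # a few pc
```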

Expansion of HII regions


Stage 1


      • Stage begins when a star turns on in a neutral medium
      • Rapid expansion of ionization front into static HI cloud
      • Very little fluid motion (photons travel much faster than shock waves!)
      • This stage ends when the radius approaches the Stromgren radius, and the ionizing photons are quenched by recombinations

Stage 2

      • The hot ionized HII region is over-pressured compared to the ambient exterior. This drives a radiative shock
      • Sweeps up HI gas into a shell. The ionization front is interior to this shell
      • This stage ends either when the star explodes as a supernova, or when the radius expands to R_{\rm final}, where recombinations balance ionizing photons and there is pressure balance between the interior and exterior
      • Realistically, inhomogeneities in the surrounding medium mean that the HII region will evolve in more complicated ways than this

Stellar Winds


See Lamers and Cassinelli 1999

Wind driving mechanisms:

For cool stars (T \sim 5000 K), winds are driven by the pressure expansion of the hot corona. This is like our own Sun.

For hot stars (T = 10,000 – 100,000 K at the surface), the lack of strong convection inhibits the creation of a hot corona. The temperature at these stars' surfaces is not high enough for particles to escape the gravitational potential. Instead, these stars drive winds directly via radiation pressure. These winds travel at speeds of order the escape speed, and have mass loss rates of up to \sim 10^{-4} M_\odot {\rm yr^{-1}}

The wind mechanism for pre main sequence stars is debated, but probably involves some interaction between an accretion disk and magnetic field

Disk Structure


The initial evidence for accretion disks around young stars was the SED, which shows an excess of IR emission due to accretion-powered luminosity.

The pioneering paper in this field is Adams, Lada and Shu 1987. See also the review paper by Lada 1999

Young stars were initially divided into three evolutionary classes via the shapes of their IR SEDs:

Class 0: SED looks like a single, cold blackbody. Emission is due entirely to a cold core, with no central source

Class 0 YSO

Class I: A large infrared excess, with a rising SED in the infrared. Strong IR excess (compared to the blackbody of a hot, compact central source) is due to a massive disk and outflow

Class I YSO

Class II: A flat IR SED, as the flux of the central source rises, and the disk dissipates

Class III: A falling IR SED. The accretion disk is mostly depleted, and adds only a small infrared excess on top of the SED of the central object.

Class III YSO

Disks in the context of Star and Planet Formation


For excellent references on star and planet formation, see “Protostars and Planets V”, as well as Stahler and Palla’s “The Formation of Stars”

What is a protostar?

I like the Wikipedia entry:

“A contracting mass of gas that represents an early stage in the formation of a star, before nucleosynthesis has begun.”

Since fusion is a negligible energy source in a protostar, its luminosity comes from gravitational contraction. From the virial theorem, half of the gravitational potential energy gain from contraction is converted into kinetic energy, while the other half is radiated away. In other words, for a homogeneous sphere, E_{\rm rad} = \frac{3}{10} \frac{GM^2}{R}.

For a 1 solar mass star contracted to 500 R_\odot,~ E = 2 \times 10^{45} erg.
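The quoted energy follows directly from the formula above:

```python
# Half the gravitational energy of a homogeneous sphere,
# E = (3/10) G M^2 / R, for 1 M_sun contracted to 500 R_sun.
G = 6.674e-8        # gravitational constant [cm^3 g^-1 s^-2]
M_sun = 1.989e33    # solar mass [g]
R_sun = 6.957e10    # solar radius [cm]

E_rad = 0.3 * G * M_sun**2 / (500 * R_sun)
print(f"E_rad = {E_rad:.1e} erg")   # ~2e45 erg, as quoted
```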

What is a planet?

Wikipedia says: “A celestial body moving in an elliptic orbit around a star.” This isn’t quite precise enough: it could be an asteroid, KBO, binary star, brown dwarf, etc.

How do planets form?


See this post

Key and Open Questions Regarding the Intergalactic Medium


See also this post

The Kennicutt-Schmidt relation empirically links the gas surface density and star formation rate of galaxies. What is the physical explanation for this? In particular, is there a more prescriptive form or interpretation of the KS relation that is more astrophysical (and perhaps more accurate at predicting star formation rates)?

Not all galaxies are created equal(ly)

What is the origin of spiral galaxies? Is the spiral density wave theory of Lin and Shu correct, or might the GMC perturbation theory of D’Onghia and Hernquist be more apropos? Is the formation of grand design spirals different from that of flocculent spirals?

Where do ellipticals come from? They are most likely the remnants of mergers:

      • They have no ordered rotation (suggests random mixing)
      • Old stars, little or no dust/gas
      • Often at the center of clusters
      • Always have central Black Holes

“Young” merger galaxies (i.e. before a galaxy has undergone several mergers and formed an elliptical) have extreme star formation rates. SFR can range from 100s to 1000s of stars per year (by comparison, the SFR in the Milky Way is \sim 1 M_\odot/yr).

Significant Variations in the Gas/Dust Ratio and Metallicity

This is true both within galaxies and among galaxies.

How does this affect the K-S relationship?

What are the Kennicutt-Schmidt relations?

See also this post

The “Schmidt Law” is due to Maarten Schmidt (1959): “The mean density of a galaxy may determine, as a first approximation, its present gas content and thus its evolutionary stage.”

\dot{M} \propto n_{\rm gas}^2

Here, n is the volume density.

The “Kennicutt Law” is similar to the Schmidt law, but uses the surface density:

\Sigma_{\rm SFR} = a \Sigma_{\rm gas}^q

Where, typically, q \approx 1.4 (the Schmidt Law above would give q = 1.5). Notably, q is not 1, which would imply a constant efficiency.
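A toy evaluation of this scaling makes the q \neq 1 point concrete. The normalization a below is an assumed, illustrative value (roughly the Kennicutt 1998 fit), not something specified in these notes:

```python
# Kennicutt-Schmidt scaling Sigma_SFR = a * Sigma_gas^q with q = 1.4.
# Units (assumed): Sigma_gas in M_sun/pc^2, Sigma_SFR in M_sun/yr/kpc^2.
def sigma_sfr(sigma_gas, a=2.5e-4, q=1.4):
    """Star formation rate surface density for a given gas surface density."""
    return a * sigma_gas ** q

# Doubling the gas surface density raises the SFR by 2^1.4 ~ 2.6, not 2:
# the star formation efficiency itself increases because q != 1.
ratio = sigma_sfr(20.0) / sigma_sfr(10.0)
print(f"SFR ratio for 2x the gas: {ratio:.2f}")
```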

Some tracers of star formation rate:

      • UV flux (from unobscured massive young stars)
      • H-alpha flux (also from young stars)
      • FIR flux: reprocessed starlight (but that’s also dependent on gas surface density, so it’s sketchy to use in the KS law. That doesn’t mean people don’t do it!)

Some tracers of surface density:

      • HI
      • CO
      • HI+CO
      • HCN and other tracers that preferentially probe high surface densities. Gao and Solomon 2004 quote \log L_{\rm IR} = (1.0 \pm 0.05) \log L_{\rm HCN} + 2.9

A big issue in using the KS law is how “independent” the tracers of surface density and star formation rate are. Also, the KS relationship extends over many orders of magnitude. Because of this, it’s unclear whether KS says something important about the conversion of gas into stars, or rather something trivial like “bigger galaxies have more gas, and form more stars.”

The ISM at very high redshift


See this post

Special Topics


See this post

How do planets form?

In Uncategorized on April 12, 2011 at 1:15 am

This is NOT a question to which we yet “know” the answer. Instead, we know that the answer isn’t likely to be exactly the same for all planets (e.g. maybe gas giants form differently from rocky smaller planets?).  We also think we know about some of the processes that may be important.

Here’s a quick list of considerations (many of which are shown in this “game” video!):

1.  Disk formation. It is widely agreed that the starting point for planet formation is the disk that forms as material accretes from a molecular cloud core onto a disk around a forming star. (This illustration of “10-steps to star formation,” shows where disk formation fits into the larger star/disk/planet formation process.)  One very popular analytic theory of disks around young stars, and their associated outflows, is called the “X-wind” model.  The X-wind model is due to Frank Shu and colleagues, and it assigns a key role to the magnetic field, in slowing down the rotation of the disk and in generating bipolar outflows.  (This link illustrates the difference between “X-wind” and “D-wind” (or “disk-wind”) models.  In the D-wind, the outflow comes from the disk, rather than the X-point, where the magnetic field of the star connects to the disk.)

1a. Dust v. Gas. Circumstellar disks contain plenty of gas AND dust.  Some theories focus on the sticking of dust grains together into “planetesimals” as the first important step in planet formation, while others focus on instabilities that can cause the fragmentation of the whole disk into over-dense blobs that could be the seeds for future planets.  Also, the relative amounts and distribution of dust and gas (due to their differing opacities) will affect the internal (e.g. “dead zone”, cf. Gammie 1996) and external (e.g. flaring, e.g. van Boekel et al. 2005) structure of the disk.  These arrangements change over time (see “Time Evolution”, below).

from "Dynamics of Protoplanetary Disks," Phil Armitage, ARA&A, 2011.

2. Onset of planet formation. It is NOT clear when in the lifetime of a circumstellar disks planets begin to form.  It is fair to say that there are many competing theories! This video gives an overview of what happens, according to a “consensus” view…details, however, are not shown.  It’s also fair to say that it’s widely believed in 2011 that planet formation takes place roughly at the same time as the central star (or its disk) forms, rather than fully afterwards.

2a. Role of migration. Under many theories, it is far easier to form planets farther from the star, where material has a better chance of sticking together (see “Snow line”, below), and there’s more of it.  So, many theories (Wikipedia) rely upon forming planets at large distances and then letting them “migrate” inward (or outward) due to angular momentum exchange within the disk, as or after they form.  In some theories, many “early” planets crash into the star having been gravitationally drawn there as they migrate inward, and the solar systems we see are just what’s “leftover” when migration ends.

3. Turbulence in disks. In some theories, turbulence is harmful to planet formation, because it can increase the velocity dispersion amongst solid particles, potentially inhibiting growth to larger “planetesimals” through frequent, destructive collisions.  In other theories, turbulence is helpful, because it can create vortices which offer regions of relatively reduced velocity dispersion, therefore making it easier for particles to stick to each other.

Schematic Overview of Key Processes in Protoplanetary Disks, from Armitage 2011 ARA&A.

4. “Snow line.” Ice can be very sticky.  (Witness wet hands trying to pick up ice cubes w/o any “sticking” issues!)  It is widely believed that ice helps dust stick together.  So, in order for dust particles to stick (at least a little) when they collide (note that they also tend to break apart), a nice icy coating helps.  So, a key question for any disk is “how far from the star do you have to be before key molecules are solids, rather than liquids”?  That distance is called the “snow line,” and for our solar system it’s about 2.7 AU, illustrating that it is unlikely that the Earth (at 1 AU) formed where we find it now.   The solid particles that form beyond the snow line can grow to be either rocky planets, or the gravitational seeds for gas giants.

from “The Genesis of Planets” by Doug Lin–illustration by Don Dixon ©Scientific American/Nature Publishing Group 2008

5. Time Evolution.  The structure of circumstellar protoplanetary disks changes over time.  There may be a very long period of planet formation/migration, but eventually, the material will be used up, and will also potentially erode due to radiation from the central star.  The figure below, from Williams & Cieza’s 2011 Annual Reviews article, gives a nice breakdown of key phases, and also shows the nature of gas and solids clearly.

The Evolution of a Circumstellar Disk

Full caption:  The evolution of a typical disk. The gas distribution is shown in blue and the dust in brown. (a) Early in its evolution, the disk loses mass through accretion onto the star and FUV photoevaporation of the outer disk. (b) At the same time, grains grow into larger bodies that settle to the mid-plane of the disk. (c) As the disk mass and accretion rate decrease, EUV-induced photoevaporation becomes important, the outer disk is no longer able to resupply the inner disk with material, and the inner disk drains on a viscous timescale (∼105 yr). An inner hole is formed, accretion onto the star ceases, and the disk quickly dissipates from the inside out. (d) Once the remaining gas photoevaporates, the small grains are removed by radiation pressure and Poynting-Robertson drag. Only large grains, planetesimals, and/or planets are left. This debris disk is very low mass and is not always detectable.

5a. Debris disks.  As shown above, once all the planets a solar system will have formed and the gas is largely gone, there is still leftover particulate matter.  The disk of “leftovers” (e.g. asteroids and comets), which can grind themselves into smaller and smaller pieces through collisions, is known as a “debris disk.”  Here’s an artistic video “showing” the debris disk around the star b-Pic (based on observational data).  These debris disks are low-mass, but bright enough to be detected in the sub-mm.  Their appearance depends strongly on inclination, as shown in this figure compiled for the JCMT/SCUBA-2 “Debris Disk Legacy Survey”:

FIGURE 1: Debris disks seen with SCUBA, including (l-to-r) τ Ceti, ε Eridani, Vega (α Lyr), Fomalhaut (α PSa) and η Corvi. The disks are shown to the same physical scale i.e. as if all at one distance; actual distances are 3 to 18 pc. Sketches at the bottom demonstrate the disk orientations, and the star symbols are at the stellar positions. The spectral types and stellar ages are (l-to-r) G8 V / 10 Gyr, K2 V / 0.85 Gyr, A0 V / ~ 0.4 Gyr, A3 V / 0.3 Gyr and F2 V / ~ 1 Gyr. The images are at 850 μm except for η Corvi at 450 μm.

Suggested reading:

  • The Smithsonian Submillimeter Array has blazed a path (to soon be followed by ALMA) in observing Protoplanetary Disks, and the Protoplanetary Disks Research Group web page at the Harvard-Smithsonian Center for Astrophysics has links to several key publications.
  • Fantastic 2008 Scientific American article on the “Genesis of Planets” by Doug Lin…here’s an excerpt from its introduction: “The study of planet formation lies at the intersection of astrophysics, planetary science, statistical mechanics and nonlinear dynamics. Broadly speaking, planetary scientists have developed two leading theories. The sequential accretion scenario holds that tiny grains of dust clump together to create solid nuggets of rock, which either draw in huge amounts of gas, becoming gas giants such as Jupiter, or do not, becoming rocky planets such as Earth. The main drawback of this scenario is that it is a slow process and that gas may disperse before it can run to completion.
    The alternative, gravitational-instability scenario holds that gas giants take shape in an abrupt whoosh as the prenatal disk of gas and dust breaks up—a process that replicates, in miniature, the formation of stars. This hypothesis remains contentious because it assumes the existence of highly unstable conditions, which may not be attainable. Moreover, astronomers have found that the heaviest planets and the lightest stars are separated by a “desert”—a scarcity of intermediate bodies. The disjunction implies that planets are not simply little stars but have an entirely different origin.
    Although researchers have not settled this controversy, most consider the sequential-accretion scenario the most plausible of the two.”

Formation of planetesimals

In Uncategorized on April 11, 2011 at 9:09 pm

Introduction

Terrestrial planets are thought to be built up through the collisions of many smaller objects. Our story begins in the remains of the stellar accretion disk which, for a sun-like star, is 99% gaseous hydrogen and helium by mass.  Terrestrial planets (and it is thought the cores of giant planets) are created from the remaining 1% of mass, which consists of tiny solid grains.  Collisions of these micrometer dust grains eventually can create planets like Earth.

Read the rest of this entry »

The Magnetorotational Instability

In Uncategorized on April 11, 2011 at 12:53 am

Introduction

The MRI is an instability that arises from the action of the magnetic field in a differentially rotating system (i.e. a disk), and can lead to large scale mixing and turbulence very quickly (MRI grows on a dynamical timescale t_{MRI} \propto 1/\Omega where \Omega is the rotational frequency). The necessary conditions for the MRI to develop are the following:

  • There is a weak poloidal magnetic field (i.e. the field points in a direction normal to the disk)
  • The disk rotates differentially, where \frac{d\Omega}{dR} < 0

Given that magnetic fields are ubiquitous and that astrophysical disks (which rotate differentially) are commonplace, the MRI arises in a huge diversity of astrophysical settings (including X-ray binaries, the Galactic disk, and protoplanetary disks).

 

Read the rest of this entry »

ARTICLE: Self-Regulated Star Formation in Galaxies via Momentum Input from Massive Stars

In Journal Club, Journal Club 2011 on April 7, 2011 at 5:32 am

Read the Paper by P.F. Hopkins, E. Quataert, and N. Murray (2010)

Summary by Dylan Nelson and Josh Suresh

Abstract

Feedback from massive stars is believed to play a critical role in shaping the galaxy mass function, the structure of the interstellar medium (ISM) in galaxies, and the slow conversion of gas into stars over many dynamical times. This paper is the first in a series studying stellar feedback in galaxy formation. We present a new numerical method for implementing stellar feedback via the momentum imparted to the ISM by radiation pressure, supernovae, and stellar winds. In contrast to the majority of the results in the literature, we do not artificially suppress cooling or ‘turn off’ the hydrodynamics for a subset of the gas: the gas can cool to <100K and so the ISM inevitably becomes highly inhomogeneous. For reasonable feedback efficiencies galaxies reach an approximate steady state in which gas collapses due to gravity to form giant molecular clouds and feedback disperses these dense regions back into the more diffuse ISM. This is true for a wide range of galaxy models, from SMC-like dwarfs and Milky-way analogues to z~2 clumpy disks. The resulting star formation efficiencies are consistent with the observed global Kennicutt-Schmidt relation. Moreover, the star formation rates in our galaxy models are nearly independent of the numerically imposed high-density star formation efficiency and density threshold. This is a consequence of star formation regulated by stellar feedback; it enables our method to be more predictive than previous treatments. By contrast, without stellar feedback, the ISM experiences runaway collapse to very high densities and the global star formation rates exceed those observed by 1-2 orders of magnitude. This highlights the critical role that momentum in stellar feedback plays regulating star formation in galaxies. Read the rest of this entry »

GMC Formation and Spiral Spurs

In Uncategorized on April 7, 2011 at 5:17 am

Fig 1. M51: Composite Hubble Image (Strong m=2 response and interarm spur features)

There are several related questions that we want to address:

  1. Galaxy evolution. How does a Milky Way type galaxy evolve (in isolation)?
  2. Spiral Structure. What is it, how does it form, where does it occur, when, and why?
  3. Star Formation. We may have a good idea of what stars are, but: how do they form? where? when?

We know stars form in dense, cold regions of the ISM. These condensations are themselves part of larger, slightly less dense regions which we call giant molecular clouds (GMCs). Once a galaxy forms GMCs, however, they strongly affect both the structure and subsequent evolution of the galaxy. In particular,

  1. The cloud formation rate sets the overall rate and nature of star formation.
  2. Clouds change the balance of ISM phases (cold/warm/hot as well as molecular/atomic).
  3. Can modify the galactic dynamics, perhaps inducing or preventing further spiral structure.

Giant molecular clouds also influence the formation of each individual star, since the GMC properties (mass, density spectrum, magnetic field, turbulent velocity field, angular momentum) are precisely the initial conditions for star formation. It seems, then, that it would be advantageous to understand how such structures form. There are two basic mechanisms:

  1. Bottom-up / “coagulation” – smaller clouds build up over time (via collisions) into GMC size objects.
  2. Top-down – we must invoke some large-scale disk instability or mechanism to allow clouds to condense from the diffuse ISM.

The most obvious choice is gravity. As a long-range force it will naturally bring the ISM together over large scales into increasingly dense regions. However, there are other important effects (and consequences): differential rotation (shear), magnetic fields (magnetic pressure and tension forces), turbulence (multiscale perturbations of gas properties), and stellar spiral arms (which induce local variations in the ISM density, velocity, and magnetic field).

In a series of papers from 2002-2010, Ostriker, Shetty, and Kim explore the competition among these processes and their role in the formation of giant molecular clouds, particularly within features dubbed “spiral spurs” or “feathers”. They leverage the standard strength of numerical simulations, namely the ability to disentangle the physics of highly nonlinear systems by selectively turning certain processes on or off, isolating their contributions, and determining which are dominant in various regimes. They solve the magneto-hydrodynamic equations for gas, including a source term for self-gravity as well as an externally imposed spiral perturbation, modeled as a rigidly rotating potential with some fixed pattern speed:

Fig 2. Equations of MHD and gas-self gravity (1-4) with the external spiral potential (5).

Their suite of simulations progressed from 2D to 3D, and from local shearing periodic box (a small patch of a spiral arm comoving with the disk) to global simulations of the entire disk. There are several important effects of spirals worth mentioning:

  1. The characteristic timescale for self-gravity condensations is given by t_J = c_s / G \Sigma . So, in an arm, where there is a natural enhancement of the surface density \Sigma, the condensation process is more efficient.
  2. In a spiral arm there is a local shear reduction. Specifically, for a flat rotation curve with V(r) = Const we have for the local gradient in the angular velocity d \ln{\Omega} / d \ln{R} = \Sigma / \Sigma_0 - 2 where \Sigma_0 is the azimuthal average. For instance, for greater than a factor of two overdensity we can actually reverse the direction of the shear. The important point: this allows more time for condensations to grow before they are sheared out.
  3. Consider the dispersion relation for a shearing disk with magnetic fields, in the weak-shear limit. The instability criterion then reduces to exactly that of the 2D Jeans analysis (plus thick-disk gravity) in the absence of either rotation or magnetic fields! That is, the presence of a B field removes the stabilizing effect of galactic rotation: because magnetic tension forces share angular momentum between neighboring condensations, they resist epicyclic motions across field lines, and contracting regions are able to keep growing. This is the so-called “MJI”, or magneto-Jeans instability.
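
Points 1 and 2 are easy to put numbers on. A minimal sketch, assuming illustrative (not paper-specific) values of c_s = 7 km/s, an interarm surface density of 10 M_sun/pc^2, and an arm overdensity of 4:

```python
# Back-of-envelope estimates for the condensation timescale t_J = c_s / (G Sigma)
# and the local shear gradient d ln(Omega)/d ln(R) = Sigma/Sigma_0 - 2 (flat rotation curve).
G = 6.674e-8          # gravitational constant [cm^3 g^-1 s^-2]
MSUN = 1.989e33       # solar mass [g]
PC = 3.086e18         # parsec [cm]
YR = 3.156e7          # year [s]

def jeans_time(c_s_kms, sigma_msun_pc2):
    """Self-gravity condensation timescale t_J = c_s / (G * Sigma), in years."""
    c_s = c_s_kms * 1e5                      # km/s -> cm/s
    sigma = sigma_msun_pc2 * MSUN / PC**2    # Msun/pc^2 -> g/cm^2
    return c_s / (G * sigma) / YR

def local_shear(overdensity):
    """d ln(Omega)/d ln(R) inside an arm, for a flat rotation curve."""
    return overdensity - 2.0

# Assumed values, for illustration only:
print(f"interarm t_J ~ {jeans_time(7, 10):.1e} yr")   # ~1.6e8 yr
print(f"arm      t_J ~ {jeans_time(7, 40):.1e} yr")   # 4x overdensity -> 4x faster
for f in (1.0, 2.0, 3.0):
    print(f"Sigma/Sigma_0 = {f}: d lnOmega/d lnR = {local_shear(f):+.1f}")
```

Note the sign flip of the shear gradient once the overdensity exceeds 2, exactly as stated above.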

The MJI initially develops in the densest region of the arm and is then convected downstream, out of the arm. Interarm shear then draws the condensations into the characteristic “spur” shape, which naturally has a trailing sense due to the background differential rotation.

Fig 3. Final density structure of Kim & Ostriker (2002) shearing box MHD simulations, shown in the frame comoving with the spiral pattern.

Ostriker et al. find this to be an efficient mechanism for GMC formation. Several quantitative predictions can then be made based on the simulation results. For instance:

  1. The spur spacing is ~ few times the Jeans length L_J = c_s^2 / G \Sigma .
  2. The surface density enhancement in spurs drives Toomre’s Q parameter for the gas (which scales as 1 / \Sigma ) below its critical value, making the spurs locally unstable.
  3. Fragmentation along the length of the arm into clumps of roughly a Jeans mass (10^6 - 10^7 M_{\odot}) is associated with GMC formation.
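
These predictions follow directly from the scales quoted above. A sketch, again with assumed arm values (c_s = 7 km/s, Sigma = 40 Msun/pc^2; these numbers are illustrative, not taken from the papers):

```python
# Jeans length L_J = c_s^2 / (G Sigma) for a thin self-gravitating gas layer,
# and a rough fragment mass M_J ~ Sigma * L_J^2.
G = 6.674e-8          # [cm^3 g^-1 s^-2]
MSUN = 1.989e33       # [g]
PC = 3.086e18         # [cm]

def jeans_length_pc(c_s_kms, sigma_msun_pc2):
    """L_J = c_s^2 / (G * Sigma), in parsecs."""
    c_s = c_s_kms * 1e5
    sigma = sigma_msun_pc2 * MSUN / PC**2
    return c_s**2 / (G * sigma) / PC

def jeans_mass_msun(c_s_kms, sigma_msun_pc2):
    """Rough fragment mass M_J ~ Sigma * L_J^2, in solar masses."""
    sigma = sigma_msun_pc2 * MSUN / PC**2
    L = jeans_length_pc(c_s_kms, sigma_msun_pc2) * PC
    return sigma * L**2 / MSUN

# Assumed arm values: c_s = 7 km/s, Sigma = 40 Msun/pc^2
print(f"L_J ~ {jeans_length_pc(7, 40):.0f} pc")       # spur spacing ~ a few L_J
print(f"M_J ~ {jeans_mass_msun(7, 40):.1e} Msun")     # falls in the quoted 1e6-1e7 range
```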

To conclude, consider the image at the top of this page of the “grand-design” spiral galaxy M51, taken with HST (H\alpha, V, I) in 2002, which provided strong observational motivation for the studies described here. The reader is left to draw their own conclusions!

Supernova Remnants – in Theory and Practice

In Uncategorized on April 4, 2011 at 3:30 pm

Supernova Remnants Discussion – Important Topics

See also these handwritten notes

Stage 0: The Supernova Explosion

  • 1e53 erg carried away by neutrinos!
  • Ejecta properties: Mass ~ 1 solar mass, v ~ 10,000 km/s, Kinetic Energy ~ 1e51 erg.
  • Typical peak absolute visual magnitude ~ -18, duration ~ 3 months, energy radiated in supernova ~ 1e49 erg
  • Light curves: Type I (no hydrogen) – faster ejecta, powered by decay of radioactive Ni; Type II – slower ejecta, powered by recombination of Hydrogen
  • Primary emission mechanism – blackbody radiation from the thermal ejecta
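
The quoted kinetic energy is just a back-of-envelope check on the ejecta mass and velocity above:

```python
# Kinetic energy of ~1 Msun of ejecta moving at ~10,000 km/s:
# E = (1/2) M v^2, which should land near the canonical 1e51 erg.
MSUN = 1.989e33          # solar mass [g]
v = 1e4 * 1e5            # 10,000 km/s in cm/s
E_kin = 0.5 * MSUN * v**2
print(f"E_kin ~ {E_kin:.1e} erg")   # ~1e51 erg, as quoted above
```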

Stage I: The Ejecta-dominated Phase

  • Free expansion, Hubble flow model
  • Importance of B-fields, formation of shocks
  • Velocity structure of the ejecta
  • Primary emission mechanism – shocked ejecta: thermal bremsstrahlung (X-rays); shocked ISM: radio synchrotron
  • Validity – as long as Ejecta mass >> swept-up ISM mass
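
The end of the ejecta-dominated phase can be estimated by asking when the swept-up ISM mass equals the ejecta mass. A sketch, assuming an ambient density of n = 1 cm^-3 (an illustrative choice):

```python
import math

# Free expansion ends roughly when M_swept = (4/3) pi R^3 rho equals M_ejecta.
MSUN = 1.989e33; PC = 3.086e18; YR = 3.156e7
M_H = 1.67e-24            # hydrogen mass [g]

def sweep_up(m_ej_msun=1.0, v_kms=1e4, n_ism=1.0, mu=1.4):
    """Radius [pc] and age [yr] at which swept-up ISM mass equals the ejecta mass.
    mu = 1.4 accounts roughly for helium in the ISM mass budget."""
    rho = mu * M_H * n_ism
    R = (3 * m_ej_msun * MSUN / (4 * math.pi * rho))**(1 / 3)
    t = R / (v_kms * 1e5)   # time to coast to R at the free-expansion speed
    return R / PC, t / YR

R_pc, t_yr = sweep_up()
print(f"R ~ {R_pc:.1f} pc after t ~ {t_yr:.0f} yr")   # ~2 pc, a couple hundred years
```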

Stage II: The Sedov-Taylor Phase

  • Echoes of thunder
  • S-T model: validity, dimensional analysis, homologous model (blast wave position and velocity), Sedov solution (without derivation) + plots & caveats, post-shock temperature
  • Primary emission mechanism – Optical: forbidden lines ([O I], [Fe], [Mg I]) in type I SNe; X-rays: free-free from shocked ejecta + auroral lines; Radio: synchrotron from forward shock
  • Validity – t < cooling time of ejecta, non-spherical remnants
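
The dimensional analysis mentioned above gives the famous Sedov-Taylor scaling R(t) = \xi (E t^2 / \rho)^{1/5}. A sketch evaluating it for canonical values (E = 10^51 erg, n = 1 cm^-3; the mean molecular weights are assumed, not from the notes):

```python
# Sedov-Taylor blast wave from dimensional analysis:
#   R(t) = xi * (E t^2 / rho)^(1/5),  xi ~ 1.15 for gamma = 5/3.
# Shock velocity follows as v = dR/dt = (2/5) R/t, and the post-shock
# temperature from the strong-shock jump condition T = 3 mu m_H v^2 / (16 k_B).
M_H = 1.67e-24; K_B = 1.38e-16; PC = 3.086e18; YR = 3.156e7

def sedov(t_yr, E=1e51, n=1.0, mu_ism=1.4, mu_post=0.6, xi=1.15):
    rho = mu_ism * M_H * n          # ambient mass density
    t = t_yr * YR
    R = xi * (E * t**2 / rho)**0.2
    v = 0.4 * R / t                 # (2/5) R / t
    T = 3 * mu_post * M_H * v**2 / (16 * K_B)
    return R / PC, v / 1e5, T       # pc, km/s, K

R_pc, v_kms, T = sedov(1e4)
print(f"t = 1e4 yr: R ~ {R_pc:.0f} pc, v_sh ~ {v_kms:.0f} km/s, T ~ {T:.1e} K")
```

The ~10^6-10^7 K post-shock temperatures are why the remnant shines in free-free X-rays during this phase.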

Stage III: The Snowplow Phase

  • Radiative cooling + timescales, shell formation
  • Instabilities
  • Primary emission mechanism – Optical: forbidden lines [O III], [S II]; X-rays: free-free from shocked ISM; Radio: synchrotron from forward shock
  • Effect of internal pressure on evolution
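
The internal-pressure point above sets the expansion law in this phase. Once the shell radiates away its thermal energy, momentum conservation ((4/3)\pi R^3 \rho \dot{R} = const) gives R \propto t^{1/4}; if the hot interior still pushes on the shell (pressure-driven snowplow), R \propto t^{2/7}. A sketch with assumed handoff values (shell formation at t_0 = 3e4 yr, R_0 = 20 pc; illustrative only):

```python
# Snowplow-phase scalings: R ~ t^(1/4) (momentum-conserving)
# or R ~ t^(2/7) (pressure-driven), continued from a handoff radius R0 at t0.
def shell_radius(t_yr, t0_yr, R0_pc, exponent=0.25):
    """Power-law continuation R(t) = R0 * (t/t0)^exponent for t > t0."""
    return R0_pc * (t_yr / t0_yr)**exponent

# Assumed handoff values: t0 = 3e4 yr, R0 = 20 pc
for t in (3e4, 1e5, 3e5):
    print(f"t = {t:.0e} yr: R ~ {shell_radius(t, 3e4, 20):.1f} pc")
```

The shallow exponents show why remnants spend most of their observable lifetime in this slow, shell-dominated phase.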

Resources

Books:
Physical Processes in the Interstellar Medium, Lyman Spitzer, Wiley-Interscience Pub., 1978, section 10.2 (shocks) and 12.2 (supernovae).
Physics of the ISM and IGM, Bruce Draine, PUP, 2011, section 39.1

Online:
http://www.astronomy.ohio-state.edu/~ryden/ast825/ch5-6.pdf
http://wapedia.mobi/en/Supernova_remnant

Numerical Simulations of explosion:
http://qso.lanl.gov/~clf/
