Harvard Astronomy 201b


ARTICLE: On the Density of Neutral Hydrogen in Intergalactic Space

In Journal Club, Journal Club 2013 on April 20, 2013 at 12:25 am

Author: Yuan-Sen Ting, April 2013

 

In remembrance of the Boston bombing victims: M. Richard, S. Collier, L. Lu, K. Campbell

“Weeds do not easily grow in a field planted with vegetables.
Evil does not easily arise in a heart filled with goodness.”

Dharma Master Cheng Yen.

 

Related links:

 

  1. For more information and interactive demonstrations of the concepts discussed here, see the interactive online software module I developed for the Harvard AY201b course.

 

Introduction

 

Although this seminal paper by Gunn & Peterson (1965) comprises only four pages (not to mention that it is set in a single column with large margins!), the authors suggested three ideas that are still being actively researched by astronomers today, namely

  1. Lyman alpha (Lyα) forest
  2. Gunn-Peterson trough
  3. Cosmic Microwave Background (CMB) polarization

Admittedly, they put these ideas in a slightly different context, and the current methods on these topics are much more advanced than what they studied, but one could not ask for more from a four-page note! In the following, we give a brief overview of these topics, the initial discussion in Gunn & Peterson (1965), and the current thinking and results. Most of the results are collated from the review articles and papers in the reference list, and we will refer to them throughout.

 

Lyman alpha forest

 

Gunn and Peterson proposed using Lyα absorption in distant quasar spectra to study the neutral hydrogen in the intergalactic medium (IGM). The quasar acts as the light source, like a flashlight shining across the Universe, and provides a characteristic, relatively smooth spectrum. As the light travels from the quasar to us, intervening neutral hydrogen clouds along the line of sight absorb the quasar continuum at the HI Lyα transition, 1215.67Å in the rest frame of each absorbing cloud. Because the characteristic quasar spectrum is well understood, it is relatively easy to isolate and study this absorption.

More importantly, in our expanding Universe, the spectrum is continuously redshifted. Therefore, intervening neutral hydrogen at different redshifts produces absorption at many different wavelengths blueward of the quasar’s Lyα emission. After all these absorptions, the quasar spectrum carries a “forest” of lines (see figure below), hence the name Lyα forest.

 


Figure 1. Animation showing a quasar spectrum being continuously redshifted due to cosmic expansion, with various Lyman absorbers acting at different redshifts. From the interactive module.

 

For a quasar at high redshift, along the line of sight, we are sampling a significant fraction of the Hubble time. Therefore, by studying the Lyα forests, we are essentially reconstructing the history of structure formation. To put our discussion in context, there are two types of Lyα absorbers:

  1. Low column density absorbers: these absorbers are believed to be the sheetlike, filamentary neutral hydrogen distributed in the web of dark matter formed in ΛCDM. In this cosmic web, the dark matter potential wells are sufficiently shallow to avoid gravitational collapse of the gas, but deep enough to prevent these clouds of the inter- and proto-galactic medium from dispersing.

  2. High column density absorbers: these are mainly due to intervening galaxies/proto-galaxies, such as the damped Lyα systems. As opposed to the low column density absorbers, galaxies are generally more metal rich than the filamentary gas, so the absorption of Lyα due to these systems will usually be accompanied by characteristic metal absorption lines. There will also be an obvious damping wing visible in the Voigt profile of the Lyα line, hence the name damped Lyα system.

Although high column density absorbers are extremely important for detecting high redshift galaxies, our discussion here focuses on the IGM, so for the rest of this post we will only discuss the low column density absorbers.

 

Low column density absorbers

 

So, what can we learn about neutral hydrogen in the Universe by looking at Lyα forests? With a large sample of high redshift quasar spectra, we can bin up the absorption lines due to intervening clouds of the same redshift. This allows us to study the properties of gas at that distance (at that time in the Universe’s history), and learn about the evolution of the IGM. In this section, we summarize some of the crucial properties that are relevant in our later discussion:

 


Figure 2. Animation illustrating the effect of the redshift index on the quasar absorption spectrum. A higher index means more closely spaced absorption near the emission peak. From the interactive module.

 

It turns out that the properties of the IGM can be rather well described by power laws. If we define n(z) as the number density of clouds as a function of redshift z, and N(ρ) as the number density of clouds as a function of column density ρ, then it has been observed that both follow power laws [see Rauch (1998) for details].

 
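Roughly speaking (quoting typical values compiled in Rauch 1998, with the caveat that the exact exponents depend on the redshift and column density ranges fitted): n(z) ∝ (1+z)^γ with γ ≈ 2–3, and N(ρ) ∝ ρ^-β with β ≈ 1.5.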

Note that, in order to study these distributions, high resolution spectroscopy (FWHM < 25 km/s) is ideally required so that each line is resolved.

There is an upturn in the redshift power law index at high redshift, which turns out to be relevant to the discussion of the Gunn-Peterson trough; we will return to this later. It has also been shown, via cosmological hydrodynamic simulations, that the ΛCDM model is able to reproduce the observed power laws.

But we can learn about more than just gas density. The Doppler widths of the Lyα lines are consistent with the photoionization temperature (about 10^4 K). This suggests that the neutral hydrogen gas in the Lyα clouds is in photoionization equilibrium: heating by photoionization from the UV background photons is balanced by cooling via thermal bremsstrahlung, Compton cooling, and the usual recombination. Photoionization equilibrium allows us to calculate how many photons have to be produced, either by galaxies or quasars, to maintain the gas at the observed temperature. From this photon budget, we can deduce how many stars were formed at each epoch in the history of the Universe.

We’ve been relying on the assumption that these absorbers are intergalactic gas clouds. How can we be sure they’re not actually just outflows of gas from galaxies? This question has been tackled by studying the two point correlation function of the absorption lines, which, in a nutshell, measures the clustering of the absorbers. Since galaxies are highly clustered, at least at low redshift, if the low column density absorbers were related to galactic outflows, one would expect clustering of these absorption signals. Studies of the Lyα forest suggest that the low column density absorbers are not clustered as strongly as galaxies, favoring the interpretation that they are filamentary structures or intergalactic gas [see Rauch (1998) for details].

However, this discussion is only valid in the relatively recent Universe. When we look further away, and therefore at earlier times, there are fewer ionizing photons and correspondingly more neutral hydrogen atoms. With more and more absorption, eventually the lines in the forest become so compact and deep that the “forest” becomes a “trough”.

 

Gunn-Peterson trough

 

In their original paper in 1965, Gunn and Peterson estimated how much absorption there should be in a quasar spectrum due to the amount of hydrogen in the Universe. The amount of absorption can be quantified by the optical depth, which is a measure of transparency: a higher optical depth indicates more absorption by neutral hydrogen. Their cosmological model is obsolete now, but their reasoning and derivation are still valid; one can still show that the optical depth is

τ(z) = π e^2 f λ n_HI / (m_e c H(z))

With a modern twist on the cosmological model, the optical depth of neutral hydrogen at each redshift can be estimated as [this form follows Fan et al. (2006)]

τ(z) ≈ 4.9 × 10^5 (Ω_m h^2/0.13)^(-1/2) (Ω_b h^2/0.02) ((1+z)/7)^(3/2) (n_HI/n_H)

where z is the redshift, λ is the transition wavelength, e is the electron charge, m_e is the electron mass, f is the oscillator strength of the N=1 to 2 transition, c is the speed of light, H(z) is the Hubble parameter, h is the dimensionless Hubble parameter, Ω_b is the baryon density of the Universe, Ω_m is the matter density, n_HI is the number density of neutral hydrogen, and n_H is the total number density of both neutral and ionized hydrogen.

Since the optical depth is proportional to the density of neutral hydrogen, and the transmitted flux decreases exponentially with optical depth (F ∝ e^-τ), even a tiny neutral fraction, n_HI/n_H = 10^-4 (i.e. one neutral atom per ten thousand hydrogen atoms), gives rise to complete Lyα absorption. In other words, the transmitted flux should be zero. This complete absorption is known as the Gunn-Peterson trough.
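
As a quick worked example with the estimate above: at z ∼ 6 the (1+z)/7 factor is unity, so a neutral fraction of n_HI/n_H = 10^-4 gives τ ≈ 4.9 × 10^5 × 10^-4 ≈ 49, and a transmitted flux of e^-49 ∼ 10^-21, which is complete absorption for any practical purpose.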

In 1965, Gunn and Peterson performed their calculation assuming that most hydrogen atoms in the Universe are neutral. From their calculation, they expected to see a trough in the quasar (redshift z∼3) data they were studying. This was not the case.

In order to explain the observation, they found that the neutral hydrogen density has to be five orders of magnitude smaller than expected. They came to the striking conclusion that most of the hydrogen in the IGM exists in another form — ionized hydrogen. Lyman series absorptions occur when a ground state electron in a hydrogen atom jumps to a higher state. Ionized hydrogen has no electrons, so it does not absorb photons at Lyman wavelengths.

 


Figure 3. Animation illustrating Gunn-Peterson trough formation. The continuum blueward of Lyα is drastically absorbed as the neutral hydrogen density increases. From the interactive module.

 

Gunn and Peterson also showed that, since the IGM density is low, radiative ionization (rather than collisional ionization) is the dominant mechanism maintaining this level of ionization. What an insight!

 

The history of reionization

 

In modern astronomy language, we refer to Gunn and Peterson’s observation by saying the Universe was “reionized”. Combined with our other knowledge of the Universe, we now know the baryonic medium in the Universe went through three phases.

  1. In the beginning of the Universe (z>1100), all matter is hot, ionized, optically thick to Thomson scattering and strongly coupled to photons. In other words, the photons are being scattered all the time. The Universe is opaque.

  2. As the Universe expands and cools adiabatically, the baryonic medium eventually recombines; in other words, electrons become bound to protons and the Universe becomes neutral. With matter and radiation decoupled, the CMB radiation is free to propagate.

  3. At some point, the Universe gets reionized and most neutral hydrogen becomes ionized. It is believed that reionization first proceeds within the Stromgren spheres of the photoionizing sources. Eventually these spheres grow and overlap, completing the reionization as the sources slowly etch away the remaining dense, neutral, filamentary hydrogen.

 

The details of when and how reionization happened are still active research areas that we know very little about. Nevertheless, studying the Gunn-Peterson trough can tell us when reionization completed. The idea is simple: as we look at quasars further and further away, there should be a redshift beyond which the quasars were emitting before reionization completed. These quasar spectra should show Gunn-Peterson troughs.

 

Pinning down the end of reionization

 

The first confirmed Gunn-Peterson trough was discovered by Becker et al. (2001). As shown in the figure below, the average transmitted flux in some parts of the quasar spectrum is zero.

 



Figure 4. The first confirmed detection of the Gunn-Peterson trough (top panel, where the continuum goes to zero flux due to intervening absorbers), from a quasar beyond z = 6. The bottom panel shows that a quasar at slightly lower redshift (z < 6) has non-zero flux everywhere. Adapted from Becker et al. (2001).

 



Figure 5. Close-up spectra of the z>6 quasar. In the regions corresponding to absorbers at redshifts 5.95 < z < 6.16 (the middle regions), the transmitted flux is consistent with zero. This indicates a sharp increase in the neutral fraction at z>6. Note that near the quasar itself (the upper end) there is transmitted flux, mostly because the gas near the quasar is ionized by the quasar’s own emission, which suppresses the Gunn-Peterson trough; this is known as the proximity effect. In the regions corresponding to lower redshifts there is also apparent transmitted flux, indicating that the neutral fraction decreases significantly.

 

Could this be due to the dense interstellar medium of a galaxy along the line of sight, rather than IGM absorbers? A few lines of evidence refute that idea. First, the metal lines typically seen in galaxies (even in the early Universe) are not observed at the same redshift as the Lyman absorption. Second, astronomers have corroborated this finding using several other quasars. We see these troughs not just at a few particular wavelengths in some of the quasar spectra (as you might expect for scattered intervening galaxies); rather, the average optical depth grows steadily beyond a simple extrapolation of its low-redshift trend, starting at z∼6 (see figure below). The rising optical depth is due to the neutral hydrogen fraction rising dramatically beyond z∼6.

 


Figure 6. Quasar optical depth as a function of redshift. The observed optical depth exceeds the predicted trend (solid line), suggesting reionization. Adapted from Becker et al. (2001).

 

Similar studies using high redshift (z>5) quasars confirm a dramatic increase in the IGM neutral fraction, from n_HI/n_H ∼ 10^-4 at z<6 to n_HI/n_H ∼ 10^-3–0.1 at z∼6. At z>6, complete absorption troughs begin to appear.

 


Figure 7. A more recent compilation of measurements like those in Figure 6, from Fan et al. (2006). Note that the sample variance increases rapidly with redshift. This could be explained by the fact that the high redshift star-forming galaxies which likely provide most of the UV photons for reionization are highly clustered, so that reionization is patchy in nature.

 


Figure 8. A similar plot, but translating the optical depth into volume-averaged neutral hydrogen fraction of the IGM. The solid points and error bars are measurements based on 19 high redshift quasars. The solid line is inferred from the reionization simulation from Gnedin (2004) which includes the overlapping stage to post-overlapping stage of the reionization spheres.

 

But it’s important to note that, because even a very small amount of neutral hydrogen in the IGM can produce optical depth τ≫1 and cause the Gunn-Peterson trough, this observational feature saturates very quickly and becomes insensitive to higher densities. That means that the existence of a Gunn-Peterson trough by itself does not prove that the object is observed prior to the start of reionization. Instead, this technique mostly probes the later stages of cosmic reionization. So the question now is: when does reionization get started?

 

The start of reionization

 

Gunn and Peterson told us not only how to probe the end of reionization through the Gunn-Peterson trough, but also how to probe its early stages using the polarization of the CMB. They discussed the following scenario: if the Universe started to be reionized at a certain redshift, then beyond this redshift, light from distant background sources should experience Thomson scattering.

Thomson scattering is the scattering of photons off free electrons. Recall that starlight scattered by a surrounding nebula becomes polarized, owing to the angular dependence of Thomson scattering. In the same way, the CMB becomes polarized after Thomson scattering off the ionized IGM.

As opposed to the Gunn-Peterson trough, Thomson scattering results from light interacting with the ionized fraction of hydrogen, not the neutral fraction. The large scale polarization of the CMB can therefore be regarded as a complementary probe of reionization. The CMB polarization detected by the Wilkinson Microwave Anisotropy Probe (WMAP) suggests a significant ionization fraction extending to much earlier in cosmic history, z∼11±3 (see figure below). This implies that reionization started somewhere in the range z∼8–14 and ended at z∼6–8, so cosmic reionization appears to be quite extended in time. This is in contrast to recombination, when electrons first became bound to protons, which occurred over a very narrow range in time (z=1089±1).

In summary, a crude chart of the reionization constraints from the Gunn-Peterson trough and CMB polarization is shown below.

 


Figure 9. The volume-averaged neutral fraction of the IGM versus redshift measured using various techniques, including CMB polarization and Gunn-Peterson troughs.

 

So far, we have left out the discussion of Lyβ, Lyγ absorptions as well as the cosmic Stromgren sphere technique as shown in the figure above. We will now fill in the details before proceeding to the suspects of reionization.

 

How about Lyman beta and Lyman gamma

 

When people talk about the Lyα forest, the term can be an oversimplification. Besides Lyα, other neutral hydrogen transitions, such as Lyβ and Lyγ, also play a crucial role, and we can even study the Lyα transition of ionized helium (note that Lyα designates the N=1 to 2 transition of any hydrogen-like species). We first look at the advantages of, and impediments to, including Lyβ and Lyγ absorption lines in the study of the IGM.

  1. Recall the formula for the optical depth: τ ∝ f λ. Due to the decrease in oscillator strength and in wavelength, for the same neutral hydrogen density the Gunn-Peterson optical depths of Lyβ and Lyγ are 6.2 and 17.9 times smaller than that of Lyα (see the short calculation after this list). Having a smaller optical depth means that these lines only saturate at a higher neutral fraction/density. Note that once a line is saturated, a large change in column density induces only a small change in apparent line width; in other words, we are on the flat part of the curve of growth, where column densities are relatively difficult to measure. Since Lyβ and Lyγ have smaller optical depths, they can provide more stringent constraints on the IGM ionization state and density when Lyα is saturated. Especially in the case of the Gunn-Peterson trough, as shown in the figures above, the ability to detect Lyβ and Lyγ is crucial for studying reionization.

    Figure 10. The left-hand panel shows the Voigt profile of the Lyα absorption line. The animation shows that when the line is saturated, a large change in column density induces only a small change in the equivalent width (right-hand panel). This degeneracy makes the measurement of column density very difficult when the absorption line is saturated. From the interactive module.

  2. Given the same IGM clouds, Lyα absorption will also be accompanied by Lyβ and Lyγ absorption. Since Lyα has a larger optical depth, it has broader wings in the Voigt profile and a higher chance of blending with other lines. Simultaneous fits to the higher order Lyman lines can therefore help to deblend these lines.

  3. As the redshift of the quasar increases, there is a substantial region where the Lyβ forest overlaps with the Lyα forest, and so forth for the higher order lines. Disentangling which absorption is due to Lyα and which is due to higher order transitions is difficult within these regions. Due to this limitation, studies of Lyγ and higher order lines are rare.

    Figure 11. Animation showing the Lyα forest including Lyα and Lyβ absorption lines. One can see that there is substantial overlap of Lyα absorption lines onto the Lyβ region. Therefore, the study of higher order absorptions is very difficult in practice. From the interactive module.
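
A quick back-of-the-envelope check of those 6.2 and 17.9 factors, using standard (approximately quoted) oscillator strengths and rest wavelengths for the first three Lyman lines:

    # Sketch: relative Gunn-Peterson optical depths, tau ∝ f * lambda,
    # for the first three Lyman lines. Atomic data are standard values,
    # quoted approximately.
    lines = {
        "Ly-alpha": (0.4164, 1215.67),  # (oscillator strength f, wavelength [A])
        "Ly-beta":  (0.0791, 1025.72),
        "Ly-gamma": (0.0290, 972.54),
    }

    tau_lya = lines["Ly-alpha"][0] * lines["Ly-alpha"][1]
    for name, (f, lam) in lines.items():
        print(f"{name}: tau_Lya / tau = {tau_lya / (f * lam):.1f}")

This reproduces the relative factors quoted above (1.0, 6.2, and 17.9).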

 

Probing even higher neutral fraction

 

In some of the plots above, we have seen that as the neutral fraction increases, Lyα starts to saturate and can only give a moderate lower bound on the optical depth and the neutral fraction. For higher neutral fraction, we have to use Lyβ and Lyγ. It is also clear that even Lyγ will get saturated very quickly. Are there other higher density probes?

Yes, there are other probes! At high neutral fraction, a quasar in this kind of environment resembles a scaled-up version of an O/B type star in a dense neutral hydrogen region (although the latter medium is mostly molecular, while the former is atomic). A luminous quasar will produce a highly ionized HII region, creating a so-called cosmic Stromgren sphere. These cosmic Stromgren spheres can have physical sizes of about 5 Mpc owing to the extreme luminosity of quasars.

Measurements of cosmic Stromgren spheres are sensitive to a much larger neutral fraction than even Lyγ can probe. On the flip side of the coin, they are subject to many systematics that one can easily imagine, including the three-dimensional geometry of the quasar emission, the age of the quasar, the clumping factor of the surrounding material, and so on. This explains the large error bars in the reionization chart above.

 

How about helium absorption

 

Although hydrogen is the most abundant element in the Universe, helium also constitutes a large fraction of the baryonic mass. A natural question to ask: what about helium?

Since neutral helium has an ionization potential and recombination rate similar to those of neutral hydrogen, its reionization is likely to have happened at about the same time as that of neutral hydrogen. Singly ionized helium (HeII), on the other hand, has a higher ionization potential, and its reionization is observed to happen at lower redshift, about z∼3. The fact that ionized helium has a higher ionization potential is crucial for disentangling the UV background sources; we will discuss this in detail when we talk about the suspects. In this section, we discuss some of the potential of, and impediments to, using helium Lyα.

  1. Helium and hydrogen have different atomic masses. Both bulk motion and thermal motion in the gas contribute to line broadening, but bulk motions are insensitive to atomic mass, whereas thermal velocities go as 1/√μ, where μ is the mean atomic mass. By comparing the broadening of the HeII 304Å and HI 1215Å lines, it is therefore possible, in principle, to separate the contributions of bulk and thermal motions to the line broadening (a small numerical sketch follows this list), and thereby test theories of IGM kinematics. The usefulness of this approach, however, remains to be seen.

  2. Since ionized helium has a higher ionization potential (Lyman limit at 228Å) than neutral hydrogen (Lyman limit at 912Å), the relative ionization fractions are a sensitive probe of the spectral shape of the UV background. For instance, quasars provide harder spectra than soft stellar sources and are able to doubly ionize helium.

  3. Although the helium Lyα forest is a powerful tool, only a small fraction of all quasars are suitable for a HeII absorption search. The short wavelength (304Å) of the ionized helium Lyα line requires the quasar to be significantly redshifted for the line to enter a band where high resolution spectroscopy is possible. Furthermore, the helium Lyα line has a shorter wavelength than the neutral hydrogen Lyman limit (912Å). As shown in the interactive module, bound-free absorption beyond the Lyman limit, especially in the presence of high column density absorbers, creates a large blanketing of lines. This renders the large majority of quasars useless for a HeII search even if they are redshifted enough.
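
To illustrate the first point numerically: a minimal sketch, assuming purely Gaussian lines in which thermal and bulk broadening add in quadrature, b^2 = b_bulk^2 + 2kT/m, with the same gas (same T and b_bulk) seen in both HI and HeII. The measured widths below are made up for illustration.

    import math

    K_B = 1.380649e-23   # Boltzmann constant [J/K]
    M_H = 1.6735e-27     # hydrogen atom mass [kg]
    M_HE = 6.6465e-27    # helium atom mass [kg]

    def solve_thermal_bulk(b_hi, b_heii):
        """Given Doppler b-parameters [km/s] of coincident HI and HeII
        lines, solve b^2 = b_bulk^2 + 2kT/m for T and b_bulk."""
        b_hi2 = (b_hi * 1e3) ** 2     # to (m/s)^2
        b_he2 = (b_heii * 1e3) ** 2
        # Subtracting the two equations removes the mass-independent bulk term.
        T = (b_hi2 - b_he2) / (2 * K_B * (1 / M_H - 1 / M_HE))
        b_bulk = math.sqrt(b_hi2 - 2 * K_B * T / M_H)
        return T, b_bulk / 1e3        # [K], [km/s]

    # Hypothetical widths: the HI line is broader because H is lighter.
    T, b_bulk = solve_thermal_bulk(b_hi=20.0, b_heii=12.0)
    print(f"T ~ {T:.0f} K, bulk (turbulent) broadening ~ {b_bulk:.1f} km/s")

With these made-up numbers the gas comes out at T ∼ 2 × 10^4 K with a ∼8 km/s bulk component; with real data, this is exactly the separation the first point describes.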

 

The suspects — who ionized the gas?

 

We have now discussed enough of the technology used to catch the suspects, so let us return to the main question of reionization: who did it? As we have discussed earlier in this post, the IGM is believed to have been reionized early in the Universe’s history via photoionization. Who produced all the high energy, ionizing photons? There are two natural suspects:

  1. Star-forming galaxies
  2. Quasars

It has been argued that soft sources like star-forming galaxies have to be the dominant sources, owing to the fact that there were not many quasars in the early Universe. The Sloan Digital Sky Survey (SDSS) has shown a steep decline in the number of quasars at redshifts beyond the peak of quasar activity at z=2.5. If most quasars formed at this relatively late epoch, how could they play a significant role in the reionization of the Universe, which seems to have been completed as early as z∼6?

Taking a closer look at this problem, Faucher-Giguère et al. (2008) made an independent estimate of the photoionization contribution from quasars. In their study, they consider the Lyα forest optical depth at redshifts 2<z<4.2 in 86 high resolution quasar spectra.

The idea behind estimating the fractional contribution of quasars was sketched earlier in this post; we reiterate it in more detail here. Only quasars can produce the very high energy (∼50 eV) photons necessary to doubly ionize helium, so the luminosity of sources at ∼50 eV directly tells us the quasar contribution. By assuming a spectral index for the spectral energy distribution, one can then extrapolate to lower energies and infer the photoionization rate in the energy range relevant to HI ionization. Faucher-Giguère et al. show that at most only about 20 per cent of the HI-ionizing photons could come from quasars.
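
As a toy illustration of that extrapolation step (not the authors' actual pipeline): assume a power-law specific luminosity L_ν ∝ ν^-α anchored at the HeII ionization edge and scale it down to the HI edge. The spectral index α is the key assumption.

    # Toy sketch: extrapolate a quasar-like power law, L_nu ∝ nu^-alpha,
    # from the HeII ionization edge (54.4 eV, where the quasar contribution
    # is pinned by HeII observations) down to the HI edge (13.6 eV).
    E_HEII = 54.4    # [eV]
    E_HI = 13.6      # [eV]
    alpha = 1.6      # assumed EUV spectral index; typical fits use ~1.5-1.8

    ratio = (E_HI / E_HEII) ** (-alpha)
    print(f"L_nu(13.6 eV) / L_nu(54.4 eV) = {ratio:.1f}")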

Besides showing that quasars cannot be the main suspect, Faucher-Giguère et al. also used the Lyα forest to derive the star formation rate indirectly. We discussed above that the absorbers are in photoionization equilibrium. With this assumption, the authors used the Lyα optical depth to derive the photoionization rate at different redshifts, converted it into the UV emissivity of star-forming galaxies, and then inferred the star formation rate in galaxies at different times in the history of the Universe.

The figure below shows that the derived hydrogen photoionization rate is remarkably flat over the history of the Universe, suggesting a roughly constant source of ionizing photons across a large span of cosmic history. This is only possible if the star formation rate in galaxies was already high in the early Universe, at high redshift (z∼2.5–4), since the contribution from quasars only becomes important at later times. As the figure shows, such a star formation rate is consistent with the simulation of Hernquist & Springel (2003).

 


Figure 12. Observations of the hydrogen photoionization rate compared to the best-fitting model from Hernquist & Springel (2003). This suggests that the combination of quasars and stars alone may account for the photoionization. Adapted from Faucher-Giguère et al. (2008).

 

But there is another way to trace the history of star formation: directly counting the number and brightness of galaxies at different redshifts using photometric surveys. It turns out that the results of these direct surveys, as performed by Hopkins & Beacom (2006) for example, are in tension with the indirect approach. The photometric surveys suggest that the star formation rate declines at z>2.5 [Prof. Hans-Walter Rix once wittily quipped that the Universe gets tenured at redshift z∼2.5], as in the figure below. This would mean that both of the major photoionization machines (quasars and stars in galaxies) are in decline at z>2.5.

 


Figure 13. Like Figure 12 above, but now using the models of Hopkins & Beacom (2006) where the star formation history is inferred from photometric surveys.

 

If the photometric surveys are right, Faucher-Giguère et al. argue, the Universe could not have provided enough photons to reionize the intergalactic medium. So why are there two different observational results?

To sum up this section, by studying Lyα forests, Faucher-Giguère et al. argue that

  1. Star-forming galaxies are the major sources of photoionization, although quasars can contribute as much as 20 per cent.
  2. The star formation rate has to be higher than the one estimated observationally from the photometric surveys to provide the required photons.

 

How the suspects did it

 

Although the Faucher-Giguère et al. results suggest that star-forming galaxies are the major sources of photoionization, exactly how these suspects did it is not clear. The discrepancy between the direct and indirect tracers of the star formation history of the Universe might be reconcilable: observations of star-forming galaxies are plagued by uncertainties, many of which are still areas of active research. Among the major uncertainties are:

  1. The star formation rate of galaxies at high redshift. We cannot observe galaxies that are too faint and far away, so how many photons could they provide? Submillimetre observations in the near future could shed more light on this, thanks to their amazing negative K-correction and their ability to observe gas and dust in high redshift galaxies.

  2. Dust obscuration. Starburst galaxies create dust, and dust obscures observations. Infrared and submillimetre observations are crucial for detecting these faint galaxies. It is believed that perhaps most of the ionizing photons come from these unassuming, hard-to-detect galaxies, because they are likely individually small and dim but collectively numerous.

  3. The IGM clumping factor. In other words, how concentrated is the material surrounding the galaxies? This affects the fraction of ionizing photons that escape into the IGM.

In short, it is possible that star formation did indeed begin far back in cosmic history, at z>2.5, but that the photometric surveys underestimated it due to these uncertainties. In that case, the results from simulations, Lyα forests, and photometric surveys could all fit together.

 

Any other suspects on the loose

 

We now have two suspects in custody. The remaining question: are there any suspects on the loose? To answer this, some other interesting proposals have been raised, among them:

  1. High-energy X-ray photons from supernova remnants or microquasars at high redshift. This is possible, but the required contribution would overproduce the soft X-ray background that we observe today.

  2. Decaying sterile neutrinos. Neutrinos are believed to gain their mass through the seesaw mechanism, and one potential ingredient of this mechanism is the sterile neutrino. Reionization by decaying sterile neutrinos cannot be completely ruled out yet. That said, since there is no confirmed detection of a sterile neutrino in particle physics experiments, one should not take this possibility too seriously.

In summary, we cannot be 100 per cent sure, but suspects other than quasars and stellar sources are unlikely.

 

So what have we learned?

 

To summarize what we have discussed in this post:

  1. Non-detection can be a good thing. The non-detection of the Gunn-Peterson trough in the early days demonstrated that the IGM is mostly in an ionized state.

  2. The Lyα forest gives us a 3D picture of the IGM.

  3. The Gunn-Peterson trough provides a direct probe of the end stage of reionization.

  4. CMB polarization suggests when reionization began, and shows that reionization is a process extended in time.

  5. Star-forming galaxies are likely to be responsible for reionization, but exactly how they did it is still under investigation.

 

References

 

  1. Becker R. H. et al., 2001, AJ, 122, 2850
  2. Fan X., Carilli C. L., Keating B., 2006, ARA&A, 44, 415
  3. Faucher-Giguère C.-A., Lidz A., Hernquist L., Zaldarriaga M., 2008, ApJ, 688, 85
  4. Faucher-Giguère C.-A., Prochaska J. X., Lidz A., Hernquist L., Zaldarriaga M., 2008, ApJ, 681, 831
  5. Gunn J. E., Peterson B. A., 1965, ApJ, 142, 1633
  6. Hernquist L., Springel V., 2003, MNRAS, 341, 1253
  7. Hopkins A. M., Beacom J. F., 2006, ApJ, 651, 142
  8. Rauch M., 1998, ARA&A, 36, 267

 

Glossary

 

  1. ΛCDM
  2. Curve of growth
  3. FWHM (Full width half maximum)
  4. K-correction
  5. Quasars
  6. Seesaw mechanism
  7. Sterile neutrino
  8. Stromgren sphere
  9. Voigt profile

CHAPTER: Measuring States in the ISM

In Book Chapter on February 26, 2013 at 3:00 am

(updated for 2013)


There are two primary observational diagnostics of the thermal, chemical, and ionization states in the ISM:

  1. Spectral Energy Distribution (SED; broadband low-resolution)
  2. Spectrum (narrowband, high-resolution)

SEDs

Very generally, if a source’s SED is blackbody-like, one can fit a Planck function to the SED and derive the temperature and column density (if one can assume LTE). If an SED is not blackbody-like, the emission is the sum of various processes, including:

  • thermal emission (e.g. dust, CMB)
  • synchrotron emission (power law spectrum)
  • free-free emission (thermal for a thermal electron distribution)

Spectra

Quantum mechanics combined with chemistry can predict line strengths. Ratios of lines can be used to model “excitation”, i.e. what physical conditions (density, temperature, radiation field, ionization fraction, etc.) lead to the observed distribution of line strengths. Excitation is controlled by

  • collisions between particles (LTE often assumed, but not always true)
  • photons from the interstellar radiation field, nearby stars, shocks, CMB, chemistry, cosmic rays
  • recombination/ionization/dissociation

Which of these processes matter where? In class (2011), we drew the following schematic.

A schematic of several structures in the ISM

Key

A: Dense molecular cloud with stars forming within

  • T=10-50~{\rm K};~n>10^3~{\rm cm}^{-3} (measured, e.g., from line ratios)
  • gas is mostly molecular (low T, high n, self-shielding from UV photons, few shocks)
  • not much photoionization due to high extinction (but could be complicated ionization structure due to patchy extinction)
  • cosmic rays can penetrate, leading to fractional ionization: X_I=n_i/(n_H+n_i) \approx n_i/n_H \propto n_H^{-1/2}, where n_i is the ion density (see Draine 16.5 for details, and the short derivation after this list). Measured values for X_e (the electron-to-neutral ratio, which is presumed equal to the ionization fraction) are about X_e \sim 10^{-6}~{\rm to}~10^{-7}.
  • possible shocks due to impinging HII region – could raise T, n, ionization, and change chemistry globally
  • shocks due to embedded young stars w/ outflows and winds -> local changes in T, n, ionization, chemistry
  • time evolution? feedback from stars formed within?
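
Where does the n_H^{-1/2} scaling come from? A one-line sketch, assuming ionization by cosmic rays at a rate \zeta per hydrogen atom balanced by radiative recombination with rate coefficient \alpha, and n_e \approx n_i:

\zeta n_H = \alpha n_e n_i \approx \alpha n_e^2 ~~\Rightarrow~~ n_e = \sqrt{\zeta n_H/\alpha} ~~\Rightarrow~~ X_e \approx n_e/n_H = \sqrt{\zeta/(\alpha n_H)} \propto n_H^{-1/2}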

B: Cluster of OB stars (an HII region ionized by their integrated radiation)

  • 7000 < T < 10,000 K (from line ratios)
  • gas primarily ionized due to photons beyond Lyman limit (E > 13.6 eV) produced by O stars
  • elements other than H have different ionization energy, so will ionize more or less easily
  • HII regions are often clumpy; this is observed as a deficit in the average value of n_e derived from continuum radiation over the entire region as compared to the value of n_e derived from line ratios. In other words, certain regions are denser (in ionized gas) than others.
  • The above introduces the idea of a filling factor, defined as the ratio of filled volume to total volume (in this case the filled volume is that of ionized gas)
  • dust is present in HII regions (as evidenced by observations of scattered light), though the smaller grains may be destroyed
  • significant radio emission: free-free (bremsstrahlung), synchrotron, and recombination line (e.g. H76a)
  • chemistry is highly dependent on n, T, flux, and time

C: Supernova remnant

  • gas can be ionized in shocks by collisions (high velocities required to produce high energy collisions, high T)
  • e.g. if v > 1000 km/s, then T > 10^6 K
  • atom-electron collisions will ionize H, He; produce x-rays; produce highly ionized heavy elements
  • gas can also be excited (e.g. vibrational H2 emission) and dissociated by shocks

D: General diffuse ISM

  • UV radiation from the interstellar radiation field produces ionization
  • n_e is best measured from the pulsar dispersion measure (DM), an observable: {\rm DM} = \int n_e\,dl (see the short example after this list)
  • role of magnetic fields depends critically on X_I (B-fields do not directly affect neutrals, though their effects can be felt through ion-neutral collisions)
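
A minimal sketch of the dispersion measure example mentioned above, assuming an independently known pulsar distance (the numbers are illustrative, not a real pulsar):

    # Mean electron density from a pulsar dispersion measure.
    # DM = integral of n_e dl along the line of sight, so <n_e> = DM / d.
    dm = 30.0    # dispersion measure [pc cm^-3], illustrative value
    d = 1000.0   # assumed pulsar distance [pc]

    n_e_mean = dm / d   # [cm^-3]
    print(f"<n_e> ~ {n_e_mean:.3f} cm^-3 along this line of sight")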

ARTICLE: On the Dark Markings in the Sky

In Journal Club, Journal Club 2013 on February 8, 2013 at 2:46 pm

On the Dark Markings in the Sky by Edward E. Barnard (1919)

Summary by Hope Chen

Abstract

By examining photographic plates of various regions of the sky, Edward E. Barnard concluded in this paper that what he called “dark markings” were, in most cases, due to obscuration by nearby nebulae. This result had a significant impact on the debate regarding the size and dimensions of the Milky Way, and on research into the interstellar medium, particularly work by Vesto Slipher, Heber Curtis and Robert Trumpler. The publication of A Photographic Atlas of Selected Regions of the Milky Way after Barnard’s death, which included many of the regions mentioned in the paper, further provided a new method of doing astronomical research. In this paper and the Atlas, we are also able to see a paradigm very different from that of today.

It is now well-known that the interstellar medium causes extinction of light from background stars. However, think of a time when infrared imaging was impossible, and the word “photon” meant nothing but a suspicious idea. Back in such a time, in the second decade of the twentieth century, Edward Emerson Barnard, by looking at hundreds of photographic plates, proposed the insightful idea that “starless” patches of the sky were dark because they were obscured by nearby nebulae. This idea not only built the foundation of the modern concept of the interstellar medium, but also helped astronomers figure out that the Universe extends far beyond the Milky Way.

Young Astronomer and His Obsession with the Sky

In 1919, E. E. Barnard published this paper and raised the idea that what he called “dark markings” are mostly obscuration from nebulae close to us. The journey, however, started long before the publication of this paper. Born in Nashville, Tennessee in 1857, Barnard was not able to receive much formal education owing to poverty. His first interest, which became important for his later career, was in photography. He started working as a photographer’s assistant at the age of nine, and the work continued throughout most of his teenage years. He then developed an interest in astronomy, or rather, “star-gazing,” and would watch the sky almost every night with his own telescope. He took courses in natural sciences at Vanderbilt University and started his professional career as an astronomer at the Lick Observatory in 1888, where he helped build the Bruce Photographic Telescope and started taking pictures of the sky on photographic plates. He then moved on to the Yerkes Observatory of the University of Chicago, where he worked until his death in 1923. (Introduction of the Atlas, Ref. 2)


Fig. 1 One of the many plates in the Atlas, including the region around Rho Ophiuchi, which was constantly mentioned in many of Barnard’s works. (Ref. 2)

Fig. 1 is one of the many plates taken at the Yerkes Observatory. It shows the region near Rho Ophiuchi, which Barnard and his telescope visited constantly and repeatedly. Barnard noted in his description of this plate, “the [luminous] nebula itself is a beautiful object. With its outlying connections and the dark spot in which it is placed and the vacant lanes running to the East from it, … it gives every evidence that it obscures the stars beyond it.” Numerous similar comments are spread throughout his descriptions of the various regions covered in A Photographic Atlas of Selected Regions of the Milky Way (hereafter, the Atlas). Then, finally, in his 1919 paper, he concluded, “To me these are all conclusive evidence that masses of obscuring matter exist in space and are readily shown on photographs with the ordinary portrait lenses,” although “what the nature of this matter may be is quite another thing.” The publication of these plates in the Atlas (unfortunately after his death; it was put together by Miss Mary R. Calvert, Barnard’s assistant at the Yerkes Observatory, who helped publish many of his works posthumously) also provided a new way of conducting astronomical research, just as the World Wide Telescope does today. The Atlas for the first time allowed researchers to examine the images and the astronomical coordinates, along with lists of dominant objects, at the same time.

Except for quoting Vesto Slipher’s spectrometry measurements of these nebulae, most of the evidence in Barnard’s paper seems more qualitative than quantitative. So, by today’s standards, was the “evidence” really conclusive? Again, the question cannot be answered without knowing the limits of astronomical research at the time. Besides an immature understanding of the underlying physics, astronomers at the beginning of the twentieth century were limited by the lack of tools on both the observational and analysis fronts. Photographic plates like those in the Atlas were pretty much the most advanced imaging technique available; even a quantitative description of “brightness” was not easy with them, let alone an estimate of the extinction of these “dark markings.” That said, a very meaningful and somewhat quantitative assumption was made in Barnard’s paper: that the field stars are more or less uniformly distributed. Barnard came to this assumption by looking at many different places, both in the galactic plane and off the plane, and observing the densities of field stars in these regions. Although numbers were not given in the paper, this was inherently similar to a star count study. Eventually, this assumption led to what Barnard saw as conclusive evidence that the dark markings are obscuring nebulae rather than “vacancies.” Considering the many technical limits of the time, while the paper might not seem scientific by today’s standards, it posed a conclusion strong enough to sustain the more quantitative examinations that followed.

The “Great Debate”

Almost at the same time, the previously mentioned Vesto Slipher (working at the Lowell Observatory) began taking spectroscopic measurements of various nebulae, trying to understand their constituents. He was limited by the available wavelength range and by the era’s understanding of radiative processes (the molecular line emission used widely in interstellar medium research today was not observed until half a century later, in 1970, by Robert Wilson, who, on a side note, also co-discovered the Cosmic Microwave Background). Nevertheless, Slipher was able to determine the velocities of these nebulae by measuring their Doppler shifts, and concluded that many of them move faster than the escape velocity of the Milky Way (Fig. 2). This result, coupled with Barnard’s view of intervening nebulae, revolutionized the notion of the Universe in the 1920s.


Fig. 2 The velocity measurements from spectroscopic observations done by Vesto Slipher. (Ref. 3)

On April 26, 1920 (and through much of the 1920s), the “Great Debate” took place between Harlow Shapley (soon to become the Director of the Harvard College Observatory) and Heber Curtis (of the Lick Observatory, 1902 to 1920). The general debate concerned the dimensions of the Universe and the Milky Way, but the basic issue was simply whether distant “spiral nebulae” were small and lay within the Milky Way, or whether they were large and independent galaxies. Besides the distance and velocity measurements, which suffered from large uncertainties given the techniques available at the time, Curtis was able to “win” the debate by claiming that dark lanes in the “Great Andromeda Nebula” resemble the local dark clouds observed by Barnard (Fig. 3). The result of the debate then sparked a large amount of work on “extragalactic astronomy” in the next two decades and is treated as the beginning of this particular research field.


Fig. 3 The photographic plate of the “Great Andromeda Nebula” taken in 1888 by Isaac Roberts.

The Paper Finally Has a Plot

Then, after the first three decades of the twentieth century, astronomers were finally equipped with a relatively more correct view of the Universe, the idea of photons, and quantum theory. In 1930, Robert J. Trumpler (the namesake of the Trumpler Award) published his paper about reddening and reconfirmed the existence of local “dark nebulae.” Fig. 4 shows the famous plot from his paper, which showed discrepancies between diameter distances and photometric distances of clusters. In the same paper, Trumpler also tried to categorize the effects of the ISM on light from background stars, including what he called “selective absorption,” or reddening as it is known today. This paper, together with many of Trumpler’s other papers, is among the first systematic research into the properties of Barnard’s dark nebulae, now known under various names such as clouds, clumps, and filaments, in the interstellar medium.


Fig. 4 Trumpler’s measurements of diameter distances v. photometric distances for various clusters.

Moral of the Story

As Alyssa said in class, it is often more beneficial than one might think to understand what astronomers knew and didn’t know at different periods of time, and how we came to know what we now take as common sense, not only because the history is interesting but also for a better understanding of the ideas themselves. In this paper, Barnard demonstrated a paradigm that we might call unscientific today, but which made a huge leap toward what later became the modern research field of the interstellar medium.

Selected References

  1. On the Dark Markings in the Sky, E. E. Barnard (1919)
  2. A Photographic Atlas of Selected Regions of the Milky Way, E. E. Barnard, compiled by Edwin B. Frost and Mary R. Calvert (1927)
  3. Spectrographic Observations of Nebulae, V. M. Slipher (1915)
  4. Absorption of Light in the Galactic System, R. J. Trumpler (1930)

ARTICLE: Turbulence and star formation in molecular clouds

In Journal Club, Journal Club 2013, Uncategorized on February 5, 2013 at 4:43 pm

Read the Paper by R.B. Larson (1981)

Summary by Philip Mocz

Abstract

Data for many molecular clouds and condensations show that the internal velocity dispersion of each region is well correlated with its size and mass, and these correlations are approximately of power-law form. The dependence of velocity dispersion on region size is similar to the Kolmogorov law for subsonic turbulence, suggesting that the observed motions are all part of a common hierarchy of interstellar turbulent motions. The regions studied are mostly gravitationally bound and in approximate virial equilibrium. However, they cannot have formed by simple gravitational collapse, and it appears likely that molecular clouds and their substructures have been created at least partly by processes of supersonic hydrodynamics. The hierarchy of subcondensations may terminate with objects so small that their internal motions are no longer supersonic; this predicts a minimum protostellar mass of the order of a few tenths of a solar mass. Massive ‘protostellar’ clumps always have supersonic internal motions and will therefore develop complex internal structures, probably leading to the formation of many pre-stellar condensation nuclei that grow by accretion to produce the final stellar mass spectrum. Molecular clouds must be transient structures, and are probably dispersed after not much more than 10^7 yr.

How do stars form in the ISM? The simple theoretical picture of Jeans collapse, in which a large diffuse uniform cloud starts to collapse and fragments into a hierarchy of successively smaller condensations as the density rises and the Jeans mass decreases, is NOT consistent with observations. Firstly, astronomers see complex structure in molecular clouds: filaments, cores, condensations, and structures suggestive of hydrodynamical processes and turbulent flows. In addition, the data presented in this paper show that the observed linewidths of regions in molecular clouds are far from thermal; they suggest that the ISM is supersonically turbulent on all but the smallest scales. The ISM stages an interplay between self-gravity, turbulent (non-thermal) pressure, and feedback from stars (with the fourth component, thermal pressure, not being dominant on most scales). Larson proposes that protostellar cores are created by supersonic turbulent compression, which causes density fluctuations; gravity becomes dominant in only the densest (and typically subsonic) parts, making them unstable to collapse. Larson uses internal velocity dispersion measurements of regions in molecular clouds from the literature to support his claim.

Key Observational Findings:

Data for regions in molecular clouds with scales 0.1<L<100 pc follow:

(1) A power-law relation between velocity dispersion σ  and the size of the emitting region, L:

\sigma \propto L^{0.38}

Such power-law forms are typical of turbulent velocity distributions. More detailed studies today find \sigma\propto L^{0.5}, suggesting compressible, supersonic Burgers turbulence.

(2) Approximate virial equilibrium:

2GM/(\sigma^2L)\sim 1

meaning the regions are roughly in self-gravitational equilibrium.

(3) An inverse relation between average molecular hydrogen H_2 density, n, and length scale L:

n \propto L^{-1.1}

which means that the column density nL is nearly independent of size, indicative of 1D shock-compression processes which preserve column densities.

Note: these three laws are not independent; they are algebraically linked, so any one law can be derived from the other two. The three laws are mutually consistent.

The Data

Larson compiles data on individual molecular clouds, clumps, and density enhancements of larger clouds from previous radio observations in the literature. The important parameters are:

  • L, the maximum linear extent of the region
  • variation of the radial velocity V across the region
  • variation of linewidth \Delta V across the region
  • mass M obtained without the virial theorem assumption
  • average density of hydrogen, n

Larson digs through papers that investigate optically thin line emissions such as ^{13}CO to determine the variations in V and \Delta V, and consequently calculate the three-dimensional velocity dispersion σ  due to all motions present (as indicated by the dispersions \sigma(V) and \sigma(\Delta V)) in the cloud region (assuming isotropic velocity distributions). Both \sigma(V) and \sigma(\Delta V) are needed to obtain the three-dimensional velocity dispersion for a length-scale because the region has both local velocity dispersion and variation in bulk velocity across the region. The two dispersions add in quadrature: \sigma = \sqrt{\sigma(\Delta V)^2 + \sigma(V)^2}.

To estimate the mass, Larson assumes that the ratio of the column density of ^{13}CO to the column density of H_2 is 2\cdot 10^{-6} and that H_2 comprises 70% of the total mass.

Note that for a molecular cloud with temperature 10 K the thermal velocity dispersion is only 0.32 km/s, while the observed velocity dispersions \sigma are much larger, typically 0.5-5 km/s.
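
As a sanity check on that number: the three-dimensional thermal velocity dispersion is \sqrt{3kT/(\mu m_{\rm H})}, and assuming a mean molecular weight \mu \approx 2.33 (molecular hydrogen plus helium), a 10 K cloud gives:

    import math

    K_B = 1.380649e-23   # Boltzmann constant [J/K]
    M_H = 1.6726e-27     # proton mass [kg]

    T = 10.0     # cloud temperature [K]
    mu = 2.33    # assumed mean molecular weight (H2 gas with helium)

    # 3D thermal velocity dispersion: sigma = sqrt(3 k T / (mu m_H))
    sigma = math.sqrt(3 * K_B * T / (mu * M_H))
    print(f"sigma_thermal ~ {sigma / 1e3:.2f} km/s")   # ~0.33 km/s

which is close to the quoted 0.32 km/s.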

(1) Turbulence in Molecular clouds

A log-log plot of velocity dispersion \sigma versus region length L is shown in Figure 1 below:


Figure 1. 3D internal velocity dispersion versus the size of a region follows a power-law, expected for turbulent flows. The \sigma_s arrow shows the expected dispersion due to thermal pressure only. The letters in the plots represent different clouds (e.g. O=Orion)

The relation is fit with the line

\sigma({\rm km~s}^{-1}) = 1.10 L({\rm pc})^{0.38}

which is similar to the \sigma \propto L^{1/3} power-law for subsonic Kolmogorov turbulence. Note, however, that the motions in the molecular clouds are mostly supersonic. A characteristic trait of turbulence is that there is no preferred length scale, hence the power-law.

Possible subdominant effects modifying the velocity dispersion include stellar winds, supernova explosions, and expansion of HII regions, which may explain some of the scatter in Figure 1.
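
For concreteness, here is a minimal sketch of how such a power law index is typically extracted: a straight-line fit in log-log space. The data below are synthetic, generated around Larson's relation; they are not his actual measurements.

    import numpy as np

    rng = np.random.default_rng(42)

    # Synthetic regions spanning 0.1-100 pc, scattered about sigma = 1.10 L^0.38
    L = 10 ** rng.uniform(-1, 2, 50)                       # sizes [pc]
    sigma = 1.10 * L**0.38 * 10 ** rng.normal(0, 0.1, 50)  # dispersions [km/s]

    # A power law sigma = A * L^p is a straight line in log-log space:
    # log sigma = log A + p log L, so fit a degree-1 polynomial.
    p, logA = np.polyfit(np.log10(L), np.log10(sigma), 1)
    print(f"fitted index p = {p:.2f}, amplitude A = {10**logA:.2f} km/s")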

(2) Gravity in Molecular Clouds

A plot of the ratio 2GM/(\sigma^2 L) for each cloud region, which is expected to be ~1 if the cloud is in virial equilibrium, is shown in Figure 2 below:

Figure 2. Most regions are near virial equilibrium (2GM/(\sigma^2L)\sim 1). The large scatter is mostly due to uncertainties in the estimates of physical parameters.

Most regions are near virial equilibrium. The scatter in the figure is large, but this is expected given the simplifying assumptions about geometric factors in the virial equilibrium equation.

If the turbulent motions in a region dissipate, causing it to contract while remaining in approximate virial equilibrium, then L should decrease and \sigma should increase, which should contribute some of the intrinsic scatter in Figure 1 (the L-\sigma relation). A few observed regions in Figure 1 do have unusually high velocity dispersions, indicating a significant amount of gravitational contraction.
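
To make the virial ratio concrete, here is the quantity plotted in Figure 2 evaluated for a single made-up but representative region (Larson's actual table is not reproduced here):

    import math

    G = 6.674e-11       # gravitational constant [m^3 kg^-1 s^-2]
    M_SUN = 1.989e30    # solar mass [kg]
    PC = 3.086e16       # parsec [m]

    # Illustrative cloud: M = 1e4 Msun, L = 10 pc, and sigma taken from
    # Larson's relation, 1.10 * 10^0.38 ~ 2.6 km/s.
    M = 1e4 * M_SUN
    L = 10 * PC
    sigma = 2.6e3       # [m/s]

    ratio = 2 * G * M / (sigma**2 * L)
    print(f"2GM/(sigma^2 L) = {ratio:.1f}")

The answer is of order unity, which is the statement of approximate virial equilibrium.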

(3) Density Structure in Molecular Clouds

The \sigma \propto L^{0.38} relation implies smaller regions need higher densities to be gravitationally bound (if one also assumes \rho \sim M /L^3 and virial equilibrium 2GM/(\sigma^2L)\sim 1 then these imply \rho \propto L^{-1.24}). This is observed. The correlation between the density of H_2 in a region and the size of the region is shown in Figure 3 below:


Figure 3. An inverse relation is found between region density and size

The relationship found is:

n({\rm cm}^{-3}) = 3400 L(pc)^{-1.1}

This means that the column density nL is proportional to L^{-0.1}, which is nearly scale invariant. Such a distribution could result from shock-compression processes, which preserve the column density of the regions compressed. Larson also suggested that observational selection effects may have limited the range of column densities explored (the CO density needs to be above a critical threshold to be excited, for example). Modern observations, such as those by Lombardi, Alves, & Lada (2010), show that while the column density appears constant across a sample of regions and clouds, a constant column density does not describe individual clouds well (the probability distribution function of column densities follows a log-normal distribution, which is also predicted by detailed theoretical studies of turbulence).

Possible Implications for Star Formation and Molecular Clouds

  • Larson essentially uses relations (1) and (2) to derive a minimum mass and size for molecular clouds by setting the velocity dispersion \sigma to be subsonic (a quick numerical check follows this list). The smallest observed clouds have M\sim M_{\rm \odot} and L\sim 0.1 pc and subsonic internal velocities; these clouds may be protostars. The transition from supersonic to subsonic defines a possible minimum clump mass and size, M\sim 0.25M_{\rm \odot} and L\sim 0.04 pc, which may collapse with high efficiency without fragmentation to produce low mass stars of comparable mass to the initial cloud. Hence the IMF should have a downward turn for masses less than this minimum clump mass. Such a downturn is observed. Simple Jeans collapse fragmentation theory does not predict a turnover at this mass scale.
  • Regions that would form massive stars have a hard time collapsing due to the supersonic turbulent velocities. Hence their formation mechanism likely involves accretion/coalescence of clumps. Thus massive stars are predicted to form as members of clusters/associations, as is usually observed.
  • The above two arguments imply that the low-mass slope of the IMF should be deduced from cloud properties, such as temperature and magnitude of turbulent velocities. The high-mass end is more complicated.
  • The associated timescale for molecular clouds is \tau=L/\sigma, found to be \tau({\rm yr}) \sim 10^6 L({\rm pc})^{0.62}. Hence the timescales are less than 10 Myr for most clouds, meaning that molecular clouds have short lifetimes and are transient.
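
A quick numerical check of the minimum clump argument in the first bullet, combining Larson's fitted \sigma-L relation with the ~0.32 km/s thermal dispersion quoted earlier:

    import math

    G = 6.674e-11      # [m^3 kg^-1 s^-2]
    M_SUN = 1.989e30   # [kg]
    PC = 3.086e16      # [m]

    # Scale at which sigma = 1.10 L^0.38 drops to the thermal ~0.32 km/s:
    sigma_th = 0.32                           # [km/s]
    L_min = (sigma_th / 1.10) ** (1 / 0.38)   # [pc]
    print(f"L_min ~ {L_min:.2f} pc")          # ~0.04 pc

    # Virial mass at that scale, M ~ sigma^2 L / (2G):
    M_min = (sigma_th * 1e3) ** 2 * (L_min * PC) / (2 * G)
    print(f"M_min ~ {M_min / M_SUN:.1f} Msun")

Both numbers land in the regime Larson quotes: L ~ 0.04 pc, and a minimum protostellar mass of a few tenths of a solar mass.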

Philip’s Comments

Larson brings to attention the importance of turbulence for understanding the ISM in this seminal paper. His arguments are simple and are supported by data which are clearly incongruous with the previous simplified picture of Jeans collapse in a uniform, thermally-dominated medium. It is amusing and inspiring that Larson could dig through the literature to find all the data he needed: he was able to synthesize the vast data in a way the observers had missed and build a new coherent picture. Like most good papers, Larson's work fundamentally changes our understanding of an important topic while also provoking new questions for future study. What is the exact nature of the turbulence? What drives and/or sustains it? In what regimes does turbulence stop being important? ISM physics is still an area of active research.

Much ISM research to this day has roots that trace back to Larson's paper. One of the few important things Larson did not address in this paper is the role of magnetic fields in the ISM, which we know today contribute a significant amount to the ISM's energy budget and can be a source of additional instabilities, turbulence, and wave speeds. Also, there was not much data available at the time on the dense, subsonic molecular cloud cores in which thermal velocity dispersion dominates and the important physical processes are different, so Larson only theorizes loosely about their role in star formation.

Larson's estimate of molecular cloud lifetimes (~10 Myr) is relatively short compared to galaxy lifetimes and much shorter than what most models at the time predicted, and it provoked a lot of debate in the field. Older theories, which claim molecular clouds are built up by random collisions and coalescence of smaller clouds, predict that giant molecular clouds take over 100 Myr to form. Turbulence speeds up this timescale, Larson argues, since turbulent motion is not random but systematic on larger scales.

The Plot Larson Did Not Make

Larson assumed spherical geometry for the molecular cloud regions in this paper to keep things simple. Yet he briefly mentions a way to estimate region geometry. He did not apply this correction to the data, and unfortunately does not list the relevant observational parameters (\sigma (V), \sigma (\Delta V)) for the reader to make the calculation. But such a correction would likely have reduced the scatter in the region size L and steepened the \sigma vs. L relation, bringing it closer to what we observe today. His argument for the geometrical correction, fleshed out in more detail here, goes like this.

Let's say the region's shape is some 3D manifold, M. First, let's suppose M is a sphere of radius 1. Then the average distance between points along a line of sight through the center is:

\langle \ell\rangle_{\rm los} = \frac{\int |x_1-x_2|\,dx_1\,dx_2}{\int 1 \,dx_1\,dx_2}= 2/3

where the integrals run over x_1 and x_2 from -1 to 1.

The average distance between points inside the whole volume is:

\langle \ell\rangle_{\rm vol} =\frac{\int \sqrt{(x_1-x_2)^2+(y_1-y_2)^2+(z_1-z_2)^2}r_1^2 r_2^2 \sin\theta_1\sin\theta_2 dr_1 dr_2 d\theta_1 d\theta_2 d\phi_1 d\phi_2}{\int r_1^2 r_2^2 \sin\theta_1\sin\theta_2 dr_1 dr_2 d\theta_1 d\theta_2 d\phi_1 d\phi_2}= 36/35

where the integrals run over r_1 and r_2 from 0 to 1, \theta_1 and \theta_2 from 0 to \pi, and \phi_1 and \phi_2 from 0 to 2\pi.

Thus the ratio between these two characteristic lengths is:

\langle \ell\rangle_{\rm vol}/\langle \ell\rangle_{\rm los}=54/35

which is a number Larson quotes.
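
These two averages are easy to verify numerically; here is a small Monte Carlo sketch (uniform random points on a diameter for the line-of-sight case, and in the unit ball for the volume case):

    import numpy as np

    rng = np.random.default_rng(0)
    N = 200_000

    # Line of sight through the center: points uniform on [-1, 1].
    x1, x2 = rng.uniform(-1, 1, (2, N))
    print(f"<l>_los ~ {np.abs(x1 - x2).mean():.3f}   (exact: 2/3 ~ 0.667)")

    # Whole volume: uniform points in the unit ball via rejection sampling.
    pts = rng.uniform(-1, 1, (2 * N, 3))
    pts = pts[(pts**2).sum(axis=1) <= 1.0]
    half = len(pts) // 2
    d = np.linalg.norm(pts[:half] - pts[half:2 * half], axis=1)
    print(f"<l>_vol ~ {d.mean():.3f}   (exact: 36/35 ~ 1.029)")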

Now, if the velocity scales as a power-law \sigma \propto L^{0.38}, then one would expect:

\sigma = (\langle \ell\rangle_{\rm vol}/\langle \ell\rangle_{\rm los})^{0.38} \sigma(\Delta V)

We also have the relation

\sigma = \sqrt{\sigma(\Delta V)^2 + \sigma(V)^2}

These two relations above allow you to solve for the ratio

\sigma(V)/\sigma(\Delta V) = 0.62

Larson observes this ratio for regions of size less than 10 pc, meaning that the assumption that they are nearly spherical is good. But for larger regions, Larson sees much larger ratios, ~1.7, which can be expected for more sheetlike geometries. For example, if the geometry is 10 by 10 wide and 2 deep, one can calculate that the expected ratio is \sigma (V)/\sigma (\Delta V)= 2.67.

It would have been interesting to see a plot of \sigma (V)/\sigma (\Delta V) as a function of L, which Larson does not include, to learn how geometry changes with length scale. The largest regions deviate the most from spherical geometry, which is perhaps why Larson did not include large ~1000 pc regions in his study.

~~~


The North America Nebula Larson mentions in his introduction. Complex hydrodynamic processes and turbulent flows are at play, able to create structures with sizes less than the Jeans length. (Credit and Copyright: Jason Ware)

References:

Larson (1981) – Turbulence and star formation in molecular clouds

Lombardi, Alves, & Lada (2010) – 2MASS wide field extinction maps. III. The Taurus, Perseus, and California cloud complexes

CHAPTER: Density of the Intergalactic Medium

In Book Chapter on January 14, 2013 at 8:50 pm

(updated for 2013)

From cosmology observations, we know the universe to be very nearly flat (\Omega = 1). This implies that the mean density of the universe is \rho = \rho_{\rm crit} = \frac{3 H_0^2}{8 \pi G} \approx 7 \times 10^{-30} ~{\rm g~cm}^{-3} \Rightarrow n < \rho_{\rm crit}/m_{\rm H} \approx 4.3 \times 10^{-6}~{\rm cm}^{-3}.

This places an upper limit on the density of the Intergalactic Medium.
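
A two-line numerical check of these numbers (H_0 is an assumed input; \rho_{\rm crit} scales as H_0^2, and the 7 \times 10^{-30} quoted above corresponds to a slightly lower H_0 than used here):

    import math

    G = 6.674e-8                 # [cm^3 g^-1 s^-2]
    H0 = 70 * 1e5 / 3.086e24     # 70 km/s/Mpc in [s^-1]
    M_H = 1.67e-24               # hydrogen mass [g]

    rho_crit = 3 * H0**2 / (8 * math.pi * G)
    print(f"rho_crit ~ {rho_crit:.1e} g/cm^3")        # ~9e-30 for H0 = 70
    print(f"n < rho_crit/m_H ~ {rho_crit / M_H:.1e} cm^-3")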

CHAPTER: Density of the Milky Way’s ISM

In Book Chapter on January 14, 2013 at 8:46 pm

(updated for 2013)

How do we know that n \sim 1 ~{\rm cm}^{-3} in the ISM? From the rotation curve of the Milky Way (and some assumptions about the mass ratio of gas to gas+stars+dark matter), we can infer

M_{\rm gas} = 6.7 \times 10^{9} M_\odot

Maps of HI and CO reveal the extent of our galaxy to be

D = 40 kpc

h = 140 pc (scale height of HI)

This implies an approximate volume of

V = \pi D^2 h / 4 = 5 \times 10^{66} ~{\rm cm}^{3}

which yields a density of

\rho = 2.5 \times 10^{-24} ~{\rm g~cm}^{-3}, ~{\rm i.e.}~ n = \rho/m_{\rm H} \approx 1.5~{\rm cm}^{-3} \sim 1~{\rm cm}^{-3}
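
Putting the chain together numerically (a quick check of the arithmetic above):

    import math

    M_SUN = 1.989e33   # [g]
    PC = 3.086e18      # [cm]
    M_H = 1.67e-24     # hydrogen mass [g]

    M_gas = 6.7e9 * M_SUN
    D = 40e3 * PC      # disk diameter
    h = 140 * PC       # HI scale height

    V = math.pi * D**2 * h / 4           # volume of a thin disk
    rho = M_gas / V
    print(f"V ~ {V:.0e} cm^3")           # ~5e66 cm^3
    print(f"rho ~ {rho:.1e} g/cm^3")     # ~2.6e-24 g/cm^3
    print(f"n ~ {rho / M_H:.1f} cm^-3")  # ~1.5, i.e. of order 1 cm^-3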