# Harvard Astronomy 201b

## ARTICLE: On the Density of Neutral Hydrogen in Intergalactic Space

In Journal Club 2013 on April 20, 2013 at 12:25 am

Author: Yuan-Sen Ting, April 2013

In remembrance of the Boston bombing victims: M. Richard, S. Collier, L. Lu, K. Campbell

“Weeds do not easily grow in a field planted with vegetables.
Evil does not easily arise in a heart filled with goodness.”

Dharma Master Cheng Yen.

1. For more information and interactive demonstrations of the concepts discussed here, see the interactive online software module I developed for the Harvard AY201b course.

### Introduction

Although this seminal paper by Gunn & Peterson (1965) comprises only four pages (not to mention that it is set in a single column with large margins!), the authors suggested three ideas that are still actively researched by astronomers today, namely

1. Lyman alpha (Lyα) forest
2. Gunn-Peterson trough
3. Cosmic Microwave Background (CMB) polarization

Admittedly, they put them in a slightly different context, and the current methods on these topics are much more advanced than what they studied, but one could not ask for more from a four-page note! In the following, we give a brief overview of these topics, the initial discussion in Gunn & Peterson (1965), and the current thinking/results. Most of the results are collated from the few review articles/papers in the reference list, and we refer to them throughout.

### Lyman alpha forest

Gunn and Peterson propose using Lyα absorption in distant quasar spectra to study the neutral hydrogen in the intergalactic medium (IGM). The quasar acts as the light source, like shining a flashlight across the Universe, and provides a characteristic, relatively smooth spectrum. As the light travels from the quasar to us, intervening clouds of neutral hydrogen along the line of sight absorb the quasar continuum at the neutral hydrogen (HI) Lyα transition, 1215.67 Å in the rest frame of the absorbing clouds. Because the characteristic quasar spectrum is well understood, it is relatively easy to isolate and study this absorption.

More importantly, in our expanding Universe, the spectrum is continuously redshifted. Therefore the intervening neutral hydrogen at different redshifts will produce absorption at many different wavelengths blueward of the quasar's Lyα emission. After all these absorptions, the quasar spectrum will have a “forest” of lines (see figure below), hence the name Lyα forest.
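To make this concrete, here is a minimal Python sketch (illustrative numbers only, not from the paper) of where absorption from intervening clouds lands in the observed spectrum of a hypothetical z = 3.5 quasar:

```python
# Illustrative sketch: where Lyman-alpha absorption from intervening clouds
# lands in the observed spectrum of a hypothetical z = 3.5 quasar.
LYA_REST = 1215.67  # Lyman-alpha rest wavelength in Angstroms

def observed_wavelength(z_absorber, rest_wavelength=LYA_REST):
    """Redshift a rest-frame transition into the observed frame."""
    return rest_wavelength * (1.0 + z_absorber)

z_quasar = 3.5
quasar_lya_peak = observed_wavelength(z_quasar)

# Clouds at any z < z_quasar absorb blueward of the quasar's own Lya emission,
# building up the "forest".
for z_cloud in (2.0, 2.5, 3.0, 3.4):
    lam = observed_wavelength(z_cloud)
    assert lam < quasar_lya_peak
    print(f"absorber at z = {z_cloud}: line observed at {lam:.1f} A")
```

Every absorber between us and the quasar stamps its line at a different observed wavelength, which is why the forest encodes a line-of-sight history of the IGM.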

Figure 1. Animation showing a quasar spectrum being continuously redshifted due to cosmic expansion, with various Lyman absorbers acting at different redshifts. From the interactive module.

For a quasar at high redshift, along the line of sight, we are sampling a significant fraction of the Hubble time. Therefore, by studying the Lyα forests, we are essentially reconstructing the history of structure formation. To put our discussion in context, there are two types of Lyα absorbers:

1. Low column density absorbers: These absorbers are believed to be the sheetlike, filamentary neutral hydrogen distributed in the web of dark matter that forms in the ΛCDM cosmology. In this cosmic web, the dark matter potential wells are shallow enough to avoid gravitational collapse of the gas but deep enough to prevent these clouds of the inter- and proto-galactic medium from dispersing.
2. High column density absorbers: these are mainly due to intervening galaxies/proto-galaxies, such as the damped Lyα systems. As opposed to the low column density absorbers, since galaxies are generally more metal rich than the filamentary gas, the Lyα absorption due to these systems is usually accompanied by characteristic metal absorption lines. There is also an obvious damping wing visible in the Voigt profile of the Lyα line, hence the name damped Lyα system.

Although the high column density absorbers are extremely important for detecting high redshift galaxies, our discussion here focuses on the IGM, so for the rest of this post we will only discuss the former.

### Low column density absorbers

So, what can we learn about neutral hydrogen in the Universe by looking at Lyα forests? With a large sample of high redshift quasar spectra, we can bin up the absorption lines due to intervening clouds of the same redshift. This allows us to study the properties of gas at that distance (at that time in the Universe’s history), and learn about the evolution of the IGM. In this section, we summarize some of the crucial properties that are relevant in our later discussion:

Figure 2. Animation illustrating the effect of the redshift index on the quasar absorption spectrum. A higher index means more closely spaced absorption near the emission peak. From the interactive module.

It turns out that the properties of the IGM can be rather well described by power laws. If we define n(z) as the number density of clouds as a function of redshift z, and N(ρ) as the number density of clouds as a function of column density ρ, then it has been observed that [see Rauch (1998) for details]

$n(z) \propto (1+z)^{\gamma}, \qquad N(\rho) \propto \rho^{-\beta},$

with power-law indices of roughly $\gamma \approx 2$–3 and $\beta \approx 1.5$.
Note that, in order to study these distributions, ideally, high resolution spectroscopy is needed (FWHM < 25 km s⁻¹) such that each line is resolved.

There is an upturn in the redshift power law index at high redshift, which turns out to be relevant to the discussion of the Gunn-Peterson trough; we will return to this later. It has also been shown, via cosmological hydrodynamic simulations, that the ΛCDM model is able to reproduce the observed power laws.

But we can learn about more than just gas density. The Doppler widths of the Lyα lines are consistent with the photoionization temperature (about 10⁴ K). This suggests that the neutral hydrogen gas in the Lyα clouds is in photoionization equilibrium: the photoionization heating by UV background photons is balanced by cooling via thermal bremsstrahlung, Compton cooling, and the usual recombination. Photoionization equilibrium allows us to calculate how many photons have to be produced, either from galaxies or quasars, to maintain the gas at the observed temperature. From this number of photons, we can deduce how many stars were formed at each epoch in the history of the Universe.
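The equilibrium condition can be sketched numerically. In a highly ionized gas, photoionizations balance recombinations, n_HI Γ = α(T) n_e n_p; the photoionization rate Γ and the power-law recombination fit below are representative assumed values, not measurements:

```python
# Minimal sketch of photoionization equilibrium, n_HI * Gamma = alpha(T) * n_e * n_p,
# solved for the neutral fraction of a highly ionized gas.
# Gamma and the recombination fit are assumed, representative values.
def recombination_coefficient(T):
    """Approximate hydrogen recombination coefficient in cm^3 s^-1,
    using a common power-law form around T ~ 1e4 K (assumed fit)."""
    return 4.0e-13 * (T / 1.0e4) ** -0.7

def neutral_fraction(n_H, T, gamma):
    """Equilibrium n_HI/n_H when the gas is highly ionized (n_e ~ n_p ~ n_H)."""
    alpha = recombination_coefficient(T)
    return alpha * n_H / gamma

# IGM-like conditions (illustrative numbers):
n_H = 1.0e-5      # cm^-3
T = 1.0e4         # K, the photoionization temperature quoted above
gamma = 1.0e-12   # s^-1, a typical UV-background photoionization rate

x_HI = neutral_fraction(n_H, T, gamma)
print(f"neutral fraction ~ {x_HI:.1e}")  # a few times 1e-6 for these inputs
```

The tiny equilibrium neutral fraction is exactly why the forest is a forest of lines rather than a wall of absorption at these redshifts.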

We have been relying on the assumption that these absorbers are intergalactic gas clouds. How can we be sure they are not actually just outflows of gas from galaxies? This question has been tackled by studying the two point correlation function of the absorption lines. In a nutshell, the two point correlation function measures the clustering of the absorbers. Since galaxies are highly clustered, at least at low redshift, if the low density absorbers were related to galactic outflows, one would expect clustering of these absorption signals. Studies of the Lyα forest suggest that the low column density absorbers are not clustered as strongly as galaxies, favoring the interpretation that they are filamentary structures or intergalactic gas [see Rauch (1998) for details].

However, this discussion is only valid in the relatively recent Universe. When we look further away, and therefore study earlier times, there are fewer ionizing photons and correspondingly more neutral hydrogen atoms. With more and more absorption, eventually the lines in the forest become so compact and deep that the “forest” becomes a “trough”.

### Gunn-Peterson trough

In their original paper in 1965, Gunn and Peterson estimated how much absorption there should be in a quasar spectrum due to the amount of hydrogen in the Universe. The amount of absorption can be quantified by the optical depth, which is a measure of transparency: a higher optical depth indicates more absorption by neutral hydrogen. The cosmological model of their time is now obsolete, but their reasoning and derivation remain valid; one can still show that the optical depth is

$\tau(z) = \frac{\pi e^2}{m_e c}\, f \lambda\, \frac{n_{HI}(z)}{H(z)}.$
With a modern twist on the cosmological model, the optical depth of neutral hydrogen at each redshift can be estimated to be

$\tau(z) \approx 4.9 \times 10^5 \left(\frac{\Omega_m h^2}{0.13}\right)^{-1/2} \left(\frac{\Omega_b h^2}{0.02}\right) \left(\frac{1+z}{7}\right)^{3/2} \left(\frac{n_{HI}}{n_H}\right),$

where z is the redshift, λ is the transition wavelength, e is the electron charge, $m_e$ is the electron mass, f is the oscillator strength of the electronic transition from N = 1 to 2, H(z) is the Hubble parameter at redshift z, h is the dimensionless Hubble parameter, $\Omega_b$ is the baryon density of the Universe, $\Omega_m$ is the matter density, $n_{HI}$ is the number density of neutral hydrogen and $n_H$ is the total number density of both neutral and ionized hydrogen.

Since the optical depth is proportional to the density of neutral hydrogen, and the transmitted flux ratio decreases exponentially with the optical depth, even a tiny neutral fraction, nHI/nH = 10⁻⁴ (i.e. one neutral hydrogen atom in ten thousand hydrogen atoms), will give rise to complete Lyα absorption. In other words, the transmitted flux should be zero. This complete absorption is known as the Gunn-Peterson trough.
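A minimal numeric sketch of this saturation argument, using the commonly quoted Gunn-Peterson scaling τ ≈ 4.9×10⁵ ((1+z)/7)^(3/2) (nHI/nH) for standard cosmological parameters (the exact prefactor depends on the cosmology assumed):

```python
import math

# Sketch of the Gunn-Peterson saturation argument, using the standard scaling
# tau ~ 4.9e5 * (Omega_m h^2/0.13)^-1/2 * (Omega_b h^2/0.02)
#       * ((1+z)/7)^1.5 * (n_HI/n_H).
def tau_gp(z, x_hi, omega_m_h2=0.13, omega_b_h2=0.02):
    """Gunn-Peterson optical depth for neutral fraction x_hi at redshift z."""
    return (4.9e5 * (omega_m_h2 / 0.13) ** -0.5 * (omega_b_h2 / 0.02)
            * ((1.0 + z) / 7.0) ** 1.5 * x_hi)

z = 6.0
for x_hi in (1e-6, 1e-5, 1e-4):
    tau = tau_gp(z, x_hi)
    flux = math.exp(-tau)  # transmitted flux ratio exp(-tau)
    print(f"x_HI = {x_hi:.0e}: tau = {tau:.2f}, transmitted flux = {flux:.2e}")
```

At z = 6 a neutral fraction of 10⁻⁴ already gives τ ≈ 49, so the transmitted flux is utterly negligible: the trough is completely black long before the gas is mostly neutral.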

In 1965, Gunn and Peterson performed their calculation assuming that most hydrogen atoms in the Universe are neutral. From their calculation, they expected to see a trough in the quasar (redshift z∼3) data they were studying. This was not the case.

In order to explain the observation, they found that the neutral hydrogen density has to be five orders of magnitude smaller than expected. They came to the striking conclusion that most of the hydrogen in the IGM exists in another form — ionized hydrogen. Lyman series absorptions occur when a ground state electron in a hydrogen atom jumps to a higher state. Ionized hydrogen has no electrons, so it does not absorb photons at Lyman wavelengths.

Figure 3. Animation illustrating Gunn-Peterson trough formation. The continuum blueward of Lyα is drastically absorbed as the neutral hydrogen density increases. From the interactive module.

Gunn and Peterson also showed that since the IGM density is low, radiative ionization (instead of collisional) is the dominant source to maintain such a level of ionization. What an insight!

### The history of reionization

In modern astronomy language, we refer to Gunn and Peterson’s observation by saying the Universe was “reionized”. Combined with our other knowledge of the Universe, we now know the baryonic medium in the Universe went through three phases.

1. In the beginning of the Universe (z>1100), all matter is hot, ionized, optically thick to Thomson scattering and strongly coupled to photons. In other words, the photons are being scattered all the time. The Universe is opaque.
2. As the Universe expands and cools adiabatically, the baryonic medium eventually recombines, in other words, electrons become bound to protons and the Universe becomes neutral. With matter and radiation decoupled, the CMB radiation is free to propagate.
3. At some point, the Universe gets reionized again and most neutral hydrogen becomes ionized. It is believed that the first phase of reionization is confined to the Stromgren spheres of the photoionization sources. Eventually, these spheres grow and overlap, completing the reionization. Sources of photoionizing photons slowly etch away at the remaining dense, neutral, filamentary hydrogen.

When and how reionization happened in detail are still active research areas that we know very little about. Nevertheless, studying the Gunn-Peterson trough can tell us when reionization completed. The idea is simple. As we look at quasars further and further away, there should be a redshift where the quasar sources were emitting before reionization completed. These quasar spectra should show Gunn-Peterson troughs.

### Pinning down the end of reionization

The first confirmed Gunn-Peterson trough was discovered by Becker et al. (2001). As shown in the figure below, the average transmitted flux in some parts of the quasar spectrum is zero.

Figure 4. The first confirmed detection of the Gunn-Peterson trough (top panel, where the continuum goes to zero flux due to intervening absorbers), from a quasar beyond z = 6. The bottom panel shows that a quasar below z = 6 has non-zero flux everywhere. Adapted from Becker et al. (2001).

Figure 5. Close-up spectra of the redshift z>6 quasar. In the regions corresponding to absorbers at redshift 5.95 < z < 6.16 (i.e. the middle regions), the transmitted flux is consistent with zero. This indicates a strong increase of the neutral fraction at z>6. Note that near the quasar (the upper end) there is transmitted flux. This is mostly because the gas near the quasar is ionized by the quasar's own emission, which suppresses the Gunn-Peterson trough; this is known as the proximity effect. In the regions corresponding to lower redshift there is also apparent transmitted flux, indicating that the neutral fraction decreases significantly.

Could this be due to the dense interstellar medium of a galaxy along the line of sight, rather than IGM absorbers? A few lines of evidence refute that idea. First, the metal lines typically seen in galaxies (even in the early Universe) are not observed at the same redshift as the Lyman absorption. Second, astronomers have corroborated this finding using several other quasars. We see these troughs not just at a few particular wavelengths in some quasar spectra (as you might expect from scattered galaxies); rather, the average optical depth steadily grows beyond z∼6, departing from a simple extrapolation of its growth at low redshift (see figure below). The rising optical depth is due to the neutral hydrogen fraction rising dramatically at z>6.

Figure 6. Quasar optical depth as a function of redshift. The observed optical depth exceeds the predicted trend (solid line), suggesting reionization. Adapted from Becker et al. (2001).

Similar studies using high redshift (z>5) quasars confirm a dramatic increase in the IGM neutral fraction, from nHI/nH = 10⁻⁴ at z<6 to nHI/nH > 10⁻³–0.1 at z∼6. At z>6, complete absorption troughs begin to appear.

Figure 7. A more recent compilation of Figure 6 from Fan et al. (2006). Note that the sample variance increases rapidly with redshift. This could be explained by the fact that high redshift star-forming galaxies, which likely provide most of the UV photons for reionization, are highly clustered, therefore reionization is clumpy in nature.

Figure 8. A similar plot, but translating the optical depth into volume-averaged neutral hydrogen fraction of the IGM. The solid points and error bars are measurements based on 19 high redshift quasars. The solid line is inferred from the reionization simulation from Gnedin (2004) which includes the overlapping stage to post-overlapping stage of the reionization spheres.

But it’s important to note that, because even a very small amount of neutral hydrogen in the IGM can produce optical depth τ≫1 and cause the Gunn-Peterson trough, this observational feature saturates very quickly and becomes insensitive to higher densities. That means that the existence of a Gunn-Peterson trough by itself does not prove that the object is observed prior to the start of reionization. Instead, this technique mostly probes the later stages of cosmic reionization. So the question now is: when does reionization get started?

### The start of reionization

Gunn and Peterson told us not only how to probe the end of reionization through the Gunn-Peterson trough, but also how to probe the early stages of reionization using the polarization of the CMB. They discussed the following scenario: if the Universe started to be reionized at a certain redshift, then light from background sources beyond this redshift should experience Thomson scattering off the free electrons.

Thomson scattering is the scattering of photons by free electrons. Recall that starlight scattered by a surrounding nebula becomes polarized, owing to the angle dependence of Thomson scattering. By the same analogy, the CMB, after Thomson scattering off the ionized IGM, is polarized.

As opposed to the case of the Gunn-Peterson trough, Thomson scattering results from light interacting with the ionized fraction of hydrogen, not the neutral fraction. Therefore, the large scale polarization of the CMB can be regarded as a complementary probe of reionization. The CMB polarization detection by the Wilkinson Microwave Anisotropy Probe (WMAP) space telescope suggests a significant ionization fraction extending to much earlier in cosmic history, z∼11±3 (see figure below). This implies that reionization is a process which starts somewhere in z∼8–14 and ends at z∼6–8, and thus appears to be quite extended in time. This is in contrast to recombination, when electrons first became bound to protons, which occurred over a very narrow range in time (z=1089±1).
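The quantity the CMB actually constrains is the electron-scattering optical depth, τ_e = ∫ n_e σ_T c dz/((1+z)H(z)). A rough sketch for a toy model in which hydrogen is fully ionized out to z_reion (cosmological parameters below are illustrative, and helium electrons are ignored, which lowers τ slightly):

```python
import math

# Rough sketch: electron-scattering optical depth to the CMB for a toy
# instantaneous-reionization model. Parameters are illustrative.
SIGMA_T = 6.652e-25            # Thomson cross section, cm^2
C = 2.998e10                   # speed of light, cm/s
H0 = 70.0 * 1.0e5 / 3.086e24   # 70 km/s/Mpc converted to s^-1
OMEGA_M, OMEGA_L = 0.3, 0.7
N_H0 = 1.9e-7                  # present-day mean hydrogen density, cm^-3

def hubble(z):
    """Flat LambdaCDM Hubble parameter in s^-1."""
    return H0 * math.sqrt(OMEGA_M * (1.0 + z) ** 3 + OMEGA_L)

def tau_e(z_reion, steps=10000):
    """Midpoint integration of n_e sigma_T c dz / ((1+z) H(z))."""
    dz = z_reion / steps
    total = 0.0
    for i in range(steps):
        z = (i + 0.5) * dz
        n_e = N_H0 * (1.0 + z) ** 3   # fully ionized hydrogen
        total += n_e * SIGMA_T * C / ((1.0 + z) * hubble(z)) * dz
    return total

print(f"tau_e(z_reion = 10) ~ {tau_e(10.0):.3f}")  # of order 0.07
```

An optical depth of several per cent is exactly the level WMAP measured, which is why the polarization signal points to reionization starting around z ∼ 10.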

In summary, a very crude chart on reionization using Gunn-Peterson trough and CMB polarization is shown below.

Figure 9. The volume-averaged neutral fraction of the IGM versus redshift measured using various techniques, including CMB polarization and Gunn-Peterson troughs.

So far, we have left out the discussion of Lyβ, Lyγ absorptions as well as the cosmic Stromgren sphere technique as shown in the figure above. We will now fill in the details before proceeding to the suspects of reionization.

### How about Lyman beta and Lyman gamma

When people talk about the Lyα forest, it can be an over-simplification. Besides Lyα, other neutral hydrogen transitions, such as Lyβ and Lyγ, play a crucial role; sometimes we can even study the Lyα transition of ionized helium (note that Lyα only designates the transition from electronic state N=1 to 2). We first look at the advantages of, and impediments to, including Lyβ and Lyγ absorption lines in the study of the IGM.

1. Recall the formula for the optical depth: τ ∝ f λ. Because of their smaller oscillator strengths and shorter wavelengths, at the same neutral fraction and density the Gunn-Peterson optical depths of Lyβ and Lyγ are 6.2 and 17.9 times smaller than that of Lyα. A smaller optical depth means the absorption lines only saturate at a higher neutral fraction/density. Note that once a line is saturated, a large change in column density induces only a small change in apparent line width; in other words, we are on the flat part of the curve of growth, where column densities are relatively difficult to measure. Since Lyβ and Lyγ have smaller optical depths, they can provide more stringent constraints on the IGM ionization state and density when Lyα is saturated. Especially in the case of the Gunn-Peterson trough, as shown in the figures above, the ability to detect Lyβ and Lyγ is crucial for studying reionization.

Figure 10. The left-hand panel shows the Voigt profile of the Lyα absorption line. The animation shows that when the line is saturated, a large change in column density induces only a small change in the equivalent width (right-hand panel). This degeneracy makes the measurement of column density very difficult when the absorption line is saturated. From the interactive module.

2. Note that, for the same IGM clouds, Lyα absorption is always accompanied by Lyβ and Lyγ absorption. Since Lyα has a larger optical depth, it has broader wings in the Voigt profile and a higher chance of blending with other lines. Simultaneous fits to the higher order Lyman lines can therefore help to deblend these lines.

3. As the redshift of the quasar increases, the Lyβ region overlaps substantially with the Lyα forest, and so forth for the higher order lines. Within these regions it is difficult to disentangle which absorption is due to Lyα and which is due to higher order transitions. Due to this limitation, studies of Lyγ and higher order lines are rare.

Figure 11. Animation showing the Lyα forest including Lyα and Lyβ absorption lines. One can see substantial overlap of Lyα absorption lines onto the Lyβ region, which makes the study of higher order absorptions very difficult in practice. From the interactive module.
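The τ ∝ f λ scaling can be checked directly with the standard oscillator strengths and rest wavelengths of the first three Lyman lines:

```python
# The tau ∝ f*lambda scaling, evaluated with standard oscillator strengths
# and rest wavelengths of the first three Lyman-series lines.
LINES = {
    "Ly-alpha": (1215.67, 0.4164),  # (wavelength in Angstroms, oscillator strength)
    "Ly-beta":  (1025.72, 0.0791),
    "Ly-gamma": (972.54, 0.0290),
}

def gp_weight(line):
    """Relative Gunn-Peterson optical depth weight, f * lambda."""
    lam, f = LINES[line]
    return f * lam

ratio_beta = gp_weight("Ly-alpha") / gp_weight("Ly-beta")    # ~6.2
ratio_gamma = gp_weight("Ly-alpha") / gp_weight("Ly-gamma")  # ~17.9
print(f"tau_alpha / tau_beta  ~ {ratio_beta:.1f}")
print(f"tau_alpha / tau_gamma ~ {ratio_gamma:.1f}")
```

The ratios reproduce the factors of 6.2 and 17.9 quoted in point 1 above.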

### Probing even higher neutral fraction

In some of the plots above, we have seen that as the neutral fraction increases, Lyα starts to saturate and can only give a moderate lower bound on the optical depth and the neutral fraction. For higher neutral fraction, we have to use Lyβ and Lyγ. It is also clear that even Lyγ will get saturated very quickly. Are there other higher density probes?

Yes, there are other probes! At high neutral fraction, a quasar in this kind of environment resembles a scaled-up version of an O/B-type star in a dense neutral hydrogen region (although the latter medium is mostly molecular and the former atomic). A luminous quasar produces a highly ionized HII region, creating a so-called cosmic Stromgren sphere. These cosmic Stromgren spheres can reach physical sizes of about 5 Mpc owing to the extreme luminosity of the quasars.
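An order-of-magnitude sketch of that size: before recombinations matter, the ionized volume simply contains the photons emitted so far, (4/3)π R³ n_HI ≈ Q t. The photon rate Q and quasar age below are assumed, illustrative values:

```python
import math

# Order-of-magnitude sketch of a cosmic Stromgren sphere by photon counting:
# (4/3) pi R^3 n_HI ~ Q * t_quasar. Q and t_quasar are illustrative values.
MPC_CM = 3.086e24   # cm per Mpc
YR_S = 3.156e7      # seconds per year

def stromgren_radius(Q, t_quasar, n_HI):
    """Radius (cm) of the ionized region after time t_quasar (s),
    for ionizing photon rate Q (s^-1) and neutral density n_HI (cm^-3)."""
    return (3.0 * Q * t_quasar / (4.0 * math.pi * n_HI)) ** (1.0 / 3.0)

# A luminous z ~ 6 quasar shining into a largely neutral IGM:
n_H = 1.9e-7 * (1.0 + 6.0) ** 3   # mean hydrogen density at z = 6, cm^-3
R = stromgren_radius(Q=1.0e57, t_quasar=1.0e7 * YR_S, n_HI=n_H)
print(f"R ~ {R / MPC_CM:.1f} Mpc")  # a few Mpc, consistent with the text
```

Note how sensitively R depends on the assumed quasar age and the surrounding neutral density, which is precisely the source of the large systematic uncertainties discussed next.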

Measurements of cosmic Stromgren spheres are sensitive to a much larger neutral fraction than even Lyγ can probe. On the flip side of the coin, however, they are subject to many systematics that one can easily imagine, including the three-dimensional geometry of the quasar emission, the age of the quasar, the clumping factor of the surrounding material, and so on. This explains the large error bars in the reionization chart above.

Although hydrogen is the most abundant element in the Universe, we also know that helium constitutes a large fraction of the Universe. A natural question to ask: how about helium?

As neutral helium has an ionization potential and recombination rate similar to those of neutral hydrogen, its reionization is likely to have happened at a similar time as that of neutral hydrogen. On the other hand, singly ionized HeII has a higher ionization potential, and its reionization is observed to happen at a lower redshift, about z∼3. The fact that ionized helium has a higher ionization potential is crucial for disentangling the UV background sources; we will discuss this in detail when we talk about the suspects. In this section, we discuss some of the advantages and impediments of using helium Lyα.

1. Helium and hydrogen have different atomic masses. Both bulk motion and thermal motion in a gas contribute to line broadening, but bulk motions are insensitive to atomic mass whereas thermal velocities scale as 1/√μ, where μ is the atomic mass. By comparing the broadening of the HeII 304Å and HI 1215Å lines, it is in principle possible to use the difference in atomic masses to separate the contributions of bulk and thermal motions to the line broadening. Theories of IGM kinematics can therefore, in principle, be tested, though the usefulness of this approach remains to be seen.
2. Since ionized helium has a higher ionization potential (Lyman Limit at 228Å) than the neutral hydrogen (Lyman Limit at 912Å), the relative ionization fraction is a sensitive probe of the spectral shape of the UV background. For instance, the contribution from quasars will provide harder spectra than the soft stellar sources and is able to doubly ionize helium.
3. Although the helium Lyα forest is awesome, only a small fraction of all quasars are suitable for a HeII absorption search. The short 304Å wavelength of the HeII Lyα line requires the quasar to be significantly redshifted before the line can be reached by high resolution spectrographs. Furthermore, the helium Lyα line lies at a shorter wavelength than the neutral hydrogen Lyman limit (912Å). As shown in the interactive module, bound-free absorption beyond the Lyman limit, especially in the presence of a high column density absorber, creates a large blanketing of lines. This renders the large majority of quasars useless for a HeII search even if they are redshifted enough.
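The mass-scaling argument in point 1 can be sketched as a round trip: forward-model the line widths b² = b_bulk² + 2kT/m for HI (mass 1) and HeII (mass 4), then invert the pair to recover the temperature and bulk velocity. All numbers below are synthetic:

```python
import math

# Sketch of separating thermal and bulk line broadening using HI vs HeII:
# b^2 = b_bulk^2 + 2kT/m, with m = m_H for HI and 4*m_H for HeII.
# All line widths below are synthetic, for a round-trip check.
K_B = 1.381e-16   # Boltzmann constant, erg/K
M_H = 1.673e-24   # hydrogen mass, g

def line_width(T, b_bulk, mass):
    """Doppler parameter b in cm/s from temperature and bulk broadening."""
    return math.sqrt(b_bulk ** 2 + 2.0 * K_B * T / mass)

def solve_T_and_bulk(b_HI, b_HeII):
    """Invert the two measured widths for temperature and bulk velocity:
    b_HI^2 - b_HeII^2 = 2kT * (1/m_H - 1/(4 m_H))."""
    T = (b_HI ** 2 - b_HeII ** 2) / (2.0 * K_B * (1.0 / M_H - 1.0 / (4.0 * M_H)))
    b_bulk = math.sqrt(b_HI ** 2 - 2.0 * K_B * T / M_H)
    return T, b_bulk

# Forward model, then invert:
T_true, b_bulk_true = 2.0e4, 10.0e5   # 20,000 K and 10 km/s of bulk motion
b_HI = line_width(T_true, b_bulk_true, M_H)
b_HeII = line_width(T_true, b_bulk_true, 4.0 * M_H)
T_rec, b_bulk_rec = solve_T_and_bulk(b_HI, b_HeII)
print(f"recovered T = {T_rec:.0f} K, b_bulk = {b_bulk_rec / 1e5:.1f} km/s")
```

Two measured widths, two unknowns: the factor-of-four mass difference is what makes the system invertible at all.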

### The suspects — who ionized the gas?

We have now discussed the techniques for catching the suspects at length, so let us return to the main question of reionization: who did it? As we discussed earlier in this post, the IGM is believed to have been reionized early in the Universe's history via photoionization. Who produced all the high energy, ionizing photons? There are two natural suspects:

1. Star-forming galaxies
2. Quasars

It has been argued that soft sources like star-forming galaxies have to be the dominant sources, owing to the fact that there were not many quasars in the early Universe. The Sloan Digital Sky Survey (SDSS) has shown a steep decline in the number of quasars beyond the peak at z=2.5. If most quasars formed after this epoch, how could they play a significant role in the reionization of the Universe, which seems to have been completed as early as z∼6?

Taking a closer look at this problem, Faucher-Giguère et al. (2008) made an independent estimate of the photoionization contribution from quasars. In their study, they consider the Lyα forest optical depth at redshift 2 < z < 4.2, measured from 86 high resolution quasar spectra.

The idea for estimating the fractional contribution from quasars was described earlier in this post; we will reiterate it in more detail here. Only quasars can produce the very high energy (∼50 eV) photons necessary to doubly ionize helium. Therefore the luminosity of sources at ∼50 eV directly tells us the contribution from quasars. By assuming a spectral index for the spectral energy distribution, one can then extrapolate to lower energies and infer the photoionization rate in the energy range relevant to HI ionization. They show that at most about 20 per cent of these photons could come from quasars.
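The extrapolation step can be sketched in a few lines: if the quasar emissivity follows a power law L_ν ∝ ν^(−α), the output measured near the HeII ionizing edge (∼4 Ryd) fixes the output at the HI edge (1 Ryd). The spectral indices below are assumed, representative values, not the ones used in the paper:

```python
# Sketch of extrapolating a power-law spectrum L_nu ~ nu^-alpha from the
# HeII ionizing edge (~4 Ryd) down to the HI edge (1 Ryd).
# The spectral indices are assumed, representative values.
def extrapolate(L_at_4ryd, alpha=1.6):
    """Specific luminosity at 1 Ryd given its value at 4 Ryd."""
    return L_at_4ryd * 4.0 ** alpha

# A softer (steeper) assumed spectrum boosts the inferred 1-Ryd output more:
for alpha in (1.2, 1.6, 2.0):
    print(f"alpha = {alpha}: L(1 Ryd)/L(4 Ryd) = {extrapolate(1.0, alpha):.1f}")
```

The assumed spectral index is thus a key systematic: it sets how much HI-ionizing output is attributed to quasars from a given HeII-edge measurement.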

Besides showing that quasars cannot be the main suspect, Faucher-Giguère et al. also used the Lyα forest to derive the star formation rate indirectly. We have discussed that the absorbers are in photoionization equilibrium. With this assumption, the authors used the Lyα optical depth to derive the photoionization rate at different redshifts. From the photoionization rate, they inferred the UV emissivity of star-forming galaxies, and hence the star formation rate in galaxies at different times in the history of the Universe.

The figure below shows that the derived hydrogen photoionization rate is remarkably flat over the history of the Universe. This suggests a roughly constant source of photoionizing photons over a large span of cosmic history. Since the contribution from quasars only becomes significant at z≲2.5, this is only possible if the star formation rate in galaxies remains high at high redshift (z∼2.5–4). As the figure shows, such a star formation rate is consistent with the simulation of Hernquist & Springel (2003).

Figure 12. Observations of the hydrogen photoionization rate compared to the best-fitting model from Hernquist & Springel (2003). This suggests that the combination of quasars and stars alone may account for the photoionization. Adapted from Faucher-Giguère et al. (2008).

But there is another way to trace the history of star formation: directly counting the number and size of galaxies at different redshifts using photometric surveys. It turns out that the results from these direct surveys, as performed by Hopkins & Beacom (2006) for example, are in tension with the indirect approach. The photometric surveys suggest that the star formation rate decreases beyond z∼2.5 [Prof. Hans-Walter Rix once wittily quipped that the Universe gets tenured at redshift z∼2.5], as in the figure below. These results say that both of the major photoionization machines (quasars and stars in galaxies) would be in decline at z>2.5.

Figure 13. Like Figure 12 above, but now using the models of Hopkins & Beacom (2006) where the star formation history is inferred from photometric surveys.

If the photometric surveys are right, Faucher-Giguère et al. argue, the Universe could not provide enough photons to reionize the intergalactic medium. So why are there two different observational results?

To sum up this section, by studying Lyα forests, Faucher-Giguère et al. argue that

1. Star-forming galaxies are the major sources of photoionization, although quasars can contribute as much as 20 per cent.
2. The star formation rate has to be higher than the one estimated observationally from the photometric surveys to provide the required photons.

### How the suspect did it

Although the Faucher-Giguère et al. results suggest that star-forming galaxies are the major sources of photoionization, exactly how these suspects actually did it is not clear. The discrepancy between direct and indirect tracers of the star formation history of the Universe might be reconcilable. Observations of star-forming galaxies are plagued with uncertainties, many of which are still areas of active research. Among the major uncertainties are:

1. The star formation rate of galaxies at high redshift. Galaxies that are faint and far away are hard to observe, so how many photons could they provide? Submillimetre observations in the near future could shed more light on this, thanks to their amazing negative K-correction and their ability to probe gas and dust in high redshift galaxies.
2. Starburst galaxies create dust, and dust obscures observations. Infrared and submillimetre observations are crucial for detecting these faint galaxies. It is believed that perhaps most of the ionizing photons come from these unassuming, hard-to-detect galaxies, because they are likely individually small and dim but collectively numerous.
3. IGM clumping factor. In other words, how concentrated is the material surrounding the galaxies? This factor could affect the fraction of ionizing photons escaping into the IGM.

In short, it is possible that star formation did indeed begin far back in cosmic history, at z>2.5, but that the photometric surveys underestimate it because of these uncertainties. In that case, the results from simulations, Lyα forests, and photometric surveys could all fit together.

### Any other suspects on the loose?

We now have two suspects in custody. The remaining question: are there any suspects on the loose? To answer this question, some other interesting proposals have been raised, among them:

1. High-energy X-ray photons from supernova remnants or microquasars at high redshift. This is possible, but such a contribution would likely overproduce the soft X-ray background that we observe today.
2. Neutrinos are believed to gain their mass through the seesaw mechanism, one potential ingredient of which is the sterile neutrino. Reionization by decaying sterile neutrinos cannot yet be completely ruled out. That said, since there is no confirmed detection of a sterile neutrino in particle physics experiments, one should not take this possibility too seriously.

In summary, we cannot be 100 per cent sure, but suspects other than quasars and stellar sources are unlikely.

### So what have we learned?

To summarize what we have discussed in this post:

1. Non-detection can be a good thing. The non-detection of the Gunn-Peterson trough in the early days demonstrated that the IGM is mostly in an ionized state.
2. The Lyα forest gives us a 3D picture of the IGM.
3. The Gunn-Peterson trough provides a direct probe of the end stage of reionization.
4. CMB radiation polarization suggests when the process of reionization began, and shows that reionization is a process extended in time.
5. Star-forming galaxies are likely to be responsible for the reionization, but how they exactly did it is a question still under investigation.

## CHAPTER: Excitation Processes: Collisions

In Book Chapter on March 7, 2013 at 3:18 pm

(updated for 2013)

Collisional coupling means that the gas can be treated in the fluid approximation, i.e. we can treat the system on a macrophysical level.

Collisions are of key importance in the ISM:

• cause most of the excitation
• can cause recombinations (electron + ion)

Three types of collisions

1. Coulomb force-dominated ($r^{-1}$ potential): electron-ion, electron-electron, ion-ion
2. Ion-neutral: induced dipole in neutral atom leads to $r^{-4}$ potential; e.g. electron-neutral scattering
3. neutral-neutral: van der Waals forces -> $r^{-6}$ potential; very low cross-section

We will discuss (3) and (2) below; for ion-electron and ion-ion collisions, see Draine Ch. 2.

In general, we will parametrize the interaction rate between two bodies A and B as follows:

${\frac{\rm{reaction~rate}}{\rm{volume}}} = \langle\sigma v\rangle_{AB}\, n_A n_B$

In this equation, $\langle\sigma v\rangle_{AB}$ is the collision rate coefficient in $\rm{cm}^3~\rm{s}^{-1}$, defined as $\langle\sigma v\rangle_{AB}= \int_0^\infty \sigma_{AB}(v) f_v~dv$, where $\sigma_{AB} (v)$ is the velocity-dependent cross section and $f_v~dv$ is the particle velocity distribution, i.e. the probability that the relative speed between A and B is v. For the Maxwellian velocity distribution,

$f_v~dv = 4 \pi \left(\frac{\mu'}{2\pi k T}\right)^{3/2} e^{-\mu' v^2/2kT} v^2~dv$,

where $\mu'=m_A m_B/(m_A+m_B)$ is the reduced mass. The center of mass energy is $E=1/2 \mu' v^2$, and the distribution can just as well be written in terms of the energy distribution of particles, $f_E dE$. Since $f_E dE = f_v dv$, we can rewrite the collision rate coefficient in terms of energy as

$\langle\sigma v\rangle_{AB}=\left(\frac{8kT}{\pi\mu'}\right)^{1/2} \int_0^\infty \sigma_{AB}(E) \left(\frac{E}{kT}\right) e^{-E/kT} \frac{dE}{kT}$.

These collision coefficients can occasionally be calculated analytically (via classical or quantum mechanics), and can in other situations be measured in the lab. The collision coefficients often depend on temperature. For practical purposes, many databases tabulate collision rates for different molecules and temperatures (e.g., the LAMBDA database).

For more details, see Draine, Chapter 2. In particular, he discusses 3-body collisions relevant at high densities.

## CHAPTER: Spitzer Notation

In Book Chapter on March 5, 2013 at 3:19 am

(updated for 2013)

We will use the notation from Spitzer (1978). See also Draine, Ch. 3. We represent the density of a state j as

$n_j(X^{(r)})$, where

• n: particle density
• j: quantum state
• X: element
• (r): ionization state
• For example, $HI = H^{(0)}$

In his book, Spitzer defines something called “Equivalent Thermodynamic Equilibrium” or “ETE”. In ETE, $n_j^*$ gives the “equivalent” density in state j. The true (observed) value is $n_j$. He then defines the ratio of the true density to the ETE density to be

$b_j = n_j / n_j^*$.

This quantity approaches 1 when collisions dominate over ionization and recombination. For LTE, $b_j = 1$ for all levels. The level population is then given by the Boltzmann equation:

$\frac{n_j^\star(X^{(r)})}{n_k^\star(X^{(r)})} = (\frac{g_{rj}}{g_{rk}})~e^{ -(E_{rj} - E_{rk}) / kT }$,

where $E_{rj}$ and $g_{rj}$ are the energy and statistical weight (degeneracy) of level j, ionization state r. The exponential term is called the “Boltzmann factor” and determines the relative probability for a state.

The term “Maxwellian” describes the velocity distribution of a 3-D gas. The “Maxwell-Boltzmann” distribution is the special case of the Boltzmann distribution applied to velocities.

Using our definition of b and dropping the “r” designation,

$\frac{n_k}{n_j} = \frac{b_k}{b_j} (\frac{g_k}{g_j})~e^{-h \nu_{jk} / kT }$

where $\nu_{jk}$ is the frequency of the radiative transition from k to j. We will use the convention that $E_k > E_j$, such that $E_{jk}=h\nu_{jk} > 0$.

To find the fraction of atoms of species $X^{(r)}$ excited to level j, define:

$\sum_k n_k^\star (X^{(r)}) = n^\star(X^{(r)})$

as the particle density of $X^{(r)}$ in all states. Then

$\frac{ n_j^* (X^{(r)}) } { n^* (X^{(r)})} = \frac{ g_{rj} e^{-E_{rj} / kT} } {\sum_k g_{rk} e^{ -E_{rk} / kT} }$

Define $f_r$, the “partition function” for species $X^{(r)}$, to be the denominator of the RHS of the above equation. Then we can write, more simply:

$\frac{n_j^\star}{n^\star} = \frac{g_{rj}}{f_r} e^{-E_{rj}/kT}$

to be the fraction of particles that are in state j. By computing this for all j we now know the distribution of level populations for ETE.
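The partition-function bookkeeping above is easy to make concrete. A short Python sketch computes $n_j^\star/n^\star = (g_{rj}/f_r)\,e^{-E_{rj}/kT}$ for a set of levels; the two-level energies and weights below are illustrative values resembling the HI hyperfine pair, not taken from the text:

```python
import numpy as np

k_B = 8.617333262e-5  # Boltzmann constant, eV/K

def level_fractions(E, g, T):
    """ETE fractional populations n_j*/n* for levels with energies E [eV]
    and statistical weights g, at temperature T [K]."""
    E = np.asarray(E, dtype=float)
    g = np.asarray(g, dtype=float)
    boltz = g * np.exp(-E / (k_B * T))
    return boltz / boltz.sum()  # the denominator is the partition function f_r

# Two-level example resembling the HI 21 cm hyperfine pair:
# energies [0, ~5.9e-6] eV and weights [1, 3]. Since kT >> E here,
# the Boltzmann factors are ~1 and populations follow the weights alone.
frac = level_fractions([0.0, 5.9e-6], [1, 3], T=100.0)
print(frac)  # upper level holds ~3/4 of the atoms
```

Summing the returned fractions over all j recovers 1 by construction, which is the statement that $f_r$ normalizes the level populations.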

## CHAPTER: Thermodynamic Equilibrium

In Book Chapter on February 28, 2013 at 3:13 am

(updated for 2013)

Collisions and radiation generally compete to establish the relative populations of different energy states. Randomized collisional processes push the distribution of energy states to the Boltzmann distribution, $n_j \propto e^{-E_j / kT}$. When collisions dominate over competing processes and establish the Boltzmann distribution, we say the ISM is in Thermodynamic Equilibrium.

Often this only holds locally, hence the term Local Thermodynamic Equilibrium or LTE. For example, the fact that we can observe stars implies that energy (via photons) is escaping the system. While this cannot be considered a state of global thermodynamic equilibrium, localized regions in stellar interiors are in near-equilibrium with their surroundings.

But the ISM is not like stars. In stars, most emission, absorption, scattering, and collision processes occur on timescales very short compared with dynamical or evolutionary timescales. Due to the low density of the ISM, interactions are much more rare. This makes it difficult to establish equilibrium. Furthermore, many additional processes disrupt equilibrium (such as energy input from hot stars, cosmic rays, X-ray background, shocks).

As a consequence, in the ISM the level populations in atoms and molecules are not always in their equilibrium distribution. Because of the low density, most photons are created from (rare) collisional processes (except in locations like HII regions where ionization and recombination become dominant).

## CHAPTER: Introductory Remarks on Radiative Processes

In Book Chapter on February 28, 2013 at 3:10 am

(updated for 2013)

The goal of the next several sections is to build an understanding of how photons are produced by, are absorbed by, and interact with the ISM. We consider a system in which one or more constituents are excited under certain physical conditions to produce photons, then the photons pass through other constituents under other conditions, before finally being observed (and thus affected by the limitations and biases of the observational conditions and instruments) on Earth. Local thermodynamic equilibrium is often used to describe the conditions, but this does not always hold. Remember that our overall goal is to turn observations of the ISM into physics, and vice-versa.

The following contribute to an observed Spectral Energy Distribution:

• gas: spontaneous emission, stimulated emission (e.g. masers), absorption, scattering processes involving photons + electrons or bound atoms/molecules
• dust: absorption; scattering (the sum of these two -> extinction); emission (blackbody modified by wavelength-dependent emissivity)
• other: synchrotron, bremsstrahlung, etc.

The processes taking place in our “system” depend sensitively on the specific conditions of the ISM in question, but the following “rules of thumb” are worth remembering:

1. Very rarely is a system actually in a true equilibrium state.
2. Except in HII regions, transitions in the ISM are usually not electronic.
3. The terms Upper Level and Lower Level refer to any two quantum mechanical states of an atom or molecule where $E_{\rm upper}>E_{\rm lower}$. We will use k to index the upper state, and j for the lower state.
4. Transitions can be induced by photons, cosmic rays, collisions with atoms and molecules, and interactions with free electrons.
5. Levels can refer to electronic, rotational, vibrational, spin, and magnetic states.
6. To understand radiative processes in the ISM, we will generally need to know the chemical composition, ambient radiation field, and velocity distribution of each ISM component. We will almost always have to make simplifying assumptions about these conditions.

## CHAPTER: Measuring States in the ISM

In Book Chapter on February 26, 2013 at 3:00 am

(updated for 2013)

There are two primary observational diagnostics of the thermal, chemical, and ionization states in the ISM:

1. Spectral Energy Distribution (SED; broadband low-resolution)
2. Spectrum (narrowband, high-resolution)

#### SEDs

Very generally, if a source’s SED is blackbody-like, one can fit a Planck function to the SED and derive the temperature and column density (if one can assume LTE). If an SED is not blackbody-like, the emission is the sum of various processes, including:

• thermal emission (e.g. dust, CMB)
• synchrotron emission (power law spectrum)
• free-free emission (thermal for a thermal electron distribution)

#### Spectra

Quantum mechanics combined with chemistry can predict line strengths. Ratios of lines can be used to model “excitation”, i.e. what physical conditions (density, temperature, radiation field, ionization fraction, etc.) lead to the observed distribution of line strengths. Excitation is controlled by

• collisions between particles (LTE often assumed, but not always true)
• photons from the interstellar radiation field, nearby stars, shocks, CMB, chemistry, cosmic rays
• recombination/ionization/dissociation

Which of these processes matter where? In class (2011), we drew the following schematic.

A schematic of several structures in the ISM

Key

A: Dense molecular cloud with stars forming within

• $T=10-50~{\rm K};~n>10^3~{\rm cm}^{-3}$ (measured, e.g., from line ratios)
• gas is mostly molecular (low T, high n, self-shielding from UV photons, few shocks)
• not much photoionization due to high extinction (but could be complicated ionization structure due to patchy extinction)
• cosmic rays can penetrate, leading to fractional ionization: $X_I=n_i/(n_H+n_i) \approx n_i/n_H \propto n_H^{-1/2}$, where $n_i$ is the ion density (see Draine 16.5 for details). Measured values for $X_e$ (the electron-to-neutral ratio, which is presumed equal to the ionization fraction) are about $X_e \sim 10^{-6}~{\rm to}~10^{-7}$.
• possible shocks due to impinging HII region – could raise T, n, ionization, and change chemistry globally
• shocks due to embedded young stars w/ outflows and winds -> local changes in T, n, ionization, chemistry
• time evolution? feedback from stars formed within?

B: Cluster of OB stars (an HII region ionized by their integrated radiation)

• 7000 < T < 10,000 K (from line ratios)
• gas primarily ionized due to photons beyond Lyman limit (E > 13.6 eV) produced by O stars
• elements other than H have different ionization energy, so will ionize more or less easily
• HII regions are often clumpy; this is observed as a deficit in the average value of $n_e$ from continuum radiation over the entire region as compared to the value of $n_e$ derived from line ratios. In other words, certain regions are denser (in ionized gas) than others.
• The above introduces the idea of a filling factor, defined as the ratio of filled volume to total volume (in this case the filled volume is that of ionized gas)
• dust is present in HII regions (as evidenced by observations of scattered light), though the smaller grains may be destroyed
• significant radio emission: free-free (bremsstrahlung), synchrotron, and recombination line (e.g. H76a)
• chemistry is highly dependent on n, T, flux, and time

C: Supernova remnant

• gas can be ionized in shocks by collisions (high velocities required to produce high energy collisions, high T)
• e.g. if $v > 1000$ km/s, then $T > 10^6$ K
• atom-electron collisions will ionize H, He; produce x-rays; produce highly ionized heavy elements
• gas can also be excited (e.g. vibrational H2 emission) and dissociated by shocks

D: General diffuse ISM

• ne best measured from pulsar dispersion measure (DM), an observable. ${\rm DM} \propto \int n_e dl$
• role of magnetic fields depends critically on $X_I$ (B-fields do not directly affect neutrals, though their effects can be felt through ion-neutral collisions)
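Since ${\rm DM} \propto \int n_e dl$, a measured dispersion measure plus an assumed mean electron density yields a rough distance. A toy Python sketch (the DM and mean-density values are illustrative assumptions, not from the text):

```python
# Pulsar dispersion measures are conventionally quoted in pc cm^-3, so for a
# uniform medium the distance d = DM / <n_e> comes out directly in parsecs.
DM = 30.0         # pc cm^-3, an illustrative pulsar dispersion measure
n_e_mean = 0.03   # cm^-3, an assumed mean electron density for the diffuse ISM
d_pc = DM / n_e_mean
print(d_pc)  # ~1000 pc
```

In practice the electron density along the line of sight is far from uniform, so real analyses use a Galactic electron-density model rather than a single mean value.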

## CHAPTER: Chemistry

In Book Chapter on February 13, 2013 at 10:04 pm

See Draine Table 1.4 for elemental abundances in the proto-solar environment. By number, H:He:C = $1:0.1:3 \times 10^{-4}$; by mass, $1:0.4:3.5 \times 10^{-3}$. However, these ratios vary with position in the galaxy, especially for heavier elements (which depend on stellar processing). For example, the abundance of heavy elements (Z > carbon) is a factor of two lower at the Sun’s position than in the Galactic center. Even though metals account for only ~1% of the mass, they dominate most of the important chemistry, ionization, and heating/cooling processes. They are essential for star formation, as they allow molecular clouds to cool and collapse. Generally, it requires less energy to dissociate a molecule than to ionize an atom, and the lower the electronic state you are trying to ionize from, the more energy is needed. The Lyman limit is the minimum photon energy needed to ionize hydrogen from the ground state (13.6 eV, 912 Angstroms).
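The number-to-mass conversion quoted above can be checked directly: multiply each number ratio by the species’ atomic mass and renormalize to hydrogen. A quick Python sketch (atomic masses rounded to integers):

```python
# H:He:C = 1 : 0.1 : 3e-4 by number; weighting by atomic mass (H=1, He=4, C=12)
# recovers the by-mass ratios quoted from Draine Table 1.4.
number = {"H": 1.0, "He": 0.1, "C": 3e-4}
mass_amu = {"H": 1.0, "He": 4.0, "C": 12.0}
by_mass = {el: number[el] * mass_amu[el] / (number["H"] * mass_amu["H"])
           for el in number}
print(by_mass)  # He ~0.4 and C ~3.6e-3 by mass, consistent with the quoted ~3.5e-3
```

The small difference for carbon (3.6e-3 vs. the quoted 3.5e-3) comes from rounding the atomic masses to integers.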

## CHAPTER: Hydrogen Slang

In Book Chapter on February 12, 2013 at 10:02 pm

Lyman limit: the minimum energy needed to remove an electron from a Hydrogen atom. A “Lyman limit photon” is a photon with at least this energy.

$E = 13.6 {\rm eV} = 1~ {\rm Rydberg} = hcR_{\rm H}$,

where $R_{\rm H}=1.097 \times 10^{7} {\rm m}^{-1}$ is the Rydberg constant, which has units of $1/\lambda$. This energy corresponds to the Lyman limit wavelength as follows:

$E = h\nu = hc/\lambda \Rightarrow \lambda=912 \AA$.

Lyman series: transitions to and from the n=1 energy level of the Bohr atom. The first line in this series was discovered in 1906 using UV studies of electrically excited hydrogen gas.

Balmer series: transitions to and from the n=2 energy level. Discovered in 1885; since these are optical transitions, they were more easily observed than the UV Lyman series transitions.

There are also other named series corresponding to higher n. Examples include Paschen (n=3), Brackett (n=4), and Pfund (n=5). The wavelength of a given transition can be computed via the Rydberg equation

$\frac{1}{\lambda}=R_{\rm H} \big(\frac{1}{n_f^2}-\frac{1}{n_i^2}\big)$.

Note that the Lyman (or Balmer, Paschen, etc.) limit can be computed by inserting $n_i=\infty$.
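The Rydberg equation is straightforward to evaluate; a small Python sketch computes line wavelengths and series limits (the function name is ours, the constant is the standard CODATA value):

```python
# Wavelengths from the Rydberg equation: 1/lambda = R_H (1/n_f^2 - 1/n_i^2)
R_H = 1.097373e7  # Rydberg constant, m^-1

def wavelength_nm(n_f, n_i=float("inf")):
    """Wavelength [nm] of the hydrogen transition n_i -> n_f (n_i > n_f);
    the default n_i = infinity gives the series limit."""
    inv_lam = R_H * (1.0 / n_f**2 - 1.0 / n_i**2)
    return 1e9 / inv_lam

print(wavelength_nm(1, 2))  # Lyman-alpha, ~121.5 nm
print(wavelength_nm(2, 3))  # H-alpha,     ~656 nm
print(wavelength_nm(1))     # Lyman limit, ~91.2 nm (912 Angstroms)
```

Passing $n_i=\infty$ (the default) implements the series-limit prescription described above, since $1/n_i^2 \to 0$.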

Lyman continuum corresponds to the region of the spectrum near the Lyman limit, where the spacing between energy levels becomes comparable to spectral line widths and so individual lines are no longer distinguishable.

## ARTICLE: The Physical State of Interstellar Hydrogen

In Journal Club 2013 on February 12, 2013 at 9:57 pm

The Physical State of Interstellar Hydrogen by Bengt Strömgren (1939)

Summary by Anjali Tripathi

Abstract

In 1939, Bengt Strömgren published an analytic formulation for the spatial extent of ionization around early type stars.  Motivated by new H-alpha observations of sharply bound “diffuse nebulosities,” Strömgren was able to characterize these ionized regions and their thin boundaries in terms of the ionizing star’s properties and abundances of interstellar gas.  Strömgren’s work on these regions, which have come to be eponymously known as Strömgren spheres, has found longstanding use in the study of HII regions, as it provides a simple analytic approach to recover the idealized properties of such systems.

Background: Atomic Physics in Astronomy & New Observations

Danish astronomer Bengt Strömgren (1908-87) was born into a family of astronomers and educated during a period of rapid development in our understanding of the atom and modern physics.  These developments were felt strongly in Copenhagen where Strömgren studied and worked for much of his life.  At the invitation of Otto Struve, then director of Yerkes Observatory, Strömgren visited the University of Chicago from 1936 to 1938, where he encountered luminaries from across astrophysics, including Chandrasekhar and Kuiper.  With Struve and Kuiper, Strömgren worked to understand how photoionization could explain observations of a shell of gas around an F star, part of the eclipsing binary $\epsilon$ Aurigae (Kuiper, Struve and Strömgren, 1937).  This work laid out the analytic framework for a bounded region of ionized gas around a star, which provided the theoretical foundation for Strömgren’s later work on HII regions.

The observational basis for Strömgren’s 1939 paper came from new spectroscopic measurements taken by Otto Struve.  Using the new 150-Foot Nebular Spectrograph (Struve et al, 1938) perched on a slope at McDonald Observatory, pictured below, Struve and collaborators were able to resolve sharply bound extended regions “enveloped in diffuse nebulosities” in the Balmer H-alpha emission line (Struve and Elvey, 1938).  This emission line results from recombination when electrons transition from the n = 3 to n = 2 energy level of hydrogen, after the gas was initially ionized by UV radiation from O and B stars.  Comparing these observations to those of the central parts of the Orion Nebula led the authors to estimate that the number density of hydrogen with electrons in the n=3 state is $N_3 = 3 \times 10^{-21} cm^{-3}$, assuming a uniform concentration of stars and neglecting self-absorption (Struve and Elvey, 1938).  From his earlier work on $\epsilon$ Aurigae, Strömgren had an analytic framework with which to understand these observations.

Instrument used to resolve HII Regions in H-alpha (Struve et al, 1938)

Putting it together – Strömgren’s analysis

To understand the new observations quantitatively, Strömgren worked out the size of these emission nebulae by finding the extent of the ionized gas around the central star.  As in his paper with Kuiper and Struve, Strömgren considered only neutral and ionized hydrogen, assumed charge neutrality, and used the Saha equation with additional terms:

${N'' N_e \over N'} = \underbrace{{(2 \pi m_e)^{3/2} \over h^3} {2q'' \over q'} (kT)^{3/2}e^{-I/kT}}_\text{Saha} \cdot \underbrace{\sqrt{T_{el} \over T}}_\text{Temperature correction} \cdot \underbrace{R^2 \over 4 s^2}_\text{Geometrical dilution}\cdot \underbrace{e^{-\tau_u}}_\text{Absorption}$

where

• $N'$: neutral hydrogen (HI) number density
• $N''$: ionized hydrogen (HII) number density
• $N_e$: electron number density; $N_e = N''$ by charge neutrality
• $x$: ionization fraction, $x = N''/(N'+N'')$

Here, the multiplicative factor of $\sqrt{T_{el} \over T}$ corrects for the difference between the stellar temperature ($T$) and the electron temperature ($T_{el}$) at a distance $s$ away from the star.  The dilution factor ${R^2 \over 4 s^2}$, where $R$ is the stellar radius and $s$ is the distance from the star, accounts for the decrease in stellar flux with increasing distance.  The factor of $e^{-\tau_u}$, where $\tau_u$ is the optical depth, accounts for the reduction in the ionizing radiation due to absorption.  Taken together, this equation encapsulates the physics of a star whose photons ionize surrounding gas.  This ionization rate is balanced by the rate of recombination of ions and electrons to reform neutral hydrogen.  As a result, close to the star where there is abundant energetic flux, the gas is fully ionized, but further from the star, the gas is primarily neutral.  Strömgren’s formulation allowed him to calculate the location of the transition from ionized to neutral gas and to find the striking result that the transition region between the two is incredibly sharp, as plotted below.

Plot of ionization fraction vs. distance for an HII Region (Values from Table 2 of Strömgren, 1939)

Strömgren found that the gas remains almost completely ionized until a critical distance $s_0$, where the ionization fraction sharply drops and the gas becomes neutral due to absorption.  This critical distance has become known as the Strömgren radius, considered to be the radius of an idealized, spherical HII region.  The distance over which the ionization fraction drops from 1 to 0 is small (~0.01 pc), corresponding to one mean free path of an ionizing photon, compared to the Strömgren radius (~100 pc).  Thus Strömgren’s analytic work provided an explanation for sharply bound ionized regions with thin transition zones separating the ionized gas from the exterior neutral gas.

Strömgren also demonstrated how the critical distance depends on the total number density $N$, the stellar effective temperature $T$, and the stellar radius $R$:

$\log{s_0} = -6.17 + {1 \over 3} \log{ \left( {2q'' \over q'} \sqrt{T_{el} \over T} \right)} - {1 \over 3} \log{a_u} - {1 \over 3} \frac{5040K}{T} I + {1 \over 2} \log{T} + {2 \over 3} \log{R} - {2 \over 3} \log{N},$

where $a_u$ is the absorption coefficient for the ionizing radiation per hydrogen atom (here assumed to be frequency independent) and $s_0$ is given in parsecs.  From this relation, we can see that for a given stellar radius and a fixed number density, $s_0 \propto T^{1/2}$, so that hotter, earlier type stars have larger ionized regions.  Plugging in numbers, Strömgren found that for a total number density of $3~cm^{-3}$, a cluster of 10 O7 stars would have a critical radius of 100-150 parsecs, in agreement with estimates made by the Struve and Elvey observations.
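Strömgren’s $\log{s_0}$ relation has a compact modern counterpart: balancing the stellar ionizing photon rate $Q$ against recombinations in a uniform medium gives $R_S = \left(3Q / 4\pi \alpha_B n^2\right)^{1/3}$. A Python sketch with illustrative numbers (the $Q$ and $\alpha_B$ values are typical textbook figures, not taken from Strömgren’s paper):

```python
import math

# Modern Strömgren radius: ionizations balance case-B recombinations,
# Q = (4/3) pi R_S^3 alpha_B n^2  =>  R_S = (3 Q / (4 pi alpha_B n^2))^(1/3)
alpha_B = 2.6e-13   # cm^3 s^-1, case-B recombination coefficient near 10^4 K
pc = 3.086e18       # cm per parsec

def stromgren_radius_pc(Q, n):
    """Strömgren radius [pc] for ionizing photon rate Q [s^-1] and
    uniform hydrogen density n [cm^-3]."""
    R_cm = (3.0 * Q / (4.0 * math.pi * alpha_B * n**2)) ** (1.0 / 3.0)
    return R_cm / pc

# Ten O stars at ~1e49 ionizing photons/s each, in gas with n = 3 cm^-3
# (the density Strömgren adopted), give a radius of order 100 pc.
print(stromgren_radius_pc(Q=1e50, n=3.0))  # ~70 pc
```

The result is the same order as Strömgren’s 100-150 pc estimate for a cluster of 10 O7 stars; the residual difference reflects the different constants and absorption treatment in the 1939 formulation.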

To estimate the hydrogen number density from the H-alpha observations, Strömgren also considered the excitation of the n=3 energy levels of hydrogen.  Weighing the relative importance of various mechanisms for excitation – free electron capture, Lyman-line absorption, Balmer-line absorption, and collisions – Strömgren found that their effects on the number densities of the excited states and electron number densities were comparable.  As a result, he estimated from Struve’s and Elvey’s $N_3$ that the number density of hydrogen is 2-3 $cm^{-3}$.

Strömgren’s analysis of ionized regions around stars and neutral hydrogen in “normal regions” matched earlier theoretical work by Eddington into the nature of the ISM (Eddington, 1937).  “With great diffidence, having not yet rid myself of the tradition that ‘atoms are physics, but molecules are chemistry’,” Eddington wrote that “presumably a considerable part” of the ISM is molecular.  As a result, Strömgren outlined how his analysis for ionization regions could be modified to consider regions of molecular hydrogen dissociating, presciently leaving room for the later discovery of an abundance of molecular hydrogen.  Instead of the ionization of atomic hydrogen, Strömgren worked with the dissociation of molecular hydrogen in this analysis.   Given that the energy required to dissociate the bond of molecular hydrogen is less than that required to ionize atomic hydrogen, Stromgren’s analysis gives a model of a star surrounded by ionized atoms, which is surrounded by a sharp, thin transition region of atomic hydrogen, around which molecular hydrogen remains.

In addition to HI and HII, Strömgren also considered the ionization of other atoms and transitions.  For example, Strömgren noted that if the helium abundance was smaller than that of hydrogen, then almost all of the helium will be ionized out to the boundary of the hydrogen ionization region.  From similar calculations and considering the observations of Struve and Elvey, Strömgren was able to provide an estimate of the abundance of OII, a ratio of $10^{-2}-10^{-3}$ oxygen atoms to each hydrogen atom.

Strömgren Spheres Today

Strömgren’s idealized formulation for ionized regions around early type stars was well received initially and  has continued to influence thinking about HII regions in the decades since.  The simplicity of Strömgren’s model and its assumptions, however, have been recognized and addressed over time.  Amongst these are concerns about the assumption of a uniformly dense medium around the star.  Optical and radio observations, however, have revealed that the surrounding nebula can have clumps and voids – far from being uniformly dense (Osterbrock and Flather, 1959).  To address this, calculations of the nebula’s density can include a ‘filling factor’ term.  Studies of the Orion Nebula (M42), pictured below, have provided examples of just such clumpiness.  M42 has also been used to study another related limitation of Strömgren’s model – the assumption of a central star surrounded by spherical symmetry.

Orion Nebula, infrared image from WISE. Credit: NASA/JPL/Caltech

Consideration of the geometry of Strömgren spheres has been augmented by blister models of the 1970s whereby a star ionizes surrounding gas but the star is at the surface or edge of a giant molecular cloud (GMC), rather than at the center of it.  As a result, ionized gas breaks out of the GMC, like a popping blister, which in turn can prompt “champagne flows” of ionized gas leaching into the surrounding medium.  In a review article of Strömgren’s work, Odell (1999) states that due to observational selection effects, many HII regions observed in the optical actually are more akin to blister regions, rather than Strömgren spheres, since Strömgren spheres formed at the center or heart of a GMC may be obscured so much that they are observable only at radio wavelengths.

In spite of its simplifying assumptions, Strömgren’s work remains relevant today.   Given its abundance, hydrogen dominates the physical processes of emission nebulae and, thus, Strömgren’s idealized model provides a good first approximation for the ionization structure, even though more species are involved than just atomic hydrogen.  Today we can enhance our understanding of these HII regions using computer codes, such as CLOUDY, to calculate the ionization states of various atoms and molecules.  We can also computationally model the  hydrodynamics  of shocks radiating outwards from the star and use spectral synthesis codes to produce mock spectra.  From these models and the accumulated wealth of observations over time, we have come to accept that dense clouds of molecular gas, dominated with molecular hydrogen, are the sites of star formation.  Young O and B-type stars form out of clumps in these clouds and their ionizing radiation will develop into an emission nebula with ionized atomic hydrogen, sharply bound from the surrounding neutral cloud.  As the stars age and the shocks race onwards, the HII regions will evolve.  What remains, however, is Strömgren’s work which provides a simple analytic basis for understanding the complex physics of HII regions.

Kuiper, Struve, & Strömgren, “The Interpretation of $\epsilon$ Aurigae”, ApJ (1937)