The blue curve, beam response, shows that the beam has the resolution to resolve the two different models.
Archive for March, 2013
Answer to question (13)
In Uncategorized on March 28, 2013 at 7:18 am
That even near the filament the velocity is pretty much sub-sonic—the upper dotted line is the sound speed. This further supports the claim that Jeans collapse may be the mechanism behind the filament, although there may have been post-processing or feedback after the filament formed, so we can take this with a grain of salt.
Answer to question (12)
In Uncategorized on March 28, 2013 at 7:17 am
Molecular cloud—the particles are mostly molecular rather than atomic hydrogen, meaning a mean molecular weight of $\mu \approx 2.33$. The 0.33 accounts for heavier species, such as helium, and corresponds to assuming a typical, Milky Way elemental abundance.
Answer to question (11)
In Uncategorized on March 28, 2013 at 7:16 am
At first glance the cut seems arbitrary. If the thought is that the YSO is heating a region around it, leading to bigger velocity dispersions, the size of that region should be dictated by something like the physics of a Stromgren sphere, and have nothing to do with the beam size. A similar comment applies to the case where the region is heated by a stellar wind or outflow.
However, if the beam has width 6”, and we assume that represents the full-width at half maximum, then 1 sigma is roughly 3”, and 12” represents 2-sigma on either side of the Gaussian beam profile. So, while technically structure >6” would be resolved, anything within 12” of a given position will be smoothed by the Gaussian beam profile. Hence there is potential for contamination of the black histogram by the YSO for positions closer to it than 12”.
Answer to question (10)
In Uncategorized on March 28, 2013 at 7:14 am
Radiation from the YSO, interaction between outflow/stellar wind and gas.
Answer to question (9)
In Uncategorized on March 28, 2013 at 7:13 am
To some extent, yes—it shows the velocity dispersions are mostly subsonic, which is also evident from Figure 2b if you calculate the sound speed and see that most of the region has lower dispersion values than this. But Figure 3 shows you just how much of the region doesn’t (i.e. is supersonic), and that most of the bits that are supersonic are near the young stellar object (YSO).
Answer to question (8)
In Uncategorized on March 28, 2013 at 7:12 am
The left-hand panel shows the centroid velocity: in each little beam-sized cell, one has a Gaussian-esque line profile, with some centroid whose velocity can be calculated using the line’s red or blue shift. The right-hand panel shows, in each little beam-sized cell, what the width of this profile is. Interestingly, one can compare the sigma of the centroid velocities in the boxed region of the left panel occupied by the filament with the sigma in each beam-sized cell and use this ratio to test the geometry of the structure in the region. This is essentially because of Larson’s Laws: Philip Mocz’s post on Larson’s laws explains how this ratio depends on the geometry, and calculates it for several idealized cases. The most relevant one for us is the long sheet, which in 2-d projection would look similar to a filament. The ratio predicted for such a geometry is 2.67. Estimating by eye the needed quantities from Figure 2 (try this yourself), we find a value close to this—offering somewhat independent confirmation that we have a filament. Note this calculation will be somewhat sensitive to how you choose the region over which to calculate the ratio—but we already know what we are looking for (the filament), so can choose the region accordingly. However, this is why I qualified this with “somewhat” above.
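To make the geometry test concrete, here is a minimal sketch (mine, not the authors'; the arrays are invented placeholder values rather than measurements from Figure 2) of how one would form the ratio from a map of centroid velocities and per-cell line widths:

```python
import numpy as np

# Hypothetical values over the filament region (km/s): one centroid velocity and
# one line width (sigma) per beam-sized cell, as read off the two panels of the map.
centroid_v = np.array([3.6, 3.8, 3.7, 3.9, 4.1, 4.0, 3.5, 3.7])      # km/s
cell_sigma = np.array([0.10, 0.12, 0.09, 0.11, 0.10, 0.13, 0.11, 0.10])  # km/s

sigma_of_centroids = np.std(centroid_v)   # spread of centroids across the region
mean_cell_sigma = np.mean(cell_sigma)     # typical width within a single cell

ratio = sigma_of_centroids / mean_cell_sigma
print(f"sigma(centroids)/<sigma_cell> = {ratio:.2f}  (compare to ~2.67 for a long sheet)")
```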
Answer to question (7)
In Uncategorized on March 28, 2013 at 7:11 am
The Jeans length is drawn on the figure, spanning the separation between the YSO and the starless condensation—it is one of the key pieces of evidence that Jeans collapse may be happening.
Answer to question (6)
In Uncategorized on March 28, 2013 at 7:10 am
mJy are a unit of flux density, i.e. flux at a particular frequency. Integrated intensity is therefore mJy times Hz (or 1/s). One can convert Hz to km/s by determining what velocity is implied by a given frequency shift via the Doppler relation, $\Delta v = c\,\Delta\nu/\nu_0$, and that is what has been done here, with the rest frequency $\nu_0 \approx 23.7$ GHz for the NH$_3$(1,1) line.
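A quick sanity check on the conversion, as a sketch (my own, not from the paper); the 23.694 GHz rest frequency of the NH$_3$(1,1) line is assumed:

```python
c_kms = 2.99792458e5     # speed of light in km/s
nu0_GHz = 23.694         # assumed NH3(1,1) rest frequency in GHz

def freq_offset_to_velocity(dnu_GHz, nu0=nu0_GHz):
    """Convert a frequency offset (GHz) from the rest frequency to a velocity (km/s)."""
    return c_kms * dnu_GHz / nu0

# Example: a 1 MHz shift corresponds to ~12.7 km/s at 23.7 GHz
print(freq_offset_to_velocity(1e-3))
```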
Answer to question (5)
In Uncategorized on March 28, 2013 at 7:07 am
Resolution! And thus probing details of velocity dispersion. The GBT image does not really resolve the filament, and only marginally shows the starless condensation—which is separated from the young stellar object by a Jeans length, a key piece of evidence for the efficiency of Jeans collapse in the region.
Answer to question (4)
In Uncategorized on March 28, 2013 at 7:05 am
Because the GBT couldn’t resolve spatial variations in the velocity dispersion or column density. Resolving the former offers more information on the problem of supersonic vs. subsonic dispersions, already discussed as key for star formation, and column density spatial variations probe whether there are structures along the line of sight—as indeed the authors find (the filament!).
Answer to question (3)
In Uncategorized on March 28, 2013 at 7:04 am
In your bathroom, if you clean it—NH$_3$ is actually just ammonia! Ammonia is common in regions near the galactic center (Kaifu et al. 1975). And CO becomes optically thick before ammonia—and optically thick is optically useless when you want to find densities! Ammonia allows study of denser regions—exactly what is needed to probe star formation.
The (1,1) transition is actually quite exotic: no mere rotational line here. Ammonia is shaped like a triangular pyramid, with the three H’s at the base and the N on top. The N can quantum mechanically tunnel through the potential barrier of the base, inverting the pyramid. So the (1,1) transition is also known as an “inversion” transition.
You might wonder why there is a potential barrier at all—after all, each H atom is neutral, and so is the N on top. But, if you were an electron on one of the H’s, where would you want to be? Certainly far from the other 2 Hs’ electrons! Hence each H’s electron will spend most of its time outside the base, meaning the triangle formed by the H’s will be slightly positively charged on the inside. Similarly, the electron on the N will want to be as far from the other three electrons as it can be, so it will hover above the N, meaning the bit of the N facing the pyramid’s triangular base will be slightly positively charged. Ergo, potential barrier.
Making simple assumptions, we can estimate the energy of the inversion. Assume the distance from each H’s electron to the base’s center is $(a+l)$, with $a$ the Bohr radius and $l$ the ammonia bond length (about 1 angstrom), and that the N’s electrons sit $(a+l)$ above this center; we can then calculate the potential at the location of the 7 protons in N. Converting to energy and thence to frequency gives an order-of-magnitude estimate of the inversion frequency. Incidentally, to go further one could use the WKB approximation to estimate the tunneling probability. Given a minimum flux per beam width to which the telescope is sensitive (14 mJy here is the noise), one could even then place a lower bound on the column density observable with this transition by a particular instrument, and, assuming isotropy, one could get a density from the column density.
Answer to question (2)
In Uncategorized on March 28, 2013 at 7:02 am
“Inevitably when large enough” is the key. One can indeed treat the supersonic motions as contributing an additional pressure, and raising the Jeans length—the problem is, they raise it above the typical size of a dense core! Hence for dense cores to collapse, the turbulence must be dissipated so that the Jeans length goes down below the size of the core.
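A rough sketch of the same point (my own illustrative numbers, not taken from the paper): treating turbulence as extra pressure by replacing the sound speed with an effective dispersion inflates the Jeans length well past typical core sizes:

```python
import math

G = 6.674e-8          # cgs
k_B = 1.381e-16       # erg/K
m_H = 1.673e-24       # g
pc = 3.086e18         # cm

def jeans_length_pc(T, n, mu=2.33, sigma_turb_kms=0.0):
    """Jeans length (pc) with an effective sound speed c_eff^2 = c_s^2 + sigma_turb^2."""
    rho = mu * m_H * n                          # mass density for number density n (cm^-3)
    c_s2 = k_B * T / (mu * m_H)                 # isothermal sound speed squared
    c_eff2 = c_s2 + (sigma_turb_kms * 1e5) ** 2
    return math.sqrt(math.pi * c_eff2 / (G * rho)) / pc

# A dense core: T = 10 K, n = 1e4 cm^-3
print(jeans_length_pc(10, 1e4))                      # purely thermal support: ~0.2 pc
print(jeans_length_pc(10, 1e4, sigma_turb_kms=1.0))  # with ~1 km/s turbulence: ~1 pc
```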
Answer to question (1)
In Uncategorized on March 28, 2013 at 7:00 am
Thermal velocity dispersions mean you have a spectral line with some width, and the width is given by thermal broadening, so that $\sigma_{\rm thermal} \sim \sqrt{kT/m}$ from the Equipartition Theorem. This also happens to be the sound speed! Is it mere coincidence that thermal velocities are on order the sound speed? No! Thermal motions are (no surprise) set by the temperature. The sound speed is set by pressure, since sound waves are just pressure-density waves, and the pressure is also ultimately set by the temperature. So it’s not coincidence that thermal motions are sub-sonic, and supersonic motions cannot be explained by thermal broadening.
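A minimal sketch (assumed example values: T = 10 K, NH$_3$ at 17 amu, mean molecular weight 2.33) comparing the thermal dispersion of a heavy tracer with the sound speed of the bulk gas:

```python
import math

k_B = 1.381e-16   # erg/K
m_H = 1.673e-24   # g

def thermal_sigma_kms(T, m_amu):
    """1-D thermal velocity dispersion sqrt(kT/m), in km/s."""
    return math.sqrt(k_B * T / (m_amu * m_H)) / 1e5

T = 10.0  # K, a typical dense-gas temperature (assumed for illustration)
# Width of an NH3 line (17 amu): narrow, because the molecule is heavy
print("NH3 thermal dispersion :", round(thermal_sigma_kms(T, 17.0), 3), "km/s")
# Dispersion of the mean gas particle (mu = 2.33): this is the isothermal sound speed
print("sound speed (mu = 2.33):", round(thermal_sigma_kms(T, 2.33), 3), "km/s")
```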
ARTICLE: A Theory of the Interstellar Medium: Three Components Regulated by Supernova Explosions in an Inhomogeneous Substrate
In Journal Club 2013 on March 15, 2013 at 11:09 pm
Abstract (the paper’s, not ours)
Supernova explosions in a cloudy interstellar medium produce a three-component medium in which a large fraction of the volume is filled with hot, tenuous gas. In the disk of the galaxy the evolution of supernova remnants is altered by evaporation of cool clouds embedded in the hot medium. Radiative losses are enhanced by the resulting increase in density and by radiation from the conductive interfaces between clouds and hot gas. Mass balance (cloud evaporation rate = dense shell formation rate) and energy balance (supernova shock input = radiation loss) determine the density and temperature of the hot medium, with (n, T) of roughly ($3\times10^{-3}\ {\rm cm^{-3}}$, $5\times10^{5}$ K) being representative values. Very small clouds will be rapidly evaporated or swept up. The outer edges of “standard” clouds ionized by the diffuse UV and soft X-ray backgrounds provide the warm (~$10^4$ K) ionized and neutral components. A self-consistent model of the interstellar medium developed herein accounts for the observed pressure of interstellar clouds, the galactic soft X-ray background, the O VI absorption line observations, the ionization and heating of much of the interstellar medium, and the motions of the clouds. In the halo of the galaxy, where the clouds are relatively unimportant, we estimate the corresponding (n, T) below one pressure scale height. Energy input from halo supernovae is probably adequate to drive a galactic wind.
The gist
The paper’s (McKee and Ostriker 1977) main idea is that supernova remnants (SNRs) play an important role in the regulation of the ISM. Specifically, they argue that these explosions add enough energy that another phase is warranted: a Hot Ionized Medium (HIM).
A Bit About SNRs…
A basic supernova explosion consists of several phases. The characteristic energy is on the order of $10^{51}$ erg, and indeed $10^{51}$ erg is a widely-used unit for supernova energetics. For a fairly well-characterized SNR, see Cas A, which exploded in the late 1600s.
- Free expansion
A supernova explosion begins by ejecting mass with a range of velocities, the rms of which is highly supersonic. This means that a shock wave propagates into the ISM at nearly constant velocity at early times. Eventually the density decreases and the pressure of the shocked material overpowers the thermal pressure in the ejecta, creating a reverse shock that propagates inwards. This phase lasts for something on the order of several hundred years. Much of the Cas A ejecta is in the free expansion phase, and the reverse shock is currently located at 60% of the outer shock radius.
- Sedov-Taylor phase
The reverse shock eventually reaches the SNR center, the pressure of which is now extremely high compared to its surroundings. This is called the “blast wave” portion, in which the shock propagates outwards and sweeps up material from the ISM. The remnant’s time evolution now follows the Sedov-Taylor solution, which finds $R \propto t^{2/5}$ (see the sketch after this list). This phase ends when the radiative losses (from hot gas interior to the shock front) become important. We expect this phase to last a few times $10^4$ years.
- Snowplow phase
When the age of the SNR approaches the radiative cooling timescale, cooling causes the thermal pressure behind the shock to drop, stalling it. This phase features a shell of cool gas around a hot volume, the mass of which increases as it sweeps up the surrounding gas like a snowplow. For typical SNRs, this phase ends at an age of a few $\times 10^5$ yr, leading into the next phase:
- Fadeaway
Eventually the shock speed approaches the sound speed in the gas, and the shock turns into a sound wave. The “fadeaway time” is on the order of $10^6$ years.
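As promised above, a rough sketch of the Sedov-Taylor scaling. This is only illustrative: it drops order-unity factors, assumes E = 10^51 erg and n = 1 cm^-3, and ignores the radiative (snowplow) deceleration, so it should not be read as the remnant evolution used in the paper:

```python
import math

pc = 3.086e18      # cm
yr = 3.156e7       # s
m_H = 1.673e-24    # g

def sedov_radius_pc(t_yr, E=1e51, n=1.0, mu=1.4):
    """Sedov-Taylor radius R ~ (E t^2 / rho)^(1/5), dropping order-unity factors."""
    rho = mu * m_H * n
    return (E * (t_yr * yr) ** 2 / rho) ** 0.2 / pc

def shock_speed_kms(t_yr, E=1e51, n=1.0, mu=1.4):
    """Shock speed dR/dt = (2/5) R / t for the Sedov solution."""
    return 0.4 * sedov_radius_pc(t_yr, E, n, mu) * pc / (t_yr * yr) / 1e5

for t in (1e3, 1e4, 1e5, 1e6):
    print(f"t = {t:.0e} yr: R = {sedov_radius_pc(t):5.1f} pc, v = {shock_speed_kms(t):7.1f} km/s")
```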
So why are they important?
To constitute an integral part of a model of the ISM, SNRs must occur fairly often and overlap. In the Milky Way, observations indicate a supernova roughly every 40 years. Given the size of the disk, this yields a supernova rate per unit volume of order $10^{-13}\ {\rm pc^{-3}\,yr^{-1}}$.
Here we get some justification for an ISM that’s a bit more complicated than the then-standard two-phase model (proposed by Field, Goldsmith, and Habing (1969), consisting mostly of warm HI gas). Taking into account the typical fadeaway time of a supernova remnant, we can calculate that on average about 1 other supernova will explode within a “fadeaway volume” during that original lifetime. That volume is just the characteristic region swept out by the shock front as it approaches the sound speed in the last phase. For a fadeaway time of $\sim 10^6$ yr and a typical sound speed of the ISM, this region is roughly 100 pc across. Thus in just a few million years, this warm neutral medium would be completely overrun by supernova remnants! The resulting medium would consist of low-density hot gas and dense shells of cold gas. McKee and Ostriker saw a better way…
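A back-of-the-envelope version of the overlap argument, with assumed round numbers (one supernova per 40 yr, a 15 kpc by 200 pc disk, a 10^6 yr fadeaway time, and a 100 pc fadeaway radius; these are illustrative values, not necessarily the exact ones behind the original discussion):

```python
import math

sn_interval_yr = 40.0            # one Galactic SN every ~40 yr (assumed)
R_disk_pc, h_disk_pc = 15e3, 200.0
t_fade_yr = 1e6                  # assumed fadeaway time
R_fade_pc = 100.0                # assumed size of a faded remnant

V_disk = math.pi * R_disk_pc ** 2 * h_disk_pc          # pc^3
rate_density = 1.0 / (sn_interval_yr * V_disk)         # SNe pc^-3 yr^-1

V_fade = 4.0 / 3.0 * math.pi * R_fade_pc ** 3
n_overlap = rate_density * V_fade * t_fade_yr          # expected SNe inside one fadeaway volume per fadeaway time

print(f"SN rate density ~ {rate_density:.1e} pc^-3 yr^-1")
print(f"expected other SNe within one fadeaway volume ~ {n_overlap:.1f}")
```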
The Three Phase Model
McKee and Ostriker present their model by following the evolution of a supernova remnant, eventually culminating in a consistent picture of the phases of the ISM. Their model consists of a hot ionized medium with cold dense clouds dispersed throughout. The cold dense clouds have surfaces that are heated by hot stars and supernova remnants, making up the warm ionized and neutral media, leaving the unheated interiors as the cold neutral medium. In this picture, supernova remnants are contained by the pressure of the hot ionized medium, and eventually merge with it. In the early phases of their expansion, supernova remnants evaporate the cold clouds, while in the late stages, the supernova remnant material cools by radiative losses and contributes to the mass of cold clouds.
A schematic of the three phase model, showing how supernovae drive the evolution of the interstellar medium.
In the early phases of the supernova remnant, McKee and Ostriker focus on the effects of electron thermal conduction. First, they cite arguments by Chevalier (1975) and Solinger, Rappaport, and Buff (1975) that conduction is efficient enough to make the supernova remnant’s interior almost isothermal. Second, they consider conduction between the supernova remnant and cold clouds that it engulfs. Radiative losses from the supernova remnant are negligible in this stage, so the clouds are evaporated and incorporated into the remnant. Considering this cloud evaporation, McKee and Ostriker modify the Sedov-Taylor solution for this stage of expansion, yielding two substages. In the first substage, the remnant has not swept up much mass from the hot ionized medium, so mass gain from evaporated clouds dominates. They show this mechanism actually modifies the Sedov-Taylor solution to an $R \propto t^{3/5}$ dependence. In the second substage, the remnant has cooled somewhat, decreasing the cloud evaporation, making mass sweep-up the dominant effect. The classic $R \propto t^{2/5}$ Sedov-Taylor solution is recovered.
The transition to the late stages occurs when the remnant has expanded and cooled enough that radiative cooling becomes important. Here, McKee and Ostriker pause to consider the properties of the remnant at this point (using numbers they calculate in later sections): the remnant has an age of 800 kyr, a radius of 180 pc, and a temperature of 400,000 K. They then consider several effects that become important at this stage:
- When radiative cooling sets in, a cold, dense shell is formed by runaway cooling: in this regime, radiative losses increase as temperature decreases. This effect is important at a cooling radius where the cooling time equals the age of the remnant.
- When the remnant’s radius is larger than the scale height of the galaxy, it could contribute matter and energy to the halo.
- When the remnant’s pressure is comparable to the pressure of the hot ionized medium, the remnant has merged with the ISM.
- If supernovae happen often enough, two supernova remnants could overlap.
- After the cold shell has developed, when the remnant collides with a cold cloud, it will lose shell material to the cloud.
Frustratingly, they find that these effects become important at about the same remnant radius. However, they find that radiative cooling sets in slightly before the other effects, and continue to follow the remnant’s evolution.
The mean free path of the remnant’s cold shell against cold clouds is very short, making the last effect important once radiative cooling has set in. The shell condenses mass onto the cloud since the cloud is more dense, creating a hole in the shell. The density left behind in the remnant is insufficient to reform the shell around this hole. The radius at which supernova remnants are expected to overlap is about the same as the radius where the remnant is expected to collide with its first cloud after having formed a shell. Then, McKee and Ostriker state that little energy loss occurs when remnants overlap, and so the remnant must merge with the ISM here.
At this point, McKee and Ostriker consider equilibrium in the ISM as a whole to estimate the properties of the hot ionized medium in their model. First, they state that when remnants overlap, they must also be in pressure equilibrium with the hot ionized medium. Second, the remnants have added mass to the hot ionized medium by evaporating clouds and removed mass from it by forming shells – but there must be a mass balance. This condition implies that the density of the hot ionized medium must be the same as the density of the interior of the remnants on overlap. Third, they state that the supernovae inject energy that must be dissipated in order for equilibrium to hold. This energy is lost by radiative cooling, which is possible as long as cooling occurs before remnant overlap. Using the supernova energy and occurrence rate as well as the cold cloud size, filling factor, and evaporation rate, they calculate the equilibrium properties of the hot ionized medium. They then continue to calculate “typical” (median) and “average” (mean) properties, using the argument that the hot ionized medium has some volume in equilibrium, and some volume in expanding remnants. They obtain a typical density of a few $\times 10^{-3}\ {\rm cm^{-3}}$, a pressure $P/k$ of a few thousand ${\rm K\,cm^{-3}}$, and a temperature of 460,000 K.
McKee and Ostriker also use their model to predict different properties in the galactic halo. There are fewer clouds, so a remnant off the plane would not gain as much mass from evaporating clouds. Since the remnant is not as dense, radiative cooling sets in later – and in fact, the remnant comes into pressure equilibrium in the halo before cooling sets in. Supernovae thus heat the halo, which they predict dissipates this energy by radiative cooling and a galactic wind.
Finally, McKee and Ostriker find the properties of the cold clouds in their model, starting from an assumed spectrum of cloud sizes. They use Hobbs’s (1974) observation that the number of clouds with a given column density falls off as the column density squared, adding an upper mass limit where a cloud exceeds the Jeans mass and gravitationally collapses, and a lower mass limit from considering when a cloud would be optically thin to ionizing radiation. They then argue that the majority of the ISM’s mass lies in the cold clouds. Using the mean density of the ISM and the production rate of ionizing radiation, they can find the number density of clouds and how ionized they are.
Parker Instability
The three-phase model gives little prominence to magnetic fields and giant molecular clouds. As a tangent from McKee and Ostriker’s model, the Parker model (Parker 1966) will be presented briefly to showcase the variety of considerations that can go into modelling the ISM.
The primary motivation for Parker’s model is the observation (from Faraday rotation) that the magnetic field of the Galaxy is parallel to the Galactic plane. He also assumes that the intergalactic magnetic field is weak compared to the galactic magnetic field: that is, the galactic magnetic field is confined to the galaxy. Then, Parker suggests what is now known as the Parker instability: that instabilities in the magnetic field cause molecular cloud formation.
Parker’s argument relies on the virial theorem: in particular, that thermal pressure and magnetic pressure must be balanced by gravitational attraction. Put another way, field lines must be “weighed down” by the weight of gas they penetrate: if gravity is too weak, the magnetic fields will expand the gas it penetrates. Then, he rules out topologies where all field lines pass through the center of the galaxy and are weighed down only there: the magnetic field would rise rapidly towards the center, disagreeing with many observations. Thus, if the magnetic field is confined to the galaxy, it must be weighed down by gas throughout the disk.
He then considers a part of the disk, and assumes a uniform magnetic field, and shows that it is unstable to transverse waves in the magnetic field. If the magnetic field is perturbed to rise above the galactic plane, the gas it penetrates will slide down the field line towards the disk because of gravity. Then, the field line has less weight at the location of the perturbation, allowing magnetic pressure to grow the perturbation. Using examples of other magnetic field topologies, he argues that this instability is general as long as gravity is the force balancing magnetic pressure. By this instability, he finds that the end state of the gas is in pockets spaced on the order of the galaxy’s scale height. He suggests that this instability explains giant molecular cloud formation. The spacing between giant molecular clouds is of the right order of magnitude. Also, giant molecular clouds are too diffuse to have formed by gravitational collapse, whereas the Parker instability provides a plausible mode of formation.
In today’s perspective, it is thought that the Parker instability is indeed part of giant molecular cloud formation, but it is unclear how important it is. Kim, Ryu, Hong, Lee, and Franco (2004) collected three arguments against Parker instability being the sole cause:
- The formation time predicted by the Parker instability is ~10 Myr. However, looking at giant molecular clouds as the product of turbulent flows gives very short lifetimes (Ballesteros-Paredes et al. 1999). Also, ~10 Myr post T Tauri stars are not found in giant molecular clouds, suggesting that the clouds are young (Elmegreen 2000).
- Adding a random component to the galactic magnetic field can stabilize the Parker instability (Parker & Jokipii 2000, Kim & Ryu 2001).
- Simulations suggest that the density enhancement from Parker instability is too small to explain GMC formation (Kim et al. 1998, 2001).
Does it hold up to observations?
The paper offers several key observations justifying the model. First, of course, is the observed supernova rate which argues that a warm intercloud medium would self-destruct in a few Myr. Other model inputs include the energy per supernova, the mean density of the ISM, and the mean production rate of UV photons.
They also cite O VI absorption lines and soft X-ray emission as evidence of the three-phase model. The observed oxygen line widths are a factor of 4 smaller than what would be expected if they originated in shocks or the Hot Ionized Medium, and they attribute this to the idea that the lines are generated in the conductive surfaces of clouds — a key finding of their model above. If one observes soft X-ray emission across the sky, a hot component of T ~ a few $\times 10^6$ K can be seen in data at 0.4-0.85 keV, which cannot be well explained just with SNRs of this temperature (due to their small filling factor). This is interpreted as evidence for large-scale hot gas.
So can it actually predict anything?
Sure! Most importantly, with just the above inputs — the supernova rate, the energy per supernova, and the cooling function — they are able to derive the mean pressure of the ISM (which they predict to be a few thousand ${\rm K\,cm^{-3}}$ in $P/k$, very close to the observed thermal pressures).
Are there any weaknesses?
The most glaring omission of the three-phase model is that the existence of large amounts of warm HI gas, seen through 21cm emission, is not well explained; they underpredict the fraction of hydrogen in this phase by a factor of 15! In addition, observed cold clouds are not well accounted for; they should disperse very quickly, even at temperatures far below the ISM temperature that the model predicts.
ARTICLE: The mass function of dense molecular cores and the origin of the IMF (2007)
In Journal Club, Journal Club 2013, Uncategorized on March 12, 2013 at 9:30 pm
The mass function of dense molecular cores and the origin of the IMF
by Joao Alves, Marco Lombardi & Charles Lada. Summary by Doug Ferrer
Introduction
One of the main goals of researching the ISM is understanding the connection between the number and properties of stars and the properties of the surrounding galaxy. We want to be able to look at a stellar population and deduce what sort of material it came from, and the reverse–predict what sort of stars we should expect to form in some region given what we know about its properties. The basics of this connection have been known for a while (eg. Bok 1977). Stars form from the gravitational collapse of dense cores of molecular clouds. Thus the properties of stars are the properties of these dense cores modulated by the physical processes that happen during this collapse.
One of the key items we would like to be able to derive from this understanding of star formation is the stellar initial mass function (IMF)–the number of stars of a particular mass as a function of mass. Understanding how the IMF varies from region to region and across time would be extremely useful in many areas of astrophysics, from cosmology to star clusters. In this paper, Alves et al. attempt to explain the origin of the IMF and provide evidence for this explanation by examining the molecular cloud complex in the Pipe nebula. We will look at some background on the IMF, then review the methods used by Alves et al. and assess the implications for star formation.
The IMF and its Origins
The IMF describes the probability density for any given star to have a mass M, or equivalently the number of stars in a given region with a mass of M. Early work done by Salpeter (1955) showed that the IMF for relatively high mass stars ($M \gtrsim 1\,M_\odot$) follows a power law, $dN/dM \propto M^{-\alpha}$, with index $\alpha \approx 2.35$. For lower masses, the current consensus is for a break below $\sim 0.5\,M_\odot$, and another break below $\sim 0.1\,M_\odot$, with a peak around a few tenths of a solar mass. A cartoon of this sort of IMF is shown in Fig. 1. The actual underlying distribution may in fact be log-normal. This is consistent with stars being formed from collapsing over-densities caused by supersonic turbulence within molecular clouds (Krumholz 2011). This is not particularly strong evidence, however, as the log-normal distribution can result from any process that depends on the product of many random variables.

Figure 1: An illustration of the accepted form of the IMF: a power-law region at masses above roughly a solar mass, and a broad peak at lower masses. Image from Krumholz (2011), produced using Chabrier (2003).
The important implication of these results is that there is a characteristic mass scale for star formation of around a few tenths of a solar mass. There are two (obvious) ways to explain this:
- The efficiency of star formation peaks at certain scales.
- There is a characteristic scale for the clouds that stars form from.
There has been quite a lot of theoretical work examining option 1 (see Krumholz 2011 for a relatively recent, accessible review). There are many different physical processes at play in star formation–turbulent MHD, chemistry, thermodynamics, radiative transfer, and gravitational collapse. Many of these processes are not separately well understood, and each occurs in a complex, highly non-linear regime. We are not even close to a complete description of the full problem. Thus it would not be at all surprising if there were some mass scale, or even several mass scales, singled out by the star formation process, even if none is presently known. There is some approximate analytical work showing that feedback from stellar outflows may provide such a scale (e.g. Shu et al. 1987). More recent work (e.g. Price and Bate 2008) has shown that magnetic fields cause significant effects on the structure of collapsing cloud cores in numerical simulations, and may reduce or enhance fragmentation depending on magnetic field strength and mass scale.
Nevertheless, the authors are skeptical of the idea that star formation is not a scale-free process. Per Larson (2005), they do not believe that turbulence or magnetic fields are likely to be very important for the smallest, densest scale clouds that stars ultimately form from. Supersonic turbulence is quickly dissipated on these scales, and magnetic fields are dissipated through ambipolar diffusion–the decoupling of neutral molecules from the ionic plasma. Thus Larson argues that thermal support is the most important process in small cores, and the Jeans analysis will be approximately correct.
The authors thus turn to option 2. It is clear that if the dense cores of star forming clouds already follow a distribution like the IMF, then there will be no need for option 1 as an explanation. Unfortunately, though, the molecular cloud mass function (Fig. 2) does not at first glance show any breaks at low mass, and its power-law index is too shallow. But what if we look at only the smallest, densest clumps?

Figure 2: The cumulative cloud mass function (A_K is proportional to mass) for several different cloud complexes from Lombardi et al. (2008). While this is not directly comparable to the IMF, the important takeaway is that there are no breaks at low mass.
Observations
Observations using dense gas emission tracers like C$^{18}$O and H$^{13}$CO$^+$ produce mass distributions more like the stellar IMF (e.g. Tachihara et al. 2002, Onishi et al. 2002). However, there are many systematic uncertainties in emission-based analyses. To deal with these issues, the authors instead probed dense cloud mass using wide-field extinction mapping. An extinction map was constructed of the nearby Pipe nebula using the NICER method of Lombardi & Alves (2001), which we have discussed in class. This produced the extinction map shown in Fig. 3 below.

Figure 3: NICER extinction map of the Pipe nebula. A few dense regions are visible, but the noisy, variable background makes it difficult to separate out individual cores in a consistent way.
Data Analysis
The NICER map of the Pipe nebula reveals a complex filamentary structure with a very high dynamic range in column density. It is difficult to assign cores to regions in such a data set in a coherent way using standard techniques (Alves et al. 2007)–how do we determine what is a core and what is substructure of a core? Since it is precisely the number and distribution of cores that we are interested in, we cannot use a biased method to identify the cores. To avoid this, the authors used a technique called multiscale wavelet decomposition. Since the authors do not give much information on how this works, we will give a brief overview following a description of a similar technique from Portier-Fozzani et al. (2001).
Wavelet Decomposition
Wavelet analysis is a hybrid of Fourier and coordinate space techniques. A wavelet is a function that has a characteristic frequency, position and spatial extent, like the one in Fig. 4. Thus if we convolve a wavelet of a given frequency and length with a signal, it will tell us how the spatial frequency of the signal varies with position. This is the type of analysis used to produce a spectrogram in audio visualization.
Figure 4: A wavelet. This one is the product of a sinusoid with a Gaussian envelope. This choice is a perfect tradeoff between spatial and frequency resolution, but other wavelets may be ideal for some other resolution goal.
The authors used this technique to separate out dense structures from the background. They performed the analysis using wavelets with spatial scales close to three different length scales separated by factors of 2 (0.08 pc, 0.15 pc, 0.3 pc). Then they combined the structures identified at each length scale into trees based on spatial overlap, and called only the top level of these trees separate cores. The resulting identified cores are shown in Fig. 5. While the cores are shown as circles in Fig. 5, this method does not assume spherical symmetry, and the actual identified core shapes are filamentary, reflecting the qualitative appearance of the image.
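The authors' exact algorithm is not public, so the following is only a schematic sketch of the general idea (not their code): band-pass filter the map at a few scales with a crude difference-of-Gaussians "wavelet" and keep connected peaks at each scale. The merging of structures across scales into trees is omitted here.

```python
import numpy as np
from scipy import ndimage

def multiscale_structures(image, scales_pix=(4, 8, 16), nsigma=3.0):
    """Crude multiscale decomposition: difference-of-Gaussians at several scales,
    then connected regions above a threshold at each scale."""
    results = {}
    for s in scales_pix:
        # Band-pass the map around scale s (a rough stand-in for a wavelet transform)
        band = ndimage.gaussian_filter(image, s) - ndimage.gaussian_filter(image, 2 * s)
        labels, nstruct = ndimage.label(band > nsigma * band.std())
        results[s] = (labels, nstruct)
    return results

# Toy "extinction map": noisy background plus two compact blobs
rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:200, 0:200]
image = (0.2 * rng.normal(size=(200, 200))
         + 2.0 * np.exp(-((xx - 60) ** 2 + (yy - 80) ** 2) / (2 * 5 ** 2))
         + 1.5 * np.exp(-((xx - 140) ** 2 + (yy - 120) ** 2) / (2 * 10 ** 2)))

for scale, (labels, nstruct) in multiscale_structures(image).items():
    print(f"scale {scale} pix: {nstruct} structures found")
```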

Figure 5: Dense cores identified in the Pipe nebula (circles) from Alves et al. (2007). The radii of the circles are proportional to the core radii determined by the wavelet decomposition analysis.
Results
After obtaining what they claim to be a mostly complete sample of cores, the authors calculate the mass distribution for them. This is done by converting dust extinction to a dust column density. This gives a dust mass for each clump, which can then be extrapolated to a total mass by assuming a set dust fraction. The result of this is shown in Fig. 6 below. The core mass function they obtain is qualitatively similar in shape to the IMF, but scaled by a factor of ~4 in mass. The analysis is only qualitative: no statistics are done and no functions are fit to the data. The authors claim that this result evinces a universal star formation efficiency of ~30%, and that this is in good agreement with the value calculated analytically by Shu et al. (2004) and with numerical simulations. This is again only a qualitative similarity, however.
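For concreteness, here is a hedged sketch (my own, using commonly adopted conversion factors rather than the paper's exact calibration) of how an extinction measurement turns into a core mass:

```python
import math

# Assumed conversion factors (commonly used values; not taken from the paper)
NH_PER_AK = 1.7e22            # total H column (cm^-2) per magnitude of K-band extinction
MASS_PER_H = 1.4 * 1.673e-24  # grams of gas per H nucleon, including helium
pc = 3.086e18                 # cm
M_sun = 1.989e33              # g

def core_mass_msun(mean_AK, radius_pc):
    """Mass of a core with mean extinction <A_K> over a projected circular patch of given radius."""
    N_H = NH_PER_AK * mean_AK               # cm^-2
    area = math.pi * (radius_pc * pc) ** 2  # cm^2
    return N_H * MASS_PER_H * area / M_sun

# Example: <A_K> = 0.5 mag over a 0.1 pc radius core -> a few solar masses
print(f"{core_mass_msun(0.5, 0.1):.1f} M_sun")
```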
We should also note that the IMF is hypothesized to be a log-normal distribution. This sort of distribution can arise out of any process that depends multiplicatively on many independent random factors. Thus the fact that dense cores have a mass function that is a scaled version of the IMF is not necessarily good evidence that they share a simple causal link, in the same way that two variables both being normally distributed does not mean that they are in any way related.
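A quick numerical illustration of that caveat: multiplying many independent random factors produces a log-normal distribution regardless of any star-formation physics.

```python
import numpy as np

rng = np.random.default_rng(42)

# Each "mass" is a product of 20 independent factors drawn from a boring uniform
# distribution -- no physics at all, yet log(mass) comes out approximately Gaussian.
factors = rng.uniform(0.5, 1.5, size=(100000, 20))
masses = factors.prod(axis=1)

log_m = np.log(masses)
print(f"mean(log M) = {log_m.mean():.2f}, std(log M) = {log_m.std():.2f}")
# A histogram of log_m would look Gaussian, i.e. the masses are ~log-normally distributed.
```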

Figure 6: The mass function of dense molecular cores (points) and the IMF (solid grey line). The dotted gray line is the IMF with mass argument scaled by a factor of 4. The authors note the qualitative agreement, but do not perform any detailed analysis.
Conclusions
Based on this, the authors conclude that there is no present need for a favored mass scale in star formation as an explanation of the IMF. Everything can be explained by the distribution produced by dense cloud cores as they are collapsing. There are a few points of caution, however. This is a study of only one cluster, and the data are analyzed using an opaque algorithm that is not publicly available. Additionally, the distributions are not compared statistically (such as with a KS test), so we have only qualitative similarity. It would be interesting to see these results replicated for a different cluster using a more transparent statistical method.
References
Bok, B. J., Sim, M. E., & Hawarden, T. G. 1977, Nature, 266, 145
Krumholz, M. R. 2011, American Institute of Physics Conference Series, 1386, 9
Lombardi, M., Lada, C. J., & Alves, J. 2008, A&A, 489, 143
Krumholz, M. R., & Tan, J. C. 2007, ApJ, 654, 304
Alves, J., Lombardi, M., & Lada, C. J. 2007, A&A, 462, L17
Lombardi, M., Alves, J., & Lada, C. J. 2006, A&A, 454, 781
Larson, R. B. 2005, MNRAS, 359, 211
Shu, F. H., Li, Z.-Y., & Allen, A. 2004, Star Formation in the Interstellar Medium: In Honor of David Hollenbach, 323, 37
Onishi, T., Mizuno, A., Kawamura, A., Tachihara, K., & Fukui, Y. 2002, ApJ, 575, 950
Tachihara, K., Onishi, T., Mizuno, A., & Fukui, Y. 2002, A&A, 385, 909
Portier-Fozzani, F., Vandame, B., Bijaoui, A., Maucherat, A. J., & EIT Team 2001, Sol. Phys., 201, 271
Shu, F. H., Adams, F. C., & Lizano, S. 1987, ARA&A, 25, 23
Salpeter, E. E. 1955, ApJ, 121, 161
CHAPTER: The Virial Theorem
In Book Chapter on March 12, 2013 at 3:21 pm
(Transcribed by Bence Beky). See also these slides from lecture
See Draine pp 395-396 and appendix J for more details.
The Virial Theorem provides insight about how a volume of gas subject to many forces will evolve. Let’s start with virial equilibrium. For a volume of gas bounded by a surface S, the virial theorem reads (neglecting surface terms)

$$\frac{1}{2}\frac{d^2 I}{dt^2} = 2\mathcal{T} + 3\Pi + \mathcal{B} + \mathcal{W};$$

see Spitzer pp. 217–218. Here I is the moment of inertia:

$$I = \int_V \rho\, r^2\, dV.$$

$\mathcal{T}$ is the bulk kinetic energy of the fluid (macroscopic kinetic energy):

$$\mathcal{T} = \frac{1}{2}\int_V \rho\, |\mathbf{v}|^2\, dV.$$

$\Pi$ is 2/3 of the random kinetic energy of thermal particles (molecular motion), or 1/3 of the random kinetic energy of relativistic particles (microscopic kinetic energy):

$$\Pi = \int_V P\, dV.$$

$\mathcal{B}$ is the magnetic energy within S:

$$\mathcal{B} = \frac{1}{8\pi}\int_V B^2\, dV,$$

and $\mathcal{W}$ is the total gravitational energy of the system if masses outside S don’t contribute to the potential:

$$\mathcal{W} = -\int_V \rho\, \mathbf{r}\cdot\nabla\Phi\, dV.$$

Among all these terms, the most used ones are $\mathcal{T}$, $\mathcal{B}$ and $\mathcal{W}$. But most often the equation is just quoted as $2\mathcal{T} + \mathcal{W} = 0$, with $\mathcal{T}$ then understood to include all of the kinetic energy. Note that the virial theorem always holds; inapplicability is only a problem when important terms are omitted.
This kind of simple analysis is often used to determine how bound a system is, and predict its future, e.g. collapse, expansion or evaporation. Specific examples will show up later in the course, including instability analyses.
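As a concrete example of such an analysis, here is a minimal sketch (with assumed example numbers) of the commonly used virial parameter, which is essentially the ratio $2\mathcal{T}/|\mathcal{W}|$ for a uniform sphere:

```python
import math

G = 6.674e-8       # cgs
pc = 3.086e18      # cm
M_sun = 1.989e33   # g

def virial_parameter(sigma_kms, R_pc, M_msun):
    """alpha_vir = 5 sigma^2 R / (G M), i.e. ~2T/|W| for a uniform sphere.
    alpha ~ 1: roughly virialized; alpha >> 1: unbound (absent external pressure)."""
    sigma = sigma_kms * 1e5
    return 5.0 * sigma ** 2 * (R_pc * pc) / (G * M_msun * M_sun)

# Example: a core with sigma = 0.2 km/s, R = 0.1 pc, M = 5 M_sun (assumed numbers)
print(f"alpha_vir = {virial_parameter(0.2, 0.1, 5.0):.2f}")
```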
The virial theorem as Chandrasekhar and Fermi formulated it in 1953 uses a different notation, but expresses the same idea, which is very useful in terms of the ISM.
Detecting the phases of the ISM: wild ideas
In Journal Club, Journal Club 2013 on March 7, 2013 at 7:45 pm
These are the ideas reported by the student groups in our 7 March discussion of McKee & Ostriker (1977).
Group 1
- Use line ratios to discriminate between CNM/WNM
- Use SNR light echoes to determine dust 3D structure
Group 2
- The key is to disentangle v_thermal from v_bulk
- If the velocity dispersion is dominated by thermal motion, then $v \propto m^{-1/2}$, and therefore different species should have different line widths (see the sketch after this list). If instead you’re looking at a cold cloud with a velocity distribution dominated by bulk motion, the line width will be independent of the mass of the species.
- Use different distributions along a line of sight: if the implied temperature variance changes as a function of distance along the line of sight, then it may be bulk velocity; if it doesn’t, you’re looking at thermal velocity.
- Use a face-on spiral galaxy
- Use dust content as proxy, since it will only sublimate at high temperatures
- Use hyperfine lines in a transition which have different optical depths to look into different shells of a cloud
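Here is the sketch referred to above (made-up line widths, purely illustrative): with two species of different mass, $\sigma_{\rm obs}^2 = kT/m + \sigma_{\rm nt}^2$ can be solved for the temperature and the common non-thermal dispersion.

```python
k_B = 1.381e-16   # erg/K
m_H = 1.673e-24   # g

def separate_thermal_turbulent(sigma1_kms, m1_amu, sigma2_kms, m2_amu):
    """Solve sigma_i^2 = kT/m_i + sigma_nt^2 for T and sigma_nt, given two species."""
    s1, s2 = (sigma1_kms * 1e5) ** 2, (sigma2_kms * 1e5) ** 2
    inv_m1, inv_m2 = 1.0 / (m1_amu * m_H), 1.0 / (m2_amu * m_H)
    T = (s1 - s2) / (k_B * (inv_m1 - inv_m2))
    sigma_nt = (s1 - k_B * T * inv_m1) ** 0.5 / 1e5
    return T, sigma_nt

# Made-up widths: H (1 amu) line broader than NH3 (17 amu) line
T, sig_nt = separate_thermal_turbulent(0.55, 1.0, 0.25, 17.0)
print(f"T ~ {T:.0f} K, sigma_nt ~ {sig_nt:.2f} km/s")
```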
Group 3
- conduct a large survey of 21 cm emission
- goal: see whether WNM is a distinct phase, or just the product of several CNM clouds moving quickly
- method: try to distinguish between line widths and line shifts
- need very high-resolution spectrograph!
- look for dust towards a region that might be WNM
- dust sublimates at ~2000 K
- so, dust would exist if the region is actually CNM, but not if it’s WNM
- probe a cloud in a “layered” approach
- test the proposed structure of CNM core with WNM, WIM in successive layers
- observe at a range of wavelengths that probe different layers (perhaps at a range of wavelengths that become optically thick at different layers)
- collaborate with other alien astronomers
- get different lines of sight!
Group 4
- 21-cm observations at different galactic latitudes
- count clouds in a volume to get a filling factor (use CO or IR emission from dust)
- look at other galaxies
- look for bubbles along the line of sight
CHAPTER: Ion-Neutral Reactions
In Book Chapter on March 7, 2013 at 3:20 pm
(updated for 2013)
In ion-neutral reactions, the neutral atom is polarized by the electric field of the ion, so that the interaction potential is

$$U(r) = -\frac{1}{2}\,\alpha E^2 = -\frac{\alpha Z^2 e^2}{2 r^4},$$

where $E$ is the electric field due to the charged particle (of charge $Ze$), $p = \alpha E$ is the induced dipole moment in the neutral particle (determined by quantum mechanics), and $\alpha$ is the polarizability, which defines $p = \alpha E$ for a neutral atom in a uniform static electric field. See Draine, section 2.4 for more details.
This interaction can take strong or weak forms. We distinguish between the two cases by considering b, the impact parameter. Recall that the reduced mass of a 2-body system is $\mu = m_1 m_2/(m_1+m_2)$. In the weak regime, the interaction energy is much smaller than the kinetic energy of the reduced mass:

$$\frac{\alpha Z^2 e^2}{2 b^4} \ll \frac{1}{2}\mu v^2.$$

In the strong regime, the opposite holds:

$$\frac{\alpha Z^2 e^2}{2 b^4} \gg \frac{1}{2}\mu v^2.$$

The spatial scale which separates these two regimes corresponds to $b_{\rm crit}$, the critical impact parameter. Setting the two sides equal, we see that

$$b_{\rm crit} = \left(\frac{\alpha Z^2 e^2}{\mu v^2}\right)^{1/4}.$$
The effective cross section for ion-neutral interactions is

$$\sigma_{ni} \approx \pi b_{\rm crit}^2 = \pi Z e \left(\frac{\alpha}{\mu}\right)^{1/2}\frac{1}{v}.$$
Deriving an interaction rate is trickier than for neutral-neutral collisions because $n_i \neq n_n$ in general. So, let’s leave out an explicit n and calculate a rate coefficient k instead, in ${\rm cm^3\,s^{-1}}$:

$$k = \langle \sigma_{ni}\, v \rangle$$

(although really $\sigma_{ni} \propto 1/v$, so k is largely independent of v). Combining with the equation above, we get the ion-neutral scattering rate coefficient

$$k_{ni} \approx \pi Z e \left(\frac{\alpha}{\mu}\right)^{1/2}.$$

As an example, for a typical ion-neutral pair we get $k_{ni} \approx 2\times 10^{-9}\ {\rm cm^3\,s^{-1}}$. This is about the rate for most ion-neutral exothermic reactions. This gives us a mean time between collisions of

$$t_{ni} \approx \frac{1}{k_{ni}\, n}.$$
So, if $n \approx 1\ {\rm cm^{-3}}$, the average time between collisions is 16 years. Recall that, for neutral-neutral collisions in the diffuse ISM, we had $\sim 500$ years. Ion-neutral collisions are much more frequent in most parts of the ISM due to the larger interaction cross section.
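A quick numerical check of these numbers, as a sketch with an assumed example pair (a proton hitting H$_2$, polarizability $\alpha \approx 0.8\times 10^{-24}\ {\rm cm^3}$); the $2\pi$ prefactor is the standard Langevin result, which differs from the rough estimate above only by an order-unity factor:

```python
import math

e_esu = 4.803e-10   # electron charge (esu)
m_H = 1.673e-24     # g
yr = 3.156e7        # s

def langevin_rate(alpha_cm3, mu_amu, Z=1):
    """Ion-neutral (Langevin-type) rate coefficient k ~ 2 pi Z e sqrt(alpha/mu), in cm^3/s."""
    return 2.0 * math.pi * Z * e_esu * math.sqrt(alpha_cm3 / (mu_amu * m_H))

def collision_time_yr(k, n):
    """Mean time between collisions for rate coefficient k (cm^3/s) and density n (cm^-3)."""
    return 1.0 / (k * n) / yr

# Assumed example: H+ colliding with H2 (reduced mass ~ 2/3 amu)
k = langevin_rate(0.8e-24, 2.0 / 3.0)
print(f"k ~ {k:.1e} cm^3/s; at n = 1 cm^-3, t ~ {collision_time_yr(k, 1.0):.0f} yr")
```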
CHAPTER: Neutral-Neutral Interactions
In Book Chapter on March 7, 2013 at 3:19 pm
(updated for 2013)
Short-range forces involving “neutral” particles (neutral-ion, neutral-neutral) are inherently quantum-mechanical. Neutral-neutral interactions are very weak until the electron clouds overlap (at separations of order $10^{-8}$ cm, i.e. about an angstrom). We can therefore treat these particles as hard spheres. The collisional cross section for two species is a circle of radius $r_1 + r_2$, since that is the closest two particles can get without touching.
What does that collision rate imply? Consider the mean free path:

$$\lambda = \frac{1}{n\,\sigma}.$$

This is about 100 AU in typical ISM conditions ($n \approx 1\ {\rm cm^{-3}}$, $\sigma \approx 10^{-15}\ {\rm cm^2}$).
In gas at temperature T, the mean particle velocity is given by the 3-d kinetic energy: $\frac{1}{2} m v^2 = \frac{3}{2} k T$, or

$$v = \sqrt{\frac{3 k T}{m}},$$

where $m$ is the mass of the neutral particle. The mean free path and velocity allow us to define a collision timescale:

$$t_{\rm coll} = \frac{\lambda}{v} = \frac{1}{n\,\sigma\, v}.$$
- For (n, T) = (…), the collision time is 500 years
- For (n, T) = (…), the collision time is 1.7 months
- For (n, T) = (…), the collision time is 45 years
So we see that density matters much more than temperature in determining the frequency of neutral-neutral collisions.
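A small sketch of these estimates; the hard-sphere cross section and the example (n, T) combinations below are my own assumptions for illustration, not necessarily the ones behind the numbers quoted above:

```python
import math

k_B = 1.381e-16   # erg/K
m_H = 1.673e-24   # g
AU = 1.496e13     # cm
yr = 3.156e7      # s

SIGMA = 1e-15     # assumed hard-sphere cross-section (~pi * (2 Angstrom)^2), cm^2

def mean_free_path_AU(n):
    return 1.0 / (n * SIGMA) / AU

def collision_time_yr(n, T, m_amu=1.0):
    v = math.sqrt(3.0 * k_B * T / (m_amu * m_H))   # speed from (3/2) kT = (1/2) m v^2
    return 1.0 / (n * SIGMA * v) / yr

print(f"lambda ~ {mean_free_path_AU(1.0):.0f} AU at n = 1 cm^-3")
print(f"t_coll ~ {collision_time_yr(30, 100):.1f} yr       (assumed cold atomic gas: n=30, T=100 K)")
print(f"t_coll ~ {collision_time_yr(1e4, 10, 2.0) * 12:.1f} months (assumed molecular gas: n=1e4, T=10 K)")
```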
CHAPTER: Excitation Processes: Collisions
In Book Chapter on March 7, 2013 at 3:18 pm
(updated for 2013)
Collisional coupling means that the gas can be treated in the fluid approximation, i.e. we can treat the system on a macrophysical level.
Collisions are of key importance in the ISM:
- cause most of the excitation
- can cause recombinations (electron + ion)
- lead to chemical reactions
Three types of collisions
- Coulomb-force-dominated ($r^{-1}$ potential): electron-ion, electron-electron, ion-ion
- Ion-neutral: the induced dipole in the neutral atom leads to an $r^{-4}$ potential; e.g. electron-neutral scattering
- Neutral-neutral: van der Waals forces -> $r^{-6}$ potential; very low cross-section
We will discuss (3) and (2) below; for ion-electron and ion-ion collisions, see Draine Ch. 2.
In general, we will parametrize the interaction rate between two bodies A and B as follows:

$${\rm rate\ per\ unit\ volume} = n_A\, n_B\, \langle \sigma v \rangle.$$

In this equation, $k = \langle \sigma v \rangle = \int \sigma(v)\, v\, f(v)\, dv$ is the collision rate coefficient in ${\rm cm^3\,s^{-1}}$, where $\sigma(v)$ is the velocity-dependent cross section and $f(v)$ is the particle velocity distribution, i.e. the probability that the relative speed between A and B is v. For the Maxwellian velocity distribution,

$$f(v)\, dv = 4\pi \left(\frac{\mu}{2\pi k T}\right)^{3/2} v^2\, e^{-\mu v^2/2kT}\, dv,$$

where $\mu$ is the reduced mass. The center-of-mass energy is $E = \frac{1}{2}\mu v^2$, and the distribution can just as well be written in terms of the energy distribution of particles, $f(E)$. Since $dE = \mu v\, dv$, we can rewrite the collision rate coefficient in terms of energy as

$$k = \left(\frac{8}{\pi \mu}\right)^{1/2} (kT)^{-3/2} \int_0^\infty \sigma(E)\, E\, e^{-E/kT}\, dE.$$
These collision coefficients can occasionally be calculated analytically (via classical or quantum mechanics), and can in other situations be measured in the lab. The collision coefficients often depend on temperature. For practical purposes, many databases tabulate collision rates for different molecules and temperatures (e.g., the LAMBDA database).
For more details, see Draine, Chapter 2. In particular, he discusses 3-body collisions relevant at high densities.
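For a concrete (hypothetical) example of how a tabulated or measured $\sigma(E)$ turns into a rate coefficient, here is a sketch that evaluates the integral above numerically and checks it against the constant-cross-section limit $\langle\sigma v\rangle = \sigma\,\sqrt{8kT/\pi\mu}$:

```python
import math

k_B = 1.381e-16   # erg/K
m_H = 1.673e-24   # g

def rate_coefficient(sigma_of_E, T, mu_amu, n_steps=2000):
    """<sigma v> = sqrt(8/(pi mu)) (kT)^(-3/2) * Int sigma(E) E exp(-E/kT) dE,
    evaluated by simple midpoint integration out to 20 kT."""
    mu = mu_amu * m_H
    kT = k_B * T
    prefac = math.sqrt(8.0 / (math.pi * mu)) * kT ** -1.5
    dE = 20.0 * kT / n_steps
    integral = sum(sigma_of_E((i + 0.5) * dE) * (i + 0.5) * dE
                   * math.exp(-(i + 0.5) * dE / kT) * dE for i in range(n_steps))
    return prefac * integral

# Sanity check with a constant cross-section (assumed value 1e-15 cm^2):
sigma0, T, mu = 1e-15, 100.0, 1.0
print(rate_coefficient(lambda E: sigma0, T, mu))                       # numerical
print(sigma0 * math.sqrt(8 * k_B * T / (math.pi * mu * m_H)))          # analytic sigma*<v>
```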
CHAPTER: Definitions of Temperature
In Book Chapter on March 7, 2013 at 3:27 am
(updated for 2013)
The term “temperature” describes several different quantities in the ISM, and in observational astronomy. Only under idealized conditions (i.e. thermodynamic equilibrium, the Rayleigh Jeans regime, etc.) are (some of) these temperatures equivalent. For example, in stellar interiors, where the plasma is very well-coupled, a single “temperature” defines each of the following: the velocity distribution, the ionization distribution, the spectrum, and the level populations. In the ISM each of these can be characterized by a different “temperature!”
Brightness Temperature
the temperature of a blackbody that reproduces a given specific intensity (flux density per unit solid angle) at a specific frequency, such that

$$B_\nu(T_B) = I_\nu.$$

Note: units for $I_\nu$ are ${\rm erg\ s^{-1}\ cm^{-2}\ Hz^{-1}\ sr^{-1}}$.
This is a fundamental concept in radio astronomy. Note that the above definition assumes that the index of refraction in the medium is exactly 1.
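A short sketch (assumed example numbers echoing the NH$_3$ discussion earlier: 14 mJy in a 6″ beam at 23.7 GHz) of the Rayleigh-Jeans brightness temperature corresponding to a given flux density per beam, assuming the emission fills the beam:

```python
import math

k_B = 1.381e-16      # erg/K
c = 2.998e10         # cm/s
Jy = 1e-23           # erg s^-1 cm^-2 Hz^-1

def brightness_temperature(S_mJy, nu_GHz, beam_fwhm_arcsec):
    """Rayleigh-Jeans brightness temperature (K) for flux density S filling a Gaussian beam."""
    nu = nu_GHz * 1e9
    fwhm_rad = beam_fwhm_arcsec / 206265.0
    omega = math.pi * fwhm_rad ** 2 / (4.0 * math.log(2.0))   # Gaussian beam solid angle
    I_nu = S_mJy * 1e-3 * Jy / omega                          # specific intensity
    return I_nu * c ** 2 / (2.0 * k_B * nu ** 2)

# Example (hypothetical numbers): 14 mJy per 6" beam at 23.7 GHz -> ~0.8 K
print(f"{brightness_temperature(14.0, 23.7, 6.0):.2f} K")
```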
Effective Temperature
$T_{\rm eff}$ (also called $T_{\rm rad}$, the radiation temperature) is defined by

$$\int F_\nu\, d\nu = \sigma T_{\rm eff}^4,$$

which is the integrated flux of a blackbody of temperature $T_{\rm eff}$. $\sigma$ is the Stefan-Boltzmann constant.
Color Temperature
$T_c$ is defined by the slope (in log-log space) of an SED. Thus $T_c$ is the temperature of a blackbody that has the same ratio of fluxes at two wavelengths as a given measurement. Note that $T_c = T_B = T_{\rm eff}$ for a perfect blackbody.
Kinetic Temperature
$T_k$ is the temperature that a particle of gas would have if its Maxwell-Boltzmann velocity distribution reproduced the width of a given line profile. It characterizes the random velocity of particles. For a purely thermal gas, the line profile is given by

$$I(\nu) = I_0\, e^{-(\nu-\nu_0)^2/2\sigma_\nu^2},$$

where $\sigma_\nu = \frac{\nu_0}{c}\sqrt{kT_k/m}$ in frequency units, or $\sigma_v = \sqrt{kT_k/m}$ in velocity units.
In the “hot” ISM $T_k$ is characteristic, but when $\Delta v_{\rm observed} \gg \Delta v_{\rm thermal}$ (where the $\Delta v$ are the Doppler full widths at half-maximum [FWHM]) then $T_k$ does not represent the random velocity distribution. Examples include regions dominated by turbulence.
$T_k$ can be different for neutrals, ions, and electrons because each can have a different Maxwellian distribution. For electrons, $T_k = T_e$, the electron temperature.
Ionization Temperature
$T_I$ is the temperature which, when plugged into the Saha equation, gives the observed ratio of ionization states.
Excitation Temperature
$T_{\rm ex}$ is the temperature which, when plugged into the Boltzmann distribution, gives the observed ratio of two energy states. Thus it is defined by

$$\frac{n_k}{n_j} = \frac{g_k}{g_j}\, e^{-h\nu_{jk}/kT_{\rm ex}}.$$

Note that in stellar interiors, $T_k = T_{\rm ex} = T_c$. In this room, $T_k \approx 300$ K, but these various temperatures are not all equal.
Spin Temperature
$T_s$ is a special case of $T_{\rm ex}$ for spin-flip transitions. We’ll return to this when we discuss the important 21-cm line of neutral hydrogen.
Bolometric temperature
$T_{\rm bol}$ is the temperature of a blackbody having the same mean frequency as the observed continuum spectrum. For a blackbody, $T_{\rm bol} = T_{\rm eff}$. This is a useful quantity for young stellar objects (YSOs), which are often heavily obscured in the optical and have infrared excesses due to the presence of a circumstellar disk.
Antenna temperature
$T_A$ is a directly measured quantity (commonly used in radio astronomy) that incorporates radiative transfer and possible losses between the source emitting the radiation and the detector. In the simplest case,

$$T_A = \eta\, T_B\left(1 - e^{-\tau}\right),$$

where $\eta$ is the telescope efficiency (a numerical factor from 0 to 1) and $\tau$ is the optical depth.
CHAPTER: Important Properties of Local Thermodynamic Equilibrium
In Book Chapter on March 7, 2013 at 3:25 am
(updated for 2013)
For actual local thermodynamic equilibrium (not ETE), the following are important to keep in mind:
- Detailed balance: transition rate from j to k = rate from k to j (i.e. no net change in particle distribution)
- LTE is equivalent to ETE when $b_j = 1$, i.e. when collisions dominate over radiative processes.
- LTE is only an approximation, good under specific conditions.
- The radiation intensity produced is not the blackbody field you would have in true thermodynamic equilibrium.
- The radiation is usually much weaker than the Planck function, which means levels are not populated the way they would be in full thermodynamic equilibrium.
- LTE assumption does not mean the Saha equation is applicable since radiative processes (not collisions) dominate in many ISM cases where LTE is applicable.
The “True” Column Density Distribution in Star-Forming Molecular Clouds
In Journal Club 2013 on March 5, 2013 at 8:25 pm
CHAPTER: The Saha Equation
In Book Chapter on March 5, 2013 at 3:21 am
(updated for 2013)
How do we deal with the distribution over different states of ionization r? In thermodynamic equilibrium, the Saha equation gives

$$\frac{n_{r+1}\, n_e}{n_r} = \frac{2 f_{r+1}}{f_r}\left(\frac{2\pi m_e k T}{h^2}\right)^{3/2} e^{-\Phi_r/kT},$$

where $f_{r+1}$ and $f_r$ are the partition functions as discussed in the previous section. The partition function for electrons is given by

$$f_e = 2\left(\frac{2\pi m_e k T}{h^2}\right)^{3/2}\frac{1}{n_e}.$$

For a derivation of this, see pages 103-104 of this handout from Bowers and Deeming.
If $f_r$ and $f_{r+1}$ are approximated by the first terms in their sums (i.e. if the ground state dominates their level populations), then

$$\frac{n_{r+1}\, n_e}{n_r} = \frac{2 g_{r+1,1}}{g_{r,1}}\left(\frac{2\pi m_e k T}{h^2}\right)^{3/2} e^{-\Phi_r/kT},$$

where $\Phi_r$ is the energy required to ionize $X^{(r)}$ from the ground (j = 1) level. Ultimately, this is just a function of $n_e$ and $T$. This assumes that the only relevant ionization process is via thermal collision (i.e. shocks, strong ionizing sources, etc. are ignored).
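A minimal sketch (pure hydrogen, ground-state partition functions only, assumed example conditions) that solves the Saha equation for the ionization fraction; note how sharply the transition happens, and at a surprisingly low temperature for such a low density:

```python
import math

k_B_erg = 1.381e-16     # erg/K
k_B_eV = 8.617e-5       # eV/K
h = 6.626e-27           # erg s
m_e = 9.109e-28         # g

def saha_ionization_fraction(T, n_tot, chi_eV=13.6):
    """Ionization fraction x = n_p/n_tot for pure hydrogen in thermodynamic equilibrium,
    using ground-state statistical weights (2 for H I, 1 for H II).
    Solves x^2/(1-x) = A/n_tot, a quadratic in x."""
    A = (2.0 * math.pi * m_e * k_B_erg * T / h ** 2) ** 1.5 * math.exp(-chi_eV / (k_B_eV * T))
    b = A / n_tot
    return (-b + math.sqrt(b * b + 4.0 * b)) / 2.0

for T in (3000, 3500, 4000, 5000):   # assumed n = 1 cm^-3
    print(f"T = {T:5d} K: x = {saha_ionization_fraction(T, 1.0):.3e}")
```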
CHAPTER: Spitzer Notation
In Book Chapter on March 5, 2013 at 3:19 am
(updated for 2013)
We will use the notation from Spitzer (1978). See also Draine, Ch. 3. We represent the density of a state j as $n_j\left(X^{(r)}\right)$, where
- n: particle density
- j: quantum state
- X: element
- (r): ionization state
In his book, Spitzer defines something called “Equivalent Thermodynamic Equilibrium” or “ETE”. In ETE, $n_j^*$ gives the “equivalent” density in state j. The true (observed) value is $n_j$. He then defines the ratio of the true density to the ETE density to be

$$b_j \equiv \frac{n_j}{n_j^*}.$$

This quantity approaches 1 when collisions dominate over ionization and recombination. For LTE, $b_j = 1$ for all levels. The level population is then given by the Boltzmann equation:
$$\frac{n_j}{n_k} = \frac{g_j}{g_k}\, e^{-(E_j - E_k)/kT},$$

where $E_j$ and $g_j$ are the energy and statistical weight (degeneracy) of level j, ionization state r. The exponential term is called the “Boltzmann factor” and determines the relative probability for a state.
The term “Maxwellian” describes the velocity distribution of a 3-D gas. “Maxwell-Boltzmann” is a special case of the Boltzmann distribution for velocities.
Using our definition of b and dropping the “r” designation,

$$\frac{n_k}{n_j} = \frac{b_k}{b_j}\,\frac{g_k}{g_j}\, e^{-h\nu_{jk}/kT},$$

where $\nu_{jk}$ is the frequency of the radiative transition from k to j. We will use the convention that k labels the upper level and j the lower, such that $E_k - E_j = h\nu_{jk} > 0$.
To find the fraction of atoms of species $X^{(r)}$ excited to level j, define

$$n\left(X^{(r)}\right) = \sum_k n_k\left(X^{(r)}\right)$$

as the particle density of $X^{(r)}$ in all states. Then, in ETE,

$$\frac{n_j^*}{n\left(X^{(r)}\right)} = \frac{g_j\, e^{-E_j/kT}}{\sum_k g_k\, e^{-E_k/kT}}.$$

Define $f_{X^{(r)}}(T) = \sum_k g_k\, e^{-E_k/kT}$, the “partition function” for species $X^{(r)}$, to be the denominator of the RHS of the above equation. Then we can write, more simply,

$$\frac{n_j^*}{n\left(X^{(r)}\right)} = \frac{g_j\, e^{-E_j/kT}}{f_{X^{(r)}}(T)}$$

to be the fraction of particles that are in state j. By computing this for all j we now know the distribution of level populations for ETE.
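A tiny sketch of this bookkeeping (the two-level example values are chosen to resemble the 21 cm spin-flip levels mentioned above, and are assumptions for illustration):

```python
import math

k_B_eV = 8.617e-5   # eV/K

def level_fractions(energies_eV, degeneracies, T):
    """ETE level populations: f_j = g_j exp(-E_j/kT) / f(T), with f(T) the partition function."""
    weights = [g * math.exp(-E / (k_B_eV * T)) for E, g in zip(energies_eV, degeneracies)]
    Z = sum(weights)                      # the partition function f(T)
    return [w / Z for w in weights]

# Toy two-level system roughly like the 21 cm hyperfine splitting:
# E = 5.9e-6 eV, statistical weights 1 (lower) and 3 (upper)
fracs = level_fractions([0.0, 5.9e-6], [1, 3], T=100.0)
print(f"lower: {fracs[0]:.3f}, upper: {fracs[1]:.3f}")   # -> 3:1 ratio when kT >> E
```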