\documentclass[11pt,preprint]{aastex}
%compact captions %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%\usepackage[font=small,labelfont=bf, textfont=it, justification=justified]{caption}
%\DeclareCaptionFormat{ruleformat}{\baselineskip0.2cm\hrulefill\\\noindent#1#2#3{\hrulefill}}
%\captionsetup[figure]{format=ruleformat}
\setlength{\textfloatsep}{0pt}
%\setlength{\abovecaptionskip}{-10pt}
\setlength{\floatsep}{2pt}
\usepackage{verbatim}
\usepackage{rotating}
\usepackage{lscape}
\usepackage{subfigure}
\usepackage{natbib}
\usepackage{arydshln}
\usepackage[colorlinks=true, linkcolor=blue, urlcolor=blue, citecolor=blue, pdftex, bookmarks=true, linktocpage=true, hyperindex=true]{hyperref}
\usepackage{breakurl}
\providecommand{\xxxx}{{\color{red} ~XXXX~}}
\providecommand{\xsede}{XSEDE~}
\providecommand{\eg}{e.g.,~}
\providecommand{\ie}{\textit{i.e.},~}
\providecommand{\msun}{M$_\odot$~}
\newcommand{\transpose}[1]{{#1}^{\!\mathsf T}}
\newcommand{\given}{\,|\,}
\renewcommand{\det}[1]{||{#1}||}
\newlength\tindent
\setlength{\tindent}{\parindent}
\setlength{\parindent}{0pt}
\renewcommand{\indent}{\hspace*{\tindent}}
\begin{document}
\section*{\Large Measuring the High-Mass Stellar Initial Mass Function in the Distant Universe}
\begin{center}
Principal Investigator: Dr. Daniel Weisz\\
University of Washington
\end{center}
%{\small
%{\bf abstract:}
%Four years ago, we initiated the single most ambitious mapping project
%ever taken with the Hubble Space Telescope: to produce a map of every
%luminous star in the nearby Andromeda Galaxy (M31) in ultraviolet,
%optical, and near-infrared wavelengths. These $>$117,000,000 stars
%encode the history of the galaxy and the rich, detailed physics that
%drives stellar and galactic evolution. Using our first successful XSEDE
%program, our collaboration pioneered spatially mapping the recent
%history of star formation throughout M31, making a ``movie" of the past
%star formation of the galaxy during the last half a Gigayear. We now
%propose to extend this movie back to the oldest ages by using a
%forward modeling approach that reproduces the observed colors and
%brightnesses of stars in small cells across the galaxy. We have
%developed both a parameterized model that responds to the added
%complexity of extending our age coverage, and an optimized search
%strategy that allows us to efficiently scan parameter space. We have
%matched this project to specific XSEDE resources, have meticulously
%optimized our strategy, and have benchmarked our specific analysis
%pipeline. Combining the computational power of XSEDE with the
%unprecedented number of stars cataloged by our large area survey will
%allow us to derive the most precise spatially-resolved star formation
%history of M31 throughout cosmic time, probing the earliest epochs of
%galaxy formation }
\section{Background \& Motivation}
\label{sec:overview}
A primary goal of astrophysics is to decipher the history of the universe. Among the most promising approaches is the study of galaxies across cosmic time. A galaxy's brightness and color encode information about its age, rate of star formation, and metal content, all of which trace the physical conditions of the universe. Thus, by surveying large collections of galaxies at different cosmic epochs, we are able to trace the evolution of the universe.
The use of galaxies as cosmological tools has proliferated over the past two decades. Large, systematic surveys such as the Sloan Digital Sky Survey (SDSS) and the Hubble Ultra Deep Field have provided precise flux measurements of millions of galaxies back to the genesis of galaxy formation around 13 billion years ago. The success of these programs has motivated the next generation of astronomical facilities such as the Large Synoptic Survey Telescope, which will survey more than 1 billion galaxies, and the James Webb Space Telescope, which promises to directly detect the formation of the first galaxies.
Yet, despite the wealth of galaxy observations, how we translate their observed light into physical quantities (e.g., mass, metallicity) remains rudimentary. Typical observations of a distant, unresolved galaxy consist of a handful of fluxes measured at different wavelengths that sample the galaxy's spectral energy distribution (SED). Physically motivated models of galaxies contain many more parameters than typical data points, resulting in significant degeneracies between the model parameters \citep[e.g.,][]{conroy2013}. To mitigate these degeneracies, key assumptions about the form of a galaxy's star formation history (SFH), dust content, and stellar initial mass function (IMF) are necessary.
The validity of these assumptions can usually be tested using detailed observations of nearby, resolved galaxies. For example, the dust distributions of hundreds of local galaxies have been used to establish analytic dust ``laws'', which also appear to provide good descriptions of increasingly distant systems \citep[e.g.,][]{calzetti2000}. In the same vein, the simplified SFH models appear to be reasonable approximations. However, similar tests have not been possible with the stellar IMF, making it the largest source of uncertainty in studies of the distant universe \citep[e.g.,][]{cool2008, conroy2013}.
The stellar IMF describes the relative distribution of stellar masses for a group of stars born at the same time. In principle, measuring the IMF is a straightforward exercise in counting the relative number of individual stars at different masses in a star cluster, i.e., a group of stars that formed at the same time and in the same location. However, two important factors complicate this measurement in practice. First is the need for resolved star observations. Even the high angular resolution of the Hubble Space Telescope can only resolve individual stars for IMF measurements in the nearest few galaxies, placing severe limits on the types of galaxies in which `gold standard' IMF measurements can be made. This forces us to pursue IMF studies in more distant galaxies, which leads to the second practical challenge. Measuring the IMF in unresolved systems is hampered by the strong degeneracy between SFH and the IMF \citep{miller1979, elmegreen2006}. A multi-age population can emulate fluctuations in the shape of the IMF, introducing ambiguity into IMF constraints from galaxies where individual stars cannot be resolved. As a result, the most reliable IMF measurements are those from direct star counts in the local universe. However, this sample includes only two galaxies: our own Milky Way and the Large Magellanic Cloud. The extrapolation of the IMF from these two systems to the broader universe remains shaky at best.
The consequences of relying on a local IMF to interpret the light of distant galaxies are unclear. On one hand, direct star counts in the MW and LMC do not show evidence of an IMF that depends systematically on local star-forming conditions \citep[e.g., gas density, metallicity, star formation intensity]{bastian2010}. This could either be because the IMF is ``universal'', i.e., insensitive to environment, or because the local universe has a limited dynamic range of star-forming environments. Thus, in the most conservative scenario, the IMF is universal and our interpretation of the distant universe is unlikely to dramatically change.
On the other hand, IMF studies in the slightly more distant universe have hinted at systematic variations that are environmentally dependent. For example, there have been hints that low-mass, low-metallicity systems form fewer high-mass stars than the current standard IMF \citep[e.g.,][]{lee2009, meurer2009}. If true, this departure from the standard IMF would require a drastic revision to our understanding of galaxy evolution \citep[e.g.,][]{pflammAltenburg2009}, and in particular of the very early universe, which is believed to exclusively host low-mass, low-metallicity systems. At the same time, these putative IMF variations rely on interpreting the total light of nearby but unresolved galaxies, and do not robustly account for degeneracies between IMF and SFH. Consequently, it is currently impossible to evaluate how well we know one of the most fundamental parameters necessary to understand the history of the universe.
\section{Measuring the High-Mass IMF in the Distant Universe}
To address the current shortcomings of IMF studies, we are developing a qualitatively new method to measure the IMF in unresolved, distant, star-forming galaxies. Due to their high luminosities, star-forming galaxies are preferentially detected at all cosmic epochs (as opposed to fainter, passive galaxies). In such systems, stars more massive than the Sun determine the galaxy SED. We therefore focus on the IMF in the mass regime above 1 \msun (``the high-mass IMF'').
Our approach to measuring the IMF in distant galaxies is through their young star cluster populations. In the local universe, counting stars as a function of their mass in young clusters is the optimal way to measure the IMF. By definition, clusters are single-age populations, which eliminates the degeneracy between IMF and SFH. However, the direct star count technique is limited to the local universe, and instead we must turn to other methods to infer the IMF in the distant universe.
For more distant clusters, we measure the IMF from their spectra. The full optical spectrum of a young ($<$ 50 Myr) star cluster contains significant information about the slope of the high-mass IMF. The shape of the blue continuum and the size of spectral features (e.g., nebular emission, the Balmer jump) are sensitive to the slope of the high-mass IMF. Traditional analyses of star clusters have essentially ignored this information; they rely on isolating key absorption features and modeling their relative intensities. The advantage of the conventional approach is that it avoids non-linear complexities that can affect the broad spectral shape (e.g., non-linear efficiencies in the detector, sky and background over/under subtraction). However, reducing the full spectrum to a set of spectrophotometric indices also eliminates sensitivity to the IMF \citep[e.g.,][]{koleva2008}, giving the false impression that spectral analysis of young clusters cannot yield IMF constraints.
In contrast, our approach is to forward model the full optical spectrum of a young star cluster. That is, we use state-of-the-art stellar population synthesis software to construct the expected spectrum of a young cluster and build a separate noise model to account for observational uncertainties introduced by the resolution of the telescope, inefficiency of the detector, atmospheric turbulence, sky line emission, etc. Many of these noise features are non-linear and require involved noise models (e.g., Gaussian Processes), as we describe in \S \ref{sec:calibration}. The result of this new approach to modeling is a complete model of the observed cluster spectrum, which will, for the first time, enable direct measurements of the high-mass IMF in star-forming galaxies outside the very local universe.
\subsection{Validating Spectroscopic IMF Constraints in Andromeda}
Our program currently consists of two phases. The first phase is development and validation of the methodology, and the second is the first robust measurement of the high-mass IMF outside the local universe.
Following successful trials of the code on simulated data, we are now validating our spectroscopic IMF constraints against `gold standard' IMF measurements from resolved star counts. As part of the Panchromatic Hubble Andromeda Treasury, a 4-year Hubble Space Telescope campaign that resolved $>$100 million stars in our closest neighbor, the Andromeda galaxy, PI Weisz has measured the IMF in $>$500 star clusters using resolved star counts. Concurrently, we have been using the \texttt{MMT} and \texttt{Keck} telescopes to acquire high-fidelity spectra of $\sim$ 50 of the most luminous young clusters in Andromeda in order to spectroscopically measure their IMFs and compare the results with the resolved star analysis. In Figure \ref{fig1}, we show an example dataset for a single cluster in Andromeda.
This project will provide an empirical validation of our spectroscopic technique. This step is key in both assessing the reliability of our results and in demonstrating the methodology to the broader astronomical community. Because we are modeling all physical properties of a cluster, we will compare constraints on all physical parameters (such as cluster age, mass, IMF slope) between the resolved star and spectroscopic methodologies. The development of this methodology alone promises to revolutionize the use of young star clusters for studying the distant universe.
\begin{figure}[htbp]
\begin{center}
\plotone{fig1.pdf}
\caption{An example of our dataset for a single cluster in Andromeda. The top panel shows the Hubble Space Telescope mosaic of Andromeda. The lower panels show: \textit{Left --} a resolved young cluster; \textit{Center --} the cluster's ``color-magnitude diagram''. Each point is a single star, which enables us to measure the cluster's ground-truth IMF by direct star counts; \textit{Right --} the optical spectrum of the cluster obtained with the \texttt{MMT}. The focus of this program is to recover the IMF from the optical spectrum and cross-validate it against resolved stars. With this verification, we can then pursue IMF studies in the distant universe, where resolved star studies are not possible. }
\label{fig1}
\end{center}
\end{figure}
\subsection{The First High-Mass IMF Measurement Outside the Local Group}
The second phase of our program is a pilot study of the IMF in NGC~628. NGC~628 is an ideal prototype system as it is a moderately distant (D $\sim$ 9 Mpc), face-on spiral galaxy that hosts a rich young cluster population (Larsen et al. 1999). It has an SFR that is $\sim$10 times higher than that of M31, meaning it is a fundamentally different environment from those in which the IMF has been studied locally. Additional benefits include an extensive collection of ancillary data (e.g., Hubble Space Telescope imaging, gas maps from the Very Large Array) that can be used to search for environmental variations of the IMF within NGC~628. Interestingly, it also hosts the highest frequency of recent supernovae (SNe; 3 in the last decade: 2002ap, 2003gd, 2013ej) of any galaxy within 10 Mpc, which could be due to an IMF that favors the formation of more massive stars than predicted by a standard IMF.
In late-2014, we obtained high signal-to-noise spectroscopy of $\sim$ 30 young clusters in NGC~628 using the \texttt{Keck} telescope and will measure the high-mass IMF of each cluster following the completion of the program's first phase. Using our NGC~628 results as a prototype, we are planning a systematic spectroscopic survey of young clusters in diverse star-forming galaxies in order to search for variations in the high-mass IMF.
%\subsection{Future Development}
%
%Although the primary these of our programs has been to the measure the high-mass IMF, the general spectroscopic modeling framework is applicable to a wide variety of astrophysical contexts. Of particular interest is in modeling the integrated light of galaxies with relaxed assumptions of the SFH. As discussed in \S \ref{XXX}, strong assumptions for the analytic form of the SFH are commonly used to mitigate degeneracies with other physical parameters. However, within the framework of this code, we can directly test the validity of these assumptions both on galaxies with only broad SED (currently most common) and those with integrated spectroscopy. Some similar explorations exist in the astronomical literature, but none at the level of detail we propose. Specifically, our flexible noise model is qualitatively new and maximizes the amount of information used in the modeling of clusters and galaxies, whereas conventional methods (e.g., spectrophotometric index modeling) only retain a small fraction of the total dataset. Using the general framework developed for IMF investigations, we plan to explore broader implications for the general modeling of distant galaxies.
\section{Potential Scientific Impact}
The high-mass IMF underpins virtually all of galaxy evolution and cosmological studies that rely on star-forming galaxies. Our program is therefore a gateway to transformative science: measuring the high-mass IMF in distant galaxies has never before been possible.
Our studies in Andromeda and NGC~628 are equally important. The comparison with resolved-star results in Andromeda is critical to verifying and establishing the validity of the technique, and it will also demonstrate the value of full spectral modeling to the broader astronomical community. Application to NGC~628 will be the first time the IMF has been measured outside of two very local galaxies. Both studies are critical to motivating larger spectroscopic surveys of clusters throughout the universe and to facilitating a shift toward IMF measurements that are directly relevant to the early universe.
In a broader sense, our analysis framework also provides a baseline by which astronomers from a variety of unrelated fields can improve spectroscopic modeling. Modeling a full spectrum is clearly applicable to cluster and galaxy astrophysics, and it has the potential to transform how stellar astrophysics is done with the Milky Way. This is particularly significant in light of the heavy investments in programs designed to spectroscopically survey the stellar contents of the Milky Way (e.g., APOGEE) in order to determine the history of our own galaxy.
%The long-term scientific impact of our project has the potential to be truly transformative. For example, if we find that the IMF varies with respect to environment, virtually all of galaxy astrophysics, chemical evolution of the universe, and galaxy-based cosmology will have to be revised. This program is the first step in a much longer quest to understand the broader evolution of the universe.
\section{Methodology}
Information about the properties of a stellar cluster -- its age, metallicity, total stellar mass, redshift, and reddening by dust among many others -- is contained in the detailed spectrum and the broadband SED of the cluster.
Our code performs inference of these star cluster parameters by comparing model spectra and photometry to observations for a broad range of model parameters.
This is accomplished in the framework of a likelihood function for the data given the model parameters.
The high dimensionality of the parameter space, the desire to robustly infer degeneracies between the parameters, the presence of informative prior information about the parameters, and the need to marginalize over ``nuisance'' parameters all lead us to a Bayesian methodology based on Markov Chain Monte Carlo (MCMC) sampling of the posterior probability distribution for the parameters.
The combination of photometric and spectroscopic information requires a flexible noise model to account for spectroscopic calibration uncertainties.
The heart of our star cluster modeling code consists of the Flexible Stellar Population Synthesis code \citep[FSPS; written by co-I Conroy;][]{fspsI, fspsIII}, combined with a flexible spectroscopic calibration model.
For the MCMC sampling we are using the affine-invariant ensemble sampler implemented in \texttt{emcee} \citep{emcee}, which enables the likelihood function evaluations to be parallelized across a large number of cores using \texttt{mpi4py}.
%The main calculations of \texttt{FSPS} are done in Fortran, but the code is wrapped in Python. \texttt{emcee} is written in Python and uses \texttt{mpi4py} to distribute likelihood calcualtions across CPUs.
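To make the parallelization strategy concrete, the following minimal Python sketch shows how posterior calls can be distributed over MPI with \texttt{emcee}. It assumes \texttt{emcee} version 2, whose bundled \texttt{MPIPool} utility is built on \texttt{mpi4py} (more recent versions provide an equivalent pool through the separate \texttt{schwimmbad} package); the stand-in posterior, walker count, and starting positions are placeholders rather than our production settings.
\begin{verbatim}
# Minimal sketch of MPI-parallel ensemble sampling (assumes emcee v2 + mpi4py).
import sys
import numpy as np
import emcee
from emcee.utils import MPIPool

def lnprob(theta):
    # Stand-in for the full posterior (prior plus the spectroscopic and
    # photometric likelihoods described in the following subsections).
    return -0.5 * np.sum(theta**2)

ndim, nwalkers, nsteps = 20, 128, 1000       # ~20 cluster + calibration parameters
p0 = 1e-3 * np.random.randn(nwalkers, ndim)  # walkers start in a small ball

pool = MPIPool()                             # mpi4py-based pool of worker processes
if not pool.is_master():
    pool.wait()                              # workers only evaluate lnprob calls
    sys.exit(0)

sampler = emcee.EnsembleSampler(nwalkers, ndim, lnprob, pool=pool)
sampler.run_mcmc(p0, nsteps)
pool.close()
\end{verbatim}
Launched with \texttt{mpirun}, the master process advances the ensemble while the remaining processes do nothing but evaluate posterior calls.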
\subsection{Star cluster model}
For each set of model parameters a star cluster model must be generated and compared to the data.
%The star cluster model is comprised of a physical model for the spectrum and broadband SED of and a model for the instrumental calibration.
The star cluster parameters include total stellar mass, age, metallicity, line-of-sight dust opacity, IMF slope, redshift, and spectral broadening.
The model for each set of parameters is generated using the state-of-the-art FSPS stellar population synthesis code.
FSPS combines tabulated model stellar spectra according to weights determined from tabulated stellar evolutionary tracks and an IMF.
FSPS includes the most recent updates to stellar evolutionary tracks and advanced high resolution theoretical model spectral libraries.
The tabulated data are loaded into memory at the start of program execution, and there is very little disk I/O in our entire pipeline.
The FSPS code additionally calculates the reddening due to dust and convolves the spectrum with a broadening function representing the instrumental spectral resolution.
Finally, the spectrum is redshifted and the broadband photometry is generated by projecting the model spectrum onto the available observational filters.
Model generation takes approximately 4\,s for one set of parameters on a single core, and it dominates the run time of the likelihood function and of the code as a whole.
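As a concrete illustration of this step, the short Python sketch below generates a single model spectrum and a set of broadband magnitudes through the \texttt{python-fsps} bindings to FSPS; the parameter values and filter names are illustrative placeholders, not the settings used in our pipeline.
\begin{verbatim}
# Sketch of generating one star cluster model with python-fsps
# (parameter values and filters are illustrative placeholders).
import fsps

sp = fsps.StellarPopulation(zcontinuous=1,  # interpolate in metallicity
                            imf_type=2)     # Kroupa-type IMF with adjustable slopes

sp.params['logzsol'] = -0.3        # log10(Z / Z_sun)
sp.params['dust2'] = 0.2           # line-of-sight dust optical depth
sp.params['imf3'] = 2.3            # IMF slope above 1 M_sun (parameter of interest)
sp.params['sigma_smooth'] = 150.0  # spectral broadening in km/s

# Rest-frame spectrum (wavelength in Angstroms) of a 30 Myr old cluster model
wave, spec = sp.get_spectrum(tage=0.03, peraa=True)

# Broadband photometry: project the model spectrum onto example filters
mags = sp.get_mags(tage=0.03, bands=['sdss_g', 'sdss_r', 'sdss_i'])
\end{verbatim}
In the production pipeline the resulting spectrum is additionally redshifted and scaled by the total stellar mass before being compared to the data.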
\subsection{Calibration modeling}
\label{sec:calibration}
The absolute flux calibration of spectroscopic data is generally inferior to that of photometric data.
While spectroscopic features such as absorption line depths provide substantial information about the stellar population parameters, large scale features in the observed spectrum and its overall normalization are often affected by substantial (multiplicative) calibration uncertainties.
Accurately marginalizing over these calibration uncertainties is critical for obtaining unbiased constraints on the IMF.
We have therefore included a very flexible spectroscopic calibration model based on a combination of a low-order polynomial function of wavelength and a Gaussian Process.
The low order polynomial corresponds to gross features in the calibration as a function of wavelength.
The Gaussian Process corresponds to smaller scale deviations away from the polynomial, which can equivalently be thought of as covariant uncertainties on the fluxes of data points close in wavelength.
The likelihood function that we are using for the spectroscopic data is
\begin{eqnarray}\label{eq:spectroscopicLF}
\ln p(s\given\theta,\alpha) &=& -\frac{1}{2}\,\left[\transpose{\Delta}\,C^{-1}\,\Delta + \ln([2\pi]^N\,\det{C}) \right]
\\
\Delta &\equiv& \ln s - \ln \mu(\theta) - f(\alpha)
\\
C_{nn'} &\equiv& \sigma_n^2 \,\delta_{nn'} +
K_\alpha(\lambda_n - \lambda_{n'})
%\\
%\hat{\sigma_n} & = &\sigma_n/d
\quad ,
\end{eqnarray}
where $s$ is the observed spectrum as a vector,
$\theta$ are the physical parameters of the cluster,
$\alpha$ are the (nuisance) calibration parameters
(which determine the shape of the polynomial calibration vector and the covariance kernel function),
$\Delta$ is a residual vector,
$\mu(\theta)$ is the modeled spectrum from FSPS as a vector,
$f(\alpha)$ is the polynomial calibration vector,
$C$ is the Gaussian Process covariance matrix,
made up of $N^2$ values $C_{nn'}$,
$K_\alpha$ is a kernel function that sets the wavelength scale and
amplitude of the residual calibration issues,
$\sigma_n$ are the fractional uncertainties of the data,
and each $\lambda_n$ is the wavelength of spectral pixel $n$.
We invert the $N \times N$ matrix $C$ using Cholesky decompositions, as implemented in \texttt{Scipy}.
For our data, $N\sim 2500$.
On Stampede the matrix inversion takes approximately 1s on a single processor, using the linked Intel MKL library.
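A schematic Python implementation of this likelihood is sketched below; it assumes a squared-exponential form for the kernel $K_\alpha$ purely for illustration, and uses the Cholesky routines in \texttt{scipy.linalg} for both the solve and the log-determinant of $C$.
\begin{verbatim}
# Schematic spectroscopic ln-likelihood; the squared-exponential kernel
# and the argument names are illustrative placeholders.
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def lnlike_spec(obs_spec, model_spec, calib_poly, frac_unc, wave, amp, scale):
    # Residual between the log observed spectrum, the log model spectrum,
    # and the polynomial calibration vector
    delta = np.log(obs_spec) - np.log(model_spec) - calib_poly

    # Covariance: diagonal noise plus a kernel in wavelength separation
    dlam = wave[:, None] - wave[None, :]
    C = np.diag(frac_unc**2) + amp**2 * np.exp(-0.5 * (dlam / scale)**2)

    # One Cholesky factorization gives both the solve and the log-determinant
    factor, lower = cho_factor(C)
    alpha = cho_solve((factor, lower), delta)
    logdet = 2.0 * np.sum(np.log(np.diag(factor)))

    N = len(delta)
    return -0.5 * (np.dot(delta, alpha) + logdet + N * np.log(2.0 * np.pi))
\end{verbatim}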
Meanwhile, the likelihood function that we are using for the photometry is more straightforward,
\begin{eqnarray}\label{eq:photometricLF}
\ln p(d\given\theta,j^2) &=& -\frac{1}{2}\,\sum_{n=1}^N \left[\frac{[d_n - \mu_n(\theta)]^2}{[\sigma_n^2 + j^2]} + \ln(2\pi\,[\sigma_n^2 + j^2]) \right]
\quad ,
\end{eqnarray}
where $d_n$ are the $N$ broadband photometric measurements,
$\theta$ are again the physical parameters of the cluster,
$\mu(\theta)$ are the modeled photometry from FSPS,
made up of $N$ values $\mu_n(\theta)$,
the $\sigma_n^2$ are the observational noise variances,
and $j$ is a parameter allowing for the possibility of additional noise above that reported.
These two log-likelihoods are summed and added to the log of the prior probabilities for each parameter $\{\theta, \alpha\}$ to determine the log of the posterior probability.
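The corresponding photometric term and the assembly of the full log-posterior can be sketched in the same schematic style; here \texttt{lnprior} and \texttt{generate\_model} are stand-ins for the actual prior and FSPS/calibration model code, and the observation container \texttt{obs} is a placeholder.
\begin{verbatim}
# Schematic photometric ln-likelihood and log-posterior assembly;
# lnprior, generate_model, and obs are illustrative placeholders.
import numpy as np

def lnlike_phot(obs_phot, model_phot, sigma, jitter):
    # Independent Gaussian terms with an extra noise parameter j ("jitter")
    var = sigma**2 + jitter**2
    return -0.5 * np.sum((obs_phot - model_phot)**2 / var
                         + np.log(2.0 * np.pi * var))

def lnprob(params, obs):
    # Log-posterior = log-prior + spectroscopic + photometric ln-likelihoods
    lp = lnprior(params)
    if not np.isfinite(lp):
        return -np.inf
    spec, phot, calib, gp_amp, gp_scale, jitter = generate_model(params, obs)
    return (lp
            + lnlike_spec(obs['spectrum'], spec, calib,
                          obs['frac_unc'], obs['wave'], gp_amp, gp_scale)
            + lnlike_phot(obs['phot'], phot, obs['phot_unc'], jitter))
\end{verbatim}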
\subsection{Optimization and MCMC Sampling}
For each spectrum an initial round of optimization using Powell's method is used to find the approximate maximum of the posterior probability distribution.
The optimization is started from as many positions as there are cores available in the job, with each core performing one realization of the optimization, so that with many cores the chance of becoming stuck in a local maximum is reduced.
After the separate optimizations have each converged or reached a maximum number of iterations, the most probable of the final parameter positions is used as the central location for the MCMC sampling.
For our MCMC sampling of the $\sim 20$ parameters $\{\theta, \alpha\}$ we are using the Python code \texttt{emcee}.
This code implements an affine-invariant ensemble sampler \citep{goodman_weare} which is based on an ensemble of ``walkers'' in parameter space that, at each iteration of the sampler, are used to generate new proposal positions in parameter space.
This eliminates the need for hand-tuning of the proposal length scales.
Additionally, the posterior probability calculations for the different walkers at a given iteration can be simply parallelized. The final density of walker positions is an approximation to the posterior probability distribution for the parameters.
As described in the code performance document, all parallelization of the code is achieved by parallelizing the posterior probability calls in \texttt{emcee} using \texttt{mpi4py}.
The MCMC phase of the code completely dominates over the optimization phase, and extremely good parallel efficiency can be obtained when the ensemble of ``walkers'' is large.
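Schematically, this two-stage flow of Powell pre-optimization followed by ensemble sampling looks as follows in Python; \texttt{lnprob}, \texttt{ndim}, \texttt{nwalkers}, \texttt{nsteps}, and \texttt{pool} carry over from the earlier MPI sketch, the remaining values are placeholders, and in production each optimization realization runs on its own core.
\begin{verbatim}
# Stage 1: Powell pre-optimization from several starting guesses (placeholders)
import numpy as np
import emcee
from scipy.optimize import minimize

n_starts, nburn = 16, 500
starts = [np.random.uniform(-1.0, 1.0, size=ndim) for _ in range(n_starts)]
results = [minimize(lambda x: -lnprob(x), s, method='Powell',
                    options={'maxiter': 5000}) for s in starts]
best = min(results, key=lambda r: r.fun).x    # most probable end point

# Stage 2: initialize the walker ensemble around the best point and sample
p0 = best + 1e-4 * np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, lnprob, pool=pool)
sampler.run_mcmc(p0, nsteps)
samples = sampler.chain[:, nburn:, :]         # discard burn-in iterations
\end{verbatim}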
%The observational data is preprocessed using custom software on local compute resources.
%\subsection{Observational Data}
%Preprocessing and uploading to Stampede
%\subsection{Job submission}
%We will submit a separate job for each cluster spectrum. normal queue
%\subsection{Job details}
%Each CPU loads a copy of the likelihood function into memory, including the FSPS star cluster modelling code and the spectroscopic calibration model.
%\subsection{Post-Processing}
%The stored walker positions are transferred to local servers to run our custom diagnostic and visualization software.
%In cases where the walkers in the sampler have not converged to a stationary distribution in parameter space, we can restart the sampler from the last ensemble of positions, thus taking advantage of previous computations.
%We are investigating techniques for robust online assessment of sampler convergence.
\section{Resource Request \& Justification}
For our startup allocation, we requested time on both Trestles and Stampede, but were ultimately only given time on Trestles. This proved to be problematic. Parallelized jobs on Trestles that began running would die in the midst of sampling with no warning. We spent over 2 months working with Trestles staff to identify the cause of the issue, but there is still no resolution. To facilitate scientific progress, we transferred $\sim$ 25\% of our Trestles allocation to Stampede and have found great success on Stampede. Stampede is fully optimized for the problem at hand, outperforming any of our local resources at UW or Harvard by factors of $\sim$5-10 in time per likelihood call. We therefore request use of Stampede for this program.
Overall, we estimate that each cluster will require an average of 9000 SUs for full convergence. This estimate includes both the initial Powell optimization as well as the burn-in and final MCMC chains. For the 53 clusters in Andromeda and 28 in NGC~628 we request a total of 729000 SUs. A breakdown of our request is summarized in Table \ref{tab1}.
\begin{table}[h!]
\begin{center}
\begin{tabular}{cc|c}
Project & Clusters & SUs \\
\hline
Andromeda & 53 & 477000 \\
NGC~628 & 28 & 252000 \\
\hline Total SUs & & 729000 \\
\hline
\end{tabular}
\end{center}
\caption{Breakdown of the requested service units (SUs) by project.}
\label{tab1}
\end{table}%
Due to the relatively small size of our data, we do not request any additional storage allocation on top of the standard file system provided on Stampede. Longer-term storage will be done at Washington and Harvard as needed.
% \end{deluxetable}
\section{Team}
There are three team members working on this project. All three worked together in the Astronomy Department at the University of California at Santa Cruz in 2013 and 2014, prior to moving to their current institutions.
\textbf{Daniel Weisz:} Principal Investigator and Lead Scientist. PI Weisz is currently a Hubble Fellow at the University of Washington. Previously he held postdoctoral research positions at UC Santa Cruz and the University of Washington. The focus of his Hubble Fellowship project is on searching for variations in the high-mass IMF in the distant universe. PI Weisz holds a Ph.D. in astrophysics from the University of Minnesota and has published 14 first-author papers, including a seminal measurement of the high-mass IMF in M31, and over 75 peer-reviewed papers in total. He is an expert on local galaxies and star formation, and is leading efforts to explore new ways to measure the high-mass IMF.
\textbf{Benjamin Johnson:} Co-Investigator and Lead Software Developer. Co-I Johnson is currently a postdoctoral researcher at the Center for Astrophysics at Harvard University. Co-I Johnson holds a Ph.D. in astronomy from Columbia University and has previously been a postdoctoral researcher at UC Santa Cruz, the CNRS in Paris, and Cambridge University. Co-I Johnson is the author of five first-author papers and 50 peer-reviewed publications in total. He is an expert in galaxy formation and evolution and in the analysis of SEDs and galaxy spectroscopy.
\textbf{Charlie Conroy:} Co-Investigator and creator of FSPS. Co-I Conroy is a new assistant faculty member in the Department of Astronomy at Harvard University. Previously he was an assistant faculty member at UC Santa Cruz and a postdoctoral researcher at Harvard. He holds a Ph.D. in astrophysics from Princeton. Co-I Conroy is the author of FSPS, the spectral synthesis code that underpins the physical models of this project and has published several highly cited papers about modeling the integrated light in galaxies and variations in the low-mass IMF. He is an expert in stellar and galactic astrophysics and in modeling the integrated light of galaxies.
\bibliographystyle{apj}
\bibliography{xsede.spectroscopy}
%\bibliography{../../../astrobib_dw}
\end{document}