
Large Synoptic Survey Telescope (LSST)
Level 2 Photometric Calibration for the LSST Survey

Lynne Jones

LSE-180

Latest Revision Date: November 7, 2013
Latest Version: 1.0
 
 
This LSST document has been approved as a Content-Controlled Document. Its contents are subject to configuration control and may not be changed, altered, or their provisions waived without prior approval. If this document is changed or superseded, the new document will retain the Handle designation shown above. The control is on the most recent digital document with this Handle in the LSST digital archive and not printed versions.
 
 
 
 

Change Record  
 
Version   Date         Description                                               Owner name
1.0       11/07/2013   Initial release as a document with an LSE-XXX handle      Lynne Jones
                       as recommended by the Change Control Board on
                       11/06/2013. The associated Change Request is LCR-166.
                       This document supersedes Document-8123.

Level 2 Photometric Calibration for the LSST Survey

R. Lynne Jones^1, Tim Axelrod^2, Željko Ivezić^1, David Burke^3, Christopher Stubbs^9,
James G. Bartlett^4, Gurvan Bazin^4, Guillaume Blanc^4, Alexandre Boucaud^4,
Jean Marc Colley^4, Michel Crézé^4, Mario Juric^8, Cécile Roucelle^4, Abhijit Saha^5,
J. Allyn Smith^7, Michael A. Strauss^6, Peter Yoachim^1
on behalf of The Photometric Calibration Team

^1 University of Washington
^2 University of Arizona
^3 SLAC National Accelerator Laboratory
^4 APC, Universite Paris Diderot
^5 NOAO
^6 Princeton University
^7 Austin Peay State University
^8 LSST Corp
^9 Harvard University

07/01/13
ABSTRACT

This document describes the photometric calibration procedure for LSST Data Release catalogs. This procedure will use specialized hardware (an auxiliary telescope, an atmospheric water vapor measurement system, and a narrow-band dome screen illuminator) to measure the wavelength dependence of the atmospheric and hardware response functions, together with a self-calibration procedure that leverages multiple observations of the same sources over many epochs, to deliver 1%-level photometry across the observed sky.
Contents

1  Introduction
2  Photometric Requirements
3  The Photometric Calibration Process
4  From Flux to Counts
   4.1  Bandpasses and Associated Magnitudes
   4.2  Perturbations to the System Bandpass
   4.3  Effects of Airmass Variation
   4.4  Effects of Atmospheric Variations
   4.5  Throughput Variations Due to Contamination
   4.6  Variations in Detector Quantum Efficiency
   4.7  Throughput Variations Due to Filter Position Shifts
   4.8  Putting it All Together
5  From Counts to Flux
   5.1  Measuring the Hardware Response
      5.1.1  Determining the Illumination Correction
      5.1.2  Normalizing Fluxes Across Wavelengths
      5.1.3  Correcting for Pixel Geometry
      5.1.4  Accounting For Finite PSF Width
      5.1.5  Constructing the Synthetic Flat
   5.2  Measuring the Atmospheric Transmission
   5.3  Estimating SEDs From Colors
   5.4  Finding the Zero Points: Self Calibration
   5.5  Calibration Operations
6  Fixing LSST to an External Scale
   6.1  White Dwarf Standards
   6.2  Population Methods
   6.3  Computational Technique for Determining Δ_{b-r}
7  Calibration Hardware
   7.1  Flat Field Illumination System
   7.2  Auxiliary Telescope
   7.3  Water Vapor Monitoring System
   7.4  Camera System Telemetry
8  Calibration Error Budget
   8.1  Repeatability Errors
      8.1.1  Errors in m^inst_b
      8.1.2  Errors in Δm^obs_b
      8.1.3  Errors in Z^obs_b
   8.2  Uniformity Errors
9  Testing and Verification
   9.1  Self Calibration Simulation
      9.1.1  Self-Calibration of a Large System Using HEALpixels
   9.2  Auxiliary Telescope Simulation
   9.3  Calibration Performance Metrics
      9.3.1  Repeatability
      9.3.2  Spatial Uniformity
      9.3.3  Flux Calibration
      9.3.4  Color Calibration
10  Software Implementation
   10.1  Calibration Products Production
   10.2  Calibration Within the Data Release Production
   10.3  Level 2 Data Products
      10.3.1  Storing and Obtaining φ_b(λ|p)
      10.3.2  Database-level Recalibration
11  Risks and Mitigations
A  Filter Set
B  Photometric Measurements for Non-Main-Sequence Stars
C  Fiducial Self-Calibration Input
D  Glossary
1. Introduction
LSST is required to deliver photometry with 1% uniformity and 0.5% repeatability across the observed sky and under a wide range of observing conditions. This represents at least a factor of two improvement over current wide-field surveys such as SDSS, CFHTLS, and PanSTARRS. This factor of two improvement will have a major impact on science deliverables because it implies that the uncertainty volume in the five-dimensional LSST color space will be over thirty times smaller than for SDSS-like photometry. This smaller uncertainty volume will improve source classification and the precision of quantities such as photometric redshifts for galaxies and photometric metallicity for stars. For example, a given spectral energy distribution (SED) corresponding to some galaxy type produces a 1-D locus in the ugrizy multi-dimensional color space when redshifted, where the position of the galaxy along that line in ugrizy space is a function of redshift. Different galaxy SEDs produce lines that are often close to each other in ugrizy space and sometimes even cross. The smaller the uncertainty volume around an observed galaxy's measured ugrizy colors, the smaller the number of different lines (thus, different SEDs) and different positions along the line (thus, different redshifts) which will be consistent with the measurement. The same conclusion is valid in the case of algorithms that estimate stellar effective temperature and metallicity, as well as any other model-based interpretation of measurements. Furthermore, the smaller uncertainty volume per source is advantageous even in the absence of any models. Two sources whose color differences produce a χ² per degree of freedom of 1 will have a χ² per degree of freedom of 4 when the uncertainties are halved. In the case of five degrees of freedom, χ²_pdf > 4 will happen by chance in only 0.1% of all cases. Therefore, the ability to reliably detect color differences between sources is a strong function of photometric uncertainties.
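As a quick numerical check of that probability (a minimal sketch using scipy; χ²_pdf > 4 with five degrees of freedom corresponds to a total χ² above 20):

```python
from scipy.stats import chi2

# Probability that chi-squared per degree of freedom exceeds 4 when there are
# five degrees of freedom, i.e. P(chi2 > 5*4) for a chi2 distribution with dof=5.
p = chi2.sf(5 * 4.0, df=5)
print("P(chi2_pdf > 4 | 5 dof) = %.4f" % p)   # ~0.0012, i.e. roughly 0.1%
```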
SDSS is widely credited with pioneering high accuracy photometry for large surveys, and it is instructive to compare its photometric calibration procedure with LSST's. The factor of two reduction in photometric uncertainty results from two major differences between the surveys. First, each source will receive hundreds of observations in each band over the ten years of the LSST survey, a much greater number than possible with SDSS. These series of repeat observations will be used to self-calibrate the photometric system across the sky and for each observation (akin, but not identical to, the uber-calibration procedure used by SDSS (Padmanabhan et al. 2008)), allowing LSST to operate in strongly nonphotometric conditions. Secondly, the wavelength dependence of the hardware and atmospheric transmission response functions will be measured with auxiliary instrumentation on sufficiently fine angular and temporal scales to enable their explicit inclusion in the calibration procedure, rather than resorting to traditional approximations such as linear color terms. SNLS re-processing of CFHT Legacy Survey data found these color-dependent terms to be a significant contributor to photometric calibration uncertainties (Regnault et al. 2009), on the level of several percent.

This document describes the calibration requirements and processes for LSST Data Release photometry. At each Data Release, there will be a complete recalibration of all data acquired to that point, on approximately an annual schedule. These data products are referred to as Level 2 Data Products. There will also be a separate photometric calibration process that provides near real-time, but lower quality, photometry for quality assurance, generation of alerts, and other quantities required on a nightly basis. This Level 1 photometric calibration process is not discussed here.

Section 2 reviews the survey requirements for photometric calibration, while Section 3 describes the foundation of LSST's calibration procedure, first motivating this procedure by describing the transmission of flux through the atmosphere and LSST system, and then, from the calibration point of view, trying to recreate the flux from the ADUs measured by the detector. Sections 4 and 5 describe those aspects in some detail. Section 6 describes how the LSST's internal photometric scale is tied to external references. Section 7 describes the hardware required to realize the calibration process. Section 8 presents the uncertainty budget for each step of the calibration procedure. Section 9 describes how we will verify that the calibration system functions as designed, and meets the science requirements, first during the construction phase, and later during survey operations. Section 10 describes the implementation of the calibration process in software that will be part of LSST Data Management. Finally, Section 11 discusses the risks that remain in the implementation of the calibration process, and the steps we are taking to mitigate them.
2. Photometric Requirements
The LSST Science Requirements Document (SRD) provides a set of requirements on the annual Data Release (Level 2) photometry. These requirements are extended in the LSST System Requirements (LSR), the Observatory System Specifications (OSS), and the individual subsystem requirements documents, to cover aspects which are too detailed for the SRD. In this section we consider only requirements from the SRD. Calibration requirements from the LSR and below are discussed further in Section 8 on the calibration uncertainty budget.

The SRD requirements are based on measurements of bright, unresolved, isolated, non-variable main-sequence stars from individual LSST visits. In this context, "bright" implies that the measurement of the star's brightness is not dominated by photon statistics, approximately 1-4 magnitudes fainter than the saturation limit in a given filter. "Isolated" implies that the star's photometry is not significantly affected by nearby galaxies or stars. "Non-variable" objects are astrophysically non-variable at levels well below calibration requirements (1 mmag or less); these will be identified in an iterative fashion from the many epochs of LSST observations. The "main-sequence" restriction derives from the need for accurate knowledge of the SEDs of calibration objects, given only their multi-band photometry. Calibration of objects with non-MS SEDs is discussed in Appendix B.

The SRD specifications are:
1. Repeatability: the median value of the photometric scatter for each star (the rms of calibrated magnitude measurements around the mean calibrated magnitude) shall not exceed 5 millimags in gri, 7.5 millimags in uzy. No more than 10% of these objects should have a photometric scatter larger than 15 mmag in gri, 22.5 mmag in uzy. This requirement sets the level above which we can reliably detect intrinsic variability in a single source.

2. Uniformity: the rms of the internal photometric zeropoint error (for each visit) shall not exceed 10 millimags in grizy, 20 millimags in uzy. No more than 10% of these sources can be more than 15 mmag in gri or 22.5 mmag in uzy from the mean internal zeropoint. This places a constraint on the stability of the photometric system across the sky as well as an upper limit on various systematic uncertainties, such as any correlation of photometric calibration with varying stellar populations (or colors). This makes the photometry of sources directly comparable over the entire sky, and when combined with the previous requirement, creates a stable photometric system across the sky and over time, in a single filter.

3. Band-to-band photometric calibration: The absolute band-to-band zeropoint calibration for main sequence stars must be known with an rms accuracy of 5 millimags for any color not involving the u band, 10 millimags for colors constructed with u band photometry. This requirement ties photometric measurements in different filters together, enabling precise measurement of colors, and allows LSST photometry to be compared with that from other optical telescopes, and with astrophysical models.

4. Absolute photometric calibration: The LSST photometric system must transform to an external physical scale (e.g. AB mags) with an rms accuracy of 10 millimags. This is essential for comparing with photometry from other wavelength regions, such as IR or UV.

Requirements 1 and 2 must be met by measuring and then correcting for changes in hardware and atmospheric transmission as a function of time, location in the sky or focal plane, and result in a relative calibration within a single filter. Requirements 3 and 4 require comparison of LSST measurements to externally calibrated spectrophotometric standards, providing a relative calibration from filter to filter as well as an absolute physical scale for the overall system. Performance of the LSST system regarding requirement 1 can be verified by simply measuring the rms of the calibrated magnitude measurements. Verification of requirement 2 is more complicated; in a simulated system it is simple to compare the (simulated, thus known) true magnitudes of the stars to the best-fit magnitudes produced after calibration. In operations, this will be verified using a combination of simulations, comparisons to known standards, and evaluation of science outputs such as stellar locus diagrams. These last two tests are also relevant to verifying the final two requirements, 3 and 4. These issues are discussed further in Sections 8 and 9.
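To make requirement 1 concrete, the sketch below shows one way the repeatability statistics might be evaluated from repeated calibrated magnitudes of non-variable stars; the data layout and the simulated input are illustrative assumptions, not LSST pipeline interfaces.

```python
import numpy as np

def repeatability_stats(mags, band="gri"):
    """Per-star rms of calibrated magnitudes around their mean, plus the two
    SRD-style statistics: the median rms and the fraction of outlier stars.
    `mags` is a list of 1-D arrays, one array of repeat measurements per star."""
    rms = np.array([np.std(m, ddof=1) for m in mags])
    limits = {"gri": (0.005, 0.015), "uzy": (0.0075, 0.0225)}  # SRD thresholds (mag)
    median_limit, outlier_limit = limits[band]
    return {
        "median_rms": np.median(rms),
        "median_ok": np.median(rms) <= median_limit,
        "outlier_fraction": np.mean(rms > outlier_limit),
        "outlier_ok": np.mean(rms > outlier_limit) <= 0.10,
    }

# Example with simulated 5 mmag Gaussian scatter: 1000 stars, 100 visits each.
rng = np.random.default_rng(42)
fake = [20.0 + rng.normal(0.0, 0.005, size=100) for _ in range(1000)]
print(repeatability_stats(fake, band="gri"))
```

The uniformity statistic of requirement 2 is evaluated per visit rather than per star and, as noted above, in practice requires simulation-based comparisons rather than a direct measurement.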
3. The Photometric Calibration Process
In traditional photometric calibration, a set of standard stars is observed at a range of airmasses to calculate zeropoint offsets and (typically) a single color-dependent extinction curve per night. With care, this approach can deliver 1% photometry in stable photometric conditions. Such programs typically follow only a few objects, and devote roughly equal time to standards and program objects. This approach fails for a survey like LSST, for at least two reasons. First, from a calibration point of view, the very wide field and multiple detector array mean that effectively a large number of instruments must be calibrated rather than just one. Second, historical weather data from Cerro Pachón tells us only 53% of the available observing time can be considered photometric even at the 1-2% level. To take advantage of the full 85% of the available observing time which is usable (total cloud extinction less than 1.5 magnitudes), and to reach the SRD-specified requirements (0.5%-level photometric repeatability and 1% photometric uniformity), requires a new approach.
This new approach, first proposed by Stubbs & Tonry (2006), directly measures the system throughput as a function of wavelength, focal plane position, and time. Further, the normalization of the throughput in each observation (the gray-scale zeropoint) and the shape of the throughput curve (the color-dependent terms) are explicitly separated and measured with separate procedures for both the telescope system response and the atmospheric transmission. Calibration systems based on this approach are already in use at PanSTARRS and DES.

Several hardware systems are required to implement the approach. We briefly describe them here, with full descriptions in Section 7:
- A dome screen projector designed to provide uniform (~10% variation) illumination across the field of view, while minimizing stray light. This projector system will have the capability to illuminate the screen not only with broadband white light, but also with narrow-band light to measure the system response at individual wavelengths. The narrow-band light will be generated by a tunable laser, capable of producing light from 300 to 1100 nm and tunable in 1 nm increments. The brightness of the screen is measured with a NIST-calibrated photodiode, so that the relative intensity at different wavelengths can be precisely determined.

- A 1.2-m auxiliary telescope with an R ~ 400 spectrograph, located adjacent to the LSST itself. This auxiliary telescope will obtain spectra of a chosen set of atmospheric probe stars across the sky to determine an atmospheric absorption model.

- A water vapor monitoring system, consisting of a GPS system and a microwave radiometer co-pointed with the LSST telescope, and monitoring a similar field of view. This supplements the auxiliary telescope spectra, which are unable to track the sometimes rapid variations of water vapor in time and space.
An overview of the entire calibration process, from science observation to calibrated photometric measurements, together with the required calibration data products, is shown in Figures 1, 2, 3, and 4. Note that four classes of objects participate in the calibration process in different ways:

- Standard stars. These are stars whose absolute flux as a function of wavelength above the atmosphere is precisely known. This class contains only a few members, perhaps as few as ten. Their role is to enable the self calibration process to set absolute zeropoints for each band, and to allow testing of the SRD uniformity requirements. See Section 6.

- Calibration stars. These are stars that densely cover the sky, with typical spacings between stars of order 1 arcminute. Unlike standard stars, neither their SEDs nor their absolute fluxes are precisely known a priori, and their standard magnitudes are determined by the self calibration process. They have been selected to be on the stellar main sequence, to be nonvariable, and to be relatively isolated so that their photometry is not degraded by crowding effects.

- Atmospheric probe stars. These are bright stars of known type, distributed roughly uniformly over the LSST sky, which yield high SNR spectra from the auxiliary telescope.

- Science objects. Calibration of science objects utilizes the results of processing the standards and the calibration stars, and the measurement of the system bandpass. If an SED is supplied for an object, an accurate standard magnitude can then be calculated.
The following section will provide a more in-depth overview of the calibration process. We will start with a review of what is physically happening to photons on their path toward the focal plane, and then outline how LSST will translate the measured ADU counts back to fluxes above the atmosphere.
4. From Flux to Counts
We first consider how the photons from an astronomical object make their way to the detector and are converted into counts (ADUs), paying attention to the various temporal and spatial scales on which variability might arise in the LSST system and affect the final ADU counts. Given the specific flux (flux per unit frequency)^1 of an astronomical object at the top of the atmosphere, F_ν(λ, t), at a position described by (alt, az), the total specific flux from the object transmitted through the atmosphere to the telescope pupil is

    F^pupil_ν(λ, alt, az, t) = F_ν(λ, t) S^atm(λ, alt, az, t),    (1)

where S^atm(λ, alt, az) is the (dimensionless) probability that a photon of wavelength λ makes it through the atmosphere,

    S^atm(λ, alt, az, t) = e^{-τ_atm(λ, alt, az, t)}.    (2)

Here τ_atm(λ, alt, az) is the optical depth of the atmospheric layer at wavelength λ towards the position (alt, az). Observational data (Stubbs et al. 2007b; Burke et al. 2010) show that the various atmospheric components which contribute to absorption (water vapor, aerosol scattering, Rayleigh scattering and molecular absorption) can lead to variations in S^atm(λ, t) on the order of 10% per hour. Clouds represent an additional gray (non-wavelength-dependent) contribution to τ_atm that can vary even more rapidly, on the order of 2-10% of the total extinction at 1° scales within minutes (Ivezić et al. 2007).

[Footnote 1: Hereafter, the units for specific flux (flux per unit frequency) are Jansky (1 Jy = 10^{-23} erg cm^{-2} s^{-1} Hz^{-1}). The choice of F_ν vs. F_λ makes the flux conversion to the AB magnitude scale more transparent, and the choice of λ as the running variable is more convenient than the choice of ν. Note also, while F_ν(λ, t) (and other quantities that are functions of time) could vary more quickly than the standard LSST exposure time of 15 s, it is assumed that all such quantities are averaged over that short exposure time, so that t refers to quantities that can vary from exposure to exposure.]
Given the above F^pupil_ν(λ, alt, az, t), the total ADU counts transmitted from the object to a footprint within the field of view at (x, y) can be written as

    C_b(alt, az, x, y, t) = C ∫_0^∞ F^pupil_ν(λ, alt, az, t) S^sys_b(λ, x, y, t) λ^{-1} dλ.    (3)

Here, S^sys_b(λ, x, y, t) is the (dimensionless) probability that a photon will pass through the telescope's optical path to be converted into an ADU count, and includes the mirror reflectivities, lens transmissions, filter transmissions, and detector sensitivities. The term λ^{-1} comes from the conversion of energy per unit frequency into the number of photons per unit wavelength, and b refers to a particular filter, ugrizy. The dimensional conversion constant C is

    C = π D^2 Δt / (4 g h),    (4)

where D is the effective primary mirror diameter, Δt is the exposure time, g is the gain of the readout electronics (number of photoelectrons per ADU count, a number greater than one), and h is the Planck constant.

The wavelength-dependent variations in S^sys_b generally change quite slowly in time; over periods of months, the mirror reflectance and filter transmission will degrade as their coatings age. A more rapidly time-varying wavelength-dependent change in detector sensitivity (particularly at very red wavelengths in the y band) results from temperature changes in the detector, but only on scales equivalent to a CCD or larger. There will also be wavelength-dependent spatial variations in S^sys_b due to irregularities in the filter material; these are required by the camera specifications to vary slowly from the center of the field of view to the outer edges. The equivalent bandpass shift can be no more than 2.5% of the effective wavelength of the filter. Wavelength-independent (gray-scale) variations in S^sys_b can occur more rapidly, on timescales of a day for variations caused by dust particles on the filter or dewar window, and on spatial scales ranging from the amplifier level, arising from gain changes between amplifiers, down to the pixel level, in the case of pixel-to-pixel detector sensitivity variations.
From equation 3 and the paragraphs above, we can see that the generation of counts C_b(alt, az, x, y, t) from photons is imprinted with many different effects, each with different variability scales over time, space, and wavelength. In particular, the wavelength-dependent variability (bandpass shape) is typically much slower in time and space than the gray-scale variations (bandpass normalization). These different scales of variability motivate us to separate the measurement of the normalization of S^sys_b and S^atm from the measurement of the wavelength-dependent shape of the bandpass.
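As a concrete illustration of equations 3 and 4, the sketch below numerically integrates a source SED against throughput curves to predict ADU counts. The Gaussian-shaped hardware response, the toy atmosphere, and the numerical values of D and g are placeholders, not LSST system parameters.

```python
import numpy as np

# Physical constants and assumed (illustrative) system parameters.
h = 6.626e-27          # Planck constant [erg s]
D = 642.0              # effective primary diameter [cm] (assumed value)
dt = 15.0              # exposure time [s]
gain = 2.0             # photoelectrons per ADU (assumed)

# Wavelength grid, 300-1100 nm, in both nm and cm.
wavelen_nm = np.arange(300.0, 1100.0, 0.1)
wavelen_cm = wavelen_nm * 1.0e-7

# Placeholder throughputs: a toy Rayleigh-only atmosphere and a Gaussian r-like filter.
S_atm = np.exp(-0.1 * (wavelen_nm / 600.0) ** -4)
S_sys = np.exp(-0.5 * ((wavelen_nm - 620.0) / 60.0) ** 2)

# Source: flat SED, F_nu = 3631 Jy * 10^(-0.4 m_AB), in erg cm^-2 s^-1 Hz^-1.
m_AB = 20.0
F_nu = 3631.0e-23 * 10 ** (-0.4 * m_AB) * np.ones_like(wavelen_nm)

# Equation 3: C_b = C * integral(F_nu^pupil * S_sys * lambda^-1 dlambda),
# with C = pi D^2 dt / (4 g h) from equation 4.
C_const = np.pi * D ** 2 * dt / (4.0 * gain * h)
integrand = F_nu * S_atm * S_sys / wavelen_cm
counts = C_const * np.trapz(integrand, wavelen_cm)
print("Predicted counts (ADU): %.3g" % counts)
```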
4.1. Bandpasses and Associated Magnitudes
This then leads us to introduce a 'normalized bandpass response function', φ^obs_b(λ, t), that represents the true bandpass response shape for each observation,

    φ^obs_b(λ, t) = S^atm(λ, alt, az, t) S^sys_b(λ, x, y, t) λ^{-1} / ∫_0^∞ S^atm(λ, alt, az, t) S^sys_b(λ, x, y, t) λ^{-1} dλ.    (5)

Note that φ_b only represents shape information about the bandpass, as by definition

    ∫_0^∞ φ_b(λ) dλ = 1.    (6)
Using φ^obs_b(λ, t) we can represent the in-band flux at the top of the atmosphere for each observation as

    F^obs_b(t) = ∫_0^∞ F_ν(λ, t) φ^obs_b(λ, t) dλ,    (7)

where the normalization of F_b(t) corresponds to the top of the atmosphere. Unless F_ν(λ, t) is a flat (F_ν(λ) = constant) SED, F^obs_b will vary with changes in φ^obs_b(λ, t) due simply to changes in the bandpass shape, such as changes with position in the focal plane or differing atmospheric absorption characteristics, even if the source is non-variable.
To provide a reported F^std_b(t) which is constant for non-variable sources, we also introduce the 'standardized bandpass response function', φ^std_b(λ), a curve that will be defined before the start of LSST operations (most likely during commissioning). φ^std_b(λ) represents a typical hardware and atmospheric transmission curve, roughly minimizing the average difference between the varying φ^obs_b(λ, t) and the standard bandpass. Now,

    F^std_b(t) = ∫_0^∞ F_ν(λ, t) φ^std_b(λ) dλ    (8)

is a constant value for non-variable sources.
We define a 'natural magnitude'

    m^nat_b = -2.5 log10 ( F^obs_b / F_AB ),    (9)

where F_AB = 3631 Jy. The natural magnitude will vary from observation to observation as φ^obs_b(λ, t) changes, even if the source itself is non-variable. The natural magnitude can be transformed to a 'standard magnitude', m^std_b, as follows:
    m^nat_b = -2.5 log10 ( F^obs_b / F_AB )    (10)
            = -2.5 log10 ( [∫_0^∞ F_ν(λ,t) φ^obs_b(λ,t) dλ] / F_AB )    (11)
            = -2.5 log10 ( [∫_0^∞ F_ν(λ,t) φ^obs_b(λ,t) dλ / ∫_0^∞ F_ν(λ,t) φ^std_b(λ) dλ]
                           × [∫_0^∞ F_ν(λ,t) φ^std_b(λ) dλ / F_AB] )    (12)

    m^nat_b = Δm^obs_b + m^std_b    (13)

    Δm^obs_b = -2.5 log10 ( ∫_0^∞ F_ν(λ,t) φ^obs_b(λ,t) dλ / ∫_0^∞ F_ν(λ,t) φ^std_b(λ) dλ ),    (14)

where Δm^obs_b varies with the shape of the source spectrum, F_ν(λ, t), and the shape of the bandpass φ^obs_b(λ, t) in each observation. Note that Δm^obs_b = 0 for flat (constant) SEDs, as the integral of φ_b(λ) is always one.
The natural and standard magnitudes can be tied back to the counts produced by the system by adding the correct zeropoint offsets. As Δm^obs_b removes all wavelength-dependent variations in m^std_b,

    m^std_b = m^inst_b - Δm^obs_b + Z^obs_b    (15)
            ≡ m^corr_b + Z^obs_b,    (16)

    m^inst_b = -2.5 log10 ( C^obs_b ).    (17)

The zeropoint correction here, Z^obs_b, contains only gray-scale normalization effects, such as variations due to the flat field or cloud extinction. The SED-corrected magnitude, m^corr_b, is the input to the Self Calibration block in Figure 1, and m^std_b is its output.
To summarize the various magnitudes utilized, and their associated fluxes:

- m^inst_b, the instrumental magnitude. m^inst_b = -2.5 log10(C^obs_b), where C^obs_b are the instrumental counts (ADU) that are attributed to the object.

- m^nat_b, the natural magnitude. This is the magnitude in the AB system that would be measured for the object if it were measured through the actual normalized system bandpass, φ^obs_b(λ), at the top of the atmosphere. This bandpass varies from exposure to exposure. See equations 7 and 9. m^nat_b = m^inst_b + Z^obs_b.

- m^std_b, the standard magnitude. This is the magnitude in the AB system that would be measured for the object if it were measured through the standard normalized system bandpass, φ^std_b(λ), at the top of the atmosphere. This bandpass is selected as part of the survey design, and does not vary. See equation 8.

- m^corr_b, the SED-corrected instrumental magnitude. This is the standard magnitude, but with an unknown gray zeropoint correction, which will be removed by self calibration. These magnitudes are the input to self calibration. m^corr_b = m^inst_b - Δm^obs_b (see equations 15 and 16).
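The sketch below evaluates equations 5 and 14 for a toy bandpass shift; the Gaussian bandpasses and the power-law SED are illustrative stand-ins for the real LSST response curves and stellar SEDs. As the text notes, a flat SED gives Δm^obs_b = 0 exactly, while a non-flat SED picks up a color-dependent offset.

```python
import numpy as np

def normalize_bandpass(wavelen, s_atm, s_sys):
    """Normalized bandpass response phi_b (equation 5): S_atm*S_sys/lambda,
    scaled so that its integral over wavelength is one."""
    phi = s_atm * s_sys / wavelen
    return phi / np.trapz(phi, wavelen)

def dmag_obs(wavelen, f_nu, phi_obs, phi_std):
    """Delta m_b^obs (equation 14): -2.5 log10 of the ratio of the source flux
    integrated against the observed vs. the standard normalized bandpass."""
    num = np.trapz(f_nu * phi_obs, wavelen)
    den = np.trapz(f_nu * phi_std, wavelen)
    return -2.5 * np.log10(num / den)

# Toy example: an observed Gaussian bandpass shifted ~1% redward of 'standard'.
wavelen = np.arange(300.0, 1100.0, 0.1)
unit = np.ones_like(wavelen)
phi_std = normalize_bandpass(wavelen, unit, np.exp(-0.5 * ((wavelen - 620.0) / 60.0) ** 2))
phi_obs = normalize_bandpass(wavelen, unit, np.exp(-0.5 * ((wavelen - 626.0) / 60.0) ** 2))

f_red = (wavelen / 620.0) ** 2.0   # a red power-law SED in F_nu
print("dm_obs (flat SED): %+.1f mmag" % (1e3 * dmag_obs(wavelen, unit, phi_obs, phi_std)))
print("dm_obs (red SED):  %+.1f mmag" % (1e3 * dmag_obs(wavelen, f_red, phi_obs, phi_std)))
```

The SED-corrected magnitude then follows from equation 15 as m^corr_b = m^inst_b - Δm^obs_b, leaving only the gray zeropoint Z^obs_b for self calibration to determine.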
4.2. Perturbations to the System Bandpass
The normalized system bandpass will vary from exposure to exposure due to a myriad of effects. The major ones are:

- Atmospheric transmission variations. The major wavelength-dependent sources of atmospheric absorption (Rayleigh scattering from molecules, molecular oxygen lines, ozone, water vapor, and aerosols) all vary in time and space, with widely varying temporal and spatial scales. Even in the absence of intrinsic variation in the atmosphere, of course, the bandpass varies due to changing airmass. This latter effect is compensated for in traditional photometric calibration, but the others are generally ignored.

- Long term variations in the throughput of the optical path due to contamination.

- Changing detector quantum efficiency, particularly in the y-band, due to varying focal plane temperature.

- Shifts in filter position with respect to the system optical axis, due to positioning jitter and gravity sag.

It bears repeating that every perturbation in general affects both the zeropoint, through the gray component of the perturbation, and the shape of the system bandpass, through the wavelength-dependent component. The gray component is removed by the self calibration process, while the wavelength-dependent component must be separately characterized and removed (see Figure 1). We are concerned here only with the latter effect, and discuss it for each of the above categories.
4.3. Effects of Airmass Variation

The effects of airmass variation on photometry are well known to all photometrists. In fact, airmass is the only effect which is always accounted for in photometric calibration, and quite often the only effect. Figure 5 shows the effects of variation over the full airmass range expected for the LSST survey. There is a more subtle effect, however, which is important because of the LSST's large field of view: the airmass can vary significantly from one side of the field to the other. For example, if the field center is at airmass 2.1, the airmass varies from 1.98 to 2.22 across the field. Figure 6 shows the error that would be made in ignoring this effect. This requires us to maintain an atmospheric model which can be interpolated to any position in the focal plane.
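The numbers quoted above follow from simple geometry; a minimal check using the plane-parallel (sec z) approximation and the ~3.5° LSST field diameter:

```python
import numpy as np

field_radius_deg = 1.75          # LSST field of view is roughly 3.5 deg across
X_center = 2.1                   # airmass at the field center

# Plane-parallel approximation: X = sec(z), so z = arccos(1/X) at the center.
z_center = np.degrees(np.arccos(1.0 / X_center))
for dz in (-field_radius_deg, 0.0, +field_radius_deg):
    X = 1.0 / np.cos(np.radians(z_center + dz))
    print("zenith distance %.2f deg -> airmass %.2f" % (z_center + dz, X))
# Prints roughly 1.98, 2.10, and 2.22 from one edge of the field to the other.
```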
4.4. Effects of Atmospheric Variations

The main components of wavelength-dependent atmospheric extinction are Rayleigh scattering from molecules, oxygen molecular lines, water vapor, ozone, and aerosols. Rayleigh scattering and molecular oxygen extinctions are directly proportional to barometric pressure at the surface. As is well known from looking at surface pressure charts, away from weather fronts the pressure varies significantly on timescales of hours or more and spatial scales of hundreds of km. Variations can be much more rapid in the vicinity of fronts. In any case, this component can be compensated for very accurately just by measuring the barometric pressure, and we do not consider it further here. The other components are not so easily measured, and in the case of water vapor and aerosols, can display complex patterns of variability.

Figures 7, 8, and 9 give three different perspectives on historical water vapor variability in the vicinity of Cerro Pachón (CP). There are broad patterns in space (the E-W gradient) and in time (the regular seasonal variations). There are large variabilities on top of these, amounting to several mm of PWV, which can occur in hours or less. The biggest effect of water vapor is in the y-band, since it contains a strong water band (Figure 47). Figure 10 shows the change in natural magnitude when the PWV is varied from 1 mm to 6 mm for a set of Kurucz stars. The 4 mm variation in PWV shown in Figure 8 would lead to a roughly ±3 mmag color-dependent scatter in the calibration of y photometry, if not compensated for.
While we do not have aerosol variability data for CP, data from CASLEO, a site at 2550 m in Argentina, serve as a reasonable proxy. Figure 11 shows the time history of aerosol optical depth at 675 nm over a period of roughly a year. The spiky nature of the data is notable, suggesting rapid variations on timescales of perhaps a few hours. Figure 12, which shows a time-altitude profile of aerosol variations at a site in the US, backs up this impression. Although the site is at much lower altitude than CP, there are nonetheless significant variations of aerosol extinction at altitudes above 3000 m on timescales well under an hour, and we should expect similar variations at CP. Determining the effects of aerosol variations on the system bandpass is more complex than for other atmospheric components, because the spectral shape of the aerosol extinction varies as well as its magnitude. Figure 13 shows the change in natural magnitude when the aerosol optical depth is varied from 0.04 mag to 0.16 mag, roughly the range of the CASLEO data, while keeping a constant spectral index of α = -1.7. Unlike water vapor, which affected the reddest bands most strongly, aerosols affect the bluest bands the most. The effects in the u-band range from -5 mmag to +35 mmag, strongly dependent on star color.

Ozone is dominantly a stratospheric component of the atmosphere, and is routinely monitored by satellite. There is little evidence for variations on short time or space scales, with most variation periodic on a seasonal scale. Figures 14 and 15 show the overall variability of ozone at CP over a roughly 8-year period. Figure 16 shows the changes in the natural magnitudes for a variation in ozone by 50 Dobson units. The effect is very small, except in the u-band, where it has a noticeable effect on red stars.

To summarize the effects of wavelength-dependent atmospheric variations: they are sufficiently large, and occur sufficiently rapidly, that they must be corrected by an atmospheric model with fidelity substantially greater than the traditional photometrist's extinction model. The combination of the auxiliary telescope and the water vapor monitoring system will supply the data required for construction of these atmospheric models. It is also worth emphasizing that correcting for these effects requires not only an accurate bandpass model, but also SEDs for the objects being calibrated. Calibration stars will be picked from well-defined stellar populations to minimize SED uncertainty. SED determination for arbitrary science objects, such as supernovae and galaxies, will be more challenging, and may limit the accuracy of their photometric calibration.
4.5. Throughput Variations Due to Contamination
The rates of contamination buildup on surfaces have been evaluated for both the telescope (Krabbendam 2012) and the camera (Nordsby 2013). Many contaminants have wavelength-dependent effects, so they will affect the system bandpass. The variation, however, is expected to be very slow, with a year being a typical timescale. These effects will be adequately measured by the collection of monthly monochromatic dome flats, as discussed in Section 5.1. We do not specifically account for them in the calibration uncertainty budget.
4.6. Variations in Detector Quantum Efficiency

The detector quantum efficiency as a function of wavelength varies with the detector temperature. The camera both measures and actively controls the temperature, but there is still variation on the order of 0.5 K. The variation affects only wavelengths near the silicon bandgap at 1100 nm, and therefore only the y filter. Figure 28 shows the resulting effect.
4.7. Throughput Variations Due to Filter Position Shifts
The LSST filters are quite large interference filters, with diameter 75 cm. As manufactured, such filters always display some variation of the bandpass with position on the filter surface. This results in a system bandpass which is a function of position on the sky. Additionally, the LSST filters are exchanged by a mechanism which is not perfectly repeatable, and has some sag due to gravity. This results in a time-dependent variation of the filter position with respect to the telescope optical axis, leading to an additional bandpass variation.

Modeling these effects is difficult due to the lack of knowledge of the detailed filter characteristics, which will not be available until the filters are manufactured. In our simulations, we currently use a simple model in which the filter bandpass has a shift which is a linear function of radius. The effect of a 1% bandpass shift on natural magnitudes is shown in Figure 29. It is worthy of note that this effect is large in all bands, so it must be carefully corrected in the calibration process.

4.8. Putting it All Together

Examples of the Δm^obs_b due to variations in the shape of the hardware and atmospheric response curves are shown in Figure 17 and Table 2. Two main sequence stellar models (Kurucz 1993), one with temperature 35000 K (blue) and one 6000 K (red), were combined with three different atmospheric response curves (airmass X=1.0 with minimal H2O vapor, X=1.2 with a nominal amount of H2O (the 'standard'), and X=1.8 with a large H2O vapor content) and two different hardware response curves (one 'standard' and one shifted in wavelength by 1%) to illustrate the resulting changes in observed natural magnitudes. In Figure 18, the X=1.8 atmospheric response is combined with a 1% shift (the maximum allowed in the filter manufacturing specification from center to edge) in filter bandpass, thus altering the hardware response, for many main sequence Kurucz models spanning a range of g-i colors; the resulting changes in natural magnitudes are plotted. These examples demonstrate that the scatter in natural magnitudes induced by expected atmospheric and hardware transmission curve shape changes alone (without any gray-scale changes) can be much larger than the SRD repeatability requirements would permit. Roughly speaking, these effects reach a level of about 50 mmag. This suggests that our measurement-based model of wavelength-dependent effects must be accurate at the 90-95% level. This issue is discussed in detail in Section 8.
5. From Counts to Flux
The previous section laid out the origins of ADU count variability from one observation to another. Now we will consider how we can, in practice, acquire the information necessary to convert a particular observed ADU count to a measurement of F_ν(λ, t) above the atmosphere for a particular object. This requires measuring and then compensating for the variations in S^atm(λ, alt, az, t) and S^sys_b(λ, x, y, t). Let us first consider measurement of the variations in the hardware throughput curve, S^sys_b(λ, x, y, t).
5.1. Measuring the Hardware Response
To measure the wavelength-dependent hardware response curve as a function of position in the focal plane, we will use a dome-screen system that is capable of producing narrow-band light over a range of wavelengths, producing a data cube of 'narrow-band flat fields'. A similar approach has already been employed at PS-1 (Stubbs & Tonry 2012; Tonry et al. 2012) and at DES (Marshall et al. 2013).

Table 1: Δm^obs_b due to variations in system and atmospheric bandpass shape (see also Fig 17). The first two rows show the baseline ('standard') magnitude of the star. All other rows show the change in magnitude (in mmag) due to the variations listed at left. Any value larger than 5 mmag would be larger than the RMS scatter allowed by the SRD. [TODO: color-code values larger than 5 mmag.]

    Bandpass                       star    u (mag)   g        r        i        z        y
    Std (X=1.2) atm, std sys       red     21.472    20.378   20.000   19.911   19.913   19.913
    Std (X=1.2) atm, std sys       blue    19.102    19.503   20.000   20.378   20.672   20.886

    Bandpass                       star    Δu (mmag) Δg       Δr       Δi       Δz       Δy
    Std (X=1.2), +1% sys shift     red     -31       -22      -8       -2       1        1
    Std (X=1.2), +1% sys shift     blue    9         17       20       20       16       16
    X=1.0, std sys                 red     7         2        0        0        -0       -1
    X=1.0, std sys                 blue    -3        -1       -1       -0       1        -4
    X=1.0, +1% sys shift           red     -24       -20      -8       -1       1        0
    X=1.0, +1% sys shift           blue    7         16       19       20       18       12
    X=1.8, std sys                 red     -21       -10      -2       -0       0        1
    X=1.8, std sys                 blue    8         8        4        2        -1       6
    X=1.8, +1% sys shift           red     -50       -30      -10      -2       1        2
    X=1.8, +1% sys shift           blue    16        24       24       22       15       22

A series of steps is needed to convert this data cube into S^sys_b(λ, t) at each (x, y) location in the focal plane:

- Determine and apply the monochromatic illumination correction (see Section 5.1.1)
- Normalize fluxes across wavelengths using the photodiode monitors
- Correct for pixel geometry

The resulting data cube then records (up to an overall normalization constant) S^sys_b(λ, t) at each (x, y) location in the focal plane. Further processing constructs a synthetic flat field that will be used to flatten incoming science exposures. We discuss each of these in turn.

5.1.1. Determining the Illumination Correction

As mentioned above, before dome flats (either broadband or narrow-band) can be used to measure S^sys_b, they must be modified to correctly produce photometrically uniform measurements of a collimated source across the field of view. This correction is called the 'illumination correction'. The illumination correction must correct the observed flat fields for effects resulting from non-uniform illumination of the dome screen, for ghosting caused by internal reflections in the camera, and for the presence of stray or scattered light arriving in the focal plane on paths other than the direct image path (such as light bouncing from the dome floor or glinting off a filter holder). Figure 19 gives a perspective on the difference between the direct, total, and ghost flat-field illumination from a realistic model of the LSST telescope, camera, and dome screen illumination system.

The illumination correction is difficult to measure directly. To do so would require a collimated light source with diameter small compared with the telescope pupil that is able to scan in pupil position, angle, and wavelength. Aside from the engineering challenges in producing such a light source in the dome, the resulting 5-dimensional data cube would require an inordinate amount of time to collect, and we have deemed this approach impractical. Instead, we plan to combine data from several paths to generate an illumination correction which is consistent with all of them:

- Detailed Monte Carlo optics models, such as FRED
- Laboratory measurements of the as-built camera with the CCOB (see Appendix D). This provides the detailed data cube envisioned above, but only for the camera in isolation. This is sufficient to characterize ghosts, but not scattered light in the full system.
- Raster scans of star fields in photometric conditions. The star fields will be chosen to be dense, without compromising photometry due to crowding, and to contain a wide range of colors.
- Second-order corrections from running self calibration. Self calibration is particularly capable at determining corrections which are position dependent but wavelength independent, such as arise from nonuniform illumination of the dome screen (see Figure 21).

This process will consume significant amounts of time in the lab, time on the sky, and time in analysis. Fortunately, the illumination correction is expected to be stable with time and will be remeasured only when components in the optical path of the telescope are altered.

Because the illumination correction is wavelength dependent (highly so near the edges of the filter bandpasses), its effect on photometry depends on the SED of the object in question. Figure 20 shows this effect as a function of both radius in the focal plane and star color, for a set of Kurucz main sequence stars. The effects are clearly too large to ignore; left uncorrected, they would overwhelm the uncertainty budget. This is discussed further in Section 8. It seems likely that the quality of the illumination corrections is currently limiting the photometric accuracy of large surveys, and we expect the LSST treatment of the illumination correction to yield significant improvement.
5.1.2. Normalizing Fluxes Across Wavelengths
The intensity of the monochromatic illumination source will vary with wavelength. This
intensity will be accurately measured with a calibrated photodiode system that looks at the
screen. Each wavelength slice of the data cube will be divided by the illumination intensity,
so that the ratio of the data at any two wavelengths is the ratio of the system throughputs.
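A minimal sketch of this step, assuming the narrow-band flats arrive as a (wavelength, y, x) data cube together with the photodiode measurement of the screen intensity at each wavelength (array names and shapes are illustrative):

```python
import numpy as np

def normalize_flat_cube(flat_cube, photodiode_intensity):
    """Divide each wavelength slice of the narrow-band flat cube by the
    calibrated photodiode measurement of the screen intensity at that
    wavelength, so the ratio of any two slices is the ratio of throughputs."""
    # flat_cube: shape (n_wavelength, ny, nx); photodiode_intensity: (n_wavelength,)
    return flat_cube / photodiode_intensity[:, np.newaxis, np.newaxis]

# Example with a toy 3-wavelength, 4x4-pixel cube.
cube = np.random.default_rng(0).uniform(0.8, 1.2, size=(3, 4, 4))
intensity = np.array([0.9, 1.0, 1.1])
normalized = normalize_flat_cube(cube, intensity)
```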
5.1.3. Correcting for Pixel Geometry
A pixel on the optical axis subtends more area on the sky than a pixel near the edge of the field. This results in an exposure of a uniform surface brightness source being brighter at the center than at the edge. This gradient, which can be precisely calculated from the optical prescription of the system, must be removed from the flats. If it were not, flattened science images would have the gradient removed, and the result would be systematic errors in the photometry of point sources, with a source at the edge being measured brighter than the same source on the optical axis.
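The sketch below illustrates the form of such a geometric correction; the cubic radial distortion and its coefficient are placeholders, since the real map comes from the LSST optical prescription.

```python
import numpy as np

def relative_pixel_solid_angle(p, c2=-0.02):
    """Relative solid angle subtended by a pixel at normalized focal-plane
    radius p (0 on the optical axis, 1 at the field edge), for an assumed
    radial distortion theta(p) = p*(1 + c2*p^2). The functional form and the
    coefficient are illustrative only, not the LSST optical prescription.
    With a negative c2 the on-axis pixel subtends the most sky, as in the text."""
    dtheta_dp = 1.0 + 3.0 * c2 * p ** 2   # radial plate scale
    theta_over_p = 1.0 + c2 * p ** 2      # tangential plate scale
    omega = dtheta_dp * theta_over_p
    return omega / omega[0]               # normalized to the on-axis value

p = np.linspace(0.0, 1.0, 6)
print(np.round(relative_pixel_solid_angle(p), 4))
# Dividing a dome flat by this map removes the purely geometric gradient,
# so flattened point-source photometry no longer carries it.
```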
5.1.4. Accounting For Finite PSF Width
Before we can discuss construction of the synthetic flat, we must take a brief detour and extend the discussion in Section 4. There, we implicitly assumed that the flux from an object is sampled at a single point in the focal plane. We must now extend the formalism to account for the fact that the system has a PSF with finite width, so that the flux from an object is spread over multiple pixels, each of which may differ slightly in response from its neighbors. Let us use i as an index over the pixels in the PSF for a particular object, and let w_i be the integral of the PSF over the area of pixel i, with Σ_i w_i = 1. To make the notation compact, define

    q_{b,i}(λ) = S^atm(λ, alt, az, t) S^sys_b(λ, x_i, y_i, t) λ^{-1}.    (18)

Further, define the SED of the object, f_ν(λ), by

    F_ν(λ) = F_b f_ν(λ),    (19)

where λ is within band b, and

    ∫_b f_ν(λ) dλ = 1.    (20)

Then the total number of counts from the object, C^inst_b, is

    C^inst_b = F_b Σ_i w_i ∫ q_{b,i}(λ) f_ν(λ) dλ.    (21)

For compactness, let's define

    Q_{b,i} = ∫ q_{b,i}(λ) f_ν(λ) dλ.    (22)

Since we are following the path from counts to flux, we are interested in

    F_b = C^inst_b / ( Σ_i w_i Q_{b,i} ).    (23)

At this point, we notice that Q_{b,i} is close to what we mean by a 'flat'. Let us make it identical, by factoring out the normalization, which is not directly measured:

    Q_{b,i} = N Q^norm_{b,i},    (24)

where N is chosen to make max_i(Q^norm_{b,i}) = 1. N will become the zeropoint of the exposure: Z^obs_b = -2.5 log10(N).
There is a problem with using this flat, however: at the time we process the pixels in Flat SED Calibration (Figure 1), we are unable to calculate Q_{b,i} because we are (probably) ignorant of both f_ν(λ) and the atmospheric bandpass for the observation, S^atm(λ). Even if we knew that information, this prescription would be unworkable because the flat would need to be constructed with an SED that varies rapidly over the focal plane. So we must accept our inability to work with Q_{b,i} when processing the pixels, and must content ourselves with a broadband flat measured from the dome flat-field system:

    Q^BB_{b,i} = ∫ q^sys_{b,i}(λ) f^BB_ν(λ) dλ,    (25)

where f^BB_ν(λ) is an SED we are free to choose in constructing the broadband flat. Note that q^sys_{b,i}(λ) = S^sys_b(λ, x_i, y_i, t) λ^{-1} includes only the system response, not the atmosphere. We can now write

    F_b = (1/N) [ C^inst_b / Σ_i w_i Q^BB_{b,i} ] [ Σ_i w_i Q^BB_{b,i} / Σ_i w_i Q^norm_{b,i} ].    (26)

Expressed in magnitudes, this becomes

    m^nat_b = m^inst_b + Z_p - 2.5 log10 ( Σ_i w_i Q^BB_{b,i} / Σ_i w_i Q^norm_{b,i} ).    (27)

This may not at first seem like an advance over equation 23, but in fact it is: the first term on the RHS is just what we usually mean by measuring the flux in a flattened image. It is done at the pixel level in the usual way, without any dependence on f_ν(λ). We can at least imagine calculating the third, correction, term in exactly the way it is written. We do not need to actually access the image pixels, but do need to know the PSF and the object SED, and we need to construct at least a localized flat from the object SED. We reserve this as a future option in calibration processing, because it may reduce the systematics floor in the instrumental magnitudes. For the present, however, we make the assumption which is universally made today in doing photometry, albeit almost always implicitly: the ratio of the true flat to the broadband flat is independent of the pixel within the PSF footprint, so that

    m^nat_b = m^inst_b + Z_p - 2.5 log10 ( Q^BB_b / Q^norm_b ).    (28)

Note that the correction term must be applied in the "SED Correction" block of Figure 1. The required information to compute it is not available earlier. We now turn to the actual measurement of q_{b,i}, and the construction of Q^BB_{b,i}.
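The sketch below evaluates the correction term of equation 27 for a toy PSF footprint; the responses, SEDs, and weights are illustrative arrays rather than LSST data products, and the gray normalization N is deliberately left inside Q since it cancels into the exposure zeropoint.

```python
import numpy as np

def per_pixel_Q(wavelen, response, sed):
    """Equations 22 and 25: integrate a per-pixel wavelength response against
    an SED (in F_nu). `response` has shape (n_pixels, n_wavelengths)."""
    return np.trapz(response * sed, wavelen, axis=1)

def bbf_correction(wavelen, w, q_obs, q_sys, sed_obj, sed_ref):
    """Correction term of equation 27: -2.5 log10 of the PSF-weighted ratio of
    Q^BB (reference SED, hardware-only response) to Q (object SED, full
    observed response). The normalization N of Q is left in place here; it is
    a gray factor absorbed into the exposure zeropoint Z_p."""
    Q_bb = per_pixel_Q(wavelen, q_sys, sed_ref)   # broadband-flat integrals
    Q_obs = per_pixel_Q(wavelen, q_obs, sed_obj)  # true per-pixel integrals
    return -2.5 * np.log10(np.sum(w * Q_bb) / np.sum(w * Q_obs))

# Toy usage: two pixels, a red object SED, a flat reference SED, and a toy atmosphere.
wavelen = np.arange(500.0, 700.0, 1.0)
base = np.exp(-0.5 * ((wavelen - 600.0) / 40.0) ** 2)
q_sys = np.vstack([base, 0.98 * base])                     # hardware-only, per pixel
q_obs = q_sys * np.exp(-0.3 * (wavelen / 600.0) ** -1.0)   # with a toy atmosphere
w = np.array([0.6, 0.4])
print("correction: %+.3f mag" % bbf_correction(
    wavelen, w, q_obs, q_sys, (wavelen / 600.0) ** 2, np.ones_like(wavelen)))
```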

5.1.5. Constructing the Synthetic Flat
As discussed in the previous section, the choice of the SED used to construct the synthetic flats is largely arbitrary. It defines a reference SED f^BB_ν(λ), which must later be corrected for as shown in equation 28 when both the true φ^obs_b(λ) and the actual object SED, f_ν(λ), are known.

Given the arbitrariness of f^BB_ν(λ), we must choose on grounds of convenience. At least three choices suggest themselves:

- Choose f^BB_ν(λ) = const. This has the virtue of maximal simplicity.

- Choose f^BB_ν(λ) to be a flat SED at the top of the atmosphere, propagated through an average atmosphere.

- Choose f^BB_ν(λ) to be in some sense an average SED for all objects in the survey, propagated through an average atmosphere, such that the average value of the BBF correction is minimized.

We will initially choose the first of these. This choice may have an unaccounted-for impact on calibration accuracy, however, because the above formalism does not include the propagation of errors. We can readily change the prescription during construction or commissioning.

Generation of the entire data cube of narrow-band flats is too time-consuming to complete on a daily basis (the dome screen requirements allow 4 hours per filter, so 24 hours per set). Instead, the full narrow-band flat field scan will only be repeated on a time interval adequate for measurement of the more slowly variable components of S^sys_b(λ, t), approximately monthly (but to be determined during commissioning). Since the system response can change on shorter timescales, principally due to a changing population of dust particles on optical surfaces, we correct the FSF nightly by multiplying by the ratio of two flats obtained with the broadband light source, one at the current epoch and one at the reference epoch when the narrow-band flats were acquired.
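A minimal sketch of that nightly update, with hypothetical array names for the reference-epoch synthetic flat and the two broadband dome flats:

```python
import numpy as np

def nightly_synthetic_flat(synthetic_flat_ref, broadband_tonight, broadband_ref):
    """Scale the reference-epoch synthetic flat by the ratio of tonight's
    broadband dome flat to the broadband flat taken at the reference epoch,
    tracking gray throughput changes (e.g. dust) between narrow-band scans."""
    return synthetic_flat_ref * (broadband_tonight / broadband_ref)
```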
5.2. Measuring the Atmospheric Transmission
Next, considering S^atm(λ, alt, az, t), we will again separate the measurement of the shape of the atmospheric response from the measurement of the normalization of the transmission. The currently available data are still incomplete, but suggest that the wavelength-dependent variations in τ_atm(λ, t) change smoothly over spatial scales larger than the field of view and over several minutes. By using an auxiliary telescope equipped with a spectrograph to observe bright stars with known SEDs, we can measure atmospheric absorption at a variety of positions in the sky every 5-10 minutes throughout the night. These observations are used as constraints for MODTRAN atmospheric models, generating representations of the atmospheric throughput in the form of a set of absorption components as a function of (alt, az, t). These components can be interpolated in time and space to generate a wavelength-dependent atmospheric absorption profile, τ^atm_b(λ, alt, az, t), for each observation.
This approach has been put into practice by Burke et al. (2010). Probe stars were selected in the range 9 < V < 12, and spectra were taken on a 1.5-m telescope with an R ~ 400 resolution spectrograph covering a usable wavelength range of approximately 500 < λ < 1000 nm. Exposure times were 2 to 4 minutes. The observing pattern is shown in Figure 22. The atmospheric model was a simple linear combination of templates generated by MODTRAN4 for molecular scattering, molecular absorption, aerosols, ozone, and water vapor. The models, together with some factors to account for the spectrograph efficiency as a function of wavelength, were fit to all the observed spectra simultaneously. Figures 23 and 25 show some results. The fits to the individual spectra are impressively good, and the coefficients in the resulting atmospheric models show strong variation, as expected for the highly variable nights for which the data were obtained. Traditional extinction plots (Figure 24) show behavior that would be well fit by traditional calibration in the r- and i-bands, together with behavior in the z- and y-bands that would not. In a subsequent development of the approach, Burke et al. (2013) obtained probe star spectra while simultaneously imaging over a wide field, roughly comparable to LSST operations. Unlike LSST, however, the imaging exposures were managed to keep the pixel coordinates of a given star nearly constant for all exposures. The full calibration process on the imaging data, including both atmospheric fitting and gray extinction determination through self calibration, yielded repeatability of approximately 8 mmag, good enough to meet the SRD minimum requirement for repeatability.
Using MODTRAN we can generate atmospheric transmission profiles at a variety of airmasses for each of the major sources of atmospheric extinction: molecular (Rayleigh) scattering, aerosol (Mie) scattering, and molecular absorption from each of O3, H2O, and combined O2/trace species, as is shown in Figure 30 for a standard atmospheric composition (the 1976 US Standard). These profiles capture the wavelength dependence of each component individually, over a grid of airmasses, and can be used as templates to generate new atmospheric transmission curves for any desired atmospheric composition as follows:

    S^fit(alt, az, t, λ) = e^{-τ_aerosol(alt, az, t, λ) X}
        × (1 - C_mol (BP(t)/BP_o) A_Rayleigh(X, λ))
        × (1 - √(C_mol (BP(t)/BP_o)) A_O2(X, λ))
        × (1 - C_O3(t) A_O3(X, λ))
        × (1 - C_H2O(alt, az, t) A_H2O(X, λ)).    (29)
The A_{Rayleigh/O2/O3/H2O} functions are absorption templates (i.e., 1 minus the transmission profiles from the MODTRAN models), the C_{mol,O3,H2O} are coefficients describing the composition of the atmosphere together with τ_aerosol, and BP(t) is measured. An example of an atmosphere generated in this fashion is shown in Figure 31, demonstrating that this method can be used to generate an atmosphere at any airmass for any composition desired, without needing to generate a full MODTRAN model.
With this capability, we can fit the auxiliary telescope spectroscopic data taken throughout the night for the values of C_{mol,O3,H2O}, increasing our SNR for these coefficients by modeling their expected behavior over time and across the sky. The Rayleigh scattering and molecular absorption due to O2 and other trace species are fit with a single coefficient, C_mol, which simply scales the MODTRAN templates to the appropriate level for Cerro Pachón, and then only changes with the barometric pressure (BP). The O3 absorption is fit with a single C_O3 value for each night, as it is not expected to vary more than 5-10% within a night. The aerosol absorption, as it is expected to have a small spatial variation across the sky, is modeled as

    τ_aerosol(alt, az, t, λ) = (τ_0 + τ_1 EW + τ_2 NS) (λ/λ_0)^α,    (30)
where EW and NS are defined as EW = cos(alt) sin(az) and NS = cos(alt) cos(az), projections of the telescope pointing in the EW/NS directions. Single values of $\tau_0$, $\tau_1$, $\tau_2$ and $\alpha$ are fit for each night of observing, with $\tau_1$ and $\tau_2$ likely to be very small (Burke et al. 2010). The $H_2O$ absorption is likewise expected to show spatial variation, but also time variability, and is modeled as

$$C_{H_2O}(alt, az, t) = C_{H_2O}(t) + \frac{dC_{H_2O}}{dEW}\,EW + \frac{dC_{H_2O}}{dNS}\,NS \quad (31)$$
using a constant spatial EW and NS gradient per night and a $C_{H_2O}(t)$ that is fit to each auxiliary telescope measurement (and interpolated between these times).
The coefficients $C_{mol/O_3/H_2O}$ and $\tau_{aerosol}$ will be determined using spectra of bright stars obtained from the 1.2-m LSST auxiliary telescope. The stars observed with the auxiliary telescope must be bright (r < 12) and ideally either white dwarfs or F stars, i.e., stars with relatively simple and well-understood SEDs, to minimize confusion with the atmospheric extinction. By observing the same grid of stars on multiple nights, even if the SEDs are not well determined initially, they can be bootstrapped from the many epochs of data.
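For illustration only, the following sketch evaluates the per-pointing parameterizations of Equations 30 and 31; the fitted nightly parameters (tau0, tau1, tau2, alpha, the nightly C_H2O(t), and its EW/NS gradients) are assumed to be given, and the reference wavelength is again a placeholder.

```python
import numpy as np

def ew_ns(alt, az):
    """EW/NS projections of the telescope pointing (alt, az in radians)."""
    return np.cos(alt) * np.sin(az), np.cos(alt) * np.cos(az)

def tau_aerosol(alt, az, wavelen, tau0, tau1, tau2, alpha, wavelen0=675.0):
    """Aerosol optical depth of Eq. 30 for a given pointing and wavelength grid."""
    EW, NS = ew_ns(alt, az)
    return (tau0 + tau1 * EW + tau2 * NS) * (wavelen / wavelen0) ** alpha

def c_h2o(alt, az, c_h2o_t, dc_dEW, dc_dNS):
    """Water vapor coefficient of Eq. 31, with nightly EW/NS spatial gradients."""
    EW, NS = ew_ns(alt, az)
    return c_h2o_t + dc_dEW * EW + dc_dNS * NS
```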

5.3. Estimating SEDs From Colors
Correcting for the color terms resulting from the difference between $\phi^{meas}_b(\lambda, t)$ and $\phi^{std}_b(\lambda)$ requires some preliminary measurement of the color of each calibration star (to within 0.02 magnitudes). This means we must either have some prior knowledge of the colors of each star (from Gaia, for example) or we must have some other method for measuring colors in the ugi bands relevant to determining metallicity and the color corrections detailed in Section 4.1, presumably by measuring the magnitudes of these stars in photometric data. Without this requirement, we could just combine all photometric and non-photometric data in the self-calibration routine, leaving the self-calibration solver to determine the appropriate $\Delta z_j$ to compensate for any non-photometric images.
Assuming that we must first identify and use photometric data to determine the colors of each object, this could proceed as follows. Identify all observations which were obtained in relatively photometric conditions by searching for images where the average scatter in magnitude measured for each source was less than some threshold (say < 0.05 magnitudes). Using these images and the standard stars in them, measure a preliminary color for each object. With this preliminary color, make a correction for $\Delta m^{meas}_b$ and run the self-calibration solver for this (photometric) subset of the data. Iterate the results of the self-calibration solver to improve the color determination for each star, until the color measurement converges to within 0.02 magnitudes.
At this point, we have colors accurate enough to apply a $\Delta m^{meas}_b$ correction sufficient to run self-calibration on all images, including the non-photometric data. There will be some data which are not calibratable, due to a large amount of cloud extinction; these images will be identifiable by the low signal-to-noise ratio of the stars in the image.
5.4. Finding the Zero Points: Self Calibration
In order to correct for the more rapid gray-scale variations in the relative normalization of $S^{atm}(alt, az, t)$ due principally to cloud extinction, we must use the observations of calibration stars in the images themselves. This `self-calibration' procedure could be thought of as creating a massive calibration `standard' star catalog, where the calibration stars are a selected set of the non-variable, main-sequence stars in the science images; their main difference from true standards is that the true magnitudes of the calibration stars have to be bootstrapped from the many different observations of the survey, and their SEDs need to be inferred from multicolor photometry. For every calibration star, corrections for $\phi^{sys+atm}_b(\lambda, t)$ must be applied to produce a standardized magnitude, $m^{std}_b$; then in the self-calibration procedure we minimize the difference between the standardized magnitude and a model magnitude,

$$\chi^2 = \sum_{ij}\left(\frac{m^{std}_{b,ij} - m^{model}_{b,ij}}{\sigma^{std}_{b,ij}}\right)^2 \quad (32)$$
where the model magnitude is derived from the best-fit `true' magnitude of the calibration star and a model describing how we expect the magnitude to vary from observation to observation. In the simplest self-calibration plan, this model simply consists of a normalization constant (zeropoint offset) for a `patch' equivalent to the size of a CCD,

$$m^{model}_{b,ij} = m^{best}_{b,i} - \Delta z_{b,j}. \quad (33)$$
This produces best-fit magnitudes for the calibration star catalog as well as zeropoint offsets (normalization constants) for each CCD in every observation, allowing us to correct for atmospheric extinction on the scale of a CCD. By adopting a more complex model, this procedure can also correct for variations in the relative normalization of the total system throughput beyond those contributed by cloud extinction (such as remaining errors in the illumination correction for the broadband and narrow-band flat fields), but it is generally limited by the number of stars and the number of observations of each star that are obtained. A CCD-sized patch provides several hundred calibration stars per patch, allowing good signal to noise when determining cloud extinction which varies from observation to observation. This is similar in nature to the ubercal method applied to SDSS in Padmanabhan et al. (2008), and more recently to DLS (Wittman et al. 2012) and PanSTARRS-1 (Schlafly et al. 2012).
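The structure of the least squares problem defined by Equations 32 and 33 is easy to sketch. The Python fragment below is a heavily simplified illustration (no outlier rejection, no iteration, dense star and patch index arrays assumed); it is not the LSST implementation.

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.linalg import lsqr

def solve_selfcal(star_idx, patch_idx, m_std, sigma):
    """Fit m_std[k] = m_best[star_idx[k]] - dz[patch_idx[k]] by weighted least squares."""
    nobs = len(m_std)
    nstars = star_idx.max() + 1
    npatch = patch_idx.max() + 1
    w = 1.0 / sigma
    # each observation gives one row: +w in its star column, -w in its patch column
    rows = np.concatenate([np.arange(nobs), np.arange(nobs)])
    cols = np.concatenate([star_idx, nstars + patch_idx])
    vals = np.concatenate([w, -w])
    A = coo_matrix((vals, (rows, cols)), shape=(nobs, nstars + npatch)).tocsr()
    x = lsqr(A, w * m_std)[0]
    return x[:nstars], x[nstars:]        # best-fit magnitudes, patch zeropoints
```

The overall additive degeneracy noted later in this section (an arbitrary constant can be added to every magnitude and zeropoint) is simply left to the solver here; in practice it is pinned as described in the text.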
Repeating Equation 15 above, adjusting the obs indexes to meas to reflect the difference between the true and measured quantities,

$$m^{std}_b = m^{inst}_b - \Delta m^{meas}_b + Z^{meas}_b \quad (34)$$

we can relate the terms in this equation to the corrections just described above. $\Delta m^{meas}_b$ originates from the difference between $\phi^{meas}_b(\lambda, t, x, y)$ and $\phi^{std}_b(\lambda)$ convolved with the source SED, and thus it depends on the shape of the total system response as well as the shape of the source SED. $\Delta m^{meas}_b$ will be calculated by combining a series of model SEDs with $\phi^{meas}_b(\lambda, t, x, y)$ at various locations in the focal plane, creating a lookup table of values to apply to measured magnitudes. The $Z^{meas}_b$ zeropoint offset comes from any normalization constants generated by the self-calibration procedure (in the simple model, just the $\Delta z_{b,j}$ in Equation 33 above).
These standard magnitudes are calibrated for variations in the observed bandpass shape (where applicable) and relative normalization, and thus are directly comparable from one observation to the next. However, they are not yet tied to an external physical scale or from one filter band to another, and thus only define an internally calibrated LSST magnitude in a particular filter.
To fulfill SRD requirements 3 and 4, these internally calibrated natural magnitudes must also be tied from one filter band to another, and then tied to an absolute external physical scale. For this, a further set of measurements is needed. In all filters, a set of spectrophotometric standards must be observed and calibrated using the steps described above. Then the known SED is combined with the standard bandpass shape to generate synthetic color photometry. The synthetic colors are then compared with the calibrated measured standard magnitudes to calculate $\Delta_{b-r}$, the corrections needed to tie measurements in each filter together (referenced to the r band). At this point, only one final measurement is necessary to tie the entire system to an external physical scale: an r-band LSST natural magnitude measurement of an absolutely calibrated source on a photometric night. Although in theory these last two steps could be done with a single externally calibrated object on a single photometric night, a larger set of external reference objects with well-known AB magnitudes will be used to reduce systematic errors. This defines an AB magnitude,
$$m^{AB}_b = m^{std}_b + \Delta_{b-r} + \Delta_r \quad (35)$$

which can be compared to absolute physical flux scales.
This self-calibration procedure can be successful only if patches overlap on the sky, so that the same star is observed on multiple patches. This means complete sky coverage is necessary to link all stars together into a rigid system, but it also indicates that some amount of dither is required. Our investigations have shown that dither patterns where the overlap is one quarter of the field of view or more produce results meeting the SRD requirements. Note that $m^{best}_{b,i}$ and $\Delta z_{b,j}$ are constrained only up to an arbitrary additive constant. For convenience, this constant can be set so that stars have roughly correct AB magnitudes; however, the goal after self-calibration is primarily to have a rigid, self-consistent magnitude system, equivalent to the natural magnitudes. Accurately calibrating the internal magnitudes to an external scale is discussed in Section 6.
5.5. Calibration Operations
The sequence for photometric calibration is then:
1. Acquire the data cube of narrow-band flat field images, approximately monthly. Determine and apply the wavelength-dependent illumination correction. Measure $S^{sys}_b(\lambda, t, x, y)$. Note that the time to acquire these flats is required to be less than four hours per filter, so a full set of filters requires 24 hours.
2. Acquire a broadband flat in each filter at the start of each observing night. Generate the flats which will be used for the night's images by scaling each synthetic broadband flat by the ratio of tonight's broadband flat to the reference broadband flat (Section 5.1.5; a small sketch of this scaling follows the list). These flats will also be used to do a nightly camera gain calibration.
3. After performing the usual mosaic image processing (bias correction, fringe correction, flattening, etc.), extract ADU counts of sources from the images.
4. Acquire spectra of known stars roughly every 5-10 minutes throughout each night, fit for the atmospheric absorption coefficients, and generate $S^{atm}_b(\lambda, t, az, el)$ for each science image.
5. Combine $S^{atm}_b$ and $S^{sys}_b$ with a range of model SEDs to create lookup tables for $\Delta m^{meas}_b$ at various locations in the focal plane, and store these as a data product.
6. At every Data Release, run the self-calibration procedure, applying $\Delta m^{meas}_b$ to the stars chosen for the self-calibration procedure and minimizing $\chi^2$ from Equation 32.
7. Apply the appropriate $Z^{meas}_b$ and $\Delta m^{meas}_b$ to all objects in the Data Release catalog, producing standardized magnitudes.
8. Apply the measured corrections $\Delta_{b-r}$ and $\Delta_r$, producing absolutely calibrated magnitudes.

This results in calibrated $m^{AB}_b$ values in a standardized bandpass shape, with above-the-atmosphere fluxes.
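As a concrete (and deliberately simplified) illustration of the flat scaling in step 2, assuming the ratio is applied pixel by pixel and that all array names are hypothetical:

```python
def nightly_flat(synthetic_flat, broadband_tonight, broadband_reference):
    """Scale the reference synthetic broadband flat by tonight's/reference ratio."""
    return synthetic_flat * (broadband_tonight / broadband_reference)
```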
6. Fixing LSST to an external scale
The next two subsections describe how the internally calibrated standard magnitudes, independently calibrated in each filter bandpass, are fixed to an external scale such that the flux in a single band can be compared to the flux in another filter band (SRD requirement 3) and that the flux in a particular filter band can be compared to an absolute external system (SRD requirement 4). This is equivalent to determining $\Delta_{b-r}$ and $\Delta_r$ from Equation 35. In practice, the same standards will be used for both tasks, and we discuss them together.
Both the band-to-band calibration for each filter b (the $\Delta_{b-r}$ values) and the absolute calibration of $\Delta_r$ will be determined by measuring the flux from standards. Ideally these would be standard sources placed above the atmosphere, but in practice they must be celestial objects whose SEDs across the LSST wavelength range are precisely measured and/or predicted by physical models, and which are contained in the LSST survey fields. Although in principle a single standard with known colors would be sufficient, the major concern here is with systematic errors, for example a variation in color calibration across the sky induced by, say, seasonal and/or directional variations in water vapor and aerosol content. Our current simulations (Section 9) show significant spatial patterns in photometric zeropoints. Monitoring and controlling these systematics will require a larger set of standards.
These fall into three groups:
- DA white dwarfs observed with HST. Relatively few in number, these form the "gold standard" for the survey, and include not only the three fundamental standards (Bohlin & Gilliland 2004a) used to calibrate instruments on board HST, a set that is self-consistent to better than 5 mmag, but also a growing set from the HST program GO-12697 of Saha et al., for which observations are still underway.
- DA white dwarfs for which accurate spectroscopic determinations of $T_{eff}$, g, and interstellar reddening are available.
- Stars observed with Gaia. By roughly 2020, when LSST will be in commissioning, Gaia will achieve 1 mmag photometry in the Gaia G band for a large set of stars with V < 18.5 (Jordi et al. 2010). If the Gaia G magnitudes can be accurately transformed to the LSST bands, we will have a very large set of absolute standards. This transformation is conceivable, because Gaia produces low resolution spectra for every object. We are currently investigating the photometric accuracy that can be achieved, and do not yet intend to rely on Gaia photometry for calibrating LSST.
Additionally, the main sequence stellar locus can be used to check for systematic errors in
photometry.
6.1. White Dwarf Standards
DA white dwarfs have atmospheres dominated by hydrogen, and thus are the simplest stellar atmospheres to model. The opacities are known from first principles, and in the temperature ranges we are interested in, the photospheres are purely radiative and photometrically stable. The emergent fluxes from such an atmosphere can be defined by just two parameters, the effective temperature and the surface gravity, and only require a flux normalization at some wavelength. These parameters are easily determined spectroscopically from a detailed analysis of the H I Balmer profiles, without reference to any photometry. Thus the entire spectral energy distribution (SED) from the UV to the IR can be calculated at arbitrary spectral resolution and folded through any well determined photometric band pass or spectroscopic system. The primary example of such an effort is Holberg & Bergeron (2006), where synthetic photometry of DA white dwarfs was used to place UBVRI, 2MASS JHK, SDSS ugriz and Stromgren uby magnitudes on the HST photometric scale to 1% using a consistent set of photometric zero points and magnitude offsets. A fundamental independent check on this photometric calibration was established by using a set of DA white dwarfs having good trigonometric parallaxes and calculating the corresponding photometric parallaxes from the Bergeron photometric grid (Holberg et al. 2008). The two parallaxes agreed at the 1% level.
This idea has been quantitatively tested with HST/STIS spectroscopy by Bohlin (2000), who observed 4 bright DA white dwarfs spanning a range of temperatures and unaffected by reddening (due to their proximity to us), and found their relative flux distributions to be internally consistent with model predictions (Finley et al. 1997) from spectroscopic $T_{eff}$ and $\log g$ to better than 1% in the wavelength range 0.2-0.9 μm. Spectrophotometry of Vega with STIS (Bohlin & Gilliland 2004b; Bohlin 2007) referred to the DA white dwarf (WD) flux scale shows agreement with the Hayes et al. (1985) calibration at the 1 to 2% level, and with the Kurucz (2003) Vega atmosphere model to better than 1% in the wavelength range 0.5-0.8 μm. This seems to indicate that in the visible region, the Kurucz model is a better predictor of the flux from Vega than the ground-based comparisons to standard sources. The WD-based calibration disagrees with Hayes (1985) by 5% at 0.4 μm, and by 10% between 0.9 and 1.0 μm. The 9400 K Kurucz model for Vega agrees with the STIS observations of Vega tied to the WD scale to about 2% from 0.28 to 1.0 μm. We surmise that 1) outside 0.5-0.8 μm, atmospheric extinction is a real hindrance to setting up calibrators, and 2) the self-consistency of the WDs is better than the comparison with the best model for Vega, verifying that the WD modeling is superior to that for A stars and validating the assertion that DA white dwarfs are best understood as standard SEDs. The calibration in place for HST instruments relies on Vega for the flux at 555.6 nm; fluxes relative to this wavelength are determined by 3 (of the 4 available) primary WD models.
For LSST, it is desirable to have a larger set of calibrated DA white dwarfs which are faint enough to be unsaturated in the regular survey observations. Ideally there should be at least 3 of them visible in the sky at any time, spanning a range of airmass and direction. However, DA white dwarfs fainter than 17th mag in general suffer some reddening, which must therefore be determined in addition to $T_{eff}$ and $\log g$. The HST program GO-12697 (PI: Saha) is photometrically measuring a set of 9 such DA white dwarfs near the equator and distributed in RA with ACS on board HST. This photometry will be on the gold standard of the three white dwarfs that define the HST absolute scale, and systematically accurate to within 10 mmag. The program is also obtaining high S/N spectra with the Gemini telescope(s) to determine accurate $T_{eff}$ and $\log g$. Comparison of the photometric colors and the model SED will then yield the reddening, and thus render the object an SED standard. To date only 2 of the 9 objects have been observed by HST, while the Gemini spectroscopy is farther along. A follow-up proposal to augment the targets to span non-equatorial declinations in a later HST cycle was rejected by the TAC.
In addition to the selectively crafted effort described above, it is estimated that there will be ≈ 100/10 DA/DB WD stars with r < 24 in each LSST image at the South Galactic Pole, though few of these will have the spectra available that are necessary for synthetic photometry. At least 100-1000 across the sky will be used to search for systematic effects. Catalogs of WD stars visible from Cerro Pachon have been constructed.
6.2. Population Methods
An alternative approach to using a small handful of precisely known standards is to use
the properties of entire populations of astronomical objects.
The locus of main sequence stars in color-color space is also reasonably well understood and has been used to calibrate photometric colors with success in previous surveys (MacDonald et al. 2004; Ivezić et al. 2007; High et al. 2009). The use of the main sequence stellar locus in addition to WD stars will provide a valuable check on systematic effects that may arise from using (primarily) white dwarfs in the determination of $S^{atm}(\lambda, alt, az, t)$, as white dwarfs are bluer than most of the main sequence stars used for the bulk of the remainder of the calibration procedures.
Additional checks on the quality of the color calibration will be based on color residuals when determining photometric redshifts for galaxies. Analyzing these residuals as a function of galaxy brightness and color, and across the LSST footprint, will yield detailed quantitative estimates of the calibration quality.

6.3. Computational Technique for Determining $\Delta_{b-r}$
The values for $\Delta_{b-r}$ will be determined by generating model $m^{std}_b$ values for each band-to-band calibration object, then minimizing
$$\chi^2 = \sum_i\left(\frac{(m^{std}_{b,i} - m^{std}_{r,i})_{meas} - (m^{std}_{b,i} - m^{std}_{r,i})_{model}}{\sigma_{b-r,i}}\right)^2. \quad (36)$$
This comparison can be done using subsets of objects from low Galactic extinction regions, and then bootstrapping to the entire sky to check for systematic effects.
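One plausible numerical realization of this step, assuming $\Delta_{b-r}$ enters Equation 36 as a constant offset between the measured and synthetic colors, is an inverse-variance-weighted mean of the per-object color differences; the sketch below is only illustrative, not the adopted algorithm.

```python
import numpy as np

def delta_b_r(color_meas, color_model, sigma):
    """Weighted-mean offset between measured and synthetic (b - r) colors.

    color_meas, color_model : arrays of (m_b - m_r) for the standards
    sigma                   : per-object color uncertainties sigma_{b-r,i}
    """
    w = 1.0 / sigma**2
    diff = color_meas - color_model
    offset = np.sum(w * diff) / np.sum(w)
    err = np.sqrt(1.0 / np.sum(w))
    return offset, err
```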
After determining the band-to-band calibration, there is one further value required to calibrate the entire system to an absolute flux scale: $\Delta_r$. This could again be determined using a single object with a well-known flux and spectral energy distribution; however, multiple external calibrators provide a valuable check on systematic effects.
7. Calibration Hardware
7.1. Flat Field Illumination System
The flat field illumination system is designed to illuminate the dome screen with a uniform flood of either broadband or tunable narrowband light. A precision photodiode measures the intensity of the illumination. The requirements for the flat field illumination system are given in Sebag & Krabbendam (2013). The overall goals are very similar to those of the DECal system used for the Dark Energy Survey (Marshall et al. 2013), but due to the larger telescope aperture and more compact dome, the LSST system is designed somewhat differently. The overall design is shown in Figure 52.
7.2. Auxiliary Telescope
The role of the auxiliary telescope in the calibration process was described in Section 5.2. The requirements for the auxiliary telescope are given in Sebag & Krabbendam (2013). We will use the refurbished Calypso telescope, which will be moved from Kitt Peak to Cerro Pachon. It has a 1.2 meter diameter f/18 primary mirror, with instruments at the Nasmyth focus. Figure 50 shows the siting of the auxiliary telescope in relation to LSST, and Figure 51 shows the auxiliary telescope itself.
The auxiliary telescope will be instrumented with a spectrograph covering the wavelength range of 400 to 1125 nm at a resolution of approximately 400. Exposure times for typical atmospheric probe stars to obtain SNR ≈ 200 are estimated to be between 60 and 250 sec. Its operation will be automated, and under the control of the LSST Scheduler. Generally, the auxiliary telescope will not observe stars along the same line of sight as LSST, for two reasons. First, it requires exposure times roughly ten times longer than LSST, so it would rapidly get left behind. Second, the values for $C_{mol/O_3/H_2O}$ and $\tau_{aerosol}$ are better constrained by observing a wide variety of airmasses and locations on the sky that cover a wide range in N/S/E/W directions, as well as by utilizing repeat observations of the same star throughout each night, and then fitting the spectroscopic data from the entire night. This improves the signal to noise for the atmospheric absorption profiles generated for each science observation.
7.3. Water Vapor Monitoring System
As discussed in Section 4.4, water vapor can potentially vary on spatial and temporal scales that will not be adequately sampled by the auxiliary telescope. Because y-band photometry is so sensitive to precipitable water vapor (PWV), the LSST has a dedicated system to monitor it. The system has two components:
- A GPS system that measures an all-sky value for PWV. This system operates continuously, whether LSST is observing or not, and forms part of the site's complement of weather instruments.
- A microwave radiometer optimized for measuring PWV that is co-pointed with the LSST, so that a PWV measurement is available for every exposure.

Both of these system components are well tested in astronomical applications, and are relatively inexpensive (Blake & Shaw 2011; Radomski et al. 2010; Gaffard & Hewison 2003). Their data will be logged through the Engineering and Facility Database.
7.4. Camera System Telemetry
The calibration process relies on measurements collected by the camera system on a per-exposure basis, and stored in the Engineering and Facility Database. The measurements directly utilized are:
- focal plane temperature
- shutter travel time
- gain for each amplifier
There is a much wider range of available information, and we have the option to mine
it to understand and compensate for systematics in the system.
8. Calibration Error Budget
Many sources of error feed into the calibration process. It is a challenging task to identify and quantify all of them, and we have not yet completely done so. We present our current estimates here, but they are subject to change as our analysis improves. We first discuss repeatability errors, and then uniformity errors. The band-to-band and absolute calibration errors flow directly from the characteristics of the standard stars, and have already been discussed in Section 6.
8.1. Repeatability Errors
The repeatability error in $m^{std}_b$ for individual calibration stars must meet the requirement in the SRD (Requirement 1 in Section 2). The contributions to this error are best discussed in the context of Figure 1. If we recall that

$$m^{std}_b = m^{inst}_b - \Delta m^{obs}_b + Z^{obs}_b \quad (37)$$
it is natural to divide the error sources into three categories:
- errors in $m^{inst}_b$, determined by Flat SED Calibration
- errors in $\Delta m^{obs}_b$, determined by SED Correction
- errors in $Z^{obs}_b$, determined by Self Calibration

We will assume that the errors in these terms are uncorrelated, and add in quadrature to produce the overall error.

8.1.1. errors in $m^{inst}_b$
As sketched in Figure 1, $m^{inst}_b$ is produced by the LSST Data Management System (DM), through processing raw science exposures. Leading error contributions come from the flat spectrum flat (modulated by position and PSF changes of the star on the focal plane), sky background subtraction, and the photometry algorithm used to extract source counts from the flattened and sky-subtracted image. We do not attempt to quantify those individual contributions here, in good part because the relevant software is still early in its development process within DM. Rather, we note that photometric errors, considered as a function of stellar magnitude, always display a systematics floor at the bright end. The contribution of this error to repeatability cannot be reduced by any part of the subsequent calibration process, and adds in quadrature to the repeatability error estimate. The estimate from DM is that this error floor will be 3 mmag.
8.1.2. errors in $\Delta m^{obs}_b$
As shown in Section 5, $\Delta m^{obs}_b$ is a function both of the system bandpass, $\phi^{obs}_b$, and the SED of the object, $F_\nu(\lambda, t)$. It is affected by the following errors:
- errors in the atmospheric bandpass, $S^{atm}$
  - errors induced by measurement noise in atmospheric probe instruments (auxiliary telescope; water vapor monitoring system).
    Measurement errors from the water vapor monitoring system are expected to be ≈ 1 mm. Making use of the plots in Figure 10, which is for a variation of 5 mm, this translates to errors of 0.2 mmag in u-band, negligible in the g, r, and i bands, 1 mmag in z, and 2 mmag in y.
    We do not yet have reliable error levels for aerosols and ozone. Based on the data from Burke et al. (2010), we estimate that the errors are no more than 10% of the total variation seen in each of those components, RSSed together. Figures 35 and 36 show that the resulting errors are roughly 1.6 mmag in u-band, 2.1 mmag in g-band, 0.1 mmag in r, and negligible in all other bands.
  - Unmodelled spatial and/or temporal variation of atmospheric components, particularly aerosols.
    The water vapor extinction is measured along the LSST's line of sight by the co-pointed microwave radiometer, so it is not subject to this error. For aerosols we again take 10% of the total variation seen. This gives 1.4 mmag in u, 0.7 mmag in g, and negligible amounts for the others.
- errors in determination of the monochromatic illumination correction, which propagate directly into errors in $S^{sys}_b$.
  If the monochromatic illumination correction is ignored, the resulting errors in $\Delta m^{obs}_b$ are shown in Figure 20. The maximum effect is roughly 15 mmag in u-band and 3 mmag or less in the other bands. As discussed in Section 5.1.1, we will determine the illumination correction from a combination of modelling and dedicated measurements. While the accuracy of the result is difficult to assess in advance, we conservatively assume that the error will be 20%. This results in an error contribution of 3 mmag in u-band, and 0.6 mmag in the others.
- Wavelength-dependent errors in measurement of the dome screen intensity by the photodiode monitor. These will result in errors in combining the monochromatic dome flats to determine $S^{sys}_b$.
  A photodiode monitor for this purpose was first employed by Stubbs on the Blanco telescope at CTIO (Stubbs et al. 2007a), and an error analysis was undertaken in Stubbs et al. (2010). We are sensitive to systematic errors in the photodiode response rather than to noise, which can be effectively averaged over. Figure 46 estimates the level of these systematic errors as 2 mmag at 400 nm, 1 mmag between 470 and 950 nm, and increasing beyond 950 nm to 10 mmag. Figure 46 shows the effects on Kurucz SEDs of a randomly chosen systematic error curve that conforms to these levels. The effects are negligible in all but the u-band.
- errors in determining the SEDs of calibration stars from their multicolor photometry.
  In practice, $\Delta m^{obs}_b$ for a calibration star will be determined by looking up its SED as a function of a set of colors formed from the 6 band magnitudes, and then integrating against the system bandpasses $\phi^{obs}_b$ and $\phi^{std}_b$ according to equation 14. The SEDs will come from some model of main sequence (MS) stars (we have been using Kurucz models in our work). Real stars deviate from the canonical MS locus, however, due to many causes. Intrinsic widths of the MS color-color locus are generally estimated to be on the order of 20 mmag (Ivezić et al. 2007; High et al. 2009). This translates directly into an error in $\Delta m^{obs}_b$. For example, if we use Figure 18 as an indication of the maximal expected values of $\Delta m^{obs}_b$, we can multiply the maximum slope in each band by 20 mmag to get an estimate of the effect. The values of the maximum slopes are roughly: 0.28 (u), 0.04 (g), 0.02 (r), 0.02 (i), 0.02 (z), 0.01 (y). This leads to the values in Table 3. This error source is negligible for all but the u-band, where it contributes significantly to the error budget. It is likely that a more careful choice of SED models can reduce this error term.
- Effects of contamination buildup between monochromatic dome flats.
  Still TBR. This will likely determine the required frequency of taking monochromatic dome flats.
- Effects of errors in focal plane temperature on wavelength-dependent detector QE.
  The detector QE at wavelengths near the red cutoff is affected by temperature, with higher temperatures resulting in higher QE. The focal plane temperature is monitored by the camera, allowing the temperature at any point on the focal plane to be predicted to 0.5 K with respect to the reference condition when the monochromatic dome flats were obtained. Figure 28 shows, as expected, that the effect of that prediction error on the natural magnitudes is negligible for all but the y-band, where it is 0.2 mmag. There is, of course, a larger effect on the zeropoints, discussed below.
- errors in the filter bandpass due to variation in position with respect to the optical axis.
  Figure 29 shows the effects of shifting the filter bandpass by 1% of the central wavelength. The effect varies with the filter band, but an ensemble of stars of varied color will show an increased scatter of roughly 40 mmag. The LSST filter requirements permit a 2.5% shift in the bandpass from center to edge, and require a smooth variation. Assuming that variation is linear, a shift in the filter position by $\Delta R$ mm results in a bandpass shift

$$\frac{\Delta\lambda}{\lambda} = \frac{0.025\,\Delta R}{325} \quad (38)$$

  The resulting magnitude scatter, in mmag, is then

$$\Delta m \approx \frac{40}{0.01}\,\frac{\Delta\lambda}{\lambda} \approx 0.30\,\Delta R \quad (39)$$

  If we require that the effect be no more than 0.5 mmag, then $\Delta R < 1.6$ mm; a quick numerical check follows this list. The camera requirement for filter position knowledge is chosen to meet this photometric requirement.
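The arithmetic of Equations 38 and 39 can be verified in a couple of lines (a minimal check, using the roughly 40 mmag scatter per 1% bandpass shift quoted above):

```python
scatter_per_fractional_shift = 40.0 / 0.01   # mmag per unit (delta_lambda / lambda)
shift_per_mm = 0.025 / 325.0                 # Eq. 38: fractional bandpass shift per mm
dm_per_mm = scatter_per_fractional_shift * shift_per_mm
print(round(dm_per_mm, 2))                   # ~0.31 mmag per mm of filter offset (Eq. 39)
print(round(0.5 / dm_per_mm, 1))             # ~1.6 mm allowed for a 0.5 mmag effect
```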
8.1.3. errors in $Z^{obs}_b$
Every exposure has a spatial zeropoint model determined by Self Calibration as the solution to a large linear least squares problem, as discussed in Section 5. The errors in the zeropoints depend on the spatial scale of the perturbations, as one can see from a simple argument. One can reasonably expect that Self Calibration will determine the true magnitudes of the calibration stars with an error that is well below the repeatability requirement (σ ≈ 5 mmag) for individual measurements, so for present purposes we ignore the errors in the true magnitudes. Suppose a perturbation shifts, for a single exposure, the zeropoint over an area A by some constant ΔZ. If that area contains N calibration stars, each provides an estimate of ΔZ with an error of σ. The full set of calibration stars, assuming that their measurement errors are uncorrelated, allows ΔZ to be estimated to a precision of $\sigma_{\Delta Z} = \sigma/\sqrt{N}$. The density of calibration stars varies over the sky. Here we will use a mean value of 2 arcmin$^{-2}$, which corresponds to stars in the range 18 < V < 20 at a Galactic latitude of 30 degrees. This provides the following rough estimates for spatial scales of relevance (a short numerical check follows the list):
- Detector segment: $A = 11$ arcmin$^2$, $N \approx 22$, $\sigma_{\Delta Z} \approx 1$ mmag
- Detector: $A = 178$ arcmin$^2$, $N \approx 356$, $\sigma_{\Delta Z} \approx 0.25$ mmag
- Focal plane: $A = 3.4\times10^4$ arcmin$^2$, $N \approx 6.8\times10^4$, $\sigma_{\Delta Z} \approx 0.02$ mmag
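These numbers follow directly from the star density and per-star error quoted above; a minimal check:

```python
import numpy as np

density = 2.0     # calibration stars per square arcmin
sigma = 5.0       # per-star measurement error, mmag
for name, area in [("detector segment", 11.0), ("detector", 178.0), ("focal plane", 3.4e4)]:
    N = density * area
    print(f"{name}: N ~ {N:.0f}, sigma_dZ ~ {sigma / np.sqrt(N):.2f} mmag")
```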
This makes it clear that the size of a detector segment is a rough boundary, above which zeropoint errors should be negligible, while below it they may become quite significant.²

² In reality, many zeropoint perturbations (such as clouds and focal plane temperature variations) have a continuous spatial power spectrum, and a more sophisticated estimate is required. This estimate can be provided through "objective analysis", a formulation developed by the oceanographic and meteorological communities and based on the Gauss-Markov theorem (Bretherton et al. 1976; Bretherton & McWilliams 1980; McIntosh 1990). The problem addressed by objective analysis is: given a continuous scalar field, measured in a two-dimensional domain at a set of points, what is the optimal estimator for values of the field throughout the domain, and what is the expected error of that estimator? In our application, the scalar field is the zeropoint, and there are (noisy) measurements of it at the position of each calibration star in the field. The formalism allows the error to be predicted based on the measurement errors and the structure function of the field. Figure 40 shows an example result using a realistic cloud structure function, with a characteristic length scale of 500 meters, which is at the short (and more stressing) end of what we expect. This development has just begun, but it will offer an independent check of the simulation results. Early results show consistency between the two methods.
This expectation is confirmed by our simulation results, as shown in Figure 38. These runs, for the r filter, have no color-dependent errors, and data are plotted only for zeropoint errors less than 100 mmag (the fat tails extend further). The zeropoint errors after self calibration saturate at a level independent of the input zeropoint perturbations, and the saturation level is approximately $\sigma_{sat} = \sqrt{\sigma_1^2/N + \sigma_{sys}^2}$, where $\sigma_1 \approx 3$ mmag and $\sigma_{sys} \approx 0.7$ mmag.
Zeropoint errors obey the approximate relation above only when they are relatively small. If they become large enough to dim the calibration stars significantly, the reduction
in their signal-to-noise ratio increases the errors that come out of Self Calibration. This is
shown in Figures 41, 42 and 43, which present the results of a self calibration simulation, again without color-dependent effects. As the first of these figures illustrates, the zeropoint error has the shape of a fan in (zp, zp-error) space, with the width of the fan increasing as the input zeropoint increases. The other two figures, which are cross sections of the fan at different zeropoint levels, show that the error distribution is roughly a Gaussian core with fat tails. When the zeropoint is between 0 and 0.5 mag, the Gaussian sigma is only 1 mmag. Under extinction conditions near the limit at which we propose to continue survey operations, between 1 and 1.5 mag, the Gaussian core has a sigma of 30 mmag.
In practice, zeropoint variations are dominated by clouds, with smaller effects described below. The impact of these variations on calibration performance as a whole will depend strongly on the pdf of the cloud extinction, as the above results show. The strength of this dependence derives partly from the fact that the standard deviation in cloud extinction is roughly proportional to the extinction itself. Additionally, increasing extinction increases the photometric errors in the measurements of the calibration stars. Our current estimate for the cloud extinction pdf is based on rather crude data from CTIO, and is shown in Figure 39 for the r filter. Averaged over all filters and the full survey, these data suggest that the extinction should be less than 0.5 mag for approximately 94% of the observations. The higher extinction data therefore comfortably fall within the 10% fraction of the data excludable from the repeatability requirement by the SRD.
Other perturbations to the zeropoints arise from:
- Varying gray averages of extinction from aerosols, water vapor, and ozone. These are dominated by aerosols, and have spatial scales large compared to a detector.
- Camera gain variation. This is constrained by Camera requirements to be less than 1 mmag rms over a 1-hour period, and less than 10 mmag over 12 hours. These gain variations occur at the detector segment level.
- Shutter travel time variation. Camera requirements constrain this to 20 mmag. The effects are over spatial scales comparable to the size of the focal plane.
- Detector QE variation due to varying focal plane temperature. These occur over full detectors, affect only the y-band, and knowledge of them is constrained by Camera requirements to be better than 0.5 K. This results in a 1 mmag zeropoint perturbation.
In summary, for the 90% of the survey observations with cloud extinction less than 0.5 mag, we expect the zeropoint errors to be dominated by perturbations that act at the detector segment scale. It is prudent to expect that the resultant errors will be at $\sigma_{sat}$ for that scale, the saturation level determined above. Typical detector-segment-sized areas on the sky contain roughly 20 stars, giving $\sigma_{sat} \approx 1$ mmag. Some segment areas will have substantially fewer stars, however, leading to an increased $\sigma_{sat} \approx 2$ mmag. To be conservative, we carry this latter value in our error budget. The error budget for repeatability errors, including all of the above effects, and for the above specified 90% fraction of the survey, is summarized in Table 3.
8.2. Uniformity errors
Uniformity errors must meet the uniformity requirement in the SRD (Requirement 2 in Section 2). Nonuniformity can arise from systematic errors in either $Z^{obs}_b$, $\Delta m^{obs}_b$, or both. These in turn tend to arise from the combination of two factors: systematic patterns across the sky in stellar populations and the way they are observed by the survey (sky systematics); and incompletely modelled systematic variation of the system bandpass with respect to focal plane position and/or color (system systematics).
There are several sources of sky systematics:
- Errors in standards. Because standards are relatively sparse on the sky (Section 6), an error in the flux of a single standard can create significant nonuniformity in the overall calibration in the area of the sky near it.
- Systematic variation of calibration star properties across the sky, in color or in more subtle effects on the SEDs, for example from interstellar reddening.
- Unmodelled systematic variations in the atmosphere. These can include persistent patterns in cloud extinction or wavelength-dependent atmospheric components, such as aerosols. These are especially pernicious when they exhibit a N-S dependence.
Two examples of sky systematics are shown in Figures 44 and 45. In the first of these, the survey's dither pattern leaves intact a pattern of varying radius on the focal plane. In the second, there is systematic variation over the sky of cloud extinction (likely unrealistic).
Turning to system systematics, we discussed many causes for varying system bandpass in Section 8.1, and several of these can act as the second factor in generating nonuniformity:
- Unmodelled variation of the system bandpass as a function of focal plane position, which can beat against the sparse dither pattern of the survey

For reasons not yet fully understood, the least squares system solved by self calibration mixes spatial modes of different scales. In particular, input errors with focal plane scale can drive output errors with much larger spatial scales. Further, modes with large spatial scales tend to be poorly damped, so small amplitude errors on focal plane scales can sometimes result in long wavelength error modes with larger amplitude.
We do not at present have the analytical tools to usefully predict these error processes. We must rely on the results of simulations, as discussed in the following section. The error budget for uniformity, based on these simulations, is summarized in Table 4.
9. Testing and Verification
The calibration process presented above is complex, and meeting the SRD requirements depends on understanding and controlling a large number of small perturbations. We seek to verify that our approach will produce the required results, and are doing so with different techniques that apply to the stages of LSST's construction and operation. During the current final design phase, we are employing simulation tools, backed up when possible by measurements on the sky from existing telescopes; Burke et al. (2013) and Burke et al. (2010) are examples of this approach. During the construction phase, we will be able to feed measured data from actual telescope components into the simulations. During operations, our focus is on designing metrics that will let us assess the calibration quality on an ongoing basis.
9.1. Self Calibration Simulation
One of the final steps in the LSST calibration process is running self-calibration, solving for a large number of stellar magnitudes and observation zeropoints. The accuracy and precision of the self-calibration procedure have complicated dependencies on various sources of noise as well as on the overall observing strategy, which determines how well different regions of sky can be tied together. We have undertaken a series of simulations to test the validity of the LSST self-calibration process. A more comprehensive discussion is available in Yoachim et al. (2013).
To simulate self-calibration, we require a catalog of realistic LSST observations of bright stars. We generate this catalog by combining Galfast and Opsim. The Galfast code generates a realistic model of the Milky Way stellar populations, matching the observed distributions from SDSS. The LSST Opsim can then be used to generate observations of the stars in the Galfast model. For each Opsim pointing, we generate a list of observed stars along with an observed magnitude that includes atmospheric and hardware effects.
An example self-calibration simulation is shown in Figure 53. This simulates the first two years of LSST observations in r-band, with 1.3 million stars distributed fairly uniformly across the sky. The observed stellar magnitudes include offsets for cloud extinction, noise based on the Opsim 5σ limiting depth, errors due to ghosting and illumination corrections, and errors based on variations in the filter throughput and placement. The results are promising, showing that we easily meet the SRD uniformity requirement and are very close to meeting the repeatability requirement. The detailed input parameters of this simulation are listed in Appendix C.
Self-calibration simulations will continue to be important during commissioning and regular survey operations. By running simulations that match the conditions of the actual LSST observations, we will be able to identify regions of the sky which are poorly linked to the rest of the survey. We will also be able to identify which spatial scales have the largest errors, which can have important ramifications for cosmological observations (e.g., galaxy clustering) (Huterer et al. 2013).
9.1.1. Self-Calibration of a Large System Using HEALpixels
The self-calibration problem for LSST results in a very large least squares system. For comparison, the ubercal of SDSS involved 36 million observations of 12 million unique stars. For LSST, in the first two years we will have around 3.2 billion observations of 100 million unique stars. The memory and computation power required to solve such a system demand that we parallelize the problem in some way.
We have developed a technique using the Hierarchical Equal Area isoLatitude Pixelization (HEALPix) tessellation of the sphere. The HEALPix tessellation was originally designed for analyzing all-sky CMB observations. HEALpixels have equal area, and are distributed to allow fast calculations of multipole moments and power spectra.
To run self-calibration in parallel, we assign each observation (consisting of a patch ID, star ID, observed magnitude, magnitude error, and possibly an illumination patch ID) to the nearest four HEALpixels on the sky. This divides the observations in such a way that we have regions on the sky that are entire HEALpixels plus an added border of approximately one-half HEALpixel. We have had good results using 768 (53 square degree) or 3072 (13 square degree) HEALpixels (a brief sketch of this assignment follows).
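The four-nearest-pixel assignment can be done directly with the healpy interpolation utilities, as sketched below; the grouping structure and variable names are illustrative only. Each group would then be handed to the self-calibration solver independently and the per-pixel solutions reconciled as described next.

```python
import numpy as np
import healpy as hp

def group_by_healpix(ra_deg, dec_deg, nside=8):
    """Assign each observation to its four nearest HEALpixels.

    nside=8 gives 768 pixels of roughly 53 sq. deg., one of the resolutions
    quoted in the text. Returns a dict mapping pixel ID to a list of observation
    indices (each observation appears under four pixels, producing the
    overlapping borders described above).
    """
    pix4, _weights = hp.get_interp_weights(nside, ra_deg, dec_deg, lonlat=True)
    groups = {}
    for neighbor_set in pix4:                 # pix4 has shape (4, n_obs)
        for obs_index, pix in enumerate(neighbor_set):
            groups.setdefault(int(pix), []).append(obs_index)
    return groups
```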

Once the observations have been divided, we run the self-calibration solver on each HEALpix region independently. By solving each HEALpixel in isolation, each result has a unique floating zeropoint. To tie the system back to a single floating zeropoint, we construct a matrix based on

$$P^{model}_{ij} = P^{best}_i + HP_j \quad (40)$$

where the $P^{model}_{ij}$ are the patch zeropoints that were fitted on each HEALpixel. The equation is solved for the true patch zeropoints $P^{best}_i$ and the HEALpixel floating zeropoints $HP_j$, completely analogously to Equation 33. If an illumination correction is included, we replace the patch ID with a unique ID for each patch ID and illumination ID combination present.
At full density, we expect roughly 100 stars per calibration patch and roughly 15 observations per year. Thus, by using the patches to tie large-scale solutions together, we have reduced the computational requirements by three orders of magnitude. For example, the two-year r-band simulation shown in Figure 53 has 129 billion non-zero matrix elements in the full self-calibration formulation, but the patch solution needs only 32 million non-zero elements.
This procedure is inefficient in the sense that each patch zeropoint and stellar magnitude is solved for 4 times. However, there is a substantial speedup provided by not having to run the solver to convergence over very large spatial scales. Once the final patch zeropoints are solved for, we loop back through the data, apply the zeropoints (and possibly the illumination solution) to the stellar observations, and calculate a weighted mean for the final best-fit stellar magnitudes. We are currently weighting the patches in Equation 40 by the number of stars they contain. This should probably be refined, as patches taken in cloudy conditions will be poorly fit even if they contain many stars.
The solutions returned by fitting in parallel are well matched to solutions which solve the system simultaneously. Figure 55 compares a fit with the traditional global solver to a solution made with HEALpixels.
9.2. Auxiliary Telescope Simulation
We have a simulator under development, "auxteles", for the generation of spectra by the auxiliary telescope and the fitting of atmospheric models to them. The spectrum generation is done by simSpectro, which propagates Kurucz spectra through atmospheres generated by Modtran4 and into a simplified spectrograph model. The atmospheric model fitting is done by simSolver, and is based on the formulation of Burke et al. (2010). Auxteles has been used to study the impact of spectrograph resolution on the accuracy of the atmospheric models, with results shown in Figure 56. Note that the errors shown do not include errors due to unmodeled atmospheric variations away from the locations of the probe stars (see Section 8).
9.3. Calibration Performance Metrics
9.3.1. Repeatability
Testing our repeatability is rather trivial, as it does not require any external data. We simply apply the best-fit patch zeropoints to each patch and measure how well we make repeat measurements of each star. By comparing the repeatability in data taken in photometric conditions versus non-photometric conditions, we can empirically test the systematic limits of the photometry pipeline as well as constrain the structure function of typical clouds at Cerro Pachon.
9.3.2. Spatial Uniformity
The primary means of testing the spatial uniformity of the final calibration is to compare to previous surveys. Pan-STARRS notes that they tend to see SDSS-shaped footprints in their residuals when comparing surveys. This will probably only put an upper limit on the uniformity, as LSST will be deeper and/or have larger coverage than the available comparison surveys. Simulations can also help identify regions which we would expect to be poorly linked to the rest of the survey. This is expected to be a particular problem early in the survey, before all regions of the sky have many well-linked observations. We can measure very accurately our RMS as a function of patch and magnitude, and thus generate mock catalog realizations to run through the self-calibration procedure. These mock catalogs should give a good picture of the spatial uniformity, and can be used to help schedule future observations to ensure calibration residuals are minimized.
9.3.3. Flux Calibration
As mentioned in Section 6.1, there should soon be a system of white dwarf flux standards with HST observations. We can use subsets of these flux standards to make bootstrap estimates of our overall flux calibration. This is also a potential test of the spatial uniformity of the calibration, although this will be limited if the flux standards are concentrated along the celestial equator.

9.3.4. Color Calibration
There are now a number of techniques for comparing stellar colors across the sky. Ivezić et al. (2004) use a principal color analysis, High et al. (2009) use a stellar locus regression, and Schlafly et al. (2010) look at the color of main-sequence turn-off stars. For all of these techniques, the signal is usually dominated by dust extinction. However, at high Galactic latitudes, the differences in stellar colors reveal errors in the calibration.
In addition to main-sequence stars, our flux standards should be useful for checking the color calibration. As with the flux calibration in a single band, we can exclude some of the flux standards from the analysis and use the excluded stars to see how well we recover their colors.
10. Software Implementation
The calibration process presented above is software intensive, as is clear from Figures 1-4. All software components are implemented within the LSST Data Management System, in two separate productions, described briefly here and in much more detail in DM publications. The end products of the calibration process are Level 2 data products, also described below.
10.1. Calibration Products Production
The Calibration Products Production is responsible for the processes shown in Figures 2 and 3. These processes occur with varying frequencies:
- The monochromatic illumination correction is determined at most monthly, and likely less frequently, since it is expected to be quite stable.
- The corrected monochromatic flats are formed monthly.
- The synthetic broadband flats used in nightly reductions are computed every night.
- Atmospheric models are computed from the auxiliary telescope spectra and water vapor monitoring system every 24 hours.

Additionally, this production creates other data products used in nightly image processing, including bias frames, fringe frames, and crosstalk correction matrices. These are produced as needed.

10.2. Calibration Within the Data Release Production
The processes shown in Figure 1 are performed as part of the Data Release Production
(Figure 57).
10.3. Level 2 Data Products
LSST will characterize the system throughput to the degree necessary to achieve the SRD calibration requirements. The system throughput information will be captured by the normalized system response function, $\phi_b$:

$$\phi_b(\lambda\,|\,p) = \frac{\lambda^{-1}\,S_b(\lambda\,|\,p)}{\int \lambda^{-1}\,S_b(\lambda\,|\,p)\,d\lambda} \quad (41)$$
which relates the object's specific flux (spectral energy distribution; SED), $F_\nu(\lambda)$, and the calibrated, in-band flux at the top of the atmosphere, $F_b$:

$$F_b = \int F_\nu(\lambda)\,\phi_b(\lambda\,|\,p)\,d\lambda \quad (42)$$
In the equations above, p captures the dependence on time, telescope pointing (alt, az), and the position (x, y) at which the source has been imaged in the focal plane, and $S_b(\lambda\,|\,p)$ is the total system throughput. For details, see Eqs. 4 and 5 in the LSST SRD.
LSST aims to deliver calibrated fluxes, $F_b$, in the standard bandpass, for all objects and sources in its catalog. We plan to provide the calibrated fluxes for each source/object computed assuming i) a flat $F_\nu(\lambda) = const$ SED ($F^{flat}_b$), and ii) an SED chosen from a library of SEDs that is consistent with the object properties (measured color being the most apparent one, but shape, Galactic longitude and latitude, and others may be considered as well). We expect most users will prefer and use the latter.
These assumed SEDs and precomputed fluxes will not be appropriate for all cases. Examples include objects exhibiting SED variability, and objects with exotic SEDs. To enable recalibration of the measured fluxes using user-supplied SEDs, we will retain and make available the normalized system response function, $\phi_b$, for every visit. This will allow the computation of a multiplicative correction to the provided flux:
$$c = \frac{\int f^{user}_\nu(\lambda)\,\phi_b(\lambda\,|\,p)\,d\lambda}{\int f^{flat}_\nu(\lambda)\,\phi_b(\lambda\,|\,p)\,d\lambda} \quad (43)$$

and computation of the recalibrated flux as:

$$F^{user}_b = c\,F^{flat}_b \quad (44)$$

Note that $f^{flat}_\nu(\lambda) \equiv 1$ and, since $\phi_b$ is normalized to one, the denominator of Eq. 43 is identically equal to one as well. Therefore, only the numerator needs to be computed to get c.
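For a user-supplied SED tabulated on a wavelength grid, the correction of Equation 43 therefore reduces to a single numerical integral against the stored $\phi_b$. A minimal sketch (array names are placeholders):

```python
import numpy as np

def recalib_flux(F_flat_b, wavelen, phi_b, sed_wavelen, sed_flux):
    """Recalibrate a flat-SED flux using a user-supplied SED (Eqs. 43 and 44).

    Assumes phi_b is normalized so its integral over wavelen is one, which makes
    the denominator of Eq. 43 equal to one for the flat SED.
    """
    f_user = np.interp(wavelen, sed_wavelen, sed_flux)   # SED on the bandpass grid
    c = np.trapz(f_user * phi_b, wavelen)                # numerator of Eq. 43
    return c * F_flat_b                                  # Eq. 44
```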
10.3.1. Storing and Obtaining $\phi_b(\lambda\,|\,p)$
The total system throughput, $S_b(\lambda\,|\,p)$, from which $\phi_b(\lambda\,|\,p)$ is derived, can be factored into two components, $S^{sys}_b$ and $S^{atm}_b$:

$$S_b(\lambda, t, alt, az, x, y) = S^{sys}_b(\lambda, x, y, t) \times S^{atm}_b(\lambda, alt, az, t), \quad (45)$$

capturing its dependence on the telescope+camera system and the atmosphere, respectively. The two will be measured and/or derived as described in Document XXX.
$S^{sys}_b$ will be stored as a series of $(\lambda, x, y)$ data cubes. For each measurement, we expect on the order of 100 bins in both x and y, and about 200 bins in λ. As discussed in Section 5, $S^{sys}_b$ is expected to vary slowly with time, necessitating not more than roughly monthly re-characterization (and therefore, about one new data cube per month). These data cubes will be made available to the user, both in bulk form (e.g., FITS files) and as tables (or functions) in the LSST database.
$S^{atm}_b$ will be computed from models of the atmosphere derived using the spectra obtained with the auxiliary telescope. The parameters of this model will be stored in the database and made available to the user, together with the software needed to compute $S^{atm}_b$. For ease of use, we will also provide database tables or functions returning $(\lambda, S^{atm}_b, \sigma_{S^{atm}_b})$ evaluated with Δλ = 1 nm resolution, for each visit.³
The radiometer/GPS data on precipitable water vapor will be stored in the database
together with other exposure metadata. The user will be able to query this information for
each exposure.
³ Or potentially for each CCD in a visit, in case of strong gradients in $S^{atm}_b$.

10.3.2. Database-level Recalibration
We will provide an easy-to-use database interface to perform recalibration given user-defined SEDs. While the exact syntax is yet to be finalized, the following example should be illustrative of how the user could obtain recalibrated time series using the highest level of provided APIs:
SELECT
    midPointTai,
    filter,
    recalibPsFlux(sourceId, mySeds.sedId, "mySeds") AS flux,
    recalibPsFluxErr(sourceId, mySeds.sedId, "mySeds") AS fluxErr
FROM
    Sources, mySeds
WHERE
    Sources.objectId = mySeds.sedId
INTO
    myRecalibratedMagnitudes
The query above assumes that, for each object of interest, the user has uploaded their SED into a table named mySeds and keyed it by objectId. Given the name of the table and the ID of the SED entry, the functions recalibPsFlux and recalibPsFluxErr will use the sourceId to locate the appropriate $S^{sys}_b$ and $S^{atm}_b$ pertaining to that specific measurement, compute $\phi_b$ and the correction c, and return the recalibrated point source flux and its error, respectively.
A suite of lower-level functions will be provided as well, giving more granular control of the recalibration process.
11. Risks and Mitigations

The error budget presented in Section 8, and the simulation results presented in Section 9, show that LSST will meet the SRD requirements for calibration, easily for uniformity but with small margin for repeatability. There is clearly a risk that once commissioning or survey operations begin, an unanticipated source of error will arise, or that known error sources will have been underestimated. The calibration process that we have designed is very similar to those of PS-1 and DES, as previously mentioned. It is therefore cause for some concern that the PS-1 results reported in Tonry et al. (2012) show clear evidence for systematic errors that are not understood. Schlafly et al. (2012) nonetheless achieved 10 mmag calibration by empirically correcting for those errors, but it is far from clear that a further improvement by a factor of two would be achievable without better understanding of the source of the systematics. Before turning to more specific risks, we first address this general concern.
It is important to recognize that LSST will incorporate some significant improvements over PS-1:

• Continuous determination of atmospheric extinction through the auxiliary telescope and water vapor monitoring system

• Determination of monochromatic illumination corrections through forward modelling, observations of star grids, and feedback from self calibration

• Careful treatment of the difference between the SED used for construction of the broadband flat and that of objects being photometered

We are confident that these improvements will go a significant way toward better control of the systematics. To address the possibility that our confidence is misplaced, we are pursuing several mitigation strategies that will further improve performance.
• The first mitigation that we will develop is a more rigorous treatment of finite PSF effects, along the lines of Equation 28. We have some reason to expect that this will lower the systematic error floor on instrumental magnitudes.

• A second mitigation involves a modification of the overall calibration process. As presented in Section 3, the current design cleanly separates compensation for changing system bandpass shape (performed in SED Correction, see Figure 1) and for changing zeropoints (performed in Self Calibration). The SED Correction process is based completely on forward modeling, in which the effect of some wavelength-dependent perturbation (e.g., a change in atmospheric aerosols) on a particular calibration star is predicted by a measurement-based model of the effect. As an alternative, one can imagine combining the SED Correction and Self Calibration processes into one large least squares problem, in which not only the zeropoints are determined, but all the wavelength-dependent effects as well. We have tested a first step in this direction by incorporating a parameterized model of the wavelength-dependent illumination into Self Calibration. Initial results have not been promising, but this is far from invalidating the overall approach.

• Third, it is clear that systematic errors that affect the uniformity of calibration tie in to the overall survey strategy, and in particular its dithering pattern. The survey strategy is not rigid, and changes to improve calibration performance may well be possible.
We are also tracking a number of more specific risks:

• Simulation tools may produce misleading results because their functionality is limited, and validation is not complete.
We are well along in the development of a second generation calibration simulator which will address these concerns. Also, we will develop the Gauss-Markov approach to error modelling, mentioned in Section 8.1.3, into a tool which can act as an independent check on many aspects of the simulations.

• The systematic error floor for photometry is higher than expected.
We are working with LSST Data Management to evaluate this risk as the software and hardware evolve, and to mitigate it with improved image processing algorithms.

• Forward models for the illumination correction are less accurate than expected.
There is considerable scope to improve the models and to add additional sources of data as required.

• Sensor complexities may affect calibration.
Testing of prototype LSST sensors has uncovered complex behavior which has not generally been recognized in previous generations of CCD detectors (though it is likely to be there at lower levels). Effects include "tree rings", intensity-dependent PSF, and charge-dependent electron transport. These may leave an imprint on calibration which is not removed by the standard image processing approach. We are working with LSST Data Management to develop better calibration products, particularly flat fields, that, in conjunction with improved image processing algorithms, will reduce the impact.

• The currently planned 400 nm short wavelength cutoff of the auxiliary telescope may result in insufficient precision in ozone determination.
The effect of the short wavelength cutoff can readily be evaluated with existing data, prior to FDR, allowing a change in the spectrograph design if needed.

REFERENCES

Blake, C. H., & Shaw, M. M. 2011, PASP, 123, 1302

Bohlin, R. C. 2000, AJ, 120, 437

Bohlin, R. C. 2007, in Astronomical Society of the Pacific Conference Series, Vol. 364, The Future of Photometric, Spectrophotometric and Polarimetric Standardization, ed. C. Sterken, 315

Bohlin, R. C., & Gilliland, R. L. 2004a, AJ, 128, 3053

—. 2004b, AJ, 127, 3508

Bretherton, F. P., Davis, R. E., & Fandry, C. 1976, in Deep Sea Research and Oceanographic Abstracts, Vol. 23, Elsevier, 559-582

Bretherton, F. P., & McWilliams, J. C. 1980, Reviews of Geophysics and Space Physics, 18, 789

Burke, D., et al. 2013, AJ (submitted)

Burke, D. L., et al. 2010, ApJ, 720, 811

Eppeldauer, G. P., Yoon, H. W., Zong, Y., Larason, T. C., Smith, A., & Racz, M. 2009, Metrologia, 46, 139

Gaffard, C., & Hewison, T. 2003, Observations/Development Technical Report TR26, Met Office, National Meteorological Library, Exeter, UK. Also available from http://tim.hewison.org/TR26.pdf

Hansen, O. L., & Caimanque, L. 1975, PASP, 87, 935

Hayes, D. S., Pasinetti, L. E., Philip, A. G. D., & Lynga, G. 1985, Calibration of Fundamental Stellar Quantities, Proceedings of the 111th Symposium of the International Astronomical Union, held at Villa Olmo, Como, Italy, 24-29 May 1984

High, F. W., Stubbs, C. W., Rest, A., Stalder, B., & Challis, P. 2009, AJ, 138, 110

Holberg, J. B., & Bergeron, P. 2006, AJ, 132, 1221

Holberg, J. B., Bergeron, P., & Gianninas, A. 2008, AJ, 135, 1239

Huterer, D., Cunha, C. E., & Fang, W. 2013, MNRAS

Ivezić, Ž., et al. 2004, Astronomische Nachrichten, 325, 583

—. 2007, AJ, 134, 973

Jordi, C., et al. 2010, A&A, 523, A48

Krabbendam, V. 2012, LSST Docushare LTS-66

Kurucz, R. L. 1993, VizieR Online Data Catalog, 6039, 0

MacDonald, E. C., et al. 2004, MNRAS, 352, 1255

Marshall, J. L., et al. 2013, ArXiv e-prints

McIntosh, P. C. 1990, J. Geophys. Res., 95, 13529

Nordsby, M. 2013, LSST Docushare LCA-18

Nugent, P., Kim, A., & Perlmutter, S. 2002, PASP, 114, 803

Padmanabhan, N., et al. 2008, ApJ, 674, 1217

Radomski, J., Trancho, G., Fuhrman, L., Falvey, M., Gigoux, P., Montes, V., Daruich, F., & Lazo, M. 2010, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 7737

Regnault, N., et al. 2009, A&A, 506, 999

Schlafly, E. F., Finkbeiner, D. P., Schlegel, D. J., Jurić, M., Ivezić, Ž., Gibson, R. R., Knapp, G. R., & Weaver, B. A. 2010, ApJ, 725, 1175

Schlafly, E. F., et al. 2012, ApJ, 756, 158

Sebag, J., & Krabbendam, V. 2013, LSST Docushare LSE-60

Stubbs, C. W., Doherty, P., Cramer, C., Narayan, G., Brown, Y. J., Lykke, K. R., Woodward, J. T., & Tonry, J. L. 2010, ApJS, 191, 376

Stubbs, C. W., & Tonry, J. L. 2006, ApJ, 646, 1436

—. 2012, ArXiv e-prints

Stubbs, C. W., et al. 2007a, in Astronomical Society of the Pacific Conference Series, Vol. 364, The Future of Photometric, Spectrophotometric and Polarimetric Standardization, ed. C. Sterken, 373

Stubbs, C. W., et al. 2007b, PASP, 119, 1163

Tonry, J. L., et al. 2012, ApJ, 750, 99

Vanden Berk, D. E., et al. 2001, AJ, 122, 549

Veselovskii, I., Whiteman, D. N., Korenskiy, M., Kolgotin, A., Dubovik, O., & Perez-Ramirez, D. 2013, Atmospheric Measurement Techniques Discussions, 6, 3059

Wittman, D., Ryan, R., & Thorman, P. 2012, MNRAS, 421, 2251

Yoachim, P., Jones, L., Ivezic, Z., & Axelrod, T. 2013, LSST Docushare TBD

This preprint was prepared with the AAS LaTeX macros v5.2.

A. Filter Set

Figure 47 illustrates the baseline LSST filter bandpasses, including a `standard' atmosphere, and baseline estimates for the mirrors, lenses, filter and detector transmission and sensitivity functions.
B. Photometric measurements for non-main sequence stars

LSST will record a series of $m_b^{\mathrm{nat}}$ measurements for each astronomical object in each visit. These $m_b^{\mathrm{nat}}$ measurements are generated directly from the counts recorded in each image, corrected with the photometrically uniform, synthetic broad-band flat field and for gray (cloud) atmospheric extinction effects. However, unless the object SED is flat, $m_b^{\mathrm{nat}}$ measurements will vary as the shape of the bandpass changes, whether as a function of position in the focal plane or as a function of changes in atmospheric absorption components. Correcting for these effects requires assuming a particular SED for each source, and produces $m_b^{\mathrm{std}}$ values after applying $\Delta m_b^{\mathrm{meas}}$ offsets (see the overview of calibration in Section 5 for a review).

To generate precision photometry for objects with arbitrary SEDs, LSST will provide a record of $\phi_b^{\mathrm{meas}}(\lambda, alt, az, x, y, t)$ as well as the zeropoint offsets, $Z_b^{\mathrm{obs}}$, for each observation, as discussed in Section 10. With these data products, users can generate the appropriate $\Delta m_b^{\mathrm{meas}}$ corrections and $m_b^{\mathrm{std}}$ for their chosen object SED. Section 4.8 outlines the typical magnitudes of these corrections for main sequence stars; $\Delta m_b^{\mathrm{meas}}$ can easily be on the order of 20 mmag for $gri$, or even 100 mmag in $u$ band. For more extreme SEDs, these corrections may be even larger.
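For users who wish to compute these corrections themselves, the operation amounts to synthetic photometry of the chosen SED through the as-observed and standard normalized bandpasses. The sketch below illustrates this under the assumptions that both $\phi_b$ curves are tabulated on a common wavelength grid and already normalized, that the SED is supplied as $f_\nu(\lambda)$, and that the sign convention is $m_b^{\mathrm{std}} = m_b^{\mathrm{nat}} + \Delta m_b^{\mathrm{meas}}$; it is a minimal illustration, not LSST-provided software.

import numpy as np

def _trapz(y, x):
    # Simple trapezoidal integration.
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def delta_m_meas(wavelen_nm, f_nu, phi_b_meas, phi_b_std):
    # Delta m_b^meas for a given SED: the offset taking a natural magnitude
    # measured through phi_b^meas to the standard bandpass phi_b^std.
    # All arrays share one wavelength grid; both phi_b curves are assumed
    # normalized to unit integral.
    flux_meas = _trapz(f_nu * phi_b_meas, wavelen_nm)
    flux_std = _trapz(f_nu * phi_b_std, wavelen_nm)
    # Sign convention assumed here: m_std = m_nat + delta_m.
    return -2.5 * np.log10(flux_std / flux_meas)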
Figure 49 illustrates the likely magnitude of these $\Delta m_b^{\mathrm{meas}}$ corrections for a wide variety of SEDs. In each plot, the main sequence stars are shown as in the figures in the main paper (small dots, color-coded by metallicity), although given the increased scale here they only appear as a purple series of circles. M dwarfs are now included, generally mimicking the behavior of the main sequence stars but extending further into the red. More unusual SEDs are also included: a quasar SED, based on a composite of many empirical quasars from SDSS from Vanden Berk et al. (2001), extended to the full LSST wavelength range through the addition of power-law flux above and below the original range ($f_\nu \propto 1/\lambda^{0.5}$ for $\lambda < 89$ nm and $f_\nu \propto 1/\lambda^{1.5}$ for $\lambda > 800$ nm) and redshifted from $z = 0$ to $z = 3$; and a sample of SN Ia from templates generated by Peter Nugent (Nugent et al. 2002), redshifted from $z = 0$ to $z = 1$.

The figure shows the $\Delta m_b^{\mathrm{meas}}$ values that would be expected under a maximum change of atmospheric parameters and under a likely bandpass shift. This demonstrates how much the reported $m_b^{\mathrm{nat}}$ values could vary for each object. If LSST were to just calculate an offset between $m_b^{\mathrm{nat}}$ and $m_b^{\mathrm{std}}$ based on an object's color (and assuming that the object had an SED similar to a main sequence star), the resulting $m_b^{\mathrm{std}}$ values would be incorrect by the offset between the true $\Delta m_b^{\mathrm{meas}}$ for the SED and the main sequence $\Delta m_b^{\mathrm{meas}}$ values at each color; this could easily be more than 20 mmag.
C. Fiducial Self-Calibration Input

The input file used with simSelfCalib.py to generate the simulated LSST catalog presented in §9.1:
# the number of total stars in the sim
nStarTot = 2000000
# the magnitude range
magmin = 17
magmax = 21
# the color (g-i) range of the sim stars
colmax = 3.5
colmin = -0.8
# A value for the random seed, if desired.
random_seed = 42
# Distribution to use for mag_rand_err. Options: Gaussian, Cauchy
errorDist = Gaussian
# Parameters for using flux standard stars.
# Flux standards are all placed on patchid 0.
# number of stars per block to use.
# values < 1 make a fraction of the stars standards,
# values > 1 are the number of stars per block.
fluxFrac = 0
# Gaussian noise to add to the flux standard mags.
fluxNoise = 0.0001
# the next few values are about the zeropoint and color terms
# the random error added to the stars, distribution set by errorDist.
# This is an error floor.
mag_rand_err = 0.003
# calculate what LSST will think the error is based on observed mag (True) or output a

# more realistic error (False)
calcerror = False
# Use Opsim transparency for the gray zeropoints
zp_opsim = True
# the maximum zeropoint change added to the patches
zp_var_max = 1.0
# the maximum zeropoint gradient added to the patches
zp_grad_max = 0.0
# the maximum color term change in mag added to stars in a patch
colterm_max = 0.005
# the range between -1 < (g-i) < 1 in the color term due to filter bandpass shift.
colterm_rad_max = -0.040
# the error in the observed color -- 0 or negative values will result
# in no color gradient correction
color_obs_noise = 0.05
# the fractional error in the color correction term ()
color_correction_error = 0.10
# fraction of the FoV that the filter can jitter:
filter_jitter = 0.0055
# fractional variation in the gain (Gaussian RMS)
gainvar = 0.00
# fractional error in the exposure time (e.g., from shutter errors)
exptvar = 0.001
# number of HEALpixel sides to use. Must be power of 2.
nside = 16
# Use cloud images. Each visit has a cloud image generated on a sparse
# grid then interpolated to the star positions.
use_cloudsimage = True
# the magnitude of the sinusoidal variation
# in the zero point in the X and Y-directions. Note, these
# interfere with each other so the total error
# will be \pm (sinvarx_mag+sinvary_mag)
sinvarx_mag = 0.00
sinvary_mag = 0.00
# the spatial scale for the sinusoidal variation
# value of 1 will make the FOV run from -pi to pi, 2--> -2pi to 2pi
sinvarx_scale = 1.
sinvary_scale = 1.

sinx_phase = 0.0
siny_phase = 0.0
# angle between the x and y variation axes. This is held constant.
sinx_angle = 20.0
# a second set of spatially varying sinusoidal errors.
# Here, the phase is randomly set for each night to simulate flat fielding errors
flat_sinvarx_mag = 0.00
flat_sinvary_mag = 0.00
flat_sinvarx_scale = 1.
flat_sinvary_scale = 1.
# angle between x and y axes. Held constant, but the phase is
# varied from night-to-night
flat_sinx_angle = 0.0
# how many nights to keep the phase constant
phase_persist = 1
# illumination correction file jdsu_utest.dat
# illumcorr_filename = None
#
# illumination correction file jdsu_r.dat ricbb.dat
#
sed_illumcorr_filename = jdsu_r.dat
bb_illumcorr_filename = ricbb.dat
# what fraction of the illumination error to use
illum_err_factor = 0.1
# focal plane temperature model file
# fpModel1_filename = fpTemp1.dat
# fpModel2_filename = fpTemp2.dat
# spatial variation from cloud removal
# fraction of the total zeropoint shift to put in as the cloud magnitude
cloud_mag = 0.00
cloud_sinvar_scale = 5.
# Variable star contamination
rr_fraction = 0
rr_amplitude = 1.
# Use Kepler stellar variability statistics?
kepler_variablity = False
# limits for sim footprint
# the ra/dec limits, in degrees

raMin = 0
raMax = 360.
decMin = -90.
decMax = 0.
# you can use opsim (True or False) to generate fields
# use opsim database (opsim3.61)
use_opsim = True
# flag to use the opsim dither scheme, instead of random dither.
# note that opsim dither scheme covers the entire
# radius_fov (i.e. dith_Offset_frac = 1)
use_opsimdither = True
# opsim filter, if using this
opsimfilter = r
# then specify time to start/stop, in nights from start of opsim
tstart = 0
tstop = 730
# or don't use opsim and specify nEpoch instead
# nepoch = number of visits to each field
nEpoch = 10
# you can use calsim database to generate the stars.
# Available calsimtables include: "msrgb", "msrgb_1e6", and "msrgb_1e7"
use_calsim = True
calsimtable = msrgb_1e6
# you can also change the radius of the field of view
radius_fov = 1.8
# and the number of patches to split this radius into. should be 'square' (nPatch=N^2)
nPatch = 25
# raOff and decOff control dithering
dith_raOff_frac = 0.5
dith_decOff_frac = 0.5
# Limits of camera rotation, in degrees (has a dithering effect)
# (LSST standard = -90 to 90)
dith_skyRot_min = -90
dith_skyRot_max = 90
# filenames for outputs
# starobs is the file input to the solver
starobs_filename = star_obs.dat
# should starobs contain the subpatch information?

# These can then be solved for illumination corrections.
print_subpatch = True
# number of radius bins to use for the illumination correction
nRadiusBins = 5
# number of g-i color bins to use for the illumination correction
nColorBins = 1
# output each ra/dec block to its own file (only use for very large sims)
multifile = False
# master file contains everything relevant to every measurement of every star
master_filename = /dev/null
# stardata is the basic star magnitudes, positions and colors
stardata_filename = stardata.dat
# star data for re-assigning patches
ras_filename = /dev/null
# visitdata gives the information on the visits used in the simulation
visit_filename = visit.dat
# patchdata gives the information on the patches in the simulation
patch_filename = patchdata.dat
D. Glossary

• Level 1 Data Product. A data product, such as a measurement of an astronomical object's position or flux in a single image, that is computed on a nightly basis. Level 1 data products primarily consist of alerts on transient, variable, and moving objects. The photometric calibration process outlined in this paper does not apply to Level 1 data products. Level 1 data products will be calibrated using all applicable prior knowledge (including secondary standard catalogs generated from previous Data Release calibration of all LSST-observed stars in the field).

• Level 2 Data Product. A data product, such as a measurement of an astronomical object's position or flux in either a single image or a series of images, that is computed on the Data Release schedule, on a six-month or yearly cadence. Level 2 data products leverage all previous observations of the same object, as well as all knowledge of the LSST system accumulated to that point. The photometric calibration process outlined in this paper is used to generate Level 2 data products.
• Normalized system response, $\phi_b(\lambda)$. The normalized system response describes the shape of the bandpass transmission curve, separating this from the normalization of the throughput curve, which can be determined separately. $\phi_b(\lambda)$ is described by Equation 5. The integral of $\phi_b(\lambda)$ is always 1.
• Camera Calibration Optical Bench (CCOB). The CCOB is an apparatus to calibrate the spatial and wavelength-dependent response of the focal plane (detector + camera). The CCOB uses a well controlled, wavelength-variable light source (such as a tunable laser), calibrated using a NIST photodiode, to illuminate the focal plane when the camera is unmounted from the telescope. This light source, which produces a spot in the focal plane approximately the size of or smaller than the PSF, will be scanned across the detector $(x, y)$ at a variety of beam incident angles $(\theta, \phi)$ and at a variety of wavelengths $(\lambda)$. This allows the response of the detector to be measured in the presence of a well-understood light source. The response of the detector can be measured in two different configurations: one with only the detector and the dewar window, which doubles as lens 3 (L3), and one with the detector, L3, L2, L1, a small test section of filter, and the camera shutter. The filter test section used is not the full LSST filter, and thus will not capture spatial non-uniformities in the filter bandpass. The CCOB provides test data about the camera assembly for camera acceptance and will help constrain the optical ZEMAX model, although without a full filter it cannot capture the full set of parameters required for the ZEMAX model. More details about the requirements and physical apparatus of the CCOB are available in LSST-10015 and LSST-8217.
• Broadband flat field. An image obtained by observing a light source which generates photons with a wide range of wavelengths, with relatively uniform illumination across the field of view. Night sky flats, twilight flats, and white-light or broadband dome screen flats would all generate broadband flat fields.

• Narrowband flat field. An image obtained by observing a light source which generates photons with a very narrow range of wavelengths, with relatively uniform illumination across the field of view. A dome screen illuminated with a narrow-band laser light source will generate a narrow-band flat field.

• `Synthetic' flat field. A flat field constructed from a datacube of narrowband flats. This flat is used in image reduction, and has had a number of corrections applied to ensure that the resulting photometry is uniform as long as the objects possess the same SED as used in constructing the synthetic flat.
• Illumination Correction. The ratio between an ideal photometric flat, obtained by a raster scan across the field of a collimated light source at infinity, and the observed flat field:

$$\mathrm{Flat}_{\mathrm{photometric}} = \mathrm{Flat}_{\mathrm{observed}} \times \mathrm{Illumination\ Correction} \eqno(D1)$$
• Natural magnitude. A magnitude measurement which relates directly to the number of counts measured in an image (after including a photometric flat field correction and a rough zeropoint for the entire image). The natural magnitude relates to an ADU count that does not account for the color or SED of the source being observed, and thus does not include any wavelength-dependent corrections. For a non-variable source observed under variable atmospheric transmission conditions and/or at varying locations in the field of view, the natural magnitude reported will change due to changes in the bandpass shape. The natural magnitude is equivalent to an observed magnitude, after the appropriate zeropoints have been applied.
• Standard magnitude. A magnitude measurement which includes not only corrections for the photometric flat field and a rough zeropoint for the image, but also a correction for wavelength-dependent effects. This means the $\Delta m_b^{\mathrm{meas}}$ appropriate to correct the natural magnitude of the object from the observed bandpass shape, $\phi_b^{\mathrm{meas}}(\lambda, t)$, to the standard bandpass shape, $\phi_b^{\mathrm{std}}(\lambda)$, has been calculated for the SED of the object and applied. For a non-variable source, $m_b^{\mathrm{std}}$ will be constant over time even if the atmospheric absorption curve or the location in the field of view changes.
• Operations Simulation. The Operations Simulation is a simulated pointing history of LSST, covering the sky in the same manner as the telescope could in practice. It uses weather conditions based on historical records from Cerro Tololo, including appropriate seeing and sky brightness variations. The motion of the telescope is simulated in high fidelity, including acceleration from field to field and cable wrap. A variety of proposals are used to determine which fields to observe at each time; these proposals include the `universal cadence' (satisfying most of LSST's science requirements) and `deep drilling' (a limited set of fields, observed frequently and deeply over the lifetime of the survey).

• PWV. Precipitable Water Vapor. The total column depth of water vapor in the atmosphere, measured at zenith. The units are mm of liquid water equivalent.

Figure 1: The overall flow of the Data Release photometric calibration process. The process consists of three main steps. In the first, Flat SED Calibration, science image pixels are processed to instrumental magnitudes as if all calibration object SEDs were flat ($F_\nu = \mathrm{const}$). The second step, SED Correction, corrects the flat SED object counts to account for the real system bandpass and calibration object SEDs, generating corrected magnitudes. The final step, Self Calibration, solves a least squares system in the SED corrected magnitudes to yield the standard magnitudes for the calibration objects and a spatially dependent zeropoint correction for each exposure. Additionally, corrections to the illumination correction are calculated, for use in "Determine Illumination Correction" (Figure 2). Note that SED Correction and Self Calibration are performed only for calibration objects. General science objects are calibrated later, using the data products from the calibration process (Figure 4).

Figure 2: Synthetic Flat Determination. Determination of the synthetic flat is of central importance. The initial step in the calibration process treats all objects as if their SED were flat. This is consistent only if the flatfield is that for a flat SED, as transmitted through a reference atmosphere. Since the illumination source for the actual broadband flat does not have the required SED, and since the illumination correction is a function of wavelength, we cannot use it directly. Instead, we synthesize the required flat from the monochromatic flats. These flats as measured are contaminated by light which arrives from paths other than the direct path, such as ghosting. These effects are taken out by the illumination correction. Additionally, the flats must be corrected for the nonuniform pixel solid angle on the sky. Finally, the monochromatic flats are taken relatively infrequently, and will not reflect changes on shorter timescales, such as the appearance of new dust particles. We account for this by multiplying the synthetic flat by the ratio of two broadband flats, one at the current epoch and one at the reference epoch, when the monochromatic flats were gathered.

Figure 3: System Bandpass Determination. The system bandpass, $\phi$, is determined for every exposure, and is a major output of the calibration process. It is the normalized product of the atmospheric bandpass and that of the telescope/camera system. The atmospheric bandpass is determined by fitting an atmospheric model to a set of measured atmospheric data. The bandpass of the telescope/camera system is measured by the corrected monochromatic flats, modified by current data from the camera system, principally focal plane temperature and the filter position.

Figure 4: Science Object Calibration. Photometric calibration for science objects takes place in two stages. In the first, the natural magnitudes are calculated, using the zeropoint model for each exposure. This stage is implemented in both Level 1 (nightly) and Level 2 (data release) processing. These are stored in the Level 1 and Level 2 databases. The second, optional, stage relies on knowledge of the object SEDs, which is an external input to the system, supplied by the user. The object SED, in conjunction with the system bandpass, $\phi$, allows standard magnitudes to be calculated. Standard magnitudes can be calculated for Level 1 as well as Level 2 photometry, which enables better calibration of transient sources. $\phi$ will generally be available within 24 hours of an exposure being taken. The diagram simplifies the data flow for Level 2 considerably: measuring object properties with multifit is not shown.

Figure 5: Effect on natural magnitudes of Kurucz stars from changing the airmass from 1.0 to 2.1.

Figure 6: Effect on natural magnitudes of Kurucz stars from airmass variation across the field when the airmass at field center is 2.1.

Figure 7: Three years of PWV measured at CTIO (Hansen & Caimanque 1975), based on the depth of the 1.87 µm line relative to the Solar continuum at 1.65 µm.

Figure 8: PWV measured on a single day at Gemini S (Radomski et al. 2010). Two techniques were used, GPS and Phoenix, an IR spectrograph, and the data are in good agreement. Note the decline of approximately 4 mm over a period of two hours. This is a case where use of a single average atmosphere for a whole night would give poor results, especially in the y-band.

Figure 9: PWV measurement from the MODIS satellite at one time in the region around
Cerro Pachon. The color scale ranges from 4.3 mm (blue) to 7.6 mm (dark red). Note the
strong E-W gradient in PWV.

Figure 10: The effects of varying $\phi$ due to a change in PWV from 1 mm to 6 mm, about the range observed at Cerro Pachon. The SEDs are for Kurucz stars at varying temperatures and metallicities.

Figure 11: Varying aerosol optical depth at CASLEO, El Leoncito, Argentina. The site elevation of 2550 m is similar to that of CP.

Figure 12: Lidar measurements of aerosol extinction at 355 nm over a single night near Greenbelt, MD (Veselovskii et al. 2013). Cerro Pachon is, of course, a different environment, and is at an altitude of 2700 m, removing much of the variability. Even above that altitude, however, significant variation on rapid time scales remains.

Figure 13: Effect on natural magnitude of Kurucz stars from varying the aerosol optical depth from 0.04 mag to 0.16 mag.

Figure 14: Time variability of ozone column depth at Cerro Pachon from the TOMS satellite
instrument

Figure 15: Range of variability of ozone column depth at Cerro Pachon from the TOMS
satellite instrument

Figure 16: Effect on natural magnitude of Kurucz stars from varying the ozone column by 50 Dobson units.

Figure 17: $\Delta m_b^{\mathrm{obs}}$ due to variations in hardware and atmospheric bandpass shape. Two main sequence Kurucz model stars, one blue (35000 K, approximately O type) and one red (6000 K, approximately G type), were used to generate natural magnitudes (see Eqn 9) using three different atmospheric transmission profiles and two different hardware transmission profiles. The stellar flux profiles are shown in the top center panel, while the atmospheric transmission functions ($S^{\mathrm{atm}}(\lambda)$) are shown across the second row and the two hardware transmission profiles ($S_b^{\mathrm{sys}}(\lambda)$) are duplicated across the third row. The atmospheric transmission profiles correspond to airmass X=1.0, 1.2 and 1.8 (from left to right), with variable atmospheric absorption components. The X=1.0 atmosphere is very similar but not identical to the current LSST default X=1.2 atmosphere throughput curve, which is used as `standard' here. The hardware transmission profiles consist of a `standard' profile (matching the LSST current expected values) and a version where the filter throughputs have been shifted by 1% of the effective wavelength of each filter (consistent with the shift expected near the spatial edge of each filter). The final row demonstrates the changes in observed magnitudes produced by the X=1.0, `standard', and X=1.8 atmospheres (left to right, respectively), combined with both the `standard' hardware transmission (represented by the star points) and the +1% shifted hardware transmission (represented by the filled circles), for both the red and blue stars. The exact differences in magnitudes resulting from this calculation are listed in Table 2.

Table 2: $\Delta m_b^{\mathrm{obs}}$ due to variations in system and atmospheric bandpass shape (see also Fig 17). The first two rows show the baseline (`standard') magnitude of the star. All other rows show the change in magnitude (in mmag) due to the variations listed at left. Any value larger than 5 mmag would be larger than the RMS scatter allowed by the SRD. TODO color-code values larger than 5 mmag.

Bandpass                        star    u (mag)     g        r        i        z        y
Std (X=1.2) atm, std sys        red     21.472    20.378   20.000   19.911   19.913   19.913
Std (X=1.2) atm, std sys        blue    19.102    19.503   20.000   20.378   20.672   20.886

Bandpass                        star    Δu (mmag)   Δg       Δr       Δi       Δz       Δy
Std (X=1.2), +1% sys shift      red      -31        -22      -8       -2        1        1
Std (X=1.2), +1% sys shift      blue       9         17      20       20       16       16
X=1.0, std sys                  red        7          2       0        0       -0       -1
X=1.0, std sys                  blue      -3         -1      -1       -0        1       -4
X=1.0, +1% sys shift            red      -24        -20      -8       -1        1        0
X=1.0, +1% sys shift            blue       7         16      19       20       18       12
X=1.8, std sys                  red      -21        -10      -2       -0        0        1
X=1.8, std sys                  blue       8          8       4        2       -1        6
X=1.8, +1% sys shift            red      -50        -30     -10       -2        1        2
X=1.8, +1% sys shift            blue      16         24      24       22       15       22

Figure 18: $\Delta m_b^{\mathrm{obs}}$ due to a change in bandpass shape corresponding to a filter shift of 1% and an X = 1.8 atmosphere. 850 Kurucz models with temperatures between 5000 K and 35000 K and metallicity indexes between -5.0 and 1.0 (solar) were combined with a standard system response (standard atmosphere and standard hardware bandpasses), then with a total system response where the atmosphere was replaced by an X=1.8 atmosphere and the filter component of the hardware transmission was shifted by 1% (as in Fig 17). The points in each plot are color-coded by metallicity, in steps of 1 dex between -5.0 (blue) and 1.0 (magenta). It can be seen that the relationship between $\Delta m_b^{\mathrm{obs}}$ and $g-i$ can be parameterized, although generally not with a simple linear relationship. In some cases (such as seen in the $\Delta u$ and $\Delta g$ panels), calculating $\Delta m_b^{\mathrm{obs}}$ to SRD levels may require more than a simple $g-i$ color, but this is then primarily a function of metallicity (which is possible to determine given the $u-g$ color in addition to the $g-i$ information).

Figure 19: FRED calculation of a flat from a monochromatically illuminated screen. The u filter is in place, and the illuminating wavelength is 360 nm, near the red limit of the filter. (a) direct illumination, (b) total illumination, (c) ghost illumination, (d) screen illumination. The illumination correction is the ratio direct/total.

Figure 20: The effects of the illumination correction, based on Andy Rasmussen's code, on the natural magnitudes of Kurucz stars of varying temperatures. All 6 LSST passbands are shown.

Figure 21: Recovery of input broadband illumination correction by self calibration. The input
illumination correction was wavelength independent, but strongly dependent on radial position, as
shown. The recovered illumination correction is essentially identical to that input.

Figure 22: Pattern of observing atmospheric probe stars, from Burke et al. (2010). The solid lines trace the temporal order of the observations.

Figure 23: Typical observed spectrum (black) and fit (red), from Burke et al. (2010).

Figure 24: Non-gray atmospheric extinction, from Burke et al. (2010). Red is r-band, cyan is i-band, magenta is z-band, black is y-band. The excess scatter in the z- and y-bands is due to variability of PWV.

Figure 25: Fitted water vapor coefficient from Burke et al. (2010), expressed as a ratio to the standard value (filled circles). Crosses are relative humidity from CTIO (changed scale).

(a) Stubbs et al. (2010)  (b) Eppeldauer et al. (2009)

Figure 26: Quantum efficiency curve and fractional error for a NIST-calibrated photodiode, from Stubbs et al. (2010) and Eppeldauer et al. (2009). Panel 26a: Between 400 and 900 nm, calibration methods already in use in test systems indicate photodiode accuracy is better than 0.1%, as in the bottom part of this panel. The sudden decrease in calibration accuracy beyond 900 nm is due to the calibration methods used by NIST in 2005. Panel 26b: More recent photodiode calibration efforts by Eppeldauer et al. (2009) show better than 0.1% accuracy can be achieved to beyond 1200 nm, the limit of detector response for LSST, as shown here in the response curves resulting from multiple scans of a single source using the same photodiode.

Figure 27: Baseline filter curves and a potential (1% of the central wavelength) shift due to nonuniformity in the filter bandpass. The solid lines indicate standard filter bandpasses (top panel: filter alone; bottom panel: filter plus standard mirror, lens, detector and atmosphere response curves), while the dashed lines indicate the same bandpass shifted redward by 1% of the central wavelength.

Figure 28: Effect of 0.5 K variation in focal plane temperature on natural mags.

Figure 29: $\Delta m_b^{\mathrm{obs}}$ due to a hardware response curve shift of 1% of the central wavelength of each bandpass. 850 main sequence star Kurucz models with temperatures between 5000 K and 35000 K and metallicity indexes between -5.0 and 1.0 (solar) were combined with a standard atmosphere and standard hardware bandpass, and then with a total system response where the atmosphere remained constant but the hardware response was shifted by 1% of the central wavelength of each bandpass (as in Fig 27). The points in each plot are color-coded by metallicity, in steps of 1 dex between -5.0 (blue) and 1.0 (magenta). The resulting changes in observed natural magnitudes are typically on the order of 20 mmag, except in u band where the shift can create a $\Delta u$ of closer to 80 mmag for certain temperatures of main sequence stars. By measuring the bandpass shape as a function of radius and the colors of the main sequence stars, we can remove these effects.

Figure 30: Components of atmospheric absorption. The wavelength dependence of various atmospheric absorption components at zenith (Panel 30a) and at airmass=2.0 (Panel 30b) is shown here. The H$_2$O (blue) and O$_3$ (red) molecular absorption contributions are shown separately, while the O$_2$ absorption is combined with other trace elements (magenta). A typical example of aerosol scattering (Mie scattering) is included (yellow), as is molecular scattering (Rayleigh scattering) (green). All components except aerosol scattering were generated using MODTRAN4 with the US Standard option (aerosol scattering is not part of the US Standard atmosphere). The resulting total absorption curve is the product of each of these effects and is shown with the dotted black line. This is an illustrative atmosphere; under actual observing conditions the molecular absorption components will vary in strength with time and the square root of the airmass, the molecular and aerosol scattering will depend on airmass, and the aerosol scattering profile will also vary with time.

Figure 31: Example of an atmosphere generated from a typical mix of atmospheric components. The bottom panel shows the MODTRAN absorption templates at this airmass used in generating the final atmosphere (the $A_{\mathrm{rayleigh}/\mathrm{O}_2/\mathrm{O}_3/\mathrm{H}_2\mathrm{O}}$ and $A_{\mathrm{aerosol}} = 1 - e^{-\tau_{\mathrm{aerosol}}}$ terms from Equation 29). The top panel shows the final combined atmospheric transmission curve in black, as well as a `standardized' atmospheric transmission curve in red. This demonstrates that (even without using the full MODTRAN software, just the transmission templates) we can closely recreate any atmosphere desired, with any composition.

Figure 32: $\Delta m_b^{\mathrm{obs}}$ due to variations of each individual absorption component. Each atmospheric transmission curve (at X=1.2) was combined with the set of main sequence Kurucz curves to determine the resulting changes in observed magnitudes, as in Figure 29. Panels 32a and 32b show the effects of varying aerosol absorption in $\tau_0$ and $\alpha$ respectively, and Panel 32c shows the effect of varying O$_3$ absorption. These effects are concentrated in the u and g bands, with a negligible effect in izy. Panel 32d shows the effect of varying the H$_2$O absorption, which is strongest in y, with some effect in z and no effect in ugri.

Figure 33: `Extreme' atmospheres generated from MODTRAN profiles and extremes of atmospheric coefficients. Using the extremes of $C_{\mathrm{H_2O}}$, $C_{\mathrm{O_3}}$, and $\tau_0$ and $\alpha$ from Burke et al. (2010), two test atmospheres with X = 1.2 were created using Equation 29.

Figure 34: $\Delta m_b^{\mathrm{obs}}$ due to `extreme' variations of atmospheric transmission. Two atmospheric transmission curves were created using Equation 29 and the widest variations of atmospheric extinction coefficients from Burke et al. (2010). The wavelength profile of these atmospheres is shown in Figure 33. These atmospheric transmission curves were combined with the baseline LSST hardware transmission curves and used to generate magnitudes for 850 Kurucz models with temperatures between 5000 K and 35000 K and metallicities between -5.0 and 1.0 (solar). The resulting differences in natural magnitudes between the two extremes of the atmospheric transmission in each filter are shown above.

Figure 35: $\Delta m_b^{\mathrm{obs}}$ due to an error of 10% of the expected aerosol.

Figure 36: $\Delta m_b^{\mathrm{obs}}$ due to an error of 10% of the expected ozone.

Figure 37: $\Delta m_b^{\mathrm{obs}}$ due to 10% variations of atmospheric transmission in O$_3$ and aerosol, with 30% variation of H$_2$O. This is similar to Figure 34, except $C_{\mathrm{O_3}}$, $\tau_0$ and $\alpha$ were only varied by 10% of the total range of values measured in Burke et al. (2010), and $C_{\mathrm{H_2O}}$ was varied by 30% of the total range.

Figure 38: Zeropoint error distributions from an r-band simulation with no color-dependent errors. The top plot shows the distribution of errors for calibration patches with varying numbers of stars. The bottom plot shows the behavior of the $\sigma$ of the Gaussian core as a function of the number of stars. The solid curve is the $\sigma_{\mathrm{sat}}$ function discussed in the text.

Figure 39: Distribution of cloud extinction from r-band Opsim run

Figure 40: Gauss-Markov predictions for zeropoint errors from clouds with average
extinction of 0.8 mag, characteristic scale 500 meters, and averaging length 300 meters.

Figure 41: Zeropoint errors vs zeropoint, from a self calibration simulation without
color-dependent errors. The color scale shows the log10 of the number of samples at
that point. The periodic behavior of the zeropoint density is an artifact of OpSim's
treatment of cloud cover.

Figure 42: Zeropoint errors vs zeropoint, from a self calibration simulation without color-dependent errors. The histogram shows a vertical cross-section of the error fan, with zeropoint in the range 0 < zp < 0.5 mag.

Figure 43: Zeropoint errors vs zeropoint, from a self calibration simulation without color-dependent errors. The histogram shows a vertical cross-section of the error fan, with zeropoint in the range 1.0 < zp < 1.5 mag.

Table 3: Repeatability error budget. All values are in mmag.

Affected term   Effect                                            u     g     r     i     z     y
m_b^obs         Total                                            3.0   3.0   3.0   3.0   3.0   3.0
Δm_b^obs        Atmospheric water vapor errors                   0.2   0     0     0     1.0   2.0
                Atmospheric aerosol and ozone errors             1.6   2.1   0.1   0     0     0
                Undetected atmospheric variability               1.4   0.7   0     0     0     0
                Monochromatic illumination correction errors     3.0   0.5   0.8   0.5   0.1   0.6
                Photodiode monitoring system errors              0.8   0.5   0     0     0     0.1
                Calibration star SED errors                      5.6   0.8   0.5   0.4   0.4   0.2
                Focal plane temperature errors                   0     0     0     0     0     0.2
                Filter positioning errors                        1.0   0.5   0.5   0.5   0.5   0.5
                Total                                            6.8   2.5   1.1   0.8   1.2   2.2
Z_b^obs         Saturated self calibration errors from clouds
                (90% of obs), atmosphere, and camera effects     2.0   2.0   2.0   2.0   2.0   2.0
Total Estimate                                                   7.7   4.4   3.7   3.8   3.8   4.2
Simulation Result (IQR Median)                                   8.0   4.9   5.0   5.0   4.4   6.1
Design Requirement                                               7.5   5.0   5.0   5.0   7.5   7.5
Min. Requirement                                                 12    8.0   8.0   8.0   12    12

Table 4: Simulated uniformity errors. All values are in mmag.

                           u     g     r     i     z     y
Simulation Result (RMS)   8.0   2.0   1.6   1.8   2.3   TBD
Design Requirement        20    10    10    10    10    10
Min. Requirement          30    15    15    15    15    15
Figure 44: Pattern of focal plane radius across sky

Figure 45: Pattern of cloud zeropoints across sky

Figure 46: $\Delta m_b^{\mathrm{obs}}$ due to systematic errors in photodiode response. Panel 46a shows the randomly chosen input error curve for the photodiode. Panel 46b shows the effect on Kurucz SEDs.
Figure 47: The baseline LSST filter set (total throughput, 0-1, versus wavelength, 300-1100 nm, for the ugrizy bandpasses including an airmass 1.2 atmosphere).

Figure 48: Effect on natural mags of SN Ia of variation of aerosol optical depth from 0.04 to 0.16 mag.

Figure 49: $\Delta m_b^{\mathrm{obs}}$ due to changes in a hardware bandpass shift and a maximum change in atmospheric absorption components. This plot is similar in nature to a combination of Figures 29 and 34, but has been extended to include a wider variety of object SEDs. Main sequence stars are shown as the sequence of purple dots, and M dwarfs are shown as the sequence of blue 'x's. The large round circles represent a quasar SED at various redshifts, color-coded with redshift as follows: 0 < z < 1 is blue, 1 < z < 2 is green, and 2 < z < 3 is red. The large filled squares show the change in natural magnitudes for SN Ia templates at times of 0, 20, and 40 days from peak; 0 < z < 0.36 are blue squares, 0.36 < z < 0.72 are green squares, and 0.72 < z < 1 SN Ia are red squares.

Figure 50: The auxiliary telescope is sited adjacent to LSST

Figure 51: The auxiliary telescope

Figure 52: The flat field dome screen in front of the telescope.

Figure 53: Results from a self-calibration simulation. Panel 53a shows the number of visits across the sky for the first two years of the survey in r-band. The color range has been truncated since the deep-drilling fields can have over 1000 visits. Conditions for this simulation are described in the text. Panel 53b shows the true minus best-fit stellar magnitude residuals across the sky after iterating the self-calibration solver. Panel 53c shows similar information, but as a histogram of all stars, demonstrating the `uniformity' requirement in the SRD. Panel 53d shows a histogram of the RMS of the difference between the calibrated and true magnitudes for bright stars, demonstrating the `repeatability' requirement in the SRD.

Figure 54: HEALpixel maps for 8 and 16 sides, resulting in 400 and 1568 individual pixels
in the Southern hemisphere respectively.

Figure 55: Comparison of a simulation solved simultaneously (left) with one solved on individual HEALpixels and combined. Statistically, the differences between the two solutions are at the sub-millimag level, which is much less than the differences we see between runs with different starting seeds (Yoachim et al. 2013).

Figure 56: Errors in atmospheric transmission functions determined over 7 nights with 80 spectra per night. The x axis is the spectrograph resolution. Improvement of resolution beyond 400 does not improve the accuracy. This may be linked to the similar resolution of the Kurucz spectra used for the stars.

Figure 57: Data Release Production
