LSST Active Optics System Software Architecture

Sandrine J. Thomas, Paul Lotz, Srinivasan Chandrasekharan, Bo Xin, Charles Claver, George Angeli, Jacques Sebag, and Gregory P. Dubois-Felsmann

LSST, 950 N. Cherry Av, Tucson, AZ, USA
Caltech, 1200 E California Blvd, Pasadena, CA 91125, USA
The Large Synoptic Survey Telescope (LSST) is an 8-meter-class wide-field telescope now under construction on Cerro Pachón, near La Serena, Chile. This ground-based telescope is designed to conduct a decade-long time-domain survey of the optical sky. To achieve the LSST scientific goals, the telescope must deliver seeing-limited image quality over its 3.5-degree field of view. Like many telescopes, LSST will use an Active Optics System (AOS) to correct in near real time the system aberrations introduced primarily by gravity and temperature gradients. The LSST AOS uses a combination of 4 curvature wavefront sensors (CWS) located on the outside of the LSST field of view. The information coming from the 4 CWS is combined to calculate the appropriate corrections to be sent to the 3 different mirrors composing LSST. The AOS software incorporates a wavefront estimation pipeline (WEP) and an active optics control system (AOCS). The WEP estimates the residual wavefront error from the CWS images. The AOCS determines the corrections to be sent to the different degrees of freedom every 30 seconds. In this paper, we describe the design and implementation of the AOS, focusing in particular on the software architecture as well as the AOS interactions with the various subsystems within LSST.
Keywords: Large Synoptic Survey Telescope, Active Optics, Wide Field of View, Curvature Sensing
The Large Synoptic Survey Telescope (LSST) is an 8.4-meter-diameter telescope now under construction on Cerro Pachón in Chile. The telescope is composed of three aspheric mirrors: the 8.4m primary mirror (M1), a 3.4m secondary mirror (M2), and a 5m tertiary mirror (M3). The primary and tertiary mirrors form a single monolithic mirror called the M1M3. LSST has one instrument, a three-lens camera that directs the light path onto a 3.2-gigapixel focal plane, with a field of view (FOV) of 3.5 degrees. The camera has 6 different filters: u, g, r, i, z, y. The overall system image quality budget for the LSST is 0.4 arcsec FWHM, with 0.25 arcsec allocated to the telescope and 0.3 arcsec to the camera. LSST is a seeing-limited telescope, and thus these figures do not include errors coming from the atmosphere.
In order to optimize the image quality across the 3.5-degree FOV of the camera, LSST relies on an Active Optics System (AOS). Both M1M3 and M2 are equipped with actuators that allow the AOS to control the shapes of the mirror surfaces and their positions. In addition, M2 and the camera are on hexapods, allowing further positioning adjustment.
The wavefront control is performed by determining a base set of control parameters based on both finite element analysis (FEA) and on-sky open-loop measurements, as well as by adding low-temporal-frequency closed-loop corrections determined from real-time wavefront measurements. The open-loop model may take the form of equations or tables; in this paper, we also refer to it as the Look-up Table (LUT). The LUT provides near-optimum values for all actuator forces and hexapod positions and depends primarily on elevation angle and temperature. Although constructed using FEA models, it will be verified extensively during commissioning.
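Conceptually, the LUT lookup is an interpolation over these dependencies. Below is a minimal sketch, assuming a one-dimensional elevation dependence only; the grid values are invented for illustration, and the real table also depends on temperature and covers every actuator force and hexapod position.

```python
import bisect

# Hypothetical open-loop table for a single M1M3 actuator: force (N) versus
# telescope elevation angle (degrees). All values here are invented.
ELEVATION_DEG = [0.0, 30.0, 60.0, 90.0]
FORCE_N = [120.0, 150.0, 185.0, 200.0]

def lut_force(elevation_deg):
    """Linearly interpolate the open-loop actuator force at a given elevation."""
    if not ELEVATION_DEG[0] <= elevation_deg <= ELEVATION_DEG[-1]:
        raise ValueError("elevation outside table range")
    i = min(bisect.bisect_right(ELEVATION_DEG, elevation_deg) - 1,
            len(ELEVATION_DEG) - 2)  # clamp so the top edge uses the last segment
    x0, x1 = ELEVATION_DEG[i], ELEVATION_DEG[i + 1]
    y0, y1 = FORCE_N[i], FORCE_N[i + 1]
    return y0 + (y1 - y0) * (elevation_deg - x0) / (x1 - x0)
```

A real implementation would interpolate over a multi-dimensional grid (elevation, temperature) per actuator, but the access pattern is the same.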
    Further author information: (Send correspondence to S.J.T.)
    S.J.T.: E-mail:, Telephone: 1 520 318 8227

The corrections given by the LUT alone, however, do not allow the system to meet the LSST image quality requirements, due to non-repeatable and/or unpredictable effects. These effects include temperature errors, wind loads, and hysteresis. Accordingly, LSST plans to use real-time wavefront sensor measurements to compensate for these errors and achieve the required image quality. The wavefront sensor selected is a curvature sensor, since the wide field and the fast beam make the use of the more popular Shack-Hartmann wavefront sensor problematic. Moreover, curvature sensors can be used in the focal plane, while a Shack-Hartmann sensor is used in the pupil plane; the curvature sensor option therefore avoids non-common-path errors. More details on the origin and impact of the different environmental perturbations on the LSST image quality are given in a previous paper.
In addition to on-sky wavefront correction, the active optics will be used to conduct alignment verification in coordination with a laser tracker at the beginning of every night, or during the night if needed.
This paper describes the software architecture of the active optics system. Section 2 presents the different use cases of the AOS. Section 3 describes the technical aspects of the wavefront sensor, both mechanically and in software. Sections 4 and 5 detail the steps and functions of the components of the AOS.
2.1 Normal Operation and Engineering Operation
Normal operation is defined as routine nighttime observations with no human intervention. The AOS input is 4 individual pairs of intra- and extra-focal images corresponding to the 4 different wavefront sensors. The images come directly from the Data Acquisition System (DAQ) in the Camera subsystem. In order to maintain the required image quality, the AOS output consists of the bending modes sent to the mirror support systems of M1M3 and M2 to control the mirror shapes, and of the positions sent to the M2 and camera hexapods. The observing cadence of the LSST telescope is relatively fast, since the telescope pointing changes every 39 seconds. These 39 seconds decompose into two 16 s visits (1 s of which allows the shutter to open and close), a 2 s readout, and a 5 s slew. The current baseline plan uses the data from the first visit only and performs the calculation during the second visit.
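The cadence arithmetic above can be written out explicitly; the interpretation of the compute budget as roughly one visit is ours, following the baseline of computing during the second visit.

```python
# Arithmetic of the 39 s observing cadence described above (seconds).
VISIT = 16.0     # each of the two visits, including 1 s of shutter motion
READOUT = 2.0
SLEW = 5.0
CADENCE = 2 * VISIT + READOUT + SLEW   # 39.0 s per pointing

# Baseline: wavefront data come from the first visit, and the AOS performs
# its calculation during the second visit, giving roughly one visit of
# compute time per pointing.
AOS_COMPUTE_BUDGET = VISIT
```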
During normal operation, there will also be an engineering or manual mode. This mode will be used in open loop, meaning that after an image is acquired, the AOS pipeline does not automatically run through and send the corrections to the mirrors, but instead allows interaction between the AOS and the user. This submode will be used during commissioning or for corrective maintenance during operations. For instance, when the AOS is operated manually, it will enable local hardware control with a personal computer via the facility control network. Visualization of images and parameters will also be available.
2.2 Full Focal Plane Array Wavefront Sensing
The Full Focal Plane Array (FFPA) mode will be used to build and optimize the LUT during commissioning and on occasion during the day as part of calibration. The science focal plane array (FPA) consists of 189 science detectors. In this FFPA mode, the AOS obtains the data from the Data Management subsystem (DM), since the required temporal frequency of correction is dramatically reduced. The images originate from the 189 science detectors for each of the different defocus settings, mostly ±2 mm, as well as WFS images with the camera in focus for reference. At this reference position, the AOS will collect images from both WFS detectors and science detectors. Using the full focal plane for wavefront sensing makes it possible to relate the wavefront sensor data to image quality in the field.
This mode will also be used during early commissioning with the commissioning camera (ComCam). ComCam has 1 raft with 9 science detectors, leading to a FOV of 0.6 degrees. This mode will help the commissioning team exercise the AOS algorithms and accomplish the first alignment procedure checks. With regard to software and algorithms, this operating mode diverges only slightly from the nominal automatic operation mode, mostly in the number of sensors read and the order of operations.

2.3 Alignment Procedure
As mentioned earlier, the AOS will also be used at the beginning of each night as an alignment tool to verify and, if needed, adjust the reference position relative to the LUT. In other words, at the beginning of each night, the observatory operator aligns the main optical elements (mirrors and camera) using the following tools:
- A look-up table incorporating the state of the system measured during the previous night. The look-up table is the set of absolute hexapod positions and mirror shapes needed to achieve the required image quality. Its requirement is defined below. The reference positions are fixed for at least one night.
- A laser tracker, which works with the beam reflectors to adjust tip/tilt/rotation and translation. More information on the laser tracker is given in Araujo et al.
- The curvature wavefront sensor, which further minimizes the wavefront error. Before the night's observing, the appropriate active optics corrections are applied to the mirror shapes, and the camera and M2 mirror are positioned relative to the M1M3 mirror by their hexapods. The position of the camera and M2 will likely be verified by the laser tracker system. This ensures that the wavefront errors are within the dynamic range of the active optics system.
This section gives a more detailed description of wavefront sensing for LSST. The first subsection presents the optical layout of the camera dedicated to wavefront sensing, and the second subsection focuses on the general software architecture.
As mentioned in the introduction, the optical wavefront sensor chosen for LSST is a curvature wavefront sensor, which requires sets of two images with a defined focus offset between them. To satisfy the additional constraint of reducing or eliminating non-common-path errors, the wavefront sensors are located in the focal plane of the camera along with the science sensors.
The wavefront sensor is composed of four sensors manufactured by the Imaging Technology Laboratory (ITL) at the University of Arizona. The sensors are located at the corners of the focal plane, as shown in Figure 1. Each wavefront sensor is composed of two defocused detectors of 2k × 4k pixels, as shown in Figure 1 (right). The optimal defocus was the result of a trade-off study, including the impact of signal-to-noise ratio, atmospheric turbulence, non-linearity of the wavefront sensor, and sky coverage.
The FOV area of each wavefront sensing detector is 6.8 arcminutes × 13.7 arcminutes. Figure 1 (right) shows a simulation of the defocused images over the full FOV of one of the four wavefront sensors, using a photon simulation tool called PhoSim. These simulations are executed using a bright star catalog, such as the catalog from the United States Naval Observatory (USNO) led by Monet et al., or internal catalogs.
The images in Figure 1 (right) demonstrate some of the difficulties in using the LSST wavefront sensor images to reconstruct the wavefront and calculate the best set of corrections to apply to the different degrees of freedom (presented in section 5). The images show blending issues, where two stars are so close to one another that the two donuts overlap, complicating the estimation of the wavefront. Some of the donuts also have a low signal-to-noise ratio. The two images show the differences in star counts that can occur even for very close fields.
The Active Optics System (AOS) is responsible for measuring the residual wavefront error from the defocused images and estimating the corrective bending modes to send to the mirror control systems (M1M3 and M2) and the positions to send to the hexapods (M2 and camera). The AOS is part of the Telescope Control System. Figure 2 describes the inputs and outputs of the AOS, i.e., its interfaces. The interfaces to the AOS are of several types: interfaces to other subsystems (DM and Camera), and interfaces to the different controllers (M1M3, M2, and the hexapods). The AOS exchanges messages with other systems using a publish-subscribe middleware solution based on an implementation of the Data Distribution Service specification.
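The publish-subscribe pattern can be illustrated with a minimal in-process stand-in; the real system uses a DDS implementation, and the topic name and message fields below are invented.

```python
from collections import defaultdict

class MessageBus:
    """Minimal in-process stand-in for a publish-subscribe middleware.
    Subscribers register a callback per topic; publishers never need to
    know who is listening, which is what decouples the subsystems."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        for callback in self._subscribers[topic]:
            callback(message)

bus = MessageBus()
received = []
# e.g. an M2 hexapod controller subscribing to AOS position offsets
bus.subscribe("aos.hexapod.m2.offset", received.append)
bus.publish("aos.hexapod.m2.offset", {"z_um": 4.2, "tilt_x_arcsec": 0.8})
```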
Input interfaces. In normal operation, the AOS requires crosstalk-corrected images from the Camera Data Acquisition System; these images arrive via a custom push data interface. Also in normal operation, the AOS needs to know the state of the camera (the selected filter, for example); this information comes from the Observatory Control System (OCS). Finally, the baseline requires the use of calibration images and data such as

Figure 1. Focal plane layout of the detectors. The left drawing shows the full focal plane with the science sensors, the wavefront sensors, and the guide sensors. The wavefront sensors and the guide sensors will have vignetting effects as well as distortion that must be taken into account in the wavefront estimation. The right panel shows simulated intra- and extra-focal images for one of the wavefront sensors, obtained using a complex simulation tool called PhoSim.
flat field images or bad pixel maps, as described in section 4.1. This metadata comes from the Data Management subsystem, once a night.
Output interfaces. The AOS outputs bending modes and position information to the different controllers listed in section 5.
Figure 2. AOS flow diagram showing inputs and outputs between different components. The AOS receives the images either from the Camera or the DM subsystem. It also receives telemetry from the Observatory Control System. The output of the AOS is sent to the different mirror and hexapod controllers. The information sent consists of the bending mode offsets for the mirrors and the positions for the hexapods/rotator.
The AOS includes the Wavefront Estimation Pipeline (WEP) and the Active Optics Control System (AOCS).

Each entity is described in the following sections. The WEP sends the AOCS a set of Zernike coefficients describing the wavefront error for each sensor: 4 in normal operation mode and 189 in FFPA mode.
As mentioned in the introduction, the Wavefront Estimation Pipeline (WEP) is the algorithm responsible for estimating the wavefront errors, expressed as annular Zernike coefficients, associated with each of the corner rafts. In normal operation (see section 2.1), the inputs to the WEP are an intra-focal image and an extra-focal image coming from each of the four corner rafts.
The WFS data are pulled directly from the Wavefront Data Acquisition System (DAQ) after each exposure. The camera team is responsible for the DAQ and will provide its Application Programming Interface (API) and compilable client source code on a Linux-based computer, along with connectivity instructions. This differs from the nominal science operation mode, in which the images coming from the camera are sent directly to the data management team for processing and archiving at the National Center for Supercomputing Applications (NCSA).
Each detector of a corner raft sees a different field of the sky; therefore the stars on the 2 detectors in each corner are different. In addition, compared to typical active or adaptive optics systems, the wavefront sensors have a large FOV, with numerous stars. The detectors are also located 1.7 degrees off-axis, introducing vignetting and image distortion. For these reasons, image processing is required to make the images usable before applying any curvature sensing algorithm. This includes basic image processing such as Instrument Signature Removal (ISR), source selection using a bright star catalog, source processing (deblending and intensity scaling), and master creation. Where applicable, the WEP utilizes common methods available in the LSST software stack.
Figure 3 shows the overall architecture of the WEP.
Figure 3. This diagram shows the different steps included in the WEP algorithm. These steps are described in the text.
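The chain of stages named above (ISR, source selection, source processing, master creation) can be sketched as an ordered pipeline over a shared state; the stage bodies below are illustrative placeholders, not the project's implementation.

```python
# Illustrative placeholders for the WEP stages; each stage transforms a
# shared state dictionary, and the runner applies them in order.
def isr(state):
    # instrument signature removal, reduced here to a dark subtraction
    state["image"] = state["image"] - state.get("dark", 0.0)
    return state

def source_selection(state):
    # keep only catalog stars brighter than the limiting magnitude
    state["sources"] = [s for s in state["catalog"] if s["mag"] <= state["mag_limit"]]
    return state

def source_processing(state):
    # deblending and intensity scaling would happen here
    return state

def master_creation(state):
    # co-adding the selected donuts into a single master; only counted here
    state["n_coadded"] = len(state["sources"])
    return state

WEP_STAGES = [isr, source_selection, source_processing, master_creation]

def run_wep(state):
    for stage in WEP_STAGES:
        state = stage(state)
    return state
```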
    4.1 Preprocessing
Wavefront Data Collector: During normal operation (see section 2.1), the wavefront sensor data are sent directly to the AOS from the Camera subsystem using the DAQ through a push/pull interface. For the full-array use case, the baseline is for the wavefront sensor data to go to the archive before being pulled by the AOCS. In this mode, the camera control system publishes an event, triggering the active optics system to pull the wavefront images from the 4 wavefront sensor detectors. In both cases, the images are crosstalk corrected.

Instrument Signature Removal Wrapper: The WEP is compatible with the Instrument Signature Removal (ISR) developed by the DM team and will use some of its functions. The stack features used by the WEP are simple dark subtraction, flat field correction, and gain and bias corrections. The ISR will also include world coordinate system (WCS) correction. The other corrections required by the WEP are not part of the ISR and are described in the following paragraphs.
The Bright Star Catalog and Source Selector: Due to the large FOV of the wavefront sensor detectors, source finding will rely on a bright star catalog. The impact of galaxies is expected to be minimal. The catalog contains the RA and DEC of bright stars only. It includes each star's limiting magnitude and saturation as a function of the LSST filter, and information about the number of neighbors falling in a predefined surrounding region. The acceptable number of neighbors within a region of interest and the size of that region are currently being defined. The first instance of the catalog is expected to utilize the catalog developed by USNO; this catalog will be refined using LSST data during the lifetime of the survey. Figure 4 shows the prerequisites and components of the source selector step.
Figure 4. This diagram shows details of the source selection process, including required inputs such as a bright star catalog, and the components of this step.
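A selector of the kind just described might filter the catalog as follows; the record fields and thresholds are invented for illustration and do not reflect the project's actual schema (the real limits depend on the LSST filter and are still being defined).

```python
# Hypothetical bright-star-catalog records. A real record also carries
# per-filter limiting and saturation magnitudes; one filter is shown here.
CATALOG = [
    {"ra": 10.1, "dec": -5.2, "mag_r": 11.5, "n_neighbors": 0},
    {"ra": 10.3, "dec": -5.1, "mag_r": 15.9, "n_neighbors": 4},  # too faint
    {"ra": 10.4, "dec": -5.3, "mag_r": 8.0,  "n_neighbors": 1},  # saturated
]

def select_sources(catalog, faint_limit=15.0, saturation_limit=9.0,
                   max_neighbors=2):
    """Reject stars that are too faint, saturated, or too crowded."""
    return [s for s in catalog
            if saturation_limit < s["mag_r"] < faint_limit
            and s["n_neighbors"] <= max_neighbors]
```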
Creation of intra/extra focal-plane image masters: The result of the source finding step is a set of postage stamps for each of the 8 detectors, including neighbors. The WEP then cleans the postage stamp images, creating sub-images of donuts without any contamination from nearby stars. This process is called deblending. Because of the different magnitudes on each detector, the intensity of each sub-image is scaled to a common magnitude. Finally, all the images are summed to create a single pair of intra- and extra-focal images per sensor, referred to as the masters. These intra and extra masters can be seen as the input to regular curvature sensing.
The limiting magnitude will vary slightly depending on the conditions of the night (seeing, cloud coverage, etc.). Therefore, quality control is required to determine whether a particular donut has the required signal-to-noise ratio (~10) to be used in the WEP without introducing extra residual errors.
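The scale-and-co-add step, with the signal-to-noise quality cut, can be sketched as follows; the reference magnitude and the representation of stamps as flat pixel lists are simplifying assumptions.

```python
# Sketch of master creation: scale each deblended donut stamp to a common
# reference magnitude, drop donuts below the ~10 signal-to-noise threshold,
# and co-add the survivors. Stamps are plain lists of pixel values here.
def make_master(stamps, mags, snrs, ref_mag=12.0, snr_min=10.0):
    master = None
    for stamp, mag, snr in zip(stamps, mags, snrs):
        if snr < snr_min:
            continue  # quality control: too noisy to contribute
        # flux ratio needed to bring a star of magnitude `mag` to `ref_mag`
        scale = 10 ** (-0.4 * (ref_mag - mag))
        scaled = [p * scale for p in stamp]
        master = scaled if master is None else [a + b for a, b in zip(master, scaled)]
    return master
```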
4.2 Wavefront Estimator
Each set of intra and extra masters is then sent to the wavefront estimator. The wavefront sensing algorithm used for LSST is curvature sensing and has been described in detail in Xin et al. Due to the location of the wavefront sensors in the focal plane (~1.7 degrees off-axis), distortion and vignetting corrections are needed, which prevents an analytical mapping from the telescope aperture to the defocused image. In addition, since LSST has a large central obstruction (60%) and a fast beam (f-number of 1.23), the WEP uses a numerical solution, representing the mapping between the two sets of coordinates with 2D 10th-order polynomials. The off-axis distortion and vignetting information comes from the optical design (Zemax and pupil information) and depends on the star location on the detector.
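Evaluating such a 2D polynomial mapping is straightforward; the sketch below uses first-order coefficients invented purely to exercise the evaluation, whereas the paper's mapping is 10th order and fit to the optical design.

```python
import numpy as np
from numpy.polynomial import polynomial as P

# Coefficient grids c[i, j] multiply x**i * y**j. A 10th-order mapping
# would use 11x11 grids fit to Zemax ray-trace data; these are toy values.
deg = 1
cx = np.zeros((deg + 1, deg + 1))
cy = np.zeros((deg + 1, deg + 1))
cx[1, 0] = 0.98   # x' ~ 0.98 x  (slight demagnification, invented)
cy[0, 1] = 0.98   # y' ~ 0.98 y

def map_coords(x, y):
    """Map aperture coordinates (x, y) to defocused-image coordinates."""
    return P.polyval2d(x, y, cx), P.polyval2d(x, y, cy)
```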
To determine a final algorithm, our strategy is to choose two well-established algorithms that are known to work for large-f-number, on-axis systems, implement them as our baseline, and then extend them to work with small-f-number and off-axis sensors. As our two baseline algorithms we have chosen the iterative fast Fourier transform (FFT) method of Roddier and Roddier, and the series expansion technique of Gureyev and Nugent. More details are given in Xin et al.
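Both baseline algorithms start from the classic curvature-sensing signal: the normalized difference of the intra- and extra-focal intensities, which for an on-axis, slow-beam system is proportional to the wavefront Laplacian inside the pupil. A minimal sketch of that signal, on toy arrays rather than real donut masters:

```python
import numpy as np

def curvature_signal(i_intra, i_extra):
    """Normalized intra/extra intensity difference used by curvature sensing."""
    i_intra = np.asarray(i_intra, dtype=float)
    i_extra = np.asarray(i_extra, dtype=float)
    return (i_intra - i_extra) / (i_intra + i_extra)
```

The hard part, which the two baseline algorithms address, is inverting this signal for the wavefront under LSST's fast beam and off-axis geometry.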
4.3 Required Inputs and Outputs
Figure 5 provides further details on the inputs and sub-steps of the wavefront estimator box shown in Figure 3.
Figure 5. This diagram shows the details of the inputs of the different steps included in the wavefront estimator algorithm.
The output of the WEP is an annular Zernike decomposition of the wavefront. The output is communicated via the Data Distribution Service (DDS) middleware to the active optics control system as a structure with three main elements for each sensor: the basis-set name, the number of terms, and the coefficients themselves. In addition, there is an image quality metric associated with each result.
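The per-sensor output structure just described might look like the following; the field names are invented, and only the three main elements plus the quality metric come from the text.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class WavefrontResult:
    """Hypothetical per-sensor WEP output as sent over the middleware."""
    basis_set: str             # e.g. "annular Zernike"
    num_terms: int
    coefficients: List[float]  # wavefront-error coefficients
    image_quality: float       # associated image-quality metric

result = WavefrontResult("annular Zernike", 3, [0.12, -0.05, 0.02], 0.21)
```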
4.4 Pipeline Implementation
The WEP implementation will utilize Luigi, a platform-independent Python package designed to simplify processing pipelines. It contains boilerplate code, and the dependencies are decentralized. This removes the need for large configuration files, since each task specifies its own dependencies. The use of Luigi will be limited to defining the pipeline task hierarchy. The processing of each task will be defined within its own package. This separates the responsibility for controlling execution order from the actual processing. The result is a set of Python classes with very simple implementations that clearly describe each task's dependencies, parameters, outputs, and responsibilities.
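The decentralized-dependency pattern works as follows: each task class declares what it requires, and a resolver runs dependencies first. This stand-in mimics Luigi's `requires()` convention without importing Luigi itself, and the task names are invented.

```python
# Each task declares its own dependencies via requires(), so no global
# configuration file is needed; the resolver discovers the order itself.
class Task:
    def requires(self):
        return []
    def run(self, log):
        log.append(type(self).__name__)

class ISRTask(Task):
    pass

class SourceSelectionTask(Task):
    def requires(self):
        return [ISRTask()]

class WavefrontEstimationTask(Task):
    def requires(self):
        return [SourceSelectionTask()]

def build(task, log, done=None):
    """Run a task's dependencies depth-first, then the task itself."""
    done = set() if done is None else done
    name = type(task).__name__
    if name in done:
        return
    for dep in task.requires():
        build(dep, log, done)
    task.run(log)
    done.add(name)

log = []
build(WavefrontEstimationTask(), log)
# log now lists the tasks in dependency order
```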
This subsection describes the Active Optics Control System (AOCS). As mentioned above, the AOCS uses the wavefront measurements pushed by the WEP through the middleware to calculate the corrections to send to the different control elements. The overall feedback algorithm can be separated into two steps: the optical state reconstructor and the control algorithm (called the setpoint mapper in the block diagram presented in Figure 6). The degrees of freedom that the telescope possesses are the following:
- M1M3 shape, up to 20 bending modes
- M2 shape, up to 20 bending modes
- M2 piston, x/y decenter, and x/y tilt, using the M2 hexapod
- Camera piston, x/y decenter, and x/y tilt, using the camera hexapod/rotator
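Tallied together, these degrees of freedom form the state vector the AOCS works with; the ordering below is our assumption.

```python
# Controlled degrees of freedom from the list above.
N_M1M3_MODES = 20
N_M2_MODES = 20
N_HEXAPOD_AXES = 5      # piston, x/y decenter, x/y tilt
N_HEXAPODS = 2          # M2 hexapod and camera hexapod/rotator

n_dof = N_M1M3_MODES + N_M2_MODES + N_HEXAPODS * N_HEXAPOD_AXES
state = [0.0] * n_dof   # 50-element optical state / correction vector
```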
The corrections, defined as bending modes and positions, are sent to the different correction element controllers, namely the M1M3 controller, the M2 controller, the M2 hexapod, and the camera hexapod, as shown on the right-hand side of Figure 6. The number of bending modes is the optimum number to achieve the required image quality, as determined by complete simulations.
The output of the AOCS (and thus the AOS) is defined in terms of bending mode offsets for the M1M3 and M2 shapes as well as hexapod setpoints. The offsets are relative to the LUT (or open-loop model) defined in the introduction. The AOS sends setpoints to the component controllers responsible for moving the relevant axes. These controllers utilize open-loop models (possibly LUTs), influence matrices to convert from bending modes to forces, and custom electronics boards known as Inner Loop Controllers.
As with the wavefront estimation pipeline, the optical feedback control algorithm design is described in detail in a previous paper. The focus of this paper is on the architecture.
Figure 6. The block diagram shows the different steps needed to run the AOCS algorithm. This flow-down comes after the WEP flow-down shown in Figure 3. In this block diagram, the "setpoint mapper" is the control algorithm used to create the corrections that improve the image quality. Not shown in this graph is the sensitivity matrix needed by both the optical state estimator and the control algorithm.
Optical State Estimation: The AOCS derives the appropriate state of the optical system from the wavefront sensors in the presence of algorithmic, WCS, and atmospheric noise errors. The state estimation is done either from the 4 sets of basis coefficients in normal operation or from the 189 sets in the full-array use case. The conceptually simplest version of this estimator uses the pseudo-inverse of the sensitivity matrix described at the end of this section.
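The pseudo-inverse estimator can be sketched directly: recover the optical state x from the stacked Zernike measurements y, with y ≈ A x for sensitivity matrix A. The shapes below are toy-sized; in practice A has one block of rows per wavefront sensor.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 3))   # 8 Zernike terms, 3 degrees of freedom
x_true = np.array([0.5, -0.2, 0.1])
y = A @ x_true                    # noiseless measurements for this sketch

# Pseudo-inverse state estimate; with noise, this is the least-squares fit.
x_hat = np.linalg.pinv(A) @ y
```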
Control algorithm (or setpoint mapper): Once the AOCS estimates the optical state of the telescope, it uses an optimal controller based on a cost function to derive realistic bending modes to apply to the M1M3 and M2 control systems, as well as positions for the M2 and camera hexapods. The result of this cost function provides the best optical quality with the smallest actuator action. It also sets the control authority between actuator groups (mirror bending modes and hexapod displacements). The cost function is built to allow the algorithm to calculate the solution that optimizes the image quality, for example by minimizing the variance of the Full Width at Half Maximum (FWHM) over the large FOV. Furthermore, the cost function helps limit large actuator swings, to avoid damage to the glass, and ensures smooth transitions between iterations. The FOV mapping uses the Gaussian quadrature method. The temporal (sampling) behavior of the described controller is equivalent to an integral controller. Performance can possibly be further optimized by a more complex dynamic controller utilizing earlier system state estimates.
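The cost-function idea can be sketched as damped least squares: choose the correction u minimizing ||y + A u||^2 + rho ||u||^2, i.e. the best wavefront improvement with a penalty on actuator motion. Applied each iteration to the residual wavefront, this behaves like an integral controller. The weight rho is an invented tuning parameter standing in for the actual cost-function weights.

```python
import numpy as np

def control_step(A, y, rho=0.1):
    """Damped least-squares correction: argmin_u ||y + A u||^2 + rho ||u||^2."""
    n = A.shape[1]
    return -np.linalg.solve(A.T @ A + rho * np.eye(n), A.T @ y)

# Toy sensitivity matrix and residual-wavefront measurement.
A = np.array([[1.0, 0.0],
              [0.0, 2.0]])
y = np.array([0.5, -1.0])
u = control_step(A, y)  # correction that shrinks the residual
```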
Sensitivity Matrix: The AOCS uses a sensitivity matrix calculated prior to any AOCS reconstruction. The sensitivity matrix describes the optical response of the system to the various controlled degrees of freedom. The columns of the sensitivity matrix correspond to the controlled degrees of freedom (described above). The rows correspond to the Zernike coefficients of the wavefront at the N sensor locations. This matrix A is used in both the state estimation process and the control algorithm. The sensitivity matrix is planned to be measured on-sky during commissioning. Note that the bending modes and Zernike coefficients are measured relative to a reference state. The sensitivity matrix is calculated offline after gathering on-sky data. It is not expected to change often and will not be modified during a night.
LSST is equipped with an active optics system to optimize the image quality of the telescope. The active optics is a curvature sensing system optimized for the LSST wide field of view and optical design. The software architecture of the AOS is decomposed into two main components: the Wavefront Estimation Pipeline (WEP), which estimates the residual wavefront from the open-loop model, and the Active Optics Control System (AOCS), which calculates the M1M3 and M2 shapes as well as the M2 and camera hexapod positions. Both the WEP and the AOCS are complex entities that require several inputs from other LSST subsystems and comprise several steps. The AOS is currently in full development, with a completion date, in accordance with the Telescope and Site integration plan, of late 2018.
    This material is based upon work supported in part by the National Science Foundation through Cooperative
    Agreement 1258333 managed by the Association of Universities for Research in Astronomy (AURA), and the
Department of Energy under Contract No. DE-AC02-76SF00515 with the SLAC National Accelerator Laboratory. Additional LSST funding comes from private donations, grants to universities, and in-kind support from
    LSSTC Institutional Members.
[1] Kahn, S., "LSST final design overview," Proc. SPIE 9906, Paper 17, in press (2016).
[2] Gressler, W. et al., "LSST telescope and site status," Proc. SPIE 9906, Paper 19, in press (2016).
[3] Araujo, C., Sebag, J., Liang, M., Neill, D., Thomas, S. J., Vucina, T., and Gressler, W., "Overview of the LSST mirror system," Proc. SPIE 9906, Paper 20, in press (2016).
[4] Neill, D. R., Muller, G., Hileman, E., DeVries, J., Araujo, C., Gressler, W., Lotz, P., Mills, D., Sebag, J., Thomas, S., Warner, M., and Wiecha, O., "Final design of the LSST primary/tertiary mirror cell assembly," Proc. SPIE 9906, Paper 25, in press (2016).
[5] Kurita, N. et al., "Large Synoptic Survey Telescope camera design and construction," Proc. SPIE 9912, Paper 27, in press (2016).
[6] Neill, D., Angeli, G., Claver, C., Hileman, E., DeVries, J., Sebag, J., and Xin, B., "Overview of the LSST active optics system," Proc. SPIE 9150, 91500G (2014).
[7] Sneed, R., Neill, D., Kidney, S., Araujo, C., Gressler, W., Lotz, P., Mills, D., Sebag, J., Sebring, T. A., Warner, M., and Wiecha, O., "Final design of the LSST hexapod and rotator," Proc. SPIE 9906, Paper 18, in press (2016).
[8] Angeli, G. Z., Xin, B., Claver, C., MacMartin, D., Neill, D., Britton, M., Sebag, J., and Chandrasekharan, S., "Real time wavefront control system for the Large Synoptic Survey Telescope (LSST)," Proc. SPIE 9150, 91500H (2014).
[9] Angeli, G. Z., Xin, B., Claver, C., Cho, M., Dribusch, C., Neill, D., Peterson, J., Sebag, J., and Thomas, S., "An integrated modeling framework for the Large Synoptic Survey Telescope (LSST)," Proc. SPIE 9911, Paper 46, in press (2016).
[10] Peterson, J. R., Jernigan, J. G., Kahn, S. M., Rasmussen, A. P., Peng, E., Ahmad, Z., Bankert, J., Chang, C., Claver, C., Gilmore, D. K., Grace, E., Hannel, M., Hodge, M., Lorenz, S., Lupu, A., Meert, A., Nagarajan, S., Todd, N., Winans, A., and Young, M., "Simulation of astronomical images from optical survey telescopes using a comprehensive photon Monte Carlo approach," The Astrophysical Journal Supplement Series 218(1), 14 (2015).
[11] Monet, D. G., Levine, S. E., Canzian, B., Ables, H. D., Bird, A. R., Dahn, C. C., Guetter, H. H., Harris, H. C., Henden, A. A., Leggett, S. K., Levison, H. F., Luginbuhl, C. B., Martini, J., Monet, A. K. B., Munn, J. A., Pier, J. R., Rhodes, A. R., Riepe, B., Sell, S., Stone, R. C., Vrba, F. J., Walker, R. L., Westerhout, G., Brucato, R. J., Reid, I. N., Schoening, W., Hartley, M., Read, M. A., and Tritton, S. B., "The USNO-B Catalog," The Astronomical Journal 125, 984-993 (2003).
[12] Connolly, A. J., Angeli, G. Z., Chandrasekharan, S., Claver, C. F., Cook, K., Ivezić, Ž., Jones, R. L., Krughoff, K. S., Peng, E.-H., Peterson, J., Petry, C., Rasmussen, A. P., Ridgway, S. T., Saha, A., Sembroski, G., VanderPlas, J., and Yoachim, P., "An end-to-end simulation framework for the Large Synoptic Survey Telescope," Proc. SPIE 9150, 915014 (2014).
[13] Lotz, P., Dubois-Felsmann, G. P., Lim, K.-T., Johnson, T., Chandrasekharan, S., Mills, D., Daly, P., Schumacher, G., Delgado, F., Pietrowicz, S., Selvy, B., Sebag, J., Marshall, S., Sundararaman, H., Contaxis, C., Bovill, R., and Jenness, T., "LSST control software component design," Proc. SPIE 9913, Paper 9, in press (2016).
[14] Mills, D., Schumacher, G., and Lotz, P., "LSST communications middleware implementation," Proc. SPIE 9906, Paper 204, in press (2016).
[15] Daly, P. N., Schumacher, G., Delgado, F., and Mills, D., "LSST OCS plan and status," Proc. SPIE 9913, Paper 112, in press (2016).
[16] Jenness, T., Bosch, J., Owen, R., Parejko, J., Sick, J., Swinbank, J., de Val-Borro, M., Dubois-Felsmann, G., Lim, K.-T., Lupton, R. H., Schellart, P., Krughoff, K. S., and Tollerud, E. J., "Investigating interoperability of the LSST data management software stack with Astropy," Proc. SPIE 9913, Paper 16, in press (2016).
[17] Jurić, M., Kantor, J., Lim, K.-T., Lupton, R. H., Dubois-Felsmann, G., Jenness, T., Axelrod, T. S., Aleksić, J., Allsman, R. A., AlSayyad, Y., Alt, J., Armstrong, R., Basney, J., Becker, A. C., Becla, J., Bickerton, S. J., Biswas, R., Bosch, J., Boutigny, D., Carrasco Kind, M., Ciardi, D. R., Connolly, A. J., Daniel, S. F., Daues, G. E., Economou, F., Chiang, H.-F., Fausti, A., Fisher-Levine, M., Freemon, D. M., Gee, P., Gris, P., Hernandez, F., Hoblitt, J., Ivezić, Ž., Jammes, F., Jevremović, D., Jones, R. L., Bryce Kalmbach, J., Kasliwal, V. P., Krughoff, K. S., Lang, D., Lurie, J., Lust, N. B., Mullally, F., MacArthur, L. A., Melchior, P., Moeyens, J., Nidever, D. L., Owen, R., Parejko, J. K., Peterson, J. M., Petravick, D., Pietrowicz, S. R., Price, P. A., Reiss, D. J., Shaw, R. A., Sick, J., Slater, C. T., Strauss, M. A., Sullivan, I. S., Swinbank, J. D., Van Dyk, S., Vujčić, V., Withers, A., Yoachim, P., for the LSST Project, "The LSST Data Management System," Astronomical Data Analysis Software and Systems XXV proc., arXiv:1512.07914 (2015).
[18] Xin, B., Claver, C., Liang, M., Chandrasekharan, S., Angeli, G., and Shipsey, I., "Curvature wavefront sensing for the Large Synoptic Survey Telescope," Applied Optics 54, 9045 (2015).
[19] Roddier, C. and Roddier, F., "Wave-front reconstruction from defocused images and the testing of ground-based optical telescopes," Journal of the Optical Society of America A 10, 2277-2287 (1993).
[20] Gureyev, T. E. and Nugent, K. A., "Phase retrieval with the transport-of-intensity equation. II. Orthogonal series solution for nonuniform illumination," Journal of the Optical Society of America A 13(8), 1670-1682 (1996).
