
The LSST is the nation's top-priority next-generation ground-based astronomy project, with the objective of
conducting observations of the entire accessible sky, with about 800 visits per field. The scheduler for the project
will determine the order in which these fields are observed, with the goal of maximizing some scientific merit function.
A portion of that merit function has to do with Fourier coverage in the time domain.
The variability in sky conditions (cloud cover, sky brightness and atmospheric seeing) makes the scheduling
problem non-trivial: it is a traveling salesman problem with a stochastic component, subject to certain constraints.
Our goals are to
1. devise a sensible quantitative framework that can accommodate various merit functions,
2. assess whether optimizing the instantaneous (nightly) sequence of observations will achieve a global optimum, and
3. build some numerical tools to make a toy model and try out some implementation schemes.

LSST synopsis

The telescope feeds a focal plane that spans a field of view of 9.6 square degrees, so it's 3.1 degrees on a side. 

The default plan calls for a pair of 15 second exposures at each pointing of the telescope, and this pair is termed a "visit" to a field. 

The LSST will be situated in Chile at a latitude of -30 degrees. From there it can usefully access the sky up to a declination of +30 degrees. The full sky is 41,253 square degrees, and LSST plans to conduct a survey over 20,000 (or 18,000?) square degrees in addition to a few "deeper" fields that will receive longer and more frequent observations.

This implies that we need to schedule observations in 6 bands, spread across 20,000/9.6~2100 fields over the course of 10 years. 

Images are obtained in 6 different optical passbands, designated u, g, r, i, z and y, that span from the atmospheric cutoff at 340 nm up to the longest wavelength that can be detected in silicon CCDs, about one micron.

The time it takes the telescope to slew from one place to another on the sky is determined by two upper limits, one is the maximum angular acceleration (about 3 deg/s^2) and the other is the maximum angular rate (3 deg/s). 

Coordinates and Timeframes. 

There are three natural timescales involved: a day, a month and a year. 

Relationship between RA, DEC, HA and zenith angle (from Zombeck)

HA=LST-RA

alt=altitude

az=azimuth, from W towards S

lat=latitude

cos(alt)sin(az)=cos(dec)sin(HA)

cos(alt)cos(az)=-sin(dec)cos(lat)+cos(dec)cos(HA)sin(lat)

sin(alt)=sin(dec)sin(lat) + cos(dec) cos(HA) cos(lat)
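These relations are easy to get wrong in code, so here is a minimal sketch (NumPy assumed; the function name and arguments are ours) implementing the three relations above, with arctan2 keeping the azimuth quadrant correct:

import numpy as np

def altaz(dec_deg, ha_hours, lat_deg=-30.0):
    # Altitude and azimuth (azimuth per the W-toward-S convention above)
    # from declination, hour angle and site latitude.
    dec = np.radians(dec_deg)
    ha = np.radians(15.0 * ha_hours)       # 15 degrees per hour of HA
    lat = np.radians(lat_deg)
    sin_alt = np.sin(dec)*np.sin(lat) + np.cos(dec)*np.cos(ha)*np.cos(lat)
    y = np.cos(dec)*np.sin(ha)                                         # cos(alt) sin(az)
    x = -np.sin(dec)*np.cos(lat) + np.cos(dec)*np.cos(ha)*np.sin(lat)  # cos(alt) cos(az)
    return np.degrees(np.arcsin(sin_alt)), np.degrees(np.arctan2(y, x)) % 360.0

# Check: a field at dec = -30 transiting (HA = 0) from latitude -30 is at the zenith:
# altaz(-30.0, 0.0) -> (90.0, ...)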

Solid angle Omega subtended in an angle A from celestial pole is Omega=2pi*(1-cos(A)). 

A (deg)   declination   Omega(sr)/2pi   sq deg   N fields
30        -60           0.13            2681     280
45        -45           0.29            5981     623
60        -30           0.50            10313    1074
70        -20           0.66            13607    1417
90        0             1.00            20626    2148
120       +30           1.50            30939    3222

So within 2 airmasses we can reach 3/4 of the entire sky. A six-band, annual 3pi survey would require 6*3222 = 19K visits. At 50 seconds per visit, this is essentially 16K minutes ~ 270 hours ~ 30 nights per year. But of course it's only the marginal investment that should count.
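As a sanity check, the table follows directly from the solid-angle formula. A few lines of Python reproduce it to within rounding (the sq deg column above appears to have been computed from the rounded Omega/2pi values):

import numpy as np

# Solid angle within polar angle A of the south celestial pole, and the
# implied number of 9.6 sq deg fields.
SQDEG_PER_SR = (180.0 / np.pi) ** 2        # ~3282.8 square degrees per steradian
for A in (30, 45, 60, 70, 90, 120):
    omega = 2 * np.pi * (1 - np.cos(np.radians(A)))    # steradians
    sqdeg = omega * SQDEG_PER_SR
    print(A, round(omega / (2 * np.pi), 2), int(sqdeg), int(sqdeg / 9.6))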

Airmass vs. HA and dec, with field centers for N-up implementation:

Number of fields vs. declination

-90 0
-87 7
-84 13
-81 19
-78 25
-75 31
-72 36
-69 42
-66 48
-63 53
-60 59
-57 64
-54 69
-51 74
-48 78
-45 83
-42 87
-39 91
-36 94
-33 98
-30 101
-27 104
-24 107
-21 109
-18 111
-15 113
-12 114
-9 115
-6 116
-3 116
0 117
3 116
6 116
9 115
12 114
15 113
18 111
21 109
24 107
27 104
30 101
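The counts above are consistent with simply tiling each 3-degree declination ring with 3.1-degree fields. This short sketch (the ceiling and the 3.1 degree field width are our assumptions, but they reproduce the list exactly) generates it:

import numpy as np

# Number of 3.1-degree fields needed to tile a ring of constant declination.
for dec in range(-90, 33, 3):
    print(dec, int(np.ceil(360.0 * np.cos(np.radians(dec)) / 3.1)))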

 

Cadence, coverage, and passband trades.

Considering only the time between astronomical twilights, computed using skycalc:

month   date     duration (hr)
0.5     Jan 11   6.9
1       Jan 26   7.4
1.5     Feb 09   7.8
2       Feb 25   8.4
2.5     Mar 11   8.9
3       Mar 26   9.4
3.5     Apr 09   9.8
4       Apr 25   10.2
5       May 09   10.5
5.5     May 24   10.8
6       Jun 07   10.9
6.5     Jun 22   10.9
7       Jul 07   10.9
7.5     Jul 22   10.7
8       Aug 06   10.4
8.5     Aug 20   10.1
9       Sep 04   9.7
9.5     Sep 18   9.3
10      Oct 04   8.8
10.5    Oct 18   8.3
11      Nov 02   7.8
11.5    Nov 17   7.3
12      Dec 02   6.9
12.5    Dec 16   6.7
13      Dec 31   6.7

The average duration is about 9 hours; a rough fit is Obstime = 9.2 + 2*cos(2pi t/yr).
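A quick way to evaluate the fit against the table; the phase is an assumption (t in years, measured from the June solstice, near where the tabulated night length peaks):

import numpy as np

# Rough fit from above; the zero point of t is our assumption (June solstice).
def obstime_hours(t_years):
    return 9.2 + 2.0 * np.cos(2 * np.pi * t_years)

# obstime_hours(0.0) -> 11.2 vs. ~10.9 tabulated; obstime_hours(0.5) -> 7.2 vs. ~6.7.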

 


 

Sky Brightness from LSST ETC

An FWHM of 0.87 arcsec at airmass 1.2, derived from FWHM = 0.7 arcsec in r at zenith.

The photometric footprint is 26.82 pixels, Gaussian weighted. Units are electrons in a 15 second exposure, with the Moon 90 degrees from boresight.

lunar phase (days) \ filter   u     g     r     i     z     y4
0                             42    89    111   159   218   238
3                             53    107   113   159   218   238
7                             81    160   140   174   224   238
11                            143   280   205   215   242   242
14                            240   474   305   269   263   254
(bright/dark) ratio           5.7   5.3   2.7   1.7   1.2   1.07
SNR impact ~ sqrt(sky)        2.4   2.3   1.6   1.3   1.1   1.03

SDSS cumulative DR1 sky brightness distribution (no bright time imaging...)

The 10% to 90% range in r is 21.1 - 20.7 = 0.4 mag, a factor of 1.4. The 10% to 90% range in z (OH dominated) is 19.4 - 18.6 = 0.8 mag, a factor of 2.

The y band sky brightness varies over a night; from the High, Stubbs et al. PASP paper on y band variability:

Some observations:

  • The sky brightness contribution from OH typically varies by a factor of two over the course of a single night, and is darkest at midnight. This corresponds to a 0.25 mag change in m5.

  • For an OH contribution from a slab of emission in the upper atmosphere, we'd expect the sky to be darkest at the zenith, getting brighter in proportion to airmass.

  • Clouds produce scattered moonlight, so in grey time the night sky brightness doesn't have the lunar-angle dependence that's built into the current version of OpSim.

  • We definitely want a real-time adaptive scheduler that optimizes based on both cloud transparency and sky brightness.

  • Can we expect to slew during readout? If so, that saves us 2 seconds of slew time, during which we can move by half a field width.

Merit Function Ingredients. 

Observational effectiveness is impacted by both deterministic and stochastic factors. A reasonable initial merit function is the signal to noise ratio for flux determination of an "unresolved" source (i.e. a star). The numerator contains the flux from the source. For the LSST case, the denominator (noise) is dominated by the Poisson noise from the sky background within the footprint of the source. This footprint is in turn dominated by time-variable turbulence in the atmosphere, which produces a Gaussian flux distribution with a FWHM that astronomers term "seeing". The surface brightness of the night sky depends on the time of day, on the phase of the moon and the distance to the moon, and on the optical filter.

The signal to noise ratio for measuring the flux phi (at the telescope input aperture) from an unresolved point source for an integration time t scales as

SNR~ (phi)(sqrt(t)) / (FWHM)(sqrt(sky)). 

Both phi and FWHM depend on the "airmass" a of the observation. Looking straight up corresponds to a=1. Looking at an angle of 60 degrees from the zenith has a=2. The minimum airmass for any given field occurs on the meridian, which is the great circle on the sky that includes the zenith and the celestial pole at declination = -90 degrees.

The atmosphere attenuates the light by an amount that depends on the airmass as well as the passband. So the SNR as a function of airmass and exposure time is a sensible merit function that we seek to optimize, summed over all fields and over the 10 year duration of the survey. This is subject to some science-driven constraints, such as the desire to achieve a uniform survey-integrated SNR across the survey area and the desire to achieve some temporal sampling cadence across the different portions of the sky. 

The FWHM varies depending on atmospheric conditions, and is usually expressed in units of arcseconds. LSST expects to achieve a median seeing of 0.6 arcsec, with a long term distribution as shown in the Figure below.

The deterministic factors that influence the merit function are

  • sky brightness, as a function of optical filter, phase of the moon and distance to the moon. 

  • time lost during telescope slews and focal plane readout (~3 seconds per image)

  • zenith angle dependence of the signal to noise ratio

  • exposure time spent on each field

Stochastic factors are

  • seeing

  • cloud cover

Point Source Photometric Merit Function

This suggests that we define a quantity, the point source photometric merit factor (PSPMF), that tracks how a given image contributes to SNR in band i:

PSPMF_i=sqrt(t/15)*(T/1)*(1/FWHM)*(sqrt(1/sky)).  

with T = atmospheric transmission (which depends on both cloud coverage and airmass) and t = exposure time. Summing this quantity over all visits to a given location provides a cumulative photometric merit factor.

Weak Lensing Merit Function

Another LSST objective is to measure the shapes of resolved galaxies. This is similar to the photometric merit function above, which determines the surface density of galaxies at a given SNR, but has an additional dependence on the seeing. Poor seeing degrades the ability to measure galaxy shapes, since it tends to circularize them.

This shape measurement only takes place in the r passband, and so this only applies in that one band. 

We'll parameterize this with an exponent alpha and define the Shape Measurement Merit as

WLMF_r=sqrt(t/15)*(T/1)*(sqrt(1/sky))*(1/FWHM)^(1+alpha). 

Temporal Sampling Merit Function

To facilitate determining the relative priority of fields at any given time, we need to track, for each field, the time since it was last observed, compared to some desired cadence interval tau(field, passband, revisit history). The temporal merit is then given by

TMF_i=exp(t'/tau)

where t' in general contains information about a combination of the time since the field was last observed, and the conditions under which that took place. A first approximation might be

t'=(t-t_last)*(FWHM/0.8)*sqrt(median_sky/sky). 

This formulation allows us to give "partial credit" for observations through clouds, or in poor seeing.
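A minimal sketch of these three merit factors in code (the function names and the alpha default are placeholders):

import numpy as np

def pspmf(t_exp, T, fwhm, sky):
    # Point source photometric merit factor for one visit (t_exp in seconds,
    # T = atmospheric transmission, fwhm in arcsec, sky in counts).
    return np.sqrt(t_exp / 15.0) * T * (1.0 / fwhm) * np.sqrt(1.0 / sky)

def wlmf(t_exp, T, fwhm, sky, alpha=0.5):
    # Weak lensing merit (r band only); alpha is the tunable seeing exponent.
    return np.sqrt(t_exp / 15.0) * T * np.sqrt(1.0 / sky) * (1.0 / fwhm) ** (1 + alpha)

def tmf(t_now, t_last, fwhm_last, sky_last, median_sky, tau):
    # Temporal merit, with "partial credit" for the conditions of the last
    # visit folded into the effective elapsed time t'.
    t_eff = (t_now - t_last) * (fwhm_last / 0.8) * np.sqrt(median_sky / sky_last)
    return np.exp(t_eff / tau)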

Field and Passband Prioritization

Not all locations on the sky are of equal interest to the survey, and this priority can differ across passbands. So we need to allocate a Field Priority Factor FPF(field, band) to tune the relative priority of observations.  

Slewing Overhead Matrix

For N fields there is an N x N symmetrical matrix that lists the time overhead required to move the telescope from field i to field j. These times depend only on the angular separation between two fields, and the matrix only needs to be computed once.  

The temporal observing inefficiency factor is the additional overhead imposed by slewing, after accounting for focal plane readout. A candidate observing sequence should have its overall merit score adjusted by a slewing factor that is given by SF=(t_slew/t_total), where t_total is the total observing time available. 
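A sketch of that precomputation, assuming field centers as RA/Dec numpy arrays and the rate-limited slew model t_slew ~ 2 + theta/3 seconds from the slew-time notes below:

import numpy as np

def angular_separation(ra1, dec1, ra2, dec2):
    # Great-circle separation in degrees; inputs in degrees, broadcastable.
    ra1, dec1, ra2, dec2 = map(np.radians, (ra1, dec1, ra2, dec2))
    c = (np.sin(dec1) * np.sin(dec2) +
         np.cos(dec1) * np.cos(dec2) * np.cos(ra1 - ra2))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

def slew_matrix(ra, dec):
    # N x N overhead matrix, computed once for all field centers (degrees).
    theta = angular_separation(ra[:, None], dec[:, None], ra[None, :], dec[None, :])
    t = 2.0 + theta / 3.0      # rate-limited slew model, seconds
    np.fill_diagonal(t, 0.0)   # no slew overhead to stay on the same field
    return t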

Filter Exchange Overhead Matrix

For F filters there is a time penalty associated with changing filters. This can be represented by a symmetrical F x F matrix, but it's probably fine to just use a scalar t_filter. 

Scientific Weighting Factors

Not all science goals are equal. We should assign pre-factors to the photometric and weak lensing merit functions, and any others that might be useful, such that the sum of all weights is 1.


Oct 18 2013, C. Stubbs

Observations obtained at angles z from zenith suffer from two effects: 

1) additional optical attenuation due to increased atmospheric path length

2) degraded "seeing". 

band   central wavelength   extinction (mag, at airmass a)   seeing degradation vs. r at zenith
u      350 nm               0.40*a                           1.10 * a^0.6
g      450 nm               0.18*a                           1.07 * a^0.6
r      650 nm               0.10*a                           1.00 * a^0.6
i      750 nm               0.08*a                           0.97 * a^0.6
z      850 nm               0.05*a                           0.94 * a^0.6
y      1000 nm              0.04*a                           0.91 * a^0.6


Plot of seeing degradation vs. airmass, and polynomial fit:

A decent approximation to the seeing degradation vs. airmass a is

S = 0.35 + 0.72a - 0.07a^2

Signal to Noise degradation vs. airmass:

SNR scales as the source flux in the numerator and (for unresolved objects) as the seeing in the denominator. Flux at an airmass "a" is reduced by a factor 10^(x*(a-1)/2.5), where x is the extinction coefficient listed in the table above. So SNR vs. airmass at fixed exposure time for unresolved point sources scales as SNR(a) ~ 10^(-x*(a-1)/2.5)/(0.35 + 0.72a - 0.07a^2).
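The table below can be regenerated from the extinction coefficients and the seeing polynomial; a short sketch (note the minus sign in the exponent, since the flux is divided by the attenuation factor):

import numpy as np

EXTINCTION = {'u': 0.40, 'g': 0.18, 'r': 0.10, 'i': 0.08, 'z': 0.05, 'y': 0.04}

def seeing_degradation(a):
    return 0.35 + 0.72 * a - 0.07 * a**2   # approximates a**0.6

def snr_ratio(band, a):
    # Point source SNR at airmass a relative to zenith, fixed exposure time.
    return 10 ** (-EXTINCTION[band] * (a - 1) / 2.5) / seeing_degradation(a)

for a in np.arange(1.0, 2.05, 0.1):
    print(round(a, 1), round(seeing_degradation(a), 2),
          *[round(snr_ratio(b, a), 2) for b in 'ugrizy'])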

airmass   seeing degradation   SNR_u   SNR_g   SNR_r   SNR_i   SNR_z   SNR_y
1.0       1.00                 1.00    1.00    1.00    1.00    1.00    1.00
1.1       1.06                 0.91    0.93    0.94    0.94    0.94    0.94
1.2       1.11                 0.83    0.87    0.88    0.88    0.89    0.89
1.3       1.17                 0.77    0.81    0.83    0.84    0.84    0.85
1.4       1.22                 0.71    0.77    0.79    0.79    0.80    0.81
1.5       1.27                 0.65    0.72    0.75    0.76    0.77    0.77
1.6       1.32                 0.61    0.68    0.71    0.72    0.73    0.74
1.7       1.37                 0.56    0.65    0.68    0.69    0.71    0.71
1.8       1.42                 0.53    0.62    0.65    0.66    0.68    0.68
1.9       1.46                 0.49    0.59    0.62    0.64    0.65    0.66
2.0       1.52                 0.46    0.56    0.60    0.61    0.63    0.64

Fits:

Seeing = 0.35 + 0.72a - 0.07a^2

SNR_u=2.1-1.4*a+0.30*a^2

SNR_g=1.9-1.1*a+0.23*a^2

SNR_r=1.8-a+0.21*a^2

SNR_i=1.8-0.98*a+0.20*a^2

SNR_z=1.7-0.94*a+0.19*a^2

SNR_y=1.7-0.93*a+0.19*a^2

Slew times. 

Oct 19 2013, CWS. 

The angular distance moved is limited by both the maximum angular rate and the maximum angular acceleration. If we imagine the max angular rate is 3 deg/s and the max angular acceleration is 3.5 deg/s/s, then moving one field width requires a slew of 3 degrees in angle (which gives a small overlap). With no coast phase this takes a time t = 2*sqrt(2*1.5 deg/alpha) ~ 2 seconds, and the peak angular rate reached is omega = alpha*t/2 ~ 3 deg/s, right at the rate limit. So for these parameters, any slew larger than a field width is angular-rate-limited, and requires a time t_slew ~ 2 + dtheta/3 seconds. If we can slew during readout, the overhead between images separated by an angle theta is then approximately (2 + theta/3) seconds.
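A sketch of the kinematic slew time for a general angle, covering both the acceleration-limited and rate-limited regimes described above (settling time not included):

import numpy as np

def t_slew(theta, alpha=3.5, omega_max=3.0):
    # theta in degrees, alpha in deg/s^2, omega_max in deg/s; returns seconds.
    theta_crit = omega_max**2 / alpha     # distance to reach the rate cap and stop
    if theta <= theta_crit:               # triangular profile: no coast phase
        return 2.0 * np.sqrt(theta / alpha)
    return 2.0 * omega_max / alpha + (theta - theta_crit) / omega_max  # trapezoidal

# t_slew(3.0) -> ~1.9 s for a one-field-width slew, consistent with the estimate above.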

Arguably the system operation is optimized if the slew time is exactly equal to the (unavoidable) 2 second readout time. This suggests that we should tile the sky with overlapping observations that slew by half a field width between successive exposures.

Operating on the meridian in this mode, with 15 second exposures, we'll assume each exposure takes 20 seconds total on average. It takes 24 hours * (3.1/360) ~ 12 minutes for the sky to move by one field width on the equator. In the course of 12 minutes we can acquire 12 minutes * 3 images/min = 36 images on the meridian. At this half-overlap rate we could cover 36*3.1/2 ~ 56 degrees.

Chuck says settling time is of order 1 sec. 

See http://www.gb.nrao.edu/~rcreager/GBTMetrology/140ft/l0058/gbtmemo52/memo52.html for az rates vs. zenith angle. 

 

 

Coverage Rate

At 35 seconds per visit and 9.6 square degrees per field, we cover the sky at a rate of about 7,900 square degrees in an 8 hour night. That means the average revisit interval (ignoring weather) is roughly 2-3 days for an 18,000 square degree footprint.

Sky rotation.

The position of objects on the sky changes in the right ascension direction at an angular rate of 15 degrees per hour times cos(declination). How long does it take the sky to rotate by one field width, as a function of declination? It takes 3.1/15 ~ 0.2 hours ~ 12 minutes on the equator, and t(dec) = 12 minutes / cos(dec) elsewhere, since the sky at higher declination moves more slowly. So if we were scanning along the meridian we would have to return to a given declination within 12/cos(dec) minutes to get full coverage at minimum airmass for each declination band. At 50 seconds per field (average), in 12 minutes we would cover 3.1*12*(60/50) ~ 45 degrees of declination.
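In code, the drift timescale is:

import numpy as np

def drift_minutes(dec_deg, field_width=3.1):
    # Time for the sky to drift one field width at a given declination.
    return (field_width / 15.0) * 60.0 / np.cos(np.radians(dec_deg))

# drift_minutes(0) -> ~12.4 min on the equator; drift_minutes(-60) -> ~24.8 min.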

 

One potential approach:

  1. determine the rank-ordered priority of all fields on the meridian, or at that night's minimum airmass if they don't transit, in each passband, for different potential values of seeing. 

  2. reject the fields that never appear in the top ~1000. These have such low priority we'd never get to them in a single night. 

  3. For each parametric value of seeing, compute the sequence of observations that maximizes the merit function, including the slew overhead contribution. 

A merit function we'd seek to maximize therefore might look something like this:

MF = (sequence efficiency) * sum over fields of { (temporal merit) * [ (point source weight)*PSPMF + (1 - point source weight)*WLMF ] * (field and filter priority) }

subject to these constraints:

  • sum of time used = total time available

  • ...
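Putting the pieces together, a sketch of evaluating this merit for one candidate sequence (all inputs are placeholders; the sequence efficiency is taken as 1 - SF, with SF the slewing factor defined earlier):

def sequence_merit(fields, w_ps, t_slew_total, t_total):
    # fields: iterable of dicts with per-field 'tmf', 'pspmf', 'wlmf', 'fpf'.
    efficiency = 1.0 - t_slew_total / t_total
    total = sum(f['tmf'] * (w_ps * f['pspmf'] + (1 - w_ps) * f['wlmf']) * f['fpf']
                for f in fields)
    return efficiency * total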

Issues, and notes from Feb 2014 OpSim review.

  • partial credit for imperfect observations
  • look-ahead needed, or not? 
  • HA weighting, for 24-hr urgent fields. (HA weighted by night or annually)
  • homogenization
  • we should do single band version of this problem
  • get field centers. 
  • a 10-year OpSim run now takes 2 days; the bottlenecks are look-ahead and slew-rate computation.
  • rotator and az cable wrap are the two variable things for slew computation.
  • repulsion from moon, as a function of passband
  • different aspects of look-ahead: sky, moon, homogenization, twilight, etc
  • fallback plan of march along the meridian in a fixed band. 
  • partial credit for mediocre observations
  • get around the "science proposal" mentality
  • We need to request d(merit)/d(m5) for all science goals. 
  • They claim they will make actual real-time pointing decisions; they can run at 2000x real time now, so they can keep up with the telemetry stream.
  • Predictive cloud motion.  
  • currently 120 sec for filter change. 
  • 18,000 sq deg is default area
  • minimum slew is 5 sec
  • dome is pacing item
  • optics tweak takes 34 seconds, if elev slew exceeds some threshold
  • az wrap is cumulative, just track it and add it up. Add penalty for az unwind
  • assume rotator wrap is no issue
  • SRD median slew is 5 sec.
  • Field tessellation puts field centers closer together than we can tolerate; it depends on whether we dither in offset or rotation. In other words, 18,000 sq deg / N_fields is substantially less than the camera FOV.
  • minimum of two filter changes per night, due to twilight, etc. 
  • Kem claims we now have 94.2% of available visits. 
  • need adjustable readout time and noise for first vs. second exposure
  • What is histogram of slew times between az,alt pairs? 
  • Kem asserts it takes about 5 sec to slew between adjacent fields. But that's not right. Assuming we go mostly in elevation, with no dome limitation, the move is angular-acceleration limited: we accelerate for half the slew and decelerate for the other half. The half-slew obeys theta = (1/2) alpha*t^2, with theta = 3.5/2 deg and alpha for elevation (conveniently) 3.5 deg/s/s, so t = sqrt(2*theta/alpha) = 1 sec. The kinematic adjacent-field best slew time is therefore about 2 sec; add one sec for settling and we get about 3 sec, not 5.
  • ** we should take OpSim output of pointings, with rotator angle, and compute bounding boxes in RA, DEC, then import into GIS database to do rapid queries of coverage in time and filter space. 
  • Do maximum unobserved gap analysis for each field for each filter. 
  • Phil Pinto suggests computing next-field time needed as a parallel process. 
  • Do object-based coverage analysis rather than field-center-based analysis
  • How do we represent both transparency and sky brightness across a field? Scalar (what's done now), vector (histogram), or matrix (sky image)?

Feb 8, 2014

Using ESSENCE data set to look at cloud transmission. Email from Gautham:

Hi,

We have both things, but neither exactly in the form in your email. We save zero points for each image of course, and I have the average zero point. You can take the differences, and some component of the difference is attributable to extinction due to clouds - I'd imagine the histogram of the differences will be some gaussian with an exponential decay on one side.
This data is in a binary table - I can make it FITS or text if you like but it's 32MB compressed so too big to email - do you have your odyssey account?

It's got other information you don't ask for in your email, but I'm guessing you will want - filter, exptime, airmass, fwhm, aperture correction, zptmag (offset + apercorr modulo a negative sign), number of stars used in the zptmag fit, Chi_sq of that fit, MJD-OBS...

In [9]: zptdata.dtype.descr
Out[9]:
[('date', '|S6'),
('dcmpfile', '|S32'),
('filter', '|S1'),
('exptime', '<f8'),
('airmass', '<f8'),
('field', '|S4'),
('amp', '|S2'),
('apercorr', '<f8'),
('apercerr', '<f8'),
('offset', '<f8'),
('doffset', '<f8'),
('x2red', '<f8'),
('skyadu', '<f8'),
('sky0', '<f8'),
('nim', '<i8'),
('nused', '<i8'),
('nskip', '<i8'),
('pused', '<f8'),
('cterm', '<f8'),
('ctermerr', '<f8'),
('fitgood', '<i8'),
('fitair', '<i8'),
('fitcolor', '<i8'),
('airmflag', '<i8'),
('apcorflag', '<i8'),
('dpfwhm1', '<f8'),
('dpfwhm2', '<f8'),
('mjdobs', '<f8')]



There is also sorta 5 sigma uncertainties - but they are really saved as magnitudes at specific values of median uncertainty:

gnarayan@rclogin05|/n/panstarrs/data/v10.0/W/workspace/sm061128/10> imhead wdd7.061128_0658.133_10.sw.dcmp | grep MAU
MAU010  = 'UNKNOWN,<=20.2'     / magnitude with median uncertainty of 0.010
MAU015  = 'UNKNOWN,<=20.2'     / magnitude with median uncertainty of 0.015
MAU020  =               20.333 / magnitude with median uncertainty of 0.020
MAU050  =               21.399 / magnitude with median uncertainty of 0.050
MAU100  =               22.173 / magnitude with median uncertainty of 0.100
MAU200  =               23.012 / magnitude with median uncertainty of 0.200


These are only stored in the FITS headers - I can convert these also into a table. You probably also want the additional file headers.

---------------

Logged onto odyssey. (run gorc.sh &, then use RSA key). 

SM data are in 

/n/panstarrs/data/v10.0/W/workspace

then smddmmyy. 

to get Gautham/Armin toolkit, on odyssey 

csh

source .myrc

(which I stole from Gautham)

.dcmp files carry image headers, and one can use the wcstools imhead utility on those files.

can get them with 

ls w*.sw.dcmp

pulled out zero points for one single amp for all nights with

stubbs@rclogin10|/n/panstarrs/data/v10.0/W/workspace> gethead MJD-OBS ZPTMAG sm??????/3/w*.sw.dcmp > ESSENCE_zpts.dat

put ESSENCE_zpts.dat in my home directory on odyssey.

Here it is, also:

ESSENCE_zpts.dat

Also, Gautham extracted R and I band zero point values, and provided these two data files: 

R band extinction is 0.104 mag/airmass. 

The three columns are: observation name, MJD of observation, zero point.

Changes in zero point are due to both clouds and airmass.

Took Gautham's data set, corrected for airmass, and made a cumulative plot of the delta zero points, sorted.

percentile   cloud extinction (mag, after mean is subtracted)
10           -0.146
25           -0.111
50           -0.076
75           -0.033
80           -0.014
90           0.166
95           0.486
99           1.71
99.9         2.52
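A sketch of the reduction behind this table, assuming arrays of zero points and airmasses (the posted ESSENCE_zpts.dat has observation name, MJD and zero point; airmass would come from the image headers), with signs chosen so that positive values mean extra extinction:

import numpy as np

def cloud_extinction_percentiles(zeropoints, airmasses, k=0.104):
    # Remove the R-band airmass extinction (0.104 mag/airmass), then
    # mean-subtract. Sign conventions depend on the pipeline's zero point
    # definition, so flip the sign of dz if needed.
    dz = zeropoints - k * (airmasses - 1.0)
    dz = np.mean(dz) - dz
    return {p: round(float(np.percentile(dz, p)), 3)
            for p in (10, 25, 50, 75, 80, 90, 95, 99, 99.9)}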
  

Stubbs Notes, May 25 2014. 

Jaimal and I have agreed that a weighted sum and a product figure of merit amount to the same thing. So we'll stick with the weighted sum, and compute a Figure of Merit accordingly:

FOM = sum_i w_i * M_i

where the weights w and merits M are drawn from multiple considerations. We'll tune values of M to range from 0 to 1, where they saturate. Some candidate elements for the merit by field:

 

 

For the atan() function, tau_1 determines the 50% point and tau_2 the slope of the merit function at that point. 

Atan merit function for tau1 = 45 and tau2 values of 1 (red), 5 (black) and 10 (blue) days. The same basic thing happens for FWHM and depth, where merit increases as the seeing or achieved depth improves.

And here is a plot of exponential weight evolution for taus of 5 (red), 10 (black) and 30 (blue) days. 

Here is an example of FWHM-based merit, driving a field's merit higher if the seeing is really excellent. This is for FWHM_1 = 0.5 and FWHM_2 = 0.1. Depth uniformity would look the same as this.
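A plausible form for the atan() merit described above (the exact expression is not written out on this page, so this is an assumption consistent with tau_1 setting the 50% point and tau_2 the slope, normalized to saturate at 1):

import numpy as np

def atan_merit(t_days, tau1, tau2):
    # Runs from 0 to 1; equals 0.5 at t = tau1, with slope 1/(pi*tau2) there.
    return 0.5 + np.arctan((t_days - tau1) / tau2) / np.pi

# atan_merit(45, tau1=45, tau2=5) -> 0.5; smaller tau2 gives a sharper turn-on.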

 

 

This FOM is computed per field, per passband, for each potential observation. We can also introduce a couple of penalties:

  • penalize an observation if a better opportunity will come up within the characteristic time tau
  • penalize observations as a function of hour angle, favoring observations towards the East, since we can follow transients for a longer time for those fields. 
  • We also need to compute a penalty that connects pairs of fields, namely the slew time between them.

We can adopt the 5 or 10 sigma point source limiting magnitude as a good indicator of quality of an observation.  

Apart from atmospheric variation in cloud transparency and seeing, the observing conditions as a function of time are deterministic. The zenith angle and sky brightness can be computed, and so the ten sigma point source magnitude depends upon

  1. sky brightness, which is a function of moon phase, distance from the moon, lunar elevation, solar cycle. 
  2. zenith angle, which affects both atmospheric attenuation and seeing degradation. 

We can compute all of this in advance, for each field. Jaimal found what seems to be a good Python package for this

http://pythonhosted.org/Astropysics/coremods/obstools.html#astropysics.obstools.Site.apparentCoordinates

So we want to compute, for each field and for each observing opportunity, a zenith-angle and sky-brightness adjusted 5 sigma point source magnitude, m5. The signal to noise ratio for a point source scales as

SNR ~ phi / (FWHM * sqrt(sky))

for a fixed integration time and in the sky-dominated regime. Taking the log of both sides, and incorporating the zenith dependence of FWHM, the passband-dependent extinction A (in magnitudes, relative to the zenith), and the attenuation due to clouds AC (in magnitudes), we compute the change in m5 relative to observing at the zenith under a sky background of m_o magnitudes per square arcsec:

delta m5 = -A - AC + 0.5*(m_sky - m_o) - 1.5*log10(a)

This includes the zenith dependence of FWHM, which scales as airmass^0.6, and extinction in the various bands. The coefficient of the final term comes from the airmass dependence of seeing, a^0.6, and the 2.5 factor for magnitudes, so that 2.5*0.6 = 1.5.
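Assembled into code, with the extinction coefficients taken from the Oct 18 table above (the exact expression is our reconstruction from the stated terms):

import numpy as np

EXTINCTION = {'u': 0.40, 'g': 0.18, 'r': 0.10, 'i': 0.08, 'z': 0.05, 'y': 0.04}

def delta_m5(band, airmass, m_sky, m_o, clouds_mag=0.0):
    # Shift in the 5-sigma point source depth relative to a zenith
    # observation under a fiducial sky of m_o mag per square arcsec.
    return (-EXTINCTION[band] * (airmass - 1.0)  # extinction relative to zenith
            - clouds_mag                         # cloud attenuation AC, magnitudes
            + 0.5 * (m_sky - m_o)                # sqrt(sky) noise term
            - 1.5 * np.log10(airmass))           # seeing ~ a^0.6; 2.5*0.6 = 1.5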

Have each science program fill out this table for each field center. Constrain field weights so they sum to one for each program. Examples from SN, weak lensing, and the static sky are illustrated:

program      program weight   field ID   filter   field weight   tau1 (days)   tau2 (days)   FWHM1   FWHM2   depth1
WL           0.3              100        r        epsilon        365           100           0.5     0.1     27
                              101        r        epsilon        365           100           0.5     0.1     27
SN           0.2              205        g        epsilon        25            2             0.1     0.1     25
static sky   0.2              205        g        epsilon        3365          100           0.8     0.2     27

 

A high value for tau1,2 de-emphasizes that aspect. A low value for FWHM1,2 de-emphasizes seeing. 

A prescription for a (single-band, for now) optimization strategy would be

  1. allocate weights to different science programs, based on fashion and merit
  2. have those science programs determine merit attributes for all fields
  3. pre-calculate zenith angle and sky background dependent m5 values for all fields, for all potential observations. 
  4. Construct a nominal m5 value for zero clouds and median FWHM, for all fields for all observation slots. 

Then, before the start of each night

  1. trim the list of candidate fields to the ones below some cutoff airmass
  2. estimate the co-added depth for each one, compute their depth merit functions 
  3. determine the (partial-credit) time since last observed for each one, compute the temporal merit function for each field
  4. compute merit function for each field, and calculate nominal sky merit function vs. t by looking forward until temporal merit hits 0.9. Store best merit within that interval.

Some references

LSST science book

Lucent paper, 1965, on TSP

SPIE_2006

Hubble Space Telescope scheduler

genetic_edge_recombination_operator

genetic_algorithms_review

kubanek_MS_thesis

genetic_alg_scheduler_SPIE2012

Genetic_algorithm_for_Robotic_Telescope_Scheduling

Multiobjective_Scheduling_by_Genetic_Algorithms

Multi-Objective_Optimization_Paper

stochastic_TSP_paper

LSST OpSim user documentation at NOAO

SSTAR user documentation at NOAO

OpSim web page at LSST

configuration properties for LSST (max rates, etc)

Field_table.txt - ASCII file with: fieldID fieldFov fieldRA fieldDec fieldGL fieldGB fieldEL fieldEB; GL and GB are Galactic l,b and EL, EB are ecliptic l,b.

 
