
For Kewei to do

  • Read the rest of the book! (done)
  • Get swipe card access to the lab (done)
  • Start to learn about scripting and shell scripts (bash) in Unix on the Mac (done)
  • Get up to speed with NumPy and SciPy in Python
  • Bug Chris about Pan-STARRS data file locations on the Odyssey machine (done)
  • Extract multicolor light curves for ~10,000 objects in g, r, i, z, y.
  • Make plots of colors, (g-r) vs. (z-y), etc., and reject outliers from the stellar locus (see the High et al. SLR paper)
  • Do this for different pairs of data, taken at different times.

 

For Chris to do:

  1. tell Peter to move the computer that's in the office (tomorrow, Thursday)
  2. approve RC account  (done!)
  3. give Gautham access to this wiki. His emails are gnarayan@cfa.harvard.edu and gsnarayan@gmail.com

 

Li Kewei's Lab log for the week of Jun 10-16

  • Learnt about Python and Unix. I have set up a Python development environment on my local machine and learnt how to use the Unix system on the computing server.
  • Wrote a crawler to find the data needed to build a light curve for a given star from the files on the server. The star to be used is given as an input. The crawler finds all objects within a matching radius (1 pixel, or the sum of the variances of the two PSFs involved) and writes all their data to a file. I used the pickle module to save the file, but that doesn't seem to preserve all aspects of the data, so I'm looking for another way to save it. Apart from that, I think the code is working correctly. I am testing it on data collected in MD04 during May 2013.
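
For reference, a minimal sketch of the workaround I have in mind: pull the needed columns out of the pyfits table into plain NumPy arrays before pickling, rather than pickling the FITS records directly. The filename here is a placeholder, and the column names follow the Odyssey notes at the bottom of this page.

    import pickle
    import numpy as np
    import pyfits as p

    # Convert the FITS table columns to plain numpy arrays; pickling
    # the pyfits records directly seems to lose some of the metadata.
    a = p.open('example.cmf')            # placeholder filename
    table = a[1].data
    subset = dict((col, np.asarray(table.field(col)))
                  for col in ('PSF_RA', 'PSF_DEC', 'AP_MAG'))
    a.close()

    f = open('lightcurve_data.pkl', 'wb')
    pickle.dump(subset, f)
    f.close()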

Monday, Jun 17

  • I can process data and produce plots now. Here are some light curves:


Thursday, Jun 20

  • Finished reading "Learning Unix for OS X Mountain Lion"
  • Gautham informed me that there's a hard disk error on the server. As a result, file operations (such as cp) are not completing, which is giving me considerable trouble processing the data. I'm waiting for the disk to come back online.
  • Meanwhile, I'm reading High's paper on Stellar Locus Regression and learning a bit more about NumPy and SciPy.

Saturday, Jun 29

  • Plotted the color-color diagram for the objects in the stacked gpc1v3 data. The diagram gets too cluttered if I plot every object, so I limited it to points with small error bars (<0.002 mag). A sketch of the cut is below.
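
A minimal sketch of that cut (the variable names are placeholders, not the actual code):

    import numpy as np
    import matplotlib.pyplot as plt

    # Keep only objects whose error bars are below 0.002 mag in the
    # bands being plotted; 'mag' and 'err' are placeholder dicts of
    # per-object arrays.
    good = np.ones(len(mag['g']), dtype=bool)
    for band in ('g', 'r', 'z', 'y'):
        good = good & (err[band] < 0.002)

    plt.plot(mag['g'][good] - mag['r'][good],
             mag['z'][good] - mag['y'][good], 'k.', markersize=2)
    plt.xlabel('g - r')
    plt.ylabel('z - y')
    plt.show()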

Stubbs' comments, June 30 2013.

Good work! As a reminder, the long-term goal here is to see whether the location of the stellar locus changes, in subtle ways, for observations of the same objects taken at different times.

As a starting point, we'll need a table of magnitudes for (unstacked) images of the Celestial Pole field (called NCP in Pan-STARRS) in different passbands. The email from Gene Magnier describes how to get that photometry. You then need to select good comparison stars, just like you did above in picking ones with low uncertainties. This entails:

  1. matching up the object catalogs so that you (in effect) get light curves for the stars
  2. choosing ones that are bright, isolated, and not variable stars. I'd start with 
    1. unambiguous matches from the object catalogs, i.e. no nearby companions (like within 20 arcsec or so)
    2. high signal to noise ratio in r band, i.e. median uncertainties less than, say, 0.005 mag. 
    3. no evidence for temporal variability: reduced chi-squared of a fit to a straight line of order one (see the sketch after this list).
    4. good signal in all passbands (g,r,i,z,y). 
    5. good temporal coverage, like 20-30 data points per band (depends on how many images we actually have)
    6. PSF FWHM consistent with stars, not galaxies. But note the FWHM for stars varies from frame to frame due to changes in atmospheric turbulence
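
A minimal sketch of the variability test in item 3, as a weighted straight-line fit (the function name and inputs are placeholders):

    import numpy as np

    def reduced_chi2_line(t, mags, errs):
        # Weighted least-squares fit of mag = a + b*t (np.polyfit takes
        # weights of 1/sigma). A reduced chi-squared of order one means
        # the light curve is consistent with a straight line, i.e. the
        # star shows no evidence for variability.
        t = np.asarray(t, dtype=float)
        mags = np.asarray(mags, dtype=float)
        errs = np.asarray(errs, dtype=float)
        coeffs = np.polyfit(t, mags, 1, w=1.0/errs)
        model = np.polyval(coeffs, t)
        chi2 = np.sum(((mags - model) / errs)**2)
        return chi2 / (len(mags) - 2)   # two fitted parameters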

Then, pick a set of images taken in g,r,i,z,y on different nights. The goal is to look at how the different observed magnitudes might depend on atmospheric water vapor, which affects the y band the most, z next, and has hardly any effect on g,r,i. And the magnitude changes due to water vapor will depend on the color of the star. So I'd suggest the following plots:

  • make some color-color plots that include y band, say i-y vs. g-i, for different image pairs. It's important that those plots include a common set of objects, taken from a single pair of images. But you can overlay multiple pairs using symbol colors, etc. 
  • figure out the median magnitude for each object in the "clean" catalog, and then plot (mag - median(mag)) vs. (r-i) color, for various bands. This will let us see color-dependent residuals, if any (see the sketch below).
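
A sketch of that second plot, assuming the matched magnitudes are already arranged as an (n_stars, n_epochs) array per band (all names here are placeholders):

    import numpy as np
    import matplotlib.pyplot as plt

    # mags[band] is an (n_stars, n_epochs) array of matched magnitudes;
    # r_minus_i holds the median r-i color of each star.
    med = np.median(mags['y'], axis=1)
    resid = mags['y'] - med[:, np.newaxis]

    plt.plot(r_minus_i, resid[:, 0], 'k.')   # residuals for one epoch
    plt.xlabel('median (r - i)')
    plt.ylabel('y - median(y)')
    plt.show()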

Well done, this is good progress!
 


Gautham's response to Stubbs comments, June 30 (extracted by Kewei from email)

Chris in his response suggests choosing stars with good signal in all passbands. This may not actually be possible: astrophysics ensures that the blue stars won't have much signal in the red and the red stars won't have much signal in the blue, while the PS1 telescope and camera ensure that the brightest stars (the ones that would give the blue stars good signal in the red) will saturate in the blue.
You also don't want a strong cut based on errors: closer stars are brighter and so have smaller errors, and by selecting only these you might not fully probe how the population varies.
I'd guess that long term you'll want a cut based on colors, but you'll have to try several things.

 

What I'm suggesting is to do a test in the regular Medium Deep Fields before going to the NCP, because we have more data in the Medium Deep Fields (all the photpipe measurements + stable astrometry).

 

Comments on Stubbs' suggested conditions (each condition is repeated, with the comment beneath it)

    1. unambiguous matches from the object catalogs, i.e. no nearby companions (like within 20 arcsec or so)
      This is fine, but you can be more aggressive with the matching tolerance, since you already know these objects are stars, and the photpipe dumps are from the stacked images and are therefore deeper. If the stars are isolated in the photpipe dcmps from the stacked images, they will be isolated in the unstacked images.
    2. high signal to noise ratio in r band, i.e. median uncertainties less than, say, 0.005 mag.
      Yes. We'll have to plot light curves of the stars for a few CMF files over 2-3 nights and actually look, to make sure your code is behaving here too.
    3. no evidence for temporal variability: reduced chi-squared of a fit to a straight line of order one.
      Very hard to get because of astrophysics and the dynamic range of the telescope. You can test how well you do from the stacks themselves, but you'll only really have 5 magnitudes of dynamic range to work with in the unstacked images, and "good signal" (a high-significance detection, i.e. small errors) is going to be hard to get in all those passbands at once. Again, a good reason to work with the MDS data, which is better characterized, rather than hitting your head against the wall with the NCP data.
    4. good signal in all passbands (g,r,i,z,y).
      This is easy in the Medium Deep Fields, and we can even check these against existing data.
      This should be possible in the NCP as long as the PS1 astrometry is good and we can match up at least 20-30 image catalogs.
    5. good temporal coverage, like 20-30 data points per band (depends on how many images we actually have)
      Again, helpful to use MDS before NCP, because if an object is not consistent with a galaxy in the dcmp catalogs, it can be safely treated as a star.
    6. PSF FWHM consistent with stars, not galaxies. But note the FWHM for stars varies from frame to frame due to changes in atmospheric turbulence
      I'd not impose this cut until later. Magnitude error cuts can give you very odd subsets of the full population.

Once you have a list of magnitudes, I have code that does robust 3-sigma clipping, which will nicely clean up the catalog.
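
For reference, a minimal sketch of robust 3-sigma clipping (not Gautham's actual code, just the standard idea: iterate, using the median and the MAD as robust estimates of center and scatter):

    import numpy as np

    def sigma_clip(values, nsigma=3.0, max_iter=5):
        # Iteratively reject points more than nsigma robust standard
        # deviations (1.4826 * MAD) from the median, until the set of
        # kept points stops changing.
        values = np.asarray(values, dtype=float)
        keep = np.ones(len(values), dtype=bool)
        for _ in range(max_iter):
            med = np.median(values[keep])
            sigma = 1.4826 * np.median(np.abs(values[keep] - med))
            new_keep = np.abs(values - med) < nsigma * sigma
            if np.array_equal(new_keep, keep):
                break
            keep = new_keep
        return keep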


Jul 23

I have been working on making the code more robust and building abstraction barriers, so now I have code that can reliably process data from both IPP and photpipe and produce plots (color-color diagrams or otherwise). I also got SLR working with Gautham's help, so now we can have calibrated colors too.

I have looked at sample data from the NCP, but Gautham says I should work with MD data for a while longer. I can't start processing NCP data right now anyway because I don't know how to perform the WCS transformations, and the data doesn't seem to be very robust just yet.

OK, I guess my main question now is: what kind of plots should I be making?

 

Jul 31

As far as I understand, my tasks now are the following:

  1. Use stacked images from photpipe to build a catalogue of stars that have low errors in all bands, have no close neighbors, and lie close to the stellar locus.
  2. Check whether the (airmass-corrected) stellar colors in that catalogue change with time, and if so, whether that change depends on the median stellar color.

So currently I am very close to attaining the first goal. The second goal is a bit tricky because of the mass of data I have to churn through, but hopefully I'll be able to get the code right soon.


Stubbs' comments July 31 2013:

I'm unclear on whether you're still working with MDF images or have shifted over to NCP data. The advantage of the latter is that, apart from clouds, the magnitudes should be essentially the same from night to night, and so the effects of water content variation should be easy to pick out. Here's a reiteration of what I sent via email on July 26:

1) Make sure you can identify a common set of stars that have many data points.
2) Extract a table of magnitudes vs. time. The g and r bands should be insensitive to water, while i, z, and y are not.
3) Find the instances of data being taken in photometric conditions (a common stable zeropoint).
4) See what the range in airmass is, since the field of view is a few degrees, and plot colors vs. airmass. Correct if necessary (a sketch is below).
5) Then plot the set of airmass-corrected colors vs. time, and look for variations on the timescale of a few days that might be due to water vapor.
6) For each star, determine the median colors (r-i), (r-z), (r-y), and (z-y). Plot the departures from the medians as a function of the median color. Do redder stars have bigger excursions?
7) Make some SLR plots and see if there is a measurable signal there.

I think the specific response to your question about what to plot is in items 5-7.
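
A minimal sketch of the airmass correction in steps 4-5, for one star, assuming per-epoch arrays (all names here are placeholders):

    import numpy as np

    # Fit color vs. airmass with a straight line, then remove the
    # trend, referenced to airmass 1. 'airmass' and 'z_minus_y' are
    # per-epoch arrays for a single star.
    slope, intercept = np.polyfit(airmass, z_minus_y, 1)
    z_minus_y_corr = z_minus_y - slope * (airmass - 1.0)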

Aug 2

I was working with MDF data but now I'm switching over to NCP data. For the MDF data, I have been able to identify a common set of stars (with low errors in all passbands and no nearby objects) and collect data on them over time (from the IPP data). I identified two objects as identical if they were within 1 pixel of each other, but that condition seems to be too tight (I'm not getting much data at all), so I'm going to relax it to 5 pixels. This shouldn't be a problem because the stars are not supposed to have any neighbors within 20 pixels. The code for that is currently running and might take some time. I have also written code for plotting the data, but right now there isn't much to plot because the identification condition was too tight. A sketch of the matching step is below.
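
A sketch of that matching step, using a KD-tree on pixel coordinates (the function and its inputs are placeholders, not the running code):

    import numpy as np
    from scipy.spatial import cKDTree

    def match_catalogs(xy_ref, xy_new, tol=5.0):
        # For each (x, y) position in the reference catalog, find the
        # nearest detection in the new catalog; keep pairs closer than
        # 'tol' pixels (5 is the relaxed tolerance mentioned above).
        tree = cKDTree(xy_new)
        dist, idx = tree.query(xy_ref)
        good = dist < tol
        return np.flatnonzero(good), idx[good]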

Anyhow, I'm pretty sure the code is working, so I'm going to go ahead and look at the NCP data. Identifying a common set of stars from that data is probably going to be more difficult because we don't have stacked images, but my idea now is to look for a few "good" images taken in photometric conditions and identify the stars from there. Will try this tomorrow.


I have been playing around with the MDF data for a while longer because the stacked data from photpipe produce a nice catalogue, which trims down the amount of data I have to churn through by a lot. Anyway, I plotted the color residual (instrumental colors minus the colors calibrated using SLR) against the calibrated g-r color. There does not seem to be much of a trend at all:

I limited the data to only points with small error bars, but there's still no trend:

So this seems to suggest that the error in color is not a function of stellar color. I will try this again with unstacked IPP data soon.


Data access on Odyssey:

  1. Run JAuth.jar to get a login key.
  2. ssh -Y into odyssey.fas.harvard.edu or herophysics.fas.harvard.edu, using the electronic key.
  3. run tcsh
  4. source .myrcstubbs
  5. data are at /n/panlfs/data/MIRROR/ps1-md/gpc1/
  6. nightly science uses individually warped images; nightly stacks run on stacked frames
  7. image types: wrp is warped.
  8. see available modules with "module avail"
  9. load a module with "module load hpc/ds9-5.6"
  10. photometry is in .cmf files, as FITS tables.
  11. in python:
    1. import pyfits as p
    2. a = p.open('filename')
    3. print a[0].header
  12. or, use imhead on the command line
  13. a[1].data.AP_MAG gives the aperture magnitudes
  14. PSF_RA and PSF_DEC are in the skycell files.
  15. make a scratch directory for data in /n/panlfs

Photpipe photometry is available as text files.

Run gpc1v3 to invoke the scripts for photpipe.

The files are then organized by UT date, with subsets a-j as 10 spatial subsets.

For example:

/n/panlfs/data/v10.0/GPC1v3/workspace/ut130525f/41

Use the dcmp files, four per stacked image.

Take the catalog photometry entries and add ZPTMAG to each entry to get corrected photometry (a sketch is below).
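
A minimal sketch of that step, assuming the raw magnitudes have already been read into an array and ZPTMAG parsed from the dcmp header (both names are placeholders):

    import numpy as np

    # raw_mags: catalog magnitudes from one dcmp file; zptmag: the
    # ZPTMAG value from that file's header.
    corrected_mags = np.asarray(raw_mags) + zptmag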

Photpipe magnitudes are DoPhot photometry with an aperture correction.

 

 
