
For Kewei to do

  • Read the rest of the book! (done)
  • Get swipe card access to the lab (done)
  • Start to learn about scripting and shell scripts (bash) in Unix on the Mac (done)
  • Get up to speed on NumPy and SciPy in Python
  • bug Chris about Pan-STARRS data file locations on the Odyssey machine (done)
  • extract multicolor light curves for ~10,000 objects, in g,r,i,z,y.
  • make plots of colors (g-r) vs. (z-y), etc. and reject outliers from the stellar locus (see High et al SLR paper) 
  • Do this for different pairs of data, taken at different times.
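The outlier-rejection step above can be sketched in NumPy. This is only an illustration with synthetic colors, not the real Pan-STARRS data: fit a straight line to the stellar locus in (g-r) vs. (z-y) and clip points far from it (a crude stand-in for the SLR approach of High et al.).

```python
import numpy as np

# Hypothetical sketch of stellar-locus outlier rejection.
# Synthetic colors stand in for the real g-r and z-y measurements.
rng = np.random.RandomState(0)
g_r = rng.uniform(0.2, 1.4, 500)
z_y = 0.3 * g_r + 0.05 + rng.normal(0, 0.02, 500)
z_y[:10] += 0.5  # inject a few artificial outliers

# Fit a line to the locus, then clip points more than 3 sigma away.
slope, intercept = np.polyfit(g_r, z_y, 1)
resid = z_y - (slope * g_r + intercept)
keep = np.abs(resid) < 3 * resid.std()
print(keep.sum(), "of", keep.size, "points kept")
```

In practice one would iterate the fit-and-clip loop, since outliers inflate the first residual estimate.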

 

For Chris to do:

  1. tell Peter to move the computer that's in the office (tomorrow, Thursday)
  2. approve RC account  (done!)

 

Li Kewei's Lab log for the week of Jun 10-16

  • Learnt about Python and Unix. I have set up a Python development environment on my local machine, and learnt about the Unix system on the computing server.
  • Wrote a crawler to find the data needed to build a light curve from the files on the server. The star to be used is given as an input; the crawler finds all objects within a matching radius (1 pixel, or the sum of the variances of the two PSFs involved) and writes all the data to a file. I used the pickle module to save the file, but pickle doesn't seem to preserve all aspects of the data, so I'm looking for another way to save it. Otherwise I think the code is working correctly. I am testing it on data collected in MD04 during May 2013.
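The matching-and-saving step described above might look like the following sketch. All names and coordinates are invented; the tolerance here is a fixed angular cut standing in for the pixel/PSF-variance radius, and numpy's own `.npy` format is shown as one alternative to pickle that round-trips structured arrays cleanly.

```python
import numpy as np

# Hypothetical sketch: pair detections from two epochs whose positions
# agree within a tolerance, then save with numpy instead of pickle.
tol = 1.0 / 3600.0  # ~1 arcsec in degrees, a stand-in for the real cut

cat1 = np.array([(10.001, 41.002, 18.2), (10.500, 41.100, 19.0)],
                dtype=[('ra', 'f8'), ('dec', 'f8'), ('mag', 'f8')])
cat2 = np.array([(10.0012, 41.0021, 18.3), (11.000, 41.500, 20.1)],
                dtype=[('ra', 'f8'), ('dec', 'f8'), ('mag', 'f8')])

matches = []
for i, s in enumerate(cat1):
    # small-angle separation, with the cos(dec) factor on RA
    d = np.hypot((cat2['ra'] - s['ra']) * np.cos(np.radians(s['dec'])),
                 cat2['dec'] - s['dec'])
    j = d.argmin()
    if d[j] < tol:
        matches.append((i, int(j)))
print(matches)  # [(0, 0)]

# Structured arrays round-trip cleanly through numpy's native format.
np.save('matched_pairs.npy', np.array(matches))
```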

Monday, Jun 17

  • I can process data and produce plots now. Here are some light curves:


Thursday, 20 June

  • Finished reading "Learning Unix for OS X Mountain Lion"
  • Gautham informed me that there's a hard disk error on the server. As a result, file operations (such as cp) are not completing, which is making it hard to process the data. I'm waiting for the disk to come back online.
  • Meanwhile, I'm reading High's paper on Stellar Locus Regression and learning a bit more about numpy and scipy.

Saturday, 29th June

  • Plotted the color diagram for all the objects in stacked gpc1v3. The diagram gets too cluttered if I plot all the objects, so I limited it to points where the error bars are small (<0.002 mag).
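The error-bar cut above amounts to a simple boolean mask. A minimal sketch, with invented arrays in place of the real gpc1v3 catalog columns:

```python
import numpy as np

# Keep only objects whose photometric uncertainty is below 0.002 mag.
g_r = np.array([0.35, 0.90, 1.20, 0.55])   # invented colors
err = np.array([0.0011, 0.0035, 0.0008, 0.0025])  # invented errors
clean = err < 0.002
print(g_r[clean])
```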

Stubbs comments, June 30 2013.

Good work! As a reminder the long term goal here is to see whether the location of the stellar locus changes, in subtle ways, for observations of the same objects taken at different times. 

As a starting point, we'll need a table of magnitudes for (unstacked) images of the Celestial Pole field (called NCP for panstarrs) in different passbands. The email from Gene Magnier talks about how to get that photometry. You then need to select good comparison stars, just like you did above in picking ones with low uncertainties. This entails:

  1. matching up the object catalogs so that you (in effect) get light curves for the stars
  2. choosing ones that are bright, isolated, and not variable stars. I'd start with 
    1. unambiguous matches from the object catalogs, i.e. no nearby companions (like within 20 arcsec or so)
    2. high signal to noise ratio in r band, i.e. median uncertainties less than, say, 0.005 mag. 
    3. no evidence for temporal variability: reduced chi-squared of a fit to a straight line of order one. 
    4. good signal in all passbands (g,r,i,z,y). 
    5. good temporal coverage, like 20-30 data points per band (depends on how many images we actually have)
    6. PSF FWHM consistent with stars, not galaxies. But note the FWHM for stars varies from frame to frame due to changes in atmospheric turbulence.

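Criterion 3 above (no evidence for temporal variability) can be sketched as a weighted straight-line fit to each light curve followed by a reduced chi-squared check. This is a toy example on synthetic data, not the actual pipeline; a constant star should give a value of order one.

```python
import numpy as np

# Hypothetical sketch of the variability test: fit m(t) = a*t + b
# and compute chi^2 per degree of freedom.
rng = np.random.RandomState(1)
t = np.linspace(0, 30, 25)                   # invented epochs (days)
err = np.full_like(t, 0.005)                 # ~0.005 mag uncertainties
mag = 18.0 + rng.normal(0, 0.005, t.size)    # constant star + noise

a, b = np.polyfit(t, mag, 1, w=1.0 / err)    # weighted linear fit
chi2 = np.sum(((mag - (a * t + b)) / err) ** 2)
chi2_red = chi2 / (t.size - 2)               # 2 fitted parameters
print(round(chi2_red, 2))
```

A genuinely variable star would drive chi2_red well above one, so a cut like chi2_red < 2 or 3 would implement the criterion.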
Then, pick a set of images taken in g,r,i,z,y on different nights. The goal is to look at how the different observed magnitudes might depend on atmospheric water vapor, which affects the y band the most, z next, and hardly any effect on g,r,i. And the magnitude changes due to water vapor will depend on the color of the star.  So I'd suggest the following plots:

  • make some color-color plots that include y band, say i-y vs. g-i, for different image pairs. It's important that those plots include a common set of objects, taken from a single pair of images. But you can overlay multiple pairs using symbol colors, etc. 
  • figure out the median magnitude for each object in the "clean" catalog, and then plot (mag-median(mag)) vs. (r-i) color, for various bands. This will let us see color-dependent residuals, if any. 
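The second plot above reduces to a per-object median subtraction. A sketch with invented arrays (4 objects, 6 epochs) in place of the clean catalog:

```python
import numpy as np

# Hypothetical sketch: residuals about each object's median magnitude,
# paired with that object's r-i color for a scatter plot.
rng = np.random.RandomState(2)
n_obj, n_epoch = 4, 6
y_mag = 19.0 + rng.normal(0, 0.01, (n_obj, n_epoch))  # invented y-band mags
r_i = rng.uniform(0.0, 1.0, n_obj)                    # invented colors

resid = y_mag - np.median(y_mag, axis=1, keepdims=True)
# Each object's residuals share one color; flatten both for plotting.
x = np.repeat(r_i, n_epoch)
y = resid.ravel()
print(x.shape, y.shape)
```

A color-dependent water-vapor effect would show up as a trend of y against x in a given band.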

Well done, this is good progress!
 


Data access on Odyssey:

  1. Run JAuth.jar to get login key
  2. ssh -Y into odyssey.fas.harvard.edu, or herophysics.fas.harvard.edu, using the electronic key. 
  3. run tcsh
  4. source .myrcstubbs
  5. data are at /n/panlfs/data/MIRROR/ps1-md/gpc1/
  6. nightly science uses individually warped images, nightly stacks run on stacked frames
  7. image types: wrp is warped. 
  8. see available modules with "module avail"
  9. load a module with "module load hpc/ds9-5.6"
  10. photometry is in .cmf files, as FITS tables. 
  11. in python: 
    1. import pyfits as p
    2. a = p.open('filename')
    3. print a[0].header
  12. or, imhead on command line
  13. a[1].data.AP_MAG for aperture magnitudes
  14. PSF_RA and PSF_DEC are in the skycell files. 
  15. make a scratch directory for data in /n/panlfs

Photpipe photometry is available as text files.

Run gpc1v3 to invoke the scripts for photpipe.

Files are then organized by UT date, with subsets a-j giving 10 spatial subsets.

For example:

/n/panlfs/data/v10.0/GPC1v3/workspace/ut130525f/41

Use the .dcmp files, four per stacked image.

Take the catalog photometry entries and add ZPTMAG to each entry to get corrected photometry.

Photpipe magnitudes are DoPhot photometry with an aperture correction.
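The zeropoint step above is a single array operation. The numbers here are invented, not real dcmp values:

```python
import numpy as np

# Sketch of the ZPTMAG correction: add the zeropoint to every
# catalog (instrumental) magnitude to get calibrated photometry.
inst_mag = np.array([-8.12, -7.95, -9.30])  # invented instrumental mags
ZPTMAG = 27.5                               # invented zeropoint
cal_mag = inst_mag + ZPTMAG
print(cal_mag)
```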

 

 
