
The current analysis code is structured as follows (algorithm for any part can be easily changed):

  1. Color correction
    1. Take ~20 images from a given exposure batch (enough that each color has a sufficient number of bright pixels).
    2. Pick a percentile (generally the 95th) and, after subtracting the black level, compute the red-to-green and blue-to-green ratios at that percentile.
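A minimal sketch of this color-ratio step, assuming the batch is stacked as an array of debayered RGB frames (the array layout and function name are my assumptions, not the pipeline's actual interface):

```python
import numpy as np

def color_ratios(images, black=0.0, pct=95):
    """Estimate red/green and blue/green scale factors from a batch of
    debayered images (assumed shape: N x H x W x 3, RGB channel order)."""
    stack = np.asarray(images, dtype=float) - black  # black subtraction
    r = np.percentile(stack[..., 0], pct)
    g = np.percentile(stack[..., 1], pct)
    b = np.percentile(stack[..., 2], pct)
    return r / g, b / g
```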
  2. Background subtraction
    1. For each image, subtract background based on 3 sigma clipping.
    2. Rescale the red and blue counts using the ratios found in step 1.
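The 3-sigma clipped background could be computed with astropy's sigma_clipped_stats; a dependency-light sketch of the same idea (the function name and iteration cap are my own):

```python
import numpy as np

def clipped_background(img, sigma=3.0, max_iters=5):
    """Scalar background estimate via iterative sigma clipping: bright
    source pixels are rejected until the clipped sample stabilizes."""
    data = np.asarray(img, dtype=float).ravel()
    mask = np.ones(data.size, dtype=bool)
    for _ in range(max_iters):
        mu, sd = data[mask].mean(), data[mask].std()
        new_mask = np.abs(data - mu) <= sigma * sd
        if new_mask.sum() == mask.sum():
            break  # converged: no pixels changed state
        mask = new_mask
    return data[mask].mean()
```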
  3. Find peaks/sources for each frame
    1. Find peaks using the photutils find_peaks algorithm, with a threshold that depends on the exposure time (but probably a few hundred counts) and a box size approximately equal to the spacing between sources (which depends on the lenslet array).
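The pipeline uses photutils' find_peaks; for illustration, here is a dependency-light sketch of the same local-maximum-above-threshold idea using scipy (function name is hypothetical):

```python
import numpy as np
from scipy.ndimage import maximum_filter

def find_peaks_simple(data, threshold, box_size):
    """Return (x, y) positions of pixels that are the maximum within a
    box_size x box_size neighborhood and exceed threshold."""
    is_max = maximum_filter(data, size=box_size) == data
    ys, xs = np.nonzero(is_max & (data > threshold))
    return list(zip(xs.tolist(), ys.tolist()))
```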
  4. Find centroids of sources and errors on the centroids for each frame
    1. Go through each source list and compute each centroid as the flux-weighted mean position. Note: the box size used here is probably too large: it is about half the spacing between sources. I will fix this.
    2. This list of sources is saved into a dataframe and then a csv file, along with the sources from all the other frames in the batch.
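A sketch of the flux-weighted centroid with a rough per-axis uncertainty (the error formula here is a simple second-moment estimate, an assumption rather than the pipeline's actual error model):

```python
import numpy as np

def weighted_centroid(img, x0, y0, half):
    """Flux-weighted mean position in a (2*half+1)-pixel box around a
    detected peak at integer position (x0, y0)."""
    cut = np.asarray(img, dtype=float)[y0 - half:y0 + half + 1,
                                       x0 - half:x0 + half + 1]
    yy, xx = np.mgrid[y0 - half:y0 + half + 1, x0 - half:x0 + half + 1]
    tot = cut.sum()
    cx, cy = (xx * cut).sum() / tot, (yy * cut).sum() / tot
    # rough per-axis uncertainty from the flux-weighted second moment
    ex = np.sqrt((cut * (xx - cx) ** 2).sum()) / tot
    ey = np.sqrt((cut * (yy - cy) ** 2).sum()) / tot
    return cx, cy, ex, ey
```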
  5. Link sources together between different frames
    1. Choose an ideal frame in the batch, one in which all the sources (and no spurious sources) have been identified, and turn it into a KDtree.
    2. Turn the list of sources from all frames in the batch into a KDtree, then use query_ball_tree to link all sources from those frames to the ideal frame.
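The linking step maps directly onto scipy's KD-tree API; a minimal sketch (the function name and the flat concatenated source list are my assumptions):

```python
from scipy.spatial import cKDTree

def link_to_reference(ref_xy, all_xy, radius):
    """Link every detection in the batch to a source in the 'ideal' frame.
    Returns groups, where groups[i] lists indices into all_xy that fall
    within radius of ideal source i."""
    ref_tree = cKDTree(ref_xy)
    all_tree = cKDTree(all_xy)
    return ref_tree.query_ball_tree(all_tree, r=radius)
```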
  6. Compute source positions and displacements relative to the mean source position (i.e., relative to the frame's center of mass) for each frame.
    1. Note: different numbers of sources may be detected in each frame. There are different ways to deal with that; the current approach (which may not be optimal) is:
      1. Compute the x and y displacement of each source relative to the mean position of the source in all frames.
      2. Subtract off the mean x and y displacements in the frame.
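These two sub-steps can be sketched as array operations, assuming the linked positions are packed into a (frames, sources, 2) array with NaN where a source was not detected (the packing convention and function name are assumptions):

```python
import numpy as np

def tip_tilt_removed_displacements(pos):
    """pos: (n_frames, n_sources, 2) linked positions; NaN marks a source
    not detected in a given frame. Displacements are taken relative to
    each source's mean position over all frames, then the per-frame mean
    displacement (global image motion) is subtracted."""
    d = pos - np.nanmean(pos, axis=0)           # per-source mean removed
    d -= np.nanmean(d, axis=1, keepdims=True)   # per-frame mean removed
    return d
```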
  7. Convert source positions and displacements from pixels to meters and arcsec, respectively.
    1. Compute the median nearest neighbor distance for the sources in the frame. This is the lenslet pitch and allows you to convert the pixel spacing to um on the lenslet array.
    2. We know the AuxTel beam (pupil) is 1.2 m in diameter. The diameter of the re-imaged beam at the lenslet array, set by the parabolic mirror, is 8.5 mm (this could be measured directly; I currently don't). The ratio of the two converts lenslet-array distance to pupil distance, and thus gives the pixels-to-meters conversion.
    3. Using the small-angle approximation, the conversion from a spot's measured displacement in pixels to arcsec is: arcsec/pixel = (meters/pixel on the microlens array) * (1 / focal length [m]) * (1 arcsec / 4.85e-6 rad) * (1 / magnification), where the magnification is the pupil-to-beam ratio 1.2 / 8.5e-3.
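Putting steps 1-3 together (the function name and parameter names are assumptions; pitch_m is the known lenslet pitch and focal_m the lenslet focal length):

```python
import numpy as np
from scipy.spatial import cKDTree

ARCSEC_PER_RADIAN = 1.0 / 4.85e-6

def pixel_scales(xy, pitch_m, focal_m, pupil_m=1.2, beam_m=8.5e-3):
    """Derive meters/pixel (at the pupil) and arcsec/pixel from the
    detected spot positions in a frame."""
    d, _ = cKDTree(xy).query(xy, k=2)     # k=1 is the point itself
    nn_pix = np.median(d[:, 1])           # lenslet pitch in pixels
    m_per_pix = pitch_m / nn_pix          # meters/pixel on lenslet array
    m_per_pix_pupil = m_per_pix * pupil_m / beam_m
    arcsec_per_pix = (m_per_pix / focal_m) * ARCSEC_PER_RADIAN * (beam_m / pupil_m)
    return m_per_pix_pupil, arcsec_per_pix
```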
  8. Do analysis (correlation measurements, rms of displacements in each frame, etc.)
    1. Use treecorr to do kappa-kappa correlations with the x displacements and with the y displacements (and shear-shear correlations with the (x, y) vectors).
    2. Compute the rms displacements in each frame.
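The correlation step relies on treecorr; the per-frame rms, by contrast, is a one-liner on the displacement array. A sketch, assuming the same (frames, sources, 2) displacement array with NaN for undetected sources (that packing is my assumption):

```python
import numpy as np

def frame_rms(d):
    """RMS spot displacement in each frame; d has shape
    (n_frames, n_sources, 2), with NaN for undetected sources."""
    return np.sqrt(np.nanmean((d ** 2).sum(axis=-1), axis=-1))
```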