
Tuesday

File Reorganization

  • I have a lot of Jupyter notebooks that are getting cluttered and hard to keep track of. Went through each one and made a directory of them, with a description of what each file does, in README.md.

Spline Fitting

  • Worked on "Continuum_Reduction_Ind.ipynb" to create a function that fits a spline to the spectrum in a FITS file. Made the function callable from other Python notebooks.
  • Created "Continuum_Reduction_All.ipynb", which fits a spline to all of the FITS files and graphs them. Here are some sample spline fits:


  • Some of the spline curves don't look like the best fit; I need to read more about spline curves to check that I'm doing the right kind of spline fit (there are quite a few types of spline fitting, like bivariate splines, that I still need to explore).
  • I can also get the coefficients of the spline fit, but I'm not sure what they represent. I can ask Stubbs / Eske more about spline fitting.
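A minimal sketch of the kind of fit described above, assuming `scipy.interpolate.UnivariateSpline` is the fitter being used (the variable names and the synthetic spectrum here are illustrative; the actual code lives in Continuum_Reduction_Ind.ipynb). One note on the coefficients question: for `UnivariateSpline`, `get_coeffs()` returns the coefficients of the B-spline basis functions over the knot sequence from `get_knots()`, so they only have meaning relative to those knots.

```python
# Sketch of a spectrum spline fit (illustrative names, synthetic data).
import numpy as np
from scipy.interpolate import UnivariateSpline

def fit_continuum(wavelength, flux, smoothing=None):
    """Fit a cubic smoothing spline to a spectrum; returns the spline object,
    which other notebooks can call like a function."""
    return UnivariateSpline(wavelength, flux, k=3, s=smoothing)

# Synthetic stand-in for a spectrum read out of a FITS file.
wavelength = np.linspace(4000.0, 7000.0, 300)
rng = np.random.default_rng(0)
flux = 1.0 + 0.3 * np.sin(wavelength / 500.0) + rng.normal(0.0, 0.02, wavelength.size)

spl = fit_continuum(wavelength, flux)
continuum = spl(wavelength)

# The coefficients are B-spline basis coefficients over the knot sequence;
# for a cubic (k=3) fit there are (number of distinct knots) + 2 of them.
knots = spl.get_knots()
coeffs = spl.get_coeffs()
```

Because the fitted spline is a callable object, importing the function that returns it is enough to reuse the continuum model from other notebooks.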

 -

Spline Fitting

  • Working in Continuum_Reduction_Ind.ipynb
  • Met with Chris, and he suggested that I take a few points (around 10) for each graph to get better spline fits that pass through the data points.
  • I am trying to automate the process of choosing points for the spline fit.
  • Looking into scipy.interpolate.UnivariateSpline, especially the w parameter, which lets me weight the points with a separate array (points with higher weight are fitted more closely). I'm trying to devise a weighting function such that densely clustered points are weighted less than isolated points (like inflection points).
    • Since the data is already organized into buckets (grouped by a range of 5 to reduce variability), I thought that if more data points fall in a bucket, the resulting averaged data point should get less weight.
    • I tried the function weight = 1/(bucket length), which gives higher weight to smaller buckets, but the spline didn't look good.
    • Tried weight = 1/pow(x, bucket_length) for different values of x with 1 <= x <= 2; the higher orders did not look very good either (e.g., x = 1.5 (left) and x = 2 (right)).
    • Maybe a better weighting function would work.
  • To do next time: choose points by hand and see how the fit goes; once the fit is good, try to design an algorithm around the points already in buckets.
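The bucketing-plus-weights idea above can be sketched as follows. The helper names, bucket width, and synthetic spectrum are mine, not the notebook's; the point is just to show counts per bucket feeding `UnivariateSpline`'s w parameter with both candidate weight functions.

```python
# Sketch of the bucket-based weighting experiment (illustrative names/data).
import numpy as np
from scipy.interpolate import UnivariateSpline

def bucket_average(wavelength, flux, width=50.0):
    """Average the spectrum in wavelength buckets of the given width.
    Returns bucket centers, mean flux per bucket, and point counts per bucket."""
    edges = np.arange(wavelength.min(), wavelength.max() + width, width)
    idx = np.digitize(wavelength, edges) - 1
    centers, means, counts = [], [], []
    for i in np.unique(idx):
        mask = idx == i
        centers.append(wavelength[mask].mean())
        means.append(flux[mask].mean())
        counts.append(mask.sum())
    return np.array(centers), np.array(means), np.array(counts)

def bucket_weight(counts, x=None):
    """weight = 1/count, or the power-law variant 1/x**count for 1 <= x <= 2."""
    counts = np.asarray(counts, dtype=float)
    if x is None:
        return 1.0 / counts                   # the 1/(bucket length) rule
    return 1.0 / np.power(float(x), counts)   # e.g. x = 1.5 or x = 2

# Synthetic, unevenly sampled spectrum as a stand-in for the FITS data.
rng = np.random.default_rng(1)
wl = np.sort(rng.uniform(4000.0, 7000.0, 600))
fl = 1.0 + 0.3 * np.sin(wl / 400.0) + rng.normal(0.0, 0.05, wl.size)

wc, fc, counts = bucket_average(wl, fl)
spl = UnivariateSpline(wc, fc, w=bucket_weight(counts), k=3)
fit = spl(wc)
```

In `UnivariateSpline`, a larger w[i] makes the fit track that point more tightly, so 1/count down-weights the averaged points that came from crowded buckets, which matches the intent described above.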



