
This project is not yet finished.  The project is outlined in this Overleaf document (editable at this link): 

https://www.overleaf.com/6955685585cxpvzqqjypkz

The current task of the project is to determine two "nuisance" parameters in the fit (two of the parameters in Table 1): the binning radius, R_{bin}, and the grid density, R_{grid}.  I think we should use five different values for R_{bin}: 100, 150, 200, 250, and 300 Mpc.  These values are chosen because they span the general scale of superclusters/voids at the low end, while the high end matches the size of a Pan-STARRS medium-deep field at the maximum redshift of z = 0.8.

Ideally, R_{grid} would be arbitrarily small (we would do the fit over the entire sky).  But that is not practically feasible.  Instead, we should find the values of R_{grid} at which the SNe 3D peculiarity fits converge.  In other words, we should run the full SNe 3D peculiarity fits at a range of R_{grid} values for each of the R_{bin} values, and determine the point at which making R_{grid} finer (smaller) ceases to matter.  This is a somewhat lengthy process, and we are partially done.

Once these convergence values are determined, we should run a series of chains (~1000) with randomized data.  We keep track of the chains in this sheet:

https://docs.google.com/spreadsheets/d/1ZE3wbgJJHSjeuIt7HxJZCIfzJ6m1zlP-R85ch81ml94/edit#gid=377748410 

Green cells in the sheet are chains that have been completed.

Yellow cells are chains that have been executed on the cluster, but not completed. 

Red cells are chains that we think should be run, but have not yet been started. 

Icy blue cells are chains that we do not presently have any intention to run (they're "on ice").


Because the number of seed points can be quite large before we achieve convergence in R_{grid}, we divide the sky into a number of slices.  Experimentation shows that ~180 slices is a good number.  Therefore, for each of the five values of R_{bin}, we want to run ~10 × 180 = 1800 chains to find the convergence value of R_{grid} (the 10 in that product is the number of R_{grid} values tested), and then an additional 100 × 180 = 18,000 (and ideally 1000 × 180 = 180,000) chains for the random bootstraps.  That is a lot of chains, and keeping track of them takes some accounting.  That's what the Google sheets are for.
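For quick reference, the chain counts above can be reproduced with a bit of shell arithmetic (the numbers are exactly those quoted in the text; nothing here is project-specific):

```shell
# Chain bookkeeping for a single R_bin value, using the numbers quoted above
slices=180    # sky slices per fit
n_grid=10     # R_grid values tried in the convergence scan
n_boot=100    # bootstrap realizations (1000 ideally)

echo "Convergence scan:   $((n_grid * slices)) chains"
echo "Bootstraps (min):   $((n_boot * slices)) chains"
echo "Bootstraps (ideal): $((1000 * slices)) chains"
```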

Starting a new set of chains

Let's say we want to run the fits at R_{grid} = 24, 26, 28, and 30, with R_{bin} = 300, and with the sky divided into 180 slices.

We first need to generate the sbatch files for the chains.  We can use the existing sbatch files as a basis.  Let's use the existing R_{bin} = 200, R_{grid} = 32 file as our reference.

First, on the cluster, move into the appropriate directory (NOTE: for all the command-line code below, do not copy the $ sign - just copy and paste what follows it): 

$ cd ~/stubbs/SNIsotropyProject/

Then, copy the reference file and (from the command line) change the appropriate line(s) in the file: 

$ cp doSNIsotropyFit_Real_Grid32_Bin200_MinSN14_Z0p8_HemisphereBoth_Angle_1of180.slurm doSNIsotropyFit_Real_Grid30_Bin300_MinSN14_Z0p8_HemisphereBoth_Angle_1of180.slurm 

$ sed -i "s/comoving_bin=200/comoving_bin=300/" doSNIsotropyFit_Real_Grid30_Bin300_MinSN14_Z0p8_HemisphereBoth_Angle_1of180.slurm

$ sed -i "s/comoving_grid=32/comoving_grid=30/" doSNIsotropyFit_Real_Grid30_Bin300_MinSN14_Z0p8_HemisphereBoth_Angle_1of180.slurm
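It's worth a quick sanity check that both substitutions landed; the variable names comoving_bin and comoving_grid are those targeted by the sed commands above:

```shell
# Print the comoving_bin / comoving_grid lines from any slice-1 slurm files,
# so we can eyeball that the sed substitutions took effect
grep -E "comoving_(bin|grid)=" doSNIsotropyFit_Real_Grid*_Angle_1of180.slurm 2>/dev/null \
  || echo "no matching slurm files found"
```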

Now let's copy that file for the other values of R_{grid} and update the grid line in each (paste these one line at a time):

for i in 24 26 28
do
cp doSNIsotropyFit_Real_Grid30_Bin300_MinSN14_Z0p8_HemisphereBoth_Angle_1of180.slurm doSNIsotropyFit_Real_Grid"$i"_Bin300_MinSN14_Z0p8_HemisphereBoth_Angle_1of180.slurm
sed -i "s/comoving_grid=30/comoving_grid=$i/" doSNIsotropyFit_Real_Grid"$i"_Bin300_MinSN14_Z0p8_HemisphereBoth_Angle_1of180.slurm
done
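At this point a quick count can confirm that we have one slice-1 file per R_{grid} value (four in total):

```shell
# Count the slice-1 slurm files; after the loop above there should be one per
# R_grid value, i.e. 4
ls doSNIsotropyFit_Real_Grid*_Bin300_MinSN14_Z0p8_HemisphereBoth_Angle_1of180.slurm 2>/dev/null | wc -l
```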

We now have the slurm files for the first of the 180 sky slices for each of the considered R_{grid} values.  Now we need to generate the remaining 179 angle-slice files for each.  We'll do this with two nested bash for loops, which generate 179 × 4 = 716 new files, bringing the total to 180 × 4 = 720 slurm files (again, copy and paste one line at a time).

for i in 24 26 28 30
do
for j in {2..180}
do
cp doSNIsotropyFit_Real_Grid"$i"_Bin300_MinSN14_Z0p8_HemisphereBoth_Angle_1of180.slurm doSNIsotropyFit_Real_Grid"$i"_Bin300_MinSN14_Z0p8_HemisphereBoth_Angle_"$j"of180.slurm
sed -i "s/angle_slice=1/angle_slice=$j/" doSNIsotropyFit_Real_Grid"$i"_Bin300_MinSN14_Z0p8_HemisphereBoth_Angle_"$j"of180.slurm
echo Updated file: doSNIsotropyFit_Real_Grid"$i"_Bin300_MinSN14_Z0p8_HemisphereBoth_Angle_"$j"of180.slurm
done
done
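After the nested loops finish, counting the files verifies that all 4 × 180 = 720 slurm files are in place:

```shell
# Count all generated slurm files across the four R_grid values;
# we expect 4 * 180 = 720
ls doSNIsotropyFit_Real_Grid*_Bin300_MinSN14_Z0p8_HemisphereBoth_Angle_*of180.slurm 2>/dev/null | wc -l
```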

And now we can repeat the for loop, this time submitting the slurm files with sbatch.  Note that the inner loop now runs over all 180 slices, since the slice-1 files need to be submitted too:

for i in 24 26 28 30
do
for j in {1..180}
do
sbatch doSNIsotropyFit_Real_Grid"$i"_Bin300_MinSN14_Z0p8_HemisphereBoth_Angle_"$j"of180.slurm
done
done
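Once submitted, the jobs can be monitored with squeue, Slurm's standard queue-query tool (the flags below are standard Slurm options; the fallback message is only for machines without Slurm installed):

```shell
# Count how many of our jobs are currently queued or running
# (--noheader suppresses the header line so wc counts only jobs)
command -v squeue >/dev/null \
  && squeue -u "$USER" --noheader | wc -l \
  || echo "squeue not available on this machine"
```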

 
