1. General introduction
This wiki outlines the procedures for running the MIT General Circulation Model (MITgcm) Hg/POPs simulations on the Odyssey system. General information about the MITgcm can be found in the MITgcm User's Manual.
We have one type of simulation so far:
1) A nominal 1 degree x 1 degree online simulation with ECCOv4 ocean circulation data over a global domain, with higher spatial resolution over the Arctic Ocean.
2. Obtain source code
Users from the Harvard BGC group can obtain a copy of the source code from:
/n/sunderland_lab/Lab/MITgcm/
Note: Do NOT copy the verification folder; it takes up a huge amount of disk space.
In your home directory (~username), make an MITgcm directory and copy into it all of the folders from the Lab copy except verification. For example:
cd
mkdir MITgcm
cd MITgcm
cp -r /n/sunderland_lab/Lab/MITgcm/bin/ .
cp -r /n/sunderland_lab/Lab/MITgcm/doc/ .
cp -r /n/sunderland_lab/Lab/MITgcm/eesupp/ .
...etc.!
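Equivalently, a short shell loop (directory names taken from the structure described below) copies everything except verification:
for dir in bin doc eesupp jobs lsopt model optim pkg tools utils; do
  cp -r /n/sunderland_lab/Lab/MITgcm/$dir .
done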
For users outside this group, we are currently working on a GitHub site.
The numerical model is contained within an execution environment support wrapper. This wrapper is designed to provide a general framework for grid-point models; MITgcm is a specific numerical model that uses the framework. Under this structure, the model is split into execution environment support code and conventional numerical model code. The execution environment support code is in the eesupp/ directory. The grid-point model code is in the model/ directory. Code execution actually starts in the eesupp/ routines, not in the model routines; for this reason, the top-level MAIN.F is in the eesupp/src/ directory. In general, end-users should not need to worry about this level. The top-level routine for the numerical part of the code is model/src/THE_MODEL_MAIN.F.
Here is a brief description of the directory structure of the model under the root tree:
doc: Contains brief documentation notes.
eesupp: Contains the execution environment source code, subdivided into two subdirectories, inc/ and src/.
model: Contains the main source code, also subdivided into inc/ and src/.
pkg: Contains the source code for the packages. Each package corresponds to a subdirectory. For example, gmredi/ contains the code related to the Gent-McWilliams/Redi scheme.
tools: This directory contains various useful tools. For example, genmake2 is a shell script used to generate your Makefile. The directory adjoint/ contains the Makefile specific to the Tangent linear and Adjoint Model Compiler (TAMC) that generates the adjoint code. The tools/ directory also contains the subdirectory build_options/, which contains the 'optfiles' with the compiler options for the different compilers and machines that can run MITgcm.
utils: This directory contains various utilities. The subdirectory knudsen2/ contains code and a Makefile that compute coefficients of the polynomial approximation to the Knudsen formula for an ocean nonlinear equation of state. The matlab/ subdirectory contains MATLAB scripts for reading model output directly into MATLAB. The scripts/ directory contains C-shell post-processing scripts for joining processor-based and tile-based model output. The subdirectory exch2/ contains the code needed for the exch2 package to work with different combinations of domain decompositions.
jobs: Contains sample job scripts for running MITgcm.
lsopt: Line search code used for optimization.
optim: Interface between MITgcm and line search code.
3. Insert your own package or migrate a package from ECCOv1 to ECCOv4
If you want to add a chemical tracer simulation (e.g., Hg, PCBs, PFOS), please follow the instructions below:
- Copy your code package as a separate folder in MITgcm/pkg/ (e.g., MITgcm/pkg/pfos/). If you don't know how to develop such a package, a good template to follow is the hg package in /home/geos_harvard/yanxu/MITgcm/pkg/hg/. Generally, you need to write a series of routines in your package that solve the different terms of the continuity equation. The physical transport is handled by the ptracers/ package, so you just need to focus on the source-sink terms of your pollutant(s) of interest. You also need a couple of header files that define a series of variables, and some files to handle the disk I/O.
- Hook your code up to the main program via the gchem/ package. You will need to modify several files, including:
- gchem_calc_tendency.F: from here you can call the functions that solve different biogeochemical processes, e.g. chemistry, surface forcing, partitioning.
- gchem_fields_load.F: from here you can call the function to load input data.
- GCHEM.h: add a trigger to enable your package, such as useHG, usePCB.
- gchem_readparms.F: from here you can call the function that handles parameter initialization.
- gchem_init_fixed.F: from here you can call the function that handles diagnostics initialization.
You can also refer to Yanxu's modifications to the gchem/ package at /home/geos_harvard/yanxu/MITgcm/pkg/gchem; you can find them with grep -i "yxz" *.
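For orientation, the hook in gchem_calc_tendency.F generally follows the pattern sketched below. This is only a sketch: the package name pcb, the flag usePCB, and the routine name PCB_CHEMISTRY are illustrative placeholders, and the argument list will depend on your package.
#ifdef ALLOW_PCB
      IF ( usePCB ) THEN
C       Illustrative call: compute the source-sink tendencies for your tracers
        CALL PCB_CHEMISTRY( bi, bj, myTime, myIter, myThid )
      ENDIF
#endif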
Tips & tricks if you're migrating your own package to ECCOv4 [contributed by Helen Amos]
Here are issues that came up when migrating the PCB simulation from ECCOv1 to ECCOv4:
- You need to comment out all calls to ALLOW_CAL in pcb_fields_load.F.
- In gchem_init_fixed.F, make sure you have the line CALL PCB_PARM. Yanxu got rid of his hg_parms.F file, so a CALL HG_PARM line is missing from his gchem_init_fixed.F file. The PCB simulation still has a pcb_parms.F file, and if it isn't "turned on" by calling it from gchem_init_fixed.F, your output will be all NaNs.
- Use online wind, ice, and solar radiation information from ECCOv4. In ECCOv1, we read wind, ice, and radiation from an offline file (e.g., archived from MERRA or GEOS-5). Now those variables are generated online. You need to do two things to activate this capability:
- Add "#ifdef USE_EXFIWR" statements to your package. The easiest way to do this is to search "USE_EXFIWR" in the HG code (
/n/sunderland_lab/MITgcm/pkg/hg/) and copy these to your own code.
- After adding the "#ifdef USE_EXFIWR" statements to your package, you need to update the names of your ice, wind, and radiation variables. You probably need to do this if your code has air-sea exchange, ice interactions, or photochemistry. In pcba_surfforcing.F, which handles air-sea exchange, I had to replace wind(i,j,bi,bj)with windo and fIce(i,j,bi,bj) with ice. If you haven't done this properly, your PTRACER output might have big blocks missing, like this:
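A minimal sketch of that renaming pattern (the localWind/localIce variable names are illustrative; only windo, ice, wind, and fIce come from the actual code):
#ifdef USE_EXFIWR
        localWind = windo
        localIce  = ice
#else
        localWind = wind(i,j,bi,bj)
        localIce  = fIce(i,j,bi,bj)
#endif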
- Add "#ifdef USE_EXFIWR" statements to your package. The easiest way to do this is to search "USE_EXFIWR" in the HG code (
4. Make a working directory
Here we will set up the following directories within your ~username/MITgcm/verification/global_hg_llc90/ directory:
code: Header/option or other files that are often modified.
build: Where the make program puts the intermediate source code.
run: The run directory.
Before compiling the code, you need to obtain the contents of the code/ directory. Copy all the files from /n/sunderland_lab/Lab/MITgcm/verification/global_hg_llc90/code/ :
cd ~username/MITgcm/verification/global_hg_llc90/
cp -r /n/sunderland_lab/Lab/MITgcm/verification/global_hg_llc90/code/ ./
Lastly, make empty build/ and run/ directories within your ~username/MITgcm/verification/global_hg_llc90/ directory:
cd ~username/MITgcm/verification/global_hg_llc90/
mkdir build
mkdir run
If you are running the Hg simulation, you should be all set. If you are running a different simulation (e.g., PCBs or PFOS) and only using Hg as a template, you need to modify:
X_OPTIONS.h : Where 'X' needs to be renamed to match your chemical package (e.g. PCB_OPTIONS.h) and the contents should match the options in your package code (e.g., in pkg/pcb/).
packages.config : Search and replace 'hg' with your package name (e.g., 'pcb').
X_SIZE.h : *Special instructions!* If you have copied Yanxu's Hg source code, you do not need to add an HG_SIZE.h file to your code/ directory; the HG_SIZE.h file is already in pkg/hg/. If you are running another package (e.g., PFOS), check your pkg/pfos/ directory. If you don't see a PFOS_SIZE.h file, then you need to add one to your code/ directory.
5. Compiling the code
Before compiling, you also need to load the proper compilers and libraries for the code (MPI, Intel Fortran, etc.). You can add these module loads to your ~/.bashrc file (see /n/home09/hmh/.bashrc_sunderland for an example). IMPORTANT: Odyssey is constantly updating its "modules" and will be phasing out the "old" module system used below. For each set of modules (Intel Fortran compiler, MPI, netCDF) you load, you will need to refer to a specific file within your "build_options" directory, and may need to edit the file paths within it to match the location of the modules on Odyssey. See the Appendix for lists of module combinations that work with MITgcm ECCOv4 and their corresponding build_options files (attached to this wiki).
module load hpc/openmpi-intel-latest
module load hpc/netcdf-3.6.3
Then go to the build/ directory:
cd ~username/MITgcm/verification/global_hg_llc90/build
First, build the Makefile. Note: the "-optfile" filename below (and its contents) will need to be changed if you have to load different module versions than the specific ones listed above.
make clean (note: this is needed if you change which modules are loaded and/or the optfile)
../../../tools/genmake2 -mods=../code -optfile=../../../tools/build_options/linux_ia64_ifort+mpi_harvard3
The command-line option -mods tells genmake to override model source code with any files in the directory ../code/. I have written an alias called 'premake' in my .bashrc file to replace this long genmake command. If you copy the 'premake' alias from /n/home09/hmh/.bashrc_sunderland (for example) into your own .bashrc file, then you can instead type:
premake
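For reference, the alias is just the long genmake2 command above wrapped in a name, i.e., a line like this in your .bashrc:
alias premake='../../../tools/genmake2 -mods=../code -optfile=../../../tools/build_options/linux_ia64_ifort+mpi_harvard3'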
Once a Makefile has been generated, we create the dependencies with the command:
make depend
This modifies the Makefile by attaching a (usually long) list of files upon which other files depend. The purpose of this is to reduce re-compilation if and when you start to modify the code. The make depend command also creates links from the model source to this directory. Note that the make depend stage will occasionally produce warnings or errors, since the dependency parsing tool is unable to find all of the necessary header files (e.g., netcdf.inc). In these circumstances, it is usually OK to ignore the warnings/errors and proceed to the next step.
Next, you compile the code using:
make
You can speed up the compilation by adding "-j4" or "-j8" (e.g., "make -j8") to split the compilation among 4 or 8 processors.
Debugging tip: If you are only making small changes to the code, you don't need to go through the whole recompilation process again; just type "make" or "make -j8" to recompile. BUT: if you change the Odyssey modules you load and/or your build_options/ file of choice, you should run "make clean" and then "premake", "make depend", and "make".
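Putting it together, a full rebuild after changing modules or the optfile looks like this (assuming the 'premake' alias is defined):
cd ~username/MITgcm/verification/global_hg_llc90/build
make clean
premake
make depend
make -j8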
If the compilation goes well (i.e., no error messages), move the generated mitgcmuv executable to your run directory:
mv mitgcmuv ../run
6. Running the simulation
The first time you run, you need to follow the steps below through section 6.6 to set up your run directory properly.
MITgcm output is big and will quickly fill up your space on the head node (300 GB). Once you have space on a file server, you should, at a minimum, store your model output there. You can even set up your run/ directory on the file server and run jobs directly from there, so you're not bogging down the head node by moving large files after each run. If you want to see an example of a run/ directory set up on a file server, check this out:
/n/sunderland_lab/Lab/MITgcm/verification/global_hg_llc90/run/
6.1 Copy first batch of files (contributed by Helen Amos)
Copy these folders to ~username/MITgcm/verification/
cd ~username/MITgcm/verification/
cp -r /n/sunderland_lab/Lab/MITgcm/verification/global_oce_cs32/ ./
cp -r /n/sunderland_lab/Lab/MITgcm/verification/global_oce_input_fields/ ./
Copy these folders to global_X_llc90/, where 'X' corresponds to the simulation you're trying to run (e.g., 'darwin', 'hg', 'pcb'). See below for Hg:
cd global_hg_llc90
cp -r /n/sunderland_lab/Lab/MITgcm/verification/global_hg_llc90/input/ ./
cp -r /n/sunderland_lab/Lab/MITgcm/verification/global_hg_llc90/input.core2 ./
cp -r /n/sunderland_lab/Lab/MITgcm/verification/global_hg_llc90/input.ecco_v4 ./
cp -r /n/sunderland_lab/Lab/MITgcm/verification/global_hg_llc90/input.ecmwf ./
cp -r /n/sunderland_lab/Lab/MITgcm/verification/global_hg_llc90/input_itXX ./
cd into the input_itXX/ directory and open the prepare_run file (e.g., in emacs, vi, nano, or whichever text editor you prefer).
Now edit the file paths. NOTE: the dirInputFields path is specific to where you are on Odyssey; the line below is just an example. Type 'pwd' if you don't know which home server you are on.
set dirInputFields = /n/home09/hmh/MITgcm/verification/global_oce_input_fields
set dirLlc90 = /n/sunderland_lab/Lab/MITgcm/verification/global_oce_llc90
Next, execute these commands in your MITgcm/verification/global_hg_llc90/run/ directory:
csh (note: this command opens a c-shell)
../input_itXX/prepare_run
exit (note: this closes the c-shell)
Debugging tip: if you get an error when you attempt to execute prepare_run, make sure the file paths to get from your ~username/MITgcm/verification/global_hg_llc90 directory to the sunderland_lab copy of global_oce_llc90 are correct. For example, if your MITgcm directory is not in your home (username) directory, you may need another set of "../".
6.2 Link forcing files to your run folder
Go to your run/ folder:
cd ~username/MITgcm/verification/global_hg_llc90/run
ln -s /n/sunderland_lab/Lab/eccov4_input/controls/* .
ln -s /n/sunderland_lab/Lab/eccov4_input/MITprof/* .
ln -s /n/sunderland_lab/Lab/eccov4_input/pickups/* .
ln -s /n/sunderland_lab/Lab/eccov4_input/era-interim/* .
6.3 Forcing folder
Still in your run/ directory, make a forcing/ subdirectory:
mkdir forcing
Move all the forcing files inside:
mv EIG* forcing/
cp /n/sunderland_lab/Lab/MITgcm/verification/global_hg_llc90/run/forcing/runoff-2d-Fekete-1deg-mon-V4-SMOOTH.bin forcing/.
6.4 Initial conditions and other forcing files
Still in your run/ directory, make an initial/ subdirectory:
mkdir initial/
In this folder, you can put the initial conditions of your tracers. If you have not run the model before, you must link to these files from sunderland_lab as follows:
cd initial/
ln -s /n/sunderland_lab/Lab/MITgcm/verification/global_hg_llc90/run/initial/* .
(Helpful tip: if you later want to change the initial conditions files/filenames, you must edit them in your data.ptracers file within your run directory; data* files are copied in step 6.6.)
Go back to your run directory and make another directory called input_hg/ for Hg deposition input from the atmosphere:
cd .. (to get back to your run directory, assuming you were in run/initial/)
mkdir input_hg
Now fill it with your input files. If you do not have any, use the input files from sunderland_lab:
ln -s /n/sunderland_lab/Lab/MITgcm/verification/global_hg_llc90/run/input_hg/* input_hg/.
If you are running with the food web model (check ~username/MITgcm/verification/global_hg_llc90/code/HG_OPTIONS.h to see whether the food web option is set to "define"), you will need to get plankton inputs.
Still in your run directory:
mkdir input_darwin
ln -s /n/sunderland_lab/Lab/MITgcm/verification/global_hg_llc90/run/input_darwin/* input_darwin/.
6.5 Control files
Still in your run/ directory, make a control/ subdirectory:
mkdir control
Move all the control files into this folder:
mv xx_* control/
cp /n/sunderland_lab/Lab/MITgcm/verification/global_hg_llc90/run/control/wt_* control/
6.6 data* files
If you're running an Hg simulation, copy data* files to your run/ directory from here:
cp /n/sunderland_lab/Lab/MITgcm/verification/global_hg_llc90/run/data* .
Note: the file called "data" contains the variables that set how long a run you want to do (in number of timesteps) and the length of a timestep (in seconds).
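For example, these settings live in the PARM03 namelist of "data" and look something like the following (the variable names are standard MITgcm; the values here are illustrative only):
 &PARM03
 nTimeSteps = 8760,
 deltaTClock = 3600.,
 &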
If you're running the PCB or DARWIN simulations, copy data* files to your run/ directory from Svante. If you don't know how, Helen Amos can help you with this. The bottom line is that you cannot reuse the old files from the older ECCO version simulations.
6.7 Submit job
Copy the submit script into run/, and rename it to any name you like:
cp /n/sunderland_lab/Lab/MITgcm/verification/global_hg_llc90/run/run.mehg .
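The script is a standard SLURM batch script. A minimal sketch of its contents is shown below; the job name, partition, wall time, and memory values are illustrative and should be adjusted to your own allocation.
#!/bin/csh
#SBATCH -J hg_run
#SBATCH -n 96
#SBATCH -p general
#SBATCH -t 7-00:00
#SBATCH --mem-per-cpu=3000
#SBATCH -o slurm.o%j
mpirun -np 96 ./mitgcmuv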
Then we can submit the job to the queue. To submit:
sbatch YOUR_RUN_SCRIPT
If your run finishes without any problems, the very last line of your STDOUT.0000 file should indicate that the model 'ENDED NORMALLY'.
6.8 How to check on your run during and after completion
For basic information on your runs, you can simply enter either of the following:
squeue
sacct
See the very useful webpage https://rc.fas.harvard.edu/resources/documentation/convenient-slurm-commands/ for more options for checking the status of your submitted runs, cancelling a run, etc. Even more info on "squeue" and "sacct" (e.g., seeing other people running on your partition, getting information about completed runs) is here: https://computing.llnl.gov/linux/slurm/squeue.html ; https://computing.llnl.gov/linux/slurm/sacct.html
Use this line to see what the memory requirements were for a run, so you can adjust what you ask for in your run script; replace JOBID with the number of your previous job:
sacct -j JOBID --format=JobID,JobName,ReqMem,MaxRSS,Elapsed
For how to use the output from the above, see the section "A note on requesting memory" at https://rc.fas.harvard.edu/resources/odyssey-quickstart-guide/ .
6.9 Debugging tips [contributed by Helen Amos]
1. If your run crashes, check the following files for error messages:
STDERR.0000
STDOUT.0000
slurm.o<job_id> (where <job_id> is the ID number assigned by the queue system)
2. You may also find it helpful to check the *.f files in your build/ directory. These show what the code looks like after preprocessing, so if pieces of code are being chopped off or #include statements are missing, this kind of thing will turn up in the *.f files.
3. If you want to isolate whether your problem is coming from partitioning, chemistry, deposition, etc., you can comment out individual processes in gchem_calc_tendency.F. Recompile with 'make' and run with a limited number of processes turned on.
Debugging with fewer or more processors
ECCOv4 is configured to run on 96 cores. While you are encouraged to run with 96 cores for simulations that you'll do science with, 96 cores is a big request and you can end up waiting a LONG time in the queue for your job to begin. If you are debugging and only need to do short tests, use 13 cores.
WARNING: setting sNx (tile size in X) and sNy (tile size in Y) to 90 as done below will result in a "relocation truncated to fit" compilation error on Odyssey, because a single tile then needs more memory than the Intel Fortran compiler allows by default (2 GB). You will need to add the flags "-mcmodel=medium -shared-intel" to FFLAGS="$FFLAGS ..." within your build_options/ file of choice. Click here to download an example optfile with the flags added, and see Appendix 1 of the wiki. For more information: http://wiki.seas.harvard.edu/geos-chem/index.php/Intel_Fortran_Compiler#Relocation_truncated_to_fit_error
Here's how to change the number of cores you need for a job:
- Go to your code/ directory.
- Make a copy of your SIZE.h file and rename it SIZE.h.96np. Now when you need to go back to 96 cores, your SIZE.h file information is saved.
- Open SIZE.h
- In SIZE.h, set sNx = 90 and sNy = 90 (see the SIZE.h sketch after this list).
- In SIZE.h, set nPx = 13.
- Save and close SIZE.h
- Edit your optfile of choice within ~username/MITgcm/tools/build_options/ to have "-mcmodel=medium -shared-intel" within $FFLAGS.
- Go to your build/ directory.
- Recompile your code.
- Move the mitgcmuv* executable to your run/ directory.
- Go to your run directory.
- Make a copy of data.exch2 and rename it data.exch2.96np. Now when you need to go back to 96 cores, it's saved.
- Open data.exch2.
- Comment out the line blanklist = XXXX, where XXXX is a list of tile numbers.
- Save and close data.exch2
- Make a copy of your run script and rename it <your_run_script.csh.96np>. Now when you need to go back to 96 cores, your run script is saved.
- Open your run script
- Change the number of cores you're requesting from 96 to 13 in your run script.
- Submit your job.
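For reference, the decomposition block in SIZE.h for the 13-core configuration looks something like the sketch below (this assumes one 90x90 tile per process; lines elided with "..." stay as in your saved 96-core version):
      PARAMETER (
     &           sNx =  90,
     &           sNy =  90,
     ...
     &           nSx =   1,
     &           nSy =   1,
     &           nPx =  13,
     &           nPy =   1,
     ... )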
6.10 Final remarks
Documentation is available for ECCOv4, the physical model configuration of our simulations [pdf], along with the associated publication [pdf].
Processing model output and regridding input files involves the gcmfaces/ package. Documentation for gcmfaces/ is available here [pdf].
Special thanks to Gael Forget, Stephanie Dutkiewicz and Jeffery Scott at MIT.
7. Issues to watch out for
Is your run crashing because of diagnostics issues?
- The value assigned to PTRACERS_num in code/PTRACERS_SIZE.h needs to match the value of PTRACERS_numInUse in run/data.ptracers.
- If PTRACERS_useKPP = .FALSE. in run/data.ptracers, then you have to remove all KPP diagnostics from your run/data.diagnostics file.
- Make sure your package is turned on in run/data.gchem (e.g., usePCB = .TRUE.)
- Make sure the gchem package is turned on in run/data.pkg (useGChem = .TRUE.)
- Make sure your package is listed in code/packages.config under 'gchem' (e.g., 'hg' or 'pcb').
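As a concrete example of keeping these settings consistent (a two-tracer PCB run is assumed here purely for illustration):
code/PTRACERS_SIZE.h :  PARAMETER( PTRACERS_num = 2 )
run/data.ptracers    :  PTRACERS_numInUse = 2,
run/data.gchem       :  usePCB = .TRUE.,
run/data.pkg         :  useGChem = .TRUE.,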
Sometimes your run will crash because files you linked to earlier (with the ln -s command) got corrupted somehow. To check and make sure the links are not broken, type:
ls -l
If a link does not point to a specific file within the /n/sunderland_lab directory, you may need to re-link! E.g., if it looks like "../../global_oce_input_fields/ecmwf//EIG_dsw_1992" instead of "/n/sunderland_lab/Lab/eccov4_input/era-interim/EIG_dsw_1992". Note: this only seems to be a problem sometimes, so don't be concerned if your links look weird; worry only if your run crashes looking for a linked file.
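A quick way to list only the broken links in your run/ directory (GNU find matches a symlink whose target is missing with -xtype l):
find . -xtype l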
8. Appendix 1: Odyssey modules and optfile working combinations for compiling MITgcm
Odyssey sometimes changes which modules are available, and this might change further as they switch to a "new" module system. See https://rc.fas.harvard.edu/resources/documentation/software-on-odyssey/intro/
Here are some examples of module + optfile combinations (attached) that work on the old and new module systems, plus instructions for how to change your optfile if the available MPI, Intel Fortran, and/or netCDF modules change. The format is to list the "module load" commands, which can be run at the command line or put in your ~/.bashrc file, and then the corresponding "premake" command, which is run within your build/ directory (if you are using Hg, e.g., ~username/MITgcm/verification/global_hg_llc90/build/).
You should keep all your optfiles within your directory: ~username/MITgcm/tools/build_options/ .
1. Old module system:
a. Standard, as written in the instructions above. The optfile is already in your build_options directory if you followed the instructions to copy folders from sunderland_lab.
module load hpc/openmpi-intel-latest
module load hpc/netcdf-3.6.3
../../../tools/genmake2 -mods=../code -optfile=../../../tools/build_options/linux_ia64_ifort+mpi_harvard3
Note: this will load the following versions: Intel compiler 13.0.079; OpenMPI 1.6.2.
b. Used by Chris Horvat. Download the optfile by clicking here, use scp to copy this file to Odyssey, then mv it into your build_options directory.
module load centos6/openmpi-1.7.2_intel-13.0.079
module load centos6/netcdf-4.3.0_intel-13.0.079
../../../tools/genmake2 -mods=../code -optfile=../../../tools/build_options/linux_amd64_ifort_mpi_odyssey2 -mpi -enable=mnc
2. New module system:
Download optfile by clicking here.
source new-modules.sh
module load intel/13.0.079-fasrc01
module load openmpi/1.8.3-fasrc01
module load netcdf/3.6.3-fasrc01
../../../tools/genmake2 -mods=../code -optfile=../../../tools/build_options/linux_ia64_ifort+mpi_harvard5
3. Example optfile if you are debugging on 13 processors only
This uses the same modules loaded in "2. New module system", but the $FFLAGS section was modified so the code can compile with the larger tile size needed when using only 13 CPUs. Click here to download.
4. How to figure out which modules to load and how to update your optfile to work with them.
MITgcm seems to be meant to work with the Intel Fortran compiler version 13.0.079. Working from that assumption, here is how to find which OpenMPI and netCDF modules are available on Odyssey and compatible with Intel 13.0.079.
1. Load Lmod, Odyssey's new module system. At the command line, in any directory, enter:
source new-modules.sh
2. Load the Intel compiler:
module load intel/13.0.079-fasrc01
3. Find out which modules are compatible with this Intel version:
module avail
Right now, the list looks something like this:
openmpi/1.6.5-fasrc01
openmpi/1.8.1-fasrc01
openmpi/1.8.3-fasrc01
netcdf/3.6.3-fasrc01
This means you can choose any of the three OpenMPI versions, but there is only one compatible netCDF version.
4. Load your OpenMPI module of choice and the netCDF module. As an example, here we'll choose OpenMPI 1.6.5.
module load openmpi/1.6.5-fasrc01
module load netcdf/3.6.3-fasrc01
5. Find out what the filepaths are for these modules:
printenv
Now look for "LD_LIBRARY_PATH" and "CPATH" (search within the terminal window). For the modules above, they should look something like this:
LD_LIBRARY_PATH=/n/sw/fasrcsw/apps/Comp/intel/13.0.079-fasrc01/netcdf/3.6.3-fasrc01/lib64:/n/sw/fasrcsw/apps/Comp/intel/13.0.079-fasrc01/openmpi/1.6.5-fasrc01/lib:/n/sw/intel_cluster_studio-2013/lib/intel64:/lsf/7.0/linux2.6-glibc2.3-x86_64/lib
CPATH=/n/sw/fasrcsw/apps/Comp/intel/13.0.079-fasrc01/netcdf/3.6.3-fasrc01/include:/n/sw/fasrcsw/apps/Comp/intel/13.0.079-fasrc01/openmpi/1.6.5-fasrc01/include:/n/sw/intel_cluster_studio-2013/composerxe/include/intel64:/n/sw/intel_cluster_studio-2013/composerxe/include
6. Create a new optfile, by making a copy of a previous one, within your ~username/MITgcm/tools/build_options/ directory.
cd ~username/MITgcm/tools/build_options/
cp linux_ia64_ifort+mpi_harvard3 linux_ia64_ifort+mpi_harvard_test (just an example, can change filename to whatever you want)
7. Open the file you've just copied (e.g., with emacs, nano, vi, or whatever text editor) and look for the following lines, which you will want to edit (note: they may be slightly different; this is an example):
INCLUDES='-I/n/sw/openmpi-1.6.2_intel-13.0.079/include -I/n/sw/intel_cluster_studio-2013/mkl/include'
LIBS='-L/n/sw/openmpi-1.6.2_intel-13.0.079/lib -L/n/sw/intel_cluster_studio-2013/mkl/lib/intel64'
8. Copy the openmpi path from CPATH into "INCLUDES" (after -I) and the one from LD_LIBRARY_PATH into "LIBS" (after -L). Keep the mkl entries the same. Following our same example, they will now look like this:
INCLUDES='-I/n/sw/fasrcsw/apps/Comp/intel/13.0.079-fasrc01/openmpi/1.6.5-fasrc01/include -I/n/sw/intel_cluster_studio-2013/mkl/include'
LIBS='-L/n/sw/fasrcsw/apps/Comp/intel/13.0.079-fasrc01/openmpi/1.6.5-fasrc01/lib -L/n/sw/intel_cluster_studio-2013/mkl/lib/intel64'
You can verify that the new paths exist, e.g.:
cd /n/sw/fasrcsw/apps/Comp/intel/13.0.079-fasrc01/openmpi/1.6.5-fasrc01/include
5. More information on Odyssey modules & useful commands:
https://rc.fas.harvard.edu/resources/documentation/software-on-odyssey/modules/
module purge - clears all loaded modules
module list - shows currently loaded modules