
Now that you have compiled the MITgcm and copied the executable to the run directory, you can start a MITgcm simulation.  Below, we will look at the files in the run directory that control the simulation.

Run directory files

In the global_hg_llc90/run, pfos/run, and pcb/run directories, you will find several sample scripts that you can use to run MITgcm jobs.

Simulation type               | Run script              | data.exch2 file  | data file
13 CPUs, debug run (10 hours) | run.mitgcm.13np.debug   | data.exch2.13np  | data.debug_run
13 CPUs, 1-month run          | run.mitgcm.13np.1month  | data.exch2.13np  | data.1month_run
96 CPUs, debug run (10 hours) | run.mitgcm.96np.debug   | data.exch2.96np  | data.debug_run
96 CPUs, 20-year run          | run.mitgcm.96np.20yr    | data.exch2.96np  | data.20yr_run

We will look at each of these scripts in more detail below.

The run.mitgcm scripts

The run.mitgcm* scripts are used to start a MITgcm simulation with 13 CPUs (for debugging) or 96 CPUs.  For example, the run.mitgcm.13np.debug script looks like this:

#!/bin/bash 
#SBATCH -n 13
#SBATCH -N 1
#SBATCH -t 60
#SBATCH -p regal
#SBATCH --mem-per-cpu=3750
#SBATCH --mail-type=ALL 
#EOC
#------------------------------------------------------------------------------
#              Harvard Biogeochemistry of Global Pollutants Group             !
#------------------------------------------------------------------------------
#BOP
#
# !IROUTINE: run.mitgcm.13np.debug
#
# !DESCRIPTION: Script to run a debug MITgcm simulation with 13 CPUs.
#\\
#\\
# !CALLING SEQUENCE:
#  sbatch run.mitgcm.13np.debug       # To submit a batch job
#  ./run.mitgcm.13np.debug            # To run in an interactive session
#
# !REMARKS:
#  Consider requesting an entire node (-n 64 -N 1), which will prevent
#  outside jobs from slowing down your simulation.
#
#  Also note: Make your timestep edits in "data.debug_run", which will
#  automatically be copied to "data" by this script.
#
# !REVISION HISTORY:
#  17 Feb 2015 - R. Yantosca - Initial version
#EOP
#------------------------------------------------------------------------------
#BOC

# Make sure we apply the .bashrc_mitgcm settings
source ~/.bashrc_mitgcm

# Copy run-time parameter input files for the 13 CPU run
cp -f data.debug_run   data
cp -f data.exch2.13np  data.exch2

# Remove old output files
rm -f STDOUT.*
rm -f STDERR.*
rm -f PTRACER*

# Run MITgcm with 13 CPUs
time -p ( mpirun -np 13  ./mitgcmuv )
exit 0
#EOC

 

The run.mitgcm.96np.debug script is similar, except that it requests more CPUs from SLURM:

#!/bin/bash 
#SBATCH -n 128
#SBATCH -N 2
#SBATCH -t 60
#SBATCH -p regal
#SBATCH --mem-per-cpu=3750
#SBATCH --mail-type=ALL 
#EOC
#------------------------------------------------------------------------------
#              Harvard Biogeochemistry of Global Pollutants Group             !
#------------------------------------------------------------------------------
#BOP
#
# !IROUTINE: run.mitgcm.96np.debug
#
# !DESCRIPTION: Script to run a debug MITgcm Hg simulation with 96 CPUs.
#\\
#\\
# !CALLING SEQUENCE:
#  sbatch run.mitgcm.96np.debug   # To submit a batch job
#
# !REMARKS:
#  Consider requesting 2 entire nodes (-n 128 -N 2), which will prevent
#  outside jobs from slowing down your simulation.
#
#  Also note: Make your timestep edits in "data.debug_run", which will
#  automatically be copied to "data" by this script.
#
# !REVISION HISTORY:
#  17 Feb 2015 - R. Yantosca - Initial version
#EOP
#------------------------------------------------------------------------------
#BOC

# Make sure we apply the .bashrc_mitgcm settings
source ~/.bashrc_mitgcm

# Copy run-time parameter input files for the 96 CPU run
cp -f data.debug_run   data
cp -f data.exch2.96np  data.exch2

# Remove old output files
rm -f STDOUT.*
rm -f STDERR.*
rm -f PTRACER*

# Run MITgcm with 96 CPUs
time -p ( mpirun -np 96  ./mitgcmuv )
exit 0
#EOC

 

Each of the run.mitgcm* scripts does the following things:

  1. Gets the proper compiler and library settings from your ~/.bashrc_mitgcm file.

  2. Reserves CPUs for the MITgcm run.

    • NOTE: For MITgcm production runs, we recommend that you request 128 CPUs (i.e. 2 entire nodes) even though the MITgcm only uses 96.  This will reserve both nodes exclusively for your MITgcm simulation, and will prevent other Odyssey jobs from running on the same node and competing for resources.

  3. Creates the proper data file for your simulation from a template.  This file contains basic information for the simulation (see the sample namelist after this list), including

    • The number of timesteps for the simulation to run;
    • How frequently diagnostics are saved to disk (i.e. dumpFreq);
    • How frequently statistics are written to the log file (i.e. monitorFreq).

  4. Creates the proper data.exch2 file for your simulation from a template.

    • The data.exch2 file, which is described below, contains information about the tiles used for the horizontal grid specification.

  5. Runs the MITgcm simulation and prints the real (wall-clock), user, and system time in seconds (this is what the time -p command does).
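
For reference, here is a minimal sketch of the timestep-related block of a data file.  The parameter names (nIter0, nTimeSteps, deltaT, dumpFreq, monitorFreq) are standard MITgcm PARM03 entries, but the values below are only illustrative; the real settings live in the data.debug_run, data.1month_run, and data.20yr_run templates:

 &PARM03
# nIter0 = starting timestep number; nTimeSteps = how many steps to run
  nIter0       = 0,
  nTimeSteps   = 8,
# deltaT = model timestep in seconds
  deltaT       = 1800.,
# dumpFreq / monitorFreq = seconds between diagnostic dumps / log statistics
  dumpFreq     = 2592000.,
  monitorFreq  = 43200.,
 /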

The data.exch2.13np and data.exch2.96np files

The data.exch2.13np file contains the following namelist declaration, which is used to set up the horizontal grid for 13 CPUs.

 &W2_EXCH2_PARM01
  W2_printMsg          = 0                                                    ,
  W2_mapIO             = 1                                                    ,
  preDefTopol          = 0                                                    ,
#==============================================================================
#-- 5 facets llc_120 topology (drop facet 6 and its connection):
#==============================================================================
  dimsFacets(1:10)     = 90, 270, 90, 270, 90, 90, 270, 90, 270, 90           ,
  facetEdgeLink(1:4,1) = 3.4, 0. , 2.4, 5.1                                   ,
  facetEdgeLink(1:4,2) = 3.2, 0. , 4.2, 1.3                                   ,
  facetEdgeLink(1:4,3) = 5.4, 2.1, 4.4, 1.1                                   ,
  facetEdgeLink(1:4,4) = 5.2, 2.3, 0. , 3.3                                   ,
  facetEdgeLink(1:4,5) = 1.4, 4.1, 0. , 3.1                                   ,
/
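
As a quick sanity check (our arithmetic, not stated in the file): assuming the 90 x 90 tiles implied by the 13 CPU setup, the five facets in dimsFacets (90x270, 90x270, 90x90, 270x90, 270x90) decompose into 3 + 3 + 1 + 3 + 3 = 13 tiles, i.e. one tile per CPU, which is why this file needs no blankList.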

 

The data.exch2.96np file is used to set up the horizontal grid for 96 CPUs.  It contains the same namelist variables as data.exch2.13np, plus an additional variable named blankList, which is used to blank out (i.e. skip) certain tiles, typically ones containing only land.

  &W2_EXCH2_PARM01
  W2_printMsg          = 0                                                    ,
  W2_mapIO             = 1                                                    ,
  preDefTopol          = 0                                                    ,
#==============================================================================
#-- 5 facets llc_120 topology (drop facet 6 and its connection):
#==============================================================================
  dimsFacets(1:10)     = 90, 270, 90, 270, 90, 90, 270, 90, 270, 90           ,
  facetEdgeLink(1:4,1) = 3.4, 0. , 2.4, 5.1                                   ,
  facetEdgeLink(1:4,2) = 3.2, 0. , 4.2, 1.3                                   ,
  facetEdgeLink(1:4,3) = 5.4, 2.1, 4.4, 1.1                                   ,
  facetEdgeLink(1:4,4) = 5.2, 2.3, 0. , 3.3                                   ,
  facetEdgeLink(1:4,5) = 1.4, 4.1, 0. , 3.1                                   ,
#==============================================================================
#-- 30x30   nprocs = 96 : Blank out certain tiles
#==============================================================================
  blankList(1:21)      = 1,2,3,5,6,28,29,30,31,32,33,49,50,
                         52,53,72,81,90,99,108,117
/
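
Here the header comment tells us the tiles are 30 x 30, so the same arithmetic as above gives 27 + 27 + 9 + 27 + 27 = 117 tiles across the five facets; blanking the 21 tiles in blankList leaves 117 - 21 = 96 active tiles, one per CPU.  (The blanked tiles are typically ones containing only land, so nothing is lost by skipping them.)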

 

The run.mitgcm* scripts will copy data.exch2.13np or data.exch2.96np to a file named data.exch2, so that you do not have to remember to do this step yourself.

NOTE: You should not have to touch the data.exch2* input files, because they are already set up for the 13 CPU and 96 CPU runs.  The only time you would need to modify them is if you were changing the horizontal grid specification and the number of CPUs that you want to use.
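
Note also that the number of CPUs has to agree with the compile-time domain decomposition in SIZE.h, which is why the setcpus command used below swaps in the proper SIZE.h along with data.exch2.  In the standard MITgcm naming (our gloss, not spelled out in these files), the tile size sNx x sNy in SIZE.h must match the tiling described here: 90 x 90 tiles for the 13 CPU runs and 30 x 30 tiles for the 96 CPU runs.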

The data files

Debug run

To submit a debugging run (on 13 CPUs), type the following commands:

#### To run a debug Hg simulation ###
cd MITgcm_code/                         # Switch to main code directory
setcpus 13 hg                           # Picks the proper SIZE.h and data.exch2 file for 13 CPUs
cd verification/global_hg_llc90/run     # Change to the Hg run directory
sbatch run.mitgcm.13np.debug            # Submit the run to SLURM

#### To run a debug PFOS simulation ###
cd MITgcm_code/                         # Switch to main code directory
setcpus 13 pfos                         # Picks the proper SIZE.h and data.exch2 file for 13 CPUs
cd verification/pfos/run                # Change to the PFOS run directory
sbatch run.mitgcm.13np.debug            # Submit the run to SLURM

#### To run a debug PCB simulation ###
cd MITgcm_code/                         # Switch to main code directory
setcpus 13 pcb                          # Picks the proper SIZE.h and data.exch2 file for 13 CPUs
cd verification/pcb/run                 # Change to the PCB run directory
sbatch run.mitgcm.13np.debug            # Submit the run to SLURM
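
Once the job is submitted, you can keep an eye on it with standard SLURM tools.  A quick sketch (squeue and sacct are standard SLURM commands; the STDOUT.0000 file name follows MITgcm's convention of writing one log file per MPI rank):

#### To monitor a running job ###
squeue -u $USER                         # List your pending and running SLURM jobs
tail -f STDOUT.0000                     # Follow the MITgcm log from MPI rank 0
sacct -j <jobid>                        # Show accounting info for a completed job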

 

1-month run

To submit a 1-month simulation on 13 CPUs, follow the same steps as for the debug run, but submit the run.mitgcm.13np.1month script instead (it copies data.1month_run to data).
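
A sketch for the Hg simulation, by direct analogy with the debug commands above (substitute pfos or pcb and the corresponding run directory for the other simulations):

#### To run a 1-month Hg simulation ###
cd MITgcm_code/                         # Switch to main code directory
setcpus 13 hg                           # Picks the proper SIZE.h and data.exch2 file for 13 CPUs
cd verification/global_hg_llc90/run     # Change to the Hg run directory
sbatch run.mitgcm.13np.1month           # Submit the run to SLURM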

20-year run

To submit a 20-year production run on 96 CPUs, use the run.mitgcm.96np.20yr script (it copies data.20yr_run to data and data.exch2.96np to data.exch2).  As noted above, consider requesting 2 entire nodes so that no other jobs compete with yours.
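
A sketch for the Hg simulation; here the setcpus 96 hg syntax is our assumption by analogy with the 13 CPU examples above:

#### To run a 20-year Hg simulation ###
cd MITgcm_code/                         # Switch to main code directory
setcpus 96 hg                           # Picks the proper SIZE.h and data.exch2 file for 96 CPUs
cd verification/global_hg_llc90/run     # Change to the Hg run directory
sbatch run.mitgcm.96np.20yr             # Submit the run to SLURM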

Other runs


