\section{Where to find information}
\label{sect:whereToFindInfo}
\begin{rawhtml}
<!-- CMIREDIR:whereToFindInfo: -->
\end{rawhtml}

A web site is maintained for release 2 (``Pelican'') of MITgcm:
\begin{rawhtml} <A href="http://mitgcm.org/pelican/" target="idontexist"> \end{rawhtml}

\section{Obtaining the code}
\label{sect:obtainingCode}
\begin{rawhtml}
<!-- CMIREDIR:obtainingCode: -->
\end{rawhtml}

MITgcm can be downloaded from our system by following
the instructions below. As a courtesy we ask that you send e-mail to us at
\begin{rawhtml} <A href="mailto:MITgcm-support@mitgcm.org"> \end{rawhtml}
MITgcm-support@mitgcm.org
\begin{rawhtml} </A> \end{rawhtml}
so that we can keep track of who is using the model.  There are two
ways to obtain the code: by checking it out from our CVS server
(Method 1 below) or by downloading a tar file (Method 2 below); only
the CVS route provides easy support for maintenance updates.

\subsection{Method 1 - Checkout from CVS}
\label{sect:cvs_checkout}

If CVS is available on your system, we strongly encourage you to use it. CVS
provides an efficient and elegant way of organizing your code and keeping
track of your changes. If CVS is not available on your machine, you can also
download the code as a tar file (see Method 2 below).

Before you can use CVS, the \texttt{CVSROOT} environment variable must
be set within your shell.  For a csh or tcsh shell, put:
\begin{verbatim}
% setenv CVSROOT :pserver:cvsanon@mitgcm.org:/u/gcmpack
\end{verbatim}
in your \texttt{.cshrc} or \texttt{.tcshrc} file.  For bash or sh
shells, put:
\begin{verbatim}
% export CVSROOT=':pserver:cvsanon@mitgcm.org:/u/gcmpack'
\end{verbatim}
in your \texttt{.profile} or \texttt{.bashrc} file.
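
A quick way to confirm that the variable has been picked up in a new
shell is simply:
\begin{verbatim}
% echo $CVSROOT
\end{verbatim}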

To get MITgcm through CVS, first register with the MITgcm CVS server
and then check out a copy of the code.
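A typical session looks something like the following sketch
(\texttt{MITgcm\_code} is one of the aliases described below, and the
anonymous CVS password is given in the download instructions on the
MITgcm web site):
\begin{verbatim}
% cvs login
   ( enter the CVS password )
% cvs co MITgcm_code
\end{verbatim}
The \texttt{cvs login} step only needs to be done once, since CVS
caches the password in a \texttt{.cvspass} file in your home
directory.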

The MITgcm web site contains further directions concerning the source
code and CVS.  It also contains a web interface to our CVS archive so
that one may easily view the state of files, revisions, and other
development milestones:
\begin{rawhtml} <A href="http://mitgcm.org/download" target="idontexist"> \end{rawhtml}
\begin{verbatim}
http://mitgcm.org/source_code.html
\end{verbatim}
\begin{rawhtml} </A> \end{rawhtml}

As a convenience, the MITgcm CVS server contains aliases which are
named subsets of the codebase.  These aliases can be especially
helpful when used over slow internet connections or on machines with
restricted storage space.  Table \ref{tab:cvsModules} contains a list
of CVS aliases.
\begin{table}[htb]
  \centering
  \begin{tabular}[htb]{|lp{3.25in}|}\hline
    \textbf{Alias Name}    &  \textbf{Information (directories) Contained}  \\\hline
    \texttt{MITgcm\_code}  &  Only the source code -- none of the verification examples.  \\
    \texttt{MITgcm\_verif\_basic}
    &  Source code plus a small set of the verification examples
    (\texttt{global\_ocean.90x40x15}, \texttt{aim.5l\_cs}, \texttt{hs94.128x64x5},
    \texttt{front\_relax}, and \texttt{plume\_on\_slope}).  \\
    \texttt{MITgcm\_verif\_atmos}  &  Source code plus all of the atmospheric examples.  \\
    \texttt{MITgcm\_verif\_ocean}  &  Source code plus all of the oceanic examples.  \\
    \texttt{MITgcm\_verif\_all}    &  Source code plus all of the
    verification examples. \\\hline
  \end{tabular}
  \caption{MITgcm CVS Modules}
  \label{tab:cvsModules}
\end{table}

The checkout process creates a directory called \texttt{MITgcm}. If
the directory \texttt{MITgcm} exists this command updates your code
based on the repository. Each directory in the source tree contains a
directory \texttt{CVS}. This information is required by CVS to keep
track of your file versions with respect to the repository. Don't edit
the files in \texttt{CVS}!  You can also use CVS to download code
updates.  More extensive information on using CVS for maintaining
MITgcm code can be found
\begin{rawhtml} <A href="http://mitgcm.org/usingcvstoget.html" target="idontexist"> \end{rawhtml}
here
\begin{rawhtml} </A> \end{rawhtml}.
It is important to note that the CVS aliases in Table
\ref{tab:cvsModules} cannot be used in conjunction with the CVS
\texttt{-d DIRNAME} option.  However, the \texttt{MITgcm} directories
they create can be changed to a different name following the check-out:
\begin{verbatim}
   %  cvs co MITgcm_verif_basic
   %  mv MITgcm MITgcm_verif_basic
\end{verbatim}
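
If you need the code as it stood at a particular release rather than
the current development head, CVS can also check out a named tag with
its \texttt{-r} option.  The tag name below is purely illustrative;
the web interface above lists the real tags:
\begin{verbatim}
   %  cvs co -P -r checkpoint57a_post MITgcm_code
\end{verbatim}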


\subsection{Method 2 - Tar file download}
\label{sect:conventionalDownload}

If you do not have CVS on your system, you can download the model as a
tar file from the web site listed above.  The tar file still contains
CVS information which we urge you not to
delete; even if you do not use CVS yourself the information can help
us if you should need to send us your copy of the code.  If a recent
tar file does not exist, then please contact the developers through
the
\begin{rawhtml} <A href="mailto:MITgcm-support@mitgcm.org"> \end{rawhtml}
MITgcm-support@mitgcm.org
\begin{rawhtml} </A> \end{rawhtml}
mailing list.

\subsubsection{Upgrading from an earlier version}

If you already have an earlier version of the code you can ``upgrade''
your copy instead of downloading the entire repository again. First,
from within your existing \texttt{MITgcm} directory, ask CVS to bring
your copy up to date with the repository.
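The usual invocation is a one-liner from the top of your existing
checkout; the \texttt{-d} flag picks up newly added directories and
\texttt{-P} prunes empty ones (treat this as a sketch):
\begin{verbatim}
   %  cd MITgcm
   %  cvs update -d -P
\end{verbatim}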
If you have made local changes to files that we have also changed,
CVS will report conflicts.  If the list of conflicts scrolled off the
screen, you can re-issue the
\texttt{cvs update} command and it will report the conflicts. Conflicts are
indicated in the code by the delimiters ``$<<<<<<<$'', ``======='' and
``$>>>>>>>$''. For example,
{\small
\begin{verbatim}
<<<<<<< ini_parms.F
     & bottomDragLinear,myOwnBottomDragCoefficient,
=======
     & bottomDragLinear,bottomDragQuadratic,
>>>>>>> 1.18
\end{verbatim}
}
means that you added ``myOwnBottomDragCoefficient'' to a namelist at
the same time and place that we added ``bottomDragQuadratic''. You
need to resolve this conflict; in this case the line should be
changed to:
{\small
\begin{verbatim}
     & bottomDragLinear,bottomDragQuadratic,myOwnBottomDragCoefficient,
\end{verbatim}
}
and the lines with the delimiters (``$<<<<<<<$'', ``======='',
``$>>>>>>>$'') should be deleted.
Unless you are making modifications which exactly parallel
developments we make, these types of conflicts should be rare.
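
After editing, a quick way to make sure that no conflict markers
remain is a recursive search (a sketch, assuming GNU \texttt{grep}):
\begin{verbatim}
   %  grep -rn '^<<<<<<<' .
\end{verbatim}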

Note that if you discard the CVS information in your copy of the code,
it also means we can't tell what version of the code you are working
with. So please be sure you understand what you're doing.

\section{Model and directory structure}
\begin{rawhtml}
<!-- CMIREDIR:directory_structure: -->
\end{rawhtml}

The ``numerical'' model is contained within an execution environment
support wrapper. This wrapper is designed to provide a general
framework for grid-point models. MITgcmUV is a specific numerical
model that uses the framework. Under this structure the model is split
into execution environment support code and conventional numerical
model code. The execution environment support code is held under the
\texttt{eesupp} directory. The grid point model code is held under the
\texttt{model} directory. Code execution actually starts in the
\texttt{eesupp} routines and not in the \texttt{model} routines. For
this reason the top-level \texttt{MAIN.F} is in the
\texttt{eesupp/src} directory. In general, end-users should not need
to worry about this level. The top-level routine for the numerical
part of the code is in \texttt{model/src/THE\_MODEL\_MAIN.F}. Here is
a brief description of the directory structure of the model under the
root tree (a detailed description is given in section 3: Code
structure).

\begin{itemize}
\item \texttt{bin}: this directory is initially empty. It is the
  default directory in which to compile the code.

\item \texttt{diags}: contains the code relative to time-averaged
  diagnostics. It is subdivided into two subdirectories \texttt{inc}
  and \texttt{src} that contain include files (\texttt{*.h} files) and
  Fortran subroutines (\texttt{*.F} files), respectively.

\item \texttt{doc}: contains brief documentation notes.

\item \texttt{eesupp}: contains the execution environment source code.
  Also subdivided into two subdirectories \texttt{inc} and
  \texttt{src}.

\item \texttt{exe}: this directory is initially empty. It is the
  default directory in which to execute the code.

\item \texttt{model}: this directory contains the main source code.
  Also subdivided into two subdirectories \texttt{inc} and
  \texttt{src}.

\item \texttt{pkg}: contains the source code for the packages. Each
  package corresponds to a subdirectory. For example, \texttt{gmredi}
  contains the code related to the Gent-McWilliams/Redi scheme and
  \texttt{aim} the code relative to the atmospheric intermediate
  physics. The packages are described in detail in section 3.

\item \texttt{tools}: this directory contains various useful tools.
  For example, \texttt{genmake2} is a shell script that should be used
  to generate your makefile. The directory \texttt{adjoint} contains
  the makefile specific to the Tangent linear and Adjoint Compiler
  (TAMC) that generates the adjoint code. The latter is described in
  detail in part V.

\item \texttt{utils}: this directory contains various utilities. The
  subdirectory \texttt{knudsen2} contains code and a makefile that
  compute coefficients of the polynomial approximation to the Knudsen
  formula for an ocean nonlinear equation of state. The
  \texttt{matlab} subdirectory contains Matlab scripts for reading
  model output directly into Matlab. \texttt{scripts} contains C-shell
  post-processing scripts for joining processor-based and tile-based
  model output.

\item \texttt{verification}: this directory contains the model
  examples. See section \ref{sect:modelExamples}.
\end{itemize}
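
A quick way to orient yourself after a checkout is simply to list the
top of the tree.  The exact contents vary a little between releases
(and CVS adds its own book-keeping subdirectories), but it should look
roughly like:
\begin{verbatim}
% cd MITgcm
% ls
bin  diags  doc  eesupp  exe  model  pkg  tools  utils  verification
\end{verbatim}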
   
\section{Example experiments}
\label{sect:modelExamples}

%% a set of twenty-four pre-configured numerical experiments

The MITgcm distribution comes with more than a dozen pre-configured
numerical experiments. Some of these example experiments are tests of
individual parts of the model code, but many are fully fledged
numerical simulations. A few of the examples are used for tutorial
documentation in sections \ref{sect:eg-baro}--\ref{sect:eg-global}.
The other examples follow the same general structure as the tutorial
examples. However, they only include brief instructions in a text file
called \texttt{README}.  The examples are located in subdirectories under
the directory \texttt{verification}. Each example is briefly described
below.

\subsection{Full list of model examples}

\begin{enumerate}
\item \texttt{exp0} - single layer, ocean double gyre (barotropic with
  free-surface). This experiment is described in detail in section
  \ref{sect:eg-baro}.

\item \texttt{exp1} - four layer, ocean double gyre. This experiment
  is described in detail in section \ref{sect:eg-baroc}.

\item \texttt{exp2} - 4x4 degree global ocean simulation with steady
  climatological forcing. This experiment is described in detail in
  section \ref{sect:eg-global}.

\item \texttt{exp4} - flow over a Gaussian bump in open-water or a
  channel with open boundaries.

\item \texttt{exp5} - inhomogeneously forced ocean convection in a
  doubly periodic box.

\item \texttt{front\_relax} - relaxation of an ocean thermal front (test for
  the Gent/McWilliams scheme). 2D (Y-Z).

\item \texttt{internal\_wave} - ocean internal wave forced by open
  boundary conditions.

\item \texttt{natl\_box} - eastern subtropical North Atlantic with the KPP
  scheme; 1 month integration.

\item \texttt{hs94.1x64x5} - zonally averaged atmosphere using Held and
  Suarez '94 forcing.

\item \texttt{hs94.128x64x5} - 3D atmosphere dynamics using Held and
  Suarez '94 forcing.

\item \texttt{hs94.cs-32x32x5} - 3D atmosphere dynamics using Held and
  Suarez '94 forcing on the cubed sphere.

\item \texttt{aim.5l\_zon-ave} - intermediate atmospheric physics.
  Global zonal mean configuration, 1x64x5 resolution.

\item \texttt{aim.5l\_XZ\_Equatorial\_Slice} - intermediate
  atmospheric physics, equatorial slice configuration.  2D (X-Z).

\item \texttt{aim.5l\_Equatorial\_Channel} - intermediate atmospheric
  physics. 3D equatorial channel configuration.

\item \texttt{aim.5l\_LatLon} - intermediate atmospheric physics.
  Global configuration on a latitude-longitude grid with 128x64x5 grid
  points ($2.8^\circ$ resolution).

\item \texttt{adjustment.128x64x1} - barotropic adjustment problem on a
  latitude-longitude grid with 128x64 grid points ($2.8^\circ$
  resolution).

\item \texttt{adjustment.cs-32x32x1} - barotropic adjustment problem on a
  cubed sphere grid with 32x32 points per face (roughly $2.8^\circ$
  resolution).

\item \texttt{advect\_cs} - two-dimensional passive advection test on a
  cubed sphere grid.

\item \texttt{advect\_xy} - two-dimensional (horizontal plane) passive
  advection test on a Cartesian grid.

\item \texttt{advect\_yz} - two-dimensional (vertical plane) passive
  advection test on a Cartesian grid.

\item \texttt{carbon} - simple passive tracer experiment. Includes a
  derivative calculation. Described in detail in section
  \ref{sect:eg-carbon-ad}.

\item \texttt{flt\_example} - example of using the float package.

\item \texttt{global\_ocean.90x40x15} - global circulation with GM, flux
  boundary conditions and poles.

\item \texttt{global\_ocean\_pressure} - global circulation in pressure
  coordinates (non-Boussinesq ocean model). Described in detail in
  section \ref{sect:eg-globalpressure}.

\item \texttt{solid-body.cs-32x32x1} - solid body rotation test for a
  cubed sphere grid.

\end{enumerate}
   
\subsection{Directory structure of model examples}

Each example directory has the following subdirectories:

\begin{itemize}
\item \texttt{code}: contains the code particular to the example. At a
  minimum, this directory includes the following files:

  \begin{itemize}
  \item \texttt{code/CPP\_EEOPTIONS.h}: declares CPP keys relative to
    the ``execution environment'' part of the code. The default
    version is located in \texttt{eesupp/inc}.

  \item \texttt{code/CPP\_OPTIONS.h}: declares CPP keys relative to
    the ``numerical model'' part of the code. The default version is
    located in \texttt{model/inc}.

  \item \texttt{code/SIZE.h}: declares the size of the underlying
    computational grid.  The default version is located in
    \texttt{model/inc}.
  \end{itemize}

  In addition, other include files and subroutines might be present in
  \texttt{code} depending on the particular experiment. See Section 2
  for more details.

\item \texttt{input}: contains the input data files required to run
  the example. At a minimum, the \texttt{input} directory contains the
  following files:

  \begin{itemize}
  \item \texttt{input/data}: this file, written as a namelist,
    specifies the main parameters for the experiment.

  \item \texttt{input/data.pkg}: contains parameters relative to the
    packages used in the experiment.

  \item \texttt{input/eedata}: this file contains ``execution
    environment'' data. At present, this consists of a specification
    of the number of threads to use in $X$ and $Y$ under multithreaded
    execution.
  \end{itemize}

  In addition, you will also find in this directory the forcing and
  topography files as well as the files describing the initial state of
  the experiment.  This varies from experiment to experiment. See
  section 2 for more details.

\item \texttt{results}: this directory contains the output file
  \texttt{output.txt} produced by the simulation example. This file is
  useful for comparison with your own output when you run the
  experiment.
\end{itemize}
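
For orientation, listing one of the example directories shows this
layout directly (a sketch; newer distributions may also contain a
\texttt{build} directory, and the exact set of input files varies from
experiment to experiment):
\begin{verbatim}
% cd verification/exp2
% ls
code  input  results
% ls input
data  data.pkg  eedata  ...
\end{verbatim}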

Once you have chosen the example you want to run, you are ready to
compile the code.

\section[Building MITgcm]{Building the code}
\label{sect:buildingCode}
\begin{rawhtml}
<!-- CMIREDIR:buildingCode: -->
\end{rawhtml}

To compile the code, we use the \texttt{make} program. This uses a
file (\texttt{Makefile}) that allows us to pre-process source files
and specify compiler and optimization options, and that also figures
out any file dependencies. We supply a script (\texttt{genmake2}),
described in section \ref{sect:genmake}, that automatically creates
the \texttt{Makefile} for you. You then need to build the dependencies
and compile the code.

As an example, assume that you want to build and run experiment
\texttt{verification/exp2}. There are multiple ways and places to
actually do this but here let's build the code in
\texttt{verification/exp2/build}:
\begin{verbatim}
% cd verification/exp2/build
\end{verbatim}
First, build the \texttt{Makefile}:
\begin{verbatim}
% ../../../tools/genmake2 -mods=../code
\end{verbatim}
The command line option tells \texttt{genmake2} to override model source
code with any files in the directory \texttt{../code/}.

On many systems, the \texttt{genmake2} program will be able to
automatically recognize the hardware, find compilers and other tools
within the user's path (``\texttt{echo \$PATH}''), and then choose an
appropriate set of options from the files (``optfiles'') contained in
the \texttt{tools/build\_options} directory.  Under some
circumstances, a user may have to create a new ``optfile'' in order to
specify the exact combination of compiler, compiler flags, libraries,
and other options necessary to build a particular configuration of
MITgcm.  In such cases, it is generally helpful to read the existing
``optfiles'' and mimic their syntax.

Through the MITgcm-support list, the MITgcm developers are willing to
provide help writing or modifying ``optfiles''.  We encourage users
to post new ``optfiles'' (particularly ones for new machines or
architectures) to the
\begin{rawhtml} <A href="mailto:MITgcm-support@mitgcm.org"> \end{rawhtml}
MITgcm-support@mitgcm.org
\begin{rawhtml} </A> \end{rawhtml}
list.

To specify an optfile to \texttt{genmake2}, the syntax is:
\begin{verbatim}
% ../../../tools/genmake2 -mods=../code -of /path/to/optfile
\end{verbatim}

Once a \texttt{Makefile} has been generated, we create the
dependencies with the command:
\begin{verbatim}
% make depend
\end{verbatim}
This modifies the \texttt{Makefile} by attaching a (usually long)
list of files upon which other files depend. The purpose of this is to
reduce re-compilation if and when you start to modify the code. The
{\tt make depend} command also creates links from the model source to
this directory.  It is important to note that the {\tt make depend}
stage will occasionally produce warnings or errors since the
dependency parsing tool is unable to find all of the necessary header
files (\textit{e.g.}\ \texttt{netcdf.inc}).  In these circumstances, it
is usually OK to ignore the warnings/errors and proceed to the next
step.

Next, one can compile the code using:
\begin{verbatim}
% make
\end{verbatim}
The {\tt make} command creates an executable called \texttt{mitgcmuv}.
Additional make ``targets'' are defined within the makefile to aid in
the production of adjoint and other versions of MITgcm.  On SMP
(shared multi-processor) systems, the build process can often be sped
up appreciably using the command:
\begin{verbatim}
% make -j 2
\end{verbatim}
where the ``2'' can be replaced with a number that corresponds to the
number of CPUs available.

Now you are ready to run the model. General instructions for doing so are
given in section \ref{sect:runModel}. Here, we can run the model by
first creating links to all the input files:
\begin{verbatim}
% ln -s ../input/* .
\end{verbatim}
and then calling the executable with:
\begin{verbatim}
% ./mitgcmuv > output.txt
\end{verbatim}
where we are re-directing the stream of text output to the file
\texttt{output.txt}.
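
Putting these steps together, the entire build-and-run cycle for this
example is, in sketch form:
\begin{verbatim}
% cd verification/exp2/build
% ../../../tools/genmake2 -mods=../code
% make depend
% make
% ln -s ../input/* .
% ./mitgcmuv > output.txt
\end{verbatim}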
   

\subsection{Building/compiling the code elsewhere}

In the example above we built the code in a directory inside the
experiment itself.  The same \texttt{genmake2}, \texttt{make depend},
\texttt{make} sequence can also be run from a build directory outside
the experiment (for example on a scratch disk); the main requirement
is that \texttt{genmake2} can still locate the MITgcm source tree and
that the \texttt{-mods} option points back at the experiment's
\texttt{code} directory.

\subsection{Using \texttt{genmake2}}
\label{sect:genmake}

To compile the code, first use the program \texttt{genmake2} (located
in the \texttt{tools} directory) to generate a Makefile.
\texttt{genmake2} is a shell script written to work with all
``sh''--compatible shells including bash v1, bash v2, and Bourne.
Internally, \texttt{genmake2} determines the locations of needed
files, the compiler, compiler options, libraries, and Unix tools.  It
relies upon a number of ``optfiles'' located in the
\texttt{tools/build\_options} directory.

The purpose of the optfiles is to provide all the compilation options
for particular ``platforms'' (where ``platform'' roughly means the
combination of the hardware and the compiler) and code configurations.

The most important command-line options are:
\begin{description}

\item[\texttt{--optfile=/PATH/FILENAME}] specifies the optfile that
  should be used for a particular build.

  If no ``optfile'' is specified (either through the command line or the
  MITGCM\_OPTFILE environment variable), genmake2 will try to make a
  reasonable guess from the list provided in
  \texttt{tools/build\_options}.  To do so, it first determines the
  combination of operating system and hardware and then looks for a
  working Fortran compiler within
  the user's path.  When these three items have been identified,
  genmake2 will try to find an optfile that has a matching name.

\item[\texttt{--pdefault='PKG1 PKG2 PKG3 ...'}] specifies the default
  set of packages to be used.  The normal order of precedence for
  packages is as follows:
  \begin{enumerate}
  \item If available, the command line (\texttt{--pdefault}) settings
    over-rule any others.

  \item Next, \texttt{genmake2} will look for a file named
    ``\texttt{packages.conf}'' in the local directory or in any of the
    directories specified with the \texttt{--mods} option (a sketch of
    such a file is given after this list of options).

  \item Finally, if neither of the above are available,
    \texttt{genmake2} will use the \texttt{pkg/pkg\_default} file.
  \end{enumerate}

\item[\texttt{--pdepend=/PATH/FILENAME}] specifies the dependency file
  used for packages.

  If not specified, the default dependency file {\em pkg/pkg\_depend}
  is used.  The syntax for this file is parsed on a line-by-line basis,
  one dependency per line; where no dependency is listed for a pair of
  packages, it is assumed that the two packages are compatible and
  will function either with or without each other.

\item[\texttt{--adof=/path/to/file}] specifies the ``adjoint'' or
  automatic differentiation options file to be used.  The file is
  analogous to the ``optfile'' defined above but it specifies
  information for the AD build process.

  The default file is located in {\em
    tools/adjoint\_options/adjoint\_default} and it defines the ``TAF''
  options; an alternate options file is provided for the newer
  ``STAF'' compiler.  As with any compilers, it is helpful to have their
  directories listed in your {\tt \$PATH} environment variable.

\item[\texttt{--mods='DIR1 DIR2 DIR3 ...'}] specifies a list of
  directories containing ``modifications''.  These directories contain
  files with names that may (or may not) exist in the main MITgcm
  source tree but will be overridden by any identically-named sources
  within the ``MODS'' directories.

  The order of precedence for this ``name-hiding'' is as follows:
  \begin{itemize}
  \item ``MODS'' directories, in the order given;
  \item the standard model source directories (which may be specified
    by the ``-standarddirs'' option).
  \end{itemize}

\item[\texttt{--mpi}] This option enables certain MPI features (using
  CPP \texttt{\#define}s) within the code and is necessary for MPI
  builds (see Section \ref{sect:mpi-build}).

\item[\texttt{--make=/path/to/gmake}] Due to the poor handling of
  soft-links and other bugs common with the \texttt{make} versions
  provided by commercial Unix vendors, GNU \texttt{make} (sometimes
  called \texttt{gmake}) should be preferred.  This option provides a
  means for specifying the make executable to be used.

\item[\texttt{--bash=/path/to/sh}] On some (usually older UNIX)
  machines, the ``bash'' shell is unavailable.  To run on these
  systems, \texttt{genmake2} can be invoked using an ``sh'' (that is,
  a Bourne, POSIX, or compatible) shell.  The syntax in these
  circumstances is:
  \begin{center}
    \texttt{\%  /bin/sh genmake2 -bash=/bin/sh [...options...]}
  \end{center}
  where \texttt{/bin/sh} can be replaced with the full path and name
  of the desired shell.

\end{description}
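
As mentioned under \texttt{--pdefault} above, a
\texttt{packages.conf} file is simply a list of package names, one per
line.  The sketch below uses only packages that are mentioned
elsewhere in this chapter; the list actually needed depends entirely
on the experiment:
\begin{verbatim}
gmredi
kpp
mnc
\end{verbatim}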


\subsection{Building with MPI}
\label{sect:mpi-build}

Building MITgcm to use MPI libraries can be complicated due to the
variety of different MPI implementations available, their dependencies
or interactions with different compilers, and their often ad-hoc
locations within file systems.  For these reasons, it's generally a
good idea to start by finding and reading the documentation for your
machine(s) and, if necessary, seeking help from your local systems
administrator.

The steps for building MITgcm with MPI support are:
\begin{enumerate}

\item Determine the locations of your MPI-enabled compiler and/or MPI
  libraries and put them into an options file as described in Section
  \ref{sect:genmake}.  One can start with one of the examples in:
  \begin{rawhtml} <A
    href="http://mitgcm.org/cgi-bin/viewcvs.cgi/MITgcm/tools/build_options/">
  \end{rawhtml}
  \begin{center}
    \texttt{MITgcm/tools/build\_options/}
  \end{center}
  \begin{rawhtml} </A> \end{rawhtml}
  such as \texttt{linux\_ia32\_g77+mpi\_cg01} or
  \texttt{linux\_ia64\_efc+mpi} and then edit it to suit the machine at
  hand.  You may need help from your user guide or local systems
  administrator to determine the exact location of the MPI libraries.
  If libraries are not installed, MPI implementations and related
  tools are available including:
  \begin{itemize}
  \item \begin{rawhtml} <A
      href="http://www-unix.mcs.anl.gov/mpi/mpich/">
    \end{rawhtml}
    MPICH
    \begin{rawhtml} </A> \end{rawhtml}

  \item \begin{rawhtml} <A
      href="http://www.lam-mpi.org/">
    \end{rawhtml}
    LAM/MPI
    \begin{rawhtml} </A> \end{rawhtml}

  \item \begin{rawhtml} <A
      href="http://www.osc.edu/~pw/mpiexec/">
    \end{rawhtml}
    MPIexec
    \begin{rawhtml} </A> \end{rawhtml}
  \end{itemize}

\item Build the code with the \texttt{genmake2} \texttt{-mpi} option
  (see Section \ref{sect:genmake}) using commands such as:
{\footnotesize \begin{verbatim}
  %  ../../../tools/genmake2 -mods=../code -mpi -of=YOUR_OPTFILE
  %  make depend
  %  make
\end{verbatim} }

\item Run the code with the appropriate MPI ``run'' or ``exec''
  program provided with your particular implementation of MPI.
  Typical MPI packages such as MPICH will use something like:
\begin{verbatim}
  %  mpirun -np 4 -machinefile mf ./mitgcmuv
\end{verbatim}
  Slightly more complicated scripts may be needed for many machines
  since execution of the code may be controlled by both the MPI
  library and a job scheduling and queueing system such as PBS,
  LoadLeveler, Condor, or any of a number of similar tools.  A few
  example scripts (those used for our \begin{rawhtml} <A
    href="http://mitgcm.org/testing.html"> \end{rawhtml}regular
  verification runs\begin{rawhtml} </A> \end{rawhtml}) are available
  at:
  \begin{rawhtml} <A
    href="http://mitgcm.org/cgi-bin/viewcvs.cgi/MITgcm_contrib/test_scripts/">
  \end{rawhtml}
  {\footnotesize \tt
    http://mitgcm.org/cgi-bin/viewcvs.cgi/MITgcm\_contrib/test\_scripts/ }
  \begin{rawhtml} </A> \end{rawhtml}

\end{enumerate}

An example of the above process on the MITgcm cluster (``cg01'') using
the GNU g77 compiler and the mpich MPI library is:

{\footnotesize \begin{verbatim}
  %  cd MITgcm/verification/exp5
  %  mkdir build
  %  cd build
  %  ../../../tools/genmake2 -mpi -mods=../code \
       -of=../../../tools/build_options/linux_ia32_g77+mpi_cg01
  %  make depend
  %  make
  %  cd ../input
  %  /usr/local/pkg/mpi/mpi-1.2.4..8a-gm-1.5/g77/bin/mpirun.ch_gm \
       -machinefile mf --gm-kill 5 -v -np 2  ../build/mitgcmuv
\end{verbatim} }

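Where a batch queueing system controls execution, the \texttt{mpirun}
line typically lives inside a submission script.  A very rough sketch
for a PBS-style system is shown below; every directive is illustrative
and site-dependent, so consult your local documentation:
{\footnotesize \begin{verbatim}
  #!/bin/sh
  #PBS -N mitgcmuv
  #PBS -l nodes=2:ppn=2
  cd $PBS_O_WORKDIR
  mpirun -np 4 -machinefile mf ./mitgcmuv > output.txt
\end{verbatim} }
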
\section[Running MITgcm]{Running the model in prognostic mode}
\label{sect:runModel}
\begin{rawhtml}
<!-- CMIREDIR:runModel: -->
\end{rawhtml}

If compilation finished successfully (section \ref{sect:buildingCode})
then an executable called \texttt{mitgcmuv} will now exist in the
local directory.

To run the model as a single process (\textit{i.e.}\ not in parallel)
simply type:
\begin{verbatim}
% ./mitgcmuv
\end{verbatim}
The above command will spew out many lines of text output to
your screen.  This output contains details such as parameter values as
well as diagnostics such as mean kinetic energy, largest CFL number,
etc. It is worth keeping this text output with the binary output so we
normally re-direct the \texttt{stdout} stream as follows:
\begin{verbatim}
% ./mitgcmuv > output.txt
\end{verbatim}
In the event that the model encounters an error and stops, it is very
helpful to include the last few lines of this \texttt{output.txt} file
along with the (\texttt{stderr}) error message within any bug reports.
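A sketch of collecting that information from a Bourne-compatible
shell (so that \texttt{stdout} and \texttt{stderr} end up in separate
files) is:
\begin{verbatim}
% ./mitgcmuv > output.txt 2> errors.txt
% tail -n 20 output.txt
\end{verbatim}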

For the example experiments in \texttt{verification}, an example of the
output is kept in \texttt{results/output.txt} for comparison. You can
compare your \texttt{output.txt} with the corresponding one for that
experiment to check that the set-up works.


\subsection{Output files}

The model produces various output files and, when using \texttt{mnc},
sometimes even directories.  Depending upon the I/O package(s)
selected at compile time (either \texttt{mdsio} or \texttt{mnc} or
both, as determined by \texttt{code/packages.conf}) and the run-time
flags set (in \texttt{input/data.pkg}), the following output may
appear.


\subsubsection{MDSIO output files}

The ``traditional'' output files are generated by the \texttt{mdsio}
package.  At a minimum, the instantaneous ``state'' of the model is
written out, which is made of the following files:

\begin{itemize}
\item \texttt{U.00000nIter} - zonal component of velocity field (m/s
  and positive eastward).

\item \texttt{V.00000nIter} - meridional component of velocity field
  (m/s and positive northward).

\item \texttt{W.00000nIter} - vertical component of velocity field
  (ocean: m/s and positive upward, atmosphere: Pa/s and positive
  towards increasing pressure, i.e.\ downward).

\item \texttt{T.00000nIter} - potential temperature (ocean:
  $^{\circ}\mathrm{C}$, atmosphere: K).

\item \texttt{S.00000nIter} - ocean: salinity (psu), atmosphere: water
  vapor (g/kg).

\item \texttt{Eta.00000nIter} - ocean: surface elevation (m),
  atmosphere: surface pressure anomaly (Pa).
\end{itemize}

The string \texttt{00000nIter} consists of ten digits that specify the
iteration number at which the output is written out. For example,
\texttt{U.0000000300} is the zonal velocity at iteration 300.

In addition, a ``pickup'' or ``checkpoint'' file called:

\begin{itemize}
\item \texttt{pickup.00000nIter}
\end{itemize}

is written out. This file represents the state of the model in a condensed
form and is used for restarting the integration. If the C-D scheme is used,
there is an additional ``pickup'' file:

\begin{itemize}
\item \texttt{pickup\_cd.00000nIter}
\end{itemize}

containing the D-grid velocity data that has to be written out as well
in order to restart the integration. Rolling checkpoint files are the same
as the pickup files but are named differently. Their names contain the string
\texttt{ckptA} or \texttt{ckptB} instead of \texttt{00000nIter}. They can be
used to restart the model but are overwritten every other time they are
output to save disk space during long integrations.


\subsubsection{MNC output files}

Unlike the \texttt{mdsio} output, the \texttt{mnc}--generated output
is usually (though not necessarily) placed within a subdirectory with
a name such as \texttt{mnc\_test\_\${DATE}\_\${SEQ}}.  The files
within this subdirectory are all in the ``self-describing'' netCDF
format and can thus be browsed and/or plotted using tools such as:
\begin{itemize}
\item \texttt{ncdump} is a utility which is typically included
  with every netCDF install:
  \begin{rawhtml} <A href="http://www.unidata.ucar.edu/packages/netcdf/"> \end{rawhtml}
\begin{verbatim}
http://www.unidata.ucar.edu/packages/netcdf/
\end{verbatim}
  \begin{rawhtml} </A> \end{rawhtml} and it converts the netCDF
  binaries into formatted ASCII text files.

\item The \texttt{ncview} utility is a very convenient and quick way
  to plot netCDF data and it runs on most OSes:
  \begin{rawhtml} <A href="http://meteora.ucsd.edu/~pierce/ncview_home_page.html"> \end{rawhtml}
\begin{verbatim}
http://meteora.ucsd.edu/~pierce/ncview_home_page.html
\end{verbatim}
  \begin{rawhtml} </A> \end{rawhtml}

\item Matlab and other common post-processing environments provide
  various netCDF interfaces including:
  \begin{rawhtml} <A href="http://mexcdf.sourceforge.net/"> \end{rawhtml}
\begin{verbatim}
http://mexcdf.sourceforge.net/
\end{verbatim}
  \begin{rawhtml} </A> \end{rawhtml}
  \begin{rawhtml} <A href="http://woodshole.er.usgs.gov/staffpages/cdenham/public_html/MexCDF/nc4ml5.html"> \end{rawhtml}
\begin{verbatim}
http://woodshole.er.usgs.gov/staffpages/cdenham/public_html/MexCDF/nc4ml5.html
\end{verbatim}
  \begin{rawhtml} </A> \end{rawhtml}
\end{itemize}

\subsection{Looking at the output}

The ``traditional'' or mdsio model data are written according to a
``meta/data'' file format.  Each variable is associated with two files
with suffix names \texttt{.data} and \texttt{.meta}. The
\texttt{.data} file contains the data written in binary form
(big\_endian by default). The \texttt{.meta} file is a ``header'' file
that contains information about the size and the structure of the
\texttt{.data} file. This way of organizing the output is particularly
useful when running multi-processor calculations. The base version of
the model includes a few Matlab utilities to read output files written
in this format. The Matlab scripts are located in the directory
\texttt{utils/matlab} under the root tree. The script \texttt{rdmds.m}
reads the data. Look at the comments inside the script to see how to
use it.

Some examples of reading and visualizing model output in {\em Matlab}:
\begin{verbatim}
>> eta=rdmds('Eta',[0:10:100]);
>> for n=1:11; imagesc(eta(:,:,n)');axis ij;colorbar;pause(.5);end
\end{verbatim}

Similar scripts for netCDF output (\texttt{rdmnc.m}) are available and
they are described in Section \ref{sec:pkg:mnc}.

\section{Doing it yourself: customizing the code}

 When you are ready to run the model in the configuration you want, the  
 easiest thing is to use and adapt the setup of the case studies experiment  
 (described previously) that is the closest to your configuration. Then, the  
 amount of setup will be minimized. In this section, we focus on the setup  
 relative to the ''numerical model'' part of the code (the setup relative to  
 the ''execution environment'' part is covered in the parallel implementation  
 section) and on the variables and parameters that you are likely to change.  
   
 \subsection{Configuration and setup}  
   
The CPP keys relative to the ``numerical model'' part of the code are all
defined and set in the file \textit{CPP\_OPTIONS.h} in the directory
\textit{model/inc} or in one of the \textit{code} directories of the case
study experiments under \textit{verification}. The model parameters are
defined and declared in the file \textit{model/inc/PARAMS.h} and their
default values are set in the routine \textit{model/src/set\_defaults.F}.
The default values can be modified in the namelist file \textit{data},
which needs to be located in the directory where you will run the model.
The parameters are initialized in the routine
\textit{model/src/ini\_parms.F}. Look at this routine to see in which part
of the namelist each parameter is located.
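
To fix ideas, here is a schematic (and deliberately incomplete) sketch of
how a \textit{data} file is laid out. The \texttt{PARM01}--\texttt{PARM05}
grouping and the \texttt{\#} comment lines follow the layout used in the
\textit{verification} experiments; \textit{model/src/ini\_parms.F} remains
the authoritative reference for which parameter belongs to which group:
\begin{verbatim}
# Continuous equation parameters (viscosities, diffusivities, EOS, ...)
 &PARM01
 &
# Elliptic solver parameters
 &PARM02
 &
# Time stepping parameters
 &PARM03
 &
# Gridding parameters
 &PARM04
 &
# Input datasets (bathymetry, initial conditions, forcing files)
 &PARM05
 &
\end{verbatim}
The fragments shown in the following subsections are meant to be merged
into such a file; all numerical values and file names in them are purely
illustrative.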
   
 In what follows the parameters are grouped into categories related to the  
 computational domain, the equations solved in the model, and the simulation  
 controls.  
   
 \subsection{Computational domain, geometry and time-discretization}  
   
 \begin{itemize}  
 \item dimensions  
 \end{itemize}  
   
The numbers of points in the x, y, and r directions are represented by
the variables \textbf{sNx}, \textbf{sNy}, and \textbf{Nr} respectively,
which are declared and set in the file \textit{model/inc/SIZE.h}. (Again,
this assumes a mono-processor calculation. For multi-processor
calculations see the section on the parallel implementation.)
   
 \begin{itemize}  
 \item grid  
 \end{itemize}  
   
Three different grids are available: cartesian, spherical polar, and
curvilinear (including the cubed sphere). The grid is selected through the
logical variables \textbf{usingCartesianGrid},
\textbf{usingSphericalPolarGrid}, and \textbf{usingCurvilinearGrid}. In
the case of spherical and curvilinear grids, the southern boundary is
defined through the variable \textbf{phiMin}, which corresponds to the
latitude of the southernmost cell face (in degrees). The resolution along
the x and y directions is controlled by the 1D arrays \textbf{delx} and
\textbf{dely} (in meters in the case of a cartesian grid, in degrees
otherwise). The vertical grid spacing is set through the 1D array
\textbf{delz} for the ocean (in meters) or \textbf{delp} for the
atmosphere (in Pa). The variable \textbf{Ro\_SeaLevel} represents the
standard position of sea level in the ``R'' coordinate. This is typically
set to 0~m for the ocean (default value) and $10^{5}$~Pa for the
atmosphere. For the atmosphere, also set the logical variable
\textbf{groundAtK1} to '.\texttt{TRUE}.', which puts the first level (k=1)
at the lower boundary (ground).
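
For example, a gridding fragment for a hypothetical coarse spherical polar
ocean setup might read (illustrative values only):
\begin{verbatim}
 &PARM04
 usingSphericalPolarGrid=.TRUE.,
 phiMin=-80.,
 delx=90*4.,
 dely=40*4.,
 delz=50., 70., 100., 140., 190., 240., 290., 340., 390., 440.,
 &
\end{verbatim}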
   
For the cartesian grid case, the Coriolis parameter $f$ is set through the
variables \textbf{f0} and \textbf{beta}, which correspond to the reference
Coriolis parameter (in s$^{-1}$) and $\frac{\partial f}{\partial y}$ (in
m$^{-1}$s$^{-1}$) respectively. If \textbf{beta} is set to a nonzero
value, \textbf{f0} is the value of $f$ at the southern edge of the domain.
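
On a cartesian beta-plane, one would instead select the cartesian grid and
set the Coriolis parameters, for instance (again, purely illustrative
values):
\begin{verbatim}
 &PARM01
 f0=1.E-4,
 beta=1.E-11,
 &
 &PARM04
 usingCartesianGrid=.TRUE.,
 delx=60*20.E3,
 dely=60*20.E3,
 delz=10*50.,
 &
\end{verbatim}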
   
 \begin{itemize}  
 \item topography - full and partial cells  
 \end{itemize}  
   
The domain bathymetry is read from a file that contains a 2D (x,y) map of
depths (in m) for the ocean or pressures (in Pa) for the atmosphere. The
file name is given by the variable \textbf{bathyFile}. The file is assumed
to contain binary numbers giving the depth (pressure) of the model at each
grid cell, ordered with the x coordinate varying fastest. The points are
ordered from low coordinate to high coordinate for both axes. The model
code applies without modification to enclosed, periodic, and double
periodic domains. Periodicity is assumed by default and is suppressed by
setting the depths to 0~m for the cells at the limits of the computational
domain (note: not sure this is the case for the atmosphere). The precision
with which to read the binary data is controlled by the integer variable
\textbf{readBinaryPrec}, which can take the value \texttt{32} (single
precision) or \texttt{64} (double precision). Look at the Matlab program
\textit{gendata.m} in the \textit{input} directories under
\textit{verification} to see how the bathymetry files are generated for
the case study experiments.
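
A corresponding fragment of the \textit{data} file could then be (the file
name is a placeholder, and \textbf{readBinaryPrec} normally sits in the
main parameter group):
\begin{verbatim}
 &PARM01
 readBinaryPrec=64,
 &
 &PARM05
 bathyFile='topog.bin',
 &
\end{verbatim}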
   
To use the partial cell capability, the variable \textbf{hFacMin} needs to
be set to a value between 0 and 1 (it is set to 1 by default)
corresponding to the minimum fractional size of the cell. For example, if
the bottom cell is 500~m thick and \textbf{hFacMin} is set to 0.1, the
actual thickness of the cell (i.e. as used in the code) can cover a range
of discrete values 50~m apart, from 50~m to 500~m, depending on the value
of the bottom depth (in \textbf{bathyFile}) at this point.
   
Note that the bottom depths (or pressures) need not coincide with the
model's levels as deduced from \textbf{delz} or \textbf{delp}. The model
will interpolate the numbers in \textbf{bathyFile} so that they match the
levels obtained from \textbf{delz} or \textbf{delp} and \textbf{hFacMin}.
   
 (Note: the atmospheric case is a bit more complicated than what is written  
 here I think. To come soon...)  
   
 \begin{itemize}  
 \item time-discretization  
 \end{itemize}  
   
 The time steps are set through the real variables \textbf{deltaTMom}  
 and \textbf{deltaTtracer} (in s) which represent the time step for the  
 momentum and tracer equations, respectively. For synchronous  
 integrations, simply set the two variables to the same value (or you  
 can prescribe one time step only through the variable  
 \textbf{deltaT}). The Adams-Bashforth stabilizing parameter is set  
through the variable \textbf{abEps} (dimensionless). The staggered
baroclinic time stepping can be activated by setting the logical
variable \textbf{staggerTimeStep} to '.\texttt{TRUE}.'.
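
A time-stepping fragment for a synchronous integration with, say, a
one-hour time step could read:
\begin{verbatim}
 &PARM03
 deltaTMom=3600.,
 deltaTtracer=3600.,
 abEps=0.1,
 staggerTimeStep=.FALSE.,
 &
\end{verbatim}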
   
 \subsection{Equation of state}  
   
 First, because the model equations are written in terms of  
 perturbations, a reference thermodynamic state needs to be specified.  
 This is done through the 1D arrays \textbf{tRef} and \textbf{sRef}.  
 \textbf{tRef} specifies the reference potential temperature profile  
(in $^{o}$C for the ocean and K for the atmosphere) starting
 from the level k=1. Similarly, \textbf{sRef} specifies the reference  
 salinity profile (in ppt) for the ocean or the reference specific  
 humidity profile (in g/kg) for the atmosphere.  
   
 The form of the equation of state is controlled by the character  
 variables \textbf{buoyancyRelation} and \textbf{eosType}.  
 \textbf{buoyancyRelation} is set to '\texttt{OCEANIC}' by default and  
 needs to be set to '\texttt{ATMOSPHERIC}' for atmosphere simulations.  
 In this case, \textbf{eosType} must be set to '\texttt{IDEALGAS}'.  
 For the ocean, two forms of the equation of state are available:  
linear (set \textbf{eosType} to '\texttt{LINEAR}') and a polynomial
approximation to the full nonlinear equation (set
\textbf{eosType} to '\texttt{POLYNOMIAL}'). In the linear
case, you need to specify the thermal and haline expansion
coefficients represented by the variables \textbf{tAlpha}
(in K$^{-1}$) and \textbf{sBeta} (in ppt$^{-1}$). For the nonlinear
 case, you need to generate a file of polynomial coefficients called  
 \textit{POLY3.COEFFS}. To do this, use the program  
 \textit{utils/knudsen2/knudsen2.f} under the model tree (a Makefile is  
 available in the same directory and you will need to edit the number  
 and the values of the vertical levels in \textit{knudsen2.f} so that  
 they match those of your configuration).  
   
There are also higher-order polynomials for the equation of state:
 \begin{description}  
 \item['\texttt{UNESCO}':] The UNESCO equation of state formula of  
   Fofonoff and Millard \cite{fofonoff83}. This equation of state  
   assumes in-situ temperature, which is not a model variable; \emph{its use  
   is therefore discouraged, and it is only listed for completeness}.  
 \item['\texttt{JMD95Z}':] A modified UNESCO formula by Jackett and  
   McDougall \cite{jackett95}, which uses the model variable potential  
   temperature as input. The '\texttt{Z}' indicates that this equation  
   of state uses a horizontally and temporally constant pressure  
   $p_{0}=-g\rho_{0}z$.  
 \item['\texttt{JMD95P}':] A modified UNESCO formula by Jackett and  
   McDougall \cite{jackett95}, which uses the model variable potential  
   temperature as input. The '\texttt{P}' indicates that this equation  
   of state uses the actual hydrostatic pressure of the last time  
   step. Lagging the pressure in this way requires an additional pickup  
   file for restarts.  
 \item['\texttt{MDJWF}':] The new, more accurate and less expensive  
   equation of state by McDougall et~al. \cite{mcdougall03}. It also  
   requires lagging the pressure and therefore an additional pickup  
   file for restarts.  
 \end{description}  
None of these options requires a reference profile of temperature or
salinity.
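
As an illustration, a linear equation of state for a hypothetical
two-level ocean could be selected with a fragment like the following (all
values illustrative):
\begin{verbatim}
 &PARM01
 tRef=20.,15.,
 sRef=35.,35.,
 eosType='LINEAR',
 tAlpha=2.E-4,
 sBeta=7.4E-4,
 &
\end{verbatim}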
   
 \subsection{Momentum equations}  
   
In this section, we focus for now only on the parameters that you are
likely to change, i.e. the ones relative to forcing and dissipation. The
details relevant to the vector-invariant form of the equations and the
various advection schemes are not covered for the moment. We assume that
you use the standard form of the momentum equations (i.e. the flux form)
with the default advection scheme. Also, there are a few logical variables
that allow you to turn on/off various terms in the momentum equation.
These variables are called \textbf{momViscosity}, \textbf{momAdvection},
\textbf{momForcing}, \textbf{useCoriolis}, \textbf{momPressureForcing},
\textbf{momStepping}, and \textbf{metricTerms}, and they are assumed to be
set to '.\texttt{TRUE}.' here. Look at the file
\textit{model/inc/PARAMS.h} for a precise definition of these variables.
   
 \begin{itemize}  
 \item initialization  
 \end{itemize}  
   
 The velocity components are initialized to 0 unless the simulation is  
 starting from a pickup file (see section on simulation control parameters).  
   
 \begin{itemize}  
 \item forcing  
 \end{itemize}  
   
This section only applies to the ocean. You need to generate wind-stress
data and store them in two files, whose names are given by the variables
\textbf{zonalWindFile} and \textbf{meridWindFile}, corresponding to the
zonal and meridional components of the wind stress, respectively (if you
want the stress to be along the direction of only one of the model
horizontal axes, you only need to generate one file). The format of the
files is similar to the bathymetry file. The zonal (meridional) stress
data are assumed to be in Pa and located at U-points (V-points). As for
the bathymetry, the precision with which to read the binary data is
controlled by the variable \textbf{readBinaryPrec}. See the Matlab program
\textit{gendata.m} in the \textit{input} directories under
\textit{verification} to see how simple analytical wind forcing data are
generated for the case study experiments.
   
There is also the possibility of prescribing time-dependent periodic
forcing. To do this, concatenate the successive time records into a single
file (for each stress component) ordered in an (x, y, t) fashion and set
the following variables: \textbf{periodicExternalForcing} to
'.\texttt{TRUE}.', \textbf{externForcingPeriod} to the period (in s) with
which the forcing varies (typically 1 month), and
\textbf{externForcingCycle} to the repeat time (in s) of the forcing
(typically 1 year -- note: \textbf{externForcingCycle} must be a multiple
of \textbf{externForcingPeriod}). With these variables set, the model will
interpolate the forcing linearly in time at each iteration.
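
Putting the two pieces together, a hypothetical setup with monthly
(30-day) wind stress records repeating over a 360-day year might contain
(the file names are placeholders):
\begin{verbatim}
 &PARM03
 periodicExternalForcing=.TRUE.,
 externForcingPeriod=2592000.,
 externForcingCycle=31104000.,
 &
 &PARM05
 zonalWindFile='taux.bin',
 meridWindFile='tauy.bin',
 &
\end{verbatim}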
   
 \begin{itemize}  
 \item dissipation  
 \end{itemize}  
   
The lateral eddy viscosity coefficient is specified through the variable
\textbf{viscAh} (in m$^{2}$s$^{-1}$). The vertical eddy viscosity
coefficient is specified through the variable \textbf{viscAz} (in
m$^{2}$s$^{-1}$) for the ocean and \textbf{viscAp} (in Pa$^{2}$s$^{-1}$)
for the atmosphere. The vertical diffusive fluxes can be computed
implicitly by setting the logical variable \textbf{implicitViscosity} to
'.\texttt{TRUE}.'. In addition, biharmonic mixing can be added as well
through the variable \textbf{viscA4} (in m$^{4}$s$^{-1}$). On a spherical
polar grid, you might also need to set the variable \textbf{cosPower},
which is set to 0 by default and which represents the power of the cosine
of latitude by which the viscosity is multiplied. Slip or no-slip
conditions at the lateral and bottom boundaries are specified through the
logical variables \textbf{no\_slip\_sides} and \textbf{no\_slip\_bottom}.
If set to '\texttt{.FALSE.}', free-slip boundary conditions are applied.
If no-slip boundary conditions are applied at the bottom, a bottom drag
can be applied as well. Two forms are available: linear (set the variable
\textbf{bottomDragLinear} in s$^{-1}$) and quadratic (set the variable
\textbf{bottomDragQuadratic} in m$^{-1}$).
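
For example, a purely illustrative choice of dissipation parameters for a
coarse-resolution ocean run, with free-slip sides, a no-slip bottom, and a
quadratic bottom drag, might be:
\begin{verbatim}
 &PARM01
 viscAh=5.E5,
 viscAz=1.E-3,
 viscA4=0.,
 cosPower=0.,
 no_slip_sides=.FALSE.,
 no_slip_bottom=.TRUE.,
 bottomDragQuadratic=2.E-3,
 &
\end{verbatim}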
   
 The Fourier and Shapiro filters are described elsewhere.  
   
 \begin{itemize}  
 \item C-D scheme  
 \end{itemize}  
   
If you run at a sufficiently coarse resolution, you will need the C-D
scheme for the computation of the Coriolis terms. The variable
\textbf{tauCD}, which represents the C-D scheme coupling timescale (in s),
needs to be set.
   
 \begin{itemize}  
 \item calculation of pressure/geopotential  
 \end{itemize}  
   
First, to run a non-hydrostatic ocean simulation, set the logical variable
\textbf{nonHydrostatic} to '.\texttt{TRUE}.'. The pressure field is then
inverted through a 3D elliptic equation. (Note: this capability is not
available for the atmosphere yet.) By default, a hydrostatic simulation is
assumed and a 2D elliptic equation is used to invert the pressure field.
The parameters controlling the behaviour of the elliptic solvers are the
variables \textbf{cg2dMaxIters} and \textbf{cg2dTargetResidual} for the 2D
case and \textbf{cg3dMaxIters} and \textbf{cg3dTargetResidual} for the 3D
case. You probably won't need to alter the default values (are we sure of
this?).
   
For the calculation of the surface pressure (for the ocean) or surface
geopotential (for the atmosphere) you need to set the logical variables
\textbf{rigidLid} and \textbf{implicitFreeSurface} (set one to
'.\texttt{TRUE}.' and the other to '.\texttt{FALSE}.' depending on how you
want to deal with the ocean upper or atmosphere lower boundary).
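
A typical hydrostatic ocean configuration with an implicit free surface
would therefore contain something like the following (the solver settings
are shown only to indicate where they live; the defaults are usually
adequate):
\begin{verbatim}
 &PARM01
 implicitFreeSurface=.TRUE.,
 rigidLid=.FALSE.,
 &
 &PARM02
 cg2dMaxIters=1000,
 cg2dTargetResidual=1.E-13,
 &
\end{verbatim}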
   
 \subsection{Tracer equations}  
   
This section covers the tracer equations, i.e. the potential temperature
equation and the salinity (for the ocean) or specific humidity (for the
atmosphere) equation. As for the momentum equations, we only describe for
now the parameters that you are likely to change. The logical variables
\textbf{tempDiffusion}, \textbf{tempAdvection}, \textbf{tempForcing}, and
\textbf{tempStepping} allow you to turn on/off terms in the temperature
equation (likewise for salinity or specific humidity with the variables
\textbf{saltDiffusion}, \textbf{saltAdvection}, etc.). These variables are
all assumed here to be set to '.\texttt{TRUE}.'. Look at the file
\textit{model/inc/PARAMS.h} for a precise definition.
   
 \begin{itemize}  
 \item initialization  
 \end{itemize}  
   
The initial tracer data can be contained in the binary files whose names
are given by the variables \textbf{hydrogThetaFile} and
\textbf{hydrogSaltFile}. These files should contain 3D data ordered in an
(x, y, r) fashion with k=1 as the first vertical level. If no file names
are provided, the tracers are initialized with the values of \textbf{tRef}
and \textbf{sRef} mentioned above (in the equation of state section). In
this case, the initial tracer data are uniform in x and y for each depth
level.
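
A fragment pointing to hypothetical initial temperature and salinity
fields would be (the file names are placeholders):
\begin{verbatim}
 &PARM05
 hydrogThetaFile='theta_init.bin',
 hydrogSaltFile='salt_init.bin',
 &
\end{verbatim}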
   
 \begin{itemize}  
 \item forcing  
 \end{itemize}  
   
This part is more relevant for the ocean; the procedure for the atmosphere
is not completely stabilized at the moment.
   
A combination of flux data and relaxation terms can be used for driving
the tracer equations. For potential temperature, heat flux data (in
W/m$^{2}$) can be stored in the 2D binary file whose name is given by
\textbf{surfQfile}. Alternatively or in addition, the forcing can be
specified through a relaxation term. The SST data to which the model
surface temperatures are restored are assumed to be stored in the 2D
binary file given by \textbf{thetaClimFile}. The corresponding relaxation
time scale coefficient is set through the variable
\textbf{tauThetaClimRelax} (in s). The same procedure applies for salinity
with the variable names \textbf{EmPmRfile}, \textbf{saltClimFile}, and
\textbf{tauSaltClimRelax} for the freshwater flux (in m/s) and surface
salinity (in ppt) data files and the relaxation time scale coefficient (in
s), respectively. Also for salinity, if the CPP key
\textbf{USE\_NATURAL\_BCS} is turned on, natural boundary conditions are
applied, i.e. when computing the surface salinity tendency, the freshwater
flux is multiplied by the model surface salinity instead of a constant
salinity value.
   
 As for the other input files, the precision with which to read the data is  
 controlled by the variable \textbf{readBinaryPrec}. Time-dependent, periodic  
 forcing can be applied as well following the same procedure used for the  
 wind forcing data (see above).  
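
Gathering the pieces above, a hypothetical relaxation setup with a 60-day
restoring time scale for surface temperature and 90 days for surface
salinity might read (the file names are placeholders; check
\textit{ini\_parms.F} for the group placement of the relaxation time
scales):
\begin{verbatim}
 &PARM03
 tauThetaClimRelax=5184000.,
 tauSaltClimRelax=7776000.,
 &
 &PARM05
 surfQfile='qnet.bin',
 EmPmRfile='empmr.bin',
 thetaClimFile='sst_clim.bin',
 saltClimFile='sss_clim.bin',
 &
\end{verbatim}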
   
 \begin{itemize}  
 \item dissipation  
 \end{itemize}  
   
Lateral eddy diffusivities for temperature and salinity/specific humidity
are specified through the variables \textbf{diffKhT} and \textbf{diffKhS}
(in m$^{2}$/s). Vertical eddy diffusivities are specified through the
variables \textbf{diffKzT} and \textbf{diffKzS} (in m$^{2}$/s) for the
ocean and \textbf{diffKpT} and \textbf{diffKpS} (in Pa$^{2}$/s) for the
atmosphere. The vertical diffusive fluxes can be computed implicitly by
setting the logical variable \textbf{implicitDiffusion} to
'.\texttt{TRUE}.'. In addition, biharmonic diffusivities can be specified
as well through the coefficients \textbf{diffK4T} and \textbf{diffK4S} (in
m$^{4}$/s). Note that the cosine power scaling (specified through
\textbf{cosPower} -- see the momentum equations section) is applied to the
tracer diffusivities (Laplacian and biharmonic) as well. The Gent and
McWilliams parameterization for oceanic tracers is described in the
package section. Finally, note that tracers can also be subject to Fourier
and Shapiro filtering (see the corresponding section on these filters).
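
As an illustration, Laplacian-only tracer mixing with equal coefficients
for temperature and salinity could be specified as:
\begin{verbatim}
 &PARM01
 diffKhT=1.E3,
 diffKhS=1.E3,
 diffKzT=3.E-5,
 diffKzS=3.E-5,
 diffK4T=0.,
 diffK4S=0.,
 &
\end{verbatim}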
   
 \begin{itemize}  
 \item ocean convection  
 \end{itemize}  
   
Two options are available to parameterize ocean convection. One is to use
the convective adjustment scheme; in this case, you need to set the
variable \textbf{cadjFreq}, which represents the frequency (in s) with
which the adjustment algorithm is called, to a non-zero value (if set to a
negative value by the user, the model will set it to the tracer time
step). The other option is to parameterize convection with implicit
vertical diffusion. To do this, set the logical variable
\textbf{implicitDiffusion} to '.\texttt{TRUE}.' and the real variable
\textbf{ivdc\_kappa} to the value (in m$^{2}$/s) you wish the tracer
vertical diffusivities to have when mixing tracers vertically due to
static instabilities. Note that \textbf{cadjFreq} and \textbf{ivdc\_kappa}
cannot both have non-zero values.
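
For instance, to rely on implicit vertical diffusion for convection
(leaving \textbf{cadjFreq} unset so that the convective adjustment scheme
stays off), one could set:
\begin{verbatim}
 &PARM01
 implicitDiffusion=.TRUE.,
 ivdc_kappa=100.,
 &
\end{verbatim}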
   
 \subsection{Simulation controls}  
   
The model ``clock'' is defined by the variable \textbf{deltaTClock} (in s)
which determines the IO frequencies and is used in tagging output.
Typically, you will set it to the tracer time step for accelerated runs
(otherwise it is simply set to the default time step \textbf{deltaT}). The
frequencies of checkpointing and of dumping the model state are referenced
to this clock (see below).
   
 \begin{itemize}  
 \item run duration  
 \end{itemize}  
   
The beginning of a simulation is set by specifying a start time (in s)
through the real variable \textbf{startTime} or by specifying an initial
iteration number through the integer variable \textbf{nIter0}. If these
variables are set to nonzero values, the model will look for a ``pickup''
file \textit{pickup.0000nIter0} to restart the integration. The end of a
simulation is set through the real variable \textbf{endTime} (in s).
Alternatively, you can specify the number of time steps to execute through
the integer variable \textbf{nTimeSteps}.
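
For example, a hypothetical 360-day integration started from rest at
iteration zero, with a one-hour clock, could be requested as:
\begin{verbatim}
 &PARM03
 nIter0=0,
 nTimeSteps=8640,
 deltaTClock=3600.,
 &
\end{verbatim}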
   
 \begin{itemize}  
 \item frequency of output  
 \end{itemize}  

Real variables defining the frequencies (in s) with which output files are
written to disk need to be set up. \textbf{dumpFreq} controls the
frequency with which the instantaneous state of the model is saved.
\textbf{chkPtFreq} and \textbf{pchkPtFreq} control the output frequency of
rolling and permanent checkpoint files, respectively. See section 1.5.1
(Output files) for the definition of model state and checkpoint files. In
addition, time-averaged fields can be written out by setting the variable
\textbf{taveFreq} (in s). The precision with which to write the binary
data is controlled by the integer variable \textbf{writeBinaryPrec} (set
it to \texttt{32} or \texttt{64}).
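
Continuing the hypothetical one-hour-clock example above, 30-day dumps,
time averages, and rolling checkpoints, together with a 360-day permanent
checkpoint, could be requested as follows (\textbf{writeBinaryPrec} is set
in the same way as \textbf{readBinaryPrec} shown earlier):
\begin{verbatim}
 &PARM03
 dumpFreq=2592000.,
 taveFreq=2592000.,
 chkPtFreq=2592000.,
 pchkPtFreq=31104000.,
 &
\end{verbatim}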
   
 %%% Local Variables:  
 %%% mode: latex  
 %%% TeX-master: t  
 %%% End:  
