%\section{Getting started}

We believe the best way to familiarize yourself with the
model is to run the case study examples provided with the base
version. Information on how to obtain, compile, and run the code is
found here, as well as a brief description of the model directory
structure and the case study examples. Information is also provided
here on how to customize the code when you are ready to try
implementing the configuration you have in mind.  The code and
algorithm are described more fully in chapters
\ref{chap:discretization} and \ref{chap:sarch}.
\section{Where to find information}
\label{sec:whereToFindInfo}
\begin{rawhtml}
<!-- CMIREDIR:whereToFindInfo: -->
\end{rawhtml}

There is a web-archived support mailing list for the model that
you can email at \texttt{MITgcm-support@mitgcm.org} or browse at:
\begin{rawhtml} <A href=http://mitgcm.org/mailman/listinfo/mitgcm-support/ target="idontexist"> \end{rawhtml}
\begin{verbatim}
http://mitgcm.org/mailman/listinfo/mitgcm-support/
http://mitgcm.org/pipermail/mitgcm-support/
\end{verbatim}
\begin{rawhtml} </A> \end{rawhtml}

\section{Obtaining the code}
\label{sec:obtainingCode}
\begin{rawhtml}
<!-- CMIREDIR:obtainingCode: -->
\end{rawhtml}

MITgcm can be downloaded from our system by following
the instructions below. As a courtesy we ask that you send e-mail to us at
\texttt{MITgcm-support@mitgcm.org}.  You can download the model in two ways:

\begin{enumerate}
\item Using CVS software, which makes it easy to keep your copy of the
  model up to date and provides easy support for maintenance updates.

\item Using a tar file downloaded from the MITgcm web site.

\end{enumerate}

\subsection{Method 1 - Checkout from CVS}
\label{sec:cvs_checkout}

If CVS is available on your system, we strongly encourage you to use it. CVS
provides an efficient and elegant way of organizing your code and keeping
track of your changes. If CVS is not available on your machine, you can also
download a tar file.

Before you can use CVS, the following environment variable(s) should
be set within your shell.  For a csh or tcsh shell, put the following
\begin{verbatim}
% setenv CVSROOT :pserver:cvsanon@mitgcm.org:/u/gcmpack
\end{verbatim}
in your \texttt{.cshrc} or \texttt{.tcshrc} file.  For bash or sh
shells, put:
\begin{verbatim}
% export CVSROOT=':pserver:cvsanon@mitgcm.org:/u/gcmpack'
\end{verbatim}
in your \texttt{.profile} or \texttt{.bashrc} file.
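
A quick way to confirm that the variable is visible to a new shell is
simply to echo it back; the value shown should match the anonymous server
setting given above:
\begin{verbatim}
% echo $CVSROOT
:pserver:cvsanon@mitgcm.org:/u/gcmpack
\end{verbatim}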

To get MITgcm through CVS, first register with the MITgcm CVS server
using the command:
\begin{verbatim}
% cvs login ( CVS password: cvsanon )
\end{verbatim}
You only need to do a ``cvs login'' once.

To obtain the latest sources type:
\begin{verbatim}
% cvs co -P MITgcm
\end{verbatim}
or to get a specific release type:
\begin{verbatim}
% cvs co -P -r checkpoint52i_post MITgcm
\end{verbatim}
The CVS command ``\texttt{cvs co}'' is an abbreviation of the full
``\texttt{cvs checkout}'' command, and the ``-P'' option (\texttt{cvs co -P})
prevents the download of unnecessary empty directories.

The MITgcm web site contains further directions concerning the source
code and CVS.  It also contains a web interface to our CVS archive so
that one may easily view the state of files, revisions, and other
development milestones:
\begin{rawhtml} <A href="http://mitgcm.org/viewvc/MITgcm/MITgcm/" target="idontexist"> \end{rawhtml}
\begin{verbatim}
http://mitgcm.org/viewvc/MITgcm/MITgcm/
\end{verbatim}
\begin{rawhtml} </A> \end{rawhtml}

As a convenience, the MITgcm CVS server contains aliases which are
named subsets of the codebase.  These aliases can be especially
helpful when used over slow internet connections or on machines with
restricted storage space.  Table \ref{tab:cvsModules} contains a list
of the CVS aliases.
\begin{table}[htb]
  \centering
  \begin{tabular}[htb]{|lp{3.25in}|}\hline
    \textbf{Alias Name}    &  \textbf{Information (directories) Contained}  \\\hline
    \texttt{MITgcm\_code}  &  Only the source code -- none of the verification examples.  \\
    \texttt{MITgcm\_verif\_basic}
    &  Source code plus a small set of the verification examples
    (\texttt{global\_ocean.90x40x15}, \texttt{aim.5l\_cs}, \texttt{hs94.128x64x5},
    \texttt{front\_relax}, and \texttt{plume\_on\_slope}).  \\
    \texttt{MITgcm\_verif\_atmos}  &  Source code plus all of the atmospheric examples.  \\
    \texttt{MITgcm\_verif\_ocean}  &  Source code plus all of the oceanic examples.  \\
    \texttt{MITgcm\_verif\_all}    &  Source code plus all of the
    verification examples. \\\hline
  \end{tabular}
  \caption{MITgcm CVS Modules}
  \label{tab:cvsModules}
\end{table}

The checkout process creates a directory called \texttt{MITgcm}. If
the directory \texttt{MITgcm} exists, this command updates your code
based on the repository. Each directory in the source tree contains a
directory \texttt{CVS}. This information is required by CVS to keep
track of your file versions with respect to the repository. Don't edit
the files in \texttt{CVS}!  You can also use CVS to download code
updates.  More extensive information on using CVS for maintaining
MITgcm code can be found
\begin{rawhtml} <A href="http://mitgcm.org/public/using_cvs.html" target="idontexist"> \end{rawhtml}
here
\begin{rawhtml} </A> \end{rawhtml}
.
It is important to note that the CVS aliases in Table
\ref{tab:cvsModules} cannot be used in conjunction with the CVS
\texttt{-d DIRNAME} option.  However, the \texttt{MITgcm} directories
they create can be changed to a different name following the check-out:
\begin{verbatim}
   %  cvs co -P MITgcm_verif_basic
   %  mv MITgcm MITgcm_verif_basic
\end{verbatim}

\subsubsection{Upgrading from an earlier version}

If you already have an earlier version of the code you can ``upgrade''
your copy instead of downloading the entire repository again. First,
change into the top-level \texttt{MITgcm} directory:
\begin{verbatim}
% cd MITgcm
\end{verbatim}
and then issue the cvs update command such as:
\begin{verbatim}
% cvs -q update -d -P -r checkpoint52i_post
\end{verbatim}
This will update the ``tag'' to ``checkpoint52i\_post'', add any new
directories (-d) and remove any empty directories (-P). The -q option
(be quiet) reduces the number of messages you'll see in the terminal.
If you have modified the code prior to upgrading, CVS
will try to merge your changes with the updates and will flag any
conflicts with a ``C'' next to the affected file, for example:
\begin{verbatim}
C model/src/ini_parms.F
\end{verbatim}
If the list of conflicts scrolled off the screen, you can re-issue the
cvs update command and it will report the conflicts. Conflicts are
indicated in the code by the delimiters ``$<<<<<<<$'', ``======='' and
``$>>>>>>>$''. For example,
{\small
\begin{verbatim}
<<<<<<< ini_parms.F
     & bottomDragLinear,myOwnBottomDragCoefficient,
=======
     & bottomDragLinear,bottomDragQuadratic,
>>>>>>> 1.18
\end{verbatim}
}
means that you added ``myOwnBottomDragCoefficient'' to a namelist at
the same time and place that we added ``bottomDragQuadratic''. You
need to resolve this conflict and in this case the line should be
changed to:
{\small
\begin{verbatim}
     & bottomDragLinear,bottomDragQuadratic,myOwnBottomDragCoefficient,
\end{verbatim}
}
and the lines with the delimiters ($<<<<<<$,======,$>>>>>>$) should be
deleted.
Unless you are making modifications which exactly parallel
developments we make, these types of conflicts should be rare.

However, if you have run into a problem that ``we have already fixed in the
latest code'' and we haven't made a ``tag'' or ``release'' since that
patch then you'll need to get the latest code:
\begin{verbatim}
% cvs -q update -d -P -A
\end{verbatim}
Unlike the ``check-out'' and ``update'' procedures above, there is no
``tag'' or release name. The -A tells CVS to upgrade to the
very latest version of the code. As a rule we do not recommend this, since
you might upgrade while we are part-way through checking in new code, so
that you may only have part of a patch. Using the latest code
also means we can't tell what version of the code you are working
with. So please be sure you understand what you're doing.

\subsection{Method 2 - Tar file download}
\label{sec:conventionalDownload}

If you do not have CVS on your system, you can download the model as a
tar file from the web site at:
\begin{rawhtml} <A href=http://mitgcm.org/download/ target="idontexist"> \end{rawhtml}
\begin{verbatim}
http://mitgcm.org/download/
\end{verbatim}
\begin{rawhtml} </A> \end{rawhtml}
The tar file still contains CVS information which we urge you not to
delete; even if you do not use CVS yourself the information can help
us if you should need to send us your copy of the code.  If a recent
tar file does not exist, then please contact the developers through
the
\begin{rawhtml} <A href="mailto:MITgcm-support@mitgcm.org"> \end{rawhtml}
MITgcm-support@mitgcm.org
\begin{rawhtml} </A> \end{rawhtml}
mailing list.
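
Once downloaded, the archive is unpacked in the usual way; the file name
below is only illustrative, so substitute the name of the tar file you
actually downloaded (the unpacked directory is normally called
\texttt{MITgcm}):
\begin{verbatim}
% tar -xzf MITgcm.tar.gz
% cd MITgcm
\end{verbatim}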

\section{Model and directory structure}
\begin{rawhtml}
<!-- CMIREDIR:directory_structure: -->
\end{rawhtml}

The ``numerical'' model is contained within an execution environment
support wrapper. This wrapper is designed to provide a general
framework for grid-point models. MITgcmUV is a specific numerical
model that uses the framework. Under this structure the model is split
into execution environment support code and conventional numerical
model code. The execution environment support code is held under the
\texttt{eesupp} directory. The grid point model code is held under the
\texttt{model} directory. Code execution actually starts in the
\texttt{eesupp} routines and not in the \texttt{model} routines. For
this reason the top-level \texttt{MAIN.F} is in the
\texttt{eesupp/src} directory. In general, end-users should not need
to worry about this level. The top-level routine for the numerical
part of the code is in \texttt{model/src/THE\_MODEL\_MAIN.F}. Here is
a brief description of the directory structure of the model under the
root tree (a detailed description is given in section 3: Code
structure).

\begin{itemize}

\item \texttt{doc}: contains brief documentation notes.

\item \texttt{eesupp}: contains the execution environment source code.
  Also subdivided into two subdirectories \texttt{inc} and
  \texttt{src}.

\item \texttt{model}: this directory contains the main source code.
  Also subdivided into two subdirectories \texttt{inc} and
  \texttt{src}.

\item \texttt{pkg}: contains the source code for the packages. Each
  package corresponds to a subdirectory. For example, \texttt{gmredi}
  contains the code related to the Gent-McWilliams/Redi scheme and
  \texttt{aim} the code relative to the atmospheric intermediate
  physics. The packages are described in detail in chapter
  \ref{chap:packagesI}.

\item \texttt{tools}: this directory contains various useful tools.
  For example, \texttt{genmake2} is a shell script that should be used
  to generate your makefile. The directory \texttt{adjoint} contains
  the makefile specific to the Tangent linear and Adjoint Compiler
  (TAMC) that generates the adjoint code; the latter is described in
  detail in part \ref{chap.ecco}.  This directory also contains the
  subdirectory \texttt{build\_options}, which contains the ``optfiles''
  with the compiler options for the different compilers and machines
  that can run MITgcm.

\item \texttt{utils}: this directory contains various utilities. The
  subdirectory \texttt{knudsen2} contains code and a makefile that
  compute coefficients of the polynomial approximation to the knudsen
  formula for an ocean nonlinear equation of state. The
  \texttt{matlab} subdirectory contains matlab scripts for reading
  model output directly into matlab. \texttt{scripts} contains C-shell
  post-processing scripts for joining processor-based and tile-based
  model output. The subdirectory \texttt{exch2} contains the code
  needed for the exch2 package to work with different combinations of
  domain decompositions.

\item \texttt{verification}: this directory contains the model
  examples. See section \ref{sec:modelExamples}.

\item \texttt{jobs}: contains sample job scripts for running MITgcm.

\item \texttt{lsopt}: line search code used for optimization.

\item \texttt{optim}: interface between MITgcm and the line search code.

\end{itemize}
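
For orientation, a top-level listing of a freshly checked-out copy
typically looks something like the following; the exact contents depend on
the code version and on which CVS alias (if any) was used:
\begin{verbatim}
% ls MITgcm
CVS  doc  eesupp  jobs  lsopt  model  optim  pkg  tools  utils  verification
\end{verbatim}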

\section[Building MITgcm]{Building the code}
\label{sec:buildingCode}
\begin{rawhtml}
<!-- CMIREDIR:buildingCode: -->
\end{rawhtml}

To compile the code, we use the \texttt{make} program. This uses a
file (\texttt{Makefile}) that allows us to pre-process source files and
specify compiler and optimization options; it also figures out any
file dependencies. We supply a script (\texttt{genmake2}), described
in section \ref{sec:genmake}, that automatically creates the
\texttt{Makefile} for you. You then need to build the dependencies and
compile the code.

As an example, assume that you want to build and run experiment
\texttt{verification/exp2}. There are multiple ways and places to
actually do this but here let's build the code in
\texttt{verification/exp2/build}:
\begin{verbatim}
% cd verification/exp2/build
\end{verbatim}
First, build the \texttt{Makefile}:
\begin{verbatim}
% ../../../tools/genmake2 -mods=../code
\end{verbatim}
The command line option tells \texttt{genmake2} to override model source
code with any files in the directory \texttt{../code/}.
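
The \texttt{../code/} directory of a verification experiment typically
holds a small set of experiment-specific files; the listing below is only
indicative and the exact files differ from experiment to experiment:
\begin{verbatim}
% ls ../code
CPP_OPTIONS.h  SIZE.h  packages.conf
\end{verbatim}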

On many systems, the \texttt{genmake2} program will be able to
automatically recognize the hardware, find compilers and other tools
within the user's path (``\texttt{echo \$PATH}''), and then choose an
appropriate set of options from the files (``optfiles'') contained in
the \texttt{tools/build\_options} directory.  Under some
circumstances, a user may have to create a new ``optfile'' in order to
specify the exact combination of compiler, compiler flags, libraries,
and other options necessary to build a particular configuration of
MITgcm.  In such cases, it is generally helpful to read the existing
``optfiles'' and mimic their syntax.

Through the MITgcm-support list, the MITgcm developers are willing to
provide help writing or modifying ``optfiles''.  We also encourage users
to post new ``optfiles'' (particularly ones for new machines or
architectures) to the
\begin{rawhtml} <A href="mailto:MITgcm-support@mitgcm.org"> \end{rawhtml}
MITgcm-support@mitgcm.org
\begin{rawhtml} </A> \end{rawhtml}
list.
   
 \item \textit{aim.5l\_XZ\_Equatorial\_Slice} - Intermediate Atmospheric  
 physics, equatorial Slice configuration.  
 2D (X-Z).  
   
 \item \textit{aim.5l\_Equatorial\_Channel} - Intermediate Atmospheric  
 physics. 3D Equatorial Channel configuration.  
   
 \item \textit{aim.5l\_LatLon} - Intermediate Atmospheric physics.  
 Global configuration, on latitude longitude grid with 128x64x5 grid points  
 ($2.8^\circ{\rm degree}$ resolution).  
   
 \item \textit{adjustment.128x64x1} Barotropic adjustment  
 problem on latitude longitude grid with 128x64 grid points ($2.8^\circ{\rm degree}$ resolution).  
   
 \item \textit{adjustment.cs-32x32x1}  
 Barotropic adjustment  
 problem on cube sphere grid with 32x32 points per face ( roughly  
 $2.8^\circ{\rm degree}$ resolution).  
   
 \item \textit{advect\_cs} Two-dimensional passive advection test on  
 cube sphere grid.  
   
 \item \textit{advect\_xy} Two-dimensional (horizontal plane) passive advection  
 test on Cartesian grid.  
   
 \item \textit{advect\_yz} Two-dimensional (vertical plane) passive advection test on Cartesian grid.  
   
 \item \textit{carbon} Simple passive tracer experiment. Includes derivative  
 calculation. Described in detail in section \ref{sect:eg-carbon-ad}.  
   
 \item \textit{flt\_example} Example of using float package.  
   
 \item \textit{global\_ocean.90x40x15} Global circulation with  
 GM, flux boundary conditions and poles.  
   
 \item \textit{global\_ocean\_pressure} Global circulation in pressure  
   coordinate (non-Boussinesq ocean model). Described in detail in  
   section \ref{sect:eg-globalpressure}.  
   
 \item \textit{solid-body.cs-32x32x1} Solid body rotation test for cube sphere  
 grid.  
   
 \end{enumerate}  
   
 \subsection{Directory structure of model examples}  
   
 Each example directory has the following subdirectories:  
   
 \begin{itemize}  
 \item \textit{code}: contains the code particular to the example. At a  
 minimum, this directory includes the following files:  
   
 \begin{itemize}  
 \item \textit{code/CPP\_EEOPTIONS.h}: declares CPP keys relative to the  
 ``execution environment'' part of the code. The default version is located  
 in \textit{eesupp/inc}.  
   
 \item \textit{code/CPP\_OPTIONS.h}: declares CPP keys relative to the  
 ``numerical model'' part of the code. The default version is located in  
 \textit{model/inc}.  
   
 \item \textit{code/SIZE.h}: declares size of underlying computational grid.  
 The default version is located in \textit{model/inc}.  
 \end{itemize}  
   
 In addition, other include files and subroutines might be present in \textit{%  
 code} depending on the particular experiment. See section 2 for more details.  
   
 \item \textit{input}: contains the input data files required to run the  
 example. At a minimum, the \textit{input} directory contains the following  
 files:  
   
 \begin{itemize}  
 \item \textit{input/data}: this file, written as a namelist, specifies the  
 main parameters for the experiment.  
   
 \item \textit{input/data.pkg}: contains parameters relative to the packages  
 used in the experiment.  
   
 \item \textit{input/eedata}: this file contains ``execution environment''  
 data. At present, this consists of a specification of the number of threads  
 to use in $X$ and $Y$ under multithreaded execution.  
 \end{itemize}  
   
 In addition, you will also find in this directory the forcing and topography  
 files as well as the files describing the initial state of the experiment.  
 This varies from experiment to experiment. See section 2 for more details.  
   
 \item \textit{results}: this directory contains the output file \textit{%  
 output.txt} produced by the simulation example. This file is useful for  
 comparison with your own output when you run the experiment.  
 \end{itemize}  
   
 Once you have chosen the example you want to run, you are ready to compile  
 the code.  
   
 \section{Building the code}  
 \label{sect:buildingCode}  
   
 To compile the code, we use the {\em make} program. This uses a file  
 ({\em Makefile}) that allows us to pre-process source files, specify  
 compiler and optimization options and also figures out any file  
 dependencies. We supply a script ({\em genmake}), described in section  
 \ref{sect:genmake}, that automatically creates the {\em Makefile} for  
 you. You then need to build the dependencies and compile the code.  
   
 As an example, let's assume that you want to build and run experiment  
 \textit{verification/exp2}. The are multiple ways and places to actually  
 do this but here let's build the code in  
 \textit{verification/exp2/input}:  
 \begin{verbatim}  
 % cd verification/exp2/input  
 \end{verbatim}  
 First, build the {\em Makefile}:  
 \begin{verbatim}  
 % ../../../tools/genmake -mods=../code  
 \end{verbatim}  
 The command line option tells {\em genmake} to override model source  
 code with any files in the directory {\em ./code/}.  

To specify an optfile to \texttt{genmake2}, the syntax is:
\begin{verbatim}
% ../../../tools/genmake2 -mods=../code -of /path/to/optfile
\end{verbatim}

Once a \texttt{Makefile} has been generated, we create the
dependencies with the command:
\begin{verbatim}
% make depend
\end{verbatim}
This modifies the \texttt{Makefile} by attaching a (usually long)
list of files upon which other files depend. The purpose of this is to
reduce re-compilation if and when you start to modify the code. The
{\tt make depend} command also creates links from the model source to
this directory.  It is important to note that the {\tt make depend}
stage will occasionally produce warnings or errors since the
dependency parsing tool is unable to find all of the necessary header
files (\textit{eg.}  \texttt{netcdf.inc}).  In these circumstances, it
is usually OK to ignore the warnings/errors and proceed to the next
step.

Next one can compile the code using:
\begin{verbatim}
% make
\end{verbatim}
The {\tt make} command creates an executable called \texttt{mitgcmuv}.
Additional make ``targets'' are defined within the makefile to aid in
the production of adjoint and other versions of MITgcm.  On SMP
(shared multi-processor) systems, the build process can often be sped
up appreciably using the command:
\begin{verbatim}
% make -j 2
\end{verbatim}
where the ``2'' can be replaced with a number that corresponds to the
number of CPUs available.

Now you are ready to run the model. General instructions for doing so are
given in section \ref{sec:runModel}. Here, we can run the model by
first creating links to all the input files:
\begin{verbatim}
ln -s ../input/* .
\end{verbatim}
and then calling the executable with:
\begin{verbatim}
./mitgcmuv > output.txt
\end{verbatim}
where we are re-directing the stream of text output to the file
\texttt{output.txt}.
   
413    
414  \subsection{Building/compiling the code elsewhere}  \subsection{Building/compiling the code elsewhere}
415    
416  In the example above (section \ref{sect:buildingCode}) we built the  In the example above (section \ref{sec:buildingCode}) we built the
417  executable in the {\em input} directory of the experiment for  executable in the {\em input} directory of the experiment for
418  convenience. You can also configure and compile the code in other  convenience. You can also configure and compile the code in other
419  locations, for example on a scratch disk with out having to copy the  locations, for example on a scratch disk with out having to copy the
420  entire source tree. The only requirement to do so is you have {\tt  entire source tree. The only requirement to do so is you have {\tt
421  genmake} in your path or you know the absolute path to {\tt genmake}.    genmake2} in your path or you know the absolute path to {\tt
422      genmake2}.
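
For example, assuming the source tree is in {\em ~/MITgcm} (as in the
examples below), an sh or bash user could add the \texttt{tools} directory
to the search path with:
\begin{verbatim}
% export PATH="$PATH:$HOME/MITgcm/tools"
\end{verbatim}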

The following sections outline some possible methods of organizing
your source and data.

\subsubsection{Building from the {\em ../code} directory}

This is just as simple as building in the {\em input/} directory:
\begin{verbatim}
% cd verification/exp2/code
% ../../../tools/genmake2
% make depend
% make
\end{verbatim}

You can also build the code in a new directory, for example a {\em build}
directory within {\em verification/exp2/}:
\begin{verbatim}
% cd verification/exp2
% mkdir build
% cd build
% ../../../tools/genmake2 -mods=../code
% make depend
% make
\end{verbatim}

To run the model here, link the input files into this directory and then
call the executable as before:
\begin{verbatim}
% ln -s ../input/* .
% ./mitgcmuv > output.txt
\end{verbatim}

\subsubsection{Building on a scratch disk}

Model object files and output data can use up large amounts of disk
space so it is often the case that you will be operating on a large
scratch disk. Assuming the model source is in {\em ~/MITgcm}, then the
following commands will build the model in {\em /scratch/exp2-run1}:
\begin{verbatim}
% cd /scratch/exp2-run1
% ~/MITgcm/tools/genmake2 -rootdir=~/MITgcm \
    -mods=~/MITgcm/verification/exp2/code
% make depend
% make
\end{verbatim}

As before, you can instead build in a separate {\em build} directory so
that the build products are kept together with the rest of the files for
the one experiment:
\begin{verbatim}
% cd /scratch/exp2
% mkdir build
% cd build
% ~/MITgcm/tools/genmake2 -rootdir=~/MITgcm \
    -mods=~/MITgcm/verification/exp2/code
% make depend
% make
% cd ../
\end{verbatim}


\subsection{Using \texttt{genmake2}}
\label{sec:genmake}

To compile the code, first use the program \texttt{genmake2} (located
in the \texttt{tools} directory) to generate a Makefile.
\texttt{genmake2} is a shell script written to work with all
``sh''--compatible shells including bash v1, bash v2, and Bourne.
%Internally, \texttt{genmake2} determines the locations of needed
%files, the compiler, compiler options, libraries, and Unix tools.  It
%relies upon a number of ``optfiles'' located in the
%\texttt{tools/build\_options} directory.
\texttt{genmake2} parses information from the following sources:
\begin{description}
\item[-] a {\em genmake\_local} file if one is found in the current
  directory
\item[-] command-line options
\item[-] an ``options file'' as specified by the command-line option
  \texttt{--optfile=/PATH/FILENAME}
\item[-] a {\em packages.conf} file (if one is found) with the
  specific list of packages to compile. The search path for the
  file {\em packages.conf} is, first, the current directory and
  then each of the ``MODS'' directories in the given order (see below).
\end{description}
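
As an illustration, a {\em packages.conf} file is simply a plain-text list
of package or package-group names, one per line.  The names below are only
examples; the entries must correspond to subdirectories of \texttt{pkg} or
to groups defined in {\em pkg/pkg\_groups}:
\begin{verbatim}
gfd
gmredi
mnc
\end{verbatim}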

\subsubsection{Optfiles in \texttt{tools/build\_options} directory:}

The purpose of the optfiles is to provide all the compilation options
for particular ``platforms'' (where ``platform'' roughly means the
combination of the hardware and the compiler) and code configurations.
Given the combinations of possible compilers and library dependencies
({\it eg.}  MPI and NetCDF) there may be numerous optfiles available
for a single machine.  The naming scheme for the majority of the
optfiles shipped with the code is
\begin{center}
  {\bf OS\_HARDWARE\_COMPILER }
\end{center}
where
\begin{description}
\item[OS] is the name of the operating system (generally the
  lower-case output of the {\tt 'uname'} command)
\item[HARDWARE] is a string that describes the CPU type and
  corresponds to output from the  {\tt 'uname -m'} command:
  \begin{description}
  \item[ia32] is for ``x86'' machines such as i386, i486, i586, i686,
    and athlon
  \item[ia64] is for Intel IA64 systems (eg. Itanium, Itanium2)
  \item[amd64] is for AMD x86\_64 systems
  \item[ppc] is for Mac PowerPC systems
  \end{description}
\item[COMPILER] is the compiler name (generally, the name of the
  FORTRAN executable)
\end{description}
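
For example, this naming scheme yields optfile names such as the
following; these particular names are given only as an illustration, and
the definitive list is simply the contents of
\texttt{tools/build\_options} in your copy of the code:
\begin{verbatim}
linux_ia32_g77
linux_amd64_gfortran
\end{verbatim}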

In many cases, the default optfiles are sufficient and will result in
usable Makefiles.  However, for some machines or code configurations,
new ``optfiles'' must be written. To create a new optfile, it is
generally best to start with one of the defaults and modify it to suit
your needs.  Like \texttt{genmake2}, the optfiles are all written
using a simple ``sh''--compatible syntax.  While nearly all variables
used within \texttt{genmake2} may be specified in the optfiles, the
critical ones that should be defined are:

\begin{description}
\item[FC] the FORTRAN compiler (executable) to use
\item[DEFINES] the command-line DEFINE options passed to the compiler
\item[CPP] the C pre-processor to use
\item[NOOPTFLAGS] option flags for special files that should not be
  optimized
\end{description}

For example, the optfile for a typical Red Hat Linux machine (``ia32''
architecture) using the GCC (g77) compiler is
\begin{verbatim}
FC=g77
DEFINES='-D_BYTESWAPIO -DWORDLENGTH=4'
CPP='cpp  -traditional -P'
NOOPTFLAGS='-O0'
#  For IEEE, use the "-ffloat-store" option
if test "x$IEEE" = x ; then
    FFLAGS='-Wimplicit -Wunused -Wuninitialized'
    FOPTIM='-O3 -malign-double -funroll-loops'
else
    FFLAGS='-Wimplicit -Wunused -ffloat-store'
    FOPTIM='-O0 -malign-double'
fi
\end{verbatim}

If you write an optfile for an unrepresented machine or compiler, you
are strongly encouraged to submit the optfile to the MITgcm project
for inclusion.  Please send the file to the
\begin{rawhtml} <A href="mailto:MITgcm-support@mitgcm.org"> \end{rawhtml}
\begin{center}
  MITgcm-support@mitgcm.org
\end{center}
\begin{rawhtml} </A> \end{rawhtml}
mailing list.

\subsubsection{Command-line options:}

In addition to the optfiles, \texttt{genmake2} supports a number of
helpful command-line options.  A complete list of these options can be
obtained from:
\begin{verbatim}
% genmake2 -h
\end{verbatim}

The most important command-line options are:
\begin{description}

\item[\texttt{--optfile=/PATH/FILENAME}] specifies the optfile that
  should be used for a particular build.

  If no ``optfile'' is specified (either through the command line or the
  MITGCM\_OPTFILE environment variable), genmake2 will try to make a
  reasonable guess from the list provided in {\em
    tools/build\_options}.  The method used for making this guess is
  to first determine the combination of operating system and hardware
  (eg. ``linux\_ia32'') and then find a working FORTRAN compiler within
  the user's path.  When these three items have been identified,
  genmake2 will try to find an optfile that has a matching name.

\item[\texttt{--mods='DIR1 DIR2 DIR3 ...'}] specifies a list of
  directories containing ``modifications''.  These directories contain
  files with names that may (or may not) exist in the main MITgcm
  source tree but will be overridden by any identically-named sources
  within the ``MODS'' directories.

  The order of precedence for this ``name-hiding'' is as follows:
  \begin{itemize}
  \item ``MODS'' directories (in the order given)
  \item Packages either explicitly specified or provided by default
    (in the order given)
  \item Packages included due to package dependencies (in the order
    that the package dependencies are parsed)
  \item The ``standard dirs'' (which may have been specified by the
    ``-standarddirs'' option)
  \end{itemize}

\item[\texttt{--pgroups=/PATH/FILENAME}] specifies the file
  where package groups are defined. If not set, the package-groups
  definition will be read from {\em pkg/pkg\_groups}.
  It also contains the default list of packages (defined
  as the group ``{\it default\_pkg\_list}''), which is used
  when no specific package list ({\em packages.conf})
  is found in the current directory or in any ``MODS'' directory.

\item[\texttt{--pdepend=/PATH/FILENAME}] specifies the dependency file
  used for packages.

  If not specified, the default dependency file {\em pkg/pkg\_depend}
  is used.  The syntax for this file is parsed on a line-by-line basis
  where each line contains either a comment (``\#'') or a simple
  ``PKGNAME1 (+|-)PKGNAME2'' pairwise rule where the ``+'' or ``-'' symbol
  specifies a ``must be used with'' or a ``must not be used with''
  relationship, respectively.  If no rule is specified, then it is
  assumed that the two packages are compatible and will function
  either with or without each other.

\item[\texttt{--adof=/path/to/file}] specifies the ``adjoint'' or
  automatic differentiation options file to be used.  The file is
  analogous to the ``optfile'' defined above but it specifies
  information for the AD build process.

  The default file is located in {\em
    tools/adjoint\_options/adjoint\_default} and it defines the ``TAF''
  and ``TAMC'' compilers.  An alternate version is also available at
  {\em tools/adjoint\_options/adjoint\_staf} that selects the newer
  ``STAF'' compiler.  As with any compilers, it is helpful to have their
  directories listed in your {\tt \$PATH} environment variable.

\item[\texttt{--mpi}] This option enables certain MPI features (using
  CPP \texttt{\#define}s) within the code and is necessary for MPI
  builds (see Section \ref{sec:mpi-build}).

\item[\texttt{--make=/path/to/gmake}] Due to the poor handling of
  soft-links and other bugs common with the \texttt{make} versions
  provided by commercial Unix vendors, GNU \texttt{make} (sometimes
  called \texttt{gmake}) should be preferred.  This option provides a
  means for specifying the make executable to be used.

\item[\texttt{--bash=/path/to/sh}] On some (usually older UNIX)
  machines, the ``bash'' shell is unavailable.  To run on these
  systems, \texttt{genmake2} can be invoked using an ``sh'' (that is,
  a Bourne, POSIX, or compatible) shell.  The syntax in these
  circumstances is:
  \begin{center}
    \texttt{\%  /bin/sh genmake2 -bash=/bin/sh [...options...]}
  \end{center}
  where \texttt{/bin/sh} can be replaced with the full path and name
  of the desired shell.

\end{description}
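
As a concrete sketch, several of the options above can be combined in a
single \texttt{genmake2} call; the optfile named here is only an example
and should be replaced by one suited to your machine:
\begin{verbatim}
% ../../../tools/genmake2 -mods=../code -mpi \
    -of=../../../tools/build_options/linux_amd64_gfortran
\end{verbatim}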

\subsection{Building with MPI}
\label{sec:mpi-build}

Building MITgcm to use MPI libraries can be complicated due to the
variety of different MPI implementations available, their dependencies
on or interactions with different compilers, and their often ad-hoc
locations within file systems.  For these reasons, it is generally a
good idea to start by finding and reading the documentation for your
machine(s) and, if necessary, seeking help from your local systems
administrator.

The steps for building MITgcm with MPI support are:
\begin{enumerate}

\item Determine the locations of your MPI-enabled compiler and/or MPI
  libraries and put them into an options file as described in Section
  \ref{sec:genmake}.  One can start with one of the examples in:
  \begin{rawhtml} <A
    href="http://mitgcm.org/viewvc/MITgcm/MITgcm/tools/build_options/">
  \end{rawhtml}
  \begin{center}
    \texttt{MITgcm/tools/build\_options/}
  \end{center}
  \begin{rawhtml} </A> \end{rawhtml}
  such as \texttt{linux\_ia32\_g77+mpi\_cg01} or
  \texttt{linux\_ia64\_efc+mpi} and then edit it to suit the machine at
  hand.  You may need help from your user guide or local systems
  administrator to determine the exact location of the MPI libraries.
  If libraries are not installed, MPI implementations and related
  tools are available including:
  \begin{itemize}
  \item \begin{rawhtml} <A
      href="http://www-unix.mcs.anl.gov/mpi/mpich/">
    \end{rawhtml}
    MPICH
    \begin{rawhtml} </A> \end{rawhtml}

  \item \begin{rawhtml} <A
      href="http://www.lam-mpi.org/">
    \end{rawhtml}
    LAM/MPI
    \begin{rawhtml} </A> \end{rawhtml}

  \item \begin{rawhtml} <A
      href="http://www.osc.edu/~pw/mpiexec/">
    \end{rawhtml}
    MPIexec
    \begin{rawhtml} </A> \end{rawhtml}
  \end{itemize}

\item Build the code with the \texttt{genmake2} \texttt{-mpi} option
  (see Section \ref{sec:genmake}) using commands such as:
{\footnotesize \begin{verbatim}
  %  ../../../tools/genmake2 -mods=../code -mpi -of=YOUR_OPTFILE
  %  make depend
  %  make
\end{verbatim} }

\item Run the code with the appropriate MPI ``run'' or ``exec''
  program provided with your particular implementation of MPI.
  Typical MPI packages such as MPICH will use something like:
\begin{verbatim}
  %  mpirun -np 4 -machinefile mf ./mitgcmuv
\end{verbatim}
  Slightly more complicated scripts may be needed for many machines
  since execution of the code may be controlled by both the MPI
  library and a job scheduling and queueing system such as PBS,
  LoadLeveler, Condor, or any of a number of similar tools.  A few
  example scripts (those used for our \begin{rawhtml} <A
    href="http://mitgcm.org/public/testing.html"> \end{rawhtml}regular
  verification runs\begin{rawhtml} </A> \end{rawhtml}) are available
  at:
  \begin{rawhtml} <A
    href="http://mitgcm.org/viewvc/MITgcm/MITgcm/tools/example_scripts/">
  \end{rawhtml}
  {\footnotesize \tt
    http://mitgcm.org/viewvc/MITgcm/MITgcm/tools/example\_scripts/ }
  \begin{rawhtml} </A> \end{rawhtml}
  or at:
  \begin{rawhtml} <A
    href="http://mitgcm.org/viewvc/MITgcm/MITgcm_contrib/test_scripts/">
  \end{rawhtml}
  {\footnotesize \tt
    http://mitgcm.org/viewvc/MITgcm/MITgcm\_contrib/test\_scripts/ }
  \begin{rawhtml} </A> \end{rawhtml}

\end{enumerate}

An example of the above process on the MITgcm cluster (``cg01'') using
the GNU g77 compiler and the mpich MPI library is:

{\footnotesize \begin{verbatim}
  %  cd MITgcm/verification/exp5
  %  mkdir build
  %  cd build
  %  ../../../tools/genmake2 -mpi -mods=../code \
       -of=../../../tools/build_options/linux_ia32_g77+mpi_cg01
  %  make depend
  %  make
  %  cd ../input
  %  /usr/local/pkg/mpi/mpi-1.2.4..8a-gm-1.5/g77/bin/mpirun.ch_gm \
       -machinefile mf --gm-kill 5 -v -np 2  ../build/mitgcmuv
\end{verbatim} }
816    
817    \section[Running MITgcm]{Running the model in prognostic mode}
818    \label{sec:runModel}
819    \begin{rawhtml}
820    <!-- CMIREDIR:runModel: -->
821    \end{rawhtml}
822    
If compilation finished successfully (section \ref{sec:buildingCode})
824    then an executable called \texttt{mitgcmuv} will now exist in the
825    local directory.
826    
To run the model as a single process (\textit{i.e.} not in parallel)
simply type:
\begin{verbatim}
% ./mitgcmuv
\end{verbatim}
The above command will spew out many lines of text to
your screen.  This output contains details such as parameter values as
well as diagnostics such as mean kinetic energy, largest CFL number,
etc. It is worth keeping this text output with the binary output so we
normally re-direct the \texttt{stdout} stream as follows:
\begin{verbatim}
% ./mitgcmuv > output.txt
\end{verbatim}
In the event that the model encounters an error and stops, it is very
helpful to include the last few lines of this \texttt{output.txt} file
along with the (\texttt{stderr}) error message within any bug reports.

For the example experiments in \texttt{verification}, an example of the
output is kept in \texttt{results/output.txt} for comparison. You can
compare your \texttt{output.txt} with the corresponding one for that
experiment to check that the set-up works.
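
As a quick, informal check (a sketch only; the paths assume you ran the
model from the experiment's \texttt{input} directory, and bit-for-bit
agreement is not expected across different compilers or platforms, only
closely comparable diagnostics):
\begin{verbatim}
% tail -20 output.txt
% diff output.txt ../results/output.txt | less
\end{verbatim}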
850    
851    
852    
\subsection{Output files}

The model produces various output files and, when using \texttt{mnc},
sometimes even directories.  Depending upon the I/O package(s)
857    selected at compile time (either \texttt{mdsio} or \texttt{mnc} or
858    both as determined by \texttt{code/packages.conf}) and the run-time
859    flags set (in \texttt{input/data.pkg}), the following output may
860    appear.
861    
862    
863    \subsubsection{MDSIO output files}
864    
865    The ``traditional'' output files are generated by the \texttt{mdsio}
866    package.  At a minimum, the instantaneous ``state'' of the model is
867    written out, which is made of the following files:
868    
\begin{itemize}
\item \texttt{U.00000nIter} - zonal component of velocity field (m/s
  and positive eastward).

\item \texttt{V.00000nIter} - meridional component of velocity field
  (m/s and positive northward).

\item \texttt{W.00000nIter} - vertical component of velocity field
  (ocean: m/s and positive upward, atmosphere: Pa/s and positive
  towards increasing pressure i.e. downward).

\item \texttt{T.00000nIter} - potential temperature (ocean:
  $^{\circ}\mathrm{C}$, atmosphere: K).

\item \texttt{S.00000nIter} - ocean: salinity (psu), atmosphere: water
  vapor (g/kg).

\item \texttt{Eta.00000nIter} - ocean: surface elevation (m),
  atmosphere: surface pressure anomaly (Pa).
\end{itemize}
889    
The string \texttt{00000nIter} consists of ten digits that specify the
iteration number at which the output is written out. For example,
\texttt{U.0000000300} is the zonal velocity at iteration 300.
893    
In addition, a ``pickup'' or ``checkpoint'' file called:

\begin{itemize}
\item \texttt{pickup.00000nIter}
\end{itemize}

is written out. This file represents the state of the model in a condensed
form and is used for restarting the integration. If the C-D scheme is used,
there is an additional ``pickup'' file:

\begin{itemize}
\item \texttt{pickup\_cd.00000nIter}
\end{itemize}

containing the D-grid velocity data, which also has to be written out
in order to restart the integration. Rolling checkpoint files are the same
as the pickup files but are named differently. Their names contain the string
\texttt{ckptA} or \texttt{ckptB} instead of \texttt{00000nIter}. They can be
used to restart the model but are overwritten every other time they are
written, to save disk space during long integrations.
914    
915    \subsubsection{MNC output files}
916    
Unlike the \texttt{mdsio} output, the \texttt{mnc}-generated output
918    is usually (though not necessarily) placed within a subdirectory with
919    a name such as \texttt{mnc\_test\_\${DATE}\_\${SEQ}}.  
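
Note that \texttt{mnc} output only appears if the package was included
at compile time and switched on at run time.  A minimal sketch of the
two files involved (the entries shown are illustrative; see the
packages documentation for the authoritative list):
\begin{verbatim}
# code/packages.conf : include the package at compile time
gfd
mnc

# input/data.pkg : switch it on at run time
 &PACKAGES
 useMNC=.TRUE.,
 &
\end{verbatim}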
920    
\subsection{Looking at the output}
922    
The ``traditional'' or mdsio model data are written according to a
``meta/data'' file format.  Each variable is associated with two files
with suffix names \texttt{.data} and \texttt{.meta}. The
\texttt{.data} file contains the data written in binary form
(big\_endian by default). The \texttt{.meta} file is a ``header'' file
that contains information about the size and the structure of the
\texttt{.data} file. This way of organizing the output is particularly
useful when running multi-processor calculations. The base version of
the model includes a few Matlab utilities to read output files written
in this format. The Matlab scripts are located in the directory
\texttt{utils/matlab} under the root tree. The script \texttt{rdmds.m}
reads the data. Look at the comments inside the script to see how to
use it.
936    
Some examples of reading and visualizing some output in {\em Matlab}:
\begin{verbatim}
>> addpath ../../../utils/matlab
>> eta=rdmds('Eta',[0:10:100]);
>> for n=1:11; imagesc(eta(:,:,n)');axis ij;colorbar;pause(.5);end
\end{verbatim}
951    
Similar scripts for netCDF output (\texttt{rdmnc.m}) are available and
they are described in Section \ref{sec:pkg:mnc}.

The MNC output files are all in the ``self-describing'' netCDF
format and can thus be browsed and/or plotted using tools such as:
\begin{itemize}
\item \texttt{ncdump} is a utility which is typically included
  with every netCDF install:
  \begin{rawhtml} <A href="http://www.unidata.ucar.edu/packages/netcdf/"> \end{rawhtml}
\begin{verbatim}
http://www.unidata.ucar.edu/packages/netcdf/
\end{verbatim}
  \begin{rawhtml} </A> \end{rawhtml} and it converts the netCDF
  binaries into formatted ASCII text files.

\item \texttt{ncview} utility is a very convenient and quick way
  to plot netCDF data and it runs on most OSes:
  \begin{rawhtml} <A href="http://meteora.ucsd.edu/~pierce/ncview_home_page.html"> \end{rawhtml}
\begin{verbatim}
http://meteora.ucsd.edu/~pierce/ncview_home_page.html
\end{verbatim}
  \begin{rawhtml} </A> \end{rawhtml}

\item MatLAB(c) and other common post-processing environments provide
  various netCDF interfaces including:
  \begin{rawhtml} <A href="http://mexcdf.sourceforge.net/"> \end{rawhtml}
\begin{verbatim}
http://mexcdf.sourceforge.net/
\end{verbatim}
  \begin{rawhtml} </A> \end{rawhtml}
  \begin{rawhtml} <A href="http://woodshole.er.usgs.gov/staffpages/cdenham/public_html/MexCDF/nc4ml5.html"> \end{rawhtml}
\begin{verbatim}
http://woodshole.er.usgs.gov/staffpages/cdenham/public_html/MexCDF/nc4ml5.html
\end{verbatim}
  \begin{rawhtml} </A> \end{rawhtml}
\end{itemize}
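
As a minimal sketch of reading the netCDF output back into Matlab with
the \texttt{rdmnc.m} script mentioned above (the directory and file
names below are illustrative; \texttt{mnc} typically writes files such
as \texttt{state.*.nc} into its output subdirectory, and the variable
names depend on what was switched on):
\begin{verbatim}
>> addpath ../../../utils/matlab
>> S=rdmnc('mnc_test_0001/state.*.nc');
>> disp(fieldnames(S))                 % list the variables that were read
>> imagesc(S.Eta(:,:,end)'); axis ij; colorbar   % assumes an 'Eta' field
\end{verbatim}
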
\section{Doing it yourself: customizing the code}

When you are ready to run the model in the configuration you want, the
easiest thing is to use and adapt the setup of the case study experiment
(described previously) that is closest to your configuration. Then, the
amount of setup will be minimized. In this section, we focus on the setup
relative to the ``numerical model'' part of the code (the setup relative to
the ``execution environment'' part is covered in the parallel implementation
section) and on the variables and parameters that you are likely to change.
   
 \subsection{Configuration and setup}  
   
The CPP keys relative to the ``numerical model'' part of the code are all
 defined and set in the file \textit{CPP\_OPTIONS.h }in the directory \textit{%  
 model/inc }or in one of the \textit{code }directories of the case study  
 experiments under \textit{verification.} The model parameters are defined  
 and declared in the file \textit{model/inc/PARAMS.h }and their default  
 values are set in the routine \textit{model/src/set\_defaults.F. }The  
 default values can be modified in the namelist file \textit{data }which  
 needs to be located in the directory where you will run the model. The  
 parameters are initialized in the routine \textit{model/src/ini\_parms.F}.  
 Look at this routine to see in what part of the namelist the parameters are  
 located.  
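
For orientation, the \textit{data} file is a set of Fortran namelist
groups; the sketch below indicates what is typically set where (this
grouping is only a reading aid; \textit{ini\_parms.F} is the
authoritative reference):
\begin{verbatim}
# "continuous equation" parameters: viscosity, diffusivity,
#  equation of state, free-surface options, ...
 &PARM01
 &
# elliptic solver parameters
 &PARM02
 &
# time stepping: time step, run duration, output frequencies
 &PARM03
 &
# gridding: grid type and grid spacing
 &PARM04
 &
# input datasets: bathymetry, initial conditions, forcing file names
 &PARM05
 &
\end{verbatim}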
   
 In what follows the parameters are grouped into categories related to the  
 computational domain, the equations solved in the model, and the simulation  
 controls.  
   
 \subsection{Computational domain, geometry and time-discretization}  
   
 \begin{itemize}  
 \item dimensions  
 \end{itemize}  
   
 The number of points in the x, y,\textit{\ }and r\textit{\ }directions are  
 represented by the variables \textbf{sNx}\textit{, }\textbf{sNy}\textit{, }%  
 and \textbf{Nr}\textit{\ }respectively which are declared and set in the  
 file \textit{model/inc/SIZE.h. }(Again, this assumes a mono-processor  
 calculation. For multiprocessor calculations see section on parallel  
 implementation.)  
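
A sketch of the relevant part of \textit{SIZE.h} for a small
single-tile, single-process set-up (the numbers are illustrative; the
same file also sets the overlap widths \textbf{OLx}, \textbf{OLy} and
the tile and process counts used for parallel runs):
\begin{verbatim}
      PARAMETER (
     &           sNx =  62,
     &           sNy =  62,
     &           OLx =   4,
     &           OLy =   4,
     &           nSx =   1,
     &           nSy =   1,
     &           nPx =   1,
     &           nPy =   1,
     &           Nx  = sNx*nSx*nPx,
     &           Ny  = sNy*nSy*nPy,
     &           Nr  =  15 )
\end{verbatim}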
   
 \begin{itemize}  
 \item grid  
 \end{itemize}  
   
 Three different grids are available: cartesian, spherical polar, and  
 curvilinear (including the cubed sphere). The grid is set through the  
 logical variables \textbf{usingCartesianGrid}\textit{, }\textbf{%  
 usingSphericalPolarGrid}\textit{, }and \textit{\ }\textbf{%  
 usingCurvilinearGrid}\textit{. }In the case of spherical and curvilinear  
 grids, the southern boundary is defined through the variable \textbf{phiMin}%  
 \textit{\ }which corresponds to the latitude of the southern most cell face  
 (in degrees). The resolution along the x and y directions is controlled by  
 the 1D arrays \textbf{delx}\textit{\ }and \textbf{dely}\textit{\ }(in meters  
 in the case of a cartesian grid, in degrees otherwise). The vertical grid  
 spacing is set through the 1D array \textbf{delz }for the ocean (in meters)  
 or \textbf{delp}\textit{\ }for the atmosphere (in Pa). The variable \textbf{%  
Ro\_SeaLevel} represents the standard position of Sea-Level in ``R''
coordinate. This is typically set to 0m for the ocean (default value) and 10$%
^{5}$Pa for the atmosphere. For the atmosphere, also set the logical
variable \textbf{groundAtK1} to '.\texttt{TRUE}.', which puts the first level
(k=1) at the lower boundary (ground).
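
As an illustration, a coarse global spherical-polar ocean grid could be
described with entries like these (values are illustrative only and
belong in the \texttt{PARM04} group of \textit{data}):
\begin{verbatim}
 &PARM04
 usingSphericalPolarGrid=.TRUE.,
 phiMin=-80.,
 delX=90*4.,
 delY=40*4.,
 delZ=50.,70.,100.,140.,190.,240.,290.,340.,390.,440.,
      490.,540.,590.,640.,690.,
 &
\end{verbatim}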
   
 For the cartesian grid case, the Coriolis parameter $f$ is set through the  
 variables \textbf{f0}\textit{\ }and \textbf{beta}\textit{\ }which correspond  
 to the reference Coriolis parameter (in s$^{-1}$) and $\frac{\partial f}{%  
 \partial y}$(in m$^{-1}$s$^{-1}$) respectively. If \textbf{beta }\textit{\ }%  
 is set to a nonzero value, \textbf{f0}\textit{\ }is the value of $f$ at the  
 southern edge of the domain.  
   
 \begin{itemize}  
 \item topography - full and partial cells  
 \end{itemize}  
   
 The domain bathymetry is read from a file that contains a 2D (x,y) map of  
 depths (in m) for the ocean or pressures (in Pa) for the atmosphere. The  
 file name is represented by the variable \textbf{bathyFile}\textit{. }The  
 file is assumed to contain binary numbers giving the depth (pressure) of the  
 model at each grid cell, ordered with the x coordinate varying fastest. The  
 points are ordered from low coordinate to high coordinate for both axes. The  
 model code applies without modification to enclosed, periodic, and double  
 periodic domains. Periodicity is assumed by default and is suppressed by  
 setting the depths to 0m for the cells at the limits of the computational  
 domain (note: not sure this is the case for the atmosphere). The precision  
 with which to read the binary data is controlled by the integer variable  
 \textbf{readBinaryPrec }which can take the value \texttt{32} (single  
 precision) or \texttt{64} (double precision). See the matlab program \textit{%  
 gendata.m }in the \textit{input }directories under \textit{verification }to  
 see how the bathymetry files are generated for the case study experiments.  
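
A minimal sketch of such a generation script (it assumes the convention
used in the verification experiments: depths are negative below the
surface, zero on solid walls, and the file is written big-endian at the
precision given by \textbf{readBinaryPrec}):
\begin{verbatim}
% 62x62 flat-bottom ocean basin, 5000 m deep, closed on all sides
H=-5000*ones(62,62);
H(1,:)=0; H(end,:)=0; H(:,1)=0; H(:,end)=0;
fid=fopen('topog.bin','w','b');      % 'b' = big-endian
fwrite(fid,H,'real*8');              % matches readBinaryPrec=64
fclose(fid);
\end{verbatim}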
   
 To use the partial cell capability, the variable \textbf{hFacMin}\textit{\ }%  
 needs to be set to a value between 0 and 1 (it is set to 1 by default)  
 corresponding to the minimum fractional size of the cell. For example if the  
 bottom cell is 500m thick and \textbf{hFacMin}\textit{\ }is set to 0.1, the  
 actual thickness of the cell (i.e. used in the code) can cover a range of  
 discrete values 50m apart from 50m to 500m depending on the value of the  
 bottom depth (in \textbf{bathyFile}) at this point.  
   
 Note that the bottom depths (or pressures) need not coincide with the models  
 levels as deduced from \textbf{delz}\textit{\ }or\textit{\ }\textbf{delp}%  
 \textit{. }The model will interpolate the numbers in \textbf{bathyFile}%  
 \textit{\ }so that they match the levels obtained from \textbf{delz}\textit{%  
 \ }or\textit{\ }\textbf{delp}\textit{\ }and \textbf{hFacMin}\textit{. }  
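
The corresponding namelist entries would look like this (file name and
values illustrative):
\begin{verbatim}
 &PARM01
 hFacMin=0.1,
 readBinaryPrec=64,
 &
 &PARM05
 bathyFile='topog.bin',
 &
\end{verbatim}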
   
(Note: the atmospheric case is somewhat more complicated than described
here; a fuller description is still to come.)
954    
\begin{itemize}
\item time-discretization
\end{itemize}

The time steps are set through the real variables \textbf{deltaTMom}
and \textbf{deltaTtracer} (in s) which represent the time step for the
momentum and tracer equations, respectively. For synchronous
integrations, simply set the two variables to the same value (or you
can prescribe one time step only through the variable
 \textbf{deltaT}). The Adams-Bashforth stabilizing parameter is set  
through the variable \textbf{abEps} (dimensionless). The staggered
baroclinic time stepping can be activated by setting the logical
 variable \textbf{staggerTimeStep} to '.\texttt{TRUE}.'.  
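
For example, a synchronous 20-minute time step with a typical
Adams-Bashforth parameter would be prescribed as (illustrative values,
\texttt{PARM03} group):
\begin{verbatim}
 &PARM03
 deltaTmom=1200.,
 deltaTtracer=1200.,
 abEps=0.1,
 &
\end{verbatim}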
   
 \subsection{Equation of state}  
   
 First, because the model equations are written in terms of  
 perturbations, a reference thermodynamic state needs to be specified.  
 This is done through the 1D arrays \textbf{tRef} and \textbf{sRef}.  
 \textbf{tRef} specifies the reference potential temperature profile  
 (in $^{o}$C for the ocean and $^{o}$K for the atmosphere) starting  
 from the level k=1. Similarly, \textbf{sRef} specifies the reference  
 salinity profile (in ppt) for the ocean or the reference specific  
 humidity profile (in g/kg) for the atmosphere.  
   
 The form of the equation of state is controlled by the character  
 variables \textbf{buoyancyRelation} and \textbf{eosType}.  
 \textbf{buoyancyRelation} is set to '\texttt{OCEANIC}' by default and  
 needs to be set to '\texttt{ATMOSPHERIC}' for atmosphere simulations.  
 In this case, \textbf{eosType} must be set to '\texttt{IDEALGAS}'.  
 For the ocean, two forms of the equation of state are available:  
 linear (set \textbf{eosType} to '\texttt{LINEAR}') and a polynomial  
 approximation to the full nonlinear equation ( set  
 \textbf{eosType}\textit{\ }to '\texttt{POLYNOMIAL}'). In the linear  
 case, you need to specify the thermal and haline expansion  
 coefficients represented by the variables \textbf{tAlpha}\textit{\  
   }(in K$^{-1}$) and \textbf{sBeta} (in ppt$^{-1}$). For the nonlinear  
 case, you need to generate a file of polynomial coefficients called  
 \textit{POLY3.COEFFS}. To do this, use the program  
 \textit{utils/knudsen2/knudsen2.f} under the model tree (a Makefile is  
 available in the same directory and you will need to edit the number  
 and the values of the vertical levels in \textit{knudsen2.f} so that  
 they match those of your configuration).  
   
There are also more accurate formulations of the equation of state:
 \begin{description}  
 \item['\texttt{UNESCO}':] The UNESCO equation of state formula of  
   Fofonoff and Millard \cite{fofonoff83}. This equation of state  
   assumes in-situ temperature, which is not a model variable; \emph{its use  
   is therefore discouraged, and it is only listed for completeness}.  
 \item['\texttt{JMD95Z}':] A modified UNESCO formula by Jackett and  
   McDougall \cite{jackett95}, which uses the model variable potential  
   temperature as input. The '\texttt{Z}' indicates that this equation  
   of state uses a horizontally and temporally constant pressure  
   $p_{0}=-g\rho_{0}z$.  
 \item['\texttt{JMD95P}':] A modified UNESCO formula by Jackett and  
   McDougall \cite{jackett95}, which uses the model variable potential  
   temperature as input. The '\texttt{P}' indicates that this equation  
   of state uses the actual hydrostatic pressure of the last time  
   step. Lagging the pressure in this way requires an additional pickup  
   file for restarts.  
 \item['\texttt{MDJWF}':] The new, more accurate and less expensive  
   equation of state by McDougall et~al. \cite{mcdougall03}. It also  
   requires lagging the pressure and therefore an additional pickup  
   file for restarts.  
 \end{description}  
None of these options requires a reference profile of temperature or
salinity.
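
A linear-equation-of-state ocean configuration might contain entries
such as these (values illustrative, \texttt{PARM01} group; the length
of \textbf{tRef} and \textbf{sRef} must match the number of levels):
\begin{verbatim}
 &PARM01
 tRef=15*20.,
 sRef=15*35.,
 eosType='LINEAR',
 tAlpha=2.E-4,
 sBeta =7.4E-4,
 &
\end{verbatim}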
   
 \subsection{Momentum equations}  
   
 In this section, we only focus for now on the parameters that you are likely  
 to change, i.e. the ones relative to forcing and dissipation for example.  
 The details relevant to the vector-invariant form of the equations and the  
 various advection schemes are not covered for the moment. We assume that you  
 use the standard form of the momentum equations (i.e. the flux-form) with  
 the default advection scheme. Also, there are a few logical variables that  
 allow you to turn on/off various terms in the momentum equation. These  
 variables are called \textbf{momViscosity, momAdvection, momForcing,  
 useCoriolis, momPressureForcing, momStepping}\textit{, }and \textit{\ }%  
 \textbf{metricTerms }and are assumed to be set to '.\texttt{TRUE}.' here.  
 Look at the file \textit{model/inc/PARAMS.h }for a precise definition of  
 these variables.  
   
 \begin{itemize}  
 \item initialization  
 \end{itemize}  
   
 The velocity components are initialized to 0 unless the simulation is  
 starting from a pickup file (see section on simulation control parameters).  
   
 \begin{itemize}  
 \item forcing  
 \end{itemize}  
   
 This section only applies to the ocean. You need to generate wind-stress  
 data into two files \textbf{zonalWindFile}\textit{\ }and \textbf{%  
 meridWindFile }corresponding to the zonal and meridional components of the  
 wind stress, respectively (if you want the stress to be along the direction  
 of only one of the model horizontal axes, you only need to generate one  
 file). The format of the files is similar to the bathymetry file. The zonal  
 (meridional) stress data are assumed to be in Pa and located at U-points  
 (V-points). As for the bathymetry, the precision with which to read the  
 binary data is controlled by the variable \textbf{readBinaryPrec}.\textbf{\ }  
 See the matlab program \textit{gendata.m }in the \textit{input }directories  
 under \textit{verification }to see how simple analytical wind forcing data  
 are generated for the case study experiments.  
   
 There is also the possibility of prescribing time-dependent periodic  
 forcing. To do this, concatenate the successive time records into a single  
 file (for each stress component) ordered in a (x, y, t) fashion and set the  
 following variables: \textbf{periodicExternalForcing }to '.\texttt{TRUE}.',  
\textbf{externForcingPeriod }to the period (in s) with which the forcing
 varies (typically 1 month), and \textbf{externForcingCycle }to the repeat  
 time (in s) of the forcing (typically 1 year -- note: \textbf{%  
 externForcingCycle }must be a multiple of \textbf{externForcingPeriod}).  
 With these variables set up, the model will interpolate the forcing linearly  
 at each iteration.  
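
A sketch of the corresponding entries (file names illustrative; the
period and cycle shown correspond to monthly records repeating over a
360-day year):
\begin{verbatim}
 &PARM03
 periodicExternalForcing=.TRUE.,
 externForcingPeriod=2592000.,
 externForcingCycle=31104000.,
 &
 &PARM05
 zonalWindFile='windx.bin',
 meridWindFile='windy.bin',
 &
\end{verbatim}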
   
 \begin{itemize}  
 \item dissipation  
 \end{itemize}  
   
 The lateral eddy viscosity coefficient is specified through the variable  
 \textbf{viscAh}\textit{\ }(in m$^{2}$s$^{-1}$). The vertical eddy viscosity  
 coefficient is specified through the variable \textbf{viscAz }(in m$^{2}$s$%  
 ^{-1}$) for the ocean and \textbf{viscAp}\textit{\ }(in Pa$^{2}$s$^{-1}$)  
for the atmosphere. The vertical viscous fluxes can be computed implicitly
 by setting the logical variable \textbf{implicitViscosity }to '.\texttt{TRUE}%  
 .'. In addition, biharmonic mixing can be added as well through the variable  
 \textbf{viscA4}\textit{\ }(in m$^{4}$s$^{-1}$). On a spherical polar grid,  
 you might also need to set the variable \textbf{cosPower} which is set to 0  
 by default and which represents the power of cosine of latitude to multiply  
 viscosity. Slip or no-slip conditions at lateral and bottom boundaries are  
 specified through the logical variables \textbf{no\_slip\_sides}\textit{\ }%  
 and \textbf{no\_slip\_bottom}. If set to '\texttt{.FALSE.}', free-slip  
 boundary conditions are applied. If no-slip boundary conditions are applied  
 at the bottom, a bottom drag can be applied as well. Two forms are  
 available: linear (set the variable \textbf{bottomDragLinear}\textit{\ }in s$%  
 ^{-1}$) and quadratic (set the variable \textbf{bottomDragQuadratic}\textit{%  
 \ }in m$^{-1}$).  
   
 The Fourier and Shapiro filters are described elsewhere.  
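
Typical coarse-resolution ocean values might read (illustrative only,
\texttt{PARM01} group):
\begin{verbatim}
 &PARM01
 viscAh=5.E5,
 viscAz=1.E-3,
 viscA4=0.,
 no_slip_sides=.TRUE.,
 no_slip_bottom=.TRUE.,
 implicitViscosity=.TRUE.,
 &
\end{verbatim}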
   
 \begin{itemize}  
 \item C-D scheme  
 \end{itemize}  
   
 If you run at a sufficiently coarse resolution, you will need the C-D scheme  
 for the computation of the Coriolis terms. The variable\textbf{\ tauCD},  
which represents the C-D scheme coupling timescale (in s), needs to be set.
   
 \begin{itemize}  
 \item calculation of pressure/geopotential  
 \end{itemize}  
   
 First, to run a non-hydrostatic ocean simulation, set the logical variable  
 \textbf{nonHydrostatic} to '.\texttt{TRUE}.'. The pressure field is then  
 inverted through a 3D elliptic equation. (Note: this capability is not  
 available for the atmosphere yet.) By default, a hydrostatic simulation is  
 assumed and a 2D elliptic equation is used to invert the pressure field. The  
 parameters controlling the behaviour of the elliptic solvers are the  
 variables \textbf{cg2dMaxIters}\textit{\ }and \textbf{cg2dTargetResidual }%  
 for the 2D case and \textbf{cg3dMaxIters}\textit{\ }and \textbf{%  
 cg3dTargetResidual }for the 3D case. You probably won't need to alter the  
 default values (are we sure of this?).  
   
 For the calculation of the surface pressure (for the ocean) or surface  
 geopotential (for the atmosphere) you need to set the logical variables  
 \textbf{rigidLid} and \textbf{implicitFreeSurface}\textit{\ }(set one to '.%  
 \texttt{TRUE}.' and the other to '.\texttt{FALSE}.' depending on how you  
 want to deal with the ocean upper or atmosphere lower boundary).  
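
For instance, an ocean run with an implicit free surface and typical
elliptic-solver settings might contain (illustrative values):
\begin{verbatim}
 &PARM01
 rigidLid=.FALSE.,
 implicitFreeSurface=.TRUE.,
 &
 &PARM02
 cg2dMaxIters=1000,
 cg2dTargetResidual=1.E-13,
 &
\end{verbatim}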
   
 \subsection{Tracer equations}  
   
 This section covers the tracer equations i.e. the potential temperature  
 equation and the salinity (for the ocean) or specific humidity (for the  
 atmosphere) equation. As for the momentum equations, we only describe for  
 now the parameters that you are likely to change. The logical variables  
 \textbf{tempDiffusion}\textit{, }\textbf{tempAdvection}\textit{, }\textbf{%  
 tempForcing}\textit{,} and \textbf{tempStepping} allow you to turn on/off  
 terms in the temperature equation (same thing for salinity or specific  
 humidity with variables \textbf{saltDiffusion}\textit{, }\textbf{%  
 saltAdvection}\textit{\ }etc). These variables are all assumed here to be  
 set to '.\texttt{TRUE}.'. Look at file \textit{model/inc/PARAMS.h }for a  
 precise definition.  
   
 \begin{itemize}  
 \item initialization  
 \end{itemize}  
   
 The initial tracer data can be contained in the binary files \textbf{%  
 hydrogThetaFile }and \textbf{hydrogSaltFile}. These files should contain 3D  
 data ordered in an (x, y, r) fashion with k=1 as the first vertical level.  
 If no file names are provided, the tracers are then initialized with the  
 values of \textbf{tRef }and \textbf{sRef }mentioned above (in the equation  
 of state section). In this case, the initial tracer data are uniform in x  
 and y for each depth level.  
   
 \begin{itemize}  
 \item forcing  
 \end{itemize}  
   
This part is more relevant for the ocean; the procedure for the
atmosphere is not yet fully stabilized.
   
 A combination of fluxes data and relaxation terms can be used for driving  
 the tracer equations. \ For potential temperature, heat flux data (in W/m$%  
 ^{2}$) can be stored in the 2D binary file \textbf{surfQfile}\textit{. }%  
 Alternatively or in addition, the forcing can be specified through a  
 relaxation term. The SST data to which the model surface temperatures are  
 restored to are supposed to be stored in the 2D binary file \textbf{%  
 thetaClimFile}\textit{. }The corresponding relaxation time scale coefficient  
 is set through the variable \textbf{tauThetaClimRelax}\textit{\ }(in s). The  
 same procedure applies for salinity with the variable names \textbf{EmPmRfile%  
 }\textit{, }\textbf{saltClimFile}\textit{, }and \textbf{tauSaltClimRelax}%  
 \textit{\ }for freshwater flux (in m/s) and surface salinity (in ppt) data  
 files and relaxation time scale coefficient (in s), respectively. Also for  
 salinity, if the CPP key \textbf{USE\_NATURAL\_BCS} is turned on, natural  
 boundary conditions are applied i.e. when computing the surface salinity  
 tendency, the freshwater flux is multiplied by the model surface salinity  
 instead of a constant salinity value.  
   
 As for the other input files, the precision with which to read the data is  
 controlled by the variable \textbf{readBinaryPrec}. Time-dependent, periodic  
 forcing can be applied as well following the same procedure used for the  
 wind forcing data (see above).  
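
A relaxation-plus-flux set-up for the ocean surface might therefore
contain entries like these (file names illustrative; the time scales
shown correspond to 60 and 90 days):
\begin{verbatim}
 &PARM03
 tauThetaClimRelax=5184000.,
 tauSaltClimRelax=7776000.,
 &
 &PARM05
 surfQfile='qnet.bin',
 EmPmRfile='empmr.bin',
 thetaClimFile='sst_clim.bin',
 saltClimFile='sss_clim.bin',
 &
\end{verbatim}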
   
 \begin{itemize}  
 \item dissipation  
 \end{itemize}  
   
 Lateral eddy diffusivities for temperature and salinity/specific humidity  
 are specified through the variables \textbf{diffKhT }and \textbf{diffKhS }%  
 (in m$^{2}$/s). Vertical eddy diffusivities are specified through the  
 variables \textbf{diffKzT }and \textbf{diffKzS }(in m$^{2}$/s) for the ocean  
 and \textbf{diffKpT }and \textbf{diffKpS }(in Pa$^{2}$/s) for the  
 atmosphere. The vertical diffusive fluxes can be computed implicitly by  
 setting the logical variable \textbf{implicitDiffusion }to '.\texttt{TRUE}%  
 .'. In addition, biharmonic diffusivities can be specified as well through  
 the coefficients \textbf{diffK4T }and \textbf{diffK4S }(in m$^{4}$/s). Note  
 that the cosine power scaling (specified through \textbf{cosPower }- see the  
 momentum equations section) is applied to the tracer diffusivities  
 (Laplacian and biharmonic) as well. The Gent and McWilliams parameterization  
 for oceanic tracers is described in the package section. Finally, note that  
 tracers can be also subject to Fourier and Shapiro filtering (see the  
 corresponding section on these filters).  
   
 \begin{itemize}  
 \item ocean convection  
 \end{itemize}  
   
 Two options are available to parameterize ocean convection: one is to use  
 the convective adjustment scheme. In this case, you need to set the variable  
 \textbf{cadjFreq}, which represents the frequency (in s) with which the  
 adjustment algorithm is called, to a non-zero value (if set to a negative  
 value by the user, the model will set it to the tracer time step). The other  
 option is to parameterize convection with implicit vertical diffusion. To do  
 this, set the logical variable \textbf{implicitDiffusion }to '.\texttt{TRUE}%  
 .' and the real variable \textbf{ivdc\_kappa }to a value (in m$^{2}$/s) you  
 wish the tracer vertical diffusivities to have when mixing tracers  
 vertically due to static instabilities. Note that \textbf{cadjFreq }and  
\textbf{ivdc\_kappa }cannot both have non-zero values.
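
For example, to use the implicit-vertical-diffusion option (in which
case \textbf{cadjFreq} is left at zero):
\begin{verbatim}
 &PARM01
 implicitDiffusion=.TRUE.,
 ivdc_kappa=10.,
 &
\end{verbatim}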
   
 \subsection{Simulation controls}  
   
The model ``clock'' is defined by the variable \textbf{deltaTClock }(in s)
 which determines the IO frequencies and is used in tagging output.  
 Typically, you will set it to the tracer time step for accelerated runs  
 (otherwise it is simply set to the default time step \textbf{deltaT}).  
 Frequency of checkpointing and dumping of the model state are referenced to  
 this clock (see below).  
   
 \begin{itemize}  
 \item run duration  
 \end{itemize}  
   
 The beginning of a simulation is set by specifying a start time (in s)  
 through the real variable \textbf{startTime }or by specifying an initial  
 iteration number through the integer variable \textbf{nIter0}. If these  
variables are set to nonzero values, the model will look for a ``pickup''
 file \textit{pickup.0000nIter0 }to restart the integration\textit{. }The end  
 of a simulation is set through the real variable \textbf{endTime }(in s).  
 Alternatively, you can specify instead the number of time steps to execute  
 through the integer variable \textbf{nTimeSteps}.  
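
For example, a 100-step integration starting at iteration 0 would be
specified as (illustrative values, \texttt{PARM03} group):
\begin{verbatim}
 &PARM03
 nIter0=0,
 nTimeSteps=100,
 deltaT=1200.,
 &
\end{verbatim}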
966    
\begin{itemize}
\item frequency of output
\end{itemize}
988    
 Real variables defining frequencies (in s) with which output files are  
 written on disk need to be set up. \textbf{dumpFreq }controls the frequency  
 with which the instantaneous state of the model is saved. \textbf{chkPtFreq }%  
 and \textbf{pchkPtFreq }control the output frequency of rolling and  
permanent checkpoint files, respectively. See the ``Output files''
subsection above for the definition of model state and checkpoint
files. In addition, time-averaged fields can be written out by setting
the variable \textbf{taveFreq} (in s). The precision with which to
write the binary data is controlled by the integer variable
\textbf{writeBinaryPrec} (set it to \texttt{32} or \texttt{64}).
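
Illustrative values (the frequencies belong in the \texttt{PARM03}
group, the output precision in \texttt{PARM01}):
\begin{verbatim}
 &PARM03
 dumpFreq=2592000.,
 taveFreq=2592000.,
 chkptFreq=1209600.,
 pChkptFreq=31104000.,
 &
 &PARM01
 writeBinaryPrec=64,
 &
\end{verbatim}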
   
 %%% Local Variables:  
 %%% mode: latex  
 %%% TeX-master: t  
 %%% End:  
