% $Header$
% $Name$

%\section{Getting started}

In this section, we describe how to use the model. In the first
section, we provide enough information to help you get started with
the model. We believe the best way to familiarize yourself with the
model is to run the case study examples provided with the base
version. Information on how to obtain, compile, and run the code is
found there, as well as a brief description of the model directory
structure and the case study examples. The latter and the code
structure are described more fully in chapters
\ref{chap:discretization} and \ref{chap:sarch}, respectively. Here, in
this section, we provide information on how to customize the code when
you are ready to try implementing the configuration you have in mind.
\section{Where to find information}
\label{sect:whereToFindInfo}

A web site is maintained for release 2 (``Pelican'') of MITgcm:
\begin{rawhtml} <A href=http://mitgcm.org/pelican/ target="idontexist"> \end{rawhtml}
\begin{verbatim}
http://mitgcm.org/pelican
\end{verbatim}
\begin{rawhtml} </A> \end{rawhtml}
Here you will find an on-line version of this document, a
``browsable'' copy of the code and a searchable database of the model
and site, as well as links for downloading the model and
documentation, to data-sources, and other related sites.

There is also a web-archived support mailing list for the model that
you can email at \texttt{MITgcm-support@mitgcm.org} or browse at:
\begin{rawhtml} <A href=http://mitgcm.org/mailman/listinfo/mitgcm-support/ target="idontexist"> \end{rawhtml}
\begin{verbatim}
http://mitgcm.org/mailman/listinfo/mitgcm-support/
http://mitgcm.org/pipermail/mitgcm-support/
\end{verbatim}
\begin{rawhtml} </A> \end{rawhtml}
Essentially all of the MITgcm web pages can be searched using a
popular web crawler such as Google or through our own search facility:
\begin{rawhtml} <A href=http://mitgcm.org/mailman/htdig/ target="idontexist"> \end{rawhtml}
\begin{verbatim}
http://mitgcm.org/htdig/
\end{verbatim}
\begin{rawhtml} </A> \end{rawhtml}
%%% http://www.google.com/search?q=hydrostatic+site%3Amitgcm.org

\section{Obtaining the code}
\label{sect:obtainingCode}

MITgcm can be downloaded from our system by following the instructions
below. As a courtesy we ask that you send e-mail to us at
\begin{rawhtml} <A href=mailto:MITgcm-support@mitgcm.org> \end{rawhtml}
MITgcm-support@mitgcm.org
\begin{rawhtml} </A> \end{rawhtml}
to enable us to keep track of who's using the model and in what application.
You can download the model two ways:

\begin{enumerate}
\item Using CVS software. CVS is a freely available source code
management tool. To use CVS you need to have the software installed.
Many systems come with CVS pre-installed; otherwise, good places to
look for the software for a particular platform are
\begin{rawhtml} <A href=http://www.cvshome.org/ target="idontexist"> \end{rawhtml}
cvshome.org
\begin{rawhtml} </A> \end{rawhtml}
and
\begin{rawhtml} <A href=http://www.wincvs.org/ target="idontexist"> \end{rawhtml}
wincvs.org
\begin{rawhtml} </A> \end{rawhtml}
.

\item Using a tar file. This method is simple and does not require any
special software. However, this method does not provide easy support
for maintenance updates.

\end{enumerate}

\subsection{Method 1 - Checkout from CVS}
\label{sect:cvs_checkout}

If CVS is available on your system, we strongly encourage you to use it. CVS
provides an efficient and elegant way of organizing your code and keeping
track of your changes. If CVS is not available on your machine, you can also
download a tar file.

Before you can use CVS, the following environment variable(s) should
be set within your shell.  For a csh or tcsh shell, put the following
\begin{verbatim}
% setenv CVSROOT :pserver:cvsanon@mitgcm.org:/u/gcmpack
\end{verbatim}
in your \texttt{.cshrc} or \texttt{.tcshrc} file.  For bash or sh shells, put:
\begin{verbatim}
% export CVSROOT=':pserver:cvsanon@mitgcm.org:/u/gcmpack'
\end{verbatim}
in your \texttt{.profile} or \texttt{.bashrc} file.

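To confirm that the variable is visible in a new shell, you can simply
echo it (this works in both csh-style and sh-style shells; the expected
value is the server path set above):
\begin{verbatim}
% echo $CVSROOT
:pserver:cvsanon@mitgcm.org:/u/gcmpack
\end{verbatim}
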
To get MITgcm through CVS, first register with the MITgcm CVS server
using the command:
\begin{verbatim}
% cvs login ( CVS password: cvsanon )
\end{verbatim}
You only need to do a ``cvs login'' once.

To obtain the latest sources type:
\begin{verbatim}
% cvs co MITgcm
\end{verbatim}
or to get a specific release type:
\begin{verbatim}
% cvs co -P -r checkpoint52i_post MITgcm
\end{verbatim}
The MITgcm web site contains further directions concerning the source
code and CVS.  It also contains a web interface to our CVS archive so
that one may easily view the state of files, revisions, and other
development milestones:
\begin{rawhtml} <A href="http://mitgcm.org/download" target="idontexist"> \end{rawhtml}
\begin{verbatim}
http://mitgcm.org/source_code.html
\end{verbatim}
\begin{rawhtml} </A> \end{rawhtml}

As a convenience, the MITgcm CVS server contains aliases which are
named subsets of the codebase.  These aliases can be especially
helpful when used over slow internet connections or on machines with
restricted storage space.  Table \ref{tab:cvsModules} contains a list
of the CVS aliases.
\begin{table}[htb]
  \centering
  \begin{tabular}[htb]{|lp{3.25in}|}\hline
    \textbf{Alias Name}    &  \textbf{Information (directories) Contained}  \\\hline
    \texttt{MITgcm\_code}  &  Only the source code -- none of the verification examples.  \\
    \texttt{MITgcm\_verif\_basic}
    &  Source code plus a small set of the verification examples
    (\texttt{global\_ocean.90x40x15}, \texttt{aim.5l\_cs}, \texttt{hs94.128x64x5},
    \texttt{front\_relax}, and \texttt{plume\_on\_slope}).  \\
    \texttt{MITgcm\_verif\_atmos}  &  Source code plus all of the atmospheric examples.  \\
    \texttt{MITgcm\_verif\_ocean}  &  Source code plus all of the oceanic examples.  \\
    \texttt{MITgcm\_verif\_all}    &  Source code plus all of the
    verification examples. \\\hline
  \end{tabular}
  \caption{MITgcm CVS Modules}
  \label{tab:cvsModules}
\end{table}

The checkout process creates a directory called \textit{MITgcm}. If
the directory \textit{MITgcm} exists, this command updates your code
based on the repository. Each directory in the source tree contains a
directory \textit{CVS}. This information is required by CVS to keep
track of your file versions with respect to the repository. Don't edit
the files in \textit{CVS}!  You can also use CVS to download code
updates.  More extensive information on using CVS for maintaining
MITgcm code can be found
\begin{rawhtml} <A href="http://mitgcm.org/usingcvstoget.html" target="idontexist"> \end{rawhtml}
here
\begin{rawhtml} </A> \end{rawhtml}
.
It is important to note that the CVS aliases in Table
\ref{tab:cvsModules} cannot be used in conjunction with the CVS
\texttt{-d DIRNAME} option.  However, the \texttt{MITgcm} directories
they create can be changed to a different name following the check-out:
\begin{verbatim}
   %  cvs co MITgcm_verif_basic
   %  mv MITgcm MITgcm_verif_basic
\end{verbatim}

\subsection{Method 2 - Tar file download}
\label{sect:conventionalDownload}

If you do not have CVS on your system, you can download the model as a
tar file from the web site at:
\begin{rawhtml} <A href=http://mitgcm.org/download target="idontexist"> \end{rawhtml}
\begin{verbatim}
http://mitgcm.org/download/
\end{verbatim}
\begin{rawhtml} </A> \end{rawhtml}
The tar file still contains CVS information which we urge you not to
delete; even if you do not use CVS yourself, the information can help
us if you should need to send us your copy of the code.  If a recent
tar file does not exist, then please contact the developers through
the
\begin{rawhtml} <A href="mailto:MITgcm-support@mitgcm.org"> \end{rawhtml}
MITgcm-support@mitgcm.org
\begin{rawhtml} </A> \end{rawhtml}
mailing list.

\subsubsection{Upgrading from an earlier version}

If you already have an earlier version of the code you can ``upgrade''
your copy instead of downloading the entire repository again. First,
``cd'' (change directory) to the top of your working copy:
\begin{verbatim}
% cd MITgcm
\end{verbatim}
and then issue the cvs update command, such as:
\begin{verbatim}
% cvs -q update -r checkpoint52i_post -d -P
\end{verbatim}
This will update the ``tag'' to ``checkpoint52i\_post'', add any new
directories (-d) and remove any empty directories (-P). The -q option
means ``be quiet'', which reduces the number of messages you'll see in
the terminal. If you have modified the code prior to upgrading, CVS
will try to merge your changes with the upgrades. If there is a
conflict between your modifications and the upgrade, it will report
that file with a ``C'' in front, e.g.:
\begin{verbatim}
C model/src/ini_parms.F
\end{verbatim}
If the list of conflicts scrolled off the screen, you can re-issue the
cvs update command and it will report the conflicts again. Conflicts are
indicated in the code by the delimiters ``$<<<<<<<$'', ``======='' and
``$>>>>>>>$''. For example,
{\small
\begin{verbatim}
<<<<<<< ini_parms.F
     & bottomDragLinear,myOwnBottomDragCoefficient,
=======
     & bottomDragLinear,bottomDragQuadratic,
>>>>>>> 1.18
\end{verbatim}
}
means that you added ``myOwnBottomDragCoefficient'' to a namelist at
the same time and place that we added ``bottomDragQuadratic''. You
need to resolve this conflict, and in this case the line should be
changed to:
{\small
\begin{verbatim}
     & bottomDragLinear,bottomDragQuadratic,myOwnBottomDragCoefficient,
\end{verbatim}
}
and the lines with the delimiters ($<<<<<<<$, =======, $>>>>>>>$) deleted.
Unless you are making modifications which exactly parallel
developments we make, these types of conflicts should be rare.
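
If you want to see what an upgrade would touch before actually changing
anything, CVS provides a dry-run mode through its global \texttt{-n}
option. The following is a minimal sketch, using the same tag as the
update example above:
\begin{verbatim}
% cd MITgcm
% cvs -n -q update -r checkpoint52i_post -d -P
\end{verbatim}
Files are listed with a one-letter status prefix (for example ``U'' for
files that would be updated, ``M'' for files you have modified locally,
and ``C'' for conflicts), and nothing in your working copy is altered.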

\paragraph*{Upgrading to the current pre-release version}

We don't make a ``release'' for every little patch and bug fix in
order to keep the frequency of upgrades to a minimum. However, if you
have run into a problem which ``we have already fixed in the latest
code'' and we haven't made a ``tag'' or ``release'' since that patch,
then you'll need to get the latest code:
\begin{verbatim}
% cvs -q update -A -d -P
\end{verbatim}
Unlike the ``check-out'' and ``update'' procedures above, there is no
``tag'' or release name. The -A tells CVS to upgrade to the very
latest version. As a rule, we don't recommend this since you might
upgrade while we are in the process of checking in the code, so that
you may only have part of a patch. Using this method of updating also
means we can't tell what version of the code you are working with. So
please be sure you understand what you're doing.

\section{Model and directory structure}

The ``numerical'' model is contained within an execution environment
support wrapper. This wrapper is designed to provide a general
framework for grid-point models. MITgcmUV is a specific numerical
model that uses the framework. Under this structure the model is split
into execution environment support code and conventional numerical
model code. The execution environment support code is held under the
\textit{eesupp} directory. The grid-point model code is held under the
\textit{model} directory. Code execution actually starts in the
\textit{eesupp} routines and not in the \textit{model} routines. For
this reason the top-level \textit{MAIN.F} is in the
\textit{eesupp/src} directory. In general, end-users should not need
to worry about this level. The top-level routine for the numerical
part of the code is in \textit{model/src/THE\_MODEL\_MAIN.F}. Here is
a brief description of the directory structure of the model under the
root tree (a detailed description is given in section 3: Code
structure).

\begin{itemize}

\item \textit{bin}: this directory is initially empty. It is the
  default directory in which to compile the code.

\item \textit{diags}: contains the code relative to time-averaged
  diagnostics. It is subdivided into two subdirectories \textit{inc}
  and \textit{src} that contain include files (*.\textit{h} files) and
  Fortran subroutines (*.\textit{F} files), respectively.

\item \textit{doc}: contains brief documentation notes.

\item \textit{eesupp}: contains the execution environment source code.
  Also subdivided into two subdirectories \textit{inc} and
  \textit{src}.

\item \textit{exe}: this directory is initially empty. It is the
  default directory in which to execute the code.

\item \textit{model}: this directory contains the main source code.
  Also subdivided into two subdirectories \textit{inc} and
  \textit{src}.

\item \textit{pkg}: contains the source code for the packages. Each
  package corresponds to a subdirectory. For example, \textit{gmredi}
  contains the code related to the Gent-McWilliams/Redi scheme,
  \textit{aim} the code relative to the atmospheric intermediate
  physics. The packages are described in detail in section 3.

\item \textit{tools}: this directory contains various useful tools.
  For example, \textit{genmake2} is a shell script that should be used
  to generate your makefile. The directory \textit{adjoint} contains
  the makefile specific to the Tangent linear and Adjoint Compiler
  (TAMC) that generates the adjoint code. The latter is described in
  detail in part V.

\item \textit{utils}: this directory contains various utilities. The
  subdirectory \textit{knudsen2} contains code and a makefile that
  compute coefficients of the polynomial approximation to the Knudsen
  formula for an ocean nonlinear equation of state. The
  \textit{matlab} subdirectory contains Matlab scripts for reading
  model output directly into Matlab. \textit{scripts} contains C-shell
  post-processing scripts for joining processor-based and tile-based
  model output.

\item \textit{verification}: this directory contains the model
  examples. See section \ref{sect:modelExamples}.

\end{itemize}
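
For orientation, listing the top of a fresh working copy should show
the directories described above. This is only a sketch: additional
files and directories (such as the per-directory \textit{CVS} folders)
may also be present, depending on the release you obtained.
\begin{verbatim}
% ls MITgcm
bin  diags  doc  eesupp  exe  model  pkg  tools  utils  verification
\end{verbatim}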

\section[MITgcm Example Experiments]{Example experiments}
\label{sect:modelExamples}

%% a set of twenty-four pre-configured numerical experiments

The MITgcm distribution comes with more than a dozen pre-configured
numerical experiments. Some of these example experiments are tests of
individual parts of the model code, but many are fully fledged
numerical simulations. A few of the examples are used for tutorial
documentation in sections \ref{sect:eg-baro} - \ref{sect:eg-global}.
The other examples follow the same general structure as the tutorial
examples. However, they only include brief instructions in a text file
called {\it README}.  The examples are located in subdirectories under
the directory \textit{verification}. Each example is briefly described
below.

\subsection{Full list of model examples}

\begin{enumerate}

\item \textit{exp0} - Single layer, ocean double gyre (barotropic with
  free-surface). This experiment is described in detail in section
  \ref{sect:eg-baro}.

\item \textit{exp1} - Four layer, ocean double gyre. This experiment
  is described in detail in section \ref{sect:eg-baroc}.

\item \textit{exp2} - 4x4 degree global ocean simulation with steady
  climatological forcing. This experiment is described in detail in
  section \ref{sect:eg-global}.

\item \textit{exp4} - Flow over a Gaussian bump in open-water or
  channel with open boundaries.

\item \textit{exp5} - Inhomogeneously forced ocean convection in a
  doubly periodic box.

\item \textit{front\_relax} - Relaxation of an ocean thermal front (test for
  Gent/McWilliams scheme). 2D (Y-Z).

\item \textit{internal wave} - Ocean internal wave forced by open
  boundary conditions.

\item \textit{natl\_box} - Eastern subtropical North Atlantic with KPP
  scheme; 1 month integration.

\item \textit{hs94.1x64x5} - Zonally averaged atmosphere using Held and
  Suarez '94 forcing.

\item \textit{hs94.128x64x5} - 3D atmosphere dynamics using Held and
  Suarez '94 forcing.

\item \textit{hs94.cs-32x32x5} - 3D atmosphere dynamics using Held and
  Suarez '94 forcing on the cubed sphere.

\item \textit{aim.5l\_zon-ave} - Intermediate Atmospheric physics.
  Global Zonal Mean configuration, 1x64x5 resolution.

\item \textit{aim.5l\_XZ\_Equatorial\_Slice} - Intermediate
  Atmospheric physics, equatorial slice configuration.  2D (X-Z).

\item \textit{aim.5l\_Equatorial\_Channel} - Intermediate Atmospheric
  physics. 3D Equatorial Channel configuration.

\item \textit{aim.5l\_LatLon} - Intermediate Atmospheric physics.
  Global configuration, on latitude-longitude grid with 128x64x5 grid
  points ($2.8^\circ$ resolution).

\item \textit{adjustment.128x64x1} - Barotropic adjustment problem on
  latitude-longitude grid with 128x64 grid points ($2.8^\circ$
  resolution).

\item \textit{adjustment.cs-32x32x1} - Barotropic adjustment problem on
  cube sphere grid with 32x32 points per face (roughly $2.8^\circ$
  resolution).

\item \textit{advect\_cs} - Two-dimensional passive advection test on
  cube sphere grid.

\item \textit{advect\_xy} - Two-dimensional (horizontal plane) passive
  advection test on Cartesian grid.

\item \textit{advect\_yz} - Two-dimensional (vertical plane) passive
  advection test on Cartesian grid.

\item \textit{carbon} - Simple passive tracer experiment. Includes
  derivative calculation. Described in detail in section
  \ref{sect:eg-carbon-ad}.

\item \textit{flt\_example} - Example of using the float package.

\item \textit{global\_ocean.90x40x15} - Global circulation with GM, flux
  boundary conditions and poles.

\item \textit{global\_ocean\_pressure} - Global circulation in pressure
  coordinate (non-Boussinesq ocean model). Described in detail in
  section \ref{sect:eg-globalpressure}.

\item \textit{solid-body.cs-32x32x1} - Solid body rotation test for cube
  sphere grid.

\end{enumerate}

\subsection{Directory structure of model examples}

Each example directory has the following subdirectories:

\begin{itemize}
\item \textit{code}: contains the code particular to the example. At a
  minimum, this directory includes the following files:

  \begin{itemize}
  \item \textit{code/CPP\_EEOPTIONS.h}: declares CPP keys relative to
    the ``execution environment'' part of the code. The default
    version is located in \textit{eesupp/inc}.

  \item \textit{code/CPP\_OPTIONS.h}: declares CPP keys relative to
    the ``numerical model'' part of the code. The default version is
    located in \textit{model/inc}.

  \item \textit{code/SIZE.h}: declares the size of the underlying
    computational grid.  The default version is located in
    \textit{model/inc}.
  \end{itemize}

  In addition, other include files and subroutines might be present in
  \textit{code} depending on the particular experiment. See Section 2
  for more details.

\item \textit{input}: contains the input data files required to run
  the example. At a minimum, the \textit{input} directory contains the
  following files (a schematic fragment of a typical \textit{data}
  file is shown after this list):

  \begin{itemize}
  \item \textit{input/data}: this file, written as a namelist,
    specifies the main parameters for the experiment.

  \item \textit{input/data.pkg}: contains parameters relative to the
    packages used in the experiment.

  \item \textit{input/eedata}: this file contains ``execution
    environment'' data. At present, this consists of a specification
    of the number of threads to use in $X$ and $Y$ under multithreaded
    execution.
  \end{itemize}

  In addition, you will also find in this directory the forcing and
  topography files as well as the files describing the initial state
  of the experiment.  This varies from experiment to experiment. See
  section 2 for more details.

\item \textit{results}: this directory contains the output file
  \textit{output.txt} produced by the simulation example. This file is
  useful for comparison with your own output when you run the
  experiment.
\end{itemize}
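
As mentioned above, here is a schematic fragment of an
\textit{input/data} file. It is illustrative only: the parameter names
shown are common ones, but the namelist groups, parameters, and values
vary from experiment to experiment, so consult the \textit{data} file
shipped with the example you are running.
\begin{verbatim}
# Time stepping parameters (placeholder values, not a working setup)
 &PARM03
 nIter0=0,
 nTimeSteps=10,
 deltaT=1200.,
 dumpFreq=43200.,
 &
\end{verbatim}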

Once you have chosen the example you want to run, you are ready to
compile the code.

\section[Building MITgcm]{Building the code}
\label{sect:buildingCode}

To compile the code, we use the {\em make} program. This uses a file
({\em Makefile}) that allows us to pre-process source files and
specify compiler and optimization options, and that also figures out
any file dependencies. We supply a script ({\em genmake2}), described
in section \ref{sect:genmake}, that automatically creates the {\em
  Makefile} for you. You then need to build the dependencies and
compile the code.

As an example, let's assume that you want to build and run experiment
\textit{verification/exp2}. There are multiple ways and places to
actually do this, but here let's build the code in
\textit{verification/exp2/input}:
\begin{verbatim}
% cd verification/exp2/input
\end{verbatim}
First, build the {\em Makefile}:
\begin{verbatim}
% ../../../tools/genmake2 -mods=../code
\end{verbatim}
The command line option tells {\em genmake2} to override model source
code with any files in the directory {\em ../code/}.

On many systems, the {\em genmake2} program will be able to
automatically recognize the hardware, find compilers and other tools
within the user's path (``echo \$PATH''), and then choose an
appropriate set of options from the files contained in the {\em
  tools/build\_options} directory.  Under some circumstances, a user
may have to create a new ``optfile'' in order to specify the exact
combination of compiler, compiler flags, libraries, and other options
necessary to build a particular configuration of MITgcm.  In such
cases, it is generally helpful to read the existing ``optfiles'' and
mimic their syntax.

Through the MITgcm-support list, the MITgcm developers are willing to
provide help writing or modifying ``optfiles''.  And we encourage users
to post new ``optfiles'' (particularly ones for new machines or
architectures) to the
\begin{rawhtml} <A href="mailto:MITgcm-support@mitgcm.org"> \end{rawhtml}
MITgcm-support@mitgcm.org
\begin{rawhtml} </A> \end{rawhtml}
list.

To specify an optfile to {\em genmake2}, the syntax is:
\begin{verbatim}
% ../../../tools/genmake2 -mods=../code -of /path/to/optfile
\end{verbatim}

Once a {\em Makefile} has been generated, we create the dependencies:
\begin{verbatim}
% make depend
\end{verbatim}
This modifies the {\em Makefile} by attaching a [long] list of files
upon which other files depend. The purpose of this is to reduce
re-compilation if and when you start to modify the code. The {\tt make
  depend} command also creates links from the model source to this
directory.

Next compile the code:
\begin{verbatim}
% make
\end{verbatim}
The {\tt make} command creates an executable called \textit{mitgcmuv}.
Additional make ``targets'' are defined within the makefile to aid in
the production of adjoint and other versions of MITgcm.

Now you are ready to run the model. General instructions for doing so are
given in section \ref{sect:runModel}. Here, we can run the model with:
\begin{verbatim}
./mitgcmuv > output.txt
\end{verbatim}
where we are re-directing the stream of text output to the file {\em
output.txt}.
 \begin{verbatim}  
         .  
         .  
         .  
 general instructions (machine independent)  
         .  
         .  
         .  
     - setup machine 1  
     - setup machine 2  
     - setup machine 3  
     - setup machine 4  
        etc  
         .  
         .  
         .  
 \end{verbatim}  
   
 For example, the setup corresponding to a DEC alpha machine is reproduced  
 here:  
 \begin{verbatim}  
   case OSF1+mpi:  
     echo "Configuring for DEC Alpha"  
     set CPP        = ( '/usr/bin/cpp -P' )  
     set DEFINES    = ( ${DEFINES}  '-DTARGET_DEC -DWORDLENGTH=1' )  
     set KPP        = ( 'kapf' )  
     set KPPFILES   = ( 'main.F' )  
     set KFLAGS1    = ( '-scan=132 -noconc -cmp=' )  
     set FC         = ( 'f77' )  
     set FFLAGS     = ( '-convert big_endian -r8 -extend_source -automatic -call_shared -notransform_loops -align dcommons' )  
     set FOPTIM     = ( '-O5 -fast -tune host -inline all' )  
     set NOOPTFLAGS = ( '-O0' )  
     set LIBS       = ( '-lfmpi -lmpi -lkmp_osfp10 -pthread' )  
     set NOOPTFILES = ( 'barrier.F different_multiple.F external_fields_load.F')  
     set RMFILES    = ( '*.p.out' )  
     breaksw  
 \end{verbatim}  
   
 Typically, these are the lines that you might need to edit to make \textit{%  
 genmake} work on your platform if it doesn't work the first time. \textit{%  
 genmake} understands several options that are described here:  
562    
 \begin{itemize}  
 \item -rootdir=dir  
563    

\subsection{Building/compiling the code elsewhere}

In the example above (section \ref{sect:buildingCode}) we built the
executable in the {\em input} directory of the experiment for
convenience. You can also configure and compile the code in other
locations, for example on a scratch disk, without having to copy the
entire source tree. The only requirement to do so is that you have
{\tt genmake2} in your path or you know the absolute path to {\tt
  genmake2}.

The following sections outline some possible methods of organizing
your source and data.

\subsubsection{Building from the {\em ../code} directory}

This is just as simple as building in the {\em input/} directory:
\begin{verbatim}
% cd verification/exp2/code
% ../../../tools/genmake2
% make depend
% make
\end{verbatim}
However, to run the model the executable ({\em mitgcmuv}) and input
files must be in the same place. If you only have one calculation to make:
\begin{verbatim}
% cd ../input
% cp ../code/mitgcmuv ./
% ./mitgcmuv > output.txt
\end{verbatim}
or if you will be making multiple runs with the same executable:
\begin{verbatim}
% cd ../
% cp -r input run1
% cp code/mitgcmuv run1
% cd run1
% ./mitgcmuv > output.txt
\end{verbatim}

601    
602  \item -disable=pkg1,pkg2,...  \subsubsection{Building from a new directory}
603    
604  disables packages source code \textit{pkg1}, \textit{pkg2},... when creating  Since the {\em input} directory contains input files it is often more
605  the makefile.  useful to keep {\em input} pristine and build in a new directory
606    within {\em verification/exp2/}:
607    \begin{verbatim}
608    % cd verification/exp2
609    % mkdir build
610    % cd build
611    % ../../../tools/genmake2 -mods=../code
612    % make depend
613    % make
614    \end{verbatim}
615    This builds the code exactly as before but this time you need to copy
616    either the executable or the input files or both in order to run the
617    model. For example,
618    \begin{verbatim}
619    % cp ../input/* ./
620    % ./mitgcmuv > output.txt
621    \end{verbatim}
622    or if you tend to make multiple runs with the same executable then
623    running in a new directory each time might be more appropriate:
624    \begin{verbatim}
625    % cd ../
626    % mkdir run1
627    % cp build/mitgcmuv run1/
628    % cp input/* run1/
629    % cd run1
630    % ./mitgcmuv > output.txt
631    \end{verbatim}
632    
\subsubsection{Building on a scratch disk}

Model object files and output data can use up large amounts of disk
space so it is often the case that you will be operating on a large
scratch disk. Assuming the model source is in {\em ~/MITgcm} then the
following commands will build the model in {\em /scratch/exp2-run1}:
\begin{verbatim}
% cd /scratch/exp2-run1
% ~/MITgcm/tools/genmake2 -rootdir=~/MITgcm \
  -mods=~/MITgcm/verification/exp2/code
% make depend
% make
\end{verbatim}
To run the model here, you'll need the input files:
\begin{verbatim}
% cp ~/MITgcm/verification/exp2/input/* ./
% ./mitgcmuv > output.txt
\end{verbatim}

As before, you could build in one directory and make multiple runs of
the one experiment:
\begin{verbatim}
% cd /scratch/exp2
% mkdir build
% cd build
% ~/MITgcm/tools/genmake2 -rootdir=~/MITgcm \
  -mods=~/MITgcm/verification/exp2/code
% make depend
% make
% cd ../
% cp -r ~/MITgcm/verification/exp2/input run2
% cd run2
% ./mitgcmuv > output.txt
\end{verbatim}

\subsection{Using \texttt{genmake2}}
\label{sect:genmake}

To compile the code, first use the program \texttt{genmake2} (located
in the \texttt{tools} directory) to generate a Makefile.
\texttt{genmake2} is a shell script written to work with all
``sh''--compatible shells including bash v1, bash v2, and Bourne.
Internally, \texttt{genmake2} determines the locations of needed
files, the compiler, compiler options, libraries, and Unix tools.  It
relies upon a number of ``optfiles'' located in the
\texttt{tools/build\_options} directory.

The purpose of the optfiles is to provide all the compilation options
for particular ``platforms'' (where ``platform'' roughly means the
combination of the hardware and the compiler) and code configurations.
Given the combinations of possible compilers and library dependencies
({\it e.g.} MPI and NetCDF) there may be numerous optfiles available
for a single machine.  The naming scheme for the majority of the
optfiles shipped with the code is
\begin{center}
  {\bf OS\_HARDWARE\_COMPILER }
\end{center}
where
\begin{description}
\item[OS] is the name of the operating system (generally the
  lower-case output of the {\tt 'uname'} command)
\item[HARDWARE] is a string that describes the CPU type and
  corresponds to output from the {\tt 'uname -m'} command:
  \begin{description}
  \item[ia32] is for ``x86'' machines such as i386, i486, i586, i686,
    and athlon
  \item[ia64] is for Intel IA64 systems (e.g. Itanium, Itanium2)
  \item[amd64] is for AMD x86\_64 systems
  \item[ppc] is for Mac PowerPC systems
  \end{description}
\item[COMPILER] is the compiler name (generally, the name of the
  FORTRAN executable)
\end{description}

In many cases, the default optfiles are sufficient and will result in
usable Makefiles.  However, for some machines or code configurations,
new ``optfiles'' must be written. To create a new optfile, it is
generally best to start with one of the defaults and modify it to suit
your needs.  Like \texttt{genmake2}, the optfiles are all written
using a simple ``sh''--compatible syntax.  While nearly all variables
used within \texttt{genmake2} may be specified in the optfiles, the
critical ones that should be defined are:

\begin{description}
\item[FC] the FORTRAN compiler (executable) to use
\item[DEFINES] the command-line DEFINE options passed to the compiler
\item[CPP] the C pre-processor to use
\item[NOOPTFLAGS] option flags for special files that should not be
  optimized
\end{description}

For example, the optfile for a typical Red Hat Linux machine (``ia32''
architecture) using the GCC (g77) compiler is
\begin{verbatim}
FC=g77
DEFINES='-D_BYTESWAPIO -DWORDLENGTH=4'
CPP='cpp  -traditional -P'
NOOPTFLAGS='-O0'
#  For IEEE, use the "-ffloat-store" option
if test "x$IEEE" = x ; then
    FFLAGS='-Wimplicit -Wunused -Wuninitialized'
    FOPTIM='-O3 -malign-double -funroll-loops'
else
    FFLAGS='-Wimplicit -Wunused -ffloat-store'
    FOPTIM='-O0 -malign-double'
fi
\end{verbatim}
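
Assuming such an optfile were saved under the naming scheme described
above, for instance as \texttt{tools/build\_options/linux\_ia32\_g77}
(the file name here is illustrative), it could be passed to
\texttt{genmake2} from an experiment build directory in the usual way:
\begin{verbatim}
% ../../../tools/genmake2 -mods=../code \
    -of ../../../tools/build_options/linux_ia32_g77
% make depend
% make
\end{verbatim}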

If you write an optfile for an unrepresented machine or compiler, you
are strongly encouraged to submit the optfile to the MITgcm project
for inclusion.  Please send the file to the
\begin{rawhtml} <A href="mailto:MITgcm-support@mitgcm.org"> \end{rawhtml}
\begin{center}
  MITgcm-support@mitgcm.org
\end{center}
\begin{rawhtml} </A> \end{rawhtml}
mailing list.

In addition to the optfiles, \texttt{genmake2} supports a number of
helpful command-line options.  A complete list of these options can be
obtained from:
\begin{verbatim}
% genmake2 -h
\end{verbatim}

The most important command-line options are:
\begin{description}

\item[\texttt{--optfile=/PATH/FILENAME}] specifies the optfile that
  should be used for a particular build.

  If no ``optfile'' is specified (either through the command line or the
  MITGCM\_OPTFILE environment variable), genmake2 will try to make a
  reasonable guess from the list provided in {\em
    tools/build\_options}.  The method used for making this guess is
  to first determine the combination of operating system and hardware
  (e.g. ``linux\_ia32'') and then find a working FORTRAN compiler within
  the user's path.  When these three items have been identified,
  genmake2 will try to find an optfile that has a matching name.

\item[\texttt{--pdefault='PKG1 PKG2 PKG3 ...'}] specifies the default
  set of packages to be used.  The normal order of precedence for
  packages is as follows:
  \begin{enumerate}
  \item If available, the command line (\texttt{--pdefault}) settings
    over-rule any others.

  \item Next, \texttt{genmake2} will look for a file named
    ``\texttt{packages.conf}'' in the local directory or in any of the
    directories specified with the \texttt{--mods} option.

  \item Finally, if neither of the above are available,
    \texttt{genmake2} will use the \texttt{/pkg/pkg\_default} file.
  \end{enumerate}

\item[\texttt{--pdepend=/PATH/FILENAME}] specifies the dependency file
  used for packages.

  If not specified, the default dependency file {\em pkg/pkg\_depend}
  is used.  The syntax for this file is parsed on a line-by-line basis
  where each line contains either a comment ("\#") or a simple
  "PKGNAME1 (+|-)PKGNAME2" pairwise rule where the "+" or "-" symbol
  specifies a "must be used with" or a "must not be used with"
  relationship, respectively.  If no rule is specified, then it is
  assumed that the two packages are compatible and will function
  either with or without each other (an illustrative fragment is given
  just after this list).

\item[\texttt{--adof=/path/to/file}] specifies the "adjoint" or
  automatic differentiation options file to be used.  The file is
  analogous to the ``optfile'' defined above but it specifies
  information for the AD build process.

  The default file is located in {\em
    tools/adjoint\_options/adjoint\_default} and it defines the "TAF"
  and "TAMC" compilers.  An alternate version is also available at
  {\em tools/adjoint\_options/adjoint\_staf} that selects the newer
  "STAF" compiler.  As with any compilers, it is helpful to have their
  directories listed in your {\tt \$PATH} environment variable.

\item[\texttt{--mods='DIR1 DIR2 DIR3 ...'}] specifies a list of
  directories containing ``modifications''.  These directories contain
  files with names that may (or may not) exist in the main MITgcm
  source tree but will be overridden by any identically-named sources
  within the ``MODS'' directories.

  The order of precedence for this ``name-hiding'' is as follows:
  \begin{itemize}
  \item ``MODS'' directories (in the order given)
  \item Packages either explicitly specified or provided by default
    (in the order given)
  \item Packages included due to package dependencies (in the order
    that the package dependencies are parsed)
  \item The ``standard dirs'' (which may have been specified by the
    ``-standarddirs'' option)
  \end{itemize}

\item[\texttt{--mpi}] This option enables certain MPI features (using
  CPP \texttt{\#define}s) within the code and is necessary for MPI
  builds (see Section \ref{sect:mpi-build}).

\item[\texttt{--make=/path/to/gmake}] Due to the poor handling of
  soft-links and other bugs common with the \texttt{make} versions
  provided by commercial Unix vendors, GNU \texttt{make} (sometimes
  called \texttt{gmake}) should be preferred.  This option provides a
  means for specifying the make executable to be used.

\item[\texttt{--bash=/path/to/sh}] On some (usually older UNIX)
  machines, the ``bash'' shell is unavailable.  To run on these
  systems, \texttt{genmake2} can be invoked using an ``sh'' (that is,
  a Bourne, POSIX, or compatible) shell.  The syntax in these
  circumstances is:
  \begin{center}
    \texttt{\%  /bin/sh genmake2 -bash=/bin/sh [...options...]}
  \end{center}
  where \texttt{/bin/sh} can be replaced with the full path and name
  of the desired shell.

\end{description}
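
The following is an illustrative fragment showing the
\texttt{pkg\_depend} syntax described under \texttt{--pdepend} above.
The package names here are placeholders chosen for the example, not
real MITgcm packages:
\begin{verbatim}
# pkgA requires pkgB, but must not be combined with pkgC
pkgA  +pkgB
pkgA  -pkgC
\end{verbatim}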

\subsection{Building with MPI}
\label{sect:mpi-build}

Building MITgcm to use MPI libraries can be complicated due to the
variety of different MPI implementations available, their dependencies
or interactions with different compilers, and their often ad-hoc
locations within file systems.  For these reasons, it's generally a
good idea to start by finding and reading the documentation for your
machine(s) and, if necessary, seeking help from your local systems
administrator.

The steps for building MITgcm with MPI support are:
\begin{enumerate}

\item Determine the locations of your MPI-enabled compiler and/or MPI
  libraries and put them into an options file as described in Section
  \ref{sect:genmake}.  One can start with one of the examples in:
  \begin{rawhtml} <A
    href="http://mitgcm.org/cgi-bin/viewcvs.cgi/MITgcm/tools/build_options/">
  \end{rawhtml}
  \begin{center}
    \texttt{MITgcm/tools/build\_options/}
  \end{center}
  \begin{rawhtml} </A> \end{rawhtml}
  such as \texttt{linux\_ia32\_g77+mpi\_cg01} or
  \texttt{linux\_ia64\_efc+mpi} and then edit it to suit the machine at
  hand.  You may need help from your user guide or local systems
  administrator to determine the exact location of the MPI libraries.
  If libraries are not installed, MPI implementations and related
  tools are available including:
884      \begin{itemize}
885      \item \begin{rawhtml} <A
886          href="http://www-unix.mcs.anl.gov/mpi/mpich/">
887        \end{rawhtml}
888        MPICH
889        \begin{rawhtml} </A> \end{rawhtml}
890    
891      \item \begin{rawhtml} <A
892          href="http://www.lam-mpi.org/">
893        \end{rawhtml}
894        LAM/MPI
895        \begin{rawhtml} </A> \end{rawhtml}
896    
897      \item \begin{rawhtml} <A
898          href="http://www.osc.edu/~pw/mpiexec/">
899        \end{rawhtml}
900        MPIexec
901        \begin{rawhtml} </A> \end{rawhtml}
902      \end{itemize}
903      
\item Build the code with the \texttt{genmake2} \texttt{-mpi} option
  (see Section \ref{sect:genmake}) using commands such as:
{\footnotesize \begin{verbatim}
  %  ../../../tools/genmake2 -mods=../code -mpi -of=YOUR_OPTFILE
  %  make depend
  %  make
\end{verbatim} }

\item Run the code with the appropriate MPI ``run'' or ``exec''
  program provided with your particular implementation of MPI.
  Typical MPI packages such as MPICH will use something like:
\begin{verbatim}
  %  mpirun -np 4 -machinefile mf ./mitgcmuv
\end{verbatim}
  Slightly more complicated scripts may be needed for many machines
  since execution of the code may be controlled by both the MPI
  library and a job scheduling and queueing system such as PBS,
  LoadLeveler, Condor, or any of a number of similar tools.  A few
  example scripts (those used for our \begin{rawhtml} <A
    href="http://mitgcm.org/testing.html"> \end{rawhtml}regular
  verification runs\begin{rawhtml} </A> \end{rawhtml}) are available
  at:
  \begin{rawhtml} <A
    href="http://mitgcm.org/cgi-bin/viewcvs.cgi/MITgcm_contrib/test_scripts/">
  \end{rawhtml}
  {\footnotesize \tt
    http://mitgcm.org/cgi-bin/viewcvs.cgi/MITgcm\_contrib/test\_scripts/ }
  \begin{rawhtml} </A> \end{rawhtml}

\end{enumerate}

An example of the above process on the MITgcm cluster (``cg01'') using
the GNU g77 compiler and the mpich MPI library is:

{\footnotesize \begin{verbatim}
  %  cd MITgcm/verification/exp5
  %  mkdir build
  %  cd build
  %  ../../../tools/genmake2 -mpi -mods=../code \
       -of=../../../tools/build_options/linux_ia32_g77+mpi_cg01
  %  make depend
  %  make
  %  cd ../input
  %  /usr/local/pkg/mpi/mpi-1.2.4..8a-gm-1.5/g77/bin/mpirun.ch_gm \
       -machinefile mf --gm-kill 5 -v -np 2  ../build/mitgcmuv
\end{verbatim} }



\section[Running MITgcm]{Running the model in prognostic mode}
\label{sect:runModel}

If compilation finished successfully (section \ref{sect:buildingCode})
then an executable called \texttt{mitgcmuv} will now exist in the
local directory.

To run the model as a single process (i.e., not in parallel) simply
type:
\begin{verbatim}
% ./mitgcmuv
\end{verbatim}
The ``./'' is a safe-guard to make sure you use the local executable
in case you have others that exist in your path (surely odd if you
do!).  The above command will spew out many lines of text output to
your screen.  This output contains details such as parameter values as
well as diagnostics such as mean kinetic energy, largest CFL number,
etc.  It is worth keeping this text output with the binary output, so we
normally re-direct the {\em stdout} stream as follows:
\begin{verbatim}
% ./mitgcmuv > output.txt
\end{verbatim}
Note that you can run the model in a directory other than {\em input};
you just need to copy the required input data files into the directory
from which the model is run.

For the example experiments in {\em verification}, an example of the
output is kept in {\em results/output.txt} for comparison.  You can
compare your {\em output.txt} with this one to check that the set-up
works.  Unless noted otherwise, most examples are set up to run for
only a few time steps so that you can quickly check whether the model
is working.

\subsection{Output files}

The model produces various output files.  At a minimum, the instantaneous
``state'' of the model is written out, which is made of the following files:

% [...]

as the pickup files but are named differently.  These can be
used to restart the model but are overwritten every other time they are
output to save disk space during long integrations.

\subsection{Looking at the output}

All the model data are written according to a ``meta/data'' file format.
Each variable is associated with two files with suffix names \textit{.data}

% [...]

written in this format.  The matlab scripts are located in the directory
\textit{utils/matlab} under the root tree.  The script \textit{rdmds.m} reads
the data.  Look at the comments inside the script to see how to use it.

Some examples of reading and visualizing some output in {\em Matlab}:
\begin{verbatim}
% matlab
>> H=rdmds('Depth');
>> contourf(H');colorbar;
>> title('Depth of fluid as used by model');

>> eta=rdmds('Eta',10);
>> imagesc(eta');axis ij;colorbar;
>> title('Surface height at iter=10');

>> eta=rdmds('Eta',[0:10:100]);
>> for n=1:11; imagesc(eta(:,:,n)');axis ij;colorbar;pause(.5);end
\end{verbatim}

\section{Doing it yourself: customizing the code}

\subsection{Configuration and setup}

When you are ready to run the model in the configuration you want, the
easiest thing is to use and adapt the setup of the case study experiment
(described previously) that is closest to your configuration; the amount
of setup is then minimized.  In this section, we focus on the setup
relative to the ``numerical model'' part of the code (the setup relative
to the ``execution environment'' part is covered in the parallel
implementation section) and on the variables and parameters that you are
likely to change.
   
The CPP keys relative to the ``numerical model'' part of the code are all
defined and set in the file \textit{CPP\_OPTIONS.h} in the directory
\textit{model/inc} or in one of the \textit{code} directories of the case
study experiments under \textit{verification}.  The model parameters are
defined and declared in the file \textit{model/inc/PARAMS.h} and their
default values are set in the routine \textit{model/src/set\_defaults.F}.
The default values can be modified in the namelist file \textit{data},
which needs to be located in the directory where you will run the model.
The parameters are initialized in the routine \textit{model/src/ini\_parms.F}.
Look at this routine to see in what part of the namelist the parameters are
located.
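
To fix ideas, the \textit{data} file is a plain-text collection of Fortran
namelists.  A minimal sketch of its layout, following the conventions of the
verification experiments, is shown below; the values are placeholders only,
and the group to which a given parameter belongs should be checked in
\textit{model/src/ini\_parms.F}:
\begin{verbatim}
# Continuous equation parameters (viscosities, diffusivities, EOS, ...)
 &PARM01
 viscAh=4.E2,
 &
# Elliptic solver parameters
 &PARM02
 cg2dTargetResidual=1.E-13,
 &
# Time stepping parameters
 &PARM03
 nTimeSteps=10,
 &
# Gridding parameters
 &PARM04
 usingCartesianGrid=.TRUE.,
 &
# Input datasets
 &PARM05
 bathyFile='topog.box',
 &
\end{verbatim}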
   
In what follows, the parameters are grouped into categories related to the
 computational domain, the equations solved in the model, and the simulation  
 controls.  
   
 \subsubsection{Computational domain, geometry and time-discretization}  
   
 \begin{itemize}  
 \item dimensions  
 \end{itemize}  
   
The numbers of points in the x, y, and r directions are represented by the
variables \textbf{sNx}, \textbf{sNy}, and \textbf{Nr} respectively, which are
declared and set in the file \textit{model/inc/SIZE.h}.  (Again, this assumes
a mono-processor calculation.  For multiprocessor calculations see the
section on parallel implementation.)
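
For orientation, the heart of \textit{SIZE.h} is a single \texttt{PARAMETER}
statement; a condensed, illustrative sketch for a $60\times60$ grid with 4
vertical levels run as one tile on one process is given below (the overlap
widths \textbf{OLx}, \textbf{OLy} and the tile/process counts \textbf{nSx},
\textbf{nSy}, \textbf{nPx}, \textbf{nPy} belong to the parallel decomposition
described elsewhere):
\begin{verbatim}
      PARAMETER (
     &           sNx =  60,
     &           sNy =  60,
     &           OLx =   2,
     &           OLy =   2,
     &           nSx =   1,
     &           nSy =   1,
     &           nPx =   1,
     &           nPy =   1,
     &           Nx  = sNx*nSx*nPx,
     &           Ny  = sNy*nSy*nPy,
     &           Nr  =   4 )
\end{verbatim}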
   
 \begin{itemize}  
 \item grid  
 \end{itemize}  
   
Three different grids are available: cartesian, spherical polar, and
curvilinear (including the cubed sphere).  The grid is selected through the
logical variables \textbf{usingCartesianGrid},
\textbf{usingSphericalPolarGrid}, and \textbf{usingCurvilinearGrid}.  In the
case of spherical and curvilinear grids, the southern boundary is defined
through the variable \textbf{phiMin}, which corresponds to the latitude of
the southernmost cell face (in degrees).  The resolution along the x and y
directions is controlled by the 1D arrays \textbf{delx} and \textbf{dely}
(in meters in the case of a cartesian grid, in degrees otherwise).  The
vertical grid spacing is set through the 1D array \textbf{delz} for the
ocean (in meters) or \textbf{delp} for the atmosphere (in Pa).  The variable
\textbf{Ro\_SeaLevel} represents the standard position of sea level in the
``R'' coordinate.  This is typically set to 0~m for the ocean (default
value) and $10^{5}$~Pa for the atmosphere.  For the atmosphere, also set the
logical variable \textbf{groundAtK1} to '.\texttt{TRUE}.', which puts the
first level (k=1) at the lower boundary (ground).
   
For the cartesian grid case, the Coriolis parameter $f$ is set through the
variables \textbf{f0} and \textbf{beta}, which correspond to the reference
Coriolis parameter (in s$^{-1}$) and $\frac{\partial f}{\partial y}$ (in
m$^{-1}$s$^{-1}$) respectively.  If \textbf{beta} is set to a nonzero value,
\textbf{f0} is the value of $f$ at the southern edge of the domain.
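
As an illustration, a uniform-resolution cartesian grid on a beta plane could
be described by namelist lines such as the following (the values are
illustrative only; the grid arrays sit in the \texttt{PARM04} group and the
Coriolis parameters in \texttt{PARM01}):
\begin{verbatim}
 &PARM01
 f0=1.E-4,
 beta=1.E-11,
 &

 &PARM04
 usingCartesianGrid=.TRUE.,
 delX=60*20.E3,
 delY=60*20.E3,
 delZ=500., 500., 500., 500.,
 &
\end{verbatim}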
   
 \begin{itemize}  
 \item topography - full and partial cells  
 \end{itemize}  
   
The domain bathymetry is read from a file that contains a 2D (x,y) map of
depths (in m) for the ocean or pressures (in Pa) for the atmosphere.  The
file name is held in the variable \textbf{bathyFile}.  The file is assumed
to contain binary numbers giving the depth (pressure) of the model at each
grid cell, ordered with the x coordinate varying fastest.  The points are
ordered from low coordinate to high coordinate for both axes.  The model
code applies without modification to enclosed, periodic, and double periodic
domains.  Periodicity is assumed by default and is suppressed by setting the
depths to 0~m for the cells at the limits of the computational domain (note:
we are not sure this is the case for the atmosphere).  The precision with
which to read the binary data is controlled by the integer variable
\textbf{readBinaryPrec}, which can take the value \texttt{32} (single
precision) or \texttt{64} (double precision).  See the matlab program
\textit{gendata.m} in the \textit{input} directories under
\textit{verification} to see how the bathymetry files are generated for the
case study experiments.
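
For example, a flat-bottomed $60\times60$ ocean bathymetry file could be
generated in Matlab along the following lines.  This is a hypothetical sketch
in the spirit of the \textit{gendata.m} scripts: the file name and numbers
are arbitrary, the model is normally built to read big-endian data, and the
write precision must match \textbf{readBinaryPrec} (here \texttt{64}):
\begin{verbatim}
% matlab
>> nx=60; ny=60;
>> H=-5000*ones(nx,ny);        % 5000 m of water everywhere
>> H(:,1)=0; H(:,end)=0;       % walls at the y-limits suppress periodicity
>> fid=fopen('topog.box','w','b');
>> fwrite(fid,H,'real*8');
>> fclose(fid);
\end{verbatim}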
   
To use the partial cell capability, the variable \textbf{hFacMin} needs to
be set to a value between 0 and 1 (it is set to 1 by default), corresponding
to the minimum fractional size of the cell.  For example, if the bottom cell
is 500~m thick and \textbf{hFacMin} is set to 0.1, the actual thickness of
the cell (i.e.\ as used in the code) can cover a range of discrete values
50~m apart, from 50~m to 500~m, depending on the value of the bottom depth
(in \textbf{bathyFile}) at this point.
   
Note that the bottom depths (or pressures) need not coincide with the model's
levels as deduced from \textbf{delz} or \textbf{delp}.  The model will
interpolate the numbers in \textbf{bathyFile} so that they match the levels
obtained from \textbf{delz} or \textbf{delp} and \textbf{hFacMin}.
   
(Note: the atmospheric case is somewhat more involved than described here.)
   
 \begin{itemize}  
 \item time-discretization  
 \end{itemize}  
   
The time steps are set through the real variables \textbf{deltaTMom} and
\textbf{deltaTtracer} (in s), which represent the time step for the momentum
and tracer equations, respectively.  For synchronous integrations, simply set
the two variables to the same value (or you can prescribe a single time step
through the variable \textbf{deltaT}).  The Adams-Bashforth stabilizing
parameter is set through the variable \textbf{abEps} (dimensionless).  The
staggered baroclinic time stepping can be activated by setting the logical
variable \textbf{staggerTimeStep} to '.\texttt{TRUE}.'.
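
In the time-stepping namelist (\texttt{PARM03}), a synchronous 20-minute time
step would, for instance, read (illustrative values):
\begin{verbatim}
 &PARM03
 deltaTmom=1200.,
 deltaTtracer=1200.,
 abEps=0.1,
 &
\end{verbatim}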
   
 \subsubsection{Equation of state}  
   
First, because the model equations are written in terms of perturbations, a
reference thermodynamic state needs to be specified.  This is done through
the 1D arrays \textbf{tRef} and \textbf{sRef}.  \textbf{tRef} specifies the
reference potential temperature profile (in $^{\circ}$C for the ocean and K
for the atmosphere) starting from the level k=1.  Similarly, \textbf{sRef}
specifies the reference salinity profile (in ppt) for the ocean or the
reference specific humidity profile (in g/kg) for the atmosphere.
   
The form of the equation of state is controlled by the character variables
\textbf{buoyancyRelation} and \textbf{eosType}.  \textbf{buoyancyRelation}
is set to '\texttt{OCEANIC}' by default and needs to be set to
'\texttt{ATMOSPHERIC}' for atmosphere simulations.  In this case,
\textbf{eosType} must be set to '\texttt{IDEALGAS}'.  For the ocean, two
forms of the equation of state are available: linear (set \textbf{eosType}
to '\texttt{LINEAR}') and a polynomial approximation to the full nonlinear
equation (set \textbf{eosType} to '\texttt{POLYNOMIAL}').  In the linear
case, you need to specify the thermal and haline expansion coefficients,
represented by the variables \textbf{tAlpha} (in K$^{-1}$) and
\textbf{sBeta} (in ppt$^{-1}$).  For the nonlinear case, you need to
generate a file of polynomial coefficients called \textit{POLY3.COEFFS}.
To do this, use the program \textit{utils/knudsen2/knudsen2.f} under the
model tree (a Makefile is available in the same directory; you will need to
edit the number and the values of the vertical levels in
\textit{knudsen2.f} so that they match those of your configuration).
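
For instance, a linear equation of state for a 4-level ocean configuration
might be specified in \texttt{PARM01} as (illustrative values):
\begin{verbatim}
 &PARM01
 tRef=20.,15.,10.,5.,
 sRef=4*35.,
 eosType='LINEAR',
 tAlpha=2.E-4,
 sBeta =7.4E-4,
 &
\end{verbatim}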
   
 \subsubsection{Momentum equations}  
   
In this section, we only focus for now on the parameters that you are likely
to change, i.e.\ the ones relative to forcing and dissipation, for example.
The details relevant to the vector-invariant form of the equations and the
various advection schemes are not covered for the moment.  We assume that
you use the standard form of the momentum equations (i.e.\ the flux form)
with the default advection scheme.  Also, there are a few logical variables
that allow you to turn on/off various terms in the momentum equation.  These
variables are called \textbf{momViscosity}, \textbf{momAdvection},
\textbf{momForcing}, \textbf{useCoriolis}, \textbf{momPressureForcing},
\textbf{momStepping}, and \textbf{metricTerms}, and they are assumed to be
set to '.\texttt{TRUE}.' here.  Look at the file \textit{model/inc/PARAMS.h}
for a precise definition of these variables.
   
 \begin{itemize}  
 \item initialization  
 \end{itemize}  
   
 The velocity components are initialized to 0 unless the simulation is  
 starting from a pickup file (see section on simulation control parameters).  
   
 \begin{itemize}  
 \item forcing  
 \end{itemize}  
   
This section only applies to the ocean.  You need to generate wind-stress
data in the two files specified by \textbf{zonalWindFile} and
\textbf{meridWindFile}, corresponding to the zonal and meridional components
of the wind stress, respectively (if you want the stress to be along the
direction of only one of the model horizontal axes, you only need to
generate one file).  The format of the files is similar to the bathymetry
file.  The zonal (meridional) stress data are assumed to be in Pa and
located at U-points (V-points).  As for the bathymetry, the precision with
which to read the binary data is controlled by the variable
\textbf{readBinaryPrec}.  See the matlab program \textit{gendata.m} in the
\textit{input} directories under \textit{verification} to see how simple
analytical wind forcing data are generated for the case study experiments.
   
There is also the possibility of prescribing time-dependent periodic
forcing.  To do this, concatenate the successive time records into a single
file (for each stress component) ordered in an (x,y,t) fashion and set the
following variables: \textbf{periodicExternalForcing} to '.\texttt{TRUE}.',
\textbf{externForcingPeriod} to the period (in s) with which the forcing
varies (typically 1 month), and \textbf{externForcingCycle} to the repeat
time (in s) of the forcing (typically 1 year; note that
\textbf{externForcingCycle} must be a multiple of
\textbf{externForcingPeriod}).  With these variables set, the model will
interpolate the forcing linearly in time at each iteration.
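
As an illustration, monthly wind-stress records repeating over a twelve-month
cycle might be declared as follows (the file names are hypothetical and the
group placement follows the verification experiments):
\begin{verbatim}
 &PARM03
 periodicExternalForcing=.TRUE.,
 externForcingPeriod=2592000.,
 externForcingCycle=31104000.,
 &

 &PARM05
 zonalWindFile='taux.bin',
 meridWindFile='tauy.bin',
 &
\end{verbatim}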
   
 \begin{itemize}  
 \item dissipation  
 \end{itemize}  
   
The lateral eddy viscosity coefficient is specified through the variable
\textbf{viscAh} (in m$^{2}$s$^{-1}$).  The vertical eddy viscosity
coefficient is specified through the variable \textbf{viscAz} (in
m$^{2}$s$^{-1}$) for the ocean and \textbf{viscAp} (in Pa$^{2}$s$^{-1}$) for
the atmosphere.  The vertical diffusive fluxes of momentum can be computed
implicitly by setting the logical variable \textbf{implicitViscosity} to
'.\texttt{TRUE}.'.  In addition, biharmonic mixing can be added as well
through the variable \textbf{viscA4} (in m$^{4}$s$^{-1}$).  On a spherical
polar grid, you might also need to set the variable \textbf{cosPower}, which
is set to 0 by default and which represents the power of the cosine of
latitude by which the viscosity is multiplied.  Slip or no-slip conditions
at the lateral and bottom boundaries are specified through the logical
variables \textbf{no\_slip\_sides} and \textbf{no\_slip\_bottom}.  If set to
'.\texttt{FALSE}.', free-slip boundary conditions are applied.  If no-slip
boundary conditions are applied at the bottom, a bottom drag can be applied
as well.  Two forms are available: linear (set the variable
\textbf{bottomDragLinear}, in s$^{-1}$) and quadratic (set the variable
\textbf{bottomDragQuadratic}, in m$^{-1}$).
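
Expressed as namelist entries in \texttt{PARM01}, a typical set of
dissipation parameters might read (illustrative values, not recommendations):
\begin{verbatim}
 &PARM01
 viscAh=4.E2,
 viscAz=1.E-3,
 viscA4=0.,
 no_slip_sides=.FALSE.,
 no_slip_bottom=.TRUE.,
 bottomDragQuadratic=2.E-3,
 &
\end{verbatim}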
   
 The Fourier and Shapiro filters are described elsewhere.  
   
 \begin{itemize}  
 \item C-D scheme  
 \end{itemize}  
   
If you run at a sufficiently coarse resolution, you will need the C-D scheme
for the computation of the Coriolis terms.  The variable \textbf{tauCD},
which represents the C-D scheme coupling timescale (in s), needs to be set.
   
 \begin{itemize}  
 \item calculation of pressure/geopotential  
 \end{itemize}  
   
First, to run a non-hydrostatic ocean simulation, set the logical variable
\textbf{nonHydrostatic} to '.\texttt{TRUE}.'.  The pressure field is then
inverted through a 3D elliptic equation.  (Note: this capability is not
available for the atmosphere yet.)  By default, a hydrostatic simulation is
assumed and a 2D elliptic equation is used to invert the pressure field.
The parameters controlling the behaviour of the elliptic solvers are the
variables \textbf{cg2dMaxIters} and \textbf{cg2dTargetResidual} for the 2D
case and \textbf{cg3dMaxIters} and \textbf{cg3dTargetResidual} for the 3D
case.  You will probably not need to alter the default values.
   
For the calculation of the surface pressure (for the ocean) or surface
geopotential (for the atmosphere) you need to set the logical variables
\textbf{rigidLid} and \textbf{implicitFreeSurface} (set one to
'.\texttt{TRUE}.' and the other to '.\texttt{FALSE}.' depending on how you
want to deal with the ocean upper or atmosphere lower boundary).
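
For example, a hydrostatic run with an implicit free surface and explicitly
set solver parameters might contain (illustrative values; the solver
parameters sit in the \texttt{PARM02} group):
\begin{verbatim}
 &PARM01
 rigidLid=.FALSE.,
 implicitFreeSurface=.TRUE.,
 &

 &PARM02
 cg2dMaxIters=1000,
 cg2dTargetResidual=1.E-13,
 &
\end{verbatim}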
   
 \subsubsection{Tracer equations}  
   
This section covers the tracer equations, i.e.\ the potential temperature
equation and the salinity (for the ocean) or specific humidity (for the
atmosphere) equation.  As for the momentum equations, we only describe for
now the parameters that you are likely to change.  The logical variables
\textbf{tempDiffusion}, \textbf{tempAdvection}, \textbf{tempForcing}, and
\textbf{tempStepping} allow you to turn on/off terms in the temperature
equation (similarly for salinity or specific humidity with the variables
\textbf{saltDiffusion}, \textbf{saltAdvection}, etc.).  These variables are
all assumed here to be set to '.\texttt{TRUE}.'.  Look at the file
\textit{model/inc/PARAMS.h} for a precise definition.
   
 \begin{itemize}  
 \item initialization  
 \end{itemize}  
   
The initial tracer data can be read from the binary files specified by
\textbf{hydrogThetaFile} and \textbf{hydrogSaltFile}.  These files should
contain 3D data ordered in an (x,y,r) fashion with k=1 as the first vertical
level.  If no file names are provided, the tracers are initialized with the
values of \textbf{tRef} and \textbf{sRef} mentioned above (in the equation
of state section).  In this case, the initial tracer data are uniform in x
and y for each depth level.
   
 \begin{itemize}  
 \item forcing  
 \end{itemize}  
   
 This part is more relevant for the ocean, the procedure for the atmosphere  
 not being completely stabilized at the moment.  
   
A combination of flux data and relaxation terms can be used for driving the
tracer equations.  For potential temperature, heat flux data (in W/m$^{2}$)
can be stored in the 2D binary file specified by \textbf{surfQfile}.
Alternatively or in addition, the forcing can be specified through a
relaxation term.  The SST data towards which the model surface temperature
is restored are assumed to be stored in the 2D binary file specified by
\textbf{thetaClimFile}.  The corresponding relaxation time scale coefficient
is set through the variable \textbf{tauThetaClimRelax} (in s).  The same
procedure applies for salinity with the variable names \textbf{EmPmRfile},
\textbf{saltClimFile}, and \textbf{tauSaltClimRelax} for the freshwater flux
(in m/s) and surface salinity (in ppt) data files and the relaxation time
scale coefficient (in s), respectively.  Also for salinity, if the CPP key
\textbf{USE\_NATURAL\_BCS} is turned on, natural boundary conditions are
applied, i.e.\ when computing the surface salinity tendency, the freshwater
flux is multiplied by the model surface salinity instead of a constant
salinity value.
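
As an illustration, a setup combining surface flux files with relaxation to
climatologies might contain lines such as these (the file names are
hypothetical, the time scales illustrative, and the group placement follows
the verification experiments):
\begin{verbatim}
 &PARM03
 tauThetaClimRelax=5184000.,
 tauSaltClimRelax=7776000.,
 &

 &PARM05
 surfQfile='Qnet.bin',
 EmPmRfile='EmPmR.bin',
 thetaClimFile='SST.bin',
 saltClimFile='SSS.bin',
 &
\end{verbatim}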
   
 As for the other input files, the precision with which to read the data is  
 controlled by the variable \textbf{readBinaryPrec}. Time-dependent, periodic  
 forcing can be applied as well following the same procedure used for the  
 wind forcing data (see above).  
   
 \begin{itemize}  
 \item dissipation  
 \end{itemize}  
   
Lateral eddy diffusivities for temperature and salinity/specific humidity
are specified through the variables \textbf{diffKhT} and \textbf{diffKhS}
(in m$^{2}$/s).  Vertical eddy diffusivities are specified through the
variables \textbf{diffKzT} and \textbf{diffKzS} (in m$^{2}$/s) for the ocean
and \textbf{diffKpT} and \textbf{diffKpS} (in Pa$^{2}$/s) for the
atmosphere.  The vertical diffusive fluxes can be computed implicitly by
setting the logical variable \textbf{implicitDiffusion} to
'.\texttt{TRUE}.'.  In addition, biharmonic diffusivities can be specified
as well through the coefficients \textbf{diffK4T} and \textbf{diffK4S} (in
m$^{4}$/s).  Note that the cosine power scaling (specified through
\textbf{cosPower}; see the momentum equations section) is applied to the
tracer diffusivities (Laplacian and biharmonic) as well.  The Gent and
McWilliams parameterization for oceanic tracers is described in the package
section.  Finally, note that tracers can also be subject to Fourier and
Shapiro filtering (see the corresponding section on these filters).
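
The corresponding namelist entries in \texttt{PARM01} might read
(illustrative values):
\begin{verbatim}
 &PARM01
 diffKhT=4.E2,
 diffKzT=1.E-5,
 diffKhS=4.E2,
 diffKzS=1.E-5,
 &
\end{verbatim}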
   
 \begin{itemize}  
 \item ocean convection  
 \end{itemize}  
   
Two options are available to parameterize ocean convection.  One is to use
the convective adjustment scheme; in this case, you need to set the variable
\textbf{cadjFreq}, which represents the frequency (in s) with which the
adjustment algorithm is called, to a non-zero value (if set to a negative
value by the user, the model will set it to the tracer time step).  The
other option is to parameterize convection with implicit vertical diffusion.
To do this, set the logical variable \textbf{implicitDiffusion} to
'.\texttt{TRUE}.' and the real variable \textbf{ivdc\_kappa} to the value
(in m$^{2}$/s) you wish the tracer vertical diffusivities to have when
mixing tracers vertically due to static instabilities.  Note that
\textbf{cadjFreq} and \textbf{ivdc\_kappa} cannot both have non-zero values.
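
For example, selecting the implicit-vertical-diffusion option would look like
(illustrative value; leave \textbf{cadjFreq} at zero in this case):
\begin{verbatim}
 &PARM01
 implicitDiffusion=.TRUE.,
 ivdc_kappa=10.,
 &
\end{verbatim}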
   
 \subsubsection{Simulation controls}  
   
The model ``clock'' is defined by the variable \textbf{deltaTClock} (in s),
which determines the I/O frequencies and is used in tagging output.
Typically, you will set it to the tracer time step for accelerated runs
(otherwise it is simply set to the default time step \textbf{deltaT}).  The
frequencies of checkpointing and of dumping the model state are referenced
to this clock (see below).
   
 \begin{itemize}  
 \item run duration  
 \end{itemize}  
   
The beginning of a simulation is set by specifying a start time (in s)
through the real variable \textbf{startTime} or by specifying an initial
iteration number through the integer variable \textbf{nIter0}.  If these
variables are set to nonzero values, the model will look for a ``pickup''
file \textit{pickup.0000nIter0} to restart the integration.  The end of a
simulation is set through the real variable \textbf{endTime} (in s).
Alternatively, you can instead specify the number of time steps to execute
through the integer variable \textbf{nTimeSteps}.

\begin{itemize}
\item frequency of output
\end{itemize}

Real variables defining the frequencies (in s) with which output files are
written to disk need to be set.  \textbf{dumpFreq} controls the frequency
with which the instantaneous state of the model is saved.  \textbf{chkPtFreq}
and \textbf{pchkPtFreq} control the output frequencies of the rolling and
permanent checkpoint files, respectively.  See the section on output files
above for the definition of the model state and checkpoint files.  In
addition, time-averaged fields can be written out by setting the variable
\textbf{taveFreq} (in s).  The precision with which the binary data are
written is controlled by the integer variable \textbf{writeBinaryPrec} (set
it to \texttt{32} or \texttt{64}).
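
Putting the simulation controls together, the run-control portion of a
\textit{data} file might read as follows (illustrative values;
\textbf{writeBinaryPrec} belongs to \texttt{PARM01} and the remaining
variables to \texttt{PARM03} in the verification experiments):
\begin{verbatim}
 &PARM01
 writeBinaryPrec=64,
 &

 &PARM03
 nIter0=0,
 nTimeSteps=100,
 deltaTClock=1200.,
 dumpFreq=120000.,
 chkPtFreq=120000.,
 pchkPtFreq=1200000.,
 taveFreq=120000.,
 &
\end{verbatim}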
