% manual/s_getstarted/text/getting_started.tex (revision 1.46 by dimitri, 21 Nov 2015)
% $Header$
% $Name$

%\section{Getting started}

We believe the best way to familiarize yourself with the model is to
run the case study examples provided with the base version.
Information on how to obtain, compile, and run the code is found here,
as well as a brief description of the model directory structure and
the case study examples. Information is also provided on how to
customize the code when you are ready to try implementing the
configuration you have in mind. The code and algorithm are described
more fully in chapters \ref{chap:discretization} and
\ref{chap:sarch}.

\section{Where to find information}
\label{sec:whereToFindInfo}
\begin{rawhtml}
<!-- CMIREDIR:whereToFindInfo: -->
\end{rawhtml}

There is a web-archived support mailing list for the model that
you can email at \texttt{MITgcm-support@mitgcm.org} or browse at:
\begin{rawhtml} <A href=http://mitgcm.org/mailman/listinfo/mitgcm-support/ target="idontexist"> \end{rawhtml}
\begin{verbatim}
http://mitgcm.org/mailman/listinfo/mitgcm-support/
http://mitgcm.org/pipermail/mitgcm-support/
\end{verbatim}
\begin{rawhtml} </A> \end{rawhtml}

\section{Obtaining the code}
\label{sec:obtainingCode}
\begin{rawhtml}
<!-- CMIREDIR:obtainingCode: -->
\end{rawhtml}

MITgcm can be downloaded from our system by following the
instructions below. As a courtesy we ask that you send e-mail to us at
\begin{rawhtml} <A href=mailto:MITgcm-support@mitgcm.org> \end{rawhtml}
MITgcm-support@mitgcm.org
\begin{rawhtml} </A> \end{rawhtml}
to enable us to keep track of who is using the model and in what
applications. You can download the model in two ways:

\begin{enumerate}
\item Using CVS software. CVS is a freely available source code
management tool. To use CVS you need to have the software installed.
Many systems come with CVS pre-installed; otherwise good places to
look for the software for a particular platform are
\begin{rawhtml} <A href=http://www.cvshome.org/ target="idontexist"> \end{rawhtml}
cvshome.org
\begin{rawhtml} </A> \end{rawhtml}
and
\begin{rawhtml} <A href=http://www.wincvs.org/ target="idontexist"> \end{rawhtml}
wincvs.org
\begin{rawhtml} </A> \end{rawhtml}.

\item Using a tar file. This method is simple and does not require
any special software. However, it does not provide easy support for
maintenance updates.

\end{enumerate}

\subsection{Method 1 - Checkout from CVS}
\label{sec:cvs_checkout}

If CVS is available on your system, we strongly encourage you to use
it. CVS provides an efficient and elegant way of organizing your code
and keeping track of your changes. If CVS is not available on your
machine, you can also download a tar file.

Before you can use CVS, the following environment variable should be
set within your shell. For a csh or tcsh shell, put
\begin{verbatim}
% setenv CVSROOT :pserver:cvsanon@mitgcm.org:/u/gcmpack
\end{verbatim}
in your \texttt{.cshrc} or \texttt{.tcshrc} file. For bash or sh
shells, put:
\begin{verbatim}
% export CVSROOT=':pserver:cvsanon@mitgcm.org:/u/gcmpack'
\end{verbatim}
in your \texttt{.profile} or \texttt{.bashrc} file.
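
Because a mistyped \texttt{CVSROOT} only produces confusing server
errors later, it can be worth confirming that the variable actually
reached your environment before running any \texttt{cvs} command. A
minimal sh-compatible check (the pserver path is the one quoted above;
nothing here contacts the server):

```shell
# Set CVSROOT for the current session and confirm it took effect.
# The pserver path is the one given in the text above; this is safe
# to run anywhere since no network connection is made.
export CVSROOT=':pserver:cvsanon@mitgcm.org:/u/gcmpack'
echo "CVSROOT=$CVSROOT"
```

In a csh or tcsh shell, \texttt{echo \$CVSROOT} performs the same check.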

To get MITgcm through CVS, first register with the MITgcm CVS server
using the command:
\begin{verbatim}
% cvs login ( CVS password: cvsanon )
\end{verbatim}
You only need to do a ``cvs login'' once.

To obtain the latest sources type:
\begin{verbatim}
% cvs co -P MITgcm
\end{verbatim}
or to get a specific release type:
\begin{verbatim}
% cvs co -P -r checkpoint52i_post MITgcm
\end{verbatim}
The CVS command ``\texttt{cvs co}'' is an abbreviation of the full
``\texttt{cvs checkout}'' command; the ``-P'' option (\texttt{cvs co
-P}) prevents the download of unnecessary empty directories.

The MITgcm web site contains further directions concerning the source
code and CVS. It also contains a web interface to our CVS archive so
that one may easily view the state of files, revisions, and other
development milestones:
\begin{rawhtml} <A href="http://mitgcm.org/viewvc/MITgcm/MITgcm/" target="idontexist"> \end{rawhtml}
\begin{verbatim}
http://mitgcm.org/viewvc/MITgcm/MITgcm/
\end{verbatim}
\begin{rawhtml} </A> \end{rawhtml}

As a convenience, the MITgcm CVS server contains aliases which are
named subsets of the codebase. These aliases can be especially
helpful when used over slow internet connections or on machines with
restricted storage space. Table \ref{tab:cvsModules} contains a list
of the CVS aliases.
\begin{table}[htb]
  \centering
  \begin{tabular}[htb]{|lp{3.25in}|}\hline
    \textbf{Alias Name}    &  \textbf{Information (directories) Contained}  \\\hline
    \texttt{MITgcm\_code}  &  Only the source code -- none of the verification examples.  \\
    \texttt{MITgcm\_verif\_basic}
    &  Source code plus a small set of the verification examples
    (\texttt{global\_ocean.90x40x15}, \texttt{aim.5l\_cs}, \texttt{hs94.128x64x5},
    \texttt{front\_relax}, and \texttt{plume\_on\_slope}).  \\
    \texttt{MITgcm\_verif\_atmos}  &  Source code plus all of the atmospheric examples.  \\
    \texttt{MITgcm\_verif\_ocean}  &  Source code plus all of the oceanic examples.  \\
    \texttt{MITgcm\_verif\_all}    &  Source code plus all of the
    verification examples. \\\hline
  \end{tabular}
  \caption{MITgcm CVS Modules}
  \label{tab:cvsModules}
\end{table}

The checkout process creates a directory called \texttt{MITgcm}. If
the directory \texttt{MITgcm} exists, this command updates your code
based on the repository. Each directory in the source tree contains a
directory \texttt{CVS}. This information is required by CVS to keep
track of your file versions with respect to the repository. Don't edit
the files in \texttt{CVS}! You can also use CVS to download code
updates. More extensive information on using CVS for maintaining
MITgcm code can be found
\begin{rawhtml} <A href="http://mitgcm.org/public/using_cvs.html" target="idontexist"> \end{rawhtml}
here
\begin{rawhtml} </A> \end{rawhtml}.
It is important to note that the CVS aliases in Table
\ref{tab:cvsModules} cannot be used in conjunction with the CVS
\texttt{-d DIRNAME} option. However, the \texttt{MITgcm} directories
they create can be renamed following the check-out:
\begin{verbatim}
   %  cvs co -P MITgcm_verif_basic
   %  mv MITgcm MITgcm_verif_basic
\end{verbatim}

Note that it is possible to checkout code without ``cvs login'' and
without setting any shell environment variables by specifying the
pserver name and password in one line, for example:
\begin{verbatim}
   %  cvs -d :pserver:cvsanon:cvsanon@mitgcm.org:/u/gcmpack co -P MITgcm
\end{verbatim}
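
For reference, the \texttt{CVS} bookkeeping directories mentioned
above always hold at least three files: \texttt{Root} (the server
specification), \texttt{Repository} (the module path), and
\texttt{Entries} (per-file revision records). The sketch below
fabricates a mock checkout directory purely for illustration; a real
\texttt{cvs co} writes these files itself:

```shell
# Mock of the CVS bookkeeping files found in every checked-out
# directory. A real "cvs co" creates these -- never edit them by hand.
mkdir -p MITgcm_demo/CVS
printf '%s\n' ':pserver:cvsanon@mitgcm.org:/u/gcmpack' > MITgcm_demo/CVS/Root
printf '%s\n' 'MITgcm' > MITgcm_demo/CVS/Repository
: > MITgcm_demo/CVS/Entries      # per-file revision records go here
ls MITgcm_demo/CVS
```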

\subsubsection{Upgrading from an earlier version}

If you already have an earlier version of the code you can ``upgrade''
your copy instead of downloading the entire repository again. First,
``cd'' (change directory) to the top of your working copy:
\begin{verbatim}
% cd MITgcm
\end{verbatim}
and then issue the cvs update command, such as:
\begin{verbatim}
% cvs -q update -d -P -r checkpoint52i_post
\end{verbatim}
This will update the ``tag'' to ``checkpoint52i\_post'', add any new
directories (-d) and remove any empty directories (-P). The -q option
means be quiet, which will reduce the number of messages you'll see in
the terminal. If you have modified the code prior to upgrading, CVS
will try to merge your changes with the upgrades. If there is a
conflict between your modifications and the upgrade, it will report
that file with a ``C'' in front, e.g.:
\begin{verbatim}
C model/src/ini_parms.F
\end{verbatim}
If the list of conflicts scrolled off the screen, you can re-issue the
cvs update command and it will report the conflicts. Conflicts are
indicated in the code by the delimiters ``$<<<<<<<$'', ``=======''
and ``$>>>>>>>$''. For example,
{\small
\begin{verbatim}
<<<<<<< ini_parms.F
     & bottomDragLinear,myOwnBottomDragCoefficient,
=======
     & bottomDragLinear,bottomDragQuadratic,
>>>>>>> 1.18
\end{verbatim}
}
means that you added ``myOwnBottomDragCoefficient'' to a namelist at
the same time and place that we added ``bottomDragQuadratic''. You
need to resolve this conflict; in this case the line should be
changed to:
{\small
\begin{verbatim}
     & bottomDragLinear,bottomDragQuadratic,myOwnBottomDragCoefficient,
\end{verbatim}
}
and the lines with the delimiters ($<<<<<<<$, =======, $>>>>>>>$)
deleted. Unless you are making modifications which exactly parallel
developments we make, these types of conflicts should be rare.
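
After a large update it is easy to lose track of which files were left
conflicted. Since the delimiters above always begin in column one, a
simple \texttt{grep} will list them; the sketch below builds a small
mock working copy so it can be run anywhere (in a real tree you would
grep the source directories instead):

```shell
# Find files still carrying unresolved CVS conflict markers.
# A mock conflicted file is created here for illustration only.
mkdir -p demo_src
printf '%s\n' '<<<<<<< ini_parms.F' \
  '     & bottomDragLinear,myOwnBottomDragCoefficient,' \
  '=======' \
  '     & bottomDragLinear,bottomDragQuadratic,' \
  '>>>>>>> 1.18' > demo_src/ini_parms.F
# -r searches recursively, -l prints only the matching file names
grep -rl '^<<<<<<<' demo_src
```

Run against a real working copy, the same \texttt{grep -rl} over
\texttt{model}, \texttt{pkg}, etc.\ lists every file you still need to
resolve.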

\paragraph*{Upgrading to the current pre-release version}

We don't make a ``release'' for every little patch and bug fix in
order to keep the frequency of upgrades to a minimum. However, if you
have run into a problem that we have already fixed in the latest code,
and we haven't made a ``tag'' or ``release'' since that patch, then
you'll need to get the latest code:
\begin{verbatim}
% cvs -q update -d -P -A
\end{verbatim}
Unlike the ``check-out'' and ``update'' procedures above, there is no
``tag'' or release name. The -A option tells CVS to upgrade to the
very latest version. As a rule, we don't recommend this since you
might upgrade while we are in the process of checking in the code, so
that you may only have part of a patch. Using this method of updating
also means we can't tell what version of the code you are working
with. So please be sure you understand what you're doing.

\subsection{Method 2 - Tar file download}
\label{sec:conventionalDownload}

If you do not have CVS on your system, you can download the model as a
tar file from the web site at:
\begin{rawhtml} <A href=http://mitgcm.org/download/ target="idontexist"> \end{rawhtml}
\begin{verbatim}
http://mitgcm.org/download/
\end{verbatim}
\begin{rawhtml} </A> \end{rawhtml}
The tar file still contains CVS information which we urge you not to
delete; even if you do not use CVS yourself, the information can help
us if you should need to send us your copy of the code. If a recent
tar file does not exist, then please contact the developers through
the
\begin{rawhtml} <A href="mailto:MITgcm-support@mitgcm.org"> \end{rawhtml}
MITgcm-support@mitgcm.org
\begin{rawhtml} </A> \end{rawhtml}
mailing list.

\section{Model and directory structure}
\begin{rawhtml}
<!-- CMIREDIR:directory_structure: -->
\end{rawhtml}

The ``numerical'' model is contained within an execution environment
support wrapper. This wrapper is designed to provide a general
framework for grid-point models. MITgcmUV is a specific numerical
model that uses the framework. Under this structure the model is split
into execution environment support code and conventional numerical
model code. The execution environment support code is held under the
\texttt{eesupp} directory. The grid-point model code is held under the
\texttt{model} directory. Code execution actually starts in the
\texttt{eesupp} routines and not in the \texttt{model} routines. For
this reason the top-level \texttt{MAIN.F} is in the
\texttt{eesupp/src} directory. In general, end-users should not need
to worry about this level. The top-level routine for the numerical
part of the code is in \texttt{model/src/THE\_MODEL\_MAIN.F}. Here is
a brief description of the directory structure of the model under the
root tree (a detailed description is given in section 3: Code
structure).

\begin{itemize}

\item \texttt{doc}: contains brief documentation notes.

\item \texttt{eesupp}: contains the execution environment source code.
  Also subdivided into two subdirectories \texttt{inc} and
  \texttt{src}.

\item \texttt{model}: this directory contains the main source code.
  Also subdivided into two subdirectories \texttt{inc} and
  \texttt{src}.

\item \texttt{pkg}: contains the source code for the packages. Each
  package corresponds to a subdirectory. For example, \texttt{gmredi}
  contains the code related to the Gent-McWilliams/Redi scheme, and
  \texttt{aim} the code for the intermediate atmospheric physics. The
  packages are described in detail in chapter \ref{chap:packagesI}.

\item \texttt{tools}: this directory contains various useful tools.
  For example, \texttt{genmake2} is a script written in csh (C-shell)
  that should be used to generate your makefile. The directory
  \texttt{adjoint} contains the makefile specific to the Tangent
  linear and Adjoint Compiler (TAMC) that generates the adjoint code.
  The latter is described in detail in part \ref{chap.ecco}.
  This directory also contains the subdirectory \texttt{build\_options},
  which contains the ``optfiles'' with the compiler options for the
  different compilers and machines that can run MITgcm.

\item \texttt{utils}: this directory contains various utilities. The
  subdirectory \texttt{knudsen2} contains code and a makefile that
  compute coefficients of the polynomial approximation to the Knudsen
  formula for an ocean nonlinear equation of state. The
  \texttt{matlab} subdirectory contains Matlab scripts for reading
  model output directly into Matlab. \texttt{scripts} contains C-shell
  post-processing scripts for joining processor-based and tile-based
  model output. The subdirectory \texttt{exch2} contains the code
  needed for the exch2 package to work with different combinations of
  domain decompositions.

\item \texttt{verification}: this directory contains the model
  examples. See section \ref{sec:modelExamples}.

\item \texttt{jobs}: contains sample job scripts for running MITgcm.

\item \texttt{lsopt}: line search code used for optimization.

\item \texttt{optim}: interface between MITgcm and the line search code.

\end{itemize}

\section[Building MITgcm]{Building the code}
\label{sec:buildingCode}
\begin{rawhtml}
<!-- CMIREDIR:buildingCode: -->
\end{rawhtml}

To compile the code, we use the \texttt{make} program. This uses a
file (\texttt{Makefile}) that allows us to pre-process source files,
specify compiler and optimization options, and also figure out any
file dependencies. We supply a script (\texttt{genmake2}), described
in section \ref{sec:genmake}, that automatically creates the
\texttt{Makefile} for you. You then need to build the dependencies and
compile the code.

As an example, assume that you want to build and run experiment
\texttt{verification/exp2}. There are multiple ways and places to
actually do this, but here let's build the code in
\texttt{verification/exp2/build}:
\begin{verbatim}
% cd verification/exp2/build
\end{verbatim}
First, build the \texttt{Makefile}:
\begin{verbatim}
% ../../../tools/genmake2 -mods=../code
\end{verbatim}
The command line option tells \texttt{genmake2} to override model
source code with any files in the directory \texttt{../code/}.
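
The override rule can be pictured with plain files: for any source
file present in both places, the copy in the \texttt{-mods} directory
shadows the default. The sketch below mimics that selection with
throwaway mock directories; \texttt{SIZE.h} is a real MITgcm header
name, but the \texttt{demo} tree here is a stand-in, not an actual
checkout:

```shell
# Mimic genmake2's -mods precedence with mock directories:
# a file found under code/ (the -mods dir) shadows the default copy.
mkdir -p demo/model/inc demo/code
echo 'default SIZE.h' > demo/model/inc/SIZE.h
echo 'customized SIZE.h' > demo/code/SIZE.h
# Search the -mods directory first; the first match wins.
for f in demo/code/SIZE.h demo/model/inc/SIZE.h; do
  if [ -f "$f" ]; then cat "$f"; break; fi
done
```

Deleting \texttt{demo/code/SIZE.h} and re-running the loop would fall
back to the default copy, which is exactly how an experiment's
\texttt{code/} directory customizes the model.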

On many systems, the \texttt{genmake2} program will be able to
automatically recognize the hardware, find compilers and other tools
within the user's path (``\texttt{echo \$PATH}''), and then choose an
appropriate set of options from the files (``optfiles'') contained in
the \texttt{tools/build\_options} directory. Under some
circumstances, a user may have to create a new ``optfile'' in order to
specify the exact combination of compiler, compiler flags, libraries,
and other options necessary to build a particular configuration of
MITgcm. In such cases, it is generally helpful to read the existing
``optfiles'' and mimic their syntax.

Through the MITgcm-support list, the MITgcm developers are willing to
provide help writing or modifying ``optfiles''. We encourage users
to post new ``optfiles'' (particularly ones for new machines or
architectures) to the
\begin{rawhtml} <A href="mailto:MITgcm-support@mitgcm.org"> \end{rawhtml}
MITgcm-support@mitgcm.org
\begin{rawhtml} </A> \end{rawhtml}
list.

To specify an optfile to \texttt{genmake2}, the syntax is:
\begin{verbatim}
% ../../../tools/genmake2 -mods=../code -of /path/to/optfile
\end{verbatim}

Once a \texttt{Makefile} has been generated, we create the
dependencies with the command:
\begin{verbatim}
% make depend
\end{verbatim}
This modifies the \texttt{Makefile} by attaching a (usually long)
list of files upon which other files depend. The purpose of this is to
reduce re-compilation if and when you start to modify the code. The
\texttt{make depend} command also creates links from the model source
to this directory. It is important to note that the \texttt{make
depend} stage will occasionally produce warnings or errors, since the
dependency parsing tool is unable to find all of the necessary header
files (e.g.\ \texttt{netcdf.inc}). In these circumstances, it is
usually OK to ignore the warnings/errors and proceed to the next
step.
391    
392  \item -platform=machine  Next one can compile the code using:
393    \begin{verbatim}
394    % make
395    \end{verbatim}
396    The {\tt make} command creates an executable called \texttt{mitgcmuv}.
397    Additional make ``targets'' are defined within the makefile to aid in
398    the production of adjoint and other versions of MITgcm.  On SMP
399    (shared multi-processor) systems, the build process can often be sped
400    up appreciably using the command:
401    \begin{verbatim}
402    % make -j 2
403    \end{verbatim}
404    where the ``2'' can be replaced with a number that corresponds to the
405    number of CPUs available.
406    
407  specifies the platform for which you want the makefile. In general, you  Now you are ready to run the model. General instructions for doing so are
408  won't need this option. \textit{genmake} will select the right machine for  given in section \ref{sec:runModel}. Here, we can run the model by
409  you (the one you're working on!). However, this option is useful if you have  first creating links to all the input files:
410  a choice of several compilers on one machine and you want to use the one  \begin{verbatim}
411  that is not the default (ex: \texttt{pgf77} instead of \texttt{f77} under  ln -s ../input/* .
412  Linux).  \end{verbatim}
413    and then calling the executable with:
414    \begin{verbatim}
415    ./mitgcmuv > output.txt
416    \end{verbatim}
417    where we are re-directing the stream of text output to the file
418    \texttt{output.txt}.
419    
420  \item -mpi  \subsection{Building/compiling the code elsewhere}
421    
422  this is used when you want to run the model in parallel processing mode  In the example above (section \ref{sec:buildingCode}) we built the
423  under mpi (see section on parallel computation for more details).  executable in the {\em input} directory of the experiment for
424    convenience. You can also configure and compile the code in other
425    locations, for example on a scratch disk with out having to copy the
426    entire source tree. The only requirement to do so is you have {\tt
427      genmake2} in your path or you know the absolute path to {\tt
428      genmake2}.
429    
430  \item -jam  The following sections outline some possible methods of organizing
431    your source and data.
432    
433  this is used when you want to run the model in parallel processing mode  \subsubsection{Building from the {\em ../code directory}}
 under jam (see section on parallel computation for more details).  
 \end{itemize}  

This is just as simple as building in the {\em input/} directory:
\begin{verbatim}
% cd verification/exp2/code
% ../../../tools/genmake2
% make depend
% make
\end{verbatim}
However, to run the model the executable ({\em mitgcmuv}) and input
files must be in the same place. If you only have one calculation to make:
\begin{verbatim}
% cd ../input
% cp ../code/mitgcmuv ./
% ./mitgcmuv > output.txt
\end{verbatim}
or if you will be making multiple runs with the same executable:
\begin{verbatim}
% cd ../
% cp -r input run1
% cp code/mitgcmuv run1
% cd run1
% ./mitgcmuv > output.txt
\end{verbatim}

\subsubsection{Building from a new directory}

Since the {\em input} directory contains input files it is often more
useful to keep {\em input} pristine and build in a new directory
within {\em verification/exp2/}:
\begin{verbatim}
% cd verification/exp2
% mkdir build
% cd build
% ../../../tools/genmake2 -mods=../code
% make depend
% make
\end{verbatim}
This builds the code exactly as before but this time you need to copy
either the executable or the input files or both in order to run the
model. For example,
\begin{verbatim}
% cp ../input/* ./
% ./mitgcmuv > output.txt
\end{verbatim}
or if you tend to make multiple runs with the same executable then
running in a new directory each time might be more appropriate:
\begin{verbatim}
% cd ../
% mkdir run1
% cp build/mitgcmuv run1/
% cp input/* run1/
% cd run1
% ./mitgcmuv > output.txt
\end{verbatim}

\subsubsection{Building on a scratch disk}

Model object files and output data can use up large amounts of disk
space so it is often the case that you will be operating on a large
scratch disk. Assuming the model source is in {\em ~/MITgcm} then the
following commands will build the model in {\em /scratch/exp2-run1}:
\begin{verbatim}
% cd /scratch/exp2-run1
% ~/MITgcm/tools/genmake2 -rootdir=~/MITgcm \
  -mods=~/MITgcm/verification/exp2/code
% make depend
% make
\end{verbatim}
To run the model here, you'll need the input files:
\begin{verbatim}
% cp ~/MITgcm/verification/exp2/input/* ./
% ./mitgcmuv > output.txt
\end{verbatim}

As before, you could build in one directory and make multiple runs of
the one experiment:
\begin{verbatim}
% cd /scratch/exp2
% mkdir build
% cd build
% ~/MITgcm/tools/genmake2 -rootdir=~/MITgcm \
  -mods=~/MITgcm/verification/exp2/code
% make depend
% make
% cd ../
% cp -r ~/MITgcm/verification/exp2/input run2
% cd run2
% ./mitgcmuv > output.txt
\end{verbatim}

\subsection{Using \texttt{genmake2}}
\label{sec:genmake}

To compile the code, first use the program \texttt{genmake2} (located
in the \texttt{tools} directory) to generate a Makefile.
\texttt{genmake2} is a shell script written to work with all
``sh''--compatible shells including bash v1, bash v2, and Bourne.
\texttt{genmake2} parses information from the following sources:
\begin{description}
\item[-] a {\em genmake\_local} file if one is found in the current
  directory
\item[-] command-line options
\item[-] an ``options file'' as specified by the command-line option
  \texttt{--optfile=/PATH/FILENAME}
\item[-] a {\em packages.conf} file (if one is found) with the
  specific list of packages to compile. The search path for
  file {\em packages.conf} is, first, the current directory and
  then each of the ``MODS'' directories in the given order (see below).
\end{description}
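Since \texttt{genmake2} reads {\em genmake\_local} as plain ``sh''
commands, a minimal sketch of such a file might look like the
following (the particular variable choices below are illustrative
assumptions, not recommendations):

```shell
# Hypothetical genmake_local file -- read by genmake2 from the
# current directory as sh-syntax assignments.
FC=g77                                               # FORTRAN compiler to use
OPTFILE=../../../tools/build_options/linux_ia32_g77  # optfile to load
```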

\subsubsection{Optfiles in \texttt{tools/build\_options} directory:}

The purpose of the optfiles is to provide all the compilation options
for particular ``platforms'' (where ``platform'' roughly means the
combination of the hardware and the compiler) and code configurations.
Given the combinations of possible compilers and library dependencies
({\it eg.} MPI and NetCDF) there may be numerous optfiles available
for a single machine.  The naming scheme for the majority of the
optfiles shipped with the code is
\begin{center}
  {\bf OS\_HARDWARE\_COMPILER }
\end{center}
where
\begin{description}
\item[OS] is the name of the operating system (generally the
  lower-case output of the {\tt 'uname'} command)
\item[HARDWARE] is a string that describes the CPU type and
  corresponds to output from the {\tt 'uname -m'} command:
  \begin{description}
  \item[ia32] is for ``x86'' machines such as i386, i486, i586, i686,
    and athlon
  \item[ia64] is for Intel IA64 systems (eg. Itanium, Itanium2)
  \item[amd64] is for AMD x86\_64 systems
  \item[ppc] is for Mac PowerPC systems
  \end{description}
\item[COMPILER] is the compiler name (generally, the name of the
  FORTRAN executable)
\end{description}
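Putting the pieces together, an optfile name under this scheme can be
assembled from the \texttt{uname} outputs plus the compiler name. A
small sketch, with example values hard-coded since the real values
depend on your machine (the compiler name ``g77'' here is just an
assumption):

```shell
# Example values standing in for `uname`, `uname -m`, and the
# FORTRAN compiler executable name:
os="linux"     # lower-case operating system name
hw="ia32"      # CPU-type string
fc="g77"       # compiler name
echo "${os}_${hw}_${fc}"
```

which would select the optfile \texttt{linux\_ia32\_g77}.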

In many cases, the default optfiles are sufficient and will result in
usable Makefiles.  However, for some machines or code configurations,
new ``optfiles'' must be written. To create a new optfile, it is
generally best to start with one of the defaults and modify it to suit
your needs.  Like \texttt{genmake2}, the optfiles are all written
using a simple ``sh''--compatible syntax.  While nearly all variables
used within \texttt{genmake2} may be specified in the optfiles, the
critical ones that should be defined are:

\begin{description}
\item[FC] the FORTRAN compiler (executable) to use
\item[DEFINES] the command-line DEFINE options passed to the compiler
\item[CPP] the C pre-processor to use
\item[NOOPTFLAGS] option flags for special files that should not be
  optimized
\end{description}

For example, the optfile for a typical Red Hat Linux machine (``ia32''
architecture) using the GCC (g77) compiler is
\begin{verbatim}
FC=g77
DEFINES='-D_BYTESWAPIO -DWORDLENGTH=4'
CPP='cpp  -traditional -P'
NOOPTFLAGS='-O0'
#  For IEEE, use the "-ffloat-store" option
if test "x$IEEE" = x ; then
    FFLAGS='-Wimplicit -Wunused -Wuninitialized'
    FOPTIM='-O3 -malign-double -funroll-loops'
else
    FFLAGS='-Wimplicit -Wunused -ffloat-store'
    FOPTIM='-O0 -malign-double'
fi
\end{verbatim}

If you write an optfile for an unrepresented machine or compiler, you
are strongly encouraged to submit the optfile to the MITgcm project
for inclusion.  Please send the file to the
\begin{rawhtml} <A href="mailto:MITgcm-support@mitgcm.org"> \end{rawhtml}
\begin{center}
  MITgcm-support@mitgcm.org
\end{center}
\begin{rawhtml} </A> \end{rawhtml}
mailing list.

\subsubsection{Command-line options:}

In addition to the optfiles, \texttt{genmake2} supports a number of
helpful command-line options.  A complete list of these options can be
obtained from:
\begin{verbatim}
% genmake2 -h
\end{verbatim}

The most important command-line options are:
\begin{description}

\item[\texttt{--optfile=/PATH/FILENAME}] specifies the optfile that
  should be used for a particular build.

  If no ``optfile'' is specified (either through the command line or the
  MITGCM\_OPTFILE environment variable), genmake2 will try to make a
  reasonable guess from the list provided in {\em
    tools/build\_options}.  The method used for making this guess is
  to first determine the combination of operating system and hardware
  (eg. ``linux\_ia32'') and then find a working FORTRAN compiler within
  the user's path.  When these three items have been identified,
  genmake2 will try to find an optfile that has a matching name.

\item[\texttt{--mods='DIR1 DIR2 DIR3 ...'}] specifies a list of
  directories containing ``modifications''.  These directories contain
  files with names that may (or may not) exist in the main MITgcm
  source tree but will be overridden by any identically-named sources
  within the ``MODS'' directories.

  The order of precedence for this ``name-hiding'' is as follows:
  \begin{itemize}
  \item ``MODS'' directories (in the order given)
  \item Packages either explicitly specified or provided by default
    (in the order given)
  \item Packages included due to package dependencies (in the order
    that the package dependencies are parsed)
  \item The ``standard dirs'' (which may have been specified by the
    ``-standarddirs'' option)
  \end{itemize}

\item[\texttt{--pgroups=/PATH/FILENAME}] specifies the file
  where package groups are defined. If not set, the package-groups
  definition will be read from {\em pkg/pkg\_groups}.
  It also contains the default list of packages (defined
  as the group ``{\it default\_pkg\_list}''), which is used
  when no specific package list ({\em packages.conf})
  is found in the current directory or in any ``MODS'' directory.

\item[\texttt{--pdepend=/PATH/FILENAME}] specifies the dependency file
  used for packages.

  If not specified, the default dependency file {\em pkg/pkg\_depend}
  is used.  The syntax for this file is parsed on a line-by-line basis
  where each line contains either a comment (``\#'') or a simple
  ``PKGNAME1 (+|-)PKGNAME2'' pairwise rule where the ``+'' or ``-'' symbol
  specifies a ``must be used with'' or a ``must not be used with''
  relationship, respectively.  If no rule is specified, then it is
  assumed that the two packages are compatible and will function
  either with or without each other.

\item[\texttt{--adof=/path/to/file}] specifies the ``adjoint'' or
  automatic differentiation options file to be used.  The file is
  analogous to the ``optfile'' defined above but it specifies
  information for the AD build process.

  The default file is located in {\em
    tools/adjoint\_options/adjoint\_default} and it defines the ``TAF''
  and ``TAMC'' compilers.  An alternate version is also available at
  {\em tools/adjoint\_options/adjoint\_staf} that selects the newer
  ``STAF'' compiler.  As with any compilers, it is helpful to have their
  directories listed in your {\tt \$PATH} environment variable.

\item[\texttt{--mpi}] This option enables certain MPI features (using
  CPP \texttt{\#define}s) within the code and is necessary for MPI
  builds (see Section \ref{sec:mpi-build}).

\item[\texttt{--make=/path/to/gmake}] Due to the poor handling of
  soft-links and other bugs common with the \texttt{make} versions
  provided by commercial Unix vendors, GNU \texttt{make} (sometimes
  called \texttt{gmake}) should be preferred.  This option provides a
  means for specifying the make executable to be used.

\item[\texttt{--bash=/path/to/sh}] On some (usually older UNIX)
  machines, the ``bash'' shell is unavailable.  To run on these
  systems, \texttt{genmake2} can be invoked using an ``sh'' (that is,
  a Bourne, POSIX, or compatible) shell.  The syntax in these
  circumstances is:
  \begin{center}
    \texttt{\%  /bin/sh genmake2 -bash=/bin/sh [...options...]}
  \end{center}
  where \texttt{/bin/sh} can be replaced with the full path and name
  of the desired shell.

\end{description}
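The pairwise-rule syntax described under \texttt{--pdepend} can be
illustrated with a short, entirely hypothetical dependency-file
fragment (the package names below are invented for illustration and
do not correspond to real MITgcm packages):

```text
# pkgA must be used with pkgB
pkgA +pkgB
# pkgC must not be used with pkgD
pkgC -pkgD
```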


\subsection{Building with MPI}
\label{sec:mpi-build}

Building MITgcm to use MPI libraries can be complicated due to the
variety of different MPI implementations available, their dependencies
or interactions with different compilers, and their often ad-hoc
locations within file systems.  For these reasons, it is generally a
good idea to start by finding and reading the documentation for your
machine(s) and, if necessary, seeking help from your local systems
administrator.

The steps for building MITgcm with MPI support are:
\begin{enumerate}

\item Determine the locations of your MPI-enabled compiler and/or MPI
  libraries and put them into an options file as described in Section
  \ref{sec:genmake}.  One can start with one of the examples in:
  \begin{rawhtml} <A
    href="http://mitgcm.org/viewvc/MITgcm/MITgcm/tools/build_options/">
  \end{rawhtml}
  \begin{center}
    \texttt{MITgcm/tools/build\_options/}
  \end{center}
  \begin{rawhtml} </A> \end{rawhtml}
  such as \texttt{linux\_ia32\_g77+mpi\_cg01} or
  \texttt{linux\_ia64\_efc+mpi} and then edit it to suit the machine at
  hand.  You may need help from your user guide or local systems
  administrator to determine the exact location of the MPI libraries.
  If libraries are not installed, MPI implementations and related
  tools are available including:
  \begin{itemize}
  \item \begin{rawhtml} <A
      href="http://www-unix.mcs.anl.gov/mpi/mpich/">
    \end{rawhtml}
    MPICH
    \begin{rawhtml} </A> \end{rawhtml}

  \item \begin{rawhtml} <A
      href="http://www.lam-mpi.org/">
    \end{rawhtml}
    LAM/MPI
    \begin{rawhtml} </A> \end{rawhtml}

  \item \begin{rawhtml} <A
      href="http://www.osc.edu/~pw/mpiexec/">
    \end{rawhtml}
    MPIexec
    \begin{rawhtml} </A> \end{rawhtml}
  \end{itemize}

\item Build the code with the \texttt{genmake2} \texttt{-mpi} option
  (see Section \ref{sec:genmake}) using commands such as:
{\footnotesize \begin{verbatim}
  %  ../../../tools/genmake2 -mods=../code -mpi -of=YOUR_OPTFILE
  %  make depend
  %  make
\end{verbatim} }

\item Run the code with the appropriate MPI ``run'' or ``exec''
  program provided with your particular implementation of MPI.
  Typical MPI packages such as MPICH will use something like:
\begin{verbatim}
  %  mpirun -np 4 -machinefile mf ./mitgcmuv
\end{verbatim}
  Slightly more complicated scripts may be needed for many machines
  since execution of the code may be controlled by both the MPI
  library and a job scheduling and queueing system such as PBS,
  LoadLeveller, Condor, or any of a number of similar tools.  A few
  example scripts (those used for our \begin{rawhtml} <A
    href="http://mitgcm.org/public/testing.html"> \end{rawhtml}regular
  verification runs\begin{rawhtml} </A> \end{rawhtml}) are available
  at:
  \begin{rawhtml} <A
    href="http://mitgcm.org/viewvc/MITgcm/MITgcm/tools/example_scripts/">
  \end{rawhtml}
  {\footnotesize \tt
    http://mitgcm.org/viewvc/MITgcm/MITgcm/tools/example\_scripts/ }
  \begin{rawhtml} </A> \end{rawhtml}
  or at:
  \begin{rawhtml} <A
    href="http://mitgcm.org/viewvc/MITgcm/MITgcm_contrib/test_scripts/">
  \end{rawhtml}
  {\footnotesize \tt
    http://mitgcm.org/viewvc/MITgcm/MITgcm\_contrib/test\_scripts/ }
  \begin{rawhtml} </A> \end{rawhtml}

\end{enumerate}

An example of the above process on the MITgcm cluster (``cg01'') using
the GNU g77 compiler and the mpich MPI library is:

{\footnotesize \begin{verbatim}
  %  cd MITgcm/verification/exp5
  %  mkdir build
  %  cd build
  %  ../../../tools/genmake2 -mpi -mods=../code \
       -of=../../../tools/build_options/linux_ia32_g77+mpi_cg01
  %  make depend
  %  make
  %  cd ../input
  %  /usr/local/pkg/mpi/mpi-1.2.4..8a-gm-1.5/g77/bin/mpirun.ch_gm \
       -machinefile mf --gm-kill 5 -v -np 2  ../build/mitgcmuv
\end{verbatim} }

\section[Running MITgcm]{Running the model in prognostic mode}
\label{sec:runModel}
\begin{rawhtml}
<!-- CMIREDIR:runModel: -->
\end{rawhtml}

If compilation finished successfully (section \ref{sec:buildingCode})
then an executable called \texttt{mitgcmuv} will now exist in the
local directory.

To run the model as a single process (\textit{ie.} not in parallel)
simply type:
\begin{verbatim}
% ./mitgcmuv
\end{verbatim}
The ``./'' is a safe-guard to make sure you use the local executable
in case you have others that exist in your path (surely odd if you
do!). The above command will spew out many lines of text output to
your screen.  This output contains details such as parameter values as
well as diagnostics such as mean kinetic energy, largest CFL number,
etc. It is worth keeping this text output with the binary output so we
normally re-direct the \texttt{stdout} stream as follows:
\begin{verbatim}
% ./mitgcmuv > output.txt
\end{verbatim}
In the event that the model encounters an error and stops, it is very
helpful to include the last few lines of this \texttt{output.txt} file
along with the (\texttt{stderr}) error message within any bug reports.

For the example experiments in \texttt{verification}, an example of the
output is kept in \texttt{results/output.txt} for comparison. You can
compare your \texttt{output.txt} with the corresponding one for that
experiment to check that the set-up works.


\subsection{Output files}

The model produces various output files and, when using \texttt{mnc},
sometimes even directories.  Depending upon the I/O package(s)
selected at compile time (either \texttt{mdsio} or \texttt{mnc} or
both as determined by \texttt{code/packages.conf}) and the run-time
flags set (in \texttt{input/data.pkg}), the following output may
appear.

\subsubsection{MDSIO output files}

The ``traditional'' output files are generated by the \texttt{mdsio}
package.  At a minimum, the instantaneous ``state'' of the model is
written out, which is made of the following files:

\begin{itemize}
\item \texttt{U.00000nIter} - zonal component of velocity field (m/s
  and positive eastward).

\item \texttt{V.00000nIter} - meridional component of velocity field
  (m/s and positive northward).

\item \texttt{W.00000nIter} - vertical component of velocity field
  (ocean: m/s and positive upward, atmosphere: Pa/s and positive
  towards increasing pressure i.e. downward).

\item \texttt{T.00000nIter} - potential temperature (ocean:
  $^{\circ}\mathrm{C}$, atmosphere: K).

\item \texttt{S.00000nIter} - ocean: salinity (psu), atmosphere: water
  vapor (g/kg).

\item \texttt{Eta.00000nIter} - ocean: surface elevation (m),
  atmosphere: surface pressure anomaly (Pa).
\end{itemize}

The string \texttt{00000nIter} consists of ten digits that specify the
iteration number at which the output is written out. For example,
\texttt{U.0000000300} is the zonal velocity at iteration 300.
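The suffix is simply the iteration number zero-padded to ten digits,
so a given file name can be reconstructed with a standard
\texttt{printf}:

```shell
# Zero-pad iteration 300 to ten digits, as in the file name U.0000000300
printf 'U.%10.10d\n' 300
```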

In addition, a ``pickup'' or ``checkpoint'' file called:

\begin{itemize}
\item \texttt{pickup.00000nIter}
\end{itemize}

is written out. This file represents the state of the model in a condensed
form and is used for restarting the integration. If the C-D scheme is used,
there is an additional ``pickup'' file:

\begin{itemize}
\item \texttt{pickup\_cd.00000nIter}
\end{itemize}

containing the D-grid velocity data that has to be written out as well
in order to restart the integration. Rolling checkpoint files are the same
as the pickup files but are named differently. Their names contain the string
\texttt{ckptA} or \texttt{ckptB} instead of \texttt{00000nIter}. They can be
used to restart the model but are overwritten every other time they are
output to save disk space during long integrations.

\subsubsection{Looking at the output}

All the model data are written according to a ``meta/data'' file format.
Each variable is associated with two files with suffix names \texttt{.data}
and \texttt{.meta}. The \texttt{.data} file contains the data written in
binary form (big-endian by default). The \texttt{.meta} file is a
``header'' file that contains information about the size and the structure
of the \texttt{.data} file. This way of organizing the output is
particularly useful when running multi-processor calculations. The base
version of the model includes a few Matlab utilities to read output files
written in this format. The Matlab scripts are located in the directory
\texttt{utils/matlab} under the root tree. The script \texttt{rdmds.m} reads
the data. Look at the comments inside the script to see how to use it.
   
 \section{Code structure}  
   
 \section{Doing it yourself: customizing the code}  
   
\subsection{Configuration and setup}
   
When you are ready to run the model in the configuration you want, the
easiest approach is to use and adapt the setup of the case study experiment
(described previously) that is closest to your configuration, so that the
amount of setup is minimized. In this section, we focus on the setup
relative to the ``numerical model'' part of the code (the setup relative to
the ``execution environment'' part is covered in the parallel
implementation section) and on the variables and parameters that you are
likely to change.

The CPP keys relative to the ``numerical model'' part of the code are all
defined and set in the file \texttt{CPP\_OPTIONS.h} in the directory
\texttt{model/inc} or in one of the \texttt{code} directories of the case
study experiments under \texttt{verification}. The model parameters are
defined and declared in the file \texttt{model/inc/PARAMS.h} and their
default values are set in the routine \texttt{model/src/set\_defaults.F}.
The default values can be modified in the namelist file \texttt{data},
which needs to be located in the directory where you will run the model.
The parameters are initialized in the routine
\texttt{model/src/ini\_parms.F}. Look at this routine to see in which part
of the namelist the parameters are located.
   
In what follows, the parameters are grouped into categories related to the
computational domain, the equations solved in the model, and the simulation
controls.
   
 \subsubsection{Computational domain, geometry and time-discretization}  
   
 \begin{itemize}  
 \item dimensions  
 \end{itemize}  
   
The number of points in the x, y, and r directions are represented by the
variables \textbf{sNx}, \textbf{sNy}, and \textbf{Nr} respectively, which
are declared and set in the file \texttt{model/inc/SIZE.h}. (Again, this
assumes a mono-processor calculation. For multi-processor calculations see
the section on parallel implementation.)
   
 \begin{itemize}  
 \item grid  
 \end{itemize}  
   
 Three different grids are available: cartesian, spherical polar, and  
 curvilinear (including the cubed sphere). The grid is set through the  
 logical variables \textbf{usingCartesianGrid}\textit{, }\textbf{%  
usingSphericalPolarGrid}\textit{, }and \textbf{%
usingCurvilinearGrid}\textit{. }In the case of spherical and curvilinear
grids, the southern boundary is defined through the variable \textbf{phiMin}%
\textit{\ }which corresponds to the latitude of the southernmost cell face
 (in degrees). The resolution along the x and y directions is controlled by  
 the 1D arrays \textbf{delx}\textit{\ }and \textbf{dely}\textit{\ }(in meters  
 in the case of a cartesian grid, in degrees otherwise). The vertical grid  
 spacing is set through the 1D array \textbf{delz }for the ocean (in meters)  
or \textbf{delp}\textit{\ }for the atmosphere (in Pa). The variable \textbf{%
Ro\_SeaLevel} represents the standard position of sea level in the ``R''
coordinate. This is typically set to 0m for the ocean (default value) and 10$%
^{5}$Pa for the atmosphere. For the atmosphere, also set the logical
variable \textbf{groundAtK1} to '.\texttt{TRUE}.', which puts the first level
(k=1) at the lower boundary (ground).
   
 For the cartesian grid case, the Coriolis parameter $f$ is set through the  
 variables \textbf{f0}\textit{\ }and \textbf{beta}\textit{\ }which correspond  
 to the reference Coriolis parameter (in s$^{-1}$) and $\frac{\partial f}{%  
 \partial y}$(in m$^{-1}$s$^{-1}$) respectively. If \textbf{beta }\textit{\ }%  
 is set to a nonzero value, \textbf{f0}\textit{\ }is the value of $f$ at the  
 southern edge of the domain.  
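
As a hedged example, a uniform cartesian grid of 20 points in each
horizontal direction with 5~km spacing, 10 vertical levels of 50~m, and a
mid-latitude beta plane might be specified as follows (values illustrative):
\begin{verbatim}
 &PARM01
 f0=1.E-4,
 beta=1.E-11,
 &
 &PARM04
 usingCartesianGrid=.TRUE.,
 delx=20*5.E3,
 dely=20*5.E3,
 delz=10*50.,
 &
\end{verbatim}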
   
 \begin{itemize}  
 \item topography - full and partial cells  
 \end{itemize}  
   
 The domain bathymetry is read from a file that contains a 2D (x,y) map of  
 depths (in m) for the ocean or pressures (in Pa) for the atmosphere. The  
 file name is represented by the variable \textbf{bathyFile}\textit{. }The  
 file is assumed to contain binary numbers giving the depth (pressure) of the  
 model at each grid cell, ordered with the x coordinate varying fastest. The  
 points are ordered from low coordinate to high coordinate for both axes. The  
 model code applies without modification to enclosed, periodic, and double  
 periodic domains. Periodicity is assumed by default and is suppressed by  
 setting the depths to 0m for the cells at the limits of the computational  
 domain (note: not sure this is the case for the atmosphere). The precision  
 with which to read the binary data is controlled by the integer variable  
 \textbf{readBinaryPrec }which can take the value \texttt{32} (single  
 precision) or \texttt{64} (double precision). See the matlab program \textit{%  
 gendata.m }in the \textit{input }directories under \textit{verification }to  
 see how the bathymetry files are generated for the case study experiments.  
   
 To use the partial cell capability, the variable \textbf{hFacMin}\textit{\ }%  
 needs to be set to a value between 0 and 1 (it is set to 1 by default)  
 corresponding to the minimum fractional size of the cell. For example if the  
 bottom cell is 500m thick and \textbf{hFacMin}\textit{\ }is set to 0.1, the  
 actual thickness of the cell (i.e. used in the code) can cover a range of  
 discrete values 50m apart from 50m to 500m depending on the value of the  
 bottom depth (in \textbf{bathyFile}) at this point.  
   
Note that the bottom depths (or pressures) need not coincide with the model's
 levels as deduced from \textbf{delz}\textit{\ }or\textit{\ }\textbf{delp}%  
 \textit{. }The model will interpolate the numbers in \textbf{bathyFile}%  
 \textit{\ }so that they match the levels obtained from \textbf{delz}\textit{%  
 \ }or\textit{\ }\textbf{delp}\textit{\ }and \textbf{hFacMin}\textit{. }  
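
For example, reading a double-precision bathymetry file with partial cells
enabled might be set up as follows (the file name \texttt{topog.bin} is
hypothetical):
\begin{verbatim}
 &PARM01
 hFacMin=0.1,
 readBinaryPrec=64,
 &
 &PARM05
 bathyFile='topog.bin',
 &
\end{verbatim}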
   
(Note: the atmospheric case is somewhat more complicated than what is
written here; a fuller description is to come.)

\begin{itemize}
\item time-discretization
\end{itemize}
   
 The time steps are set through the real variables \textbf{deltaTMom }and  
 \textbf{deltaTtracer }(in s) which represent the time step for the momentum  
 and tracer equations, respectively. For synchronous integrations, simply set  
 the two variables to the same value (or you can prescribe one time step only  
through the variable \textbf{deltaT}). The Adams-Bashforth stabilizing
parameter is set through the variable \textbf{abEps }(dimensionless). The
staggered baroclinic time stepping can be activated by setting the logical
variable \textbf{staggerTimeStep }to '.\texttt{TRUE}.'.
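
A synchronous integration with a 20-minute time step might, for example, be
requested as follows (values illustrative):
\begin{verbatim}
 &PARM03
 deltaT=1200.,
 abEps=0.1,
 staggerTimeStep=.FALSE.,
 &
\end{verbatim}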
   
 \subsubsection{Equation of state}  
   
 First, because the model equations are written in terms of perturbations, a  
 reference thermodynamic state needs to be specified. This is done through  
 the 1D arrays \textbf{tRef}\textit{\ }and \textbf{sRef}. \textbf{tRef }%  
specifies the reference potential temperature profile (in $^{o}$C for
the ocean and K for the atmosphere) starting from the level
k=1. Similarly, \textbf{sRef}\textit{\ }specifies the reference salinity
 profile (in ppt) for the ocean or the reference specific humidity profile  
 (in g/kg) for the atmosphere.  
   
 The form of the equation of state is controlled by the character variables  
 \textbf{buoyancyRelation}\textit{\ }and \textbf{eosType}\textit{. }\textbf{%  
 buoyancyRelation}\textit{\ }is set to '\texttt{OCEANIC}' by default and  
 needs to be set to '\texttt{ATMOSPHERIC}' for atmosphere simulations. In  
 this case, \textbf{eosType}\textit{\ }must be set to '\texttt{IDEALGAS}'.  
 For the ocean, two forms of the equation of state are available: linear (set  
\textbf{eosType}\textit{\ }to '\texttt{LINEAR}') and a polynomial
approximation to the full nonlinear equation (set \textbf{eosType}\textit{\
}to '\texttt{POLYNOMIAL}'). In the linear case, you need to specify the
 thermal and haline expansion coefficients represented by the variables  
 \textbf{tAlpha}\textit{\ }(in K$^{-1}$) and \textbf{sBeta}\textit{\ }(in ppt$%  
 ^{-1}$). For the nonlinear case, you need to generate a file of polynomial  
 coefficients called \textit{POLY3.COEFFS. }To do this, use the program  
 \textit{utils/knudsen2/knudsen2.f }under the model tree (a Makefile is  
 available in the same directory and you will need to edit the number and the  
 values of the vertical levels in \textit{knudsen2.f }so that they match  
those of your configuration).
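
A linear equation of state for a hypothetical four-level ocean might, for
instance, be specified as follows (values illustrative):
\begin{verbatim}
 &PARM01
 tRef=20.,15.,10.,5.,
 sRef=4*35.,
 eosType='LINEAR',
 tAlpha=2.E-4,
 sBeta=7.4E-4,
 &
\end{verbatim}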
   
 \subsubsection{Momentum equations}  
   
In this section, we focus for now only on the parameters that you are likely
to change, e.g. those related to forcing and dissipation.
 The details relevant to the vector-invariant form of the equations and the  
 various advection schemes are not covered for the moment. We assume that you  
 use the standard form of the momentum equations (i.e. the flux-form) with  
 the default advection scheme. Also, there are a few logical variables that  
 allow you to turn on/off various terms in the momentum equation. These  
 variables are called \textbf{momViscosity, momAdvection, momForcing,  
useCoriolis, momPressureForcing, momStepping}\textit{, }and \textbf{%
metricTerms}, and are assumed to be set to '.\texttt{TRUE}.' here.
 Look at the file \textit{model/inc/PARAMS.h }for a precise definition of  
 these variables.  
   
 \begin{itemize}  
 \item initialization  
 \end{itemize}  
   
 The velocity components are initialized to 0 unless the simulation is  
 starting from a pickup file (see section on simulation control parameters).  
   
 \begin{itemize}  
 \item forcing  
 \end{itemize}  
   
 This section only applies to the ocean. You need to generate wind-stress  
 data into two files \textbf{zonalWindFile}\textit{\ }and \textbf{%  
 meridWindFile }corresponding to the zonal and meridional components of the  
 wind stress, respectively (if you want the stress to be along the direction  
 of only one of the model horizontal axes, you only need to generate one  
 file). The format of the files is similar to the bathymetry file. The zonal  
 (meridional) stress data are assumed to be in Pa and located at U-points  
 (V-points). As for the bathymetry, the precision with which to read the  
 binary data is controlled by the variable \textbf{readBinaryPrec}.\textbf{\ }  
 See the matlab program \textit{gendata.m }in the \textit{input }directories  
 under \textit{verification }to see how simple analytical wind forcing data  
 are generated for the case study experiments.  
   
 There is also the possibility of prescribing time-dependent periodic  
 forcing. To do this, concatenate the successive time records into a single  
 file (for each stress component) ordered in a (x, y, t) fashion and set the  
 following variables: \textbf{periodicExternalForcing }to '.\texttt{TRUE}.',  
\textbf{externForcingPeriod }to the period (in s) with which the forcing
 varies (typically 1 month), and \textbf{externForcingCycle }to the repeat  
 time (in s) of the forcing (typically 1 year -- note: \textbf{%  
 externForcingCycle }must be a multiple of \textbf{externForcingPeriod}).  
 With these variables set up, the model will interpolate the forcing linearly  
 at each iteration.  
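
For instance, monthly wind-stress records repeating over a 360-day year
might be prescribed as follows (the file names are hypothetical, and one
month is taken here as 30 days = 2592000~s):
\begin{verbatim}
 &PARM03
 periodicExternalForcing=.TRUE.,
 externForcingPeriod=2592000.,
 externForcingCycle=31104000.,
 &
 &PARM05
 zonalWindFile='windx.bin',
 meridWindFile='windy.bin',
 &
\end{verbatim}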

\begin{itemize}
\item dissipation
\end{itemize}
   
 The lateral eddy viscosity coefficient is specified through the variable  
 \textbf{viscAh}\textit{\ }(in m$^{2}$s$^{-1}$). The vertical eddy viscosity  
 coefficient is specified through the variable \textbf{viscAz }(in m$^{2}$s$%  
 ^{-1}$) for the ocean and \textbf{viscAp}\textit{\ }(in Pa$^{2}$s$^{-1}$)  
 for the atmosphere. The vertical diffusive fluxes can be computed implicitly  
 by setting the logical variable \textbf{implicitViscosity }to '.\texttt{TRUE}%  
 .'. In addition, biharmonic mixing can be added as well through the variable  
 \textbf{viscA4}\textit{\ }(in m$^{4}$s$^{-1}$). On a spherical polar grid,  
you might also need to set the variable \textbf{cosPower}, which is set to 0
by default and represents the power of the cosine of latitude by which the
viscosity is multiplied. Slip or no-slip conditions at lateral and bottom boundaries are
 specified through the logical variables \textbf{no\_slip\_sides}\textit{\ }%  
 and \textbf{no\_slip\_bottom}. If set to '\texttt{.FALSE.}', free-slip  
 boundary conditions are applied. If no-slip boundary conditions are applied  
 at the bottom, a bottom drag can be applied as well. Two forms are  
 available: linear (set the variable \textbf{bottomDragLinear}\textit{\ }in s$%  
 ^{-1}$) and quadratic (set the variable \textbf{bottomDragQuadratic}\textit{%  
 \ }in m$^{-1}$).  
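
A typical coarse-resolution ocean setup might use viscosity values such as
the following (illustrative only):
\begin{verbatim}
 &PARM01
 viscAh=1.E5,
 viscAz=1.E-3,
 no_slip_sides=.TRUE.,
 no_slip_bottom=.TRUE.,
 implicitViscosity=.TRUE.,
 &
\end{verbatim}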
   
 The Fourier and Shapiro filters are described elsewhere.  
   
 \begin{itemize}  
 \item C-D scheme  
 \end{itemize}  
   
 If you run at a sufficiently coarse resolution, you will need the C-D scheme  
 for the computation of the Coriolis terms. The variable\textbf{\ tauCD},  
 which represents the C-D scheme coupling timescale (in s) needs to be set.  
   
 \begin{itemize}  
 \item calculation of pressure/geopotential  
 \end{itemize}  
   
 First, to run a non-hydrostatic ocean simulation, set the logical variable  
 \textbf{nonHydrostatic} to '.\texttt{TRUE}.'. The pressure field is then  
 inverted through a 3D elliptic equation. (Note: this capability is not  
 available for the atmosphere yet.) By default, a hydrostatic simulation is  
 assumed and a 2D elliptic equation is used to invert the pressure field. The  
 parameters controlling the behaviour of the elliptic solvers are the  
 variables \textbf{cg2dMaxIters}\textit{\ }and \textbf{cg2dTargetResidual }%  
 for the 2D case and \textbf{cg3dMaxIters}\textit{\ }and \textbf{%  
cg3dTargetResidual }for the 3D case. You probably won't need to alter the
default values.
   
 For the calculation of the surface pressure (for the ocean) or surface  
 geopotential (for the atmosphere) you need to set the logical variables  
 \textbf{rigidLid} and \textbf{implicitFreeSurface}\textit{\ }(set one to '.%  
 \texttt{TRUE}.' and the other to '.\texttt{FALSE}.' depending on how you  
 want to deal with the ocean upper or atmosphere lower boundary).  
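
An implicit free-surface, hydrostatic ocean run might, for example, be
configured as follows (the solver settings are shown only to illustrate
where they live; the defaults are usually adequate):
\begin{verbatim}
 &PARM01
 implicitFreeSurface=.TRUE.,
 rigidLid=.FALSE.,
 &
 &PARM02
 cg2dMaxIters=1000,
 cg2dTargetResidual=1.E-13,
 &
\end{verbatim}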
   
 \subsubsection{Tracer equations}  
   
This section covers the tracer equations, i.e. the potential temperature
 equation and the salinity (for the ocean) or specific humidity (for the  
 atmosphere) equation. As for the momentum equations, we only describe for  
 now the parameters that you are likely to change. The logical variables  
 \textbf{tempDiffusion}\textit{, }\textbf{tempAdvection}\textit{, }\textbf{%  
 tempForcing}\textit{,} and \textbf{tempStepping} allow you to turn on/off  
 terms in the temperature equation (same thing for salinity or specific  
 humidity with variables \textbf{saltDiffusion}\textit{, }\textbf{%  
 saltAdvection}\textit{\ }etc). These variables are all assumed here to be  
 set to '.\texttt{TRUE}.'. Look at file \textit{model/inc/PARAMS.h }for a  
 precise definition.  
   
 \begin{itemize}  
 \item initialization  
 \end{itemize}  

The initial tracer data can be contained in the binary files \textbf{%
hydrogThetaFile }and \textbf{hydrogSaltFile}. These files should contain 3D
data ordered in an (x, y, r) fashion with k=1 as the first vertical level.
If no file names are provided, the tracers are then initialized with the
values of \textbf{tRef }and \textbf{sRef }mentioned above (in the equation
of state section). In this case, the initial tracer data are uniform in x
and y for each depth level.

\begin{itemize}
\item forcing
\end{itemize}

This part is more relevant for the ocean; the procedure for the atmosphere
is not completely stabilized at the moment.
   
A combination of flux data and relaxation terms can be used for driving
the tracer equations. For potential temperature, heat flux data (in W/m$%
 ^{2}$) can be stored in the 2D binary file \textbf{surfQfile}\textit{. }%  
 Alternatively or in addition, the forcing can be specified through a  
relaxation term. The SST data to which the model surface temperatures are
restored are assumed to be stored in the 2D binary file \textbf{%
thetaClimFile}\textit{. }The corresponding relaxation time scale coefficient
 is set through the variable \textbf{tauThetaClimRelax}\textit{\ }(in s). The  
 same procedure applies for salinity with the variable names \textbf{EmPmRfile%  
 }\textit{, }\textbf{saltClimFile}\textit{, }and \textbf{tauSaltClimRelax}%  
 \textit{\ }for freshwater flux (in m/s) and surface salinity (in ppt) data  
 files and relaxation time scale coefficient (in s), respectively. Also for  
 salinity, if the CPP key \textbf{USE\_NATURAL\_BCS} is turned on, natural  
 boundary conditions are applied i.e. when computing the surface salinity  
 tendency, the freshwater flux is multiplied by the model surface salinity  
 instead of a constant salinity value.  
   
 As for the other input files, the precision with which to read the data is  
 controlled by the variable \textbf{readBinaryPrec}. Time-dependent, periodic  
 forcing can be applied as well following the same procedure used for the  
 wind forcing data (see above).  
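
For example, surface heat flux forcing combined with restoring of surface
temperature and salinity on 60-day and 90-day time scales might look like
this (file names hypothetical):
\begin{verbatim}
 &PARM03
 tauThetaClimRelax=5184000.,
 tauSaltClimRelax=7776000.,
 &
 &PARM05
 surfQfile='qnet.bin',
 thetaClimFile='sst.bin',
 EmPmRfile='empmr.bin',
 saltClimFile='sss.bin',
 &
\end{verbatim}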
   
 \begin{itemize}  
 \item dissipation  
 \end{itemize}  

Lateral eddy diffusivities for temperature and salinity/specific humidity
are specified through the variables \textbf{diffKhT }and \textbf{diffKhS }%
(in m$^{2}$/s). Vertical eddy diffusivities are specified through the
 variables \textbf{diffKzT }and \textbf{diffKzS }(in m$^{2}$/s) for the ocean  
 and \textbf{diffKpT }and \textbf{diffKpS }(in Pa$^{2}$/s) for the  
 atmosphere. The vertical diffusive fluxes can be computed implicitly by  
 setting the logical variable \textbf{implicitDiffusion }to '.\texttt{TRUE}%  
 .'. In addition, biharmonic diffusivities can be specified as well through  
 the coefficients \textbf{diffK4T }and \textbf{diffK4S }(in m$^{4}$/s). Note  
 that the cosine power scaling (specified through \textbf{cosPower }- see the  
 momentum equations section) is applied to the tracer diffusivities  
 (Laplacian and biharmonic) as well. The Gent and McWilliams parameterization  
 for oceanic tracers is described in the package section. Finally, note that  
 tracers can be also subject to Fourier and Shapiro filtering (see the  
 corresponding section on these filters).  
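
Illustrative tracer mixing coefficients for a coarse ocean configuration
could be:
\begin{verbatim}
 &PARM01
 diffKhT=1.E3,
 diffKzT=3.E-5,
 diffKhS=1.E3,
 diffKzS=3.E-5,
 implicitDiffusion=.TRUE.,
 &
\end{verbatim}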

\begin{itemize}
\item ocean convection
\end{itemize}

Two options are available to parameterize ocean convection: one is to use
the convective adjustment scheme. In this case, you need to set the variable
 \textbf{cadjFreq}, which represents the frequency (in s) with which the  
 adjustment algorithm is called, to a non-zero value (if set to a negative  
 value by the user, the model will set it to the tracer time step). The other  
 option is to parameterize convection with implicit vertical diffusion. To do  
 this, set the logical variable \textbf{implicitDiffusion }to '.\texttt{TRUE}%  
 .' and the real variable \textbf{ivdc\_kappa }to a value (in m$^{2}$/s) you  
 wish the tracer vertical diffusivities to have when mixing tracers  
vertically due to static instabilities. Note that \textbf{cadjFreq }and
\textbf{ivdc\_kappa }cannot both have non-zero values.
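
For example, to represent convection through enhanced implicit vertical
diffusion (rather than convective adjustment), one might set (value
illustrative):
\begin{verbatim}
 &PARM01
 implicitDiffusion=.TRUE.,
 ivdc_kappa=10.,
 &
\end{verbatim}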
   
 \subsubsection{Simulation controls}  
   
The model ``clock'' is defined by the variable \textbf{deltaTClock }(in s)
 which determines the IO frequencies and is used in tagging output.  
 Typically, you will set it to the tracer time step for accelerated runs  
 (otherwise it is simply set to the default time step \textbf{deltaT}).  
 Frequency of checkpointing and dumping of the model state are referenced to  
 this clock (see below).  

\begin{itemize}
\item run duration
\end{itemize}

The beginning of a simulation is set by specifying a start time (in s)
through the real variable \textbf{startTime }or by specifying an initial
iteration number through the integer variable \textbf{nIter0}. If these
variables are set to nonzero values, the model will look for a ``pickup''
file \textit{pickup.0000nIter0 }to restart the integration\textit{. }The end
 of a simulation is set through the real variable \textbf{endTime }(in s).  
 Alternatively, you can specify instead the number of time steps to execute  
 through the integer variable \textbf{nTimeSteps}.  
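
A cold start of 100 time steps could thus be requested as, for example:
\begin{verbatim}
 &PARM03
 nIter0=0,
 nTimeSteps=100,
 deltaT=1200.,
 &
\end{verbatim}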

\begin{itemize}
\item frequency of output
\end{itemize}

 Real variables defining frequencies (in s) with which output files are  
 written on disk need to be set up. \textbf{dumpFreq }controls the frequency  
 with which the instantaneous state of the model is saved. \textbf{chkPtFreq }%  
 and \textbf{pchkPtFreq }control the output frequency of rolling and  
 permanent checkpoint files, respectively. See section 1.5.1 Output files for the  
 definition of model state and checkpoint files. In addition, time-averaged  
 fields can be written out by setting the variable \textbf{taveFreq} (in s).  
The precision with which to write the binary data is controlled by the
integer variable \textbf{writeBinaryPrec }(set it to \texttt{32} or \texttt{%
64}).
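
For example, daily state dumps with monthly rolling and yearly permanent
checkpoints, written in double precision, might be requested as follows
(values illustrative, assuming 30-day months and a 360-day year):
\begin{verbatim}
 &PARM01
 writeBinaryPrec=64,
 &
 &PARM03
 dumpFreq=86400.,
 chkPtFreq=2592000.,
 pchkPtFreq=31104000.,
 &
\end{verbatim}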
