% /[MITgcm]/manual/s_getstarted/text/getting_started.tex
% revision 1.35 by molod, Thu Apr 20 22:09:08 2006 UTC
% $Header$
% $Name$

%\section{Getting started}

In this section, we describe how to use the model. We begin with
enough information to help you get started. We believe the best way to
familiarize yourself with the model is to run the case study examples
provided with the base version. Information on how to obtain, compile,
and run the code is found here, together with a brief description of
the model directory structure and of the case study examples. The
latter and the code structure are described more fully in chapters
\ref{chap:discretization} and \ref{chap:sarch}, respectively. We also
explain how to customize the code when you are ready to try
implementing the configuration you have in mind.

\section{Where to find information}
\label{sect:whereToFindInfo}
\begin{rawhtml}
<!-- CMIREDIR:whereToFindInfo: -->
\end{rawhtml}

A web site is maintained for release 2 (``Pelican'') of MITgcm:
\begin{rawhtml} <A href=http://mitgcm.org/pelican/ target="idontexist"> \end{rawhtml}
\begin{verbatim}
http://mitgcm.org/pelican
\end{verbatim}
\begin{rawhtml} </A> \end{rawhtml}
Here you will find an on-line version of this document, a
``browsable'' copy of the code and a searchable database of the model
and site, as well as links for downloading the model and
documentation, to data-sources, and other related sites.

There is also a web-archived support mailing list for the model that
you can email at \texttt{MITgcm-support@mitgcm.org} or browse at:
\begin{rawhtml} <A href=http://mitgcm.org/mailman/listinfo/mitgcm-support/ target="idontexist"> \end{rawhtml}
\begin{verbatim}
http://mitgcm.org/mailman/listinfo/mitgcm-support/
http://mitgcm.org/pipermail/mitgcm-support/
\end{verbatim}
\begin{rawhtml} </A> \end{rawhtml}
Essentially all of the MITgcm web pages can be searched using a
popular web crawler such as Google or through our own search facility:
\begin{rawhtml} <A href=http://mitgcm.org/mailman/htdig/ target="idontexist"> \end{rawhtml}
\begin{verbatim}
http://mitgcm.org/htdig/
\end{verbatim}
\begin{rawhtml} </A> \end{rawhtml}
%%% http://www.google.com/search?q=hydrostatic+site%3Amitgcm.org
\section{Obtaining the code}
\label{sect:obtainingCode}
\begin{rawhtml}
<!-- CMIREDIR:obtainingCode: -->
\end{rawhtml}

MITgcm can be downloaded from our system by following the instructions
below. As a courtesy we ask that you send e-mail to us at
\begin{rawhtml} <A href=mailto:MITgcm-support@mitgcm.org> \end{rawhtml}
MITgcm-support@mitgcm.org
\begin{rawhtml} </A> \end{rawhtml}
to enable us to keep track of who's using the model and in what
application.  You can download the model in two ways:

\begin{enumerate}
\item Using CVS software. CVS is a freely available source code
management tool. To use CVS you need to have the software installed.
Many systems come with CVS pre-installed; otherwise, good places to
look for the software for a particular platform are
\begin{rawhtml} <A href=http://www.cvshome.org/ target="idontexist"> \end{rawhtml}
cvshome.org
\begin{rawhtml} </A> \end{rawhtml}
and
\begin{rawhtml} <A href=http://www.wincvs.org/ target="idontexist"> \end{rawhtml}
wincvs.org
\begin{rawhtml} </A> \end{rawhtml}
.

\item Using a tar file. This method is simple and does not require any
special software. However, it does not provide easy support for
maintenance updates.

\end{enumerate}

\subsection{Method 1 - Checkout from CVS}
\label{sect:cvs_checkout}

If CVS is available on your system, we strongly encourage you to use
it. CVS provides an efficient and elegant way of organizing your code
and keeping track of your changes. If CVS is not available on your
machine, you can also download a tar file.

Before you can use CVS, the following environment variable(s) should
be set within your shell.  For a csh or tcsh shell, put the following
\begin{verbatim}
% setenv CVSROOT :pserver:cvsanon@mitgcm.org:/u/gcmpack
\end{verbatim}
in your \texttt{.cshrc} or \texttt{.tcshrc} file.  For bash or sh
shells, put:
\begin{verbatim}
% export CVSROOT=':pserver:cvsanon@mitgcm.org:/u/gcmpack'
\end{verbatim}
in your \texttt{.profile} or \texttt{.bashrc} file.

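A wrong or missing \texttt{CVSROOT} is a common cause of a failed
\texttt{cvs login}, so it can help to confirm the variable is set
before logging in. A minimal sketch, in sh/bash syntax:
\begin{verbatim}
# check that CVSROOT points at the MITgcm pserver before running "cvs login"
if [ -z "$CVSROOT" ]; then
    echo "CVSROOT is not set"
else
    echo "CVSROOT=$CVSROOT"
fi
\end{verbatim}
If the second message prints the \texttt{:pserver:} string shown
above, you are ready to log in.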

To get MITgcm through CVS, first register with the MITgcm CVS server
using the command:
\begin{verbatim}
% cvs login ( CVS password: cvsanon )
\end{verbatim}
You only need to do a ``cvs login'' once.

To obtain the latest sources type:
\begin{verbatim}
% cvs co MITgcm
\end{verbatim}
or to get a specific release type:
\begin{verbatim}
% cvs co -P -r checkpoint52i_post MITgcm
\end{verbatim}
The MITgcm web site contains further directions concerning the source
code and CVS.  It also contains a web interface to our CVS archive so
that one may easily view the state of files, revisions, and other
development milestones:
\begin{rawhtml} <A href="http://mitgcm.org/download" target="idontexist"> \end{rawhtml}
\begin{verbatim}
http://mitgcm.org/source_code.html
\end{verbatim}
\begin{rawhtml} </A> \end{rawhtml}

As a convenience, the MITgcm CVS server contains aliases which are
named subsets of the codebase.  These aliases can be especially
helpful when used over slow internet connections or on machines with
restricted storage space.  Table \ref{tab:cvsModules} contains a list
of CVS aliases.
\begin{table}[htb]
  \centering
  \begin{tabular}[htb]{|lp{3.25in}|}\hline
    \textbf{Alias Name}    &  \textbf{Information (directories) Contained}  \\\hline
    \texttt{MITgcm\_code}  &  Only the source code -- none of the verification examples.  \\
    \texttt{MITgcm\_verif\_basic}
    &  Source code plus a small set of the verification examples
    (\texttt{global\_ocean.90x40x15}, \texttt{aim.5l\_cs}, \texttt{hs94.128x64x5},
    \texttt{front\_relax}, and \texttt{plume\_on\_slope}).  \\
    \texttt{MITgcm\_verif\_atmos}  &  Source code plus all of the atmospheric examples.  \\
    \texttt{MITgcm\_verif\_ocean}  &  Source code plus all of the oceanic examples.  \\
    \texttt{MITgcm\_verif\_all}    &  Source code plus all of the
    verification examples. \\\hline
  \end{tabular}
  \caption{MITgcm CVS Modules}
  \label{tab:cvsModules}
\end{table}

The checkout process creates a directory called \texttt{MITgcm}. If
the directory \texttt{MITgcm} exists, this command updates your code
based on the repository. Each directory in the source tree contains a
directory \texttt{CVS}. This information is required by CVS to keep
track of your file versions with respect to the repository. Don't edit
the files in \texttt{CVS}!  You can also use CVS to download code
updates.  More extensive information on using CVS for maintaining
MITgcm code can be found
\begin{rawhtml} <A href="http://mitgcm.org/usingcvstoget.html" target="idontexist"> \end{rawhtml}
here
\begin{rawhtml} </A> \end{rawhtml}
.
It is important to note that the CVS aliases in Table
\ref{tab:cvsModules} cannot be used in conjunction with the CVS
\texttt{-d DIRNAME} option.  However, the \texttt{MITgcm} directories
they create can be renamed following the check-out:
\begin{verbatim}
   %  cvs co MITgcm_verif_basic
   %  mv MITgcm MITgcm_verif_basic
\end{verbatim}
\subsection{Method 2 - Tar file download}
\label{sect:conventionalDownload}

If you do not have CVS on your system, you can download the model as a
tar file from the web site at:
\begin{rawhtml} <A href=http://mitgcm.org/download target="idontexist"> \end{rawhtml}
\begin{verbatim}
http://mitgcm.org/download/
\end{verbatim}
\begin{rawhtml} </A> \end{rawhtml}
The tar file still contains CVS information which we urge you not to
delete; even if you do not use CVS yourself, the information can help
us if you should need to send us your copy of the code.  If a recent
tar file does not exist, then please contact the developers through
the
\begin{rawhtml} <A href="mailto:MITgcm-support@mitgcm.org"> \end{rawhtml}
MITgcm-support@mitgcm.org
\begin{rawhtml} </A> \end{rawhtml}
mailing list.

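Unpacking the tar file follows the usual pattern; this is a minimal
sketch in which the file name \texttt{MITgcm.tar.gz} is illustrative
(use the name of the file you actually downloaded):
\begin{verbatim}
# unpack a downloaded distribution tar file (name illustrative),
# then move into the newly created MITgcm directory
[ -f MITgcm.tar.gz ] && tar xzf MITgcm.tar.gz && cd MITgcm \
    || echo "MITgcm.tar.gz not found"
\end{verbatim}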
\subsubsection{Upgrading from an earlier version}

If you already have an earlier version of the code you can ``upgrade''
your copy instead of downloading the entire repository again. First,
``cd'' (change directory) to the top of your working copy:
\begin{verbatim}
% cd MITgcm
\end{verbatim}
and then issue a cvs update command such as:
\begin{verbatim}
% cvs -q update -r checkpoint52i_post -d -P
\end{verbatim}
This will update the ``tag'' to ``checkpoint52i\_post'', add any new
directories (-d) and remove any empty directories (-P). The -q option
means be quiet, which will reduce the number of messages you'll see in
the terminal. If you have modified the code prior to upgrading, CVS
will try to merge your changes with the upgrades. If there is a
conflict between your modifications and the upgrade, it will report
that file with a ``C'' in front, e.g.:
\begin{verbatim}
C model/src/ini_parms.F
\end{verbatim}
If the list of conflicts scrolled off the screen, you can re-issue the
cvs update command and it will report the conflicts. Conflicts are
indicated in the code by the delimiters ``$<<<<<<<$'', ``======='' and
``$>>>>>>>$''. For example,
{\small
\begin{verbatim}
<<<<<<< ini_parms.F
     & bottomDragLinear,myOwnBottomDragCoefficient,
=======
     & bottomDragLinear,bottomDragQuadratic,
>>>>>>> 1.18
\end{verbatim}
}
means that you added ``myOwnBottomDragCoefficient'' to a namelist at
the same time and place that we added ``bottomDragQuadratic''. You
need to resolve this conflict; in this case the line should be
changed to:
{\small
\begin{verbatim}
     & bottomDragLinear,bottomDragQuadratic,myOwnBottomDragCoefficient,
\end{verbatim}
}
and the lines with the delimiters ($<<<<<<<$, $=======$, $>>>>>>>$)
deleted.  Unless you are making modifications which exactly parallel
developments we make, these types of conflicts should be rare.

\paragraph*{Upgrading to the current pre-release version}

We don't make a ``release'' for every little patch and bug fix in
order to keep the frequency of upgrades to a minimum. However, if you
have run into a problem that ``we have already fixed in the latest
code'', and we haven't made a ``tag'' or ``release'' since that patch,
then you'll need to get the latest code:
\begin{verbatim}
% cvs -q update -A -d -P
\end{verbatim}
Unlike the ``check-out'' and ``update'' procedures above, there is no
``tag'' or release name. The -A option tells CVS to upgrade to the
very latest version. As a rule, we don't recommend this since you
might upgrade while we are in the process of checking in the code, so
that you may only have part of a patch. Using this method of updating
also means we can't tell what version of the code you are working
with. So please be sure you understand what you're doing.
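After resolving conflicts it is easy to overlook a file, so a quick
check is to search the working copy for leftover conflict markers. A
minimal sketch (searching the current directory; adjust the path as
needed):
\begin{verbatim}
# list any files that still contain unresolved CVS conflict markers
grep -rl '^<<<<<<< ' . 2>/dev/null || echo "no unresolved conflicts"
\end{verbatim}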

\section{Model and directory structure}
\begin{rawhtml}
<!-- CMIREDIR:directory_structure: -->
\end{rawhtml}

The ``numerical'' model is contained within an execution environment
support wrapper. This wrapper is designed to provide a general
framework for grid-point models. MITgcmUV is a specific numerical
model that uses the framework. Under this structure the model is split
into execution environment support code and conventional numerical
model code. The execution environment support code is held under the
\texttt{eesupp} directory. The grid point model code is held under the
\texttt{model} directory. Code execution actually starts in the
\texttt{eesupp} routines and not in the \texttt{model} routines. For
this reason the top-level \texttt{MAIN.F} is in the
\texttt{eesupp/src} directory. In general, end-users should not need
to worry about this level. The top-level routine for the numerical
part of the code is in \texttt{model/src/THE\_MODEL\_MAIN.F}. Here is
a brief description of the directory structure of the model under the
root tree (a detailed description is given in chapter
\ref{chap:sarch}, Code structure).

\begin{itemize}

\item \texttt{bin}: this directory is initially empty. It is the
  default directory in which to compile the code.

\item \texttt{diags}: contains the code relating to time-averaged
  diagnostics. It is subdivided into two subdirectories \texttt{inc}
  and \texttt{src} that contain include files (\texttt{*.h} files) and
  Fortran subroutines (\texttt{*.F} files), respectively.

\item \texttt{doc}: contains brief documentation notes.

\item \texttt{eesupp}: contains the execution environment source code.
  Also subdivided into two subdirectories \texttt{inc} and
  \texttt{src}.

\item \texttt{exe}: this directory is initially empty. It is the
  default directory in which to execute the code.

\item \texttt{model}: this directory contains the main source code.
  Also subdivided into two subdirectories \texttt{inc} and
  \texttt{src}.

\item \texttt{pkg}: contains the source code for the packages. Each
  package corresponds to a subdirectory. For example, \texttt{gmredi}
  contains the code related to the Gent-McWilliams/Redi scheme and
  \texttt{aim} the code relating to the atmospheric intermediate
  physics. The packages are described in detail in chapter
  \ref{chap:sarch}.

\item \texttt{tools}: this directory contains various useful tools.
  For example, \texttt{genmake2} is a script written in csh (C-shell)
  that should be used to generate your makefile. The directory
  \texttt{adjoint} contains the makefile specific to the Tangent
  linear and Adjoint Compiler (TAMC) that generates the adjoint code.
  The latter is described in detail in part V.

\item \texttt{utils}: this directory contains various utilities. The
  subdirectory \texttt{knudsen2} contains code and a makefile that
  compute coefficients of the polynomial approximation to the Knudsen
  formula for an ocean nonlinear equation of state. The
  \texttt{matlab} subdirectory contains Matlab scripts for reading
  model output directly into Matlab. \texttt{scripts} contains C-shell
  post-processing scripts for joining processor-based and tile-based
  model output.

\item \texttt{verification}: this directory contains the model
  examples. See section \ref{sect:modelExamples}.

\end{itemize}

\section[MITgcm Example Experiments]{Example experiments}
\label{sect:modelExamples}
\begin{rawhtml}
<!-- CMIREDIR:modelExamples: -->
\end{rawhtml}

%% a set of twenty-four pre-configured numerical experiments

The full MITgcm distribution comes with more than a dozen
pre-configured numerical experiments. Some of these example
experiments are tests of individual parts of the model code, but many
are fully fledged numerical simulations. A few of the examples are
used for tutorial documentation in sections \ref{sect:eg-baro} --
\ref{sect:eg-global}.  The other examples follow the same general
structure as the tutorial examples. However, they only include brief
instructions in a text file called \texttt{README}.  The examples are
located in subdirectories under the directory \texttt{verification}.
Each example is briefly described below.

\subsection{Full list of model examples}

\begin{enumerate}

\item \texttt{exp0} - Single layer, ocean double gyre (barotropic with
  free-surface). This experiment is described in detail in section
  \ref{sect:eg-baro}.

\item \texttt{exp1} - Four layer, ocean double gyre. This experiment
  is described in detail in section \ref{sect:eg-baroc}.

\item \texttt{exp2} - 4x4 degree global ocean simulation with steady
  climatological forcing. This experiment is described in detail in
  section \ref{sect:eg-global}.

\item \texttt{exp4} - Flow over a Gaussian bump in open-water or a
  channel with open boundaries.

\item \texttt{exp5} - Inhomogeneously forced ocean convection in a
  doubly periodic box.

\item \texttt{front\_relax} - Relaxation of an ocean thermal front
  (test for Gent/McWilliams scheme). 2D (Y-Z).

\item \texttt{internal\_wave} - Ocean internal wave forced by open
  boundary conditions.

\item \texttt{natl\_box} - Eastern subtropical North Atlantic with KPP
  scheme; 1 month integration.

\item \texttt{hs94.1x64x5} - Zonally averaged atmosphere using Held
  and Suarez '94 forcing.

\item \texttt{hs94.128x64x5} - 3D atmosphere dynamics using Held and
  Suarez '94 forcing.

\item \texttt{hs94.cs-32x32x5} - 3D atmosphere dynamics using Held and
  Suarez '94 forcing on the cubed sphere.

\item \texttt{aim.5l\_zon-ave} - Intermediate Atmospheric physics.
  Global Zonal Mean configuration, 1x64x5 resolution.

\item \texttt{aim.5l\_XZ\_Equatorial\_Slice} - Intermediate
  Atmospheric physics, equatorial Slice configuration.  2D (X-Z).

\item \texttt{aim.5l\_Equatorial\_Channel} - Intermediate Atmospheric
  physics. 3D Equatorial Channel configuration.

\item \texttt{aim.5l\_LatLon} - Intermediate Atmospheric physics.
  Global configuration, on latitude longitude grid with 128x64x5 grid
  points ($2.8^\circ$ resolution).

\item \texttt{adjustment.128x64x1} - Barotropic adjustment problem on
  latitude longitude grid with 128x64 grid points ($2.8^\circ$
  resolution).

\item \texttt{adjustment.cs-32x32x1} - Barotropic adjustment problem
  on cube sphere grid with 32x32 points per face (roughly $2.8^\circ$
  resolution).

\item \texttt{advect\_cs} - Two-dimensional passive advection test on
  cube sphere grid.

\item \texttt{advect\_xy} - Two-dimensional (horizontal plane) passive
  advection test on Cartesian grid.

\item \texttt{advect\_yz} - Two-dimensional (vertical plane) passive
  advection test on Cartesian grid.

\item \texttt{carbon} - Simple passive tracer experiment. Includes
  derivative calculation. Described in detail in section
  \ref{sect:eg-carbon-ad}.

\item \texttt{flt\_example} - Example of using the float package.

\item \texttt{global\_ocean.90x40x15} - Global circulation with GM,
  flux boundary conditions and poles.

\item \texttt{global\_ocean\_pressure} - Global circulation in
  pressure coordinate (non-Boussinesq ocean model). Described in
  detail in section \ref{sect:eg-globalpressure}.

\item \texttt{solid-body.cs-32x32x1} - Solid body rotation test for
  cube sphere grid.

\end{enumerate}
\subsection{Directory structure of model examples}

Each example directory has the following subdirectories:

\begin{itemize}
\item \texttt{code}: contains the code particular to the example. At a
  minimum, this directory includes the following files:

  \begin{itemize}
  \item \texttt{code/packages.conf}: declares the list of packages or
    package groups to be used.  If not included, the default version
    is located in \texttt{pkg/pkg\_default}.  Package groups are
    simply convenient collections of commonly used packages which are
    defined in \texttt{pkg/pkg\_default}.  Some packages may require
    other packages or may require their absence (that is, they are
    incompatible) and these package dependencies are listed in
    \texttt{pkg/pkg\_depend}.

  \item \texttt{code/CPP\_EEOPTIONS.h}: declares CPP keys relating to
    the ``execution environment'' part of the code. The default
    version is located in \texttt{eesupp/inc}.

  \item \texttt{code/CPP\_OPTIONS.h}: declares CPP keys relating to
    the ``numerical model'' part of the code. The default version is
    located in \texttt{model/inc}.

  \item \texttt{code/SIZE.h}: declares the size of the underlying
    computational grid.  The default version is located in
    \texttt{model/inc}.
  \end{itemize}

  In addition, other include files and subroutines might be present in
  \texttt{code} depending on the particular experiment. See chapter
  \ref{chap:discretization} for more details.

\item \texttt{input}: contains the input data files required to run
  the example. At a minimum, the \texttt{input} directory contains the
  following files:

  \begin{itemize}
  \item \texttt{input/data}: this file, written as a namelist,
    specifies the main parameters for the experiment.

  \item \texttt{input/data.pkg}: contains parameters relating to the
    packages used in the experiment.

  \item \texttt{input/eedata}: this file contains ``execution
    environment'' data. At present, this consists of a specification
    of the number of threads to use in $X$ and $Y$ under multithreaded
    execution.
  \end{itemize}

  In addition, you will also find in this directory the forcing and
  topography files as well as the files describing the initial state
  of the experiment.  This varies from experiment to experiment. See
  chapter \ref{chap:discretization} for more details.

\item \texttt{results}: this directory contains the output file
  \texttt{output.txt} produced by the simulation example. This file is
  useful for comparison with your own output when you run the
  experiment.
\end{itemize}

Once you have chosen the example you want to run, you are ready to
compile the code.

\section[Building MITgcm]{Building the code}
\label{sect:buildingCode}
\begin{rawhtml}
<!-- CMIREDIR:buildingCode: -->
\end{rawhtml}

To compile the code, we use the \texttt{make} program. This uses a
file (\texttt{Makefile}) that allows us to pre-process source files,
specify compiler and optimization options, and work out any
file dependencies. We supply a script (\texttt{genmake2}), described
in section \ref{sect:genmake}, that automatically creates the
\texttt{Makefile} for you. You then need to build the dependencies and
compile the code.

As an example, assume that you want to build and run experiment
\texttt{verification/exp2}. There are multiple ways and places to
do this, but here let's build the code in
\texttt{verification/exp2/build}:
\begin{verbatim}
% cd verification/exp2/build
\end{verbatim}
First, build the \texttt{Makefile}:
\begin{verbatim}
% ../../../tools/genmake2 -mods=../code
\end{verbatim}
The command line option tells \texttt{genmake2} to override model source
code with any files in the directory \texttt{../code/}.

On many systems, the \texttt{genmake2} program will be able to
automatically recognize the hardware, find compilers and other tools
within the user's path (``\texttt{echo \$PATH}''), and then choose an
appropriate set of options from the files (``optfiles'') contained in
the \texttt{tools/build\_options} directory.  Under some
circumstances, a user may have to create a new ``optfile'' in order to
specify the exact combination of compiler, compiler flags, libraries,
and other options necessary to build a particular configuration of
MITgcm.  In such cases, it is generally helpful to read the existing
``optfiles'' and mimic their syntax.

Through the MITgcm-support list, the MITgcm developers are willing to
help write or modify ``optfiles'', and we encourage users
to post new ``optfiles'' (particularly ones for new machines or
architectures) to the
\begin{rawhtml} <A href="mailto:MITgcm-support@mitgcm.org"> \end{rawhtml}
MITgcm-support@mitgcm.org
\begin{rawhtml} </A> \end{rawhtml}
list.

To specify an optfile to \texttt{genmake2}, the syntax is:
\begin{verbatim}
% ../../../tools/genmake2 -mods=../code -of /path/to/optfile
\end{verbatim}

Once a \texttt{Makefile} has been generated, we create the
dependencies with the command:
\begin{verbatim}
% make depend
\end{verbatim}
This modifies the \texttt{Makefile} by attaching a (usually long)
list of files upon which other files depend. The purpose of this is to
reduce re-compilation if and when you start to modify the code. The
\texttt{make depend} command also creates links from the model source to
this directory.  It is important to note that the \texttt{make depend}
stage will occasionally produce warnings or errors, since the
dependency parsing tool is unable to find all of the necessary header
files (\textit{eg.} \texttt{netcdf.inc}).  In these circumstances, it
is usually OK to ignore the warnings/errors and proceed to the next
step.

Next, one can compile the code using:
\begin{verbatim}
% make
\end{verbatim}
The \texttt{make} command creates an executable called \texttt{mitgcmuv}.
Additional make ``targets'' are defined within the makefile to aid in
the production of adjoint and other versions of MITgcm.  On SMP
(shared multi-processor) systems, the build process can often be sped
up appreciably using the command:
\begin{verbatim}
% make -j 2
\end{verbatim}
where the ``2'' can be replaced with a number that corresponds to the
number of CPUs available.

Now you are ready to run the model. General instructions for doing so are
given in section \ref{sect:runModel}. Here, we can run the model by
first creating links to all the input files:
\begin{verbatim}
ln -s ../input/* .
\end{verbatim}
and then calling the executable with:
\begin{verbatim}
./mitgcmuv > output.txt
\end{verbatim}
where we are re-directing the stream of text output to the file
\texttt{output.txt}.

\subsection{Building/compiling the code elsewhere}

In the example above (section \ref{sect:buildingCode}) we built the
executable in the {\em build} directory of the experiment for
convenience. You can also configure and compile the code in other
locations, for example on a scratch disk, without having to copy the
entire source tree. The only requirement is that you have {\tt
  genmake2} in your path or you know the absolute path to {\tt
  genmake2}.

The following sections outline some possible methods of organizing
your source and data.

\subsubsection{Building from the {\em ../code} directory}

This is just as simple as building in the {\em input/} directory:
\begin{verbatim}
% cd verification/exp2/code
% ../../../tools/genmake2
% make depend
% make
\end{verbatim}
However, to run the model, the executable ({\em mitgcmuv}) and the input
files must be in the same place. If you only have one calculation to make:
\begin{verbatim}
% cd ../input
% cp ../code/mitgcmuv ./
% ./mitgcmuv > output.txt
\end{verbatim}
or, if you will be making multiple runs with the same executable:
\begin{verbatim}
% cd ../
% cp -r input run1
% cp code/mitgcmuv run1
% cd run1
% ./mitgcmuv > output.txt
\end{verbatim}

\subsubsection{Building from a new directory}

Since the {\em input} directory contains input files, it is often more
useful to keep {\em input} pristine and build in a new directory
within {\em verification/exp2/}:
\begin{verbatim}
% cd verification/exp2
% mkdir build
% cd build
% ../../../tools/genmake2 -mods=../code
% make depend
% make
\end{verbatim}
This builds the code exactly as before, but this time you need to copy
either the executable or the input files (or both) in order to run the
model. For example,
\begin{verbatim}
% cp ../input/* ./
% ./mitgcmuv > output.txt
\end{verbatim}
or, if you tend to make multiple runs with the same executable, then
running in a new directory each time might be more appropriate:
\begin{verbatim}
% cd ../
% mkdir run1
% cp build/mitgcmuv run1/
% cp input/* run1/
% cd run1
% ./mitgcmuv > output.txt
\end{verbatim}

\subsubsection{Building on a scratch disk}

Model object files and output data can use up large amounts of disk
space, so it is often the case that you will be operating on a large
scratch disk. Assuming the model source is in {\em \~{}/MITgcm}, the
following commands will build the model in {\em /scratch/exp2-run1}:
\begin{verbatim}
% cd /scratch/exp2-run1
% ~/MITgcm/tools/genmake2 -rootdir=~/MITgcm \
  -mods=~/MITgcm/verification/exp2/code
% make depend
% make
\end{verbatim}
To run the model here, you'll need the input files:
\begin{verbatim}
% cp ~/MITgcm/verification/exp2/input/* ./
% ./mitgcmuv > output.txt
\end{verbatim}

As before, you could build in one directory and make multiple runs of
the one experiment:
\begin{verbatim}
% cd /scratch/exp2
% mkdir build
% cd build
% ~/MITgcm/tools/genmake2 -rootdir=~/MITgcm \
  -mods=~/MITgcm/verification/exp2/code
% make depend
% make
% cd ../
% cp -r ~/MITgcm/verification/exp2/input run2
% cp build/mitgcmuv run2/
% cd run2
% ./mitgcmuv > output.txt
\end{verbatim}

\subsection{Using \texttt{genmake2}}
\label{sect:genmake}

To compile the code, first use the program \texttt{genmake2} (located
in the \texttt{tools} directory) to generate a Makefile.
\texttt{genmake2} is a shell script written to work with all
``sh''--compatible shells including bash v1, bash v2, and Bourne.
Internally, \texttt{genmake2} determines the locations of needed
files, the compiler, compiler options, libraries, and Unix tools.  It
relies upon a number of ``optfiles'' located in the
\texttt{tools/build\_options} directory.

The purpose of the optfiles is to provide all the compilation options
for particular ``platforms'' (where ``platform'' roughly means the
combination of the hardware and the compiler) and code configurations.
Given the combinations of possible compilers and library dependencies
({\it eg.} MPI and NetCDF) there may be numerous optfiles available
for a single machine.  The naming scheme for the majority of the
optfiles shipped with the code is
\begin{center}
  {\bf OS\_HARDWARE\_COMPILER }
\end{center}
where
\begin{description}
\item[OS] is the name of the operating system (generally the
  lower-case output of the {\tt 'uname'} command)
\item[HARDWARE] is a string that describes the CPU type and
  corresponds to output from the {\tt 'uname -m'} command:
  \begin{description}
  \item[ia32] is for ``x86'' machines such as i386, i486, i586, i686,
    and athlon
  \item[ia64] is for Intel IA64 systems (eg. Itanium, Itanium2)
  \item[amd64] is for AMD x86\_64 systems
  \item[ppc] is for Mac PowerPC systems
  \end{description}
\item[COMPILER] is the compiler name (generally, the name of the
  FORTRAN executable)
\end{description}
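Following this scheme, a rough sketch of composing a candidate optfile name
for the current machine in an ``sh'' shell is shown below (the trailing
compiler name, here \texttt{g77}, is only an assumption, and the
\texttt{uname -m} output may still need mapping to the names above,
\textit{eg.} i686 to ia32):
\begin{verbatim}
os=`uname -s | tr '[:upper:]' '[:lower:]'`   # operating system, e.g. "linux"
hw=`uname -m`                                # hardware, e.g. "i686" -> "ia32"
echo "${os}_${hw}_g77"                       # candidate optfile name
\end{verbatim}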

In many cases, the default optfiles are sufficient and will result in
usable Makefiles.  However, for some machines or code configurations,
new ``optfiles'' must be written. To create a new optfile, it is
generally best to start with one of the defaults and modify it to suit
your needs.  Like \texttt{genmake2}, the optfiles are all written
using a simple ``sh''--compatible syntax.  While nearly all variables
used within \texttt{genmake2} may be specified in the optfiles, the
critical ones that should be defined are:

\begin{description}
\item[FC] the FORTRAN compiler (executable) to use
\item[DEFINES] the command-line DEFINE options passed to the compiler
\item[CPP] the C pre-processor to use
\item[NOOPTFLAGS] option flags for special files that should not be
  optimized
\end{description}

For example, the optfile for a typical Red Hat Linux machine (``ia32''
architecture) using the GCC (g77) compiler is
\begin{verbatim}
FC=g77
DEFINES='-D_BYTESWAPIO -DWORDLENGTH=4'
CPP='cpp  -traditional -P'
NOOPTFLAGS='-O0'
#  For IEEE, use the "-ffloat-store" option
if test "x$IEEE" = x ; then
    FFLAGS='-Wimplicit -Wunused -Wuninitialized'
    FOPTIM='-O3 -malign-double -funroll-loops'
else
    FFLAGS='-Wimplicit -Wunused -ffloat-store'
    FOPTIM='-O0 -malign-double'
fi
\end{verbatim}

If you write an optfile for an unrepresented machine or compiler, you
are strongly encouraged to submit the optfile to the MITgcm project
for inclusion.  Please send the file to the
\begin{rawhtml} <A href="mailto:MITgcm-support@mitgcm.org"> \end{rawhtml}
\begin{center}
  MITgcm-support@mitgcm.org
\end{center}
\begin{rawhtml} </A> \end{rawhtml}
mailing list.


In addition to the optfiles, \texttt{genmake2} supports a number of
helpful command-line options.  A complete list of these options can be
obtained from:
\begin{verbatim}
% genmake2 -h
\end{verbatim}

The most important command-line options are:
\begin{description}

\item[\texttt{--optfile=/PATH/FILENAME}] specifies the optfile that
  should be used for a particular build.

  If no optfile is specified (either through the command line or the
  \texttt{MITGCM\_OPTFILE} environment variable), \texttt{genmake2}
  will try to make a reasonable guess from the list provided in {\em
    tools/build\_options}.  The method used for making this guess is
  to first determine the combination of operating system and hardware
  (eg. ``linux\_ia32'') and then find a working FORTRAN compiler within
  the user's path.  When these three items have been identified,
  \texttt{genmake2} will try to find an optfile that has a matching
  name.

\item[\texttt{--pdefault='PKG1 PKG2 PKG3 ...'}] specifies the default
  set of packages to be used.  The normal order of precedence for
  packages is as follows:
  \begin{enumerate}
  \item If available, the command line (\texttt{--pdefault}) settings
    over-rule any others.

  \item Next, \texttt{genmake2} will look for a file named
    ``\texttt{packages.conf}'' in the local directory or in any of the
    directories specified with the \texttt{--mods} option.

  \item Finally, if neither of the above are available,
    \texttt{genmake2} will use the \texttt{pkg/pkg\_default} file.
  \end{enumerate}

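For instance, a minimal \texttt{packages.conf} might be created in a
``mods'' directory as follows (a sketch only: \texttt{gfd} and
\texttt{kpp} are examples of a package group and a package, and the
right choices depend on your experiment):
\begin{verbatim}
cat > packages.conf <<EOF
gfd
kpp
EOF
\end{verbatim}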
\item[\texttt{--pdepend=/PATH/FILENAME}] specifies the dependency file
  used for packages.

  If not specified, the default dependency file {\em pkg/pkg\_depend}
  is used.  The syntax for this file is parsed on a line-by-line
  basis, where each line contains either a comment (``\#'') or a
  simple ``PKGNAME1 (+|-)PKGNAME2'' pairwise rule where the ``+'' or
  ``-'' symbol specifies a ``must be used with'' or a ``must not be
  used with'' relationship, respectively.  If no rule is specified,
  then it is assumed that the two packages are compatible and will
  function either with or without each other.

\item[\texttt{--adof=/path/to/file}] specifies the ``adjoint'' or
  automatic differentiation options file to be used.  The file is
  analogous to the ``optfile'' defined above but it specifies
  information for the AD build process.

  The default file is located in {\em
    tools/adjoint\_options/adjoint\_default} and it defines the ``TAF''
  and ``TAMC'' compilers.  An alternate version is also available at
  {\em tools/adjoint\_options/adjoint\_staf} that selects the newer
  ``STAF'' compiler.  As with any compilers, it is helpful to have their
  directories listed in your {\tt \$PATH} environment variable.

\item[\texttt{--mods='DIR1 DIR2 DIR3 ...'}] specifies a list of
  directories containing ``modifications''.  These directories contain
  files with names that may (or may not) exist in the main MITgcm
  source tree but will be overridden by any identically-named sources
  within the ``MODS'' directories.

  The order of precedence for this ``name-hiding'' is as follows:
  \begin{itemize}
  \item ``MODS'' directories (in the order given)
  \item Packages either explicitly specified or provided by default
    (in the order given)
  \item Packages included due to package dependencies (in the order
    that the package dependencies are parsed)
  \item The ``standard dirs'' (which may have been specified by the
    ``-standarddirs'' option)
  \end{itemize}

\item[\texttt{--mpi}] This option enables certain MPI features (using
  CPP \texttt{\#define}s) within the code and is necessary for MPI
  builds (see Section \ref{sect:mpi-build}).

\item[\texttt{--make=/path/to/gmake}] Due to the poor handling of
  soft-links and other bugs common with the \texttt{make} versions
  provided by commercial Unix vendors, GNU \texttt{make} (sometimes
  called \texttt{gmake}) should be preferred.  This option provides a
  means for specifying the make executable to be used.

\item[\texttt{--bash=/path/to/sh}] On some (usually older UNIX)
  machines, the ``bash'' shell is unavailable.  To run on these
  systems, \texttt{genmake2} can be invoked using an ``sh'' (that is,
  a Bourne, POSIX, or compatible) shell.  The syntax in these
  circumstances is:
  \begin{center}
    \texttt{\% /bin/sh genmake2 -bash=/bin/sh [...options...]}
  \end{center}
  where \texttt{/bin/sh} can be replaced with the full path and name
  of the desired shell.

\end{description}


\subsection{Building with MPI}
\label{sect:mpi-build}

Building MITgcm to use MPI libraries can be complicated due to the
variety of different MPI implementations available, their dependencies
on or interactions with different compilers, and their often ad-hoc
locations within file systems.  For these reasons, it is generally a
good idea to start by finding and reading the documentation for your
machine(s) and, if necessary, seeking help from your local systems
administrator.

The steps for building MITgcm with MPI support are:
\begin{enumerate}

\item Determine the locations of your MPI-enabled compiler and/or MPI
  libraries and put them into an options file as described in Section
  \ref{sect:genmake}.  One can start with one of the examples in:
  \begin{rawhtml} <A
    href="http://mitgcm.org/cgi-bin/viewcvs.cgi/MITgcm/tools/build_options/">
  \end{rawhtml}
  \begin{center}
    \texttt{MITgcm/tools/build\_options/}
  \end{center}
  \begin{rawhtml} </A> \end{rawhtml}
  such as \texttt{linux\_ia32\_g77+mpi\_cg01} or
  \texttt{linux\_ia64\_efc+mpi} and then edit it to suit the machine at
  hand.  You may need help from your user guide or local systems
  administrator to determine the exact location of the MPI libraries.
  If libraries are not installed, MPI implementations and related
  tools are available, including:
  \begin{itemize}
  \item \begin{rawhtml} <A
      href="http://www-unix.mcs.anl.gov/mpi/mpich/">
    \end{rawhtml}
    MPICH
    \begin{rawhtml} </A> \end{rawhtml}

  \item \begin{rawhtml} <A
      href="http://www.lam-mpi.org/">
    \end{rawhtml}
    LAM/MPI
    \begin{rawhtml} </A> \end{rawhtml}

  \item \begin{rawhtml} <A
      href="http://www.osc.edu/~pw/mpiexec/">
    \end{rawhtml}
    MPIexec
    \begin{rawhtml} </A> \end{rawhtml}
  \end{itemize}

\item Build the code with the \texttt{genmake2} \texttt{-mpi} option
  (see Section \ref{sect:genmake}) using commands such as:
{\footnotesize \begin{verbatim}
  %  ../../../tools/genmake2 -mods=../code -mpi -of=YOUR_OPTFILE
  %  make depend
  %  make
\end{verbatim} }

\item Run the code with the appropriate MPI ``run'' or ``exec''
  program provided with your particular implementation of MPI.
  Typical MPI packages such as MPICH will use something like:
\begin{verbatim}
  %  mpirun -np 4 -machinefile mf ./mitgcmuv
\end{verbatim}
  Slightly more complicated scripts may be needed for many machines
  since execution of the code may be controlled by both the MPI
  library and a job scheduling and queueing system such as PBS,
  LoadLeveler, Condor, or any of a number of similar tools.  A few
  example scripts (those used for our \begin{rawhtml} <A
    href="http://mitgcm.org/testing.html"> \end{rawhtml}regular
  verification runs\begin{rawhtml} </A> \end{rawhtml}) are available
  at:
  \begin{rawhtml} <A
    href="http://mitgcm.org/cgi-bin/viewcvs.cgi/MITgcm_contrib/test_scripts/">
  \end{rawhtml}
  {\footnotesize \tt
    http://mitgcm.org/cgi-bin/viewcvs.cgi/MITgcm\_contrib/test\_scripts/ }
  \begin{rawhtml} </A> \end{rawhtml}

\end{enumerate}

An example of the above process on the MITgcm cluster (``cg01'') using
the GNU g77 compiler and the mpich MPI library is:

{\footnotesize \begin{verbatim}
  %  cd MITgcm/verification/exp5
  %  mkdir build
  %  cd build
  %  ../../../tools/genmake2 -mpi -mods=../code \
       -of=../../../tools/build_options/linux_ia32_g77+mpi_cg01
  %  make depend
  %  make
  %  cd ../input
  %  /usr/local/pkg/mpi/mpi-1.2.4..8a-gm-1.5/g77/bin/mpirun.ch_gm \
       -machinefile mf --gm-kill 5 -v -np 2  ../build/mitgcmuv
\end{verbatim} }

\section[Running MITgcm]{Running the model in prognostic mode}
\label{sect:runModel}
\begin{rawhtml}
<!-- CMIREDIR:runModel: -->
\end{rawhtml}

If compilation finished successfully (section \ref{sect:buildingCode}),
then an executable called \texttt{mitgcmuv} will now exist in the
local directory.

To run the model as a single process (\textit{ie.} not in parallel),
simply type:
\begin{verbatim}
% ./mitgcmuv
\end{verbatim}
The ``./'' is a safe-guard to make sure you use the local executable
in case others exist in your path (surely odd if you do!). The above
command will spew out many lines of text output to your screen.  This
output contains details such as parameter values as well as
diagnostics such as mean kinetic energy, largest CFL number, etc. It
is worth keeping this text output with the binary output, so we
normally re-direct the \texttt{stdout} stream as follows:
\begin{verbatim}
% ./mitgcmuv > output.txt
\end{verbatim}
In the event that the model encounters an error and stops, it is very
helpful to include the last few lines of this \texttt{output.txt} file
along with the (\texttt{stderr}) error message within any bug reports.
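For example, assuming the run left \texttt{output.txt} in the current
directory, the relevant final lines can be extracted with:
\begin{verbatim}
tail -n 20 output.txt
\end{verbatim}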

For the example experiments in \texttt{verification}, an example of the
output is kept in \texttt{results/output.txt} for comparison. You can
compare your \texttt{output.txt} with the corresponding one for that
experiment to check that the set-up works.

\subsection{Output files}

The model produces various output files and, when using \texttt{mnc},
sometimes even directories.  Depending upon the I/O package(s)
selected at compile time (either \texttt{mdsio} or \texttt{mnc} or
both, as determined by \texttt{code/packages.conf}) and the run-time
flags set (in \texttt{input/data.pkg}), the following output may
appear.

1037  The model produces various output files. At a minimum, the instantaneous  
1038  ``state'' of the model is written out, which is made of the following files:  \subsubsection{MDSIO output files}
1039    
1040    The ``traditional'' output files are generated by the \texttt{mdsio}
1041    package.  At a minimum, the instantaneous ``state'' of the model is
1042    written out, which is made of the following files:
1043    
\begin{itemize}
\item \texttt{U.00000nIter} - zonal component of velocity field (m/s
  and positive eastward).

\item \texttt{V.00000nIter} - meridional component of velocity field
  (m/s and positive northward).

\item \texttt{W.00000nIter} - vertical component of velocity field
  (ocean: m/s and positive upward, atmosphere: Pa/s and positive
  towards increasing pressure, i.e. downward).

\item \texttt{T.00000nIter} - potential temperature (ocean:
  $^{\circ}\mathrm{C}$, atmosphere: $\mathrm{K}$).

\item \texttt{S.00000nIter} - ocean: salinity (psu), atmosphere: water
  vapor (g/kg).

\item \texttt{Eta.00000nIter} - ocean: surface elevation (m),
  atmosphere: surface pressure anomaly (Pa).
\end{itemize}
The chain \texttt{00000nIter} consists of ten digits that specify the
iteration number at which the output is written out. For example,
\texttt{U.0000000300} is the zonal velocity at iteration 300.
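The fixed-width suffix is straightforward to generate or parse programmatically. A small Python sketch (the helper names are ours, not part of the model):

```python
def state_filename(field, niter):
    """Build an mdsio state-file name: the iteration number is
    zero-padded to ten digits, e.g. ('U', 300) -> 'U.0000000300'."""
    return "%s.%010d" % (field, niter)

def iteration_of(filename):
    """Recover the iteration number from such a file name."""
    return int(filename.rsplit(".", 1)[1])
```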
In addition, a ``pickup'' or ``checkpoint'' file called:

\begin{itemize}
\item \texttt{pickup.00000nIter}
\end{itemize}

is written out. This file represents the state of the model in a
condensed form and is used for restarting the integration. If the C-D
scheme is used, there is an additional ``pickup'' file:
\begin{itemize}
\item \texttt{pickup\_cd.00000nIter}
\end{itemize}

containing the D-grid velocity data, which has to be written out as
well in order to restart the integration. Rolling checkpoint files are
the same as the pickup files but are named differently: their names
contain the chain \texttt{ckptA} or \texttt{ckptB} instead of
\texttt{00000nIter}. They can be used to restart the model but are
overwritten every other time they are output, to save disk space
during long integrations.
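When restarting a long run, one usually wants the most recent numbered pickup. A hypothetical Python helper that scans a directory listing for it (rolling \texttt{ckptA}/\texttt{ckptB} files are skipped, since their names carry no iteration count):

```python
def latest_pickup_iteration(filenames):
    """Return the largest iteration number among numbered pickup files
    (pickup.00000nIter, pickup_cd.00000nIter), or None if there are
    none.  Rolling ckptA/ckptB pickups are ignored because their age
    cannot be read from the name alone."""
    iters = []
    for name in filenames:
        if name.startswith(("pickup.", "pickup_cd.")):
            suffix = name.split(".")[1]
            if suffix.isdigit():
                iters.append(int(suffix))
    return max(iters) if iters else None
```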
\subsubsection{MNC output files}

Unlike the \texttt{mdsio} output, the \texttt{mnc}-generated output is
usually (though not necessarily) placed within a subdirectory with a
name such as \texttt{mnc\_test\_\${DATE}\_\${SEQ}}.  The files within
this subdirectory are all in the ``self-describing'' netCDF format and
can thus be browsed and/or plotted using tools such as:
\begin{itemize}
\item \texttt{ncdump} is a utility which is typically included with
  every netCDF install:
  \begin{rawhtml} <A href="http://www.unidata.ucar.edu/packages/netcdf/"> \end{rawhtml}
\begin{verbatim}
http://www.unidata.ucar.edu/packages/netcdf/
\end{verbatim}
  \begin{rawhtml} </A> \end{rawhtml}
  It converts the netCDF binaries into formatted ASCII text files.

\item The \texttt{ncview} utility is a very convenient and quick way
  to plot netCDF data, and it runs on most OSes:
  \begin{rawhtml} <A href="http://meteora.ucsd.edu/~pierce/ncview_home_page.html"> \end{rawhtml}
\begin{verbatim}
http://meteora.ucsd.edu/~pierce/ncview_home_page.html
\end{verbatim}
  \begin{rawhtml} </A> \end{rawhtml}

\item MATLAB and other common post-processing environments provide
  various netCDF interfaces, including:
  \begin{rawhtml} <A href="http://mexcdf.sourceforge.net/"> \end{rawhtml}
\begin{verbatim}
http://mexcdf.sourceforge.net/
\end{verbatim}
  \begin{rawhtml} </A> \end{rawhtml}
  \begin{rawhtml} <A href="http://woodshole.er.usgs.gov/staffpages/cdenham/public_html/MexCDF/nc4ml5.html"> \end{rawhtml}
\begin{verbatim}
http://woodshole.er.usgs.gov/staffpages/cdenham/public_html/MexCDF/nc4ml5.html
\end{verbatim}
  \begin{rawhtml} </A> \end{rawhtml}
\end{itemize}
\subsection{Looking at the output}

The ``traditional'' or mdsio model data are written according to a
``meta/data'' file format.  Each variable is associated with two files
with suffix names \texttt{.data} and \texttt{.meta}. The
\texttt{.data} file contains the data written in binary form
(big-endian by default). The \texttt{.meta} file is a ``header'' file
that contains information about the size and the structure of the
\texttt{.data} file. This way of organizing the output is particularly
useful when running multi-processor calculations. The base version of
the model includes a few Matlab utilities to read output files written
in this format. The Matlab scripts are located in the directory
\texttt{utils/matlab} under the root tree. The script \texttt{rdmds.m}
reads the data. Look at the comments inside the script to see how to
use it.
Some examples of reading and visualizing some output in {\em Matlab}:
\begin{verbatim}
% matlab
>> H=rdmds('Depth');
>> contourf(H');colorbar;
>> title('Depth of fluid as used by model');

>> eta=rdmds('Eta',10);
>> imagesc(eta');axis ij;colorbar;
>> title('Surface height at iter=10');

>> eta=rdmds('Eta',[0:10:100]);
>> for n=1:11; imagesc(eta(:,:,n)');axis ij;colorbar;pause(.5);end
\end{verbatim}
Similar scripts for netCDF output (\texttt{rdmnc.m}) are available and
they are described in Section \ref{sec:pkg:mnc}.