% /manual/s_getstarted/text/getting_started.tex
% revision 1.30 by edhill, Sat Oct 16 03:40:13 2004 UTC
this section, we provide information on how to customize the code when
you are ready to try implementing the configuration you have in mind.


\section{Where to find information}
\label{sect:whereToFindInfo}
\begin{rawhtml}
<!-- CMIREDIR:whereToFindInfo: -->
\end{rawhtml}

A web site is maintained for release 2 (``Pelican'') of MITgcm:
\begin{rawhtml} <A href=http://mitgcm.org/pelican/ target="idontexist"> \end{rawhtml}
\begin{verbatim}
http://mitgcm.org/pelican
\end{verbatim}
\begin{rawhtml} </A> \end{rawhtml}
Here you will find an on-line version of this document, a
``browsable'' copy of the code and a searchable database of the model
and site, as well as links for downloading the model and
documentation, to data-sources, and other related sites.

There is also a web-archived support mailing list for the model that
you can email at \texttt{MITgcm-support@mitgcm.org} or browse at:
\begin{rawhtml} <A href=http://mitgcm.org/mailman/listinfo/mitgcm-support/ target="idontexist"> \end{rawhtml}
\begin{verbatim}
http://mitgcm.org/mailman/listinfo/mitgcm-support/
http://mitgcm.org/pipermail/mitgcm-support/
\end{verbatim}
\begin{rawhtml} </A> \end{rawhtml}
Essentially all of the MITgcm web pages can be searched using a
popular web crawler such as Google or through our own search facility:
\begin{rawhtml} <A href=http://mitgcm.org/mailman/htdig/ target="idontexist"> \end{rawhtml}
\begin{verbatim}
http://mitgcm.org/htdig/
\end{verbatim}
\begin{rawhtml} </A> \end{rawhtml}
%%% http://www.google.com/search?q=hydrostatic+site%3Amitgcm.org

\section{Obtaining the code}
\label{sect:obtainingCode}
\begin{rawhtml}
<!-- CMIREDIR:obtainingCode: -->
\end{rawhtml}

MITgcm can be downloaded from our system by following
the instructions below. As a courtesy we ask that you send e-mail to us at
\begin{rawhtml} <A href=mailto:MITgcm-support@mitgcm.org> \end{rawhtml}
MITgcm-support@mitgcm.org
\begin{rawhtml} </A> \end{rawhtml}
to enable us to keep track of who's using the model and in what application.
You can download the model two ways:

\end{enumerate}

\subsection{Method 1 - Checkout from CVS}
\label{sect:cvs_checkout}

If CVS is available on your system, we strongly encourage you to use it. CVS
provides an efficient and elegant way of organizing your code and keeping
track of your changes. If CVS is not available on your machine, you can also
download a tar file.

Before you can use CVS, the following environment variable(s) should
be set within your shell.  For a csh or tcsh shell, put the following
\begin{verbatim}
% setenv CVSROOT :pserver:cvsanon@mitgcm.org:/u/gcmpack
\end{verbatim}
in your \texttt{.cshrc} or \texttt{.tcshrc} file.  For bash or sh shells, put:
\begin{verbatim}
% export CVSROOT=':pserver:cvsanon@mitgcm.org:/u/gcmpack'
\end{verbatim}
in your \texttt{.profile} or \texttt{.bashrc} file.

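If you want to confirm that the variable is set before proceeding, you
can echo it from a new shell; a minimal sketch for an sh-style shell
(the path is the anonymous pserver repository given above):
\begin{verbatim}
% echo $CVSROOT
:pserver:cvsanon@mitgcm.org:/u/gcmpack
\end{verbatim}
If nothing is printed, re-check the corresponding shell startup file.
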
To get MITgcm through CVS, first register with the MITgcm CVS server
using the command:
\begin{verbatim}
% cvs login ( CVS password: cvsanon )
\end{verbatim}
You only need to do a ``cvs login'' once.

To obtain the latest sources type:
\begin{verbatim}
% cvs co MITgcm
\end{verbatim}
or to get a specific release type:
\begin{verbatim}
% cvs co -P -r checkpoint52i_post MITgcm
\end{verbatim}
The MITgcm web site contains further directions concerning the source
code and CVS.  It also contains a web interface to our CVS archive so
that one may easily view the state of files, revisions, and other
development milestones:
\begin{rawhtml} <A href="http://mitgcm.org/download" target="idontexist"> \end{rawhtml}
\begin{verbatim}
http://mitgcm.org/source_code.html
\end{verbatim}
\begin{rawhtml} </A> \end{rawhtml}

As a convenience, the MITgcm CVS server contains aliases which are
named subsets of the codebase.  These aliases can be especially
helpful when used over slow internet connections or on machines with
restricted storage space.  Table \ref{tab:cvsModules} contains a list
of the CVS aliases.
\begin{table}[htb]
  \centering
  \begin{tabular}[htb]{|lp{3.25in}|}\hline
    \textbf{Alias Name}    &  \textbf{Information (directories) Contained}  \\\hline
    \texttt{MITgcm\_code}  &  Only the source code -- none of the verification examples.  \\
    \texttt{MITgcm\_verif\_basic}
    &  Source code plus a small set of the verification examples
    (\texttt{global\_ocean.90x40x15}, \texttt{aim.5l\_cs}, \texttt{hs94.128x64x5},
    \texttt{front\_relax}, and \texttt{plume\_on\_slope}).  \\
    \texttt{MITgcm\_verif\_atmos}  &  Source code plus all of the atmospheric examples.  \\
    \texttt{MITgcm\_verif\_ocean}  &  Source code plus all of the oceanic examples.  \\
    \texttt{MITgcm\_verif\_all}    &  Source code plus all of the
    verification examples. \\\hline
  \end{tabular}
  \caption{MITgcm CVS Modules}
  \label{tab:cvsModules}
\end{table}

The checkout process creates a directory called \textit{MITgcm}. If
the directory \textit{MITgcm} exists, this command updates your code
based on the repository. Each directory in the source tree contains a
directory \textit{CVS}. This information is required by CVS to keep
track of your file versions with respect to the repository. Don't edit
the files in \textit{CVS}!  You can also use CVS to download code
updates.  More extensive information on using CVS for maintaining
MITgcm code can be found
\begin{rawhtml} <A href="http://mitgcm.org/usingcvstoget.html" target="idontexist"> \end{rawhtml}
here
\begin{rawhtml} </A> \end{rawhtml}
.
It is important to note that the CVS aliases in Table
\ref{tab:cvsModules} cannot be used in conjunction with the CVS
\texttt{-d DIRNAME} option.  However, the \texttt{MITgcm} directory
they create can be renamed following the check-out:
\begin{verbatim}
   %  cvs co MITgcm_verif_basic
   %  mv MITgcm MITgcm_verif_basic
\end{verbatim}

\subsection{Method 2 - Tar file download}
\label{sect:conventionalDownload}

If you do not have CVS on your system, you can download the model as a
tar file from the web site at:
\begin{rawhtml} <A href=http://mitgcm.org/download target="idontexist"> \end{rawhtml}
\begin{verbatim}
http://mitgcm.org/download/
\end{verbatim}
\begin{rawhtml} </A> \end{rawhtml}
The tar file still contains CVS information which we urge you not to
delete; even if you do not use CVS yourself, the information can help
us if you should need to send us your copy of the code.  If a recent
tar file does not exist, then please contact the developers through the
\begin{rawhtml} <A href="mailto:MITgcm-support@mitgcm.org"> \end{rawhtml}
MITgcm-support@mitgcm.org
\begin{rawhtml} </A> \end{rawhtml}
mailing list.

\subsubsection{Upgrading from an earlier version}

If you already have an earlier version of the code you can ``upgrade''
your copy instead of downloading the entire repository again. First,
``cd'' (change directory) to the top of your working copy:
\begin{verbatim}
% cd MITgcm
\end{verbatim}
and then issue a cvs update command such as:
\begin{verbatim}
% cvs -q update -r checkpoint52i_post -d -P
\end{verbatim}
This will update the ``tag'' to ``checkpoint52i\_post'', add any new
directories (-d) and remove any empty directories (-P). The -q option
means be quiet, which will reduce the number of messages you'll see in
the terminal. If you have modified the code prior to upgrading, CVS
will try to merge your changes with the upgrades. If there is a
conflict between your modifications and the upgrade, it will report
that file with a ``C'' in front, e.g.:
\begin{verbatim}
C model/src/ini_parms.F
\end{verbatim}
If the list of conflicts scrolled off the screen, you can re-issue the
cvs update command and it will report the conflicts. Conflicts are
indicated in the code by the delimiters ``$<<<<<<<$'', ``======='' and
``$>>>>>>>$''. For example,
{\small
\begin{verbatim}
<<<<<<< ini_parms.F
     & bottomDragLinear,myOwnBottomDragCoefficient,
=======
     & bottomDragLinear,bottomDragQuadratic,
>>>>>>> 1.18
\end{verbatim}
}
means that you added ``myOwnBottomDragCoefficient'' to a namelist at
the same time and place that we added ``bottomDragQuadratic''. You
need to resolve this conflict; in this case the line should be
changed to:
{\small
\begin{verbatim}
     & bottomDragLinear,bottomDragQuadratic,myOwnBottomDragCoefficient,
\end{verbatim}
}
and the lines with the delimiters ($<<<<<<<$, =======, $>>>>>>>$) deleted.
Unless you are making modifications which exactly parallel
developments we make, these types of conflicts should be rare.
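After editing, it is easy to overlook a conflict; any leftover markers
can be located with a standard \texttt{grep}. A sketch, assuming the
command is run from the top of the working copy and that the conflicts
are confined to \textit{model/src}:
\begin{verbatim}
% grep -r '^<<<<<<<' model/src
\end{verbatim}
Any file listed by this command still contains an unresolved merge.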

\paragraph*{Upgrading to the current pre-release version}

We don't make a ``release'' for every little patch and bug fix in
order to keep the frequency of upgrades to a minimum. However, if you
have run into a problem that we have already fixed in the latest code
and we haven't made a ``tag'' or ``release'' since that patch, then
you'll need to get the latest code:
\begin{verbatim}
% cvs -q update -A -d -P
\end{verbatim}
Unlike the ``check-out'' and ``update'' procedures above, there is no
``tag'' or release name. The -A option tells CVS to upgrade to the
very latest version. As a rule, we don't recommend this since you
might upgrade while we are in the process of checking in the code, so
that you may only have part of a patch. Using this method of updating
also means we can't tell what version of the code you are working
with. So please be sure you understand what you're doing.
264    
265  The ``numerical'' model is contained within a execution environment support  \section{Model and directory structure}
266  wrapper. This wrapper is designed to provide a general framework for  \begin{rawhtml}
267  grid-point models. MITgcmUV is a specific numerical model that uses the  <!-- CMIREDIR:directory_structure: -->
268  framework. Under this structure the model is split into execution  \end{rawhtml}
269  environment support code and conventional numerical model code. The  
270  execution environment support code is held under the \textit{eesupp}  The ``numerical'' model is contained within a execution environment
271  directory. The grid point model code is held under the \textit{model}  support wrapper. This wrapper is designed to provide a general
272  directory. Code execution actually starts in the \textit{eesupp} routines  framework for grid-point models. MITgcmUV is a specific numerical
273  and not in the \textit{model} routines. For this reason the top-level  model that uses the framework. Under this structure the model is split
274  \textit{MAIN.F} is in the \textit{eesupp/src} directory. In general,  into execution environment support code and conventional numerical
275  end-users should not need to worry about this level. The top-level routine  model code. The execution environment support code is held under the
276  for the numerical part of the code is in \textit{model/src/THE\_MODEL\_MAIN.F%  \textit{eesupp} directory. The grid point model code is held under the
277  }. Here is a brief description of the directory structure of the model under  \textit{model} directory. Code execution actually starts in the
278  the root tree (a detailed description is given in section 3: Code structure).  \textit{eesupp} routines and not in the \textit{model} routines. For
279    this reason the top-level \textit{MAIN.F} is in the
280    \textit{eesupp/src} directory. In general, end-users should not need
281    to worry about this level. The top-level routine for the numerical
282    part of the code is in \textit{model/src/THE\_MODEL\_MAIN.F}. Here is
283    a brief description of the directory structure of the model under the
284    root tree (a detailed description is given in section 3: Code
285    structure).

\begin{itemize}

\item \textit{bin}: this directory is initially empty. It is the
  default directory in which to compile the code.

\item \textit{diags}: contains the code relative to time-averaged
  diagnostics. It is subdivided into two subdirectories \textit{inc}
  and \textit{src} that contain include files (*.\textit{h} files) and
  Fortran subroutines (*.\textit{F} files), respectively.

\item \textit{doc}: contains brief documentation notes.

\item \textit{eesupp}: contains the execution environment source code.
  Also subdivided into two subdirectories \textit{inc} and
  \textit{src}.

\item \textit{exe}: this directory is initially empty. It is the
  default directory in which to execute the code.

\item \textit{model}: this directory contains the main source code.
  Also subdivided into two subdirectories \textit{inc} and
  \textit{src}.

\item \textit{pkg}: contains the source code for the packages. Each
  package corresponds to a subdirectory. For example, \textit{gmredi}
  contains the code related to the Gent-McWilliams/Redi scheme,
  \textit{aim} the code relative to the atmospheric intermediate
  physics. The packages are described in detail in section 3.

\item \textit{tools}: this directory contains various useful tools.
  For example, \textit{genmake2} is a script written in csh (C-shell)
  that should be used to generate your makefile. The directory
  \textit{adjoint} contains the makefile specific to the Tangent
  linear and Adjoint Compiler (TAMC) that generates the adjoint code.
  The latter is described in detail in part V.

\item \textit{utils}: this directory contains various utilities. The
  subdirectory \textit{knudsen2} contains code and a makefile that
  compute coefficients of the polynomial approximation to the knudsen
  formula for an ocean nonlinear equation of state. The
  \textit{matlab} subdirectory contains matlab scripts for reading
  model output directly into matlab. \textit{scripts} contains C-shell
  post-processing scripts for joining processor-based and tile-based
  model output.

\item \textit{verification}: this directory contains the model
  examples. See section \ref{sect:modelExamples}.

\end{itemize}

\section[MITgcm Example Experiments]{Example experiments}
\label{sect:modelExamples}
\begin{rawhtml}
<!-- CMIREDIR:modelExamples: -->
\end{rawhtml}

%% a set of twenty-four pre-configured numerical experiments

The MITgcm distribution comes with more than a dozen pre-configured
numerical experiments. Some of these example experiments are tests of
individual parts of the model code, but many are fully fledged
numerical simulations. A few of the examples are used for tutorial
documentation in sections \ref{sect:eg-baro} - \ref{sect:eg-global}.
The other examples follow the same general structure as the tutorial
examples. However, they only include brief instructions in a text file
called {\it README}.  The examples are located in subdirectories under
the directory \textit{verification}. Each example is briefly described
below.
\subsection{Full list of model examples}

\begin{enumerate}

\item \textit{exp0} - single layer, ocean double gyre (barotropic with
  free-surface). This experiment is described in detail in section
  \ref{sect:eg-baro}.

\item \textit{exp1} - Four layer, ocean double gyre. This experiment
  is described in detail in section \ref{sect:eg-baroc}.

\item \textit{exp2} - 4x4 degree global ocean simulation with steady
  climatological forcing. This experiment is described in detail in
  section \ref{sect:eg-global}.

\item \textit{exp4} - Flow over a Gaussian bump in open-water or
  channel with open boundaries.

\item \textit{exp5} - Inhomogenously forced ocean convection in a
  doubly periodic box.

\item \textit{front\_relax} - Relaxation of an ocean thermal front
  (test for Gent/McWilliams scheme). 2D (Y-Z).

\item \textit{internal wave} - Ocean internal wave forced by open
  boundary conditions.

\item \textit{natl\_box} - Eastern subtropical North Atlantic with KPP
  scheme; 1 month integration.

\item \textit{hs94.1x64x5} - Zonal averaged atmosphere using Held and
  Suarez '94 forcing.

\item \textit{hs94.128x64x5} - 3D atmosphere dynamics using Held and
  Suarez '94 forcing.

\item \textit{hs94.cs-32x32x5} - 3D atmosphere dynamics using Held and
  Suarez '94 forcing on the cubed sphere.

\item \textit{aim.5l\_zon-ave} - Intermediate Atmospheric physics.
  Global Zonal Mean configuration, 1x64x5 resolution.

\item \textit{aim.5l\_XZ\_Equatorial\_Slice} - Intermediate
  Atmospheric physics, equatorial Slice configuration.  2D (X-Z).

\item \textit{aim.5l\_Equatorial\_Channel} - Intermediate Atmospheric
  physics. 3D Equatorial Channel configuration.

\item \textit{aim.5l\_LatLon} - Intermediate Atmospheric physics.
  Global configuration, on latitude longitude grid with 128x64x5 grid
  points ($2.8^\circ$ resolution).

\item \textit{adjustment.128x64x1} - Barotropic adjustment problem on
  latitude longitude grid with 128x64 grid points ($2.8^\circ$
  resolution).

\item \textit{adjustment.cs-32x32x1} - Barotropic adjustment problem
  on cube sphere grid with 32x32 points per face (roughly $2.8^\circ$
  resolution).

\item \textit{advect\_cs} - Two-dimensional passive advection test on
  cube sphere grid.

\item \textit{advect\_xy} - Two-dimensional (horizontal plane) passive
  advection test on Cartesian grid.

\item \textit{advect\_yz} - Two-dimensional (vertical plane) passive
  advection test on Cartesian grid.

\item \textit{carbon} - Simple passive tracer experiment. Includes
  derivative calculation. Described in detail in section
  \ref{sect:eg-carbon-ad}.

\item \textit{flt\_example} - Example of using the float package.

\item \textit{global\_ocean.90x40x15} - Global circulation with GM,
  flux boundary conditions and poles.

\item \textit{global\_ocean\_pressure} - Global circulation in
  pressure coordinate (non-Boussinesq ocean model). Described in
  detail in section \ref{sect:eg-globalpressure}.

\item \textit{solid-body.cs-32x32x1} - Solid body rotation test for
  cube sphere grid.

\end{enumerate}

Each example directory has the following:

\begin{itemize}
\item \textit{code}: contains the code particular to the example. At a
  minimum, this directory includes the following files:

  \begin{itemize}
  \item \textit{code/CPP\_EEOPTIONS.h}: declares CPP keys relative to
    the ``execution environment'' part of the code. The default
    version is located in \textit{eesupp/inc}.

  \item \textit{code/CPP\_OPTIONS.h}: declares CPP keys relative to
    the ``numerical model'' part of the code. The default version is
    located in \textit{model/inc}.

  \item \textit{code/SIZE.h}: declares the size of the underlying
    computational grid.  The default version is located in
    \textit{model/inc}.
  \end{itemize}

  In addition, other include files and subroutines might be present in
  \textit{code} depending on the particular experiment. See Section 2
  for more details.

\item \textit{input}: contains the input data files required to run
  the example. At a minimum, the \textit{input} directory contains the
  following files:

  \begin{itemize}
  \item \textit{input/data}: this file, written as a namelist,
    specifies the main parameters for the experiment.

  \item \textit{input/data.pkg}: contains parameters relative to the
    packages used in the experiment.

  \item \textit{input/eedata}: this file contains ``execution
    environment'' data. At present, this consists of a specification
    of the number of threads to use in $X$ and $Y$ under multithreaded
    execution.
  \end{itemize}

  In addition, you will also find in this directory the forcing and
  topography files as well as the files describing the initial state
  of the experiment.  This varies from experiment to experiment. See
  Section 2 for more details.

\item \textit{results}: this directory contains the output file
  \textit{output.txt} produced by the simulation example. This file is
  useful for comparison with your own output when you run the
  experiment.
\end{itemize}

Once you have chosen the example you want to run, you are ready to
compile the code.


\section[Building MITgcm]{Building the code}
\label{sect:buildingCode}
\begin{rawhtml}
<!-- CMIREDIR:buildingCode: -->
\end{rawhtml}

To compile the code, we use the {\em make} program. This uses a file
({\em Makefile}) that allows us to pre-process source files, specify
compiler and optimization options, and also figure out any file
dependencies. We supply a script ({\em genmake2}), described in
section \ref{sect:genmake}, that automatically creates the {\em
Makefile} for you. You then need to build the dependencies and
compile the code.

As an example, let's assume that you want to build and run experiment
\textit{verification/exp2}. There are multiple ways and places to
actually do this, but here let's build the code in
\textit{verification/exp2/input}:
\begin{verbatim}
% cd verification/exp2/input
\end{verbatim}
First, build the {\em Makefile}:
\begin{verbatim}
% ../../../tools/genmake2 -mods=../code
\end{verbatim}
The command line option tells {\em genmake2} to override model source
code with any files in the directory {\em ../code/}.
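For instance, if a modified \textit{SIZE.h} is placed in
\textit{../code}, it will be compiled in place of the default
\textit{model/inc/SIZE.h}. A hypothetical listing of the
\textit{../code} directory for such an experiment might look like:
\begin{verbatim}
% ls ../code
CPP_EEOPTIONS.h  CPP_OPTIONS.h  SIZE.h
\end{verbatim}
(The exact set of files varies from experiment to experiment.)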

On many systems, the {\em genmake2} program will be able to
automatically recognize the hardware, find compilers and other tools
within the user's path (``echo \$PATH''), and then choose an
appropriate set of options from the files (``optfiles'') contained in
the {\em tools/build\_options} directory.  Under some circumstances, a
user may have to create a new ``optfile'' in order to specify the
exact combination of compiler, compiler flags, libraries, and other
options necessary to build a particular configuration of MITgcm.  In
such cases, it is generally helpful to read the existing ``optfiles''
and mimic their syntax.
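For illustration only, the core of an ``optfile'' is a set of shell
variable assignments that {\em genmake2} reads; a minimal,
hypothetical example might contain something like:
\begin{verbatim}
FC='f77'
DEFINES='-DWORDLENGTH=4'
FFLAGS='-extend_source'
FOPTIM='-O3'
NOOPTFLAGS='-O0'
\end{verbatim}
The variable names follow those used in the files already present in
{\em tools/build\_options}; the flag values shown here are purely
illustrative and must be adapted to your compiler.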

Through the MITgcm-support list, the MITgcm developers are willing to
provide help writing or modifying ``optfiles''.  We also encourage
users to post new ``optfiles'' (particularly ones for new machines or
architectures) to the
\begin{rawhtml} <A href="mailto:MITgcm-support@mitgcm.org"> \end{rawhtml}
MITgcm-support@mitgcm.org
\begin{rawhtml} </A> \end{rawhtml}
list.

To specify an optfile to {\em genmake2}, the syntax is:
\begin{verbatim}
% ../../../tools/genmake2 -mods=../code -of /path/to/optfile
\end{verbatim}

Once a {\em Makefile} has been generated, we create the dependencies:
\begin{verbatim}
% make depend
\end{verbatim}
This modifies the {\em Makefile} by attaching a [long] list of files
upon which other files depend. The purpose of this is to reduce
re-compilation if and when you start to modify the code. The {\tt make
depend} command also creates links from the model source to this
directory.  It is important to note that the {\tt make depend} stage
will occasionally produce warnings or errors since the dependency
parsing tool is unable to find all of the necessary header files
(\textit{e.g.} \texttt{netcdf.inc}).  In these circumstances, it is
usually OK to ignore the warnings/errors and proceed to the next step.

Next compile the code:
\begin{verbatim}
% make
\end{verbatim}
The {\tt make} command creates an executable called \textit{mitgcmuv}.
Additional make ``targets'' are defined within the makefile to aid in
the production of adjoint and other versions of MITgcm.
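For example, assuming the standard {\em genmake2}-generated {\em
Makefile}, targets such as the following are available:
\begin{verbatim}
% make Clean     (remove the files generated by a previous build)
% make adall     (build the adjoint model; requires the TAF/TAMC tools)
\end{verbatim}
Consult the generated {\em Makefile} itself for the complete list of
targets.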

Now you are ready to run the model. General instructions for doing so are
given in section \ref{sect:runModel}. Here, we can run the model with:
\begin{verbatim}
% ./mitgcmuv > output.txt
\end{verbatim}
where we are re-directing the stream of text output to the file {\em
output.txt}.


\section[Running MITgcm]{Running the model in prognostic mode}
   
\label{sect:runModel}
\begin{rawhtml}
<!-- CMIREDIR:runModel: -->
\end{rawhtml}

If compilation finished successfully (section \ref{sect:buildingCode})
then an executable called \texttt{mitgcmuv} will now exist in the
local directory.

To run the model as a single process (\textit{i.e.} not in parallel)
simply type:
\begin{verbatim}
% ./mitgcmuv
\end{verbatim}
You would normally re-direct the {\em stdout} stream into a file:
\begin{verbatim}
% ./mitgcmuv > output.txt
\end{verbatim}
In the event that the model encounters an error and stops, it is very
helpful to include the last few lines of this \texttt{output.txt} file
along with the (\texttt{stderr}) error message within any bug reports.

For the example experiments in {\em verification}, an example of the
output is kept in {\em results/output.txt} for comparison. You can
compare your {\em output.txt} with the corresponding one for that
experiment to check that the set-up works.
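A quick (if crude) way of making this comparison on a Unix-like system
is, for example:
\begin{verbatim}
% diff output.txt ../results/output.txt | less
\end{verbatim}
Note that small differences in the last few digits of the printed
numbers can appear when a different platform, compiler, or
optimization level is used.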


\subsection{Output files}

The model produces various output files.  Depending upon the I/O
package selected (either \texttt{mdsio} or \texttt{mnc} or both, as
determined by both the compile-time settings and the run-time flags in
\texttt{data.pkg}), the following output may appear.
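As an illustration, netCDF output can be switched on at run time
(assuming the \texttt{mnc} package was included at compile time) with
a \texttt{data.pkg} fragment such as:
\begin{verbatim}
 &PACKAGES
 useMNC=.TRUE.,
 &
\end{verbatim}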


\subsubsection{MDSIO output files}

The ``traditional'' output files are generated by the \texttt{mdsio}
package.  At a minimum, the instantaneous ``state'' of the model is
written out, which is made of the following files:

\begin{itemize}
\item \textit{U.00000nIter} - zonal component of velocity field (m/s
  and $> 0$ eastward).
\end{itemize}

The rolling checkpoint files contain the same fields as the pickup
files but are named differently; they can be
used to restart the model but are overwritten every other time they are
output to save disk space during long integrations.


\subsubsection{MNC output files}

Unlike the \texttt{mdsio} output, the \texttt{mnc}-generated output
is usually (though not necessarily) placed within a subdirectory with
a name such as \texttt{mnc\_test\_\${DATE}\_\${SEQ}}.  The files
within this subdirectory are all in the ``self-describing'' netCDF
format and can thus be browsed and/or plotted using tools such as:
\begin{itemize}
\item At a minimum, the \texttt{ncdump} utility is typically included
  with every netCDF install:
  \begin{rawhtml} <A href="http://www.unidata.ucar.edu/packages/netcdf/"> \end{rawhtml}
\begin{verbatim}
http://www.unidata.ucar.edu/packages/netcdf/
\end{verbatim}
  \begin{rawhtml} </A> \end{rawhtml}

\item The \texttt{ncview} utility is a very convenient and quick way
  to plot netCDF data and it runs on most OSes:
  \begin{rawhtml} <A href="http://meteora.ucsd.edu/~pierce/ncview_home_page.html"> \end{rawhtml}
\begin{verbatim}
http://meteora.ucsd.edu/~pierce/ncview_home_page.html
\end{verbatim}
  \begin{rawhtml} </A> \end{rawhtml}

\item MatLAB(c) and other common post-processing environments provide
  various netCDF interfaces including:
  \begin{rawhtml} <A href="http://woodshole.er.usgs.gov/staffpages/cdenham/public_html/MexCDF/nc4ml5.html"> \end{rawhtml}
\begin{verbatim}
http://woodshole.er.usgs.gov/staffpages/cdenham/public_html/MexCDF/nc4ml5.html
\end{verbatim}
  \begin{rawhtml} </A> \end{rawhtml}
\end{itemize}
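For instance, a quick look at the header (dimensions, variables, and
attributes) of one of the generated files can be obtained with:
\begin{verbatim}
% ncdump -h mnc_test_0001/state.0000000000.t001.nc
\end{verbatim}
The file name above is merely illustrative; the actual names depend
upon the \texttt{mnc} run-time configuration.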


\subsection{Looking at the output}

The ``traditional'' or \texttt{mdsio} model data are written according
to a ``meta/data'' file format.  Each variable is associated with two
files with suffix names \textit{.data} and \textit{.meta}. The
\textit{.data} file contains the data written in binary form
(big\_endian by default). The \textit{.meta} file is a ``header'' file
that contains information about the size and the structure of the
\textit{.data} file. This way of organizing the output is particularly
useful when running multi-processor calculations. The base version of
the model includes a few matlab utilities to read output files written
in this format. The matlab scripts are located in the directory
\textit{utils/matlab} under the root tree. The script \textit{rdmds.m}
reads the data. Look at the comments inside the script to see how to
use it.

Some examples of reading and visualizing some output in {\em Matlab}:
\begin{verbatim}
>> eta=rdmds('Eta',[0:10:100]);
>> for n=1:11; imagesc(eta(:,:,n)');axis ij;colorbar;pause(.5);end
\end{verbatim}

Similar scripts for netCDF output (\texttt{rdmnc.m}) are available.
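A hypothetical \textit{rdmnc.m} session, analogous to the
\textit{rdmds.m} example above, might look like:
\begin{verbatim}
>> S=rdmnc('mnc_test_0001/state.*');
>> imagesc(S.Eta(:,:,end)');axis ij;colorbar;
\end{verbatim}
where the fields of the returned structure \texttt{S} follow the
variable names stored in the netCDF files.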
   
 When you are ready to run the model in the configuration you want, the  
 easiest thing is to use and adapt the setup of the case studies experiment  
 (described previously) that is the closest to your configuration. Then, the  
 amount of setup will be minimized. In this section, we focus on the setup  
 relative to the ''numerical model'' part of the code (the setup relative to  
 the ''execution environment'' part is covered in the parallel implementation  
 section) and on the variables and parameters that you are likely to change.  
   
 \subsection{Configuration and setup}  
   
 The CPP keys relative to the ''numerical model'' part of the code are all  
 defined and set in the file \textit{CPP\_OPTIONS.h }in the directory \textit{%  
 model/inc }or in one of the \textit{code }directories of the case study  
 experiments under \textit{verification.} The model parameters are defined  
 and declared in the file \textit{model/inc/PARAMS.h }and their default  
 values are set in the routine \textit{model/src/set\_defaults.F. }The  
 default values can be modified in the namelist file \textit{data }which  
 needs to be located in the directory where you will run the model. The  
 parameters are initialized in the routine \textit{model/src/ini\_parms.F}.  
 Look at this routine to see in what part of the namelist the parameters are  
 located.  
   
 In what follows the parameters are grouped into categories related to the  
 computational domain, the equations solved in the model, and the simulation  
 controls.  
   
 \subsection{Computational domain, geometry and time-discretization}  
   
 \begin{itemize}  
 \item dimensions  
 \end{itemize}  
   
 The number of points in the x, y,\textit{\ }and r\textit{\ }directions are  
 represented by the variables \textbf{sNx}\textit{, }\textbf{sNy}\textit{, }%  
 and \textbf{Nr}\textit{\ }respectively which are declared and set in the  
 file \textit{model/inc/SIZE.h. }(Again, this assumes a mono-processor  
 calculation. For multiprocessor calculations see section on parallel  
 implementation.)  
   
 \begin{itemize}  
 \item grid  
 \end{itemize}  
   
 Three different grids are available: cartesian, spherical polar, and  
 curvilinear (including the cubed sphere). The grid is set through the  
 logical variables \textbf{usingCartesianGrid}\textit{, }\textbf{%  
 usingSphericalPolarGrid}\textit{, }and \textit{\ }\textbf{%  
 usingCurvilinearGrid}\textit{. }In the case of spherical and curvilinear  
 grids, the southern boundary is defined through the variable \textbf{phiMin}%  
 \textit{\ }which corresponds to the latitude of the southern most cell face  
 (in degrees). The resolution along the x and y directions is controlled by  
 the 1D arrays \textbf{delx}\textit{\ }and \textbf{dely}\textit{\ }(in meters  
 in the case of a cartesian grid, in degrees otherwise). The vertical grid  
 spacing is set through the 1D array \textbf{delz }for the ocean (in meters)  
 or \textbf{delp}\textit{\ }for the atmosphere (in Pa). The variable \textbf{%  
 Ro\_SeaLevel} represents the standard position of Sea-Level in ''R''  
 coordinate. This is typically set to 0m for the ocean (default value) and 10$%  
 ^{5}$Pa for the atmosphere. For the atmosphere, also set the logical  
 variable \textbf{groundAtK1} to '.\texttt{TRUE}.'. which put the first level  
 (k=1) at the lower boundary (ground).  
   
 For the cartesian grid case, the Coriolis parameter $f$ is set through the  
 variables \textbf{f0}\textit{\ }and \textbf{beta}\textit{\ }which correspond  
 to the reference Coriolis parameter (in s$^{-1}$) and $\frac{\partial f}{%  
 \partial y}$(in m$^{-1}$s$^{-1}$) respectively. If \textbf{beta }\textit{\ }%  
 is set to a nonzero value, \textbf{f0}\textit{\ }is the value of $f$ at the  
 southern edge of the domain.  
   
 \begin{itemize}  
 \item topography - full and partial cells  
 \end{itemize}  
   
 The domain bathymetry is read from a file that contains a 2D (x,y) map of  
 depths (in m) for the ocean or pressures (in Pa) for the atmosphere. The  
 file name is represented by the variable \textbf{bathyFile}\textit{. }The  
 file is assumed to contain binary numbers giving the depth (pressure) of the  
 model at each grid cell, ordered with the x coordinate varying fastest. The  
 points are ordered from low coordinate to high coordinate for both axes. The  
 model code applies without modification to enclosed, periodic, and double  
 periodic domains. Periodicity is assumed by default and is suppressed by  
 setting the depths to 0m for the cells at the limits of the computational  
 domain (note: not sure this is the case for the atmosphere). The precision  
 with which to read the binary data is controlled by the integer variable  
 \textbf{readBinaryPrec }which can take the value \texttt{32} (single  
 precision) or \texttt{64} (double precision). See the matlab program \textit{%  
 gendata.m }in the \textit{input }directories under \textit{verification }to  
 see how the bathymetry files are generated for the case study experiments.  
   
 To use the partial cell capability, the variable \textbf{hFacMin}\textit{\ }%  
 needs to be set to a value between 0 and 1 (it is set to 1 by default)  
 corresponding to the minimum fractional size of the cell. For example if the  
 bottom cell is 500m thick and \textbf{hFacMin}\textit{\ }is set to 0.1, the  
 actual thickness of the cell (i.e. used in the code) can cover a range of  
 discrete values 50m apart from 50m to 500m depending on the value of the  
 bottom depth (in \textbf{bathyFile}) at this point.  
   
 Note that the bottom depths (or pressures) need not coincide with the models  
 levels as deduced from \textbf{delz}\textit{\ }or\textit{\ }\textbf{delp}%  
 \textit{. }The model will interpolate the numbers in \textbf{bathyFile}%  
 \textit{\ }so that they match the levels obtained from \textbf{delz}\textit{%  
 \ }or\textit{\ }\textbf{delp}\textit{\ }and \textbf{hFacMin}\textit{. }  
   
 (Note: the atmospheric case is a bit more complicated than what is written  
 here I think. To come soon...)  
   
 \begin{itemize}  
 \item time-discretization  
 \end{itemize}  
   
 The time steps are set through the real variables \textbf{deltaTMom }and  
 \textbf{deltaTtracer }(in s) which represent the time step for the momentum  
 and tracer equations, respectively. For synchronous integrations, simply set  
 the two variables to the same value (or you can prescribe one time step only  
 through the variable \textbf{deltaT}). The Adams-Bashforth stabilizing  
 parameter is set through the variable \textbf{abEps }(dimensionless). The  
 stagger baroclinic time stepping can be activated by setting the logical  
 variable \textbf{staggerTimeStep }to '.\texttt{TRUE}.'.  
   
 \subsection{Equation of state}  
   
 First, because the model equations are written in terms of perturbations, a  
 reference thermodynamic state needs to be specified. This is done through  
 the 1D arrays \textbf{tRef}\textit{\ }and \textbf{sRef}. \textbf{tRef }%  
 specifies the reference potential temperature profile (in $^{o}$C for  
 the ocean and $^{o}$K for the atmosphere) starting from the level  
 k=1. Similarly, \textbf{sRef}\textit{\ }specifies the reference salinity  
 profile (in ppt) for the ocean or the reference specific humidity profile  
 (in g/kg) for the atmosphere.  
   
 The form of the equation of state is controlled by the character variables  
 \textbf{buoyancyRelation}\textit{\ }and \textbf{eosType}\textit{. }\textbf{%  
 buoyancyRelation}\textit{\ }is set to '\texttt{OCEANIC}' by default and  
 needs to be set to '\texttt{ATMOSPHERIC}' for atmosphere simulations. In  
 this case, \textbf{eosType}\textit{\ }must be set to '\texttt{IDEALGAS}'.  
 For the ocean, two forms of the equation of state are available: linear (set  
 \textbf{eosType}\textit{\ }to '\texttt{LINEAR}') and a polynomial  
 approximation to the full nonlinear equation ( set \textbf{eosType}\textit{\  
 }to '\texttt{POLYNOMIAL}'). In the linear case, you need to specify the  
 thermal and haline expansion coefficients represented by the variables  
 \textbf{tAlpha}\textit{\ }(in K$^{-1}$) and \textbf{sBeta}\textit{\ }(in ppt$%  
 ^{-1}$). For the nonlinear case, you need to generate a file of polynomial  
 coefficients called \textit{POLY3.COEFFS. }To do this, use the program  
 \textit{utils/knudsen2/knudsen2.f }under the model tree (a Makefile is  
 available in the same directory and you will need to edit the number and the  
 values of the vertical levels in \textit{knudsen2.f }so that they match  
 those of your configuration). \textit{\ }  
   
 \subsection{Momentum equations}  
   
In this section, we focus for now only on the parameters that you are likely
to change, i.e. those related to forcing and dissipation. The details
relevant to the vector-invariant form of the equations and the various
advection schemes are not covered for the moment. We assume that you use the
standard form of the momentum equations (i.e. the flux form) with the
default advection scheme. Also, there are a few logical variables that allow
you to turn on/off various terms in the momentum equation. These variables
are called \textbf{momViscosity}, \textbf{momAdvection},
\textbf{momForcing}, \textbf{useCoriolis}, \textbf{momPressureForcing},
\textbf{momStepping}, and \textbf{metricTerms}, and are assumed to be set to
'.\texttt{TRUE}.' here. Look at the file \textit{model/inc/PARAMS.h} for a
precise definition of these variables.
   
 \begin{itemize}  
 \item initialization  
 \end{itemize}  
   
 The velocity components are initialized to 0 unless the simulation is  
 starting from a pickup file (see section on simulation control parameters).  
   
 \begin{itemize}  
 \item forcing  
 \end{itemize}  
   
This section only applies to the ocean. You need to provide wind-stress data
in the two files \textbf{zonalWindFile} and \textbf{meridWindFile},
corresponding to the zonal and meridional components of the wind stress,
respectively (if you want the stress to be along the direction of only one
of the model horizontal axes, you only need to generate one file). The
format of the files is similar to that of the bathymetry file. The zonal
(meridional) stress data are assumed to be in Pa and located at U-points
(V-points). As for the bathymetry, the precision with which to read the
binary data is controlled by the variable \textbf{readBinaryPrec}. See the
matlab program \textit{gendata.m} in the \textit{input} directories under
\textit{verification} to see how simple analytical wind forcing data are
generated for the case study experiments.
   
There is also the possibility of prescribing time-dependent periodic
forcing. To do this, concatenate the successive time records into a single
file (for each stress component) ordered in an (x, y, t) fashion and set the
following variables: \textbf{periodicExternalForcing} to '.\texttt{TRUE}.',
\textbf{externForcingPeriod} to the period (in s) with which the forcing
varies (typically 1 month), and \textbf{externForcingCycle} to the repeat
time (in s) of the forcing (typically 1 year -- note:
\textbf{externForcingCycle} must be a multiple of
\textbf{externForcingPeriod}). With these variables set up, the model will
interpolate the forcing linearly in time at each iteration.
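For instance, with twelve 30-day monthly records concatenated into each
stress file, a sketch of the corresponding entries (namelist grouping
follows the tutorial examples; values are illustrative) would be:
\begin{verbatim}
 &PARM03
 periodicExternalForcing=.TRUE.,
 externForcingPeriod=2592000.,
 externForcingCycle=31104000.,
 &
\end{verbatim}
Here 2592000~s is one 30-day month and 31104000~s is twelve such months, so
the cycle is a multiple of the period, as required.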
   
 \begin{itemize}  
 \item dissipation  
 \end{itemize}  
   
The lateral eddy viscosity coefficient is specified through the variable
\textbf{viscAh} (in m$^{2}$s$^{-1}$). The vertical eddy viscosity
coefficient is specified through the variable \textbf{viscAz} (in
m$^{2}$s$^{-1}$) for the ocean and \textbf{viscAp} (in Pa$^{2}$s$^{-1}$) for
the atmosphere. The vertical viscous fluxes can be computed implicitly by
setting the logical variable \textbf{implicitViscosity} to
'.\texttt{TRUE}.'. In addition, biharmonic mixing can be added as well
through the variable \textbf{viscA4} (in m$^{4}$s$^{-1}$). On a spherical
polar grid, you might also need to set the variable \textbf{cosPower},
which is set to 0 by default and which represents the power of the cosine of
latitude by which the viscosity is multiplied. Slip or no-slip conditions at
the lateral and bottom boundaries are specified through the logical
variables \textbf{no\_slip\_sides} and \textbf{no\_slip\_bottom}. If set to
'\texttt{.FALSE.}', free-slip boundary conditions are applied. If no-slip
boundary conditions are applied at the bottom, a bottom drag can be applied
as well. Two forms are available: linear (set the variable
\textbf{bottomDragLinear} in s$^{-1}$) and quadratic (set the variable
\textbf{bottomDragQuadratic} in m$^{-1}$).
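As an illustrative sketch (the values are typical of coarse-resolution
tutorial experiments, not recommendations for your configuration), the
dissipation parameters might appear in \texttt{PARM01} as:
\begin{verbatim}
 &PARM01
 viscAh=4.E5,
 viscAz=1.E-3,
 no_slip_sides=.TRUE.,
 no_slip_bottom=.TRUE.,
 &
\end{verbatim}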
   
 The Fourier and Shapiro filters are described elsewhere.  
   
 \begin{itemize}  
 \item C-D scheme  
 \end{itemize}  
   
If you run at a sufficiently coarse resolution, you will need the C-D scheme
for the computation of the Coriolis terms. The variable \textbf{tauCD},
which represents the C-D scheme coupling timescale (in s), needs to be set.
   
 \begin{itemize}  
 \item calculation of pressure/geopotential  
 \end{itemize}  
   
First, to run a non-hydrostatic ocean simulation, set the logical variable
\textbf{nonHydrostatic} to '.\texttt{TRUE}.'. The pressure field is then
inverted through a 3D elliptic equation. (Note: this capability is not yet
available for the atmosphere.) By default, a hydrostatic simulation is
assumed and a 2D elliptic equation is used to invert the pressure field. The
parameters controlling the behaviour of the elliptic solvers are the
variables \textbf{cg2dMaxIters} and \textbf{cg2dTargetResidual} for the 2D
case and \textbf{cg3dMaxIters} and \textbf{cg3dTargetResidual} for the 3D
case. You probably won't need to alter the default values.
   
For the calculation of the surface pressure (for the ocean) or surface
geopotential (for the atmosphere) you need to set the logical variables
\textbf{rigidLid} and \textbf{implicitFreeSurface} (set one to
'.\texttt{TRUE}.' and the other to '.\texttt{FALSE}.' depending on how you
want to deal with the ocean upper or atmosphere lower boundary).
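A minimal sketch of the corresponding entries for a hydrostatic ocean run
with an implicit free surface (the solver settings are shown at illustrative
tutorial values; the split between \texttt{PARM01} and \texttt{PARM02}
follows the verification experiments' \textit{data} files):
\begin{verbatim}
 &PARM01
 implicitFreeSurface=.TRUE.,
 rigidLid=.FALSE.,
 &
 &PARM02
 cg2dMaxIters=1000,
 cg2dTargetResidual=1.E-13,
 &
\end{verbatim}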
   
 \subsection{Tracer equations}  
   
 This section covers the tracer equations i.e. the potential temperature  
 equation and the salinity (for the ocean) or specific humidity (for the  
 atmosphere) equation. As for the momentum equations, we only describe for  
 now the parameters that you are likely to change. The logical variables  
 \textbf{tempDiffusion}\textit{, }\textbf{tempAdvection}\textit{, }\textbf{%  
 tempForcing}\textit{,} and \textbf{tempStepping} allow you to turn on/off  
 terms in the temperature equation (same thing for salinity or specific  
 humidity with variables \textbf{saltDiffusion}\textit{, }\textbf{%  
 saltAdvection}\textit{\ }etc). These variables are all assumed here to be  
 set to '.\texttt{TRUE}.'. Look at file \textit{model/inc/PARAMS.h }for a  
 precise definition.  
   
 \begin{itemize}  
 \item initialization  
 \end{itemize}  
   
 The initial tracer data can be contained in the binary files \textbf{%  
 hydrogThetaFile }and \textbf{hydrogSaltFile}. These files should contain 3D  
 data ordered in an (x, y, r) fashion with k=1 as the first vertical level.  
 If no file names are provided, the tracers are then initialized with the  
 values of \textbf{tRef }and \textbf{sRef }mentioned above (in the equation  
 of state section). In this case, the initial tracer data are uniform in x  
 and y for each depth level.  
   
 \begin{itemize}  
 \item forcing  
 \end{itemize}  
   
This part is more relevant to the ocean; the procedure for the atmosphere is
not completely stabilized at the moment.
   
A combination of flux data and relaxation terms can be used for driving the
tracer equations. For potential temperature, heat flux data (in W/m$^{2}$)
can be stored in the 2D binary file \textbf{surfQfile}. Alternatively or in
addition, the forcing can be specified through a relaxation term. The SST
data to which the model surface temperatures are restored are assumed to be
stored in the 2D binary file \textbf{thetaClimFile}. The corresponding
relaxation time scale coefficient is set through the variable
\textbf{tauThetaClimRelax} (in s). The same procedure applies for salinity
with the variable names \textbf{EmPmRfile}, \textbf{saltClimFile}, and
\textbf{tauSaltClimRelax} for the freshwater flux (in m/s) and surface
salinity (in ppt) data files and the relaxation time scale coefficient (in
s), respectively. Also for salinity, if the CPP key
\textbf{USE\_NATURAL\_BCS} is turned on, natural boundary conditions are
applied, i.e., when computing the surface salinity tendency, the freshwater
flux is multiplied by the model surface salinity instead of a constant
salinity value.
   
 As for the other input files, the precision with which to read the data is  
 controlled by the variable \textbf{readBinaryPrec}. Time-dependent, periodic  
 forcing can be applied as well following the same procedure used for the  
 wind forcing data (see above).  
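As a sketch of how the forcing files and relaxation time scales might be
specified (the file names are placeholders, and the grouping into
\texttt{PARM03} and \texttt{PARM05} follows the tutorial examples; check a
verification experiment's \textit{data} file for your release):
\begin{verbatim}
 &PARM03
 tauThetaClimRelax=5184000.,
 tauSaltClimRelax =7776000.,
 &
 &PARM05
 surfQfile    ='Qnet.bin',
 thetaClimFile='SST.bin',
 EmPmRfile    ='EmPmR.bin',
 saltClimFile ='SSS.bin',
 &
\end{verbatim}
Here the relaxation time scales correspond to 60 and 90 days, respectively.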
   
 \begin{itemize}  
 \item dissipation  
 \end{itemize}  
   
Lateral eddy diffusivities for temperature and salinity/specific humidity
are specified through the variables \textbf{diffKhT} and \textbf{diffKhS}
(in m$^{2}$/s). Vertical eddy diffusivities are specified through the
variables \textbf{diffKzT} and \textbf{diffKzS} (in m$^{2}$/s) for the ocean
and \textbf{diffKpT} and \textbf{diffKpS} (in Pa$^{2}$/s) for the
atmosphere. The vertical diffusive fluxes can be computed implicitly by
setting the logical variable \textbf{implicitDiffusion} to
'.\texttt{TRUE}.'. In addition, biharmonic diffusivities can be specified as
well through the coefficients \textbf{diffK4T} and \textbf{diffK4S} (in
m$^{4}$/s). Note that the cosine power scaling (specified through
\textbf{cosPower} -- see the momentum equations section) is applied to the
tracer diffusivities (Laplacian and biharmonic) as well. The Gent and
McWilliams parameterization for oceanic tracers is described in the package
section. Finally, note that tracers can also be subject to Fourier and
Shapiro filtering (see the corresponding section on these filters).
   
 \begin{itemize}  
 \item ocean convection  
 \end{itemize}  
   
Two options are available to parameterize ocean convection. One is to use
the convective adjustment scheme; in this case, you need to set the variable
\textbf{cadjFreq}, which represents the frequency (in s) with which the
adjustment algorithm is called, to a non-zero value (if set to a negative
value by the user, the model will set it to the tracer time step). The other
option is to parameterize convection with implicit vertical diffusion. To do
this, set the logical variable \textbf{implicitDiffusion} to
'.\texttt{TRUE}.' and the real variable \textbf{ivdc\_kappa} to the value
(in m$^{2}$/s) you wish the tracer vertical diffusivities to have when
mixing tracers vertically due to static instabilities. Note that
\textbf{cadjFreq} and \textbf{ivdc\_kappa} cannot both have non-zero values.
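A sketch of the implicit-vertical-diffusion option in \texttt{PARM01} (the
diffusivity value is illustrative only):
\begin{verbatim}
 &PARM01
 implicitDiffusion=.TRUE.,
 ivdc_kappa=10.,
 &
\end{verbatim}
Remember that \textbf{cadjFreq} must then be left at zero.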
   
 \subsection{Simulation controls}  
   
The model ``clock'' is defined by the variable \textbf{deltaTClock} (in s),
which determines the IO frequencies and is used in tagging output.
Typically, you will set it to the tracer time step for accelerated runs
(otherwise it is simply set to the default time step \textbf{deltaT}). The
frequencies of checkpointing and dumping of the model state are referenced
to this clock (see below).
   
 \begin{itemize}  
 \item run duration  
 \end{itemize}  
   
The beginning of a simulation is set by specifying a start time (in s)
through the real variable \textbf{startTime} or by specifying an initial
iteration number through the integer variable \textbf{nIter0}. If these
variables are set to nonzero values, the model will look for a ``pickup''
file \textit{pickup.0000nIter0} to restart the integration. The end of a
simulation is set through the real variable \textbf{endTime} (in s).
Alternatively, you can instead specify the number of time steps to execute
through the integer variable \textbf{nTimeSteps}.
   
 \begin{itemize}  
 \item frequency of output  
 \end{itemize}  
   
Real variables defining the frequencies (in s) with which output files are
written to disk need to be set up. \textbf{dumpFreq} controls the frequency
with which the instantaneous state of the model is saved. \textbf{chkPtFreq}
and \textbf{pchkPtFreq} control the output frequency of rolling and
permanent checkpoint files, respectively. See section 1.5.1 (Output files)
for the definition of model state and checkpoint files. In addition,
time-averaged fields can be written out by setting the variable
\textbf{taveFreq} (in s). The precision with which to write the binary data
is controlled by the integer variable \textbf{writeBinaryPrec} (set it to
\texttt{32} or \texttt{64}).
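Pulling the above together, a sketch of the time-stepping and output
controls in \texttt{PARM03} (all values are illustrative; a 1200~s time step
with monthly dumps and yearly permanent checkpoints is typical of the
coarse-resolution tutorials):
\begin{verbatim}
 &PARM03
 nIter0=0,
 nTimeSteps=100,
 deltaT=1200.,
 pchkPtFreq=31104000.,
 chkPtFreq =2592000.,
 dumpFreq  =2592000.,
 taveFreq  =2592000.,
 &
\end{verbatim}
Note that Fortran namelist variable names are case-insensitive, so minor
capitalization differences between the text and a given \textit{data} file
are harmless.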
