% /[MITgcm]/manual/s_getstarted/text/getting_started.tex
% Revision 1.1.1.1 (adcroft, Wed Aug 8 16:15:31 2001 UTC) to revision 1.31 (edhill, Tue Aug 9 21:52:09 2005 UTC)
% $Header$
% $Name$

%\section{Getting started}

In this section, we describe how to use the model. In the first part,
we provide enough information to help you get started with the model.
We believe the best way to familiarize yourself with the model is to
run the case study examples provided with the base version.
Information on how to obtain, compile, and run the code is found
there, as well as a brief description of the model directory structure
and the case study examples.  The latter and the code structure are
described more fully in chapters \ref{chap:discretization} and
\ref{chap:sarch}, respectively. We then provide information on how to
customize the code when you are ready to try implementing the
configuration you have in mind.

\section{Where to find information}
\label{sect:whereToFindInfo}
\begin{rawhtml}
<!-- CMIREDIR:whereToFindInfo: -->
\end{rawhtml}

A web site is maintained for release 2 (``Pelican'') of MITgcm:
\begin{rawhtml} <A href=http://mitgcm.org/pelican/ target="idontexist"> \end{rawhtml}
\begin{verbatim}
http://mitgcm.org/pelican
\end{verbatim}
\begin{rawhtml} </A> \end{rawhtml}
Here you will find an on-line version of this document, a
``browsable'' copy of the code, and a searchable database of the model
and site, as well as links for downloading the model and
documentation, links to data sources, and links to other related
sites.

There is also a web-archived support mailing list for the model that
you can email at \texttt{MITgcm-support@mitgcm.org} or browse at:
\begin{rawhtml} <A href=http://mitgcm.org/mailman/listinfo/mitgcm-support/ target="idontexist"> \end{rawhtml}
\begin{verbatim}
http://mitgcm.org/mailman/listinfo/mitgcm-support/
http://mitgcm.org/pipermail/mitgcm-support/
\end{verbatim}
\begin{rawhtml} </A> \end{rawhtml}
Essentially all of the MITgcm web pages can be searched using a
popular web crawler such as Google or through our own search facility:
\begin{rawhtml} <A href=http://mitgcm.org/mailman/htdig/ target="idontexist"> \end{rawhtml}
\begin{verbatim}
http://mitgcm.org/htdig/
\end{verbatim}
\begin{rawhtml} </A> \end{rawhtml}
%%% http://www.google.com/search?q=hydrostatic+site%3Amitgcm.org


\section{Obtaining the code}
\label{sect:obtainingCode}
\begin{rawhtml}
<!-- CMIREDIR:obtainingCode: -->
\end{rawhtml}

MITgcm can be downloaded from our system by following the instructions
below. As a courtesy, we ask that you send e-mail to us at
\begin{rawhtml} <A href=mailto:MITgcm-support@mitgcm.org> \end{rawhtml}
MITgcm-support@mitgcm.org
\begin{rawhtml} </A> \end{rawhtml}
so that we can keep track of who is using the model and in what
application.  You can download the model in two ways:

\begin{enumerate}
\item Using CVS software. CVS is a freely available source code
management tool. To use CVS you need to have the software installed.
Many systems come with CVS pre-installed; otherwise, good places to
look for the software for a particular platform are
\begin{rawhtml} <A href=http://www.cvshome.org/ target="idontexist"> \end{rawhtml}
cvshome.org
\begin{rawhtml} </A> \end{rawhtml}
and
\begin{rawhtml} <A href=http://www.wincvs.org/ target="idontexist"> \end{rawhtml}
wincvs.org
\begin{rawhtml} </A> \end{rawhtml}
.

\item Using a tar file. This method is simple and does not require
any special software. However, it does not provide easy support for
maintenance updates.

\end{enumerate}

\subsection{Method 1 - Checkout from CVS}
\label{sect:cvs_checkout}

If CVS is available on your system, we strongly encourage you to use
it. CVS provides an efficient and elegant way of organizing your code
and keeping track of your changes. If CVS is not available on your
machine, you can also download a tar file.

Before you can use CVS, the following environment variable(s) should
be set within your shell.  For a csh or tcsh shell, put the following
\begin{verbatim}
% setenv CVSROOT :pserver:cvsanon@mitgcm.org:/u/gcmpack
\end{verbatim}
in your \texttt{.cshrc} or \texttt{.tcshrc} file.  For bash or sh
shells, put:
\begin{verbatim}
% export CVSROOT=':pserver:cvsanon@mitgcm.org:/u/gcmpack'
\end{verbatim}
in your \texttt{.profile} or \texttt{.bashrc} file.

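For a one-off checkout you can also set the variable directly in the
current shell session rather than in a startup file. The following is
a minimal sketch using the same \texttt{CVSROOT} value as above, with
an \texttt{echo} to confirm the variable is visible:

```shell
# Sketch: set CVSROOT for the current session only and confirm it.
# The pserver path is the one given in the text above.
export CVSROOT=':pserver:cvsanon@mitgcm.org:/u/gcmpack'
echo "$CVSROOT"
```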
To get MITgcm through CVS, first register with the MITgcm CVS server
using the command:
\begin{verbatim}
% cvs login ( CVS password: cvsanon )
\end{verbatim}
You only need to do a ``cvs login'' once.

To obtain the latest sources, type:
\begin{verbatim}
% cvs co MITgcm
\end{verbatim}
or, to get a specific release, type:
\begin{verbatim}
% cvs co -P -r checkpoint52i_post MITgcm
\end{verbatim}
The MITgcm web site contains further directions concerning the source
code and CVS.  It also contains a web interface to our CVS archive so
that one may easily view the state of files, revisions, and other
development milestones:
\begin{rawhtml} <A href="http://mitgcm.org/download" target="idontexist"> \end{rawhtml}
\begin{verbatim}
http://mitgcm.org/source_code.html
\end{verbatim}
\begin{rawhtml} </A> \end{rawhtml}

As a convenience, the MITgcm CVS server contains aliases which are
named subsets of the codebase.  These aliases can be especially
helpful when used over slow internet connections or on machines with
restricted storage space.  Table \ref{tab:cvsModules} contains a list
of the CVS aliases.
\begin{table}[htb]
  \centering
  \begin{tabular}[htb]{|lp{3.25in}|}\hline
    \textbf{Alias Name}    &  \textbf{Information (directories) Contained}  \\\hline
    \texttt{MITgcm\_code}  &  Only the source code -- none of the verification examples.  \\
    \texttt{MITgcm\_verif\_basic}
    &  Source code plus a small set of the verification examples
    (\texttt{global\_ocean.90x40x15}, \texttt{aim.5l\_cs}, \texttt{hs94.128x64x5},
    \texttt{front\_relax}, and \texttt{plume\_on\_slope}).  \\
    \texttt{MITgcm\_verif\_atmos}  &  Source code plus all of the atmospheric examples.  \\
    \texttt{MITgcm\_verif\_ocean}  &  Source code plus all of the oceanic examples.  \\
    \texttt{MITgcm\_verif\_all}    &  Source code plus all of the
    verification examples. \\\hline
  \end{tabular}
  \caption{MITgcm CVS Modules}
  \label{tab:cvsModules}
\end{table}

The checkout process creates a directory called \texttt{MITgcm}. If
the directory \texttt{MITgcm} exists, this command updates your code
based on the repository. Each directory in the source tree contains a
directory \texttt{CVS}. This information is required by CVS to keep
track of your file versions with respect to the repository. Don't edit
the files in \texttt{CVS}!  You can also use CVS to download code
updates.  More extensive information on using CVS for maintaining
MITgcm code can be found
\begin{rawhtml} <A href="http://mitgcm.org/usingcvstoget.html" target="idontexist"> \end{rawhtml}
here
\begin{rawhtml} </A> \end{rawhtml}
.
It is important to note that the CVS aliases in Table
\ref{tab:cvsModules} cannot be used in conjunction with the CVS
\texttt{-d DIRNAME} option.  However, the \texttt{MITgcm} directories
they create can be renamed following the check-out:
\begin{verbatim}
   %  cvs co MITgcm_verif_basic
   %  mv MITgcm MITgcm_verif_basic
\end{verbatim}

\subsection{Method 2 - Tar file download}
\label{sect:conventionalDownload}

If you do not have CVS on your system, you can download the model as a
tar file from the web site at:
\begin{rawhtml} <A href=http://mitgcm.org/download target="idontexist"> \end{rawhtml}
\begin{verbatim}
http://mitgcm.org/download/
\end{verbatim}
\begin{rawhtml} </A> \end{rawhtml}
The tar file still contains CVS information which we urge you not to
delete; even if you do not use CVS yourself, the information can help
us if you should need to send us your copy of the code.  If a recent
tar file does not exist, then please contact the developers through
the
\begin{rawhtml} <A href="mailto:MITgcm-support@mitgcm.org"> \end{rawhtml}
MITgcm-support@mitgcm.org
\begin{rawhtml} </A> \end{rawhtml}
mailing list.
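Once downloaded, the tar file is unpacked with standard \texttt{tar}
flags, which creates the \texttt{MITgcm} directory tree. The file name
below is hypothetical (it depends on the release you download); for
illustration the same flags are exercised on a small locally created
archive:

```shell
# Sketch: unpack a downloaded distribution.  "mitgcm.tar.gz" is a
# hypothetical name -- substitute the file you actually downloaded:
#   tar -xzf mitgcm.tar.gz && cd MITgcm

# For illustration, the same flags on a locally created archive:
mkdir -p MITgcm/model
tar -czf mitgcm-demo.tar.gz MITgcm
rm -r MITgcm
tar -xzf mitgcm-demo.tar.gz
ls MITgcm
```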

\subsubsection{Upgrading from an earlier version}

If you already have an earlier version of the code, you can
``upgrade'' your copy instead of downloading the entire repository
again. First, ``cd'' (change directory) to the top of your working
copy:
\begin{verbatim}
% cd MITgcm
\end{verbatim}
and then issue the cvs update command, such as:
\begin{verbatim}
% cvs -q update -r checkpoint52i_post -d -P
\end{verbatim}
This will update the ``tag'' to ``checkpoint52i\_post'', add any new
directories (-d) and remove any empty directories (-P). The -q option
means ``be quiet'', which reduces the number of messages you'll see in
the terminal. If you have modified the code prior to upgrading, CVS
will try to merge your changes with the upgrades. If there is a
conflict between your modifications and the upgrade, it will report
that file with a ``C'' in front, e.g.:
\begin{verbatim}
C model/src/ini_parms.F
\end{verbatim}
If the list of conflicts scrolled off the screen, you can re-issue the
cvs update command and it will report the conflicts. Conflicts are
indicated in the code by the delimiters ``$<<<<<<<$'', ``=======''
and ``$>>>>>>>$''. For example,
{\small
\begin{verbatim}
<<<<<<< ini_parms.F
     & bottomDragLinear,myOwnBottomDragCoefficient,
=======
     & bottomDragLinear,bottomDragQuadratic,
>>>>>>> 1.18
\end{verbatim}
}
means that you added ``myOwnBottomDragCoefficient'' to a namelist at
the same time and place that we added ``bottomDragQuadratic''. You
need to resolve this conflict; in this case, the line should be
changed to:
{\small
\begin{verbatim}
     & bottomDragLinear,bottomDragQuadratic,myOwnBottomDragCoefficient,
\end{verbatim}
}
and the lines with the delimiters ($<<<<<<<$, =======, $>>>>>>>$)
should be deleted.  Unless you are making modifications which exactly
parallel developments we make, these types of conflicts should be
rare.
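If conflicts are scattered across many files, a common technique
(not specific to MITgcm) is to search for the leading conflict marker
with \texttt{grep}. In this sketch the conflicted file is fabricated
purely for demonstration:

```shell
# Sketch: list files that still contain CVS conflict markers.
# The demonstration file is created here for illustration only.
printf '%s\n' \
  '<<<<<<< ini_parms.F' \
  '     & bottomDragLinear,myOwnBottomDragCoefficient,' \
  '=======' \
  '     & bottomDragLinear,bottomDragQuadratic,' \
  '>>>>>>> 1.18' > ini_parms_demo.F
grep -l '^<<<<<<<' ini_parms_demo.F
```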

\paragraph*{Upgrading to the current pre-release version}

We don't make a ``release'' for every little patch and bug fix, in
order to keep the frequency of upgrades to a minimum. However, if you
have run into a problem that we have already fixed in the latest
code, and we haven't made a ``tag'' or ``release'' since that patch,
then you'll need to get the latest code:
\begin{verbatim}
% cvs -q update -A -d -P
\end{verbatim}
Unlike the ``check-out'' and ``update'' procedures above, there is no
``tag'' or release name. The -A option tells CVS to upgrade to the
very latest version. As a rule, we don't recommend this since you
might upgrade while we are in the process of checking in the code, so
that you may only have part of a patch. Using this method of updating
also means we can't tell what version of the code you are working
with. So please be sure you understand what you're doing.

\section{Model and directory structure}
\begin{rawhtml}
<!-- CMIREDIR:directory_structure: -->
\end{rawhtml}

The ``numerical'' model is contained within an execution environment
support wrapper. This wrapper is designed to provide a general
framework for grid-point models. MITgcmUV is a specific numerical
model that uses the framework. Under this structure the model is split
into execution environment support code and conventional numerical
model code. The execution environment support code is held under the
\texttt{eesupp} directory. The grid point model code is held under the
\texttt{model} directory. Code execution actually starts in the
\texttt{eesupp} routines and not in the \texttt{model} routines. For
this reason the top-level \texttt{MAIN.F} is in the
\texttt{eesupp/src} directory. In general, end-users should not need
to worry about this level. The top-level routine for the numerical
part of the code is in \texttt{model/src/THE\_MODEL\_MAIN.F}. Here is
a brief description of the directory structure of the model under the
root tree (a detailed description of the code structure is given in
chapter \ref{chap:sarch}).

\begin{itemize}

\item \texttt{bin}: this directory is initially empty. It is the
  default directory in which to compile the code.

\item \texttt{diags}: contains the code related to time-averaged
  diagnostics. It is subdivided into two subdirectories \texttt{inc}
  and \texttt{src} that contain include files (\texttt{*.h} files) and
  Fortran subroutines (\texttt{*.F} files), respectively.

\item \texttt{doc}: contains brief documentation notes.

\item \texttt{eesupp}: contains the execution environment source code.
  Also subdivided into two subdirectories \texttt{inc} and
  \texttt{src}.

\item \texttt{exe}: this directory is initially empty. It is the
  default directory in which to execute the code.

\item \texttt{model}: this directory contains the main source code.
  Also subdivided into two subdirectories \texttt{inc} and
  \texttt{src}.

\item \texttt{pkg}: contains the source code for the packages. Each
  package corresponds to a subdirectory. For example, \texttt{gmredi}
  contains the code related to the Gent-McWilliams/Redi scheme, and
  \texttt{aim} the code related to the atmospheric intermediate
  physics. The packages are described in detail in chapter
  \ref{chap:sarch}.

\item \texttt{tools}: this directory contains various useful tools.
  For example, \texttt{genmake2} is a script written in csh (C-shell)
  that should be used to generate your makefile. The directory
  \texttt{adjoint} contains the makefile specific to the Tangent
  linear and Adjoint Compiler (TAMC) that generates the adjoint code.
  The latter is described in detail in Part V.

\item \texttt{utils}: this directory contains various utilities. The
  subdirectory \texttt{knudsen2} contains code and a makefile that
  compute coefficients of the polynomial approximation to the Knudsen
  formula for an ocean nonlinear equation of state. The
  \texttt{matlab} subdirectory contains Matlab scripts for reading
  model output directly into Matlab. \texttt{scripts} contains C-shell
  post-processing scripts for joining processor-based and tile-based
  model output.

\item \texttt{verification}: this directory contains the model
  examples. See section \ref{sect:modelExamples}.

\end{itemize}
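The layout described above can be pictured as a simple directory
listing. This sketch recreates only the top-level directory names in a
scratch location for illustration; a real source tree would be
obtained with ``\texttt{cvs co MITgcm}'' as described earlier:

```shell
# Sketch: the top-level directories described above, recreated
# locally for illustration (not a real checkout)
mkdir -p demo_MITgcm/bin demo_MITgcm/diags demo_MITgcm/doc \
  demo_MITgcm/eesupp demo_MITgcm/exe demo_MITgcm/model \
  demo_MITgcm/pkg demo_MITgcm/tools demo_MITgcm/utils \
  demo_MITgcm/verification
ls demo_MITgcm
```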

\section[MITgcm Example Experiments]{Example experiments}
\label{sect:modelExamples}
\begin{rawhtml}
<!-- CMIREDIR:modelExamples: -->
\end{rawhtml}

%% a set of twenty-four pre-configured numerical experiments

The MITgcm distribution comes with more than a dozen pre-configured
numerical experiments. Some of these example experiments are tests of
individual parts of the model code, but many are fully fledged
numerical simulations. A few of the examples are used for tutorial
documentation in sections \ref{sect:eg-baro} through
\ref{sect:eg-global}. The other examples follow the same general
structure as the tutorial examples; however, they only include brief
instructions in a text file called \texttt{README}.  The examples are
located in subdirectories under the directory \texttt{verification}.
Each example is briefly described below.

\subsection{Full list of model examples}

\begin{enumerate}

\item \texttt{exp0} - Single layer, ocean double gyre (barotropic with
  free-surface). This experiment is described in detail in section
  \ref{sect:eg-baro}.

\item \texttt{exp1} - Four layer, ocean double gyre. This experiment
  is described in detail in section \ref{sect:eg-baroc}.

\item \texttt{exp2} - 4x4 degree global ocean simulation with steady
  climatological forcing. This experiment is described in detail in
  section \ref{sect:eg-global}.

\item \texttt{exp4} - Flow over a Gaussian bump in open-water or
  channel with open boundaries.

\item \texttt{exp5} - Inhomogeneously forced ocean convection in a
  doubly periodic box.

\item \texttt{front\_relax} - Relaxation of an ocean thermal front
  (test for Gent/McWilliams scheme). 2D (Y-Z).

\item \texttt{internal wave} - Ocean internal wave forced by open
  boundary conditions.

\item \texttt{natl\_box} - Eastern subtropical North Atlantic with KPP
  scheme; 1 month integration.

\item \texttt{hs94.1x64x5} - Zonally averaged atmosphere using Held
  and Suarez '94 forcing.

\item \texttt{hs94.128x64x5} - 3D atmosphere dynamics using Held and
  Suarez '94 forcing.

\item \texttt{hs94.cs-32x32x5} - 3D atmosphere dynamics using Held and
  Suarez '94 forcing on the cubed sphere.

\item \texttt{aim.5l\_zon-ave} - Intermediate Atmospheric physics.
  Global Zonal Mean configuration, 1x64x5 resolution.

\item \texttt{aim.5l\_XZ\_Equatorial\_Slice} - Intermediate
  Atmospheric physics, Equatorial Slice configuration.  2D (X-Z).

\item \texttt{aim.5l\_Equatorial\_Channel} - Intermediate Atmospheric
  physics. 3D Equatorial Channel configuration.

\item \texttt{aim.5l\_LatLon} - Intermediate Atmospheric physics.
  Global configuration, on latitude-longitude grid with 128x64x5 grid
  points ($2.8^\circ$ resolution).

\item \texttt{adjustment.128x64x1} - Barotropic adjustment problem on
  latitude-longitude grid with 128x64 grid points ($2.8^\circ$
  resolution).

\item \texttt{adjustment.cs-32x32x1} - Barotropic adjustment problem
  on cube sphere grid with 32x32 points per face (roughly $2.8^\circ$
  resolution).

\item \texttt{advect\_cs} - Two-dimensional passive advection test on
  cube sphere grid.

\item \texttt{advect\_xy} - Two-dimensional (horizontal plane) passive
  advection test on Cartesian grid.

\item \texttt{advect\_yz} - Two-dimensional (vertical plane) passive
  advection test on Cartesian grid.

\item \texttt{carbon} - Simple passive tracer experiment. Includes
  derivative calculation. Described in detail in section
  \ref{sect:eg-carbon-ad}.

\item \texttt{flt\_example} - Example of using the float package.

\item \texttt{global\_ocean.90x40x15} - Global circulation with GM,
  flux boundary conditions and poles.

\item \texttt{global\_ocean\_pressure} - Global circulation in
  pressure coordinate (non-Boussinesq ocean model). Described in
  detail in section \ref{sect:eg-globalpressure}.

\item \texttt{solid-body.cs-32x32x1} - Solid body rotation test for
  cube sphere grid.

\end{enumerate}

\subsection{Directory structure of model examples}

Each example directory has the following subdirectories:

\begin{itemize}
\item \texttt{code}: contains the code particular to the example. At a
  minimum, this directory includes the following files:

  \begin{itemize}
  \item \texttt{code/CPP\_EEOPTIONS.h}: declares CPP keys relative to
    the ``execution environment'' part of the code. The default
    version is located in \texttt{eesupp/inc}.

  \item \texttt{code/CPP\_OPTIONS.h}: declares CPP keys relative to
    the ``numerical model'' part of the code. The default version is
    located in \texttt{model/inc}.

  \item \texttt{code/SIZE.h}: declares the size of the underlying
    computational grid. The default version is located in
    \texttt{model/inc}.
  \end{itemize}

  In addition, other include files and subroutines might be present in
  \texttt{code} depending on the particular experiment.

\item \texttt{input}: contains the input data files required to run
  the example. At a minimum, the \texttt{input} directory contains the
  following files:

  \begin{itemize}
  \item \texttt{input/data}: this file, written as a namelist,
    specifies the main parameters for the experiment.

  \item \texttt{input/data.pkg}: contains parameters relative to the
    packages used in the experiment.

  \item \texttt{input/eedata}: this file contains ``execution
    environment'' data. At present, this consists of a specification
    of the number of threads to use in $X$ and $Y$ under multithreaded
    execution.
  \end{itemize}

  In addition, you will also find in this directory the forcing and
  topography files as well as the files describing the initial state
  of the experiment. This varies from experiment to experiment.

\item \texttt{results}: this directory contains the output file
  \texttt{output.txt} produced by the simulation example. This file is
  useful for comparison with your own output when you run the
  experiment.
\end{itemize}
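To give a feel for the namelist format used by \texttt{input/data},
here is a schematic fragment. The group and parameter names shown
(\texttt{PARM01}, \texttt{PARM03}, \texttt{viscAh},
\texttt{nTimeSteps}, \texttt{deltaT}) follow MITgcm conventions, but
the values are illustrative only; consult the \texttt{input/data} file
of a \texttt{verification} experiment for a working example:

```
# Continuous equation parameters (illustrative values only)
 &PARM01
 viscAh=4.E2,
 &

# Time stepping parameters (illustrative values only)
 &PARM03
 nTimeSteps=10,
 deltaT=1200.,
 &
```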
   
 Once you have chosen the example you want to run, you are ready to compile  
 the code.  
   
 \subsection{Compiling the code}  
   
 \subsubsection{The script \textit{genmake}}  
   
 To compile the code, use the script \textit{genmake} located in the \textit{%  
 tools} directory. \textit{genmake} is a script that generates the makefile.  
It has been written so that the code can be compiled on a wide variety of
 machines and systems. However, if it doesn't work the first time on your  
 platform, you might need to edit certain lines of \textit{genmake} in the  
 section containing the setups for the different machines. The file is  
 structured like this:  
 \begin{verbatim}  
         .  
         .  
         .  
 general instructions (machine independent)  
         .  
         .  
         .  
     - setup machine 1  
     - setup machine 2  
     - setup machine 3  
     - setup machine 4  
        etc  
         .  
         .  
         .  
 \end{verbatim}  
   
 For example, the setup corresponding to a DEC alpha machine is reproduced  
 here:  
 \begin{verbatim}  
   case OSF1+mpi:  
     echo "Configuring for DEC Alpha"  
     set CPP        = ( '/usr/bin/cpp -P' )  
     set DEFINES    = ( ${DEFINES}  '-DTARGET_DEC -DWORDLENGTH=1' )  
     set KPP        = ( 'kapf' )  
     set KPPFILES   = ( 'main.F' )  
     set KFLAGS1    = ( '-scan=132 -noconc -cmp=' )  
     set FC         = ( 'f77' )  
     set FFLAGS     = ( '-convert big_endian -r8 -extend_source -automatic -call_shared -notransform_loops -align dcommons' )  
     set FOPTIM     = ( '-O5 -fast -tune host -inline all' )  
     set NOOPTFLAGS = ( '-O0' )  
     set LIBS       = ( '-lfmpi -lmpi -lkmp_osfp10 -pthread' )  
     set NOOPTFILES = ( 'barrier.F different_multiple.F external_fields_load.F')  
     set RMFILES    = ( '*.p.out' )  
     breaksw  
 \end{verbatim}  
   
 Typically, these are the lines that you might need to edit to make \textit{%  
 genmake} work on your platform if it doesn't work the first time. \textit{%  
 genmake} understands several options that are described here:  
   
 \begin{itemize}  
 \item -rootdir=dir  
   
 indicates where the model root directory is relative to the directory where  
 you are compiling. This option is not needed if you compile in the \textit{%  
 bin} directory (which is the default compilation directory) or within the  
 \textit{verification} tree.  
   
 \item -mods=dir1,dir2,...  
   
indicates the relative or absolute paths of the directories whose source
files should take precedence over the default versions (located in
\textit{model}, \textit{eesupp},...). Typically, this option is used when
running the examples; see below.
   
 \item -enable=pkg1,pkg2,...  

enables packages source code \textit{pkg1}, \textit{pkg2},... when creating
the makefile.

\item -disable=pkg1,pkg2,...

disables packages source code \textit{pkg1}, \textit{pkg2},... when creating
the makefile.
   
 \item -platform=machine  
   
 specifies the platform for which you want the makefile. In general, you  
 won't need this option. \textit{genmake} will select the right machine for  
 you (the one you're working on!). However, this option is useful if you have  
 a choice of several compilers on one machine and you want to use the one  
 that is not the default (ex: \texttt{pgf77} instead of \texttt{f77} under  
 Linux).  
   
\item -mpi

this is used when you want to run the model in parallel processing mode
under MPI (see the section on parallel computation for more details).

\item -jam

this is used when you want to run the model in parallel processing mode
under JAM (see the section on parallel computation for more details).
 \end{itemize}  

For some of the examples, there is a file called \textit{.genmakerc} in the
\textit{input} directory that has the relevant \textit{genmake} options for
that particular example. In this way you don't need to type the options when
invoking \textit{genmake}.

\subsubsection{Compiling}

Let's assume that you want to run, say, example \textit{exp2} in the
\textit{input} directory. To compile the code, type the following commands
from the model root tree:
\begin{verbatim}
% cd verification/exp2/input
% ../../../tools/genmake
% make depend
% make
\end{verbatim}

If there is no \textit{.genmakerc} in the \textit{input} directory, you have
to use the following options when invoking \textit{genmake}:
\begin{verbatim}
% ../../../tools/genmake  -mods=../code
\end{verbatim}

In addition, you will probably want to disable some of the packages. Taking
again the case of \textit{exp2}, the full \textit{genmake} command will
probably look like this:
\begin{verbatim}
% ../../../tools/genmake  -mods=../code  -disable=kpp,gmredi,aim,...
\end{verbatim}

The make command creates an executable called \textit{mitgcmuv}.

Note that you can compile and run the code in a directory other than
\textit{input}. You just need to make sure that you copy the input data
files into the directory where you want to run the model. For example, to
compile from \textit{code}:
\begin{verbatim}
% cd verification/exp2/code
% ../../../tools/genmake
% make depend
% make
\end{verbatim}
\subsection{Running the model}

The first thing to do is to run the code by typing \textit{mitgcmuv} and see
what happens. You can compare what you get with what is in the
\textit{results} directory. Unless noted otherwise, most examples are set up
to run for a few time steps only so that you can quickly figure out whether
the model is working or not.

\subsubsection{Output files}

The model produces various output files. At a minimum, the instantaneous
``state'' of the model is written out, which is made of the following files:

\begin{itemize}
\item \textit{U.00000nIter} - zonal component of velocity field (m/s and
$>0$ eastward).

\item \textit{V.00000nIter} - meridional component of velocity field (m/s
and $>0$ northward).

\item \textit{W.00000nIter} - vertical component of velocity field (ocean:
m/s and $>0$ upward, atmosphere: Pa/s and $>0$ towards increasing pressure,
i.e. downward).

\item \textit{T.00000nIter} - potential temperature (ocean: $^{\circ}$C,
atmosphere: K).

\item \textit{S.00000nIter} - ocean: salinity (psu), atmosphere: water vapor
(g/kg).

\item \textit{Eta.00000nIter} - ocean: surface elevation (m), atmosphere:
surface pressure anomaly (Pa).
\end{itemize}

The chain \textit{00000nIter} consists of ten digits that specify the
iteration number at which the output is written out. For example,
\textit{U.0000000300} is the zonal velocity at iteration 300.

In addition, a ``pickup'' or ``checkpoint'' file called:

\begin{itemize}
\item \textit{pickup.00000nIter}
\end{itemize}

is written out. This file represents the state of the model in a condensed
form and is used for restarting the integration. If the C-D scheme is used,
there is an additional ``pickup'' file:

\begin{itemize}
\item \textit{pickup\_cd.00000nIter}
\end{itemize}

containing the D-grid velocity data that has to be written out as well in
order to restart the integration. Rolling checkpoint files are the same as
the pickup files but are named differently. Their names contain the chain
\textit{ckptA} or \textit{ckptB} instead of \textit{00000nIter}. They can be
used to restart the model but are overwritten every other time they are
output to save disk space during long integrations.

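The iteration suffix is simply the iteration number zero-padded to ten digits; for instance, in Python:

```python
def mds_filename(prefix, niter):
    # e.g. mds_filename("U", 300) gives "U.0000000300"
    return "%s.%010d" % (prefix, niter)
```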
 \subsubsection{Looking at the output}  
   
All the model data are written according to a ``meta/data'' file format.
Each variable is associated with two files with suffix names \textit{.data}
and \textit{.meta}. The \textit{.data} file contains the data written in
binary form (big-endian by default). The \textit{.meta} file is a
``header'' file that contains information about the size and the structure
of the \textit{.data} file. This way of organizing the output is
particularly useful when running multi-processor calculations. The base
version of the model includes a few matlab utilities to read output files
written in this format. The matlab scripts are located in the directory
\textit{utils/matlab} under the root tree. The script \textit{rdmds.m} reads
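If matlab is not available, a bare-bones Python equivalent of reading one such file can look like this (a sketch only: it assumes a single tile and that you have taken the dimensions and precision from the \textit{.meta} file yourself, which is what \textit{rdmds.m} normally does for you):

```python
import numpy as np

def read_mds_data(fname, shape, prec=32):
    # .data files are raw big-endian binary; prec should match the
    # precision the model wrote (32- or 64-bit floats).
    dtype = '>f4' if prec == 32 else '>f8'
    return np.fromfile(fname, dtype=dtype).reshape(shape)
```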
   
 \section{Code structure}  
   
 \section{Doing it yourself: customizing the code}  
   
\subsection{Configuration and setup}
   
When you are ready to run the model in the configuration you want, the
easiest thing is to use and adapt the setup of the case study experiment
(described previously) that is the closest to your configuration. Then, the
amount of setup will be minimized. In this section, we focus on the setup
relative to the ``numerical model'' part of the code (the setup relative to
the ``execution environment'' part is covered in the parallel
implementation section) and on the variables and parameters that you are
likely to change.

The CPP keys relative to the ``numerical model'' part of the code are all
defined and set in the file \textit{CPP\_OPTIONS.h} in the directory
\textit{model/inc} or in one of the \textit{code} directories of the case
study experiments under \textit{verification}. The model parameters are
defined and declared in the file \textit{model/inc/PARAMS.h} and their
default values are set in the routine \textit{model/src/set\_defaults.F}.
The default values can be modified in the namelist file \textit{data} which
needs to be located in the directory where you will run the model. The
parameters are initialized in the routine \textit{model/src/ini\_parms.F}.
Look at this routine to see in what part of the namelist the parameters are
located.
   
In what follows, the parameters are grouped into categories related to the
 computational domain, the equations solved in the model, and the simulation  
 controls.  
706    
 \subsubsection{Computational domain, geometry and time-discretization}  
707    
708  \begin{itemize}  \subsubsection{MNC output files}
 \item dimensions  
 \end{itemize}  
   
The number of points in the x, y, and r directions are represented by the
variables \textbf{sNx}, \textbf{sNy}, and \textbf{Nr} respectively, which
are declared and set in the file \textit{model/inc/SIZE.h}. (Again, this
assumes a mono-processor calculation. For multiprocessor calculations see
the section on parallel implementation.)
   
 \begin{itemize}  
 \item grid  
 \end{itemize}  
   
Three different grids are available: cartesian, spherical polar, and
curvilinear (including the cubed sphere). The grid is set through the
logical variables \textbf{usingCartesianGrid},
\textbf{usingSphericalPolarGrid}, and \textbf{usingCurvilinearGrid}. In the
case of spherical and curvilinear grids, the southern boundary is defined
through the variable \textbf{phiMin} which corresponds to the latitude of
the southernmost cell face (in degrees). The resolution along the x and y
directions is controlled by the 1D arrays \textbf{delx} and \textbf{dely}
(in meters in the case of a cartesian grid, in degrees otherwise). The
vertical grid spacing is set through the 1D array \textbf{delz} for the
ocean (in meters) or \textbf{delp} for the atmosphere (in Pa). The variable
\textbf{Ro\_SeaLevel} represents the standard position of sea level in the
``R'' coordinate. This is typically set to 0~m for the ocean (default
value) and $10^{5}$~Pa for the atmosphere. For the atmosphere, also set the
logical variable \textbf{groundAtK1} to '.\texttt{TRUE}.', which puts the
first level (k=1) at the lower boundary (ground).
   
For the cartesian grid case, the Coriolis parameter $f$ is set through the
variables \textbf{f0} and \textbf{beta} which correspond to the reference
Coriolis parameter (in s$^{-1}$) and $\frac{\partial f}{\partial y}$ (in
m$^{-1}$s$^{-1}$) respectively. If \textbf{beta} is set to a nonzero value,
\textbf{f0} is the value of $f$ at the southern edge of the domain.
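In other words, on a beta plane the Coriolis parameter used by the model is
\[
f(y) = f_{0} + \beta \, y ,
\]
where $y$ is the distance (in m) northward from the southern edge of the
domain.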
   
 \begin{itemize}  
 \item topography - full and partial cells  
 \end{itemize}  
   
The domain bathymetry is read from a file that contains a 2D (x,y) map of
depths (in m) for the ocean or pressures (in Pa) for the atmosphere. The
file name is represented by the variable \textbf{bathyFile}. The file is
assumed to contain binary numbers giving the depth (pressure) of the model
at each grid cell, ordered with the x coordinate varying fastest. The
points are ordered from low coordinate to high coordinate for both axes.
The model code applies without modification to enclosed, periodic, and
double periodic domains. Periodicity is assumed by default and is
suppressed by setting the depths to 0~m for the cells at the limits of the
computational domain (note: not sure this is the case for the atmosphere).
The precision with which to read the binary data is controlled by the
integer variable \textbf{readBinaryPrec} which can take the value
\texttt{32} (single precision) or \texttt{64} (double precision). See the
matlab program \textit{gendata.m} in the \textit{input} directories under
\textit{verification} to see how the bathymetry files are generated for the
case study experiments.
   
To use the partial cell capability, the variable \textbf{hFacMin} needs to
be set to a value between 0 and 1 (it is set to 1 by default) corresponding
to the minimum fractional size of the cell. For example, if the bottom cell
is 500~m thick and \textbf{hFacMin} is set to 0.1, the actual thickness of
the cell (i.e. as used in the code) can cover a range of discrete values
50~m apart, from 50~m to 500~m, depending on the value of the bottom depth
(in \textbf{bathyFile}) at this point.

Note that the bottom depths (or pressures) need not coincide with the
model's levels as deduced from \textbf{delz} or \textbf{delp}. The model
will interpolate the numbers in \textbf{bathyFile} so that they match the
levels obtained from \textbf{delz} or \textbf{delp} and \textbf{hFacMin}.
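The rounding just described can be illustrated with a few lines of Python (a simplified sketch of the idea, not the model's actual algorithm):

```python
def bottom_cell_thickness(remaining, dz=500.0, hfacmin=0.1):
    # Round the water left over in the bottom cell to a multiple of
    # hfacmin*dz (50 m here), never thinner than hfacmin*dz and never
    # thicker than the full cell.
    quantum = hfacmin * dz
    n = round(remaining / quantum)
    n = max(1, min(n, round(dz / quantum)))
    return n * quantum
```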
   
 (Note: the atmospheric case is a bit more complicated than what is written  
 here I think. To come soon...)  
709    
\begin{itemize}
\item time-discretization
\end{itemize}

The time steps are set through the real variables \textbf{deltaTMom} and
\textbf{deltaTtracer} (in s) which represent the time step for the momentum
and tracer equations, respectively. For synchronous integrations, simply set
the two variables to the same value (or you can prescribe one time step only
through the variable \textbf{deltaT}). The Adams-Bashforth stabilizing
parameter is set through the variable \textbf{abEps} (dimensionless). The
staggered baroclinic time stepping can be activated by setting the logical
variable \textbf{staggerTimeStep} to '.\texttt{TRUE}.'.
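For reference, the quasi-second-order Adams-Bashforth extrapolation that \textbf{abEps} enters can be sketched as follows (schematic, not the model code):

```python
def ab2_step(u, g_now, g_prev, dt, ab_eps=0.1):
    # Extrapolate the tendency G using its current and previous
    # values; a small positive ab_eps damps the computational mode.
    g_ab = (1.5 + ab_eps) * g_now - (0.5 + ab_eps) * g_prev
    return u + dt * g_ab
```

With a constant tendency the extrapolation is exact, so a step of length \texttt{dt} simply advances \texttt{u} by \texttt{dt*G}.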
   
 \subsubsection{Equation of state}  
   
First, because the model equations are written in terms of perturbations, a
reference thermodynamic state needs to be specified. This is done through
the 1D arrays \textbf{tRef} and \textbf{sRef}. \textbf{tRef} specifies the
reference potential temperature profile (in $^{\circ}$C for the ocean and K
for the atmosphere) starting from the level k=1. Similarly, \textbf{sRef}
specifies the reference salinity profile (in ppt) for the ocean or the
reference specific humidity profile (in g/kg) for the atmosphere.

The form of the equation of state is controlled by the character variables
\textbf{buoyancyRelation} and \textbf{eosType}. \textbf{buoyancyRelation}
is set to '\texttt{OCEANIC}' by default and needs to be set to
'\texttt{ATMOSPHERIC}' for atmosphere simulations. In this case,
\textbf{eosType} must be set to '\texttt{IDEALGAS}'. For the ocean, two
forms of the equation of state are available: linear (set \textbf{eosType}
to '\texttt{LINEAR}') and a polynomial approximation to the full nonlinear
equation (set \textbf{eosType} to '\texttt{POLYNOMIAL}'). In the linear
case, you need to specify the thermal and haline expansion coefficients
represented by the variables \textbf{tAlpha} (in K$^{-1}$) and
\textbf{sBeta} (in ppt$^{-1}$). For the nonlinear case, you need to
generate a file of polynomial coefficients called \textit{POLY3.COEFFS}. To
do this, use the program \textit{utils/knudsen2/knudsen2.f} under the model
tree (a Makefile is available in the same directory; you will need to edit
the number and the values of the vertical levels in \textit{knudsen2.f} so
that they match those of your configuration).
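In the linear case, the density anomaly being computed is essentially the following (a sketch; the coefficient values here are only illustrative, not authoritative defaults):

```python
def linear_rho_anomaly(t, s, tref, sref,
                       rhonil=999.8, talpha=2.0e-4, sbeta=7.4e-4):
    # Linear equation of state: density decreases with temperature
    # (thermal expansion tAlpha, in 1/K) and increases with salinity
    # (haline contraction sBeta, in 1/ppt).
    return rhonil * (-talpha * (t - tref) + sbeta * (s - sref))
```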
   
 \subsubsection{Momentum equations}  
   
In this section, we only focus for now on the parameters that you are likely
to change, i.e. the ones relative to forcing and dissipation, for example.
The details relevant to the vector-invariant form of the equations and the
various advection schemes are not covered for the moment. We assume that you
use the standard form of the momentum equations (i.e. the flux form) with
the default advection scheme. Also, there are a few logical variables that
allow you to turn on/off various terms in the momentum equation. These
variables are called \textbf{momViscosity}, \textbf{momAdvection},
\textbf{momForcing}, \textbf{useCoriolis}, \textbf{momPressureForcing},
\textbf{momStepping}, and \textbf{metricTerms} and are assumed to be set to
'.\texttt{TRUE}.' here. Look at the file \textit{model/inc/PARAMS.h} for a
precise definition of these variables.
   
 \begin{itemize}  
 \item initialization  
 \end{itemize}  
   
 The velocity components are initialized to 0 unless the simulation is  
 starting from a pickup file (see section on simulation control parameters).  
   
 \begin{itemize}  
 \item forcing  
 \end{itemize}  
   
This section only applies to the ocean. You need to generate wind-stress
data into two files \textbf{zonalWindFile} and \textbf{meridWindFile}
corresponding to the zonal and meridional components of the wind stress,
respectively (if you want the stress to be along the direction of only one
of the model horizontal axes, you only need to generate one file). The
format of the files is similar to the bathymetry file. The zonal
(meridional) stress data are assumed to be in Pa and located at U-points
(V-points). As for the bathymetry, the precision with which to read the
binary data is controlled by the variable \textbf{readBinaryPrec}. See the
matlab program \textit{gendata.m} in the \textit{input} directories under
\textit{verification} to see how simple analytical wind forcing data are
generated for the case study experiments.
   
There is also the possibility of prescribing time-dependent periodic
forcing. To do this, concatenate the successive time records into a
single file (for each stress component) ordered in an (x, y, t)
fashion and set the following variables:
\textbf{periodicExternalForcing} to '.\texttt{TRUE}.',
\textbf{externForcingPeriod} to the period (in s) with which the
forcing varies (typically 1 month), and \textbf{externForcingCycle} to
the repeat time (in s) of the forcing (typically 1 year -- note:
\textbf{externForcingCycle} must be a multiple of
\textbf{externForcingPeriod}). With these variables set up, the model
will interpolate the forcing linearly in time at each iteration.
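
For concreteness, here is a sketch of how such a wind forcing setup
might look in the model parameter file \textit{data}. The namelist
group names and all file names and values are purely illustrative
assumptions, not part of any particular experiment; check the
\textit{data} files of the verification experiments for the actual
layout.

\begin{verbatim}
 &PARM03
 periodicExternalForcing=.TRUE.,
 externForcingPeriod=2592000.,
 externForcingCycle=31104000.,
 &

 &PARM05
 zonalWindFile='windx.bin',
 meridWindFile='windy.bin',
 &
\end{verbatim}

Here the forcing records are 2592000 s (30 days) apart and the cycle
repeats every 31104000 s (a 360-day year), so that
\textbf{externForcingCycle} is indeed a multiple of
\textbf{externForcingPeriod}.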
   
 \begin{itemize}  
 \item dissipation  
 \end{itemize}  
   
The lateral eddy viscosity coefficient is specified through the
variable \textbf{viscAh} (in m$^{2}$s$^{-1}$). The vertical eddy
viscosity coefficient is specified through the variable
\textbf{viscAz} (in m$^{2}$s$^{-1}$) for the ocean and \textbf{viscAp}
(in Pa$^{2}$s$^{-1}$) for the atmosphere. The vertical diffusive
fluxes can be computed implicitly by setting the logical variable
\textbf{implicitViscosity} to '.\texttt{TRUE}.'. In addition,
biharmonic mixing can be added as well through the variable
\textbf{viscA4} (in m$^{4}$s$^{-1}$). On a spherical polar grid, you
might also need to set the variable \textbf{cosPower}, which is set to
0 by default and which represents the power of the cosine of latitude
by which to multiply the viscosity. Slip or no-slip conditions at the
lateral and bottom boundaries are specified through the logical
variables \textbf{no\_slip\_sides} and \textbf{no\_slip\_bottom}. If
set to '\texttt{.FALSE.}', free-slip boundary conditions are applied.
If no-slip boundary conditions are applied at the bottom, a bottom
drag can be applied as well. Two forms are available: linear (set the
variable \textbf{bottomDragLinear} in s$^{-1}$) and quadratic (set the
variable \textbf{bottomDragQuadratic} in m$^{-1}$).
   
 The Fourier and Shapiro filters are described elsewhere.  
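
As an illustration, a set of momentum dissipation parameters might be
specified as follows. This is a sketch only: the namelist group name
is an assumption about the layout of the \textit{data} file, and the
values are illustrative rather than recommendations.

\begin{verbatim}
 &PARM01
 viscAh=1.E5,
 viscAz=1.E-3,
 no_slip_sides=.TRUE.,
 no_slip_bottom=.TRUE.,
 bottomDragLinear=1.E-6,
 &
\end{verbatim}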
   
 \begin{itemize}  
 \item C-D scheme  
 \end{itemize}  
   
If you run at a sufficiently coarse resolution, you will need the C-D
scheme for the computation of the Coriolis terms. The variable
\textbf{tauCD}, which represents the C-D scheme coupling timescale (in
s), needs to be set.
   
 \begin{itemize}  
 \item calculation of pressure/geopotential  
 \end{itemize}  
   
First, to run a non-hydrostatic ocean simulation, set the logical
variable \textbf{nonHydrostatic} to '.\texttt{TRUE}.'. The pressure
field is then inverted through a 3D elliptic equation. (Note: this
capability is not available for the atmosphere yet.) By default, a
hydrostatic simulation is assumed and a 2D elliptic equation is used
to invert the pressure field. The parameters controlling the behaviour
of the elliptic solvers are the variables \textbf{cg2dMaxIters} and
\textbf{cg2dTargetResidual} for the 2D case and \textbf{cg3dMaxIters}
and \textbf{cg3dTargetResidual} for the 3D case. You probably won't
need to alter the default values.
   
For the calculation of the surface pressure (for the ocean) or surface
geopotential (for the atmosphere) you need to set the logical
variables \textbf{rigidLid} and \textbf{implicitFreeSurface} (set one
to '.\texttt{TRUE}.' and the other to '.\texttt{FALSE}.' depending on
how you want to deal with the ocean upper or atmosphere lower
boundary).
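
For example, a hydrostatic ocean setup with an implicit free surface
might contain the following entries. This is a sketch; the namelist
group names are assumptions and the solver values are illustrative.

\begin{verbatim}
 &PARM01
 implicitFreeSurface=.TRUE.,
 rigidLid=.FALSE.,
 &

 &PARM02
 cg2dMaxIters=1000,
 cg2dTargetResidual=1.E-13,
 &
\end{verbatim}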
   
 \subsubsection{Tracer equations}  
   
This section covers the tracer equations, i.e. the potential
temperature equation and the salinity (for the ocean) or specific
humidity (for the atmosphere) equation. As for the momentum equations,
we only describe for now the parameters that you are likely to change.
The logical variables \textbf{tempDiffusion}, \textbf{tempAdvection},
\textbf{tempForcing}, and \textbf{tempStepping} allow you to turn
on/off terms in the temperature equation (and similarly for salinity
or specific humidity with the variables \textbf{saltDiffusion},
\textbf{saltAdvection}, etc.). These variables are all assumed here to
be set to '.\texttt{TRUE}.'. Look at the file
\textit{model/inc/PARAMS.h} for a precise definition.
   
 \begin{itemize}  
 \item initialization  
 \end{itemize}  
   
The initial tracer data can be contained in the binary files
\textbf{hydrogThetaFile} and \textbf{hydrogSaltFile}. These files
should contain 3D data ordered in an (x, y, r) fashion with k=1 as the
first vertical level. If no file names are provided, the tracers are
then initialized with the values of \textbf{tRef} and \textbf{sRef}
mentioned above (in the equation of state section). In this case, the
initial tracer data are uniform in x and y for each depth level.
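
A sketch of the corresponding namelist entries follows; the group name
and the file names are illustrative assumptions.

\begin{verbatim}
 &PARM05
 hydrogThetaFile='theta_init.bin',
 hydrogSaltFile='salt_init.bin',
 &
\end{verbatim}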

\begin{itemize}
\item forcing
\end{itemize}

This part is more relevant for the ocean, as the procedure for the
atmosphere is not completely stabilized at the moment.

A combination of flux data and relaxation terms can be used to drive
the tracer equations. For potential temperature, heat flux data (in
W/m$^{2}$) can be stored in the 2D binary file \textbf{surfQfile}.
Alternatively or in addition, the forcing can be specified through a
relaxation term. The SST data towards which the model surface
temperature is restored are assumed to be stored in the 2D binary file
\textbf{thetaClimFile}. The corresponding relaxation time scale
coefficient is set through the variable \textbf{tauThetaClimRelax} (in
s). The same procedure applies for salinity with the variable names
\textbf{EmPmRfile}, \textbf{saltClimFile}, and
\textbf{tauSaltClimRelax} for the freshwater flux (in m/s) and surface
salinity (in ppt) data files and the relaxation time scale coefficient
(in s), respectively. Also for salinity, if the CPP key
\textbf{USE\_NATURAL\_BCS} is turned on, natural boundary conditions
are applied, i.e. when computing the surface salinity tendency, the
freshwater flux is multiplied by the model surface salinity instead of
a constant salinity value.

As for the other input files, the precision with which to read the
data is controlled by the variable \textbf{readBinaryPrec}.
Time-dependent, periodic forcing can be applied as well, following the
same procedure used for the wind forcing data (see above).
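
Put together, a relaxation-plus-flux forcing setup might be sketched
as follows. The group names, file names, and time scales are purely
illustrative assumptions.

\begin{verbatim}
 &PARM03
 tauThetaClimRelax=5184000.,
 tauSaltClimRelax=7776000.,
 &

 &PARM05
 surfQfile='qnet.bin',
 thetaClimFile='sst_clim.bin',
 EmPmRfile='empmr.bin',
 saltClimFile='sss_clim.bin',
 &
\end{verbatim}

Here the relaxation time scales correspond to 60 and 90 days,
respectively.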

\begin{itemize}
\item dissipation
\end{itemize}

Lateral eddy diffusivities for temperature and salinity/specific
humidity are specified through the variables \textbf{diffKhT} and
\textbf{diffKhS} (in m$^{2}$/s). Vertical eddy diffusivities are
specified through the variables \textbf{diffKzT} and \textbf{diffKzS}
(in m$^{2}$/s) for the ocean and \textbf{diffKpT} and \textbf{diffKpS}
(in Pa$^{2}$/s) for the atmosphere. The vertical diffusive fluxes can
be computed implicitly by setting the logical variable
\textbf{implicitDiffusion} to '.\texttt{TRUE}.'. In addition,
biharmonic diffusivities can be specified as well through the
coefficients \textbf{diffK4T} and \textbf{diffK4S} (in m$^{4}$/s).
Note that the cosine power scaling (specified through
\textbf{cosPower} -- see the momentum equations section) is applied to
the tracer diffusivities (Laplacian and biharmonic) as well. The Gent
and McWilliams parameterization for oceanic tracers is described in
the package section. Finally, note that tracers can also be subject to
Fourier and Shapiro filtering (see the corresponding section on these
filters).
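
For instance, ocean tracer diffusivities might be set as follows (a
sketch; the group name is assumed and the values are illustrative):

\begin{verbatim}
 &PARM01
 diffKhT=1.E3,
 diffKzT=3.E-5,
 diffKhS=1.E3,
 diffKzS=3.E-5,
 &
\end{verbatim}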
   
 \begin{itemize}  
 \item ocean convection  
 \end{itemize}  
   
Two options are available to parameterize ocean convection. One is to
use the convective adjustment scheme; in this case, you need to set
the variable \textbf{cadjFreq}, which represents the frequency (in s)
with which the adjustment algorithm is called, to a non-zero value (if
set to a negative value by the user, the model will set it to the
tracer time step). The other option is to parameterize convection with
implicit vertical diffusion. To do this, set the logical variable
\textbf{implicitDiffusion} to '.\texttt{TRUE}.' and the real variable
\textbf{ivdc\_kappa} to the value (in m$^{2}$/s) you wish the tracer
vertical diffusivities to have when mixing tracers vertically due to
static instabilities. Note that \textbf{cadjFreq} and
\textbf{ivdc\_kappa} cannot both have a non-zero value.
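
A sketch of the implicit vertical diffusion option (the group name is
assumed and the value is illustrative; \textbf{cadjFreq} is left at
its default of zero since the two options are exclusive):

\begin{verbatim}
 &PARM01
 implicitDiffusion=.TRUE.,
 ivdc_kappa=10.,
 &
\end{verbatim}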
   
 \subsubsection{Simulation controls}  
   
The model ``clock'' is defined by the variable \textbf{deltaTClock}
(in s), which determines the IO frequencies and is used in tagging
output. Typically, you will set it to the tracer time step for
accelerated runs (otherwise it is simply set to the default time step
\textbf{deltaT}). The frequencies of checkpointing and dumping of the
model state are referenced to this clock (see below).
765    
\begin{itemize}
\item run duration
\end{itemize}

The beginning of a simulation is set by specifying a start time (in s)
through the real variable \textbf{startTime} or by specifying an
initial iteration number through the integer variable \textbf{nIter0}.
If these variables are set to non-zero values, the model will look for
a ``pickup'' file \textit{pickup.0000nIter0} to restart the
integration. The end of a simulation is set through the real variable
\textbf{endTime} (in s). Alternatively, you can instead specify the
number of time steps to execute through the integer variable
\textbf{nTimeSteps}.
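
For example, to start from rest at iteration 0 and run 20 steps of one
hour each, one might write (a sketch; the group name and values are
illustrative):

\begin{verbatim}
 &PARM03
 nIter0=0,
 deltaT=3600.,
 nTimeSteps=20,
 &
\end{verbatim}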
773    
\begin{itemize}
\item frequency of output
\end{itemize}

Real variables defining the frequencies (in s) with which output files
are written to disk need to be set up. \textbf{dumpFreq} controls the
frequency with which the instantaneous state of the model is saved.
\textbf{chkPtFreq} and \textbf{pchkPtFreq} control the output
frequency of rolling and permanent checkpoint files, respectively. See
section 1.5.1 (Output files) for the definition of model state and
checkpoint files. In addition, time-averaged fields can be written out
by setting the variable \textbf{taveFreq} (in s). The precision with
which to write the binary data is controlled by the integer variable
\textbf{writeBinaryPrec} (set it to \texttt{32} or \texttt{64}).
