% manual/s_getstarted/text/getting_started.tex
% revision 1.33 by edhill, Sat Apr 8 01:50:49 2006 UTC
this section, we provide information on how to customize the code when
you are ready to try implementing the configuration you have in mind.


\section{Where to find information}
\label{sect:whereToFindInfo}
\begin{rawhtml}
<!-- CMIREDIR:whereToFindInfo: -->
\end{rawhtml}

A web site is maintained for release 2 (``Pelican'') of MITgcm:
\begin{rawhtml} <A href=http://mitgcm.org/pelican/ target="idontexist"> \end{rawhtml}

\section{Obtaining the code}
\label{sect:obtainingCode}
\begin{rawhtml}
<!-- CMIREDIR:obtainingCode: -->
\end{rawhtml}

MITgcm can be downloaded from our system by following
the instructions below. As a courtesy we ask that you send e-mail to us at

\end{enumerate}

\subsection{Method 1 - Checkout from CVS}
\label{sect:cvs_checkout}

If CVS is available on your system, we strongly encourage you to use
it. CVS provides an efficient and elegant way of organizing your code
and keeping track of your changes.

Before you can use CVS, the following environment variable(s) should
be set within your shell.  For a csh or tcsh shell, put:
\begin{verbatim}
% setenv CVSROOT :pserver:cvsanon@mitgcm.org:/u/gcmpack
\end{verbatim}
in your \texttt{.cshrc} or \texttt{.tcshrc} file.  For bash or sh
shells, put:
\begin{verbatim}
% export CVSROOT=':pserver:cvsanon@mitgcm.org:/u/gcmpack'
\end{verbatim}
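With the variable set, a minimal bash session might look like the following sketch. The \texttt{cvs} commands are shown as comments because they require network access to the mitgcm.org repository; only the variable setup is live here.

```shell
# Set and verify the repository location (bash/sh syntax).
export CVSROOT=':pserver:cvsanon@mitgcm.org:/u/gcmpack'
echo "$CVSROOT"

# Then, illustratively (requires network access to mitgcm.org):
#   cvs login       # register with the CVS server
#   cvs co MITgcm   # check out the source tree
```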
  \label{tab:cvsModules}
\end{table}

The checkout process creates a directory called \texttt{MITgcm}. If
the directory \texttt{MITgcm} already exists, this command updates
your code based on the repository. Each directory in the source tree
contains a directory \texttt{CVS}. This information is required by CVS
to keep track of your file versions with respect to the repository.
Don't edit the files in \texttt{CVS}!  You can also use CVS to
download code updates.  More extensive information on using CVS for
maintaining MITgcm code can be found
\begin{rawhtml} <A href="http://mitgcm.org/usingcvstoget.html" target="idontexist"> \end{rawhtml}


\subsection{Method 2 - Tar file download}
\label{sect:conventionalDownload}

If you do not have CVS on your system, you can download the model as a
tar file.

This also means we can't tell what version of the code you are working
with. So please be sure you understand what you're doing.

\section{Model and directory structure}
\begin{rawhtml}
<!-- CMIREDIR:directory_structure: -->
\end{rawhtml}

The ``numerical'' model is contained within an execution environment
support wrapper. This wrapper is designed to provide a general
framework for grid-point models. MITgcmUV is a specific numerical
model that uses the framework. Under this structure the model is split
into execution environment support code and conventional numerical
model code. The execution environment support code is held under the
\texttt{eesupp} directory. The grid point model code is held under the
\texttt{model} directory. Code execution actually starts in the
\texttt{eesupp} routines and not in the \texttt{model} routines. For
this reason the top-level \texttt{MAIN.F} is in the
\texttt{eesupp/src} directory. In general, end-users should not need
to worry about this level. The top-level routine for the numerical
part of the code is in \texttt{model/src/THE\_MODEL\_MAIN.F}. Here is
a brief description of the directory structure of the model under the
root tree (a detailed description is given in section 3: Code
structure).

287    
288  \begin{itemize}  \begin{itemize}
289    
290  \item \textit{bin}: this directory is initially empty. It is the  \item \texttt{bin}: this directory is initially empty. It is the
291    default directory in which to compile the code.    default directory in which to compile the code.
292        
293  \item \textit{diags}: contains the code relative to time-averaged  \item \texttt{diags}: contains the code relative to time-averaged
294    diagnostics. It is subdivided into two subdirectories \textit{inc}    diagnostics. It is subdivided into two subdirectories \texttt{inc}
295    and \textit{src} that contain include files (*.\textit{h} files) and    and \texttt{src} that contain include files (\texttt{*.h} files) and
296    Fortran subroutines (*.\textit{F} files), respectively.    Fortran subroutines (\texttt{*.F} files), respectively.
297    
298  \item \textit{doc}: contains brief documentation notes.  \item \texttt{doc}: contains brief documentation notes.
299        
300  \item \textit{eesupp}: contains the execution environment source code.  \item \texttt{eesupp}: contains the execution environment source code.
301    Also subdivided into two subdirectories \textit{inc} and    Also subdivided into two subdirectories \texttt{inc} and
302    \textit{src}.    \texttt{src}.
303        
304  \item \textit{exe}: this directory is initially empty. It is the  \item \texttt{exe}: this directory is initially empty. It is the
305    default directory in which to execute the code.    default directory in which to execute the code.
306        
307  \item \textit{model}: this directory contains the main source code.  \item \texttt{model}: this directory contains the main source code.
308    Also subdivided into two subdirectories \textit{inc} and    Also subdivided into two subdirectories \texttt{inc} and
309    \textit{src}.    \texttt{src}.
310        
311  \item \textit{pkg}: contains the source code for the packages. Each  \item \texttt{pkg}: contains the source code for the packages. Each
312    package corresponds to a subdirectory. For example, \textit{gmredi}    package corresponds to a subdirectory. For example, \texttt{gmredi}
313    contains the code related to the Gent-McWilliams/Redi scheme,    contains the code related to the Gent-McWilliams/Redi scheme,
314    \textit{aim} the code relative to the atmospheric intermediate    \texttt{aim} the code relative to the atmospheric intermediate
315    physics. The packages are described in detail in section 3.    physics. The packages are described in detail in section 3.
316        
317  \item \textit{tools}: this directory contains various useful tools.  \item \texttt{tools}: this directory contains various useful tools.
318    For example, \textit{genmake2} is a script written in csh (C-shell)    For example, \texttt{genmake2} is a script written in csh (C-shell)
319    that should be used to generate your makefile. The directory    that should be used to generate your makefile. The directory
320    \textit{adjoint} contains the makefile specific to the Tangent    \texttt{adjoint} contains the makefile specific to the Tangent
321    linear and Adjoint Compiler (TAMC) that generates the adjoint code.    linear and Adjoint Compiler (TAMC) that generates the adjoint code.
322    The latter is described in details in part V.    The latter is described in details in part V.
323        
324  \item \textit{utils}: this directory contains various utilities. The  \item \texttt{utils}: this directory contains various utilities. The
325    subdirectory \textit{knudsen2} contains code and a makefile that    subdirectory \texttt{knudsen2} contains code and a makefile that
326    compute coefficients of the polynomial approximation to the knudsen    compute coefficients of the polynomial approximation to the knudsen
327    formula for an ocean nonlinear equation of state. The    formula for an ocean nonlinear equation of state. The
328    \textit{matlab} subdirectory contains matlab scripts for reading    \texttt{matlab} subdirectory contains matlab scripts for reading
329    model output directly into matlab. \textit{scripts} contains C-shell    model output directly into matlab. \texttt{scripts} contains C-shell
330    post-processing scripts for joining processor-based and tiled-based    post-processing scripts for joining processor-based and tiled-based
331    model output.    model output.
332        
333  \item \textit{verification}: this directory contains the model  \item \texttt{verification}: this directory contains the model
334    examples. See section \ref{sect:modelExamples}.    examples. See section \ref{sect:modelExamples}.
335    
336  \end{itemize}  \end{itemize}
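Purely as an illustration of the layout just described, the top-level tree can be sketched as an empty skeleton (\texttt{MITgcm\_demo} is a hypothetical scratch directory, not part of the distribution):

```shell
# Recreate the top-level layout described above as an empty skeleton
# (illustrative only; contains no actual MITgcm code).
mkdir -p MITgcm_demo/bin MITgcm_demo/diags MITgcm_demo/doc \
         MITgcm_demo/eesupp/inc MITgcm_demo/eesupp/src \
         MITgcm_demo/exe MITgcm_demo/model/inc MITgcm_demo/model/src \
         MITgcm_demo/pkg MITgcm_demo/tools MITgcm_demo/utils \
         MITgcm_demo/verification
ls MITgcm_demo
```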

\section[MITgcm Example Experiments]{Example experiments}
\label{sect:modelExamples}
\begin{rawhtml}
<!-- CMIREDIR:modelExamples: -->
\end{rawhtml}

%% a set of twenty-four pre-configured numerical experiments

The full MITgcm distribution comes with more than a dozen
pre-configured numerical experiments. Some of these example
experiments are tests of individual parts of the model code, but many
are fully fledged numerical simulations. A few of the examples are
used for tutorial documentation in sections \ref{sect:eg-baro} -
\ref{sect:eg-global}.  The other examples follow the same general
structure as the tutorial examples. However, they only include brief
instructions in a text file called \texttt{README}.  The examples are
located in subdirectories under the directory \texttt{verification}.
Each example is briefly described below.

\subsection{Full list of model examples}

\begin{enumerate}

\item \texttt{exp0} - single layer, ocean double gyre (barotropic with
  free-surface). This experiment is described in detail in section
  \ref{sect:eg-baro}.

\item \texttt{exp1} - four layer, ocean double gyre. This experiment
  is described in detail in section \ref{sect:eg-baroc}.

\item \texttt{exp2} - 4x4 degree global ocean simulation with steady
  climatological forcing. This experiment is described in detail in
  section \ref{sect:eg-global}.

\item \texttt{exp4} - flow over a Gaussian bump in open water or a
  channel with open boundaries.

\item \texttt{exp5} - inhomogeneously forced ocean convection in a
  doubly periodic box.

\item \texttt{front\_relax} - relaxation of an ocean thermal front
  (test for the Gent/McWilliams scheme). 2D (Y-Z).

\item \texttt{internal\_wave} - ocean internal wave forced by open
  boundary conditions.

\item \texttt{natl\_box} - eastern subtropical North Atlantic with KPP
  scheme; 1 month integration.

\item \texttt{hs94.1x64x5} - zonally averaged atmosphere using Held and
  Suarez '94 forcing.

\item \texttt{hs94.128x64x5} - 3D atmosphere dynamics using Held and
  Suarez '94 forcing.

\item \texttt{hs94.cs-32x32x5} - 3D atmosphere dynamics using Held and
  Suarez '94 forcing on the cubed sphere.

\item \texttt{aim.5l\_zon-ave} - intermediate atmospheric physics.
  Global zonal mean configuration, 1x64x5 resolution.

\item \texttt{aim.5l\_XZ\_Equatorial\_Slice} - intermediate
  atmospheric physics, equatorial slice configuration. 2D (X-Z).

\item \texttt{aim.5l\_Equatorial\_Channel} - intermediate atmospheric
  physics. 3D equatorial channel configuration.

\item \texttt{aim.5l\_LatLon} - intermediate atmospheric physics.
  Global configuration, on latitude-longitude grid with 128x64x5 grid
  points ($2.8^\circ$ resolution).

\item \texttt{adjustment.128x64x1} - barotropic adjustment problem on
  latitude-longitude grid with 128x64 grid points ($2.8^\circ$
  resolution).

\item \texttt{adjustment.cs-32x32x1} - barotropic adjustment problem
  on cube sphere grid with 32x32 points per face (roughly $2.8^\circ$
  resolution).

\item \texttt{advect\_cs} - two-dimensional passive advection test on
  cube sphere grid.

\item \texttt{advect\_xy} - two-dimensional (horizontal plane) passive
  advection test on Cartesian grid.

\item \texttt{advect\_yz} - two-dimensional (vertical plane) passive
  advection test on Cartesian grid.

\item \texttt{carbon} - simple passive tracer experiment. Includes
  derivative calculation. Described in detail in section
  \ref{sect:eg-carbon-ad}.

\item \texttt{flt\_example} - example of using the float package.

\item \texttt{global\_ocean.90x40x15} - global circulation with GM,
  flux boundary conditions and poles.

\item \texttt{global\_ocean\_pressure} - global circulation in
  pressure coordinates (non-Boussinesq ocean model). Described in
  detail in section \ref{sect:eg-globalpressure}.

\item \texttt{solid-body.cs-32x32x1} - solid body rotation test for
  cube sphere grid.

\end{enumerate}
Each example directory has the following subdirectories:

\begin{itemize}
\item \texttt{code}: contains the code particular to the example. At a
  minimum, this directory includes the following files:

  \begin{itemize}
  \item \texttt{code/packages.conf}: declares the list of packages or
    package groups to be used.  If not included, the default version
    is located in \texttt{pkg/pkg\_default}.  Package groups are
    simply convenient collections of commonly used packages which are
    defined in \texttt{pkg/pkg\_default}.  Some packages may require
    other packages or may require their absence (that is, they are
    incompatible) and these package dependencies are listed in
    \texttt{pkg/pkg\_depend}.

  \item \texttt{code/CPP\_EEOPTIONS.h}: declares CPP keys relative to
    the ``execution environment'' part of the code. The default
    version is located in \texttt{eesupp/inc}.

  \item \texttt{code/CPP\_OPTIONS.h}: declares CPP keys relative to
    the ``numerical model'' part of the code. The default version is
    located in \texttt{model/inc}.

  \item \texttt{code/SIZE.h}: declares the size of the underlying
    computational grid.  The default version is located in
    \texttt{model/inc}.
  \end{itemize}

  In addition, other include files and subroutines might be present in
  \texttt{code} depending on the particular experiment. See Section 2
  for more details.

\item \texttt{input}: contains the input data files required to run
  the example. At a minimum, the \texttt{input} directory contains the
  following files:

  \begin{itemize}
  \item \texttt{input/data}: this file, written as a namelist,
    specifies the main parameters for the experiment.

  \item \texttt{input/data.pkg}: contains parameters relative to the
    packages used in the experiment.

  \item \texttt{input/eedata}: this file contains ``execution
    environment'' data. At present, this consists of a specification
    of the number of threads to use in $X$ and $Y$ under multithreaded
    execution.
  \end{itemize}

  In addition, you may also find in this directory the forcing files
  particular to the configuration of the experiment.  This varies from
  experiment to experiment. See section 2 for more details.

\item \texttt{results}: this directory contains the output file
  \texttt{output.txt} produced by the simulation example. This file is
  useful for comparison with your own output when you run the
  experiment.
\end{itemize}
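The reference file in \texttt{results} can be compared against your own run with standard tools. A minimal sketch using \texttt{diff} follows; the directory names and the sample line of content are hypothetical stand-ins for real model output:

```shell
# Hypothetical stand-in files; in practice you would compare the
# experiment's results/output.txt with the output.txt from your run.
mkdir -p results run
echo "cg2d_init_res = 1.234E-07" > results/output.txt
echo "cg2d_init_res = 1.234E-07" > run/output.txt
if diff -q results/output.txt run/output.txt > /dev/null; then
  echo "outputs match"
else
  echo "outputs differ"
fi
```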
Once you have chosen the example you want to run, you are ready to
compile the code.

\section[Building MITgcm]{Building the code}
\label{sect:buildingCode}
\begin{rawhtml}
<!-- CMIREDIR:buildingCode: -->
\end{rawhtml}

To compile the code, we use the \texttt{make} program. This uses a
file (\texttt{Makefile}) that allows us to pre-process source files,
specify compiler and optimization options, and work out file
dependencies. We supply a script (\texttt{genmake2}), described in
section \ref{sect:genmake}, that automatically creates the
\texttt{Makefile} for you. You then need to build the dependencies and
compile the code.

As an example, assume that you want to build and run experiment
\texttt{verification/exp2}. There are multiple ways and places to
actually do this, but here let's build the code in
\texttt{verification/exp2/build}:
\begin{verbatim}
% cd verification/exp2/build
\end{verbatim}
First, build the \texttt{Makefile}:
\begin{verbatim}
% ../../../tools/genmake2 -mods=../code
\end{verbatim}
The command line option tells \texttt{genmake2} to override model
source code with any files in the directory \texttt{../code/}.

On many systems, the \texttt{genmake2} program will be able to
automatically recognize the hardware, find compilers and other tools
within the user's path (``\texttt{echo \$PATH}''), and then choose an
appropriate set of options from the files (``optfiles'') contained in
the \texttt{tools/build\_options} directory.  Under some
circumstances, a user may have to create a new ``optfile'' in order to
specify the exact combination of compiler, compiler flags, libraries,
and other options necessary to build a particular configuration of
MITgcm.  In such cases, it is generally helpful to read the existing
``optfiles'' and mimic their syntax.
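As an illustration only, an ``optfile'' is essentially a small shell fragment assigning build variables. The variable names below follow the pattern of the files in \texttt{tools/build\_options}, while the compiler name and flag values are hypothetical, not a tested configuration:

```shell
# Hypothetical optfile sketch -- values are illustrative only;
# see tools/build_options for real, tested examples.
FC='g77'                      # Fortran compiler to use
DEFINES='-D_BYTESWAPIO'       # CPP definitions passed to the build
FFLAGS='-Wimplicit -Wunused'  # general Fortran flags
FOPTIM='-O3'                  # optimization flags
```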

Through the MITgcm-support list, the MITgcm developers are willing to
provide help writing or modifying ``optfiles''.  And we encourage
users to post new ``optfiles'' (particularly ones for new machines or
operating systems) to the
MITgcm-support@mitgcm.org
list.

To specify an optfile to \texttt{genmake2}, the syntax is:
\begin{verbatim}
% ../../../tools/genmake2 -mods=../code -of /path/to/optfile
\end{verbatim}

Once a \texttt{Makefile} has been generated, we create the
dependencies with the command:
\begin{verbatim}
% make depend
\end{verbatim}
This modifies the \texttt{Makefile} by attaching a (usually long)
list of files upon which other files depend. The purpose of this is to
reduce re-compilation if and when you start to modify the code. The
\texttt{make depend} command also creates links from the model source
to this directory.  It is important to note that the \texttt{make
depend} stage will occasionally produce warnings or errors, since the
dependency parsing tool is unable to find all of the necessary header
files (e.g., \texttt{netcdf.inc}).  In these circumstances, it is
usually OK to ignore the warnings/errors and proceed to the next step.

578  Next compile the code:  Next one can compile the code using:
579  \begin{verbatim}  \begin{verbatim}
580  % make  % make
581  \end{verbatim}  \end{verbatim}
582  The {\tt make} command creates an executable called \textit{mitgcmuv}.  The {\tt make} command creates an executable called \texttt{mitgcmuv}.
583  Additional make ``targets'' are defined within the makefile to aid in  Additional make ``targets'' are defined within the makefile to aid in
584  the production of adjoint and other versions of MITgcm.  the production of adjoint and other versions of MITgcm.  On SMP
585    (shared multi-processor) systems, the build process can often be sped
586  Now you are ready to run the model. General instructions for doing so are  up appreciably using the command:
 given in section \ref{sect:runModel}. Here, we can run the model with:  
587  \begin{verbatim}  \begin{verbatim}
588  ./mitgcmuv > output.txt  % make -j 2
589  \end{verbatim}  \end{verbatim}
590  where we are re-directing the stream of text output to the file {\em  where the ``2'' can be replaced with a number that corresponds to the
591  output.txt}.  number of CPUs available.
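The best value for \texttt{-j} depends on the machine at hand. A small
shell sketch for choosing it automatically (\texttt{nproc} is a GNU
coreutils utility and \texttt{getconf} a common fallback; neither is
part of MITgcm itself):

```shell
# Choose a job count for 'make -j' from the number of online CPUs.
# 'nproc' ships with GNU coreutils; 'getconf _NPROCESSORS_ONLN' is a
# widely available fallback on systems without it.
NCPUS=$(nproc 2>/dev/null || getconf _NPROCESSORS_ONLN)
echo "suggested: make -j ${NCPUS}"
```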
   
   
Now you are ready to run the model. General instructions for doing so
are given in section \ref{sect:runModel}. Here, we can run the model by
first creating links to all the input files:
\begin{verbatim}
% ln -s ../input/* .
\end{verbatim}
and then calling the executable with:
\begin{verbatim}
% ./mitgcmuv > output.txt
\end{verbatim}
where we are re-directing the stream of text output to the file
\texttt{output.txt}.


\subsection{Building/compiling the code elsewhere}

In the example above (section \ref{sect:buildingCode}) we built the
executable in the \texttt{input} directory of the experiment for
convenience. You can also configure and compile the code in other
locations, for example on a scratch disk, without having to copy the
entire source tree. The only requirement is that you have
\texttt{genmake2} in your path or know the absolute path to
\texttt{genmake2}.

The following sections outline some possible methods of organizing
your source and data.
592    
\subsubsection{Building from the \texttt{../code} directory}

This is just as simple as building in the \texttt{input/} directory:
\begin{verbatim}
% cd verification/exp2/code
% ../../../tools/genmake2
% make depend
% make
\end{verbatim}
However, to run the model the executable (\texttt{mitgcmuv}) and input
files must be in the same place. If you only have one calculation to make:
\begin{verbatim}
% cd ../input
% cp ../code/mitgcmuv ./
% ./mitgcmuv > output.txt
\end{verbatim}
or if you will be making multiple runs with the same executable:
\begin{verbatim}
% cd ../
% cp -r input run1
% cp code/mitgcmuv run1
% cd run1
% ./mitgcmuv > output.txt
\end{verbatim}
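The ``multiple runs with the same executable'' pattern above can be
scripted. The sketch below plays it out in a throw-away toy layout:
the \texttt{demo/} tree and the fake \texttt{mitgcmuv} are stand-ins
for the real experiment directories, so the commands can be tried
anywhere.

```shell
# Fabricate a stand-in experiment layout (input/ and code/mitgcmuv).
set -e
mkdir -p demo/input demo/code
echo "some input" > demo/input/data
printf '#!/bin/sh\necho running\n' > demo/code/mitgcmuv
chmod +x demo/code/mitgcmuv

# The pattern itself: one directory per run keeps input/ pristine.
cp -r demo/input demo/run1
cp demo/code/mitgcmuv demo/run1/
( cd demo/run1 && ./mitgcmuv > output.txt )
cat demo/run1/output.txt    # -> running
```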
   
\subsubsection{Building from a new directory}

Since the \texttt{input} directory contains input files it is often more
useful to keep \texttt{input} pristine and build in a new directory
within \texttt{verification/exp2/}:
\begin{verbatim}
% cd verification/exp2
% mkdir build
% cd build
% ../../../tools/genmake2 -mods=../code
% make depend
% make
\end{verbatim}
This builds the code exactly as before but this time you need to copy
either the executable or the input files or both in order to run the
model. For example,
\begin{verbatim}
% cp ../input/* ./
% ./mitgcmuv > output.txt
\end{verbatim}
or if you tend to make multiple runs with the same executable then
running in a new directory each time might be more appropriate:
\begin{verbatim}
% cd ../
% mkdir run1
% cp build/mitgcmuv run1/
% cp input/* run1/
% cd run1
% ./mitgcmuv > output.txt
\end{verbatim}
599    and then calling the executable with:
\subsubsection{Building on a scratch disk}

Model object files and output data can use up large amounts of disk
space so it is often the case that you will be operating on a large
scratch disk. Assuming the model source is in {\em ~/MITgcm} then the
following commands will build the model in {\em /scratch/exp2-run1}:
\begin{verbatim}
% cd /scratch/exp2-run1
% ~/MITgcm/tools/genmake2 -rootdir=~/MITgcm \
  -mods=~/MITgcm/verification/exp2/code
% make depend
% make
\end{verbatim}
To run the model here, you will need the input files:
\begin{verbatim}
% cp ~/MITgcm/verification/exp2/input/* ./
% ./mitgcmuv > output.txt
\end{verbatim}

As before, you could build in one directory and make multiple runs of
the one experiment:
\begin{verbatim}
% cd /scratch/exp2
% mkdir build
% cd build
% ~/MITgcm/tools/genmake2 -rootdir=~/MITgcm \
  -mods=~/MITgcm/verification/exp2/code
% make depend
% make
% cd ../
% cp -r ~/MITgcm/verification/exp2/input run2
% cd run2
% ./mitgcmuv > output.txt
\end{verbatim}

\subsection{Using \textit{genmake2}}
 \label{sect:genmake}  
   
 To compile the code, first use the program \texttt{genmake2} (located  
 in the \textit{tools} directory) to generate a Makefile.  
 \texttt{genmake2} is a shell script written to work with all  
 ``sh''--compatible shells including bash v1, bash v2, and Bourne.  
 Internally, \texttt{genmake2} determines the locations of needed  
 files, the compiler, compiler options, libraries, and Unix tools.  It  
 relies upon a number of ``optfiles'' located in the {\em  
   tools/build\_options} directory.  
   
 The purpose of the optfiles is to provide all the compilation options  
 for particular ``platforms'' (where ``platform'' roughly means the  
 combination of the hardware and the compiler) and code configurations.  
 Given the combinations of possible compilers and library dependencies  
 ({\it eg.}  MPI and NetCDF) there may be numerous optfiles available  
 for a single machine.  The naming scheme for the majority of the  
 optfiles shipped with the code is  
 \begin{center}  
   {\bf OS\_HARDWARE\_COMPILER }  
 \end{center}  
 where  
 \begin{description}  
 \item[OS] is the name of the operating system (generally the  
   lower-case output of the {\tt 'uname'} command)  
 \item[HARDWARE] is a string that describes the CPU type and  
   corresponds to output from the  {\tt 'uname -m'} command:  
   \begin{description}  
   \item[ia32] is for ``x86'' machines such as i386, i486, i586, i686,  
     and athlon  
   \item[ia64] is for Intel IA64 systems (eg. Itanium, Itanium2)  
   \item[amd64] is AMD x86\_64 systems  
   \item[ppc] is for Mac PowerPC systems  
   \end{description}  
 \item[COMPILER] is the compiler name (generally, the name of the  
   FORTRAN executable)  
 \end{description}  
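The \textbf{OS\_HARDWARE} part of this naming scheme can be
reproduced directly from \texttt{uname}. A sketch (the normalization
cases below merely mirror the list above and are not part of
\texttt{genmake2} itself):

```shell
# Build the OS_HARDWARE optfile prefix from 'uname' output.
os=$(uname | tr '[:upper:]' '[:lower:]')
case "$(uname -m)" in
    i?86|athlon) hw=ia32  ;;   # "x86" machines
    x86_64)      hw=amd64 ;;   # AMD/Intel x86_64 systems
    ia64)        hw=ia64  ;;   # Itanium, Itanium2
    ppc*)        hw=ppc   ;;   # Mac PowerPC
    *)           hw=$(uname -m) ;;
esac
echo "${os}_${hw}"             # eg. linux_ia32
```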
   
 In many cases, the default optfiles are sufficient and will result in  
 usable Makefiles.  However, for some machines or code configurations,  
 new ``optfiles'' must be written. To create a new optfile, it is  
 generally best to start with one of the defaults and modify it to suit  
 your needs.  Like \texttt{genmake2}, the optfiles are all written  
 using a simple ``sh''--compatible syntax.  While nearly all variables  
 used within \texttt{genmake2} may be specified in the optfiles, the  
 critical ones that should be defined are:  
   
 \begin{description}  
 \item[FC] the FORTRAN compiler (executable) to use  
 \item[DEFINES] the command-line DEFINE options passed to the compiler  
 \item[CPP] the C pre-processor to use  
 \item[NOOPTFLAGS] options flags for special files that should not be  
   optimized  
 \end{description}  
   
 For example, the optfile for a typical Red Hat Linux machine (``ia32''  
 architecture) using the GCC (g77) compiler is  
 \begin{verbatim}  
 FC=g77  
 DEFINES='-D_BYTESWAPIO -DWORDLENGTH=4'  
 CPP='cpp  -traditional -P'  
 NOOPTFLAGS='-O0'  
 #  For IEEE, use the "-ffloat-store" option  
 if test "x$IEEE" = x ; then  
     FFLAGS='-Wimplicit -Wunused -Wuninitialized'  
     FOPTIM='-O3 -malign-double -funroll-loops'  
 else  
     FFLAGS='-Wimplicit -Wunused -ffloat-store'  
     FOPTIM='-O0 -malign-double'  
 fi  
 \end{verbatim}  
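As an illustration of adapting a default, a hypothetical optfile for
the \texttt{gfortran} compiler might look like the following. The
variable names are the same ones \texttt{genmake2} reads in the g77
example above; the specific \texttt{gfortran} flags are assumptions,
not a tested configuration:

```shell
# Hypothetical gfortran optfile sketch (flag choices are assumptions).
FC=gfortran
DEFINES='-D_BYTESWAPIO -DWORDLENGTH=4'
CPP='cpp -traditional -P'
NOOPTFLAGS='-O0'
#  For IEEE, use the "-ffloat-store" option
if test "x$IEEE" = x ; then
    FFLAGS='-Wunused -Wuninitialized'
    FOPTIM='-O3 -funroll-loops'
else
    FFLAGS='-Wunused -ffloat-store'
    FOPTIM='-O0'
fi
```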
   
 If you write an optfile for an unrepresented machine or compiler, you  
 are strongly encouraged to submit the optfile to the MITgcm project  
 for inclusion.  Please send the file to the  
\begin{rawhtml} <A href="mailto:MITgcm-support@mitgcm.org"> \end{rawhtml}
 \begin{center}  
   MITgcm-support@mitgcm.org  
 \end{center}  
 \begin{rawhtml} </A> \end{rawhtml}  
 mailing list.  
   
 In addition to the optfiles, \texttt{genmake2} supports a number of  
 helpful command-line options.  A complete list of these options can be  
 obtained from:  
 \begin{verbatim}  
 % genmake2 -h  
 \end{verbatim}  
   
 The most important command-line options are:  
 \begin{description}  
     
 \item[\texttt{--optfile=/PATH/FILENAME}] specifies the optfile that  
   should be used for a particular build.  
     
  If no ``optfile'' is specified (either through the command line or the
  MITGCM\_OPTFILE environment variable), genmake2 will try to make a
  reasonable guess from the list provided in {\em
    tools/build\_options}.  The method used for making this guess is
  to first determine the combination of operating system and hardware
  (eg. ``linux\_ia32'') and then find a working FORTRAN compiler within
  the user's path.  When these three items have been identified,
  genmake2 will try to find an optfile that has a matching name.
     
 \item[\texttt{--pdepend=/PATH/FILENAME}] specifies the dependency file  
   used for packages.  
     
  If not specified, the default dependency file {\em pkg/pkg\_depend}
  is used.  The syntax for this file is parsed on a line-by-line basis
  where each line contains either a comment (``\#'') or a simple
  ``PKGNAME1 (+|-)PKGNAME2'' pairwise rule where the ``+'' or ``-'' symbol
  specifies a ``must be used with'' or a ``must not be used with''
  relationship, respectively.  If no rule is specified, then it is
  assumed that the two packages are compatible and will function
  either with or without each other.
     
 \item[\texttt{--pdefault='PKG1 PKG2 PKG3 ...'}] specifies the default  
   set of packages to be used.  
     
  If not set, the default package list will be read from {\em
    pkg/pkg\_default}.
     
\item[\texttt{--adof=/path/to/file}] specifies the ``adjoint'' or
  automatic differentiation options file to be used.  The file is
  analogous to the ``optfile'' defined above but it specifies
  information for the AD build process.
  
  The default file is located in {\em
    tools/adjoint\_options/adjoint\_default} and it defines the ``TAF''
  and ``TAMC'' compilers.  An alternate version is also available at
  {\em tools/adjoint\_options/adjoint\_staf} that selects the newer
  ``STAF'' compiler.  As with any compilers, it is helpful to have their
  directories listed in your {\tt \$PATH} environment variable.
     
 \item[\texttt{--mods='DIR1 DIR2 DIR3 ...'}] specifies a list of  
   directories containing ``modifications''.  These directories contain  
   files with names that may (or may not) exist in the main MITgcm  
   source tree but will be overridden by any identically-named sources  
   within the ``MODS'' directories.  
     
  The order of precedence for this ``name-hiding'' is as follows:
  \begin{itemize}
  \item ``MODS'' directories (in the order given)
  \item Packages either explicitly specified or provided by default
    (in the order given)
  \item Packages included due to package dependencies (in the order
    that the package dependencies are parsed)
  \item The ``standard dirs'' (which may have been specified by the
    ``-standarddirs'' option)
  \end{itemize}
     
 \item[\texttt{--make=/path/to/gmake}] Due to the poor handling of  
   soft-links and other bugs common with the \texttt{make} versions  
   provided by commercial Unix vendors, GNU \texttt{make} (sometimes  
   called \texttt{gmake}) should be preferred.  This option provides a  
   means for specifying the make executable to be used.  
     
 \item[\texttt{--bash=/path/to/sh}] On some (usually older UNIX)  
   machines, the ``bash'' shell is unavailable.  To run on these  
   systems, \texttt{genmake2} can be invoked using an ``sh'' (that is,  
   a Bourne, POSIX, or compatible) shell.  The syntax in these  
   circumstances is:  
   \begin{center}  
     \texttt{/bin/sh genmake2 -bash=/bin/sh [...options...]}  
   \end{center}  
   where \texttt{/bin/sh} can be replaced with the full path and name  
   of the desired shell.  
   
 \end{description}  
   
   
   
\section[Running MITgcm]{Running the model in prognostic mode}
\label{sect:runModel}
\begin{rawhtml}
<!-- CMIREDIR:runModel: -->
\end{rawhtml}

If compilation finished successfully (section \ref{sect:buildingCode})
then an executable called \texttt{mitgcmuv} will now exist in the
local directory.

To run the model as a single process (\textit{i.e.} not in parallel)
simply type:
\begin{verbatim}
% ./mitgcmuv
\end{verbatim}
The above command will spew out many lines of text output to
your screen.  This output contains details such as parameter values as
well as diagnostics such as mean kinetic energy, largest CFL number,
etc. It is worth keeping this text output with the binary output so we
normally re-direct the \texttt{stdout} stream as follows:
\begin{verbatim}
% ./mitgcmuv > output.txt
\end{verbatim}
In the event that the model encounters an error and stops, it is very
helpful to include the last few lines of this \texttt{output.txt} file
along with the (\texttt{stderr}) error message within any bug reports.

For the example experiments in \texttt{verification}, an example of the
output is kept in \texttt{results/output.txt} for comparison. You can
compare your \texttt{output.txt} with the corresponding one for that
experiment to check that the set-up works.
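A crude first comparison can be scripted with \texttt{diff}. In
practice your run and the shipped reference differ in host- and
timing-related lines, so an exact end-to-end match should not be
expected; the files below are fabricated stand-ins so the sketch can
be tried anywhere:

```shell
# Fabricate a run log and a matching reference, then compare them.
mkdir -p results
printf 'S mean: 35.0\nKE max: 1.2e-3\n' > results/output.txt
printf 'S mean: 35.0\nKE max: 1.2e-3\n' > output.txt
if diff output.txt results/output.txt > /dev/null ; then
    echo "matches reference"
fi
```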
640    
641    
642    
\subsection{Output files}

The model produces various output files and, when using \texttt{mnc},
sometimes even directories.  Depending upon the I/O package(s)
selected at compile time (either \texttt{mdsio} or \texttt{mnc} or
both, as determined by \texttt{code/packages.conf}) and the run-time
flags set (in \texttt{input/data.pkg}), the following output may
appear.


\subsubsection{MDSIO output files}

The ``traditional'' output files are generated by the \texttt{mdsio}
package.  At a minimum, the instantaneous ``state'' of the model is
written out, which is made of the following files:
658    
\begin{itemize}
\item \texttt{U.00000nIter} - zonal component of velocity field (m/s
  and $> 0$ eastward).

\item \texttt{V.00000nIter} - meridional component of velocity field
  (m/s and $> 0$ northward).

\item \texttt{W.00000nIter} - vertical component of velocity field
  (ocean: m/s and $> 0$ upward, atmosphere: Pa/s and $> 0$ towards
  increasing pressure i.e. downward).

\item \texttt{T.00000nIter} - potential temperature (ocean:
  $^{\circ}$C, atmosphere: K).

\item \texttt{S.00000nIter} - ocean: salinity (psu), atmosphere:
  water vapor (g/kg).

\item \texttt{Eta.00000nIter} - ocean: surface elevation (m),
  atmosphere: surface pressure anomaly (Pa).
\end{itemize}
679    
The chain \texttt{00000nIter} consists of ten digits that specify the
iteration number at which the output is written out. For example, \texttt{%
U.0000000300} is the zonal velocity at iteration 300.
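Because the suffix is fixed-width and zero-padded, the iteration
number is easy to recover in the shell; the \texttt{10\#} prefix below
forces base-10 arithmetic so the padding is not read as octal:

```shell
# Extract the iteration number from an MDSIO-style file name.
fname="U.0000000300"
iter=$((10#${fname##*.}))   # strip through the last '.', force base 10
echo "$iter"                # -> 300
```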
683    
In addition, a ``pickup'' or ``checkpoint'' file called:

\begin{itemize}
\item \texttt{pickup.00000nIter}
\end{itemize}

is written out. This file represents the state of the model in a condensed
form and is used for restarting the integration. If the C-D scheme is used,
there is an additional ``pickup'' file:

\begin{itemize}
\item \texttt{pickup\_cd.00000nIter}
\end{itemize}

containing the D-grid velocity data that has to be written out as well
in order to restart the integration. Rolling checkpoint files are the same
as the pickup files but are named differently. Their names contain the chain
\texttt{ckptA} or \texttt{ckptB} instead of \texttt{00000nIter}. They can be
used to restart the model but are overwritten every other time they are
output to save disk space during long integrations.
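A useful consequence of the zero-padded names is that plain lexical
sorting orders pickups by iteration, so the most recent one can be
found without parsing (the file names below are fabricated stand-ins):

```shell
# Find the most recent numbered pickup file in a run directory.
mkdir -p pickup_demo
touch pickup_demo/pickup.0000000100 pickup_demo/pickup.0000000200
latest=$(ls pickup_demo/pickup.* | sort | tail -n 1)
echo "$latest"              # -> pickup_demo/pickup.0000000200
```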
704    
705    
706    
\subsubsection{MNC output files}

Unlike the \texttt{mdsio} output, the \texttt{mnc}--generated output
is usually (though not necessarily) placed within a subdirectory with
a name such as \texttt{mnc\_test\_\${DATE}\_\${SEQ}}.  The files
within this subdirectory are all in the ``self-describing'' netCDF
format and can thus be browsed and/or plotted using tools such as:
\begin{itemize}
\item \texttt{ncdump} is a utility which is typically included
  with every netCDF install:
  \begin{rawhtml} <A href="http://www.unidata.ucar.edu/packages/netcdf/"> \end{rawhtml}
\begin{verbatim}
     http://www.unidata.ucar.edu/packages/netcdf/
\end{verbatim}
  \begin{rawhtml} </A> \end{rawhtml} and it converts the netCDF
  binaries into formatted ASCII text files.

\item The \texttt{ncview} utility is a very convenient and quick way
  to plot netCDF data and it runs on most OSes:
  \begin{rawhtml} <A href="http://meteora.ucsd.edu/~pierce/ncview_home_page.html"> \end{rawhtml}
\begin{verbatim}
     http://meteora.ucsd.edu/~pierce/ncview_home_page.html
\end{verbatim}
  \begin{rawhtml} </A> \end{rawhtml}
  
\item MatLAB(c) and other common post-processing environments provide
  various netCDF interfaces including:
  \begin{rawhtml} <A href="http://woodshole.er.usgs.gov/staffpages/cdenham/public_html/MexCDF/nc4ml5.html"> \end{rawhtml}
\begin{verbatim}
http://woodshole.er.usgs.gov/staffpages/cdenham/public_html/MexCDF/nc4ml5.html
\end{verbatim}
  \begin{rawhtml} </A> \end{rawhtml}
\end{itemize}
740    
741    
\subsection{Looking at the output}

The ``traditional'' or mdsio model data are written according to a
``meta/data'' file format.  Each variable is associated with two files
with suffix names \texttt{.data} and \texttt{.meta}. The
\texttt{.data} file contains the data written in binary form
(big\_endian by default). The \texttt{.meta} file is a ``header'' file
that contains information about the size and the structure of the
\texttt{.data} file. This way of organizing the output is particularly
useful when running multi-processor calculations. The base version of
the model includes a few matlab utilities to read output files written
in this format. The matlab scripts are located in the directory
\texttt{utils/matlab} under the root tree. The script \texttt{rdmds.m}
reads the data. Look at the comments inside the script to see how to
use it.
757    
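For orientation, the \texttt{.meta} header is itself human-readable
plain text. For a single-record 2D field it contains entries along the
following lines; the exact field names and values vary between model
versions and configurations, so treat this purely as a sketch:

```
 nDims = [   2 ];
 dimList = [
    90,    1,   90,
    40,    1,   40
 ];
 format = [ 'float32' ];
 nrecords = [   1 ];
```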
Some examples of reading and visualizing some output in {\em Matlab}:
\begin{verbatim}
>> for n=1:11; imagesc(eta(:,:,n)');axis ij;colorbar;pause(.5);end
\end{verbatim}

Similar scripts for netCDF output (\texttt{rdmnc.m}) are available and
they are described in Section \ref{sec:pkg:mnc}.


\section{Doing it yourself: customizing the code}

When you are ready to run the model in the configuration you want, the
easiest thing is to use and adapt the setup of the case study
experiment (described previously) that is the closest to your
configuration. Then, the amount of setup will be minimized. In this
section, we focus on the setup relative to the ``numerical model''
part of the code (the setup relative to the ``execution environment''
part is covered in the parallel implementation section) and on the
variables and parameters that you are likely to change.
   
\subsection{Configuration and setup}

The CPP keys relative to the ``numerical model'' part of the code are
all defined and set in the file \textit{CPP\_OPTIONS.h} in the
directory \textit{model/inc} or in one of the \textit{code}
directories of the case study experiments under
\textit{verification}. The model parameters are defined and declared
in the file \textit{model/inc/PARAMS.h} and their default values are
set in the routine \textit{model/src/set\_defaults.F}. The default
values can be modified in the namelist file \textit{data} which needs
to be located in the directory where you will run the model. The
parameters are initialized in the routine
\textit{model/src/ini\_parms.F}.  Look at this routine to see in what
part of the namelist the parameters are located.
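For orientation, the \textit{data} file is a set of Fortran
namelists. The fragment below is illustrative only: which parameters
live in which group is determined by \textit{model/src/ini\_parms.F},
and the values shown are placeholders, not a working configuration:

```fortran
 &PARM01
 tRef=20.,10.,8.,6.,
 sRef=35.,35.,35.,35.,
 viscAh=5.E5,
 &

 &PARM03
 nIter0=0,
 nTimeSteps=100,
 deltaT=1200.0,
 &
```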
   
 In what follows the parameters are grouped into categories related to  
 the computational domain, the equations solved in the model, and the  
 simulation controls.  
   
 \subsection{Computational domain, geometry and time-discretization}  
   
 \begin{description}  
 \item[dimensions] \  
     
   The number of points in the x, y, and r directions are represented  
   by the variables \textbf{sNx}, \textbf{sNy} and \textbf{Nr}  
   respectively which are declared and set in the file  
   \textit{model/inc/SIZE.h}.  (Again, this assumes a mono-processor  
   calculation. For multiprocessor calculations see the section on  
   parallel implementation.)  
   
 \item[grid] \  
     
   Three different grids are available: cartesian, spherical polar, and  
   curvilinear (which includes the cubed sphere). The grid is set  
   through the logical variables \textbf{usingCartesianGrid},  
   \textbf{usingSphericalPolarGrid}, and \textbf{usingCurvilinearGrid}.  
   In the case of spherical and curvilinear grids, the southern  
   boundary is defined through the variable \textbf{phiMin} which  
   corresponds to the latitude of the southern most cell face (in  
   degrees). The resolution along the x and y directions is controlled  
   by the 1D arrays \textbf{delx} and \textbf{dely} (in meters in the  
   case of a cartesian grid, in degrees otherwise).  The vertical grid  
   spacing is set through the 1D array \textbf{delz} for the ocean (in  
   meters) or \textbf{delp} for the atmosphere (in Pa).  The variable  
   \textbf{Ro\_SeaLevel} represents the standard position of Sea-Level  
   in ``R'' coordinate. This is typically set to 0m for the ocean  
   (default value) and 10$^{5}$Pa for the atmosphere. For the  
   atmosphere, also set the logical variable \textbf{groundAtK1} to  
   \texttt{'.TRUE.'} which puts the first level (k=1) at the lower  
   boundary (ground).  
     
   For the cartesian grid case, the Coriolis parameter $f$ is set  
   through the variables \textbf{f0} and \textbf{beta} which correspond  
   to the reference Coriolis parameter (in s$^{-1}$) and  
   $\frac{\partial f}{ \partial y}$(in m$^{-1}$s$^{-1}$) respectively.  
   If \textbf{beta } is set to a nonzero value, \textbf{f0} is the  
   value of $f$ at the southern edge of the domain.  
   
 \item[topography - full and partial cells] \  
     
   The domain bathymetry is read from a file that contains a 2D (x,y)  
   map of depths (in m) for the ocean or pressures (in Pa) for the  
   atmosphere. The file name is represented by the variable  
   \textbf{bathyFile}. The file is assumed to contain binary numbers  
   giving the depth (pressure) of the model at each grid cell, ordered  
   with the x coordinate varying fastest. The points are ordered from  
   low coordinate to high coordinate for both axes. The model code  
   applies without modification to enclosed, periodic, and double  
   periodic domains. Periodicity is assumed by default and is  
   suppressed by setting the depths to 0m for the cells at the limits  
   of the computational domain (note: not sure this is the case for the  
   atmosphere). The precision with which to read the binary data is  
   controlled by the integer variable \textbf{readBinaryPrec} which can  
   take the value \texttt{32} (single precision) or \texttt{64} (double  
   precision). See the matlab program \textit{gendata.m} in the  
   \textit{input} directories under \textit{verification} to see how  
   the bathymetry files are generated for the case study experiments.  
     
   To use the partial cell capability, the variable \textbf{hFacMin}  
   needs to be set to a value between 0 and 1 (it is set to 1 by  
   default) corresponding to the minimum fractional size of the cell.  
   For example if the bottom cell is 500m thick and \textbf{hFacMin} is  
   set to 0.1, the actual thickness of the cell (i.e. used in the code)  
   can cover a range of discrete values 50m apart from 50m to 500m  
   depending on the value of the bottom depth (in \textbf{bathyFile})  
   at this point.  
     
   Note that the bottom depths (or pressures) need not coincide with  
  the model's levels as deduced from \textbf{delz} or \textbf{delp}.
   The model will interpolate the numbers in \textbf{bathyFile} so that  
   they match the levels obtained from \textbf{delz} or \textbf{delp}  
   and \textbf{hFacMin}.  
     
   (Note: the atmospheric case is a bit more complicated than what is  
   written here I think. To come soon...)  
   
 \item[time-discretization] \  
     
   The time steps are set through the real variables \textbf{deltaTMom}  
   and \textbf{deltaTtracer} (in s) which represent the time step for  
   the momentum and tracer equations, respectively. For synchronous  
   integrations, simply set the two variables to the same value (or you  
   can prescribe one time step only through the variable  
   \textbf{deltaT}). The Adams-Bashforth stabilizing parameter is set  
  through the variable \textbf{abEps} (dimensionless). The staggered
  baroclinic time stepping can be activated by setting the logical
   variable \textbf{staggerTimeStep} to \texttt{'.TRUE.'}.  
   
 \end{description}  
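The beta-plane description in the grid entry above amounts to the
relation
\begin{displaymath}
  f(y) = f_{0} + \beta \, y ,
\end{displaymath}
where $y$ is the distance (in m) from the southern edge of the domain,
so that $f=f_{0}$ at the southern edge when \textbf{beta} is nonzero.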
   
   
 \subsection{Equation of state}  
   
 First, because the model equations are written in terms of  
 perturbations, a reference thermodynamic state needs to be specified.  
 This is done through the 1D arrays \textbf{tRef} and \textbf{sRef}.  
 \textbf{tRef} specifies the reference potential temperature profile  
(in $^{\circ}$C for the ocean and K for the atmosphere) starting
 from the level k=1. Similarly, \textbf{sRef} specifies the reference  
 salinity profile (in ppt) for the ocean or the reference specific  
 humidity profile (in g/kg) for the atmosphere.  
   
 The form of the equation of state is controlled by the character  
 variables \textbf{buoyancyRelation} and \textbf{eosType}.  
 \textbf{buoyancyRelation} is set to \texttt{'OCEANIC'} by default and  
 needs to be set to \texttt{'ATMOSPHERIC'} for atmosphere simulations.  
 In this case, \textbf{eosType} must be set to \texttt{'IDEALGAS'}.  
 For the ocean, two forms of the equation of state are available:  
 linear (set \textbf{eosType} to \texttt{'LINEAR'}) and a polynomial  
approximation to the full nonlinear equation (set \textbf{eosType} to
 \texttt{'POLYNOMIAL'}). In the linear case, you need to specify the  
 thermal and haline expansion coefficients represented by the variables  
 \textbf{tAlpha} (in K$^{-1}$) and \textbf{sBeta} (in ppt$^{-1}$). For  
 the nonlinear case, you need to generate a file of polynomial  
 coefficients called \textit{POLY3.COEFFS}. To do this, use the program  
 \textit{utils/knudsen2/knudsen2.f} under the model tree (a Makefile is  
 available in the same directory and you will need to edit the number  
 and the values of the vertical levels in \textit{knudsen2.f} so that  
 they match those of your configuration).  
   
There are also higher polynomials for the equation of state:
 \begin{description}  
 \item[\texttt{'UNESCO'}:] The UNESCO equation of state formula of  
   Fofonoff and Millard \cite{fofonoff83}. This equation of state  
   assumes in-situ temperature, which is not a model variable; {\em its  
     use is therefore discouraged, and it is only listed for  
     completeness}.  
 \item[\texttt{'JMD95Z'}:] A modified UNESCO formula by Jackett and  
   McDougall \cite{jackett95}, which uses the model variable potential  
   temperature as input. The \texttt{'Z'} indicates that this equation  
   of state uses a horizontally and temporally constant pressure  
   $p_{0}=-g\rho_{0}z$.  
 \item[\texttt{'JMD95P'}:] A modified UNESCO formula by Jackett and  
   McDougall \cite{jackett95}, which uses the model variable potential  
   temperature as input. The \texttt{'P'} indicates that this equation  
   of state uses the actual hydrostatic pressure of the last time  
   step. Lagging the pressure in this way requires an additional pickup  
   file for restarts.  
 \item[\texttt{'MDJWF'}:] The new, more accurate and less expensive  
   equation of state by McDougall et~al. \cite{mcdougall03}. It also  
   requires lagging the pressure and therefore an additional pickup  
   file for restarts.  
 \end{description}  
None of these options requires a reference profile of temperature or
salinity.
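As an illustration, a linear equation of state could be selected with a
namelist fragment like the following in the runtime parameter file
\textit{data} (a sketch only: the numerical values are purely
illustrative, and the parameters are assumed to belong to the
\texttt{PARM01} namelist group, as in the verification experiments):
\begin{verbatim}
 &PARM01
 buoyancyRelation='OCEANIC',
 eosType='LINEAR',
 tAlpha=2.E-4,
 sBeta=7.4E-4,
 tRef=20.,15.,10.,5.,
 sRef=4*35.,
 &
\end{verbatim}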
   
 \subsection{Momentum equations}  
   
In this section, we focus for now only on the parameters that you are
likely to change, i.e., those related to forcing and dissipation.  The
details relevant to the vector-invariant form of the equations and the
various advection schemes are not covered for the moment. We assume
that you use the standard form of the momentum equations (i.e., the
flux form) with the default advection scheme.
Also, there are a few logical variables that allow you to turn on/off
various terms in the momentum equation. These variables are called
\textbf{momViscosity}, \textbf{momAdvection}, \textbf{momForcing},
\textbf{useCoriolis}, \textbf{momPressureForcing}, \textbf{momStepping},
and \textbf{metricTerms}, and are assumed to be set to
\texttt{'.TRUE.'} here.  Look at the file \textit{model/inc/PARAMS.h}
for a precise definition of these variables.
   
 \begin{description}  
 \item[initialization] \  
     
   The velocity components are initialized to 0 unless the simulation  
   is starting from a pickup file (see section on simulation control  
   parameters).  
   
 \item[forcing] \  
     
   This section only applies to the ocean. You need to generate  
   wind-stress data into two files \textbf{zonalWindFile} and  
   \textbf{meridWindFile} corresponding to the zonal and meridional  
   components of the wind stress, respectively (if you want the stress  
   to be along the direction of only one of the model horizontal axes,  
   you only need to generate one file). The format of the files is  
   similar to the bathymetry file. The zonal (meridional) stress data  
   are assumed to be in Pa and located at U-points (V-points). As for  
   the bathymetry, the precision with which to read the binary data is  
   controlled by the variable \textbf{readBinaryPrec}.  See the matlab  
   program \textit{gendata.m} in the \textit{input} directories under  
   \textit{verification} to see how simple analytical wind forcing data  
   are generated for the case study experiments.  
     
   There is also the possibility of prescribing time-dependent periodic  
   forcing. To do this, concatenate the successive time records into a  
   single file (for each stress component) ordered in a (x,y,t) fashion  
  and set the following variables: \textbf{periodicExternalForcing} to
  \texttt{'.TRUE.'}, \textbf{externForcingPeriod} to the period (in s)
  with which the forcing varies (typically 1 month), and
  \textbf{externForcingCycle} to the repeat time (in s) of the forcing
  (typically 1 year -- note: \textbf{externForcingCycle} must be a
  multiple of \textbf{externForcingPeriod}).  With these variables set
   up, the model will interpolate the forcing linearly at each  
   iteration.  
   
 \item[dissipation] \  
     
   The lateral eddy viscosity coefficient is specified through the  
   variable \textbf{viscAh} (in m$^{2}$s$^{-1}$). The vertical eddy  
   viscosity coefficient is specified through the variable  
   \textbf{viscAz} (in m$^{2}$s$^{-1}$) for the ocean and  
   \textbf{viscAp} (in Pa$^{2}$s$^{-1}$) for the atmosphere.  The  
  vertical diffusive fluxes can be computed implicitly by setting the
  logical variable \textbf{implicitViscosity} to \texttt{'.TRUE.'}.
   In addition, biharmonic mixing can be added as well through the  
   variable \textbf{viscA4} (in m$^{4}$s$^{-1}$). On a spherical polar  
  grid, you might also need to set the variable \textbf{cosPower},
  which is set to 0 by default and represents the power of the
  cosine of latitude that multiplies the viscosity. Slip or no-slip conditions
   at lateral and bottom boundaries are specified through the logical  
   variables \textbf{no\_slip\_sides} and \textbf{no\_slip\_bottom}. If  
   set to \texttt{'.FALSE.'}, free-slip boundary conditions are  
   applied. If no-slip boundary conditions are applied at the bottom, a  
   bottom drag can be applied as well. Two forms are available: linear  
  (set the variable \textbf{bottomDragLinear} in s$^{-1}$) and
   quadratic (set the variable \textbf{bottomDragQuadratic} in  
   m$^{-1}$).  
   
   The Fourier and Shapiro filters are described elsewhere.  
   
 \item[C-D scheme] \  
     
   If you run at a sufficiently coarse resolution, you will need the  
  C-D scheme for the computation of the Coriolis terms. The
  variable \textbf{tauCD}, which represents the C-D scheme coupling
  timescale (in s), needs to be set.
     
 \item[calculation of pressure/geopotential] \  
     
   First, to run a non-hydrostatic ocean simulation, set the logical  
   variable \textbf{nonHydrostatic} to \texttt{'.TRUE.'}. The pressure  
   field is then inverted through a 3D elliptic equation. (Note: this  
   capability is not available for the atmosphere yet.) By default, a  
   hydrostatic simulation is assumed and a 2D elliptic equation is used  
   to invert the pressure field. The parameters controlling the  
   behaviour of the elliptic solvers are the variables  
  \textbf{cg2dMaxIters} and \textbf{cg2dTargetResidual} for
  the 2D case and \textbf{cg3dMaxIters} and
  \textbf{cg3dTargetResidual} for the 3D case. You will probably not
  need to alter the default values.
     
   For the calculation of the surface pressure (for the ocean) or  
   surface geopotential (for the atmosphere) you need to set the  
   logical variables \textbf{rigidLid} and \textbf{implicitFreeSurface}  
   (set one to \texttt{'.TRUE.'} and the other to \texttt{'.FALSE.'}  
   depending on how you want to deal with the ocean upper or atmosphere  
   lower boundary).  
   
 \end{description}  
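To summarize, a typical set of momentum-related entries in the
\textit{data} file might look as follows (a sketch only: the values
are illustrative, the file names are hypothetical, and the wind-stress
files are assumed to be read through the \texttt{PARM05} namelist
group, which collects input file names in the verification
experiments):
\begin{verbatim}
 &PARM01
 viscAh=4.E2,
 viscAz=1.E-3,
 no_slip_sides=.TRUE.,
 no_slip_bottom=.TRUE.,
 bottomDragQuadratic=2.E-3,
 implicitFreeSurface=.TRUE.,
 rigidLid=.FALSE.,
 &
 &PARM05
 zonalWindFile='windx.bin',
 meridWindFile='windy.bin',
 &
\end{verbatim}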
   
 \subsection{Tracer equations}  
   
This section covers the tracer equations, i.e., the potential
temperature equation and the salinity (for the ocean) or specific
humidity (for the atmosphere) equation. As for the momentum equations,
we only describe for now the parameters that you are likely to change.
The logical variables \textbf{tempDiffusion}, \textbf{tempAdvection},
\textbf{tempForcing}, and \textbf{tempStepping} allow you to turn
on/off terms in the temperature equation (and likewise for salinity or
specific humidity with the variables \textbf{saltDiffusion},
\textbf{saltAdvection}, etc.). These variables are all assumed here to
be set to \texttt{'.TRUE.'}. Look at the file \textit{model/inc/PARAMS.h}
for a precise definition.
   
 \begin{description}  
 \item[initialization] \  
     
   The initial tracer data can be contained in the binary files  
   \textbf{hydrogThetaFile} and \textbf{hydrogSaltFile}. These files  
   should contain 3D data ordered in an (x,y,r) fashion with k=1 as the  
   first vertical level.  If no file names are provided, the tracers  
   are then initialized with the values of \textbf{tRef} and  
   \textbf{sRef} mentioned above (in the equation of state section). In  
   this case, the initial tracer data are uniform in x and y for each  
   depth level.  
   
 \item[forcing] \  
     
  This part applies mainly to the ocean, as the procedure for the
  atmosphere is not yet finalized.
     
   A combination of fluxes data and relaxation terms can be used for  
  driving the tracer equations.  For potential temperature, heat flux
  data (in W/m$^{2}$) can be stored in the 2D binary file
  \textbf{surfQfile}.  Alternatively, or in addition, the forcing can
  be specified through a relaxation term. The SST data to which the
  model surface temperatures are restored should be stored
  in the 2D binary file \textbf{thetaClimFile}. The corresponding
   relaxation time scale coefficient is set through the variable  
   \textbf{tauThetaClimRelax} (in s). The same procedure applies for  
   salinity with the variable names \textbf{EmPmRfile},  
   \textbf{saltClimFile}, and \textbf{tauSaltClimRelax} for freshwater  
   flux (in m/s) and surface salinity (in ppt) data files and  
   relaxation time scale coefficient (in s), respectively. Also for  
   salinity, if the CPP key \textbf{USE\_NATURAL\_BCS} is turned on,  
  natural boundary conditions are applied, i.e., when computing the
   surface salinity tendency, the freshwater flux is multiplied by the  
   model surface salinity instead of a constant salinity value.  
     
   As for the other input files, the precision with which to read the  
   data is controlled by the variable \textbf{readBinaryPrec}.  
   Time-dependent, periodic forcing can be applied as well following  
   the same procedure used for the wind forcing data (see above).  
   
 \item[dissipation] \  
     
   Lateral eddy diffusivities for temperature and salinity/specific  
   humidity are specified through the variables \textbf{diffKhT} and  
   \textbf{diffKhS} (in m$^{2}$/s). Vertical eddy diffusivities are  
   specified through the variables \textbf{diffKzT} and  
  \textbf{diffKzS} (in m$^{2}$/s) for the ocean and \textbf{diffKpT}
  and \textbf{diffKpS} (in Pa$^{2}$/s) for the atmosphere. The
   vertical diffusive fluxes can be computed implicitly by setting the  
   logical variable \textbf{implicitDiffusion} to \texttt{'.TRUE.'}.  
   In addition, biharmonic diffusivities can be specified as well  
   through the coefficients \textbf{diffK4T} and \textbf{diffK4S} (in  
   m$^{4}$/s). Note that the cosine power scaling (specified through  
   \textbf{cosPower}---see the momentum equations section) is applied to  
   the tracer diffusivities (Laplacian and biharmonic) as well. The  
   Gent and McWilliams parameterization for oceanic tracers is  
   described in the package section. Finally, note that tracers can be  
   also subject to Fourier and Shapiro filtering (see the corresponding  
   section on these filters).  
   
 \item[ocean convection] \  
     
   Two options are available to parameterize ocean convection: one is  
   to use the convective adjustment scheme. In this case, you need to  
   set the variable \textbf{cadjFreq}, which represents the frequency  
   (in s) with which the adjustment algorithm is called, to a non-zero  
   value (if set to a negative value by the user, the model will set it  
   to the tracer time step). The other option is to parameterize  
   convection with implicit vertical diffusion. To do this, set the  
   logical variable \textbf{implicitDiffusion} to \texttt{'.TRUE.'}  
   and the real variable \textbf{ivdc\_kappa} to a value (in m$^{2}$/s)  
   you wish the tracer vertical diffusivities to have when mixing  
  tracers vertically due to static instabilities. Note that
  \textbf{cadjFreq} and \textbf{ivdc\_kappa} cannot both have
  non-zero values.
   
 \end{description}  
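By way of example, tracer-related parameters could be collected in the
\textit{data} file as in the following sketch (illustrative values and
hypothetical file names; note that in some model versions the
relaxation time scales \textbf{tauThetaClimRelax} and
\textbf{tauSaltClimRelax} appear in the time-stepping namelist group
rather than in \texttt{PARM01}):
\begin{verbatim}
 &PARM01
 diffKhT=1.E3,
 diffKzT=1.E-5,
 diffKhS=1.E3,
 diffKzS=1.E-5,
 implicitDiffusion=.TRUE.,
 ivdc_kappa=10.,
 &
 &PARM05
 hydrogThetaFile='theta.init.bin',
 thetaClimFile='SST.bin',
 &
\end{verbatim}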
   
 \subsection{Simulation controls}  
   
The model ``clock'' is defined by the variable \textbf{deltaTClock}
 (in s) which determines the IO frequencies and is used in tagging  
 output.  Typically, you will set it to the tracer time step for  
 accelerated runs (otherwise it is simply set to the default time step  
 \textbf{deltaT}).  Frequency of checkpointing and dumping of the model  
 state are referenced to this clock (see below).  
   
 \begin{description}  
 \item[run duration] \  
     
   The beginning of a simulation is set by specifying a start time (in  
   s) through the real variable \textbf{startTime} or by specifying an  
   initial iteration number through the integer variable  
  \textbf{nIter0}. If these variables are set to nonzero values, the
  model will look for a ``pickup'' file \textit{pickup.0000nIter0} to
   restart the integration. The end of a simulation is set through the  
   real variable \textbf{endTime} (in s).  Alternatively, you can  
   specify instead the number of time steps to execute through the  
   integer variable \textbf{nTimeSteps}.  
   
 \item[frequency of output] \  
     
   Real variables defining frequencies (in s) with which output files  
   are written on disk need to be set up. \textbf{dumpFreq} controls  
   the frequency with which the instantaneous state of the model is  
   saved. \textbf{chkPtFreq} and \textbf{pchkPtFreq} control the output  
   frequency of rolling and permanent checkpoint files, respectively.  
  See section 1.5.1, ``Output files'', for the definition of model state
  and checkpoint files. In addition, time-averaged fields can be written
  out by setting the variable \textbf{taveFreq} (in s).  The precision
  with which to write the binary data is controlled by the integer
  variable \textbf{writeBinaryPrec} (set it to \texttt{32} or
  \texttt{64}).
   
 \end{description}  
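Putting the simulation controls together, a short test integration
might be configured as in this sketch (values are illustrative only;
these parameters belong to the time-stepping namelist group
\texttt{PARM03} in the verification experiments):
\begin{verbatim}
 &PARM03
 nIter0=0,
 nTimeSteps=100,
 deltaT=1200.,
 chkptFreq=2592000.,
 pChkptFreq=31104000.,
 dumpFreq=2592000.,
 taveFreq=2592000.,
 &
\end{verbatim}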
   
   
 %%% Local Variables:  
 %%% mode: latex  
 %%% TeX-master: t  
 %%% End:  
