--- manual/s_getstarted/text/getting_started.tex 2001/10/25 18:36:54 1.9
+++ manual/s_getstarted/text/getting_started.tex 2004/01/29 19:22:35 1.18
@@ -1,4 +1,4 @@
-% $Header: /home/ubuntu/mnt/e9_copy/manual/s_getstarted/text/getting_started.tex,v 1.9 2001/10/25 18:36:54 cnh Exp $
+% $Header: /home/ubuntu/mnt/e9_copy/manual/s_getstarted/text/getting_started.tex,v 1.18 2004/01/29 19:22:35 edhill Exp $
% $Name: $
%\section{Getting started}
@@ -18,31 +18,43 @@
\section{Where to find information}
\label{sect:whereToFindInfo}
-A web site is maintained for release 1 (Sealion) of MITgcm:
+A web site is maintained for release 2 (``Pelican'') of MITgcm:
+\begin{rawhtml} \end{rawhtml}
\begin{verbatim}
-http://mitgcm.org/sealion
+http://mitgcm.org/pelican
\end{verbatim}
+\begin{rawhtml} \end{rawhtml}
Here you will find an on-line version of this document, a
``browsable'' copy of the code and a searchable database of the model
and site, as well as links for downloading the model and
-documentation, to data-sources and other related sites.
+documentation, to data-sources, and other related sites.
-There is also a support news group for the model that you can email at
-\texttt{support@mitgcm.org} or browse at:
+There is also a web-archived support mailing list for the model that
+you can email at \texttt{MITgcm-support@mitgcm.org} or browse at:
+\begin{rawhtml} \end{rawhtml}
+\begin{verbatim}
+http://mitgcm.org/mailman/listinfo/mitgcm-support/
+http://mitgcm.org/pipermail/mitgcm-support/
+\end{verbatim}
+\begin{rawhtml} \end{rawhtml}
+Essentially all of the MITgcm web pages can be searched using a
+popular search engine such as Google or through our own search facility:
+\begin{rawhtml} \end{rawhtml}
\begin{verbatim}
-news://mitgcm.org/mitgcm.support
+http://mitgcm.org/htdig/
\end{verbatim}
-A mail to the email list will reach all the developers and be archived
-on the newsgroup. A users email list will be established at some time
-in the future.
+\begin{rawhtml} \end{rawhtml}
+%%% http://www.google.com/search?q=hydrostatic+site%3Amitgcm.org
+
+
\section{Obtaining the code}
\label{sect:obtainingCode}
MITgcm can be downloaded from our system by following
the instructions below. As a courtesy we ask that you send e-mail to us at
-\begin{rawhtml} \end{rawhtml}
-support@mitgcm.org
+\begin{rawhtml} \end{rawhtml}
+MITgcm-support@mitgcm.org
\begin{rawhtml} \end{rawhtml}
to enable us to keep track of who's using the model and in what application.
You can download the model two ways:
@@ -72,31 +84,53 @@
track of your changes. If CVS is not available on your machine, you can also
download a tar file.
-Before you can use CVS, the following environment variable has to be set in
-your .cshrc or .tcshrc:
+Before you can use CVS, the following environment variable(s) should
+be set within your shell. For a csh or tcsh shell, put the following
+\begin{verbatim}
+% setenv CVSROOT :pserver:cvsanon@mitgcm.org:/u/gcmpack
+\end{verbatim}
+in your .cshrc or .tcshrc file. For bash or sh shells, put:
\begin{verbatim}
-% setenv CVSROOT :pserver:cvsanon@mitgcm.org:/u/u0/gcmpack
+% export CVSROOT=':pserver:cvsanon@mitgcm.org:/u/gcmpack'
\end{verbatim}
+in your .profile or .bashrc file.
+
-To start using CVS, register with the MITgcm CVS server using command:
+To get MITgcm through CVS, first register with the MITgcm CVS server
+using the command:
\begin{verbatim}
% cvs login ( CVS password: cvsanon )
\end{verbatim}
-You only need to do ``cvs login'' once.
+You only need to do a ``cvs login'' once.
-To obtain the sources for release1 type:
+To obtain the latest sources type:
+\begin{verbatim}
+% cvs co MITgcm
+\end{verbatim}
+or to get a specific release type:
\begin{verbatim}
-% cvs co -d directory -P -r release1 MITgcmUV
+% cvs co -P -r checkpoint52i_post MITgcm
\end{verbatim}
+The MITgcm web site contains further directions concerning the source
+code and CVS. It also contains a web interface to our CVS archive so
+that one may easily view the state of files, revisions, and other
+development milestones:
+\begin{rawhtml} \end{rawhtml}
+\begin{verbatim}
+http://mitgcm.org/source_code.html
+\end{verbatim}
+\begin{rawhtml} \end{rawhtml}
+
-This creates a directory called \textit{directory}. If \textit{directory}
-exists this command updates your code based on the repository. Each
-directory in the source tree contains a directory \textit{CVS}. This
-information is required by CVS to keep track of your file versions with
-respect to the repository. Don't edit the files in \textit{CVS}!
-You can also use CVS to download code updates. More extensive
-information on using CVS for maintaining MITgcm code can be found
-\begin{rawhtml} \end{rawhtml}
+The checkout process creates a directory called \textit{MITgcm}. If
+the directory \textit{MITgcm} exists, this command updates your code
+based on the repository. Each directory in the source tree contains a
+directory \textit{CVS}. This information is required by CVS to keep
+track of your file versions with respect to the repository. Don't edit
+the files in \textit{CVS}! You can also use CVS to download code
+updates. More extensive information on using CVS for maintaining
+MITgcm code can be found
+\begin{rawhtml} \end{rawhtml}
here
\begin{rawhtml} \end{rawhtml}
.
@@ -106,7 +140,7 @@
\label{sect:conventionalDownload}
If you do not have CVS on your system, you can download the model as a
-tar file from the reference web site at:
+tar file from the web site at:
\begin{rawhtml} \end{rawhtml}
\begin{verbatim}
http://mitgcm.org/download/
@@ -114,161 +148,249 @@
\begin{rawhtml} \end{rawhtml}
The tar file still contains CVS information which we urge you not to
delete; even if you do not use CVS yourself the information can help
-us if you should need to send us your copy of the code.
+us if you should need to send us your copy of the code. If a recent
+tar file does not exist, then please contact the developers through
+the
+\begin{rawhtml} \end{rawhtml}
+MITgcm-support@mitgcm.org
+\begin{rawhtml} \end{rawhtml}
+mailing list.
+
+\paragraph*{Upgrading from an earlier version}
+
+If you already have an earlier version of the code you can ``upgrade''
+your copy instead of downloading the entire repository again. First,
+``cd'' (change directory) to the top of your working copy:
+\begin{verbatim}
+% cd MITgcm
+\end{verbatim}
+and then issue a cvs update command such as:
+\begin{verbatim}
+% cvs -q update -r checkpoint52i_post -d -P
+\end{verbatim}
+This will update the ``tag'' to ``checkpoint52i\_post'', add any new
+directories (-d), and remove any empty directories (-P). The -q
+option tells CVS to be quiet, which reduces the number of messages
+you'll see in the terminal. If you have modified the code prior to
+upgrading, CVS
+will try to merge your changes with the upgrades. If there is a
+conflict between your modifications and the upgrade, it will report
+that file with a ``C'' in front, e.g.:
+\begin{verbatim}
+C model/src/ini_parms.F
+\end{verbatim}
+If the list of conflicts scrolled off the screen, you can re-issue the
+cvs update command and it will report the conflicts. Conflicts are
+indicated in the code by the delimiters ``$<<<<<<<$'', ``======='' and
+``$>>>>>>>$''. For example,
+{\small
+\begin{verbatim}
+<<<<<<< ini_parms.F
+ & bottomDragLinear,myOwnBottomDragCoefficient,
+=======
+ & bottomDragLinear,bottomDragQuadratic,
+>>>>>>> 1.18
+\end{verbatim}
+}
+means that you added ``myOwnBottomDragCoefficient'' to a namelist at
+the same time and place that we added ``bottomDragQuadratic''. You
+need to resolve this conflict; in this case the line should be
+changed to:
+{\small
+\begin{verbatim}
+ & bottomDragLinear,bottomDragQuadratic,myOwnBottomDragCoefficient,
+\end{verbatim}
+}
+and the lines with the delimiters ($<<<<<<<$, =======, $>>>>>>>$)
+should be deleted.
+Unless you are making modifications which exactly parallel
+developments we make, these types of conflicts should be rare.
+
+\paragraph*{Upgrading to the current pre-release version}
+
+We don't make a ``release'' for every little patch and bug fix in
+order to keep the frequency of upgrades to a minimum. However, if you
+have run into a problem that ``we have already fixed in the
+latest code'' and we haven't made a ``tag'' or ``release'' since that
+patch, then you'll need to get the latest code:
+\begin{verbatim}
+% cvs -q update -A -d -P
+\end{verbatim}
+Unlike the ``check-out'' and ``update'' procedures above, there is no
+``tag'' or release name. The -A option tells CVS to upgrade to the
+very latest version. As a rule, we don't recommend this since you
+might upgrade while we are in the process of checking in the code, in
+which case you may only have part of a patch. Using this method of
+updating also means we can't tell what version of the code you are
+working with. So please be sure you understand what you're doing.
\section{Model and directory structure}
-The ``numerical'' model is contained within a execution environment support
-wrapper. This wrapper is designed to provide a general framework for
-grid-point models. MITgcmUV is a specific numerical model that uses the
-framework. Under this structure the model is split into execution
-environment support code and conventional numerical model code. The
-execution environment support code is held under the \textit{eesupp}
-directory. The grid point model code is held under the \textit{model}
-directory. Code execution actually starts in the \textit{eesupp} routines
-and not in the \textit{model} routines. For this reason the top-level
-\textit{MAIN.F} is in the \textit{eesupp/src} directory. In general,
-end-users should not need to worry about this level. The top-level routine
-for the numerical part of the code is in \textit{model/src/THE\_MODEL\_MAIN.F%
-}. Here is a brief description of the directory structure of the model under
-the root tree (a detailed description is given in section 3: Code structure).
+The ``numerical'' model is contained within an execution environment
+support wrapper. This wrapper is designed to provide a general
+framework for grid-point models. MITgcmUV is a specific numerical
+model that uses the framework. Under this structure the model is split
+into execution environment support code and conventional numerical
+model code. The execution environment support code is held under the
+\textit{eesupp} directory. The grid point model code is held under the
+\textit{model} directory. Code execution actually starts in the
+\textit{eesupp} routines and not in the \textit{model} routines. For
+this reason the top-level \textit{MAIN.F} is in the
+\textit{eesupp/src} directory. In general, end-users should not need
+to worry about this level. The top-level routine for the numerical
+part of the code is in \textit{model/src/THE\_MODEL\_MAIN.F}. Here is
+a brief description of the directory structure of the model under the
+root tree (a detailed description is given in section 3: Code
+structure).
\begin{itemize}
-\item \textit{bin}: this directory is initially empty. It is the default
-directory in which to compile the code.
+\item \textit{bin}: this directory is initially empty. It is the
+ default directory in which to compile the code.
+
\item \textit{diags}: contains the code relative to time-averaged
-diagnostics. It is subdivided into two subdirectories \textit{inc} and
-\textit{src} that contain include files (*.\textit{h} files) and Fortran
-subroutines (*.\textit{F} files), respectively.
+ diagnostics. It is subdivided into two subdirectories \textit{inc}
+ and \textit{src} that contain include files (*.\textit{h} files) and
+ Fortran subroutines (*.\textit{F} files), respectively.
\item \textit{doc}: contains brief documentation notes.
-
-\item \textit{eesupp}: contains the execution environment source code. Also
-subdivided into two subdirectories \textit{inc} and \textit{src}.
-
-\item \textit{exe}: this directory is initially empty. It is the default
-directory in which to execute the code.
-
-\item \textit{model}: this directory contains the main source code. Also
-subdivided into two subdirectories \textit{inc} and \textit{src}.
-
-\item \textit{pkg}: contains the source code for the packages. Each package
-corresponds to a subdirectory. For example, \textit{gmredi} contains the
-code related to the Gent-McWilliams/Redi scheme, \textit{aim} the code
-relative to the atmospheric intermediate physics. The packages are described
-in detail in section 3.
-
-\item \textit{tools}: this directory contains various useful tools. For
-example, \textit{genmake} is a script written in csh (C-shell) that should
-be used to generate your makefile. The directory \textit{adjoint} contains
-the makefile specific to the Tangent linear and Adjoint Compiler (TAMC) that
-generates the adjoint code. The latter is described in details in part V.
-
+
+\item \textit{eesupp}: contains the execution environment source code.
+ Also subdivided into two subdirectories \textit{inc} and
+ \textit{src}.
+
+\item \textit{exe}: this directory is initially empty. It is the
+ default directory in which to execute the code.
+
+\item \textit{model}: this directory contains the main source code.
+ Also subdivided into two subdirectories \textit{inc} and
+ \textit{src}.
+
+\item \textit{pkg}: contains the source code for the packages. Each
+ package corresponds to a subdirectory. For example, \textit{gmredi}
+ contains the code related to the Gent-McWilliams/Redi scheme,
+  \textit{aim} the code related to the atmospheric intermediate
+ physics. The packages are described in detail in section 3.
+
+\item \textit{tools}: this directory contains various useful tools.
+ For example, \textit{genmake2} is a script written in csh (C-shell)
+ that should be used to generate your makefile. The directory
+ \textit{adjoint} contains the makefile specific to the Tangent
+ linear and Adjoint Compiler (TAMC) that generates the adjoint code.
+  The latter is described in detail in part V.
+
\item \textit{utils}: this directory contains various utilities. The
-subdirectory \textit{knudsen2} contains code and a makefile that
-compute coefficients of the polynomial approximation to the knudsen
-formula for an ocean nonlinear equation of state. The \textit{matlab}
-subdirectory contains matlab scripts for reading model output directly
-into matlab. \textit{scripts} contains C-shell post-processing
-scripts for joining processor-based and tiled-based model output.
+ subdirectory \textit{knudsen2} contains code and a makefile that
+  compute coefficients of the polynomial approximation to the Knudsen
+ formula for an ocean nonlinear equation of state. The
+ \textit{matlab} subdirectory contains matlab scripts for reading
+ model output directly into matlab. \textit{scripts} contains C-shell
+  post-processing scripts for joining processor-based and tile-based
+ model output.
+
+\item \textit{verification}: this directory contains the model
+ examples. See section \ref{sect:modelExamples}.
-\item \textit{verification}: this directory contains the model examples. See
-section \ref{sect:modelExamples}.
\end{itemize}
\section{Example experiments}
\label{sect:modelExamples}
-The MITgcm distribution comes with a set of twenty-four pre-configured
-numerical experiments. Some of these examples experiments are tests of
-individual parts of the model code, but many are fully fledged numerical
-simulations. A few of the examples are used for tutorial documentation
-in sections \ref{sec:eg-baro} - \ref{sec:eg-global}. The other examples
-follow the same general structure as the tutorial examples. However,
-they only include brief instructions in a text file called {\it README}.
-The examples are located in subdirectories under
-the directory \textit{verification}. Each
-example is briefly described below.
+%% a set of twenty-four pre-configured numerical experiments
+
+The MITgcm distribution comes with more than a dozen pre-configured
+numerical experiments. Some of these example experiments are tests of
+individual parts of the model code, but many are fully fledged
+numerical simulations. A few of the examples are used for tutorial
+documentation in sections \ref{sect:eg-baro} - \ref{sect:eg-global}.
+The other examples follow the same general structure as the tutorial
+examples. However, they only include brief instructions in a text file
+called {\it README}. The examples are located in subdirectories under
+the directory \textit{verification}. Each example is briefly described
+below.
\subsection{Full list of model examples}
\begin{enumerate}
+
\item \textit{exp0} - single layer, ocean double gyre (barotropic with
-free-surface). This experiment is described in detail in section
-\ref{sec:eg-baro}.
-
-\item \textit{exp1} - Four layer, ocean double gyre. This experiment is described in detail in section
-\ref{sec:eg-baroc}.
+ free-surface). This experiment is described in detail in section
+ \ref{sect:eg-baro}.
+\item \textit{exp1} - Four layer, ocean double gyre. This experiment
+ is described in detail in section \ref{sect:eg-baroc}.
+
\item \textit{exp2} - 4x4 degree global ocean simulation with steady
-climatological forcing. This experiment is described in detail in section
-\ref{sec:eg-global}.
-
-\item \textit{exp4} - Flow over a Gaussian bump in open-water or channel
-with open boundaries.
-
-\item \textit{exp5} - Inhomogenously forced ocean convection in a doubly
-periodic box.
+ climatological forcing. This experiment is described in detail in
+ section \ref{sect:eg-global}.
+
+\item \textit{exp4} - Flow over a Gaussian bump in open-water or
+ channel with open boundaries.
+
+\item \textit{exp5} - Inhomogeneously forced ocean convection in a
+ doubly periodic box.
\item \textit{front\_relax} - Relaxation of an ocean thermal front (test for
Gent/McWilliams scheme). 2D (Y-Z).
-\item \textit{internal wave} - Ocean internal wave forced by open boundary
-conditions.
-
+\item \textit{internal wave} - Ocean internal wave forced by open
+ boundary conditions.
+
\item \textit{natl\_box} - Eastern subtropical North Atlantic with KPP
-scheme; 1 month integration
-
-\item \textit{hs94.1x64x5} - Zonal averaged atmosphere using Held and Suarez
-'94 forcing.
-
-\item \textit{hs94.128x64x5} - 3D atmosphere dynamics using Held and Suarez
-'94 forcing.
-
+  scheme; 1-month integration.
+
+\item \textit{hs94.1x64x5} - Zonal averaged atmosphere using Held and
+ Suarez '94 forcing.
+
+\item \textit{hs94.128x64x5} - 3D atmosphere dynamics using Held and
+ Suarez '94 forcing.
+
\item \textit{hs94.cs-32x32x5} - 3D atmosphere dynamics using Held and
-Suarez '94 forcing on the cubed sphere.
-
-\item \textit{aim.5l\_zon-ave} - Intermediate Atmospheric physics. Global
-Zonal Mean configuration, 1x64x5 resolution.
-
-\item \textit{aim.5l\_XZ\_Equatorial\_Slice} - Intermediate Atmospheric
-physics, equatorial Slice configuration.
-2D (X-Z).
-
+ Suarez '94 forcing on the cubed sphere.
+
+\item \textit{aim.5l\_zon-ave} - Intermediate Atmospheric physics.
+ Global Zonal Mean configuration, 1x64x5 resolution.
+
+\item \textit{aim.5l\_XZ\_Equatorial\_Slice} - Intermediate
+ Atmospheric physics, equatorial Slice configuration. 2D (X-Z).
+
\item \textit{aim.5l\_Equatorial\_Channel} - Intermediate Atmospheric
-physics. 3D Equatorial Channel configuration.
-
+ physics. 3D Equatorial Channel configuration.
+
\item \textit{aim.5l\_LatLon} - Intermediate Atmospheric physics.
-Global configuration, on latitude longitude grid with 128x64x5 grid points
-($2.8^\circ{\rm degree}$ resolution).
-
-\item \textit{adjustment.128x64x1} Barotropic adjustment
-problem on latitude longitude grid with 128x64 grid points ($2.8^\circ{\rm degree}$ resolution).
-
-\item \textit{adjustment.cs-32x32x1}
-Barotropic adjustment
-problem on cube sphere grid with 32x32 points per face ( roughly
-$2.8^\circ{\rm degree}$ resolution).
-
+  Global configuration, on latitude-longitude grid with 128x64x5 grid
+  points ($2.8^\circ$ resolution).
+
+\item \textit{adjustment.128x64x1} Barotropic adjustment problem on
+  latitude-longitude grid with 128x64 grid points ($2.8^\circ$
+  resolution).
+
+\item \textit{adjustment.cs-32x32x1} Barotropic adjustment problem on
+  cube sphere grid with 32x32 points per face (roughly $2.8^\circ$
+  resolution).
+
\item \textit{advect\_cs} Two-dimensional passive advection test on
-cube sphere grid.
-
-\item \textit{advect\_xy} Two-dimensional (horizontal plane) passive advection
-test on Cartesian grid.
-
-\item \textit{advect\_yz} Two-dimensional (vertical plane) passive advection test on Cartesian grid.
-
-\item \textit{carbon} Simple passive tracer experiment. Includes derivative
-calculation. Described in detail in section \ref{sec:eg-carbon-ad}.
+ cube sphere grid.
+
+\item \textit{advect\_xy} Two-dimensional (horizontal plane) passive
+ advection test on Cartesian grid.
+
+\item \textit{advect\_yz} Two-dimensional (vertical plane) passive
+ advection test on Cartesian grid.
+
+\item \textit{carbon} Simple passive tracer experiment. Includes
+ derivative calculation. Described in detail in section
+ \ref{sect:eg-carbon-ad}.
\item \textit{flt\_example} Example of using float package.
-
-\item \textit{global\_ocean.90x40x15} Global circulation with
-GM, flux boundary conditions and poles.
-
-\item \textit{solid-body.cs-32x32x1} Solid body rotation test for cube sphere
-grid.
+
+\item \textit{global\_ocean.90x40x15} Global circulation with GM, flux
+ boundary conditions and poles.
+
+\item \textit{global\_ocean\_pressure} Global circulation in pressure
+ coordinate (non-Boussinesq ocean model). Described in detail in
+ section \ref{sect:eg-globalpressure}.
+
+\item \textit{solid-body.cs-32x32x1} Solid body rotation test for cube
+ sphere grid.
\end{enumerate}
@@ -278,51 +400,56 @@
\begin{itemize}
\item \textit{code}: contains the code particular to the example. At a
-minimum, this directory includes the following files:
+ minimum, this directory includes the following files:
-\begin{itemize}
-\item \textit{code/CPP\_EEOPTIONS.h}: declares CPP keys relative to the
-``execution environment'' part of the code. The default version is located
-in \textit{eesupp/inc}.
-
-\item \textit{code/CPP\_OPTIONS.h}: declares CPP keys relative to the
-``numerical model'' part of the code. The default version is located in
-\textit{model/inc}.
-
-\item \textit{code/SIZE.h}: declares size of underlying computational grid.
-The default version is located in \textit{model/inc}.
+ \begin{itemize}
+ \item \textit{code/CPP\_EEOPTIONS.h}: declares CPP keys relative to
+ the ``execution environment'' part of the code. The default
+ version is located in \textit{eesupp/inc}.
+
+ \item \textit{code/CPP\_OPTIONS.h}: declares CPP keys relative to
+ the ``numerical model'' part of the code. The default version is
+ located in \textit{model/inc}.
+
+ \item \textit{code/SIZE.h}: declares size of underlying
+ computational grid. The default version is located in
+ \textit{model/inc}.
+ \end{itemize}
+
+ In addition, other include files and subroutines might be present in
+  \textit{code} depending on the particular experiment. See section 2
+ for more details.
+
+\item \textit{input}: contains the input data files required to run
+ the example. At a minimum, the \textit{input} directory contains the
+ following files:
+
+ \begin{itemize}
+ \item \textit{input/data}: this file, written as a namelist,
+ specifies the main parameters for the experiment.
+
+ \item \textit{input/data.pkg}: contains parameters relative to the
+ packages used in the experiment.
+
+ \item \textit{input/eedata}: this file contains ``execution
+ environment'' data. At present, this consists of a specification
+ of the number of threads to use in $X$ and $Y$ under multithreaded
+ execution.
+ \end{itemize}
+
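+  As an example of the last of these, a minimal \textit{eedata}
+  specifying single-threaded execution might contain (a sketch; exact
+  copies ship with each example experiment):
+\begin{verbatim}
+ &EEPARMS
+ nTx=1,
+ nTy=1,
+ &
+\end{verbatim}
+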
+ In addition, you will also find in this directory the forcing and
+ topography files as well as the files describing the initial state
+ of the experiment. This varies from experiment to experiment. See
+ section 2 for more details.
+
+\item \textit{results}: this directory contains the output file
+ \textit{output.txt} produced by the simulation example. This file is
+ useful for comparison with your own output when you run the
+ experiment.
\end{itemize}
-In addition, other include files and subroutines might be present in \textit{%
-code} depending on the particular experiment. See section 2 for more details.
-
-\item \textit{input}: contains the input data files required to run the
-example. At a minimum, the \textit{input} directory contains the following
-files:
-
-\begin{itemize}
-\item \textit{input/data}: this file, written as a namelist, specifies the
-main parameters for the experiment.
-
-\item \textit{input/data.pkg}: contains parameters relative to the packages
-used in the experiment.
-
-\item \textit{input/eedata}: this file contains ``execution environment''
-data. At present, this consists of a specification of the number of threads
-to use in $X$ and $Y$ under multithreaded execution.
-\end{itemize}
-
-In addition, you will also find in this directory the forcing and topography
-files as well as the files describing the initial state of the experiment.
-This varies from experiment to experiment. See section 2 for more details.
-
-\item \textit{results}: this directory contains the output file \textit{%
-output.txt} produced by the simulation example. This file is useful for
-comparison with your own output when you run the experiment.
-\end{itemize}
-
-Once you have chosen the example you want to run, you are ready to compile
-the code.
+Once you have chosen the example you want to run, you are ready to
+compile the code.
\section{Building the code}
\label{sect:buildingCode}
@@ -330,44 +457,67 @@
To compile the code, we use the {\em make} program. This uses a file
({\em Makefile}) that allows us to pre-process source files, specify
compiler and optimization options and also figures out any file
-dependencies. We supply a script ({\em genmake}), described in section
-\ref{sect:genmake}, that automatically creates the {\em Makefile} for
-you. You then need to build the dependencies and compile the code.
+dependencies. We supply a script ({\em genmake2}), described in
+section \ref{sect:genmake}, that automatically creates the {\em
+ Makefile} for you. You then need to build the dependencies and
+compile the code.
As an example, let's assume that you want to build and run experiment
-\textit{verification/exp2}. The are multiple ways and places to actually
-do this but here let's build the code in
+\textit{verification/exp2}. There are multiple ways and places to
+actually do this, but here let's build the code in
\textit{verification/exp2/input}:
\begin{verbatim}
% cd verification/exp2/input
\end{verbatim}
First, build the {\em Makefile}:
\begin{verbatim}
-% ../../../tools/genmake -mods=../code
+% ../../../tools/genmake2 -mods=../code
\end{verbatim}
-The command line option tells {\em genmake} to override model source
+The command line option tells {\em genmake2} to override model source
code with any files in the directory {\em ./code/}.
-If there is no \textit{.genmakerc} in the \textit{input} directory, you have
-to use the following options when invoking \textit{genmake}:
+On many systems, the {\em genmake2} program will be able to
+automatically recognize the hardware, find compilers and other tools
+within the user's path (``echo \$PATH''), and then choose an
+appropriate set of options from the files contained in the {\em
+ tools/build\_options} directory. Under some circumstances, a user
+may have to create a new ``optfile'' in order to specify the exact
+combination of compiler, compiler flags, libraries, and other options
+necessary to build a particular configuration of MITgcm. In such
+cases, it is generally helpful to read the existing ``optfiles'' and
+mimic their syntax.
+
+Through the MITgcm-support list, the MITgcm developers are willing to
+provide help writing or modifying ``optfiles''. We encourage users
+to post new ``optfiles'' (particularly ones for new machines or
+architectures) to the
+\begin{rawhtml} \end{rawhtml}
+MITgcm-support@mitgcm.org
+\begin{rawhtml} \end{rawhtml}
+list.
+
+To specify an optfile to {\em genmake2}, the syntax is:
\begin{verbatim}
-% ../../../tools/genmake -mods=../code
+% ../../../tools/genmake2 -mods=../code -of /path/to/optfile
\end{verbatim}
-Next, create the dependencies:
+Once a {\em Makefile} has been generated, we create the dependencies:
\begin{verbatim}
% make depend
\end{verbatim}
-This modifies {\em Makefile} by attaching a [long] list of files on
-which other files depend. The purpose of this is to reduce
-re-compilation if and when you start to modify the code. {\tt make
-depend} also created links from the model source to this directory.
+This modifies the {\em Makefile} by attaching a [long] list of files
+upon which other files depend. The purpose of this is to reduce
+re-compilation if and when you start to modify the code. The {\tt make
+ depend} command also creates links from the model source to this
+directory.
-Now compile the code:
+Next, compile the code:
\begin{verbatim}
% make
\end{verbatim}
The {\tt make} command creates an executable called \textit{mitgcmuv}.
+Additional make ``targets'' are defined within the makefile to aid in
+the production of adjoint and other versions of MITgcm.
Now you are ready to run the model. General instructions for doing so are
given in section \ref{sect:runModel}. Here, we can run the model with:
@@ -385,17 +535,18 @@
convenience. You can also configure and compile the code in other
-locations, for example on a scratch disk with out having to copy the
-entire source tree. The only requirement to do so is you have {\tt
-genmake} in your path or you know the absolute path to {\tt genmake}.
+locations, for example on a scratch disk without having to copy the
+entire source tree. The only requirement to do so is that you have
+{\tt genmake2} in your path or you know the absolute path to {\tt
+genmake2}.
-The following sections outline some possible methods of organizing you
-source and data.
+The following sections outline some possible methods of organizing
+your source and data.
\subsubsection{Building from the {\em ../code directory}}
This is just as simple as building in the {\em input/} directory:
\begin{verbatim}
% cd verification/exp2/code
-% ../../../tools/genmake
+% ../../../tools/genmake2
% make depend
% make
\end{verbatim}
@@ -424,7 +575,7 @@
% cd verification/exp2
% mkdir build
% cd build
-% ../../../tools/genmake -mods=../code
+% ../../../tools/genmake2 -mods=../code
% make depend
% make
\end{verbatim}
@@ -446,7 +597,7 @@
% ./mitgcmuv > output.txt
\end{verbatim}
-\subsubsection{Building from on a scratch disk}
+\subsubsection{Building on a scratch disk}
Model object files and output data can use up large amounts of disk
space so it is often the case that you will be operating on a large
@@ -454,7 +605,8 @@
following commands will build the model in {\em /scratch/exp2-run1}:
\begin{verbatim}
% cd /scratch/exp2-run1
-% ~/MITgcm/tools/genmake -rootdir=~/MITgcm -mods=~/MITgcm/verification/exp2/code
+% ~/MITgcm/tools/genmake2 -rootdir=~/MITgcm \
+ -mods=~/MITgcm/verification/exp2/code
% make depend
% make
\end{verbatim}
@@ -470,7 +622,8 @@
% cd /scratch/exp2
% mkdir build
% cd build
-% ~/MITgcm/tools/genmake -rootdir=~/MITgcm -mods=~/MITgcm/verification/exp2/code
+% ~/MITgcm/tools/genmake2 -rootdir=~/MITgcm \
+ -mods=~/MITgcm/verification/exp2/code
% make depend
% make
% cd ../
@@ -481,107 +634,166 @@
-\subsection{\textit{genmake}}
+\subsection{Using \textit{genmake2}}
\label{sect:genmake}
-To compile the code, use the script \textit{genmake} located in the \textit{%
-tools} directory. \textit{genmake} is a script that generates the makefile.
-It has been written so that the code can be compiled on a wide diversity of
-machines and systems. However, if it doesn't work the first time on your
-platform, you might need to edit certain lines of \textit{genmake} in the
-section containing the setups for the different machines. The file is
-structured like this:
-\begin{verbatim}
- .
- .
- .
-general instructions (machine independent)
- .
- .
- .
- - setup machine 1
- - setup machine 2
- - setup machine 3
- - setup machine 4
- etc
- .
- .
- .
-\end{verbatim}
-
-For example, the setup corresponding to a DEC alpha machine is reproduced
-here:
-\begin{verbatim}
- case OSF1+mpi:
- echo "Configuring for DEC Alpha"
- set CPP = ( '/usr/bin/cpp -P' )
- set DEFINES = ( ${DEFINES} '-DTARGET_DEC -DWORDLENGTH=1' )
- set KPP = ( 'kapf' )
- set KPPFILES = ( 'main.F' )
- set KFLAGS1 = ( '-scan=132 -noconc -cmp=' )
- set FC = ( 'f77' )
- set FFLAGS = ( '-convert big_endian -r8 -extend_source -automatic -call_shared -notransform_loops -align dcommons' )
- set FOPTIM = ( '-O5 -fast -tune host -inline all' )
- set NOOPTFLAGS = ( '-O0' )
- set LIBS = ( '-lfmpi -lmpi -lkmp_osfp10 -pthread' )
- set NOOPTFILES = ( 'barrier.F different_multiple.F external_fields_load.F')
- set RMFILES = ( '*.p.out' )
- breaksw
-\end{verbatim}
-
-Typically, these are the lines that you might need to edit to make \textit{%
-genmake} work on your platform if it doesn't work the first time. \textit{%
-genmake} understands several options that are described here:
-
-\begin{itemize}
-\item -rootdir=dir
-
-indicates where the model root directory is relative to the directory where
-you are compiling. This option is not needed if you compile in the \textit{%
-bin} directory (which is the default compilation directory) or within the
-\textit{verification} tree.
-
-\item -mods=dir1,dir2,...
-
-indicates the relative or absolute paths directories where the sources
-should take precedence over the default versions (located in \textit{model},
-\textit{eesupp},...). Typically, this option is used when running the
-examples, see below.
-
-\item -enable=pkg1,pkg2,...
-
-enables packages source code \textit{pkg1}, \textit{pkg2},... when creating
-the makefile.
-
-\item -disable=pkg1,pkg2,...
-
-disables packages source code \textit{pkg1}, \textit{pkg2},... when creating
-the makefile.
-
-\item -platform=machine
-
-specifies the platform for which you want the makefile. In general, you
-won't need this option. \textit{genmake} will select the right machine for
-you (the one you're working on!). However, this option is useful if you have
-a choice of several compilers on one machine and you want to use the one
-that is not the default (ex: \texttt{pgf77} instead of \texttt{f77} under
-Linux).
-
-\item -mpi
-
-this is used when you want to run the model in parallel processing mode
-under mpi (see section on parallel computation for more details).
+To compile the code, first use the program \texttt{genmake2} (located
+in the \textit{tools} directory) to generate a Makefile.
+\texttt{genmake2} is a shell script written to work with all
+``sh''--compatible shells including bash v1, bash v2, and Bourne.
+Internally, \texttt{genmake2} determines the locations of needed
+files, the compiler, compiler options, libraries, and Unix tools. It
+relies upon a number of ``optfiles'' located in the {\em
+ tools/build\_options} directory.
+
+The purpose of the optfiles is to provide all the compilation options
+for particular ``platforms'' (where ``platform'' roughly means the
+combination of the hardware and the compiler) and code configurations.
+Given the combinations of possible compilers and library dependencies
+({\it eg.} MPI and NetCDF) there may be numerous optfiles available
+for a single machine. The naming scheme for the majority of the
+optfiles shipped with the code is
+\begin{center}
+ {\bf OS\_HARDWARE\_COMPILER }
+\end{center}
+where
+\begin{description}
+\item[OS] is the name of the operating system (generally the
+ lower-case output of the {\tt 'uname'} command)
+\item[HARDWARE] is a string that describes the CPU type and
+ corresponds to output from the {\tt 'uname -m'} command:
+ \begin{description}
+ \item[ia32] is for ``x86'' machines such as i386, i486, i586, i686,
+ and athlon
+ \item[ia64] is for Intel IA64 systems (eg. Itanium, Itanium2)
+  \item[amd64] is for AMD x86\_64 systems
+ \item[ppc] is for Mac PowerPC systems
+ \end{description}
+\item[COMPILER] is the compiler name (generally, the name of the
+ FORTRAN executable)
+\end{description}
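+
+Under this scheme, for example, an optfile for a Linux PC (``ia32''
+hardware) using the g77 compiler would be named:
+\begin{verbatim}
+linux_ia32_g77
+\end{verbatim}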
+
+In many cases, the default optfiles are sufficient and will result in
+usable Makefiles. However, for some machines or code configurations,
+new ``optfiles'' must be written. To create a new optfile, it is
+generally best to start with one of the defaults and modify it to suit
+your needs. Like \texttt{genmake2}, the optfiles are all written
+using a simple ``sh''--compatible syntax. While nearly all variables
+used within \texttt{genmake2} may be specified in the optfiles, the
+critical ones that should be defined are:
+
+\begin{description}
+\item[FC] the FORTRAN compiler (executable) to use
+\item[DEFINES] the command-line DEFINE options passed to the compiler
+\item[CPP] the C pre-processor to use
+\item[NOOPTFLAGS] option flags for special files that should not be
+ optimized
+\end{description}
+
+For example, the optfile for a typical Red Hat Linux machine (``ia32''
+architecture) using the GCC (g77) compiler is
+\begin{verbatim}
+FC=g77
+DEFINES='-D_BYTESWAPIO -DWORDLENGTH=4'
+CPP='cpp -traditional -P'
+NOOPTFLAGS='-O0'
+# For IEEE, use the "-ffloat-store" option
+if test "x$IEEE" = x ; then
+ FFLAGS='-Wimplicit -Wunused -Wuninitialized'
+ FOPTIM='-O3 -malign-double -funroll-loops'
+else
+ FFLAGS='-Wimplicit -Wunused -ffloat-store'
+ FOPTIM='-O0 -malign-double'
+fi
+\end{verbatim}
+
+If you write an optfile for an unrepresented machine or compiler, you
+are strongly encouraged to submit the optfile to the MITgcm project
+for inclusion. Please send the file to the
+\begin{rawhtml} \end{rawhtml}
+\begin{center}
+ MITgcm-support@mitgcm.org
+\end{center}
+\begin{rawhtml} \end{rawhtml}
+mailing list.
-\item -jam
+In addition to the optfiles, \texttt{genmake2} supports a number of
+helpful command-line options. A complete list of these options can be
+obtained from:
+\begin{verbatim}
+% genmake2 -h
+\end{verbatim}
+
+The most important command-line options are:
+\begin{description}
+
+\item[\texttt{--optfile=/PATH/FILENAME}] specifies the optfile that
+ should be used for a particular build.
+
+  If no ``optfile'' is specified (either through the command line or
+  the MITGCM\_OPTFILE environment variable), genmake2 will try to
+  make a reasonable guess from the list provided in {\em
+    tools/build\_options}. The method used for making this guess is
+  to first determine the combination of operating system and hardware
+  (eg. ``linux\_ia32'') and then find a working FORTRAN compiler
+  within the user's path. When these three items have been
+  identified, genmake2 will try to find an optfile that has a
+  matching name.
+
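+  The choice can also be made once per shell session through the
+  environment variable mentioned above, e.g. for bash or sh shells
+  (the path shown is only an example):
+\begin{verbatim}
+% export MITGCM_OPTFILE=~/MITgcm/tools/build_options/linux_ia32_g77
+\end{verbatim}
+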
+\item[\texttt{--pdepend=/PATH/FILENAME}] specifies the dependency file
+ used for packages.
+
+  If not specified, the default dependency file {\em pkg/pkg\_depend}
+  is used. The file is parsed on a line-by-line basis where each
+  line contains either a comment (``\#'') or a simple ``PKGNAME1
+  (+|-)PKGNAME2'' pairwise rule, where the ``+'' or ``-'' symbol
+  specifies a ``must be used with'' or a ``must not be used with''
+  relationship, respectively. If no rule is specified, then it is
+  assumed that the two packages are compatible and will function
+  either with or without each other.
+
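+  As an illustration, rules in this file take forms such as the
+  following (the package names here are purely hypothetical):
+\begin{verbatim}
+# pkgA requires pkgB; pkgC must not be used with pkgD
+pkgA  +pkgB
+pkgC  -pkgD
+\end{verbatim}
+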
+\item[\texttt{--pdefault='PKG1 PKG2 PKG3 ...'}] specifies the default
+ set of packages to be used.
+
+ If not set, the default package list will be read from {\em
+    pkg/pkg\_default}.
+
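+  For example, one might restrict a build to a small set of packages
+  with a command like (the package list shown is illustrative only):
+\begin{verbatim}
+% ../../../tools/genmake2 -mods=../code -pdefault='gmredi kpp'
+\end{verbatim}
+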
+\item[\texttt{--adof=/path/to/file}] specifies the "adjoint" or
+ automatic differentiation options file to be used. The file is
+ analogous to the ``optfile'' defined above but it specifies
+ information for the AD build process.
+
+ The default file is located in {\em
+ tools/adjoint\_options/adjoint\_default} and it defines the "TAF"
+ and "TAMC" compilers. An alternate version is also available at
+ {\em tools/adjoint\_options/adjoint\_staf} that selects the newer
+ "STAF" compiler. As with any compilers, it is helpful to have their
+ directories listed in your {\tt \$PATH} environment variable.
+
+\item[\texttt{--mods='DIR1 DIR2 DIR3 ...'}] specifies a list of
+ directories containing ``modifications''. These directories contain
+ files with names that may (or may not) exist in the main MITgcm
+ source tree but will be overridden by any identically-named sources
+ within the ``MODS'' directories.
+
+  The order of precedence for this ``name-hiding'' is as follows:
+ \begin{itemize}
+ \item ``MODS'' directories (in the order given)
+ \item Packages either explicitly specified or provided by default
+ (in the order given)
+ \item Packages included due to package dependencies (in the order
+    that the package dependencies are parsed)
+  \item The ``standard dirs'' (which may have been specified by the
+ ``-standarddirs'' option)
+ \end{itemize}
+
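+  For example, more than one directory of modifications may be
+  supplied (the directory names here are illustrative):
+\begin{verbatim}
+% ../../../tools/genmake2 -mods='../code ../my_patches'
+\end{verbatim}
+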
+\item[\texttt{--make=/path/to/gmake}] Due to the poor handling of
+ soft-links and other bugs common with the \texttt{make} versions
+ provided by commercial Unix vendors, GNU \texttt{make} (sometimes
+ called \texttt{gmake}) should be preferred. This option provides a
+ means for specifying the make executable to be used.
-this is used when you want to run the model in parallel processing mode
-under jam (see section on parallel computation for more details).
-\end{itemize}
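+
+  For example (the path to GNU make will vary between systems):
+\begin{verbatim}
+% ../../../tools/genmake2 -mods=../code --make=/usr/local/bin/gmake
+\end{verbatim}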
+\end{description}
-For some of the examples, there is a file called \textit{.genmakerc} in the
-\textit{input} directory that has the relevant \textit{genmake} options for
-that particular example. In this way you don't need to type the options when
-invoking \textit{genmake}.
\section{Running the model}
@@ -607,7 +819,7 @@
% ./mitgcmuv > output.txt
\end{verbatim}
-For the example experiments in {\em vericication}, an example of the
+For the example experiments in {\em verification}, an example of the
output is kept in {\em results/output.txt} for comparison. You can compare
your {\em output.txt} with this one to check that the set-up works.
@@ -696,380 +908,419 @@
\section{Doing it yourself: customizing the code}
When you are ready to run the model in the configuration you want, the
-easiest thing is to use and adapt the setup of the case studies experiment
-(described previously) that is the closest to your configuration. Then, the
-amount of setup will be minimized. In this section, we focus on the setup
-relative to the ''numerical model'' part of the code (the setup relative to
-the ''execution environment'' part is covered in the parallel implementation
-section) and on the variables and parameters that you are likely to change.
+easiest thing is to use and adapt the setup of the case study
+experiment (described previously) that is closest to your
+configuration. Then, the amount of setup will be minimized. In this
+section, we focus on the setup relative to the ``numerical model''
+part of the code (the setup relative to the ``execution environment''
+part is covered in the parallel implementation section) and on the
+variables and parameters that you are likely to change.
\subsection{Configuration and setup}
-The CPP keys relative to the ''numerical model'' part of the code are all
-defined and set in the file \textit{CPP\_OPTIONS.h }in the directory \textit{%
-model/inc }or in one of the \textit{code }directories of the case study
-experiments under \textit{verification.} The model parameters are defined
-and declared in the file \textit{model/inc/PARAMS.h }and their default
-values are set in the routine \textit{model/src/set\_defaults.F. }The
-default values can be modified in the namelist file \textit{data }which
-needs to be located in the directory where you will run the model. The
-parameters are initialized in the routine \textit{model/src/ini\_parms.F}.
-Look at this routine to see in what part of the namelist the parameters are
-located.
-
-In what follows the parameters are grouped into categories related to the
-computational domain, the equations solved in the model, and the simulation
-controls.
+The CPP keys relative to the ``numerical model'' part of the code are
+all defined and set in the file \textit{CPP\_OPTIONS.h} in the
+directory \textit{model/inc} or in one of the \textit{code}
+directories of the case study experiments under
+\textit{verification}. The model parameters are defined and declared
+in the file \textit{model/inc/PARAMS.h} and their default values are
+set in the routine \textit{model/src/set\_defaults.F}. The default
+values can be modified in the namelist file \textit{data} which needs
+to be located in the directory where you will run the model. The
+parameters are initialized in the routine
+\textit{model/src/ini\_parms.F}. Look at this routine to see in what
+part of the namelist the parameters are located.
+
+In what follows the parameters are grouped into categories related to
+the computational domain, the equations solved in the model, and the
+simulation controls.
\subsection{Computational domain, geometry and time-discretization}
-\begin{itemize}
-\item dimensions
-\end{itemize}
-
-The number of points in the x, y,\textit{\ }and r\textit{\ }directions are
-represented by the variables \textbf{sNx}\textit{, }\textbf{sNy}\textit{, }%
-and \textbf{Nr}\textit{\ }respectively which are declared and set in the
-file \textit{model/inc/SIZE.h. }(Again, this assumes a mono-processor
-calculation. For multiprocessor calculations see section on parallel
-implementation.)
-
-\begin{itemize}
-\item grid
-\end{itemize}
-
-Three different grids are available: cartesian, spherical polar, and
-curvilinear (including the cubed sphere). The grid is set through the
-logical variables \textbf{usingCartesianGrid}\textit{, }\textbf{%
-usingSphericalPolarGrid}\textit{, }and \textit{\ }\textbf{%
-usingCurvilinearGrid}\textit{. }In the case of spherical and curvilinear
-grids, the southern boundary is defined through the variable \textbf{phiMin}%
-\textit{\ }which corresponds to the latitude of the southern most cell face
-(in degrees). The resolution along the x and y directions is controlled by
-the 1D arrays \textbf{delx}\textit{\ }and \textbf{dely}\textit{\ }(in meters
-in the case of a cartesian grid, in degrees otherwise). The vertical grid
-spacing is set through the 1D array \textbf{delz }for the ocean (in meters)
-or \textbf{delp}\textit{\ }for the atmosphere (in Pa). The variable \textbf{%
-Ro\_SeaLevel} represents the standard position of Sea-Level in ''R''
-coordinate. This is typically set to 0m for the ocean (default value) and 10$%
-^{5}$Pa for the atmosphere. For the atmosphere, also set the logical
-variable \textbf{groundAtK1} to '.\texttt{TRUE}.'. which put the first level
-(k=1) at the lower boundary (ground).
-
-For the cartesian grid case, the Coriolis parameter $f$ is set through the
-variables \textbf{f0}\textit{\ }and \textbf{beta}\textit{\ }which correspond
-to the reference Coriolis parameter (in s$^{-1}$) and $\frac{\partial f}{%
-\partial y}$(in m$^{-1}$s$^{-1}$) respectively. If \textbf{beta }\textit{\ }%
-is set to a nonzero value, \textbf{f0}\textit{\ }is the value of $f$ at the
-southern edge of the domain.
-
-\begin{itemize}
-\item topography - full and partial cells
-\end{itemize}
-
-The domain bathymetry is read from a file that contains a 2D (x,y) map of
-depths (in m) for the ocean or pressures (in Pa) for the atmosphere. The
-file name is represented by the variable \textbf{bathyFile}\textit{. }The
-file is assumed to contain binary numbers giving the depth (pressure) of the
-model at each grid cell, ordered with the x coordinate varying fastest. The
-points are ordered from low coordinate to high coordinate for both axes. The
-model code applies without modification to enclosed, periodic, and double
-periodic domains. Periodicity is assumed by default and is suppressed by
-setting the depths to 0m for the cells at the limits of the computational
-domain (note: not sure this is the case for the atmosphere). The precision
-with which to read the binary data is controlled by the integer variable
-\textbf{readBinaryPrec }which can take the value \texttt{32} (single
-precision) or \texttt{64} (double precision). See the matlab program \textit{%
-gendata.m }in the \textit{input }directories under \textit{verification }to
-see how the bathymetry files are generated for the case study experiments.
-
-To use the partial cell capability, the variable \textbf{hFacMin}\textit{\ }%
-needs to be set to a value between 0 and 1 (it is set to 1 by default)
-corresponding to the minimum fractional size of the cell. For example if the
-bottom cell is 500m thick and \textbf{hFacMin}\textit{\ }is set to 0.1, the
-actual thickness of the cell (i.e. used in the code) can cover a range of
-discrete values 50m apart from 50m to 500m depending on the value of the
-bottom depth (in \textbf{bathyFile}) at this point.
-
-Note that the bottom depths (or pressures) need not coincide with the models
-levels as deduced from \textbf{delz}\textit{\ }or\textit{\ }\textbf{delp}%
-\textit{. }The model will interpolate the numbers in \textbf{bathyFile}%
-\textit{\ }so that they match the levels obtained from \textbf{delz}\textit{%
-\ }or\textit{\ }\textbf{delp}\textit{\ }and \textbf{hFacMin}\textit{. }
-
-(Note: the atmospheric case is a bit more complicated than what is written
-here I think. To come soon...)
+\begin{description}
+\item[dimensions] \
+
+  The numbers of points in the x, y, and r directions are represented
+  by the variables \textbf{sNx}, \textbf{sNy}, and \textbf{Nr}
+  respectively, which are declared and set in the file
+ \textit{model/inc/SIZE.h}. (Again, this assumes a mono-processor
+ calculation. For multiprocessor calculations see the section on
+ parallel implementation.)
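+
+  For example, the relevant lines of \textit{SIZE.h} for a
+  single-process run on a 90x40 horizontal grid with 15 vertical
+  levels might read (a sketch; the actual file declares several
+  further variables):
+\begin{verbatim}
+      PARAMETER (
+     &           sNx =  90,
+     &           sNy =  40,
+     &           Nr  =  15 )
+\end{verbatim}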
+
+\item[grid] \
+
+ Three different grids are available: cartesian, spherical polar, and
+ curvilinear (which includes the cubed sphere). The grid is set
+ through the logical variables \textbf{usingCartesianGrid},
+ \textbf{usingSphericalPolarGrid}, and \textbf{usingCurvilinearGrid}.
+ In the case of spherical and curvilinear grids, the southern
+ boundary is defined through the variable \textbf{phiMin} which
+  corresponds to the latitude of the southernmost cell face (in
+ degrees). The resolution along the x and y directions is controlled
+ by the 1D arrays \textbf{delx} and \textbf{dely} (in meters in the
+ case of a cartesian grid, in degrees otherwise). The vertical grid
+ spacing is set through the 1D array \textbf{delz} for the ocean (in
+ meters) or \textbf{delp} for the atmosphere (in Pa). The variable
+  \textbf{Ro\_SeaLevel} represents the standard position of sea level
+  in the ``R'' coordinate. This is typically set to 0m for the ocean
+ (default value) and 10$^{5}$Pa for the atmosphere. For the
+ atmosphere, also set the logical variable \textbf{groundAtK1} to
+ \texttt{'.TRUE.'} which puts the first level (k=1) at the lower
+ boundary (ground).
+
+ For the cartesian grid case, the Coriolis parameter $f$ is set
+ through the variables \textbf{f0} and \textbf{beta} which correspond
+ to the reference Coriolis parameter (in s$^{-1}$) and
+  $\frac{\partial f}{\partial y}$ (in m$^{-1}$s$^{-1}$) respectively.
+  If \textbf{beta} is set to a nonzero value, \textbf{f0} is the
+ value of $f$ at the southern edge of the domain.
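+
+  For example, a beta-plane could be configured with \textit{data}
+  namelist entries such as (an excerpt; the values are illustrative):
+\begin{verbatim}
+ &PARM01
+ f0=1.E-4,
+ beta=1.E-11,
+ &
+\end{verbatim}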
+
+\item[topography - full and partial cells] \
+
+ The domain bathymetry is read from a file that contains a 2D (x,y)
+ map of depths (in m) for the ocean or pressures (in Pa) for the
+ atmosphere. The file name is represented by the variable
+ \textbf{bathyFile}. The file is assumed to contain binary numbers
+ giving the depth (pressure) of the model at each grid cell, ordered
+ with the x coordinate varying fastest. The points are ordered from
+ low coordinate to high coordinate for both axes. The model code
+ applies without modification to enclosed, periodic, and double
+ periodic domains. Periodicity is assumed by default and is
+ suppressed by setting the depths to 0m for the cells at the limits
+ of the computational domain (note: not sure this is the case for the
+ atmosphere). The precision with which to read the binary data is
+ controlled by the integer variable \textbf{readBinaryPrec} which can
+ take the value \texttt{32} (single precision) or \texttt{64} (double
+ precision). See the matlab program \textit{gendata.m} in the
+ \textit{input} directories under \textit{verification} to see how
+ the bathymetry files are generated for the case study experiments.
+
+ To use the partial cell capability, the variable \textbf{hFacMin}
+ needs to be set to a value between 0 and 1 (it is set to 1 by
+ default) corresponding to the minimum fractional size of the cell.
+  For example, if the bottom cell is 500m thick and \textbf{hFacMin} is
+ set to 0.1, the actual thickness of the cell (i.e. used in the code)
+ can cover a range of discrete values 50m apart from 50m to 500m
+ depending on the value of the bottom depth (in \textbf{bathyFile})
+ at this point.
+
+ Note that the bottom depths (or pressures) need not coincide with
+  the model's levels as deduced from \textbf{delz} or \textbf{delp}.
+ The model will interpolate the numbers in \textbf{bathyFile} so that
+ they match the levels obtained from \textbf{delz} or \textbf{delp}
+ and \textbf{hFacMin}.
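+
+  For example, the settings above might appear in the \textit{data}
+  file as (an excerpt; the file name is illustrative):
+\begin{verbatim}
+ &PARM01
+ hFacMin=0.1,
+ readBinaryPrec=64,
+ &
+ &PARM05
+ bathyFile='topog.bin',
+ &
+\end{verbatim}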
+
+ (Note: the atmospheric case is a bit more complicated than what is
+ written here I think. To come soon...)
+
+\item[time-discretization] \
+
+ The time steps are set through the real variables \textbf{deltaTMom}
+ and \textbf{deltaTtracer} (in s) which represent the time step for
+ the momentum and tracer equations, respectively. For synchronous
+ integrations, simply set the two variables to the same value (or you
+ can prescribe one time step only through the variable
+ \textbf{deltaT}). The Adams-Bashforth stabilizing parameter is set
+  through the variable \textbf{abEps} (dimensionless). The staggered
+ baroclinic time stepping can be activated by setting the logical
+ variable \textbf{staggerTimeStep} to \texttt{'.TRUE.'}.
-\begin{itemize}
-\item time-discretization
-\end{itemize}
+\end{description}
-The time steps are set through the real variables \textbf{deltaTMom }and
-\textbf{deltaTtracer }(in s) which represent the time step for the momentum
-and tracer equations, respectively. For synchronous integrations, simply set
-the two variables to the same value (or you can prescribe one time step only
-through the variable \textbf{deltaT}). The Adams-Bashforth stabilizing
-parameter is set through the variable \textbf{abEps }(dimensionless). The
-stagger baroclinic time stepping can be activated by setting the logical
-variable \textbf{staggerTimeStep }to '.\texttt{TRUE}.'.
\subsection{Equation of state}
-First, because the model equations are written in terms of perturbations, a
-reference thermodynamic state needs to be specified. This is done through
-the 1D arrays \textbf{tRef}\textit{\ }and \textbf{sRef}. \textbf{tRef }%
-specifies the reference potential temperature profile (in $^{o}$C for
-the ocean and $^{o}$K for the atmosphere) starting from the level
-k=1. Similarly, \textbf{sRef}\textit{\ }specifies the reference salinity
-profile (in ppt) for the ocean or the reference specific humidity profile
-(in g/kg) for the atmosphere.
-
-The form of the equation of state is controlled by the character variables
-\textbf{buoyancyRelation}\textit{\ }and \textbf{eosType}\textit{. }\textbf{%
-buoyancyRelation}\textit{\ }is set to '\texttt{OCEANIC}' by default and
-needs to be set to '\texttt{ATMOSPHERIC}' for atmosphere simulations. In
-this case, \textbf{eosType}\textit{\ }must be set to '\texttt{IDEALGAS}'.
-For the ocean, two forms of the equation of state are available: linear (set
-\textbf{eosType}\textit{\ }to '\texttt{LINEAR}') and a polynomial
-approximation to the full nonlinear equation ( set \textbf{eosType}\textit{\
-}to '\texttt{POLYNOMIAL}'). In the linear case, you need to specify the
-thermal and haline expansion coefficients represented by the variables
-\textbf{tAlpha}\textit{\ }(in K$^{-1}$) and \textbf{sBeta}\textit{\ }(in ppt$%
-^{-1}$). For the nonlinear case, you need to generate a file of polynomial
-coefficients called \textit{POLY3.COEFFS. }To do this, use the program
-\textit{utils/knudsen2/knudsen2.f }under the model tree (a Makefile is
-available in the same directory and you will need to edit the number and the
-values of the vertical levels in \textit{knudsen2.f }so that they match
-those of your configuration). \textit{\ }
+First, because the model equations are written in terms of
+perturbations, a reference thermodynamic state needs to be specified.
+This is done through the 1D arrays \textbf{tRef} and \textbf{sRef}.
+\textbf{tRef} specifies the reference potential temperature profile
+(in $^{o}$C for the ocean and K for the atmosphere) starting
+from the level k=1. Similarly, \textbf{sRef} specifies the reference
+salinity profile (in ppt) for the ocean or the reference specific
+humidity profile (in g/kg) for the atmosphere.
+
+The form of the equation of state is controlled by the character
+variables \textbf{buoyancyRelation} and \textbf{eosType}.
+\textbf{buoyancyRelation} is set to \texttt{'OCEANIC'} by default and
+needs to be set to \texttt{'ATMOSPHERIC'} for atmosphere simulations.
+In this case, \textbf{eosType} must be set to \texttt{'IDEALGAS'}.
+For the ocean, two forms of the equation of state are available:
+linear (set \textbf{eosType} to \texttt{'LINEAR'}) and a polynomial
+approximation to the full nonlinear equation (set \textbf{eosType} to
+\texttt{'POLYNOMIAL'}). In the linear case, you need to specify the
+thermal and haline expansion coefficients represented by the variables
+\textbf{tAlpha} (in K$^{-1}$) and \textbf{sBeta} (in ppt$^{-1}$). For
+the nonlinear case, you need to generate a file of polynomial
+coefficients called \textit{POLY3.COEFFS}. To do this, use the program
+\textit{utils/knudsen2/knudsen2.f} under the model tree (a Makefile is
+available in the same directory and you will need to edit the number
+and the values of the vertical levels in \textit{knudsen2.f} so that
+they match those of your configuration).
+
+There are also more accurate formulations of the equation of state:
+\begin{description}
+\item[\texttt{'UNESCO'}:] The UNESCO equation of state formula of
+ Fofonoff and Millard \cite{fofonoff83}. This equation of state
+ assumes in-situ temperature, which is not a model variable; {\em its
+ use is therefore discouraged, and it is only listed for
+ completeness}.
+\item[\texttt{'JMD95Z'}:] A modified UNESCO formula by Jackett and
+ McDougall \cite{jackett95}, which uses the model variable potential
+ temperature as input. The \texttt{'Z'} indicates that this equation
+ of state uses a horizontally and temporally constant pressure
+ $p_{0}=-g\rho_{0}z$.
+\item[\texttt{'JMD95P'}:] A modified UNESCO formula by Jackett and
+ McDougall \cite{jackett95}, which uses the model variable potential
+ temperature as input. The \texttt{'P'} indicates that this equation
+ of state uses the actual hydrostatic pressure of the last time
+ step. Lagging the pressure in this way requires an additional pickup
+ file for restarts.
+\item[\texttt{'MDJWF'}:] The new, more accurate and less expensive
+ equation of state by McDougall et~al. \cite{mcdougall03}. It also
+ requires lagging the pressure and therefore an additional pickup
+ file for restarts.
+\end{description}
+None of these options requires a reference profile of temperature or
+salinity.
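+
+As an illustration, a linear equation of state for a four-level ocean
+configuration might be selected in the \texttt{PARM01} namelist of
+the \textit{data} file as follows (the reference profiles and
+expansion coefficients are example values only, not recommendations):
+\begin{verbatim}
+# equation-of-state sketch: example values only
+ &PARM01
+ tRef=20.,15.,10.,5.,
+ sRef=35.,35.,35.,35.,
+ eosType='LINEAR',
+ tAlpha=2.E-4,
+ sBeta=7.4E-4,
+ &
+\end{verbatim}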
\subsection{Momentum equations}
-In this section, we only focus for now on the parameters that you are likely
-to change, i.e. the ones relative to forcing and dissipation for example.
-The details relevant to the vector-invariant form of the equations and the
-various advection schemes are not covered for the moment. We assume that you
-use the standard form of the momentum equations (i.e. the flux-form) with
-the default advection scheme. Also, there are a few logical variables that
-allow you to turn on/off various terms in the momentum equation. These
-variables are called \textbf{momViscosity, momAdvection, momForcing,
-useCoriolis, momPressureForcing, momStepping}\textit{, }and \textit{\ }%
-\textbf{metricTerms }and are assumed to be set to '.\texttt{TRUE}.' here.
-Look at the file \textit{model/inc/PARAMS.h }for a precise definition of
-these variables.
+In this section, we focus for now only on the parameters that you are
+likely to change, i.e. the ones related to forcing and dissipation.
+The details relevant to the vector-invariant form of the equations and
+the various advection schemes are not covered for the moment. We
+assume that you use the standard form of the momentum equations
+(i.e. the flux-form) with the default advection scheme. Also, there
+are a few logical variables that allow you to turn on/off various
+terms in the momentum equation. These variables are called
+\textbf{momViscosity, momAdvection, momForcing, useCoriolis,
+  momPressureForcing, momStepping} and \textbf{metricTerms} and are
+assumed to be set to \texttt{'.TRUE.'} here. Look at the file
+\textit{model/inc/PARAMS.h} for a precise definition of these
+variables. A sample namelist excerpt is given after the list below.
+
+\begin{description}
+\item[initialization] \
+
+ The velocity components are initialized to 0 unless the simulation
+ is starting from a pickup file (see section on simulation control
+ parameters).
+
+\item[forcing] \
+
+ This section only applies to the ocean. You need to generate
+ wind-stress data into two files \textbf{zonalWindFile} and
+ \textbf{meridWindFile} corresponding to the zonal and meridional
+ components of the wind stress, respectively (if you want the stress
+ to be along the direction of only one of the model horizontal axes,
+ you only need to generate one file). The format of the files is
+ similar to the bathymetry file. The zonal (meridional) stress data
+ are assumed to be in Pa and located at U-points (V-points). As for
+ the bathymetry, the precision with which to read the binary data is
+ controlled by the variable \textbf{readBinaryPrec}. See the matlab
+ program \textit{gendata.m} in the \textit{input} directories under
+ \textit{verification} to see how simple analytical wind forcing data
+ are generated for the case study experiments.
+
+  There is also the possibility of prescribing time-dependent periodic
+  forcing. To do this, concatenate the successive time records into a
+  single file (for each stress component) ordered in a (x,y,t) fashion
+  and set the following variables: \textbf{periodicExternalForcing} to
+  \texttt{'.TRUE.'}, \textbf{externForcingPeriod} to the period (in s)
+  with which the forcing varies (typically 1 month), and
+  \textbf{externForcingCycle} to the repeat time (in s) of the forcing
+  (typically 1 year -- note: \textbf{externForcingCycle} must be a
+  multiple of \textbf{externForcingPeriod}). With these variables set
+  up, the model will interpolate the forcing linearly at each
+  iteration.
+
+\item[dissipation] \
+
+  The lateral eddy viscosity coefficient is specified through the
+  variable \textbf{viscAh} (in m$^{2}$s$^{-1}$). The vertical eddy
+  viscosity coefficient is specified through the variable
+  \textbf{viscAz} (in m$^{2}$s$^{-1}$) for the ocean and
+  \textbf{viscAp} (in Pa$^{2}$s$^{-1}$) for the atmosphere. The
+  vertical viscous fluxes can be computed implicitly by setting the
+  logical variable \textbf{implicitViscosity} to \texttt{'.TRUE.'}.
+  In addition, biharmonic mixing can be added as well through the
+  variable \textbf{viscA4} (in m$^{4}$s$^{-1}$). On a spherical polar
+  grid, you might also need to set the variable \textbf{cosPower},
+  which is set to 0 by default and which represents the power of the
+  cosine of latitude by which the viscosity is multiplied. Slip or
+  no-slip conditions at lateral and bottom boundaries are specified
+  through the logical variables \textbf{no\_slip\_sides} and
+  \textbf{no\_slip\_bottom}. If set to \texttt{'.FALSE.'}, free-slip
+  boundary conditions are applied. If no-slip boundary conditions are
+  applied at the bottom, a bottom drag can be applied as well. Two
+  forms are available: linear (set the variable
+  \textbf{bottomDragLinear} in s$^{-1}$) and quadratic (set the
+  variable \textbf{bottomDragQuadratic} in m$^{-1}$).
+
+ The Fourier and Shapiro filters are described elsewhere.
+
+\item[C-D scheme] \
+
+  If you run at a sufficiently coarse resolution, you will need the
+  C-D scheme for the computation of the Coriolis terms. The variable
+  \textbf{tauCD}, which represents the C-D scheme coupling timescale
+  (in s), needs to be set.
+
+\item[calculation of pressure/geopotential] \
+
+  First, to run a non-hydrostatic ocean simulation, set the logical
+  variable \textbf{nonHydrostatic} to \texttt{'.TRUE.'}. The pressure
+  field is then inverted through a 3D elliptic equation. (Note: this
+  capability is not available for the atmosphere yet.) By default, a
+  hydrostatic simulation is assumed and a 2D elliptic equation is used
+  to invert the pressure field. The parameters controlling the
+  behaviour of the elliptic solvers are the variables
+  \textbf{cg2dMaxIters} and \textbf{cg2dTargetResidual} for the 2D
+  case and \textbf{cg3dMaxIters} and \textbf{cg3dTargetResidual} for
+  the 3D case. You probably won't need to alter the default values.
+
+ For the calculation of the surface pressure (for the ocean) or
+ surface geopotential (for the atmosphere) you need to set the
+ logical variables \textbf{rigidLid} and \textbf{implicitFreeSurface}
+ (set one to \texttt{'.TRUE.'} and the other to \texttt{'.FALSE.'}
+ depending on how you want to deal with the ocean upper or atmosphere
+ lower boundary).
-\begin{itemize}
-\item initialization
-\end{itemize}
-
-The velocity components are initialized to 0 unless the simulation is
-starting from a pickup file (see section on simulation control parameters).
-
-\begin{itemize}
-\item forcing
-\end{itemize}
-
-This section only applies to the ocean. You need to generate wind-stress
-data into two files \textbf{zonalWindFile}\textit{\ }and \textbf{%
-meridWindFile }corresponding to the zonal and meridional components of the
-wind stress, respectively (if you want the stress to be along the direction
-of only one of the model horizontal axes, you only need to generate one
-file). The format of the files is similar to the bathymetry file. The zonal
-(meridional) stress data are assumed to be in Pa and located at U-points
-(V-points). As for the bathymetry, the precision with which to read the
-binary data is controlled by the variable \textbf{readBinaryPrec}.\textbf{\ }
-See the matlab program \textit{gendata.m }in the \textit{input }directories
-under \textit{verification }to see how simple analytical wind forcing data
-are generated for the case study experiments.
-
-There is also the possibility of prescribing time-dependent periodic
-forcing. To do this, concatenate the successive time records into a single
-file (for each stress component) ordered in a (x, y, t) fashion and set the
-following variables: \textbf{periodicExternalForcing }to '.\texttt{TRUE}.',
-\textbf{externForcingPeriod }to the period (in s) of which the forcing
-varies (typically 1 month), and \textbf{externForcingCycle }to the repeat
-time (in s) of the forcing (typically 1 year -- note: \textbf{%
-externForcingCycle }must be a multiple of \textbf{externForcingPeriod}).
-With these variables set up, the model will interpolate the forcing linearly
-at each iteration.
-
-\begin{itemize}
-\item dissipation
-\end{itemize}
-
-The lateral eddy viscosity coefficient is specified through the variable
-\textbf{viscAh}\textit{\ }(in m$^{2}$s$^{-1}$). The vertical eddy viscosity
-coefficient is specified through the variable \textbf{viscAz }(in m$^{2}$s$%
-^{-1}$) for the ocean and \textbf{viscAp}\textit{\ }(in Pa$^{2}$s$^{-1}$)
-for the atmosphere. The vertical diffusive fluxes can be computed implicitly
-by setting the logical variable \textbf{implicitViscosity }to '.\texttt{TRUE}%
-.'. In addition, biharmonic mixing can be added as well through the variable
-\textbf{viscA4}\textit{\ }(in m$^{4}$s$^{-1}$). On a spherical polar grid,
-you might also need to set the variable \textbf{cosPower} which is set to 0
-by default and which represents the power of cosine of latitude to multiply
-viscosity. Slip or no-slip conditions at lateral and bottom boundaries are
-specified through the logical variables \textbf{no\_slip\_sides}\textit{\ }%
-and \textbf{no\_slip\_bottom}. If set to '\texttt{.FALSE.}', free-slip
-boundary conditions are applied. If no-slip boundary conditions are applied
-at the bottom, a bottom drag can be applied as well. Two forms are
-available: linear (set the variable \textbf{bottomDragLinear}\textit{\ }in s$%
-^{-1}$) and quadratic (set the variable \textbf{bottomDragQuadratic}\textit{%
-\ }in m$^{-1}$).
-
-The Fourier and Shapiro filters are described elsewhere.
-
-\begin{itemize}
-\item C-D scheme
-\end{itemize}
-
-If you run at a sufficiently coarse resolution, you will need the C-D scheme
-for the computation of the Coriolis terms. The variable\textbf{\ tauCD},
-which represents the C-D scheme coupling timescale (in s) needs to be set.
-
-\begin{itemize}
-\item calculation of pressure/geopotential
-\end{itemize}
-
-First, to run a non-hydrostatic ocean simulation, set the logical variable
-\textbf{nonHydrostatic} to '.\texttt{TRUE}.'. The pressure field is then
-inverted through a 3D elliptic equation. (Note: this capability is not
-available for the atmosphere yet.) By default, a hydrostatic simulation is
-assumed and a 2D elliptic equation is used to invert the pressure field. The
-parameters controlling the behaviour of the elliptic solvers are the
-variables \textbf{cg2dMaxIters}\textit{\ }and \textbf{cg2dTargetResidual }%
-for the 2D case and \textbf{cg3dMaxIters}\textit{\ }and \textbf{%
-cg3dTargetResidual }for the 3D case. You probably won't need to alter the
-default values (are we sure of this?).
-
-For the calculation of the surface pressure (for the ocean) or surface
-geopotential (for the atmosphere) you need to set the logical variables
-\textbf{rigidLid} and \textbf{implicitFreeSurface}\textit{\ }(set one to '.%
-\texttt{TRUE}.' and the other to '.\texttt{FALSE}.' depending on how you
-want to deal with the ocean upper or atmosphere lower boundary).
+\end{description}
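+
+As a minimal sketch of the settings discussed in this list (all
+values and file names are illustrative assumptions, and the grouping
+of parameters into namelist blocks may differ slightly between
+releases), the \textit{data} file could contain:
+\begin{verbatim}
+# momentum sketch: illustrative values and file names only
+ &PARM01
+ viscAh=4.E2,
+ viscAz=1.E-3,
+ no_slip_sides=.TRUE.,
+ no_slip_bottom=.TRUE.,
+ rigidLid=.FALSE.,
+ implicitFreeSurface=.TRUE.,
+ &
+ &PARM05
+ zonalWindFile='windx.bin',
+ meridWindFile='windy.bin',
+ &
+\end{verbatim}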
\subsection{Tracer equations}
-This section covers the tracer equations i.e. the potential temperature
-equation and the salinity (for the ocean) or specific humidity (for the
-atmosphere) equation. As for the momentum equations, we only describe for
-now the parameters that you are likely to change. The logical variables
-\textbf{tempDiffusion}\textit{, }\textbf{tempAdvection}\textit{, }\textbf{%
-tempForcing}\textit{,} and \textbf{tempStepping} allow you to turn on/off
-terms in the temperature equation (same thing for salinity or specific
-humidity with variables \textbf{saltDiffusion}\textit{, }\textbf{%
-saltAdvection}\textit{\ }etc). These variables are all assumed here to be
-set to '.\texttt{TRUE}.'. Look at file \textit{model/inc/PARAMS.h }for a
-precise definition.
+This section covers the tracer equations, i.e. the potential
+temperature equation and the salinity (for the ocean) or specific
+humidity (for the atmosphere) equation. As for the momentum equations,
+we only describe for now the parameters that you are likely to change.
+The logical variables \textbf{tempDiffusion}, \textbf{tempAdvection},
+\textbf{tempForcing}, and \textbf{tempStepping} allow you to turn
+on/off terms in the temperature equation (same thing for salinity or
+specific humidity with variables \textbf{saltDiffusion},
+\textbf{saltAdvection} etc.). These variables are all assumed here to
+be set to \texttt{'.TRUE.'}. Look at file \textit{model/inc/PARAMS.h}
+for a precise definition. A sample namelist excerpt is given after
+the list below.
+
+\begin{description}
+\item[initialization] \
+
+ The initial tracer data can be contained in the binary files
+ \textbf{hydrogThetaFile} and \textbf{hydrogSaltFile}. These files
+ should contain 3D data ordered in an (x,y,r) fashion with k=1 as the
+ first vertical level. If no file names are provided, the tracers
+ are then initialized with the values of \textbf{tRef} and
+ \textbf{sRef} mentioned above (in the equation of state section). In
+ this case, the initial tracer data are uniform in x and y for each
+ depth level.
+
+\item[forcing] \
+
+  This part is more relevant for the ocean; the procedure for the
+  atmosphere is not completely stabilized at the moment.
+
+  A combination of flux data and relaxation terms can be used for
+  driving the tracer equations. For potential temperature, heat flux
+  data (in W/m$^{2}$) can be stored in the 2D binary file
+  \textbf{surfQfile}. Alternatively or in addition, the forcing can
+  be specified through a relaxation term. The SST data to which the
+  model surface temperatures are restored are assumed to be stored
+  in the 2D binary file \textbf{thetaClimFile}. The corresponding
+  relaxation time scale coefficient is set through the variable
+  \textbf{tauThetaClimRelax} (in s). The same procedure applies for
+  salinity with the variable names \textbf{EmPmRfile},
+  \textbf{saltClimFile}, and \textbf{tauSaltClimRelax} for freshwater
+  flux (in m/s) and surface salinity (in ppt) data files and
+  relaxation time scale coefficient (in s), respectively. Also for
+  salinity, if the CPP key \textbf{USE\_NATURAL\_BCS} is turned on,
+  natural boundary conditions are applied, i.e. when computing the
+  surface salinity tendency, the freshwater flux is multiplied by the
+  model surface salinity instead of a constant salinity value.
+
+ As for the other input files, the precision with which to read the
+ data is controlled by the variable \textbf{readBinaryPrec}.
+ Time-dependent, periodic forcing can be applied as well following
+ the same procedure used for the wind forcing data (see above).
+
+\item[dissipation] \
+
+  Lateral eddy diffusivities for temperature and salinity/specific
+  humidity are specified through the variables \textbf{diffKhT} and
+  \textbf{diffKhS} (in m$^{2}$/s). Vertical eddy diffusivities are
+  specified through the variables \textbf{diffKzT} and
+  \textbf{diffKzS} (in m$^{2}$/s) for the ocean and \textbf{diffKpT}
+  and \textbf{diffKpS} (in Pa$^{2}$/s) for the atmosphere. The
+  vertical diffusive fluxes can be computed implicitly by setting the
+  logical variable \textbf{implicitDiffusion} to \texttt{'.TRUE.'}.
+  In addition, biharmonic diffusivities can be specified as well
+  through the coefficients \textbf{diffK4T} and \textbf{diffK4S} (in
+  m$^{4}$/s). Note that the cosine power scaling (specified through
+  \textbf{cosPower}; see the momentum equations section) is applied to
+  the tracer diffusivities (Laplacian and biharmonic) as well. The
+  Gent and McWilliams parameterization for oceanic tracers is
+  described in the package section. Finally, note that tracers can
+  also be subject to Fourier and Shapiro filtering (see the
+  corresponding section on these filters).
+
+\item[ocean convection] \
+
+  Two options are available to parameterize ocean convection. One is
+  to use the convective adjustment scheme: in this case, you need to
+  set the variable \textbf{cadjFreq}, which represents the frequency
+  (in s) with which the adjustment algorithm is called, to a non-zero
+  value (if set to a negative value by the user, the model will set it
+  to the tracer time step). The other option is to parameterize
+  convection with implicit vertical diffusion. To do this, set the
+  logical variable \textbf{implicitDiffusion} to \texttt{'.TRUE.'}
+  and the real variable \textbf{ivdc\_kappa} to the value (in
+  m$^{2}$/s) you wish the tracer vertical diffusivities to have when
+  mixing tracers vertically due to static instabilities. Note that
+  \textbf{cadjFreq} and \textbf{ivdc\_kappa} cannot both have non-zero
+  values.
-\begin{itemize}
-\item initialization
-\end{itemize}
-
-The initial tracer data can be contained in the binary files \textbf{%
-hydrogThetaFile }and \textbf{hydrogSaltFile}. These files should contain 3D
-data ordered in an (x, y, r) fashion with k=1 as the first vertical level.
-If no file names are provided, the tracers are then initialized with the
-values of \textbf{tRef }and \textbf{sRef }mentioned above (in the equation
-of state section). In this case, the initial tracer data are uniform in x
-and y for each depth level.
-
-\begin{itemize}
-\item forcing
-\end{itemize}
-
-This part is more relevant for the ocean, the procedure for the atmosphere
-not being completely stabilized at the moment.
-
-A combination of fluxes data and relaxation terms can be used for driving
-the tracer equations. \ For potential temperature, heat flux data (in W/m$%
-^{2}$) can be stored in the 2D binary file \textbf{surfQfile}\textit{. }%
-Alternatively or in addition, the forcing can be specified through a
-relaxation term. The SST data to which the model surface temperatures are
-restored to are supposed to be stored in the 2D binary file \textbf{%
-thetaClimFile}\textit{. }The corresponding relaxation time scale coefficient
-is set through the variable \textbf{tauThetaClimRelax}\textit{\ }(in s). The
-same procedure applies for salinity with the variable names \textbf{EmPmRfile%
-}\textit{, }\textbf{saltClimFile}\textit{, }and \textbf{tauSaltClimRelax}%
-\textit{\ }for freshwater flux (in m/s) and surface salinity (in ppt) data
-files and relaxation time scale coefficient (in s), respectively. Also for
-salinity, if the CPP key \textbf{USE\_NATURAL\_BCS} is turned on, natural
-boundary conditions are applied i.e. when computing the surface salinity
-tendency, the freshwater flux is multiplied by the model surface salinity
-instead of a constant salinity value.
-
-As for the other input files, the precision with which to read the data is
-controlled by the variable \textbf{readBinaryPrec}. Time-dependent, periodic
-forcing can be applied as well following the same procedure used for the
-wind forcing data (see above).
-
-\begin{itemize}
-\item dissipation
-\end{itemize}
-
-Lateral eddy diffusivities for temperature and salinity/specific humidity
-are specified through the variables \textbf{diffKhT }and \textbf{diffKhS }%
-(in m$^{2}$/s). Vertical eddy diffusivities are specified through the
-variables \textbf{diffKzT }and \textbf{diffKzS }(in m$^{2}$/s) for the ocean
-and \textbf{diffKpT }and \textbf{diffKpS }(in Pa$^{2}$/s) for the
-atmosphere. The vertical diffusive fluxes can be computed implicitly by
-setting the logical variable \textbf{implicitDiffusion }to '.\texttt{TRUE}%
-.'. In addition, biharmonic diffusivities can be specified as well through
-the coefficients \textbf{diffK4T }and \textbf{diffK4S }(in m$^{4}$/s). Note
-that the cosine power scaling (specified through \textbf{cosPower }- see the
-momentum equations section) is applied to the tracer diffusivities
-(Laplacian and biharmonic) as well. The Gent and McWilliams parameterization
-for oceanic tracers is described in the package section. Finally, note that
-tracers can be also subject to Fourier and Shapiro filtering (see the
-corresponding section on these filters).
-
-\begin{itemize}
-\item ocean convection
-\end{itemize}
-
-Two options are available to parameterize ocean convection: one is to use
-the convective adjustment scheme. In this case, you need to set the variable
-\textbf{cadjFreq}, which represents the frequency (in s) with which the
-adjustment algorithm is called, to a non-zero value (if set to a negative
-value by the user, the model will set it to the tracer time step). The other
-option is to parameterize convection with implicit vertical diffusion. To do
-this, set the logical variable \textbf{implicitDiffusion }to '.\texttt{TRUE}%
-.' and the real variable \textbf{ivdc\_kappa }to a value (in m$^{2}$/s) you
-wish the tracer vertical diffusivities to have when mixing tracers
-vertically due to static instabilities. Note that \textbf{cadjFreq }and
-\textbf{ivdc\_kappa }can not both have non-zero value.
+\end{description}
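+
+As a minimal sketch of a tracer setup (values and file names are
+illustrative assumptions, and the assignment of parameters to
+namelist groups may vary between releases), the \textit{data} file
+could contain:
+\begin{verbatim}
+# tracer sketch: illustrative values and file names only
+ &PARM01
+ diffKhT=1.E3,
+ diffKzT=1.E-5,
+ diffKhS=1.E3,
+ diffKzS=1.E-5,
+ implicitDiffusion=.TRUE.,
+ ivdc_kappa=10.,
+ &
+ &PARM03
+ tauThetaClimRelax=5184000.,
+ &
+ &PARM05
+ hydrogThetaFile='theta.init.bin',
+ thetaClimFile='sst.clim.bin',
+ &
+\end{verbatim}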
\subsection{Simulation controls}
-The model ''clock'' is defined by the variable \textbf{deltaTClock }(in s)
-which determines the IO frequencies and is used in tagging output.
-Typically, you will set it to the tracer time step for accelerated runs
-(otherwise it is simply set to the default time step \textbf{deltaT}).
-Frequency of checkpointing and dumping of the model state are referenced to
-this clock (see below).
-
-\begin{itemize}
-\item run duration
-\end{itemize}
-
-The beginning of a simulation is set by specifying a start time (in s)
-through the real variable \textbf{startTime }or by specifying an initial
-iteration number through the integer variable \textbf{nIter0}. If these
-variables are set to nonzero values, the model will look for a ''pickup''
-file \textit{pickup.0000nIter0 }to restart the integration\textit{. }The end
-of a simulation is set through the real variable \textbf{endTime }(in s).
-Alternatively, you can specify instead the number of time steps to execute
-through the integer variable \textbf{nTimeSteps}.
-
-\begin{itemize}
-\item frequency of output
-\end{itemize}
-
-Real variables defining frequencies (in s) with which output files are
-written on disk need to be set up. \textbf{dumpFreq }controls the frequency
-with which the instantaneous state of the model is saved. \textbf{chkPtFreq }%
-and \textbf{pchkPtFreq }control the output frequency of rolling and
-permanent checkpoint files, respectively. See section 1.5.1 Output files for the
-definition of model state and checkpoint files. In addition, time-averaged
-fields can be written out by setting the variable \textbf{taveFreq} (in s).
-The precision with which to write the binary data is controlled by the
-integer variable w\textbf{riteBinaryPrec }(set it to \texttt{32} or \texttt{%
-64}).
+The model ``clock'' is defined by the variable \textbf{deltaTClock}
+(in s) which determines the IO frequencies and is used in tagging
+output. Typically, you will set it to the tracer time step for
+accelerated runs (otherwise it is simply set to the default time step
+\textbf{deltaT}). The frequencies of checkpointing and of dumping the
+model state are referenced to this clock (see below). A sample
+namelist excerpt is given after the list below.
+
+\begin{description}
+\item[run duration] \
+
+ The beginning of a simulation is set by specifying a start time (in
+ s) through the real variable \textbf{startTime} or by specifying an
+ initial iteration number through the integer variable
+ \textbf{nIter0}. If these variables are set to nonzero values, the
+  model will look for a ``pickup'' file \textit{pickup.0000nIter0} to
+ restart the integration. The end of a simulation is set through the
+ real variable \textbf{endTime} (in s). Alternatively, you can
+ specify instead the number of time steps to execute through the
+ integer variable \textbf{nTimeSteps}.
+
+\item[frequency of output] \
+
+ Real variables defining frequencies (in s) with which output files
+ are written on disk need to be set up. \textbf{dumpFreq} controls
+ the frequency with which the instantaneous state of the model is
+ saved. \textbf{chkPtFreq} and \textbf{pchkPtFreq} control the output
+ frequency of rolling and permanent checkpoint files, respectively.
+ See section 1.5.1 Output files for the definition of model state and
+ checkpoint files. In addition, time-averaged fields can be written
+ out by setting the variable \textbf{taveFreq} (in s). The precision
+ with which to write the binary data is controlled by the integer
+  variable \textbf{writeBinaryPrec} (set it to \texttt{32} or
+ \texttt{64}).
+
+\end{description}
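+
+As a minimal sketch (all values are illustrative assumptions; the
+namelist group of each parameter may differ in your release), these
+controls might appear in the \textit{data} file as follows:
+\begin{verbatim}
+# simulation-control sketch: illustrative values only
+ &PARM03
+ nIter0=0,
+ nTimeSteps=100,
+ deltaTClock=1200.,
+ dumpFreq=86400.,
+ chkPtFreq=43200.,
+ pchkPtFreq=86400.,
+ taveFreq=86400.,
+ writeBinaryPrec=64,
+ &
+\end{verbatim}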
+
+
+%%% Local Variables:
+%%% mode: latex
+%%% TeX-master: t
+%%% End: