--- manual/s_getstarted/text/getting_started.tex 2004/10/13 05:06:25 1.26
+++ manual/s_getstarted/text/getting_started.tex 2004/10/16 03:40:13 1.30
@@ -1,4 +1,4 @@
-% $Header: /home/ubuntu/mnt/e9_copy/manual/s_getstarted/text/getting_started.tex,v 1.26 2004/10/13 05:06:25 cnh Exp $
+% $Header: /home/ubuntu/mnt/e9_copy/manual/s_getstarted/text/getting_started.tex,v 1.30 2004/10/16 03:40:13 edhill Exp $
% $Name: $
%\section{Getting started}
@@ -15,8 +15,12 @@
this section, we provide information on how to customize the code when
you are ready to try implementing the configuration you have in mind.
+
\section{Where to find information}
\label{sect:whereToFindInfo}
+\begin{rawhtml}
+
+\end{rawhtml}
A web site is maintained for release 2 (``Pelican'') of MITgcm:
\begin{rawhtml} \end{rawhtml}
@@ -50,6 +54,9 @@
\section{Obtaining the code}
\label{sect:obtainingCode}
+\begin{rawhtml}
+
+\end{rawhtml}
MITgcm can be downloaded from our system by following
the instructions below. As a courtesy we ask that you send e-mail to us at
@@ -79,7 +86,7 @@
\end{enumerate}
-\subsubsection{Checkout from CVS}
+\subsection{Method 1 - Checkout from CVS}
\label{sect:cvs_checkout}
If CVS is available on your system, we strongly encourage you to use it. CVS
@@ -169,7 +176,7 @@
\end{verbatim}
-\subsubsection{Conventional download method}
+\subsection{Method 2 - Tar file download}
\label{sect:conventionalDownload}
If you do not have CVS on your system, you can download the model as a
@@ -256,6 +263,9 @@
with. So please be sure you understand what you're doing.
\section{Model and directory structure}
+\begin{rawhtml}
+
+\end{rawhtml}
The ``numerical'' model is contained within an execution environment
support wrapper. This wrapper is designed to provide a general
@@ -326,6 +336,9 @@
\section[MITgcm Example Experiments]{Example experiments}
\label{sect:modelExamples}
+\begin{rawhtml}
+
+\end{rawhtml}
%% a set of twenty-four pre-configured numerical experiments
@@ -486,6 +499,9 @@
\section[Building MITgcm]{Building the code}
\label{sect:buildingCode}
+\begin{rawhtml}
+
+\end{rawhtml}
To compile the code, we use the {\em make} program. This uses a file
({\em Makefile}) that allows us to pre-process source files, specify
@@ -507,18 +523,18 @@
% ../../../tools/genmake2 -mods=../code
\end{verbatim}
The command line option tells {\em genmake} to override model source
-code with any files in the directory {\em ./code/}.
+code with any files in the directory {\em ../code/}.
On many systems, the {\em genmake2} program will be able to
automatically recognize the hardware, find compilers and other tools
within the user's path (``echo \$PATH''), and then choose an
-appropriate set of options from the files contained in the {\em
- tools/build\_options} directory. Under some circumstances, a user
-may have to create a new ``optfile'' in order to specify the exact
-combination of compiler, compiler flags, libraries, and other options
-necessary to build a particular configuration of MITgcm. In such
-cases, it is generally helpful to read the existing ``optfiles'' and
-mimic their syntax.
+appropriate set of options from the files (``optfiles'') contained in
+the {\em tools/build\_options} directory. Under some circumstances, a
+user may have to create a new ``optfile'' in order to specify the
+exact combination of compiler, compiler flags, libraries, and other
+options necessary to build a particular configuration of MITgcm. In
+such cases, it is generally helpful to read the existing ``optfiles''
+and mimic their syntax.
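+For illustration, a minimal ``optfile'' (this sketch is based on the
+Linux/g77 settings shipped in {\em tools/build\_options}; the exact
+flags are examples, not recommendations) might contain:
+\begin{verbatim}
+FC=g77
+DEFINES='-D_BYTESWAPIO -DWORDLENGTH=4'
+CPP='cpp -traditional -P'
+FFLAGS='-Wimplicit -Wunused'
+FOPTIM='-O3 -funroll-loops'
+NOOPTFLAGS='-O0'
+\end{verbatim}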
Through the MITgcm-support list, the MITgcm developers are willing to
provide help writing or modifying ``optfiles''. And we encourage users
@@ -542,7 +558,11 @@
upon which other files depend. The purpose of this is to reduce
re-compilation if and when you start to modify the code. The {\tt make
depend} command also creates links from the model source to this
-directory.
+directory. It is important to note that the {\tt make depend} stage
+will occasionally produce warnings or errors since the dependency
+parsing tool is unable to find all of the necessary header files
+(\textit{eg.} \texttt{netcdf.inc}). In these circumstances, it is
+usually OK to ignore the warnings/errors and proceed to the next step.
Next compile the code:
\begin{verbatim}
@@ -561,404 +581,18 @@
output.txt}.
-\subsection{Building/compiling the code elsewhere}
-
-In the example above (section \ref{sect:buildingCode}) we built the
-executable in the {\em input} directory of the experiment for
-convenience. You can also configure and compile the code in other
-locations, for example on a scratch disk, without having to copy the
-entire source tree. The only requirement is that you have {\tt
- genmake2} in your path or know the absolute path to {\tt
- genmake2}.
-
-The following sections outline some possible methods of organizing
-your source and data.
-
-\subsubsection{Building from the {\em ../code} directory}
-
-This is just as simple as building in the {\em input/} directory:
-\begin{verbatim}
-% cd verification/exp2/code
-% ../../../tools/genmake2
-% make depend
-% make
-\end{verbatim}
-However, to run the model the executable ({\em mitgcmuv}) and input
-files must be in the same place. If you only have one calculation to make:
-\begin{verbatim}
-% cd ../input
-% cp ../code/mitgcmuv ./
-% ./mitgcmuv > output.txt
-\end{verbatim}
-or if you will be making multiple runs with the same executable:
-\begin{verbatim}
-% cd ../
-% cp -r input run1
-% cp code/mitgcmuv run1
-% cd run1
-% ./mitgcmuv > output.txt
-\end{verbatim}
-
-\subsubsection{Building from a new directory}
-
-Since the {\em input} directory contains input files it is often more
-useful to keep {\em input} pristine and build in a new directory
-within {\em verification/exp2/}:
-\begin{verbatim}
-% cd verification/exp2
-% mkdir build
-% cd build
-% ../../../tools/genmake2 -mods=../code
-% make depend
-% make
-\end{verbatim}
-This builds the code exactly as before but this time you need to copy
-either the executable or the input files or both in order to run the
-model. For example,
-\begin{verbatim}
-% cp ../input/* ./
-% ./mitgcmuv > output.txt
-\end{verbatim}
-or if you tend to make multiple runs with the same executable then
-running in a new directory each time might be more appropriate:
-\begin{verbatim}
-% cd ../
-% mkdir run1
-% cp build/mitgcmuv run1/
-% cp input/* run1/
-% cd run1
-% ./mitgcmuv > output.txt
-\end{verbatim}
-
-\subsubsection{Building on a scratch disk}
-
-Model object files and output data can use up large amounts of disk
-space so it is often the case that you will be operating on a large
-scratch disk. Assuming the model source is in {\em ~/MITgcm} then the
-following commands will build the model in {\em /scratch/exp2-run1}:
-\begin{verbatim}
-% cd /scratch/exp2-run1
-% ~/MITgcm/tools/genmake2 -rootdir=~/MITgcm \
- -mods=~/MITgcm/verification/exp2/code
-% make depend
-% make
-\end{verbatim}
-To run the model here, you'll need the input files:
-\begin{verbatim}
-% cp ~/MITgcm/verification/exp2/input/* ./
-% ./mitgcmuv > output.txt
-\end{verbatim}
-
-As before, you could build in one directory and make multiple runs of
-the one experiment:
-\begin{verbatim}
-% cd /scratch/exp2
-% mkdir build
-% cd build
-% ~/MITgcm/tools/genmake2 -rootdir=~/MITgcm \
- -mods=~/MITgcm/verification/exp2/code
-% make depend
-% make
-% cd ../
-% cp -r ~/MITgcm/verification/exp2/input run2
-% cd run2
-% ./mitgcmuv > output.txt
-\end{verbatim}
-
-
-\subsection{Using \texttt{genmake2}}
-\label{sect:genmake}
-
-To compile the code, first use the program \texttt{genmake2} (located
-in the \texttt{tools} directory) to generate a Makefile.
-\texttt{genmake2} is a shell script written to work with all
-``sh''--compatible shells including bash v1, bash v2, and Bourne.
-Internally, \texttt{genmake2} determines the locations of needed
-files, the compiler, compiler options, libraries, and Unix tools. It
-relies upon a number of ``optfiles'' located in the
-\texttt{tools/build\_options} directory.
-
-The purpose of the optfiles is to provide all the compilation options
-for particular ``platforms'' (where ``platform'' roughly means the
-combination of the hardware and the compiler) and code configurations.
-Given the combinations of possible compilers and library dependencies
-({\it eg.} MPI and NetCDF) there may be numerous optfiles available
-for a single machine. The naming scheme for the majority of the
-optfiles shipped with the code is
-\begin{center}
- {\bf OS\_HARDWARE\_COMPILER }
-\end{center}
-where
-\begin{description}
-\item[OS] is the name of the operating system (generally the
- lower-case output of the {\tt 'uname'} command)
-\item[HARDWARE] is a string that describes the CPU type and
- corresponds to output from the {\tt 'uname -m'} command:
- \begin{description}
- \item[ia32] is for ``x86'' machines such as i386, i486, i586, i686,
- and athlon
- \item[ia64] is for Intel IA64 systems (eg. Itanium, Itanium2)
- \item[amd64] is AMD x86\_64 systems
- \item[ppc] is for Mac PowerPC systems
- \end{description}
-\item[COMPILER] is the compiler name (generally, the name of the
- FORTRAN executable)
-\end{description}
-
-In many cases, the default optfiles are sufficient and will result in
-usable Makefiles. However, for some machines or code configurations,
-new ``optfiles'' must be written. To create a new optfile, it is
-generally best to start with one of the defaults and modify it to suit
-your needs. Like \texttt{genmake2}, the optfiles are all written
-using a simple ``sh''--compatible syntax. While nearly all variables
-used within \texttt{genmake2} may be specified in the optfiles, the
-critical ones that should be defined are:
-
-\begin{description}
-\item[FC] the FORTRAN compiler (executable) to use
-\item[DEFINES] the command-line DEFINE options passed to the compiler
-\item[CPP] the C pre-processor to use
-\item[NOOPTFLAGS] options flags for special files that should not be
- optimized
-\end{description}
-
-For example, the optfile for a typical Red Hat Linux machine (``ia32''
-architecture) using the GCC (g77) compiler is
-\begin{verbatim}
-FC=g77
-DEFINES='-D_BYTESWAPIO -DWORDLENGTH=4'
-CPP='cpp -traditional -P'
-NOOPTFLAGS='-O0'
-# For IEEE, use the "-ffloat-store" option
-if test "x$IEEE" = x ; then
- FFLAGS='-Wimplicit -Wunused -Wuninitialized'
- FOPTIM='-O3 -malign-double -funroll-loops'
-else
- FFLAGS='-Wimplicit -Wunused -ffloat-store'
- FOPTIM='-O0 -malign-double'
-fi
-\end{verbatim}
-
-If you write an optfile for an unrepresented machine or compiler, you
-are strongly encouraged to submit the optfile to the MITgcm project
-for inclusion. Please send the file to the
-\begin{rawhtml} \end{rawhtml}
-\begin{center}
- MITgcm-support@mitgcm.org
-\end{center}
-\begin{rawhtml} \end{rawhtml}
-mailing list.
-
-In addition to the optfiles, \texttt{genmake2} supports a number of
-helpful command-line options. A complete list of these options can be
-obtained from:
-\begin{verbatim}
-% genmake2 -h
-\end{verbatim}
-
-The most important command-line options are:
-\begin{description}
-
-\item[\texttt{--optfile=/PATH/FILENAME}] specifies the optfile that
- should be used for a particular build.
-
- If no "optfile" is specified (either through the command line or the
- MITGCM\_OPTFILE environment variable), genmake2 will try to make a
- reasonable guess from the list provided in {\em
- tools/build\_options}. The method used for making this guess is
- to first determine the combination of operating system and hardware
- (eg. "linux\_ia32") and then find a working FORTRAN compiler within
- the user's path. When these three items have been identified,
- genmake2 will try to find an optfile that has a matching name.
-
-\item[\texttt{--pdefault='PKG1 PKG2 PKG3 ...'}] specifies the default
- set of packages to be used. The normal order of precedence for
- packages is as follows:
- \begin{enumerate}
- \item If available, the command line (\texttt{--pdefault}) settings
- over-rule any others.
-
- \item Next, \texttt{genmake2} will look for a file named
- ``\texttt{packages.conf}'' in the local directory or in any of the
- directories specified with the \texttt{--mods} option.
-
- \item Finally, if neither of the above are available,
- \texttt{genmake2} will use the \texttt{/pkg/pkg\_default} file.
- \end{enumerate}
-
-\item[\texttt{--pdepend=/PATH/FILENAME}] specifies the dependency file
- used for packages.
-
- If not specified, the default dependency file {\em pkg/pkg\_depend}
- is used. The syntax for this file is parsed on a line-by-line basis
- where each line contains either a comment ("\#") or a simple
- "PKGNAME1 (+|-)PKGNAME2" pairwise rule where the "+" or "-" symbol
- specifies a "must be used with" or a "must not be used with"
- relationship, respectively. If no rule is specified, then it is
- assumed that the two packages are compatible and will function
- either with or without each other.
-
-\item[\texttt{--adof=/path/to/file}] specifies the "adjoint" or
- automatic differentiation options file to be used. The file is
- analogous to the ``optfile'' defined above but it specifies
- information for the AD build process.
-
- The default file is located in {\em
- tools/adjoint\_options/adjoint\_default} and it defines the "TAF"
- and "TAMC" compilers. An alternate version is also available at
- {\em tools/adjoint\_options/adjoint\_staf} that selects the newer
- "STAF" compiler. As with any compilers, it is helpful to have their
- directories listed in your {\tt \$PATH} environment variable.
-
-\item[\texttt{--mods='DIR1 DIR2 DIR3 ...'}] specifies a list of
- directories containing ``modifications''. These directories contain
- files with names that may (or may not) exist in the main MITgcm
- source tree but will be overridden by any identically-named sources
- within the ``MODS'' directories.
-
- The order of precedence for this "name-hiding" is as follows:
- \begin{itemize}
- \item ``MODS'' directories (in the order given)
- \item Packages either explicitly specified or provided by default
- (in the order given)
- \item Packages included due to package dependencies (in the order
- that the package dependencies are parsed)
- \item The "standard dirs" (which may have been specified by the
- ``-standarddirs'' option)
- \end{itemize}
-
-\item[\texttt{--mpi}] This option enables certain MPI features (using
- CPP \texttt{\#define}s) within the code and is necessary for MPI
- builds (see Section \ref{sect:mpi-build}).
-
-\item[\texttt{--make=/path/to/gmake}] Due to the poor handling of
- soft-links and other bugs common with the \texttt{make} versions
- provided by commercial Unix vendors, GNU \texttt{make} (sometimes
- called \texttt{gmake}) should be preferred. This option provides a
- means for specifying the make executable to be used.
-
-\item[\texttt{--bash=/path/to/sh}] On some (usually older UNIX)
- machines, the ``bash'' shell is unavailable. To run on these
- systems, \texttt{genmake2} can be invoked using an ``sh'' (that is,
- a Bourne, POSIX, or compatible) shell. The syntax in these
- circumstances is:
- \begin{center}
- \texttt{\% /bin/sh genmake2 -bash=/bin/sh [...options...]}
- \end{center}
- where \texttt{/bin/sh} can be replaced with the full path and name
- of the desired shell.
-
-\end{description}
-
-
-\subsection{Building with MPI}
-\label{sect:mpi-build}
-
-Building MITgcm to use MPI libraries can be complicated due to the
-variety of different MPI implementations available, their dependencies
-or interactions with different compilers, and their often ad-hoc
-locations within file systems. For these reasons, it's generally a
-good idea to start by finding and reading the documentation for your
-machine(s) and, if necessary, seeking help from your local systems
-administrator.
-
-The steps for building MITgcm with MPI support are:
-\begin{enumerate}
-
-\item Determine the locations of your MPI-enabled compiler and/or MPI
- libraries and put them into an options file as described in Section
- \ref{sect:genmake}. One can start with one of the examples in:
- \begin{rawhtml}
- \end{rawhtml}
- \begin{center}
- \texttt{MITgcm/tools/build\_options/}
- \end{center}
- \begin{rawhtml} \end{rawhtml}
- such as \texttt{linux\_ia32\_g77+mpi\_cg01} or
- \texttt{linux\_ia64\_efc+mpi} and then edit it to suit the machine at
- hand. You may need help from your user guide or local systems
- administrator to determine the exact location of the MPI libraries.
- If libraries are not installed, MPI implementations and related
- tools are available including:
- \begin{itemize}
- \item \begin{rawhtml}
- \end{rawhtml}
- MPICH
- \begin{rawhtml} \end{rawhtml}
-
- \item \begin{rawhtml}
- \end{rawhtml}
- LAM/MPI
- \begin{rawhtml} \end{rawhtml}
-
- \item \begin{rawhtml}
- \end{rawhtml}
- MPIexec
- \begin{rawhtml} \end{rawhtml}
- \end{itemize}
-
-\item Build the code with the \texttt{genmake2} \texttt{-mpi} option
- (see Section \ref{sect:genmake}) using commands such as:
-{\footnotesize \begin{verbatim}
- % ../../../tools/genmake2 -mods=../code -mpi -of=YOUR_OPTFILE
- % make depend
- % make
-\end{verbatim} }
-
-\item Run the code with the appropriate MPI ``run'' or ``exec''
- program provided with your particular implementation of MPI.
- Typical MPI packages such as MPICH will use something like:
-\begin{verbatim}
- % mpirun -np 4 -machinefile mf ./mitgcmuv
-\end{verbatim}
- Slightly more complicated scripts may be needed for many machines
- since execution of the code may be controlled by both the MPI
- library and a job scheduling and queueing system such as PBS,
- LoadLeveller, Condor, or any of a number of similar tools. A few
- example scripts (those used for our \begin{rawhtml} \end{rawhtml}regular
- verification runs\begin{rawhtml} \end{rawhtml}) are available
- at:
- \begin{rawhtml}
- \end{rawhtml}
- {\footnotesize \tt
- http://mitgcm.org/cgi-bin/viewcvs.cgi/MITgcm\_contrib/test\_scripts/ }
- \begin{rawhtml} \end{rawhtml}
-
-\end{enumerate}
-
-An example of the above process on the MITgcm cluster (``cg01'') using
-the GNU g77 compiler and the mpich MPI library is:
-
-{\footnotesize \begin{verbatim}
- % cd MITgcm/verification/exp5
- % mkdir build
- % cd build
- % ../../../tools/genmake2 -mpi -mods=../code \
- -of=../../../tools/build_options/linux_ia32_g77+mpi_cg01
- % make depend
- % make
- % cd ../input
- % /usr/local/pkg/mpi/mpi-1.2.4..8a-gm-1.5/g77/bin/mpirun.ch_gm \
- -machinefile mf --gm-kill 5 -v -np 2 ../build/mitgcmuv
-\end{verbatim} }
-
-
-
\section[Running MITgcm]{Running the model in prognostic mode}
\label{sect:runModel}
+\begin{rawhtml}
+
+\end{rawhtml}
If compilation finished successfully (section \ref{sect:buildingCode})
then an executable called \texttt{mitgcmuv} will now exist in the
local directory.
-To run the model as a single process (ie. not in parallel) simply
-type:
+To run the model as a single process (\textit{ie.} not in parallel)
+simply type:
\begin{verbatim}
% ./mitgcmuv
\end{verbatim}
@@ -972,17 +606,30 @@
\begin{verbatim}
% ./mitgcmuv > output.txt
\end{verbatim}
+In the event that the model encounters an error and stops, it is very
+helpful to include the last few lines of this \texttt{output.txt} file
+along with the (\texttt{stderr}) error message within any bug reports.
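+For example, assuming an ``sh''--compatible shell, both streams can
+be captured and the relevant lines extracted with:
+\begin{verbatim}
+% ./mitgcmuv > output.txt 2> error.txt
+% tail -n 20 output.txt error.txt
+\end{verbatim}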
For the example experiments in {\em verification}, an example of the
-output is kept in {\em results/output.txt} for comparison. You can compare
-your {\em output.txt} with this one to check that the set-up works.
+output is kept in {\em results/output.txt} for comparison. You can
+compare your {\em output.txt} with the corresponding one for that
+experiment to check that the set-up works.
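+For instance, from the experiment's {\em input} directory, a quick
+(if verbose) comparison is:
+\begin{verbatim}
+% diff output.txt ../results/output.txt
+\end{verbatim}
+Small differences in the least significant digits of the printed
+values are normal across compilers and platforms.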
\subsection{Output files}
-The model produces various output files. At a minimum, the instantaneous
-``state'' of the model is written out, which is made of the following files:
+The model produces various output files. Depending upon the I/O
+package selected (either \texttt{mdsio} or \texttt{mnc} or both as
+determined by both the compile-time settings and the run-time flags in
+\texttt{data.pkg}), the following output may appear.
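+For example, the run-time package selections in \texttt{data.pkg}
+take the form of a namelist; a hypothetical fragment that enables
+netCDF output (assuming the \texttt{mnc} package was included at
+compile time) might read:
+\begin{verbatim}
+ &PACKAGES
+ useMNC=.TRUE.,
+ &
+\end{verbatim}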
+
+
+\subsubsection{MDSIO output files}
+
+The ``traditional'' output files are generated by the \texttt{mdsio}
+package. At a minimum, the instantaneous ``state'' of the model is
+written out, which is made of the following files:
\begin{itemize}
\item \textit{U.00000nIter} - zonal component of velocity field (m/s and $>
@@ -1030,19 +677,58 @@
used to restart the model but are overwritten every other time they are
output to save disk space during long integrations.
+
+
+\subsubsection{MNC output files}
+
+Unlike the \texttt{mdsio} output, the \texttt{mnc}--generated output
+is usually (though not necessarily) placed within a subdirectory with
+a name such as \texttt{mnc\_test\_\${DATE}\_\${SEQ}}. The files
+within this subdirectory are all in the ``self-describing'' netCDF
+format and can thus be browsed and/or plotted using tools such as:
+\begin{itemize}
+\item At a minimum, the \texttt{ncdump} utility is typically included
+ with every netCDF install:
+ \begin{rawhtml} \end{rawhtml}
+\begin{verbatim}
+http://www.unidata.ucar.edu/packages/netcdf/
+\end{verbatim}
+ \begin{rawhtml} \end{rawhtml}
+
+\item The \texttt{ncview} utility is a very convenient and quick way
+ to plot netCDF data and it runs on most OSes:
+ \begin{rawhtml} \end{rawhtml}
+\begin{verbatim}
+http://meteora.ucsd.edu/~pierce/ncview_home_page.html
+\end{verbatim}
+ \begin{rawhtml} \end{rawhtml}
+
+\item MatLAB(c) and other common post-processing environments provide
+ various netCDF interfaces including:
+ \begin{rawhtml} \end{rawhtml}
+\begin{verbatim}
+http://woodshole.er.usgs.gov/staffpages/cdenham/public_html/MexCDF/nc4ml5.html
+\end{verbatim}
+ \begin{rawhtml} \end{rawhtml}
+
+\end{itemize}
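+As a quick first look, the header (dimensions, variables, and
+attributes) of any of these files can be listed with \texttt{ncdump}
+(the file name below is only a placeholder):
+\begin{verbatim}
+% ncdump -h mnc_test_0001/state.0000000000.t001.nc
+\end{verbatim}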
+
+
\subsection{Looking at the output}
-All the model data are written according to a ``meta/data'' file format.
-Each variable is associated with two files with suffix names \textit{.data}
-and \textit{.meta}. The \textit{.data} file contains the data written in
-binary form (big\_endian by default). The \textit{.meta} file is a
-``header'' file that contains information about the size and the structure
-of the \textit{.data} file. This way of organizing the output is
-particularly useful when running multi-processors calculations. The base
-version of the model includes a few matlab utilities to read output files
-written in this format. The matlab scripts are located in the directory
-\textit{utils/matlab} under the root tree. The script \textit{rdmds.m} reads
-the data. Look at the comments inside the script to see how to use it.
+The ``traditional'' or mdsio model data are written according to a
+``meta/data'' file format. Each variable is associated with two files
+with suffix names \textit{.data} and \textit{.meta}. The
+\textit{.data} file contains the data written in binary form
+(big\_endian by default). The \textit{.meta} file is a ``header'' file
+that contains information about the size and the structure of the
+\textit{.data} file. This way of organizing the output is particularly
+useful when running multi-processor calculations. The base version of
+the model includes a few matlab utilities to read output files written
+in this format. The matlab scripts are located in the directory
+\textit{utils/matlab} under the root tree. The script \textit{rdmds.m}
+reads the data. Look at the comments inside the script to see how to
+use it.
Some examples of reading and visualizing some output in {\em Matlab}:
\begin{verbatim}
@@ -1059,422 +745,4 @@
>> for n=1:11; imagesc(eta(:,:,n)');axis ij;colorbar;pause(.5);end
\end{verbatim}
-\section[Customizing MITgcm]{Doing it yourself: customizing the code}
-
-When you are ready to run the model in the configuration you want, the
-easiest thing is to use and adapt the setup of the case study
-experiment (described previously) that is closest to your
-configuration. Then, the amount of setup will be minimized. In this
-section, we focus on the setup relative to the ``numerical model''
-part of the code (the setup relative to the ``execution environment''
-part is covered in the parallel implementation section) and on the
-variables and parameters that you are likely to change.
-
-\subsection{Configuration and setup}
-
-The CPP keys relative to the ``numerical model'' part of the code are
-all defined and set in the file \textit{CPP\_OPTIONS.h} in the
-directory \textit{model/inc} or in one of the \textit{code}
-directories of the case study experiments under
-\textit{verification}. The model parameters are defined and declared
-in the file \textit{model/inc/PARAMS.h} and their default values are
-set in the routine \textit{model/src/set\_defaults.F}. The default
-values can be modified in the namelist file \textit{data} which needs
-to be located in the directory where you will run the model. The
-parameters are initialized in the routine
-\textit{model/src/ini\_parms.F}. Look at this routine to see in what
-part of the namelist the parameters are located.
-
-In what follows the parameters are grouped into categories related to
-the computational domain, the equations solved in the model, and the
-simulation controls.
-
-\subsection{Computational domain, geometry and time-discretization}
-
-\begin{description}
-\item[dimensions] \
-
- The number of points in the x, y, and r directions are represented
- by the variables \textbf{sNx}, \textbf{sNy} and \textbf{Nr}
- respectively which are declared and set in the file
- \textit{model/inc/SIZE.h}. (Again, this assumes a mono-processor
- calculation. For multiprocessor calculations see the section on
- parallel implementation.)
-
-\item[grid] \
-
- Three different grids are available: cartesian, spherical polar, and
- curvilinear (which includes the cubed sphere). The grid is set
- through the logical variables \textbf{usingCartesianGrid},
- \textbf{usingSphericalPolarGrid}, and \textbf{usingCurvilinearGrid}.
- In the case of spherical and curvilinear grids, the southern
- boundary is defined through the variable \textbf{phiMin} which
- corresponds to the latitude of the southern most cell face (in
- degrees). The resolution along the x and y directions is controlled
- by the 1D arrays \textbf{delx} and \textbf{dely} (in meters in the
- case of a cartesian grid, in degrees otherwise). The vertical grid
- spacing is set through the 1D array \textbf{delz} for the ocean (in
- meters) or \textbf{delp} for the atmosphere (in Pa). The variable
- \textbf{Ro\_SeaLevel} represents the standard position of Sea-Level
- in ``R'' coordinate. This is typically set to 0m for the ocean
- (default value) and 10$^{5}$Pa for the atmosphere. For the
- atmosphere, also set the logical variable \textbf{groundAtK1} to
- \texttt{'.TRUE.'} which puts the first level (k=1) at the lower
- boundary (ground).
-
- For the cartesian grid case, the Coriolis parameter $f$ is set
- through the variables \textbf{f0} and \textbf{beta} which correspond
- to the reference Coriolis parameter (in s$^{-1}$) and
- $\frac{\partial f}{ \partial y}$(in m$^{-1}$s$^{-1}$) respectively.
- If \textbf{beta } is set to a nonzero value, \textbf{f0} is the
- value of $f$ at the southern edge of the domain.
-
-\item[topography - full and partial cells] \
-
- The domain bathymetry is read from a file that contains a 2D (x,y)
- map of depths (in m) for the ocean or pressures (in Pa) for the
- atmosphere. The file name is represented by the variable
- \textbf{bathyFile}. The file is assumed to contain binary numbers
- giving the depth (pressure) of the model at each grid cell, ordered
- with the x coordinate varying fastest. The points are ordered from
- low coordinate to high coordinate for both axes. The model code
- applies without modification to enclosed, periodic, and double
- periodic domains. Periodicity is assumed by default and is
- suppressed by setting the depths to 0m for the cells at the limits
- of the computational domain (note: not sure this is the case for the
- atmosphere). The precision with which to read the binary data is
- controlled by the integer variable \textbf{readBinaryPrec} which can
- take the value \texttt{32} (single precision) or \texttt{64} (double
- precision). See the matlab program \textit{gendata.m} in the
- \textit{input} directories under \textit{verification} to see how
- the bathymetry files are generated for the case study experiments.
-
- To use the partial cell capability, the variable \textbf{hFacMin}
- needs to be set to a value between 0 and 1 (it is set to 1 by
- default) corresponding to the minimum fractional size of the cell.
- For example if the bottom cell is 500m thick and \textbf{hFacMin} is
- set to 0.1, the actual thickness of the cell (i.e. used in the code)
- can cover a range of discrete values 50m apart from 50m to 500m
- depending on the value of the bottom depth (in \textbf{bathyFile})
- at this point.
-
- Note that the bottom depths (or pressures) need not coincide with
- the model's levels as deduced from \textbf{delz} or \textbf{delp}.
- The model will interpolate the numbers in \textbf{bathyFile} so that
- they match the levels obtained from \textbf{delz} or \textbf{delp}
- and \textbf{hFacMin}.
-
- (Note: the atmospheric case is a bit more complicated than what is
- written here I think. To come soon...)
-
-\item[time-discretization] \
-
- The time steps are set through the real variables \textbf{deltaTMom}
- and \textbf{deltaTtracer} (in s) which represent the time step for
- the momentum and tracer equations, respectively. For synchronous
- integrations, simply set the two variables to the same value (or you
- can prescribe one time step only through the variable
- \textbf{deltaT}). The Adams-Bashforth stabilizing parameter is set
- through the variable \textbf{abEps} (dimensionless). The stagger
- baroclinic time stepping can be activated by setting the logical
- variable \textbf{staggerTimeStep} to \texttt{'.TRUE.'}.
-
-\end{description}
-
-
-\subsection{Equation of state}
-
-First, because the model equations are written in terms of
-perturbations, a reference thermodynamic state needs to be specified.
-This is done through the 1D arrays \textbf{tRef} and \textbf{sRef}.
-\textbf{tRef} specifies the reference potential temperature profile
-(in $^{o}$C for the ocean and $^{o}$K for the atmosphere) starting
-from the level k=1. Similarly, \textbf{sRef} specifies the reference
-salinity profile (in ppt) for the ocean or the reference specific
-humidity profile (in g/kg) for the atmosphere.
-
-The form of the equation of state is controlled by the character
-variables \textbf{buoyancyRelation} and \textbf{eosType}.
-\textbf{buoyancyRelation} is set to \texttt{'OCEANIC'} by default and
-needs to be set to \texttt{'ATMOSPHERIC'} for atmosphere simulations.
-In this case, \textbf{eosType} must be set to \texttt{'IDEALGAS'}.
-For the ocean, two forms of the equation of state are available:
-linear (set \textbf{eosType} to \texttt{'LINEAR'}) and a polynomial
-approximation to the full nonlinear equation (set \textbf{eosType} to
-\texttt{'POLYNOMIAL'}). In the linear case, you need to specify the
-thermal and haline expansion coefficients represented by the variables
-\textbf{tAlpha} (in K$^{-1}$) and \textbf{sBeta} (in ppt$^{-1}$). For
-the nonlinear case, you need to generate a file of polynomial
-coefficients called \textit{POLY3.COEFFS}. To do this, use the program
-\textit{utils/knudsen2/knudsen2.f} under the model tree (a Makefile is
-available in the same directory and you will need to edit the number
-and the values of the vertical levels in \textit{knudsen2.f} so that
-they match those of your configuration).
-
-There are also higher polynomials for the equation of state:
-\begin{description}
-\item[\texttt{'UNESCO'}:] The UNESCO equation of state formula of
- Fofonoff and Millard \cite{fofonoff83}. This equation of state
- assumes in-situ temperature, which is not a model variable; {\em its
- use is therefore discouraged, and it is only listed for
- completeness}.
-\item[\texttt{'JMD95Z'}:] A modified UNESCO formula by Jackett and
- McDougall \cite{jackett95}, which uses the model variable potential
- temperature as input. The \texttt{'Z'} indicates that this equation
- of state uses a horizontally and temporally constant pressure
- $p_{0}=-g\rho_{0}z$.
-\item[\texttt{'JMD95P'}:] A modified UNESCO formula by Jackett and
- McDougall \cite{jackett95}, which uses the model variable potential
- temperature as input. The \texttt{'P'} indicates that this equation
- of state uses the actual hydrostatic pressure of the last time
- step. Lagging the pressure in this way requires an additional pickup
- file for restarts.
-\item[\texttt{'MDJWF'}:] The new, more accurate and less expensive
- equation of state by McDougall et~al. \cite{mcdougall03}. It also
- requires lagging the pressure and therefore an additional pickup
- file for restarts.
-\end{description}
-None of these options requires a reference profile of temperature or
-salinity.
-
-\subsection{Momentum equations}
-
-In this section, we only focus for now on the parameters that you are
-likely to change, i.e. the ones relative to forcing and dissipation
-for example. The details relevant to the vector-invariant form of the
-equations and the various advection schemes are not covered for the
-moment. We assume that you use the standard form of the momentum
-equations (i.e. the flux-form) with the default advection scheme.
-Also, there are a few logical variables that allow you to turn on/off
-various terms in the momentum equation. These variables are called
-\textbf{momViscosity, momAdvection, momForcing, useCoriolis,
- momPressureForcing, momStepping} and \textbf{metricTerms} and are
-assumed to be set to \texttt{'.TRUE.'} here. Look at the file
-\textit{model/inc/PARAMS.h} for a precise definition of these
-variables.
-
-\begin{description}
-\item[initialization] \
-
- The velocity components are initialized to 0 unless the simulation
- is starting from a pickup file (see section on simulation control
- parameters).
-
-\item[forcing] \
-
- This section only applies to the ocean. You need to generate
- wind-stress data into two files \textbf{zonalWindFile} and
- \textbf{meridWindFile} corresponding to the zonal and meridional
- components of the wind stress, respectively (if you want the stress
- to be along the direction of only one of the model horizontal axes,
- you only need to generate one file). The format of the files is
- similar to the bathymetry file. The zonal (meridional) stress data
- are assumed to be in Pa and located at U-points (V-points). As for
- the bathymetry, the precision with which to read the binary data is
- controlled by the variable \textbf{readBinaryPrec}. See the matlab
- program \textit{gendata.m} in the \textit{input} directories under
- \textit{verification} to see how simple analytical wind forcing data
- are generated for the case study experiments.
-
- There is also the possibility of prescribing time-dependent periodic
- forcing. To do this, concatenate the successive time records into a
- single file (for each stress component) ordered in a (x,y,t) fashion
- and set the following variables: \textbf{periodicExternalForcing} to
- \texttt{'.TRUE.'}, \textbf{externForcingPeriod} to the period (in s)
- with which the forcing varies (typically 1 month), and
- \textbf{externForcingCycle} to the repeat time (in s) of the forcing
- (typically 1 year -- note: \textbf{externForcingCycle} must be a
- multiple of \textbf{externForcingPeriod}). With these variables set
- up, the model will interpolate the forcing linearly at each
- iteration.
-
-\item[dissipation] \
-
- The lateral eddy viscosity coefficient is specified through the
- variable \textbf{viscAh} (in m$^{2}$s$^{-1}$). The vertical eddy
- viscosity coefficient is specified through the variable
- \textbf{viscAz} (in m$^{2}$s$^{-1}$) for the ocean and
- \textbf{viscAp} (in Pa$^{2}$s$^{-1}$) for the atmosphere. The
- vertical diffusive fluxes can be computed implicitly by setting the
- logical variable \textbf{implicitViscosity} to \texttt{'.TRUE.'}.
- In addition, biharmonic mixing can be added as well through the
- variable \textbf{viscA4} (in m$^{4}$s$^{-1}$). On a spherical polar
- grid, you might also need to set the variable \textbf{cosPower}
- which is set to 0 by default and which represents the power of
- cosine of latitude to multiply viscosity. Slip or no-slip conditions
- at lateral and bottom boundaries are specified through the logical
- variables \textbf{no\_slip\_sides} and \textbf{no\_slip\_bottom}. If
- set to \texttt{'.FALSE.'}, free-slip boundary conditions are
- applied. If no-slip boundary conditions are applied at the bottom, a
- bottom drag can be applied as well. Two forms are available: linear
- (set the variable \textbf{bottomDragLinear} in s$^{-1}$) and
- quadratic (set the variable \textbf{bottomDragQuadratic} in
- m$^{-1}$).
-
- The Fourier and Shapiro filters are described elsewhere.
-
-\item[C-D scheme] \
-
- If you run at a sufficiently coarse resolution, you will need the
- C-D scheme for the computation of the Coriolis terms. The
- variable \textbf{tauCD}, which represents the C-D scheme coupling
- timescale (in s) needs to be set.
-
-\item[calculation of pressure/geopotential] \
-
- First, to run a non-hydrostatic ocean simulation, set the logical
- variable \textbf{nonHydrostatic} to \texttt{'.TRUE.'}. The pressure
- field is then inverted through a 3D elliptic equation. (Note: this
- capability is not available for the atmosphere yet.) By default, a
- hydrostatic simulation is assumed and a 2D elliptic equation is used
- to invert the pressure field. The parameters controlling the
- behaviour of the elliptic solvers are the variables
- \textbf{cg2dMaxIters} and \textbf{cg2dTargetResidual} for
- the 2D case and \textbf{cg3dMaxIters} and
- \textbf{cg3dTargetResidual} for the 3D case. You probably won't need to
- alter the default values (are we sure of this?).
-
- For the calculation of the surface pressure (for the ocean) or
- surface geopotential (for the atmosphere) you need to set the
- logical variables \textbf{rigidLid} and \textbf{implicitFreeSurface}
- (set one to \texttt{'.TRUE.'} and the other to \texttt{'.FALSE.'}
- depending on how you want to deal with the ocean upper or atmosphere
- lower boundary).
-
-\end{description}
-
-\subsection{Tracer equations}
-
-This section covers the tracer equations i.e. the potential
-temperature equation and the salinity (for the ocean) or specific
-humidity (for the atmosphere) equation. As for the momentum equations,
-we only describe for now the parameters that you are likely to change.
-The logical variables \textbf{tempDiffusion}, \textbf{tempAdvection},
-\textbf{tempForcing}, and \textbf{tempStepping} allow you to turn
-on/off terms in the temperature equation (same thing for salinity or
-specific humidity with variables \textbf{saltDiffusion},
-\textbf{saltAdvection} etc.). These variables are all assumed here to
-be set to \texttt{'.TRUE.'}. Look at file \textit{model/inc/PARAMS.h}
-for a precise definition.
-
-\begin{description}
-\item[initialization] \
-
- The initial tracer data can be contained in the binary files
- \textbf{hydrogThetaFile} and \textbf{hydrogSaltFile}. These files
- should contain 3D data ordered in an (x,y,r) fashion with k=1 as the
- first vertical level. If no file names are provided, the tracers
- are then initialized with the values of \textbf{tRef} and
- \textbf{sRef} mentioned above (in the equation of state section). In
- this case, the initial tracer data are uniform in x and y for each
- depth level.
-
-\item[forcing] \
-
- This part is more relevant for the ocean, the procedure for the
- atmosphere not being completely stabilized at the moment.
-
- A combination of fluxes data and relaxation terms can be used for
- driving the tracer equations. For potential temperature, heat flux
- data (in W/m$^{2}$) can be stored in the 2D binary file
- \textbf{surfQfile}. Alternatively or in addition, the forcing can
- be specified through a relaxation term. The SST data to which the
- model surface temperatures are restored should be stored
- in the 2D binary file \textbf{thetaClimFile}. The corresponding
- relaxation time scale coefficient is set through the variable
- \textbf{tauThetaClimRelax} (in s). The same procedure applies for
- salinity with the variable names \textbf{EmPmRfile},
- \textbf{saltClimFile}, and \textbf{tauSaltClimRelax} for freshwater
- flux (in m/s) and surface salinity (in ppt) data files and
- relaxation time scale coefficient (in s), respectively. Also for
- salinity, if the CPP key \textbf{USE\_NATURAL\_BCS} is turned on,
- natural boundary conditions are applied i.e. when computing the
- surface salinity tendency, the freshwater flux is multiplied by the
- model surface salinity instead of a constant salinity value.
-
- As for the other input files, the precision with which to read the
- data is controlled by the variable \textbf{readBinaryPrec}.
- Time-dependent, periodic forcing can be applied as well following
- the same procedure used for the wind forcing data (see above).
-
-\item[dissipation] \
-
- Lateral eddy diffusivities for temperature and salinity/specific
- humidity are specified through the variables \textbf{diffKhT} and
- \textbf{diffKhS} (in m$^{2}$/s). Vertical eddy diffusivities are
- specified through the variables \textbf{diffKzT} and
- \textbf{diffKzS} (in m$^{2}$/s) for the ocean and \textbf{diffKpT}
- and \textbf{diffKpS} (in Pa$^{2}$/s) for the atmosphere. The
- vertical diffusive fluxes can be computed implicitly by setting the
- logical variable \textbf{implicitDiffusion} to \texttt{'.TRUE.'}.
- In addition, biharmonic diffusivities can be specified as well
- through the coefficients \textbf{diffK4T} and \textbf{diffK4S} (in
- m$^{4}$/s). Note that the cosine power scaling (specified through
- \textbf{cosPower}---see the momentum equations section) is applied to
- the tracer diffusivities (Laplacian and biharmonic) as well. The
- Gent and McWilliams parameterization for oceanic tracers is
- described in the package section. Finally, note that tracers can be
- also subject to Fourier and Shapiro filtering (see the corresponding
- section on these filters).
-
-\item[ocean convection] \
-
- Two options are available to parameterize ocean convection: one is
- to use the convective adjustment scheme. In this case, you need to
- set the variable \textbf{cadjFreq}, which represents the frequency
- (in s) with which the adjustment algorithm is called, to a non-zero
- value (if set to a negative value by the user, the model will set it
- to the tracer time step). The other option is to parameterize
- convection with implicit vertical diffusion. To do this, set the
- logical variable \textbf{implicitDiffusion} to \texttt{'.TRUE.'}
- and the real variable \textbf{ivdc\_kappa} to a value (in m$^{2}$/s)
- you wish the tracer vertical diffusivities to have when mixing
- tracers vertically due to static instabilities. Note that
- \textbf{cadjFreq} and \textbf{ivdc\_kappa} cannot both have non-zero
- values.
-
-\end{description}
-
-\subsection{Simulation controls}
-
-The model ``clock'' is defined by the variable \textbf{deltaTClock}
-(in s) which determines the IO frequencies and is used in tagging
-output. Typically, you will set it to the tracer time step for
-accelerated runs (otherwise it is simply set to the default time step
-\textbf{deltaT}). Frequency of checkpointing and dumping of the model
-state are referenced to this clock (see below).
-
-\begin{description}
-\item[run duration] \
-
- The beginning of a simulation is set by specifying a start time (in
- s) through the real variable \textbf{startTime} or by specifying an
- initial iteration number through the integer variable
- \textbf{nIter0}. If these variables are set to nonzero values, the
- model will look for a ``pickup'' file \textit{pickup.0000nIter0} to
- restart the integration. The end of a simulation is set through the
- real variable \textbf{endTime} (in s). Alternatively, you can
- specify instead the number of time steps to execute through the
- integer variable \textbf{nTimeSteps}.
-
-\item[frequency of output] \
-
- Real variables defining frequencies (in s) with which output files
- are written on disk need to be set up. \textbf{dumpFreq} controls
- the frequency with which the instantaneous state of the model is
- saved. \textbf{chkPtFreq} and \textbf{pchkPtFreq} control the output
- frequency of rolling and permanent checkpoint files, respectively.
- See section 1.5.1 Output files for the definition of model state and
- checkpoint files. In addition, time-averaged fields can be written
- out by setting the variable \textbf{taveFreq} (in s). The precision
- with which to write the binary data is controlled by the integer
- variable \textbf{writeBinaryPrec} (set it to \texttt{32} or
- \texttt{64}).
-
-\end{description}
-
-
-%%% Local Variables:
-%%% mode: latex
-%%% TeX-master: t
-%%% End:
+Similar scripts for netCDF output (\texttt{rdmnc.m}) are available.