
\end{enumerate}

\subsection{Method 1 - Checkout from CVS}
\label{sect:cvs_checkout}

If CVS is available on your system, we strongly encourage you to use it. CVS
provides an efficient and elegant way of organizing your code and keeping
track of your changes. If CVS is not available on your machine, you can also
\begin{verbatim}
% export CVSROOT=':pserver:cvsanon@mitgcm.org:/u/gcmpack'
\end{verbatim}
in your \texttt{.profile} or \texttt{.bashrc} file.

To get MITgcm through CVS, first register with the MITgcm CVS server
\end{verbatim}
\begin{rawhtml} </A> \end{rawhtml}

As a convenience, the MITgcm CVS server contains aliases which are
named subsets of the codebase. These aliases can be especially
helpful when used over slow internet connections or on machines with
restricted storage space. Table \ref{tab:cvsModules} contains a list
of CVS aliases.
\begin{table}[htb]
\centering
\begin{tabular}[htb]{|lp{3.25in}|}\hline
\textbf{Alias Name} & \textbf{Information (directories) Contained} \\\hline
\texttt{MITgcm\_code} & Only the source code -- none of the verification examples. \\
\texttt{MITgcm\_verif\_basic}
& Source code plus a small set of the verification examples
(\texttt{global\_ocean.90x40x15}, \texttt{aim.5l\_cs}, \texttt{hs94.128x64x5},
\texttt{front\_relax}, and \texttt{plume\_on\_slope}). \\
\texttt{MITgcm\_verif\_atmos} & Source code plus all of the atmospheric examples. \\
\texttt{MITgcm\_verif\_ocean} & Source code plus all of the oceanic examples. \\
\texttt{MITgcm\_verif\_all} & Source code plus all of the
verification examples. \\\hline
\end{tabular}
\caption{MITgcm CVS Modules}
\label{tab:cvsModules}
\end{table}

The checkout process creates a directory called \textit{MITgcm}. If
the directory \textit{MITgcm} exists, this command updates your code
here
\begin{rawhtml} </A> \end{rawhtml}
.

It is important to note that the CVS aliases in Table
\ref{tab:cvsModules} cannot be used in conjunction with the CVS
\texttt{-d DIRNAME} option. However, the \texttt{MITgcm} directories
they create can be changed to a different name following the check-out:
\begin{verbatim}
% cvs co MITgcm_verif_basic
% mv MITgcm MITgcm_verif_basic
\end{verbatim}

\subsection{Method 2 - Tar file download}
\label{sect:conventionalDownload}

If you do not have CVS on your system, you can download the model as a
\begin{rawhtml} </A> \end{rawhtml}
mailing list.

\subsubsection{Upgrading from an earlier version}

If you already have an earlier version of the code you can ``upgrade''
your copy instead of downloading the entire repository again. First,

\end{itemize}

\section[MITgcm Example Experiments]{Example experiments}
\label{sect:modelExamples}

%% a set of twenty-four pre-configured numerical experiments

Once you have chosen the example you want to run, you are ready to
compile the code.

\section[Building MITgcm]{Building the code}
\label{sect:buildingCode}

To compile the code, we use the {\em make} program. This uses a file
% ../../../tools/genmake2 -mods=../code
\end{verbatim}
The command line option tells {\em genmake2} to override model source
code with any files in the directory {\em ../code/}.

On many systems, the {\em genmake2} program will be able to
automatically recognize the hardware, find compilers and other tools
within the user's path (``echo \$PATH''), and then choose an
appropriate set of options from the files (``optfiles'') contained in
the {\em tools/build\_options} directory. Under some circumstances, a
user may have to create a new ``optfile'' in order to specify the
exact combination of compiler, compiler flags, libraries, and other
options necessary to build a particular configuration of MITgcm. In
such cases, it is generally helpful to read the existing ``optfiles''
and mimic their syntax.

Through the MITgcm-support list, the MITgcm developers are willing to
provide help writing or modifying ``optfiles''. And we encourage users

upon which other files depend. The purpose of this is to reduce
re-compilation if and when you start to modify the code. The {\tt make
depend} command also creates links from the model source to this
directory. It is important to note that the {\tt make depend} stage
will occasionally produce warnings or errors since the dependency
parsing tool is unable to find all of the necessary header files
(\textit{eg.} \texttt{netcdf.inc}). In these circumstances, it is
usually OK to ignore the warnings/errors and proceed to the next step.

Next compile the code:
\begin{verbatim}
output.txt}.
\subsection{Building/compiling the code elsewhere}

In the example above (section \ref{sect:buildingCode}) we built the
executable in the {\em input} directory of the experiment for
convenience. You can also configure and compile the code in other
locations, for example on a scratch disk, without having to copy the
entire source tree. The only requirement to do so is that you have
{\tt genmake2} in your path or you know the absolute path to {\tt
genmake2}.

The following sections outline some possible methods of organizing
your source and data.

\subsubsection{Building from the {\em ../code} directory}

This is just as simple as building in the {\em input/} directory:
\begin{verbatim}
% cd verification/exp2/code
% ../../../tools/genmake2
% make depend
% make
\end{verbatim}
However, to run the model the executable ({\em mitgcmuv}) and input
files must be in the same place. If you only have one calculation to make:
\begin{verbatim}
% cd ../input
% cp ../code/mitgcmuv ./
% ./mitgcmuv > output.txt
\end{verbatim}
or if you will be making multiple runs with the same executable:
\begin{verbatim}
% cd ../
% cp -r input run1
% cp code/mitgcmuv run1
% cd run1
% ./mitgcmuv > output.txt
\end{verbatim}

\subsubsection{Building from a new directory}

Since the {\em input} directory contains input files it is often more
useful to keep {\em input} pristine and build in a new directory
within {\em verification/exp2/}:
\begin{verbatim}
% cd verification/exp2
% mkdir build
% cd build
% ../../../tools/genmake2 -mods=../code
% make depend
% make
\end{verbatim}
This builds the code exactly as before but this time you need to copy
either the executable or the input files or both in order to run the
model. For example,
\begin{verbatim}
% cp ../input/* ./
% ./mitgcmuv > output.txt
\end{verbatim}
or if you tend to make multiple runs with the same executable then
running in a new directory each time might be more appropriate:
\begin{verbatim}
% cd ../
% mkdir run1
% cp build/mitgcmuv run1/
% cp input/* run1/
% cd run1
% ./mitgcmuv > output.txt
\end{verbatim}

\subsubsection{Building on a scratch disk}

Model object files and output data can use up large amounts of disk
space so it is often the case that you will be operating on a large
scratch disk. Assuming the model source is in {\em ~/MITgcm} then the
following commands will build the model in {\em /scratch/exp2-run1}:
\begin{verbatim}
% cd /scratch/exp2-run1
% ~/MITgcm/tools/genmake2 -rootdir=~/MITgcm \
    -mods=~/MITgcm/verification/exp2/code
% make depend
% make
\end{verbatim}
To run the model here, you'll need the input files:
\begin{verbatim}
% cp ~/MITgcm/verification/exp2/input/* ./
% ./mitgcmuv > output.txt
\end{verbatim}

As before, you could build in one directory and make multiple runs of
the one experiment:
\begin{verbatim}
% cd /scratch/exp2
% mkdir build
% cd build
% ~/MITgcm/tools/genmake2 -rootdir=~/MITgcm \
    -mods=~/MITgcm/verification/exp2/code
% make depend
% make
% cd ../
% cp -r ~/MITgcm/verification/exp2/input run2
% cp build/mitgcmuv run2/
% cd run2
% ./mitgcmuv > output.txt
\end{verbatim}

\subsection{Using \textit{genmake2}}
\label{sect:genmake}

To compile the code, first use the program \texttt{genmake2} (located
in the \textit{tools} directory) to generate a Makefile.
\texttt{genmake2} is a shell script written to work with all
``sh''--compatible shells including bash v1, bash v2, and Bourne.
Internally, \texttt{genmake2} determines the locations of needed
files, the compiler, compiler options, libraries, and Unix tools. It
relies upon a number of ``optfiles'' located in the {\em
tools/build\_options} directory.

The purpose of the optfiles is to provide all the compilation options
for particular ``platforms'' (where ``platform'' roughly means the
combination of the hardware and the compiler) and code configurations.
Given the combinations of possible compilers and library dependencies
({\it eg.} MPI and NetCDF) there may be numerous optfiles available
for a single machine. The naming scheme for the majority of the
optfiles shipped with the code is
\begin{center}
{\bf OS\_HARDWARE\_COMPILER }
\end{center}
where
\begin{description}
\item[OS] is the name of the operating system (generally the
  lower-case output of the {\tt 'uname'} command)
\item[HARDWARE] is a string that describes the CPU type and
  corresponds to output from the {\tt 'uname -m'} command:
  \begin{description}
  \item[ia32] is for ``x86'' machines such as i386, i486, i586, i686,
    and athlon
  \item[ia64] is for Intel IA64 systems (eg. Itanium, Itanium2)
  \item[amd64] is for AMD x86\_64 systems
  \item[ppc] is for Mac PowerPC systems
  \end{description}
\item[COMPILER] is the compiler name (generally, the name of the
  FORTRAN executable)
\end{description}

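Under this scheme, for example, the optfile for a Linux PC (``ia32'')
with the g77 compiler is named \texttt{linux\_ia32\_g77}, and it can
be selected explicitly when generating the Makefile (this assumes a
matching optfile exists for your platform and that the build is done
from within an experiment directory, as in the examples above):
\begin{verbatim}
% ../../../tools/genmake2 -mods=../code \
    -optfile=../../../tools/build_options/linux_ia32_g77
\end{verbatim}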

In many cases, the default optfiles are sufficient and will result in
usable Makefiles. However, for some machines or code configurations,
new ``optfiles'' must be written. To create a new optfile, it is
generally best to start with one of the defaults and modify it to suit
your needs. Like \texttt{genmake2}, the optfiles are all written
using a simple ``sh''--compatible syntax. While nearly all variables
used within \texttt{genmake2} may be specified in the optfiles, the
critical ones that should be defined are:

\begin{description}
\item[FC] the FORTRAN compiler (executable) to use
\item[DEFINES] the command-line DEFINE options passed to the compiler
\item[CPP] the C pre-processor to use
\item[NOOPTFLAGS] option flags for special files that should not be
  optimized
\end{description}

For example, the optfile for a typical Red Hat Linux machine (``ia32''
architecture) using the GCC (g77) compiler is
\begin{verbatim}
FC=g77
DEFINES='-D_BYTESWAPIO -DWORDLENGTH=4'
CPP='cpp -traditional -P'
NOOPTFLAGS='-O0'
#  For IEEE, use the "-ffloat-store" option
if test "x$IEEE" = x ; then
    FFLAGS='-Wimplicit -Wunused -Wuninitialized'
    FOPTIM='-O3 -malign-double -funroll-loops'
else
    FFLAGS='-Wimplicit -Wunused -ffloat-store'
    FOPTIM='-O0 -malign-double'
fi
\end{verbatim}

If you write an optfile for an unrepresented machine or compiler, you
are strongly encouraged to submit the optfile to the MITgcm project
for inclusion. Please send the file to the
\begin{rawhtml} <A href="mailto:MITgcm-support@mitgcm.org"> \end{rawhtml}
\begin{center}
MITgcm-support@mitgcm.org
\end{center}
\begin{rawhtml} </A> \end{rawhtml}
mailing list.

In addition to the optfiles, \texttt{genmake2} supports a number of
helpful command-line options. A complete list of these options can be
obtained from:
\begin{verbatim}
% genmake2 -h
\end{verbatim}

The most important command-line options are:
\begin{description}

\item[\texttt{--optfile=/PATH/FILENAME}] specifies the optfile that
  should be used for a particular build.

  If no ``optfile'' is specified (either through the command line or
  the MITGCM\_OPTFILE environment variable), genmake2 will try to make
  a reasonable guess from the list provided in {\em
  tools/build\_options}. The method used for making this guess is to
  first determine the combination of operating system and hardware
  (eg. ``linux\_ia32'') and then find a working FORTRAN compiler within
  the user's path. When these three items have been identified,
  genmake2 will try to find an optfile that has a matching name.

\item[\texttt{--pdepend=/PATH/FILENAME}] specifies the dependency file
  used for packages.

  If not specified, the default dependency file {\em pkg/pkg\_depend}
  is used. The syntax for this file is parsed on a line-by-line basis
  where each line contains either a comment (``\#'') or a simple
  ``PKGNAME1 (+|-)PKGNAME2'' pairwise rule where the ``+'' or ``-''
  symbol specifies a ``must be used with'' or a ``must not be used
  with'' relationship, respectively. If no rule is specified, then it
  is assumed that the two packages are compatible and will function
  either with or without each other.
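
  To illustrate the syntax, a sketch of such pairwise rules follows
  (the package names here are hypothetical, not actual MITgcm
  packages):
\begin{verbatim}
#  mypkg_a must be used with mypkg_b:
mypkg_a  +mypkg_b
#  mypkg_a must not be used with mypkg_c:
mypkg_a  -mypkg_c
\end{verbatim}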
|
|
|
|
|
\item[\texttt{--pdefault='PKG1 PKG2 PKG3 ...'}] specifies the default |
|
|
set of packages to be used. |
|
|
|
|
|
If not set, the default package list will be read from {\em |
|
|
pkg/pkg\_default} |
|
|
|
|
|
\item[\texttt{--adof=/path/to/file}] specifies the "adjoint" or |
|
|
automatic differentiation options file to be used. The file is |
|
|
analogous to the ``optfile'' defined above but it specifies |
|
|
information for the AD build process. |
|
|
|
|
|
The default file is located in {\em |
|
|
tools/adjoint\_options/adjoint\_default} and it defines the "TAF" |
|
|
and "TAMC" compilers. An alternate version is also available at |
|
|
{\em tools/adjoint\_options/adjoint\_staf} that selects the newer |
|
|
"STAF" compiler. As with any compilers, it is helpful to have their |
|
|
directories listed in your {\tt \$PATH} environment variable. |
|
|
|
|
|
\item[\texttt{--mods='DIR1 DIR2 DIR3 ...'}] specifies a list of |
|
|
directories containing ``modifications''. These directories contain |
|
|
files with names that may (or may not) exist in the main MITgcm |
|
|
source tree but will be overridden by any identically-named sources |
|
|
within the ``MODS'' directories. |
|
|
|
|
|
The order of precedence for this "name-hiding" is as follows: |
|
|
\begin{itemize} |
|
|
\item ``MODS'' directories (in the order given) |
|
|
\item Packages either explicitly specified or provided by default |
|
|
(in the order given) |
|
|
\item Packages included due to package dependencies (in the order |
|
|
that that package dependencies are parsed) |
|
|
\item The "standard dirs" (which may have been specified by the |
|
|
``-standarddirs'' option) |
|
|
\end{itemize} |
|
|
|
|
|
\item[\texttt{--make=/path/to/gmake}] Due to the poor handling of |
|
|
soft-links and other bugs common with the \texttt{make} versions |
|
|
provided by commercial Unix vendors, GNU \texttt{make} (sometimes |
|
|
called \texttt{gmake}) should be preferred. This option provides a |
|
|
means for specifying the make executable to be used. |
|
|
|
|
|
\end{description} |
|
|
|
|
|
|
|
|
|
|
|
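Putting several of these options together, a complete build sequence
might look like the following sketch (the optfile name is illustrative
and should match your platform):
\begin{verbatim}
% ../../../tools/genmake2 -mods=../code -make=gmake \
    -optfile=../../../tools/build_options/linux_ia32_g77
% gmake depend
% gmake
\end{verbatim}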
\section[Running MITgcm]{Running the model in prognostic mode}
\label{sect:runModel}

If compilation finished successfully (section \ref{sect:buildingCode})
then an executable called \texttt{mitgcmuv} will now exist in the
local directory.

To run the model as a single process (\textit{ie.} not in parallel)
simply type:
\begin{verbatim}
% ./mitgcmuv
\end{verbatim}
\begin{verbatim}
% ./mitgcmuv > output.txt
\end{verbatim}
In the event that the model encounters an error and stops, it is very
helpful to include the last few lines of this \texttt{output.txt} file
along with the (\texttt{stderr}) error message within any bug reports.

For the example experiments in {\em verification}, an example of the
output is kept in {\em results/output.txt} for comparison. You can
compare your {\em output.txt} with the corresponding one for that
experiment to check that the set-up works.

\subsection{Output files}

The model produces various output files. Depending upon the I/O
package selected (either \texttt{mdsio} or \texttt{mnc} or both, as
determined by both the compile-time settings and the run-time flags in
\texttt{data.pkg}), the following output may appear.

\subsubsection{MDSIO output files}

The ``traditional'' output files are generated by the \texttt{mdsio}
package. At a minimum, the instantaneous ``state'' of the model is
written out, which is made of the following files:

\begin{itemize}
\item \textit{U.00000nIter} - zonal component of velocity field (m/s and $>
used to restart the model but are overwritten every other time they are
output to save disk space during long integrations.

\subsubsection{MNC output files}

Unlike the \texttt{mdsio} output, the \texttt{mnc}--generated output
is usually (though not necessarily) placed within a subdirectory with
a name such as \texttt{mnc\_test\_\${DATE}\_\${SEQ}}. The files
within this subdirectory are all in the ``self-describing'' netCDF
format and can thus be browsed and/or plotted using tools such as:
\begin{itemize}
\item At a minimum, the \texttt{ncdump} utility is typically included
  with every netCDF install:
  \begin{rawhtml} <A href="http://www.unidata.ucar.edu/packages/netcdf/"> \end{rawhtml}
\begin{verbatim}
http://www.unidata.ucar.edu/packages/netcdf/
\end{verbatim}
  \begin{rawhtml} </A> \end{rawhtml}

\item The \texttt{ncview} utility is a very convenient and quick way
  to plot netCDF data and it runs on most OSes:
  \begin{rawhtml} <A href="http://meteora.ucsd.edu/~pierce/ncview_home_page.html"> \end{rawhtml}
\begin{verbatim}
http://meteora.ucsd.edu/~pierce/ncview_home_page.html
\end{verbatim}
  \begin{rawhtml} </A> \end{rawhtml}

\item MatLAB(c) and other common post-processing environments provide
  various netCDF interfaces including:
  \begin{rawhtml} <A href="http://woodshole.er.usgs.gov/staffpages/cdenham/public_html/MexCDF/nc4ml5.html"> \end{rawhtml}
\begin{verbatim}
http://woodshole.er.usgs.gov/staffpages/cdenham/public_html/MexCDF/nc4ml5.html
\end{verbatim}
  \begin{rawhtml} </A> \end{rawhtml}
\end{itemize}
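
For a quick look at the contents of an \texttt{mnc} file, dump its
header with \texttt{ncdump} (the directory and file names here are
illustrative):
\begin{verbatim}
% ncdump -h mnc_test_0001/state.0000000000.t001.nc
\end{verbatim}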

\subsection{Looking at the output}

The ``traditional'' or \texttt{mdsio} model data are written according
to a ``meta/data'' file format. Each variable is associated with two
files with suffix names \textit{.data} and \textit{.meta}. The
\textit{.data} file contains the data written in binary form
(big\_endian by default). The \textit{.meta} file is a ``header'' file
that contains information about the size and the structure of the
\textit{.data} file. This way of organizing the output is particularly
useful when running multi-processor calculations. The base version of
the model includes a few matlab utilities to read output files written
in this format. The matlab scripts are located in the directory
\textit{utils/matlab} under the root tree. The script \textit{rdmds.m}
reads the data. Look at the comments inside the script to see how to
use it.

Some examples of reading and visualizing some output in {\em Matlab}:
\begin{verbatim}
>> for n=1:11; imagesc(eta(:,:,n)');axis ij;colorbar;pause(.5);end
\end{verbatim}

Similar scripts for netCDF output (\texttt{rdmnc.m}) are available.
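As a sketch, the \texttt{mnc} state files might be read with
\texttt{rdmnc.m} as follows (the subdirectory name is illustrative;
see the comments inside the script for the exact calling convention):
\begin{verbatim}
>> addpath ../../../utils/matlab/
>> S = rdmnc('mnc_test_0001/state.*');
\end{verbatim}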

\section{Doing it yourself: customizing the code}

When you are ready to run the model in the configuration you want, the
easiest thing is to use and adapt the setup of the case studies
experiment (described previously) that is the closest to your
configuration. Then, the amount of setup will be minimized. In this
section, we focus on the setup relative to the ``numerical model''
part of the code (the setup relative to the ``execution environment''
part is covered in the parallel implementation section) and on the
variables and parameters that you are likely to change.

\subsection{Configuration and setup}

The CPP keys relative to the ``numerical model'' part of the code are
all defined and set in the file \textit{CPP\_OPTIONS.h} in the
directory \textit{model/inc} or in one of the \textit{code}
directories of the case study experiments under
\textit{verification}. The model parameters are defined and declared
in the file \textit{model/inc/PARAMS.h} and their default values are
set in the routine \textit{model/src/set\_defaults.F}. The default
values can be modified in the namelist file \textit{data} which needs
to be located in the directory where you will run the model. The
parameters are initialized in the routine
\textit{model/src/ini\_parms.F}. Look at this routine to see in what
part of the namelist the parameters are located.

In what follows the parameters are grouped into categories related to
the computational domain, the equations solved in the model, and the
simulation controls.

\subsection{Computational domain, geometry and time-discretization}

\begin{description}
\item[dimensions] \ 

  The number of points in the x, y, and r directions are represented
  by the variables \textbf{sNx}, \textbf{sNy} and \textbf{Nr}
  respectively which are declared and set in the file
  \textit{model/inc/SIZE.h}. (Again, this assumes a mono-processor
  calculation. For multiprocessor calculations see the section on
  parallel implementation.)

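  For instance, a single-process $60\times60\times15$ configuration
  might be declared in \textit{SIZE.h} with a fragment like the
  following (the values are illustrative, and the actual file contains
  additional parameters, such as the overlap and tiling sizes):
\begin{verbatim}
      INTEGER sNx, sNy, Nr
      PARAMETER ( sNx = 60, sNy = 60, Nr = 15 )
\end{verbatim}
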
\item[grid] \ 

  Three different grids are available: cartesian, spherical polar, and
  curvilinear (which includes the cubed sphere). The grid is set
  through the logical variables \textbf{usingCartesianGrid},
  \textbf{usingSphericalPolarGrid}, and \textbf{usingCurvilinearGrid}.
  In the case of spherical and curvilinear grids, the southern
  boundary is defined through the variable \textbf{phiMin} which
  corresponds to the latitude of the southernmost cell face (in
  degrees). The resolution along the x and y directions is controlled
  by the 1D arrays \textbf{delx} and \textbf{dely} (in meters in the
  case of a cartesian grid, in degrees otherwise). The vertical grid
  spacing is set through the 1D array \textbf{delz} for the ocean (in
  meters) or \textbf{delp} for the atmosphere (in Pa). The variable
  \textbf{Ro\_SeaLevel} represents the standard position of Sea-Level
  in ``R'' coordinate. This is typically set to 0m for the ocean
  (default value) and 10$^{5}$Pa for the atmosphere. For the
  atmosphere, also set the logical variable \textbf{groundAtK1} to
  \texttt{'.TRUE.'} which puts the first level (k=1) at the lower
  boundary (ground).

  For the cartesian grid case, the Coriolis parameter $f$ is set
  through the variables \textbf{f0} and \textbf{beta} which correspond
  to the reference Coriolis parameter (in s$^{-1}$) and
  $\frac{\partial f}{\partial y}$ (in m$^{-1}$s$^{-1}$) respectively.
  If \textbf{beta} is set to a nonzero value, \textbf{f0} is the
  value of $f$ at the southern edge of the domain.
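
  As an illustration, a $60\times60$ cartesian grid with 100~km
  spacing and four 500~m thick vertical levels could be set in the
  \textit{data} namelist file with something like the following sketch
  (modeled on the case study experiments; the values are illustrative):
\begin{verbatim}
 &PARM04
 usingCartesianGrid=.TRUE.,
 delX=60*100.E3,
 delY=60*100.E3,
 delZ=500.,500.,500.,500.,
 &
\end{verbatim}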
|
|
|
|
|
\item[topography - full and partial cells] \ |
|
|
|
|
|
The domain bathymetry is read from a file that contains a 2D (x,y) |
|
|
map of depths (in m) for the ocean or pressures (in Pa) for the |
|
|
atmosphere. The file name is represented by the variable |
|
|
\textbf{bathyFile}. The file is assumed to contain binary numbers |
|
|
giving the depth (pressure) of the model at each grid cell, ordered |
|
|
with the x coordinate varying fastest. The points are ordered from |
|
|
low coordinate to high coordinate for both axes. The model code |
|
|
applies without modification to enclosed, periodic, and double |
|
|
periodic domains. Periodicity is assumed by default and is |
|
|
suppressed by setting the depths to 0m for the cells at the limits |
|
|
of the computational domain (note: not sure this is the case for the |
|
|
atmosphere). The precision with which to read the binary data is |
|
|
controlled by the integer variable \textbf{readBinaryPrec} which can |
|
|
take the value \texttt{32} (single precision) or \texttt{64} (double |
|
|
precision). See the matlab program \textit{gendata.m} in the |
|
|
\textit{input} directories under \textit{verification} to see how |
|
|
the bathymetry files are generated for the case study experiments. |
|
|
|
|
|
To use the partial cell capability, the variable \textbf{hFacMin}
needs to be set to a value between 0 and 1 (it is set to 1 by
default) corresponding to the minimum fractional size of the cell.
For example, if the bottom cell is 500~m thick and \textbf{hFacMin} is
set to 0.1, the actual thickness of the cell (i.e., as used in the
code) can cover a range of discrete values 50~m apart, from 50~m to
500~m, depending on the value of the bottom depth (in
\textbf{bathyFile}) at this point.
|
|
|
|
|
Note that the bottom depths (or pressures) need not coincide with
the model's levels as deduced from \textbf{delz} or \textbf{delp}.
The model will interpolate the numbers in \textbf{bathyFile} so that
they match the levels obtained from \textbf{delz} or \textbf{delp}
and \textbf{hFacMin}.

(Note: the atmospheric case is a bit more complicated than what is
written here I think. To come soon...)
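
As a sketch, the relevant entries might look like this in the
\textit{data} file (the file name \texttt{topog.bin} is hypothetical,
and the namelist groups follow the standard layout):
\begin{verbatim}
 &PARM01
 readBinaryPrec=64,
 hFacMin=0.1,
 &
 &PARM05
 bathyFile='topog.bin',
 &
\end{verbatim}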
|
|
|
|
|
\item[time-discretization] \

The time steps are set through the real variables \textbf{deltaTMom}
and \textbf{deltaTtracer} (in s), which represent the time step for
the momentum and tracer equations, respectively. For synchronous
integrations, simply set the two variables to the same value (or
prescribe a single time step through the variable
\textbf{deltaT}). The Adams-Bashforth stabilizing parameter is set
through the variable \textbf{abEps} (dimensionless). The staggered
baroclinic time stepping can be activated by setting the logical
variable \textbf{staggerTimeStep} to \texttt{'.TRUE.'}.
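
A synchronous time-stepping setup could be sketched as follows in the
\texttt{PARM03} namelist (illustrative values; a 20-minute time step
is common for coarse-resolution ocean runs):
\begin{verbatim}
 &PARM03
 deltaT=1200.,
 abEps=0.1,
 staggerTimeStep=.FALSE.,
 &
\end{verbatim}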
|
|
|
|
|
\end{description}
|
|
|
|
|
|
|
|
\subsection{Equation of state}

First, because the model equations are written in terms of
perturbations, a reference thermodynamic state needs to be specified.
This is done through the 1D arrays \textbf{tRef} and \textbf{sRef}.
\textbf{tRef} specifies the reference potential temperature profile
(in $^{\circ}$C for the ocean and K for the atmosphere) starting
from the level k=1. Similarly, \textbf{sRef} specifies the reference
salinity profile (in ppt) for the ocean or the reference specific
humidity profile (in g/kg) for the atmosphere.
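
For a hypothetical four-level ocean configuration, the reference
profiles might be specified as (illustrative values, one entry per
level starting at k=1):
\begin{verbatim}
 &PARM01
 tRef=20.,16.,12.,8.,
 sRef=35.,35.,35.,35.,
 &
\end{verbatim}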
|
|
|
|
|
The form of the equation of state is controlled by the character
variables \textbf{buoyancyRelation} and \textbf{eosType}.
\textbf{buoyancyRelation} is set to \texttt{'OCEANIC'} by default and
needs to be set to \texttt{'ATMOSPHERIC'} for atmosphere simulations.
In this case, \textbf{eosType} must be set to \texttt{'IDEALGAS'}.
For the ocean, two forms of the equation of state are available:
linear (set \textbf{eosType} to \texttt{'LINEAR'}) and a polynomial
approximation to the full nonlinear equation (set \textbf{eosType} to
\texttt{'POLYNOMIAL'}). In the linear case, you need to specify the
thermal and haline expansion coefficients, represented by the
variables \textbf{tAlpha} (in K$^{-1}$) and \textbf{sBeta} (in
ppt$^{-1}$). For the nonlinear case, you need to generate a file of
polynomial coefficients called \textit{POLY3.COEFFS}. To do this, use
the program \textit{utils/knudsen2/knudsen2.f} under the model tree (a
Makefile is available in the same directory; you will need to edit the
number and the values of the vertical levels in \textit{knudsen2.f} so
that they match those of your configuration).
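
A linear equation of state might be sketched as follows (the
expansion coefficients shown are typical ocean values, not defaults):
\begin{verbatim}
 &PARM01
 eosType='LINEAR',
 tAlpha=2.E-4,
 sBeta=7.4E-4,
 &
\end{verbatim}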
|
|
|
|
|
There are also more accurate formulations of the equation of state:
\begin{description}
\item[\texttt{'UNESCO'}:] The UNESCO equation of state formula of
  Fofonoff and Millard \cite{fofonoff83}. This equation of state
  assumes in-situ temperature, which is not a model variable; {\em its
  use is therefore discouraged, and it is only listed for
  completeness}.
\item[\texttt{'JMD95Z'}:] A modified UNESCO formula by Jackett and
  McDougall \cite{jackett95}, which uses the model variable potential
  temperature as input. The \texttt{'Z'} indicates that this equation
  of state uses a horizontally and temporally constant pressure
  $p_{0}=-g\rho_{0}z$.
\item[\texttt{'JMD95P'}:] A modified UNESCO formula by Jackett and
  McDougall \cite{jackett95}, which uses the model variable potential
  temperature as input. The \texttt{'P'} indicates that this equation
  of state uses the actual hydrostatic pressure of the last time
  step. Lagging the pressure in this way requires an additional pickup
  file for restarts.
\item[\texttt{'MDJWF'}:] The new, more accurate and less expensive
  equation of state by McDougall et~al. \cite{mcdougall03}. It also
  requires lagging the pressure and therefore an additional pickup
  file for restarts.
\end{description}
|
|
None of these options requires a reference profile of temperature or
salinity.
|
|
|
|
|
\subsection{Momentum equations}

In this section, we focus for now only on the parameters that you are
likely to change, i.e., those related to forcing and dissipation.
The details relevant to the vector-invariant form of the
equations and the various advection schemes are not covered for the
moment. We assume that you use the standard form of the momentum
equations (i.e., the flux form) with the default advection scheme.
Also, there are a few logical variables that allow you to turn on/off
various terms in the momentum equation. These variables are called
\textbf{momViscosity}, \textbf{momAdvection}, \textbf{momForcing},
\textbf{useCoriolis}, \textbf{momPressureForcing},
\textbf{momStepping}, and \textbf{metricTerms}, and are all
assumed to be set to \texttt{'.TRUE.'} here. Look at the file
\textit{model/inc/PARAMS.h} for a precise definition of these
variables.
|
|
|
|
|
\begin{description}
\item[initialization] \

The velocity components are initialized to 0 unless the simulation
is starting from a pickup file (see the section on simulation control
parameters).
|
|
|
|
|
\item[forcing] \

This section only applies to the ocean. You need to generate
wind-stress data in two files, \textbf{zonalWindFile} and
\textbf{meridWindFile}, corresponding to the zonal and meridional
components of the wind stress, respectively (if you want the stress
to be along the direction of only one of the model horizontal axes,
you only need to generate one file). The format of the files is
similar to the bathymetry file. The zonal (meridional) stress data
are assumed to be in Pa and located at U-points (V-points). As for
the bathymetry, the precision with which to read the binary data is
controlled by the variable \textbf{readBinaryPrec}. See the Matlab
program \textit{gendata.m} in the \textit{input} directories under
\textit{verification} to see how simple analytical wind forcing data
are generated for the case study experiments.
|
|
|
|
There is also the possibility of prescribing time-dependent periodic
forcing. To do this, concatenate the successive time records into a
single file (for each stress component) ordered in an (x,y,t) fashion
and set the following variables: \textbf{periodicExternalForcing} to
\texttt{'.TRUE.'}, \textbf{externForcingPeriod} to the period (in s)
with which the forcing varies (typically 1 month), and
\textbf{externForcingCycle} to the repeat time (in s) of the forcing
(typically 1 year -- note: \textbf{externForcingCycle} must be a
multiple of \textbf{externForcingPeriod}). With these variables set
up, the model will interpolate the forcing linearly at each
iteration.
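
As a sketch, monthly wind-stress records repeating over a 360-day
year might be set up as follows (the file names are hypothetical;
2592000~s is 30 days and 31104000~s is 360 days):
\begin{verbatim}
 &PARM03
 periodicExternalForcing=.TRUE.,
 externForcingPeriod=2592000.,
 externForcingCycle=31104000.,
 &
 &PARM05
 zonalWindFile='taux.bin',
 meridWindFile='tauy.bin',
 &
\end{verbatim}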
|
|
|
|
|
\item[dissipation] \

The lateral eddy viscosity coefficient is specified through the
variable \textbf{viscAh} (in m$^{2}$s$^{-1}$). The vertical eddy
viscosity coefficient is specified through the variable
\textbf{viscAz} (in m$^{2}$s$^{-1}$) for the ocean and
\textbf{viscAp} (in Pa$^{2}$s$^{-1}$) for the atmosphere. The
vertical diffusive fluxes can be computed implicitly by setting the
logical variable \textbf{implicitViscosity} to \texttt{'.TRUE.'}.
In addition, biharmonic mixing can be added as well through the
variable \textbf{viscA4} (in m$^{4}$s$^{-1}$). On a spherical polar
grid, you might also need to set the variable \textbf{cosPower},
which is set to 0 by default and which represents the power of the
cosine of latitude by which to multiply the viscosity. Slip or
no-slip conditions at lateral and bottom boundaries are specified
through the logical variables \textbf{no\_slip\_sides} and
\textbf{no\_slip\_bottom}. If set to \texttt{'.FALSE.'}, free-slip
boundary conditions are applied. If no-slip boundary conditions are
applied at the bottom, a bottom drag can be applied as well. Two
forms are available: linear (set the variable
\textbf{bottomDragLinear}, in s$^{-1}$) and quadratic (set the
variable \textbf{bottomDragQuadratic}, in m$^{-1}$).

The Fourier and Shapiro filters are described elsewhere.
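
An illustrative ocean dissipation setup (these coefficients are
typical of coarse-resolution runs, not defaults):
\begin{verbatim}
 &PARM01
 viscAh=4.E2,
 viscAz=1.E-3,
 viscA4=0.,
 no_slip_sides=.FALSE.,
 no_slip_bottom=.TRUE.,
 &
\end{verbatim}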
|
|
|
|
|
\item[C-D scheme] \

If you run at a sufficiently coarse resolution, you will need the
C-D scheme for the computation of the Coriolis terms. The variable
\textbf{tauCD}, which represents the C-D scheme coupling timescale
(in s), needs to be set.
|
|
|
|
|
\item[calculation of pressure/geopotential] \

First, to run a non-hydrostatic ocean simulation, set the logical
variable \textbf{nonHydrostatic} to \texttt{'.TRUE.'}. The pressure
field is then inverted through a 3D elliptic equation. (Note: this
capability is not available for the atmosphere yet.) By default, a
hydrostatic simulation is assumed and a 2D elliptic equation is used
to invert the pressure field. The parameters controlling the
behaviour of the elliptic solvers are the variables
\textbf{cg2dMaxIters} and \textbf{cg2dTargetResidual} for
the 2D case and \textbf{cg3dMaxIters} and
\textbf{cg3dTargetResidual} for the 3D case. You probably won't need
to alter the default values (are we sure of this?).

For the calculation of the surface pressure (for the ocean) or
surface geopotential (for the atmosphere), you need to set the
logical variables \textbf{rigidLid} and \textbf{implicitFreeSurface}
(set one to \texttt{'.TRUE.'} and the other to \texttt{'.FALSE.'}
depending on how you want to deal with the ocean upper or atmosphere
lower boundary).
|
|
|
|
\end{description}
|
|
|
|
|
\subsection{Tracer equations}

This section covers the tracer equations, i.e., the potential
temperature equation and the salinity (for the ocean) or specific
humidity (for the atmosphere) equation. As for the momentum equations,
we only describe for now the parameters that you are likely to change.
The logical variables \textbf{tempDiffusion}, \textbf{tempAdvection},
\textbf{tempForcing}, and \textbf{tempStepping} allow you to turn
on/off terms in the temperature equation (similarly for salinity or
specific humidity with the variables \textbf{saltDiffusion},
\textbf{saltAdvection}, etc.). These variables are all assumed here to
be set to \texttt{'.TRUE.'}. Look at the file \textit{model/inc/PARAMS.h}
for a precise definition.
|
|
|
|
|
\begin{description}
\item[initialization] \

The initial tracer data can be contained in the binary files
\textbf{hydrogThetaFile} and \textbf{hydrogSaltFile}. These files
should contain 3D data ordered in an (x,y,r) fashion with k=1 as the
first vertical level. If no file names are provided, the tracers
are then initialized with the values of \textbf{tRef} and
\textbf{sRef} mentioned above (in the equation of state section). In
this case, the initial tracer data are uniform in x and y for each
depth level.
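
As a sketch (the file names are hypothetical):
\begin{verbatim}
 &PARM05
 hydrogThetaFile='T.init.bin',
 hydrogSaltFile='S.init.bin',
 &
\end{verbatim}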
|
|
|
|
|
\item[forcing] \

This part is more relevant for the ocean, the procedure for the
atmosphere not being completely stabilized at the moment.

A combination of flux data and relaxation terms can be used for
driving the tracer equations. For potential temperature, heat flux
data (in W/m$^{2}$) can be stored in the 2D binary file
\textbf{surfQfile}. Alternatively or in addition, the forcing can
be specified through a relaxation term. The SST data to which the
model surface temperatures are restored are assumed to be stored
in the 2D binary file \textbf{thetaClimFile}. The corresponding
relaxation time scale coefficient is set through the variable
\textbf{tauThetaClimRelax} (in s). The same procedure applies for
salinity with the variable names \textbf{EmPmRfile},
\textbf{saltClimFile}, and \textbf{tauSaltClimRelax} for the freshwater
flux (in m/s) and surface salinity (in ppt) data files and
relaxation time scale coefficient (in s), respectively. Also for
salinity, if the CPP key \textbf{USE\_NATURAL\_BCS} is turned on,
natural boundary conditions are applied, i.e., when computing the
surface salinity tendency, the freshwater flux is multiplied by the
model surface salinity instead of a constant salinity value.
|
|
|
|
|
As for the other input files, the precision with which to read the
data is controlled by the variable \textbf{readBinaryPrec}.
Time-dependent, periodic forcing can be applied as well following
the same procedure used for the wind forcing data (see above).
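
A sketch combining flux and relaxation forcing (the file names are
hypothetical; the relaxation time scales shown correspond to 60 and
90 days):
\begin{verbatim}
 &PARM03
 tauThetaClimRelax=5184000.,
 tauSaltClimRelax=7776000.,
 &
 &PARM05
 surfQfile='Qnet.bin',
 EmPmRfile='EmPmR.bin',
 thetaClimFile='SST.bin',
 saltClimFile='SSS.bin',
 &
\end{verbatim}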
|
|
|
|
|
\item[dissipation] \

Lateral eddy diffusivities for temperature and salinity/specific
humidity are specified through the variables \textbf{diffKhT} and
\textbf{diffKhS} (in m$^{2}$/s). Vertical eddy diffusivities are
specified through the variables \textbf{diffKzT} and
\textbf{diffKzS} (in m$^{2}$/s) for the ocean and \textbf{diffKpT}
and \textbf{diffKpS} (in Pa$^{2}$/s) for the atmosphere. The
vertical diffusive fluxes can be computed implicitly by setting the
logical variable \textbf{implicitDiffusion} to \texttt{'.TRUE.'}.
In addition, biharmonic diffusivities can be specified as well
through the coefficients \textbf{diffK4T} and \textbf{diffK4S} (in
m$^{4}$/s). Note that the cosine power scaling (specified through
\textbf{cosPower}---see the momentum equations section) is applied to
the tracer diffusivities (Laplacian and biharmonic) as well. The
Gent and McWilliams parameterization for oceanic tracers is
described in the package section. Finally, note that tracers can
also be subject to Fourier and Shapiro filtering (see the
corresponding section on these filters).
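
An illustrative set of tracer mixing coefficients (typical
coarse-resolution ocean values, not defaults):
\begin{verbatim}
 &PARM01
 diffKhT=1.E3,
 diffKhS=1.E3,
 diffKzT=3.E-5,
 diffKzS=3.E-5,
 implicitDiffusion=.TRUE.,
 &
\end{verbatim}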
|
|
|
|
|
\item[ocean convection] \

Two options are available to parameterize ocean convection. One is
to use the convective adjustment scheme. In this case, you need to
set the variable \textbf{cadjFreq}, which represents the frequency
(in s) with which the adjustment algorithm is called, to a non-zero
value (if set to a negative value by the user, the model will set it
to the tracer time step). The other option is to parameterize
convection with implicit vertical diffusion. To do this, set the
logical variable \textbf{implicitDiffusion} to \texttt{'.TRUE.'}
and the real variable \textbf{ivdc\_kappa} to the value (in m$^{2}$/s)
you wish the tracer vertical diffusivities to have when mixing
tracers vertically due to static instabilities. Note that
\textbf{cadjFreq} and \textbf{ivdc\_kappa} cannot both have non-zero
values.
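
The implicit-vertical-diffusion option might be sketched as follows
(the diffusivity value is illustrative; remember that
\textbf{cadjFreq} must then remain zero):
\begin{verbatim}
 &PARM01
 ivdc_kappa=10.,
 implicitDiffusion=.TRUE.,
 &
\end{verbatim}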
|
|
|
|
|
\end{description}
|
|
|
|
|
\subsection{Simulation controls} |
|
|
|
|
|
The model ''clock'' is defined by the variable \textbf{deltaTClock} |
|
|
(in s) which determines the IO frequencies and is used in tagging |
|
|
output. Typically, you will set it to the tracer time step for |
|
|
accelerated runs (otherwise it is simply set to the default time step |
|
|
\textbf{deltaT}). Frequency of checkpointing and dumping of the model |
|
|
state are referenced to this clock (see below). |
|
|
|
|
|
\begin{description}
\item[run duration] \

The beginning of a simulation is set by specifying a start time (in
s) through the real variable \textbf{startTime} or by specifying an
initial iteration number through the integer variable
\textbf{nIter0}. If these variables are set to nonzero values, the
model will look for a ``pickup'' file \textit{pickup.0000nIter0} to
restart the integration. The end of a simulation is set through the
real variable \textbf{endTime} (in s). Alternatively, you can instead
specify the number of time steps to execute through the
integer variable \textbf{nTimeSteps}.
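
For example, a 100-step run starting from rest could be specified as:
\begin{verbatim}
 &PARM03
 nIter0=0,
 nTimeSteps=100,
 &
\end{verbatim}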
|
|
|
|
|
\item[frequency of output] \

Real variables defining the frequencies (in s) with which output files
are written on disk need to be set up. \textbf{dumpFreq} controls
the frequency with which the instantaneous state of the model is
saved. \textbf{chkPtFreq} and \textbf{pchkPtFreq} control the output
frequency of rolling and permanent checkpoint files, respectively.
See section 1.5.1 Output files for the definition of model state and
checkpoint files. In addition, time-averaged fields can be written
out by setting the variable \textbf{taveFreq} (in s). The precision
with which to write the binary data is controlled by the integer
variable \textbf{writeBinaryPrec} (set it to \texttt{32} or
\texttt{64}).
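
An illustrative output setup, with daily dumps and averages, half-day
rolling checkpoints, and monthly permanent checkpoints:
\begin{verbatim}
 &PARM03
 dumpFreq=86400.,
 chkPtFreq=43200.,
 pchkPtFreq=2592000.,
 taveFreq=86400.,
 &
 &PARM01
 writeBinaryPrec=64,
 &
\end{verbatim}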
|
|
|
|
|
\end{description}
|
|
|
|
|
|
|
|
%%% Local Variables:
%%% mode: latex
%%% TeX-master: t
%%% End:
|