--- manual/s_getstarted/text/getting_started.tex 2004/01/29 19:22:35 1.18 +++ manual/s_getstarted/text/getting_started.tex 2004/10/14 14:24:28 1.27 @@ -1,4 +1,4 @@ -% $Header: /home/ubuntu/mnt/e9_copy/manual/s_getstarted/text/getting_started.tex,v 1.18 2004/01/29 19:22:35 edhill Exp $ +% $Header: /home/ubuntu/mnt/e9_copy/manual/s_getstarted/text/getting_started.tex,v 1.27 2004/10/14 14:24:28 cnh Exp $ % $Name: $ %\section{Getting started} @@ -79,6 +79,9 @@ \end{enumerate} +\subsection{Method 1 - Checkout from CVS} +\label{sect:cvs_checkout} + If CVS is available on your system, we strongly encourage you to use it. CVS provides an efficient and elegant way of organizing your code and keeping track of your changes. If CVS is not available on your machine, you can also @@ -93,7 +96,7 @@ \begin{verbatim} % export CVSROOT=':pserver:cvsanon@mitgcm.org:/u/gcmpack' \end{verbatim} -in your .profile or .bashrc file. +in your \texttt{.profile} or \texttt{.bashrc} file. To get MITgcm through CVS, first register with the MITgcm CVS server @@ -121,6 +124,28 @@ \end{verbatim} \begin{rawhtml} \end{rawhtml} +As a convenience, the MITgcm CVS server contains aliases, which are +named subsets of the codebase. These aliases can be especially +helpful when used over slow internet connections or on machines with +restricted storage space. Table \ref{tab:cvsModules} contains a list +of CVS aliases. +\begin{table}[htb] + \centering + \begin{tabular}[htb]{|lp{3.25in}|}\hline + \textbf{Alias Name} & \textbf{Information (directories) Contained} \\\hline + \texttt{MITgcm\_code} & Only the source code -- none of the verification examples. \\ + \texttt{MITgcm\_verif\_basic} + & Source code plus a small set of the verification examples + (\texttt{global\_ocean.90x40x15}, \texttt{aim.5l\_cs}, \texttt{hs94.128x64x5}, + \texttt{front\_relax}, and \texttt{plume\_on\_slope}). \\ + \texttt{MITgcm\_verif\_atmos} & Source code plus all of the atmospheric examples. \\ + \texttt{MITgcm\_verif\_ocean} & Source code plus all of the oceanic examples. \\ + \texttt{MITgcm\_verif\_all} & Source code plus all of the + verification examples. \\\hline + \end{tabular} + \caption{MITgcm CVS Modules} + \label{tab:cvsModules} +\end{table} The checkout process creates a directory called \textit{MITgcm}. If the directory \textit{MITgcm} exists this command updates your code @@ -134,9 +159,17 @@ here \begin{rawhtml} \end{rawhtml} . +It is important to note that the CVS aliases in Table +\ref{tab:cvsModules} cannot be used in conjunction with the CVS +\texttt{-d DIRNAME} option. However, the \texttt{MITgcm} directories +they create can be renamed after the check-out completes: +\begin{verbatim} + % cvs co MITgcm_verif_basic + % mv MITgcm MITgcm_verif_basic +\end{verbatim} -\paragraph*{Conventional download method} +\subsection{Method 2 - Tar file download} \label{sect:conventionalDownload} If you do not have CVS on your system, you can download the model as a @@ -156,7 +189,7 @@ \begin{rawhtml} \end{rawhtml} mailing list. -\paragraph*{Upgrading from an earlier version} +\subsubsection{Upgrading from an earlier version} If you already have an earlier version of the code you can ``upgrade'' your copy instead of downloading the entire repository again.
First, @@ -291,7 +324,7 @@ \end{itemize} -\section{Example experiments} +\section[MITgcm Example Experiments]{Example experiments} \label{sect:modelExamples} %% a set of twenty-four pre-configured numerical experiments @@ -451,7 +484,7 @@ Once you have chosen the example you want to run, you are ready to compile the code. -\section{Building the code} +\section[Building MITgcm]{Building the code} \label{sect:buildingCode} To compile the code, we use the {\em make} program. This uses a file @@ -633,18 +666,17 @@ \end{verbatim} - -\subsection{Using \textit{genmake2}} +\subsection{Using \texttt{genmake2}} \label{sect:genmake} To compile the code, first use the program \texttt{genmake2} (located -in the \textit{tools} directory) to generate a Makefile. +in the \texttt{tools} directory) to generate a Makefile. \texttt{genmake2} is a shell script written to work with all ``sh''--compatible shells including bash v1, bash v2, and Bourne. Internally, \texttt{genmake2} determines the locations of needed files, the compiler, compiler options, libraries, and Unix tools. It -relies upon a number of ``optfiles'' located in the {\em - tools/build\_options} directory. +relies upon a number of ``optfiles'' located in the +\texttt{tools/build\_options} directory. The purpose of the optfiles is to provide all the compilation options for particular ``platforms'' (where ``platform'' roughly means the @@ -739,6 +771,21 @@ the user's path. When these three items have been identified, genmake2 will try to find an optfile that has a matching name. +\item[\texttt{--pdefault='PKG1 PKG2 PKG3 ...'}] specifies the default + set of packages to be used. The normal order of precedence for + packages is as follows: + \begin{enumerate} + \item If available, the command line (\texttt{--pdefault}) settings + over-rule any others. + + \item Next, \texttt{genmake2} will look for a file named + ``\texttt{packages.conf}'' in the local directory or in any of the + directories specified with the \texttt{--mods} option. + + \item Finally, if neither of the above are available, + \texttt{genmake2} will use the \texttt{/pkg/pkg\_default} file. + \end{enumerate} + \item[\texttt{--pdepend=/PATH/FILENAME}] specifies the dependency file used for packages. @@ -751,12 +798,6 @@ assumed that the two packages are compatible and will function either with or without each other. -\item[\texttt{--pdefault='PKG1 PKG2 PKG3 ...'}] specifies the default - set of packages to be used. - - If not set, the default package list will be read from {\em - pkg/pkg\_default} - \item[\texttt{--adof=/path/to/file}] specifies the "adjoint" or automatic differentiation options file to be used. The file is analogous to the ``optfile'' defined above but it specifies @@ -786,22 +827,135 @@ ``-standarddirs'' option) \end{itemize} +\item[\texttt{--mpi}] This option enables certain MPI features (using + CPP \texttt{\#define}s) within the code and is necessary for MPI + builds (see Section \ref{sect:mpi-build}). + \item[\texttt{--make=/path/to/gmake}] Due to the poor handling of soft-links and other bugs common with the \texttt{make} versions provided by commercial Unix vendors, GNU \texttt{make} (sometimes called \texttt{gmake}) should be preferred. This option provides a means for specifying the make executable to be used. + +\item[\texttt{--bash=/path/to/sh}] On some (usually older UNIX) + machines, the ``bash'' shell is unavailable. To run on these + systems, \texttt{genmake2} can be invoked using an ``sh'' (that is, + a Bourne, POSIX, or compatible) shell. 
The syntax in these + circumstances is: + \begin{center} + \texttt{\% /bin/sh genmake2 -bash=/bin/sh [...options...]} + \end{center} + where \texttt{/bin/sh} can be replaced with the full path and name + of the desired shell. \end{description} +\subsection{Building with MPI} +\label{sect:mpi-build} + +Building MITgcm to use MPI libraries can be complicated due to the +variety of different MPI implementations available, their dependencies +or interactions with different compilers, and their often ad-hoc +locations within file systems. For these reasons, it's generally a +good idea to start by finding and reading the documentation for your +machine(s) and, if necessary, seeking help from your local systems +administrator. -\section{Running the model} +The steps for building MITgcm with MPI support are: +\begin{enumerate} + +\item Determine the locations of your MPI-enabled compiler and/or MPI + libraries and put them into an options file as described in Section + \ref{sect:genmake}. One can start with one of the examples in: + \begin{rawhtml} + \end{rawhtml} + \begin{center} + \texttt{MITgcm/tools/build\_options/} + \end{center} + \begin{rawhtml} \end{rawhtml} + such as \texttt{linux\_ia32\_g77+mpi\_cg01} or + \texttt{linux\_ia64\_efc+mpi} and then edit it to suit the machine at + hand. You may need help from your user guide or local systems + administrator to determine the exact location of the MPI libraries. + If libraries are not installed, MPI implementations and related + tools are available including: + \begin{itemize} + \item \begin{rawhtml} + \end{rawhtml} + MPICH + \begin{rawhtml} \end{rawhtml} + + \item \begin{rawhtml} + \end{rawhtml} + LAM/MPI + \begin{rawhtml} \end{rawhtml} + + \item \begin{rawhtml} + \end{rawhtml} + MPIexec + \begin{rawhtml} \end{rawhtml} + \end{itemize} + +\item Build the code with the \texttt{genmake2} \texttt{-mpi} option + (see Section \ref{sect:genmake}) using commands such as: +{\footnotesize \begin{verbatim} + % ../../../tools/genmake2 -mods=../code -mpi -of=YOUR_OPTFILE + % make depend + % make +\end{verbatim} } + +\item Run the code with the appropriate MPI ``run'' or ``exec'' + program provided with your particular implementation of MPI. + Typical MPI packages such as MPICH will use something like: +\begin{verbatim} + % mpirun -np 4 -machinefile mf ./mitgcmuv +\end{verbatim} + Slightly more complicated scripts may be needed for many machines + since execution of the code may be controlled by both the MPI + library and a job scheduling and queueing system such as PBS, + LoadLeveller, Condor, or any of a number of similar tools.
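+For illustration, a minimal wrapper script for a PBS-style batch
+system might look like the following sketch (this script is not part
+of the MITgcm distribution; the job name, node request, and processor
+count are assumptions that must be adapted to your site and MPI
+installation):
+\begin{verbatim}
+  #!/bin/sh
+  # hypothetical PBS job script wrapping the mpirun command above
+  #PBS -N mitgcmuv
+  #PBS -l nodes=2:ppn=2
+  # run from the directory in which the job was submitted
+  cd $PBS_O_WORKDIR
+  # PBS lists the allocated hosts in the file named by $PBS_NODEFILE
+  mpirun -np 4 -machinefile $PBS_NODEFILE ./mitgcmuv
+\end{verbatim}
+A script along these lines would typically be submitted with
+\texttt{qsub} rather than executed directly.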
A few + example scripts (those used for our \begin{rawhtml} \end{rawhtml}regular + verification runs\begin{rawhtml} \end{rawhtml}) are available + at: + \begin{rawhtml} + \end{rawhtml} + {\footnotesize \tt + http://mitgcm.org/cgi-bin/viewcvs.cgi/MITgcm\_contrib/test\_scripts/ } + \begin{rawhtml} \end{rawhtml} + +\end{enumerate} + +An example of the above process on the MITgcm cluster (``cg01'') using +the GNU g77 compiler and the mpich MPI library is: + +{\footnotesize \begin{verbatim} + % cd MITgcm/verification/exp5 + % mkdir build + % cd build + % ../../../tools/genmake2 -mpi -mods=../code \ + -of=../../../tools/build_options/linux_ia32_g77+mpi_cg01 + % make depend + % make + % cd ../input + % /usr/local/pkg/mpi/mpi-1.2.4..8a-gm-1.5/g77/bin/mpirun.ch_gm \ + -machinefile mf --gm-kill 5 -v -np 2 ../build/mitgcmuv +\end{verbatim} } + + + +\section[Running MITgcm]{Running the model in prognostic mode} \label{sect:runModel} -If compilation finished succesfuully (section \ref{sect:buildModel}) -then an executable called {\em mitgcmuv} will now exist in the local -directory. +If compilation finished successfully (section \ref{sect:buildingCode}) +then an executable called \texttt{mitgcmuv} will now exist in the +local directory. To run the model as a single process (ie. not in parallel) simply type: @@ -905,422 +1059,3 @@ >> for n=1:11; imagesc(eta(:,:,n)');axis ij;colorbar;pause(.5);end \end{verbatim} -\section{Doing it yourself: customizing the code} - -When you are ready to run the model in the configuration you want, the -easiest thing is to use and adapt the setup of the case studies -experiment (described previously) that is the closest to your -configuration. Then, the amount of setup will be minimized. In this -section, we focus on the setup relative to the ``numerical model'' -part of the code (the setup relative to the ``execution environment'' -part is covered in the parallel implementation section) and on the -variables and parameters that you are likely to change. - -\subsection{Configuration and setup} - -The CPP keys relative to the ``numerical model'' part of the code are -all defined and set in the file \textit{CPP\_OPTIONS.h }in the -directory \textit{ model/inc }or in one of the \textit{code -}directories of the case study experiments under -\textit{verification.} The model parameters are defined and declared -in the file \textit{model/inc/PARAMS.h }and their default values are -set in the routine \textit{model/src/set\_defaults.F. }The default -values can be modified in the namelist file \textit{data }which needs -to be located in the directory where you will run the model. The -parameters are initialized in the routine -\textit{model/src/ini\_parms.F}. Look at this routine to see in what -part of the namelist the parameters are located. - -In what follows the parameters are grouped into categories related to -the computational domain, the equations solved in the model, and the -simulation controls. - -\subsection{Computational domain, geometry and time-discretization} - -\begin{description} -\item[dimensions] \ - - The number of points in the x, y, and r directions are represented - by the variables \textbf{sNx}, \textbf{sNy} and \textbf{Nr} - respectively which are declared and set in the file - \textit{model/inc/SIZE.h}. (Again, this assumes a mono-processor - calculation. For multiprocessor calculations see the section on - parallel implementation.)
- -\item[grid] \ - - Three different grids are available: cartesian, spherical polar, and - curvilinear (which includes the cubed sphere). The grid is set - through the logical variables \textbf{usingCartesianGrid}, - \textbf{usingSphericalPolarGrid}, and \textbf{usingCurvilinearGrid}. - In the case of spherical and curvilinear grids, the southern - boundary is defined through the variable \textbf{phiMin} which - corresponds to the latitude of the southern most cell face (in - degrees). The resolution along the x and y directions is controlled - by the 1D arrays \textbf{delx} and \textbf{dely} (in meters in the - case of a cartesian grid, in degrees otherwise). The vertical grid - spacing is set through the 1D array \textbf{delz} for the ocean (in - meters) or \textbf{delp} for the atmosphere (in Pa). The variable - \textbf{Ro\_SeaLevel} represents the standard position of Sea-Level - in ``R'' coordinate. This is typically set to 0m for the ocean - (default value) and 10$^{5}$Pa for the atmosphere. For the - atmosphere, also set the logical variable \textbf{groundAtK1} to - \texttt{'.TRUE.'} which puts the first level (k=1) at the lower - boundary (ground). - - For the cartesian grid case, the Coriolis parameter $f$ is set - through the variables \textbf{f0} and \textbf{beta} which correspond - to the reference Coriolis parameter (in s$^{-1}$) and - $\frac{\partial f}{ \partial y}$(in m$^{-1}$s$^{-1}$) respectively. - If \textbf{beta } is set to a nonzero value, \textbf{f0} is the - value of $f$ at the southern edge of the domain. - -\item[topography - full and partial cells] \ - - The domain bathymetry is read from a file that contains a 2D (x,y) - map of depths (in m) for the ocean or pressures (in Pa) for the - atmosphere. The file name is represented by the variable - \textbf{bathyFile}. The file is assumed to contain binary numbers - giving the depth (pressure) of the model at each grid cell, ordered - with the x coordinate varying fastest. The points are ordered from - low coordinate to high coordinate for both axes. The model code - applies without modification to enclosed, periodic, and double - periodic domains. Periodicity is assumed by default and is - suppressed by setting the depths to 0m for the cells at the limits - of the computational domain (note: not sure this is the case for the - atmosphere). The precision with which to read the binary data is - controlled by the integer variable \textbf{readBinaryPrec} which can - take the value \texttt{32} (single precision) or \texttt{64} (double - precision). See the matlab program \textit{gendata.m} in the - \textit{input} directories under \textit{verification} to see how - the bathymetry files are generated for the case study experiments. - - To use the partial cell capability, the variable \textbf{hFacMin} - needs to be set to a value between 0 and 1 (it is set to 1 by - default) corresponding to the minimum fractional size of the cell. - For example if the bottom cell is 500m thick and \textbf{hFacMin} is - set to 0.1, the actual thickness of the cell (i.e. used in the code) - can cover a range of discrete values 50m apart from 50m to 500m - depending on the value of the bottom depth (in \textbf{bathyFile}) - at this point. - - Note that the bottom depths (or pressures) need not coincide with - the models levels as deduced from \textbf{delz} or \textbf{delp}. - The model will interpolate the numbers in \textbf{bathyFile} so that - they match the levels obtained from \textbf{delz} or \textbf{delp} - and \textbf{hFacMin}. 
- - (Note: the atmospheric case is a bit more complicated than what is - written here I think. To come soon...) - -\item[time-discretization] \ - - The time steps are set through the real variables \textbf{deltaTMom} - and \textbf{deltaTtracer} (in s) which represent the time step for - the momentum and tracer equations, respectively. For synchronous - integrations, simply set the two variables to the same value (or you - can prescribe one time step only through the variable - \textbf{deltaT}). The Adams-Bashforth stabilizing parameter is set - through the variable \textbf{abEps} (dimensionless). The stagger - baroclinic time stepping can be activated by setting the logical - variable \textbf{staggerTimeStep} to \texttt{'.TRUE.'}. - -\end{description} - - -\subsection{Equation of state} - -First, because the model equations are written in terms of -perturbations, a reference thermodynamic state needs to be specified. -This is done through the 1D arrays \textbf{tRef} and \textbf{sRef}. -\textbf{tRef} specifies the reference potential temperature profile -(in $^{o}$C for the ocean and $^{o}$K for the atmosphere) starting -from the level k=1. Similarly, \textbf{sRef} specifies the reference -salinity profile (in ppt) for the ocean or the reference specific -humidity profile (in g/kg) for the atmosphere. - -The form of the equation of state is controlled by the character -variables \textbf{buoyancyRelation} and \textbf{eosType}. -\textbf{buoyancyRelation} is set to \texttt{'OCEANIC'} by default and -needs to be set to \texttt{'ATMOSPHERIC'} for atmosphere simulations. -In this case, \textbf{eosType} must be set to \texttt{'IDEALGAS'}. -For the ocean, two forms of the equation of state are available: -linear (set \textbf{eosType} to \texttt{'LINEAR'}) and a polynomial -approximation to the full nonlinear equation ( set \textbf{eosType} to -\texttt{'POLYNOMIAL'}). In the linear case, you need to specify the -thermal and haline expansion coefficients represented by the variables -\textbf{tAlpha} (in K$^{-1}$) and \textbf{sBeta} (in ppt$^{-1}$). For -the nonlinear case, you need to generate a file of polynomial -coefficients called \textit{POLY3.COEFFS}. To do this, use the program -\textit{utils/knudsen2/knudsen2.f} under the model tree (a Makefile is -available in the same directory and you will need to edit the number -and the values of the vertical levels in \textit{knudsen2.f} so that -they match those of your configuration). - -There there are also higher polynomials for the equation of state: -\begin{description} -\item[\texttt{'UNESCO'}:] The UNESCO equation of state formula of - Fofonoff and Millard \cite{fofonoff83}. This equation of state - assumes in-situ temperature, which is not a model variable; {\em its - use is therefore discouraged, and it is only listed for - completeness}. -\item[\texttt{'JMD95Z'}:] A modified UNESCO formula by Jackett and - McDougall \cite{jackett95}, which uses the model variable potential - temperature as input. The \texttt{'Z'} indicates that this equation - of state uses a horizontally and temporally constant pressure - $p_{0}=-g\rho_{0}z$. -\item[\texttt{'JMD95P'}:] A modified UNESCO formula by Jackett and - McDougall \cite{jackett95}, which uses the model variable potential - temperature as input. The \texttt{'P'} indicates that this equation - of state uses the actual hydrostatic pressure of the last time - step. Lagging the pressure in this way requires an additional pickup - file for restarts. 
-\item[\texttt{'MDJWF'}:] The new, more accurate and less expensive - equation of state by McDougall et~al. \cite{mcdougall03}. It also - requires lagging the pressure and therefore an additional pickup - file for restarts. -\end{description} -For none of these options an reference profile of temperature or -salinity is required. - -\subsection{Momentum equations} - -In this section, we only focus for now on the parameters that you are -likely to change, i.e. the ones relative to forcing and dissipation -for example. The details relevant to the vector-invariant form of the -equations and the various advection schemes are not covered for the -moment. We assume that you use the standard form of the momentum -equations (i.e. the flux-form) with the default advection scheme. -Also, there are a few logical variables that allow you to turn on/off -various terms in the momentum equation. These variables are called -\textbf{momViscosity, momAdvection, momForcing, useCoriolis, - momPressureForcing, momStepping} and \textbf{metricTerms }and are -assumed to be set to \texttt{'.TRUE.'} here. Look at the file -\textit{model/inc/PARAMS.h }for a precise definition of these -variables. - -\begin{description} -\item[initialization] \ - - The velocity components are initialized to 0 unless the simulation - is starting from a pickup file (see section on simulation control - parameters). - -\item[forcing] \ - - This section only applies to the ocean. You need to generate - wind-stress data into two files \textbf{zonalWindFile} and - \textbf{meridWindFile} corresponding to the zonal and meridional - components of the wind stress, respectively (if you want the stress - to be along the direction of only one of the model horizontal axes, - you only need to generate one file). The format of the files is - similar to the bathymetry file. The zonal (meridional) stress data - are assumed to be in Pa and located at U-points (V-points). As for - the bathymetry, the precision with which to read the binary data is - controlled by the variable \textbf{readBinaryPrec}. See the matlab - program \textit{gendata.m} in the \textit{input} directories under - \textit{verification} to see how simple analytical wind forcing data - are generated for the case study experiments. - - There is also the possibility of prescribing time-dependent periodic - forcing. To do this, concatenate the successive time records into a - single file (for each stress component) ordered in a (x,y,t) fashion - and set the following variables: \textbf{periodicExternalForcing }to - \texttt{'.TRUE.'}, \textbf{externForcingPeriod }to the period (in s) - of which the forcing varies (typically 1 month), and - \textbf{externForcingCycle} to the repeat time (in s) of the forcing - (typically 1 year -- note: \textbf{ externForcingCycle} must be a - multiple of \textbf{externForcingPeriod}). With these variables set - up, the model will interpolate the forcing linearly at each - iteration. - -\item[dissipation] \ - - The lateral eddy viscosity coefficient is specified through the - variable \textbf{viscAh} (in m$^{2}$s$^{-1}$). The vertical eddy - viscosity coefficient is specified through the variable - \textbf{viscAz} (in m$^{2}$s$^{-1}$) for the ocean and - \textbf{viscAp} (in Pa$^{2}$s$^{-1}$) for the atmosphere. The - vertical diffusive fluxes can be computed implicitly by setting the - logical variable \textbf{implicitViscosity }to \texttt{'.TRUE.'}. 
- In addition, biharmonic mixing can be added as well through the - variable \textbf{viscA4} (in m$^{4}$s$^{-1}$). On a spherical polar - grid, you might also need to set the variable \textbf{cosPower} - which is set to 0 by default and which represents the power of - cosine of latitude to multiply viscosity. Slip or no-slip conditions - at lateral and bottom boundaries are specified through the logical - variables \textbf{no\_slip\_sides} and \textbf{no\_slip\_bottom}. If - set to \texttt{'.FALSE.'}, free-slip boundary conditions are - applied. If no-slip boundary conditions are applied at the bottom, a - bottom drag can be applied as well. Two forms are available: linear - (set the variable \textbf{bottomDragLinear} in s$ ^{-1}$) and - quadratic (set the variable \textbf{bottomDragQuadratic} in - m$^{-1}$). - - The Fourier and Shapiro filters are described elsewhere. - -\item[C-D scheme] \ - - If you run at a sufficiently coarse resolution, you will need the - C-D scheme for the computation of the Coriolis terms. The - variable\textbf{\ tauCD}, which represents the C-D scheme coupling - timescale (in s) needs to be set. - -\item[calculation of pressure/geopotential] \ - - First, to run a non-hydrostatic ocean simulation, set the logical - variable \textbf{nonHydrostatic} to \texttt{'.TRUE.'}. The pressure - field is then inverted through a 3D elliptic equation. (Note: this - capability is not available for the atmosphere yet.) By default, a - hydrostatic simulation is assumed and a 2D elliptic equation is used - to invert the pressure field. The parameters controlling the - behaviour of the elliptic solvers are the variables - \textbf{cg2dMaxIters} and \textbf{cg2dTargetResidual } for - the 2D case and \textbf{cg3dMaxIters} and - \textbf{cg3dTargetResidual} for the 3D case. You probably won't need to - alter the default values (are we sure of this?). - - For the calculation of the surface pressure (for the ocean) or - surface geopotential (for the atmosphere) you need to set the - logical variables \textbf{rigidLid} and \textbf{implicitFreeSurface} - (set one to \texttt{'.TRUE.'} and the other to \texttt{'.FALSE.'} - depending on how you want to deal with the ocean upper or atmosphere - lower boundary). - -\end{description} - -\subsection{Tracer equations} - -This section covers the tracer equations i.e. the potential -temperature equation and the salinity (for the ocean) or specific -humidity (for the atmosphere) equation. As for the momentum equations, -we only describe for now the parameters that you are likely to change. -The logical variables \textbf{tempDiffusion} \textbf{tempAdvection} -\textbf{tempForcing}, and \textbf{tempStepping} allow you to turn -on/off terms in the temperature equation (same thing for salinity or -specific humidity with variables \textbf{saltDiffusion}, -\textbf{saltAdvection} etc.). These variables are all assumed here to -be set to \texttt{'.TRUE.'}. Look at file \textit{model/inc/PARAMS.h} -for a precise definition. - -\begin{description} -\item[initialization] \ - - The initial tracer data can be contained in the binary files - \textbf{hydrogThetaFile} and \textbf{hydrogSaltFile}. These files - should contain 3D data ordered in an (x,y,r) fashion with k=1 as the - first vertical level. If no file names are provided, the tracers - are then initialized with the values of \textbf{tRef} and - \textbf{sRef} mentioned above (in the equation of state section). In - this case, the initial tracer data are uniform in x and y for each - depth level. 
- -\item[forcing] \ - - This part is more relevant for the ocean, the procedure for the - atmosphere not being completely stabilized at the moment. - - A combination of fluxes data and relaxation terms can be used for - driving the tracer equations. For potential temperature, heat flux - data (in W/m$ ^{2}$) can be stored in the 2D binary file - \textbf{surfQfile}. Alternatively or in addition, the forcing can - be specified through a relaxation term. The SST data to which the - model surface temperatures are restored to are supposed to be stored - in the 2D binary file \textbf{thetaClimFile}. The corresponding - relaxation time scale coefficient is set through the variable - \textbf{tauThetaClimRelax} (in s). The same procedure applies for - salinity with the variable names \textbf{EmPmRfile}, - \textbf{saltClimFile}, and \textbf{tauSaltClimRelax} for freshwater - flux (in m/s) and surface salinity (in ppt) data files and - relaxation time scale coefficient (in s), respectively. Also for - salinity, if the CPP key \textbf{USE\_NATURAL\_BCS} is turned on, - natural boundary conditions are applied i.e. when computing the - surface salinity tendency, the freshwater flux is multiplied by the - model surface salinity instead of a constant salinity value. - - As for the other input files, the precision with which to read the - data is controlled by the variable \textbf{readBinaryPrec}. - Time-dependent, periodic forcing can be applied as well following - the same procedure used for the wind forcing data (see above). - -\item[dissipation] \ - - Lateral eddy diffusivities for temperature and salinity/specific - humidity are specified through the variables \textbf{diffKhT} and - \textbf{diffKhS} (in m$^{2}$/s). Vertical eddy diffusivities are - specified through the variables \textbf{diffKzT} and - \textbf{diffKzS} (in m$^{2}$/s) for the ocean and \textbf{diffKpT - }and \textbf{diffKpS} (in Pa$^{2}$/s) for the atmosphere. The - vertical diffusive fluxes can be computed implicitly by setting the - logical variable \textbf{implicitDiffusion} to \texttt{'.TRUE.'}. - In addition, biharmonic diffusivities can be specified as well - through the coefficients \textbf{diffK4T} and \textbf{diffK4S} (in - m$^{4}$/s). Note that the cosine power scaling (specified through - \textbf{cosPower}---see the momentum equations section) is applied to - the tracer diffusivities (Laplacian and biharmonic) as well. The - Gent and McWilliams parameterization for oceanic tracers is - described in the package section. Finally, note that tracers can be - also subject to Fourier and Shapiro filtering (see the corresponding - section on these filters). - -\item[ocean convection] \ - - Two options are available to parameterize ocean convection: one is - to use the convective adjustment scheme. In this case, you need to - set the variable \textbf{cadjFreq}, which represents the frequency - (in s) with which the adjustment algorithm is called, to a non-zero - value (if set to a negative value by the user, the model will set it - to the tracer time step). The other option is to parameterize - convection with implicit vertical diffusion. To do this, set the - logical variable \textbf{implicitDiffusion} to \texttt{'.TRUE.'} - and the real variable \textbf{ivdc\_kappa} to a value (in m$^{2}$/s) - you wish the tracer vertical diffusivities to have when mixing - tracers vertically due to static instabilities. Note that - \textbf{cadjFreq} and \textbf{ivdc\_kappa}can not both have non-zero - value. 
- -\end{description} - -\subsection{Simulation controls} - -The model ''clock'' is defined by the variable \textbf{deltaTClock} -(in s) which determines the IO frequencies and is used in tagging -output. Typically, you will set it to the tracer time step for -accelerated runs (otherwise it is simply set to the default time step -\textbf{deltaT}). Frequency of checkpointing and dumping of the model -state are referenced to this clock (see below). - -\begin{description} -\item[run duration] \ - - The beginning of a simulation is set by specifying a start time (in - s) through the real variable \textbf{startTime} or by specifying an - initial iteration number through the integer variable - \textbf{nIter0}. If these variables are set to nonzero values, the - model will look for a ''pickup'' file \textit{pickup.0000nIter0} to - restart the integration. The end of a simulation is set through the - real variable \textbf{endTime} (in s). Alternatively, you can - specify instead the number of time steps to execute through the - integer variable \textbf{nTimeSteps}. - -\item[frequency of output] \ - - Real variables defining frequencies (in s) with which output files - are written on disk need to be set up. \textbf{dumpFreq} controls - the frequency with which the instantaneous state of the model is - saved. \textbf{chkPtFreq} and \textbf{pchkPtFreq} control the output - frequency of rolling and permanent checkpoint files, respectively. - See section 1.5.1 Output files for the definition of model state and - checkpoint files. In addition, time-averaged fields can be written - out by setting the variable \textbf{taveFreq} (in s). The precision - with which to write the binary data is controlled by the integer - variable w\textbf{riteBinaryPrec} (set it to \texttt{32} or - \texttt{64}). - -\end{description} - - -%%% Local Variables: -%%% mode: latex -%%% TeX-master: t -%%% End: