1 % $Header: /u/gcmpack/mitgcmdoc/part3/getting_started.tex,v 1.11 2001/12/04 18:08:34 adcroft Exp $
2 % $Name: $
3
4 %\section{Getting started}
5
In this section, we describe how to use the model. We believe the best
way to familiarize yourself with the model is to run the case study
examples provided with the base version, so we first explain how to
obtain, compile, and run the code, and give a brief description of the
model directory structure and the case study examples. The examples and
the code structure are described more fully in chapters
\ref{chap:discretization} and \ref{chap:sarch}, respectively. We then
provide information on how to customize the code when you are ready to
try implementing the configuration you have in mind.
17
18 \section{Where to find information}
19 \label{sect:whereToFindInfo}
20
21 A web site is maintained for release 1 (Sealion) of MITgcm:
22 \begin{verbatim}
23 http://mitgcm.org/sealion
24 \end{verbatim}
Here you will find an on-line version of this document, a
``browsable'' copy of the code and a searchable database of the model
and site, as well as links for downloading the model and
documentation, and links to data sources and other related sites.
29
There is also a support newsgroup for the model, which you can reach by email at
\texttt{support@mitgcm.org} or browse at:
\begin{verbatim}
news://mitgcm.org/mitgcm.support
\end{verbatim}
A mail to this list will reach all the developers and be archived
on the newsgroup. A users' email list will be established at some time
in the future.
38
39 \section{Obtaining the code}
40 \label{sect:obtainingCode}
41
MITgcm can be downloaded from our system by following
the instructions below. As a courtesy we ask that you send e-mail to us at
\begin{rawhtml} <A href=mailto:support@mitgcm.org> \end{rawhtml}
support@mitgcm.org
\begin{rawhtml} </A> \end{rawhtml}
to enable us to keep track of who is using the model and in what applications.
You can download the model in two ways:
49
50 \begin{enumerate}
\item Using CVS software. CVS is a freely available source code management
tool. To use CVS you need to have the software installed. Many systems
come with CVS pre-installed; otherwise, good places to look for
the software for a particular platform are
55 \begin{rawhtml} <A href=http://www.cvshome.org/ target="idontexist"> \end{rawhtml}
56 cvshome.org
57 \begin{rawhtml} </A> \end{rawhtml}
58 and
59 \begin{rawhtml} <A href=http://www.wincvs.org/ target="idontexist"> \end{rawhtml}
60 wincvs.org
61 \begin{rawhtml} </A> \end{rawhtml}
62 .
63
64 \item Using a tar file. This method is simple and does not
65 require any special software. However, this method does not
66 provide easy support for maintenance updates.
67
68 \end{enumerate}
69
70 If CVS is available on your system, we strongly encourage you to use it. CVS
71 provides an efficient and elegant way of organizing your code and keeping
72 track of your changes. If CVS is not available on your machine, you can also
73 download a tar file.
74
75 Before you can use CVS, the following environment variable has to be set in
76 your .cshrc or .tcshrc:
77 \begin{verbatim}
78 % setenv CVSROOT :pserver:cvsanon@mitgcm.org:/u/u0/gcmpack
79 \end{verbatim}
80
To start using CVS, register with the MITgcm CVS server using the command:
82 \begin{verbatim}
83 % cvs login ( CVS password: cvsanon )
84 \end{verbatim}
85 You only need to do ``cvs login'' once.
86
To obtain the sources for release 1, type:
88 \begin{verbatim}
89 % cvs co -d directory -P -r release1_beta1 MITgcm
90 \end{verbatim}
91
This creates a directory called \textit{directory}. If \textit{directory}
already exists, this command updates your code based on the repository. Each
directory in the source tree contains a directory \textit{CVS}. This
information is required by CVS to keep track of your file versions with
respect to the repository. Don't edit the files in \textit{CVS}!
97 You can also use CVS to download code updates. More extensive
98 information on using CVS for maintaining MITgcm code can be found
99 \begin{rawhtml} <A href=http://mitgcm.org/usingcvstoget.html target="idontexist"> \end{rawhtml}
100 here
101 \begin{rawhtml} </A> \end{rawhtml}
102 .
103
104
105 \paragraph*{Conventional download method}
106 \label{sect:conventionalDownload}
107
108 If you do not have CVS on your system, you can download the model as a
109 tar file from the reference web site at:
110 \begin{rawhtml} <A href=http://mitgcm.org/download target="idontexist"> \end{rawhtml}
111 \begin{verbatim}
112 http://mitgcm.org/download/
113 \end{verbatim}
114 \begin{rawhtml} </A> \end{rawhtml}
115 The tar file still contains CVS information which we urge you not to
116 delete; even if you do not use CVS yourself the information can help
117 us if you should need to send us your copy of the code.
118
119 \paragraph*{Upgrading from an earlier version}
120
121 If you already have an earlier version of the code you can ``upgrade''
122 your copy instead of downloading the entire repository again. First,
123 ``cd'' (change directory) to the top of your working copy:
124 \begin{verbatim}
125 % cd MITgcm
126 \end{verbatim}
127 and then issue the cvs update command:
128 \begin{verbatim}
129 % cvs -q update -r release1_beta1 -d -P
130 \end{verbatim}
131 This will update the ``tag'' to ``release1\_beta1'', add any new
directories (-d) and remove any empty directories (-P). The -q option
means ``be quiet'', which reduces the number of messages you'll see in
the terminal. If you have modified the code prior to upgrading, CVS
135 will try to merge your changes with the upgrades. If there is a
136 conflict between your modifications and the upgrade, it will report
137 that file with a ``C'' in front, e.g.:
138 \begin{verbatim}
139 C model/src/ini_parms.F
140 \end{verbatim}
If the list of conflicts scrolled off the screen, you can re-issue the
cvs update command and it will report the conflicts again. Conflicts are
indicated in the code by the delimiters ``<<<<<<<'', ``======='' and
``>>>>>>>''. For example,
145 \begin{verbatim}
146 <<<<<<< ini_parms.F
147 & bottomDragLinear,myOwnBottomDragCoefficient,
148 =======
149 & bottomDragLinear,bottomDragQuadratic,
150 >>>>>>> 1.18
151 \end{verbatim}
means that you added ``myOwnBottomDragCoefficient'' to a namelist at
the same time and place that we added ``bottomDragQuadratic''. You
need to resolve this conflict; in this case, the line should be
changed to:
\begin{verbatim}
     & bottomDragLinear,bottomDragQuadratic,myOwnBottomDragCoefficient,
\end{verbatim}
and the lines containing the delimiters (``<<<<<<<'', ``======='', ``>>>>>>>'') deleted.
Unless you are making modifications which exactly parallel
developments we make, these types of conflicts should be rare.
162
163 \paragraph*{Upgrading to the current pre-release version}
164
We don't make a ``release'' for every little patch and bug fix, in
order to keep the frequency of upgrades to a minimum. However, if you
have run into a problem which ``we have already fixed in the
latest code'' and we haven't made a ``tag'' or ``release'' since that
patch, then you'll need to get the latest code:
170 \begin{verbatim}
171 % cvs -q update -A -d -P
172 \end{verbatim}
Unlike the ``check-out'' and ``update'' procedures above, there is no
``tag'' or release name. The -A option tells CVS to upgrade to the
very latest version. As a rule, we don't recommend this since you
might upgrade while we are in the process of checking in code, so
you may end up with only part of a patch. Using this method of updating
also means we can't tell what version of the code you are working
with. So please be sure you understand what you're doing.
180
181 \section{Model and directory structure}
182
The ``numerical'' model is contained within an execution environment
support wrapper. This wrapper is designed to provide a general
185 framework for grid-point models. MITgcmUV is a specific numerical
186 model that uses the framework. Under this structure the model is split
187 into execution environment support code and conventional numerical
188 model code. The execution environment support code is held under the
189 \textit{eesupp} directory. The grid point model code is held under the
190 \textit{model} directory. Code execution actually starts in the
191 \textit{eesupp} routines and not in the \textit{model} routines. For
192 this reason the top-level
193 \textit{MAIN.F} is in the \textit{eesupp/src} directory. In general,
194 end-users should not need to worry about this level. The top-level routine
195 for the numerical part of the code is in \textit{model/src/THE\_MODEL\_MAIN.F%
196 }. Here is a brief description of the directory structure of the model under
197 the root tree (a detailed description is given in section 3: Code structure).
198
199 \begin{itemize}
200 \item \textit{bin}: this directory is initially empty. It is the default
201 directory in which to compile the code.
202
\item \textit{diags}: contains the code related to time-averaged
diagnostics. It is subdivided into two subdirectories, \textit{inc} and
\textit{src}, that contain include files (*.\textit{h} files) and Fortran
subroutines (*.\textit{F} files), respectively.
207
208 \item \textit{doc}: contains brief documentation notes.
209
210 \item \textit{eesupp}: contains the execution environment source code. Also
211 subdivided into two subdirectories \textit{inc} and \textit{src}.
212
213 \item \textit{exe}: this directory is initially empty. It is the default
214 directory in which to execute the code.
215
216 \item \textit{model}: this directory contains the main source code. Also
217 subdivided into two subdirectories \textit{inc} and \textit{src}.
218
219 \item \textit{pkg}: contains the source code for the packages. Each package
220 corresponds to a subdirectory. For example, \textit{gmredi} contains the
code related to the Gent-McWilliams/Redi scheme, \textit{aim} the code
related to the atmospheric intermediate physics. The packages are described
223 in detail in section 3.
224
225 \item \textit{tools}: this directory contains various useful tools. For
226 example, \textit{genmake} is a script written in csh (C-shell) that should
227 be used to generate your makefile. The directory \textit{adjoint} contains
the makefile specific to the Tangent linear and Adjoint Model Compiler (TAMC) that
generates the adjoint code. The latter is described in detail in part V.
230
\item \textit{utils}: this directory contains various utilities. The
subdirectory \textit{knudsen2} contains code and a makefile that
compute coefficients of the polynomial approximation to the Knudsen
formula for an ocean nonlinear equation of state. The \textit{matlab}
subdirectory contains Matlab scripts for reading model output directly
into Matlab. \textit{scripts} contains C-shell post-processing
scripts for joining processor-based and tile-based model output.
238
239 \item \textit{verification}: this directory contains the model examples. See
240 section \ref{sect:modelExamples}.
241 \end{itemize}
242
243 \section{Example experiments}
244 \label{sect:modelExamples}
245
The MITgcm distribution comes with a set of twenty-four pre-configured
numerical experiments. Some of these example experiments are tests of
individual parts of the model code, but many are fully fledged numerical
simulations. A few of the examples are used for tutorial documentation
in sections \ref{sect:eg-baro} -- \ref{sect:eg-global}. The other examples
follow the same general structure as the tutorial examples; however,
they only include brief instructions in a text file called {\it README}.
253 The examples are located in subdirectories under
254 the directory \textit{verification}. Each
255 example is briefly described below.
256
257 \subsection{Full list of model examples}
258
259 \begin{enumerate}
260 \item \textit{exp0} - single layer, ocean double gyre (barotropic with
261 free-surface). This experiment is described in detail in section
262 \ref{sect:eg-baro}.
263
264 \item \textit{exp1} - Four layer, ocean double gyre. This experiment is described in detail in section
265 \ref{sect:eg-baroc}.
266
267 \item \textit{exp2} - 4x4 degree global ocean simulation with steady
268 climatological forcing. This experiment is described in detail in section
269 \ref{sect:eg-global}.
270
271 \item \textit{exp4} - Flow over a Gaussian bump in open-water or channel
272 with open boundaries.
273
\item \textit{exp5} - Inhomogeneously forced ocean convection in a doubly
275 periodic box.
276
277 \item \textit{front\_relax} - Relaxation of an ocean thermal front (test for
278 Gent/McWilliams scheme). 2D (Y-Z).
279
280 \item \textit{internal wave} - Ocean internal wave forced by open boundary
281 conditions.
282
\item \textit{natl\_box} - Eastern subtropical North Atlantic with KPP
scheme; 1-month integration.
285
\item \textit{hs94.1x64x5} - Zonally averaged atmosphere using Held and Suarez
'94 forcing.
288
289 \item \textit{hs94.128x64x5} - 3D atmosphere dynamics using Held and Suarez
290 '94 forcing.
291
292 \item \textit{hs94.cs-32x32x5} - 3D atmosphere dynamics using Held and
293 Suarez '94 forcing on the cubed sphere.
294
295 \item \textit{aim.5l\_zon-ave} - Intermediate Atmospheric physics. Global
296 Zonal Mean configuration, 1x64x5 resolution.
297
298 \item \textit{aim.5l\_XZ\_Equatorial\_Slice} - Intermediate Atmospheric
299 physics, equatorial Slice configuration.
300 2D (X-Z).
301
302 \item \textit{aim.5l\_Equatorial\_Channel} - Intermediate Atmospheric
303 physics. 3D Equatorial Channel configuration.
304
\item \textit{aim.5l\_LatLon} - Intermediate Atmospheric physics.
Global configuration, on latitude-longitude grid with 128x64x5 grid points
($2.8^\circ$ resolution).

\item \textit{adjustment.128x64x1} Barotropic adjustment
problem on latitude-longitude grid with 128x64 grid points ($2.8^\circ$ resolution).

\item \textit{adjustment.cs-32x32x1}
Barotropic adjustment
problem on cubed sphere grid with 32x32 points per face (roughly
$2.8^\circ$ resolution).
316
317 \item \textit{advect\_cs} Two-dimensional passive advection test on
cubed sphere grid.
319
320 \item \textit{advect\_xy} Two-dimensional (horizontal plane) passive advection
321 test on Cartesian grid.
322
323 \item \textit{advect\_yz} Two-dimensional (vertical plane) passive advection test on Cartesian grid.
324
325 \item \textit{carbon} Simple passive tracer experiment. Includes derivative
326 calculation. Described in detail in section \ref{sect:eg-carbon-ad}.
327
328 \item \textit{flt\_example} Example of using float package.
329
330 \item \textit{global\_ocean.90x40x15} Global circulation with
331 GM, flux boundary conditions and poles.
332
\item \textit{solid-body.cs-32x32x1} Solid body rotation test for cubed sphere
grid.
335
336 \end{enumerate}
337
338 \subsection{Directory structure of model examples}
339
340 Each example directory has the following subdirectories:
341
342 \begin{itemize}
343 \item \textit{code}: contains the code particular to the example. At a
344 minimum, this directory includes the following files:
345
346 \begin{itemize}
\item \textit{code/CPP\_EEOPTIONS.h}: declares CPP keys related to the
``execution environment'' part of the code. The default version is located
349 in \textit{eesupp/inc}.
350
\item \textit{code/CPP\_OPTIONS.h}: declares CPP keys related to the
``numerical model'' part of the code. The default version is located in
353 \textit{model/inc}.
354
\item \textit{code/SIZE.h}: declares the size of the underlying computational grid.
356 The default version is located in \textit{model/inc}.
357 \end{itemize}
358
359 In addition, other include files and subroutines might be present in \textit{%
360 code} depending on the particular experiment. See section 2 for more details.
361
362 \item \textit{input}: contains the input data files required to run the
363 example. At a minimum, the \textit{input} directory contains the following
364 files:
365
366 \begin{itemize}
367 \item \textit{input/data}: this file, written as a namelist, specifies the
368 main parameters for the experiment.
369
\item \textit{input/data.pkg}: contains parameters related to the packages
used in the experiment.
372
373 \item \textit{input/eedata}: this file contains ``execution environment''
374 data. At present, this consists of a specification of the number of threads
375 to use in $X$ and $Y$ under multithreaded execution.
376 \end{itemize}
377
In addition, you will also find in this directory the forcing and topography
files as well as the files describing the initial state of the experiment.
These vary from experiment to experiment. See section 2 for more details.
381
382 \item \textit{results}: this directory contains the output file \textit{%
383 output.txt} produced by the simulation example. This file is useful for
384 comparison with your own output when you run the experiment.
385 \end{itemize}
386
387 Once you have chosen the example you want to run, you are ready to compile
388 the code.
389
390 \section{Building the code}
391 \label{sect:buildingCode}
392
To compile the code, we use the {\em make} program. This uses a file
({\em Makefile}) that allows us to pre-process source files, specify
compiler and optimization options, and also figure out any file
dependencies. We supply a script ({\em genmake}), described in section
\ref{sect:genmake}, that automatically creates the {\em Makefile} for
you. You then need to build the dependencies and compile the code.
399
As an example, let's assume that you want to build and run experiment
\textit{verification/exp2}. There are multiple ways and places to actually
do this, but here let's build the code in
\textit{verification/exp2/input}:
404 \begin{verbatim}
405 % cd verification/exp2/input
406 \end{verbatim}
407 First, build the {\em Makefile}:
408 \begin{verbatim}
409 % ../../../tools/genmake -mods=../code
410 \end{verbatim}
The command line option tells {\em genmake} to override model source
code with any files in the directory {\em ../code/}.

Some examples provide a \textit{.genmakerc} file in the \textit{input}
directory that records the relevant \textit{genmake} options; if there is
no \textit{.genmakerc}, you have to give the options (here,
\texttt{-mods=../code}) explicitly when invoking \textit{genmake}, as above.
419
420 Next, create the dependencies:
421 \begin{verbatim}
422 % make depend
423 \end{verbatim}
This modifies {\em Makefile} by attaching a [long] list of files on
which other files depend. The purpose of this is to reduce
re-compilation if and when you start to modify the code. {\tt make
depend} also creates links from the model source to this directory.
428
429 Now compile the code:
430 \begin{verbatim}
431 % make
432 \end{verbatim}
433 The {\tt make} command creates an executable called \textit{mitgcmuv}.
434
435 Now you are ready to run the model. General instructions for doing so are
436 given in section \ref{sect:runModel}. Here, we can run the model with:
437 \begin{verbatim}
438 ./mitgcmuv > output.txt
439 \end{verbatim}
440 where we are re-directing the stream of text output to the file {\em
441 output.txt}.
442
443
444 \subsection{Building/compiling the code elsewhere}
445
In the example above (section \ref{sect:buildingCode}) we built the
executable in the {\em input} directory of the experiment for
convenience. You can also configure and compile the code in other
locations, for example on a scratch disk, without having to copy the
entire source tree. The only requirement is that you have {\tt
genmake} in your path or know the absolute path to {\tt genmake}.

The following sections outline some possible methods of organizing your
source and data.
455
\subsubsection{Building from the {\em ../code} directory}
457
458 This is just as simple as building in the {\em input/} directory:
459 \begin{verbatim}
460 % cd verification/exp2/code
461 % ../../../tools/genmake
462 % make depend
463 % make
464 \end{verbatim}
465 However, to run the model the executable ({\em mitgcmuv}) and input
466 files must be in the same place. If you only have one calculation to make:
467 \begin{verbatim}
468 % cd ../input
469 % cp ../code/mitgcmuv ./
470 % ./mitgcmuv > output.txt
471 \end{verbatim}
472 or if you will be making multiple runs with the same executable:
473 \begin{verbatim}
474 % cd ../
475 % cp -r input run1
476 % cp code/mitgcmuv run1
477 % cd run1
478 % ./mitgcmuv > output.txt
479 \end{verbatim}
480
481 \subsubsection{Building from a new directory}
482
483 Since the {\em input} directory contains input files it is often more
484 useful to keep {\em input} pristine and build in a new directory
485 within {\em verification/exp2/}:
486 \begin{verbatim}
487 % cd verification/exp2
488 % mkdir build
489 % cd build
490 % ../../../tools/genmake -mods=../code
491 % make depend
492 % make
493 \end{verbatim}
494 This builds the code exactly as before but this time you need to copy
495 either the executable or the input files or both in order to run the
496 model. For example,
497 \begin{verbatim}
498 % cp ../input/* ./
499 % ./mitgcmuv > output.txt
500 \end{verbatim}
501 or if you tend to make multiple runs with the same executable then
502 running in a new directory each time might be more appropriate:
503 \begin{verbatim}
504 % cd ../
505 % mkdir run1
506 % cp build/mitgcmuv run1/
507 % cp input/* run1/
508 % cd run1
509 % ./mitgcmuv > output.txt
510 \end{verbatim}
511
\subsubsection{Building on a scratch disk}
513
Model object files and output data can use up large amounts of disk
space, so it is often the case that you will be operating on a large
scratch disk. Assuming the model source is in {\em ~/MITgcm}, the
following commands will build the model in {\em /scratch/exp2-run1}:
518 \begin{verbatim}
519 % cd /scratch/exp2-run1
520 % ~/MITgcm/tools/genmake -rootdir=~/MITgcm -mods=~/MITgcm/verification/exp2/code
521 % make depend
522 % make
523 \end{verbatim}
524 To run the model here, you'll need the input files:
525 \begin{verbatim}
526 % cp ~/MITgcm/verification/exp2/input/* ./
527 % ./mitgcmuv > output.txt
528 \end{verbatim}
529
530 As before, you could build in one directory and make multiple runs of
531 the one experiment:
532 \begin{verbatim}
533 % cd /scratch/exp2
534 % mkdir build
535 % cd build
536 % ~/MITgcm/tools/genmake -rootdir=~/MITgcm -mods=~/MITgcm/verification/exp2/code
537 % make depend
538 % make
539 % cd ../
540 % cp -r ~/MITgcm/verification/exp2/input run2
541 % cd run2
542 % ./mitgcmuv > output.txt
543 \end{verbatim}
544
545
546
547 \subsection{\textit{genmake}}
548 \label{sect:genmake}
549
To compile the code, use the script \textit{genmake} located in the
\textit{tools} directory. \textit{genmake} generates the makefile.
It has been written so that the code can be compiled on a wide variety of
machines and systems. However, if it doesn't work the first time on your
platform, you might need to edit certain lines of \textit{genmake} in the
section containing the setups for the different machines. The file is
structured like this:
557 \begin{verbatim}
558 .
559 .
560 .
561 general instructions (machine independent)
562 .
563 .
564 .
565 - setup machine 1
566 - setup machine 2
567 - setup machine 3
568 - setup machine 4
569 etc
570 .
571 .
572 .
573 \end{verbatim}
574
575 For example, the setup corresponding to a DEC alpha machine is reproduced
576 here:
577 \begin{verbatim}
578 case OSF1+mpi:
579 echo "Configuring for DEC Alpha"
580 set CPP = ( '/usr/bin/cpp -P' )
581 set DEFINES = ( ${DEFINES} '-DTARGET_DEC -DWORDLENGTH=1' )
582 set KPP = ( 'kapf' )
583 set KPPFILES = ( 'main.F' )
584 set KFLAGS1 = ( '-scan=132 -noconc -cmp=' )
585 set FC = ( 'f77' )
586 set FFLAGS = ( '-convert big_endian -r8 -extend_source -automatic -call_shared -notransform_loops -align dcommons' )
587 set FOPTIM = ( '-O5 -fast -tune host -inline all' )
588 set NOOPTFLAGS = ( '-O0' )
589 set LIBS = ( '-lfmpi -lmpi -lkmp_osfp10 -pthread' )
590 set NOOPTFILES = ( 'barrier.F different_multiple.F external_fields_load.F')
591 set RMFILES = ( '*.p.out' )
592 breaksw
593 \end{verbatim}
594
595 Typically, these are the lines that you might need to edit to make \textit{%
596 genmake} work on your platform if it doesn't work the first time. \textit{%
597 genmake} understands several options that are described here:
598
599 \begin{itemize}
600 \item -rootdir=dir
601
602 indicates where the model root directory is relative to the directory where
603 you are compiling. This option is not needed if you compile in the \textit{%
604 bin} directory (which is the default compilation directory) or within the
605 \textit{verification} tree.
606
607 \item -mods=dir1,dir2,...
608
indicates the relative or absolute paths of the directories whose sources
should take precedence over the default versions (located in \textit{model},
\textit{eesupp},...). Typically, this option is used when running the
examples; see below.
613
614 \item -enable=pkg1,pkg2,...
615
enables the source code of packages \textit{pkg1}, \textit{pkg2},... when creating
the makefile.
618
619 \item -disable=pkg1,pkg2,...
620
disables the source code of packages \textit{pkg1}, \textit{pkg2},... when creating
the makefile.
623
624 \item -platform=machine
625
626 specifies the platform for which you want the makefile. In general, you
627 won't need this option. \textit{genmake} will select the right machine for
628 you (the one you're working on!). However, this option is useful if you have
629 a choice of several compilers on one machine and you want to use the one
630 that is not the default (ex: \texttt{pgf77} instead of \texttt{f77} under
631 Linux).
632
633 \item -mpi
634
this is used when you want to run the model in parallel processing mode
under MPI (see the section on parallel computation for more details).
637
638 \item -jam
639
640 this is used when you want to run the model in parallel processing mode
641 under jam (see section on parallel computation for more details).
642 \end{itemize}
643
644 For some of the examples, there is a file called \textit{.genmakerc} in the
645 \textit{input} directory that has the relevant \textit{genmake} options for
646 that particular example. In this way you don't need to type the options when
647 invoking \textit{genmake}.
648
649
650 \section{Running the model}
651 \label{sect:runModel}
652
If compilation finished successfully (section \ref{sect:buildingCode})
then an executable called {\em mitgcmuv} will now exist in the local
directory.
656
To run the model as a single process (i.e. not in parallel) simply
type:
659 \begin{verbatim}
660 % ./mitgcmuv
661 \end{verbatim}
The ``./'' is a safeguard to make sure you use the local executable
in case you have others that exist in your path (surely odd if you
do!). The above command will spew out many lines of text output to
your screen. This output contains details such as parameter values as
well as diagnostics such as mean kinetic energy, largest CFL number,
etc. It is worth keeping this text output with the binary output, so we
normally re-direct the {\em stdout} stream as follows:
669 \begin{verbatim}
670 % ./mitgcmuv > output.txt
671 \end{verbatim}
672
For the example experiments in {\em verification}, an example of the
output is kept in {\em results/output.txt} for comparison. You can compare
your {\em output.txt} with this one to check that the set-up works.
676
677
678
679 \subsection{Output files}
680
The model produces various output files. At a minimum, the instantaneous
``state'' of the model is written out, which consists of the following files:
683
684 \begin{itemize}
685 \item \textit{U.00000nIter} - zonal component of velocity field (m/s and $>
686 0 $ eastward).
687
688 \item \textit{V.00000nIter} - meridional component of velocity field (m/s
689 and $> 0$ northward).
690
691 \item \textit{W.00000nIter} - vertical component of velocity field (ocean:
692 m/s and $> 0$ upward, atmosphere: Pa/s and $> 0$ towards increasing pressure
693 i.e. downward).
694
\item \textit{T.00000nIter} - potential temperature (ocean: $^{\circ}$C,
atmosphere: K).
697
698 \item \textit{S.00000nIter} - ocean: salinity (psu), atmosphere: water vapor
699 (g/kg).
700
701 \item \textit{Eta.00000nIter} - ocean: surface elevation (m), atmosphere:
702 surface pressure anomaly (Pa).
703 \end{itemize}
704
The string \textit{00000nIter} consists of ten digits that specify the
iteration number at which the output is written out. For example,
\textit{U.0000000300} is the zonal velocity at iteration 300.
708
709 In addition, a ``pickup'' or ``checkpoint'' file called:
710
711 \begin{itemize}
712 \item \textit{pickup.00000nIter}
713 \end{itemize}
714
715 is written out. This file represents the state of the model in a condensed
716 form and is used for restarting the integration. If the C-D scheme is used,
717 there is an additional ``pickup'' file:
718
719 \begin{itemize}
720 \item \textit{pickup\_cd.00000nIter}
721 \end{itemize}
722
containing the D-grid velocity data, which is also needed
in order to restart the integration. Rolling checkpoint files are the same
as the pickup files but are named differently. Their names contain the string
\textit{ckptA} or \textit{ckptB} instead of \textit{00000nIter}. They can be
used to restart the model but are overwritten every other time they are
output, to save disk space during long integrations.
729
730 \subsection{Looking at the output}
731
732 All the model data are written according to a ``meta/data'' file format.
733 Each variable is associated with two files with suffix names \textit{.data}
734 and \textit{.meta}. The \textit{.data} file contains the data written in
735 binary form (big\_endian by default). The \textit{.meta} file is a
736 ``header'' file that contains information about the size and the structure
of the \textit{.data} file. This way of organizing the output is
particularly useful when running multi-processor calculations. The base
version of the model includes a few Matlab utilities to read output files
written in this format. The Matlab scripts are located in the directory
\textit{utils/matlab} under the root tree. The script \textit{rdmds.m} reads
the data. Look at the comments inside the script to see how to use it.
743
Some examples of reading and visualizing output in {\em Matlab}:
745 \begin{verbatim}
746 % matlab
747 >> H=rdmds('Depth');
748 >> contourf(H');colorbar;
749 >> title('Depth of fluid as used by model');
750
751 >> eta=rdmds('Eta',10);
752 >> imagesc(eta');axis ij;colorbar;
753 >> title('Surface height at iter=10');
754
755 >> eta=rdmds('Eta',[0:10:100]);
756 >> for n=1:11; imagesc(eta(:,:,n)');axis ij;colorbar;pause(.5);end
757 \end{verbatim}
758
759 \section{Doing it yourself: customizing the code}
760
When you are ready to run the model in the configuration you want, the
easiest approach is to use and adapt the setup of the case study experiment
(described previously) that is closest to your configuration; this
minimizes the amount of setup work. In this section, we focus on the setup
related to the ``numerical model'' part of the code (the setup related to
the ``execution environment'' part is covered in the parallel implementation
section) and on the variables and parameters that you are likely to change.
768
769 \subsection{Configuration and setup}
770
The CPP keys related to the ``numerical model'' part of the code are all
defined and set in the file \textit{CPP\_OPTIONS.h} in the directory
\textit{model/inc} or in one of the \textit{code} directories of the case
study experiments under \textit{verification}. The model parameters are
defined and declared in the file \textit{model/inc/PARAMS.h} and their
default values are set in the routine \textit{model/src/set\_defaults.F}.
The default values can be modified in the namelist file \textit{data}, which
needs to be located in the directory where you will run the model. The
parameters are initialized in the routine \textit{model/src/ini\_parms.F}.
Look at this routine to see in which part of the namelist each parameter is
located.
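As an illustration only, a run-time \textit{data} file is a set of Fortran
namelist groups. A minimal sketch might look like the following; the group
names and exact parameter names shown here should be checked against
\textit{model/src/ini\_parms.F} for your version of the code, and all values
are purely illustrative:
\begin{verbatim}
 # sketch of a run-time "data" file -- values are illustrative only
 &PARM01
 tRef=20.,15.,10.,5.,
 sRef=35.,35.,35.,35.,
 viscAh=4.E2,
 &
 &PARM03
 nIter0=0,
 nTimeSteps=100,
 deltaT=1200.,
 &
 &PARM04
 usingCartesianGrid=.TRUE.,
 &
\end{verbatim}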
782
783 In what follows the parameters are grouped into categories related to the
784 computational domain, the equations solved in the model, and the simulation
785 controls.
786
787 \subsection{Computational domain, geometry and time-discretization}
788
789 \begin{itemize}
790 \item dimensions
791 \end{itemize}
792
The number of points in the x, y, and r directions is given by the variables
\textbf{sNx}, \textbf{sNy}, and \textbf{Nr} respectively, which are declared
and set in the file \textit{model/inc/SIZE.h}. (Again, this assumes a
mono-processor calculation. For multi-processor calculations see the section
on parallel implementation.)
799
800 \begin{itemize}
801 \item grid
802 \end{itemize}
803
Three different grids are available: cartesian, spherical polar, and
curvilinear (including the cubed sphere). The grid is set through the
logical variables \textbf{usingCartesianGrid},
\textbf{usingSphericalPolarGrid}, and \textbf{usingCurvilinearGrid}. In the
case of spherical and curvilinear grids, the southern boundary is defined
through the variable \textbf{phiMin}, which corresponds to the latitude of
the southernmost cell face (in degrees). The resolution along the x and y
directions is controlled by the 1D arrays \textbf{delx} and \textbf{dely}
(in meters in the case of a cartesian grid, in degrees otherwise). The
vertical grid spacing is set through the 1D array \textbf{delz} for the
ocean (in meters) or \textbf{delp} for the atmosphere (in Pa). The variable
\textbf{Ro\_SeaLevel} represents the standard position of sea level in the
``R'' coordinate. This is typically set to 0 m for the ocean (the default
value) and $10^{5}$ Pa for the atmosphere. For the atmosphere, also set the
logical variable \textbf{groundAtK1} to '.\texttt{TRUE}.', which puts the
first level (k=1) at the lower boundary (ground).
821
For the cartesian grid case, the Coriolis parameter $f$ is set through the
variables \textbf{f0} and \textbf{beta}, which correspond
to the reference Coriolis parameter (in s$^{-1}$) and $\frac{\partial f}{%
\partial y}$ (in m$^{-1}$s$^{-1}$) respectively. If \textbf{beta}
is set to a nonzero value, \textbf{f0} is the value of $f$ at the
southern edge of the domain.
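For instance, the grid-related entries of the \textit{data} file for a simple
cartesian setup might look like the sketch below. The values are purely
illustrative, and the namelist groups in which each parameter belongs should
be verified against \textit{model/src/ini\_parms.F}:
\begin{verbatim}
 # illustrative domain and grid settings
 &PARM01
 f0=1.E-4,
 beta=1.E-11,
 &
 &PARM04
 usingCartesianGrid=.TRUE.,
 delX=60*20.E3,
 delY=60*20.E3,
 delZ=10*50.,
 &
\end{verbatim}
Here \texttt{60*20.E3} is Fortran namelist shorthand for sixty grid spacings
of 20 km each.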
828
829 \begin{itemize}
830 \item topography - full and partial cells
831 \end{itemize}
832
833 The domain bathymetry is read from a file that contains a 2D (x,y) map of
834 depths (in m) for the ocean or pressures (in Pa) for the atmosphere. The
835 file name is represented by the variable \textbf{bathyFile}\textit{. }The
836 file is assumed to contain binary numbers giving the depth (pressure) of the
837 model at each grid cell, ordered with the x coordinate varying fastest. The
838 points are ordered from low coordinate to high coordinate for both axes. The
839 model code applies without modification to enclosed, periodic, and double
840 periodic domains. Periodicity is assumed by default and is suppressed by
841 setting the depths to 0m for the cells at the limits of the computational
842 domain (note: not sure this is the case for the atmosphere). The precision
843 with which to read the binary data is controlled by the integer variable
844 \textbf{readBinaryPrec }which can take the value \texttt{32} (single
precision) or \texttt{64} (double precision). See the Matlab program
\textit{gendata.m} in the \textit{input} directories under
\textit{verification} to see how the bathymetry files are generated for the
case study experiments.
848
849 To use the partial cell capability, the variable \textbf{hFacMin}\textit{\ }%
850 needs to be set to a value between 0 and 1 (it is set to 1 by default)
851 corresponding to the minimum fractional size of the cell. For example if the
852 bottom cell is 500m thick and \textbf{hFacMin}\textit{\ }is set to 0.1, the
853 actual thickness of the cell (i.e. used in the code) can cover a range of
854 discrete values 50m apart from 50m to 500m depending on the value of the
855 bottom depth (in \textbf{bathyFile}) at this point.
856
Note that the bottom depths (or pressures) need not coincide with the model's
levels as deduced from \textbf{delz} or \textbf{delp}. The model will
interpolate the numbers in \textbf{bathyFile} so that they match the levels
obtained from \textbf{delz} or \textbf{delp} and \textbf{hFacMin}.
862
863 (Note: the atmospheric case is a bit more complicated than what is written
864 here I think. To come soon...)
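As an illustration, the bathymetry-related entries might read as follows.
The file name is hypothetical, and the namelist groups shown (the main group
for \textbf{hFacMin} and \textbf{readBinaryPrec}, an input-file group for
\textbf{bathyFile}) are the usual locations but should be checked for your
configuration:
\begin{verbatim}
 &PARM01
 hFacMin=0.1,
 readBinaryPrec=32,
 &
 # the bathymetry file name below is just an illustration
 &PARM05
 bathyFile='topog.bin',
 &
\end{verbatim}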
865
866 \begin{itemize}
867 \item time-discretization
868 \end{itemize}
869
The time steps are set through the real variables \textbf{deltaTMom} and
\textbf{deltaTtracer} (in s), which represent the time step for the momentum
and tracer equations, respectively. For synchronous integrations, simply set
the two variables to the same value (or you can prescribe one time step only
through the variable \textbf{deltaT}). The Adams-Bashforth stabilizing
parameter is set through the variable \textbf{abEps} (dimensionless). The
staggered baroclinic time stepping can be activated by setting the logical
variable \textbf{staggerTimeStep} to '.\texttt{TRUE}.'.
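By way of illustration, these time-stepping controls might appear in the
\textit{data} file as in the sketch below (values are illustrative; the group
name \textbf{PARM03} is the usual location but should be verified for your
code version):
\begin{verbatim}
 &PARM03
 deltaTMom=1200.,
 deltaTtracer=1200.,
 abEps=0.1,
 staggerTimeStep=.FALSE.,
 &
\end{verbatim}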
878
879 \subsection{Equation of state}
880
First, because the model equations are written in terms of perturbations, a
reference thermodynamic state needs to be specified. This is done through
the 1D arrays \textbf{tRef} and \textbf{sRef}. \textbf{tRef}
specifies the reference potential temperature profile (in $^{\circ}$C for
the ocean and K for the atmosphere) starting from the level
k=1. Similarly, \textbf{sRef} specifies the reference salinity
profile (in ppt) for the ocean or the reference specific humidity profile
(in g/kg) for the atmosphere.
889
The form of the equation of state is controlled by the character variables
\textbf{buoyancyRelation} and \textbf{eosType}. \textbf{buoyancyRelation} is
set to '\texttt{OCEANIC}' by default and needs to be set to
'\texttt{ATMOSPHERIC}' for atmosphere simulations. In this case,
\textbf{eosType} must be set to '\texttt{IDEALGAS}'. For the ocean, two
forms of the equation of state are available: linear (set \textbf{eosType}
to '\texttt{LINEAR}') and a polynomial approximation to the full nonlinear
equation (set \textbf{eosType} to '\texttt{POLYNOMIAL}'). In the linear
case, you need to specify the thermal and haline expansion coefficients
represented by the variables \textbf{tAlpha} (in K$^{-1}$) and
\textbf{sBeta} (in ppt$^{-1}$). For the nonlinear case, you need to generate
a file of polynomial coefficients called \textit{POLY3.COEFFS}. To do this,
use the program \textit{utils/knudsen2/knudsen2.f} under the model tree (a
Makefile is available in the same directory and you will need to edit the
number and the values of the vertical levels in \textit{knudsen2.f} so that
they match those of your configuration).
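As a sketch, a linear ocean equation of state with a four-level reference
state might be specified as follows. The values are illustrative only, and
these parameters normally belong to the main parameter group of the
\textit{data} file:
\begin{verbatim}
 # illustrative linear equation-of-state settings
 &PARM01
 eosType='LINEAR',
 tRef=20.,16.,12.,8.,
 sRef=4*35.,
 tAlpha=2.E-4,
 sBeta=7.4E-4,
 &
\end{verbatim}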
907
908 \subsection{Momentum equations}
909
In this section, we focus for now only on the parameters that you are likely
to change, i.e. those related to forcing and dissipation.
The details relevant to the vector-invariant form of the equations and the
various advection schemes are not covered for the moment. We assume that you
use the standard form of the momentum equations (i.e. the flux form) with
the default advection scheme. Also, there are a few logical variables that
allow you to turn on/off various terms in the momentum equation. These
variables are called \textbf{momViscosity}, \textbf{momAdvection},
\textbf{momForcing}, \textbf{useCoriolis}, \textbf{momPressureForcing},
\textbf{momStepping}, and \textbf{metricTerms}, and are assumed to be set to
'.\texttt{TRUE}.' here. Look at the file \textit{model/inc/PARAMS.h} for a
precise definition of these variables.
922
923 \begin{itemize}
924 \item initialization
925 \end{itemize}
926
927 The velocity components are initialized to 0 unless the simulation is
928 starting from a pickup file (see section on simulation control parameters).
929
930 \begin{itemize}
931 \item forcing
932 \end{itemize}
933
This section only applies to the ocean. You need to provide wind-stress
data in two files, whose names are given by the variables
\textbf{zonalWindFile} and \textbf{meridWindFile}, corresponding to the
zonal and meridional components of the wind stress, respectively (if you
want the stress to be along the direction of only one of the model
horizontal axes, you only need to generate one file). The format of the
files is similar to that of the bathymetry file. The zonal (meridional)
stress data are assumed to be in Pa and located at U-points (V-points). As
for the bathymetry, the precision with which to read the binary data is
controlled by the variable \textbf{readBinaryPrec}.
See the Matlab program \textit{gendata.m} in the \textit{input} directories
under \textit{verification} to see how simple analytical wind forcing data
are generated for the case study experiments.
946
There is also the possibility of prescribing time-dependent periodic
forcing. To do this, concatenate the successive time records into a single
file (for each stress component) ordered in an (x, y, t) fashion and set the
following variables: \textbf{periodicExternalForcing} to '.\texttt{TRUE}.',
\textbf{externForcingPeriod} to the period (in s) with which the forcing
varies (typically 1 month), and \textbf{externForcingCycle} to the repeat
time (in s) of the forcing (typically 1 year -- note: \textbf{%
externForcingCycle} must be a multiple of \textbf{externForcingPeriod}).
With these variables set up, the model will interpolate the forcing linearly
in time at each iteration.
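A sketch of the corresponding forcing entries follows. The file names are
hypothetical, and the placement of the parameters in the input-file and
time-control groups is the usual convention but should be confirmed for your
setup:
\begin{verbatim}
 # file names below are illustrative
 &PARM05
 zonalWindFile='taux.bin',
 meridWindFile='tauy.bin',
 &
 &PARM03
 periodicExternalForcing=.TRUE.,
 externForcingPeriod=2592000.,
 externForcingCycle=31104000.,
 &
\end{verbatim}
Here the period is 30 days and the cycle is 360 days, so the cycle is a
multiple of the period as required.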
957
958 \begin{itemize}
959 \item dissipation
960 \end{itemize}
961
962 The lateral eddy viscosity coefficient is specified through the variable
963 \textbf{viscAh}\textit{\ }(in m$^{2}$s$^{-1}$). The vertical eddy viscosity
964 coefficient is specified through the variable \textbf{viscAz }(in m$^{2}$s$%
965 ^{-1}$) for the ocean and \textbf{viscAp}\textit{\ }(in Pa$^{2}$s$^{-1}$)
for the atmosphere. The vertical viscous fluxes can be computed implicitly
by setting the logical variable \textbf{implicitViscosity} to
'.\texttt{TRUE}.'. In addition, biharmonic mixing can be added as well through the variable
969 \textbf{viscA4}\textit{\ }(in m$^{4}$s$^{-1}$). On a spherical polar grid,
970 you might also need to set the variable \textbf{cosPower} which is set to 0
971 by default and which represents the power of cosine of latitude to multiply
972 viscosity. Slip or no-slip conditions at lateral and bottom boundaries are
973 specified through the logical variables \textbf{no\_slip\_sides}\textit{\ }%
974 and \textbf{no\_slip\_bottom}. If set to '\texttt{.FALSE.}', free-slip
975 boundary conditions are applied. If no-slip boundary conditions are applied
976 at the bottom, a bottom drag can be applied as well. Two forms are
977 available: linear (set the variable \textbf{bottomDragLinear}\textit{\ }in s$%
978 ^{-1}$) and quadratic (set the variable \textbf{bottomDragQuadratic}\textit{%
979 \ }in m$^{-1}$).
980
981 The Fourier and Shapiro filters are described elsewhere.
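The momentum dissipation parameters described above might be gathered in the
\textit{data} file roughly as in this sketch (illustrative values; normally
in the main parameter group):
\begin{verbatim}
 # illustrative viscosity and boundary-condition settings
 &PARM01
 viscAh=4.E2,
 viscAz=1.E-3,
 viscA4=0.,
 no_slip_sides=.TRUE.,
 no_slip_bottom=.FALSE.,
 &
\end{verbatim}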
982
983 \begin{itemize}
984 \item C-D scheme
985 \end{itemize}
986
If you run at a sufficiently coarse resolution, you will need the C-D scheme
for the computation of the Coriolis terms. The variable \textbf{tauCD},
which represents the C-D scheme coupling timescale (in s), needs to be set.
990
991 \begin{itemize}
992 \item calculation of pressure/geopotential
993 \end{itemize}
994
995 First, to run a non-hydrostatic ocean simulation, set the logical variable
996 \textbf{nonHydrostatic} to '.\texttt{TRUE}.'. The pressure field is then
997 inverted through a 3D elliptic equation. (Note: this capability is not
998 available for the atmosphere yet.) By default, a hydrostatic simulation is
999 assumed and a 2D elliptic equation is used to invert the pressure field. The
1000 parameters controlling the behaviour of the elliptic solvers are the
variables \textbf{cg2dMaxIters} and \textbf{cg2dTargetResidual} for the 2D
case, and \textbf{cg3dMaxIters} and \textbf{cg3dTargetResidual} for the 3D
case. You probably won't need to alter the
default values (are we sure of this?).
1005
For the calculation of the surface pressure (for the ocean) or surface
geopotential (for the atmosphere), you need to set the logical variables
\textbf{rigidLid} and \textbf{implicitFreeSurface} (set one to
'.\texttt{TRUE}.' and the other to '.\texttt{FALSE}.' depending on how you
want to treat the ocean upper or atmosphere lower boundary).
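As an illustration, a hydrostatic ocean run with an implicit free surface
might carry entries along these lines. The values are typical but
illustrative, and the elliptic-solver parameters usually sit in their own
namelist group:
\begin{verbatim}
 &PARM01
 rigidLid=.FALSE.,
 implicitFreeSurface=.TRUE.,
 nonHydrostatic=.FALSE.,
 &
 # elliptic solver parameters
 &PARM02
 cg2dMaxIters=1000,
 cg2dTargetResidual=1.E-13,
 &
\end{verbatim}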
1011
1012 \subsection{Tracer equations}
1013
This section covers the tracer equations, i.e. the potential temperature
equation and the salinity (for the ocean) or specific humidity (for the
atmosphere) equation. As for the momentum equations, we only describe for
now the parameters that you are likely to change. The logical variables
\textbf{tempDiffusion}, \textbf{tempAdvection}, \textbf{tempForcing}, and
\textbf{tempStepping} allow you to turn on/off terms in the temperature
equation (same thing for salinity or specific humidity with the variables
\textbf{saltDiffusion}, \textbf{saltAdvection}, etc.). These variables are
all assumed here to be set to '.\texttt{TRUE}.'. Look at the file
\textit{model/inc/PARAMS.h} for a precise definition.
1025
1026 \begin{itemize}
1027 \item initialization
1028 \end{itemize}
1029
1030 The initial tracer data can be contained in the binary files \textbf{%
1031 hydrogThetaFile }and \textbf{hydrogSaltFile}. These files should contain 3D
1032 data ordered in an (x, y, r) fashion with k=1 as the first vertical level.
1033 If no file names are provided, the tracers are then initialized with the
1034 values of \textbf{tRef }and \textbf{sRef }mentioned above (in the equation
1035 of state section). In this case, the initial tracer data are uniform in x
1036 and y for each depth level.
1037
1038 \begin{itemize}
1039 \item forcing
1040 \end{itemize}
1041
This part is more relevant to the ocean; the procedure for the atmosphere is
not completely stabilized at the moment.
1044
A combination of flux data and relaxation terms can be used to drive the
tracer equations. For potential temperature, heat flux data (in W/m$^{2}$)
can be stored in the 2D binary file \textbf{surfQfile}. Alternatively or in
addition, the forcing can be specified through a relaxation term. The SST
data to which the model surface temperatures are restored are assumed to be
stored in the 2D binary file \textbf{thetaClimFile}. The corresponding
relaxation time scale coefficient is set through the variable
\textbf{tauThetaClimRelax} (in s). The same procedure applies for salinity
with the variable names \textbf{EmPmRfile}, \textbf{saltClimFile}, and
\textbf{tauSaltClimRelax} for the freshwater flux (in m/s) and surface
salinity (in ppt) data files and the relaxation time scale coefficient (in
s), respectively. Also for salinity, if the CPP key
\textbf{USE\_NATURAL\_BCS} is turned on, natural boundary conditions are
applied, i.e. when computing the surface salinity tendency, the freshwater
flux is multiplied by the model surface salinity instead of a constant
salinity value.
1061
1062 As for the other input files, the precision with which to read the data is
1063 controlled by the variable \textbf{readBinaryPrec}. Time-dependent, periodic
1064 forcing can be applied as well following the same procedure used for the
1065 wind forcing data (see above).
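For instance, surface restoring of temperature and salinity might be
configured roughly as follows. The file names are hypothetical, and the
split of parameters between the input-file group and the time-control group
reflects common usage rather than a definitive layout:
\begin{verbatim}
 # file names below are illustrative
 &PARM05
 surfQfile='Qnet.bin',
 thetaClimFile='SST.bin',
 EmPmRfile='EmPmR.bin',
 saltClimFile='SSS.bin',
 &
 &PARM03
 tauThetaClimRelax=5184000.,
 tauSaltClimRelax=7776000.,
 &
\end{verbatim}
The relaxation time scales shown correspond to 60 and 90 days, respectively.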
1066
1067 \begin{itemize}
1068 \item dissipation
1069 \end{itemize}
1070
1071 Lateral eddy diffusivities for temperature and salinity/specific humidity
1072 are specified through the variables \textbf{diffKhT }and \textbf{diffKhS }%
1073 (in m$^{2}$/s). Vertical eddy diffusivities are specified through the
1074 variables \textbf{diffKzT }and \textbf{diffKzS }(in m$^{2}$/s) for the ocean
1075 and \textbf{diffKpT }and \textbf{diffKpS }(in Pa$^{2}$/s) for the
1076 atmosphere. The vertical diffusive fluxes can be computed implicitly by
1077 setting the logical variable \textbf{implicitDiffusion }to '.\texttt{TRUE}%
1078 .'. In addition, biharmonic diffusivities can be specified as well through
1079 the coefficients \textbf{diffK4T }and \textbf{diffK4S }(in m$^{4}$/s). Note
1080 that the cosine power scaling (specified through \textbf{cosPower }- see the
1081 momentum equations section) is applied to the tracer diffusivities
1082 (Laplacian and biharmonic) as well. The Gent and McWilliams parameterization
for oceanic tracers is described in the package section. Finally, note that
tracers can also be subject to Fourier and Shapiro filtering (see the
corresponding section on these filters).
1086
1087 \begin{itemize}
1088 \item ocean convection
1089 \end{itemize}
1090
1091 Two options are available to parameterize ocean convection: one is to use
1092 the convective adjustment scheme. In this case, you need to set the variable
1093 \textbf{cadjFreq}, which represents the frequency (in s) with which the
1094 adjustment algorithm is called, to a non-zero value (if set to a negative
1095 value by the user, the model will set it to the tracer time step). The other
1096 option is to parameterize convection with implicit vertical diffusion. To do
1097 this, set the logical variable \textbf{implicitDiffusion }to '.\texttt{TRUE}%
1098 .' and the real variable \textbf{ivdc\_kappa }to a value (in m$^{2}$/s) you
1099 wish the tracer vertical diffusivities to have when mixing tracers
vertically due to static instabilities. Note that \textbf{cadjFreq} and
\textbf{ivdc\_kappa} cannot both have non-zero values.
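A sketch of the tracer mixing and convection settings discussed above,
showing the implicit-diffusion option (values are illustrative; usually in
the main parameter group):
\begin{verbatim}
 # illustrative tracer diffusivities and convection settings
 &PARM01
 diffKhT=1.E3,
 diffKzT=1.E-5,
 diffKhS=1.E3,
 diffKzS=1.E-5,
 implicitDiffusion=.TRUE.,
 ivdc_kappa=10.,
 cadjFreq=0.,
 &
\end{verbatim}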
1102
1103 \subsection{Simulation controls}
1104
The model ``clock'' is defined by the variable \textbf{deltaTClock} (in s),
which determines the IO frequencies and is used in tagging output.
Typically, you will set it to the tracer time step for accelerated runs
(otherwise it is simply set to the default time step \textbf{deltaT}).
The frequencies of checkpointing and dumping of the model state are
referenced to this clock (see below).
1111
1112 \begin{itemize}
1113 \item run duration
1114 \end{itemize}
1115
The beginning of a simulation is set by specifying a start time (in s)
through the real variable \textbf{startTime} or by specifying an initial
iteration number through the integer variable \textbf{nIter0}. If these
variables are set to nonzero values, the model will look for a ``pickup''
file \textit{pickup.0000nIter0} to restart the integration. The end
of a simulation is set through the real variable \textbf{endTime} (in s).
Alternatively, you can instead specify the number of time steps to execute
through the integer variable \textbf{nTimeSteps}.
1124
1125 \begin{itemize}
1126 \item frequency of output
1127 \end{itemize}
1128
Real variables defining the frequencies (in s) with which output files are
written to disk need to be set up. \textbf{dumpFreq} controls the frequency
with which the instantaneous state of the model is saved. \textbf{chkPtFreq}
and \textbf{pchkPtFreq} control the output frequency of rolling and
permanent checkpoint files, respectively. See section 1.5.1 Output files for the
definition of model state and checkpoint files. In addition, time-averaged
fields can be written out by setting the variable \textbf{taveFreq} (in s).
The precision with which to write the binary data is controlled by the
integer variable \textbf{writeBinaryPrec} (set it to \texttt{32} or
\texttt{64}).
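Putting the simulation controls together, the time-control entries of
\textit{data} might look like this sketch (values are illustrative only;
these parameters normally live in the time-control group, but check
\textit{model/src/ini\_parms.F} for your version):
\begin{verbatim}
 # illustrative run-duration and output-frequency settings
 &PARM03
 nIter0=0,
 nTimeSteps=720,
 deltaT=1200.,
 abEps=0.1,
 dumpFreq=86400.,
 taveFreq=86400.,
 chkPtFreq=432000.,
 pchkPtFreq=864000.,
 writeBinaryPrec=64,
 &
\end{verbatim}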
