1 % $Header: /u/u0/gcmpack/mitgcmdoc/part3/getting_started.tex,v 1.6 2001/10/18 18:44:14 adcroft Exp $
2 % $Name: $
3
4 %\section{Getting started}
5
In this section, we describe how to use the model. In the first
part, we provide enough information to help you get started with
the model. We believe the best way to familiarize yourself with the
model is to run the case study examples provided with the base
version. Information on how to obtain, compile, and run the code is
found there, as well as a brief description of the model directory
structure and the case study examples. The latter and the code
structure are described more fully in chapters
\ref{chap:discretization} and \ref{chap:sarch}, respectively. Later
in this section, we provide information on how to customize the code
when you are ready to implement the configuration you have in mind.
17
18 \section{Where to find information}
19 \label{sect:whereToFindInfo}
20
21 A web site is maintained for release 1 (Sealion) of MITgcm:
22 \begin{verbatim}
23 http://mitgcm.org/sealion
24 \end{verbatim}
25 Here you will find an on-line version of this document, a
26 ``browsable'' copy of the code and a searchable database of the model
and site, as well as links for downloading the model and
documentation, and links to data sources and other related sites.
29
30 There is also a support news group for the model that you can email at
31 \texttt{support@mitgcm.org} or browse at:
32 \begin{verbatim}
33 news://mitgcm.org/mitgcm.support
34 \end{verbatim}
Mail sent to this address will reach all the developers and be
archived on the newsgroup. A users' email list will be established at
some time in the future.
38
39 \section{Obtaining the code}
40 \label{sect:obtainingCode}
41
42 MITgcm can be downloaded from our system by following
43 the instructions below. As a courtesy we ask that you send e-mail to us at
44 \begin{rawhtml} <A href=mailto:support@mitgcm.org> \end{rawhtml}
45 support@mitgcm.org
46 \begin{rawhtml} </A> \end{rawhtml}
so that we can keep track of who is using the model and for what
applications. You can download the model in one of two ways:
49
50 \begin{enumerate}
\item Using CVS software. CVS is a freely available source code management
tool. To use CVS you need to have the software installed. Many systems
53 come with CVS pre-installed, otherwise good places to look for
54 the software for a particular platform are
55 \begin{rawhtml} <A href=http://www.cvshome.org/ target="idontexist"> \end{rawhtml}
56 cvshome.org
57 \begin{rawhtml} </A> \end{rawhtml}
58 and
59 \begin{rawhtml} <A href=http://www.wincvs.org/ target="idontexist"> \end{rawhtml}
60 wincvs.org
61 \begin{rawhtml} </A> \end{rawhtml}
62 .
63
\item Using a tar file. This method is simple and does not
require any special software. However, it does not provide easy
support for maintenance updates.
67
68 \end{enumerate}
69
70 If CVS is available on your system, we strongly encourage you to use it. CVS
71 provides an efficient and elegant way of organizing your code and keeping
72 track of your changes. If CVS is not available on your machine, you can also
73 download a tar file.
74
75 Before you can use CVS, the following environment variable has to be set in
76 your .cshrc or .tcshrc:
77 \begin{verbatim}
78 % setenv CVSROOT :pserver:cvsanon@mitgcm.org:/u/u0/gcmpack
79 \end{verbatim}
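
If you use a Bourne-type shell (sh or bash) rather than csh/tcsh, the
equivalent setting goes in your .profile or .bashrc (a sketch; only the
shell syntax differs):
\begin{verbatim}
% export CVSROOT=:pserver:cvsanon@mitgcm.org:/u/u0/gcmpack
\end{verbatim}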
80
To start using CVS, register with the MITgcm CVS server using the command:
82 \begin{verbatim}
83 % cvs login ( CVS password: cvsanon )
84 \end{verbatim}
85 You only need to do ``cvs login'' once.
86
To obtain the sources for release 1, type:
88 \begin{verbatim}
89 % cvs co -d directory -P -r release1 MITgcmUV
90 \end{verbatim}
91
This creates a directory called \textit{directory}. If \textit{directory}
already exists, this command updates your code based on the repository.
Each directory in the source tree contains a directory \textit{CVS}.
This information is required by CVS to keep track of your file versions
with respect to the repository. Don't edit the files in \textit{CVS}!
97 You can also use CVS to download code updates. More extensive
98 information on using CVS for maintaining MITgcm code can be found
99 \begin{rawhtml} <A href=http://mitgcm.org/usingcvstoget.html target="idontexist"> \end{rawhtml}
100 here
101 \begin{rawhtml} </A> \end{rawhtml}
102 .
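
For example, to bring a previously checked-out copy up to date with the
release1 branch at a later date, something along these lines (a sketch
using standard CVS options, run from the place where you originally issued
the checkout) should work:
\begin{verbatim}
% cd directory
% cvs update -d -P -r release1
\end{verbatim}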
103
104
105 \paragraph*{Conventional download method}
106 \label{sect:conventionalDownload}
107
108 If you do not have CVS on your system, you can download the model as a
109 tar file from the reference web site at:
110 \begin{rawhtml} <A href=http://mitgcm.org/download target="idontexist"> \end{rawhtml}
111 \begin{verbatim}
112 http://mitgcm.org/download/
113 \end{verbatim}
114 \begin{rawhtml} </A> \end{rawhtml}
The tar file still contains CVS information, which we urge you not to
delete; even if you do not use CVS yourself, the information can help
us if you need to send us your copy of the code.
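
Assuming the downloaded archive is a gzipped tar file with a name along the
lines of \texttt{MITgcm.tar.gz} (the actual file name on the download page
may differ), it can be unpacked with:
\begin{verbatim}
% gunzip -c MITgcm.tar.gz | tar xvf -
\end{verbatim}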
118
119 \section{Model and directory structure}
120
The ``numerical'' model is contained within an execution environment support
122 wrapper. This wrapper is designed to provide a general framework for
123 grid-point models. MITgcmUV is a specific numerical model that uses the
124 framework. Under this structure the model is split into execution
125 environment support code and conventional numerical model code. The
126 execution environment support code is held under the \textit{eesupp}
127 directory. The grid point model code is held under the \textit{model}
128 directory. Code execution actually starts in the \textit{eesupp} routines
129 and not in the \textit{model} routines. For this reason the top-level
130 \textit{MAIN.F} is in the \textit{eesupp/src} directory. In general,
131 end-users should not need to worry about this level. The top-level routine
132 for the numerical part of the code is in \textit{model/src/THE\_MODEL\_MAIN.F%
133 }. Here is a brief description of the directory structure of the model under
134 the root tree (a detailed description is given in section 3: Code structure).
135
136 \begin{itemize}
137 \item \textit{bin}: this directory is initially empty. It is the default
138 directory in which to compile the code.
139
\item \textit{diags}: contains the code related to time-averaged
diagnostics. It is subdivided into two subdirectories, \textit{inc} and
\textit{src}, which contain include files (*.\textit{h} files) and Fortran
subroutines (*.\textit{F} files), respectively.
144
145 \item \textit{doc}: contains brief documentation notes.
146
147 \item \textit{eesupp}: contains the execution environment source code. Also
148 subdivided into two subdirectories \textit{inc} and \textit{src}.
149
150 \item \textit{exe}: this directory is initially empty. It is the default
151 directory in which to execute the code.
152
153 \item \textit{model}: this directory contains the main source code. Also
154 subdivided into two subdirectories \textit{inc} and \textit{src}.
155
\item \textit{pkg}: contains the source code for the packages. Each package
corresponds to a subdirectory. For example, \textit{gmredi} contains the
code related to the Gent-McWilliams/Redi scheme, and \textit{aim} the code
related to the intermediate atmospheric physics. The packages are described
in detail in section 3.
161
\item \textit{tools}: this directory contains various useful tools. For
example, \textit{genmake} is a script written in csh (C-shell) that should
be used to generate your makefile. The directory \textit{adjoint} contains
the makefile specific to the Tangent Linear and Adjoint Compiler (TAMC) that
generates the adjoint code. The latter is described in detail in part V.
167
\item \textit{utils}: this directory contains various utilities. The
subdirectory \textit{knudsen2} contains code and a makefile that
compute coefficients of the polynomial approximation to the Knudsen
formula for an ocean nonlinear equation of state. The \textit{matlab}
subdirectory contains Matlab scripts for reading model output directly
into Matlab. \textit{scripts} contains C-shell post-processing
scripts for joining processor-based and tile-based model output.
175
176 \item \textit{verification}: this directory contains the model examples. See
177 section \ref{sect:modelExamples}.
178 \end{itemize}
179
180 \section{Example experiments}
181 \label{sect:modelExamples}
182
Now that you have successfully downloaded the model code, we recommend that
you first try to run the examples provided with the base version. You will
probably want to run the example that is closest to the configuration
you will eventually use. The examples are located in subdirectories under
the directory \textit{verification} and are briefly described below (a full
description is given in section 2):
189
190 \subsection{List of model examples}
191
192 \begin{itemize}
193 \item \textit{exp0} - single layer, ocean double gyre (barotropic with
194 free-surface).
195
196 \item \textit{exp1} - 4 layers, ocean double gyre.
197
198 \item \textit{exp2} - 4x4 degree global ocean simulation with steady
199 climatological forcing.
200
201 \item \textit{exp4} - flow over a Gaussian bump in open-water or channel
202 with open boundaries.
203
\item \textit{exp5} - inhomogeneously forced ocean convection in a doubly
periodic box.
206
207 \item \textit{front\_relax} - relaxation of an ocean thermal front (test for
208 Gent/McWilliams scheme). 2D (Y-Z).
209
\item \textit{internal\_wave} - ocean internal wave forced by open boundary
conditions.
212
\item \textit{natl\_box} - eastern subtropical North Atlantic with KPP
scheme; 1-month integration.
215
\item \textit{hs94.1x64x5} - zonally averaged atmosphere using Held and Suarez
'94 forcing.
218
219 \item \textit{hs94.128x64x5} - 3D atmosphere dynamics using Held and Suarez
220 '94 forcing.
221
222 \item \textit{hs94.cs-32x32x5} - 3D atmosphere dynamics using Held and
223 Suarez '94 forcing on the cubed sphere.
224
\item \textit{aim.5l\_zon-ave} - Intermediate Atmospheric physics, 5-level
Molteni physics package. Global Zonal Mean configuration, 1x64x5 resolution.

\item \textit{aim.5l\_XZ\_Equatorial\_Slice} - Intermediate Atmospheric
physics, 5-level Molteni physics package. Equatorial Slice configuration.
2D (X-Z).

\item \textit{aim.5l\_Equatorial\_Channel} - Intermediate Atmospheric
physics, 5-level Molteni physics package. 3D Equatorial Channel
configuration (not completely tested).

\item \textit{aim.5l\_LatLon} - Intermediate Atmospheric physics, 5-level
Molteni physics package. Global configuration, 128x64x5 resolution.
238
239 \item \textit{adjustment.128x64x1}
240
241 \item \textit{adjustment.cs-32x32x1}
242 \end{itemize}
243
244 \subsection{Directory structure of model examples}
245
246 Each example directory has the following subdirectories:
247
248 \begin{itemize}
249 \item \textit{code}: contains the code particular to the example. At a
250 minimum, this directory includes the following files:
251
252 \begin{itemize}
\item \textit{code/CPP\_EEOPTIONS.h}: declares CPP keys related to the
``execution environment'' part of the code. The default version is located
in \textit{eesupp/inc}.

\item \textit{code/CPP\_OPTIONS.h}: declares CPP keys related to the
``numerical model'' part of the code. The default version is located in
\textit{model/inc}.
260
\item \textit{code/SIZE.h}: declares the size of the underlying computational
grid. The default version is located in \textit{model/inc}.
263 \end{itemize}
264
265 In addition, other include files and subroutines might be present in \textit{%
266 code} depending on the particular experiment. See section 2 for more details.
267
\item \textit{input}: contains the input data files required to run the
example. At a minimum, the \textit{input} directory contains the following
files:
271
272 \begin{itemize}
273 \item \textit{input/data}: this file, written as a namelist, specifies the
274 main parameters for the experiment.
275
\item \textit{input/data.pkg}: contains parameters related to the packages
used in the experiment.
278
279 \item \textit{input/eedata}: this file contains ``execution environment''
280 data. At present, this consists of a specification of the number of threads
281 to use in $X$ and $Y$ under multithreaded execution.
282 \end{itemize}
283
You will also find in this directory the forcing and topography
files, as well as the files describing the initial state of the experiment.
These vary from experiment to experiment. See section 2 for more details.
287
288 \item \textit{results}: this directory contains the output file \textit{%
289 output.txt} produced by the simulation example. This file is useful for
290 comparison with your own output when you run the experiment.
291 \end{itemize}
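
As a concrete illustration, the layout of one example (here \textit{exp2};
the exact set of files varies from experiment to experiment) is roughly:
\begin{verbatim}
verification/exp2/
    code/     CPP_EEOPTIONS.h CPP_OPTIONS.h SIZE.h ...
    input/    data data.pkg eedata (forcing/topography/initial-state files)
    results/  output.txt
\end{verbatim}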
292
293 Once you have chosen the example you want to run, you are ready to compile
294 the code.
295
296 \section{Building the code}
297 \label{sect:buildingCode}
298
To compile the code, we use the {\em make} program. This uses a file
({\em Makefile}) that allows us to pre-process source files, specify
compiler and optimization options, and work out the file
dependencies. We supply a script ({\em genmake}), described in section
\ref{sect:genmake}, that automatically creates the {\em Makefile} for
you. You then need to build the dependencies and compile the code.
305
As an example, let's assume that you want to build and run experiment
\textit{verification/exp2}. There are multiple ways and places to
do this, but here let's build the code in
\textit{verification/exp2/input}:
310 \begin{verbatim}
311 % cd verification/exp2/input
312 \end{verbatim}
First, build the {\em Makefile}:
\begin{verbatim}
% ../../../tools/genmake -mods=../code
\end{verbatim}
The command line option tells {\em genmake} to override model source
code with any files in the directory {\em ../code}. For some experiments
a \textit{.genmakerc} file in the \textit{input} directory supplies the
relevant options automatically; if there is no \textit{.genmakerc}, you
have to give the \texttt{-mods=../code} option explicitly when invoking
\textit{genmake}, as above.
325
Next, create the dependencies:
327 \begin{verbatim}
328 % make depend
329 \end{verbatim}
This modifies {\em Makefile} by attaching a [long] list of files on
which other files depend. The purpose of this is to reduce
re-compilation if and when you start to modify the code. {\tt make
depend} also creates links from the model source to this directory.
334
335 Now compile the code:
336 \begin{verbatim}
337 % make
338 \end{verbatim}
339 The {\tt make} command creates an executable called \textit{mitgcmuv}.
340
341 Now you are ready to run the model. General instructions for doing so are
342 given in section \ref{sect:runModel}. Here, we can run the model with:
343 \begin{verbatim}
344 ./mitgcmuv > output.txt
345 \end{verbatim}
346 where we are re-directing the stream of text output to the file {\em
347 output.txt}.
348
349
350 \subsection{Building/compiling the code elsewhere}
351
In the example above (section \ref{sect:buildingCode}) we built the
executable in the {\em input} directory of the experiment for
convenience. You can also configure and compile the code in other
locations, for example on a scratch disk, without having to copy the
entire source tree. The only requirement is that you have {\tt
genmake} in your path or you know the absolute path to {\tt genmake}.

The following sections outline some possible ways of organizing your
source and data.
361
\subsubsection{Building from the {\em ../code} directory}
363
364 This is just as simple as building in the {\em input/} directory:
365 \begin{verbatim}
366 % cd verification/exp2/code
367 % ../../../tools/genmake
368 % make depend
369 % make
370 \end{verbatim}
371 However, to run the model the executable ({\em mitgcmuv}) and input
372 files must be in the same place. If you only have one calculation to make:
373 \begin{verbatim}
374 % cd ../input
375 % cp ../code/mitgcmuv ./
376 % ./mitgcmuv > output.txt
377 \end{verbatim}
or if you will be making multiple runs with the same executable:
379 \begin{verbatim}
380 % cd ../
381 % cp -r input run1
382 % cp code/mitgcmuv run1
383 % cd run1
384 % ./mitgcmuv > output.txt
385 \end{verbatim}
386
387 \subsubsection{Building from a new directory}
388
Since the {\em input} directory contains input files, it is often more
useful to keep {\em input} pristine and build in a new directory
within {\em verification/exp2/}:
392 \begin{verbatim}
393 % cd verification/exp2
394 % mkdir build
395 % cd build
396 % ../../../tools/genmake -mods=../code
397 % make depend
398 % make
399 \end{verbatim}
400 This builds the code exactly as before but this time you need to copy
401 either the executable or the input files or both in order to run the
402 model. For example,
403 \begin{verbatim}
404 % cp ../input/* ./
405 % ./mitgcmuv > output.txt
406 \end{verbatim}
407 or if you tend to make multiple runs with the same executable then
408 running in a new directory each time might be more appropriate:
409 \begin{verbatim}
410 % cd ../
411 % mkdir run1
412 % cp build/mitgcmuv run1/
413 % cp input/* run1/
414 % cd run1
415 % ./mitgcmuv > output.txt
416 \end{verbatim}
417
\subsubsection{Building on a scratch disk}
419
Model object files and output data can use up large amounts of disk
space, so you will often be operating on a large
scratch disk. Assuming the model source is in {\em $\sim$/MITgcm}, the
following commands will build the model in {\em /scratch/exp2-run1}:
424 \begin{verbatim}
425 % cd /scratch/exp2-run1
426 % ~/MITgcm/tools/genmake -rootdir=~/MITgcm -mods=~/MITgcm/verification/exp2/code
427 % make depend
428 % make
429 \end{verbatim}
430 To run the model here, you'll need the input files:
431 \begin{verbatim}
432 % cp ~/MITgcm/verification/exp2/input/* ./
433 % ./mitgcmuv > output.txt
434 \end{verbatim}
435
As before, you could build in one directory and make multiple runs of
the same experiment:
438 \begin{verbatim}
439 % cd /scratch/exp2
440 % mkdir build
441 % cd build
442 % ~/MITgcm/tools/genmake -rootdir=~/MITgcm -mods=~/MITgcm/verification/exp2/code
443 % make depend
444 % make
445 % cd ../
446 % cp -r ~/MITgcm/verification/exp2/input run2
447 % cd run2
448 % ./mitgcmuv > output.txt
449 \end{verbatim}
450
451
452
453 \subsection{\textit{genmake}}
454 \label{sect:genmake}
455
To compile the code, use the script \textit{genmake} located in the
\textit{tools} directory. \textit{genmake} generates the makefile.
It has been written so that the code can be compiled on a wide variety of
machines and systems. However, if it doesn't work the first time on your
platform, you might need to edit certain lines of \textit{genmake} in the
section containing the setups for the different machines. The file is
structured like this:
463 \begin{verbatim}
464 .
465 .
466 .
467 general instructions (machine independent)
468 .
469 .
470 .
471 - setup machine 1
472 - setup machine 2
473 - setup machine 3
474 - setup machine 4
475 etc
476 .
477 .
478 .
479 \end{verbatim}
480
For example, the setup corresponding to a DEC Alpha machine is reproduced
here:
483 \begin{verbatim}
484 case OSF1+mpi:
485 echo "Configuring for DEC Alpha"
486 set CPP = ( '/usr/bin/cpp -P' )
487 set DEFINES = ( ${DEFINES} '-DTARGET_DEC -DWORDLENGTH=1' )
488 set KPP = ( 'kapf' )
489 set KPPFILES = ( 'main.F' )
490 set KFLAGS1 = ( '-scan=132 -noconc -cmp=' )
491 set FC = ( 'f77' )
492 set FFLAGS = ( '-convert big_endian -r8 -extend_source -automatic -call_shared -notransform_loops -align dcommons' )
493 set FOPTIM = ( '-O5 -fast -tune host -inline all' )
494 set NOOPTFLAGS = ( '-O0' )
495 set LIBS = ( '-lfmpi -lmpi -lkmp_osfp10 -pthread' )
496 set NOOPTFILES = ( 'barrier.F different_multiple.F external_fields_load.F')
497 set RMFILES = ( '*.p.out' )
498 breaksw
499 \end{verbatim}
500
501 Typically, these are the lines that you might need to edit to make \textit{%
502 genmake} work on your platform if it doesn't work the first time. \textit{%
503 genmake} understands several options that are described here:
504
505 \begin{itemize}
506 \item -rootdir=dir
507
508 indicates where the model root directory is relative to the directory where
509 you are compiling. This option is not needed if you compile in the \textit{%
510 bin} directory (which is the default compilation directory) or within the
511 \textit{verification} tree.
512
513 \item -mods=dir1,dir2,...
514
indicates the relative or absolute paths of the directories whose sources
should take precedence over the default versions (located in \textit{model},
\textit{eesupp},...). Typically, this option is used when running the
examples, see below.
519
520 \item -enable=pkg1,pkg2,...
521
enables the source code of packages \textit{pkg1}, \textit{pkg2},... when
creating the makefile.

\item -disable=pkg1,pkg2,...

disables the source code of packages \textit{pkg1}, \textit{pkg2},... when
creating the makefile.
529
530 \item -platform=machine
531
532 specifies the platform for which you want the makefile. In general, you
533 won't need this option. \textit{genmake} will select the right machine for
534 you (the one you're working on!). However, this option is useful if you have
535 a choice of several compilers on one machine and you want to use the one
that is not the default (e.g., \texttt{pgf77} instead of \texttt{f77} under
Linux).
538
539 \item -mpi
540
this is used when you want to run the model in parallel processing mode
under MPI (see the section on parallel computation for more details).
543
544 \item -jam
545
546 this is used when you want to run the model in parallel processing mode
547 under jam (see section on parallel computation for more details).
548 \end{itemize}
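
For example, to build one of the verification experiments while leaving out
packages you do not need, the options can be combined on one command line
(a sketch; the choice of packages here is purely illustrative):
\begin{verbatim}
% ../../../tools/genmake -mods=../code -disable=aim,gmredi
\end{verbatim}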
549
550 For some of the examples, there is a file called \textit{.genmakerc} in the
551 \textit{input} directory that has the relevant \textit{genmake} options for
552 that particular example. In this way you don't need to type the options when
553 invoking \textit{genmake}.
554
555
556 \section{Running the model}
557 \label{sect:runModel}
558
If compilation finished successfully (section \ref{sect:buildingCode})
then an executable called {\em mitgcmuv} will now exist in the local
directory.
562
To run the model as a single process (i.e. not in parallel) simply
type:
565 \begin{verbatim}
566 % ./mitgcmuv
567 \end{verbatim}
The ``./'' is a safeguard to make sure you use the local executable
in case you have others that exist in your path (surely odd if you
do!). The above command will spew out many lines of text output to
your screen. This output contains details such as parameter values as
well as diagnostics such as mean kinetic energy, largest CFL number,
etc. It is worth keeping this text output with the binary output, so we
normally re-direct the {\em stdout} stream as follows:
575 \begin{verbatim}
576 % ./mitgcmuv > output.txt
577 \end{verbatim}
578
For the example experiments in {\em verification}, an example of the
output is kept in {\em results/output.txt} for comparison. You can compare
your {\em output.txt} with this one to check that the set-up works.
582
583
584
585 \subsection{Output files}
586
The model produces various output files. At a minimum, the instantaneous
``state'' of the model is written out, which comprises the following files:
589
590 \begin{itemize}
591 \item \textit{U.00000nIter} - zonal component of velocity field (m/s and $>
592 0 $ eastward).
593
594 \item \textit{V.00000nIter} - meridional component of velocity field (m/s
595 and $> 0$ northward).
596
597 \item \textit{W.00000nIter} - vertical component of velocity field (ocean:
598 m/s and $> 0$ upward, atmosphere: Pa/s and $> 0$ towards increasing pressure
599 i.e. downward).
600
\item \textit{T.00000nIter} - potential temperature (ocean: $^{\circ}$C,
atmosphere: K).
603
604 \item \textit{S.00000nIter} - ocean: salinity (psu), atmosphere: water vapor
605 (g/kg).
606
607 \item \textit{Eta.00000nIter} - ocean: surface elevation (m), atmosphere:
608 surface pressure anomaly (Pa).
609 \end{itemize}
610
The string \textit{00000nIter} consists of ten digits that specify the
iteration number at which the output is written out. For example, \textit{%
U.0000000300} is the zonal velocity at iteration 300.
614
615 In addition, a ``pickup'' or ``checkpoint'' file called:
616
617 \begin{itemize}
618 \item \textit{pickup.00000nIter}
619 \end{itemize}
620
621 is written out. This file represents the state of the model in a condensed
622 form and is used for restarting the integration. If the C-D scheme is used,
623 there is an additional ``pickup'' file:
624
625 \begin{itemize}
626 \item \textit{pickup\_cd.00000nIter}
627 \end{itemize}
628
containing the D-grid velocity data, which is also required
in order to restart the integration. Rolling checkpoint files are the same
as the pickup files but are named differently. Their names contain the string
\textit{ckptA} or \textit{ckptB} instead of \textit{00000nIter}. They can be
used to restart the model but are overwritten every other time they are
output, to save disk space during long integrations.
635
636 \subsection{Looking at the output}
637
638 All the model data are written according to a ``meta/data'' file format.
639 Each variable is associated with two files with suffix names \textit{.data}
640 and \textit{.meta}. The \textit{.data} file contains the data written in
641 binary form (big\_endian by default). The \textit{.meta} file is a
642 ``header'' file that contains information about the size and the structure
of the \textit{.data} file. This way of organizing the output is
particularly useful when running multi-processor calculations. The base
version of the model includes a few Matlab utilities to read output files
written in this format. The Matlab scripts are located in the directory
\textit{utils/matlab} under the root tree. The script \textit{rdmds.m} reads
the data. Look at the comments inside the script to see how to use it.
649
650 Some examples of reading and visualizing some output in {\em Matlab}:
651 \begin{verbatim}
652 % matlab
653 >> H=rdmds('Depth');
654 >> contourf(H');colorbar;
655 >> title('Depth of fluid as used by model');
656
657 >> eta=rdmds('Eta',10);
658 >> imagesc(eta');axis ij;colorbar;
659 >> title('Surface height at iter=10');
660
661 >> eta=rdmds('Eta',[0:10:100]);
662 >> for n=1:11; imagesc(eta(:,:,n)');axis ij;colorbar;pause(.5);end
663 \end{verbatim}
664
665 \section{Doing it yourself: customizing the code}
666
When you are ready to run the model in the configuration you want, the
easiest approach is to use and adapt the setup of the case study experiment
(described previously) that is closest to your configuration. Then, the
amount of setup will be minimized. In this section, we focus on the setup
related to the ``numerical model'' part of the code (the setup related to
the ``execution environment'' part is covered in the parallel implementation
section) and on the variables and parameters that you are likely to change.
674
675 \subsection{Configuration and setup}
676
The CPP keys related to the ``numerical model'' part of the code are all
defined and set in the file \textit{CPP\_OPTIONS.h} in the directory
\textit{model/inc} or in one of the \textit{code} directories of the case
study experiments under \textit{verification}. The model parameters are
defined and declared in the file \textit{model/inc/PARAMS.h} and their
default values are set in the routine \textit{model/src/set\_defaults.F}.
The default values can be modified in the namelist file \textit{data}, which
needs to be located in the directory where you will run the model. The
parameters are initialized in the routine \textit{model/src/ini\_parms.F}.
Look at this routine to see in which part of the namelist the parameters are
located.
688
689 In what follows the parameters are grouped into categories related to the
690 computational domain, the equations solved in the model, and the simulation
691 controls.
692
693 \subsection{Computational domain, geometry and time-discretization}
694
695 \begin{itemize}
696 \item dimensions
697 \end{itemize}
698
The numbers of points in the x, y, and r directions are represented by the
variables \textbf{sNx}, \textbf{sNy}, and \textbf{Nr}, respectively, which
are declared and set in the file \textit{model/inc/SIZE.h}. (Again, this
assumes a mono-processor calculation. For multiprocessor calculations see
the section on parallel implementation.)
705
706 \begin{itemize}
707 \item grid
708 \end{itemize}
709
Three different grids are available: cartesian, spherical polar, and
curvilinear (including the cubed sphere). The grid is set through the
logical variables \textbf{usingCartesianGrid},
\textbf{usingSphericalPolarGrid}, and \textbf{usingCurvilinearGrid}. In the
case of spherical and curvilinear grids, the southern boundary is defined
through the variable \textbf{phiMin}, which corresponds to the latitude of
the southernmost cell face (in degrees). The resolution along the x and y
directions is controlled by the 1D arrays \textbf{delx} and \textbf{dely}
(in meters in the case of a cartesian grid, in degrees otherwise). The
vertical grid spacing is set through the 1D array \textbf{delz} for the
ocean (in meters) or \textbf{delp} for the atmosphere (in Pa). The variable
\textbf{Ro\_SeaLevel} represents the standard position of sea level in the
``R'' coordinate. This is typically set to 0~m for the ocean (default value)
and $10^{5}$~Pa for the atmosphere. For the atmosphere, also set the logical
variable \textbf{groundAtK1} to '.\texttt{TRUE}.', which puts the first
level (k=1) at the lower boundary (ground).
727
728 For the cartesian grid case, the Coriolis parameter $f$ is set through the
729 variables \textbf{f0}\textit{\ }and \textbf{beta}\textit{\ }which correspond
730 to the reference Coriolis parameter (in s$^{-1}$) and $\frac{\partial f}{%
731 \partial y}$(in m$^{-1}$s$^{-1}$) respectively. If \textbf{beta }\textit{\ }%
732 is set to a nonzero value, \textbf{f0}\textit{\ }is the value of $f$ at the
733 southern edge of the domain.
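
As a rough illustration, the corresponding entries in the namelist file
\textit{data} might look like the fragment below (a sketch only: the values
are arbitrary, and the exact parameter names and the namelist group they
belong to should be checked in \textit{model/src/ini\_parms.F}):
\begin{verbatim}
 usingCartesianGrid=.TRUE.,
 delx=60*100.E3,
 dely=60*100.E3,
 delz=10*50.,
 f0=1.E-4,
 beta=1.E-11,
\end{verbatim}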
734
735 \begin{itemize}
736 \item topography - full and partial cells
737 \end{itemize}
738
739 The domain bathymetry is read from a file that contains a 2D (x,y) map of
740 depths (in m) for the ocean or pressures (in Pa) for the atmosphere. The
741 file name is represented by the variable \textbf{bathyFile}\textit{. }The
742 file is assumed to contain binary numbers giving the depth (pressure) of the
743 model at each grid cell, ordered with the x coordinate varying fastest. The
744 points are ordered from low coordinate to high coordinate for both axes. The
745 model code applies without modification to enclosed, periodic, and double
746 periodic domains. Periodicity is assumed by default and is suppressed by
747 setting the depths to 0m for the cells at the limits of the computational
748 domain (note: not sure this is the case for the atmosphere). The precision
749 with which to read the binary data is controlled by the integer variable
750 \textbf{readBinaryPrec }which can take the value \texttt{32} (single
precision) or \texttt{64} (double precision). See the Matlab program
\textit{gendata.m} in the \textit{input} directories under
\textit{verification} to see how the bathymetry files are generated for the
case study experiments.
754
755 To use the partial cell capability, the variable \textbf{hFacMin}\textit{\ }%
756 needs to be set to a value between 0 and 1 (it is set to 1 by default)
757 corresponding to the minimum fractional size of the cell. For example if the
758 bottom cell is 500m thick and \textbf{hFacMin}\textit{\ }is set to 0.1, the
759 actual thickness of the cell (i.e. used in the code) can cover a range of
760 discrete values 50m apart from 50m to 500m depending on the value of the
761 bottom depth (in \textbf{bathyFile}) at this point.
762
Note that the bottom depths (or pressures) need not coincide with the
model's levels as deduced from \textbf{delz} or \textbf{delp}. The model
will interpolate the numbers in \textbf{bathyFile} so that they match the
levels obtained from \textbf{delz} or \textbf{delp} and \textbf{hFacMin}.
768
769 (Note: the atmospheric case is a bit more complicated than what is written
770 here I think. To come soon...)
771
772 \begin{itemize}
773 \item time-discretization
774 \end{itemize}
775
776 The time steps are set through the real variables \textbf{deltaTMom }and
777 \textbf{deltaTtracer }(in s) which represent the time step for the momentum
778 and tracer equations, respectively. For synchronous integrations, simply set
779 the two variables to the same value (or you can prescribe one time step only
780 through the variable \textbf{deltaT}). The Adams-Bashforth stabilizing
781 parameter is set through the variable \textbf{abEps }(dimensionless). The
staggered baroclinic time stepping can be activated by setting the logical
variable \textbf{staggerTimeStep} to '.\texttt{TRUE}.'.
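
Again as a sketch with arbitrary values (check \textit{model/src/ini\_parms.F}
for the exact namelist group), these parameters would appear in the
\textit{data} file as, e.g.:
\begin{verbatim}
 deltaTMom=1200.,
 deltaTtracer=1200.,
 abEps=0.1,
\end{verbatim}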
784
785 \subsection{Equation of state}
786
First, because the model equations are written in terms of perturbations, a
reference thermodynamic state needs to be specified. This is done through
the 1D arrays \textbf{tRef} and \textbf{sRef}. \textbf{tRef}
specifies the reference potential temperature profile (in $^{\circ}$C for
the ocean and K for the atmosphere) starting from the level
k=1. Similarly, \textbf{sRef} specifies the reference salinity
profile (in ppt) for the ocean or the reference specific humidity profile
(in g/kg) for the atmosphere.
795
796 The form of the equation of state is controlled by the character variables
797 \textbf{buoyancyRelation}\textit{\ }and \textbf{eosType}\textit{. }\textbf{%
798 buoyancyRelation}\textit{\ }is set to '\texttt{OCEANIC}' by default and
799 needs to be set to '\texttt{ATMOSPHERIC}' for atmosphere simulations. In
800 this case, \textbf{eosType}\textit{\ }must be set to '\texttt{IDEALGAS}'.
For the ocean, two forms of the equation of state are available: linear (set
\textbf{eosType} to '\texttt{LINEAR}') and a polynomial
approximation to the full nonlinear equation (set \textbf{eosType}
to '\texttt{POLYNOMIAL}'). In the linear case, you need to specify the
thermal and haline expansion coefficients represented by the variables
\textbf{tAlpha} (in K$^{-1}$) and \textbf{sBeta} (in ppt$^{-1}$). For the
nonlinear case, you need to generate a file of polynomial coefficients
called \textit{POLY3.COEFFS}. To do this, use the program
\textit{utils/knudsen2/knudsen2.f} under the model tree (a Makefile is
available in the same directory, and you will need to edit the number and
the values of the vertical levels in \textit{knudsen2.f} so that they match
those of your configuration).
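
For instance, a linear equation of state for a simple 4-level ocean setup
might be specified along the following lines (a sketch; the values are
illustrative only):
\begin{verbatim}
 eosType='LINEAR',
 tAlpha=2.E-4,
 sBeta=7.4E-4,
 tRef=20.,15.,10.,5.,
 sRef=4*35.,
\end{verbatim}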
813
814 \subsection{Momentum equations}
815
In this section, we focus for now only on the parameters that you are likely
to change, i.e. those related to forcing and dissipation.
The details relevant to the vector-invariant form of the equations and the
various advection schemes are not covered for the moment. We assume that you
820 use the standard form of the momentum equations (i.e. the flux-form) with
821 the default advection scheme. Also, there are a few logical variables that
822 allow you to turn on/off various terms in the momentum equation. These
823 variables are called \textbf{momViscosity, momAdvection, momForcing,
824 useCoriolis, momPressureForcing, momStepping}\textit{, }and \textit{\ }%
825 \textbf{metricTerms }and are assumed to be set to '.\texttt{TRUE}.' here.
826 Look at the file \textit{model/inc/PARAMS.h }for a precise definition of
827 these variables.
828
829 \begin{itemize}
830 \item initialization
831 \end{itemize}
832
833 The velocity components are initialized to 0 unless the simulation is
834 starting from a pickup file (see section on simulation control parameters).
835
836 \begin{itemize}
837 \item forcing
838 \end{itemize}
839
This section only applies to the ocean. You need to generate wind-stress
data in two files, specified by the variables \textbf{zonalWindFile} and
\textbf{meridWindFile} and corresponding to the zonal and meridional
components of the wind stress, respectively (if you want the stress to be
along the direction of only one of the model horizontal axes, you only need
to generate one file). The format of the files is similar to the bathymetry
file. The zonal (meridional) stress data are assumed to be in Pa and located
at U-points (V-points). As for the bathymetry, the precision with which to
read the binary data is controlled by the variable \textbf{readBinaryPrec}.
See the Matlab program \textit{gendata.m} in the \textit{input} directories
under \textit{verification} to see how simple analytical wind forcing data
are generated for the case study experiments.
852
853 There is also the possibility of prescribing time-dependent periodic
854 forcing. To do this, concatenate the successive time records into a single
855 file (for each stress component) ordered in a (x, y, t) fashion and set the
856 following variables: \textbf{periodicExternalForcing }to '.\texttt{TRUE}.',
\textbf{externForcingPeriod} to the period (in s) with which the forcing
varies (typically 1 month), and \textbf{externForcingCycle} to the repeat
859 time (in s) of the forcing (typically 1 year -- note: \textbf{%
860 externForcingCycle }must be a multiple of \textbf{externForcingPeriod}).
861 With these variables set up, the model will interpolate the forcing linearly
862 at each iteration.
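
For example, monthly forcing repeating over a 360-day year could be
requested with entries such as the following (a sketch; the values are
illustrative, and, as noted above, the cycle must be a multiple of the
period):
\begin{verbatim}
 periodicExternalForcing=.TRUE.,
 externForcingPeriod=2592000.,
 externForcingCycle=31104000.,
\end{verbatim}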
863
864 \begin{itemize}
865 \item dissipation
866 \end{itemize}
867
868 The lateral eddy viscosity coefficient is specified through the variable
869 \textbf{viscAh}\textit{\ }(in m$^{2}$s$^{-1}$). The vertical eddy viscosity
870 coefficient is specified through the variable \textbf{viscAz }(in m$^{2}$s$%
871 ^{-1}$) for the ocean and \textbf{viscAp}\textit{\ }(in Pa$^{2}$s$^{-1}$)
872 for the atmosphere. The vertical diffusive fluxes can be computed implicitly
873 by setting the logical variable \textbf{implicitViscosity }to '.\texttt{TRUE}%
874 .'. In addition, biharmonic mixing can be added as well through the variable
875 \textbf{viscA4}\textit{\ }(in m$^{4}$s$^{-1}$). On a spherical polar grid,
876 you might also need to set the variable \textbf{cosPower} which is set to 0
877 by default and which represents the power of cosine of latitude to multiply
878 viscosity. Slip or no-slip conditions at lateral and bottom boundaries are
879 specified through the logical variables \textbf{no\_slip\_sides}\textit{\ }%
880 and \textbf{no\_slip\_bottom}. If set to '\texttt{.FALSE.}', free-slip
881 boundary conditions are applied. If no-slip boundary conditions are applied
882 at the bottom, a bottom drag can be applied as well. Two forms are
883 available: linear (set the variable \textbf{bottomDragLinear}\textit{\ }in s$%
884 ^{-1}$) and quadratic (set the variable \textbf{bottomDragQuadratic}\textit{%
885 \ }in m$^{-1}$).
886
887 The Fourier and Shapiro filters are described elsewhere.
888
889 \begin{itemize}
890 \item C-D scheme
891 \end{itemize}
892
If you run at a sufficiently coarse resolution, you will need the C-D scheme
for the computation of the Coriolis terms. The variable \textbf{tauCD},
which represents the C-D scheme coupling timescale (in s), needs to be set.
896
897 \begin{itemize}
898 \item calculation of pressure/geopotential
899 \end{itemize}
900
901 First, to run a non-hydrostatic ocean simulation, set the logical variable
902 \textbf{nonHydrostatic} to '.\texttt{TRUE}.'. The pressure field is then
903 inverted through a 3D elliptic equation. (Note: this capability is not
904 available for the atmosphere yet.) By default, a hydrostatic simulation is
905 assumed and a 2D elliptic equation is used to invert the pressure field. The
906 parameters controlling the behaviour of the elliptic solvers are the
907 variables \textbf{cg2dMaxIters}\textit{\ }and \textbf{cg2dTargetResidual }%
908 for the 2D case and \textbf{cg3dMaxIters}\textit{\ }and \textbf{%
909 cg3dTargetResidual }for the 3D case. You probably won't need to alter the
910 default values (are we sure of this?).
911
912 For the calculation of the surface pressure (for the ocean) or surface
913 geopotential (for the atmosphere) you need to set the logical variables
914 \textbf{rigidLid} and \textbf{implicitFreeSurface}\textit{\ }(set one to '.%
915 \texttt{TRUE}.' and the other to '.\texttt{FALSE}.' depending on how you
916 want to deal with the ocean upper or atmosphere lower boundary).
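
For instance, a hydrostatic ocean run with an implicit free surface could
include entries such as these (a sketch; the solver settings are
illustrative and the defaults are usually adequate):
\begin{verbatim}
 implicitFreeSurface=.TRUE.,
 rigidLid=.FALSE.,
 cg2dMaxIters=1000,
 cg2dTargetResidual=1.E-13,
\end{verbatim}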
917
918 \subsection{Tracer equations}
919
920 This section covers the tracer equations i.e. the potential temperature
921 equation and the salinity (for the ocean) or specific humidity (for the
922 atmosphere) equation. As for the momentum equations, we only describe for
923 now the parameters that you are likely to change. The logical variables
924 \textbf{tempDiffusion}\textit{, }\textbf{tempAdvection}\textit{, }\textbf{%
925 tempForcing}\textit{,} and \textbf{tempStepping} allow you to turn on/off
926 terms in the temperature equation (same thing for salinity or specific
927 humidity with variables \textbf{saltDiffusion}\textit{, }\textbf{%
928 saltAdvection}\textit{\ }etc). These variables are all assumed here to be
929 set to '.\texttt{TRUE}.'. Look at file \textit{model/inc/PARAMS.h }for a
930 precise definition.
931
932 \begin{itemize}
933 \item initialization
934 \end{itemize}
935
936 The initial tracer data can be contained in the binary files \textbf{%
937 hydrogThetaFile }and \textbf{hydrogSaltFile}. These files should contain 3D
938 data ordered in an (x, y, r) fashion with k=1 as the first vertical level.
939 If no file names are provided, the tracers are then initialized with the
940 values of \textbf{tRef }and \textbf{sRef }mentioned above (in the equation
941 of state section). In this case, the initial tracer data are uniform in x
942 and y for each depth level.
943
944 \begin{itemize}
945 \item forcing
946 \end{itemize}
947
This part is more relevant for the ocean, as the procedure for the
atmosphere is not completely stabilized at the moment.
950
A combination of flux data and relaxation terms can be used to drive
the tracer equations. For potential temperature, heat flux data (in
W/m$^{2}$) can be stored in the 2D binary file \textbf{surfQfile}.
Alternatively or in addition, the forcing can be specified through a
relaxation term. The SST data towards which the model surface temperature
is restored are assumed to be stored in the 2D binary file
\textbf{thetaClimFile}. The corresponding relaxation time scale coefficient
is set through the variable \textbf{tauThetaClimRelax} (in s). The
959 same procedure applies for salinity with the variable names \textbf{EmPmRfile%
960 }\textit{, }\textbf{saltClimFile}\textit{, }and \textbf{tauSaltClimRelax}%
961 \textit{\ }for freshwater flux (in m/s) and surface salinity (in ppt) data
962 files and relaxation time scale coefficient (in s), respectively. Also for
963 salinity, if the CPP key \textbf{USE\_NATURAL\_BCS} is turned on, natural
964 boundary conditions are applied i.e. when computing the surface salinity
965 tendency, the freshwater flux is multiplied by the model surface salinity
966 instead of a constant salinity value.
967
968 As for the other input files, the precision with which to read the data is
969 controlled by the variable \textbf{readBinaryPrec}. Time-dependent, periodic
970 forcing can be applied as well following the same procedure used for the
971 wind forcing data (see above).
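
Put together, a surface relaxation setup for temperature and salinity might
look like the following in the \textit{data} file (a sketch; the file names
are hypothetical and the time scales illustrative):
\begin{verbatim}
 thetaClimFile='sst_clim.bin',
 tauThetaClimRelax=5184000.,
 saltClimFile='sss_clim.bin',
 tauSaltClimRelax=7776000.,
\end{verbatim}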
972
973 \begin{itemize}
974 \item dissipation
975 \end{itemize}
976
977 Lateral eddy diffusivities for temperature and salinity/specific humidity
978 are specified through the variables \textbf{diffKhT }and \textbf{diffKhS }%
979 (in m$^{2}$/s). Vertical eddy diffusivities are specified through the
980 variables \textbf{diffKzT }and \textbf{diffKzS }(in m$^{2}$/s) for the ocean
981 and \textbf{diffKpT }and \textbf{diffKpS }(in Pa$^{2}$/s) for the
982 atmosphere. The vertical diffusive fluxes can be computed implicitly by
983 setting the logical variable \textbf{implicitDiffusion }to '.\texttt{TRUE}%
984 .'. In addition, biharmonic diffusivities can be specified as well through
985 the coefficients \textbf{diffK4T }and \textbf{diffK4S }(in m$^{4}$/s). Note
986 that the cosine power scaling (specified through \textbf{cosPower }- see the
987 momentum equations section) is applied to the tracer diffusivities
988 (Laplacian and biharmonic) as well. The Gent and McWilliams parameterization
989 for oceanic tracers is described in the package section. Finally, note that
990 tracers can be also subject to Fourier and Shapiro filtering (see the
991 corresponding section on these filters).
992
993 \begin{itemize}
994 \item ocean convection
995 \end{itemize}
996
997 Two options are available to parameterize ocean convection: one is to use
998 the convective adjustment scheme. In this case, you need to set the variable
999 \textbf{cadjFreq}, which represents the frequency (in s) with which the
1000 adjustment algorithm is called, to a non-zero value (if set to a negative
1001 value by the user, the model will set it to the tracer time step). The other
1002 option is to parameterize convection with implicit vertical diffusion. To do
1003 this, set the logical variable \textbf{implicitDiffusion }to '.\texttt{TRUE}%
1004 .' and the real variable \textbf{ivdc\_kappa }to a value (in m$^{2}$/s) you
1005 wish the tracer vertical diffusivities to have when mixing tracers
vertically due to static instabilities. Note that \textbf{cadjFreq} and
\textbf{ivdc\_kappa} cannot both have non-zero values.
1008
1009 \subsection{Simulation controls}
1010
The model ``clock'' is defined by the variable \textbf{deltaTClock} (in s),
which determines the I/O frequencies and is used in tagging output.
Typically, you will set it to the tracer time step for accelerated runs
(otherwise it is simply set to the default time step \textbf{deltaT}).
The frequencies of checkpointing and of dumping the model state are
referenced to this clock (see below).
1017
1018 \begin{itemize}
1019 \item run duration
1020 \end{itemize}
1021
The beginning of a simulation is set by specifying a start time (in s)
through the real variable \textbf{startTime} or by specifying an initial
iteration number through the integer variable \textbf{nIter0}. If these
variables are set to nonzero values, the model will look for a ``pickup''
file \textit{pickup.0000nIter0} to restart the integration. The end
of a simulation is set through the real variable \textbf{endTime} (in s).
Alternatively, you can instead specify the number of time steps to execute
through the integer variable \textbf{nTimeSteps}.
1030
1031 \begin{itemize}
1032 \item frequency of output
1033 \end{itemize}
1034
Real variables defining the frequencies (in s) with which output files are
written to disk need to be set up. \textbf{dumpFreq} controls the frequency
with which the instantaneous state of the model is saved. \textbf{chkPtFreq}
and \textbf{pchkPtFreq} control the output frequency of rolling and
permanent checkpoint files, respectively. See the ``Output files'' section
above for the definition of model state and checkpoint files. In addition,
time-averaged fields can be written out by setting the variable
\textbf{taveFreq} (in s). The precision with which to write the binary data
is controlled by the integer variable \textbf{writeBinaryPrec} (set it to
\texttt{32} or \texttt{64}).
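
A run-control block in the \textit{data} file might therefore look something
like the following (a sketch with arbitrary values; the parameter spellings
follow this section, and the namelist group they belong to is listed in
\textit{model/src/ini\_parms.F}):
\begin{verbatim}
 nIter0=0,
 nTimeSteps=720,
 deltaTClock=1200.,
 dumpFreq=864000.,
 chkPtFreq=432000.,
 pchkPtFreq=864000.,
 taveFreq=864000.,
 writeBinaryPrec=64,
\end{verbatim}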
