$Header: /u/gcmpack/models/MITgcmUV/doc/README,v 1.8 1998/06/22 16:24:50 adcroft Exp $

MITgcmUV Getting Started
========================

o Introduction

This note is a guide to using the MIT General Circulation Model Ultra Versatile
implementation, MITgcmUV. MITgcmUV is a Fortran code that implements the
algorithm described in Marshall et al. 1997, Hill, Adcroft, ...
The MITgcmUV implementation is designed to work efficiently on all classes
of computer platform. It can be used in both a single-processor mode
and a parallel-processor mode. Parallel processing can be either multi-threaded
shared memory, such as that found on CRAY T90 machines, or multi-process
distributed memory. A set of "execution environment" support routines is
used to allow the same numerical code to run on top of a single-process,
multi-threaded, or distributed multi-process configuration.

o Installing

To set up the model on a particular computer, the code tree must be created
and appropriate compile and run scripts set up. For some platforms
the necessary scripts are included in the release - in this case follow
the steps below:

 1. Extract MITgcmUV from the downloadable archive

      tar -xvf checkpoint12.tar

 2. Create a platform-specific makefile
    For example, on a Digital UNIX machine the script "genmake.dec" can
    be used as shown below:

      cd bin
      ../tools/genmake
      cp Makefile.alpha Makefile

 3. Create the header-file dependency entries

      make depend

 4. Compile the code

      make

 5. Copy the input files

      cp ../verification/exp2/[a-z]* ../verification/exp2/*bin .

 6. Run the baseline test case

      setenv PARALLEL 1
      dmpirun -np 2 ../exe/mitgcmuv
|
50 |
This runs a 4 degree global ocean climatological simulation. |
51 |
By default this code is set to use two porcessors splitting |
52 |
the model domain along the equator. Textual output is written |
53 |
to files STDOUT.* and STDERR.* with one file for each process. |
54 |
Model fileds are written to files suffixed .data and .meta |
55 |
These files are written on a per process basis. The .meta |
56 |
file indicates the location and shape of the subdomain in |
57 |
each .data file. |
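
    The six installation steps above can be collected into a single
    command transcript. This is only a sketch, assuming a Digital UNIX
    machine, a csh-family shell, and the checkpoint12 release named above:

```shell
# Sketch only: assumes Digital UNIX, csh, and the checkpoint12 archive.
tar -xvf checkpoint12.tar                # 1. extract the source tree
cd bin
../tools/genmake                         # 2. generate a platform makefile
cp Makefile.alpha Makefile
make depend                              # 3. header-file dependencies
make                                     # 4. compile
cp ../verification/exp2/[a-z]* ../verification/exp2/*bin .   # 5. input files
setenv PARALLEL 1                        # 6. run the baseline test case
dmpirun -np 2 ../exe/mitgcmuv
```

    The transcript depends on the extracted release tree and an MPI
    launcher, so it cannot be run outside that environment.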

o Running

 - Input and output files

   Required files
   ==============
   The model is configured to look for two files with fixed names.
   These files are called "eedata" and "data".
   The file eedata contains "execution environment" data. At present
   this consists of a specification of the number of threads to
   use in X and Y under multithreaded execution.
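
   As a hypothetical illustration only (the exact namelist group and
   variable names should be checked against the release), an eedata
   file requesting two threads in each of X and Y might look like:

```
 &EEPARMS
 nTx=2,
 nTy=2,
 &
```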

 - Serial execution

 - Parallel execution. Threads
     nSx, nSy
     setenv PARALLEL n
     nTx=2, nTy=2
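
   A hypothetical threaded run might then look like the following,
   assuming nTx=2, nTy=2 have been set in eedata and that PARALLEL
   takes the total thread count on this platform:

```shell
# Hypothetical sketch: 2x2 = 4 threads on a shared-memory machine.
# Assumes nTx=2, nTy=2 in eedata; check what value PARALLEL expects
# on your platform.
setenv PARALLEL 4
../exe/mitgcmuv
```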

 - Parallel execution. MPI
     nPx, nPy
     dmpirun
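
   Under MPI the process count given to the launcher should match the
   domain decomposition nPx*nPy. A sketch for a two-process run, using
   the dmpirun launcher shown in the installation steps:

```shell
# Sketch: two MPI processes, matching a decomposition with nPx*nPy = 2.
dmpirun -np 2 ../exe/mitgcmuv
```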

 - Parallel execution. Hybrid

o References
   Web sites for documentation:
     HP
     Digital
     SGI
     Sun
     Linux threads
     CRAY multitasking
   PPT notes