$Header: /u/gcmpack/models/MITgcmUV/doc/README,v 1.7 1998/06/16 23:04:39 cnh Exp $

MITgcmUV Getting Started
========================

o Introduction

This note is a guide to using the MIT General Circulation Model Ultra Versatile
implementation, MITgcmUV. MITgcmUV is a Fortran code that implements the
algorithm described in Marshall et al. 1997, Hill, Adcroft, ...
The MITgcmUV implementation is designed to work efficiently on all classes
of computer platform. It can be used in both a single-processor mode
and a parallel-processor mode. Parallel processing can be either
multi-threaded shared memory, such as that found on CRAY T90 machines,
or multi-process distributed memory. A set of "execution environment"
support routines is used to allow the same numerical code to run on top
of a single-process, multi-threaded or distributed multi-process
configuration.

o Installing

To set up the model on a particular computer, the code tree must be created
and appropriate compile and run scripts set up. For some platforms
the necessary scripts are included in the release - in this case follow
the steps below:

1. Extract MITgcmUV from the downloadable archive
     tar -xvf MITgcmUV.2.0.tar

2. Create a platform-specific makefile
   For example, on a Digital UNIX machine the script "genmake.dec" can
   be used as shown below:
     cd MITgcmUV.2.0/tools
     genmake.dec
     cd ../bin
     ln -s ../tools/Makefile.dec makefile

3. Create the header-file dependency entries
     make depend

4. Compile the code
     make

5. Copy the input files
     cp ../verification/exp2/[a-z]* .

6. Run the baseline test case
     setenv PARALLEL 1
     ../exe/mitgcmuv

This runs a 4-degree global ocean climatological simulation.
By default this code is set to use two processors, splitting
the model domain along the equator. Textual output is written
to files STDOUT.* and STDERR.*, with one file for each process.
Model fields are written to files suffixed .data and .meta.
These files are written on a per-process basis. The .meta
file indicates the location and shape of the subdomain in
each .data file.
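
As a rough sketch (the exact entries and their names may differ in this
release, so treat this as illustrative rather than definitive), a .meta
file describing one process's half of a global domain split along the
equator might read:

     nDims = [   2 ];
     dimList = [
        90,    1,   90,
        40,    1,   20
     ];
     format = [ 'float32' ];
     nrecords = [   1 ];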

o Running

 - Input and output files

   Required files
   ==============
   The model is configured to look for two files with fixed names,
   called "eedata" and "data". The file eedata contains "execution
   environment" data. At present this consists of a specification
   of the number of threads to use in X and Y under multithreaded
   execution. The file data contains the main model parameters.
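
   As an illustrative sketch (check the eedata supplied with the
   verification experiments for the exact form in this release), an
   eedata file selecting a single thread in each direction might read:

     &EEPARMS
      nTx=1,
      nTy=1,
     &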
71 |
|
72 |
- Serial execution |
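
   A minimal sketch, assuming an executable built for a single process
   and a single thread (nSx=nSy=nTx=nTy=1):

     ../exe/mitgcmuv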
73 |
|
74 |
- Parallel execution. Threads |
75 |
nSx, nSy |
76 |
setenv PARALLEL n |
77 |
nTx=2, nTy=2 |
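
   A sketch of a four-way multithreaded run, assuming eedata specifies
   nTx=2 and nTy=2 (so n = nTx * nTy = 4):

     setenv PARALLEL 4
     ../exe/mitgcmuv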
78 |
|
79 |
- Parallel execution. MPI |
80 |
mPx, nPy |
81 |
dmpirun |
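
   A sketch, assuming four processes (nPx * nPy = 4) and that dmpirun
   accepts the usual mpirun-style -np flag:

     dmpirun -np 4 ../exe/mitgcmuv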

 - Parallel execution. Hybrid
   Multithreading and MPI can be combined, with each MPI process
   running several threads, as sketched below.
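
   A sketch combining the two mechanisms above, assuming two MPI
   processes each running two threads (nTx=2, nTy=1 in eedata):

     setenv PARALLEL 2
     dmpirun -np 2 ../exe/mitgcmuv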

o References
  Web sites for documentation:
    HP
    Digital
    SGI
    Sun
    Linux threads
    CRAY multitasking
  PPT notes