============================================
Example: "4x4 Global Simulation with Seasonal Forcing"
============================================

(see also similar set-up in: verification/tutorial_global_oce_latlon/)

From the verification/global_ocean.90x40x15 directory:

Configure and compile the code:
  cd build
  ../../../tools/genmake2 -mods ../code [-of my_platform_optionFile]
  [make Clean]
  make depend
  make
  cd ..

To run:
  cd run
  ln -s ../input/* .
  ../input/prepare_run
  ln -s ../build/mitgcmuv .
  ./mitgcmuv > output.txt
  cd ..

There is comparison output in the directory:
  results/output.txt

Comments:
o The input data is real*4.
o The surface fluxes are derived from monthly means of the NCEP climatology;
  - a matlab script that created the surface flux data files from the
    original NCEP data is provided: ncep2global_ocean.m in the diags_matlab
    directory; it needs editing to adjust the search paths.
o matlab scripts that make simple diagnostics (barotropic stream function,
  overturning stream functions, averaged hydrography, etc.) are provided in
  verification/tutorial_global_oce_latlon/diags_matlab:
  - mit_loadglobal is the top-level script that runs all other scripts
  - mit_globalmovie animates theta, salinity, and the 3-D velocity field
    for a layer "iz", if "meanfields=0"

--------------------------------------------
Additional example:
similar set-up, with the same executable, and using pkg/dwnslp;
to run this 2nd example:
  cd input.dwnslp
  ln -s ../input/* .
  ../input/prepare_run
  ../build/mitgcmuv > output.dwnslp.txt
  cd ..

============================================
Use of "blank tiles" in conjunction with the exch2 package:
============================================

This verification experiment also demonstrates the omission of tiles
(or processors) that are fully land-covered and don't need computation.
The relevant configuration files to be manipulated are:
 * at compile time, in dir. code/:  packages.conf, SIZE.h
 * at run time,   in dir. input/:  data.exch2

Enabling this feature requires the package "exch2"
(see Section 6.2.4 "exch2: Extended Cubed Sphere Topology" of the online
manual), i.e. add "exch2" to code/packages.conf.

The basic layout of the experiment is Nx*Ny = 90x40.
In a single-processor configuration with very small tile sizes
(sNx*sNy=10*10) this can be represented, e.g., via
    & sNx = 10,
    & sNy = 10,
    & OLx = 3,
    & OLy = 3,
    & nSx = 9,
    & nSy = 4,
    & nPx = 1,
    & nPy = 1,
i.e. we use nSx*nSy=9*4=36 virtual processors.
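The tile counts quoted above follow directly from the grid and tile sizes; a quick shell sketch of the arithmetic (the variable names are ours, for illustration, not MITgcm's):

```shell
#!/bin/sh
# 90x40 grid cut into 10x10 tiles:
Nx=90; Ny=40; sNx=10; sNy=10
ntx=$((Nx / sNx))   # tiles in x: 9  (must equal nSx*nPx)
nty=$((Ny / sNy))   # tiles in y: 4  (must equal nSy*nPy)
echo "$ntx x $nty = $((ntx * nty)) tiles"
```

Any split of those 9x4 tiles between the nS* (virtual) and nP* (real processor) counts is valid, which is what the two SIZE.h variants in this section exploit.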

An equivalent parallel setup using 9 virtual and 4 real processors
would look like:
    & sNx = 10,
    & sNy = 10,
    & OLx = 3,
    & OLy = 3,
    & nSx = 9,
    & nSy = 1,
    & nPx = 1,
    & nPy = 4,

In this layout it turns out that tile number 30 is "empty", i.e. fully
land-covered. We wish to remove this tile from our calculation. How to
proceed?

1. Find out which tiles to eliminate via a configuration that uses all tiles
---
1.1 At compile time:
 * add the line "exch2" to packages.conf
 * configure SIZE.h using your desired individual tile size, e.g.
   sNx*sNy=10*10, as follows:
    & sNx = 10,
    & sNy = 10,
    & OLx = 3,
    & OLy = 3,
    & nSx = 9,
    & nSy = 1,
    & nPx = 1,
    & nPy = 4,
   As described above, you are using 4 real processors with 9 virtual
   tiles per processor.
 * compile (don't forget to compile with -mpi or similar)

1.2 At runtime:
 * you need to reflect your basic layout in data.exch2.
   This is simple: since you are not using any non-trivial topology with
   multiple facets (such as the cubed sphere), you only need to specify
   one basic facet of (Nx,Ny)=(90,40) via the following line:
     dimsFacets = 90, 40,
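For illustration, a minimal data.exch2 at this stage could look like the following (W2_EXCH2_PARM01 is the standard namelist group name used by pkg/exch2; the data.exch2 shipped with this experiment is authoritative):

```
 &W2_EXCH2_PARM01
  dimsFacets = 90, 40,
 &
```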
   Then run the model, e.g. via:
     mpirun -np 4 ./mitgcmuv

1.3 Diagnose which tile numbers are empty:
 * from STDOUT.000[0-3] you can infer which tiles are empty using the
   following grep:
     grep "Empty tile" STDOUT.* | awk '{print " " $6 ","}' > empty_tiles.txt
 * In this example there is only one empty tile, and it is #30.
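To see what the grep/awk pipeline above produces, here is a self-contained mock-up. Note the exact wording of the "Empty tile" line in STDOUT may differ between MITgcm versions; the mock lines below are shaped only so that the tile number lands in awk's $6 once grep prepends the file name:

```shell
#!/bin/sh
# Work in a scratch directory with two fake per-process log files.
tmp=$(mktemp -d); cd "$tmp"
echo '(EXCH2) Empty tile number : 30' > STDOUT.0000
echo '(EXCH2) Empty tile number : 31' > STDOUT.0001
# Same pipeline as in the text: with more than one matching file, grep
# prefixes "STDOUT.000x:" to each line, so the tile number becomes
# whitespace-separated field 6.
grep "Empty tile" STDOUT.* | awk '{print " " $6 ","}' > empty_tiles.txt
cat empty_tiles.txt    # each line is ready to paste into blankList
```

Beware that if only one STDOUT file matches the glob, grep omits the file-name prefix and the field positions shift by one.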

2. Configuring with empty tiles removed
---
2.1 At compile time:
We've determined that 1 out of nSx*nPx*nSy*nPy=36 tiles is empty and can
be removed, leaving 36-1=35 non-empty tiles. We are free to re-order
nSx,nPx,nSy,nPy in any way, as long as nSx*nPx*nSy*nPy=35. Here's how it
is chosen in the verification experiment (see file SIZE.h_mpi):
    & sNx = 10,
    & sNy = 10,
    & OLx = 3,
    & OLy = 3,
    & nSx = 7,
    & nSy = 1,
    & nPx = 1,
    & nPy = 5,
for which, as required, nSx*nPx*nSy*nPy = 7*1*1*5 = 35.

2.2 At runtime:
We now need to specify in data.exch2 the number of the empty tile to be
removed. This is done as follows (see file data.exch2.mpi):
   blankList = 30,
If there were more empty tiles, this would be a longer list of tile
numbers, e.g.
   blankList = tile1, tile2, tile3, ...
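Combining this with the facet definition from step 1.2, a data.exch2.mpi could look along these lines (again using the standard W2_EXCH2_PARM01 group name; the file shipped with the experiment is authoritative):

```
 &W2_EXCH2_PARM01
  dimsFacets = 90, 40,
  blankList  = 30,
 &
```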
Now run the model (note that we've selected nPy=5, so in this example we
actually *increase* the number of "real" processors used, despite reducing
the total number of tiles, i.e. "real" plus virtual, from 36 to 35):
   mpirun -np 5 ./mitgcmuv

============================================
Adjoint set-up example:
============================================

Configure and compile the code:
  cd build
  ../../../tools/genmake2 -mods='../code_ad'
  [make Clean]
  make depend
  make adall
  cd ..

To run the code:
  cd input_ad
  ./prepare_run
  ../build/mitgcmuv_ad > output_adm.txt
  cd ..