1 % $Header: /u/gcmpack/manual/part4/sarch.tex,v 1.22 2006/04/04 18:33:52 edhill Exp $
2
3 This chapter focuses on describing the {\bf WRAPPER} environment
4 within which both the core numerics and the pluggable packages
5 operate. The description presented here is intended to be a detailed
6 exposition and contains significant background material, as well as
7 advanced details on working with the WRAPPER. The tutorial sections
8 of this manual (see sections \ref{sect:tutorials} and
9 \ref{sect:tutorialIII}) contain more succinct, step-by-step
10 instructions on running basic numerical experiments, of various types,
11 both sequentially and in parallel. For many projects simply starting
12 from an example code and adapting it to suit a particular situation
13 will be all that is required. The first part of this chapter
14 discusses the MITgcm architecture at an abstract level. In the second
15 part of the chapter we describe practical details of the MITgcm
16 implementation and of current tools and operating system features that
17 are employed.
18
19 \section{Overall architectural goals}
20 \begin{rawhtml}
21 <!-- CMIREDIR:overall_architectural_goals: -->
22 \end{rawhtml}
23
24 Broadly, the goals of the software architecture employed in MITgcm are
25 three-fold
26
27 \begin{itemize}
28 \item We wish to be able to study a very broad range of interesting
29 and challenging rotating fluids problems.
30 \item We wish the model code to be readily targeted to a wide range of
31 platforms.
32 \item On any given platform we would like to be able to achieve
33 performance comparable to an implementation developed and
34 specialized specifically for that platform.
35 \end{itemize}
36
37 These points are summarized in figure
38 \ref{fig:mitgcm_architecture_goals} which conveys the goals of the
39 MITgcm design. The goals lead to a software architecture which at the
40 high-level can be viewed as consisting of
41
42 \begin{enumerate}
43 \item A core set of numerical and support code. This is discussed in
44 detail in section \ref{chap:discretization}.
45
46 \item A scheme for supporting optional ``pluggable'' {\bf packages}
47 (containing for example mixed-layer schemes, biogeochemical schemes,
48 atmospheric physics). These packages are used both to overlay
49 alternate dynamics and to introduce specialized physical content
50 onto the core numerical code. An overview of the {\bf package}
51 scheme is given at the start of part \ref{chap:packagesI}.
52
53 \item A support framework called {\bf WRAPPER} (Wrappable Application
54 Parallel Programming Environment Resource), within which the core
55 numerics and pluggable packages operate.
56 \end{enumerate}
57
58 This chapter focuses on describing the {\bf WRAPPER} environment under
59 which both the core numerics and the pluggable packages function. The
60 description presented here is intended to be a detailed exposition and
61 contains significant background material, as well as advanced details
62 on working with the WRAPPER. The examples section of this manual
63 (part \ref{chap:getting_started}) contains more succinct, step-by-step
64 instructions on running basic numerical experiments both sequentially
65 and in parallel. For many projects simply starting from an example
66 code and adapting it to suit a particular situation will be all that
67 is required.
68
69
70 \begin{figure}
71 \begin{center}
72 \resizebox{!}{2.5in}{\includegraphics{part4/mitgcm_goals.eps}}
73 \end{center}
74 \caption{ The MITgcm architecture is designed to allow simulation of a
75 wide range of physical problems on a wide range of hardware. The
76 computational resource requirements of the applications targeted
77 range from around $10^7$ bytes ($\approx 10$ megabytes) of memory to
78 $10^{11}$ bytes ($\approx 100$ gigabytes). Arithmetic operation
79 counts for the applications of interest range from $10^{9}$ floating
80 point operations to more than $10^{17}$ floating point operations.}
81 \label{fig:mitgcm_architecture_goals}
82 \end{figure}
83
84 \section{WRAPPER}
85 \begin{rawhtml}
86 <!-- CMIREDIR:wrapper: -->
87 \end{rawhtml}
88
89 A significant element of the software architecture utilized in MITgcm
90 is a software superstructure and substructure collectively called the
91 WRAPPER (Wrappable Application Parallel Programming Environment
92 Resource). All numerical and support code in MITgcm is written to
93 ``fit'' within the WRAPPER infrastructure. Writing code to ``fit''
94 within the WRAPPER means that coding has to follow certain, relatively
95 straightforward, rules and conventions (these are discussed further in
96 section \ref{sect:specifying_a_decomposition}).
97
98 The approach taken by the WRAPPER is illustrated in figure
99 \ref{fig:fit_in_wrapper} which shows how the WRAPPER serves to
100 insulate code that fits within it from architectural differences
101 between hardware platforms and operating systems. This allows
102 numerical code to be easily retargeted.
103
104
105 \begin{figure}
106 \begin{center}
107 \resizebox{!}{4.5in}{\includegraphics{part4/fit_in_wrapper.eps}}
108 \end{center}
109 \caption{
110 Numerical code is written to fit within a software support
111 infrastructure called WRAPPER. The WRAPPER is portable and
112 can be specialized for a wide range of specific target hardware and
113 programming environments, without impacting numerical code that fits
114 within the WRAPPER. Codes that fit within the WRAPPER can generally be
115 made to run as fast on a particular platform as codes specially
116 optimized for that platform.}
117 \label{fig:fit_in_wrapper}
118 \end{figure}
119
120 \subsection{Target hardware}
121 \label{sect:target_hardware}
122
123 The WRAPPER is designed to target as broad a range of computer
124 systems as possible. The original development of the WRAPPER took place
125 on a multi-processor, CRAY Y-MP system. On that system, numerical code
126 performance and scaling under the WRAPPER was in excess of that of an
127 implementation that was tightly bound to the CRAY system's proprietary
128 multi-tasking and micro-tasking approach. Later developments have been
129 carried out on uniprocessor and multi-processor Sun systems with both
130 uniform memory access (UMA) and non-uniform memory access (NUMA)
131 designs. Significant work has also been undertaken on x86 cluster
132 systems, Alpha processor based clustered SMP systems, and on
133 cache-coherent NUMA (CC-NUMA) systems such as Silicon Graphics Altix
134 systems. The MITgcm code, operating within the WRAPPER, is also
135 routinely used on large scale MPP systems (for example, Cray T3E and
136 IBM SP systems). In all cases numerical code, operating within the
137 WRAPPER, performs and scales very competitively with equivalent
138 numerical code that has been modified to contain native optimizations
139 for a particular system \cite{hoe-hill:99}.
140
141 \subsection{Supporting hardware neutrality}
142
143 The different systems listed in section \ref{sect:target_hardware} can
144 be categorized in many different ways. For example, one common
145 distinction is between shared-memory parallel systems (SMP and PVP)
146 and distributed memory parallel systems (for example x86 clusters and
147 large MPP systems). This is one example of a difference between
148 compute platforms that can impact an application. Another common
149 distinction is between vector processing systems with highly
150 specialized CPUs and memory subsystems and commodity microprocessor
151 based systems. There are numerous other differences, especially in
152 relation to how parallel execution is supported. To capture the
153 essential differences between different platforms the WRAPPER uses a
154 {\it machine model}.
155
156 \subsection{WRAPPER machine model}
157
158 Applications using the WRAPPER are not written to target just one
159 particular machine (for example an IBM SP2) or just one particular
160 family or class of machines (for example Parallel Vector Processor
161 Systems). Instead the WRAPPER provides applications with an abstract
162 {\it machine model}. The machine model is very general; however, it
163 can easily be specialized to fit, in a computationally efficient
164 manner, any computer architecture currently available to the
165 scientific computing community.
166
167 \subsection{Machine model parallelism}
168 \begin{rawhtml}
169 <!-- CMIREDIR:domain_decomp: -->
170 \end{rawhtml}
171
172 Codes operating under the WRAPPER target an abstract machine that is
173 assumed to consist of one or more logical processors that can compute
174 concurrently. Computational work is divided among the logical
175 processors by allocating ``ownership'' to each processor of a certain
176 set (or sets) of calculations. Each set of calculations owned by a
177 particular processor is associated with a specific region of the
178 physical space that is being simulated; only one processor will be
179 associated with each such region (domain decomposition).
180
181 In a strict sense the logical processors over which work is divided do
182 not need to correspond to physical processors. It is perfectly
183 possible to execute a configuration decomposed for multiple logical
184 processors on a single physical processor. This helps ensure that
185 numerical code that is written to fit within the WRAPPER will
186 parallelize with no additional effort. It is also useful for
187 debugging purposes. Generally, however, the computational domain will
188 be subdivided over multiple logical processors in order to then bind
189 those logical processors to physical processor resources that can
190 compute in parallel.
191
192 \subsubsection{Tiles}
193
194 Computationally, the data structures (\textit{eg.} arrays, scalar
195 variables, etc.) that hold the simulated state are associated with
196 each region of physical space and are allocated to a particular
197 logical processor. We refer to these data structures as being {\bf
198 owned} by the processor to which their associated region of physical
199 space has been allocated. Individual regions that are allocated to
200 processors are called {\bf tiles}. A processor can own more than one
201 tile. Figure \ref{fig:domaindecomp} shows a physical domain being
202 mapped to a set of logical processors, with each processor owning a
203 single region of the domain (a single tile). Except for periods of
204 communication and coordination, each processor computes autonomously,
205 working only with data from the tile (or tiles) that the processor
206 owns. When multiple tiles are allotted to a single processor, each
207 tile is computed independently of the other tiles, in a sequential
208 fashion.
209
210 \begin{figure}
211 \begin{center}
212 \resizebox{5in}{!}{
213 \includegraphics{part4/domain_decomp.eps}
214 }
215 \end{center}
216 \caption{ The WRAPPER provides support for one and two dimensional
217 decompositions of grid-point domains. The figure shows a
218 hypothetical domain of total size $N_{x}N_{y}N_{z}$. This
219 hypothetical domain is decomposed in two-dimensions along the
220 $N_{x}$ and $N_{y}$ directions. The resulting {\bf tiles} are {\bf
221 owned} by different processors. The {\bf owning} processors
222 perform the arithmetic operations associated with a {\bf tile}.
223 Although not illustrated here, a single processor can {\bf own}
224 several {\bf tiles}. Whenever a processor wishes to transfer data
225 between tiles or communicate with other processors it calls a
226 WRAPPER supplied function. } \label{fig:domaindecomp}
227 \end{figure}
228
229
230 \subsubsection{Tile layout}
231
232 Tiles consist of an interior region and an overlap region. The
233 overlap region of a tile corresponds to the interior region of an
234 adjacent tile. In figure \ref{fig:tiledworld} each tile would own the
235 region within the black square and hold duplicate information for
236 overlap regions extending into the tiles to the north, south, east and
237 west. During computational phases a processor will reference data in
238 an overlap region whenever it requires values that lie outside the
239 domain it owns. Periodically processors will make calls to WRAPPER
240 functions to communicate data between tiles, in order to keep the
241 overlap regions up to date (see section
242 \ref{sect:communication_primitives}). The WRAPPER functions can use a
243 variety of different mechanisms to communicate data between tiles.
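
From the application's point of view an overlap update is a single
subroutine call. The schematic fragment below illustrates this; the
routine name and argument list shown are indicative only, and the
actual exchange operations provided by the WRAPPER are described in
section \ref{sect:communication_primitives}.
\begin{verbatim}
C     Sketch only: after a compute phase has updated the interior of
C     each owned tile, one WRAPPER call refreshes the overlap regions
C     with values from the neighboring tiles.
C     ... compute new interior values of the field phi ...
      CALL EXCH_XY_RL( phi, myThid )
C     ... overlap regions of phi are now up to date ...
\end{verbatim}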
244
245 \begin{figure}
246 \begin{center}
247 \resizebox{5in}{!}{
248 \includegraphics{part4/tiled-world.eps}
249 }
250 \end{center}
251 \caption{ A global grid subdivided into tiles.
252 Tiles contain an interior region and an overlap region.
253 Overlap regions are periodically updated from neighboring tiles.
254 } \label{fig:tiledworld}
255 \end{figure}
256
257 \subsection{Communication mechanisms}
258
259 Logical processors are assumed to be able to exchange information
260 between tiles and between each other using at least one of two
261 possible mechanisms.
262
263 \begin{itemize}
264 \item {\bf Shared memory communication}. Under this mode of
265 communication data transfers are assumed to be possible using direct
266 addressing of regions of memory. In this case a CPU is able to read
267 from (and write to) regions of memory ``owned'' by another CPU
268 using simple programming language level assignment operations of
269 the sort shown in figure \ref{fig:simple_assign}. In this way one
270 CPU (CPU1 in the figure) can communicate information to another CPU
271 (CPU2 in the figure) by assigning a particular value to a particular
272 memory location.
273
274 \item {\bf Distributed memory communication}. Under this mode of
275 communication there is no mechanism, at the application code level,
276 for directly addressing regions of memory owned by and visible to
277 another CPU. Instead a communication library must be used as
278 illustrated in figure \ref{fig:comm_msg}. In this case a CPU must
279 call a function in the API of the communication library to
280 communicate data from a tile that it owns to a tile that another CPU
281 owns. By default the WRAPPER binds to the MPI communication library
282 \cite{MPI-std-20} for this style of communication.
283 \end{itemize}
284
285 The WRAPPER assumes that communication will use one of these two
286 styles. The underlying hardware and operating system support
287 for the style used is not specified and can vary from system to system.
288
289 \begin{figure}
290 \begin{verbatim}
291
292 CPU1 | CPU2
293 ==== | ====
294 |
295 a(3) = 8 | WHILE ( a(3) .NE. 8 )
296 | WAIT
297 | END WHILE
298 |
299 \end{verbatim}
300 \caption{In the WRAPPER shared memory communication model, simple writes to an
301 array can be made to be visible to other CPUs at the application code level.
302 So, for example, if one CPU (CPU1 in the figure above) writes the value $8$ to
303 element $3$ of array $a$, then other CPUs (for example CPU2 in the figure above)
304 will be able to see the value $8$ when they read from $a(3)$.
305 This provides a very low latency and high bandwidth communication
306 mechanism.
307 } \label{fig:simple_assign}
308 \end{figure}
309
310 \begin{figure}
311 \begin{verbatim}
312
313 CPU1 | CPU2
314 ==== | ====
315 |
316 a(3) = 8 | WHILE ( a(3) .NE. 8 )
317 CALL SEND( CPU2,a(3) ) | CALL RECV( CPU1, a(3) )
318 | END WHILE
319 |
320 \end{verbatim}
321 \caption{ In the WRAPPER distributed memory communication model
322 data cannot be made directly visible to other CPUs.
323 If one CPU writes the value $8$ to element $3$ of array $a$, then
324 at least one of CPU1 and CPU2 in the figure above will need
325 to call a bespoke communication library in order for the updated
326 value to be communicated between CPUs.
327 } \label{fig:comm_msg}
328 \end{figure}
329
330 \subsection{Shared memory communication}
331 \label{sect:shared_memory_communication}
332
333 Under shared memory communication independent CPUs operate on the
334 same global address space at the application level. This means
335 that CPU 1 can directly write into global data structures that CPU 2
336 ``owns'' using a simple assignment at the application level. This
337 model of memory access is supported at the basic system design
338 level in ``shared-memory'' systems such as PVP systems, SMP systems,
339 and distributed shared memory systems (\textit{eg.} SGI Origin, SGI
340 Altix, and some AMD Opteron systems). On such systems the WRAPPER
341 will generally use simple read and write statements to directly
342 access application data structures when communicating between CPUs.
343
344 In a system where assignment statements, like the one in figure
345 \ref{fig:simple_assign}, map directly to hardware instructions that
346 transport data between CPU and memory banks, this can be a very
347 efficient mechanism for communication. In this case two CPUs, CPU1
348 and CPU2, can communicate simply by reading and writing to an agreed
349 location and following a few basic rules. The latency of this sort of
350 communication is generally not that much higher than the hardware
351 latency of other memory accesses on the system. The bandwidth
352 available between CPUs communicating in this way can be close to the
353 bandwidth of the system's main-memory interconnect. This can make this
354 method of communication very efficient provided it is used
355 appropriately.
356
357 \subsubsection{Memory consistency}
358 \label{sect:memory_consistency}
359
360 When using shared memory communication between multiple processors the
361 WRAPPER level shields user applications from certain counter-intuitive
362 system behaviors. In particular, one issue the WRAPPER layer must
363 deal with is a system's memory model. In general the order of reads
364 and writes expressed by the textual order of an application code may
365 not be the ordering of instructions executed by the processor
366 performing the application. The processor performing the application
367 instructions will always operate so that, for the application
368 instructions the processor is executing, any reordering is not
369 apparent. However, machines are often designed so that
370 reordering of instructions is not hidden from other processors.
371 This means that, in general, even on a shared memory system two
372 processors can observe inconsistent memory values.
373
374 The issue of memory consistency between multiple processors is
375 discussed at length in many computer science papers. From a practical
376 point of view, in order to deal with this issue, shared memory
377 machines all provide some mechanism to enforce memory consistency when
378 it is needed. The exact mechanism employed will vary between systems.
379 For communication using shared memory, the WRAPPER provides a place to
380 invoke the appropriate mechanism to ensure memory consistency for a
381 particular platform.
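
As a schematic example (this is not a fragment of actual WRAPPER
source), consider one CPU writing a value and then setting a
``ready'' flag that another CPU is polling. The data must be made
globally visible before the flag is set, otherwise the second CPU
could see the flag while still reading a stale value. The WRAPPER
routine {\em MEMSYNC()}, discussed under ``Controlling
communication'' later in this chapter, is the hook for the
platform-specific consistency mechanism.
\begin{verbatim}
C     Schematic only: publish a value, force memory consistency,
C     then set the flag that the reading CPU polls.
      a(3)      = 8
      CALL MEMSYNC
      dataReady = 1
\end{verbatim}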
382
383 \subsubsection{Cache effects and false sharing}
384 \label{sect:cache_effects_and_false_sharing}
385
386 Shared-memory machines often have processor-local memory caches
387 which contain mirrored copies of main memory. Automatic cache-coherence
388 protocols are used to maintain consistency between caches on different
389 processors. These cache-coherence protocols typically enforce consistency
390 between regions of memory with large granularity (typically 128 or 256 byte
391 chunks). The coherency protocols employed can be expensive relative to other
392 memory accesses and so care is taken in the WRAPPER (by padding synchronization
393 structures appropriately) to avoid unnecessary coherence traffic.
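
The fragment below sketches the padding idea (illustrative
declarations only, not actual WRAPPER source): per-thread
synchronization words are spaced a whole cache line apart so that an
update by one thread does not invalidate the cache line holding
another thread's word.
\begin{verbatim}
C     Sketch of cache-line padding for per-thread synchronization
C     words.  The assumed line size (16 REAL*8 words = 128 bytes)
C     is illustrative only.
      INTEGER cacheLineSize, MAX_NO_THREADS
      PARAMETER ( cacheLineSize = 16 )
      PARAMETER ( MAX_NO_THREADS = 32 )
C     Only element (1,n) is used by thread n; the remaining
C     cacheLineSize-1 words of each column are padding.
      REAL*8 threadFlag( cacheLineSize, MAX_NO_THREADS )
      COMMON / SYNC_FLAGS_R8 / threadFlag
\end{verbatim}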
394
395 \subsubsection{Operating system support for shared memory}
396
397 Applications running under multiple threads within a single process
398 can use shared memory communication. In this case {\it all} the
399 memory locations in an application are potentially visible to all the
400 compute threads. Multiple threads operating within a single process is
401 the standard mechanism for supporting shared memory that the WRAPPER
402 utilizes. Configuring and launching code to run in multi-threaded mode
403 on specific platforms is discussed in section
404 \ref{sect:multi-threaded-execution}. However, on many systems,
405 potentially very efficient mechanisms for using shared memory
406 communication between multiple processes (in contrast to multiple
407 threads within a single process) also exist. In most cases this works
408 by making a limited region of memory shared between processes. The
409 MMAP \ref{magicgarden} and IPC \ref{magicgarden} facilities in UNIX
410 systems provide this capability as do vendor specific tools like LAPI
411 \ref{IBMLAPI} and IMC \ref{Memorychannel}. Extensions exist for the
412 WRAPPER that allow these mechanisms to be used for shared memory
413 communication. However, these mechanisms are not distributed with the
414 default WRAPPER sources, because of their proprietary nature.
415
416 \subsection{Distributed memory communication}
417 \label{sect:distributed_memory_communication}
418 Many parallel systems are not constructed in a way where it is
419 possible or practical for an application to use shared memory for
420 communication. For example cluster systems consist of individual
421 computers connected by a fast network. On such systems there is no
422 notion of shared memory at the system level. For this sort of system
423 the WRAPPER provides support for communication based on a bespoke
424 communication library (see figure \ref{fig:comm_msg}). The default
425 communication library used is MPI \cite{MPI-std-20}. However, it is
426 relatively straightforward to implement bindings to optimized platform
427 specific communication libraries. For example the work described in
428 \cite{hoe-hill:99} replaced standard MPI communication with a highly
429 optimized library.
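
The fragment below is a minimal sketch of this style of
communication, assuming MPI; it is not actual WRAPPER source, and
application code never issues such calls directly because the
WRAPPER exchange routines hide them. The neighbor-process variables
{\em pidE} and {\em pidW} and the communicator {\em MPI\_COMM\_MODEL}
used here are described later in this chapter.
\begin{verbatim}
C     Minimal MPI sketch: send an edge strip of a tile to the
C     eastern neighbor process and receive the matching strip
C     from the western neighbor.  Array names are illustrative;
C     sNy is the tile y-dimension from SIZE.h.
      INCLUDE 'mpif.h'
      INTEGER ierr, mpiStatus(MPI_STATUS_SIZE)
      REAL*8  eastEdge(sNy), westHalo(sNy)

      CALL MPI_SEND( eastEdge, sNy, MPI_DOUBLE_PRECISION,
     &               pidE, 0, MPI_COMM_MODEL, ierr )
      CALL MPI_RECV( westHalo, sNy, MPI_DOUBLE_PRECISION,
     &               pidW, 0, MPI_COMM_MODEL, mpiStatus, ierr )
\end{verbatim}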
430
431 \subsection{Communication primitives}
432 \label{sect:communication_primitives}
433
434 \begin{figure}
435 \begin{center}
436 \resizebox{5in}{!}{
437 \includegraphics{part4/comm-primm.eps}
438 }
439 \end{center}
440 \caption{Three performance critical parallel primitives are provided
441 by the WRAPPER. These primitives are always used to communicate data
442 between tiles. The figure shows four tiles. The curved arrows
443 indicate exchange primitives which transfer data between the overlap
444 regions at tile edges and interior regions for nearest-neighbor
445 tiles. The straight arrows symbolize global sum operations which
446 connect all tiles. The global sum operation is both a key
447 arithmetic primitive and can serve as a synchronization primitive. A
448 third, barrier, primitive is also provided; it behaves much like the
449 global sum primitive. } \label{fig:communication_primitives}
450 \end{figure}
451
452
453 Optimized communication support is assumed to be potentially available
454 for a small number of communication operations. It is also assumed
455 that communication performance optimizations can be achieved by
456 optimizing a small number of communication primitives. Three
457 optimizable primitives are provided by the WRAPPER
458 \begin{itemize}
459 \item{\bf EXCHANGE} This operation is used to transfer data between
460 interior and overlap regions of neighboring tiles. A number of
461 different forms of this operation are supported. These different
462 forms handle
463 \begin{itemize}
464 \item Data type differences. Sixty-four bit and thirty-two bit
465 fields may be handled separately.
466 \item Bindings to different communication methods. Exchange
467 primitives select between using shared memory or distributed
468 memory communication.
469 \item Transformation operations required when transporting data
470 between different grid regions. Transferring data between faces of
471 a cube-sphere grid, for example, involves a rotation of vector
472 components.
473 \item Forward and reverse mode computations. Derivative calculations
474 require tangent linear and adjoint forms of the exchange
475 primitives.
476 \end{itemize}
477
478 \item{\bf GLOBAL SUM} The global sum operation is a central arithmetic
479 operation for the pressure inversion phase of the MITgcm algorithm.
480 For certain configurations scaling can be highly sensitive to the
481 performance of the global sum primitive. This operation is a
482 collective operation involving all tiles of the simulated domain.
483 Different forms of the global sum primitive exist for handling
484 \begin{itemize}
485 \item Data type differences. Sixty-four bit and thirty-two bit
486 fields may be handled separately.
487 \item Bindings to different communication methods. Global sum
488 primitives select between using shared memory or distributed
489 memory communication.
490 \item Forward and reverse mode computations. Derivative calculations
491 require tangent linear and adjoint forms of the global sum
492 primitives.
493 \end{itemize}
494
495 \item{\bf BARRIER} The WRAPPER provides a global synchronization
496 function called barrier. This is used to synchronize computations
497 over all tiles. The {\bf BARRIER} and {\bf GLOBAL SUM} primitives
498 have much in common and in some cases use the same underlying code.
499 \end{itemize}
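
A schematic use of the barrier primitive is sketched below; the
routine name follows the WRAPPER convention of passing the thread
number {\em myThid}, but the exact interface may differ between code
versions.
\begin{verbatim}
C     Schematic only: each thread completes its share of a compute
C     phase, then all threads synchronize before any of them starts
C     the next phase.
C     ... compute phase over the tiles owned by this thread ...
      CALL BARRIER( myThid )
C     ... all threads have now completed the phase above ...
\end{verbatim}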
500
501
502 \subsection{Memory architecture}
503
504 The WRAPPER machine model aims to efficiently target both systems with
505 highly pipelined memory architectures and systems with deep memory
506 hierarchies that favor memory reuse. This is achieved by supporting a
507 flexible tiling strategy as shown in figure \ref{fig:tiling-strategy}.
508 Within a CPU computations are carried out sequentially on each tile
509 in turn. By reshaping tiles according to the target platform it is
510 possible to automatically tune code to improve memory performance.
511 On a vector machine a given domain might be sub-divided into a few
512 long, thin regions. On a commodity microprocessor based system, however,
513 the same domain could be simulated using many more, smaller
514 sub-domains.
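
The two hypothetical {\em SIZE.h} fragments below show how the same
$256 \times 128$ domain might be tiled for the two kinds of system;
the values are illustrative only, and the parameters themselves are
described in section \ref{sect:specifying_a_decomposition}.
\begin{verbatim}
C     Vector system: a few long, thin tiles with long inner loops.
      PARAMETER ( sNx = 256, sNy = 32, nSx = 1, nSy = 4 )

C     Cache-based microprocessor system: many compact tiles that
C     fit in cache and promote memory reuse.
      PARAMETER ( sNx =  32, sNy = 32, nSx = 8, nSy = 4 )
\end{verbatim}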
515
516
517 \begin{figure}
518 \begin{center}
519 \resizebox{5in}{!}{
520 \includegraphics{part4/tiling_detail.eps}
521 }
522 \end{center}
523 \caption{The tiling strategy that the WRAPPER supports allows tiles
524 to be shaped to suit the underlying system memory architecture.
525 Compact tiles that lead to greater memory reuse can be used on cache
526 based systems (upper half of figure) with deep memory hierarchies, while long tiles
527 with large inner loops can be used to exploit vector systems having
528 highly pipelined memory systems.
529 } \label{fig:tiling-strategy}
530 \end{figure}
531
532 \newpage
533 \subsection{Summary}
534 Following the discussion above, the machine model that the WRAPPER
535 presents to an application has the following characteristics
536
537 \begin{itemize}
538 \item The machine consists of one or more logical processors.
539 \item Each processor operates on tiles that it owns.
540 \item A processor may own more than one tile.
541 \item Processors may compute concurrently.
542 \item Exchange of information between tiles is handled by the
543 machine (WRAPPER) not by the application.
544 \end{itemize}
545 Behind the scenes this allows the WRAPPER to adapt the machine model
546 functions to exploit hardware on which
547 \begin{itemize}
548 \item Processors may be able to communicate very efficiently with each
549 other using shared memory.
550 \item An alternative communication mechanism based on a relatively
551 simple inter-process communication API may be required.
552 \item Shared memory may not necessarily obey sequential consistency,
553 however some mechanism will exist for enforcing memory consistency.
554 \item Memory consistency that is enforced at the hardware level
555 may be expensive. Unnecessary triggering of consistency protocols
556 should be avoided.
557 \item Memory access patterns may need to be either repetitive or highly
558 pipelined for optimum hardware performance.
559 \end{itemize}
560
561 This generic model captures the essential hardware ingredients
562 of almost all successful scientific computer systems designed in the
563 last 50 years.
564
565 \section{Using the WRAPPER}
566 \begin{rawhtml}
567 <!-- CMIREDIR:using_the_wrapper: -->
568 \end{rawhtml}
569
570 In order to support maximum portability the WRAPPER is implemented
571 primarily in sequential Fortran 77. At a practical level the key steps
572 provided by the WRAPPER are
573 \begin{enumerate}
574 \item specifying how a domain will be decomposed
575 \item starting a code in either sequential or parallel modes of operation
576 \item controlling communication between tiles and between concurrently
577 computing CPUs.
578 \end{enumerate}
579 This section describes the details of each of these operations.
580 Section \ref{sect:specifying_a_decomposition} explains how the way in
581 which a domain is decomposed (or composed) is expressed. Section
582 \ref{sect:starting_a_code} describes practical details of running
583 codes in various different parallel modes on contemporary computer
584 systems. Section \ref{sect:controlling_communication} explains the
585 internal information that the WRAPPER uses to control how information
586 is communicated between tiles.
587
588 \subsection{Specifying a domain decomposition}
589 \label{sect:specifying_a_decomposition}
590
591 At its heart much of the WRAPPER works only in terms of a collection of tiles
592 which are interconnected to each other. This is also true of application
593 code operating within the WRAPPER. Application code is written as a series
594 of compute operations, each of which operates on a single tile. If
595 application code needs to perform operations involving data
596 associated with another tile, it uses a WRAPPER function to obtain
597 that data.
598 The specification of how a global domain is constructed from tiles, or alternatively
599 how a global domain is decomposed into tiles, is made in the file {\em SIZE.h}.
600 This file defines the following parameters \\
601
602 \fbox{
603 \begin{minipage}{4.75in}
604 Parameters: {\em sNx, sNy, OLx, OLy, nSx, nSy, nPx, nPy} \\
605 File: {\em model/inc/SIZE.h}
606 \end{minipage}
607 } \\
608
609 Together these parameters define a tiling decomposition of the style shown in
610 figure \ref{fig:labelled_tile}. The parameters {\em sNx} and {\em sNy} define
611 the size of an individual tile. The parameters {\em OLx} and {\em OLy} define the
612 maximum size of the overlap extent. This must be set to the maximum width
613 of the computation stencil that the numerical code's finite-difference operations
614 require between overlap region updates. The maximum overlap required
615 by any of the operations in the MITgcm code distributed with this release is three grid
616 points. This is set by the requirements of the $\nabla^4$ dissipation and
617 diffusion operator. Code modifications and enhancements that involve adding wide
618 finite-difference stencils may require increasing {\em OLx} and {\em OLy}.
619 Setting {\em OLx} and {\em OLy} to too large a value will decrease code
620 performance (because redundant computations will be performed); however, it will
621 not cause any other problems.
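
As a simple illustration (schematic code only, not an extract from
the model; the trailing {\em bi},{\em bj} tile indices are explained
below), a stencil that reads one point on either side of each
location can sweep the whole interior of a tile without
communication, because an overlap of width one already holds copies
of the neighboring tiles' edge values:
\begin{verbatim}
C     Schematic: a five-point average that reads i-1, i+1, j-1, j+1.
C     The overlap region supplies the values just outside the
C     interior, so no communication is needed inside the loop.
      DO j=1,sNy
       DO i=1,sNx
        b(i,j,bi,bj) = 0.25*( a(i-1,j,bi,bj) + a(i+1,j,bi,bj)
     &                      + a(i,j-1,bi,bj) + a(i,j+1,bi,bj) )
       ENDDO
      ENDDO
\end{verbatim}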
622
623 \begin{figure}
624 \begin{center}
625 \resizebox{5in}{!}{
626 \includegraphics{part4/size_h.eps}
627 }
628 \end{center}
629 \caption{ The three level domain decomposition hierarchy employed by the
630 WRAPPER. A domain is composed of tiles. Multiple tiles can be allocated
631 to a single process. Multiple processes can exist, each with multiple tiles.
632 Tiles within a process can be spread over multiple compute threads.
633 } \label{fig:labelled_tile}
634 \end{figure}
635
636 The parameters {\em nSx} and {\em nSy} specify the number of tiles that will
637 be created within a single process. Each of these tiles will have internal
638 dimensions of {\em sNx} and {\em sNy}. If, when the code is executed, these tiles are
639 allocated to different threads of a process that are then bound to
640 different physical processors (see the multi-threaded
641 execution discussion in section \ref{sect:starting_the_code}) then
642 computation will be performed concurrently on each tile. However, it is also
643 possible to run the same decomposition within a process running a single thread on
644 a single processor. In this case the tiles will be computed sequentially.
645 If the decomposition is run in a single process running multiple threads
646 but attached to a single physical processor, then, in general, the computation
647 for different tiles will be interleaved by system level software.
648 This too is a valid mode of operation.
649
650 The parameters {\em sNx, sNy, OLx, OLy, nSx} and {\em nSy} are used extensively by
651 numerical code. The settings of {\em sNx, sNy, OLx} and {\em OLy}
652 are used to form the loop ranges for many numerical calculations and
653 to provide dimensions for arrays holding numerical state.
654 The parameters {\em nSx} and {\em nSy} are used in conjunction with the thread number
655 parameter {\em myThid}. Much of the numerical code operating within the
656 WRAPPER takes the form
657 \begin{verbatim}
658 DO bj=myByLo(myThid),myByHi(myThid)
659 DO bi=myBxLo(myThid),myBxHi(myThid)
660 :
661 a block of computations ranging
662 over 1,sNx +/- OLx and 1,sNy +/- OLy grid points
663 :
664 ENDDO
665 ENDDO
666
667 communication code to sum a number or maybe update
668 tile overlap regions
669
670 DO bj=myByLo(myThid),myByHi(myThid)
671 DO bi=myBxLo(myThid),myBxHi(myThid)
672 :
673 another block of computations ranging
674 over 1,sNx +/- OLx and 1,sNy +/- OLy grid points
675 :
676 ENDDO
677 ENDDO
678 \end{verbatim}
679 The variables {\em myBxLo(myThid), myBxHi(myThid), myByLo(myThid)} and {\em
680 myByHi(myThid)} set the bounds of the loops in {\em bi} and {\em bj} in this
681 schematic. These variables specify the subset of the tiles in
682 the range 1,{\em nSx} and 1,{\em nSy} that the logical processor bound to
683 thread number {\em myThid} owns. The thread number variable {\em myThid}
684 ranges from 1 to the total number of threads requested at execution time.
685 For each value of {\em myThid} the loop scheme above will step sequentially
686 through the tiles owned by that thread. However, different threads will
687 have different ranges of tiles assigned to them, so that separate threads can
688 compute iterations of the {\em bi}, {\em bj} loop concurrently.
689 Within a {\em bi}, {\em bj} loop
690 computation is performed concurrently over as many processes and threads
691 as there are physical processors available to compute.
692
693 An exception to the use of {\em bi} and {\em bj} in loops arises in the
694 exchange routines used when the exch2 package is used with the cubed
695 sphere. In this case {\em bj} is generally set to 1 and the loop runs over
696 {\em bi}. Within the loop {\em bi} is used to retrieve the tile number,
697 which is then used to reference exchange parameters.
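
Schematically such a loop has the following form; this is
illustrative only, the name of the tile-list array is hypothetical,
and the actual exch2 data structures are described in the exch2
package documentation.
\begin{verbatim}
C     Schematic exch2-style loop: bj is fixed at 1 and bi indexes
C     the tiles owned by this thread.  The array used to look up
C     the global tile number is a placeholder name.
      bj = 1
      DO bi=myBxLo(myThid),myBxHi(myThid)
        thisTile = exch2_tileList(bi)
C       ... use thisTile to reference exchange parameters ...
      ENDDO
\end{verbatim}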
698
699 The amount of computation that can be embedded within
700 a single loop over {\em bi} and {\em bj} varies for different parts of the
701 MITgcm algorithm. Figure \ref{fig:bibj_extract} shows a code extract
702 from the two-dimensional implicit elliptic solver. This portion of the
703 code computes the l2-norm of a vector whose elements are held in
704 the array {\em cg2d\_r}, writing the final result to the scalar variable
705 {\em err}. In this case, because the l2-norm requires a global reduction,
706 the {\em bi},{\em bj} loop only contains one statement. This computation
707 phase is then followed by a communication phase in which all threads and
708 processes must participate. However,
709 in other areas of the MITgcm code entire subsections of code are within
710 a single {\em bi},{\em bj} loop. For example the evaluation of all
711 the momentum equation prognostic terms ( see {\em S/R DYNAMICS()})
712 is within a single {\em bi},{\em bj} loop.
713
714 \begin{figure}
715 \begin{verbatim}
716 REAL*8 cg2d_r(1-OLx:sNx+OLx,1-OLy:sNy+OLy,nSx,nSy)
717 REAL*8 err
718 :
719 :
720 other computations
721 :
722 :
723 err = 0.
724 DO bj=myByLo(myThid),myByHi(myThid)
725 DO bi=myBxLo(myThid),myBxHi(myThid)
726 DO J=1,sNy
727 DO I=1,sNx
728 err = err +
729 & cg2d_r(I,J,bi,bj)*cg2d_r(I,J,bi,bj)
730 ENDDO
731 ENDDO
732 ENDDO
733 ENDDO
734
735 CALL GLOBAL_SUM_R8( err , myThid )
736 err = SQRT(err)
737
738 \end{verbatim}
739 \caption{Example of numerical code for calculating
740 the l2-norm of a vector within the WRAPPER. Notice that
741 under the WRAPPER arrays such as {\em cg2d\_r} have two extra trailing
742 dimensions. These rightmost indices are tile indexes. Different
743 threads within a single process operate on different ranges of tile
744 index, as controlled by the settings of
745 {\em myByLo, myByHi, myBxLo} and {\em myBxHi}.
746 } \label{fig:bibj_extract}
747 \end{figure}
748
749 The final decomposition parameters are {\em nPx} and {\em nPy}. These parameters
750 are used to indicate to the WRAPPER level how many processes (each with
751 {\em nSx}$\times${\em nSy} tiles) will be used for this simulation.
752 This information is needed during initialization and during I/O phases.
753 However, unlike the variables {\em sNx, sNy, OLx, OLy, nSx} and {\em nSy}
754 the values of {\em nPx} and {\em nPy} are absent
755 from the core numerical and support code.
756
757 \subsubsection{Examples of {\em SIZE.h} specifications}
758
759 The following different {\em SIZE.h} parameter settings illustrate how to
760 interpret the values of {\em sNx, sNy, OLx, OLy, nSx, nSy, nPx}
761 and {\em nPy}.
762 \begin{enumerate}
763 \item
764 \begin{verbatim}
765 PARAMETER (
766 & sNx = 90,
767 & sNy = 40,
768 & OLx = 3,
769 & OLy = 3,
770 & nSx = 1,
771 & nSy = 1,
772 & nPx = 1,
773 & nPy = 1)
774 \end{verbatim}
775 This sets up a single tile with x-dimension of ninety grid points, y-dimension of
776 forty grid points, and x and y overlaps of three grid points each.
777 \item
778 \begin{verbatim}
779 PARAMETER (
780 & sNx = 45,
781 & sNy = 20,
782 & OLx = 3,
783 & OLy = 3,
784 & nSx = 1,
785 & nSy = 1,
786 & nPx = 2,
787 & nPy = 2)
788 \end{verbatim}
789 This sets up tiles with x-dimension of forty-five grid points, y-dimension of
790 twenty grid points, and x and y overlaps of three grid points each. There are
791 four tiles allocated to four separate processes ({\em nPx=2,nPy=2}) and
792 arranged so that the global domain size is again ninety grid points in x and
793 forty grid points in y. In general the formula for global grid size (held in
794 model variables {\em Nx} and {\em Ny}) is
795 \begin{verbatim}
796 Nx = sNx*nSx*nPx
797 Ny = sNy*nSy*nPy
798 \end{verbatim}
799 \item
800 \begin{verbatim}
801 PARAMETER (
802 & sNx = 90,
803 & sNy = 10,
804 & OLx = 3,
805 & OLy = 3,
806 & nSx = 1,
807 & nSy = 2,
808 & nPx = 1,
809 & nPy = 2)
810 \end{verbatim}
811 This sets up tiles with x-dimension of ninety grid points, y-dimension of
812 ten grid points, and x and y overlaps of three grid points each. There are
813 four tiles allocated to two separate processes ({\em nPy=2}), each of which
814 has two separate sub-domains ({\em nSy=2}).
815 The global domain size is again ninety grid points in x and
816 forty grid points in y. The two sub-domains in each process will be computed
817 sequentially if they are given to a single thread within a single process.
818 Alternatively if the code is invoked with multiple threads per process
819 the two domains in y may be computed concurrently.
820 \item
821 \begin{verbatim}
822 PARAMETER (
823 & sNx = 32,
824 & sNy = 32,
825 & OLx = 3,
826 & OLy = 3,
827 & nSx = 6,
828 & nSy = 1,
829 & nPx = 1,
830 & nPy = 1)
831 \end{verbatim}
832 This sets up tiles with x-dimension of thirty-two grid points, y-dimension of
833 thirty-two grid points, and x and y overlaps of three grid points each.
834 There are six tiles allocated to six separate logical processors ({\em nSx=6}).
835 This set of values can be used for a cube sphere calculation.
836 Each tile of size $32 \times 32$ represents a face of the
837 cube. Initializing the tile connectivity correctly (see section
838 \ref{sect:cube_sphere_communication}) allows the rotations associated with
839 moving between the six cube faces to be embedded within the
840 tile-tile communication code.
841 \end{enumerate}
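
Applying the global grid size formula given above to this last
configuration, the six $32 \times 32$ faces are held side by side in
x, so that
\begin{verbatim}
 Nx = sNx*nSx*nPx = 32*6*1 = 192
 Ny = sNy*nSy*nPy = 32*1*1 =  32
\end{verbatim}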
842
843
844 \subsection{Starting the code}
845 \label{sect:starting_the_code}
846 When code is started under the WRAPPER, execution begins in a main routine {\em
847 eesupp/src/main.F} that is owned by the WRAPPER. Control is transferred
848 to the application through a routine called {\em THE\_MODEL\_MAIN()}
849 once the WRAPPER has initialized correctly and has created the necessary variables
850 to support subsequent calls to communication routines
851 by the application code. The startup calling sequence followed by the
852 WRAPPER is shown in figure \ref{fig:wrapper_startup}.
853
854 \begin{figure}
855 {\footnotesize
856 \begin{verbatim}
857
858 MAIN
859 |
860 |--EEBOOT :: WRAPPER initialization
861 | |
862 | |-- EEBOOT_MINIMAL :: Minimal startup. Just enough to
863 | | allow basic I/O.
864 | |-- EEINTRO_MSG :: Write startup greeting.
865 | |
866 | |-- EESET_PARMS :: Set WRAPPER parameters
867 | |
868 | |-- EEWRITE_EEENV :: Print WRAPPER parameter settings
869 | |
870 | |-- INI_PROCS :: Associate processes with grid regions.
871 | |
872 | |-- INI_THREADING_ENVIRONMENT :: Associate threads with grid regions.
873 | |
874 | |--INI_COMMUNICATION_PATTERNS :: Initialize between tile
875 | :: communication data structures
876 |
877 |
878 |--CHECK_THREADS :: Validate multiple thread start up.
879 |
880 |--THE_MODEL_MAIN :: Numerical code top-level driver routine
881
882
883 \end{verbatim}
884 }
885 \caption{Main stages of the WRAPPER startup procedure.
886 This process precedes the transfer of control to application code, which
887 occurs through the procedure {\em THE\_MODEL\_MAIN()}.
888 } \label{fig:wrapper_startup}
889 \end{figure}
890
891 \subsubsection{Multi-threaded execution}
892 \label{sect:multi-threaded-execution}
893 Prior to transferring control to the procedure {\em THE\_MODEL\_MAIN()} the
894 WRAPPER may cause several coarse grain threads to be initialized. The routine
895 {\em THE\_MODEL\_MAIN()} is called once for each thread and is passed a single
896 stack argument which is the thread number, stored in the
897 variable {\em myThid}. In addition to specifying a decomposition with
898 multiple tiles per process (see section \ref{sect:specifying_a_decomposition}),
899 configuring and starting a code to run using multiple threads requires the following
900 steps.\\
901
902 \paragraph{Compilation}
903 First the code must be compiled with appropriate multi-threading directives
904 active in the file {\em main.F} and with appropriate compiler flags
905 to request multi-threading support. The header files
906 {\em MAIN\_PDIRECTIVES1.h} and {\em MAIN\_PDIRECTIVES2.h}
907 contain directives compatible with compilers for Sun, Compaq, SGI,
908 Hewlett-Packard SMP systems and CRAY PVP systems. These directives can be
909 activated by using the compile time flags {\em -DTARGET\_SUN},
911 {\em -DTARGET\_DEC}, {\em -DTARGET\_SGI}, {\em -DTARGET\_HP}
912 or {\em -DTARGET\_CRAY\_VECTOR} respectively. Compiler options
913 for invoking multi-threaded compilation vary from system to system
914 and from compiler to compiler. The options will be described
915 in the individual compiler documentation. For the Fortran compiler
916 from Sun the following options are needed to correctly compile
917 multi-threaded code
918 \begin{verbatim}
919 -stackvar -explicitpar -vpara -noautopar
920 \end{verbatim}
921 These options are specific to the Sun compiler. Other compilers
922 will use different syntax that will be described in their
923 documentation. The effect of these options is as follows
924 \begin{enumerate}
925 \item {\bf -stackvar} Causes all local variables to be allocated in stack
926 storage. This is necessary for local variables to ensure that they are private
927 to their thread. Note, when using this option it may be necessary to override
928 the default limit on stack-size that the operating system assigns to a process.
929 This can normally be done by changing the settings of the command shell's
930 {\em stack-size} limit variable. However, on some systems changing this limit
931 will require privileged administrator access to modify system parameters.
932
933 \item {\bf -explicitpar} Requests that multiple threads be spawned
934 in response to explicit directives in the application code. These
935 directives are inserted with syntax appropriate to the particular target
936 platform when, for example, the {\em -DTARGET\_SUN} flag is selected.
937
938 \item {\bf -vpara} This causes the compiler to describe the multi-threaded
939 configuration it is creating. This is not required
940 but it can be useful when troubleshooting.
941
942 \item {\bf -noautopar} This inhibits any automatic multi-threaded
943 parallelization the compiler may otherwise generate.
944
945 \end{enumerate}
946
947
948 An example of valid settings for the {\em eedata} file for a
949 domain with two subdomains in y and running with two threads is shown
950 below
951 \begin{verbatim}
952 nTx=1,nTy=2
953 \end{verbatim}
954 This set of values will cause computations to stay within a single
955 thread when moving across the {\em nSx} sub-domains. In the y-direction,
956 however, sub-domains will be split equally between two threads.
957
958 \paragraph{Multi-threading files and parameters} The following
959 files and variables are used in setting up multi-threaded execution.\\
960
961 \fbox{
962 \begin{minipage}{4.75in}
963 File: {\em eesupp/inc/MAIN\_PDIRECTIVES1.h}\\
964 File: {\em eesupp/inc/MAIN\_PDIRECTIVES2.h}\\
965 File: {\em model/src/THE\_MODEL\_MAIN.F}\\
966 File: {\em eesupp/src/MAIN.F}\\
967 File: {\em tools/genmake2}\\
968 File: {\em eedata}\\
969 CPP: {\em TARGET\_SUN}\\
970 CPP: {\em TARGET\_DEC}\\
971 CPP: {\em TARGET\_HP }\\
972 CPP: {\em TARGET\_SGI}\\
973 CPP: {\em TARGET\_CRAY\_VECTOR}\\
974 Parameter: {\em nTx}\\
975 Parameter: {\em nTy}
976 \end{minipage}
977 } \\
978
979 \subsubsection{Multi-process execution}
980 \label{sect:multi-process-execution}
981
982 Despite its appealing programming model, multi-threaded execution
983 remains less common than multi-process execution. One major reason for
984 this is that many system libraries are still not ``thread-safe''. This
985 means that, for example, on some systems it is not safe to call system
986 routines to perform I/O when running in multi-threaded mode (except,
987 perhaps, in a limited set of circumstances). Another reason is that
988 support for multi-threaded programming models varies between systems.
989
990 Multi-process execution is more ubiquitous. In order to run code in a
991 multi-process configuration a decomposition specification (see section
992 \ref{sect:specifying_a_decomposition}) is given (in which at least
993 one of the parameters {\em nPx} or {\em nPy} will be greater than one)
994 and then, as for multi-threaded operation, appropriate compile time
995 and run time steps must be taken.
996
997 \paragraph{Compilation} Multi-process execution under the WRAPPER
998 assumes that portable MPI libraries are available for controlling
999 the start-up of multiple processes. The MPI libraries are not
1000 required for performance critical communication, although they are
1001 usually used. However, in order to simplify the task of controlling
1002 and coordinating the start up of a large number (hundreds and possibly
1003 even thousands) of copies of the same program, MPI is used. The calls
1004 to the MPI multi-process startup routines must be activated at compile
1005 time. Currently MPI libraries are invoked by specifying the
1006 appropriate options file with the {\tt -of} flag when running the {\em
1007 genmake2} script, which generates the Makefile for compiling and
1008 linking MITgcm. (Previously this was done by setting the {\em
1009 ALLOW\_USE\_MPI} and {\em ALWAYS\_USE\_MPI} flags in the {\em
1010 CPP\_EEOPTIONS.h} file.) More detailed information about the use of
1011 {\em genmake2} for specifying
1012 local compiler flags is located in section \ref{sect:genmake}.\\
1013
1014
1015 \fbox{
1016 \begin{minipage}{4.75in}
1017 Directory: {\em tools/build\_options}\\
1018 File: {\em tools/genmake2}
1019 \end{minipage}
1020 } \\
1021 \paragraph{\bf Execution} The mechanics of starting a program in
1022 multi-process mode under MPI are not standardized. Documentation
1023 associated with the distribution of MPI installed on a system will
1024 describe how to start a program using that distribution. For the
1025 open-source MPICH system, the MITgcm program can be started using a
1026 command such as
1027 \begin{verbatim}
1028 mpirun -np 64 -machinefile mf ./mitgcmuv
1029 \end{verbatim}
1030 In this example the text {\em -np 64} specifies the number of
1031 processes that will be created. The numeric value {\em 64} must be
1032 equal to the product of the processor grid settings of {\em nPx} and
1033 {\em nPy} in the file {\em SIZE.h}. The {\em -machinefile mf} option specifies
1034 that a text file called ``mf'' will be read to get a list of processor
1035 names on which the sixty-four processes will execute. The syntax of
1036 this file is specified by the MPI distribution.
1037 \\
1038
1039 \fbox{
1040 \begin{minipage}{4.75in}
1041 File: {\em SIZE.h}\\
1042 Parameter: {\em nPx}\\
1043 Parameter: {\em nPy}
1044 \end{minipage}
1045 } \\
1046
1047
1048 \paragraph{Environment variables}
1049 On most systems multi-threaded execution also requires the setting of
1050 a special environment variable. On many machines this variable is
1051 called PARALLEL and its value should be set to the number of parallel
1052 threads required. Generally the help or manual pages associated with
1053 the multi-threaded compiler on a machine will explain how to set the
1054 required environment variables.
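
For example, with a C-shell-like shell and a configuration using two
threads the setting might be as shown below; the variable name to
use on a particular machine should be checked against the local
documentation.
\begin{verbatim}
 setenv PARALLEL 2
\end{verbatim}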
1055
1056 \paragraph{Runtime input parameters}
1057 Finally the file {\em eedata} needs to be configured to indicate the
1058 number of threads to be used in the x and y directions. The variables
1059 {\em nTx} and {\em nTy} in this file are used to specify the
1060 information required. The product of {\em nTx} and {\em nTy} must be
1061 equal to the number of threads spawned, i.e. the setting of the
1062 environment variable PARALLEL. The value of {\em nTx} must subdivide
1063 the number of sub-domains in x ({\em nSx}) exactly. The value of {\em
1064 nTy} must subdivide the number of sub-domains in y ({\em nSy})
1065 exactly. The multiprocess startup of the MITgcm executable {\em
1066 mitgcmuv} is controlled by the routines {\em EEBOOT\_MINIMAL()} and
1067 {\em INI\_PROCS()}. The first routine performs basic steps required to
1068 make sure each process is started and has a textual output stream
1069 associated with it. By default two output files are opened for each
1070 process with names {\bf STDOUT.NNNN} and {\bf STDERR.NNNN}. The {\bf
1071 NNNN} part of the name is filled in with the process number so that
1072 process number 0 will create output files {\bf STDOUT.0000} and {\bf
1073 STDERR.0000}, process number 1 will create output files {\bf
1074 STDOUT.0001} and {\bf STDERR.0001}, etc. These files are used for
1075 reporting status and configuration information and for reporting error
1076 conditions on a process by process basis. The {\em EEBOOT\_MINIMAL()}
1077 procedure also sets the variables {\em myProcId} and {\em
1078 MPI\_COMM\_MODEL}. These variables are related to processor
1079 identification and are used later in the routine {\em INI\_PROCS()} to
1080 allocate tiles to processes.
1081
1082 Allocation of processes to tiles is controlled by the routine {\em
1083 INI\_PROCS()}. For each process this routine sets the variables {\em
1084 myXGlobalLo} and {\em myYGlobalLo}. These variables specify, in
1085 index space, the coordinates of the southernmost and westernmost
1086 corner of the southernmost and westernmost tile owned by this process.
1087 The variables {\em pidW}, {\em pidE}, {\em pidS} and {\em pidN} are
1088 also set in this routine. These are used to identify processes holding
1089 tiles to the west, east, south and north of a given process. These
1090 values are stored in global storage in the header file {\em
1091 EESUPPORT.h} for use by communication routines. The above does not
1092 hold when the exch2 package is used. The exch2 package sets its own
1093 parameters to specify the global indices of tiles and their
1094 relationships to each other. See the documentation on the exch2
1095 package (\ref{sec:exch2}) for details.
1096 \\
1097
1098 \fbox{
1099 \begin{minipage}{4.75in}
1100 File: {\em eesupp/src/eeboot\_minimal.F}\\
1101 File: {\em eesupp/src/ini\_procs.F} \\
1102 File: {\em eesupp/inc/EESUPPORT.h} \\
1103 Parameter: {\em myProcId} \\
1104 Parameter: {\em MPI\_COMM\_MODEL} \\
1105 Parameter: {\em myXGlobalLo} \\
1106 Parameter: {\em myYGlobalLo} \\
1107 Parameter: {\em pidW } \\
1108 Parameter: {\em pidE } \\
1109 Parameter: {\em pidS } \\
1110 Parameter: {\em pidN }
1111 \end{minipage}
1112 } \\
1113
1114
1115 \subsection{Controlling communication}
1116 The WRAPPER maintains internal information that is used for communication
1117 operations and that can be customized for different platforms. This section
1118 describes the information that is held and used.
1119
1120 \begin{enumerate}
1121 \item {\bf Tile-tile connectivity information}
1122 For each tile the WRAPPER records the tile numbers of the tiles to
1123 the north, south, east and west of that tile. Each tile number is unique
1124 over all tiles in a configuration. Except when using the cubed
1125 sphere and the exch2 package, the number is held in the variables
1126 {\em tileNo} (this holds the tile's own number), {\em tileNoN}, {\em
1127 tileNoS}, {\em tileNoE} and {\em tileNoW}. A parameter is also
1128 stored with each tile that specifies the type of communication that
1129 is used between tiles. This information is held in the variables
1130 {\em tileCommModeN}, {\em tileCommModeS}, {\em tileCommModeE} and
1131 {\em tileCommModeW}. This latter set of variables can take one of
1132 the following values: {\em COMM\_NONE}, {\em COMM\_MSG}, {\em
1133 COMM\_PUT} and {\em COMM\_GET}. A value of {\em COMM\_NONE} is
1134 used to indicate that a tile has no neighbor to communicate with on
1135 a particular face. A value of {\em COMM\_MSG} is used to indicate
1136 that some form of distributed memory communication is required to
1137 communicate between these tile faces (see section
1138 \ref{sect:distributed_memory_communication}). A value of {\em
1139 COMM\_PUT} or {\em COMM\_GET} is used to indicate forms of shared
1140 memory communication (see section
1141 \ref{sect:shared_memory_communication}). The {\em COMM\_PUT} value
1142 indicates that a CPU should communicate by writing to data
1143 structures owned by another CPU. A {\em COMM\_GET} value indicates
1144 that a CPU should communicate by reading from data structures owned
1145 by another CPU. These flags affect the behavior of the WRAPPER
1146 exchange primitive (see figure \ref{fig:communication_primitives}).
1147 The routine {\em ini\_communication\_patterns()} is responsible for
1148 setting the communication mode values for each tile.
1149
1150 When using the cubed sphere configuration with the exch2 package,
1151 the relationships between tiles and their communication methods are
1152 set by the exch2 package and stored in different variables. See the
1153 exch2 package documentation (section \ref{sec:exch2}) for details.
1154
1155 \fbox{
1156 \begin{minipage}{4.75in}
1157 File: {\em eesupp/src/ini\_communication\_patterns.F}\\
1158 File: {\em eesupp/inc/EESUPPORT.h} \\
1159 Parameter: {\em tileNo} \\
1160 Parameter: {\em tileNoE} \\
1161 Parameter: {\em tileNoW} \\
1162 Parameter: {\em tileNoN} \\
1163 Parameter: {\em tileNoS} \\
1164 Parameter: {\em tileCommModeE} \\
1165 Parameter: {\em tileCommModeW} \\
1166 Parameter: {\em tileCommModeN} \\
1167 Parameter: {\em tileCommModeS} \\
1168 \end{minipage}
1169 } \\
1170
1171 \item {\bf MP directives}
1172 The WRAPPER transfers control to numerical application code through
1173 the routine {\em THE\_MODEL\_MAIN}. This routine is called in a way
1174   that allows it to be invoked by several threads.  Support for
1175 this is based on either multi-processing (MP) compiler directives or
1176   specific calls to multi-threading libraries (\textit{e.g.} POSIX
1177 threads). Most commercially available Fortran compilers support the
1178 generation of code to spawn multiple threads through some form of
1179 compiler directives. Compiler directives are generally more
1180   convenient than writing code that explicitly spawns threads and,
1181   on some systems, they may be the only method
1182 available. The WRAPPER is distributed with template MP directives
1183 for a number of systems.
1184
1185 These directives are inserted into the code just before and after
1186 the transfer of control to numerical algorithm code through the
1187 routine {\em THE\_MODEL\_MAIN}. Figure \ref{fig:mp_directives} shows
1188 an example of the code that performs this process for a Silicon
1189 Graphics system. This code is extracted from the files {\em main.F}
1190 and {\em MAIN\_PDIRECTIVES1.h}. The variable {\em nThreads}
1191 specifies how many instances of the routine {\em THE\_MODEL\_MAIN}
1192 will be created. The value of {\em nThreads} is set in the routine
1193   {\em INI\_THREADING\_ENVIRONMENT}.  The value is set equal to the
1194   product of the parameters {\em nTx} and {\em nTy} that are read from
1195   the file {\em eedata} (an example fragment is given after this list).  If the value of {\em nThreads} is
1196 inconsistent with the number of threads requested from the operating
1197 system (for example by using an environment variable as described in
1198 section \ref{sect:multi_threaded_execution}) then usually an error
1199 will be reported by the routine {\em CHECK\_THREADS}.
1200
1201 \fbox{
1202 \begin{minipage}{4.75in}
1203 File: {\em eesupp/src/ini\_threading\_environment.F}\\
1204 File: {\em eesupp/src/check\_threads.F} \\
1205 File: {\em eesupp/src/main.F} \\
1206 File: {\em eesupp/inc/MAIN\_PDIRECTIVES1.h} \\
1207 File: {\em eedata } \\
1208 Parameter: {\em nThreads} \\
1209 Parameter: {\em nTx} \\
1210 Parameter: {\em nTy} \\
1211 \end{minipage}
1212 }
1213
1214 \item {\bf memsync flags}
1215 As discussed in section \ref{sect:memory_consistency}, a low-level
1216   system function may be needed to force memory consistency on some
1217 shared memory systems. The routine {\em MEMSYNC()} is used for this
1218 purpose. This routine should not need modifying and the information
1219 below is only provided for completeness. A logical parameter {\em
1220 exchNeedsMemSync} set in the routine {\em
1221 INI\_COMMUNICATION\_PATTERNS()} controls whether the {\em
1222 MEMSYNC()} primitive is called. In general this routine is only
1223 used for multi-threaded execution. The code that goes into the {\em
1224 MEMSYNC()} routine is specific to the compiler and processor used.
1225 In some cases, it must be written using a short code snippet of
1226 assembly language. For an Ultra Sparc system the following code
1227 snippet is used
1228 \begin{verbatim}
1229 asm("membar #LoadStore|#StoreStore");
1230 \end{verbatim}
1231 for an Alpha based system the equivalent code reads
1232 \begin{verbatim}
1233 asm("mb");
1234 \end{verbatim}
1235 while on an x86 system the following code is required
1236 \begin{verbatim}
1237 asm("lock; addl $0,0(%%esp)": : :"memory");
1238 \end{verbatim}
1239
1240 \item {\bf Cache line size}
1241 As discussed in section \ref{sect:cache_effects_and_false_sharing},
1242   multi-threaded codes explicitly avoid penalties associated with
1243 excessive coherence traffic on an SMP system. To do this the shared
1244 memory data structures used by the {\em GLOBAL\_SUM}, {\em
1245 GLOBAL\_MAX} and {\em BARRIER} routines are padded. The variables
1246 that control the padding are set in the header file {\em
1247 EEPARAMS.h}. These variables are called {\em cacheLineSize}, {\em
1248 lShare1}, {\em lShare4} and {\em lShare8}. The default values
1249 should not normally need changing.
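  A sketch of a padded shared-memory declaration of this kind is given
  after this list.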
1250
1251 \item {\bf \_BARRIER}
1252 This is a CPP macro that is expanded to a call to a routine which
1253 synchronizes all the logical processors running under the WRAPPER.
1254 Using a macro here preserves flexibility to insert a specialized
1255 call in-line into application code. By default this resolves to
1256 calling the procedure {\em BARRIER()}. The default setting for the
1257 \_BARRIER macro is given in the file {\em CPP\_EEMACROS.h}.
1258
1259 \item {\bf \_GSUM}
1260 This is a CPP macro that is expanded to a call to a routine which
1261 sums up a floating point number over all the logical processors
1262 running under the WRAPPER. Using a macro here provides extra
1263 flexibility to insert a specialized call in-line into application
1264 code. By default this resolves to calling the procedure {\em
1265 GLOBAL\_SUM\_R8()} ( for 64-bit floating point operands) or {\em
1266 GLOBAL\_SUM\_R4()} (for 32-bit floating point operands). The
1267 default setting for the \_GSUM macro is given in the file {\em
1268 CPP\_EEMACROS.h}. The \_GSUM macro is a performance critical
1269 operation, especially for large processor count, small tile size
1270 configurations. The custom communication example discussed in
1271 section \ref{sect:jam_example} shows how the macro is used to invoke
1272 a custom global sum routine for a specific set of hardware.
1273
1274 \item {\bf \_EXCH}
1275 The \_EXCH CPP macro is used to update tile overlap regions. It is
1276 qualified by a suffix indicating whether overlap updates are for
1277 two-dimensional ( \_EXCH\_XY ) or three dimensional ( \_EXCH\_XYZ )
1278 physical fields and whether fields are 32-bit floating point (
1279 \_EXCH\_XY\_R4, \_EXCH\_XYZ\_R4 ) or 64-bit floating point (
1280 \_EXCH\_XY\_R8, \_EXCH\_XYZ\_R8 ). The macro mappings are defined in
1281 the header file {\em CPP\_EEMACROS.h}. As with \_GSUM, the \_EXCH
1282 operation plays a crucial role in scaling to small tile, large
1283 logical and physical processor count configurations. The example in
1284 section \ref{sect:jam_example} discusses defining an optimized and
1285   specialized form of the \_EXCH operation.
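  A brief sketch of typical \_GSUM and \_EXCH usage also follows this
  list.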
1286
1287 The \_EXCH operation is also central to supporting grids such as the
1288 cube-sphere grid. In this class of grid a rotation may be required
1289 between tiles. Aligning the coordinate requiring rotation with the
1290   tile decomposition allows the coordinate transformation to be
1291 embedded within a custom form of the \_EXCH primitive. In these
1292 cases \_EXCH is mapped to exch2 routines, as detailed in the exch2
1293 package documentation \ref{sec:exch2}.
1294
1295 \item {\bf Reverse Mode}
1296 The communication primitives \_EXCH and \_GSUM both employ
1297   hand-written adjoint (or reverse mode) forms.  These reverse
1298 mode forms can be found in the source code directory {\em
1299 pkg/autodiff}. For the global sum primitive the reverse mode form
1300 calls are to {\em GLOBAL\_ADSUM\_R4} and {\em GLOBAL\_ADSUM\_R8}.
1301   The reverse mode forms of the exchange primitives are found in
1302 routines prefixed {\em ADEXCH}. The exchange routines make calls to
1303 the same low-level communication primitives as the forward mode
1304 operations. However, the routine argument {\em simulationMode} is
1305 set to the value {\em REVERSE\_SIMULATION}. This signifies to the
1306 low-level routines that the adjoint forms of the appropriate
1307 communication operation should be performed.
1308
1309 \item {\bf MAX\_NO\_THREADS}
1310 The variable {\em MAX\_NO\_THREADS} is used to indicate the maximum
1311 number of OS threads that a code will use. This value defaults to
1312 thirty-two and is set in the file {\em EEPARAMS.h}. For single
1313 threaded execution it can be reduced to one if required. The value
1314 is largely private to the WRAPPER and application code will not
1315 normally reference the value, except in the following scenario.
1316
1317   For certain physical parameterization schemes it is necessary to have
1318   a substantial number of work arrays.  Where these arrays are held
1319   in statically allocated storage (for example COMMON blocks) multi-threaded
1320 execution will require multiple instances of the COMMON block data.
1321 This can be achieved using a Fortran 90 module construct. However,
1322 if this mechanism is unavailable then the work arrays can be extended
1323 with dimensions using the tile dimensioning scheme of {\em nSx} and
1324 {\em nSy} (as described in section
1325 \ref{sect:specifying_a_decomposition}). However, if the
1326 configuration being specified involves many more tiles than OS
1327 threads then it can save memory resources to reduce the variable
1328 {\em MAX\_NO\_THREADS} to be equal to the actual number of threads
1329 that will be used and to declare the physical parameterization work
1330 arrays with a single {\em MAX\_NO\_THREADS} extra dimension. An
1331 example of this is given in the verification experiment {\em
1332 aim.5l\_cs}. Here the default setting of {\em MAX\_NO\_THREADS} is
1333 altered to
1334 \begin{verbatim}
1335 INTEGER MAX_NO_THREADS
1336 PARAMETER ( MAX_NO_THREADS = 6 )
1337 \end{verbatim}
1338 and several work arrays for storing intermediate calculations are
1339   created with declarations of the form:
1340 \begin{verbatim}
1341 common /FORCIN/ sst1(ngp,MAX_NO_THREADS)
1342 \end{verbatim}
1343 This declaration scheme is not used widely, because most global data
1344   is used for permanent, not temporary, storage of state information.
1345 In the case of permanent state information this approach cannot be
1346 used because there has to be enough storage allocated for all tiles.
1347 However, the technique can sometimes be a useful scheme for reducing
1348 memory requirements in complex physical parameterizations.
1349 \end{enumerate}
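
To make the communication mode flags of item 1 more concrete, the
following sketch shows how an exchange routine might select a mechanism
for filling the western overlap region of a tile.  This is an
illustration only, not actual WRAPPER source: the routine name and the
numerical values of the {\em COMM\_} constants are assumed for the
example (the real definitions are in the WRAPPER header files).
\begin{verbatim}
      SUBROUTINE EXAMPLE_SELECT_WEST_COMM( commModeW, myThid )
C     Illustrative sketch only: choose how to fill the western overlap
C     region of a tile according to its communication mode flag.
      IMPLICIT NONE
      INTEGER commModeW, myThid
C     Constant values assumed for this example.
      INTEGER COMM_NONE, COMM_MSG, COMM_PUT, COMM_GET
      PARAMETER ( COMM_NONE = 0, COMM_MSG = 1 )
      PARAMETER ( COMM_PUT  = 2, COMM_GET = 3 )

      IF     ( commModeW .EQ. COMM_NONE ) THEN
C       No neighbor on this face, nothing to exchange.
      ELSEIF ( commModeW .EQ. COMM_MSG ) THEN
C       Distributed memory neighbor: exchange via message passing.
      ELSEIF ( commModeW .EQ. COMM_PUT ) THEN
C       Shared memory neighbor: write into data structures owned by
C       the CPU holding the neighboring tile.
      ELSEIF ( commModeW .EQ. COMM_GET ) THEN
C       Shared memory neighbor: read from data structures owned by
C       the CPU holding the neighboring tile.
      ENDIF
      RETURN
      END
\end{verbatim}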
1350
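As an illustration of the threading parameters discussed in item 2, a
minimal {\em eedata} fragment requesting a two-by-two arrangement of
threads might look as follows (only the {\em nTx} and {\em nTy} entries
are shown; the full set of execution environment parameters varies
between configurations):
\begin{verbatim}
 &EEPARMS
 nTx=2,
 nTy=2,
 &
\end{verbatim}
With such a setting {\em INI\_THREADING\_ENVIRONMENT} sets {\em
nThreads} to four, and the decomposition must provide enough tiles per
process ({\em nSx*nSy}) that every thread can be assigned at least one
tile.
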
1351 \begin{figure}
1352 \begin{verbatim}
1353 C--
1354 C-- Parallel directives for MIPS Pro Fortran compiler
1355 C--
1356 C Parallel compiler directives for SGI with IRIX
1357 C$PAR PARALLEL DO
1358 C$PAR& CHUNK=1,MP_SCHEDTYPE=INTERLEAVE,
1359 C$PAR& SHARE(nThreads),LOCAL(myThid,I)
1360 C
1361 DO I=1,nThreads
1362 myThid = I
1363
1364 C-- Invoke nThreads instances of the numerical model
1365 CALL THE_MODEL_MAIN(myThid)
1366
1367 ENDDO
1368 \end{verbatim}
1369 \caption{Prior to transferring control to the procedure {\em
1370 THE\_MODEL\_MAIN()} the WRAPPER may use MP directives to spawn
1371 multiple threads. } \label{fig:mp_directives}
1372 \end{figure}
1373
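The cache line size entry of item 4 can be illustrated with the
following sketch of a padded shared buffer.  The buffer and COMMON
block names are invented for the example, and the parameter values
shown are assumptions; the actual padding constants are set in {\em
EEPARAMS.h}.
\begin{verbatim}
C     Illustrative sketch only: each thread accumulates its partial
C     result in its own cache-line sized slot, so threads do not
C     repeatedly invalidate one another's cache lines (false sharing).
      INTEGER lShare8, MAX_NO_THREADS
      PARAMETER ( lShare8 = 8 )
      PARAMETER ( MAX_NO_THREADS = 32 )
      Real*8 exampleSumBuf( lShare8, MAX_NO_THREADS )
      COMMON / EXAMPLE_SUM_BUF / exampleSumBuf
C     Thread myThid writes only exampleSumBuf(1,myThid); the remaining
C     lShare8-1 elements of each column are padding.
\end{verbatim}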
1374
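Finally, the listing below sketches how the \_GSUM and \_EXCH macros of
items 6 and 7 appear at their point of use inside a model routine.  The
argument lists are indicative only; the authoritative macro definitions
are in {\em CPP\_EEMACROS.h}.
\begin{verbatim}
C     Illustrative fragment (argument lists indicative only).

C     Sum a per-thread, per-process partial result over the whole model.
      _GSUM( sumPhi, myThid )

C     Refresh the overlap regions of a three-dimensional 64-bit field
C     after its interior has been updated on each tile.
      _EXCH_XYZ_R8( theta, myThid )
\end{verbatim}
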
1375 \subsubsection{Specializing the Communication Code}
1376
1377 The isolation of performance-critical communication primitives and the
1378 sub-division of the simulation domain into tiles are powerful tools.
1379 Here we show how they can be used to improve application performance
1380 and to adapt to new gridding approaches.
1381
1382 \subsubsection{JAM example}
1383 \label{sect:jam_example}
1384 On some platforms a big performance boost can be obtained by binding
1385 the communication routines {\em \_EXCH} and {\em \_GSUM} to
1386 specialized native libraries (for example, the shmem library on CRAY
1387 T3E systems). The {\em LETS\_MAKE\_JAM} CPP flag is used as an
1388 illustration of a specialized communication configuration that
1389 substitutes for standard, portable forms of {\em \_EXCH} and {\em
1390 \_GSUM}.  It affects three source files: {\em eeboot.F}, {\em
1391 CPP\_EEMACROS.h} and {\em cg2d.F}.  When the flag is defined it has
1392 the following effects.
1393 \begin{itemize}
1394 \item An extra phase is included at boot time to initialize the custom
1395   communications library (see {\em ini\_jam.F}).
1396 \item The {\em \_GSUM} and {\em \_EXCH} macro definitions are replaced
1397   with calls to custom routines (see {\em gsum\_jam.F} and {\em
1398   exch\_jam.F}).
1399 \item A highly specialized form of the exchange operator (optimized
1400 for overlap regions of width one) is substituted into the elliptic
1401 solver routine {\em cg2d.F}.
1402 \end{itemize}
1403 Developing specialized code for other libraries follows a similar
1404 pattern.
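
The mechanics of such a substitution are simple because only the macro
definitions need to change.  The sketch below shows the general
pattern; it is illustrative only, the routine name {\em GSUM\_JAM} is
inferred from the file names above, and the actual macro text in {\em
CPP\_EEMACROS.h} differs in detail.
\begin{verbatim}
C     Illustrative sketch of redirecting the global sum macro to a
C     platform specific routine when LETS_MAKE_JAM is defined.
#ifdef LETS_MAKE_JAM
#define _GSUM(a,b) CALL GSUM_JAM( a, b )
#else
#define _GSUM(a,b) CALL GLOBAL_SUM_R8( a, b )
#endif
\end{verbatim}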
1405
1406 \subsubsection{Cube sphere communication}
1407 \label{sect:cube_sphere_communication}
1408 Actual {\em \_EXCH} routine code is generated automatically from a
1409 series of template files, for example {\em exch\_rx.template}. This
1410 is done to allow a large number of variations on the exchange process
1411 to be maintained. One set of variations supports the cube sphere grid.
1412 Support for a cube sphere grid in MITgcm is based on having each face
1413 of the cube as a separate tile or tiles. The exchange routines are
1414 then able to absorb much of the detailed rotation and reorientation
1415 required when moving around the cube grid. The set of {\em \_EXCH}
1416 routines that contain the word cube in their name perform these
1417 transformations. They are invoked when the run-time logical parameter
1418 {\em useCubedSphereExchange} is set true. To facilitate the
1419 transformations on a staggered C-grid, exchange operations are defined
1420 separately for vector and scalar quantities and for grid-centered,
1421 grid-face and grid-corner quantities.  Three sets of exchange
1422 routines are defined. Routines with names of the form {\em exch\_rx}
1423 are used to exchange cell centered scalar quantities. Routines with
1424 names of the form {\em exch\_uv\_rx} are used to exchange vector
1425 quantities located at the C-grid velocity points. The vector
1426 quantities exchanged by the {\em exch\_uv\_rx} routines can either be
1427 signed (for example velocity components) or un-signed (for example
1428 grid-cell separations). Routines with names of the form {\em
1429 exch\_z\_rx} are used to exchange quantities at the C-grid vorticity
1430 point locations.
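
A minimal sketch of enabling these exchanges at run time is shown
below.  It assumes that {\em useCubedSphereExchange} is read, together
with the other execution environment parameters, from the file {\em
eedata}; the remaining entries are illustrative.
\begin{verbatim}
 &EEPARMS
 useCubedSphereExchange=.TRUE.,
 nTx=1,
 nTy=1,
 &
\end{verbatim}
The tile layout in {\em SIZE.h} must then be chosen so that each face
of the cube maps onto one or more complete tiles, as described above.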
1431
1432
1433
1434
1435 \section{MITgcm execution under WRAPPER}
1436 \begin{rawhtml}
1437 <!-- CMIREDIR:mitgcm_wrapper: -->
1438 \end{rawhtml}
1439
1440 Fitting together the WRAPPER elements, package elements and
1441 MITgcm core equation elements of the source code produces the
1442 calling sequence shown in section \ref{sect:calling_sequence}.
1443
1444 \subsection{Annotated call tree for MITgcm and WRAPPER}
1445 \label{sect:calling_sequence}
1446
1447 WRAPPER layer:
1448
1449 {\footnotesize
1450 \begin{verbatim}
1451
1452 MAIN
1453 |
1454 |--EEBOOT :: WRAPPER initialization
1455 | |
1456  | |-- EEBOOT_MINIMAL :: Minimal startup. Just enough to
1457 | | allow basic I/O.
1458 | |-- EEINTRO_MSG :: Write startup greeting.
1459 | |
1460 | |-- EESET_PARMS :: Set WRAPPER parameters
1461 | |
1462 | |-- EEWRITE_EEENV :: Print WRAPPER parameter settings
1463 | |
1464 | |-- INI_PROCS :: Associate processes with grid regions.
1465 | |
1466 | |-- INI_THREADING_ENVIRONMENT :: Associate threads with grid regions.
1467 | |
1468  | |--INI_COMMUNICATION_PATTERNS :: Initialize between-tile
1469 | :: communication data structures
1470 |
1471 |
1472 |--CHECK_THREADS :: Validate multiple thread start up.
1473 |
1474 |--THE_MODEL_MAIN :: Numerical code top-level driver routine
1475
1476 \end{verbatim}
1477 }
1478
1479 Core equations plus packages:
1480
1481 {\footnotesize
1482 \begin{verbatim}
1483 C
1484 C Invocation from WRAPPER level...
1485 C :
1486 C :
1487 C |
1488 C |-THE_MODEL_MAIN :: Primary driver for the MITgcm algorithm
1489 C | :: Called from WRAPPER level numerical
1490 C | :: code invocation routine. On entry
1491 C | :: to THE_MODEL_MAIN separate threads and
1492 C | :: separate processes will have been established.
1493 C | :: Each thread and process will have a unique ID
1494 C | :: but as yet it will not be associated with a
1495 C | :: specific region in decomposed discrete space.
1496 C |
1497 C |-INITIALISE_FIXED :: Set fixed model arrays such as topography,
1498 C | | :: grid, solver matrices etc..
1499 C | |
1500 C | |-INI_PARMS :: Routine to set kernel model parameters.
1501 C | | :: By default kernel parameters are read from file
1502 C | | :: "data" in directory in which code executes.
1503 C | |
1504 C | |-MON_INIT :: Initializes monitor package ( see pkg/monitor )
1505 C | |
1506 C | |-INI_GRID :: Control grid array (vert. and hori.) initialization.
1507 C | | | :: Grid arrays are held and described in GRID.h.
1508 C | | |
1509 C | | |-INI_VERTICAL_GRID :: Initialize vertical grid arrays.
1510 C | | |
1511 C | | |-INI_CARTESIAN_GRID :: Cartesian horiz. grid initialization
1512 C | | | :: (calculate grid from kernel parameters).
1513 C | | |
1514 C | | |-INI_SPHERICAL_POLAR_GRID :: Spherical polar horiz. grid
1515 C | | | :: initialization (calculate grid from
1516 C | | | :: kernel parameters).
1517 C | | |
1518 C | | |-INI_CURVILINEAR_GRID :: General orthogonal, structured horiz.
1519 C | | :: grid initializations. ( input from raw
1520 C | | :: grid files, LONC.bin, DXF.bin etc... )
1521 C | |
1522 C | |-INI_DEPTHS :: Read (from "bathyFile") or set bathymetry/orography.
1523 C | |
1524 C | |-INI_MASKS_ETC :: Derive horizontal and vertical cell fractions and
1525 C | | :: land masking for solid-fluid boundaries.
1526 C | |
1527 C | |-INI_LINEAR_PHSURF :: Set ref. surface Bo_surf
1528 C | |
1529 C | |-INI_CORI :: Set coriolis term. zero, f-plane, beta-plane,
1530 C | | :: sphere options are coded.
1531 C | |
1532 C | |-PACKAGES_BOOT :: Start up the optional package environment.
1533 C | | :: Runtime selection of active packages.
1534 C | |
1535 C | |-PACKAGES_READPARMS :: Call active package internal parameter load.
1536 C | | |
1537 C | | |-GMREDI_READPARMS :: GM Package. see pkg/gmredi
1538 C | | |-KPP_READPARMS :: KPP Package. see pkg/kpp
1539 C | | |-SHAP_FILT_READPARMS :: Shapiro filter package. see pkg/shap_filt
1540 C | | |-OBCS_READPARMS :: Open bndy package. see pkg/obcs
1541 C | | |-AIM_READPARMS :: Intermediate Atmos. package. see pkg/aim
1542 C | | |-COST_READPARMS :: Cost function package. see pkg/cost
1543 C | | |-CTRL_INIT :: Control vector support package. see pkg/ctrl
1544 C | | |-OPTIM_READPARMS :: Optimisation support package. see pkg/ctrl
1545 C | | |-GRDCHK_READPARMS :: Gradient check package. see pkg/grdchk
1546 C | | |-ECCO_READPARMS :: ECCO Support Package. see pkg/ecco
1547 C | | |-PTRACERS_READPARMS :: multiple tracer package, see pkg/ptracers
1548 C | | |-GCHEM_READPARMS :: tracer interface package, see pkg/gchem
1549 C | |
1550 C | |-PACKAGES_CHECK
1551 C | | |
1552 C | | |-KPP_CHECK :: KPP Package. pkg/kpp
1553 C | | |-OBCS_CHECK :: Open bndy Package. pkg/obcs
1554 C | | |-GMREDI_CHECK :: GM Package. pkg/gmredi
1555 C | |
1556 C | |-PACKAGES_INIT_FIXED
1557 C | | |-OBCS_INIT_FIXED :: Open bndy Package. see pkg/obcs
1558 C | | |-FLT_INIT :: Floats Package. see pkg/flt
1559 C | | |-GCHEM_INIT_FIXED :: tracer interface package, see pkg/gchem
1560 C | |
1561 C | |-ZONAL_FILT_INIT :: FFT filter Package. see pkg/zonal_filt
1562 C | |
1563 C | |-INI_CG2D :: 2d con. grad solver initialization.
1564 C | |
1565 C | |-INI_CG3D :: 3d con. grad solver initialization.
1566 C | |
1567 C | |-CONFIG_SUMMARY :: Provide synopsis of kernel setup.
1568 C | :: Includes annotated table of kernel
1569 C | :: parameter settings.
1570 C |
1571 C |-CTRL_UNPACK :: Control vector support package. see pkg/ctrl
1572 C |
1573 C |-ADTHE_MAIN_LOOP :: Derivative evaluating form of main time stepping loop
1574 C | :: Automatically generated by TAMC/TAF.
1575 C |
1576 C |-CTRL_PACK :: Control vector support package. see pkg/ctrl
1577 C |
1578 C |-GRDCHK_MAIN :: Gradient check package. see pkg/grdchk
1579 C |
1580 C |-THE_MAIN_LOOP :: Main timestepping loop routine.
1581 C | |
1582 C | |-INITIALISE_VARIA :: Set the initial conditions for time evolving
1583 C | | | :: variables
1584 C | | |
1585 C | | |-INI_LINEAR_PHISURF :: Set ref. surface Bo_surf
1586 C | | |
1587 C | | |-INI_CORI :: Set coriolis term. zero, f-plane, beta-plane,
1588 C | | | :: sphere options are coded.
1589 C | | |
1590 C | | |-INI_CG2D :: 2d con. grad solver initialization.
1591 C | | |-INI_CG3D :: 3d con. grad solver initialization.
1592 C | | |-INI_MIXING :: Initialize diapycnal diffusivity.
1593 C | | |-INI_DYNVARS :: Initialize to zero all DYNVARS.h arrays (dynamical
1594 C | | | :: fields).
1595 C | | |
1596 C | | |-INI_FIELDS :: Control initializing model fields to non-zero
1597 C | | | |-INI_VEL :: Initialize 3D flow field.
1598 C | | | |-INI_THETA :: Set model initial temperature field.
1599 C | | | |-INI_SALT :: Set model initial salinity field.
1600 C | | | |-INI_PSURF :: Set model initial free-surface height/pressure.
1601 C | | | |-INI_PRESSURE :: Compute model initial hydrostatic pressure
1602 C | | | |-READ_CHECKPOINT :: Read the checkpoint
1603 C | | |
1604 C | | |-THE_CORRECTION_STEP :: Step forward to next time step.
1605 C | | | | :: Here applied to move restart conditions
1606 C | | | | :: (saved in mid timestep) to correct level in
1607 C | | | | :: time (only used for pre-c35).
1608 C | | | |
1609 C | | | |-CALC_GRAD_PHI_SURF :: Return DDx and DDy of surface pressure
1610 C | | | |-CORRECTION_STEP :: Pressure correction to momentum
1611 C | | | |-CYCLE_TRACER :: Move tracers forward in time.
1612 C | | | |-OBCS_APPLY :: Open bndy package. see pkg/obcs
1613 C | | | |-SHAP_FILT_APPLY :: Shapiro filter package. see pkg/shap_filt
1614 C | | | |-ZONAL_FILT_APPLY :: FFT filter package. see pkg/zonal_filt
1615 C | | | |-CONVECTIVE_ADJUSTMENT :: Control static instability mixing.
1616 C | | | | |-FIND_RHO :: Find adjacent densities.
1617 C | | | | |-CONVECT :: Mix static instability.
1618 C | | | | |-TIMEAVE_CUMULATE :: Update convection statistics.
1619 C | | | |
1620 C | | | |-CALC_EXACT_ETA :: Change SSH to flow divergence.
1621 C | | |
1622 C | | |-CONVECTIVE_ADJUSTMENT_INI :: Control static instability mixing
1623 C | | | | :: Extra time history interactions.
1624 C | | | |
1625 C | | | |-FIND_RHO :: Find adjacent densities.
1626 C | | | |-CONVECT :: Mix static instability.
1627 C | | | |-TIMEAVE_CUMULATE :: Update convection statistics.
1628 C | | |
1629 C | | |-PACKAGES_INIT_VARIABLES :: Does initialization of time evolving
1630 C | | | | :: package data.
1631 C | | | |
1632 C | | | |-GMREDI_INIT :: GM package. ( see pkg/gmredi )
1633 C | | | |-KPP_INIT :: KPP package. ( see pkg/kpp )
1634 C | | | |-KPP_OPEN_DIAGS
1635 C | | | |-OBCS_INIT_VARIABLES :: Open bndy. package. ( see pkg/obcs )
1636 C | | | |-PTRACERS_INIT :: multi. tracer package,(see pkg/ptracers)
1637 C | | | |-GCHEM_INIT :: tracer interface pkg (see pkg/gchem)
1638 C | | | |-AIM_INIT :: Interm. atmos package. ( see pkg/aim )
1639 C | | | |-CTRL_MAP_INI :: Control vector package.( see pkg/ctrl )
1640 C | | | |-COST_INIT :: Cost function package. ( see pkg/cost )
1641 C | | | |-ECCO_INIT :: ECCO support package. ( see pkg/ecco )
1642 C | | | |-INI_FORCING :: Set model initial forcing fields.
1643 C | | | | :: Either set in-line or from file as shown.
1644 C | | | |-READ_FLD_XY_RS(zonalWindFile)
1645 C | | | |-READ_FLD_XY_RS(meridWindFile)
1646 C | | | |-READ_FLD_XY_RS(surfQFile)
1647 C | | | |-READ_FLD_XY_RS(EmPmRfile)
1648 C | | | |-READ_FLD_XY_RS(thetaClimFile)
1649 C | | | |-READ_FLD_XY_RS(saltClimFile)
1650 C | | | |-READ_FLD_XY_RS(surfQswFile)
1651 C | | |
1652 C | | |-CALC_SURF_DR :: Calculate the new surface level thickness.
1653 C | | |-UPDATE_SURF_DR :: Update the surface-level thickness fraction.
1654 C | | |-UPDATE_CG2D :: Update 2d conjugate grad. for Free-Surf.
1655 C | | |-STATE_SUMMARY :: Summarize model prognostic variables.
1656 C | | |-TIMEAVE_STATVARS :: Time averaging package ( see pkg/timeave ).
1657 C | |
1658 C | |-WRITE_STATE :: Controlling routine for IO to dump model state.
1659 C | | |-WRITE_REC_XYZ_RL :: Single file I/O
1660 C | | |-WRITE_FLD_XYZ_RL :: Multi-file I/O
1661 C | |
1662 C | |-MONITOR :: Monitor state ( see pkg/monitor )
1663 C | |-CTRL_MAP_FORCING :: Control vector support package. ( see pkg/ctrl )
1664 C====|>|
1665 C====|>| ****************************
1666 C====|>| BEGIN MAIN TIMESTEPPING LOOP
1667 C====|>| ****************************
1668 C====|>|
1669 C/\ | |-FORWARD_STEP :: Step forward a time-step ( AT LAST !!! )
1670 C/\ | | |
1671 C/\ | | |-DUMMY_IN_STEPPING :: autodiff package ( pkg/autodiff ).
1672 C/\ | | |-CALC_EXACT_ETA :: Change SSH to flow divergence.
1673 C/\ | | |-CALC_SURF_DR :: Calculate the new surface level thickness.
1674 C/\ | | |-EXF_GETFORCING :: External forcing package. ( pkg/exf )
1675 C/\ | | |-EXTERNAL_FIELDS_LOAD :: Control loading time dep. external data.
1676 C/\ | | | | :: Simple interpolation between end-points
1677 C/\ | | | | :: for forcing datasets.
1678 C/\ | | | |
1679 C/\ | | | |-EXCH :: Sync forcing. in overlap regions.
1680 C/\ | | |-SEAICE_MODEL :: Compute sea-ice terms. ( pkg/seaice )
1681 C/\ | | |-FREEZE :: Limit surface temperature.
1682 C/\ | | |-GCHEM_FIELD_LOAD :: load tracer forcing fields (pkg/gchem)
1683 C/\ | | |
1684 C/\ | | |-THERMODYNAMICS :: theta, salt + tracer equations driver.
1685 C/\ | | | |
1686 C/\ | | | |-INTEGRATE_FOR_W :: Integrate for vertical velocity.
1687 C/\ | | | |-OBCS_APPLY_W :: Open bndy. package ( see pkg/obcs ).
1688 C/\ | | | |-FIND_RHO :: Calculates [rho(S,T,z)-RhoConst] of a slice
1689 C/\ | | | |-GRAD_SIGMA :: Calculate isoneutral gradients
1690 C/\ | | | |-CALC_IVDC :: Set Implicit Vertical Diffusivity for Convection
1691 C/\ | | | |
1692 C/\ | | | |-OBCS_CALC :: Open bndy. package ( see pkg/obcs ).
1693 C/\ | | | |-EXTERNAL_FORCING_SURF:: Accumulates appropriately dimensioned
1694 C/\ | | | | | :: forcing terms.
1695 C/\ | | | | |-PTRACERS_FORCING_SURF :: Tracer package ( see pkg/ptracers ).
1696 C/\ | | | |
1697 C/\ | | | |-GMREDI_CALC_TENSOR :: GM package ( see pkg/gmredi ).
1698 C/\ | | | |-GMREDI_CALC_TENSOR_DUMMY :: GM package ( see pkg/gmredi ).
1699 C/\ | | | |-KPP_CALC :: KPP package ( see pkg/kpp ).
1700 C/\ | | | |-KPP_CALC_DUMMY :: KPP package ( see pkg/kpp ).
1701 C/\ | | | |-AIM_DO_ATMOS_PHYSICS :: Intermed. atmos package ( see pkg/aim ).
1702 C/\ | | | |-GAD_ADVECTION :: Generalised advection driver (multi-dim
1703 C/\ | | | | advection case) (see pkg/gad).
1704 C/\ | | | |-CALC_COMMON_FACTORS :: Calculate common data (such as volume flux)
1705 C/\ | | | |-CALC_DIFFUSIVITY :: Calculate net vertical diffusivity
1706 C/\ | | | | |
1707 C/\ | | | | |-GMREDI_CALC_DIFF :: GM package ( see pkg/gmredi ).
1708 C/\ | | | | |-KPP_CALC_DIFF :: KPP package ( see pkg/kpp ).
1709 C/\ | | | |
1710 C/\ | | | |-CALC_GT :: Calculate the temperature tendency terms
1711 C/\ | | | | |
1712 C/\ | | | | |-GAD_CALC_RHS :: Generalised advection package
1713 C/\ | | | | | | :: ( see pkg/gad )
1714 C/\ | | | | | |-KPP_TRANSPORT_T :: KPP non-local transport ( see pkg/kpp ).
1715 C/\ | | | | |
1716 C/\ | | | | |-EXTERNAL_FORCING_T :: Problem specific forcing for temperature.
1717 C/\ | | | | |-ADAMS_BASHFORTH2 :: Extrapolate tendencies forward in time.
1718 C/\ | | | | |-FREESURF_RESCALE_G :: Re-scale Gt for free-surface height.
1719 C/\ | | | |
1720 C/\ | | | |-TIMESTEP_TRACER :: Step tracer field forward in time
1721 C/\ | | | |
1722 C/\ | | | |-CALC_GS :: Calculate the salinity tendency terms
1723 C/\ | | | | |
1724 C/\ | | | | |-GAD_CALC_RHS :: Generalised advection package
1725 C/\ | | | | | | :: ( see pkg/gad )
1726 C/\ | | | | | |-KPP_TRANSPORT_S :: KPP non-local transport ( see pkg/kpp ).
1727 C/\ | | | | |
1728 C/\ | | | | |-EXTERNAL_FORCING_S :: Problem specific forcing for salt.
1729 C/\ | | | | |-ADAMS_BASHFORTH2 :: Extrapolate tendencies forward in time.
1730 C/\ | | | | |-FREESURF_RESCALE_G :: Re-scale Gs for free-surface height.
1731 C/\ | | | |
1732 C/\ | | | |-TIMESTEP_TRACER :: Step tracer field forward in time
1733 C/\ | | | |
1734 C/\ | | | |-TIMESTEP_TRACER :: Step tracer field forward in time
1735 C/\ | | | |
1736 C/\ | | | |-PTRACERS_INTEGRATE :: Integrate other tracer(s) (see pkg/ptracers).
1737 C/\ | | | | |
1738 C/\ | | | | |-GAD_CALC_RHS :: Generalised advection package
1739 C/\ | | | | | | :: ( see pkg/gad )
1740 C/\ | | | | | |-KPP_TRANSPORT_PTR:: KPP non-local transport ( see pkg/kpp ).
1741 C/\ | | | | |
1742 C/\ | | | | |-PTRACERS_FORCING :: Problem specific forcing for tracer.
1743 C/\ | | | | |-GCHEM_FORCING_INT :: tracer forcing for gchem pkg (if all
1744 C/\ | | | | | tendency terms calculated together)
1745 C/\ | | | | |-ADAMS_BASHFORTH2 :: Extrapolate tendencies forward in time.
1746 C/\ | | | | |-FREESURF_RESCALE_G :: Re-scale Gs for free-surface height.
1747 C/\ | | | | |-TIMESTEP_TRACER :: Step tracer field forward in time
1748 C/\ | | | |
1749 C/\ | | | |-OBCS_APPLY_TS :: Open bndy. package (see pkg/obcs ).
1750 C/\ | | | |
1751 C/\ | | | |-IMPLDIFF :: Solve vertical implicit diffusion equation.
1752 C/\ | | | |-OBCS_APPLY_TS :: Open bndy. package (see pkg/obcs ).
1753 C/\ | | | |
1754 C/\ | | | |-AIM_AIM2DYN_EXCHANGES :: Intermed. atmos (see pkg/aim).
1755 C/\ | | | |-EXCH :: Update overlaps
1756 C/\ | | |
1757 C/\ | | |-DYNAMICS :: Momentum equations driver.
1758 C/\ | | | |
1759 C/\ | | | |-CALC_GRAD_PHI_SURF :: Calculate the gradient of the surface
1760 C/\ | | | | Potential anomaly.
1761 C/\ | | | |-CALC_VISCOSITY :: Calculate net vertical viscosity
1762 C/\ | | | | |-KPP_CALC_VISC :: KPP package ( see pkg/kpp ).
1763 C/\ | | | |
1764 C/\ | | | |-CALC_PHI_HYD :: Integrate the hydrostatic relation.
1765 C/\ | | | |-MOM_FLUXFORM :: Flux form mom eqn. package ( see
1766 C/\ | | | | pkg/mom_fluxform ).
1767 C/\ | | | |-MOM_VECINV :: Vector invariant form mom eqn. package ( see
1768 C/\ | | | | pkg/mom_vecinv ).
1769 C/\ | | | |-TIMESTEP :: Step momentum fields forward in time
1770 C/\ | | | |-OBCS_APPLY_UV :: Open bndy. package (see pkg/obcs ).
1771 C/\ | | | |
1772 C/\ | | | |-IMPLDIFF :: Solve vertical implicit diffusion equation.
1773 C/\ | | | |-OBCS_APPLY_UV :: Open bndy. package (see pkg/obcs ).
1774 C/\ | | | |
1775 C/\ | | | |-TIMEAVE_CUMUL_1T :: Time averaging package ( see pkg/timeave ).
1776 C/\ | | | |-TIMEAVE_CUMULATE :: Time averaging package ( see pkg/timeave ).
1777 C/\ | | | |-DEBUG_STATS_RL :: Quick debug package ( see pkg/debug ).
1778 C/\ | | |
1779 C/\ | | |-CALC_GW :: vert. momentum tendency terms ( NH, QH only ).
1780 C/\ | | |
1781 C/\ | | |-UPDATE_SURF_DR :: Update the surface-level thickness fraction.
1782 C/\ | | |
1783 C/\ | | |-UPDATE_CG2D :: Update 2d conjugate grad. for Free-Surf.
1784 C/\ | | |
1785 C/\ | | |-SOLVE_FOR_PRESSURE :: Find surface pressure.
1786 C/\ | | | |-CALC_DIV_GHAT :: Form the RHS of the surface pressure eqn.
1787 C/\ | | | |-CG2D :: Two-dim pre-con. conjugate-gradient.
1788 C/\ | | | |-CG3D :: Three-dim pre-con. conjugate-gradient solver.
1789 C/\ | | |
1790 C/\ | | |-THE_CORRECTION_STEP :: Step forward to next time step.
1791 C/\ | | | |
1792 C/\ | | | |-CALC_GRAD_PHI_SURF :: Return DDx and DDy of surface pressure
1793 C/\ | | | |-CORRECTION_STEP :: Pressure correction to momentum
1794 C/\ | | | |-CYCLE_TRACER :: Move tracers forward in time.
1795 C/\ | | | |-OBCS_APPLY :: Open bndy package. see pkg/obcs
1796 C/\ | | | |-SHAP_FILT_APPLY :: Shapiro filter package. see pkg/shap_filt
1797 C/\ | | | |-ZONAL_FILT_APPLY :: FFT filter package. see pkg/zonal_filt
1798 C/\ | | | |-CONVECTIVE_ADJUSTMENT :: Control static instability mixing.
1799 C/\ | | | | |-FIND_RHO :: Find adjacent densities.
1800 C/\ | | | | |-CONVECT :: Mix static instability.
1801 C/\ | | | | |-TIMEAVE_CUMULATE :: Update convection statistics.
1802 C/\ | | | |
1803 C/\ | | | |-CALC_EXACT_ETA :: Change SSH to flow divergence.
1804 C/\ | | |
1805 C/\ | | |-DO_FIELDS_BLOCKING_EXCHANGES :: Sync up overlap regions.
1806 C/\ | | | |-EXCH
1807 C/\ | | |
1808 C/\ | | |-GCHEM_FORCING_SEP :: tracer forcing for gchem pkg (if
1809 C/\ | | | tracer dependent tendencies calculated
1810 C/\ | | | separately)
1811 C/\ | | |
1812 C/\ | | |-FLT_MAIN :: Float package ( pkg/flt ).
1813 C/\ | | |
1814 C/\ | | |-MONITOR :: Monitor package ( pkg/monitor ).
1815 C/\ | | |
1816 C/\ | | |-DO_THE_MODEL_IO :: Standard diagnostic I/O.
1817 C/\ | | | |-WRITE_STATE :: Core state I/O
1818 C/\ | | | |-TIMEAVE_STATV_WRITE :: Time averages. see pkg/timeave
1819 C/\ | | | |-AIM_WRITE_DIAGS :: Intermed. atmos diags. see pkg/aim
1820 C/\ | | | |-GMREDI_DIAGS :: GM diags. see pkg/gmredi
1821 C/\ | | | |-KPP_DO_DIAGS :: KPP diags. see pkg/kpp
1822 C/\ | | | |-SBO_CALC :: SBO diags. see pkg/sbo
1823 C/\ | | | |-SBO_DIAGS :: SBO diags. see pkg/sbo
1824 C/\ | | | |-SEAICE_DO_DIAGS :: SEAICE diags. see pkg/seaice
1825 C/\ | | | |-GCHEM_DIAGS :: gchem diags. see pkg/gchem
1826 C/\ | | |
1827 C/\ | | |-WRITE_CHECKPOINT :: Do I/O for restart files.
1828 C/\ | |
1829 C/\ | |-COST_TILE :: Cost function package. ( see pkg/cost )
1830 C<===|=|
1831 C<===|=| **************************
1832 C<===|=| END MAIN TIMESTEPPING LOOP
1833 C<===|=| **************************
1834 C<===|=|
1835 C | |-COST_FINAL :: Cost function package. ( see pkg/cost )
1836 C |
1837 C |-WRITE_CHECKPOINT :: Final state storage, for restart.
1838 C |
1839 C |-TIMER_PRINTALL :: Computational timing summary
1840 C |
1841 C |-COMM_STATS :: Summarise inter-proc and inter-thread communication
1842 C :: events.
1843 C
1844 \end{verbatim}
1845 }
1846
1847 \subsection{Measuring and Characterizing Performance}
1848
1849 TO BE DONE (CNH)
1850
1851 \subsection{Estimating Resource Requirements}
1852
1853 TO BE DONE (CNH)
1854
1855 \subsubsection{Atlantic 1/6 degree example}
1856 \subsubsection{Dry Run testing}
1857 \subsubsection{Adjoint Resource Requirements}
1858 \subsubsection{State Estimation Environment Resources}
1859
