--- manual/s_software/text/sarch.tex	2004/01/28 19:59:03	1.12
+++ manual/s_software/text/sarch.tex	2004/04/20 23:32:59	1.19
@@ -1,4 +1,4 @@
-% $Header: /home/ubuntu/mnt/e9_copy/manual/s_software/text/sarch.tex,v 1.12 2004/01/28 19:59:03 afe Exp $
+% $Header: /home/ubuntu/mnt/e9_copy/manual/s_software/text/sarch.tex,v 1.19 2004/04/20 23:32:59 edhill Exp $
 
 This chapter focuses on describing the {\bf WRAPPER} environment within which
 both the core numerics and the pluggable packages operate. The description
@@ -76,6 +76,9 @@
 \end{figure}
 
 \section{WRAPPER}
+\begin{rawhtml}
+
+\end{rawhtml}
 
 A significant element of the software architecture utilized in
 MITgcm is a software superstructure and substructure collectively
@@ -150,6 +153,9 @@
 computer architecture currently available to the scientific computing community.
 
 \subsection{Machine model parallelism}
+\begin{rawhtml}
+
+\end{rawhtml}
 
 Codes operating under the WRAPPER target an abstract machine that is assumed to
 consist of one or more logical processors that can compute concurrently.
@@ -977,16 +983,17 @@
 routines must be activated at compile time.
 Currently MPI libraries are invoked by
 specifying the appropriate options file with the
-\begin{verbatim}-of\end{verbatim} flag when running the {\em genmake2}
+{\tt -of} flag when running the {\em genmake2}
 script, which generates the Makefile for compiling and
 linking MITgcm. (Previously this was done by setting the
 {\em ALLOW\_USE\_MPI} and {\em ALWAYS\_USE\_MPI}
 flags in the {\em CPP\_EEOPTIONS.h} file.) More
 detailed information about the use of {\em genmake2}
 for specifying
-local compiler flags is located in section 3 ??\\
+local compiler flags is located in section \ref{sect:genmake}.\\
 \fbox{
 \begin{minipage}{4.75in}
+Directory: {\em tools/build\_options}\\
 File: {\em tools/genmake2}
 \end{minipage}
 } \\
@@ -1005,7 +1012,7 @@
 in the file {\em SIZE.h}. The parameter {\em mf}
 specifies that a text file called ``mf'' will be read to get
 a list of processor names on which the sixty-four processes will
 execute. The syntax of this file
-is specified by the MPI distribution
+is specified by the MPI distribution.
 \\
 \fbox{
@@ -1056,15 +1063,20 @@
 Allocation of processes to tiles is controlled by the routine
 {\em INI\_PROCS()}. For each process this routine sets
 the variables {\em myXGlobalLo} and {\em myYGlobalLo}.
-These variables specify (in index space) the coordinate
-of the southern most and western most corner of the
-southern most and western most tile owned by this process.
+These variables specify in index space the coordinates
+of the southernmost and westernmost corner of the
+southernmost and westernmost tile owned by this process.
 The variables {\em pidW},
 {\em pidE}, {\em pidS} and {\em pidN} are also set in this
 routine. These are used to identify processes holding tiles to
 the west, east, south and north of this process. These values
 are stored in global storage in the header file
 {\em EESUPPORT.h} for use by
-communication routines.
+communication routines. The above does not hold when the
+exch2 package is used -- exch2 sets its own parameters to
+specify the global indices of tiles and their relationships
+to each other. See the documentation on the exch2 package
+(\ref{sec:exch2}) for
+details.
 \\
 \fbox{
@@ -1090,10 +1102,13 @@
 describes the information that is held and used.
 
 \begin{enumerate}
-\item {\bf Tile-tile connectivity information} For each tile the WRAPPER
-sets a flag that sets the tile number to the north, south, east and
+\item {\bf Tile-tile connectivity information}
+For each tile the WRAPPER
+sets a flag that sets the tile number to the north,
+south, east and
 west of that tile. This number is unique over all tiles in a
-configuration. The number is held in the variables {\em tileNo}
+configuration. Except when using the cubed sphere and the exch2 package,
+the number is held in the variables {\em tileNo}
 (this holds the tile's own number), {\em tileNoN}, {\em tileNoS},
 {\em tileNoE} and {\em tileNoW}. A parameter is also stored with each tile
 that specifies the type of communication that is used between tiles.
@@ -1116,7 +1131,13 @@
 (see figure \ref{fig:communication_primitives}).
 The routine {\em ini\_communication\_patterns()}
 is responsible for setting the communication mode
 values for each tile.
-\\
+
+When using the cubed sphere configuration with the exch2 package, the
+relationships between tiles and their communication methods are set
+by the package in other variables. See the exch2 package documentation
+(\ref{sec:exch2}) for details.
+
+
 \fbox{
 \begin{minipage}{4.75in}
@@ -1252,7 +1273,9 @@
 the cube-sphere grid. In this class of grid a rotation may be
 required between tiles. Aligning the coordinate requiring rotation
 with the tile decomposition allows the coordinate transformation to
-be embedded within a custom form of the \_EXCH primitive.
+be embedded within a custom form of the \_EXCH primitive. In these
+cases \_EXCH is mapped to exch2 routines, as detailed in the exch2
+package documentation (\ref{sec:exch2}).
 
 \item {\bf Reverse Mode}
 The communication primitives \_EXCH and \_GSUM both employ
@@ -1269,6 +1292,7 @@
 is set to the value {\em REVERSE\_SIMULATION}. This signifies to
 the low-level routines that the adjoint forms of the appropriate
 communication operation should be performed.
+
 \item {\bf MAX\_NO\_THREADS}
 The variable {\em MAX\_NO\_THREADS} is used to indicate the maximum
 number of OS threads that a code will use. This
@@ -1373,7 +1397,7 @@
 This is done to allow a large number of variations on the exchange
 process to be maintained. One set of variations supports the
 cube sphere grid. Support for a cube sphere grid in MITgcm is based
-on having each face of the cube as a separate tile (or tiles).
+on having each face of the cube as a separate tile or tiles.
 The exchange routines are then able to absorb much of the
 detailed rotation and reorientation required when moving around the
 cube grid. The set of {\em \_EXCH} routines that contain the
@@ -1397,6 +1421,9 @@
 
 
 \section{MITgcm execution under WRAPPER}
+\begin{rawhtml}
+
+\end{rawhtml}
 
 Fitting together the WRAPPER elements, package elements and
 MITgcm core equation elements of the source code produces calling
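
To illustrate the build-and-run sequence that the changes above
document, here is a minimal sketch of compiling and launching an
MPI-enabled MITgcm executable. The options file
{\em linux\_ia32\_g77} is only an example of the per-platform files
kept in {\em tools/build\_options} (substitute the one matching your
compiler), and the relative paths assume a build directory three
levels below the source root, as in the verification experiments:
\begin{verbatim}
% ../../../tools/genmake2 -mpi \
    -of ../../../tools/build_options/linux_ia32_g77
% make depend
% make
% mpirun -np 64 -machinefile mf ./mitgcmuv
\end{verbatim}
Here {\tt -mpi} activates the MPI communication routines at compile
time, {\tt -of} selects the options file, and {\em mpirun} reads the
text file ``mf'' to obtain the list of processor names on which the
sixty-four processes will execute, as described in the section text.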