/[MITgcm]/MITgcm/eesupp/src/read_field.F

Contents of /MITgcm/eesupp/src/read_field.F



Revision 1.5
Wed Mar 15 16:00:52 2000 UTC by adcroft
Branch: MAIN
CVS Tags: checkpoint26, branch-atmos-merge-start, checkpoint27, checkpoint33, checkpoint32, checkpoint31, checkpoint30, checkpoint34, branch-atmos-merge-zonalfilt, branch-atmos-merge-shapiro, checkpoint28, checkpoint29, branch-atmos-merge-phase5, branch-atmos-merge-phase4, branch-atmos-merge-phase7, branch-atmos-merge-phase6, branch-atmos-merge-phase1, checkpoint25, branch-atmos-merge-phase3, branch-atmos-merge-phase2, branch-atmos-merge-freeze
Branch point for: branch-atmos-merge
Changes since 1.4: +5 -1 lines
Memory saving updates.
 o DFILE.h has been cpp'd out with USE_DFILE
 o EEIO.h has been cpp'd out with USE_EEIO
 o EXCH.h uses NUMBER_OF_BUFFER_LEVELS=1 instead of 10

C $Header: /u/gcmpack/models/MITgcmUV/eesupp/src/read_field.F,v 1.4 1999/07/30 15:42:56 adcroft Exp $

#include "CPP_EEOPTIONS.h"

      SUBROUTINE READ_FIELD_XYZR8(
     O           fld,
     I           filNam, filFmt, myThid )
C     /==========================================================\
C     | SUBROUTINE READ_FIELD_XYZR8                              |
C     | o Reads a file into three-dimensional model array        |
C     |==========================================================|
C     | Routine that controls the reading of external datasets   |
C     | into the model. In a multi-threaded and/or MPI world     |
C     | this can be a non-trivial exercise. Here we use the      |
C     | following approach:                                      |
C     | Thread 1. reads data for the complete domain i.e. all    |
C     | processes and all threads into a buffer. Each individual |
C     | thread then reads its portion of data into the actual    |
C     | model array. This is clean because there is just one     |
C     | input file with a single format irrespective of the      |
C     | number of processes or threads in use. However, it has   |
C     | several potential difficulties.                          |
C     | 1. Very large problems may have individual fields of     |
C     |    several GB. For example 1/20th degree global and      |
C     |    fifty levels is 10GB per field at 8 byte precision.   |
C     | 2. MPI 1.nn is vague about I/O support - not all         |
C     |    processes have to support I/O.                        |
C     |    MPI 2. includes a standard API for distributed data,  |
C     |    parallel I/O. If applications funnel all their field  |
C     |    I/O through this routine then adopting this or some   |
C     |    alternative should be fairly straightforward.         |
C     | In the light of problem 1. the following strategy        |
C     | is adopted. Files are read one layer at a time. After    |
C     | each layer has been read there is a barrier and then     |
C     | the threads all copy data from the buffer to the arrays. |
C     | This creates a lower-performance I/O code but reduces    |
C     | the degree to which a single large array is required for |
C     | the master thread. To be consistent with this, binary    |
C     | input files must be written by code of the form          |
C     |   WRITE(N) ((array(I,J,K),I=1,Nx),J=1,Ny)                |
C     | rather than of the form                                  |
C     |   WRITE(N) array                                         |
C     | The approach taken here also avoids one other ugly       |
C     | behaviour. On several systems even Fortran internal      |
C     | reads and writes are not thread-safe. This means that    |
C     | the portion of the code that builds file names has to    |
C     | be a critical section. However, if only the master       |
C     | thread is interested in the value of the file name then  |
C     | only the master need set its value.                      |
C     | Finally the I/O performed here is for the whole XY       |
C     | domain - even under MPI. The input files can stay the    |
C     | same no matter what processor count is being used.       |
C     | This is not a scalable approach to I/O and MPI 2 has     |
C     | much better support for this. Output is handled          |
C     | differently. By default output files are written split   |
C     | and have to be merged in a post-processing stage - YUK!  |
C     \==========================================================/
      IMPLICIT NONE

C     == Global variables ==
#include "SIZE.h"
#include "EEPARAMS.h"
#include "EESUPPORT.h"
#include "EEIO.h"

C     == Routine arguments ==
C     fld    - Array into which data will be written.
C     filNam - Name of file to read.
C     filFmt - Format to use to read the file.
C     myThid - Thread number for this instance of the routine.
      _RL fld(1-OLx:sNx+OLx,1-OLy:sNy+OLy,1:Nr,nSx,nSy)
      CHARACTER*(*) filNam
      CHARACTER*(*) filFmt
      INTEGER myThid
#ifdef USE_EEIO

C     == Local variables ==
C     msgBuf       - Variable for writing error messages
C     I,J,K, bi,bj - Loop counters
C     iG, jG       - Global indices of current point
C     dUnit        - Unit number for file I/O
C     ioStatus     - I/O error code
      CHARACTER*(MAX_LEN_MBUF) msgBuf
      INTEGER I
      INTEGER J
      INTEGER K
      INTEGER bi
      INTEGER bj
      INTEGER iG, jG
      INTEGER dUnit
      INTEGER ioStatus
C
      dUnit = 42
C--   Open the file
C     Note: The error trapping here is inelegant. There is no
C     easy way to tell other threads and/or MPI processes that
C     there was an error. Here we simply STOP if there is an error.
C     Under a multi-threaded mode this will halt all the threads.
C     Under MPI the other processes may die or they may just hang!
      _BEGIN_MASTER(myThid)
       OPEN(dUnit,FILE=filNam,FORM='unformatted',STATUS='old',
     &      IOSTAT=ioStatus)
       IF ( ioStatus .GT. 0 ) THEN
        WRITE(msgBuf,'(A)')
     &   'S/R READ_FIELD_XYZR8'
        CALL PRINT_ERROR( msgBuf , myThid )
        WRITE(msgBuf,'(A)')
     &   'Open for read failed for'
        CALL PRINT_ERROR( msgBuf , myThid )
        WRITE(msgBuf,'(A,A50)')
     &   'file ',filNam
        CALL PRINT_ERROR( msgBuf , myThid )
        STOP 'ABNORMAL END: S/R READ_FIELD_XYZR8'
       ENDIF
      _END_MASTER(myThid)
      DO K = 1, Nr
C--    Read data from file one XY layer at a time
       _BEGIN_MASTER(myThid)
C       READ ...
        DO J=1,Ny
         DO I=1,Nx
          IF     ( filNam(1:1) .EQ. 'u' ) THEN
           IO_tmpXY_R8(I,J) = 0.0 _d 0
           IF ( J .GT. 15 .AND. J .LT. 24 )
     &      IO_tmpXY_R8(I,J) = 0.1 _d 0
          ELSEIF ( filNam(1:1) .EQ. 'v' ) THEN
           IO_tmpXY_R8(I,J) = 0.0 _d 0
          ELSE
           IO_tmpXY_R8(I,J) = 0.0 _d 0
          ENDIF
         ENDDO
        ENDDO
       _END_MASTER(myThid)
       _BARRIER
C--    Copy data into per thread data structures
       DO bj=myByLo(myThid),myByHi(myThid)
        DO bi=myBxLo(myThid),myBxHi(myThid)
         DO J=1,sNy
          DO I=1,sNx
           iG = myXGlobalLo+(bi-1)*sNx+I-1
           jG = myYGlobalLo+(bj-1)*sNy+J-1
           fld(I,J,K,bi,bj) = IO_tmpXY_R8(iG,jG)
          ENDDO
         ENDDO
        ENDDO
       ENDDO
       _BARRIER
      ENDDO
C
      _EXCH_XYZ_R8(fld, myThid )
C
#endif /* USE_EEIO */

      RETURN
      END
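The boxed comment above requires that binary input files be written one XY layer per unformatted sequential record, i.e. WRITE(N) ((array(I,J,K),I=1,Nx),J=1,Ny) inside a K loop. A minimal sketch of producing such a file from Python is below; it assumes gfortran-style 4-byte little-endian record-length markers around each record, which is compiler-dependent, and the function name and arguments are illustrative, not part of MITgcm:

```python
import struct

def write_layered_field(path, field, nx, ny, nr):
    """Write a 3-D field as Fortran unformatted sequential records,
    one nx*ny layer of 8-byte reals per record, matching
        WRITE(N) ((array(I,J,K),I=1,Nx),J=1,Ny)
    field[k][j][i] holds the value at (I=i+1, J=j+1, K=k+1).
    Record markers are 4-byte little-endian ints (compiler-dependent;
    this matches gfortran's default sequential format)."""
    reclen = nx * ny * 8  # bytes of payload per layer
    with open(path, "wb") as f:
        for k in range(nr):
            # payload in Fortran order: I varies fastest, then J
            payload = b"".join(
                struct.pack("<d", field[k][j][i])
                for j in range(ny) for i in range(nx))
            f.write(struct.pack("<i", reclen))  # leading length marker
            f.write(payload)
            f.write(struct.pack("<i", reclen))  # trailing length marker
```

A file written this way can be consumed layer-by-layer on the Fortran side with one READ per K level, which is exactly the access pattern the strategy above depends on.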
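The per-thread copy loop maps each 1-based tile-local point (I, J) on tile (bi, bj) into the global buffer via iG = myXGlobalLo+(bi-1)*sNx+I-1 and jG = myYGlobalLo+(bj-1)*sNy+J-1. A small sketch of that arithmetic, with illustrative names and 1-based defaults standing in for this process's myXGlobalLo/myYGlobalLo:

```python
def to_global(i, j, bi, bj, snx, sny,
              my_x_global_lo=1, my_y_global_lo=1):
    """Map a 1-based tile-local point (i, j) on tile (bi, bj) of
    size snx x sny to the 1-based global indices (iG, jG) used to
    address the shared I/O buffer, mirroring
        iG = myXGlobalLo+(bi-1)*sNx+I-1
        jG = myYGlobalLo+(bj-1)*sNy+J-1
    (function name and defaults are illustrative)."""
    iG = my_x_global_lo + (bi - 1) * snx + i - 1
    jG = my_y_global_lo + (bj - 1) * sny + j - 1
    return iG, jG
```

Because each thread only loops over its own bi/bj range, every thread copies a disjoint slab of the shared buffer, so no locking is needed between the two barriers.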
