Implemented new I/O package (mdsio.F). This package does parallel I/O in much the same way as dfile.F used to, except that it uses "direct access" rather than (f77) unformatted sequential access.

Problems with the dfile.F package included:
 o unnecessary memory use (each process had two global-sized buffers)
 o inability to read the files it had written without post-processing
 o "tiled" files were tiled by process/thread rather than by actual tiles
 o created huge numbers of files, with no alternatives

Features of the mdsio.F package:
 o direct-access binary writes
 o no excessive memory use
 o ability to read/write multiple-record files
 o "tiled" files are based on "WRAPPER" tiles, so that the number and content of files is independent of the number of threads and/or processes
 o option to create single "global" files rather than "tiled" files
 o ability to read both "global" and "tiled" files
   [Caveat: the tiling of files must match the model tiles]
 o checkpoints now use a single file per model section, i.e. one file for the hydrostatic model core, one for the non-hydrostatic extensions and one for the C-D extensions
 o the mid-level I/O routines are now split across more source files:
     read_write_fld.F supplies basic I/O routines with the same interface as the original I/O package
     read_write_rec.F supplies I/O routines which allow multiple records
     write_state.F writes the model state
     checkpoint.F supplies the read/write checkpoint routines

All the example input data has had to be converted to direct access. Otherwise only routines that use I/O have been affected, and not all of those have been, since the read_write_fld.F routines keep the same arguments as before.

What needs to be done? We have to create a suite of conversion utilities for users with old-style data. We should also supply the option of using old-style I/O, not just for die-hards but for reading data too extensive to be converted. And more...
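The conversion utilities mentioned above would essentially strip the per-record bookkeeping that f77 sequential unformatted output adds. A minimal sketch, assuming the common compiler convention of a 4-byte big-endian length marker before and after each record (the function name is illustrative, not part of this commit; marker width and endianness vary by compiler and platform):

```python
import struct

def strip_f77_records(seq_path, dat_path):
    """Convert an f77 unformatted *sequential* file to raw direct-access
    binary by removing the 4-byte record-length markers that most f77
    compilers write before and after each record.  Assumes big-endian,
    4-byte markers; adjust for your compiler if needed."""
    with open(seq_path, "rb") as src, open(dat_path, "wb") as dst:
        while True:
            head = src.read(4)
            if not head:                    # clean end of file
                break
            (nbytes,) = struct.unpack(">i", head)
            payload = src.read(nbytes)      # the record data itself
            tail = src.read(4)              # trailing length marker
            assert tail == head, "corrupt or mismatched record marker"
            dst.write(payload)
```

The resulting file contains the record payloads back to back, which is exactly what a direct-access OPEN with a matching RECL expects.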
# ====================
# | Model parameters |
# ====================
#
# Continuous equation parameters
 &PARM01
 tRef= 0.696834, 0.497738, 0.298643, 0.0995477, -0.0995477, -0.298643, -0.497738, -0.696834,
 sRef= 8*35.,
 viscAz=1.E-3,
 viscAh=1.E3,
 no_slip_sides=.FALSE.,
 no_slip_bottom=.FALSE.,
 viscA4=0.E12,
 diffKhT=1.E3,
 diffKzT=1.E-5,
 diffKhS=1.E3,
 diffKzS=1.E-5,
 GMkBackground=0.d3,
 f0=1.e-4,
 beta=0.E-11,
 tAlpha=2.E-4,
 sBeta =0.E-4,
 gravity=9.81,
 gBaro=9.81,
 rigidLid=.FALSE.,
 implicitFreeSurface=.TRUE.,
 eosType='LINEAR',
 hFacMin=0.2,
 openBoundaries=.TRUE.,
 nonHydrostatic=.FALSE.,
 readBinaryPrec=64,
 globalFiles=.TRUE.,
 &

# Elliptic solver parameters
 &PARM02
 cg2dMaxIters=1000,
 cg2dTargetResidual=1.E-13,
 cg3dMaxIters=400,
 cg3dTargetResidual=1.E-13,
 &

# Time stepping parameters
 &PARM03
 niter0=0,
 nTimeSteps=10,
 deltaT=600.0,
 abEps=0.1,
 pChkptFreq=3000.0,
 chkptFreq=0.0,
 dumpFreq=6000.0,
 &

# Gridding parameters
 &PARM04
 usingCartesianGrid=.TRUE.,
 usingSphericalPolarGrid=.FALSE.,
 delX=80*5.e3,
 delY=42*5.e3,
 delZ= 8*562.5,
 &

# Input datasets
 &PARM05
 bathyFile='topog.bump',
 &

# Open-boundaries
 &PARM06
 OB_Jnorth=80*-1,
 OB_Jsouth=80*1,
 OB_Ieast=42*-1,
 OB_Iwest=42*1,
 &