gforget |
1.1 |
- the top level routine is loop_1x1_bulk.m. It does the following:
    [loops over years and 6-hourly records]
      load fields (e.g. by calling ncep_load_fields.m)
      compute bulk fluxes (exf_bulk_largeyeager04.m or gmaze_bulk_coare.m)
      compute net fluxes
      compute time averages (averagesFields.m)
      write to disk (averagesFields.m)
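The control flow of the driver above can be sketched as follows. This is a minimal Python stand-in for the MATLAB routine loop_1x1_bulk.m, not the actual implementation: all function bodies are hypothetical stubs, and only the loop structure (years, 6-hourly records, daily averaging) mirrors the description above.

```python
# Hypothetical sketch of the loop_1x1_bulk.m control flow.
# All functions below are illustrative stubs, not the real .m scripts.

def load_fields(year, rec):
    # stand-in for ncep_load_fields.m: return a fake atmospheric state
    return {"wind": 10.0, "sst": 290.0}

def compute_bulk(state):
    # stand-in for exf_bulk_largeyeager04.m / gmaze_bulk_coare.m
    return {"latent": -0.1 * state["wind"], "sensible": -0.05 * state["wind"]}

def compute_net_flux(state, turb):
    # sum the turbulent flux components into one net flux
    return turb["latent"] + turb["sensible"]

def loop_1x1_bulk(years, days=365, recs_per_day=4):
    """365-day years, 6-hourly records, daily averages per year."""
    out = {}
    for year in years:
        daily = []
        for day in range(days):
            nets = []
            for r in range(recs_per_day):
                state = load_fields(year, day * recs_per_day + r)
                nets.append(compute_net_flux(state, compute_bulk(state)))
            # cf. averagesFields.m: time-average the 6-hourly records
            daily.append(sum(nets) / recs_per_day)
        out[year] = daily  # write-to-disk step omitted in this sketch
    return out
```

With the constant stub fields above, each daily average is simply the constant net flux, and each year yields 365 daily values.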
- loop_1x1_flux.m and loop_1x1_flux_noicemask.m simply apply a similar
  formatting to flux data (no bulk formulae involved) to allow easy comparisons.
- a few more scripts:
    gpcp_load_atlas.m loads/averages the GPCP precipitation data set
    quickcow_load_atlas.m loads the quickcow wind stress atlas
    plot_bulk.m is a sample script to plot results
    domaine_global_def.m, domaine.m, ecmwf_grid.m and ncep_grid.m
      handle the data and computational grids
- the way the scripts are set up right now:
    they do not account for leap years (always 365 days); outputs are daily files in the local dir.
    fields are interpolated to the 1x1 degree computational grid.
    atm. state/flux data comes from NCEP, ECMWF/ERA40 or CORE/Large-Yeager.
    SST is Reynolds, runoff is Large and Yeager, ice coverage is from the Hadley Centre.
    links to the data sets (in Charmaine's directories and mine) are hardcoded.
    data sets are under /net/ross/raid* and /net/altix3700/raid*.
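The no-leap-year convention above means every year has exactly 365 days, i.e. 1460 six-hourly records. A small illustrative helper (Python, not part of the actual MATLAB scripts) shows how a record index maps to day-of-year and hour under this calendar:

```python
# Illustrative: map a 6-hourly record index (0..1459) to (day-of-year, hour)
# under the scripts' no-leap-year, 365-day convention. Hypothetical helper,
# not one of the .m scripts listed above.

def record_to_day_hour(rec):
    day, slot = divmod(rec, 4)   # 4 records per day
    return day + 1, slot * 6     # day-of-year in 1..365, hour in {0, 6, 12, 18}
```

For example, record 0 falls on day 1 at hour 0, and the last record of a year (index 1459) falls on day 365 at hour 18.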