Commit 455f18c

Improving documentation
1 parent 6c4a110 commit 455f18c

File tree

3 files changed: +138 -103 lines changed

.gitignore

+1 -1

@@ -9,7 +9,7 @@
 \#*\#
 scratch.m
 test_batch.m
-doc*
+#doc*

 # Compiled source #
 ###################

README.md

+35 -102
@@ -46,105 +46,38 @@ The rest of the paths needed for FluxProc can be set using
Now FluxProc code should be initialized and ready to use the data and
configuration files in the FLUXROOT directory.

Removed (old lines 49-150): the "## Further documentation" section, containing the old NMEG data processing manual. Apart from its heading, it is identical to the content added as doc/old_README.md, shown in full below.
Added (new lines 49-83):

## Task scripts

Common tasks have scripts that can be run with standard configurations and are easily modified. These scripts can be found in the [scripts](https://github.com/gremau/NMEG_FluxProc/scripts/) directory. Each script can be set to run for a list of sites and years, and to overwrite existing output files or not.
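A minimal sketch of that shared pattern (the site codes, variable names, and loop body here are illustrative, not code from the repository):

```matlab
% Illustrative site/year loop of the kind the task scripts share.
sites = { 'PJ', 'MCon' };   % hypothetical NMEG site codes
years = 2009:2012;          % years to (re)process
overwrite = false;          % leave existing output files in place

for i = 1:numel( sites )
    for yr = years
        % a real task script calls its processing function here
        fprintf( 'would process %s %d (overwrite = %d)\n', ...
                 sites{ i }, yr, overwrite );
    end
end
```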

### Create new "fluxall" files

Fluxall files ({site}_{year}_fluxall.txt) should contain the raw data from all sensors at a site for one year. The [script_make_fluxall.m](https://github.com/gremau/NMEG_FluxProc/scripts/script_make_fluxall.m) script makes these files, primarily by calling `card_data_processor.m` in various configurations and reading the raw data in the 'toa5' and 'ts_data' directories. Though these files should contain all sensor data, in practice some sites have dataloggers that have not yet been configured to be merged into the fluxall file (namely the Valles Caldera sites).
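As a quick check that a given site-year produced output, one might look for the file by its naming template (the path layout assumed here is a guess; only the {site}_{year}_fluxall.txt pattern and the FLUXROOT directory come from the text above):

```matlab
% Locate and summarize one site-year fluxall file (path layout assumed).
fluxroot = getenv( 'FLUXROOT' );   % FLUXROOT per the setup section above
site = 'PJ';                       % hypothetical site code
year = 2011;
fpath = fullfile( fluxroot, sprintf( '%s_%d_fluxall.txt', site, year ) );
if exist( fpath, 'file' )
    T = readtable( fpath, 'FileType', 'text', 'Delimiter', '\t' );
    fprintf( '%s: %d rows, %d variables\n', fpath, height( T ), width( T ) );
else
    fprintf( 'no fluxall file at %s\n', fpath );
end
```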

### Create new "qc", "for_gapfilling", and "for_gapfilling_filled" files

The NMEG quality control pipeline creates several files, all output to the 'processed_flux' directory. These are:

1. qc files ({site}_{year}_fluxall_qc.txt): contain all variables that are quality controlled and then output by the `RemoveBadData.m` script.

2. for_gapfilling files ({site}_flux_all_{year}_for_gap_filling.txt): also output by the `RemoveBadData.m` script; these contain a subset of the quality-controlled variables in a format ready to be filled with ancillary met data.

3. for_gapfilling_filled files ({site}_flux_all_{year}_for_gap_filling_filled.txt): same as the file above, but gaps in the met variables have been filled with ancillary met data by the `UNM_fill_met_gaps_from_nearby_site.m` script.

To make these files, run the [script_make_qc_gf.m](https://github.com/gremau/NMEG_FluxProc/scripts/script_make_qc_gf.m) script. It may also run the REddyProc gapfilling tool by calling the [R code from the Max Planck Institute](https://www.bgc-jena.mpg.de/bgi/index.php/Services/REddyProcWebRPackage); that output (also in 'processed_flux') can be used to make the AmeriFlux files described below, if desired.
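Underneath the script, the per-site-year sequence is presumably along these lines (a sketch only: the argument lists are assumptions, and the real functions may take numeric site codes or extra options; see their Matlab help):

```matlab
% Assumed (site, year) interfaces; check 'help UNM_RemoveBadData' etc.
site = 'PJ';   % hypothetical site code
year = 2011;
UNM_RemoveBadData( site, year );                   % writes qc and for_gapfilling files
UNM_fill_met_gaps_from_nearby_site( site, year );  % writes for_gapfilling_filled file
```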

### Create new AmeriFlux files

AmeriFlux files ({af-site}_{year}_gapfilled.txt and {af-site}_{year}_with_gaps.txt) contain quality-controlled sensor data, gapfilled met data, gapfilled fluxes, and partitioned C fluxes. Several steps are currently needed to create them (a sketch of the Matlab side follows the list):

1. Send the 'for_gapfilling_filled' file for each site-year to the [MPI EddyProc web service](http://www.bgc-jena.mpg.de/~MDIwork/eddyproc/upload.php). This service provides gapfilled and partitioned flux data, and it is currently the only way we have to get the Lasslop-partitioned fluxes used for the lower-elevation NMEG sites.

2. Once you receive notification by email that the partitioner has finished, copy the job number and run `download_gapfilled_partitioned_flux(job#)`. This will download the resulting files to the 'processed_flux' directory.

3. Run [script_make_ameriflux.m](https://github.com/gremau/NMEG_FluxProc/scripts/script_make_ameriflux.m), which will call `UNM_Ameriflux_File_Maker.m` with the specified configuration options and output the new AmeriFlux files to 'FLUXROOT/FluxOut/'.
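Steps 2 and 3 might look like this at the Matlab prompt (the job number is a made-up placeholder, and the argument type of `download_gapfilled_partitioned_flux` is an assumption):

```matlab
job = 12345;                                 % placeholder; use the number from the MPI email
download_gapfilled_partitioned_flux( job );  % step 2: fetch results into 'processed_flux'
script_make_ameriflux                        % step 3: write files to FLUXROOT/FluxOut/
```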

## Additional documentation

Additional documentation can be found in the [doc](https://github.com/gremau/NMEG_FluxProc/doc/) directory.

doc/old_README.md

+102
@@ -0,0 +1,102 @@
# Old README for NMEG_FluxProc

Below is the old UNM New Mexico Elevation Gradient data processing manual,
by Timothy W. Hilton (hilton@unm.edu) from around July 2012. It is very
out of date. We will begin updating the documentation in the near future.

### OVERVIEW

This README presents Matlab functions we have developed to process and
view data collected from the New Mexico Elevation Gradient (NMEG) eddy
covariance sites and their associated data.

In general, user-level main functions (things that are intended to be
called from a Matlab command line) are named UNM_*.m, and helper
functions do not have the "UNM_" prefix.

#### Documentation

I have tried to consistently include in each m-file descriptive
documentation immediately following the function definition, so that
calling 'help' or 'doc' on the function from the Matlab prompt will
display self-contained documentation. Thus, this readme document will
not discuss function usage and interfaces in detail -- use the Matlab
help!
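For example, to view the built-in documentation for one of the functions listed further down:

```matlab
% Print the descriptive header of a parsing function at the Matlab prompt.
help UNM_parse_fluxall_txt_file
```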

#### Source control management

The code is version-controlled in a Mercurial
(http://mercurial.selenic.com/) repository. It is not necessary to
use the version control; you may simply ignore the .hg subdirectory,
or delete it to permanently disable version control. There is a very
good tutorial at http://hginit.com/ if you are unfamiliar with
Mercurial (or source control management tools in general) and wish to
learn how to use it. The revision history steps sequentially back to
15 August 2011.

### USER-LEVEL FUNCTION SUMMARY

There are four main user-level data processing Matlab functions:

* UNM_retrieve_card_data_GUI.m
* UNM_RemoveBadData.m
* UNM_fill_met_gaps_from_nearby_site.m
* UNM_Ameriflux_file_maker_TWH.m

There are several functions to parse data files from various stages of
the data processing pipeline into Matlab:

* UNM_parse_QC_txt_file.m
* UNM_parse_QC_xls_file.m
* UNM_parse_fluxall_txt_file.m
* UNM_parse_fluxall_xls_file.m
* UNM_parse_gapfilled_partitioned_output.m
* UNM_parse_sev_met_data.m
* UNM_parse_valles_met_data.m
* parse_forgapfilling_file.m
* parse_ameriflux_file.m

There are also a number of functions to visualize flux data. Some are
called from within the processing functions listed above; some are
also independently useful.

* plot_fingerprint.m
* UNM_site_plot_doy_time_offsets.m
* UNM_site_plot_fullyear_time_offsets.m
* plot_siteyear_fingerprint_2x3array.m
* plot_siteyear_fingerprint_single.m

### DATA PROCESSING PIPELINE SUMMARY

The steps for processing incoming data from the field sites are listed
below. I have attempted to make the processing routines somewhat robust
to data glitches: missing data, mangled text, mangled file names, etc.
There is (as always) more work that could be done in that arena; for
now, if something breaks, the best bet is to step into the Matlab code
and debug.

1. Insert the datalogger flash card into the computer.
2. Within Matlab, call UNM_retrieve_card_data_GUI. This copies the
   data to disk and displays a figure that plots each 30-minute data
   field sequentially. Step through each field and scan the plot to
   make sure it looks reasonable! When done, close the plot figure.
   Matlab will now open Campbell Scientific's CardConvert to process
   the raw data into TOA5 files and daily 10 Hz TOB1 files, copy those
   files to their backup locations, compress the raw data, and copy
   the compressed and uncompressed raw data to their backup locations.
   The final step will require the user to manually enter a password
   to transfer the data to the EDAC FTP server.
3. Run UNM_RemoveBadData. Scan the resulting plots for problems in
   the data and fix any problems that arise.
4. Run UNM_fill_met_gaps_from_nearby_site.
5. Send the SITE_YEAR_for_gapfilling_filled.txt file through the online
   flux gapfiller/flux partitioner:
   http://www.bgc-jena.mpg.de/~MDIwork/eddyproc/upload.php.
6. From bash, call download_partitioned_data to download the gapfilled,
   partitioned data.
7. Call UNM_Ameriflux_file_maker_TWH.m.
8. Upload the Ameriflux files to soccoro.unm.edu.
