Description of Merge


The following is a description of the proposed process that will be used to turn Stage 2 data into a consolidated master database (Stage 3). Currently this is a work in progress, and feedback is greatly appreciated. The plan is to release the underlying code at the time of the databank release so that it is fully open and transparent.


Description of Merge (20120606)

An overview of the latest merge program is here: ftp://ftp.ncdc.noaa.gov/pub/data/globaldatabank/documents/merge_update_20120606.pdf

Description of Merge (20120214)

An overview of this version of the merge program is here: ftp://ftp.ncdc.noaa.gov/pub/data/globaldatabank/documents/Databank-MergingMethodology-14Feb12.pdf

Some graphics / tables based upon the results of this version of the merge program are here: ftp://ftp.ncdc.noaa.gov/pub/data/globaldatabank/documents/merge_update_20120214.pdf

Description of Merge (20120127)

Still looking only at TMAX, the merge program now goes through 36 different sources. There are nearly 3 billion station comparisons (2,986,705,336), and the process takes a little over five hours.

The same three metadata probabilities are calculated (geographic distance, height distance, Jaccard name similarity). For all candidate stations whose posterior probability, formed by multiplying the three prior probabilities, is greater than 0.7, data metrics are then tested to choose which station has the best information needed for merging.

There are two different scenarios for data comparisons: data that overlap and data that do not. For overlapping stations, a direct comparison can be made through the Root Mean Square Deviation (RMSD). A probability is formed using the formula 1/(1+RMSD). The station with the highest "RMSD probability" is chosen for the merge.
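
As a concrete illustration, here is a minimal Python sketch of the overlap comparison. The function name and the NaN-based array layout are assumptions for illustration, not part of the released code; only the 1/(1+RMSD) scaling comes from the text above.

    import numpy as np

    def rmsd_probability(candidate, target):
        # Inputs are assumed to be time-aligned monthly series with NaN
        # marking missing months (the array layout is an assumption).
        both = ~np.isnan(candidate) & ~np.isnan(target)
        if not both.any():
            return None  # no overlap: fall back to the non-overlap tests
        diff = candidate[both] - target[both]
        rmsd = np.sqrt(np.mean(diff ** 2))
        return 1.0 / (1.0 + rmsd)  # identical overlapping data scores 1.0

The candidate over the threshold with the highest such value would then be the one chosen for the merge.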

For data that do not overlap, the means and variances are compared. A t-test is applied to the means, and the resulting p-value is converted to a probability (one minus the p-value). For the variances, methods similar to the metadata metrics are used (a difference is calculated and passed through an exponential decay function). The metadata and data probabilities are then combined into a new posterior probability, and the station with the highest probability is chosen.
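
A comparable sketch for the non-overlap case, following the description above verbatim: the one-minus-p-value conversion is quoted directly from the text, and the var_scale decay constant is an assumption, since the document only states that an exponential decay is applied.

    import numpy as np
    from scipy import stats

    def non_overlap_probability(candidate, target, var_scale=1.0):
        a = candidate[~np.isnan(candidate)]
        b = target[~np.isnan(target)]
        # Mean comparison: two-sample t-test, with the p-value converted
        # to a probability as described above (one minus the p-value).
        _, pvalue = stats.ttest_ind(a, b)
        prob_mean = 1.0 - pvalue
        # Variance comparison: the difference is passed through an
        # exponential decay, analogous to the metadata metrics; the
        # decay scale is an assumed placeholder.
        prob_var = np.exp(-abs(np.var(a) - np.var(b)) / var_scale)
        # These data probabilities would then be multiplied with the
        # metadata probabilities to form the new posterior.
        return prob_mean * prob_var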

Description of Merge (20120106)

1) The merge program loops through 29 different "sources"; to save time and resources, we are only focusing on TMAX at this time.

2) The ordering of the source list matters (see source hierarchy above); each source encountered in the source list has priority over previously listed sources.

3) The program begins by reading in the first source, ghcnd-raw. After that, it iterates from candidate source #2 through source #29, merging stations where appropriate and adding unique stations to the master set.

4) For any particular merge, the candidate stations are gone through one by one, and each is compared to every station within the already merged data set. In particular, three metadata metrics are calculated:

  • geographical distance between stations
  • elevation difference between stations
  • Jaccard name-similarity index

These metrics are all scaled to probabilities, and the total-probability threshold for potential inclusion is currently set at 0.7 (approximately 0.9 * 0.9 * 0.9). More specifically, for any one candidate station, the metrics are first calculated against all existing master (already merged) stations. If any stations exceed the 0.7 total probability, the station with the highest probability is chosen for the merge. That station is then removed from any possibility of being merged with any other candidate in the current candidate set.
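
To make the selection rule concrete, here is a minimal Python sketch of step 4. The decay scales, the bigram tokenization for the Jaccard index, and the dict-based station records are all assumptions made for illustration; only the multiply-the-three-probabilities-and-threshold-at-0.7 logic comes from the text.

    import math

    # Decay scales (km and m) are assumed placeholders; the text says only
    # that each metric is "appropriately scaled" to a probability.
    DIST_SCALE_KM, ELEV_SCALE_M = 25.0, 100.0

    def haversine_km(lat1, lon1, lat2, lon2):
        # Great-circle distance between the two stations in kilometres.
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp, dl = p2 - p1, math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2.0 * 6371.0 * math.asin(math.sqrt(a))

    def jaccard(name1, name2):
        # Jaccard similarity of the station names on character bigrams
        # (the tokenization is an assumption).
        s1 = {name1[i:i + 2] for i in range(len(name1) - 1)}
        s2 = {name2[i:i + 2] for i in range(len(name2) - 1)}
        return len(s1 & s2) / len(s1 | s2) if s1 | s2 else 0.0

    def metadata_posterior(cand, mast):
        # Product of the three metadata probabilities for one station pair;
        # stations are assumed to be dicts with lat/lon/elev/name keys.
        p_dist = math.exp(-haversine_km(cand["lat"], cand["lon"],
                                        mast["lat"], mast["lon"]) / DIST_SCALE_KM)
        p_elev = math.exp(-abs(cand["elev"] - mast["elev"]) / ELEV_SCALE_M)
        p_name = jaccard(cand["name"], mast["name"])
        return p_dist * p_elev * p_name

    def best_match(candidate, master_list, exclude=frozenset(), threshold=0.7):
        # Highest-probability master station above the 0.7 cut-off, or None;
        # the caller removes the winner from consideration for later candidates.
        best_p, best_m = 0.0, None
        for mast in master_list:
            if id(mast) in exclude:
                continue
            p = metadata_posterior(candidate, mast)
            if p > best_p:
                best_p, best_m = p, mast
        return best_m if best_p > threshold else None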

To Do List for the Merge Program (20120106)

1) Incorporate data metrics. We already have some under consideration, such as mean and variance; however, we have to handle both situations where the data records overlap and situations where they do not. Our first task is overlapping data, where we can calculate a simple metric such as RMSD and then normalize it to create a probability between 0 and 1. Non-overlapping data will be next, where we may have to consider comparisons of seasonal cycles.

2) Include all three elements in the merge (TMAX, TMIN, and TAVG).

Description of Multi-Elemental Merge (20111014)

Once a hierarchy is established, we begin the merge process source by source (i.e. Source 1 vs Source 2, then merged Source 1/2 vs Source 3, etc.), while maintaining all three elements (TMAX, TMIN, TAVG). Effectively, the same piece of pairwise comparison code is run nsources-1 times.
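
The overall control flow might look like the following skeleton, run once per additional source (nsources-1 pairwise passes). The three injected callables are hypothetical placeholders for the real I/O, comparison, and record-merging code; only the loop structure and the one-merge-per-master-station rule are taken from the descriptions on this page.

    def build_master(sources, read_source, best_match, merge_into):
        # Control-flow skeleton of the source-by-source merge; read_source,
        # best_match, and merge_into are placeholder callables (assumptions).
        master = list(read_source(sources[0]))       # e.g. ghcnd-raw first
        for source in sources[1:]:                   # remaining sources, in hierarchy order
            claimed = set()                          # master stations already merged this round
            for candidate in read_source(source):
                match = best_match(candidate, master, exclude=claimed)
                if match is not None:                # posterior above the cut-off
                    merge_into(match, candidate)
                    claimed.add(id(match))           # one merge per master station per source
                else:
                    master.append(candidate)         # unique station: add to master
        return master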

Multiple metrics can be calculated to determine whether two stations are the same:

  • METADATA METRICS
    • Geographical distance between 2 stations
    • Height distance between 2 stations
    • Name of station (using comparison metric such as Jaccard Index)
  • DATA METRICS
    • Compare the number of common months (i.e. months with non-missing data for both stations)
    • Ratio of common months (see the sketch after this list)
      • Number of times the data for common months are within +/- 1.1 of each other, over the total number of common months
    • Compare the means and standard deviations of the two stations
      • Possibly using the F-test / t-test
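
Here is a minimal sketch of the common-months ratio, assuming time-aligned monthly arrays with NaN for missing values; the function name and tolerance argument are illustrative.

    import numpy as np

    def common_month_ratio(a, b, tol=1.1):
        # Fraction of common (both non-missing) months whose values agree
        # to within +/- tol, per the definition above.
        common = ~np.isnan(a) & ~np.isnan(b)
        n = common.sum()
        if n == 0:
            return None  # no common months: the ratio is missing
        return np.sum(np.abs(a[common] - b[common]) <= tol) / n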

Here are some example Booleans that could be used to declare a station match (each is translated into code after the list below):

  • Geographical distance = 0, AND name is exactly the same
  • Distance = 0, AND name is not exactly the same, AND ratio of common months is greater than 0.4
  • Distance > 0, AND name is exactly the same, AND ratio of common months is greater than 0.4
  • Distance >= 0, AND ratio of common months is >= 0.99
  • 0 < Distance < 20, AND part of the name is contained within each other
  • 0 < Distance < 20, AND ratio of common months is missing, AND Name is exactly the same, AND difference in mean is +/- 0.5 AND difference in stdev is +/- 0.5
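
Translated directly into Python, those example Booleans might read as follows. The inputs are assumed to be precomputed, with ratio set to None when there are no common months and name_overlap meaning one name is contained within the other.

    def is_station_match(dist_km, same_name, name_overlap, ratio,
                         mean_diff, stdev_diff):
        if dist_km == 0 and same_name:
            return True
        if dist_km == 0 and not same_name and ratio is not None and ratio > 0.4:
            return True
        if dist_km > 0 and same_name and ratio is not None and ratio > 0.4:
            return True
        if ratio is not None and ratio >= 0.99:  # distance >= 0 always holds
            return True
        if 0 < dist_km < 20 and name_overlap:
            return True
        if (0 < dist_km < 20 and ratio is None and same_name
                and abs(mean_diff) <= 0.5 and abs(stdev_diff) <= 0.5):
            return True
        return False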

Alternatively, to avoid hard-wired decisions, such checks could be coded in an explicitly Bayesian framework, whereby each test is run and forms a suitably weighted 'prior', and all such priors are recombined to form a posterior probability of a station match. This is intuitively quite attractive, as most of these comparison statistics are in reality a continuum (e.g. a station with a reported latitude and longitude match within 1 second should carry more weight than one with one minute of separation) and are not well suited to ad hoc binary inclusion criteria.

If it is determined that there is a station match, the program then checks whether there are any non-common months. If so, a merge is performed. For common months, the non-missing data from the higher source in the hierarchy is given preference [Q: Are we going to mingle sources or simply leave the missing mask untouched in the higher-priority set?]. If a candidate station does not match any station in the master dataset, it is considered unique and added to the master dataset.
