Main Page


Embedded Pages

  • Monthly Sources
  • Daily Sources

Information about the Global Databank

To deliver climate services for the benefit of society, we need to develop and deliver a suite of monitoring products spanning hourly to century timescales, and from location-specific measures to the global mean. As the study of climate science grows in importance for decision and policy making, with decisions that could have multi-billion-dollar ramifications, the expectations and requirements placed on our data products will continue to grow. Society expects openness and transparency in the process, and a greater understanding of the certainty regarding how climate has changed and how it will continue to change.

Necessary steps to deliver on these requirements for observed land surface temperatures were discussed at a meeting held at the UK Met Office in September 2010, attended by climate scientists, measurement scientists, statisticians, economists and software/IT specialists. The meeting followed a submission to the WMO Commission for Climatology from the UK Met Office, which was expanded upon in an invited opinion piece for Nature. Meeting discussions were based upon white papers solicited from authors with specialist knowledge in the relevant areas, which were open for public comment for over a month. The meeting initiated an envisaged multi-year project, for which this website constitutes the focal point. As work continues, this site and the accompanying moderated blog (used primarily to disseminate news items and solicit comments) will together form the central hub of the effort.

The over-arching implementation plan provides details of, and the rationale for, the planned progress of the initiative. The first necessary step in the envisaged process is the creation, for the first time, of a single comprehensive international databank of the actual surface meteorological observations taken globally at monthly, daily and sub-daily resolutions. This databank will be version controlled and will seek to ascertain data provenance, preferably enabling researchers to drill down all the way to the original data record (see Figure). It will also have associated metadata (data describing the data), including images and changes in instrumentation and practices to the extent known. The databank effort will be run internationally and for the benefit of all. The effort required to create and maintain such a databank is substantial, and the task is envisaged as open-ended, both because there is a wealth of data to recover and incorporate and because the databank will need to be updated in near-real time. Novel approaches to data recovery, such as crowd-sourced digitisation, may be pursued. In the interests of getting subsequent parts of the work underway, it is envisaged that a first version of the databank will be ready in April 2012. This will definitively not mean that the databank issue is closed or resolved.
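
As one way to picture the provenance "drill down" described above, the short Python sketch below links each derived value back to its parent record. The field names and structure are illustrative assumptions only; the databank's actual schema is defined by the initiative, not by this sketch.

  from dataclasses import dataclass
  from typing import Optional

  @dataclass
  class StageRecord:
      """One observation as it appears at a given processing stage.
      All field names here are hypothetical; the real databank schema
      is set by the initiative, not by this sketch."""
      stage: int                    # 0..5, per the stage definitions below
      station_id: str
      value: float                  # e.g. a monthly-mean temperature, degrees C
      source: str                   # provenance flag: where this value came from
      parent: Optional["StageRecord"] = None  # link back to the previous stage

  def drill_down(record: StageRecord) -> list:
      """Follow provenance links from a derived value back towards
      the original data record (Stage 0)."""
      chain = [record]
      while record.parent is not None:
          record = record.parent
          chain.append(record)
      return chain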

From this databank, it is hoped that many investigators will attempt to create surface temperature data products for given regions, time periods and reporting frequencies. A range of approaches is required, both to quantify the certainty in how climate has evolved and because different approaches will have different strengths and weaknesses, and will therefore be more or less useful for a given application. Efforts will be encouraged from outside the climate science community. To ascertain their fundamental quality, a common benchmarking exercise will be undertaken, to which data product creators will be expected to submit their efforts. This benchmarking process is envisaged to be double-blind and cyclical, with each cycle lasting approximately three years.
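
As a hedged illustration of what one scoring step in such a benchmarking cycle might look like, the Python sketch below compares a submitted product against a withheld benchmark "truth" series. The metrics chosen (bias and RMSE) and the synthetic data are assumptions for illustration only; the actual assessment criteria will be set by the benchmarking exercise itself.

  import numpy as np

  def score_submission(truth, submission):
      """Score a submitted data product against the withheld benchmark truth.
      In the envisaged double-blind cycle, participants never see the truth
      and assessors never see who produced which submission."""
      errors = np.asarray(submission) - np.asarray(truth)
      return {
          "bias": float(np.mean(errors)),                # systematic offset
          "rmse": float(np.sqrt(np.mean(errors ** 2))),  # overall error size
      }

  # Hypothetical usage: a synthetic benchmark series with a known answer.
  rng = np.random.default_rng(0)
  truth = rng.normal(14.0, 0.5, size=120)              # ten years of monthly means
  submission = truth + rng.normal(0.0, 0.1, size=120)  # a participant's estimate
  print(score_submission(truth, submission))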

Finally, the project will aim to create a set of user tools and provide these value-added products back to the data providers and to society, to enable critical decision making. Provision of such tools and advice is key to realising the scientific benefits that this program will deliver to society more generally.

Official Definition of Stages

  • STAGE 0: Digital image and hard copy
  • STAGE 1: Keyed in native format
  • STAGE 2: Converted into common format
  • STAGE 3: Consolidated master database
  • STAGE 4: Quality controlled derived products
  • STAGE 5: Homogenized products
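
For code that handles databank files, the stage definitions above map naturally onto an enumeration. A minimal Python sketch follows; the constant names are our own shorthand, not part of the official definition.

  from enum import IntEnum

  class Stage(IntEnum):
      """Databank processing stages, as officially defined above."""
      DIGITAL_IMAGE = 0       # digital image and hard copy
      NATIVE_FORMAT = 1       # keyed in native format
      COMMON_FORMAT = 2       # converted into common format
      MASTER_DATABASE = 3     # consolidated master database
      QUALITY_CONTROLLED = 4  # quality-controlled derived products
      HOMOGENIZED = 5         # homogenized products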

More Information about Stages

  • Stage 1 is the original format given to us by the institution "as is". We make no alterations to Stage 1 data.
  • Stage 2 data is converted into the common format; much like GHCN-M, there is a data file and an inventory file (an illustrative sketch of such a record layout follows this list).
  • Because we are only looking for certain parameters in Stage 2, some information may be lost; however, the data provenance flags will help the user go back into the previous stages and retrieve other necessary information if it is needed.
  • Stage 3 data will have the same common format as Stage 2; however, each station will appear only once, duplicates across sources having been consolidated.
  • Stage 4 data will be quality controlled; however, it may or may not have the same format as Stage 2/3.
    • e.g., the operational GHCN-D is considered Stage 4.
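
The exact Stage 2 record layout is not reproduced here, so the Python sketch below assumes the GHCN-M v3 convention (an 11-character station ID, a 4-character year, a 4-character element code, then twelve monthly values each followed by three one-character flags) purely for illustration; the initiative's actual common format may differ.

  def parse_stage2_line(line):
      """Parse one record from a hypothetical GHCN-M-style Stage 2 data file.
      Layout assumed: ID (cols 1-11), year (12-15), element (16-19), then
      12 x [5-char value + DMFLAG + QCFLAG + DSFLAG]. Illustrative only."""
      record = {
          "station_id": line[0:11],
          "year": int(line[11:15]),
          "element": line[15:19],  # e.g. TAVG
          "monthly": [],
      }
      for month in range(12):
          offset = 19 + month * 8  # each month: 5-char value + 3 flag chars
          value = int(line[offset:offset + 5])
          record["monthly"].append({
              "value": None if value == -9999 else value / 100.0,  # degrees C
              "dm_flag": line[offset + 5],  # data-measurement flag
              "qc_flag": line[offset + 6],  # quality-control flag
              "ds_flag": line[offset + 7],  # data-source (provenance) flag
          })
      return record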