Scott K. Johnson, writing for Ars Technica:
Building a global temperature dataset is a huge undertaking, and collecting the raw measurements is only the half of it. Lots of careful corrections need to be made to account for things like instrument changes, weather station placement, and even the time of day the station is checked.
One of the most commonly used datasets, dubbed “HadCRUT4” in its current incarnation, is maintained by the UK Met Office and researchers at the University of East Anglia. That dataset lacks temperature records over 16 percent of the globe, mostly parts of the Arctic, Antarctic, and Africa. Each group that manages one of these datasets faces this problem, but deals with it a little differently. In HadCRUT4, the gaps are simply dropped out of the calculated average; in NASA’s GISTEMP dataset, these holes are filled in by interpolating from the nearest measurements.
Interesting. Though wouldn’t interpolation and omission have a similar effect on averages?
Probably depends on how they calculate the average. Dropping a gap from the mean implicitly assumes the missing region behaves like the average of everywhere that is covered, while interpolation assumes it behaves like its nearest neighbors. The two only agree when the gaps are typical of the globe as a whole, and the Arctic, which is warming faster than average, is not.
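A toy sketch makes the difference concrete. The numbers and the crude neighbor-averaging scheme below are illustrative only, not the actual HadCRUT4 or GISTEMP procedures:

```python
# Toy row of gridded temperature anomalies (deg C). None marks cells
# with no station coverage, sitting next to a fast-warming
# (Arctic-like) cell. Illustrative values, not real data.
temps = [0.2, 0.3, 0.4, None, None, 1.1]

# Omission (HadCRUT4-style): gaps simply drop out of the average,
# which implicitly assumes they behave like the covered regions.
known = [t for t in temps if t is not None]
mean_omit = sum(known) / len(known)  # about 0.5

# Interpolation (GISTEMP-style, crudely): fill each gap with the mean
# of its nearest known values, assuming gaps behave like neighbors.
filled = list(temps)
for i, t in enumerate(filled):
    if t is None:
        left = next(filled[j] for j in range(i - 1, -1, -1)
                    if filled[j] is not None)
        right = next(filled[j] for j in range(i + 1, len(filled))
                     if filled[j] is not None)
        filled[i] = (left + right) / 2  # earlier fills feed later ones

mean_interp = sum(filled) / len(filled)  # about 0.6125
print(mean_omit, mean_interp)
```

Because the gaps sit next to the fast-warming cell, interpolation pulls the global mean up relative to omission. That, in miniature, is why the two datasets can disagree even though they start from largely the same measurements.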