Leveraging seismic data quality metrics for network supervision and sensitive earthquake detection
In earthquake seismology, the last decade has produced ever-growing datasets of continuous seismograms that are recorded, stored, and made openly available. These increasing data volumes (e.g., about 50 M hours of seismograms, or 17 TB, for the Norwegian networks alone) allow us to revisit the data with new methodologies, for example template matching to detect previously uncataloged earthquakes. Template matching correlates known earthquake waveforms against continuous seismic data to detect similar events, and it helps to obtain more complete catalogs of seismicity that are suited for further statistical analysis. One challenge in applying template matching to larger datasets is the need to systematically recognize data quality problems that quickly lead to misdetections. What helps with this task are seismic data quality metrics – a set of up to 50 measures of specific data properties for each day and trace – for which two major systems now exist. In this talk, I will introduce how these data quality metrics help us to (a) keep better control of data quality in archived data, (b) systematically monitor the state of currently recording stations, (c) apply sensitive template matching without misdetections (with examples from the North Sea and a Norwegian reservoir), and (d) gain a new overview of seismic noise throughout network operations.
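As a minimal illustration of the cross-correlation step at the heart of template matching (not the actual processing pipeline discussed in the talk, which in practice would use multi-station, multi-channel detectors), the sketch below slides a known waveform over synthetic continuous data and flags samples where the normalized correlation exceeds an illustrative threshold. The function name `normalized_xcorr`, the synthetic data, and the 0.6 threshold are assumptions for demonstration only.

```python
import numpy as np

def normalized_xcorr(template, data):
    """Sliding normalized cross-correlation of a template against continuous data.

    Returns one correlation coefficient (in [-1, 1]) per possible alignment.
    """
    n = len(template)
    t = (template - template.mean()) / (template.std() * n)
    cc = np.empty(len(data) - n + 1)
    for i in range(len(cc)):
        w = data[i:i + n]
        std = w.std()
        cc[i] = 0.0 if std == 0 else np.dot(t, (w - w.mean()) / std)
    return cc

# Synthetic stand-ins: a "known earthquake waveform" and noisy continuous data
rng = np.random.default_rng(42)
template = rng.standard_normal(200)
data = rng.standard_normal(20_000) * 0.5
data[5_000:5_200] += template                # bury a repeat of the event in the noise

cc = normalized_xcorr(template, data)
detections = np.flatnonzero(cc > 0.6)        # illustrative detection threshold
print("candidate detection samples:", detections)
```

In real catalogs, the correlation is stacked over many stations and channels before thresholding; a data quality problem on a single trace (gaps, mass recentering, spikes) can inflate or corrupt these correlations, which is why the quality metrics discussed in the talk are screened before and during detection.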