Lab presentations at EGU 2020

This week, the EGU 2020 General Assembly took place online for the first time. Instead of the traditional oral and poster presentations, authors uploaded slides, videos, or posters ahead of time and then joined live text chats to pitch their work and answer questions from the audience.

Both Leo and Santiago had presentations in session G4.3: Acquisition and processing of gravity and magnetic field data and their integrative interpretation. You can view their abstracts and slides as well as leave comments on the conference website until the end of May 2020. In case the slides are no longer available through EGU, they have also been uploaded to figshare (see links below).

Both presentations investigate improvements to the equivalent-layer (also known as equivalent-source) technique, a powerful method for gridding and transforming gravity and magnetic data. The improvements are being implemented directly in the Python libraries Verde and Harmonica.

A better strategy for interpolating gravity and magnetic data

We present a new strategy for gravity and magnetic data interpolation and processing. Our method is based on the equivalent-layer technique (EQL) and produces more accurate interpolations than similar EQL methods. It also reduces computation time and memory use, both of which have been severe limiting factors for this technique.
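The core of the equivalent-layer technique can be sketched in a few lines of NumPy: fit a set of point sources placed below the data by damped least squares, then forward-model those sources wherever predictions are needed. The sketch below is only an illustration under simplified assumptions (a unitless 1/r kernel, one source directly beneath each data point, synthetic data), not the method from the abstract:

```python
import numpy as np

rng = np.random.default_rng(42)

# Scattered observation points (x, y in meters) with synthetic data
# from a smooth function (a stand-in for real gravity measurements).
n_data = 200
x = rng.uniform(0, 5000, n_data)
y = rng.uniform(0, 5000, n_data)
data = np.sin(x / 1000) * np.cos(y / 1000)

# One equivalent source directly beneath each data point, at a
# relative depth of 500 m (a typical tunable parameter).
depth = 500

def sensitivity(xp, yp, xs, ys, zs):
    """Matrix of 1/r kernels between observation points and sources.

    A real implementation would use the vertical gravity kernel with
    physical units; 1/r keeps the sketch simple.
    """
    dx = xp[:, None] - xs[None, :]
    dy = yp[:, None] - ys[None, :]
    r = np.sqrt(dx**2 + dy**2 + zs**2)
    return 1 / r

# Fit source strengths by damped least squares. The damping
# regularizes the ill-conditioned linear system.
G = sensitivity(x, y, x, y, depth)
damping = 1e-10
coefs = np.linalg.solve(G.T @ G + damping * np.eye(n_data), G.T @ data)

# Interpolate onto a regular grid by forward-modeling the sources.
gx, gy = np.meshgrid(np.linspace(0, 5000, 50), np.linspace(0, 5000, 50))
G_grid = sensitivity(gx.ravel(), gy.ravel(), x, y, depth)
grid = (G_grid @ coefs).reshape(gx.shape)
```

The expensive steps are building the dense sensitivity matrix and solving the linear system, which is why computation time and memory grow quickly with the number of data points.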

Evaluating the accuracy of equivalent-source predictions using cross-validation

We investigate the use of cross-validation (CV) techniques to estimate the accuracy of equivalent-source (also known as equivalent-layer) models for interpolation and processing of potential-field data. Our preliminary results indicate that some common CV algorithms (e.g., random permutations and k-fold) tend to overestimate the accuracy. We have found that blocked CV methods, in which the data are split along spatial blocks instead of randomly, provide more conservative and realistic accuracy estimates. Beyond evaluating an equivalent-source model's performance, cross-validation can be used to automatically determine configuration parameters, like source depth and amount of regularization, that maximize prediction accuracy and avoid overfitting.
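To illustrate what "splitting along spatial blocks" means, here is a minimal sketch using scikit-learn's GroupKFold with spatial block labels as the groups, so that entire blocks are held out together. The coordinates and block size are made up for the example (if I recall correctly, Verde also ships block-based cross-validators that implement this idea directly):

```python
import numpy as np
from sklearn.model_selection import GroupKFold

rng = np.random.default_rng(0)

# Scattered observation coordinates (meters), standing in for a survey.
n = 300
x = rng.uniform(0, 4000, n)
y = rng.uniform(0, 4000, n)

# Assign each point to a spatial block on a 1000 m grid. Using the
# block label as the "group" holds out whole blocks at a time, so test
# points are never right next to training points.
block_size = 1000
block_id = (x // block_size).astype(int) * 100 + (y // block_size).astype(int)

cv = GroupKFold(n_splits=5)
folds = list(cv.split(np.c_[x, y], groups=block_id))
```

With a plain shuffled k-fold, each test point usually has a training neighbor a few meters away, so the score mostly measures how well the model reproduces nearby data, which inflates the apparent accuracy of spatially smooth predictors like equivalent sources.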