Update (2021/05/20): This position is now closed.
We have one open position for a fully funded 5-year PhD and graduate student assistant position at the University of Liverpool.
Applications are open until 21 April 2021.
Open to UK applicants only.
Apply here: https://www.liverpool.ac.uk/study/postgraduate-research/how-to-apply/
The flow of heat coming from the Earth's interior is an important parameter for how ice sheets flow and deform. It also plays a key role in determining how the Earth's crust rebounds upwards once ice mass is displaced, influencing our estimates of sea-level rise. Geophysicists can indirectly determine heat flow from the measurements of anomalies in the Earth's magnetic field due to magnetized rocks in the crust. These data allow us to estimate the depth at which magnetic minerals lose their magnetic properties, which happens at a particular temperature known as the Curie temperature. However, the limitations of this technique are still not fully understood.
The goals of this project are to:
The magnetic data integration and Curie depth estimation will be done by adapting methods and software recently developed by the research group and our international collaborators at the PINGA lab (for example, Hidalgo-Gato et al., 2021, and Reis et al., 2020). By achieving these goals, we hope to quantify and reduce the uncertainty in the Curie isotherm depth estimates for Antarctica, which is a key parameter in determining geothermal heat flow in Antarctica.
You will be working with an international team of researchers:
Through this PhD, you will acquire the mathematical and Python programming skills required to undertake the project. You will be trained to develop software in a collaborative environment using GitHub and the current best practices in research software engineering.
The project will be conducted following the current established norms of open-science and reproducible research, with all outputs published on the group's GitHub page.
The project also has the potential to involve code contributions to the different open-source Python software developed by the research group, mainly Fatiando a Terra, leading to potential impact beyond standard scientific publications.
This position would suit someone with skills in (or who is willing to learn):
An ideal candidate would also be motivated, able to self-study, and interested in open-science practices and open-source software.
This position is for a Graduate Teaching Assistant at 0.5 full-time employment (FTE) over a 5-year period.
The role includes:
This project is one of eight eligible for funding support through the recruitment of two GTAs within Geography & Planning and Earth Sciences. The funding covers:
Applicants are encouraged to read the lab manual to familiarize themselves with the way we approach science, expectations, our code of conduct, etc.
Interested? Find out more about:
To apply for this opportunity:
You have to apply for both positions and make sure both are marked as "GTA SoES Post".
If you have any questions, please get in touch!
Register for free (though donations are welcome) at: https://softwareunderground.org/events/transform-2021
Join us on the Software Underground slack for discussions and support during the tutorial. There are many more tutorials to choose from, a hackathon, unsessions, and more!
Can't wait? Watch the Verde tutorial from Transform 2020 in the meantime:
This paper describes how we used the gradient-boosting machine learning method to scale equivalent source processing to millions of gravity and magnetic data points. Equivalent sources allow us to take the observation height and the physics of potential fields (mainly, that they are harmonic functions) into account during processing and interpolation, both of which are often ignored by other methods. This leads to great results, but it involves large linear models, so processing datasets of this magnitude is tricky. By using gradient boosting to fit the model in small chunks, we can process millions of data points even on a modest laptop (and in a reasonable amount of time).
This research was done entirely with open-source software and open data! This means that anyone should be able to fully reproduce our results using the information in the paper and the material in the associated GitHub repository.
This is the final part of Santiago's PhD thesis.
The equivalent source technique is a powerful and widely used method for processing gravity and magnetic data. Nevertheless, its major drawback is the large computational cost in terms of processing time and computer memory. We present two techniques for reducing the computational cost of equivalent source processing: block-averaging source locations and the gradient-boosted equivalent source algorithm. Through block-averaging, we reduce the number of source coefficients that must be estimated while retaining the minimum desired resolution in the final processed data. With the gradient boosting method, we estimate the source coefficients in small batches along overlapping windows, allowing us to reduce the computer memory requirements arbitrarily to conform to the constraints of the available hardware. We show that the combination of block-averaging and gradient-boosted equivalent sources is capable of producing accurate interpolations through tests against synthetic data. Moreover, we demonstrate the feasibility of our method by gridding a gravity dataset covering Australia with over 1.7 million observations using a modest personal computer.
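The block-averaging step can be illustrated with a short sketch (pure Python, with hypothetical function names; the paper's actual implementation is in the repository linked above):

```python
from collections import defaultdict

def block_average(points, spacing):
    """Replace the points inside each square block by their average,
    reducing the number of equivalent-source coefficients to estimate."""
    blocks = defaultdict(list)
    for x, y in points:
        # Index of the square block that contains this point
        blocks[(int(x // spacing), int(y // spacing))].append((x, y))
    averaged = []
    for members in blocks.values():
        n = len(members)
        averaged.append((sum(p[0] for p in members) / n,
                         sum(p[1] for p in members) / n))
    return averaged

# Two points fall in the first block and collapse into one averaged source
print(block_average([(0.1, 0.2), (0.4, 0.3), (1.2, 1.8)], spacing=1.0))
```

The spacing controls the trade-off: larger blocks mean fewer sources (and a cheaper fit) at the cost of resolution in the processed data.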
Soler, S., & Uieda, L. (2021). Gradient-boosted equivalent sources. EarthArXiv. doi:10.31223/x58g7c
I'm, of course, extremely proud of them all. It was a pleasure to get to know them throughout the year and I've learned a lot along the way. Their arrival is what motivated the creation of the lab manual and their work is providing a model for a dissertation template that will facilitate the on-boarding of new members in the 2020/21 cohort.
Please join me in wishing the new geophysicists all the best in their future careers and lives in general! 🥂
Leo and Santiago will be teaching the tutorial "From scattered data to gridded products using Verde". The session will be live-streamed to YouTube and will still be available after the event:
There will be many other interesting tutorials, discussions, and lightning talks. You can register and browse the schedule on the Transform2020 Sched page.
Both Leo and Santiago had presentations in session G4.3: Acquisition and processing of gravity and magnetic field data and their integrative interpretation. You can view their abstracts and slides as well as leave comments on the conference website until the end of May 2020. In case the slides are no longer available through EGU, they have also been uploaded to figshare (see links below).
Both presentations investigate improvements to equivalent layer/source processing, which is a powerful technique to process and grid gravity and magnetic data. The improvements are being directly implemented in the Python software Verde and Harmonica.
We present a new strategy for gravity and magnetic data interpolation and processing. Our method is based on the equivalent layer technique (EQL) and produces more accurate interpolations when compared with similar EQL methods. It also reduces the computation time and memory requirements, both of which have been severe limiting factors.
We investigate the use of cross-validation (CV) techniques to estimate the accuracy of equivalent-source (also known as equivalent-layer) models for interpolation and processing of potential-field data. Our preliminary results indicate that some common CV algorithms (e.g., random permutations and k-folds) tend to overestimate the accuracy. We have found that blocked CV methods, where the data are split along spatial blocks instead of randomly, provide more conservative and realistic accuracy estimates. Beyond evaluating an equivalent-source model's performance, cross-validation can be used to automatically determine configuration parameters, like source depth and amount of regularization, that maximize prediction accuracy and avoid over-fitting.
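The blocked splitting idea can be sketched in a few lines of Python (a hypothetical function for illustration, not the actual implementation used in the study):

```python
def blocked_kfold(coordinates, spacing, n_folds):
    """Split data indices into folds by spatial block instead of randomly.

    Points that fall in the same block always end up in the same fold,
    so validation points are spatially separated from training points.
    """
    # Group point indices by the square block that contains them
    blocks = {}
    for index, (x, y) in enumerate(coordinates):
        key = (int(x // spacing), int(y // spacing))
        blocks.setdefault(key, []).append(index)
    # Assign whole blocks to folds in round-robin order
    folds = [[] for _ in range(n_folds)]
    for i, indices in enumerate(blocks.values()):
        folds[i % n_folds].extend(indices)
    return folds
```

Because spatial data are autocorrelated, random splits leave validation points right next to training points, which is why they tend to overestimate accuracy; blocking removes that leakage.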
Read more about what this means and his plans for the fellowship in this blog post: Advancing research software in the UK through an SSI fellowship.
This paper marks the release of Pooch v0.7.1, a Python library for downloading and managing data files. Pooch is a part of the new ecosystem of packages in Fatiando a Terra. It's used by most of the other packages to manage the sample data files used for testing and documentation.
The peer-review at JOSS is open and can be found at openjournals/joss-reviews#1943.
Scientific software is usually created to acquire, analyze, model, and visualize data. As such, many software libraries include sample datasets in their distributions for use in documentation, tests, benchmarks, and workshops. A common approach is to include smaller datasets in the GitHub repository directly and package them with the source and binary distributions (e.g., scikit-learn and scikit-image do this). As data files increase in size, it becomes unfeasible to store them in GitHub repositories. Thus, larger datasets require writing code to download the files from a remote server to the user's computer. The same problem is faced by scientists using version control to manage their research projects. While downloading a data file over HTTPS can be done easily with modern Python libraries, it is not trivial to manage a set of files, keep them updated, and check for corruption. For example, scikit-learn, Cartopy, and PyVista all include code dedicated to this particular task. Instead of scientists and library authors recreating the same code, it would be best to have a minimalistic and easy-to-set-up tool for fetching and maintaining data files.
Pooch is a Python library that fills this gap. It manages a data registry (containing file names, SHA-256 cryptographic hashes, and download URLs) by downloading files from one or more remote servers and storing them in a local data cache. Pooch is written in pure Python and has minimal dependencies. It can be easily installed from the Python Package Index (PyPI) and conda-forge on a wide range of Python versions: 2.7 (up to Pooch 0.6.0) and from 3.5 to 3.8.
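The registry-plus-hash idea at Pooch's core can be sketched with just the standard library (the function names here are illustrative, not Pooch's API):

```python
import hashlib

def sha256_of(path):
    """Compute the SHA-256 hash of a file, reading it in chunks so that
    large files don't need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as source:
        for chunk in iter(lambda: source.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_corrupted(path, registry):
    """Compare a local file against the known hash stored in the registry."""
    return sha256_of(path) != registry[path]
```

Pooch wraps this idea with downloading and caching: the registry and remote URLs are declared with `pooch.create` and files are retrieved on demand (and hash-checked) with the `fetch` method.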
Uieda, L., Soler, S.R., Rampin, R., van Kemenade, H., Turk, M., Shapero, D., Banihirwe, A., and Leeman, J. (2020). Pooch: A friend to fetch your data files. Journal of Open Source Software, 5(45), 1943. doi:10.21105/joss.01943
Update (2020/01/10): This position is now closed.
I have two open positions for funded studentships at the University of Liverpool. Applications are open until 10 January 2020.
Follow the links for more detailed versions.
Bringing machine learning techniques to geophysical data processing
The goal of this project is to investigate the use of existing machine learning techniques to process gravity and magnetics data using the Equivalent Layer Method. The methods and software developed during this project can be applied to process large amounts of gravity and magnetics data, including airborne and satellite surveys, and produce data products that can enable further scientific investigations. Examples of such data products include global gravity gradient grids from GOCE satellite measurements, regional magnetic grids for the UK, gravity grids for the Moon and Mars, etc.
Large-scale mapping of the thickness of the crust from satellite gravity and gravity gradient data
The goal of this project is to develop improved inversion methods to determine crustal thickness from gravity and gravity gradient data, in particular Uieda and Barbosa (2017). Main objectives are: (1) account for density variation in the oceanic lithosphere due to temperature; (2) incorporate seismological estimates of crustal thickness in the inversion process; (3) estimate the density contrast across the crust-mantle interface in different domains; (4) joint inversion of gravity and gravity gradient data; (5) develop techniques to reduce the computational load of the inversion; (6) quantify uncertainty due to errors in regional crustal and sedimentary basin models. The inversion methods developed in this project can be applied to produce improved crustal thickness estimates for South America, Africa, Antarctica, the Moon, Mars, etc.
The funding for these projects comes from the School of Environmental Sciences. Applicants choose a project when applying and will be judged on their own merit (not the project/supervisor). There are only a small number of studentships available for the entire School, so competition for the studentships tends to be high. Sadly, applications are limited to UK and EU citizens. Candidates who are able to self-fund (e.g., through their employer) are encouraged to apply as well. In this case, there is no need to go through the normal competition.
Both projects have a large computational component. Students will make code contributions to the different open-source Python software developed by the lab. They will be trained to develop software in a collaborative environment using GitHub and use the current best practices in software engineering and reproducible research.
Applicants are encouraged to read the lab manual to familiarize themselves with the way we approach science, expectations, our code of conduct, etc.
If you're interested in applying (or know someone who might be), please get in touch!
The constant thing throughout my career has been the fact that "computational thinking" is at the heart of everything I do: from potential-field inversion to forward modeling. After much brainstorming, "Computer-Oriented Geoscience" seems to accurately describe the type of research we do here: we apply computational methods to solve problems in geoscience.
As a direct consequence, we also develop and maintain several open-source projects that support our research and implement the methods that we develop. Our language of choice is Python and we have years of expertise in modern and collaborative software development. We are a small group at the moment (only 2 members) but I'm hoping to grow with time.
This website serves as a portfolio of our research and also as a guide for members and collaborators. We are strong believers that the scientific process needs to be more open, collaborative, reproducible, and inclusive. As such, we have high expectations not only for the results of our work, but also for how it is done. Our lab manual provides detailed information, like our code of conduct and what collaborators can expect from us (and what we expect from them). The manual is based on the excellent Lab Carpentry blueprints.
We are eager to establish collaborations with applied scientists who have interesting problems that could benefit from our computational and numerical modeling skills. Get in touch if that sounds like you!
This paper builds upon our work on Tesseroids and extends the methodology to work for depth-variable densities. Santiago led this project, did most of the work and a large part of the writing of the paper.
Figure: Application of the methodology to the Neuquén Basin, a foreland basin in the southern Andes. (a) Topography of the Neuquén Basin and its location in South America. (b) Thickness of the sedimentary basin. Inset shows the exponential density profile used in the modeling. (c) Resulting vertical gravitational acceleration at 10 km height modeled by tesseroids with exponential density variation in depth. (d) Difference between gravitational acceleration modeled using the exponential density profile and a homogeneous density.
We present a new methodology to compute the gravitational fields generated by tesseroids (spherical prisms) whose density varies with depth according to an arbitrary continuous function. It approximates the gravitational fields through the Gauss-Legendre Quadrature along with two discretization algorithms that automatically control its accuracy by adaptively dividing the tesseroid into smaller ones. The first one is a preexisting two-dimensional adaptive discretization algorithm that reduces the errors due to the distance between the tesseroid and the computation point. The second is a new density-based discretization algorithm that decreases the errors introduced by the variation of the density function with depth. The number of divisions made by each algorithm is indirectly controlled by two parameters: the distance-size ratio and the delta ratio. We have obtained analytical solutions for a spherical shell with radially variable density and compared them to the results of the numerical model for linear, exponential, and sinusoidal density functions. These comparisons allowed us to obtain optimal values for the distance-size and delta ratios that yield an accuracy of 0.1% of the analytical solutions. The resulting optimal values of distance-size ratio for the gravitational potential and its gradient are 1 and 2.5, respectively. The density-based discretization algorithm produces no discretizations in the linear density case, but a delta ratio of 0.1 is needed for the exponential and the sinusoidal density functions. These values can be extrapolated to cover most common use cases. However, the distance-size and delta ratios can be configured by the user to increase the accuracy of the results at the expense of computational speed. Lastly, we apply this new methodology to model the Neuquén Basin, a foreland basin in Argentina with a maximum depth of over 5000 m, using an exponential density function.
Soler, S. R., Pesce, A., Gimenez, M. E., & Uieda, L., 2019. Gravitational field calculation in spherical coordinates using variable densities in depth, Geophysical Journal International, doi:10.1093/gji/ggz277
This paper marks the first release of Verde, a Python library for processing and gridding spatial data. Verde is the first part of a large-scale refactoring of the Fatiando a Terra project into separate packages.
The peer-review at JOSS is open and can be found at openjournals/joss-reviews#957.
Figure: Example of using verde.BlockMean to calculate weighted means in spatial blocks assuming different uncertainty models. Full source code to produce this image is available in the Verde example gallery.
Verde is a Python library for gridding spatial data using different Green's functions. It differs from the radial basis functions in scipy.interpolate by providing an API inspired by scikit-learn. The Verde API should be familiar to scikit-learn users but is tweaked to work with spatial data, which has Cartesian or geographic coordinates and multiple data components instead of an X feature matrix and y label vector. The library also includes more specialized Green's functions, utilities for trend estimation and data decimation (which are often required prior to gridding), and more. Some of these interpolation and data processing methods already exist in the Generic Mapping Tools (GMT), a command-line program popular in the Earth Sciences. However, there are no model selection tools in GMT and it can be difficult to separate parts of the processing that are done internally by its modules. Verde is designed to be modular, easily extended, and integrated into the scientific Python ecosystem. It can be used to implement new interpolation methods by subclassing the verde.base.BaseGridder class, requiring only the implementation of the new Green's function. For example, it is currently being used to develop a method for interpolation of 3-component GPS data.
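The fit/predict pattern can be sketched with a minimal gridder class (an illustration only, not Verde's actual `BaseGridder` code; it uses the 2D biharmonic spline Green's function):

```python
import numpy as np

class BiharmonicSpline:
    """Minimal Green's-function gridder with a scikit-learn-style API.

    Fitting estimates one force coefficient per data point; predicting
    evaluates the fitted Green's functions at arbitrary coordinates.
    """

    def _greens(self, east, north, force_east, force_north):
        # Pairwise distances between evaluation points and forces
        distance = np.hypot(east[:, None] - force_east[None, :],
                            north[:, None] - force_north[None, :])
        # Avoid log(0); the Green's function tends to 0 at zero distance
        distance = np.where(distance == 0, 1e-10, distance)
        return distance**2 * (np.log(distance) - 1)

    def fit(self, coordinates, data):
        self.east_ = np.asarray(coordinates[0])
        self.north_ = np.asarray(coordinates[1])
        jacobian = self._greens(self.east_, self.north_, self.east_, self.north_)
        self.forces_ = np.linalg.solve(jacobian, np.asarray(data))
        return self

    def predict(self, coordinates):
        east = np.asarray(coordinates[0])
        north = np.asarray(coordinates[1])
        return self._greens(east, north, self.east_, self.north_) @ self.forces_
```

A real gridder would add damping regularization and data decimation on top of this, but the estimator interface (tuples of coordinate arrays, `fit` returning `self`) is the part that mirrors scikit-learn.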
Uieda, L. (2018). Verde: Processing and gridding spatial data using Green's functions. Journal of Open Source Software, 3(30), 957. doi:10.21105/joss.00957
This paper presents a new gravity inversion method to estimate the depth of the crust-mantle interface (the Moho). The inversion uses a spherical Earth approximation by discretizing the Earth into tesseroids (spherical prisms). The forward modeling method used is described in the Tesseroids paper. We applied the inversion to estimate the Moho depth for South America.
The main result from this publication is the gravity-derived Moho depth model for South America and the differences between it and seismological estimates of Assumpção et al. (2013). These differences allow us to know where the gravity-derived model can be trusted and where there might be unaccounted sources in the gravity data.
You can download the model results, source code, and input data from doi:10.6084/m9.figshare.3987267
Figure: Estimated depth to the crust-mantle interface (Moho) from satellite measurements of gravity disturbances. Dotted lines represent the boundaries between major geologic provinces. AD: Andean Province, AFB: Andean foreland basins, AM: Amazonas Basin, BR: Brazilian Shield, BO: Borborema province, CH: Chaco Basin, GB: Guyana Basin, GU: Guyana Shield, PB: Parnaíba Basin, PC: Parecis Basin, PR: Paraná Basin, PT: Patagonia province, SF: São Francisco Craton, SM: Solimões Basin. Solid orange lines mark the limits of the main lithospheric plates. AF: Africa Plate, AN: Antarctica Plate, CA: Caribbean Plate, CO: Cocos Plate, SA: South America Plate, SC: Scotia Plate, NZ: Nazca Plate. The solid light grey line is the 35 km Moho depth contour.
The inversion method proposed here is implemented in the Python programming language. The code uses the forward modeling and inversion packages of the Fatiando a Terra library (version 0.5).
You'll find the source code, input data, and instructions to produce the results from the paper on the GitHub repository. There should be enough information for you to produce all figures of the paper.
Estimating the relief of the Moho from gravity data is a computationally intensive non-linear inverse problem. What is more, the modeling must take the Earth's curvature into account when the study area is of regional scale or greater. We present a regularized non-linear gravity inversion method that has a low computational footprint and employs a spherical Earth approximation. To achieve this, we combine the highly efficient Bott's method with smoothness regularization and a discretization of the anomalous Moho into tesseroids (spherical prisms). The computational efficiency of our method is attained by harnessing the fact that all matrices involved are sparse. The inversion results are controlled by three hyper-parameters: the regularization parameter, the anomalous Moho density-contrast, and the reference Moho depth. We estimate the regularization parameter using the method of hold-out cross-validation. Additionally, we estimate the density-contrast and the reference depth using knowledge of the Moho depth at certain points. We apply the proposed method to estimate the Moho depth for the South American continent using satellite gravity data and seismological data. The final Moho model is in accordance with previous gravity-derived models and seismological data. The misfit to the gravity and seismological data is worse in the Andes and best in oceanic areas, central Brazil and Patagonia, and along the Atlantic coast. Similarly to previous results, the model suggests a thinner crust of 30-35 km under the Andean foreland basins. Discrepancies with the seismological data are greatest in the Guyana Shield, the central Solimões and Amazonas Basins, the Paraná Basin, and the Borborema province. These differences suggest the existence of crustal or mantle density anomalies that were unaccounted for during gravity data processing.
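The core of Bott's method can be sketched in a planar approximation (a simplification for illustration; the paper works in spherical coordinates with tesseroids and adds smoothness regularization):

```python
import math

G = 6.674e-11  # gravitational constant (m^3 kg^-1 s^-2)

def bott_update(depths, residuals, density_contrast):
    """One Bott iteration: convert each gravity residual (m/s^2) into a
    depth correction (m) using the infinite-slab (Bouguer) formula
    delta_h = delta_g / (2 * pi * G * density_contrast)."""
    slab = 2 * math.pi * G * density_contrast
    return [depth + residual / slab
            for depth, residual in zip(depths, residuals)]

# A 1 mGal (1e-5 m/s^2) residual with a 400 kg/m^3 density contrast
# moves the interface by roughly 60 m
print(bott_update([30000.0], [1e-5], density_contrast=400.0))
```

The appeal of the method is that each iteration only needs the data residual, no equation system has to be solved, which is what keeps the computational footprint low.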
Uieda, L., and V. C. F. Barbosa (2017), Fast nonlinear gravity inversion in spherical coordinates with application to the South American Moho, Geophys. J. Int., 208(1), 162-176, doi:10.1093/gji/ggw390.
This paper describes the algorithms used in version 1.2.0 of the open-source software Tesseroids. The software is a suite of C-coded command-line programs that calculate the gravitational field of a tesseroid (spherical prism) model. There is also a separate Python implementation of the same algorithm in the fatiando.gravmag.tesseroid module of the Fatiando a Terra library (introduced in version 0.3) and in Harmonica.
Figure: Adaptive discretization of the tesseroid shown in panel (a) for a computation point P using the distance-size ratio D equal to (b) 1, (c) 2, and (d) 6. Lr, Lφ, and Lλ are the dimensions of the tesseroid. Note that increasing D results in a finer division of the tesseroid close to the computation point and a coarser division further away.
We present the open-source software Tesseroids, a set of command-line programs to perform the forward modeling of gravitational fields in spherical coordinates. The software is implemented in the C programming language and uses tesseroids (spherical prisms) for the discretization of the subsurface mass distribution. The gravitational fields of tesseroids are calculated numerically using the Gauss-Legendre Quadrature (GLQ). We have improved upon an adaptive discretization algorithm to guarantee the accuracy of the GLQ integration. Our implementation of adaptive discretization uses a "stack" based algorithm instead of recursion to achieve more control over execution errors and corner cases. The algorithm is controlled by a scalar value called the distance-size ratio (D) that determines the accuracy of the integration as well as the computation time. We determined optimal values of D for the gravitational potential, gravitational acceleration, and gravity gradient tensor by comparing the computed tesseroid effects with those of a homogeneous spherical shell. The values required for a maximum relative error of 0.1% of the shell effects are D = 1 for the gravitational potential, D = 1.5 for the gravitational acceleration, and D = 8 for the gravity gradients. Contrary to previous assumptions, our results show that the potential and its first and second derivatives require different values of D to achieve the same accuracy. These values were incorporated as defaults in the software.
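The stack-based adaptive discretization can be illustrated in one dimension (a toy sketch, not the actual C implementation; the `min_size` cutoff is added here to guarantee termination when the point is close to the interval):

```python
def adaptive_divide(interval, point, distance_size_ratio, min_size=1e-3):
    """Divide a 1D interval until each piece is small enough relative
    to its distance from the computation point.

    Uses an explicit stack instead of recursion, which gives more
    control over execution errors and corner cases.
    """
    stack = [interval]
    accepted = []
    while stack:
        start, end = stack.pop()
        size = end - start
        distance = abs(point - 0.5 * (start + end))
        # Keep the piece if it is far enough away for its size, or if
        # splitting further would make it smaller than the cutoff
        if distance / size >= distance_size_ratio or size <= min_size:
            accepted.append((start, end))
        else:
            middle = 0.5 * (start + end)
            stack.append((start, middle))
            stack.append((middle, end))
    return accepted
```

A larger distance-size ratio triggers more subdivision, and pieces close to the computation point come out finer than distant ones, which is the behavior the algorithm relies on to control the GLQ integration error.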
Uieda, L., V. Barbosa, and C. Braitenberg (2016), Tesseroids: Forward-modeling gravitational fields in spherical coordinates, GEOPHYSICS, F41–F48, doi:10.1190/geo2015-0204.1.
The inversion method proposed in this paper is implemented in the open-source Fatiando a Terra Python library as the fatiando.gravmag.harvester module. The module was introduced in version 0.1 of the library.
The following is an animation of the growth algorithm during the inversion of synthetic data. The video is available at figshare: 10.6084/m9.figshare.91469
We have developed a new gravity gradient inversion method for estimating a 3D density-contrast distribution defined on a grid of rectangular prisms. Our method consists of an iterative algorithm that does not require the solution of an equation system. Instead, the solution grows systematically around user-specified prismatic elements, called “seeds,” with given density contrasts. Each seed can be assigned a different density-contrast value, allowing the interpretation of multiple sources with different density contrasts and that produce interfering signals. In real world scenarios, some sources might not be targeted for the interpretation. Thus, we developed a robust procedure that neither requires the isolation of the signal of the targeted sources prior to the inversion nor requires substantial prior information about the non-targeted sources. In our iterative algorithm, the estimated sources grow by the accretion of prisms in the periphery of the current estimate. In addition, only the columns of the sensitivity matrix corresponding to the prisms in the periphery of the current estimate are needed for the computations. Therefore, the individual columns of the sensitivity matrix can be calculated on demand and deleted after an accretion takes place, greatly reducing the demand for computer memory and processing time. Tests on synthetic data show the ability of our method to correctly recover the geometry of the targeted sources, even when interfering signals produced by non-targeted sources are present. Inverting the data from an airborne gravity gradiometry survey flown over the iron ore province of Quadrilátero Ferrífero, southeastern Brazil, we estimated a compact iron ore body that is in agreement with geologic information and previous interpretations.
Figure: Inversion results for data from the Quadrilátero Ferrífero, southeastern Brazil. Perspective views of the estimated density-contrast distribution, where prisms with zero density contrast are not shown or shown in gray and prisms with density contrast 1 g/cm³, corresponding to the iron ore body of the Cauê itabirite, are shown in solid or transparent red. The seeds used in the inversion are shown as black prisms.
Uieda, L., and V. C. F. Barbosa (2012), Robust 3D gravity gradient inversion by planting anomalous densities, Geophysics, 77(4), G55-G66, doi:10.1190/geo2011-0388.1