January 8, 2021

Using Machine Learning to study anatomy, climate and earthquakes

Research papers come out too quickly for anyone to read them all, especially in machine learning, which now affects (and produces papers in) virtually every industry and business. The aim of this column is to collect the most relevant recent findings and work, particularly in artificial intelligence but not limited to it, and explain why they matter.

This week leans a little more toward basic research than consumer applications. Machine learning can be applied to users' benefit in many ways, but it is also transformative in fields such as seismology and biology, where enormous backlogs of historical data can be used to train AI models or mined as raw material for new insights.

Inside earthquakes

We are surrounded by natural phenomena that we do not really understand. Obviously we know where earthquakes and storms come from, but how exactly do they propagate? What secondary effects arise when different phenomena interact? How far in advance can these things be predicted?

Several recently published research projects have used machine learning to try to better understand or predict these phenomena. With decades of data available to draw from, there are insights to be gained this way, provided the seismologists, meteorologists, and geologists interested in doing so can secure the funding and expertise.

The most recent discovery, made by researchers at Los Alamos National Laboratory, uses a new data source as well as ML to document previously unobserved behavior along faults during “slow earthquakes.” Using synthetic aperture radar captured from orbit, which can see through cloud cover and at night to give accurate, regular images of the shape of the ground, the team was able to directly observe “rupture propagation” for the first time, along the North Anatolian Fault in Turkey.

“The deep learning methodology we developed allows us to automatically detect the small, transient deformation that occurs on faults with unprecedented resolution, paving the way for a systematic study of the interaction between slow and regular earthquakes, on a global scale,” said Los Alamos geophysicist Bertrand Rouet-Leduc.
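To make the idea concrete, here is a minimal, hypothetical sketch of what a deep-learning detector over satellite radar data might look like: a small 1-D convolutional network that scores a single pixel's line-of-sight displacement time series for transient deformation. The window length, architecture, and training setup are illustrative assumptions, not the Los Alamos team's actual model.

```python
# Hypothetical sketch: classifying a per-pixel InSAR displacement time series
# as "background" vs. "transient slip". Window length, layer sizes, and any
# training data are illustrative assumptions only.
import torch
import torch.nn as nn

class TransientDetector(nn.Module):
    """1-D CNN over a single pixel's displacement time series."""
    def __init__(self, n_timesteps: int = 64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * (n_timesteps // 4), 1),  # logit: transient or not
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, n_timesteps) detrended line-of-sight displacement
        return self.head(self.features(x))

# Usage: score a batch of pixel time series (random stand-in for real data).
model = TransientDetector()
displacements = torch.randn(8, 1, 64)  # placeholder for real InSAR series
probs = torch.sigmoid(model(displacements))
```

Run over every pixel of an interferogram stack, a classifier of this general kind would produce the fault-spanning deformation maps the quote describes, though the real system's inputs and architecture are not public in this article.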

Another effort, underway for some years at Stanford, has Earth science researcher Mostafa Mousavi tackling the problem of signal-to-noise ratio in seismic data. Watching decades-old software churn through the data for the billionth time one day, he felt there had to be a better way, and he has spent years working on various methods. The most recent is a way to tease out evidence of small earthquakes that went unnoticed but still left a record in the data.
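For context on the signal-to-noise problem, the classical baseline for picking events out of a noisy seismogram is the short-term-average/long-term-average (STA/LTA) trigger, which small events routinely slip past. The numpy sketch below implements that traditional detector; it is the kind of method Mousavi's deep-learning work improves on, not his method itself, and the window lengths and threshold are illustrative.

```python
# Classical STA/LTA event detector: flag samples where the short-term average
# of signal energy jumps well above the long-term background average.
import numpy as np

def sta_lta(trace: np.ndarray, nsta: int, nlta: int) -> np.ndarray:
    """Ratio of short-term to long-term moving average of signal energy."""
    energy = trace ** 2
    # Cumulative sums give O(n) moving averages.
    csum = np.cumsum(energy)
    sta = (csum[nsta:] - csum[:-nsta]) / nsta
    lta = (csum[nlta:] - csum[:-nlta]) / nlta
    # Trim STA so both windows end on the same sample before dividing.
    return sta[len(sta) - len(lta):] / (lta + 1e-12)

# Synthetic example: noise with a small "event" buried around sample 5000.
rng = np.random.default_rng(0)
trace = rng.normal(0.0, 1.0, 10_000)
trace[5000:5100] += 4.0 * np.sin(np.linspace(0, 20 * np.pi, 100))

ratio = sta_lta(trace, nsta=50, nlta=1000)
picks = np.flatnonzero(ratio > 4.0)  # ratio index j maps to sample j + nlta
print(f"first trigger near sample {picks[0] + 1000}" if picks.size else "no trigger")
```

Events whose energy barely exceeds the background never push this ratio over the threshold, which is exactly the population of small quakes a learned detector can recover.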

The “Earthquake Transformer” (named after the machine-learning architecture, not the robots) was trained on years of hand-labeled seismic data. When tested on measurements from Japan's magnitude 6.6 Tottori earthquake, it isolated 21,092 separate events, more than twice as many as people had found in their original inspection, while using data from less than half of the stations that recorded the quake.
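The published Earthquake Transformer maps a three-component waveform to per-sample probabilities for event detection and P- and S-phase arrivals. The PyTorch sketch below shows that overall shape in a drastically simplified form; the layer sizes, downsampling, and window length here are assumptions for illustration, not the paper's architecture.

```python
# Much-simplified sketch of a transformer-style seismic picker. Like the
# real Earthquake Transformer it emits three per-step probability streams
# (detection, P arrival, S arrival), but every dimension is illustrative.
import torch
import torch.nn as nn

class TinyQuakePicker(nn.Module):
    def __init__(self, d_model: int = 64, n_heads: int = 4, n_layers: int = 2):
        super().__init__()
        # Embed the 3-channel waveform (Z, N, E) and downsample 4x in time.
        self.embed = nn.Conv1d(3, d_model, kernel_size=7, stride=4, padding=3)
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        # Three per-step outputs: detection, P-phase, S-phase probability.
        self.heads = nn.Linear(d_model, 3)

    def forward(self, waveform: torch.Tensor) -> torch.Tensor:
        # waveform: (batch, 3, n_samples)
        x = self.embed(waveform).transpose(1, 2)  # (batch, n_steps, d_model)
        x = self.encoder(x)
        return torch.sigmoid(self.heads(x))       # (batch, n_steps, 3)

# Usage on a placeholder 30 s window sampled at 100 Hz:
model = TinyQuakePicker()
window = torch.randn(1, 3, 3000)
probs = model(window)
detections = probs[..., 0] > 0.5  # time steps flagged as "inside an event"
```

Hand-labeled picks like those used to train the real model would supply the per-step targets; the untrained weights here only demonstrate the input and output shapes involved.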

This tool will not predict earthquakes on its own, but a better understanding of the true and complete nature of the phenomenon means we may be able to get there by other means. “Improving our ability to detect and locate these small earthquakes will allow us to get a more accurate picture of how earthquakes interact or spread along the fault, how they start and even how they stop,” said co-author Gregory Beroza.

Source: TechCrunch