As cosmologists and astrophysicists probe ever deeper into the darkest recesses of the universe, their need for more and more powerful observational and computational tools has grown. From facilities such as the Dark Energy Spectroscopic Instrument to supercomputers such as the Cori system at Lawrence Berkeley National Laboratory's National Energy Research Scientific Computing Center (NERSC), they seek to collect, simulate, and analyze ever-increasing amounts of data that can help explain the nature of the things we cannot see, as well as those we can.
To this end, gravitational lensing is one of the most promising tools scientists have to extract this information, allowing them to probe both the geometry of the universe and the growth of cosmic structure. Gravitational lensing distorts images of distant galaxies in a way that is determined by the amount of matter along the line of sight in a given direction, and it provides a way of viewing a two-dimensional map of dark matter, according to Deborah Bard, head of the Data Science Engagement Group at NERSC.
"The gravitational lens is one of the best ways to study dark matter, which is important because it says a lot about the structure of the universe," she said. "The majority of matter in the universe is dark matter, which we can not see directly, so we need to use indirect methods to study its distribution."
But as the theoretical and experimental data sets grow, along with the simulations needed to image and analyze those data, a new challenge has emerged: the simulations are increasingly expensive, sometimes prohibitively so. Computational cosmologists often turn to cheaper surrogate models that mimic the expensive simulations. More recently, however, "advances in deep generative models based on neural networks have opened the possibility of building surrogate models that are more robust and require less hand-crafting, for many types of simulators, including those used in cosmology," said Mustafa Mustafa, a machine learning engineer at NERSC and lead author of a new study describing one such approach, developed by a collaboration between Berkeley Lab, Google Research, and the University of KwaZulu-Natal.
A variety of deep generative models are being investigated for scientific applications, but the Berkeley Lab-led team is taking a distinctive tack: generative adversarial networks (GANs). In a paper published May 6, 2019, in Computational Astrophysics and Cosmology, they discuss their new deep learning network, dubbed CosmoGAN, and its ability to create high-fidelity weak gravitational lensing convergence maps.
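The article does not spell out the network architecture, but for readers unfamiliar with GANs, the sketch below shows a minimal generator/discriminator pair for 2D map-like images in PyTorch. The layer sizes, the 32x32 output resolution, and names such as latent_dim are illustrative assumptions for this toy example, not the actual CosmoGAN design.

```python
# Minimal GAN sketch (illustrative only; not the CosmoGAN architecture).
import torch
import torch.nn as nn

latent_dim = 64  # assumed size of the random input vector

# Generator: random noise -> 32x32 single-channel "map"
generator = nn.Sequential(
    nn.ConvTranspose2d(latent_dim, 128, 4, 1, 0, bias=False),  # 1x1 -> 4x4
    nn.BatchNorm2d(128), nn.ReLU(True),
    nn.ConvTranspose2d(128, 64, 4, 2, 1, bias=False),          # 4x4 -> 8x8
    nn.BatchNorm2d(64), nn.ReLU(True),
    nn.ConvTranspose2d(64, 32, 4, 2, 1, bias=False),           # 8x8 -> 16x16
    nn.BatchNorm2d(32), nn.ReLU(True),
    nn.ConvTranspose2d(32, 1, 4, 2, 1, bias=False),            # 16x16 -> 32x32
    nn.Tanh(),
)

# Discriminator: 32x32 map -> probability that it came from a simulation
discriminator = nn.Sequential(
    nn.Conv2d(1, 32, 4, 2, 1, bias=False), nn.LeakyReLU(0.2, True),   # -> 16x16
    nn.Conv2d(32, 64, 4, 2, 1, bias=False), nn.LeakyReLU(0.2, True),  # -> 8x8
    nn.Conv2d(64, 1, 8, 1, 0, bias=False), nn.Sigmoid(),              # -> 1x1
    nn.Flatten(),
)

# Sample a batch of fake maps and score them
z = torch.randn(16, latent_dim, 1, 1)
fake_maps = generator(z)            # shape: (16, 1, 32, 32)
scores = discriminator(fake_maps)   # shape: (16, 1)
```

In a GAN, the two networks are trained against each other: the generator tries to produce maps the discriminator cannot distinguish from simulated ones, while the discriminator tries to catch the fakes.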
"A convergence map is actually a 2D map of the gravitational lens we see in the sky along the line of sight," said Bard, co-author of the book Computational Astrophysics and Cosmology paper. "If you have a vertex in a convergence map that corresponds to a peak in a large amount of material along the line of sight, it means that there is a huge amount of dark matter in that direction."
The advantages of GANs
Why opt for GANs instead of other types of generative models? Performance and precision, according to Mustafa.
"From an in-depth learning perspective, there are other ways to learn how to generate convergence maps from images, but when we started this project, the GANs seemed to produce very high resolution images compared to competing methods, while still being efficient in terms of computing and neural network size, "he said.
"We were looking for two things: to be precise and fast," said co-author Zaria Lukic, a researcher at the Berkeley Lab Center for Computational Cosmology. "The GANs offer the hope of being almost as accurate as the full physical simulations."
The research team is particularly interested in building a surrogate model that would reduce the computational cost of running these simulations. In the Computational Astrophysics and Cosmology paper, they outline a number of advantages that GANs offer for the study of large physics simulations.
"The GANs are known to be very unstable during training, especially when you reach the end of the training and as the pictures start to look beautiful, that's when network updates can to be really chaotic, "Mustafa said. "But because we have the summary statistics we use in cosmology, we were able to evaluate the GANs at each stage of the training, which helped us determine which generator we thought was the best. generally not used in GAN training. "
Using the CosmoGAN generator network, the team was able to produce convergence maps that are described, with high statistical confidence, by the same summary statistics as the fully simulated maps. This very high level of agreement between the convergence maps, which are statistically indistinguishable from maps produced by physics-based generative models, is an important step toward building emulators out of deep neural networks.
"The huge advantage here was that the problem we were addressing was a physics problem that associated measures were associated with," said Bard. "But with our approach, there are metrics that allow you to quantify the accuracy of your GAN, which for me is really exciting about how this type of physics problem can influence learning methods. automatic.
Ultimately, such approaches could transform a science that currently relies on detailed physics simulations that require billions of compute hours and occupy petabytes of disk space, but much work remains to be done. Cosmology data (and scientific data in general) can require very high-resolution measurements, such as full-sky telescope images.
"The 2D images taken into account for this project are useful, but the current physical simulations are three-dimensional and can vary over time – and are irregular, producing a feature-rich structure, similar to that of the Web," said Wahid Bhmiji, a large data architect from the Data and Analytics Services group at NERSC and co-author of Computational Astrophysics and Cosmology paper. "In addition, the approach needs to be extended to explore new virtual universes rather than those already simulated, namely the construction of a controllable CosmoGAN system."
"The idea of making controllable GANs is essentially the holy grail of the whole problem we are working on: to be able to really mimic physical simulators, we have to build alternative models based on controllable GANs," Mustafa added. . "Right now, we're trying to figure out how to stabilize the training dynamic, given all the progress we've made on the ground in the last few years, and stabilizing training is extremely important to do what we want to do next."
Mustafa Mustafa et al., CosmoGAN: creating high-fidelity weak lensing convergence maps using Generative Adversarial Networks, Computational Astrophysics and Cosmology (2019). DOI: 10.1186/s40668-019-0029-9
Citation: CosmoGAN: Training a neural network to study dark matter (2019, May 16), retrieved May 16, 2019 from https://phys.org/news/2019-05-cosmogan-neural-network-dark.html