# Improving the Deconvolution of Spectrum at Finite Temperature via Neural Network

Haidong Xie and Xueshuang Xiang, *Improving the Deconvolution of Spectrum at Finite Temperature via Neural Network*, 2021.

In condensed matter physics, spectral information plays an important role in understanding the mechanisms of materials. However, the spectrum is difficult to obtain directly through experiment or simulation. For example, spectral information deconvolved from scanning tunneling spectroscopy suffers from the temperature broadening effect; the deconvolution is ill-posed, which makes the result unstable. To solve this problem, the core idea of existing methods, such as the maximum…
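A minimal numerical sketch (not the paper's method) of why temperature broadening makes this deconvolution ill-posed: the measured signal can be modeled as the intrinsic spectrum convolved with a thermal kernel, here taken as the (negative) derivative of the Fermi-Dirac distribution, whose spectral response decays rapidly, so naive inversion amplifies noise. The energy grid, peak shapes, noise level, and Tikhonov regularizer below are all illustrative assumptions.

```python
import numpy as np

w = np.linspace(-5.0, 5.0, 201)   # energy grid (arbitrary units)
dw = w[1] - w[0]
kT = 0.3                          # thermal broadening scale k_B * T

def kernel(x):
    # -df/dw for the Fermi-Dirac function f: sech^2(x / 2kT) / (4 kT)
    return 1.0 / (4.0 * kT * np.cosh(x / (2.0 * kT)) ** 2)

# Discretized convolution: (K @ s)_i ~ integral of kernel(w_i - w') s(w') dw'
K = kernel(w[:, None] - w[None, :]) * dw

# Hypothetical "true" spectrum: two sharp Gaussian peaks.
s_true = np.exp(-((w - 1.0) / 0.2) ** 2) + 0.5 * np.exp(-((w + 1.5) / 0.2) ** 2)

# Thermally broadened measurement with small additive noise.
rng = np.random.default_rng(0)
g = K @ s_true + 1e-3 * rng.standard_normal(w.size)

# Naive inversion via SVD: small singular values amplify the noise by 1/sigma.
U, sv, Vt = np.linalg.svd(K)
s_naive = Vt.T @ ((U.T @ g) / sv)

# Tikhonov-regularized inversion: trades a small bias for stability.
lam = 1e-3
s_reg = np.linalg.solve(K.T @ K + lam * np.eye(w.size), K.T @ g)

err_naive = np.linalg.norm(s_naive - s_true)
err_reg = np.linalg.norm(s_reg - s_true)
```

The huge condition number of `K` is the ill-posedness: the naive reconstruction is dominated by amplified noise, while the regularized one stays close to the true spectrum. Methods like maximum entropy, and the neural-network approach of this paper, can be read as more sophisticated ways of supplying the prior information that the regularizer crudely encodes.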

#### References

Showing 1–10 of 40 references.

Artificial Neural Network Approach to the Analytic Continuation Problem.

- Physics, Medicine
- Physical review letters
- 2020

This work presents a general framework for building an artificial neural network (ANN) that solves the analytic continuation problem with a supervised learning approach, and shows that the method reaches the same level of accuracy as conventional approaches for low-noise input data while performing significantly better as the noise strength increases.

Analytic continuation via domain knowledge free machine learning

- Physics, Computer Science
- Physical Review B
- 2018

The machine-learning-based approach to analytic continuation not only provides a more accurate spectrum than conventional methods in terms of peak positions and heights, but is also more robust against noise, a key feature required for any continuation technique to succeed.

Frequency Principle: Fourier Analysis Sheds Light on Deep Neural Networks

- Computer Science, Mathematics
- Communications in Computational Physics
- 2020

A very universal Frequency Principle (F-Principle), namely that DNNs often fit target functions from low to high frequencies, is demonstrated on high-dimensional benchmark datasets such as MNIST/CIFAR10 and on deep neural networks such as VGG16.

Understanding training and generalization in deep learning by Fourier analysis

- Computer Science, Mathematics
- ArXiv
- 2018

This work studies DNN training by Fourier analysis to explain why deep neural networks often achieve remarkably low generalization error, and suggests that small initialization leads to good generalization while preserving the DNN's ability to fit any function.

On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima

- Computer Science, Mathematics
- ICLR
- 2017

This work investigates the cause of the generalization drop in the large-batch regime and presents numerical evidence supporting the view that large-batch methods tend to converge to sharp minimizers of the training and testing functions; as is well known, sharp minima lead to poorer generalization.

Implementation of the maximum entropy method for analytic continuation

- Computer Science, Mathematics
- Comput. Phys. Commun.
- 2017

Maxent is a tool for performing analytic continuation of spectral functions using the maximum entropy method. It implements a range of bosonic, fermionic, and generalized kernels for normal and anomalous Green's functions, self-energies, and two-particle response functions.

Frequency Principle in Deep Learning Beyond Gradient-descent-based Training

- Computer Science
- ArXiv
- 2021

Empirical studies show the universality of the F-Principle in the training of DNNs with non-gradient-descent-based methods, including algorithms that use no gradient information, such as Powell's method and particle swarm optimization.

A new approach to solve inverse problems: Combination of model-based solving and example-based learning

- Mathematics
- 2017

The inverse problem, one of the basic forms of mathematical problems, arises extensively in science, engineering, and technology. Traditional inverse problems are resolved through solving…

Theory of the Frequency Principle for General Deep Neural Networks

- Computer Science, Mathematics
- CSIAM Transactions on Applied Mathematics
- 2021

This work rigorously investigates the F-Principle in the training dynamics of a general DNN at three stages: the initial, intermediate, and final stage. The results are general in the sense that they hold for multilayer networks with general activation functions, general population densities of the data, and a large class of loss functions.

Maximum entropy formalism for the analytic continuation of matrix-valued Green's functions

- Mathematics, Physics
- 2017

We present a generalization of the maximum entropy method to the analytic continuation of matrix-valued Green's functions. To treat off-diagonal elements correctly based on Bayesian probability… Expand