From Tensor Network Quantum States to Tensorial Recurrent Neural Networks

Dian Wu, Riccardo Rossi, Filippo Vicentini, Giuseppe Carleo

Jun 27 2022 quant-ph cond-mat.str-el cs.LG physics.comp-ph stat.ML arXiv:2206.12363v1

We show that any matrix product state (MPS) can be exactly represented by a recurrent neural network (RNN) with a linear memory update. We generalize this RNN architecture to 2D lattices using a multilinear memory update. It supports perfect sampling and wave function evaluation in polynomial time, and can represent an area law of entanglement entropy. Numerical evidence shows that it can encode the wave function using a bond dimension lower by orders of magnitude when compared to MPS, with an accuracy that can be systematically improved by increasing the bond dimension.
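The linear memory update at the heart of this correspondence is easy to see in code. Below is a minimal sketch (our own illustration, not the authors' code) that evaluates an open-boundary MPS amplitude through exactly the recurrence h_t = h_{t-1} A[s_t] that a linear RNN cell would implement; the tensor shapes and boundary vectors are illustrative assumptions.

```python
import numpy as np

def mps_amplitude(tensors, config):
    """Evaluate one amplitude of an open-boundary MPS.

    tensors[t] has shape (d, D, D): physical index first, then the two
    bond indices.  The contraction is the linear memory update
    h_t = h_{t-1} @ A[s_t], exactly the recurrence of an RNN whose cell
    is linear in the hidden state.
    """
    D = tensors[0].shape[1]
    h = np.zeros(D)
    h[0] = 1.0                       # left boundary vector e_0
    for A, s in zip(tensors, config):
        h = h @ A[s]                 # linear memory update, one site per "time step"
    return h[0]                      # contract with right boundary vector e_0
```

Evaluating one configuration this way costs O(L D²), polynomial in system size, consistent with the polynomial-time evaluation the abstract claims for the generalized architecture.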

Positive-definite parametrization of mixed quantum states with deep neural networks

Filippo Vicentini, Riccardo Rossi, Giuseppe Carleo

Jun 28 2022 quant-ph cs.LG physics.comp-ph arXiv:2206.13488v1

We introduce the Gram-Hadamard Density Operator (GHDO), a new deep neural-network architecture that can encode positive semi-definite density operators of exponential rank with polynomial resources. We then show how to embed an autoregressive structure in the GHDO to allow direct sampling of the probability distribution. These properties are especially important when representing and variationally optimizing the mixed quantum state of a system interacting with an environment. Finally, we benchmark this architecture by simulating the steady state of the dissipative transverse-field Ising model. Estimating local observables and the Rényi entropy, we show significant improvements over previous state-of-the-art variational approaches.
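The structural guarantee behind such an ansatz, positive semi-definiteness by construction of a Gram matrix, can be illustrated with a toy, non-neural parametrization (our sketch; the actual GHDO composes deep networks and a Hadamard-product structure not shown here):

```python
import numpy as np

def gram_density_matrix(V):
    """Build a density matrix from feature vectors stacked in V.

    rho = V V^dagger is a Gram matrix, hence positive semidefinite by
    construction; normalizing by the trace makes it a valid density
    operator.  This is the guarantee the GHDO ansatz builds on, shown
    here for a toy parametrization.
    """
    V = np.asarray(V)                 # shape (dim, rank)
    rho = V @ V.conj().T              # PSD by construction
    return rho / np.trace(rho)        # unit trace
```

With rank-deficient V one gets low-rank states; the point of the GHDO is that polynomially many network parameters can reach exponential rank.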

Practical Black Box Hamiltonian Learning

Andi Gu, Lukasz Cincio, Patrick J. Coles

Jul 01 2022 quant-ph cs.LG arXiv:2206.15464v1

We study the problem of learning the parameters for the Hamiltonian of a quantum many-body system, given limited access to the system. In this work, we build upon recent approaches to Hamiltonian learning via derivative estimation. We propose a protocol that improves the scaling dependence of prior works, particularly with respect to parameters relating to the structure of the Hamiltonian (e.g., its locality k). Furthermore, by deriving exact bounds on the performance of our protocol, we are able to provide a precise numerical prescription for theoretically optimal settings of hyperparameters in our learning protocol, such as the maximum evolution time (when learning with unitary dynamics) or minimum temperature (when learning with Gibbs states). Thanks to these improvements, our protocol is practical for large problems: we demonstrate this with a numerical simulation of our protocol on an 80-qubit system.
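The abstract does not spell out the protocol, but the underlying idea of Hamiltonian learning via derivative estimation can be sketched in a one-parameter toy model: for H = θX and initial state |0⟩, ⟨Y(t)⟩ = −sin(2θt), so a short-time measurement recovers θ from the slope. The single-qubit setup and function names below are our own illustrative assumptions, not the paper's protocol:

```python
import numpy as np

def estimate_theta(theta_true, t=1e-3):
    """Recover theta in H = theta * X from short-time dynamics.

    For |0> evolved under H, <Y(t)> = -sin(2 theta t), so the slope of
    <Y> at t = 0 is -2 theta; a single short-time expectation value
    therefore gives a derivative-based estimate of theta.
    """
    X = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
    Y = np.array([[0.0, -1j], [1j, 0.0]])
    # exp(-i theta t X) = cos(theta t) I - i sin(theta t) X
    U = np.cos(theta_true * t) * np.eye(2) - 1j * np.sin(theta_true * t) * X
    psi = U @ np.array([1.0, 0.0], dtype=complex)
    expY = (psi.conj() @ Y @ psi).real
    return -expY / (2 * t)           # finite-difference slope divided by -2
```

The paper's contribution is the scaling of such estimates with structural parameters like locality, and the prescription for the optimal evolution time t.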

Neural network enhanced measurement efficiency for molecular groundstates

Dmitri Iouchtchenko, Jérôme F. Gonthier, Alejandro Perdomo-Ortiz, Roger G. Melko

Jul 01 2022 quant-ph cond-mat.dis-nn physics.chem-ph arXiv:2206.15449v1

It is believed that one of the first useful applications for a quantum computer will be the preparation of groundstates of molecular Hamiltonians. A crucial task involving state preparation and readout is obtaining physical observables of such states, which are typically estimated using projective measurements on the qubits. At present, measurement data is costly and time-consuming to obtain on any quantum computing architecture, which has significant consequences for the statistical errors of estimators. In this paper, we adapt common neural network models (restricted Boltzmann machines and recurrent neural networks) to learn complex groundstate wavefunctions for several prototypical molecular qubit Hamiltonians from typical measurement data. By relating the accuracy ε of the reconstructed groundstate energy to the number of measurements, we find that using a neural network model provides a robust improvement over using single-copy measurement outcomes alone to reconstruct observables. This enhancement yields an asymptotic scaling near ε⁻¹ for the model-based approaches, as opposed to ε⁻² in the case of classical shadow tomography.
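The ε⁻² baseline quoted above is just projective-measurement shot noise: averaging single-shot ±1 outcomes gives a standard error scaling as N^(−1/2). A quick numerical check of that baseline (our illustration; the paper's neural estimators are what improve on it):

```python
import numpy as np

def empirical_error(n_shots, trials=2000, seed=0):
    """Standard error of estimating <Z> = 0 from single-shot outcomes.

    Each shot returns +1 or -1 with equal probability; averaging n_shots
    of them leaves a statistical error ~ n_shots**-0.5, i.e. reaching
    accuracy eps costs ~ eps**-2 shots.
    """
    rng = np.random.default_rng(seed)
    outcomes = rng.choice([1.0, -1.0], size=(trials, n_shots))
    return outcomes.mean(axis=1).std()
```

Quadrupling the shot count should roughly halve the error, which is the ε⁻² cost the model-based approaches reduce toward ε⁻¹.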

Exponential data encoding for quantum supervised learning

S. Shin, Y. S. Teo, H. Jeong

Jun 27 2022 quant-ph arXiv:2206.12105v1

Reliable quantum supervised learning of a multivariate function mapping depends on the expressivity of the corresponding quantum circuit and on measurement resources. We introduce exponential-data-encoding strategies that are optimal amongst all non-entangling Pauli-encoded schemes and that suffice for a quantum circuit to express general functions described by Fourier series of very broad frequency spectra using exponentially fewer encoding qubits. We show that such an encoding strategy not only mitigates the barren-plateau problem as the function degree scales up, but also exhibits a quantum advantage during loss-function-gradient computation, in contrast with known efficient classical strategies, when polynomial-depth training circuits are also employed. When gradient-computation resources are constrained, we numerically demonstrate that even exponential-data-encoding circuits with single-layer training circuits can generally express functions that lie outside the classically-expressible region, thereby supporting the practical benefits of such a quantum advantage.
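To see why exponential encoding reaches broad Fourier spectra with few qubits, note that scaling the encoding angle on qubit j by 3^j makes the accessible frequencies the balanced-ternary sums Σ s_j 3^j with s_j ∈ {−1, 0, +1}, i.e. every integer up to (3^n − 1)/2. The base-3 scaling below is a common construction of this kind and an assumption on our part; the paper's exact scheme may differ:

```python
from itertools import product

def reachable_frequencies(n_qubits, base=3):
    """Frequency set of a Pauli-encoded circuit whose encoding angle on
    qubit j is scaled by base**j.

    Each qubit contributes s_j * base**j with s_j in {-1, 0, +1}; with
    base 3 these sums are balanced-ternary representations, covering
    every integer in [-(3**n - 1)//2, (3**n - 1)//2].
    """
    freqs = set()
    for signs in product((-1, 0, 1), repeat=n_qubits):
        freqs.add(sum(s * base ** j for j, s in enumerate(signs)))
    return freqs
```

So the spectrum size grows as 3^n while a linear encoding would only add one new frequency per qubit, which is the "exponentially fewer qubits" trade-off.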

Integral Transforms in a Physics-Informed (Quantum) Neural Network setting: Applications & Use-Cases

Niraj Kumar, Evan Philip, Vincent E. Elfving

Jun 29 2022 quant-ph cs.LG stat.ML arXiv:2206.14184v1

In many computational problems in engineering and science, not only the differentiation of a function or model is essential, but also its integration. An important class of such problems comprises so-called integro-differential equations, which involve both integrals and derivatives of a function. As another example, stochastic differential equations can be written in terms of a partial differential equation for the probability density function of the stochastic variable. To learn characteristics of the stochastic variable from the density function, specific integral transforms of the density function, namely its moments, need to be calculated. Recently, the machine-learning paradigm of Physics-Informed Neural Networks has grown in popularity as a method to solve differential equations by leveraging automatic differentiation. In this work, we propose to augment the Physics-Informed Neural Network paradigm with automatic integration, in order to compute complex integral transforms of trained solutions and to solve integro-differential equations in which integrals are computed on the fly during training. Furthermore, we showcase the techniques in various application settings, numerically simulating both quantum computer-based and classical neural networks.
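The "moments of a density" use case can be sketched with ordinary quadrature standing in for the paper's automatic-integration machinery; in a PINN, a trained network would play the role of f below (a minimal illustration, not the authors' implementation):

```python
import numpy as np

def moment(f, k, a=-10.0, b=10.0, n=10001):
    """k-th moment of a density f on [a, b] by trapezoidal quadrature.

    In the PINN setting of the paper, f would be the trained network and
    this integral transform would be evaluated on the fly during (or
    after) training; here f is any callable density.
    """
    x = np.linspace(a, b, n)
    y = x ** k * f(x)
    h = x[1] - x[0]
    return h * (y.sum() - 0.5 * (y[0] + y[-1]))   # trapezoid rule
```

For a standard normal density this recovers the zeroth and second moments (both 1) to high accuracy, since the integrand decays far below quadrature error at the interval endpoints.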

Learning quantum symmetries with interactive quantum-classical variational algorithms

Jonathan Z. Lu, Rodrigo A. Bravo, Kaiying Hou, Gebremedhin A. Dagnew, Susanne F. Yelin, Khadijeh Najafi

Jun 27 2022 quant-ph cs.LG arXiv:2206.11970v1

A symmetry of a state |ψ⟩ is a unitary operator of which |ψ⟩ is an eigenvector. When |ψ⟩ is an unknown state supplied by a black-box oracle, the state's symmetries serve to characterize it, and often reveal much of the desired information about |ψ⟩. In this paper, we develop a variational hybrid quantum-classical learning scheme to systematically probe for symmetries of |ψ⟩ with no a priori assumptions about the state. This procedure can be used to learn various symmetries at the same time. In order to avoid re-learning already known symmetries, we introduce an interactive protocol with a classical deep neural net. The classical net thereby regularizes against repetitive findings and allows our algorithm to terminate empirically with all possible symmetries found. Our scheme can be implemented efficiently on average with non-local SWAP gates; we also give a less efficient algorithm with only local operations, which may be more appropriate for current noisy quantum devices. We demonstrate our algorithm on representative families of states.
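The quantity such a scheme estimates is the overlap |⟨ψ|U|ψ⟩|², which equals 1 exactly when |ψ⟩ is an eigenvector of the unitary U. A classical toy check on a GHZ state (our sketch; on hardware this overlap would be estimated with SWAP-test-style circuits):

```python
import numpy as np

def symmetry_score(psi, U):
    """Return |<psi|U|psi>|^2 for a normalized state vector psi.

    By the Cauchy-Schwarz equality condition, the score is 1 if and only
    if U psi is proportional to psi, i.e. psi is an eigenvector of U --
    the definition of a symmetry used above.
    """
    return abs(np.vdot(psi, U @ psi)) ** 2
```

For the two-qubit GHZ state, X⊗X is a symmetry (score 1) while X⊗I is not (score 0), matching the eigenvector criterion.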

Topological data analysis and machine learning

Daniel Leykam, Dimitris G. Angelakis

Jul 01 2022 cond-mat.mes-hall physics.optics quant-ph arXiv:2206.15075v1

Topological data analysis refers to approaches for systematically and reliably computing abstract “shapes” of complex data sets. There are various applications of topological data analysis in the life and data sciences, and interest among physicists is growing. We present a concise yet (we hope) comprehensive review of applications of topological data analysis to physics and to machine-learning problems in physics, including the detection of phase transitions. We finish with a preview of anticipated directions for future research.

Quantum Neural Architecture Search with Quantum Circuits Metric and Bayesian Optimization

Trong Duong, Sang T. Truong, Minh Tam, Bao Bach, Ju-Young Ryu, June-Koo Kevin Rhee

Jun 29 2022 quant-ph cs.LG arXiv:2206.14115v1

Quantum neural networks are promising for a wide range of applications in the Noisy Intermediate-Scale Quantum era. As such, there is an increasing demand for automatic quantum neural architecture search. We tackle this challenge by designing a metric on quantum circuits suitable for Bayesian optimization with a Gaussian process. To this goal, we propose a new quantum gate distance that characterizes a gate's action over every quantum state, and we provide a theoretical perspective on its geometrical properties. Our approach significantly outperforms the benchmark on three empirical quantum machine learning problems: training a quantum generative adversarial network, solving combinatorial optimization in the MaxCut problem, and simulating the quantum Fourier transform. Our method can be extended to characterize the behavior of various quantum machine learning models.
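The paper defines its own gate distance; as a stand-in, here is a standard global-phase-invariant distance between unitaries based on the normalized Hilbert-Schmidt overlap, which illustrates the kind of object such a circuits metric is built from (our assumption, not the paper's definition):

```python
import numpy as np

def gate_distance(U, V):
    """Phase-invariant distance between two unitaries of equal dimension.

    Uses the normalized Hilbert-Schmidt overlap |tr(U^dagger V)| / d,
    which is 1 iff U and V differ only by a global phase -- i.e. iff
    they act identically on every quantum state.
    """
    d = U.shape[0]
    overlap = abs(np.trace(U.conj().T @ V)) / d
    return np.sqrt(max(0.0, 1.0 - overlap))   # clamp against float rounding
```

A distance of this phase-invariant kind is what lets a Gaussian-process kernel compare candidate circuits by their action on states rather than by their gate lists.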

Categories: Week-in-QML

