Learning Quantum Systems

Valentin Gebhart, Raffaele Santagati, Antonio Andrea Gentile, Erik Gauger, David Craig, Natalia Ares, Leonardo Banchi, Florian Marquardt, Luca Pezzè, Cristian Bonato

Jul 04 2022 quant-ph arXiv:2207.00298v2


Quantum technologies hold the promise to revolutionise our society with ground-breaking applications in secure communication, high-performance computing and ultra-precise sensing. One of the main challenges in scaling up quantum technologies is that the complexity of quantum systems scales exponentially with their size. This poses severe challenges in the efficient calibration, benchmarking and validation of quantum states and their dynamical control. While the complete simulation of large-scale quantum systems may only be possible with a quantum computer, classical characterisation and optimisation methods (supported by cutting-edge numerical techniques) can still play an important role. Here, we review classical approaches to learning quantum systems, their correlation properties, their dynamics and their interaction with the environment. We discuss theoretical proposals and successful implementations in different physical platforms such as spin qubits, trapped ions, photonic and atomic systems, and superconducting circuits. This review provides a brief background for key concepts recurring across many of these approaches, such as the Bayesian formalism or neural networks, and outlines open questions.
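The Bayesian formalism mentioned in the abstract can be illustrated with a toy example. The sketch below (our illustration, not the review's code; the Ramsey-style likelihood, the shot schedule and the true frequency are all assumptions) estimates a single qubit precession frequency from simulated binary measurement outcomes by repeated Bayes updates over a discretised prior:

```python
import numpy as np

# Hypothetical setup: outcome 1 after evolution time t occurs with
# probability p(1|omega, t) = sin^2(omega * t / 2).
rng = np.random.default_rng(0)
true_omega = 0.8

# Discretised uniform prior over candidate frequencies in [0, 2].
omegas = np.linspace(0.0, 2.0, 2001)
prior = np.ones_like(omegas) / omegas.size

for t in np.tile([1.0, 3.0, 7.0], 30):  # 90 single-shot "experiments"
    # Simulate one measurement on the (here, virtual) device.
    outcome = rng.random() < np.sin(true_omega * t / 2) ** 2

    # Likelihood of that outcome for every candidate omega.
    p1 = np.sin(omegas * t / 2) ** 2
    likelihood = p1 if outcome else 1.0 - p1

    # Bayes' rule: posterior is proportional to likelihood times prior.
    prior = likelihood * prior
    prior /= prior.sum()

estimate = omegas[np.argmax(prior)]
print(f"posterior peak at omega = {estimate:.3f}")
```

Longer evolution times sharpen the likelihood but make it multimodal; mixing short and long times, as above, is a common way to resolve the ambiguity.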

Unitary Partitioning and the Contextual Subspace Variational Quantum Eigensolver

Alexis Ralli, Tim Weaving, Andrew Tranter, William M. Kirby, Peter J. Love, Peter V. Coveney

Jul 08 2022 quant-ph arXiv:2207.03451v1


The contextual subspace variational quantum eigensolver (CS-VQE) is a hybrid quantum-classical algorithm that approximates the ground state energy of a given qubit Hamiltonian. It achieves this by separating the Hamiltonian into contextual and noncontextual parts. The ground state energy is approximated by classically solving the noncontextual problem, followed by solving the contextual problem using VQE, constrained by the noncontextual solution. In general, computation of the contextual correction needs fewer qubits and measurements compared to solving the full Hamiltonian via traditional VQE. We simulate CS-VQE on different tapered molecular Hamiltonians and apply the unitary partitioning measurement reduction strategy to further reduce the number of measurements required to obtain the contextual correction. Our results indicate that CS-VQE combined with measurement reduction is a promising approach to allow feasible eigenvalue computations on noisy intermediate-scale quantum devices. We also provide a modification to the CS-VQE algorithm that removes a step which previously could cause an exponential increase in the number of Hamiltonian terms; the modified step scales at worst quadratically.
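Unitary partitioning rests on grouping mutually anticommuting Pauli terms, since each such set can be rotated into a single Pauli operator and measured at once. The sketch below (a minimal illustration under our own conventions, not the paper's implementation; the example Hamiltonian terms are made up) checks anticommutation of Pauli strings and greedily collects compatible groups:

```python
def paulis_anticommute(p, q):
    # Two Pauli strings anticommute iff they differ, with both factors
    # non-identity, on an odd number of qubits.
    diff = sum(1 for a, b in zip(p, q) if a != 'I' and b != 'I' and a != b)
    return diff % 2 == 1

def greedy_anticommuting_groups(terms):
    # Greedily collect sets of mutually anticommuting Pauli strings.
    # Each set can be reduced to one measurement via unitary partitioning,
    # so fewer groups means fewer distinct measurement settings.
    groups = []
    for term in terms:
        for g in groups:
            if all(paulis_anticommute(term, t) for t in g):
                g.append(term)
                break
        else:
            groups.append([term])
    return groups

# Hypothetical 4-qubit Hamiltonian terms, for illustration only.
H_terms = ["ZZII", "XIXI", "YIIY", "IZZI", "XXXX"]
groups = greedy_anticommuting_groups(H_terms)
print(groups)  # 5 terms collapse into 3 measurement groups
```

Greedy grouping is only a heuristic; finding a minimal partition is a graph colouring problem in general.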

Pricing multi-asset derivatives by variational quantum algorithms

Kenji Kubo, Koichi Miyamoto, Kosuke Mitarai, Keisuke Fujii

Jul 05 2022 quant-ph q-fin.CP q-fin.PR arXiv:2207.01277v1


Pricing a multi-asset derivative is an important problem in financial engineering, both theoretically and practically. Although the prices of certain types of derivatives can be calculated by numerically solving partial differential equations, the computational complexity increases exponentially as the number of underlying assets increases in some classical methods, such as the finite difference method. Therefore, there are efforts to reduce the computational complexity by using quantum computation. However, when solving with naive quantum algorithms, the target derivative price is embedded in the amplitude of a single basis state, so exponential complexity is required to extract the solution. To avoid this bottleneck, a previous study [Miyamoto and Kubo, IEEE Transactions on Quantum Engineering, 3, 1-25 (2022)] utilizes the fact that the present price of a derivative can be obtained from its discounted expected value at any future point in time and shows that a quantum algorithm can reduce the complexity. In this paper, to make the algorithm feasible to run on a small quantum computer, we use variational quantum simulation to solve the Black-Scholes equation and compute the derivative price from the inner product between the solution and a probability distribution. This avoids the measurement bottleneck of the naive approach and could provide quantum speedup even on noisy quantum computers. We also conduct numerical experiments to validate our method. Our method may be an important step towards derivative pricing using small-scale quantum computers.
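For context, the classical baseline the paper aims to outperform can be sketched in a few lines. The code below (our illustration, not the paper's algorithm; all parameter values are assumptions) prices a European call by marching the Black-Scholes PDE backwards in time with an explicit finite-difference scheme, the method whose cost grows exponentially with the number of assets:

```python
import numpy as np

# Illustrative parameters: rate, volatility, strike, maturity.
r, sigma, K, T = 0.05, 0.2, 100.0, 1.0
S_max, NS, NT = 300.0, 300, 10000  # grid in asset price and time

S = np.linspace(0.0, S_max, NS + 1)
dS, dt = S[1] - S[0], T / NT
V = np.maximum(S - K, 0.0)  # call payoff at maturity t = T

tau = 0.0  # time to maturity at the current step
for _ in range(NT):  # explicit scheme, marching from t = T back to t = 0
    delta = (V[2:] - V[:-2]) / (2 * dS)
    gamma = (V[2:] - 2 * V[1:-1] + V[:-2]) / dS**2
    theta = 0.5 * sigma**2 * S[1:-1]**2 * gamma + r * S[1:-1] * delta - r * V[1:-1]
    V = V.copy()
    V[1:-1] += dt * theta
    tau += dt
    V[0], V[-1] = 0.0, S_max - K * np.exp(-r * tau)  # boundary conditions

price = np.interp(100.0, S, V)  # value at spot S0 = 100
print(f"call price = {price:.2f}")  # analytic Black-Scholes gives about 10.45
```

With d underlying assets the grid has O(NS^d) points, which is the exponential blow-up the quantum approach targets.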

Tensor networks in machine learning

Richik Sengupta, Soumik Adhikary, Ivan Oseledets, Jacob Biamonte

Jul 08 2022 quant-ph cond-mat.dis-nn cs.AI cs.LG arXiv:2207.02851v1


A tensor network is a type of decomposition used to express and approximate large arrays of data. A given data-set, quantum state or higher-dimensional multi-linear map is factored and approximated by a composition of smaller multi-linear maps. This is reminiscent of how a Boolean function might be decomposed into a gate array: that represents a special case of tensor decomposition, in which the tensor entries are restricted to 0 and 1 and the factorisation is exact. The collection of associated techniques is called tensor network methods: the subject developed independently in several distinct fields of study, which have more recently become interrelated through the language of tensor networks. The central questions in the field relate to the expressibility of tensor networks and the reduction of computational overheads. A merger of tensor networks with machine learning is natural. On the one hand, machine learning can aid in determining a factorization of a tensor network approximating a data set. On the other hand, a given tensor network structure can be viewed as a machine learning model, in which the tensor network parameters are adjusted to learn or classify a data-set. In this survey we review the basics of tensor networks and explain the ongoing effort to develop the theory of tensor networks in machine learning.
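The basic factorisation step can be made concrete with the tensor-train (matrix product state) decomposition, in which an order-N tensor is split into a chain of order-3 cores by sequential SVDs. The sketch below (our minimal illustration in NumPy, not code from the survey) factors a small random tensor exactly and contracts the cores back to verify the reconstruction:

```python
import numpy as np

rng = np.random.default_rng(1)
T = rng.standard_normal((2, 3, 4, 2))  # order-4 tensor to decompose

# Sweep left to right: reshape, SVD, keep U as a core, carry s @ Vt on.
cores, rest, r = [], T, 1
for dim in T.shape[:-1]:
    rest = rest.reshape(r * dim, -1)
    U, s, Vt = np.linalg.svd(rest, full_matrices=False)
    cores.append(U.reshape(r, dim, -1))   # core of shape (r_left, dim, r_right)
    rest = np.diag(s) @ Vt
    r = U.shape[1]
cores.append(rest.reshape(r, T.shape[-1], 1))

# Contract the chain of cores and check we recover T exactly.
out = cores[0]
for c in cores[1:]:
    out = np.tensordot(out, c, axes=([-1], [0]))
out = out.reshape(T.shape)
print(np.allclose(out, T))  # True
```

Approximation, rather than exact factorisation, comes from truncating the SVD at each step, which bounds the bond dimensions of the cores.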

The Quantum Approximate Optimization Algorithm performance with low entanglement and high circuit depth

Rishi Sreedhar, Pontus Vikstål, Marika Svensson, Andreas Ask, Göran Johansson, Laura García-Álvarez

Jul 08 2022 quant-ph arXiv:2207.03404v1


Variational quantum algorithms constitute one of the most widespread methods for using current noisy quantum computers. However, it is unknown if these heuristic algorithms provide any quantum-computational speedup, although we cannot simulate them classically for intermediate sizes. Since entanglement lies at the core of quantum computing power, we investigate its role in these heuristic methods for solving optimization problems. In particular, we use matrix product states to simulate the quantum approximate optimization algorithm with reduced bond dimension D, a parameter bounding the system entanglement. Moreover, we restrict the simulation further by deterministically sampling solutions. We conclude that entanglement plays a minor role in the MaxCut and Exact Cover 3 problems studied here: the analysis of the simulated algorithm, with up to 60 qubits and p = 100 algorithm layers, shows that it provides solutions already for bond dimension D ≈ 10 and depth p ≈ 30. Additionally, we study the classical optimization loop in the approximated algorithm simulation with 12 qubits and depth up to p = 4 and show that the approximated optimal parameters with low entanglement approach the exact ones.
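The role of the bond dimension D can be seen at a single bipartition: an MPS with bond dimension D keeps only the D largest Schmidt coefficients at every cut. The sketch below (our illustration on a random state, not the paper's simulation) truncates the Schmidt decomposition of an 8-qubit state and reports how much of the state survives for various D:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8  # qubits, split into two halves of 4

# Random normalised state, reshaped as a matrix across the half-half cut.
psi = rng.standard_normal(2**n) + 1j * rng.standard_normal(2**n)
psi /= np.linalg.norm(psi)
U, s, Vt = np.linalg.svd(psi.reshape(2**(n // 2), 2**(n // 2)))

for D in (1, 4, 16):
    # Rank-D (bond-dimension-D) approximation across this cut.
    psi_D = (U[:, :D] * s[:D]) @ Vt[:D]
    # Weight retained: sum of the kept squared Schmidt coefficients.
    fidelity = np.sum(s[:D] ** 2)
    print(D, round(fidelity, 3))
```

A generic random state needs the full bond dimension (here 16) for fidelity 1, whereas the paper's finding is that QAOA states for these problems are well captured already at D ≈ 10 despite much larger cuts.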

Machine Learning of Average Non-Markovianity from Randomized Benchmarking

Shih-Xian Yang, Pedro Figueroa-Romero, Min-Hsiu Hsieh

Jul 05 2022 quant-ph arXiv:2207.01542v1


The presence of correlations in noisy quantum circuits will be an inevitable side effect as quantum devices continue to grow in size and depth. Randomized Benchmarking (RB) is arguably the simplest method to initially assess the overall performance of a quantum device and to pinpoint the presence of temporal correlations, so-called non-Markovianity. However, when such correlations are detected, it has hitherto remained a challenge to quantify their features operationally. Here, we demonstrate a method that exploits the power of machine learning with matrix product operators to deduce the minimal average non-Markovianity displayed by the data of an RB experiment, arguing that this can be achieved for any suitable gate set and tailored to most special-purpose RB techniques.
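As background, in the Markovian case an RB experiment produces survival probabilities that decay as a single exponential F(m) = A p^m + B in the sequence length m; systematic deviations from this model are the signature of non-Markovianity that the paper quantifies. The sketch below (our toy example, not the paper's method; all parameter values are assumptions, and B = 1/2 is the known single-qubit asymptote) simulates Markovian RB data and recovers the decay rate with a log-linear fit:

```python
import numpy as np

# Toy Markovian RB data: F(m) = A p^m + B plus small Gaussian noise.
rng = np.random.default_rng(3)
A, p, B = 0.5, 0.98, 0.5
m = np.arange(1, 101)
F = A * p**m + B + rng.normal(0.0, 0.002, m.size)

# log(F - B) = log(A) + m log(p): fit a straight line in m.
slope, intercept = np.polyfit(m, np.log(F - B), 1)
p_fit = np.exp(slope)

# Standard single-qubit RB relation between decay rate and gate error.
avg_gate_error = (1 - p_fit) / 2
print(f"p = {p_fit:.4f}, average gate error = {avg_gate_error:.5f}")
```

A non-Markovian device would show residuals with structure (e.g. oscillations or multi-exponential decay) that no choice of A, p, B can fit, which is what motivates the matrix-product-operator model in the paper.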

Categories: Week-in-QML

