📰News:

📽Videos:

📗Papers:

Group-Invariant Quantum Machine Learning

Martin Larocca, Frederic Sauvage, Faris M. Sbahi, Guillaume Verdon, Patrick J. Coles, M. Cerezo

May 06 2022 quant-ph cs.LG stat.ML arXiv:2205.02261v1

Scites: 24

Quantum Machine Learning (QML) models are aimed at learning from data encoded in quantum states. Recently, it has been shown that models with little to no inductive biases (i.e., with no assumptions about the problem embedded in the model) are likely to have trainability and generalization issues, especially for large problem sizes. As such, it is fundamental to develop schemes that encode as much information as available about the problem at hand. In this work we present a simple, yet powerful, framework where the underlying invariances in the data are used to build QML models that, by construction, respect those symmetries. These so-called group-invariant models produce outputs that remain invariant under the action of any element of the symmetry group G associated with the dataset. We present theoretical results underpinning the design of G-invariant models, and exemplify their application through several paradigmatic QML classification tasks, including cases when G is a continuous Lie group and also when it is a discrete symmetry group. Notably, our framework allows us to recover, in an elegant way, several well-known algorithms from the literature, as well as to discover new ones. Taken together, we expect that our results will help pave the way towards a more geometric and group-theoretic approach to QML model design.
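
A standard way to build a G-invariant measurement, and a natural companion to this abstract, is twirling: averaging an observable over the group's unitary representation. The NumPy sketch below is our own illustration (not code from the paper) for the two-element swap group S_2 acting on two qubits.

```python
import numpy as np

# Twirling: average an observable over a (finite) symmetry group so the
# resulting measurement is invariant under every group element.
# Here G = S_2, acting on two qubits by swapping them.
I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=float)

group = [np.eye(4), SWAP]          # unitary representation of S_2
O = np.kron(Z, I2)                 # non-symmetric observable: Z on qubit 0 only

# O_G = (1/|G|) * sum_g U_g^dag O U_g
O_G = sum(U.conj().T @ O @ U for U in group) / len(group)

# The twirled observable commutes with every group element, so
# <psi|O_G|psi> is unchanged when the two qubits are swapped.
for U in group:
    assert np.allclose(U.conj().T @ O_G @ U, O_G)
print(O_G)  # equals (Z(x)I + I(x)Z) / 2
```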

Quantum Extremal Learning

Savvas Varsamopoulos, Evan Philip, Herman W. T. van Vlijmen, Sairam Menon, Ann Vos, Natalia Dyubankova, Bert Torfs, Anthony Rowe, Vincent E. Elfving

May 06 2022 quant-ph cs.LG stat.ML arXiv:2205.02807v1

Scites: 8

We propose a quantum algorithm for "extremal learning", which is the process of finding the input to a hidden function that extremizes the function output, without having direct access to the hidden function, given only partial input-output (training) data. The algorithm, called quantum extremal learning (QEL), consists of a parametric quantum circuit that is variationally trained to model data input-output relationships and where a trainable quantum feature map, which encodes the input data, is analytically differentiated in order to find the coordinate that extremizes the model. This enables the combination of established quantum machine learning modelling with established quantum optimization, on a single circuit/quantum computer. We have tested our algorithm on a range of classical datasets based on either discrete or continuous input variables, both of which are compatible with the algorithm. In the case of discrete variables, we test our algorithm on synthetic problems formulated with Max-Cut problem generators, also considering higher-order correlations in the input-output relationships. In the case of continuous variables, we test our algorithm on synthetic 1D datasets and simple ordinary differential equations. We find that the algorithm is able to successfully find the extremal value of such problems, even when the training dataset is sparse or covers only a small fraction of the input configuration space. We additionally show how the algorithm can be used for much more general cases of higher dimensionality, complex differential equations, and with full flexibility in the choice of both modeling and optimization ansatz. We envision that due to its general framework and simple construction, the QEL algorithm will be able to address a wide variety of applications in different fields, opening up areas of further research.
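
The train-then-differentiate loop is easy to see in a classical caricature: fit a differentiable surrogate to sparse samples, then gradient-ascend the input. In the sketch below (ours, not the paper's code) a truncated Fourier basis stands in for the trained quantum feature map; parametrized quantum models are known to realize exactly such Fourier series.

```python
import numpy as np

# (i) fit a differentiable surrogate to sparse input/output samples,
# (ii) gradient-ascend the *input* of the trained model to locate the
# extremizing coordinate. A truncated Fourier basis stands in for the
# quantum feature map.
rng = np.random.default_rng(0)
hidden = lambda x: np.sin(2 * x) + 0.5 * np.cos(3 * x)   # unknown to the learner
X = rng.uniform(-np.pi, np.pi, size=25)
y = hidden(X)

K = np.arange(1, 5)                                      # model frequencies

def phi(x):                # feature map: [1, sin(kx)..., cos(kx)...]
    return np.concatenate(([1.0], np.sin(K * x), np.cos(K * x)))

def dphi(x):               # its analytic derivative with respect to x
    return np.concatenate(([0.0], K * np.cos(K * x), -K * np.sin(K * x)))

A = np.stack([phi(x) for x in X])
w, *_ = np.linalg.lstsq(A, y, rcond=None)                # train the surrogate

x = 0.1                                                  # gradient ascent on the input
for _ in range(300):
    x += 0.05 * (w @ dphi(x))
print("extremal input ~ %.3f, model value ~ %.3f" % (x, w @ phi(x)))
```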

Quantum Compressive Sensing: Mathematical Machinery, Quantum Algorithms, and Quantum Circuitry

Kyle Sherbert, Naveed Naimipour, Haleh Safavi, Harry Shaw, Mojtaba Soltanalian

Apr 28 2022 quant-ph eess.SP arXiv:2204.13035v1

Scites: 6

Compressive sensing is a sensing protocol that facilitates reconstruction of large signals from relatively few measurements by exploiting known structures of signals of interest, typically manifested as signal sparsity. Compressive sensing's vast repertoire of applications in areas such as communications and image reconstruction stems from the traditional approach of utilizing non-linear optimization to exploit the sparsity assumption by selecting the lowest-weight (i.e. maximum-sparsity) signal consistent with all acquired measurements. Recent efforts in the literature consider instead a data-driven approach, training tensor networks to learn the structure of signals of interest. The trained tensor network is updated to "project" its state onto one consistent with the measurements taken, and is then sampled site by site to "guess" the original signal. In this paper, we take advantage of this computing protocol by formulating an alternative "quantum" protocol, in which the state of the tensor network is a quantum state over a set of entangled qubits. Accordingly, we present the associated algorithms and quantum circuits required to implement the training, projection, and sampling steps on a quantum computer. We supplement our theoretical results by simulating the proposed circuits with a small, qualitative model of LIDAR imaging of Earth forests. Our results indicate that a quantum, data-driven approach to compressive sensing may have significant promise as quantum technology continues to make new leaps.
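
For readers who want the classical baseline in hand: the traditional optimization-based recovery that the data-driven approach departs from fits in a few lines. Below is a minimal sketch of l1 recovery by iterative soft-thresholding (ISTA); the problem sizes and regularization weight are made up for illustration.

```python
import numpy as np

# Traditional compressive sensing: recover a sparse signal from few
# linear measurements y = A s via l1-regularized least squares,
# solved by iterative soft-thresholding (ISTA).
rng = np.random.default_rng(1)
n, m, k = 128, 40, 4                           # signal length, measurements, sparsity
s = np.zeros(n)
s[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ s

lam = 0.02                                     # l1 weight (illustrative)
L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of the data term
x = np.zeros(n)
for _ in range(500):
    g = x - A.T @ (A @ x - y) / L              # gradient step on ||Ax - y||^2 / 2
    x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
print("relative error:", np.linalg.norm(x - s) / np.linalg.norm(s))
```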

Deep learning of quantum entanglement from incomplete measurements

Dominik Koutný, Laia Ginés, Magdalena Moczała-Dusanowska, Sven Höfling, Christian Schneider, Ana Predojević, Miroslav Ježek

May 04 2022 quant-ph arXiv:2205.01462v2

Scites: 5

The quantification of the entanglement present in a physical system is of paramount importance for fundamental research and many cutting-edge applications. Currently, achieving this goal requires very demanding experimental procedures such as full state tomography. Here, we demonstrate that by employing neural networks we can quantify the degree of entanglement without needing to know the full description of the quantum state. Our method allows for direct quantification of the quantum correlations using an incomplete set of local measurements. Despite using under-sampled measurements, we achieve an estimation error up to an order of magnitude lower than state-of-the-art quantum tomography. Furthermore, we achieve this result with networks trained exclusively on simulated data. Finally, we derive a method based on a convolutional network that can accept data from various measurement scenarios and perform, to some extent, independently of the measurement device.
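
The pipeline (simulate states, compute an entanglement measure as the label, feed the network only an incomplete set of correlators) can be reproduced in miniature. The sketch below is our toy, not the authors' architecture: two-qubit negativity regressed from just three correlators with scikit-learn. With so few measurements the label is not exactly determined, so the network learns the best estimate the incomplete data allow.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Train a network to regress an entanglement measure (two-qubit negativity)
# from an *incomplete* set of local measurements, using simulated data only.
rng = np.random.default_rng(0)
X_ = np.array([[0, 1], [1, 0]], dtype=complex)
Y_ = np.array([[0, -1j], [1j, 0]])
Z_ = np.diag([1.0 + 0j, -1.0])

def sample(n_states):
    feats, labels = [], []
    for _ in range(n_states):
        v = rng.standard_normal(4) + 1j * rng.standard_normal(4)
        rho = np.outer(v, v.conj()) / (v.conj() @ v)       # random pure state
        # incomplete data: only the XX, YY, ZZ correlators (3 of 15 numbers)
        feats.append([np.real(np.trace(rho @ np.kron(P, P))) for P in (X_, Y_, Z_)])
        pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)  # partial transpose
        labels.append(-np.sum(np.minimum(np.linalg.eigvalsh(pt), 0)))     # negativity
    return np.array(feats), np.array(labels)

Xtr, ytr = sample(4000)
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
net.fit(Xtr, ytr)
Xte, yte = sample(500)
print("mean absolute error:", np.abs(net.predict(Xte) - yte).mean())
```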

LAWS: Look Around and Warm-Start Natural Gradient Descent for Quantum Neural Networks

Zeyi Tao, Jindi Wu, Qi Xia, Qun Li

May 06 2022 quant-ph cs.LG arXiv:2205.02666v1

Scites: 3

Variational quantum algorithms (VQAs) have recently received significant attention from the research community due to their promising performance on Noisy Intermediate-Scale Quantum (NISQ) computers. However, VQAs run on parameterized quantum circuits (PQC) with randomly initialized parameters are characterized by barren plateaus (BP), where the gradient vanishes exponentially in the number of qubits. In this paper, we first review quantum natural gradient (QNG), one of the most popular algorithms used in VQAs, from the classical first-order optimization point of view. Then, we propose a Look Around Warm-Start QNG (LAWS) algorithm to mitigate the widespread BP issues. LAWS is a combinatorial optimization strategy that takes advantage of model parameter initialization and the fast convergence of QNG. LAWS repeatedly reinitializes the parameter search space for the next parameter update. The reinitialized parameter search space is carefully chosen by sampling the gradient close to the current optimum. Moreover, we present a unified framework (WS-SGD) for integrating parameter initialization techniques into the optimizer. We provide convergence proofs for the proposed framework for both convex and non-convex objective functions based on the Polyak-Lojasiewicz (PL) condition. Our experimental results show that the proposed algorithm can mitigate BP and has better generalization ability in quantum classification problems.
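
Reading between the lines of the abstract, the control flow is roughly "descend, look around the current optimum, restart from a well-chosen neighbor". The toy loop below is our own plausible reading on a classical loss; the QNG metric, the sampling rule, and the acceptance criterion are all simplified guesses, not the paper's algorithm.

```python
import numpy as np

# Heavily simplified, classical reading of "look around and warm-start":
# between runs of (natural) gradient descent, sample candidates near the
# current optimum and restart from the one whose gradient is largest,
# i.e. the one least likely to sit on a flat region.
rng = np.random.default_rng(0)
loss = lambda t: np.sum(np.sin(t) ** 2) + 0.1 * np.sum(t ** 2)
grad = lambda t: np.sin(2 * t) + 0.2 * t

theta = rng.uniform(-3, 3, size=8)
for epoch in range(5):
    for _ in range(100):                        # inner descent (QNG metric omitted)
        theta -= 0.1 * grad(theta)
    cands = theta + 0.3 * rng.standard_normal((16, theta.size))
    best = cands[int(np.argmax([np.linalg.norm(grad(c)) for c in cands]))]
    if loss(best) < loss(theta) + 0.05:         # warm-start if not much worse
        theta = best
print("final loss:", loss(theta))
```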

BEINIT: Avoiding Barren Plateaus in Variational Quantum Algorithms

Ankit Kulshrestha, Ilya Safro

May 02 2022 quant-ph cs.LG arXiv:2204.13751v1

Scites: 3

Barren plateaus are a notorious problem in the optimization of variational quantum algorithms and pose a critical obstacle in the quest for more efficient quantum machine learning algorithms. Many potential causes of barren plateaus have been identified, but few solutions have been proposed to avoid them in practice. Existing solutions mainly focus on the initialization of unitary gate parameters without taking into account the changes induced by input data. In this paper, we propose an alternative strategy that initializes the parameters of a unitary gate by drawing from a beta distribution, whose hyperparameters are estimated from the data. To further prevent barren plateaus during training, we add a novel perturbation at every gradient descent step. Taking these ideas together, we empirically show that our proposed framework significantly reduces the possibility of a complex quantum neural network getting stuck in a barren plateau.
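
Both ingredients are simple enough to sketch. The snippet below is our illustration only: the moment-matching recipe, the perturbation scale, and the toy gradient are stand-ins chosen to show the shape of the method.

```python
import numpy as np

# (1) draw initial gate angles from a Beta(a, b) whose hyperparameters are
#     estimated from rescaled input data by the method of moments;
# (2) add a small perturbation to every gradient-descent update.
rng = np.random.default_rng(0)
data = rng.uniform(0.2, 0.9, size=1000)            # stand-in for features in [0, 1]
mu, var = data.mean(), data.var()
common = mu * (1 - mu) / var - 1                   # method-of-moments Beta fit
a, b = mu * common, (1 - mu) * common

n_params = 12
theta = 2 * np.pi * rng.beta(a, b, size=n_params)  # data-informed initialization

grad = lambda t: np.sin(t) * np.cos(t)             # toy stand-in for a PQC gradient
for step in range(200):
    noise = 0.01 * rng.standard_normal(n_params)   # perturbed update
    theta -= 0.1 * grad(theta) + noise
print("final parameters:", np.round(theta, 2))
```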

On Circuit Depth Scaling For Quantum Approximate Optimization

V. Akshay, H. Philathong, E. Campos, D. Rabinovich, I. Zacharov, Xiao-Ming Zhang, J. Biamonte

May 05 2022 quant-ph cond-mat.dis-nn cs.LG arXiv:2205.01698v1

Scites: 1

Variational quantum algorithms are the centerpiece of modern quantum programming. These algorithms involve training parameterized quantum circuits using a classical co-processor, an approach adapted partly from classical machine learning. An important subclass of these algorithms, designed for combinatorial optimization on current quantum hardware, is the quantum approximate optimization algorithm (QAOA). It is known that problem density (the ratio of problem constraints to variables) induces under-parametrization in fixed-depth QAOA. Density-dependent performance has been reported in the literature, yet the circuit depth required to achieve fixed performance (henceforth called critical depth) remained unknown. Here, we propose a predictive model based on a logistic saturation conjecture for critical depth scaling with respect to density. Focusing on random instances of MAX-2-SAT, we test our predictive model against simulated data with up to 15 qubits. We report that the average critical depth required to attain a success probability of 0.7 saturates at a value of 10 for densities beyond 4. We observe that the predictive model describes the simulated data within a 3σ confidence interval. Furthermore, based on the model, a linear trend for the critical depth with respect to problem size is recovered for the range of 5 to 15 qubits.
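
The conjectured functional form is easy to state and fit. The sketch below fits a logistic saturation curve to invented (density, critical depth) pairs, purely to show the shape of the model; the numbers are not the paper's data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Logistic saturation conjecture: critical depth p*(d) follows a logistic
# curve in the problem density d and saturates at high density.
def logistic(d, p_max, k, d0):
    return p_max / (1 + np.exp(-k * (d - d0)))

density = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 5.0, 6.0])
p_crit = np.array([1.2, 2.0, 3.5, 5.8, 7.9, 9.0, 9.6, 9.9, 10.0, 10.1])

popt, pcov = curve_fit(logistic, density, p_crit, p0=[10.0, 2.0, 2.0])
print("saturation depth ~ %.1f, midpoint density ~ %.1f" % (popt[0], popt[2]))
```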

Entanglement Forging with generative neural network models

Patrick Huembeli, Giuseppe Carleo, Antonio Mezzacapo

May 03 2022 quant-ph arXiv:2205.00933v1

Scites: 1

The optimal use of quantum and classical computational techniques together is important to address problems that cannot be easily solved by quantum computations alone. This is the case of the ground state problem for quantum many-body systems. We show here that probabilistic generative models can work in conjunction with quantum algorithms to design hybrid quantum-classical variational ansätze that forge entanglement to lower quantum resource overhead. The variational ansätze comprise parametrized quantum circuits on two separate quantum registers, and a classical generative neural network that can entangle them by learning a Schmidt decomposition of the whole system. The method presented is efficient in terms of the number of measurements required to achieve fixed precision on expected values of observables. To demonstrate its effectiveness, we perform numerical experiments on the transverse field Ising model in one and two dimensions, and fermionic systems such as the t-V Hamiltonian of spinless fermions on a lattice.
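
The Schmidt decomposition at the heart of forging is worth seeing in code. The NumPy sketch below (ours) verifies the identity that lets expectation values of a full system be assembled from half-system quantities weighted by Schmidt coefficients, which is exactly the kind of data the generative model learns to supply.

```python
import numpy as np

# |psi> = sum_k s_k |u_k>|v_k> across registers A and B, so expectation
# values of O_A (x) O_B reduce to half-system matrix elements weighted by
# the Schmidt coefficients s_k.
rng = np.random.default_rng(0)
dA = dB = 4                                            # two 2-qubit registers
psi = rng.standard_normal((dA, dB)) + 1j * rng.standard_normal((dA, dB))
psi /= np.linalg.norm(psi)

U, s, Vh = np.linalg.svd(psi)                          # Schmidt decomposition

OA = rng.standard_normal((dA, dA)); OA = OA + OA.T     # random Hermitian observables
OB = rng.standard_normal((dB, dB)); OB = OB + OB.T

MA = U.conj().T @ OA @ U                               # <u_k| O_A |u_l>
MB = Vh.conj() @ OB @ Vh.T                             # <v_k| O_B |v_l>
forged = np.einsum("k,l,kl,kl->", s, s, MA, MB).real

direct = (psi.conj().ravel() @ np.kron(OA, OB) @ psi.ravel()).real
assert np.isclose(forged, direct)
print("forged:", forged, " direct:", direct)
```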

Quantum Approximate Optimization Algorithm with Sparsified Phase Operator

Xiaoyuan Liu, Ruslan Shaydulin, Ilya Safro

May 03 2022 quant-ph cs.DM arXiv:2205.00118v1

Scites: 1

The Quantum Approximate Optimization Algorithm (QAOA) is a promising candidate algorithm for demonstrating quantum advantage in optimization using near-term quantum computers. However, QAOA has high requirements on gate fidelity due to the need to encode the objective function in the phase-separating operator, requiring a large number of gates that potentially do not match the hardware connectivity. Using the MaxCut problem as the target, we demonstrate numerically that an easier-to-implement alternative phase operator can be used in lieu of the phase operator encoding the objective function, as long as the ground state is the same. We observe that if the ground-state energy is not preserved, the approximation ratio obtained by QAOA with such a phase-separating operator is likely to decrease. Moreover, we show that a better alignment of the low-energy subspace of the alternative operator leads to better performance. Leveraging these observations, we propose a sparsification strategy that reduces the resource requirements of QAOA. We also compare our sparsification strategy with other classical graph sparsification methods, and demonstrate the efficacy of our approach.
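
A minimal statevector experiment conveys the substitution: run p = 1 QAOA with a phase operator built from only a subset of the edges, but score the result against the full MaxCut objective. The graph, the angles, and the "sparsifier" below are all ad hoc choices of ours, not the paper's strategy.

```python
import numpy as np
from itertools import combinations

# Minimal p = 1 statevector QAOA: drive the phase layer with a *sparsified*
# cost Hamiltonian while still scoring against the full MaxCut objective.
n = 6
edges = list(combinations(range(n), 2))[:9]            # a small arbitrary graph
sparse_edges = edges[::2]                              # keep every other edge

bits = (np.arange(2 ** n)[:, None] >> np.arange(n)) & 1
cut = lambda E: sum((bits[:, i] != bits[:, j]) for i, j in E).astype(float)

def qaoa_expectation(phase_cost, gamma, beta):
    state = np.full(2 ** n, 2 ** (-n / 2), dtype=complex)    # |+>^n
    state = state * np.exp(-1j * gamma * phase_cost)         # phase separator
    for q in range(n):                                       # mixer exp(-i beta X_q)
        s = state.reshape(2 ** (n - q - 1), 2, 2 ** q)
        s0, s1 = s[:, 0, :].copy(), s[:, 1, :].copy()
        s[:, 0, :] = np.cos(beta) * s0 - 1j * np.sin(beta) * s1
        s[:, 1, :] = np.cos(beta) * s1 - 1j * np.sin(beta) * s0
        state = s.reshape(-1)
    return np.abs(state) ** 2 @ cut(edges)                   # score the FULL cut

gamma, beta = 0.4, 0.3                                       # untuned angles
print("full phase op  :", qaoa_expectation(cut(edges), gamma, beta))
print("sparse phase op:", qaoa_expectation(cut(sparse_edges), gamma, beta))
```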

Tunable Quantum Neural Networks in the QPAC-Learning Framework

Viet Pham Ngoc, David Tuckey, Herbert Wiklicky

May 04 2022 quant-ph arXiv:2205.01514v1

Scites: 0

In this paper, we investigate the performance of tunable quantum neural networks in the Quantum Probably Approximately Correct (QPAC) learning framework. Tunable neural networks are quantum circuits made of multi-controlled X gates. By tuning the set of controls, these circuits are able to approximate any Boolean function. This architecture is particularly well suited to the QPAC-learning framework, as it can handle the superposition produced by the oracle. In order to tune the network so that it can approximate a target concept, we have devised and implemented an algorithm based on amplitude amplification. The numerical results show that this approach can efficiently learn concepts from a simple class.
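
A classical caricature helps fix ideas: identify each multi-controlled X gate with its control pattern and let the network output the parity of the gates an input triggers. The sketch below is ours; the paper's amplitude-amplification tuning is replaced by a deterministic error-driven toggle, which learns a small concept exactly.

```python
import numpy as np
from itertools import product

# A tunable network as a set of control patterns: a gate with control set g
# fires on input x when all controls are 1; the output is the parity of
# fired gates. Tuning toggles gates until all inputs are classified.
n = 4
target = lambda x: (x[0] & x[1]) ^ x[3]                 # concept to learn
inputs = [np.array(p) for p in product([0, 1], repeat=n)]

gates = set()                                           # active control patterns
net = lambda x: sum(all(x[i] for i in g) for g in gates) % 2

for x in sorted(inputs, key=lambda v: v.sum()):         # increasing Hamming weight
    if net(x) != target(x):                             # error found at x:
        gates ^= {tuple(np.flatnonzero(x))}             # toggle that exact gate

assert all(net(x) == target(x) for x in inputs)         # exact fit achieved
print("tuned gates (control sets):", gates)
```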

On the uncertainty principle of neural networks

Jun-Jie Zhang, Dong-Xiao Zhang, Jian-Nan Chen, Long-Gang Pang

May 04 2022 cs.LG physics.comp-ph quant-ph arXiv:2205.01493v1

Scites: 0

Despite their successes in many fields, neural networks are found to be vulnerable, and it is difficult for them to be both accurate and robust (robust meaning that the prediction of the trained network stays unchanged for inputs with non-random perturbations introduced by adversarial attacks). Various empirical and analytic studies have suggested that there is more or less a trade-off between the accuracy and robustness of neural networks. If the trade-off is inherent, applications based on neural networks are vulnerable to untrustworthy predictions. It is then essential to ask whether the trade-off is an inherent property or not. Here, we show that the accuracy-robustness trade-off is an intrinsic property whose underlying mechanism is deeply related to the uncertainty principle in quantum mechanics. We find that for a neural network to be both accurate and robust, it needs to resolve the features of the two conjugate parts x (the inputs) and Δ (the derivatives of the normalized loss function J with respect to x). Analogous to the position-momentum conjugation in quantum mechanics, we show that the inputs and their conjugates cannot be resolved by a neural network simultaneously.
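
The conjugate pair in question is concrete and cheap to compute. The sketch below uses a toy network of our own (with a plain squared-error loss standing in for the paper's normalized loss) to compute Δ = ∂J/∂x by hand and use it as an adversarial direction.

```python
import numpy as np

# The conjugate pair: the input x and Delta = dJ/dx, the loss gradient
# with respect to the input, which is what an adversarial attack exploits.
rng = np.random.default_rng(0)
W1, W2 = rng.standard_normal((16, 8)), rng.standard_normal(16)

def forward(x):
    return W2 @ np.tanh(W1 @ x)                  # scalar output

def input_gradient(x, y):
    # J = 0.5 * (forward(x) - y)^2; backpropagate to the input by hand
    h = np.tanh(W1 @ x)
    dh = ((W2 @ h) - y) * W2 * (1 - h ** 2)      # through the tanh layer
    return W1.T @ dh                             # Delta = dJ/dx

x, y = rng.standard_normal(8), 1.0
delta = input_gradient(x, y)
x_adv = x + 0.1 * np.sign(delta)                 # FGSM-style step along Delta
print("loss before:", 0.5 * (forward(x) - y) ** 2)
print("loss after :", 0.5 * (forward(x_adv) - y) ** 2)
```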

Augmenting QAOA Ansatz with Multiparameter Problem-Independent Layer

Michelle Chalupnik, Hans Melo, Yuri Alexeev, Alexey Galda

May 04 2022 quant-ph arXiv:2205.01192v1

Scites: 0

The quantum approximate optimization algorithm (QAOA) promises to solve classically intractable computational problems in the area of combinatorial optimization. A growing amount of evidence suggests, however, that the originally proposed form of the QAOA ansatz is not optimal. To address this problem, we propose an alternative ansatz, which we call QAOA+, that augments the traditional p = 1 QAOA ansatz with an additional multiparameter problem-independent layer. The QAOA+ ansatz allows obtaining higher approximation ratios than p = 1 QAOA while keeping the circuit depth below that of p = 2 QAOA, as benchmarked on the MaxCut problem for random regular graphs. We additionally show that the proposed QAOA+ ansatz, while using a larger number of trainable classical parameters than the standard QAOA, in most cases outperforms alternative multiangle QAOA ansätze.
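
As a structural illustration only: the PennyLane sketch below writes down a p = 1 QAOA circuit followed by one extra problem-independent layer in which every gate has its own trainable angle. The gate content of the extra layer (a ring of ZZ rotations plus single-qubit Y rotations) is our guess at a representative choice, not necessarily the circuit from the paper.

```python
import pennylane as qml
from pennylane import numpy as np

# QAOA+-style circuit for MaxCut: one standard (gamma, beta) QAOA layer,
# then an extra problem-independent layer with per-gate trainable angles.
n = 4
edges = [(0, 1), (1, 2), (2, 3), (0, 3), (0, 2)]
dev = qml.device("default.qubit", wires=n)

@qml.qnode(dev)
def qaoa_plus(gamma, beta, thetas, phis):
    for w in range(n):
        qml.Hadamard(wires=w)
    for i, j in edges:                                # p = 1 cost layer
        qml.IsingZZ(gamma, wires=[i, j])
    for w in range(n):                                # p = 1 mixer layer
        qml.RX(2 * beta, wires=w)
    for w in range(n):                                # extra problem-independent layer
        qml.IsingZZ(thetas[w], wires=[w, (w + 1) % n])
    for w in range(n):
        qml.RY(phis[w], wires=w)
    return [qml.expval(qml.PauliZ(i) @ qml.PauliZ(j)) for i, j in edges]

def expected_cut(params):
    zz = qaoa_plus(params[0], params[1], params[2:2 + n], params[2 + n:])
    return sum((1 - z) / 2 for z in zz)

params = np.array([0.4, 0.3] + [0.1] * (2 * n), requires_grad=True)
print("expected cut size:", expected_cut(params))
```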

Quantum Robustness Verification: A Hybrid Quantum-Classical Neural Network Certification Algorithm

Nicola Franco, Tom Wollschlaeger, Nicholas Gao, Jeanette Miriam Lorenz, Stephan Guennemann

May 03 2022 quant-ph arXiv:2205.00900v1

Scites: 0

In recent years, quantum computers and algorithms have made significant progress, indicating the prospective importance of quantum computing (QC). Combinatorial optimization in particular has gained a lot of attention as an application field for near-term quantum computers, both via gate-based QC using the Quantum Approximate Optimization Algorithm and via quantum annealing using the Ising model. However, demonstrating an advantage over classical methods in real-world applications remains an active area of research. In this work, we investigate the robustness verification of ReLU networks, which involves solving many-variable mixed-integer programs (MIPs), as a practical application. Classically, complete verification techniques struggle with large networks as the combinatorial space grows exponentially, implying that realistic networks are difficult to verify with classical methods. To alleviate this issue, we propose to use QC for neural network verification and introduce a hybrid quantum procedure to compute provable certificates. By applying Benders decomposition, we split the MIP into a quadratic unconstrained binary optimization (QUBO) problem and a linear program, which are solved by quantum and classical computers, respectively. We further improve existing hybrid methods based on Benders decomposition by reducing the overall number of iterations and placing a limit on the maximum number of qubits required. We show that, in a simulated environment, our certificate is sound, and we provide bounds on the minimum number of qubits necessary to approximate the problem. Finally, we evaluate our method both in simulation and on quantum hardware.
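
The decomposition is easiest to see on a toy instance. Below is a schematic Benders loop, entirely our own construction: the binary master problem, which the paper would hand to a quantum device as a QUBO, is brute-forced here; the linear subproblem is solved with SciPy; optimality cuts accumulate until the bounds meet.

```python
import numpy as np
from itertools import product
from scipy.optimize import linprog

# Toy MIP:  min c.y + d.u  s.t.  B u >= b - A y,  u >= 0,  y binary.
# Benders splits it into a binary master problem (stand-in for the QUBO)
# and an LP subproblem whose dual solutions become cuts for the master.
c, d = np.array([1.0, 2.0]), np.array([1.0, 1.0])
A, B, b = np.eye(2), np.eye(2), np.array([2.0, 2.0])

duals = []                                            # accumulated dual points / cuts
for _ in range(10):
    def master_val(y):                                # c.y + max over known cuts
        eta = max(((b - A @ y) @ p for p in duals), default=0.0)
        return c @ y + eta
    y = min((np.array(v, float) for v in product([0, 1], repeat=2)), key=master_val)
    r = b - A @ y                                     # subproblem dual:
    res = linprog(-r, A_ub=B.T, b_ub=d, bounds=(0, None))  # max r.p, B^T p <= d, p >= 0
    sub_val, p = -res.fun, res.x
    if c @ y + sub_val <= master_val(y) + 1e-9:       # upper bound meets lower bound
        break
    duals.append(p)                                   # add a new optimality cut
print("optimal y:", y, " objective:", c @ y + sub_val)
```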

Neural-Network Quantum States: A Systematic Review

David R. Vivas, Javier Madroñero, Victor Bucheli, Luis O. Gómez, John H. Reina

Apr 28 2022 quant-ph arXiv:2204.12966v1

Scites: 0

The so-called contemporary AI revolution has reached every corner of the social, human and natural sciences, physics included. In the context of quantum many-body physics, its intersection with machine learning has formed a high-impact interdisciplinary field of study, with recent seminal contributions giving rise to a large number of publications. One particular research line in this field is the so-called Neural-Network Quantum States, a powerful variational computational methodology for the solution of quantum many-body systems that has proven competitive with well-established, traditional formalisms. Here, a systematic review of the literature on Neural-Network Quantum States is presented.
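
For readers new to the topic, the central object is simply a network that maps a spin configuration to a wavefunction amplitude. Below is a minimal sketch of the classic restricted-Boltzmann-machine ansatz with random, untrained parameters; in practice the parameters are trained by variational Monte Carlo on the energy.

```python
import numpy as np

# RBM neural-network quantum state:
#   psi(s) = exp(a.s) * prod_j 2 cosh(b_j + sum_i W_ij s_i)
# shown here with random parameters purely to exhibit the parameterization.
rng = np.random.default_rng(0)
n_spins, n_hidden = 6, 8
a = 0.1 * rng.standard_normal(n_spins)
b = 0.1 * rng.standard_normal(n_hidden)
W = 0.1 * rng.standard_normal((n_spins, n_hidden))

def log_psi(s):                            # s is a vector of +/-1 spins
    return a @ s + np.sum(np.log(2 * np.cosh(b + s @ W)))

s = rng.choice([-1.0, 1.0], size=n_spins)
print("configuration:", s, " amplitude:", np.exp(log_psi(s)))
```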

Categories: Week-in-QML
