- Powering the next generation of AI
- Automakers like BMW are becoming quantum computing’s early adopters
- Listen: What quantum computing may mean for the finance industry
- Unlocking the Mysteries of Quantum Materials with Machine Learning | Eun-Ah Kim
- Physical Neural Networks: Harnessing Complex Dynamics to Perform Machine Learning
- [DSC Europe 21] How Quantum ML is changing Data & AI: an Interview – PETER MORGAN
- Using hybrid quantum-classical convolutional neural networks to id
- A Review of Supervised Variational Quantum Classifiers
Variational quantum machine learning is an extensively studied application of near-term quantum computers. The success of variational quantum learning models crucially depends on finding a suitable parametrization of the model that encodes an inductive bias relevant to the learning task. However, precious little is known about guiding principles for the construction of suitable parametrizations. In this work, we holistically explore when and how symmetries of the learning problem can be exploited to construct quantum learning models with outcomes invariant under the symmetry of the learning task. Building on tools from representation theory, we show how a standard gateset can be transformed into an equivariant gateset that respects the symmetries of the problem at hand through a process of gate symmetrization. We benchmark the proposed methods on two toy problems that feature a non-trivial symmetry and observe a substantial increase in generalization performance. As our tools can also be applied in a straightforward way to other variational problems with symmetric structure, we show how equivariant gatesets can be used in variational quantum eigensolvers.
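The gate-symmetrization idea described in this abstract can be illustrated classically: averaging (twirling) a Hermitian gate generator over a unitary representation of the symmetry group projects it onto the commutant, so the exponentiated gate commutes with every symmetry operation. A minimal numpy sketch, assuming the two-qubit swap group as the symmetry (the paper's groups and gatesets may differ):

```python
import numpy as np

# Representation of the symmetry group S_2 on two qubits:
# the identity and the SWAP unitary.
I4 = np.eye(4)
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=complex)
group = [I4, SWAP]

def twirl(H, group):
    """Project a Hermitian generator onto the commutant of the
    group representation by averaging over conjugations."""
    return sum(U @ H @ U.conj().T for U in group) / len(group)

# An arbitrary (non-symmetric) Hermitian generator.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
H = (A + A.conj().T) / 2

H_sym = twirl(H, group)

# The symmetrized generator commutes with every group element, so
# exp(-i * theta * H_sym) is an equivariant gate for any theta.
for U in group:
    assert np.allclose(H_sym @ U, U @ H_sym)
```

Because the twirl is a projection, symmetrizing an already-symmetric generator leaves it unchanged.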
Exploring quantum applications of near-term quantum devices is a rapidly growing field of quantum information science with both theoretical and practical interests. A leading paradigm to establish such near-term quantum applications is variational quantum algorithms (VQAs). These algorithms use a classical optimizer to train a parameterized quantum circuit to accomplish certain tasks, where the circuits are usually randomly initialized. In this work, we prove that for a broad class of such random circuits, the variation range of the cost function via adjusting any local quantum gate within the circuit vanishes exponentially in the number of qubits with a high probability. This result can unify the restrictions on gradient-based and gradient-free optimizations in a natural manner and reveal extra harsh constraints on the training landscapes of VQAs. Hence a fundamental limitation on the trainability of VQAs is unraveled, indicating the essence of the optimization hardness in the Hilbert space with exponential dimension. We further showcase the validity of our results with numerical simulations of representative VQAs. We believe that these results would deepen our understanding of the scalability of VQAs and shed light on the search for near-term quantum applications with advantages.
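The exponentially vanishing variation of the cost function can be demonstrated numerically. As a simplification, Haar-random states stand in for the outputs of deep random circuits; for a traceless local observable, the variance of its expectation over Haar-random states scales as 1/(2^n + 1):

```python
import numpy as np

rng = np.random.default_rng(7)

def haar_state(dim):
    """A Haar-random pure state: a normalized complex Gaussian vector."""
    v = rng.standard_normal(dim) + 1j * rng.standard_normal(dim)
    return v / np.linalg.norm(v)

def z0_expectation(psi, n):
    """Expectation of Z on the first qubit: |amplitude|^2 weighted by
    the sign of the leading bit of the basis index."""
    probs = np.abs(psi)**2
    signs = np.where(np.arange(2**n) < 2**(n - 1), 1.0, -1.0)
    return float(np.dot(signs, probs))

# The empirical variance of a local cost over random states shrinks
# exponentially with the number of qubits, mirroring the flat
# training landscapes described in the abstract.
variances = []
for n in [2, 4, 6, 8]:
    vals = [z0_expectation(haar_state(2**n), n) for _ in range(2000)]
    variances.append(np.var(vals))

# Theory predicts Var = 1/(2^n + 1), so each step should shrink it ~4x.
assert all(variances[i] > variances[i + 1] for i in range(len(variances) - 1))
```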
May 13 2022 quant-ph arXiv:2205.05786v1
One of the most important properties of classical neural networks is how surprisingly trainable they are, even though their training algorithms typically rely on optimizing complicated, nonconvex loss functions. Previous results have shown that, unlike classical neural networks, variational quantum models are often not trainable. The most studied phenomenon is the onset of barren plateaus in the training landscape of these quantum models, typically when the models are very deep. This focus on barren plateaus has made the phenomenon almost synonymous with the trainability of quantum models. Here, we show that barren plateaus are only part of the story. We prove that a wide class of variational quantum models, which are shallow and exhibit no barren plateaus, have only a superpolynomially small fraction of local minima within any constant energy of the global minimum, rendering these models untrainable if no good initial guess of the optimal parameters is known. We also study the trainability of variational quantum algorithms within a statistical query framework, and show that noisy optimization of a wide variety of quantum models is impossible with a sub-exponential number of queries. Finally, we numerically confirm our results on a variety of problem instances. Though we exclude a wide variety of quantum algorithms here, we give reason for optimism for certain classes of variational algorithms and discuss potential ways forward in showing the practical utility of such algorithms.
May 09 2022 quant-ph arXiv:2205.02891v1
The inherent noise and complexity of quantum communication networks leads to challenges in designing quantum network protocols using classical methods. To address this issue, we develop a variational quantum optimization framework that simulates quantum networks on quantum hardware and optimizes the network using differential programming techniques. We use our hybrid framework to optimize nonlocality in noisy quantum networks. On the noisy IBM quantum computers, we demonstrate our framework’s ability to maximize quantum nonlocality. On a classical simulator with a static noise model, we investigate the noise robustness of quantum nonlocality with respect to unital and nonunital channels. In both cases, we find that our optimization methods can reproduce known results, while uncovering interesting phenomena. When unital noise is present we find numerical evidence suggesting that maximally entangled state preparations yield maximal nonlocality. When nonunital noise is present we find that nonmaximally entangled states can yield maximal nonlocality. Thus, we show that variational quantum optimization is a practical design tool for quantum networks in the near-term. In the long-term, our variational quantum optimization techniques show promise of scaling beyond classical approaches and can be deployed on quantum network hardware to optimize quantum communication protocols against their inherent noise.
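The core loop of optimizing nonlocality can be sketched classically in the simplest setting: CHSH with noiseless measurements on the Bell state, for which the correlator reduces to E(a, b) = cos(a − b). Finite-difference gradient ascent stands in for the paper's differentiable-programming optimizer; the angles and learning rate below are illustrative assumptions:

```python
import numpy as np

def chsh(theta):
    """CHSH value for measurement angles (a0, a1, b0, b1) on the Bell
    state |Phi+>, where the correlator is E(a, b) = cos(a - b)."""
    a0, a1, b0, b1 = theta
    return (np.cos(a0 - b0) + np.cos(a0 - b1)
            + np.cos(a1 - b0) - np.cos(a1 - b1))

# Finite-difference gradient ascent on the measurement angles.
theta = np.array([0.1, 1.0, 0.6, -0.2])
eps, lr = 1e-5, 0.2
for _ in range(500):
    grad = np.zeros(4)
    for i in range(4):
        d = np.zeros(4); d[i] = eps
        grad[i] = (chsh(theta + d) - chsh(theta - d)) / (2 * eps)
    theta += lr * grad

# Converges toward the Tsirelson bound 2*sqrt(2) ~ 2.828,
# beating the classical limit of 2.
assert chsh(theta) > 2.8
```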
Variational quantum algorithms are the leading candidate for near-term advantage on noisy quantum hardware. When training a parametrized quantum circuit to solve a specific task, the choice of ansatz is one of the most important factors that determines the trainability and performance of the algorithm. Problem-tailored ansatzes have become the standard for tasks in optimization or quantum chemistry, and yield more efficient algorithms with better performance than unstructured approaches. In quantum machine learning (QML), however, the literature on ansatzes that are motivated by the training data structure is scarce. Considering that it is widely known that unstructured ansatzes can become untrainable with increasing system size and circuit depth, it is of key importance to also study problem-tailored circuit architectures in a QML context. In this work, we introduce an ansatz for learning tasks on weighted graphs that respects an important graph symmetry, namely equivariance under node permutations. We evaluate the performance of this ansatz on a complex learning task on weighted graphs, where an ML model is used to implement a heuristic for a combinatorial optimization problem. We analytically study the expressivity of our ansatz at depth one, and numerically compare the performance of our model on instances with up to 20 qubits to ansatzes where the equivariance property is gradually broken. We show that our ansatz outperforms all others even in the small-instance regime. Our results strengthen the notion that symmetry-preserving ansatzes are a key to success in QML and should be an active area of research in order to enable near-term advantages in this field.
Variational quantum algorithm for unconstrained black box binary optimization: Application to feature selection
May 09 2022 quant-ph arXiv:2205.03045v1
We introduce a variational quantum algorithm to solve unconstrained black box binary optimization problems, i.e., problems in which the objective function is given as a black box. This is in contrast to the typical setting of quantum algorithms for optimization, where a classical objective function is provided as a given Quadratic Unconstrained Binary Optimization problem and mapped to a sum of Pauli operators. Furthermore, we provide theoretical justification for our method based on convergence guarantees of quantum imaginary time evolution. To investigate the performance of our algorithm and its potential advantages, we tackle a challenging real-world optimization problem: feature selection. This refers to the problem of selecting a subset of relevant features for constructing a predictive model, such as one for fraud detection. Optimal feature selection, when formulated in terms of a generic loss function, offers little structure on which to build classical heuristics, thus resulting primarily in 'greedy methods'. This leaves room for (near-term) quantum algorithms to be competitive with classical state-of-the-art approaches. We apply our quantum-optimization-based feature selection algorithm, termed VarQFS, to build a predictive model for a credit risk data set with 20 and 59 input features (qubits), training the model using quantum hardware and tensor-network-based numerical simulations, respectively. We show that the quantum method produces competitive, and in certain aspects even better, performance compared to traditional feature selection techniques used in today's industry.
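VarQFS itself trains a parameterized quantum circuit; as a rough classical stand-in for "sample bitstrings from a trainable distribution, query the black box, update the distribution", the cross-entropy method over feature masks conveys the same loop. The synthetic data, loss, and hyperparameters below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic data: only features 0 and 1 (of 8) carry signal.
X = rng.standard_normal((200, 8))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.1 * rng.standard_normal(200)

def black_box_loss(mask):
    """Fit least squares on the selected features and penalize subset
    size. The optimizer sees only this scalar, not its structure."""
    if mask.sum() == 0:
        return np.var(y)
    Xs = X[:, mask.astype(bool)]
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    resid = y - Xs @ beta
    return np.mean(resid**2) + 0.05 * mask.sum()

# Cross-entropy method over bitstrings: sample masks from a product
# distribution, keep the elites, and move the distribution toward them.
p = np.full(8, 0.5)
for _ in range(30):
    masks = (rng.random((50, 8)) < p).astype(int)
    losses = np.array([black_box_loss(m) for m in masks])
    elites = masks[np.argsort(losses)[:10]]
    p = 0.7 * p + 0.3 * elites.mean(axis=0)

best = (p > 0.5).astype(int)
```

The distribution concentrates on the two informative features, since dropping either raises the residual far more than the per-feature penalty.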
An emerging direction of quantum computing is to establish meaningful quantum applications in various fields of artificial intelligence, including natural language processing (NLP). Although some efforts based on syntactic analysis have opened the door to research in Quantum NLP (QNLP), limitations such as heavy syntactic preprocessing and syntax-dependent network architecture make them impracticable on larger and real-world data sets. In this paper, we propose a new simple network architecture, called the quantum self-attention neural network (QSANN), which can make up for these limitations. Specifically, we introduce the self-attention mechanism into quantum neural networks and then utilize a Gaussian projected quantum self-attention serving as a sensible quantum version of self-attention. As a result, QSANN is effective and scalable on larger data sets and has the desirable property of being implementable on near-term quantum devices. In particular, our QSANN outperforms the best existing QNLP model based on syntactic analysis as well as a simple classical self-attention neural network in numerical experiments of text classification tasks on public data sets. We further show that our method exhibits robustness to low-level quantum noises.
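The quantum self-attention above is quantum-native, but the underlying idea of a Gaussian-kernel attention score can be sketched classically: replace the softmaxed dot product of standard self-attention with a normalized Gaussian kernel between queries and keys. The function name, shapes, and weights below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def gaussian_self_attention(X, Wq, Wk, Wv, sigma=1.0):
    """Self-attention where the query-key score is the Gaussian kernel
    exp(-||q_i - k_j||^2 / (2 sigma^2)) instead of a softmaxed dot
    product; each row of scores is normalized to sum to one."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d2 = ((Q[:, None, :] - K[None, :, :])**2).sum(-1)
    A = np.exp(-d2 / (2 * sigma**2))
    A /= A.sum(axis=1, keepdims=True)
    return A @ V, A

# Toy sequence of 5 tokens with 4-dimensional embeddings.
X = rng.standard_normal((5, 4))
Wq, Wk, Wv = (rng.standard_normal((4, 4)) * 0.5 for _ in range(3))
out, attn = gaussian_self_attention(X, Wq, Wk, Wv)
```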
The intrinsic probabilistic nature of quantum mechanics invokes endeavors of designing quantum generative learning models (QGLMs) with computational advantages over classical ones. To date, two prototypical QGLMs are quantum circuit Born machines (QCBMs) and quantum generative adversarial networks (QGANs), which approximate the target distribution in explicit and implicit ways, respectively. Despite the empirical achievements, the fundamental theory of these models remains largely obscure. To narrow this knowledge gap, here we explore the learnability of QCBMs and QGANs from the perspective of generalization when their loss is specified to be the maximum mean discrepancy. Particularly, we first analyze the generalization ability of QCBMs and identify their superiorities when the quantum devices can directly access the target distribution and the quantum kernels are employed. Next, we prove how the generalization error bound of QGANs depends on the employed Ansatz, the number of qudits, and input states. This bound can be further employed to seek potential quantum advantages in Hamiltonian learning tasks. Numerical results of QGLMs in approximating quantum states, Gaussian distribution, and ground states of parameterized Hamiltonians accord with the theoretical analysis. Our work opens the avenue for quantitatively understanding the power of quantum generative learning models.
Quantum computing technologies are in the process of moving from academic research to real industrial applications, with the first hints of quantum advantage demonstrated in recent months. In these early practical uses of quantum computers it is relevant to develop algorithms that are useful for actual industrial processes. In this work we propose a quantum pipeline, comprising a quantum autoencoder followed by a quantum classifier, which are used to first compress and then label classical data coming from a separator, i.e., a machine used in one of Eni’s Oil Treatment Plants. This work represents one of the first attempts to integrate quantum computing procedures in a real-case scenario of an industrial pipeline, in particular using actual data coming from physical machines, rather than pedagogical data from benchmark datasets.
We introduce an unsupervised machine learning method based on Siamese Neural Networks (SNN) to detect phase boundaries. This method is applied to Monte Carlo simulations of Ising-type systems and Rydberg atom arrays. In both cases, the SNN reveals phase boundaries consistent with prior research. Combining the power of feed-forward neural networks with unsupervised learning and the ability to learn about multiple phases without prior knowledge of their existence provides a powerful method for exploring new and unknown phases of matter.
We introduce an approach for performing quantum state reconstruction on systems of n qubits using a machine-learning-based reconstruction system trained exclusively on m qubits, where m ≥ n. This approach removes the necessity of exactly matching the dimensionality of a system under consideration with the dimension of the model used for training. We demonstrate our technique by performing quantum state reconstruction on randomly sampled systems of one, two, and three qubits using machine-learning-based methods trained exclusively on systems containing at least one additional qubit. The reconstruction time required for machine-learning-based methods scales significantly more favorably than the training time; hence this technique can offer an overall savings of resources by leveraging a single neural network for dimension-variable state reconstruction, obviating the need to train dedicated machine-learning systems for each Hilbert space.
Implementation and Empirical Evaluation of a Quantum Machine Learning Pipeline for Local Classification
In the current era, quantum resources are extremely limited, which makes it difficult to use quantum machine learning (QML) models. For supervised tasks, a viable approach is the introduction of a quantum locality technique, which allows the models to focus only on the neighborhood of the considered element. A well-known locality technique is the k-nearest neighbors (k-NN) algorithm, of which several quantum variants have been proposed. Nevertheless, these have not yet been employed as a preliminary step for other QML models, whereas the classical counterpart has already proven successful. In this paper, we present (i) a Python implementation of a QML pipeline for local classification, and (ii) its extensive empirical evaluation. Specifically, the quantum pipeline, developed using Qiskit, consists of a quantum k-NN and a quantum binary classifier. The results show that the quantum pipeline is equivalent (in terms of accuracy) to its classical counterpart in the ideal case and that locality is validly applicable to the QML realm, but they also reveal the strong sensitivity of the chosen quantum k-NN to probability fluctuations and the better performance of classical baseline methods such as the random forest.
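The pipeline structure (a locality step followed by a classifier fitted only on the neighborhood) is easy to sketch in its classical form. The toy data and the nearest-centroid local classifier below are illustrative assumptions; the paper's quantum k-NN and quantum binary classifier are considerably more involved:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data: two noisy classes in the plane.
X = np.vstack([rng.normal(0.0, 1.0, (40, 2)),
               rng.normal(2.5, 1.0, (40, 2))])
y = np.array([0] * 40 + [1] * 40)

def local_pipeline_predict(x, X, y, k=7):
    """Locality step (k-NN) followed by a simple local classifier:
    a nearest-centroid rule fitted only on the k neighbours."""
    dists = np.linalg.norm(X - x, axis=1)
    nn = np.argsort(dists)[:k]
    Xl, yl = X[nn], y[nn]
    classes = np.unique(yl)
    if len(classes) == 1:            # the neighbourhood is pure
        return classes[0]
    cents = {c: Xl[yl == c].mean(axis=0) for c in classes}
    return min(cents, key=lambda c: np.linalg.norm(x - cents[c]))

preds = np.array([local_pipeline_predict(x, X, y) for x in X])
accuracy = np.mean(preds == y)
```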
In recent years, machine learning methods have been used to assist scientists in scientific research. Human scientific theories are built on a series of concepts, so how a machine learns concepts from experimental data is an important first step. We propose a hybrid method to extract interpretable physical concepts through unsupervised machine learning. The method consists of two stages: first, we find the Betti numbers of the experimental data; second, given the Betti numbers, we use a variational autoencoder network to extract meaningful physical variables. We test our protocol on toy models and show how it works.
Image recognition is one of the primary applications of machine learning algorithms. Nevertheless, machine learning models used in modern image recognition systems consist of millions of parameters that usually require significant computational time to be adjusted, and adjusting model hyperparameters adds further overhead. Because of this, new developments in machine learning models and hyperparameter optimization techniques are required. This paper presents a quantum-inspired hyperparameter optimization technique and a hybrid quantum-classical machine learning model for supervised learning. We benchmark our hyperparameter optimization method over standard black-box objective functions and observe performance improvements, in the form of reduced expected run times and improved fitness values, as the size of the search space grows. We test our approaches in a car image classification task, and demonstrate a full-scale implementation of the hybrid quantum neural network model with the tensor train hyperparameter optimization. Our tests show a qualitative and quantitative advantage over the corresponding standard classical tabular grid search approach used with a deep neural network ResNet34. A classification accuracy of 0.97 was obtained by the hybrid model after 18 iterations, whereas the classical model achieved an accuracy of 0.92 after 75 iterations.
May 11 2022 quant-ph arXiv:2205.04844v1
In this paper we investigate the workflow scheduling problem, a known NP-hard class of scheduling problems. We derive problem instances from an industrial use case and compare against several quantum, classical, and hybrid quantum-classical algorithms. We develop a novel QUBO to represent our scheduling problem and show how the QUBO complexity depends on the input problem. We derive and present a decomposition method for this specific application to mitigate this complexity and demonstrate the effectiveness of the approach.
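The QUBO construction step can be illustrated on a toy scheduling instance. The sketch below, an assumption for illustration rather than the paper's workflow QUBO, balances three job durations across two machines: the energy x^T Q x encodes the sum of squared machine loads plus a one-hot penalty forcing each job onto exactly one machine, and brute force stands in for a QUBO solver:

```python
import numpy as np
import itertools

durations = [3, 2, 2]            # three jobs
n_jobs, n_machines = 3, 2
P = 25.0                         # one-hot penalty weight
N = n_jobs * n_machines          # one binary variable per (job, machine)

def idx(j, m):
    return j * n_machines + m

# Build Q so that x^T Q x = sum_m (load_m)^2
#                         + P * sum_j [(sum_m x_{jm} - 1)^2 - 1],
# i.e. balanced loads plus a one-hot penalty (up to a constant).
Q = np.zeros((N, N))
for m in range(n_machines):                  # (load_m)^2 terms
    for j in range(n_jobs):
        for k in range(n_jobs):
            Q[idx(j, m), idx(k, m)] += durations[j] * durations[k]
for j in range(n_jobs):                      # one-hot penalty terms
    for m in range(n_machines):
        Q[idx(j, m), idx(j, m)] -= P         # linear part (x^2 = x)
        for m2 in range(n_machines):
            if m2 != m:
                Q[idx(j, m), idx(j, m2)] += P

def energy(x):
    return x @ Q @ x

# Brute force over all 2^6 assignments (a solver would take over here).
best = min((np.array(bits) for bits in itertools.product([0, 1], repeat=N)),
           key=energy)

# Every job lands on exactly one machine in the optimum.
assert all(best[idx(j, 0)] + best[idx(j, 1)] == 1 for j in range(n_jobs))
```

The optimum splits the loads as 3 versus 2+2, the most balanced feasible assignment.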
May 11 2022 quant-ph arXiv:2205.04753v1
The emerging technology of quantum neural networks (QNNs) attracts great attention from both machine learning and quantum physics, with the potential to gain quantum advantage over classical artificial neural network (ANN) systems. Compared to their classical counterparts, QNNs have been shown to speed up information processing and enhance prediction or classification efficiency, as well as to offer versatile and experimentally friendly platforms. It is well established that Kerr nonlinearity is an indispensable element in a classical ANN, while in a QNN the roles of Kerr nonlinearity are not yet fully understood. In this work, we consider a bosonic QNN and investigate both a classical task (simulating an XOR gate) and a quantum task (generating Schrödinger cat states) to demonstrate that Kerr nonlinearity not only enables non-trivial tasks but also makes the system more robust to errors.
May 11 2022 quant-ph arXiv:2205.04746v1
In this paper, a gradient-free optimization algorithm for a single-qubit quantum classifier is proposed to overcome the barren plateau effects that arise on quantum devices. A rotation gate RX(φ) is applied in a single-qubit binary quantum classifier, with the training data and parameters loaded into φ via vector multiplication. The cost function is decreased by finding, for each parameter in turn, the value that yields the minimum expectation value when measuring the quantum circuit; the algorithm iterates over all parameters one by one until the cost function satisfies the stopping condition. The proposed algorithm is demonstrated on a classification task and compared with training using the Adam optimizer, and the classifier's performance is further examined when the rotation gate on the quantum device is subject to different kinds of noise. Simulation results show that the single-qubit quantum classifier with the proposed gradient-free optimization reaches high accuracy faster than with the Adam optimizer, completes the training process quickly, and performs well in noisy environments.
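The procedure lends itself to a small classical simulation: since the Z expectation of RX(φ)|0⟩ is cos(φ), the whole classifier can be evaluated in closed form and each parameter optimized by a gradient-free sweep. The toy dataset, the grid-sweep coordinate descent, and all hyperparameters below are illustrative assumptions, not the paper's exact scheme:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy binary task: two Gaussian blobs with labels +1 / -1.
X = np.vstack([rng.normal(-1.0, 0.3, (30, 2)),
               rng.normal(+1.0, 0.3, (30, 2))])
y = np.array([1.0] * 30 + [-1.0] * 30)

def predict(params, X):
    """Single-qubit model: phi = w.x + b is loaded into RX(phi);
    the Z expectation of RX(phi)|0> is cos(phi)."""
    w, b = params[:2], params[2]
    return np.cos(X @ w + b)

def cost(params):
    return np.mean((predict(params, X) - y)**2)

# Gradient-free coordinate descent: sweep each parameter over a grid
# and keep the best value, one parameter at a time.
params = np.zeros(3)
grid = np.linspace(-np.pi, np.pi, 181)
for _ in range(5):
    for i in range(3):
        trials = np.tile(params, (len(grid), 1))
        trials[:, i] = grid
        costs = [cost(t) for t in trials]
        params[i] = grid[int(np.argmin(costs))]

accuracy = np.mean(np.sign(predict(params, X)) == y)
```

Because each sweep includes the current parameter value, the cost never increases from one update to the next.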
We study non-classical pathways and quantum interference in enhanced ionisation of diatomic molecules in strong laser fields using machine learning techniques. Quantum interference provides a bridge, which facilitates intramolecular population transfer. Its frequency is higher than that of the field, intrinsic to the system and depends on several factors, for instance the state of the initial wavepacket or the internuclear separation. Using dimensionality reduction techniques, namely t-distributed stochastic neighbour embedding (t-SNE) and principal component analysis (PCA), we investigate the effect of multiple parameters at once and find optimal conditions for enhanced ionisation in static fields, and controlled ionisation release for two-colour driving fields. This controlled ionisation manifests itself as a step-like behaviour in the time-dependent autocorrelation function. We explain the features encountered with phase-space arguments, and also establish a hierarchy of parameters for controlling ionisation via phase-space Wigner quasiprobability flows, such as specific coherent superpositions of states, electron localisation and internuclear-distance ranges.
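Of the two dimensionality-reduction tools mentioned, PCA is simple enough to sketch directly via the SVD of centred data. The synthetic "simulation outputs" below are an illustrative assumption, standing in for the multi-parameter ionisation data of the paper:

```python
import numpy as np

rng = np.random.default_rng(4)

def pca(data, n_components=2):
    """Principal component analysis via the SVD of centred data:
    returns the projection onto the leading components and the
    per-component variances."""
    Xc = data - data.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T, S**2 / (len(data) - 1)

# Synthetic data: 5 raw parameters whose variance is dominated by one
# underlying direction (a stand-in for a physical control parameter
# such as the internuclear separation).
t = rng.standard_normal(300)
data = np.outer(t, rng.standard_normal(5)) * 3.0 \
       + 0.1 * rng.standard_normal((300, 5))
proj, var = pca(data)

# The first component captures almost all of the variance.
assert var[0] / var.sum() > 0.9
```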
An Efficient Gradient Sensitive Alternate Framework for Variational Quantum Eigensolver with Variable Ansatz
May 09 2022 quant-ph arXiv:2205.03031v1
Variational quantum eigensolver (VQE), which aims to determine the ground state energy of a quantum system described by a Hamiltonian on noisy intermediate-scale quantum (NISQ) devices, is among the most significant applications of variational quantum algorithms (VQAs). However, the accuracy and trainability of the current VQE algorithm are significantly affected by the barren plateau (BP), non-negligible gate errors, and the limited coherence time of NISQ devices. To tackle these issues, a gradient-sensitive alternating framework with a variable ansatz is proposed in this paper to enhance the performance of the VQE. We first propose a theoretical framework that solves VA-VQE by alternately solving a multi-objective optimization problem related to gradient magnitudes and the original VQE. We then propose a novel implementation based on a candidate-tree-based double ε-greedy strategy and a modified multi-objective genetic algorithm. As a result, local optima are avoided from both the ansatz and the parameter perspectives, and the stability of the output ansatz is enhanced. Furthermore, experimental results show that, compared with the implementation of arXiv:2010.10217, our framework improves the error of the found solution, the quantum cost, and the stability by up to 59.8%, 39.3%, and 86.8%, respectively.
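For readers new to VQE, the inner loop the framework builds on can be shown in miniature: a fixed two-qubit ansatz, a statevector simulation in numpy, and plain finite-difference gradient descent (the paper's variable ansatz, ε-greedy search, and genetic algorithm are all omitted here; the Hamiltonian and ansatz are illustrative assumptions):

```python
import numpy as np

# Two-site transverse-field Ising Hamiltonian H = ZZ + 0.5 (XI + IX).
I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])
H = np.kron(Z, Z) + 0.5 * (np.kron(X, I2) + np.kron(I2, X))
exact = np.linalg.eigvalsh(H)[0]        # reference ground energy

CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=float)

def ry(t):
    c, s = np.cos(t / 2), np.sin(t / 2)
    return np.array([[c, -s], [s, c]])

def energy(theta):
    """<H> for the ansatz (RY x RY) CNOT (RY x RY) |00>."""
    psi = np.zeros(4); psi[0] = 1.0
    psi = np.kron(ry(theta[0]), ry(theta[1])) @ psi
    psi = CNOT @ psi
    psi = np.kron(ry(theta[2]), ry(theta[3])) @ psi
    return psi @ H @ psi

# Finite-difference gradient descent with a few random restarts.
rng = np.random.default_rng(11)
best = np.inf
for _ in range(3):
    theta = rng.uniform(-np.pi, np.pi, 4)
    for _ in range(400):
        grad = np.zeros(4)
        for i in range(4):
            d = np.zeros(4); d[i] = 1e-4
            grad[i] = (energy(theta + d) - energy(theta - d)) / 2e-4
        theta -= 0.1 * grad
    best = min(best, energy(theta))
```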