MACHINE LEARNING FOR QUANTUM X | July 5–9
- University of Illinois and IBM Researching AI, Quantum Tech
- Daresbury AI and quantum computing site ‘will help business’
- QTUM: The ETF Offers A Diversified Way To Play The Quantum Computing And AI Trends
- Expressibility of Parametrized Quantum Circuits & Classification Accuracy of Quantum Neural Networks
- Chapter 250: Quantum-Classical Hybrid Machine Learning
- LOO: The Basics of Artificial Intelligence Machine Learning – When a Muffin Becomes a Panda
- Quantum Networks and the Role of CQN
- The Incredible Potential for Quantum in Health 🤯
- The Meta-Variational Quantum Eigensolver (Meta-VQE)
- General registration is open for the Qiskit Global Summer School on Quantum Machine Learning, July 12–23
- Amazon Braket release v1.6.4
- Classically-Boosted Variational Quantum Eigensolver
- Classical algorithms and quantum limitations for maximum cut on high-girth graphs
- The dilemma of quantum neural networks
- Variational Quantum-Neural Hybrid Eigensolver
- Quantum Natural Gradient for Variational Bayes
- Error Mitigation for Deep Quantum Optimization Circuits by Leveraging Problem Symmetries
- Classical Variational Optimization of Gate Sequences for Time Evolution of Large Quantum Systems
- Matrix Product State Pre-Training for Quantum Machine Learning
- Optimizing Ansatz Design in QAOA for Max-cut
- A Review of Machine Learning Classification Using Quantum Annealing for Real-world Applications
- Grover’s Algorithm for Question Answering
- BIGDML: Towards Exact Machine Learning Force Fields for Materials
- Extract the Degradation Information in Squeezed States with Machine Learning
- Free versus Bound Entanglement: Machine learning tackling a NP-hard problem
- Mean Field Approximation for solving QUBO problems
- Adiabatic Quantum Feature Selection for Sparse Linear Regression
The ability of near-term quantum computers to represent classically-intractable quantum states has brought much interest in using such devices for estimating the ground and excited state energies of fermionic Hamiltonians. The usefulness of such near-term techniques, generally based on the Variational Quantum Eigensolver (VQE), however, is limited by device noise and the need to perform many circuit repetitions. This paper addresses these challenges by generalizing VQE to consider wavefunctions in a subspace spanned by classically tractable states and states that can be prepared on a quantum computer. The manuscript shows how the ground and excited state energies can be estimated using such “classical-boosting” and how this approach can be combined with VQE Hamiltonian decomposition techniques. Unlike existing VQE approaches, the sensitivity to sampling error and device noise approaches zero in the limit where the classically tractable states are able to describe an eigenstate. A detailed analysis of the measurement requirements in the simplest case, where a single computational basis state is used to boost conventional VQE, shows that the ground-state energy estimation of several closed-shell homonuclear diatomic molecules can be accelerated by a factor of approximately 10-1000. The analysis also shows that the measurement reduction of such single basis state boosting, relative to conventional VQE, can be estimated using only the overlap between the ground state and the computational basis state used for boosting.
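The subspace idea behind this classical boosting can be sketched numerically: given a classically tractable state and an ansatz state, the energy is minimized over their two-dimensional span via a generalized eigenvalue problem. A minimal numpy sketch on a toy Hamiltonian (the 2-qubit Hamiltonian, the basis state, and the random "ansatz" state are illustrative stand-ins, not the paper's fermionic setting):

```python
import numpy as np
from scipy.linalg import eigh

# Toy 2-qubit Hamiltonian (hypothetical stand-in for a fermionic H)
np.random.seed(0)
A = np.random.randn(4, 4)
H = (A + A.T) / 2

# Classical basis state |c> = |00> and a stand-in ansatz state |psi>
c = np.zeros(4); c[0] = 1.0
psi = np.random.randn(4); psi /= np.linalg.norm(psi)

# Project H into span{|c>, |psi>} and solve the generalized eigenproblem
basis = np.column_stack([c, psi])
H_sub = basis.T @ H @ basis
S_sub = basis.T @ basis          # overlap matrix (states are not orthogonal)
E_boosted = eigh(H_sub, S_sub, eigvals_only=True)[0]

E_exact = np.linalg.eigvalsh(H)[0]
E_vqe = psi @ H @ psi            # plain-VQE estimate from |psi> alone
print(E_exact, E_boosted, E_vqe)
```

By construction the boosted energy can never be worse than either the classical state's or the ansatz state's Rayleigh quotient alone.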
We study the performance of local quantum algorithms such as the Quantum Approximate Optimization Algorithm (QAOA) for the maximum cut problem, and their relationship to that of classical algorithms. (1) We prove that every (quantum or classical) one-local algorithm achieves on D-regular graphs of girth > 5 a maximum cut of at most 1/2 + C/√D for C = 1/√2 ≈ 0.7071. This is the first such result showing that one-local algorithms achieve a value bounded away from the true optimum for random graphs, which is 1/2 + P*/√D + o(1/√D) for P* ≈ 0.7632. (2) We show that there is a classical k-local algorithm that achieves a value of 1/2 + C/√D − O(1/√k) for D-regular graphs of girth > 2k + 1, where C = 2/π ≈ 0.6366. This is an algorithmic version of the existential bound of Lyons and is related to the algorithm of Aizenman, Lebowitz, and Ruelle (ALR) for the Sherrington-Kirkpatrick model. This bound is better than that achieved by the one-local and two-local versions of QAOA on high-girth graphs. (3) Through computational experiments, we give evidence that the ALR algorithm achieves better performance than constant-locality QAOA for random D-regular graphs, as well as other natural instances, including graphs that do have short cycles. Our experimental work suggests that it could be possible to extend beyond our theoretical constraints. This points at the tantalizing possibility that O(1)-local quantum maximum-cut algorithms might be *pointwise dominated* by polynomial-time classical algorithms, in the sense that there is a classical algorithm outputting cuts of equal or better quality *on every possible instance*. This is in contrast to the evidence that polynomial-time algorithms cannot simulate the probability distributions induced by local quantum algorithms.
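The ALR-style k-local algorithms in the abstract are beyond a short sketch, but the 1/2 baseline that all of these bounds improve on is easy to see: any 1-flip local optimum cuts at least half the edges. A small illustration (the greedy local search and the toy circulant graph are illustrative choices, not the paper's algorithm):

```python
import random

def local_search_maxcut(n, edges, seed=0):
    """Greedy 1-flip local search: flip any vertex that increases the cut."""
    rng = random.Random(seed)
    side = [rng.randint(0, 1) for _ in range(n)]
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v); adj[v].append(u)
    improved = True
    while improved:
        improved = False
        for v in range(n):
            cut_deg = sum(side[v] != side[u] for u in adj[v])
            if cut_deg < len(adj[v]) - cut_deg:  # flipping v gains cut edges
                side[v] ^= 1
                improved = True
    return side

# Toy instance: a 3-regular circulant graph on 10 vertices
n = 10
edges = [(i, (i + 1) % n) for i in range(n)] + [(i, (i + 5) % n) for i in range(5)]
side = local_search_maxcut(n, edges)
cut = sum(side[u] != side[v] for u, v in edges)
print(cut, len(edges))
```

At a 1-flip local optimum every vertex has at least half its edges cut, so summing over vertices gives cut ≥ m/2; the results above concern how far beyond 1/2 local algorithms can go.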
The core of quantum machine learning is to devise quantum models with better trainability and lower generalization error bounds than their classical counterparts, to ensure better reliability and interpretability. Recent studies confirmed that quantum neural networks (QNNs) have the ability to achieve this goal on specific datasets. In this regard, it is of great importance to understand whether these advantages are still preserved on real-world tasks. Through systematic numerical experiments, we empirically observe that current QNNs fail to provide any benefit over classical learning models. Concretely, our results deliver two key messages. First, QNNs suffer from severely limited effective model capacity, which incurs poor generalization on real-world datasets. Second, the trainability of QNNs is insensitive to regularization techniques, which sharply contrasts with the classical scenario. These empirical results force us to rethink the role of current QNNs and to design novel protocols for solving real-world problems with quantum advantages.
The variational quantum eigensolver (VQE) is one of the most representative quantum algorithms in the noisy intermediate-scale quantum (NISQ) era, and is generally speculated to deliver one of the first quantum advantages for the ground-state simulation of some non-trivial Hamiltonians. However, short quantum coherence times and the limited availability of quantum hardware resources strongly restrict the capacity and expressiveness of VQEs. In this Letter, we introduce the variational quantum-neural hybrid eigensolver (VQNHE), in which the shallow-circuit quantum ansatz is further enhanced by classical post-processing with neural networks. We show that VQNHE consistently and significantly outperforms VQE in simulating ground-state energies of quantum spins and molecules given the same amount of quantum resources. More importantly, we demonstrate that for arbitrary post-processing neural functions, VQNHE incurs only a polynomial overhead in processing time, and represents the first scalable method to exponentially accelerate VQE with non-unitary post-processing that can be efficiently implemented in the NISQ era.
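The effect of non-unitary post-processing can be illustrated with a diagonal exponential function applied to a fixed state: optimizing it can only lower the Rayleigh quotient. A toy numpy sketch (the 4-dimensional Hamiltonian, the fixed "circuit" state, and the diagonal exponential post-processor are illustrative assumptions, not the paper's neural-network construction):

```python
import numpy as np
from scipy.optimize import minimize

np.random.seed(1)
A = np.random.randn(4, 4)
H = (A + A.T) / 2                       # toy 2-qubit Hamiltonian
psi = np.random.randn(4)
psi /= np.linalg.norm(psi)              # fixed "quantum circuit" output state

def energy(theta):
    f = np.exp(theta)                   # diagonal, non-unitary post-processor
    phi = f * psi                       # |phi> = f|psi>, normalized below
    return phi @ H @ phi / (phi @ phi)

E_plain = energy(np.zeros(4))           # theta = 0 recovers plain VQE
res = minimize(energy, np.zeros(4))     # classical optimization of f
print(E_plain, res.fun)
```

Since theta = 0 reproduces the unprocessed state, the optimized energy is never worse than the plain estimate, and it is always bounded below by the true ground-state energy.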
Variational Bayes (VB) is a critical method in machine learning and statistics, underpinning the recent success of Bayesian deep learning. The natural gradient is an essential component of efficient VB estimation, but it is prohibitively computationally expensive in high dimensions. We propose a hybrid quantum-classical algorithm to improve the scaling properties of natural gradient computation and make VB a truly computationally efficient method for Bayesian inference in high-dimensional settings. The algorithm leverages matrix inversion from the linear systems algorithm of Harrow, Hassidim, and Lloyd [Phys. Rev. Lett. 103, 150502 (2009)] (HHL). We demonstrate that the matrix to be inverted is sparse and the classical-quantum-classical handoffs are sufficiently economical to preserve computational efficiency, making the problem of natural gradient for VB an ideal application of HHL. We prove that, under standard conditions, the VB algorithm with quantum natural gradient is guaranteed to converge.
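The linear system that HHL is applied to here is the natural-gradient equation F d = g, where F is the Fisher information matrix. A classical toy (with a hypothetical diagonal Fisher matrix) shows why solving it matters when the curvature is ill-conditioned:

```python
import numpy as np

# Toy variational objective with ill-conditioned curvature: L(x) = 0.5 x^T F x
F = np.diag([100.0, 1.0])          # hypothetical Fisher information matrix
x = np.array([1.0, 1.0])

grad = F @ x                       # plain gradient at x
nat = np.linalg.solve(F, grad)     # natural gradient: solve F d = grad
                                   # (this linear solve is what HHL accelerates)
x_plain = x - 0.005 * grad         # step size capped by the largest curvature
x_nat = x - nat                    # natural-gradient step lands on the optimum
print(x_plain, x_nat)
```

The plain gradient step barely moves the flat direction, while the natural-gradient step rescales by the curvature and reaches the minimum in one step for this quadratic.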
High error rates and limited fidelity of quantum gates in near-term quantum devices are the central obstacles to successful execution of the Quantum Approximate Optimization Algorithm (QAOA). In this paper we introduce an application-specific approach for mitigating the errors in QAOA evolution by leveraging the symmetries present in the classical objective function to be optimized. Specifically, the QAOA state is projected into the symmetry-restricted subspace, with projection being performed either at the end of the circuit or throughout the evolution. Our approach improves the fidelity of the QAOA state, thereby increasing both the accuracy of the sample estimate of the QAOA objective and the probability of sampling the binary string corresponding to that objective value. We demonstrate the efficacy of the proposed methods on QAOA applied to the MaxCut problem, although our methods are general and apply to any objective function with symmetries, as well as to the generalization of QAOA with alternative mixers. We experimentally verify the proposed methods on an IBM Quantum processor, utilizing up to 5 qubits. When leveraging a global bit-flip symmetry, our approach leads to a 23% average improvement in quantum state fidelity.
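The projection step can be mimicked in a statevector simulation: for MaxCut the cost function is invariant under flipping all bits, so the ideal state lies in the +1 eigenspace of the global bit-flip operator X⊗…⊗X, and projecting a noisy pure state onto that subspace can never decrease fidelity with the ideal state. A numpy sketch (3 qubits and a Gaussian perturbation are illustrative assumptions, not the paper's hardware experiment):

```python
import numpy as np

np.random.seed(2)
n = 3
dim = 2 ** n

# Global bit-flip operator X^{(x)n}: maps basis index i -> bitwise complement
flip = np.zeros((dim, dim))
for i in range(dim):
    flip[dim - 1 - i, i] = 1.0
P = (np.eye(dim) + flip) / 2       # projector onto the symmetric subspace

# Ideal state respecting the symmetry, plus a small noise perturbation
ideal = P @ np.random.randn(dim)
ideal /= np.linalg.norm(ideal)
noisy = ideal + 0.3 * np.random.randn(dim)
noisy /= np.linalg.norm(noisy)

proj = P @ noisy                   # symmetry-restricted projection
proj /= np.linalg.norm(proj)
fid_before = (ideal @ noisy) ** 2
fid_after = (ideal @ proj) ** 2
print(fid_before, fid_after)
```

Since the ideal state is symmetric, the overlap is unchanged by projection while the noisy state's norm can only shrink, so the post-projection fidelity is at least as high.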
The simulation of time evolution of large quantum systems is a classically challenging and often intractable task, making it a promising application for quantum computation. A Trotter-Suzuki approximation yields an implementation in which a desired accuracy can be achieved by increasing the gate count accordingly. In this work, we introduce a variational algorithm which uses solutions of classical optimizations to predict efficient quantum circuits for time evolution of large quantum systems. Our strategy improves on the Trotter-Suzuki ansatz in accuracy and gate count by several orders of magnitude: in a NISQ-friendly setting, we can either significantly reduce the approximation error while using the same number of gates as the Trotter-Suzuki ansatz, or achieve comparable accuracy with significantly fewer gates. We find that our strategy outperforms the Trotter-Suzuki benchmark for translation-invariant systems as well as for open boundaries.
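As a baseline for what the variational circuits improve on, first-order Trotter-Suzuki trades gate count for accuracy: the error falls roughly as 1/n in the number of steps, and each extra step costs more gates. A small numpy/scipy check on a toy two-qubit Hamiltonian (the Hamiltonian is an illustrative stand-in for the paper's large systems):

```python
import numpy as np
from scipy.linalg import expm

# Two non-commuting terms of a toy Hamiltonian H = A + B
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
A = np.kron(X, X)
B = np.kron(Z, np.eye(2)) + np.kron(np.eye(2), Z)

t = 1.0
U_exact = expm(-1j * t * (A + B))

def trotter(n):
    """First-order Trotter-Suzuki: (e^{-iAt/n} e^{-iBt/n})^n."""
    step = expm(-1j * t / n * A) @ expm(-1j * t / n * B)
    return np.linalg.matrix_power(step, n)

err = lambda n: np.linalg.norm(trotter(n) - U_exact)
print(err(4), err(16))   # more steps: smaller error, more gates
```

The paper's point is that variationally optimized gate sequences can beat this accuracy-per-gate tradeoff by orders of magnitude.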
Hybrid Quantum-Classical algorithms are a promising candidate for developing uses for NISQ devices. In particular, Parametrised Quantum Circuits (PQCs) paired with classical optimizers have been used as a basis for quantum chemistry and quantum optimization problems. Training PQCs relies on methods to overcome the fact that the gradients of PQCs vanish exponentially in the size of the circuits used. Tensor network methods are being increasingly used as a classical machine learning tool, as well as a tool for studying quantum systems. We introduce a circuit pre-training method based on matrix product state machine learning methods, and demonstrate that it accelerates training of PQCs for both supervised learning, energy minimization, and combinatorial optimization.
The Quantum Approximate Optimization Algorithm (QAOA) has been studied widely in the literature, primarily for finding an approximate value of the maximum cut size of a graph. QAOA is composed of a problem Hamiltonian and a mixer Hamiltonian which are applied alternately for p ≥ 1 layers. The circuit for this algorithm requires 2m CNOT gates in each layer, where m is the number of edges in the graph. The CNOT gate is one of the primary sources of error in modern quantum computers. In this paper, we propose two techniques for reducing the number of CNOT gates in the circuit which are independent of the hardware architecture. For a graph with n vertices, we first propose a technique based on edge coloring that can remove up to ⌊n/2⌋ CNOT gates from the circuit. Next, we propose another technique based on Depth First Search (DFS) that can remove n − 1 CNOT gates at the cost of some increased depth. We analytically derive the criterion under which the reduction in the number of CNOT gates due to the DFS-based technique yields a lower error probability even with the increased depth, and show that all graphs satisfy this criterion, making the technique universal. We further show that the proposed optimization holds even in the post-transpilation stage of the circuit, which is what is actually executed on IBM Quantum hardware. We simulate these two techniques for graphs of various sparsity with the ibmq_manhattan noise model and show that the DFS-based technique outperforms the edge-coloring-based technique, which in turn outperforms the traditional QAOA circuit in terms of the number of CNOT gates, and hence the probability of error of the circuit.
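The DFS-based saving is easy to count: the standard cost layer uses 2 CNOTs per edge (2m total), and the described technique saves one CNOT per edge of a spanning tree, i.e. n − 1 on a connected graph. A counting sketch (this tallies gate counts only; the circuit-level construction is in the paper):

```python
def cnot_counts(n, edges):
    """Return (baseline, reduced) CNOT counts for one QAOA cost layer."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v); adj[v].append(u)
    seen = [False] * n
    seen[0] = True
    tree_edges = 0
    stack = [0]
    while stack:                        # iterative DFS spanning tree
        u = stack.pop()
        for v in adj[u]:
            if not seen[v]:
                seen[v] = True
                tree_edges += 1
                stack.append(v)
    baseline = 2 * len(edges)           # 2 CNOTs per edge
    return baseline, baseline - tree_edges

# 4-vertex complete graph: 6 edges, spanning tree has n - 1 = 3 edges
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
print(cnot_counts(4, edges))  # (12, 9)
```

For any connected graph the spanning tree has n − 1 edges, matching the stated saving.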
Optimizing the training of a machine learning pipeline helps reduce training costs and improve model performance. One such strategy is quantum annealing, an emerging computing paradigm that has shown potential for optimizing the training of machine learning models. A physical quantum annealer has been realized by D-Wave Systems and is available to the research community for experiments. Recent experiments on a variety of machine learning applications using quantum annealing have shown interesting results in settings where the performance of classical machine learning techniques is limited by scarce training data and high-dimensional features. This article explores the application of D-Wave's quantum annealer to optimizing machine learning pipelines for real-world classification problems. We review the application domains in which a physical quantum annealer has been used to train machine learning classifiers, and discuss and analyze the experiments performed on the D-Wave quantum annealer for applications such as image recognition, remote sensing imagery, computational biology, and particle physics. We discuss the possible advantages and the problems for which quantum annealing is likely to be advantageous over classical computation.
Grover’s algorithm, a well-known quantum search algorithm, allows one to find the correct item in a database with a quadratic speedup. In this paper we adapt Grover’s algorithm to the problem of finding a correct answer to a natural language question in English, thus contributing to the growing field of Quantum Natural Language Processing. Using a grammar that can be interpreted as tensor contractions, each word is represented as a quantum state that serves as input to the quantum circuit. We introduce a quantum measurement to contract the representations of words, resulting in the representation of larger text fragments. Using this framework, a representation for the question is found that contains all the possible answers in equal quantum superposition, which allows for the building of an oracle that can detect a correct answer while being agnostic to the specific question. Furthermore, we show that our construction can deal with certain types of ambiguous phrases by keeping their various meanings in quantum superposition.
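The QNLP construction layered on top is beyond a short sketch, but the underlying primitive, Grover amplification with its roughly (π/4)√N iterations, is compact in a statevector simulation (a single marked basis state stands in for the "correct answer" detected by the oracle):

```python
import numpy as np

def grover(n_qubits, marked, iters):
    """Statevector Grover search: oracle phase flip + inversion about the mean."""
    N = 2 ** n_qubits
    state = np.ones(N) / np.sqrt(N)           # uniform superposition
    for _ in range(iters):
        state[marked] *= -1                   # oracle flips the marked amplitude
        state = 2 * state.mean() - state      # diffusion: inversion about the mean
    return state

n = 4
N = 2 ** n
iters = int(round(np.pi / 4 * np.sqrt(N)))    # ~ (pi/4) sqrt(N) iterations
state = grover(n, marked=5, iters=iters)
print(iters, state[5] ** 2)                   # success probability ~ 0.96
```

Classical exhaustive search would need O(N) oracle queries; Grover's quadratic speedup comes from the O(√N) iteration count above.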
Machine-learning force fields (MLFF) should be accurate, computationally and data efficient, and applicable to molecules, materials, and interfaces thereof. Currently, MLFFs often introduce tradeoffs that restrict their practical applicability to small subsets of chemical space or require exhaustive datasets for training. Here, we introduce the Bravais-Inspired Gradient-Domain Machine Learning (BIGDML) approach and demonstrate its ability to construct reliable force fields using a training set with just 10–200 geometries for materials including pristine and defect-containing 2D and 3D semiconductors and metals, as well as chemisorbed and physisorbed atomic and molecular adsorbates on surfaces. The BIGDML model employs the full relevant symmetry group for a given material, does not assume artificial atom types or localization of atomic interactions, and exhibits high data efficiency and state-of-the-art energy accuracies (errors substantially below 1 meV per atom) for an extended set of materials. Extensive path-integral molecular dynamics carried out with BIGDML models demonstrate the counterintuitive localization of benzene–graphene dynamics induced by nuclear quantum effects and allow us to rationalize the Arrhenius behavior of the hydrogen diffusion coefficient in a Pd crystal over a wide range of temperatures.
To leverage the full power of quantum noise squeezing in the presence of unavoidable decoherence, a complete understanding of the degradation in the purity of squeezed light is needed. By implementing a machine learning architecture based on a convolutional neural network, we demonstrate fast, robust, and precise quantum state tomography for continuous variables from experimentally measured data generated by balanced homodyne detectors. Compared with the maximum likelihood estimation method, which is time consuming and prone to over-fitting, a well-trained machine fed with squeezed vacuum and squeezed thermal states can reconstruct the density matrix in less than one second. Moreover, the resulting fidelity remains as high as 0.99 even when the anti-squeezing level is higher than 20 dB. Against the phase noise and loss mechanisms coupled in from the environment and surrounding vacuum, the degradation information is experimentally unveiled with machine learning for low- and high-noise scenarios, i.e., with anti-squeezing levels of 12 dB and 18 dB, respectively. Our neural-network-enhanced quantum state tomography provides metrics for giving physical descriptions of every feature observed in the quantum state and paves the way toward exploring large-scale quantum systems.
Entanglement detection in high-dimensional systems is an NP-hard problem for which no efficient general method is known. Given a bipartite quantum state of interest, free entanglement can be detected efficiently by the PPT criterion (Peres-Horodecki criterion), in contrast to bound entanglement, a curious form of entanglement that also cannot be distilled into maximally (free) entangled states. Only a few bound entangled states have been found, typically by constructing dedicated entanglement witnesses, so the question naturally arises how large the volume of those states is. We define a large family of magically symmetric states of bipartite qutrits, for which we find 82% to be free entangled, 2% to be certainly separable, and as much as 10% to be bound entangled, which shows that this kind of entanglement is not rare. Via various machine learning algorithms we confirm that the remaining 6% of states are more likely to belong to the set of separable states than to the bound entangled states. Most importantly, we find via dimension-reduction algorithms that there is a strong 2-dimensional (linear) sub-structure in the set of bound entangled states. This revealed structure opens a novel path to finding and characterizing bound entanglement, towards solving the long-standing problem of what the existence of bound entanglement implies.
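The PPT (Peres-Horodecki) criterion referenced here is simple to implement for two qutrits: a state whose partial transpose has a negative eigenvalue is (free) entangled, while bound entangled states slip through as PPT. A numpy sketch with two standard extreme cases (the test states are textbook examples, not the paper's magically symmetric family):

```python
import numpy as np

def partial_transpose(rho, d=3):
    """Partial transpose on subsystem B of a d x d bipartite density matrix."""
    r = rho.reshape(d, d, d, d)             # indices (a, b, a', b')
    return r.transpose(0, 3, 2, 1).reshape(d * d, d * d)

def is_ppt(rho, d=3, tol=1e-12):
    """True if the partial transpose has no negative eigenvalue."""
    return bool(np.linalg.eigvalsh(partial_transpose(rho, d)).min() >= -tol)

d = 3
# Maximally entangled two-qutrit state (|00>+|11>+|22>)/sqrt(3): NPT, free entangled
psi = np.eye(d).reshape(-1) / np.sqrt(d)
rho_ent = np.outer(psi, psi)
# Maximally mixed state: separable, hence PPT
rho_sep = np.eye(d * d) / (d * d)
print(is_ppt(rho_ent), is_ppt(rho_sep))  # False True
```

Detecting bound entanglement is exactly the case this efficient test cannot decide, which is where the paper's witnesses and machine learning come in.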
Quadratic Unconstrained Binary Optimization (QUBO) problems are NP-hard; thus, so far, there are no algorithms to solve them efficiently. There are exact methods like the branch-and-bound algorithm for smaller problems and, for larger ones, many good approximations like stochastic simulated annealing for discrete variables or mean field annealing for continuous variables. This paper shows that the statistical physics approach and the quantum mechanical approach to mean field annealing give the same result. We examined the Ising problem, which is an alternative formulation of the QUBO problem. Our methods consist of a set of simple gradient-based minimizations with continuous variables, and are thus easy to simulate. We benchmarked our methods by solving the Maximum Cut problem on the G-set instances; on many graphs, we achieve the best-known cut value.
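The mean-field annealing loop described here amounts to iterating the self-consistency equations m_i = tanh(−β Σ_j J_ij m_j) for soft spins while raising β. A minimal sketch on a 6-cycle MaxCut instance (the schedule, couplings, and sequential sweeps are illustrative choices, not the paper's exact method):

```python
import numpy as np

def mean_field_maxcut(n, edges, betas=None, sweeps=20):
    """Mean-field annealing for the Ising formulation of MaxCut."""
    if betas is None:
        betas = np.linspace(0.1, 5.0, 50)   # anneal from hot to cold
    J = np.zeros((n, n))
    for u, v in edges:
        J[u, v] = J[v, u] = 1.0             # antiferromagnetic couplings
    m = np.random.default_rng(0).uniform(-0.1, 0.1, n)  # soft spins in (-1, 1)
    for beta in betas:
        for _ in range(sweeps):
            for i in range(n):              # sequential self-consistency sweep
                m[i] = np.tanh(-beta * (J[i] @ m))
    s = np.sign(m)                          # round soft spins to +/- 1
    return sum(s[u] != s[v] for u, v in edges)

# 6-cycle: bipartite, so the optimal cut uses all 6 edges
n = 6
edges = [(i, (i + 1) % n) for i in range(n)]
cut = mean_field_maxcut(n, edges)
print(cut)
```

As β grows the tanh saturates and the soft spins freeze into a binary configuration; the annealing schedule is what lets the dominant (large-cut) mode win before freezing.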
Linear regression is a popular machine learning approach for learning and predicting real-valued outputs (dependent variables) from independent variables (features). In many real-world problems, it is beneficial to perform sparse linear regression to identify the features important for predicting the dependent variable. This not only yields interpretable results but also avoids overfitting when the number of features is large and the amount of data is small. The most natural way to achieve this is ‘best subset selection’, which penalizes non-zero model parameters by adding an ℓ0 norm over the parameters to the least squares loss. However, this makes the objective function non-convex and intractable even for a small number of features. This paper addresses the intractability of sparse linear regression with the ℓ0 norm using adiabatic quantum computing, a quantum computing paradigm that is particularly useful for solving optimization problems faster. We formulate the ℓ0 optimization problem as a Quadratic Unconstrained Binary Optimization (QUBO) problem and solve it on the D-Wave adiabatic quantum computer. We study and compare the quality of the QUBO solution on synthetic and real-world datasets. The results demonstrate the effectiveness of the proposed adiabatic quantum computing approach in finding the optimal solution: the QUBO solution matches the optimal solution for a wide range of sparsity penalty values across the datasets.
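For binary weights the ℓ0-penalized least-squares objective maps directly to a QUBO: since w_i² = w_i for w_i ∈ {0, 1}, all linear terms, including the sparsity penalty λ, fold into the diagonal of the QUBO matrix. A numpy sketch with exhaustive search standing in for the D-Wave annealer (the synthetic data, penalty value, and binary-weight restriction are illustrative assumptions):

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(3)
X = rng.standard_normal((20, 5))
w_true = np.array([1, 0, 1, 0, 0])        # sparse binary ground truth
y = X @ w_true + 0.01 * rng.standard_normal(20)
lam = 0.5                                  # l0 sparsity penalty

# QUBO encoding of min_w ||y - Xw||^2 + lam * ||w||_0 over w in {0,1}^5:
# w_i^2 = w_i lets the linear terms move onto the diagonal
Q = X.T @ X + np.diag(lam - 2 * X.T @ y)

def qubo_value(w):
    return w @ Q @ w                       # equals the objective up to y.y

# Exhaustive search stands in for the quantum annealer at this toy size
best = min((np.array(w) for w in product([0, 1], repeat=5)), key=qubo_value)
print(best)
```

On the real hardware, Q (after embedding) is what gets handed to the annealer; everything else is classical pre- and post-processing.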