- Researchers disentangle quantum machine learning
- Is Quantum Computing the Future of AI?
- Agnostiq Selects Pennylane to Develop Quantum Platform for Finance
- NVIDIA Teams With Google Quantum AI and IBM to Speed Research in Quantum Computing
- Clever Combination of Quantum Physics and Molecular Biology
- Sofiene Jerbi: Quantum machine learning beyond kernel methods
- Quantum Artificial Intelligence, The Next Breakthrough?
- WEBINAR “Quantum Machine Learning & Data Visualization”
- QUANTUM MACHINE LEARNING For Image Classification | QuBites 4.1
- An Introduction to Open Source Quantum Computing
- [Summary] Paper Review: “QAOA for Max-Cut requires several hundred qubits”
Modern quantum machine learning (QML) methods involve variationally optimizing a parameterized quantum circuit on a training data set, and subsequently making predictions on a testing data set (i.e., generalizing). In this work, we provide a comprehensive study of generalization performance in QML after training on a limited number N of training data points. We show that the generalization error of a quantum machine learning model with T trainable gates scales at worst as √(T/N). When only K ≪ T gates have undergone substantial change in the optimization process, we prove that the generalization error improves to √(K/N). Our results imply that compiling unitaries into a polynomial number of native gates, a crucial application for the quantum computing industry that typically uses exponential-size training data, can be sped up significantly. We also show that classifying quantum states across a phase transition with a quantum convolutional neural network requires only a very small training data set. Other potential applications include learning quantum error correcting codes or quantum dynamical simulation. Our work injects new hope into the field of QML, as good generalization is guaranteed from few training data.
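As a rough illustration of how the bounds above behave, here is a minimal sketch; the helper name and the dropped constant factors are assumptions for illustration, not from the paper:

```python
import math

def generalization_bound(T, N, K=None):
    """Scaling of the generalization-error bound described above:
    sqrt(T/N) in the worst case, improving to sqrt(K/N) when only
    K << T gates change substantially (constant factors omitted)."""
    k = T if K is None else K
    return math.sqrt(k / N)

# With T = 100 trainable gates, quadrupling the data halves the bound,
# and so does training that only moves K = 25 of the gates.
assert generalization_bound(100, 400) == 0.5 * generalization_bound(100, 100)
assert generalization_bound(100, 100, K=25) == 0.5
```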
We study the power of quantum memory for learning properties of quantum systems and dynamics, which is of great importance in physics and chemistry. Many state-of-the-art learning algorithms require access to an additional external quantum memory. While such a quantum memory is not required a priori, in many cases algorithms that do not utilize quantum memory require much more data than those that do. We show that this trade-off is inherent in a wide range of learning problems. Our results include the following: (1) We show that to perform shadow tomography on an n-qubit state ρ with M observables, any algorithm without quantum memory requires Ω(min(M, 2^n)) samples of ρ in the worst case. Up to logarithmic factors, this matches the upper bound of [HKP20] and completely resolves an open question in [Aar18, AR19]. (2) We establish exponential separations between algorithms with and without quantum memory for purity testing, distinguishing scrambling and depolarizing evolutions, as well as uncovering symmetry in physical dynamics. Our separations improve and generalize prior work of [ACQ21] by allowing for a broader class of algorithms without quantum memory. (3) We give the first trade-off between quantum memory and sample complexity. We prove that to estimate absolute values of all n-qubit Pauli observables, algorithms with k < n qubits of quantum memory require at least Ω(2^((n−k)/3)) samples, but there is an algorithm using n qubits of quantum memory which requires only O(n) samples. The separations we show are large enough that they could already be evident with, for instance, tens of qubits. This provides a concrete path towards demonstrating real-world advantage for learning algorithms with quantum memory.
The variational quantum eigensolver (VQE) uses the variational principle to compute the ground state energy of a Hamiltonian, a problem that is central to quantum chemistry and condensed matter physics. Conventional computing methods are constrained in their accuracy by computational limits. The VQE may be used to model complex wavefunctions in polynomial time, making it one of the most promising near-term applications for quantum computing. Finding a path through the relevant literature has rapidly become an overwhelming task, with many methods promising to improve different parts of the algorithm. Despite strong theoretical underpinnings suggesting excellent scaling of individual VQE components, studies have pointed out that their various pre-factors could be too large to reach a quantum computing advantage over conventional methods. This review aims to provide an overview of the progress that has been made on the different parts of the algorithm. All components of the algorithm are reviewed in detail, including the representation of Hamiltonians and wavefunctions on a quantum computer, the optimization process, and the post-processing mitigation of errors, and best practices are suggested throughout. We identify four main areas of future research: (1) optimal measurement schemes for reduction of circuit repetitions; (2) large-scale parallelization across many quantum computers; (3) ways to overcome the potential appearance of vanishing gradients in the optimization process, and how the number of iterations required for the optimization scales with system size; (4) the extent to which VQE suffers from quantum noise, and whether this noise can be mitigated. The answers to these open research questions will determine the routes for the VQE to achieve quantum advantage as quantum computing hardware scales up and noise levels are reduced.
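The variational principle the VQE relies on, that the energy of any trial state upper-bounds the true ground state energy, can be sketched classically in a few lines. The 2×2 Hamiltonian below is an illustrative stand-in, not an actual molecular problem, and the grid scan replaces the hybrid classical optimizer:

```python
import numpy as np

# Toy single-qubit Hamiltonian standing in for a molecular problem
# (illustrative numbers, not real molecular coefficients).
H = np.array([[-1.0, 0.5],
              [ 0.5, 0.3]])

def ansatz(theta):
    """Single-parameter variational state |psi(theta)> = Ry(theta)|0>."""
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def energy(theta):
    """Rayleigh quotient <psi|H|psi>; by the variational principle this
    upper-bounds the true ground-state energy for every theta."""
    psi = ansatz(theta)
    return psi @ H @ psi

# Crude stand-in for the classical optimizer: scan the 1-D parameter space.
thetas = np.linspace(0, 2 * np.pi, 2001)
e_vqe = min(energy(t) for t in thetas)
e_exact = np.linalg.eigvalsh(H).min()
assert abs(e_vqe - e_exact) < 1e-4
```

On real hardware, `energy` would be estimated from measurement shots rather than computed exactly, which is where the noise questions discussed in the review enter.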
Variational quantum circuits are used in quantum machine learning and variational quantum simulation tasks. How to design good variational circuits, or how to predict their performance on given learning or optimization tasks, remains unclear. Here we address these problems by analyzing variational quantum circuits using the theory of neural tangent kernels. We define quantum neural tangent kernels and derive dynamical equations for the associated loss function in optimization and learning tasks. We analytically solve the dynamics in the frozen limit, or lazy training regime, where the variational angles change slowly and a linear perturbation is sufficient. We extend the analysis to a dynamical setting, including quadratic corrections in the variational angles. We then consider hybrid quantum-classical architectures and define a large-width limit for hybrid kernels, showing that a hybrid quantum-classical neural network can be approximately Gaussian. The results presented here identify regimes in which an analytical understanding of the training dynamics of variational quantum circuits, used for quantum machine learning and optimization problems, is possible. These analytical results are supported by numerical simulations of quantum machine learning experiments.
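The lazy-training premise, that a linear perturbation around the initial parameters suffices when the angles move slowly, can be checked on any differentiable model; the toy function below is an assumption standing in for a variational-circuit output, not the paper's setup:

```python
import numpy as np

# Toy differentiable model f(theta, x) standing in for the output of a
# parameterized circuit evaluated on input x.
def f(theta, x):
    return np.sin(theta[0] * x) + theta[1] * x ** 2

def grad_f(theta, x):
    """Gradient of f with respect to the parameters theta."""
    return np.array([x * np.cos(theta[0] * x), x ** 2])

theta0 = np.array([0.7, -0.3])   # "initial angles"
x = 1.3
d = np.array([1.0, 2.0])         # direction of the parameter update

for eps in (1e-1, 1e-2, 1e-3):
    exact = f(theta0 + eps * d, x)
    linear = f(theta0, x) + eps * grad_f(theta0, x) @ d
    # The linearization error shrinks quadratically with the step size,
    # which is why a fixed (tangent) kernel governs small-step dynamics.
    assert abs(exact - linear) < 5 * eps ** 2
```

The associated tangent kernel is the inner product of such gradients at different inputs, K(x, x') = ∇f(θ₀, x)·∇f(θ₀, x'), and stays fixed in the frozen limit.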
The fermionic SWAP network is a qubit routing sequence that can be used to efficiently execute the Quantum Approximate Optimization Algorithm (QAOA). Even with a minimally-connected topology on an n-qubit processor, this routing sequence enables O(n^2) operations to execute in O(n) steps. In this work, we optimize the execution of fermionic SWAP networks for QAOA through two techniques. First, we take advantage of an overcomplete set of native hardware operations [including 150 ns controlled-pi/2 phase gates with up to 99.67(1)% fidelity] in order to decompose the relevant quantum gates and SWAP networks in a manner which minimizes circuit depth and maximizes gate cancellation. Second, we introduce Equivalent Circuit Averaging, which randomizes over degrees of freedom in the quantum circuit compilation to reduce the impact of systematic coherent errors. Our techniques are experimentally validated on the Advanced Quantum Testbed through the execution of QAOA circuits for finding the ground state of two- and four-node Sherrington-Kirkpatrick spin-glass models with various randomly sampled parameters. We observe a ~60% average reduction in error (total variation distance) for QAOA of depth p = 1 on four transmon qubits on a superconducting quantum processor.
Quantum kernel methods are considered a promising avenue for applying quantum computers to machine learning problems. However, recent results overlook the central role hyperparameters play in determining the performance of machine learning methods. In this work we show how optimizing the bandwidth of a quantum kernel can improve the performance of the kernel method from a random guess to being competitive with the best classical methods. Without hyperparameter optimization, kernel values decrease exponentially with qubit count, which explains recent observations that the performance of quantum kernel methods degrades as qubit count grows. We reproduce these negative results and show, through extensive numerical experiments using multiple quantum kernels and classical datasets, that if the kernel bandwidth is optimized, the performance instead improves with growing qubit count. We draw a connection between the bandwidth of classical and quantum kernels and show analogous behavior in both cases.
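The bandwidth effect has a close classical analogue: with a fixed bandwidth, RBF kernel values between random points decay exponentially as the dimension (playing the role of qubit count) grows, while rescaling the bandwidth with dimension keeps them in a useful range. A minimal sketch under those assumptions (Gaussian data and a 1/(2d) rescaling chosen for illustration, not the paper's quantum kernels):

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_kernel(dim, gamma):
    """Average RBF kernel value k(x, y) = exp(-gamma * ||x - y||^2)
    over random Gaussian point pairs in `dim` dimensions."""
    X = rng.normal(size=(200, dim))
    Y = rng.normal(size=(200, dim))
    sq = ((X - Y) ** 2).sum(axis=1)
    return np.exp(-gamma * sq).mean()

dims = (2, 8, 32)
fixed = [mean_kernel(d, gamma=1.0) for d in dims]           # fixed bandwidth
scaled = [mean_kernel(d, gamma=1.0 / (2 * d)) for d in dims]  # rescaled

assert fixed[0] > fixed[1] > fixed[2]   # kernel values collapse with dimension
assert all(k > 0.2 for k in scaled)     # rescaling keeps them non-degenerate
```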
In this thesis, we investigate whether quantum algorithms can be used in the field of machine learning on both near-term and long-term quantum computers. We first recall the fundamentals of machine learning and quantum computing and then describe more precisely how to link them through linear algebra: we introduce quantum algorithms to efficiently solve tasks such as matrix products or distance estimation. These results are then used to develop new quantum algorithms for unsupervised machine learning, such as k-means and spectral clustering. This allows us to define many fundamental procedures, in particular in vector and graph analysis. We also present new quantum algorithms for neural networks, or deep learning. For this, we introduce an algorithm to perform a quantum convolution product on images, as well as a new way to perform fast tomography on quantum states. We prove that these quantum algorithms are faster versions of equivalent classical algorithms, but exhibit random effects due to the quantum nature of the computation. Many simulations have been carried out to study these effects and measure their learning accuracy on real data. Finally, we present a quantum orthogonal neural network circuit adapted to the currently available small and imperfect quantum computers. This allows us to perform real experiments to test our theory.
Binary classifiers for noisy datasets: a comparative study of existing quantum machine learning frameworks and some new approaches
One of the most promising areas of research for obtaining practical quantum advantage is Quantum Machine Learning (QML), which was born from the cross-fertilisation of ideas between Quantum Computing and Classical Machine Learning. In this paper, we apply QML frameworks to improve binary classification models for noisy datasets, which are prevalent in finance. The metric we use for assessing the performance of our quantum classifiers is the area under the receiver operating characteristic curve (ROC/AUC). By combining approaches such as hybrid neural networks, parametric circuits, and data re-uploading, we create QML-inspired architectures and utilise them for the classification of non-convex 2- and 3-dimensional figures. An extensive benchmarking of our new FULL HYBRID classifiers against existing quantum and classical classifier models reveals that our novel models exhibit better learning characteristics under asymmetrical Gaussian noise in the dataset than known quantum classifiers, and perform on par with existing classical classifiers, with a slight improvement over classical results in the high-noise regime.
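ROC/AUC, the metric used above, equals the probability that a randomly chosen positive example receives a higher score than a randomly chosen negative one, with ties counting half. A minimal pure-Python sketch of that ranking definition (not the authors' code):

```python
def roc_auc(labels, scores):
    """Area under the ROC curve via its ranking interpretation:
    P(score of random positive > score of random negative), ties = 0.5."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

assert roc_auc([1, 1, 0, 0], [0.9, 0.8, 0.3, 0.1]) == 1.0  # perfect ranking
assert roc_auc([1, 0, 1, 0], [0.2, 0.8, 0.4, 0.6]) == 0.0  # fully inverted
assert roc_auc([1, 0], [0.5, 0.5]) == 0.5                  # tie = chance level
```

This O(|pos|·|neg|) form is fine for illustration; production code would sort scores once instead.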
Quantum circuit Born machines (QCBMs) and training via variational quantum algorithms (VQAs) are key applications for near-term quantum hardware. QCBM ansätze designs are unique in that they do not require prior knowledge of a physical Hamiltonian. Many ansätze are built from fixed designs. In this work, we train and compare the performance of QCBM models built using two commonly employed parameterizations and two commonly employed entangling layer designs. In addition to comparing the overall performance of these models, we look at features and characteristics of the loss landscape — connectivity of minima in particular — to help understand the advantages and disadvantages of each design choice. We show that the rotational gate choices can improve loss landscape connectivity.
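A QCBM generates bitstrings x with probability |⟨x|ψ(θ)⟩|², the Born rule. A minimal sketch with a hand-written 2-qubit state standing in for the output of a trained parameterized circuit (the amplitudes are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(42)

# State over the basis |00>, |01>, |10>, |11>; would come from a trained
# parameterized circuit in an actual QCBM.
psi = np.array([0.6, 0.0, 0.0, 0.8])
probs = np.abs(psi) ** 2            # Born-rule output distribution
assert np.isclose(probs.sum(), 1.0)

# "Measuring" the circuit many times samples bitstrings from probs.
samples = rng.choice(4, size=10_000, p=probs)
freq11 = (samples == 3).mean()
# Empirical frequency of |11> tracks |psi_11|^2 = 0.64 up to shot noise.
assert abs(freq11 - 0.64) < 0.02
```

Training a QCBM then amounts to adjusting the circuit parameters so that this sampled distribution matches a target distribution, with the loss landscape properties studied above.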
Quantum computing brings a promise of new approaches into computational quantum chemistry. While universal, fault-tolerant quantum computers are still not available, we want to utilize today’s noisy quantum processors. One of their flagship applications is the variational quantum eigensolver (VQE) — an algorithm to calculate the minimum energy of a physical Hamiltonian. In this study, we investigate how various types of errors affect the VQE, and how to efficiently use the available resources to produce precise computational results. We utilize a simulator of a noisy quantum device, an exact statevector simulator, as well as physical quantum hardware to study the VQE algorithm for molecular hydrogen. We find that the optimal way of running the hybrid classical-quantum optimization is to (i) allow some noise in intermediate energy evaluations, using fewer shots per step and fewer optimization iterations, but require high final readout precision, (ii) emphasize efficient problem encoding and ansatz parametrization, and (iii) run all experiments within a short time-frame, avoiding parameter drift with time. Nevertheless, current publicly available quantum resources are still very noisy and scarce/expensive, and even when using them efficiently it is quite difficult to obtain trustworthy calculations of molecular energies.
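Point (i) above, tolerating shot noise in intermediate energy evaluations, rests on the fact that the standard error of a sampled expectation value scales as 1/√shots. A minimal single-qubit sketch (the outcome probability `p0` is an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(7)
p0 = 0.9                       # assumed probability of measuring |0>
exact = p0 - (1 - p0)          # <Z> = p(+1) - p(-1) = 0.8

def estimate_Z(shots):
    """Estimate <Z> from a finite number of +/-1 measurement outcomes."""
    outcomes = rng.choice([1, -1], size=shots, p=[p0, 1 - p0])
    return outcomes.mean()

# Spread of the estimator at low vs high shot counts.
coarse_err = np.std([estimate_Z(100) for _ in range(200)])
fine_err = np.std([estimate_Z(10_000) for _ in range(200)])

# 100x more shots => roughly 10x smaller statistical error, so cheap noisy
# estimates suffice mid-optimization while the final readout uses many shots.
assert fine_err < coarse_err / 5
```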
An Application of Quantum Machine Learning on Quantum Correlated Systems: Quantum Convolutional Neural Network as a Classifier for Many-Body Wavefunctions from the Quantum Variational Eigensolver
Machine learning has been applied on a wide variety of models, from classical statistical mechanics to quantum strongly correlated systems for the identification of phase transitions. The recently proposed quantum convolutional neural network (QCNN) provides a new framework for using quantum circuits instead of classical neural networks as the backbone of classification methods. We present here the results from training the QCNN by the wavefunctions of the variational quantum eigensolver for the one-dimensional transverse field Ising model (TFIM). We demonstrate that the QCNN identifies wavefunctions which correspond to the paramagnetic phase and the ferromagnetic phase of the TFIM with good accuracy. The QCNN can be trained to predict the corresponding phase of wavefunctions around the putative quantum critical point, even though it is trained by wavefunctions far away from it. This provides a basis for exploiting the QCNN to identify the quantum critical point.
Neural network quantum states provide a novel representation of the many-body states of interacting quantum systems and open up a promising route to solve frustrated quantum spin models that evade other numerical approaches. Yet its capacity to describe complex magnetic orders with large unit cells has not been demonstrated, and its performance in a rugged energy landscape has been questioned. Here we apply restricted Boltzmann machines and stochastic gradient descent to seek the ground states of a compass spin model on the honeycomb lattice, which unifies the Kitaev model, Ising model and the quantum 120° model with a single tuning parameter. We report calculation results on the variational energy, order parameters and correlation functions. The phase diagram obtained is in good agreement with the predictions of tensor network ansatz, demonstrating the capacity of restricted Boltzmann machines in learning the ground states of frustrated quantum spin Hamiltonians. The limitations of the calculation are discussed. A few strategies are outlined to address some of the challenges in machine learning frustrated quantum magnets.
We propose a systematic method based on reinforcement learning (RL) techniques to find the optimal path that minimizes the total entropy production between two equilibrium states of open systems at the same temperature in a given fixed time period. Benefiting from the generalization power of deep RL techniques, our method provides a powerful tool for addressing this problem in quantum systems, even with two-dimensional continuous controllable parameters. We successfully apply our method to classical and quantum two-level systems.