- Breakthrough proof clears path for quantum AI
- Quantum enhanced convolutional neural networks for NISQ computers
- Quantum computing can transform optimisation, machine learning, and cryptography
- MATRIX AI Consortium Hosting AI and Quantum Symposium Oct. 21-22
- Quantum Neural Network Simplified
- Panel on Quantum Machine Learning and Barren Plateaus | Quantum Colloquium
- Jordan Cotler | October 19, 2021 | Quantum-enhanced Learning using a Quantum Memory
- Barren Plateaus and Quantum Generative Training Using Rényi Divergences | Quantum Colloquium
- CSIP Seminar “Quantum Tensor Networks for Machine Learning” (presented by Jun Qi), Georgia Tech ECE
- Tech n’ Fest | Quantum Machine Learning With Vertexai
- Artificial Intelligence meets Adiabatic Quantum Computing
- Predicting parameters for the Quantum Approximate Optimization Algorithm for MAX-CUT from the infinite-size limit
- Machine Learning for Continuous Quantum Error Correction on Superconducting Qubits
- RoQNN: Noise-Aware Training for Robust Quantum Neural Networks
- Learning quantum dynamics with latent neural ODEs
- Quantum Face Recognition Protocol with Ghost Imaging
- Variational Quantum Eigensolver in Compressed Space for Nearest-Neighbour Quadratic Fermionic Hamiltonians
- Development of Quantum Circuits for Perceptron Neural Network Training, Based on the Principles of Grover’s Algorithm
Predicting parameters for the Quantum Approximate Optimization Algorithm for MAX-CUT from the infinite-size limit
Combinatorial optimization is regarded as a potentially promising application of near- and long-term quantum computers. The best-known heuristic quantum algorithm for combinatorial optimization on gate-based devices, the Quantum Approximate Optimization Algorithm (QAOA), has been the subject of many theoretical and empirical studies. Unfortunately, its application to specific combinatorial optimization problems poses several difficulties: among these, few performance guarantees are known, and the variational nature of the algorithm makes it necessary to classically optimize a number of parameters. In this work, we partially address these issues for a specific class of combinatorial optimization problems: diluted spin models, with MAX-CUT as a notable special case. Specifically, generalizing the analysis of the Sherrington-Kirkpatrick model by Farhi et al., we establish an explicit algorithm to evaluate the performance of QAOA on MAX-CUT applied to random Erdős–Rényi graphs of expected degree d for an arbitrary constant number of layers p, as the problem size tends to infinity. This analysis yields an explicit mapping between QAOA parameters for MAX-CUT on Erdős–Rényi graphs of expected degree d, in the limit d → ∞, and the Sherrington-Kirkpatrick model, and gives good QAOA variational parameters for MAX-CUT applied to Erdős–Rényi graphs. We then partially generalize the latter analysis to graphs with a degree distribution rather than a single degree d, and finally to diluted spin models with D-body interactions (D ≥ 3). We validate our results with numerical experiments suggesting they may have a larger reach than rigorously established; among other things, our algorithms provided good initial, if not nearly optimal, variational parameters for very small problem instances where the infinite-size limit assumption is clearly violated.
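The classical parameter-optimization loop the abstract refers to can be illustrated on a toy scale. The following is a minimal sketch, not the authors' method: a statevector simulation of depth-1 QAOA for MAX-CUT on a triangle graph (the graph, grid ranges, and all parameter values are illustrative choices), with a coarse grid search standing in for the classical optimizer.

```python
import numpy as np

# Triangle graph: 3 nodes, maximum cut = 2 (illustrative stand-in
# for a small Erdős–Rényi instance)
edges = [(0, 1), (1, 2), (0, 2)]
n = 3
dim = 2 ** n

# Cut value of every computational basis state (bit q of s = qubit q)
cuts = np.array([sum(((s >> i) & 1) != ((s >> j) & 1) for i, j in edges)
                 for s in range(dim)], dtype=float)

def apply_rx_all(psi, beta):
    """Apply the QAOA mixer exp(-i beta X) to every qubit."""
    c, s = np.cos(beta), np.sin(beta)
    for q in range(n):
        t = np.moveaxis(psi.reshape([2] * n), q, 0)
        a = t[0].copy()
        t[0] = c * a - 1j * s * t[1]
        t[1] = -1j * s * a + c * t[1]
        psi = np.moveaxis(t, 0, q).reshape(dim)
    return psi

def qaoa_expectation(gamma, beta):
    """Expected cut value of the depth-1 QAOA state |gamma, beta>."""
    psi = np.ones(dim, dtype=complex) / np.sqrt(dim)  # |+>^n
    psi = np.exp(-1j * gamma * cuts) * psi            # cost-phase layer
    psi = apply_rx_all(psi, beta)                     # mixer layer
    return float(np.sum(np.abs(psi) ** 2 * cuts))

# Coarse grid search: the classical outer loop the abstract mentions
best = max(qaoa_expectation(g, b)
           for g in np.linspace(0, np.pi, 31)
           for b in np.linspace(0, np.pi / 2, 31))
```

A uniformly random assignment cuts 1.5 edges of the triangle in expectation; the grid search should land clearly above that, though below the optimum of 2, consistent with the limits of depth-1 QAOA.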
We propose a machine learning algorithm for continuous quantum error correction based on the use of a recurrent neural network to identify bit-flip errors from continuous noisy syndrome measurements. The algorithm is designed to operate on measurement signals deviating from the ideal behavior in which the mean value corresponds to a code syndrome value and the measurement has white noise. We analyze continuous measurements taken from a superconducting architecture using three transmon qubits to identify three significant practical examples of non-ideal behavior, namely auto-correlation at short temporal lags, transient syndrome dynamics after each bit-flip, and drift in the steady-state syndrome values over the course of many experiments. Based on these real-world imperfections, we generate synthetic measurement signals from which to train the recurrent neural network, and then test its proficiency when implementing active error correction, comparing it with a traditional double threshold scheme and a discrete Bayesian classifier. The results show that our machine learning protocol is able to outperform the double threshold protocol across all tests, achieving a final state fidelity comparable to the discrete Bayesian classifier.
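The ingredients here, a synthetic syndrome record with non-white noise and the double-threshold baseline, can be sketched in a few lines. This is a minimal illustration under assumed parameter values (jump times, noise strength, window and threshold settings are all invented for the example), not the paper's actual signal model or training pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def synthetic_syndrome(T=2000, flip_at=1000, snr=1.0, rho=0.3):
    """Synthetic continuous syndrome record: the mean jumps from +1 to -1
    at a bit-flip; AR(1) noise stands in for the short-lag auto-correlation
    seen in real transmon readout. All parameter values are illustrative."""
    mean = np.where(np.arange(T) < flip_at, 1.0, -1.0)
    noise = np.empty(T)
    noise[0] = rng.normal()
    for t in range(1, T):
        noise[t] = rho * noise[t - 1] + np.sqrt(1 - rho ** 2) * rng.normal()
    return mean + noise / snr

def double_threshold(signal, window=50, hi=0.5, lo=-0.5):
    """Double-threshold baseline: boxcar-filter the record and declare a
    flip when the filtered signal, having sat above `hi`, crosses `lo`."""
    filt = np.convolve(signal, np.ones(window) / window, mode="same")
    armed = False
    for t, v in enumerate(filt):
        if v > hi:
            armed = True
        elif armed and v < lo:
            return t  # estimated flip time
    return None

flip_estimate = double_threshold(synthetic_syndrome())
```

An RNN classifier trained on many such synthetic records would replace `double_threshold` at the detection step; the point of the comparison is that it can cope with the auto-correlated, drifting noise that degrades the threshold scheme.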
Quantum Neural Network (QNN) is a promising application towards quantum advantage on near-term quantum hardware. However, due to large quantum noise (errors), the performance of QNN models degrades severely on real quantum devices. For example, the accuracy gap between noise-free simulation and noisy results on IBMQ-Yorktown for MNIST-4 classification is over 60%. Existing noise-mitigation methods are general-purpose, do not leverage the unique characteristics of QNNs, and are only applicable to inference; on the other hand, existing QNN work does not consider the noise effect. To this end, we present RoQNN, a QNN-specific framework that performs noise-aware optimizations in both the training and inference stages to improve robustness. We analytically deduce and experimentally observe that the effect of quantum noise on QNN measurement outcomes is a linear map of the noise-free outcomes, with a scaling and a shift factor. Motivated by this, we propose post-measurement normalization to mitigate the feature distribution differences between noise-free and noisy scenarios. Furthermore, to improve robustness against noise, we propose injecting noise into the training process by inserting quantum error gates into the QNN according to realistic noise models of quantum hardware. Finally, post-measurement quantization is introduced to quantize the measurement outcomes to discrete values, achieving a denoising effect. Extensive experiments on 8 classification tasks using 6 quantum devices demonstrate that RoQNN improves accuracy by up to 43%, and achieves over 94% 2-class, 80% 4-class, and 34% 10-class MNIST classification accuracy measured on real quantum computers. We also open-source our PyTorch library for construction and noise-aware training of QNNs at https://github.com/mit-han-lab/pytorch-quantum .
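The key observation, that noise acts approximately as an affine (scale-and-shift) map on measurement outcomes, makes post-measurement normalization easy to demonstrate numerically. The sketch below is illustrative, not the RoQNN implementation; the batch size, noise coefficients, and jitter level are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Clean per-qubit expectation values for a batch of inputs (illustrative)
clean = rng.normal(0.0, 0.3, size=(256, 4))

# Noise modeled per the paper's observation as an affine map,
# noisy ~ a * clean + b, plus a little residual jitter
a, b = 0.6, 0.15
noisy = a * clean + b + rng.normal(0.0, 0.005, size=clean.shape)

def post_measurement_normalize(x):
    """Standardize each measured qubit across the batch; an exact affine
    scale-and-shift noise map is cancelled entirely by this step."""
    return (x - x.mean(axis=0)) / x.std(axis=0)

z_clean = post_measurement_normalize(clean)
z_noisy = post_measurement_normalize(noisy)
```

After normalization the noisy features nearly coincide with the noise-free ones, which is exactly the feature-distribution alignment the abstract claims.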
The core objective of machine-assisted scientific discovery is to learn physical laws from experimental data without prior knowledge of the systems in question. In the area of quantum physics, making progress towards these goals is significantly more challenging due to the curse of dimensionality as well as the counter-intuitive nature of quantum mechanics. Here, we present the QNODE, a latent neural ODE trained on dynamics from closed and open quantum systems. The QNODE can learn to generate quantum dynamics, and extrapolate outside of its training region, that satisfy the von Neumann and time-local Lindblad master equations for closed and open quantum systems, respectively. Furthermore, the QNODE rediscovers quantum mechanical laws such as Heisenberg's uncertainty principle in an entirely data-driven way, without constraints or guidance. Additionally, we show that trajectories generated from the QNODE that are close in its latent space have similar quantum dynamics while preserving the physics of the training system.
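The closed-system training data such a model ingests can be generated exactly. Below is a minimal sketch (the Hamiltonian coefficients and time grid are illustrative, and the latent neural ODE itself is omitted): a single-qubit von Neumann trajectory and its Bloch-vector observables.

```python
import numpy as np

# Pauli matrices
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

H = 0.5 * X + 0.3 * Z  # illustrative single-qubit Hamiltonian

def evolve(rho0, H, times):
    """Closed-system trajectory rho(t) = U rho0 U† with U = exp(-i H t),
    i.e. the exact solution of the von Neumann equation
    drho/dt = -i [H, rho], built from the eigendecomposition of H."""
    w, V = np.linalg.eigh(H)
    traj = []
    for t in times:
        U = V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T
        traj.append(U @ rho0 @ U.conj().T)
    return traj

rho0 = np.array([[1, 0], [0, 0]], dtype=complex)  # pure state |0><0|
traj = evolve(rho0, H, np.linspace(0.0, 5.0, 50))

# Bloch-vector observables <X>, <Y>, <Z> along the trajectory:
# the kind of time-series data a latent neural ODE would be fitted to
bloch = np.array([[np.trace(r @ P).real for P in (X, Y, Z)] for r in traj])
```

A trained QNODE is then judged on whether its generated trajectories preserve the invariants of this data, unit trace, purity, and the uncertainty relations, which the exact evolution maintains by construction.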
Face recognition is one of the most ubiquitous examples of pattern recognition in machine learning, with numerous applications in security, access control, and law enforcement, among many others. Pattern recognition with classical algorithms requires significant computational resources, especially when dealing with high-resolution images in an extensive database. Quantum algorithms have been shown to improve the efficiency and speed of many computational tasks, and as such, they could also potentially improve the complexity of the face recognition process. Here, we propose a quantum machine learning algorithm for pattern recognition based on quantum principal component analysis (QPCA) and quantum independent component analysis (QICA). A novel quantum algorithm for quantifying dissimilarity between faces, based on computing the trace and determinant of a matrix (image), is also proposed. The overall complexity of our pattern recognition algorithm is O(N log N), where N is the image dimension. As input to these pattern recognition algorithms, we consider experimental images obtained from quantum imaging techniques with correlated photons, e.g. "interaction-free" imaging or "ghost" imaging. Interfacing these imaging techniques with our quantum pattern recognition processor provides input images that possess a better signal-to-noise ratio, lower exposures, and higher resolution, thus speeding up the machine learning process further. Our fully quantum pattern recognition system with quantum algorithms and quantum inputs promises a much-improved image acquisition and identification system with potential applications extending beyond face recognition, e.g., in medical imaging for diagnosing sensitive tissues or in biology for protein identification.
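The trace/determinant idea can be previewed with a purely classical toy: score two (square, grayscale) image matrices by the trace and determinant of the Gram matrix of their pixel-wise difference. This is only a classical stand-in for intuition, not the quantum algorithm, and the image sizes and score definition are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(2)

def dissimilarity(A, B):
    """Toy classical stand-in for a trace/determinant comparison:
    the trace of the Gram matrix of the pixel-wise difference (its
    squared Frobenius norm) plus the magnitude of that matrix's
    determinant. Identical images score exactly 0."""
    D = A - B
    G = D @ D.T
    return float(np.trace(G) + abs(np.linalg.det(G)))

face = rng.random((8, 8))   # stand-in "face" image
other = rng.random((8, 8))  # a different image
```

The quantum version computes such matrix functionals on amplitude-encoded images, which is where the claimed O(N log N) scaling comes from.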
Variational Quantum Eigensolver in Compressed Space for Nearest-Neighbour Quadratic Fermionic Hamiltonians
Variational quantum eigensolvers (VQE) are one of the tasks that can be performed on noisy intermediate-scale quantum (NISQ) computers. While one of the limitations of NISQ platforms is their restricted number of available qubits, the theory of compressible matchgate circuits can be used to circumvent this limitation for certain types of Hamiltonians. We show how VQE algorithms can be used to find the ground state of quadratic fermionic Hamiltonians, providing an expressive ansatz in a logarithmic number of qubits. In particular, for systems of n orbitals encoded into 2-local qubit models with nearest-neighbour interactions, the ground-state energy can be evaluated with O(log n) sets of measurements. This result is independent of the dimensionality in which the n sites are arranged.
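What makes quadratic fermionic Hamiltonians compressible is that they are classically solvable through the n × n single-particle matrix. The sketch below shows that classical reference calculation for the number-conserving case (a simplification: pairing terms, which the general quadratic setting also allows, are omitted); the chain length and hopping amplitude are illustrative.

```python
import numpy as np

def ground_energy(h):
    """Exact ground-state energy of a number-conserving quadratic fermionic
    Hamiltonian H = sum_ij h_ij c_i† c_j: diagonalize the n x n
    single-particle matrix and occupy every negative-energy mode. This is
    the classical benchmark a compressed-space VQE is measured against."""
    eps = np.linalg.eigvalsh(h)
    return float(eps[eps < 0].sum())

# Nearest-neighbour hopping chain, n = 4 orbitals, open boundaries
n, t = 4, 1.0
h = -t * (np.eye(n, k=1) + np.eye(n, k=-1))
E0 = ground_energy(h)
```

For this open 4-site chain the single-particle energies are -2t cos(kπ/5), so the filled negative modes sum to exactly -√5 t, a handy closed-form check. The compressed-space ansatz targets the same energy using only O(log n) qubits.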
Development of Quantum Circuits for Perceptron Neural Network Training, Based on the Principles of Grover’s Algorithm
This paper presents practical research into the possibility of constructing quantum circuits for training neural networks. The demonstrated quantum circuits are based on the principles of Grover's Search Algorithm. The perceptron was chosen as the architecture for the example neural network; the multilayer perceptron is a popular neural network architecture due to its scalability and applicability to a wide range of problems.
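The Grover primitive underlying such circuits is easy to reproduce in a statevector simulation. The following is the textbook single-marked-item version (the register size and marked index are arbitrary examples), not the paper's perceptron-training circuit, which would encode candidate weight configurations as the search space.

```python
import numpy as np

def grover(n_qubits, marked):
    """Textbook Grover search with one marked item: repeat the oracle
    phase flip and the inversion-about-the-mean diffusion step for the
    usual ~(pi/4) sqrt(N) iterations, then return measurement
    probabilities over the computational basis."""
    dim = 2 ** n_qubits
    psi = np.ones(dim) / np.sqrt(dim)          # uniform superposition
    iters = int(np.floor(np.pi / 4 * np.sqrt(dim)))
    for _ in range(iters):
        psi[marked] *= -1                      # oracle: flip marked phase
        psi = 2 * psi.mean() - psi             # diffusion operator
    return np.abs(psi) ** 2

probs = grover(4, marked=6)
```

With 4 qubits (N = 16) three iterations suffice, and the marked state is measured with probability above 96%, the quadratic speedup that motivates using Grover-style amplification to search over perceptron weight assignments.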