- JPMorgan’s guide to quantum machine learning in finance
- What’s the Excitement About Quantum Machine Learning?
- How Quantum Computing Will Transform AI
- Next-Generation Batteries Will Be Brought to You by AI
- Qiskit Global Summer School 2021
- SymCorrel2021 | Quantum Machine-Learning for Electronic Structure Calculations (Sabre Kais)
- QCTalks3/Talk2: Quantum Machine Learning in High Energy Physics
- Quantum Machine Learning – Why It's Popular!
- Learning quantum machines AWS
- Qiskit Optimization & Machine Learning Demo Session with Atsushi Matsuo & Anton Dekusar
- Machine Learning to Deep Learning- NO FREE LUNCH
- Exponentially Many Local Minima in Quantum Neural Networks
- Quantum Semi-Supervised Learning with Quantum Supremacy
- Supervised Learning Enhanced Quantum Circuit Transformation
- Analyzing Rydberg excitation Dynamics in an atomic chain via discrete truncated Wigner approximation and artificial neural networks
- A Quantum Generative Adversarial Network for distributions
- Feasible Architecture for Quantum Fully Convolutional Networks
- Development and Training of Quantum Neural Networks, Based on the Principles of Grover’s Algorithm
Quantum Neural Networks (QNNs), also known as variational quantum circuits, are important quantum applications, both because they hold promise similar to that of classical neural networks and because they are feasible to implement on noisy intermediate-scale quantum (NISQ) machines. However, training QNNs is challenging and much less well understood. We conduct a quantitative investigation of the loss landscapes of QNNs and identify a class of simple yet extremely hard-to-train QNN instances. Specifically, we show that for typical under-parameterized QNNs there exists a dataset that induces a loss function whose number of spurious local minima depends exponentially on the number of parameters. Moreover, we show the optimality of our construction by providing an almost matching upper bound on this dependence. Whereas local minima in classical neural networks arise from non-linear activations, in quantum neural networks they appear as a result of quantum interference. Finally, we empirically confirm that our constructions are indeed hard instances in practice for typical gradient-based optimizers, demonstrating the practical value of our findings.
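As a minimal illustration of the training setup the abstract discusses (not the paper's hard instances), the sketch below trains a toy two-parameter variational circuit by gradient descent, estimating gradients with the standard parameter-shift rule. The circuit, loss, and learning rate are all illustrative choices; this particular landscape happens to have only benign minima, which is exactly what the paper shows cannot be assumed in general.

```python
import numpy as np

def ry(theta):
    """Single-qubit RY rotation gate."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def loss(params, target=np.array([0.0, 1.0])):
    """Infidelity of |psi> = RY(t2) RY(t1) |0> against a target state."""
    state = ry(params[1]) @ ry(params[0]) @ np.array([1.0, 0.0])
    return 1.0 - abs(target @ state) ** 2

# plain gradient descent from a random start
rng = np.random.default_rng(0)
params = rng.uniform(0, 2 * np.pi, size=2)
for _ in range(200):
    grad = np.zeros(2)
    for i in range(2):
        shift = np.zeros(2)
        shift[i] = np.pi / 2
        # parameter-shift rule: exact gradient from two circuit evaluations
        grad[i] = 0.5 * (loss(params + shift) - loss(params - shift))
    params -= 0.4 * grad
print(loss(params))  # converges to (near) zero on this benign landscape
```

For the paper's under-parameterized hard instances, the same optimizer would instead stall in one of exponentially many spurious minima.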
Quantum machine learning promises to solve important problems efficiently. Classical machine learning faces two persistent challenges: the scarcity of labeled data and limited computational power. We propose a novel framework that addresses both: quantum semi-supervised learning. Moreover, we provide a protocol for systematically designing quantum machine learning algorithms with quantum supremacy, which can be extended beyond quantum semi-supervised learning. We showcase two concrete quantum semi-supervised learning algorithms: a quantum self-training algorithm, the propagating nearest-neighbor classifier, and a quantum semi-supervised K-means clustering algorithm. Through time-complexity analysis, we conclude that both indeed exhibit quantum supremacy.
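To make the self-training idea concrete, here is a purely classical sketch of the propagation logic a "propagating nearest-neighbor classifier" implies: repeatedly give the unlabeled point closest to any labeled point its neighbor's label. The data, function name, and `-1` unlabeled marker are illustrative; the paper's quantum version accelerates the distance search, which this sketch does naively.

```python
import numpy as np

def propagating_nn(X, y, n_labeled):
    """Self-training: repeatedly label the unlabeled point nearest to
    any already-labeled point, propagating that neighbour's label."""
    labeled = list(range(n_labeled))
    unlabeled = list(range(n_labeled, len(X)))
    y = y.copy()
    while unlabeled:
        # the (unlabeled, labeled) pair at the smallest distance
        u, l = min(
            ((u, l) for u in unlabeled for l in labeled),
            key=lambda p: np.linalg.norm(X[p[0]] - X[p[1]]),
        )
        y[u] = y[l]
        labeled.append(u)
        unlabeled.remove(u)
    return y

# two well-separated clusters; only the first point of each is labeled
X = np.array([[0.0, 0.0], [5.0, 5.0], [0.1, 0.2], [4.9, 5.1]])
y = np.array([0, 1, -1, -1])       # -1 marks "unlabeled"
print(propagating_nn(X, y, 2))     # [0 1 0 1]
```

The quadratic nearest-pair search is where a quantum distance-estimation subroutine would be slotted in.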
Quantum circuit transformation (QCT) is required when executing a quantum program on a real quantum processing unit (QPU). By inserting auxiliary SWAP gates, a QCT algorithm transforms a quantum circuit into one that satisfies the connectivity constraints imposed by the QPU. Because of the non-negligible gate error and limited qubit coherence time of the QPU, QCT algorithms that minimize gate count or circuit depth, or maximize the fidelity of the output circuit, are urgently needed. Unfortunately, finding optimized transformations often involves deep, exhaustive search, which is extremely time-consuming and impractical for circuits of medium to large depth. In this paper, we propose a framework that uses a policy artificial neural network (ANN), trained by supervised learning on shallow circuits, to help existing QCT algorithms select the most promising SWAP. Attractively, the ANN can be trained off-line in a distributed way, and the trained ANN can be incorporated into many QCT algorithms without adding much overhead in time complexity. Exemplary embeddings of trained ANNs into target QCT algorithms demonstrate that transformation performance improves consistently on QPUs with various connectivity structures and on random quantum circuits.
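To show the decision the ANN policy is replacing, here is a sketch of greedy SWAP selection on a coupling graph: score every candidate SWAP by how much it reduces the total distance between the physical qubits of pending two-qubit gates. The coupling map, cost function, and greedy tie-breaking are illustrative assumptions; the paper's framework substitutes a trained network for this hand-made score.

```python
from collections import deque

def dist(coupling, a, b):
    """BFS distance between physical qubits on the coupling graph."""
    seen, q = {a}, deque([(a, 0)])
    while q:
        node, d = q.popleft()
        if node == b:
            return d
        for nxt in coupling[node]:
            if nxt not in seen:
                seen.add(nxt)
                q.append((nxt, d + 1))
    return float("inf")

def cost(mapping, gates, coupling):
    # total distance between the physical qubits of each pending CNOT
    return sum(dist(coupling, mapping[u], mapping[v]) for u, v in gates)

def best_swap(mapping, gates, coupling):
    """Greedy policy: try every edge as a SWAP, keep the best scorer
    (the paper replaces this hand-made score with a trained ANN)."""
    edges = sorted({(a, b) for a in coupling for b in coupling[a] if a < b})
    def apply_swap(swap):
        m = dict(mapping)
        inv = {phys: log for log, phys in m.items()}
        a, b = swap
        if a in inv:
            m[inv[a]] = b
        if b in inv:
            m[inv[b]] = a
        return m
    return min(edges, key=lambda s: cost(apply_swap(s), gates, coupling))

# 4-qubit line 0-1-2-3; a CNOT between logical q0 (phys 0) and q1 (phys 3)
coupling = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(best_swap({0: 0, 1: 3}, [(0, 1)], coupling))  # (0, 1)
```

An ANN policy would take the same (mapping, pending gates) state as features and output a score per candidate SWAP, avoiding the exhaustive per-edge cost evaluation at lookahead depth.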
We numerically analyze the excitation dynamics of a one-dimensional Rydberg atomic chain, using the discrete truncated Wigner approximation (dTWA) and artificial neural networks (ANNs), for both van der Waals and dipolar interactions. In particular, we look at how the number of excitations grows and evolves in the system for an initial state in which all atoms are in their electronic ground state. Further, we calculate the maximum number of excitations attained at any instant and the average number of excitations. For a small system of ten atoms, we compare the results of dTWA and the ANN with exact numerical solutions of the Schrödinger equation. The collapse and revival dynamics in the number of Rydberg excitations are also characterized in detail. Although we find good agreement at short times, both dTWA and the ANN fail to capture the dynamics quantitatively at longer times. Increasing the number of hidden units significantly improves the accuracy of the ANN, but it then suffers from numerical instabilities, especially for large interaction strengths. Finally, we examine the dynamics of a large system using dTWA.
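The exact benchmark the abstract mentions can be sketched for a very small chain by direct diagonalization. The sketch below uses a simplified nearest-neighbour blockade term in place of the full van der Waals tail, and the Rabi frequency and interaction strength are illustrative values, not the paper's.

```python
import numpy as np

SX = np.array([[0.0, 1.0], [1.0, 0.0]])    # sigma_x
NOP = np.array([[0.0, 0.0], [0.0, 1.0]])   # Rydberg projector n = |r><r|

def site_op(op, i, n):
    """Embed a single-site operator at site i of an n-site chain."""
    out = np.array([[1.0]])
    for j in range(n):
        out = np.kron(out, op if j == i else np.eye(2))
    return out

def rydberg_hamiltonian(n, omega=1.0, v=2.0):
    # H = (omega/2) sum_i sigma_x^i + v sum_i n_i n_{i+1}
    h = sum(0.5 * omega * site_op(SX, i, n) for i in range(n))
    h = h + sum(v * site_op(NOP, i, n) @ site_op(NOP, i + 1, n)
                for i in range(n - 1))
    return h

def mean_excitations(n, t, **kw):
    """<sum_i n_i>(t) for the all-ground initial state, exactly."""
    w, u = np.linalg.eigh(rydberg_hamiltonian(n, **kw))
    psi0 = np.zeros(2 ** n)
    psi0[0] = 1.0                              # |gg...g>
    psi_t = u @ (np.exp(-1j * w * t) * (u.conj().T @ psi0))
    ntot = sum(site_op(NOP, i, n) for i in range(n))
    return float(np.real(psi_t.conj() @ ntot @ psi_t))

print(mean_excitations(4, 0.0))   # 0.0 excitations at t = 0
print(mean_excitations(4, 1.0))   # excitations have grown under the drive
```

Beyond roughly a dozen atoms the 2^n state vector becomes intractable, which is precisely why the paper turns to dTWA and ANN representations for larger systems.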
Generative Adversarial Networks (GANs) are becoming a fundamental tool in machine learning, in particular in the context of improving the stability of deep neural networks. At the same time, recent advances in quantum computing have shown that, despite the absence of a fault-tolerant quantum computer so far, quantum techniques can provide exponential advantages over their classical counterparts. We develop a fully connected Quantum Generative Adversarial Network and show how it can be applied in mathematical finance, with a particular focus on volatility modelling.
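An expectation-level sketch of the adversarial idea (not the paper's network): the "quantum" generator is a single RY(theta) rotation measured in the computational basis, so it emits Bernoulli samples with p = sin^2(theta/2). To keep the sketch short, the discriminator is replaced at every step by its closed-form optimum D(x) = p_real(x) / (p_real(x) + p_gen(x)); the target distribution, learning rate, and step count are all assumptions.

```python
import numpy as np

p_real = 0.7          # target distribution: Bernoulli(0.7)
theta, lr = 0.5, 0.05  # generator angle and learning rate

for _ in range(2000):
    p_g = np.sin(theta / 2) ** 2
    # optimal discriminator in closed form for each outcome x in {0, 1}
    d1 = p_real / (p_real + p_g)
    d0 = (1 - p_real) / ((1 - p_real) + (1 - p_g))
    # non-saturating generator update: ascend E_gen[log D];
    # the circuit derivative of p_g w.r.t. theta is sin(theta) / 2
    theta += lr * (np.log(d1) - np.log(d0)) * np.sin(theta) / 2

print(round(np.sin(theta / 2) ** 2, 3))   # drifts to ~0.7
```

A real QGAN trains the discriminator by gradient steps as well and uses a multi-qubit, multi-layer generator so it can load richer distributions, such as the volatility distributions targeted here.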
Fully convolutional networks are robust in performing semantic segmentation, with many applications from signal processing to computer vision. Building on the fundamental principles of variational quantum algorithms, we propose a feasible, purely quantum architecture that can be operated on noisy intermediate-scale quantum devices. In this work, a parameterized quantum circuit consisting of three layers, convolutional, pooling, and upsampling, is characterized by generative one-qubit and two-qubit gates and driven by a classical optimizer. This architecture provides a way to realize dynamic programming on a one-way quantum computer and to take maximal advantage of quantum computing throughout the calculation. Moreover, our algorithm works on many physical platforms; in particular, the upsampling layer can use either conventional qubits or multiple-level systems. Through numerical simulations, our study demonstrates the successful training of a purely quantum fully convolutional network and discusses its advantages by comparison with the hybrid solution.
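A toy sketch of the layer structure (not the paper's circuit): one parameterized two-qubit "convolution" block, followed by a "pooling" step realized here as a partial trace that discards one qubit. The gate choices and angles are illustrative.

```python
import numpy as np

def ry(t):
    c, s = np.cos(t / 2), np.sin(t / 2)
    return np.array([[c, -s], [s, c]])

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

def conv_layer(state, t1, t2):
    """Parameterized two-qubit block: local RYs then an entangler."""
    return CNOT @ np.kron(ry(t1), ry(t2)) @ state

def pool_layer(state):
    """Pooling: trace out qubit 1, keeping the reduced density
    matrix of qubit 0."""
    rho = np.outer(state, state.conj()).reshape(2, 2, 2, 2)
    return np.trace(rho, axis1=1, axis2=3)

psi = np.kron([1.0, 0.0], [1.0, 0.0])          # |00>
rho0 = pool_layer(conv_layer(psi, 0.3, 0.8))
print(np.trace(rho0).real)                      # 1.0: a valid density matrix
```

An upsampling layer would run the reverse direction, mapping the pooled register back onto a larger one; the classical optimizer then tunes all rotation angles against a segmentation loss.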
This paper highlights the possibility of creating quantum neural networks that are trained by Grover's search algorithm. The purpose of this work is to combine the training process of a neural network, performed on the principles of Grover's algorithm, with the functional structure of that neural network, interpreted as a quantum circuit. As a simple example to showcase the concept, we use a perceptron with one trainable parameter: the weight of a synapse connected to a hidden neuron.
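To illustrate the search primitive involved (a sketch, not the paper's construction): Grover iterations over a set of discretized candidate weights, where the oracle marks the weight that makes the perceptron's loss vanish. The 16-value grid and marked index are assumptions.

```python
import numpy as np

def grover_search(marked, n_items, n_iter=None):
    """Plain Grover iteration on a uniform superposition over n_items
    candidate weights; `marked` indexes the zero-loss weight."""
    if n_iter is None:
        # optimal iteration count ~ (pi/4) sqrt(N)
        n_iter = int(np.floor(np.pi / 4 * np.sqrt(n_items)))
    amp = np.full(n_items, 1.0 / np.sqrt(n_items))
    for _ in range(n_iter):
        amp[marked] *= -1.0            # oracle: phase-flip the good weight
        amp = 2 * amp.mean() - amp     # diffusion: inversion about the mean
    return amp

# search 16 discretized values of the single synapse weight
probs = np.abs(grover_search(marked=5, n_items=16)) ** 2
print(int(np.argmax(probs)), round(probs[5], 3))  # 5 0.961
```

Measuring the weight register then yields the marked weight with high probability, giving the quadratic speedup over trying each candidate weight in turn.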