- According to Recent Research, Quantum Machine Learning Could Benefit From a "Spooky Action" That Could Allow Exponential Scaling Through Mysterious Quantum Connections
- Xanadu Collaborates with NVIDIA for Quantum Computing R&D
- How Can Quantum Computing Change the World?
- QC Ware Webinar Q1 2022 | Introducing Three New Quantum Machine Learning Algorithms
- Tech Talk || Quantum Machine Learning || ROBOWEEK 2.0
- Matthias Caro: Topics in quantum machine learning
- Robert Huang | March 22, 2022 | Provably efficient machine learning for quantum many-body problems
- ⌨ Quantum Machine Learning – Damian Widera
Sampling from complicated probability distributions is a hard computational problem arising in many fields, including statistical physics, optimization, and machine learning. Quantum computers have recently been used to sample from complicated distributions that are hard to sample from classically, but which seldom arise in applications. Here we introduce a quantum algorithm to sample from distributions that pose a bottleneck in several applications, which we implement on a superconducting quantum processor. The algorithm performs Markov chain Monte Carlo (MCMC), a popular iterative sampling technique, to sample from the Boltzmann distribution of classical Ising models. In each step, the quantum processor explores the model in superposition to propose a random move, which is then accepted or rejected by a classical computer and returned to the quantum processor, ensuring convergence to the desired Boltzmann distribution. We find that this quantum algorithm converges in fewer iterations than common classical MCMC alternatives on relevant problem instances, both in simulations and experiments. It therefore opens a new path for quantum computers to solve useful, not merely difficult, problems in the near term.
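The classical accept/reject step described above is the standard Metropolis rule. A minimal, purely classical sketch for an Ising model, with a plain single-spin-flip proposal standing in where the paper's quantum processor would propose moves in superposition:

```python
import math, random

def ising_energy(spins, J, h):
    """Ising energy: E = -sum_{i<j} J_ij s_i s_j - sum_i h_i s_i."""
    n = len(spins)
    e = -sum(h[i] * spins[i] for i in range(n))
    for i in range(n):
        for j in range(i + 1, n):
            e -= J[i][j] * spins[i] * spins[j]
    return e

def metropolis_step(spins, J, h, beta, rng):
    """One MCMC iteration: propose a move, then accept or reject it so the
    chain converges to the Boltzmann distribution at inverse temperature beta."""
    i = rng.randrange(len(spins))          # simple classical proposal
    proposal = list(spins)
    proposal[i] *= -1
    dE = ising_energy(proposal, J, h) - ising_energy(spins, J, h)
    # Accept downhill moves always, uphill moves with probability e^{-beta dE}.
    if dE <= 0 or rng.random() < math.exp(-beta * dE):
        return proposal
    return spins

rng = random.Random(0)
n = 4
J = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        J[i][j] = rng.uniform(-1, 1)
h = [rng.uniform(-1, 1) for _ in range(n)]
spins = [rng.choice([-1, 1]) for _ in range(n)]
for _ in range(2000):
    spins = metropolis_step(spins, J, h, beta=1.0, rng=rng)
```

The quantum speedup claimed in the abstract lies entirely in making better proposals than the single-spin flip used here; the accept/reject bookkeeping stays classical in both cases.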
Quantum machine learning models are built from variational quantum circuits (VQCs) in a very natural way. There are already empirical results suggesting that such models provide an advantage in supervised and unsupervised learning tasks. However, less is known about their application to reinforcement learning (RL). In this work, we consider policy gradients using a hardware-efficient ansatz. We prove that the complexity of obtaining an ε-approximation of the gradient using quantum hardware scales only logarithmically with the number of parameters, measured in the number of quantum circuit executions. We test the performance of such models in benchmark environments and verify empirically that these quantum models outperform the typical classical neural networks used in those environments while using a fraction of the number of parameters. Moreover, we propose using the Fisher information spectrum to show that the quantum model is less prone to barren plateaus than its classical counterpart. As a different use case, we consider the application of such variational quantum models to the problem of quantum control and show their feasibility in the quantum-quantum domain.
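Gradients of VQC expectation values are typically estimated on hardware with the parameter-shift rule, which needs only two circuit evaluations per parameter. A toy sketch, assuming a one-qubit circuit whose measured expectation is cos(θ) (the hardware-efficient ansatz in the abstract is of course richer, and a real device would estimate each expectation from repeated shots):

```python
import math

def expectation(theta):
    # Toy stand-in for a circuit evaluation: <Z> after RY(theta)|0> is cos(theta).
    return math.cos(theta)

def parameter_shift_grad(f, theta, shift=math.pi / 2):
    """Parameter-shift rule: for gates generated by a Pauli operator, the exact
    gradient is recovered from two shifted circuit evaluations."""
    return (f(theta + shift) - f(theta - shift)) / 2.0

theta = 0.3
g = parameter_shift_grad(expectation, theta)
# Analytically d/dtheta cos(theta) = -sin(theta), and the shift rule is exact here.
assert abs(g - (-math.sin(theta))) < 1e-12
```

Because each gradient component costs a fixed number of circuit runs, statements about gradient-estimation complexity in such models are naturally counted in circuit executions, as the abstract does.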
Time-independent quantum response calculations are performed using Tensor cores. This is achieved by mapping density matrix perturbation theory onto the computational structure of a deep neural network. The main computational cost of each deep layer is dominated by tensor contractions, i.e., dense matrix-matrix multiplications, in mixed-precision arithmetic, which achieves close to peak performance. Quantum response calculations are demonstrated and analyzed using self-consistent charge density-functional tight-binding theory as well as coupled-perturbed Hartree-Fock theory. For linear response calculations, a novel parameter-free convergence criterion is presented that is well suited to numerically noisy low-precision floating-point operations, and we demonstrate a peak performance of almost 200 Tflops using the Tensor cores of two NVIDIA A100 GPUs.
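The mixed-precision pattern described, low-precision inputs with higher-precision accumulation, can be emulated in a few lines of stdlib Python; this is purely illustrative of the numerical scheme (tensor cores do this in hardware, and the rounding helper below is an assumption of this sketch, not part of the paper):

```python
import struct

def to_half(x):
    """Round a float to IEEE half precision, emulating low-precision
    tensor-core inputs, by packing and unpacking with format code 'e'."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

def matmul_mixed(A, B):
    """Dense matrix product with half-precision inputs and full-precision
    accumulation: the contraction pattern tensor cores implement in hardware."""
    n, k, m = len(A), len(B), len(B[0])
    Ah = [[to_half(x) for x in row] for row in A]   # fp16 storage
    Bh = [[to_half(x) for x in row] for row in B]
    # The inner sum accumulates in Python's double precision.
    return [[sum(Ah[i][t] * Bh[t][j] for t in range(k)) for j in range(m)]
            for i in range(n)]
```

Small integers are exactly representable in half precision, so `matmul_mixed([[1, 2], [3, 4]], [[5, 6], [7, 8]])` reproduces the exact product; for general inputs, the input rounding is exactly the kind of numerical noise the paper's parameter-free convergence criterion is designed to tolerate.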
There has been significant recent interest in quantum neural networks (QNNs), along with their applications in diverse domains. Current solutions for QNNs pose significant challenges concerning their scalability, ensuring that the postulates of quantum mechanics are satisfied and that the networks are physically realizable. The exponential state space of QNNs poses challenges for the scalability of training procedures. The no-cloning principle prohibits making multiple copies of training samples, and the measurement postulates lead to non-deterministic loss functions. Consequently, the physical realizability and efficiency of existing approaches that rely on repeated measurement of several copies of each sample for training QNNs are unclear. This paper presents a new model for QNNs that relies on band-limited Fourier expansions of transfer functions of quantum perceptrons (QPs) to design scalable training procedures. This training procedure is augmented with a randomized quantum stochastic gradient descent technique that eliminates the need for sample replication. We show that this training procedure converges to the true minima in expectation, even in the presence of non-determinism due to quantum measurement. Our solution has a number of important benefits: (i) using QPs with a concentrated Fourier power spectrum, we show that the training procedure for QNNs can be made scalable; (ii) it eliminates the need for resampling, thus staying consistent with the no-cloning rule; and (iii) it enhances data efficiency for the overall training process, since each data sample is processed only once per epoch. We present a detailed theoretical foundation for the scalability, accuracy, and data efficiency of our models and methods. We also validate the utility of our approach through a series of numerical experiments.
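As a purely classical illustration of the two ingredients named above, a band-limited Fourier expansion trained by stochastic gradient descent with each sample used once, here is a sketch; the band limit K=3, the target, and all names are assumptions of this example, not the paper's construction:

```python
import math, random

K = 3  # band limit: keep only the first K Fourier modes

def model(x, a, b):
    """Band-limited transfer function: truncated Fourier expansion."""
    return a[0] + sum(a[k] * math.cos(k * x) + b[k] * math.sin(k * x)
                      for k in range(1, K + 1))

def target(x):
    return math.sin(2 * x)  # lies inside the band, so it is exactly learnable

rng = random.Random(1)
a = [0.0] * (K + 1)
b = [0.0] * (K + 1)
lr = 0.05
for step in range(4000):
    x = rng.uniform(-math.pi, math.pi)     # each fresh sample is used once
    err = model(x, a, b) - target(x)
    # SGD on the squared error; the model is linear in (a, b).
    a[0] -= lr * err
    for k in range(1, K + 1):
        a[k] -= lr * err * math.cos(k * x)
        b[k] -= lr * err * math.sin(k * x)
```

Because the target sits inside the band, the coefficient b[2] converges toward 1 and all others toward 0; a concentrated Fourier spectrum is what keeps the parameter count, and hence the training cost, small.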
In recent years, quantum computing (QC) has been receiving considerable attention from industry and academia. In particular, among various QC research topics, variational quantum circuits (VQCs) enable quantum deep reinforcement learning (QRL). Many studies have shown that QRL is superior to classical reinforcement learning (RL) methods under constraints on the number of training parameters. This paper extends QRL to quantum multi-agent RL (QMARL). However, the extension of QRL to QMARL is not straightforward, due to the challenges of noisy intermediate-scale quantum (NISQ) devices and the non-stationarity found in classical multi-agent RL (MARL). Therefore, this paper proposes a centralized training and decentralized execution (CTDE) QMARL framework, designing novel VQCs for the framework to cope with these issues. To corroborate the QMARL framework, this paper demonstrates it in a single-hop environment where edge agents offload packets to clouds. The extensive demonstration shows that the proposed QMARL framework achieves a 57.7% higher total reward than classical frameworks.
To address quantum artificial neural networks as quantum dynamical computing systems, a formalization of quantum artificial neural networks as dynamical systems is developed, expanding the concept of a unitary map to the neural computation setting and introducing a quantum computing field theory on the network. The formalism is illustrated in a simulation of a quantum recurrent neural network, and the resulting field dynamics is investigated. The simulation shows emergent neural waves with excitation and relaxation cycles at the level of the quantum neural activity field, as well as edge-of-chaos signatures, with the local neurons operating as far-from-equilibrium open quantum systems and exhibiting entropy fluctuations with complex dynamics, including complex quasiperiodic patterns and power-law signatures. The implications for quantum computer science, quantum complexity research, quantum technologies, and neuroscience are also addressed.
High-fidelity quantum dynamics emulators can be used to predict the time evolution of complex physical systems. Here, we introduce an efficient training framework for constructing machine learning-based emulators. Our approach is based on the idea of knowledge distillation and uses elements of curriculum learning. It works by constructing a set of simple, but rich-in-physics training examples (a curriculum). These examples are used by the emulator to learn the general rules describing the time evolution of a quantum system (knowledge distillation). The goal is not only to obtain high-quality predictions, but also to examine the process by which the emulator learns the physics of the underlying problem. This allows us to discover new facts about the physical system, detect symmetries, and measure the relative importance of the contributing physical processes. We illustrate this approach by training an artificial neural network to predict the time evolution of quantum wave packets propagating through a potential landscape. We focus on the question of how the emulator learns the rules of quantum dynamics from the curriculum of simple training examples and to what extent it can generalize the acquired knowledge to solve more challenging cases.
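The curriculum idea, staging training examples from simple to hard, can be sketched generically; `model_step` and `difficulty` below are hypothetical placeholders for the emulator's training step and an example-difficulty score, not the paper's interfaces:

```python
import random

def make_curriculum(examples, difficulty):
    """Order training examples from simple to hard (curriculum learning)."""
    return sorted(examples, key=difficulty)

def train(model_step, examples, difficulty, epochs=3):
    """Generic curriculum loop: each epoch widens the pool of examples,
    starting from the simplest ones, so early training sees only easy physics."""
    curriculum = make_curriculum(examples, difficulty)
    n = len(curriculum)
    for epoch in range(1, epochs + 1):
        pool = curriculum[: max(1, n * epoch // epochs)]
        random.shuffle(pool)               # shuffle within the current stage
        for ex in pool:
            model_step(ex)
```

Inspecting which stage of such a curriculum first makes a prediction accurate is one concrete way to probe, as the abstract proposes, which physical processes the emulator has actually learned.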
Quantum Dropout for Efficient Quantum Approximate Optimization Algorithm on Combinatorial Optimization Problems
Mar 22 2022 quant-ph arXiv:2203.10101v1
A combinatorial optimization problem becomes very difficult when the energy landscape is rugged and the global minimum lies in a narrow region of the configuration space. When using the quantum approximate optimization algorithm (QAOA) to tackle these hard cases, we find that the difficulty mainly originates from the QAOA quantum circuit rather than from the cost function. To alleviate the issue, we selectively drop out the clauses defining the quantum circuit while keeping the cost function intact. Due to the combinatorial nature of the optimization problems, dropping clauses in the circuit does not affect the solution. Our numerical results confirm that quantum dropout improves QAOA's performance across various types of implementation.
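A minimal sketch of the clause-dropout idea: randomly thin the clause set that defines the circuit, while every candidate solution is still scored against the full cost function. MaxCut-style edge clauses are used here as an illustrative assumption:

```python
import random

def dropout_clauses(clauses, keep_prob, rng):
    """Select the clauses that define the QAOA phase-separation circuit.
    The classically evaluated cost function still uses ALL clauses."""
    kept = [c for c in clauses if rng.random() < keep_prob]
    return kept if kept else [clauses[0]]   # never drop every clause

def full_cost(bits, clauses):
    """MaxCut-style cost over the complete clause set (kept intact)."""
    return sum(1 for (i, j) in clauses if bits[i] != bits[j])

rng = random.Random(7)
clauses = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
circuit_clauses = dropout_clauses(clauses, keep_prob=0.6, rng=rng)
bits = [0, 1, 0, 1]
cost = full_cost(bits, clauses)   # the optimizer still scores with all clauses
```

Because the cost function is untouched, the optimum is unchanged; only the circuit that steers the search is simplified, which is the mechanism the abstract credits for the performance gain.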
We study the problem of calibrating a quantum receiver for optical coherent states when transmitted on a quantum optical channel with variable transmissivity, a common model for long-distance optical-fiber and free/deep-space optical communication. We optimize the error probability of legacy adaptive receivers, such as Kennedy’s and Dolinar’s, on average with respect to the channel transmissivity distribution. We then compare our results with the ultimate error probability attainable by a general quantum device, computing the Helstrom bound for mixtures of coherent-state hypotheses, for the first time to our knowledge, and with homodyne measurements. With these tools, we first analyze the simplest case of two different transmissivity values; we find that the strategies adopted by adaptive receivers exhibit strikingly new features as the difference between the two transmissivities increases. Finally, we employ a recently introduced library of shallow reinforcement learning methods, demonstrating that an intelligent agent can learn the optimal receiver setup from scratch by training on repeated communication episodes on the channel with variable transmissivity and receiving rewards if the coherent-state message is correctly identified.
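For a fixed transmissivity and two equiprobable pure coherent states, the Helstrom bound has a closed form. The sketch below evaluates it for binary phase-shift keying through a pure-loss channel; this fixed-transmissivity, two-pure-state case is a simpler textbook baseline than the mixtures over variable transmissivity computed in the abstract:

```python
import math

def helstrom_two_pure(overlap_sq):
    """Helstrom bound for discriminating two equiprobable pure states with
    squared overlap |<psi0|psi1>|^2: P = (1 - sqrt(1 - overlap_sq)) / 2."""
    return 0.5 * (1.0 - math.sqrt(1.0 - overlap_sq))

def coherent_overlap_sq(alpha, beta):
    """|<alpha|beta>|^2 = exp(-|alpha - beta|^2) for coherent states."""
    return math.exp(-abs(alpha - beta) ** 2)

def bpsk_error(alpha, eta):
    """Minimum error probability for the states |+alpha>, |-alpha> after a
    pure-loss channel of transmissivity eta (amplitude scales by sqrt(eta))."""
    a = math.sqrt(eta) * alpha
    return helstrom_two_pure(coherent_overlap_sq(a, -a))
```

Averaging such bounds over a transmissivity distribution turns the hypotheses into mixed states, which is why the mixture Helstrom computation reported above requires more machinery than this closed form.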