- This Oxford-Based Startup is Building Machine Learning-Based Software for Quantum Control
- Entanglement Unlocks Scaling for Quantum Machine Learning
- IBM invests in Quantinuum and opens up fresh ML tech horizons
- Quantum Artificial Intelligence | My PhD at MIT
- QHack 2022: Josh Izaac – Differentiable quantum transforms
- L7 Measurements in other bases: Introduction to quantum computing course 2022
- QHack 2022: Amira Abbas – Model capacity in machine learning
- David Hogg – Machine learning, but structured like physical law (February 11, 2022)
- QHack 2022: Jarrod McClean – Quantum advantage in learning from experiments
- QHack 2022: Marika Kieferova – Training quantum neural networks
- QHack 2022: Zoe Holmes – Expressibility & trainability
Mar 04 2022 quant-ph arXiv:2203.01340v1
Machine learning is frequently listed among the most promising applications for quantum computing. This is in fact a curious choice: today's machine learning algorithms are notoriously powerful in practice, but remain theoretically difficult to study. Quantum computing, in contrast, does not offer practical benchmarks on realistic scales, and theory is the main tool we have to judge whether it could become relevant for a problem. In this perspective we explain why it is so difficult to say anything about the practical power of quantum computers for machine learning with the tools we currently use. We argue that these challenges call for a critical debate on whether quantum advantage and the narrative of "beating" classical machine learning should continue to dominate the literature the way it does, and we provide a few examples of alternative research questions.
Machine learning systems are becoming ubiquitous in increasingly complex areas, including cutting-edge scientific research. The opposite is also true: the interest in better understanding the inner workings of machine learning systems motivates their analysis through the lens of different scientific disciplines. Physics is particularly successful here, thanks to its ability to describe complex dynamical systems. While physics-based explanations of phenomena in machine learning are increasingly common, examples of directly applying physics-inspired notions to improve machine learning systems are scarcer. Here we provide one such application to the problem of developing algorithms that preserve the privacy of the manipulated data, which is especially important in tasks such as the processing of medical records. We derive well-defined conditions that guarantee robustness to specific types of privacy leaks, and rigorously prove that these conditions are satisfied by tensor-network architectures. These architectures are inspired by the efficient representation of quantum many-body systems, and have been shown to compete with and even surpass traditional machine learning architectures in certain cases. Given the growing expertise in training tensor-network architectures, these results imply that one need not choose between predictive accuracy and the privacy of the information processed.
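As a rough illustration of the tensor-network representations the abstract refers to, the sketch below decomposes a state vector into a matrix product state (MPS) via sequential truncated SVDs. It shows only the compression idea; the function names are mine, and this is not the paper's privacy construction.

```python
import numpy as np

def mps_from_vector(psi, chi_max=4):
    """Decompose an n-qubit state vector into an MPS by sequential
    truncated SVDs, capping the bond dimension at chi_max."""
    n = int(np.log2(len(psi)))
    tensors, rest, chi = [], np.asarray(psi).reshape(1, -1), 1
    for _ in range(n - 1):
        rest = rest.reshape(chi * 2, -1)
        u, s, vh = np.linalg.svd(rest, full_matrices=False)
        keep = min(chi_max, len(s))
        tensors.append(u[:, :keep].reshape(chi, 2, keep))
        rest = s[:keep, None] * vh[:keep]   # absorb weights rightward
        chi = keep
    tensors.append(rest.reshape(chi, 2, 1))
    return tensors

def mps_to_vector(tensors):
    """Contract the MPS back into a dense state vector."""
    out = tensors[0]
    for t in tensors[1:]:
        out = np.tensordot(out, t, axes=([out.ndim - 1], [0]))
    return out.reshape(-1)

# Round-trip on a random 3-qubit state (chi_max=4 is exact here,
# since a 3-qubit state needs bond dimension at most 2).
rng = np.random.default_rng(1)
psi = rng.normal(size=8)
psi /= np.linalg.norm(psi)
mps = mps_from_vector(psi)
print(np.allclose(mps_to_vector(mps), psi))  # True
```

For larger systems, lowering `chi_max` trades reconstruction accuracy for an exponentially smaller parameter count, which is what makes these architectures trainable.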
Unitary transformations describe the time evolution of quantum states, and learning a unitary transformation efficiently is a fundamental problem in quantum machine learning. The most natural and widely used strategy is to train a quantum machine learning model on a quantum dataset. Although more training data generally yields better models, using too much data reduces the efficiency of training. In this work, we determine the minimum size of a quantum dataset sufficient for learning a unitary transformation exactly, which reveals the power and limitations of quantum data. First, we prove that the minimum size of a dataset of pure states is 2^n for learning an n-qubit unitary transformation. To fully explore the capability of quantum data, we introduce a quantum dataset consisting of n+1 mixed states that is sufficient for exact training. The main idea is to simplify the structure using decoupling, which leads to an exponential improvement in size over datasets of pure states. Furthermore, we show that the size of a quantum dataset of mixed states can be reduced to a constant, yielding an optimal quantum dataset for learning a unitary. We showcase applications of our results in oracle compiling and Hamiltonian simulation. Notably, to accurately simulate a 3-qubit one-dimensional nearest-neighbor Heisenberg model, our circuit uses only 48 elementary quantum gates, significantly fewer than the 4320 gates in the circuit constructed from the Trotter-Suzuki product formula.
Quantum computing has shown the potential to substantially speed up machine learning applications, in particular supervised and unsupervised learning. Reinforcement learning, on the other hand, has become essential for solving many decision-making problems, and policy iteration methods remain the foundation of such approaches. In this paper, we provide a general framework for performing quantum reinforcement learning via policy iteration. We validate our framework by designing and analyzing: quantum policy evaluation methods for infinite-horizon discounted problems, which build quantum states that approximately encode the value function of a policy π; and quantum policy improvement methods that post-process measurement outcomes on these quantum states. Finally, we study the theoretical and experimental performance of our quantum algorithms on two environments from OpenAI's Gym.
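For reference, the classical policy-iteration loop that a quantum scheme like this would replace subroutine-by-subroutine can be sketched in a few lines on a tabular MDP. All names below are illustrative, not taken from the paper:

```python
import numpy as np

def policy_iteration(P, R, gamma=0.9):
    """Tabular policy iteration. P[a, s, s'] are transition
    probabilities and R[s, a] are immediate rewards. A quantum
    version swaps the evaluation and improvement steps for quantum
    subroutines; the outer loop is the same."""
    n_actions, n_states, _ = P.shape
    policy = np.zeros(n_states, dtype=int)
    while True:
        # Policy evaluation: solve (I - gamma * P_pi) v = r_pi exactly.
        P_pi = P[policy, np.arange(n_states)]
        r_pi = R[np.arange(n_states), policy]
        v = np.linalg.solve(np.eye(n_states) - gamma * P_pi, r_pi)
        # Policy improvement: act greedily w.r.t. the evaluated values.
        q = R.T + gamma * (P @ v)        # q[a, s]
        new_policy = q.argmax(axis=0)
        if np.array_equal(new_policy, policy):
            return policy, v
        policy = new_policy

# Two-state, two-action toy MDP.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],   # transitions under action 0
              [[0.1, 0.9], [0.6, 0.4]]])  # transitions under action 1
R = np.array([[1.0, 0.0],                 # rewards R[s, a], state 0
              [0.0, 2.0]])                # rewards R[s, a], state 1
policy, values = policy_iteration(P, R)
```

The loop is guaranteed to terminate on a finite MDP because each improvement step is monotone in the value function.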
Mar 01 2022 quant-ph arXiv:2202.13964v1
Several applications of machine learning in quantum physics involve measuring the quantum system and then training the machine learning model on the measurement outcomes. However, recently developed quantum machine learning models, such as variational quantum circuits (VQCs), can operate directly on the state of the quantum system (quantum data). Here, we propose to use a qubit as a probe to estimate the degree of non-Markovianity of the environment. Using VQCs, we find an optimal sequence of interactions of the qubit with the environment that estimates, with high precision, the degree of non-Markovianity for the phase damping and amplitude damping channels. This work contributes to practical quantum applications of VQCs and delivers a feasible experimental procedure to estimate the degree of non-Markovianity.
We use the Lipkin-Meshkov-Glick (LMG) model and the valence-space nuclear shell model to examine the likely performance of variational quantum eigensolvers in nuclear-structure theory. The LMG model exhibits both a phase transition and spontaneous symmetry breaking at the mean-field level in one of the phases, features that characterize collective dynamics in medium-mass and heavy nuclei. We show that with appropriate modifications, the ADAPT-VQE algorithm, a particularly flexible and accurate variational approach, is not troubled by these complications. We treat up to 12 particles and show that the number of quantum operations needed to approach the ground-state energy scales linearly with the number of qubits. We find similar scaling when the algorithm is applied to the nuclear shell model with realistic interactions in the sd and pf shells.
Quantum Neural Networks (QNNs) are drawing increasing research interest thanks to their potential to achieve quantum advantage on near-term Noisy Intermediate-Scale Quantum (NISQ) hardware. To achieve scalable QNN learning, the training process needs to be offloaded to real quantum machines instead of exponential-cost classical simulators. One common approach to obtaining QNN gradients is the parameter-shift rule, whose cost scales linearly with the number of qubits. We present On-chip QNN, the first experimental demonstration of practical on-chip QNN training with parameter shift. We find, however, that due to the significant quantum errors (noise) on real machines, gradients obtained from naive parameter shift have low fidelity and thus degrade the training accuracy. To address this, we propose probabilistic gradient pruning, which identifies gradients with potentially large errors and removes them. Specifically, small gradients have larger relative errors than large ones and are therefore pruned with higher probability. We perform extensive experiments on 5 classification tasks with 5 real quantum machines. The results demonstrate that our on-chip training achieves over 90% and 60% accuracy for 2-class and 4-class image classification tasks, respectively. Probabilistic gradient pruning brings up to 7% QNN accuracy improvement over no pruning. Overall, we obtain on-chip training accuracy similar to noise-free simulation but with much better training scalability. The code for parameter-shift on-chip training is available in the TorchQuantum library.
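The parameter-shift rule mentioned above has a simple closed form for gates generated by Pauli operators, sketched below on a one-qubit expectation value together with a toy version of probabilistic pruning. The keep-probability model in `probabilistic_prune` is an illustrative assumption, not the paper's exact rule:

```python
import numpy as np

# One-qubit example: |psi(theta)> = RY(theta)|0>, so the Pauli-Z
# expectation value is f(theta) = cos(theta).
def f(theta):
    return np.cos(theta)

def parameter_shift_grad(f, theta, shift=np.pi / 2):
    """Parameter-shift rule for gates generated by Pauli operators:
    df/dtheta = (f(theta + s) - f(theta - s)) / 2 with s = pi/2.
    Exact, not a finite difference."""
    return (f(theta + shift) - f(theta - shift)) / 2.0

def probabilistic_prune(grads, rng):
    """Toy probabilistic gradient pruning: smaller gradients (which
    carry larger relative error on noisy hardware) are zeroed with
    higher probability. The keep-probability model is assumed for
    illustration only."""
    mags = np.abs(grads)
    keep = rng.random(len(grads)) < mags / (mags.max() + 1e-12)
    return np.where(keep, grads, 0.0)

theta = 0.3
g = parameter_shift_grad(f, theta)   # equals -sin(theta) exactly
rng = np.random.default_rng(0)
pruned = probabilistic_prune(np.array([0.5, 0.01, -0.4]), rng)
```

Note that the shift is a macroscopic pi/2 rather than a small epsilon, which is what makes the rule usable on noisy hardware where finite differences would be swamped by shot noise.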
Quantum metrology exploits quantum resources and strategies to improve the measurement precision of unknown parameters. One crucial issue is how to prepare a quantum entangled state suitable for high-precision measurement beyond the standard quantum limit. Here, we propose a scheme that finds optimal pulse sequences to accelerate the one-axis twisting dynamics for entanglement generation with the aid of deep reinforcement learning (DRL). We consider the pulse train as a sequence of π/2 pulses along one axis or two orthogonal axes, with the operations determined by maximizing the quantum Fisher information using DRL. Within a limited evolution time, the ultimate precision bounds of the prepared entangled states follow Heisenberg-limited scalings. These states can also be used as input states for Ramsey interferometry, and the final measurement precisions still follow Heisenberg-limited scalings. While the pulse train along only one axis is simpler and more efficient, the scheme using pulse sequences along two orthogonal axes shows better robustness against atom-number deviation. Our DRL protocol is efficient and easy to implement in state-of-the-art experiments.
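The quantity being maximized here, the quantum Fisher information, reduces for a pure state to four times the variance of the phase generator. A minimal numerical check (function and variable names are mine) reproduces the Heisenberg-limited value F_Q = N² for a GHZ state:

```python
import numpy as np

def qfi_pure(psi, G):
    """Quantum Fisher information of a pure state for a phase
    generated by a Hermitian operator G: F_Q = 4 (<G^2> - <G>^2)."""
    Gpsi = G @ psi
    mean = np.real(np.vdot(psi, Gpsi))
    mean_sq = np.real(np.vdot(Gpsi, Gpsi))  # <G^2> = ||G psi||^2
    return 4.0 * (mean_sq - mean ** 2)

# N-qubit GHZ state with collective generator Jz = sum_i sigma_z^i / 2
# reaches the Heisenberg limit F_Q = N^2, compared with F_Q = N at
# the standard quantum limit.
N = 3
jz_diag = [(N - 2 * bin(b).count("1")) / 2 for b in range(2 ** N)]
Jz = np.diag(jz_diag)
ghz = np.zeros(2 ** N)
ghz[0] = ghz[-1] = 1 / np.sqrt(2)
print(round(qfi_pure(ghz, Jz), 6))  # 9.0 = N^2
```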
Classical Random Neural Networks (RNNs) have demonstrated effective applications in decision making, signal processing, and image recognition tasks. However, their implementation has been limited to deterministic digital systems that output probability distributions in lieu of the stochastic behavior of random spiking signals. We introduce the novel class of supervised Random Quantum Neural Networks (RQNNs) with a robust training strategy to better exploit the random nature of the spiking RNN. The proposed RQNN employs hybrid classical-quantum algorithms with superposition-state and amplitude-encoding features, inspired by quantum information theory and the brain's spatio-temporal stochastic spiking of neurons for information encoding. We extensively validated the proposed RQNN model using hybrid classical-quantum algorithms via the PennyLane quantum simulator with a limited number of qubits. Experiments on the MNIST, FashionMNIST, and KMNIST datasets demonstrate that the proposed RQNN model achieves an average classification accuracy of 94.9%. Additionally, the experimental findings illustrate the proposed RQNN's effectiveness and resilience in noisy settings, with enhanced image classification accuracy compared to its classical counterparts (RNNs), classical Spiking Neural Networks (SNNs), and a classical convolutional neural network (AlexNet). Furthermore, the RQNN can deal with noise, which is useful for various applications, including computer vision on NISQ devices. The PyTorch code (https://github.com/darthsimpus/RQN) is made available on GitHub to reproduce the results reported in this manuscript.
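Amplitude encoding, one of the features the RQNN relies on, simply stores a padded, normalized feature vector in the 2^n amplitudes of an n-qubit state. A minimal sketch, with the helper name my own:

```python
import numpy as np

def amplitude_encode(x):
    """Encode a real feature vector into the amplitudes of the
    smallest n-qubit state that fits it: zero-pad to a power of two,
    then L2-normalize so the amplitudes form a valid quantum state."""
    x = np.asarray(x, dtype=float)
    n_qubits = max(1, int(np.ceil(np.log2(len(x)))))
    padded = np.zeros(2 ** n_qubits)
    padded[: len(x)] = x
    norm = np.linalg.norm(padded)
    if norm == 0.0:
        raise ValueError("cannot encode the zero vector")
    return padded / norm

state = amplitude_encode([3.0, 4.0, 0.0])  # padded to length 4 (2 qubits)
# amplitudes: 0.6, 0.8, 0, 0
```

The appeal is density: d features need only ceil(log2 d) qubits, at the cost of a state-preparation circuit whose depth grows with d.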
Mar 04 2022 quant-ph arXiv:2203.01607v1
In this paper, we investigate how to reduce the number of measurement configurations needed for sufficiently precise entanglement quantification. Instead of analytical formulae, we employ artificial neural networks to predict the amount of entanglement in a quantum state from the results of collective measurements (simultaneous measurements on multiple instances of the investigated state). This approach allows us to explore the precision of entanglement quantification as a function of the measurement configuration. For the purpose of our research, we consider general two-qubit states and their negativity as the entanglement quantifier. We outline the benefits of this approach for future quantum communication networks.
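Negativity, the quantifier used here, is computable directly from the partial transpose: N(ρ) = (‖ρ^{T_B}‖₁ − 1)/2, i.e. the absolute sum of the negative eigenvalues of ρ^{T_B}. A small numerical sketch (function names illustrative):

```python
import numpy as np

def partial_transpose(rho, dims=(2, 2)):
    """Partial transpose over the second qubit of a bipartite state."""
    dA, dB = dims
    r = rho.reshape(dA, dB, dA, dB)
    return r.transpose(0, 3, 2, 1).reshape(dA * dB, dA * dB)

def negativity(rho):
    """N(rho) = (||rho^{T_B}||_1 - 1) / 2: the absolute sum of the
    negative eigenvalues of the partially transposed state."""
    eigs = np.linalg.eigvalsh(partial_transpose(rho))
    return float(np.abs(eigs[eigs < 0]).sum())

# The Bell state |Phi+> attains the two-qubit maximum N = 1/2;
# separable states give N = 0.
phi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho_bell = np.outer(phi, phi)
print(round(negativity(rho_bell), 6))  # 0.5
```

The network in the paper learns to approximate exactly this quantity from a reduced set of measurement outcomes instead of full tomography.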
Image processing is ubiquitous in daily life because of the need to extract essential information from our 3D world, with applications in widely separated fields such as biomedicine, economics, entertainment, and industry. The nature of visual information, algorithmic complexity, and the representation of 3D scenes in 2D spaces are all popular research topics. In particular, the rapidly increasing volume of image data and increasingly challenging computational tasks have become important driving forces for further improving the efficiency of image processing and analysis. Since the concept of quantum computing was proposed by Feynman in 1982, many results have shown that quantum computing can dramatically improve computational efficiency. Quantum information processing exploits quantum mechanical properties, such as superposition, entanglement, and parallelism, and effectively accelerates many classical problems, such as factoring large numbers, searching an unsorted database, boson sampling, quantum simulation, solving linear systems of equations, and machine learning. These unique quantum properties may also be used to speed up signal and data processing. In quantum image processing, quantum image representation plays a key role, as it substantively determines the kinds of processing tasks possible and how well they can be performed.
Current noisy intermediate-scale quantum devices suffer from various sources of intrinsic quantum noise. Overcoming the effects of noise is a major challenge, for which different error mitigation and error correction techniques have been proposed. In this paper, we conduct a first study of the performance of quantum Generative Adversarial Networks (qGANs) in the presence of different types of quantum noise, focusing on a simplified use case in high-energy physics. In particular, we explore the effects of readout and two-qubit gate errors on the qGAN training process. Simulating a noisy quantum device classically with IBM’s Qiskit framework, we examine the threshold of error rates up to which a reliable training is possible. In addition, we investigate the importance of various hyperparameters for the training process in the presence of different error rates, and we explore the impact of readout error mitigation on the results.
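Readout error mitigation of the kind explored in this paper is often done by inverting a measured calibration matrix. The toy sketch below (the names and the 5% flip rate are illustrative assumptions, not taken from the paper) shows the idea for a single qubit:

```python
import numpy as np

def mitigate_readout(counts, cal_matrix):
    """Matrix-inversion readout mitigation. cal_matrix[i, j] is the
    measured probability of reading outcome i when outcome j was
    prepared; applying its inverse to the observed distribution
    estimates the noise-free one."""
    probs = counts / counts.sum()
    est = np.linalg.solve(cal_matrix, probs)
    est = np.clip(est, 0.0, None)  # inversion may go slightly negative
    return est / est.sum()

# Toy single-qubit calibration: 5% chance of flipping each outcome.
M = np.array([[0.95, 0.05],
              [0.05, 0.95]])
noisy_counts = np.array([920.0, 80.0])  # observed shots for |0>, |1>
mitigated = mitigate_readout(noisy_counts, M)
print(np.isclose(mitigated.sum(), 1.0))  # True
```

For n qubits the calibration matrix is 2^n by 2^n, which is why practical schemes factorize it per qubit or use least-squares variants rather than a full inversion.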
Machine learning algorithms have played an important role in hadronic jet classification problems. The large variety of models applied to Large Hadron Collider data has demonstrated that there is still room for improvement. In this context, quantum machine learning is a new and almost unexplored methodology in which the intrinsic properties of quantum computation could be used to exploit particle correlations to improve jet classification performance. In this paper, we present a new approach to identify whether a jet contains a hadron formed by a b or b̄ quark at the moment of production, based on a Variational Quantum Classifier applied to simulated data of the LHCb experiment. Quantum models are trained and evaluated using LHCb simulation. The jet identification performance is compared with a Deep Neural Network model to assess which method performs better.
Recognition of multifrequency microwave (MW) electric fields is challenging because of the complex interference of multifrequency fields in practical applications. Rydberg atom-based measurements of multifrequency MW electric fields are promising for MW radar and MW communications. However, Rydberg atoms are sensitive not only to the MW signal but also to noise from atomic collisions and the environment, meaning that solving the governing Lindblad master equation of light-atom interactions is complicated by the inclusion of noise and high-order terms. Here, we address these problems by combining Rydberg atoms with a deep learning model, demonstrating that this model exploits the sensitivity of the Rydberg atoms while reducing the impact of noise without solving the master equation. As a proof-of-principle demonstration, the deep-learning-enhanced Rydberg receiver allows direct decoding of a frequency-division multiplexed (FDM) signal. This type of sensing technology is expected to benefit Rydberg-based MW field sensing and communication.