- MENTEN AI PARTNERS WITH XANADU TO DEVELOP QUANTUM MACHINE LEARNING FOR PROTEIN-BASED DRUG DISCOVERY
- Machine Learning’s Next Frontier: Quantum Computing
- Deep Dive: Generative Quantum Machine Learning for Finance
- Quantum Machine Learning
- Machine Learning With Quantum Computers | Tutorial | NeurIPS 2021
- Prof. Dr. Florian Ellsaesser about Quantum Machine Learning | Frankfurt School
- Ewin Tang – On quantum linear algebra for machine learning – IPAM at UCLA
Jan 25 2022 quant-ph arXiv:2201.09128v1
Neural Network Quantum States (NQS) represent quantum wavefunctions with small stochastic neural networks. We study the wavefunction access provided by NQS and relate it to results from distribution testing. This first leads to improved distribution-testing algorithms for NQS and also motivates the definition of a variant of the sample-and-query wavefunction access model, amplitude ratio access. We study this model independently of NQS and argue that it is strictly weaker than the usual sample-and-query access model, yet retains many of its simulation capabilities. Secondly, we give an NQS with just three nodes that does not encode a valid wavefunction and cannot be sampled from.
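The amplitude-ratio access mentioned in the abstract can be illustrated with a minimal sketch. This is not the paper's code: the RBM-style NQS below, its sizes, and its parameters are all hypothetical, but it shows why amplitude ratios are computable without the (generally intractable) normalization constant.

```python
import numpy as np

# Minimal RBM-style Neural Network Quantum State (illustrative sketch).
# For spins s in {-1, +1}^n, the unnormalized amplitude is
#   psi(s) = exp(a.s) * prod_j 2*cosh(b_j + (s @ W)_j).
# "Amplitude ratio access" queries psi(s')/psi(s); the normalization
# cancels, so no sum over the exponentially large basis is needed.

rng = np.random.default_rng(0)
n_visible, n_hidden = 4, 3                      # hypothetical sizes
a = rng.normal(scale=0.1, size=n_visible)
b = rng.normal(scale=0.1, size=n_hidden)
W = rng.normal(scale=0.1, size=(n_visible, n_hidden))

def log_psi(s):
    """Log of the unnormalized RBM wavefunction amplitude."""
    s = np.asarray(s, dtype=float)
    return a @ s + np.sum(np.log(2 * np.cosh(b + s @ W)))

def amplitude_ratio(s_new, s_old):
    """psi(s_new)/psi(s_old), computable without normalization."""
    return np.exp(log_psi(s_new) - log_psi(s_old))

s0 = np.array([1, -1, 1, -1])
s1 = s0.copy()
s1[0] = -1                                      # single spin flip
r = amplitude_ratio(s1, s0)
```

Ratios like `r` are exactly what Metropolis-style sampling of an NQS consumes, which is why this weaker access model still supports many simulation tasks.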
We investigate the potential of supervised machine learning to propagate a quantum system in time. While Markovian dynamics can be learned easily given a sufficient amount of data, non-Markovian systems are non-trivial, and their description requires knowledge of past states (memory). Here we analyse this memory by taking a simple 1D Heisenberg model as the many-body Hamiltonian, and construct a non-Markovian description by representing the system through the single-particle reduced density matrix. The number of past states required for this representation to reproduce the time-dependent dynamics is found to grow exponentially with the number of spins and with the density of the system spectrum. Most importantly, we demonstrate that neural networks can work as time propagators at any time in the future and that they can be concatenated in time to form an autoregression. Such neural-network autoregression can be used to generate long-time and arbitrarily dense time trajectories. Finally, we investigate the time resolution needed to represent the system memory. We find two regimes: for fine memory samplings the memory needed remains constant, while longer memories are required for coarse samplings, although the total number of time steps remains constant. The boundary between these two regimes is set by the period corresponding to the highest frequency in the system spectrum, demonstrating that neural networks can overcome the limitation set by the Shannon-Nyquist sampling theorem.
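The autoregressive rollout described above can be sketched generically. The paper trains a neural network as the propagator; here a fixed random linear map stands in for it, and the memory length, state dimension, and rollout length are hypothetical. The point is the concatenation mechanism: feed the last m states, predict the next, slide the window.

```python
import numpy as np

# Sketch of neural-network autoregression in time. A map from a window
# of m past states to the next state is applied repeatedly, generating
# an arbitrarily long trajectory from a short seed. A random linear map
# stands in for the trained network (illustration only).

m, d = 3, 2                                   # memory length, state dim (hypothetical)
rng = np.random.default_rng(1)
P = rng.normal(scale=0.3, size=(m * d, d))    # stand-in for the learned propagator

def propagate(history):
    """Predict the next state from the last m states."""
    x = np.concatenate(history[-m:])          # flatten the memory window
    return x @ P

history = [rng.normal(size=d) for _ in range(m)]   # seed trajectory
for _ in range(50):                                # autoregressive rollout
    history.append(propagate(history))

trajectory = np.stack(history)                # shape: (m + 50, d)
```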
Machine learning Markovian quantum master equations of few-body observables in interacting spin chains
Full information about a many-body quantum system is usually out of reach due to the exponential growth – with the size of the system – of the number of parameters needed to encode its state. Nonetheless, in order to understand the complex phenomenology that can be observed in these systems, it is often sufficient to consider dynamical or stationary properties of local observables or, at most, of few-body correlation functions. These quantities are typically studied by singling out a specific subsystem of interest and regarding the remainder of the many-body system as an effective bath. In the simplest scenario, the subsystem dynamics, which is in fact an open quantum dynamics, can be approximated through Markovian quantum master equations. Here, we show how the generator of such a dynamics can be efficiently learned by means of a fully interpretable neural network which provides the relevant dynamical parameters for the subsystem of interest. Importantly, the neural network is constructed such that the learned generator implements a physically consistent open quantum time-evolution. We exploit our method to learn the generator of the dynamics of a subsystem of a many-body system subject to a unitary quantum dynamics. We explore the capability of the network to predict the time-evolution of a two-body subsystem and exploit the physical consistency of the generator to make predictions on the stationary state of the subsystem dynamics.
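The physical-consistency constraint described here can be illustrated in miniature. The sketch below is not the paper's network: the Hamiltonian, jump operator, and raw parameter are hypothetical, but it shows the construction the abstract alludes to, in which outputs are forced into the GKSL (Lindblad) form with non-negative rates so the learned generator is automatically trace-preserving.

```python
import numpy as np

# Sketch of a physically consistent Lindblad (GKSL) generator for one
# qubit. A raw, unconstrained parameter (in the paper, a network output)
# is mapped through exp to a non-negative jump rate, so the resulting
# generator is a valid open-system evolution by construction.

sm = np.array([[0, 1], [0, 0]], dtype=complex)          # sigma_minus jump operator
H = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)    # example Hamiltonian

raw_rate = -0.7                 # hypothetical unconstrained network output
gamma = np.exp(raw_rate)        # positivity constraint -> valid rate

def lindblad_rhs(rho):
    """d(rho)/dt under the GKSL equation with one jump operator."""
    comm = -1j * (H @ rho - rho @ H)
    diss = gamma * (sm @ rho @ sm.conj().T
                    - 0.5 * (sm.conj().T @ sm @ rho + rho @ sm.conj().T @ sm))
    return comm + diss

rho = np.array([[0.25, 0.1], [0.1, 0.75]], dtype=complex)
drho = lindblad_rhs(rho)        # trace(drho) = 0: trace is preserved
```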
Automated machine learning for secure key rate in discrete-modulated continuous-variable quantum key distribution
Continuous-variable quantum key distribution (CV QKD) with discrete modulation has attracted increasing attention due to its experimental simplicity, lower-cost implementation and compatibility with classical optical communication. Correspondingly, some novel numerical methods have been proposed to analyze the security of these protocols against collective attacks, pushing key rates beyond one hundred kilometers of fiber distance. However, numerical methods are limited by their calculation time and resource consumption, which prevents their wider use on mobile platforms in quantum networks. To address this issue, a neural network model predicting key rates in nearly real time was proposed previously. Here, we go further and present a neural network model combined with Bayesian optimization. This model automatically designs the best neural network architecture for computing key rates in real time. We demonstrate our model with two variants of CV QKD protocols with quaternary modulation. The results show high reliability, with secure probability as high as 99.15%–99.59%, considerable tightness and high efficiency, with a speedup of approximately 10^7 in both cases. This model enables more automatic and efficient real-time computation of key rates for unstructured quantum key distribution protocols, meeting the growing need to implement QKD protocols on moving platforms.
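The architecture-search loop described here can be sketched generically. The paper uses Bayesian optimization; the stand-in below does exhaustive search over a tiny grid, and the search space, objective, and all names are hypothetical. It only illustrates the interface: candidate architectures are scored by the validation error of the key-rate regressor they define.

```python
import numpy as np

# Sketch of architecture search for a key-rate predictor. Each candidate
# (depth, width) would normally be scored by training a network and
# measuring its validation loss; a synthetic objective stands in here.
# The paper replaces this grid search with Bayesian optimization.

rng = np.random.default_rng(2)

def validation_error(n_layers, width):
    """Stand-in objective: in practice, train a network that predicts
    key rates from protocol parameters and return its validation loss."""
    return (n_layers - 3) ** 2 + 0.01 * (width - 64) ** 2 + rng.normal(scale=0.1)

search_space = [(n, w) for n in range(1, 6) for w in (16, 32, 64, 128)]
scores = {cfg: validation_error(*cfg) for cfg in search_space}
best_arch = min(scores, key=scores.get)       # architecture with lowest error
```

Bayesian optimization improves on this by fitting a surrogate to `scores` and proposing the next candidate adaptively instead of enumerating the grid.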
We propose a series of data-centric heuristics for improving the performance of machine learning systems when applied to problems in quantum information science. In particular, we consider how systematic engineering of training sets can significantly enhance the accuracy of pre-trained neural networks used for quantum state reconstruction without altering the underlying architecture. We find that it is not always optimal to engineer training sets to exactly match the expected distribution of a target scenario, and instead, performance can be further improved by biasing the training set to be slightly more mixed than the target. This is due to the heterogeneity in the number of free variables required to describe states of different purity, and as a result, overall accuracy of the network improves when training sets of a fixed size focus on states with the least constrained free variables. For further clarity, we also include a “toy model” demonstration of how spurious correlations can inadvertently enter synthetic data sets used for training, how the performance of systems trained with these correlations can degrade dramatically, and how the inclusion of even relatively few counterexamples can effectively remedy such problems.
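The purity-biasing heuristic above can be sketched with standard random density matrices. This is not the paper's pipeline: the dimensions and sample counts are hypothetical. Drawing states by G G†/tr(G G†) with a d×K Ginibre matrix G (the induced measure) gives lower average purity for larger K, so raising K slightly above what the target scenario suggests biases the training set toward more-mixed states.

```python
import numpy as np

# Sketch of biasing a synthetic training set toward more-mixed states.
# Random density matrices are drawn from the induced (Ginibre) measure;
# the ancilla dimension K tunes the average purity: larger K -> more
# mixed. Training-set engineering then amounts to choosing K.

rng = np.random.default_rng(3)

def random_state(d, K):
    """Random d-dimensional density matrix, ancilla dimension K."""
    G = rng.normal(size=(d, K)) + 1j * rng.normal(size=(d, K))
    rho = G @ G.conj().T
    return rho / np.trace(rho).real

def purity(rho):
    return np.trace(rho @ rho).real

d = 4
mean_purity_low_K = np.mean([purity(random_state(d, 4)) for _ in range(200)])
mean_purity_high_K = np.mean([purity(random_state(d, 16)) for _ in range(200)])
# mean_purity_high_K < mean_purity_low_K: larger K biases toward mixedness
```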
Quantum Machine Learning (QML) is an emerging research area advocating the use of quantum computing for advancement in machine learning. Since Parametrized Variational Quantum Circuits (VQCs) were shown to be capable of replacing artificial neural networks, they have been widely adopted for different tasks in Quantum Machine Learning. However, despite their potential to outperform neural networks, VQCs are limited to small-scale applications given the challenges in the scalability of quantum circuits. To address this shortcoming, we propose an algorithm that compresses the quantum state within the circuit using a tensor ring representation. With the input qubit state in the tensor ring representation, single-qubit gates maintain that representation. The same is not true for two-qubit gates in general, where an approximation is used to keep the output in tensor ring form. With this approximation, storage and computational time increase linearly in the number of qubits and number of layers, compared to the exponential increase of exact simulation algorithms. This approximation is used to implement the tensor ring VQC. The parameters of the tensor ring VQC are trained using a gradient descent based algorithm with efficient approaches for backpropagation. The proposed approach is evaluated on two datasets, Iris and MNIST, for the classification task, showing improved accuracy with larger numbers of qubits. We achieve a test accuracy of 83.33% on the Iris dataset and maxima of 99.30% and 76.31% on binary and ternary classification of the MNIST dataset using various circuit architectures. The results on the Iris dataset outperform those of a VQC implemented on Qiskit and, being scalable, demonstrate the potential for VQCs to be used in large-scale Quantum Machine Learning applications.
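The key structural fact in this abstract, that single-qubit gates preserve the tensor ring form exactly, can be shown in a few lines. The sketch below is illustrative, not the paper's implementation: qubit count and bond dimension are hypothetical, and the cores are random rather than a circuit state. A gate contracts only the physical leg of one core, so the representation (and hence the linear storage cost) is unchanged; two-qubit gates would grow the bond dimension and require the truncation the abstract describes.

```python
import numpy as np

# Sketch of a tensor-ring state: one core A[k] of shape (r, 2, r) per
# qubit, bond dimension r. A single-qubit gate acts only on the physical
# index of its core, so the tensor-ring form is preserved exactly.

n, r = 4, 2                                   # qubits, bond dim (hypothetical)
rng = np.random.default_rng(4)
cores = [rng.normal(size=(r, 2, r)) for _ in range(n)]

def apply_1q_gate(cores, gate, k):
    """Contract a 2x2 gate into the physical leg of core k."""
    new = [c.copy() for c in cores]
    new[k] = np.einsum('ps,asb->apb', gate, cores[k])
    return new

def contract_amplitude(cores, bits):
    """Amplitude of a basis state: trace of the product of core slices."""
    M = cores[0][:, bits[0], :]
    for c, b in zip(cores[1:], bits[1:]):
        M = M @ c[:, b, :]
    return np.trace(M)

X = np.array([[0.0, 1.0], [1.0, 0.0]])        # Pauli-X gate
flipped = apply_1q_gate(cores, X, 1)          # flip qubit 1; still a tensor ring
amp = contract_amplitude(flipped, [0, 1, 0, 0])
ref = contract_amplitude(cores, [0, 0, 0, 0])  # X swaps the bit-0/bit-1 slices
```

A two-qubit gate would instead merge two cores into an (r, 4, r) block whose re-factorization generically needs bond dimension up to 2r; truncating back to r is the approximation that keeps the cost linear.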