A semidefinite program (SDP) is a particular kind of convex optimization problem with applications in operations research, combinatorial optimization, quantum information science, and beyond. In this work, we propose variational quantum algorithms for approximately solving SDPs. For one class of SDPs, we provide a rigorous analysis of their convergence to approximate locally optimal solutions, under the assumption that they are weakly constrained (i.e., N ≫ M, where N is the dimension of the input matrices and M is the number of constraints). We also provide algorithms for a more general class of SDPs that requires fewer assumptions. Finally, we numerically simulate our quantum algorithms for applications such as MaxCut, and the results of these simulations provide evidence that convergence still occurs in noisy settings.
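To make the weakly constrained setting concrete, the simplest case (M = 1 constraint) already shows the structure: maximize tr(CX) subject to tr(X) = 1 and X ⪰ 0, whose optimum is the largest eigenvalue of C, attained at the rank-1 projector onto the top eigenvector. This is a toy illustration of the problem class, not the paper's variational quantum algorithm; a plain power iteration recovers the optimal value for a small symmetric C:

```python
# Toy illustration (not the paper's algorithm): the weakly constrained SDP
#   maximize tr(C X)  subject to  tr(X) = 1,  X PSD
# has optimal value lambda_max(C), attained at X = v v^T for the top
# eigenvector v.  Power iteration finds v for a small symmetric C.

def matvec(C, v):
    return [sum(C[i][j] * v[j] for j in range(len(v))) for i in range(len(C))]

def power_iteration(C, iters=200):
    v = [1.0] * len(C)
    for _ in range(iters):
        w = matvec(C, v)
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    # Rayleigh quotient of the converged vector = SDP optimal value
    return sum(v[i] * matvec(C, v)[i] for i in range(len(v)))

C = [[2.0, 1.0], [1.0, 2.0]]          # eigenvalues 1 and 3
print(round(power_iteration(C), 6))    # -> 3.0
```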
Reinforcement learning studies how an agent should interact with an environment to maximize its cumulative reward. A standard way to study this question abstractly is to ask how many samples an agent needs from the environment to learn an optimal policy for a γ-discounted Markov decision process (MDP). For such an MDP, we design quantum algorithms that approximate an optimal policy (π*), the optimal value function (v*), and the optimal Q-function (q*), assuming the algorithms can access samples from the environment in quantum superposition. This assumption is justified whenever there exists a simulator for the environment; for example, if the environment is a video game or some other program. Our quantum algorithms, inspired by value iteration, achieve quadratic speedups over the best-possible classical sample complexities in the approximation accuracy (ε) and two main parameters of the MDP: the effective time horizon (1/(1−γ)) and the size of the action space (A). Moreover, we show that our quantum algorithm for computing q* is optimal by proving a matching quantum lower bound.
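The classical baseline the quantum speedup is measured against is value iteration, which repeatedly applies the Bellman optimality update until the value function converges. A minimal sketch on a hypothetical 2-state, 2-action deterministic MDP (the transition table and rewards are invented for illustration):

```python
# Classical value iteration on a hypothetical gamma-discounted MDP.
# P[s][a] = (next_state, reward); both values converge to 1/(1-gamma).
P = {
    0: {0: (0, 0.0), 1: (1, 1.0)},   # from state 0: stay (r=0) or move (r=1)
    1: {0: (1, 1.0), 1: (1, 1.0)},   # state 1 is absorbing with reward 1
}
gamma = 0.9

def value_iteration(P, gamma, tol=1e-10):
    v = {s: 0.0 for s in P}
    while True:
        # Bellman optimality update: v(s) <- max_a [ r + gamma * v(s') ]
        v_new = {s: max(r + gamma * v[s2] for (s2, r) in P[s].values())
                 for s in P}
        if max(abs(v_new[s] - v[s]) for s in P) < tol:
            return v_new
        v = v_new

v_star = value_iteration(P, gamma)
print(v_star)   # both entries converge to 1/(1-gamma) = 10
```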
Filippo Vicentini, Damian Hofmann, Attila Szabó, Dian Wu, Christopher Roth, Clemens Giuliani, Gabriel Pescia, Jannes Nys, Vladimir Vargas-Calderon, Nikita Astrakhantsev, Giuseppe Carleo. Dec 21 2021. quant-ph cs.LG cs.MS physics.comp-ph. arXiv:2112.10526v1
We introduce version 3 of NetKet, the machine learning toolbox for many-body quantum physics. NetKet is built around neural-network quantum states and provides efficient algorithms for their evaluation and optimization. This new version is built on top of JAX, a differentiable programming and accelerated linear algebra framework for the Python programming language. The most significant new feature is the possibility to define arbitrary neural network ansätze in pure Python code using the concise notation of machine-learning frameworks, which allows for just-in-time compilation as well as the implicit generation of gradients thanks to automatic differentiation. NetKet 3 also comes with support for GPU and TPU accelerators, advanced support for discrete symmetry groups, chunking to scale up to thousands of degrees of freedom, drivers for quantum dynamics applications, and improved modularity, allowing users to use only parts of the toolbox as a foundation for their own code.
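The core object NetKet evaluates and optimizes is a neural-network parameterization of a wavefunction's log-amplitude. The following is a pure-Python sketch of the classic restricted-Boltzmann-machine (RBM) ansatz over spin configurations, purely to illustrate the idea; it does not use NetKet's actual API:

```python
import math

# Sketch (not NetKet's API) of a neural-network quantum state: an RBM
# ansatz for the log-amplitude of a wavefunction over sigma in {-1,+1}^N:
#   log psi(sigma) = sum_i a_i s_i + sum_j log(2 cosh(b_j + sum_i W_ji s_i))
def rbm_log_psi(sigma, a, b, W):
    visible = sum(ai * si for ai, si in zip(a, sigma))
    hidden = sum(
        math.log(2 * math.cosh(bj + sum(wji * si for wji, si in zip(Wj, sigma))))
        for bj, Wj in zip(b, W)
    )
    return visible + hidden

# Two spins, one hidden unit, all-zero parameters: psi is constant, so
# log psi = log 2 for every configuration.
print(rbm_log_psi([1, -1], a=[0.0, 0.0], b=[0.0], W=[[0.0, 0.0]]))  # ~0.6931
```

In NetKet 3 itself such an ansatz would instead be written as a JAX-compatible module, so gradients of `rbm_log_psi` with respect to `a`, `b`, `W` come for free via automatic differentiation.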
We generalize the Quantum Approximate Optimization Algorithm (QAOA) of Farhi et al. (2014) to allow for arbitrary separable initial states and corresponding mixers such that the starting state is the most excited state of the mixing Hamiltonian. We demonstrate this version of QAOA by simulating Max-Cut on weighted graphs. We initialize the starting state as a warm-start inspired by classical rank-2 and rank-3 approximations obtained using Burer-Monteiro’s heuristics, and choose a corresponding custom mixer. Our numerical simulations with this generalization, which we call QAOA-Warmest, yield higher quality cuts (compared to standard QAOA, the classical Goemans-Williamson algorithm, and a warm-started QAOA without custom mixers) for an instance library of 1148 graphs (up to 11 nodes) and depth p=8. We further show that QAOA-Warmest outperforms the standard QAOA of Farhi et al. in experiments on current IBM-Q hardware.
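For reference, the Max-Cut objective that both QAOA-Warmest and Goemans-Williamson approximate is easy to state and, at the instance sizes above (up to 11 nodes), easy to solve exactly by brute force:

```python
from itertools import product

# Max-Cut: partition vertices into two sets to maximize the total weight of
# edges crossing the cut.  Brute force is fine for <= ~20 nodes.
def max_cut(n, weighted_edges):
    best = 0.0
    for bits in product([0, 1], repeat=n):
        cut = sum(w for (u, v, w) in weighted_edges if bits[u] != bits[v])
        best = max(best, cut)
    return best

# Triangle with unit weights: any bipartition cuts at most 2 of the 3 edges.
print(max_cut(3, [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 1.0)]))  # -> 2.0
```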
Quantum error mitigation (QEM) is crucial for obtaining reliable results on quantum computers by suppressing quantum noise with moderate resources. It is key to successful and practical quantum algorithm implementations in the noisy intermediate-scale quantum (NISQ) era. Since quantum-classical hybrid algorithms can be executed with moderate and noisy quantum resources, combining QEM with quantum-classical hybrid schemes is one of the most promising directions toward practical quantum advantages. In this paper, we show how the variational quantum-neural hybrid eigensolver (VQNHE) algorithm, which seamlessly combines the expressive power of a parameterized quantum circuit with a neural network, is inherently noise resilient, with a unique QEM capacity that is absent in vanilla variational quantum eigensolvers (VQE). We carefully analyze and elucidate the asymptotic scaling of this unique QEM capacity in VQNHE from both theoretical and experimental perspectives. Finally, we consider a variational basis transformation for the Hamiltonian to be measured under the VQNHE framework, yielding a powerful tri-optimization setup that further enhances the quantum-neural hybrid error mitigation capacity.
We introduce a family of neural quantum states for the simulation of strongly interacting systems in the presence of spatial periodicity. Our variational state is parameterized in terms of a permutationally invariant part described by the Deep Sets neural-network architecture. The input coordinates to the Deep Sets are periodically transformed such that they are suitable to directly describe periodic bosonic systems. We show example applications to both one- and two-dimensional interacting quantum gases with Gaussian interactions, as well as to ⁴He confined in a one-dimensional geometry. For the one-dimensional systems we find very precise estimates of the ground-state energies and the radial distribution functions of the particles. In two dimensions we obtain good estimates of the ground-state energies, comparable to results obtained from more conventional methods.
Cole Miles, Rhine Samajdar, Sepehr Ebadi, Tout T. Wang, Hannes Pichler, Subir Sachdev, Mikhail D. Lukin, Markus Greiner, Kilian Q. Weinberger, Eun-Ah Kim. Dec 22 2021. cond-mat.str-el cs.LG quant-ph. arXiv:2112.10789v1
Machine learning has recently emerged as a promising approach for studying complex phenomena characterized by rich datasets. In particular, data-centric approaches open the possibility of automatically discovering structures in experimental datasets that manual inspection may miss. Here, we introduce an interpretable unsupervised-supervised hybrid machine learning approach, the hybrid-correlation convolutional neural network (Hybrid-CCNN), and apply it to experimental data generated using a programmable quantum simulator based on Rydberg atom arrays. Specifically, we apply Hybrid-CCNN to analyze new quantum phases on square lattices with programmable interactions. The initial unsupervised dimensionality reduction and clustering stage first reveals five distinct quantum phase regions. In a second supervised stage, we refine these phase boundaries and characterize each phase by training fully interpretable CCNNs and extracting the relevant correlations for each phase. The characteristic spatial weightings and snippets of correlations specifically recognized in each phase capture quantum fluctuations in the striated phase and identify two previously undetected phases, the rhombic and boundary-ordered phases. These observations demonstrate that a combination of programmable quantum simulators with machine learning can be used as a powerful tool for detailed exploration of correlated quantum states of matter.
Simulation is essential for developing quantum hardware and algorithms. However, simulating quantum circuits on classical hardware is challenging due to the exponential scaling of quantum state space. While factorized tensors can greatly reduce this overhead, tensor network-based simulators are relatively few and often lack crucial functionalities. To address this deficiency, we created TensorLy-Quantum, a Python library for quantum circuit simulation that adopts the PyTorch API. Our library leverages the optimized tensor methods of the existing TensorLy ecosystem to represent, simulate, and manipulate large-scale quantum circuits. Through compact tensor representations and efficient operations, TensorLy-Quantum can scale to hundreds of qubits on a single GPU and thousands of qubits on multiple GPUs. TensorLy-Quantum is open-source and accessible at https://github.com/tensorly/quantum
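The exponential scaling mentioned above is worth making explicit: a full statevector over n qubits holds 2ⁿ complex amplitudes, which is what tensor-network factorizations avoid. A minimal dense-statevector sketch (plain Python, not TensorLy-Quantum's API) preparing a 2-qubit Bell state:

```python
import math

# Dense statevector simulation: n qubits need 2**n amplitudes.  Here n = 2,
# basis order [|00>, |01>, |10>, |11>] with qubit 0 as the left bit.
def apply_h_q0(state):
    """Hadamard on qubit 0: mixes |0x> and |1x> amplitudes."""
    s = 1 / math.sqrt(2)
    a00, a01, a10, a11 = state
    return [s * (a00 + a10), s * (a01 + a11), s * (a00 - a10), s * (a01 - a11)]

def apply_cnot(state):
    """CNOT with qubit 0 as control: swaps the |10> and |11> amplitudes."""
    a00, a01, a10, a11 = state
    return [a00, a01, a11, a10]

state = [1.0, 0.0, 0.0, 0.0]           # |00>
state = apply_cnot(apply_h_q0(state))  # (|00> + |11>) / sqrt(2)
print([round(x, 6) for x in state])    # -> [0.707107, 0.0, 0.0, 0.707107]
```

A tensor-network simulator represents the same state as a contraction of small factors instead of one length-2ⁿ array, which is how TensorLy-Quantum reaches hundreds of qubits.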
Enhancing classical machine learning (ML) algorithms through quantum kernels is a rapidly growing research topic in quantum machine learning (QML). A key challenge in using kernels, both classical and quantum, is that ML workflows involve acquiring new observations, for which new kernel values need to be calculated. Transferring data back and forth between where the new observations are generated and a quantum computer incurs a time delay; this delay may exceed the timescales relevant for using the QML algorithm in the first place. In this work, we show that quantum kernel matrices can be extended to incorporate new data using a classical (chordal-graph-based) matrix completion algorithm. The minimal sample complexity needed for perfect completion depends on the matrix rank. We empirically show that (a) quantum kernel matrices can be completed using this algorithm when the minimal sample complexity is met, (b) the error of the completion degrades gracefully in the presence of finite-sampling noise, and (c) the rank of quantum kernel matrices depends weakly on the expressibility of the quantum feature map generating the kernel. Further, on a real-world, industrially relevant data set, the completion error behaves gracefully even when the minimal sample complexity is not reached.
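The role of rank can be seen in the simplest case. For a rank-1 PSD matrix K = vvᵀ, a missing off-diagonal entry is completely determined by observed entries, since K_ij = K_ik·K_jk / K_kk. This is only an illustration of why low rank enables completion, not the paper's chordal-graph algorithm:

```python
# Rank-1 PSD completion (illustrative, not the paper's chordal algorithm):
# if K = v v^T, then K_ij = K_ik * K_jk / K_kk for any anchor index k.
def complete_rank1_entry(K, i, j, k):
    assert K[k][k] != 0, "anchor diagonal entry must be nonzero"
    return K[i][k] * K[j][k] / K[k][k]

v = [1.0, 2.0, 3.0]
K = [[vi * vj for vj in v] for vi in v]   # rank-1 Gram (kernel-like) matrix
K[0][1] = K[1][0] = None                  # pretend this entry was never measured
recovered = complete_rank1_entry(K, 0, 1, k=2)
print(recovered)   # -> 2.0, matching v[0] * v[1]
```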
The Quantum Approximate Optimization Algorithm (QAOA), a variational quantum algorithm, aims to give approximate solutions to combinatorial optimization problems. It is widely believed that QAOA has the potential to demonstrate application-level quantum advantages on noisy intermediate-scale quantum (NISQ) processors with shallow circuit depth. Since the core of QAOA is the computation of expectation values of the problem Hamiltonian, an important practical question is whether we can find an efficient classical algorithm to compute these quantum mean values in the case of general shallow quantum circuits. Here, we present a novel graph-decomposition-based classical algorithm that scales linearly with the number of qubits for shallow QAOA circuits in most optimization problems, except for the complete-graph case. Numerical tests on Max-cut, graph coloring, and Sherrington-Kirkpatrick model problems show orders-of-magnitude performance improvements over the state-of-the-art method. Our results are important not only for the exploration of quantum advantages with QAOA, but also for the benchmarking of NISQ processors.
Accurate models of real quantum systems are important for investigating their behaviour, yet are difficult to distill empirically. Here, we report an algorithm, the Quantum Model Learning Agent (QMLA), to reverse-engineer Hamiltonian descriptions of a target system. We test the performance of QMLA on a number of simulated experiments, demonstrating several mechanisms for the design of candidate Hamiltonian models while simultaneously entertaining numerous hypotheses about the nature of the physical interactions governing the system under study. QMLA is shown to identify the true model in the majority of instances when provided with limited a priori information and control of the experimental setup. Our protocol can explore Ising, Heisenberg, and Hubbard families of models in parallel, reliably identifying the family which best describes the system dynamics. We demonstrate QMLA operating on large model spaces by incorporating a genetic algorithm to formulate new hypothetical models. The selection of models whose features propagate to the next generation is based upon an objective function inspired by the Elo rating scheme, typically used to rate competitors in games such as chess and football. In all instances, our protocol finds models that exhibit an F1-score ≥ 0.88 when compared with the true model, and it precisely identifies the true model in 72% of cases, whilst exploring a space of over 250,000 potential models. By testing which interactions actually occur in the target system, QMLA is a viable tool for both the exploration of fundamental physics and the characterisation and calibration of quantum devices.
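For readers unfamiliar with the Elo scheme referenced above, here is its standard form (QMLA's objective function is only *inspired* by it): after each pairwise comparison, each competitor's rating moves toward the observed outcome, weighted by how surprising that outcome was.

```python
# Standard Elo rating update (illustrative; QMLA's objective only draws on
# this idea).  score_a is 1 if A wins, 0 if A loses, 0.5 for a draw.
def elo_update(r_a, r_b, score_a, k=32):
    expected_a = 1 / (1 + 10 ** ((r_b - r_a) / 400))   # predicted score for A
    r_a_new = r_a + k * (score_a - expected_a)
    r_b_new = r_b + k * ((1 - score_a) - (1 - expected_a))
    return r_a_new, r_b_new

# Equal ratings, A wins: the upset probability was 0.5, so A gains k/2 = 16.
print(elo_update(1500, 1500, 1))   # -> (1516.0, 1484.0)
```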
Machine learning techniques have been successfully applied to classifying an extensive range of phenomena in quantum theory. From detecting quantum phase transitions to identifying Bell non-locality, it has been established that classical machines can learn genuine quantum features via classical data. Quantum entanglement is one such uniquely quantum phenomenon, as it has been shown that neural networks can be used to classify different types of entanglement. Our work builds on this topic. We investigate whether distinct models of neural networks can learn how to detect catalysis and self-catalysis of entanglement. Additionally, we study whether a trained machine can detect another related phenomenon, which we dub transfer knowledge. As we build our models from scratch, besides making all the code available, we can study a whole gamut of paradigmatic measures, including accuracy, execution time, training time, bias in the training data set, and so on.
A Parameter Initialization Method for Variational Quantum Algorithms to Mitigate Barren Plateaus Based on Transfer Learning
Variational quantum algorithms (VQAs) are widely applied in the noisy intermediate-scale quantum era and are expected to demonstrate quantum advantage. However, training VQAs faces difficulties, one of which is the so-called barren plateau (BP) phenomenon, where gradients of the cost function vanish exponentially with the number of qubits. In this paper, based on the basic idea of transfer learning, in which knowledge from previously solved tasks is reused in a different but related task to improve training efficiency, we report a parameter initialization method to mitigate BP. In this method, the quantum neural network, together with the optimal parameters found for a small problem size, is transferred to tasks of larger sizes. Numerical simulations show that this method outperforms random initialization and can mitigate BP. This work provides a reference for mitigating BP so that VQAs can be applied to more practical problems.
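One concrete way such a transfer could look (a hypothetical sketch; the paper's exact transfer scheme may differ) is to stretch the optimized parameter vector of the small ansatz onto the larger one by linear interpolation, so the larger circuit starts near a known-good region instead of a random point:

```python
# Hypothetical parameter-transfer sketch: initialize a larger ansatz by
# linearly interpolating a parameter vector optimized on a smaller one.
def transfer_params(small, new_len):
    if new_len == 1:
        return [small[0]]
    out = []
    for i in range(new_len):
        t = i * (len(small) - 1) / (new_len - 1)   # position in the small vector
        lo = int(t)
        hi = min(lo + 1, len(small) - 1)
        frac = t - lo
        out.append(small[lo] * (1 - frac) + small[hi] * frac)
    return out

theta_small = [0.1, 0.5, 0.9]            # optimized on a 3-parameter ansatz
theta_big = transfer_params(theta_small, 5)
print([round(x, 3) for x in theta_big])  # -> [0.1, 0.3, 0.5, 0.7, 0.9]
```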
We apply digitized Quantum Annealing (QA) and the Quantum Approximate Optimization Algorithm (QAOA) to a paradigmatic task of supervised learning in artificial neural networks: the optimization of synaptic weights for the binary perceptron. At variance with the usual QAOA applications to MaxCut, or to quantum spin-chain ground-state preparation, the classical Hamiltonian is characterized by highly non-local multi-spin interactions. Yet, we provide evidence for the existence of optimal smooth solutions for the QAOA parameters, which are transferable among typical instances of the same problem, and we demonstrate numerically an enhanced performance of QAOA over traditional QA. We also investigate the role of the QAOA optimization landscape geometry in this problem, showing that the detrimental effect of a gap-closing transition encountered in QA also degrades the performance of our implementation of QAOA.
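The classical cost behind this task is simple to state directly: for binary synaptic weights w ∈ {−1,+1}ᴺ, count the training patterns the perceptron sign(w·ξ) misclassifies. A minimal sketch with invented toy patterns:

```python
# Binary perceptron cost: the number of patterns (xi, label) for which
# sign(w . xi) disagrees with the label, for weights w in {-1,+1}^N.
def perceptron_cost(w, patterns):
    cost = 0
    for xi, label in patterns:
        activation = sum(wi * x for wi, x in zip(w, xi))
        if activation * label <= 0:        # wrong side of (or on) the boundary
            cost += 1
    return cost

patterns = [([1, 1, -1], +1), ([-1, 1, 1], -1)]
print(perceptron_cost([1, 1, 1], patterns))   # -> 1: second pattern misclassified
print(perceptron_cost([1, -1, -1], patterns)) # -> 0: both classified correctly
```

The non-locality noted above comes from the fact that each pattern couples every weight at once, so the cost is not a sum of few-spin terms.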
The de novo design of drug molecules is recognized as a time-consuming and costly process, and computational approaches have been applied at each stage of the drug discovery pipeline. The variational autoencoder is one such computer-aided design method; it explores the chemical space based on an existing molecular dataset. Quantum machine learning has emerged as an atypical learning method that may speed up some classical learning tasks because of its strong expressive power. However, near-term quantum computers suffer from a limited number of qubits, which hinders representation learning in high-dimensional spaces. We present a scalable quantum generative autoencoder (SQ-VAE) for simultaneously reconstructing and sampling drug molecules, and a corresponding vanilla variant (SQ-AE) for better reconstruction. Architectural strategies for hybrid quantum-classical networks, such as adjustable quantum layer depth, heterogeneous learning rates, and patched quantum circuits, are proposed to learn high-dimensional datasets such as ligand-targeted drugs. Extensive experimental results are reported for different dimensions, including 8×8 and 32×32, after choosing suitable architectural strategies. The performance of the quantum generative autoencoder is compared with its classical counterpart throughout all experiments. The results show that quantum computing advantages can be achieved for normalized low-dimension molecules, and that high-dimension molecules generated from quantum generative autoencoders have better drug properties within the same learning period.
We investigate the properties of attractor quantum neural networks (aQNNs) using the tools provided by the resource theory of coherence, thus relating quantum machine learning techniques to coherence theory. We show that when an aQNN is characterized by a quantum channel with the maximal number of stationary states, it corresponds to non-coherence-generating operations, and that the depth of the neural network is related to its decohering power. Further, we examine the case of faulty aQNNs, described by noisy quantum channels, and derive their physical implementation. Finally, we show that the performance of this class of aQNNs cannot be enhanced by using either entanglement or coherence as an external resource.
Despite empirical successes of recurrent neural networks (RNNs) in natural language processing (NLP), theoretical understanding of RNNs is still limited due to the intrinsically complex computations in RNNs. We perform a systematic analysis of RNNs' behavior in a ubiquitous NLP task, the sentiment analysis of movie reviews, via the mapping between a class of RNNs called recurrent arithmetic circuits (RACs) and a matrix product state (MPS). Using the von Neumann entanglement entropy (EE) as a proxy for information propagation, we show that single-layer RACs possess a maximum information propagation capacity, reflected by the saturation of the EE. Enlarging the bond dimension of an MPS beyond the EE saturation threshold does not increase the prediction accuracy, so a minimal model that best estimates the data statistics can be constructed. Although the saturated EE is smaller than the maximum EE achievable under the area law of an MPS, our model achieves ~99% training accuracy on realistic sentiment analysis data sets. Thus, low EE alone does not rule out the adoption of single-layer RACs for NLP. Contrary to a common belief that long-range information propagation is the main source of RNNs' expressiveness, we show that single-layer RACs also harness high expressiveness from meaningful word vector embeddings. Our work sheds light on the phenomenology of learning in RACs, and more generally on the explainability of RNNs for NLP, using tools from many-body quantum physics.
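For concreteness, the von Neumann EE used above is computed from the Schmidt spectrum of a bipartition: EE = −Σₖ pₖ ln pₖ, where pₖ are the squared Schmidt coefficients. A maximally entangled two-qubit (Bell) state has spectrum {1/2, 1/2} and hence EE = ln 2:

```python
import math

# Von Neumann entanglement entropy from the Schmidt spectrum of a
# bipartition: EE = -sum_k p_k ln p_k, p_k = squared Schmidt coefficients.
def entanglement_entropy(schmidt_probs):
    return -sum(p * math.log(p) for p in schmidt_probs if p > 0)

print(entanglement_entropy([0.5, 0.5]))   # Bell state: ln 2 ~ 0.6931
print(entanglement_entropy([1.0]))        # product state: EE = 0
```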