- Quantum deep reinforcement learning for clinical decision support in oncology: application to adaptive radiotherapy
- Quantum processor swapped in for a neural network
- Machine learning method could speed the search for new battery materials
Variational quantum algorithms are poised to have a significant impact on high-dimensional optimization, with applications in classical combinatorics, quantum chemistry, and condensed matter. Nevertheless, the optimization landscape of these algorithms is generally nonconvex, causing suboptimal solutions due to convergence to local, rather than global, minima. In this work, we introduce a variational quantum algorithm that uses classical Markov chain Monte Carlo techniques to provably converge to global minima. These performance guarantees are derived from the ergodicity of our algorithm’s state space and enable us to place analytic bounds on its time complexity. We demonstrate both the effectiveness of our technique and the validity of our analysis through quantum circuit simulations for MaxCut instances, solving these problems deterministically and with perfect accuracy. Our technique stands to broadly enrich the field of variational quantum algorithms, improving and guaranteeing the performance of these promising, yet often heuristic, methods.
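As a purely classical illustration of the Metropolis-style MCMC sampling the abstract invokes (the paper's actual algorithm runs the chain over a variational quantum circuit's state space, not over bitstrings), a minimal sketch of Metropolis sampling for MaxCut; the graph, temperature, and step count are illustrative choices, not from the paper:

```python
import math
import random

def maxcut_value(edges, bits):
    # Number of edges crossing the cut defined by the bit assignment.
    return sum(1 for u, v in edges if bits[u] != bits[v])

def metropolis_maxcut(edges, n, steps=2000, beta=2.0, seed=0):
    # Metropolis sampling over bitstrings; larger beta favors larger cuts.
    rng = random.Random(seed)
    bits = [rng.randint(0, 1) for _ in range(n)]
    best_val = maxcut_value(edges, bits)
    for _ in range(steps):
        i = rng.randrange(n)
        old = maxcut_value(edges, bits)
        bits[i] ^= 1  # propose a single-bit flip
        new = maxcut_value(edges, bits)
        if new < old and rng.random() >= math.exp(beta * (new - old)):
            bits[i] ^= 1  # reject the downhill move: undo the flip
        else:
            best_val = max(best_val, maxcut_value(edges, bits))
    return best_val

# 4-cycle graph: the optimal cut (alternating partition) cuts all 4 edges.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
val = metropolis_maxcut(edges, 4)
print(val)
```

Because the chain accepts every uphill flip and can random-walk across equal-value plateaus, it reliably reaches the optimal cut of 4 on this small instance; ergodicity of exactly this kind of chain is what the paper's convergence guarantees rest on.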
Quantum computing devices are inevitably subject to errors. To leverage quantum technologies for computational benefits in practical applications, quantum algorithms and protocols must be implemented reliably under noise and imperfections. Since noise and imperfections limit the size of quantum circuits that can be realized on a quantum device, developing quantum error mitigation techniques that do not require extra qubits and gates is of critical importance. In this work, we present a deep learning-based protocol for reducing readout errors on quantum hardware. Our technique is based on training an artificial neural network with the measurement results obtained from experiments with simple quantum circuits consisting of single-qubit gates only. With the neural network and deep learning, non-linear noise can be corrected, which is not possible with existing linear inversion methods. The advantage of our method over existing methods is demonstrated through quantum readout error mitigation experiments performed on IBM five-qubit quantum devices.
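The linear-inversion baseline the abstract contrasts against can be sketched in a few lines; the confusion matrix and measured distribution below are made-up numbers for a single qubit, not data from the paper's IBM experiments. The neural-network approach replaces the matrix inverse with a learned, possibly non-linear, map:

```python
import numpy as np

# Hypothetical single-qubit readout confusion matrix A, where
# A[i, j] = Pr(measure i | prepared j), estimated from calibration circuits.
A = np.array([[0.97, 0.08],
              [0.03, 0.92]])

# Noisy measured distribution for some circuit of interest.
p_noisy = np.array([0.60, 0.40])

# Linear inversion: solve A @ p_true = p_noisy for the ideal distribution.
p_mitigated = np.linalg.solve(A, p_noisy)
print(np.round(p_mitigated, 4))
```

Linear inversion assumes readout noise acts as a fixed stochastic matrix on the outcome distribution; any state-dependent or non-linear error breaks that assumption, which is the regime the paper's neural network targets.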
In this work we introduce a novel approach to the pulsar classification problem in time-domain radio astronomy using a Born machine, often referred to as a “quantum neural network.” Using a single-qubit architecture, we show that the pulsar classification problem maps well to the Bloch sphere and that accuracies comparable to more classical machine learning approaches are achievable. We introduce a novel single-qubit encoding for the pulsar data used in this work and show that it performs comparably to a multi-qubit QAOA encoding.
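To make the Bloch-sphere picture concrete, here is one minimal way a single-qubit encoding and classifier could work; the linear angle map and the measurement-probability threshold are illustrative assumptions, not the paper's actual encoding:

```python
import numpy as np

def encode(x, lo, hi):
    # Map a scalar feature linearly onto a polar angle theta in [0, pi],
    # giving the single-qubit state cos(theta/2)|0> + sin(theta/2)|1>.
    theta = np.pi * (x - lo) / (hi - lo)
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def classify(x, lo, hi, threshold=0.5):
    # Decision rule: compare P(|1>) = sin^2(theta/2), the probability of
    # measuring |1> in the computational basis, against a threshold.
    state = encode(x, lo, hi)
    return int(state[1] ** 2 > threshold)

# Features near the bottom of the range land near |0> (class 0),
# features near the top land near |1> (class 1).
print(classify(0.1, 0.0, 1.0), classify(0.9, 0.0, 1.0))
```

Geometrically, the encoding places each data point on a meridian of the Bloch sphere, and the threshold on P(|1>) is a latitude cut separating the two classes.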
Quantum computing has demonstrated the potential to revolutionize our understanding of nuclear, atomic, and molecular structure by obtaining forefront solutions in non-relativistic quantum many-body theory. In this work, we show that quantum computing can be used to solve for the structure of hadrons, governed by strongly-interacting relativistic quantum field theory. Following our previous work on light unflavored mesons as a relativistic bound-state problem within the nonperturbative Hamiltonian formalism, we present the numerical calculations on simulated quantum devices using the basis light-front quantization (BLFQ) approach. We implement and compare the variational quantum eigensolver (VQE) and the subspace-search variational quantum eigensolver (SSVQE) to find the low-lying mass spectrum of the light meson system and its corresponding light-front wave functions (LFWFs) via quantum simulations. Based on these LFWFs, we then evaluate the meson decay constants and parton distribution functions. Our quantum simulations are in reasonable agreement with accurate numerical solutions on classical computers, and our overall results are comparable with the available experimental data.
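The VQE loop at the heart of the workflow above can be sketched classically on a toy 2x2 Hamiltonian (nothing like the paper's BLFQ light-front Hamiltonian, and a grid scan standing in for the classical optimizer); the structure — parameterized ansatz, energy expectation, classical outer loop — is the same:

```python
import numpy as np

# Toy Hamiltonian: H = Z + 0.5 X (illustrative, not the BLFQ Hamiltonian).
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
H = Z + 0.5 * X

def ansatz(theta):
    # Single-parameter Ry rotation applied to |0>.
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def energy(theta):
    # Energy expectation <psi(theta)| H |psi(theta)>.
    psi = ansatz(theta)
    return psi @ H @ psi

# Classical outer loop: scan the variational parameter for the minimum.
thetas = np.linspace(0, 2 * np.pi, 1000)
e_min = min(energy(t) for t in thetas)
print(round(e_min, 4))  # ground energy of H is -sqrt(1.25) ≈ -1.1180
```

SSVQE, which the paper uses for the excited-state spectrum, extends this by minimizing a weighted sum of energies over mutually orthogonal ansatz states rather than a single expectation value.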
QKSA: Quantum Knowledge Seeking Agent — resource-optimized reinforcement learning using quantum process tomography
In this research, we extend the universal reinforcement learning (URL) agent models of artificial general intelligence to quantum environments. The utility function of a classical exploratory stochastic Knowledge Seeking Agent, KL-KSA, is generalized to distance measures from quantum information theory on density matrices. Quantum process tomography (QPT) algorithms form the tractable subset of programs for modeling environmental dynamics. The optimal QPT policy is selected via a mutable cost function that accounts for algorithmic complexity as well as computational resource complexity. Instead of Turing machines, we estimate the cost metrics on a high-level language to allow realistic experimentation. The entire agent design is encapsulated in a self-replicating quine that mutates the cost function based on the predictive value of the optimal policy-choosing scheme. Thus, multiple agents with Pareto-optimal QPT policies evolve using genetic programming, mimicking the development of physical theories, each with different resource trade-offs. This formal framework is termed the Quantum Knowledge Seeking Agent (QKSA). Despite its importance, few quantum reinforcement learning models exist, in contrast to the current thrust in quantum machine learning. QKSA is the first proposal for a framework that resembles the classical URL models. Similar to how AIXI-tl is a resource-bounded active version of Solomonoff universal induction, QKSA is a resource-bounded participatory-observer framework for the recently proposed algorithmic information-based reconstruction of quantum mechanics. QKSA can be applied to simulating and studying aspects of quantum information theory. Specifically, we demonstrate that it can be used to accelerate quantum variational algorithms that include tomographic reconstruction as an integral subroutine.
Evaluating performance of hybrid quantum optimization algorithms for MAXCUT Clustering using IBM runtime environment
Quantum algorithms can be used to perform unsupervised machine learning tasks such as data clustering by mapping the distance between data points to a graph optimization problem (i.e., MAXCUT) and finding an optimal solution through energy minimization using hybrid quantum-classical methods. Taking advantage of the IBM runtime environment, we benchmark the performance of the “Warm Start” (ws) variant of the Quantum Approximate Optimization Algorithm (QAOA) against the standard implementation of QAOA and the variational quantum eigensolver (VQE) for unstructured clustering problems on a real-world dataset, with respect to accuracy and runtime. Our numerical results show a strong speedup in runtime for the different optimization algorithms using the IBM runtime architecture, and an increase in classification accuracy for the ws-QAOA algorithm.
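The distance-to-MAXCUT reduction described above can be sketched directly; the four 2-D points are made-up data, and exhaustive search stands in for the QAOA/VQE energy minimization:

```python
import numpy as np
from itertools import product

def distance_graph(points):
    # Edge weight = Euclidean distance; cutting far-apart pairs is rewarded,
    # so the maximum cut separates the two natural clusters.
    n = len(points)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            W[i, j] = W[j, i] = np.linalg.norm(points[i] - points[j])
    return W

def brute_force_maxcut(W):
    # Stand-in for QAOA/VQE: exhaustively search all bit assignments.
    n = len(W)
    best_bits, best_val = None, -1.0
    for bits in product([0, 1], repeat=n):
        val = sum(W[i, j] for i in range(n) for j in range(i + 1, n)
                  if bits[i] != bits[j])
        if val > best_val:
            best_bits, best_val = bits, val
    return best_bits

points = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
labels = brute_force_maxcut(distance_graph(points))
print(labels)
```

The maximum cut puts the two nearby points in one partition and the two far points in the other, i.e. the cut labels are exactly the cluster labels; QAOA and VQE approximate this same optimum via variational energy minimization.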
Prediction and compression of lattice QCD data using machine learning algorithms on quantum annealer
We present regression and compression algorithms for lattice QCD data utilizing the efficient binary optimization ability of quantum annealers. In the regression algorithm, we encode the correlation between the input and output variables into a sparse-coding machine learning algorithm. The trained correlation pattern is used to predict lattice QCD observables of unseen lattice configurations from other observables measured on the lattice. In the compression algorithm, we define a mapping from lattice QCD data of floating-point numbers to binary coefficients that closely reconstruct the input data from a set of basis vectors. Since the reconstruction is not exact, the mapping defines a lossy compression, but a reasonably small number of binary coefficients are able to reconstruct the input vector of lattice QCD data with a reconstruction error much smaller than the statistical fluctuation. In both applications, we use D-Wave quantum annealers to solve the NP-hard binary optimization problems of the machine learning algorithms.
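The binary-coefficient compression step reduces to the kind of binary optimization an annealer solves; a tiny sketch with a hypothetical 2x3 basis and brute-force search standing in for the D-Wave solver (the basis and data vector are illustrative, not lattice QCD data):

```python
import numpy as np
from itertools import product

def binary_compress(x, B):
    # Lossy compression: pick binary coefficients a in {0,1}^k minimizing
    # the reconstruction error ||x - B a||. This objective expands into a
    # QUBO an annealer could solve; here it is brute-forced for illustration.
    k = B.shape[1]
    best_a, best_err = None, np.inf
    for a in product([0, 1], repeat=k):
        err = np.linalg.norm(x - B @ np.array(a, dtype=float))
        if err < best_err:
            best_a, best_err = a, err
    return np.array(best_a), best_err

# Hypothetical basis vectors (columns of B) and a data vector to compress.
B = np.array([[1.0, 0.5, 0.0],
              [0.0, 0.5, 1.0]])
x = np.array([1.45, 0.55])
a, err = binary_compress(x, B)
print(a, round(err, 3))
```

Storing the k bits of `a` instead of the floats of `x` is what makes the mapping a compression; the residual `err` is the lossy reconstruction error that the paper requires to stay below the statistical fluctuation of the lattice data.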