- The Universe Might Be a Self-Learning Computer. Here’s What That Means
- A shortcut to the thermodynamic limit for quantum many-body calculations of metals
- How to transform vacancies into quantum information
- Design tools for quantum computing
- Rigetti Computing Announces Next-Generation 40Q and 80Q Quantum Systems
- Breakthrough Proof Clears Path for Quantum AI – Overcoming Threat of “Barren Plateaus”
- Anatole von Lilienfeld: Quantum Machine Learning in Chemical Compound Space
- CS&[email protected] Classification problem solving using quantum machine learning mechanisms
- Quantum Machine Intelligence with Pooya
- Alessandro Ciani: Error mitigation and quantum-assisted simulation in the error corrected regime
- The Operating System for Quantum Computing
- A brief story of Quantum Neuromorphic Computing
- Beyond Enterprise Adoption to Quantum Acceleration: How to drive Quantum expertise
- A tensor network approach to an AdS/QCFT correspondence
- Introduction to Quantum ML
A semidefinite program (SDP) is a particular kind of convex optimization problem with applications in operations research, combinatorial optimization, quantum information science, and beyond. In this work, we propose variational quantum algorithms for approximately solving SDPs. For one class of SDPs, we provide a rigorous analysis of their convergence to approximate locally optimal solutions, under the assumption that they are weakly constrained (i.e., N ≫ M, where N is the dimension of the input matrices and M is the number of constraints). We also provide algorithms for a more general class of SDPs that requires fewer assumptions. Finally, we numerically simulate our quantum algorithms for applications such as MaxCut, and the results of these simulations provide evidence that convergence still occurs in noisy settings.
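The variational idea can be sketched classically: parametrize a PSD, unit-trace matrix as X = L Lᵀ / Tr(L Lᵀ) and ascend the SDP objective Tr(C X) by finite-difference gradient steps. The parametrization, learning rate, and 2×2 problem size below are illustrative assumptions; in the paper the feasible states live on a quantum device.

```python
import numpy as np

# Toy SDP: maximize Tr(C X) subject to Tr(X) = 1, X PSD.
# X = L L^T / Tr(L L^T) is PSD and unit-trace by construction.
rng = np.random.default_rng(0)
C = np.diag([2.0, 1.0])              # optimum is X = |0><0| with value 2

def objective(params):
    L = params.reshape(2, 2)
    X = L @ L.T
    X /= np.trace(X)                 # feasibility enforced by construction
    return np.trace(C @ X)

params = rng.normal(size=4)
eps, lr = 1e-5, 0.1
for _ in range(1000):
    grad = np.zeros_like(params)
    for i in range(4):
        step = np.zeros(4); step[i] = eps
        grad[i] = (objective(params + step) - objective(params - step)) / (2 * eps)
    params += lr * grad              # gradient ascent on the SDP objective

best = objective(params)             # approaches the SDP optimum, 2
```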
We develop a theoretical framework for Sn-equivariant quantum convolutional circuits, building on and significantly generalizing Jordan’s Permutational Quantum Computing (PQC) formalism. We show that quantum circuits are a natural choice for Fourier-space neural architectures, affording a super-exponential speedup in computing the matrix elements of Sn-Fourier coefficients compared to the best known classical Fast Fourier Transform (FFT) over the symmetric group. In particular, we utilize the Okounkov-Vershik approach to prove Harrow’s statement (Ph.D. Thesis 2005, p. 160) on the equivalence between SU(d)- and Sn-irrep bases and to establish the Sn-equivariant Convolutional Quantum Alternating Ansätze (Sn-CQA) using Young-Jucys-Murphy (YJM) elements. We prove that Sn-CQA is dense, and thus expressible, within each Sn-irrep block, which may serve as a universal model for potential future quantum machine learning and optimization applications. Our method provides another way to prove the universality of the Quantum Approximate Optimization Algorithm (QAOA), from a representation-theoretic point of view. Our framework can be naturally applied to a wide array of problems with global SU(d) symmetry. We present numerical simulations to showcase the effectiveness of the ansätze in finding the sign structure of the ground state of the J1–J2 antiferromagnetic Heisenberg model on rectangular and Kagome lattices. Our work identifies quantum advantage for a specific machine learning problem, and provides the first application of Okounkov and Vershik’s celebrated representation theory to machine learning and quantum physics.
Variational Quantum Algorithms (VQAs) are relatively robust to noise, but errors are still a significant detriment to VQAs on near-term quantum machines. It is imperative to employ error mitigation techniques to improve VQA fidelity. While existing error mitigation techniques built from theory provide substantial gains, the disconnect between theory and real machine execution limits their benefits. Thus, it is critical to optimize mitigation techniques to explicitly suit the target application as well as the noise characteristics of the target machine. We propose VAQEM, which dynamically tailors existing error mitigation techniques to the actual, dynamic noisy execution characteristics of VQAs on a target quantum machine. We do so by tuning specific features of these mitigation techniques, much like the traditional rotation-angle parameters, targeting improvements toward a specific objective function which represents the VQA problem at hand. In this paper, we target two types of error mitigation techniques suited to idle times in quantum circuits: single-qubit gate scheduling and the insertion of dynamical decoupling sequences. We gain substantial improvements in VQA objective measurements – a mean of over 3x across a variety of VQA applications run on IBM Quantum machines. More importantly, the proposed variational approach is general and can be extended to many other error mitigation techniques whose specific configurations are hard to select a priori. Integrating more mitigation techniques into the VAQEM framework can potentially lead to realizing practically useful VQA benefits on today’s noisy quantum machines.
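One of the knobs VAQEM tunes can be illustrated with a toy example: inserting dynamical decoupling (DD) pulses into idle windows of a circuit, with the number of X-X pairs as the tunable parameter. The list-of-strings circuit representation here is a made-up minimal one, not any real framework's API.

```python
# Fill each run of idle slots on a qubit timeline with X-X pulse pairs.
# An even number of X pulses leaves the logical state unchanged (net
# identity) while refocusing low-frequency noise during the idle window.
def insert_dd(timeline, n_pairs=1, idle="idle"):
    """Replace each run of idle slots with up to n_pairs X-X pulse pairs."""
    out, i = [], 0
    while i < len(timeline):
        if timeline[i] != idle:
            out.append(timeline[i]); i += 1
            continue
        j = i
        while j < len(timeline) and timeline[j] == idle:
            j += 1                                      # span the whole idle run
        n_pulses = min(2 * n_pairs, (j - i) // 2 * 2)   # keep the count even
        out += ["X"] * n_pulses + [idle] * (j - i - n_pulses)
        i = j
    return out

qubit0 = ["H", "idle", "idle", "idle", "idle", "CX"]
padded = insert_dd(qubit0, n_pairs=1)    # ["H", "X", "X", "idle", "idle", "CX"]
```

In VAQEM's variational loop, `n_pairs` (and the pulse placement) would be tuned against the measured objective rather than fixed a priori.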
Reinforcement learning studies how an agent should interact with an environment to maximize its cumulative reward. A standard way to study this question abstractly is to ask how many samples an agent needs from the environment to learn an optimal policy for a γ-discounted Markov decision process (MDP). For such an MDP, we design quantum algorithms that approximate an optimal policy (π*), the optimal value function (v*), and the optimal Q-function (q*), assuming the algorithms can access samples from the environment in quantum superposition. This assumption is justified whenever there exists a simulator for the environment; for example, if the environment is a video game or some other program. Our quantum algorithms, inspired by value iteration, achieve quadratic speedups over the best-possible classical sample complexities in the approximation accuracy (ϵ) and two main parameters of the MDP: the effective time horizon (1/(1−γ)) and the size of the action space (A). Moreover, we show that our quantum algorithm for computing q* is optimal by proving a matching quantum lower bound.
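The classical baseline the quantum speedups are measured against is value iteration. A minimal sketch on a 2-state, 2-action γ-discounted MDP (the transition tensor and rewards are toy numbers chosen for illustration):

```python
import numpy as np

gamma = 0.9
# P[s, a, s']: transition probabilities; R[s, a]: immediate rewards
P = np.array([[[0.9, 0.1],    # s=0, a=0
               [0.2, 0.8]],   # s=0, a=1
              [[0.0, 1.0],    # s=1, a=0
               [0.5, 0.5]]])  # s=1, a=1
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])

v = np.zeros(2)
for _ in range(1000):
    q = R + gamma * (P @ v)          # q[s,a] = R[s,a] + γ Σ_s' P[s,a,s'] v[s']
    v_new = q.max(axis=1)            # Bellman optimality backup
    if np.max(np.abs(v_new - v)) < 1e-10:
        v = v_new
        break
    v = v_new

policy = q.argmax(axis=1)            # greedy optimal policy π*(s)
```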
Finding the closest separable state to a given target state is a notoriously difficult task, even more difficult than deciding whether a state is entangled or separable. To tackle this task, we parametrize separable states with a neural network and train it to minimize the distance to a given target state, with respect to a differentiable distance, such as the trace distance or Hilbert-Schmidt distance. By examining the output of the algorithm, we can deduce whether the target state is entangled or not, and construct an approximation for its closest separable state. We benchmark the method on a variety of well-known classes of bipartite states and find excellent agreement, even up to local dimension d = 10. Moreover, we show our method to be efficient in the multipartite case, considering different notions of separability. Examining three- and four-party GHZ and W states, we recover known bounds and obtain novel ones, for instance for triseparability. Finally, we show how to use the neural network’s results to gain analytic insight.
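A stripped-down numpy version of this idea (no neural network): parametrize a separable two-qubit state as a convex mixture of K product states and minimize the squared Hilbert-Schmidt distance to a target state by finite-difference gradient descent. K, the optimizer, and the angle parametrization are illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)
K = 4                                              # number of product terms

def ket(theta, phi):
    return np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])

def separable(params):
    w, angles = params[:K], params[K:].reshape(K, 4)
    p = np.exp(w) / np.exp(w).sum()                # softmax mixture weights
    rho = np.zeros((4, 4), dtype=complex)
    for k in range(K):
        psi = np.kron(ket(*angles[k, :2]), ket(*angles[k, 2:]))
        rho += p[k] * np.outer(psi, psi.conj())
    return rho                                     # separable by construction

bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
target = np.outer(bell, bell.conj())               # entangled target state

def hs_dist2(params):
    d = separable(params) - target
    return float(np.real(np.trace(d.conj().T @ d)))  # squared HS distance

params = rng.normal(size=K + 4 * K)
for _ in range(500):
    grad = np.zeros_like(params)
    for i in range(len(params)):
        e = np.zeros_like(params); e[i] = 1e-5
        grad[i] = (hs_dist2(params + e) - hs_dist2(params - e)) / 2e-5
    params -= 0.1 * grad

final = hs_dist2(params)   # stays away from 0: the Bell state is entangled
```

The residual distance at convergence is the witness: a separable target would drive it to zero, an entangled one cannot.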
We present an analog version of the quantum approximate optimization algorithm suitable for current quantum annealers. The central idea of this algorithm is to optimize the schedule function, which defines the adiabatic evolution. This is achieved by choosing a suitable parametrization of the schedule function, based on interpolation methods, for a fixed total time. The algorithm provides approximate solutions to optimization problems within the coherence time of current quantum annealers, on their way towards quantum advantage.
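The central object is a parametrized schedule function s(t) on [0, T] with fixed endpoints s(0) = 0 and s(T) = 1, whose interior interpolation values are the variational parameters. Piecewise-linear interpolation and the value of T below are illustrative choices.

```python
import numpy as np

T = 100.0                                  # fixed total anneal time (arbitrary units)

def schedule(interior, t):
    """Piecewise-linear s(t) through fixed endpoints and free interior values."""
    knots_t = np.linspace(0, T, len(interior) + 2)
    knots_s = np.concatenate(([0.0], np.clip(interior, 0.0, 1.0), [1.0]))
    return np.interp(t, knots_t, knots_s)

params = np.array([0.1, 0.35, 0.8])        # three variational parameters
values = schedule(params, np.linspace(0, T, 5))
```

An outer classical optimizer would adjust `params` to maximize the quality of the annealer's output at fixed T.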
Enhancing classical machine learning (ML) algorithms through quantum kernels is a rapidly growing research topic in quantum machine learning (QML). A key challenge in using kernels — both classical and quantum — is that ML workflows involve acquiring new observations, for which new kernel values need to be calculated. Transferring data back and forth between where the new observations are generated and a quantum computer incurs a time delay; this delay may exceed the timescales relevant for using the QML algorithm in the first place. In this work, we show that quantum kernel matrices can be extended to incorporate new data using a classical (chordal-graph-based) matrix completion algorithm. The minimal sample complexity needed for perfect completion depends on the matrix rank. We empirically show that (a) quantum kernel matrices can be completed using this algorithm when the minimal sample complexity is met, (b) the completion error degrades gracefully in the presence of finite-sampling noise, and (c) the rank of quantum kernel matrices depends weakly on the expressibility of the quantum feature map generating the kernel. Further, on a real-world, industrially relevant data set, the completion error behaves gracefully even when the minimal sample complexity is not reached.
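A toy demonstration of why low rank makes unmeasured kernel entries recoverable: complete the new-vs-new block of an exactly rank-r kernel matrix from its measured blocks via the Nyström formula K_bb ≈ K_abᵀ K_aa⁺ K_ab. This is a simplification standing in for the chordal-graph completion algorithm the paper actually uses.

```python
import numpy as np

rng = np.random.default_rng(2)
r, n_old, n_new = 3, 8, 4
F = rng.normal(size=(n_old + n_new, r))   # rank-r feature vectors
K = F @ F.T                               # full kernel matrix (ground truth)

K_aa = K[:n_old, :n_old]                  # old-vs-old entries (measured)
K_ab = K[:n_old, n_old:]                  # old-vs-new entries (measured)
K_bb = K_ab.T @ np.linalg.pinv(K_aa) @ K_ab   # completed new-vs-new block

err = np.max(np.abs(K_bb - K[n_old:, n_old:]))
```

Because the old block already spans the full rank-r feature space, the completion is exact; with finite-sampling noise the error would instead degrade gracefully, as the paper shows.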
Accurate models of real quantum systems are important for investigating their behaviour, yet are difficult to distill empirically. Here, we report an algorithm — the Quantum Model Learning Agent (QMLA) — to reverse-engineer Hamiltonian descriptions of a target system. We test the performance of QMLA on a number of simulated experiments, demonstrating several mechanisms for the design of candidate Hamiltonian models while simultaneously entertaining numerous hypotheses about the nature of the physical interactions governing the system under study. QMLA is shown to identify the true model in the majority of instances when provided with limited a priori information and control of the experimental setup. Our protocol can explore Ising, Heisenberg and Hubbard families of models in parallel, reliably identifying the family which best describes the system dynamics. We demonstrate QMLA operating on large model spaces by incorporating a genetic algorithm to formulate new hypothetical models. The selection of models whose features propagate to the next generation is based upon an objective function inspired by the Elo rating scheme, typically used to rate competitors in games such as chess and football. In all instances, our protocol finds models that exhibit an F1-score ≥ 0.88 when compared with the true model, and it precisely identifies the true model in 72% of cases, whilst exploring a space of over 250,000 potential models. By testing which interactions actually occur in the target system, QMLA is a viable tool for both the exploration of fundamental physics and the characterisation and calibration of quantum devices.
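The Elo-inspired objective can be sketched in a few lines. The K-factor (32) and initial ratings below are illustrative; in QMLA the "match outcome" comes from comparing how well two candidate Hamiltonians explain the experimental data.

```python
# Standard Elo update: the expected score follows from the rating gap,
# then both ratings shift by the same amount (zero-sum).
def elo_update(r_winner, r_loser, k=32):
    expected_win = 1.0 / (1.0 + 10 ** ((r_loser - r_winner) / 400))
    delta = k * (1.0 - expected_win)
    return r_winner + delta, r_loser - delta

ratings = {"Ising": 1000.0, "Heisenberg": 1000.0, "Hubbard": 1000.0}
# Suppose pairwise evaluations find the Heisenberg candidate beats both rivals:
for loser in ("Ising", "Hubbard"):
    ratings["Heisenberg"], ratings[loser] = elo_update(ratings["Heisenberg"], ratings[loser])
```

Models with high ratings propagate their features to the next generation of the genetic algorithm; low-rated models are culled.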
Quantum Optimal Control is an established field of research necessary for the development of quantum technologies. Quantum Machine Learning is a fast-emerging field in which the theories of quantum mechanics and machine learning are combined so that each can learn from and benefit the other. In particular, Reinforcement Learning has a direct application to control problems. In this tutorial we introduce the methods of Quantum Optimal Control and Reinforcement Learning by applying them to the problem of three-level population transfer. The Jupyter notebooks to reproduce some of our results are open-sourced and available on GitHub.
We present a comparative study between classical probability and quantum probability from the Bayesian viewpoint, where probability is construed as our rational degree of belief in whether a given statement is true. From this viewpoint, including conditional probability, three issues are discussed: i) Given a measure of the rational degree of belief, does it satisfy the axioms of probability? ii) Given a probability satisfying these axioms, can it be seen as a measure of the rational degree of belief? iii) Can the measure of the rational degree of belief be evaluated in terms of the relative frequency of events occurring? Here we show that, as with classical probability, all these issues can be resolved affirmatively in quantum probability, provided that the relation to the relative frequency is slightly modified in the case of a small number of observations. This implies that the relation between the Bayesian probability and the relative frequency in quantum mechanics is the same as that in classical probability theory, including conditional probability.
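The role of sample size in issue iii) is easy to simulate: for a qubit measurement with Born probability p0 = cos²(θ/2) of outcome 0, the relative frequency over many trials approaches p0, while a small-sample estimate can deviate noticeably. The angle and sample sizes are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
theta = np.pi / 3
p0 = np.cos(theta / 2) ** 2                      # Born probability, here 0.75

freq_small = (rng.random(10) < p0).mean()        # 10 observations
freq_large = (rng.random(100_000) < p0).mean()   # 100,000 observations
```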
Quantum computing promises significant improvement over classical computers in solving difficult computational tasks. Designing quantum circuits for practical use, however, is not a trivial objective and requires expert-level knowledge. To aid this endeavor, this paper proposes a machine learning-based method to construct quantum circuit architectures. Previous works have demonstrated that classical deep reinforcement learning (DRL) algorithms can successfully construct quantum circuit architectures without encoded physics knowledge. However, these DRL-based works do not generalize to settings with changing device noise, and thus require considerable amounts of training resources to keep the RL models up to date. With this in mind, we incorporate continual learning to enhance the performance of our algorithm. In this paper, we present the Probabilistic Policy Reuse with deep Q-learning (PPR-DQL) framework to tackle this circuit-design challenge. By conducting numerical simulations over various noise patterns, we demonstrate that the RL agent with PPR finds the quantum gate sequence to generate the two-qubit Bell state faster than an agent trained from scratch. The proposed framework is general and can be applied to other quantum gate synthesis or control problems, including the automatic calibration of quantum devices.
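A bare-bones sketch of the policy-reuse idea: keep a library of previously trained policies plus a fresh one, and on a new noise pattern pick which to run via a softmax over running average reward. The bandit-style rewards below stand in for circuit-design episodes; all names and numbers are toy assumptions, not the PPR-DQL implementation.

```python
import numpy as np

rng = np.random.default_rng(4)

# Mean reward each policy would earn on the new noise pattern (hidden
# from the agent, revealed only through noisy episode returns).
true_reward = {"reuse_A": 0.8, "reuse_B": 0.1, "fresh": 0.4}
names = list(true_reward)
scores = {n: 0.0 for n in names}
counts = {n: 0 for n in names}

for _ in range(500):
    avg = np.array([scores[n] / max(counts[n], 1) for n in names])
    probs = np.exp(5 * avg); probs /= probs.sum()   # softmax reuse probabilities
    choice = names[rng.choice(len(names), p=probs)]
    reward = true_reward[choice] + 0.05 * rng.normal()
    scores[choice] += reward
    counts[choice] += 1

best = max(names, key=lambda n: counts[n])          # policy selected most often
```

The agent rapidly concentrates its choices on the transferable policy, which is the mechanism that lets PPR-DQL beat training from scratch.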
Recent breakthroughs have opened the possibility of intermediate-scale quantum computing with tens to hundreds of qubits, and shown the potential for solving classically challenging problems, such as those in chemistry and condensed matter physics. However, the extremely high accuracy needed to surpass classical computers places a critical demand on the circuit depth, which is severely limited by the non-negligible gate infidelity, currently around 0.1-1%. Here, by incorporating a virtual Heisenberg circuit, which acts effectively on the measurement observables, into a real shallow Schrödinger circuit, which is implemented realistically on the quantum hardware, we propose a paradigm of Schrödinger-Heisenberg variational quantum algorithms to resolve this problem. We choose a Clifford virtual circuit, whose effect on the Hamiltonian can be efficiently and classically computed according to the Gottesman-Knill theorem. Yet, it greatly enlarges the state expressivity, realizing much larger unitary t-designs. Our method enables accurate quantum simulation and computation that would otherwise be achievable only with much deeper and more accurate circuits. This has been verified in our numerical experiments through a better approximation of random states and a higher-fidelity solution to the ground-state energy of the XXZ model. Together with effective quantum error mitigation, our work paves the way for realizing accurate quantum computing algorithms with near-term quantum devices.
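The central trick can be seen in one line of linear algebra: a virtual circuit V need not run on hardware, since it merely conjugates the measured observable, O → V† O V, and for Clifford V this conjugation is classically tractable by the Gottesman-Knill theorem. A minimal single-qubit example: a virtual Hadamard turns a Z measurement into an X measurement.

```python
import numpy as np

Z = np.array([[1, 0], [0, -1]], dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # a Clifford gate

O_heisenberg = H.conj().T @ Z @ H    # observable in the Heisenberg picture
```

Measuring `O_heisenberg` on the shallow hardware state is equivalent to having appended the virtual H to the circuit, at zero hardware depth.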
The exotic nature of quantum mechanics makes machine learning (ML) in the quantum realm different from its classical applications. ML can be used for knowledge discovery using information continuously extracted from a quantum system in a broad range of tasks. The model receives streaming quantum information for learning and decision-making, resulting in instant feedback on the quantum system. As a stream-learning approach, we present deep reinforcement learning on streaming data from a continuously measured qubit in the presence of detuning, dephasing, and relaxation. We also investigate how the agent adapts to another quantum noise pattern via transfer learning. Stream learning provides a better understanding of closed-loop quantum control, which may pave the way for advanced quantum technologies.
The Quantum Approximate Optimization Algorithm (QAOA) is one of the most highly advocated variational algorithms for problem optimization, such as the MaxCut problem. It is a parameterized quantum-classical hybrid algorithm feasible in the current era of Noisy Intermediate-Scale Quantum (NISQ) computing. Like all other quantum algorithms, a QAOA circuit has to be converted to a hardware-compliant circuit with some additional SWAP gates inserted, a step called the qubit mapping process. QAOA has a special kind of unit block called CPHASE. Commuting CPHASE blocks can be scheduled in any order, which grants more freedom to the quantum program compilation process in the scope of qubit mapping. Due to qubit decoherence, the number of SWAP gates inserted and the depth of the converted circuit need to be as small as possible. After analyzing a tremendous number of SWAP-insertion and gate-reordering combinations, we discovered a simple yet effective gate-scheduling pattern called the CommuTativity-Aware Graph (CTAG). This pattern can be used to schedule any QAOA circuit while greatly reducing the gate count and circuit depth. Our CTAG-based method yields circuits with depth bounded by O(2N), as long as the given architecture admits a line embedding of length N. Compared to the state-of-the-art QAOA compilation solution, a heuristic approach using the commutativity feature, we achieve up to 90% reduction in circuit depth and 75% reduction in gate count on linear architectures for input graphs with up to 120 vertices. Even greater improvements can be achieved if the input graph has more vertices.
Open quantum systems can undergo dissipative phase transitions, and their critical behavior can be exploited in sensing applications. For example, it can be used to enhance the fidelity of superconducting qubit readout measurements, a central problem toward the creation of reliable quantum hardware. A recently introduced measurement protocol, named “critical parametric quantum sensing”, uses the driven-dissipative phase transition of a parametric (two-photon driven) Kerr resonator to reach single-qubit detection fidelity of 99.9% [arXiv:2107.04503]. In this work, we improve upon the previous protocol by using machine learning-based classification algorithms to efficiently and rapidly extract information from this critical dynamics, which had so far been neglected in favor of stationary properties alone. These classification algorithms are applied to the time-series data of weak quantum measurements (homodyne detection) of a circuit-QED implementation of the Kerr resonator coupled to a superconducting qubit. This demonstrates how machine learning methods enable a faster and more reliable measurement protocol in critical open quantum systems.