- New machine learning algorithm to detect quantum errors
- Quantum computer far away, says AI pioneer Raj Reddy
- Artificial Intelligence forum: AI and quantum computing: building the quantum economy
- Learning in a Quantum World by Nathan Wiebe | QWorld
- The Quantum Concept of Consciousness
- quantumcat – Cross Platform Open Source Quantum Computing Library – Jitesh Lalwani
- Quantum Computing & Artificial Intelligence | Knowledge Empowerment Session | Ft. Indrajeet
- All Cirq vendor modules have been extracted
- Rigetti support
- Two-qubit unitary decomposition for (inverse) sqrt-iSWAP
- Typescript development and 3D Circuits
- Python 3.9 support
- Performance boost for low entanglement circuits
- Flatten Tensor products
- Optimal learning of quantum Hamiltonians from high-temperature Gibbs states
- Quantum Continual Learning Overcoming Catastrophic Forgetting
- Quantum Circuits For Two-Dimensional Isometric Tensor Networks
- Globally optimizing QAOA circuit depth for constrained optimization problems
- Parameters Fixing Strategy for Quantum Approximate Optimization Algorithm
- Quantum Reinforcement Learning: the Maze problem
- Continuous-variable optimization with neural network quantum states
- Matrix Model simulations using Quantum Computing, Deep Learning, and Lattice Monte Carlo
We study the problem of learning a Hamiltonian H to precision ε, supposing we are given copies of its Gibbs state ρ = exp(−βH)/Tr(exp(−βH)) at a known inverse temperature β. Anshu, Arunachalam, Kuwahara, and Soleimanifar (Nature Physics, 2021) recently studied the sample complexity (number of copies of ρ needed) of this problem for geometrically local N-qubit Hamiltonians. In the high-temperature (low β) regime, their algorithm has sample complexity poly(N, 1/β, 1/ε) and can be implemented with polynomial, but suboptimal, time complexity. In this paper, we study the same question for a more general class of Hamiltonians. We show how to learn the coefficients of a Hamiltonian to error ε with sample complexity S = O(log N/(βε)²) and time complexity linear in the sample size, O(SN). Furthermore, we prove a matching lower bound showing that our algorithm's sample complexity is optimal, and hence our time complexity is also optimal. In the appendix, we show that virtually the same algorithm can be used to learn H from a real-time evolution unitary e^(−itH) in a small-t regime with similar sample and time complexity.
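A toy illustration of the idea behind high-temperature Hamiltonian learning (not the paper's actual estimator, which works from finite measurement samples): to first order in β, the Gibbs-state expectation Tr(ρ P_a) of each Pauli term P_a equals −β λ_a, so the coefficients can be read off directly. The two-qubit Hamiltonian and the use of exact density matrices here are illustrative assumptions.

```python
# Sketch only: recover Pauli coefficients of a small Hamiltonian from its
# Gibbs state at small beta, using exact density matrices via scipy.
import numpy as np
from scipy.linalg import expm

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron_all(ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Two-qubit Hamiltonian H = sum_a lambda_a P_a over a few Pauli terms.
paulis = [kron_all([X, X]), kron_all([Z, I]), kron_all([I, Z])]
lam = np.array([0.5, -0.3, 0.2])
H = sum(l * P for l, P in zip(lam, paulis))

beta = 0.01  # high-temperature (small beta) regime
rho = expm(-beta * H)
rho /= np.trace(rho).real

# First-order estimator: lambda_a ~ -Tr(rho P_a) / beta
lam_hat = np.array([-np.trace(rho @ P).real / beta for P in paulis])
print(np.max(np.abs(lam_hat - lam)))  # small error, vanishing with beta
```

The real algorithm must also control statistical error from a finite number S of copies, which is where the O(log N/(βε)²) sample complexity comes in.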
Catastrophic forgetting describes the tendency of machine learning models to forget knowledge of previously learned tasks after training on a new one. It is a central problem in the continual learning scenario and has recently attracted considerable attention across different communities. In this paper, we explore the catastrophic forgetting phenomenon in the context of quantum machine learning. We find that, like classical learning models based on neural networks, quantum learning systems likewise suffer from this forgetting problem in classification tasks arising from various application scenarios. We show that, based on local geometric information in the loss landscape of the trained model, a uniform strategy can be adopted to overcome the forgetting problem in the incremental learning setting. Our results uncover the catastrophic forgetting phenomenon in quantum machine learning and offer a practical method to overcome it, opening a new avenue for exploring potential quantum advantages in continual learning.
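The abstract does not spell out the strategy, but penalties built from local loss-landscape geometry are typically of the elastic-weight-consolidation family: a curvature-weighted quadratic term anchoring parameters that matter for the old task. A purely classical 1-D regression toy (all data and hyperparameters are made up for illustration) shows the mechanism:

```python
# Classical toy (not the paper's quantum model): a curvature-weighted
# quadratic penalty anchors the parameter learned on task A while training
# on task B, so task A is forgotten less than with naive retraining.
import numpy as np

rng = np.random.default_rng(0)
xA, xB = rng.uniform(1, 2, 50), rng.uniform(1, 2, 50)
yA, yB = 2.0 * xA, -1.0 * xB          # task A: w = 2, task B: w = -1

def train(x, y, w, anchor=None, fisher=0.0, lam=0.0, steps=2000, lr=0.01):
    for _ in range(steps):
        grad = np.mean(2 * (w * x - y) * x)          # MSE gradient
        if anchor is not None:
            grad += 2 * lam * fisher * (w - anchor)  # forgetting penalty
        w -= lr * grad
    return w

def loss_A(w):
    return np.mean((w * xA - yA) ** 2)

wA = train(xA, yA, 0.0)               # learn task A first
fisher = np.mean(2 * xA ** 2)         # loss curvature at wA (local geometry)

w_plain = train(xB, yB, wA)                                    # forgets A
w_ewc = train(xB, yB, wA, anchor=wA, fisher=fisher, lam=1.0)   # penalized

print(loss_A(w_plain), loss_A(w_ewc))  # penalized run keeps task-A loss lower
```

In the quantum setting the same penalty would act on the variational circuit parameters, with the curvature estimated from the trained model's loss landscape.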
The variational quantum eigensolver (VQE) combines classical and quantum resources in order to simulate classically intractable quantum states. Among other factors, the success of VQE depends on the choice of variational ansatz for a given problem Hamiltonian. We give a detailed description of a quantum circuit version of the 2D isometric tensor network (isoTNS) ansatz, which we call qisoTNS. We benchmark the performance of qisoTNS on two different 2D spin-1/2 Hamiltonians. We find that the ansatz has several advantages. It is qubit-efficient, giving access to certain exponentially large bond-dimension tensors at polynomial quantum cost. In addition, the ansatz is robust to the barren plateau problem due to emergent layerwise training. We further explore the effect of noise on the efficacy of the ansatz. Overall, we find that qisoTNS is a suitable variational ansatz for 2D Hamiltonians with local interactions.
We develop a global variable substitution method that reduces n-variable monomials in combinatorial optimization problems to equivalent instances with monomials in fewer variables. We apply this technique to 3-SAT and analyze the optimal quantum circuit depth needed to solve the reduced problem using the quantum approximate optimization algorithm. For benchmark 3-SAT problems, we find that the upper bound on the circuit depth is smaller when the problem is formulated as a product and the substitution method is used to decompose gates than when the problem is written in the linear formulation, which requires no decomposition.
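A classical instance of monomial-degree reduction by variable substitution is Rosenberg's quadratization (the paper's global method is its own construction; this is only the standard textbook version): replace the product x1·x2 by a fresh binary variable y, adding a penalty that vanishes exactly when y = x1·x2 at the minimum, so a cubic monomial x1·x2·x3 becomes the quadratic y·x3.

```python
# Degree reduction by substitution: the penalty
#   M * (x1*x2 - 2*x1*y - 2*x2*y + 3*y)
# is nonnegative on binary inputs and zero iff y == x1*x2 at the minimum,
# so minimizing over y reproduces the original cubic monomial.
from itertools import product

M = 2  # penalty weight; must exceed the monomial's coefficient

def cubic(x1, x2, x3):
    return x1 * x2 * x3

def reduced(x1, x2, x3, y):
    penalty = x1 * x2 - 2 * x1 * y - 2 * x2 * y + 3 * y
    return y * x3 + M * penalty

for x1, x2, x3 in product((0, 1), repeat=3):
    best = min(reduced(x1, x2, x3, y) for y in (0, 1))
    assert best == cubic(x1, x2, x3)
print("minimization over y reproduces the cubic on all 8 assignments")
```

Each substitution costs one auxiliary variable, which is the trade-off the paper analyzes against the resulting reduction in QAOA circuit depth.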
The quantum approximate optimization algorithm (QAOA) has numerous promising applications in solving combinatorial optimization problems on near-term Noisy Intermediate-Scale Quantum (NISQ) devices. QAOA has a quantum-classical hybrid structure: its quantum part consists of a parameterized alternating operator ansatz, and its classical part comprises an optimization algorithm, which optimizes the parameters to maximize the expectation value of the problem Hamiltonian. This expectation value depends strongly on the parameters, which implies that a set of good parameters leads to an accurate solution. However, at large circuit depths of QAOA, it is difficult to achieve global optimization due to multiple occurrences of local minima or maxima. In this paper, we propose a parameter-fixing strategy that gives a high approximation ratio on average, even at large circuit depths, by initializing QAOA with the optimal parameters obtained at the previous depths. We test our strategy on the Max-Cut problem for certain classes of graphs, such as 3-regular graphs and Erdős–Rényi graphs.
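A minimal statevector sketch of the warm-start idea (assumptions: 3-qubit Max-Cut on a triangle graph and SciPy's L-BFGS-B as the classical optimizer; the paper's own instances and optimizer may differ): the depth-2 search is initialized with the depth-1 optimum, padding the new layer with zeros so the starting point already reproduces the depth-1 value, which the optimizer can then only improve.

```python
# Parameter-fixing warm start for QAOA, illustrated on triangle Max-Cut.
import numpy as np
from scipy.optimize import minimize

edges = [(0, 1), (1, 2), (0, 2)]  # triangle graph, max cut = 2
n = 3
cost = np.array([sum((z >> i & 1) != (z >> j & 1) for i, j in edges)
                 for z in range(2 ** n)], dtype=float)

def mixer(beta):
    # exp(-i*beta*X) applied to every qubit
    rx = np.array([[np.cos(beta), -1j * np.sin(beta)],
                   [-1j * np.sin(beta), np.cos(beta)]])
    m = np.array([[1.0 + 0j]])
    for _ in range(n):
        m = np.kron(m, rx)
    return m

def expectation(params):
    p = len(params) // 2
    gammas, betas = params[:p], params[p:]
    psi = np.full(2 ** n, 1 / np.sqrt(2 ** n), dtype=complex)  # |+>^n
    for g, b in zip(gammas, betas):
        psi = np.exp(-1j * g * cost) * psi   # cost phase layer
        psi = mixer(b) @ psi                 # mixer layer
    return float(np.sum(np.abs(psi) ** 2 * cost))

neg = lambda params: -expectation(params)

res1 = minimize(neg, x0=[0.5, 0.5], method="L-BFGS-B")  # depth p = 1
g1, b1 = res1.x
warm = [g1, 0.0, b1, 0.0]        # fix depth-1 optimum, zero-pad new layer
res2 = minimize(neg, x0=warm, method="L-BFGS-B")        # depth p = 2

print(-res1.fun, -res2.fun)  # warm-started depth 2 is at least as good
```

Because the zero-padded layer acts as the identity, the warm start is guaranteed not to lose the depth-1 value, which is the core of the strategy.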
Quantum Machine Learning (QML) is a young but rapidly growing field where quantum information meets machine learning. Here, we introduce a new QML model that generalizes the classical concept of reinforcement learning to the quantum domain: Quantum Reinforcement Learning (QRL). In particular, we apply this idea to the maze problem, where an agent must learn the optimal set of actions to escape a maze with the highest success probability. To perform the strategy optimization, we consider a hybrid protocol in which QRL is combined with classical deep neural networks. We find that the agent learns the optimal strategy in both the classical and quantum regimes, and we also investigate its behaviour in a noisy environment. It turns out that the quantum speedup robustly allows the agent to exploit useful actions even at very short time scales, with key roles played by quantum coherence and external noise. This new framework has high potential to be applied to different tasks (e.g. high transmission/processing rates and quantum error correction) in new-generation Noisy Intermediate-Scale Quantum (NISQ) devices, whose topology engineering is becoming a new and crucial control knob for practical applications in real-world problems. This work is dedicated to the memory of Peter Wittek.
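For readers unfamiliar with the maze task itself, here is the purely classical baseline version (tabular Q-learning on a toy 4x4 grid; the grid size, rewards, and hyperparameters are illustrative assumptions and this is not the paper's hybrid quantum protocol):

```python
# Classical baseline only: tabular Q-learning agent escaping a 4x4 grid
# maze, goal in the far corner. Illustrates the maze task the abstract
# describes; the paper replaces this agent with a quantum one.
import numpy as np

rng = np.random.default_rng(1)
SIZE, GOAL = 4, (3, 3)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def step(state, a):
    r, c = state
    dr, dc = ACTIONS[a]
    nxt = (min(max(r + dr, 0), SIZE - 1), min(max(c + dc, 0), SIZE - 1))
    return nxt, (1.0 if nxt == GOAL else -0.01), nxt == GOAL

Q = np.zeros((SIZE, SIZE, len(ACTIONS)))
alpha, gamma, eps = 0.5, 0.95, 0.2

for _ in range(2000):                        # training episodes
    s = (0, 0)
    for _ in range(50):
        a = int(rng.integers(4)) if rng.random() < eps else int(np.argmax(Q[s]))
        nxt, r, done = step(s, a)
        Q[s][a] += alpha * (r + gamma * np.max(Q[nxt]) * (not done) - Q[s][a])
        s = nxt
        if done:
            break

# Greedy rollout from the start should now reach the goal.
s, path = (0, 0), [(0, 0)]
for _ in range(20):
    s, _, done = step(s, int(np.argmax(Q[s])))
    path.append(s)
    if done:
        break
print(path[-1])  # (3, 3)
```

The paper's contribution is to replace this classical agent with a quantum one and study how coherence and noise affect the learned escape strategy.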
Inspired by proposals for continuous-variable quantum approximate optimization (CV-QAOA), we investigate the utility of continuous-variable neural network quantum states (CV-NQS) for performing continuous optimization, focusing on the ground state optimization of the classical antiferromagnetic rotor model. Numerical experiments conducted using variational Monte Carlo with CV-NQS indicate that although the non-local algorithm succeeds in finding ground states competitive with local gradient search methods, the proposal suffers from unfavorable scaling. A number of extensions are put forward which may help alleviate the scaling difficulty.
Matrix quantum mechanics plays various important roles in theoretical physics, such as a holographic description of quantum black holes. Understanding quantum black holes and the role of entanglement in a holographic setup is of paramount importance for the development of better quantum algorithms (quantum error correction codes) and for the realization of a quantum theory of gravity. Quantum computing and deep learning offer us potentially useful approaches to study the dynamics of matrix quantum mechanics. In this paper we perform a systematic survey for quantum computing and deep learning approaches to matrix quantum mechanics, comparing them to Lattice Monte Carlo simulations. In particular, we test the performance of each method by calculating the low-energy spectrum.