📰News:

📽Videos:

📗Papers:

Modern applications of machine learning in quantum sciences

Anna Dawid, Julian Arnold, Borja Requena, Alexander Gresch, Marcin Płodzień, Kaelan Donatella, Kim Nicoli, Paolo Stornati, Rouven Koch, Miriam Büttner, Robert Okuła, Gorka Muñoz-Gil, Rodrigo A. Vargas-Hernández, Alba Cervera-Lierta, Juan Carrasquilla, Vedran Dunjko, Marylou Gabrié, Patrick Huembeli, Evert van Nieuwenburg, Filippo Vicentini, et al (9)

Apr 11 2022 quant-ph cond-mat.dis-nn cond-mat.mes-hall arXiv:2204.04198v1


In these Lecture Notes, we provide a comprehensive introduction to the most recent advances in the application of machine learning methods in quantum sciences. We cover the use of deep learning and kernel methods in supervised, unsupervised, and reinforcement learning algorithms for phase classification, representation of many-body quantum states, quantum feedback control, and quantum circuit optimization. Moreover, we introduce and discuss more specialized topics such as differentiable programming, generative models, the statistical approach to machine learning, and quantum machine learning.

Improving the efficiency of learning-based error mitigation

Piotr Czarnik, Michael McKerns, Andrew T. Sornborger, Lukasz Cincio

Apr 15 2022 quant-ph arXiv:2204.07109v1


Error mitigation will play an important role in practical applications of near-term noisy quantum computers. Current error mitigation methods typically concentrate on correction quality at the expense of frugality (as measured by the number of additional calls to quantum hardware). To fill the need for highly accurate yet inexpensive techniques, we introduce an error mitigation scheme that builds on Clifford data regression (CDR). The scheme improves frugality by carefully choosing the training data and exploiting the symmetries of the problem. We test our approach by correcting long-range correlators of the ground state of the XY Hamiltonian on the IBM Toronto quantum computer. We find that our method is an order of magnitude cheaper while maintaining the same accuracy as the original CDR approach. The efficiency gain enables us to obtain a factor of 10 improvement on the unmitigated results with a total budget as small as 2⋅10^5 shots.
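
As a concrete illustration of the CDR idea this paper builds on, here is a minimal numpy sketch of the core regression step: fit a linear map from noisy to exact expectation values on (classically simulable) near-Clifford training circuits, then apply it to the circuit of interest. The toy noise model and the numbers are placeholders, and the sketch does not include the paper's training-set selection or symmetry tricks.

```python
import numpy as np

# Synthetic CDR training data: for each near-Clifford training circuit we have
# a noisy expectation value (from hardware) and the exact value (classically
# simulable because the circuit is near-Clifford).
rng = np.random.default_rng(0)
y_exact = rng.uniform(-1, 1, size=30)                            # exact <O> per training circuit
y_noisy = 0.7 * y_exact + 0.05 + 0.02 * rng.standard_normal(30)  # toy noise model

# Fit the standard CDR linear ansatz  y_exact ≈ a * y_noisy + b
A = np.vstack([y_noisy, np.ones_like(y_noisy)]).T
(a, b), *_ = np.linalg.lstsq(A, y_exact, rcond=None)

# Mitigate a new noisy measurement of the circuit of interest
y_target_noisy = 0.7 * 0.42 + 0.05      # pretend hardware result (true value 0.42)
y_mitigated = a * y_target_noisy + b
print(f"mitigated estimate: {y_mitigated:.3f} (true value 0.42)")
```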

Expressivity of Variational Quantum Machine Learning on the Boolean Cube

Dylan Herman, Rudy Raymond, Muyuan Li, Nicolas Robles, Antonio Mezzacapo, Marco Pistoia

Apr 12 2022 quant-ph arXiv:2204.05286v1


Categorical data plays an important part in machine learning research and appears in a variety of applications. Models that can express large classes of functions on the Boolean cube are useful for problems involving discrete-valued data types, including those which are not Boolean. To date, the commonly used schemes for embedding classical data into variational quantum machine learning models encode continuous values. Here we investigate quantum embeddings for encoding Boolean-valued data into parameterized quantum circuits used for machine learning tasks. We narrow down representability conditions for functions on the n-dimensional Boolean cube with respect to previously known results, using two quantum embeddings: a phase embedding and an embedding based on quantum random access codes. We show that for any function on the Boolean cube, there exists a variational linear quantum model based on a phase embedding that can represent it. Additionally, we prove that an ensemble of variational linear quantum models that use the quantum random access code embedding can represent any function on the Boolean cube with degree d ≤ ⌈n/3⌉ using ⌈n/3⌉ qubits. Lastly, we demonstrate the use of the embeddings presented by performing numerical simulations and experiments on IBM quantum processors using the Qiskit machine learning framework.
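
The phase-embedding representability claim is easy to verify numerically: mapping each bit to |+⟩ or |−⟩ makes distinct Boolean inputs orthogonal, so a linear model ⟨ψ(x)|M|ψ(x)⟩ with a suitable observable M reproduces any function exactly. A small sanity check of this (our sketch, not the paper's code), using the parity function on n = 3 bits:

```python
import numpy as np
from itertools import product

n = 3  # Boolean variables, one qubit each

def phase_embedding(x):
    """|psi(x)> = tensor_j (|0> + (-1)^x_j |1>)/sqrt(2): each bit picks |+> or |->."""
    plus, minus = np.array([1, 1]) / np.sqrt(2), np.array([1, -1]) / np.sqrt(2)
    state = np.array([1.0])
    for xj in x:
        state = np.kron(state, minus if xj else plus)
    return state

# Distinct inputs give orthogonal states, so M = sum_x f(x)|psi(x)><psi(x)|
# makes the linear model <psi(x)|M|psi(x)> reproduce f exactly.
f = {x: (-1.0) ** sum(x) for x in product([0, 1], repeat=n)}  # e.g. the parity function
M = sum(v * np.outer(phase_embedding(x), phase_embedding(x)) for x, v in f.items())

for x, v in f.items():
    psi = phase_embedding(x)
    assert np.isclose(psi @ M @ psi, v)
print("linear model with phase embedding reproduces f on all", 2 ** n, "inputs")
```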

Characterizing Error Mitigation by Symmetry Verification in QAOA

Ashish Kakkar, Jeffrey Larson, Alexey Galda, Ruslan Shaydulin

Apr 13 2022 quant-ph arXiv:2204.05852v1


Hardware errors are a major obstacle to demonstrating quantum advantage with the quantum approximate optimization algorithm (QAOA). Recently, symmetry verification has been proposed and empirically demonstrated to boost the quantum state fidelity, the expected solution quality, and the success probability of QAOA on a superconducting quantum processor. Symmetry verification uses parity checks that leverage the symmetries of the objective function to be optimized. We develop a theoretical framework for analyzing this approach under local noise and derive explicit formulas for fidelity improvements on problems with global Z2 symmetry. We numerically investigate symmetry verification on the MaxCut problem and identify the error regimes in which this approach improves the QAOA objective. We observe that these regimes correspond to the error rates present in near-term hardware. We further demonstrate the efficacy of symmetry verification on an IonQ trapped-ion quantum processor, where an improvement in the QAOA objective of up to 19.2% is observed.
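
For intuition, here is a toy density-matrix version of the projection at the heart of symmetry verification (our sketch, not the paper's local-noise analysis): QAOA states for MaxCut live in the +1 eigenspace of the global spin flip S = X⊗…⊗X, so projecting a noisy state onto that sector and renormalizing, i.e. post-selecting on the parity check, concentrates weight back on the symmetric subspace. The depolarizing-type mix below is a crude stand-in for the local noise the paper actually analyzes.

```python
import numpy as np

n = 3
X = np.array([[0, 1], [1, 0]])
S = np.array([[1.0]])
for _ in range(n):
    S = np.kron(S, X)                       # global spin-flip symmetry S = X⊗X⊗X

dim = 2 ** n
rng = np.random.default_rng(0)
psi = rng.standard_normal(dim)
psi = psi + S @ psi                         # symmetrize so the ideal state obeys S|psi> = |psi>
psi /= np.linalg.norm(psi)
ideal = np.outer(psi, psi)

rho = 0.8 * ideal + 0.2 * np.eye(dim) / dim  # toy depolarizing-type noise

P = (np.eye(dim) + S) / 2                    # projector onto the +1 symmetry sector
rho_sv = P @ rho @ P
rho_sv /= np.trace(rho_sv)                   # renormalize: post-selection on the parity check

print("fidelity before:", np.real(psi @ rho @ psi))
print("fidelity after :", np.real(psi @ rho_sv @ psi))
```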

Embedding Learning in Hybrid Quantum-Classical Neural Networks

Henry Liu, Junyu Liu, Rui Liu, Henry Makhanov, Danylo Lykov, Anuj Apte, Yuri Alexeev

Apr 12 2022 quant-ph arXiv:2204.04550v1


Quantum embedding learning is an important step in the application of quantum machine learning to classical data. In this paper we propose a quantum few-shot embedding learning paradigm, which learns embeddings useful for training downstream quantum machine learning tasks. Crucially, we identify the circuit bypass problem in hybrid neural networks, where learned classical parameters do not utilize the Hilbert space efficiently. We observe that the few-shot learned embeddings generalize to unseen classes and suffer less from the circuit bypass problem compared with other approaches.

A quantum generative model for multi-dimensional time series using Hamiltonian learning

Haim Horowitz, Pooja Rao, Santosh Kumar Radha

Apr 14 2022 quant-ph cs.LG stat.ML arXiv:2204.06150v1


Synthetic data generation has proven to be a promising solution for addressing data availability issues in various domains. Even more challenging is the generation of synthetic time series data, where one has to preserve temporal dynamics, i.e., the generated time series must respect the original relationships between variables across time. Recently proposed techniques such as generative adversarial networks (GANs) and quantum-GANs lack the ability to adequately attend to time-series-specific temporal correlations. We propose using the inherent ability of quantum computers to simulate quantum dynamics as a way to encode such features. We start by assuming that a given time series can be generated by a quantum process, after which we proceed to learn that quantum process using quantum machine learning. We then use the learned model to generate out-of-sample time series and show that it captures unique and complex features of the learned time series. We also study the class of time series that can be modeled using this technique. Finally, we experimentally demonstrate the proposed algorithm on an 11-qubit trapped-ion quantum machine.
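
A single-qubit caricature of the pipeline (the paper learns multi-qubit processes and runs on an 11-qubit trapped-ion machine): assume the series is generated by evolution under H = hZ, so that ⟨X(t)⟩ = cos(2ht), learn h from a noisy series by least squares, and then evolve the learned model past the training window to generate out-of-sample points.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy version of "assume the series comes from a quantum process, then learn
# the generating Hamiltonian": recover h from noisy samples of <X(t)>.
rng = np.random.default_rng(0)
h_true = 0.7
t_train = np.linspace(0, 4, 40)
series = np.cos(2 * h_true * t_train) + 0.05 * rng.standard_normal(t_train.size)

def residuals(params):
    (h,) = params
    return np.cos(2 * h * t_train) - series

h_fit = least_squares(residuals, x0=[0.3]).x[0]

t_future = np.linspace(4, 8, 5)              # generate past the training window
print("learned h =", round(h_fit, 3))
print("generated out-of-sample series:", np.cos(2 * h_fit * t_future).round(3))
```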

Surrogate-based optimization for variational quantum algorithms

Ryan Shaffer, Lucas Kocia, Mohan Sarovar

Apr 13 2022 quant-ph arXiv:2204.05451v1


Variational quantum algorithms are a class of techniques intended to be used on near-term quantum computers. The goal of these algorithms is to perform large quantum computations by breaking the problem down into a large number of shallow quantum circuits, complemented by classical optimization and feedback between each circuit execution. One path for improving the performance of these algorithms is to enhance the classical optimization technique. Given the relative ease and abundance of classical computing resources, there is ample opportunity to do so. In this work, we introduce the idea of learning surrogate models for variational circuits using few experimental measurements, and then performing parameter optimization using these models as opposed to the original data. We demonstrate this idea using a surrogate model based on kernel approximations, through which we reconstruct local patches of variational cost functions using batches of noisy quantum circuit results. Through application to the quantum approximate optimization algorithm and preparation of ground states for molecules, we demonstrate the superiority of surrogate-based optimization over commonly-used optimization techniques for variational algorithms.
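
The core loop is easy to sketch in one parameter (our toy stand-in for a variational cost, not the authors' exact kernel construction): sample the noisy cost on a local patch, fit a kernel ridge surrogate, and minimize the smooth surrogate rather than the raw noisy data.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
true_cost = lambda t: -np.cos(t) + 0.1 * np.cos(3 * t)   # stand-in for <H(theta)>
noisy_cost = lambda t: true_cost(t) + 0.1 * rng.standard_normal()

theta_samples = np.linspace(-1.5, 1.5, 12)               # local patch of parameter space
y = np.array([noisy_cost(t) for t in theta_samples])     # batch of noisy circuit results

def rbf(a, b, ell=0.5):
    """RBF kernel matrix between two sets of 1-D points."""
    return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2 * ell ** 2))

K = rbf(theta_samples, theta_samples) + 1e-2 * np.eye(theta_samples.size)
alpha = np.linalg.solve(K, y)                            # kernel ridge regression fit
surrogate = lambda t: rbf(np.atleast_1d(t), theta_samples) @ alpha

res = minimize(lambda t: surrogate(t)[0], x0=0.5)        # optimize the smooth surrogate
print("surrogate minimizer:", res.x, "true cost there:", true_cost(res.x[0]))
```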

Efficient and practical quantum compiler towards multi-qubit systems with deep reinforcement learning

Qiuhao Chen, Yuxuan Du, Qi Zhao, Yuling Jiao, Xiliang Lu, Xingyao Wu

Apr 15 2022 quant-ph cs.LG arXiv:2204.06904v1


Efficient quantum compiling tactics greatly enhance the capability of quantum computers to execute complicated quantum algorithms. Due to its fundamental importance, a plethora of quantum compilers has been designed in recent years. However, current protocols suffer from several caveats: low optimality, high inference time, limited scalability, and lack of universality. To compensate for these defects, here we devise an efficient and practical quantum compiler assisted by advanced deep reinforcement learning (RL) techniques, i.e., data generation, deep Q-learning, and AQ* search. In this way, our protocol is compatible with various quantum machines and can be used to compile multi-qubit operators. We systematically evaluate the performance of our proposal in compiling quantum operators with both inverse-closed and inverse-free universal basis sets. In the task of single-qubit operator compiling, our proposal outperforms other RL-based quantum compilers in terms of compiling sequence length and inference time. Meanwhile, the output solution is near-optimal, guaranteed by the Solovay-Kitaev theorem. Notably, for the inverse-free universal basis set, the achieved sequence length complexity is comparable with the inverse-based setting and dramatically advances previous methods. These empirical results contribute to improving the inverse-free Solovay-Kitaev theorem. In addition, for the first time, we demonstrate how to leverage RL-based quantum compilers to accomplish two-qubit operator compiling. The achieved results open an avenue for integrating RL with quantum compiling to unify efficiency and practicality and thus facilitate the exploration of quantum advantages.
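
To make the setup concrete, here is a minimal single-qubit "compiling environment" in the spirit of the problem, with a plain greedy policy standing in for the paper's deep-Q-network and AQ* search (which this sketch does not implement): the state is the unitary built so far, actions append a gate from the basis set, and progress is scored by a global-phase-invariant operator distance. Greedy succeeds only on easy targets like the one below; the learned value function is what makes deep searches tractable.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
T = np.diag([1, np.exp(1j * np.pi / 4)])
BASIS = {"H": H, "T": T, "Tdg": T.conj().T}        # inverse-closed basis {H, T, T†}

def distance(U, V):
    """Global-phase-invariant distance between unitaries."""
    d = U.shape[0]
    return np.sqrt(max(0.0, 1 - abs(np.trace(U.conj().T @ V)) / d))

def greedy_compile(target, max_len=40, tol=1e-6):
    """Greedily append the basis gate that most reduces the distance to the target."""
    V, seq = np.eye(2, dtype=complex), []
    for _ in range(max_len):
        name, gate = min(BASIS.items(), key=lambda kv: distance(target, kv[1] @ V))
        V, seq = gate @ V, seq + [name]
        if distance(target, V) < tol:
            break
    return seq, distance(target, V)

seq, err = greedy_compile(T @ H)                    # target: apply H, then T
print("sequence:", seq, "residual distance:", round(err, 6))
```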

Driving black-box quantum thermal machines with optimal power/efficiency trade-offs using reinforcement learning

Paolo Andrea Erdman, Frank Noé

Apr 12 2022 quant-ph cond-mat.mes-hall cs.LG physics.chem-ph arXiv:2204.04785v1


The optimal control of non-equilibrium open quantum systems is a challenging task but has a key role in improving existing quantum information processing technologies. We introduce a general model-free framework based on reinforcement learning to identify out-of-equilibrium thermodynamic cycles that are Pareto-optimal trade-offs between power and efficiency for quantum heat engines and refrigerators. The method does not require any knowledge of the quantum thermal machine, nor of the system model, nor of the quantum state. Instead, it only observes the heat fluxes, so it is applicable both to simulations and to experimental devices. We test our method by identifying Pareto-optimal trade-offs between power and efficiency in two systems: an experimentally realistic refrigerator based on a superconducting qubit, where we identify non-intuitive control sequences that reduce quantum friction and outperform previous cycles proposed in the literature; and a heat engine based on a quantum harmonic oscillator, where we find cycles with an elaborate structure that outperform the optimized Otto cycle.
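
One standard way to obtain Pareto-optimal trade-offs with a single RL agent is to scalarize the two objectives into one reward and sweep the weight; the snippet below shows only that bookkeeping, computed from observed heat currents, which are the one quantity the black-box method sees. The exact reward shaping used in the paper may differ, and the flux values here are placeholders.

```python
# Scalarized reward for tracing a power/efficiency Pareto front: sweep the
# weight c in [0, 1]; each value of c targets one point on the front.
# Sign convention: heat currents are positive when flowing into the machine,
# so the delivered power in the engine regime is P = J_hot + J_cold.
def reward(j_hot, j_cold, c):
    power = j_hot + j_cold
    eta = power / j_hot if j_hot > 0 else 0.0   # engine efficiency P / J_hot
    return c * power + (1 - c) * eta

for c in (0.0, 0.5, 1.0):
    print(f"c={c}: reward={reward(j_hot=1.0, j_cold=-0.7, c=c):.3f}")
```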

Quantum Machine Learning Framework for Virtual Screening in Drug Discovery: a Prospective Quantum Advantage

Stefano Mensa, Emre Sahin, Francesco Tacchino, Panagiotis Kl. Barkoutsos, Ivano Tavernelli

Apr 11 2022 quant-ph cs.LG physics.chem-ph arXiv:2204.04017v1


Machine Learning (ML) for Ligand-Based Virtual Screening (LB-VS) is an important in-silico tool for discovering new drugs in a faster and more cost-effective manner, especially for emerging diseases such as COVID-19. In this paper, we propose a general-purpose framework combining a classical Support Vector Classifier (SVC) algorithm with quantum kernel estimation for LB-VS on real-world databases, and we argue in favor of its prospective quantum advantage. Indeed, we heuristically prove that our quantum integrated workflow can, at least in some relevant instances, provide a tangible advantage compared to state-of-the-art classical algorithms operating on the same datasets, showing a strong dependence on the target and on the feature selection method. Finally, we test our algorithm on IBM Quantum processors using ADRB2 and COVID-19 datasets, showing that hardware simulations provide results in line with the predicted performances and can surpass classical equivalents.
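
A stripped-down version of this workflow, with a toy single-qubit-per-feature rotation feature map standing in for the paper's circuits and synthetic labels standing in for the ADRB2/COVID-19 datasets: build the quantum-style kernel K(x, x') = |⟨ψ(x)|ψ(x')⟩|² classically and hand the precomputed Gram matrix to a classical SVC.

```python
import numpy as np
from sklearn.svm import SVC

def feature_state(x):
    """|psi(x)> = tensor_j (cos(x_j/2)|0> + sin(x_j/2)|1>), a toy feature map."""
    state = np.array([1.0])
    for xj in x:
        state = np.kron(state, np.array([np.cos(xj / 2), np.sin(xj / 2)]))
    return state

def gram(A, B):
    """Quantum-kernel Gram matrix K(a, b) = |<psi(a)|psi(b)>|^2."""
    return np.array([[abs(feature_state(a) @ feature_state(b)) ** 2 for b in B] for a in A])

rng = np.random.default_rng(0)
X = rng.uniform(0, np.pi, size=(40, 2))
y = (X.sum(axis=1) > np.pi).astype(int)      # toy labels standing in for active/inactive ligands

clf = SVC(kernel="precomputed").fit(gram(X, X), y)
print("train accuracy:", clf.score(gram(X, X), y))
```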

Programming Quantum Hardware via Levenberg-Marquardt Machine Learning

James E. Steck, Nathan L. Thompson, Elizabeth C. Behrman

Apr 15 2022 quant-ph arXiv:2204.07011v1


Significant challenges remain with the development of macroscopic quantum computing: hardware problems of noise, decoherence, and scaling; software problems of error correction; and, most importantly, algorithm construction. Finding truly quantum algorithms is quite difficult, and many quantum algorithms, like Shor's prime factoring or phase estimation, require extremely long circuit depth for any practical application, necessitating error correction. Machine learning can be used as a systematic method to non-algorithmically program quantum computers. Quantum machine learning enables us to perform computations without breaking down an algorithm into its gate building blocks, eliminating that difficult step and potentially reducing unnecessary complexity. In addition, we have shown that our machine learning approach is robust to both noise and decoherence, which is ideal for running on inherently noisy NISQ devices, which are limited in the number of qubits available for error correction. We demonstrated this using a fundamentally non-classical calculation: experimentally estimating the entanglement of an unknown quantum state. Results from this have been successfully ported to the IBM hardware and trained using a powerful hybrid reinforcement learning technique, a modified Levenberg-Marquardt (LM) method. The LM method is ideally suited to quantum machine learning, as it only requires knowledge of the final measured output of the quantum computation, not the intermediate quantum states, which are generally not accessible. Since it processes all the learning data simultaneously, it also requires significantly fewer hits on the quantum hardware. Machine learning is demonstrated with results from simulations and runs on the IBM Qiskit hardware interface.
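
For readers unfamiliar with the optimizer, here is a minimal Levenberg-Marquardt loop of the kind described, on a toy two-parameter model standing in for the measured outputs of a quantum computation: it uses only the final outputs f(θ), a finite-difference Jacobian over all training data at once, and the damped update θ ← θ − (JᵀJ + λI)⁻¹Jᵀr. The model and data are ours, not the paper's entanglement-estimation task.

```python
import numpy as np

def outputs(theta):
    """Stand-in for measured expectation values of a parameterized computation."""
    return np.array([np.cos(theta[0]), np.sin(theta[0] + theta[1]), np.cos(theta[1])])

target = outputs(np.array([0.6, -0.4]))        # training data the model should reproduce

theta, lam = np.array([0.0, 0.0]), 1e-2        # initial parameters and damping
for _ in range(50):
    r = outputs(theta) - target                # residuals over all data at once
    J = np.array([(outputs(theta + eps) - outputs(theta - eps)) / (2e-5)
                  for eps in (1e-5 * np.eye(2))]).T   # finite-difference Jacobian
    step = np.linalg.solve(J.T @ J + lam * np.eye(2), J.T @ r)
    theta = theta - step                       # damped Gauss-Newton update
print("recovered parameters:", theta.round(3))
```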

Categories: Week-in-QML
