📰News:

📽Videos:

👨‍💻Developers:

ProjectQ v0.7.1

📗Papers:

Systematic Literature Review: Quantum Machine Learning and its applications

David Peral García, Juan Cruz-Benito, Francisco José García-Peñalvo · Jan 12 2022 · quant-ph cs.LG · arXiv:2201.04093v1

Quantum computing is the process of performing calculations using quantum mechanics. The field studies the quantum behavior of certain subatomic particles for use in performing calculations, as well as in large-scale information processing. These capabilities can give quantum computers an advantage over classical computers in computational time and cost. Nowadays, there are scientific challenges that are impossible to address with classical computation due to computational complexity or the time the calculation would take, and quantum computation is one of the possible answers. However, current quantum devices do not yet have the necessary number of qubits and are not sufficiently fault-tolerant to achieve these goals. Nonetheless, there are other fields, like machine learning or chemistry, where quantum computation could be useful with current quantum devices. This manuscript presents a Systematic Literature Review of the papers published between 2017 and 2021 to identify, analyze, and classify the different algorithms used in quantum machine learning and their applications. The study identified 52 articles that used quantum machine learning techniques and algorithms. The main types of algorithms found are quantum implementations of classical machine learning algorithms, such as support vector machines or the k-nearest neighbor model, and quantum counterparts of classical deep learning algorithms, such as quantum neural networks. Many articles address problems currently solved by classical machine learning, but using quantum devices and algorithms. Even though results are promising, quantum machine learning is far from achieving its full potential. An improvement in quantum hardware is required, since the existing quantum computers lack the quality, speed, and scale that would allow quantum computing to reach its full potential.
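For a flavor of the quantum-kernel methods such reviews cover (e.g. quantum support vector machines), here is a minimal classical simulation of a fidelity ("quantum kernel") similarity between angle-encoded single-qubit states. The encoding and function names are illustrative sketches, not taken from any surveyed paper.

```python
import numpy as np

def angle_encode(x):
    """Encode a scalar feature as a single-qubit state |phi(x)> = RY(x)|0>."""
    return np.array([np.cos(x / 2), np.sin(x / 2)])

def fidelity_kernel(xs):
    """Kernel matrix K[i, j] = |<phi(x_i)|phi(x_j)>|^2, the overlap a quantum
    device would estimate and a classical SVM could then consume."""
    states = np.array([angle_encode(x) for x in xs])
    overlaps = states @ states.T  # amplitudes are real, so no conjugation needed
    return overlaps ** 2

xs = np.array([0.1, 0.8, 2.3])
K = fidelity_kernel(xs)  # symmetric, ones on the diagonal, entries in [0, 1]
```

For this single-qubit encoding the kernel reduces to cos²((x_i − x_j)/2), so nearby features get overlaps near 1.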

Scaling Quantum Approximate Optimization on Near-term Hardware

Phillip C. Lotshaw, Thien Nguyen, Anthony Santana, Alexander McCaskey, Rebekah Herrman, James Ostrowski, George Siopsis, Travis S. Humble · Jan 10 2022 · quant-ph · arXiv:2201.02247v1

The quantum approximate optimization algorithm (QAOA) is an approach for near-term quantum computers to potentially demonstrate computational advantage in solving combinatorial optimization problems. However, the viability of the QAOA depends on how its performance and resource requirements scale with problem size and complexity for realistic hardware implementations. Here, we quantify scaling of the expected resource requirements by synthesizing optimized circuits for hardware architectures with varying levels of connectivity. Assuming noisy gate operations, we estimate the number of measurements needed to sample the output of the idealized QAOA circuit with high probability. We show the number of measurements, and hence total time to solution, grows exponentially in problem size and problem graph degree as well as depth of the QAOA ansatz, gate infidelities, and inverse hardware graph degree. These problems may be alleviated by increasing hardware connectivity or by recently proposed modifications to the QAOA that achieve higher performance with fewer circuit layers.
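A back-of-the-envelope sketch of the scaling argument above (not the paper's actual resource model): if each gate fails independently with probability 1 − f and only error-free runs sample the ideal QAOA distribution, the measurement count grows exponentially in the gate count. The helper name is illustrative.

```python
import math

def shots_needed(n_gates, gate_fidelity, target_samples=1000):
    """Estimate measurements needed to collect `target_samples` error-free
    samples, assuming gates fail independently with probability
    (1 - gate_fidelity) and only fully error-free runs count."""
    p_clean = gate_fidelity ** n_gates  # probability a run has no gate error
    return math.ceil(target_samples / p_clean)

# Deeper circuits (more gates per QAOA layer, lower connectivity, higher
# graph degree) push n_gates up, so the shot budget grows exponentially.
budget = shots_needed(200, 0.999)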

QAOA pseudo-Boltzmann states

Pablo Díez-Valle, Diego Porras, Juan José García-Ripoll · Jan 11 2022 · quant-ph · arXiv:2201.03358v1

In this letter, we provide analytical and numerical evidence that the single-layer Quantum Approximate Optimization Algorithm (QAOA) on universal Ising spin models produces thermal states with Gaussian perturbations. We find that these pseudo-Boltzmann states cannot be efficiently simulated on classical computers according to state-of-the-art techniques, and we relate this distribution to the optimization potential of QAOA. Moreover, we observe that the temperature depends on a hidden universal correlation between the energy of a state, the covariance of other energy levels, and the Hamming distances of the state to those energies.
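To make the object of study concrete, here is a small numpy sketch (my own toy setup, a 4-qubit Ising ring rather than the universal models treated in the letter) that builds the exact single-layer QAOA output distribution whose pseudo-Boltzmann structure is analyzed:

```python
import numpy as np
from itertools import product

n = 4                                    # qubits on a ring
states = np.array(list(product([0, 1], repeat=n)))
spins = 1 - 2 * states                   # bit 0 -> spin +1, bit 1 -> spin -1
# Ising ring energies E(z) = sum_i z_i z_{i+1}
energies = np.sum(spins * np.roll(spins, -1, axis=1), axis=1)

X = np.array([[0, 1], [1, 0]], dtype=complex)

def qaoa_probs(gamma, beta):
    """Output distribution of single-layer QAOA:
    |psi> = e^{-i beta sum_i X_i} e^{-i gamma H} |+>^n."""
    psi = np.full(2 ** n, 1 / np.sqrt(2 ** n), dtype=complex)
    psi *= np.exp(-1j * gamma * energies)                   # phase separation (H is diagonal)
    m1 = np.cos(beta) * np.eye(2) - 1j * np.sin(beta) * X   # single-qubit e^{-i beta X}
    mixer = m1
    for _ in range(n - 1):
        mixer = np.kron(mixer, m1)
    psi = mixer @ psi
    return np.abs(psi) ** 2

# Grid search for angles that minimize the expected energy; at the optimum,
# low-energy states carry more probability, the thermal-like bias of the letter.
best = min(((qaoa_probs(g, b) @ energies, g, b)
            for g in np.linspace(0, np.pi, 25)
            for b in np.linspace(0, np.pi, 25)),
           key=lambda t: t[0])
```

Plotting log-probability against energy at the optimal angles is the natural next step to see the (perturbed) linear Boltzmann profile the authors describe.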

Quantum activation functions for quantum neural networks

Marco Maronese, Claudio Destri, Enrico Prati · Jan 12 2022 · quant-ph cs.AI · arXiv:2201.03700v1

The field of artificial neural networks is expected to benefit strongly from recent developments in quantum computers. In particular, quantum machine learning, a class of quantum algorithms which exploit qubits to create trainable neural networks, will provide more power to solve problems such as pattern recognition, clustering, and machine learning in general. The building block of feed-forward neural networks consists of one layer of neurons connected to an output neuron that is activated according to an arbitrary activation function. The corresponding learning algorithm goes under the name of the Rosenblatt perceptron. Quantum perceptrons with specific activation functions are known, but a general method to realize arbitrary activation functions on a quantum computer is still lacking. Here we fill this gap with a quantum algorithm capable of approximating any analytic activation function to any given order of its power series. Unlike previous proposals providing irreversible measurement-based and simplified activation functions, here we show how to approximate any analytic function to any required accuracy without the need to measure the states encoding the information. Thanks to the generality of this construction, any feed-forward neural network may acquire the universal approximation properties according to Hornik's theorem. Our results recast the science of artificial neural networks in the architecture of gate-model quantum computers.
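The classical core of the idea is a truncated power series of the activation function. A minimal sketch for tanh (coefficients from its Maclaurin series; the quantum algorithm realizes such polynomial surrogates coherently, without measurement):

```python
import math

def taylor_tanh(x, order):
    """Truncated Maclaurin series of tanh up to the given odd order: the kind
    of polynomial surrogate the quantum algorithm realizes to a chosen order.
    tanh x = x - x^3/3 + 2x^5/15 - 17x^7/315 + ..."""
    coeffs = [1.0, -1 / 3, 2 / 15, -17 / 315]       # odd-power coefficients
    n_terms = (order + 1) // 2                       # order 7 -> 4 terms
    return sum(c * x ** (2 * k + 1) for k, c in enumerate(coeffs[:n_terms]))

approx = taylor_tanh(0.5, order=7)  # close to math.tanh(0.5) for small |x|
```

Raising the truncation order shrinks the error on a fixed interval, which is the "any required accuracy" knob in the abstract.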

Generalized quantum similarity learning

Santosh Kumar Radha, Casey Jao · Jan 10 2022 · quant-ph cs.LG stat.ML · arXiv:2201.02310v1

The similarity between objects is significant in a broad range of areas. While similarity can be measured using off-the-shelf distance functions, they may fail to capture the inherent meaning of similarity, which tends to depend on the underlying data and task. Moreover, conventional distance functions limit the space of similarity measures to symmetric ones and do not directly allow comparing objects from different spaces. We propose using quantum networks (GQSim) for learning task-dependent (a)symmetric similarity between data that need not have the same dimensionality. We analyze the properties of such similarity functions analytically (for a simple case) and numerically (for a complex case) and show that these similarity measures can extract salient features of the data. We also demonstrate that the similarity measure derived using this technique is (ϵ,γ,τ)-good, resulting in theoretically guaranteed performance. Finally, we conclude by applying this technique to three relevant applications: classification, graph completion, and generative modeling.
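A minimal sketch of how different embeddings on the two sides make a quantum-overlap similarity asymmetric (a toy single-qubit version with made-up parameter names; GQSim's actual networks and parameterization are in the paper):

```python
import numpy as np

def embed(x, theta):
    """Parameterized single-qubit embedding RY(theta * x)|0>; theta plays the
    role of trainable weights in the similarity network."""
    a = theta * x / 2
    return np.array([np.cos(a), np.sin(a)])

def similarity(x, y, theta_left, theta_right):
    """sim(x, y) = |<embed(x; theta_left) | embed(y; theta_right)>|^2.
    Distinct parameters on the two sides make the measure asymmetric."""
    return float(np.dot(embed(x, theta_left), embed(y, theta_right)) ** 2)

s_xy = similarity(0.7, 1.3, theta_left=0.9, theta_right=2.1)
s_yx = similarity(1.3, 0.7, theta_left=0.9, theta_right=2.1)  # differs from s_xy
```

With equal parameters on both sides the measure collapses back to a symmetric fidelity kernel, so symmetry is a learnable choice rather than a built-in constraint.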

Variational quantum simulation of valence-bond solids

Daniel Huerga · Jan 10 2022 · quant-ph cond-mat.str-el · arXiv:2201.02545v1

We introduce a hybrid quantum-classical variational algorithm to simulate ground-state phase diagrams of frustrated quantum spin models in the thermodynamic limit. The method is based on a cluster-Gutzwiller ansatz where the wave function of the cluster is provided by a parameterized quantum circuit. The key ingredient is a tunable real XY gate that generates valence bonds on nearest-neighbor qubits. Additional tunable single-qubit Z- and two-qubit ZZ-rotation gates permit the description of magnetically ordered phases while efficiently restricting the variational optimization to the U(1) symmetric subspace. We benchmark the method against the paradigmatic J1-J2 Heisenberg model on the square lattice, for which the present hybrid ansatz is an exact realization of the cluster-Gutzwiller ansatz with 4-qubit clusters. In particular, we describe the Néel order and its continuous quantum phase transition onto a valence-bond solid characterized by a periodic pattern of 2×2 strongly correlated plaquettes, providing a route to synthetically realize valence-bond solids with currently developed superconducting circuit devices.
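To illustrate the key ingredient, here is a minimal numpy/scipy sketch of a tunable XY gate, exp(−iθ(XX+YY)/2) (my parameterization, which may differ from the paper's convention): it rotates |01⟩ toward |10⟩, creating a valence-bond-like superposition, while conserving qubit number, consistent with the U(1) restriction mentioned above.

```python
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])

def xy_gate(theta):
    """Tunable two-qubit XY gate exp(-i theta (XX + YY) / 2)."""
    H = np.kron(X, X) + np.kron(Y, Y)
    return expm(-1j * theta / 2 * H)

ket01 = np.array([0, 1, 0, 0], dtype=complex)
# At theta = pi/4 the gate rotates |01> into an equal-weight superposition
# of |01> and |10>, a valence-bond-like entangled pair.
psi = xy_gate(np.pi / 4) @ ket01
```

Because (XX + YY) annihilates |00⟩ and |11⟩, the gate never leaks out of the fixed-excitation subspace, which is what keeps the variational search inside the U(1) symmetric sector.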

Assessment of the variational quantum eigensolver: application to the Heisenberg model

Manpreet Singh Jattana, Fengping Jin, Hans De Raedt, Kristel Michielsen · Jan 14 2022 · quant-ph · arXiv:2201.05065v1

We present and analyze large-scale simulation results of a hybrid quantum-classical variational method to calculate the ground state energy of the anti-ferromagnetic Heisenberg model. Using a massively parallel universal quantum computer simulator, we observe that a low-depth-circuit ansatz advantageously exploits the efficiently preparable Néel initial state, avoids potential barren plateaus, and works for both one- and two-dimensional lattices. The analysis reflects the decisive ingredients required for a simulation by comparing different ansätze, initial parameters, and gradient-based versus gradient-free optimizers. Extrapolation to the thermodynamic limit accurately yields the analytical value for the ground state energy, given by the Bethe ansatz. We predict that a fully functional quantum computer with 100 qubits can calculate the ground state energy with a relatively small error.
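As a toy analogue of the setup above (two sites instead of a lattice, and an illustrative one-parameter ansatz rather than the paper's circuits), here is a variational calculation for the two-site antiferromagnetic Heisenberg model that starts from the Néel state, as the abstract advocates:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Two-site antiferromagnetic Heisenberg Hamiltonian H = XX + YY + ZZ
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)
H = np.kron(X, X) + np.kron(Y, Y) + np.kron(Z, Z)

def energy(theta):
    """Variational energy of the ansatz cos(theta)|01> + sin(theta)|10>,
    reachable from the Neel state |01> with a single XY-type rotation."""
    psi = np.zeros(4, dtype=complex)
    psi[1], psi[2] = np.cos(theta), np.sin(theta)
    return float(np.real(psi.conj() @ H @ psi))

# The Neel state itself (theta = 0) has energy -1; the classical optimizer
# then drives the parameter toward the singlet ground state at energy -3.
res = minimize_scalar(energy, bounds=(-np.pi, np.pi), method="bounded")
```

The one-parameter ansatz here is exact for two sites; the paper's contribution is showing that shallow circuits plus the Néel initialization remain effective at scale, where barren plateaus threaten generic ansätze.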

Generating the optimal structures for parameterized quantum circuits by a meta-trained graph variational autoencoder

Chuangtao Chen, Zhimin He, Shenggen Zheng, Yan Zhou, Haozhen Situ · Jan 11 2022 · quant-ph · arXiv:2201.03309v1

Current structure optimization algorithms optimize the structure of a quantum circuit from scratch for each new task of variational quantum algorithms (VQAs) without using any prior experience, which is inefficient and time-consuming. Besides, the number of quantum gates is a hyperparameter of these algorithms, which is difficult and time-consuming to determine. In this paper, we propose a rapid structure optimization algorithm for VQAs which can automatically determine the number of quantum gates and directly generate optimal structures for new tasks with a graph variational autoencoder (VAE) meta-trained on a number of training tasks. We also develop a meta-trained predictor to filter out circuits with poor performance to further accelerate the algorithm. Simulation results show that the proposed method can output structures with lower loss than a state-of-the-art algorithm, namely DQAS, and needs only 1.4% of its running time.

Categories: Week-in-QML


