#50: Nov 27th – Dec 3rd

📰News:

📽Videos:

👨‍💻Developers:

Amazon Braket SDK v1.11.0

📗Papers:

Revisiting dequantization and quantum advantage in learning tasks

Jordan Cotler, Hsin-Yuan Huang, Jarrod R. McClean Dec 03 2021 quant-ph cs.IT cs.LG math.IT arXiv:2112.00811v1

It has been shown that the apparent advantage of some quantum machine learning algorithms may be efficiently replicated using classical algorithms with suitable data access — a process known as dequantization. Existing works on dequantization compare quantum algorithms which take copies of an n-qubit quantum state |x⟩ = Σ_i x_i |i⟩ as input to classical algorithms which have sample and query (SQ) access to the vector x. In this note, we prove that classical algorithms with SQ access can accomplish some learning tasks exponentially faster than quantum algorithms with quantum state inputs. Because classical algorithms are a subset of quantum algorithms, this demonstrates that SQ access can sometimes be significantly more powerful than quantum state inputs. Our findings suggest that the absence of exponential quantum advantage in some learning tasks may be due to SQ access being too powerful relative to quantum state inputs. If we compare quantum algorithms with quantum state inputs to classical algorithms with access to measurement data on quantum states, the landscape of quantum advantage can be dramatically different.
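
The sample-and-query (SQ) access model at the heart of dequantization arguments can be made concrete in a few lines. The following is a minimal illustrative sketch (not code from the paper): SQ access to a vector x means being able to query entries x_i, sample indices i with probability |x_i|²/‖x‖², and query the norm ‖x‖. The inner-product estimator at the end is a typical primitive in dequantized algorithms.

```python
import numpy as np

class SampleQueryAccess:
    """Minimal sketch of sample-and-query (SQ) access to a vector x:
    query entries, sample indices with probability |x_i|^2 / ||x||^2,
    and query the norm."""

    def __init__(self, x):
        self.x = np.asarray(x, dtype=float)
        self.norm = np.linalg.norm(self.x)
        self._probs = (self.x / self.norm) ** 2

    def query(self, i):
        return self.x[i]

    def sample(self, rng, size=None):
        return rng.choice(len(self.x), size=size, p=self._probs)

# Example: estimate <x, y> by importance sampling with SQ access to x only.
rng = np.random.default_rng(0)
x = rng.uniform(0.5, 1.5, size=512)
y = rng.normal(size=512)
sq = SampleQueryAccess(x)
idx = sq.sample(rng, size=100_000)
# E_{i ~ |x_i|^2/||x||^2}[ y_i / x_i ] * ||x||^2 = <x, y>
estimate = np.mean(y[idx] / x[idx]) * sq.norm ** 2
print("noisy estimate:", estimate, " exact:", x @ y)
```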

Performance comparison of optimization methods on variational quantum algorithms

Xavier Bonet-Monroig, Hao Wang, Diederick Vermetten, Bruno Senjean, Charles Moussa, Thomas Bäck, Vedran Dunjko, Thomas E. O’Brien Nov 29 2021 quant-ph arXiv:2111.13454v1

Variational quantum algorithms (VQAs) offer a promising path towards using near-term quantum hardware for applications in academic and industrial research. These algorithms aim to find approximate solutions to quantum problems by optimizing a parametrized quantum circuit using a classical optimization algorithm. A successful VQA requires fast and reliable classical optimization algorithms. Understanding and optimizing how off-the-shelf optimization methods perform in this context is important for the future of the field. In this work we study the performance of four commonly used gradient-free optimization methods: SLSQP, COBYLA, CMA-ES, and SPSA, at finding ground-state energies of a range of small chemistry and material science problems. We test a telescoping sampling scheme (where the accuracy of the cost-function estimate provided to the optimizer is increased as the optimization converges) on all methods, demonstrating mixed results across our range of optimizers and problems chosen. We further hyperparameter tune two of the four optimizers (CMA-ES and SPSA) across a large range of models, and demonstrate that with appropriate hyperparameter tuning, CMA-ES is competitive with and sometimes outperforms SPSA (which is not observed in the absence of hyperparameter tuning). Finally, we investigate the ability of an optimizer to beat the 'sampling noise floor', given by the sampling noise on each cost-function estimate provided to the optimizer. Our results demonstrate the necessity for tailoring and hyperparameter-tuning known optimization techniques for inherently-noisy variational quantum algorithms, and that the variational landscape that one finds in a VQA is highly problem- and system-dependent. This provides guidance for future implementations of these algorithms in experiment.
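
To illustrate the kind of comparison the paper performs, here is a hedged sketch that pits SciPy's COBYLA against a hand-rolled SPSA loop on a toy shot-noisy cost function (the chemistry and materials Hamiltonians, telescoping sampling scheme, and hyperparameter tuning of the paper are not reproduced here).

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

def noisy_cost(theta, shots=1000):
    """Toy stand-in for a sampled VQA cost: a smooth landscape plus
    shot noise whose scale shrinks as 1/sqrt(shots)."""
    exact = np.sum(np.sin(theta) ** 2)
    return exact + rng.normal(scale=1.0 / np.sqrt(shots))

theta0 = rng.uniform(-np.pi, np.pi, size=4)

# Gradient-free COBYLA from SciPy
res = minimize(noisy_cost, theta0, method="COBYLA", options={"maxiter": 200})
print("COBYLA final cost:", noisy_cost(res.x, shots=10**6))

# Minimal SPSA: two cost evaluations per iteration, standard gain schedules
def spsa(cost, theta, iters=200, a=0.2, c=0.1):
    for k in range(1, iters + 1):
        ak, ck = a / k**0.602, c / k**0.101
        delta = rng.choice([-1.0, 1.0], size=theta.shape)
        # ghat_i = (y+ - y-) / (2 c delta_i); delta_i is +-1 so divide == multiply
        ghat = (cost(theta + ck * delta) - cost(theta - ck * delta)) / (2 * ck) * delta
        theta = theta - ak * ghat
    return theta

theta_spsa = spsa(noisy_cost, theta0.copy())
print("SPSA final cost:  ", noisy_cost(theta_spsa, shots=10**6))
```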

Simulating thermal density operators with cluster expansions and tensor networks

Bram Vanhecke, David Devoogdt, Frank Verstraete, Laurens Vanderstraeten Dec 03 2021 cond-mat.str-el cond-mat.stat-mech quant-ph arXiv:2112.01507v1

We provide an efficient approximation for the exponential of a local operator in quantum spin systems using tensor-network representations of a cluster expansion. We benchmark this cluster tensor network operator (cluster TNO) for one-dimensional systems, and show that the approximation works well for large real- or imaginary-time steps. We use this formalism for representing the thermal density operator of a two-dimensional quantum spin system at a certain temperature as a single cluster TNO, which we can then contract by standard contraction methods for two-dimensional tensor networks. We apply this approach to the thermal phase transition of the transverse-field Ising model on the square lattice, and we find through a scaling analysis that the cluster-TNO approximation gives rise to a continuous phase transition in the correct universality class; by increasing the order of the cluster expansion we find good values of the critical point up to surprisingly low temperatures.
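
For orientation, the object the cluster TNO approximates, the thermal density operator exp(-βH), can be built exactly by brute force for a very small transverse-field Ising chain. This sketch is only a dense-matrix baseline under assumed couplings, not the cluster expansion or tensor-network contraction of the paper.

```python
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=float)
Z = np.array([[1, 0], [0, -1]], dtype=float)

def op_on(site_ops, n):
    """Tensor product over n sites, identity where no operator is given."""
    out = np.array([[1.0]])
    for i in range(n):
        out = np.kron(out, site_ops.get(i, I2))
    return out

def tfi_hamiltonian(n, J=1.0, g=0.5):
    """Open transverse-field Ising chain: H = -J sum Z_i Z_{i+1} - g sum X_i."""
    H = np.zeros((2**n, 2**n))
    for i in range(n - 1):
        H -= J * op_on({i: Z, i + 1: Z}, n)
    for i in range(n):
        H -= g * op_on({i: X}, n)
    return H

n, beta = 6, 1.0
H = tfi_hamiltonian(n)
rho = expm(-beta * H)
rho /= np.trace(rho)                    # thermal density operator at temperature 1/beta
print("thermal energy <H> =", np.trace(rho @ H))
```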

Generative Quantum Machine Learning

Christa Zoufal Nov 29 2021 quant-ph arXiv:2111.12738v1

The goal of generative machine learning is to model the probability distribution underlying a given data set. This probability distribution helps to characterize the generation process of the data samples. While classical generative machine learning is solely based on classical resources, generative quantum machine learning can also employ quantum resources – such as parameterized quantum channels and quantum operators – to learn and sample from the probability model of interest. Applications of generative (quantum) models are multifaceted. The trained model can generate new samples that are compatible with the given data and extend the data set. Additionally, learning a model for the generation process of a data set may provide interesting information about the corresponding properties. With the help of quantum resources, the respective generative models have access to functions that are difficult to evaluate with a classical computer and may improve the performance or lead to new insights. Furthermore, generative quantum machine learning can be applied to efficient, approximate loading of classical data into a quantum state, which may help to avoid – potentially exponentially – expensive, exact quantum data loading. The aim of this doctoral thesis is to develop new generative quantum machine learning algorithms, demonstrate their feasibility, and analyze their performance. Additionally, we outline their potential application to efficient, approximate quantum data loading. More specifically, we introduce a quantum generative adversarial network and a quantum Boltzmann machine implementation, both of which can be realized with parameterized quantum circuits. These algorithms are compatible with first-generation quantum hardware and, thus, enable us to study proof-of-concept implementations not only with numerical quantum simulations but also with real quantum hardware available today.
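
A minimal picture of the generative idea: a parameterized circuit whose measurement distribution is trained to match a target distribution. The toy two-qubit "Born machine" below, optimized with a classical gradient-free optimizer, is an illustrative sketch only, not the qGAN or quantum Boltzmann machine constructions of the thesis.

```python
import numpy as np
from scipy.optimize import minimize

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

def born_probs(params):
    """|psi|^2 of a 2-qubit circuit: RY layer, CNOT, RY layer."""
    t1, t2, t3, t4 = params
    psi = np.zeros(4); psi[0] = 1.0
    psi = np.kron(ry(t1), ry(t2)) @ psi
    psi = CNOT @ psi
    psi = np.kron(ry(t3), ry(t4)) @ psi
    return psi ** 2

target = np.array([0.5, 0.0, 0.0, 0.5])     # target: Bell-like outcome distribution

def kl(params, eps=1e-12):
    p = born_probs(params)
    return np.sum(target * np.log((target + eps) / (p + eps)))

res = minimize(kl, x0=np.random.default_rng(2).uniform(0, np.pi, 4),
               method="COBYLA", options={"maxiter": 500})
print("learned distribution:", np.round(born_probs(res.x), 3))
```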

Quantifying scrambling in quantum neural networks

Roy J. Garcia, Kaifeng Bu, Arthur Jaffe Dec 03 2021 quant-ph arXiv:2112.01440v1

We characterize a quantum neural network’s error in terms of the network’s scrambling properties via the out-of-time-ordered correlator. A network can be trained by optimizing either a loss function or a cost function. We show that, with some probability, both functions can be bounded by out-of-time-ordered correlators. The gradients of these functions can be bounded by the gradient of the out-of-time-ordered correlator, demonstrating that the network’s scrambling ability governs its trainability. Our results pave the way for the exploration of quantum chaos in quantum neural networks.
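
The out-of-time-ordered correlator is straightforward to evaluate by brute force for small systems. The sketch below computes the infinite-temperature OTOC F(t) = Tr[W(t) V W(t) V]/2^n for a small non-integrable Ising chain; it only illustrates the correlator itself and makes no attempt to reproduce the paper's quantum-neural-network bounds.

```python
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def op_on(site_ops, n):
    out = np.array([[1.0 + 0j]])
    for i in range(n):
        out = np.kron(out, site_ops.get(i, I2))
    return out

n = 5
# Mixed-field Ising chain (a standard chaotic model for OTOC demonstrations)
H = sum(-op_on({i: Z, i + 1: Z}, n) for i in range(n - 1)) \
    + sum(-1.05 * op_on({i: X}, n) for i in range(n)) \
    + sum(0.5 * op_on({i: Z}, n) for i in range(n))

W0 = op_on({0: Z}, n)          # local operator at the first site
V = op_on({n - 1: Z}, n)       # probe at the far end of the chain

for t in [0.0, 1.0, 2.0, 4.0]:
    U = expm(-1j * H * t)
    Wt = U.conj().T @ W0 @ U                        # Heisenberg-picture W(t)
    F = np.trace(Wt @ V @ Wt @ V).real / 2**n       # infinite-temperature OTOC
    print(f"t = {t}: F(t) = {F:.3f}")
```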

Infinite Neural Network Quantum States

Di Luo, James Halverson Dec 02 2021 quant-ph cond-mat.dis-nn cs.LG hep-th arXiv:2112.00723v1

We study infinite limits of neural network quantum states (∞-NNQS), which exhibit representation power through ensemble statistics, and also tractable gradient descent dynamics. Ensemble averages of Renyi entropies are expressed in terms of neural network correlators, and architectures that exhibit volume-law entanglement are presented. A general framework is developed for studying the gradient descent dynamics of neural network quantum states (NNQS), using a quantum state neural tangent kernel (QS-NTK). For ∞-NNQS the training dynamics is simplified, since the QS-NTK becomes deterministic and constant. An analytic solution is derived for quantum state supervised learning, which allows an ∞-NNQS to recover any target wavefunction. Numerical experiments on finite and infinite NNQS in the transverse field Ising model and Fermi Hubbard model demonstrate excellent agreement with theory. ∞-NNQS opens up new opportunities for studying entanglement and training dynamics in other physics applications, such as in finding ground states.

Neural Tangent Kernel of Matrix Product States: Convergence and Applications

Erdong Guo, David Draper Nov 30 2021 stat.ML cs.LG cs.NE quant-ph arXiv:2111.14046v1

In this work, we study the Neural Tangent Kernel (NTK) of Matrix Product States (MPS) and the convergence of its NTK in the infinite-bond-dimension limit. We prove that the NTK of MPS asymptotically converges to a constant matrix during the gradient descent (training) process (and also the initialization phase) as the bond dimensions of MPS go to infinity, by the observation that the variation of the tensors in MPS asymptotically goes to zero during training in the infinite limit. By showing the positive-definiteness of the NTK of MPS, the convergence of MPS during training in the function space (the space of functions represented by MPS) is guaranteed without any extra assumptions on the data set. We then consider the settings of (supervised) Regression with Mean Square Error (RMSE) and (unsupervised) Born Machines (BM) and analyze their dynamics in the infinite-bond-dimension limit. The ordinary differential equations (ODEs) which describe the dynamics of the responses of MPS in the RMSE and BM settings are derived and solved in closed form. For regression, we consider Mercer kernels (Gaussian kernels) and find that the evolution of the mean of the responses of MPS follows the largest eigenvalue of the NTK. Due to the orthogonality of the kernel functions in BM, the evolution of different modes (samples) decouples and the “characteristic time” of convergence in training is obtained.
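
An empirical version of the quantity studied can be computed for a tiny MPS by brute force. The sketch below (illustrative only; finite-difference Jacobian rather than the paper's analytic treatment) builds the NTK Gram matrix K(s, s') = Σ_p ∂ψ(s)/∂θ_p ∂ψ(s')/∂θ_p over a few spin configurations, where ψ(s) is the trace of a product of site matrices.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(3)
n_sites, D = 4, 3                         # sites and bond dimension
shape = (n_sites, 2, D, D)
theta = rng.normal(scale=0.3, size=np.prod(shape))

def psi(theta, config):
    """MPS amplitude: trace of the product of site matrices A[i, s_i]."""
    A = theta.reshape(shape)
    M = np.eye(D)
    for i, s in enumerate(config):
        M = M @ A[i, s]
    return np.trace(M)

def jacobian(theta, configs, eps=1e-6):
    """Finite-difference Jacobian d psi(s) / d theta_p for each configuration."""
    J = np.zeros((len(configs), theta.size))
    for p in range(theta.size):
        dp = np.zeros_like(theta); dp[p] = eps
        for c, s in enumerate(configs):
            J[c, p] = (psi(theta + dp, s) - psi(theta - dp, s)) / (2 * eps)
    return J

configs = list(product([0, 1], repeat=n_sites))[:6]   # a few spin configurations
J = jacobian(theta, configs)
ntk = J @ J.T                                          # empirical NTK Gram matrix
print(np.round(ntk, 3))
```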

Synthetic weather radar using hybrid quantum-classical machine learning

Graham R. Enos, Matthew J. Reagor, Maxwell P. Henderson, Christina Young, Kyle Horton, Mandy Birch, Chad Rigetti Dec 01 2021 quant-ph cs.LG arXiv:2111.15605v1

The availability of high-resolution weather radar images underpins effective forecasting and decision-making. In regions beyond traditional radar coverage, generative models have emerged as an important synthetic capability, fusing more ubiquitous data sources, such as satellite imagery and numerical weather models, into accurate radar-like products. Here, we demonstrate methods to augment conventional convolutional neural networks with quantum-assisted models for generative tasks in global synthetic weather radar. We show that quantum kernels can, in principle, perform fundamentally more complex tasks than classical learning machines on the relevant underlying data. Our results establish synthetic weather radar as an effective heuristic benchmark for quantum computing capabilities and set the stage for detailed quantum advantage benchmarking on a high-impact operationally relevant problem.
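
The "quantum kernel" the authors refer to can be illustrated with a simulated feature-map overlap. This sketch uses a simple angle-encoding feature map evaluated by statevector simulation and computes k(x, x') = |⟨φ(x)|φ(x')⟩|²; it is not the specific kernel, feature map, or data used in the paper.

```python
import numpy as np

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def feature_state(x):
    """Angle-encoding feature map: one RY rotation per feature (product state)."""
    psi = np.array([1.0])
    for xi in x:
        psi = np.kron(psi, ry(xi) @ np.array([1.0, 0.0]))
    return psi

def quantum_kernel(x1, x2):
    """Fidelity kernel |<phi(x1)|phi(x2)>|^2 between encoded states."""
    return np.abs(feature_state(x1) @ feature_state(x2)) ** 2

rng = np.random.default_rng(4)
X = rng.uniform(0, np.pi, size=(5, 3))        # 5 samples, 3 features each
K = np.array([[quantum_kernel(a, b) for b in X] for a in X])
print(np.round(K, 3))
```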

Mitigating Noise-Induced Gradient Vanishing in Variational Quantum Algorithm Training

Anbang Wu, Gushu Li, Yufei Ding, Yuan Xie Nov 29 2021 quant-ph cs.LG arXiv:2111.13209v1

Variational quantum algorithms are expected to demonstrate the advantage of quantum computing on near-term noisy quantum computers. However, training such variational quantum algorithms suffers from gradient vanishing as the size of the algorithm increases. Previous work cannot handle the gradient vanishing induced by the inevitable noise effects on realistic quantum hardware. In this paper, we propose a novel training scheme to mitigate such noise-induced gradient vanishing. We first introduce a new cost function whose gradients are significantly augmented by employing traceless observables in a truncated subspace. We then prove that the same minimum can be reached by optimizing the original cost function with the gradients from the new cost function. Experiments show that our new training scheme is highly effective for major variational quantum algorithms on various tasks.

EQC: Ensembled Quantum Computing for Variational Quantum Algorithms

Samuel Stein, Yufei Ding, Nathan Wiebe, Bo Peng, Karol Kowalski, Nathan Baker, James Ang, Ang Li Dec 01 2021 quant-ph arXiv:2111.14940v1

Variational quantum algorithms (VQAs), which comprise a classical optimizer and a parameterized quantum circuit, have emerged as one of the most promising approaches for harvesting the power of quantum computers in the noisy intermediate-scale quantum (NISQ) era. However, the deployment of VQAs on contemporary NISQ devices often faces considerable system and time-dependent noise and prohibitively slow training speeds. On the other hand, the expensive supporting resources and infrastructure make high utilization of quantum computers essential. In this paper, we propose a virtualized way of building up a quantum backend for variational quantum algorithms: rather than relying on a single physical device, which tends to introduce time-dependent, device-specific noise with worsening performance as time since calibration grows, we propose to constitute a quantum ensemble, which dynamically distributes quantum tasks asynchronously across a set of physical devices and adjusts the ensemble configuration with respect to machine status. In addition to reducing machine-dependent noise, the ensemble can provide significant speedups for VQA training. With this idea, we build a novel VQA training framework called EQC that comprises: (i) a system architecture for asynchronous parallel VQA cooperative training; (ii) an analytic model for assessing the quality of the returned VQA gradient over a particular device concerning its architecture, transpilation, and runtime conditions; (iii) a weighting mechanism to adjust the quantum ensemble’s computational contribution according to the systems’ current performance. Evaluations comprising 500K circuit evaluations across 10 IBMQ devices using VQE and QAOA applications demonstrate that EQC can attain error rates close to the most performant device of the ensemble, while boosting the training speed by 10.5x on average (up to 86x and at least 5.2x).
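
The weighting idea behind EQC can be sketched independently of any hardware: combine gradient estimates returned by several (here simulated) backends, weighting each by a proxy for its current calibration quality. This is an illustrative reconstruction under assumed device names and error figures, not the authors' implementation or analytic quality model.

```python
import numpy as np

rng = np.random.default_rng(5)
true_grad = np.array([0.8, -0.3, 0.1])

# Simulated backends: each returns the gradient corrupted by device-specific
# noise; "err" stands in for the device's current error rate.
backends = [
    {"name": "device_a", "err": 0.02},
    {"name": "device_b", "err": 0.05},
    {"name": "device_c", "err": 0.15},
]

def query_backend(b):
    return true_grad + rng.normal(scale=b["err"] * 5, size=true_grad.shape)

# Inverse-error weighting: lower error rate -> larger contribution to the update
weights = np.array([1.0 / b["err"] for b in backends])
weights /= weights.sum()

grads = np.array([query_backend(b) for b in backends])
ensemble_grad = weights @ grads
print("weights:      ", np.round(weights, 3))
print("ensemble grad:", np.round(ensemble_grad, 3))
print("plain average:", np.round(grads.mean(axis=0), 3))
```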

Towards Efficient Ansatz Architecture for Variational Quantum Algorithms

Anbang Wu, Gushu Li, Yuke Wang, Boyuan Feng, Yufei Ding, Yuan Xie Nov 30 2021 quant-ph cs.LG arXiv:2111.13730v1

Solving systems of Boolean multivariate equations with quantum annealing

Sergi Ramos-Calderer, Carlos Bravo-Prieto, Ruge Lin, Emanuele Bellini, Marc Manzano, Najwa Aaraj, José I. Latorre Nov 29 2021 quant-ph arXiv:2111.13224v1

Polynomial systems over the binary field have important applications, especially in symmetric and asymmetric cryptanalysis, multivariate-based post-quantum cryptography, coding theory, and computer algebra. In this work, we study the quantum annealing model for solving Boolean systems of multivariate equations of degree 2, usually referred to as the Multivariate Quadratic problem. We present different methodologies to embed the problem into a Hamiltonian that can be solved by available quantum annealing platforms. In particular, we provide three embedding options, and we highlight their differences in terms of quantum resources. Moreover, we design a machine-agnostic algorithm that adopts an iterative approach to better solve the problem Hamiltonian by repeatedly reducing the search space. Finally, we use D-Wave devices to successfully implement our methodologies on several instances of the Multivariate Quadratic problem.
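
One way to see what "embedding the problem into a Hamiltonian" means in practice: the degree-2 Boolean equation x1·x2 ⊕ x3 = 1 can be turned into a QUBO by introducing an ancilla y = x1·x2 through the standard AND penalty and then penalizing y ⊕ x3 ≠ 1. This is an illustrative toy construction, not one of the paper's three embedding options.

```python
from itertools import product

def penalty(x1, x2, x3, y):
    """Quadratic penalty that is 0 exactly when x1*x2 XOR x3 == 1."""
    # AND gadget: zero iff y == x1*x2, otherwise >= 1
    p_and = 3 * y + x1 * x2 - 2 * y * (x1 + x2)
    # Enforce y XOR x3 == 1: since y XOR x3 = y + x3 - 2*y*x3 for binary
    # variables, (y XOR x3 - 1)^2 simplifies to the quadratic term below.
    p_eq = 1 - y - x3 + 2 * y * x3
    return p_and + p_eq

print("assignments with zero penalty (ground states of the QUBO):")
for x1, x2, x3, y in product([0, 1], repeat=4):
    if penalty(x1, x2, x3, y) == 0:
        satisfied = ((x1 * x2) ^ x3) == 1
        print(f"x = ({x1},{x2},{x3}), ancilla y = {y}, equation satisfied: {satisfied}")
```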

QAOA of the Highest Order

Colin Campbell, Edward Dahl Nov 29 2021 quant-ph arXiv:2111.12754v1

The Quantum Approximate Optimization Algorithm (QAOA) has been one of the leading candidates for near-term quantum advantage in gate-model quantum computers. From its inception, this algorithm has sparked the desire for comparison between gate-model and annealing platforms. When preparing problem statements for these algorithms, the predominant inclination has been to formulate a quadratic Hamiltonian. This paper gives an example of a graph coloring problem that, depending on its variable encoding scheme, optionally admits higher-order terms, and presents evidence that the higher-order formulation is preferable to two other encoding schemes. This evidence then motivates an analysis of the scaling behavior of QAOA in the higher-order formulation for an ensemble of graph coloring problems.
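
Higher-order cost terms remain cheap to implement as QAOA phase separators: a cubic term exp(-iγ Z⊗Z⊗Z) compiles to a CNOT ladder around a single RZ rotation. The sketch below verifies this standard identity numerically; it is a generic illustration, not the paper's graph-coloring Hamiltonians.

```python
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2, dtype=complex)
Z = np.diag([1, -1]).astype(complex)

def cnot(control, target, n=3):
    """Full n-qubit CNOT matrix built by permuting computational basis states."""
    U = np.zeros((2**n, 2**n), dtype=complex)
    for b in range(2**n):
        bits = [(b >> (n - 1 - q)) & 1 for q in range(n)]
        if bits[control]:
            bits[target] ^= 1
        U[int("".join(map(str, bits)), 2), b] = 1
    return U

def rz(theta, qubit, n=3):
    """RZ(theta) = exp(-i theta Z / 2) acting on one qubit of an n-qubit register."""
    ops = [I2] * n
    ops[qubit] = expm(-1j * theta / 2 * Z)
    out = np.array([[1.0 + 0j]])
    for o in ops:
        out = np.kron(out, o)
    return out

gamma = 0.37
target = expm(-1j * gamma * np.kron(np.kron(Z, Z), Z))

# CNOT ladder onto qubit 2, RZ(2*gamma) on qubit 2, then uncompute the ladder
circuit = cnot(0, 2) @ cnot(1, 2) @ rz(2 * gamma, 2) @ cnot(1, 2) @ cnot(0, 2)
print("max |difference| =", np.max(np.abs(circuit - target)))
```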

Einstein-Podolsky-Rosen steering based on semi-supervised machine learning

Lifeng Zhang, Zhihua Chen, Shao-Ming Fei Dec 02 2021 quant-ph arXiv:2112.00455v1

Einstein-Podolsky-Rosen (EPR) steering is a powerful nonlocal quantum resource in quantum information processing tasks such as quantum cryptography and quantum communication. Many criteria have been proposed in the past few years to detect steerability, both analytically and numerically. Supervised machine learning models such as support vector machines and neural networks have also been trained to detect EPR steerability. Implementing supervised machine learning, however, requires a large number of quantum states labeled via semidefinite programming, which is very time consuming. We present a semi-supervised support vector machine method which uses only a small portion of labeled quantum states to detect quantum steering. We show through detailed examples that our approach can significantly improve the detection accuracy.
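
The semi-supervised step can be illustrated with scikit-learn's self-training wrapper around an SVM. The sketch below uses synthetic two-class data in place of steering features, with 90% of labels hidden (marked -1), so it only illustrates the training scheme; the feature construction and SDP labeling of the paper are not reproduced.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(6)
X, y = make_classification(n_samples=600, n_features=10, n_informative=6,
                           random_state=0)

# Hide 90% of the labels: unlabeled points are marked with -1
y_partial = y.copy()
mask = rng.random(len(y)) < 0.9
y_partial[mask] = -1

base = SVC(kernel="rbf", probability=True, gamma="scale")
model = SelfTrainingClassifier(base)          # iteratively pseudo-labels unlabeled data
model.fit(X, y_partial)

# Baseline: an SVM trained on the small labeled subset only
svm_small = SVC(kernel="rbf", gamma="scale").fit(X[~mask], y[~mask])
print("self-training accuracy:", model.score(X, y))
print("labeled-only accuracy: ", svm_small.score(X, y))
```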

Discriminating Quantum States with Quantum Machine Learning

David Quiroga, Prasanna Date, Raphael C. Pooser Dec 02 2021 quant-ph cs.LG arXiv:2112.00313v1

Quantum machine learning (QML) algorithms have obtained great relevance in the machine learning (ML) field due to the promise of quantum speedups when performing basic linear algebra subroutines (BLAS), a fundamental element in most ML algorithms. By making use of BLAS operations, we propose, implement and analyze a quantum k-means (qk-means) algorithm with a low time complexity of O(NK log(D) I/C) and apply it to the fundamental problem of discriminating quantum states at readout. Discriminating quantum states allows the identification of the quantum states |0⟩ and |1⟩ from low-level in-phase and quadrature (IQ) signal data, and can be done using custom ML models. In order to reduce dependency on a classical computer, we use the qk-means algorithm to perform state discrimination on the IBMQ Bogota device and find assignment fidelities of up to 98.7%, only marginally lower than those of the classical k-means algorithm. Inspection of the assignment fidelity scores obtained by applying both algorithms to a combination of quantum states showed concordance with our correlation analysis using Pearson correlation coefficients, where evidence shows cross-talk in the (1, 2) and (2, 3) neighboring qubit pairs for the analyzed device.
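
The classical baseline against which qk-means is compared can be reproduced in a few lines: cluster simulated in-phase/quadrature (IQ) readout blobs for states |0⟩ and |1⟩ with ordinary k-means and measure the assignment fidelity. This sketch uses synthetic Gaussian IQ blobs, not the IBMQ Bogota data, and the blob centers and spreads are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)

# Synthetic IQ readout data: one Gaussian blob per prepared state
n = 2000
iq_state0 = rng.normal(loc=[-1.0, 0.0], scale=0.4, size=(n, 2))
iq_state1 = rng.normal(loc=[+1.0, 0.3], scale=0.4, size=(n, 2))
X = np.vstack([iq_state0, iq_state1])
labels = np.array([0] * n + [1] * n)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
pred = km.labels_

# Assignment fidelity: best accuracy over the two possible cluster-to-label mappings
fidelity = max(np.mean(pred == labels), np.mean(pred == 1 - labels))
print(f"assignment fidelity: {fidelity:.3f}")
```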

Categories: Week-in-QML
