📰 News:

- Google claims “quantum advantage” for machine learning
- Using machine learning to narrow down the possibilities for a better quantum tunneling interface
- With the help of machine learning, NoVa’s QCI wants to change how we think about quantum
- UK’s first 32-qubit quantum computer unveiled: promoting machine learning material simulation
- Stabilizing quantum GANs

🎥 Videos:

- What is quantum machine learning and how can we use it?, with Luis Serrano
- Overview of Quantum Machine Learning
- Amazon re:MARS 2022 – Quantum machine learning: Prospects and challenges (MLR337)
- Quantum Advantage in Learning from Experiments | Qiskit Seminar Series
- Introduction to photonic quantum machine learning – Dániel Nagy
- Machine Learning Based Control of Quantum Devices | Quantum Technology User Meeting

📄 Papers:

## Laziness, Barren Plateau, and Noise in Machine Learning

Junyu Liu, Zexi Lin, Liang Jiang

Jun 22 2022 cs.LG cs.AI quant-ph stat.ML arXiv:2206.09313v1

We define *laziness* to describe a large suppression of variational parameter updates in neural networks, classical or quantum. In the quantum case, the suppression is exponential in the number of qubits for randomized variational quantum circuits. We discuss the difference between laziness and the *barren plateau* phenomenon, introduced by quantum physicists in McClean et al. (2018) to describe the flatness of the loss function landscape during gradient descent in quantum machine learning. We present a novel theoretical understanding of the two phenomena in light of the theory of neural tangent kernels. For noiseless quantum circuits without measurement noise, the loss function landscape is complicated in the overparametrized regime with a large number of trainable variational angles. Instead, around a random starting point in optimization, there are large numbers of local minima that are good enough to minimize the mean-square loss function; there we still have quantum laziness, but we do not have barren plateaus. However, the complicated landscape is not visible within a limited number of iterations, or at the low precision available in quantum control and quantum sensing. Moreover, we examine the effect of noise during optimization under intuitive noise models and show that variational quantum algorithms are noise-resilient in the overparametrized regime. Our work reformulates the quantum barren plateau statement as a precision statement and justifies it in certain noise models, injects new hope into near-term variational quantum algorithms, and provides theoretical connections to classical machine learning. Our paper offers conceptual perspectives on quantum barren plateaus, together with discussions of the gradient descent dynamics.
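
The exponential gradient suppression described above can be seen directly in simulation. Below is a minimal numpy sketch (my own illustration, not the authors' code; the RY-plus-CZ ansatz, depth, and sample count are arbitrary choices) that estimates the variance of a parameter-shift gradient of ⟨Z⟩ on qubit 0 over random initializations and shows it collapsing as qubits are added:

```python
import numpy as np

def apply_1q(state, gate, qubit, n):
    # apply a 2x2 gate to one qubit of an n-qubit statevector
    state = state.reshape([2] * n)
    state = np.moveaxis(np.tensordot(gate, state, axes=([1], [qubit])), 0, qubit)
    return state.reshape(-1)

def apply_cz(state, a, b, n):
    # controlled-Z: flip the sign of amplitudes where qubits a and b are both 1
    s = state.reshape([2] * n).copy()
    idx = [slice(None)] * n
    idx[a] = idx[b] = 1
    s[tuple(idx)] *= -1
    return s.reshape(-1)

def ry(t):
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2),  np.cos(t / 2)]])

def expval_z0(thetas, n, layers):
    # <Z> on qubit 0 after a layered random RY + chain-of-CZ circuit
    state = np.zeros(2 ** n); state[0] = 1.0
    k = 0
    for _ in range(layers):
        for q in range(n):
            state = apply_1q(state, ry(thetas[k]), q, n); k += 1
        for q in range(n - 1):
            state = apply_cz(state, q, q + 1, n)
    probs = np.abs(state.reshape(2, -1)) ** 2
    return probs[0].sum() - probs[1].sum()

def grad_variance(n, layers=8, samples=200, seed=0):
    # variance over random initializations of the parameter-shift gradient
    rng = np.random.default_rng(seed)
    grads = []
    for _ in range(samples):
        t = rng.uniform(0, 2 * np.pi, n * layers)
        plus, minus = t.copy(), t.copy()
        plus[0] += np.pi / 2; minus[0] -= np.pi / 2
        grads.append(0.5 * (expval_z0(plus, n, layers) - expval_z0(minus, n, layers)))
    return float(np.var(grads))
```

With this ansatz, `grad_variance(6)` comes out well below `grad_variance(2)`: the flat-landscape signature that both laziness and barren plateaus stem from.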

## Classical surrogates for quantum learning models

Franz J. Schreiber, Jens Eisert, Johannes Jakob Meyer

Jun 24 2022 quant-ph cs.AI cs.LG arXiv:2206.11740v1

The advent of noisy intermediate-scale quantum computers has put the search for possible applications at the forefront of quantum information science. One area where hopes for an advantage through near-term quantum computers are high is quantum machine learning, where variational quantum learning models based on parametrized quantum circuits are discussed. In this work, we introduce the concept of a classical surrogate, a classical model which can be efficiently obtained from a trained quantum learning model and reproduces its input-output relations. As inference can be performed classically, the existence of a classical surrogate greatly enhances the applicability of a quantum learning strategy. However, the classical surrogate also challenges possible advantages of quantum schemes: as it is possible to directly optimize the ansatz of the classical surrogate, it creates a natural benchmark that the quantum model has to outperform. We show that large classes of well-analyzed re-uploading models have a classical surrogate. We conducted numerical experiments and found that these quantum models show no advantage in performance or trainability on the problems we analyze. This leaves only generalization capability as a possible point of quantum advantage and emphasizes the dire need for a better understanding of the inductive biases of quantum learning models.
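
The surrogate construction for re-uploading models rests on the fact that such a model's output is a truncated Fourier series in its inputs. A toy numpy sketch (my own illustration, not the paper's code): a single-qubit re-uploading circuit with L encoding layers is reproduced exactly by least-squares regression on degree-L Fourier features.

```python
import numpy as np

L = 3                                        # number of data re-uploading layers
rng = np.random.default_rng(1)
thetas = rng.uniform(0, 2 * np.pi, L + 1)    # trainable angles (here just random)

def rz(a):
    return np.array([[np.exp(-1j * a / 2), 0], [0, np.exp(1j * a / 2)]])

def ry(a):
    return np.array([[np.cos(a / 2), -np.sin(a / 2)],
                     [np.sin(a / 2),  np.cos(a / 2)]], dtype=complex)

def quantum_model(x):
    # single-qubit circuit: trainable RY layers interleaved with RZ(x) encodings
    psi = ry(thetas[0]) @ np.array([1, 0], dtype=complex)
    for l in range(L):
        psi = ry(thetas[l + 1]) @ rz(x) @ psi
    return abs(psi[0]) ** 2 - abs(psi[1]) ** 2   # <Z>

def features(x):
    # Fourier features up to frequency L: the model's outputs live in their span
    return np.concatenate([[1.0],
                           np.cos(np.arange(1, L + 1) * x),
                           np.sin(np.arange(1, L + 1) * x)])

xs = np.linspace(0, 2 * np.pi, 50)
Phi = np.array([features(x) for x in xs])
ys = np.array([quantum_model(x) for x in xs])
coef, *_ = np.linalg.lstsq(Phi, ys, rcond=None)   # the classical surrogate

surrogate_error = max(abs(features(x) @ coef - quantum_model(x))
                      for x in np.linspace(0.05, 6.0, 25))
```

The surrogate then matches the quantum model to numerical precision on unseen inputs, which is exactly why it becomes the benchmark the quantum model has to beat.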

## Random tensor networks with nontrivial links

Newton Cheng, Cécilia Lancien, Geoff Penington, Michael Walter, Freek Witteveen

Jun 22 2022 quant-ph hep-th math-ph math.MP math.PR arXiv:2206.10482v1

Random tensor networks are a powerful toy model for understanding the entanglement structure of holographic quantum gravity. However, unlike holographic quantum gravity, their entanglement spectra are flat. It has therefore been argued that a better model consists of random tensor networks with link states that are not maximally entangled, i.e., have nontrivial spectra. In this work, we initiate a systematic study of the entanglement properties of these networks. We employ tools from free probability, random matrix theory, and one-shot quantum information theory to study random tensor networks with bounded and unbounded variation in link spectra, and in cases where a subsystem has one or multiple minimal cuts. If the link states have bounded spectral variation, the limiting entanglement spectrum of a subsystem with two minimal cuts can be expressed as a free product of the entanglement spectra of each cut, along with a Marchenko-Pastur distribution. For a class of states with unbounded spectral variation, analogous to semiclassical states in quantum gravity, we relate the limiting entanglement spectrum of a subsystem with two minimal cuts to the distribution of the minimal entanglement across the two cuts. In doing so, we draw connections to previous work on split transfer protocols, entanglement negativity in random tensor networks, and Euclidean path integrals in quantum gravity.
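
The free-probability toolkit used here can be previewed in the simplest random-state calculation (my own toy, not one of the paper's network setups): for a Haar-random pure state on C^d ⊗ C^d, the rescaled entanglement spectrum converges to the Marchenko-Pastur distribution, the same distribution that enters the paper's free-product description of two-minimal-cut spectra.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 200

# a random pure state on C^d (x) C^d, written as a d x d coefficient matrix
M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
M /= np.linalg.norm(M)              # normalize the state

rho = M @ M.conj().T                # reduced density matrix of subsystem A
spectrum = np.linalg.eigvalsh(rho)  # entanglement spectrum: not flat

# rescaled by d, the eigenvalues follow Marchenko-Pastur with ratio 1:
# support [0, 4], mean 1, second moment 2
scaled = d * spectrum
```

A flat spectrum would put every rescaled eigenvalue at 1; the spread seen here is the kind of nontrivial spectrum the paper's tools track through networks with one or more minimal cuts.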

## Hyperparameter Importance of Quantum Neural Networks Across Small Datasets

Charles Moussa, Jan N. van Rijn, Thomas Bäck, Vedran Dunjko

Jun 22 2022 quant-ph cs.LG arXiv:2206.09992v1

As restricted quantum computers are slowly becoming a reality, the search for meaningful first applications intensifies. In this domain, one of the more investigated approaches is the use of a special type of quantum circuit, a so-called quantum neural network, to serve as the basis for a machine learning model. Roughly speaking, as the name suggests, a quantum neural network can play a similar role to a neural network. However, specifically for applications in machine learning contexts, very little is known about suitable circuit architectures or the model hyperparameters one should use to achieve good learning performance. In this work, we apply the functional ANOVA framework to quantum neural networks to analyze which hyperparameters were most influential for their predictive performance. We analyze one of the most typically used quantum neural network architectures and apply it to 7 open-source datasets from the OpenML-CC18 classification benchmark whose number of features is small enough to fit on quantum hardware with fewer than 20 qubits. Three main levels of importance were detected from the ranking of hyperparameters obtained with functional ANOVA. Our experiment both confirmed expected patterns and revealed new insights. For instance, setting the learning rate well is deemed the most critical hyperparameter in terms of marginal contribution on all datasets, whereas the particular choice of entangling gates used is considered the least important, except on one dataset. This work introduces new methodologies to study quantum machine learning models and provides new insights toward quantum model selection.
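
The core of functional ANOVA is a variance decomposition of performance over the hyperparameter grid. A toy sketch with made-up scores (hypothetical values chosen so that the learning rate dominates, mirroring the paper's headline finding; the real framework also models interaction effects over the full configuration space):

```python
import itertools
import numpy as np

# hypothetical accuracy table over two hyperparameters (made-up numbers)
learning_rates = [0.001, 0.01, 0.1]
entanglers = ["cz", "cx", "iswap"]
score = {(lr, e): 0.5 + 0.15 * np.log10(lr / 0.001) + 0.01 * entanglers.index(e)
         for lr, e in itertools.product(learning_rates, entanglers)}

values = np.array([[score[(lr, e)] for e in entanglers] for lr in learning_rates])
total_var = values.var()

# main-effect (marginal) variance fraction of each hyperparameter
lr_importance = values.mean(axis=1).var() / total_var
entangler_importance = values.mean(axis=0).var() / total_var
```

For this additive table the two fractions sum to one and the learning rate accounts for nearly all of the variance; on real data the residual would be attributed to interaction effects.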

## Open Source Variational Quantum Eigensolver Extension of the Quantum Learning Machine (QLM) for Quantum Chemistry

Mohammad Haidar, Marko J. Rančić, Thomas Ayral, Yvon Maday, Jean-Philip Piquemal

Jun 20 2022 quant-ph physics.chem-ph arXiv:2206.08798v3

Quantum Chemistry (QC) is one of the most promising applications of quantum computing. However, present quantum processing units (QPUs) are still subject to large errors, and noisy intermediate-scale quantum (NISQ) hardware is limited in qubit counts and circuit depths. Specific algorithms such as Variational Quantum Eigensolvers (VQEs) can potentially overcome these issues. We introduce here a novel open-source QC package, denoted Open-VQE, providing tools for using and developing chemically inspired adaptive methods derived from Unitary Coupled Cluster (UCC). It facilitates the development and testing of VQE algorithms and is able to use the Atos Quantum Learning Machine (QLM), a general quantum programming framework that enables writing, optimizing, and simulating quantum computing programs. Along with Open-VQE, we introduce myQLM-Fermion, a new open-source module that includes the key QLM resources important for QC developments (fermionic second-quantization tools, etc.). The Open-VQE package therefore extends QLM to QC, providing: (i) functions to generate the different types of excitations beyond the commonly used UCCSD ansatz; (ii) a new implementation of the “adaptive derivative assembled pseudo-Trotter” method (ADAPT-VQE), written in simple class-structured Python code. Interoperability with other major quantum programming frameworks is ensured thanks to myQLM, which allows users to easily build their own code and execute it on existing QPUs. The combined Open-VQE/myQLM-Fermion quantum simulator facilitates the implementation, testing, and development of variational quantum algorithms, helping to choose the best compromise for running QC computations on present quantum computers while offering the possibility to test large molecules. We provide extensive benchmarks for several molecules with qubit counts ranging from 4 up to 24.
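
The ADAPT-VQE loop mentioned in (ii) is easy to sketch at the matrix level: repeatedly pick the pool operator with the largest energy gradient, append it as a rotation, and solve the new angle. A self-contained numpy toy (hypothetical 2-qubit Hamiltonian and Pauli pool of my own choosing; Open-VQE itself builds fermionic UCC-style pools and QLM circuits):

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# toy 2-qubit Hamiltonian (not a real molecule)
H = np.kron(Z, Z) + 0.2 * np.kron(X, I2) + 0.2 * np.kron(I2, X)

# operator pool of Pauli strings, so each pool member squares to the identity
pool = [np.kron(Y, I2), np.kron(I2, Y), np.kron(Y, X),
        np.kron(X, Y), np.kron(Y, Z), np.kron(Z, Y)]

psi = np.zeros(4, dtype=complex); psi[0] = 1.0      # reference state |00>
energies = [(psi.conj() @ H @ psi).real]

for _ in range(4):
    # gradient of <H> w.r.t. a fresh rotation exp(-i t A) at t = 0
    grads = [2 * (psi.conj() @ H @ (A @ psi)).imag for A in pool]
    A = pool[int(np.argmax(np.abs(grads)))]
    # since A^2 = I, <H>(t) = a cos^2 t + b sin^2 t + c sin 2t: minimize exactly
    a = (psi.conj() @ H @ psi).real
    b = (psi.conj() @ A @ H @ A @ psi).real
    c = (psi.conj() @ H @ (A @ psi)).imag
    t = 0.5 * np.arctan2(-c, -(a - b) / 2)
    psi = np.cos(t) * psi - 1j * np.sin(t) * (A @ psi)
    energies.append((psi.conj() @ H @ psi).real)
```

Each iteration can only lower the variational energy, which stays above the exact ground state; the real package replaces the exact angle solve with standard optimizers and screens a chemically motivated pool.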

## Quantum Approximation of Normalized Schatten Norms and Applications to Learning

Yiyou Chen, Hideyuki Miyahara, Louis-S. Bouchard, Vwani Roychowdhury

Jun 24 2022 quant-ph cs.LG arXiv:2206.11506v1

Efficient measures to determine the similarity of quantum states, such as the fidelity metric, have been widely studied. In this paper, we address the problem of defining a similarity measure for quantum operations that can be *efficiently estimated*. Given two quantum operations, U_1 and U_2, represented in their circuit forms, we first develop a quantum sampling circuit to estimate the normalized Schatten 2-norm of their difference, ‖U_1 − U_2‖_{S_2}, with precision ε, using only one clean qubit and one classical random variable. We prove a Poly(1/ε) upper bound on the sample complexity, which is independent of the size of the quantum system. We then show that this similarity metric is directly related to a functional definition of similarity of unitary operations based on the conventional fidelity metric of quantum states F: if ‖U_1 − U_2‖_{S_2} is sufficiently small (e.g., ≤ ε/(1 + √(2(1/δ − 1)))), then the fidelity of the states obtained by applying the two operations to the same randomly and uniformly picked pure state |ψ⟩ is as high as needed, F(U_1|ψ⟩, U_2|ψ⟩) ≥ 1 − ε, with probability exceeding 1 − δ. We provide example applications of this efficient similarity-metric estimation framework to quantum circuit learning tasks, such as finding the square root of a given unitary operation.
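
The norm-to-fidelity relationship can be checked classically on small matrices (a numpy sketch of my own; the paper's contribution is estimating the norm on a quantum device with one clean qubit, which this does not attempt):

```python
import numpy as np

rng = np.random.default_rng(4)
d = 8

def haar_unitary(d):
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))   # phase fix -> Haar measure

U1 = haar_unitary(d)
Hm = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
Hm = (Hm + Hm.conj().T) / 2
w, v = np.linalg.eigh(Hm)
U2 = U1 @ ((v * np.exp(0.05j * w)) @ v.conj().T)   # U1 times a small unitary

D = U1 - U2
# normalized Schatten 2-norm: ||D||^2 = Tr(D^dag D)/d = 2 - (2/d) Re Tr(U1^dag U2)
s2_squared = (np.trace(D.conj().T @ D) / d).real

# for every pure state, 1 - F(U1|psi>, U2|psi>) <= ||D|psi>||^2, because
# |1 - z|^2 >= 0 gives 2 - 2 Re z >= 1 - |z|^2 for z = <U1 psi|U2 psi>
worst_gap = 0.0
for _ in range(500):
    psi = rng.normal(size=d) + 1j * rng.normal(size=d)
    psi /= np.linalg.norm(psi)
    infidelity = 1 - abs(np.vdot(U1 @ psi, U2 @ psi)) ** 2
    worst_gap = max(worst_gap, infidelity - np.linalg.norm(D @ psi) ** 2)
```

Averaged over Haar-random states, ‖D|ψ⟩‖² equals the squared normalized Schatten 2-norm, which is why a small norm forces high fidelity for most states.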

## Supervised learning of random quantum circuits via scalable neural networks

S. Cantori, D. Vitali, S. Pilati

Jun 22 2022 quant-ph cond-mat.dis-nn cs.LG arXiv:2206.10348v1

Predicting the output of quantum circuits is a hard computational task that plays a pivotal role in the development of universal quantum computers. Here we investigate the supervised learning of output expectation values of random quantum circuits. Deep convolutional neural networks (CNNs) are trained to predict single-qubit and two-qubit expectation values using databases of classically simulated circuits. These circuits are represented via an appropriately designed one-hot encoding of the constituent gates. The prediction accuracy for previously unseen circuits is analyzed, also making comparisons with small-scale quantum computers available from the free IBM Quantum program. The CNNs often outperform the quantum devices, depending on the circuit depth, the network depth, and the training set size. Notably, our CNNs are designed to be scalable. This allows us to exploit transfer learning and perform extrapolations to circuits larger than those included in the training set. These CNNs also demonstrate remarkable resilience against noise, namely, they remain accurate even when trained on (simulated) expectation values averaged over very few measurements.
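
A one-hot circuit encoding of the kind that feeds such a CNN can be sketched in a few lines (a hypothetical gate alphabet and layout of my own; the paper designs its own scheme):

```python
import numpy as np

# hypothetical gate alphabet; "cx_down" marks a CNOT controlled on this wire
GATES = ["id", "h", "t", "cx_down"]

# a 2-qubit, 3-layer circuit given as gate names per (layer, wire)
circuit = [["h", "id"],
           ["cx_down", "id"],
           ["t", "h"]]

depth, wires = len(circuit), len(circuit[0])
encoding = np.zeros((depth, wires, len(GATES)))
for l, layer in enumerate(circuit):
    for w, gate in enumerate(layer):
        encoding[l, w, GATES.index(gate)] = 1.0
# shape (depth, wires, channels): an image-like input for a CNN, and the
# first two axes can grow with circuit size, which is what enables transfer
```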

## On Classifying Images using Quantum Image Representation

Ankit Khandelwal, M Girish Chandra, Sayantan Pramanik

Jun 24 2022 quant-ph arXiv:2206.11509v1

In this paper, we consider different Quantum Image Representation Methods to encode images into quantum states and then use a Quantum Machine Learning pipeline to classify the images. We provide encouraging results on classifying benchmark datasets of grayscale and colour images using two different classifiers. We also test multi-class classification performance.
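
The simplest encoding in this family to sketch is amplitude encoding, where pixel intensities become normalized amplitudes of a statevector. A numpy toy with a hypothetical 4x4 image (the paper evaluates several specific quantum image representation methods, which differ from this sketch):

```python
import numpy as np

# hypothetical 4x4 grayscale image, intensities in [0, 255]
img = np.array([[  0,  64, 128, 255],
                [ 32,  96, 160, 224],
                [ 16,  80, 144, 208],
                [ 48, 112, 176, 240]], dtype=float)

# amplitude encoding: flatten and normalize -> 16 amplitudes = 4 qubits
amplitudes = img.flatten()
state = amplitudes / np.linalg.norm(amplitudes)
n_qubits = int(np.log2(state.size))

# measuring in the computational basis recovers relative pixel intensities
probs = state ** 2
```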

## Quantum algorithm for learning secret strings and its experimental demonstration

Yongzhen Xu, Shihao Zhang, Lvzhou Li

Jun 23 2022 quant-ph arXiv:2206.11221v1

In this paper, we consider the secret-string-learning problem in the teacher-student setting: the teacher holds a secret string s ∈ {0,1}^n, and the student wants to learn s through question-answer interactions with the teacher. At each step, the student asks the teacher a pair (x, q) ∈ {0,1}^n × {0, 1, …, n−1}, and the teacher returns a bit given by the oracle f_s(x, q) that indicates whether the length of the longest common prefix of s and x is greater than q. Our contributions are as follows. (i) We prove that any classical deterministic algorithm needs at least n queries to the oracle f_s to learn the n-bit secret string s, in both the worst case and the average case, and we present an optimal classical deterministic algorithm that learns any s using n queries. (ii) We obtain a quantum algorithm that learns the n-bit secret string s with certainty using ⌈n/2⌉ queries to the oracle f_s, thus proving a twofold speedup over classical counterparts. (iii) We present experimental demonstrations of our quantum algorithm on the IBM cloud quantum computer, with average success probabilities of 85.3% and 82.5% for all cases with n = 2 and n = 3, respectively.
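
The optimal classical algorithm in (i) is short enough to state in full: once the first i bits are known, a single query with q = i reveals bit i. A Python sketch (my own transcription of the idea, not the authors' code):

```python
def make_oracle(s):
    # f_s(x, q) = 1 iff the longest common prefix of s and x is longer than q
    def f(x, q):
        lcp = 0
        for a, b in zip(s, x):
            if a != b:
                break
            lcp += 1
        return int(lcp > q)
    return f

def learn_classical(f, n):
    # invariant: p always equals the known prefix s[:i]
    p = ""
    for i in range(n):
        x = p + "0" * (n - i)           # guess that bit i is 0
        p += "0" if f(x, i) else "1"    # lcp > i exactly when s[i] == "0"
    return p                            # n queries in total
```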

## Penalty Weights in QUBO Formulations: Permutation Problems

Jun 23 2022 math.OC cs.AI cs.DM quant-ph arXiv:2206.11040v1

Optimisation algorithms designed to work on quantum computers or other specialised hardware have been of research interest in recent years. Many of these solvers can only optimise problems that are in binary and quadratic form, so Quadratic Unconstrained Binary Optimisation (QUBO) is a common formulation used by them. Many combinatorial optimisation problems are naturally represented as permutations, e.g., the travelling salesman problem. Encoding permutation problems using binary variables, however, presents some challenges. Many QUBO solvers are single-flip solvers, so it is possible to generate solutions that cannot be decoded to a valid permutation. To create a bias towards generating feasible solutions, we use penalty weights. The process of setting static penalty weights for various types of problems is not trivial: values that are too small will lead to infeasible solutions being returned by the solver, while values that are too large may lead to slower convergence. In this study, we explore methods of setting penalty weights within the context of QUBO formulations. We propose new static methods of calculating penalty weights which lead to more promising results than existing methods.
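
The feasibility-versus-convergence tension is easy to see on a tiny permutation QUBO. A brute-force Python toy (my own, with made-up costs): assigning 2 items to 2 positions under one-hot row and column constraints, a small penalty weight makes the infeasible all-zeros string optimal, while a large one restores the cheapest permutation.

```python
import itertools

cost = [[1.0, 3.0],
        [2.0, 0.5]]   # hypothetical assignment costs, item i -> position j

def qubo_energy(x, penalty):
    # x = (x00, x01, x10, x11), where x_ij = 1 iff item i goes to position j
    obj = sum(cost[i][j] * x[2 * i + j] for i in range(2) for j in range(2))
    pen = 0.0
    for i in range(2):                                  # one position per item
        pen += (x[2 * i] + x[2 * i + 1] - 1) ** 2
    for j in range(2):                                  # one item per position
        pen += (x[j] + x[2 + j] - 1) ** 2
    return obj + penalty * pen

def best(penalty):
    # brute force over all 16 bitstrings stands in for a QUBO solver
    return min(itertools.product([0, 1], repeat=4),
               key=lambda x: qubo_energy(x, penalty))

def is_permutation(x):
    return (x[0] + x[1], x[2] + x[3], x[0] + x[2], x[1] + x[3]) == (1, 1, 1, 1)
```

With penalty 0.1 the global minimum is the all-zeros string; with penalty 10 it is the cheapest assignment. The static methods proposed in the paper aim to place the weight in the useful window between those regimes.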

## Hidden-nucleons neural-network quantum states for the nuclear many-body problem

A. Lovato, C. Adams, G. Carleo, N. Rocco

Jun 22 2022 nucl-th cond-mat.dis-nn quant-ph arXiv:2206.10021v1

We generalize the hidden-fermion family of neural network quantum states to encompass both continuous and discrete degrees of freedom and solve the nuclear many-body Schrödinger equation in a systematically improvable fashion. We demonstrate that adding hidden nucleons to the original Hilbert space considerably augments the expressivity of the neural-network architecture compared to the Slater-Jastrow ansatz. The benefits of explicitly encoding point symmetries such as parity and time-reversal in the wave function are also discussed. Leveraging improved optimization methods and sampling techniques, the hidden-nucleon ansatz achieves an accuracy comparable to the numerically exact hyperspherical harmonic method in light nuclei and to auxiliary-field diffusion Monte Carlo in ¹⁶O. Thanks to its polynomial scaling with the number of nucleons, this method opens the way to highly accurate quantum Monte Carlo studies of medium-mass nuclei.

## Meta-Learning Digitized-Counterdiabatic Quantum Optimization

Pranav Chandarana, Pablo S. Vieites, Narendra N. Hegade, Enrique Solano, Yue Ban, Xi Chen

Jun 22 2022 quant-ph arXiv:2206.09966v1

Solving optimization tasks using variational quantum algorithms has emerged as a crucial application of current noisy intermediate-scale quantum devices. However, these algorithms face several difficulties, such as finding a suitable ansatz and appropriate initial parameters. In this work, we tackle the problem of finding suitable initial parameters for variational optimization by employing a meta-learning technique using recurrent neural networks. We investigate this technique with the recently proposed digitized-counterdiabatic quantum approximate optimization algorithm (DC-QAOA), which utilizes counterdiabatic protocols to improve the state-of-the-art QAOA. The combination of meta-learning and DC-QAOA enables us to find optimal initial parameters for different models, such as the MaxCut problem and the Sherrington-Kirkpatrick model. By decreasing the number of optimization iterations as well as enhancing performance, our protocol designs short-depth circuit ansätze with optimal initial parameters, incorporating shortcuts-to-adiabaticity principles into machine learning methods for near-term devices.
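
For orientation, here is what plain p=1 QAOA for MaxCut looks like as a statevector computation, with a brute-force parameter grid standing in for the learned initialization (a numpy toy of my own on a triangle graph; it includes neither the counterdiabatic term nor the RNN meta-learner):

```python
import numpy as np

edges = [(0, 1), (1, 2), (0, 2)]   # MaxCut on a triangle
n = 3

z = np.arange(2 ** n)
bits = (z[:, None] >> np.arange(n)) & 1
cut = sum((bits[:, i] != bits[:, j]).astype(float) for i, j in edges)

def qaoa_expectation(gamma, beta):
    # p=1 QAOA: <C> in the state e^{-i beta B} e^{-i gamma C} |+...+>
    psi = np.full(2 ** n, 1 / np.sqrt(2 ** n), dtype=complex)
    psi = psi * np.exp(-1j * gamma * cut)                    # cost layer
    rx = np.array([[np.cos(beta), -1j * np.sin(beta)],
                   [-1j * np.sin(beta), np.cos(beta)]])      # e^{-i beta X}
    for q in range(n):                                       # mixer layer
        psi = psi.reshape([2] * n)
        psi = np.moveaxis(np.tensordot(rx, psi, axes=([1], [q])), 0, q).reshape(-1)
    return float(np.sum(np.abs(psi) ** 2 * cut))

# brute-force grid as a stand-in for a learned initialization
best_value = max(qaoa_expectation(g, b)
                 for g in np.linspace(0, np.pi, 41)
                 for b in np.linspace(0, np.pi / 2, 41))
```

At gamma = beta = 0 the expectation is the random-guess value 1.5, while a good parameter choice pushes it close to the true maximum cut of 2; starting the optimizer near such points is what saves iterations.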
