- AI Designs Quantum Physics Experiments Beyond What Any Human Has Conceived
- Atos launches ‘ThinkAI’
- Nippon Steel tested quantum computing to help improve plant scheduling
- Learn About Quantum Machine Learning
- Quantum machine learning hits a limit
- Generalization in Quantum Machine Learning: a Quantum Information Perspective – TQC 2021
- Introduction and Basics – Quantum machine learning of graph-structured data Part 1
- Quantum neural network – Quantum machine learning of graph-structured data Part 2
- Graph-Structure – Quantum machine learning of graph-structured data Part 3
- QML Meetup: Dr David Sutter (IBM Research, Zurich), The power of quantum neural networks
- Quantum Computing: Grover’s Search Algorithm | Breakthrough Junior Challenge 2021
- Natural Gradient Optimization for Optical Quantum Circuits
- Variational quantum algorithm for molecular geometry optimization
- Decoding conformal field theories: from supervised to unsupervised learning
- Nonlinear Quantum Optimization Algorithms via Efficient Ising Model Encodings
- Training Saturation in Layerwise Quantum Approximate Optimisation
- Variational Quantum Eigensolver for SU(N) Fermions
- Probabilistic Graphical Models and Tensor Networks: A Hybrid Framework
- Threshold-Based Quantum Optimization
- On exploring practical potentials of quantum auto-encoder with advantages
- Realization of an ion trap quantum classifier
- Counterdiabaticity and the quantum approximate optimization algorithm
- Adaptive Random Quantum Eigensolver
- Experimental Quantum Embedding for Machine Learning
- Bayesian Phase Estimation via Active Learning
- Predicting quantum dynamical cost landscapes with deep learning
- Non-parametric Active Learning and Rate Reduction in Many-body Hilbert Space with Rescaled Logarithmic Fidelity
- Importance of Diagonal Gates in Tensor Network Simulations
- Machine Learning S-Wave Scattering Phase Shifts Bypassing the Radial Schrödinger Equation
- A New Quantum Approach to Binary Classification
- GaN-based Bipolar Cascade Lasers with 25 nm wide Quantum Wells: Simulation and Analysis
Optical quantum circuits can be optimized using gradient descent methods, as the gates in a circuit can be parametrized by continuous parameters. However, the parameter space as seen by the cost function is not Euclidean, which means that the Euclidean gradient does not generally point in the direction of steepest ascent. To retrieve the steepest-ascent direction, in this work we implement natural gradient descent in the optical quantum circuit setting, which takes the local metric tensor into account. In particular, we adapt the natural gradient approach to a complex-valued parameter space. We then compare the natural gradient approach to vanilla gradient descent and to Adam on two state-preparation tasks: a single-photon source and a Gottesman-Kitaev-Preskill state source. We observe that the natural gradient approach converges faster (due in part to the possibility of using larger learning rates) and yields a significantly smoother decay of the cost function throughout the optimization.
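A minimal numerical sketch of the idea (not the paper's optical-circuit code): natural gradient preconditions the Euclidean gradient with the inverse metric tensor, updating theta ← theta − η F⁻¹∇L. The quadratic cost, the curvature matrix `A`, and the assumption that the metric `F` is known exactly are all illustrative choices:

```python
import numpy as np

# Toy cost L(theta) = 0.5 theta^T A theta with ill-conditioned curvature A.
# Natural gradient solves F dtheta = grad L instead of stepping along grad L,
# so every direction becomes equally steep; here we assume F = A exactly.
A = np.diag([100.0, 1.0])
F = A

def cost(theta):
    return 0.5 * theta @ A @ theta

def grad(theta):
    return A @ theta

theta_v = np.array([1.0, 1.0])   # vanilla gradient descent
theta_n = np.array([1.0, 1.0])   # natural gradient descent
for _ in range(50):
    theta_v = theta_v - 0.01 * grad(theta_v)                      # small, stable rate
    theta_n = theta_n - 1.0 * np.linalg.solve(F, grad(theta_n))   # large rate is safe

print(cost(theta_v), cost(theta_n))
```

Because the metric flattens the ill-conditioned directions, the natural-gradient run tolerates a learning rate two orders of magnitude larger, mirroring the faster, smoother convergence reported above.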
Classical algorithms for predicting the equilibrium geometry of strongly correlated molecules require expensive wave function methods that become impractical even for few-atom systems. In this work, we introduce a variational quantum algorithm for finding the most stable structure of a molecule by explicitly considering the parametric dependence of the electronic Hamiltonian on the nuclear coordinates. The equilibrium geometry of the molecule is obtained by minimizing a more general cost function that depends on both the quantum circuit and the Hamiltonian parameters, which are simultaneously optimized at each step. The algorithm is applied to find the equilibrium geometries of the H2, H3+, BeH2, and H2O molecules. The quantum circuits used to prepare the electronic ground state for each molecule were designed using an adaptive algorithm where excitation gates in the form of Givens rotations are selected according to the norm of their gradient. All quantum simulations are performed using the PennyLane library for quantum differentiable programming. The optimized geometrical parameters for the simulated molecules show excellent agreement with their counterparts computed using classical quantum chemistry methods.
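A toy sketch of the key idea, minimizing a cost over circuit and Hamiltonian (nuclear-coordinate) parameters simultaneously; the surrogate cost `g` below merely stands in for the expectation value of H(x) in the state prepared by circuit parameters theta, and is not PennyLane code:

```python
# Surrogate for the joint cost g(theta, x): theta plays the role of the
# circuit parameters, x the nuclear coordinate, and the minimum (at
# theta = x = 1.5) plays the role of the equilibrium geometry.
def g(theta, x):
    return (theta - x) ** 2 + (x - 1.5) ** 2

def grads(theta, x):
    return 2 * (theta - x), -2 * (theta - x) + 2 * (x - 1.5)

theta, x, eta = 0.0, 0.5, 0.1
for _ in range(500):
    dth, dx = grads(theta, x)
    theta, x = theta - eta * dth, x - eta * dx   # both updated at each step

print(round(x, 3))  # converges to the "equilibrium geometry" x = 1.5
```

The point of the joint update is that no inner loop over circuit parameters is needed for each trial geometry; both parameter sets relax together, as in the abstract.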
We use machine learning to classify rational two-dimensional conformal field theories. We first use the energy spectra of these minimal models to train a supervised learning algorithm. We find that the machine is able to correctly predict the nature and the value of critical points of several strongly correlated spin models using only their energy spectra. This is in contrast to previous works that use machine learning to classify different phases of matter, but do not reveal the nature of the critical point between phases. Given that the ground-state entanglement Hamiltonian of certain topological phases of matter is also described by conformal field theories, we use supervised learning on Rényi entropies and find that the machine is able to identify, to a high degree of accuracy, which conformal field theory describes the entanglement Hamiltonian using only the lowest few Rényi entropies. Finally, using autoencoders, an unsupervised learning algorithm, we find a hidden variable that has a direct correlation with the central charge and discuss prospects for using machine learning to investigate other conformal field theories, including higher-dimensional ones. Our results highlight that machine learning can be used to find and characterize critical points and also hint at the intriguing possibility of using machine learning to learn about more complex conformal field theories.
Despite extensive research efforts, few quantum algorithms for classical optimization demonstrate realizable advantage. The utility of many quantum algorithms is limited by high requisite circuit depth and nonconvex optimization landscapes. We tackle these challenges to quantum advantage with two new variational quantum algorithms, which utilize multi-basis graph encodings and nonlinear activation functions to outperform existing methods with remarkably shallow quantum circuits. Both algorithms provide a polynomial reduction in measurement complexity and either a factor-of-two speedup *or* a factor-of-two reduction in quantum resources. Typically, the classical simulation of such algorithms with many qubits is impossible due to the exponential scaling of traditional quantum formalism and the limitations of tensor networks. Nonetheless, the shallow circuits and moderate entanglement of our algorithms, combined with efficient tensor-method-based simulation, enable us to successfully optimize the MaxCut of high-connectivity global graphs with up to 512 nodes (qubits) on a single GPU.
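For reference, the classical MaxCut objective such algorithms optimize can be sketched in a few lines; the five-edge graph below is an illustrative example, with brute force standing in for the variational optimizer:

```python
# The cut value of a bit assignment counts the edges whose endpoints
# receive different labels; MaxCut asks for the assignment maximizing it.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]  # 4-cycle plus one chord

def cut_value(bits, edges):
    """Number of edges crossing the partition defined by bits."""
    return sum(1 for u, v in edges if bits[u] != bits[v])

# Brute-force search over all 2^4 assignments (a stand-in for the
# variational optimization described above).
best = max(range(2 ** 4),
           key=lambda m: cut_value([(m >> i) & 1 for i in range(4)], edges))
bits = [(best >> i) & 1 for i in range(4)]
print(bits, cut_value(bits, edges))  # best cut uses 4 of the 5 edges
```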
Quantum Approximate Optimisation (QAOA) is the most studied gate-based variational quantum algorithm today. We train QAOA one layer at a time to maximize overlap with an n-qubit target state. In doing so we discovered that such training always saturates, a phenomenon we call *training saturation*, at some depth p*, meaning that past a certain depth, the overlap cannot be improved by adding subsequent layers. We formulate necessary conditions for saturation. Numerically, we find that layerwise QAOA reaches its maximum overlap at depth p* = n. The addition of coherent dephasing errors to training removes saturation, recovering robustness to layerwise training. This study sheds new light on the performance limitations and prospects of QAOA.
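The layerwise training protocol can be caricatured classically; the saturating surrogate `overlap` below is an illustrative construction (not a quantum simulation), built so that gains shrink and then stop, mimicking the plateau the abstract calls training saturation:

```python
import math

# At each depth, freeze the angles found so far and optimize only the
# newest layer's angle by a coarse grid search, tracking the overlap.
def overlap(angles):
    # each layer contributes a diminishing gain, capped at 0.75 by construction
    return min(0.75, sum(0.5 * abs(math.sin(a)) / (i + 1)
                         for i, a in enumerate(angles)))

grid = [k * math.pi / 50 for k in range(50)]
angles, history = [], []
for depth in range(1, 7):
    best = max(grid, key=lambda a: overlap(angles + [a]))  # train newest layer only
    angles.append(best)
    history.append(overlap(angles))

print(history)  # gains shrink, then the overlap stops improving
```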
Variational quantum algorithms aim at harnessing the power of noisy intermediate-scale quantum computers by using a classical optimizer to train a parameterized quantum circuit to solve tractable quantum problems. The variational quantum eigensolver is one such algorithm, designed to determine the ground state of many-body Hamiltonians. Here, we apply the variational quantum eigensolver to study the ground-state properties of N-component fermions. With such knowledge, we study the persistent current of interacting SU(N) fermions, which is employed to reliably map out the different quantum phases of the system. Our approach lays out the basis for a current-based quantum simulator of many-body systems that can be implemented on noisy intermediate-scale quantum computers.
We investigate a correspondence between two formalisms for discrete probabilistic modeling: probabilistic graphical models (PGMs) and tensor networks (TNs), a powerful modeling framework for simulating complex quantum systems. The graphical calculus of PGMs and TNs exhibits many similarities, with discrete undirected graphical models (UGMs) being a special case of TNs. However, more general probabilistic TN models such as Born machines (BMs) employ complex-valued hidden states to produce novel forms of correlation among the probabilities. While representing a new modeling resource for capturing structure in discrete probability distributions, this behavior also renders the direct application of standard PGM tools impossible. We aim to bridge this gap by introducing a hybrid PGM-TN formalism that integrates quantum-like correlations into PGM models in a principled manner, using the physically-motivated concept of decoherence. We first prove that applying decoherence to the entirety of a BM model converts it into a discrete UGM, and conversely, that any subgraph of a discrete UGM can be represented as a decohered BM. This method allows a broad family of probabilistic TN models to be encoded as partially decohered BMs, a fact we leverage to combine the representational strengths of both model families. We experimentally verify the performance of such hybrid models in a sequential modeling task, and identify promising uses of our method within the context of existing applications of graphical models.
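The contrast between Born machines and their decohered, classical counterparts can be illustrated with a toy two-variable model; the random amplitude matrices `psi` and `phi` and the single hidden index are illustrative assumptions, not the paper's construction:

```python
import numpy as np

# A Born machine sums complex amplitudes over the hidden index h BEFORE
# squaring, so hidden paths interfere; decohering the hidden index moves
# the square inside the sum, yielding a classical (UGM-style) model.
rng = np.random.default_rng(0)
psi = rng.normal(size=(2, 3)) + 1j * rng.normal(size=(2, 3))  # amplitudes psi(x, h)
phi = rng.normal(size=(3, 2)) + 1j * rng.normal(size=(3, 2))  # amplitudes phi(h, y)

amp = psi @ phi                              # coherent sum over hidden h
p_bm = np.abs(amp) ** 2
p_bm /= p_bm.sum()                           # Born-rule distribution over (x, y)

w = (np.abs(psi) ** 2) @ (np.abs(phi) ** 2)  # decohered: square, then sum over h
p_ugm = w / w.sum()                          # classical latent-variable distribution

print(np.round(p_bm, 3))
print(np.round(p_ugm, 3))                    # same graph, different correlations
```

The two distributions share the same graphical structure but generally differ, which is exactly the extra modeling resource (and the obstacle to standard PGM tools) the abstract describes.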
We propose and study Th-QAOA (pronounced Threshold QAOA), a variation of the Quantum Alternating Operator Ansatz (QAOA) that replaces the standard phase separator operator, which encodes the objective function, with a threshold function that returns a value of 1 for solutions with an objective value above the threshold and 0 otherwise. We vary the threshold value to arrive at a quantum optimization algorithm. We focus on a combination with the Grover Mixer operator; the resulting GM-Th-QAOA can be viewed as a generalization of Grover’s quantum search algorithm and its minimum/maximum finding cousin to approximate optimization. Our main findings include: (i) we show semi-formally that the optimum parameter values of GM-Th-QAOA (angles and threshold value) can be found with O(log(p) × log M) iterations of the classical outer loop, where p is the number of QAOA rounds and M is an upper bound on the solution value (often the number of vertices or edges in an input graph), thus eliminating the notorious outer-loop parameter-finding issue of other QAOA algorithms; (ii) GM-Th-QAOA can be simulated classically with little effort up to 100 qubits through a set of tricks that cut down memory requirements; (iii) somewhat surprisingly, GM-Th-QAOA outperforms its non-thresholded counterparts in terms of approximation ratios achieved. This third result holds across a range of optimization problems (MaxCut, Max k-VertexCover, Max k-DensestSubgraph, MaxBisection) and various experimental design parameters, such as different input edge densities and constraint sizes.
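The threshold ingredient has a simple classical analogue; the sketch below (toy objective values, hypothetical `threshold_oracle` helper) shows how a binary search over thresholds pins down the maximum with about log2(M) oracle settings, echoing finding (i):

```python
# The phase separator is replaced by an oracle that merely marks
# solutions with objective value above a threshold t; varying t turns
# marked-item search into approximate optimization.
values = [3, 7, 2, 9, 5, 9, 1]          # toy objective values of candidate solutions
M = max(values)                          # known upper bound on the objective

def threshold_oracle(v, t):
    return 1 if v > t else 0             # "marks" solutions above threshold t

lo, hi, queries = 0, M, 0
while lo < hi:                           # binary search for the maximum value
    mid = (lo + hi) // 2
    queries += 1
    if any(threshold_oracle(v, mid) for v in values):
        lo = mid + 1                     # some solution exceeds mid
    else:
        hi = mid
print(lo, queries)                       # maximum found with ~log2(M) thresholds
```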
The quantum auto-encoder (QAE) is a powerful tool to relieve the curse of dimensionality encountered in quantum physics, celebrated for its ability to extract low-dimensional patterns from quantum states living in a high-dimensional space. Despite its attractive properties, little is known about practical applications of QAE with provable advantages. To address this issue, here we prove that QAE can be used to efficiently calculate the eigenvalues and prepare the corresponding eigenvectors of a high-dimensional quantum state with the low-rank property. Building on this result, we devise three effective QAE-based learning protocols to solve the low-rank state fidelity estimation, quantum Gibbs state preparation, and quantum metrology tasks, respectively. Notably, all of these protocols are scalable and can be readily executed on near-term quantum machines. Moreover, we prove that the error bounds of the proposed QAE-based methods outperform those in previous literature. Numerical simulations corroborate our theoretical analysis. Our work opens a new avenue for utilizing QAE to tackle various quantum physics and quantum information processing problems in a scalable way.
We report the realization of a versatile classifier based on the quantum mechanics of a single atom. The problem of classification has been extensively studied by the classical machine learning community, with plenty of proposed algorithms that have been refined over time. Quantum computation must necessarily develop quantum classifiers and benchmark them against their classical counterparts. It is not obvious how to make use of our increasing ability to precisely control and evolve a quantum state to solve this kind of problem, and there is only a limited number of strong theorems backing the quantum algorithms for classification. Here we show that both of these limitations can be successfully addressed by the implementation of a recently proposed data re-uploading algorithm in an ion-trap-based quantum processing unit. The quantum classifier is trained in two steps: first, the quantum circuit is fed with an optimal set of variational parameters found by classical simulation; then, the variational circuit is optimized by inspecting the parameter landscape with only the quantum processing unit. This second step provides a partial cancellation of the systematic errors inherent to the quantum device. The accuracy of our quantum supervised classifier is benchmarked on a variety of datasets that involve finding the separation of classes associated with regions of a plane, in both binary and multi-class problems, as well as in higher-dimensional feature spaces. Our experiments show that a single-ion quantum classifier circuit made of k gates is as powerful as a neural network with one intermediate hidden layer of k neurons.
The quantum approximate optimization algorithm (QAOA) is a near-term hybrid algorithm intended to solve combinatorial optimization problems, such as MaxCut. QAOA can be made to mimic an adiabatic schedule, and in the p → ∞ limit the final state is an exact maximal eigenstate in accordance with the adiabatic theorem. In this work, the connection between QAOA and adiabaticity is made explicit by inspecting the regime of p large but finite. By connecting QAOA to counterdiabatic (CD) evolution, we construct CD-QAOA angles which mimic a counterdiabatic schedule by matching Trotter “error” terms to approximate adiabatic gauge potentials which suppress diabatic excitations arising from finite ramp speed. In our construction, these “error” terms are helpful, not detrimental, to QAOA. Using this matching to link QAOA with quantum adiabatic algorithms (QAA), we show that the approximation ratio converges to one at least as 1 − C(p) ∼ 1/p^μ. We show that transfer of parameters between graphs, and interpolating angles for p+1 given p, are both natural byproducts of CD-QAOA matching. Optimization of CD-QAOA angles is equivalent to optimizing a continuous adiabatic schedule. Finally, we show that, using a property of variational adiabatic gauge potentials, QAOA is at least counterdiabatic, not just adiabatic, and has better performance than finite-time adiabatic evolution. We demonstrate the method on three examples: a two-level system, an Ising chain, and the MaxCut problem.
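As a baseline for the adiabatic picture above, here is a sketch of how QAOA angles can be read off a Trotterized linear annealing ramp; the schedule shape and `total_time` are illustrative choices, and the paper's counterdiabatic matching would correct such angles rather than use them directly:

```python
# A linear schedule s(t) = t/T, Trotterized into p rounds of duration dt,
# gives phase angles gamma_k proportional to s (growing) and mixer angles
# beta_k proportional to 1 - s (shrinking).
def linear_ramp_angles(p, total_time):
    dt = total_time / p
    gammas = [(k + 0.5) / p * dt for k in range(p)]        # grows with k
    betas = [(1 - (k + 0.5) / p) * dt for k in range(p)]   # shrinks with k
    return gammas, betas

gammas, betas = linear_ramp_angles(p=4, total_time=2.0)
print([round(g, 4) for g in gammas])
print([round(b, 4) for b in betas])
```

Schedules of this form also explain why angle patterns transfer between graphs and interpolate naturally from depth p to p+1: they sample one smooth underlying curve.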
We propose an adaptive random quantum algorithm to obtain an optimized eigensolver. The changes in the involved matrices follow bio-inspired evolutionary mutations which are based on two figures of merit: learning speed and learning accuracy. This method provides high fidelities for the searched eigenvectors and faster convergence on the way to quantum advantage with current noisy intermediate-scale quantum (NISQ) computers.
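A toy classical caricature of such mutation-driven eigensolving, with an illustrative diagonal Hamiltonian and mutation scale (the paper's method mutates the matrices of a quantum algorithm, not a bare vector):

```python
import numpy as np

# Randomly perturb a trial vector and keep the mutation only if it lowers
# the Rayleigh quotient <v|H|v>/<v|v>; accepted mutations drift toward
# the ground state.
rng = np.random.default_rng(1)
H = np.diag([0.0, 1.0, 2.0, 3.0])        # ground-state energy is 0

def energy(v):
    return (v @ H @ v) / (v @ v)          # Rayleigh quotient

v = rng.normal(size=4)
for _ in range(2000):
    trial = v + 0.1 * rng.normal(size=4)  # bio-inspired random mutation
    if energy(trial) < energy(v):         # keep only improving mutations
        v = trial

print(round(energy(v), 3))                # approaches the ground-state energy 0
```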
Ilaria Gianani, Ivana Mastroserio, Lorenzo Buffoni, Natalia Bruno, Ludovica Donati, Valeria Cimini, Marco Barbieri, Francesco S. Cataliotti, Filippo Caruso. Jun 29 2021. quant-ph, cond-mat.dis-nn, cond-mat.quant-gas, physics.optics. arXiv:2106.13835v1
The classification of big data usually requires a mapping onto new data clusters which can then be processed by machine learning algorithms by means of more efficient and feasible linear separators. Recently, Lloyd et al. have advanced the proposal to embed classical data into quantum states: these live in a larger Hilbert space, where they can be split into linearly separable clusters. Here, we implement these ideas by engineering two different experimental platforms, based on quantum optics and ultra-cold atoms respectively, where we adapt and numerically optimize the quantum embedding protocol by deep learning methods, and test it on some trial classical data. We also perform a similar analysis on the Rigetti superconducting quantum computer. We find that the quantum embedding approach also works successfully at the experimental level and, in particular, we show how different platforms could work in a complementary fashion to achieve this task. These studies might pave the way for future investigations of quantum machine learning techniques, especially those based on hybrid quantum technologies.
Bayesian estimation approaches, which are capable of combining the information of experimental data from different likelihood functions to achieve high precision, have been widely used in phase estimation via introducing a controllable auxiliary phase. Here, we present a non-adaptive Bayesian phase estimation (BPE) algorithm with an ingenious update rule for the auxiliary phase designed via active learning. Unlike adaptive BPE algorithms, the auxiliary phase in our algorithm is determined by a pre-established update rule with simple statistical analysis of a small batch of data, instead of complex calculations in every update trial. As the number of measurements for the same amount of Bayesian updates is significantly reduced via active learning, our algorithm can work as efficiently as adaptive ones while sharing the advantages (such as wide dynamic range and perfect noise robustness) of non-adaptive ones. Our algorithm holds promise for applications in various practical quantum sensors such as atomic clocks and quantum magnetometers.
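The core Bayesian update can be sketched on a phase grid; the cosine likelihood model, the random (rather than actively learned) auxiliary phases, and all numbers below are illustrative assumptions:

```python
import numpy as np

# Keep a gridded posterior over the unknown phase phi; after each
# two-outcome measurement m = +/-1 taken with auxiliary phase theta,
# multiply by the likelihood P(m | phi, theta) = (1 + m cos(phi+theta))/2.
rng = np.random.default_rng(2)
grid = np.linspace(0, 2 * np.pi, 1000, endpoint=False)
posterior = np.ones_like(grid) / grid.size           # flat prior

phi_true = 1.3
for _ in range(300):
    theta = rng.uniform(0, 2 * np.pi)                # auxiliary phase setting
    p_plus = 0.5 * (1 + np.cos(phi_true + theta))    # outcome probabilities
    m = 1 if rng.random() < p_plus else -1
    posterior *= 0.5 * (1 + m * np.cos(grid + theta))  # Bayes update
    posterior /= posterior.sum()

estimate = grid[np.argmax(posterior)]
print(round(estimate, 2))                            # peaks near the true phase
```

In the paper's non-adaptive scheme, the sequence of theta values would follow a pre-established, actively learned rule rather than the random draws used here.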
State-of-the-art quantum algorithms routinely tune dynamically parametrized cost functionals for combinatorics, machine learning, equation-solving, or energy minimization. However, large search complexity often demands many (noisy) quantum measurements, leading to the increasing use of classical probability models to estimate which areas in the cost functional landscape are of highest interest. Introducing deep-learning-based modelling of the landscape, we demonstrate an order-of-magnitude increase in accuracy and speed over state-of-the-art Bayesian methods. Moreover, once trained, the deep neural network enables the extraction of information at a much faster rate than conventional numerical simulation. This allows for on-the-fly experimental optimizations and detailed classification of complexity and navigability throughout the phase diagram of the landscape.
In quantum and quantum-inspired machine learning, the very first step is to embed the data in a quantum space known as the Hilbert space. Developing a quantum kernel function (QKF), which defines the distances among the samples in the Hilbert space, is one of the fundamental topics for machine learning. In this work, we propose the rescaled logarithmic fidelity (RLF) and a non-parametric active learning method in the quantum space, which we name RLF-NAL. The rescaling takes advantage of the non-linearity of the kernel to tune the mutual distances of samples in the Hilbert space, while avoiding the exponentially small fidelities between quantum many-qubit states. We compare RLF-NAL with several well-known non-parametric algorithms, including naive Bayes classifiers, k-nearest neighbors, and spectral clustering. Our method exhibits excellent accuracy, particularly in the unsupervised case with no labeled samples and in few-shot cases with small numbers of labeled samples. With visualizations by t-SNE, our results imply that machine learning in the Hilbert space complies with the principles of maximal coding rate reduction, where the low-dimensional data exhibit within-class compressibility, between-class discrimination, and overall diversity. Our proposals can be applied to other quantum and quantum-inspired machine learning approaches, including methods using parametric models such as tensor networks, quantum circuits, and quantum neural networks.
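As a simplified stand-in for the paper's kernel (the rescaled logarithmic fidelity itself is more involved), here is a fidelity-style nearest-neighbor classifier on normalized feature vectors; the toy training points are illustrative:

```python
import numpy as np

# Classify a sample by the fidelity-style similarity |<x|y>|^2 between
# normalized feature vectors, assigning the label of the most similar
# training sample, in the spirit of kernel methods in Hilbert space.
def normalize(v):
    return v / np.linalg.norm(v)

train = [(normalize(np.array(x)), y) for x, y in
         [([1.0, 0.1], 0), ([0.9, 0.2], 0), ([0.1, 1.0], 1), ([0.2, 0.8], 1)]]

def classify(x):
    x = normalize(np.array(x))
    fidelities = [(np.dot(x, v) ** 2, label) for v, label in train]
    return max(fidelities)[1]            # label of the highest-fidelity neighbor

print(classify([1.0, 0.0]), classify([0.0, 1.0]))
```

The paper's rescaling addresses exactly the weakness this toy ignores: for many-qubit states such fidelities become exponentially small, so a nonlinear rescaling is needed to keep distances informative.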
In this work we present two techniques that tremendously increase the performance of tensor-network based quantum circuit simulations. The techniques are implemented in the QTensor package and benchmarked using Quantum Approximate Optimization Algorithm (QAOA) circuits. The techniques allowed us to increase the depth and size of QAOA circuits that can be simulated. In particular, we increased the QAOA depth from 2 to 5 and the size of a QAOA circuit from 180 to 244 qubits. Moreover, we increased the speed of simulations by up to 10 million times. Our work provides important insights into how various techniques can dramatically speed up the simulations of circuits.
We present a machine learning model resting on convolutional neural networks (CNN) capable of yielding accurate scattering phase shifts caused by different three-dimensional spherically symmetric potentials, bypassing the radial Schrödinger equation. We obtain good performance even in the presence of potential instances supporting bound states.
Machine learning classification models learn the relation between inputs, given as features, and outputs, given as classes, in order to predict the class of a new input. Quantum mechanics (QM) has already shown its effectiveness in many fields, and researchers have proposed several interesting results that cannot be obtained through classical theory. In recent years, researchers have been investigating whether QM can help to improve classical machine learning algorithms. It is believed that the theory of QM may also inspire an effective algorithm if it is implemented properly. From this inspiration, we propose a quantum-inspired binary classifier.
We analyze internal device physics, performance limitations, and optimization options for a unique laser design with multiple active regions separated by tunnel junctions, featuring surprisingly wide quantum wells. Contrary to common assumptions, these quantum wells are shown to allow for perfect screening of the strong built-in polarization field, while optical gain is provided by higher quantum levels. However, internal absorption, low p-cladding conductivity, and self-heating strongly limit the laser performance.