#24 May 29th – June 4th

🎫Events:

📰News:

📽Videos:

📗Papers:

Diagnosing barren plateaus with tools from quantum optimal control

Variational Quantum Algorithms (VQAs) have received considerable attention due to their potential for achieving near-term quantum advantage. However, more work is needed to understand their scalability. One known scaling result for VQAs is barren plateaus, where certain circumstances lead to exponentially vanishing gradients. It is common folklore that problem-inspired ansatzes avoid barren plateaus, but in fact very little is known about their gradient scaling. In this work we employ tools from quantum optimal control to develop a framework that can diagnose the presence or absence of barren plateaus for problem-inspired ansatzes. Such ansatzes include the Quantum Alternating Operator Ansatz (QAOA), the Hamiltonian Variational Ansatz (HVA), and others. With our framework, we prove that avoiding barren plateaus for these ansatzes is not always guaranteed. Specifically, we show that the gradient scaling of the VQA depends on the controllability of the system, and hence can be diagnosed through the dynamical Lie algebra 𝔤 obtained from the generators of the ansatz. We analyze the existence of barren plateaus in QAOA and HVA ansatzes, and we highlight the role of the input state, as different initial states can lead to the presence or absence of barren plateaus. Taken together, our results provide a framework for trainability-aware ansatz design strategies that do not come at the cost of extra quantum resources. Moreover, we prove no-go results for obtaining ground states with variational ansatzes for controllable systems such as spin glasses. We finally provide evidence that barren plateaus can be linked to the dimension of 𝔤.
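
To make the diagnosis concrete, here is a minimal numerical sketch (a toy example of ours, not the authors' code) that estimates the dimension of the dynamical Lie algebra 𝔤 by closing a set of ansatz generators under commutators; for Pauli-string generators, the rank count below matches dim(𝔤):

```python
# Numerical sketch (toy example, not the paper's code): estimate dim(g) of the
# dynamical Lie algebra by closing the ansatz generators under commutators and
# counting linearly independent directions.
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# Assumed toy generators: a 2-qubit transverse-field-Ising-style HVA layer.
generators = [np.kron(Z, Z), np.kron(X, I2) + np.kron(I2, X)]

def lie_closure_dim(gens, tol=1e-10, max_rounds=20):
    basis = []  # flattened, linearly independent elements found so far

    def try_add(op):
        vec = op.reshape(-1)
        if np.linalg.matrix_rank(np.array(basis + [vec]), tol=tol) > len(basis):
            basis.append(vec)
            return True
        return False

    frontier = [g for g in gens if try_add(g)]
    for _ in range(max_rounds):
        new = []
        for avec in list(basis):
            A = avec.reshape(gens[0].shape)
            for B in frontier:
                C = A @ B - B @ A  # commutator [A, B]
                if np.linalg.norm(C) > tol and try_add(C):
                    new.append(C)
        if not new:  # closure reached
            break
        frontier = new
    return len(basis)

print("dim(g) for the toy ansatz:", lie_closure_dim(generators))
```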

Single-component gradient rules for variational quantum algorithms

Many near-term quantum computing algorithms are conceived as variational quantum algorithms, in which parameterized quantum circuits are optimized in a hybrid quantum-classical setup. Examples are variational quantum eigensolvers, quantum approximate optimization algorithms as well as various algorithms in the context of quantum-assisted machine learning. A common bottleneck of any such algorithm is constituted by the optimization of the variational parameters. A popular set of optimization methods work on the estimate of the gradient, obtained by means of circuit evaluations. We will refer to the way in which one can combine these circuit evaluations as gradient rules. This work provides a comprehensive picture of the family of gradient rules that vary parameters of quantum gates individually. The most prominent known members of this family are the parameter shift rule and the finite differences method. To unite this family, we propose a generalized parameter shift rule that expresses all members of the aforementioned family as special cases, and discuss how all of these can be seen as providing access to a linear combination of exact first- and second-order derivatives. We further prove that a parameter shift rule with one non-shifted evaluation and only one shifted circuit evaluation cannot exist, and introduce a novel perspective for approaching new gradient rules.
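
As a quick illustration of the two best-known members of this family, the sketch below (our toy example, using the analytic expectation E(θ) = cos θ for a single Y-rotation measured in Z) compares the parameter shift rule, which is exact for Pauli generators, against central finite differences:

```python
# Hedged illustration (not the paper's framework): for a gate generated by a
# Pauli operator, E(θ) = <0| Ry(θ)† Z Ry(θ) |0> = cos(θ). The parameter shift
# rule recovers the exact derivative from two shifted evaluations, while
# finite differences only approximates it.
import numpy as np

def expectation(theta):
    # State Ry(θ)|0> = [cos(θ/2), sin(θ/2)]; measuring Z gives cos(θ).
    return np.cos(theta)

theta = 0.7
exact = -np.sin(theta)

# Parameter shift rule: exact for Pauli generators, with shift π/2.
shift = 0.5 * (expectation(theta + np.pi / 2) - expectation(theta - np.pi / 2))

# Central finite differences: approximate, error O(h^2).
h = 1e-3
fd = (expectation(theta + h) - expectation(theta - h)) / (2 * h)

print(f"exact {exact:.8f}  shift-rule {shift:.8f}  finite-diff {fd:.8f}")
```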

Linear Regression by Quantum Amplitude Estimation and its Extension to Convex Optimization

Linear regression is a basic and widely-used methodology in data analysis. It is known that some quantum algorithms efficiently perform least squares linear regression of an exponentially large data set. However, if we obtain values of the regression coefficients as classical data, the complexity of the existing quantum algorithms can be larger than the classical method. This is because it depends strongly on the tolerance error ϵ: the best of the existing proposals scales as O(ϵ⁻²). In this paper, we propose a new quantum algorithm for linear regression, which has complexity O(ϵ⁻¹) and keeps the logarithmic dependence on the number of data points N_D. In this method, we use quantum amplitude estimation to overcome the bottleneck parts of the calculation, which take the form of sums over data points and therefore have complexity proportional to N_D, and perform the other parts classically. Additionally, we generalize our method to some class of convex optimization problems.
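
The bottleneck the paper targets is easy to see classically: every entry of the normal equations is a sum over the N_D data points. A small sketch (toy data, our illustration only) of that structure:

```python
# Classical sketch of the bottleneck: each entry of X^T X and X^T y is a sum
# over the N_D data points, costing O(N_D) per entry; the paper's proposal
# estimates such sums with quantum amplitude estimation at O(1/ε) queries
# instead of the O(1/ε²) of classical sampling. Toy data assumed.
import numpy as np

rng = np.random.default_rng(0)
N_D, d = 1000, 3
X = rng.normal(size=(N_D, d))
y = X @ np.array([1.5, -2.0, 0.5]) + 0.1 * rng.normal(size=N_D)

A = np.zeros((d, d))
b = np.zeros(d)
for k in range(N_D):          # this sum over data points is what QAE replaces
    A += np.outer(X[k], X[k])
    b += y[k] * X[k]

beta = np.linalg.solve(A, b)  # the remaining O(d^3) part stays classical
print("estimated coefficients:", beta)
```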

Experimental error mitigation using linear rescaling for variational quantum eigensolving with up to 20 qubits

Quantum computers have the potential to help solve a range of physics and chemistry problems, but noise in quantum hardware currently limits our ability to obtain accurate results from the execution of quantum-simulation algorithms. Various methods have been proposed to mitigate the impact of noise on variational algorithms, including several that model the noise as damping expectation values of observables. In this work, we benchmark various methods, including two new methods proposed here, for estimating the damping factor and hence recovering the noise-free expectation values. We compare their performance in estimating the ground-state energies of several instances of the 1D mixed-field Ising model using the variational-quantum-eigensolver algorithm with up to 20 qubits on two of IBM’s quantum computers. We find that several error-mitigation techniques allow us to recover energies to within 10% of the true values for circuits containing up to about 25 ansatz layers, where each layer consists of CNOT gates between all neighboring qubits and Y-rotations on all qubits.
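
The damping model behind these methods is simple to state: if noise rescales expectation values as E_noisy ≈ f · E_ideal, then estimating f from a reference circuit with a known noiseless value lets one divide it out. A schematic sketch with hypothetical numbers:

```python
# Schematic of the rescaling idea (one of several variants the paper
# benchmarks; all values below are hypothetical, not the paper's data):
# model noise as damping, estimate the damping factor f from a reference
# circuit whose noiseless expectation is known, then divide it out.
noisy_reference = 0.62        # measured on hardware (hypothetical)
ideal_reference = 1.00        # known noiseless expectation of the reference
f = noisy_reference / ideal_reference   # estimated damping factor

noisy_energy = -3.1           # hypothetical measured VQE energy
mitigated_energy = noisy_energy / f     # rescaled, noise-mitigated estimate
print(f"damping f = {f:.2f}, mitigated energy = {mitigated_energy:.2f}")
```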

Provable superior accuracy in machine learned quantum models

In modelling complex processes, the potential past data that influence future expectations are immense. Models that track all this data are not only computationally wasteful but also shed little light on what past data most influence the future. There is thus enormous interest in dimensional reduction: finding automated means to reduce the memory dimension of our models while minimizing the impact on their predictive accuracy. Here we construct dimensionally reduced quantum models by machine learning methods that can achieve greater accuracy than provably optimal classical counterparts. We demonstrate this advantage on present-day quantum computing hardware. Our algorithm works directly off classical time-series data and can thus be deployed in real-world settings. These techniques illustrate the immediate relevance of quantum technologies to time-series analysis and offer a rare instance where the resulting quantum advantage can be provably established.

Quantum Compiling by Deep Reinforcement Learning

The architecture of circuital quantum computers requires computing layers devoted to compiling high-level quantum algorithms into lower-level circuits of quantum gates. The general problem of quantum compiling is to approximate any unitary transformation that describes the quantum computation, as a sequence of elements selected from a finite base of universal quantum gates. The existence of an approximating sequence of one qubit quantum gates is guaranteed by the Solovay-Kitaev theorem, which implies sub-optimal algorithms to establish it explicitly. Since a unitary transformation may require significantly different gate sequences, depending on the base considered, such a problem is of great complexity and does not admit an efficient approximating algorithm. Therefore, traditional approaches are time-consuming tasks, unsuitable to be employed during quantum computation. We exploit the deep reinforcement learning method as an alternative strategy, which has a significantly different trade-off between search time and exploitation time. Deep reinforcement learning makes it possible to create single-qubit operations in real time, after an arbitrarily long training period during which a strategy for creating sequences to approximate unitary operators is built. The deep-reinforcement-learning-based compiling method allows for fast computation times, which could in principle be exploited for real-time quantum compiling.
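
For contrast with the learned policy, the sketch below (our toy baseline, not the paper's agent) brute-forces short sequences over the base {H, T} to approximate a target single-qubit unitary; this is exactly the search problem the trained network is meant to shortcut:

```python
# Brute-force baseline (our illustration, not the paper's RL method):
# approximate a target single-qubit unitary by the best gate sequence up to
# length 8 over the base {H, T}, scored by a global-phase-invariant distance.
import numpy as np
from itertools import product

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
T = np.array([[1, 0], [0, np.exp(1j * np.pi / 4)]], dtype=complex)
base = {"H": H, "T": T}

def dist(U, V):
    # Phase-invariant distance between 2x2 unitaries; clamp rounding noise.
    return np.sqrt(max(0.0, 1 - abs(np.trace(U.conj().T @ V)) / 2))

# Assumed toy target: a real rotation by 0.3 rad.
target = np.array([[np.cos(0.3), -np.sin(0.3)],
                   [np.sin(0.3),  np.cos(0.3)]], dtype=complex)

best = (np.inf, "")
for L in range(1, 9):
    for seq in product(base, repeat=L):
        U = np.eye(2, dtype=complex)
        for g in seq:
            U = base[g] @ U
        d = dist(U, target)
        if d < best[0]:
            best = (d, "".join(seq))
print(f"best sequence {best[1]!r} at distance {best[0]:.3f}")
```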

Detecting ergodic bubbles at the crossover to many-body localization using neural networks

The transition between ergodic and many-body localized (MBL) phases is expected to occur via an avalanche mechanism, in which ergodic bubbles that arise due to local fluctuations in system properties thermalize their surroundings, leading to delocalization of the system, unless the disorder is sufficiently strong to stop this process. We propose an algorithm based on neural networks that detects ergodic bubbles using experimentally measurable two-site correlation functions. Investigating the time evolution of the system, we observe growth of the ergodic bubbles that is logarithmic in time in the MBL regime. The distribution of the size of ergodic bubbles converges during time evolution to an exponentially decaying distribution in the MBL regime, and a power-law distribution with a thermal peak in the critical regime, thus supporting the scenario of delocalization through the avalanche mechanism. Our algorithm makes it possible to pinpoint quantitative differences in the time evolution of systems with random and quasiperiodic potentials, as well as to identify rare (Griffiths) events. Our results open new pathways in studies of the mechanisms of thermalization of disordered many-body systems and beyond.
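
Schematically, the detection step amounts to a classifier over vectors of two-site correlators. A generic sketch on synthetic data (our stand-in, not the paper's architecture or features):

```python
# Generic sketch of the approach (synthetic data, not the paper's model):
# train a small neural network to label a region as ergodic bubble vs
# localized from a vector of two-site correlation functions.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
n = 2000
# Hypothetical features: correlations decay slowly inside ergodic bubbles
# and fast in localized regions (a toy stand-in for measured correlators).
dists = np.arange(1, 9)
ergodic = np.exp(-dists / 6)[None, :] + 0.05 * rng.normal(size=(n // 2, 8))
localized = np.exp(-dists / 1.5)[None, :] + 0.05 * rng.normal(size=(n // 2, 8))
X = np.vstack([ergodic, localized])
y = np.array([1] * (n // 2) + [0] * (n // 2))

clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```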

Quantum Federated Learning with Quantum Data

Quantum machine learning (QML) has emerged as a promising field that leans on the developments in quantum computing to explore large complex machine learning problems. Recently, some purely quantum machine learning models were proposed such as the quantum convolutional neural networks (QCNN) to perform classification on quantum data. However, all of the existing QML models rely on centralized solutions that cannot scale well for large-scale and distributed quantum networks. Hence, it is appropriate to consider more practical quantum federated learning (QFL) solutions tailored towards emerging quantum network architectures. Indeed, developing QFL frameworks for quantum networks is critical given the fragile nature of computing qubits and the difficulty of transferring them. Beyond its practical importance, QFL allows for distributed quantum learning by leveraging existing wireless communication infrastructure. This paper proposes the first fully quantum federated learning framework that can operate over quantum data and, thus, share the learning of quantum circuit parameters in a decentralized manner. First, given the lack of existing quantum federated datasets in the literature, the proposed framework begins by generating the first quantum federated dataset, with a hierarchical data format, for distributed quantum networks. Then, clients sharing QCNN models are fed with the quantum data to perform a classification task. Subsequently, the server aggregates the learnable quantum circuit parameters from clients and performs federated averaging. Extensive experiments are conducted to evaluate and validate the effectiveness of the proposed QFL solution. This work is the first to combine Google’s TensorFlow Federated and TensorFlow Quantum in a practical implementation.
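
The aggregation step is ordinary federated averaging applied to circuit parameters. A minimal sketch (our illustration; the paper's actual implementation uses TensorFlow Federated and TensorFlow Quantum):

```python
# Minimal sketch of the server-side aggregation (not the paper's code):
# each client trains the same parameterized circuit locally, then the server
# performs federated averaging of the circuit parameters, weighted by the
# number of local examples.
import numpy as np

def federated_average(client_params, client_sizes):
    """Weighted average of per-client circuit parameter vectors."""
    weights = np.asarray(client_sizes, dtype=float)
    weights /= weights.sum()
    return sum(w * p for w, p in zip(weights, client_params))

# Hypothetical parameters from three clients after one local training round.
clients = [np.array([0.10, 1.20, -0.30]),
           np.array([0.05, 1.10, -0.25]),
           np.array([0.20, 1.35, -0.40])]
sizes = [100, 80, 120]   # hypothetical local dataset sizes

global_params = federated_average(clients, sizes)
print("parameters broadcast for the next round:", global_params)
```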

Using machine learning for quantum annealing accuracy prediction

Quantum annealers, such as the device built by D-Wave Systems, Inc., offer a way to compute solutions of NP-hard problems that can be expressed in Ising or QUBO (quadratic unconstrained binary optimization) form. Although such solutions are typically of very high quality, problem instances are usually not solved to optimality due to imperfections of current-generation quantum annealers. In this contribution, we aim to understand some of the factors contributing to the hardness of a problem instance, and to use machine learning models to predict the accuracy of the D-Wave 2000Q annealer for solving specific problems. We focus on the Maximum Clique problem, a classic NP-hard problem with important applications in network analysis, bioinformatics, and computational chemistry. By training a machine learning classification model on basic problem characteristics such as the number of edges in the graph, or annealing parameters such as D-Wave’s chain strength, we are able to rank certain features in the order of their contribution to the solution hardness, and present a simple decision tree which allows one to predict whether a problem will be solvable to optimality with the D-Wave 2000Q. We extend these results by training a machine learning regression model that predicts the clique size found by D-Wave.
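
The prediction setup is standard supervised learning over instance and annealing features. A schematic sketch with synthetic labels (our stand-in for the measured D-Wave outcomes):

```python
# Schematic of the classification setup (synthetic features and labels, not
# the paper's data): predict whether the annealer solves an instance to
# optimality from basic features such as edge count and chain strength.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)
n = 500
edges = rng.integers(10, 500, size=n)           # number of graph edges
chain_strength = rng.uniform(0.5, 4.0, size=n)  # D-Wave chain strength
X = np.column_stack([edges, chain_strength])
# Toy labeling rule standing in for measured solver outcomes.
y = (edges < 200).astype(int)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print("feature importances (edges, chain strength):", tree.feature_importances_)
```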

Artificial neural network states for non-additive systems

Methods inspired by machine learning have recently attracted great interest in the computational study of quantum many-particle systems. So far, however, it has proven challenging to deal with microscopic models in which the total number of particles is not conserved. To address this issue, we propose a new variant of neural network states, which we term neural coherent states. Taking the Fröhlich impurity model as a case study, we show that neural coherent states can learn the ground state of non-additive systems very well. In particular, we observe substantial improvement over the standard coherent state estimates in the most challenging intermediate coupling regime. Our approach is generic and does not assume specific details of the system, suggesting wide applications.

Efficient sorting of orbital-angular-momentum states with large topological charges and their unknown superpositions via machine learning

Light beams carrying orbital-angular-momentum (OAM) play an important role in optical manipulation and communication owing to their unbounded state space. However, it is still challenging to efficiently discriminate OAM modes with large topological charges, and thus usually only a small subset of OAM states has been used. Here we demonstrate that neural networks can be trained to sort OAM modes with large topological charges and unknown superpositions. Using intensity images of OAM modes generated in simulations and experiments as the input data, we illustrate that our neural network has great generalization power to recognize OAM modes of large topological charges beyond the training range with high accuracy. Moreover, the trained neural network can correctly classify and predict arbitrary superpositions of two OAM modes with random topological charges. Our machine learning approach only requires a small portion of experimental samples and significantly reduces the experimental cost, which paves the way to study OAM physics and increase the state space of OAM beams in practical applications.

Categories: Week-in-QML
