- Innovative chip built by UCPH physicists resolves quantum headache
- This chemist is reimagining the discovery of materials using AI and automation
- AI Generates Hypotheses Human Scientists Have Not Thought Of
- High-Fidelity Quantum Computing Is Now Possible, Thanks to AI
- Pasqal announces new machine learning protocol for comparing complex graph-based data on quantum systems
- Quantum Machine Learning conference, 23.10.2021.
- HIDA Lecture with Anatole von Lilienfeld: Quantum Machine Learning in Chemical Compound Space
- Why D-Wave is Bullish on Quantum Annealing | Qubits 2021
- Materials Project Seminars – Kamal Choudhary, “Deep Learning and Quantum Computation Methods […]”
- Critical Points in Hamiltonian Agnostic Variational Quantum Algorithms
- “AI for designing quantum experiments” by Dr. Alexey Melnikov
With noisy intermediate-scale quantum computers showing great promise for near-term applications, a number of machine learning algorithms based on parametrized quantum circuits have been suggested as possible means to achieve learning advantages. Yet our understanding of how these quantum machine learning models compare, both to existing classical models and to each other, remains limited. A big step in this direction has been made by relating them to so-called kernel methods from classical machine learning. Building on this connection, previous works showed that a systematic reformulation of many quantum machine learning models as kernel models is guaranteed to improve their training performance. In this work, we first extend the applicability of this result to a more general family of parametrized quantum circuit models called data re-uploading circuits. Secondly, we show, through simple constructions and numerical simulations, that models defined and trained variationally can exhibit critically better generalization performance than their kernel formulations; generalization, rather than training error, is the true figure of merit of a machine learning task. Our results constitute another step towards a theory of quantum machine learning models beyond their kernel formulations.
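The kernel connection mentioned above can be made concrete with a toy example (a hypothetical single-qubit encoding, not the paper's construction): a data re-uploading circuit that feeds the datum x into every layer, whose induced kernel is the fidelity between encoded states.

```python
import numpy as np

def ry(t):
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2),  np.cos(t / 2)]], dtype=complex)

def rz(t):
    return np.array([[np.exp(-1j * t / 2), 0],
                     [0, np.exp(1j * t / 2)]])

def encode(x, layers=3):
    """Single-qubit data re-uploading: the datum x enters every layer."""
    psi = np.array([1, 0], dtype=complex)
    for _ in range(layers):
        psi = rz(x) @ ry(x) @ psi
    return psi

def kernel(x1, x2):
    """Fidelity kernel k(x, x') = |<psi(x)|psi(x')>|^2."""
    return abs(np.vdot(encode(x1), encode(x2))) ** 2

# Gram matrix on a few sample points: symmetric and positive semidefinite
X = [0.0, 0.5, 1.0, 1.5]
K = np.array([[kernel(a, b) for b in X] for a in X])
```

A kernel model then trains classical weights over K, whereas the variational model discussed in the abstract would instead train circuit parameters directly.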
The Quantum Approximate Optimization Algorithm (QAOA) finds approximate solutions to combinatorial optimization problems. Its performance monotonically improves with its depth p. We apply the QAOA to MaxCut on large-girth D-regular graphs. We give an iterative formula to evaluate performance for any D at any depth p. Looking at random D-regular graphs, at optimal parameters and as D goes to infinity, we find that the p = 11 QAOA beats all classical algorithms (known to the authors) that are free of unproven conjectures. While the iterative formula for these D-regular graphs is derived by looking at a single tree subgraph, we prove that it also gives the ensemble-averaged performance of the QAOA on the Sherrington-Kirkpatrick (SK) model. Our iteration is a compact procedure, but its computational complexity grows as O(p^2 4^p). This iteration is more efficient than the previous procedure for analyzing QAOA performance on the SK model, and we are able to numerically go to p = 20. Encouraged by our findings, we make the optimistic conjecture that the QAOA, as p goes to infinity, will achieve the Parisi value. We analyze the performance of the quantum algorithm, but one needs to run it on a quantum computer to produce a string with the guaranteed performance.
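As a minimal illustration of the algorithm itself (on a toy triangle graph, not the large-girth graphs analysed in the paper), a depth p = 1 QAOA for MaxCut can be simulated directly on the statevector and its two angles optimised by grid search:

```python
import numpy as np

edges = [(0, 1), (1, 2), (0, 2)]   # toy instance: a triangle, max cut = 2
n = 3

# Diagonal MaxCut cost: number of cut edges for each computational basis state
cost = np.array([sum(((z >> i) & 1) != ((z >> j) & 1) for i, j in edges)
                 for z in range(2 ** n)], dtype=float)

def qaoa_expectation(gamma, beta):
    """p = 1 QAOA: uniform superposition -> e^{-i gamma C} -> RX mixer."""
    psi = np.full(2 ** n, 2 ** (-n / 2), dtype=complex)
    psi = np.exp(-1j * gamma * cost) * psi          # cost layer (diagonal)
    rx = np.array([[np.cos(beta), -1j * np.sin(beta)],
                   [-1j * np.sin(beta), np.cos(beta)]])  # e^{-i beta X}
    mixer = rx
    for _ in range(n - 1):
        mixer = np.kron(mixer, rx)                  # same RX on every qubit
    psi = mixer @ psi
    return float(np.real(np.vdot(psi, cost * psi)))

# Coarse grid search over the two angles
grid = np.linspace(0, 2 * np.pi, 60)
best = max(qaoa_expectation(g, b) for g in grid for b in grid)
```

Even at p = 1 the optimised expectation clearly exceeds the uniform-superposition value of 1.5 cut edges, though the paper's analysis concerns far deeper circuits and much larger graphs.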
A new paradigm for data science has emerged, with quantum data, quantum models, and quantum computational devices. This field, called Quantum Machine Learning (QML), aims to achieve a speedup over traditional machine learning for data analysis. However, its success usually hinges on efficiently training the parameters in quantum neural networks, and the field of QML is still lacking theoretical scaling results for their trainability. Some trainability results have been proven for a closely related field called Variational Quantum Algorithms (VQAs). While both fields involve training a parametrized quantum circuit, there are crucial differences that make the results for one setting not readily applicable to the other. In this work we bridge the two frameworks and show that gradient scaling results for VQAs can also be applied to study the gradient scaling of QML models. Our results indicate that features deemed detrimental for VQA trainability can also lead to issues such as barren plateaus in QML. Consequently, our work has implications for several QML proposals in the literature. In addition, we provide theoretical and numerical evidence that QML models exhibit further trainability issues not present in VQAs, arising from the use of a training dataset. We refer to these as dataset-induced barren plateaus. These results are most relevant when dealing with classical data, as here the choice of embedding scheme (i.e., the map between classical data and quantum states) can greatly affect the gradient scaling.
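The flavour of a barren plateau can be seen already in a toy product-circuit cost (a deliberately simplified model, not one from the paper): for E(theta) = prod_i cos(theta_i), the <Z...Z> expectation of n independent single-qubit RY rotations, the variance of the gradient component dE/dtheta_1 over uniformly random parameters is exactly 2^-n, i.e. exponentially vanishing in the number of qubits.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_grad_variance(n, samples=200_000):
    """Monte-Carlo estimate of Var[dE/dtheta_1] for the toy cost
    E(theta) = prod_i cos(theta_i), theta_i ~ Uniform[0, 2*pi).
    Analytically this variance equals 2**-n: a flat landscape."""
    theta = rng.uniform(0, 2 * np.pi, size=(samples, n))
    grad1 = -np.sin(theta[:, 0]) * np.cos(theta[:, 1:]).prod(axis=1)
    return grad1.var()

variances = [mc_grad_variance(n) for n in (2, 4, 8, 12)]
```

The dataset-induced plateaus discussed in the abstract are a further effect on top of this circuit-induced flattening, driven by the choice of data-embedding map.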
Samantha Koretsky,Pranav Gokhale,Jonathan M. Baker,Joshua Viszlai,Honghao Zheng,Niroj Gurung,Ryan Burg,Esa Aleksi Paaso,Amin Khodaei,Rozhin Eskandarpour,Frederic T. ChongOct 26 2021 quant-ph arXiv:2110.12624v1
In the present Noisy Intermediate-Scale Quantum (NISQ) era, hybrid algorithms that leverage classical resources to reduce quantum costs are particularly appealing. We formulate and apply such a hybrid quantum-classical algorithm to a power system optimization problem called Unit Commitment, which aims to satisfy a target power load at minimal cost. Our algorithm extends the Quantum Approximate Optimization Algorithm (QAOA) with a classical minimizer in order to support mixed binary optimization. Using Qiskit, we simulate results for sample systems to validate the effectiveness of our approach, and we compare against purely classical methods. Our results indicate that classical solvers are effective for our simulated Unit Commitment instances with fewer than 400 power generation units. However, for larger problem instances, the classical solvers either scale exponentially in runtime or must resort to coarse approximations. Potential quantum advantage would therefore require problem instances at this scale, with several hundred units.
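The hybrid decomposition can be caricatured classically on a toy three-unit instance (illustrative numbers, not the paper's data): an outer search over binary on/off commitments, the part QAOA would handle, wraps a continuous dispatch subproblem solved by a classical routine.

```python
import numpy as np
from itertools import product

# Toy unit-commitment instance: each unit i has a fixed commitment cost A[i],
# a linear cost B[i] per MW produced, and a maximum output P[i].
A = np.array([10.0, 20.0, 30.0])   # fixed (start-up) costs
B = np.array([2.0, 1.5, 1.0])      # marginal costs per MW
P = np.array([50.0, 80.0, 100.0])  # capacities in MW
L = 120.0                          # target load in MW

def dispatch_cost(on):
    """Continuous subproblem: cheapest dispatch of the committed units.
    With purely linear costs this is solved greedily by marginal cost."""
    if P[on].sum() < L:
        return np.inf              # committed units cannot cover the load
    remaining, total = L, A[on].sum()
    for i in sorted(np.flatnonzero(on), key=lambda i: B[i]):
        p = min(P[i], remaining)
        total += B[i] * p
        remaining -= p
    return total

# The binary outer loop is brute-forced here; QAOA would search it instead.
best = min((dispatch_cost(np.array(bits, dtype=bool)), bits)
           for bits in product([0, 1], repeat=3))
```

For this instance the optimum commits two units; real Unit Commitment adds ramp limits, minimum up/down times, and multiple time periods, which is where the classical solvers cited in the abstract begin to struggle.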
Computational chemistry is one of the most promising applications of quantum computing, mostly thanks to the development of the Variational Quantum Eigensolver (VQE) algorithm. VQE is being studied extensively, and numerous optimisations of VQE’s sub-processes have been suggested, including the encoding methods and the choice of excitations. Recently, adaptive methods were introduced that apply each excitation iteratively. When it comes to adaptive VQE, research has focused on the choice of excitation pool and the strategies for choosing each excitation. Here we focus on a usually overlooked component of VQE: the choice of the classical optimisation algorithm. We introduce the parabolic optimiser, which we designed specifically for the needs of VQE. This includes both a 1-D and an n-D optimiser that can be used for either adaptive or traditional VQE implementations. We then benchmark the parabolic optimiser against Nelder-Mead for various implementations of VQE. We find that the parabolic optimiser performs significantly better than traditional optimisation methods, requiring fewer CNOTs and fewer quantum experiments to achieve a given energy accuracy.
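The abstract does not spell out the optimiser's internals; a generic successive-parabolic-interpolation sketch of the 1-D idea (fit a parabola through three energy evaluations and jump to its vertex, not the paper's exact algorithm) looks like this:

```python
def parabolic_min(f, x0, step=0.5, tol=1e-8, max_iter=100):
    """Minimise f along one parameter by repeated parabolic interpolation:
    fit a parabola through three evaluated points and move to its vertex.
    Exact in one step whenever f is quadratic in the parameter."""
    a, b, c = x0 - step, x0, x0 + step
    for _ in range(max_iter):
        fa, fb, fc = f(a), f(b), f(c)
        denom = (b - a) * (fb - fc) - (b - c) * (fb - fa)
        if abs(denom) < 1e-15:        # degenerate (collinear) fit: stop
            break
        # Vertex of the interpolating parabola
        x = b - 0.5 * ((b - a) ** 2 * (fb - fc)
                       - (b - c) ** 2 * (fb - fa)) / denom
        if abs(x - b) < tol:
            break
        # Keep the three lowest-energy points as the next triple
        a, b, c = sorted(sorted([a, b, c, x], key=f)[:3])
    return min([a, b, c], key=f)
```

For VQE, each f-evaluation is one estimated circuit energy, so an optimiser that reuses three evaluations per step directly reduces the number of quantum experiments, the metric the abstract reports.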
In the absence of external relata, internal quantum reference frames (QRFs) appear widely in the literature on quantum gravity, gauge theories and quantum foundations. Here, we extend the perspective-neutral approach to QRF covariance to general unimodular Lie groups. This is a framework that links internal QRF perspectives via a manifestly gauge-invariant Hilbert space in the form of “quantum coordinate transformations”, and we clarify how it is a quantum extension of special covariance. We model the QRF orientations as coherent states which give rise to a covariant POVM, furnishing a consistent probability interpretation and encompassing non-ideal QRFs whose orientations are not perfectly distinguishable. We generalize the construction of relational observables, establish a variety of their algebraic properties and equip them with a transparent conditional probability interpretation. We import the distinction between gauge transformations and physical symmetries from gauge theories and identify the latter as QRF reorientations. The “quantum coordinate maps” into an internal QRF perspective are constructed via a conditioning on the QRF’s orientation, generalizing the Page-Wootters formalism and a symmetry reduction procedure. We find two types of QRF transformations: gauge induced “quantum coordinate transformations” as passive unitary changes of description and symmetry induced active changes of relational observables from one QRF to another. We reveal new effects: (i) QRFs with non-trivial orientation isotropy groups can only resolve isotropy-group-invariant properties of other subsystems; (ii) in the absence of symmetries, the internal perspective Hilbert space “rotates” through the kinematical subsystem Hilbert space as the QRF changes orientation. Finally, we invoke the symmetries to generalize the quantum relativity of subsystems before comparing with other approaches. [Abridged]
Room-temperature (RT), on-chip deterministic generation of indistinguishable photons coupled to photonic integrated circuits is key for quantum photonic applications. Nevertheless, high indistinguishability (I) at RT is difficult to obtain due to the intrinsic dephasing of most deterministic single-photon sources (SPS). Here we present the design, fabrication and optimization of a hybrid slot-Bragg nanophotonic cavity that achieves near-unity I and high efficiency (η) at RT for a variety of single-photon emitters. Our cavity provides modal volumes on the order of 10^-3 (λ/2n)^3, allowing for strong coupling of quantum photonic emitters that can be heterogeneously integrated. We show that high I and η should be possible by fine-tuning the quality factor (Q) depending on the intrinsic properties of the single-photon emitter. Furthermore, we perform a machine learning optimization based on the combination of a deep neural network and a genetic algorithm (GA) to further decrease the modal volume by almost a factor of three while relaxing the tight dimensions required for strong coupling.
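The optimization loop pairs a neural-network surrogate with a GA; a minimal stand-in conveys the structure (a hypothetical quadratic surrogate replaces the trained network, and the GA is a generic real-coded one, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(1)

def surrogate(x):
    """Stand-in for the trained neural-network predictor of the cavity
    figure of merit over two normalised geometry parameters (toy function)."""
    return (x[..., 0] - 0.3) ** 2 + (x[..., 1] - 0.7) ** 2

def genetic_minimise(f, bounds, pop=40, gens=60, mut=0.1):
    """Generic real-coded GA: truncation selection, blend crossover,
    Gaussian mutation, clipping to the search box."""
    lo, hi = np.array(bounds).T
    X = rng.uniform(lo, hi, size=(pop, len(lo)))
    for _ in range(gens):
        fit = f(X)
        parents = X[np.argsort(fit)[:pop // 2]]          # keep best half
        kids = parents[rng.integers(0, pop // 2, pop)]
        mates = parents[rng.integers(0, pop // 2, pop)]
        alpha = rng.random((pop, 1))
        kids = alpha * kids + (1 - alpha) * mates        # blend crossover
        kids += rng.normal(0, mut, kids.shape)           # Gaussian mutation
        X = np.clip(kids, lo, hi)
    return X[np.argmin(f(X))]

best = genetic_minimise(surrogate, [(0, 1), (0, 1)])
```

The appeal of the surrogate is speed: the GA queries the cheap network thousands of times instead of running a full electromagnetic simulation per candidate geometry.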
We use a neural network variational ansatz to compute Gaussian quantum discrete solitons in an array of waveguides described by the quantum discrete nonlinear Schroedinger equation. By training the quantum machine learning model in the phase space, we find different quantum soliton solutions varying the number of particles and interaction strength. The use of Gaussian states enables measuring the degree of entanglement and the boson sampling patterns. We compute the probability of generating different particle pairs when varying the soliton features and unveil that bound states of discrete solitons emit correlated pairs of photons. These results may have a role in boson sampling experiments with nonlinear systems and in developing quantum processors to generate entangled many-photon nonlinear states.
We propose a new binary formulation of the Travelling Salesman Problem (TSP), which improves on the best formulation of the Vehicle Routing Problem (VRP) in terms of the minimum number of variables required. Furthermore, we present a detailed study of the constraints used and compare our model (GPS) with other common formulations (MTZ and the native formulation). Finally, we check the coherence and efficiency of the proposed formulation by running it on a quantum annealer, the D-Wave 2000Q_6.
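For contrast with the proposed GPS encoding (whose details the abstract does not give), the "native" formulation it benchmarks against is the standard one-hot QUBO with n^2 binary variables x[i,t], where x[i,t] = 1 means city i is visited at step t. A brute-force check on a three-city toy instance:

```python
import numpy as np
from itertools import product

n = 3
D = np.array([[0, 1, 2],    # toy symmetric distance matrix
              [1, 0, 3],
              [2, 3, 0]], dtype=float)
A = 10.0  # penalty weight; must exceed the largest possible tour length

def energy(x):
    """One-hot TSP objective: penalised constraints plus tour length."""
    x = x.reshape(n, n)                           # x[i, t]
    pen = ((x.sum(axis=1) - 1) ** 2).sum()        # each city visited once
    pen += ((x.sum(axis=0) - 1) ** 2).sum()       # one city per step
    tour = sum(D[i, j] * x[i, t] * x[j, (t + 1) % n]
               for i in range(n) for j in range(n) for t in range(n))
    return A * pen + tour

# Exhaustive search over all 2^(n^2) assignments (feasible only for tiny n)
best = min((energy(np.array(b)), b) for b in product([0, 1], repeat=n * n))
```

Every valid tour of this triangle has length 6, so the minimum-energy assignment is a permutation matrix with zero penalty. The n^2 variable count is exactly what alternative encodings such as GPS aim to reduce, since annealer qubit budgets are the binding constraint.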