- Quantum Machine Learning – An Introduction to QGANs
- WANT TO LEARN QUANTUM MACHINE LEARNING? HERE’S HOW!
- EXPLORING THE DEPTH OF QUANTUM MACHINE LEARNING
- Quantum Machine Learning Algorithms for Drug Discovery Applications
- What is Quantum AI?
- How Quantum Computing Will Help Solve Real-World Problems Much Faster
- What would happen if we connected the human brain to a quantum computer?
- Improving quantum computer performance with machine learning
- Shi-Ju Ran: “Deep learning quantum states for Hamiltonian predictions”
- Beyond the Patterns 32 – Raghavendra Selvan: Quantum Tensor Networks for Medical Image Analysis
- Extending the Performance of Noisy Superconducting Quantum Processors – Irfan Siddiqi
- What is a Quantum GAN?
- How to Use Google Colab For machine learning Project || Google Colaboratory
- Amazon Braket SDK: v1.6.1
- QE labs OpenQL: Release 0.9.0: major internal improvements, architecture system, and pass management
- An evolving objective function for improved variational quantum optimisation
- Molecular Excited State Calculations with Adaptive Wavefunctions on a Quantum Eigensolver Emulation: Reducing Circuit Depth and Separating Spin States
- Variational Quantum Classifiers Through the Lens of the Hessian
- Frequentist Parameter Estimation with Supervised Learning
- Quantum Embedding Search for Quantum Machine Learning
- Developments of Neural Networks in Quantum Physics
- Quantum mean value approximator for hard integer value problems
- Quantum Approximate Optimization Algorithm with Adaptive Bias Fields
- IGO-QNN: Quantum Neural Network Architecture for Inductive Grover Oracularization
- Quantum geometric tensor and quantum phase transitions in the Lipkin-Meshkov-Glick model
A promising approach to useful computational quantum advantage is to use variational quantum algorithms for optimisation problems. Crucial to the performance of these algorithms is ensuring that they converge with high probability to a near-optimal solution in a short time. In Barkoutsos et al (Quantum 2020) an alternative class of objective functions, called Conditional Value-at-Risk (CVaR), was introduced and shown to perform better than standard objective functions. Here we extend that work by introducing an evolving objective function, which we call Ascending-CVaR, that can be used for any optimisation problem. We test our proposed objective function, in an emulation environment, on three different optimisation problems as case studies: Max-Cut, Number Partitioning and Portfolio Optimisation. We examine multiple instances of different sizes and analyse the performance using the Variational Quantum Eigensolver (VQE) with a hardware-efficient ansatz and the Quantum Approximate Optimization Algorithm (QAOA). We show that Ascending-CVaR in all cases performs better than standard objective functions or the “constant” CVaR of Barkoutsos et al (Quantum 2020) and that it can be used as a heuristic for avoiding sub-optimal minima. Our proposal achieves higher overlap with the ideal state in all problems, whether we consider easy or hard instances: on average it gives up to ten times greater overlap on Portfolio Optimisation and Number Partitioning, and an 80% improvement on Max-Cut. On the hard Number Partitioning instances we consider, standard objective functions fail to find the correct solution in almost all cases, constant CVaR finds it in 60% of cases, while Ascending-CVaR finds it in 95% of cases.
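The CVaR objective itself is easy to compute classically once measurement outcomes are in hand: it is the mean of the best (lowest-energy) alpha-fraction of samples, and the "evolving" variant raises alpha over the course of the optimisation so the objective approaches the standard expectation value. A minimal sketch, where the function names and the linear alpha schedule are illustrative assumptions, not the paper's exact schedule:

```python
def cvar(energies, alpha):
    # Conditional Value-at-Risk: mean of the best (lowest-energy)
    # alpha-fraction of the sampled measurement outcomes.
    srt = sorted(energies)
    k = max(1, int(alpha * len(srt)))
    return sum(srt[:k]) / k

def ascending_alpha(step, total_steps, alpha_start=0.05, alpha_end=1.0):
    # Hypothetical linear schedule: alpha grows toward 1 during the
    # optimisation, so the objective smoothly becomes the plain mean energy.
    t = step / max(1, total_steps - 1)
    return alpha_start + t * (alpha_end - alpha_start)
```

With `alpha = 1.0` the function reduces to the ordinary sample mean, which is why the ascending schedule interpolates between the aggressive early-stage CVaR and the standard objective.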
Ab initio electronic excited state calculations are necessary for the quantitative study of photochemical reactions, but their accurate computation on classical computers is plagued by prohibitive scaling. Variational Quantum Deflation (VQD) is an extension of the Variational Quantum Eigensolver (VQE) for calculating electronic excited state energies, and has the potential to address some of these scaling challenges using quantum computers. However, quantum computers available in the near term can only support a limited number of circuit operations, so reducing the quantum computational cost of VQD methods is critical to their realisation. In this work, we investigate the use of adaptive quantum circuit growth (ADAPT-VQE) in excited state VQD calculations, a strategy that has previously been successful in reducing the resources required for ground state VQE energy calculations. We also invoke spin restrictions to separate the recovery of eigenstates with different spin symmetry, reducing the number of calculations and the accumulation of errors. We created a quantum eigensolver emulation package – Quantum Eigensolver Building on Achievements of Both quantum computing and quantum chemistry (QEBAB) – for testing the proposed adaptive procedure against two VQD methods that use fixed-length quantum circuits. For a lithium hydride test case, we found that the spin-restricted adaptive-growth variant of VQD uses by far the most compact circuits and consistently recovers adequate electron correlation energy across different nuclear geometries and eigenstates while isolating the singlet and triplet manifolds. This work is a further step towards techniques that improve the efficiency of hybrid quantum algorithms for excited state quantum chemistry, opening up the possibility of exploiting real quantum computers for electronic excited state calculations sooner than previously anticipated.
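The deflation idea behind VQD fits in a few lines of linear algebra: each excited state is found by minimising the trial state's energy plus an overlap penalty against the previously found eigenstates. A toy statevector sketch, where the function name and penalty weight are illustrative assumptions, not QEBAB's API:

```python
import numpy as np

def vqd_cost(hamiltonian, trial, lower_states, beta=10.0):
    # Variational Quantum Deflation objective: the trial state's energy plus
    # an overlap penalty that "deflates" previously found eigenstates out of
    # the search. beta must exceed the relevant energy gap to be effective.
    trial = np.asarray(trial, dtype=complex)
    trial = trial / np.linalg.norm(trial)
    energy = np.real(np.vdot(trial, hamiltonian @ trial))
    penalty = sum(beta * abs(np.vdot(prev, trial)) ** 2 for prev in lower_states)
    return energy + penalty
```

Minimising this cost with the ground state in `lower_states` drives the optimiser toward the first excited state; the spin restrictions described above would additionally constrain the trial wavefunction to a fixed spin sector.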
In quantum computing, variational quantum algorithms (VQAs) are well suited for finding optimal combinations in applications ranging from chemistry to finance. Training VQAs with gradient-descent optimization algorithms has shown good convergence. At this early stage, however, the simulation of variational quantum circuits on noisy intermediate-scale quantum (NISQ) devices suffers from noisy outputs and, just like classical deep learning, from vanishing-gradient problems. It is therefore a realistic goal to study the topology of the loss landscape and to visualize the curvature information and trainability of these circuits in the presence of vanishing gradients. In this paper, we calculate the Hessian and visualize the loss landscape of variational quantum classifiers at different points in parameter space. We interpret the curvature information of variational quantum classifiers (VQCs) and show the convergence of the loss function, which helps us better understand the behavior of variational quantum circuits and tackle optimization problems efficiently. We investigate variational quantum classifiers via the Hessian on quantum computers, starting with a simple 4-bit parity problem to gain insight into the practical behavior of the Hessian, and then thoroughly analyze the behavior of its eigenvalues while training a variational quantum classifier on the Diabetes dataset.
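Since a VQC loss is just a scalar function of the circuit parameters, its Hessian can be estimated with central finite differences and its eigenvalues then read off the local curvature. A minimal, generic sketch (the paper may use a different estimator, e.g. parameter-shift rules):

```python
import numpy as np

def numerical_hessian(f, theta, eps=1e-4):
    # Central-difference estimate of the Hessian of a scalar loss f at theta;
    # its eigenvalues describe the local curvature of the loss landscape.
    theta = np.asarray(theta, dtype=float)
    n = theta.size
    hess = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            def shifted(si, sj):
                t = theta.copy()
                t[i] += si * eps
                t[j] += sj * eps
                return f(t)
            hess[i, j] = (shifted(1, 1) - shifted(1, -1)
                          - shifted(-1, 1) + shifted(-1, -1)) / (4 * eps ** 2)
    return hess
```

Feeding the result to `np.linalg.eigvalsh` gives the curvature spectrum; a cluster of near-zero eigenvalues is the flat-landscape signature associated with vanishing gradients.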
Recently there has been a great deal of interest surrounding the calibration of quantum sensors using machine learning techniques. In this work, we explore the use of regression to infer a machine-learned point estimate of an unknown parameter. Although the analysis is necessarily frequentist (relying on repeated estimates to build up statistics), we clarify that this machine-learned estimator converges to the Bayesian maximum a posteriori estimator (subject to some regularity conditions). When the number of training measurements is large, this is identical to the well-known maximum-likelihood estimator (MLE), and using this fact, we argue that the Cramér-Rao sensitivity bound applies to the mean-square-error cost function and can therefore be used to select optimal model and training parameters. We show that the machine-learned estimator inherits the desirable asymptotic properties of the MLE, up to a limit imposed by the resolution of the training grid. Furthermore, we investigate the role of quantum noise in the training process, and show that this noise imposes a fundamental limit on the number of grid points. This manuscript paves the way for machine learning to assist the calibration of quantum sensors, thereby allowing maximum-likelihood inference to play a more prominent role in the design and operation of the next generation of ultra-precise sensors.
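The grid-resolution limit mentioned above is easy to see in a toy setting: if the estimator can only return values from a finite training grid, the maximum-likelihood estimate is the grid point with the highest likelihood. A sketch using a simple Bernoulli measurement model (an illustrative stand-in, not the paper's sensor model):

```python
import math

def mle_on_grid(outcomes, grid):
    # Grid-restricted maximum-likelihood estimate of a Bernoulli parameter:
    # return the grid point maximizing the log-likelihood of 0/1 outcomes.
    # The finite grid mirrors the resolution limit discussed in the paper.
    k = sum(outcomes)            # number of "1" outcomes
    n = len(outcomes)

    def log_likelihood(p):
        return k * math.log(p) + (n - k) * math.log(1.0 - p)

    return max(grid, key=log_likelihood)
```

However fine the data, the estimate can never be closer to the true parameter than half the grid spacing, which is the resolution ceiling the abstract refers to.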
This paper introduces a novel quantum embedding search algorithm (QES, pronounced “quest”), enabling the search for an optimal quantum embedding design for a specific dataset of interest. First, we establish the connection between the structure of quantum embeddings and the representations of directed multigraphs, which yields a well-defined search space. Second, we restrict the entanglement level to reduce the cardinality of the search space to a size feasible for practical implementations. Finally, we mitigate the cost of evaluating the true loss function by using surrogate models via sequential model-based optimization. We demonstrate the feasibility of our proposed approach on synthetic and Iris datasets, showing empirically that the quantum embedding architecture found by QES outperforms manual designs while achieving performance comparable to classical machine learning models.
Quantum machine learning emerges from the symbiosis of quantum mechanics and machine learning. In particular, the latter appears in the quantum sciences as (i) the use of classical machine learning as a tool applied to quantum physics problems, or (ii) the use of quantum resources such as superposition, entanglement, or quantum optimization protocols to enhance the performance of classification and regression tasks compared to their classical counterparts. This paper reviews examples of both scenarios. On the one hand, a classical neural network is applied to design a new quantum sensing protocol. On the other hand, a quantum neural network based on the dynamics of a quantum perceptron, optimized with the aid of shortcuts to adiabaticity, achieves short operation times and robust performance. These examples demonstrate the mutual reinforcement of neural networks and quantum physics.
Evaluating the expectation value of a quantum circuit is a classically difficult problem known as the quantum mean value problem (QMV). It arises when optimizing the quantum approximate optimization algorithm and other variational quantum eigensolvers. We show that such an optimization can be improved substantially by using an approximation rather than the exact expectation. Together with efficient classical sampling algorithms, a quantum algorithm with minimal gate count can thus improve the efficiency of general integer-valued problems, such as the shortest vector problem (SVP) investigated in this work.
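For a cost observable that is diagonal in the computational basis, the approximation in question amounts to replacing the exact expectation with a sampled average over measured bitstrings. A minimal sketch (the function name and interface are illustrative assumptions, not the paper's construction):

```python
import random

def sampled_mean_value(probabilities, values, shots, rng=random):
    # Monte-Carlo approximation of a diagonal observable's expectation value:
    # draw outcome indices from the circuit's output distribution and average
    # the corresponding objective values, instead of summing over all 2^n
    # basis states exactly.
    outcomes = rng.choices(range(len(values)), weights=probabilities, k=shots)
    return sum(values[i] for i in outcomes) / shots
```

The sampling error shrinks as `1/sqrt(shots)`, so a modest number of shots already yields an estimate accurate enough to guide a variational optimizer.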
The quantum approximate optimization algorithm (QAOA) transforms a simple many-qubit wavefunction into one which encodes the solution to a difficult classical optimization problem. It does this by optimizing the schedule according to which two unitary operators are alternately applied to the qubits. In this paper, this procedure is modified by updating the operators themselves to include local fields, using information from the measured wavefunction at the end of one iteration step to improve the operators at later steps. Numerical simulation on MAXCUT problems shows that this decreases the runtime of QAOA very substantially, and the improvement appears to grow with problem size. Our method requires essentially the same number of quantum gates per optimization step as standard QAOA. Application of this modified algorithm should bring the time to quantum advantage for optimization problems closer.
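One natural way to extract local fields from the measured wavefunction is to estimate the single-qubit expectation values ⟨Z_i⟩ from sampled bitstrings and feed them back as biases in the next round's operators. A sketch under that assumption (the paper's exact update rule may differ):

```python
def mean_z_fields(samples):
    # Estimate <Z_i> for each qubit from measured bitstrings, mapping
    # bit 0 -> +1 and bit 1 -> -1. These per-qubit averages can serve as
    # local bias fields for the next QAOA iteration's operators.
    n_qubits = len(samples[0])
    n_shots = len(samples)
    return [sum(1 - 2 * s[i] for s in samples) / n_shots
            for i in range(n_qubits)]
```

Because the fields are computed from measurements the algorithm already performs, this feedback costs essentially no extra quantum gates, consistent with the claim above.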
We propose a novel paradigm for integrating Grover’s algorithm into a machine learning framework: the inductive Grover oracular quantum neural network (IGO-QNN). The model defines a variational quantum circuit with hidden layers of parameterized quantum neurons, densely connected via entangled synapses, that encodes a dynamic Grover search oracle trainable from a set of database-hit training examples. This widens the range of problems to which Grover’s unstructured search algorithm applies to include the vast majority of problems lacking analytic descriptions of solution verifiers, allowing a quadratic speed-up in unstructured search for problems whose input-output relationships cannot be tractably derived deductively. This generalization of Grover oracularization may prove particularly effective in deep reinforcement learning, computer vision, and, more generally, as a feature-vector classifier on top of an existing model.
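To see what the learned oracle replaces, it helps to recall standard Grover search, where the oracle is a hard-coded phase flip on the marked item; IGO-QNN's contribution is learning that flip from examples instead. A small classical statevector simulation of the standard algorithm (function names are my own):

```python
import numpy as np

def grover_search(n_qubits, marked, n_iters):
    # Classical statevector simulation of Grover's algorithm: start in the
    # uniform superposition, then alternate the oracle's phase flip on the
    # marked index with inversion about the mean (the diffusion operator).
    dim = 2 ** n_qubits
    state = np.full(dim, 1.0 / np.sqrt(dim))
    for _ in range(n_iters):
        state[marked] *= -1.0                 # oracle: phase-flip the solution
        state = 2.0 * state.mean() - state    # diffusion: invert about the mean
    return state
```

For 3 qubits (8 items), roughly `(pi/4)*sqrt(8) ≈ 2` iterations concentrate most of the amplitude on the marked item, illustrating the quadratic speed-up the abstract invokes.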
We study the quantum metric tensor and its scalar curvature for a particular version of the Lipkin-Meshkov-Glick model. We build the classical Hamiltonian using Bloch coherent states and find its stationary points. They exhibit the presence of a ground state quantum phase transition, where a bifurcation occurs, showing a change of stability associated with an excited state quantum phase transition. Symmetrically, for a sign change in one Hamiltonian parameter, the same phenomenon is observed in the highest energy state. Employing the Holstein-Primakoff approximation, we derive analytic expressions for the quantum metric tensor and compute the scalar and Berry curvatures. We contrast the analytic results with their finite-size counterparts obtained through exact numerical diagonalization and find an excellent agreement between them for large sizes of the system in a wide region of the parameter space, except in points near the phase transition where the Holstein-Primakoff approximation ceases to be valid.
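For reference, the quantities studied here are the real and imaginary parts of the quantum geometric tensor; in standard notation (sign and normalization conventions may differ slightly from the paper's):

```latex
Q_{\mu\nu} = \langle \partial_\mu \psi | \partial_\nu \psi \rangle
           - \langle \partial_\mu \psi | \psi \rangle \langle \psi | \partial_\nu \psi \rangle,
\qquad
g_{\mu\nu} = \operatorname{Re} Q_{\mu\nu},
\qquad
F_{\mu\nu} = -2 \operatorname{Im} Q_{\mu\nu},
```

with \(g_{\mu\nu}\) the quantum metric tensor and \(F_{\mu\nu}\) the Berry curvature, both taken with respect to the Hamiltonian parameters \(\lambda^\mu\).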