- An optimist’s guide to the future: How quantum AI could make Earth a paradise
- Quantum Computing, Artificial Intelligence, & Machine Learning for Drug Discovery
- Resonant quantum principal component analysis
- How Big Data Carried Graph Theory Into New Dimensions
- Quantum Machine Learning APIs + Distributed Machine Learning with Amazon SageMaker and EFS
- This ‘Quantum Brain’ Would Mimic Our Own to Speed Up AI
- Bored with classical computers? – Quantum AI with OpenFermion
- How to implement a Quantum Autoencoder in TensorFlow-Quantum (and how not to)
- Circuit optimization
- QNodes are even more powerful
- Device Resource Tracker
- Containerization support
- Improved Hamiltonian simulations
- New gradients module
- Even more new operations and templates
Bug Fixes and Other Changes
- Add test for local simulator device names
- The Presence and Absence of Barren Plateaus in Tensor-network Based Machine Learning
- Probing ground state properties of the kagome antiferromagnetic Heisenberg model using the Variational Quantum Eigensolver
- Solving the Hubbard model using density matrix embedding theory and the variational quantum eigensolver
- QDataset: Quantum Datasets for Machine Learning
- Optimized Hamiltonian learning from short-time measurements
- Quadratic Unconstrained Binary Optimisation via Quantum-Inspired Annealing
- Quantum Optimization Heuristics with an Application to Knapsack Problems
- Benchmarking Machine Learning Algorithms for Adaptive Quantum Phase Estimation with Noisy Intermediate-Scale Quantum Sensors
- Machine Learning Based Parameter Estimation of Gaussian Quantum States
- Determinant-free fermionic wave function using feed-forward neural network
- High-dimensional encryption in optical fibers using machine learning
- Emulating ultrafast dissipative quantum dynamics with deep neural networks
The Presence and Absence of Barren Plateaus in Tensor-network Based Machine Learning
Tensor networks are efficient representations of high-dimensional tensors with widespread applications in quantum many-body physics. Recently, they have been adapted to the field of machine learning, giving rise to an emergent research frontier that has attracted considerable attention. Here, we study the trainability of tensor-network based machine learning models by exploring the landscapes of different loss functions, with a focus on the matrix product states (also called tensor trains) architecture. In particular, we rigorously prove that barren plateaus (i.e., exponentially vanishing gradients) prevail in the training process of machine learning algorithms with global loss functions. In contrast, for local loss functions the gradients with respect to variational parameters near the local observables do not vanish as the system size increases. Barren plateaus are therefore absent in this case, and the corresponding models could be efficiently trainable. Our results rigorously reveal a crucial aspect of tensor-network based machine learning, providing a valuable guide for both practical applications and theoretical studies.
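The paper's result concerns tensor-network models, but the global-versus-local contrast is easy to see numerically in the closely related setting of layered parameterized circuits. The sketch below (an analogous illustration, not the paper's MPS construction; all function names are ours) estimates the variance of a parameter-shift gradient for a global cost ⟨Z⊗…⊗Z⟩ and a local cost ⟨Z₀⟩:

```python
import numpy as np

def apply_ry(psi, theta, q):
    """Apply RY(theta) to qubit q (qubit 0 = most significant bit)."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    U = np.array([[c, -s], [s, c]])
    block = psi.reshape(2**q, 2, -1)
    return np.einsum('ba,iaj->ibj', U, block).reshape(-1)

def apply_cz(psi, q1, q2, n):
    """CZ between qubits q1 and q2 (diagonal, so just a sign flip)."""
    idx = np.arange(2**n)
    b1 = (idx >> (n - 1 - q1)) & 1
    b2 = (idx >> (n - 1 - q2)) & 1
    return psi * (1 - 2 * (b1 & b2))

def circuit_state(thetas, n, layers):
    """Layered ansatz: RY rotations on every qubit, then a ring of CZs."""
    psi = np.zeros(2**n); psi[0] = 1.0
    t = iter(thetas)
    for _ in range(layers):
        for q in range(n):
            psi = apply_ry(psi, next(t), q)
        for q in range(n):
            psi = apply_cz(psi, q, (q + 1) % n, n)
    return psi

def z_expectation(psi, qubits, n):
    """<Z...Z> on the given qubits."""
    idx = np.arange(2**n)
    diag = np.ones(2**n)
    for q in qubits:
        diag = diag * (1 - 2 * ((idx >> (n - 1 - q)) & 1))
    return float(np.sum(np.abs(psi)**2 * diag))

def grad_variance(n, layers, obs_qubits, samples=200, seed=0):
    """Variance over random parameters of the parameter-shift gradient
    with respect to the last-layer RY on qubit 0."""
    rng = np.random.default_rng(seed)
    p = n * (layers - 1)
    grads = []
    for _ in range(samples):
        thetas = rng.uniform(0, 2 * np.pi, n * layers)
        vals = []
        for shift in (np.pi / 2, -np.pi / 2):
            t = thetas.copy(); t[p] += shift
            vals.append(z_expectation(circuit_state(t, n, layers), obs_qubits, n))
        grads.append((vals[0] - vals[1]) / 2)
    return float(np.var(grads))
```

Comparing `grad_variance(4, 4, range(4))` against `grad_variance(8, 8, range(8))` shows the global-cost gradient variance shrinking with system size, while the local-cost variance `grad_variance(8, 8, [0])` stays much larger, mirroring the trainability dichotomy the paper proves for MPS models.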
Probing ground state properties of the kagome antiferromagnetic Heisenberg model using the Variational Quantum Eigensolver
Finding and probing the ground states of spin lattices, such as the antiferromagnetic Heisenberg model on the kagome lattice (KAFH), is a very challenging problem on classical computers and only possible for relatively small systems. We propose using the Variational Quantum Eigensolver (VQE) to find the ground state of the KAFH on a quantum computer. We find efficient ansatz circuits and show how physically interesting observables can be measured efficiently. To investigate the expressiveness and scaling of our ansatz circuits we used classical, exact simulations of VQE for the KAFH for different lattices ranging from 8 to 24 qubits. We find that the fidelity with the ground state approaches one exponentially in the circuit depth for all lattices considered, except for a 24-qubit lattice with an almost degenerate ground state. We conclude that VQE circuits that are able to represent the ground state of the KAFH on lattices inaccessible to exact diagonalisation techniques may be achievable on near-term quantum hardware. However, for large systems circuits with many variational parameters are needed to achieve high fidelity with the ground state.
Solving the Hubbard model using density matrix embedding theory and the variational quantum eigensolver
Calculating the ground state properties of a Hamiltonian can be mapped to the problem of finding the ground state of a smaller Hamiltonian through the use of embedding methods. These embedding techniques have the ability to drastically reduce the problem size, and hence the number of qubits required when running on a quantum computer. However, the embedding process can produce a relatively complicated Hamiltonian, leading to a more complex quantum algorithm. In this paper we carry out a detailed study into how density matrix embedding theory (DMET) could be implemented on a quantum computer to solve the Hubbard model. We consider the variational quantum eigensolver (VQE) as the solver for the embedded Hamiltonian within the DMET algorithm. We derive the exact form of the embedded Hamiltonian and use it to construct efficient ansatz circuits and measurement schemes. We conduct detailed numerical simulations up to 16 qubits, the largest to date, for a range of Hubbard model parameters and find that the combination of DMET and VQE is effective for reproducing ground state properties of the model.
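As a minimal illustration of the VQE ingredient, the sketch below treats the smallest Hubbard problem: a two-site dimer in the two-electron, S_z = 0 sector, with a one-parameter ansatz and a classical optimizer standing in for the quantum expectation-value loop. The explicit matrix (one common sign convention for the fermionic ordering), the ansatz, and all names are our illustrative assumptions, not the paper's construction:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def hubbard_dimer(t, U):
    """Two-site Hubbard model in the two-electron, S_z = 0 sector.
    Basis: |up down, 0>, |0, up down>, |up, down>, |down, up>."""
    return np.array([
        [U,   0, -t, -t],
        [0,   U, -t, -t],
        [-t, -t,  0,  0],
        [-t, -t,  0,  0],
    ], dtype=float)

def ansatz(theta):
    """One-parameter trial state mixing the covalent and ionic singlet combinations."""
    ionic = np.array([1, 1, 0, 0]) / np.sqrt(2)
    covalent = np.array([0, 0, 1, 1]) / np.sqrt(2)
    return np.cos(theta) * covalent + np.sin(theta) * ionic

def vqe_energy(t, U):
    """Classical outer loop minimizing the energy expectation of the ansatz."""
    H = hubbard_dimer(t, U)
    energy = lambda th: ansatz(th) @ H @ ansatz(th)
    res = minimize_scalar(energy, bounds=(-np.pi / 2, np.pi / 2), method='bounded')
    return res.fun

t, U = 1.0, 4.0
e_vqe = vqe_energy(t, U)
e_exact = (U - np.sqrt(U**2 + 16 * t**2)) / 2  # known ground energy in this sector
```

For t = 1, U = 4 this one-parameter ansatz already reproduces the exact sector ground energy (U − sqrt(U² + 16t²))/2 ≈ −0.828, which is why finding compact yet expressive ansatz circuits for the embedded Hamiltonian is the crux of the larger-scale study.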
QDataset: Quantum Datasets for Machine Learning
The availability of large-scale datasets on which to train, benchmark and test algorithms has been central to the rapid development of machine learning and its maturity as a research discipline. Despite considerable advancements in recent years, the field of quantum machine learning (QML) has thus far lacked a set of comprehensive large-scale datasets upon which to benchmark the development of algorithms for use in applied and theoretical quantum settings. In this paper, we introduce such a dataset, the QDataSet, a quantum dataset designed specifically to facilitate the training and development of QML algorithms. The QDataSet comprises 52 high-quality publicly available datasets derived from simulations of one- and two-qubit systems evolving in the presence and/or absence of noise. The datasets are structured to provide a wealth of information to enable machine learning practitioners to use the QDataSet to solve problems in applied quantum computation, such as quantum control, quantum spectroscopy and tomography. Accompanying the datasets on the associated GitHub repository are a set of workbooks demonstrating the use of the QDataSet in a range of optimisation contexts.
Optimized Hamiltonian learning from short-time measurements
Characterizing noisy quantum devices requires methods for learning the underlying quantum Hamiltonian which governs their dynamics. Often, such methods compare measurements to simulations of candidate Hamiltonians, a task whose computational cost scales exponentially. Here, we analyze and optimize a method which circumvents this difficulty using measurements of short-time dynamics. We provide estimates for the optimal measurement schedule and reconstruction error. We demonstrate that the reconstruction requires a system-size independent number of experimental shots, and characterize an informationally-complete set of state preparations and measurements for learning local Hamiltonians. Finally, we show how grouping of commuting observables and use of Hamiltonian symmetries can improve the Hamiltonian reconstruction.
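The short-time idea can be sketched for a single qubit: to first order, ⟨O(t)⟩ ≈ ⟨O⟩ + t⟨i[H, O]⟩, so slopes of early-time expectation values give linear equations in the Pauli coefficients of H. The numpy sketch below (the specific states, observables, and names are our illustrative choices, not the paper's protocol) recovers a single-qubit Hamiltonian this way:

```python
import numpy as np

# single-qubit Pauli basis
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
PAULIS = [X, Y, Z]

def evolve(psi, H, t):
    """Exact evolution exp(-iHt)|psi> for traceless 2x2 H = c . sigma."""
    c = np.array([(np.trace(P @ H)).real / 2 for P in PAULIS])
    w = np.linalg.norm(c)
    U = np.cos(w * t) * np.eye(2) - 1j * np.sin(w * t) * H / w
    return U @ psi

def expval(psi, O):
    return float((psi.conj() @ O @ psi).real)

def learn_hamiltonian(H_true, dt=1e-3):
    """Recover Pauli coefficients from short-time slopes:
    d<O>/dt at t=0 equals <i[H, O]> = sum_k c_k <i[P_k, O]>."""
    states = [np.array([1, 0], dtype=complex),                # |0>
              np.array([1, 1], dtype=complex) / np.sqrt(2),   # |+>
              np.array([1, 1j], dtype=complex) / np.sqrt(2)]  # |+i>
    A, b = [], []
    for psi in states:
        psi_dt = evolve(psi, H_true, dt)
        for O in PAULIS:
            b.append((expval(psi_dt, O) - expval(psi, O)) / dt)  # measured slope
            A.append([expval(psi, 1j * (P @ O - O @ P)) for P in PAULIS])
    c_est, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return c_est

c_true = np.array([0.3, -0.5, 0.8])
H_true = sum(c * P for c, P in zip(c_true, PAULIS))
c_est = learn_hamiltonian(H_true)
```

The three states and three observables form an informationally complete set for a single qubit, so the 9×3 least-squares system pins down all coefficients; the residual error is set by the finite evolution time dt.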
Quadratic Unconstrained Binary Optimisation via Quantum-Inspired Annealing
We present a classical algorithm to find approximate solutions to instances of quadratic unconstrained binary optimisation. The algorithm can be seen as an analogue of quantum annealing under the restriction of a product state space, where the dynamical evolution in quantum annealing is replaced with a gradient-descent based method. This formulation is able to quickly find high-quality solutions to large-scale problem instances, and can naturally be accelerated by dedicated hardware such as graphics processing units. We benchmark our approach for large-scale problem instances with tuneable hardness and planted solutions. We find that our algorithm offers similar performance to current state-of-the-art approaches within a comparably simple gradient-based and non-stochastic setting.
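A bare-bones version of the idea, relaxing the binary variables to a product of independent "soft spins", following the energy gradient, and then rounding, can be sketched as follows. The sigmoid parameterization and all hyperparameters are our illustrative choices, not the paper's exact dynamics:

```python
import numpy as np

def qubo_relaxed_descent(Q, steps=2000, lr=0.2, seed=0):
    """Minimize x^T Q x over binary x by relaxing x_i = sigmoid(theta_i),
    running plain gradient descent in theta, then rounding: a mean-field
    product-state stand-in for the annealing dynamics described above."""
    rng = np.random.default_rng(seed)
    n = Q.shape[0]
    theta = rng.normal(scale=0.1, size=n)
    Qs = Q + Q.T
    for _ in range(steps):
        x = 1.0 / (1.0 + np.exp(-theta))
        grad_x = Qs @ x                       # gradient of x^T Q x w.r.t. x
        theta -= lr * grad_x * x * (1.0 - x)  # chain rule through the sigmoid
    x_bin = (1.0 / (1.0 + np.exp(-theta)) > 0.5).astype(int)
    return x_bin, float(x_bin @ Q @ x_bin)

# easy planted instance: Q = -I has optimum x = (1, ..., 1) with energy -n
x_best, e_best = qubo_relaxed_descent(-np.eye(6))
```

Because every operation is a dense matrix-vector product, this relaxation maps directly onto GPU hardware, which is the acceleration route the abstract mentions.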
Quantum Optimization Heuristics with an Application to Knapsack Problems
This paper introduces two techniques that make the standard Quantum Approximate Optimization Algorithm (QAOA) more suitable for constrained optimization problems. The first technique describes how to use the outcome of a prior greedy classical algorithm to define an initial quantum state and mixing operation to adjust the quantum optimization algorithm to explore the possible answers around this initial greedy solution. The second technique is used to nudge the quantum exploration to avoid the local minima around the greedy solutions. To analyze the benefits of these two techniques we run the quantum algorithm on known hard instances of the Knapsack Problem using unit-depth quantum circuits. The results show that the adjusted quantum optimization heuristics typically perform better than various classical heuristics.
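The first technique starts from a greedy classical solution. Here is a sketch of that ingredient, plus the rotation angles that would prepare the corresponding computational-basis product state as a warm start (the paper's actual state and mixer construction is more elaborate; names here are ours):

```python
import numpy as np

def greedy_knapsack(values, weights, capacity):
    """Classical greedy heuristic: take items by value/weight ratio while they fit."""
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    x = [0] * len(values)
    load = 0
    for i in order:
        if load + weights[i] <= capacity:
            x[i] = 1
            load += weights[i]
    return x

def warm_start_angles(x):
    """RY angles preparing the product state |x> from |0...0>, the kind of
    biased initial state a warm-started QAOA would then refine."""
    return [np.pi * xi for xi in x]

values, weights, capacity = [60, 100, 120], [10, 20, 30], 50
x_greedy = greedy_knapsack(values, weights, capacity)  # [1, 1, 0], value 160
# the true optimum is [0, 1, 1] with value 220, which is exactly what the
# quantum exploration around the greedy point is meant to recover
```

The gap between the greedy value (160) and the optimum (220) in this tiny instance shows why the second technique, nudging the search away from the local minimum around the greedy point, matters.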
Benchmarking Machine Learning Algorithms for Adaptive Quantum Phase Estimation with Noisy Intermediate-Scale Quantum Sensors
Quantum phase estimation is a paradigmatic problem in quantum sensing and metrology. Here we show that adaptive methods based on classical machine learning algorithms can be used to enhance the precision of quantum phase estimation when noisy non-entangled qubits are used as sensors. We employ the Differential Evolution (DE) and Particle Swarm Optimization (PSO) algorithms to this task and we identify the optimal feedback policies which minimize the Holevo variance. We benchmark these schemes with respect to scenarios that include Gaussian and Random Telegraph fluctuations as well as reduced Ramsey-fringe visibility due to decoherence. We discuss their robustness against noise in connection with real experimental setups such as Mach-Zehnder interferometry with optical photons and Ramsey interferometry in trapped ions, superconducting qubits and nitrogen-vacancy (NV) centers in diamond.
Machine Learning Based Parameter Estimation of Gaussian Quantum States
We propose a machine learning framework for parameter estimation of single mode Gaussian quantum states. Under a Bayesian framework, our approach estimates parameters of suitable prior distributions from measured data. For phase-space displacement and squeezing parameter estimation, this is achieved by introducing Expectation-Maximization (EM) based algorithms, while for phase parameter estimation an empirical Bayes method is applied. The estimated prior distribution parameters along with the observed data are used for finding the optimal Bayesian estimate of the unknown displacement, squeezing and phase parameters. Our simulation results show that the proposed algorithms have estimation performance that is very close to that of Genie-Aided Bayesian estimators, which assume perfect knowledge of the prior parameters. Our proposed methods can be utilized by experimentalists to find the optimum Bayesian estimate of parameters of Gaussian quantum states by using only the observed measurements without requiring any knowledge about the prior distribution parameters.
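The EM ingredient can be illustrated on a toy conjugate-Gaussian model: each observation is a true parameter plus Gaussian measurement noise of known variance, and EM learns the unknown prior mean and variance from data alone. This is a simplified stand-in for the paper's displacement estimation, with all names ours:

```python
import numpy as np

def em_gaussian_prior(y, sigma2, iters=500):
    """EM for the prior N(mu, tau2) of theta_i under y_i = theta_i + noise,
    with known measurement-noise variance sigma2."""
    mu, tau2 = 0.0, 1.0
    for _ in range(iters):
        # E-step: Gaussian posterior for each theta_i given y_i
        v = tau2 * sigma2 / (tau2 + sigma2)             # posterior variance
        m = (tau2 * y + sigma2 * mu) / (tau2 + sigma2)  # posterior means
        # M-step: maximize the expected complete-data log-likelihood
        mu = m.mean()
        tau2 = np.mean((m - mu) ** 2 + v)
    return mu, tau2

rng = np.random.default_rng(1)
theta = rng.normal(2.0, 1.0, size=5000)      # unknown "displacements"
y = theta + rng.normal(0.0, 0.5, size=5000)  # noisy observations, sigma = 0.5
mu_hat, tau2_hat = em_gaussian_prior(y, 0.25)

# Bayes estimate of each theta_i using the learned prior
theta_hat = (tau2_hat * y + 0.25 * mu_hat) / (tau2_hat + 0.25)
```

Once the prior is learned, the posterior-mean formula in the last line is the same shrinkage estimator a genie-aided Bayesian would use with the true prior, which is why the two perform so similarly in the paper's simulations.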
Determinant-free fermionic wave function using feed-forward neural network
We propose a general framework for finding the ground state of many-body fermionic systems by using feed-forward neural networks. The anticommutation relation for fermions is usually implemented in a variational wave function by the Slater determinant (or Pfaffian), which is a computational bottleneck because of the numerical cost of O(N^3) for N particles. We bypass this bottleneck by explicitly calculating the sign changes associated with particle exchanges in real space and using fully connected neural networks for optimizing the remaining parts of the wave function. This reduces the computational cost to O(N^2) or less. We show that the accuracy of the approximation can be improved by optimizing the "variance" of the energy simultaneously with the energy itself. We also find that a reweighting method in Monte Carlo sampling can stabilize the calculation. These improvements can be applied to other approaches based on variational Monte Carlo methods. Moreover, we show that the accuracy can be further improved by using the symmetry of the system, the representative states, and an additional neural network implementing a generalized Gutzwiller-Jastrow factor. We demonstrate the efficiency of the method by applying it to a two-dimensional Hubbard model.
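The key trick, computing fermionic exchange signs explicitly instead of through a determinant, amounts to counting inversions in the particle ordering; the pairwise count below costs O(N^2) per configuration, consistent with the quoted scaling. A minimal sketch (names are ours):

```python
def fermion_sign(positions):
    """Sign (-1)^(number of inversions) acquired when sorting the particle
    positions into canonical ascending order: the explicit sign bookkeeping
    that replaces the Slater determinant, at O(N^2) cost per configuration."""
    sign = 1
    n = len(positions)
    for i in range(n):
        for j in range(i + 1, n):
            if positions[i] > positions[j]:
                sign = -sign
    return sign
```

In such a scheme the neural network only has to model the sign-free remainder, schematically psi(config) = fermion_sign(config) * nn_symmetric(config), where nn_symmetric is a hypothetical fully connected network as in the abstract.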
High-dimensional encryption in optical fibers using machine learning
Michelle L. J. Lollie, Fatemeh Mostafavi, Narayan Bhusal, Mingyuan Hong, Chenglong You, Roberto de J. León-Montiel, Omar S. Magaña-Loaiza, Mario A. Quiroz-Juárez
Aug 17 2021, quant-ph physics.optics, arXiv:2108.06420v1
The ability to engineer the spatial wavefunction of photons has enabled a variety of quantum protocols for communication, sensing, and information processing. These protocols exploit the high dimensionality of structured light enabling the encoding of multiple bits of information in a single photon, the measurement of small physical parameters, and the achievement of unprecedented levels of security in schemes for cryptography. Unfortunately, the potential of structured light has been restricted to free-space platforms in which the spatial profile of photons is preserved. Here, we take an important step toward using structured light for fiber optical communication. We introduce a smart high-dimensional encryption protocol in which the propagation of spatial modes in multimode fibers is used as a natural mechanism for encryption. This provides a secure communication channel for data transmission. The information encoded in spatial modes is retrieved using artificial neural networks, which are trained from the intensity distributions of experimentally detected spatial modes. Our on-fiber communication platform allows us to use spatial modes of light for high-dimensional bit-by-bit and byte-by-byte encoding. This protocol enables one to recover messages and images with almost perfect accuracy. Our smart protocol for high-dimensional optical encryption in optical fibers has key implications for quantum technologies relying on structured fields of light, particularly those that are challenged by free-space propagation.
Emulating ultrafast dissipative quantum dynamics with deep neural networks
The simulation of driven dissipative quantum dynamics is often prohibitively computationally intensive, especially when it is calculated for various shapes of the driving field. We engineer a new feature space for representing the field and demonstrate that a deep neural network can be trained to emulate these dynamics by mapping this representation directly to the target observables. We demonstrate that with this approach, the system response can be retrieved many orders of magnitude faster. We verify the validity of our approach using the example of a finite transverse-field Ising model irradiated with few-cycle magnetic pulses interacting with a Markovian environment. We show that our approach is sufficiently generalizable and robust to reproduce responses to pulses outside the training set.