We introduce a new approach to quantum linear algebra based on quantum subspace states and present three new quantum machine learning algorithms. The first is a quantum determinant sampling algorithm that samples from the distribution Pr[S] = det(X_S X_S^T) for |S| = d, using O(nd) gates and circuit depth O(d log n); the state-of-the-art classical algorithm for this task requires O(d^3) operations [Dereziński 2019]. The second is a quantum singular value estimation algorithm for compound matrices A^k, for which the speedup is potentially exponential: it decomposes an (n choose k)-dimensional vector of order-k correlations into a linear combination of subspace states corresponding to k-tuples of singular vectors of A. The third algorithm reduces the depth of the circuits used in quantum topological data analysis exponentially, from O(n) to O(log n). Our basic tool is the quantum subspace state, defined as |Col(X)⟩ = ∑_{S⊂[n], |S|=d} det(X_S) |S⟩ for matrices X ∈ R^{n×d} with X^T X = I_d, which encodes the d-dimensional subspace of R^n spanned by the columns of X. We develop two efficient state preparation techniques: the first uses Givens circuits, representing a subspace as a sequence of Givens rotations, while the second uses O(log n)-depth implementations of the unitaries Γ(x) = ∑_i x_i Z^{⊗(i−1)} ⊗ X ⊗ I_{n−i}, which we term Clifford loaders.
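For small n the determinantal distribution above can be checked classically: when X has orthonormal columns, the Cauchy–Binet formula guarantees that det(X_S X_S^T), taken over all size-d row subsets S, sums to 1. A minimal numpy sketch of this sanity check (illustrative only; it brute-forces all subsets and is not the quantum algorithm):

```python
from itertools import combinations
import numpy as np

n, d = 6, 3
rng = np.random.default_rng(0)
# Random n x d matrix with orthonormal columns (X^T X = I_d).
X, _ = np.linalg.qr(rng.standard_normal((n, d)))

# Pr[S] = det(X_S X_S^T) for each size-d row subset S; since X_S is
# square here, this equals det(X_S)^2 and is nonnegative.
probs = {S: np.linalg.det(X[list(S)] @ X[list(S)].T)
         for S in combinations(range(n), d)}

# By Cauchy-Binet the probabilities sum to det(X^T X) = 1.
assert all(p >= -1e-12 for p in probs.values())
assert abs(sum(probs.values()) - 1.0) < 1e-9
```

The brute-force loop touches all (n choose d) subsets, which is exactly the cost the quantum sampler avoids.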
Combinatorial optimization problems on graphs have broad applications in science and engineering. The Quantum Approximate Optimization Algorithm (QAOA) is a method to solve these problems on a quantum computer by applying multiple rounds of variational circuits. However, there exist several challenges limiting the real-world applications of QAOA. In this paper, we demonstrate on a trapped-ion quantum computer that QAOA results improve with the number of rounds for multiple problems on several arbitrary graphs. We also demonstrate an advanced mixing Hamiltonian that allows sampling of all optimal solutions with predetermined weights. Our results are a step towards applying quantum algorithms to real-world problems.
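The QAOA loop described above, alternating a cost-phase layer with a transverse-field mixer, can be sketched in a few lines of statevector simulation. The example below is a toy MaxCut instance on a 4-node ring (not the trapped-ion experiment); it shows that a single optimized round already beats the expected cut value of 2 obtained from the uniform superposition:

```python
from itertools import product
import numpy as np

# MaxCut on a 4-node ring; the brute-force optimum is 4 cut edges.
n = 4
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
cut = np.array([sum(((z >> u) & 1) != ((z >> v) & 1) for u, v in edges)
                for z in range(2**n)], dtype=float)

def mix(psi, beta):
    """Apply the mixer e^{-i beta X} to every qubit of the statevector."""
    c, s = np.cos(beta), -1j * np.sin(beta)
    for q in range(n):
        psi = psi.reshape(2**q, 2, 2**(n - 1 - q))
        psi = np.stack([c * psi[:, 0] + s * psi[:, 1],
                        s * psi[:, 0] + c * psi[:, 1]], axis=1)
    return psi.reshape(-1)

def qaoa_expectation(gammas, betas):
    psi = np.full(2**n, 2.0**(-n / 2), dtype=complex)  # uniform superposition
    for g, b in zip(gammas, betas):
        psi = np.exp(-1j * g * cut) * psi  # cost-phase separator
        psi = mix(psi, b)                  # transverse-field mixer
    return float(np.real(np.vdot(psi, cut * psi)))

# Crude grid search over one round's (gamma, beta).
grid = np.linspace(0, np.pi, 25)
best_p1 = max(qaoa_expectation([g], [b]) for g, b in product(grid, grid))
assert best_p1 > 2.0  # p=1 beats the p=0 expectation of 2
```

The grid search stands in for the classical outer-loop optimizer; deeper circuits (more rounds) enlarge the reachable set of states, which is the improvement with round number that the paper demonstrates on hardware.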
We compare the performance of several variations of the Quantum Alternating Operator Ansatz (QAOA) on constrained optimization problems. Specifically, we study the Clique, Ring, and Grover mixers as well as the traditional objective value and recently introduced threshold-based phase separators. These are studied through numerical simulation on k-Densest Subgraph, Maximum k-Vertex Cover, and Maximum Bisection problems of size up to n=18 on Erdős–Rényi graphs. We show that only one of these QAOA variations, the Clique mixer with objective value phase separator, outperforms Grover-style unstructured search, with a potentially super-polynomial advantage.
Quantum many-body control is a central milestone en route to harnessing quantum technologies. However, the exponential growth of the Hilbert space dimension with the number of qubits makes it challenging to classically simulate quantum many-body systems and, consequently, to devise reliable and robust optimal control protocols. Here, we present a novel framework for efficiently controlling quantum many-body systems based on reinforcement learning (RL). We tackle the quantum control problem by leveraging matrix product states (i) to represent the many-body state and (ii) as part of the trainable machine learning architecture for our RL agent. The framework is applied to prepare ground states of the quantum Ising chain, including critical states. It allows us to control systems far larger than neural-network-only architectures permit, while retaining the advantages of deep learning algorithms, such as generalizability and trainable robustness to noise. In particular, we demonstrate that RL agents are capable of finding universal controls, of learning how to optimally steer previously unseen many-body states, and of adapting control protocols on-the-fly when the quantum dynamics is subject to stochastic perturbations.
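As a minimal illustration of the matrix-product-state ingredient (not the authors' RL framework), an n-qubit statevector can be factored into an MPS by successive SVDs, truncating small singular values; for low-entanglement states such as GHZ the bond dimension stays tiny even as the Hilbert space grows exponentially:

```python
import numpy as np

def to_mps(psi, n, chi_max=8, tol=1e-12):
    """Factor a 2**n statevector into MPS tensors via successive SVDs,
    keeping at most chi_max singular values per bond."""
    tensors, chi = [], 1
    rest = psi.reshape(chi, -1)
    for _ in range(n - 1):
        m = rest.reshape(chi * 2, -1)
        u, s, vh = np.linalg.svd(m, full_matrices=False)
        keep = min(chi_max, int(np.sum(s > tol)))
        tensors.append(u[:, :keep].reshape(chi, 2, keep))
        rest = s[:keep, None] * vh[:keep]
        chi = keep
    tensors.append(rest.reshape(chi, 2, 1))
    return tensors

def from_mps(tensors):
    """Contract the MPS bonds back into a dense statevector."""
    out = tensors[0]
    for t in tensors[1:]:
        out = np.tensordot(out, t, axes=([out.ndim - 1], [0]))
    return out.reshape(-1)

# A GHZ state has bond dimension 2 at every cut, so the MPS is exact.
n = 10
ghz = np.zeros(2**n); ghz[0] = ghz[-1] = 2**-0.5
mps = to_mps(ghz, n)
assert max(t.shape[2] for t in mps) <= 2
assert np.allclose(from_mps(mps), ghz)
```

Storage here is n small tensors instead of 2**n amplitudes, which is what lets MPS-based controllers reach system sizes that dense neural-network representations cannot.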
Machine learning based prediction of the electronic structure of quasi-one-dimensional materials under strain
We present a machine learning based model that can predict the electronic structure of quasi-one-dimensional materials while they are subjected to deformation modes such as torsion and extension/compression. The technique described here applies to important classes of materials such as nanotubes, nanoribbons, nanowires, miscellaneous chiral structures and nano-assemblies, for all of which tuning the interplay of mechanical deformations and electronic fields is an active area of investigation in the literature. Our model incorporates global structural symmetries and atomic relaxation effects, benefits from the use of helical coordinates to specify the electronic fields, and makes use of a specialized data generation process that solves the symmetry-adapted equations of Kohn-Sham Density Functional Theory in these coordinates. Using armchair single wall carbon nanotubes as a prototypical example, we demonstrate the use of the model to predict the fields associated with the ground state electron density and the nuclear pseudocharges, when three parameters – namely, the radius of the nanotube, its axial stretch, and the twist per unit length – are specified as inputs. Other electronic properties of interest, including the ground state electronic free energy, can then be evaluated with low-overhead post-processing, typically to chemical accuracy. We also show how the nuclear coordinates can be reliably determined from the pseudocharge field using a clustering based technique. Remarkably, only about 120 data points are found to be enough to predict the three dimensional electronic fields accurately, which we ascribe to the symmetry in the problem setup, the use of low-discrepancy sequences for sampling, and the presence of intrinsic low-dimensional features in the electronic fields. We comment on the interpretability of our machine learning model and discuss its possible future applications.
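The clustering step mentioned above, recovering nuclear coordinates from a charge-like field, can be illustrated with a toy k-means run on synthetic points; the positions, point counts, and tolerance below are made up for the sketch and are not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical toy setup: sample "pseudocharge" points around three nuclei.
nuclei = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, 1.8]])
pts = np.concatenate([c + 0.1 * rng.standard_normal((200, 2)) for c in nuclei])

def kmeans(pts, k, init_idx, iters=20):
    """Plain k-means; init_idx seeds one center inside each point cloud."""
    centers = pts[init_idx].astype(float)
    for _ in range(iters):
        labels = np.argmin(((pts[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([pts[labels == j].mean(0) for j in range(k)])
    return centers

found = kmeans(pts, 3, [0, 200, 400])
# Each recovered center lies close to one true nuclear position.
err = max(min(np.linalg.norm(f - c) for f in found) for c in nuclei)
assert err < 0.2
```

Because the cluster means average many samples, the recovered positions are far more precise than any single point, which is why a clustering pass over a smooth charge field can pin down nuclei reliably.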
We explore the efficacy of the novel use of parametrised quantum circuits (PQCs) as quantum neural networks (QNNs) for forecasting time series signals with simulated quantum forward propagation. The temporal signals consist of several sinusoidal components (deterministic signal), blended together with trends and additive noise. The performance of the PQCs is compared against that of classical bidirectional long short-term memory (BiLSTM) neural networks. Our results show that for time series signals with small amplitude noise variations (up to 40 per cent of the amplitude of the deterministic signal), PQCs with only a few parameters perform similarly to classical BiLSTM networks with thousands of parameters, and outperform them for signals with higher amplitude noise variations. Thus, QNNs can be used effectively to model time series, with the significant advantage that they can be trained on a quantum computer much faster than a classical machine learning model.
Jonas Schuff, Dominic T. Lennon, Simon Geyer, David L. Craig, Federico Fedele, Florian Vigneau, Leon C. Camenzind, Andreas V. Kuhlmann, G. Andrew D. Briggs, Dominik M. Zumbühl, Dino Sejdinovic, Natalia Ares
Pauli spin blockade (PSB) can be employed as a great resource for spin qubit initialisation and readout even at elevated temperatures but it can be difficult to identify. We present a machine learning algorithm capable of automatically identifying PSB using charge transport measurements. The scarcity of PSB data is circumvented by training the algorithm with simulated data and by using cross-device validation. We demonstrate our approach on a silicon field-effect transistor device and report an accuracy of 96% on different test devices, giving evidence that the approach is robust to device variability. The approach is expected to be employable across all types of quantum dot devices.
Efficient Simulation of Quantum Many-body Thermodynamics by Tailoring Zero-temperature Tensor Network
Numerical annealing and renormalization group methods have produced various successful approaches to studying the thermodynamics of strongly-correlated systems where perturbation or expansion theories fail to work. Because these approaches involve lowering the temperature in one manner or another, they generally become much less efficient or accurate at low temperatures. In this work, we propose to access the finite-temperature properties from the tensor network (TN) representing the zero-temperature partition function. We propose to “scissor” a finite part from such an infinite-size TN, and “stitch” it so that it possesses periodic boundary conditions along the imaginary-time direction. We dub this approach TN tailoring. Exceptional accuracy is achieved with a fine-tuning process, surpassing previous methods including the linearized tensor renormalization group [Phys. Rev. Lett. 106, 127202 (2011)] and the continuous matrix product operator [Phys. Rev. Lett. 125, 170604 (2020)]. High efficiency is demonstrated: the time cost is nearly independent of the target temperature, even at extremely low temperatures. The proposed idea can be extended to higher-dimensional systems of bosons and fermions.
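The “stitching” of periodic boundary conditions along the imaginary-time direction has a simple classical analogue: for a periodic 1D Ising chain, the partition function is the trace of a power of the transfer matrix. A toy numpy sketch of that analogue (not the authors' TN tailoring, which operates on the zero-temperature TN):

```python
import numpy as np

def ising_logZ_pbc(n, beta, J=1.0):
    """log partition function of a periodic 1D Ising chain of n spins;
    the trace over the transfer-matrix power implements the periodic
    boundary along the chain."""
    T = np.array([[np.exp(beta * J), np.exp(-beta * J)],
                  [np.exp(-beta * J), np.exp(beta * J)]])
    return np.log(np.trace(np.linalg.matrix_power(T, n)))

# The transfer matrix has exact eigenvalues 2cosh(beta J) and
# 2sinh(beta J), giving a closed form to check against.
n, beta = 8, 0.7
lam = np.array([2 * np.cosh(beta), 2 * np.sinh(beta)])
assert np.isclose(ising_logZ_pbc(n, beta), np.log(np.sum(lam**n)))
```

The trace is what closes the network into a ring; in the TN setting the analogous contraction closes the finite imaginary-time strip cut from the infinite network.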
Recently, quantum-state representation using artificial neural networks has started to be recognized as a powerful tool. However, due to the black-box nature of machine learning, it is difficult to analyze what the machine learns or why it is powerful. Here, by applying one of the simplest neural networks, the restricted Boltzmann machine (RBM), to the ground-state representation of the one-dimensional (1D) transverse-field Ising (TFI) model, we make an attempt to directly analyze the optimized network parameters. In the RBM optimization, a zero-temperature quantum state is mapped onto a finite-temperature classical state of the extended Ising spins that constitute the RBM. We find that the quantum phase transition from the ordered phase to the disordered phase in the 1D TFI model with increasing transverse field is clearly reflected in the behaviors of the optimized RBM parameters, and hence in the finite-temperature phase diagram of the classical RBM Ising system. The present finding of a correspondence between the neural-network parameters and quantum phases suggests that a careful investigation of the neural-network parameters may provide a new route to extracting nontrivial physical insights from the neural-network wave functions.
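The RBM wave function analyzed above assigns each spin configuration an amplitude in which the hidden units are traced out analytically; a minimal numpy sketch with random (unoptimized) parameters:

```python
from itertools import product
import numpy as np

rng = np.random.default_rng(1)
n_vis, n_hid = 6, 4
a = 0.1 * rng.standard_normal(n_vis)          # visible biases
b = 0.1 * rng.standard_normal(n_hid)          # hidden biases
W = 0.1 * rng.standard_normal((n_vis, n_hid))  # couplings

def amplitude(s):
    """Unnormalized RBM amplitude psi(s) = e^{a.s} prod_j 2cosh(b_j + s.W_j);
    the product term is the analytic trace over hidden units."""
    s = np.asarray(s, dtype=float)
    return np.exp(a @ s) * np.prod(2 * np.cosh(b + s @ W))

# Normalize over all 2**n_vis spin configurations (s_i = +-1).
amps = np.array([amplitude(s) for s in product([-1, 1], repeat=n_vis)])
psi = amps / np.linalg.norm(amps)
assert np.isclose(psi @ psi, 1.0)
```

The parameters (a, b, W) are exactly the quantities whose behavior across the transverse-field sweep encodes the phase transition in the paper's analysis; here they are random placeholders.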
Development of a network for remote entanglement of quantum processors is an outstanding challenge in quantum information science. We propose and analyze a two-species architecture for remote entanglement of neutral atom quantum computers based on integration of optically trapped atomic qubit arrays with fast optics for photon collection. One of the atomic species is used for atom-photon entanglement, and the other species provides local processing. We compare the achievable rates of remote entanglement generation for two optical approaches: free space photon collection with a lens and a near-concentric, long working distance resonant cavity. Laser cooling and trapping within the cavity removes the need for mechanical transport of atoms from a source region, which allows for a fast repetition rate. Using optimized values of the cavity finesse, remote entanglement generation rates >10³ s⁻¹ are predicted for experimentally feasible parameters.
Variational quantum algorithms, which have risen to prominence in the noisy intermediate-scale quantum setting, require the implementation of a stochastic optimizer on classical hardware. To date, most research has employed algorithms based on the stochastic gradient iteration as the stochastic classical optimizer. In this work we propose instead using stochastic optimization algorithms that yield stochastic processes emulating the dynamics of classical deterministic algorithms. This approach results in methods with theoretically superior worst-case iteration complexities, at the expense of greater per-iteration sample (shot) complexities. We investigate this trade-off both theoretically and empirically and conclude that preferences for a choice of stochastic optimizer should explicitly depend on a function of both latency and shot execution times.
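The latency-versus-shots trade-off weighed above can be caricatured on a quadratic toy problem in which the gradient-oracle noise shrinks as 1/sqrt(shots); the budget split below is hypothetical and only illustrates the mechanics, not the authors' optimizers:

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_grad(x, shots):
    """Gradient of f(x) = x^2/2 estimated from `shots` samples;
    shot noise on the estimate scales as 1/sqrt(shots)."""
    return x + rng.standard_normal(shots).mean()

def run(iters, shots, lr=0.3):
    """Plain stochastic gradient descent from x0 = 5."""
    x = 5.0
    for _ in range(iters):
        x -= lr * noisy_grad(x, shots)
    return abs(x)

# Same total sample budget (1000 shots), split two ways:
few_shot = run(iters=200, shots=5)    # many cheap, noisy iterations
many_shot = run(iters=10, shots=100)  # few expensive, accurate iterations
assert few_shot < 1.0 and many_shot < 1.0
```

Which split wins depends on how iteration overhead (latency) compares with per-shot execution time, which is precisely the function of both quantities that the abstract argues should drive the choice of optimizer.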