- Keyon Christ Launches First-Ever Experimental Music NFT With Quantum Machine Learning
- Modeling quantum spin liquids using machine learning
- The First Quantum Toolkit And Library For Natural Language Processing
- CHINA’S EXASCALE QUANTUM SIMULATION NOT ALL IT APPEARS
- Quantum Machine Learning for Complex Systems
- Sofiene Jerbi: Quantum machine learning beyond kernel methods
- How will our cryptographic toolkit be impacted by quantum computers and Machine Learning?
- Modeling a Theory of Gravity Using Machine Learning
- Theory of overparametrization in quantum neural networks
Can we use a quantum computer to speed up classical machine learning in solving problems of practical significance? Here, we study this open question focusing on the quantum phase learning problem, an important task in many-body quantum physics. We prove that, under widely believed complexity-theoretic assumptions, the quantum phase learning problem cannot be efficiently solved by machine learning algorithms using classical resources and classical data. In contrast, using quantum data, we theoretically prove the universality of the quantum kernel Alphatron in efficiently predicting quantum phases, indicating a quantum advantage in this learning problem. We numerically benchmark the algorithm for a variety of problems, including recognizing symmetry-protected topological phases and symmetry-broken phases. Our results highlight the capability of quantum machine learning in the efficient prediction of quantum phases.
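The classical Alphatron at the heart of this result can be sketched with an ordinary kernel standing in for the quantum kernel. A minimal illustration, assuming an RBF kernel and a toy labeling task in place of real phase data (not the paper's quantum kernel or benchmarks):

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Classical RBF kernel standing in for a quantum fidelity kernel.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def alphatron(X, y, T=200, lam=1.0, gamma=1.0):
    """Alphatron-style kernel learner (Goel & Klivans, 2017).

    Hypothesis h(x) = sigma(sum_i alpha_i K(x_i, x)); each update nudges
    alpha_i by the residual on its own training point.
    """
    m = len(y)
    K = rbf_kernel(X, X, gamma)
    alpha = np.zeros(m)
    for _ in range(T):
        h = sigmoid(K @ alpha)
        alpha += (lam / m) * (y - h)
    return alpha, K

# Toy stand-in for phase classification: label by the sign of one feature.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 2))
y = (X[:, 0] > 0).astype(float)
alpha, K = alphatron(X, y)
preds = (sigmoid(K @ alpha) > 0.5).astype(float)
acc = (preds == y).mean()
print(f"training accuracy: {acc:.2f}")
```

Swapping `rbf_kernel` for a kernel estimated from quantum data is the step that carries the claimed quantum advantage.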
Recently, several approaches to solving linear systems on a quantum computer have been formulated in terms of the quantum adiabatic theorem for a continuously varying Hamiltonian. Such approaches enabled near-linear scaling in the condition number κ of the linear system, without requiring a complicated variable-time amplitude amplification procedure. However, the most efficient of those procedures is still asymptotically sub-optimal by a factor of log(κ). Here, we prove a rigorous form of the adiabatic theorem that bounds the error in terms of the spectral gap for intrinsically discrete time evolutions. We use this discrete adiabatic theorem to develop a quantum algorithm for solving linear systems that is asymptotically optimal, in the sense that the complexity is strictly linear in κ, matching a known lower bound on the complexity. Our O(κ log(1/ϵ)) complexity is also optimal in terms of the combined scaling in κ and the precision ϵ. Compared to existing suboptimal methods, our algorithm is simpler and easier to implement. Moreover, we determine the constant factors in the algorithm, which would be suitable for determining the complexity in terms of gate counts for specific applications.
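The discrete adiabatic mechanism the proof builds on can be illustrated numerically on a single qubit. This toy sketch (not the paper's optimal linear-systems algorithm) tracks the ground state of a gapped interpolation through many small unitary steps:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def H(s):
    # Gapped interpolation: the gap 2*sqrt((1-s)^2 + s^2) never closes
    # (minimum sqrt(2) at s = 1/2).
    return (1 - s) * X + s * Z

def step(Hm, dt):
    # exp(-i H dt) for a traceless 2x2 Hermitian H, where H^2 = norm^2 * I.
    norm = np.sqrt((Hm @ Hm)[0, 0].real)
    theta = norm * dt
    return np.cos(theta) * np.eye(2) - 1j * np.sin(theta) * (Hm / norm)

def ground_state(Hm):
    w, v = np.linalg.eigh(Hm)   # eigenvalues ascending
    return v[:, 0]

# Discrete-time adiabatic evolution: many small steps exp(-i H(s_k) dt).
T, steps = 200.0, 4000
dt = T / steps
psi = ground_state(H(0.0))
for k in range(steps):
    psi = step(H(k / steps), dt) @ psi
overlap = abs(np.vdot(ground_state(H(1.0)), psi)) ** 2
print(f"final ground-state overlap: {overlap:.4f}")
```

Because the gap stays open, the state follows the instantaneous ground state and the final overlap approaches one; shrinking T degrades it, which is the error–gap trade-off the discrete adiabatic theorem quantifies.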
Quantum machine learning has emerged as a promising method to improve near-term quantum computation devices. However, algorithmic classes such as variational quantum algorithms have been shown to suffer from barren plateaus due to vanishing gradients in their parameter spaces. We present an approach to quantum algorithm optimization that is based on trainable Fourier coefficients of Hamiltonian system parameters. Our ansatz applies to the extension of discrete quantum variational algorithms to analogue quantum optimal control schemes and is non-local in time. We demonstrate the viability of our ansatz on several objective functions using quantum natural gradient descent. In comparison to the temporally local discretization ansätze in quantum optimal control and parametrized circuits, our ansatz exhibits faster and more consistent convergence with a distinct lack of barren plateaus. We propose our ansatz as a viable parametrization candidate for near-term quantum machine learning.
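The idea of trading temporally local pulse parameters for global Fourier coefficients can be sketched on a single-qubit state-transfer task. A minimal illustration with an assumed sine basis and plain finite-difference gradient ascent (the paper uses quantum natural gradient descent on richer objectives):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)

def expm_x(theta):
    # exp(-i theta X), computed analytically for a single qubit.
    return np.cos(theta) * np.eye(2) - 1j * np.sin(theta) * X

T, steps = 1.0, 200
ts = (np.arange(steps) + 0.5) * (T / steps)

def pulse(coeffs, t):
    # Control amplitude built from a truncated Fourier (sine) series:
    # each coefficient shapes the whole pulse, i.e. is non-local in time.
    n = np.arange(1, len(coeffs) + 1)
    return np.sum(coeffs[:, None] * np.sin(np.pi * np.outer(n, t) / T), axis=0)

def fidelity(coeffs):
    # Fidelity of driving |0> -> |1> under H(t) = c(t) X.
    c = pulse(coeffs, ts)
    psi = np.array([1, 0], dtype=complex)
    dt = T / steps
    for ck in c:
        psi = expm_x(ck * dt) @ psi
    return abs(psi[1]) ** 2

# Finite-difference gradient ascent on the Fourier coefficients.
coeffs = np.full(4, 0.5)
f0 = fidelity(coeffs)
eps, lr = 1e-4, 0.5
for _ in range(100):
    grad = np.zeros_like(coeffs)
    for i in range(coeffs.size):
        d = np.zeros_like(coeffs)
        d[i] = eps
        grad[i] = (fidelity(coeffs + d) - fidelity(coeffs - d)) / (2 * eps)
    coeffs += lr * grad
f1 = fidelity(coeffs)
print(f"fidelity: {f0:.3f} -> {f1:.3f}")
```

In a discretization ansatz, each of the 200 time slices would carry its own parameter; here four global coefficients suffice, which is the parameter-count and trainability advantage the abstract describes.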
Quantum-classical hybrid schemes based on variational quantum eigensolvers (VQEs) may transform our ability to simulate materials and molecules already within the next few years. However, one of the main obstacles to overcome in order to achieve practical near-term quantum advantage is to improve our ability to mitigate the noise effects characteristic of the current generation of quantum processing units (QPUs). To this end, here we design a method based on probabilistic machine learning, which allows us to mitigate the noise by imbuing the computation with prior (data-independent) information about the variational landscape. We perform benchmark calculations of a 4-qubit impurity model using Qiskit, IBM's open-source framework for quantum computing, showing that our method dramatically improves the accuracy of the VQE outputs. Finally, we show that applying our method makes quantum-embedding simulations of the Hubbard model with a VQE impurity solver considerably more reliable.
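The role of a data-independent smoothness prior can be illustrated with ordinary Gaussian-process regression on a synthetic one-dimensional landscape. The cosine "energy", kernel, and noise level below are all stand-ins, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Noisy estimates of a 1D variational landscape (cosine as a stand-in
# for the true VQE energy as a function of one circuit parameter).
thetas = np.linspace(0, 2 * np.pi, 40)
E_true = np.cos(thetas)
sigma = 0.3                                   # assumed noise level
E_noisy = E_true + sigma * rng.normal(size=thetas.size)

def rbf(a, b, ell=1.0):
    # Smoothness prior: nearby parameter values give similar energies.
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

# GP posterior mean K (K + sigma^2 I)^{-1} y: the prior supplies
# data-independent information that filters the noise.
K = rbf(thetas, thetas)
E_mitigated = K @ np.linalg.solve(K + sigma**2 * np.eye(thetas.size), E_noisy)

err_raw = np.mean(np.abs(E_noisy - E_true))
err_gp = np.mean(np.abs(E_mitigated - E_true))
print(f"mean error: raw {err_raw:.3f} vs mitigated {err_gp:.3f}")
```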
There has been tremendous progress in Artificial Intelligence (AI) for music, in particular for musical composition and access to large databases for commercialisation through the Internet. We are interested in further advancing this field, focusing on composition. In contrast to current black-box AI methods, we are championing an interpretable compositional outlook on generative music systems. In particular, we are importing methods from the Distributional Compositional Categorical (DisCoCat) modelling framework for Natural Language Processing (NLP), motivated by musical grammars. Quantum computing is a nascent technology, which is very likely to impact the music industry in time to come. Thus, we are pioneering a Quantum Natural Language Processing (QNLP) approach to develop a new generation of intelligent musical systems. This work follows from previous experimental implementations of DisCoCat linguistic models on quantum hardware. In this chapter, we present Quanthoven, the first proof-of-concept ever built, which (a) demonstrates that it is possible to program a quantum computer to learn to classify music that conveys different meanings and (b) illustrates how such a capability might be leveraged to develop a system to compose meaningful pieces of music. After a discussion about our current understanding of music as a communication medium and its relationship to natural language, the chapter focuses on the techniques developed to (a) encode musical compositions as quantum circuits, and (b) design a quantum classifier. The chapter ends with demonstrations of compositions created with the system.
We develop a general framework for entanglement classification in discrete systems, lattice gauge field theories, and continuous systems. Given a quantum state, we define the dimension spectrum as the dimensions of the subspaces generated by k-local operators acting on the state; it characterizes the entanglement resource of the state. With the spectrum as coefficients, we define entanglement polynomials, which induce a homomorphism from states to polynomials. By taking the quotient over the kernel of the homomorphism, we obtain an isomorphism from entanglement classes to polynomials, which classifies entanglement effectively. It implies that we can characterize and find the building blocks of entanglement by factorizing entanglement polynomials. It is also proven that an operator inducing automorphisms on all of the subspaces of k-local operators keeps the entanglement polynomials invariant; SLOCC and permutation operations are examples of such operators. We also construct a series of states, called stochastic renormalized states, to compute entanglement polynomials effectively by computing their ranks.
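The dimension spectrum is concrete to compute for small systems. A sketch, assuming the k = 1 case with Pauli operators as the local basis, comparing a product state with a GHZ state:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def one_local_dimension(psi, n):
    """Dimension of span{ O|psi> : O identity or a single-site Pauli },
    i.e. the k = 1 entry of the dimension spectrum."""
    vecs = [psi]                    # identity contribution
    for site in range(n):
        for P in (X, Y, Z):
            O = np.array([[1]], dtype=complex)
            for j in range(n):
                O = np.kron(O, P if j == site else I2)
            vecs.append(O @ psi)
    return np.linalg.matrix_rank(np.array(vecs))

prod = np.zeros(8, dtype=complex); prod[0] = 1.0             # |000>
ghz = np.zeros(8, dtype=complex); ghz[0] = ghz[7] = 2**-0.5  # GHZ state
print(one_local_dimension(prod, 3), one_local_dimension(ghz, 3))  # → 4 8
```

The product state spans only four directions under 1-local operators, while the GHZ state spans all eight, reflecting its larger entanglement resource.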
We report the implementation of a perceptron quantum gate in an ion-trap quantum computer. In this scheme, a perceptron's target qubit changes its state depending on interactions with several input qubits. The target qubit displays a tunable sigmoid switching behaviour, becoming a universal approximator when nested with other perceptrons. The procedure consists of the adiabatic ramp-down of a dressing field applied to the target qubit. We also use two successive perceptron quantum gates to implement an XNOR gate, where the perceptron qubit changes its state only when the parity of the two input qubits is even. The scheme can be generalized to higher-dimensional gates as well as to the reconstruction of arbitrary bounded continuous functions of the perceptron observables.
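The switching mechanism can be mimicked in a classical simulation of a single target qubit: ramp down a transverse dressing field while a longitudinal field, standing in for the summed input, stays on. A toy sketch with assumed ramp parameters (not the trap's actual pulse sequence):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def step(Hm, dt):
    # exp(-i H dt) for a traceless 2x2 Hermitian matrix.
    norm = np.sqrt((Hm @ Hm)[0, 0].real)
    if norm < 1e-12:
        return np.eye(2, dtype=complex)
    theta = norm * dt
    return np.cos(theta) * np.eye(2) - 1j * np.sin(theta) * (Hm / norm)

def excitation_probability(h, omega0=5.0, T=60.0, steps=1200):
    """Ramp the dressing field omega(t) X down to zero while the summed
    input field h Z stays on; return the target's final P(|1>)."""
    w, v = np.linalg.eigh(omega0 * X + h * Z)
    psi = v[:, 0]                      # start in the dressed ground state
    dt = T / steps
    for k in range(steps):
        omega = omega0 * (1 - k / steps)
        psi = step(omega * X + h * Z, dt) @ psi
    return abs(psi[1]) ** 2

# Sigmoid switching: a strongly negative field leaves the target in |0>,
# a strongly positive field flips it to |1>, and zero field gives 50/50.
probs = {h: excitation_probability(h) for h in (-2.0, 0.0, 2.0)}
print(probs)
```

Sweeping `h` finely traces out the sigmoid, and the ramp speed tunes its steepness, which is the behaviour the abstract describes.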
Symmetry-protected topological (SPT) phases are short-range entangled phases of matter with a non-local order parameter, protected by a symmetry group. Here, using an unsupervised learning algorithm, namely diffusion maps, we demonstrate that this method can differentiate between symmetry-broken phases and topologically ordered phases, and between non-trivial topological phases in different classes. In particular, we show that the phase transitions associated with these phases can be detected in different bosonic and fermionic models in one dimension, including the interacting SSH model, the AKLT model and its variants, and weakly interacting fermionic models. Our approach serves as an inexpensive computational method for detecting the topological phase transitions associated with SPT systems, which can also be applied to experimental data obtained from quantum simulators.
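A diffusion map itself is only a few lines of linear algebra. A sketch on synthetic two-cluster data, a stand-in for samples drawn from two different phases rather than the models studied here:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic samples from two "phases": clusters around different means.
phase_a = rng.normal(0.0, 0.3, size=(30, 5))
phase_b = rng.normal(2.0, 0.3, size=(30, 5))
data = np.vstack([phase_a, phase_b])

# Diffusion map: Gaussian affinities, row-normalised to a Markov matrix.
d2 = ((data[:, None, :] - data[None, :, :]) ** 2).sum(-1)
eps = np.median(d2)                    # common bandwidth heuristic
P = np.exp(-d2 / eps)
P /= P.sum(axis=1, keepdims=True)

# Eigenvalue 1 carries the trivial constant mode; the next eigenvector
# is the first diffusion coordinate and separates the two clusters.
w, v = np.linalg.eig(P)
order = np.argsort(-w.real)
coord = v[:, order[1]].real
labels = coord > np.median(coord)
print(labels[:30].sum(), labels[30:].sum())
```

No labels were used: the split emerges from the spectrum alone, which is what makes the approach attractive for unlabeled simulator or experimental data.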
Driven by growing computational power and algorithmic developments, machine learning methods have become valuable tools for analyzing vast amounts of data. Simultaneously, the fast technological progress of quantum information processing suggests employing quantum hardware for machine learning purposes. Recent works discuss different architectures of quantum perceptrons, but the abilities of such quantum devices remain debated. Here, we investigate the storage capacity of a particular quantum perceptron architecture by using statistical mechanics techniques and connect our analysis to the theory of classical spin glasses. We focus on a specific quantum perceptron model and explore its storage properties in the limit of a large number of inputs. Finally, we comment on using statistical physics techniques for further studies of neural networks.
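Storage capacity has a simple numerical counterpart: ask how often random input–label assignments can be stored. A sketch using a classical perceptron, with pattern counts chosen well below and well above the classical capacity of two patterns per input (the quantum architecture in the abstract is not modeled here):

```python
import numpy as np

rng = np.random.default_rng(3)

def separable(X, y, max_epochs=500):
    """Run perceptron learning; True if every pattern is stored
    (a full epoch with zero mistakes) within the budget."""
    w = np.zeros(X.shape[1])
    for _ in range(max_epochs):
        mistakes = 0
        for xi, yi in zip(X, y):
            if yi * (w @ xi) <= 0:
                w += yi * xi
                mistakes += 1
        if mistakes == 0:
            return True
    return False

N = 20  # number of inputs

def random_task(P):
    # P random binary patterns with random target labels.
    X = rng.choice([-1.0, 1.0], size=(P, N))
    y = rng.choice([-1.0, 1.0], size=P)
    return X, y

low = separable(*random_task(10))    # alpha = P/N = 0.5, below capacity
high = separable(*random_task(80))   # alpha = 4, far above capacity
print(low, high)
```

Repeating this over many random tasks and pattern counts traces the sharp storage transition at alpha = 2 that the statistical-mechanics analysis predicts for the classical case.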
How many different ways are there to handwrite the digit 3? To quantify this question, imagine extending MNIST, a dataset of handwritten digits, by sampling additional images until they start repeating. We call the collection of all resulting images of the digit 3 the “full set.” To study the properties of the full set, we introduce a tensor network architecture which simultaneously accomplishes both classification (discrimination) and sampling tasks. Qualitatively, our trained network represents the indicator function of the full set and can therefore be used to characterize the data itself. We illustrate this by studying the full sets associated with the digits of MNIST. Using a quantum mechanical interpretation of our network, we characterize the full set by calculating its entanglement entropy. We also study its geometric properties such as mean Hamming distance, effective dimension, and size. The latter answers the question above: the total number of black-and-white threes written MNIST-style is 2^72.
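Of the geometric characteristics mentioned, the mean Hamming distance is the simplest to make concrete. A sketch on a tiny hand-made set of binary patterns (not MNIST data):

```python
import numpy as np

def mean_hamming(patterns):
    """Mean pairwise Hamming distance of a set of binary patterns,
    one geometric characteristic of a 'full set'."""
    P = np.asarray(patterns)
    # All pairwise distances via broadcasting, then keep each
    # unordered pair exactly once (upper triangle).
    d = (P[:, None, :] != P[None, :, :]).sum(-1)
    iu = np.triu_indices(len(P), k=1)
    return d[iu].mean()

imgs = [[0, 0], [1, 1], [0, 1]]
print(mean_hamming(imgs))   # distances 2, 1, 1 over 3 pairs → 4/3
```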