An overview of the challenges and opportunities
Today, we're diving into the intersection of quantum computing and machine learning: quantum machine learning (QML). Our main goal is to compare the performance of a quantum neural network for stock price time series forecasting with that of a simple single-layer MLP.
To facilitate this project, we'll be using the Historical Data API endpoint offered by Financial Modeling Prep (FMP), since reliable and accurate data is critical for any forecasting task. With that said, let's dive into the article.
Let's start by importing the necessary libraries for our analysis. These libraries will provide the basic tools required to explore and implement our project.
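A minimal sketch of that import cell might look like the following; the exact set of libraries beyond Qiskit and the data-handling stack is my assumption, not taken from the original article:

```python
# Core libraries for this project (the precise list is an assumption;
# Qiskit plus a standard data stack are the essentials).
import numpy as np                 # numerical arrays
import pandas as pd                # tabular data handling
import requests                    # HTTP calls to the FMP API
import matplotlib.pyplot as plt    # plotting results
from qiskit import QuantumCircuit  # building quantum circuits
```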
We've set up our environment by installing the Qiskit library for working with quantum circuits, along with other essential libraries. To extract the data, we'll use the historical data API endpoint provided by Financial Modeling Prep.
FMP's historical data API offers a conveniently accessible endpoint, providing a diverse and extensive collection of historical stock data that proves useful at every step of our project. Its user-friendly interface and comprehensive dataset contribute significantly to the depth and accuracy of our analysis.
Now we are going to extract historical data as follows:
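Below is a minimal sketch of that request; the AAPL ticker is an illustrative assumption, and the URL follows FMP's documented historical-price-full route:

```python
import requests

api_key = 'YOUR API KEY'  # your secret FMP API key
ticker = 'AAPL'           # illustrative choice; any supported symbol works

# FMP's historical daily price endpoint
url = f'https://financialmodelingprep.com/api/v3/historical-price-full/{ticker}?apikey={api_key}'
response = requests.get(url).json()
print(response)
```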
Replace YOUR API KEY with your secret API key, which you can obtain by creating an FMP account. The output is a JSON response which looks as follows:
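Roughly, the response has the shape sketched below; the field names follow FMP's historical endpoint, and the values shown here are placeholders, not real quotes:

```json
{
  "symbol": "AAPL",
  "historical": [
    {
      "date": "2024-01-05",
      "open": 0.0,
      "high": 0.0,
      "low": 0.0,
      "close": 0.0,
      "volume": 0
    }
  ]
}
```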
In regular computers, we have tiny switches called digital gates. These switches control how information moves around. They work with basic units of data called bits, which can be either 0 or 1. The gates help computers do calculations and process stuff. Now, in quantum computers, we use something called qubits instead of bits. Qubits are special because they can be both 0 and 1 at the same time. It's like having a coin that's spinning and showing both heads and tails until you catch it, and then it picks one side.
When we say the wave function collapses, it's just a fancy way of saying the qubit decides to be either 0 or 1 when we check it. We make these qubits using different things like light particles (photons), tiny particles that make up stuff (atoms), or even small electrical circuits (Josephson junctions). These are like the building blocks for our special qubits.
These quantum systems (particles or circuits) do some interesting things. They can be in different states at the same time (superposition), connect in a special way (entanglement), and even pass through barriers they shouldn't be able to (tunneling).
What's cool is that quantum computers, with their qubits and special behaviors, use certain algorithms to solve some problems faster than regular computers. It's like having a new tool that might help us solve tough puzzles more efficiently in the future.
In traditional computing, we perform operations using basic logic gates like AND, NOT, and OR. These gates work with 0s and 1s, and their rules are based on a simple mathematical system, arithmetic over ℤ₂, which essentially deals with counting modulo 2.
Now, imagine a quantum computer: it also has gates, but these are like supercharged versions. Instead of dealing with simple bits, quantum gates work with quantum bits, or qubits. The math behind these quantum gates involves complex numbers and matrix operations.
Let's take the quantum NOT gate, known as the Pauli-X gate, as an example. Apply it to a qubit initially in the state |0⟩, and the operator flips it to |1⟩; apply it again, and it goes back to |0⟩. It's a bit like flipping a coin.
There's also the Hadamard gate (H) that does something really cool. Applying it to a qubit initially in the state |0⟩ puts it in a special mix of the |0⟩ and |1⟩ states at the same time. Mathematically, H operates on |0⟩ and converts it into the standard superposition of the basis states: H|0⟩ = (|0⟩ + |1⟩)/√2.
It's like having a coin spinning in the air, showing both heads and tails until it lands.
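A quick sketch in Qiskit illustrates both gates; this assumes a recent Qiskit version with the quantum_info module available:

```python
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

# Pauli-X: flips |0> to |1>
not_circuit = QuantumCircuit(1)
not_circuit.x(0)
print(Statevector.from_instruction(not_circuit))  # all amplitude on |1>

# Hadamard: puts |0> into an equal superposition of |0> and |1>
h_circuit = QuantumCircuit(1)
h_circuit.h(0)
print(Statevector.from_instruction(h_circuit))  # amplitude 1/sqrt(2) on each
```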
Now, let's talk about the Controlled-NOT (CNOT) gate. This one works on two qubits. If the first qubit is 1, it flips the second qubit from 0 to 1 or vice versa. It's like a quantum switch that depends on the state of the first qubit.
In the quantum world, things get more interesting. If you have two qubits in a special state, the CNOT gate uniquely rearranges their combinations, creating what we call entanglement. This entanglement is like a special connection between the qubits, making them behave in a coordinated manner.
So, in a nutshell, while regular computers use basic rules with 0s and 1s, quantum computers have these fascinating gates that play with probabilities, mix states, and create connections between qubits, opening up a world of possibilities for solving complex problems more efficiently.
In our project, we place special emphasis on a category of gates known as parameterized gates. These gates exhibit behavior that is contingent on specific input parameters, denoted by the symbol θ. Notably, we focus on the rotation gates R_x(θ), R_y(θ), and R_z(θ), each characterized by a unitary matrix as described in the figure below.
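For reference, these are the standard unitary matrices for the three rotation gates (presumably what the figure depicts):

```latex
R_x(\theta) = \begin{pmatrix} \cos\frac{\theta}{2} & -i\sin\frac{\theta}{2} \\ -i\sin\frac{\theta}{2} & \cos\frac{\theta}{2} \end{pmatrix},\quad
R_y(\theta) = \begin{pmatrix} \cos\frac{\theta}{2} & -\sin\frac{\theta}{2} \\ \sin\frac{\theta}{2} & \cos\frac{\theta}{2} \end{pmatrix},\quad
R_z(\theta) = \begin{pmatrix} e^{-i\theta/2} & 0 \\ 0 & e^{i\theta/2} \end{pmatrix}
```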
Let's delve a bit deeper into these rotation gates. Consider R_x(θ): envision it as a quantum gate resembling a rotating door that rotates a qubit around the x-axis by a specific angle θ. The R_y(θ) and R_z(θ) gates function similarly, introducing rotations around the y- and z-axes.
The significance of these gates lies in their parameterized nature. By adjusting the input parameter θ, we essentially introduce a customizable element into our quantum algorithms. These gates serve as the foundational components for constructing the quantum neural network integral to our project.
In essence, θ acts as a tuning parameter, akin to a knob, enabling us to finely adjust and tailor the behavior of our quantum algorithms within the framework of the quantum neural network. This flexibility becomes pivotal in optimizing and customizing the performance of our quantum algorithms for specific tasks.
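In Qiskit, this knob is literally a Parameter object; a minimal sketch:

```python
import numpy as np
from qiskit import QuantumCircuit
from qiskit.circuit import Parameter

theta = Parameter('θ')   # the tunable knob
qc = QuantumCircuit(1)
qc.rx(theta, 0)          # parameterized X-rotation

# Bind a concrete value only when we actually run the circuit
bound = qc.assign_parameters({theta: np.pi / 2})
```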
Quantum algorithms can be thought of as a series of operations performed on a quantum state, represented by expressions like CNOT (H ⊗ I) |00⟩ = (|00⟩ + |11⟩)/√2.
These algorithms are translated into quantum circuits, as illustrated in the figure below. In this depiction, the algorithm starts from the initial state |q₀q₁⟩ = |00⟩ and concludes with a measurement resulting in either |00⟩ or |11⟩ with an equal probability of 0.5, recorded into classical bits (line c).
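That circuit can be reproduced in a few lines of Qiskit; here, as a sketch, the measurement statistics are sampled from the exact statevector rather than real hardware:

```python
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

# Bell-state circuit: H on qubit 0, then CNOT with qubit 0 as control
qc = QuantumCircuit(2)
qc.h(0)
qc.cx(0, 1)

state = Statevector.from_instruction(qc)
print(state.probabilities_dict())        # {'00': 0.5, '11': 0.5}
print(state.sample_counts(shots=1000))   # simulated measurement into classical bits
```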
In a quantum circuit, each horizontal line corresponds to a single qubit, and gates are applied sequentially until measurement. It's important to note that loops are not allowed in a quantum program. A specific type of quantum circuit is the Variational Quantum Circuit (VQC). Notably, a VQC incorporates parameterized gates like the aforementioned R_x(θ), R_y(θ), and R_z(θ).
In simpler terms, quantum algorithms are like step-by-step instructions for a quantum computer, and quantum circuits visually represent these steps. The Variational Quantum Circuit introduces a special kind of flexibility with parameterized gates, allowing for customization based on specific values, denoted by θ.
The primary objective of QML is to devise and deploy methods capable of running on quantum computers to address conventional supervised, unsupervised, and reinforcement learning tasks encountered in classical Machine Learning.
What makes QML distinct is its utilization of quantum operations, leveraging unique features like superposition, tunneling, entanglement, and quantum parallelism inherent to Quantum Computing (QC). In our study, we specifically concentrate on Quantum Neural Network (QNN) design. A QNN serves as the quantum counterpart of a classical neural network.
Breaking it down, each layer in a QNN is a Variational Quantum Circuit (VQC) comprising parameterized gates. These parameters act as the quantum equivalents of the weights in a classical neural network. Additionally, the QNN incorporates a mechanism to exchange information among qubits, resembling the connections between neurons in different layers of a classical network. Typically, this information exchange is achieved through entanglement, employing operators such as the CNOT gate.
Creating a Quantum Machine Learning (QML) model typically involves several steps, as illustrated in the figure above. First, we load and preprocess the dataset on a classical CPU. Next, we use a quantum embedding technique to encode this classical data into quantum states on a Quantum Processing Unit (QPU), i.e. quantum hardware. Once the classical data is represented as quantum states, the core model, implemented in the ansatz, is executed, and the results are measured into classical bits. Finally, if needed, we post-process these results on the CPU to obtain the expected model output. In our study, we follow this overall process to investigate the application of a Quantum Neural Network to time series forecasting.
A Quantum Neural Network (QNN) typically consists of three main layers:
1. Input Layer: This layer transforms classical input data into a quantum state. It uses a parameterized variational circuit with rotation and controlled-rotation gates to prepare the desired quantum state for a given input. This step, known as quantum embedding, employs techniques like basis encoding, amplitude encoding, Hamiltonian encoding, or tensor product encoding.
2. Ansatz Layer: The heart of the QNN, this layer contains a Variational Quantum Circuit, repeated L times to emulate a network with L layers. It's responsible for processing and manipulating the quantum information.
3. Output Layer: This layer performs measurements on qubits, providing the final expected outcome.
For the input layer, we use a tensor product encoding technique. It involves a simple X-rotation gate for each qubit, where the gate parameter is set by scaling the classical data to the range [-π, π]. Although it's a quick and straightforward encoding method (O(1) operations), it has a limitation: the number of qubits needed scales linearly with the size of the input data. To address this, we introduce learnable scale and bias parameters for the input data, enhancing the flexibility of the quantum embedding. Figure 3 shows an example of the input layer for a network with 3 qubits, where the classical data features, input scale parameters, and bias parameters come into play.
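A sketch of this input layer is below; the function name and the scale/bias arrays standing in for the learnable parameters are mine, not from the original code:

```python
import numpy as np
from qiskit import QuantumCircuit

def input_layer(features, scales, biases):
    """Tensor product encoding: one RX gate per qubit.

    `features` are assumed pre-scaled to [-pi, pi]; `scales` and
    `biases` play the role of the learnable embedding parameters.
    """
    n = len(features)
    qc = QuantumCircuit(n)
    for i in range(n):
        qc.rx(scales[i] * features[i] + biases[i], i)
    return qc

# Example: 3 qubits, 3 classical features
qc = input_layer(np.array([0.5, -1.2, 2.0]), np.ones(3), np.zeros(3))
```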
Regarding the ansatz, it's interesting to note that, unlike classical neural networks, there isn't a fixed set of quantum layer structures commonly found in the literature (such as fully connected or recurrent layers). The realm of possible gates for quantum information transfer between qubits is extensive, and the optimal organization of these gates for effective data transfer is an area that hasn't been thoroughly explored yet.
In our project, we adopt the Real Amplitudes ansatz, a choice inspired by its success in various domains like policy estimation for quantum reinforcement learning and classification. This ansatz begins with fully parameterized X/Y/Z rotation gates, akin to the quantum version of connection weights, followed by a series of CNOT gates arranged in a ring structure to facilitate information transfer between qubits. Figure 4 provides a visual representation of how this ansatz is implemented, serving as the quantum equivalent of a network layer for a 3-qubit network.
To break it down, a quantum network layer in our work involves a set of parameters totaling 3 times the number of qubits (3*n), where n represents the number of qubits in the quantum network.
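A hand-rolled sketch of one such layer follows. Note that it implements the structure described above (full X/Y/Z rotations plus a CNOT ring), which differs from Qiskit's built-in RealAmplitudes circuit (that one uses only Y-rotations):

```python
from qiskit import QuantumCircuit
from qiskit.circuit import ParameterVector

def ansatz_layer(n_qubits, layer_idx=0):
    """One quantum 'layer': 3 rotation parameters per qubit + CNOT ring."""
    params = ParameterVector(f'theta_{layer_idx}', 3 * n_qubits)
    qc = QuantumCircuit(n_qubits)
    for q in range(n_qubits):
        qc.rx(params[3 * q], q)
        qc.ry(params[3 * q + 1], q)
        qc.rz(params[3 * q + 2], q)
    # Ring of CNOTs to entangle neighbouring qubits
    for q in range(n_qubits):
        qc.cx(q, (q + 1) % n_qubits)
    return qc

layer = ansatz_layer(3)  # 3*3 = 9 parameters for a 3-qubit network
```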
Now, let's talk about the output layer, which is a critical part of our quantum model. In quantum computing, when we want to extract information from our quantum state, we often perform measurements using a chosen observable. One commonly used observable is the σ_z (Pauli-Z) operator over the computational basis. Think of it as a way to read information out of our quantum state.
The network output is determined by calculating the expectation of this observable over our quantum state. This is expressed as ⟨ψ|σ_z|ψ⟩, where ⟨ψ| denotes the conjugate transpose of |ψ⟩. The result falls within the range [-1, 1].
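As a concrete sketch, this expectation can be computed exactly from the statevector; the RY(π/3) state preparation here is just an illustrative example:

```python
import numpy as np
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector, SparsePauliOp

qc = QuantumCircuit(1)
qc.ry(np.pi / 3, 0)                    # prepare some state |psi>

psi = Statevector.from_instruction(qc)
z = SparsePauliOp('Z')                 # the sigma_z observable
expectation = psi.expectation_value(z).real
print(expectation)                     # cos(pi/3) = 0.5, within [-1, 1]
```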
No need to stress over those complex mathematical equations: our trusty library, Qiskit, has got it covered! Qiskit will handle all the intricate quantum calculations seamlessly, making the quantum computing process much more accessible for us. So, you can focus on exploring the quantum world without getting bogged down by the nitty-gritty math.
Now, to make our network output less sensitive to biases and scales inherent in the dataset, we introduce a final scale parameter and bias to be learned. This step adds a layer of adaptability to our model, allowing it to fine-tune and adjust the output based on the specific characteristics of our data. The entire model architecture is visually represented in the figure below.
The training of our proposed Quantum Neural Network (QNN) happens on a regular CPU using classical algorithms like the Adam optimizer. The CPU handles the gradient computation through traditional backpropagation rules, while on the Quantum Processing Unit (QPU) we calculate the gradient using the parameter-shift rule. It's a bit like having a dual system, where the CPU manages the main training and the QPU comes into play for the quantum-specific computations.
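The parameter-shift rule itself is simple: for a rotation gate, the gradient of an expectation value f(θ) is (f(θ + π/2) − f(θ − π/2)) / 2. A self-contained sketch, using a single RY rotation whose expectation is analytically cos(θ):

```python
import numpy as np
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector, SparsePauliOp

def expectation(theta):
    """<Z> after an RY(theta) rotation; analytically this equals cos(theta)."""
    qc = QuantumCircuit(1)
    qc.ry(theta, 0)
    return Statevector.from_instruction(qc).expectation_value(SparsePauliOp('Z')).real

theta = 0.7
grad = (expectation(theta + np.pi / 2) - expectation(theta - np.pi / 2)) / 2
print(grad, -np.sin(theta))  # parameter-shift gradient matches -sin(theta)
```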
Visualize the training process pipeline in Figure 6, where one group of parameters represents the scale/bias of the input layer, another corresponds to the parameters of the layers containing the ansatz, and a third holds the scale/bias parameters for the network outputs. This orchestration ensures a cohesive training approach, leveraging both classical and quantum computing resources.
As a Quantum Neural Network (QNN) operates as a feedforward model, our initial step involves defining a time horizon, denoted as T. To adapt the time series data for the QNN, we transform it into a tabular format. Here, the target is the time series value at time t, denoted as x(t), while the inputs encompass the values x(t-1), x(t-2), ..., x(t-T). This restructuring facilitates the model's understanding of the temporal relationships in the data, allowing it to make predictions based on past values.
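A short sketch of this windowing step (the function name is mine, not from the original code):

```python
import numpy as np

def make_windows(series, horizon):
    """Turn a 1-D series into (inputs, target) pairs with lookback `horizon`."""
    X, y = [], []
    for t in range(horizon, len(series)):
        X.append(series[t - horizon:t])  # x(t-T), ..., x(t-1)
        y.append(series[t])              # target x(t)
    return np.array(X), np.array(y)

# Example: horizon T = 4 over an array of closing prices
# X_train, y_train = make_windows(closing_prices, horizon=4)
```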
First, we fetch the data using the historical Data API endpoint provided by Financial Modeling Prep as follows:
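A minimal version of that fetch, building on the JSON response shown earlier (the AAPL ticker is again an illustrative assumption):

```python
import pandas as pd
import requests

api_key = 'YOUR API KEY'  # your secret FMP API key
url = f'https://financialmodelingprep.com/api/v3/historical-price-full/AAPL?apikey={api_key}'
raw = requests.get(url).json()

# The 'historical' field holds one record per trading day
df = pd.DataFrame(raw['historical']).set_index('date').sort_index()
df = df[['close']]  # keep the closing price for forecasting
print(df.tail())
```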
The output is a Pandas DataFrame which looks something like this (before running the code, make sure to replace YOUR API KEY with your secret API key):
Follow this link: Stock Price Prediction with Quantum Machine Learning in Python - DataDrivenInvestor