It is impossible to ignore the important role artificial intelligence (AI) and its subset, machine learning, play in the stock market today.
While AI refers to machines that can perform tasks that would normally require human intelligence, machine learning (ML) involves learning patterns from data, which can enhance machines’ predictive and decision-making abilities.
Algorithmic trading is one of the main ways machine learning is used in the stock market. ML models recognize patterns in large amounts of financial data and place trades based on those patterns – hundreds of thousands of trades in tiny fractions of a second. These trading models continually learn, adjusting their predictions and actions in an iterative process. That feedback can sometimes trigger events such as flash crashes, in which certain patterns set off a self-reinforcing loop and parts of the market suddenly go into freefall.
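To make the mechanics concrete, here is a minimal, hypothetical sketch of that pattern-recognize-then-trade loop in Python. The moving-average crossover rule and the synthetic random-walk prices are stand-ins invented for this illustration; real algorithmic traders use far richer models and execute orders in microseconds.

```python
# Toy sketch of an algorithmic trading loop: watch incoming prices,
# detect a simple "pattern" (moving-average crossover), place trades.
# Everything here is illustrative, not a real strategy.
import random

def moving_average(prices, window):
    return sum(prices[-window:]) / window

random.seed(0)
price = 100.0
prices = [price]
position = 0  # -1 short, 0 flat, +1 long

for t in range(200):
    # Synthetic price tick: a random walk stands in for live market data.
    price += random.gauss(0, 1)
    prices.append(price)
    if len(prices) < 20:
        continue  # not enough history yet
    fast = moving_average(prices, 5)   # short-horizon signal
    slow = moving_average(prices, 20)  # long-horizon baseline
    # "Pattern": the fast average crossing the slow one triggers a trade.
    if fast > slow and position <= 0:
        position = 1
        print(f"t={t}: BUY  at {price:.2f}")
    elif fast < slow and position >= 0:
        position = -1
        print(f"t={t}: SELL at {price:.2f}")
```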
Despite its occasional shortcomings, algorithmic trading has become indispensable to our financial system. It has huge benefits – which is another way of saying it makes some people a lot of money. According to technology services company Exadel, banks stand to save $1 trillion by 2030 thanks to algorithmic trading.
However, such reliance on machine learning models in finance is not without risk – including risks beyond the flash crash.
A significant and under-appreciated threat to these systems is what is known as an adversarial attack, in which malicious actors manipulate the input data fed to an ML model, causing it to make poor predictions.
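As an illustration, consider a toy linear model standing in for a fraud detector. The weights, the input, and the perturbation size below are all invented for this sketch; nudging each feature against the sign of the model’s weights mirrors the idea behind gradient-sign attacks, where a tiny, targeted change to the input flips the model’s decision.

```python
# Sketch of an adversarial (evasion) attack on a hypothetical linear model.
import numpy as np

w = np.array([1.5, -2.0, 0.7])   # hypothetical learned weights
b = -0.1

def predict(x):
    # 1 = "fraud", 0 = "legitimate"
    return 1 if w @ x + b > 0 else 0

x = np.array([0.2, -0.1, 0.3])   # a transaction the model correctly flags
print("original prediction:", predict(x))          # -> 1 (fraud)

eps = 0.4
x_adv = x - eps * np.sign(w)     # small perturbation against the weights
print("perturbed input:    ", x_adv)
print("adversarial prediction:", predict(x_adv))   # -> 0 (slips through)
```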
A variant of the adversarial attack is known as “data poisoning,” in which bad actors introduce “noise” – false data – into the training input. Training on this poisoned data can cause the model to systematically misclassify what it sees. For example, a credit card fraud system may flag fraudulent activity where no fraud has occurred.
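The same idea can be sketched in code, assuming scikit-learn is available. A stand-in fraud classifier is trained twice: once on clean labels, and once after an attacker relabels a share of legitimate training examples as fraud. The poisoned model then raises far more false alarms on legitimate transactions, as described above. The dataset, model, and flip rate are all illustrative assumptions.

```python
# Sketch of label-flipping data poisoning against a toy fraud model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for transactions: label 1 = fraud, 0 = legitimate.
X, y = make_classification(n_samples=4000, n_features=10,
                           weights=[0.9], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Baseline model trained on clean labels.
clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Attacker poisons the training set: 60% of legitimate examples are
# relabeled as fraud (deliberately heavy-handed so the effect is visible).
rng = np.random.default_rng(0)
flip = (y_tr == 0) & (rng.random(len(y_tr)) < 0.60)
y_pois = np.where(flip, 1, y_tr)
poisoned = LogisticRegression(max_iter=1000).fit(X_tr, y_pois)

# Count false alarms: legitimate transactions flagged as fraud.
def false_alarms(model):
    return int(((model.predict(X_te) == 1) & (y_te == 0)).sum())

print("false alarms, clean model:   ", false_alarms(clean))
print("false alarms, poisoned model:", false_alarms(poisoned))
```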
Such manipulation is not just a theoretical threat. Data poisoning and adversarial attacks have wide-ranging implications for machine learning applications, including financial forecasting models. Researchers from the University of Illinois, IBM and other institutions have demonstrated the vulnerability of financial forecasting models to adversarial attacks: according to their findings, such attacks can lead to suboptimal trading decisions that cost investors 23 to 32 percent in losses. The study highlights the potential seriousness of these threats and underlines the need for robust protection against adversarial attacks.
The financial industry’s response to these attacks has often been reactive – mounting a defense only after an attack has occurred. But given that these threats are embedded in the very structure of ML algorithms, a more proactive approach is the only way to meaningfully address the problem.
Financial institutions need to implement robust, efficient testing and evaluation methods that can detect potential vulnerabilities and defend against these attacks. That can include rigorous testing procedures, employing “red teams” to simulate attacks, and continually updating models to ensure they have not been compromised by malicious actors or bad data.
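What might such a proactive check look like in practice? Below is a minimal, hypothetical sketch of a pre-deployment “red team” gate: it probes a model with gradient-sign-style perturbed inputs and fails the release if accuracy degrades beyond a tolerance. The model, data, perturbation size, and the 5 percent threshold are all assumptions made for illustration.

```python
# Sketch of a pre-deployment adversarial robustness check.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=10, random_state=1)
model = LogisticRegression(max_iter=1000).fit(X, y)

def red_team_check(model, X, y, eps=0.3, max_drop=0.05):
    """Compare accuracy on clean vs. adversarially nudged inputs."""
    clean_acc = model.score(X, y)
    # Gradient-sign-style nudge for a linear model: push each example's
    # features in the direction that works against its true class.
    grad_sign = np.sign(model.coef_[0])
    X_adv = X - eps * np.where(y[:, None] == 1, grad_sign, -grad_sign)
    adv_acc = model.score(X_adv, y)
    print(f"clean accuracy={clean_acc:.3f}  adversarial accuracy={adv_acc:.3f}")
    return (clean_acc - adv_acc) <= max_drop

if not red_team_check(model, X, y):
    print("FAIL: model too fragile under adversarial inputs to deploy")
```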
The consequences of ignoring adversarial attacks in algorithmic trading could be disastrous, ranging from significant financial losses to damaged firm reputations and even widespread economic disruption. In a world increasingly dependent on ML models, the financial sector needs to move from reactive to proactive to ensure the security and integrity of our financial system.
Joshua Steier is a technical analyst, and Sai Prathyush Katragadda is a data scientist, at the non-profit, non-partisan RAND Corporation.