Cybersecurity is the practice of protecting computer systems, networks, and data from unauthorized access, theft, damage, disruption, and other forms of attack. The practice draws on several technologies, and Artificial Intelligence (AI) is one of the key enablers among them.
This article explores the importance and use cases of AI in cybersecurity, the benefits and risks they bring, some of the risk mitigation methods currently being discussed, including how to balance human involvement with AI operations, the implications of AI in cybersecurity for the financial services industry, and future trends of AI development in the cybersecurity landscape. Inspired by the author’s interest in exploring the role of AI in cybersecurity and finding the right balance with human involvement, it shares insights from research, the author’s thought process, and key findings from the exploration.
What is the Role of AI in Cybersecurity? Why is it Becoming Increasingly Important Now?
AI can be categorized into predictive AI and generative AI based on the functions it performs. Predictive AI involves models trained on large datasets to identify patterns, correlations, and trends, often used for tasks such as forecasting, classification, and risk assessment. Generative AI, on the other hand, focuses on generating new content such as text, audio, and video, and on understanding unstructured data.
Predictive AI has been used in cybersecurity since the late 1980s to detect abnormal activities, recognized for speed and accuracy beyond human capability. More recent advances in generative AI and machine learning (ML), however, have affected the cybersecurity realm both positively and negatively. For cybercriminals, generative AI together with ML enables more sophisticated attacks at a larger scale, such as crafting more realistic phishing emails or creating malware that adapts its own code to avoid detection by traditional security systems. On the other hand, cybersecurity teams can use generative AI as a countermeasure to enhance threat intelligence, automating the learning of unstructured threat data and building a new capability to analyze more qualitative data, thereby improving threat detection.
AI’s capabilities in cybersecurity benefit all organizations. For large corporations with ample resources, AI improves efficiency and speeds up responses to threats; for small and midsize businesses (SMBs) with limited budgets, the benefits are more pronounced in reducing the cost of hiring cybersecurity personnel.
Given these promising capabilities, organizations are increasingly adopting predictive AI, generative AI, and ML in their cybersecurity practices to boost efficiency and reduce costs by automating labor-intensive tasks. This is evidenced by an Arctic Wolf survey of more than 800 senior managers, in which 98% of respondents plan to allocate some portion of their upcoming cybersecurity budget to AI, and 52% of those are dedicating over a quarter of their budget to this area.
Note: From this point on, the term AI refers to both predictive AI and generative AI, together with the capabilities of ML.
AI Capabilities and Their Implications for Cybersecurity
AI’s benefits and capabilities for cybersecurity can be explained through the 6 core functions of cybersecurity, elaborated below. While the benefits on offer are far from negligible, there are also restrictions and risks associated with adopting AI for cybersecurity practices, and organizations must be prepared to mitigate those risks to optimize AI performance.
Cybersecurity can be divided into 6 functions according to the NIST (National Institute of Standards and Technology) framework, a world-renowned framework adopted by global organizations such as Saudi Aramco, the Israel National Cyber Directorate, and the University of Chicago. The definition of each function, according to NIST, is shown below.
Figure 1: NIST Cybersecurity Framework
- GOVERN addresses an understanding of organizational context; the establishment of cybersecurity strategy and cybersecurity supply chain risk management; roles, responsibilities, and authorities; policy; and the oversight of cybersecurity strategy.
- IDENTIFY understands the organization in deeper detail (e.g. data, hardware, software, systems, facilities, services, people) and related cybersecurity risks, which enables an organization to prioritize its efforts consistent with its strategy and direction as identified under GOVERN.
- PROTECT supports the ability to secure organizational assets to prevent or lower the likelihood and impact of adverse cybersecurity events, as well as to increase the likelihood and impact of taking advantage of opportunities.
- DETECT enables the timely discovery and analysis of anomalies, indicators of compromise, and other potentially adverse events that may indicate that cybersecurity attacks and incidents are occurring.
- RESPOND supports the ability to contain the effects of cybersecurity incidents.
- RECOVER supports the timely restoration of normal operations to reduce the effects of cybersecurity incidents and enable appropriate communication during recovery efforts.
For more information or examples of activities under each function, please visit the NIST website.
To understand which tasks are suitable for AI in each function, it is essential to first analyze each function’s expected outcome, which can be derived from its objectives. Key success factors can then be determined from the qualities that lead to a better outcome, and mapped against AI’s unique capabilities to identify suitable tasks under each function. Below is an analysis of the expected outcomes and key success factors for the 6 functions, together with AI/ML capabilities.
Figure 2: Analysis of AI/ML capabilities for Cybersecurity Core functions
Given AI’s unique strengths, particularly its ability to handle vast amounts of data, automate routine tasks, and act in real time, AI can help enhance an organization’s cybersecurity posture across all 6 functions, albeit to different degrees.
Even with all the promising capabilities AI provides, it is a double-edged sword, and there are downsides users need to be aware of. Two major concerns are commonly mentioned and discussed across sources.
- Ethical concerns:
- Bias and discrimination in decision-making: This can stem from a non-diverse training dataset or from the model learning non-relevant input factors such as gender or race. In cybersecurity, for example, AI might flag certain groups more frequently, leading to unequal treatment. This is especially concerning when AI is used in areas like fraud detection or risk assessment, where fairness is crucial.
- Privacy concerns from data used to train AI: Training data is usually retrieved from production databases, which may contain sensitive personal information, raising concerns about data leakage and personal data rights. Data used to train AI for cybersecurity, specifically, often includes personally identifiable information such as user behaviors and biometric data. Strong security measures must be implemented end-to-end, from the origination and transmission of data to its handling after use, to ensure no leakage results from the additional exposure.
- Lack of transparency and explainability: AI is often regarded as a “black box” system, where users can see only inputs and outputs, not the processes in between. This lack of explainability can cause trust issues, as cybersecurity teams may not fully comprehend why an AI system flags certain behavior as malicious or overlooks a potential threat. Transparency is key to ensuring that AI’s actions align with the organization’s cybersecurity objectives.
- Potential Mistakes from AI:
- Mistakes arising from the model itself, such as:
- Generative AI hallucination – AI models generate incorrect or misleading results, caused by a variety of factors including insufficient training data and incorrect assumptions, according to Google Cloud.
- Overfitting – an AI algorithm fits too closely, or even exactly, to its training data, resulting in a model that cannot make accurate predictions or conclusions from any data other than the training data, according to IBM.
- In cybersecurity, this type of mistake manifests as an AI model flagging legitimate activity as malicious (a false positive) or failing to detect an actual threat (a false negative); see the sketch after this list for how these two error rates are measured.
- Mistakes arising from malicious actions, such as:
- Data manipulation – Cyber threat actors manipulate the data consumed by AI algorithms. By inserting incorrect information into legitimate but compromised sources, they can “poison” AI systems, causing them to error out or export bad information, according to BlueVoyant. For example, attackers might alter logs or feed deceptive data into AI-driven monitoring systems to avoid detection. Once the AI gradually comes to recognize such patterns as normal, attackers can use this attack vector for an actual offense.
- Model theft – The AI model itself is compromised and reverse-engineered by attackers to find its vulnerabilities. Attackers can then exploit the weaknesses discovered to launch undetected attacks.
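To make these two error types concrete, here is a minimal sketch of how a team might measure a detector’s false positive and false negative rates. It uses scikit-learn’s confusion_matrix; the labels and verdicts below are illustrative placeholders, not output from any real detector.

```python
# Minimal sketch: measuring false positives/negatives of a hypothetical
# AI threat detector. All labels below are illustrative placeholders.
from sklearn.metrics import confusion_matrix

# Ground truth and detector verdicts: 1 = malicious, 0 = benign
y_true = [0, 0, 0, 1, 1, 0, 1, 0, 0, 1]
y_pred = [0, 1, 0, 1, 0, 0, 1, 0, 1, 1]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

fpr = fp / (fp + tn)  # legitimate activity wrongly flagged as malicious
fnr = fn / (fn + tp)  # actual threats the detector failed to catch

print(f"False positive rate: {fpr:.2f}")  # drives analyst alert fatigue
print(f"False negative rate: {fnr:.2f}")  # drives missed-threat risk
```

In practice, tuning a detector is a trade-off between these two rates: lowering the alert threshold reduces false negatives but floods analysts with false positives, and vice versa.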
Organizations currently take two main approaches to mitigating these concerns: technological solutions, and human involvement and supervision. The two are generally used in combination.
- Technological solutions are more often used to address the following concerns.
- Privacy concerns – These can be mitigated with tools that generate synthetic data or perform data masking. Multiple vendors provide such solutions, including betterdata, Hazy, and Mostly AI.
- Lack of transparency and explainability – One way to address this concern is to use AI solutions with clear documentation of their decision-making processes, which can be audited and customized as needed.
- Potential mistakes from both the model itself and malicious actions – One idea under discussion is building an AI agent to work for humans. An AI agent, according to IBM, is “a system or program that is capable of autonomously performing tasks on behalf of a user or another system by designing its workflow and utilizing available tools”. In this case, an AI agent can be specifically trained to validate the output of another AI against expected outcomes to prevent mistakes.
- Human involvement and supervision is more often used for the following.
- Bias and discrimination in decision-making – To prevent bias and discrimination in AI, ethicists should be integrated into AI development and deployment teams to ensure that training datasets are unbiased and outputs are rigorously tested for fairness, an approach Geoffrey Hinton emphasized during a session at Collision 2024.
- Potential mistakes from both the model itself and malicious actions – Humans can serve as validators of AI outputs, identifying and correcting any mistakes made by the AI model. This approach follows the “Maker-Checker” principle, ensuring an additional layer of oversight and accountability; a minimal sketch of the pattern follows this list.
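As a rough illustration of this Maker-Checker pattern, the sketch below has a primary AI verdict validated by a second check, with low-confidence cases escalated to a human analyst queue. All function names, thresholds, and scoring rules are hypothetical stand-ins, not a real detection or validation model.

```python
# Minimal sketch of the "Maker-Checker" principle applied to AI outputs.
# Names, thresholds, and scoring rules are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class Verdict:
    label: str         # e.g. "malicious" or "benign"
    confidence: float  # maker's confidence in [0, 1]

def maker_model(event: dict) -> Verdict:
    # Stand-in for the primary AI detector (the "maker").
    score = 0.9 if event.get("failed_logins", 0) > 5 else 0.2
    return Verdict("malicious" if score > 0.5 else "benign", score)

def checker(event: dict, verdict: Verdict) -> bool:
    # Stand-in for a second model or AI agent (the "checker") that
    # validates the maker's output against the expected outcome.
    return verdict.confidence >= 0.8

def triage(event: dict, human_queue: list) -> Verdict:
    verdict = maker_model(event)
    if not checker(event, verdict):
        # Low confidence or disagreement: escalate to a human analyst
        # instead of acting on the AI output automatically.
        human_queue.append((event, verdict))
    return verdict

queue: list = []
triage({"failed_logins": 7}, queue)  # high confidence, handled by AI
triage({"failed_logins": 2}, queue)  # low confidence, escalated
print(f"{len(queue)} event(s) awaiting human review")  # -> 1
```

The design point is that neither the maker nor the checker acts alone on uncertain cases: the AI handles the high-confidence bulk, while ambiguous events land in a queue where a human provides the final judgment.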
Balancing Human Involvement in the Age of AI Cyber Defense
AI has undeniably unlocked a level of efficiency not previously achievable by humans. At the same time, a human component in cybersecurity operations remains essential to mitigate ethical concerns and reduce the risk of AI mistakes. Balancing the two is therefore key to resilient cybersecurity.
Achieving an optimal balance between humans and AI requires a clear understanding of their respective strengths and limitations. Cybersecurity tasks can be categorized into two key areas: execution capabilities, which ensure tasks are executed effectively and efficiently, and intelligent capabilities, which ensure tasks are carried out responsibly and in line with an organization’s goals. This analysis helps determine which cybersecurity activities are best suited to AI and which should remain under human supervision. The table below contrasts the strengths and limitations of humans and AI in these areas.
| Capability | Human | AI |
| --- | --- | --- |
| Execution capabilities | | |
| Speed | Slower processing of large data | Swift large-scale data processing |
| Accuracy | Prone to human error | Prone to algorithm errors and data poisoning |
| Consistency | Inconsistent performance, subject to human limitations | Highly consistent performance with 24/7 availability |
| Scalability | Cannot be scaled effectively | Easily scalable to handle multiple tasks |
| Cost efficiency | High costs for salaries, training, and benefits; difficult to retain | High cost efficiency for repetitive tasks |
| Intelligent capabilities | | |
| Cognitive abilities | Creativity based on personal background; capability to contextualize; judgment based on intuition | Creativity based on training data; low capability to contextualize; “intuition” by training |
| Learning | Ability to learn from unlimited sources, though slower, limited by the speed of data digestion | High dependency on training data fed by humans, but speedy learning enabled by computing resources |
| Emotional intelligence | Understands human emotions and unspoken words; personalized interactions | “Emotions” by training |
| Ethics | Prone to personal biases | Prone to ethical issues from training data |
As the table shows, humans and AI possess different strengths, suggesting room to divide tasks efficiently between the two, while the small areas of overlap point to opportunities for collaboration. Mapped onto the NIST cybersecurity framework, the leading roles for each of the 6 functions, and the activities each party can perform to strengthen the security posture, are identified in the following figure: humans lead roles requiring high-level strategic decision-making and communication, while AI leads execution according to preset policies, which is more operational in nature.
Figure 3: Human and AI Collaboration on the Cybersecurity’s Core Functions
AI in Cybersecurity for the Financial Services Industry
Financial services are frequently cited as a top target for cyber-attacks, with the industry incurring the second-highest breach costs, averaging nearly $6 million annually, according to Nvidia. Due to the high volume and value of monetary transactions, financial institutions are particularly vulnerable to identity fraud and transaction fraud. These two are highlighted because identity theft can give attackers access to accounts, allowing them to perform fraudulent transactions, while transaction fraud directly compromises the transfer of funds, making both critical concerns for the sector.
Preventing these two types of fraud typically requires analyzing high volumes of data and involves routine operations in the PROTECT, DETECT, and RESPOND functions, tasks that AI excels at leading. AI is therefore an effective and efficient defense tool that financial service providers can use to manage these fraud risks. This section takes a deeper look at how AI has helped the financial services industry mitigate the risks of these two frauds, along with some solution providers.
Identity fraud is “the crime of using someone’s personal information in order to pretend to be them and to get money or goods in their name”, according to the Cambridge Dictionary. Prominent examples of identity fraud include:
1) Phishing – a malicious actor sends phishing content through channels such as email or text message, luring account owners into providing personal credentials or financial information;
2) Fake websites – a threat actor creates a fake website that looks legitimate and trustworthy, deceiving account owners into entering financial information or making fraudulent transactions; and
3) Data breaches – a cybercriminal gains access to account owners’ credentials and information through unauthorized database access or other records.
Transaction fraud is “any deceptive activity intended to acquire money, goods or services during a financial transaction”, according to Datavisor. Transaction fraud typically happens after identity fraud, if not at the same time: cybercriminals use credentials obtained through identity fraud to perform financial transactions, such as making unauthorized purchases with stolen credit card information or using login credentials to transfer money to their own accounts.
Several large banks around the world have integrated AI into their cybersecurity measures to protect customers and minimize financial losses and reputational damage. Many, including Bank of America, JPMorgan Chase, KBank, BNP Paribas, Mitsubishi UFJ, and others, have announced strategies for using AI to address cybersecurity challenges. Some notable use cases of AI as a countermeasure against these frauds in the financial services sector are shown below.
| Function | Identity Fraud | Transaction Fraud |
| --- | --- | --- |
| PROTECT | Biometric authentication – AI is used to authenticate account owners through methods such as facial recognition, fingerprint scans, and voice recognition, in addition to traditional methods like OTP. Document verification – AI verifies the authenticity of documents provided by account owners, ensuring a threat actor is not claiming someone else’s identity with falsified documents. | Biometric authentication – Financial service providers increasingly require biometric verification for transactions above certain value thresholds, to limit the risk of transactions initiated by unauthorized threat actors. Solution providers are usually the same as those offering authentication and verification solutions for identity fraud. |
| DETECT | Customer profile analytics – AI can collect a customer’s device ID, IP address, geolocation, and behavioral biometric cues, such as typing speed, pressure, and the angle at which a customer typically holds their phone, to build a customer profile. Deviations from normal patterns can be flagged as anomalies. | Customer behavior analytics – AI can learn a customer’s normal spending patterns, including expense types, typical ticket sizes, and the time and place of transactions. Abnormal spending behavior is then flagged for further action (see the sketch after this table). |
| RESPOND | Real-time alerts – AI can automatically alert customers to potential identity and transaction fraud flagged in the DETECT or PROTECT phases, prompting them to change passwords and confirm through a verified channel whether the activity is legitimate. Real-time suspension – In more serious cases, AI can force a logout, suspend the account, and require customers to verify themselves through channels such as a phone call before resuming activity. RESPOND features often come bundled with DETECT features, so solution providers here are typically the same as for the DETECT function. (Applies to both fraud types.) | |
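As a rough illustration of the customer behavior analytics described in the DETECT row above, the sketch below trains scikit-learn’s IsolationForest on synthetic “normal” transactions for a single customer and flags deviations such as an unusually large purchase at an odd hour. The features (amount and hour of day), the synthetic history, and the contamination setting are all illustrative assumptions, not any provider’s actual model.

```python
# Minimal sketch of customer behavior analytics for transaction fraud.
# The features (amount, hour of day), synthetic history, and threshold
# are illustrative assumptions, not any provider's actual model.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulate one customer's normal history: modest amounts, daytime hours.
history = np.column_stack([
    rng.normal(50, 15, 500),  # typical ticket size around $50
    rng.normal(14, 3, 500),   # transactions usually around 2 pm
])

# Learn the customer's normal spending pattern.
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(history)

# Score incoming transactions: predict() returns -1 for anomalies.
incoming = np.array([
    [55, 13],    # ordinary purchase -> should pass
    [2500, 3],   # large amount at 3 am -> should be flagged
])
for (amount, hour), label in zip(incoming, model.predict(incoming)):
    status = "FLAGGED for review" if label == -1 else "ok"
    print(f"amount=${amount:.0f} hour={hour:.0f} -> {status}")
```

In a production system the flagged transactions would feed the RESPOND features described above, triggering a real-time alert or suspension rather than a printed message.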
Conclusion
AI has become essential in cybersecurity, offering new capabilities to both attackers and defenders. While AI enables faster, more widespread cyberattacks, it also empowers defenses to counter threats at unprecedented speed and scale. AI’s effectiveness grows with more data, making it a race in which “data is the new oil,” as Clive Humby noted. AI remains a double-edged sword, however, with ethical concerns and potential errors posing significant risks. To mitigate these, the industry is balancing AI’s strengths, such as rapid data analysis and automated responses, with human oversight for tasks requiring context and nuanced judgment. One prominent use case is the financial services industry, which deals with a high volume and value of monetary transactions; there, AI is being widely adopted to prevent identity fraud and transaction fraud, thanks to its strengths in speedy, high-volume data analysis and routine task automation.
The AI era is just beginning, with many future possibilities to strengthen cybersecurity. One promising initiative is cross-environment intelligence, where AI models can learn from data across multiple organizations without exposing sensitive information, creating real-time collective intelligence. However, this requires central coordination and standardized integration across systems, making it a work-in-progress. Another development is the rise of AI agents, which can integrate with systems to automatically perform cybersecurity tasks using available tools and applications, and collaborate with each other, like humans, to enhance security and push automation further in cybersecurity operations.
As we venture into this ever-evolving landscape of cyber threats, organizations must stay informed on emerging trends and technologies to remain resilient, with AI being at the forefront. However, the use of AI in cybersecurity will require human supervision to ensure ethical outcomes, prevent mistakes, monitor undocumented data, and make strategic decisions. Only with this balance between AI and human oversight can organizations fully harness the potential of AI to effectively enhance their cybersecurity defenses.
Author: Benjamas Tusakul (Air)
Editors: Wanwares Boonkong (Pin), Woraphot Kingkawkantong (Ping)
References
https://www.sophos.com/en-us/cybersecurity-explained/ai-in-cybersecurity
https://www.engati.com/blog/ai-in-cybersecurity
https://www.securitymagazine.com/articles/99487-assessing-the-pros-and-cons-of-ai-for-cybersecurity
https://www.statista.com/statistics/1382266/cyber-attacks-worldwide-by-type/
https://www.weforum.org/agenda/2024/01/cybersecurity-cybercrime-system-safety/
https://www.ey.com/en_gl/insights/consulting/transform-cybersecurity-to-accelerate-value-from-ai
https://arcticwolf.com/resource/aw/the-human-ai-partnership
https://academia.co.uk/ai-versus-human-collaboration-for-a-secure-digital-future/
https://secureframe.com/blog/ai-in-cybersecurity
https://www.paloaltonetworks.com/cyberpedia/generative-ai-in-cybersecurity
https://outshift.cisco.com/blog/adopting-ai-security-operations
https://www.techmagic.co/blog/ai-in-cybersecurity/
https://www.americanbanker.com/news/can-ai-help-when-a-scam-is-invisible-to-the-bank
https://innov8tif.com/6-ways-ai-is-fighting-back-against/
https://www.cio.com/article/190888/5-famous-analytics-and-ai-disasters.html
https://www.splunk.com/en_us/form/state-of-security.html
https://www.tableau.com/data-insights/ai/advantages-disadvantages