AI in Cybersecurity: How Businesses are Adapting in 2024

Written by Jessica Schulze

Learn about today’s AI-powered offenses and defenses and how you can prepare your organization for both.


The recent buzz surrounding artificial intelligence (AI) may give the impression of novelty, but for many companies, AI is already a familiar technology. In fact, a study by CompTIA reports 56 percent of businesses and IT professionals are already using AI in cybersecurity [1]. The same study identifies skill gaps among employees as the biggest challenge to cybersecurity initiatives. In the following article, we’ll discuss why adapting to AI is crucial for cybersecurity and how effective strategies include everyone in an organization. 

AI’s impact on the workplace in 2024

Since the emergence of user-friendly AI tools like ChatGPT, demand for AI literacy has been high and continues to climb. AI has even moved to the forefront of industries not traditionally associated with tech, such as real estate and retail [2]. Since 2017, the number of companies adopting AI has more than doubled [3]. 


Advantages of AI in cybersecurity

A few features of AI make it well suited for defensive use in cybersecurity. For starters, it can digest far larger volumes of information far faster than humans can. It can also be more accurate, reducing the risk of human errors such as false-positive results or lapses in judgment caused by alert fatigue. It is already used alongside traditional cybersecurity tools such as antivirus and intrusion detection software for real-time monitoring and immediate threat containment.

How AI can combat cybercrime

  • Enhanced malware detection. AI can analyze various types of data for irregular file characteristics and code patterns to better identify malicious software, scripts, and behavior. 

  • Security analytics. AI can parse security records and logs of past incidents to uncover trends and other patterns that would be time-consuming or difficult for human security analysts to identify manually (the short sketch after this list illustrates the idea in miniature). 

  • Proactive security. AI tools can be trained on historical data to predict and prevent future attacks. Through logical inference, they can analyze previous behaviors to suggest possible solutions and reduce cybersecurity teams’ response times. 

  • Continuous monitoring. AI in cybersecurity removes the delay of human response time. Complex, enterprise-level systems can be monitored constantly, from networks to applications to the devices that run them. In the event of a breach, AI tools can act quickly, prioritizing recovery measures to secure backup data and keep businesses up and running. 
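For a concrete sense of the security-analytics and monitoring items above, here is a minimal sketch of unsupervised anomaly detection over log-derived features using scikit-learn's IsolationForest. The feature names, numbers, and sessions are made up for illustration; a real deployment would draw on far richer telemetry and a purpose-built platform.

```python
# Minimal sketch: unsupervised anomaly detection over log-derived features.
# All data, feature names, and thresholds here are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row summarizes one user session: [requests/min, failed logins, MB transferred]
rng = np.random.default_rng(0)
normal_sessions = rng.normal(
    loc=[20, 0.2, 5], scale=[5, 0.5, 2], size=(500, 3)
).clip(min=0)

suspicious_sessions = np.array([
    [400.0, 12.0, 250.0],  # request burst, many failed logins, large transfer
    [15.0, 30.0, 1.0],     # credential-stuffing pattern: many failures, little traffic
])

# Train only on traffic assumed to be normal; flag sessions that look unlike it
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_sessions)

# predict() returns -1 for anomalies and 1 for inliers
for row, label in zip(suspicious_sessions, model.predict(suspicious_sessions)):
    status = "ANOMALY" if label == -1 else "normal"
    print(f"session {row.tolist()} -> {status}")
```

The key design point is that the model is fit only on a baseline of normal activity, so anything sufficiently unlike that baseline is flagged without needing labeled examples of every possible attack.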

The ROI of AI in cybersecurity can be substantial. 

Ransomware alone is projected to cost victims around $265 billion annually by 2031 [4]. Aside from averting costly cybersecurity incidents, AI tools have been shown to improve profitability directly by automating workflows and increasing employee productivity. 


Challenges of AI in cybersecurity

Just as AI has revolutionized cybersecurity defense, it has also enabled more sophisticated, larger-scale attacks. Traditional cybersecurity measures, such as antivirus software that relies on threat signatures, can fail against the adaptive nature of today's cyber threats. Comprehensive federal legislation governing AI does not currently exist in the United States, leaving businesses responsible for keeping pace with data protection and privacy requirements. 
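To make that limitation concrete, the sketch below shows the core of signature-based detection, hashing content and checking it against a list of known-bad hashes, and how a single-byte change produces a hash that no longer matches. The "payload" and hash set are placeholders, not real malware indicators.

```python
# Sketch of hash-based (signature) detection and why trivial modification evades it.
# The payload and hash list are placeholders, not real indicators of compromise.
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

original = b"pretend-malicious-payload-v1"
known_bad_hashes = {sha256(original)}  # the recorded signature

mutated = original + b"\x00"  # a one-byte change, as self-modifying malware might make

print(sha256(original) in known_bad_hashes)  # True  -> detected
print(sha256(mutated) in known_bad_hashes)   # False -> slips past the signature
```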

Read more: What is InfoSec? Definition + Career Guide

How AI can enable cybercrime

  • Deepfakes. AI can be used to create synthetic videos and audio to mimic real people. By engaging victims in conversation with seemingly familiar people, malicious actors can convince them to share sensitive information or spread misinformation. 

  • Password guessing algorithms. AI-powered password-cracking software such as PassGAN can guess common seven-character passwords in mere minutes. Across a broader sample of common passwords, it cracked 51 percent in under a minute, 65 percent in under an hour, 71 percent in under a day, and 81 percent in under a month [5]. The keyspace sketch after this list gives a sense of why short passwords fall so quickly.

  • Adaptive attack patterns. Metamorphic and polymorphic malware can rewrite its own code as it spreads, adapting to each system it infects. These transformative attacks are challenging for traditional detection methods because the malware's signature may have changed by the time it is detected. 

  • Generative malware. Recent advancements in generative models such as GPT-4 enable people with little to no programming knowledge to generate working code. In the wrong hands, these tools can be prompted to create malicious software or to help circumvent cybersecurity defenses.  
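To put those cracking times in perspective (referenced in the password bullet above), here is some back-of-the-envelope keyspace arithmetic: the number of candidates a brute-force search faces for a given length and character set, and how long exhausting them would take at an assumed guess rate. The rate below is a round illustrative number, not a measured benchmark of PassGAN or any other tool.

```python
# Back-of-the-envelope brute-force math. The guess rate is an assumed, illustrative
# figure, not a benchmark of PassGAN or any real cracking setup.
GUESSES_PER_SECOND = 10_000_000_000  # assume 10 billion guesses per second

def worst_case_seconds(length: int, alphabet_size: int) -> float:
    """Seconds to exhaust every combination of the given length and alphabet."""
    return alphabet_size ** length / GUESSES_PER_SECOND

for length, alphabet, label in [
    (7, 26, "7 chars, lowercase letters only"),
    (7, 95, "7 chars, full printable ASCII"),
    (12, 95, "12 chars, full printable ASCII"),
]:
    secs = worst_case_seconds(length, alphabet)
    print(f"{label}: {secs:,.1f} seconds (~{secs / (86_400 * 365):,.6f} years)")
```

Even under generous assumptions, short passwords fall within seconds to hours, while added length pushes the worst case out by orders of magnitude, which is why length and unpredictability matter more than any single substitution trick.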

Read more: Cybersecurity Terms: A to Z Glossary

AI-specific cyber threats

AI isn’t just enabling existing cyberattacks; it’s also creating new ones, notably data poisoning. Data poisoning is the manipulation of machine learning training data. Poisoned samples are injected into a training dataset, corrupting the model and causing it to produce inaccurate results. In other instances, data poisoning attacks may plant hidden vulnerabilities, allowing malicious actors to secretly control the model or trick it into trusting unsafe file types.  
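As a toy illustration of the mechanics, the sketch below trains the same simple classifier twice, once on clean labels and once after an attacker flips a fraction of the training labels, and compares accuracy on held-out data. The dataset, flip rate, and model are illustrative choices only, not a reproduction of any real-world attack.

```python
# Toy label-flipping example of data poisoning. Dataset, flip rate, and model
# are illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "attacker" silently flips 30 percent of the training labels
rng = np.random.default_rng(0)
poisoned = y_train.copy()
flip_idx = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
poisoned[flip_idx] = 1 - poisoned[flip_idx]

clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned)

print("accuracy with clean labels:   ", clean_model.score(X_test, y_test))
print("accuracy with poisoned labels:", poisoned_model.score(X_test, y_test))
```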


How businesses are adapting to AI in cybersecurity

In 2021, the global market for AI in cybersecurity was valued at $14.9 billion. By 2030, it is projected to reach $133.8 billion [6]. In part, this growth can be attributed to greater accessibility: thanks to advancements in computing power, companies no longer need massive data sets and high-end servers to support AI technology. Many comprehensive AI cybersecurity solutions are available today, but technology isn’t the only component company leaders should consider. 

Digital transformations can fail without proper support from users. AI tools are only as powerful as the humans who wield them, and incidents of AI misuse have increased 26-fold since 2012 [3]. Are you thinking of implementing AI cybersecurity measures in your organization? Consider starting with the following areas of focus before introducing a technical solution.  

Employee training

Research shows that 86 percent of employees believe they need AI training. Many companies have begun to provide it, but these learning experiences aren’t trickling down from the executive suite: while 44 percent of organization leaders have been upskilled in AI, just 14 percent of employees have been offered the same opportunity [7]. AI upskilling helps your organization stay current, and it can also provide equal growth and development opportunities for a more cohesive and efficient workforce.

Is it hard to get employees to participate in AI training?

If reluctance to adapt to an AI initiative is a concern, rest assured that employees tend to appreciate professional development. Here are a few fast facts from a study by Zippia about employee training programs [8]: 

  • 45 percent of employees would stay at a company longer if it invested in their learning and development. 

  • 92 percent of employees say well-planned training programs positively affect their engagement. 

  • Employees learn 70 percent of their abilities through on-the-job experience.  

Some cybercrime can only be mitigated through employee education.

Social engineering is a prime example of a threat that training addresses directly. It manipulates people into divulging sensitive information or performing harmful tasks, such as downloading suspicious software or purchasing unauthorized items with company funds.

Although some social engineering attacks are easy to spot, AI tools are increasing both their effectiveness and the scale at which they can be carried out. For example, phishing emails often contain grammatical or spelling errors that arouse suspicion; with generative AI such as chatbots, these emails can be composed more quickly, more often, and with fewer mistakes. Teaching organization members what AI can do and how it’s being used is crucial to protecting employees and company resources.
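On the defensive side, text classification is one common way AI is applied to this problem. Here is a minimal sketch that trains a bag-of-words model to separate a few made-up phishing-style messages from benign ones; the messages, labels, and model are entirely hypothetical and far too small for real use, where filters rely on large corpora and many signals beyond the message text.

```python
# Minimal text-classification sketch for phishing-style messages.
# The tiny dataset is made up; real filters train on large corpora and many signals.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

messages = [
    "Urgent: verify your account now or it will be suspended",
    "Your invoice is attached, click here to confirm payment details",
    "Reset your password immediately using this link",
    "Team lunch is moved to 1pm on Thursday",
    "Here are the slides from this morning's planning meeting",
    "Reminder: the quarterly report draft is due Friday",
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = phishing-style, 0 = benign

# TF-IDF features feeding a logistic regression classifier
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(messages, labels)

test = ["Please confirm your password to avoid account suspension"]
print("phishing probability:", model.predict_proba(test)[0][1])
```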


AI tools and security solutions

Here are a few examples of AI-based cybersecurity tools in today’s market:

  • IBM Security Verify. Ideal for hybrid work environments, this AI-powered identity and access management solution provides automated on-premises and cloud-based security governance. It’s a software-as-a-service (SaaS) offering that protects both internal members and customers.

  • Amazon GuardDuty. AWS environments can obtain continuous monitoring and machine learning (ML) powered threat detection through Amazon GuardDuty. It produces detailed security findings for increased visibility and faster resolution by your security team; the brief sketch after this list shows one way those findings can be pulled programmatically. 

  • CylanceENDPOINT. CylanceENDPOINT focuses on preventative defense against malware, zero-day threats, and fileless memory exploits. Its AI uses less than 6 percent of CPU processing power, making it a lightweight option for businesses with cloud-native, hybrid, and on-premises systems. 
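As one example of how such a tool plugs into a workflow, the read-only sketch below uses the AWS SDK for Python (boto3) to look up a GuardDuty detector and print its most recent findings. It assumes AWS credentials and a region are already configured and that GuardDuty is already enabled; enabling the service and acting on findings involve additional setup not shown here.

```python
# Read-only sketch: list recent GuardDuty findings with boto3.
# Assumes AWS credentials/region are configured and GuardDuty is already enabled.
import boto3

guardduty = boto3.client("guardduty")

detector_ids = guardduty.list_detectors()["DetectorIds"]
if not detector_ids:
    raise SystemExit("No GuardDuty detector found in this account/region.")
detector_id = detector_ids[0]

finding_ids = guardduty.list_findings(DetectorId=detector_id, MaxResults=10)["FindingIds"]

if finding_ids:
    findings = guardduty.get_findings(DetectorId=detector_id, FindingIds=finding_ids)["Findings"]
    for finding in findings:
        print(finding["Severity"], finding["Type"], finding["Title"])
else:
    print("No findings returned.")
```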

Stay current in an evolving workplace with Coursera 

Modern cybersecurity systems must be agile and adaptable to meet the ever-changing nature of today’s cyber threats. While employing AI in cybersecurity systems is an excellent way to fight fire with fire, transparent and collaborative protocols are also key. Set your team up for success with a two-part plan, including technical implementation supported by thorough employee training. 

Article sources

1. CompTIA. “State of Cybersecurity 2024,” https://www.comptia.org/content/research/cybersecurity-trends-research. Accessed November 28, 2023. 

This content has been made available for informational purposes only. Learners are advised to conduct additional research to ensure that courses and other credentials pursued meet their personal, professional, and financial goals.