Generative AI brings new threats: how to choose a security response strategy in the era of universal AI

Mondo Technology Updated on 2024-01-29

When it comes to cybersecurity, AI is a double-edged sword. On one hand, an estimated 3.5 million cybersecurity positions worldwide remain unfilled, and the staffing shortage has become a bottleneck for security assurance; under these conditions, introducing AI-assisted systems can greatly relieve the shortage of security staff. On the other hand, hackers are adopting AI faster than enterprise technology teams, and AI is creating new security problems of its own: AI-driven cyberattacks have even become a category in their own right.

According to Liu Jundian, consulting director of Unit 42 at Palo Alto Networks, this will inevitably lead us into an era of AI versus AI, and a series of changes will follow in cybersecurity technology and strategy.

Generative AI introduces new risks

GPT-1 was released in June 2018; it was only with the release of GPT-4 in March 2023 that generative AI truly entered its boom stage. To date, however, generative AI has not reached large-scale enterprise adoption; only a small number of leading companies are running experimental applications in-house. But if you assume that the security troubles caused by generative AI are stalled at the same early stage, be careful: ChatGPT's "evil twins" have at least seven ways to hurt you.

Liu Jundian listed several of them. As soon as ChatGPT set off the AI boom, experts began warning that large language models such as OpenAI's ChatGPT could be maliciously corrupted.

Unfortunately, those warnings were borne out when WormGPT sprang up. WormGPT is a model trained specifically on malware-related data, with no security guardrails and no bottom line; anyone with ulterior motives can easily prompt it to create Python-based malware.

Generative AI can relieve some work pressure, generating text, images, or applications that bring innovation and convenience to fields such as healthcare and education. But in the hands of bad actors, the emails generated by tools like WormGPT are not only highly convincing but also tactically cunning, demonstrating their potential for sophisticated phishing and BEC (business email compromise) attacks.

The "success" of FraudGPT marks the arrival of a dangerous era: the democratization of generative AI and hacking techniques. Put another way, it lowers the bar for cyberattacks all at once.

FraudGPT turns the complex hacking techniques of the past into an automated service that even people with little technical skill can use: writing malicious code, creating undetectable malware, and composing convincing phishing emails. As a result, malicious code that used to take three months to write can now be completed in three minutes.

This capability quickly made FraudGPT popular. For a subscription fee of $200 per month or $1,700 per year, anyone can become a hacker, and so far FraudGPT has more than 3,000 subscribers. Even before ChatGPT's launch in late November 2022, Palo Alto Networks warned that attackers, including state-sponsored hacking groups, were starting to weaponize generative AI.

Deepfakes are another technology that can do serious damage. If an acquaintance asks to borrow money on WeChat, and a familiar voice comes through the speaker, do you help or not? If you transfer the money in a hurry, advanced deepfake techniques such as AI face swapping may have fooled you. And when deepfakes are combined with extortion, many victims are forced to pay a ransom.

Beyond this, multimodal AI malware has also emerged. DeepLocker uses a deep neural network (DNN) model to hide its attack payload inside a benign carrier application, unlocking the payload only when it reaches its predetermined target. The "BlackMamba" proof-of-concept attack lets malware generate malicious code dynamically at runtime inside an otherwise benign program, without any command-and-control (C2) infrastructure, allowing it to evade current automated security systems.
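The gating idea behind DeepLocker can be illustrated with a much simpler, benign stand-in: environmental keying, where the decryption key for a payload is derived from attributes of the intended target's environment, so the payload stays opaque everywhere else. This is only a conceptual sketch; the hostname/username attributes, the hash-based key derivation, and the XOR cipher are illustrative simplifications standing in for DeepLocker's DNN-based trigger, not a description of the real malware.

```python
import hashlib

def derive_key(env_attributes: list) -> bytes:
    """Derive a key from observed environment attributes (hostname, user, ...)."""
    return hashlib.sha256("|".join(env_attributes).encode()).digest()

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """Toy XOR cipher standing in for real encryption."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Attacker side: encrypt the payload under the *intended* target's attributes.
target_env = ["host-finance-01", "alice"]
payload = xor_bytes(b"simulated payload", derive_key(target_env))

# Victim side: decryption only succeeds when the local environment matches,
# which is why such samples look inert under static analysis or in a sandbox.
def try_unlock(observed_env: list) -> bytes:
    return xor_bytes(payload, derive_key(observed_env))

print(try_unlock(["host-dev-07", "bob"]))        # garbage: wrong environment
print(try_unlock(["host-finance-01", "alice"]))  # b'simulated payload'
```

The defensive takeaway is that the payload never exists in decrypted form off-target, so detection has to focus on the gating behavior rather than payload signatures.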

Liu Jundian groups common AI attacks into three categories: "The first is using AI to write a Trojan program, then using that Trojan to get into the target's environment. The second is using AI to write highly convincing phishing software, and the third is using AI to carry out faster attacks."

As generative AI may rapidly evolve into more diverse and threatening attacks, Liu Jundian emphasized that the era of AI versus AI has arrived: it is no longer possible to respond to incidents with human power alone; AI assistance is a must.

Stand on the cutting edge of technology

Facing such an era of AI against AI, the next question is how to deal with it.

In this regard, Liu Jundian said: "We must recognize that the evolution of AI technology does not happen in a day. Throughout that evolution, Palo Alto Networks and its Unit 42 have paid close attention to AI security threat intelligence, conducted a great deal of related research, and developed corresponding solutions. In terms of strategy, we want users to make a shift. First, we cannot block AI attacks with manpower; fighting AI with AI is the only way. Second, AI has shown us faster, stronger, and more accurate attack patterns, so some of the defense patterns and methods we knew before must change."

Specifically, Liu Jundian summarized these transformations into six steps: first, redefine success beyond merely preventing attacks; second, prioritize defenses that constrain the attacker and give the defender room and time to maneuver; third, make defenses streamlined, repeatable, and automated; fourth, monitor around the clock to increase the time pressure on attackers; fifth, measure and reduce the external attack surface; and last but not least, transition to a zero-trust enterprise and strengthen security measures.

At the tool level, Palo Alto Networks has a security formula: Zero Trust + Platform = Forward-looking. Zero trust refers to a strategy that eliminates all implicit trust and relies on continuous verification. The platform connects best-of-breed capabilities of different types according to your needs, for maximum visibility, control, and efficiency. Forward-looking refers to an effortless, secure transformation that lets businesses operate and innovate efficiently in a secure environment.
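The "eliminate implicit trust, verify continuously" half of that formula can be sketched in a few lines: every request is evaluated from scratch against identity, device posture, and an explicit least-privilege policy, and network location grants nothing. The field names, checks, and policy table below are illustrative assumptions, not any vendor's API.

```python
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    mfa_passed: bool        # identity re-verified on this request
    device_compliant: bool  # device posture re-checked on this request
    resource: str

# Least-privilege policy: who is explicitly allowed on which resource.
POLICY = {"payroll-db": {"alice"}, "wiki": {"alice", "bob"}}

def authorize(req: Request) -> bool:
    """Verify every request from scratch; no session or network is trusted."""
    if not req.mfa_passed:
        return False
    if not req.device_compliant:
        return False
    return req.user in POLICY.get(req.resource, set())

print(authorize(Request("alice", True, True, "payroll-db")))  # True
print(authorize(Request("bob", True, True, "payroll-db")))    # False: not in policy
print(authorize(Request("alice", True, False, "payroll-db"))) # False: bad posture
```

The design point is that authorization is a pure function of the current request's evidence; there is no "inside the perimeter" shortcut an attacker's Trojan could ride on.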

Compared with a large number of single-point security products, a platform-based solution can deliver comprehensive, automated defense and proactively protect enterprise networks. Palo Alto Networks' next-generation security platform addresses the security challenges of the digital economy across three dimensions, network, cloud, and endpoint, and provides an automated security operations and analytics system on top of all security solutions. All of this rests on Palo Alto Networks' global threat intelligence network and on Unit 42, a professional team of top security analysts that analyzes and responds to security incidents and intelligence around the world in a timely manner.

Sharing drives new development

From the perspective of large language models, more and more models are going open source, and new GPT versions make it easier to build on them. This brings more technologists on board, but it also makes it easier for hackers to use generative AI to create new types of attacks. Hackers have begun exchanging AI attack experience and case studies on forums, and convenient AI attack tools let them launch more realistic, more accurate, and faster attacks without a high level of skill. This places new demands on defenders, so defenders must unite. For defenders, knowledge is power, which is where Unit 42 comes to the fore.

"We have noticed that attackers have started to cooperate and share cases, trading successes and failures with each other. Closer cooperation between enterprises, and between enterprises and the wider industry, should therefore begin as well."

In the past, we knew Unit 42 mainly through its cybersecurity reports and never saw behind the scenes. In fact, Palo Alto Networks Unit 42 brings together world-renowned threat researchers, incident response experts, and security consultants to form an intelligence-driven, rapid-response organization dedicated to helping organizations proactively manage cyber risk. As trusted security advisors, the team helps organizations evaluate and test security controls against targeted threats, improve their security strategies through threat notifications, and continuously shorten incident response times so that they can return their focus to the business as quickly as possible.

With AI attacks continuing to evolve and the industry needing to join forces, Liu Jundian described the new changes at Unit 42: "For Unit 42, the biggest change is that we need to closely track the evolution of AI attacks and how AI is used in the process. We look not only at the strategies attackers adopt with AI, but also at the toolkits they use to carry out various types of attacks and the new attack forms built on them. When enterprises need to join forces against new AI attacks, we can publish our research results openly and free of charge on websites and forums to share with everyone. Eventually we will work with everyone to operate a network ecosystem that protects the whole; our security research results must not be exclusive to Unit 42, but used to protect the entire ecosystem."

In fact, AI's disruptive potential goes far beyond this. At RSAC 2023, a panel of experts discussed the AI risk and resilience issues CISOs will face in the coming years. One of the most striking topics was an emerging attack: the sponge attack. In a sponge attack, an adversary feeds an AI model specially crafted inputs designed to maximize its consumption of hardware resources, mounting a denial-of-service attack against the model itself. In other words, AI may no longer be merely an auxiliary tool in an attack; the AI system itself becomes the direct target.
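The intuition behind sponge attacks can be shown with a toy experiment: on hardware that skips zero activations, an input crafted to keep more ReLU units firing forces more work per inference, so counting nonzero activations is a crude proxy for energy and latency. Everything below, the network sizes, random weights, and naive random-search "crafting", is an illustrative assumption, not a real sponge-attack implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
# Tiny two-layer ReLU network with arbitrary random weights.
W1, W2 = rng.normal(size=(64, 32)), rng.normal(size=(32, 16))

def active_units(x: np.ndarray) -> int:
    """Count firing ReLU units: a crude proxy for energy/latency cost."""
    h1 = np.maximum(W1.T @ x, 0)
    h2 = np.maximum(W2.T @ h1, 0)
    return int((h1 > 0).sum() + (h2 > 0).sum())

benign = rng.normal(size=64)  # a typical input

# "Sponge" input: crafted by crude random search to maximize firing units
# (real attacks use gradient-based optimization instead).
candidates = [benign] + [rng.normal(size=64) for _ in range(200)]
sponge = max(candidates, key=active_units)

# The sponge input keeps at least as many units busy as the benign one.
print(active_units(benign), active_units(sponge))
```

Scaled up to large models and adversarial optimization, the same effect drains energy budgets and inflates latency, which is what turns it into a denial-of-service vector.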

As early as 2019, Forrester Research predicted that AI would increase the scale and speed of attacks and carry out attacks unimaginable to humans. Unfortunately, those predictions are becoming reality today. In the face of increasingly severe AI attacks, changing both mindset and tools to form a joint defense will be the only way to mitigate the malicious use of AI in cyberattacks.
