HK$200 Million Scam Built on AI Face-Swapping "Black Tech": Multinational Company Falls Victim

Mondo Finance Updated on 2024-03-03

Recently, a massive fraud case involving artificial intelligence "multi-person face-swapping" technology shocked the world. According to Hong Kong media, fraudsters used cutting-edge deepfake technology to defraud the Hong Kong branch of a multinational company of up to HK$200 million. In this bizarre case, an employee of the victim company received an invitation to a video meeting purportedly from the company's UK-based CFO and made multiple transfers as instructed during what appeared to be a normal meeting. Only when the truth came out did it emerge that the other "participants" in the video meeting were virtual avatars forged with AI face-swapping technology; the victim was the only real person on the call.

The fraudsters planned carefully: they first collected publicly available video and image footage of the target executives from online platforms, used advanced deep learning algorithms to identify and extract facial features, seamlessly mapped those features onto their own faces, and simulated the executives' voices, finally presenting a fake video call realistic enough to pass for the real thing. Although the employee was initially suspicious of such a confidential transaction, his doubts were dispelled after communicating with the supposed executives on the call, and he ultimately fell into the trap.

To guard against such high-tech scams, Xue Zhizhi, an expert from the Cyberspace Security Association of China, advises the public to take the following measures: ask the other party to perform specific actions during a video conversation, such as waving a hand, to test whether the face on screen is genuine; ask questions that only the real person could answer to verify identity; and, if a fake identity is suspected, confirm it through multiple channels, such as calling the person back on a known number or checking with relatives, friends, or colleagues.

Yu Nenghai, executive dean of the School of Cyberspace Security at the University of Science and Technology of China, stressed that the public should pay more attention to protecting biometric information and develop good cybersecurity habits: strengthen the protection of personal biometric data in daily life, be cautious when authorizing applications to access sensitive information such as voice and images, and avoid logging in to insecure websites to prevent malware intrusion.

Superintendent Chan Chun-ching reminded the public that however quickly technology develops, the key to fraud prevention remains vigilance and verification. Faced with an endless stream of high-tech fraud methods, we must understand that "seeing is not necessarily believing": in an era of increasingly sophisticated deepfake technology, information on any social platform needs to be verified and confirmed so as not to fall into the fraudsters' trap.

As the "AI face-swapping" technology continues to mature and become popular, the methods used by fraudsters to commit crimes are becoming increasingly sophisticated and difficult to detect. It is pointed out that in this case, strategies were even devised to circumvent technical loopholes, such as avoiding excessive verbal communication and instead conveying instructions through online instant messaging software, so as to reduce the risk of being discovered.

To address this new form of online fraud, experts are calling on companies to strengthen internal management, raise employees' awareness of technologies such as deepfakes, and reinforce information security training so that employees can recognize suspicious situations. At the same time, enterprises should establish and refine a multi-level approval system: large fund transfers in particular must go through multiple rounds of verification and confirmation, as sketched below.
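As an illustration only, the following minimal Python sketch shows how a threshold-based, multi-approver rule for outgoing transfers might be enforced in an internal payment workflow. The thresholds, role names, and the callback-verification flag are hypothetical assumptions for the sketch, not details taken from the case or from any real system.

```python
# Hypothetical sketch of a multi-level approval rule for outgoing transfers.
# Thresholds, roles, and the callback-verification flag are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class TransferRequest:
    amount_hkd: float                              # requested transfer amount in HKD
    approvals: set = field(default_factory=set)    # roles that have signed off
    callback_verified: bool = False                # identity re-confirmed via a known phone number

# Required approver roles by amount tier (illustrative values, highest tier first).
APPROVAL_TIERS = [
    (10_000_000, {"department_head", "cfo", "board_member"}),
    (1_000_000,  {"department_head", "cfo"}),
    (0,          {"department_head"}),
]

def can_execute(request: TransferRequest) -> bool:
    """Return True only if every required role has approved and, for large
    amounts, the requester's identity was re-verified through an independent channel."""
    for threshold, required_roles in APPROVAL_TIERS:
        if request.amount_hkd >= threshold:
            if not required_roles.issubset(request.approvals):
                return False
            # Large transfers additionally require out-of-band callback verification.
            if request.amount_hkd >= 1_000_000 and not request.callback_verified:
                return False
            return True
    return False

# Example: a HK$200M request with a single approval and no callback check is rejected.
print(can_execute(TransferRequest(amount_hkd=200_000_000, approvals={"department_head"})))  # False
```

The point of the sketch is simply that no single employee, however convinced by what they see on a call, should be able to release a large transfer on their own; the approval logic forces independent confirmation outside the call itself.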

In addition, regulators need to step up research into, and crackdowns on, such new high-tech crimes, push for updates and improvements to relevant laws and regulations, and strengthen the monitoring and governance of AI misuse. All sectors of society need to work together to build a solid line of defense against AI fraud, both to protect personal property and to create a healthy, orderly environment for technological progress.

Superintendent Chan Chun-ching emphasised once again that no matter how technology develops, the public's own anti-fraud awareness is always the first line of defence. "Members of the public should always be vigilant and verify the identity behind any request to transfer money or disclose personal information, even if it appears to come from a familiar person or an authority," he said. "Only when everyone becomes the first person responsible for their own cyber security can we effectively resist the endless stream of high-tech fraud and leave AI face-swapping scams with nowhere to hide."

Globally, the misuse of AI face-swapping technology has drawn widespread attention. Governments, research institutions, and enterprises are working together to address the challenge, upgrading countermeasures through technological research and development while strengthening international cooperation to combat AI-enabled crime. Technology companies, for example, are developing more accurate deepfake detection techniques so that forged content can be identified as early as possible.

In addition, experts suggest that major communication platforms and conferencing software providers should take active steps to improve product security, such as adding real-time biometric liveness detection to authenticate users joining video calls and prevent this type of fraud. Users, for their part, are encouraged to enable multi-factor authentication to improve account security.
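As a rough illustration of the kind of safeguard described above, the sketch below combines a random liveness challenge (asking a participant to perform a specific action on camera, which pre-rendered deepfake footage cannot anticipate) with a one-time-code check. The functions detect_gesture, send_otp, and the capture callback are hypothetical placeholders, not features of any particular conferencing product.

```python
# Illustrative sketch: gate a video-call participant behind a random liveness
# challenge plus a one-time code delivered over an independent channel.
# detect_gesture, send_otp, and capture_frames are hypothetical stand-ins.

import random
import secrets

CHALLENGES = ["wave your hand", "turn your head to the left", "cover one eye"]

def detect_gesture(frames, expected: str) -> bool:
    """Placeholder for a real-time liveness detector that checks whether the
    requested gesture actually appears in freshly captured video frames."""
    raise NotImplementedError

def send_otp(user_id: str) -> str:
    """Placeholder: deliver a one-time code via a pre-registered, separate channel
    (e.g. an authenticator app) and return it for later comparison."""
    return secrets.token_hex(3)

def verify_participant(user_id: str, capture_frames, entered_code: str, expected_code: str) -> bool:
    # Step 1: announce a random challenge, then capture fresh frames, so that
    # replayed or pre-generated deepfake video cannot satisfy the check.
    challenge = random.choice(CHALLENGES)
    print(f"Please {challenge} now.")
    frames = capture_frames()  # placeholder: grab live video after the prompt
    if not detect_gesture(frames, challenge):
        return False
    # Step 2: a one-time code from an independent channel ties the on-screen
    # face to a registered account (multi-factor authentication).
    return secrets.compare_digest(entered_code, expected_code)
```

The design choice here mirrors the experts' advice: randomness defeats pre-recorded forgeries, and the second factor moves trust off the video image itself and onto a channel the fraudster does not control.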

All sectors of society should also step up the popularization of cybersecurity knowledge, educate the public on how to identify and prevent AI face-swap fraud, and raise cyber literacy across the whole population. From school education to community training, people of all ages should come to understand the characteristics of these new types of fraud and how to guard against them, thereby reducing the risk of being scammed.

In the face of AI face-swap fraud, we must build a three-layer protection system of legal supervision, technological innovation, and public education, so that high technology truly serves the development and progress of human society rather than becoming a tool for criminals to profit. In this digital era, each of us is a link in the online ecosystem, and only by working together can we protect our shared information space and let the real and virtual worlds coexist in harmony.
