More than a quarter of companies prohibit employees from using generative AI

Mondo International Updated on 2024-02-04

According to Cisco's 2024 Data Privacy Benchmark Study, more than a quarter (27%) of organizations have temporarily banned the use of generative AI among their employees due to privacy and data security risks.

Most organizations also have controls in place for such tools. Nearly two-thirds (63%) of respondents restrict what data employees can enter into these tools, and 61% limit which generative AI tools employees can use.

Despite these limitations, many organizations admit to having input sensitive data into their AI applications. This includes information about internal processes (62%), employee names or information (45%), non-public information about the company (42%), and customer names or information (38%).

The majority of respondents (92%) believe that generative AI is a fundamentally new technology that poses new challenges and requires new techniques to manage data and risk.

Respondents' biggest concerns about AI technology are:

It may harm the organization's legal and intellectual property rights (69%)

The information entered may be leaked publicly or shared with competitors (68%)

The information returned to the user may be wrong (68%)

It may be harmful to humanity (63%)

It may replace other employees (61%)

It may replace the respondents themselves (58%)

The leakage and misuse of employees' AI accounts is also a significant risk for businesses. According to research by Kaspersky, large numbers of stolen ChatGPT accounts are being sold on the dark web, posing a significant threat to users and companies.

In addition, the training data and generated output of popular large language models such as ChatGPT may risk violating data protection regulations such as the GDPR.

91% of security and privacy professionals admit that they need to do more to reassure customers about how their data is used in AI.

However, according to Cisco's survey, none of the following measures to build trust in AI has been adopted by more than 50% of respondents:

Explaining how their AI applications work (50%)

Ensuring human involvement in the process (50%)

Establishing an AI ethics management program (49%)

Auditing AI applications for bias (33%)

Dev Stahlkopf, Chief Legal Officer at Cisco, commented: "More than 90% of respondents believe that AI requires new technologies to manage data and risk. AI security governance is critical to building customer trust."
