Generative AI may seem magical, but users need to be aware of its pitfalls. Four business leaders explain how they deal with the risks.
Generative artificial intelligence (AI) can look like magic to the untrained eye.
From summarizing text to creating content and writing code, tools like OpenAI's ChatGPT and Microsoft's Copilot deliver seemingly brilliant solutions to challenging problems in seconds. But generative AI's magical abilities can also produce some less-than-useful tricks.
Whether the issue is ethics, security, or hallucination, users must be aware of the problems that can undermine the benefits of this emerging technology. Here, four business leaders explain how to overcome some of the biggest problems with generative AI.
Birgitte Aga, head of innovation and research at the Munch Museum in Oslo, Norway, said that many of the concerns about AI stem from a lack of understanding of its potential impact, and that is understandable.
Even high-profile generative AI tools like ChatGPT have only been available to the public for just over 12 months. While many people are familiar with this technology, few companies have used it in production.
Aga said companies should give employees the opportunity to learn about the capabilities of emerging technologies in a safe and secure way. "I think lowering the bar and getting everyone on board is key," she said. "But that doesn't mean doing it uncritically."
When employees discuss how to use AI, they should also consider some significant ethical issues, such as bias, stereotypes and technical limitations, Aga said.
In a conversation with ZDNET, she explained how the museum is working with its technology partner TCS to find ways to use artificial intelligence to help more audiences understand art.
She said"At TCS, we really agree on ethics at every meeting. She said"Find collaborators who are really aligned with you on that level and then build on that, rather than just finding people who do cool things"。
Avivah Litan, Distinguished Vice President Analyst at Gartner, says one of the key issues to watch out for is the pressure for change coming from people outside of IT.
She said"Enterprises want to sprint with all their might"She was referring to the adoption of generative AI tools by professionals across the organization, with or without the consent of those in charge. "Security and risk people struggle to adapt to this deployment, track what people are doing, and manage risk"。
As a result, there is a very tense relationship between two groups of people: those who want to use AI and those who need to manage the use of AI.
"No one wants to stifle innovation, but security and risk personnel have never encountered anything like this before," she said in a conversation with ZDNET. "Even though AI has been around for years, they never really had to worry about these technologies until generative AI took off."
The best way to address concerns, Litan said, is to create an AI task force of experts from across the business to consider privacy, security, and risk issues.
She said"That way, everybody is on the same page, so they know what the risk is, they know what the model should do, and in the end they get better performance.
According to Litan, Gartner's research shows that two-thirds of enterprises have yet to establish an AI task force. She encourages all companies to build such cross-business teams.
She said"These working groups help to reach consensus. "People know what to expect, and businesses can create more value.
Thierry Martin, senior manager of data and analytics strategy at Toyota Motor Europe, said his biggest concern about generative AI is hallucination.
He has seen this kind of problem firsthand while testing generative AI for coding purposes.
Beyond personal exploration, businesses must also scrutinize the large language models (LLMs) they use, the inputs they feed them, and the outputs they roll out, Martin said.
"We need large language models that are very stable," he said. "Many of today's most popular models are trained on all kinds of things, such as poetry, philosophy, and technical content. When you ask a question, they can hallucinate."
In a one-on-one interview with ZDNET, Martin emphasized that companies must find ways to create more constrained language models.
He said. "I want to stay within the scope of the knowledge base I provide. "That way, if I ask my model some specific questions, it will give me the right answer. So I'd like to see more models associated with the data I've provided"。
Martin is interested in hearing more about trailblazing developments, such as Snowflake's partnership with NVIDIA, where the two companies are creating an AI factory to help businesses turn their data into custom generative AI models.
He said. "For example, an LLM that is able to perform SQL queries on Python ** perfectly is interesting. "For the average user, ChatGPT and all these other public tools are great options. But when it comes to connecting such tools with enterprise data, you have to tread with caution. "
Bev White, chief executive of recruitment expert Nash Squared, said her biggest concern was that the reality of using generative AI could be very different from what was envisioned.
"There's a lot of hype," she said in a conversation with ZDNET. "There are also many panickers who say that jobs will be lost and that artificial intelligence will cause mass unemployment. And there are concerns about data security and privacy."
It's important to recognize that in the first 12 months of generative AI, Big Tech companies have been racing to refine and update their models, White said.
She said"It's no coincidence that these tools have gone through many iterations. "People who use these techniques have discovered them.
White advises CIOs and other senior executives to exercise caution. Even if it feels like everyone else is rushing forward, don't be afraid to take a step back.
"I think we need something tangible that can serve as a guardrail," she said. "Enterprise CISOs must start thinking about generative AI, and our evidence suggests they are doing just that. In addition, regulation needs to keep up with the pace of change."
"Maybe we need to slow down and figure out how to take advantage of this technology," she added. "Otherwise it's like inventing a magical rocket and launching it without a stabilizer or a safety system."