Despite record-fast adoption and continued hype, generative AI remains more of an intellectual promise or corporate focus area than an actionable reality. It is estimated that by 2030 the AI market will reach nearly $670 billion and add up to $4.4 trillion in productivity gains, but business leaders still want to know exactly what AI can do, how to leverage it, and how it will deliver the advertised economic benefits. There is no shortage of hope or belief in the potential of AI, especially in times of economic turmoil.
As with any aspect of digital transformation, the effective deployment of generative AI will rely more on human adaptability than on technical capabilities. In fact, the human factor – people and culture – will drive the adoption of AI, or the lack thereof. This means that companies need to spend as much time as possible thinking about how to leverage their cultural strengths, and how to implement the processes needed to complement or compensate for their cultural weaknesses, in order to drive AI adoption. For example, if a culture is passive-aggressive or risk-averse, establishing formal incentives that reward risk-taking will do wonders. Conversely, if a culture is so entrepreneurial that it grabs any new market opportunity to the point of being distracted by shiny new things, formal incentives and processes must reward focus, discipline, and the ability to ignore novelty trends.
While generative AI is too new for anyone to know how far it will go, and AI in general has only recently become mainstream, there are still valuable lessons to be learned from recent corporate history about how organizations adopt and realize the value of emerging technologies. These lessons explain why some cultures are better able not only to embrace new technologies, but also to innovate. More specifically, evidence from scientific research and real-world case studies identifies seven generalizable lessons that can help you improve your ability to adopt GenAI – and any new technology – at the organizational level:
Change, or change will be forced upon you. This is the strongest argument for innovation, even though it often runs up against organizations that are reluctant to change. This resistance is woven into every fiber of the organization. But to remain competitive, companies need to be able to adapt.
Generative AI is no exception: while some organizations have embraced it, too many have resisted it because they are attached to the tried-and-tested way of doing things, and the fear of the unknown obscures any desire for change. Like any other innovation, GenAI will genuinely empower an organization only if it can be deployed throughout the system, making the organization future-proof and improving its overall adaptability. However, this requires the right advocates (change agents, intrapreneurs, etc.) to counter the inherent, instinctive resistance in the system, which can be provoked by any change perceived as a threat to the status quo.
Successfully introducing new technology into an organization – especially when it's popular or controversial – also requires an understanding of where the resistance is coming from and the logic behind it. Sometimes the boycott is formal and explicit, as in a report that 75% of organizations are considering banning employees from using generative AI. Other times, what needs to be addressed is informal resistance – a phenomenon so common that there is a specific academic term for it, "passive innovation resistance," which describes the unconscious resistance that arises from employees' aversion to change and contentment with the status quo. The best way to address this implicit fear is to communicate how the technology will strengthen the organization – and increase the resilience of each part of it – with the goal of shifting attitudes from negative to positive, or at least neutral.
Generative AI is a very versatile technology. However, this can be a disadvantage, because it has no obvious connection to a specific problem, which risks leaving it as an ingenious solution in search of a problem to solve.
To address this shortcoming, organizations must start with the problem – that is, identify the most pressing and painful challenges the business must address. Once they have a clear goal, they should test AI alongside other potential solutions. When it comes to AI, an important mindset shift is to focus less on automation, which often implies disruption or elimination, and more on amplification.
For example, H&M went from being an AI laggard to a pioneer by reframing AI not as "artificial" but as "amplified" intelligence, focusing on how the technology augments or enhances existing organizational capabilities rather than merely eliminating inefficiencies, including human ones. Similarly, among the tech giants, Amazon had a relatively late start in AI, but once it positioned AI as an enabler of other innovations within its existing business lines, it managed to outperform its competitors. Walmart decided to invest in generative AI to improve customer service by empowering its employees, helping them find what they need and meet customers' needs.
In general, small, incremental improvements to the status quo are a better way to test and deploy technological innovations than large-scale grand master plans. As Harvard Business School professor Amy Edmondson points out in her latest book, Right Kind of Wrong, this is also the best way to design experiments that fail intelligently, because it allows us to spot small mistakes before they become serious endemic problems.
Therefore, approaching your AI pilots sooner rather than later, with an open mind and an experimental mindset, is the best way to learn – including learning from failed experiments. Failing fast is a good formula for creating the conditions for long-term success, as long as you can learn from those failures.
There is no greater barrier to deploying generative AI, or any form of data-driven automation, than human intuition. In fact, whatever role people play, the automation of human-like activities through autonomous technologies is often seen as a threat to their control, power, and autonomy. To be fair, it does often reduce human freedom and improvisation. Workers fear being replaced by the very technology they are busy training. Executives see AI-based standardization as an attack on their power, as decisions and actions become codified in systems and detached from individual agency.
Therefore, it is important to convey that there is a trade-off here: by relinquishing some control over minor decisions, one can focus more energy on higher-order tasks.
For example, recruiters and hiring managers tend to overestimate their own ability to assess the talents of others, but research shows that AI is at least as accurate, if not more so, at identifying people's potential. Ultimately, wherever technology can match human capabilities, humans gain the opportunity to develop and deploy other skills – especially those that the technology, including GenAI, cannot replicate. For example, at ManpowerGroup, our recruiters are using GenAI to offload repetitive and non-creative tasks (e.g., summarizing and parsing resumes, proofreading and correcting cover letters, and drafting job advertisements) so that they can spend more time on value-added activities: helping candidates understand whether a job is right for them, and helping clients close the gap between the candidates they want and the candidates they actually need.
Change is a lovely idea, but on both an individual and collective level, the idea starts to lose traction once we realize the effort, persistence, and struggle required to execute it. In fact, what we like is not change itself, but having changed.
The same is true for generative AI: the idea of an organization that has already gone through the phase of experimentation, harnessed its power, and scaled or industrialized it is tempting. What really needs to be done, however, is to go through those stages and live through those experiences. Therefore, organizations should approach AI adoption the way an individual learns a new language or completes a new college degree: with patience, time, and dedication, recognizing that it's not the destination that matters but the journey.
Cultural resistance is often cited as a major barrier to AI adoption. While organizations continue to increase investment in "culture change" interventions, attempts to deliberately shape or reshape a company's culture take a lot of time and have a low success rate.
A better approach is to treat culture as a constraint or a given parameter – much like your relationship with the weather: not something you can change, but something that influences your choice of clothing. The key is to establish new systems and processes that counteract the influence of culture, such as extrinsic formal incentives that inhibit the pull of informal dynamics and forces. As academic reviews have shown, such processes are best deployed and implemented by middle managers, as their behaviors and decisions can drive change and instill new habits in the broader workforce.
Because of the sensationalism surrounding generative AI, the topic often raises moral dilemmas, legal fears, and ethical concerns. Organizations must address these issues from the outset, positioning AI as something that is both ethically designed and an improvement on the status quo. For example, being transparent with users, giving people the ability to "opt in," and ensuring that the application of AI represents an improvement over existing processes and methods will not only keep companies out of trouble, but also convince skeptics that generative AI can create valuable improvements in their jobs and lives. As Gartner's report on the adoption of ethical AI suggests, transparency is critical: whether dealing with employees, customers, or citizens, be honest about when they are interacting with a machine, and clearly label any such conversations throughout the process.
Ultimately, culture is always evolving. Progress comes not from adopting every innovation or new technology, but from leveraging the right tools to advance strategy and improve the long-term effectiveness of the organization. If companies can figure out how to seamlessly integrate AI into their strategy and culture, they may gain a real competitive advantage. Most organizations are still trying to figure this out. Those that succeed in cracking the code of cultural adoption will reap the rewards of this new technology.