Why UCSF's Robert Wachter is optimistic that new technologies will deliver on their promises.
Historically, healthcare has been slow to adopt new technologies that require large-scale changes to the nature of work. Witness the slow and tortuous rollout of electronic health records, and the outright failure of earlier efforts to implement AI tools, such as IBM's much-touted Watson Health, which was ultimately doomed to fail.
But in a review published in the Journal of the American Medical Association on the one-year anniversary of ChatGPT's public launch, Robert Wachter, MD, chair of the UCSF Department of Medicine, is bullish on the potential of new generative AI tools to transform the healthcare landscape in ways previously unattainable with technology.
In the article, published on November 30, 2023, Wachter and co-author Erik Brynjolfsson, PhD, director of Stanford University's Digital Economy Lab and senior fellow at the Institute for Human-Centered AI, argue that generative AI – AI that can produce high-quality text, images, and other content distinct from the data it was trained on – has unique characteristics that may shorten the usual lag between adoption and payoff, leading to increased productivity rather than impasse.
Wachter has long documented the challenges of health information technology and is the author of The Digital Doctor: Hope, Hype, and Harm at the Dawn of Medicine's Computer Age.
In 1993, my co-author, Erik Brynjolfsson, coined the term "the productivity paradox of information technology," referring to the nearly universal, painful experience of organizations in every field trying to adopt so-called general-purpose technologies – technologies that broadly change the nature of work across organizations. Paradoxically, despite the hype and the best intentions, many years, sometimes decades, passed without a significant increase in productivity. That's the bad news. The good news is that if a technology is any good, the paradox is eventually overcome, with productivity – as well as quality and user experience – greatly improved. Examples include electricity, electric motors, automobiles, computers, and the internet.
Until recently, healthcare lagged in adopting general-purpose technologies. In 2008, fewer than 1 in 10 U.S. hospitals had an electronic health record (EHR).
Why did healthcare start digitizing so late? There are many reasons: misaligned incentives – hospitals and doctors had to pay for the computers, while some of the financial benefits flowed to insurers – as well as complexity, privacy regulations, and a general resistance to change. Finally, starting around 2010, healthcare did begin digitizing its records. Now, fewer than 1 in 10 hospitals lack an EHR, which lays the foundation for today's AI.
The good news about the IT productivity paradox is that if the technology is any good, the paradox is eventually resolved. For that to happen, first, the technology needs to get better. Second, the system must change the way it works to take advantage of the new tools.
While healthcare is notoriously difficult terrain for digital transformation, generative AI has some unique attributes that should make it easier to deliver on its promises. First, it's relatively easy to use. Unlike the adoption of EHRs, it doesn't require a lot of new hardware or massive changes to workflows, since doctors, nurses, and, to some degree, patients already do most of their healthcare-related work on a computer.
Probably most important, the healthcare ecosystem is better prepared for generative AI than it was 5 or 10 years ago. We are all accustomed to working with digital data and systems. It's easier than before to plug in third-party software tools. The pressure on the healthcare system to provide high-quality, safe, equitable care at lower cost keeps increasing, and there is a shortage of nearly every type of clinical and non-clinical staff. It's easy to see how generative AI can help healthcare organizations meet their clinical and business needs.
Finally, those of us in healthcare leadership are less naïve than we once were about what it takes to integrate digital tools into the workplace, and strong organizations like UCSF Health have trained leaders and created governance structures to help implementations go smoothly.
From the 1960s through the 1980s, early efforts at healthcare AI failed miserably – partly because the systems weren't very good, but mostly because developers chose to tackle the hardest problem: replacing the doctor's brain as the engine of diagnosis.
Today, most players in the generative AI space have learned this lesson. The early gains will come in reducing administrative friction – helping patients schedule appointments, refill medications, find doctors, and get answers to certain questions.
For physicians and health systems, generative AI will help create clinical notes, draft prior authorization requests to insurance companies, and write letters to patients and other physicians. It will also summarize complex patient records. There will be some early work on diagnosis, but mostly the tools will suggest possible diagnoses rather than replace the doctor. The stakes are simply too high, and the consequences of mistakes too great.
Generative AI must continue to improve, especially as it takes on higher-risk tasks. The good news is that even in the past year, it has improved significantly. While it's easier than ever to integrate AI into an EHR system, it's still not as easy as it needs to be. AI is going to be expensive, and health systems will need to find the money to invest – something they will do if they can see a return on that investment.
As recent strikes in the entertainment and automotive industries have highlighted, potential labor tensions around AI also need to be addressed. However, labor shortages and high levels of burnout in healthcare will dampen some of the resistance.
Finally, as AI moves into more clinical areas, we need to figure out how to build systems that allow doctors and nurses to work in tandem with the technology – trusting it when it's trustworthy, but not, metaphorically, falling asleep at the wheel.
Clearly, some regulation is needed to establish guardrails for generative AI, especially in high-risk areas such as clinical medicine. How to do this effectively and efficiently is a daunting question, particularly for a general-purpose technology.
It's one thing to regulate a new drug, or even a specific AI algorithm for reading pathology slides. It's another to regulate generative AI, whose recommendations the entire system of care may rely on – especially since the AI you approved yesterday could evolve tomorrow to give different answers.