In November 2023, the board of directors of OpenAI, the company behind the popular ChatGPT and DALL-E tools, fired CEO Sam Altman. The decision sparked a revolt from investors and employees and set off a period of turmoil. After five days of chaos, Altman triumphantly returned to OpenAI, to the excitement of its employees, while the three board members who had tried to remove him resigned. OpenAI's peculiar governance structure, in which a nonprofit oversees a for-profit subsidiary, appears to have contributed to this dramatic episode.
Hybrid governance.
As a management scholar who studies organizational accountability, governance and performance, I'd like to explain how this hybrid approach is supposed to work. In 2015, Altman co-founded OpenAI as a tax-exempt nonprofit with a mission to develop safe and beneficial artificial general intelligence (AGI) for all of humanity. To raise capital beyond what charitable donations could provide, OpenAI later formed a holding company, a structure that allows it to attract investment into its for-profit subsidiary.
OpenAI's leaders adopted this "hybrid governance" setup to preserve the organization's social mission while harnessing market dynamics for growth and revenue. By combining profit and purpose, they have attracted many investors seeking financial returns. As OpenAI states on its website, the goal is to balance commercial viability with safety and sustainability, rather than to maximize profit alone. Major investors hold a sizable stake in OpenAI's success: Microsoft, most notably, owns 49% of the for-profit subsidiary after investing $13 billion. Yet unlike investors in a traditional company, they are not entitled to seats on the board.
Other hybrid governance models.
OpenAI caps investors' returns at roughly 100 times their initial investment; once that threshold is reached, any further value reverts to the nonprofit (a minimal sketch of this arithmetic appears below). The idea behind this design is to keep OpenAI true to its mission of building AI that is safe and beneficial to humanity, and to prevent the pursuit of outsized profits from jeopardizing that goal. Interestingly, hybrid governance models are more common than one might expect. Take, for example, the Philadelphia Inquirer, a for-profit newspaper owned by the nonprofit Lenfest Institute. This structure allows the newspaper to attract investment while preserving its core purpose: providing news coverage that serves the needs of local communities.
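To make the cap arithmetic concrete, here is a minimal sketch in Python. Only the roughly 100x multiple comes from public descriptions of OpenAI's structure; the function name, the single-investor framing, and the clean split of excess returns are illustrative assumptions, not OpenAI's actual mechanics.

```python
def capped_return(investment: float, gross_return: float,
                  cap_multiple: float = 100.0) -> tuple[float, float]:
    """Split a hypothetical investment outcome under a capped-profit model.

    Illustrative only: the ~100x cap multiple is the one figure taken from
    public descriptions of OpenAI's structure; everything else is assumed.
    """
    cap = investment * cap_multiple                 # most the investor may keep
    investor_share = min(gross_return, cap)         # investor keeps up to the cap
    nonprofit_share = max(gross_return - cap, 0.0)  # any excess flows to the nonprofit
    return investor_share, nonprofit_share

# Example: a $10 million stake that eventually produces $1.5 billion in returns.
investor, nonprofit = capped_return(10e6, 1.5e9)
print(f"Investor keeps ${investor:,.0f}; nonprofit receives ${nonprofit:,.0f}")
# -> Investor keeps $1,000,000,000; nonprofit receives $500,000,000
```

The point of the sketch is simply that the investor's upside is bounded while the nonprofit's is not, which is the lever the structure relies on to keep mission ahead of profit.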
Patagonia, the well-known outdoor clothing and gear brand, offers another noteworthy example. Its founder, Yvon Chouinard, and his successors have permanently transferred ownership to a nonprofit trust. As a result, Patagonia now directs all of its profits toward environmental causes. Anthropic, a competitor of OpenAI, has also adopted a hybrid governance structure, though one configured differently from OpenAI's. Anthropic has two governing bodies: a corporate board of directors and a "long-term benefit trust." And because Anthropic is a public benefit corporation, its board may weigh the interests of stakeholders other than its owners, including the public at large.
The cause of the conflict between the board and Altman.
Another example is BRAC, a major international development organization founded in Bangladesh in 1972 and one of the largest nongovernmental organizations in the world. It oversees a variety of for-profit social enterprises that benefit the poor. Like OpenAI, BRAC uses a structure in which a nonprofit entity owns and controls for-profit businesses. In a hybrid governance model, the fundamental responsibility of the nonprofit board is to ensure that the organization stays true to its mission. The challenge for such boards is to shield the mission from market pressures, since markets tend to reward generating profits for investors and shareholders; this hazard is commonly known as mission drift.
The board of a nonprofit organization has three primary duties: the duty of obedience, which requires it to act in keeping with the organization's mission; the duty of care, which requires diligence and prudence in decision-making; and the duty of loyalty, which commits it to preventing or resolving conflicts of interest. In firing Altman, OpenAI's board appeared to be exercising its duty of obedience: it said he had not been "consistently candid in his communications" with the board. Other allegations, attributed to anonymous sources describing themselves as concerned former OpenAI employees, have not been verified.
Mission vs. money.
Helen Toner, one of the OpenAI board members who resigned amid the turmoil, offers a window into this tension. A month before Altman's ouster, Toner had co-authored a study that praised Anthropic's safety precautions and criticized OpenAI for cutting corners in its rush to release its widely used ChatGPT chatbot. Nor was this the first attempt to remove Altman over fears of mission drift. In 2021, Dario Amodei, then OpenAI's head of AI safety, unsuccessfully tried to convince the board to fire Altman over safety concerns, shortly after Microsoft's initial $1 billion investment in the company. Amodei and roughly a dozen other researchers subsequently left OpenAI to found Anthropic.
Ilya Sutskever, OpenAI's chief scientist and a co-founder, vividly illustrates the conflict between mission and financial considerations. He was one of the three board members who resigned or were pushed out. Initially, Sutskever backed Altman's removal, reportedly arguing that it was necessary to protect the mission of ensuring AI benefits humanity. But he later reversed course, tweeting: "I deeply regret my participation in the board's actions." In the end, Sutskever signed the employee letter calling for Altman's reinstatement as CEO.
The risks of artificial intelligence.
Whether OpenAI's board fulfilled its duty of care is an equally important question. Board members had reason to ask whether ChatGPT, released in November 2022, shipped with adequate safeguards. Since then, large language models have caused disruption across many fields. As a professor, I have seen this firsthand: in many cases it is nearly impossible to tell whether a student has cheated with the help of AI. That risk, of course, pales next to AI's potential to do far greater harm, such as helping design pathogens capable of causing a pandemic, or generating disinformation and deepfakes that erode social trust and threaten democracies.
At the same time, AI promises enormous benefits to humanity, such as accelerating the development of life-saving vaccines. But the potential risks are severe, and once such a powerful technology is unleashed, there is no known "off switch."
Conflict of interest.
The third duty, loyalty, turns on whether board members have conflicts of interest. The most obvious question is whether they stand to benefit financially from OpenAI's products in ways that could harm the organization's mission. Typically, nonprofit board members are unpaid, and those not directly employed by the organization have no financial stake in its success. The CEO answers to the board, which has the power to hire and fire the CEO.
Before OpenAI's recent board shake-up, three of its six members held executive positions: CEO, chief scientist, and president of the for-profit arm. It is no surprise that the three independent directors all voted to remove Altman, while the salaried executives ultimately backed him. In the nonprofit sector, receiving compensation from an entity the board oversees is considered a conflict of interest. And even if OpenAI's reconstituted board succeeds in putting society's needs ahead of profit maximization, that may not be enough.
The tech industry is dominated by giants such as Microsoft, Meta and Alphabet, which are for-profit corporations, not mission-driven nonprofits. Given the high stakes involved, I believe strict regulation is necessary. Leaving governance entirely to the creators of AI will not solve the problem.