Colin Read • December 9, 2023

Artificial Intelligence Forever? - December 10, 2023

(picture courtesy of Queenfadesa, CC BY-SA 4.0 <https://creativecommons.org/licenses/by-sa/4.0>, via Wikimedia Commons)


We have watched the drama unfold at OpenAI over the last couple of weeks. Why is there so much drama over the most successful artificial intelligence program to date? 


The root of the problem is corporate responsibility. 


A group of visionaries assembled to create OpenAI in 2015. Sam Altman, Elon Musk, and Musk's PayPal collaborator Peter Thiel shared a vision: to channel artificial intelligence (AI), and especially the variant called artificial general intelligence (AGI) that can perform intellectual tasks much as humans do, in altruistic ways that advance all humankind. 


This lofty goal is not unlike the motivation for Elon Musk to create new human civilizations on Mars. The goal is to empower humankind, free us from our most mundane tasks, advance technology, bring us closer to sustainability, and make the world a better place. Nowhere in this vision was a mention of profit. In fact, OpenAI, while generously seeded by Musk and Thiel, was started as a non-profit. 


It is not unusual for nonprofits to have profit-making subsidiaries. Non-profit hospitals support physician groups, each of which expects to make a profit. Churches run profit centers so they can use the proceeds to support their broader efforts. Even not-for-profit colleges compete to patent the research and development our taxpayer dollars support, and can earn millions, even billions, from the resulting profits. 


There is a long-standing tension over the goals of a corporation. You and I understand our role in society. We each feel some degree of responsibility to give back to a society that enabled a livelihood far more rewarding than we would otherwise have enjoyed. We may give back through charitable giving, mentoring, community or public service, or philanthropy and estate giving. If you operate a local business, you realize that the health of your business is intimately tied to the health of the local community. We are inclined to give back because we feel an emotional responsibility, or because we recognize that giving is both paying it back and paying it forward. 


Andrew Carnegie understood that once an enterprise grows so large that it plays on a national or even international stage, giving becomes even more important: the profits are much larger, and the close ties to neighbors and family are much reduced. He advocated "The Gospel of Wealth" to reinforce among his Gilded Age compatriots the duty to give back, a duty felt less keenly as the scale of an enterprise grows. 


Hence, someone like Musk gives back through avenues such as OpenAI, or by stimulating sustainable transportation and openly sharing many of his patents and concepts in the process. As readers know, I admire that in Elon, even if I believe some of his other forays into the public arena are less than productive. 


While Carnegie offered one response to Gilded Age excesses, the Corporate Social Responsibility movement offered another. It argued that no living person enjoys the degree of protection afforded the corporate person, and yet the corporate entity often does not feel the same responsibility to give back. In 1970, Morrell Heald published "The Social Responsibilities of Business: Company and Community, 1900-1960," which argued that companies have no less a moral obligation to give back than their workers or stakeholders may feel in their personal lives. 


This influential thesis provoked an equally profound backlash. That same year, Milton Friedman penned his infamous New York Times essay, "The Social Responsibility of Business Is to Increase Its Profits." He argued that a company, by exercising any legal means available to enhance profits, frees its shareholders to do good with the dividends they receive, if that is how they wish to spend their money. 


What Friedman missed is the diluted sense of responsibility that comes as organizations grow and individual wealth swells far beyond the scale of the local community. In forming OpenAI, Musk, Altman, and Thiel recognized that they must give back in a far more significant way than by supporting their local United Way. They wished to ensure that the incredibly valuable tool of AGI would be used to enhance technological and economic efficiency, but also that human equity and equality would be enhanced at the same time. 


As past blogs have documented, an increase in efficiency could mean increased profits for Microsoft, Google, Apple, Nvidia, and others, but it may also mean declines in the demand for labor that rob some households of the ability to support their families. It turns out that how we enhance efficiency matters. The world may actually become a poorer place if fewer families can support themselves, and if politicians prove ineffective at redistributing the bounty of AGI to protect the weakest among us. There is also, of course, the looming secondary concern of keeping the world safe from AGI itself. If AGI became so smart that it could begin running things without human intervention, the aspirations of machines may run counter to human aspirations. 


The creators of OpenAI understood these risks, and hence formed their corporation as a non-profit, with societal safety its paramount concern, even as it fostered innovation in artificial intelligence. 


It did not take long for a chasm of principles to emerge. The OpenAI board permitted a for-profit enterprise to form under its umbrella. Workers attracted to the company had their compensation tied to the success of the profit-making corporate component. The CEO, Sam Altman, was allowed to simultaneously pursue his own profit-making ventures and to direct OpenAI revenues in ways that could support his other interests. Many, including Musk, wondered whether Altman remained concerned more about humanity or about a single human.


Very quickly the company divided into two camps: the vast majority of direct corporate stakeholders, who could become wealthy if OpenAI's products eschewed safety for profitability, and a far smaller subset still determined to follow the wishes of the founders and further AGI in ways that are good for society. 


Unfortunately, profits won out over the non-profit mission. Altman was fired, workers loyal to the profits Altman created for them revolted, Altman was reinstated, and the non-profit board was fired. Some argue that this is the way of the world: if the market valued the profit entity more than the lofty goals of societal safety and equity, then economic and social Darwinism worked as intended. Others, Musk included, became increasingly scared that AI had just lost its most important guardrail in its brief but already spectacular existence. 


The result has been a clamoring for regulation to create the guardrails necessary to ensure that perhaps the most significant technology of our lifetime indeed makes our lives better rather than worse or more dangerous. We often look to government to protect society from the harms of innovation. Cheaper ways to make electricity from coal may be good for the coal companies, but they are not good for the people and ecosystems that rely on the atmosphere. 


Unfortunately, government never leads innovation, and often lags a decade or two behind. Recall that OpenAI and large language model AI are not even a decade old and have already changed the search for new drugs, the drilling of oil, the conduct of warfare, and the ability of college students to cheat on term papers and open-book exams. (Indeed, the use of ChatGPT drops dramatically in the periods between semesters.) Government will be even more vexed in getting ahead of AI than it is in regulating cryptocurrency and greenhouse gas emissions. A politician can, with some effort, understand global warming. Few fully understand crypto. It is probably not hyperbole to claim that no politician fully understands the implications of AI, and few probably understand much of it at all. 


That is why the changing of Sam Altman's stripes is so distressing to some, and especially to the board of directors who had been appointed to contain the potential beast of AI. Once in a while, the goals of a non-profit may indeed become obsolete, and a profit-oriented subsidiary may rightly assume the mantle and mission of the corporation. This is not one of those times, even if a number of OpenAI stakeholders can now expect to become very wealthy.

