Sam Altman, ChatGPT, & the Triumph of Tech Bro Culture

Jake Trussell
Published in Design Innovation
7 min read · Nov 24, 2023

I asked ChatGPT, Bard, Bing Chat, and Claude to comment on OpenAI’s apparent move away from their nonprofit mission to ensure that their work “benefits all of humanity.”

Artwork by HopesAndDrums

When OpenAI’s board members who had been focused on human-centered AI safety and ethics were replaced by more profit-minded directors this past week, I was curious to learn what the major AI chatbots would have to say about it, and how they might interpret the potential implications.

Even though ChatGPT has become the fastest-growing tech product in history, its maker, OpenAI, is governed by a nonprofit with a stated mission to “ensure that artificial general intelligence — AI systems that are generally smarter than humans — benefits all of humanity.”

When the nonprofit board of directors fired CEO Sam Altman last week, nearly all of the company’s employees signed a letter in solidarity with him, threatening an exodus to Microsoft (OpenAI’s largest investor, which holds a reported 49% stake in its for-profit arm, and which offered Altman and the rest of the team jobs building a new AI lab there). After a tumultuous few days, most of the OpenAI board members were replaced and Altman was reinstated as CEO. The reorganization removed all of the researchers and academics who were focused on AI safety and ethics, and all of the women, replacing them with titans of industry and economics.

Here’s a summary from the New York Times:

“OpenAI’s new board will consist of three people, at least initially: Adam D’Angelo, the chief executive of Quora (and the only holdover from the old board); Bret Taylor, a former executive at Facebook and Salesforce; and Lawrence H. Summers, the former Treasury secretary. The board is expected to grow from there. OpenAI’s largest investor, Microsoft, is also expected to have a larger voice in OpenAI’s governance going forward. That may include a board seat.

Gone from the board are three of the members who pushed for Mr. Altman’s ouster: Ilya Sutskever, OpenAI’s chief scientist (who has since recanted his decision); Helen Toner, a director of strategy at Georgetown University’s Center for Security and Emerging Technology; and Tasha McCauley, an entrepreneur and researcher at the RAND Corporation.”

This led me to wonder how the AI chatbots see it. So I asked them, and here’s what they each wrote back:

ChatGPT

“The shift in board members’ focus from safety and humanity-centered perspectives to a more profit-driven approach can indeed suggest a change in priorities for OpenAI. It might signal a pivot towards commercial interests rather than a primary emphasis on the broader mission of benefiting humanity.

Potential implications could be:

Mission Drift: The organization might prioritize profit-making over the ethical and safe development of AI. This could lead to less focus on ensuring that AI serves the greater good and minimizes potential risks.

Ethical Concerns: Decision-making might prioritize short-term gains over long-term implications for society, potentially resulting in AI applications that are ethically questionable or harmful.

Transparency and Access: There might be reduced transparency in AI development, limiting access to crucial information and hindering collaborative efforts aimed at ensuring AI’s safe and beneficial deployment.

Inequality and Accessibility: A profit-driven approach could result in AI technologies that primarily benefit those who can afford them, exacerbating social inequalities and limiting access for marginalized communities.

Public Trust: Shifting away from a human-centered approach might erode public trust in OpenAI, as the focus on shareholder interests may conflict with the organization’s initial promises and ideals.

However, it’s essential to note that while board compositions can influence an organization’s direction, they are not the sole determinants. Internal cultures, ongoing collaborations, and the commitment of the team toward the mission also play pivotal roles.

OpenAI’s commitment to its original mission and the actions it takes moving forward will ultimately determine the impact of these changes. It’s important for stakeholders and the wider community to engage and advocate for AI that serves humanity’s best interests.”


Bard

“Yes, the removal of OpenAI’s board members whose role was to focus on safety and humanity, and replacing them with members whose focus is on profit, suggests a potential shift away from OpenAI’s mission of benefiting all of humanity. This shift could have several implications, including:

Increased risk of AI misuse: With a greater emphasis on profit, there may be less incentive to prioritize safety and ethical considerations. This could lead to the development of AI systems that are harmful or pose risks to humanity.

Reduced transparency and accountability: A focus on profit could lead to a decrease in transparency and accountability, making it more difficult for the public to understand and hold OpenAI responsible for its actions.

Diminished focus on long-term benefits: A focus on short-term profits could lead to a diminished focus on long-term benefits, such as ensuring that AI is used for the betterment of society.

Eroding public trust: The removal of safety-focused board members could erode public trust in OpenAI, making it more difficult for the organization to achieve its goals.

Overall, the change in OpenAI’s board composition raises concerns about the organization’s commitment to its mission of benefiting all of humanity. It is important to remain vigilant in monitoring OpenAI’s activities and ensuring that the development of AI remains aligned with human values and interests.”


Bing Chat

“The recent removal of OpenAI’s board members who were responsible for managing AI from a safety perspective, with humanity in mind, and replacing them with a board whose background lies in thinking about the bottom line for shareholders has raised concerns about the company’s mission of benefitting all of humanity. The new board’s focus on profit may lead to a shift in OpenAI’s priorities, which could have significant implications for the future of AI development and its impact on society.

It is important to note that OpenAI’s mission is to develop AI in a way that is safe and beneficial for all of humanity. The company has been at the forefront of AI research and has made significant contributions to the field. However, the recent changes to the board may indicate a shift in priorities that could have far-reaching consequences.

One potential implication of the new board’s focus on profit is that OpenAI may be less willing to share its research and findings with the wider scientific community. This could slow down the development of AI and limit its potential benefits for society. Additionally, the new board’s focus on profit may lead to a greater emphasis on developing AI for commercial purposes, rather than for the benefit of humanity as a whole.

It is important to monitor the situation closely and to hold OpenAI accountable for its mission of developing AI in a way that is safe and beneficial for all of humanity. The company has a responsibility to ensure that its research and development efforts are aligned with this mission, and that it continues to make significant contributions to the field of AI research.”


Claude

“I do not have enough information to make a definitive judgment on OpenAI’s governance changes and their implications. However, some general reflections:

Safety and ethics should be top priorities when building advanced AI systems that could have major impacts on society. Removing key voices focused on those issues from leadership roles is concerning.

On the other hand, having business, economic, and industry expertise on the board could help ensure OpenAI has funding and resources needed to develop systems responsibly and deploy them in a way that balances different priorities.

Good governance of AI development involves bringing together diverse perspectives — technical experts, ethicists, business leaders, policymakers, civil society representatives, etc. No one viewpoint should dominate.

As an outside observer, I cannot evaluate the individual qualifications of the old vs. new OpenAI board members. But in general, having no or very few women in leadership roles risks narrowing the perspectives being considered.

The mission of an organization, its culture, and its practices matter tremendously — perhaps even more than board composition. But board oversight plays an important role in shaping those.

In summary, the OpenAI changes deserve scrutiny, but reasonable people could disagree on their implications. Continued public discussion on the governance of AI development, emphasizing safety, ethics and diverse representation, is warranted. As with the technology itself, transparency, responsibility and wisdom should be guiding principles.”


The good news I read across these responses is that all the major AI platforms acknowledge the need for ethics, trust, and transparency, and express concern over OpenAI’s new direction. They also note the need to balance ethics with business, financial, and industry expertise if the product is to be sustainable.

Perhaps their most important acknowledgment is the apparent mission-backslide at OpenAI. The hope that ethicists will help shape a thoughtful, human-centered AI future now feels farther out of reach. Will the AI industry careen untethered toward a profit-driven future without proper guidance? Will the disinformation machine that has already hijacked civic engagement across social media, and is leading to the erosion of democracy around the globe, accelerate exponentially? Feel free to go ask the AI chatbots yourself.

Interestingly, Claude’s response was the most hedged, yet it was also the only one that acknowledged the elephant in the room: there were only two women on the OpenAI board, and both were removed. “No or very few women in leadership roles risks narrowing the perspectives being considered,” Claude pointed out. In this case, it appears that tech bro culture has triumphed.
