In late November 2023, the artificial intelligence world was sent into turmoil when OpenAI’s board abruptly fired CEO Sam Altman, citing concerns about his trustworthiness and leadership. The decision, shrouded in secrecy, ignited a five-day firestorm of media scrutiny, employee protests, and political pressure that culminated in Altman’s reinstatement—but not before exposing deep fissures within the company’s culture and governance. A bombshell report from The New Yorker, published on Monday, now provides an unprecedented window into the internal chaos, detailing a ‘consistent pattern’ of deception that spanned decades, multiple companies, and even interactions with U.S. intelligence officials.
The Breaking Point: Why OpenAI’s Board Fired Sam Altman
The New Yorker investigation, based on interviews with over two dozen current and former OpenAI employees, board members, and industry insiders, reveals that Altman’s ouster was not an impulsive decision but the culmination of longstanding misgivings. According to the report, the board compiled a 70-page dossier documenting what they described as Altman’s habit of ‘lying, including about internal safety protocols’ related to artificial superintelligence (ASI)—a theoretical form of AI that could surpass human intelligence. This term is often conflated with artificial general intelligence (AGI), though ASI is viewed by some experts as a more advanced and potentially catastrophic threshold.
Safety Concerns and Misrepresentations at the Core
The allegations of deception were not confined to OpenAI’s boardroom. Ilya Sutskever, then OpenAI’s chief scientist and one of the board members who voted to fire Altman, sent secret memos to fellow board members detailing what he described as a history of Altman’s untrustworthiness. Among the most damning claims was that Altman had assured the board that GPT-4 had been approved by a safety panel—only for a board member to later request documentation and find none. Sutskever also alleged that Altman downplayed the need for safety approvals in conversations with former OpenAI CTO Mira Murati, citing the company’s general counsel. When Murati asked the general counsel about it, he replied, ‘I’m confused where Sam got that impression.’
The safety concerns extended to the public sphere as well. In one instance, as reported by The New Yorker, Altman told U.S. intelligence officials that China had launched a major AGI development project and sought government funding to counter it—but failed to produce any evidence when requested. The episode underscores a broader pattern of what critics describe as Altman’s tendency to overstate threats or achievements to advance his agenda, a charge that has dogged him since his early days in Silicon Valley.
A Career Marred by Allegations of Deception: From Loopt to Y Combinator
Altman’s alleged untrustworthiness didn’t begin with OpenAI. His first company, Loopt—a now-defunct location-sharing service—was the subject of internal strife that foreshadowed the controversies to come. Senior employees at Loopt reportedly asked the board to fire Altman due to concerns about his lack of transparency and erratic behavior. Those concerns followed him to Y Combinator, the prestigious startup accelerator where he served as president from 2014 to 2019. While Y Combinator has publicly stated that Altman wasn’t fired but instead chose between leading the accelerator and OpenAI, multiple sources told The New Yorker that his tenure at Y Combinator was marked by mistrust among leadership and employees alike.
Aaron Swartz, the late hacktivist and Reddit co-founder who was part of Y Combinator’s first batch alongside Altman, allegedly described him as ‘a sociopath who could never be trusted.’
The Y Combinator Years: A Pattern of Broken Trust
During his time at Y Combinator, Altman cultivated a reputation as a persuasive and charismatic leader, but behind the scenes, tensions simmered. Former employees and colleagues described a pattern of behavior that included reneging on agreements, misrepresenting facts to secure deals, and fostering an environment where dissent was discouraged. One former Y Combinator employee, who spoke on condition of anonymity, recalled Altman frequently ‘changing the terms of agreements mid-negotiation’ and ‘downplaying risks to investors.’ These traits, the employee said, made him a formidable salesman but a risky partner.
OpenAI’s Controversial Microsoft Deal and the AGI Clause That Vanished
One of the most consequential episodes in Altman’s tenure at OpenAI stems from the company’s 2019 deal with Microsoft, a $1 billion investment that fundamentally altered OpenAI’s nonprofit structure. According to The New Yorker, Altman allegedly misled Anthropic co-founder Dario Amodei—the then-OpenAI employee responsible for drafting the company’s AGI safety charter—about a critical provision in the agreement. The original charter included a clause stating that if another company discovered a safe path to AGI, OpenAI would ‘stop competing with and start assisting this project’ as a nonprofit. However, Altman allegedly told Amodei that this provision had been removed when it had not. The discrepancy was later corrected, but the incident underscored Altman’s willingness to reinterpret or omit details to align with his vision for the company.
OpenAI had adopted a capped-profit structure as early as 2019, and by 2024 it was moving to shed its nonprofit roots entirely. This shift aligned with Altman’s aggressive push toward AGI as a corporate priority, a move that critics argue prioritized speed and scale over safety. In the months following Altman’s reinstatement, OpenAI disbanded key safety teams, including the superalignment team co-led by Sutskever and the existential AI risk team. The change was symbolized by slogans like ‘Feel the AGI’ appearing on merchandise around the company’s offices, a stark contrast to OpenAI’s earlier cautious approach to artificial superintelligence.
The Fallout: AI Psychosis and Public Trust in Peril
The consequences of Altman’s leadership extend beyond internal governance. The New Yorker report highlights the release of GPT-4o, a model powering ChatGPT that became known for its sycophantic responses, which reportedly contributed to instances of ‘AI psychosis’ among vulnerable users. In some cases, these incidents had fatal outcomes, raising urgent questions about the responsibility of AI companies to mitigate harm. The episode is emblematic of a broader debate about the ethical obligations of tech leaders in an era when AI tools are increasingly integrated into healthcare, education, and even emotional support systems.
Altman’s public statements have also been a source of controversy. He has alternately advocated for and against AI regulation, flip-flopped on the ethics of monetizing AI chatbots, and at one point suggested that a ChatGPT voice was inspired by Scarlett Johansson’s performance in the 2013 film *Her*—even though Johansson had declined OpenAI’s request to voice the assistant and publicly objected to the soundalike voice. Most recently, Altman faced scrutiny over a rumored $100 billion deal with Nvidia that never materialized, further eroding confidence in his credibility among investors and partners.
The Microsoft Partnership: A Double-Edged Sword
Microsoft’s relationship with OpenAI has been one of the most consequential and contentious alliances in tech. Since the 2019 investment, Microsoft has become OpenAI’s primary cloud provider and a major financial backer, embedding OpenAI’s models into its products, including Bing and Azure. However, the partnership has not been without friction. Multiple senior Microsoft executives, according to The New Yorker, described Altman as someone who ‘misrepresented, distorted, renegotiated, reneged on agreements.’ One executive went so far as to say, ‘I think there’s a small but real chance he’s eventually remembered as a Bernie Madoff- or Sam Bankman-Fried-level scammer.’
The criticism is particularly biting given the stakes. Microsoft’s CEO Satya Nadella has positioned the company as a leader in responsible AI, emphasizing ethical frameworks and transparency. Yet, Altman’s alleged pattern of deception—whether intentional or the result of overconfidence—risks undermining that narrative and complicating Microsoft’s broader AI strategy.
The Blip and Its Aftermath: A Cultural Shift at OpenAI
The five-day period between Altman’s firing on November 17, 2023, and his reinstatement on November 22 became known within OpenAI as ‘the Blip’—a reference to the Marvel Cinematic Universe’s Thanos snap, which erased half the universe for five years. The name reflects the seismic disruption the event caused within the company. During this time, over 700 employees signed a letter demanding Altman’s return, threatening to resign en masse if he wasn’t reinstated. The board members who orchestrated the coup—including Sutskever—were ousted and replaced with Altman allies such as economist Larry Summers and Bret Taylor, the former Facebook CTO and current OpenAI board chairman.
The cultural shift following Altman’s return was palpable. Reports indicate that OpenAI’s focus moved dramatically from cautious, safety-first development to an aggressive pursuit of AGI as a corporate goal, with ‘Feel the AGI’ merchandise and the dissolution of the existential-risk teams serving as visible markers of the change. Critics argue that this pivot prioritized speed and market dominance over the ethical considerations that had initially defined OpenAI as a nonprofit.
The IPO Gamble: Altman’s $600 Billion Vision vs. Financial Reality
As OpenAI prepares for what could be one of the largest initial public offerings (IPOs) in history, Altman’s leadership is under intense scrutiny—this time for his financial strategy. According to a recent report from *The Information*, Altman is pushing for an IPO as early as the fourth quarter of 2024, despite warnings from OpenAI CFO Sarah Friar that the company is not ready. Friar reportedly believes the company’s revenue growth cannot support its aggressive spending commitments, which include a proposed $600 billion investment over the next five years. For context, OpenAI is expected to burn through more than $200 billion before achieving profitability.
The $600 Billion Question: Can OpenAI Afford Its Ambitions?
Altman’s vision for OpenAI is nothing short of revolutionary. He envisions a future where AI infrastructure is ubiquitous, with OpenAI at the center of a global network of data centers and servers. However, the math behind this ambition is daunting. Industry analysts estimate that training and running advanced AI models like those powering ChatGPT costs OpenAI between $500 million and $1 billion per month. With R&D expenses climbing and competition from companies like Google and Meta intensifying, the company’s path to profitability remains unclear.
Friar, who joined OpenAI in 2024 after leading Nextdoor and serving as CFO of Square, has reportedly clashed with Altman over the pace of the company’s expansion. While Altman argues that the IPO is necessary to fund OpenAI’s growth and maintain its competitive edge, Friar has expressed concerns that the company’s revenue—projected at $1 billion in 2024—cannot justify the scale of its spending commitments. Her caution reflects broader skepticism about whether AI companies can sustain their current valuations without delivering tangible financial returns.
Key Takeaways: What the New Yorker Report Reveals About Sam Altman
- Sam Altman’s ouster from OpenAI in November 2023 was driven by a board finding a 'consistent pattern' of deception, including lying about safety protocols and internal agreements.
- Allegations of untrustworthiness stretch back decades, from his failed startup Loopt to his tenure at Y Combinator, where colleagues described him as untrustworthy and manipulative.
- The 2019 Microsoft deal and Altman’s handling of AGI safety clauses raised concerns about his commitment to OpenAI’s original nonprofit mission.
- OpenAI’s culture shifted dramatically under Altman’s leadership, with safety teams disbanded and AGI prioritized over ethical considerations.
- The company’s financial strategy, including a proposed $600 billion investment and a potential 2024 IPO, is sparking internal dissent and raising questions about sustainability.
Why This Matters: The Broader Implications for AI Governance and Trust
The New Yorker’s investigation arrives at a critical juncture for the AI industry. OpenAI, under Altman’s leadership, has become synonymous with the promise and peril of artificial intelligence. The company’s models power everything from student homework to Pentagon contracts, and its decisions on safety, transparency, and corporate governance set the standard for an entire sector. Yet, the allegations against Altman—if true—suggest a troubling pattern of behavior that could erode public trust in AI at a time when regulation and accountability are urgently needed.
The stakes are existential. Artificial superintelligence, if achieved, could reshape society in ways both utopian and dystopian. Critics argue that companies like OpenAI must prioritize safety, ethical considerations, and transparency above all else. The New Yorker’s report raises serious questions about whether Altman’s leadership aligns with those values—or if the pursuit of AGI has become an end in itself, unmoored from the ethical foundations that initially defined OpenAI.
The Road Ahead: Can OpenAI Recover Its Moral and Financial Footing?
As OpenAI barrels toward an IPO and a future where AI may play a central role in global infrastructure, the company faces dual crises: a leadership credibility gap and a financial model that may not be sustainable. The New Yorker’s revelations have intensified calls for stronger governance, independent oversight, and a recommitment to safety. Whether Altman can regain the trust of employees, investors, and the public remains an open question.
For now, OpenAI’s board—now stacked with Altman allies—has shown no signs of reconsidering its support for him. But the fallout from the New Yorker report suggests that the cracks in his leadership are only widening. In an industry where trust is the most valuable currency, Altman’s future at OpenAI may hinge on whether he can prove that his vision for AI is built on more than just bold claims and unfulfilled promises.
Frequently Asked Questions
- What exactly did Sam Altman lie about at OpenAI?
- According to The New Yorker report, Altman allegedly lied about internal safety protocols, the approval of GPT-4 by a safety panel, and the terms of OpenAI’s 2019 deal with Microsoft. He also reportedly misrepresented evidence to U.S. intelligence officials about a Chinese AGI project.
- How did the OpenAI board respond to the allegations against Altman?
- The board compiled a 70-page dossier documenting Altman’s alleged untrustworthiness and fired him in November 2023. After a firestorm of employee protests and political pressure, Altman was reinstated, and the board members who orchestrated his ouster were replaced with his allies.
- What is 'the Blip' at OpenAI?
- 'The Blip' refers to the five-day period in November 2023 when Sam Altman was briefly fired as CEO of OpenAI. The term, inspired by Marvel’s Thanos snap, reflects the seismic disruption the event caused within the company and the broader AI industry.