
OpenAI CEO Ousted Amid Terrifying Discovery


OpenAI CEO Sam Altman was ousted from his company after employees sent a concerning letter to the board.

The letter touched on a potential breakthrough in the company’s AI research that could threaten all of humanity and mentioned OpenAI’s ‘Q-Star’ program.

Beyond this, no specific details were given about the discovery that reportedly led to Altman's ouster.

Altman has since been reinstated as CEO of OpenAI, deepening the mystery of why he was ousted in the first place.

What is really going on at OpenAI? Read on and let us know what you think:

Robby Starbuck provided a copy of the letter written by former employees of OpenAI and highlighted this excerpt:

“Throughout our time at OpenAI, we witnessed a disturbing pattern of deceit and manipulation by Sam Altman and Greg Brockman, driven by their insatiable pursuit of achieving artificial general intelligence (AGI).

Their methods, however, have raised serious doubts about their true intentions and the extent to which they genuinely prioritize the benefit of all humanity.”

Reuters broke the story:

In their letter to the board, researchers flagged AI’s prowess and potential danger, the sources said without specifying the exact safety concerns noted in the letter.

There has long been discussion among computer scientists about the danger posed by superintelligent machines, for instance if they might decide that the destruction of humanity was in their interest.

Charlie Kirk asked: “Silicon Valley is in turmoil after ChatGPT creator OpenAI suddenly fired (then rehired) CEO Sam Altman without explaining why. The firing has created a whirlwind of rumors: Is AI advancing too quickly to be controlled? Are we on the brink of so-called Artificial General Intelligence?”

On Wednesday, OpenAI announced: “We have reached an agreement in principle for Sam Altman to return to OpenAI as CEO with a new initial board of Bret Taylor (Chair), Larry Summers, and Adam D’Angelo. We are collaborating to figure out the details. Thank you so much for your patience through this.”

Even Altman himself has recently issued similar warnings, according to CNN:

Two weeks after the hearing, Altman joined hundreds of top AI scientists, researchers and business leaders in signing a letter stating: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

The warning was widely covered in the press, with some suggesting it showed the need to take such apocalyptic scenarios more seriously.

It also highlighted an important dynamic in Silicon Valley: Top executives at some of the biggest tech companies are telling the public that AI has the potential to bring about human extinction while also racing to invest in and deploy this technology into products that reach billions of people.

