The surprising, albeit short-lived, ouster of OpenAI CEO Sam Altman has renewed concern over the state of artificial intelligence regulation as the industry continues to grow at a rapid pace.

"Now is the perfect time to create common sense guard rails," Christopher Alexander, the chief analytics officer of Pioneer Development Group, told Fox News Digital.

Alexander's comments came after OpenAI's board moved last week to push out Altman, arguing it had lost confidence in him and saying in a news release that he had not been "consistently candid in his communications." OpenAI has since reversed course, announcing early Wednesday on X that it has "reached an agreement in principle for Sam Altman to return" as CEO under a new board of directors.

"We are collaborating to figure out the details. Thank you so much for your patience through this," OpenAI continued.

Sam Altman, chief executive officer and co-founder of OpenAI, speaks during a Senate Judiciary Subcommittee hearing in Washington, D.C., on Tuesday, May 16, 2023. (Getty)

OpenAI's earlier move to push out Altman came as the growing AI industry begins to attract more attention from governments around the world, with President Biden recently signing an executive order meant to be the first step in regulating the industry. However, just how far that order can reach remains questionable, and Congress has taken few steps to codify any AI regulation into law.

That lack of regulation has caused concern to grow as one of the country's leading AI companies, which is behind the popular ChatGPT platform, transitions to new leadership. Writing for the New York Post earlier this week, Niagara University philosophy professor Steve Petersen argued the move showed that AI companies "can no longer be trusted to police themselves; it’s past time to call in the long arm of the law."

"It’s defined as intelligence at the human level or better on all cognitive tasks — and once we get it, it’s likely to blow way beyond our level of intelligence in a hurry," Petersen argued. "If we get it wrong, many sober thinkers agree it could be an existential catastrophe for humanity."

Alexander agreed that the risk AI could pose makes regulation necessary, though he cautioned that the current technology is not at the level many fear, even if it one day could be.

ChatGPT on a laptop. (Cyberguy)

Alexander pointed to the initial boom in social media as an example of a missed opportunity, arguing that policymakers should not make the same mistakes when it comes to AI.

However, some have questioned whether the Altman saga highlights the need for more regulation, with Center for Advanced Preparedness and Threat Response Simulation founder Phil Siegel pointing out that OpenAI has already passed regulatory hurdles.

"The lack of a process for investigation and policy application is alarming for a company that has such an impact on our tech ecosystem right now," Siegel told Fox News Digital. "I’m hoping the investors will put their foot down and demand a larger, more professionalized and accountable Board, maybe with only one founder, one or two activists, and several experienced Directors. Not sure you can regulate that—they would have passed most regulatory hurdles you can imagine."

The OpenAI ChatGPT logo is seen on a mobile phone in this photo illustration. (Jaap Arriens/NurPhoto via Getty Images)

Meanwhile, other experts fear that overregulation could dampen U.S. innovation in AI technology.

"Leadership struggles like what we just saw at OpenAI aren't new. The business world experiences them on a regular basis," Samuel Mangold-Lenett, a staff editor at The Federalist, told Fox News Digital. 

"Most of our knowledge about AGI remains theoretical, and it's likely that regulations in this space will inhibit development and slow progress while our adversaries — namely China — continue to plow ahead. We need to be careful and ensure that this technology doesn't pose a threat to humanity and that the people working on it can be trusted to work on it, but much like the Manhattan Project, it's impossible to find the perfect balance of conditions and personalities." 

However, Jon Schweppe, the policy director of American Principles Project, argued that the ordeal is an example of why regulation is needed.

""Imagine if we decided to leave our nuclear arsenal in the hands of a few highly ideological tech bros," Schweppe told Fox News Digital. "That’s essentially what’s happening with AI right now. AI is a technological weapon of mass destruction — it poses an existential threat to humanity as we know it. Obviously some level of oversight is needed before it’s too late."

Editor's Note: This story has been updated to reflect breaking news that Altman will be reinstated as OpenAI CEO.