OpenAI CEO Sam Altman says the age of the giant artificial intelligence model is already over.

"I think we're at the end of the era where it's going to be these, like, giant, giant models," he told an audience at the Massachusetts Institute of Technology over Zoom last week. 

"We'll make them better in other ways." 

During the same event, Altman also confirmed that his company is not training GPT-5.


"An earlier version of the letter claimed OpenAI is training GPT-5 right now," he said, referencing a letter from billionaire Elon Musk and Apple co-founder Steve Wozniak. "We are not and won't for some time."


Sam Altman, president of Y Combinator, speaks during the New Work Summit in Half Moon Bay, Calif., Feb. 25, 2019.  (David Paul Morris/Bloomberg via Getty Images)

The letter, published by the nonprofit Future of Life Institute, called for a six-month moratorium on developing any AI technology more powerful than GPT-4, which was released in March.

Altman said he did not believe the letter's approach was the right way to address concerns about AI.

The recent debate has forced the Biden administration and other governments around the world to recognize the need for policies to regulate the emerging industry.


Sam Altman, CEO and co-founder of OpenAI, speaks during an event at the Microsoft headquarters in Redmond, Wash., Feb. 7, 2023.  (Chona Kasinger/Bloomberg via Getty Images)

However, it remains unclear where the next advances in AI will come from.


Google launched a chatbot called Bard, and Microsoft added its own AI chatbot to its Bing search engine.

Though Musk signed the letter, the Twitter CEO told Tucker Carlson he plans to start his own artificial intelligence chatbot: TruthGPT.

"I'm going to start something which I call TruthGPT, or a maximum truth-seeking AI that tries to understand the nature of the universe," he said in the interview that aired Monday. "And I think this might be the best path to safety in the sense that an AI that cares about understanding the universe is unlikely to annihilate humans because we are an interesting part of the universe."


Elon Musk, CEO of Space Exploration Technologies Corp. (SpaceX) and Tesla Inc., listens as Jim Bridenstine, administrator of NASA, not pictured, speaks during an event at SpaceX headquarters in Hawthorne, Calif., Oct. 10, 2019.  (Patrick T. Fallon/Bloomberg via Getty Images)

"AI is more dangerous than, say, mismanaged aircraft design or production maintenance or bad car production in the sense that it, it has the potential, however small one may regard that probability, but it is not trivial; it has the potential of civilizational destruction," the SpaceX founder said


Musk has also founded a new company called X.AI, according to a March 9 filing in Nevada. 

Fox News' Jeffrey Clark and Bailee Hill contributed to this report.