AI experts weigh dangers, benefits of ChatGPT on humans, jobs and information: ‘Dystopian world’

NASA scientist Dr. Chris Mattmann said that while AI is not yet self-aware, it already outscores humans on some tests

Generative artificial intelligence (AI) algorithms like ChatGPT pose substantial dangers but also offer enormous benefits for education, businesses, and people's ability to efficiently produce vast amounts of information, according to AI experts.

"Skynet? That doesn't exist. The machines aren't out there killing everybody, and it's not self-aware yet," NASA Jet Propulsion Laboratory (JPL) Chief Technology and Innovation Officer Dr. Chris Mattmann told Fox News Digital.

He described generative AI as an "accelerated rapid fire" system where the whole human experience is dumped into a model and, with the help of massive scale and computing power, is trained continuously 24 hours a day, 7 days a week.

"ChatGPT has over a trillion neurons in it," Mattmann said. "It is as complex, as functional as the brain or a portion of the brain."



NASA Jet Propulsion Laboratory (JPL) Chief Technology and Innovation Officer Dr. Chris Mattmann speaks with Fox News Digital about generative artificial intelligence and deep neural networks.  (Fox News)

While people may overestimate generative AI's sentient capabilities, Mattmann, who also serves as an adjunct professor at the University of Southern California, did note that people underestimate the technology in other ways.

There are machine learning models today that outperform humans on tests like vision, listening and translation between various languages.

In December, ChatGPT outperformed some Ivy League students at the University of Pennsylvania's Wharton School of Business on a final exam.

"The one thing I tell people is computers don't get tired. Computers don't have to turn off," Mattmann said.

The combination of these AI advantages will fundamentally revolutionize and automate activities and jobs in industries like fast food and manufacturing, he added, noting the importance of understanding skill transitions.

"Does that mean all those people all of a sudden should be dependent on the government and lose their jobs? No," Mattmann said. "We sometimes know this five, ten years in advance. We should be considering what types of subject matter expertise, what types of different activities, what are the prompts that those workers should be putting their subject matter data and all their knowledge into, because that's where we're going to be behind and we're going to need to help those automation activities."

Mattmann added that it was no surprise OpenAI had built ChatGPT, considering its massive investments from Microsoft, Elon Musk and other major tech players.

Google is also making similar products and is a significant investor in DALL-E, another AI model created by OpenAI that generates pictures and paintings.

"These big internet companies that curate and capture the data for the internet is really the fuel; it's the crude for these data-hungry algorithms," Mattmann said.



Datagrade founder and CEO Joe Toscano talks about some of the concerns and benefits related to generative artificial intelligence.  (Fox News)

Datagrade founder and CEO Joe Toscano cited multiple levels of risk regarding generative AI like ChatGPT.

Last week, it was revealed that CNET had issued corrections on 41 of 77 stories written using an AI tool. The corrections included, among other things, large statistical errors, according to a story broken by Futurism.

Toscano, a former Google consultant, said that while industries can use these tools to boost economic efficiency, they could also cut some jobs and leave essays, articles, and online text susceptible to incorrect information. Those errors may be overlooked and taken as truth by the average internet skimmer, which could undermine online communication.

A Princeton University student recently created an app that claims to be capable of detecting whether an AI wrote an essay. However, many of these tools are still in the early stages and produce mixed results.

Toscano said that stamps or verification tags on articles, websites and art that state "this was generated by and created entirely by a machine" could become necessary in the near future.

"If we don't have humans in the loop to ensure truth and integrity in the information, then we're going to, I think, head towards a dystopian world where we don't know true from false, and we just blindly trust things. I'm not excited about that. I'm concerned quite a bit," he added.

Despite concerns, Toscano expressed excitement about the future of AI and said it could produce vast benefits if used responsibly.

"The AI is going to help us think through things we never were capable of before, to be quite honest," he said.

As an example, he described how AI could be used in landscaping or architecture. While a team might come together and produce three concepts in a week to bring back to a customer, an AI could produce 1,000 concepts, speeding up the process for the landscaping team and lowering the cost for the consumer.

He noted that AI could also be deployed for conversational use with humans, like mental health assessments.

However, he said these situations had produced some roadblocks. While the machines have been effective, patients often shut down when they realize they are speaking to an algorithm. He said that while we might not be far off from movies like "M3GAN," with AIs mimicking human conversation and emotion (minus the killing and sabotage), such systems are better deployed in settings that are objective, mathematical, or empirically driven.

"The future I want to see is one where we use artificial intelligence to amplify our abilities rather than replace us," Toscano said.



Co-founder and CEO of Fiddler Krishna Gade discusses the importance of transparency and explainability in artificial intelligence models.  (Fox News)

Fiddler co-founder and CEO Krishna Gade also expressed concern about data privacy breaches involving sensitive materials like personally identifiable information. He said that without transparency and the ability to explain how a model arrives at its conclusions, the technology could lead to many problems.

Gade, a former lead AI engineer at Facebook, Pinterest and Twitter, also said it was too early to trust AI with high-stakes decisions, like providing first aid instructions or guiding complicated medical procedures.

"How do you know that the response is reliable and accurate? What kind of sources that it's going through?" he said.

He added that many AI models are essentially a "black box" where the lineage and origin of the information are not immediately apparent, and that guardrails should be implemented to make this information easily obtainable, with explainability and transparency baked in.

Gade also warned that models could contain societal and historical biases because of the information fed into them. Depending on the training data it pulls from, a model could exhibit common stereotypes about women or religious groups. He pointed to an example where a model could associate Muslims with violence.

Generative AI is the latest in a long line of large language models. Neil Chilson, a senior fellow for tech and innovation at the nonprofit Stand Together, described it as a model that uses extensive collections of statistics to create new content nearly indistinguishable from the writing of a human.  

You ask it questions and have a conversation with it, and it tries to predict the statistically most likely response, typically a word, sentence, or paragraph at a time, drawing on a significant portion of all the written text publicly available on the internet. The more data fed in, the better the AI typically performs.

These forms of AI often use neural network-based models, which assign probabilities across a large matrix of weights and filter inputs through a vast network of connections to produce an output.
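The "statistical, not logical" point can be seen in miniature with a toy next-word predictor. This sketch is purely illustrative (real systems like ChatGPT use deep neural networks over vastly more data), but the core objective, predicting a statistically likely continuation from counts observed in text, is the same:

```python
from collections import Counter, defaultdict

def train_bigram_model(text):
    """Count how often each word follows each other word in the corpus."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def predict_next(follows, word):
    """Return the statistically most likely next word, or None if unseen."""
    candidates = follows.get(word.lower())
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

# A tiny stand-in for "all the written text publicly available on the internet."
corpus = (
    "the cat sat on the mat . the cat ate the fish . "
    "the dog sat on the rug ."
)
model = train_bigram_model(corpus)
print(predict_next(model, "the"))  # "cat", the most frequent follower of "the"
print(predict_next(model, "sat"))  # "on"
```

There is no reasoning here, only frequency: "cat" wins simply because it follows "the" more often than any other word in the corpus, which is the distinction Chilson draws.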


"It is not reasoning the way you and I would reason," Chilson, a former Federal Trade Commission (FTC) Chief Technologist, told Fox News Digital.

"The important distinction is that these systems are statistical, not logical," Chilson said, noting people "mythologize" AI models as if they are thinking like them.

These models are improved through adversarial interaction. In one setup, one model creates a test for another to answer, and the two improve by competing with each other. Sometimes the counterpart is a human, who reviews the content by asking the AI to answer different prompts and then grading the responses.
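The grading loop described above can be sketched in a few lines. Everything here is invented for illustration: the prompt, the candidate answer "styles," and the scoring rule are stand-ins for a real grader (a human or a second model), and the weight update is a deliberately crude substitute for actual model training:

```python
import random

random.seed(0)

PROMPT = "Explain photosynthesis briefly."

# Hypothetical answer styles the generator can produce for the prompt.
STYLES = {
    "terse": "Plants turn light into sugar.",
    "hedged": "Plants may possibly convert light, perhaps.",
    "rambling": "Well, you see, plants, um, do a thing with light...",
}

# Sampling weights: how likely the generator is to pick each style.
weights = {style: 1.0 for style in STYLES}

def grade(answer):
    """Stand-in grader: rewards short, confident answers."""
    score = 10 - len(answer.split())
    if "um" in answer or "perhaps" in answer:
        score -= 5
    return score

for _ in range(20):
    # Generator proposes an answer; grader scores it.
    style = random.choices(list(weights), weights=list(weights.values()))[0]
    reward = grade(STYLES[style])
    # Shift probability mass toward styles the grader rewards.
    weights[style] = max(0.1, weights[style] + 0.1 * reward)

best = max(weights, key=weights.get)
print(best)  # "terse" ends up with the highest weight
```

The generator never understands the prompt; it simply drifts toward whatever the grader rewards, which is why the quality of the human (or adversary model) in the loop matters so much.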

Although the technology behind ChatGPT has been around for several years, a leap forward in the user interface has made it far more accessible to general consumers, alongside some incremental improvements to the algorithm.

Chilson said the program is good at helping writers get rid of a blank page and brainstorm new ideas, a novelty that has interested major tech companies.


Microsoft, for instance, has expressed a desire to incorporate OpenAI's technology into its office suite.

"I don't think it will be that long until those small suggestions you get on your Word document or Google Mail actually become a bit longer and more sophisticated," Chilson said. "All of these tools reduce the barrier to average people becoming creators of things that are quite interesting and attractive. There's going to be an explosion of creators and creativity using these tools."