Can AI machines develop a moral sense?

ChatGPT and other AI technologies have raised concerns over the future of search engines

FOX Business host Gerry Baker – who wrote an op-ed in Monday's Wall Street Journal, "Is There Anything ChatGPT's AI 'Kant' Do?" – outlined the implications of the increased prevalence of artificial intelligence technology in modern society, and the questions and fears AI sparks, Tuesday on "Your World."

GERRY BAKER, HOST OF "WSJ AT LARGE": That's our ultimate nightmare, right? That we are creating these machines that in the end will come and control us, and tell us what we're going to do. What I was interested in looking at was not so much what machines can tell us about factual information, but whether or not it's possible these machines might develop any sort of a moral sense, might be able to tell us what's right or wrong.


You can ask it all kinds of moral questions, like "Is it ever right to kill someone?" or "Is it ever right to tell a lie?" – things like that. And it gives you a mix of answers. Sometimes the answers are very non-committal – essentially, "These are difficult moral questions; you've got to decide for yourself." Sometimes it gives you interesting answers that clearly reflect the moral views of the people who wrote the algorithms.


It does seem that our moral sense – the big questions about how we should live our lives, rather than what knowledge we have – cannot be easily understood simply in terms of the knowledge, the data, the information we possess. Those questions seem to exist independently of all of that knowledge.
