As large language models like ChatGPT are developed around the globe, countries have raced to regulate AI. Some have drafted strict laws on the technology, while others have little regulatory oversight.

China and the EU have received particular attention, as they have created detailed, yet divergent, AI regulations. In both, the government plays a large role, which differs greatly from countries like the United States, where there is no federal legislation on AI. Government regulation comes as many countries have raised concerns about the technology, chiefly around privacy and the potential for societal harm.

The following is a look at how countries across the globe have approached regulating the growing use of AI programs.

Countries around the world have released different regulations regarding AI, as the revolutionary technology gains global prominence. (Jakub Porzycki/NurPhoto via Getty Images)

  1. US regulation
  2. Chinese regulation
  3. What other countries have passed legislation?

1. US regulation

The United States has yet to pass federal legislation on AI. OpenAI, a US-based company, created ChatGPT, the most talked-about AI software to date, and the chatbot has heavily influenced the conversation around AI. Companies around the world are now developing AI software of their own with functions similar to ChatGPT's.

Despite the lack of federal legislation, the Biden Administration released the Blueprint for an AI Bill of Rights through the White House Office of Science and Technology Policy, and the National Institute of Standards and Technology (NIST) has published a voluntary AI Risk Management Framework. These documents offer guidance on how AI should be used and warn of ways it can be misused, but neither is legally binding.

However, multiple states across the country have introduced their own AI laws. Vermont, Colorado and Illinois began by creating task forces to study AI, according to the National Conference of State Legislatures (NCSL). The District of Columbia, Washington, Vermont, Rhode Island, Pennsylvania, New York, New Jersey, Michigan, Massachusetts, Illinois, Colorado and California are also considering AI laws. While many of those bills are still being debated, Colorado, Illinois, Vermont and Washington have passed various forms of legislation.

For example, the Colorado Division of Insurance requires insurers to account for how they use AI in their models and algorithms. In Illinois, the legislature passed the Artificial Intelligence Video Interview Act, which requires employers to obtain applicants' consent before using AI to evaluate their candidacies. Washington state requires its chief information officer to establish a regulatory framework for AI systems that may affect public agencies.

The United States does not have any federal AI regulations at this point.  (Yasin Ozturk/Anadolu Agency via Getty Images)

While AI regulation is a hot topic and an ever-growing conversation in the United States, it remains to be seen when Congress will begin to exercise regulatory authority over AI.

2. Chinese regulation

China is a country in which the government plays a large part in AI regulation. Many China-based tech companies have recently released AI software such as chatbots and image generators. Baidu, SenseTime and Alibaba, for example, have all released artificial intelligence software. Alibaba has a large language model called Tongyi Qianwen, and SenseTime offers a slew of AI services, including SenseChat, which functions similarly to ChatGPT, a service unavailable in the country. Baidu has released its own chatbot in China, Ernie Bot.

The Cyberspace Administration of China (CAC) released draft regulations in April 2023 that lay out rules AI companies must follow and the penalties they will face if they fail to adhere to them.

One of the rules released by the CAC is that security reviews must be conducted before an AI model is released to the public, according to the Wall Street Journal. Rules like this give the government considerable oversight of AI.

The Chinese company Baidu has released its own AI chatbot called Ernie Bot.  (Pavlo Gonchar/SOPA Images/LightRocket via Getty Images)

The CAC said that while it supports the innovation of safe AI, the technology must be in line with China's socialist values, according to Reuters.

The CAC also specifies that providers are responsible for the accuracy of the data used to train their AI software and must have measures in place to prevent discrimination when the AI is designed, according to Reuters. AI services must also require users to submit their real identities when using the software.

Penalties for violations include fines, suspended services and criminal charges, according to Reuters. If inappropriate content is released through any AI software, the company has three months to update the technology and ensure it does not happen again.

The rules created by the CAC hold AI companies responsible for the information their software generates.

OpenAI's ChatGPT is not available in China. (Avishek Das/SOPA Images/LightRocket via Getty Images)

3. What other countries have passed legislation? 

Rules established by the European Union (EU) include the Artificial Intelligence Act (AIA), which debuted in April 2021. However, the act is still under review in the European Parliament, according to the World Economic Forum.

The EU regulatory framework divides AI applications into four categories: minimal risk, limited risk, high risk and unacceptable risk. Applications considered minimal or limited risk face light regulatory requirements but must meet certain transparency obligations. On the other hand, applications categorized as unacceptable risk are prohibited. Applications that fall into the high-risk category can be used, but they must follow stricter guidelines and are subject to heavy testing requirements.

Within the EU, Italy's Data Protection Authority placed a temporary ban on ChatGPT in March, largely over privacy concerns. Upon implementing the ban, the regulator gave OpenAI 20 days to address specific concerns, including age verification, clarification of how personal data is used, privacy policy updates and more information for users about how the application handles their personal data.

ChatGPT has sparked a lot of AI conversation around the world.  (Photo by LIONEL BONAVENTURE/AFP via Getty Images)

The ban on ChatGPT in Italy was rescinded at the end of April, after the chatbot was found to be in compliance with regulatory requirements.

Canada has also taken up AI regulation with the Artificial Intelligence and Data Act (AIDA), drafted in June 2022. The AIDA requires transparency from AI companies and provides for anti-discrimination measures.