US regulators act for the first time: FTC investigates OpenAI over the risks of generating false information

The US Federal Trade Commission (FTC) may investigate OpenAI, the developer of the chatbot ChatGPT, over consumer protection issues.

On July 13 local time, CNN reported that the FTC is investigating whether OpenAI violated consumer protection laws, and has asked OpenAI to provide extensive records concerning how it handles personal data, the possibility that it provides inaccurate information to users, and "the risk of harm to consumers, including reputational harm."

The UK's Financial Times said this was the first time US regulators have formally launched a review of the risks posed by artificial intelligence chatbots.

According to a document, the FTC this week sent OpenAI a 20-page request for records, asking the company to describe how it obtains the data used to train its large language models and ChatGPT's "ability to generate false statements about real individuals." The request also asks OpenAI to provide any public complaints it has received, a list of lawsuits it is involved in, and details of the data breach the company disclosed in March 2023, which exposed users' chat logs and payment data.

The document was first reported by the Washington Post; an insider subsequently confirmed its authenticity to CNN.

The investigation comes less than a year after OpenAI launched ChatGPT. Industry observers believe the FTC is moving faster on AI than it has in the past, when it typically began investigating a company only after a major public failure. In 2018, for example, the agency began investigating Facebook's privacy practices after reports that Facebook had shared user data with Cambridge Analytica, a political consulting firm.

Lina Khan, chair of the FTC, has said that technology companies should be supervised while a technology is in its infancy, rather than once it has matured. Khan expressed concern about AI during congressional testimony on Thursday local time, saying that law enforcement officials "need to be vigilant as early as possible" about transformative tools such as AI.

"Although these tools are novel, they are not exempt from existing rules," Khan wrote in a New York Times op-ed in May. "Although this technology is developing rapidly, we have already seen some of its risks."

However, OpenAI has long openly acknowledged some of its products' limitations. For example, a white paper accompanying GPT-4, the latest version of its model, explains that the model may "generate nonsensical or untruthful content in relation to certain sources." OpenAI has also disclosed that tools such as GPT could reinforce widespread discrimination against minorities and other vulnerable groups.

Sam Altman, co-founder and CEO of OpenAI and widely known as the father of ChatGPT, has also said that the rapidly developing artificial intelligence industry needs regulation. In May of this year, he testified before Congress calling for AI legislation and met with hundreds of lawmakers in an effort to shape a policy agenda for the technology. On Thursday local time, he tweeted that the safety of OpenAI's technology is "very important," adding that "we are confident we comply with the law" and that the company will cooperate with the agency.

OpenAI already faces regulatory pressure internationally. In March of this year, Italy's data protection agency banned ChatGPT, alleging that OpenAI had illegally collected users' personal data and lacked an appropriate age-verification system to keep minors from accessing illegal content. OpenAI restored access in April, saying it had made changes required by the Italian regulator.

