Google: Customers can use its AI to make decisions in 'high-risk' areas as long as there's human oversight

On December 18th, Google clarified, via an update to its usage policy, that customers may use its generative AI tools to make "automated decisions" in "high-risk" areas (e.g., healthcare), as long as there is human oversight.


According to an updated version of the company's Generative AI Prohibited Use Policy released Tuesday, customers can use Google's generative AI to make "automated decisions" that may have a significant adverse impact on individuals' rights, in areas such as employment, housing, insurance, and social benefits. Such decisions are permitted as long as they are made with some form of human oversight.

Note: In the field of artificial intelligence, automated decision-making refers to decisions made by an AI system based on factual or inferred data. For example, an AI might decide whether to approve a loan based on an applicant's data, or screen job applicants.

Google's previous draft terms stated that high-risk automated decisions involving generative AI were completely banned. However, Google told TechCrunch that its generative AI has "never actually prohibited" automated decision-making in high-risk areas, provided there is human supervision.

A Google spokesperson said in an interview, "The human oversight requirement has always existed and applies to all high-risk areas." He added: "We've simply re-categorized the terms and listed some specific examples more clearly, with the aim of making it clearer for users."

Regarding automated decisions that affect individuals, regulators have expressed concern about the potential for bias in AI. For example, studies have shown that AI systems used to approve credit and mortgage applications may exacerbate historical patterns of discrimination.

