WASHINGTON ― The Biden administration Thursday announced three new policies to guide the federal government's use of artificial intelligence, billing the standards as a model for global action for a rapidly evolving technology.
The policies, which build on an executive order President Joe Biden signed in October, come amid growing concerns about the risks AI poses to the U.S. workforce, privacy and national security, as well as its potential for discrimination in decision-making.
Vice President Kamala Harris announced the rules in a call with reporters, saying the policies were shaped by input from the public and private sectors, computer scientists, civil rights leaders, legal scholars and business leaders.
"President Biden and I intend that these domestic policies will serve as a model for global action," said Harris, who has helped lead the administration's efforts on AI and outlined U.S. initiatives on AI during a global summit in London last November.
"All leaders from government, civil society and the private sector have a moral, ethical and societal duty to make sure that artificial intelligence is adopted and advanced in a way that protects the public from potential harm, while ensuring everyone is able to enjoy its full benefit," Harris said.
The federal government has disclosed more than 700 examples of current and planned AI use across agencies. The Defense Department alone has more than 685 unclassified AI projects, according to the nonpartisan Congressional Research Service.
Disclosures from other agencies show AI is being used to document suspected war crimes in Ukraine, test whether coughing into a smartphone can detect COVID-19 in asymptomatic people, stop fentanyl smugglers from crossing the southern border, rescue children being sexually abused and find illegal rhino horns in airplane luggage – among many other things.
To assess the safety risks of AI, federal agencies by December will be required to implement safeguards to "reliably assess, test and monitor" AI's impacts on the public, mitigate risks of algorithmic discrimination and publicize how the government is using AI.
Harris provided an example: If the Veterans Administration wants to use artificial intelligence in VA hospitals to help doctors diagnose patients, she said, it would need to show the AI system does not produce "racially biased diagnoses."
Biden's AI executive order, by invoking the Defense Production Act, required companies developing the most advanced AI platforms to notify the government and share the results of safety tests. These tests are conducted through a risk assessment process called "red-teaming."
Under the order, the National Institute of Standards and Technology is creating standards for red-team testing aimed at ensuring safety before AI systems are released to the public.
Contributing: Maureen Groppe