Regulating artificial intelligence and putting safety protocols in place for it is very hard because the science behind it is still evolving, with no end point in sight.
AI developers themselves are grappling with how to prevent abuse of novel systems, offering no easy fix for government authorities to embrace, Elizabeth Kelly, director of the U.S. Artificial Intelligence Safety Institute, said on Dec. 10.
Cybersecurity is one area of concern, Kelly said, speaking at the Reuters NEXT conference in New York. "Jailbreaks," techniques for bypassing the guardrails that AI labs have established around security and other sensitive topics, can be easy to find, she said.
"It is difficult for policymakers to say these are best practices we recommend in terms of safeguards, when we don't actually know which ones work and which ones don't," Kelly said.
Technology experts are still hashing out how to vet and protect AI across its many dimensions. Synthetic content is another such area: tampering with digital watermarks, which flag to consumers when images are AI-generated, remains so easy that authorities cannot yet devise reliable guidance for industry, she said.
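One reason tampering can be easy: some provenance labels are stored as image metadata rather than embedded in the pixels themselves, and metadata is silently discarded by an ordinary re-save. The minimal Python sketch below illustrates this fragility using the Pillow imaging library; the file names are hypothetical, and this covers only metadata-based labels, not the more robust pixel-level watermarks the article's broader concern also includes.

```python
# Illustration only: metadata-based provenance tags do not survive
# a simple re-encode. File names here are hypothetical examples.
from PIL import Image

original = Image.open("ai_generated.png")
print(original.info)  # metadata dict; may carry provenance/software tags

# Re-encoding the pixels to a new file drops the metadata entirely.
Image.open("ai_generated.png").convert("RGB").save("laundered.jpg", "JPEG")

laundered = Image.open("laundered.jpg")
print(laundered.info)  # provenance tags from the original PNG are gone
```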
The U.S. AI Safety Institute, created under the Biden administration, is addressing such concerns via academic, industry and civil society partnerships that inform its tech evaluations, Kelly said. Asked what will happen to the body after Donald Trump takes office in January, she said AI safety is a "fundamentally bipartisan issue."