Experts tell Fox News Digital that the Biden administration’s plan to create an artificial intelligence (AI) safety institute may prove “necessary” but not “sufficient” to address the potential risks of the developing technology.
“Chances are (the algorithm) is not where the majority of the risk is,” said Phil Siegel, founder of the Center for Advanced Preparedness and Threat Response Simulation (CAPTRS). “It’s more likely that the risk lies with users either using it for harm or simply abusing it.”
President Biden on Monday signed an executive order that the White House said included the “most sweeping actions ever taken to protect Americans from the potential dangers of artificial intelligence systems,” requiring companies to notify the government when they train new models and to share the results of red-team safety tests.
“These measures will ensure that artificial intelligence systems are safe, secure and reliable before companies make them public,” the White House said of the executive order.
The administration also announced the establishment of the AI Safety Institute – under the supervision of the National Institute of Standards and Technology – which will “set the rigorous standards for extensive red team testing to ensure safety before public release.”
Speaking at the Bletchley Park summit in the U.K., U.S. Commerce Secretary Gina Raimondo said Wednesday that the Biden administration will use its new AI Safety Institute to assess known and emerging risks of “frontier” artificial intelligence models, and that the private sector “must step up.”
Siegel compared the White House’s approach to that of an airline that audits a plane for “safety” but does not audit maintenance procedures, pilot training or crews.
“All of it is necessary,” he said. “Similarly, a safety panel can’t just vet the algorithms. It must also vet the processes for users.”
“We can get technology providers to help,” he continued. “Just as we require banks to use KYC (know your customer) procedures to prevent money laundering, we can require technology providers to apply KYC to the safe use of their applications,” Siegel added.
The Center for Advanced Preparedness and Threat Response Simulation regularly tackles these kinds of problems, examining decision-making and intuition among users in public health, engineering, public policy and other disciplines, and training them through simulation games to improve those skills. In that work, user behavior remains a central concern, just as it will be with artificial intelligence.
Since earlier this year, many critics of AI have pointed to the myriad pitfalls the technology presents, from deepfake technology that disrupts elections and creates child abuse material to the use of AI-generated algorithms to crack even the most complex digital security systems and gain access to sensitive information.
Christopher Alexander, head of analytics at Pioneer Development Group, acknowledged that while it’s a good idea to force companies to share their information instead of hiding it — in what one expert previously described to Fox News Digital as a “black box” of content — the current system appears to have “no transparent appeals process.”
Alexander told Fox News Digital that he is also concerned that “political agendas could bias the security clearance process,” because the agency, established by executive order, puts its leadership at the discretion of the sitting president.
Some critics have already raised concerns about political bias, pointing to China’s requirement that any new AI technology conform to the ruling party’s socialist values.
Fox News Digital’s Greg Norman and Reuters contributed to this report.