President Biden signed what he called a “landmark” executive order (EO) on artificial intelligence, drawing mixed reviews from experts in the rapidly developing technology.
“One key area the Biden AI [executive order] is focused on includes the provision of ‘testing data’ for review by the federal government. If this provision allows the federal government a way to examine the ‘black box’ algorithms that could lead to a biased AI algorithm, it could be helpful,” Christopher Alexander, chief analytics officer of Pioneer Development Group, told Fox News Digital.
“Since core algorithms are proprietary, there really is no other way to provide oversight and commercial protections,” Alexander added. “At the same time, this needs to be a bipartisan, technocratic effort that checks political ideology at the door or this will likely make the threat of AI worse rather than mitigate it.”
Alexander’s comments come after Biden unveiled a long-anticipated executive order containing new regulations for AI, hailing it as the “most sweeping actions ever taken to protect Americans from the potential risks of AI systems.”
The executive order will require AI developers to share safety test results with the government, create standards to monitor and ensure the safety of AI, and erect guardrails meant to protect Americans’ privacy as AI technology rapidly grows.
“AI is all around us,” Biden said before signing the order, according to a report from The Associated Press. “To realize the promise of AI and avoid the risk, we need to govern this technology.”
Jon Schweppe, policy director of American Principles Project, told Fox News Digital that the concerns about AI that led to the executive order are “warranted” and complimented some of the details of Biden’s executive order, but also argued that some of the order focuses “on the wrong priorities.”
“There’s a role for direct government oversight over AI, especially when it comes to scientific research and homeland security,” Schweppe said. “But ultimately we don’t need government bureaucrats micromanaging all facets of the issue. Certainly we shouldn’t want a Bureau of Artificial Intelligence running around conducting investigations into whether a company’s AI algorithm is adequately ‘woke.’”
Schweppe argued that there is also a role for “private oversight” of the growing technology, while also noting that AI developers should be exposed to “significant liability.”
“AI companies and their creators should be held liable for everything their AI does, and Congress should create a private right of action giving citizens their day in court when AI harms them in a material way,” Schweppe said. “This fear of liability would lead to self-correction in the marketplace — we wouldn’t need government-approved authentication badges because private companies would already be going out of their way to protect themselves from being sued.”
The order was designed to build on voluntary commitments the president helped broker earlier this year with some of the largest technology companies, which will require the firms to share data about AI safety with the government.