‘People Have Bias’: Disinformation-Hunting AI by Firm Close to Pentagon Part of ‘Very Bad Trend’
Opinion | 01:03 GMT 09.10.2020 (updated 01:04 GMT 09.10.2020) | by Evan Craighead
Web developer and technologist Chris Garaffa tells Sputnik that although machine intelligence firm Primer has experienced staff with close government ties, its recent US military contract geared toward combating disinformation provokes questions about the limits of artificial intelligence and the possible misuse of AI by US government officials.
“For all the US military’s technical advantages over adversaries, it still struggles to counter disinformation. A new software tool to be developed for the US Air Force and Special Operations Command, or SOCOM, may help change that,” reads a Defense One article published on October 2, just a day after the announcement of Primer’s multi-million-dollar contract.
Garaffa told Radio Sputnik’s Political Misfits on Thursday that the deal, struck in an effort to combat fake news, is part of a “very bad trend to make AI [determine] what is true and what is not.”
In fact, designing AI to do “anything [other] than summarizing information that should then be reviewed by a human” is problematic, Garaffa told hosts Bob Schlehuber and Michelle Witte.
Amy Heineike and Sean Gourley, former employees of private software and services company Quid, are involved in Primer, as is Brian Raymond, a former CIA officer and ex-director for Iraq for the National Security Council (NSC).
Speaking of Raymond, Garaffa stated that “this is somebody who has very, very close ties to the government and to the intelligence community, having been on the NSC.”
“I don’t trust AI to do this kind of real-world analysis, in real-time, in this state that it’s in,” they said.
“That’s what the Air Force wants it for. [The service] wants it for situational awareness on the ground,” they said, noting that the technology could later be adopted by an array of federal agencies, such as the US Department of Homeland Security and its subagency, US Immigration and Customs Enforcement.
“They could use it to monitor protests, which they, you know, do,” Garaffa highlighted.
There is also the broader issue of trust, and of how something comes to be regarded as fact rather than fiction.
“There’s no information about how Primer addresses any of these questions, or bias that is inherent in the development of AI,” they emphasized.
“Remember, these algorithms are developed by people. People have bias. People have blind spots.”
The views and opinions expressed in the article do not necessarily reflect those of Sputnik.