Algorithm May Decide Who Is a Contributing Member of Society, Civil Rights Groups Warn

The Trump administration’s call for “extreme vetting” has immigration officials exploring an automated process that can scrape through immigrants’ personal information and digital histories. But civil rights groups say that means turning over the immigration process to an algorithm, which would be a technological and ethical nightmare.

Immigration and Customs Enforcement is asking tech firms to build an “overarching vetting contract that automates, centralizes and streamlines the current manual vetting process.” Specifically, ICE seeks to determine potential immigrants’ risk of terrorism and their potential to be a “contributing member of society.”

ICE says its overtures to tech companies are nothing unusual.

“The Department of Homeland Security is tasked with protecting national security by vetting visa applicants to prevent terrorists and criminals from entering the U.S. and ensuring nonimmigrant visitors comply with the terms of their admission to the U.S.,” an ICE spokesperson told The Daily Beast. “The request for information on this initiative was simply that: an opportunity to gather information from industry professionals and other government agencies on current technological capabilities to determine the best way forward.”

But such a program would be a discriminatory catastrophe, over 100 technology experts and civil liberties groups said in two letters to the Department of Homeland Security on Wednesday.

An algorithm-based vetting process poses real problems. So few immigrants have committed acts of terrorism that a computer program couldn’t even generate an accurate predictive model, says the coalition of tech experts representing some of the U.S.’s top universities and research groups.

“There is a wealth of literature demonstrating that even the ‘best’ automated decision-making models generate an unacceptable number of mistakes when predicting rare events. On the scale of the American population and immigration rates, criminal acts are relatively rare, and terrorist acts are extremely rare,” their letter to DHS states. “As a result, even the most accurate possible model would generate a very large number of false positives.”

And an error in the algorithm means misidentifying innocent people as possible criminals, said Rachel Levinson-Waldman, a senior counsel at the Brennan Center for Justice who helped coordinate the pair of letters.

“If you’re building an algorithm trying to find witches, you’re going to end up with the Salem Witch Trials,” Levinson-Waldman told The Daily Beast. “You may think you’re sweeping up witches, but you’re sweeping up a whole bunch of people who have cats and brooms.”

And ICE’s computer program wouldn’t even be the best-case scenario the groups describe in their letter. ICE wants a program that will help identify “positively contributing members of society” who “contribute to national interests.”

“But when you talk about people making ‘positive contributions,’ there’s literally no definition of what that means,” Levinson-Waldman said. “American law doesn’t have a definition. DHS hasn’t set out a definition. What that means is that presumably the contractors who build this program, maybe in coordination with ICE, will be coming up with proxies for what those mean.”

Those proxies might be income- or employment-based. But the language ICE used when discussing the program with tech companies in July was lifted directly from the language of Trump’s first overturned travel ban against immigrants from seven Muslim-majority countries, leading to fears that ICE’s program could be written with implicit biases against Muslims or people from the countries affected by the first ban, Levinson-Waldman said.

“If you’re building an algorithm trying to find witches, you’re going to end up with the Salem Witch Trials.”
— Rachel Levinson-Waldman, the Brennan Center

Internet users could also be more likely to get caught up in ICE’s algorithmic crackdown. When meeting with tech companies in July, ICE expressed interest in a program that could scan social media and other digitally available information, documents obtained by The Intercept in August reveal.

That could mean new immigration woes for “anyone who might have been outspoken, might have criticized the government of the United States, made jokes online,” Levinson-Waldman said.

Technology experts said an algorithm is even less qualified to scan social media than a human.

“Errors in human judgment about the real meaning of social media posts are common,” the groups note in their letter.

Put into translation and run through an algorithm, those misunderstandings can pile up, as in the case of a Palestinian man who was arrested late last month after posting “good morning” in Arabic on his Facebook page. Facebook’s artificial intelligence software mistranslated the greeting as “attack them” in Hebrew. Israeli police arrested the man without consulting any Arabic-speaking officers, who could have pointed out the mistranslation. (Facebook subsequently apologized for the error.)

The groups also pointed to an August report in The Nation, which revealed that DHS was building tools that scraped social media data and allowed the government to search those social media posts by “tone.”

In ICE’s hands, these types of tools could lead immigrants to censor their speech in an effort to meet ICE’s “vague, unmoored notion of what it means to make a positive contribution,” Levinson-Waldman said.
