The Bittersweet Sweepstakes to Build an AI That Destroys Fake News

Autonomous 18-wheelers are now driving the roads. Coffee table gadgets are recognizing spoken English nearly as well as humans. Smartphone apps instantaneously translate conversations between people speaking as many as nine different languages. But for Dean Pomerleau, none of this is all that surprising.

Pomerleau built a self-driving car way back in 1989, when the first George Bush was president, and it navigated private roads using a neural network, the same AI technology that underpins modern gadgetry like the Amazon Echo and Microsoft Translator. This vehicle wasn’t ready for the public highways, thanks to the limited computing power of the 20th century. But as a graduate student at Carnegie Mellon, Pomerleau was one of the few who understood the promise of this AI well before the world had the vast amounts of computing power and data needed to push it into everyday life. Now, he’s working on a much harder AI problem: fake news.

A quarter-century after his self-driving car appeared in Byte magazine, Pomerleau is an adjunct professor at Carnegie Mellon, and last month, as so many lamented the role of fake news, he put out a call on Twitter, challenging the AI community to build an algorithm that could identify fake news and remove it from online services like Twitter, Google, and Facebook. It was an open-ended bet, with Pomerleau putting down $1,000. And the community took him up on it.

His bet now has a website, a Slack channel, a GitHub code repository, and a Twitter hashtag, #FakeNewsChallenge, and over the past few weeks, nearly 40 researchers, academics, technologists, and independent hackers have joined the grassroots project. Delip Rao, a machine learning expert who helped build the speech recognition system that underpins the Amazon Echo, recently put down another $1,000 in prize money. Pomerleau, Rao, and the rest of these hackers will now compete in teams, racing over the next six months to identify fake news using neural networks and other AI techniques.

And they will fail.

Flagging Fakery

Neural networks can recognize cats in YouTube videos, spot computer viruses, and even help a car drive down the road on its own. But they can’t identify fake news, at least not with real certainty. Part of the problem is that the characteristics of fake news stories are enormously hard to pin down. Distinguishing what’s fake requires not just the kind of pattern recognition that AI is so good at; it involves human judgment, as Pomerleau himself acknowledges. A machine that can reliably identify fake news is a machine that has completely solved AI. “It would mean AI has reached human-level intelligence,” he says. What’s more, even humans can’t agree on what’s fake and what’s not. The news is always a tension between objective observation and subjective judgment. In many cases, “there is no right answer,” Pomerleau admits.

A machine that can reliably identify fake news would mean AI has reached human-level intelligence.

Pomerleau’s hope, rather, is that he and other researchers can build algorithms that mitigate the fake news problem: algorithms that can flag potentially fake news for humans to review. It’s another case of AI not exactly replacing humans but working alongside them, helping us perform tasks with greater speed and accuracy. Paired with human editors, the kinds of algorithms produced by Pomerleau’s challenge could indeed allow the likes of Google, Facebook, and Twitter to catch especially egregious stories much more quickly than before.
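To make the flag-for-review idea concrete, here is a minimal sketch of such a human-in-the-loop filter. Everything in it is assumed for illustration: the score_story stub, the suspicious-word list, and the 0.7 cutoff are hypothetical stand-ins, not anything from Pomerleau’s challenge.

```python
# Minimal human-in-the-loop sketch: a hypothetical model scores each story,
# and only the most suspicious ones are routed to human fact checkers.
from dataclasses import dataclass


@dataclass
class Story:
    headline: str
    body: str


def score_story(story: Story) -> float:
    """Hypothetical classifier stub: returns an estimated probability that the
    story is fake. A real system would use a trained model instead."""
    suspicious_words = {"shocking", "hoax", "miracle cure"}
    hits = sum(word in story.headline.lower() for word in suspicious_words)
    return min(1.0, 0.2 + 0.3 * hits)


def triage(stories, review_threshold=0.7):
    """Split stories into those flagged for human review and those passed through."""
    flagged, passed = [], []
    for story in stories:
        (flagged if score_story(story) >= review_threshold else passed).append(story)
    return flagged, passed


if __name__ == "__main__":
    stories = [
        Story("Shocking hoax uncovered by anonymous blogger", "..."),
        Story("City council approves new transit budget", "..."),
    ]
    flagged, passed = triage(stories)
    print(f"{len(flagged)} flagged for review, {len(passed)} passed through")
```

The point of the design is the division of labor: the algorithm only decides which stories are worth a human’s time, and the final call stays with the human reviewer.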

These companies are likely working on their own algorithms, and no doubt, they too see this AI as something that will operate alongside humans. Earlier this month, Yann LeCun, the head of AI research at Facebook, told a group of reporters that technology could solve the fake news problem. But like Pomerleau, he stopped short of saying it could solve the problem on its own. “The question is how does it make sense to deploy it?” he said. “And this isn’t my department.”

LeCun’s boss, Facebook CEO Mark Zuckerberg, knows that human eyes are also required. After all, this is how the company works to remove lewd photos and hate speech from its vast social network. Facebook has done both with considerable success through a combination of humans and technology, sometimes more humans than technology. And during a pre-election interview at Facebook headquarters, Zuckerberg told me this is also how the company will use new algorithms designed to predict when Facebook users are at risk of suicide. The technology will alert trained human professionals who can then evaluate the situation in full. “The sum of these two things is much more powerful than either of them by themselves,” he said.

Fake news is no different. “We need humans in the loop,” says Rao, the ex-Amazon Echo engineer who has joined the Fake News Challenge. “Expert judgment is indispensable.”

The Virtuous Circle

What AI experts like Rao can do is put a dent in the problem. This starts with an online database filled with bogus stories.

Seven years ago, researchers at Stanford University started building a massive database of digital photos called ImageNet, hoping to accelerate the development of computer vision. Neural networks, you see, learn tasks by analyzing vast amounts of carefully labeled data. ImageNet was designed to feed these algorithms, and it worked. The world now has online services like Google Photos, which can instantaneously recognize objects and faces in digital photos. Rao and Pomerleau aim to build a similar database of fake news.
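As a rough illustration of why such a labeled database matters, here is a small sketch of the standard supervised-learning recipe: labeled examples go in, a classifier comes out. The six hand-written headlines and the scikit-learn pipeline below are assumptions for illustration, not the challenge’s actual data or method.

```python
# Sketch of the supervised-learning recipe a labeled fake-news database would
# enable: labeled examples in, a classifier out. The toy data below is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples (1 = fake, 0 = legitimate). A real database
# would hold many thousands of carefully vetted stories, not six headlines.
headlines = [
    "Celebrity secretly replaced by clone, insiders say",
    "Miracle fruit cures every known disease overnight",
    "Anonymous blogger proves moon landing was staged",
    "Senate passes infrastructure spending bill",
    "Local school board approves new science curriculum",
    "Researchers publish study on coastal erosion",
]
labels = [1, 1, 1, 0, 0, 0]

# Bag-of-words features plus logistic regression: about the simplest baseline.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(headlines, labels)

# Probability that a new headline is fake, according to this toy model.
proba = model.predict_proba(["Shocking miracle cure revealed by anonymous insiders"])[0][1]
print(f"Estimated probability of being fake: {proba:.2f}")
```

The same recipe is what ImageNet made possible for photos; the hard part for fake news is not the code but assembling labels that people can agree on.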

‘We have to spend a lot of time simply defining what fake news is.’

“This alone is a hard problem,” says Rao, who runs a machine learning consultancy called Joostware. “We have to spend a lot of time just defining what fake news is.” They must separate parody sites and honest mistakes from blatantly fake news meant to deceive, while also deciding how to treat news that is exaggerated or distorted in some way.

The hope is that this database can help train all sorts of fake news algorithms, and that these algorithms can find a home at Snopes.com, PolitiFact, or FactCheck.org, websites where humans are already working to separate the real from the fake. As with so many other AI projects, this could eventually create a virtuous cycle of humans, data, and AI. As algorithms and human fact checkers identify more and more fake news, this ever-expanding collection of data can help create better algorithms: a virtuous circle.
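To show what that feedback loop might look like in code, here is a brief sketch in which verdicts from human fact checkers are folded back into the labeled dataset and the classifier is refit. The function name and data shapes are assumptions for illustration, reusing the same toy pipeline idea as above.

```python
# Sketch of the virtuous circle: human fact-checker verdicts on flagged stories
# are appended to the labeled dataset, and the classifier is retrained on it.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline


def retrain(headlines, labels, human_verdicts):
    """human_verdicts: list of (headline, label) pairs confirmed by fact checkers."""
    for headline, label in human_verdicts:
        headlines.append(headline)
        labels.append(label)
    # Refit on the expanded dataset; a larger, better-labeled set should yield
    # a better classifier, which in turn surfaces better candidates for review.
    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(headlines, labels)
    return model
```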

Ultimately, this circle could include services like Facebook and Twitter. Just yesterday, Facebook announced that it will work with sites like Snopes and FactCheck.org to identify fake news on its own network. If people like Pomerleau add reliable AI algorithms to the mix, Facebook could potentially catch egregious stories with greater speed, perhaps even before they go viral. “Our efforts just became a whole lot more relevant,” Pomerleau said over Slack as Facebook unveiled its new policies.

But plenty of obstacles loom.

This week, two emails turned up in my inbox, both related to fake news. One pointed me to Pomerleau and his AI contest. The other carried the subject line “More on Fake News Reports,” and when opened, it listed what it described as eight leading sources of fake news. This included CNN, the Associated Press, The New York Times, and Hillary Clinton. And then, as a kicker, this email suggested I watch Fox News instead. Pomerleau and his hackers have no intention of putting The New York Times into their database. And this country of ours includes plenty of people who would gladly tag Fox News as fake. Which is only to say that no database will please everyone.

Pomerleau’s Fake News Challenge is more relevant than ever, but also more challenging. Even as he was hailing Facebook’s most recent moves to quash fake news, his hashtag, #FakeNewsChallenge, was being hijacked by conspiracy theorists calling for a boycott of CNN over purported lies about Donald Trump, questioning whether Barack Obama was born in the US, and generally spewing hate speech at ethnic minorities. These tweets piled up at a rate of about 25 a minute. “Do we really believe a method of flagging fake news on social media stands a chance against that onslaught?” Pomerleau said. “I’m at a loss.”
