Mendoza School of Business

Russia meddling mess will cost tech giants big bucks to fix

Published: November 2, 2017 / Author: Mike Chapple (op-ed for CNBC)


During a series of hearings before House and Senate committees this week, members of Congress trotted out poster boards showing graphic examples of social media advertisements that attempted to influence the 2016 election. With headlines like “Heritage, not hate. The South will rise again!” and “Join us because we care. Black matters!” these ads focused on polarizing, hot-button issues including gun ownership, race relations, immigration, and religion, simultaneously targeting both sides of each debate in an effort to foment unrest.

Attorneys for Facebook, Google, and Twitter sat in the hot seat during these hearings and offered Congress assurances that they take the issue seriously and are implementing new controls to prevent misleading advertising. The issue with those safeguards, however, is that they are not likely to be effective. Many of them depend heavily upon artificial intelligence and machine learning technologies that simply aren’t yet up to the challenge, at least on their own.

At the heart of these approaches is the belief that social media companies can develop models that automatically identify false and misleading advertisements, as well as advertisers operating under a false flag. The reality is that parties seeking to defeat these automated safeguards can continually alter their advertisements until they discover content that passes through the algorithm’s filters.
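To make the "continually alter their advertisements" point concrete, here is a minimal, hypothetical sketch of that evasion loop in Python. The blocklist-style filter, the sample phrases, and the rewrite rules are all illustrative assumptions rather than anything the platforms actually deploy; a real classifier would be far more sophisticated, but the iterate-until-it-passes dynamic is the same.

```python
# Hypothetical illustration of the evasion loop: an advertiser keeps
# rewriting ad copy until it slips past an automated filter. The "filter"
# is a toy keyword blocklist standing in for a real model; all phrases
# and rewrite rules below are invented for illustration only.

import itertools
from typing import Iterator, Optional

BLOCKED_PHRASES = {"the south will rise again", "join us because we care"}

def filter_flags_ad(ad_text: str) -> bool:
    """Toy stand-in for an automated content filter: flag known phrases."""
    text = ad_text.lower()
    return any(phrase in text for phrase in BLOCKED_PHRASES)

def rewrite_variants(ad_text: str) -> Iterator[str]:
    """Generate simple paraphrases an adversary might try (illustrative)."""
    substitutions = [
        ("rise again", "stand tall once more"),
        ("join us", "stand with us"),
        ("because we care", "for those who care"),
    ]
    # Try every combination of substitutions, from one to all of them.
    for r in range(1, len(substitutions) + 1):
        for combo in itertools.combinations(substitutions, r):
            variant = ad_text
            for old, new in combo:
                variant = variant.replace(old, new)
            yield variant

def find_passing_variant(ad_text: str) -> Optional[str]:
    """Iterate variants until one passes the filter -- the evasion loop."""
    for variant in rewrite_variants(ad_text):
        if not filter_flags_ad(variant):
            return variant
    return None

if __name__ == "__main__":
    original = "Heritage, not hate. The South will rise again!"
    print("Original flagged:", filter_flags_ad(original))
    print("Variant that passes:", find_passing_variant(original.lower()))
```

The specific rules do not matter; the point is that whatever model the platform trains, a motivated advertiser can probe it with rewritten copy at essentially no cost until something gets through.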

This doesn’t mean, however, that artificial intelligence can’t be a valuable tool in the fight against fake news. It just can’t be our only tool. Effective approaches to combating misleading advertisements must combine technology with old-fashioned human investigative skills. 

Read Chapple’s entire commentary on the CNBC website.