08/12/2013 - 06:21

Don’t let societal verification turn Borg

Lovely Umayam

Umayam is a graduate student at the Monterey Institute of International Studies and a research assistant at the James Martin Center for Nonproliferation Studies. She is the founder and chief...

Most Trekkies consider the Borg one of the most sinister villains in the Star Trek universe. These pale humanoid aliens with cybernetic appendages have a trait that renders them almost invincible: They function as one collective consciousness and intelligence. Although each cyborg goes about its own day, its cognitive link with other Borg provides its brain with boundless amounts of data, allowing it to process information quickly and adapt with tremendous efficiency. In its constant pursuit of perfection, the computational power of the Borg hive mind purges itself of emotion and executes every action with frightening precision. We have the Borg to thank for its all-too-accurate signature phrase: “Resistance is futile.”

In our non-fictional world, the hive mind—that is, thousands of ordinary people contributing data toward an objective—can also be an awesome force. After the Boston Marathon bombings in April, the FBI sought the public’s help in gathering photographs, videos, spectator accounts, and other potential evidence that would augment its efforts to identify suspects. By sifting through this data, the FBI found the perpetrators just days after the bombing. But as the investigation unfolded, it became evident that the real-life version of the hive mind could be as menacing as the one in Star Trek.

While the Boston Marathon investigators reviewed crowd-sourced multimedia behind the scenes, some crowd-sourcing enthusiasts among the general public made mistakes that intensified confusion. On websites like Reddit and 4Chan, internetlings conducted their own amateur investigations, examining photos and videos in an attempt to identify the culprits and posting their findings as they went. All their conclusions turned out to be horribly wrong. Even more unsettling, the crowd-sourcing participants identified many possible suspects based on their perceived ethnic and religious backgrounds. And with mainstream media outlets vying for attention, misinformation culled from the Internet fueled irresponsible on-air speculation about the “suspects’” appearance and personal character. The result was an overzealous witch hunt fraught with incompetence and bias, one that inflicted damage on the wrongfully accused. While the FBI’s internal crowd-sourcing efforts were successful, the public utterly failed to differentiate between information gathering and crime solving.

The aftermath of the Boston Marathon bombings taught us an important lesson: crowd-sourcing can easily evolve into something uncontrollable, creating opportunities to misconstrue information and prompting the public to conspire and accuse. This has important implications for the nuclear nonproliferation and arms control community, which is exploring the use of societal verification. The aim of this process is to use information that non-expert, non-professional citizens have gathered, shared, and analyzed to make sure countries abide by their arms control and nonproliferation commitments. But how can societal verification steer clear of the kind of crowd-sourcing problems that occurred in April?

Conversations about societal verification currently focus on problems like how participants should handle sensitive information, whether they will be subject to censorship, and general disinterest in the arms control cause. Although these concerns are valid, too little discussion is dedicated to just what the citizen’s role is supposed to be. Literature on societal verification from the early nineties used words like “monitoring” and “whistleblower” to describe the potential role of participants, envisioning a world in which a volunteer could, for example, download a seismic-detector app to help identify and expose a nuclear weapon test. But this kind of direct and assertive involvement may not be the best approach for building trust between the public and the government, or even among citizens. As happened in the aftermath of the Boston Marathon bombings, crowd-sourcing can lead people to make false accusations. Societal verification advocates should reconsider what kind of responsibility citizens should be asked to take on, keeping in mind that the hive mind can get unruly when it is given great power.

Establishing a common understanding of the citizen’s role in societal verification should be a top priority. If societal verification becomes a reality, the agencies or authorities managing such efforts could avoid the kind of unfortunate speculation that occurred during the Boston investigation by giving the public clear guidelines on their crowd-sourcing responsibilities. There is no guarantee that guidelines will smooth over crowd-sourcing’s rough edges, but they are one way to help separate the bona fide helpers from the Internet trolls.

Societal verification need not entail citizens getting involved in direct monitoring, or trying to determine for themselves whether a country is complying with its arms control agreements. An alternative approach would be to call on citizens to make small but substantive contributions to verification activities, so that they are engaged in meaningful work without having to draw conclusions themselves. One idea would be to structure societal verification activities so that people can participate in one specific area—a piece of the puzzle—that contributes to the overall assessment and decision-making process. The Citizen Science Alliance’s web-based projects, which allow users to identify whale songs and star clusters via simple data-mining and analytical exercises, could be a possible template for future societal verification programs. These projects invite individual users to contribute information or help analyze a fraction of the data, which is then aggregated for actual scientists to conduct the final analysis. This method reduces the potential for bias while still inviting the public to be part of scientific research.

Despite past crowd-sourcing blunders, more government agencies, law-enforcement bodies, and non-governmental organizations will spearhead other versions of these activities, as policy makers become more comfortable with the idea of partnering with citizens and tapping into the hive mind. As these projects come and go, the arms control and nonproliferation community should take notes on successes and failures. If they don’t learn their lessons, societal verification efforts may very well be … futile.