8/18/2014

Don’t fear the robot car bomb | Bulletin of the Atomic Scientists

08/17/2014 - 21:45

Patrick Lin

Patrick Lin is director of the Ethics and Emerging Sciences Group and an associate philosophy professor at California Polytechnic State University, San Luis Obispo.
Within the next few years, autonomous vehicles—alias robot cars—could be weaponized, the US Federal Bureau of Investigation (FBI) fears. In a recently disclosed report, FBI experts wrote that they believe robot cars would be “game changing” for law enforcement. The self-driving machines could be professional getaway drivers, to name one possibility. Given the pace of developments in autonomous cars, this doesn’t seem implausible.
But what about robotic car bombers? If car bombs no longer require sacrificing the driver’s life, then criminals and terrorists might be more likely to use them. The two-page FBI report doesn’t mention this idea directly, but this scenario has caused much public anxiety anyway—perhaps reasonably so. Car bombs are visceral icons of terrorism in modern times, from The Troubles of Northern Ireland to regional conflicts in the Middle East and Asia.
In the first half of 2014, about 4,000 people were killed or injured in vehicle bombs worldwide. In the last few weeks alone, more than 150 people were killed by car bombs in Iraq, Afghanistan, Yemen, Somalia, Egypt, and Thailand. Even China saw car bombings this summer.
America is no stranger to these crude weapons either. In the deadliest act of domestic terrorism on US soil, a truck bomb killed 168 people and injured about 700 others in Oklahoma City in 1995. That one explosion caused more than $650 million in damage to hundreds of buildings and cars within a 16-block radius. In 1993, a truck bomb parked underneath the World Trade Center killed six people and injured more than a thousand others in the ensuing chaos. And earlier this year, jihadists were calling for more car bombs in America. Thus, popular concerns about car bombs seem all too real.
But what do automated car bombs mean to criminals and terrorists? Perhaps the same as anything else that is automated. Generally, robots take over those jobs called the “three D’s”: dull, dirty, and dangerous. They bring greater precision, more endurance, cost savings, labor efficiencies, force multiplication, ability to operate in inaccessible areas, less risk to human life, and other advantages.
But how would these benefits play out in robot car bombs? Less well than might be imagined.
Pros and cons. For the would-be suicide car bomber, a robotic car means eliminating the pesky suicide part. By replacing the human driver who is often sacrificed in the detonation of a car bomb, an autonomous vehicle removes a major downside. This aspect is related to the worry that nation-states may be quicker to use force because of armed drones, since those robots remove the political cost of casualties to their own side. When costs go down, adoption rates go up; therefore, we can expect to see an increase in suicide car-bombing incidents, driven by autonomous technologies.
Or so the thinking goes.
But this analysis is too pat. Part of the point for some guerilla fighters—though probably not for ordinary criminals—is martyrdom and its eternal benefits. So, dying isn’t so much of a cost to these terrorists, but rather more of a payoff. This demographic probably wouldn’t be tempted much by self-driving technology, since they are already undeterred by death.
Of course, it may be that a more calculating terrorist, who still seeks glory, would like to do as much damage as possible before he kills himself. (Though some suicide bombers are women, most of them are still men.) In this case, he may want to mastermind several car-bombing attacks before finally dying in one. Robot cars would enable him to do so, and still allow him to get credit for his work, an issue of importance to terrorists, if not to criminals.
And at the least, those not motivated by ideology might not want to die quite so soon. For them, a robotic driver would be an attractive accomplice.
However, other options are already available for terrorists who do not want to harm themselves—yet these options have not created any panic about car-bombing attacks. For instance, both criminals and guerilla fighters have been known to recruit and train others to do their bidding. Those designated as drivers sometimes are not even aware of their explosive cargo, which avoids the trouble of indoctrinating them toward fanatical self-sacrifice. Terrorists could kidnap innocent people and coerce them to become suicide bombers, which is reportedly occurring today in Nigeria.
So if ease and costs are considerations, there are better alternatives than transforming robot cars into mobile bombs. For one thing, the only production cars being built today with self-driving capabilities are the Mercedes-Benz S-Class sedan (which sells for about $100,000) and the Infiniti Q50 sedan (about $40,000)—not exactly tools for the budget-conscious terrorist, even if prices do fall in the future. Even then, their capacity to operate autonomously is primarily limited to things such as staying within a lane and following the flow of traffic on a highway.
Google’s self-driving car makes even less sense for this evil purpose. As the most advanced automated car today, it would cost more than a Ferrari 599 at over $300,000—if it were for sale, which as a research vehicle it isn’t. (Even if a terrorist could steal it, good luck figuring out how to turn it on.) Anyway, the car can operate autonomously only around Google’s headquarters, since ultra-precise maps beyond that area don’t yet exist. In sum, it is not a good choice for targets outside Mountain View, California.
If a fanboy terrorist really did want to go high-tech, he could more easily rig his own car to be driven by remote control. Or kidnap engineers to do the work, as drug cartels in Mexico have done to build communication systems. Or just get some kamikaze micro-drones. All of these options are more likely and more practical, getting the same job done as autonomous car bombs.
Besides bombing, are there post-execution reasons for using a robot car, such as minimizing forensics evidence? A captured driver, or even the DNA of one who is blown up, can attribute an attack to a particular group. But the same could be achieved by stealing a car and coercing an innocent person to drive.
Robot cars may actually be worse for the criminal who wants to keep a low profile. If they are networked and depend on GPS for navigation, the cars could be tracked as soon as they leave the driveway of the suspect under surveillance. GPS records could be searched to piece together a timeline of events, including where the car has been on the days and weeks leading up to its use as a weapon.
Furthermore, a self-driving car without a human in it at all won’t be in production any time soon. A human will always be “in the loop” for the foreseeable future; at the moment, any “self-driving” car is supposed to have someone in the driver’s seat, ready to retake control at a moment’s notice, such as when an unexpected construction detour or bad weather interferes with the car’s sensors. So a robot car bomb with no driver in it would likely raise immediate suspicions, if the car would even move at all.
Admittedly, hacks have already appeared that disable the safety features meant to ensure a human is present and alert. Networked and autonomous cars present many more entry points for hackers, possibly allowing a very knowledgeable criminal to cyber-hijack a robot car.
Theoretically, a terrorist could want to use a robot car as a bomb while he’s still in it—that is, forego the opportunity to spare his own life. It could be that he tends to get lost easily, wants to read last-minute instructions behind the wheel, has to stay in contact with his home base, or must baby-sit the trigger mechanism. A robot car would offer these benefits, however minor they may be.
Possible solutions. The threat of robot car bombers, then, seems unlikely to become a reality, but it is not impossible. Some solutions to that possible threat include requiring manufacturers to install a “kill switch” that law enforcement could activate to stop an autonomous vehicle. This plan has already been proposed in the European Union for all future cars. Or sensors inside the car could be used to detect hazardous cargo and explosives, similar to the sensors at airport security checkpoints. Or regulators could require special registration of owners of autonomous vehicles, cross-referencing customers with criminal databases and terrorist watch-lists.
But any of these options will face fierce resistance from civil rights advocates and other groups.
And a determined terrorist can get around technological safeguards and firewalls.
At the end of the day, there’s still no substitute for good old-fashioned counterterrorism, human intelligence, and vigilance: in recent weeks, security checkpoints foiled car-bombing plots in Northern Ireland and Jerusalem. Overall, it makes more sense to use these traditional methods; it is easier to continue using checkpoints and to regulate and monitor the ingredients used in car bombs than to oversee the cars themselves.
In truth, with robot car bombs, domestic and international security is facing a very old threat. The problem isn’t so much with robots but with stopping enemy vehicles from penetrating city walls with a destructive payload, which is a problem as old as the Trojan horse of ancient Greek mythology. (There’s a reason why a certain kind of malware goes by the same name.) Robot cars merely present a new way to deliver the payload.
Maybe this is a problem that doesn’t demand immediate action and is just part of the “new normal”—if it even comes to pass. For hundreds of years, just about every kind of vehicle has been turned into a mobile bomb: horse-drawn buggies, boats, planes, rickshaws, bicycles, motorcycles, and trains.
This could be a case of misplaced priorities. Or, as journalists Matthew Gault and Robert Beckhusen phrased it in War Is Boring: “Americans freak out over small threats and ignore big ones.” For example, a terrorist with a single well-placed match in California during the summertime could easily do a massive amount of economic damage and disrupt transportation, businesses, and ecosystems. It’s the ultimate in low-tech terrorism, yet could plausibly cause hundreds of millions of dollars in damage.
But the appearance of just one robot car bombing could set back the entire autonomous-driving industry, in addition to the loss of life and the property destroyed. And there are other uses, misuses, and abuses related to autonomous cars that should be of just as much—if not more—concern.
First-world problems. Weirdly, robot car bombs seem to be a decidedly Western—or even American—fear, even though the actual threat posed by car bombs generally lies elsewhere. Most suicide car bombs go off in the Middle East in a low-tech way, whereas they are very rare in the United States. But because most of the news coverage about a hypothetical robot car bomb has appeared in the US media, it gives the false impression that it’s a first-world problem. Autonomous cars would have a hard time operating on Afghanistan’s dirt roads without lane markings, for instance, even if one could be obtained there.
Perhaps the reason for America’s obsession is that the car bomb is a special, iconic weapon of terror—our prized possession turned against us. Unlike rockets and drone missiles that fall from the sky, car bombs can be more insidious. They would infiltrate civilized society, sneaking up on its most vulnerable points. Like matches, cars are omnipresent in the modern world, and thus nearly impossible to control. But very few elaborate car bombings have been attempted, even though they could be done today via remote control or through the use of a kidnapped driver, for example. Simple still works. As an actual threat, the robot car bomb seems overblown.
