Tuesday, October 13, 2015 by Carol Young
A recent RAND report imagines how law enforcement might interact with self-driving cars in the future. It sets up a scene of a police officer stopping a self-driving car that threatens to roll into a packed intersection:
The police officer directing traffic in the intersection could see the car barreling toward him and the occupant looking down at his smartphone. Officer Rodriguez gestured for the car to stop, and the self-driving vehicle rolled to a halt behind the crosswalk.
What are the possible repercussions of police being able to direct self-driving cars? The abuse of such power could lead to innocent citizens being stopped based on unreasonable suspicions. As Techdirt asks, could a police officer remotely stop your car if you refuse to pull over? Beyond that, what about the safety of drivers whose cars are vulnerable to hackers? “Thanks to what will inevitably be a push for backdoors to this data, we’ll obviously be creating entirely new delicious targets for hackers.”
Another Techdirt article discusses the blind spot of most smart devices: security. “As car infotainment systems get more elaborate, and wireless carriers increasingly push users to add their cellular-connected car to shared data plans, the security of these platforms has sometimes been an afterthought.”
If law enforcement is given the right to stop self-driving cars, what happens if an economic collapse brings chaos? Would law enforcement, or the military, have the right to stop anyone from going anywhere in order to control the masses? What exactly are the boundaries of such powers, and what are the rights of the people?
Technology needs to be guided by codes of ethics that people and nations can agree on. Should law enforcement, the military, and the government be able to take advantage of our rapidly advancing technology without the consent of the people?