Google has been pursuing the idea of self-driving cars since 2009. It has now developed prototypes, which have been in trial mode for the past couple of years. The finished design won’t allow human passengers to drive at all. For testing purposes, though, Google’s prototypes are required to have an operable steering wheel and pedals. This allows the human passenger to step in and act as a safety driver when needed.
Last year in California, one of these cars was pulled over by a police officer for driving too slowly. This raises the question: who is liable in the event of an accident, the car’s AI or the passenger inside the car? The National Highway Traffic Safety Administration (NHTSA) has decided that Google’s self-driving system counts as a “driver,” just not in the traditional sense.
It’s scary to think that self-driving cars are becoming a reality. It’s scary in the sense that this new technology will, over time, take the privilege of driving away from humans. By allowing computers to drive us, we are putting faith in something that could suffer a system crash, be taken over by external sources (hackers with malicious intentions), or encounter other technological difficulties. These problems still have plenty of time to surface, though, since the technology is still emerging. As this concept takes to the streets, I feel that personal freedom will gradually diminish as we place ever more faith in computers.