The Self-Drive Dilemma: The Embryonic Issue of Self-Driving Cars
With our busy daily lives, we’re spending much more time inside our cars. And because life is short, every moment matters and we’re under pressure to make every minute count. Hence, the era of driver-less cars will likely transform our lives even more, whether we like it or not.
Travel between destinations without human operational involvement is no longer far-fetched: according to business-intelligence forecasts, 10 million self-driving cars will be on the road by 2020 (Rouse, 2017). Automakers are already introducing numerous self-driving features in their cars, which carry additional high-margin revenue.
It is projected to be the fastest-growing market for carmakers over the next ten years, especially when we consider that existing semi-autonomous features such as self-braking systems, assisted parking, blind-spot monitoring, and lane-keeping assistance could prevent 95 percent of accidents caused by human error. Most importantly, these semi-autonomous features are not affected by the state of the driver (e.g. tired, angry, sad), and they can scan multiple directions simultaneously, improving road safety overall and reducing auto insurance and health costs.
However, a self-driving car is really a massive computer. So, can we be sure of the safety of such machines? Who is liable for the risk? Even though self-driving cars may not become mainstream for more than a decade, there are definite considerations that car users should start thinking about now.
How does it work?
When we consider self-driving car technologies, three building blocks stand out: sensors, connectivity, and software/control algorithms; together, they could also give mobility to people who are unable to drive due to age or physical impairments (Gupton, 2017). To navigate the car safely, sensors such as radar, ultrasonic sensors, and cameras provide the necessary inputs. Google is using 'lidar' (a radar-like technology that uses light instead of radio waves) and going straight to cars without steering wheels or foot pedals. Connectivity supports the detection of the latest traffic, weather, and surface conditions, construction, maps, adjacent cars, and road infrastructure. This information can be used to monitor a vehicle's operating environment, to anticipate braking, or to avoid dangerous situations.
Software/control algorithms capture the data from the sensors and connectivity and make the necessary changes to speed, steering, braking, and the route. Tesla, for instance, has a software system named 'Autopilot' that relies on high-tech camera sensors as its eyes, and some of its cars are already on the market.
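To make the split between sensors, connectivity, and control software concrete, here is a minimal Python sketch of one control step that fuses hypothetical radar, camera, and connectivity readings into speed, steering, and braking decisions. The class and field names are illustrative assumptions, not any manufacturer's actual interface:

from dataclasses import dataclass

@dataclass
class SensorFrame:
    radar_distance_m: float          # distance to nearest obstacle ahead (radar)
    camera_lane_offset_m: float      # lateral offset from lane centre (camera)
    network_speed_limit_kmh: float   # speed limit reported over connectivity

@dataclass
class ControlCommand:
    target_speed_kmh: float
    steering_correction_deg: float
    brake: bool

def plan_step(frame: SensorFrame, current_speed_kmh: float) -> ControlCommand:
    # Brake when the obstacle is closer than a crude speed-dependent safety distance.
    brake = frame.radar_distance_m < max(10.0, current_speed_kmh * 0.5)
    # Steer gently against the measured lane offset to stay centred.
    steering = -2.0 * frame.camera_lane_offset_m
    # Never plan a speed above what connectivity reports as the limit.
    target_speed = min(current_speed_kmh, frame.network_speed_limit_kmh)
    return ControlCommand(target_speed, steering, brake)

# Example: obstacle 8 m ahead at 50 km/h -> the planner commands braking.
print(plan_step(SensorFrame(8.0, 0.3, 60.0), current_speed_kmh=50.0))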
Insecurity coupled with security
Have you ever thought about what would happen if hackers breached the network connected to a self-driving vehicle and then deactivated key sensors and GPS features? If this happened, hackers could remotely force the vehicle to head to a remote, undisclosed location in order to steal both the vehicle and its contents. In addition, lives could be at risk. Most importantly, the connected technologies, including laser range finders, ultrasonic devices, wheel sensors, cameras, and inertial measurement systems, can be accessed by hackers (Miller, 2014). At the same time, different types of risks will emerge, such as software bugs, information-system incompatibilities, and control failures (Greenberg, 2015).
Unexpected encounters
According to research done at Duke University, it is impossible to code every scenario in advance. For example, self-driving cars can get confused by unexpected encounters, such as when a traffic officer waves vehicles through a red light. As a human, you can recognize body language and other contextual cues of human behavior; the cars cannot. Therefore, handling such a situation is a huge challenge for the computer. Another example is a kid about to dart into the road: the self-driving car's artificial intelligence (AI) must be able to abstract beyond the scenarios it was explicitly programmed for. This is one of the best examples of how advanced technologies are not yet able to deliver 100 percent secure designs (Hamers, 2016).
Human-robot conflict
How does the car notify a passenger as to whether or not they should take over the task? Moreover, how does the car confirm that the passenger is ready to take over the responsibilities of driving? Scientists are still researching how the human brain responds to the notifications a person might receive while in passenger mode (Hamers, 2016).
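One way to picture the handover problem is as a simple escalation protocol: the car alerts the passenger, checks for confirmation, and falls back to a safe stop if none arrives. The sketch below is a hypothetical illustration; the callbacks, thresholds, and timeout are assumptions, not any vendor's actual design:

import time

def request_handover(confirm_ready, alert, safe_stop, timeout_s: float = 10.0) -> bool:
    """Ask the passenger to take over; fall back to a safe stop if they never confirm."""
    deadline = time.monotonic() + timeout_s
    level = 1
    while time.monotonic() < deadline:
        alert(level)               # e.g. visual (1), audible (2), haptic (3) warning
        if confirm_ready():        # e.g. hands detected on wheel, eyes on the road
            return True
        level = min(level + 1, 3)  # escalate the urgency of the notification
        time.sleep(1.0)
    safe_stop()                    # no confirmation received: pull over safely
    return False

# Toy usage: a passenger who only confirms on the second prompt.
responses = iter([False, True])
print(request_handover(lambda: next(responses),
                       lambda lvl: print(f"alert level {lvl}"),
                       lambda: print("safe stop")))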
Lack of sensitivity
Can the self-driving car function in the same way regardless of road conditions? It should be able to detect all the road features around it despite bad weather such as fog, lightning, rain, and snow. Therefore, its sensors should be reliable, accurate, and fit for purpose, with enough detail available to keep the vehicle functioning even in extreme conditions.
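A common engineering answer to this sensitivity problem is redundancy: cross-check independent sensors and degrade gracefully when they disagree. The following Python sketch is a simplified, hypothetical illustration of that idea; the tolerance value and mode names are assumptions:

def fused_obstacle_distance(radar_m: float, lidar_m: float, camera_m: float,
                            tolerance_m: float = 2.0):
    """Cross-check three independent distance estimates and flag disagreement."""
    readings = [radar_m, lidar_m, camera_m]
    spread = max(readings) - min(readings)
    if spread > tolerance_m:
        # The sensors disagree (e.g. the camera is blinded by fog):
        # trust the most pessimistic reading and switch to a cautious mode.
        return min(readings), "degraded"
    return sum(readings) / len(readings), "nominal"

# The camera badly overestimates range in fog -> the fused result is flagged as degraded.
print(fused_obstacle_distance(25.0, 24.5, 60.0))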
Every cloud has a silver lining
When we consider the security implications of self-driving cars, one way to prevent hacks and security breaches would be a centralized token system, which would protect the connection with the vehicle whenever actions were needed. Using this technique, hackers would not only have to reach the network connectivity but would also have to compromise the access token, making it difficult to penetrate two cross-protecting security layers. Furthermore, a sheltered, centralized position within the cloud would likely serve as the preeminent interoperability configuration for these communication networks. But then again, bearing in mind that today's popular applications and mobile phones are regularly hacked, it would be wise for original equipment manufacturers (OEMs), suppliers, and technology providers to collaborate holistically on security measures before a hacked self-driving vehicle ends up weaving down the middle of the road and the consequences become dire (Miller, 2014).
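As a rough illustration of that second layer, the sketch below shows a vehicle rejecting any remote command that does not carry a valid token, even if an attacker has already reached the network. It uses HMAC purely for illustration; the secret handling and command format are assumptions, not a description of any production system:

import hmac, hashlib

SHARED_SECRET = b"provisioned-at-manufacture"   # placeholder secret, illustrative only

def sign_command(command: str) -> str:
    """Issue a token for a command, as a hypothetical central service might."""
    return hmac.new(SHARED_SECRET, command.encode(), hashlib.sha256).hexdigest()

def accept_command(command: str, token: str) -> bool:
    """The vehicle rejects any command whose token does not verify."""
    return hmac.compare_digest(sign_command(command), token)  # constant-time check

cmd = "set_destination:47.61,-122.33"
print(accept_command(cmd, sign_command(cmd)))  # True: valid token
print(accept_command(cmd, "forged-token"))     # False: network access alone is not enough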
Policy for protection
Policymakers should focus their research on agile innovation and improve the way they use aggregate volumes of data, cumulative patterns, and accumulated incidents. It is imperative to have a holistic, collaborative stakeholder platform connected to the automotive ecosystem, including insurers, auto manufacturers, technology companies, and regulators. In fact, stakeholder organizations such as policy institutes, insurers, automobile manufacturers, and suppliers should conduct rigorous pilot research into what inspires or inhibits adoption in each geographical location and environment. Their research findings must be integrated with the corresponding progress in each jurisdiction and highlight leading technological models that can serve as default templates for broader roll-out.
Twitter: @dineshabeywick
Facebook: http://www.facebook.com/samuraidinesh
LinkedIn: https://lk.linkedin.com/in/samuraidinesh