Vehicle Functions (Towards Automated Driving)

This brings us directly to the discussion about the vehicle functionality of the future as enabled by the IoT. Naturally, the Holy Grail here is automated driving. Many publications also talk about the self-driving car or autonomous driving. We believe it involves much more than just vehicle autonomy – after all, the IoT will enable the connected car. In the following section, we use the term “automated driving” to refer to both autonomous driving and connected vehicles.

The Roadmap towards Automated Driving

Although a lot has been written about the roadmap that will lead us to automated/autonomous driving, most people agree that “autonomous driving is not going to be a Big Bang, it’s going to be a series of little steps” (Toscan Bennett, Volvo [LL1]).

Many different factors will have an impact on the evolution of automated driving: user acceptance, technology, legislation and, last but not least, insurance actuaries – the statisticians who calculate insurance risks and premiums.

A roadmap for automated driving as developed by experts at Bosch is shown in the figure below. A key assumption in this roadmap is that the ability to efficiently combine data from different data sources in real time will be decisive. Single sensors are used for basic features such as adaptive cruise control (ACC) and lane-keeping guidance. Sensor data fusion enables the merging of data from multiple sources to support advanced solutions such as integrated cruise assist and highway assist. The addition of map data will support the highway pilot feature, which already provides highly automated driving functions. The biggest challenge for automated driving will be driving in densely populated urban areas, because of the many associated risks, such as crossing pedestrians, children at play, etc.

Roadmap towards automated driving (Source: Bosch)

Predicting future developments in this space is difficult. According to Wolf-Henning Scheider (member of the Bosch management board), Bosch has already received production orders to supply the radar, camera, control units, and other technology needed for semi-automated driving in 2017 and 2018. Based on this, he has laid out a four-stage roadmap [TAP1]:

  • In 2017, the first cars with integrated highway assist functionality will become available, with fully automated lane keeping for speeds of up to 75 mph.
  • In 2018, the highway assist functionality will be extended to support higher speeds, as well as automatic lane changes based on driver approval. Drivers will still be required to keep their eyes on the road at all times.
  • By 2020, the highway pilot functionality will support fully automated driving on highways. The driver will be notified if they are required to take over control of the car, and if this doesn’t happen quickly enough, the car will pull over and stop.
  • By around 2025, Bosch believes that the auto pilot function will support fully automated door-to-door transportation without the need for any intervention on the part of the driver.

Given that in 2004, not one of the cars in the first DARPA Grand Challenge made it past the first 7 of the planned 150 miles, these are clearly ambitious goals. Nevertheless, the market in general seems to agree, with many of the OEMs at CES 2015 confirming similar timelines [FC1].

Automated Driving – Technologies

Naturally, sensors will play a key role in autonomous driving. Modern cars already use a number of different sensors, including tilt sensors (used by the light control system), high-pressure sensors (used by the Electronic Stability Program (ESP)), torque sensors (steering system), steering wheel angle sensors (steering and ESP systems), acceleration sensors and seat occupation sensors (both used by the airbag control system), wheel rotation angle sensors (ESP system), and wheel angular velocity sensors (Anti-lock Braking System (ABS)) [LA11].

For automated driving, different sensors are generally combined to create a virtual image of the vehicle’s surrounding environment (a simple fusion sketch follows the list below). These include:

  • LIDAR: This technology uses pulsed laser light to measure distances by analyzing the reflected light. Used for adaptive cruise control (ACC), LIDAR devices are mounted on the front of the vehicle to monitor the distance between the vehicle and any car in front of it.
  • Radar: Millimeter-wave radars are commonly used, with sensors placed at the front, sides, and rear quarters of the vehicle.
  • Ultrasonic: Used for close-range obstacle detection, for example in automatic parking.
  • Cameras: Used to identify nearby hazards (pedestrians and cyclists), read road signs, and detect traffic lights.
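
To make the idea of merging overlapping sensor views more concrete, here is a minimal Python sketch that associates radar and camera detections by spatial proximity. The class and function names are illustrative assumptions, not part of any production stack:

    from dataclasses import dataclass

    @dataclass
    class Detection:
        x: float          # longitudinal distance ahead of the car, in meters
        y: float          # lateral offset, in meters
        source: str       # "radar", "camera", or "fused"
        label: str = "?"  # object class, if the sensor can provide one

    def fuse_detections(radar, camera, max_gap=2.0):
        """Merge radar and camera detections that lie within max_gap meters.

        Radar contributes a reliable distance; the camera contributes the
        object label. Detections without a partner are kept as they are.
        """
        fused, unmatched = [], list(camera)
        for r in radar:
            match = next((c for c in unmatched
                          if abs(c.x - r.x) + abs(c.y - r.y) < max_gap), None)
            if match is not None:
                unmatched.remove(match)
                fused.append(Detection(r.x, r.y, "fused", match.label))
            else:
                fused.append(r)
        return fused + unmatched

A production system would track objects across frames (typically with Kalman filters) rather than matching single snapshots, but the basic association step is the same.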

Google is seen as one of the pioneers of autonomous driving. The central element of the Google car is a laser range finder (LIDAR) that is mounted on the roof of the car [IE1]. The device generates a detailed 3D map of the environment. The system then combines these laser measurements with high-resolution maps of the surrounding area. Additional sensors include four radars to deal with fast-flowing traffic on freeways (mounted on the front and rear bumpers), as well as a camera close to the rear-view mirror that detects traffic lights. For positioning, GPS is combined with an inertial measurement unit (IMU), which measures the actual movement of the car, thus complementing the GPS data.
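
The GPS/IMU combination can be illustrated with a complementary filter: the IMU integrates acceleration for smooth, high-rate motion estimates, while the occasional GPS fix corrects the accumulated drift. The following one-dimensional Python sketch is a deliberate simplification; the weighting factor and interfaces are assumptions, not details of Google’s system:

    class GpsImuFilter:
        """Blend drifting IMU dead reckoning with noisy but drift-free GPS."""

        def __init__(self, gps_weight=0.02):
            self.position = 0.0       # meters along the track
            self.velocity = 0.0       # meters per second
            self.gps_weight = gps_weight

        def predict(self, acceleration, dt):
            """IMU step: integrate acceleration (runs at a high rate)."""
            self.velocity += acceleration * dt
            self.position += self.velocity * dt

        def correct(self, gps_position):
            """GPS step: nudge the estimate toward the fix (runs at ~1 Hz)."""
            self.position += self.gps_weight * (gps_position - self.position)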

Technologies for automated driving

Other concepts seen as important for automated driving are Car2Car and Car2Infrastructure, together also referred to as Car2X (or Vehicle2X). These technologies enable vehicles to communicate either with other vehicles (Car2Car), or with traffic infrastructure such as traffic lights (Car2Infrastructure). Car2Car technology should also enable predictive driving, because it allows cars to communicate with cars far ahead of them. Naturally, this would require all cars to support the same interfaces – a major obstacle in itself.
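
As a rough illustration of the kind of message a Car2Car system might broadcast, the sketch below encodes a simple hazard notification. Real deployments use standardized message sets (such as the ETSI CAM/DENM messages in Europe) over dedicated radio channels; the JSON format and field names here are assumptions chosen only to keep the example self-contained:

    import json, time

    def make_hazard_message(vehicle_id, lat, lon, hazard):
        """Encode a simple hazard notification for broadcast to nearby cars."""
        return json.dumps({
            "vehicle_id": vehicle_id,
            "timestamp": time.time(),
            "position": {"lat": lat, "lon": lon},
            "hazard": hazard,          # e.g., "stationary_vehicle"
        }).encode("utf-8")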

Automated Driving – System Architecture

The key challenge for any automated driving system is to manage and combine the significant amounts of data coming from the different sensors, and to create a consistent model from this data that can be used to make decisions about driving behavior.

A common solution to this problem is the creation of a hierarchical sensor fusion architecture [TI1], as shown in the figure below (again using our IoT AIA template).
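
In code, the hierarchy can be pictured as a pipeline in which each level consumes the output of the level below. The skeleton below follows that structure; all class names and method signatures are illustrative assumptions:

    class SensorProcessor:
        """Level 1: turns raw (often analog) readings into digital form."""
        def process(self, raw_reading):
            return raw_reading  # e.g., LIDAR point cloud -> 3D map segment

    class FusionNode:
        """Level 2: merges the outputs of several sensor processors."""
        def __init__(self, processors):
            self.processors = processors
        def fuse(self, raw_readings):
            # Association and merging logic omitted; see the fusion example above.
            return [p.process(r) for p, r in zip(self.processors, raw_readings)]

    class EnvironmentModel:
        """Level 3: the consistent world model used for driving decisions."""
        def update(self, fused_objects, external_data=None):
            self.objects = fused_objects   # vehicles, pedestrians, signs, ...
            self.overlay = external_data   # e.g., cloud map or traffic data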

AIA for automated driving

Most sensors are equipped with a dedicated processing unit that creates a digital representation of the raw, often analog sensor data. For example, the output from a LIDAR sensor could be a 3D map of the vehicle’s surroundings.

Sensor data fusion combines the outputs of multiple sensors. For example, the data from two cameras can be combined to extract depth information (also known as stereo vision). Similarly, data from different sensor types with overlapping fields of view can be merged to improve object detection and classification and to create a more precise model.
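
Stereo vision rests on a simple geometric relationship: two parallel cameras with focal length f (in pixels) and baseline B see the same object at image positions that differ by a disparity d, placing it at depth Z = f·B/d. A minimal sketch:

    def depth_from_disparity(focal_px, baseline_m, disparity_px):
        """Depth in meters from stereo disparity: Z = f * B / d."""
        if disparity_px <= 0:
            raise ValueError("object at infinity or mismatched features")
        return focal_px * baseline_m / disparity_px

    # Example: f = 800 px, baseline = 0.3 m, disparity = 12 px -> Z = 20 m
    print(depth_from_disparity(800, 0.3, 12))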

It is also possible to add data from external systems. For example, data from the car cloud will include detailed map data, traffic data, and weather data. The addition of data from a Car2X gateway is also possible.

The result is a detailed 3D map of the car’s surrounding environment. This map is object-based and includes lane markers, other vehicles, pedestrians, cyclists, street signs, traffic lights, and so on. This detailed map is also positioned within a larger, less detailed map which is required for navigation. Both model perspectives are updated in real time, although at different intervals.
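
One plausible way to represent these two model perspectives (the structure below is an assumption for illustration) is a detailed local map refreshed on every sensor cycle, embedded in a coarse global map that is updated far less frequently:

    from dataclasses import dataclass, field

    @dataclass
    class LocalMap:
        """Detailed, object-based view of the immediate surroundings."""
        objects: list = field(default_factory=list)  # vehicles, pedestrians, signs...
        update_interval_s: float = 0.05              # e.g., 20 Hz sensor cycle

    @dataclass
    class GlobalMap:
        """Coarse navigation map in which the local map is positioned."""
        route: list = field(default_factory=list)
        vehicle_pose: tuple = (0.0, 0.0, 0.0)        # lat, lon, heading
        update_interval_s: float = 1.0               # e.g., 1 Hz map matching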

The entire process can also be described as a “reconstruction” of the real world in the virtual world based on sensor data. A similar approach can also be found in other case studies in this book; for example, the case study of the LHCb experiment at CERN’s Large Hadron Collider. The term “reconstruction” is also used throughout the Ignite | IoT Methodology section of this book.

Google 3D data model (Source: [MI1])

Leveraging the results of the reconstruction process, a central driving control engine can use the model to make decisions about driving behavior, including speed, direction, emergency braking, etc. To achieve this, the engine interacts with the different car control elements, such as the central vehicle control unit (VCU) and the various electronic control units (ECUs).
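
Reduced to its essentials, the driving control engine is a periodic loop that reads the environment model and emits set-points toward the VCU. The interfaces and control gains below are purely hypothetical:

    import math

    def control_step(model, vcu, comfort_decel=2.0):
        """One cycle of a heavily simplified central driving engine."""
        target_speed = model.speed_limit  # m/s, from map data or sign recognition
        nearest = min(model.objects, key=lambda o: o.x, default=None)
        if nearest is not None:
            # Speed from which the car can still stop comfortably: v = sqrt(2*a*d)
            safe_speed = math.sqrt(2.0 * comfort_decel * max(nearest.x, 0.0))
            target_speed = min(target_speed, safe_speed)
        vcu.set_target_speed(target_speed)                       # powertrain/brake ECUs
        vcu.set_steering_angle(-0.1 * model.lane_center_offset)  # simple P-controller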

Due to the highly heterogeneous environment found in most modern cars, as well as the complex nature of such a system, this type of highly centralized approach may prove risky in some cases. For example, it is very likely that, for safety reasons, the autonomous emergency braking (AEB) feature will continue to be deployed as an autonomous system that is capable of overriding the central driving engine if need be.
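
To illustrate why such an autonomous layer is attractive, the sketch below computes the time-to-collision (TTC) from the AEB function’s own sensor feed and overrides the central engine’s brake request when the TTC falls below a threshold. The interfaces and threshold value are assumptions:

    def aeb_override(distance_m, closing_speed_ms, engine_command,
                     ttc_threshold_s=1.2):
        """Return the brake command, overriding the central engine if needed.

        distance_m:       gap to the obstacle ahead, in meters
        closing_speed_ms: relative approach speed (> 0 means closing in)
        engine_command:   brake request from the central driving engine (0..1)
        """
        if closing_speed_ms > 0 and distance_m / closing_speed_ms < ttc_threshold_s:
            return 1.0  # full emergency braking, regardless of the engine
        return engine_command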

Digital Horizon

Combining local sensor data with car cloud services will help optimize the driving experience even further while also supporting more economical driving. A good example of this is the Digital Horizon system developed by Bosch [ST1]. The Digital Horizon system combines a cloud-based backend with an on-board unit that also connects to the car’s driving controls via the CAN bus [CAN1]. The cloud backend provides map data enriched with metadata relating to road conditions, speed limits, etc. The on-board unit takes this data and uses it to support a number of different services, including the following (a sketch of the underlying energy calculation follows the list):

  • Predictive light control: By combining connected horizon data with sensor readings and camera image analysis, the system can adapt the lighting to the situation ahead. Features include headlamp beam height adjustment and predictive curve lighting.
  • Assisted driving: Speed can be regulated according to the condition of the road ahead. For example, the system can reduce the speed if there are tight bends or poor road quality ahead.
  • Coasting assistant (moving without propulsion): Conventional braking on downhill stretches or when approaching speed limits dissipates the vehicle’s energy in the form of heat. Coasting uses the vehicle’s kinetic energy to overcome driving resistance. The system identifies stretches of road suitable for coasting and indicates when the driver should take their foot off the accelerator.
  • Predictive hybrid management: Hybrid electric vehicles can recuperate braking energy and store it in their batteries. In order to overcome limited battery capacity, the system uses topographical navigation data to determine the recuperation potential for the road section ahead. Based on this information, the system then discharges the battery sufficiently through increased use of the electric motor to ensure that the maximum amount of energy can be subsequently recuperated.
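
The recuperation potential mentioned in the last item can be estimated from topographical data alone: a descent of height Δh allows a vehicle of mass m to recover at most m·g·Δh, reduced by drivetrain and battery losses. A minimal sketch, with an assumed overall efficiency value:

    G = 9.81  # gravitational acceleration, m/s^2

    def recuperation_potential_kwh(mass_kg, elevation_profile_m, efficiency=0.6):
        """Upper bound on recoverable energy over a road section's descents.

        elevation_profile_m: elevation samples along the route, in meters.
        efficiency: assumed overall recuperation efficiency (motor, battery,
                    and driveline losses); real values depend on the vehicle.
        """
        descent = sum(max(0.0, a - b)
                      for a, b in zip(elevation_profile_m, elevation_profile_m[1:]))
        joules = mass_kg * G * descent * efficiency
        return joules / 3.6e6  # 1 kWh = 3.6 MJ

    # Example: a 1,800 kg hybrid descending 120 m can recover roughly 0.35 kWh.
    print(recuperation_potential_kwh(1800, [500, 440, 380]))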

Digital Horizon

Parking

Automated parking is likely to be the first step in the productization of fully automated vehicle controls (it is not included in the roadmap above because, technically, it is not driving).

Roadmap for automated parking (Source: Bosch)

Initial production-ready parking support systems already provide parking steering control, where the car does the steering, and the driver controls the speed and the braking. This will be followed by parking maneuver control, where the braking step is also automated by the system. The next step will be remote parking assistance, which allows the driver to park their car from outside the vehicle. The final step is auto-pilot parking, which supports fully automated parking.

Because multi-story car parks could easily be converted into fully automated environments (at least in part), automated valet parking is another interesting area in which fully automated driving could be rolled out on a larger scale.

While some of these parking automation scenarios do not require connectivity between the car and external systems – and are thus perhaps not perfect examples of IoT solutions – they are still an important part of the overall picture. Moreover, there are other interesting scenarios in which IoT concepts can be deployed in combination with parking. The first example is Automated Valet Parking (AVP), which connects the car with the parking deck, as well as the user with the AVP system (for example, using smartphones for services like payment or booking times for vehicle drop-off and pick-up).

Another good example is community-based parking. In this case, sensors installed in the vehicle scan nearby streets for available parking spaces, even if the driver is not looking for one. This data is collated centrally and updated continuously, so that any user of the system can get real-time information about available parking in the vicinity. This information can also be added to the map data in the driver’s car navigation system.
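
A minimal sketch of the central collation step in community-based parking (all interfaces are illustrative assumptions): passing vehicles report free spaces, stale reports expire, and any driver can query availability nearby:

    import time

    class ParkingSpaceRegistry:
        """Central store of parking spaces reported by passing vehicles."""

        def __init__(self, expiry_s=300.0):
            self.spaces = {}          # (lat, lon) -> last-seen timestamp
            self.expiry_s = expiry_s  # drop stale reports after 5 minutes

        def report_free(self, lat, lon):
            """Called when a vehicle's sensors detect a free space."""
            self.spaces[(lat, lon)] = time.time()

        def query(self, lat, lon, radius_deg=0.005):
            """Return fresh free spaces within a rough bounding box."""
            now = time.time()
            return [pos for pos, seen in self.spaces.items()
                    if now - seen < self.expiry_s
                    and abs(pos[0] - lat) < radius_deg
                    and abs(pos[1] - lon) < radius_deg]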