Most anyone who’s driven for any length of time has been told to “keep their eyes on the road.” With the advent of connected cars and the digital cockpit, there’s no doubt your vehicle has multiple eyes on the road—and on the driver. Breakthroughs in computer vision, a branch of artificial intelligence (AI) that processes, analyzes, and understands digital images, have led to new safety and security features that come standard on a growing number of vehicle models.

In some cases, upwards of eight or nine cameras are located both inside and outside a vehicle. The cameras send imagery to a central processing hub, where the AI analyzes it and takes appropriate action to either assist the driver or act on the driver’s behalf. These actions fall into two primary categories: driver and passenger monitoring, and advanced driver assistance systems (ADAS).

Driver and passenger monitoring

Within the vehicle, computer vision cameras monitor the facial features of the driver, looking for signs of distraction, distress, or fatigue. If the vehicle’s AI detects that a driver isn’t paying attention or is on their phone, it can alert them with an audio or visual cue. Furthermore, if it detects reckless driving, such as erratic acceleration or braking patterns or excessive speed, it can alert the driver or intervene and slow the vehicle down.
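
To make the fatigue-detection idea more concrete, here’s a minimal, self-contained sketch of one widely used cue: the eye aspect ratio (EAR), computed from facial landmarks around each eye. It assumes an upstream landmark detector has already located six points per eye; the threshold and the synthetic coordinates below are illustrative only.

```python
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """Eye aspect ratio (EAR) from six (x, y) eye landmarks.

    The ratio of vertical to horizontal landmark distances falls toward
    zero as the eye closes; a sustained low EAR is a standard fatigue cue.
    """
    v1 = np.linalg.norm(eye[1] - eye[5])  # inner upper/lower lid pair
    v2 = np.linalg.norm(eye[2] - eye[4])  # outer upper/lower lid pair
    h = np.linalg.norm(eye[0] - eye[3])   # eye corner to eye corner
    return (v1 + v2) / (2.0 * h)

EAR_THRESHOLD = 0.25  # tunable: below this, treat the eye as closed

# Synthetic landmarks for illustration: an open eye vs. a nearly closed one.
open_eye = np.array([[0, 2], [2, 4], [4, 4], [6, 2], [4, 0], [2, 0]], float)
closed_eye = np.array([[0, 2], [2, 2.3], [4, 2.3], [6, 2], [4, 1.7], [2, 1.7]], float)

for name, eye in [("open", open_eye), ("closed", closed_eye)]:
    ear = eye_aspect_ratio(eye)
    print(f"{name}: EAR = {ear:.2f}", "-> possible fatigue" if ear < EAR_THRESHOLD else "-> ok")
```

In a real system, the alert would only fire after the EAR stays below the threshold for a run of consecutive video frames, so a normal blink doesn’t trigger a warning.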

An example of driver and passenger monitoring comes from the German engineering firm Bosch. The mobility-focused firm is working on a monitoring system that relies on cameras installed on the steering column and on the rearview mirrors, both outside and in-cabin. In addition to monitoring the driver’s facial features and gestures, the mirror-mounted cameras can monitor passengers’ behavior and watch for signs of distress, aggression, or suspicious activity.

The camera mounted on the in-cabin rearview mirror warns if any occupant has an unfastened seatbelt, automatically adjusts the seatbelts and airbags based on how a passenger is sitting, and can turn an airbag off if it detects that a car seat is in use. And if the AI detects a passenger’s presence when the car is in ‘park,’ it will alert the driver of the situation.

Advanced driver assistance systems

ADAS relies on a variety of inputs to help drivers stay aware of their surroundings outside the vehicle: LiDAR, RADAR, GPS, and computer vision. Together, they can detect and classify objects on the roadway, adjust the vehicle’s speed on the driver’s behalf to maintain a safe distance from the vehicle ahead, warn the driver of potential hazards, and assist with parking and lane keeping.
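
Real adaptive cruise controllers fuse radar, LiDAR, and camera tracks and rely on proper control theory, but the toy rule below illustrates the basic time-gap idea behind safe-distance keeping. All constants are illustrative assumptions.

```python
def target_speed(ego_speed_mps: float, gap_m: float,
                 time_gap_s: float = 2.0, min_gap_m: float = 5.0) -> float:
    """Toy time-gap following rule.

    Keep at least `time_gap_s` seconds of travel (plus a small standstill
    buffer) behind the vehicle ahead; scale speed down when the gap shrinks.
    """
    desired_gap = min_gap_m + time_gap_s * ego_speed_mps
    if gap_m >= desired_gap:
        return ego_speed_mps  # gap is safe; hold current speed
    # Reduce speed in proportion to how short the measured gap is.
    scale = max(gap_m - min_gap_m, 0.0) / (desired_gap - min_gap_m)
    return ego_speed_mps * scale

# At 25 m/s (90 km/h) the desired gap is 5 + 2 * 25 = 55 m.
for gap in (80.0, 40.0, 10.0):
    print(f"gap = {gap:5.1f} m -> target speed {target_speed(25.0, gap):4.1f} m/s")
```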

Computer vision is especially important for responding to different variables and discerning the proper course of action. When properly trained, computer vision can detect and classify objects on the road, such as other vehicles, pedestrians, bicycles, and animals. The ADAS can use this information to warn the driver: if it detects that the vehicle is drifting into the next lane, or toward a bicyclist riding on the shoulder, it can generate an audible warning. If the driver doesn’t respond, the ADAS can then assist, either by activating the emergency brake or by steering the vehicle back into its lane.
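
As a sketch of the perception half of that pipeline, the example below runs an off-the-shelf, COCO-pretrained detector from torchvision and keeps only road-relevant classes. A production ADAS would use purpose-built models, calibrated cameras, and sensor fusion; the 0.6 confidence threshold and the class list here are illustrative assumptions.

```python
import torch
from torchvision.models.detection import (
    FasterRCNN_ResNet50_FPN_Weights,
    fasterrcnn_resnet50_fpn,
)

# COCO-pretrained detector; COCO includes road-relevant classes such as
# person, bicycle, car, motorcycle, bus, truck, and dog.
weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
categories = weights.meta["categories"]

ROAD_CLASSES = {"person", "bicycle", "car", "motorcycle", "bus", "truck", "dog"}

@torch.no_grad()
def detect_road_objects(frame: torch.Tensor, min_score: float = 0.6):
    """frame: float tensor (3, H, W) scaled to [0, 1], e.g. a camera image."""
    output = model([frame])[0]
    hits = []
    for label, score, box in zip(output["labels"], output["scores"], output["boxes"]):
        name = categories[label]
        if score >= min_score and name in ROAD_CLASSES:
            hits.append((name, round(float(score), 2), [round(v) for v in box.tolist()]))
    return hits

# Example with a random frame; a real system would feed dashcam imagery.
print(detect_road_objects(torch.rand(3, 480, 640)))
```

Each hit carries a class name, a confidence score, and a bounding box, which is exactly the information a warning or braking decision can be built on.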

Computer vision can also warn a driver when other vehicles are in their blind spot, and help them park by providing visual and audio cues that guide the vehicle into a parking space.

Computer vision training

Ensuring that computer vision solutions can accurately recognize objects begins with collecting and preparing a large dataset of images and videos relevant to the problem you’re trying to solve. In the case of driver/occupant monitoring solutions, this includes still images and videos of drivers and passengers—specifically their facial features—as well as images and videos of occupants within the cabin making particular gestures, sitting in different positions, and wearing or not wearing seatbelts.

For an ADAS solution, you need to collect still images and video of pedestrians, bicyclists, motorcyclists, fire hydrants, vehicles, animals, and any other objects a vehicle might encounter on the road. Prior to training your algorithm, your visual data must also be properly labeled or annotated. This step is crucial to streamlining the training process and ensuring that your machine learning algorithm understands what it’s looking at. 
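
To show what properly labeled data can look like in practice, here’s a minimal annotation file in the widely used COCO convention, where each image is paired with class-labeled bounding boxes. The file names, IDs, and pixel coordinates are purely illustrative.

```python
import json

dataset = {
    "categories": [
        {"id": 1, "name": "pedestrian"},
        {"id": 2, "name": "bicyclist"},
        {"id": 3, "name": "vehicle"},
    ],
    "images": [
        {"id": 101, "file_name": "dashcam_000101.jpg", "width": 1920, "height": 1080},
    ],
    "annotations": [
        # bbox is [x, y, width, height] in pixels (the COCO convention)
        {"id": 1, "image_id": 101, "category_id": 1, "bbox": [812, 420, 96, 230]},
        {"id": 2, "image_id": 101, "category_id": 3, "bbox": [200, 500, 540, 310]},
    ],
}

with open("annotations.json", "w") as f:
    json.dump(dataset, f, indent=2)
```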

Training a machine learning algorithm for computer vision is complex and time-consuming, requiring a great deal of experimentation, trial and error, and patience. LXT has the expertise and resources to guide you through the process.
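
For a flavor of what that process involves, here’s a compressed sketch of one common starting point: transfer learning, where an ImageNet-pretrained backbone is fine-tuned on your own labeled images. The directory layout (data/train/<class_name>/*.jpg), model choice, and hyperparameters are assumptions for illustration, not a production recipe.

```python
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Load labeled images from folders, one subfolder per class.
tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
train_set = datasets.ImageFolder("data/train", transform=tfm)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Start from an ImageNet-pretrained backbone and retrain only the final layer.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False  # freeze the backbone
model.fc = torch.nn.Linear(model.fc.in_features, len(train_set.classes))

opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = torch.nn.CrossEntropyLoss()

for epoch in range(5):
    for images, labels in loader:
        opt.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        opt.step()
    print(f"epoch {epoch}: last batch loss = {loss.item():.3f}")
```

In practice, teams iterate on the dataset, the labels, the architecture, and the hyperparameters many times over before a model is road-ready.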


Contact us today at info@lxt.ai to discuss your data needs with one of our experts.