Imagine two cars of the future that have just passed each other on a two-way road and shared some information.

– Hey Car A, this is Car B. Look! I’ve spotted a pedestrian on your right, not far from where you are heading, just between the two parked cars. Take care.

– Adult or child?

– I believe it’s an adult, but I couldn’t see it moving, so I’m not entirely sure.

– Most likely there are children around: there is a school right there, and it’s lunchtime. I’ll moderate my speed and keep an eye out just in case. You know, children sometimes jump into the road, chasing a ball or whatever.

– Good!

– By the way, you’d better upgrade your software: I am not just a Car, but a Sports Car.

– No way!

– Yes, you know, I am equipped with the latest technology, and fully electric.

– Well, watch your batteries: the last charging station I passed was full of old Cars!

– Ouch!

Of course, cars, like any other machines, don’t talk in human language, nor do they have feelings (phew!). Instead, they share structured messages, carefully crafted by automotive industry forums that define which fields a message shall contain (e.g. the number, position, and type of objects). Messages follow standardised protocols and definitions to make sure that transmitter and receiver understand the content and can react accordingly.

However, in 2021, these messages are still rather simple, with fixed terms, robotic if you like. They only allow cars to share information about the presence, speed and a few other first-level details of other objects, intended to extend perception capabilities beyond the limits of a vehicle’s own sensing devices. Any richer interpretation of the situation is simply not there! Nor is it available in real time, so there is no direct way for cars to infer, reason or deduce anything from this basic information. And such capabilities will be needed if the dream is full automation in Open World conditions, driving safely, comfortably and energy-efficiently.
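To make the “flat” nature of today’s messages concrete, here is a minimal sketch of a collective-perception-style message in Python. The class and field names are invented for illustration and do not reflect the actual standardised schemas:

```python
from dataclasses import dataclass, asdict, field

# Hypothetical, simplified structure loosely inspired by collective-perception
# messaging; field names are illustrative, not the real standard's schema.
@dataclass
class PerceivedObject:
    object_id: int
    object_type: str          # e.g. "pedestrian", "vehicle"
    position_m: tuple         # (x, y) offset from the sender, in metres
    speed_mps: float          # absolute speed, metres per second
    confidence: float         # detection confidence in [0, 1]

@dataclass
class PerceptionMessage:
    sender_id: str
    timestamp_ms: int
    objects: list = field(default_factory=list)

# Car B encodes what it saw. Note how little is expressed: no school nearby,
# no lunchtime, no hypothesis about likely behaviour -- just raw detections.
msg = PerceptionMessage(
    sender_id="car-B",
    timestamp_ms=1_615_000_000_000,
    objects=[PerceivedObject(1, "pedestrian", (12.5, -2.0), 0.0, 0.7)],
)
print(asdict(msg)["objects"][0]["object_type"])  # pedestrian
```

Everything the receiver can do with this is limited to the fixed fields; any higher-level interpretation has to happen elsewhere.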

Getting closer to human language and understanding will be a key step for autonomous driving functions. Semantic analysis is a discipline that has been successfully used to organise information on the Internet over the last decades, and it is time it was exploited in the automotive sector as well. Ontologies are repositories of knowledge, which can be built not only to describe road actors with their hierarchies and properties (e.g. car types, pedestrians, traffic signs), and actions and events (e.g. crossing, parking, accelerating, turning), but most importantly the relations between all of them. Relational information links data in a way that lets semantic technologies explore the graph to find consequences, predict situations and determine possible risks. These capabilities are the seed of (real) “artificial intelligence” and the only way cars of the future can safely take decisions in situations they have never seen before, just as a human with intuition would.
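The reasoning chain from the opening dialogue (school + lunchtime → children likely → vulnerable road users → elevated risk) can be sketched as traversal of a toy knowledge graph. This is a minimal illustration only: the triples and relation names below are invented, not taken from any published ontology:

```python
# A toy knowledge graph as (subject, relation, object) triples.
# All entity and relation names are invented for illustration.
triples = {
    ("Pedestrian", "is_a", "VulnerableRoadUser"),
    ("Child", "is_a", "Pedestrian"),
    ("School", "implies_presence_of", "Child"),
    ("VulnerableRoadUser", "near_road", "HighRisk"),
}

def is_a_closure(entity):
    """Follow 'is_a' links transitively (Child -> Pedestrian -> VulnerableRoadUser)."""
    found, frontier = set(), {entity}
    while frontier:
        nxt = {o for (s, r, o) in triples if r == "is_a" and s in frontier}
        frontier = nxt - found
        found |= nxt
    return found

def assess(context):
    """Deduce risk from context: a school implies children, children are
    vulnerable road users, and vulnerable road users near the road mean risk."""
    implied = set(context)
    for (s, r, o) in triples:
        if r == "implies_presence_of" and s in context:
            implied.add(o)
    for entity in list(implied):
        implied |= is_a_closure(entity)
    return "HighRisk" if any(
        (e, "near_road", "HighRisk") in triples for e in implied
    ) else "Normal"

print(assess({"School"}))   # HighRisk
print(assess({"Highway"}))  # Normal
```

The point is that the risk conclusion is never stated explicitly anywhere in the data; it emerges from chaining relations, which is exactly what today’s fixed-field messages cannot support.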

In the HEADSTART project, advances have been made in the use of ontologies and semantic tools, focusing on the intelligent use of scenario databases, where specific scenes can be found, parameterised and prepared for testing environments. The work in HEADSTART is being aligned with the ASAM OpenXOntology standardisation project [1], where companies in the sector are joining efforts to create ontologies, and with the Video Content Description [2] open-source project, which is being used as a basis for the ongoing ASAM OpenLABEL project [3]. OpenLABEL addresses semantic labelling concepts that can lead to richer messaging, understanding and control mechanisms in automated cars.
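To give a flavour of what “finding and parameterising scenes for testing” can mean in practice, here is a deliberately simplified sketch. The record fields, tags and scenarios are hypothetical and do not reflect the actual HEADSTART or ASAM data models:

```python
# Hypothetical scenario records; tags and fields are invented for illustration.
scenarios = [
    {"id": "S1", "tags": {"pedestrian", "urban"}, "ego_speed_kph": 30},
    {"id": "S2", "tags": {"highway", "cut-in"}, "ego_speed_kph": 110},
    {"id": "S3", "tags": {"pedestrian", "school-zone"}, "ego_speed_kph": 40},
]

def find(required_tags):
    """Return the scenarios whose tags cover the requested set."""
    return [s for s in scenarios if required_tags <= s["tags"]]

def parameterize(scenario, speeds_kph):
    """Expand one scene into concrete test cases by sweeping a parameter."""
    return [{**scenario, "ego_speed_kph": v} for v in speeds_kph]

hits = find({"pedestrian"})
print([s["id"] for s in hits])            # ['S1', 'S3']
cases = parameterize(hits[0], [20, 30, 40])
print(len(cases))                          # 3
```

With semantic tags organised in an ontology rather than a flat list, a query for “vulnerable road user” could also retrieve school-zone scenes it never mentions literally.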




This blog post was written by Marcos Nieto, from Vicomtech. Dr. Marcos Nieto received his Ph.D. in electrical engineering from UPM, Spain, in 2010. He then joined Vicomtech, where he is a Principal Researcher in the ITS & Engineering department, working as technical coordinator of H2020 projects and specialising in data analytics and semantics for the automotive sector.