Collaborative projects

ICRI-CARS

Collaborative Autonomous & Resilient Systems (CARS): the study of the security, privacy, and safety of autonomous systems that collaborate with each other. Examples include drones, self-driving vehicles, and collaborative systems in industrial automation.

CARS introduce a new paradigm to computing that differs from conventional systems in a very important way: they must learn, adapt, and evolve with minimal or no supervision. A fundamental question, therefore, is: what rules and principles should guide the evolution of CARS? In natural life forms, this is achieved via natural selection, a random trial-and-error process that, over time, ensures that only the fittest survive. That approach, however, may not be acceptable for man-made CARS. Alternative approaches to guide the evolution of CARS are necessary.

The key research goal is to ensure the “Do No Harm” principle. This raises related security questions in multiple research areas:

CARS Stability: A key requirement for CARS is stability: like any well-designed control system, a CARS must not spiral into undesirable states (e.g., chain reactions that place it into harmful states). The goal is to regulate the autonomous behavior so that it stays within acceptable bounds, or to detect and mitigate behavior that falls outside given bounds.
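
As an illustration of what such regulation could look like, the sketch below shows a minimal runtime bounds monitor: it detects when a vehicle's state leaves a given safety envelope and applies a simple mitigation. All names and limits here (SafetyBounds, regulate, the speed and separation values) are hypothetical assumptions, not part of the ICRI-CARS work itself.

```python
from dataclasses import dataclass

@dataclass
class SafetyBounds:
    """Hypothetical operating envelope for one vehicle in a fleet."""
    max_speed: float        # m/s
    min_separation: float   # m, distance to the nearest collaborator

def out_of_bounds(speed: float, separation: float, b: SafetyBounds) -> bool:
    """Detect behavior outside the acceptable envelope."""
    return speed > b.max_speed or separation < b.min_separation

def regulate(speed: float, separation: float, b: SafetyBounds) -> float:
    """Clamp the commanded speed back into the envelope (mitigation)."""
    if out_of_bounds(speed, separation, b):
        # Simple mitigation: slow down. A real collaborative system would
        # also notify peers so the fleet does not amplify the deviation.
        return min(speed, b.max_speed) * 0.5
    return speed

bounds = SafetyBounds(max_speed=15.0, min_separation=5.0)
print(regulate(speed=20.0, separation=3.0, b=bounds))  # mitigated: 7.5
```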

CARS Compliance: How should a CARS behave, and how do we ensure that it does? The first challenge is how to formally specify the behavior of the CARS, and how to ensure that the specification is consistent (i.e., not contradictory) and meets the expectations of regulators and users. The second challenge is how to show that the constructed CARS will behave and remain in compliance with the specification. What are appropriate bounds of autonomy?
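
To make the specification challenge concrete, here is a toy sketch under strong simplifying assumptions: a specification is a list of predicates over system state, consistency is approximated by checking that at least one sampled state satisfies all rules at once (a real tool would use a SAT/SMT solver), and compliance is checked over an observed execution trace. The rules and state variables are invented for illustration.

```python
from typing import Callable, Dict, List

State = Dict[str, float]
Rule = Callable[[State], bool]

# Hypothetical specification: two rules over the CARS state.
spec: List[Rule] = [
    lambda s: s["speed"] <= 15.0,      # never exceed 15 m/s
    lambda s: s["separation"] >= 5.0,  # keep at least 5 m from peers
]

def consistent(rules: List[Rule], samples: List[State]) -> bool:
    """Weak consistency check: some sampled state satisfies all rules
    simultaneously, so the rules are not mutually contradictory there."""
    return any(all(rule(s) for rule in rules) for s in samples)

def compliant(rules: List[Rule], trace: List[State]) -> bool:
    """Runtime compliance: every observed state satisfies every rule."""
    return all(rule(s) for s in trace for rule in rules)

samples = [{"speed": 10.0, "separation": 8.0}]
trace = [{"speed": 10.0, "separation": 8.0},
         {"speed": 16.0, "separation": 8.0}]
print(consistent(spec, samples))  # True: the spec is satisfiable here
print(compliant(spec, trace))     # False: second state breaks rule 1
```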

CARS Accountability: CARS do not exist in a vacuum; they become part of our everyday social infrastructure. A CARS must therefore account for the human factors it affects; in particular, it must be held accountable to some human entity. So, how do we build CARS that support ethics, legal liability, and audit? How much responsibility should the CARS undertake, and how much the human in the loop? How can this be specified, validated, and enforced? A CARS, by definition, can make decisions about its own operation, so there is a real need for a human entity to be able to recreate and interpret this decision pathway to figure out “what happened and why”.
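
One hypothetical building block for such accountability is a tamper-evident decision log: each record captures the inputs, the chosen action, and a rationale, and is hash-chained to the previous record so that an auditor can later verify and replay the decision pathway. The class and record fields below are assumptions for illustration only.

```python
import hashlib
import json
from typing import List

class DecisionLog:
    """Hash-chained, append-only log of autonomous decisions."""

    def __init__(self) -> None:
        self.entries: List[dict] = []

    def record(self, inputs: dict, action: str, rationale: str) -> None:
        prev = self.entries[-1]["hash"] if self.entries else ""
        body = {"inputs": inputs, "action": action,
                "rationale": rationale, "prev": prev}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append({**body, "hash": digest})

    def verify(self) -> bool:
        """An auditor checks that no entry was altered or removed."""
        prev = ""
        for e in self.entries:
            body = {k: e[k] for k in ("inputs", "action", "rationale", "prev")}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != digest:
                return False
            prev = e["hash"]
        return True

log = DecisionLog()
log.record({"obstacle": True}, "brake", "obstacle within stopping distance")
print(log.verify())  # True: the recorded pathway is intact
```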

CARS Risk Management: How do we quantify the risks of CARS and their impact? How do we decide whether those risks are acceptable?
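
A minimal sketch, assuming a classical expected-loss view of risk: each hazard carries a probability and an impact cost, total risk is the sum of probability times impact, and acceptability is a comparison against a stated risk budget. The hazards, numbers, and threshold below are made up.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Hazard:
    name: str
    probability: float  # per mission, in [0, 1]
    impact: float       # cost if it occurs (arbitrary units)

def total_risk(hazards: List[Hazard]) -> float:
    """Classical expected loss: sum of probability * impact."""
    return sum(h.probability * h.impact for h in hazards)

hazards = [
    Hazard("collision with peer", probability=1e-4, impact=1_000_000),
    Hazard("loss of communication", probability=1e-2, impact=5_000),
]
ACCEPTABLE = 200.0  # hypothetical risk budget per mission
risk = total_risk(hazards)  # 100 + 50 = 150
print(risk, "acceptable" if risk <= ACCEPTABLE else "unacceptable")
```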

Website: ICRI-CARS