Abstract
This article puts forward solutions to some of the ethical and legal dilemmas raised in the current discussion on how to program crash algorithms in autonomous or self-driving cars. The first section defines the scope of the problem in the field of criminal law, and the second offers a critical analysis of the proposal to always prioritize the interests of the vehicle's occupants in situations involving a conflict of interests. The third section examines the principle of minimizing social damage as a model for configuring self-driving cars. Despite its apparent plausibility, within the framework of a liberal legal system that recognizes humans as free agents with rights and responsibilities, maximizing a social utility function does not justify harmful interference in a person's legal sphere. In the fourth section, therefore, the author argues that the crash algorithms of autonomous cars should be programmed on the basis of a deontological understanding of the system of justifications in criminal law. The solution to the dilemma lies in a prior analysis of the legal positions of all agents involved in the conflict, from the perspective of the principles of autonomy and solidarity as the core of that system of justifications.
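To make the contrast between the two models summarized above more concrete, the following minimal sketch (not part of the original article) juxtaposes a purely harm-minimizing selection rule with one that first filters options through deontological constraints derived from the agents' legal positions. All names (`Option`, `total_harm`, `infringes_legal_position`), the example maneuvers, and the numeric harm values are hypothetical illustrations, not the author's proposal or any real system's API.

```python
# Hypothetical sketch: a utilitarian crash algorithm vs. one constrained by
# the legal positions of the agents involved. All fields and values are invented.
from dataclasses import dataclass

@dataclass
class Option:
    label: str
    total_harm: float               # aggregate expected harm (utilitarian metric)
    infringes_legal_position: bool  # harms an agent who has no duty of solidarity to bear this risk

def utilitarian_choice(options):
    """Pick whichever option minimizes aggregate expected harm."""
    return min(options, key=lambda o: o.total_harm)

def deontological_choice(options):
    """Exclude options that infringe another agent's legal position;
    only then minimize harm among the remaining permissible options."""
    permissible = [o for o in options if not o.infringes_legal_position]
    pool = permissible or options   # fall back to all options if none is permissible
    return min(pool, key=lambda o: o.total_harm)

if __name__ == "__main__":
    options = [
        Option("swerve onto sidewalk", total_harm=1.0, infringes_legal_position=True),
        Option("brake in lane", total_harm=2.0, infringes_legal_position=False),
    ]
    print(utilitarian_choice(options).label)    # -> swerve onto sidewalk
    print(deontological_choice(options).label)  # -> brake in lane
```

The sketch is only meant to show where the two models diverge: the utilitarian rule selects the lower aggregate harm even at the cost of an uninvolved third party, while the deontologically constrained rule examines each agent's legal position before any harm comparison takes place.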