
5.4 Constraints and Deontological Ethics

Cost functions, by their nature, weigh the impact of different actions on multiple competing objectives. Optimal controllers place more emphasis on the objectives with the highest cost or weighting, so individual goals can be prioritized by making their associated costs much higher than those of other goals. This only works to an extent, however. When certain costs are orders of magnitude greater than others, the mathematics of the problem may become poorly conditioned, resulting in rapidly changing inputs or extreme actions. Such challenges are not merely mathematical but also arise in philosophy, for example in the reasoning behind Pascal's wager [1]. Furthermore, for certain objectives, the trade-offs implicit in a cost function may obscure the true importance or priority of specific goals. It may make sense to penalize both large steering changes and collisions with pedestrians, but there is a clear hierarchy in these objectives. Instead of simply trying to make a collision a thousand or a million times more costly than a change of steering angle, it makes more sense to phrase the desired behavior in absolute terms: the vehicle should avoid collisions regardless of how abrupt the required steering might be. The objective therefore shifts from a consequentialist approach of minimizing cost to a deontological approach of enforcing certain rules.
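To make this concrete, the following sketch (with illustrative names and weights that are not drawn from the text) shows a weighted cost of the kind described. Safety can be prioritized only by scaling its weight, and a very large weight ratio is exactly what degrades numerical conditioning:

```python
# Minimal sketch of a weighted, consequentialist stage cost. The weights
# w_smooth and w_safety and the obstacle-gap model are hypothetical.
def stage_cost(steer_rate, obstacle_gap, w_smooth=1.0, w_safety=1e6):
    """Every objective is exchangeable at some rate set by the weights."""
    smoothness = w_smooth * steer_rate**2                # penalize abrupt steering
    safety = w_safety * max(0.0, 1.0 - obstacle_gap)**2  # penalize closing the gap
    return smoothness + safety

# A weight ratio of 1e6 makes a collision very expensive but never impossible,
# and it stretches the curvature of the cost surface by the same factor,
# which is what pushes solvers toward extreme or rapidly changing inputs.
print(stage_cost(steer_rate=0.1, obstacle_gap=0.5))
```

No matter how large w_safety becomes, the collision term remains exchangeable against smoothness at some rate; the collision is made expensive, not impermissible.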

From a mathematical perspective, such objectives can be formulated by placing constraints on the optimization problem. Constraints may take a number of forms, reflecting behaviors imposed by the laws of physics or specific limitations of the system (such as maximum engine horsepower, braking capability or turning radius). They may also represent boundaries on the system's operation that the system designers determine should not be crossed.

Constraints in an optimal control problem can capture ethical rules associated with a deontological view in a rather straightforward way. For instance, the goal of avoiding collisions with other road users can be expressed in the control law by constraining the vehicle motion to paths that avoid pedestrians, cars, cyclists and other obstacles. A vehicle programmed in this manner would never have a collision so long as a feasible set of actions or control inputs existed to prevent it; in other words, no other objective, such as smooth operation, could ever influence or override this imperative. Certain traffic laws can be programmed in a similar way. The vehicle can avoid crossing a lane boundary simply by encoding this boundary as a constraint on the motion. The same mathematics of constraints can therefore place either physical or ethical restrictions on the chosen vehicle motion.
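As an illustration, the sketch below (hypothetical numbers, and a single lateral-position decision rather than a full trajectory) encodes a physical actuation range as bounds and the rule "do not enter the pedestrian's space" as a hard inequality constraint:

```python
# Minimal sketch: choose one lateral position y for the next planning step.
# Bounds capture physical limits; the inequality encodes the ethical rule
# as a hard constraint that no smoothness objective can override.
from scipy.optimize import minimize

y_now = 0.0      # current lateral position in meters (hypothetical)
ped_edge = 1.5   # lateral positions beyond this would strike the pedestrian

cost = lambda y: (y[0] - y_now) ** 2          # prefer smooth, small deviations
keep_clear = {"type": "ineq",                 # feasible only if ped_edge - y >= 0
              "fun": lambda y: ped_edge - y[0]}

res = minimize(cost, x0=[0.0],
               bounds=[(-2.0, 2.0)],          # physical limit, e.g. actuator range
               constraints=[keep_clear])
print(res.x)  # the returned motion satisfies the rule outright, not by trade-off
```

A lane boundary could be enforced in exactly the same way, by adding a second inequality of the same form.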

As we know from daily driving, in the vast majority of situations it is possible to simultaneously drive smoothly, obey all traffic laws and avoid collisions with any other users of the road. In certain circumstances, however, dilemma situations arise in which it is not possible to meet all of the constraints placed on the problem at once. From an ethical standpoint, these may be situations where loss of life is inevitable, comparable to the classic trolley car problem [14]. Yet much more benign conflicts are also possible and significantly more common. For instance, should the car be allowed to cross into an adjacent lane and drive against the flow of traffic if this would avoid an accident with another vehicle? In this case, the vehicle cannot simultaneously satisfy all of the constraints but must still decide on the best course of action.

From the mathematical perspective, dilemma situations represent cases that are infeasible: there is no choice of control inputs that can satisfy all of the constraints placed on the vehicle motion. The more constraints that are layered on the vehicle motion, the greater the possibility of encountering a dilemma situation in which some constraint must be violated. Clearly, the vehicle must be programmed to do something in these situations beyond merely determining that no ideal action exists. A common approach in solving constrained optimization problems is to implement the constraint as a “soft constraint” using a slack variable [15]. The constraint normally holds but, when the problem would otherwise be infeasible, the solver permits a violation and assigns it a very high cost. In this way, the system is guaranteed to find some solution to the problem and will make its best effort to minimize constraint violation. A hierarchy of constraints can be enforced by placing higher weights on the costs of violating certain constraints relative to others. The vehicle then operates according to deontological rules or constraints until it reaches a dilemma situation; at that point, the weight or hierarchy placed on different constraints resolves the dilemma, again drawing on a consequentialist approach. This becomes a hybrid framework for ethics in the presence of infeasibility, consistent with approaches suggested philosophically by Lin and others [2, 4, 12] and addressing some of the limitations Goodall [3] described with using a single ethical framework.
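A minimal sketch of this mechanism (using the cvxpy modeling library, with hypothetical geometry and weights) softens two constraints that cannot both hold with nonnegative slack variables whose weights encode the hierarchy:

```python
# Minimal sketch of soft constraints via slack variables. The two hard
# constraints below would be infeasible together, so each is softened
# with a weighted, nonnegative slack variable.
import cvxpy as cp

y = cp.Variable()                     # lateral position for the next step
s_lane = cp.Variable(nonneg=True)     # slack on the lane-keeping constraint
s_collide = cp.Variable(nonneg=True)  # slack on the collision constraint

constraints = [
    y <= 0.5 + s_lane,      # "stay in the lane" (y <= 0.5), softened
    y >= 1.2 - s_collide,   # "clear the obstacle" (y >= 1.2), softened
]
# Hierarchy: violating the collision constraint costs far more than the lane one.
objective = cp.Minimize(cp.square(y) + 10.0 * s_lane + 1e4 * s_collide)
cp.Problem(objective, constraints).solve()
print(float(y.value), float(s_lane.value), float(s_collide.value))
```

Here the solver crosses the lane boundary (a lane slack of roughly 0.7) rather than accept any collision slack, reproducing the hybrid behavior described above: hard deontological rules whenever they are feasible, weighted consequentialist trade-offs only in a dilemma.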

So what is an appropriate hierarchy of rules that can provide a deontological basis for ethical actions of automated vehicles? Perhaps the best known hierarchy of deontological rules for automated systems is the Three Laws of Robotics postulated by science fiction writer Isaac Asimov [16], which state:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

3. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.

These rules do not comprise a complete ethical framework and would not be sufficient for ethical behavior in an autonomous vehicle. In fact, many of Asimov's plotlines involved conflicts when resolving these rules into actions in real situations. However, this simple framework works well to illustrate several of the ethical considerations that can arise, beginning with the First Law. This law emphasizes the fundamental value of human life and the duty of a robot to protect it. While such a law is not necessarily applicable to robotic drones that could be used in warfare [12], it seems highly valuable to automated vehicles. The potential to reduce accidents and fatalities is a major motivation for the development and deployment of automated vehicles. Thus placing the protection of human life at the top of a hierarchy of rules for automated vehicles, analogous to the placement in Asimov's laws, seems justified.

The exact wording of Asimov's First Law does present some challenges, however. In particular, the emphasis on the robot's duty to avoid injuring humans assumes that the robot has a concept of harm and a sense of what actions result in harm. This raises a number of challenges with regard to the information available, similar to those discussed above for a consequentialist cost function approach. The movie “I, Robot” dramatizes this law with a robot calculating the survival probabilities of two people to several significant figures in order to decide which one to save. Developing such a capability seems unlikely in the near future or, at least, much more challenging than the development of the automated vehicle itself.

Instead of trying to deduce harm or injury to humans, might it be sufficient for the vehicle to simply attempt to avoid collisions? After all, the most likely way that an automated vehicle could injure a human is through the physical contact of a collision. Avoiding minor injuries such as closing a hand in a car door could be considered the responsibility of the human and not the car, as it is today. Restricting the responsibility to collision avoidance would mean that the car would not have to be programmed to sacrifice itself to protect human life in an accident in which it would otherwise not have been involved. The ethical responsibility would simply be not to initiate a collision rather than to prevent harm [2]. Collisions with more vulnerable road users such as pedestrians and cyclists could be prioritized above collisions with other cars or those producing only property damage.

Such an approach would not necessarily produce the best outcome in a pure consequentialist calculation: a minor injury to a pedestrian might be less costly to society as a whole than significant property damage. Collisions should, in any event, be very rare events. Through careful control system design, automated cars could conceivably avoid any collisions that are avoidable within the constraints placed by the laws of physics [17, 18]. In those rare cases where collisions are truly unavoidable, society might accept suboptimal outcomes in return for the clarity and comfort associated with automated vehicles that possess a clear respect for human life above other priorities.

Replacing the idea of harm and injury with the less abstract notion of a collision, however, produces some rules that are more actionable for the vehicle. Taking the idea of prioritizing human life and the most vulnerable road users and phrasing the resulting hierarchy in the spirit of Asimov's laws gives:

1. An automated vehicle should not collide with a pedestrian or cyclist.

2. An automated vehicle should not collide with another vehicle, except where avoiding such a collision would conflict with the First Law.

3. An automated vehicle should not collide with any other object in the environment, except where avoiding such a collision would conflict with the First or Second Law.

These are straightforward rules that can be implemented in an automated vehicle and prioritized according to this hierarchy through the proper choice of weights on the slack variables for constraint violation. Such ethical rules would only require categorization of objects and would not attempt finer calculations about injury. They could be implemented with the current level of sensing and perception capability, allowing for the possibility that objects may not always be correctly classified.
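One way to realize this hierarchy, sketched below with hypothetical weights and categories, is to make each tier's violation penalty dominate every tier beneath it, so that no amount of lower-tier violation can outweigh a higher-tier one:

```python
# Minimal sketch of the three-rule hierarchy as slack-violation penalties.
# Weights, categories and candidate maneuvers are hypothetical illustrations.
VIOLATION_WEIGHTS = {
    "pedestrian_or_cyclist": 1e9,  # First rule: highest penalty
    "vehicle": 1e6,                # Second rule
    "other_object": 1e3,           # Third rule
}

def hierarchy_cost(violations, nominal_cost):
    """Cost of a candidate maneuver: nominal driving cost plus weighted
    slack (here, meters of predicted intrusion) per violated constraint."""
    penalty = sum(VIOLATION_WEIGHTS[kind] * slack for kind, slack in violations)
    return nominal_cost + penalty

# In a dilemma, clipping a roadside barrier scores far better than striking
# a pedestrian, regardless of the nominal driving costs involved.
candidates = {
    "swerve_into_barrier": hierarchy_cost([("other_object", 0.3)], 5.0),
    "brake_straight": hierarchy_cost([("pedestrian_or_cyclist", 0.1)], 1.0),
}
print(min(candidates, key=candidates.get))  # -> swerve_into_barrier
```

Because the weights are separated by several orders of magnitude, the object categories alone determine the outcome whenever tiers conflict, mirroring the strict ordering of the rules above.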

  • [1] Blaise Pascal's argument that belief in God's existence is rational, since the penalties for failing to believe and being incorrect are so great [13].
  • [2] It is possible that an automated vehicle could, while avoiding an accident of its own, take an action that makes a collision unavoidable for other vehicles. Such possibilities could be eliminated by communication among the vehicles and an appropriate choice of constraints.
 