Among the possible solutions to resolve the trolley problem, the first consists in letting people decide, via a referendum for example, which person should die in the event of an accident.
The second solution consists in choosing a function to optimise, a quantitative measure of the moral quality of the different outcomes, which seems logical, as a machine only knows how to calculate. However – and this is the “consequentialist” point – a morality calculated coldly according to predetermined rules will never be entirely satisfactory, because human judgement also depends on the as yet uncertain consequences that the action of an autonomous car will produce in the future.
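To make the idea of a “function to optimise” concrete, here is a minimal sketch in Python; the action names, outcome predictions, and weights are all hypothetical, invented purely for illustration:

```python
# Minimal sketch: the car "decides" by minimising a human-defined cost
# over predicted outcomes. All numbers here are arbitrary assumptions.

def outcome_cost(outcome: dict) -> float:
    """Quantify how morally bad a predicted outcome is.
    The weights encode a human's moral judgement, not the machine's."""
    return 10.0 * outcome["expected_deaths"] + 1.0 * outcome["expected_injuries"]

def choose_action(predictions: dict) -> str:
    """Return the action whose predicted outcome minimises the cost."""
    return min(predictions, key=lambda action: outcome_cost(predictions[action]))

# Two hypothetical predictions for a dilemma situation; the "expected"
# values are uncertain estimates of future consequences.
predictions = {
    "swerve":   {"expected_deaths": 0.1, "expected_injuries": 2.0},
    "continue": {"expected_deaths": 1.0, "expected_injuries": 0.0},
}

print(choose_action(predictions))  # -> "swerve" under these weights; other weights flip the choice
```

Everything morally significant in this sketch, the weights and the uncertain predictions, is supplied from outside the calculation itself.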
Whatever the chosen criteria, the decision will therefore never be ethically flawless. So, what is to be done? The main point I develop in my book starts from this observation: we cannot undo harm. Technology does good and it does bad; it has always been this way. Thus, the question is not how to prevent the autonomous car from killing anyone, but how to ensure that the concepts of good and bad remain purely human concepts and that machines do not become moral agents, in other words, how to remove the machine from the field of ethical judgement.
Serval: Regarding the optimisation function, it’s interesting to highlight that it is always defined by a human being. When an algorithm adjusts the price of plane tickets, for example, it isn’t optimised to improve the journey of the majority, but to increase the route’s profitability…
This choice was not made by a machine but by the person who developed the optimisation function. And the fact that it is a calculation does not make it “cold”, to re-use your words…
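Serval’s point can be made concrete with a short sketch, assuming a toy demand curve and a hypothetical grid of prices; the same trivial optimiser produces opposite “decisions” depending on which objective a human hands it:

```python
# Toy pricing sketch: same optimiser, two human-chosen objectives.
# The demand curve and the price grid are invented assumptions.

def demand(price: float) -> float:
    """Assumed demand: tickets sold falls linearly as the price rises."""
    return max(0.0, 200.0 - price)

def profit(price: float) -> float:
    return price * demand(price)

def passengers_served(price: float) -> float:
    return demand(price)

def best_price(objective, candidate_prices):
    """The optimiser only calculates; the objective is a human choice."""
    return max(candidate_prices, key=objective)

prices = range(10, 200, 5)
print(best_price(profit, prices))             # 100: maximises the route's profitability
print(best_price(passengers_served, prices))  # 10: maximises travellers served
```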