
There is also an ethical dimension to the idea of passing responsibility to an intelligent algorithm


Yes, there is, when we hand over the steering wheel of our lives to a system that decides for us. Often, however, it is necessary. For example, some cars have systems that decide to brake before the driver realises it is necessary, for safety reasons. In that case, the person has to rely on the system.

On the other hand, consider the story of the Boeing 737 MAX 8, which had a design flaw. There were several crashes before people realised that there were design problems (incorrect assumptions about how the autopilot would behave in certain situations and how the pilots would react). That is a huge responsibility for the engineers, just as AI is for its developers.

It is worth noting that humans tend to shift responsibility to others. It may be worth developing AI architectures that would not allow them to be overburdened with responsibility.

Giving Emotions to Smart Machines


A prime example is affective computing: reading and modelling emotions. This includes social robots that interact with people. Affectiva, a company founded by Rosalind Picard of the Massachusetts Institute of Technology (USA), has developed, among other things, a children’s toy—an ‘affective tiger’, a robot that reads and responds to a child’s emotions. There is a rich story here: emotions can be provoked. For example, a child may be sad, and interacting with the toy will cheer them up.

This is also relevant for the elderly (in Japan, this approach is actively used). But there are many nuances. Is it ethical to entrust human emotions to AI? Research suggests that humanoid robots should not be used in kindergartens: something similar to imprinting occurs, and children begin to trust robots without thinking.

Robots may be able to take care of the elderly, but it is highly questionable whether they can take care of babies. The ‘uncanny valley’ effect [rejection of anthropomorphic robots] is also common in the perception of robots.

In the field of human–computer interaction (HCI), there are theoretical movements that oppose affective computing. They argue for modelling emotional experience rather than emotion itself, and for creating products that let people engage with their emotional experience responsibly.
