
Humans do not want to have to make decisions all the time

A lot depends on a system's design, the tools, and what is and is not allowed. Incidentally, in Russia there is a rule that all companies using recommendation systems are obliged to post a document on their websites explaining the reasons behind the recommendations.

First Aid through AI
There are psychological chatbots, for example self-help bots. How ethical is that?

It would be logical to say that it is not very ethical, but I think it is very humane, not least because of how accessible these resources are. A lot of people do not go to psychologists because they are shy or do not know how to talk about their problems. But they can talk to a chatbot. And if, for example, a psychologist is there 'on the other side' and hears that a person has said something dangerous, they can draw attention to it and suggest: 'Look, there are specialists, they have these available time slots, call them.' They can refer the user to a specialist and alleviate crises if they arise. Such resources are good as an indicator of problems and for immediate self-help. For example, they can recommend some exercises.

It is important to remember, however, that these are general recommendations, not targeted ones.

The architecture of these systems does not allow for anything else. And there should probably be a notification bar that says: 'And now please reach out to the following services.' It is the developers' responsibility to add it.
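As a rough illustration of that point, here is a minimal sketch in Python of what such a safeguard might look like. Everything in it is an assumption for illustration only: the keyword list, the referral text, and the function names. A real system would use a trained risk classifier and localised helpline information rather than naive keyword matching.

```python
# A minimal sketch of the safeguard discussed above: the chatbot scans
# each user message for crisis indicators and, if any are found, appends
# a referral notice pointing to human specialists. The keyword list,
# notice text, and function names are illustrative assumptions, not a
# production-grade risk model.

CRISIS_KEYWORDS = {"suicide", "kill myself", "self-harm", "hurt myself"}

REFERRAL_NOTICE = (
    "It sounds like you may be going through something serious. "
    "Please reach out to the following services: a licensed "
    "psychologist or your local crisis helpline."
)

def contains_crisis_signal(message: str) -> bool:
    """Naive keyword check; a real system would use a classifier."""
    text = message.lower()
    return any(keyword in text for keyword in CRISIS_KEYWORDS)

def respond(message: str, generate_reply) -> str:
    """Wrap an arbitrary reply generator with the referral safeguard."""
    reply = generate_reply(message)
    if contains_crisis_signal(message):
        reply += "\n\n" + REFERRAL_NOTICE
    return reply
```

Because the check wraps the reply generator rather than living inside it, the same notification applies no matter which model produces the answers, which is one way developers could take on the responsibility described above.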

Unforgivable AI Mistakes

Do we expect personalisation from these systems?

Yes. Let me tell you a story. A colleague from UrFU and I published an article about the dilemma of recommendation systems' predictive accuracy. The university had developed a system for predicting the future professions of university applicants. The system was often very accurate, but it also made plenty of errors. The paradox is that neither the applicants nor the management liked it. Applicants trusted their grandmother more than any system. It was treated as an 'oracle,' but not believed. However, we must recognise that the architecture of such systems often includes the possibility of error. We can forgive a human being for a mistake, but we do not forgive a system. This is demonstrated, for example, by recordings of calls to technical support when people's internet goes down.

But it is also our mistake that we are willing to hand over all responsibility to AI.

Yes, it is not uncommon for us to be willing to rely on it completely, because if something goes wrong, we have someone to blame, and that is very convenient.

 
