Yes. Daniel Kahneman introduced the concepts of 'System 1' and 'System 2'. System 1 is easier to switch on and work with; it handles routine decisions, like which way to go home, mechanically. But some decisions cannot be made automatically: which university to apply to, where to work. People tend to avoid making these decisions because they are very difficult.
But there is another extreme: failing to trust when we should. One has to steer a course between trust and mistrust. Besides, cultural aspects should also be taken into account.
Is AI culturally specific?
Yes. For example, ChatGPT has specific political preferences. China has its own AI that adheres to certain cultural values. ChatGPT or Midjourney would have a hard time drawing the flower with seven colours [a Soviet children's tale—transl.] and do not know, for example, the poet Nekrasov—simply because they have not encountered the data. This should be taken into account. Every language model is a reflection of our society, of certain social groups. The same is true for YandexGPT and GigaChat.
Ethical Code: Presumption of Guilt
Let’s discuss the ethical rules declared by the creators of smart algorithms.
Some documents set out rules, and even whole ethical frameworks, as developers see them. Senior developers take their work very responsibly. As mentioned earlier, several companies have signed the AI Ethics Code and are constantly updating it. Universities also develop their own codes of ethics.
I would like to mention the well-known Collingridge dilemma (from engineering ethics, of which AI ethics is a part), which states that there is a power problem and an information problem. While a technology is still immature, we cannot change it, because we do not yet understand what changes need to be made. Yet once the technology is widespread, we can no longer change it. The solution to this dilemma may be the precautionary principle: developers must prove that their products will not cause harm. Developers bear a huge responsibility, and the risks are equally huge.
But their products touch on the innermost parts of a human being!