Let’s take two examples. Affectiva has a wristband that shows a person’s emotional state, an affective computing approach. This is where the dictates of technology come in: you feel normal, but the wristband tells you, ‘You are sad.’ And you think, ‘I must be sad.’
The other approach, built around an affective (emotional) experience, uses a smartphone app that lets users send texts and chat in messengers in a way that reflects how they feel. You pick up your phone, and if your hand is shaking, your messages are sent with a red background.
When both you and your friends play this game, they immediately see your emotional state (you have demonstrated it). It is an emotional experience: we consciously share our state through gadgets, and humans are not excluded. There are no dictates from technology. Rather, we have to model the emotional experience and think about how to do it.
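As an illustration only (the article does not describe the app’s internals), here is a minimal Python sketch of how such tremor-based tagging might work: the jitter in recent accelerometer readings becomes a crude ‘shake’ score, and a message above a threshold gets a red background. The function names and the threshold are assumptions, not the app’s actual design.

```python
import statistics

# Hypothetical sketch: estimate hand tremor from recent accelerometer samples
# and tag an outgoing message with a background colour accordingly.

SHAKE_THRESHOLD = 1.5  # assumed tuning constant (jitter in m/s^2)

def shake_level(accel_magnitudes: list[float]) -> float:
    """Standard deviation of recent acceleration magnitudes as a crude tremor score."""
    return statistics.pstdev(accel_magnitudes) if len(accel_magnitudes) > 1 else 0.0

def tag_message(text: str, accel_magnitudes: list[float]) -> dict:
    """Attach a background colour reflecting the sender's inferred state."""
    level = shake_level(accel_magnitudes)
    background = "red" if level > SHAKE_THRESHOLD else "neutral"
    return {"text": text, "background": background, "shake_level": round(level, 2)}

# Example: a steady hand vs. a shaking hand
print(tag_message("On my way", [9.8, 9.81, 9.79, 9.8]))       # neutral background
print(tag_message("On my way", [7.2, 12.5, 8.1, 13.0, 6.9]))  # red background
```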
There is also a question of how AI assistants should be designed so that they do not ‘coach’ people. The dialogue with intelligent algorithms must not go too far or veer into the territory of ‘How should I live my life?’
On the other hand, we cannot dictate to people: they ask how they should live their lives, while we tell them they will not get an answer to that question.
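To make that design point concrete, here is a hypothetical sketch (not taken from the article) of a naive guardrail: the assistant declines questions that look like requests for life coaching and handles everything else as an ordinary task. The marker phrases and the answer_practically handler are invented for illustration.

```python
# Hypothetical guardrail sketch: decline "how should I live?" questions
# instead of coaching the user.

LIFE_ADVICE_MARKERS = (
    "how should i live",
    "what should i do with my life",
    "what is the meaning of life",
)

def answer_practically(user_message: str) -> str:
    # Placeholder for the assistant's ordinary task-oriented behaviour.
    return f"Here is what I found about: {user_message}"

def respond(user_message: str) -> str:
    lowered = user_message.lower()
    if any(marker in lowered for marker in LIFE_ADVICE_MARKERS):
        return "I can help with practical tasks, but I won't advise you on how to live your life."
    return answer_practically(user_message)

print(respond("How should I live my life?"))
print(respond("Remind me to call the dentist tomorrow"))
```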
Vulnerability to Algorithms
On the one hand, smart technologies make our lives easier. On the other hand, we often trust them with our innermost secrets. They know too much about us.
Consider the story of a large supermarket that sent an advertisement for nappies to the father of a teenage girl. The father went to find out why. The company replied that, according to their information, his daughter was pregnant. This turned out to be true. Her situation was clear from the range of products she was buying.
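For a sense of the kind of inference involved (the retailer’s actual model is not described here), a toy Python sketch: a weighted score over signal products in a purchase history triggers baby-product advertising once it passes a threshold. The product categories, weights, and threshold are all invented for illustration.

```python
# Hypothetical sketch of inferring a sensitive attribute from purchase patterns.
# Categories and weights are invented, not the retailer's real model.

PREGNANCY_SIGNALS = {
    "unscented lotion": 0.3,
    "prenatal vitamins": 0.5,
    "cotton balls": 0.1,
}

def pregnancy_score(basket: list[str]) -> float:
    """Sum the weights of signal products present in a shopping history."""
    return sum(PREGNANCY_SIGNALS.get(item, 0.0) for item in basket)

basket = ["unscented lotion", "prenatal vitamins", "bread"]
if pregnancy_score(basket) > 0.6:
    print("Flagged for baby-product advertising")  # how the father received the mailer
```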
Another story: a company decided to give presents to its best customers (to maintain their loyalty) and gave them something they had dreamed of but had not mentioned even to their loved ones. The campaign’s effect was the opposite of what was expected: users felt that their thoughts had been read and were unpleasantly surprised. It turned out the company knew information that people would prefer to hide.