
AI Ethics: When Smart Algorithms Should (and Shouldn’t) Be Used

Artificial intelligence can help in many areas, from legal contracts and business to medical examinations. But is it worth sharing the most intimate, ‘too human’ things with it—feelings, ideas, plans, developments, and innermost desires? Anna Kartasheva, a researcher at the International Laboratory for Applied Network Research at HSE University, discusses this in an interview with IQ Media.

Anna Kartasheva — Researcher at the International Laboratory for Applied Network Research at HSE University, Candidate of Sciences in Philosophy, Director of the Delovaya Kniga Publishing House

Benchmarks

Smart algorithm benchmarks are an important concept in the ethics of intelligent systems. A benchmark is a model against which a system is compared and a conclusion is drawn as to how well it meets the specified characteristics.
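The idea of comparing a system against a model can be sketched in a few lines. This is a deliberately minimal illustration, not how real AI benchmarks are implemented; the questions and answers below are invented for the example:

```python
# Minimal sketch of benchmark scoring: compare a system's answers
# against a reference set and report the share that match.
# All questions and answers here are made up for illustration.
reference = {
    "capital of France": "Paris",
    "2 + 2": "4",
    "largest planet": "Jupiter",
}

system_answers = {
    "capital of France": "Paris",
    "2 + 2": "4",
    "largest planet": "Saturn",  # a wrong answer, to show the score dropping
}

# Fraction of reference questions the system answered correctly.
score = sum(system_answers.get(q) == a for q, a in reference.items()) / len(reference)
print(f"benchmark score: {score:.2f}")
```

Real benchmarks differ in what they measure (accuracy, safety, alignment with human preferences), but the underlying logic is the same: a fixed reference and a systematic comparison against it.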

The ethics of intelligent systems is about how to measure a system's alignment with human needs, not about who is good and who is bad. There is an area in English-language science called 'AI alignment.' What does it mean to 'align a system'? There are human interests and needs, and a system has guiding rules. Our interests and the system's interests must coincide: the system should obey us and have no interests that humanity does not have.

Ethics vs Functionality

AI is often used in jobs that involve independent thinking.

Each algorithm has a specific architecture that imposes a set of constraints. Every answer from a language model is based on prior knowledge. It provides the most likely answer to a given question. This is why its answers are often so impersonal.

Another aspect is the lack of feedback. If you ask a language model who the mother of actor Tom Cruise is, it will say Mary Lee Pfeiffer. But if you ask it who Mary Lee Pfeiffer's son is, it will not know the answer. Why not? It is due to a lack of feedback and a lack of information to make inferences. There is a lot of information about Tom Cruise, so the inference is easy; there is much less information about his mother. To us, these 'phenomena' appear to be related, but not to a language model. To it, the more information there is, the more likely the inference. Some tasks are well suited to AI, while others should not be solved this way.
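The asymmetry described above can be sketched with a toy frequency model. This is a crude stand-in for a language model's "most likely continuation" — real models learn probabilities over tokens, not raw counts — and the tiny corpus below is invented for illustration:

```python
from collections import Counter, defaultdict

# Toy corpus: the fact is stated in only one direction,
# as such facts often are in web text.
corpus = [
    "the mother of Tom Cruise is Mary Lee Pfeiffer",
    "Tom Cruise was raised by his mother Mary Lee Pfeiffer",
]

# Count which word follows each word in the training data.
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1

def most_likely_next(word):
    """Return the most frequent continuation seen in training, or None."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

# Forward direction: the training data supports a continuation.
print(most_likely_next("Cruise"))
# Reverse direction: nothing in the corpus ever continues from "Pfeiffer",
# so the model has no basis for the inference.
print(most_likely_next("Pfeiffer"))  # None
```

The point is not the mechanics but the dependence on data: a relation stated only one way in the training text supports inference in only that direction.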
