Artificial intelligence suffers from a human bias problem
Studies show that the more we rely on artificial intelligence systems, the greater the risks and the more questions arise about their accuracy. The most important question is whether artificial intelligence systems are biased in the same way humans are.
Machines themselves have no inherent bias: artificial intelligence does not want something to be true or false for reasons that cannot be rationally explained. Unfortunately, human bias nevertheless finds its way into machine learning systems through the data or the algorithms used to train them. Can anyone solve this serious problem?
A team of scientists from the Czech Republic and Germany recently conducted a study to determine the effect of human cognitive bias on the interpretation of the outputs used to establish machine learning rules. The researchers tested multiple machine learning systems and found that characteristics of human bias are present in artificial intelligence systems.
The research team's report shows how 20 different cognitive biases can alter the development of machine learning rules, and it suggests ways to eliminate this bias.
According to the researchers' report, bias may lurk in the code and algorithms whether or not the developers intend it. When these kinds of human errors become hidden inside parts of an artificial intelligence, it means our own biases are responsible for selecting the training data that shapes the machine learning models. We are not merely creating artificial intelligence; we are putting our mistakes into a black box without knowing how they might be used against us.
As a result, the landscape of personal responsibility changes as reliance on AI systems increases. Most vehicles will be operated by machines, and a large number of surgeries and medical procedures will be carried out by robots. This will put artificial intelligence developers at the forefront of blame in the event of a human tragedy, since someone must be held accountable for the mistakes.
For many of these problems, the simple solution is to change the way data is represented. The team suggests, for example, that changing the output of algorithms to use natural numbers rather than percentages could greatly reduce the likelihood of misinterpreting certain results.
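To illustrate the idea (the article does not give the team's actual implementation), here is a minimal sketch of reformatting a probability as a natural frequency; the function name and the reference population of 1000 are assumptions for the example:

```python
def as_natural_frequency(probability: float, reference: int = 1000) -> str:
    """Express a probability as a natural frequency.

    People tend to interpret counts such as "34 out of 1000 cases"
    more accurately than percentages or raw probabilities like 3.4%.
    """
    count = round(probability * reference)
    return f"{count} out of {reference} cases"

# A rule confidence of 0.034 reads as a count rather than a percentage.
print(as_natural_frequency(0.034))  # -> 34 out of 1000 cases
```

The underlying rule output is unchanged; only its presentation differs, which is exactly the kind of low-cost intervention the researchers propose.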
Unfortunately, there is no radical solution to this problem. Most of the time we do not even know we are biased: we think we are being smart, or the answer seems obvious, or we do not think about it at all. And there are far more than 20 cognitive biases that machine learning programmers must take into account. Even when the algorithms are ideal and the outputs cannot be changed, our cognitive biases make our interpretation of the data unreliable at best. Everyone carries these biases to some extent, and yet there is little research on how they affect the interpretation of data.
According to the research team: "To the best of our knowledge, cognitive biases have not yet been discussed in relation to the interpretation of machine learning results. We therefore began this review of research published in the cognitive sciences in order to provide a psychological basis for changes to inductive rule-learning algorithms and to the way their results are communicated, and for the effects that can lead to systematic errors when rules extracted by induction are interpreted."
Artificial intelligence is an extremely powerful technology whose impact cannot be denied. It is important to be very careful in developing it, using it, and committing it to social problems, and to develop it in a way that builds trust and protects humanity from any negative repercussions. Researchers around the world must continue this work and discover effective methods for avoiding cognitive bias in machine learning, because artificial intelligence is nothing more than a human amplifier, and bad knowledge makes bad robots.