2022-10-02 – EMACS
Language: English
Artificial Intelligence is present in our everyday life. AI programmers indirectly shape our opportunities and transfer their worldviews into their algorithms and technology designs. They also create robots and AI chatbots in their own image or that of idealized others, mostly portrayed as White, threatening a post-racial future that erases people of color. What interventions are effective in increasing their awareness?
Biases in the development of artificial intelligence (AI) have recently received increased attention. Specifically, biases relating to gender and race are frequently programmed into technology designs, in many cases allegedly due to individual programmers who, consciously or unconsciously, replicate existing biases in their work. Past research has neglected to empirically investigate the role of individual programmers in overcoming biases, especially the question of how they can be motivated to engage in bias detection and affirmative action.

The authors develop and test a conceptual framework on the effectiveness of motivational appeals directed at programmers, outlining the role of message framing, the speaker’s race and gender, and receivers’ individual differences in social dominance orientation-egalitarianism (SDO-E) in driving stereotype activation and implicit affirmative action outcomes (i.e., the ability to detect potential biases, such as an AI chatbot portrayed as a White male). The framework proposes that a problem framing (i.e., “you are part of the problem”) will be more effective than a solution framing (i.e., “you are part of the solution”) if the speaker is White and male rather than Black and female, and vice versa. Regarding individual differences, the authors propose that these results will hold only for respondents with low levels of SDO-E and will reverse for respondents with high levels, because the pursuit of egalitarian values automatically inhibits stereotype activation.

To test the framework, the authors recruited 590 real US programmers via Prolific to participate in a 2x2 experiment measuring their ability to detect bias.
Arlette is a PhD student in Business Ethics and AI at the University of Mannheim. She is also a human rights lawyer from the Dominican Republic and holds a Master’s in Public Policy from the University of Bristol.