Abstract

Machine learning algorithms (MLAGs), frequently used in artificial intelligence (AI), identify patterns across sets of data to derive decision-making intelligence. In recent years, as society has granted increasing authority to ML-driven AIs, these algorithms have demonstrated the ability to take on human-like discriminatory biases. Microsoft's "Tay," for example, a social media-based chatbot, went from resembling a normal teenage girl to displaying racist and sexist attitudes in a mere sixteen hours. Tay and many other ML-driven implementations across a wide variety of fields have replicated numerous human biases. In most cases, these human-like biases arose from improper training of the associated MLAG. The purpose of this study is to illustrate the harmful effects of learned human-like biases in MLAGs, to highlight how improper algorithm training leads to bias formation, and to analyze research in bias correction. Discriminatory, human-like biases observed in ML algorithms have numerous harmful effects, and there is a growing need to regulate and correct them.
