TY - JOUR
T1 - Large language models show human-like content biases in transmission chain experiments
AU - Acerbi, Alberto
AU - Stubbersfield, Joe
PY - 2023/10/26
Y1 - 2023/10/26
N2 - As the use of Large Language Models (LLMs) grows, it is important to examine whether they exhibit biases in their output. Research in Cultural Evolution, using transmission chain experiments, demonstrates that humans have biases to attend to, remember, and transmit some types of content over others. Here, in five pre-registered experiments using the same methodology, we find that the LLM ChatGPT-3 shows biases analogous to humans for content that is gender-stereotype consistent, social, negative, threat-related, and biologically counterintuitive, over other content. The presence of these biases in LLM output suggests that such content is widespread in its training data and could have consequential downstream effects, by magnifying pre-existing human tendencies toward cognitively appealing, but not necessarily informative or valuable, content.
DO - 10.31219/osf.io/8zg4d
M3 - Article
JO - Proceedings of the National Academy of Sciences of the United States of America
JF - Proceedings of the National Academy of Sciences of the United States of America
SN - 0027-8424
ER -