【Academic Lecture】Probing Social Bias in Labor Market Text Generation by ChatGPT: A Masked Language Model Approach (Linglong Kong, University of Alberta)

Published by: School of Statistics and Data Science | Date: 2025-07-07

【Speaker Bio】: Linglong Kong is a Professor in the Department of Mathematical and Statistical Sciences at the University of Alberta. He holds a Canada Research Chair in Statistical Learning and a Canada CIFAR AI Chair. A Fellow of the American Statistical Association (ASA) and of the Alberta Machine Intelligence Institute (Amii), he has published more than 120 peer-reviewed papers in top journals such as AOS, JASA, and JRSSB, and at leading international conferences including NeurIPS, ICML, and ICLR. Professor Kong received the 2025 CRM-SSC Prize (Centre de Recherches Mathématiques – Statistical Society of Canada) in recognition of his outstanding research contributions. He currently serves as an associate editor for several leading journals, including JASA and AOAS, and has held leadership positions in the American Statistical Association and the Statistical Society of Canada. His research spans high-dimensional and neuroimaging data analysis, statistical machine learning, robust statistics, quantile regression, trustworthy machine learning, and AI for smart health.

【Abstract】: As generative large language models (LLMs) such as ChatGPT gain widespread adoption across various domains, their potential to propagate and amplify social biases, particularly in high-stakes areas such as the labor market, has become a pressing concern. AI algorithms are not only widely used in the screening of job applicants; individual job seekers may also use generative LLMs to help develop their job application materials. Against this backdrop, this research builds on a novel experimental design to examine social biases within ChatGPT-generated job applications written in response to real job advertisements. By simulating the process of job application creation, we examine the language patterns and biases that emerge when the model is prompted with diverse job postings. Notably, we present a novel bias evaluation framework based on masked language models to quantitatively assess social bias using validated inventories of social cues/words, enabling a systematic analysis of the language used. Our findings show that the increasing adoption of generative AI, not only by employers but also increasingly by individual job seekers, can reinforce and exacerbate gender and social inequalities in the labor market through the use of biased and gendered language.
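As a rough illustration of the masked-language-model idea the abstract describes, one can compare the probabilities an MLM assigns to competing social-cue words in a masked slot. The sketch below is hypothetical and not the speaker's method: in practice the probabilities would come from a model's fill-mask output (e.g. BERT's softmax over the vocabulary); here a hand-made probability table stands in so the example is self-contained.

```python
# Minimal sketch: a log-odds bias score over MLM fill-mask probabilities.
# The probability table below is fabricated for illustration only; a real
# analysis would query a masked language model for P(token | context).
from math import log

# Stand-in for MLM fill-mask output P(token | context containing [MASK]).
mlm_probs = {
    "The [MASK] is a skilled nurse.":    {"woman": 0.62, "man": 0.21},
    "The [MASK] is a skilled engineer.": {"woman": 0.18, "man": 0.55},
}

def log_odds_bias(context: str, group_a: str, group_b: str) -> float:
    """Log-odds of group_a vs group_b filling the mask; > 0 favours group_a."""
    p = mlm_probs[context]
    return log(p[group_a] / p[group_b])

for ctx in mlm_probs:
    score = log_odds_bias(ctx, "woman", "man")
    print(f"{ctx}  bias(woman vs man) = {score:+.2f}")
```

Aggregating such scores over a validated inventory of cue words and many generated job-application texts would give the kind of systematic, quantitative bias measure the talk discusses.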

【Time】: July 21, 2025, 09:30–10:30

【Venue】: Chongzhen Building (崇真楼), Room 110