Data-driven systems employ algorithms to aid human judgment in critical domains such as hiring and employment, school and college admissions, credit and lending, and college ranking. Because of their impact on individuals, population groups, institutions, and society at large, it is critical to incorporate fairness, accountability, and transparency considerations into the design, validation, and use of these systems. Research in this area has so far focused mainly on classification and prediction tasks. However, scoring and ranking are also widely used, and they raise concerns that methods designed for classification cannot handle: classification labels are applied one item at a time, whereas ranking explicitly compares items. This project focuses on algorithmic score-based rankers, which sort a set of candidates according to a “simple” scoring formula. Such rankers are widely used in critical domains on the premise that they are easier to design, understand, and justify than complex learned models. Yet even these seemingly simple and transparent rankers can produce counter-intuitive results, unfairly demote candidates who belong to disadvantaged groups, and be prone to manipulation because of their sensitivity to slight changes in the input data or in the scoring formula. Addressing these issues is challenging because of the interplay between the data being ranked and the ranker, the complex structure within the data, and the need to balance multiple objectives.

This project considers the core technical challenges inherent in the responsible design and validation of algorithmic rankers, and pursues three synergistic aims. Aim 1 is to develop methods to quantify the impact of item attributes, and of specific engineering choices regarding attribute representation and pre-processing, on the ranked outcome (validation). This information is then used to guide the data scientist in selecting a scoring function that corresponds to their understanding of quality or appropriateness (design). Aim 2 is to develop methods to quantify the impact of data uncertainty, of slight changes in the scoring formula, or of both, on the ranked outcome (validation). This information is then used to guide the data scientist in intervening on data acquisition and pre-processing to reduce uncertainty, and in selecting a scoring function that is sufficiently stable (design). Aim 3 is to develop methods to quantify the lack of fairness in ranked outcomes with respect to candidates from under-represented or historically disadvantaged groups, in view of multiple fairness objectives and potential intersectional discrimination (validation). This information is then used to identify feasible trade-offs and to assist the data scientist in navigating them to enact fairness-enhancing interventions (design).

Outcomes of this work will impact the practice of scoring and ranking in critical domains such as educational program admissions, hiring, and college ranking. Insights from this work will enable technical interventions where appropriate, and will also identify cases where such interventions are insufficient and more data should be collected or an alternative screening process should be used. The project will also include teaching and mentoring, public education and outreach, and broadening the participation of members of under-represented groups in computing.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
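To make the scoring-and-ranking setting concrete, the following sketch is purely illustrative and not drawn from the project: the candidate attributes, weights, and group labels are all hypothetical. It shows a score-based ranker that sorts candidates by a weighted linear formula, and how a slight reweighting can reorder the output and change which groups reach the top-k:

```python
# Purely illustrative: a score-based ranker that sorts candidates by a
# weighted linear scoring formula. All ids, attribute values, weights,
# and group labels below are hypothetical.

# (candidate id, group label, normalized GPA, normalized test score)
candidates = [
    ("a", "G1", 0.95, 0.70),
    ("b", "G2", 0.75, 0.92),
    ("c", "G1", 0.90, 0.78),
    ("d", "G2", 0.78, 0.91),
]

def rank(items, w_gpa, w_test):
    """Sort candidates by the score w_gpa * gpa + w_test * test, best first."""
    return sorted(items, key=lambda c: w_gpa * c[2] + w_test * c[3], reverse=True)

def top_k_share(ranking, group, k):
    """Fraction of the top-k positions occupied by members of `group`."""
    return sum(1 for c in ranking[:k] if c[1] == group) / k

r1 = rank(candidates, w_gpa=0.6, w_test=0.4)  # one plausible scoring formula
r2 = rank(candidates, w_gpa=0.5, w_test=0.5)  # a slight reweighting

print([c[0] for c in r1])        # ['c', 'a', 'd', 'b']
print([c[0] for c in r2])        # ['d', 'c', 'b', 'a']
print(top_k_share(r1, "G2", 2))  # 0.0 -- no G2 candidate in the top 2
print(top_k_share(r2, "G2", 2))  # 0.5
```

Under the first weighting the top two positions are held entirely by group G1; after a small shift in the weights, a G2 candidate moves to the top. This is the kind of sensitivity to the scoring formula (Aim 2) and group-level fairness concern (Aim 3) that the project studies.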