RAND Tool for Evaluating Algorithmic Bias

Introduction

In recent years, there has been a growing awareness that Machine Learning (ML) algorithms can reinforce or exacerbate human biases. The RAND Tool for Evaluating Algorithmic Bias was developed to help assess and correct biases in algorithms that assist in decision-making processes. In particular, the tool helps users visualize tradeoffs between different types of fairness and overall model performance. It also provides methods to mitigate bias through pre-processing or post-processing.
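
For readers unfamiliar with these fairness notions, the sketch below computes two common group-fairness gaps in R. It is a hypothetical illustration with simulated data and an assumed 0.5 decision threshold, not code from the tool itself.

```r
# Hypothetical illustration of two group-fairness gaps
# (simulated data, assumed 0.5 threshold; not code from the tool)
set.seed(1)
score  <- runif(200)                           # model scores for 200 individuals
group  <- sample(c("A", "B"), 200, replace = TRUE)
actual <- rbinom(200, size = 1, prob = score)  # observed binary outcomes
pred   <- as.integer(score >= 0.5)             # binary decision from the score

# Demographic parity gap: difference in selection rates across groups
sel_rate <- tapply(pred, group, mean)
abs(sel_rate["A"] - sel_rate["B"])

# Equal opportunity gap: difference in true positive rates across groups
tpr <- tapply(pred[actual == 1], group[actual == 1], mean)
abs(tpr["A"] - tpr["B"])
```

Different thresholds generally trade one gap off against the other and against overall accuracy, which is the kind of tradeoff the tool is designed to visualize.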

This tool was originally produced as part of a research effort for RAND, with the goal of assisting the Department of Defense (DoD) as it invests in the development of ML algorithms for a growing number of applications. The tool has been extended to address the issue of using proxy measures for group labels, which is common in healthcare settings where information on race and ethnicity is often missing or imputed. The companion reports listed below further discuss this tool, its creation, and its development.

While ML algorithms are deployed in a wide variety of applications, this tool is specifically designed for algorithms that assist in decision-making processes. In particular, this tool is useful when algorithmic output is used to influence binary decisions about individuals. Hypothetical examples within this framework include (1) an algorithm that produces individual-level employee performance scores which are subsequently considered in promotion decisions and (2) an algorithm that produces recommendations for follow-up treatment from medical diagnostic testing.

References

The following report further discusses this tool and its original creation: Advancing Equitable Decisionmaking for the Department of Defense Through Fairness in Machine Learning

The following paper presents the methodological innovations used in the tool to produce disparity estimates from noisy group measurements: De-Biasing the Bias: Methods for Improving Disparity Assessments with Noisy Group Measurements

The following user guide provides the most up-to-date details on using the tool: The RAND Tool for Evaluating Algorithmic Bias

Usage

Download the code from this repository to run this application locally. The application requires R to be installed. Once R is installed, the tool can be launched by following these steps (a minimal command sketch follows the list):

  • Launch the R project
  • Run renv::restore()
  • Run the app.R file
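
As a minimal sketch of those steps in an R session opened from the project root (assuming app.R defines a Shiny application, which this README does not state explicitly):

```r
# Restore the package versions pinned in the renv lockfile
renv::restore()

# Launch the application; assumes app.R defines a Shiny app.
# Equivalently, open app.R in RStudio and run it.
shiny::runApp("app.R")
```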

Contact

Reach out to Joshua Snoke for questions related to this repository.

License

Copyright (C) 2023 by The RAND Corporation. This repository is released as open-source software under a GPL-2.0 license. See the LICENSE file.
