AI Bias and Fairness


Our group is part of a team of researchers at MIT and Boston College, funded by USAID, that addresses bias and fairness in artificial intelligence, particularly in the context of developing countries. Because developing countries often have large, low-resource populations, businesses and governments there increasingly use AI, often out of necessity and convenience, in areas such as employment, banking, health, justice, and even college admissions. But because many people in these countries also face low education levels and high poverty, along with weaker regulation, their communities are particularly vulnerable to bias and fairness issues in such algorithms. Our team investigates not only bias mitigation strategies but also ways to educate implementers and policy-makers about the proper use of machine learning tools.

Students: Olasubomi Olubeko, Christopher Sweeney (supervised by Dr. Najafian), Amit Ghandi (supervised by Dan Frey), Yazeed Awwad (Harvard and BC)

Collaborators: Maryam Najafian (MIT), Mike Teodorescu (BC), Dan Frey (MIT), Kendra Leith (MIT)