Dealing with Bias and Unfairness in AI
Friday, September 20, 2019 — 10:40 AM – 11:40 AM
Machine learning algorithms can encode discriminatory bias when trained on real-world data in which underrepresented groups are not properly characterized. A question quickly emerges: how can we ensure that AI does not discriminate against people from minority groups because of the color of their skin, their gender, or their ethnicity? Moreover, because the tech industry does not represent the entire population, groups underrepresented in computing, such as women, Hispanics, African-Americans, and Native Americans, have limited control over the direction of machine learning breakthroughs. In this panel, we claim that it is our responsibility to advance the progress of artificial intelligence by exposing this problem and proposing reliable solutions grounded in solid research. This will be done by increasing the presence of members of underrepresented groups who can build solutions and algorithms that steer the field in a direction where bias and unfairness are properly addressed. As we are surrounded by ever more data, machine learning algorithms have the potential to automate decisions for everyday people, who are not necessarily aware of how these techniques work. While we expect this technology to be aligned with the values of our society, the reality is that the data sets collected to feed machine learning algorithms also embody the underlying unfairness of the society we live in.
Patricia Ordóñez, Associate Professor, University of Puerto Rico, Río Piedras
Juan Gilbert, Professor and Chair, University of Florida
Kori Inkpen, Principal Researcher, Microsoft
Brianna Posadas, PhD Candidate, University of Florida
Tianlu Wang, PhD Student, University of Virginia