Algorithmic Bias: How Algorithms Can Limit Economic Opportunity for Communities of Color


Racial and gender bias in algorithms impacts communities of color in disproportionate and frightening ways. The urgency of addressing this issue cannot be stressed enough. Algorithmic bias goes beyond the privacy and surveillance issues we have seen in the past. Biased algorithms can determine my access to healthcare, jobs, loans, and, more broadly, economic opportunity -- and yours, too.

As a young person of color who relies heavily on technology, I worry about the ways inequality is becoming automated, normalized and worsened through computerized decision-making systems. I have first-hand experience of what happens when we leave discrimination unchecked. As a Muslim raised post-9/11, I have seen Islamophobia continue to increase. I watched as Mohammed became Mo, as aunties took off their hijabs, as my community did all but shed their skin to hide their Muslim identity.

Fast forward to today, and my community is experiencing unprecedented levels of Islamophobia, normalized in media, policy, and culture, not only in the U.S. but globally. The silence around the genocide of Rohingya Muslims in Burma and the extermination of Uyghur Muslims in China speaks volumes about how normalized the dehumanization of Muslims has become -- a dehumanization amplified by platforms like Facebook, which allowed hate-based propaganda to spread unchecked. So I of all people understand why addressing algorithmic bias is a matter of urgency. We must include this analysis in our fight for equity and justice before automated inequality becomes the new status quo.

Let's focus on the hiring process as an example of what algorithmic bias looks like. To begin with, an algorithm is a process for solving a problem. Think of it as a formula: you plug in datasets and methods, and it produces results. The simplest algorithms are written based only on the intuitions of the programmer, but many in practice also rely on big data and artificial intelligence, which combine datasets with programmer instructions to shape algorithms into more finely tuned formulas. Though generally seen as objective, computerized decision-making systems contain both human and dataset bias. They are, after all, created by humans who bring their own biases, which can shape the way the program is structured and the information that's fed into it.
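To make this concrete, here is a minimal sketch of a keyword-based resume scorer in Python. Every name, keyword, and weight below is invented for illustration; the point is that a human chose them, so the "objective" ranking simply reflects those choices.

```python
# A minimal, hypothetical resume scorer. The keyword list and weights are
# the programmer's choices -- and Textio's research suggests that words
# like "fearless" and "enforcement" tend to attract male applicants.
KEYWORDS = {"fearless": 2, "enforcement": 2, "python": 1, "leadership": 1}

def score_resume(resume_text: str) -> int:
    """Count weighted keyword matches in a resume (crude on purpose)."""
    words = resume_text.lower().split()
    return sum(weight for kw, weight in KEYWORDS.items() if kw in words)

# Invented applicants, equally qualified in substance:
applicants = {
    "applicant_a": "fearless engineer with enforcement experience",
    "applicant_b": "collaborative python developer with community leadership",
}

# The "top candidate" is whoever matched the programmer's word list best.
ranked = sorted(applicants, key=lambda name: score_resume(applicants[name]),
                reverse=True)
print(ranked)  # ['applicant_a', 'applicant_b']
```

Nothing here is malicious; the bias enters quietly through which words the programmer decided to reward.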

Hiring software contains algorithmic bias, limiting economic opportunity for people of color and marginalized genders. When you have 1,000 applicants for a job, using an algorithm to pick out your top 20 candidates to interview solves a problem of capacity and time for an organization. For this algorithm to pick the best applicants, you plug in resumes of successful hires at the organization, past hiring history, and keywords that match the job description. Here is where the "isms" start to show up.

Let's use Uber as an example. In 2017, Uber's technical leadership was entirely White and Asian, and 88.7 percent male. This means successful hires were White or Asian men, and the hiring history dataset was made up only of this group. Keywords may also include bias. Textio, a company that helps create gender-neutral language in job descriptions, shows that words like "enforcement" or "fearless" tend to attract male applicants. Based on this data, the hiring algorithm will likely pick White and Asian men as the top 20 candidates, taking away economic opportunity from qualified diverse candidates. These are just two examples of how an algorithm can contain bias in the hiring process. The bias that led the hiring process to select only White and Asian men for the job is now embedded into the algorithm, automating this cycle of discrimination.
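To see how that embedding happens, here is a hedged sketch with invented numbers -- not Uber's actual data or system. A simple classifier is trained on hypothetical past hiring decisions in which every successful hire happened to use male-coded resume language; the model then favors that language even between equally experienced candidates.

```python
# Hypothetical past-hire data; not any company's real records.
from sklearn.linear_model import LogisticRegression

# Features per applicant: [years_of_experience, uses_male_coded_words (0/1)]
X_train = [
    [5, 1], [7, 1], [4, 1], [6, 1], [8, 1],  # past hires: all male-coded
    [6, 0], [9, 0], [5, 0],                  # past rejections
]
y_train = [1, 1, 1, 1, 1, 0, 0, 0]           # 1 = hired, 0 = rejected

model = LogisticRegression().fit(X_train, y_train)

# Two candidates with identical experience, differing only in coded language:
print(model.predict_proba([[7, 1], [7, 0]])[:, 1])
# The male-coded resume gets a far higher "hire" probability. The pattern
# in the historical data is now baked into the algorithm itself.
```

Notice that race and gender never appear in the code; the bias arrives through the historical data and a proxy feature, which is exactly what makes it so hard to spot.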

As someone who is currently applying for jobs, I worry that I may not even get an interview, despite my qualifications, due to this bias. I found there are hacks you can use to improve your chances of getting past resume-reading software, with tools like Bloc and Jobscan. To learn more about algorithmic bias in the hiring process, read this report by Upturn.

In order to address this issue at its root, community organizers, policy advocates, nonprofit professionals, and youth need to understand the impact of algorithmic bias based on race, gender, and other factors. Our communities must mobilize to create solutions that are by us and for us -- soon. Companies like Google still lag in creating long-term solutions that address the root cause of these issues. We need a grassroots #PeoplePowered movement to bring #TechEquity, before these supposedly "objective" systems normalize and worsen discrimination.

Follow us on Twitter for more updates and stay tuned for our official report on #AlgorithmicBias. Have ideas, or want training on how your organization can advance #TechEquity? Email haleemab@greenlining.org.

Our work is licensed under Creative Commons (CC BY-NC-ND 3.0). Feel free to republish and share widely.