Scholar discusses solutions to New Jim Code

Dr. Ruha Benjamin spoke about how technology and algorithms can be built in a biased manner. Benjamin is an associate professor of African American Studies at Princeton. // Photo courtesy of ruhabenjamin.com

On the afternoon of Feb. 7, the School of History and Sociology (HSOC) presented the second installment of its spring 2022 speaker series, titled “The New Jim Code: Reimagining the Default Settings of Technology and Society.” The virtual lecture was led by Dr. Ruha Benjamin, an associate professor of African American Studies at Princeton University.

Benjamin is a sociologist whose work primarily focuses on the intersection of equity and technological innovation, with a particular concentration on race. While her lecture centered on race, Benjamin recognized that discrimination in technology impacts an intersection of marginalized identities, including race, nationality, gender and sexuality.

“For those of us who want to construct a different social reality, one grounded in justice and joy, we can’t only critique the underside, that is, who these systems harm. But we also have to wrestle with the deep investment, the desires even, the appetite that many people have for social domination,” Benjamin said.

According to Benjamin, algorithmic discrimination is most clearly found in technologies that target marginalized communities. Examples of offending technologies include self-learning algorithms and automated or semi-automated decision-making systems. These technologies are particularly dangerous when used for surveillance or to monitor communities.

Surveillance technology is also becoming increasingly prevalent in the private sector as businesses use it to monitor the productivity of their workers. Workers who do not meet their productivity goals are punished, often in the form of decreased pay. In this instance, the technology is being used to target the working class specifically.

Another technology more familiar to Tech students is the use of artificial intelligence (AI) systems to review job applications.

Research conducted on these AI systems discovered that a variety of discriminatory behaviors were built into them.

For instance, applicant names perceived as ethnic or foreign, particularly Black names, were viewed unfavorably by the AI compared to traditionally white or Anglo names such as John.

The systems also viewed Ivy League or other prestigious universities more favorably than public or lesser-known universities.

Some applicants beat the algorithm by hiding the name of a prestigious university, such as Oxford University, in white letters on their job applications.

The lecture pinpointed algorithmic discrimination in technology used to surveil immigrant communities and police U.S. borders.

Benjamin suggested this is an example of how harmful technology can be used as a tool by a governing body.

“We must move beyond a narrow debate limited to hard versus smart borders towards a discussion of how we can move toward a world where all people have the support needed to live healthy, secure and vibrant lives,” Benjamin said. 

Surveillance technology used to police U.S. borders was created through a collaboration of international tech giants such as Accenture and Gemalto, which hints at a greater obstacle to solving problems of technological discrimination.

These tech firms have made minimal effort to address issues of technological discrimination, so as to avoid any risk of losing potential revenue.

Benjamin advised that incentivizing businesses would not provide an adequate solution. Rather, she argued, discriminatory technology can only be combated effectively by policy solutions.

Some steps have been taken, particularly by international bodies and non-governmental organizations (NGOs) such as the United Nations (UN) and Amnesty International.

Benjamin has worked closely with the UN as it conducts ongoing research on the causes and impacts of algorithmic discrimination, and she has collaborated with Amnesty International to formulate a series of policy solutions to combat technological discrimination.

Amnesty International released a series of strategies for mitigating the harm of algorithmic discrimination that primarily focuses on creating policies to reframe how science and technology regulation is approached: through a lens of justice and ethics that considers the wellness and safety of all people.

One of the organization’s first policy proposals was to restrict self-learning algorithms from high-impact decision-making, specifically decisions that affect people and the environment.

Amnesty International then recommends forming official policy that bans discrimination on the basis of race, ethnicity, nationality and other marginalized identities in technology.

Having specific regulations against technological discrimination provides a legal basis for challenging harmful technology.

Benjamin said such policies create a pathway to build safer and more equitable communities in the future. The significant attendance of the Tech community at the virtual lecture suggested that many are interested in working towards these more equitable policies. 

The audience was enthusiastic, raising questions and concerns about applying ethics in technology and rooting out bias in research and real-world data.

Benjamin reflected on the value of open and honest discussions of race and equity in science and technology. 

“A liberatory imagination opens up possibilities and pathways. It creates new settings and codes new values and builds on critical intellectual traditions that have continuously developed insights and strategies grounded in justice. And my hope is that we’ll all find ways to build on this tradition,” Benjamin said. 

To learn more about the HSOC spring 2022 speaker series, visit hsoc.gatech.edu/speakers-series.
