Digital Equity in Education is About More Than the Internet

As schools embrace technology in response to COVID-19, they must investigate the potential racial biases that are built into the technologies they use.

Mandatory COVID-19 shutdowns quickly transformed the meaning of “school.” No longer a physical, communal space for learning, schools shifted focus from learning experiences and classroom management to a slew of new priorities. Teachers had to master new technologies, support students’ well-being, manage diverse home environments, and foster learning through a device, all while supporting their own families. Concerns about teacher training, screen time, and the effectiveness of technology disappeared overnight as schools embraced technology to keep learning going. This rapid transition also forced schools to confront students’ unequal backgrounds directly, exposing the living reality of the many students who don’t have a safe or quiet space to learn.

The digital divide is real. It is abundantly clear that many students, especially those from Black and Brown communities, disproportionately lack access to the basic internet service that is a prerequisite for online education. These inequities persist despite schools digging deep into their budgets to purchase devices and Wi-Fi hotspots for students in need; without those efforts, millions of students would not be able to “learn from home.”

Digital equity is not just about access to devices and the internet. Remote learning requires teachers and students to use digital products designed by technology companies that sell to schools. Even when all students have equal access to the internet, schools must ensure that these companies critically consider the ways racial bias still impacts students today. The same scrutiny schools apply to teacher bias and the cultural relevance of content should also apply to the software schools use.

Black and Brown students face rampant inequality in schools on a daily basis. Compared to the white students for whom the education system was designed, they are suspended more often, placed on lower academic tracks, and taught content to which they often cannot culturally relate. These experiences label students in ways that exclude them from the system, leading them down the well-documented school-to-prison pipeline. The discrimination is reflected in a range of “objective” outcome data: racial gaps in attendance and discipline records, grades, and test scores.

Even in the safety of their own homes, these biases follow students into the technologies schools use to teach and learn. Many EdTech companies use (albeit aggregated) student data to train algorithms that promise to personalize learning, identify at-risk students, and save teachers time. EdTech companies often don’t understand the complex and discriminatory racial dynamics in schools, which shapes the way their products are designed and used. Machine learning algorithms exacerbate these issues further because they learn over time from the assumptions and data that created them. Yet few companies test or disclose how the impact of their products differs by race.
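
To make this concrete, a company could start with a simple audit of how often its model flags students in each demographic group. The Python sketch below is a hypothetical illustration, not any vendor’s actual pipeline: the group labels, records, and the 0.8–1.25 rule of thumb are all assumptions.

```python
# A minimal sketch of a disparate-impact check for an "at-risk" flagging
# model. All names and data here are hypothetical illustrations.

from collections import defaultdict

def flag_rates_by_group(records):
    """Share of students flagged 'at risk' within each group.

    `records` is a list of (group, flagged) pairs, where `group` is a
    demographic label and `flagged` is True if the model flagged the student.
    """
    flagged = defaultdict(int)
    total = defaultdict(int)
    for group, is_flagged in records:
        total[group] += 1
        flagged[group] += int(is_flagged)
    return {g: flagged[g] / total[g] for g in total}

def disparate_impact_ratio(rates, reference_group):
    """Ratio of each group's flag rate to the reference group's rate.

    Ratios far from 1.0 signal that the model treats groups differently;
    a common (if crude) rule of thumb questions ratios above 1.25 or
    below 0.8.
    """
    ref = rates[reference_group]
    return {g: r / ref for g, r in rates.items()}

# Hypothetical model output: (group, flagged) pairs.
records = [("A", True), ("A", False), ("B", True), ("B", True), ("B", False)]
rates = flag_rates_by_group(records)
print(disparate_impact_ratio(rates, reference_group="A"))
```

An audit like this is a floor, not a ceiling: it reveals whether a model flags one group at a different rate, but not why, which is where the design work described below comes in.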

EdTech companies must deliberately design products for Black and Brown students, who make up nearly half of the students in American K-12 public schools. This means prioritizing Black and Brown students as key users of their products and allocating time to detect and mitigate racial bias during each phase of product development. Companies must work closely with schools to understand the contexts in which their products are used, so that feedback mechanisms intended to “improve” an algorithm don’t simply reinforce existing biases.

If nothing else, EdTech companies should test and disclose differences in the impact of their products by race. Even with the best intentions, companies cannot mitigate racial bias without first measuring it. If Black students’ test scores improve by fewer points than other students’ do, that is evidence the product was not designed with Black students in mind. Companies that want to avoid collecting sensitive race data at the individual student level can use school-level demographic information instead, as sketched below.
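
As one hedged illustration of that last point, an impact audit can run entirely on school-level data. The Python sketch below assumes hypothetical per-school records of average score gains alongside publicly available demographic shares; none of the field names, figures, or the 50% threshold come from a real product.

```python
# A minimal sketch of an impact audit that avoids individual-level race
# data: score gains are compared across schools grouped by their public
# demographic profiles. All field names and figures are hypothetical.

from statistics import mean

# Hypothetical per-school records: each school's average score gain after
# using the product, plus the share of Black and Brown students enrolled
# (available from public school-level demographic data).
schools = [
    {"avg_gain": 8.2, "pct_black_brown": 0.75},
    {"avg_gain": 11.5, "pct_black_brown": 0.20},
    {"avg_gain": 7.9, "pct_black_brown": 0.68},
    {"avg_gain": 12.1, "pct_black_brown": 0.15},
]

def gains_by_school_profile(schools, threshold=0.5):
    """Compare average score gains between majority Black and Brown
    schools and other schools, using only school-level demographics."""
    majority = [s["avg_gain"] for s in schools if s["pct_black_brown"] >= threshold]
    other = [s["avg_gain"] for s in schools if s["pct_black_brown"] < threshold]
    return {"majority_black_brown": mean(majority), "other": mean(other)}

print(gains_by_school_profile(schools))
# A persistent gap between the two averages is a signal that the product
# serves one set of schools better than the other.
```

School-level grouping is coarser than individual-level data, but it lets a company detect and disclose a gap without ever storing a student’s race.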

As schools rely on technology more than ever before, they should require companies to demonstrate their commitment to racial equity in the design of their products, the same way they do for data privacy and accessibility. We have already seen horror stories about artificial intelligence in other industries; without these safeguards, it is only a matter of time before educators and families confront similar catastrophes.

The COVID-19 crisis presents an opportunity for schools to redesign students’ learning experiences: to think critically about the way our education system is designed, or rather, for whom it is designed. We must take responsibility for the ways our system disadvantages Black and Brown students by ensuring that schools champion students’ needs and that EdTech companies commit to racial equity on par with their commitments to data privacy and accessibility.

For more information:

The Edtech Equity Project provides technology companies with an AI in Education Toolkit for Racial Equity, which details specific, tangible practices companies can adopt to account for racial bias in their products. The Edtech Equity Project also provides procurement guidelines to help schools ask companies the right questions and meaningfully evaluate a product’s potential impact on racial equity.

Nidhi Hebbar is an ed-technologist and fellow at the Aspen Tech Policy Hub.
