The automated decision systems pervasive in banking, the criminal justice system and the tech sector can easily amplify forms of discrimination once attributable only to human bias. The Department of Housing and Urban Development spoke to this problem on Thursday when it charged Facebook with violating the Fair Housing Act of 1968 by permitting advertisers to restrict access to housing ads based on user characteristics like race, religion or national origin.
The HUD charge resembles a suit filed last year by fair housing groups, which Facebook settled this month on sweeping terms. Nevertheless, federal involvement puts tech companies on notice that they can be held accountable for civil rights violations connected with the ostensibly “neutral” systems they use to decide which users see which material.
The Fair Housing Act of 1968 was partly intended to compensate for housing discrimination inflicted by the federal government itself, as it had previously shut most African-Americans out of homeownership through redlining. That practice entailed drawing lines around black neighborhoods and declaring them unsafe for federally backed mortgage insurance.
The act outlawed housing discrimination on paper, but the federal government has consistently dragged its feet on enforcement, leaving most of the work to private groups that have repeatedly gone to court to challenge both housing discrimination and the government’s failure to fight it. In recent years, these groups have increasingly focused on whether online services are replicating longstanding biases by using algorithms to target ads at specific users.