A growing set of jurisdictions has embraced “consultative algorithmic governance,” the idea that community members, particularly racially and otherwise politically marginalized ones, should be involved in the processes by which state institutions procure, construct, implement, and oversee artificially intelligent algorithms employed in public sector decision-making. Consultative processes range from public hearings that provide communities with an opportunity to comment about anticipated algorithmic use to community advisory boards that help public officials evaluate the impact of current or future algorithmic use.
This Article argues that consultative algorithmic governance is critically flawed and builds upon this critique to point toward a more pluralistic, and potentially contentious, vision of community participation. Consultative algorithmic governance rests on the assumption that community participation should be limited to consultation. This narrow conception depoliticizes the stakes of algorithm use, to the detriment of the communities most likely to be harmed by it. For racially and otherwise politically marginalized communities, algorithm use is not neutral but rather can serve as a mechanism to facilitate and justify their punitive treatment by the state. By treating consultation as the primary goal of community participation, consultative algorithmic governance undermines efforts by politically marginalized communities to force the state to account for the multifaceted interests at stake.
To illustrate the problem, the Article focuses on the use of consultative algorithmic governance in criminal administration. It then sets out a different approach to community participation, one that recognizes that communities may seek to do more than consult with criminal legal institutions. They may seek to influence algorithmic construction, change algorithmic policy, participate in ongoing algorithmic oversight, or even build political power to stop algorithm use altogether. If we are serious about accounting for the needs of different communities, then we should aim to better facilitate the diverse purposes for which communities participate in algorithmic governance. This recognition directs us beyond merely providing opportunities for community consultation and toward also aiding communities in their efforts to render the state more responsive to their perspectives and needs. This Article explores incorporating into algorithmic governance resources that can better facilitate the diverse aims different communities pursue, whether the aim is to consult, collaborate, or check institutional power. While these resources are not a complete solution, they are part of the larger restructuring needed to create a more democratic iteration of algorithmic governance for all.