
Big data and artificial intelligence are growing more pervasive and are creating new, complex links between individuals and the many groups to which they might belong, including groups no one might have thought of as a “group” before. How should we think about questions of group privacy, discrimination, and group identity in this new world? Does it matter whether algorithms used in healthcare focus on groups already designated as protected classes, rather than on more precisely (or more amorphously) defined groups that may or may not align with protected-class boundaries?

Artificial intelligence (AI) healthcare technology could help solve our most complex and intractable health problems. It has the potential to enhance healthcare quality, improve access, reduce cost, and deliver highly personalized care. Delivering those solutions, however, requires large volumes of historical, broadly representative, and well-organized data from the affected population as well as from the individual seeking care.

Throughout history, data about Indigenous communities, cultures, and territories have been collected and collated through research and surveillance as part of processes of colonisation and assimilation. Settler colonialism occurs not only on the land but also in the academy, where the omission of Indigenous leadership in data production positions Indigenous Peoples as objects of study.

Between 1907 and the mid-1970s, 32 US states passed and implemented eugenic sterilization laws that authorized the sterilization of people considered “unfit.” Our epidemiological, historical, and mixed-methods analysis of over 32,000 eugenic sterilization requests in five US states (California, North Carolina, Iowa, Michigan, and Utah) identifies varying demographic patterns and documents changes in how eugenics laws were applied over time.
