“I’m fully who I am”: Towards Centering Transgender and Non-Binary Voices to Measure Biases in Open Language Generation

Transgender and non-binary (TGNB) individuals disproportionately experience discrimination and exclusion from daily life. Given the recent popularity and adoption of language generation technologies, the potential to further marginalize this …

Queer In AI: A Case Study in Community-Led Participatory AI

Queerness and queer people face an uncertain future in the face of ever more widely deployed and invasive artificial intelligence (AI). These technologies have caused numerous harms to queer people, including privacy violations, censoring and …

Bound by the Bounty: Collaboratively Shaping Evaluation Processes for Queer AI Harms

Bias evaluation benchmarks and dataset and model documentation have emerged as central processes for assessing the biases and harms of artificial intelligence (AI) systems. However, these auditing processes have been criticized for their failure to …

Factoring the Matrix of Domination: A Critical Review and Reimagination of Intersectionality in AI Fairness

Intersectionality is a critical framework that, through inquiry and praxis, allows us to examine how social inequalities persist through domains of structure and discipline. Given AI fairness' raison d'être of 'fairness', we argue that adopting …

Should they? Mobile Biometrics and Technopolicy meet Queer Community Considerations

Smartphones are integral to our daily lives and activities, providing everything from basic functions like texting and phone calls to more complex motion-based functionalities like navigation, mobile gaming, and fitness-tracking. To facilitate these …

Improving Adversarial Robustness to Sensitivity and Invariance Attacks with Deep Metric Learning

Intentionally crafted adversarial samples have effectively exploited weaknesses in deep neural networks. A standard approach to adversarial robustness assumes a framework for defending against samples crafted by minimally perturbing an input such that its …

Auditing Algorithmic Fairness in Machine Learning for Health with Severity-Based LOGAN

Auditing machine learning-based (ML) healthcare tools for bias is critical to preventing patient harm, especially in communities that disproportionately face health inequities. General frameworks are becoming increasingly available to measure ML …

Harms of gender exclusivity and challenges in non-binary representation in language technologies

Gender is widely discussed in the context of language tasks and when examining the stereotypes propagated by language models. However, current discussions primarily treat gender as binary, which can perpetuate harms such as the cyclical erasure of …

Leveraging Social Media Activity and Machine Learning for HIV and Substance Abuse Risk Assessment

A data collection and artificial intelligence system for automated health risk behavior assessment using social media data. (JMIR 2021)
