“I think that a lot of my approach to working with technology is motivated by how to avoid the non-benevolent uses of technology. And there can be a lot of dangers to AI in particular.”
The lack of diversity among AI practitioners is a known challenge. Another is the absence of participation and input from those outside the field who might be affected by the technology. While Bilenko’s short purple pixie haircut wasn’t particularly notable amongst the diverse team that brought Dr. Brainlove to Burning Man, it did cause her to stand out when she attended scientific conferences.
“AI is a profession that excludes a lot of people from participating, and that’s a huge problem,” she says. While chatting with a handful of non-binary and queer scientists among the thousands attending a key research conference in the field, Bilenko and the group brainstormed how to address these issues. The result was “Queer in AI,” a group to support queer researchers and raise awareness of issues in AI and machine learning that might disproportionately affect the queer community. They took their inspiration from other newly created groups like Women in AI (founded in 2016) and Black in AI (founded in 2017).
“A lot of the motivation I had for starting the organization Queer in AI with other folks, it was about some of these impacts of technology on people who are marginalized – both in society in general and in access to these technologies.”
Queer in AI found that most queer scientists they surveyed do not feel completely welcome in the field, partly due to a lack of a visible community and role models. Since then, Queer in AI members have worked to become a more visible presence in the larger AI community, returning to that conference and others each year to host social gatherings, lead mentoring sessions, and give research talks. The group has created a scholarship fund to help students apply to graduate school in AI and worked to raise awareness of issues facing queer researchers within the field. Through original research and advocacy, they encourage and highlight new findings on concerns such as the ethical use of AI, privacy and safety, and how models built on binary gender assumptions can harm members of the queer community. Bilenko notes that algorithm development requires a large volume of data, and older data sometimes includes assumptions based on stereotypes, which then reinforce biased decision making.
“It’s not just wrong, but wrong in ways that harm those that are already the most marginalized.”