To give AI-focused women academics and others their well-deserved — and overdue — time in the spotlight, TechCrunch is launching a series of interviews focusing on remarkable women who’ve contributed to the AI revolution. We’ll publish several pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.
Krystal Kauffman worked as an organizer on political and issue campaigns for a decade before pursuing a degree in geology. Then, she turned to gig work, which led her to Turkopticon, a nonprofit organization dedicated to fighting for the rights of gig workers — specifically those using Amazon’s Mechanical Turk (AMT) platform.
Now the lead organizer at Turkopticon, Kauffman recently started as a research fellow with the Distributed AI Research Institute (DAIR), working alongside others to build — in her words — “a community of workers united in righting the wrongs of the big-tech marketplace platforms.”
Q&A
Briefly, how did you get your start in AI? What attracted you to the field?
In 2015, I became ill and couldn’t work outside of my home. While doctors were trying to sort things out, I found the AMT platform. For the next two years, I was able to support myself doing data work, completing tasks that helped program AI, build LLMs and so on. During my time working on AMT, I became very passionate about solving issues with the platform and taking on the ethics of data work in general.
What work are you most proud of (in the AI field)?
When I first started data work nine years ago, very few people knew that there was a global workforce quietly programming smart devices, developing AI and building datasets from their homes. Over the last several years, I’ve spoken out about this workforce and the ethical challenges that come with data work through interviews, conference panels, articles, forums, workshops, speaking engagements, social media and work with legislators. It’s an honor to be in a position in which I can help educate the general public, congressional leaders and labor advocates about this workforce and all that comes with it.
How do you navigate the challenges of the male-dominated tech industry, and, by extension, the male-dominated AI industry?
I consider myself very fortunate because I have a great support system that includes my colleagues and mentors. I choose to surround myself with people who want to see female and non-binary folks succeed. My mentors are women and I also seek advice from supportive men. One thing that has to continue, however, is speaking up about inequity and moving the conversation forward to change it.
What advice would you give to women seeking to enter the AI field?
I would tell any woman wanting to enter the AI field to go for it! Finding a good mentor or mentors is so important. Look to the many strong women and non-binary folks in the field for guidance when needed. Forge relationships with supportive men. Lastly, don’t be afraid to speak up. Great ideas come from confronting some of the hardest questions!
What are some of the most pressing issues facing AI as it evolves?
One of the most pressing issues facing the evolution of AI is accessibility. Who has access to the tools? Who’s providing the data and maintaining the system? Who’s benefiting from AI? What populations are being left behind and how do we change that? How are the workers behind the system being treated?
The other issue I would raise here would be bias. How do we create systems completely free from bias?
What are some issues AI users should be aware of?
I would always tell users to look at how the workers training AI are being treated. That’s an indicator of so many things.
What is the best way to responsibly build AI?
It’s imperative that we involve underrepresented populations in the creation of AI. The people who will be impacted by the tech should always have a seat at the table. Similarly, the creation of AI legislation has to involve data workers. They are the foundation of these systems and to have the discussion without them would be irresponsible.
How can investors better push for responsible AI?
I will just say what I have been saying: Nothing is set in stone. We do not have to accept what is being presented to us. The only way things improve is to speak up and act. Look for other organizations pushing for responsible AI. Challenge working conditions, challenge implementation, usage, etc. Challenge anything that feels unfair or irresponsible.