To give AI-focused women academics and others their well-deserved — and overdue — time in the spotlight, TechCrunch is launching a series of interviews focusing on remarkable women who’ve contributed to the AI revolution. We’ll publish pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.
Emilia Gómez is a principal investigator at the European Commission’s Joint Research Centre and scientific coordinator of AI Watch, the EC initiative to monitor the advancements, uptake and impact of AI in Europe. Her team contributes scientific and technical knowledge to EC AI policies, including the recently proposed AI Act.
Gómez’s research is grounded in the computational music field, where she contributes to the understanding of how humans describe music and how it’s modeled digitally. Starting from the music domain, Gómez investigates the impact of AI on human behavior — in particular the effects on jobs, decisions and child cognitive and socioemotional development.
Q&A
Briefly, how did you get your start in AI? What attracted you to the field?
I started my research in AI, in particular in machine learning, as a developer of algorithms for the automatic description of music audio signals in terms of melody, tonality, similarity, style or emotion, which are exploited in different applications from music platforms to education. I went on to research how to design novel machine learning approaches for different computational tasks in the music field, and to study the relevance of the data pipeline, including data set creation and annotation. What I liked about machine learning at the time was its modeling capabilities and the shift from knowledge-driven to data-driven algorithm design — e.g. instead of designing descriptors based on our knowledge of acoustics and music, we were now using our know-how to design data sets, architectures and training and evaluation procedures.
From my experience as a machine learning researcher, and seeing my algorithms “in action” in different domains, from music platforms to symphonic music concerts, I realized the huge impact that those algorithms have on people (e.g. listeners, musicians) and directed my research toward AI evaluation rather than development, in particular on studying the impact of AI on human behavior and how to evaluate systems in terms of aspects such as fairness, human oversight or transparency. This is my team’s current research topic at the Joint Research Centre.
What work are you most proud of (in the AI field)?
On the academic and technical side, I’m proud of my contributions to music-specific machine learning architectures at the Music Technology Group in Barcelona, which have advanced the state of the art in the field, as reflected in my citation record. For instance, during my PhD I proposed a data-driven algorithm to extract tonality from audio signals (e.g. if a musical piece is in C major or D minor) which has become a key reference in the field, and later I co-designed machine learning methods for the automatic description of music signals in terms of melody (e.g. used to search for songs by humming) or tempo, and for the modeling of emotions in music. Most of these algorithms are currently integrated into Essentia, an open source library for audio and music analysis, description and synthesis, and have been exploited in many recommender systems.
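To give a flavor of what key estimation from audio involves — this is a generic, classic template-matching illustration (Krumhansl–Kessler profiles correlated with a chroma vector), not Gómez’s actual algorithm, and the toy chroma vector below is invented for the example:

```python
import numpy as np

# Krumhansl-Kessler key profiles, a standard reference in key
# estimation. Each gives the perceptual weight of the 12 pitch
# classes relative to a tonic, for major and minor modes.
MAJOR = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
                  2.52, 5.19, 2.39, 3.66, 2.29, 2.88])
MINOR = np.array([6.33, 2.68, 3.52, 5.38, 2.60, 3.53,
                  2.54, 4.75, 3.98, 2.69, 3.34, 3.17])
NOTES = ["C", "C#", "D", "D#", "E", "F",
         "F#", "G", "G#", "A", "A#", "B"]

def estimate_key(chroma):
    """Return the key whose rotated profile best correlates with a
    12-bin chroma vector (pitch-class energy extracted from audio)."""
    best_r, best_key = -2.0, None
    for tonic in range(12):
        for profile, mode in ((MAJOR, "major"), (MINOR, "minor")):
            # Rotate the profile so its tonic lands on this pitch class.
            r = np.corrcoef(np.roll(profile, tonic), chroma)[0, 1]
            if r > best_r:
                best_r, best_key = r, f"{NOTES[tonic]} {mode}"
    return best_key

# Toy chroma vector dominated by C, E and G (a C major triad).
chroma = np.zeros(12)
chroma[[0, 4, 7]] = [1.0, 0.8, 0.9]
print(estimate_key(chroma))  # C major
```

Real systems such as Essentia’s key extractors compute the chroma vector from the audio spectrum and use more refined profiles, but the core idea — matching observed pitch-class distributions against key templates — is the same.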
I’m particularly proud of Banda Sonora Vital (LifeSoundTrack), a project awarded the Red Cross Award for Humanitarian Technologies, where we developed a personalized music recommender adapted to senior Alzheimer’s patients. There’s also PHENICX, a large European Union (EU)-funded project I coordinated on the use of music and AI to create enriched symphonic music experiences.
I love the music computing community and I was happy to become the first female president of the International Society for Music Information Retrieval, to which I’ve been contributing all my career, with a special interest in increasing diversity in the field.
Currently, in my role at the Commission, which I joined in 2018 as lead scientist, I provide scientific and technical support to AI policies developed in the EU, notably the AI Act. From this recent work, which is less visible in terms of publications, I’m proud of my humble technical contributions to the AI Act — I say “humble” as you may guess there are many people involved here! As an example, there’s a lot of work I contributed to on the harmonization or translation between legal and technical terms (e.g. proposing definitions grounded in existing literature) and on assessing the practical implementation of legal requirements, such as transparency or technical documentation for high-risk AI systems, general-purpose AI models and generative AI.
I’m also quite proud of my team’s work in supporting the EU AI liability directive, where we studied, among other things, the particular characteristics that make AI systems inherently risky, such as lack of causality, opacity, unpredictability and their self- and continuous-learning capabilities, and assessed the associated difficulties when it comes to proving causation.
How do you navigate the challenges of the male-dominated tech industry, and, by extension, the male-dominated AI industry?
It’s not only tech — I’m also navigating a male-dominated AI research and policy field! I don’t have a technique or a strategy, as it’s the only environment I know. I don’t know what it would be like to work in a diverse or a female-dominated working environment. “Wouldn’t it be nice?” as the Beach Boys song goes. I honestly try to avoid frustration and have fun in this challenging scenario, working in a world dominated by very assertive guys and enjoying collaborating with excellent women in the field.
What advice would you give to women seeking to enter the AI field?
I would tell them two things:
You’re much needed — please enter our field, as there’s an urgent need for diversity of visions, approaches and ideas. For instance, according to the divinAI project — a project I co-founded on monitoring diversity in the AI field — only 23% of author names at the International Conference on Machine Learning and 29% at the International Joint Conference on AI in 2023 were female, regardless of their gender identity.
You aren’t alone — there are many women, nonbinary colleagues and male allies in the field, even though we may not be so visible or recognized. Look for them and get their mentoring and support! In this context, there are many affinity groups present in the research field. For instance, when I became president of the International Society for Music Information Retrieval, I was very active in the Women in Music Information Retrieval initiative, a pioneer in diversity efforts in music computing with a very successful mentoring program.
What are some of the most pressing issues facing AI as it evolves?
In my opinion, researchers should devote as much effort to AI evaluation as to AI development, as there’s now a lack of balance. The research community is so busy advancing the state of the art in terms of AI capabilities and performance, and so excited to see their algorithms used in the real world, that they forget to do proper evaluations, impact assessment and external audits. The more intelligent AI systems are, the more intelligent their evaluations should be. The AI evaluation field is under-studied, and this is the cause of many incidents that give AI a bad reputation, e.g. gender or racial biases present in data sets or algorithms.
What are some issues AI users should be aware of?
Citizens using AI-powered tools, like chatbots, should know that AI is not magic. Artificial intelligence is a product of human intelligence. They should learn about the working principles and limitations of AI algorithms to be able to challenge them and use them in a responsible way. It’s also important for citizens to be informed about the quality of AI products, how they are assessed or certified, so that they know which ones they can trust.
What is the best way to responsibly build AI?
In my view, the best way to develop AI products (with a good social and environmental impact and in a responsible way) is to spend the needed resources on evaluation, assessment of social impact and mitigation of risks — for instance, to fundamental rights — before placing an AI system on the market. This is for the benefit of businesses and of trust in products, but also of society.
Responsible AI or trustworthy AI is a way to build algorithms where aspects such as transparency, fairness, human oversight or social and environmental well-being need to be addressed from the very beginning of the AI design process. In this sense, the AI Act not only sets the bar for regulating artificial intelligence worldwide, but it also reflects the European emphasis on trustworthiness and transparency — enabling innovation while protecting citizens’ rights. This I feel will increase citizen trust in the product and technology.