Women have made significant contributions to the field of AI, but their work often goes unrecognized. To shine a spotlight on these remarkable women, TechCrunch is launching a series of interviews highlighting their achievements. One of these exceptional women is Ewa Luger, co-director at the Institute of Design Informatics and co-director of the Bridging Responsible AI Divides (BRAID) program.
Luger’s research focuses on social, ethical, and interactional issues in data-driven systems, including AI systems. She is particularly interested in issues of design, power distribution, exclusion, and user consent. Her work has led her to collaborations with policymakers, industry leaders, and the U.K. Department for Culture, Media and Sport (DCMS).
Luger’s journey into AI began during her time at Microsoft Research, where she worked in the user experience and design group. As AI became a core focus at Microsoft, her work naturally evolved in that direction. After her time at Microsoft, she moved to the University of Edinburgh to explore issues of algorithmic intelligibility. This interest ultimately led her into the field of responsible AI, where she is currently leading the BRAID program.
When asked about the work she is proudest of in the AI field, Luger points to her paper on the user experience of voice assistants, the first study of its kind and one that continues to be highly cited. She is most proud, however, of her ongoing work with the BRAID program. In partnership with the Ada Lovelace Institute and the BBC, BRAID aims to connect arts and humanities knowledge to policy, regulation, industry, and the voluntary sector. Luger believes that the arts and humanities have been overlooked in the AI field despite their significant contributions.
Luger has also collaborated with industry partners like Microsoft and the BBC to co-produce responsible AI challenges. Through these collaborations, academic researchers have been able to respond to real-world challenges and contribute to the development of responsible AI practices. BRAID has funded 27 projects so far and has plans for a new call for proposals.
In addition to her work with BRAID, Luger and her team are designing a free online course for stakeholders looking to engage with AI. They are also setting up a forum to engage a diverse cross-section of the population and other stakeholders to support governance of AI technologies. Luger believes that there is a need to address the myths and hyperbole surrounding AI and to ensure that the voices of those who are most likely to suffer downstream harms are heard.
When asked about navigating the challenges of the male-dominated tech industry, Luger acknowledges that these issues are not unique to the industry but are also present in academia. She shares her experience of working in a male-dominated lab and the higher standards and expectations placed on women. Luger emphasizes the importance of setting boundaries, saying no, and recognizing one’s own value to challenge these dynamics. She also highlights the need for a more balanced gender representation in tech and academia.
For women seeking to enter the AI field, Luger advises going for opportunities that allow them to level up, even if they don't feel 100% qualified. She emphasizes the importance of not foreclosing opportunities for themselves and not underestimating their own abilities. Luger also acknowledges the trend toward greater gender awareness in hiring and among funders, but believes there is still much work to be done to ensure gender representation in the field.
In terms of the most pressing issues facing AI, Luger highlights the immediate and downstream harms that can occur if AI systems are not designed, governed, and used responsibly. She specifically points out the environmental impact of large-scale AI models and the need to reconcile the speed of AI innovation with regulatory measures. Luger also raises concerns about the effects of AI-generated content on the creative industries, journalism, and democratic systems. She emphasizes the importance of addressing bias in AI systems.
Luger believes that users should be aware of issues of trust, veracity, and authenticity when interacting with AI systems. She warns against fully trusting generated text and encourages people to use AI systems only in low-risk contexts. Luger also emphasizes the need for societal literacies that allow people to make reasoned judgments about AI-generated content and to check its source.
When it comes to responsibly building AI, Luger advocates for algorithmic impact assessments, regulatory compliance, and an active focus on doing good rather than just minimizing risk. She emphasizes the importance of addressing the composition of AI designers to ensure diversity and to address bias in AI systems. Luger also highlights the need to train systems architects to be aware of moral and socio-technical issues and to involve stakeholders in the governance and conceptual design of AI systems. Additionally, she stresses the importance of thorough stress-testing of AI systems and the availability of mechanisms for opt-out, contestation, and recourse.
In terms of investors pushing for responsible AI, Luger recognizes the inherent challenges faced by companies driven by capital gain. She believes that responsibility should be the minimum standard for companies but acknowledges the competing values and the trade-offs that companies often face. Luger calls for an alignment of values and incentives to prioritize responsible AI. She also emphasizes the need to consider the impact of AI on marginalized groups who may not have the resources to contest negative outcomes.
In conclusion, Ewa Luger’s work in the field of responsible AI highlights the importance of considering social, ethical, and interactional issues in AI systems. Her contributions to the BRAID program and her collaborations with industry partners demonstrate the potential for multidisciplinary efforts to shape the development and use of AI technologies. Luger’s insights provide valuable guidance for women seeking to enter the AI field and for investors seeking to push for responsible AI practices.