Why AI’s diversity crisis matters, and how to tackle it – Nature.com
Inclusivity groups focus on promoting diverse builders for future artificial-intelligence projects. Credit: Shutterstock
Artificial intelligence (AI) is facing a diversity crisis. If it isn't addressed promptly, flaws in the working culture of AI will perpetuate biases that ooze into the resulting technologies, which will exclude and harm entire groups of people. On top of that, the resulting intelligence will be flawed, lacking varied social-emotional and cultural knowledge.
In a 2019 report from New York University's AI Now Institute, researchers noted that more than 80% of AI professors were men. Furthermore, Black individuals made up just 2.5% of Google employees and 4% of those working at Facebook and Microsoft. In addition, the report's authors noted that the overwhelming focus on "women in tech" when discussing diversity issues in AI is too narrow and likely to privilege white women over others.
Some researchers are fighting for change, but there's also a culture of resistance to their efforts. "Beneath this veneer of 'oh, AI is the future, and we have all these sparkly, nice things', both AI academia and AI industry are fundamentally conservative," says Sabine Weber, a scientific consultant at VDI/VDE Innovation + Technik, a technology consultancy headquartered in Berlin. AI in both sectors is dominated by mostly middle-aged white men from affluent backgrounds. "They are really attached to the status quo," says Weber, who is a core organizer of the advocacy group Queer in AI. Nature spoke to five researchers who are spearheading efforts to change the status quo and make the AI ecosystem more equitable.
Senior data science manager at Shopify in Atlanta, Georgia, and a general chair of the 2023 Deep Learning Indaba conference.
I am originally from Ghana and did my master's in statistics at the University of Akron in Ohio in 2011. My background is in using machine learning to solve business problems in customer-experience management. I apply my analytics skills to build models that drive customer behaviour, such as customer-targeting recommendation systems, aspects of lead scoring (ranking potential customers and prioritizing which ones to contact for different communications) and things of that nature.
This year, I'm also a general chair for the Deep Learning Indaba, a meeting of the African machine-learning and AI community that is held in a different African country every year. Last year, it was held in Tunisia. This year, it is taking place in Ghana in September.
Our organization is built for all of Africa. Last year, 52 countries participated. The goal is to have all 54 African countries represented. Deep Learning Indaba empowers each country to have a network of people driving things locally. We have the flagship event, which is the annual conference, and country-specific IndabaX events (think TED and TEDx talks).
During Ghana's IndabaX conferences, we train people in how to program and how to deal with different kinds of data. We also do workshops on what is happening in the industry outside of Ghana and how Ghana should be involved. IndabaX provides funding and recommends speakers who are established researchers working for companies such as DeepMind, Microsoft and Google.
To strengthen machine learning and AI and inclusion in Ghana, we need to build capacity by training young researchers and students to understand the skill sets and preparation they need to excel in this field. The number one challenge we face is resources. Our economic status is such that the focus of the government and most Ghanaians is on people's daily bread. Most Ghanaians are not even thinking about technological transformation. Many local academics don't have the expertise to teach the students, to really ground them in AI and machine learning.
Most of the algorithms and systems we use today were created by people outside Africa. Africa's perspective is missing and, consequently, biases affect Africa. When we are doing image-related AI, there aren't many African images available. African data points make up no more than 1% of most industry machine-learning data sets.
When it comes to self-driving cars, the US road network is nice and clean, but in Africa, the network is very bumpy, with a lot of holes. There's no way that a self-driving car trained on US or UK roads could actually work in Africa. We also expect that using AI to help diagnose diseases will transform people's lives. But this will not help Africa if people are not going there to collect data, and to understand African health care and related social-support systems, sicknesses and the environment people live in.
Today, African students in AI and machine learning must look for scholarships and leave their countries to study. I want to see this change and I hope to see Africans involved in decision-making, pioneering huge breakthroughs in machine learning and AI research.
Researchers outside Africa can support African AI by mentoring and collaborating with existing African efforts. For example, we have Ghana NLP, an initiative focused on building algorithms to translate English into more than three dozen Ghanaian languages. Global researchers volunteering to contribute their skill set to African-specific research will help with efforts like this. Deep Learning Indaba has a portal in which researchers can sign up to be mentors.
Maria Skoularidou has worked to improve accessibility at a major artificial-intelligence conference. Credit: Maria Skoularidou
PhD candidate in biostatistics at the University of Cambridge, UK, and founder and chair of {Dis}Ability in AI.
I founded {Dis}Ability in AI in 2018, because I realized that disabled people weren't represented at conferences, and it didn't feel right. I wanted to start such a movement so that conferences could be inclusive and accessible, and disabled people such as me could attend them.
That year, at NeurIPS (the annual conference on Neural Information Processing Systems) in Montreal, Canada, at least 4,000 people attended, and I couldn't identify a single person who could be categorized as visibly disabled. Statistically, it doesn't add up to not have any disabled participants.
I also observed many accessibility issues. For example, I saw posters that were inconsiderate with respect to colour blindness. The place was so crowded that people who use assistive devices such as wheelchairs, white canes or service dogs wouldn't have had room to navigate the poster session. There were elevators, but for somebody with limited mobility, it would not have been easy to access all the session rooms, given the size of the venue. There were also no sign-language interpreters.
Since 2019, {Dis}Ability in AI has helped facilitate better accessibility at NeurIPS. There were interpreters, and closed captioning for people with hearing problems. There were volunteer escorts for people with impaired mobility or vision who requested help. There were hotline counsellors and silent rooms because large conferences can be overwhelming. The idea was: this is what we can provide now, but please reach out in case we are not considerate with respect to something, because we want to be ethical, fair, equal and honest. Disability is part of society, and it needs to be represented and included.
Many disabled researchers have shared their fears and concerns about the barriers they face in AI. Some have said that they wouldn't feel safe sharing details about their chronic illness, because if they did so, they might not get promoted, be treated equally, have the same opportunities as their peers, be given the same salary and so on. Other AI researchers who reached out to me had been bullied and felt that if they spoke up about their condition again, they could even lose their jobs.
People from marginalized groups need to be part of all the steps of the AI process. When disabled people are not included, the algorithms are trained without taking our community into account. If a sighted person closes their eyes, that does not make them understand what a blind person must deal with. We need to be part of these efforts.

Being kind is one way that non-disabled researchers can make the field more inclusive. Non-disabled people could invite disabled people to give talks or be visiting researchers or collaborators. They need to interact with our community at a fair and equal level.
William Agnew is a computer science PhD candidate at the University of Washington in Seattle. Sabine Weber is a scientific consultant at VDI/VDE Innovation + Technik in Erfurt, Germany. They are organizers of the advocacy organization Queer in AI.
Agnew: I helped to organize the first Queer in AI workshop for NeurIPS in 2018. Fundamentally, the AI field doesn't take diversity and inclusion seriously. Every step of the way, efforts in these areas are underfunded and underappreciated. The field often protects harassers.
Most people doing the work in Queer in AI are graduate students, including me. You can ask, "Why isn't it the senior professor? Why isn't it the vice-president of whatever?" The lack of senior members limits our operation and what we have the resources to advocate for.
The things we advocate for are happening from the bottom up. We are asking for gender-neutral toilets; putting pronouns on conference registration badges, speaker biographies and in surveys; opportunities to run our queer-AI experiences survey, to collect demographics, experiences of harm and exclusion, and the needs of the queer AI community; and we are opposing extractive data policies. We, as a bunch of queer people who are marginalized by our queerness and who are the most junior people in our field, must advocate from those positions.
In our surveys, queer people consistently name the lack of community, support and peer groups as their biggest issues that might prevent them from continuing a career path in AI. One of our programmes gives scholarships to help people apply to graduate school, to cover the fees for applications, standardized admissions tests, such as the Graduate Record Examination (GRE), and university transcripts. Some people must fly to a different country to take the GRE. It's a huge barrier, especially for queer people, who are less likely to have financial support from their families and who experience repressive legal environments. For instance, US state legislatures are passing anti-trans and anti-queer laws affecting our membership.
In large part because of my work with Queer in AI, I switched from being a roboticist to being an ethicist. How queer people's data are used, collected and misused is a big concern. Another concern is that machine learning is fundamentally about categorizing items and people and predicting outcomes on the basis of the past. These things are antithetical to the notion of queerness, where identity is fluid and often changes in important and big ways, and frequently throughout life. We push back and try to imagine machine-learning systems that don't repress queerness.
You might say: "These models don't represent queerness. We'll just fix them." But queer people have long been the targets of different forms of surveillance aimed at outing, controlling or suppressing us, and a model that understands queer people well can also surveil them better. We should avoid building technologies that entrench these harms, and work towards technologies that empower queer communities.
Weber: Previously, I worked as an engineer at a technology company. I said to my boss that I was the only person who was not a cisgender dude in the whole team of 60 or so developers. He replied, "You were the only person who applied for your job who had the qualification. It's so hard to find qualified people."
But companies clearly aren't looking very hard. To them it feels like: "We're sitting on high. Everybody comes to us and offers themselves." Instead, companies could recruit people at queer organizations and at feminist organizations. Every university has a women in science, technology, engineering and mathematics (STEM) group or a women in computing group that firms could easily go to.
But the thinking, "That's how we have always done it; don't rock the boat", is prevalent. It's frustrating. Actually, I really want to rock the boat, because the boat is stupid. It's such a disappointment to run up against these barriers.
Laura Montoya encourages those who, like herself, came to the field of artificial intelligence through a non-conventional route. Credit: Tim McMacken Jr (tim@accel.ai)
Executive director of the Accel.AI Institute and LatinX in AI in San Francisco, California.
In 2016, I started the Accel.AI Institute as an education company that helps under-represented or underserved people in AI. Now, it's a non-profit organization with the mission of driving AI for social-impact initiatives. I also co-founded the LatinX in AI programme, a professional body for people of Latin American background in the field. I'm first-generation in the United States, because my family emigrated from Colombia.
My background is in biology and physical science. I started my career as a software engineer, but conventional software engineering wasn't rewarding for me. That's when I found the world of machine learning, data science and AI. I investigated the best way to learn about AI and machine learning without going to graduate school. I've always been an alternative thinker.
I realized there was a need for alternative educational options for people like me, who don't take the typical route, who identify as women, who identify as people of colour, who want to pursue an alternative path for working with these tools and technologies.
Later on, while attending large AI and machine-learning conferences, I met others like myself, but we made up a small part of the population. I got together with these few friends to brainstorm, "How can we change this?" That's how LatinX in AI was born. Since 2018, we've launched research workshops at major conferences, and hosted our own call for papers in conjunction with NeurIPS.
We also have a three-month mentorship programme to address the brain drain resulting from researchers leaving Latin America for North America, Europe and Asia. More senior members of our community and even allies who are not LatinX can serve as mentors.
In 2022, we launched our supercomputer programme, because computational power is severely lacking in much of Latin America. For our pilot programme, to provide research access to high-performance computing resources at the Guadalajara campus of the Monterrey Institute of Technology in Mexico, the technology company NVIDIA, based in Santa Clara, California, donated a DGX A100 system (essentially a large server computer). The government agency for innovation in the Mexican state of Jalisco will host the system. Local researchers and students can share access to this hardware for research in AI and deep learning. We put out a global call for proposals for teams that include at least 50% Latinx members who want to use this hardware, without having to be enrolled at the institute or even be located in the Guadalajara region.
So far, eight teams have been selected to take part in the first cohort, working on projects that include autonomous-driving applications for Latin America and monitoring tools for animal conservation. Each team gets access to one graphics processing unit, or GPU (which is designed to handle complex graphics and visual-data processing tasks in parallel), for the period of time they request. This will be an opportunity for cross-collaboration, for researchers to come together to solve big problems and use the technology for good.