Editor’s Take: Why Centering Marginalized Voices in AI Is the Fight of Our Time

By Raji Mohanam, Editor, WAIV Magazine


Technology is not inherently progressive. This became even more apparent to me as I watched the recent news clips of billionaire technocrats meeting with oligarchs and dictators. AI, in its current form, is being built by the powerful for leverage, not for our liberation. Not for the many, but for the few who already hold the cards.


AI is not neutral. It's not naturally good. It's not even guaranteed to be a tool for progress. It is, like every major shift in history, simply a tool. Tools reflect the hands that build them.


We need to ask: who is building AI right now? Who is using it? Who is visible to it? Right now, the answer is skewed. Skewed toward those with power, capital, and compute. Skewed toward centralized control, surveillance, and erasure. If we don't shift the narrative, and quickly, we risk codifying centuries of exclusion and exploitation into the very fabric of our future.


That’s why platforms like ours matter. That’s why your voice matters.


Let’s imagine, for a moment, a different kind of future. A future where AI doesn’t flatten us into data points, but lifts us into deeper understanding. Where it isn’t built to surveil or suppress, but to heal, connect, and create. Where a young girl in Ghana, a midlife woman pivoting into tech, a nurse in Kerala, and a poet in the Bronx are not only seen by the algorithm—but help write it.


At WAIV Magazine, we believe in this promise of AI. But only if it is built on purpose. And only if it centers the voices that history has too often silenced. Too much of today’s AI is being shaped behind closed doors, by powerful entities with opaque agendas. It’s increasingly a technology of the already-entrenched: energy-hungry data centers, trillion-dollar investments, tools optimized for surveillance, labor devaluation, and institutional control. Not connection. Not equity. Not care.


When governments, tech titans, and shadowy investors use AI to manage populations instead of empowering them, we all lose something essential: our agency, our relationships, our humanity. These systems are not just changing how we search, write, and work; they are quietly reshaping the very fabric of how we relate to one another.


That’s why we must stay alert.


But here's the counterweight: discernment. We must ask better questions of the AI tools we invite into our lives. If 15 of every 20 reels we view are AI creations we can't distinguish from reality, is that helping us connect with one another as humans? Does the AI we consume right now reflect our values, or reinforce old hierarchies? Does it open access to knowledge, healthcare, and justice, or gatekeep it?


I don't believe AI is inherently dangerous, just unfinished. And that is our opportunity.


I see a future where AI can uplift human creativity, broaden access to healthcare, strengthen democracy, and bring us closer to truths we've long kept buried. But only if it is designed by and for everyone. Especially women. Especially the marginalized. Especially those whose intelligence—emotional, cultural, ancestral—has never been measured in silicon.


Ethical and inclusive AI is not a luxury. It is infrastructure for a livable world.

At WAIV, we champion technology that listens before it predicts. That collaborates instead of controls. That is as diverse and dynamic as the people it hopes to serve.

And the most radical thing we can do right now is to make sure those choices reflect all of us.


Copyright WAIV Magazine, 2025

WAIV Magazine was established as a platform to explore the work and ideas of women and other underrepresented groups who are redefining Artificial Intelligence. WAIV supports an industry-wide paradigm shift in AI development that puts ethics and gender equity at the center, ensuring these technologies serve all of humanity. Through free articles and our “Deep Dives” podcast episodes, we cover issues from data bias to ethical policies aimed at building a global community dedicated to equitable AI. 