The MACHINE Doesn't Know You: Ethical AI, gender bias, and the unwritten future of work
By Michelle Pontes
In nearly every country, a version of the same thing is happening. Technology is being introduced, adopted, scaled, and embedded into the rhythms of work, care, hiring, health, education, and credit. It is described as a leap forward, a necessary change, a sign of progress. But what happens, again and again, is this: those most affected are invited in last, if at all.
Artificial intelligence is rarely framed as a system of expectation, but that is what it becomes. A system learns what it’s given, adapts to what it’s rewarded for, and privileges the patterns it already recognises. When the people designing those systems share similar backgrounds, similar assumptions, similar networks and incentives, what emerges is less a tool for transformation and more an echo of the familiar.
In 2025, women make up just under a third (29.4%) of the global AI workforce. The figure has risen, slowly, over the last seven years, but remains lowest in technical and leadership roles. In research, in governance, in the early design stages where systems take on shape and weight, their presence is still treated as supplementary. In the United States, 37% of women now report using generative AI tools, compared to 50% of men. In Australia, the figures are 50% and 70% respectively. In Denmark, the gap is twenty points. The tools are not behind glass; they're available. But availability is not the same as invitation. Usage reflects access, context, time, confidence, and the freedom to explore something without immediately needing to justify it.
This gap shapes everything. It determines who gets to experiment, who sees themselves as a possible user, who builds fluency in the tools being framed as essential. Where digital skills training exists, it often targets entrepreneurs, not informal workers. Where adoption is encouraged, it is often framed in terms of business efficiency, not daily relevance. In contexts like these, underuse is not a sign of indifference. It is the outcome of design choices: who was imagined at the start, whose time was considered flexible, whose learning curve was deemed worth supporting. Yet the women most vulnerable to these changes are rarely included in discussions about the tools being introduced to replace them.
People with disabilities encounter similar erasure. More than a billion people worldwide live with some form of disability, but most AI systems are built around standardised expectations of voice, of movement, of interface, of speech. Accessibility is often retrofitted rather than embedded, treated as an obligation rather than a principle. Tools that might otherwise expand access end up creating new exclusions: voice recognition systems that can't process variation, facial recognition that misidentifies or ignores atypical expressions, platforms that are incompatible with screen readers or keyboard navigation. These are not accidents. They are the result of processes where the right people were absent at the moments that mattered.
Older adults are missing from the room in a different way. They are not ignored so much as presumed irrelevant. In Europe, generative AI use among people over 65 remains between 6% and 14%. In the United States, it stands at 18% for those over 50. These are people with decades of professional experience, deep institutional knowledge, and nuanced social insight. But in AI training programmes and industry pilots, their concerns are rarely addressed, their pace dismissed, their patterns of learning treated as a challenge to be resolved rather than a strength to be integrated.
This narrowing of voice and vision carries consequences. When systems are introduced without community authorship, they behave in ways that reflect power, not fairness. Algorithms trained on past hiring data reject CVs with gaps. Models trained on past loan decisions reproduce those decisions, denials included. Tools trained on English misread requests made in other dialects. The system optimises for what it has seen, and in doing so, makes less room for what it hasn't.
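What that optimisation looks like can be shown in a few lines of code. The sketch below is purely illustrative: the data is synthetic, the feature names are invented for the example, and no real hiring system is this simple. It trains a standard classifier on simulated past decisions that penalised employment gaps, then shows that the trained model inherits the penalty.

```python
# A minimal, hypothetical sketch of how a model trained on past hiring
# decisions inherits the bias baked into those decisions. All data here
# is synthetic; "skill" and "career_gap" are illustrative features,
# not taken from any real system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

skill = rng.normal(0, 1, n)         # a genuine qualification signal
career_gap = rng.integers(0, 2, n)  # 1 = the CV shows an employment gap

# Simulated historical decisions: reviewers rewarded skill, but also
# systematically penalised gaps -- the bias we want to surface.
hired = ((1.5 * skill - 2.0 * career_gap + rng.normal(0, 1, n)) > 0).astype(int)

model = LogisticRegression().fit(np.column_stack([skill, career_gap]), hired)
print("skill weight:      %+.2f" % model.coef_[0][0])
print("career-gap weight: %+.2f" % model.coef_[0][1])
# The gap weight comes out strongly negative: the model has "learned"
# the historical penalty and will apply it to every future CV, even
# though the gap says nothing about ability.
```

The point is not the model but the mechanism: nothing in the code told the system that gaps were disqualifying. It found the pattern in the history it was given, and will now enforce it.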
In the workplace, this plays out most starkly. Across high-income economies, women are more than twice as likely as men to hold jobs at high risk of automation. In some sectors, the risk is closer to threefold. These are not obsolete roles. They are roles made precarious by their structure: task-oriented, undervalued, and easiest to replace. The International Labour Organization estimates that 9.6% of female-held jobs in developed countries are at high risk of automation, compared with 3.5% of male-held jobs. In the Pacific Islands, the World Bank estimates that improving gender parity in employment could increase GDP by up to 30%. But policy is slow, and platforms are faster.
The answer is not to ban the tools or retreat from the work. It is to design with greater precision, and with a deeper understanding of who has not been allowed to participate. Co-design begins with a change in posture. It asks people to shape the question instead of commenting on a finished product. It treats lived experience as data. It recognises that context is not noise. It does not seek consensus at the cost of clarity; it works across different speeds and allows for contradiction. It values the person who has spent ten years in a system as much as the person who has studied it for two, and crucially, it does not rush. When equity is treated as an add-on, it fails. When it is embedded, it changes the outcome. But to do that, it needs more than metrics; it needs relationships.

There are examples already, often unfunded, often unreported. Girls in Kigali learning how to build chatbots in Kinyarwanda. Feminist design groups in São Paulo developing participatory machine learning curricula. Disabled designers in Delhi collaborating with coders to reshape interface logic. These aren’t solutions in the commercial sense. They are interventions in who gets to write the next version of the system.
The risk isn't that AI fails; it's that it succeeds on terms that were never built to include most of us. If the future is being automated, then the future must also be co-authored, and that means asking different questions, in different rooms, with people who have not yet been permitted to lead. The pattern remains the same, yes, but it doesn't have to.
Michelle Pontes is a concept designer, strategist, and storyteller, and the founder of Folio.5, a studio built on the belief that creativity should come with context. She's spent nearly 20 years in sales, business development, and then marketing, across sectors like energy, health, education, and local government. Now, she works at the intersection of visual storytelling, AI, and brand strategy, helping people and organizations use creative technology with more purpose.
Michelle is also the creator of Human-Driven AI (HDAI), a standard that puts people at the centre of human-led, AI-enabled creativity. It shapes everything she does, from visual storytelling to training sessions to the way she approaches concept development itself.
When she's not designing or consulting, she's usually experimenting with Midjourney, collecting records, raising her daughters, or helping others learn to use AI in ways that feel genuinely theirs.