
Algorithmic Citizenship: Who Gets Counted in the Age of AI?

Updated: May 13

By Skyler Misbah Riley



When a machine decides whether you’re visible or legitimate—does it also decide if you belong?


That’s the question I keep circling as I research algorithmic governance, while living as a queer, neurodivergent expat in Belgium. Systems that once felt cold or distant—visa platforms, social welfare portals, student registries—now feel close, intimate even. They don’t just track my data; they seem to scan for my worth.


In my current PhD work-in-progress, I explore a concept I call algorithmic citizenship—how public sector AI shapes access to rights and resources, especially for structurally marginalised people like migrants and LGBTQI+ communities. In a world where algorithms help determine who gets housing, healthcare, or even asylum, citizenship becomes more than a legal status—it becomes a score.


But before I got to this research, I had to decode a few systems of my own.


Coming Into Visibility (And Learning What That Means)


I arrived in Belgium for my Erasmus Mundus Master’s during a chapter of internal transformation. I was navigating two languages, three academic cultures, and a long-overdue diagnosis: AuDHD—autistic + ADHD. I had always sensed I moved through systems differently, but no label had ever quite fit—until that one did.


At the same time, I was finding language for my queerness and slowly coming out, not just socially, but intellectually. My master’s thesis examined how foreign students in Belgium used digital tools to survive university life—what platforms they trusted, what barriers they hit, what “connection” really looked like when institutional tech didn’t fully see them. That thesis became more than research—it became personal.


Even then, I could feel how much the design of a digital system decided who got seen and who got left behind. I wanted to understand what that meant at scale—not just in universities, but in immigration offices, job markets, and city halls.


When the State Becomes a Platform


My PhD now zooms out from students to society. I study how algorithmic systems—used in EU migration control, social welfare, and predictive policing—don’t just automate services. They sort people. Often invisibly. Often unequally.


These technologies claim to be neutral. But they are shaped by the fears, biases, and blind spots of those who build and deploy them. Risk scores, predictive models, eligibility engines—they may seem technical. But they end up making moral decisions: Who is deserving? Who is visible? Who gets flagged, and who gets ignored?


This is why I call it algorithmic citizenship. Because in many public systems, it’s not your passport that counts—it’s your data trail. And that trail isn’t neutral either.


Design Justice, Queer Logic, and Other Ways Forward


Yet I’m not cynical. I believe that public tech can be redesigned—with care, participation, and justice at its core. Around Europe, I see activist groups, civic technologists, and critical scholars calling out opacity and exclusion. Some are building better tools. Others are building better questions.


For me, being a queer, AuDHD researcher means I carry my difference into my work not as a deficit, but as a lens. I see the gaps, because I’ve lived in them. And I know that what seems efficient to one user might be hostile to another. Especially if you’re undocumented, racialised, or simply not accounted for in the code.


What I Hope We Build


I’m not here just to critique systems. I want to imagine new ones. Ones where accountability is designed in, not patched on. Where digital inclusion isn’t a side project, but the starting point.


We are living in an era where algorithms shape who gets counted, who gets sorted, and who gets seen. If we don’t ask critical questions now—if we don’t build inclusive practices into the foundation—we risk building digital states as flawed as the systems they’re meant to replace.


So here’s my invitation: Let’s reimagine what it means to belong in the age of AI. Let’s build technologies that don’t just recognize us—but respect us.



Skyler Misbah Riley is a PhD candidate, communication strategist, and storyteller working at the intersection of AI ethics, algorithmic governance, and digital inclusion. A former television journalist in India and assistant editor in Dubai, Skyler brings a deep commitment to conscious research and reflective storytelling. He is an alum of the Erasmus Mundus Digital Communication Leadership (DCLead) program, where he completed his MA in policy and innovation at Vrije Universiteit Brussel. Now based in Belgium, Skyler explores how algorithmic systems shape identity, access, and citizenship—especially for those pushed to the margins. With a global lens and a strong ethical compass, he is passionate about building more just, inclusive, and human-centered digital futures.



