
Rethinking Question Zero in AI

This is a polished version of my keynote talk at AI Policy Lab Day in Umeå, Sweden. Recordings of the talk can be found here. The slides can be downloaded through the link below.




Introduction


In this talk, I will focus on something that increasingly bothers me: how institutions bend and embed critical questions in AI - such as Question Zero - to fit their own interests and to maintain the status quo. I will take one Dutch case as a running example, and I will weave together different lines of experience that have shaped and sharpened how I understand developments in AI research, policy, and practice.


I will slow cook my message and foreground my own lived experiences - both personal and professional - not to center myself, but to center experiences that might help others sharpen their lenses and perspectives too. Part 1 of my talk focuses on the lived-experience–research nexus, part 2 on the lived-experience–policy link, and part 3 on the research–policy nexus. In the final part, I will explain why I believe we should rethink Question Zero in AI.


 So let me begin with Part 1.


Lived Experience - Research Nexus


I was born in Ethiopia during the war between Ethiopia and Eritrea. When I was three, my parents moved back to Eritrea. Three years later, when the Ethiopian army discovered my parents were part of the Eritrean resistance, we had to flee. This is me at age five on the back of an Eritrean freedom fighter. Traveling at night - on foot, by camel, and when possible by car or truck - we escaped Eritrea, stayed in Sudan for a while, then Italy, and finally arrived in the Netherlands as political refugees. This picture shows me and my sister a year after our arrival.


For most of my childhood, I wanted to study medicine. But at the last moment, I decided to pursue computer science instead - and compensated for that impulsive change by focusing my studies and master’s thesis on medical informatics. Here I am defending my thesis, which led to my first publications on modeling objects and object relations using geometric algebra, presented at the Visual Information Systems Conference 1997.


Immediately after completing my master’s, I started a PhD in computer vision and medical imaging. My initial focus was intelligent, interactive image segmentation using geometric deformable models such as snakes, which were state of the art in the mid-1990s. But during my PhD, my topic shifted to learning statistical deformable models from examples, and to learning from relevance feedback in medical image retrieval. I experienced firsthand the transition in computer vision from constructing geometric models to statistically learning models from data.


This was also the period when image and object recognition competitions emerged, where research groups worldwide tested their models on shared datasets - such as the first fingerprint verification competition in the year 2000.


Such competitions became popular in other fields as well, including cognitive neuroscience, which I moved into a few years after my PhD. I became interested in the idea that if we are building artificial systems that can learn, perhaps we should learn from how the brain learns. I participated in the first brain-reading competition in 2006, which brought together the cognitive neuroscience, neuroimaging, and machine learning communities.


The results were impressive and attracted significant academic and public attention. The correlations between predicted features and ground-truth annotations exceeded expectations, and our method could even accurately predict individual faces of figures such as actor Tim Allen in the sitcom Home Improvement based on neuroimaging scans.


Nature Neuroscience published this editorial about the competition in 2007. A few months later, all participants of the competition - including myself - received this email from a distressed person in India, claiming that advanced technologies were being used to read his brain and thoughts. No one took the concerns seriously; some dismissed the person as “nuts,” arguing that neuroimaging technology was still too immature to be used in practice.


But then a New York Times article was published about a woman in India convicted of murdering her husband based on fMRI scans that supposedly revealed her “knowledge” of the crime scene. That was the first moment I critically reflected on the technologies we were building.


During that time, I was also researching computational and neural face spaces and face recognition. I found existing face datasets limited in representation, so I curated a new dataset of almost 20,000 faces that a colleague had scraped from the internet.


While doing this research, I encountered an automatic rotating door that would not open for me, although it worked for all my white colleagues. This became a second moment of critical reflection: if we equip everyday products with “smart” sensing technologies, we risk discriminating against and excluding many people.


These moments of reflection and realization motivated me to focus on mechanisms of bias - computational, neural, and social. But at that time, these topics were not seen as viable research directions. So in 2010, I established an education lab, the Beta Lab, to teach students about these issues.


Around 2015, as AI and social bias gained more attention, I started planting seeds for a research lab - now known as the Civic AI Lab - not only to address AI as an amplifier of social bias but also as a means to uncover and tackle discrimination and increase equal opportunities.


The first case I used to develop, promote, and find funding for the Civic AI Lab was the relocation algorithm developed by the Immigration Policy Lab, a collaboration between Stanford University and ETH Zürich. I will return to this shortly.


First, Part 2.


Lived Experience - Policy Nexus


Being a refugee often means being part of a community. In 1979, there were an estimated 50 Eritrean refugees in the Netherlands. But the community grew year by year.


There were several peak moments when refugee arrivals increased sharply. By 2010, there were an estimated 5,000 Eritreans in the Netherlands. Then came the refugee crisis. Within ten years - between 2010 and 2020 - the community grew by 20,000 refugees, bringing significant social and political challenges.


As an active member of the Eritrean community, I helped establish organizations including the foundation BOOST for Refugees, where I had the honor of serving as chair for many years. BOOST offered a physical space where refugees could land softly, be themselves, learn the language, and learn to ride a bicycle - away from institutional formalities and complexities.


I also started engaging intensively with policymakers because it was clear that a huge gap existed between refugee reception and integration policies and the lived experiences of refugees. Many of us felt the urgency to fill that gap with real knowledge.


However, the more I engaged with these policies, the more I realized that they were - to some extent - designed to complicate rather than support. This is why we established Foundation Civic: to improve Dutch integration policies, which over the past 30 years have been revised more than 20 times, becoming increasingly restrictive and harsh, pushing refugees further into financial debt and mental distress.


We engaged with many Dutch ministries, local governments, and civil society organizations. Over time, I grew increasingly pessimistic about institutions’ willingness and ability to protect and empower refugees. At the same time, I grew more optimistic about the possibilities offered by AI systems.


For example, the relocation algorithm developed by the Immigration Policy Lab (IPL). This brings me to Part 3, where I will zoom in on that algorithm.


Research - Policy Nexus



COA, the Central Agency for the Reception of Asylum Seekers, is the Dutch government agency responsible for receiving refugees. It provides accommodation, guidance for integration or return, and ensures access to basic necessities. It works closely with the Dutch Immigration and Naturalisation Service.


The government assigns municipalities housing quotas. COA conducts screening interviews to assess personal, educational, and medical factors. Status holders are then placed in regions believed to best support their future integration, while municipalities coordinate with COA to provide suitable housing.


In practice, however, COA faces many challenges, including staffing problems, expertise gaps, the national housing shortage, political pressure, and a polarized public debate often amplified by the media - all influenced by broader geopolitical developments.


The result is an ad-hoc relocation approach driven more by available capacity than by needs. This leads to repeated relocations, reliance on emergency housing, and limited use of refugees’ backgrounds, talents, and needs - leaving many of those needs unmet.


This is why I decided to engage more with multiple levels of government to explore whether IPL’s relocation algorithm could offer a win-win solution. I spoke about it with ministers, mayors, and civil servants.


Because, at least in theory, the algorithm increases refugees’ employability in the US and Switzerland by up to 70% - supporting both refugees entering the labor market and municipalities filling job vacancies. A potential win-win.
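

To make the underlying idea more concrete, here is a minimal sketch of the predict-then-match logic that such relocation algorithms are generally described as following: estimate, for each refugee–location pair, the probability of finding employment, and then assign refugees to locations under capacity constraints so that the total expected employment is maximized. The names, numbers, and scores below are purely hypothetical illustrations, not IPL’s or COA’s actual data or implementation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical predicted employment probabilities: rows = refugees, columns = locations.
# In a real system these would come from a model trained on historical integration outcomes.
scores = np.array([
    [0.62, 0.35, 0.48],   # refugee A
    [0.20, 0.55, 0.30],   # refugee B
    [0.45, 0.40, 0.70],   # refugee C
    [0.50, 0.25, 0.65],   # refugee D
])
capacities = [2, 1, 1]  # hypothetical number of available places per location

# Expand each location into one column per available place, so that the
# capacity-constrained problem becomes a standard one-to-one assignment.
col_to_location = [loc for loc, cap in enumerate(capacities) for _ in range(cap)]
expanded = scores[:, col_to_location]

# Maximize total expected employment (linear_sum_assignment minimizes cost, hence the minus sign).
rows, cols = linear_sum_assignment(-expanded)

for r, c in zip(rows, cols):
    loc = col_to_location[c]
    print(f"refugee {r} -> location {loc} (expected employment probability {scores[r, loc]:.2f})")
```

Note that optimizing for more than employability - as I argue for below - would mainly mean changing how the score matrix is constructed, not the matching step itself.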


It turned out that COA was already exploring the use of AI and raising thoughtful questions: Why use algorithmic solutions at all? Is human judgment sufficient? Who holds responsibility for relocation outcomes? How can refugee data be kept private? These reflections align closely with data protection and AI assessment principles. I was surprised and charmed by COA’s cautious approach.


So I suggested fostering research–policy collaboration across disciplines, developing algorithmic tools locally, and focusing on long-term goals rather than short political cycles. The approach should optimize for more than employability - centering wellbeing, education, and social factors, and actively involving refugees themselves.


However, there was no follow-up. COA launched a pilot independently. Although I was asked to join the advisory board, I was never consulted. This felt concerning at the time.


Apparently, others felt the same. A recent, soon-to-be-published investigation by Follow the Money (FTM) and Utrecht University researchers - based on data obtained through Freedom of Information requests and on analysis of IPL’s technical documentation, COA’s project documents, Berenschot’s DP/AI impact assessment, and Deloitte’s audit report - revealed significant findings.


The analysis revealed distinct institutional logics - bureaucratic (COA), technical (IPL), and managerial/auditing (Deloitte). More importantly, refugee data appears to have been shared with IPL, the optimizations favored the economic interests of places rather than the needs of people, and women and poorer refugees faced discrimination. One could argue that the algorithm was bent to serve institutions, not individuals.


I am happy with FTM’s investigation, as it yet again confirms what is often denied and hidden: namely, that AI systems - and their assessments - are routinely bent and embedded to serve institutions and markets rather than people, communities, and their wellbeing. This occurs structurally and systematically, and it demands that we be not only critical but also transformative.


This brings me to the final part of my talk.


Rethinking Question Zero


There is a lot of focus on seemingly opposing perspectives in AI research, policy, and practice: technology versus society, responsibility versus accountability, and so on.


This focus on seemingly opposing perspectives distracts from deeper tensions - or gaps, if you will: global versus local orientation, bottom-up versus top-down governance, short-term versus long-term thinking, disciplinarity versus interdisciplinarity. This is by no means a complete list.


But what deserves the most serious attention is the perpetual relationship between these tensions and gaps. They are linked, reinforce each other, and maintain the status quo.


The underlying problem is that we live in a complex world shaped by conflicting values, unequal distributions of wealth, knowledge, and resources, and oppressive structures and power systems. These form centripetal and centrifugal forces, pushing people and institutions toward or away from the center. Does that mean we cannot do anything? No. On the contrary, we can work toward a more conscious world.


A world where differences in values and behaviors are continuously learned, valued, and embraced; where wealth, knowledge, and resources are distributed more fairly; and where power systems are challenged and dismantled when they oppress rather than serve people and communities.


This idea of a conscious society was already advocated by Paulo Freire decades ago. Freire was a Brazilian thinker, educator, and changemaker devoted to developing critical consciousness. His seminal book Pedagogy of the Oppressed is the third most cited work in the social sciences - showing that his ideas are more relevant than ever, not only as theory but as practical methodology for empowering marginalized communities.


Freire’s cycle of oppression describes how socialization happens: we are born into societies with predefined assumptions, roles, and structures, which we learn through family, through institutions such as schools, and through cultural practices and laws. Societies that revolve around values such as fear, ignorance, and competition, and that are aimed at maintaining the status quo.


AI seems to be undergoing the same cycle of socialization as humans. Born about two decades ago - at least in the form that we have today - it has passed through early phases of institutional and cultural socialization and enforcement. Now, AI appears to be entering a second round of socialization - led mainly by Big Tech and influenced by the broader AI community. Without collective action, this cycle will likely continue and reinforce existing inequalities and power structures.


But there is a way out: the cycle of liberation. It shows how oppressive systems can be undone by identifying and transforming recurring social patterns. It is about empowering the self, building communities through outreach, organizing, and creating critical transformations that lead to sustainable changes in structures, assumptions, roles, and rules. This cycle requires continuous reflection, empathy, learning, and co-creation across individuals, communities, and society.


AI has become part of the cycle of oppression - but it can also be part of the cycle of liberation. In fact, AI can become a powerful instrument for empowering people and communities, helping sustain and reinforce the cycle of liberation. But this requires rethinking how we design, develop, deploy, and assess AI systems.


This is why we focus on socially intelligent AI systems. Systems that enable the transition from the cycle of oppression to the cycle of liberation - centering an Ethics of Life rather than an Ethics of the Market. Such systems should focus on values such as equity, dignity, integrity, and solidarity, and allow for continuous learning, exploration, and co-creation that respects both universality and local social values. Their goal is to improve human–human and human–society alignment, treating machine alignment as a by-product, not an end goal. This is where current AI safety research falls short and where new research is needed.


Against this background, it becomes important to rethink “question zero.” We should approach it from the perspective of transitioning from cycles of oppression to cycles of liberation. Instead of asking whether we should use AI, perhaps the better question is: How can we use AI to empower people and communities to support their movement from oppression toward liberation? Approaching it this way, I believe, reduces the risk that question zero will be misused or repurposed - intentionally or unintentionally - to legitimize or reinforce harmful practices, or to conceal or introduce new ones.


What this means in practice - and how it translates into assessment - is what I would like to leave you with. It may be helpful to place greater emphasis on, for example, centering lived experiences, building socio-technical systems, revealing hidden socio-technical patterns, and enabling socio-technical transformations.


I would like to end with this quote from Paulo Freire: “The oppressed, instead of striving for liberation, tend themselves to become oppressors.” It suggests that there has always been a back and forth between systems of oppression and liberation. This has also been my personal experience: the freedom fighters - the liberators - of then have become the dictators, the oppressors, of now. In this back and forth, there is no starting point, no question zero. Perhaps a question infinity.

 
 
 
