An Interview with Sennay Ghebreab, Founder and Director of the Civic AI Lab
Original interview can be found here
In this week’s blog post we discuss the development of AI for social good with Sennay Ghebreab, head of the Socially Intelligent Artificial Systems group at the University of Amsterdam and founder and scientific director of the Civic AI Lab.
Could you tell us a bit about your background and interests?
I studied Technical Information Systems at the University of Amsterdam in the early 90s. Subsequently, I did my PhD in computer vision and medical imaging, partially at Yale School of Medicine. The aim of my PhD work, already two decades ago, was to build artificial systems to help radiologists interpret medical images efficiently and effectively. After a few years as a postdoc on this topic, I became more interested not in the artificial system, but in the natural complex operating system of the brain. So I moved from building artificial systems to using them to study neural systems and human cognition. After a decade I switched my interest to social systems and questions like: How can we prevent AI systems from learning unwanted human behaviour such as social discrimination? How can we use AI systems to help us solve inequality, poverty, obesity, and so on? That is basically what I am doing now in my current position, and also with the Civic AI Lab.
So you have been busy with AI quite extensively, what is your opinion on AI in general?
AI has become a very powerful tool in the last two decades. It has created a paradigm shift, a systemic shift, in how we think about and use computers in all sectors of life: academia, industry, government and civil society. Especially big tech companies have been using AI to advance their agendas, which has led to a lot of great AI-based systems that allow people to navigate, communicate, and so on. But it has unfortunately also led to AI-based systems, for example for surveillance, that do more harm than good. The use of AI for social good is still in its infancy. So AI has both positive and negative impacts.
Do you think citizens of Amsterdam are aware of these impacts that AI can have?
Citizens in general are increasingly being exposed to or confronted with the power and impact of AI. So there is an increasing understanding of what is happening and what the impact is. But I also know that there is an increasing mistrust because of all the misuse of AI. So yes, there is an increasing understanding, which does not mean that there is an increasing engagement with AI, or people taking agency in AI, which is needed. But it is getting better.
As you have seen the development in the recent years from neural to social, what is the main reason to focus on AI in Society?
So, in a nutshell, the transformation was from a focus on the computational, to a focus on the neural, to a focus on the social, and back. The reason I switch focus and disciplines is that I believe the computational, the neural and the social are all interconnected. They are not stand-alone topics or perspectives. Our neural systems are shaped in and by social systems. The same holds for our computational systems. And now, increasingly, computational systems such as social media shape our social systems and influence our neural systems. That is why in the past I coined the term exo-cortex: AI that forms an external layer on top of our neo-cortex, connecting us with others and with things. So there is a connection in all of this, and that is why I love to travel from one focus to the other, from one discipline to the other, and come back again to try to make sense of all of this.
As founder and director of the Civic AI Lab, what does it stand for?
The Civic AI Lab is part of the national Innovation Center for AI, ICAI, which started a few years ago as an ecosystem of labs that are public-private and public-public collaborations between knowledge institutions and other organizations. ICAI started with labs such as QUVA Lab and AIRlab, collaborations between the University of Amsterdam and Qualcomm and Ahold Delhaize respectively. So at some point, the emergence of public-private ICAI labs led me to believe that we should also move towards public-public. So I started thinking about the Civic AI Lab, which is now a collaboration between the University of Amsterdam, the City of Amsterdam, the Free University of Amsterdam and the Ministry of Internal Affairs, aiming at increasing equality and equity in domains such as education, health and transportation through AI technology.
Why is this relevant for citizens today in our city especially?
Because every minute of our lives is impacted by AI systems. The phones we use, facial recognition, voice recognition, but also the services provided by hospitals, municipalities and financial companies: all involve AI.
And if these AI systems are not well designed and well built, and we do not take into account the values that we claim to appreciate as an organization, city or society, then we may end up with systems that undermine our values. To prevent this, and to create useful and beneficial AI systems, it is important that our citizens become more engaged with these AI systems. Citizens are essential for improving AI systems, for making them more fair and inclusive.
What are your thoughts about what happened at the VU with discrimination?
These are the type of things I have been warning about for more than a decade, and perhaps a decade ago people would say: "AI, algorithms, discriminating against people? How can machines discriminate?" Unfortunately, these questions are still being raised whenever an AI system discriminates against a group of people based on their color, gender, religion or something else. Awareness about the power and potential impact of AI systems has grown in the last decade, but there is still much to gain. One thing I keep emphasizing is that, while it is very unfortunate that AI discrimination occurs, it is also an opportunity to expose and address discrimination and inequality, as AI systems reflect and quantify the good, the bad and the ugly in society.
Apart from Civic AI Lab, are there other projects you work on with the city?
Apart from the Civic AI Lab, there is another project we recently started called CommuniCity, a project with the cities of Amsterdam, Helsinki and Porto and twelve other parties, universities and civil society organisations. The idea there is to bring technologies, including AI, closer to citizens; more precisely, to bring AI technologies closer to the needs of communities and to co-create AI with these communities, especially hard-to-reach and marginalized communities. CommuniCity goes beyond AI technology “for” people, the spirit in which the Civic AI Lab was conceptualized; it is a step towards AI technology “with” people. My long-term agenda is to move towards AI technology “by” people. If we invest sufficiently in AI education for all, then perhaps a decade from now, citizens from all walks of life may be able to design and develop AI technology by themselves, with all sorts of plug-and-play data, tools and other resources we can collect. AI technologies that help them empower their own position and well-being in society, and that of others.
Is this currently not possible?
At the moment there are not that many possibilities. Of course there are many apps to track your health and so on, but citizens as individuals are not able to add that much. As a collective, however, they can. Take sickle cell disease as an example, a disease that emerges mainly among the African American population, predominantly men. There is not that much knowledge about sickle cell because it affects only a small group of society. If people themselves can gather and collect data, and use that data together with scientists to see what might lead to cures or a better understanding, those are the kinds of things you need to do as a collective. The more and the better the data, the better the algorithms you can develop and the better the solutions you can create. The cycle then repeats, and this way we can bring people into it. People have a lot of questions: Can we trust our data to hospitals, universities and companies? Will these algorithms make the right distinctions? There is a lot of mistrust. How can we make sure people are the owners of their data? There are no mechanisms or tools to secure that yet, but people are thinking about it. If that is fixed, then we can go a step further, to understand health or poverty issues better.
All these different AI centers or labs, and data being shared through different partners, I can imagine you ran into some bottlenecks. How do you work through this?
There is no single way. There are top-down solutions coming from academia, government and companies, which should be combined with bottom-up scrutiny and improvement by citizens and communities. Organisations need to be held accountable for their use of data. Right now, lots of companies, hospitals and municipalities gather data, even for purposes for which the data was not intended, because it might help their business model. This is of course understandable from the perspective of organizations, but from the perspective of citizens it's not. So companies need to be transparent and explain which AI and algorithms they use, for whom, and how these impact citizens and communities. That is why you now see many impact assessments through which organizations take up their responsibility for how AI impacts citizens and communities: human rights impact assessments and privacy impact assessments, to mention a few. That is a good development, but much needs to be done. It needs to be combined with a better understanding and education of citizens. What is data, and what kinds of data do we have? How is this data processed, and why or how can it be useful for you? That is the bottom-up solution to get out of this cycle of mistrust.
Based in Lab42, how do you aim to impact Education?
I have been teaching about AI and aspects such as fairness and inequality for at least a decade now, and I have seen rapidly growing interest and investment in education on these topics. This is very good and needed. These topics, for example, are now embedded in the core curriculum of the AI and Data Science programs at the Informatics Institute of the University of Amsterdam. Whereas 10 years ago the focus was on the technical aspects of AI systems, now there is also attention for the ethical and fairness aspects. The tools developed by technical students can impact education, while at the same time students who have no technical background should also be educated in the basics of AI. What is algorithmic thinking, and how can it influence your life? So we do that in a very interdisciplinary setting, not only through formal education at the university, but also by designing education programs for professionals, such as health professionals and civil servants. This we will, for example, do in the ICAI ROBUST program, which was launched two weeks ago and which will bring the number of ICAI labs close to 50. But we also try to inform and educate the general population, for example through inzicht-in-ai.nl, an initiative of the Royal Netherlands Academy of Arts and Sciences (KNAW) which we curate.
So as we have spoken about a lot of different issues, what is your vision for the future regarding AI thinking about all these matters?
As I have said frequently, AI can be a very powerful tool to help advance society. If we do not engage citizens, then this powerful tool ends up in the hands of only a few, especially the big tech companies. Then you will see more powerhouses, increasing inequality, discrimination, and the list goes on. Look at Elon Musk emerging as head of Twitter: you immediately see the impact on the social dynamics. With one push of a button, the whole of Twitter changes its behaviour, influencing people in ways that are good or bad. So it is a powerful thing, which we cannot leave only in the hands of others. We, the people, should take things into our own hands, and build the AI we need to shape the society we want: democratic, sustainable, just and inclusive.