

How can AI systems boost common sense and collective wisdom? How can AI systems advance society in a fair and inclusive manner? How can AI systems be aligned with public values, goals, and behaviors, in a manner that respects the social, cultural, economic, and political diversity between people and communities? The SIAS group addresses these questions through fundamental and applied research, focusing on socio-technical solutions for / with / by people and communities from all walks of life. The group has three principal research lines.

Prosocial AI

Meeting today’s major societal challenges requires understanding the dynamics of cooperation and conflict in complex systems. Artificial Intelligence (AI) is intimately connected with these challenges, both as an application domain and as a source of new computational techniques: on the one hand, AI suggests new algorithmic recommendations and interaction paradigms, offering novel possibilities to engineer cooperation and alleviate conflict; on the other hand, new learning algorithms provide improved techniques to simulate sophisticated agents and increasingly realistic systems. In this research line, we focus on the interface between complex systems and AI: we develop computational methods to understand strategic dynamics in multiagent systems and aim at designing AI that contributes to stabilizing prosocial behavior and fairness. In this domain, we have been exploring 1) how social norms and reputation systems impact the dynamics of cooperation, fairness, and reciprocity; 2) the stability of cooperation within populations of reinforcement learners; 3) the impact of algorithmic recommendations on the dynamics of cooperation, parochialism, and polarization; 4) the dynamics resulting from individuals strategically adapting to algorithms; and 5) the design of fair transportation networks and their impact on urban mobility patterns. Part of the research in this line is done in public-public collaborations such as the Civic AI Lab. Scientific lead is Dr. Fernando Santos.
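As a minimal, hypothetical illustration of the kind of strategic dynamics studied in this line (not one of the group's actual models), the sketch below evolves the fraction of cooperators in a donation game under replicator dynamics; all parameter values, including the norm-enforcement fine imposed on defectors, are invented for illustration.

```python
# Illustrative sketch: replicator dynamics of cooperation in a donation game.
# A cooperator pays a cost c to confer a benefit b on its co-player.
# All parameters (b, c, the fine) are hypothetical choices for illustration.

def replicator_trajectory(x0, b, c, fine=0.0, steps=2000, dt=0.01):
    """Evolve the fraction x of cooperators under replicator dynamics.

    `fine` models a norm-enforcement penalty imposed on defectors,
    e.g. via reputation-based sanctioning.
    """
    x = x0
    for _ in range(steps):
        f_coop = x * (b - c) + (1 - x) * (-c)   # expected payoff of a cooperator
        f_defect = x * b - fine                 # expected payoff of a defector
        f_avg = x * f_coop + (1 - x) * f_defect
        x += dt * x * (f_coop - f_avg)          # replicator update (Euler step)
        x = min(max(x, 0.0), 1.0)               # keep the fraction in [0, 1]
    return x

# Without norm enforcement, defection takes over; with a fine exceeding the
# cost of cooperation, cooperation is stabilized instead.
baseline = replicator_trajectory(x0=0.5, b=3.0, c=1.0, fine=0.0)
with_norm = replicator_trajectory(x0=0.5, b=3.0, c=1.0, fine=2.0)
```

Here the cooperator-defector payoff gap works out to `fine - c`, so the toy model flips from full defection to full cooperation once the sanction outweighs the cost of cooperating.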


Responsible AI

Day by day, contemporary AI technologies reveal new possibilities for application. Yet the distributive effects that these innovations support carry the risk of reinforcing, exacerbating, or even creating problematic processes in societies, thus undermining social sustainability. At the same time, computational means may also furnish more responsive instruments to recognize these issues, enabling more adequate societal responses. Our research on Responsible AI stems from the realization that what eventually determines the meaning of algorithmic decision-making lies not only in the program artefact, nor (if applicable) in the data on which it has been trained, but in the preparatory and consequent practices of the social environment in which the artificial device is embedded. We have therefore been investigating: 1) how values and power structures are mapped in and through computational means; 2) entanglements between dimensions such as legality, legitimacy, and instrumental possibility; 3) inclusive co-design practices and methods, focusing on marginalized communities; 4) (computational) regulatory mechanisms for AI and data governance, building upon normative systems design and agent-oriented programming; and 5) cognitively inspired computational models of bias, fairness, and responsibility. Part of the research in this line is done in public-civic partnerships such as the CommuniCity project. Scientific lead is Dr. Giovanni Sileno.

Explainable AI

From finance and healthcare to the life sciences and the public sector, many decisions impacting our lives are supported by machine learning models. Yet the more complex the models, the more difficult it becomes to explain the rationale behind their predictions. This, in turn, not only undermines the applicability of efficient algorithms to high-stakes decisions, but also makes it more difficult to control the behaviour of these algorithms to avoid socially undesirable outcomes, e.g., to make them fairer for vulnerable and underrepresented groups in our society.

In SIAS, we investigate novel explainable AI methodologies that integrate machine learning models with symbolic structures such as logic and causality, from a human-centered perspective. In particular, in the domain of finance, we collaborate closely with industry within the scope of the new AI4Fintech initiative at the University of Amsterdam. Part of the research in this line is done in public-private collaborations. Scientific lead is Dr. Erman Acar.
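As a minimal, hypothetical illustration of post-hoc explainability (not the group's actual methodology), the sketch below approximates an opaque scoring function with a single interpretable threshold rule and measures how faithfully the rule mimics it; the model, features, and data are invented for illustration.

```python
# Illustrative sketch: a global surrogate explanation. We mimic an opaque
# model with a one-feature threshold rule ("decision stump") and report
# fidelity, i.e. how often the simple rule agrees with the opaque model.
# The credit-scoring model and its features are hypothetical.

def black_box(income, debt):
    # Stand-in for an opaque scoring model we cannot inspect directly.
    return 1 if income - 2 * debt > 10 else 0

def fit_stump(samples, labels):
    """Find the (feature, threshold) rule that best mimics the labels."""
    best = None
    n_features = len(samples[0])
    for f in range(n_features):
        for t in sorted({s[f] for s in samples}):
            preds = [1 if s[f] > t else 0 for s in samples]
            fidelity = sum(p == y for p, y in zip(preds, labels)) / len(labels)
            if best is None or fidelity > best[2]:
                best = (f, t, fidelity)
    return best

# Probe the opaque model on a grid of inputs, then fit the surrogate.
samples = [(income, debt) for income in range(0, 60, 5)
           for debt in range(0, 30, 5)]
labels = [black_box(i, d) for i, d in samples]
feature, threshold, fidelity = fit_stump(samples, labels)
```

The surrogate recovers income as the most informative feature, but its fidelity stays well below 1.0 because the opaque model also depends on debt: the gap itself signals how much of the model's behaviour the simple explanation leaves out.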
