Education

My involvement in education is driven by the responsibility I feel towards society to educate computer scientists with solid technical and academic skills, combined with awareness of the societal impact of digital technologies and the ability to take this impact into account in engineering decisions. My teaching is characterised by a well-thought-out organisation and course & study program design whose elements are aligned with each other and with the study objectives. This provides students with a safe learning context within which they can develop their skills and use their creativity. I enjoy learning together with and from my students. I obtained my University Teaching Qualification in 2010.

I have taught a range of topics across the fields of Computer Science (CS) and Artificial Intelligence (AI) since I started my PhD in 2002. For example, I have been a tutor or lecturer in multiagent systems, logic, programming and value-sensitive design. I have also held various leadership and coordinating roles in education: as assistant professor at TU Delft I have been coordinator of the master programme on Media & Knowledge Engineering, a member of the board of examiners, designer and coordinator of the projects study line of the CS bachelor, and a member of the curriculum committee.

On this page you can find more information about the courses I teach, the students I supervise, and my activities regarding teaching Responsible CS & AI.

Courses

More information about the study programs and courses I teach at the UT can be found on the UT website. Specifically, I am currently involved in the following courses:

  • Knowledge Representation & Reasoning: Technical Computer Science bachelor, Module 9 Data Science & AI: seeing through the hype (year 3, Q1); initiator and designer of the course and teacher of lectures on introduction to KRR, Representing & Reasoning about Actions, and Representing & Reasoning about Time
  • Artificial Intelligence & Cyber Security: Technical Computer Science bachelor, Module 6 Intelligent Interaction Design (year 2, Q2); coordinator of the course (20/21, 21/22) and teacher of lectures on introduction to AI, logic and search techniques
  • Human-Computer Interaction: Technical Computer Science bachelor, Module 6 (year 2, Q2); I teach a lecture on Value-sensitive design and the accompanying assignment for the project
  • Guest lectures in Affective Computing: Interaction Technology master, Q4; topics: Ethics and regulation of affective computing; Socially aware personal agents

As a PhD candidate (UU) and postdoc (LMU Munich) I have been a tutor for courses on logic, expert systems, and programming. As assistant professor at TU Delft I have taught courses and projects on Prolog and multiagent systems (bachelor), human-agent-robot teamwork (master), and a master seminar on Intimate Computing that I designed based on my research vision.

Student supervision

If you are a UT student looking for a supervisor for your thesis or project, you are very welcome to contact me. I have a continuously updated list of possible projects available, but I am also open to discussing other ideas, provided that they fall within my areas of interest and expertise.

Below you can find information about the bachelor and master students I supervise at the UT, and about current and former PhD candidates and postdocs. At TU Delft I have supervised several master students on agent systems, for example Dr. Sung-Shik Jongmans, who graduated cum laude on the development of a model checker for the GOAL agent programming language.

Bachelor at UT

Master at UT

PhD/postdoc

Current (PhD):

Current (Postdoc):

Former (PhD):

Teaching Responsible CS & AI

In 2017, as a member of the curriculum committee at TU Delft, I proposed Responsible Computer Science as the theme of the new bachelor programme. This was motivated by the increasingly important role that digital technology plays in our lives in many domains, such as health and wellbeing, smart cities, energy, art and culture, and robotics. Given this pervasiveness and impact of digital technology on our society, it is more important than ever that engineers understand this context and can translate that understanding into the development of digital technology that responds to the needs and opportunities of people and societies.

This marks a changing perspective on the field of computer science: a shift from seeing it as the objective study of computation and the design of computational systems to the realisation that computing is not value-free. The digital technologies we develop play a role in shaping who we are as human beings and our society as a whole. Computer scientists need to operate with an awareness that computing is not only a source of good, but can also cause great harm. This impact – positive or negative – is growing with advances in Data Science & AI, spurred on by the availability of big data from a plethora of devices and software systems, which makes it all the more essential to address.

I have come to see it as one of my most significant responsibilities as a teacher in CS & AI to integrate awareness of and ways of addressing Responsible CS & AI into our engineering programs.

Crosscutting approach

I advocate a crosscutting approach that incorporates Ethical, Legal and Societal Aspects (ELSA) of digital technologies and AI in the immediate technical context in which they emerge. As Miller (2006) suggests, technical issues are best understood in their social context, and the societal aspects of computing are best understood in the context of the underlying technical realisation. For example, understanding and addressing algorithmic bias requires both technical knowledge of the effects of the choice of datasets on machine learning models and an understanding of the societal phenomena of bias and discrimination themselves, as well as of their interplay with software systems such as surveillance technologies.
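As a purely illustrative aside, the technical half of that point can be made concrete in a few lines of code. The sketch below is not material from any of my courses; it uses hypothetical synthetic data and a toy threshold "learner" (the helpers make_examples and fit_threshold are invented for this illustration) to show how training the same simple model on two datasets that differ only in how well one group is represented can lead to different error rates across groups.

```python
# Illustrative sketch (hypothetical synthetic data, not course material):
# the same simple learner, trained on two differently composed datasets,
# can end up with different error rates for two groups.

import random

random.seed(0)

def make_examples(group, n):
    """Create (feature, label, group) triples. The feature predicts the label
    in both groups, but the 'true' boundary differs per group and labels are
    noisy -- a toy stand-in for distribution shift between groups."""
    boundary = 0.45 if group == "A" else 0.65
    out = []
    for _ in range(n):
        x = random.random()
        label = 1 if x + random.gauss(0, 0.1) > boundary else 0
        out.append((x, label, group))
    return out

def fit_threshold(train):
    """'Learn' the threshold that maximises accuracy on the training data --
    a stand-in for any model that simply fits whatever data it is given."""
    grid = [i / 100 for i in range(101)]
    return max(grid, key=lambda t: sum((x > t) == (y == 1) for x, y, _ in train))

def group_accuracy(test, threshold, group):
    pts = [(x, y) for x, y, g in test if g == group]
    return sum((x > threshold) == (y == 1) for x, y in pts) / len(pts)

# One training set balanced over groups, one that under-represents group B.
datasets = {
    "balanced": make_examples("A", 500) + make_examples("B", 500),
    "skewed":   make_examples("A", 950) + make_examples("B", 50),
}
test = make_examples("A", 2000) + make_examples("B", 2000)

for name, train in datasets.items():
    t = fit_threshold(train)
    print(f"{name:9s} threshold={t:.2f}  "
          f"accuracy A={group_accuracy(test, t, 'A'):.2f}  "
          f"accuracy B={group_accuracy(test, t, 'B'):.2f}")
# Typically the model trained on the skewed dataset does noticeably worse on
# group B, even though nothing changed except the composition of the data.
```

The societal half of the issue – why the groups differ in the first place, and what harm the resulting errors cause – is precisely what such code cannot show, which is why the technical and societal perspectives need to be taught together.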

This crosscutting approach has been echoed by several related initiatives. For example, a project at Harvard CS called Embedded EthiCS (2019) proposes to embed ethical aspects throughout the CS curriculum as an inherent part of the technical courses. Moreover, the project Ethics4EU (2019) develops best practices and learning resources for integrating ethical aspects into CS study programs across a wide range of CS topics. Mozilla has launched the Responsible Computer Science Challenge (2018), an initiative that funds projects on embedding ethics in undergraduate computer science education. The awarded projects have together created a Playbook on Teaching Responsible Computing that brings together their lessons learned. It suggests an "across the curriculum" model in which responsible computing concepts and habits are taught in conjunction with technical subjects, so as to create a more substantive, contextual, and task-oriented understanding. Many CS curricula also include a variety of dedicated digital ethics courses: Dr. Casey Fiesler has collected a database (2018) of over 300 such courses, and a series of lectures on global perspectives on AI ethics is available online.

My activities

Below I highlight several ways in which I currently address ELSA in my teaching (see also the section on courses). I aim to further develop and systematise these efforts in the coming years.

  • Teaching Responsible AI (TRAI) working group: in 2021 I initiated the national TRAI working group as part of the platform on Participative and Constructive Ethics (PACE), which is part of the Human-centric AI building block of the NL AIC. TRAI brings together stakeholders such as teachers, students, and organisations interested in developing materials, practices and methods for teaching and adopting Responsible AI, with the aim of exchanging expertise, community building and agenda setting.
  • Knowledge Representation & Reasoning (KRR): I teach KRR to 3rd-year bachelor students in CS as part of the module Data Science (DS) & AI: seeing through the hype. The module aims to teach a critical perspective on DS&AI and includes both technical courses on data-oriented and knowledge-based approaches to AI and courses on data quality, explainable AI and ethics. The KRR course comprises practical exercises in which students are asked to reflect on the usability and understandability of KRR techniques as well as their relation to machine learning approaches (intro video).
  • HI/SIKS course Responsible AI: in collaboration with the Hybrid Intelligence project, Prof. Rineke Verbrugge and I organised the first SIKS PhD course on Responsible AI in June 2022.
  • Value-sensitive design (VSD): I teach VSD to 2nd year bachelor students in CS as part of a course on Human-Computer Interaction within the module Intelligent Interaction Design. The module also includes a course on AI, which facilitates bridging of human-centred and technical AI aspects.
  • Ethics and Regulation of Emotion Recognition technologies: I have given a guest lecture on this topic within the course Affective Computing (Interaction Technology master). While emotion recognition technologies have potential benefits, for example making human-machine interaction more natural, they also carry many risks, related for example to misclassification of emotions, personal freedom, and discrimination. I discussed these issues using various examples of existing emotion recognition systems and by highlighting relevant parts of the proposed EU regulation on AI.
  • Ethics Committee pre-check team: I was a member of the Human-Media Interaction ethics pre-check team (20/21, 21/22), which checks and provides feedback on ethics requests – primarily for user studies by CREATE bachelor and I-Tech master students – before they are formally submitted to the Ethics Committee (EC). In this way, ethics requests are of higher quality when submitted to the EC and can be processed more efficiently.

These activities are aligned with my research, in which ethical aspects of digital technologies have taken on an increasingly central role over the past decade. In research projects that started around 2011/2012 we developed computational models for aligning interpersonal data and information sharing via digital technologies – such as mobile location sharing and social sensing – with the norms and values of users, an approach I have dubbed Responsible Data Sharing. In my Vidi project (2014) we are investigating how behaviour support technologies can take people's norms and values into account, and in the Hybrid Intelligence project (2021) we develop conversational human-machine alignment models. I have also taken on several leadership roles addressing aspects of responsibility, in particular as (former) coordinator of the Responsible Hybrid Intelligence (HI) research line within the corresponding gravitation project, and as co-chair of the committee tasked with setting up collaboration between the HI project and the gravitation project on Ethics of Socially Disruptive Technologies. Among other things, we have initiated a project-wide discussion to identify what it means to do research on HI in a responsible way.