My involvement in education is driven by the responsibility I feel towards society to educate computer scientists with solid technical and academic skills, combined with awareness of the societal impact of digital technologies and the ability to take this impact into account in engineering decisions. My teaching is characterised by a well-thought-out organisation and by course and study programme design whose elements are aligned with each other and with the study objectives. This provides students with a safe learning context within which they can develop their skills and use their creativity. I enjoy learning together with and from my students. I obtained my University Teaching Qualification in 2010.
I have taught a range of topics across the fields of Computer Science (CS) and Artificial Intelligence (AI) since I started my PhD in 2002. For example, I have been a tutor or lecturer in multiagent systems, logic, programming and value-sensitive design. I have also held various leadership and coordinating roles in education. For example, as assistant professor at TU Delft I have been coordinator of the master programme on Media & Knowledge Engineering, a member of the board of examiners, a designer and coordinator of the projects study line of the CS bachelor, and a member of the curriculum committee.
More information about the study programs and courses I teach at the UT can be found at the UT website. Specifically, I am currently involved in the following courses:
- Artificial Intelligence & Cyber Security: Technical Computer Science bachelor, Module 6 (year 2, Q2); coordinator of the course and lecturer for the introduction to AI, logic, and search techniques
- Human-Computer Interaction: Technical Computer Science bachelor, Module 6 (year 2, Q2); I teach a lecture on Value-sensitive design and the accompanying assignment for the project
- Guest lectures in Affective Computing: Interaction Technology master, Q4; topics: Ethics and regulation of affective computing; Socially-aware personal agents
As a PhD candidate (UU) and postdoc (LMU Munich) I have been a tutor of courses on logic, expert systems, and programming. As assistant professor at TU Delft I have taught courses and projects on Prolog and multiagent systems (bachelor), human-agent robot teamwork (master) and a master seminar on Intimate Computing that I designed based on my research vision.
If you are a UT student and you are looking for a supervisor for your thesis or project, you are very welcome to contact me.
Below you can find information about the bachelor and master students I supervise at the UT, and about current and former PhD students. At TU Delft I have supervised several master students on agent systems, for example Dr. Sung-Shik Jongmans, who graduated cum laude on the development of a model checker for the GOAL agent programming language.
Bachelor at UT
- David Lütke-Sunderhaus, Using process mining to classify habits through the analysis of daily activities (Business Information Technology, 2021)
- Teun Metz, An Exploratory Diary Study On Modelling Morning Routines (Business Information Technology, 2021)
- Emma Schipper, Value Conflicts between Sustainable and Alternative Behaviour in Daily Routines (Business Information Technology, 2021)
Master at UT
- Luuk van Kessel, Value-sensitive Design of a dashboard for monitoring employee health (Interaction Technology, ongoing)
- Irma Harms, Visualization of Values in Design (Interaction Technology, ongoing)
- Moniek Honcoop, Mood regulation by wearables with sound for people with autism spectrum disorder (Interaction Technology, ongoing)
- Daan Ekkelenkamp, Adding higher-order Situation Awareness components to a Platoon Commander’s Battle Management System (Interaction Technology, 2021, external at TNO)
PhD
- Pei-Yu Chen, Interactive Machine Reasoning for Responsible Hybrid Intelligence (with Myrthe Tielman, Catholijn M. Jonker and Dirk Heylen, started 2021, TU Delft)
- Loan Ho, Knowledge Representation Formalisms for Hybrid Intelligence (with Stefan Schlobach, Victor de Boer and Myrthe Tielman, started 2021, VU)
- Esther Kox, Trust Repair in Human-Agent Teaming (with Peter de Vries and Jose Kerstholt, started 2020, BMS@UT)
- Ilir Kola, Social Situation Awareness in Personal Assistant Agents (with Catholijn M. Jonker, started 2017, TU Delft)
- (stopped) Pietro Pasotti, Computational Reasoning for Habit Support Agents (with Catholijn M. Jonker, stopped 2018, TU Delft)
- Alex Kayal, Normative Social Applications: User-centered models for sharing location in the family life domain (with Willem-Paul Brinkman and Mark Neerincx, 2017, TU Delft)
- Ni Kang, Public Speaking in Virtual Reality: audience design and speaker experiences (with Willem-Paul Brinkman and Mark Neerincx, 2016, TU Delft)
- Thomas King, Governing Governance: A formal framework for analysing institutional design and enactment governance (with Virginia Dignum and Catholijn M. Jonker, 2016, TU Delft)
- Matthew Johnson, Co-active Design: Designing support for interdependence in human-robot teamwork (with Catholijn M. Jonker, 2014, cum laude, TU Delft)
- (stopped) Iris van de Kieft, Explainable AI & Shared Mental Models in Negotiation Support Systems (with Catholijn M. Jonker, stopped 2011, TU Delft)
Teaching Responsible CS & AI
In 2017, as a member of the curriculum committee at TU Delft, I proposed Responsible Computer Science as the theme of the new bachelor programme. This was motivated by the increasingly important role that digital technology plays in our lives across many domains, such as health and wellbeing, smart cities, energy, art and culture, and robotics. Given this pervasiveness and impact of digital technology on our society, it is more important than ever that engineers understand this context and can translate this understanding into the development of digital technology that addresses the needs and opportunities of people and societies.
This marks a changing perspective on the field of computer science: a shift from seeing it as the objective study of computation and the design of computational systems to the realisation that computing is not value-free. The digital technologies we develop play a role in shaping who we are as human beings and our society as a whole. Computer scientists need to operate with an awareness that computing is not only a source of good, but can also cause harm with great impact. This impact, positive or negative, is growing even larger with advances in Data Science & AI, spurred on by the availability of big data from a plethora of devices and software systems, which makes it all the more essential to address.
I have come to see it as one of my most significant responsibilities as a teacher in CS & AI to integrate awareness of and ways of addressing Responsible CS & AI into our engineering programs.
I advocate a crosscutting approach that addresses the Ethical, Legal and Societal Aspects (ELSA) of digital technologies and AI in the immediate technical context in which they emerge. As Miller (2006) suggests, technical issues are best understood in their social context, and the societal aspects of computing are best understood in the context of the underlying technical realisation. For example, understanding and addressing algorithmic bias requires both technical knowledge of how the choice of datasets affects machine learning models and an understanding of the societal phenomena of bias and discrimination themselves and of their interplay with software systems such as surveillance technologies.
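As a small illustration of the dataset point, consider the following toy Python sketch (the groups, distributions and single-threshold "model" are hypothetical constructions for illustration only, not material from any of my courses): a classifier fitted on data in which one group is under-represented performs markedly worse for that group, even though the model never sees the group attribute.

```python
import random

random.seed(42)

# Hypothetical setup: the two groups follow different feature-to-outcome
# relationships, but the training data contains far fewer examples of group B.
GROUP_THRESHOLD = {"A": 0.0, "B": -0.8}

def sample(group, n):
    """Draw n (feature, label) pairs for the given group."""
    data = []
    for _ in range(n):
        x = random.gauss(0.0, 1.0)
        # Noisy ground-truth rule, different per group.
        label = int(x + random.gauss(0.0, 0.1) > GROUP_THRESHOLD[group])
        data.append((x, label))
    return data

# Skewed training set: group A is heavily over-represented.
train = sample("A", 1000) + sample("B", 50)

# "Learn" a single decision threshold by minimising training error --
# a stand-in for any group-blind model trained on this data.
candidates = [i / 100 for i in range(-200, 201)]
best_t = min(candidates, key=lambda t: sum(int(x > t) != y for x, y in train))

def error_rate(data):
    return sum(int(x > best_t) != y for x, y in data) / len(data)

# Balanced test sets reveal the disparity the skewed data induced.
err_a = error_rate(sample("A", 2000))
err_b = error_rate(sample("B", 2000))
print(f"group A error: {err_a:.3f}")  # low: the model fits the majority group
print(f"group B error: {err_b:.3f}")  # much higher: B was barely represented
```

The fitted threshold reflects group A's relationship between feature and outcome, so the error rate for group B is substantially higher; understanding why requires exactly the combination of technical and societal insight described above.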
This crosscutting approach is echoed by several related initiatives. For example, the Embedded EthiCS project at Harvard (2019) embeds ethical aspects throughout the CS curriculum as an inherent part of the technical courses. The Ethics4EU project (2019) develops best practices and learning resources for integrating ethical aspects into CS study programmes across a wide range of CS topics. Mozilla has launched the Responsible Computer Science Challenge (2018), an initiative that funds projects on embedding ethics into undergraduate computer science education. Together, the awarded projects have created a Playbook on Teaching Responsible Computing that brings together their lessons learned. It suggests an “across the curriculum” model in which responsible computing concepts and habits are taught in conjunction with technical subjects, so as to create a more substantive, contextual, and task-oriented understanding. Many CS curricula also include a variety of dedicated digital ethics courses; Dr. Casey Fiesler has collected a database (2018) of almost 200 such courses.
Below I highlight several ways in which I currently address ELSA in my teaching (see also the section on courses). I aim to further develop and systematise these efforts in the coming years.
- Value-sensitive design (VSD): I teach VSD to 2nd year bachelor students in CS as part of a course on Human-Computer Interaction within the module Intelligent Interaction Design. The module also includes a course on AI, which facilitates bridging of human-centred and technical AI aspects.
- Ethics and Regulation of Emotion Recognition Technologies: I have given a guest lecture on this topic within the course Affective Computing (Interaction Technology master). While emotion recognition technologies have potential benefits, for example for making human-machine interaction more natural, they also carry many risks, related for example to misclassification of emotions, personal freedom, and discrimination. I discussed these issues using various examples of existing emotion recognition systems and by highlighting relevant parts of the proposed EU regulation on AI.
- Ethics Committee pre-check team: I am a member of the Human-Media Interaction team that checks and provides feedback on ethics requests, primarily for user studies by CREATE bachelor and I-Tech master students, before they are formally submitted to the Ethics Committee (EC). As a result, ethics requests are of higher quality when submitted, and the EC can process them more efficiently.
These activities are aligned with my research, in which ethical aspects of digital technologies have taken an increasingly central role over the past decade. In research projects that started around 2011/2012 we developed computational models for aligning interpersonal data and information sharing, in digital technologies for mobile location sharing and social sensing, with the norms and values of users. I have dubbed this approach Responsible Data Sharing. In my Vidi project (2014) we are investigating how behaviour support technologies can take into account the norms and values of people, and in the Hybrid Intelligence project (2021) we develop conversational human-machine alignment models. I have also taken on several leadership roles addressing aspects of responsibility, in particular as coordinator of the Responsible Hybrid Intelligence (HI) research line within the corresponding gravitation project, and as co-chair of the committee tasked with setting up collaboration between the HI project and the gravitation project on Ethics of Socially Disruptive Technologies. Among other things, we have initiated a project-wide discussion to identify what it means to do research on HI in a responsible way.