Symposium ‘Being human in the digital society’

Logo of the CoreSAEP project, featuring a house with a WiFi sign as 'smoke' rising from the chimney and people standing in front of it.

On November 21st, 2022, Dr. Birna van Riemsdijk (UT) and Dr. Myrthe Tielman (TUD) organized the symposium ‘Being human in the digital society: on technology, norms and us’. The symposium was held in hybrid form, with around 25 participants attending throughout the event, either in person at TU Delft or online. It was organized in conjunction with the PhD defence of Dr. Ilir Kola, to celebrate the conclusion of the CoreSAEP NWO Vidi project (Computational Reasoning for Socially Adaptive Electronic Partners), which was awarded to Van Riemsdijk in 2014. CoreSAEP investigates software that takes personal norms and values into account.

The project proposal was written at a time when the idea of aligning the behavior of digital technologies with personal norms and values raised some eyebrows. Since then, advances in sensor technology and AI have enabled the creation of digital technologies that are ever more interwoven with almost every aspect of our daily lives, collecting data about us and influencing our behavior. This raises questions about how we want to shape this hybrid digital society: how do we maintain agency in how we shape our lives with digital technologies, how do we account for the diversity of people and social contexts, and how do we ensure our digital technologies leave space for what they cannot understand? In the symposium we took participants along on the journey we have made over the past years in addressing these questions.

Invited talk: neurodiversity in technology development

The symposium started with an invited talk by Caroline Bollen, PhD candidate at Delft University of Technology in the “Ethics of Socially Disruptive Technologies” consortium. Bollen talked about valuing neurodiversity in technology development, providing a philosophical perspective on the notion of normativity and what it means to be different from ‘the norm’. Specifically, she showed that notions of empathy in the literature are essentially defined as a characteristic that is ‘lacking’ in neurodiverse persons. This circular reasoning by definition introduces and perpetuates the erroneous stereotype that neurodiverse people have limited social competencies and empathy. To avoid reinforcing such harmful myths when we develop technology, it is important to revise our definitions of these notions and to ask who we imagine when we develop technology to support ‘people’: how can we ensure that the diversity of human experiences is included in our development? Check out her YouTube video on the topic if you want to know more!

Taking into account personal norms and values

We then presented results of the CoreSAEP project. Van Riemsdijk provided a high-level overview of the insights gained throughout the project on how to realise human-machine alignment. She showed that software that takes into account personal norms and values requires a person-centred approach, with meaningful models, human-grounded methods and interactive meaning making (slides).

Video recording: ‘Human-machine alignment via person-centered models, methods and meaning making’, from University of Twente on Vimeo.

Cover of Dr. Ilir Kola’s PhD thesis

Dr. Ilir Kola presented an overview of his PhD research on ‘Enabling Social Situation Awareness (SSA) in support agents’. He presented the three-level SSA architecture he developed in his thesis, which enables social situation perception based on social features, comprehension in terms of psychological characteristics of a situation, and projection of expected user behaviour and values. In the afternoon he successfully defended his PhD thesis!
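As a rough illustration of how such a layered architecture could be organised (a minimal sketch under our own assumptions, not the thesis implementation; all class and function names are hypothetical), the three levels can be read as a pipeline from observed social features to expected behaviour and values:

```python
from dataclasses import dataclass
from typing import Dict, List

# Hypothetical sketch of a three-level social situation awareness (SSA)
# pipeline: perception -> comprehension -> projection.
# All names and mappings are illustrative, not taken from the thesis.

@dataclass
class SocialSituation:            # level 1: perception (observed social features)
    features: Dict[str, str]

@dataclass
class SituationProfile:           # level 2: comprehension (psychological characteristics)
    characteristics: Dict[str, float]

@dataclass
class Projection:                 # level 3: projection (expected behaviour and values)
    expected_behaviour: str
    values_at_stake: List[str]

def comprehend(situation: SocialSituation) -> SituationProfile:
    # Map observable social features to psychological characteristics of the situation.
    duty = 0.8 if situation.features.get("relationship") == "colleague" else 0.3
    return SituationProfile({"duty": duty})

def project(profile: SituationProfile) -> Projection:
    # Anticipate the user's likely behaviour and the values at stake.
    if profile.characteristics["duty"] > 0.5:
        return Projection("accept the meeting request", ["reliability"])
    return Projection("decline the meeting request", ["personal time"])

situation = SocialSituation({"relationship": "colleague", "place": "office"})
print(project(comprehend(situation)))
```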

Dr. Myrthe Tielman presented her work on modelling a person’s activities and sub-activities, and the values that are affected by them. She showed how a dialogue agent can interact with a visually impaired person to elicit their navigation behavior and related values, and investigated which types of human-machine misalignment such a dialogue can give rise to. She also presented ongoing work by PhD candidate Pei-Yu Chen (Hybrid Intelligence project), who is developing ‘Alignment Dialogues’: dialogues through which human and machine can reach agreement on how the machine can best support the user.
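To make the idea of activities, sub-activities and affected values a bit more concrete, here is a purely illustrative sketch (the structure and names are our own assumptions, not Tielman’s actual model) in which each (sub-)activity records how strongly it promotes or demotes particular values:

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Illustrative sketch only: an activity hierarchy where each (sub-)activity
# records its effect on the user's values (positive promotes, negative demotes).

@dataclass
class Activity:
    name: str
    value_effects: Dict[str, float] = field(default_factory=dict)
    sub_activities: List["Activity"] = field(default_factory=list)

    def aggregate_effects(self) -> Dict[str, float]:
        # Sum this activity's value effects with those of its sub-activities.
        total = dict(self.value_effects)
        for sub in self.sub_activities:
            for value, effect in sub.aggregate_effects().items():
                total[value] = total.get(value, 0.0) + effect
        return total

# Hypothetical example: navigating to the station, with sub-activities that
# affect the values of independence and safety differently.
navigate = Activity("navigate to station", {"independence": 1.0}, [
    Activity("cross busy intersection", {"safety": -0.5}),
    Activity("ask passer-by for directions", {"independence": -0.3, "safety": 0.5}),
])
print(navigate.aggregate_effects())
```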

Panel ‘AI value alignment: reasoning or learning?’

The symposium concluded with a panel on ‘AI value alignment: reasoning or learning?’. This was motivated by the observation that over the past years, different approaches to norm and value alignment have been developed in the literature. Some take a data-oriented approach by deriving norms and values from the behavior of agents in a system, while others model norms, values and reasoning about them explicitly, or take a hybrid human-machine approach to identifying values.

The panel featured researchers whose expertise in human-machine alignment ranges from technical AI work on reasoning and learning to human-machine interaction and ethics: Dr. Matthew Dennis, research fellow in Ethics of AI and Persuasive Technologies (TU Eindhoven), Dr. Maite López-Sánchez, associate professor in AI and Ethics (University of Barcelona), Dr. Pradeep Murukannaiah, assistant professor in Engineering socially intelligent agents (TU Delft), and Sanne van Waveren, PhD candidate in Human-robot interaction (KTH Royal Institute of Technology). During the panel we discussed these different approaches.

An important question posed during the panel concerned the tension between personal and societal values. Should computer systems be designed to reflect current societal norms or future desired values? And whose values are prioritised? Some approaches take the individual as a starting point and emphasise personal values, but this comes with a risk of supporting or stimulating behaviour that conflicts with societal values. Approaches that allow societal values to be modelled in a machine, and through which the machine can learn how to behave accordingly, are therefore also important to develop. Ultimately, hybrid human-machine approaches seem required to navigate these tensions in the specific contexts in which they occur.
