As conversational user interfaces (CUIs) evolve, the challenges surrounding trust and identity become increasingly pronounced. The identities of CUIs are multifaceted: they incorporate individual names like Siri or Alexa, the companies behind them like Apple or Amazon, and attributes of race, gender, and class that people perceive as encoded in how and what they say. Identity is also encoded in the embodiment (an abstract animation on a watch, or a humanlike avatar in virtual reality) and in the backstory designers give these agents. But when identity is fragmented, e.g., across multiple physical forms, whether and how users can establish trust becomes difficult to address. Drawing from diverse fields including ethics, design, and engineering, we explore the hurdles posed by ambiguous identities. A dynamic embodiment of a CUI across multiple devices presents technical complexities, but more importantly, it raises ethical dilemmas surrounding trust.
In this workshop, we aim to synthesize research goals and methods to further probe the intricacies of identity fragmentation and its implications for user trust in CUIs. To spur collaborative debate, we frame trust and identity as a chicken-or-egg problem: should issues surrounding identity be resolved before trust between humans and CUIs can even be conceived as possible? Can users truly trust a CUI that lacks a consistent and transparent identity, or one that fails to evolve as its relationship with the user develops, and would that trust differ across embodiments and platforms? We consider that trust itself should perhaps be questioned as long as the issues surrounding identity remain unresolved. We additionally discuss whether a CUI's identity should be uniform across platforms to foster user trust or should adopt distinct personas to engender user confidence, and whether that identity should stay consistent over time or evolve with the user as their relationship develops.
Submission deadline: 20 May 2024 (extended to 27 May 2024)
Notification of acceptance: 5 June 2024
Workshop date: 8 July 2024, 9:30-12:30
Accepted position papers and statements of interest: Not yet available
For more information or questions, please reach out to us at: cuibetweentrustandidentity@gmail.com
Full workshop description is available here:
We aim to deepen our understanding of the intricate interplay between identity formation and fragmentation, trust dynamics, and ethical considerations. Our workshop will focus on the tension between trust and the identity of who or what we trust: the diverse groups of people who use CUIs, CUIs themselves (in whatever form or forms), and the organizations and companies behind CUIs. We build on this year's conference theme, Trustworthy Conversational Agents, and introduce identity as a crucial consideration. At HRI (the ACM/IEEE International Conference on Human-Robot Interaction), we have led three workshops on identity [9-11], exploring challenges that have been noted as ongoing and future research agendas [4, 19]. These challenges are also relevant for the CUI community. The discussions on artificial identity, or robo-identity, at HRI cut across disciplines, with engineers, computer scientists, designers, and philosophers among the organizers and attendees of the robo-identity workshop series. The first workshop considered the possibility of multiple embodiments, i.e., how a single physical form is not a limitation for artificially created identities. Beyond this, groups of robots, e.g., swarms, can also form what is called a group identity through observables like sharing the same voice [3]. The second workshop focused on identity conveyed via speech and voice, as well as emotions expressed through these modalities. Synthetic voice, for example, demonstrates the difficulty of distinguishing between human and artificial identity [17], and voice design influences people's likelihood of trusting an agent [30]. The third workshop turned to identity fluidity in a world shared between humans and machines, such as how both human and non-human identities continuously evolve.
For the Conversational User Interfaces (CUIs) community, the inquiry into "who" these interfaces represent is also relevant, yet has not been a focus. CUIs need not rely on (but can have) the tangible bodies of robots; they span a spectrum of forms, ranging from standardized humanoid avatars to abstract digital entities such as Siri or Jibo. How diverse identities that may or may not rely on physical bodies can seamlessly traverse different technological platforms thus becomes compelling to explore.
Human identity has already been discussed at length in philosophy [13, 24] and psychology [29], with normative assumptions such as the lack of a consistent human identity being related to psychiatric conditions like dissociative identity disorder [31]. Technology, on the other hand, is not seen as "disordered" if its identity is distributed across multiple embodiments, e.g., Siri on a smartwatch and on the iPhone. However, numerous challenges persist, including ongoing research on the optimal contexts for identity migration and the nuanced perception of migrating artificial identities across diverse user contexts [6, 14, 16, 25]. Further, it is questionable whether human-like traits such as race should be designed into robots [34], especially considering robots can serve as proxies for human identity [2].
Trust is the disposition to rely on another entity to deliver an expected action; it is assessed by weighing the risks related to the action and judging whether the entity at hand is worthy of trust [23]. Hence, agents that transparently let humans know what action they are taking and why are seen as trustworthy, especially in collaboration with humans [1, 12, 18, 20]. CUIs can thus make use of various modalities, be it visuals, text, or voice, to provide explanations [5, 8, 15, 32]. Yet, tension ensues when people over- or under-trust agents, be it over-relying on them when they should rely on themselves or other humans, or doubting agents when they should trust the information the agents share [26, 28]. Here, people's emotional and cognitive states shape trust within an interaction [21, 22], but trust has also been called a taken-for-granted given that is required for any social exchange to happen in the first place [7, 33]. When trust is present socially, it is rarely noticed; yet when it is gone, its absence is glaringly obvious.
Taken together, agents can have physical, virtual, augmented, and multiple bodies, as well as diverse modalities (like changing voices) [4]. An agent's identity, in all its possibilities, thus challenges whether and how we should trust it. We therefore aim to explicitly foster constructive disagreements and shared debates during the workshop. Attendees' topics should include positions on whether identity issues need to be resolved before trust in CUIs can be discussed, versus the position that trust can stand on its own conceptually even while the issues regarding identity still loom large.
During this Half-Day Workshop, we plan to explore divergent and convergent themes of trust and identity of CUIs. To achieve this, we have designed a series of interactive activities which aim to foster multidisciplinary discussions and encourage the exploration of various facets of identity in CUIs. Through synthesis-oriented exercises and small-group discussions, we aim to generate ideas and insights that will inspire future work in this field.
Introduction and agenda
(15 min)
Paired debates: The chicken or the egg dilemma on trust and identity
(90 min)
We will ask authors to present their position (on what comes first, trust or identity, as a chicken-or-egg problem) in PechaKucha style. Specifically, we request 10 slides with a maximum of 20 seconds per slide (a recommendation rather than a strict requirement). The presenters can then offer rebuttals. The organizers will hold a first debate before opening it up to attendees, who will be paired based on their position papers. Our hope is that by sharing authors' debates on identity, we can inspire co-attendees in an easily accessible manner.
Coffee Break
(15 min)
Nordic Perspectives on Algorithm Systems
(30 min)
Then, we will hold synthesis-oriented discussions with the generative Nordic Perspectives on Algorithm Systems exercise. The Nordic Perspectives card deck will be used to generate scenarios of research into changing trust and identity. In small groups, participants will use the Settings and Metaphors sets to develop simple scenarios of interactions that exemplify this topic. The groups will then combine, sharing two scenarios and working together to add two research plans using the Method and Caveat decks. This will encourage generative discussion on the range of challenges involved in the development and research of CUIs with respect to trust in their identities.
Conver-Stations Discussion
(60 min)
The Conver-Stations exercise involves participants moving between tables, with 10 minutes of discussion at each. The stations will be pre-populated with topics taken from participants' submitted position papers, written on large sheets of paper that also include 2 to 3 related questions. Each group should answer these in one sentence during their discussion before moving on, reading the previous answers at their next station, and starting a new discussion on the new topic.
Reporting back and Future steps
(30 min)
The final session will focus on reporting back the connections that have been made and the future work -- from follow-up discussions, to papers, to research funding proposals -- that participants want to continue after the workshop.
Informal Workshop Dinner
(afterwards)
The primary audience of the workshop includes researchers in the fields of HRI, Conversational User Interfaces, and RoboPhilosophy. Beyond those working in the CUI space, we hope to include researchers in design research, philosophy, and related HCI fields. By doing so, we hope to bring the RoboPhilosophy, HRI, and CUI communities together -- both in discussions during the workshop on overlapping interests in identity and trust, and at the CUI conference as a whole, to make long-lasting and meaningful connections between community members. The workshop will be open and inclusive to participants from varying fields. Based on similar workshops we have organized previously, we expect fifteen to twenty participants.
If you are interested in participating in this workshop, we ask you to submit a 'statement of interest' (e.g., name, affiliation, bio/background, what you expect from the workshop, discussion topics of interest, etc.).
We also encourage participants to submit an extended abstract of 2 to 4 pages (maximum of 6 including references) on research related to the topics described above. Submissions should include short author biographies of 1-2 sentences. We particularly encourage papers describing works in progress, preliminary results to discuss with the community, methodology proposals, and lessons learned when designing CUIs to converse with end users. All papers will be reviewed by our program committee, and our website will serve as a repository for accepted papers. All submissions should use the sigconf style; the ACM template is available on Overleaf (simply change the document class to \documentclass[sigconf,screen]{acmart}). Submissions will be single-blind and thus do not need to be anonymized.
Extended abstracts can be submitted to cuibetweentrustandidentity@gmail.com by 20 May 2024. Please include "CUI24 Between Trust and Identity" in your email subject line. We will send notifications of acceptance by 5 June 2024.
Minha Lee is an assistant professor at the Eindhoven University of Technology in the Department of Industrial Design, with a background in philosophy, digital arts, and HCI. Her research concerns morally relevant interactions with various agents like robots or chatbots. Her recent work explores how we can explore our moral self-identity through conversations with digital entities, e.g., via acting compassionately towards a chatbot.
Donald McMillan is an Assistant Professor at Stockholm University's Department of Computer and Systems Sciences. His research lies at the juncture between CUI, HCI and computer science in investigating how observational methods that provide detailed perspectives on human communication can be applied to improve sensing and interaction with novel devices.
Ilaria Torre is an Assistant Professor at Chalmers University of Technology, Division of Interaction Design and Software Engineering. Her research focuses on Human-Robot Interaction, looking particularly at developing effective and appropriate communication methods for intentional and unintentional human-robot interactions.
Joel Fischer is Professor of Human-Computer Interaction at the University of Nottingham, UK. His research takes a human-centred view on AI-infused technologies to understand and support human activities and reasoning. He has co-organised international workshops and published widely on related topics spanning robotics and conversational systems, frequently drawing on perspectives from Ethnomethodology and Conversation Analysis.
Yvon Ruitenburg is a PhD candidate at the Department of Industrial Design at Eindhoven University of Technology with a background in Industrial Design. Her research delves into how conversational technologies can help people with dementia and those around them communicate their perceptions of reality.
[1] Sule Anjomshoae, Amro Najjar, Davide Calvaresi, and Kary Främling. 2019. Explainable agents and robots: Results from a systematic literature review. In 18th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2019), Montreal, Canada, May 13–17, 2019. International Foundation for Autonomous Agents and Multiagent Systems, 1078–1088.
[2] Franziska Babel, Philipp Hock, Katie Winkle, Ilaria Torre, and Tom Ziemke. 2024. The Human Behind the Robot: Rethinking the Low Social Status of Service Robots. In Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction (Boulder, CO, USA) (HRI ’24). Association for Computing Machinery, New York, NY, USA, 1–10. https://doi.org/10.1145/3610978.3640763
[3] Alexandra Bejarano and Tom Williams. 2022. Understanding and influencing user mental models of robot identity. In 2022 17th ACM/IEEE International Conference on Human-Robot Interaction (HRI). IEEE, 1149–1151.
[4] Karla Bransky, Penny Sweetser, Sabrina Caldwell, and Kingsley Fletcher. 2024. Mind-Body-Identity: A Scoping Review of Multi-Embodiment. In Proceedings of the 2024 ACM/IEEE International Conference on Human-Robot Interaction (Boulder, CO, USA) (HRI ’24). Association for Computing Machinery, New York, NY, USA, 65–75. https://doi.org/10.1145/3610977.3634922
[5] Ewart J de Visser, Frank Krueger, Patrick McKnight, Steven Scheid, Melissa Smith, Stephanie Chalk, and Raja Parasuraman. 2012. The world is not enough: Trust in cognitive agents. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (Los Angeles, CA) (HFES ’12, Vol. 56). Sage Publications, 263–267.
[6] Brian R Duffy, Gregory MP O’Hare, Alan N Martin, John F Bradley, and Bianca Schon. 2003. Agent chameleons: Agent minds and bodies. In Proceedings 11th IEEE International Workshop on Program Comprehension. IEEE, 118–125.
[7] Harold Garfinkel. 1963. A conception of, and experiments with, “trust” as a condition of stable concerted actions. In Motivation and Social Interaction: Cognitive Determinants, O. J. Harvey (Ed.). Ronald Press, 187–238.
[8] Jiun-Yin Jian, Ann M Bisantz, and Colin G Drury. 2000. Foundations for an empirically determined scale of trust in automated systems. International Journal of Cognitive Ergonomics 4, 1 (2000), 53–71.
[9] Rucha Khot, Minha Lee, Alexandra Bejarano, Lux Miranda, Gisela Reyes-Cruz, Joel E. Fischer, and Dimosthenis Kontogiorgos. 2024. Robo-Identity: Designing for Identity in the Shared World. In Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction (Boulder, CO, USA) (HRI ’24). Association for Computing Machinery, New York, NY, USA, 1326–1328. https://doi.org/10.1145/3610978.3638166
[10] Guy Laban, Sebastien Le Maguer, Minha Lee, Dimosthenis Kontogiorgos, Samantha Reig, Ilaria Torre, Ravi Tejwani, Matthew J Dennis, and Andre Pereira. 2022. Robo-identity: Exploring artificial identity and emotion via speech interactions. In 2022 17th ACM/IEEE International Conference on Human-Robot Interaction (HRI). IEEE, 1265–1268.
[11] Minha Lee, Dimosthenis Kontogiorgos, Ilaria Torre, Michal Luria, Ravi Tejwani, Matthew J Dennis, and Andre Pereira. 2021. Robo-identity: Exploring artificial identity and multi-embodiment. In Companion of the 2021 ACM/IEEE International Conference on Human-Robot Interaction. 718–720.
[12] Brian Y. Lim, Anind K. Dey, and Daniel Avrahami. 2009. Why and Why Not Explanations Improve the Intelligibility of Context-Aware Intelligent Systems. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Boston, MA, USA) (CHI ’09). Association for Computing Machinery, New York, NY, USA, 2119–2128. https://doi.org/10.1145/1518701.1519023
[13] John Locke. 1979 [1690]. An Essay Concerning Human Understanding (1 ed.). Peter H. Nidditch (Ed.). Oxford University Press.
[14] Michal Luria, Samantha Reig, Xiang Zhi Tan, Aaron Steinfeld, Jodi Forlizzi, and John Zimmerman. 2019. Re-Embodiment and Co-Embodiment: Exploration of social presence for robots and conversational agents. In Proceedings of the 2019 on Designing Interactive Systems Conference. ACM, 633–644.
[15] Joseph B Lyons, Garrett G Sadler, Kolina Koltai, Henri Battiste, Nhut T Ho, Lauren C Hoffmann, David Smith, Walter Johnson, and Robert Shively. 2017. Shaping trust through transparent design: theoretical and experimental guidelines. In Advances in Human Factors in Robots and Unmanned Systems. Springer, 127–136.
[16] Alan Martin, Gregory MP O’hare, Brian R Duffy, Bianca Schön, and John F Bradley. 2005. Maintaining the identity of dynamically embodied agents. In International Workshop on Intelligent Virtual Agents. Springer, 454–465.
[17] Conor McGinn and Ilaria Torre. 2019. Can you tell the robot by the voice? An exploratory study on the role of voice in the perception of robots. In 2019 14th ACM/IEEE international Conference on human-robot interaction (HRI). IEEE, 211–221.
[18] Joseph E Mercado, Michael A Rupp, Jessie YC Chen, Michael J Barnes, Daniel Barber, and Katelyn Procci. 2016. Intelligent agent transparency in human–agent teaming for Multi-UxV management. Human Factors 58, 3 (2016), 401–415.
[19] Lux Miranda, Ginevra Castellano, and Katie Winkle. 2023. Examining the State of Robot Identity. In Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction. 658–662.
[20] Bonnie M Muir. 1987. Trust between humans and machines, and the design of decision aids. International journal of man-machine studies 27, 5-6 (1987), 527–539.
[21] Philip J. Nickel. 2013. Trust in technological systems. In Norms in Technology, Marc J. de Vries, Sven Ove Hansson, and Anthonie W.M. Meijers (Eds.). Vol. 9. Springer, Dordrecht, 223–237. https://doi.org/10.1007/978-94-007-5243-6_14
[22] Philip J. Nickel. 2015. Design for the value of trust. In Handbook of Ethics, Values, and Technological Design: Sources, Theory, Values and Application Domains, Jeroen van den Hoven, Pieter E. Vermaas, and Ibo van de Poel (Eds.). Springer, Germany, 551–567. https://doi.org/10.1007/978-94-007-6970-0_21
[23] Philip J. Nickel and Krist Vaesen. 2012. Risk and trust. In Handbook of Risk Theory, Sabine Roeser, Rafaela Hillerbrand, Per Sandin, and Martin Peterson (Eds.). Springer, Germany, 857–876. https://doi.org/10.1007/978-94-007-5243-6_14
[24] Derek Parfit. 1984. Reasons and Persons. Oxford University Press.
[25] Samantha Reig, Michal Luria, Janet Z Wang, Danielle Oltman, Elizabeth Jeanne Carter, Aaron Steinfeld, Jodi Forlizzi, and John Zimmerman. 2020. Not Some Random Agent: Multi-person interaction with a personalizing service robot. In Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction. ACM, 289–297.
[26] James Schaffer, John O’Donovan, James Michaelis, Adrienne Raglin, and Tobias Höllerer. 2019. I Can Do Better than Your AI: Expertise and Explanations. In Proceedings of the 24th International Conference on Intelligent User Interfaces (Marina del Ray, California) (IUI ’19). Association for Computing Machinery, New York, NY, USA, 240–251. https://doi.org/10.1145/3301275.3302308
[27] Ben Shneiderman and Pattie Maes. 1997. Direct manipulation vs. interface agents. interactions 4, 6 (1997), 42–61.
[28] Linda J Skitka, Kathleen L Mosier, and Mark Burdick. 1999. Does automation bias decision-making? International Journal of Human-Computer Studies 51, 5 (1999), 991–1006.
[29] Henri Tajfel and John C Turner. 2004. The social identity theory of intergroup behavior. In Political psychology. Psychology Press, 276–293.
[30] Ilaria Torre, Jeremy Goslin, Laurence White, and Debora Zanatto. 2018. Trust in artificial voices: A “congruency effect” of first impressions and behavioural experience. In Proceedings of the Technology, Mind, and Society Conference. 1–6.
[31] Onno Van der Hart, Ellert RS Nijenhuis, and Kathy Steele. 2006. The Haunted Self: Structural Dissociation and the Treatment of Chronic Traumatization. WW Norton & Company.
[32] Ning Wang, David V Pynadath, and Susan G Hill. 2016. Trust calibration within a human-robot team: Comparing automatically generated explanations. In The Eleventh ACM/IEEE International Conference on Human Robot Interaction. IEEE Press, 109–116.
[33] Rod Watson. 2009. Constitutive practices and Garfinkel’s notion of trust: Revisited. Journal of Classical Sociology 9, 4 (2009), 475–499.
[34] Tom Williams. 2023. The Eye of the Robot Beholder: Ethical Risks of Representation, Recognition, and Reasoning over Identity Characteristics in Human-Robot Interaction. In Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction. 1–10.