Programme

Sunday (Sept 3)

Arrival

Monday (Sept 4)

07:30 – 08:30: Breakfast

10:00 – 10:30: Introduction
10:30 – 11:00: Coffee break
11:00 – 12:30: Tony Belpaeme: What can we learn from robots? Robots as tutors

12:30 – 13:30: Lunch

13:30 – 15:00: André Martins: Deep Learning: A Shallow Tutorial
15:00 – 15:30: Coffee break
15:30 – 17:00: Iolanda Leite: Toward Autonomous Social Robots in the Wild

17:30 – 20:00: Poster session

20:00 – end: Dinner

Tuesday (Sept 5)

07:30 – 08:30: Breakfast

09:00 – 10:30: Serge Thill: Human Cognition in Social HRI
10:30 – 11:00: Coffee break
11:00 – 12:30: Alessandra Sciutti: Using Robots to Study Humans

12:30 – 13:30: Lunch

13:30 – 15:00: Paul Baxter: Experimental Human-Robot Interaction
15:00 – 15:30: Coffee break
15:30 – 19:30: Workshops

20:00 – end: Dinner

Wednesday (Sept 6)

07:30 – 08:30: Breakfast

09:00 – 10:30: Hendrik Buschmeier: Interactional Intelligence for Artificial Conversational Agents
10:30 – 11:00: Coffee break
11:00 – 12:30: Carlos Cifuentes: Human-Robot Interaction Strategies for Assistance and Rehabilitation

12:30 – 13:30: Lunch

13:30 – 18:30: Free

18:30: Bus departing from Beach Club hotel to Bar 07

19:00 – 00:00: BBQ

Thursday (Sept 7)

07:30 – 08:30: Breakfast

09:00 – 10:30: Mary Ellen Foster: Face-to-Face Conversation with Socially Intelligent Robots
10:30 – 11:00: Coffee break
11:00 – 12:30: An Jacobs: Bringing even more social in social HRI

12:30 – 13:30: Lunch

13:30 – 15:00: Tom Ziemke: Intentions in HRI
15:00 – 15:30: Coffee break
15:30 – 19:30: Workshops

20:00 – end: Dinner

Friday (Sept 8)

07:30 – 08:30: Breakfast

09:00 – 10:30: Bram Vanderborght: Designing Robots for HRI
10:30 – 11:00: Coffee break
11:00 – 12:30: Ana Paiva: Groups of humans and robots together

12:30 – 13:30: Lunch

13:30 – 15:00: Manuel Lopes: Interactive Learning and Teaching with Robots
15:00 – 15:30: Coffee break
15:30 – 19:30: Workshops

20:00 – end: Dinner

Saturday (Sept 9)

Departure


Tony Belpaeme
What can we learn from robots? Robots as tutors

While robots have featured in STEM education for many decades, the use of robots as teachers or tutors is a very recent development. Several studies have shown that physically present robots, when compared to on-screen characters, often receive more attention, are perceived more favorably and induce more compliance in the user. This has formed the basis for many studies using social robots as tutors. Robots can offer one-on-one tutoring, they can adopt different roles (such as a peer, a less able peer or a teacher) and they can personalise the taught material to the user. While in many cases robot tutors are seen as making a positive contribution to the user’s learning, many unknowns remain. This lecture will explore what we know and what we don’t know about robot tutors, through a mix of research results and speculation on where the field might be heading.
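
As a purely illustrative sketch of what such personalisation might look like in code (a hypothetical toy model, not a system from the lecture), a tutor could keep a running estimate of learner skill and choose the next exercise difficulty from it:

    # Toy adaptive tutor: picks the next exercise difficulty from a running
    # estimate of learner skill. A hypothetical sketch, not a lecture system.
    class ToyTutor:
        def __init__(self):
            self.skill = 0.5                      # estimated mastery in [0, 1]

        def next_difficulty(self):
            return min(1.0, self.skill + 0.1)     # aim slightly above mastery

        def update(self, correct):
            target = 1.0 if correct else 0.0      # moving average of performance
            self.skill += 0.2 * (target - self.skill)

    tutor = ToyTutor()
    for correct in [True, True, False, True]:     # made-up answer sequence
        tutor.update(correct)
        print(f"skill={tutor.skill:.2f} -> next difficulty={tutor.next_difficulty():.2f}")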

André Martins
Deep Learning: A Shallow Tutorial

Deep learning is revolutionizing AI, with new breakthroughs in natural language processing, computer vision, and robotics. Are these successes merely reincarnations of old ideas running on better hardware with more data? Or do they bring new perspectives, models, and training strategies? In this tutorial, I will describe some of the ideas behind modern deep learning models, starting from the very basics (the perceptron, feed-forward neural networks, the backpropagation algorithm) and moving to more recent instances of representation learning (word vectors, auto-encoders), new activation functions (rectified linear units, max-pooling, sparsemax), and neural networks for structured data (convolutional nets, recurrent neural networks, long short-term memories, attention mechanisms). I will illustrate with concrete examples from natural language processing and computer vision tasks.
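
For readers who want to preview the very basics in code, here is a minimal NumPy sketch of a feed-forward network trained with backpropagation on the XOR problem. It is a generic textbook example, not material from the tutorial itself:

    import numpy as np

    # Minimal feed-forward network trained with backpropagation on XOR.
    # A generic textbook sketch, not material from the tutorial itself.
    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # input -> hidden
    W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for step in range(10000):
        h = sigmoid(X @ W1 + b1)                 # forward pass
        out = sigmoid(h @ W2 + b2)
        d_out = (out - y) * out * (1 - out)      # chain rule at the output
        d_h = (d_out @ W2.T) * h * (1 - h)       # gradient pushed back one layer
        W2 -= h.T @ d_out                        # gradient descent, learning rate 1
        b2 -= d_out.sum(axis=0)
        W1 -= X.T @ d_h
        b1 -= d_h.sum(axis=0)

    print(np.round(out, 2))  # should end up close to [[0], [1], [1], [0]]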

Iolanda Leite
Toward Autonomous Social Robots in the Wild

Social robots are becoming increasingly common tools for assisted living, education and collaborative manufacturing. As robots move out of controlled laboratory environments to be deployed in the real world, a long-standing barrier is the need to respond and adapt to unstructured social environments without operator intervention. In this talk, I will present my past and current research on artificial intelligence mechanisms that enable robots to interact with people in dynamic, real-world social environments. These mechanisms include computational models of empathy, turn-taking, and engagement. I will present evidence on the positive effects of implementing these models in robots that interact socially with people in different application domains. I will also discuss limitations of the current state of the art in robotic technology suitable for realistic social environments, arguing that an improved understanding of how robots perceive, reason and act depending on their surrounding social context can lead to more natural, enjoyable and useful human-robot interactions in the long term.
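
As a loose illustration of what a computational model of engagement can look like (all features and weights below are hypothetical, not taken from this research), one could score engagement from observable cues such as mutual gaze and response delay:

    import numpy as np

    # Toy engagement score: a logistic model over two observable cues.
    # Features, weights and examples are hypothetical, for illustration only.
    def engagement_score(gaze_fraction, response_delay_s):
        # gaze_fraction: share of time the person looks at the robot, in [0, 1]
        # response_delay_s: seconds the person takes to respond
        w_gaze, w_delay, bias = 4.0, -0.8, -1.0   # made-up parameters
        z = w_gaze * gaze_fraction + w_delay * response_delay_s + bias
        return 1.0 / (1.0 + np.exp(-z))           # score in (0, 1)

    print(engagement_score(0.8, 0.5))  # attentive and quick: high score
    print(engagement_score(0.1, 4.0))  # looking away and slow: low score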

Alessandra Sciutti
Using Robots to Study Humans

Humans show a great natural ability to interact with each other. Such efficiency in joint actions depends on a synergy between planned collaboration and emergent coordination, a subconscious mechanism based on a tight link between action execution and perception, supporting phenomena such as mutual adaptation, synchronization and anticipation. Defining which human motion features allow for such emergent coordination with another agent would be crucial to establishing more natural and efficient interaction paradigms with artificial devices, ranging from assistive and rehabilitative technology to companion robots. However, investigating the behavioral and neural mechanisms supporting natural interaction poses substantial problems. In particular, the unconscious processes at the basis of emergent coordination (e.g., unintentional movements or gazing) are very difficult – if not impossible – to restrain or control in a quantitative way for a human agent. Moreover, during an interaction, participants influence each other continuously in a complex way, resulting in behaviors that go beyond experimental control. In this lecture I will discuss how robotics could represent a potential solution to this methodological problem. Robots can indeed establish an interaction with a human partner, contingently reacting to their actions, without losing the controllability of the experiment or the naturalness of the interactive scenario. A robot could then serve as a key tool for the investigation of the psychological and neuroscientific bases of social interaction.

Suggested reading:

  • D’Ausilio A., Lohan K., Badino L. & Sciutti A. 2016, Studying Human-Human interaction to build the future of Human-Robot interaction, in A. Gaggioli, A. Ferscha, G. Riva, S. Dunne & I. Viaud-Delmon (eds.), Human Computer Confluence. Transforming Human Experience Through Symbiotic Technologies, De Gruyter Open, Berlin, pp. 213–226. http://www.degruyter.com/viewbooktoc/product/469548
  • Sciutti A., Ansuini C., Becchio C. & Sandini G. 2015, Investigating the ability to read others’ intentions using humanoid robots, Frontiers in Psychology – Cognitive Science, vol. 6, no. 1362. http://journal.frontiersin.org/article/10.3389/fpsyg.2015.01362
  • Sciutti A., Bisio A., Nori F., Metta G., Fadiga L., Pozzo T. & Sandini G. 2012, Measuring human-robot interaction through motor resonance, International Journal of Social Robotics, vol. 4, no. 3, pp. 223–234. https://link.springer.com/article/10.1007%2Fs12369-012-0143-1

Paul Baxter
Experimental Human-Robot Interaction

Social HRI (sHRI) lies at the intersection of multiple research fields, each with its own distinct motivations, techniques, and methodologies. Running experiments in this context is a tricky business, especially ‘in the wild’, but one that, of course, we collectively have an interest in getting right. In this lecture, I will metaphorically sprint through a typical sHRI experimental methodology cycle, identifying aspects to take into account, pitfalls to avoid, and nuggets to mull over, partly through the prism of my own research experiences. Since there are so many aspects to cover, and I have made at least as many experimental mistakes as anyone else, there will hopefully be a little something here for everyone.
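
One concrete step in that cycle where mistakes are common is deciding how many participants to recruit. As a generic illustration (not an example from the lecture), a prospective power analysis for a two-condition study can be done with statsmodels:

    from statsmodels.stats.power import TTestIndPower

    # Prospective power analysis for a two-condition between-subjects study.
    # Assumed numbers are illustrative: a medium effect (Cohen's d = 0.5),
    # alpha = 0.05 and the conventional 80% power target.
    n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8)
    print(f"Participants needed per condition: {n_per_group:.0f}")  # about 64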

Hendrik Buschmeier
Interactional Intelligence for Artificial Conversational Agents

Understanding what interlocutors ‘mean’ in conversation cannot be achieved through language processing alone. Understanding, and making oneself understood, is based on inference and ostension and is achieved through interactive means such as feedback, clarification, adaptation, and repair. The ability to use such mechanisms to advance the conversation is what I call ‘interactional intelligence’ of conversational agents. In the lecture I will describe computational models for feedback processing and language adaptation in artificial conversational agents and show how interactively intelligent conversational coordination is achieved in interaction with human interlocutors.
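
As a deliberately crude sketch of the kind of feedback-driven adaptation at issue (a toy rule-based loop, not the computational models described in the lecture), an agent might rephrase or continue depending on the listener's feedback signal:

    # Toy feedback-adaptive presenter: a rule-based loop that reacts to
    # listener feedback. Illustrative only, not the lecture's actual models.
    def present(chunks, feedback):
        for chunk in chunks:
            print(f"AGENT: {chunk}")
            signal = next(feedback, "ok")        # e.g., from speech/vision input
            if signal == "confused":             # a "huh?" or a puzzled face
                print(f"AGENT: In other words: {chunk}")  # rephrase (here: repeat)
            # "ok" (a nod, an "uh-huh"): understanding assumed, just continue

    present(["Your train leaves at 9:42.", "It departs from platform 3."],
            iter(["confused", "ok"]))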

Suggested reading:

  • Buschmeier, H. & Kopp, S. (2011). Towards conversational agents that attend to and adapt to communicative user feedback. In Proceedings of the 11th International Conference on Intelligent Virtual Agents, pp. 169–182, Reykjavík, Iceland.
  • Buschmeier, H., Baumann, T., Dosch, B., Kopp, S., & Schlangen, D. (2012). Combining incremental language generation and incremental speech synthesis for adaptive information presentation. In Proceedings of the 13th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pp. 295–303, Seoul, South Korea.
  • Buschmeier, H. & Kopp, S. (2014). A dynamic minimal model of the listener for feedback-based dialogue coordination. In SemDial 2014: Proceedings of the 18th Workshop on the Semantics and Pragmatics of Dialogue, pp. 17–25, Edinburgh, UK.
  • Levinson, S. C. (1995). Interactional biases in human thinking. In Goody, E. N., (Ed.), Social Intelligence and Interaction: Expressions and Implications of the Social Bias in Human Intelligence, pp. 221–260. Cambridge University Press, Cambridge, UK.
  • Levinson, S. C. (2006). On the human “Interaction Engine”. In Enfield, N. J. & Levinson, S. C., (Eds.), Roots of Human Sociality: Culture, Cognition and Interaction, pp. 39–69. Berg, Oxford, UK.
  • Wilson, D. & Sperber, D. (2004). Relevance Theory. In Horn, L. R. & Ward, G. (Eds.), Handbook of Pragmatics, pp. 607–632. Blackwell, Oxford, UK.

Serge Thill
Human Cognition in Social HRI

A large part of human cognitive abilities is dedicated to interaction with other humans. Research in social cognition addresses how we go about such interactions. Theory of mind, in particular, seeks to explain how we infer the intentions of other people, or the outcome of our own actions in social situations.

In this lecture, we cover aspects of human cognition such as the above, with a particular focus on their implications for social human-robot interaction: how might humans interact with robots and other artificial agents? How do humans infer the intentions of such agents (and vice versa)? And what are the implications for robot design if we see robots as social partners rather than tools?
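
To make intention inference concrete, here is a textbook-style Bayesian sketch (not material from the lecture; the goals, priors and likelihoods are all made up): given an observed action and a small set of candidate goals, Bayes' rule yields a posterior over what the agent is trying to do:

    # Toy Bayesian intention inference: a posterior over candidate goals after
    # observing one action. Goals, priors and likelihoods are all made up.
    priors = {"hand over cup": 0.5, "drink from cup": 0.3, "move cup aside": 0.2}
    likelihood = {  # P(observed reaching motion | goal), hypothetical values
        "hand over cup": 0.7, "drink from cup": 0.2, "move cup aside": 0.1}

    evidence = sum(priors[g] * likelihood[g] for g in priors)
    posterior = {g: priors[g] * likelihood[g] / evidence for g in priors}
    for goal, p in sorted(posterior.items(), key=lambda kv: -kv[1]):
        print(f"P({goal} | observation) = {p:.2f}")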

Mary Ellen Foster
Face-to-Face Conversation with Socially Intelligent Robots

When humans engage in face-to-face conversation, they use their voices, faces, and bodies together in a rich, multimodal, continuous, interactive process. For a robot to participate fully in this sort of natural, face-to-face conversation in the real world, it must not only be able to understand the multimodal communicative signals of its human partners, but also produce understandable, appropriate, and natural communicative signals in response. I will describe two recent projects that aim to develop robots supporting this sort of conversation: the JAMES socially aware robot bartender, and the MuMMER robot, a socially intelligent robot designed to interact with people in a public shopping mall.

Suggested readings:

  • Bavelas, J., Hutchinson, S., Kenwood, C., & Matheson, D. (1997). Using Face-to-face Dialogue as a Standard for Other Communication Systems. Canadian Journal of Communication, 22(1). https://doi.org/10.22230/cjc.1997v22n1a973
  • Foster, M. E.; Gaschler, A.; and Giuliani, M. Automatically Classifying User Engagement for Dynamic Multi-party Human–Robot Interaction. International Journal of Social Robotics, July 2017. http://doi.org/10.1007/s12369-017-0414-y
  • Foster, M. E.; Alami, R.; Gestranius, O.; Lemon, O.; Niemelä, M.; Odobez, J.; and Pandey, A. K. The MuMMER project: Engaging human-robot interaction in real-world public spaces. In Proceedings of the Eighth International Conference on Social Robotics (ICSR 2016), November 2016. http://doi.org/10.1007/978-3-319-47437-3_74
  • Keizer, S.; Foster, M. E.; Gaschler, A.; Giuliani, M.; Isard, A.; and Lemon, O. Handling uncertain input in multi-user human-robot interaction. In Proceedings of the 23rd IEEE International Symposium on Robot and Human Interactive Communication (IEEE RO-MAN 2014), pages 312–317, Edinburgh, Scotland, August 2014. http://doi.org/10.1109/ROMAN.2014.6926271

Carlos Cifuentes
Human-Robot Interaction Strategies for Assistance and Rehabilitation

Human-robot interaction is being increasingly implemented in the assistance and rehabilitation of patients. On the one hand, robot technology supports neurological patients during physical rehabilitation in order to reduce the physical load on the therapist and to increase the duration and intensity of the patient’s training. On the other hand, socially assistive robots have been applied in therapeutic settings in order to maximize motivation and engagement by means of cognitive interaction. This talk presents some remarks on the design of robotic devices for enhanced physical interaction during gait assistance and rehabilitation, as well as some ongoing research on socially assistive robotics in the context of neurorehabilitation, cardiac rehabilitation and the diagnosis of children with autism spectrum disorder.

Tom Ziemke
Intentions in HRI

Bram Vanderborght
Designing Robots for HRI

Ana Paiva
Groups of humans and robots together

Manuel Lopes
Interactive Learning and Teaching with Robots