Discussing Use Cases for Remote Presence

This week, researchers from the University of Massachusetts Lowell (Katherine M. Tsui, Munjal Desai, and Holly A. Yanco) pre-published their findings on use cases of remote presence / telepresence robots in preparation for the HRI 2011 conference in Switzerland this March. In their article, they bring up a number of issues relevant to the future of remote presence that are worth reading about.

The research setting was the Google campus in Mountain View, California. The team was investigating the use cases in which remote presence systems (RPSes) work most effectively – in particular, meetings and hallway conversations while walking with someone. While the experiments were relatively short and had some interesting dimensions (e.g., users deciding not to return to the RPS if they did not like it), I believe many of the team's observations were dead-on, and I summarize them below:

Likely best use case: a hub-and-spoke configuration where one person (the spoke) interacts from a remote location with the group at the main site (the hub). The configuration usually brings greater pilot satisfaction when the spoke has already spent time with the team in person beforehand, because members of the group recognize them from that physical presence.

Best added value: before and after meetings. As seen in other efforts (like the HP BiReality project), the RPS greatly improved the social before-and-after conversation, since the pilot could now be present during the gathering time and follow someone out of the room if they wanted to continue the conversation afterward.

Issues that hampered effective presence:

  • Poor wifi router switching – when going from one access point to another, the two RPSes had difficulty handing off and would see switching delays of 20 seconds or more (something to be remedied on these systems).
  • Ability to look at the face of the participant while walking – with the head-mounted camera on the VGo or the Anybots QB, keeping the participant in view while walking became an issue. Having an independently articulated “head” or torso would be effective in addressing this.
  • Audio is extremely important – in the testing, echo, feedback, and audio cutouts hampered effective interaction between the pilot and the participant(s).
  • Cognitive load between driving and conversing – since conversing while walking with someone required partial attention to navigation, pilots might benefit from some level of autonomous assistance (e.g., “follow-that-person” – see the first sketch after this list).
  • “Where am I?” – how to provide localization information to the pilot effectively, so that the pilot can determine where they are relative to where they wish to be.
  • Dynamic volume control – knowing how loud the pilot is in the physical space, and managing that, is a problem that can be addressed either through auto-gain (not a particularly good approach IMHO – see the second sketch after this list) or through manual adaptation (e.g., a volume control on the RPS that can be adjusted by the pilot and/or participants). Additionally, from John Markoff’s story, the ability to single out or directionally focus the audio pickup would address the “cocktail party” problem for sound separation.
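
On the “follow-that-person” idea, the sketch below shows what a minimal version of that assistance could look like: a proportional controller that keeps a tracked person centered in the camera view and at a fixed following distance. This is an illustration under assumptions, not any shipping API – the distance/bearing inputs stand in for whatever person tracker a platform might provide, and neither the VGo nor the Anybots QB exposes such an interface today.

```python
# Minimal "follow-that-person" sketch: proportional control on the tracked
# person's distance and bearing. The inputs are hypothetical -- they assume
# some upstream person tracker reports range and angle each control tick.

FOLLOW_DISTANCE_M = 1.5   # comfortable trailing distance
K_LINEAR = 0.8            # distance error -> forward speed gain
K_ANGULAR = 1.2           # bearing error -> turn rate gain
MAX_SPEED_MPS = 0.6       # cap speed at a walking pace

def follow_step(person_distance_m, person_bearing_rad):
    """One control tick: return (forward_speed, turn_rate) drive commands."""
    distance_error = person_distance_m - FOLLOW_DISTANCE_M
    forward_speed = max(-MAX_SPEED_MPS,
                        min(MAX_SPEED_MPS, K_LINEAR * distance_error))
    turn_rate = K_ANGULAR * person_bearing_rad  # steer toward the person
    return forward_speed, turn_rate
```

The control law itself is trivial; the hard part – and the source of the cognitive load the researchers observed – is the perception step of reliably tracking the right person.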

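On volume, my lukewarm view of auto-gain is easy to see in even a toy implementation. The sketch below is a generic automatic gain control (AGC) loop of my own construction, not any vendor's algorithm: the asymmetric attack/release smoothing that keeps speech from blasting or vanishing is the same mechanism that produces AGC's familiar “pumping” artifacts.

```python
import math

class AutoGain:
    """Bare-bones AGC: nudge gain so each audio frame's RMS nears a target."""

    def __init__(self, target_rms=0.1, attack=0.5, release=0.05):
        self.target_rms = target_rms  # desired loudness in the room
        self.attack = attack          # fast gain reduction when too loud
        self.release = release        # slow gain recovery when too quiet
        self.gain = 1.0

    def process(self, samples):
        rms = math.sqrt(sum(s * s for s in samples) / len(samples)) or 1e-9
        desired = self.target_rms / rms
        # Asymmetric smoothing keeps speech intelligible, but it is also
        # why AGC audibly "pumps" as the gain chases the signal level.
        rate = self.attack if desired < self.gain else self.release
        self.gain += rate * (desired - self.gain)
        return [s * self.gain for s in samples]
```
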
This article is the first I have seen that goes into these issues to this degree, and if you are building these systems, I recommend investigating the team’s insights.
