Robot tele-operation control simulator (2012–2015)
Assistive robots must be designed to work with people with a wide variety of illnesses and impairments. It is therefore necessary to test assistive robot prototypes with a wide range of users across a wide range of settings and application domains. There are many practical barriers to doing this. Real robots (especially large personal service robots) can be unreliable, difficult to obtain, and have short run-times between battery changes. The environments in which the robots need to be tested may be difficult to realise, and setting them up to capture the necessary experimental data may be impractical (e.g. installing a 3D position-capture system in a house may be very inconvenient for the owners!). Setting up tests in dynamically changing environments, while ensuring that they are easily repeatable, is a challenge in itself. These are just some of the challenges of conducting this research. Furthermore, getting large numbers of people to a dedicated test site is often not feasible.
In response to the practical challenges of performing experiments with real robots, we developed a virtual reality simulator for applications involving robot tele-operation. In particular, the simulator was designed to emulate some of the challenges that would be encountered when controlling a personal service robot in a home environment.
The simulation was designed to replicate a realistic robot control scenario that involved both remote (robot leaves the field of view of the operator) and proximal (robot stays in the field of view of the operator) interactions. In this way, the operator is required to switch between controlling the robot using visual information within the operator's field of view and using an on-board sensor feed from the robot. The simulation was designed to replicate the first-person experience of a statically positioned human operator. When using the simulation, participants experience the apartment at eye level, and the system is programmed to automatically track the robot as it moves throughout the apartment. A second view, from a camera positioned on-board the robot, is also available during the simulation. This view is continuously broadcast in the upper right corner of the simulation window, and is important for instances when the robot is remotely located and has left the line of sight of the operator. The robot's camera feed updates at the same rate as the simulation (30 Hz) and has no noticeable latency, keeping the feed within the update-rate and latency bounds needed to avoid operator performance dropping due to sensory degradation.
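The per-frame behaviour described above — a fixed operator camera that auto-tracks the robot, plus an on-board feed composited picture-in-picture in the upper right corner — can be sketched as follows. This is a minimal illustrative sketch, not the simulator's actual code; the function names, the view dictionaries, and the 2D yaw-only tracking are all assumptions made for clarity.

```python
import math

def track_robot(viewer_pos, robot_pos):
    # Yaw angle (radians) the fixed operator camera must face to keep
    # the robot centred in the main view (2D ground-plane simplification).
    dx = robot_pos[0] - viewer_pos[0]
    dy = robot_pos[1] - viewer_pos[1]
    return math.atan2(dy, dx)

def render_frame(robot_pos, viewer_pos=(0.0, 0.0)):
    # One frame of a 30 Hz loop: the main view auto-tracks the robot,
    # and the on-board feed is composited picture-in-picture, top right.
    yaw = track_robot(viewer_pos, robot_pos)
    main_view = {"camera": "operator", "yaw": yaw}
    pip_view = {"camera": "robot_onboard", "corner": "upper_right"}
    return main_view, pip_view
```

In a real engine the same structure would run inside the render loop, with the on-board feed rendered to a texture each frame so that it updates at the full 30 Hz simulation rate.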
It was determined that a 'fetch and deliver' task would be a contextually appropriate scenario to emulate in the simulation. The robot implemented in the simulator took the form of a humanoid robot, designed loosely to resemble the Aerobot personal service robot. The operator, positioned in a fixed location in the house, was required to control the robot as it retrieved an object and delivered it to a specified location. The robot was then required to exit the room and return to a docking station. The requirements of the simulation can thus be broken into three tasks:
Task 1: Navigate the robot from its initial starting position in the living room into the kitchen where it locates and collects a soft-drink.
Task 2: Navigate from the kitchen and deliver the soft-drink to a person in the bedroom.
Task 3: Navigate from the bedroom and return to a docking station located in the hallway.
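The three tasks above form a strict sequence, which could be modelled as a simple state machine that advances when each task is completed. The sketch below is hypothetical: the class, task names, and waypoint labels are illustrative and not taken from the actual simulator.

```python
# Illustrative task sequence: (name, start location, goal location).
TASKS = [
    ("collect_drink", "living_room", "kitchen"),
    ("deliver_drink", "kitchen", "bedroom"),
    ("return_to_dock", "bedroom", "hallway_dock"),
]

class FetchAndDeliverTrial:
    """Tracks progress through the fetch-and-deliver task sequence."""

    def __init__(self):
        self.index = 0

    @property
    def current_task(self):
        # The active task, or None once the trial is finished.
        return TASKS[self.index] if self.index < len(TASKS) else None

    def complete_current(self):
        # Advance to the next task; returns True while tasks remain.
        self.index += 1
        return self.index < len(TASKS)
```

Structuring the trial this way would also make it straightforward to log per-task completion times for the operator-performance studies described below.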
The simulator has been used for both evaluation and design purposes. In evaluation, it has been used to study the performance of human operators engaged in manual robot control tasks. It has also served as a design tool, providing feedback on the ergonomics of bespoke controller embodiments being developed internally by the group.