Enactive Human Computer Interfaces for teaching and learning manual tasks
Our hands are fundamental “instruments” that we use naturally in everyday life for a wide variety of tasks. In VR experiences, however, interaction is often deprived of this fundamental component, which is the most natural form of interaction we possess.
In this Research Direction we investigate the main issues of manipulation and assembly in virtual environments, and study training/learning processes using Enactive Interfaces in mixed and virtual reality, where physical embodiment is a necessary condition for learning, with particular regard to manual tasks and abilities.
The transfer of knowledge related to manual procedures is a complex task, involving different cognitive and perceptual aspects. Learning manual tasks is difficult without an instructor, and the acquisition of particularly complex manual skills usually requires long training under the supervision of expert teachers (e.g. surgery, complex maintenance procedures). The transmission of this kind of ‘information’ relies on the physical presence of a master/teacher who can guide the inexperienced subject to gain confidence with tools and practices, and it cannot yet be completely automated.
Enactive interfaces represent a means to enhance the conditions for carrying out manipulative procedures intuitively, and to study the conditions that allow the user to “get his hands in there and act”, leading to an overall enhancement of the feeling of “being there”. In this context we study the interaction of different sensory modalities, such as audio, haptics and vision, and the proper construction of a virtual scene in order to offer the user the possibilities for action available in the environment, and to evoke them as natural (Gibson’s affordances). Evoking affordances in virtual environments is linked to the possibility of displaying appropriately correlated multisensory stimuli to the subject, guaranteeing the capacity for continued action and enaction in the environment: the immersiveness of the experience alone does not necessarily lead to natural interaction if the user cannot perceive what can be done via the interface.
Manipulation and assembly in VR
In industry, the traditional training of workers to use special equipment is normally carried out on part of, or on the full, real equipment, either by the industry itself or by specialized training centres. This brings many drawbacks: the cost of equipment dedicated solely to training is high; machines evolve and the training equipment must change with them; new products or improvements to the production line imply new training; training may have to be outsourced to specialized centres; and so on. Besides this kind of training there are also more specialized domains, such as aviation or surgery, where it is not always possible to use the real equipment or to expose the trainee to all the cases he could face.
For these reasons, computer-based solutions have been considered: they offer lower cost and more adaptability. The simulation of a working environment with computers is achieved by means of Virtual Reality (VR), in which any kind of scenario, tool or equipment can be built. A complete and detailed simulation of some scenarios can nonetheless be very complex to develop, and it is still difficult to produce truly convincing results.
A haptic-feedback system offers the user the possibility to manipulate 3D objects with both hands. The benefit of manipulating objects with the haptic workstation is that it engages the user’s upper-body perception-action feedback loop. For example, in an assembly process the user can manipulate virtual objects and position them.
Figure 1 Dual-hand manipulation in VR through the Haptic Workstation™
The digital mock-up (DM) represents an important aspect of today’s product development process.
Haptic interfaces represent a natural means for interacting with and manipulating the digital mock-up, especially when the models are large and complex.
Virtual Environments using haptic devices have proved to be useful for the assembly/disassembly simulation of mechanical components. To date, most haptic virtual environments are stand-alone.
In the virtual assembly scenario, the user can perceive different forces (weight, collision forces, sliding forces) by means of the haptic platforms.
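The platform-specific force models are not detailed here; as a minimal sketch, the three cues named above could be composed as weight, a penalty-based contact force along the surface normal, and Coulomb-like sliding friction opposing tangential motion (all gains are illustrative, not values from the actual platforms):

```python
import math

def haptic_force(mass, penetration, normal, tangential_vel,
                 stiffness=800.0, friction_mu=0.3, g=9.81):
    """Compose three force cues: weight, penalty contact, sliding friction."""
    weight = (0.0, -mass * g, 0.0)
    # penalty-based collision force proportional to penetration depth
    normal_mag = max(0.0, stiffness * penetration)
    collision = tuple(normal_mag * n for n in normal)
    # Coulomb-like friction opposing the tangential sliding direction
    speed = math.sqrt(sum(v * v for v in tangential_vel))
    if speed > 1e-9:
        sliding = tuple(-friction_mu * normal_mag * v / speed
                        for v in tangential_vel)
    else:
        sliding = (0.0, 0.0, 0.0)
    return tuple(w + c + s for w, c, s in zip(weight, collision, sliding))
```

For a 1 kg part penetrating a floor by 1 cm, the contact force (8 N upward in this sketch) partially cancels its weight.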
Figure 2 Interaction with the digital mock-up at CEIT with the LHIfAM
In this RD we are implementing a collaborative scenario for performing and demonstrating common assembly operations in virtual and mixed reality, such as mounting/dismounting of parts, on the digital mock-up of a complex mechanical machine.
Figure 3 Example of a digital mock-up of a complex machine
The scenario is based on a cross-platform integration of technologies by different partners, establishing a standard for integrating the different modules into a complete demonstrator that can show manipulation capabilities in the field. Haptic, visual and sound modalities are included in the demonstrator. One of the issues addressed is the possibility of carrying out complex manipulative procedures in a simulated environment, together with the study of how the combination of sensory modalities can improve performance in the manipulative actions involved in assembly.
The core of the demonstrator is built around a volumetric rendering engine (VPS) for the detection of collision between geometries and the generation of contact forces, provided by DLR. The module is integrated with a visualization module based on XVR (PERCRO) and with a sound module based on a physical model of the interaction (CEIT). As a long term objective, the simultaneous support for different users will be developed based on a messaging infrastructure for networked communications applications.
Figure 4 The VPS identifies when the cover moved by the user collides with the other parts of the assembly; the colour changes from green to red.
Collaborative haptic assembly
Collaborative Haptic Virtual Environments (CHVEs) are distributed across a number of users via a network, such as the Internet. They pose new challenges to the designer, such as consistency, user-user haptic interaction and scalability. We study how CHVEs can be distributed over a packet-switched network such as the Internet, giving priority to the validation of interactions between objects grasped by users and guaranteeing consistency across the different users’ virtual environments. Results show that the system maintains a consistent and satisfactory response when the network incurs delay or packet jitter.
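The consistency mechanism itself is not detailed in the text; a common sketch for this class of problem, assuming timestamped state updates, is a latest-wins replica with dead-reckoning extrapolation to mask delay and jitter:

```python
class RemoteObjectState:
    """Dead-reckoned replica of a remote user's grasped object (1-D sketch)."""

    def __init__(self):
        self.pos = 0.0
        self.vel = 0.0
        self.stamp = 0.0  # sender timestamp of the last accepted update

    def on_update(self, pos, vel, stamp):
        # discard stale or reordered packets: latest-wins consistency
        if stamp <= self.stamp:
            return
        self.pos, self.vel, self.stamp = pos, vel, stamp

    def extrapolate(self, now):
        # predict the current position from the last update and its velocity,
        # hiding network delay between updates
        return self.pos + self.vel * (now - self.stamp)
```

A reordered packet with an older timestamp is simply dropped, so all replicas converge on the same newest state.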
This set-up has also shown the viability of using a polygonal (triangular) representation of the models for the automatic recognition of constraints (pins and holes). After a common assembly axis is detected, the movement of the model is constrained to satisfy the constraint.
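A minimal sketch of the constrained motion, assuming the pin-hole axis has already been recognised, projects the manipulated part's position onto the axis line so that only translation along the insertion direction remains:

```python
def constrain_to_axis(pos, axis_point, axis_dir):
    """Project a part's position onto the detected assembly axis."""
    norm = sum(d * d for d in axis_dir) ** 0.5
    d = tuple(c / norm for c in axis_dir)          # unit axis direction
    rel = tuple(p - a for p, a in zip(pos, axis_point))
    t = sum(r * c for r, c in zip(rel, d))         # abscissa along the axis
    # closest point on the axis line: lateral motion is discarded
    return tuple(a + t * c for a, c in zip(axis_point, d))
```

Any lateral component of the user's motion is discarded, while sliding along the pin direction passes through unchanged.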
Figure 5 Virtual assembly through Haptic remote training at LABEIN
Explaining nanophysics through physical-based models
Clearly the experience one can have in Virtual Reality can go beyond the barriers and limits of physics, reaching even the level of nanophysics.
Practical work on nanophysics began in 2004 with a few students from the physics department of Joseph Fourier University in Grenoble; we nowadays reach some 150 students in different fields of nanophysics. Each practical session is organized around 2 or 3 students per haptic device. The aim is to discover a nanophysical phenomenon. We designed CORDIS physical-based models for simulating the real-time dynamic behaviour of:
- the approach-retract phenomenon (when moving an AFM probe vertically towards and away from a sample surface)
- the deformation of the tip and the surface shapes while interacting with the sample
- the stick-and-slip effect while scanning a surface with the AFM probe
- the 2D manipulation of a nano-particle (phenomena emerging from the dynamics of the scene: hysteresis, deformation of the surface, topology rendering, long distance dipolar effects).
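The exact CORDIS model parameters are not given here; an illustrative tip-sample force law of the kind behind the approach-retract curve combines a sphere-plane van der Waals attraction with a short-range repulsive wall (all constants below are placeholder values, not those of the actual models):

```python
def tip_sample_force(z, hamaker=1e-19, radius=20e-9, a0=0.3e-9):
    """Force on an AFM tip at distance z from the surface (sketch).

    Negative = attractive (van der Waals pull towards the sample),
    positive = repulsive (contact wall at very short range).
    """
    z = max(z, 0.05e-9)  # avoid the singularity at contact
    attractive = -hamaker * radius / (6.0 * z ** 2)
    repulsive = hamaker * radius * a0 ** 6 / (180.0 * z ** 8)
    return attractive + repulsive
```

Sweeping z downwards and upwards through this law on a compliant cantilever produces the snap-to-contact and pull-off jumps that make the approach-retract curve hysteretic.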
The ERGOS system allows a multisensory coupling between the user and the virtual space at a high rate (10 to 44 kHz) and with high force rendering (up to 200 N), with physical-based modelling: the CORDIS formalism. This kind of environment fits several areas of physics teaching perfectly, such as, in the example chosen here, nanophysics.
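CORDIS builds scenes from networks of mass points and visco-elastic links stepped at a fixed sample rate; a minimal 1-D sketch of one such step (not the actual ERGOS implementation) could look like:

```python
def cordis_step(x, x_prev, force, mass, dt):
    """One explicit (Verlet-style) update of a mass point: the new position
    follows from the two previous positions and the accumulated link forces."""
    return 2.0 * x - x_prev + (dt * dt / mass) * force

def spring_damper_link(x1, x2, v1, v2, k, z):
    """Visco-elastic link force acting on point 1 (the opposite on point 2)."""
    return k * (x2 - x1) + z * (v2 - v1)
```

At a 44 kHz sample rate dt is about 23 microseconds, which is what keeps stiff contact interactions stable under this explicit scheme.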
Learning such complex dynamic phenomena only through curves and formal expressions is quite difficult for first-year university students.
Collisions between nano-objects are adhesive, non-symmetrical interactions represented by two van der Waals laws.
In the following experiment, a multisensory active representation is put in parallel with the real force-feedback manipulation. In this way, students gain access to a sensory knowledge that conveys the true nature of the force transients, sticking, breaking, hysteretic effects and scale effects present in complex physical interactions, leading to a faster understanding of the dynamic shapes of the physical curves.
At the beginning of the practical work, students have direct contact with a sample, in order to get used to the behaviour at that scale and to compare the rendering with the dynamics of the virtual sample. They are often surprised by the force rendering, and they start to ask how the virtual model is computed to give such a realistic feeling of interaction. After a short introduction to physical-based modelling and simulation with vision, sound and force coupling, we explain that the tele-operation link between the force-feedback device (FFD) and the AFM is not only a control action and sensed reaction at the level of the signals: the closed loop transforms the piezo-element of the AFM from a position generator into a single mass, free to move in 1D space. From the point of view of enaction, we are not talking only about a robotic link, but about physical objects, some virtual and some real, which all interact together.

The real physical inertia of the FFD is projected, through the nano-macro scale transfer, into a nano-mass attributed to the piezo. The vertical force acting on the tip and sensed by the photodiode is the same force transmitted by the cantilever and acting on the piezo. Through this closed loop, the students understand that, when they hold the FFD key in 1D, they actually hold the piezo-element as a free mass in the 1D AFM space. They are touching a real nano-object, the piezo, through the closed loop, which transfers their newtons into nano-newtons, and vice versa for position. They can feel that they are in the loop, that their hand/arm/body belongs to the dynamic loop, since the free-mass piezo would travel to infinity if they suddenly released the FFD key with a non-zero speed.
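The nano-macro scale transfer described above can be sketched as a pair of gains in the bilateral loop: one scales hand displacements down to the piezo workspace, the other scales sensed nano-forces up to the device (the gain values below are illustrative, not those of the actual set-up):

```python
class ScaleCoupling:
    """Bilateral macro<->nano coupling sketch for the FFD-AFM loop."""

    def __init__(self, pos_scale=1e-7, force_scale=1e8):
        self.pos_scale = pos_scale      # metres at the hand -> metres at the tip
        self.force_scale = force_scale  # newtons at the tip -> newtons at the hand

    def to_nano(self, hand_position):
        # command side: hand motion drives the piezo position
        return hand_position * self.pos_scale

    def to_macro(self, tip_force):
        # feedback side: cantilever force rendered to the user
        return tip_force * self.force_scale
```

With these gains, a centimetre of hand travel maps to a nanometre at the tip, and a 2 nN cantilever force is felt as 0.2 N at the device.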
Integration of kinesthetic and tactile modalities
We can go further with the integration of the kinesthetic and tactile modalities (technologically, force feedback and tactile stimulation), to explore the possibility of enacting, through tactile force-feedback systems, the feeling of an object in dexterous tasks involving the precision grip of deformable/fragile objects. The main issue here is a coherent simulation of the object’s displacements and deformations.
Figure 7 Full hand interaction with virtual objects
(a) Single hand interaction at PERCRO, (b) Dual hand interaction at EPFL
An additional aspect explored in this RD is the relation between fine tactile information and the visual modality, investigating how humans can appropriate sensation in virtual reality (studying how changes in the qualities of the virtual arm, such as size, shape and mass, make such appropriation possible). What is the role of the haptic stimuli, conveyed through tactors placed on a dataglove, during the interaction of a virtual hand with objects? The representation of the self, and in particular of one’s own hand, during interaction with virtual objects is a relevant feature for humans; it is also important in those applications where the bulk of human limbs constrains task execution, such as the simulation of mounting and maintenance procedures. The influence of these haptic/tactile cues during manipulation and grasping on the sense of agency and ownership is investigated.
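The mapping from contact events to tactor drive is not specified in the text; a simple illustrative choice is a thresholded linear mapping from fingertip contact force to a normalized vibration amplitude:

```python
def tactor_amplitude(contact_force, threshold=0.05, max_force=5.0):
    """Map a fingertip contact force (N) to a tactor amplitude in [0, 1].

    Forces below the threshold produce no vibration; above it the
    amplitude grows linearly and saturates at max_force (illustrative
    values, not those of the actual dataglove).
    """
    if contact_force <= threshold:
        return 0.0
    return min(1.0, (contact_force - threshold) / (max_force - threshold))
```

The dead zone below the threshold avoids driving the tactors on spurious grazing contacts, while saturation keeps the stimulus within the actuator's range.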
Figure 8 Use of vibratory pads for the elicitation of haptic sensation during grasping and manipulation (PERCRO)
Perception is based also on the establishment of valid and lawful sensory-motor linkages, and is directed upon, or is awareness of, distal objects. A better observation of the consequences of one’s own motor actions in a Virtual Environment should lead to an increase in performance.
In fact, learning activities in VR require dynamic models of movement and the sensory-motor coordination of different sensory modalities. But which sensory-motor laws can be effectively reproduced within a simulated scenario? And how can the sense of agency be enhanced in manipulation tasks?
Conditions of enactive learning can be simulated in a variety of tasks for the purpose of studying perceptual mechanisms of human beings.
The Haptic Pool, for instance, allows billiards to be played using a haptic interface. This example, developed by PERCRO, integrates the dynamic simulation of the pool table with haptic feedback using the HapticWeb framework.
The haptic interface is used to impart force and direction to the balls, and also to change the player’s point of view, using the direct rendering of forces. The application is enhanced with audio feedback providing the sound of the collisions of the balls with the cushions and with each other. The user decides the hit direction through the haptic interface; then, by pressing a button on the device, a virtual sliding constraint is applied that allows the cue to move only forward and backward along a line aligned with the hit direction and passing through a point p of the ball, which represents the hit point.
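A minimal sketch of that sliding constraint, assuming the hit point p and the hit direction are frozen when the button is pressed, projects the device position onto the aiming line (the clamp preventing the tip passing beyond p is an added assumption):

```python
def constrained_cue_tip(device_pos, p, hit_dir):
    """Constrain the cue tip to the aiming line through hit point p.

    Returns the constrained tip position and the pull-back distance,
    which can drive the strength of the shot.
    """
    norm = sum(c * c for c in hit_dir) ** 0.5
    d = tuple(c / norm for c in hit_dir)  # unit aiming direction
    # abscissa of the device along the line (0 at the hit point)
    t = sum((q - a) * c for q, a, c in zip(device_pos, p, d))
    t = min(t, 0.0)  # assumed clamp: the tip cannot go past the hit point
    tip = tuple(a + t * c for a, c in zip(p, d))
    return tip, -t
```

Lateral device motion is absorbed by the projection (and can be rendered back as a constraint force), while the returned pull-back distance grows as the player draws the cue away from the ball.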
Figure 9 Playing pool in a virtual simulated scenario