The use of robots in search and rescue is attracting growing interest, but specialized piloting skills are required to ensure efficient deployments in real missions. To address this problem, more intuitive control interfaces need to be developed. Moreover, to ensure high performance during cognitively demanding tasks, such as search and rescue missions, the skills of humans and robots must be combined: humans can adapt to new situations, while robots can optimize the execution of repetitive tasks. In this regard, novel shared-control techniques have been developed to adapt the human-robot interaction, but dynamically adapting this interaction requires information about the human state, which is currently missing.
To address these problems, I first developed a novel wearable system that enhances the control of drones by providing a more intuitive flying experience. As shown in Chapter 2, this wearable system tracks upper-body movements and translates them into commands for a drone. The system has been tested with a simulator and demonstrated for the teleoperation of a real drone. Moreover, to enable long-lasting operations, I proposed a method that drastically reduces communication and, consequently, improves energy efficiency by 11.9%.
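To illustrate the kind of body-to-drone mapping described above, the following minimal sketch translates torso pitch, roll, and rotation measured by a wearable IMU into velocity commands for a drone. The gains, dead-zone, and function names are illustrative assumptions, not the actual wearable-controller implementation.

```python
# Minimal sketch: mapping upper-body orientation to drone commands.
# Assumes torso pitch/roll angles and yaw rate (radians, rad/s) come from
# a wearable IMU; gains and dead-zone are hypothetical, not thesis values.

from dataclasses import dataclass


@dataclass
class DroneCommand:
    forward_speed: float  # m/s
    lateral_speed: float  # m/s
    yaw_rate: float       # rad/s


def body_to_command(pitch: float, roll: float, yaw_rate: float,
                    gain_fwd: float = 2.0, gain_lat: float = 1.5,
                    dead_zone: float = 0.05) -> DroneCommand:
    """Translate torso attitude into velocity commands.

    Leaning forward/backward controls forward speed, leaning sideways
    controls lateral speed, and torso rotation controls yaw rate.
    A small dead-zone suppresses unintentional drift when standing still.
    """
    def _dz(x: float) -> float:
        return 0.0 if abs(x) < dead_zone else x

    return DroneCommand(
        forward_speed=gain_fwd * _dz(pitch),
        lateral_speed=gain_lat * _dz(roll),
        yaw_rate=_dz(yaw_rate),
    )


if __name__ == "__main__":
    # Example: a slight forward lean combined with a gentle turn.
    print(body_to_command(pitch=0.2, roll=0.0, yaw_rate=0.1))
```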
Second, in Chapter 3, I presented a machine-learning approach for monitoring the cognitive workload level of a drone operator involved in search and rescue missions. My model combines features extracted from physiological signals acquired non-invasively, such as respiratory activity, electrocardiogram, photoplethysmogram, and skin temperature. To reduce both inter-subject and inter-day variability of the signals, I explored different feature normalization techniques. Moreover, I adjusted the learning method for support vector machines to allow subject-specific optimizations. On a test set acquired from 34 volunteers controlling a drone simulator, the proposed model distinguished between low and high cognitive workloads with an average accuracy of 87.3% when using a traditional controller and 91.2% when using the proposed FlyJacket design.
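As an illustration of this pipeline, the sketch below normalizes physiological features per subject and trains a support vector machine to separate low from high workload. The feature layout, the per-subject z-score normalization, and the hyper-parameters are assumptions for demonstration, not the exact method reported in Chapter 3.

```python
# Minimal sketch of the workload-classification idea: per-subject feature
# normalization followed by an SVM. Feature names, the normalization choice,
# and hyper-parameters are illustrative assumptions.

import numpy as np
from sklearn.svm import SVC


def normalize_per_subject(X: np.ndarray, subject_ids: np.ndarray) -> np.ndarray:
    """Z-score each feature within each subject to reduce the
    inter-subject and inter-day variability of the signals."""
    Xn = np.empty_like(X, dtype=float)
    for sid in np.unique(subject_ids):
        mask = subject_ids == sid
        mu = X[mask].mean(axis=0)
        sigma = X[mask].std(axis=0) + 1e-8
        Xn[mask] = (X[mask] - mu) / sigma
    return Xn


# Toy data: rows stand in for feature vectors extracted from respiration,
# ECG, PPG, and skin temperature; labels are low (0) vs. high (1) workload.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
y = rng.integers(0, 2, size=200)
subjects = rng.integers(0, 10, size=200)

X_norm = normalize_per_subject(X, subjects)
clf = SVC(kernel="rbf", C=1.0).fit(X_norm, y)
print("training accuracy:", clf.score(X_norm, y))
```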
Third, in Chapter 4, I presented the integration of the cognitive workload monitoring method into a new single wearable embedded system that also incorporates the proposed drone controller design. On the hardware side, it includes multi-channel physiological signal acquisition and a low-power processing platform suited for cognitive workload monitoring. On the software side, the proposed system includes novel energy-aware bio-signal processing and embedded machine-learning methods. Moreover, to exploit the trade-off between the required accuracy and the available energy of the system, I presented a new application of a scalable machine-learning method with different power-saving levels. Results showed that the proposed self-aware approach increases the battery lifetime by 78% without significantly affecting the classification accuracy.
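The following sketch illustrates the idea of a scalable, self-aware classifier: the node selects a cheaper model variant as the battery drains, trading a small amount of accuracy for a longer lifetime. The battery thresholds, feature subsets, and model variants are hypothetical, not the design evaluated in Chapter 4.

```python
# Minimal sketch of a self-aware, scalable classifier with power-saving
# levels. Thresholds, feature subsets, and kernels are illustrative only.

import numpy as np
from sklearn.svm import SVC


def select_power_level(battery_fraction: float) -> int:
    """Map the remaining battery to a power-saving level (0 = most accurate)."""
    if battery_fraction > 0.6:
        return 0   # full feature set, RBF kernel
    if battery_fraction > 0.3:
        return 1   # reduced feature set
    return 2       # minimal feature set, linear kernel


class ScalableWorkloadMonitor:
    """Keeps one classifier per power-saving level and dispatches at runtime."""

    def __init__(self, feature_subsets, models):
        self.feature_subsets = feature_subsets  # column indices per level
        self.models = models                    # fitted classifier per level

    def predict(self, x: np.ndarray, battery_fraction: float) -> int:
        level = select_power_level(battery_fraction)
        cols = self.feature_subsets[level]
        return int(self.models[level].predict(x[cols].reshape(1, -1))[0])


# Toy training data standing in for normalized physiological features.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(200, 8)), rng.integers(0, 2, size=200)

subsets = [np.arange(8), np.arange(4), np.arange(2)]
models = [SVC(kernel="rbf").fit(X[:, s], y) for s in subsets[:2]]
models.append(SVC(kernel="linear").fit(X[:, subsets[2]], y))

monitor = ScalableWorkloadMonitor(subsets, models)
print(monitor.predict(X[0], battery_fraction=0.2))  # uses the cheapest model
```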
The proposed system, comprising a drone controller with an integrated unit for cognitive workload monitoring, lays the foundations for the development of new-generation human-robot interfaces. With information about the human state, we can close the loop of traditional shared-control techniques, which will then be able to dynamically adapt the level of interaction with semi-autonomous machines based on the operator's needs.
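As a conceptual illustration of such a closed loop, the sketch below blends the operator's command with an autonomous command according to the estimated workload. The linear blending rule is an assumption for illustration only, not a result from the thesis.

```python
# Minimal sketch of closing the shared-control loop: the blending factor
# between the operator's command and the autonomous controller is driven
# by the estimated cognitive workload. The rule below is hypothetical.

def blend_commands(human_cmd: float, auto_cmd: float, workload: float) -> float:
    """Give the autonomous controller more authority when the estimated
    workload (in [0, 1]) is high, and the human more authority when low."""
    alpha = min(max(workload, 0.0), 1.0)   # autonomy share
    return (1.0 - alpha) * human_cmd + alpha * auto_cmd


# Example: a highly loaded operator yields mostly the autonomous command.
print(blend_commands(human_cmd=1.0, auto_cmd=0.2, workload=0.8))
```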