Multimodal Multi-User Surface Recognition With the Kernel Two-Sample Test



Behnam Khojasteh, Friedrich Solowjow, Sebastian Trimpe, Katherine J. Kuchenbecker

Overview of the proposed method (© Behnam Khojasteh): a well-performing discriminator retrieves the signature of an unlabeled inspection trial through comparison with a library of previously extracted class manifolds.


Machine learning and deep learning have been used extensively to classify physical surfaces through images and time-series contact data. However, these methods rely on human expertise and entail the time-consuming processes of data preparation and parameter tuning. To overcome these challenges, we propose an easily implemented framework that can directly handle heterogeneous data sources for classification tasks. Our data-versus-data approach automatically quantifies distinctive differences in distributions in a high-dimensional space via kernel two-sample testing between two sets extracted from multimodal data (e.g., images, sounds, haptic signals). We demonstrate the effectiveness of our technique by benchmarking against expertly engineered classifiers for visual-audio-haptic surface recognition, chosen for the industrial relevance, difficulty, and competitive baselines of this application; ablation studies confirm the utility of key components of our pipeline. As shown in our open-source code, we achieve 97.2% accuracy on a standard multi-user dataset with 108 surface classes, outperforming the state-of-the-art machine-learning algorithm by 6% on a more difficult version of the task. The fact that our classifier obtains this performance with minimal data processing in the standard algorithm setting reinforces the powerful nature of kernel methods for learning to recognize complex patterns.
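To illustrate the data-versus-data idea, the following sketch shows how a kernel two-sample statistic (here the maximum mean discrepancy, MMD, with an RBF kernel) could drive a nearest-class decision: an unlabeled inspection trial is compared against each class's library of samples, and the class with the smallest discrepancy wins. This is a minimal, hypothetical illustration of the general concept, not the paper's actual pipeline; the function names, kernel bandwidth, and biased V-statistic estimator are all simplifying assumptions.

```python
import numpy as np

def mmd2(X, Y, gamma=1.0):
    """Biased (V-statistic) estimate of squared MMD between sample sets
    X and Y, using an RBF kernel k(a, b) = exp(-gamma * ||a - b||^2)."""
    def gram(A, B):
        # Pairwise squared Euclidean distances via broadcasting.
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-gamma * d2)
    return gram(X, X).mean() + gram(Y, Y).mean() - 2.0 * gram(X, Y).mean()

def classify(trial, library, gamma=1.0):
    """Assign the unlabeled trial (n x d array) to the class in `library`
    (dict: label -> m x d array) with the smallest MMD^2 to the trial."""
    return min(library, key=lambda label: mmd2(trial, library[label], gamma))

# Toy usage: two well-separated classes in 2-D; the trial is drawn near class "b".
rng = np.random.default_rng(0)
library = {
    "a": rng.normal(0.0, 0.1, size=(20, 2)),
    "b": rng.normal(5.0, 0.1, size=(20, 2)),
}
trial = rng.normal(5.0, 0.1, size=(10, 2))
print(classify(trial, library))  # prints "b"
```

Because the MMD compares full empirical distributions rather than hand-crafted feature vectors, the same discrepancy score can in principle be applied to any modality that yields sets of samples, which is the property the abstract highlights.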

Published in IEEE Transactions on Automation Science and Engineering.