Low-shot Object Learning with Mutual Exclusivity Bias

Georgia Institute of Technology, Google DeepMind, University of Illinois Urbana-Champaign

Abstract

This paper introduces Low-shot Object Learning with Mutual Exclusivity Bias (LSME), the first computational framing of mutual exclusivity bias, a phenomenon commonly observed in infants during word learning. We provide a novel dataset, comprehensive baselines, and a state-of-the-art method to enable the ML community to tackle this challenging learning task. The goal of LSME is to analyze an RGB image of a scene containing multiple objects and correctly associate a previously unknown object instance with a provided category label. This association is then used to perform low-shot learning to test category generalization. We provide a data generation pipeline for the LSME problem and conduct a thorough analysis of the factors that contribute to its difficulty. Additionally, we evaluate the performance of multiple baselines, including state-of-the-art foundation models. Finally, we present a baseline approach that outperforms state-of-the-art models in terms of low-shot accuracy.
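To make the task concrete, the mutual exclusivity step described above can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's method: it assumes objects are represented by embedding vectors, and assigns the new label to the object least similar to every known category prototype (all function and variable names here are invented for illustration).

```python
import numpy as np

def assign_novel_label(object_embeddings, known_prototypes, known_labels, novel_label):
    """Schematic mutual-exclusivity assignment (hypothetical, not the paper's method).

    The novel label goes to the object whose best match against the known
    category prototypes is weakest; every other object keeps its most
    similar known label.
    """
    # Similarity of each object to each known category: (n_objects, n_known).
    sims = object_embeddings @ known_prototypes.T
    # How well each object matches its best-fitting known category.
    best_known = sims.max(axis=1)
    # Mutual exclusivity: the least familiar object receives the new word.
    novel_idx = int(best_known.argmin())

    labels = {}
    for i in range(len(object_embeddings)):
        if i == novel_idx:
            labels[i] = novel_label
        else:
            labels[i] = known_labels[int(sims[i].argmax())]
    return labels
```

For example, with two known prototypes ("cup", "ball") and three objects where the third matches neither, the third object would be assigned the novel label while the first two keep their familiar categories.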

Video

Poster

BibTeX

@inproceedings{thai2023lowshot,
  title={Low-shot Object Learning with Mutual Exclusivity Bias},
  author={Ngoc Anh Thai and Ahmad Humayun and Stefan Stojanov and Zixuan Huang and Bikram Boote and James Matthew Rehg},
  booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
  year={2023},
  url={https://openreview.net/forum?id=9lOVNw7guQ}
}