BaseTransformers: Attention over base data-points for One Shot Learning
Published at the British Machine Vision Conference (BMVC), 2022
Recommended citation:

@inproceedings{Maniparambil_2022_BMVC,
  author    = {Mayug Maniparambil and Kevin McGuinness and Noel O'Connor},
  title     = {BaseTransformers: Attention over base data-points for One Shot Learning},
  booktitle = {33rd British Machine Vision Conference 2022, {BMVC} 2022, London, UK, November 21-24, 2022},
  publisher = {{BMVA} Press},
  year      = {2022},
  url       = {https://bmvc2022.mpi-inf.mpg.de/0482.pdf}
}

Paper page: https://bmvc2022.mpi-inf.mpg.de/482/
Abstract
Few-shot classification aims to recognize novel categories from only a limited number of samples per category. Most current few-shot methods use a base dataset rich in labeled examples to train an encoder, which is then used to obtain representations of the support instances of novel classes. Since the test instances come from a distribution different from the base distribution, their feature representations are of poor quality, degrading performance. In this paper we propose to use the well-trained feature representations of the base dataset that are closest to each support instance to improve that instance's representation at meta-test time. To this end, we propose BaseTransformers, which attends to the most relevant regions of the base dataset feature space and improves support instance representations. Experiments on three benchmark datasets show that our method works well with several backbones and achieves state-of-the-art results in the inductive one-shot setting.
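The core idea can be illustrated with a minimal sketch: the spatial features of a support instance attend, via standard cross-attention, to feature vectors drawn from its closest base data-points, and the attention output refines the support representation. This is an illustrative sketch rather than the paper's implementation; the names (`BaseAttention`, the residual update) and all shapes and hyper-parameters are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BaseAttention(nn.Module):
    """Illustrative cross-attention from support features to base features.

    A minimal sketch of the abstract's idea; the exact update rule and
    dimensions are assumptions, not the authors' implementation.
    """

    def __init__(self, dim: int):
        super().__init__()
        self.to_q = nn.Linear(dim, dim, bias=False)  # queries from support features
        self.to_k = nn.Linear(dim, dim, bias=False)  # keys from base features
        self.to_v = nn.Linear(dim, dim, bias=False)  # values from base features
        self.scale = dim ** -0.5

    def forward(self, support: torch.Tensor, base: torch.Tensor) -> torch.Tensor:
        # support: (n_support_regions, dim) spatial features of one support image
        # base:    (n_base_regions, dim) features of the closest base data-points
        q = self.to_q(support)
        k = self.to_k(base)
        v = self.to_v(base)
        attn = F.softmax(q @ k.t() * self.scale, dim=-1)  # (n_support, n_base)
        return support + attn @ v  # residual refinement (assumed update rule)

# Usage: refine a one-shot support feature map with regions from base classes.
dim = 640                              # e.g. ResNet-12 feature dimension (assumed)
support_feats = torch.randn(25, dim)   # a 5x5 spatial grid, flattened
base_feats = torch.randn(125, dim)     # regions from the 5 closest base classes
refined = BaseAttention(dim)(support_feats, base_feats)
print(refined.shape)                   # torch.Size([25, 640])
```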
Project website: https://github.com/mayug/BaseTransformers