This paper provides an extensive review of self-supervised methods that follow the contrastive approach: embedding augmented versions of the same sample close to each other while pushing apart the embeddings of different samples. Contrastive learning has recently become a dominant component of self-supervised learning in computer vision, natural language processing (NLP), and other domains. It adopts self-defined pseudo-labels as supervision and uses the learned representations for several downstream tasks. Self-supervised learning has gained popularity because it avoids the cost of annotating large-scale datasets.
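The core contrastive objective described above can be illustrated with a minimal InfoNCE-style loss in NumPy. This is a sketch under my own assumptions (function name, temperature value, and array shapes are illustrative, not taken from the reviewed paper): each sample has two augmented views, the view pair forms the positive, and all other embeddings in the batch serve as negatives.

```python
import numpy as np

def contrastive_loss(z1, z2, temperature=0.5):
    """Minimal InfoNCE-style contrastive loss (illustrative sketch).

    z1, z2: (N, D) embeddings of two augmented views of the same
    N samples. The loss pulls each positive pair (z1[i], z2[i])
    together and pushes apart embeddings of different samples.
    """
    # L2-normalize so the dot product is cosine similarity.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    z = np.concatenate([z1, z2], axis=0)            # (2N, D)

    sim = z @ z.T / temperature                     # pairwise similarities
    np.fill_diagonal(sim, -np.inf)                  # exclude self-similarity

    n = z1.shape[0]
    # The positive for row i is the other view of the same sample.
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])

    # Cross-entropy of the positive against all other pairs in the batch.
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()
```

In practice, frameworks compute this on GPU with learned encoder networks and large batches; the sketch only shows how the pull-together/push-apart objective reduces to a cross-entropy over similarity scores.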