
Into AI Valley: Linguistic Pretraining Allows "VirTex" to Learn Visual Features Using Fewer Images

By Bbenzon @bbenzon
Language is more "semantically dense" than other training signals, leading to more data-efficient learning than either traditional classification pretraining or recent unsupervised pretraining (e.g. MoCo / PIRL) pic.twitter.com/F4KMunhlhL

— Justin Johnson (@jcjohnss), June 12, 2020
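For context, VirTex pretrains a convolutional visual backbone by training it jointly with a transformer that generates captions for each image; the captioning head is then discarded and the backbone transferred to downstream vision tasks. Below is a minimal, hypothetical PyTorch sketch of that idea, not the authors' actual code: the class name, vocabulary size, decoder dimensions, and the single forward-only decoder are illustrative assumptions (the real model captions in both directions).

```python
# A minimal, simplified sketch of VirTex-style pretraining in PyTorch.
# Assumptions (not from the tweet or the released code): class name,
# vocab size, decoder width, and a forward-only caption decoder.
import torch
import torch.nn as nn
from torchvision.models import resnet50


class CaptioningPretrainer(nn.Module):
    """Visual backbone + caption decoder; after pretraining, only the
    backbone is kept and transferred to downstream vision tasks."""

    def __init__(self, vocab_size=10000, d_model=512, nhead=8,
                 num_layers=4, max_len=64):
        super().__init__()
        # Visual backbone: ResNet-50 up to its final 7x7 feature grid.
        cnn = resnet50(weights=None)
        self.backbone = nn.Sequential(*list(cnn.children())[:-2])
        self.proj = nn.Linear(2048, d_model)  # match decoder width

        # Caption decoder: token embeddings + transformer layers that
        # cross-attend to the image features.
        self.embed = nn.Embedding(vocab_size, d_model)
        self.pos = nn.Parameter(torch.zeros(1, max_len, d_model))
        layer = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, images, tokens):
        feats = self.backbone(images)                          # (B, 2048, 7, 7)
        memory = self.proj(feats.flatten(2).transpose(1, 2))   # (B, 49, d_model)

        # Causal mask so each position only attends to earlier caption tokens.
        T = tokens.size(1)
        mask = torch.triu(torch.full((T, T), float("-inf")), diagonal=1)
        tgt = self.embed(tokens) + self.pos[:, :T]
        out = self.decoder(tgt, memory, tgt_mask=mask)
        return self.lm_head(out)                               # (B, T, vocab_size)


if __name__ == "__main__":
    model = CaptioningPretrainer()
    images = torch.randn(2, 3, 224, 224)       # toy image batch
    tokens = torch.randint(0, 10000, (2, 12))  # toy caption token ids
    logits = model(images, tokens[:, :-1])     # predict each next token
    loss = nn.functional.cross_entropy(
        logits.reshape(-1, logits.size(-1)), tokens[:, 1:].reshape(-1))
    loss.backward()  # one pretraining step; model.backbone then transfers
    print(float(loss))
```

The tweet's point maps onto this setup directly: each caption supervises many visual concepts at once, so the backbone can reach a given feature quality from fewer images than one-label-per-image classification or contrastive objectives like MoCo and PIRL.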

Hmmmm.... Are things getting interesting?
