GPT-3 Based News Summaries

By Bbenzon @bbenzon

We collect human preference annotations for news summaries generated by current SOTA and zero-shot GPT-3 models. For multiple settings (generic + keyword) and datasets (CNN + BBC), GPT-3 summaries beat prior fine-tuned models!
[2/6] pic.twitter.com/BiFHDh0nZa

— Tanya Goyal (@tanyaagoyal) September 27, 2022
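The tweet above contrasts two zero-shot settings, generic and keyword-based summarization. As a minimal sketch of what such prompts might look like (the templates, sentence budget, and sample article below are illustrative assumptions, not the authors' exact prompts):

```python
# Hypothetical zero-shot prompt templates for the two settings the thread
# mentions. The wording is an assumption for illustration only.

def generic_prompt(article: str) -> str:
    """Prompt asking a model for a generic summary of an article."""
    return f"Article: {article}\n\nSummarize the above article in three sentences."

def keyword_prompt(article: str, keyword: str) -> str:
    """Prompt asking for a summary focused on a given keyword."""
    return (
        f"Article: {article}\n\n"
        f'Summarize the above article, focusing on "{keyword}", '
        f"in three sentences."
    )

# Illustrative article text (invented for the example).
article = "The city council approved a new transit budget on Monday..."
print(generic_prompt(article))
print(keyword_prompt(article, "transit budget"))
```

Either prompt would then be sent as-is to a large language model; no fine-tuning or task-specific training data is involved, which is what "zero-shot" means here.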

This also means we can now break away from noisy benchmark datasets, e.g. XSum, that (we observe) cannot produce systems for real settings. Instead, actual use cases and not data availability can now dictate future research directions (task goals, domains, etc.)
[4/6]

— Tanya Goyal (@tanyaagoyal) September 27, 2022

Browse examples of generated summaries and human annotations at: https://t.co/vcSeVl5Zwj
[6/6]

— Tanya Goyal (@tanyaagoyal) September 27, 2022
