ViLBERT: Pretraining Task-Agnostic Visiolinguistic Representations for Vision-and-Language Tasks
2020-03-12 23:10:53
Paper: NeurIPS 2019
Code: https://github.com/facebookresearch/vilbert-multi-task
Source: https://www.cnblogs.com/wangxiaocvpr/p/12483565.html

1. Background and Motivation:
==