Co-training Transformer with Videos and Images Improves Action Recognition


Action recognition has become a major focus area for the research community because many applications can benefit from improved modeling, such as video retrieval, video captioning, video question-answering, and so on. Transformer-based approaches have recently demonstrated state-of-the-art performance on several benchmarks. While Transformer models require data to learn better visual priors compared to ConvNets, action recognition datasets are relatively small in scale. Large Transformer models are therefore typically first trained on image datasets and later fine-tuned on a target action recognition dataset.

While the current pre-training and fine-tuning action recognition paradigm is straightforward and yields strong empirical results, it may be overly restrictive for building general-purpose action recognition models. Compared to a dataset like ImageNet that covers a large range of object recognition classes, action recognition datasets like Kinetics and Something-Something-v2 (SSv2) pertain to limited topics. For example, Kinetics includes object-centric actions like "cliff diving" and "mountain climbing", whereas SSv2 contains object-agnostic activities like "pretending to put something onto something else". As a result, we observed poor performance when adapting an action recognition model that has been fine-tuned on one dataset to another, disparate dataset.

Differences in objects and video backgrounds among datasets further complicate learning a general-purpose action recognition classification model. Despite the fact that video datasets may be increasing in size, prior work suggests that significant data augmentation and regularization are necessary to achieve strong performance. This latter finding may indicate that the model quickly overfits on the target dataset and, as a consequence, loses its ability to generalize to other action recognition tasks.

In "Co-training Transformer with Videos and Images Improves Action Recognition", we propose a training strategy, named CoVeR, that leverages both image and video data to jointly learn a single general-purpose action recognition model. Our approach is supported by two main findings. First, disparate video datasets cover a diverse set of activities, and training them together in a single model could produce a model that excels at a wide range of actions. Second, video is a perfect source for learning motion information, while images are great for exploiting structural appearance; leveraging a diverse distribution of image examples may help build robust spatial representations in video models. Concretely, CoVeR first pre-trains the model on an image dataset, and during fine-tuning, it simultaneously trains a single model on multiple video and image datasets to build robust spatial and temporal representations for a general-purpose video understanding model.
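
The post does not include code, and it does not spell out how image batches enter a video backbone during the joint fine-tuning stage. A common convention, assumed here purely for illustration, is to treat each image as a one-frame clip so that it can flow through the same model as the video data:

```python
# A minimal sketch, NOT official CoVeR code. Assumption: images are fed to the
# shared video backbone as single-frame clips so one model can consume both
# image and video batches during joint fine-tuning.
import torch


def as_clip(images: torch.Tensor) -> torch.Tensor:
    """Insert a singleton time dimension: (B, C, H, W) -> (B, 1, C, H, W)."""
    return images.unsqueeze(1)


videos = torch.randn(2, 8, 3, 224, 224)   # a batch of 8-frame clips
images = torch.randn(2, 3, 224, 224)      # a batch of plain images, e.g. from an image dataset
print(as_clip(images).shape)              # torch.Size([2, 1, 3, 224, 224])
```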

Architecture and Training Strategy

We applied the CoVeR approach to the recently proposed spatial-temporal video transformer, called TimeSFormer, which contains 24 layers of transformer blocks. Each block contains one temporal attention, one spatial attention, and one multilayer perceptron (MLP) layer. To learn from multiple video and image datasets, we adopt a multi-task learning paradigm and equip the action recognition model with multiple classification heads. We pre-train all non-temporal parameters on the large-scale JFT dataset. During fine-tuning, a batch of videos and images is sampled from multiple video and image datasets. The sampling rate is proportional to the size of the datasets. Each sample within the batch is processed by the TimeSFormer and then routed to the corresponding classifier to get the predictions.
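
As a rough, hedged sketch of the pieces just described (not the released TimeSFormer or CoVeR code), the toy PyTorch program below stacks a few blocks with divided temporal and spatial attention plus an MLP, attaches one classification head per fine-tuning dataset, and draws each mini-batch from a dataset chosen with probability proportional to its size. The dimensions, depth, and random "datasets" are illustrative assumptions.

```python
# Toy sketch of the training setup described above, NOT the released
# TimeSFormer/CoVeR code. Only the block structure (temporal attention,
# spatial attention, MLP), the per-dataset heads, and the size-proportional
# sampling mirror the description in the text.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DividedSpaceTimeBlock(nn.Module):
    """One transformer block: temporal attention, spatial attention, then an MLP."""

    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        self.temporal_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.spatial_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.norm3 = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, s, d = x.shape                      # (batch, frames, patches, dim)
        # Temporal attention: every spatial patch attends across frames.
        xt = self.norm1(x).permute(0, 2, 1, 3).reshape(b * s, t, d)
        xt, _ = self.temporal_attn(xt, xt, xt)
        x = x + xt.reshape(b, s, t, d).permute(0, 2, 1, 3)
        # Spatial attention: every frame attends across its patches.
        xs = self.norm2(x).reshape(b * t, s, d)
        xs, _ = self.spatial_attn(xs, xs, xs)
        x = x + xs.reshape(b, t, s, d)
        return x + self.mlp(self.norm3(x))        # MLP with a residual connection


class CoTrainedClassifier(nn.Module):
    """A shared backbone with one classification head per fine-tuning dataset."""

    def __init__(self, num_classes: dict, dim: int = 64, depth: int = 2):  # real model: 24 blocks
        super().__init__()
        self.blocks = nn.ModuleList([DividedSpaceTimeBlock(dim) for _ in range(depth)])
        self.heads = nn.ModuleDict({name: nn.Linear(dim, n) for name, n in num_classes.items()})

    def forward(self, tokens: torch.Tensor, dataset: str) -> torch.Tensor:
        for block in self.blocks:
            tokens = block(tokens)
        pooled = tokens.mean(dim=(1, 2))          # average over frames and patches
        return self.heads[dataset](pooled)        # route to that dataset's classifier


# Random token sequences standing in for pre-tokenized video/image datasets.
datasets = {
    "k400": (torch.randn(240, 8, 16, 64), torch.randint(0, 400, (240,))),
    "ssv2": (torch.randn(160, 8, 16, 64), torch.randint(0, 174, (160,))),
    "imnet": (torch.randn(400, 1, 16, 64), torch.randint(0, 1000, (400,))),
}
model = CoTrainedClassifier({"k400": 400, "ssv2": 174, "imnet": 1000})
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

sizes = torch.tensor([float(len(y)) for _, y in datasets.values()])
probs = sizes / sizes.sum()                       # sampling rate proportional to dataset size

for step in range(3):                             # a few illustrative fine-tuning steps
    name = list(datasets)[int(torch.multinomial(probs, 1))]
    tokens, labels = datasets[name]
    idx = torch.randint(0, len(labels), (8,))     # draw a mini-batch from the chosen dataset
    loss = F.cross_entropy(model(tokens[idx], name), labels[idx])
    opt.zero_grad()
    loss.backward()
    opt.step()
    print(step, name, round(loss.item(), 3))
```

In the actual setup, the backbone is 24 blocks deep, the heads correspond to the real fine-tuning datasets, and the patch tokenization and sampling details follow the paper rather than this toy loop.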

Compared with the standard training strategy, CoVeR has two advantages. First, because the model is directly trained on multiple datasets, the learned video representations are more general and can be directly evaluated on those datasets without additional fine-tuning. Second, Transformer-based models may easily overfit to a smaller video distribution, which degrades the generalization of the learned representations. Training on multiple datasets mitigates this issue by reducing the risk of overfitting.

CoVeR adopts a multi-task learning strategy trained on multiple datasets, each with its own classifier.

Benchmark Results

We evaluate the CoVeR approach trained on the Kinetics-400 (K400), Kinetics-600 (K600), Kinetics-700 (K700), Something-Something-v2 (SSv2), and Moments-in-Time (MiT) datasets. Compared with other approaches, such as TimeSFormer, Video SwinTransformer, TokenLearner, ViViT, MoViNet, VATT, VidTr, and OmniSource, CoVeR establishes a new state of the art on multiple datasets (shown below). Unlike previous approaches that train a dedicated model for a single dataset, a model trained with CoVeR can be directly applied to multiple datasets without further fine-tuning.

Model              Pretrain                Finetune               K400 Accuracy
VATT               AudioSet+Videos         K400                   82.1
OmniSource         IG-Kinetics-65M         K400                   83.6
ViViT              JFT-300M                K400                   85.4
Video SwinTrans    ImageNet21K+external    K400                   86.8
CoVeR              JFT-3B                  K400+SSv2+MiT+ImNet    87.2
Accuracy comparison on the Kinetics-400 (K400) dataset.
Model              Pretrain                Finetune               SSv2 Accuracy
TimeSFormer        ImageNet21K             SSv2                   62.4
VidTr              ImageNet21K             SSv2                   63.0
ViViT              ImageNet21K             SSv2                   65.9
Video SwinTrans    ImageNet21K+external    SSv2                   69.6
CoVeR              JFT-3B                  K400+SSv2+MiT+ImNet    70.9
Accuracy comparison on the Something-Something-v2 (SSv2) dataset.
Model              Pretrain                Finetune               MiT Accuracy
ViViT              ImageNet21K             MiT                    38.5
VidTr              ImageNet21K             MiT                    41.1
CoVeR              JFT-3B                  K400+SSv2+MiT+ImNet    46.1
Accuracy comparison on the Moments-in-Time (MiT) dataset.

Transfer Learning

We use transfer learning to further verify the video action recognition performance and compare it with co-training on multiple datasets; the results are summarized below. Specifically, we train on the source dataset(s), then fine-tune and evaluate on the target dataset.
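
For concreteness, here is a minimal, self-contained sketch of this A→B protocol using a toy linear model and random data (stand-ins, not CoVeR itself): train on a source task, swap the classification head, then fine-tune and evaluate on the target task.

```python
# A toy sketch of the A -> B transfer protocol described above. The datasets
# and the model are hypothetical stand-ins; only the protocol is illustrated.
import torch
import torch.nn as nn
import torch.nn.functional as F


def make_dataset(n: int, num_classes: int, dim: int = 32, seed: int = 0):
    g = torch.Generator().manual_seed(seed)
    x = torch.randn(n, dim, generator=g)
    y = torch.randint(0, num_classes, (n,), generator=g)
    return x, y


def train(model: nn.Module, data, steps: int = 50, lr: float = 1e-2):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    x, y = data
    for _ in range(steps):
        loss = F.cross_entropy(model(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()


def accuracy(model: nn.Module, data) -> float:
    x, y = data
    return (model(x).argmax(dim=1) == y).float().mean().item()


source = make_dataset(512, num_classes=10, seed=0)     # stands in for dataset A
target = make_dataset(256, num_classes=5, seed=1)      # stands in for dataset B
backbone = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
model = nn.Sequential(backbone, nn.Linear(64, 10))
train(model, source)                                   # train on dataset A
model = nn.Sequential(backbone, nn.Linear(64, 5))      # reuse backbone, new head for dataset B
train(model, target)                                   # fine-tune on dataset B
print("target accuracy:", accuracy(model, target))     # evaluate on dataset B
```

The printed number is meaningless on random data; the snippet only illustrates the train-on-source, fine-tune-and-evaluate-on-target procedure.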

We first consider K400 as the target dataset. CoVeR co-trained on SSv2 and MiT improves the top-1 accuracy on K400→K400 (where the model is trained on K400 and then fine-tuned on K400) by 1.3%, SSv2→K400 by 1.7%, and MiT→K400 by 0.4%. Similarly, we observe that by transferring to SSv2, CoVeR achieves improvements of 2%, 1.8%, and 1.1% over SSv2→SSv2, K400→SSv2, and MiT→SSv2, respectively. The 1.2% and 2% performance improvements on K400 and SSv2 indicate that CoVeR co-trained on multiple datasets can learn better visual representations than the standard training paradigm, which is useful for downstream tasks.

Comparison of transfer learning with the representation learned by CoVeR versus the standard training paradigm. A→B means the model is trained on dataset A and then fine-tuned on dataset B.

Conclusion

In this work, we present CoVeR, a training paradigm that jointly learns action recognition and object recognition tasks in a single model, with the goal of constructing a general-purpose action recognition framework. Our analysis indicates that it may be beneficial to integrate many video datasets into one multi-task learning paradigm. We highlight the importance of continuing to learn from image data during fine-tuning to maintain robust spatial representations. Our empirical findings suggest that CoVeR can learn a single general-purpose video understanding model that achieves impressive performance across many action recognition datasets without an additional stage of fine-tuning on each downstream application.

Acknowledgements

We would like to thank Christopher Fifty, Wei Han, Andrew M. Dai, Ruoming Pang, and Fei Sha for preparation of the CoVeR paper, Yue Zhao, Hexiang Hu, Zirui Wang, Zitian Chen, Qingqing Huang, Claire Cui, and Yonghui Wu for helpful discussions and feedback, and others on the Brain Team for support throughout this project.
