Tarsier: Recipes for Training and Evaluating Large Video Description Models

Jiawei Wang*, Liping Yuan*, Yuchen Zhang*

ByteDance Research

*: Equal contribution, sorted alphabetically.

[arXiv] [Model on HF] [Dataset on HF]


Preface

Welcome to Tarsier!

In this repository, we introduce Tarsier -- a family of large-scale video-language models designed to generate high-quality video descriptions (see Figure 1), together with strong general video understanding capability (SOTA results on 6 open benchmarks). Tarsier takes a simple model structure (CLIP-ViT + LLM), combined with a carefully designed training strategy: multi-task pre-training (stage-1) and multi-grained instruction tuning (stage-2).

Besides the model, we propose a new video description benchmark called DREAM-1K (Description with Rich Events, Actions, and Motions), featuring videos from diverse sources and varying complexity. AutoDQ (Automatic Description Quality) is also introduced as a highly interpretable and discriminative approach to evaluate video description quality.

We have released the model, code, and data for inference, evaluation, and deployment.

Please cite us if you find our work helpful.


Figure 1: Example dialogue between a user and Tarsier. The input video is: assets/videos/coffee.gif

Overview

Abstract

Generating fine-grained video descriptions is a fundamental challenge in video understanding. In this work, we introduce Tarsier, a family of large-scale video-language models designed to generate high-quality video descriptions. Tarsier employs CLIP-ViT to encode frames separately and then uses an LLM to model temporal relationships. Despite its simple architecture, we demonstrate that with a meticulously designed two-stage training procedure, the Tarsier models exhibit substantially stronger video description capabilities than any existing open-source model, showing a +51.4% advantage in human side-by-side evaluation over the strongest model. Additionally, they are comparable to state-of-the-art proprietary models, with a +12.3% advantage against GPT-4V and a −6.7% disadvantage against Gemini 1.5 Pro. Besides video description, Tarsier proves to be a versatile generalist model, achieving new state-of-the-art results across nine public benchmarks, including multi-choice VQA, open-ended VQA, and zero-shot video captioning. Our second contribution is the introduction of a new benchmark for evaluating video description models, consisting of a new challenging dataset featuring videos from diverse sources and varying complexity, along with an automatic method specifically designed to assess the quality of fine-grained video descriptions. We make our models and evaluation benchmark publicly available at https://github.com/bytedance/tarsier.

Simple Model Structure

Tarsier takes a simple structure that uses an MLP projection layer to connect the visual encoder (CLIP ViT) and the text decoder (LLM). Frames are encoded independently and concatenated before being fed into the LLM.


Figure 2: Tarsier Model Structure.
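For illustration, below is a minimal PyTorch-style sketch of this forward pass. The module names (vit, projector, llm), the two-layer MLP, and the dimensions are assumptions for exposition, not the repository's actual classes; see the repository code for the real implementation.

import torch
from torch import nn

class TarsierSketch(nn.Module):
    """Illustrative sketch: frames -> CLIP-ViT -> MLP projection -> LLM."""

    def __init__(self, vit: nn.Module, llm: nn.Module, vit_dim: int = 1024, llm_dim: int = 4096):
        super().__init__()
        self.vit = vit  # CLIP-ViT image encoder (kept frozen during training)
        self.projector = nn.Sequential(  # MLP projection layer (layout assumed for illustration)
            nn.Linear(vit_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )
        self.llm = llm  # text decoder (LLM), consumes a sequence of embeddings

    def forward(self, frames: torch.Tensor, text_embeds: torch.Tensor) -> torch.Tensor:
        # frames: (batch, num_frames, 3, H, W); each frame is encoded independently.
        b, t = frames.shape[:2]
        patch_feats = self.vit(frames.flatten(0, 1))    # (b * t, num_patches, vit_dim)
        visual_embeds = self.projector(patch_feats)     # (b * t, num_patches, llm_dim)
        # Concatenate the per-frame tokens along the sequence dimension,
        # then prepend them to the text embeddings before the LLM.
        visual_embeds = visual_embeds.reshape(b, -1, visual_embeds.shape[-1])
        return self.llm(torch.cat([visual_embeds, text_embeds], dim=1))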

Two-stage Training

Tarsier takes a two-stage training strategy:

  • Stage-1: Multi-task Pre-training on 13M data
  • Stage-2: Multi-grained Instruction Tuning on 500K data

In both stages, we freeze the ViT and train all parameters of the projection layer and the LLM.
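As a rough sketch of this recipe (continuing the illustrative model above and using standard PyTorch; the optimizer and learning rate are placeholders, not the paper's actual hyperparameters):

import torch

# 'model' is an instance of the illustrative TarsierSketch above.
# Freeze the visual encoder; the projection layer and the LLM stay trainable in both stages.
for p in model.vit.parameters():
    p.requires_grad = False

trainable_params = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable_params, lr=1e-5)  # placeholder hyperparameters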

Video Description Evaluation

Benchmark: DREAM-1K

We proposed DREAM-1K as a challenging video description benchmark. It contains a collection of 1,000 video clips with diverse complexities from five different origins: live-action movies, animated movies, stock videos, long YouTube videos, and TikTok-style short videos. We provide fine-grained manual annotations for each video. See: data/annotations/DREAM-1k.jsonl


Figure 3: DREAM-1K data statistics.

Figure 4 shows the human reference and the description results of different models on one video clip (assets/videos/sitting.mp4) from DREAM-1K.


Figure 4: Human reference and description results of different models on one video clip from DREAM-1K. This video features six actions, each highlighted in a unique color. Model hallucinations are indicated by underlining and red color.

Evaluation Approach: AutoDQ

We propose AutoDQ as a more interpretable approach to automatic video description evaluation. AutoDQ uses an extraction model to extract events from two video descriptions, then uses an entailment model to examine how many events extracted from one description are entailed by the other description. We use ChatGPT to implement both models, as shown in Figure 5.


Figure 5: The AutoDQ workflow.

The relevant code is: evaluation/metrics/evaluate_dream_gpt.py
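For intuition, here is a minimal sketch of the two-step idea. The chat() callable stands in for a hypothetical ChatGPT call and the prompts are invented for illustration; the actual prompts and scoring live in evaluation/metrics/evaluate_dream_gpt.py.

def autodq_score(reference, prediction, chat):
    """Sketch of AutoDQ: extract events from each description, then count
    how many events from one description are entailed by the other."""

    def extract_events(description):
        # Extraction model: ask ChatGPT to list the events in a description.
        reply = chat("List the distinct events in this video description, one per line:\n" + description)
        return [line.strip() for line in reply.splitlines() if line.strip()]

    def num_entailed(events, description):
        # Entailment model: count events that the given description entails.
        count = 0
        for event in events:
            reply = chat("Description:\n" + description +
                         "\n\nDoes the description entail the following event? Answer yes or no.\nEvent: " + event)
            count += reply.strip().lower().startswith("yes")
        return count

    ref_events = extract_events(reference)
    pred_events = extract_events(prediction)
    precision = num_entailed(pred_events, reference) / max(len(pred_events), 1)
    recall = num_entailed(ref_events, prediction) / max(len(ref_events), 1)
    f1 = 2 * precision * recall / max(precision + recall, 1e-8)
    return {"precision": precision, "recall": recall, "f1": f1}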

Evaluation Results

We evaluate several advanced open-source video understanding models and two proprietary models (GPT-4V and Gemini 1.5 Pro) on DREAM-1K. The results are shown in Figure 6.


Figure 6: Evaluation results on DREAM-1K.

Video Understanding Benchmarks Evaluation

Tarsier is evaluated on 7 commonly used video understanding benchmarks, including MVBench, NeXT-QA, EgoSchema, MSVD-QA, MSR-VTT-QA, ActivityNet-QA, and TGIF-QA. Our Tarsier-34b achieves SOTA results on 6 of the 7 benchmarks.

Usage

This section provides guidance on how to run, evaluate and deploy Tarsier.

Setup

Everything below runs under Python 3.9. If you are not using Python 3.9, you can create a virtual environment with:

conda create -n tarsier python=3.9

Then run the setup script:

git clone https://github.com/bytedance/tarsier.git
cd tarsier

bash setup.sh

Note that you should fill in the environment parameters for calling the OpenAI service through Azure if you need to run ChatGPT-based evaluations (for open-ended QA and DREAM-1K).

Model Preparation

Download the model checkpoints from Hugging Face: Tarsier-7b and Tarsier-34b.

Quick Start

You can use the following script for a quick start on detailed video description:

MODEL_NAME_OR_PATH="path_to_the_model"
VIDEO_FILE="assets/videos/coffee.gif" # Or try your own example; both images (including GIFs) and videos are supported.

python3 -m tasks.inference_quick_start \
  --model_name_or_path $MODEL_NAME_OR_PATH \
  --instruction "Describe the video in detail." \
  --input_path $VIDEO_FILE

The result should be:

# Tarsier-7b:
"Prediction: A person is sitting at an outdoor café table with a cup of coffee and a book. The person takes a sip from the cup and then looks at the book. The background features the Eiffel Tower and is surrounded by red roses. Another person walks by in the background, waving at the first person. The scene is set in a Parisian outdoor café with street lamps and greenery visible."

# Tarsier-34b:
"Prediction: A person picks up a cup of coffee from a table and takes a sip while holding a book. The scene is set in an outdoor café with the Eiffel Tower visible in the background. The person then turns to look at two other individuals who are seated at a nearby table. The two individuals wave at the person with the coffee. The person continues to hold the coffee cup and book, looking back at the two individuals."

Benchmark Inference and Evaluation

Data Preparation

  1. DREAM-1K

    Download the videos from https://huggingface.co/datasets/omni-research/DREAM-1K.

    We have preprocessed the metadata for all benchmarks we used; see: data/annotations. However, you need to change the "<placeholder>" in the annotation file to your local video file path according to the "vid" (see the sketch after this list). We provide example code for processing DREAM-1K; you can refer to it when processing other benchmarks.

  2. Other Benchmarks
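For step 1, a minimal sketch of the placeholder replacement is shown below. It simply rewrites every "<placeholder>" occurrence in the annotation file with a local video directory; the exact field layout of the annotations is not assumed here, and the VIDEO_DIR path is yours to fill in.

import json
from pathlib import Path

ANNOTATION_FILE = Path("data/annotations/DREAM-1k.jsonl")
VIDEO_DIR = "/path/to/your/DREAM-1K/videos"  # fill in: local directory of the downloaded videos

def localize(value):
    # Recursively replace "<placeholder>" in string fields with the local video directory.
    if isinstance(value, str):
        return value.replace("<placeholder>", VIDEO_DIR)
    if isinstance(value, list):
        return [localize(v) for v in value]
    if isinstance(value, dict):
        return {k: localize(v) for k, v in value.items()}
    return value

records = [localize(json.loads(line)) for line in ANNOTATION_FILE.read_text().splitlines() if line.strip()]
ANNOTATION_FILE.write_text("\n".join(json.dumps(r, ensure_ascii=False) for r in records) + "\n")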

Benchmark Inference and Evaluation

The following command will first run inference in parallel on the selected benchmarks (edit the "CHUNKS" and "GPULIST" parameters in scripts/run_inference_benchmark.sh to control the parallelism), and then run the evaluation.

model_name_or_path="path_to_the_model"
output_dir="dream_predictions"
benchmarks="dream" # Split benchmarks by space. Default as 'all' to inference on all benchmarks; Also could be task types: ('dream', 'caption', 'mc_qa', 'oe_qa'); Or specific benchmark names: ('dream', 'msvd-caption', 'msr-vtt-caption', 'vatex-caption', 'next-qa', 'egoschema', 'mvbench', 'video-mme', 'msvd-qa', 'msr-vtt-qa', 'tgif-qa', 'anet-qa')

mkdir $output_dir

bash scripts/run_inference_benchmark.sh $model_name_or_path $output_dir $benchmarks

The evaluation results will be printed and saved in $output_dir.

Evaluation Only

Run the following script to only calculate the metrics for the selected benchmarks.

pred_dir="dream_predictions"
benchmarks="dream" # Same as above code block

bash run_evaluation_only.sh $pred_dir $benchmarks

The evaluation result will be saved as: {pred_dir}/{benchmark-name}_eval_result.txt

Deployment

CLI Demo

Use the following script to run a conversation demo in the command line.

model_path="path_to_the_model"

bash scripts/run_demo_cli.sh $model_path

Below are the input video and a conversation with Tarsier-34b about the video:


Figure 7: Input video in CLI Demo.


Figure 8: Conversation in CLI Demo.

Gradio Demo

Use the following script to run a Gradio Demo.

model_path="path_to_the_model"

bash scripts/run_demo_gradio.sh $model_path

The Gradio page should look as follows. First upload a video/image/GIF in the corresponding block, then start the conversation. Click the "Clear" button to restart.


Figure 9: Tarsier Gradio Demo.

Citation

Please cite us as:

@misc{wang2024tarsierrecipestrainingevaluating,
      title={Tarsier: Recipes for Training and Evaluating Large Video Description Models}, 
      author={Jiawei Wang and Liping Yuan and Yuchen Zhang},
      year={2024},
      eprint={2407.00634},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2407.00634},
}
