[CVPR 2024] Official PyTorch Code for "PromptKD: Unsupervised Prompt Distillation for Vision-Language Models"
CVPR 2023-2024 Papers: Dive into advanced research presented at the leading computer vision conference. Keep up to date with the latest developments in computer vision and deep learning. Code included. ⭐ Support visual intelligence development!
Public repository of our work assessing missing views for EO (Earth observation) applications
Public repository of our IGARSS 2023 submission
Build high-performance AI models with modular building blocks
Achelous: A Fast Unified Water-surface Panoptic Perception Framework based on Fusion of Monocular Camera and 4D mmWave Radar
An open source implementation of CLIP.
[ICML 2023] Contrast with Reconstruct: Contrastive 3D Representation Learning Guided by Generative Pretraining
Official PyTorch repository for CG-DETR: "Correlation-guided Query-Dependency Calibration in Video Representation Learning for Temporal Grounding"
Code for LEMMA-RCA website
Public repository of our work on finding an optimal multi-view crop classifier (considering encoder architectures and fusion strategies)
[IVS'24] The official implementation of UniBEV
[CVPR 2024] Situational Awareness Matters in 3D Vision Language Reasoning
Code for "LEMMA-RCA: A Large Multi-modal Multi-domain Dataset for Root Cause Analysis" paper
Macaw-LLM: Multi-Modal Language Modeling with Image, Video, Audio, and Text Integration
The open source implementation of the model from "Scaling Vision Transformers to 22 Billion Parameters"
[CVPR 2024] EmbodiedScan: A Holistic Multi-Modal 3D Perception Suite Towards Embodied AI
Multi-modal Object Re-identification
[CVPR 2024] Magic Tokens: Select Diverse Tokens for Multi-modal Object Re-Identification