Run OpenAI's CLIP model on iOS to search photos.
Updated Jul 11, 2024 - Swift
Generation of faces, numbers, and images, plus Stable Diffusion inpainting via segmentation with the SAM and CLIP models
Text to image search & Image Similarity Search using @typesense
Youtube video moment searcher by text or photo
Simple implementation of OpenAI CLIP model in PyTorch.
Semantic Emoji Search Plugin for FiftyOne
Traverse the space of concepts with a multi-modal similarity index in FiftyOne
[ NeurIPS 2023 R0-FoMo Workshop ] Official Codebase for "Estimating Uncertainty in Multimodal Foundation Models using Public Internet Data"
The most impactful papers related to contrastive pretraining for multimodal models!
Description of YOLO-World along with its applications
Repo deploying CLIP model to predict if an image is a hotdog or not.
🏄 Scalable embedding, reasoning, ranking for images and sentences with CLIP
[NeurIPS 2023 Oral] Quilt-1M: One Million Image-Text Pairs for Histopathology.
VisuNex, a fork of the original Stable Diffusion repository, is an attempt to personalize text-to-image generation, allowing users to tailor image creation to their unique aesthetic preferences.
Semantic Search demo featuring UForm, USearch, UCall, and Streamlit, to visualize and retrieve from image datasets, similar to "CLIP Retrieval"
[ICLR2023] PLOT: Prompt Learning with Optimal Transport for Vision-Language Models
Implementation of an image-retrieval system that searches by text description and finds similar photos
A tool for searching local images by text description, powered by Rust + candle + CLIP
[ICCV2023] Tem-adapter: Adapting Image-Text Pretraining for Video Question Answer
An official open-source Image/Video retrieval engine, developed by Badger Team X in AI Challenge 2022
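Most of the repositories above implement the same core operation: embed the query text and the candidate images with CLIP, then rank images by cosine similarity to the text. A minimal NumPy sketch of that ranking step, assuming the embeddings have already been computed (the model call itself, e.g. via `open_clip` or Hugging Face `transformers`, is omitted, and the 4-dimensional vectors below are toy stand-ins for real 512- or 768-dimensional CLIP embeddings):

```python
import numpy as np

def rank_by_similarity(text_emb, image_embs):
    """Rank images by cosine similarity to a text embedding.

    text_emb:   shape (d,)   -- CLIP text embedding of the query
    image_embs: shape (n, d) -- CLIP image embeddings of the gallery
    Returns (indices sorted most-to-least similar, similarity scores).
    """
    t = text_emb / np.linalg.norm(text_emb)
    im = image_embs / np.linalg.norm(image_embs, axis=1, keepdims=True)
    sims = im @ t                      # cosine similarities, shape (n,)
    return np.argsort(-sims), sims

# Toy embeddings standing in for real CLIP outputs.
text = np.array([1.0, 0.0, 0.0, 0.0])
images = np.array([
    [0.9, 0.1, 0.0, 0.0],   # close to the query direction
    [0.0, 1.0, 0.0, 0.0],   # orthogonal to the query
    [0.5, 0.5, 0.0, 0.0],   # in between
])
order, sims = rank_by_similarity(text, images)
print(order)  # most similar image index first
```

The same ranking works for image-to-image similarity search by passing an image embedding as the query; at scale, the brute-force `im @ t` product is typically replaced by an approximate-nearest-neighbor index such as the ones Typesense or USearch (both mentioned above) provide.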