Image-and-language representation learning has recently seen a surge of interest, with the goal of capturing the intricate relationship between visual and textual information. Among these efforts, Contrastive Language-Image Pre-Training (CLIP) has emerged as a particularly promising framework, demonstrating state-of-the-art performance across a range of tasks and robustness to out-of-distribution data. While previous studies have focused on scaling CLIP up with ample computational resources, this research investigates its performance under resource constraints, exploring how CLIP scales down in terms of data, architecture, and training strategies. Conducted on the English subset of the WebLI dataset, comprising approximately 3.4 billion image-text pairs, the study fixes computation budgets and evaluates different pre-training strategies.
CLIP, introduced as a joint pre-training framework for image and text representations, uses a contrastive loss to learn a shared embedding space and achieves remarkable zero-shot performance on visual classification tasks. Extensions such as LiT and SLIP enhance CLIP's efficiency, and scaling efforts such as FLIP aim to improve efficiency and scalability further, though this prior work still assumes large computational budgets.
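To make the zero-shot setup concrete, below is a minimal sketch of how a CLIP-style model classifies an image without task-specific training: each class name is turned into a text prompt, both modalities are embedded, and the most similar prompt wins. It assumes PyTorch; `image_encoder`, `text_encoder`, `tokenize`, and the prompt template are illustrative placeholders, not the authors' exact code.

```python
import torch
import torch.nn.functional as F

def zero_shot_classify(image_encoder, text_encoder, tokenize, images, class_names):
    # Build one text prompt per class and embed it with the text encoder.
    prompts = [f"a photo of a {name}" for name in class_names]
    text_emb = F.normalize(text_encoder(tokenize(prompts)), dim=-1)   # (C, D)
    image_emb = F.normalize(image_encoder(images), dim=-1)            # (B, D)
    # Cosine similarity between each image and each class prompt.
    logits = image_emb @ text_emb.T                                   # (B, C)
    return logits.argmax(dim=-1)  # predicted class index per image
```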
Researchers from the University of California and Google DeepMind investigate the performance of CLIP under constrained computation budgets along three key dimensions: data, architecture, and training strategies. The study underscores the importance of high-quality training data, showing that a smaller dataset of high quality can outperform a larger one of lower quality. The researchers also examine how performance varies with dataset size, finding that smaller Vision Transformer (ViT) models are better suited to smaller datasets, whereas larger models excel when compute is fixed and data is plentiful. The work also offers guidance on choosing between CNN-based and ViT-based architectures for CLIP training.
The training pipeline mirrors CLIP's: a contrastive loss trains the vision and text encoders to produce similar representations for corresponding image-text pairs. The WebLI dataset, comprising over 10 billion image-text pairs across many languages, serves as the experimental foundation, with the study focusing on the English pairs, approximately 3.4 billion in total. Text is processed with a SentencePiece tokenizer with a vocabulary size of 32k. Evaluation covers zero-shot transfer, linear probing, and retrieval on MSCOCO captions, following established protocols for fair comparison and assessment of model generalization and effectiveness.
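The core of this pipeline is the symmetric image-text contrastive objective. Below is a minimal sketch of that loss, assuming PyTorch; the temperature value and tensor names are illustrative placeholders rather than the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_emb, text_emb, temperature=0.07):
    # Normalize embeddings so the dot product is cosine similarity.
    image_emb = F.normalize(image_emb, dim=-1)     # (B, D)
    text_emb = F.normalize(text_emb, dim=-1)       # (B, D)
    logits = image_emb @ text_emb.T / temperature  # (B, B) similarity matrix
    # Matching image-text pairs lie on the diagonal.
    targets = torch.arange(logits.size(0), device=logits.device)
    loss_i2t = F.cross_entropy(logits, targets)    # image -> text direction
    loss_t2i = F.cross_entropy(logits.T, targets)  # text -> image direction
    return (loss_i2t + loss_t2i) / 2
```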
In linear probing, MLP-Mixer outperforms the other architectures when fewer samples are seen, but ViT-B/32 takes the lead as the sample size increases, especially on out-of-distribution (OOD) variants. ViT is preferred for both robustness and standard accuracy at larger sample sizes, while ResNet is better suited to smaller ones. ViT and MLP-Mixer generalize better to out-of-distribution datasets, which the authors attribute to their lower inductive bias.
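For reference, linear probing evaluates frozen features by fitting only a linear classifier on top of them. The sketch below follows that standard protocol; `extract_features` is a hypothetical helper that runs the frozen vision encoder over a dataset and returns NumPy feature and label arrays, and the regularization setting is a placeholder.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def linear_probe_accuracy(extract_features, train_set, test_set):
    # Embed both splits with the frozen vision encoder.
    X_train, y_train = extract_features(train_set)
    X_test, y_test = extract_features(test_set)
    # Train only a linear classifier on the frozen features.
    clf = LogisticRegression(max_iter=1000, C=1.0)
    clf.fit(X_train, y_train)
    return clf.score(X_test, y_test)  # top-1 accuracy on held-out data
```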
In retrieval tasks, ResNet-50 performs better with smaller sample sizes, but ViT-B/32 surpasses it once sample sizes exceed 400M for both few-shot and retrieval tasks. Mixer-B/32 consistently exhibits the poorest retrieval performance. These findings point to ViT as the preferred vision encoder across zero-shot, linear-probing, few-shot, and retrieval tasks.
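Retrieval performance on benchmarks like MSCOCO captions is typically reported as recall@k: rank all candidate texts for each image by similarity and check whether a matching caption appears in the top k. The sketch below assumes PyTorch and one matching caption per image at the same index; the real benchmark has multiple captions per image, so this is a simplified illustration.

```python
import torch
import torch.nn.functional as F

def recall_at_k(image_emb, text_emb, k=5):
    image_emb = F.normalize(image_emb, dim=-1)  # (N, D)
    text_emb = F.normalize(text_emb, dim=-1)    # (N, D)
    sims = image_emb @ text_emb.T               # (N, N) similarity matrix
    topk = sims.topk(k, dim=-1).indices         # top-k text indices per image
    targets = torch.arange(sims.size(0), device=sims.device).unsqueeze(-1)
    hits = (topk == targets).any(dim=-1).float()
    return hits.mean().item()  # fraction of images whose caption is in the top k
```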
In conclusion, the paper investigates the influence of data size, network architecture, and training strategies on CLIP's performance. It underscores the importance of both data quantity and quality, showing how data augmentation can boost CLIP's performance without imposing substantial computational cost. The study also compares network architectures and training strategies, revealing that different choices excel at different computational budgets and emphasizing the need for careful selection to optimize CLIP's performance.
Check out the Paper. All credit for this research goes to the researchers of this project.