
This AI Paper from China Introduces ‘Monkey’: A Novel Artificial Intelligence Approach to Enhance Input Resolution and Contextual Association in Large Multimodal Models

Nov 28, 2023

Large multimodal models (LMMs) are becoming increasingly popular because they can process and analyze multiple kinds of data, including text and images. Researchers have demonstrated their capability across a range of multimodal tasks, including image captioning, visual question answering, and more. State-of-the-art models such as LLaVA, MiniGPT4, mPLUG-Owl, and Qwen-VL illustrate the rapid progress in this field. However, several obstacles remain, especially in complex scenarios, because of the wide range of image resolutions and the limited quality of available training data. To overcome these difficulties, prior work has improved the image encoder and used large datasets to increase input resolution.

Furthermore, LLaVA extends instruction tuning to multimodal settings by incorporating multimodal instruction-following data. Despite these developments, such techniques often struggle to handle larger image inputs efficiently and incur substantial training costs. As datasets grow, understanding the subtleties of image-text relationships demands more detailed image descriptions, a requirement that the brief, one-sentence captions found in datasets like COYO and LAION cannot meet. Driven by these constraints, researchers from Huazhong University of Science and Technology and Kingsoft present Monkey, a resource-efficient technique for increasing input resolution within the LMM paradigm. By leveraging pre-existing LMMs, the research team circumvents the time-consuming pretraining process, thanks to the abundance of strong open-source work.

The research team proposes a straightforward yet efficient module that uses a sliding-window approach to divide high-resolution images into smaller, localized patches. A static visual encoder, patch-specific LoRA adjustments, and a trainable visual resampler encode each patch individually. The language decoder then receives these patch encodings together with an encoding of the global image for improved understanding. The team has also created a technique that combines multi-level cues from several generators, such as BLIP2, PPOCR, GRIT, SAM, and OpenAI's ChatGPT, to produce abundant, high-quality caption data. A minimal sketch of the patching step follows.
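To make the sliding-window step concrete, here is a minimal, hedged sketch of how a high-resolution image could be cut into localized patches plus a global view. It assumes non-overlapping 448 x 448 windows (consistent with the typical LMM encoder resolution mentioned below) and a PIL image as input; the function and variable names are illustrative, not the authors' actual code or API.

```python
# Minimal sketch of sliding-window patching for a high-resolution image.
# Assumptions: non-overlapping 448x448 windows matching the frozen visual
# encoder's input size; names are illustrative, not the authors' API.
from PIL import Image

WINDOW = 448  # patch size expected by the visual encoder (assumed)

def split_into_windows(image: Image.Image, window: int = WINDOW):
    """Resize the image to a whole number of windows, then cut it into
    local patches and one downscaled global view."""
    w, h = image.size
    # Round to whole windows, e.g. 1344x896 -> a 3x2 grid of patches.
    cols = max(1, round(w / window))
    rows = max(1, round(h / window))
    resized = image.resize((cols * window, rows * window))

    patches = []
    for r in range(rows):
        for c in range(cols):
            box = (c * window, r * window, (c + 1) * window, (r + 1) * window)
            patches.append(resized.crop(box))

    global_view = image.resize((window, window))  # whole-image context
    return patches, global_view

# Each local patch and the global view would then be encoded separately
# (shared static encoder, patch-specific LoRA adapters, trainable resampler)
# and the resulting encodings passed to the language decoder.
patches, global_view = split_into_windows(Image.new("RGB", (1344, 896)))
print(len(patches))  # 6 local patches for a 1344x896 input
```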

First, on the image captioning task, their model can precisely describe nearly every aspect of the image, including the athlete's various accessories and the red flag in the background, with no mistakes or omissions. The model's description also highlights the brown bag mentioned in the caption, even though it is not immediately apparent without close examination of the picture. This small cue allows the model to draw sensible conclusions, even when they cannot be verified with complete confidence, showing its capacity to attend to small objects and produce logical, accurate descriptions. Along with offering a thorough explanation of the visual content, the model also distinguishes between the different languages present and the signs that correspond to them.

From this information, Monkey can reasonably infer the purpose of the photograph. Even though the image's watermark, "life quotes Tumblr," is missing an "e," the model can answer a question about it in the question-answering task, which shows that training at higher resolution enables it to read tiny text in photographs. The model also correctly answers the query about the date "October 6, 1966," demonstrating that it can read data from charts and identify the right response amid dense textual material without being distracted by extraneous text, and that it can accurately align a given text with its corresponding target. It further identifies the correct answer to a query even in dense and blurry text, highlighting its focus on the objective and its grasp of global knowledge.

The benefits of Monkey can be summarized as follows:

1. Contextual associations. By introducing a multi-level strategy for producing descriptions, the research team improves the model's ability to understand the relationships between various targets and to draw on common knowledge more effectively when creating text descriptions, leading to more insightful and thorough results.

2. Support for resolutions up to 1344 x 896 without pretraining. Compared with the 448 x 448 resolution usually used for LMMs, this larger resolution boosts the capacity to identify and comprehend small or densely packed objects and text (a brief illustration of the window arithmetic follows this list).

3. Performance improvements across several evaluation datasets. Tested on 16 different datasets, the Monkey model performed competitively on tasks including image captioning, general visual question answering, scene-text-centric visual question answering, and document-oriented visual question answering.
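As a quick check of the numbers above, the snippet below (a hedged illustration, assuming 448 x 448 windows rather than the authors' exact configuration) shows how a 1344 x 896 input decomposes under the sliding-window scheme:

```python
# Hedged illustration (not the authors' code) of the window arithmetic:
# with 448x448 windows, a 1344x896 image decomposes exactly into a
# 3x2 grid of local patches plus one resized global view.
width, height, window = 1344, 896, 448
cols, rows = width // window, height // window
print(f"{cols} x {rows} grid -> {cols * rows} local patches + 1 global view")
```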


Check out the Paper and GitHub. All credit for this research goes to the researchers of this project. Also, don’t forget to join our 33k+ ML SubReddit, 41k+ Facebook Community, Discord Channel, and Email Newsletter, where we share the latest AI research news, cool AI projects, and more.

If you like our work, you will love our newsletter.



