Existing web agents are limited by their reliance on a single input modality and by evaluation in controlled environments, such as web simulators or static snapshots, that fail to capture the complexity and dynamism of real-world web interaction. As a result, they struggle to navigate and interact with the diverse, ever-evolving content of live websites, leaving a gap between benchmark performance and practical utility.
Previous work on web agents has focused on autonomous navigation and interaction with web environments. Key developments include WebGPT and WebAgent, which leverage GPT-3 and T5-based models, respectively, for text-based web browsing and HTML snippet extraction. There is also growing interest in multimodal web agents, such as WebGUM, which combines T5 with a Vision Transformer, and PIX2ACT, which operates on web screenshots. These efforts mark a shift away from earlier single-modality or simplified web-environment approaches, toward more realistic and dynamic web interaction. Concurrently, large multimodal models (LMMs) such as GPT-4V have demonstrated robust multimodal comprehension, laying the groundwork for more sophisticated web agents.
Researchers from Zhejiang University, Tencent AI Lab, and Westlake University have proposed WebVoyager, an LMM-powered web agent that completes user instructions end-to-end by interacting with real-world websites. They also introduce a new evaluation protocol that leverages the robust multimodal comprehension capabilities of GPT-4V, along with a benchmark of real-world tasks drawn from 15 widely used websites. The paper demonstrates the agent's interaction with the Apple website step by step, showing an optimal path without redundant actions.
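At a high level, an agent of this kind runs an observe-think-act loop: at each step it screenshots the page, asks the LMM for the next action, and executes that action until the model emits a final answer. The sketch below is a hypothetical illustration of such a loop, not the paper's implementation; the `lmm` callable, the `browser` object, and the action format are all stand-in assumptions.

```python
# Hypothetical sketch of an LMM web-agent loop (not WebVoyager's actual code).
# `lmm` and `browser` are stand-in stubs for illustration only.

def parse_action(response: str):
    """Parse an LMM reply such as 'Click [5]' or 'ANSWER; $999'."""
    if response.upper().startswith("ANSWER"):
        return ("answer", response.split(";", 1)[1].strip())
    verb, _, arg = response.partition(" ")
    return (verb.lower(), arg.strip("[]"))

def run_agent(task, lmm, browser, max_steps=15):
    """Observe-think-act loop: screenshot -> LMM decision -> browser action."""
    history = []
    for _ in range(max_steps):
        screenshot = browser.screenshot()                      # observe
        action = parse_action(lmm(task, screenshot, history))  # think
        if action[0] == "answer":                              # done
            return action[1]
        browser.act(action)                                    # act
        history.append(action)
    return None  # step budget exhausted without an answer

class StubBrowser:
    """Minimal stand-in so the loop can run without a real browser."""
    def screenshot(self):
        return b""
    def act(self, action):
        pass

# Scripted stub LMM: click once, then answer.
replies = iter(["Click [5]", "ANSWER; MacBook Air starts at $999"])
result = run_agent("Find MacBook price", lambda *a: next(replies), StubBrowser())
print(result)  # MacBook Air starts at $999
```

The key design point is that the loop is model-agnostic: swapping the stub `lmm` for a real multimodal API call changes nothing else in the control flow.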
The evaluation set is constructed by combining self-instruct generation with human verification: tasks are sampled and rewritten from various websites, and human validators confirm that each generated task is well-formed and that its answer can be found on the corresponding website. Human evaluation is the primary metric, with expert annotators judging task success from the agent's interaction with the web. Notably, the protocol also employs GPT-4V for automatic evaluation, aiming to reduce reliance on human evaluators and lower experiment costs.
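One way such an automatic judge can be wired up, sketched here under assumptions (the prompt wording, labels, and parsing are illustrative, not the paper's exact protocol): the judge model sees the task and the agent's final answer alongside the closing screenshots, and its free-form reply is mapped to a binary success label.

```python
# Hypothetical sketch of an automatic-evaluation verdict parser.
# Prompt wording and labels are assumptions, not the paper's exact protocol.

JUDGE_PROMPT = (
    "Task: {task}\n"
    "Agent answer: {answer}\n"
    "Given the attached final screenshots, reply with "
    "'SUCCESS' or 'NOT SUCCESS'."
)

def parse_verdict(judge_response: str) -> bool:
    """Map free-form judge text to a binary success label."""
    text = judge_response.upper()
    if "NOT SUCCESS" in text:  # check the negative label first
        return False
    return "SUCCESS" in text

print(parse_verdict("The trajectory shows SUCCESS."))      # True
print(parse_verdict("Verdict: NOT SUCCESS, wrong page."))  # False
```

Checking the negative label before the positive one matters, since the string "SUCCESS" is a substring of "NOT SUCCESS".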
WebVoyager achieved a 55.7% task success rate, outperforming both GPT-4 (All Tools) and its own text-only variant. The automatic GPT-4V evaluation aligned closely with human judgment, reaching an 85.3% agreement rate. Despite strong performance on most website tasks, WebVoyager struggled with text-heavy sites such as Cambridge Dictionary and Wolfram Alpha. The automatic evaluator's consistency improved as it was given more information, reaching a Kappa score of 0.7, on par with human inter-annotator agreement, which highlights GPT-4V's potential for efficient, large-scale evaluation of web agents.
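For reference, the Kappa statistic cited here (Cohen's kappa) corrects raw agreement for the agreement two raters would reach by chance. A small self-contained computation, using toy labels rather than the paper's data:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    n = len(rater_a)
    # Observed agreement: fraction of items both raters label identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from each rater's marginal label frequencies.
    count_a, count_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(count_a[k] * count_b[k] for k in count_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Toy labels (1 = task success, 0 = failure), not the paper's data.
human = [1, 0, 1, 1, 0, 1, 0, 0]
auto  = [1, 0, 1, 0, 0, 1, 0, 1]
print(round(cohens_kappa(human, auto), 3))  # 0.5
```

Here raw agreement is 75%, but chance agreement is 50%, so kappa lands at 0.5; a kappa of 0.7 therefore indicates substantial agreement well beyond chance.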
In conclusion, WebVoyager is an LMM-powered web agent designed for end-to-end web task resolution, achieving a 55.7% task success rate. Still, there is room for improvement, as the paper's comprehensive error analysis shows. The researchers suggest that future work should focus on better methods for integrating visual and textual information and on building multimodal web agents from open-source LMMs.
The post Meet WebVoyager: An Innovative Large Multimodal Model (LMM) Powered Web Agent that can Complete User Instructions End-to-End by Interacting with Real-World Websites appeared first on MarkTechPost.