Trillo launches Trillo Workbench
Trillo, a leading provider of application platform and AI solutions, is thrilled to announce the launch of Trillo Workbench on Google Cloud and Microsoft Azure. Trillo Workbench represents a game-changing…
AI2, the Allen Institute for AI, releases the OLMo framework
The OLMo framework will drive a critical shift in AI development by providing the industry with a unique, large, accurate, and open language model framework, creating an alternative to…
The Soft Robotics Board of Directors appoints Mark J. Chiappetta as President and CEO
Soft Robotics, an industry leader in AI-enabled machine vision solutions for robotic automation and manufacturing process inspection, proudly announces the board's appointment of Mark Chiappetta as its new President and Chief…
Meet CMMMU: A New Chinese Massive Multi-Discipline Multimodal Understanding Benchmark Designed to Evaluate Large Multimodal Models (LMMs)
In the realm of artificial intelligence, Large Multimodal Models (LMMs) have exhibited remarkable problem-solving capabilities across diverse tasks, such as zero-shot image/video classification, zero-shot image/video-text retrieval, and multimodal question answering…
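For readers unfamiliar with the zero-shot setting mentioned in the excerpt, the sketch below shows how a CLIP-style model classifies an image by scoring it against candidate text labels. It is a generic illustration only, not the CMMMU evaluation pipeline; the checkpoint name, label prompts, and image path are placeholder assumptions.

```python
# Generic zero-shot image classification via image-text similarity (CLIP-style).
# Illustrative only; not the CMMMU benchmark's evaluation code.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("example.jpg")  # any local image (placeholder path)
labels = ["a photo of a cat", "a photo of a dog", "a photo of a car"]

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image scores the image against each candidate caption.
probs = outputs.logits_per_image.softmax(dim=-1)
for label, p in zip(labels, probs[0].tolist()):
    print(f"{label}: {p:.3f}")
```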
DeepSeek-AI Introduces the DeepSeek-Coder Series: A Range of Open-Source Code Models from 1.3B to 33B, Trained from Scratch on 2T Tokens
In the dynamic field of software development, integrating large language models (LLMs) has initiated a new chapter, especially in code intelligence. These sophisticated models have been pivotal in automating various…
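As a rough illustration of how such open code models are typically queried, the snippet below uses the standard Hugging Face transformers generation API. The checkpoint identifier is an assumption about how the DeepSeek-Coder weights are published, not a detail taken from the excerpt above.

```python
# Minimal sketch: prompting an open code LLM with the transformers API.
# The checkpoint name below is an assumed model id; substitute the release you use.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "deepseek-ai/deepseek-coder-1.3b-base"  # assumed model id
tokenizer = AutoTokenizer.from_pretrained(checkpoint, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    checkpoint, torch_dtype=torch.bfloat16, trust_remote_code=True
)

prompt = "# Python function that checks whether a number is prime\ndef is_prime(n):"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```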
This AI Paper from China Introduces ‘AGENTBOARD’: An Open-Source Evaluation Framework Tailored to Analytical Evaluation of Multi-Turn LLM Agents
Evaluating LLMs as versatile agents is crucial for their integration into practical applications. However, existing evaluation frameworks face challenges in benchmarking diverse scenarios, maintaining partially observable environments, and capturing multi-round…
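To make the multi-turn setting concrete, here is a minimal, hypothetical evaluation loop that records a per-turn progress score rather than a single pass/fail flag. The agent and environment interfaces shown are illustrative assumptions, not AGENTBOARD's actual API.

```python
# Hypothetical multi-turn agent evaluation loop with fine-grained progress tracking.
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class EpisodeResult:
    success: bool
    progress: List[float] = field(default_factory=list)  # partial progress per turn

def run_episode(agent_step: Callable[[str], str],
                env_step: Callable[[str], Tuple[str, float, bool]],
                observation: str,
                max_turns: int = 10) -> EpisodeResult:
    """Run one episode; agent_step maps an observation to an action,
    env_step returns (new observation, progress in [0, 1], done flag)."""
    result = EpisodeResult(success=False)
    for _ in range(max_turns):
        action = agent_step(observation)                 # the LLM picks the next action
        observation, progress, done = env_step(action)   # partially observable env responds
        result.progress.append(progress)
        if done:
            result.success = progress >= 1.0
            break
    return result
```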
Researchers from the Chinese University of Hong Kong and Tencent AI Lab Propose a Multimodal Pathway to Improve Transformers with Irrelevant Data from Other Modalities
Transformers have found widespread application in diverse tasks spanning text classification, map construction, object detection, point cloud analysis, and audio spectrogram recognition. Their versatility extends to multimodal tasks, exemplified by…
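The sketch below illustrates one plausible way a block trained on the target modality could be paired with a frozen auxiliary block pretrained on an unrelated modality. The parallel-sum design and the learnable scale are illustrative assumptions, not the paper's exact formulation.

```python
# Illustrative "pathway" block: blend a target-modality block with a frozen
# auxiliary block borrowed from a transformer trained on another modality.
import torch
import torch.nn as nn

class PathwayBlock(nn.Module):
    def __init__(self, target_block: nn.Module, auxiliary_block: nn.Module, init_scale: float = 0.1):
        super().__init__()
        self.target = target_block        # trained on the target modality
        self.auxiliary = auxiliary_block  # pretrained on an unrelated modality, kept frozen
        for p in self.auxiliary.parameters():
            p.requires_grad = False
        self.scale = nn.Parameter(torch.tensor(init_scale))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Sum the target path and the scaled cross-modal path (assumed design).
        return self.target(x) + self.scale * self.auxiliary(x)

# Toy usage with linear layers standing in for transformer sub-layers.
block = PathwayBlock(nn.Linear(64, 64), nn.Linear(64, 64))
print(block(torch.randn(2, 16, 64)).shape)  # torch.Size([2, 16, 64])
```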
Meet BiTA: An Innovative AI Method Expediting LLMs via Streamlined Semi-Autoregressive Generation and Draft Verification
Large language models (LLMs) based on transformer architectures have emerged in recent years. Models such as ChatGPT and LLaMA-2 illustrate how rapidly the parameter counts of LLMs have grown, ranging from…
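The excerpt stops short of the method itself, but the general draft-then-verify idea behind many such acceleration schemes can be sketched as follows. This is a simplified, greedy illustration with hypothetical model handles, not BiTA's specific semi-autoregressive design.

```python
# Generic draft-then-verify decoding sketch: a cheap draft model proposes k tokens,
# the full model checks them in one forward pass, and the longest agreeing prefix is kept.
# `target` and `draft` are assumed HF-style causal LMs (callables returning .logits);
# batch size 1 is assumed and KV caching is omitted for brevity.
import torch

@torch.no_grad()
def draft_and_verify(target, draft, input_ids, max_new_tokens=64, k=4):
    ids = input_ids
    while ids.shape[1] - input_ids.shape[1] < max_new_tokens:
        # 1) Draft k tokens autoregressively with the cheap model.
        draft_ids = ids
        for _ in range(k):
            nxt = draft(draft_ids).logits[:, -1].argmax(-1, keepdim=True)
            draft_ids = torch.cat([draft_ids, nxt], dim=1)
        proposed = draft_ids[:, ids.shape[1]:]

        # 2) Verify all k proposals with a single forward pass of the target model.
        logits = target(draft_ids).logits
        expected = logits[:, ids.shape[1] - 1:-1].argmax(-1)   # target's greedy choices
        match = (expected == proposed).long().cumprod(dim=1)   # 1 while the prefix agrees
        n_accept = int(match.sum())

        # 3) Keep the accepted prefix plus one bonus token from the target model.
        ids = torch.cat([ids, proposed[:, :n_accept]], dim=1)
        next_tok = logits[:, ids.shape[1] - 1].argmax(-1, keepdim=True)
        ids = torch.cat([ids, next_tok], dim=1)
    return ids
```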
UC Berkeley and UCSF Researchers Propose Cross-Attention Masked Autoencoders (CrossMAE): A Leap in Efficient Visual Data Processing
One of the more intriguing developments in the dynamic field of computer vision is the efficient processing of visual data, which is essential for applications ranging from automated image analysis…
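As a rough sketch of the cross-attention idea in the name, the layer below reconstructs masked patches by letting mask queries attend only to the encoded visible patches, with no self-attention among the masked tokens. Dimensions, masking ratio, and layer structure are illustrative assumptions, not the published CrossMAE architecture.

```python
# Illustrative cross-attention decoder layer for masked-autoencoder-style reconstruction.
import torch
import torch.nn as nn

class CrossAttnDecoderLayer(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 8, patch_pixels: int = 16 * 16 * 3):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm_q = nn.LayerNorm(dim)
        self.norm_kv = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.to_pixels = nn.Linear(dim, patch_pixels)  # per-patch reconstruction head

    def forward(self, mask_queries: torch.Tensor, visible_tokens: torch.Tensor) -> torch.Tensor:
        # Masked-position queries attend only to encoded visible patches.
        q, kv = self.norm_q(mask_queries), self.norm_kv(visible_tokens)
        attended, _ = self.cross_attn(q, kv, kv)
        x = mask_queries + attended
        x = x + self.mlp(x)
        return self.to_pixels(x)  # predicted pixels per masked patch

# Toy usage: 49 visible tokens and 147 mask queries (75% masking of a 14x14 patch grid).
decoder = CrossAttnDecoderLayer()
visible = torch.randn(2, 49, 256)
queries = torch.randn(2, 147, 256)  # in practice, a learned mask token plus positional embeddings
print(decoder(queries, visible).shape)  # torch.Size([2, 147, 768])
```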
Experts from 30 nations will contribute to global AI safety report
Leading experts from 30 nations across the globe will advise on a landmark report assessing the capabilities and risks of AI systems. The International Scientific Report on Advanced AI Safety…