LLMWare Launches SLIMs: Small Specialized Function-Calling Models for Multi-Step Automation
As enterprises look to deploy LLMs in more complex production use cases beyond simple knowledge assistants, there is a growing recognition of three interconnected needs: Agents – complex workflows involve…
Innovaccer Wins 2024 North American Technology Innovation Leadership Award
Innovaccer’s AI-powered VBC solution suite unifies patient data seamlessly across diverse systems and care settings, empowering healthcare organizations with innovative, scalable applications. After evaluating the revenue cycle management (RCM) industry, Frost…
Telcos to spend $20B on AI network orchestration by 2028
Telecom companies are expected to increase their spending on AI for automating network management to $20 billion by 2028, a new report from Juniper Research found. This would represent a…
Infleqtion Unveils 5-Year Quantum Computing Roadmap
New Sqorpius Program Accelerates Execution Across Quantum Hardware and Software to Deliver Commercial-Ready Quantum Computing During a live webinar, Infleqtion, the world’s leading quantum information company, shared a broad business…
Nomic AI Introduces Nomic Embed: Text Embedding Model with an 8192 Context-Length that Outperforms OpenAI Ada-002 and Text-Embedding-3-Small on both Short and Long Context Tasks
Nomic AI released an embedding model with a multi-stage training pipeline, Nomic Embed, an open-source, auditable, and high-performing text embedding model. It also has an extended context length supporting tasks…
Can Large Language Models be Trusted for Evaluation? Meet SCALEEVAL: An Agent-Debate-Assisted Meta-Evaluation Framework that Leverages the Capabilities of Multiple Communicative LLM Agents
Despite the utility of large language models (LLMs) across various tasks and scenarios, researchers struggle to evaluate LLMs properly in different situations. They use LLMs to check their responses,…
Pinterest Researchers Present an Effective Scalable Algorithm to Improve Diffusion Models Using Reinforcement Learning (RL)
Diffusion models are a class of generative models that add noise to the training data and then learn to recover the original data by reversing the noising process. This…
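The teaser describes the core diffusion principle: noise is progressively added to data, and a model is trained to reverse that process. A minimal sketch of the closed-form forward (noising) step, in plain Python with an illustrative linear noise schedule (the schedule values and step count are assumptions, not taken from the Pinterest work):

```python
import math
import random

# Illustrative linear beta schedule (values are assumptions, not from the article)
T = 1000
betas = [1e-4 + (0.02 - 1e-4) * i / (T - 1) for i in range(T)]

# Cumulative products abar_t = prod_{s<=t} (1 - beta_s); these shrink toward 0,
# so by the final step the sample is almost pure noise.
alpha_bars = []
prod = 1.0
for b in betas:
    prod *= (1.0 - b)
    alpha_bars.append(prod)

def forward_noise(x0, t, rng):
    """Sample x_t from q(x_t | x_0) in closed form:
    x_t = sqrt(abar_t) * x_0 + sqrt(1 - abar_t) * eps, with eps ~ N(0, 1).
    A denoising model would be trained to predict eps from (x_t, t)."""
    signal = math.sqrt(alpha_bars[t])
    noise = math.sqrt(1.0 - alpha_bars[t])
    return [signal * v + noise * rng.gauss(0.0, 1.0) for v in x0]

rng = random.Random(0)
xt = forward_noise([1.0] * 8, t=T - 1, rng=rng)  # near-pure noise at the last step
```

Reinforcement-learning approaches like the one the headline mentions then fine-tune the learned reverse (denoising) process against a reward, rather than changing this forward process.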
Meet Graph-Mamba: A Novel Graph Model that Leverages State Space Models (SSMs) for Efficient Data-Dependent Context Selection
Graph Transformers struggle with scalability in graph sequence modeling due to high computational costs, and existing attention sparsification methods fail to adequately address data-dependent contexts. State space models (SSMs)…
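The SSMs the teaser refers to replace quadratic attention with a linear-time recurrence over the sequence. A minimal scalar sketch of that recurrence (illustrative only, not the Graph-Mamba architecture; the parameter values are assumptions):

```python
def ssm_scan(xs, A=0.9, B=1.0, C=1.0):
    """Minimal linear state-space recurrence over a sequence:
        h_t = A * h_{t-1} + B * x_t
        y_t = C * h_t
    Each step touches only the previous state, so cost is linear in sequence
    length. Mamba-style "selective" SSMs make A, B, C functions of the input,
    which is what enables data-dependent context selection.
    """
    h = 0.0
    ys = []
    for x in xs:
        h = A * h + B * x   # state update: decay old context, absorb new input
        ys.append(C * h)    # readout
    return ys
```

With `A=0.5`, an impulse input `[1, 0, 0]` decays geometrically through the state: `ssm_scan([1, 0, 0], A=0.5)` yields `[1.0, 0.5, 0.25]`, showing how `A` controls how long past context persists.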
‘Weak-to-Strong JailBreaking Attack’: An Efficient AI Method to Attack Aligned LLMs to Produce Harmful Text
Well-known Large Language Models (LLMs) like ChatGPT and Llama have recently advanced and shown incredible performance in a number of Artificial Intelligence (AI) applications. Though these models have demonstrated capabilities…
Advancing Vision-Language Models: A Survey by Huawei Technologies Researchers in Overcoming Hallucination Challenges
The emergence of Large Vision-Language Models (LVLMs) characterizes the intersection of visual perception and language processing. These models, which interpret visual data and generate corresponding textual descriptions, represent a significant…