Month: June 2024

Allen Institute for AI Releases Tulu 2.5 Suite on Hugging Face: Advanced AI Models Trained with DPO and PPO, Featuring Reward and Value Models

The release of the Tulu 2.5 suite by the Allen Institute for AI marks a significant advancement in model training using Direct Preference Optimization (DPO) and Proximal Policy Optimization (PPO)…
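
For readers unfamiliar with DPO, the sketch below shows the standard DPO objective: a log-sigmoid loss on the policy-versus-reference log-probability margin between a preferred and a rejected response. The tensor names and the beta value are illustrative assumptions, not values taken from the Tulu 2.5 training code.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """DPO loss over a batch of preference pairs.

    Each argument is a 1-D tensor of summed log-probabilities of the chosen
    or rejected response under the trained policy or the frozen reference.
    """
    chosen_logratios = policy_chosen_logps - ref_chosen_logps
    rejected_logratios = policy_rejected_logps - ref_rejected_logps
    # Bradley-Terry style preference loss on the scaled log-ratio margin.
    return -F.logsigmoid(beta * (chosen_logratios - rejected_logratios)).mean()

# Dummy log-probabilities for a batch of four preference pairs.
print(dpo_loss(*(torch.randn(4) for _ in range(4))))
```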

Innovative Approaches in Machine Unlearning: Insights and Breakthroughs from the first NeurIPS Unlearning Competition on Efficient Data Erasure

Machine unlearning is a cutting-edge area in artificial intelligence that focuses on efficiently erasing the influence of specific training data from a trained model. This field addresses crucial legal, privacy,…
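
As a rough illustration of what an unlearning procedure can look like, the sketch below interleaves gradient ascent on the data to be forgotten with ordinary descent on the data to be retained. It is one common baseline written with assumed names (`forget_loader`, `retain_loader`), not the competition's evaluation protocol or any winning entry.

```python
import torch
from torch import nn, optim

def unlearn(model, forget_loader, retain_loader, epochs=1, lr=1e-4, forget_weight=1.0):
    """Push the model away from the forget set while preserving the retain set."""
    criterion = nn.CrossEntropyLoss()
    optimizer = optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        for (xf, yf), (xr, yr) in zip(forget_loader, retain_loader):
            optimizer.zero_grad()
            # Maximize loss on data to be forgotten (negative sign = gradient ascent) ...
            loss = -forget_weight * criterion(model(xf), yf)
            # ... while keeping performance on the data to be retained.
            loss = loss + criterion(model(xr), yr)
            loss.backward()
            optimizer.step()
    return model
```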

BiGGen Bench: A Benchmark Designed to Evaluate Nine Core Capabilities of Language Models

A systematic and multifaceted approach is needed to evaluate a Large Language Model’s (LLM) proficiency in a given capability. Such a method is necessary to precisely pinpoint the model’s limitations…

OpenVLA: A 7B-Parameter Open-Source VLA Setting New State-of-the-Art for Robot Manipulation Policies

A major weakness of current robotic manipulation policies is their inability to generalize beyond their training data. While these policies, trained for specific skills or language instructions, can adapt to…

Google DeepMind Researchers Propose a Novel Divide-and-Conquer Style Monte Carlo Tree Search (MCTS) Algorithm ‘OmegaPRM’ for Efficiently Collecting High-Quality Process Supervision Data

Artificial intelligence (AI) focuses on creating systems capable of performing tasks requiring human intelligence. Within this field, large language models (LLMs) are developed to understand and generate human…
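
The divide-and-conquer idea behind OmegaPRM can be pictured as a binary search over the steps of a candidate solution for the first step from which correct completions can no longer be sampled. The helper `rollout_success_rate` below is hypothetical (it would sample rollouts from a step prefix and check their final answers); the sketch shows only the search structure, not the released algorithm.

```python
def first_error_step(steps, rollout_success_rate, threshold=0.0):
    """Return the index of the earliest step whose prefix can no longer reach a
    correct answer, or len(steps) if every prefix can."""
    lo, hi = 0, len(steps)
    while lo < hi:
        mid = (lo + hi) // 2
        if rollout_success_rate(steps[:mid + 1]) > threshold:
            lo = mid + 1   # prefix up to mid is still recoverable; error lies later
        else:
            hi = mid       # prefix already fails; error is at mid or earlier
    return lo
```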

This AI Paper from China Proposes Continuity-Relativity indExing with gAussian Middle (CREAM): A Simple yet Effective AI Method to Extend the Context of Large Language Models

Large language models (LLMs) like transformers are typically pre-trained with a fixed context window size, such as 4K tokens. However, many applications require processing much longer contexts, up to 256K…
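
Context-extension methods in this family work by remapping position indices so that long inputs still fall inside the pre-trained window. The sketch below shows plain linear position interpolation with an assumed 4K window; CREAM's continuity-relativity indexing with Gaussian-sampled middle positions is more involved and is not reproduced here.

```python
import torch

def interpolate_positions(seq_len, pretrained_window=4096):
    """Map positions 0..seq_len-1 into the pre-trained window by linear scaling."""
    positions = torch.arange(seq_len, dtype=torch.float32)
    if seq_len <= pretrained_window:
        return positions
    # Scale so the largest position still lands inside the trained range.
    return positions * (pretrained_window - 1) / (seq_len - 1)

# Eight positions scaled evenly into the range [0, 3] of a 4-position window.
print(interpolate_positions(8, pretrained_window=4))
```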

Generalization of Gradient Descent in Over-Parameterized ReLU Networks: Insights from Minima Stability and Large Learning Rates

Neural networks trained with gradient descent operate effectively even in over-parameterized settings with random weight initialization, often finding globally optimal solutions despite the non-convex nature of the problem. These solutions, achieving zero…
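
The regime the paper studies can be reproduced in a few lines: a one-hidden-layer ReLU network with far more parameters than training points, randomly initialized and trained by plain gradient descent, usually drives the training loss close to zero. The dimensions and learning rate below are illustrative assumptions, not the paper's experimental setup.

```python
import torch
from torch import nn

torch.manual_seed(0)
n, d, width = 32, 5, 1024          # far more hidden units than samples
X, y = torch.randn(n, d), torch.randn(n, 1)

model = nn.Sequential(nn.Linear(d, width), nn.ReLU(), nn.Linear(width, 1))
opt = torch.optim.SGD(model.parameters(), lr=0.05)

for step in range(2000):
    loss = ((model(X) - y) ** 2).mean()   # full-batch MSE, i.e. plain gradient descent
    opt.zero_grad()
    loss.backward()
    opt.step()

print(loss.item())  # typically falls close to zero despite the non-convex objective
```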

Microsoft Researchers Introduce Samba 3.8B: A Simple Mamba+Sliding Window Attention Architecture that Outperforms Phi3-mini on Major Benchmarks

Large Language Models (LLMs) face challenges in capturing complex long-term dependencies and achieving efficient parallelization for large-scale training. Attention-based models have dominated LLM architectures due to their ability to address…
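
The sliding-window half of the architecture is easy to picture: each token attends only to itself and a fixed number of preceding tokens, as in the mask sketch below. The window and sequence lengths are illustrative; this is not Samba's implementation, which interleaves such attention layers with Mamba layers.

```python
import torch

def sliding_window_mask(seq_len, window):
    """Boolean mask where True marks the key positions a query token may attend to."""
    i = torch.arange(seq_len).unsqueeze(1)   # query positions
    j = torch.arange(seq_len).unsqueeze(0)   # key positions
    causal = j <= i                          # no attention to future tokens
    local = (i - j) < window                 # only the last `window` tokens
    return causal & local

print(sliding_window_mask(seq_len=6, window=3).int())
```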

Optimizing for Choice: Novel Loss Functions Enhance AI Model Generalizability and Performance

Artificial intelligence (AI) is focused on developing systems capable of performing tasks that typically require human intelligence, such as learning, reasoning, problem-solving, perception, and language understanding. These technologies have various…

MAGPIE: A Self-Synthesis Method for Generating Large-Scale Alignment Data by Prompting Aligned LLMs with Nothing

Large language models (LLMs) have become essential tools in artificial intelligence due to their ability to process and generate human-like text, enabling them to perform a wide range of tasks. These models rely heavily…
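
The trick in the headline ("prompting aligned LLMs with nothing") can be sketched as follows: feed an aligned chat model only its pre-query template, so that its continuation is itself a synthetic user instruction, which can then be answered in a second pass to form an instruction-response pair. The model name and template string below are illustrative assumptions, not MAGPIE's released pipeline.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Meta-Llama-3-8B-Instruct"   # assumed example model
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Pre-query template only: everything up to where the user's text would start.
pre_query = "<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n\n"
inputs = tok(pre_query, return_tensors="pt", add_special_tokens=False)

# The model "fills in" a plausible user instruction, because that is what
# follows this prefix in its alignment data.
out = model.generate(**inputs, max_new_tokens=64, do_sample=True)
instruction = tok.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(instruction)
```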