
This Paper Reveals Insights from Reproducing OpenAI’s RLHF (Reinforcement Learning from Human Feedback) Work: Implementation and Scaling Explored

Mar 29, 2024

In recent years, pre-trained large language models (LLMs) have advanced enormously. These LLMs are trained to predict the next token given the previous tokens and, given a suitable prompt, can solve a wide range of natural language processing (NLP) tasks. However, the next-token prediction objective deviates from the fundamental aim of “outputting content that humans prefer.”

To address this gap, Reinforcement Learning from Human Feedback (RLHF) was introduced as a pipeline that collects pairwise human preferences, trains a reward model (RM) to capture those preferences, and uses Reinforcement Learning (RL) to produce a model whose outputs humans prefer. Reproducing OpenAI’s RLHF pipeline has proven challenging for the open-source community for several reasons:

  1. RL and RLHF have many subtle implementation details that can significantly impact training stability.
  2. The models are hard to evaluate on open-ended tasks, e.g., assessing the quality of 800 lines of generated code for a coding task.
  3. They take a long time to train and iterate.
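
For context, the reward model in this pipeline is trained on pairwise comparisons, scoring the human-preferred response above the rejected one. Below is a minimal PyTorch sketch of the standard pairwise (Bradley-Terry style) loss; the function name and dummy rewards are ours for illustration, not taken from the authors’ code.

```python
import torch
import torch.nn.functional as F

def pairwise_rm_loss(chosen_rewards: torch.Tensor,
                     rejected_rewards: torch.Tensor) -> torch.Tensor:
    """Pairwise preference loss: push the reward of the human-preferred
    (chosen) response above the reward of the rejected one."""
    # -log(sigmoid(r_chosen - r_rejected)), averaged over the batch
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

# Dummy scalar rewards for a batch of four comparison pairs
chosen = torch.tensor([1.2, 0.3, 0.8, -0.1])
rejected = torch.tensor([0.4, 0.5, -0.2, -0.9])
loss = pairwise_rm_loss(chosen, rejected)
```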

Hugging Face, Mila, and Fuxi AI Lab researchers have taken a unique approach, presenting a high-precision reproduction of the RLHF scaling behaviors reported in OpenAI’s seminal TL;DR summarization work. They meticulously built an RLHF pipeline, focusing on over 20 key implementation details, and adopted a unified learning rate for SFT, RM, and PPO training to enhance reproducibility.

They used the transformers library’s implementation of the Pythia models together with DeepSpeed’s ZeRO Stage 2 to fit the models into GPU memory; for 6.9B PPO training, they also offloaded the reference policy and reward model to the CPU. Dropout layers were turned off during training. This is particularly important for PPO: with dropout active, token log probabilities are not reproducible, which makes the KL penalty unreliable and causes the PPO ratios to deviate from 1 during the first epoch, leading to optimization problems. For consistency, dropout was also turned off for SFT and RM training.
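
One simple, framework-agnostic way to switch off dropout is to zero the probability of every nn.Dropout module in the loaded model. The sketch below assumes a Pythia checkpoint from the Hugging Face Hub; the helper name is ours and not necessarily how the authors implemented it.

```python
import torch
from transformers import AutoModelForCausalLM

def disable_dropout(model: torch.nn.Module) -> None:
    """Set every dropout probability to 0 so token log-probs are deterministic
    and the PPO ratios stay at exactly 1 during the first epoch."""
    for module in model.modules():
        if isinstance(module, torch.nn.Dropout):
            module.p = 0.0

# Illustrative usage with a Pythia checkpoint
policy = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-1b-deduped")
disable_dropout(policy)
```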

The PPO implementation optimizes the RLHF objective, leading to a significant increase in the overall score. Their best 6.9B model is preferred by GPT nearly 80% of the time, demonstrating its practical strength. For their 1B-sized model, the average preference consistency across multiple random runs is close to 0.4, indicating that the 1B model captures a different set of preferences. PPO models are also shown to outperform SFT models across all summary lengths, further reinforcing the practical relevance of the research.
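
The RLHF objective that PPO optimizes is the reward-model score penalized by the KL divergence from the reference (SFT) policy. The sketch below shows the commonly used per-token reward shaping; the function name and the kl_coef value are illustrative assumptions, not figures from the paper.

```python
import torch

def shaped_rewards(rm_score: torch.Tensor,
                   logprobs: torch.Tensor,
                   ref_logprobs: torch.Tensor,
                   kl_coef: float = 0.05) -> torch.Tensor:
    """Per-token PPO rewards: a KL penalty against the frozen reference policy
    at every token, plus the scalar reward-model score on the final token."""
    kl = logprobs - ref_logprobs        # per-token KL estimate
    rewards = -kl_coef * kl             # discourage drifting from the reference policy
    rewards[:, -1] += rm_score          # add the RM score at the last generated token
    return rewards

# Dummy example: batch of 2 responses, 5 generated tokens each
logprobs = torch.randn(2, 5)
ref_logprobs = torch.randn(2, 5)
rm_score = torch.tensor([0.7, -0.2])
rewards = shaped_rewards(rm_score, logprobs, ref_logprobs)
```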

In conclusion, the Hugging Face, Mila, and Fuxi AI Lab researchers have successfully reproduced the RLHF scaling behaviors reported in OpenAI’s seminal TL;DR summarization work with high precision. Their RLHF-trained Pythia models demonstrate gains in response quality that scale with model size, and their 2.8B and 6.9B models outperform OpenAI’s released 1.3B checkpoint, underscoring the importance of model size in achieving superior results.


Check out the Paper and GitHub. All credit for this research goes to the researchers of this project.
