
Researchers from the University of Washington and Meta AI Present a Simple Context-Aware Decoding (CAD) Method to Encourage the Language Model to Attend to Its Context During Generation

Mar 31, 2024

Language models (LMs) have proven remarkably effective at generating coherent and fluent continuations of a prompt or document prefix. During text generation, they mostly rely on two sources of knowledge: (1) prior knowledge, which is learned during pretraining and stored implicitly within the model parameters; and (2) context knowledge, which is passed as input in the prefix context. However, it remains an open question how a pretrained LM, particularly a vanilla LM without task-specific finetuning, balances these two knowledge sources during generation. LMs often fail to pay enough attention to the input context and generate texts that are unfaithful to it or contain hallucinations.

Previous research shows that LMs often fail to pay sufficient attention to new information introduced in the context knowledge. This can lead to hallucination in summarization, where the generated summaries include facts that are not present in the input document (but were learned by the LM during the training phase). Insufficient attention to the context is especially problematic when the context knowledge contradicts the prior knowledge. For instance, when LLaMA is presented with an up-to-date document, “Argentina won the FIFA World Cups in 1978, 1986 and 2022 …”, in its context, it still predicts “Two” in response to the question “How many World Cups have Argentina won?”, due in part to the outdated training data from which the model learned that answer.
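To make this failure mode concrete, the sketch below queries an off-the-shelf causal LM both with and without the conflicting document prepended to the question, so the two greedy continuations can be compared. It is illustrative only; the checkpoint, prompts, and decoding settings are assumptions for demonstration, not taken from the paper.

```python
# Illustrative probe of a knowledge conflict: compare the model's answer
# with and without a context document prepended to the question.
# The checkpoint and prompts below are assumptions for demonstration.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "huggyllama/llama-7b"  # any causal LM checkpoint works here
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

context = ("Argentina won the FIFA World Cups in 1978, 1986 and 2022, "
           "making them one of the most successful national teams.")
question = "How many World Cups have Argentina won? Answer:"

for prompt in (question, context + "\n" + question):
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=5, do_sample=False)
    answer = tokenizer.decode(output[0][inputs.input_ids.shape[1]:],
                              skip_special_tokens=True)
    print(repr(answer))
```

If the model leans on its prior knowledge, the two answers can agree even though the context clearly supports a different one.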

Researchers from the University of Washington and Meta AI present context-aware decoding (CAD), which follows a contrastive output distribution that amplifies the difference between the output probabilities when a model is used with and without context. CAD is particularly effective in overriding a model’s prior knowledge when it contradicts the provided context, leading to substantial improvements in tasks where resolving the knowledge conflict is essential.
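Concretely, writing c for the context document, x for the query, and y_{<t} for the tokens generated so far, the adjusted distribution can be summarized as follows (notation adapted for this summary), with a hyperparameter α controlling how strongly the context is amplified; α = 0 recovers standard decoding.

```latex
\tilde{p}_\theta\!\left(y_t \mid c, x, y_{<t}\right)
\;\propto\;
p_\theta\!\left(y_t \mid c, x, y_{<t}\right)
\left(
\frac{p_\theta\!\left(y_t \mid c, x, y_{<t}\right)}
     {p_\theta\!\left(y_t \mid x, y_{<t}\right)}
\right)^{\alpha}
```

Equivalently, in logit space this amounts to taking a softmax over (1 + α) times the logits computed with the context minus α times the logits computed without it.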

CAD samples from a new output distribution that amplifies the difference between the output probabilities with and without the context document. This yields a new form of contrastive decoding that effectively downweights the prior knowledge when more relevant contextual information is provided. CAD can be used with off-the-shelf pretrained LMs without any additional training. Concretely, the authors adjust the model’s original output probability distribution using the pointwise mutual information (PMI) between the context and the generation, conditioned on the input.
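A minimal sketch of how this adjustment could be applied at each decoding step is shown below. It is not the authors’ released code; the checkpoint, the value of alpha, the prompts, and the greedy decoding loop are assumptions made for illustration.

```python
# Minimal sketch of context-aware decoding (CAD): at each step, combine the
# logits computed with and without the context document so that tokens favored
# by the context are upweighted. Checkpoint, alpha, and prompts are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in for any off-the-shelf causal LM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

context = "Argentina won the FIFA World Cups in 1978, 1986 and 2022."
question = "How many World Cups have Argentina won? Answer:"
alpha = 0.5  # 0.0 recovers standard decoding; larger values trust the context more

with_ctx = tokenizer(context + "\n" + question, return_tensors="pt").input_ids
without_ctx = tokenizer(question, return_tensors="pt").input_ids

generated = []
with torch.no_grad():
    for _ in range(5):  # generate a few tokens greedily
        logits_ctx = model(with_ctx).logits[0, -1]       # logits given c, x, y_<t
        logits_noctx = model(without_ctx).logits[0, -1]  # logits given x, y_<t
        # CAD-style contrastive combination in logit space
        cad_logits = (1 + alpha) * logits_ctx - alpha * logits_noctx
        next_id = torch.argmax(cad_logits).view(1, 1)
        with_ctx = torch.cat([with_ctx, next_id], dim=-1)
        without_ctx = torch.cat([without_ctx, next_id], dim=-1)
        generated.append(next_id.item())

print(tokenizer.decode(generated))
```

Note that each new token is appended to both input sequences, so the two distributions stay conditioned on the same partial generation, and no additional training or model modification is required.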

Experimentally, the authors show that CAD outperforms the standard decoding algorithm by a large margin across all eight models and both datasets. Specifically, when applied to LLaMA-30B on CNN-DM, CAD yields a 21% increase in ROUGE-L, a 14.3% increase in factKB, and a 7.8% increase in BERT-P. These results demonstrate that CAD can effectively improve the quality and factuality of the summaries generated by a diverse set of LMs.

In conclusion, researchers from the University of Washington and Meta AI present CAD, which samples from a contrastive output distribution that amplifies the difference between the output probabilities when a model is used with and without context, encouraging the LM to pay sufficient attention to its context during generation. Without additional training, CAD significantly improves the faithfulness of different LM families, including OPT, GPT, LLaMA, and FLAN-T5, on summarization tasks. CAD is particularly effective in overriding a model’s prior knowledge when it contradicts the provided context, leading to substantial improvements in tasks where resolving the knowledge conflict is essential.


Check out the Paper. All credit for this research goes to the researchers of this project.


[Source: AI Techpark]
