
Cleanlab Introduces the Trustworthy Language Model (TLM) that Addresses the Primary Challenge to Enterprise Adoption of LLMs: Unreliable Outputs and Hallucinations

Apr 29, 2024

While 55% of organizations are experimenting with generative AI, only 10% have implemented it in production, according to a recent Gartner poll. The chief obstacle to moving LLMs into production is their tendency to generate erroneous outputs, known as hallucinations, which rules them out of applications that demand correct results. Incidents like Air Canada’s chatbot misinforming customers about refund policies and a law firm using ChatGPT to produce a brief filled with fabricated citations illustrate the risks of deploying unreliable LLMs. Similarly, New York City’s “MyCity” chatbot has given incorrect answers to questions about local laws, underscoring how hard it is to guarantee accurate outputs from LLMs.

Cleanlab presents the Trustworthy Language Model (TLM), which addresses the primary challenge hindering enterprise adoption of LLMs: unreliable outputs and hallucinations. TLM attaches a trustworthiness score to every LLM response, letting users identify and control erroneous outputs and deploy generative AI in scenarios that were previously off-limits. Extensive benchmarking shows that TLM outperforms existing LLMs in accuracy while offering better-calibrated trustworthiness scores, yielding cost and time savings over alternative methods for managing LLM uncertainty.

Because hallucinations cannot be eliminated from LLMs entirely, TLM assigns a trustworthiness score to each output so that users can detect when a hallucination has occurred. The design prioritizes minimizing false negatives: whenever the model does hallucinate, the trustworthiness score should be low, which is what makes reliable deployment of LLM-based applications possible.

The TLM API serves multiple purposes. It can act as a drop-in replacement for existing LLMs, offering a .prompt() method that returns both a response and a trustworthiness score, which enables new kinds of applications. TLM also improves response accuracy by internally generating multiple candidate responses and returning the one with the highest trustworthiness score. Through its .get_trustworthiness_score() method, TLM can score outputs from existing LLMs or human-generated data. Under the hood, TLM adds a trust layer on top of existing LLMs: users can choose from popular base models like GPT-3.5 and GPT-4, or wrap any LLM to which TLM has only black-box API access (a usage sketch follows below). For enterprise needs, such as adding trustworthiness to custom fine-tuned LLMs, users can engage with Cleanlab directly.
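To make that concrete, here is a minimal sketch of calling the two methods named above. The .prompt() and .get_trustworthiness_score() method names come from the article; the package name, client setup, and the exact shape of the returned output are assumptions for illustration and may differ from Cleanlab’s actual SDK.

```python
# Minimal sketch of the TLM API described above. The method names .prompt()
# and .get_trustworthiness_score() are from the article; the cleanlab_studio
# package, Studio client, and output fields are assumed for illustration.
from cleanlab_studio import Studio  # assumed client package

studio = Studio("<YOUR_API_KEY>")
tlm = studio.TLM()  # presumably configurable with a base model, e.g. GPT-4

# Drop-in replacement for a plain LLM call: response plus trustworthiness score.
output = tlm.prompt("What year was the Eiffel Tower completed?")
print(output["response"])               # the model's answer
print(output["trustworthiness_score"])  # e.g. 0.93; closer to 1 = more trustworthy

# Score an answer produced elsewhere (another LLM, or human-entered data).
score = tlm.get_trustworthiness_score(
    "What year was the Eiffel Tower completed?",
    response="1889",
)
print(score)
```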

The evaluation compares Cleanlab’s TLM to OpenAI’s GPT-4, focusing on response accuracy and cost/time savings. TLM’s trustworthiness score makes LLM outputs more dependable by detecting errors efficiently; compared with self-evaluation and token-probability-based methods, TLM’s assessment also accounts for epistemic uncertainty, offering better-calibrated reliability. In practice, TLM optimizes resource allocation by flagging low-scoring outputs for human review (a sketch of this triage pattern follows below), supporting robust decision-making. Berkeley Research Group (BRG) has already seen significant cost savings from leveraging TLM, according to Steven Gawthorpe, PhD, Associate Director and Senior Data Scientist at BRG.
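To illustrate that triage pattern, the sketch below auto-accepts answers above a chosen trustworthiness threshold and escalates the rest to a person. The tlm client is the one sketched earlier; the threshold value and the send_to_human_review helper are hypothetical placeholders, not part of any published API.

```python
# Hypothetical triage loop: auto-accept trusted answers, escalate the rest.
TRUST_THRESHOLD = 0.8  # assumed cutoff; tune against your own error tolerance


def answer_with_escalation(tlm, question, send_to_human_review):
    """Return a trusted answer directly, or flag it for human review."""
    output = tlm.prompt(question)  # response + trustworthiness score, as above
    if output["trustworthiness_score"] >= TRUST_THRESHOLD:
        return output["response"]  # trusted enough to use automatically
    # A low score signals a possible hallucination: route to a reviewer
    # instead of silently returning a potentially wrong answer.
    return send_to_human_review(
        question, output["response"], output["trustworthiness_score"]
    )
```

Raising the threshold trades more human review for fewer undetected errors; the right setting depends on how costly a wrong answer is in the application.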

In conclusion, Cleanlab’s Trustworthy Language Model (TLM) is a comprehensive answer to the challenges organizations face in deploying LLM applications. By attaching trustworthiness scores that expose hallucinations, TLM enables more accurate and dependable outputs. With its ability to augment existing LLMs and raise trust across a range of applications, TLM represents a significant advance in the deployment of generative AI, paving the way for broader adoption and utilization in enterprise settings.



[Source: AI Techpark]
