
Predibase Launches New Offering to Fine-tune and Serve 100x More LLMs at No Additional Cost

Oct 25, 2023

Organizations across industries are under pressure to figure out how and where to use generative AI solutions, and more specifically how to implement Large Language Models (LLMs) in their workflows. On one hand, there is genuine excitement about the potential of LLMs; on the other, there is a fear of falling behind the competition if a company is unable to leverage the power of AI.

To address some of the concerns about using LLMs, such as privacy and cost, Predibase, the developer platform for open-source AI, announced the launch of a software development kit (SDK) designed for efficient fine-tuning and serving of LLMs. The new kit is expected to reduce deployment cost and complexity and to increase training speed.

Predibase recently conducted a study highlighting a surprisingly low adoption rate for LLMs among businesses: while a high percentage of companies have started working with LLMs, most of that work remains in the experimentation phase, and only 23 percent have deployed or plan to deploy commercial LLMs.

The new SDK will allow developers to train smaller, task-specific LLMs without expensive GPU hardware; readily available commodity GPUs in the cloud can be used for training. The fine-tuned models can then be served with Predibase’s LLM serving architecture, which is designed to be lightweight and can load and unload models on demand in seconds. This helps reduce the additional cost of serving multiple models.
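To make that idea concrete, the sketch below shows the kind of parameter-efficient (LoRA) fine-tuning that lets a task-specific model be trained on a single commodity GPU: the base model’s weights stay frozen and only small low-rank adapter matrices are updated. It uses the open-source Hugging Face transformers and peft libraries rather than Predibase’s own SDK, and the model name, hyperparameters, and training example are illustrative placeholders.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_id = "facebook/opt-350m"  # placeholder small base model
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

# Attach LoRA adapters: only a fraction of a percent of the parameters
# become trainable, which is what keeps memory needs low.
lora = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-4)

# One illustrative training step on a toy prompt; a real job would loop
# over a task-specific dataset.
batch = tokenizer("Summarize: Predibase launches an SDK for fine-tuning LLMs.",
                  return_tensors="pt").to(device)
loss = model(**batch, labels=batch["input_ids"]).loss
loss.backward()
optimizer.step()

# Only the small adapter is saved; the base model is left untouched.
model.save_pretrained("my-task-adapter")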


“More than 75% of organizations won’t use commercial LLMs in production due to concerns over ownership, privacy, cost, and security, but productionizing open-source LLMs comes with its own set of infrastructure challenges,” said Dev Rishi, co-founder and CEO of Predibase. “Even with access to high-performance GPUs in the cloud, training costs can reach thousands of dollars per job due to a lack of automated, reliable, cost-effective fine-tuning infrastructure. Debugging and setting up environments require countless engineering hours. As a result, businesses can spend a fortune even before getting to the cost of serving in production.”

As the AI landscape evolves rapidly, Predibase aims to level the playing field for startups and small companies that lack the resources to compete with industry giants. Along with cost savings and reduced complexity, the new SDK offers ease of use: the platform’s simplicity lets novice users build a model, after which more seasoned practitioners can fine-tune its parameters. This can significantly shorten the deployment timeline.

Predibase claims that its platform offers an overall 15x reduction in deployment costs and a 50x improvement in training speed for task-specific models. This includes automatic memory-efficient fine-tuning that works on commodity GPUs, such as the Nvidia T4. Predibase’s training system automatically applies optimizations to ensure training succeeds on whatever type of hardware is available.
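The article does not detail which optimizations Predibase applies, but a common recipe for fitting fine-tuning onto a 16 GB T4-class GPU is QLoRA-style training: the frozen base model is loaded in 4-bit precision and only LoRA adapters are updated. The snippet below illustrates that general approach with open-source libraries and is not a description of Predibase’s internals; the model name and settings are assumptions.

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb = BitsAndBytesConfig(
    load_in_4bit=True,                     # 4-bit weights cut memory roughly 4x vs fp16
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,  # the T4 has no bfloat16 support
)
model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",           # placeholder 7B base model
    quantization_config=bnb,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"))
model.print_trainable_parameters()         # typically well under 1% of parameters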


The built-in orchestration logic is designed to run each training job on the most cost-effective hardware available in a customer’s cloud. In addition, businesses can tailor each LLM deployment to their needs, scaling up and down with dynamic or stand-alone hosting. Each fine-tuned model can be loaded and queried within seconds of fine-tuning, without deploying every model on a separate GPU.
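As a rough illustration of how many fine-tuned models can share a single deployment, the sketch below keeps one base model resident on the GPU and swaps small fine-tuned adapters in and out per request, again using the open-source peft library. This is not Predibase’s serving stack, and the adapter paths and names are hypothetical; it only shows why a separate GPU per model is unnecessary when the models share a common base.

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "facebook/opt-350m"  # placeholder base model shared by all adapters
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

# Load one fine-tuned adapter (a few megabytes), then add more on demand.
model = PeftModel.from_pretrained(base, "adapters/support-bot", adapter_name="support")
model.load_adapter("adapters/sql-gen", adapter_name="sql")

def generate(adapter: str, prompt: str) -> str:
    model.set_adapter(adapter)  # switch fine-tuned behavior without reloading the base
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=64)
    return tokenizer.decode(out[0], skip_special_tokens=True)

print(generate("support", "How do I reset my password?"))
print(generate("sql", "List customers who signed up this week."))

# Adapters that are no longer needed can be dropped without touching the base model.
model.delete_adapter("sql")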

Along with the new SDK, Predibase also introduced the Predibase AI Cloud, a new service that supports multiple cloud environments and regions and selects a combination of training hardware based on performance criteria and cost.

The introduction of Predibase’s SDK is set to democratize access to advanced technology. This marks a significant shift in the AI landscape, as smaller businesses get a fair opportunity to compete against bigger players in the market. As a result, we can expect increased competitiveness and innovation across various industries. 

Related Items 

Predibase Launches AI Platform, Secures Additional $12.2M in Series A Funding Round 

VMware Unveils New Generative AI Tools, Expands Nvidia Partnership

Breaking the Language Barrier: The Unprecedented Capabilities Large Language Models like ChatGPT Offer Businesses


#AI/ML/DL #genAI #LLM #SDK
[Source: EnterpriseAI]
