DeepSeek has recently released its latest open-source model, DeepSeek-V2-Chat-0628, on Hugging Face. The release marks a significant advancement in AI-driven text generation and chatbot capabilities, positioning DeepSeek at the forefront of the open-source landscape.
DeepSeek-V2-Chat-0628 is an enhanced iteration of the previous DeepSeek-V2-Chat model. This new version has been meticulously refined to deliver superior performance across various benchmarks. According to the LMSYS Chatbot Arena Leaderboard, DeepSeek-V2-Chat-0628 has secured an impressive overall ranking of #11, outperforming all other open-source models. This achievement underscores DeepSeek’s commitment to advancing the field of artificial intelligence and providing top-tier solutions for conversational AI applications.
The improvements in DeepSeek-V2-Chat-0628 are extensive, covering various critical aspects of the model’s functionality. Notably, the model exhibits substantial enhancements in several benchmark tests:
- HumanEval: The score improved from 81.1 to 84.8, reflecting a 3.7-point increase.
- MATH: A remarkable leap from 53.9 to 71.0, indicating a 17.1-point improvement.
- BBH: The performance score rose from 79.7 to 83.4, marking a 3.7-point enhancement.
- IFEval: A significant increase from 63.8 to 77.6, a 13.8-point improvement.
- Arena-Hard: Demonstrated the most dramatic improvement, with the score jumping from 41.6 to 68.3, a 26.7-point rise.
- JSON Output (Internal): Improved from 78 to 85, showing a 7-point enhancement.
The DeepSeek-V2-Chat-0628 model also features optimized instruction-following for content placed in the system prompt ("system") area, significantly enhancing the user experience. This optimization benefits tasks such as immersive translation and Retrieval-Augmented Generation (RAG), providing users with a more intuitive and efficient interaction with the AI.
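To make the RAG use case concrete, the sketch below shows one common way to place retrieved context in a system message and render it with the model's chat template. This is a minimal illustration, not taken from DeepSeek's documentation; the retrieved passage and instructions are placeholders, and only the Hugging Face repo ID is assumed to match the public release.

```python
# Minimal sketch: a RAG-style prompt with retrieved context in the system message.
# The passage text and instructions are illustrative placeholders.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "deepseek-ai/DeepSeek-V2-Chat-0628", trust_remote_code=True
)

retrieved_passage = "DeepSeek-V2 is a Mixture-of-Experts language model."  # placeholder context
messages = [
    {"role": "system",
     "content": f"Answer using only the context below.\n\nContext:\n{retrieved_passage}"},
    {"role": "user", "content": "What kind of model is DeepSeek-V2?"},
]

# apply_chat_template renders the messages with the model's chat template
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```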
For those interested in deploying DeepSeek-V2-Chat-0628, inference in BF16 format requires eight GPUs with 80 GB of memory each. Users can run the model with Hugging Face Transformers, which involves importing the necessary libraries and setting up the model and tokenizer with the appropriate configuration. Compared to previous versions, the complete chat template has been updated, improving the model's response generation and interaction capabilities. The new template includes specific formatting and token settings that help produce more accurate and relevant outputs based on user inputs.
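A BF16 inference sketch with Transformers is shown below. It follows the general pattern published for the DeepSeek-V2 family, but the memory budget, `device_map` choice, and generation settings are assumptions that should be checked against the official model card.

```python
# Sketch of BF16 inference with Hugging Face Transformers for DeepSeek-V2-Chat-0628.
# Memory limits and device_map are illustrative; verify against the model card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

model_name = "deepseek-ai/DeepSeek-V2-Chat-0628"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)

# Spread the model across 8 x 80 GB GPUs, leaving headroom for activations.
max_memory = {i: "75GB" for i in range(8)}
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    trust_remote_code=True,
    device_map="sequential",
    torch_dtype=torch.bfloat16,
    max_memory=max_memory,
)
model.generation_config = GenerationConfig.from_pretrained(model_name)
model.generation_config.pad_token_id = model.generation_config.eos_token_id

# The updated chat template is applied automatically by apply_chat_template.
messages = [{"role": "user", "content": "Write a quicksort function in Python."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(input_ids.to(model.device), max_new_tokens=200)
print(tokenizer.decode(outputs[0][input_ids.shape[1]:], skip_special_tokens=True))
```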
vLLM is also recommended for model inference, offering a streamlined way to integrate the model into various applications. The vLLM setup involves merging a pull request into the vLLM codebase and then configuring the model and tokenizer to handle the desired tasks efficiently.
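Once a vLLM build with DeepSeek-V2 support is in place, offline batched inference can look roughly like the sketch below. The tensor-parallel size, context length, and sampling settings are illustrative assumptions, and the exact `generate` call pattern can vary across vLLM versions.

```python
# Sketch of offline inference with vLLM, assuming a build that supports DeepSeek-V2.
# Parallelism and sampling settings are illustrative only.
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

model_name = "deepseek-ai/DeepSeek-V2-Chat-0628"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)

llm = LLM(
    model=model_name,
    tensor_parallel_size=8,   # one shard per 80 GB GPU
    max_model_len=8192,
    trust_remote_code=True,
    enforce_eager=True,
)
sampling_params = SamplingParams(
    temperature=0.3,
    max_tokens=256,
    stop_token_ids=[tokenizer.eos_token_id],
)

messages = [{"role": "user", "content": "Summarize the DeepSeek-V2-Chat-0628 release in one sentence."}]
prompt_token_ids = [tokenizer.apply_chat_template(messages, add_generation_prompt=True)]
outputs = llm.generate(prompt_token_ids=prompt_token_ids, sampling_params=sampling_params)
print(outputs[0].outputs[0].text)
```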
The code repository for DeepSeek-V2-Chat-0628 is available under the MIT License, while the model itself is subject to the Model License. The Model License permits commercial use of the DeepSeek-V2 series, including both Base and Chat models, making it accessible for businesses and developers aiming to integrate advanced AI capabilities into their products and services.
In conclusion, the release of DeepSeek-V2-Chat-0628 showcases DeepSeek's ongoing dedication to innovation in artificial intelligence. With impressive performance metrics and an enhanced user experience, this model is poised to set new standards in conversational AI.
Check out the Model Card and API. All credit for this research goes to the researchers of this project.