AI News caught up with Victor Jakubiuk, Head of AI at Ampere Computing, a semiconductor company offering Cloud Native Processors. We discussed how they are driving high-performance, scalable, and energy-efficient solutions built for the sustainable cloud.
In today’s business landscape, artificial intelligence (AI) has been an undeniable game-changer, driving innovation and competitiveness across all industries. Yet, a critical hurdle has recently emerged as companies rapidly shift to cloud-native processes: a severe shortage of servers and rising operational costs.
With soaring demand for computational power risking the seamless integration of AI-driven initiatives, businesses now face the urgent task of finding innovative, affordable and far more sustainable solutions to tackle this shortage – a shortage only set to continue.
The impact on the environment is also a concern. A new study reveals that by 2027, the AI industry could consume as much energy as a country like Argentina, the Netherlands, or Sweden. With energy-intensive graphics processing units (GPUs) a popular choice for AI workloads, this computing power has pushed energy consumption and carbon footprints to unprecedented highs.
As businesses scale their digital footprints, the imperative for sustainability becomes increasingly important. The environmental impact of energy-intensive hardware poses a moral and practical challenge, demanding solutions that reconcile performance with responsible resource usage.
“Efficiency is key in the future of computing,” explains Jakubiuk. “While the universe of compute and data is expanding exponentially, the focus on energy efficiency in individual workloads is notably increasing.”
“Historically, GPUs were the go-to for AI model training due to their compute power. However, they are power-hungry and inefficient for production,” he says. “Deploying GPUs for inference transfers these inefficiencies, compounding power, cost, and operational complexities.”
This is all the more notable as processes scale, he warns. While AI training requires large amounts of compute up front, AI inferencing can require up to 10x more total compute over time, creating an even larger problem at scale as AI usage increases.
Ampere Computing has set out to deliver solutions that meet these needs. “We focus solely on efficient AI inference on less energy-hungry central processing units (CPUs), delivering unparalleled efficiency and cost-effectiveness in the cloud,” Jakubiuk says. “Our software and hardware solutions offer a seamless transition without necessitating an overhaul of existing frameworks, setting us apart from closed-source alternatives.”
“Our software, compatible with all open-source frameworks, enables conscious computing without the need for extensive rewrites, unlike proprietary systems,” he says, noting that Ampere Cloud Native Processors are available with all major cloud providers such as Oracle Cloud, Google Cloud, and Microsoft Azure, and on Scaleway among the European CSPs.
AI workloads today fall into four categories: computer vision, NLP, recommendation engines, and generative AI. Ampere Computing’s combination of software and hardware serves all of these workloads, enabling sustainable AI deployments at scale.
“AI, being a horizontal technology, finds compatibility across various industries, from conservative sectors like finance to innovative realms like self-driving cars. The versatility of Ampere’s solutions caters to a wide spectrum of cloud-based applications.”
Ampere is also one of the founding members of the recently launched AI Platform Alliance, formed specifically to promote better collaboration and openness in AI at a pivotal moment not just for the technology industry, but for the world at large.
Because AI solutions can be complex to implement, the AI Platform Alliance’s members will work together to validate joint AI solutions that provide a better alternative to the GPU-based status quo.
By developing these solutions as a community, the group aims to accelerate the pace of AI innovation: making AI platforms more open and transparent, increasing the efficiency of AI in solving real-world problems, and delivering sustainable infrastructure at scale that is environmentally friendly and socially responsible.
“We’re at the forefront, actively advocating for greater efficiency in AI and beyond,” says Jakubiuk, who will be speaking at the upcoming AI & Big Data Expo Global event in London on November 30th.
Looking ahead, Jakubiuk envisions a future where efficiency reigns supreme. “The future of computing lies in greater power efficiency. Our relentless pursuit is to drive efficiency across workloads, pushing the world towards higher efficacy,” he asserts. The future is bright as the company continues to push towards enabling the sustainable cloud.
You can find out more about Ampere’s scalable solutions here.
The post Ampere Computing: Unlocking a Path to the Sustainable Cloud appeared first on AI News.