Google unveils first Arm-based CPU designed for data centres, Gen AI capabilities


Tech giant Google has unveiled Axion, its first Arm-based CPU (central processing unit) designed specifically for data centres. The launch, made at the Google Cloud Next 2024 event, also included new Gen AI capabilities and AI-focused infrastructure.

Built on Arm Holdings’ Neoverse V2 CPU, Axion processors deliver high performance for general-purpose workloads such as web and app servers, open-source databases, data analytics engines, media processing, and CPU-based AI training and inference, according to Google.

Arm processors are well suited to cloud and hyperscale computing, edge computing, telecommunications, and high-performance computing.

Google's other announcements include A3 Mega instances, NVIDIA's B200 and GB200 chips, the general availability of TPU v5p, and AI-optimised storage solutions for handling intensive AI workloads.

As every application or workload in the cloud has specific technical needs, optimising for these workloads can lead to better performance and interoperability between applications. Arm-based processors offer a more budget-friendly and power-efficient alternative, and are widely used across devices from smartphones to servers.

As stated in the company’s blog post, Google claims Axion delivers up to 50% better performance and up to 60% better energy efficiency than comparable current-generation x86-based instances.

New capabilities 

Google said that its AI portfolio has also enabled the creation of AI agents that can assist in tasks such as customer service, employee productivity, and content creation.

The California-based firm has introduced Google Vids, an AI-powered video creation application for work, along with AI add-ons designed for meetings, messaging, and security.

The AI-powered app will serve as a video assistant, covering tasks from writing and production to editing. Based on user preferences, it will help users generate an editable storyboard and assemble a first draft with suggested scenes from stock videos, images, and background music.

The application will also integrate with other Workspace apps such as Docs, Sheets, and Slides, and is set to launch in Workspace Labs in June.


The company has also enabled AI to run anywhere through Google Distributed Cloud (GDC), allowing users to choose the environment, configuration, and controls. It has additionally expanded data residency options for data stored at rest in Generative AI on Vertex AI services to 11 new countries.

“Leading mobile provider Orange, which operates in 26 countries where local data must be kept in each country, leverages AI on GDC to improve network performance and enhance customer experiences,” said Thomas Kurian, CEO of Google Cloud, in a blog post.

Google has broadened its model access and grounding capabilities within Vertex AI. This includes new offerings such as Gemini 1.5 Pro, featuring an extensive 1 million token context window, as well as Claude 3, CodeGemma, and Imagen 2.

Additionally, Vertex AI now incorporates grounding with Google Search and with data sourced from enterprise applications such as Workday and Salesforce. In essence, the AI's responses can be grounded in real-world data, improving accuracy in generated outputs and decision-making.


Edited by Kanishk Singh
