Nvidia introduces new chip to power GenAI models

Nvidia has launched a new chip designed for the supercomputers that power GenAI models: the Nvidia H200 Tensor Core GPU.

Bharti Trehan

Nvidia has revealed a new chip designed to power the supercomputers behind generative AI models. Built on the Nvidia Hopper architecture, the platform features the Nvidia H200 Tensor Core GPU, whose advanced memory is suited to the vast datasets central to GenAI and high-performance computing (HPC) workloads.

The unveiling underscores Nvidia's push to extend AI capabilities, offering a robust solution for processing the large data volumes that generative AI and other demanding computing tasks require.

The Nvidia H200 is equipped with HBM3e, a faster, larger memory that accelerates generative AI and large language models and advances scientific computing in HPC workloads. With HBM3e, the H200 delivers 141GB of memory at 4.8 terabytes per second, nearly double the capacity and 2.4 times the bandwidth of its predecessor, the Nvidia A100, continuing the company's steady expansion of GPU capabilities for GenAI and scientific computing.
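
Those multiples can be checked against the A100's published figures (80GB of HBM2e at roughly 2.0 terabytes per second, numbers assumed here rather than stated in this article); a minimal sketch in Python:

```python
# Sanity check of the H200-vs-A100 multiples quoted above.
# The A100 baseline (80 GB, ~2.0 TB/s) is assumed from Nvidia's
# published A100 80GB specs; it does not appear in this article.
h200_capacity_gb, h200_bandwidth_tbps = 141, 4.8
a100_capacity_gb, a100_bandwidth_tbps = 80, 2.0

print(f"Capacity:  {h200_capacity_gb / a100_capacity_gb:.2f}x")       # ~1.76x, "nearly double"
print(f"Bandwidth: {h200_bandwidth_tbps / a100_bandwidth_tbps:.1f}x")  # 2.4x
```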

Nvidia H200 expected to reach global availability through cloud service providers in 2024

Commencing in the second quarter of 2024, the Nvidia H200 will be available from global system manufacturers and major cloud service providers. Amazon Web Services, Google Cloud, Microsoft Azure, and Oracle Cloud Infrastructure will be among the first to deploy H200-powered instances, beginning next year. These partnerships position the H200 for broad accessibility across the cloud computing landscape.

Powered by Nvidia NVLink and NVSwitch high-speed interconnects, the HGX H200 delivers strong performance across diverse application workloads, notably LLM training and inference for models exceeding 175 billion parameters. An eight-way HGX H200 configuration provides over 32 petaflops of FP8 deep learning compute and 1.1TB of aggregate high-bandwidth memory, which Nvidia positions as top-tier performance for generative AI and HPC applications.

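The aggregate memory figure follows directly from the per-GPU spec; the back-of-envelope check below is a sketch, and the per-GPU FP8 number it derives is an inference from the quoted eight-way total, not a figure from the article:

```python
# Back-of-envelope arithmetic for an eight-way HGX H200 configuration.
gpus = 8
memory_per_gpu_gb = 141     # H200 HBM3e capacity per GPU
total_fp8_pflops = 32       # eight-way aggregate quoted by Nvidia

aggregate_memory_tb = gpus * memory_per_gpu_gb / 1024  # ~1.1 TB
fp8_per_gpu_pflops = total_fp8_pflops / gpus           # ~4 PFLOPS (inferred)

print(f"Aggregate HBM: {aggregate_memory_tb:.2f} TB")
print(f"FP8 per GPU:   over {fp8_per_gpu_pflops:.0f} PFLOPS")
```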

The H200 and the other high-end AI products Nvidia plans to release in 2024 and 2025 are seen as positive news for GenAI development. Nvidia is set to host a conference call on its fiscal third-quarter results on November 21.
