It’s been a busy week for NVIDIA, and they are not finished. NVIDIA is rolling out an AI foundry service on Microsoft Azure, and it’s a powerful new tool for businesses diving into custom generative AI.
NVIDIA has made a significant leap in AI computing with the introduction of the NVIDIA HGX H200. Built on the NVIDIA Hopper architecture, the new platform features the NVIDIA H200 Tensor Core GPU, tailored for generative AI and high-performance computing (HPC) workloads, with advanced memory built to handle massive volumes of data.
ASRock Rack is showcasing a diverse array of server systems designed for AI and high-performance computing (HPC) applications at the SC23 conference. Its booth will feature NVIDIA-Certified GPU systems, a new range of products incorporating NVIDIA’s latest GPUs, and several groundbreaking server systems and server boards.
At the upcoming Supercomputing Conference (SC23), GIGABYTE is poised to present its range of servers and cooling technologies, emphasizing solutions for the NVIDIA GH200 Grace Hopper Superchip and the forthcoming AMD Instinct APU. At the event, GIGABYTE plans to demonstrate how it will address core challenges faced by supercomputing data centers, highlighting efficient cooling methods and
One Stop Systems (OSS) introduced its Gen 5 short-depth server for AI computing at the SC23 high-performance computing conference. The system is equipped with four PCIe NVIDIA H100 Tensor Core GPUs and is designed to meet the escalating demand for robust AI Transportables at the edge.
Lenovo and NVIDIA have unveiled an ambitious expansion of their partnership at Lenovo Tech World in Austin, Texas, setting a new paradigm in generative AI. The collaboration reflects a shared vision to democratize the transformative capabilities of generative AI across the broad spectrum of enterprises. Brian and
While there’s a significant amount of hype around dense GPU servers for AI, and rightfully so, the reality is that most AI training projects start on workstations. Although we can now jam up to four NVIDIA RTX 6000 Ada GPUs into a single workstation, what’s more challenging is getting robust storage into these AI boxes. We
This week Brian sits down with our own AI expert, Jordan Ranous, to discuss and speculate on the challenges facing enterprises and small businesses as they try to get a grip on the hot topic of the decade, Artificial Intelligence. If you have been following StorageReview, then you are aware of some of the tests,
NVIDIA has made quite a splash in the world of artificial intelligence (AI) and high-performance computing with its latest unveiling – the NVIDIA GH200 Grace Hopper Superchip. This recent offering has shown outstanding performance in the MLPerf benchmarks, demonstrating NVIDIA’s prowess in cloud and edge AI.
Earlier this year, Intel published performance results comparing Intel Habana Gaudi2 with GPUs from market leader NVIDIA, illustrating Intel’s commitment to AI and proving that AI is not a one-size-fits-all category. At the same time, a joint development between Intel AI researchers and Microsoft Research created BridgeTower, a pre-trained multimodal transformer delivering state-of-the-art performance on vision-language tasks. Hugging