Google unveils the Ironwood TPU, its most powerful AI accelerator yet, delivering massive improvements in inference performance and efficiency.
Lenovo announces a significant refresh of its data storage portfolio, including new storage arrays, software-defined storage, and advances in AI and virtualization support.
From AI-ready workstations to business-class convertibles, here’s how the new ThinkPads stack up.
Veeam integrates support for Anthropic’s Model Context Protocol (MCP), enabling AI systems to access and use data stored in Veeam repositories.
The Scality ARTESCA + Veeam solution reduces complexity, time, and cost, with software designed to run on customers’ preferred hardware platforms.
The Object First Consumption model provides secure, simple, and robust immutable backup storage without the heavy lifting of hardware lifecycle management.
The UALink Consortium ratifies Ultra Accelerator Link 200G 1.0, an open interconnect standard built to meet the needs of growing AI workloads.
AMD launches the Pensando Pollara 400, a fully programmable 400Gbps AI NIC designed to optimize GPU communication and accelerate AI workloads.
CoolIT’s CHx2000 CDU delivers 2MW of liquid cooling in a standard rack, setting a new bar for AI and HPC data center performance.
NVIDIA and Google Cloud collaborate to bring agentic AI to enterprises using Google Gemini AI models on Blackwell HGX and DGX platforms.