UALink Consortium ratifies Ultra Accelerator Link 200G 1.0, an open standard to meet the needs of growing AI workloads.
AMD launches the Pensando Pollara 400, a fully programmable 400Gbps AI NIC designed to optimize GPU communication and accelerate AI workloads.
CoolIT’s CHx2000 CDU delivers 2MW of liquid cooling in a standard rack, setting a new bar for AI and HPC data center performance.
NVIDIA and Google Cloud collaborate to bring agentic AI to enterprises by running Google Gemini AI models on Blackwell HGX and DGX platforms.
Version 3.4 focuses on incremental updates for performance, usability, and flexibility.
IBM integrates two of Meta's latest Llama 4 models, Scout and Maverick, into its watsonx.ai platform.
Proxmox VE 8.4 adds live vGPU migration, third-party backup API, virtiofs sharing, and updated core components for modern IT.
IBM's z17 mainframe has the processing power to deliver 50% more AI inference operations per day than its z16 predecessor.
IBM's acquisition of data and AI solutions leader Hakkoda enables it to expand its development of streamlined, efficient, and cost-effective data estates.
Generac data center backup generators deliver 2.25MW–3.25MW of scalable, reliable power for hyperscale, enterprise, and edge deployments.