Dell expands its AI Factory with NVIDIA, adding new PowerEdge servers, faster networking, and managed services for enterprise AI adoption.
Intel will launch Arc Pro B50 and B60 GPUs at Computex 2025, enabling advanced AI inference and workstation graphics with up to 24GB of VRAM.
NVIDIA Computex 2025 announcements span NVLink Fusion, RTX PRO 6000 servers, DGX Cloud Lepton, Jetson Thor, and GR00T-Dreams robotics.
The PEAK:AIO Token Memory Platform uses KVCache reuse and CXL to deliver faster inference, larger context windows, and AI-ready memory scaling.
Supermicro introduces DCBBS and DLC-2, modular solutions for building scalable, liquid-cooled AI data centers with faster time-to-deployment.
Cadence and NVIDIA introduce Millennium M2000, an AI-optimized supercomputer for advanced engineering and life sciences simulations.
Google outlines new AI data center infrastructure with ±400 VDC power and liquid cooling to handle 1MW racks and rising thermal loads.
Google unveils the Ironwood TPU, its most powerful AI accelerator yet, delivering massive improvements in inference performance and efficiency.
The UALink Consortium ratifies Ultra Accelerator Link 200G 1.0, an open interconnect standard designed to meet the demands of growing AI workloads.
DUG Nomad mobile data centers deliver immersion-cooled AI and HPC capabilities at the edge with Hypertec servers and Solidigm SSDs.