NVIDIA Computex 2025 announcements span NVLink Fusion, RTX PRO 6000 servers, DGX Cloud Lepton, Jetson Thor, and GR00T-Dreams robotics.
The PEAK:AIO Token Memory Platform uses KVCache reuse and CXL to deliver faster inference, larger context windows, and AI-ready memory scaling.
Supermicro introduces DCBBS (Data Center Building Block Solutions) and DLC-2, modular solutions for building scalable, liquid-cooled AI data centers with faster time-to-deployment.
Cadence and NVIDIA introduce Millennium M2000, an AI-optimized supercomputer for advanced engineering and life sciences simulations.
Google outlines new AI data center infrastructure with ±400 VDC power and liquid cooling to handle 1 MW racks and rising thermal loads.
Google unveils the Ironwood TPU, its most powerful AI accelerator yet, delivering massive improvements in inference performance and efficiency.
UALink Consortium ratifies Ultra Accelerator Link 200G 1.0, an open standard to meet the needs of growing AI workloads.
DUG Nomad mobile data centers deliver immersion-cooled AI and HPC capabilities at the edge with Hypertec servers and Solidigm SSDs.
NVIDIA and Google Cloud collaborate to bring agentic AI to enterprises, running Google Gemini AI models on Blackwell HGX and DGX platforms.
IBM integrates two of Meta’s latest Llama 4 models, Scout and Maverick, into its watsonx.ai platform.