UALink Consortium ratifies Ultra Accelerator Link 200G 1.0, an open standard to meet the needs of growing AI workloads.
DUG Nomad mobile data centers deliver immersion-cooled AI and HPC capabilities at the edge with Hypertec servers and Solidigm SSDs.
NVIDIA and Google Cloud collaborate to bring agentic AI to enterprises, running Google Gemini AI models on Blackwell HGX and DGX platforms.
IBM integrates two of Meta’s latest Llama 4 models, Scout and Maverick, into its watsonx.ai platform.
IBM’s z17 mainframe delivers 50% more AI inference operations per day than its predecessor.
Meta unveils Llama 4, a powerful MoE-based AI model family offering improved efficiency, scalability, and multimodal performance.
At GTC 2025, NVIDIA announced new software advancing AI innovation and optimization.
Rapt AI and AMD collaborate to integrate workload automation with AMD Instinct GPUs, enhancing AI infrastructure efficiency and reducing TCO.
WEKA’s Augmented Memory for AI inference boosts GPU efficiency, reducing latency and cost while scaling AI models for enterprise workloads.
VAST Data integrates InsightEngine with NVIDIA DGX for real-time AI data processing, enabling seamless retrieval, inference, and scaling.