Veeam integrates support for Anthropic’s Model Context Protocol (MCP), enabling AI systems to access and use stored repositories.
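To illustrate the kind of access MCP enables, below is a minimal sketch of an MCP server that exposes a repository listing as a tool an AI client can call. The server name, the list_repositories tool, and the repository names are hypothetical examples using Anthropic's Python MCP SDK; they do not reflect Veeam's actual integration.

```python
# Minimal sketch: an MCP server exposing a (hypothetical) backup repository
# listing as a tool, using Anthropic's Python MCP SDK (the "mcp" package).
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("backup-repos")

@mcp.tool()
def list_repositories() -> list[str]:
    """Return the names of backup repositories an AI client may query."""
    # Placeholder data; a real integration would query the backup server's API.
    return ["repo-primary", "repo-offsite", "repo-immutable"]

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio so an MCP-aware AI client can call it
```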
The Scality ARTESCA + Veeam solution reduces complexity, time, and costs, with software designed to run on customer-preferred hardware platforms.
The Object First Consumption model provides secure, simple, and robust immutable backup storage without the heavy lifting of hardware lifecycle management.
UALink Consortium ratifies the Ultra Accelerator Link 200G 1.0 specification, an open interconnect standard built to meet the needs of growing AI workloads.
AMD launches the Pensando Pollara 400, a fully programmable 400Gbps AI NIC designed to optimize GPU communication and accelerate AI workloads.
CoolIT’s CHx2000 CDU delivers 2MW of liquid cooling in a standard rack, setting a new bar for AI and HPC data center performance.
NVIDIA and Google Cloud collaborate to bring agentic AI to enterprises, running Google Gemini AI models on Blackwell HGX and DGX platforms.
Version 3.4 focuses on incremental updates for performance, usability, and flexibility.
IBM integrates two of Meta’s latest Llama 4 models, Scout and Maverick, into its watsonx.ai platform.
Proxmox VE 8.4 adds live vGPU migration, third-party backup API, virtiofs sharing, and updated core components for modern IT.