StorageReview.com
Cloud  ◇  Enterprise

Getting Data to the Cloud Faster with AWS Snowball Edge Devices

We recently completed a data analytics-style project that left us with a 100TB output file. While we do have ample storage throughout our lab, hanging onto a 100TB file in perpetuity comes with a unique set of challenges. Further, we don’t really “need” the file, but we’d prefer to preserve it if possible. The cloud is…
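As a rough illustration of what that transfer step looks like once a Snowball Edge is on the local network, here is a minimal Python sketch that pushes a file to the device’s S3-compatible interface with boto3. The endpoint address, bucket name, certificate path, and credentials are placeholders; in practice you would pull the real values from the AWS Snowball Edge client after unlocking the device.

```python
import boto3
from boto3.s3.transfer import TransferConfig

# Placeholder values -- obtain the real endpoint, credentials, and TLS
# certificate from the Snowball Edge client after unlocking the device.
ENDPOINT = "https://192.168.1.100:8443"   # assumed address of the S3 interface
BUCKET = "lab-archive"                    # bucket created when the job was ordered

s3 = boto3.client(
    "s3",
    endpoint_url=ENDPOINT,
    aws_access_key_id="SNOWBALL_ACCESS_KEY",
    aws_secret_access_key="SNOWBALL_SECRET_KEY",
    verify="snowball-cert.pem",           # certificate exported from the device
)

# Large files benefit from bigger multipart chunks and more parallel threads.
config = TransferConfig(multipart_chunksize=512 * 1024 * 1024, max_concurrency=10)
s3.upload_file("analytics-output.bin", BUCKET, "analytics-output.bin", Config=config)
```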

Enterprise  ◇  Power Management

NUT Software – Not That Hard to Crack

Eaton recently approached us to demonstrate, with home lab enthusiasts in mind, how its uninterruptible power supply (UPS) units work. We opted to show how a simple Raspberry Pi can be used as a dedicated management card for an Eaton Tripp Lite Smart 1500RM2U UPS, paired with the Network UPS Tools (NUT) project.
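For a taste of what that Raspberry Pi ends up doing, the sketch below polls NUT’s upsc client from Python and prints a few key readings. The UPS name “eaton” and the localhost target are assumptions; they depend entirely on how ups.conf and upsd are configured on your Pi.

```python
import subprocess

# Assumed UPS name as defined in /etc/nut/ups.conf; adjust to match your setup.
UPS = "eaton@localhost"

def ups_status(ups: str = UPS) -> dict:
    """Query NUT's upsc client and return its key/value output as a dict."""
    result = subprocess.run(["upsc", ups], capture_output=True, text=True, check=True)
    status = {}
    for line in result.stdout.splitlines():
        key, _, value = line.partition(": ")
        status[key] = value
    return status

if __name__ == "__main__":
    info = ups_status()
    print("Status:         ", info.get("ups.status"))
    print("Battery charge: ", info.get("battery.charge"), "%")
    print("Runtime (sec):  ", info.get("battery.runtime"))
```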

Enterprise  ◇  Storage Adapters

Broadcom MegaRAID 9670W-16i RAID Card Review

The MegaRAID 9600 series is a third-generation storage adapter family that supports SATA, SAS, and NVMe drives, designed to deliver the best possible performance and data availability for storage servers. Compared to the previous generation, the 9600 series offers a 2x increase in bandwidth, over a 4x increase in IOPS, a 25x reduction in write latency, and a 60x…

Enterprise  ◇  In the Lab

Turbocharging Our Hardware Reviews: Unleashing the Power of UL Procyon AI Inference Benchmark

The world of artificial intelligence is growing at an unprecedented pace, and with it comes the need for comprehensive benchmarking tools that can provide insights into the performance of various inference engines on different hardware platforms. The UL Procyon AI Inference Benchmark for Windows is an exciting addition to our lab. Designed for technology professionals,…
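Procyon drives vendor inference engines through its own packaged workloads, but the underlying measurement is easy to sketch. The hypothetical Python example below times repeated passes through ONNX Runtime; the model file and input shape are placeholders, and this is only an illustration of the idea, not how Procyon itself is run.

```python
import time
import numpy as np
import onnxruntime as ort

# Placeholder model and input shape -- substitute any ONNX image classifier.
session = ort.InferenceSession("mobilenetv2.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name
batch = np.random.rand(1, 3, 224, 224).astype(np.float32)

# Warm up, then time a fixed number of inference passes.
for _ in range(5):
    session.run(None, {input_name: batch})

runs = 100
start = time.perf_counter()
for _ in range(runs):
    session.run(None, {input_name: batch})
elapsed = time.perf_counter() - start
print(f"Average latency: {1000 * elapsed / runs:.2f} ms per inference")
```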

Cloud  ◇  Enterprise

How To Add an EC2 Boost to AWS Snowball Edge Storage Optimized Device

The rapid growth of edge computing has led to data being generated and collected at unprecedented levels. Temporary installations, such as scientific research stations, surveillance systems, and industrial facilities, often require rapid data collection and transfer for smooth operations. However, the high cost of hardware, coupled with the need for reliable and efficient…
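To give a sense of what that “EC2 boost” looks like in practice, here is a minimal boto3 sketch that launches an EC2-compatible instance against a Snowball Edge’s local endpoint. The endpoint address and port, credentials, AMI ID, and sbe instance type are all placeholders or assumptions; the real values come from the Snowball Edge client and from the AMIs loaded onto the device when the job was created.

```python
import boto3

# Placeholder endpoint and credentials -- the real values come from the
# Snowball Edge client once the device is unlocked on the local network.
ec2 = boto3.client(
    "ec2",
    endpoint_url="https://192.168.1.100:8243",  # assumed EC2-compatible endpoint
    aws_access_key_id="SNOWBALL_ACCESS_KEY",
    aws_secret_access_key="SNOWBALL_SECRET_KEY",
    verify="snowball-cert.pem",
)

# The AMI must have been attached to the Snowball job before the device shipped.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="sbe-c.large",       # assumed Snowball Edge compute instance type
    MinCount=1,
    MaxCount=1,
)
print("Launched instance:", response["Instances"][0]["InstanceId"])
```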

Enterprise  ◇  SSD

Apex Storage X21 Review: The Roaring Twenty-Ones

The Apex Storage X21 brings top-tier performance and capacity to enterprise and prosumer use cases with 21 Gen4 NVMe M.2 SSDs on a double-width, full-height, full-length (FHFL) PCIe add-in card. In total, the X21 can provide 168TB of storage per card (with 8TB SSDs) and crank out up to 31GB/s read speeds and over 10 million IOPS.

Enterprise  ◇  Server

Supermicro Storage SuperServer SSG-121E-NES24R Review (24x E1.S)

The Supermicro Storage SuperServer SSG-121E-NES24R is a dual-socket server that supports the latest 4th generation Intel Xeon Scalable processors, up to 32 DIMMs, and 24 E1.S SSDs. The server is designed for hyperscalers and others that require massive scale and density via its 24 hot-swappable NVMe drive bays, PCIe 5.0 x16 expansion slots,…

Enterprise  ◇  In the Lab

The Storage Review Bare-Bones AI Setup Guide

Recently we have been working extensively with AI in the lab. The results have ranged widely, from accidentally borking an entire OS with various configurations and software to needing a baseline image to work from across platforms. We decided it would be worth outlining the basic getting-started steps as…
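As a preview of the kind of baseline we mean, the sketch below is a minimal Python sanity check for a freshly built image: it confirms that PyTorch is installed and can actually reach the GPU before anything heavier is layered on top. Package versions and CUDA availability will of course depend on your own install.

```python
# Minimal sanity check for a freshly imaged AI box: verify that PyTorch
# is installed and can reach the GPU before building anything on top of it.
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available: ", torch.cuda.is_available())

if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
    # Tiny matrix multiply on the GPU as a smoke test.
    x = torch.rand(1024, 1024, device="cuda")
    y = x @ x
    torch.cuda.synchronize()
    print("GPU smoke test passed, result shape:", tuple(y.shape))
else:
    print("CPU-only fallback; check the driver and CUDA toolkit install.")
```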

Enterprise  ◇  Enterprise Storage

HPE Alletra MP Hardware Deep Dive

New block and file services headlined HPE’s GreenLake Storage Day. While HPE emphasizes the operational and management benefits of making these services available in GreenLake, we’re interested in the new universal hardware platform that sits behind the scenes – the HPE Alletra MP storage server.

AI  ◇  Enterprise

Meta LLaMa And Alpacas Loose in the Lab! Running Large Language Models Locally

In recent months, large language models have been the subject of extensive research and development, with state-of-the-art models like GPT-4, Meta LLaMa, and Alpaca pushing the boundaries of natural language processing and the hardware required to run them. Running inference on these models can be computationally challenging, requiring powerful hardware to deliver real-time results.
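For a flavor of what running one of these models locally involves, here is a minimal Hugging Face Transformers sketch in Python. The model path is a placeholder for whatever LLaMa or Alpaca weights you have legitimately obtained and converted; loading in half precision and letting the library spread layers across available GPUs keeps memory demands manageable.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder path to locally converted LLaMa/Alpaca weights.
MODEL_PATH = "/models/llama-7b-hf"

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_PATH,
    torch_dtype=torch.float16,  # half precision roughly halves VRAM usage
    device_map="auto",          # let Accelerate place layers across available GPUs
)

prompt = "Explain what an NVMe SSD is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```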