Podcast #104: The Latest on HPC and Scale-Out Storage

by Harold Fritts

Brian gets Curtis Anderson of Panasas to join him for a deep and detailed view of HPC. Curtis is the Software Architect at Panasas and co-chair of the MLCommons Storage Working Group. What is the MLCommons Storage Working Group? MLCommons is an industry consortium working to accelerate machine learning and increase its positive impact on society. Among other things, it publishes standards for measuring the performance of AI/ML models. As one of the co-chairs, Curtis helps define standards for evaluating the performance of the storage subsystems that feed AI/ML environments.
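
To make "storage that feeds AI/ML" concrete: the headline question is whether the file system can deliver training samples fast enough to keep accelerators busy. The working group's actual benchmark is far more involved, but the minimal Python sketch below illustrates the kind of metric involved by timing raw reads of sample files and reporting aggregate throughput. The mount point and chunk size are hypothetical placeholders, not anything defined by MLCommons or Panasas.

```python
# Minimal sketch (not the MLCommons benchmark): estimate how fast a storage
# subsystem can feed training samples by timing sequential reads of the files
# in a sample directory. DATA_DIR and READ_SIZE are hypothetical values.
import os
import time

DATA_DIR = "/mnt/panfs/training_samples"  # hypothetical mount point
READ_SIZE = 4 * 1024 * 1024               # read in 4 MiB chunks


def measure_read_throughput(data_dir: str) -> float:
    """Return aggregate read throughput in MiB/s across all files in data_dir."""
    total_bytes = 0
    start = time.perf_counter()
    for name in os.listdir(data_dir):
        path = os.path.join(data_dir, name)
        if not os.path.isfile(path):
            continue
        with open(path, "rb") as f:
            # Read the whole file in fixed-size chunks, counting bytes delivered.
            while chunk := f.read(READ_SIZE):
                total_bytes += len(chunk)
    elapsed = time.perf_counter() - start
    return total_bytes / (1024 * 1024) / elapsed


if __name__ == "__main__":
    print(f"Read throughput: {measure_read_throughput(DATA_DIR):.1f} MiB/s")
```

A real benchmark would add concurrency, realistic access patterns, and a model of how quickly the GPUs consume samples, but the underlying question is the same: samples per second delivered from storage versus samples per second the compute can absorb.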

As a Software Architect at Panasas, Curtis coordinates all the technology teams in the Sunnyvale office: Platform, OSD (Object Storage Device), CLI/GUI Management, NFS and CIFS gateway, and interfacing to the hardware and QA teams. He also gets to write code again(!).

The market for scale-out storage with a commercial feature set is growing rapidly. Competitors like NetApp have the commercial feature set but haven’t been able to master scale-out, while Panasas has 15 years of scale-out performance and the resiliency everyone wants but still needs the commercial features some of those players offer.

Brian digs deep into the technology, asking Curtis to explain the use cases, the software structure, where Panasas fits in the overall market, and the company’s new flash systems.

This is a great podcast; if HPC is of interest, you should give this a listen. It’s only 45 minutes, but if you want to jump around, we have put some timestamps below:

00:00 Intro

  • Supercomputing history
  • Use cases – HPC
  • Hardware
    • New features
    • Scale-out
    • Parallel systems vs. traditional systems

05:00 How Panasas systems work

  • Why it was invented
  • Best for moderate/large files
  • Nodes for workload types

10:00 Software structure

  • PanFS
  • More on Parallel file systems
  • Why didn’t the big providers address parallel file systems

15:00 How it handles multiple HPC projects

  • Where does Panasas fit in the market
  • Media & Entertainment constraints
    • Where does PanFS fit
    • CGI

20:00 How M&E works

  • Why CGI is different
  • Difference between M&E and HPC workloads
  • What fits in the HPC market
  • AI/ML

25:00 Costs

  • GPUs
  • Flash expense
  • Compute strategies
  • Skillset needed

30:00 How to set up a smaller company for HPC

  • How to recruit
  • Back to the cloud
    • Cloud is the honeypot
    • Easy to use
    • Low cost
    • Not set up for HPC
  • Can a Panasas system be consumed in the cloud
  • How to architect PanFS for Cloud

35:00 Edge Placement

  • Hard to justify

40:00 Environmental impact

  • Efficiency
  • What has Panasas seen as HPC use cases
  • Traditional HPC is changing
    • Oil and Gas
    • Exploration companies are now looking for salt fields to store fuel

45:00 How to engage Panasas

Closing and wrap-up

Video pod on YouTube
