Key Criteria for Evaluating Enterprise Block Storage

Table of Contents

  1. Summary
  2. Considerations About the Evaluation Criteria
  3. Table Stakes
  4. Key Criteria
  5. Key Criteria: Impact Analysis
  6. Near-Term Game-Changing Technology
  7. Conclusion

1. Summary

The market landscape for enterprise block storage, usually associated with primary workloads because of its resiliency and consistent performance, has been transitioning from traditional architectures to innovative and powerful new designs for quite some time now. New technologies, such as flash memory and high-speed Ethernet networks, have commoditized performance and reduced costs, allowing much more freedom in system design. Other requirements are also heavily shaping how storage is designed and consumed, including:

  • Better infrastructure agility, to respond quickly to business needs
  • Improved data mobility and integration with the cloud
  • Support for a larger number of applications and workloads running concurrently on the same system
  • Overall infrastructure simplification
  • A drastic reduction of the Total Cost of Ownership (TCO) while significantly increasing the capacity managed per sysadmin

All of these factors together relegate what used to be considered high-end storage systems to a shrinking niche of legacy applications and compute systems; these systems can be described as monolithic, complex to operate, and expensive in terms of both Total Cost of Acquisition (TCA) and TCO. The same is true for legacy storage networks and protocols such as Fibre Channel (FC): Ethernet has already reached 100Gb/s and servers ship with dual 10Gb/s or 25Gb/s ports in standard configurations, while FC switches run at a slower 32Gb/s and their adapters are expensive add-ons.

The most important metrics for evaluating a modern block storage system aimed at serving primary workloads include:

  • Performance: Flash memory has radically changed the game when compared to hard disks. It is not only thousands of times faster, with minimal latency, but it also improves resiliency and durability. While hard disk technology is reaching its physical limits, flash vendors are closing the price gap with hard drives. Other technologies that have significantly improved performance include high-speed networks and faster CPUs, now with specific instruction sets to speed up storage operations, while protocols designed specifically for flash memory, such as NVMe, are shrinking latency even further. The new challenge is to achieve consistent performance while serving many different workloads on the same system simultaneously.
  • Total Cost of Ownership (TCO): Organizations want to minimize IT infrastructure costs, particularly for storage, as data growth continues to outpace budgets. TCO is difficult to estimate and differs for every organization; it covers environmental costs, finance, security, processes in place, human factors, and much more. The list is long, but simplifying a bit and focusing on the infrastructure level, there are at least two important aspects that help control costs. First, powerful analytics tools simplify capacity planning, improve maintenance operations, and discover issues before they become problems. Second, new all-inclusive maintenance and support models enable continuous hardware and software upgrades that minimize infrastructure obsolescence.
  • System Lifespan: Most organizations purchase storage systems for specific projects or infrastructure needs with the 3- to 4-year support contracts offered by vendors. Because storage systems can last longer (sometimes 5 to 7 years), the 3-year forced refresh cycle can create issues from both a financial and an operational standpoint. Choosing systems that last longer, that can receive the updates needed to stay aligned with the rest of the infrastructure in terms of functionality, security, and efficiency, and that avoid dramatic cost increases after the first few years in production helps keep infrastructure costs at bay while avoiding unnecessary data migrations and forklift upgrades. Also crucial for system longevity are the product roadmap and the vendor's ability to execute it as promised, or to update it to follow the emerging needs of the user base.
  • Flexibility: Contrary to the highly siloed stacks of the past, where storage systems served a limited number of applications and workloads, storage systems now tend to be shared by a larger number of servers running an increasing number of VMs and applications, and lately also containers and a return to bare-metal Linux for some big data and AI/ML (Artificial Intelligence and Machine Learning) applications. Storage systems are reconfigured more often and must offer tools and integrations with a wider variety of software stacks in the upper layers, as well as with automation platforms such as Ansible or Puppet. Furthermore, not all applications have the same needs in terms of speed, latency, or priority, so QoS (Quality of Service) mechanisms help avoid contention in crowded environments.
  • Ease-of-use and Usability: In many IT organizations, especially smaller ones, system administrators manage several aspects of the infrastructure and have become generalists, without the time or the specialized skills to operate complex systems. GUIs and dashboards are usually welcome, especially when backed by predictive analytics for troubleshooting and capacity planning. At the same time, CLIs and APIs, including specific integrations with other software, enable management and resource provisioning directly from the software that consumes the storage (e.g., a hypervisor or container orchestrator); a hypothetical sketch of this kind of API-driven provisioning follows this list.
  • $/IOPS: Even though most primary block storage systems can perform incredibly well, understanding what this performance really costs indicates the efficiency of the entire system. Instead of comparing prices alone, the $/IOPS metric gives a better idea of the trade-offs between different systems when data reduction and other services are enabled.
  • $/GB: This metric remains one of the most important for acquiring storage, and one of the few understood well outside the IT organization (e.g., by the procurement department), even for high-performance storage. In contrast to $/IOPS, $/GB compares systems on the capacity exposed to clients, hence on the efficiency of the data reduction mechanisms in place and, more generally, on how the media is utilized. Combined with $/IOPS, it gives a good picture of a system's overall performance and capacity efficiency; see the worked example after this list.
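
To make the two cost metrics concrete, here is a minimal Python sketch that computes $/GB of effective capacity and $/IOPS for two systems. All figures (prices, raw capacities, data reduction ratios, and measured IOPS) are invented for illustration; in a real evaluation they would come from vendor quotes and benchmarks run with data services enabled.

  # Illustrative $/GB and $/IOPS comparison. Every number below is made up.
  def cost_metrics(price_usd, raw_tb, data_reduction, measured_iops):
      """Return ($ per effective GB, $ per IOPS) for one system."""
      effective_gb = raw_tb * 1000 * data_reduction  # capacity exposed to clients
      return price_usd / effective_gb, price_usd / measured_iops

  systems = {
      # name: (price, raw TB, data reduction ratio, IOPS with services enabled)
      "Array A": (250_000, 100, 3.0, 400_000),
      "Array B": (180_000, 100, 2.0, 250_000),
  }

  for name, specs in systems.items():
      per_gb, per_iops = cost_metrics(*specs)
      print(f"{name}: ${per_gb:.2f}/GB effective, ${per_iops:.3f}/IOPS")

In this made-up example, the array with the lower sticker price turns out to be more expensive per effective gigabyte once its weaker data reduction ratio is taken into account, which is exactly the kind of trade-off the two metrics are meant to expose.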

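The API-driven provisioning mentioned under Flexibility and Ease-of-use usually boils down to a REST call issued by an automation platform, hypervisor plugin, or container storage driver. The sketch below is hypothetical: the endpoint path, payload fields, and token are not taken from any specific vendor's API and only illustrate the general pattern.

  # Hypothetical REST call to provision a block volume with a QoS cap, as an
  # automation tool or orchestrator integration might do behind the scenes.
  # Endpoint, field names, and credentials are invented for illustration.
  import requests

  ARRAY_API = "https://array.example.com/api/v1"  # hypothetical endpoint
  HEADERS = {"Authorization": "Bearer <token>"}   # placeholder credential

  def create_volume(name, size_gb, max_iops=None):
      """Create a volume, optionally capped by a QoS IOPS limit."""
      payload = {"name": name, "size_gb": size_gb}
      if max_iops is not None:
          payload["qos"] = {"max_iops": max_iops}
      resp = requests.post(f"{ARRAY_API}/volumes", json=payload,
                           headers=HEADERS, timeout=30)
      resp.raise_for_status()
      return resp.json()

  if __name__ == "__main__":
      print(create_volume("vmware-datastore-01", size_gb=2048, max_iops=50_000))

Vendor-supplied integrations, typically Ansible collections, hypervisor plugins, or CSI drivers, wrap calls of this kind so that volumes can be requested directly from playbooks or orchestrator manifests instead of the array GUI.
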
In this report, we analyze several aspects and important features of modern storage systems to better understand how they impact the parameters described above, especially in relation to the needs of each IT organization. Storage systems differ from vendor to vendor, and most vendors offer different models to address different needs.
