Table of Contents
- Summary
- Report Methodology
- Overview
- Considerations for Adoption
- GigaOm Sonar
- Market Landscape
- Near-Term Roadmap
- Analyst’s Take
- About GigaOm
- Copyright
1. Summary
Computational storage emerged in response to the growing need to place high-performance compute resources for highly specialized tasks close to data storage devices. Unlike data processing units (DPUs), which are aimed at accelerating specific low-level tasks such as encryption, protocol optimization, and data resiliency, computational storage devices (CSDs) and computational storage processors (CSPs) are highly programmable and can run customized applications or replace software stack components to optimize data management.
As shown in Figure 1, the main advantage of this approach is that data doesn’t need to move from the storage device to the server CPU to be analyzed and manipulated, which improves parallelization, overall execution speed, and compute efficiency (a rough illustration of the savings follows Figure 1). Most use cases for computational storage are found in edge computing, data analytics, high-demand AI workflows, and other applications for which efficiency and speed in data handling and manipulation are the key metrics.
Figure 1. Standard and Computational Storage Approaches Compared
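The scale of the savings is easiest to appreciate with a rough back-of-envelope calculation. The short Python sketch below compares the amount of data that must cross the storage interface when a selective scan runs on the host versus inside the device; the dataset size and query selectivity are assumptions chosen purely for illustration, not measured results.

```python
# Back-of-envelope comparison of host-side vs. in-storage filtering.
# All figures below are illustrative assumptions, not measured results.

dataset_bytes = 4 * 10**12   # assume a 4 TB dataset stored on the device
selectivity = 0.001          # assume the query keeps 0.1% of the records

# Conventional approach: the whole dataset crosses the storage interface
# so the server CPU can scan it.
host_side_transfer = dataset_bytes

# Computational storage approach: the device scans its own media and
# returns only the matching records to the host.
in_storage_transfer = dataset_bytes * selectivity

print(f"Host-side scan moves {host_side_transfer / 1e12:.1f} TB to the CPU")
print(f"In-storage scan moves {in_storage_transfer / 1e9:.1f} GB to the CPU")
print(f"Data movement reduced by {host_side_transfer / in_storage_transfer:.0f}x")
```

Under these assumptions, pushing the scan into the device cuts the traffic to the host by three orders of magnitude, which is where the parallelization and efficiency gains come from.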
The two most common approaches to computational storage are:
- General-purpose compute: A multi-core CPU or an FPGA (field-programmable gate array) with RAM and adequate connectivity is integrated into the storage device. Data can be accessed simultaneously by the host and by the integrated compute resources. The operating system of the device, usually an embedded Linux distribution, runs applications developed with standard tools and SDKs. With FPGAs, the architecture of the solution is even simpler and delivers better performance, but it is harder to program.
- Stack optimization: The compute resources in the storage device are not directly accessible by the user, but APIs are available for software integration. The device offers services such as encryption, compression, data protection, key-value (KV) stores, and more. These services are integrated with the rest of the software stack to offload high-demand operations to the device itself, saving server CPU and memory while improving performance (a minimal sketch of this model follows the list).
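To make the stack-optimization model more concrete, the sketch below shows the general shape of the interaction: the host software hands a compression job to the device through an API and receives only a small result descriptor, never the raw data. The stub class, method name, and workload are assumptions made for this example; a real deployment would go through a vendor SDK or a standardized computational storage API.

```python
import zlib


class StubComputationalStorageDevice:
    """Stand-in for a CSD that exposes compression as an offloaded service."""

    def compress_blocks(self, data: bytes) -> dict:
        # On a real device this work would run on the drive's own cores,
        # consuming no server CPU cycles and moving no raw data to the host.
        compressed = zlib.compress(data)
        return {"status": "ok", "ratio": len(data) / len(compressed)}


# Host-side integration point: the software stack calls the device API and
# receives only a small result descriptor, never the data itself.
device = StubComputationalStorageDevice()
payload = b"log line with repeated content\n" * 10_000
result = device.compress_blocks(payload)
print(f"Offload status: {result['status']}, compression ratio: {result['ratio']:.1f}x")
```

The important point is the division of labor: the compression, encryption, or scan runs on the device’s own compute resources, while the host only orchestrates the request and consumes the result.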
Sophisticated solutions may combine these two models. Whether and when such storage devices should be deployed depends on many factors. They are usually beneficial for accelerating and optimizing high-demand workloads in the data center, and they can radically reduce compute needs at the edge and in any use case where massive amounts of data must be analyzed and manipulated quickly, without unnecessary and expensive data movement.
How We Got Here
Compute accelerators are a hot topic at the moment. GPUs, for example, have moved beyond gaming to become popular for a wide variety of workloads. Moreover, the entire industry is working to improve compute and storage density, efficiency, and performance while keeping infrastructure costs down. This is particularly true in large infrastructures, such as those of hyperscalers, where each additional optimization can lead to massive savings and better services for users. In fact, hyperscalers and large enterprises are among the first adopters of this type of technology.
The rising demand for accelerators also stems from the growing number of applications that need to increase parallelism and throughput. The volume of raw machine-generated data is now much greater than that of human-generated data, and moving data around, even for simple operations, is no longer feasible in some circumstances. At the same time, data is now created and accessed by a multitude of users and devices, which requires greater parallelism and minimal latency.
CSDs are still new to enterprises, but accelerators are becoming common among hyperscale cloud providers, and DPUs are also making their first appearances in enterprise data centers. Compelling TCO figures are getting the attention of every type of organization that deals with large amounts of data and can afford to integrate these devices into its existing infrastructure stack or write the software to take advantage of their capabilities.
About the GigaOm Emerging Technology Impact Report
This GigaOm report is focused on emerging technologies and market segments. It helps organizations of all sizes to understand a technology, its strengths and weaknesses, and its fit in an overall IT strategy. The report is organized into four sections:
- Technology Overview: An overview of the technology, its major benefits, possible use cases, and relevant characteristics of different product implementations already available in the market.
- Considerations for Adoption: An analysis of the potential risks and benefits of introducing products based on this technology in an enterprise IT scenario, including table stakes and key differentiating features, as well as consideration of how to integrate the new product with the existing environment.
- GigaOm Sonar: A graphical representation of the market and its most important players focused on their value proposition and their roadmaps for the future. This section also includes a breakdown of each vendor’s offering in the sector.
- Near-Term Roadmap: A 12- to 18-month forecast of the future development of the technology, its ecosystem, and major players in this market segment.