Enrico Signoretti, Author at Gigaom
https://gigaom.com/author/enricosignoretti/

CxO Decision Brief: Data Resilience and Management
https://gigaom.com/report/cxo-decision-brief-data-resilience-and-management/
Fri, 31 Mar 2023

Effective security practice extends beyond recovery and recognizes that the most sophisticated attacks are not against infrastructure but data. In fact, data is an irreplaceable asset of the business because it carries vital information for the enterprise, including intellectual property and personal information. Solutions focused on the most urgent threat—ransomware—are often vendor-centric and siloed, resulting in limited visibility, inconsistent security posture and recovery procedures, limited data and cost optimization, and compliance issues.

Index Engines proposes a non-siloed, data-centric approach that prioritizes data resilience, opening up multiple opportunities. Sophisticated machine learning-based ransomware detection and prevention is at the core of the solution. Once data is indexed and analyzed, the same back-end engine can be used for search, classification, optimization, reporting, and more. This means data management and resilience are no longer tied to a single storage system but work horizontally across the organization's entire infrastructure to enhance the value of data stored on different systems.

CXO Insight: Delivering on Edge Infrastructure
https://gigaom.com/2023/03/23/cxo-insight-delivering-on-edge-infrastructure/
Thu, 23 Mar 2023

Edge infrastructure has little in common with traditional data center infrastructure. Whilst data centers have well-defined boundaries, edge has many more variables and challenges to take into account. Many edge deployments have revealed this in the worst possible way, by failing miserably in their first attempts.

Edge infrastructure adds several layers of complexity when compared to core infrastructure. As a rule of thumb, we can say that the farther from the core you are, the more variables you add. Some of these challenges are obvious, while others are sneakier and more difficult to identify. Here are a few examples from a long list:

  • Security: traditional perimeter security does not apply. In some cases, devices and compute nodes are in public places and can be easily stolen or compromised.
  • Safety: we are not talking about clean and tidy data centers here. Edge infrastructure elements can be deployed in manufacturing plants, offshore platforms, gas stations, and other high-risk environments. Physical intervention may be possible only by trained personnel, who are not always IT specialists.
  • Rugged and specialized hardware: many edge environments can be harsh, so hardware is usually designed to sustain environmental and physical abuse.
  • Hardware obsolescence: for the above reasons and others, compute devices installed at the far edge can require a 5 to 7 year lifecycle, which creates challenges if software is not properly designed and managed.
  • Zero-touch deployment: in many cases, installing hardware and software in a remote location requires specialist expertise, so deployment has to work with minimal on-site intervention. Consider, for example, the challenges of installing a server on top of a wind turbine, or on an offshore platform in the middle of the ocean.

It is unsurprising, therefore, that a longer-term edge infrastructure strategy needs specialized solutions, not least for:

Hardware management. Edge is critical for many use cases but the hardware (compute, networking and storage) is often minimal, both in physical size and resources, making it challenging to add software that isn’t absolutely necessary.

Optimization and integration. You need hardware, operating system and application software to be fully optimized and integrated with other layers of the stack, both as individual components and end-to-end.

What options exist?

Several options can enable you to achieve this goal, and I'd like to offer a couple of examples from the Edge Field Day 1 event I attended a few weeks ago: Zededa and Scale Computing. These are two totally different approaches, with only partial overlap in terms of actual use cases.

The first, from Zededa, is particularly cool because you obtain the end-to-end integration I just described. Zededa's operating system is open source, highly optimized, and certified with hardware dedicated to edge deployments. The solution includes a management and orchestration platform aimed at monitoring the entire infrastructure, automating deployment and management operations, and keeping software up to date.

Meanwhile, Scale Computing takes a familiar hyperconverged approach, with additions that can be considered a natural extension of regular IT operations but adapted to the scale and requirements of edge computing. This approach provides simplicity and resiliency, while recent extensions such as fleet management, automation, and zero-touch deployment dramatically increase the manageability of edge infrastructure.

Of these two examples, the first builds out from an edge-specific foundation, while the second extends familiar approaches from the core.

Conclusion: Look before you leap

Multiple ways exist to deliver edge deployments. Improvising and failing before getting it right is obviously not the right approach, but it happens. Many enterprise IT teams think of edge as an extension of their traditional operations, and they want to apply the same processes and standards they use for the data center and cloud. However, edge computing needs a totally different approach, with solutions designed for the specific requirements of your infrastructure.

With this in mind, the first question you should ask yourself is what type of edge you are dealing with. Edge is a tough business: user and customer service level expectations are the same as for cloud or data center services, but with devices distributed in the wild it is really challenging to provide a decent user experience or the stability required to run business-critical applications. And that is without taking into account security and many other critical factors.

So, remember the adage “fail to plan, plan to fail” when it comes to edge. Planning for what you need and deciding which option will work for you is better than jumping in with both feet and finding out what you got wrong later, particularly in scenarios where failure was never an option.

CxO Decision Brief: Kubernetes Data Protection
https://gigaom.com/report/cxo-decision-brief-kubernetes-data-protection/
Wed, 08 Mar 2023

Microservices are now a common and critical component of enterprise applications. In just a few short years, we have moved from stateless, microservices-based applications residing in the front end to complex stateful apps incorporating databases and other valuable data. These container-based applications scale up and down quickly and work as a single distributed entity.

Traditional backup approaches can't keep pace with the complexity, volatility, and velocity of change in these container-based microservices environments. Solutions like CloudCasa meet this emerging need, providing end-to-end environmental awareness along with data protection and migration features that work with Kubernetes and the cloud-native services that would otherwise be at risk.

CXO Insight: Do We Really Need Kubernetes at the Edge?
https://gigaom.com/2023/03/06/cxo-insight-do-we-really-need-kubernetes-at-the-edge/
Mon, 06 Mar 2023

Last week I attended Edge Field Day 1, a Tech Field Day event focused on edge computing solutions. Some of the sessions really made me think.

Edge infrastructures are quite different from anything in the data center or the cloud: the farther from the center you go, the tinier the devices become. Less CPU power, less memory and storage, and less network bandwidth and connectivity all pose serious challenges. That's before considering physical and logical security requirements that matter less in the data center or the cloud, where the perimeter is well protected.

In addition, many edge devices stay in the field for several years, posing environmental and lifecycle challenges. To complicate things even further, edge compute resources can run mission-critical applications, which are developed for efficiency and resiliency. Containers and Kubernetes (K8s) may be a good option here, but does the edge really want the complexity of Kubernetes?

Assessing the value of Kubernetes at the Edge

To be fair, Edge Kubernetes has been happening for some time. A number of vendors now deliver optimized Kubernetes distributions for edge use cases, plus management platforms for huge fleets of tiny clusters. The ecosystem is growing, and many users are adopting these solutions in the field.

But does Edge Kubernetes make sense? Or more accurately, how far from the cloud-based core can you deploy Kubernetes, before it becomes more trouble than it’s worth? Kubernetes adds a layer of complexity that must be deployed and managed. And there are additional things to keep in mind:

  1. Even if an application is developed with microservices in mind (as small containers), it is not always so big and complex that it needs a full orchestration layer. 
  2. K8s often needs additional components to ensure redundancy and data persistence. In a limited-resource scenario where few containers are deployed, the Kubernetes orchestration layer could consume more resources than the application! 

In the GigaOm report covering this space, we found most vendors working on how to deliver K8s management at scale. The approaches differ, but they all include some form of automation and, lately, GitOps. This solves for infrastructure management but doesn't cover resource consumption, nor does it really enable container and application management, which remain concerns at the edge.

While application management can be solved with additional tools, the same ones you are already using for the rest of your K8s applications, resource consumption has no real solution as long as you keep using Kubernetes. And this is particularly true when, instead of three nodes, you have two or one, and maybe that one is also very small.

Alternatives to Kubernetes at the Edge

Back at Tech Field Day, an approach I found compelling came from Avassa. They have an end-to-end container management platform that doesn't need Kubernetes to operate. It does everything you expect from a small container orchestrator at the edge, while removing complexity and unnecessary components.

As a result, the edge-level component has a tiny footprint compared to (even) edge-optimized Kubernetes distributions. In addition, it implements management and monitoring capabilities that provide visibility into important application aspects, including deployment and management. Avassa currently offers something quite differentiated, even compared with other options for removing K8s from the (edge) picture, not least WebAssembly.

Key Actions and Takeaways

To summarize, many organizations are evaluating solutions in this space, and applications are usually written following very precise requirements. Containers are the best way to deploy them, but are not synonymous with Kubernetes.

Before installing Kubernetes at the edge, it is important to check whether it is worth doing so. If you have already deployed, you will likely have found that its value increases with the size of the application. However, that value diminishes with distance from the data center and with the shrinking size and number of edge compute nodes.

It may therefore be wise to explore alternatives that simplify the stack and thereby improve the TCO of the entire infrastructure. If the IT team in charge of edge infrastructure is small and has to interact every day with the development team, this becomes even more true. The skills shortage across the industry, particularly around Kubernetes, makes it mandatory to consider the options.

I’m not saying that Kubernetes is a no-go for edge applications. However, it is important to evaluate the pros and cons, and establish the best course of action, before beginning what may be a challenging journey. 

CXO Insight: Cloud Cost Optimization
https://gigaom.com/2022/12/08/cxo-insight-cloud-cost-optimization/
Thu, 08 Dec 2022

One of the most common discussions with users these days is about the cost of public cloud and what they can do to reduce their bills. I visited AWS re:Invent last week, and it was no exception. What can enterprises do to solve the cost problem? And what is AWS, the biggest of the cloud providers, doing in this space?

Why does it matter?

Many organizations are interested in FinOps as a new operating model, but in my opinion, FinOps is not always a solution. In fact, most users and vendors do not understand it; they think FinOps is a set of tools to help identify underutilized or poorly configured resources to reduce consumption and spend less. Tools can be very effective initially, but without a general acceptance of best practices across teams, applications, and business owners, it becomes complicated to scale these solutions to cover all cloud spending, especially in complex multi- and hybrid cloud environments. Another big problem with this approach comes from the tool itself: it is yet another component to trust and manage, one that must support a broad range of technologies, providers, and services over time.

Challenges and Opportunities

Most FinOps tools available today are designed around three fundamental steps: observation and data collection; analysis; and alerting and action. Many of these tools now use AI/ML techniques to provide the necessary insights to the user. In theory, this process works well, but simpler and highly effective methods exist to achieve similar or better results. I'm not saying that FinOps tools are ineffective or can't help optimize the use of cloud resources; my point is that before choosing a tool, it is necessary to implement best practices and understand why resources are incorrectly allocated in the first place.
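For illustration, here is a minimal, purely conceptual Python sketch of that observe, analyze, alert/act loop. Every name, threshold, and number in it is an assumption made for the example, not the logic of any particular FinOps product.

```python
# Conceptual sketch only: the observe, analyze, alert/act loop that most FinOps
# tools implement. All names, thresholds, and sample numbers are illustrative
# and not taken from any particular product.
from dataclasses import dataclass

@dataclass
class ResourceUsage:
    resource_id: str
    observed_utilization: float  # fraction of provisioned capacity actually used (0.0 to 1.0)
    monthly_cost: float          # in dollars

def collect_usage() -> list[ResourceUsage]:
    """Step 1: observation and data collection (stubbed with sample data)."""
    return [
        ResourceUsage("vm-analytics-01", observed_utilization=0.12, monthly_cost=420.0),
        ResourceUsage("vm-web-02", observed_utilization=0.71, monthly_cost=95.0),
    ]

def analyze(usage: list[ResourceUsage], threshold: float = 0.25) -> list[ResourceUsage]:
    """Step 2: analysis. Flag resources whose utilization is below the threshold."""
    return [u for u in usage if u.observed_utilization < threshold]

def alert_and_act(underused: list[ResourceUsage]) -> None:
    """Step 3: alerting and (optionally automated) action."""
    for u in underused:
        print(f"{u.resource_id}: {u.observed_utilization:.0%} utilized, "
              f"${u.monthly_cost:.0f}/month, candidate for rightsizing")

if __name__ == "__main__":
    alert_and_act(analyze(collect_usage()))
```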

  1. FinOps as a feature: Many cloud providers implement extended observability and automation features directly in their services. Thanks to these, the user can monitor the real utilization of resources and define policies for automated optimization. Often users don’t even know about the existence of these features.
  2. Chargeback, Showback, and Shameback are good practices: One of the main features of FinOps tools is the ability to show who is doing what. In other words, users can easily see the cost of an application or the resources associated with a single developer or end user. This feature is often available directly from cloud service providers for every service, account, and tenant (see the sketch after this list).
  3. Optimization also brings cost optimization: It is often easier to think about lift and shift for legacy applications, or to underestimate the role of application optimization in solving performance problems. Additional resource allocation is just easier and less expensive in the short term than doing a thorough analysis and optimizing individual application components.
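As a concrete example of point 2, the sketch below uses the AWS Cost Explorer API via boto3 to break down monthly spend by a cost allocation tag. The tag key "team" and the date range are assumptions for illustration; the same showback data is also available in the provider's console without any third-party tool.

```python
# Sketch of provider-native showback: monthly cost grouped by a cost allocation
# tag, using the AWS Cost Explorer API via boto3. The tag key "team" and the
# date range are assumptions for illustration.
import boto3

ce = boto3.client("ce")

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2022-11-01", "End": "2022-12-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "TAG", "Key": "team"}],
)

for result in response["ResultsByTime"]:
    for group in result["Groups"]:
        tag_value = group["Keys"][0]  # e.g., "team$data-platform"
        amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
        print(f"{tag_value}: ${amount:.2f}")
```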

Key Actions and Takeaways

As is often the case, common sense brings better results than complicating things with additional tools and layers. In this context, if we look at the three points above, we can easily see how to reduce cloud costs without increasing overall complexity.

Before adopting a FinOps tool, it is fundamental to look at the services and products already in use. Here are some examples that show how easy cloud cost management can be:

  1. Data storage is the most important item in cloud spending for the majority of enterprises. S3 Storage Lens is a phenomenal tool to get better visibility into what is happening with your S3 storage. An easy-to-use interface and a lot of metrics give the user insights into how applications use storage and how to remediate potential issues, not only from the cost savings point of view.
  2. KubeCost is now a popular tool in the Kubernetes space. It is simple yet effective and gives full visibility on resource consumption. It can associate a cost to each single resource, show the real cost of every application or team, provide real-time alerts and insights, or produce reports to track costs and show trends over time. 
  3. S3 Intelligent-Tiering is another example of optimization. Instead of manually choosing one of the many storage classes available on AWS S3, the user can select this option and have the system place data on different storage tiers depending on when each object was last accessed. This automates data placement for the best combination of performance and $/GB. Users who adopted this feature have seen a tremendous drop in storage fees with no or minimal impact on applications (a minimal sketch follows this list).
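To give an idea of how little effort this takes, here is a minimal boto3 sketch, assuming placeholder bucket, key, and prefix names: new objects can be written directly into the Intelligent-Tiering storage class, and existing objects can be transitioned with a lifecycle rule.

```python
# Two ways to adopt S3 Intelligent-Tiering with boto3. Bucket, key, and prefix
# names are placeholders; this is a sketch, not a complete cost policy.
import boto3

s3 = boto3.client("s3")

# 1) Write new objects directly into the Intelligent-Tiering storage class.
with open("render-output.mp4", "rb") as data:
    s3.put_object(
        Bucket="example-bucket",
        Key="videos/render-output.mp4",
        Body=data,
        StorageClass="INTELLIGENT_TIERING",
    )

# 2) Transition existing objects with a lifecycle rule (here: immediately,
#    for everything under the "archive/" prefix).
s3.put_bucket_lifecycle_configuration(
    Bucket="example-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "move-to-intelligent-tiering",
                "Status": "Enabled",
                "Filter": {"Prefix": "archive/"},
                "Transitions": [{"Days": 0, "StorageClass": "INTELLIGENT_TIERING"}],
            }
        ]
    },
)
```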

Where to go from here

This article is not aimed against FinOps; rather, it aims to separate hype from reality. Many users don't need FinOps tools to get their cloud spending under control, especially when the best practices behind FinOps have not been adopted in the first place.

In most cases, common sense will suffice to reduce cloud bills. And the right utilization of features from Amazon or other public cloud providers is more than enough to help cut costs noticeably.

FinOps tools should be considered only when the organization is particularly large and it becomes complicated to track all the moving parts, teams, users, and applications (or when there are political problems for which FinOps sounds much cooler than best practices such as chargeback).

If you are interested in learning more about Cloud and FinOps, please check GigaOm’s report library on CloudOps and Cloud infrastructure topics.

GigaOm Use Case Scenario for Cloud-Native Backup
https://gigaom.com/report/gigaom-use-case-scenario-for-cloud-native-backup/
Mon, 18 Jul 2022

Many organizations are investing heavily in the cloud to improve their agility and optimize the total cost of ownership of their infrastructure. They are moving applications and data to the public cloud to take advantage of its flexibility, only to discover that, when not properly managed, the public cloud costs can quickly spiral out of control.

Data storage and protection are among the biggest pain points of many cloud bills. Many of the services available in the public cloud need to be enhanced and hardened to deliver the reliability and availability of enterprise storage systems, and the tools that manage the protection of data saved in them need to go well beyond simple snapshot-based data protection.

Even though snapshots provide a good mechanism to protect data against basic operational incidents, they are not designed to meet enterprise needs and can be particularly expensive when managed without the proper tools and awareness of the environment. At the same time, traditional enterprise backup solutions are not optimal because they do not provide the necessary speed and flexibility and add unnecessary complexity to the picture.

Cloud-native backup solutions are designed to add enterprise-class backup functionalities to the public cloud while improving data management processes and costs. Compared to traditional (agent-based) and snapshot backup solutions, cloud-native data protection offers several advantages and simplifies operations.

In this regard, the user should take into account some important aspects:

  • Speed: When properly integrated, cloud-native backup can take advantage of snapshots and other mechanisms available from the service provider to speed up backup and restore operations.
  • Granularity: One of the biggest limitations of snapshots is the inability to restore single files and database records, one of the most common requirements. To do so, the user has to mount the snapshot on a new virtual machine instance, recover the necessary files, and then kill the instance. This is slow, and the process is also error-prone (a sketch of this workflow follows the list).
  • Air gap: Creating distance between source and backup targets is the foundation of every safety and security practice in data protection, especially with the increasing number of ransomware attacks. Snapshot management services in the cloud do not separate snapshots from the source storage system, exposing the system to potential attacks or the risk of major service failures.
  • Operation scalability: Snapshots are good for making quick backup copies of data, but they tend to show their limits pretty quickly. Most of the services available in the market make it difficult to coordinate snapshot operations and guarantee application consistency. At the same time, managing a large number of snapshots can quickly become complicated and, while automation exists, it usually lacks the user-friendliness necessary to manage large-scale environments. Agent-based solutions have a different set of challenges, but the scalability of operations can easily become a problem as well. With agents, everything must be planned in advance, and the agent is yet another software component that has to be installed and managed over time.
  • Cost and TCO: Snapshots are relatively cheap to create, but they are very expensive to manage in the end, creating hidden costs that are difficult to remove over time. Again, for agent-based solutions, the user has to consider the additional costs of the extra resources necessary to run backup operations and manage the infrastructure.
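To illustrate the granularity point above, here is a rough boto3 sketch of the manual, snapshot-mount restore workflow for an EBS snapshot. All IDs and the device name are placeholders, and the file-copy step in the middle is intentionally left as a comment because it is the slow, manual part.

```python
# Rough sketch of the manual single-file restore from an EBS snapshot described
# in the Granularity point above, using boto3. All IDs and the device name are
# placeholders; step 3 is intentionally only a comment because it is the slow,
# manual part of the process.
import boto3

ec2 = boto3.client("ec2")

# 1) Turn the snapshot back into a volume.
volume = ec2.create_volume(SnapshotId="snap-0123456789abcdef0",
                           AvailabilityZone="us-east-1a")
vol_id = volume["VolumeId"]
ec2.get_waiter("volume_available").wait(VolumeIds=[vol_id])

# 2) Attach it to a temporary (or existing) recovery instance.
ec2.attach_volume(VolumeId=vol_id, InstanceId="i-0123456789abcdef0",
                  Device="/dev/sdf")

# 3) Log in to the instance, mount the device, and copy out the files you need.
#    This has to be repeated, by hand, for every granular restore.

# 4) Clean up: detach and delete the temporary volume.
ec2.detach_volume(VolumeId=vol_id)
ec2.get_waiter("volume_available").wait(VolumeIds=[vol_id])
ec2.delete_volume(VolumeId=vol_id)
```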

The most efficient way to operate in the public cloud is to always adopt solutions specifically designed in a cloud-native fashion. In this context, the best data protection is the one that can take advantage of the services available from the cloud provider and can operate with them to build a seamless user experience. This means having the ability to operate with snapshots, organize them efficiently, and have full visibility of data for recovery operations. At the same time, enterprise users expect to find features and functionalities similar to what they have on their traditional backup platforms, including application awareness, analytics, reporting, and so on.

About the GigaOm Use Case Scenario Report

This GigaOm report is focused on a specific use case scenario and best practices for adopting new technology. It helps organizations of all sizes understand the technology and apply it efficiently to their needs. The report is organized into two sections:

Design criteria: A simple guide that describes the use case in all its aspects, including potential benefits, challenges, and risks during the adoption process. This section also includes information on common architectures, how to create an adoption timeline, and considerations about interactions with the rest of the infrastructure and processes in place.

Solution profile: A description of a solution that has a proven track record with the technology described in these pages and with this specific use case.

GigaOm Use Case Scenario for Decentralized Object Storage for Video
https://gigaom.com/report/gigaom-use-case-scenario-for-decentralized-object-storage-for-video-2/
Tue, 12 Jul 2022

In the media and entertainment (M&E) industry, video productions invest heavily in large-scale infrastructures to store vast amounts of data, both in the cloud and on-premises. Video and other media assets coming from different sources have to be edited and rendered into the final product by teams that are often globally distributed. Depending on how the videos will be consumed, the final product is rendered in multiple versions and kept for long periods of time afterward. This challenge is even bigger now with videos that are shot at 4K and 8K resolutions.

No matter the size of the company, media-rich content requires a lot of storage capacity. It has to be reliable, fast, and, at the same time, reasonably priced. Object storage is considered one of the best options for storing unstructured data due to its scalability, cost, simplicity, and accessibility, but it also poses challenges, especially when data needs to be globally accessed and distributed. This GigaOm Use Case Scenario report explores the application of decentralized object storage in video production and collaboration.

About the GigaOm Use Case Scenario Report

This GigaOm Use Case Scenario report focuses on specific scenarios and best practices to improve adoption of technologies, exploring both use case design criteria and a viable technical solution. In this context, a particularly demanding use case for decentralized object storage can be found in video production and collaboration.

Use of video is growing in every industry for a variety of reasons: surveillance, training, marketing, conference call archiving, and so on. These are generic use cases found in organizations of every size, but when we focus on the M&E industry, and specifically on video production, we find that:

  • Videos are recorded and edited in different locations.
  • Users take advantage of compute resources from different providers to render videos that must be centralized in a single location.
  • All video archives are now nearline.
  • Content has to be distributed efficiently on different channels and platforms.
  • Users don’t use S3 protocol directly, but want familiar file interfaces (SMB or NFS) and media asset management tools (MAM) to simplify their workflows.

These requirements create a formidable problem. Meeting high-level standards in terms of availability and resiliency can be challenging, especially when the total cost of the infrastructure must be accounted for. In this regard, the user should take into account some important aspects:

  • Infrastructure resiliency: Object storage is usually resilient, but it is crucial to consider business continuity and disaster recovery for on-premises/hybrid infrastructures.
  • Data accessibility and availability: Even in the public cloud, having multiple copies of data can be a requirement in case of a zone or region failure. Keeping data synchronized is very expensive and creates additional synchronization issues if data needs to be accessed concurrently from multiple locations or for backup reasons.
  • Performance: Even though performance is not usually associated with object storage, parallelism and throughput are important characteristics to consider, especially when video is involved. Content delivery networks (CDN) are a solution, but they are expensive and complicate the infrastructure topology.
  • Scalability: This can be an issue for on-premises deployments, especially for large systems installed in locations with limited space.
  • Cost and TCO: Cost can be one of the biggest issues when video is involved, especially in hybrid and public cloud environments because of complex billing mechanisms and egress fees.

Decentralized storage is a solution to these challenges. A decentralized storage system is based on a peer-to-peer (P2P) network, a type of architecture that has found some success for data distribution and file sharing. Instead of being stored in a centralized system made up of data centers, data is chunked, distributed, and stored on thousands of nodes across a global network or the internet. Figure 1 compares traditional shared storage with decentralized storage. This latest generation of decentralized cloud storage has evolved greatly and is now enterprise-grade and considerably more secure, performant, private, and durable than a centralized cloud provider. It is also a fraction of the cost.

Figure 1. Traditional and Decentralized Storage
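The following Python sketch is a purely conceptual illustration of the "chunk, address, and distribute" idea described above; it is not any vendor's protocol and omits the erasure coding, encryption, repair, and incentive layers that real decentralized storage networks rely on.

```python
# Conceptual illustration only of "chunk, address, distribute": split a file
# into fixed-size chunks, derive a content address for each, and assign every
# chunk to several nodes. Real decentralized storage networks add erasure
# coding, encryption, repair, and incentive layers that are not shown here.
import hashlib

CHUNK_SIZE = 4 * 1024 * 1024                    # 4 MiB chunks, arbitrary for the example
NODES = [f"node-{i:04d}" for i in range(1000)]  # stand-in for a global network of nodes
REPLICAS = 3                                    # copies (or erasure-coded shares) per chunk

def chunk_and_place(path: str) -> dict[str, list[str]]:
    """Return a map of chunk ID to the nodes that would hold that chunk."""
    placement: dict[str, list[str]] = {}
    with open(path, "rb") as f:
        while chunk := f.read(CHUNK_SIZE):
            chunk_id = hashlib.sha256(chunk).hexdigest()
            # Pick REPLICAS nodes deterministically from the chunk's content address.
            start = int(chunk_id, 16) % len(NODES)
            placement[chunk_id] = [NODES[(start + i) % len(NODES)] for i in range(REPLICAS)]
    return placement

if __name__ == "__main__":
    for chunk_id, nodes in chunk_and_place("master-render-4k.mov").items():
        print(chunk_id[:12], "->", ", ".join(nodes))
```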

We have seen several attempts over the past decade—largely unsuccessful—to build a decentralized, or P2P, network infrastructure. Today, however, the risks are largely mitigated by the large number of unused commodity resources across the internet, better security, and blockchain technology that ensures data immutability and consistency. It is now easier to take advantage of this abundance of unused, and sometimes unreliable, resources to build performant and secure storage infrastructures. Moreover, the interest in web3 and decentralized internet technologies has attracted large investments, accelerating product development and the growth of a solution ecosystem.

GigaOm Sonar Report for Object Storage on Tape
https://gigaom.com/report/gigaom-sonar-report-for-object-storage-on-tape/
Fri, 08 Apr 2022

Enterprises want to keep more data, for longer periods of time, at affordable costs, and in a way that keeps it accessible when necessary. However, it can be difficult to find long-term storage for ever-growing data sets that is both cost effective and highly available, so organizations often have to prioritize one consideration above the others when comparing vendors.

Data storage in an enterprise generally looks like the pyramid in Figure 1, with the bulk of the older data in cold storage, which is typically cheaper and less available.

One of the earliest forms of cold data storage was tape, with capacity ranges now starting at dozens of petabytes and going upward. Retrieval time can be slower than with other technologies, but the total cost of ownership of this type of infrastructure can be very compelling.

Tape is also cost effective, but it can be difficult to manage and access. Data management is usually performed using backup and archive software (a prerequisite to managing tape libraries), and access takes significantly more time compared to other media. Tape also faces other challenges: it’s designed for linear access, and even though it has good throughput, it doesn’t cope well with small files.

Today, object storage has become one of the most common options. Object storage works just as its name suggests: it stores objects, each of which consists of data, metadata, and a globally unique identifier that lets the object be found. The metadata is customizable, which makes object storage extremely flexible.
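A minimal boto3 sketch makes this model tangible, assuming placeholder bucket and key names: each object is written as data plus user-defined metadata under a unique key, and retrieved the same way.

```python
# Minimal illustration of the object model: data plus user-defined metadata,
# addressed by a unique key. Bucket and key names are placeholders, and any
# S3-compatible endpoint behaves the same way.
import boto3

s3 = boto3.client("s3")

# Store the data together with custom metadata under a unique key.
s3.put_object(
    Bucket="example-archive",
    Key="projects/2022/q1-report.pdf",                      # the object's unique identifier
    Body=b"...report contents...",
    Metadata={"department": "finance", "retention": "7y"},  # customizable key/value metadata
)

# Retrieve the object, and its metadata, by key.
obj = s3.get_object(Bucket="example-archive", Key="projects/2022/q1-report.pdf")
print(obj["Metadata"])  # {'department': 'finance', 'retention': '7y'}
print(obj["Body"].read())
```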

S3 is now the dominant object storage protocol. Following AWS S3’s success in the public cloud, most object storage vendors competed to capture the private-cloud object storage market. At the outset, they all worked on general-purpose object stores with the idea of building an on-premises replacement for S3.

When built around inexpensive hard disk drive (HDD)-based nodes, these solutions offer an interesting $/GB ratio. However, organizations want object storage solutions that are even less expensive and more efficient in terms of power consumption while still offering an optimal datacenter footprint.

Many cloud providers now offer object storage solutions that solve this problem: for example, S3 Glacier and similar alternatives from other cloud service providers, such as Azure Blob and Google Archival Storage, are available. These solutions come with their own set of challenges, however; retrieval costs may be quite high. Also, retrieval times from deep storage can be very long—as much as 48 hours.
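The retrieval trade-off is visible in the API itself. The sketch below, with placeholder names, uses boto3 to request an asynchronous restore from an archival S3 storage class; the chosen retrieval tier drives both the cost and the wait time.

```python
# Sketch of restoring an object from an archival S3 storage class with boto3.
# The restore is an asynchronous request, and the chosen tier ("Bulk",
# "Standard", or "Expedited") drives both the cost and the wait time.
# Bucket and key names are placeholders.
import boto3

s3 = boto3.client("s3")

# Ask S3 to bring the archived object back online for 2 days, using the
# cheapest (and slowest) retrieval tier.
s3.restore_object(
    Bucket="example-archive",
    Key="backups/2019/db-dump.tar",
    RestoreRequest={
        "Days": 2,
        "GlacierJobParameters": {"Tier": "Bulk"},
    },
)

# The object is not readable immediately; poll its restore status.
head = s3.head_object(Bucket="example-archive", Key="backups/2019/db-dump.tar")
print(head.get("Restore"))  # e.g., 'ongoing-request="true"' until the restore completes
```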

These ongoing challenges highlight why object storage on tape solutions are becoming increasingly popular. These solutions offer the best $/GB ratio, particularly for long-term storage. They have no issues with scalability and offer very long-term retention guarantees, thanks to extended LTO consortium roadmaps related to tape models and drive compatibility.

The primary challenge with deploying object storage on tape is getting started. Installations are usually extensive: they start at multi-petabyte level and can grow beyond hundreds of petabytes.

Figure 1. Storage Tiers

About the GigaOm Sonar Report

This GigaOm report is focused on emerging technologies and market segments. It helps organizations of all sizes understand the technology, how it can fit into their overall IT strategy, and its strengths and weaknesses. The report is organized into four sections:

Overview: An overview of the technology, its major benefits, possible use cases, and relevant characteristics of different product implementations already available in the market.

Considerations for Adoption: An analysis of the potential risks and benefits of introducing products based on this technology in an enterprise IT scenario, including table stakes and key differentiating features, as well as considerations on how to integrate the new product with the existing environment.

GigaOm Sonar: A graphical representation of the market and its most important players focused on their value proposition and their roadmaps for the future. This section also includes a breakdown of each vendor’s offering in the sector.

Near-Term Roadmap: A 12- to 18-month forecast of the future development of the technology, its ecosystem, and the major players in this market segment.

GigaOm Radar for High-Performance Object Storage
https://gigaom.com/report/gigaom-radar-for-high-performance-object-storage-2/
Mon, 04 Apr 2022

For some time, users have asked for object storage solutions with better performance characteristics. To satisfy such requests, several factors must first be considered:

  • Data consolidation: Combining and storing various types of data in a single place can help to minimize the number of storage systems, lower costs, and improve infrastructure efficiency.
  • New workloads and applications: Thanks to the cloud and other technology, developers have finally embraced object storage APIs, and both custom and commercial applications now support object storage. Moreover, there is a high demand for object storage for AI/ML and other advanced workflows in which rich metadata can play an important role.
  • Better economics at scale: Object storage is typically much more cost-effective than file storage and easier to manage at the petabyte scale. And $/GB is just one aspect; generally, the overall TCO of an object storage solution is better than it is for file and block systems.
  • Security: Some features of object stores, such as the object lock API, increase data safety and security against errors and malicious attacks (see the sketch after this list).
  • Accessibility: Object stores are easier to access than file or block storage, making them the right target for IoT, AI, analytics, and any workflow that collects and shares large amounts of data or requires parallel and diversified data access.
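As an example of the object lock point above, here is a short boto3 sketch with placeholder names: a default retention policy on a bucket created with Object Lock enabled, plus per-object retention set at write time.

```python
# Sketch of the S3 Object Lock feature via boto3: a default retention policy
# on a bucket that was created with Object Lock enabled, plus per-object
# retention set at write time. Names and dates are placeholders.
import boto3
from datetime import datetime, timezone

s3 = boto3.client("s3")

# Default retention for new objects written to the bucket.
s3.put_object_lock_configuration(
    Bucket="example-immutable",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 30}},
    },
)

# Or set retention explicitly on a single object: it cannot be deleted or
# overwritten until the retain-until date passes.
s3.put_object(
    Bucket="example-immutable",
    Key="backups/catalog-2022-04-01.bak",
    Body=b"...backup data...",
    ObjectLockMode="COMPLIANCE",
    ObjectLockRetainUntilDate=datetime(2022, 5, 1, tzinfo=timezone.utc),
)
```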

Many applications find object storage a natural repository for their data because of its scalability and ease of access. However, older object stores were not designed for flash memory, nor were they optimized to deal with very small files (512KB and less). Many vendors are redesigning the backend of their solution to respond to these new needs, but in the meantime, a new generation of fast object stores has become available for these workloads.

These new object stores usually offer only a subset of the features of traditional object stores, sometimes sacrificing geo-replication or full S3 API compatibility, but they excel in other ways that are even more important for interactive and high-performance workloads, including strong consistency, small-file optimization, file-object parity, and features aimed at simplified data ingestion and access with the lowest possible latency. Their design is based on the latest technology: flash memory, persistent memory, and high-speed networks are usually combined with the latest innovations in software optimization. Even though object stores will never provide the performance of block or file storage, it is important to note that they are more secure and easier to manage at scale than the others, offering a good balance among performance, scalability, and TCO.

Maintaining a consistent response time under multiple different workloads is also very important. On the one hand, there are the primary workloads for which these object stores are usually selected, but on the other, it is unusual to find fast object stores serving only a single workload over a long period. Users tend to consolidate additional data and workloads, and multitenancy quickly becomes another important requirement. These solutions typically offer good file storage capabilities, allowing data to be consolidated even further.

Presently, high-performance object stores do not overlap with traditional object stores except for a limited set of use cases. This distinction will change over time because both traditional and high-performance object stores will eventually add the features necessary for parity. The products with the most balanced architecture and the ability to optimize for the latest media will end up in the leading positions.

How to Read this Report

This GigaOm report is one of a series of documents that helps IT organizations assess competing solutions in the context of well-defined features and criteria. For a fuller understanding, consider reviewing the following reports:
  • Key Criteria report: A detailed market sector analysis that assesses the impact that key product features and criteria have on top-line solution characteristics—such as scalability, performance, and TCO—that drive purchase decisions.
  • GigaOm Radar report: A forward-looking analysis that plots the relative value and progression of vendor solutions along multiple axes based on strategy and execution. The Radar report includes a breakdown of each vendor’s offering in the sector.
  • Solution Profile: An in-depth vendor analysis that builds on the framework developed in the Key Criteria and Radar reports to assess a company’s engagement within a technology sector. This analysis includes forward-looking guidance around both strategy and product.

GigaOm Radar for Enterprise Object Storage
https://gigaom.com/report/gigaom-radar-for-enterprise-object-storage-3/
Mon, 04 Apr 2022

The market landscape for object storage has been changing quickly and radically. The S3 protocol is now a standard for many applications, and developers are adopting it for a growing number of use cases. Object stores have evolved from cheap deep-storage repositories to solutions that can support multiple workloads and applications concurrently. Even though object storage performance cannot be compared to that of file and block storage, the latest iterations of the technology demonstrate that it can indeed be used for high-performance applications. (Note that a separate Radar report on high-performance object storage is also available.)

The most critical evaluation metrics remain $/GB ratio and scalability, but efficiency and flexibility are becoming equally important as a result of changing user needs. In fact, the market saw an increase in use cases and user needs, including multitenancy and performance, that directly impacted efficiency and flexibility.

Users started to take advantage of object storage directly as a primary target for most applications that require storing large amounts of data. For example, many backup products can now use object storage as the primary repository and make the most of its object immutability characteristics for enhanced security. The same goes for data analytics products, which can now use object stores to access active data. Furthermore, it is clear that part of the growth in enterprise object storage over the last year, both public and private, is attributable to the many digital transformation initiatives that enterprises started because of the COVID-19 pandemic. In such cases, object storage is the best choice because of its accessibility characteristics compared to other solutions.

It’s also important to note that object stores are becoming a common data service deployed on top of or alongside Kubernetes. This is another sign of the changed role that object storage is taking on in many IT infrastructures.

These new directions brought about several challenges in product development. Vendors now need to offer optimizations for small files, advanced analytics, better ease of use, and a lower entry point for their solutions, while keeping the system balanced for traditional, less interactive workloads. In fact, with the need to efficiently manage large numbers of small files, hybrid and all-flash clusters became more common, although many vendors are not yet ready to provide the necessary optimizations to take advantage of these media or, in some cases, to provide enough flexibility for data movement across tiers in relatively small systems. To respond adequately to these new challenges, several vendors are currently navigating a transition phase to rearchitect their solutions to become more Kubernetes-friendly and better optimized for next-generation workloads and needs.

How to Read this Report

This GigaOm report is one of a series of documents that helps IT organizations assess competing solutions in the context of well-defined features and criteria. For a fuller understanding, consider reviewing the following reports:
  • Key Criteria report: A detailed market sector analysis that assesses the impact that key product features and criteria have on top-line solution characteristics—such as scalability, performance, and TCO—that drive purchase decisions.
  • GigaOm Radar report: A forward-looking analysis that plots the relative value and progression of vendor solutions along multiple axes based on strategy and execution. The Radar report includes a breakdown of each vendor’s offering in the sector.
  • Solution Profile: An in-depth vendor analysis that builds on the framework developed in the Key Criteria and Radar reports to assess a company’s engagement within a technology sector. This analysis includes forward-looking guidance around both strategy and product.
