What VMware’s SpringSource Acquisition Means for Microsoft https://gigaom.com/report/what-vmwares-springsource-acquisition-means-for-microsoft/ Tue, 18 Aug 2009

On Aug. 10, 2009, VMware announced a definitive agreement to acquire privately held open source Java application framework and platform developer SpringSource for $420 million ($331 million in cash, $31 million in equity for vested options, $58 million for unvested stock and options). Customers will ultimately care about VMware’s acquisition of SpringSource because together the two will be able to offer a tightly integrated enterprise and cloud application platform similar to Microsoft’s server products, including the .NET application frameworks, the Windows Server application runtime platform, and the Systems Center management offerings. The tight integration that VMware, Microsoft, and ultimately IBM and Oracle aspire to offer, albeit with slightly different approaches, is critical for dramatically bringing down the TCO of enterprise and cloud applications built on these platforms. This note examines the acquisition and its impact on a brewing battle between Microsoft and VMware.

On Aug. 10, 2009, VMware announced a definitive agreement to acquire privately held open source Java application framework and platform developer SpringSource for $420 million ($331 million in cash, $31 million in equity for vested options, $58 million for unvested stock and options). SpringSource, founded in 2004, had raised $25 million in two rounds of VC funding led by Accel and Benchmark Capital. The transaction is expected to close in September 2009. VMware expects SpringSource to be cash flow positive by the first half of 2010, implying that billings will likely exceed $30 million on an annualized basis.

Bringing Moore’s Law to the Data Storage Market https://gigaom.com/report/bringing-moores-law-to-the-data-storage-market/ Tue, 30 Jun 2009

Flash solid-state drives (SSD) will enable a once-in-a-decade improvement in storage price-performance. Flash SSDs sit between the CPU main memory and the spinning disks, offering more capacity per dollar than main memory and more speed per dollar than disks. Crucially, flash SSDs enable storage to keep up with the rapid advances in CPU speeds driven by Moore’s Law. This may enable customers to dramatically scale back purchases of expensive Fibre Channel (FC) disks and, potentially, high-end FC arrays.
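
To make that positioning concrete, the back-of-the-envelope comparison below uses assumed, roughly 2009-era price and performance figures (illustrative only, not vendor list prices) to show how flash lands between DRAM and spinning disk on both capacity per dollar and random I/O per dollar:

```python
# Illustrative price-performance comparison of the three storage tiers.
# All figures are assumptions for the sake of the example, not quotes.
tiers = {
    # name:        ($ per GB, random IOPS per device, $ per device)
    "DRAM":        (100.0, 1_000_000, 800),  # assumed 8 GB server module
    "Flash SSD":   (10.0,  10_000,    700),  # assumed ~70 GB SLC drive
    "15K FC disk": (3.0,   200,       450),  # assumed 150 GB drive
}

for name, (dollars_per_gb, iops, device_cost) in tiers.items():
    gb_per_dollar = 1.0 / dollars_per_gb
    iops_per_dollar = iops / device_cost
    print(f"{name:12s} {gb_per_dollar:7.3f} GB/$ {iops_per_dollar:10.1f} IOPS/$")
```

With these assumed numbers, flash offers roughly ten times the capacity per dollar of DRAM and well over ten times the IOPS per dollar of a Fibre Channel disk, which is exactly the gap between memory and disk that it fills.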

However, some early flash SSD implementations come with a set of limitations that customers need to be aware of, notably around usability and resilience. This note illustrates how the combination of technological advances and declining prices will open up new use cases for flash SSDs in the enterprise, while flagging the caveats that come with them.

De-Duplicating the Storage Industry https://gigaom.com/report/de-duplicating-the-storage-industry/ Thu, 11 Jun 2009

Companies are rolling out storage efficiency technology as fast as they can, since that technology helps delay and avoid additional capital expenditures that would otherwise be needed to accommodate ongoing data growth. Data Domain is the leader in storage efficiency and therefore represents an attractive target for the larger storage OEMs that are driving the consolidation of the industry. This note examines the strategic rationale of an acquisition for the different potential acquirers as well as for Data Domain, elaborates on what a deal would mean for customers, and puts Data Domain’s offerings in the context of storage efficiency technologies and trends.

Will Storage Go the Way of The Server? https://gigaom.com/report/will-storage-go-way-of-server/ Tue, 12 May 2009

The storage industry is on the cusp of the biggest structural change since networked storage began to substitute for direct-attached storage a decade ago. Despite being one of the fastest growing technology sectors in terms of capacity, the economics for many participants are deteriorating. Several major technology shifts will radically redefine the economics of the industry, leading to slimmer margins for all but the most innovative, software-driven players. In essence, the future of storage is about storage software that increasingly absorbs intelligence that used to be hard-wired in a proprietary storage controller and array, which in turn is increasingly becoming an abundant pool of commodity disks. It is the pace of this transition that is at issue. In this report, we show how the different customer segments and associated workloads will evolve at different paces, and examine the associated opportunities for both incumbents and new market entrants.

The storage industry is on the cusp of the biggest structural change since networked storage (including SAN, NAS, and more recently iSCSI) began to substitute for direct-attached storage a decade ago. Despite being one of the fastest-growing sectors in technology in terms of capacity, the economics for many participants are deteriorating. Several major technology and business model shifts will redefine the profit pools in the industry, leading to slimmer margins for all but the most innovative, software-driven players.

The long-term future of storage is about smart software that manages a large pool of cheap interchangeable hardware. However, in the near term, mainstream enterprise buyers continue to move cautiously while upgrading their existing installed base mostly with more of the same from vendors such as EMC and NetApp. But the current recession is making them more price-sensitive and creating pressure to try technology from newer vendors such as 3PAR and Data Domain for growing pockets of use cases. Cloud/online service providers are the most price-sensitive and open to new approaches since their storage capital and operating expenditures have a direct impact on their ability to offer competitive pricing.

Customers are transitioning from storage typically bought for a specific application to a more horizontal, virtual pool that better matches the shared resource model of their virtual servers. Much of the growth is occurring in two customer archetypes that are very different from the legacy enterprise data center characterized by scale-up architectures.

Scale-up describes a model of using bigger, more expensive machines, whether for storage or servers, to get more performance. In the scale-out model, many smaller, inexpensive machines work together to deliver better performance, as the sketch below illustrates. Just about all enterprise software is written to scale up, while cloud-based software is primarily written to scale out.
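
As a minimal illustration of the scale-out idea (a sketch under assumed conditions, not any particular vendor’s implementation), the snippet below spreads objects across a pool of small nodes by hashing each key, so capacity and aggregate throughput grow by adding nodes:

```python
import hashlib

def node_for(key: str, num_nodes: int) -> int:
    """Pick a node for a key by hashing it, spreading load evenly."""
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % num_nodes

# Distribute 10,000 objects over clusters of different sizes.
keys = [f"object-{i}" for i in range(10_000)]
for cluster_size in (4, 8):
    counts = [0] * cluster_size
    for key in keys:
        counts[node_for(key, cluster_size)] += 1
    print(f"{cluster_size} nodes -> objects per node: {counts}")
```

Real scale-out storage systems use consistent hashing or explicit placement maps so that growing the cluster does not reshuffle most objects, plus replication for resilience, but the economics are the same: performance comes from adding cheap nodes rather than buying a bigger machine.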

  • Many new high-growth workloads in the enterprise are best handled as NAS-based file-oriented data, as opposed to highly structured SAN-based block-oriented data. They include web serving, film and animation processing for media companies, seismic modeling for oil exploration, and financial simulation in banking. These workloads generate so much data that customers have been willing to try newer vendors with less expensive scale-out architectures, such as Isilon.
  • The very largest cloud/online service providers, such as Google, Yahoo, Amazon Web Services and Microsoft, tend to build their own scale-out storage software to run on commodity storage hardware. This do-it-yourself model is an extreme example of what analysts are referring to when they say storage will become as commoditized as servers.

Storage technology is morphing in the direction of server technology, more slowly in the enterprise and faster in the cloud.

  • Server virtualization is putting a layer in front of storage that over the next several years will start to homogenize the differences between storage products for applications and administrators.
  • As modular or commodity storage manages more workloads, the storage software can sit either on the x86 controller or the x86 server. That will make it easier for customers to benchmark and put pressure on hardware prices, even if the software comes from the same storage vendor providing the controller and disk drives.
  • Customers are rolling out storage efficiency functionality that improves utilization much as server virtualization does. However, customers are using technologies such as snapshots, thin provisioning and de-duplication to keep up with accelerating data growth, particularly in backup and nearline storage, so these technologies only modestly compress their storage budgets. (A minimal de-duplication sketch follows this list.)
  • Flash memory will drastically improve the price/performance of virtually all classes of storage. In particular, over the next several years, using flash as a cache to augment magnetic disk performance will have a bigger impact than flash-based solid-state disks (see the latency sketch after this list).
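
To illustrate the de-duplication mechanism referenced above, here is a minimal sketch using assumed fixed-size chunking (shipping products such as Data Domain use variable-size chunking and additional integrity safeguards): data is split into chunks, each chunk is identified by its hash, and a chunk already in the store is never written twice.

```python
import hashlib

CHUNK_SIZE = 4096
store: dict[str, bytes] = {}  # chunk hash -> chunk data

def write(data: bytes) -> list[str]:
    """Store data, returning its recipe: the list of chunk hashes."""
    recipe = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)  # duplicate chunks cost nothing
        recipe.append(digest)
    return recipe

def read(recipe: list[str]) -> bytes:
    """Reassemble data from its recipe."""
    return b"".join(store[digest] for digest in recipe)

# Two nightly "backups" that differ only in their final chunk:
monday = b"A" * (5 * CHUNK_SIZE) + b"B" * CHUNK_SIZE
tuesday = b"A" * (5 * CHUNK_SIZE) + b"C" * CHUNK_SIZE
recipes = [write(monday), write(tuesday)]
logical = len(monday) + len(tuesday)
physical = sum(len(chunk) for chunk in store.values())
print(f"logical: {logical} bytes, physical: {physical} bytes")  # 4x reduction
assert read(recipes[0]) == monday and read(recipes[1]) == tuesday
```

Backup and nearline data, full of near-identical copies, is exactly where this ratio is most favorable, which is why de-duplication took hold there first.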

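The impact of flash as a cache can be seen with a simple effective-latency model (all latencies are assumptions for illustration): every cache hit is served at flash speed, every miss falls through to disk.

```python
# effective latency = hit_rate * flash + (1 - hit_rate) * disk
FLASH_MS = 0.1  # assumed flash read latency, milliseconds
DISK_MS = 8.0   # assumed 15K-RPM disk seek plus rotation, milliseconds

for hit_rate in (0.0, 0.5, 0.9, 0.99):
    effective_ms = hit_rate * FLASH_MS + (1 - hit_rate) * DISK_MS
    print(f"hit rate {hit_rate:4.0%}: {effective_ms:5.2f} ms effective latency")
```

Even a 90 percent hit rate cuts effective latency by roughly a factor of nine under these assumptions, which is why a modest amount of flash cache in front of many cheap disks can rival far more expensive all-Fibre-Channel configurations.
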
While these last two trends, storage efficiency and flash, are very significant for the storage industry, we are treating them as outside the scope of this report, which deals with the commoditization of storage in the midst of the transition to virtualization and cloud computing.

Changing customer workloads and emerging technologies are driving changes in vendor business models.

  • For all but the highest-performance and most resilient systems, storage hardware and software will increasingly be sold and priced as two distinct parts of one integrated product line, starting especially with cloud/online service providers. Even though these two components will likely come from the same vendor most of the time, this change will force storage vendors to sell software based on business value rather than systems based on capacity.
  • To the extent that customers shift more of their data to the cloud, aggregate industry demand for storage will move from a ‘just in case’ capacity, upfront capex model to a ‘just in time’ capacity, ongoing opex model. This is because online service providers run at much higher asset utilization than the typical customer, can add capacity in more granular increments, and are able to extract very favorable pricing from their suppliers (see the sketch after this list). During this transition period, which we can think of as a form of industry-wide thin provisioning coupled with collective bargaining, storage vendors may see a temporary slowdown in revenue growth. More importantly, they may experience lower margins for a prolonged period.
  • Truly interchangeable storage software and commodity hardware will likely be limited to the largest cloud/online service providers, such as Google, Yahoo, Amazon and Microsoft. Enterprises lack the scarce talent required to combine third-party or open source storage software with commodity hardware in a way that ensures scalability and resilience. In other words, the ‘mix and match’ model of server hardware and software is not likely to become prevalent in mainstream storage anytime soon.
  • The storage vendor mix in traditional enterprises is unlikely to be radically reshuffled anytime soon, since the innovative storage software challengers have to contend with customers’ concerns about interoperability, supportability and resilience. A major OEM endorsement of a startup vendor such as Virsto or Sanbolic would change that dynamic. While Cisco is the most likely vendor to fill the role of disruptor, HP, Dell or IBM might be somewhat more conflicted about accelerating storage hardware commoditization.
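
The capex-to-opex point in the second bullet above can be made concrete with assumed numbers (illustrative only, not market data): the same stored data generates far less vendor revenue when it lives with a high-utilization provider buying at discounted prices.

```python
STORED_TB = 100  # data actually stored, identical in either model

# Assumed figures: enterprises overbuy 'just in case' and run at low
# utilization; large providers add capacity 'just in time', run hot,
# and negotiate better prices.
ENTERPRISE_UTILIZATION, PROVIDER_UTILIZATION = 0.35, 0.70
ENTERPRISE_PRICE_PER_GB, PROVIDER_PRICE_PER_GB = 5.0, 3.0

def vendor_revenue(utilization: float, price_per_gb: float) -> float:
    purchased_gb = STORED_TB * 1000 / utilization
    return purchased_gb * price_per_gb

print(f"enterprise model: ${vendor_revenue(ENTERPRISE_UTILIZATION, ENTERPRISE_PRICE_PER_GB):,.0f}")
print(f"provider model:   ${vendor_revenue(PROVIDER_UTILIZATION, PROVIDER_PRICE_PER_GB):,.0f}")
```

Under these assumptions, vendor revenue per terabyte actually stored falls by roughly two-thirds, which is the industry-wide thin provisioning coupled with collective bargaining effect described above.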

VMware’s vSphere4 Announcement: A Critical Perspective https://gigaom.com/report/vmware/ Tue, 21 Apr 2009

This note examines the primary objectives of the relaunch, as well as the implications for VMware’s market position relative to competitors Microsoft and Citrix.

VMware just announced a major refresh of its server virtualization product line, renamed vSphere 4. VMware vSphere 4 aims to aggregate and manage large pools of infrastructure — processors, memory, storage and networking — as a seamless, flexible and dynamic operating environment. As the company’s biggest announcement in almost three years, vSphere 4 marks a big step forward and is focused on three objectives:

  • Removing the barriers that slow the advance of virtualization into more performance-sensitive, business-critical workloads such as SQL Server, Exchange or SAP. For instance, we estimate that less than 5 percent of SAP production workloads running on x86 hardware are virtualized today, unlike test and development workloads, which are already widely virtualized. That explains the primary focus on increased performance and scalability. Based on our primary research with customers and channel partners, we believe that virtualization may reach an inflection point over the next 12-18 months for adoption of more business-critical production workloads.
  • Increasing storage efficiency and energy efficiency, which should resonate in today’s environment, where CFOs have a much more prominent seat at the table in IT buying decisions. In principle, this should lead to reduced capital and operating expenditures associated with VMware deployments.
  • Automating resource management and simplifying operations. This is most relevant to large virtualized environments (50 physical servers and up). While the new product capabilities are positioned to reduce the 70 percent of IT budgets being spent on keeping the lights on, frankly, they reinforce the functionality of other parts of the suite in mitigating the challenges created by wide-scale virtualization adoption, notably the proliferation of virtual machines.

VMware has captured the marketing zeitgeist by labeling its suite “the industry’s first operating system for building the internal cloud.” Private clouds are, in fact, being adopted ahead of public clouds, and VMware is right that this strategy will only succeed if it enables mainstream commercial workloads to run in these clouds without requiring modifications to the applications. The company claims that “for hosting service providers, VMware vSphere 4 will enable a more economic and efficient path to delivering cloud services that are compatible with customers’ internal cloud infrastructures.” While VMware is by far the best-positioned vendor to enable such clouds, and is building that stack from the bottom up, we would have liked to see more detail on what that top-to-bottom private cloud stack would look like.

As Gregory Smith of IT and telecommunications outsourcing firm T-Systems articulated at the April 2009 SAP Virtualization Week, a true private cloud provides infrastructure-as-a-service (IaaS) to enable end-to-end application services delivery. While virtualization provides the crucial foundation for IaaS, VMware is still assembling the top layers of the IaaS stack, such as the ability to charge internal departments based on their usage of IT resources (a rough chargeback sketch follows below). Moreover, vSphere 4 maxes out at 32 physical servers in a single logical resource pool, which represents the scale of a medium-sized puddle, not the large pool required for an enterprise cloud. Clearly, private clouds are still very immature today. VMware could have used this opportunity, though, to give customers more clarity on which workloads should be put into private clouds first and what the barriers are for more critical workloads.
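
As an example of what one of those missing top layers might look like, here is a minimal usage-based chargeback sketch; the metric names and rates are purely hypothetical illustrations, not a VMware API:

```python
# Assumed internal rates per unit of consumption (hypothetical).
RATES = {"vcpu_hours": 0.05, "ram_gb_hours": 0.02, "storage_gb_months": 0.10}

# Metered consumption per department, as might be pulled from the
# hypervisor's performance counters (numbers made up for illustration).
usage = {
    "finance":   {"vcpu_hours": 2_000, "ram_gb_hours": 8_000, "storage_gb_months": 500},
    "marketing": {"vcpu_hours":   600, "ram_gb_hours": 2_400, "storage_gb_months": 200},
}

for department, metrics in usage.items():
    bill = sum(RATES[metric] * amount for metric, amount in metrics.items())
    print(f"{department:10s} ${bill:8.2f}")
```

The hard part is not the arithmetic but the metering and the organizational agreement on rates, which is why this layer remains immature.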

The company did not emphasize its previous aspirations of managing end-to-end application services delivery. We expect VMware to ship an upgrade to its highly regarded AppSpeed application performance management product sometime later in 2009. However, it will only provide application-level intelligence for a limited number of workloads, including J2EE frameworks, .NET, SQL Server and a few others. We believe VMware is leaving application-level management to others because it did not want to compete head-on against Microsoft and the Big Four systems management vendors. Moreover, most customers prefer to have their existing systems management tools span both physical and virtual infrastructures and provide an integrated, single view into the two.

The company did not talk about its desktop virtualization product line, renamed View. Since View is based on the newly refreshed server virtualization infrastructure, we would expect to see a refresh toward the end of the year. For users accessing a desktop environment streamed from servers, the increased storage efficiency will help bring capex costs down toward those of traditional desktop environments. Better presentation protocols, including the one coming from the joint work with Teradici, will bring Adobe Flash and other rich-media support to end users. What’s not clear yet is when the client-side bare-metal hypervisor that works in occasionally connected environments will finally ship. However, many of the fifty VMware customers and channel partners we interviewed did say that, given VMware’s advantage in managing servers, customers expect to use that same infrastructure to manage their desktops, when and if they start that migration.

Overall, we are impressed by the product announcement, with which VMware will further expand its already considerable lead over Microsoft and Citrix.  However, we suspect the company will face formidable challenges in transitioning its own sales force and particularly its channel partners toward a multi-disciplinary, solutions-led sale.
