Dave Ohara, Author at Gigaom
Your industry partner in emerging technology research

Moving toward continuous delivery as a service
https://gigaom.com/report/moving-toward-continuous-delivery-as-a-service/ (Tue, 12 Nov 2013)

If you are an executive who sets your organization’s business, development, and IT strategies and policies, or if you are weighing a variety of options for implementing continuous delivery (CD) in your business, you need to know what possibilities are available. This report explains the process of implementing a CD pipeline, highlights the benefits of using CD as a service, and evaluates the trade-offs and advantages of each choice. Some of the issues we address are:

  • What is CD?
  • Why is it important?
  • How do you implement a CD pipeline?
  • What do you gain by using the turnkey approach offered by CD as a service?

At the center of CD, a process that uses agile development and automated tools to create and deploy robust software services, is the CD pipeline: software changes that successfully pass through the pipeline are ready for customers to use. Among its advantages, CD lets you respond to business needs swiftly, and it puts the business in charge of the software rather than the other way around. It also reduces risk by delivering early, often, and in small increments while making quality a first-class concern. Finally, CD aligns an organization’s departments around the shared goal of delivering a good product.
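
To make the pipeline idea concrete, here is a minimal sketch in Python. It is an illustration only: the stage names and the make targets are assumptions for the example, and real pipelines are normally defined in a CI/CD tool’s configuration rather than hand-rolled like this.

    import subprocess

    # Illustrative pipeline stages; a change must pass every stage,
    # in order, before it is considered releasable.
    STAGES = [
        ("build", ["make", "build"]),
        ("unit tests", ["make", "test"]),
        ("deploy to staging", ["make", "deploy-staging"]),
        ("acceptance tests", ["make", "acceptance-test"]),
    ]

    def run_pipeline(commit_id: str) -> bool:
        """Run each stage in order; stop at the first failure."""
        for name, command in STAGES:
            result = subprocess.run(command, capture_output=True)
            if result.returncode != 0:
                print(f"{commit_id}: failed at '{name}', not releasable")
                return False
            print(f"{commit_id}: '{name}' passed")
        print(f"{commit_id}: ready for release")
        return True

Note that “ready for release” is not the same as “released”: whether a releasable build actually goes to production remains a business decision.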

If you decide to use CD, you have two options. You can create a CD pipeline yourself, but you must then also provision and manage the infrastructure that supports the pipeline. Or you can use CD as a service, which means that someone else handles the server administration while you concentrate on building applications. This is often a good option for a small company that doesn’t have the IT resources to create its own CD pipeline. Additional things to consider are:

  • Applications that are well suited to CD as a service follow the “convention over configuration” principle (illustrated in the sketch after this list).
  • Doing CD requires you to understand how to design applications for CD, how to treat infrastructure as code, and how to automate services and infrastructure.
  • Choosing to start with a service based on conventions means that many of the decisions about the infrastructure have already been made for you, so you can focus on how to design your application for CD.
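
As a rough illustration of “convention over configuration,” the hypothetical snippet below infers a project’s build command from its layout instead of requiring an explicit configuration file. The marker files and the commands they imply are assumptions made for the example.

    from pathlib import Path

    # Hypothetical conventions: the presence of a well-known file
    # implies the build command, so no per-project config is needed.
    CONVENTIONS = {
        "requirements.txt": ["pip", "install", "-r", "requirements.txt"],
        "package.json": ["npm", "install"],
        "pom.xml": ["mvn", "package"],
    }

    def infer_build_command(project_dir: str) -> list[str]:
        """Pick a build command from the project layout, by convention."""
        for marker, command in CONVENTIONS.items():
            if (Path(project_dir) / marker).exists():
                return command
        raise ValueError("no convention matched; explicit config needed")

A service built on conventions like these can start building an application the moment it sees the repository, which is exactly what makes the turnkey approach attractive.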

(Feature image courtesy Flickr user OwenXu)

The value of green HPC
https://gigaom.com/report/the-value-of-green-hpc/ (Wed, 30 Oct 2013)

Forward-thinking CIOs are anticipating increased regulation of carbon emissions and want lower, more predictable energy costs over the long term, so they are looking at ways to go green. They know that data centers are under scrutiny for their sustainability, and that demand for data center services is growing while fossil-fuel costs are already high, still rising, and increasingly difficult to predict.

Green data centers present one solution because they use renewable energy sources, have efficient data center facilities, and use efficient IT equipment. The savings these data centers offer can be transformed into more processing power, which gives new opportunities for increased business revenue. Many of these data centers are located where they can take advantage of an area’s natural resources (cool climates, for example) and sources of power such as wind, geothermal, and hydroelectric.

However, not all applications are suitable for offloading to a data center, whether it’s green or not. Deciding which applications can be placed in a green data center while still satisfying business and performance specifications is critical to success. Among the candidates to consider are high-performance computing (HPC) applications. HPC was once limited to scientific research, but many businesses now use it to analyze large amounts of data and to create simulations and models. HPC applications are compute-intensive and, when applied at scale, require large amounts of energy. However, because users of these applications don’t require real-time responses, you have flexibility in where you place these applications. This means that you can take advantage of the lower energy costs a green data center offers, no matter where it’s located. This report analyzes these topics as well as the following areas:

  • Three factors to consider in choosing a green data center for HPC are the source of the data center’s power, the efficiency of its IT equipment, and the efficiency of the facility itself (a toy scoring sketch follows this list).
  • Today’s CIOs can build a new data center, refurbish an existing one, use co-location, or use the cloud. Each option needs to be balanced against the requirements of increased data center traffic, government regulations, volatile energy costs, and sustainable practices.
  • Latency is the single most important criterion for choosing the appropriate applications for cloud or co-location. Following latency, other considerations are whether the application must peer with another company, the business requirements, the application architecture, current and predicted application workload, and the application’s resource consumption rate.
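
The sketch below shows one way to compare candidate sites along the three factors above. The weights and the numbers are invented for illustration; because HPC users don’t need real-time responses, latency is deliberately left out of the score.

    from dataclasses import dataclass

    @dataclass
    class DataCenter:
        name: str
        renewable_fraction: float  # share of power from renewables, 0..1
        pue: float                 # power usage effectiveness, >= 1.0
        energy_cost: float         # dollars per kWh

    def green_score(dc: DataCenter) -> float:
        """Toy score: favor renewables, a low PUE, and cheap energy."""
        return dc.renewable_fraction / (dc.pue * dc.energy_cost)

    sites = [
        DataCenter("hydro-cooled northern site", 0.95, 1.15, 0.04),
        DataCenter("urban co-location", 0.30, 1.60, 0.11),
    ]
    best = max(sites, key=green_score)
    print(f"best candidate for latency-tolerant HPC: {best.name}")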

The power of IT: data-center efficiency and disaster recovery in the cloud
https://gigaom.com/report/the-power-of-it-data-center-efficiency-and-disaster-recovery-in-the-cloud/ (Wed, 14 Aug 2013)

Today’s data center managers must not only satisfy customer demands for around-the-clock availability from anywhere in the world; they must also contend with demands from within their own organizations to help reduce operational costs.

Customers and internal stakeholders alike expect the same availability as traditional “plain old telephone service” (POTS). In days gone by, providers such as AT&T engineered their dial-tone service to be available 99.999 percent of the time. This dial-tone reliability has become such a well-known benchmark that it is commonly known as the “five nines” standard, the equivalent of having a dial tone available for all but about five minutes a year.
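
The arithmetic behind the benchmark is easy to check, as this quick calculation shows:

    # Downtime allowed per year at a given availability level.
    availability = 0.99999  # "five nines"
    minutes_per_year = 365.25 * 24 * 60
    downtime = (1 - availability) * minutes_per_year
    print(f"{downtime:.2f} minutes of downtime per year")  # about 5.26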

However, high expectations translate into increasing pressure on data center managers, who can quickly find themselves on the horns of a dilemma. On the one hand, they must keep their facilities operating at peak performance. On the other hand, they face budget pressure from within their own companies, while external regulatory bodies demand that energy usage be reduced.

This paper is intended for executives who determine their organization’s business strategies and IT policies. If you are looking for ways to decrease data center costs and are considering a variety of options, you need to know what possibilities are available as well as what successes others have had.

A primary question you must answer is “How can organizations continue to provide availability, scale to the needs of their customers, and remain cost-effective?” This single question requires that you look at multiple ways to make your data center more efficient, as well as consider other options, such as offloading some of your applications to the cloud.

The data center is a complex environment. You will need to examine how well you take advantage of virtualization, multi-tenancy, location independence, and optimization. Two other issues to consider are how innovative your IT department is and whether you have a good business continuity plan in place. Highly successful data centers spend a great deal of their time and budget on new projects and on reengineering their services. And, as Hurricane Sandy brought home to many, a business continuity plan gives you a strategy for dealing with disasters.

Another option is to examine the efficiencies that are potentially available by using the cloud. Disaster recovery is one application that may be well suited to the cloud because it normally uses a small number of resources and only requires maximum resources when a major problem occurs. Of course, you must ask potential vendors many questions, such as what their bandwidth capacity is, what their own disaster recovery plan is, and how they handle information security.

Finally, what technologies and approaches can make your data center more efficient, not just in the short term but over its entire lifespan? For example, people are discussing how to enforce accountability (making sure departments are held responsible for the energy they use), new energy-efficient processors, the merits of DC power, and better ways to measure that elusive quality: efficiency.

After reading this paper, you will better understand the current challenges surrounding data centers, the concrete benefits that can come from increasing efficiency in your own facilities, and how the cloud can help you reduce both your costs and your carbon footprint. Some of the issues we address are:

  • Trends driving data center efficiency go beyond energy. Other factors include how well data centers can accommodate growth, how well critical systems are protected, and how well they can satisfy customer demands.
  • Multi-tenancy can include shared IaaS services such as metered usage and identity management as well as the PaaS layer, which may offer such things as application servers and development environments, and might ultimately extend to the SaaS layer.
  • Hurricane Sandy in 2012 demonstrated that good continuity planning involves redundant data centers, careful placement of equipment within a facility, and well-managed generator systems.
  • Claims that cloud providers have enormous economies of scale and will therefore be less expensive than existing IT resources, while true in many cases, are overly simplistic: they ignore the kind of company involved and the efficiencies it may already have achieved in the unit cost of some resources, such as storage.
  • Any disaster recovery (DR) plan needs to determine the recovery point objective (RPO) and the recovery time objective (RTO) in order to understand what happens if a particular process or application goes offline. While DR plans often focus on bridging the gap where data, software, or hardware have been damaged or lost, they should also take into account what happens if personnel are not available.
  • While power usage effectiveness (PUE) is a starting point for measuring a data center’s efficiency, other considerations such as carbon emissions, water usage effectiveness, and energy sources are also important when determining how green a data center really is (a worked PUE example follows this list).
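
PUE itself is simple to compute: total facility energy divided by the energy delivered to IT equipment, so a value near 1.0 means little overhead. The power figures below are made up for the example.

    def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
        """Power usage effectiveness: total facility power / IT power."""
        return total_facility_kw / it_equipment_kw

    # Hypothetical facility: 1.5 MW total draw, 1.0 MW reaching IT gear.
    print(f"PUE = {pue(1500.0, 1000.0):.2f}")  # 1.50: half a watt of
    # overhead (cooling, power distribution) per watt of IT load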

Continuous delivery and the world of devops
https://gigaom.com/report/continuous-delivery-and-the-world-of-devops/ (Tue, 02 Oct 2012)

The advent of online businesses has created new opportunities and fierce competition. Companies want to get their products and services to market as fast as they can, and releases that occur in periods of months or years are no longer competitive. As a result, the pattern of how to release software is changing from large, infrequent releases of new software to small releases that occur very frequently, as shown in Figure 1. The ultimate goal is the continuous delivery of software updates.

[Figure 1. The changing pattern of software releases]

This paper explains the world of continuous delivery and its underlying philosophy, devops. Continuous delivery is an automated pipeline constructed with various technologies that allows you to ensure that your code is always ready to be released. It does not mean that you have to release every change you implement: That is a business decision. It does mean that when you choose to release, your code is ready, fully functional, and fully tested.

In conjunction with the technology is the emerging devops methodology, an outgrowth of the agile movement, which stresses collaboration among groups that have often found themselves at odds, in particular development teams and operations teams. This increased level of collaboration blurs the boundaries between infrastructure and code. Looking at application code and infrastructure holistically rather than as separate disciplines, and treating them the same in terms of automated delivery, provides compelling benefits in time to market and overall stability.
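
One way to picture treating infrastructure as code is the hypothetical sketch below: the desired servers are described as plain data in the codebase, so the same version control, review, and automated-delivery practices apply to infrastructure as to application code. The resource model is invented for the example.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Server:
        """Desired state of one server, kept in version control."""
        name: str
        cpu_cores: int
        memory_gb: int
        packages: tuple[str, ...]

    DESIRED = [
        Server("web-1", cpu_cores=4, memory_gb=16, packages=("nginx",)),
        Server("db-1", cpu_cores=8, memory_gb=64, packages=("postgresql",)),
    ]

    def reconcile(current: dict[str, Server]) -> list[str]:
        """List the actions needed to reach the desired state."""
        return [f"provision or update {spec.name}"
                for spec in DESIRED if current.get(spec.name) != spec]

Because the infrastructure description is just code, a change to it flows through the same continuous delivery pipeline as any other change.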

The big machine: creating value out of machine-driven big data
https://gigaom.com/report/the-big-machine-creating-value-out-of-machine-driven-big-data/ (Mon, 09 Apr 2012)

Big data continues to offer more and more opportunity for businesses. By analyzing information already in the organization, executives can cut costs and increase revenue. For example, you may discover that you need to stock certain inventory items at particular times, or you may be able to offer customers real-time incentives to buy more of your products.

Companies can now take advantage of data that has traditionally been overlooked, thanks to the falling cost of computing and the evolution of new technologies for analyzing large amounts of information. Most of this information is created by machines as a byproduct of normal operations; examples of such operational data include call detail records and event logs.

This paper explains how business executives, working with their CTOs or CIOs and other technology managers, can use big data within their organizations. It explains what big data is and how it differs from traditional business intelligence, and it discusses the considerations executives should weigh as they plan their big data strategies. For example, is it better to build your own system or to buy one? Should you run your system on premises or in the cloud? How do you plan for access control and scalability? The paper also includes examples of how some companies are putting their operational data to creative use.

Remember that users are becoming accustomed to faster response times and richer data sources, and they expect self-service access to resources. Big data can help you satisfy all of these requirements. If your organization doesn’t have a big data strategy now, you will need one in the future.
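
As a small taste of putting machine-generated operational data to work, the sketch below tallies events per hour from a plain-text event log. The log format is an assumption made for the example.

    from collections import Counter
    from datetime import datetime

    def events_per_hour(log_lines: list[str]) -> Counter:
        """Count events per hour; lines must start with an ISO timestamp."""
        counts: Counter = Counter()
        for line in log_lines:
            stamp = datetime.fromisoformat(line.split(" ", 1)[0])
            counts[stamp.replace(minute=0, second=0, microsecond=0)] += 1
        return counts

    sample = [
        "2012-04-09T10:15:02 user=42 action=checkout",
        "2012-04-09T10:48:11 user=17 action=search",
        "2012-04-09T11:03:55 user=42 action=checkout",
    ]
    for hour, count in sorted(events_per_hour(sample).items()):
        print(hour, count)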

Migrating media applications to the private cloud: best practices for businesses
https://gigaom.com/report/migrating-media-applications-to-the-private-cloud-best-practices-for-businesses/ (Mon, 05 Dec 2011)

Web content that relies on interactivity, social networking, and personalization is becoming the dominant form, but it puts particular demands on the network: it requires a low-latency environment in which sites respond quickly to user input. Content delivery networks (CDNs), which are designed to deliver large amounts of static content, may be too slow to provide this environment. Even public clouds may not be adequate: they provide a generic network that cannot be tuned to the demands of a particular application, and outages are also a problem.

So if your company has a cloud application with a predictable audience size or one that is costing you more than $25,000 a month to host, you may want to consider maintaining a private cloud. It is important to remember that using a private cloud does not preclude also using the public cloud. There is a spectrum of possibilities, including using a hybrid solution.
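
A back-of-the-envelope comparison can frame the decision. Every dollar figure below is a hypothetical placeholder, not a number from the paper.

    # Hypothetical monthly costs for a steady, predictable workload.
    public_cloud_monthly = 30_000.0     # current hosting bill
    private_hardware_capex = 400_000.0  # servers and network gear
    amortization_months = 36            # straight-line write-off
    private_monthly_opex = 12_000.0     # power, space, staff share

    private_monthly = (private_hardware_capex / amortization_months
                       + private_monthly_opex)
    print(f"public:  ${public_cloud_monthly:,.0f}/month")
    print(f"private: ${private_monthly:,.0f}/month")  # about $23,111

In this toy example the private option wins on monthly cost, but the result is sensitive to the amortization period and to how predictable the workload really is, which is why a hybrid split is often the practical answer.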

This paper is an overview of the factors that decision makers developing a public-to-private cloud-migration strategy should consider, recognizing that public versus private is not an all-or-nothing proposition: there is considerable flexibility along a spectrum of implementation choices. The paper describes some pitfalls that must be avoided along the way, and it provides a case study of Zynga, a company that has found a way to use both the private and public clouds in a hybrid solution.
