Application Cache Performance Testing v1.0

Product Evaluation: Azure Cache for Redis

1. Summary

Applications and their performance requirements have evolved dramatically in today’s landscape. The cloud enables enterprises to differentiate and innovate with APIs and microservices at a rapid pace. Cloud providers, like Azure, allow microservice endpoints to be cloned and scaled in a matter of minutes. Compared to on-premises deployments, the cloud offers elastic scalability, faster server deployment, faster application development, and less costly compute.

Likewise, databases play an important role in modern applications and microservices. Fully-managed database services in the cloud are being adopted by industries in all verticals. They offer many advantages over hosting on-premises, including the speed of development and deployment and relieving the administration and maintenance burden.

This report focuses on a web application deployed in Azure with a backend database under a heavy read/write load. Under such loads, the time it takes to access or write data may cause high application response latency and delay further transactions. Latency is specifically a problem with read and write-intensive applications that include high volumes of user or machine traffic.

Many organizations depend on their apps, APIs, and microservices for high performance and availability. For this paper, we define “high performance” as workloads of more than 1,000 transactions per second (tps) with sub-second latency requirements across the landscape. For these organizations, performance is an essential requirement, equal to availability and security, because they rely on these transaction rates to keep up with the speed of their business. Thus, an application’s underlying database solution must not be a performance bottleneck.

Moreover, many of these companies are looking for a distributed solution to load balance across redundant applications and enable high transaction volumes. A business sustaining 1,000 transactions per second around the clock handles roughly 2.6 billion API calls a month (1,000 × 86,400 seconds × 30 days). Thus, performance can be a critical factor when architecting an application.

In this paper, we reveal the results of application performance testing we completed both with and without Azure Cache for Redis on top of Azure SQL Database and Azure Database for PostgreSQL.

For our test, API requests came back with the lowest latencies and highest throughput by far when Azure Cache for Redis was used. The additional cost of Azure Cache for Redis is negligible. We recommend that any Azure Database for PostgreSQL or Azure SQL Database application anticipating over 1,000 tps eliminate the latency concern by adding Azure Cache for Redis, which provided an over 800% performance gain over the 8 DB vCore configuration in our test. Considering the negligible cost of Azure Cache for Redis, it might be possible to achieve significant performance gains while reducing costs by using less expensive database instances. Also, even though 1,000 tps is a high-end transaction rate, Redis is applicable at transaction volumes of any size. You don’t need to max out your database before you can benefit from a cache.

Testing hardware and software in the cloud is very challenging. Configurations may favor one system over another in feature availability, virtual machine processor generations, memory amounts, storage configurations for optimal input/output, network latencies, software and operating system versions, and the workload itself. Even more challenging is testing fully managed, as-a-service offerings where the underlying configurations (processing power, memory, networking, and the like) are unknown. Our testing demonstrates a narrow slice of potential configurations and workloads.

This is a sponsored report. Microsoft chose the backend database and Azure Cache for Redis configurations. GigaOm chose the test methodology, set up the infrastructure, and ran the testing workloads. Choosing testing configurations is subject to judgment. We have attempted to describe our decisions in this paper.

We leave the issue of fairness for the reader to determine. We strongly encourage you, as the reader, to look past marketing messages and discern for yourself what is of value. We hope this report is informative and helpful in uncovering some of the challenges and nuances of application data architecture selection.

We have provided enough information in the report for anyone to reproduce this test. You are encouraged to compile your own representative workloads and test compatible configurations applicable to your requirements.

2. High Performance Cloud Apps

The performance requirements of web applications in the cloud vary greatly with an organization’s needs; some are far more demanding than others. Since most applications require a database, chokepoints and performance bottlenecks often occur at the database level.

Furthermore, in terms of high performance, we focus particularly on latency results at the 95th percentile and above. At first glance, these might seem like outlier cases, but in our experience these measures matter a great deal. Latency results tend to be multi-modal over time, with the tops of the spikes representing “hiccups” in response times. These hiccups matter. Even if the median response time is less than 30 milliseconds, hiccups in latency have a cumulative negative effect on subsequent user experiences.

For example, if you visit a fast-food drive-through where the median wait time for food is 1 minute, you probably think that was a good customer experience. However, what if the customer in front of you has a problem with their order, and it takes 10 minutes to resolve? Your wait time would actually be 11 minutes. Because your request came in line after the “hiccup,” the 99.99th percentile’s delay becomes your delay too.

This paper aims to explore database options for potentially supporting this high-end performance use case.

Azure Cache for Redis

Azure Cache for Redis is a fully-managed, in-memory data store based on the open-source software Redis. This service improves the performance and scalability of an application that depends on rapidly reading and writing backend databases. Azure Cache for Redis is able to process heavy loads of application requests by keeping frequently accessed data in memory, so that it can be written to and read from much more quickly than on disk. Azure Cache for Redis brings a critical low-latency and high-throughput data storage solution to modern applications.

Redis was first released in 2009 and became popular because of its dual purpose as a data cache and data store. The site DB-Engines consistently ranks Redis as the most popular key-value database. Microsoft launched the first general release of Azure Cache for Redis in 2014. Then in 2020, a new partnership between Microsoft and Redis Labs was formed to further enhance Azure Cache for Redis. This partnership was the first native integration between Redis Labs and a major cloud platform.

Azure Cache for Redis provides secure and dedicated Redis servers and full Redis API compatibility. The service is operated by Microsoft, hosted on Azure, and accessible to any application within or outside of Azure.

Azure Cache for Redis can be deployed standalone or as a heavy lifter to enhance the performance of other Azure database services—such as Azure SQL or PostgreSQL—in high performance scenarios. It can be used as a distributed data cache, content cache, a session store, a message broker, and much more.

As a data cache, Azure Cache for Redis is commonly used in a “cache-aside” pattern, loading data into the cache only as it is needed. When the system changes the underlying data, it can also update the cache, making the change visible to all other clients. Additionally, the service can set an expiration on cached data and apply an eviction policy, so that stale or cold entries are dropped and refreshed from the database on a later read.
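
To make the pattern concrete, the following is a minimal cache-aside sketch in Python, assuming the redis-py and pyodbc clients used later in this report. The host name, key format, and one-hour expiration are illustrative only, not the exact code we tested:

import redis

# Illustrative client; any relational connection works the same way on a miss.
r = redis.Redis(host='example.redis.cache.windows.net', port=6380, password='password', ssl=True)

def get_item_price(cur, item_id):
    # 1. Try the cache first.
    price = r.get('item:%d' % item_id)
    if price is not None:
        return float(price)
    # 2. On a miss, fall back to the database...
    cur.execute('select price from items where itemID = ?', item_id)
    price = cur.fetchone()[0]
    # 3. ...and populate the cache, with an expiration, for future readers.
    r.set('item:%d' % item_id, str(price), ex=3600)
    return float(price)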

As a session store, Azure Cache for Redis is often used for shopping carts and other user-history data that a web application associates with user cookies. Storing too much in a cookie hurts performance, because the cookie grows and must be passed and validated with every request. Instead, the web application uses the cookie only as a key to look up the data: in a database, or, with an in-memory cache like Azure Cache for Redis, directly in the cache, avoiding a round trip to a full relational database.
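
A minimal sketch of that session-store usage, reusing the client from the sketch above (the field names and 30-minute idle timeout are illustrative assumptions):

# The cookie value serves as the cache key; no relational query is required.
def save_session(session_id, cart_total, last_page):
    r.hmset('session:%s' % session_id, {'cartTotal': cart_total, 'lastPage': last_page})
    r.expire('session:%s' % session_id, 1800)  # evict idle sessions after 30 minutes

def load_session(session_id):
    return r.hgetall('session:%s' % session_id)  # empty dict if the session expired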

This paper also demonstrates how these real-life scenarios and usage patterns were used in our testing.

3. Test Setup

The GigaOm Web Application Database Load Test is a simple workload designed to simulate a common web application with a backend database, attack it with a barrage of HTTP requests, and gradually increase the load until the application fails.

First, we built a custom backend API for this test, developed by GigaOm. It is a Python application that leverages the free-to-use Falcon API framework. We wanted the backend API to be as performant and lightweight as possible, so that the latency added by the application itself was minimized. The test setup is realistic, but it is also a configuration that shows off the benefit of caching.

The application and its backend database are designed to mimic a simplistic user session and shopping cart management API or microservice that would run on the backend of an e-commerce website. We found this usage to be both commonly used and easily understood by both technical and non-technical readers alike. The application supports the following requests:

  • Login: Log in a user and store their session identifier and login time
  • View: Allow a logged-in user to view an item
  • Add: Allow a logged-in user to add an item to their cart
  • Remove: Allow a logged-in user to remove an item from their cart
  • Logout: Log out a user and update their session as inactive and log the logout time

The backend database had only three (3) tables:

  • SESSIONS: Store the user identifier, their active status, and their login and logout timestamps
  • ITEMS: Store the items in the store catalog (our ITEMS table had 10,000 unique items in it)
  • CARTS: Store the user cart identifier and all the items in their cart

When we used Azure Cache for Redis, we stored keys with the following prefixes and their respective values:

Entity     Key                     Value
SESSIONS   session:<id>            {an array of session data}
ITEMS      item:<item>             {an array of item data}
CARTS      cart:<id>:item:<item>   {an array of cart data}

Tests without Azure Cache for Redis had 10,000 items but empty SESSIONS and CARTS tables. Tests with Azure Cache for Redis had a completely empty cache. The ITEMS were lazy loaded into cache the first time the item was viewed by a user.

The application works by binding the API application within Azure App Services and listening for GET requests, such as:

GET https://app-name.azurewebsites.net/login?id=9999
GET https://app-name.azurewebsites.net/view?id=9999&item=123
GET https://app-name.azurewebsites.net/add?id=9999&item=123
GET https://app-name.azurewebsites.net/remove?id=9999&item=123
GET https://app-name.azurewebsites.net/logout?id=9999

If the database operations called by the GET request were successful, the API would respond with an HTTP status of 200 and the JSON string:

{"status": "Ok"}

If the database operation failed, the API would respond with an HTTP status of 500 and the JSON string:

{"status": "Error"}

We had four versions of our application—each designed to test one of four backend database configurations:

  • Azure SQL Database
  • Azure SQL Database with Azure Cache for Redis
  • Azure Database for PostgreSQL
  • Azure Database for PostgreSQL with Azure Cache for Redis

We deployed these four applications as containers onto Azure App Service—a fully-managed service for building, deploying, and scaling web apps. Once they were deployed, Azure App Service allowed us to scale them out easily to as many as 30 identical instances (at the Premium V2 tier), with a load balancer in front to distribute the request load equally among the apps. This redundancy ensured that the application tier would not become a bottleneck, allowing the 30 apps to barrage the backend database with multiple simultaneous requests.

The backend database would then complete the transaction by sending the query results back to the application, which then would report back a success. If the database failed to complete the query request, the application would catch it as an exception and report back a failure.

To perform the attacks, we used the load testing tool Apache JMeter, a free-to-use, open-source test kit made available by the Apache Software Foundation. Apache JMeter is a pure Java application designed to load test functional behavior and measure performance. We used the data generated by JMeter to compile and interpret the results of the test.

Figure 1 shows the test plan and the components we used within Azure to perform the tests.

Figure 1 – The GigaOm Web Application Database Load Test on Azure

Each individual attack designed for JMeter was broken into two stages: Ramp-Up and Steady State.

The Ramp-Up stage was simply the Login request for a new user, followed by the viewing of a new item. For the login without Azure Cache for Redis, this involved a write (INSERT) into the SESSIONS table. For the item view without cache, this performed a read (SELECT) from the SESSIONS table to make sure the user is logged in and a read (SELECT) from the ITEMS table. For tests with Azure Cache for Redis, this involved a write (HMSET) of the session key-value pair, a read (GET) from the cache to make sure the session exists, and a read (GET) of the item from the cache. If the item was not already in the cache, it was read (SELECT) from the ITEMS table in the database and its information was loaded into the cache for future requests.

Once a user completed the Ramp-Up phase, they entered a Steady State phase loop where they would have three randomly-selected choices:

Choice   Operations                                                          Weight
1        VIEW an item                                                        70%
2        VIEW an item, then ADD it to their cart                             20%
3        VIEW an item, ADD it to their cart, then REMOVE it from their cart  10%

We weighted these choices to occur 70%, 20%, and 10% of the time, respectively. This was implemented in JMeter using the Blazemeter Weighted Switch Controller.
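
In Python terms, the weighting behaves like the following sketch. The actual weighting ran inside JMeter via the Weighted Switch Controller; the function and variable names here are illustrative:

import random

CHOICES = [('view',), ('view', 'add'), ('view', 'add', 'remove')]
WEIGHTS = [70, 20, 10]

def steady_state_step(user_id, item_id):
    # Pick one of the three operation sequences with 70/20/10 probability
    ops = random.choices(CHOICES, weights=WEIGHTS, k=1)[0]
    return ['/%s?id=%d&item=%d' % (op, user_id, item_id) for op in ops]

print(steady_state_step(9999, random.randint(1, 10000)))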

For the VIEW item, the same database/cache operations happened as those described in the Ramp-Up phase.

For the ADD item without cache, the app performed a read (SELECT) from the SESSIONS table to make sure the user is logged in, a read (SELECT) from the ITEMS table, and a write (INSERT) into the CARTS table. With cache, the app performed a read (GET) of the session key to make sure the user is logged in, a read (GET) of the item from the cache (or pulled it from the database and added it to the cache), and a write (HMSET) of the cart-item key.

The REMOVE item operation repeated the VIEW and ADD options above, followed by a DELETE from the CARTS table (without cache) or a DELETE of the cart-item key from the cache.

During testing, we found that the Logout process was not needed, because our attack pattern was to continually add users until the application broke—there was no sense in logging them out.

The test was conducted by continually adding users at a rate of 1,000 new users per minute until the application reported back its first 500 status error from either the backend database or Azure Cache for Redis. Once the first error occurred, we declared the app to have failed and we stopped the test immediately.
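
A simplified sketch of that stop condition follows. JMeter handled the real concurrency and steady-state loops; the host and the single-threaded pacing here are illustrative only:

import time
import requests

BASE = 'https://app-name.azurewebsites.net'  # hypothetical host
user_id = 0

while True:
    user_id += 1
    resp = requests.get(BASE + '/login', params={'id': user_id})
    if resp.status_code == 500:
        # First error: declare the application failed and stop the test
        print('First error at user %d; stopping.' % user_id)
        break
    time.sleep(60.0 / 1000)  # pace new logins at 1,000 users per minute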

Test Environments

Selecting and sizing the compute and storage for comparison can be challenging, particularly for fully managed as-a-service vendor offerings. The tables below give a layout of the configurations and SKUs we tested.

Component                        Azure SKU
JMeter                           Azure VM E8a_v4 (8 vCPUs, 64 GB RAM)
App Service Plan (Scale Up)      P3V2 (Premium V2 tier; 840 Azure Compute Units, 14 GB RAM)
App Service Plan (Scale Out)     30 instances
Azure SQL Database               Gen 5 General Purpose (2, 8, 16, 24, and 32 vCores)
Azure Database for PostgreSQL    General Purpose (2, 8, 16, 24, and 32 vCores)
Azure Cache for Redis            P1 Premium (6 GB cache)

Results may vary across different configurations, and again, you are encouraged to compile your own representative workloads and test compatible configurations applicable to your requirements.

4. Test Results

This section analyzes the latencies and throughputs experienced in each of the scaled GigaOm Web Application Database Load Tests described above. Lower latency is better—meaning database responses via the APIs come back faster. We report latency at the 50th, 90th, 95th, 99th, 99.9th, and 99.99th percentiles, along with the maximum. These values are important for service-level agreements (SLAs) and for knowing the slowest response times a user might experience.
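
For reference, these percentiles can be reproduced from raw per-request latencies (for example, exported from JMeter) with a few lines of Python; the file name is hypothetical:

import numpy as np

latencies_ms = np.loadtxt('latencies.csv')  # one latency per line, in milliseconds

for p in (50, 90, 95, 99, 99.9, 99.99):
    print('p%-6s %10.1f ms' % (p, np.percentile(latencies_ms, p)))
print('max    %10.1f ms' % latencies_ms.max())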

Maximum Throughput Azure SQL DB

This chart shows the number of successful transactions (before the first error) we were able to generate with our Azure SQL Database configurations. With Azure Cache for Redis, an 8x higher level of throughput (versus 8 DB vCores) was attained. Adding DB vCores also improves throughput, though only modestly: the improvement was not even 1.5x going from 8 to 32 DB vCores.

Maximum Throughput Azure Database for PostgreSQL

This chart shows the number of successful transactions (before the first error) we were able to generate with our Azure Database for PostgreSQL configurations. With Azure Cache for Redis, a 10x higher level of throughput (versus 8 DB vCores) was attained. Adding DB vCores also improves throughput, though only modestly: the improvement from 2 to 32 DB vCores was 1.84x.

Azure SQL DB Latency

The following chart shows the latencies of the platforms, with users added at the rate of 1,000 new users per minute. Even though 1,000 tps is a high-end transaction rate, Redis is applicable at transaction volumes of any size. You don’t need to max out your database before you can benefit from a cache. Note the y-axis latency is in milliseconds.

As you can see, the 8 DB vCores configuration without cache had the slowest response times overall, with the 99.99th percentile experiencing a whopping 73-second latency. Azure Cache for Redis had by far the fastest and most consistent latency profile overall. Latencies trended down somewhat with the addition of DB vCores, but even at 32 DB vCores, the difference versus Azure Cache for Redis at the 99.99th percentile, for example, was 59x.

Azure Db for PostgreSQL Latency

The following chart shows the latencies of the platforms, with users added at the rate of 1,000 new users per minute. Even though 1,000 tps is a high-end transaction rate, Redis is applicable at transaction volumes of any size. You don’t need to max out your database before you can benefit from a cache. Note the y-axis latency is in milliseconds.

As you can see, the 8 DB vCores configuration without cache had the slowest response times overall, with the 99th percentile experiencing a 17-second latency. Azure Cache for Redis had by far the fastest and most consistent latency profile overall. Latencies trended down somewhat with the addition of DB vCores, but even at 32 DB vCores, the difference versus Azure Cache for Redis at the 99.99th percentile, for example, was 33x.

5. Conclusion

This report outlines the results from the GigaOm Web Application Database Load Test, a simple workload designed to simulate a common web application with a backend database, attack it with a barrage of HTTP requests, and gradually increase the load until the application fails.

Azure Cache for Redis is a fully managed, in-memory data store based on the open-source software Redis. We saw how Azure Cache for Redis is able to process heavy loads of application requests by keeping frequently accessed data in memory, where it can be written and read much more quickly than on disk. Azure Cache for Redis brings a critical low-latency and high-throughput data storage solution to modern applications.

For this test, using this particular workload with these particular configurations, API requests came back with the lowest latencies and highest throughput by far when Azure Cache for Redis was used. The additional cost of Azure Cache for Redis is negligible. We recommend that any Azure Database for PostgreSQL or Azure SQL Database application anticipating over 1,000 tps eliminate the latency concern by adding Azure Cache for Redis, which provided an over 800% performance gain over the 8 DB vCore configuration in our test. Considering the negligible cost of Azure Cache for Redis, it might be possible to achieve significant performance gains while reducing costs by using less expensive database instances. Also, even though 1,000 tps is a high-end transaction rate, Redis is applicable at transaction volumes of any size. You don’t need to max out your database before you can benefit from a cache.

The service is operated by Microsoft, hosted on Azure, and accessible to any application within or outside of Azure.

Keep in mind, optimizations on all platforms would be possible as the offerings evolve or internal tests point to different configurations.

6. Disclaimer

Performance is important but it is only one criterion for a platform selection. This test is a point-in-time check into specific performance. There are numerous other factors to consider in selection across administration, features and functionality, workload management, user interface, scalability, vendor, reliability, and numerous other criteria. It is also our experience that performance changes over time and is competitively different for different workloads.

GigaOm runs all of its performance tests to strict ethical standards. The results of the report are the objective results of the application of load tests to the simulations described in the report. The report clearly defines the selected criteria and process used to establish the field test. The report also clearly states the tools and workloads used. The reader is left to determine for themselves how to qualify the information for their individual needs. The report does not make any claim regarding the third-party certification and presents the objective results received from the application of the process to the criteria as described in the report. The report strictly measures performance and does not purport to evaluate other factors that potential customers may find relevant when making a purchase decision.

This is a sponsored report. Microsoft chose the backend database and Azure Cache for Redis configurations. GigaOm chose the test methodology, set up the infrastructure, and ran the testing workloads. Choosing testing configurations is subject to judgment. We have attempted to describe our decisions in this paper.

7. Appendix: Recreating the Test

The following is the code for the backend API with Azure Cache for Redis. It contains the database connection string and queries for Azure Cache for Redis and Azure SQL Database. To use Azure Database for PostgreSQL, you would use the Python psycopg2 package and connection string instead, such as:

import psycopg2
conn = psycopg2.connect("dbname='db' user='user@server' host='server.postgres.database.azure.com' password='password' port='5432' sslmode='require'")

You are free to use and modify this code at your own discretion. GigaOm makes no warranty or claim for its use beyond the scope of this test or report.

import falcon
import sys
import time
from datetime import datetime
import pyodbc
import redis

# Azure SQL Database connection settings
server = 'server.database.windows.net'
database = 'db'
username = 'user'
password = 'password'
driver = '{ODBC Driver 17 for SQL Server}'

conn = pyodbc.connect('DRIVER=' + driver + ';SERVER=' + server + ';PORT=1433;DATABASE=' + database + ';UID=' + username + ';PWD=' + password, autocommit=True)
cur = conn.cursor()
cur.execute('SELECT @@version')
db_version = cur.fetchone()[0]
print(db_version)

# Azure Cache for Redis connection (TLS on port 6380)
r = redis.Redis(host='server.redis.cache.windows.net', port=6380, db=0, password='password', ssl=True)

print('Redis v.' + r.execute_command('INFO')['redis_version'])

def create_destroy_session(id, action):
    # Record the session state change in the cache with a timestamp
    store_dt = datetime.strftime(datetime.now(), '%Y-%m-%d %H:%M:%S.%f')
    if (action == '/login'):
        r.hmset("session:" + id, { "isActive": 1, "activeDate": store_dt })
    elif (action == '/logout'):
        r.hmset("session:" + id, { "isActive": 0, "inactiveDate": store_dt })
    token = r.hgetall("session:" + id)
    return token

def add_remove_view_item(id, item, action):
    # Proceed only if the user has an active session in the cache
    if (r.hget("session:" + id, "isActive") == b'1'):
        # Cache-aside read: try the cache first, lazy-load from the database on a miss
        price = r.get("item:" + item)
        if not price:
            cur.execute("select price from items where itemID = %d" % int(item))
            price = cur.fetchall()[0][0]
            r.set("item:" + item, str(price))
        if (action == '/add'):
            store_dt = datetime.strftime(datetime.now(), '%Y-%m-%d %H:%M:%S.%f')
            r.hmset("cart:" + id + ":item:" + item, { "price": price, "addedDate": store_dt })
        elif (action == '/remove'):
            if (r.exists("cart:" + id + ":item:" + item)):
                r.delete("cart:" + id + ":item:" + item)
            else:
                return False
    else:
        return False
    return True

class SessionResource(object):
    def on_get(self, req, resp):
        try:
            res = create_destroy_session(req.params['id'], req.path)
        except:
            res = False
        if (res):
            resp.status = falcon.HTTP_200
            resp.body = ('{"status": "Ok"}\n')
        else:
            resp.status = falcon.HTTP_500
            resp.body = ('{"status": "Error"}\n')

class ItemResource(object):
    def on_get(self, req, resp):
        try:
            res = add_remove_view_item(req.params['id'], req.params['item'], req.path)
        except:
            res = False
        if (res):
            resp.status = falcon.HTTP_200
            resp.body = ('{"status": "Ok"}\n')
        else:
            resp.status = falcon.HTTP_500
            resp.body = ('{"status": "Error"}\n')

class ClearAllResource(object):
    def on_get(self, req, resp):
        # Reset the cache between test runs
        try:
            res = r.flushdb()
        except:
            res = False
        if (res):
            resp.status = falcon.HTTP_200
            resp.body = ('{"status": "Ok"}\n')
        else:
            resp.status = falcon.HTTP_500
            resp.body = ('{"status": "Error"}\n')

app = falcon.API()

session = SessionResource()
item = ItemResource()
clearall = ClearAllResource()

app.add_route('/login', session)
app.add_route('/logout', session)
app.add_route('/view', item)
app.add_route('/add', item)
app.add_route('/remove', item)
app.add_route('/clear', clearall)

The following is the code for the backend API without Azure Cache for Redis. It contains the database connection string and queries for Azure SQL Database. To use Azure Database for PostgreSQL, you would use the Python psycopg2 package, as shown before.

You are free to use and modify at your own discretion. GigaOm makes no warranty or claim for its use beyond the scope of this test or report.

import falcon
import sys
import time
import pyodbc

# Azure SQL Database connection settings
server = 'server.database.windows.net'
database = 'db'
username = 'user'
password = 'password'
driver = '{ODBC Driver 17 for SQL Server}'

conn = pyodbc.connect('DRIVER=' + driver + ';SERVER=' + server + ';PORT=1433;DATABASE=' + database + ';UID=' + username + ';PWD=' + password, autocommit=True)
cur = conn.cursor()
cur.execute('SELECT @@version')
db_version = cur.fetchone()[0]
print(db_version)

def create_destroy_session(id, action):
    # Record the session state change directly in the SESSIONS table
    if (action == '/login'):
        cur.execute("insert into sessions values (%d, '1', GETDATE(), NULL)" % int(id))
    elif (action == '/logout'):
        cur.execute("update sessions set isActive = '0', inactiveDate = GETDATE() where sessionID = %d" % int(id))
    token = id
    return token

def add_remove_view_item(id, item, action):
    # Proceed only if the user has an active session in the database
    cur.execute("select sessionId from sessions where sessionID = %d and isActive = '1'" % int(id))
    r = cur.fetchall()
    if (r):
        # Every request (view, add, remove) reads the item
        cur.execute("select price from items where itemID = %d" % int(item))
        r = cur.fetchall()
        if (r):
            if (action == '/add'):
                cur.execute("insert into carts values (%d, %d, %.2f, GETDATE())" % (int(id), int(item), float(r[0][0])))
            elif (action == '/remove'):
                cur.execute("select price from carts where sessionId = %d and itemId = %d" % (int(id), int(item)))
                r = cur.fetchall()
                if (r):
                    cur.execute("delete from carts where sessionId = %d and itemId = %d" % (int(id), int(item)))
                else:
                    return False
        else:
            return False
    else:
        return False
    return True

class SessionResource(object):
    def on_get(self, req, resp):
        try:
            r = create_destroy_session(req.params['id'], req.path)
        except:
            r = False
        if (r):
            resp.status = falcon.HTTP_200
            resp.body = ('{"status": "Ok"}\n')
        else:
            resp.status = falcon.HTTP_500
            resp.body = ('{"status": "Error"}\n')

class ItemResource(object):
    def on_get(self, req, resp):
        try:
            r = add_remove_view_item(req.params['id'], req.params['item'], req.path)
        except:
            r = False
        if (r):
            resp.status = falcon.HTTP_200
            resp.body = ('{"status": "Ok"}\n')
        else:
            resp.status = falcon.HTTP_500
            resp.body = ('{"status": "Error"}\n')

class ClearAllResource(object):
    def on_get(self, req, resp):
        # Reset the tables between test runs
        try:
            cur.execute("truncate table carts")
            cur.execute("truncate table sessions")
            r = True
        except:
            r = False
        if (r):
            resp.status = falcon.HTTP_200
            resp.body = ('{"status": "Ok"}\n')
        else:
            resp.status = falcon.HTTP_500
            resp.body = ('{"status": "Error"}\n')

app = falcon.API()

session = SessionResource()
item = ItemResource()
clearall = ClearAllResource()

app.add_route('/login', session)
app.add_route('/logout', session)
app.add_route('/view', item)
app.add_route('/add', item)
app.add_route('/remove', item)
app.add_route('/clear', clearall)


8. About Microsoft

Microsoft (Nasdaq “MSFT” @microsoft) enables digital transformation for the era of an intelligent cloud and an intelligent edge. Its mission is to empower every person and every organization on the planet to achieve more.

Microsoft offers Azure Cache for Redis. To learn more about Azure Cache for Redis, visit https://azure.microsoft.com/en-us/services/cache/.

9. About William McKnight

William McKnight is a former Fortune 50 technology executive and database engineer. An Ernst & Young Entrepreneur of the Year finalist and frequent best practices judge, he helps enterprise clients with action plans, architectures, strategies, and technology tools to manage information.

Currently, William is an analyst for GigaOm Research who takes corporate information and turns it into a bottom-line-enhancing asset. He has worked with Dong Energy, France Telecom, Pfizer, Samba Bank, ScotiaBank, Teva Pharmaceuticals, and Verizon, among many others. William focuses on delivering business value and solving business problems utilizing proven approaches in information management.

10. About Jake Dolezal

Jake Dolezal is a contributing analyst at GigaOm. He has two decades of experience in the information management field, with expertise in analytics, data warehousing, master data management, data governance, business intelligence, statistics, data modeling and integration, and visualization. Jake has solved technical problems across a broad range of industries, including healthcare, education, government, manufacturing, engineering, hospitality, and restaurants. He has a doctorate in information management from Syracuse University.

11. About GigaOm

GigaOm provides technical, operational, and business advice for IT’s strategic digital enterprise and business initiatives. Enterprise business leaders, CIOs, and technology organizations partner with GigaOm for practical, actionable, strategic, and visionary advice for modernizing and transforming their business. GigaOm’s advice empowers enterprises to successfully compete in an increasingly complicated business atmosphere that requires a solid understanding of constantly changing customer demands.

GigaOm works directly with enterprises both inside and outside of the IT organization to apply proven research and methodologies designed to avoid pitfalls and roadblocks while balancing risk and innovation. Research methodologies include but are not limited to adoption and benchmarking surveys, use cases, interviews, ROI/TCO, market landscapes, strategic trends, and technical benchmarks. Our analysts possess 20+ years of experience advising a spectrum of clients from early adopters to mainstream enterprises.

GigaOm’s perspective is that of the unbiased enterprise practitioner. Through this perspective, GigaOm connects with engaged and loyal subscribers on a deep and meaningful level.

12. Copyright

© Knowingly, Inc. 2020 "Application Cache Performance Testing" is a trademark of Knowingly, Inc. For permission to reproduce this report, please contact sales@gigaom.com.