1. Summary
Data security has become an indispensable part of the technology stack for modern applications. Protecting application assets and data against cybercriminal activities, insider threats, and basic human negligence is no longer an afterthought. It must be addressed early and often, both in the application development cycle and in the data analytics stack.
The requirements have grown well beyond the simplistic features provided by data platforms, and as a result a competitive industry has emerged to address the security layer. The capabilities of this layer must be more than thorough; they must also be usable and streamlined, adding minimal overhead to existing processes.
To measure the policy management burden, we designed a reproducible test that included a standardized, publicly available dataset and a number of access control policy management scenarios based on real-world use cases we have observed for cloud data workloads. We tested two options: Apache Ranger with Apache Atlas and Immuta. This study contrasts a largely role-based access control model with object tagging (OT-RBAC) against a pure attribute-based access control (ABAC) model, using these respective technologies.
This study captures the time and effort involved in managing the ever-evolving access control policies at a modern data-driven enterprise. With this study, we show the impacts of data access control policy management in terms of:
- Dynamic versus static
- Scalability
- Evolvability
In our scenarios, Ranger alone required 75x as many policy changes as Immuta to accomplish the same data security objectives, while Ranger with Apache Atlas required 63x as many. For our advanced use cases, Immuta required only one policy change each, while Ranger was not able to fulfill the data security requirement at all.
This study exposed the limitations of extending legacy Hadoop security components into cloud use cases. Apache Ranger uses static policies in an OT-RBAC model for the Hadoop ecosystem, with very limited support for attributes. The contrast with Immuta’s attribute-based access control (ABAC) model became clear. By leveraging dynamic variables, nested attributes, and global row-level security policies, Immuta can be implemented and updated quickly in comparison with Ranger.
As organizations migrate to the cloud and expand their data use, using Ranger as a data security mechanism creates a high policy management burden compared to Immuta, which our results show provides scalability, clarity, and evolvability for a complex enterprise’s data security and governance needs.
The chart in Figure 1 reveals the difference in cumulative policy changes required for each platform configuration.
Figure 1. Difference in Cumulative Policy Changes
The assessment and scoring rubric and methodology are detailed in the report. We leave it to you, the reader, to judge the fairness of the approach and to determine what is of value. We hope this report is informative and helpful in uncovering some of the challenges and nuances of data governance platform selection. You are encouraged to compile your own representative use cases and workflows and review these platforms in a way that is applicable to your requirements.
2. Data Security and Access Control
The quest for data maturity profoundly impacts most aspects of enterprise technical initiatives. The volume of data that enterprises generate and utilize, the complexity of its use that now includes machine learning, the footprint of its expansion into multiple clouds, and the limited tolerance for downtime combine to create an attack surface that requires focused professional attention. Data piracy is a threat, but so is accidental misuse.
Threats notwithstanding, consumer protections such as GDPR, CCPA, and more are joining HIPAA, SOX, and others as corporate requirements, and this influences data security and governance policy and implementation. At the same time, data is a key corporate asset—one whose use will define the winners and losers in the marketplace. Though it must be managed with the aforementioned guardrails, the business value of accelerated data access has never been greater.
The practice of securing data encompasses multiple dimensions of security, from physical security practices to the administrative and access controls managed by data security and governance tools such as the ones tested for this report.
These tools, together with the data practices of the organization, provide management, visibility, response, and a lifecycle approach to potential issues, with protections that span authorization, encryption, data masking, and redaction of sensitive files. Specifically, these tools provide policy definition, enforcement, and auditing for data assets. As described in the next section, the policies enforced need to be dynamic, scalable, and evolvable.
In this research, we demonstrate the real cost of maintaining a secure data environment.
In our tests, Immuta accomplished the data security objectives in the basic use cases at a fraction of the effort of Ranger and Ranger with Apache Atlas. In fact, Ranger needed 75x as many policy changes, and Ranger with Apache Atlas 63x as many, as Immuta to achieve the same use case objectives. The benefits of Immuta’s dynamic approach to data security are quantified and evident in our overall findings, including the advanced scenarios not possible with Ranger.
Several Immuta features make this possible. Immuta’s attribute-based access control leverages dynamic attributes to enforce data protection, enabling protection down to a granular level. With Immuta’s decoupled architecture, policy decisions can also be automatically enforced based on data and user attributes. Policy decisions carry forward automatically when new data or users are added.
3. Competitive Platforms
Our assessment includes the data security platforms of Apache Ranger and Immuta.
Apache Ranger
The Hadoop ecosystem has always consisted of a varying list of software components with some degree of compatibility when implemented together. The Apache Hadoop collection has grown over the years, even as market interest has recently declined.
It used to be that each software component needed to be secured separately. Apache Ranger changed all that. The project began as XA Secure and was also named Argus before settling on the name Ranger. Apache Ranger provides a plug-in for each of the Hadoop components, along with a common policy repository where an enterprise can define policies in a centralized location. Cloudera is the primary contributor to the project today, and commercial vendors such as Privacera have built plug-ins to support Ranger policies outside of Cloudera.
Immuta
Immuta is cloud-based software that provides automated, fine-grained access control over sensitive analytics data. Immuta handles data security, privacy, and access control. It replaces many manual data prep, security, and privacy steps, such as complex data integration jobs and manual anonymization.
Automated, scalable, and easy-to-understand Immuta policies feature attribute-based access control, a dynamic and streamlined approach to granting access based on user and resource attributes, as described by NIST. Immuta manages access control automatically and enforces it with techniques such as k-anonymization. Immuta natively supports Databricks, Snowflake, Amazon Redshift, Azure Synapse, GCP BigQuery, and other leading cloud platforms.
Some of the differences between Immuta and Apache Ranger are demonstrated in this paper.
4. Test and Results
This section analyzes the methods and findings of our data security policy management study. This study captures the time and effort involved in managing the ever-evolving data security policies at a modern data-driven enterprise. With this study, we show the impacts of data access control policy management in terms of:
- Dynamic versus static
- Scalability
- Evolvability
Policy management burden, quantified below, is the amount of time and effort required by data personas to manage enterprise information security as the number of policies, users, user groups, and other security dimensions increase.
Dynamic Versus Static
Dynamic data security policies provide a huge advantage over static policies because they reduce both the management burden and the risk of human error. Dynamic policies include dynamic variables, nested attributes, and global row-level policies, which adapt either automatically or with minimal effort as new users, user groups, datasets, metadata, and data governance requirements are added. Using dynamic variables over static roles reduces the amount of hard coding that ultimately will need to be redone or refactored as enterprises reorganize, merge with others, or face new security challenges. Nested attributes allow organizational hierarchies to govern data access, rather than static policy rules that must be constantly updated. Global row-level policies can govern new data and new databases as they are introduced to the information ecosystem.
Scalability
Scalability is the measure of how well a platform or solution adapts to rapid growth. In the case of data security, such growth often occurs when an enterprise experiences an explosion in data volume or variety, or performs an acquisition and must quickly merge two different security and governance structures and policy sets. A data security and governance platform has high scalability when it can automate and easily incorporate rapid changes in the data environment.
Evolvability
Evolvability is how well a data security and governance platform can adapt to slower, day-in, day-out changes within an enterprise without generating rework for the data security personas or introducing fear of making a change. Examples of changes that could require rework include the reassignment of a team to a new manager, a change in reporting structure for dashboards, users moving between groups, new governance requirements that must cascade down through different parts of the organization or apply to several databases, and so on.
Test Design
To measure policy management burden across the two platforms, we designed a reproducible test that included a standardized, publicly available dataset and a number of data security policy management scenarios based on real-world use cases we have observed in the field. These use cases were inspired by the existing work of Data Platform School, which we adopted as a starting point.
Data
The data we used in this test was the TPC-DS dataset. TPC-DS models the decision support functions of a retail product supplier. The supporting schema contains vital business information, such as customer, order, and product data, and is intended to mimic much of the complexity of a real retail data warehouse. TPC-DS is typically used for data warehouse performance testing; however, we found its schema and data to be good candidates for representing a real-world use case. TPC-DS was created and is maintained by the Transaction Processing Performance Council (TPC). According to the TPC, “TPC-DS is a decision support benchmark that models several generally applicable aspects of a decision support system.” More details may be found at tpc.org.
The TPC-DS schema models the sales and sales returns data for an organization that employs three primary sales channels: store, catalog, and web. The schema includes seven fact tables:
- A pair of fact tables (sales and returns) for each of the store, catalog, and web channels
- A single fact table that models inventory for the catalog and internet sales channels.
In addition, the schema includes 17 dimension tables that are associated with all sales channels. For our data masking test scenarios, we found over 50 columns of personally identifiable information (PII), which provided a healthy sample of data to tag as PII and use in our tests. Since we were not testing the performance of the database, we used a TPC-DS scale factor of 1 for the data size: we only needed some data, not a large volume.
All of the below tests are fully reproducible by the reader using this publicly available TPC-DS dataset.
Database
To set up this database, we used Starburst Enterprise 350-e.1 deployed from the Amazon Web Services (AWS) Marketplace using a CloudFormation template. We chose Starburst for a few reasons:
- It can be deployed with Apache Ranger already installed and configured
- It comes with TPC-DS data already available in its Hive metastore
- Immuta supports Starburst
Additionally, we installed and used Apache Atlas to tag the PII data in our database. However, we scored the Ranger scenarios both with and without Atlas to provide further detail on the impact Atlas has on policy management.
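For readers reproducing the setup, a quick query confirms that the TPC-DS tables are reachable through Starburst. The catalog and schema names below are assumptions that depend on your deployment, so adjust them to match your configuration:

-- Catalog and schema names are assumptions; substitute the ones in your Starburst deployment.
SELECT c_customer_sk, c_email_address, c_birth_country
FROM hive.tpcds.customer
LIMIT 10;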
Scoring
To score our test, we used a simple rubric that counted:
- The number of policies created
- The number of policy modifications
Each change counted was a step required to implement the data security requirements contained in the scenarios and can be seen as an element of the overall policy management burden.
Scenarios and Results
Our test scenario involves a data analytics team at a retail company that has just put into production a new data warehouse, which has the same data model as TPC-DS and leverages Starburst as its platform. The analytics team is working with the data governance and security teams to give users and groups across the company access to the data while enforcing the data security and governance policies currently in place. At first the requirements are fairly straightforward but, as in a typical organization, the security requirements evolve and change over time.
The scenarios shown here represent a number of commonly seen data security and governance requirements. We have three categories of requirements:
- Basic security
- Row-level security
- Advanced security
NOTE: In two of our scenarios below (1b and 1c), we required certain data columns to be identified as PII. Ranger does not have the capability to find and automatically tag PII attributes, while Immuta does. However, we also implemented Apache Atlas, which can be used to tag PII data. We tested Ranger with and without Atlas, so we have two separate scores for scenarios 1b and 1c.
Basic Scenario 1a: Provide Access to All Data for Central Office Employees
We began with the most basic of all security requirements: allowing employees in one AD group full access to the data.
Requirement:
- All users in a central office Active Directory (AD) group may SELECT from all tables in the data warehouse.
Policy Changes – Immuta:
- Created group “central-office”
- Created a new global access policy that allowed all users in the “central-office” group to subscribe to tables in the database.
Policy Changes – Ranger:
- Created group “central-office”
- Created a new access policy
- Specified the database, tables (*), columns (*), and allow conditions (SELECT for the central-office group)
Basic Scenario 1b: Mask All PII Data
Next, we were required to automatically mask all data that is personally identifiable information (PII).
Requirement:
- All data suspected to be PII should be nullified, except for users with an override authorization.
Policy Changes – Immuta:
Immuta automatically tagged data with a “Discovered.PII” tag when it matched any of its classifiers. This was done automatically, without setup or effort on our part.
- Created a user attribute, AuthorizedSensitiveData > All, to allow overriding of the PII policy
- Created a new global data policy that:
- Masked all columns tagged Discovered.PII and made them appear NULL
- Except when the user has the AuthorizedSensitiveData > All authorization (the conceptual effect is sketched below)
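Conceptually, for a user without the AuthorizedSensitiveData > All authorization, this policy behaves as if every column tagged Discovered.PII were rewritten to NULL. The following Trino-style query is a hypothetical sketch of that effect on one table; authorized users would see the original values, and Immuta applies the rewrite automatically rather than through a handwritten query:

-- Hypothetical sketch of the policy's effect for an unauthorized user (not Immuta's internal mechanism)
SELECT
  c_customer_sk,                               -- non-PII columns pass through unchanged
  CAST(NULL AS varchar) AS c_email_address,    -- columns tagged Discovered.PII appear as NULL
  CAST(NULL AS varchar) AS c_last_name
FROM customer;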
Policy Changes – Ranger:
- Created override group
- Created masking policies with override exception for all 54 manually-identified PII columns in the database
Policy Changes – Ranger+Atlas:
- Synchronized the PII tag from Atlas. Apache Ranger does not have built-in data discovery.
- Created override group
- Created a new masking policy
- Added an Allow condition for authorized users.
- Added a Deny condition for “*”
- Added a Deny exception for override.
Basic Scenario 1c: Allow Email Domains Through the Masking Policy
Even though email addresses were tagged as PII, users need to see a customer’s email domain.
Requirement:
- The domain portion of the email address must be unmasked.
Policy Changes – Immuta:
Email addresses were auto-discovered by Immuta and had two tags applied: Discovered.PII and Discovered.Entity.Email Address.
- Added a nested rule to the policy from scenario 1b to mask data auto-discovered as email:
- Masked all data tagged Discovered.Entity.Email using a regular expression: find ^(.*)@(.*)$ and replace with ^XXXX@(.*)$.
- Except for users with attribute AuthorizedSensitiveData > All
Note that an Immuta policy applies the more specific (i.e., more deeply nested) rule to a column first. Therefore, writing this policy on Email Address simply extends the original layer of protection. Ranger + Atlas does not allow nesting of tags and instead relies on the “ordering” of policy rules.
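For reference, the intended effect of the email mask can be expressed as a Trino-style expression. This is a hypothetical sketch of the transformation, not the syntax either product uses internally:

-- Keep the domain, mask the local part of the address
SELECT regexp_replace(c_email_address, '^(.*)@(.*)$', 'XXXX@$2') AS c_email_address
FROM customer;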
Policy Changes – Ranger:
- Followed the same steps as in Scenario 1b to create a new masking policy covering the two additional email columns
Policy Changes – Ranger+Atlas:
- Followed the same steps in Scenario 1b to create a new masking policy and applied a different mask rule to email columns
Be aware that Ranger does not have a regular expression masking option; it can instead show only the last four characters of the string, which did not perfectly fulfill the requirement.
Basic Scenario 1d: Grant Two Users Access to All PII Data
Two new users with PII override privileges have been assigned to the team (but they are in a different AD group) and need to be authorized to access the unrestricted PII data.
Requirement:
- Allow the new users access to PII data, regardless of their AD group.
Policy Changes – Immuta:
- Added the AuthorizedSensitiveData > All attribute to each user in the Immuta UI.
Policy Changes – Ranger:
- Added the two new users to the override group
- Modified the two existing masking policies to include the exception user group
Row-Level Scenario 2a: Share Data With Managers
The central office needs to share store performance data with sales, broken out by store. However, they do not want store managers to see the performance of their peers.
Requirements:
- All 50 store managers may access their store sales.
- Store managers cannot see data of other managers’ stores.
- All central office personnel may access all sales
Policy Changes – Immuta:
The AD store groups were mapped to an AuthorizedStores attribute (because an individual can be associated with more than one store, we set this up using an attribute rather than a group permission). As Immuta ingested the AD groups, it mapped each “store-id” group to the corresponding AuthorizedStores value, and we added the “store_key” tag to the “store_sales.ss_store_sk” column. Users in the “central-office” group automatically receive the “AuthorizedStores>All” value.
- Wrote a policy that shows only rows where the user has an AuthorizedStores attribute value matching the column tagged “store_key”, except where the user’s AuthorizedStores attribute contains “All” (the conceptual effect is sketched below)
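Conceptually, for a store manager authorized for stores 1, 7, and 12 (hypothetical values), the policy behaves like the filter below. Immuta substitutes the user’s attribute values at query time, and users holding AuthorizedStores > All are exempted and see every row:

-- Hypothetical sketch of the enforced filter for one user (values come from the AuthorizedStores attribute)
SELECT *
FROM store_sales
WHERE ss_store_sk IN (1, 7, 12);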
Policy Changes – Ranger:
- Created 50 store groups
- Created a new row-based policy with 51 row filter conditions (one static condition per group; an example is shown below)
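By contrast, each of Ranger’s 51 conditions is a static predicate hard-coded to a single group. The example below uses a hypothetical store key; in Ranger only the WHERE predicate is entered as the row filter, and the SELECT wrapper is shown here just for illustration:

-- Row filter for the store-12 group (49 more like it, plus an unrestricted filter for central-office)
SELECT *
FROM store_sales
WHERE ss_store_sk = 12;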
Row-Level Scenario 2b: Merging Groups
A store manager has recently departed from the company, and her store will be managed by another store manager in the interim period.
Requirements:
- Same as Scenario 2a, with the interim manager also able to access the departed manager’s store
Policy Changes – Immuta:
We added the interim store manager to the appropriate Active Directory group and granted the user the new store authorization via the Immuta UI. No policy changes were required.
Policy Changes – Ranger:
- Added the user to the appropriate store group in the policy
Row-Level Scenario 2c: Share Additional Data With Managers
The analytics team has added a second tab to the store dashboard showing employee satisfaction. As in Scenario 2a, each manager should only have access to their store or region’s records. However, they do not want managers to see the performance of their peers.
Requirements:
- Same as Scenario 2a, but applied to all three tables (store, store sales, and store returns)
Policy Changes – Immuta:
Added the “store_key” tag to “store.s_store_sk” and “store_returns.sr_store_sk” columns. No policy changes needed.
Policy Changes – Ranger:
- Created two new row-based policies, each with 51 row filter conditions (one per store group plus central office)
Row-Level Scenario 2d: Reorganize Managers Into Regions
The company underwent an organizational restructuring, adding a new layer of “regional managers” with authorization over multiple stores.
Requirements:
- All 50 store managers can access their store sales, store returns, and store metadata.
- All 10 regional managers can access their region’s sales, store returns, and store metadata.
- All central office personnel can access all store sales, store returns, and store metadata.
Policy Changes – Immuta:
We mapped the Region AD groups to the AuthorizedStores attribute. No policy changes needed.
Policy Changes – Ranger:
- Created 10 region groups
- Added 10 additional row filter conditions to each of the three row-level policies
Row-Level Scenario 2e: Restrict Data Access to Specific Countries
The company expanded into eight new countries. A new data governance policy prohibits users from seeing record-level information on individuals in countries outside of their own country (unless explicitly authorized). A new AD group has been added to individual users indicating their country.
Requirements:
- Employees can only see customer, catalog sales, catalog returns, store sales, store returns, web sales, and web returns (seven tables) if the customer country column equals the user’s country AD group.
- All previous scenario requirements must be met
- Central office personnel may access all country data
Policy Changes – Immuta:
We mapped the country AD groups to an AuthorizedCountries attribute. For example, users in the “country-us” group were mapped to “AuthorizedCountries>US”. Central office personnel were assigned the “AuthorizedCountries>All” attribute.
- Wrote a new SQL-based row-level global policy (for everyone except when user possesses attribute AuthorizedCountries>All) that read:
select c_customer_sk from public.customer where c_birth_country in (@attributes('AuthorizedCountries'))
Then for all seven affected tables, we added the CustomerId tag to the relevant columns.
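Conceptually, on each tagged fact table the policy behaves like the filter below. This is a hypothetical rewrite using TPC-DS column names, with a literal country list standing in for the user’s @attributes('AuthorizedCountries') values:

-- Hypothetical sketch of the enforced filter on one of the seven tables
SELECT *
FROM catalog_sales
WHERE cs_bill_customer_sk IN (
  SELECT c_customer_sk
  FROM customer
  WHERE c_birth_country IN ('US')   -- stand-in for the user's authorized countries
);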
Policy Changes – Ranger:
- Created eight country groups
- Created one row-level policy with nine row-level filters (one per country group plus central-office)
- Created six row-level policies, one for each fact table, each with nine row-level filters (54 filter conditions in total)
Row-Level Scenario 2f: Grant New User Group Access to All Rows by Default
A new global data and analytics team was established in the central office.
Requirements:
- All existing policies should be updated to allow this team to access all store records and all customer records.
Policy Changes – Immuta:
We mapped the new global analytics AD group to the existing attributes “AuthorizedStores>All” and “AuthorizedCountries>All”. No policy changes needed.
Policy Changes – Ranger:
- Created two new groups.
- Added two exception groups to 14 existing policies from above.
Row-Level Scenario 2g: Apply Policies to a Derived Data Mart
The global analytics team created a derivative data mart based on the original data warehouse. As they plan to expose these new tables to store management, all policies need to be applied to the data mart as well. The data mart included:
- Five tables with store level data only
- Five tables with customer data only
- Five tables with customer and store level data
Requirements:
- All existing policies should be applied to the derived data mart.
Policy Changes – Immuta:
All relevant columns in the data mart were tagged with the existing “store_key” and “CustomerId” tags. No policy changes needed.
Policy Changes – Ranger:
- Modified 10 existing store policies and added 10 tables to each
- Modified four existing customer policies and added 10 tables to each
Unlike Immuta, Ranger is unable to use tags to determine where row-level policies are applied. This is why Atlas was unable to support this use case, though it could support column masking of PII.
Advanced Scenario 3a: “AND” Logic Policy
The data team has uploaded employee data to the data warehouse, which is considered both Personal and Business Confidential. Groups for “May access personal data” and “May access sensitive data” already exist.
Requirements:
- Only users in both these groups should have access to the new human resources data.
Policy Changes – Immuta:
- Wrote a new policy that masks all columns (making them appear NULL) for everyone except users who are members of both the “May access personal data” and “May access sensitive data” groups (the conceptual effect is sketched below)
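Conceptually, the policy nullifies every column unless the querying user belongs to both groups. The sketch below illustrates that effect on a hypothetical HR table; the table, its columns, and the boolean flags standing in for Immuta’s group-membership checks are all assumptions for illustration:

-- Hypothetical sketch: columns return NULL unless the user is in BOTH groups
SELECT
  CASE WHEN m.in_personal AND m.in_sensitive THEN e.employee_name ELSE NULL END AS employee_name,
  CASE WHEN m.in_personal AND m.in_sensitive THEN e.salary        ELSE NULL END AS salary
FROM hr_employees e
CROSS JOIN (VALUES (true, true)) AS m(in_personal, in_sensitive);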
Policy Changes – Ranger:
Ranger does not support AND logic in its policies. This would require creating a new role and leveraging that role in a new table policy.
Advanced Scenario 3b: Conditional Policies
The global analytics team may not use the country of birth for minors under 16 years old, based on a data use agreement with a regulatory agency.
Requirements:
- In the customer table, mask the c_birth_country where a birth date indicates the customer is less than 16 years old.
Policy Changes – Immuta:
- Wrote a new conditional masking policy that masks c_birth_country when the birth date indicates the customer is less than 16 years old
Policy Changes – Ranger:
Cannot be implemented in Ranger as is (Ranger has no date functions)—would need to create a view of the table with age pre-calculated, and then rewrite all four customer policies to use the view instead of the table.
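For illustration, such a workaround view might look like the Trino-style sketch below. The view name and the approximate age calculation (based only on birth year) are assumptions, not part of the tested configuration:

-- Hypothetical workaround view with age pre-calculated so a static policy can reference it
CREATE VIEW customer_with_age AS
SELECT
  c.*,
  year(current_date) - c.c_birth_year AS approx_age
FROM customer c;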
Advanced Scenario 3c: Minimization Policies
To limit expensive queries that rapidly escalate compute cost in the cloud, the finance team requires a policy to limit data access to 25% of data in a table for all users. As more consumers are accessing data with different tools, there is limited control of what queries are being generated.
Requirements:
- Only show a randomly selected 25% of records in the store_sales table.
Policy Changes – Immuta:
- Wrote a new minimize data source policy limiting results to 25% of the records
Policy Changes – Ranger:
Cannot be implemented in Ranger as is (Ranger has no random, date filter, or limit functions); we would need to create a view of the table with random sampling applied, and then write a policy to use the view instead of the table.
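A hypothetical workaround view for Ranger might apply the sampling directly, as in the Trino-style sketch below. Note that the sample is approximate and re-drawn on every query, which may or may not satisfy the finance team’s intent:

-- Hypothetical workaround view returning roughly 25% of rows at random
CREATE VIEW store_sales_sample AS
SELECT *
FROM store_sales TABLESAMPLE BERNOULLI (25);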
Advanced Scenario 3d: De-Identification Policies
The legal team had concerns about insider attacks on customer data and requires guarantees against linkage attacks. For this policy, we needed to use the k-anonymity of the customer table, which is defined as the number of records within the least populated cohort. Thus, the quasi-identifiers (QI) of any single record cannot be distinguished from at least k other records. In this way, a record with QIs could not be uniquely associated with any one individual in a data source, provided k was greater than 1.
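To make the k-anonymity condition concrete, the query below is a hypothetical helper (not part of either product’s policy engine) that lists any cohorts of size four or fewer for one QI combination; any rows returned are violations of a k > 4 requirement:

-- Cohorts of the quasi-identifiers that are too small to satisfy k > 4
SELECT c_birth_country, c_birth_year, count(*) AS cohort_size
FROM customer
GROUP BY c_birth_country, c_birth_year
HAVING count(*) <= 4;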
Requirements:
- For any combination of quasi-identifiers (QI) such as c_birth_country and c_birth_year in the customer table, mask those values using k-anonymity such that the least populated cohort is greater than 4.
Policy Changes – Immuta:
- Wrote a new Dynamic K-anonymization Policy without any coding.1
Policy Changes – Ranger:
Cannot be implemented in Ranger as is. In fact, this may not be feasible even by modifying the database.
1See https://www.immuta.com/articles/how-to-anonymize-employee-data-using-databricks-spark/ for more details.
5. Conclusion
The chart in Figure 2 reveals the difference in cumulative policy changes required for each platform configuration.
Figure 2. Cumulative Policy Changes
Ranger alone required 75x as many cumulative policy changes, and Ranger with Apache Atlas 63x as many, as Immuta to accomplish the same data security objectives in our scenarios.
For the advanced use cases, Immuta only required one policy change each, while Ranger was not able to fulfill the data security requirement at all (Figure 3).
Figure 3. Advanced Data Security Scenario Policy Changes
Policy management burden has bottom-line cost implications as well. Consider the following downstream costs of using Immuta or Ranger for the scenario we constructed, informed by our field experience with these types of projects.
First, consider the cost of employees’ time and how long it takes to approve and modify a policy. From those figures, we can calculate a cost for each policy change and, applying the policy change counts from our scenarios above, a total cost.
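The structure of that calculation can be sketched as follows. The figures below are hypothetical placeholders for illustration only, not the values used in the report:

-- All four inputs are illustrative assumptions, not measured values
SELECT
  analysts * hourly_rate * approval_hours                   AS lost_time_cost_per_policy_change,
  analysts * hourly_rate * approval_hours * policy_changes  AS total_lost_time_cost
FROM (VALUES (10, 100.00, 1.0, 500))
     AS assumptions(analysts, hourly_rate, approval_hours, policy_changes);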
These are very conservative estimates. If it takes your organization more than one hour (say one day) to approve and implement a policy change or if you have 100 analysts waiting, instead of 10, the lost time/opportunity costs would increase 8x to 10x, and our scenarios would imply costs over $4 million. You can see how this would add up over time. The cost analysis also does not include potential regulatory fines for mistakes or data leaks that can occur with a more complex system to manage (less scalability and evolvability).
Also consider that this is a relatively basic scenario consisting of multi-channel retail with SKU-based and order data. In a large, global enterprise with many additional sensitive datasets, not only would the number of analysts increase, but so would the sheer number of policies and policy changes, given the greater complexity of the data.
The chart in Figure 4 summarizes the above calculations, showing how the policy management burden expands and lost opportunity/time costs increase as more and more employees wait on data, given the relative complexity of using Ranger versus Immuta in our scenarios.
Figure 4. Policy Burden Costs
This study exposed the limitations of extending legacy Hadoop security components into cloud use cases. Apache Ranger uses static policies in a role-based access control model with very limited support for attributes. The difference between it and Immuta became clear. By leveraging dynamic variables, nested attributes, and global row-level security policies, Immuta can be implemented and updated quickly in comparison with Ranger.
Using Ranger as a data security mechanism on cloud data workloads creates an inordinate policy management burden compared to Immuta, which our results show provides scalability, clarity, and evolvability for a complex enterprise’s data security and governance needs.
6. Disclaimer
GigaOm runs all of its assessments and tests to strict ethical standards. The results of the report are the objective results of the application of a scoring methodology using the same rubric across all the competitive platforms.
This is a sponsored report. Immuta chose the competitor. GigaOm developed the methodology and scoring. While we made every effort to remove subjectivity from our assessment, certain criteria are inherently subject to judgment. We have attempted to describe our decisions in this paper.
7. About Immuta
Immuta is the universal cloud data access control platform, providing data engineering and operations teams one platform to control access to analytical data sets in the cloud. Only Immuta can automate access control for any data, on any cloud service, across all compute infrastructure. Data-driven organizations around the world rely on Immuta to speed time to data, safely share more data with more users, and mitigate the risk of data leaks and breaches. Founded in 2015, Immuta is headquartered in Boston, MA. Learn more at www.immuta.com.
8. About William McKnight
William McKnight is a former Fortune 50 technology executive and database engineer. An Ernst & Young Entrepreneur of the Year finalist and frequent best practices judge, he helps enterprise clients with action plans, architectures, strategies, and technology tools to manage information.
Currently, William is an analyst for GigaOm Research who takes corporate information and turns it into a bottom-line-enhancing asset. He has worked with Dong Energy, France Telecom, Pfizer, Samba Bank, ScotiaBank, Teva Pharmaceuticals, and Verizon, among many others. William focuses on delivering business value and solving business problems utilizing proven approaches in information management.
9. About Jake Dolezal
Jake Dolezal is a contributing analyst at GigaOm. He has two decades of experience in the information management field, with expertise in analytics, data warehousing, master data management, data governance, business intelligence, statistics, data modeling and integration, and visualization. Jake has solved technical problems across a broad range of industries, including healthcare, education, government, manufacturing, engineering, hospitality, and restaurants. He has a doctorate in information management from Syracuse University.
10. About GigaOm
GigaOm provides technical, operational, and business advice for IT’s strategic digital enterprise and business initiatives. Enterprise business leaders, CIOs, and technology organizations partner with GigaOm for practical, actionable, strategic, and visionary advice for modernizing and transforming their business. GigaOm’s advice empowers enterprises to successfully compete in an increasingly complicated business atmosphere that requires a solid understanding of constantly changing customer demands.
GigaOm works directly with enterprises both inside and outside of the IT organization to apply proven research and methodologies designed to avoid pitfalls and roadblocks while balancing risk and innovation. Research methodologies include but are not limited to adoption and benchmarking surveys, use cases, interviews, ROI/TCO, market landscapes, strategic trends, and technical benchmarks. Our analysts possess 20+ years of experience advising a spectrum of clients from early adopters to mainstream enterprises.
GigaOm’s perspective is that of the unbiased enterprise practitioner. Through this perspective, GigaOm connects with engaged and loyal subscribers on a deep and meaningful level.
11. Copyright
© Knowingly, Inc. 2021. "Cloud Data Security" is a trademark of Knowingly, Inc. For permission to reproduce this report, please contact sales@gigaom.com.