About The Data Fabric Survey
One of the most extensive data management surveys in the world. Learn what over 700 users think about data management products and uncover the latest trends that matter for your data management today.
The Data Fabric Survey, now in its seventh year, is a BARC research study focused on the data management tools market. Our research is primarily based on a major survey of 776 participants worldwide, and provides a wealth of user feedback on 19 of the leading data management solutions on the market today.
Our user survey covered issues ranging from the selection and purchase of software through to deployment and use, including questions about the success of software projects, the usability of each product and the challenges encountered.
The following link provides more detail on our survey methodology, the survey sample, and how we categorize and score data management tools:
[Key figures panel: respondents, countries, data management products, years of the survey]
Components of The Data Fabric Survey
The findings from The Data Fabric Survey 26 are published across several bite-size documents (see below). These documents do not need to be read in sequence. The Results and the Vendor Performance Summaries can be read independently.
BARC also provides the raw data via a web-based tool – The Data Fabric Survey Analyzer – enabling users to carry out their own analysis of the survey results.
The Results
An overview and analysis of the most important product-related findings and topical results from The Data Fabric Survey 26.
The Analyzer
Our powerful interactive online tool, enabling you to perform your own custom analysis of the full survey data set.
Vendor Performance Summary
A series of executive reports on each product featured in The Data Fabric Survey 26. Each report contains a vendor and product overview by BARC’s analyst team plus all the relevant product-related results from The Data Fabric Survey.
Sample & Methodology
Much of the value of The Data Fabric Survey lies in the large number and distribution of survey responses. With a sample of 776 responses, it is among the largest independent surveys into this topic in the world.
Overall, our product categorization concept enables a more precise allocation of products to their respective functional usage areas, allowing for better comparability between specific product groups. In addition to this differentiation, we have also created peer groups to provide a more detailed analysis of certain functional product categories. For further information, please refer to the “Peer groups” section below.
The KPI calculation is performed across all products and is therefore independent of the peer group allocation. For further information on KPI calculation, please take a look at chapters “Overview of the key calculations in The Data Fabric Survey 26” and “Understanding the KPIs” in our “Sample & Methodology” PDF.
This section describes the characteristics of the people who took part in the study, including information on the type, company size and industry sector of participants.
Sample size and make-up
Many thousands of people were invited to take part in The Data Fabric Survey 26 using a range of media.
The table summarizes the online data collected, including the number of responses removed.
Our data cleansing rules are thorough and involve several different tests. All fraudulent or suspect data that purports to be from bona fide data management software users is removed.
The number of responses is divided between users, consultants and vendors. The questionnaire for vendors contains a different set of questions to those answered by users and consultants.
| Survey sample | Responses |
|---|---|
| Total responses | 776 |
| Removed during data cleansing | 74 |
| Total answering questions | 702 |

| Total responses analyzed | Responses |
|---|---|
| Users | 413 |
| Consultants | 149 |
| All users and consultants | 562 |
| Vendors/Resellers | 100 |
Organization sizes by headcount
Data management products are mostly found in mid-sized and large organizations, a fact reflected in the high percentage of responses we received from users in companies with more than 1,000 employees.
Participants from smaller companies (i.e., with fewer than 100 employees) formed the smallest group, at 18% of the total number of responses.
Vertical markets
The chart on the right shows the breakdown of survey responses by industry sector. It only includes respondents who answered product-related questions in the survey (i.e., users and consultants).
Manufacturing comes out on top – as it has in previous years – with 22% of the sample.
Data Management Products in The Data Fabric Survey
We require at least 19 user reviews for the survey results of any data management product to be analyzed in detail. 19 data management tools reached this threshold in this year’s Data Fabric Survey.
When grouping and describing the data management solutions featured in The Data Fabric Survey, we do not always follow the naming conventions the vendors use to the letter. The names we use are sometimes abbreviated and are not always the official product names used by the vendors at the time of publication.
We asked respondents explicitly about their experiences with products from a predefined list, with the option to nominate other products. This list is updated each year based on each product’s sample size in the previous year, as well as new entrants to the data management market.
Where respondents said they were using an ‘other’ product, but from the context it was clear that they were actually using one of the listed products, we reclassified their data accordingly.
The table to the right shows the data management products included in our detailed analysis.
| Product name | Respondents |
|---|---|
| 2150 Datavault Builder | 20 |
| Amazon Redshift | 19 |
| AnalyticsCreator | 25 |
| Databricks Data Intelligence Platform | 20 |
| dbt Cloud | 20 |
| dbt Core | 19 |
| Dremio | 21 |
| Exasol Analytics Engine (Exasol Cloud) | 20 |
| Google BigQuery | 20 |
| Informatica Intelligent Data Management Cloud | 20 |
| Microsoft Azure Data Factory | 24 |
| Microsoft Fabric | 45 |
| One Data | 20 |
| Qlik Data Integration | 22 |
| SAP BW/4HANA | 38 |
| SAS Data Engineering | 20 |
| Snowflake Platform | 21 |
| TimeXtender | 20 |
The Peer Groups
The Data Fabric Survey 26 features a range of different types of data management tools, so we use peer groups to help identify competing products. The groups are essential to allow fair and useful comparisons of products that are likely to compete.
The peer groups have been defined by BARC analysts using their experience and judgment, with segmentation based on usage scenario.
Peer groups are intended to help the reader understand which products are comparable and why findings vary between individual products. The groupings themselves make no judgment on the quality of the products. Most products appear in more than one peer group.
Data Platforms
Mainly SaaS platforms that provide integrated, end-to-end functionality to manage the complete data lifecycle – from data integration and processing to storage and governance – in order to deliver trusted data for a wide range of use cases such as business intelligence, self-service analytics, data science, and AI/ML applications.
- Amazon Redshift
- Databricks DI Platform
- Exasol Cloud
- Google BigQuery
- Informatica IDMC
- Microsoft Fabric
- SAP Datasphere
- Snowflake Platform
Global Data Platform Vendors
The world’s leading vendors in the Data Platform segment, whose solutions are marketed and used globally.
- Amazon Redshift
- Databricks DI Platform
- Google BigQuery
- Microsoft Fabric
Data Warehouses
Centralized databases optimized for analysis, which store historical and consolidated data from various sources to enable enterprise-wide business intelligence and reporting.
- Amazon Redshift
- Databricks DI Platform
- Exasol Cloud
- Google BigQuery
- Microsoft Fabric
- SAP BW/4HANA
- SAP Datasphere
- Snowflake Platform
Data Pipeline Tools
Tools and platforms for designing, orchestrating, automating, and managing data pipelines that transform source data into usable formats for operational, analytical and business purposes.
- 2150 Datavault Builder
- AnalyticsCreator
- dbt Cloud
- dbt Core
- Informatica IDMC
- Microsoft Azure Data Factory
- One Data
- Qlik Data Integration
- SAS Data Engineering
- TimeXtender
Data Warehouse Automation
Tools that accelerate the design, implementation, and operation of data warehouses through the automated, metadata-driven generation of data models and processes.
- 2150 Datavault Builder
- AnalyticsCreator
- dbt Cloud
- dbt Core
- Qlik Data Integration
- TimeXtender
ETL/ELT Tools
Specialized tools for the extract, transform, load (ETL/ELT) process to move and prepare data from various sources into central analytical target systems such as data warehouses or data platforms.
- dbt Cloud
- dbt Core
- Informatica IDMC
- Microsoft Azure Data Factory
- One Data
- Qlik Data Integration
- SAS Data Engineering
Cloud Data Warehouses
Data warehouse platforms provided as a cloud service, offering high scalability, flexibility, and a usage-based pricing model for storing and analyzing large volumes of data.
- Amazon Redshift
- Exasol Cloud
- Google BigQuery
- Microsoft Fabric
- SAP Datasphere
- Snowflake Platform
Global Data Engineering Vendors
The world’s leading vendors of data engineering solutions, whose products are marketed and used globally.
- dbt Cloud
- dbt Core
- Informatica IDMC
- Microsoft Azure Data Factory
- Qlik Data Integration
- SAS Data Engineering
The KPIs
The KPIs are designed to help the reader spot winners and losers in The Data Fabric Survey 26 using well-designed dashboards packed with concise information. There is a set of 20 normalized KPIs (which we refer to as ‘root’ KPIs) for each of the 19 products, as well as 4 aggregated KPIs, each combining several ‘root’ KPIs.
A set of KPIs has been calculated for each of the eight peer groups. The values are normalized according to the whole sample.
The KPIs all follow these simple rules:
- Only measures that have a clear good/bad trend are used as the basis for KPIs.
- KPIs may be based on one or more measures from The Data Fabric Survey.
- Only products with samples of at least 15–20 (depending on the KPI) for each of the questions that feed into the KPI are included.
- Each KPI is measured on a scale from 0 (lowest possible value) to 10 (highest possible value).
- In some instances, adjustments are made to account for extreme outliers.
KPIs are only calculated if the sample size meets the minimum threshold for that KPI and if the KPI in question is applicable to the product.
| Aggregated KPIs | Root KPIs |
|---|---|
| Business Value | Business Benefits, Project Success, Project Length |
| Customer Satisfaction | Price to Value, Recommendation, User Support, Product Satisfaction, Sales Experience, Time to Market, Product Enhancement |
| User Experience | Functional Coverage, Ease of Use, Adaptability, Key User Support |
| Technical Foundation | Performance, Platform Reliability, Connectivity, Scalability, Ecosystem Integration, Data Security & Privacy |
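To make the scoring rules above concrete, the sketch below shows one way a 0–10 normalization across the whole sample and a root-to-aggregated KPI roll-up could work. This is an illustrative reconstruction only: the min-max scaling, the simple mean, and all product names and scores are assumptions for demonstration, not BARC’s published formula.

```python
# Illustrative sketch of KPI scaling and aggregation (assumption:
# min-max normalization across the whole sample; the actual BARC
# calculation is documented in the "Sample & Methodology" PDF).

def normalize_kpi(raw_scores):
    """Scale each product's raw score to 0-10 relative to the whole sample."""
    lo, hi = min(raw_scores.values()), max(raw_scores.values())
    span = (hi - lo) or 1  # guard against a zero range
    return {p: round(10 * (s - lo) / span, 1) for p, s in raw_scores.items()}

def aggregate_kpi(root_kpis):
    """Combine several root KPIs into one aggregated KPI (simple mean)."""
    return {p: round(sum(k[p] for k in root_kpis) / len(root_kpis), 1)
            for p in root_kpis[0]}

# Hypothetical raw survey measures for three fictitious products
price_to_value = normalize_kpi({"Product A": 7.2, "Product B": 5.1, "Product C": 8.9})
recommendation = normalize_kpi({"Product A": 6.0, "Product B": 4.5, "Product C": 9.0})
satisfaction = aggregate_kpi([price_to_value, recommendation])
```

Because normalization is relative to the whole sample, the best-scoring product on a measure lands at 10 and the worst at 0, which is consistent with the 0–10 scale described in the rules above.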