BARC Benchmark:
Methodology and Scoring
Decide faster, and with greater confidence,
which software solution best fits your company.
About the BARC Benchmark
The BARC Benchmark (BB) is a structured methodology for objectively assessing business intelligence (BI) and corporate performance management (CPM) software. The BARC Benchmark Score (BBS) is a comparative metric calculated from standardized, automated measurements of real user interactions. It consists of two equally weighted sub-scores: Productivity and Scalability.
Black-box testing
Observable behavior and outcomes only — no inspection of internal code or configuration.
Unified data model
Same semantics and numbers across tools via the BARC BI Reference Data Model.
BARC-validated results
All analysis and scoring logic is developed and executed exclusively by BARC.
Real-world enterprise scenarios
Real data volumes, model complexity and concurrent user loads.
Deterministic scripts
Identical interaction sequences across platforms, executed by bots.
Vendor-neutral setup
Benchmark setups are built by certified partners with long-standing expertise, not by the vendors themselves.
How the process works
- Two Sub-Scores, One Total: The BARC Benchmark Score is an equal combination of two sub-scores: Productivity (50%) and Scalability (50%).
- Baseline 100: The best result from the initial benchmark round establishes the baseline score of 100. All other results are measured against this baseline.
- Scores Can Exceed 100: As technology improves, scores can surpass the 100-point baseline. A score above 100 demonstrates measurable, real-world progress over the original benchmark.
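The scoring arithmetic described above can be sketched as follows. This is a minimal illustration, assuming higher raw measurements are better; the function name, signature and sample values are illustrative assumptions, not part of the official BARC methodology.

```python
def benchmark_score(productivity_raw, scalability_raw,
                    productivity_baseline, scalability_baseline):
    """Illustrative BARC-style scoring: each sub-score is the raw result
    relative to the best result of the initial benchmark round
    (baseline = 100), and the total is the equal-weighted (50/50)
    combination of the two sub-scores."""
    productivity = 100 * productivity_raw / productivity_baseline
    scalability = 100 * scalability_raw / scalability_baseline
    total = 0.5 * productivity + 0.5 * scalability
    return productivity, scalability, total

# Hypothetical example: a newer tool exceeds the original productivity
# baseline, so its score surpasses 100 -- by design.
p, s, t = benchmark_score(productivity_raw=120, scalability_raw=90,
                          productivity_baseline=100, scalability_baseline=100)
print(round(p), round(s), round(t))  # prints: 120 90 105
```

Because the baseline is fixed at the best result of the initial round, later results above 100 directly express measurable progress over that original benchmark.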
BARC Benchmark Total Score
- BARC Productivity Score (weighting: 50%): Measures how quickly and efficiently users can complete real-world BI or CPM tasks.
- BARC Scalability Score (weighting: 50%): Evaluates system performance under increasing data volumes, user load and solution complexity.
BARC BI Reference Data Model
Fairness is the foundation of a trustworthy benchmark. Comparing different systems on different data leads to meaningless results.
To ensure a level playing field, BARC developed the BARC BI Reference Data Model and the BARC Data Model Generator. These tools guarantee that every benchmarked system is tested against exactly the same data structure and volume. The model simulates complex scenarios across core domains such as sales and finance.*
*Marked in grey: planned scope of the BARC Data Model and Data Model Generator
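To illustrate how identical test data can be supplied to every system, a seeded generator always emits the same rows on every run. The toy sales schema and the `generate_sales_rows` function below are simplified assumptions for illustration, not the actual BARC Data Model Generator.

```python
import random

def generate_sales_rows(n_rows, seed=42):
    """Toy deterministic data generator: the fixed seed guarantees that
    every benchmarked system receives identical input data, regardless
    of when or where the generator is run."""
    rng = random.Random(seed)
    products = ["P-001", "P-002", "P-003"]
    customers = ["C-100", "C-200"]
    rows = []
    for i in range(n_rows):
        rows.append({
            "order_id": i,
            "product": rng.choice(products),
            "customer": rng.choice(customers),
            "quantity": rng.randint(1, 50),
            "list_price": round(rng.uniform(10.0, 500.0), 2),
        })
    return rows

# Two independent runs with the same seed produce identical data.
assert generate_sales_rows(1000) == generate_sales_rows(1000)
```

Determinism of this kind is what makes black-box results comparable: any difference in measured behavior stems from the tool under test, never from the data.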
Sales
Business Objects:
- Sales
- Products
- Customers
Attributes and Business Structures:
- Product hierarchy
- Product attributes
- Customer attributes
- List price down to contribution margin
- Discount scales
Finance
Business Objects:
- P&L
- Balance Sheet
- Cash Flow
Attributes and Business Structures:
- Corporate hierarchy
- Currency conversion
- Actual figures
- Plans
- Forecast for P&L
- Reasonable P&L size (50-100 positions)
Users & Authorizations
Business Objects:
- Users
- Assignments
Attributes and Business Structures:
- User groups
- User-to-attribute assignments (e.g., entities, P&L lines)
Production & Supply Chain
Business Objects:
- Batches
- BOM
- Materials
- Suppliers
- Assignments
Attributes and Business Structures:
- Complex products
- Alternative materials, intern. suppliers
Marketing
Attributes and Business Structures:
- Campaigns
- Multichannel
HR
Attributes and Business Structures:
- Employee attributes
- Salary, compensation
The BARC Benchmark applied: Power BI vs. Qlik Sense
Practical Insights, Key Takeaways and Selection Guidance.