Data Sovereignty and Performance: A Test for European Cloud Providers 

U.S. cloud providers dominate the market, but their dominance raises growing concerns about data sovereignty. An independent performance test examines whether European cloud infrastructure offers a technically and economically viable alternative for enterprise workloads.

The U.S. CLOUD Act allows U.S. authorities to demand access to corporate data managed by U.S. companies, regardless of server location. Data stored in Frankfurt, Paris, or Amsterdam remains subject to U.S. law if providers like AWS, Microsoft, or Google run the environment.

This issue gained new urgency after the Trump administration dismissed three members of the Privacy and Civil Liberties Oversight Board (PCLOB) in January 2025. The PCLOB independently monitors whether the EU-U.S. Data Privacy Framework (DPF) complies with privacy requirements. This oversight is a key condition for lawful data transfers to the United States. 

The dismissals prevent the PCLOB from making decisions or performing oversight. While organizations can still transfer personal data from the European Economic Area (EEA) to the U.S., privacy experts are questioning the long-term viability of the DPF.

The gap between awareness and action

Growing concerns about U.S. data access are driving European organizations to prioritize data sovereignty. A recent BARC survey found that 84% of companies consider it strategically important, and 70% say it has become more important over the past two years.

Despite this awareness, many organizations hesitate to act. This reluctance often stems from perceptions that European cloud providers are too slow, expensive, or complex to compete with their U.S. counterparts. But is that actually true?

Performance test under real-world conditions 

BARC conducted an independent performance test of STACKIT, a German cloud provider owned by the Schwarz Group (which operates Lidl and Kaufland, among others). The test used STACKIT’s Dremio-based lakehouse solution with the BARC Data Generator to simulate a typical enterprise analytics workload.

The test environment included: 

  • 1 billion sales records 
  • A 12-dimensional star schema 
  • 150 attribute columns 
  • Complex aggregations and joins across multiple dimension tables (see the sketch after this list) 
  • Query execution partially without indexes 
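
To make the workload pattern concrete, the following toy sketch mirrors the kind of star-schema query described in the list above: a fact table joined to several dimension tables and then aggregated. The table and column names are invented for illustration; the actual test ran SQL against the Dremio-based lakehouse at a scale of one billion fact rows.

```python
# Toy illustration of the query pattern above: a fact table joined to several
# dimension tables and then aggregated. Names and values are invented; the real
# test ran SQL on a Dremio-based lakehouse with roughly one billion fact rows.
import pandas as pd

# Miniature fact table (sales) and two of the twelve dimensions.
sales = pd.DataFrame({
    "product_id": [1, 2, 1, 3],
    "store_id":   [10, 10, 20, 20],
    "revenue":    [19.99, 5.49, 19.99, 102.00],
})
products = pd.DataFrame({"product_id": [1, 2, 3],
                         "category":   ["toys", "food", "electronics"]})
stores = pd.DataFrame({"store_id": [10, 20],
                       "region":   ["DE", "FR"]})

# Join across the dimensions and aggregate, mirroring the star-schema queries in the test.
result = (sales
          .merge(products, on="product_id")
          .merge(stores, on="store_id")
          .groupby(["region", "category"], as_index=False)["revenue"]
          .sum())
print(result)
```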

The test demonstrated strong query performance. All queries completed in under one second, except for the initial query after a cluster start, which took 4.2 seconds. When loading 1.1 billion records into the Iceberg format, STACKIT processed over 4 million records per second, even with concurrent reads and writes to the same S3 bucket. This translates to an effective throughput of 3.6 Gbit/s. 
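
As a rough plausibility check, the reported figures can be related to each other directly; the short calculation below uses only the round numbers quoted above.

```python
# Back-of-the-envelope check of the reported load figures (round numbers from the test).
records = 1.1e9                  # records loaded into Iceberg format
rate = 4.0e6                     # records per second (reported lower bound)
throughput_gbit = 3.6            # reported effective throughput in Gbit/s

load_seconds = records / rate                  # ~275 s, i.e. under five minutes
bytes_per_second = throughput_gbit * 1e9 / 8   # ~450 MB/s
bytes_per_record = bytes_per_second / rate     # ~112 bytes per record on the wire

print(f"load time: ~{load_seconds / 60:.1f} min, "
      f"implied record size: ~{bytes_per_record:.0f} bytes")
```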

In this test, STACKIT’s S3 backend performance was comparable to hyperscale cloud providers. 

Cost structure and total cost of ownership

STACKIT offers an all-inclusive pricing model where the service price includes compute, storage, networking, and support. A key difference: STACKIT doesn’t charge egress fees for data transfers. 

This approach contrasts with the pricing models of most hyperscale providers. While their advertised entry-level rates may appear lower, the final cost is often higher due to additional fees for network traffic, data egress, API calls, and premium support. 

As a result, STACKIT’s transparent pricing can be more cost-effective for organizations with volatile workloads or significant data transfer needs. The simple pricing also makes costs more predictable, especially for cloud newcomers who struggle to forecast complex consumption-based billing. 
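
The effect is easy to see in a simple cost model. The rates below are made-up placeholders, not actual STACKIT or hyperscaler prices; the point is only how egress fees change the comparison once transfer volumes grow.

```python
# Illustrative monthly cost model. All rates are invented placeholders, not real prices.
def monthly_cost(compute_hours, storage_tb, egress_tb,
                 compute_rate, storage_rate, egress_rate):
    return (compute_hours * compute_rate
            + storage_tb * storage_rate
            + egress_tb * egress_rate)

# All-inclusive model: slightly higher headline rates, no egress charges.
all_inclusive = monthly_cost(720, 10, 50,
                             compute_rate=1.10, storage_rate=25.0, egress_rate=0.0)

# Pay-per-use model: lower headline rates, but every TB of egress is billed.
pay_per_use = monthly_cost(720, 10, 50,
                           compute_rate=0.95, storage_rate=22.0, egress_rate=80.0)

print(f"all-inclusive: {all_inclusive:,.0f} EUR, pay-per-use: {pay_per_use:,.0f} EUR")
```

With these illustrative numbers, the apparently cheaper pay-per-use rates end up several times more expensive once 50 TB of monthly egress is billed.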

Avoiding vendor lock-in with open standards

Vendor lock-in is a common challenge with hyperscale cloud providers. Building applications on proprietary services like AWS Lambda, Azure AI Foundry, or Google BigQuery ties organizations to a specific platform, making future migrations complex and costly.

STACKIT takes a different approach by building its services on open standards. The platform uses Apache Iceberg for its lakehouse format, S3-compatible APIs for object storage, and Kubernetes for container orchestration. This makes workloads portable across cloud providers without significant refactoring. 
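
In practice, portability follows from the fact that both the table format and the storage API are open. The following is a minimal sketch using the PyIceberg library (one common way to read Iceberg tables from Python, not part of the BARC test) and assuming a REST-style Iceberg catalog; the endpoint URLs, credentials, and table name are placeholders.

```python
# Minimal sketch: reading an Iceberg table via an S3-compatible endpoint with PyIceberg.
# Catalog name, endpoints, credentials, and table name are placeholders.
from pyiceberg.catalog import load_catalog

catalog = load_catalog(
    "lakehouse",
    **{
        "type": "rest",
        "uri": "https://catalog.example.eu/iceberg",   # REST catalog endpoint (placeholder)
        "s3.endpoint": "https://s3.example.eu",        # any S3-compatible object store
        "s3.access-key-id": "ACCESS_KEY",
        "s3.secret-access-key": "SECRET_KEY",
    },
)

# Because Iceberg and the S3 API are open standards, the same code works whether the
# bucket lives at STACKIT, a hyperscaler, or on-premises; only the endpoints change.
table = catalog.load_table("sales.fact_sales")         # placeholder namespace.table
sample = table.scan(limit=1_000).to_pandas()           # read a small sample
print(sample.head())
```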

The growing adoption of Apache Iceberg has made it an industry standard for open data lakehouse architecture. Major technology providers have integrated it into their platforms. Netflix, for example, manages over an exabyte of data across millions of Iceberg tables. Microsoft has shifted its strategic focus from Delta Lake to Iceberg, and even competitors like Databricks now offer read and write access to Iceberg tables. 

This widespread support does more than prevent vendor lock-in: open standards also provide the foundation for adopting new innovations quickly. 

European cloud providers are viable alternatives

European cloud providers now offer practical data sovereignty. BARC’s performance test of STACKIT and Dremio shows that European cloud infrastructure is competitive with hyperscale providers in both performance and cost-effectiveness.

While services for specialized areas like MLOps and advanced AI continue to expand, the core infrastructure is mature. For most mainstream analytics and data science use cases, these platforms are ready for production use. 

So it’s no longer a binary choice between hyperscalers and European providers. Organizations should match workloads to the right platform. Critical data can sit on sovereign infrastructure while less sensitive applications run on global platforms. 

Download the full research note

The full report includes: 

  • Technical deep dive: Cluster configurations, SQL queries, and scaling behavior 
  • Lakehouse architecture: Iceberg, Polaris, and Reflections explained 
  • AI integration: Dremio MCP Server and Semantic Layer overview 
  • Test methodology: Complete setup, data model, and limitations 
  • Cost comparison: STACKIT’s all-inclusive pricing versus hyperscaler pay-per-use models 


Author(s)

Senior Analyst Data & AI

Thomas is a BARC Fellow and Senior Analyst at BARC in the area of data & analytics.

For many years, as Director Data & Application Foundation and IT CTO at an international DAX-listed company, he built and expanded the areas of data management, analytics & AI, software development, API management, IoT, and data-driven business models.

He is the co-author of several studies and delivers lectures, seminars, and workshops with a strong practical focus. Thomas is also a sought-after coach for transformation projects related to data strategy, data culture, and the data-driven enterprise.

His focus is on modern data strategy, data lake(house)-based data management concepts, API-based process integration and in particular the definition, implementation and operationalization of data-driven business processes and business models.

In addition, Thomas is a recognized expert in the Python programming language and the professional use of Python in the enterprise. He also teaches Business Analytics and Industry 4.0 at the University of Applied Sciences in Düsseldorf.
