Global financial plumbing is cracking under unprecedented pressure. The world is moving from nightly batch processing to real-time, expecting money to move as instantly as a text message. Yet, beneath the sleek mobile interfaces of modern fintech, the foundations remain fundamentally unchanged… and rapidly running out of breath.
As digital economies scale, the sheer velocity of money is exposing a critical vulnerability. “In the past 10 years, real-time payment volumes have surged up to 10,000x in some markets,” observes Joris Portegies Zwart, CEO of Ximedes. “This is pushing both mainframe and distributed legacy systems toward their architectural limits.”
The culprit is not a lack of computing power, but an impedance mismatch in database design: when engineers force modern transaction volumes through general-purpose databases, they hit a hard wall.
“The problem is that the existing general-purpose infrastructure fails to power increasing transaction processing volumes,” says Joran Dirk Greef, the creator and CEO of TigerBeetle. “We see companies break down when they try to use general-purpose string databases for transaction processing, at least at any reasonable scale, because moving money is primarily about numbers and counting.”
A Problem With a Name
This wall has a name… the “hot key” problem.
General-purpose databases like Postgres are more like Word than Excel—good for variable-length information such as users’ names and addresses, but not explicitly designed to count integers at speed. When multiple transactions hit the same account simultaneously, as they do during peak times, contention locks up the system. The result is a hard ceiling, which can be estimated via Amdahl’s Law, of anywhere between 100 and 1,000 transactions per second per account, depending on the degree of contention.
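The arithmetic behind that ceiling can be sketched in a few lines. The numbers below are hypothetical, chosen only to illustrate how lock-hold time on a single hot account bounds throughput no matter how much hardware is added:

```python
# Illustrative sketch: Amdahl's Law applied to a hot account.
# Numbers are hypothetical, not measurements of any specific database.

def amdahl_speedup(serial_fraction: float, workers: int) -> float:
    """Maximum speedup when serial_fraction of each transaction
    must run under the hot account's row lock."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / workers)

def throughput_ceiling(lock_hold_ms: float) -> float:
    """If every transaction on one account holds its lock for
    lock_hold_ms, no amount of hardware exceeds this rate."""
    return 1000.0 / lock_hold_ms  # transactions per second per account

# A 1 ms lock hold caps a single hot account at 1,000 tps;
# a 10 ms hold (fsync, network round trips) caps it at 100 tps.
print(throughput_ceiling(1.0))   # 1000.0
print(throughput_ceiling(10.0))  # 100.0

# Even 1,000 workers barely help when 10% of the work is serialised:
print(round(amdahl_speedup(0.1, 1_000), 1))  # ~9.9x
```

This is why the ceiling is per account: the serialised section is the lock on the hot row itself.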
For a startup processing a few hundred payments a day, this ceiling is invisible. But for a processor handling millions, it becomes the theoretical limit that the entire architecture eventually runs into. In the past, companies would respond by sharding their data, layering workarounds, and accepting complexity as the price of scale, reconciling themselves to batching and delayed processing. However, present demand for real-time processing renders these compromises untenable.
The Case for Purpose-Built Infrastructure
TigerBeetle was built on a different premise: that financial transactions require a fundamentally different kind of database, one designed from the ground up for counting. Where Postgres is the general-purpose filing cabinet, remaining flexible and indispensable for customer information, TigerBeetle becomes the bank vault, purpose-built to record immutable integers under the strict rules of double-entry accounting.
The architecture reflects this philosophy at every level. Rather than build on top of existing database primitives, TigerBeetle enforces debit-credit consistency and immutability in the database layer itself. This eliminates an entire class of reconciliation errors that plague systems relying on application-layer logic to enforce financial constraints.
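The invariant being enforced is simple to state: every transfer debits one account and credits another by the same amount, so total debits always equal total credits. The sketch below uses hypothetical types to show what it means for the database layer, rather than the application, to enforce this; it is not TigerBeetle’s actual API:

```python
# Minimal sketch of the double-entry invariant a ledger database can
# enforce natively. Hypothetical types; not TigerBeetle's actual API.
from dataclasses import dataclass, field

@dataclass
class Account:
    id: int
    debits_posted: int = 0   # integer minor units, never floats
    credits_posted: int = 0

@dataclass
class Ledger:
    accounts: dict = field(default_factory=dict)
    transfers: list = field(default_factory=list)  # immutable, append-only

    def transfer(self, debit_id: int, credit_id: int, amount: int) -> None:
        # Invariants checked in the "database", not the application:
        assert amount > 0, "amounts are positive integers"
        assert debit_id != credit_id, "a transfer links two distinct accounts"
        debit, credit = self.accounts[debit_id], self.accounts[credit_id]
        debit.debits_posted += amount
        credit.credits_posted += amount
        self.transfers.append((debit_id, credit_id, amount))
        # Double-entry: total debits always equal total credits.
        assert sum(a.debits_posted for a in self.accounts.values()) == \
               sum(a.credits_posted for a in self.accounts.values())

ledger = Ledger()
ledger.accounts = {1: Account(1), 2: Account(2)}
ledger.transfer(debit_id=1, credit_id=2, amount=100)
```

Because the constraint lives inside the transfer primitive itself, no application code path can create an unbalanced entry, which is the class of reconciliation error the article describes.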
The performance numbers are staggering. For example, the Bill and Melinda Gates Foundation, developing Mojaloop, a complex central bank switch involving more than 600 dependencies in a microservices architecture, found that the general-purpose database was the Achilles’ heel of the entire system. After the system was augmented with TigerBeetle, just for the number-crunching, throughput jumped from 78 transactions per second to over 2,000: a 25x improvement, without touching a single microservice.
Safety First
But speed alone is not enough. What makes TigerBeetle’s architecture notable is that it sets a higher bar for correctness.
“It means that the database now survives things like disk corruption or misdirection of data, even firmware and filesystem bugs, not only making distributed backups across datacenters, but actually automatically testing the backups and self-healing them, a capability pioneered by TigerBeetle when it implemented 2018 research, Protocol-Aware Recovery for Consensus-Based Storage,” says Greef.
The engineering approach further draws on NASA’s Power of Ten Rules for Safety-Critical Code, applying defence-in-depth to every layer of the system. There’s not only code to make the database run, but a whole second layer of code to check the first as it runs. Where some databases rely on the application or users to report latent bugs in the software, TigerBeetle includes over 10,000 internal assertions, or tripwires, dedicated entirely to verifying that the first layer is functioning correctly.
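The tripwire idea can be illustrated with a toy example: a fast incremental code path paired with a redundant check that re-derives the same state independently. This is only a sketch of the pattern; the real system’s assertions live throughout its storage and consensus code:

```python
# Sketch of defence-in-depth via runtime assertions: a second layer of
# code that independently re-derives state and cross-checks the first.
# Illustrative pattern only, not TigerBeetle source.

def apply_transfers(balance: int, transfers: list[int]) -> int:
    # First layer: the fast incremental path.
    running = balance
    for amount in transfers:
        running += amount
        # Tripwire: a balance must never go negative mid-stream here.
        assert running >= 0, "balance invariant violated"
    # Second layer: recompute from scratch and cross-check the result.
    assert running == balance + sum(transfers), \
        "incremental path disagrees with recomputation"
    return running

print(apply_transfers(1_000, [250, -100, 50]))  # 1200
```

If a latent bug (or a flipped bit) ever makes the two layers disagree, the process halts at the assertion instead of silently persisting a wrong balance.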
Latency, too, is treated differently. Many systems optimise only for the 99th percentile. TigerBeetle targets the P100, meaning that if you process 10,000 transactions per second, every single one completes in under 100 milliseconds, even in the absolute worst case. Predictable, repeatable, enjoyable.
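The difference between P99 and P100 is easy to see with a nearest-rank percentile over a hypothetical latency sample, where 1% of requests are slow stragglers:

```python
# Why P100 is a stricter target than P99. Hypothetical latencies;
# nearest-rank percentile: smallest value covering p% of samples.
import math

def percentile(samples: list[float], p: float) -> float:
    s = sorted(samples)
    rank = max(1, math.ceil(p / 100 * len(s)))
    return s[rank - 1]

# 9,900 fast requests plus 100 slow stragglers.
latencies_ms = [5.0] * 9_900 + [500.0] * 100

print(percentile(latencies_ms, 99))   # 5.0   -- P99 hides the stragglers
print(percentile(latencies_ms, 100))  # 500.0 -- P100 is the true worst case
```

A P99 target lets 100 of those 10,000 requests be arbitrarily slow; a P100 target bounds every one of them.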
To stress-test the system before it encounters real-world failures, the team runs TigerBeetle inside a deterministic simulator that compresses 2,000 years of runtime into every day, running across a fleet of 1,000 dedicated CPUs hosted in Finland for efficient cooling. The result is a system that, in Greef’s words, is “pre-baked” against failure modes operators will likely never encounter in production.
Where It Fits
For Portegies Zwart and the team at Ximedes, the strategic question is not whether TigerBeetle is impressive. It is: where does a component like this fit within the existing payment architecture, and what new capabilities will it unlock?
The answer is less a rip-and-replace story than a surgical one. TigerBeetle does not position itself as a replacement for the existing database in the stack. Rather, it sits alongside the general-purpose database, handling the workload that general-purpose systems were never designed for: the relentless, high-velocity counting at the core of every financial transaction.
This positioning makes adoption more accessible than it might at first appear. Greef points to companies that completed integration in 48 hours to 45 days, compared to the one to two years typically required to build reliable double-entry accounting over Postgres. “It’s just easier to get up and stay running,” he says simply.
The enterprise use cases are scaling just as fast. Today, the database powers operations at companies processing from hundreds of millions to billions of transactions per month, and is being integrated into the Gates Foundation’s Mojaloop pilots across 20 countries. Yet the same architecture serves startups tracking everything from energy and AI usage to gaming and commercial property. Whatever needs counting, TigerBeetle accelerates time to market.
The Infrastructure Question Nobody Is Asking
The payments industry has spent years asking how to move money faster. The more pointed question, as Portegies Zwart frames it, is whether the foundations beneath the system were built for the world that is now here.
Ximedes is a partner of Future of Payments NL.
Want to know more about this topic? On 26 March 2026, Joris Portegies Zwart will hold the session “Process your transactions 1000× faster: why legacy databases are hitting hard limits” at Future of Payments NL. Visit the website for more information and tickets.