Designing Backpressure and Concurrency in QuorumBD Middleware

Building a distributed block storage system requires careful attention not only to replication and consistency, but also to the behaviour of the middleware layer that connects clients to the storage core. In QuorumBD, the middleware is intentionally designed to remain lightweight and deterministic, acting as a reliable bridge between block device clients and the storage cluster.

One of the key architectural topics in this layer is backpressure.

🚦 Backpressure is a feature, not a failure

In many systems, developers instinctively read incoming requests as quickly as possible and buffer them internally. In a storage middleware, this approach leads to unbounded memory usage and unstable behaviour under load.

QuorumBD takes the opposite approach: when internal limits are reached, the middleware deliberately stops reading requests from the client.

This behaviour propagates naturally through TCP. If the middleware stops reading:

  • the kernel receive buffer fills
  • the TCP receive window closes
  • the client pauses automatically

No custom protocol is required. The operating system’s networking stack provides the backpressure mechanism.

🪟 Window-based flow control

To ensure both fairness and memory safety, QuorumBD uses multiple flow-control windows inside the middleware:

  • Global Byte Window – limits the total memory footprint of in-flight operations
  • Per-Disk Inflight Window – prevents a single disk from monopolizing resources
  • Optional global inflight limits – configurable for different middleware implementations

Together these windows create a natural scheduling mechanism. Requests are only accepted when sufficient capacity exists to process them safely.
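One plausible shape for such windows is a counting gate, where admission must pass both the global byte budget and the per-disk inflight budget. The sketch below is an assumption about structure, not QuorumBD's implementation; the names `window`, `admit`, and `release` are invented for illustration.

```go
package main

import "sync"

// window is a counting gate: Acquire blocks until n units are free.
// It stands in for both the Global Byte Window (units = bytes) and the
// Per-Disk Inflight Window (units = operations).
type window struct {
	mu   sync.Mutex
	cond *sync.Cond
	free int
}

func newWindow(limit int) *window {
	w := &window{free: limit}
	w.cond = sync.NewCond(&w.mu)
	return w
}

func (w *window) Acquire(n int) {
	w.mu.Lock()
	defer w.mu.Unlock()
	for w.free < n {
		w.cond.Wait() // request waits until capacity exists
	}
	w.free -= n
}

func (w *window) TryAcquire(n int) bool {
	w.mu.Lock()
	defer w.mu.Unlock()
	if w.free < n {
		return false
	}
	w.free -= n
	return true
}

func (w *window) Release(n int) {
	w.mu.Lock()
	defer w.mu.Unlock()
	w.free += n
	w.cond.Broadcast() // wake waiters; capacity is available again
}

// admit gates a request on BOTH budgets before it is accepted.
func admit(globalBytes, diskInflight *window, size int) {
	globalBytes.Acquire(size)
	diskInflight.Acquire(1)
}

// release returns capacity in reverse order once the request completes.
func release(globalBytes, diskInflight *window, size int) {
	diskInflight.Release(1)
	globalBytes.Release(size)
}
```

Because requests block at `admit` rather than being buffered, the windows double as a scheduler: whichever request can fit runs, and everything else waits without consuming memory.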

This approach achieves several goals simultaneously:

  • predictable memory usage
  • fairness between volumes
  • stable behaviour under high concurrency

🔁 Clear separation of responsibilities

Another important architectural principle in the QuorumBD middleware is the strict separation of event loops.

The middleware typically operates with dedicated loops for sending and receiving:

  • sendLoop – reads requests from the client and forwards them to the storage core
  • recvLoop – receives replies from the core and delivers them back to the client

Reader loops focus exclusively on reading data and forwarding events. Potentially blocking work, such as writing responses or releasing resources, is handled in separate execution paths.

This separation avoids deadlocks and ensures that the system continues making progress even under heavy load.
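The two-loop structure can be sketched with channels standing in for sockets. The loop names follow the article, but the types and the echoing core below are stand-ins invented for the example; the point is that each loop touches exactly one direction of traffic.

```go
package main

// request and reply are hypothetical wire types for the sketch.
type request struct {
	id   int
	data string
}
type reply struct {
	id   int
	data string
}

// sendLoop only reads client requests and forwards them to the core.
// It never writes replies, so it can never block on the response path.
func sendLoop(client <-chan request, core chan<- request) {
	for req := range client {
		core <- req
	}
	close(core)
}

// recvLoop only reads core replies and delivers them to the client.
// It never touches the request path.
func recvLoop(core <-chan reply, client chan<- reply) {
	for rep := range core {
		client <- rep
	}
	close(client)
}
```

Because neither loop ever waits on the other's direction, a stall on one side (a slow client consuming replies, say) cannot wedge the forwarding of new requests beyond what the flow-control windows already allow.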

🧩 Deterministic middleware design

QuorumBD deliberately keeps the middleware simple:

  • it does not persist state
  • it relies on the core cluster for consistency and recovery
  • it focuses on efficient transport and flow control

By minimizing responsibilities at the middleware layer, the system becomes easier to reason about, easier to operate, and easier to recover after failures.

🚀 Towards predictable distributed storage

Distributed storage systems often fail not because of replication algorithms, but because of subtle behaviour under load. Backpressure, fairness, and clear concurrency boundaries are therefore first-class design goals in QuorumBD.

The result is a middleware layer that behaves predictably even under extreme concurrency, while remaining small enough to deploy easily in environments ranging from small clusters to large infrastructure platforms.

In distributed systems, simplicity is rarely accidental. It is usually the result of careful architectural decisions.