Understand FlowWarden’s threading model, virtual threads support, and how to scale event processing with message brokers
FlowWarden processes Change Stream events sequentially within each stream — one event at a time. This is a deliberate design choice that keeps checkpointing simple and guarantees event ordering. This page explains why, when it matters, and how to scale beyond it.
In imperative mode, each @ChangeStream runs on its own dedicated thread managed by Spring Data MongoDB’s DefaultMessageListenerContainer. Events are processed one at a time, in order.
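The real dispatch goes through the listener container, but the per-stream threading model can be sketched with a plain single-thread executor (the class and method names below are illustrative, not FlowWarden API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Illustrative stand-in for FlowWarden's per-stream dispatch:
// one dedicated thread per stream, events handled strictly in order.
public class SequentialStreamDemo {

    static List<String> processSequentially(int eventCount) throws InterruptedException {
        ExecutorService streamThread = Executors.newSingleThreadExecutor();
        List<String> processed = new ArrayList<>();
        for (int i = 1; i <= eventCount; i++) {
            final int eventId = i;
            // The next task starts only after the previous one returns.
            streamThread.submit(() -> processed.add("event-" + eventId));
        }
        streamThread.shutdown();
        streamThread.awaitTermination(5, TimeUnit.SECONDS);
        return processed;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(processSequentially(3)); // [event-1, event-2, event-3]
    }
}
```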
Three constraints make sequential processing the natural default:
1. **Checkpoint is a linear cursor** — FlowWarden persists a single resume token per stream. If events were processed out of order, a crash after checkpointing event 3 (but before event 2 completes) would lose event 2 forever.
2. **Event ordering matters** — MongoDB Change Streams deliver events in oplog order. An update to a document should be processed after its insert, not before.
3. **Simplicity** — sequential processing eliminates race conditions, makes handlers easy to reason about, and avoids the need for complex coordination.
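The first constraint becomes concrete as a process-then-checkpoint loop. In this sketch, resume-token persistence is reduced to an in-memory field; the Event type and method names are illustrative, not FlowWarden API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Process-then-checkpoint: the resume token advances only after the
// handler returns, so a crash replays at most the in-flight event and
// never skips one.
public class CheckpointLoopDemo {

    record Event(String resumeToken, String payload) {}

    final List<String> handled = new ArrayList<>();
    private String lastCheckpoint; // stand-in for the persisted resume token

    void run(List<Event> events, Consumer<Event> handler) {
        for (Event e : events) {
            handler.accept(e);                // 1. process the event
            lastCheckpoint = e.resumeToken(); // 2. only then persist its token
        }
    }

    String lastCheckpoint() { return lastCheckpoint; }
}
```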
Spring Boot 3.2+ supports virtual threads (Project Loom) via spring.threads.virtual.enabled=true. FlowWarden is compatible with virtual threads, but there are important considerations.
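Enabling them is standard Spring Boot configuration:

```properties
# application.properties
spring.threads.virtual.enabled=true
```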
Do not use synchronized on handler methods when virtual threads are enabled. On JDK 21–23 this causes thread pinning — the virtual thread remains attached to its carrier thread for the entire synchronized block, preventing the carrier from running other virtual threads. (JDK 24 removes most synchronized pinning via JEP 491, but the pattern below is still the safer default.)
```java
// DON'T — causes thread pinning with virtual threads
@OnInsert
synchronized void handle(ChangeStreamContext<?> ctx) {
    mongoTemplate.save(...); // blocks the carrier thread
}

// DO — use ReentrantLock if you need mutual exclusion
private final ReentrantLock lock = new ReentrantLock();

@OnInsert
void handle(ChangeStreamContext<?> ctx) {
    lock.lock();
    try {
        mongoTemplate.save(...);
    } finally {
        lock.unlock();
    }
}
```
In practice, synchronized on a handler is redundant: FlowWarden processes events sequentially within a single stream — there is never concurrent access to the same handler for the same stream. If you need cross-stream coordination, use a ReentrantLock instead.
When a single stream’s sequential throughput is not enough, the recommended pattern is to use FlowWarden as a reliable CDC consumer that fans out events to a message broker. The broker handles parallelism natively via partitions (Kafka) or competing consumers (RabbitMQ).
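The publish side of this pattern is deliberately thin: the FlowWarden handler does no heavy work, it only forwards each event to the broker, keyed by entity id so Kafka preserves per-entity order. The sketch below stands in for a kafkaTemplate.send(topic, key, value) call with an in-memory map; the types and partition count are illustrative:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// In-memory stand-in for broker fan-out: keying by entity id mimics
// Kafka's key-based partitioning, so events for the same entity land
// in the same partition and stay ordered.
public class FanOutDemo {

    record OrderEvent(String orderId, String type) {}

    static final int PARTITIONS = 4;

    // partition number -> ordered list of events
    final Map<Integer, List<OrderEvent>> broker = new HashMap<>();

    // What a FlowWarden handler would do: no heavy work, just publish.
    void onEvent(OrderEvent event) {
        int partition = Math.floorMod(event.orderId().hashCode(), PARTITIONS);
        broker.computeIfAbsent(partition, p -> new ArrayList<>()).add(event);
    }
}
```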
Kafka consumers then process events in parallel, one consumer per partition:
```java
@KafkaListener(topics = "order-events", groupId = "order-processing")
void process(OrderEvent event) {
    // Heavy processing happens here — in parallel across partitions
    orderService.fulfill(event);
}
```
| Concern | Guidance |
| --- | --- |
| Partition key | Use the document _id or a business key as the Kafka partition key — events for the same entity stay ordered |
| Idempotency | Broker consumers must be idempotent — FlowWarden guarantees at-least-once delivery to the broker, and the broker guarantees at-least-once delivery to consumers |
| Deployment mode | Use SINGLE_LEADER on the FlowWarden stream to avoid publishing duplicate events from multiple instances |
| Publish failures | FlowWarden’s @RetryPolicy + @DeadLetterQueue handle transient broker failures — if the publish fails after all retries, the event lands in FlowWarden’s DLQ |
| Backpressure | The broker absorbs bursts — FlowWarden publishes at Change Stream speed, consumers process at their own pace |
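Because delivery is at-least-once end to end, a common idempotency approach is to deduplicate by event id before doing the real work. A minimal sketch (the types are illustrative; production consumers persist seen ids, e.g. behind a unique index, instead of a HashSet):

```java
import java.util.HashSet;
import java.util.Set;

// At-least-once delivery means the same event can arrive more than once;
// tracking processed event ids turns a duplicate into a no-op.
public class IdempotentConsumerDemo {

    record OrderEvent(String eventId, String orderId) {}

    private final Set<String> processed = new HashSet<>();
    int fulfillments = 0; // stand-in for the real side effect

    void process(OrderEvent event) {
        if (!processed.add(event.eventId())) {
            return; // duplicate delivery, already handled
        }
        fulfillments++; // e.g. orderService.fulfill(event) in the real consumer
    }
}
```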