Developing our payments processor has been a
game of balancing latency against throughput, and correctness against on-time
delivery. There's a lot of accidental complexity in the resulting concurrency,
worsened by the JVM's “shared memory” concurrency model, with its 1:1
multi-threading. In this session, let's go lower-level on Java's memory model
and reason about back-pressure, task interruption, and synchronization by
modeling state machines in atomic references.
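To make the last idea concrete, here is a minimal sketch of what "a state machine in an atomic reference" can look like. This is an illustrative example, not the session's actual code: a hypothetical task lifecycle (`NEW → RUNNING → DONE/CANCELLED`) whose current state lives in a single `AtomicReference`, with transitions attempted via `compareAndSet` so that concurrent callers race safely, without locks.

```java
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical sketch: a task lifecycle modeled as a state machine whose
// entire state lives in one AtomicReference. Each transition is a single
// compareAndSet, so when several threads race, exactly one wins and the
// losers observe the state the winner installed.
public class TaskLifecycle {
    public enum State { NEW, RUNNING, DONE, CANCELLED }

    private final AtomicReference<State> state =
            new AtomicReference<>(State.NEW);

    // Succeeds only if the machine is currently in `from`;
    // otherwise leaves the state untouched and returns false.
    public boolean tryTransition(State from, State to) {
        return state.compareAndSet(from, to);
    }

    // Cancellation must tolerate races: loop until we either install
    // CANCELLED or observe that the task already reached a terminal state.
    public boolean cancel() {
        while (true) {
            State s = state.get();
            if (s == State.DONE || s == State.CANCELLED) {
                return s == State.CANCELLED;
            }
            if (state.compareAndSet(s, State.CANCELLED)) {
                return true;
            }
            // CAS lost a race; re-read and retry.
        }
    }

    public State current() {
        return state.get();
    }

    public static void main(String[] args) {
        TaskLifecycle t = new TaskLifecycle();
        System.out.println(t.tryTransition(State.NEW, State.RUNNING)); // true
        System.out.println(t.tryTransition(State.NEW, State.RUNNING)); // false: already RUNNING
        System.out.println(t.cancel());                                // true
        System.out.println(t.current());                               // CANCELLED
    }
}
```

Because every transition is one atomic compare-and-set on an immutable enum value, there is no lock to hold while deciding the next state, and a cancellation request can never "un-finish" a task that already reached a terminal state.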