Distributed Locking

How to prevent race conditions when multiple app servers want to mutate the same resource. Coordinate with confidence.

Why Distributed Locks?

On a single machine, we use built-in mutexes. In a distributed system, memory is not shared, so the lock must live outside any one process.

1. User A updates balance on Server 1.

2. User A updates balance on Server 2 at the same time.

3. Result: Last writer wins. Money is lost.

acquire_lock("user_1")  // needs to be global, visible to every server
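The lost update above can be simulated in a few lines. This is a toy sketch, not real server code: two "servers" each read the balance, apply their own change, and write back, so the slower writer silently overwrites the faster one.

```python
# Toy illustration of the lost-update race: both servers read the same
# starting balance before either writes its result back.
balance = {"user_1": 100}

read_on_server_1 = balance["user_1"]  # Server 1 reads 100
read_on_server_2 = balance["user_1"]  # Server 2 also reads 100

balance["user_1"] = read_on_server_1 + 50  # Server 1 deposits 50 -> 150
balance["user_1"] = read_on_server_2 + 30  # Server 2 deposits 30 -> 130

# Last writer wins: the +50 deposit is lost.
print(balance["user_1"])  # 130, not the expected 180
```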

1. Redis (Redlock)

Liveness & Safety

Uses SETNX (SET if Not eXists) with a TTL, so a crashed holder's lock eventually expires. Redlock is a proposed algorithm for acquiring the lock on a majority of independent Redis nodes, so the lock survives the failure of a minority of them.

Pros

Ultra-fast: millions of ops per second. Native TTL support.

Cons

Safety depends on system clocks and timing assumptions; its claim to "strict" safety is controversial.
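The single-node SETNX-with-TTL pattern can be sketched in memory. This is a simulation of the semantics, not the Redis API: `acquire` succeeds only if the key is absent or its TTL has expired, and `release` deletes the key only if the caller still owns it (mirroring the usual Lua check-and-delete, so one client cannot release another's lock).

```python
import time
import uuid

store = {}  # key -> (owner_token, expiry_timestamp); stands in for Redis

def acquire(key, ttl_seconds):
    now = time.monotonic()
    entry = store.get(key)
    if entry is None or entry[1] <= now:      # free, or previous holder timed out
        token = uuid.uuid4().hex              # unique token proves ownership
        store[key] = (token, now + ttl_seconds)
        return token
    return None                               # someone else holds the lock

def release(key, token):
    entry = store.get(key)
    if entry is not None and entry[0] == token:  # only the owner may delete
        del store[key]
        return True
    return False

t1 = acquire("user_1", ttl_seconds=5)
t2 = acquire("user_1", ttl_seconds=5)  # blocked while t1 holds the lock
print(t1 is not None, t2)              # True None
release("user_1", t1)
t3 = acquire("user_1", ttl_seconds=5)  # free again after release
print(t3 is not None)                  # True
```

With real Redis, the acquire step is a single `SET key token NX PX ttl` command; the TTL is what turns a crashed client into a liveness hiccup instead of a permanent deadlock.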

2. Zookeeper / Etcd

Strict Reliability

Uses Ephemeral Nodes and Sequence Numbers. If the client disconnects, the lock is automatically released by the cluster.

"Zookeeper is CP (CAP Theorem). It prefers correctness over availability for locks."

3. Fencing Tokens

The ultimate fix for the "GC Pause" problem. A lock service returns a Monotonically Increasing Token. The storage service only accepts a write if the token is greater than the last one seen.

Lock service (issues Token 33) → Storage (accepts 33) → Stale write (Token 32) discarded
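The storage-side check can be sketched directly. This is a minimal illustration, assuming a hypothetical `FencedStorage` class: a write is accepted only if its token is strictly greater than the highest token seen so far, so a client that paused (e.g. for GC) while holding token 32 cannot clobber a write made under token 33.

```python
class FencedStorage:
    def __init__(self):
        self.highest_token = 0
        self.data = {}

    def write(self, token, key, value):
        if token <= self.highest_token:  # stale or replayed token: reject
            return False
        self.highest_token = token       # remember the newest token seen
        self.data[key] = value
        return True

storage = FencedStorage()
print(storage.write(33, "balance", 150))  # True: token 33 accepted
print(storage.write(32, "balance", 999))  # False: stale token 32 discarded
print(storage.data["balance"])            # 150
```

Note the safety check lives in the storage service, not the lock service: fencing works even if the lock holder is wrong about still holding the lock.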

Interview Guidance

"Optimistic vs Pessimistic"

Always mention Optimistic Locking (versioning) first; it is much cheaper. Reach for a heavy distributed mutex only when conflicts are frequent or the cost of a failure is massive.
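Optimistic locking is a compare-and-swap on a version column rather than a lock. A minimal sketch, with a hypothetical `update_if_unchanged` helper standing in for a conditional `UPDATE ... WHERE version = ?`:

```python
# Each row carries a version number; an update succeeds only if the
# version is unchanged since it was read. On conflict the caller
# re-reads the row and retries.
row = {"balance": 100, "version": 1}

def update_if_unchanged(row, expected_version, new_balance):
    if row["version"] != expected_version:  # someone else wrote first
        return False
    row["balance"] = new_balance
    row["version"] += 1                     # bump so stale writers fail
    return True

v = row["version"]
print(update_if_unchanged(row, v, 150))  # True: first writer wins
print(update_if_unchanged(row, v, 130))  # False: stale version, retry
print(row["balance"])                    # 150
```

No coordination service is needed at all; contention is paid for only when a conflict actually happens.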

The Redlock Debate

A casual mention of Martin Kleppmann's critique of Redlock shows you understand the deep theory of clocks and consensus, not just the API.