Active Directory replication is always pull-based, pairwise, and per naming context. Server A pulls from Server B for the Domain NC, then pulls again for the Configuration NC, then again for Schema NC. No multi-DC broadcasts, no “sync everything” calls. Each cycle is two DCs and one NC.
This is Part 4 of the AD Replication Deep Dive series. We’ll walk the complete five-step request / send / process / confirm flow between two DCs, with the HWMV and UTDV metadata payloads visible at each step.
What lives inside a naming context
Every DC has at least three NCs:
- Schema NC — attribute and class definitions. Forest-wide.
- Configuration NC — sites, subnets, services, partitions, replication topology. Forest-wide.
- Domain NC — users, groups, computers, OUs for one domain. Per-domain.
Optional NCs include application partitions like DomainDnsZones and ForestDnsZones, and any custom app partitions for things like ADAM/AD LDS workloads. Each NC replicates independently. If the Schema NC is healthy but the Domain NC is failing, you have a partial outage — not a total one.
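A toy sketch makes the per-NC independence concrete. The DNs below follow real NC naming conventions, but the status map is invented for illustration:

```python
# Minimal sketch: replication health is tracked per naming context, not per DC.
# Status values here are illustrative, not real AD structures.
nc_status = {
    "CN=Schema,CN=Configuration,DC=contoso,DC=com": "healthy",
    "CN=Configuration,DC=contoso,DC=com": "healthy",
    "DC=contoso,DC=com": "failing",  # Domain NC
    "DC=DomainDnsZones,DC=contoso,DC=com": "healthy",
}

# A per-DC health check must look at every NC; one failing NC is a
# partial outage, not a total one.
failing = [nc for nc, status in nc_status.items() if status != "healthy"]
print("partial outage" if failing and len(failing) < len(nc_status) else "ok")
```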
The five-step replication cycle (overview)
- Server A initiates — sends a replication request to Server B for one specific NC.
- Server B figures out what to send — uses HWMV + UTDV from the request to filter changes.
- Server B sends — ships the object buffer, last USN, More-Data flag, and (sometimes) its UTDV.
- Server A applies the updates — commits each change under a fresh local USN, updates its HWMV.
- Server A checks if it’s done — loops back to step 1 if More-Data was True; otherwise merges Server B’s UTDV.
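Before unpacking each step, here is the whole loop as a compact Python sketch. Every function name is a placeholder for the step described below, not a real API:

```python
# Skeleton of one replication cycle for a single (partner, NC) pair.
# All method names are hypothetical stand-ins for the five steps.
def replicate_nc(server_a, server_b, nc):
    while True:
        request = server_a.build_request(nc)    # step 1: HWMV + UTDV + caps
        response = server_b.answer(request)     # steps 2-3: filter, then send
        server_a.apply(response, nc)            # step 4: commit under local USNs
        if not response.more_data:              # step 5: done? merge UTDV
            server_a.merge_utdv(response.utdv, nc)
            break
```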
Step 1: Server A initiates the request
When Server A’s replication scheduler fires (15 seconds after a notification intra-site, or on a polling schedule inter-site), Server A picks one partner and one NC. It sends Server B five things:
- The naming context identifier — e.g., “Domain NC: DC=contoso,DC=com”.
- Max-object limit — how many objects Server A is willing to receive in this packet (default 100 intra-site, 500 inter-site).
- Max-value limit — how many total attribute values may ship (default 1,000 intra-site, 10,000 inter-site).
- The HWMV cursor — “Last time we replicated this NC, the highest USN I got from you was 1,108.”
- Server A’s UTDV — the per-originating-DC map of “here are the highest USNs I’ve seen from every DC in the forest.”
The max-object / max-value caps are critical — they break large change-sets into multiple packets so a 500,000-object catch-up doesn’t saturate a WAN link. Replication continues for as many rounds as needed; the schedule only controls when each cycle starts, not how long it runs.
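As a mental model, the request is a small struct. A hedged sketch follows; the field names are hypothetical, and the real exchange is the DRS replication RPC, not a Python object:

```python
from dataclasses import dataclass

@dataclass
class ReplRequest:
    naming_context: str    # e.g. "DC=contoso,DC=com"
    max_objects: int       # 100 intra-site, 500 inter-site by default
    max_values: int        # 1,000 intra-site, 10,000 inter-site by default
    hwmv_cursor: int       # highest USN previously received from this partner
    utdv: dict[str, int]   # originating DC -> highest originating USN seen

request = ReplRequest("DC=contoso,DC=com", 100, 1_000, 1_108,
                      {"B": 1_108, "C": 100, "D": 2_350, "E": 540})
```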

Step 2: Server B figures out what to send
Server B does three things with the inbound request:
A. Find candidate changes. Server B looks at its own current high-USN for the requested NC and scans for all object changes with USN > the HWMV cursor. If A said “last USN from you was 1,108” and B is now at 1,112, that’s 4 candidate transactions to inspect.
B. Apply propagation dampening. For each candidate change, Server B looks up the originating DC and original USN of that change (stored in replPropertyMetaData). It then asks: does Server A’s UTDV already include this originating USN from this originating DC? If yes, skip — Server A has already seen this change via another partner. This is what stops replication loops.
C. Respect the size caps. Server B fills an output buffer until it hits A’s max-object or max-value limit, then sets a More-Data = True flag. The buffer is sorted by USN, so the next cycle can resume exactly where this one stopped.
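A minimal sketch of that filtering pass, assuming each change is a (local USN, originating DC, originating USN) tuple; the tuple shape and function name are illustrative:

```python
def select_changes(changes, hwmv_cursor, partner_utdv, max_objects):
    """changes: list of (local_usn, originating_dc, originating_usn), USN-sorted."""
    buffer, more_data = [], False
    last_considered = hwmv_cursor
    for local_usn, orig_dc, orig_usn in changes:
        if local_usn <= hwmv_cursor:
            continue                       # partner already caught up this far
        if len(buffer) >= max_objects:
            more_data = True               # size cap hit; resume next cycle
            break
        last_considered = local_usn        # cursor advances even for dampened changes
        if partner_utdv.get(orig_dc, 0) >= orig_usn:
            continue                       # propagation dampening: A has seen it
        buffer.append(local_usn)
    return buffer, last_considered, more_data
```

Both special cases below fall out of this same filter: an empty buffer with More-Data = False is the metadata-only response.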
Two special cases
- Server A is fully up-to-date: No changes to send. B sends back just the metadata cursors — no objects, More-Data = False.
- Server B has changes but A already has them all via someone else: Propagation dampening filtered everything out. Same outcome — metadata-only response.
Step 3: Server B sends the response
The response packet carries four things:
- The object buffer — the actual changed objects and attributes.
- Last-Object-USN-Changed — the highest USN from Server B that was considered in this cycle (even for changes that were skipped via dampening — this advances A’s cursor correctly).
- More-Data flag — True if Server B hit the size cap and has more queued.
- Server B’s UTDV — only sent when More-Data is False, i.e., this is the last packet of the cycle.
Why withhold the UTDV until the last packet? Because A merges B’s UTDV into its own at the end of the cycle. If A merged mid-cycle and then crashed, its UTDV would claim it’s seen changes it doesn’t actually have yet.
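In sketch form, with hypothetical field names:

```python
def build_response(buffer, last_considered, more_data, my_utdv):
    return {
        "objects": buffer,
        "last_object_usn_changed": last_considered,
        "more_data": more_data,
        # Withheld mid-cycle so a crash on A can't leave its UTDV claiming
        # knowledge of changes it never applied.
        "utdv": None if more_data else dict(my_utdv),
    }
```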

Step 4: Server A applies the updates
For each object in the buffer, Server A:
- Allocates its next local USN (the local USN counter advances for every write, whether originating or replicated, so a replicated change still consumes a fresh local USN on Server A).
- Opens an ESE (Extensible Storage Engine) transaction.
- Writes the changed attributes into the object. Replication is per attribute, not per object — if only telephoneNumber changed, only that attribute is touched.
- Updates the object’s uSNChanged to the new local USN.
- Commits the transaction. If the commit fails, the local USN is consumed but never re-issued — this is how AD avoids USN reuse on aborted writes.
After all objects are applied, Server A updates its HWMV row for “Server B, this NC” to the Last-Object-USN-Changed Server B returned — not the last successfully-applied USN. This matters: if B’s LastObject says 1,112 but only USN 1,110 actually shipped (1,111 and 1,112 were dampened), A still advances its cursor to 1,112. Otherwise A would re-ask for the dampened changes every cycle forever.
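A sketch of the USN accounting on Server A, with the ESE transaction and attribute writes stubbed out as comments (the class and field names are invented):

```python
class ServerA:
    def __init__(self):
        self.local_usn = 5_000   # hypothetical current local USN
        self.hwmv = {}           # (partner, nc) -> cursor

    def apply(self, response, partner, nc):
        for obj in response["objects"]:
            self.local_usn += 1  # consumed even if the commit aborts
            # begin ESE transaction, write only the changed attributes,
            # set uSNChanged = self.local_usn, commit
        # Advance past dampened USNs too, or A would re-request them forever.
        self.hwmv[(partner, nc)] = response["last_object_usn_changed"]
```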
Step 5: Server A checks More-Data and merges UTDV
Two outcomes:
- More-Data = True → back to step 1. Server A immediately fires another request to Server B for the same NC. Repeats until the cycle drains.
- More-Data = False → cycle complete. Server A now merges Server B’s UTDV into its own.
The merge is row-by-row: for each entry in B’s UTDV (one per originating DC ever seen):
- If A doesn’t have a row for that originating DC, add it.
- If A’s row has a lower max-USN than B’s, raise A’s row.
- If A’s row has the same or higher max-USN, leave it.
This is the propagation-dampening superpower: even though A is only replicating with B right now, A’s map of “what I’ve seen from DC E” gets updated transitively through B’s knowledge. The next time A talks to C, it won’t re-ask for things it now knows it has.
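The merge is a per-row maximum, short enough to write out as a sketch:

```python
def merge_utdv(mine: dict[str, int], partners: dict[str, int]) -> dict[str, int]:
    merged = dict(mine)
    for dc, usn in partners.items():
        if usn > merged.get(dc, -1):  # add missing rows, raise lower ones
            merged[dc] = usn
    return merged

# Transitive knowledge: A learns about E through B without talking to E.
print(merge_utdv({"B": 1_108, "E": 540}, {"B": 1_111, "E": 790}))
# {'B': 1111, 'E': 790}
```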
Worked example: one replication cycle with dampening
Let’s say DC A has been up-to-date with DC B for some time. DC A’s HWMV for B reads 1,108. DC A also has UTDV entries: {B: 1108, C: 100, D: 2350, E: 540}.
DC B is now at USN 1,112. Its UTDV: {A: 1001, B: 1111, C: 100, D: 2350, E: 790}.
A initiates. B scans 1,109–1,112. B’s changes break down as: USN 1,109 was originally from E (originating USN 567), 1,110 from E (originating 788), 1,111 was an originating write on B, 1,112 was from D (originating 2,345).
B applies dampening:
- 1,109 from E originating 567: A’s UTDV says E max = 540, so 567 is new → send.
- 1,110 from E originating 788: still new (788 > 540) → send.
- 1,111 originating on B: A’s UTDV for B is 1,108; this is 1,111 → send.
- 1,112 from D originating 2,345: A’s UTDV for D = 2,350 > 2,345 → skip, A already has it.
B sends 3 changes, Last-Object-USN-Changed = 1,112, More-Data = False, UTDV included.
A applies them. HWMV for B advances to 1,112. A then merges B’s UTDV: the row for E updates 540 → 790 (B told A indirectly that E’s changes are visible up to 790) and the row for B rises 1,108 → 1,111.
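The dampening decisions above reduce to one comparison per change, so the whole example replays in a few lines:

```python
# Which of B's USNs 1,109-1,112 ship to A?
# Tuples are (B's local USN, originating DC, originating USN).
a_utdv = {"B": 1_108, "C": 100, "D": 2_350, "E": 540}
b_changes = [
    (1_109, "E", 567),
    (1_110, "E", 788),
    (1_111, "B", 1_111),
    (1_112, "D", 2_345),
]

shipped = [usn for usn, dc, orig in b_changes if a_utdv.get(dc, 0) < orig]
print(shipped)  # [1109, 1110, 1111] -- 1,112 is dampened (D row already at 2,350)
```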
Things that bite people
Per-NC scheduling means partial failures
A site link failure mid-cycle leaves Configuration NC synced but Domain NC not. Symptom: new computer accounts created somewhere don’t show on this DC even though sites and services data looks fine. Always check repadmin /showrepl per-NC, never assume one NC’s state matches another’s.
UTDV bloat in long-lived forests
Every DC ever promoted leaves a UTDV row behind, and it lingers until metadata cleanup and tombstone expiry finally remove it. A 20-year-old forest with 80 historic DCs can carry 80 UTDV rows per NC even if only 5 DCs exist today. Not a problem, but the metadata grows.
Snapshot reverts invalidate cursors
If a DC is reverted, its invocation ID rotates. Every other DC’s HWMV / UTDV row for that DC is wiped on the next cycle and rebuilt from scratch — the next replication pull is effectively a full re-sync of that NC from that partner. Expect a temporary bandwidth spike.
repadmin shows you cursors, not raw changes
repadmin /showrepl displays the HWMV cursor per NC per partner. repadmin /showutdvec DC01 "DC=contoso,DC=com" displays the UTDV. repadmin /showchanges dumps actual pending changes. Use the right one for the question you’re asking.
What’s next
You now know how a single NC replicates between two DCs. Part 5 in the AD Replication Deep Dive pathway covers the question we’ve been quietly assuming: how often does this cycle fire? The answer is different inside one site (15 seconds) versus between sites (15 minutes minimum, often hours), and the timing choice cascades into every subsequent design decision.