AD replication runs on two clocks. Inside a site, it’s near-realtime — a change notification fires 15 seconds after any change. Across sites, it’s scheduled polling — default 180 minutes, minimum 15 minutes, configurable per site link. These two numbers drive almost every design decision in a multi-site forest. This is Part 5 of the AD Replication Deep Dive series: the cadence layer.
The two-clock model
Active Directory assumes two completely different network environments coexist in any real-world deployment:
- Intra-site (inside one AD site): DCs share a LAN. Bandwidth is cheap, latency is low, packet loss is negligible. Designed for fast convergence.
- Inter-site (between AD sites): DCs are separated by a WAN, VPN, or expensive leased line. Bandwidth costs money, latency is high, congestion is real. Designed for predictable, bounded traffic.
An “AD site” isn’t a geographic concept — it’s a routing concept. Two physical buildings in the same city with a 10Gbps fibre link between them can be a single AD site. A single building with two VLANs separated by a slow firewall might be two sites. The dividing line is “is the link fast enough to treat as LAN?”
Intra-site replication: change-notification model
Default cadence: 15 seconds
When a DC commits an originating write, it doesn’t immediately announce the change. It waits 15 seconds (a tunable delay, registry value Replicator notify pause after modify (secs)). The 15-second hold serves two purposes:
- Batching: If an admin script creates 50 users in 5 seconds, all 50 changes are bundled into one replication packet instead of 50 separate ones.
- Stability: Brief in-progress edits (a half-completed bulk import) get a chance to settle before propagating.
After 15 seconds, the DC sends a change notification to each intra-site partner. Each partner then immediately requests the changes. End-to-end an intra-site change typically converges across all DCs in 15–45 seconds.
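The hold-and-batch behaviour is easy to model. A minimal Python sketch (illustrative only, not AD code) of how writes that land inside the delay window collapse into a single notification:

```python
# Illustrative model of the 15-second notify delay: a write starts the
# delay timer only if no timer is already pending, so a burst of writes
# produces one notification, not one per write.

NOTIFY_DELAY = 15  # seconds; "Replicator notify pause after modify (secs)"

def notifications(write_times, delay=NOTIFY_DELAY):
    """Given originating-write timestamps on one DC, return the times
    at which change notifications fire."""
    fires = []
    pending_until = None
    for t in sorted(write_times):
        if pending_until is None or t > pending_until:
            pending_until = t + delay
            fires.append(pending_until)
    return fires

# 50 user creations in 5 seconds -> a single notification at t=15
burst = [i * 0.1 for i in range(50)]
print(notifications(burst))  # [15.0]
```

This is why the admin-script example above generates one replication packet: every write in the burst falls inside the first write’s pending window.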
Fallback: 1-hour polling
Notifications can be missed (a DC reboots, the network glitches). Each DC therefore also runs a scheduled poll, once per hour by default, against each intra-site partner to catch anything a notification missed. The Replicator intra site packet size (objects) registry value caps how many objects travel in each batch.
Transport: RPC over IP
Intra-site replication uses RPC over IP with no compression. CPU savings (no compress / decompress step) matter more than bandwidth at LAN speeds.
Connection objects
The KCC (covered in Part 6) automatically builds a ring of intra-site connection objects so every DC ends up no more than 3 hops from every other DC. Connection objects are one-way (inbound), so a 5-DC site gets a bidirectional ring: worst case 2 hops between any pair. A plain ring can only honour the 3-hop guarantee up to 7 DCs, so in a 50-DC site the KCC builds extra chords across the ring.
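To see why the ring alone only works up to a point, here’s a quick back-of-envelope in Python (illustrative, not the KCC’s actual algorithm): the farthest pair in a bidirectional ring of n DCs is ⌊n/2⌋ hops apart, so a plain ring satisfies the 3-hop rule only up to 7 DCs:

```python
# Worst-case hop distance in a bidirectional ring of n DCs.
def ring_max_hops(n):
    return n // 2

for n in (5, 7, 8, 50):
    hops = ring_max_hops(n)
    verdict = "ring alone is enough" if hops <= 3 else "KCC must add chords"
    print(f"{n} DCs: {hops} hops worst case -> {verdict}")
```

For 50 DCs a bare ring would put the farthest pair 25 hops apart, which is why the KCC cuts chords across it.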
Inter-site replication: scheduled polling model
Default cadence: 180 minutes (3 hours)
Inter-site replication does not use change notifications by default. The destination DC polls the source on a schedule defined by the site link between the two sites. Default poll frequency: 180 minutes.
You can lower this all the way to 15 minutes, but no further. The 15-minute floor is hard-coded — trying to set 1-minute polling won’t make replication faster; the value is simply clamped to 15.
Schedule windows
Each site link has a 7-day x 24-hour grid where you can mark hours as available or blocked. Outside available hours, replication doesn’t fire at all. Useful for things like “our satellite office WAN is metered after 6pm — no replication from 6pm to 6am.”
Inside an available window the link still polls only once per frequency interval. So if the frequency is 60 minutes and the window is 09:00–17:00, you get 8 cycles per day, not 480 (one per minute).
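The window/frequency interaction is just integer division. A quick sketch (illustrative Python, simplified to whole-hour windows):

```python
# Replication cycles per day for a site link, given its availability
# window and polling frequency. Simplified: one contiguous window,
# whole hours, no partial cycles.

def cycles_per_day(window_start_h, window_end_h, frequency_min):
    window_min = (window_end_h - window_start_h) * 60
    return window_min // frequency_min

print(cycles_per_day(9, 17, 60))   # 8 cycles in the 09:00-17:00 window
print(cycles_per_day(9, 17, 180))  # only 2 full cycles fit at the default frequency
```

Note the second case: at the default 180-minute frequency an 8-hour window yields just two polls per day, which is worth knowing before you blame the WAN.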
Cost — the routing metric
Each site link has a numeric “cost.” The KCC’s ISTG (Inter-Site Topology Generator) builds the cross-site replication graph using site-link costs the same way a router uses metrics. Default cost is 100. Make a slow link cost 500 and a fast one cost 50, and traffic prefers the fast link.
Site-link bridging (transitive routing between site links) is on by default — if Site A ↔ B and B ↔ C exist, the KCC knows A ↔ C is reachable through B.
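The cost arithmetic behaves like any shortest-path routing. A small Python sketch (the sites, links, and costs here are invented; the real computation is done by the ISTG, not this code):

```python
import heapq

def cheapest_route(links, src, dst):
    """Dijkstra over site-link costs.
    links: {(siteA, siteB): cost}; returns (total_cost, path) or None."""
    graph = {}
    for (a, b), cost in links.items():
        graph.setdefault(a, []).append((b, cost))
        graph.setdefault(b, []).append((a, cost))  # site links are bidirectional
    heap = [(0, src, [src])]
    seen = set()
    while heap:
        cost, site, path = heapq.heappop(heap)
        if site == dst:
            return cost, path
        if site in seen:
            continue
        seen.add(site)
        for nxt, c in graph.get(site, []):
            if nxt not in seen:
                heapq.heappush(heap, (cost + c, nxt, path + [nxt]))
    return None

links = {
    ("HQ", "Branch"): 500,       # slow leased line, penalised
    ("HQ", "Datacentre"): 50,    # fast fibre
    ("Datacentre", "Branch"): 50,
}
print(cheapest_route(links, "HQ", "Branch"))
# (100, ['HQ', 'Datacentre', 'Branch']) - the two fast hops beat the direct slow link
```

This mirrors the cost-500 vs cost-50 example above: replication from HQ to Branch transits the Datacentre site because 50 + 50 beats 500.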
Transport: IP (RPC) or SMTP
- IP transport (default) — RPC over IP, compressed when payload is over a threshold. Compression matters at WAN speeds: a 4 MB schema change compresses to ~400 KB.
- SMTP transport — legacy, for site pairs with no direct IP connectivity. Carries only the Configuration and Schema NCs (plus read-only global catalog partitions), never a writable Domain NC. Deprecated and almost never used today.
Bridgehead servers
Inter-site traffic doesn’t fan out from every DC. Each site nominates one DC per NC as the bridgehead. The bridgehead handles all inbound and outbound cross-site replication for that NC, then locally redistributes via intra-site. Covered in detail in Part 7 of this series.
Side-by-side comparison
| Aspect | Intra-site | Inter-site |
|---|---|---|
| Trigger | Change notification | Scheduled poll |
| Default cadence | 15 sec (after notify delay) | 180 min |
| Minimum cadence | ~5 sec (tunable, rarely useful) | 15 min (hard floor) |
| Transport | RPC over IP, uncompressed | RPC over IP (compressed) or SMTP |
| Topology | Ring + chords (3-hop max) | Hub-spoke via bridgeheads |
| Built by | KCC (every 15 min) | ISTG (every 15 min) |
| Convergence | 15–45 sec across the site | Frequency × number of hops |
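The “Frequency × number of hops” row deserves a worked example. A rough worst-case estimate in Python (illustrative; it pessimistically assumes a change waits one full polling interval at every inter-site hop):

```python
# Worst-case cross-site convergence: each inter-site hop can wait up to
# one full polling interval; intra-site redistribution at each end is
# seconds, so it barely registers next to the polling intervals.

def worst_case_minutes(hops, frequency_min=180, intra_site_sec=45):
    return hops * frequency_min + intra_site_sec / 60

print(worst_case_minutes(hops=2))                     # ~6 hours at the 180-min default
print(worst_case_minutes(hops=2, frequency_min=15))   # ~30 min at the 15-min floor
```

Two inter-site hops at the default cadence is roughly a six-hour worst case, which is why hub-spoke topologies (one hop from any branch to the hub) are popular: they cap the multiplier.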
How a password change actually propagates
Password changes are a useful example because they get special urgent-replication treatment:
- User changes password on DC1 in Site A.
- DC1 commits the change and immediately notifies the PDC Emulator for the domain, regardless of site. This is the urgent replication path — doesn’t wait for normal cadence.
- DC1 also queues normal intra-site notification to its site partners (15-sec delay).
- The PDC Emulator now has the new password. When a logon with the new password fails on a DC that hasn’t replicated it yet, that DC re-checks the attempt against the PDC Emulator before counting it as a bad password. So even before the change fully replicates, the user can log in anywhere in the domain via the PDC fallback.
- Normal cross-site replication catches up the rest of the forest at the next site-link poll.
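The fallback logic above can be sketched in a few lines (a deliberately simplified Python model; the real exchange is an authentication forward to the PDC over Netlogon, not a password comparison like this):

```python
# Simplified model of the PDC-fallback path: a DC whose local copy
# rejects the password re-checks with the PDC Emulator before treating
# the attempt as a bad password.

def authenticate(local_password, pdc_password, attempt):
    if attempt == local_password:
        return "success (local)"
    # Local mismatch: forward to the PDC Emulator, which may already
    # hold a newer password via the urgent path.
    if attempt == pdc_password:
        return "success (PDC fallback)"
    return "failure (counts toward lockout)"

# Branch DC still has the old password; the PDC already has the new one.
print(authenticate("OldPass1", "NewPass2", "NewPass2"))  # success (PDC fallback)
```

The practical consequence: a genuinely wrong password fails twice (locally and at the PDC) before it counts, while a freshly changed password succeeds everywhere immediately.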
Other changes that trigger urgent replication: account lockouts, RID manager state changes, trust object changes, LSA secret changes. The common thread is security sensitivity — you don’t want a 3-hour gap between an account being locked at HQ and a branch DC seeing the lockout.
Tuning levers
Shorten inter-site frequency
```powershell
Get-ADReplicationSiteLink -Filter * |
    Set-ADReplicationSiteLink -ReplicationFrequencyInMinutes 15
```
Sets every site link to the minimum 15-minute floor. Use only on fast WAN links — on a metered cellular site link this is expensive.
Enable inter-site change notifications
```powershell
# Set-ADReplicationSiteLink has no -Options parameter; USE_NOTIFY is
# bit 0x1 of the site link's options attribute, so OR it in via -Replace.
$link = Get-ADReplicationSiteLink -Identity DEFAULTIPSITELINK -Properties options
Set-ADReplicationSiteLink -Identity DEFAULTIPSITELINK `
    -Replace @{ options = $link.options -bor 1 }
```
Sets the USE_NOTIFY bit on the site link, which makes inter-site replication behave like intra-site (change notification after the 15-sec delay) on that link. Only do this on LAN-quality inter-site links — if the link is slow this floods it.
Disable site-link compression
```powershell
# DISABLE_COMPRESSION is bit 0x4 of the same options attribute.
$link = Get-ADReplicationSiteLink -Identity DEFAULTIPSITELINK -Properties options
Set-ADReplicationSiteLink -Identity DEFAULTIPSITELINK `
    -Replace @{ options = $link.options -bor 4 }
```
Saves CPU on the bridgeheads at the cost of bandwidth. Useful when the WAN is fast but the bridgehead DCs are CPU-bound.
Things that bite people
Treating “site” as a building
An AD site is a subnet group with fast intra-group links. Three buildings on the same campus across the street from each other are usually one site, not three. Conversely, two VLANs in the same datacentre separated by a 100 Mbps firewall might genuinely need to be separate sites.
Site-less DCs
If a subnet isn’t registered in Sites and Services, clients in it are site-less: they may be handed a DC from any site, including one across a WAN. And a DC promoted before its subnet is registered lands in Default-First-Site-Name, often alongside DCs from completely unrelated locations. Replication math goes wrong and authentication routing breaks. Always register every subnet.
15-minute polling on a 9.6 Kbps link
Just because you can set 15-minute polling doesn’t mean you should. A slow WAN with many DCs can’t complete a poll cycle in 15 minutes, so polls overlap and queue, and end up effectively running constantly. Match the cadence to the link.
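A quick feasibility check makes the point (illustrative Python; the payload and link numbers are invented):

```python
# Can a poll cycle finish before the next one fires?
# payload_kb: data to replicate per cycle; link_kbps: link speed in kilobits/sec.

def cycle_fits(payload_kb, link_kbps, frequency_min):
    transfer_sec = payload_kb * 8 / link_kbps  # KB -> kilobits, then divide by rate
    return transfer_sec <= frequency_min * 60

# 2 MB of changes over a 9.6 kbps link vs over a 10 Mbps link, 15-min polling
print(cycle_fits(2048, 9.6, 15))      # False: ~28 min transfer, polls overlap and queue
print(cycle_fits(2048, 10_000, 15))   # True: transfer finishes in under 2 seconds
```

When the check comes out False, the queue never drains: each cycle starts before the previous one finishes and the link runs flat out around the clock.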
Schedule windows that overlap zero hours
It’s easy to misclick the schedule grid and leave no available hours on a site link. Replication then never fires, and you’re effectively partitioned. Always sanity-check the schedule after editing.
Forgetting urgent-replication channels
“Password changes converge slowly across sites” — usually wrong. The PDC Emulator already has the password via urgent replication. If users actually can’t log in across sites, the cause is usually DNS or Kerberos time skew, not slow replication.
What’s next
You know the two clocks. Part 6 in the AD Replication Deep Dive pathway covers the component that builds the actual replication topology those clocks run on: the Knowledge Consistency Checker (KCC) — how AD automatically computes which DC talks to which, refreshes that map every 15 minutes, and reroutes around failures without admin intervention.