Field Guide

Storage Replica partnerships stay connected while log volumes silently become the bottleneck.

Use this when replication health looks green but actual recovery posture is weakening because log growth or replication latency was never planned for.

Start from the failing condition, not the loudest symptom.

Treat the visible error as the end of the chain and work backward until you find the first dependency that actually moved.
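One concrete way the "green but weakening" condition shows up is a replication log filling faster than planned. A minimal sketch of the check, assuming you have exported `Get-SRGroup` output as JSON (the field names, the 8 GiB configured size, and the 7 GiB used figure below are illustrative assumptions, not measurements):

```python
import json

# Hypothetical output of:
#   Get-SRGroup | Select-Object Name, LogSizeInBytes, LogVolume | ConvertTo-Json
# Field names and values here are assumptions; verify against your own output.
sample = '{"Name": "rg01", "LogSizeInBytes": 8589934592, "LogVolume": "L:\\\\"}'

def log_pressure(configured_log_bytes, used_log_bytes):
    """Return the fraction of the configured replication log in use."""
    if configured_log_bytes <= 0:
        raise ValueError("configured log size must be positive")
    return used_log_bytes / configured_log_bytes

group = json.loads(sample)

# Used bytes would come from a performance counter or a free-space delta
# on the log volume; 7 GiB is an illustrative value.
used = 7 * 1024**3
ratio = log_pressure(group["LogSizeInBytes"], used)
if ratio > 0.8:
    print(f"{group['Name']}: log {ratio:.0%} full, replication may stall soon")
```

Trending this ratio over time, rather than sampling it once, is what separates "connected" from "actually keeping up."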

Separate healthy dependencies from the one that actually broke.

  • Capture one failing path, one known-good path, and the exact change window before touching configuration.
  • Confirm Storage Replica, disaster-recovery, and Windows Server state from both the control plane and an affected workload so you are not troubleshooting a cached or partial view.
  • Record which dependency actually moved first: identity, name resolution, transport, policy, or runtime state.
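The last bullet above reduces to a timestamp comparison: gather the most recent change time for each dependency layer and take the earliest one inside the incident window. A minimal sketch, where the layer names come from the list above and every timestamp is an invented example (real values would come from event logs, config backups, or change records):

```python
from datetime import datetime, timezone

# Illustrative last-change timestamps per dependency layer.
changes = {
    "identity":        datetime(2024, 5, 1, 9, 40, tzinfo=timezone.utc),
    "name_resolution": datetime(2024, 5, 1, 9, 12, tzinfo=timezone.utc),
    "transport":       datetime(2024, 5, 1, 9, 55, tzinfo=timezone.utc),
}

def first_mover(changes):
    """Return the dependency whose state changed earliest in the window."""
    return min(changes, key=changes.get)

print(first_mover(changes))  # -> name_resolution
```

Recording the answer explicitly, rather than trusting the loudest alert, is what keeps the next step pointed at the right layer.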

Recover the path without widening the blast radius.

  • Prove the break is centered in Storage Replica before editing the next layer down the dependency chain.
  • Correct the narrowest failing state first, then retest from the same path that originally failed.
  • Re-register, reload, or restart only the component tied to the failing dependency rather than stacking broad changes together.
  • Validate with a second client, site, or node so the fix is not limited to one warm cache or one host.
  • Capture the final health evidence and the triggering condition so the next incident starts from facts instead of memory.
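The validation bullet above can be made mechanical: refuse to call the fix proven until at least two independent vantage points report success. A minimal sketch, with invented vantage names and results (in practice each boolean would be the outcome of rerunning the original failing test from that client, site, or node):

```python
def fix_is_proven(results):
    """Accept the fix only when at least two independent vantage points
    (different client, site, or node) report a passing retest."""
    passing = {vantage for vantage, ok in results.items() if ok}
    return len(passing) >= 2

# Illustrative retest results after the correction.
retests = {"node-a": True, "node-b": True, "remote-site-client": False}
print(fix_is_proven(retests))  # -> True
```

A single passing vantage may only be exercising a warm cache; two independent ones make it much harder for the "fix" to be an artifact of one host.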