
The Hybrid DNS Pattern That Survives Cutover Night

Cutover night has a predictable failure mode.

The hub is up. The VPN is stable. The Private Endpoint is approved. Azure SQL has public access disabled. Everyone is ready to celebrate.

Then someone runs one command:

$ nslookup azsql1.database.windows.net

And the answer comes back public.

No outage banner. No firewall drop. No routing alarm.

Just DNS quietly sending your "private" traffic toward the public world.

This post documents the Hybrid DNS pattern we standardize on for UKLifeLabs-style landing zones. Central Private DNS zones in the hub. Azure DNS Private Resolver as the control plane. A design you can explain in 30 seconds and validate in 60.

Production Proven

This pattern is running in production across enterprise landing zones:

12+ landing zones
50K+ DNS queries per day
0 cutover failures

TL;DR

Centralize Private DNS zones in your hub subscription. Use Azure DNS Private Resolver (not VMs). One zone per service, never per team. Validate in 60 seconds with nslookup. Get the Terraform code →


What We Are Solving

We want on-prem workloads and Azure spoke workloads to resolve Azure PaaS service names to Private Endpoint IPs, without:

Non-negotiable rule

If the service is private, the name must always resolve to a private IP.
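The rule can be sketched as a one-line check. This is a minimal illustration using only the Python standard library; the IP addresses are invented for the example (`10.1.2.4` standing in for a Private Endpoint NIC, `40.120.23.1` for a public Azure SQL gateway).

```python
# "Private means private": the DNS answer for a private service must be a
# private address. Addresses below are illustrative, not real deployments.
import ipaddress

def resolves_privately(answer_ip: str) -> bool:
    """Return True if the DNS answer is a private (e.g. RFC 1918) address."""
    return ipaddress.ip_address(answer_ip).is_private

print(resolves_privately("10.1.2.4"))    # Private Endpoint answer -> True
print(resolves_privately("40.120.23.1")) # public gateway answer   -> False
```

This is exactly the check the 60-second validation test performs by eye: if the answer fails it, cutover night fails with it.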

Challenge: DNS Ownership Breaks Private Endpoint Designs

The most common failure is not networking. It is ownership.

When every team creates its own Private DNS zones, you get:

Key takeaway

Private Endpoints are a network feature, but DNS is an operating-model decision.

Decision 1: Centralize Private DNS Zones in the Hub

All Private DNS zones live in the Hub (Connectivity) subscription or resource group.

Example:

Why this works:

Rule

One zone per service. Never one zone per team.

Decision 2: Use Azure DNS Private Resolver (No DNS VMs)

We deliberately avoid DNS forwarder VMs.

Instead we use:

Benefits:

Key takeaway

DNS should not be a custom VM workload.

Target Architecture

Topology: On-prem → Hub → Spoke

Hybrid DNS Architecture diagram: on-premises DNS queries flow through Azure DNS Private Resolver to resolve Private Endpoint addresses.

The Mental Model: The Airport Analogy

If the diagram above feels complex, use this mental model. Think of your Azure Hub not as a network, but as an international airport.

The big question is always the same: when someone asks for a name, who answers it, and where does the question travel?

Cast of Characters

The Inbound Endpoint (Arrival Gate)

This is where questions from on-premises "land" in Azure. It receives queries asking "Where is that private SQL database?"

The Outbound Endpoint (Departure Gate)

This is where questions from Azure "fly out" to on-premises. It sends queries asking "Where is that legacy mainframe app?"

Private DNS Zone (The Phonebook)

Azure's internal directory. If the name is in this book (e.g., privatelink.database.windows.net), Azure answers instantly with a private IP.

Hub (Connectivity)

Spoke (Workload)

Workflow: Cutover-Safe Resolution Path

  1. Client queries azsql1.database.windows.net via on-prem DNS
  2. On-prem DNS conditionally forwards database.windows.net to Resolver inbound endpoint
  3. Inbound endpoint hands query to DNS Private Resolver
  4. Resolver determines Private Link CNAME
  5. Resolver queries privatelink.database.windows.net
  6. Private DNS zone returns Private Endpoint IP
  7. Resolver returns answer to on-prem DNS
  8. On-prem DNS returns private IP to client
  9. Client connects privately to Azure SQL via Private Endpoint
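The name-resolution steps above (4 through 6) can be modeled as a toy resolver: the public name CNAMEs to a privatelink name, which the hub's Private DNS zone answers with the Private Endpoint IP. All records below are invented for illustration.

```python
# Toy model of the cutover-safe resolution path. Records are illustrative.
PUBLIC_CNAMES = {
    # Step 4: Azure SQL publishes a Private Link CNAME for the public name.
    "azsql1.database.windows.net": "azsql1.privatelink.database.windows.net",
}
PRIVATE_ZONE_A_RECORDS = {
    # Steps 5-6: the hub's Private DNS zone holds the Private Endpoint IP.
    "azsql1.privatelink.database.windows.net": "10.1.2.4",
}

def resolve(name: str) -> str:
    """Follow the Private Link CNAME, then answer from the private zone."""
    name = PUBLIC_CNAMES.get(name, name)
    return PRIVATE_ZONE_A_RECORDS[name]

print(resolve("azsql1.database.windows.net"))  # 10.1.2.4
```

The key property: the client never changes the name it asks for. Only who answers, and with what record, changes.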

Result

Same name. Private IP. Private path.

The 3 DNS Flows

There are only three stories that matter. If you can trace these, you can troubleshoot anything.

Flow 1: On-prem asks for Azure Private Name

Flow 1 Diagram: On-prem to Azure

Flow 2: Azure VM asks for Azure Private Name

Flow 2 Diagram: Azure to Azure

Flow 3: Azure VM asks for On-prem Name

Flow 3 Diagram: Azure to On-prem

Alternative Configuration: All-In-One Path (Option B)

In some strict designs, you might point Spoke VMs directly to the Inbound Endpoint IP as their custom DNS server.

Option B diagram: Spoke workloads use the Inbound Endpoint (10.0.0.8) as their direct DNS server.

Trade-off: This gives you one DNS IP for everything, but adds a dependency on the hub path for all queries (even internal VNet ones).

Why This Survives Cutover Night

Common Failure Modes

Watch out for these

60-Second Validation Test

From on-prem:

$ nslookup azsql1.database.windows.net

Expected:

A private IP (the Private Endpoint address), reached via a CNAME through privatelink.database.windows.net.

If you get a public IP, DNS is broken.
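If you run this check often, it is worth scripting. The sketch below parses nslookup-style text and flags any public answer; the sample output is invented, and real nslookup formatting varies by platform, so treat the parsing as an assumption to adapt.

```python
# Automate the 60-second test: given nslookup output, confirm every
# answer address is private. Sample output below is illustrative.
import ipaddress
import re

def answers_are_private(nslookup_output: str) -> bool:
    # Skip the "Server"/"Address" lines describing the resolver itself,
    # then check every IPv4 address in the answer section.
    answer_section = nslookup_output.split("Name:", 1)[-1]
    ips = re.findall(r"\b(?:\d{1,3}\.){3}\d{1,3}\b", answer_section)
    return bool(ips) and all(ipaddress.ip_address(ip).is_private for ip in ips)

sample = """Server:  10.0.0.8
Address: 10.0.0.8#53

Name:    azsql1.privatelink.database.windows.net
Address: 10.1.2.4
"""
print(answers_are_private(sample))  # True
```

Wired into a pipeline, this turns the cutover-night spot check into a standing gate: any deployment that lets a private service resolve publicly fails before anyone celebrates.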

Operating Model

Platform Team

Application Teams

Reference Implementation

A working Terraform-based reference implementation is available here:

https://github.com/appliedailearner/privatednsresolver


Closing Thought

Private Link is not the hard part.

Making DNS boring is.

This pattern does exactly that.


What's Next?

Get the Code

Production-ready Terraform implementation

View on GitHub →

Related Patterns

Private Link best practices

Download Presentation →
