Connect Azure Stack to a public domain / IP

I managed to get Azure Stack running on a public URL. To help you understand the TP2 architecture and the components used, I will explain step by step how things were done. For the networking part you need BGP configured on your switch: if you connect Azure Stack to the internet, the software load balancer (SLB) and the gateway VM will advertise both public and private IPs.

To route public and private IPs to the correct next hops in my network, I implemented source-based routing (also known as Policy-Based Routing, PBR). With this mechanism I can define one policy for private source IPs and one for public source IPs: when the source IP is public I route it to the internet gateway, and when it is private I route it to the NAT gateway. I used two HPE 5900s as my leaf switches and an HPE 7904 as the spine switch. The BGP peering with the SLB is done on the leaf switch.

My core switch has BGP peering with both of the leaf switches in my MAS rack. As the POC host only has one NIC connected, we will focus on the leaf switch that is connected to our host. This is the BGP setup on the core router in my lab:

[Screenshot: BGP configuration on the core router]

10.1.254.21 and .23 are the loopback IPs on the leaf switches. Below you see the policy-based routing configuration including the ACLs. Note that private IPs follow the default route in the routing table, while for the public IPs I defined a next hop in the policy-based route (apply next-hop x.x.x.x):

[Screenshots: policy-based routing configuration and ACLs]
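Purely for illustration, here is a minimal Comware-style sketch of the shape of such a policy. Every value in it is made up (the ACL numbers, the example ranges 10.1.123.0/24 and 203.0.113.32/27, the next hop, and the interface); your ranges and gateways will differ:

# ACL matching the private VIP range (example range)
acl advanced 3000
 rule 0 permit ip source 10.1.123.0 0.0.0.255
# ACL matching the public VIP range (example range)
acl advanced 3001
 rule 0 permit ip source 203.0.113.32 0.0.0.31
# Private sources match node 10 but get no apply clause,
# so they simply follow the routing table and its default route
policy-based-route MAS-PBR permit node 10
 if-match acl 3000
# Public sources are forced to the internet gateway
policy-based-route MAS-PBR permit node 20
 if-match acl 3001
 apply next-hop 198.51.100.1
# Bind the policy to the inbound interface
interface Vlan-interface200
 ip policy-based-route MAS-PBR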

Here is the BGP setup on my leaf switch:

[Screenshot: BGP configuration on the leaf switch]

You might have noticed that I couldn't use the loopback for peering, as the deployment script in TP2 didn't take that into account, so the peering is simply from the SLB IP to the switch IP. I also found out that in TP2 peering is not established on the TRANSIT network but on the HNV PA network.
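Again purely as an illustration, the leaf side of that peering could look something like this in Comware. The AS numbers and addresses are invented; the point is that the neighbor is the SLB's address on the HNV PA network rather than a loopback:

# BGP on the leaf switch, peering with the SLB on the HNV PA network
bgp 64910
 router-id 10.1.254.21
 peer 10.1.132.4 as-number 65050
 address-family ipv4 unicast
  peer 10.1.132.4 enable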

Here you can see that once some tenants created public IPs, those IPs and the portal are advertised on the network by the SLB:

[Screenshot: public IPs and the portal VIP advertised by the SLB]

OK, let's get started and get TP2 running on public IPs.

But before we continue: this is not supported in any way, as I also stated in my TP1 public URL blog post. Since we are running a technical preview, I don't know what bugs I might have exposed to the internet. My environment is sealed off and used for labs only. Be warned!

First I copied the Configuration folder from C:\CloudDeployment on the host to my own laptop and started editing IPs and hostnames there. I did it this way because I learned that I had to reinstall my environment a couple of times to get it right; when I need to reinstall in the future, I only have to copy this folder back and overwrite the reinstalled host's folder with my version.
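Something like the following, run from the laptop, does the trick. The host name and destination path are placeholders:

# Grab a working copy of the deployment configuration from the Azure Stack host
Copy-Item -Path '\\<azurestack-host>\C$\CloudDeployment\Configuration' -Destination 'C:\AzureStackConfig' -Recurse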

In my opinion there are two scenarios for making Azure Stack available on the network:

1. Like Azure. The portal, ARM, and the public IPs assigned to VIPs and VMs are exposed directly to the internet.

2. Internal only. The stack is connected to the internal network and, as the POC scenario requires, to Azure AD for authentication. See also this guide, which might make it a bit easier to follow.

In this post I am going to describe option 1. For the second scenario, the only things you need to leave at their defaults are the parts where I configure 'EnableOutboundNat' = False.

To get an understanding of the networking in Azure Stack, I created an Excel sheet mapping all the IPs in a default installation:

[Screenshot: Excel sheet mapping the IPs of a default installation]

I also worked out that the list below contains the URLs Azure Stack uses, already mapped to my new public IP range:

[Screenshot: Azure Stack URLs mapped to the new public IP range]

To summarize, my network mappings are:

Network        Current (default)    New
Management     192.168.200.0/24     10.1.111.0/24
SMB Data 1     192.168.100.0/27     10.1.121.0/24
PA (Internal)  192.168.101.0/26     10.1.132.0/22
Transit        192.168.104.0/25     10.1.249.128/27
Deployment     10.1.1.128/27        N/A
Public VIP     192.168.102.0/24     x5.1xx.1x4.32/27
Private VIP    192.168.105.0/27     10.1.123.0/24

Once I had these mappings sorted out, I started changing the configuration files. I opened Visual Studio Code and pointed it at the configuration folder root; my advice is to do the same:

[Screenshot: configuration folder opened in Visual Studio Code]
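Before editing anything, it can help to sweep the whole folder for the default subnets so you don't miss an occurrence. A minimal PowerShell sketch, assuming the folder was copied to C:\AzureStackConfig (the path and the subnet list are examples taken from my mapping table):

# List every file, line number, and line that still references a default subnet
$defaults = '192\.168\.200\.', '192\.168\.100\.', '192\.168\.101\.',
            '192\.168\.102\.', '192\.168\.104\.', '192\.168\.105\.'
Get-ChildItem -Path 'C:\AzureStackConfig' -Recurse -Include *.xml, *.ps1 |
    Select-String -Pattern $defaults |
    Select-Object Path, LineNumber, Line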

I opened New-OneNodeManifest.ps1 and changed DOMAINNAMEFQDN to my public domain name. That was the simplest one.

Now I can also easily navigate between the XML files, as we have to modify a few… 🙂 Let's start with OneNodeCustomerConfigTemplate.xml.

This is where the VLANs and network subnets for Storage, Management, HNV (PA), External VIP, Internal VIP, and ManagementAPP are defined:

[Screenshot: VLANs and network subnets in OneNodeCustomerConfigTemplate.xml]

Be aware that some gateway IPs end with .0. If you are using a /24 subnet that's fine, but look at my external range, where I use a /27 starting at the .32 subnet ID. There I used the subnet ID itself as the gateway address, and that seemed to work for me. Also take into account that all default gateways should match your switch interface IPs. The whole stack is initially deployed on an internal virtual switch; in the end we remove the internal gateway and replace it with the physical switch.

Now we head over to the Infrastructure folder under Roles, where you see the folders for the infrastructure components. Some of them contain a OneNodeRole.xml; all the files I had to change were OneNodeRole.xml files, never a Role.xml. Under BareMetal you'll find a OneNodeRole.xml. Look at the file and you'll notice IP addresses that need to be changed. Change them according to your network mapping scheme:

[Screenshot: IP addresses in BareMetal\OneNodeRole.xml]

Next, go over to the Storage folder, where the cluster networks need to be changed:

[Screenshot: cluster networks in the Storage folder]

That was the infrastructure part. Now we need to head over to the Fabric folder.

In the ADFS folder, edit lines 22 and 30 of the OneNodeRole.xml. Remember that if you are using public IPs, 'EnableOutboundNat' should be 'false' (option 1 we discussed earlier). This file sets the ADFS and Graph external IPs:

[Screenshot: ADFS and Graph external IPs in ADFS\OneNodeRole.xml]

Save the file and head over to the ASQL folder, where we need to specify the cluster IP and the cluster resource IPs. Change lines 67, 69, and 76 to match your network.

[Screenshot: cluster IPs in ASQL\OneNodeRole.xml]

In the next folder (BGP), edit the VPN range IPs (line 12). This is less important, as we shut down the BGPNATVM later on.

[Screenshot: VPN IP range in BGP\OneNodeRole.xml]

Next up is the FabricRingServices folder, which contains many subfolders. We only need to update the OneNodeRole.xml files in the root and in the XRP folder.

In the root, edit lines 199 and 248. Be aware that the first is the internal VIP and the second the external VIP.

[Screenshots: internal VIP setting (line 199) and external VIP setting (line 248)]

In the XRP folder, edit lines 14 and 40 of the OneNodeRole.xml to match your public domain:

[Screenshot: public domain entries in XRP\OneNodeRole.xml]

Under the Gateway folder, edit line 27 of the OneNodeRole.xml to match your external IP:

[Screenshot: external IP in Gateway\OneNodeRole.xml]

One folder down we have the IdentityProvider folder, again with a OneNodeRole.xml file. Edit line 10 to match your public domain name:

[Screenshot: public domain name in IdentityProvider\OneNodeRole.xml]

In the KeyVault folder, edit lines 12, 31, and 41 of the OneNodeRole.xml file to match your public domain and your IP network:

[Screenshot: domain and IP entries in KeyVault\OneNodeRole.xml]

Next, in the NC folder, edit lines 38 and 40 of the OneNodeRole.xml file to match your network settings:

[Screenshot: network settings in NC\OneNodeRole.xml]

Notice line 40: here you need to specify the gateway IP for the internal VIP network. This will be assigned to the SLB and does not need to be an interface on your switch!

The next file needs a lot of edits. Open the OneNodeRole.xml in the VirtualMachine folder.

Change all the VM IP subnets in this file. In my case I left the host parts of the node IPs the same and just changed the first three octets.

[Screenshot: VM IP addresses in VirtualMachine\OneNodeRole.xml]
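Because it is the same first-three-octets substitution over and over, you could also script this part. A hedged sketch, assuming the management subnet moves from 192.168.200.0/24 to 10.1.111.0/24 as in my mapping table (the file path is an example, continuing from the copy earlier):

# Rewrite the first three octets but keep the host part of each address
$file = 'C:\AzureStackConfig\Roles\Fabric\VirtualMachine\OneNodeRole.xml'
(Get-Content $file) -replace '192\.168\.200\.', '10.1.111.' | Set-Content $file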

Next is the WAS folder. Open the OneNodeRole.xml and edit lines 36 and 52 with your network settings. In the same file, replace the domain name with yours on lines 96, 97, 118, 133, and 142:

[Screenshots: network and domain name settings in WAS\OneNodeRole.xml]

The last folder we need to edit is the WOSS folder. Edit line 60 to match your new external IP network.

[Screenshot: external IP network in WOSS\OneNodeRole.xml]

So those were all the changes. Now copy the Configuration folder from your machine back to the Azure Stack host and overwrite when prompted.
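A sketch from the laptop, mirroring the earlier copy but in reverse (the host name is again a placeholder):

# Push the edited configuration back and overwrite the defaults
Copy-Item -Path 'C:\AzureStackConfig\*' -Destination '\\<azurestack-host>\C$\CloudDeployment\Configuration' -Recurse -Force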

Then just follow the regular installation. If the host cannot find the domain when it tries to join, edit the hosts file and point your public domain name to the DC VM's IP.
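A sketch of that hosts file entry, run on the host; the DC IP and the domain are placeholders for your own values:

# Point the public domain name at the DC VM so the domain join can resolve it
Add-Content -Path "$env:windir\System32\drivers\etc\hosts" -Value "10.1.111.11 yourpublicdomain.com"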

When the deployment is finished, stop the BGPNATVM and enable the second network adapter. I renamed mine to SdnSwitch-Nic and reconfigured the SdnSwitch virtual switch to map to the adapter I just enabled:

# Stop the BGP/NAT VM; the physical switch takes over its role
Stop-VM -Name MAS-NATBGP01

# Rebind the SDN virtual switch to the physical NIC
Set-VMSwitch -Name SdnSwitch -NetAdapterName SdnSwitch-Nic
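If you still need to enable and rename that second adapter first, a sketch (the original adapter name 'Ethernet 2' is an assumption):

# Enable the second physical NIC and give it the name used above
Enable-NetAdapter -Name 'Ethernet 2'
Rename-NetAdapter -Name 'Ethernet 2' -NewName 'SdnSwitch-Nic'
# Verify the virtual switch is now bound to the physical adapter
Get-VMSwitch -Name SdnSwitch | Format-List Name, NetAdapterInterfaceDescription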

When everything is correct, you should see that the peering is established, that routes are published on the leaf switch, and that they are in turn advertised to the spine/core switch.

While the installation is running you can already configure your DNS using the table listed above. Then copy the root CA certificate from \\MAS-CA01\C$\Windows\System32\CertSrv\CertEnroll to your local laptop, trust it, and you're good to go to reach the portal from outside!
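For completeness, a sketch of both closing steps in PowerShell. The DNS record assumes a Windows DNS server; the record name, zone, and IP are placeholders, and the certificate import drops the root CA into the local machine's trusted roots:

# Example DNS record for one of the endpoints from the table above
Add-DnsServerResourceRecordA -ZoneName 'yourpublicdomain.com' -Name 'portal' -IPv4Address '203.0.113.35'

# Copy the root CA from the CA VM and trust it on the laptop
Copy-Item '\\MAS-CA01\C$\Windows\System32\CertSrv\CertEnroll\*.crt' -Destination $env:TEMP
Get-ChildItem "$env:TEMP\*.crt" | ForEach-Object {
    Import-Certificate -FilePath $_.FullName -CertStoreLocation Cert:\LocalMachine\Root
}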

[Screenshot: the Azure Stack portal reached from the internet]

I also got site-to-site VPN working with a VNet in Azure:

[Screenshot: site-to-site VPN connection to an Azure VNet]

More about that soon in an upcoming video!

Spread the word. Share this post!