In this post I want to explain how I got my POC environment running on a public URL. What do we need for this? Because I had some strange issues with my NATVM (I suspect because of the involvement of the network controller), I needed 8 public IP addresses, a small modification to the hosts file on my Hyper-V host, and a script setting in the POC deployment setup.
Let me first emphasize… I like to explore the cutting edge of this technology. It took me quite a lot of time and I learned a lot about the inner workings of Azure Stack. This is not supported at all: whatever bugs might be in the code, I just happily exposed them to the internet. For me the risk is minimal, because this environment is sealed off with a dedicated network. Be careful in your own environment, and let me state it again: this is at your own risk and not supported in any way! Oh, and for those who want to test it in nested Hyper-V… that won't work; you will end up with blue screens.
So enough disclaimers and warnings, you probably want to get started. First prepare the Hyper-V host with Windows Server 2016 and download the bits. We need to set the AD domain to match the public domain name. In the AzureStackPOCVhd there is a file called Test-AzureStackDeploymentParameters. Open it, search for azurestack.local and change it to your domain name:
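The find-and-replace above can also be done from PowerShell; a minimal sketch, assuming the file sits in the extraction folder of the POC VHD (path and extension are assumptions, adjust them to your setup):

```powershell
# Replace the default internal domain with your public domain name.
# The path is an assumption based on where the POC bits were extracted.
$file = 'C:\AzureStackPOCVhd\Test-AzureStackDeploymentParameters.ps1'
(Get-Content $file) -replace 'azurestack\.local', 'azurestack.nl' |
    Set-Content $file
```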
Because we used a public domain and need internet access for AAD, once the domain join of the Hyper-V host starts it will look up the internet domain and resolve its public IP address. To avoid this, open the hosts file and add this line:
192.168.100.2 public.fqdn (in my case it is azurestack.nl)
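Instead of editing the hosts file by hand, you can append the entry from an elevated PowerShell prompt (replace azurestack.nl with your own domain):

```powershell
# Pin the public FQDN to the internal address so the domain join
# does not resolve the public IP from the internet instead.
Add-Content -Path "$env:windir\System32\drivers\etc\hosts" `
    -Value "192.168.100.2 azurestack.nl"
```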
I have the luxury of a dedicated public internet VLAN, but my host is on an internal network. So I had an unsupported trunk setup to my single NIC and tagged the MGT VLAN on the NIC itself. With that I am compliant with the requirements for installing Azure Stack. Assuming all your disks and other requirements are in place, run Deploy-AzureStack.ps1:
This will deploy my NATVM directly in the public VLAN that I allowed on my VLAN trunk to the host. After the step of deploying the ADVM, while trying to create the cluster and SOFS, I got an error. I used iSCSI on the internal network, and my vNIC CCI_External_vSwitch was untagged by the creation of the vSwitch. So I first had to remove the VLAN from the physical NIC (where I had initially put it) to allow all VLANs tagged into the vSwitch. Then I used these commands to assign the MGT VLAN to my host MGT vNIC:
$vnic = Get-VMNetworkAdapter -ManagementOS -Name CCI_External_vSwitch
Set-VMNetworkAdapterVlan -VMNetworkAdapter $vnic -Access -VlanId 21
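For completeness, the untagging of the physical NIC mentioned above can also be done from PowerShell, and you can verify the vNIC tag afterwards (the adapter name 'Ethernet' is an assumption; check yours with Get-NetAdapter):

```powershell
# Remove the VLAN tag from the physical NIC so the trunk passes all
# tagged VLANs into the vSwitch ('Ethernet' is an assumed adapter name).
Set-NetAdapter -Name 'Ethernet' -VlanID 0

# Verify the management vNIC now carries the MGT VLAN tag.
Get-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName CCI_External_vSwitch
```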
So now I am back in the unsupported scenario where I use 2 VLANs for my Azure Stack deployment: VLAN 21 is my management network and VLAN 150 is the internet VLAN.
Now the deployment can be continued by running Deploy-AzureStack.ps1 again with the parameters you specified earlier.
If your deployment continues without an error, just make sure you change the network before the NATVM is deployed. This can be done after the restart, when the host has joined the domain. Otherwise you won't have internet access inside the virtual network later on.
So after a couple of hours my deployment was finished. The first thing I always do before making any modification is to make sure the stack is running as it should: make sure I can access the portal, and so on.
Now the fun part begins. To start mapping URLs to VIPs, I opened the DNS console on the ADVM. All portal services are directed to VIPs in the 192.168.133.0 range; I counted five of them.
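Rather than clicking through the DNS console, you can also list the records pointing at the VIP range from a PowerShell session on the ADVM; a sketch, assuming the zone carries your public domain name:

```powershell
# List all A records in the zone that resolve into the 192.168.133.0 VIP range.
# 'azurestack.nl' is the example domain; substitute your own zone name.
Get-DnsServerResourceRecord -ZoneName 'azurestack.nl' -RRType A |
    Where-Object { $_.RecordData.IPv4Address -like '192.168.133.*' } |
    Select-Object HostName, @{ n = 'IP'; e = { $_.RecordData.IPv4Address } }
```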
Next I logged in to the NATVM (and if you were stubborn and used nested virtualization, this is where it goes blue). Open the RRAS management console and expand NATVM –> IPv4 –> NAT. Right-click on Ethernet 2 and choose Properties. First add the public IP range as a pool to the adapter, then click on Reservations and create the mappings to the VIPs:
You see I added 3 extra mappings. I did this because as soon as I added the range, internet access for the other internal VMs stopped working. In my opinion I needed at least 3 VMs with an internet connection, so I gave them a public IP mapping, but only for outbound connectivity. You can also see that all VIPs have "Allow incoming sessions" checked. This saves me the hassle of creating all the individual port mappings I would otherwise need.
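The same pool and reservations can be scripted on the NATVM with netsh instead of the RRAS console; a sketch, where the interface name and all addresses are placeholders (documentation IPs), so use your own public pool and VIPs:

```powershell
# Add the public address range as a pool on the NAT interface
# ('Ethernet 2' and the 203.0.113.x addresses are placeholders).
netsh routing ip nat add addressrange "Ethernet 2" start=203.0.113.10 end=203.0.113.17 mask=255.255.255.248

# Reserve a public IP for one VIP and allow inbound sessions to it.
netsh routing ip nat add addressmapping "Ethernet 2" public=203.0.113.10 private=192.168.133.2 inboundsessions=enable
```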
Go to your DNS provider to add all the URLs:
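As an illustration, the records at your provider end up looking something like this (hostnames and public IPs here are placeholders; use the names you found in the ADVM DNS and the public IPs you reserved on the NATVM):

```
; example A records -- placeholder hostnames and documentation IPs
portal.azurestack.nl.    IN A 203.0.113.10
api.azurestack.nl.       IN A 203.0.113.11
```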
Now let’s see if we did a good job. First copy the root CA certificate from the ADVM to your local machine and import it into the Trusted Root Certification Authorities store:
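The import can also be done from an elevated PowerShell prompt on your local machine (the file path is an assumption for wherever you copied the certificate from the ADVM):

```powershell
# Import the Azure Stack root CA into the machine-wide trusted root store.
# The .cer path is an assumed location for the copied certificate.
Import-Certificate -FilePath 'C:\temp\AzureStackRootCA.cer' `
    -CertStoreLocation 'Cert:\LocalMachine\Root'
```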
Finally the moment of truth. Go to your external URL and try to log in:
I managed to create storage accounts, VMs and a VNET, but the public IP on the VM is an internal one, so unfortunately I couldn’t connect to the VM from outside. Storage is also working, with Storage Explorer:
Have fun using Azure Stack on a public URL!