In the previous post we went through the process of deploying a Bicep template using parameters that were passed directly from PowerShell.
This is fine when deploying resources that require only a few values to be set, but it can become difficult to manage when there are a lot of parameters, such as when deploying virtual machines or web apps.
A parameter file for Bicep is a JSON file that has the specific parameter names and values set.
I used this Microsoft document as a reference to create the parameter file.
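As a sketch, a minimal parameter file looks like the following (the parameter names and values here are placeholders; yours will match the parameters defined in your Bicep template):

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "vmName": {
      "value": "TestVM01"
    },
    "adminUserName": {
      "value": "vmadmin"
    }
  }
}
```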
Next we will create a template to deploy a virtual machine and its network interface. To create the base template in Visual Studio Code, type
res-nic to populate the network interface.
Use res-vm-windows to populate the virtual machine.
I will be creating parameters for each part that requires customization and leaving hardcoded values for anything that doesn't. The @description parameter decorator allows us to add a description of what the parameter is used for.
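For example, a parameter with a @description decorator looks like this (the names and default value are illustrative, not the exact ones from my template):

```bicep
@description('Name of the virtual machine')
param vmName string

@description('Size of the virtual machine')
param vmSize string = 'Standard_B2s'
```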
I created two variables, vnetId and subnetRef, that are used by the network interface.
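A rough sketch of those variables, assuming the virtual network ID and subnet name are passed in as parameters:

```bicep
param virtualNetworkId string
param subnetName string

// Build the subnet resource ID from the vnet ID and subnet name
var vnetId = virtualNetworkId
var subnetRef = '${vnetId}/subnets/${subnetName}'
```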
Below is what the updated virtual machine resource template looks like.
Once we have the Bicep template file ready the next step is to configure the parameter file. I copied the default template file code from the above Microsoft document and added in each of the required parameters.
To get the virtual network ID that will be used in the parameters file go to the virtual network and click on properties > Resource ID.
Once we have that, we can fill out the rest of the parameter values.
After the template file has been configured, we can test the deployment the same way as the storage account and use -WhatIf to confirm there are no errors.
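The test deployment can be sketched like this (the resource group and file names are placeholders):

```powershell
New-AzResourceGroupDeployment -ResourceGroupName "VM-Test-RG" `
  -TemplateFile .\vm.bicep `
  -TemplateParameterFile .\vm.parameters.json `
  -WhatIf
```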
As I have not set the admin password in the template or parameter file the deployment will prompt for the admin password to be set on the VM.
If the test deployment comes back without any issues, we can check the results from the -WhatIf output to confirm all the values are correct.
Since the template and parameter files have returned no errors, we are ready to run the deployment and create the new VM resource.
If we check the resource group the new VM, OSDisk and network interface should be created.
Now that we have the template and parameter file working, we can just create a new parameter file for each VM resource. We can now create fully customized VMs pretty quickly, instead of deploying through the Azure Marketplace and manually selecting the options we want to set.
In a previous post we went through deploying resources using ARM templates. I have recently been working on creating some new templates to be used for additional deployments and decided to use Bicep templates instead, as it seems to be an easier and more straightforward way of writing template files to deploy resources in Azure.
Bicep uses a simpler syntax than creating templates using JSON; the syntax is declarative and specifies which resources and resource properties you want to deploy.
Below is a comparison of a JSON and Bicep template file to create a storage account.
We can see that the Bicep file is an easier format to read and create.
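As a rough illustration, a minimal Bicep storage account resource looks something like this (the name, API version and SKU are placeholders):

```bicep
resource storageAccount 'Microsoft.Storage/storageAccounts@2021-04-01' = {
  name: 'sleepystoragetest01'
  location: resourceGroup().location
  sku: {
    name: 'Standard_LRS'
  }
  kind: 'StorageV2'
}
```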
To get started with Bicep we will first be installing the Bicep extension to Visual Studio Code so we can edit and modify Bicep templates.
The extension adds intellisense which makes it easier than looking up the full syntax for each resource.
To create a new template, click on New File in VS Code and select Bicep as the language.
To create a template using IntelliSense, start typing either res- followed by the resource type, or the resource type itself.
Click on the resource to pre-populate the template.
The values can be hardcoded in the template file, but this would require a manual change each time the template is run.
We can add parameters to the template to make it more reusable. There are a few different types of parameters that can be used.
Below is a brief description of each type:
string: a single line value
array: allows multiple values
bool: true or false
object: allows property types
Microsoft has a more in-depth document on the different data types.
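To illustrate, here is one parameter of each type (all names and defaults are made up for the example):

```bicep
param storageAccountName string        // string: single line value

param dnsServers array = [             // array: allows multiple values
  '10.0.0.4'
  '10.0.0.5'
]

param enableDiagnostics bool = true    // bool: true or false

param tags object = {                  // object: allows property types
  environment: 'test'
}
```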
Once connected we can use New-AzResourceGroupDeployment to deploy the template and specify the parameter values. To test the deployment for errors without running an actual deployment we can use the -WhatIf parameter.
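As a sketch, assuming a template with a storageAccountName parameter, the -WhatIf test and the actual deployment look like this (resource group and file names are placeholders):

```powershell
# Test the deployment without making changes
New-AzResourceGroupDeployment -ResourceGroupName "Storage-RG" `
  -TemplateFile .\storage.bicep `
  -storageAccountName "sleepystoragetest01" `
  -WhatIf

# Run the actual deployment
New-AzResourceGroupDeployment -ResourceGroupName "Storage-RG" `
  -TemplateFile .\storage.bicep `
  -storageAccountName "sleepystoragetest01"
```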
During a recent project I have been deploying new VMs to Azure. When trying to configure Azure VM backup, I was getting a failure at the take snapshot stage.
The error that showed in the reason was UserErrorRequestDisallowedByPolicy.
This was being caused by a policy that one of the Azure admins had set up to require tags to be configured on resource groups. When an initial backup runs, it creates a resource group to save the restore point collection to, and it is this resource group that was being blocked by the Azure tag policy.
To view the policy details we can go to Policy > Assignments.
Click on the policy to view the parameters.
There are two options to work around this issue: either change the policy from a Deny effect to a Modify effect, or create the resource group manually.
I will be creating the resource group manually, as I am not that familiar with creating custom policies yet and this was the quicker workaround.
Below is the link to the Microsoft document on creating a manual resource group for the restore point collection.
Here are the steps I took to get around this by manually creating the resource group that will be used for the backup.
The resource group name needs to end with 1 as the starting number; in my case I used TheSleepyAdmin_Backup_RG1.
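The resource group can also be created from PowerShell, and since the tag policy is what blocked the backup in the first place, the required tags can be added at creation time (the tag names and location here are examples; use whatever your policy requires):

```powershell
New-AzResourceGroup -Name "TheSleepyAdmin_Backup_RG1" -Location "northeurope" `
  -Tag @{ Environment = "Prod"; Owner = "TheSleepyAdmin" }
```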
In the backup policy we specify the new resource group. Go to Azure Backup center > Backup policies.
Put in the name of the resource group we created manually, without the number. In my case this was TheSleepyAdmin_Backup_RG.
Wait for the policy update to complete.
Now try the backup again and it should complete.
If we check the resource group we can see that the restore point collection has been created.
Any additional backups should now also be successful. If the resource group becomes full, Azure will try to create a new RG, so there may be a need to create another RG in the future. I will have a look at creating or updating the tag policy to apply a Modify effect instead of a Deny, but that will be in a different post, as it seems like a better long-term solution.
I have recently been looking at using Azure Resource Manager (ARM) templates to deploy and redeploy resources in Azure. I haven't really done a lot with ARM templates, so I thought it might be helpful to do a few test runs and try to figure out how to deploy resources in Azure using ARM templates.
In this post we will be going through creating an ARM template from an existing resource group and what we need to do to redeploy it to a new resource group.
ARM templates are JSON files that define the infrastructure and configuration that will be deployed.
First we are going to export the template from the Azure resource group that we want to redeploy to another resource group.
Logon to the Azure portal and go to resource groups.
Select the resource group that we want to export the template from.
Go to Automation and select export template.
This will bring up the ARM template for the resource group. We can then download the template to modify; I will be using Visual Studio Code with the Azure Resource Manager (ARM) Tools extension added to edit the template.
Once the zip file is downloaded and extracted there will be two JSON files: parameters and template.
When we look at the template file itself there will be a set of parameters. There are default values for each parameter, which are the names of each resource in the resource group; I removed the default values.
The parameters are what is used to define the name of the resources that are created.
When I first started to look at ARM templates they did seem very confusing, but breaking them up into each part instead of looking at them as a whole made it a lot easier for me to understand how the template worked.
If we take the below as an example, this part of the JSON defines the virtual network and subnet to be created. It sets the location, the address prefixes and one subnet for 10.0.0.0/24.
If there are IP addresses assigned or the subnet needs to be changed, this can be updated in the JSON file.
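A sketch of what that section of the template looks like (the parameter name and API version are illustrative):

```json
{
  "type": "Microsoft.Network/virtualNetworks",
  "apiVersion": "2020-11-01",
  "name": "[parameters('virtualNetworks_name')]",
  "location": "[resourceGroup().location]",
  "properties": {
    "addressSpace": {
      "addressPrefixes": [
        "10.0.0.0/24"
      ]
    },
    "subnets": [
      {
        "name": "default",
        "properties": {
          "addressPrefix": "10.0.0.0/24"
        }
      }
    ]
  }
}
```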
Once the JSON file has been modified we can then use it to deploy to Azure. The two ways we will go through in this post are using the Azure portal's Deploy a custom template option, and adding the parameters to the parameters JSON and deploying using PowerShell.
First we will go through the portal deployment.
Logon to the Azure portal and go to deploy from a custom template.
We could search for a template using the quick start templates if we don't have an existing template, but we will be selecting Build your own template.
Once the blade opens, click Load file and select the JSON template file; this will then load the template.
Click Save. This should then show a view like the below where we can manually input the details we want to use for the deployment.
Click next and the ARM template will be validated.
Click create to start the deployment.
When I deployed the template I had some issues with the VM creation.
This was caused by a few different issues. The first was the managed disk, which returned the below error:
“Parameter ‘osDisk.managedDisk.id’ is not allowed.”
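For the PowerShell option mentioned earlier, once the values are filled into the parameters file the deployment can be sketched as follows (the resource group and file names are placeholders):

```powershell
New-AzResourceGroupDeployment -ResourceGroupName "Redeploy-RG" `
  -TemplateFile .\template.json `
  -TemplateParameterFile .\parameters.json
```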
Go to the already configured vault, select Site Recovery and click on Prepare infrastructure. Once the wizard has started, select the required goals. I am not running the planning tool as this is a test, but it is recommended to run it before starting a deployment to verify the required bandwidth. Next we will download the OVA appliance that will be imported to VMware. Once the OVA has been downloaded and imported to VMware, on boot up the server will require you to read and accept a licence agreement and provide an administrator password.
Give the server a name (this will show up as the configuration server in Azure after the setup has been completed). Next, sign in to the Azure tenant that the server will connect to for replication. Then we go through the configuration steps:
First, set the interface that will be used to connect to on-prem devices and the connection back to Azure; two different NICs can be assigned if required.
Next, configure the Recovery vault that will be used: select the subscription, the recovery vault RG and the Recovery Services vault that has been configured.
Install the MySQL software.
Next a validation test will run. (I am getting a warning for memory and CPU as I didn't have enough memory/CPU and had to edit the VM to run on fewer resources, but it will still complete.)
Next, connect to the vCenter server that is running the VMs that are to be replicated to Azure.
The last step will configure the configuration server in Azure. Once this has been completed we can go back to the Azure portal and we should now see the configuration server show under the prepare infrastructure setup.
Select the subscription and deployment model to be used for failover; I am using Resource Manager. Next, create a replication policy to apply to the ASR configuration server. Once the configuration is done we can protect and replicate our on-prem VMs. Go back to Site Recovery and select Step 1: Replicate Application.
Select the source and source location (the on-prem configuration server), the machine type (physical/virtual), the vCenter (if virtual) and the process server.
Select the subscription, the RG that the VMs will replicate to and the deployment model.
Next select the servers that will be replicated; the VMs must be powered on and running VMware tools to be available for replication, otherwise they will be greyed out.
Select the required disk type and storage account. The last step is to assign the required policy (multiple policies can be created based on the recovery time requirements and retention times).
The last step is to enable replication.
Once enabled, check the Site Recovery jobs to see the progress. Once replication has completed we can create a recovery plan: go to Recovery Plans (Site Recovery) and select Recovery plan. Give the plan a name, select the source, target and deployment type, and select the VMs that will be added to the recovery plan.
With more companies looking to move workloads from on-prem to cloud providers, it can be difficult to work out the cost for current workloads.
In Azure we can utilize Azure Migrate Services.
At the time of this post only VMware is supported; this will be extended to Hyper-V in future releases. VMware VMs must be managed by vCenter Server version 5.5, 6.0, 6.5 or 6.7.
In this post we will be going through the process of assessing the on-prem VMware environment and viewing the assessment report.
The environment we will be assessing is VMware vCenter 6.7 and ESXi 6.7 and a VM running Windows Server 2016.
The architecture of the Azure Migrate Service is shown in the following diagram.
Below is the process:
Create a project: In Azure, create an Azure Migrate project
Discover the machines: Download collector VM as an OVA and import to vCenter
Collect the information: The collector collects VM metadata using VMware PowerCLI cmdlets. Discovery is agentless and doesn’t install anything on VMware hosts or VMs. The collected metadata includes VM information (cores, memory, disks, disk sizes, and network adapters). It also collects performance data for VMs
Assess the project: The metadata is pushed to the Azure Migrate project. You can view it in the Azure portal.
Logon to Azure
Go to All services and search for migration project
Select Create migration project
Give the project a Name, subscription, Resource group & Geography
Select Discover & Assess
Select Discover and Assess, then Discover machines, and download the OVA. The system requirements for the OVA are:
CPU: 8 vCPUs; Memory: 16GB; Hard drive: 80GB
Next step is to import the OVA to VMware
Go to vCenter
Browse to the OVA file location and select it. Select the name and location of the OVA. Select the destination cluster and click Next. Select the destination datastore and specify either a thick or thin provisioned disk. Select the port group that the VM will use. Review and confirm the settings.
Once the OVA is imported, power on the VM
Read and accept the license terms and give the collector an admin password.
Log into the VM and run the connector utility on the desktop.
Go through the prerequisite checks
Next step is to connect to the vCenter. Put in the vCenter IP or hostname and the username/password, and once connected select the cluster or host that is currently running the VMs that need to be assessed for migration.
Next step is to connect back to Azure using the migration project credentials that were generated when creating the project
Click continue and the last screen will start the discovery; this can take a while to complete (60 minutes or so)
Once the discovery has completed, we then need to click Create assessment
Create a new group and select the VM that will be assessed
Once this has completed go to Overview and click on the assessment
Once in the assessment you can see the readiness of your VMs and the monthly estimated cost for running the workloads
Click on the VM to view specific details and performance stats
We can now see what the cost will be for migrating workloads to Azure and this can be presented to give a better understanding of the cost savings that can be achieved with cloud migrations.
In this post I am going to go through setting up an Azure resource group and VNet, and deploying a basic VM. There are many different VM types that can be deployed.
Below is a table with the current VM types, sizes and description:
B, Dsv3, Dv3: Balanced CPU-to-memory ratio. Ideal for testing and development, small to medium databases, and low to medium traffic web servers.
Fsv2, Fs, F: High CPU-to-memory ratio. Good for medium traffic web servers, network appliances, batch processes, and application servers.
Esv3, Ev3, M, GS, G, DSv2: High memory-to-CPU ratio. Great for relational database servers, medium to large caches, and in-memory analytics.
Storage optimized: High disk throughput and IO. Ideal for Big Data, SQL, and NoSQL databases.
NV, NVv2, NC: Specialized virtual machines targeted for heavy graphic rendering and video editing, as well as model training and inferencing (ND) with deep learning. Available with single or multiple GPUs.
High performance compute: The fastest and most powerful CPU virtual machines, with optional high-throughput network interfaces (RDMA).
The first step in deploying a VM is to create a resource group. A resource group is basically a container object that holds Azure objects like VNets, VMs and any other Azure services that are added to the RG. A RG can be created while deploying a VM, but I prefer to create them beforehand.
Logon to the Azure portal. If the resource groups tab is not showing,
go to All services > Resource Groups
Once on resource groups click on Add
Give the resource group a name, select a subscription and set the location.
The resource group should only take a few seconds to create. Once created you should get an alert.
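The same resource group can be created from PowerShell with a one-liner (the name and location here are examples):

```powershell
New-AzResourceGroup -Name "Test-RG" -Location "northeurope"
```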
Now that there is a resource group, we can move on to the next step, which is to create a new VNet. Go to All services > Virtual networks
Once in Virtual networks, go to Create virtual network. Give the network a name and an IP address space/subnet mask, select the subscription and location, add it to a resource group, and set the IP range that will be available for use.
Once completed the new VNet will show under virtual networks.
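For reference, the equivalent VNet creation in PowerShell would look roughly like this (the names and address ranges are examples):

```powershell
# Define the subnet, then create the VNet with that subnet attached
$subnet = New-AzVirtualNetworkSubnetConfig -Name "default" -AddressPrefix "10.0.0.0/24"

New-AzVirtualNetwork -Name "Test-VNet" -ResourceGroupName "Test-RG" `
  -Location "northeurope" -AddressPrefix "10.0.0.0/16" -Subnet $subnet
```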
The final step is to start creating VMs. Go to All services > Virtual machines
Click on Create new virtual machine
Set the subscription that will be used, the resource group, the VM name and the image type. We can also set availability options for high availability and resilience.
Select the VM size, username and allowed ports.
The next page allows you to change the disks used for the VM (Premium SSD, Standard SSD or Standard HDD). Changing the disk may reset the VM type, so I would usually leave this as is unless there is a specific reason to change it.
Next step is to select the VNet / subnet that will be used for the VM.
There is an auto-shutdown feature in Azure. I like to use this in my lab as it saves credit; since this is only a lab server, I want the VM to shut down at 12AM. I can start the VM up again when I want to do any further testing.
I won't add any guest config or tags, so the last step is to review and validate the VM.
The VM should now deploy. It will take a while; once completed, the VM will show under Virtual Machines.
If we check the resource group, we can now see all the objects contained in the resource group.