When deploying VMs in Azure using templates, we need to be able to check the available VM SKUs and sizes so we can update templates to deploy different OS versions and VM sizes.
There are a few different methods that can be used.
The first is the Microsoft documentation; below is the link to the virtual machine sizes docs.
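We can also check the available sizes and image SKUs with the Az PowerShell module. Below is a minimal sketch, assuming the Az module is installed and you are signed in; the location, publisher and offer values are just examples.

# List the VM sizes available in a region
Get-AzVMSize -Location "northeurope" | Select-Object Name, NumberOfCores, MemoryInMB

# List the Windows Server image SKUs from the MicrosoftWindowsServer publisher
Get-AzVMImageSku -Location "northeurope" -PublisherName "MicrosoftWindowsServer" -Offer "WindowsServer" | Select-Object Skus

The values returned are what we can then use for the image SKU and VM size parameters in the template.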
Calling the parameters directly is ok when deploying resources that require only a few values to be set, but it can become difficult to manage when there are a lot of parameters, like when deploying virtual machines or web apps.
A parameter file for Bicep is a JSON file that has the specific parameter names and values set.
I used this Microsoft document as a reference to create the parameter file.
Next we will create a template to deploy a virtual machine and its network interface. To create the base template in Visual Studio Code, type
res-nic to populate the network interface:
Use res-vm-windows to populate the virtual machine.
I will be creating parameters for each part that requires customization; anything that doesn't, I will leave with the hard-coded values. The @description is a parameter decorator that allows us to add a description of what the parameter is used for.
I created two variables, vnetId and subnetref, that are used for the network interface.
Below is what the updated virtual machine resource template looks like.
Once we have the Bicep template file ready the next step is to configure the parameter file. I copied the default template file code from the above Microsoft document and added in each of the required parameters.
To get the virtual network ID that will be used in the parameters file go to the virtual network and click on properties > Resource ID.
Once we have that, we can fill out the rest of the parameter values.
After the template file has been configured, we can test the deployment the same way as the storage account and use -WhatIf to confirm there are no errors.
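As a rough sketch, the what-if test looks like the below; the resource group and file names are just examples from my lab.

# Test the VM deployment against the parameter file without making any changes
New-AzResourceGroupDeployment -ResourceGroupName "VM_Test_RG" -TemplateFile .\vm.bicep -TemplateParameterFile .\vm.parameters.json -WhatIf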
As I have not set the admin password in the template or parameter file the deployment will prompt for the admin password to be set on the VM.
If the test deployment comes back without any issues, we can check the results from the what-if deployment to confirm all the values are correct.
Since the template and parameter files have returned no errors, we are ready to run the deployment and create the new VM resource.
If we check the resource group, the new VM, OS disk and network interface should now be created.
Now that we have the template and parameter file working, we can just create a new parameter file for each VM resource. We can now create fully customized VMs pretty quickly, instead of having to deploy using the Azure Marketplace and manually select the options we want to set.
In a previous post we went through deploying resources using ARM templates. I have recently been working on creating some new templates to be used for additional deployments and decided to use Bicep templates instead, as it seems to be an easier and more straightforward way of writing template files to deploy resources in Azure.
Bicep uses a simpler syntax than creating templates using JSON; the syntax is declarative and specifies which resources and resource properties you want to deploy.
Below is a comparison of a JSON and Bicep template file to create a storage account.
We can see that the Bicep file is an easier format to read and create.
To get started with Bicep we will first be installing the Bicep extension to Visual Studio Code so we can edit and modify Bicep templates.
The extension adds intellisense which makes it easier than looking up the full syntax for each resource.
To create a new template, click on new file in VS Code and select Bicep as the language.
To create a template using IntelliSense, either start typing res- followed by the resource type, or the resource type itself.
Click on the resource to pre-populate the template.
The values can be hard-coded in the template file, but this would require a manual change each time the template is run.
We can add parameters to the template to make it more reusable. There are a few different types of parameters that can be used.
Below is a brief description of each type.
Parameter | Description
string | Single-line value
array | Allows multiple values
int | Integer value
bool | Requires a true or false value
object | Allows property types
Microsoft has a more in-depth document on the different data types.
Below is an example of a template with parameters set.
Once we have the template, we need to install Azure PowerShell (if not already installed) and the Bicep tools; below is the document that covers the install.
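Below is one way to get set up, assuming winget is available for the Bicep CLI install; check the linked document for the other install options.

# Install the Az PowerShell module for the current user
Install-Module -Name Az -Scope CurrentUser -Repository PSGallery -Force

# Azure PowerShell does not install the Bicep CLI itself, so install it separately (winget shown as an example)
winget install -e --id Microsoft.Bicep

# Sign in to Azure
Connect-AzAccount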
Once connected, we can use New-AzResourceGroupDeployment to deploy the template and specify the parameter values. To test the deployment for errors without running an actual deployment, we can use the -WhatIf parameter.
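A minimal example is below; the resource group name and the storageAccountName parameter are just placeholders for whatever parameters your template defines.

# Validate the Bicep template without deploying anything
New-AzResourceGroupDeployment -ResourceGroupName "Storage_RG" -TemplateFile .\storage.bicep -storageAccountName "labstorage01" -WhatIf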
Run the command and it should return an error if there is an issue, or green output if there is no issue.
Once the template comes back with no issues, we can remove -WhatIf and the storage account will be created.
To confirm the storage account has been created, use the Get-AzStorageAccount command.
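For example (names are placeholders):

# Confirm the storage account was created
Get-AzStorageAccount -ResourceGroupName "Storage_RG" -Name "labstorage01" | Select-Object StorageAccountName, ResourceGroupName, ProvisioningState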
Using parameters in the Bicep template is fine when there are only a few that need to be called, but for more complex deployments it is better to use a parameters file and pre-fill each value.
In the next post we will go through the process of using a template to create a VM and its network interface, and will call a parameters file instead of calling the parameters in PowerShell directly.
During a recent project I have been deploying new VMs to Azure. When trying to configure Azure VM backup, I was getting a failure at the take snapshot stage.
The error that showed in the reason field was UserErrorRequestDisallowedByPolicy.
This was being caused by a policy that one of the Azure admins had set up to require tags to be configured on resource groups. When an initial backup runs it creates a resource group to save the restore point collection to, and it is this resource group that was being blocked by the Azure tag policy.
To view the policy details we can go to Policy > Assignments.
Click on the policy to view the parameters.
There are two options to work around this issue: either change the policy from a Deny effect to a Modify effect, or create the resource group manually.
I will be creating the resource group manually, as I am not that familiar with creating custom policies yet and this was the quicker workaround.
Below is the link to the Microsoft document on creating a manual resource group for the restore point collection.
Here are the steps that I did to get around this, by manually creating the resource group that will be used for the backup.
The resource group name needs to end with 1 as the starting number; in my case I used TheSleepyAdmin_Backup_RG1.
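The resource group can be created in the portal or with PowerShell. Below is a rough sketch; the location and the tag are assumptions, so use whatever tags your policy requires.

# Manually create the restore point collection resource group, ending in 1, with the tag required by the policy
New-AzResourceGroup -Name "TheSleepyAdmin_Backup_RG1" -Location "northeurope" -Tag @{ Environment = "Lab" }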
In the backup policy we specify the new resource group. Go to Azure Backup center > Backup policies.
Put in the name of the resource group we created manually, without the number. In my case this was TheSleepyAdmin_Backup_RG.
Wait for the policy update to complete.
Now try the backup again and it should complete.
If we check the resource group we can see that the restore point collection has been created.
Any additional backups should now also be successful. If the resource group becomes full it will try to create a new RG, so there may be a need to create another RG in the future. I will be having a look at creating or updating the tag policy to use a Modify effect instead of a Deny, but that will be in a different post, as it seems like a better long-term solution.
In this post we will go over the different methods to enable accelerated networking on an existing Azure VM.
Accelerated networking improves performance as it allows the network interface of an Azure VM to bypass the host.
Screen shot from Microsoft documentation
Below are some of the benefits of using accelerated networking:
Lower latency / higher packets per second
Reduced jitter
Decreased CPU utilization
Accelerated networking is only supported on VMs that have 2 or more vCPUs. If the VMs are in an availability set, all VMs in the set need to be powered off before updating.
There are three ways to enable accelerated networking on existing VMs: the Az PowerShell module, the Az CLI, or directly in the Azure portal.
To enable in the Azure portal go to Virtual machines > Networking and select the required network interface.
To enable, first power off the VM.
Select the network interface and click on the name. This will bring you to the network interface configuration page.
Click on enable accelerated networking.
You will have to confirm that you have validated that your operating system is supported.
Once completed, the network interface should now show accelerated networking as enabled.
Enabling in the console is fine for one or two interfaces, but if there are a few to update, using PowerShell or the Az CLI will be a quicker method.
To update using the AZ PowerShell Module, first we need to install the module.
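Once the module is installed and connected, the below is a minimal sketch for a single network interface; the VM, NIC and resource group names are examples.

# Power off the VM before changing the network interface
Stop-AzVM -ResourceGroupName "VM_RG" -Name "VM01" -Force

# Enable accelerated networking on the NIC and apply the change
$nic = Get-AzNetworkInterface -ResourceGroupName "VM_RG" -Name "vm01-nic"
$nic.EnableAcceleratedNetworking = $true
$nic | Set-AzNetworkInterface

# Start the VM back up
Start-AzVM -ResourceGroupName "VM_RG" -Name "VM01"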
Once the command completes, we can run the command to check the network interfaces again and the interface should now have EnableAcceleratedNetworking set to true.
If there were multiple network interfaces in the resource group to enable, we could get the list and loop through each, as in the sketch below, but each attached VM would need to support accelerated networking or the update would error out.
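A loop over all NICs in a resource group could look something like this, again with an example resource group name and assuming the attached VMs are already powered off and supported.

# Enable accelerated networking on every NIC in the resource group
$nics = Get-AzNetworkInterface -ResourceGroupName "VM_RG"
foreach ($nic in $nics) {
    $nic.EnableAcceleratedNetworking = $true
    $nic | Set-AzNetworkInterface
}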
I have recently been looking at using Azure Resource Manager (ARM) templates to deploy and redeploy resources in Azure. I haven't really done a lot with ARM templates, so I thought it might be helpful to do a few test runs and try to figure out how to deploy resources in Azure using ARM templates.
In this post we will be going through creating an ARM template from an existing resource group and what we need to do to redeploy it to a new resource group.
ARM templates are JSON files that define the infrastructure and configuration that will be deployed.
First we are going to export the template from the Azure resource group that we want to redeploy to another resource group.
Logon to the Azure portal and go to resource groups.
Select the resource group that we want to export the template from.
Go to Automation and select export template.
This will bring up the ARM template for the resource group. We can then download the template to modify, I will be using visual studio code with the Azure Resource Manager (ARM) Tools extension added to edit the template.
Once the zip file is downloaded and extracted there will be two JSON files, parameters and template.
When we look at the template file itself there will be a set of parameters. There are default values for each parameter, which are the names of each resource in the resource group. I removed the default values.
The parameters are what is used to define the name of the resources that are created.
When I first started to look at ARM templates they did seem very confusing, but breaking them up into each part instead of looking at the template as a whole made it a lot easier for me to understand how the template worked.
If we take the below as an example, this part of the JSON defines the virtual network and subnet to be created. It sets the location, subnet prefixes and one subnet for 10.0.0.0/24.
If there are IP addresses assigned or the subnet needs to be changed, this can be updated in the JSON file.
Once the JSON file has been modified we can use it to deploy to Azure. The two ways we will be going through in this post are using the Azure portal's deploy from a custom template option, and adding the parameters to the parameters JSON and deploying using PowerShell.
First we will go through the portal deployment.
Logon to the Azure portal and go to deploy from a custom template.
We could search for a template using the quick start templates if we don't have an existing template,
but we will be building our own, so we will be selecting build your own template.
Once the blade opens, click load file and select the JSON template file; this will then load the template.
Click save and this should then show a view like the below, where we can manually input the details we want to use for the deployment.
Click next and the ARM template will be validated.
Click create to start the deployment.
When I deployed the template I had some issues with the VM creation.
This was caused by a few different issues. The first was the managed disk, which returned the below error.
“Parameter ‘osDisk.managedDisk.id’ is not allowed.”
The last issue was due to the admin password not being set. To fix this I added a new parameter at the start of the template
and set it below under the OS profile.
Once this was done I went back to deploy a custom template and re-added the details, which should now have the additional admin password field.
The deployment now completed without issue.
The resources should now be deployed to the resource group.
The second method we will be going through to deploy the ARM template is to use PowerShell.
We will be using the New-AzResourceGroupDeployment command to deploy the template.
We first will be modifying the parameters file to set the names that will be used.
For the adminPassword I will be adding the password to the parameters file, but in production this should not be done; instead use something like Azure Key Vault to store the password.
First we need to install the Azure module
Install-Module -Name Az -Scope CurrentUser -Repository PSGallery -Force
Next we run
Connect-AzAccount
Next we run New-AzResourceGroupDeployment; I used the -Verbose parameter to get more details on the deployment. We will be calling the template and parameter JSON files.
New-AzResourceGroupDeployment -ResourceGroupName "resource group" -TemplateFile "path to template json" -TemplateParameterFile "path to parameters json" -Verbose
Below is the command running and provisioning the resources in the template.
Once the deployment completes all the resources will show under the resource group.
We can also use the ARM template to redeploy a resource that has been removed.
If we run New-AzResourceGroupDeployment again after a resource has been deleted, the deployment picks up the missing resources and redeploys them.
This was my first attempt at using ARM templates and it is not as complicated as I first thought. I will probably do a few more posts in the future after I have spent some more time working with ARM templates.
The main limits for VMware are that guest disks need to be less than 4 TB and vCenter needs to be at least 5.5.
I have a previous post on how to configure a Recovery Services vault so I won't be going over that again, but if you need to configure one, here is the previous post.
Go to the already configured vault, select Site Recovery and click on prepare infrastructure.
Once the wizard has started, select the required goals. I am not running the planning tool as this is a test, but it is recommended to run it before starting a deployment to verify the required bandwidth.
Next we will download the OVA appliance that will be imported to VMware.
Once the OVA has been downloaded and imported to VMware, on boot up the server will require you to read and accept a licence agreement and provide an administrator password.
Give the server a name (this will show up as the configuration server in Azure after the setup has been completed).
Next step is to sign in to the Azure tenant that the server will connect to for replication.
Next we will go through the configuration steps. The first step is to set the interface that will be used to connect to on-prem devices and back to Azure; two different NICs can be assigned if required.
Next is to configure the recovery vault that will be used: select the subscription, the recovery vault resource group and the Recovery Services vault that has been configured.
Install the MySQL software.
Next a validation test will run. (I am getting a warning for memory and CPU as I didn't have enough memory/CPU and had to edit the VM to run on fewer resources, but it will still complete.)
Next is to connect to the vCenter server that is running the VMs that are to be replicated to Azure.
The last step will configure the configuration server in Azure. Once this has been completed we can go back to the Azure portal and we should now see the configuration server show under the prepare infrastructure setup.
Select the subscription and deployment model to be used for failover; I am using Resource Manager.
Next create a replication policy to apply to the ASR configuration server.
Once the configuration is done we can protect and replicate our on-prem VMs. Go back to Site Recovery and select Step 1: Replicate application.
Select the source and source location (the on-prem configuration server), the machine type (physical or virtual), the vCenter (if virtual) and the process server.
Select the subscription, the resource group that the VMs will replicate to and the deployment model.
Next select the servers that will be replicated; the VMs must be powered on and running VMware Tools to be available for replication, otherwise they will be greyed out.
Select the required disk type and storage account. The last step is to assign the required policy (multiple policies can be created based on the recovery time requirements and retention times).
The last step is to enable replication.
Once enabled, check the Site Recovery jobs to see the progress.
Once replication has completed we can create a recovery plan. Go to Recovery Plans (Site Recovery) and select Recovery plan.
Give the plan a name, select the source, target and deployment type, and select the VMs that will be added to the recovery plan.
With more companies looking to move workloads from on-prem to cloud providers, it can be difficult to work out the cost for current workloads.
In Azure we can utilize Azure Migrate Services.
At the time of this post only VMware is supported; this will be extended to Hyper-V in future releases. VMware VMs must be managed by vCenter Server version 5.5, 6.0, 6.5 or 6.7.
In this post we will be going through the process of assessing the on-prem VMware environment and viewing the assessment report.
The environment we will be assessing is VMware vCenter 6.7 and ESXi 6.7 with a VM running Windows Server 2016.
The architecture of the Azure Migrate Service is shown in the following diagram
Below is the process
Create a project: In Azure, create an Azure Migrate project
Discover the machines: Download collector VM as an OVA and import to vCenter
Collect the information: The collector collects VM metadata using VMware PowerCLI cmdlets. Discovery is agentless and doesn’t install anything on VMware hosts or VMs. The collected metadata includes VM information (cores, memory, disks, disk sizes, and network adapters). It also collects performance data for VMs
Assess the project: The metadata is pushed to the Azure Migrate project. You can view it in the Azure portal.
Logon to Azure
Go to All services and search for migration project.
Select Create migration project
Give the project a Name, subscription, Resource group & Geography
Select Discover & Assess, then Discover machines and download the OVA. The system requirements for the OVA are:
CPU: 8 vCPUs; Memory: 16 GB; Hard drive: 80 GB
Next step is to import the OVA to VMware
Go to vCenter
Browse to the OVA file location and select it.
Select the name and location of the OVA.
Select the destination cluster and click Next.
Select the destination datastore and specify either a thick or thin provisioned disk.
Select the port group that the VM will use.
Review and confirm the settings.
Once the OVA is imported, power on the VM
Read and accept the license terms and give the collector an admin password.
Log into the VM and run the connector utility on the desktop.
Go through the prerequisite checks.
Next step is to connect to the vCenter. Put in the vCenter IP or hostname and the username/password, and once connected select the cluster or host that is currently running the VMs that need to be assessed for migration.
Next step is to connect back to Azure using the migration project credentials that were generated when creating the project.
Click continue and the last screen will start the discovery; this can take a while to complete (60 minutes or so).
Once the discovery has completed, we then need to click Create assessment.
Create a new group and select the VM that will be assessed
Once this has completed go to Overview and click on the assessment
Once in the assessment you can see the readiness of your VMs and the monthly estimated cost for running the workloads.
Click on the VM to view specific details and performance stats
We can now see what the cost will be for migrating workloads to Azure and this can be presented to give a better understanding of the cost savings that can be achieved with cloud migrations.
In this post I am going to go through setting up a weekly backup for VMs using an Azure Recovery Services vault.
Recovery Services vaults protect:
Azure Resource Manager-deployed VMs
Classic VMs
Standard storage VMs
Premium storage VMs
VMs running on Managed Disks
VMs encrypted using Azure Disk Encryption
Application consistent backup of Windows VMs using VSS and Linux VMs using custom pre-snapshot and post-snapshot scripts
To back up a single VM we can click on the VM, go to backup and configure the recovery vault. I want to add all my servers at one time, so I will create the Recovery Services vault first.
Once logged on go to All Services > Recovery Services vaults
Once in Recovery Services vaults click create
Give the Recovery Vault a Name, assign a subscription, resource group and location.
Once the deployment has finished, click on the newly created object.
The first thing I am going to do is set the backup configuration to locally-redundant, as this is just for my lab VMs and it will save on cost.
Go to Manage > Backup Infrastructure and set to Locally-redundant.
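The same setting can also be changed with PowerShell; below is a quick sketch with example vault and resource group names. Note that the redundancy can only be changed before any items are protected in the vault.

# Set the vault backup storage redundancy to locally-redundant
$vault = Get-AzRecoveryServicesVault -ResourceGroupName "Lab_RG" -Name "LabRecoveryVault"
Set-AzRecoveryServicesBackupProperty -Vault $vault -BackupStorageRedundancy LocallyRedundant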
I am going to create a custom policy as I only want to back up my test VMs once a week. Go to Manage > Backup policies and click Add.
Once in the new backup policy, configure the settings as required. I have set the frequency to every Sunday at 22:00 and set retention to 4 weekly backups. Click create once all settings are configured.
The policy should now be available to assign to backup jobs. The next step is to set up the backup. Go to Getting started > Backup.
Select where the workload is running (Azure or on-prem); I only want to back up my Azure lab VMs, so I selected Azure. Next select the backup type:
VM
Azure File Share (in preview at the time of the post)
SQL server in Azure VM (in preview at the time of the post)
Select the backup policy, I am using the policy created above.
Next select the VM’s that will be backed up.
Click enable backup to finish the config.
I will kick off a manual backup job to get an initial backup.
Click on Backup items > Azure Virtual Machine > Backup now
To view backup jobs go to Monitoring > Backup Jobs
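The jobs can also be checked with PowerShell, for example (the vault and resource group names are placeholders):

# List the backup jobs for the vault
$vault = Get-AzRecoveryServicesVault -ResourceGroupName "Lab_RG" -Name "LabRecoveryVault"
Get-AzRecoveryServicesBackupJob -VaultId $vault.ID | Select-Object WorkloadName, Operation, Status, StartTime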
Once the backup is complete, the option to run VM restore or file level recovery becomes available.
In this post I am going to go through setting up an Azure resource group, VNet and deployment of a basic VM. There are many different VM types and sizes that can be deployed.
Below is a table with the current VM types, sizes and descriptions:

Type | Sizes | Description
General purpose | B, Dsv3, Dv3, DSv2, Dv2, Av2, DC | Balanced CPU-to-memory ratio. Ideal for testing and development, small to medium databases, and low to medium traffic web servers.
Compute optimized | Fsv2, Fs, F | High CPU-to-memory ratio. Good for medium traffic web servers, network appliances, batch processes, and application servers.
Memory optimized | Esv3, Ev3, M, GS, G, DSv2, Dv2 | High memory-to-CPU ratio. Great for relational database servers, medium to large caches, and in-memory analytics.
Storage optimized | Ls | High disk throughput and IO. Ideal for Big Data, SQL, and NoSQL databases.
GPU | NV, NVv2, NC, NCv2, NCv3, ND | Specialized virtual machines targeted for heavy graphic rendering and video editing, as well as model training and inferencing (ND) with deep learning. Available with single or multiple GPUs.
High performance compute | H | Our fastest and most powerful CPU virtual machines, with optional high-throughput network interfaces (RDMA).
The first step for deploying a VM is to create a resource group. A resource group is basically a container object that will hold Azure objects like VNets, VMs and any other Azure services that will be added to it. A resource group can be created while deploying a VM, but I prefer to create them beforehand.
Log on to the Azure portal. If the resource groups tab is not showing:
Go to All services > Resource Groups
Once on resource groups click on Add
Give the resource group a name, select a subscription and set the location.
The resource group should only take a few seconds to create. Once created you should get an alert.
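The same resource group could also be created with the Az PowerShell module, for example (the name and location are just my lab values):

# Create the resource group
New-AzResourceGroup -Name "TheSleepyAdmin_Lab_RG" -Location "northeurope"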
Now that there is a resource group, we can move to the next step, which is to create a new VNet. Go to All services > Virtual networks.
Once in Virtual networks, go to create virtual network. Give the network a name and an IP address space/subnet mask, select the subscription and location, add it to a resource group, and set the IP range that will be available for use.
Once completed the new VNet will show under virtual networks.
The final step is to start creating VMs. Go to All services > Virtual machines.
Click on create new Virtual machine
Set the subscription that will be used, the resource group, VM name and image type. We can also set the availability options for high availability and resilience.
Select VM size, user name and allowed ports.
The next page allows you to change the disks used for the VM (premium SSD, standard SSD or standard HDD). If the disk is changed this may reset the VM type, so I would usually leave this as is unless there is a specific reason to change it.
Next step is to select the VNet / subnet that will be used for the VM.
There is an auto-shutdown feature in Azure. I like to use this on my lab as it saves credit; since this is only a lab server, I want the VM to shut down at 12AM. I can start the VM up again when I want to do any further testing.
I won't add any guest config or tags, so the last step is to review and validate the VM.
The VM should now deploy. It will take a while to deploy, and once completed the VM will show under Virtual Machines.
If we check the resource group, we can now see all the objects contained in the resource group.