Getting Started with Microsoft Az PowerShell

In this post we will be going through using the Microsoft Az PowerShell module to manage resources in Azure.

Using PowerShell to manage Azure resources can have advantages over using the Azure portal, as it can make repetitive tasks quicker and makes it easier to export configuration data.

I use the Az module learn pages below to find the commands I need.

https://learn.microsoft.com/en-us/powershell/module/?view=azps-10.3.0

The first step is to install the Az module:

Install-Module -Name Az

If you already have the Az module installed, it's a good idea to update it, as new commands and fixes are added to the module over time.

Update-Module -Name Az -Force

Once we have the module installed we can connect to start managing resources.

Use the Connect-AzAccount command to connect, signing in with an Azure account that has permission to manage the resources.
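Connect-AzAccount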

If you connect and there are multiple subscriptions, you can use the below commands to check what subscriptions are available and then select the correct subscription to use.

Get-AzSubscription

Set-AzContext -SubscriptionName SubName

To find a command we can use Get-Command with wildcards to return the list of commands matching a name. Below I am looking for network commands:

Get-Command Get-Az*Network*

To list all resource groups we can use

Get-AzResourceGroup

To select a specific resource group use:

Get-AzResourceGroup -ResourceGroupName Sleepyadmin_Ops_RG

To get a list of virtual networks we can use:

Get-AzVirtualNetwork

Commands like Get-AzVirtualNetwork will sometimes only return a few properties in their default output.

To return the full set of properties we can use Format-List:

Get-AzVirtualNetwork | Format-List

We can select specific properties by wrapping the command in parentheses and using the property name. Below we are selecting the virtual network address space.

(Get-AzVirtualNetwork).AddressSpace.AddressPrefixes

To select specific properties we can use Select-Object; to get sub-properties like the address space and subnets we can use calculated properties (hash tables) to format the output, like below.

Get-AzVirtualNetwork | Select-Object ResourceGroupName,Name, @{N='AddressSpace';E={$_.AddressSpace.AddressPrefixes}}, @{N='SubnetName';E={$_.Subnets.Name}}

We can use the above method with other commands, for example with network security groups to export firewall rules.

Get-AzNetworkSecurityGroup -Name NSG_Name | Select-Object Name,@{N="RuleName";E={$_.SecurityRules.Name}},ResourceGroupName,@{N="DestinationPortRange";E={$_.SecurityRules.DestinationPortRange}}

This has been a quick overview of getting started with Az PowerShell.

Deploying Azure VMs using PowerShell and CSV

In this post we will be going through the process of using PowerShell to deploy VMs using the Az module. We will then use a CSV file to create multiple VMs.

To start, we will need an existing resource group, VNET and subnet that the VM will be built in and that the NIC will connect to.

Next we need to have the Az PowerShell module installed; this can be done by using:

Install-Module -Name Az -Repository PSGallery -Force

If the module is already installed it's a good idea to check if there are any updates; we can use the below command to update the module.

Update-Module -Name Az -Force

Once we have the module and prerequisite resources in Azure, we can start to create the command that will be used to build the VM.

I used the New-AzVM learn page to check which parameters I wanted to use to create the VM.

We will need to choose a SKU size and an image to deploy. These can be checked using PowerShell.

For the below example I will be searching based on the Standard_D2ds SKU name; we can also search based on cores or memory.

Get-AzVMSize -Location northeurope | Where-Object {$_.Name -like "Standard_D2ds*"}

Next we need an image; for this I will be using Windows Server 2022.

Get-AzVMImageSku -Location "northeurope" -PublisherName "MicrosoftWindowsServer" -Offer "WindowsServer" | Where-Object {$_.Skus -like "*2022*"}

The image value is made by joining three of the returned values together:

PublisherName:Offer:Skus

For the 2022 Azure edition it will be the below.
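Assuming the Azure Edition SKU name returned by the command above, the joined image value would look like:

MicrosoftWindowsServer:WindowsServer:2022-datacenter-azure-edition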

Now that we have the sku and image we can start to create the command to build the VM.

I will be using splatting, as it is easier to read when using a lot of parameters.

$vmla = "AzureUser"
$vmlapass = ConvertTo-SecureString "password" -AsPlainText -Force

$vmproperties = @{
    ResourceGroupName = "RG Name"
    Name = "VM Name"
    Location = "AZ Region"
    VirtualNetworkName = "vnet"
    SubnetName = "subnet"
    Size = "sku size"
    Image = "Image"
    Zone = "1"
    Credential =  New-Object System.Management.Automation.PSCredential ($vmla, $vmlapass);
}

Once we have the parameters set we can then pass them to New-AzVM using:

New-AzVm @vmproperties

We can run Get-AzVM to check the VM is created and running.

To deploy multiple VMs using a CSV file we can use the same splatting; we just update it to use a foreach loop and Import-Csv.

First we need to create the CSV; below are the headings that I used.
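As a sketch, a header row matching the splatted parameters might look like this (the column names here are my own; adjust them to match your script):

Name,ResourceGroupName,Location,VirtualNetworkName,SubnetName,Size,Image,Zone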

Next we need to update the parameter splatting to use the variable from the foreach loop; in my case it's $vm.

Below are the updated parameters inside the loop.
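Here is a minimal sketch of the loop, assuming the CSV path and the header names shown above; the credential variables are the same as in the earlier example:

$vms = Import-Csv -Path "C:\temp\vms.csv"

foreach ($vm in $vms) {
    $vmproperties = @{
        ResourceGroupName  = $vm.ResourceGroupName
        Name               = $vm.Name
        Location           = $vm.Location
        VirtualNetworkName = $vm.VirtualNetworkName
        SubnetName         = $vm.SubnetName
        Size               = $vm.Size
        Image              = $vm.Image
        Zone               = $vm.Zone
        # Reuse the local admin credential created earlier
        Credential         = New-Object System.Management.Automation.PSCredential ($vmla, $vmlapass)
    }

    New-AzVM @vmproperties
}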

Save the script, connect to Azure again and run it.

The csv and splatting can be updated to add or remove additional parameters.

The only issue I have found is that there is no way to stop a new NSG being created unless you specify an existing NSG. For each VM I deployed, a new NSG was created.

I had to manually go in and delete these NSGs, as I am using an NSG on the subnet and don't want them on the VMs' NICs.

It looks like in a future release of the Az module you will be allowed to specify no NSG.

https://github.com/Azure/azure-powershell/issues/16890

This can be a useful alternative for VM deployments if you aren't that familiar with ARM or Bicep templates and are more comfortable using PowerShell.

Getting Started with KQL Part 3: Query and Structure Data

In the last post we went through the different KQL operators.

In this post we will be going through querying and structuring data, as well as creating some basic charts.

Log on to Azure, go to the Log Analytics workspaces blade and select the workspace.

Click on logs. The main logs we will be working with in these posts will be storage blob logs, but the same principle can be used on any logs.

I used the below learn article for reference on the different columns.

https://learn.microsoft.com/en-us/azure/azure-monitor/reference/tables/storagebloblogs

I also used the quick reference for what each operator does.

https://learn.microsoft.com/en-us/azure/data-explorer/kusto/query/kql-quick-reference

If we want to filter by a specific column and value we can use the where operator to return only the matching rows.

The below query gets the requests that used SAS authentication.
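A sketch of the query, assuming the AuthenticationType column uses the value "SAS":

StorageBlobLogs
| where AuthenticationType == "SAS"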

If we want to only select certain columns we can use project.
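For example, keeping only a few columns from the previous query (the selected columns are just an example):

StorageBlobLogs
| where AuthenticationType == "SAS"
| project TimeGenerated, OperationName, AuthenticationType, Uri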

If we want a count of each authentication type we can use summarize.
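Something like the below should return a row count per authentication type:

StorageBlobLogs
| summarize count() by AuthenticationType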

Next we can create a pie chart grouped by a column; I used the operation name.
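A sketch using render with the OperationName column:

StorageBlobLogs
| summarize count() by OperationName
| render piechart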

We can also render other chart types, such as a time chart.
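A sketch of a time chart of operations per hour:

StorageBlobLogs
| summarize count() by bin(TimeGenerated, 1h)
| render timechart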

In the next post we will go through creating email reports and action groups.

Getting Started with KQL Part 2: Working with Operators

In the last post we went through setting up a log workspace and configuring diagnostic settings to send data to the workspace.

In this post we will be going through using the different KQL operators.

First we will use the search operator to return all data in the log workspace; this can be useful when trying to find the table we want to query or to see specific event types.

search *

If we want to return a specific number of rows, in no particular order, we can use take.

Table
| take 10

To return a list of unique values in a column we can use distinct.

Table 
| distinct AppDisplayName

To filter on multiple values we can use the or operator.

Table
| where column_name contains "value1" or column_name contains "value2"
| project value1, value2

To order the data we can use order by

Table
| where column_name contains "value1" or column_name contains "value2"
| order by column_name
| project value1, value2

To return the first N rows sorted by a column we can use top.

Table
| top 10 by column_name
| project value1

To return data between a specific date and time we can use the between operator

Table
| where TimeGenerated between (datetime(2023-08-14T19:12:00) .. datetime(2023-08-15T19:12:00))

In the next post we will go through querying and structuring data, as well as visualizing data in charts using the render operator.

Migrating an Existing Azure VM to an Availability Zone

In this post we will be going through the process of migrating a VM into an availability zone.

Azure availability zones are designed to help with availability for business-critical workloads. Using availability zones allows VMs to be deployed to different datacenters within the same Azure region.

There is not currently an easy process to move a VM into an availability zone, as this needs to be configured when the VM is originally deployed.

The process to migrate does require downtime for the VM, as we will need to snapshot the VM, create new disks from the snapshots and deploy into an availability zone.

We will be using PowerShell to create snapshots of the existing VM disks, create the new disks from the snapshots and create a new VM using the existing VM config.

I will be doing this on a test VM with no live data, but for a live server make sure to have a full backup, as we will be deleting the VM and using the existing configuration to recreate it with the disks in the availability zone.

I created a test VM called SAZTest with one data disk, which is not in an availability zone.

First we need to connect with Az PowerShell and select the correct subscription if there are multiple.

Next we get the VM whose disks we will snapshot. We will be using the $vm variable to get the disks and for recreating the VM later, to keep the existing configuration.

$resourceGroup = "Resource Group"
$vmName = "VMName"
$vm = Get-AzVM -ResourceGroupName $resourceGroup -Name $vmName

Next we need to power off the VM if it is running, either through the Azure portal or by running the below command.

Stop-AzVM -ResourceGroupName $resourceGroup -Name $vmName

Next we can create the snapshot. I used the below link for reference.

https://learn.microsoft.com/en-us/azure/virtual-machines/snapshot-copy-managed-disk?tabs=portal

$location = "location"
$snapshotName = "$($vm.StorageProfile.OsDisk.Name)-snapshot"

$snapshot =  New-AzSnapshotConfig -SourceUri $vm.StorageProfile.OsDisk.ManagedDisk.Id -Location $location -CreateOption copy

$vmossnapshot = New-AzSnapshot -Snapshot $snapshot -SnapshotName $snapshotName -ResourceGroupName $resourceGroup

$snapshot = Get-AzSnapshot -ResourceGroupName $resourceGroup -SnapshotName $snapshotName 

If we check under snapshots in the Azure portal we will see the newly created snapshot.

We could also create the snapshot directly from the Azure portal using the snapshots blade.

We will use this method for the data disk; go to the VM and select the data disk.

Select create snapshot.

Add in the details

Go through the remaining tabs and leave the settings as default.

Wait for the deployment to complete and the second snapshot should show.

Creating the data disk snapshot using PowerShell is pretty much the same process as for the OS disk.

To view the disks attached to the VM we can use the data disks sub-property.

$vm.StorageProfile.DataDisks

Since we only have one disk we can run the set of commands once, but if there were a few disks it would be easier to loop through them, as in the sketch after the commands below.

$datadisk = $vm.StorageProfile.DataDisks

$snapshotdataconfig = New-AzSnapshotConfig -SourceUri $datadisk.ManagedDisk.Id -Location $location -CreateOption copy -SkuName Standard_ZRS

$snapshot_data = New-AzSnapshot -Snapshot $snapshotdataconfig -SnapshotName ($datadisk.Name + '-snapshot') -ResourceGroupName $resourceGroup
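If there were multiple data disks, a minimal sketch of the loop, using the same commands as above, might look like this:

foreach ($datadisk in $vm.StorageProfile.DataDisks) {
    # Snapshot each data disk, naming the snapshot after the disk
    $snapshotdataconfig = New-AzSnapshotConfig -SourceUri $datadisk.ManagedDisk.Id -Location $location -CreateOption copy -SkuName Standard_ZRS

    New-AzSnapshot -Snapshot $snapshotdataconfig -SnapshotName ($datadisk.Name + '-snapshot') -ResourceGroupName $resourceGroup
}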

We can run the below to show the snapshots.

Get-AzSnapshot -ResourceGroupName $resourceGroup | Select-Object Name

Next we need to create a new managed disk from the snapshots.

https://learn.microsoft.com/en-us/azure/virtual-machines/scripts/virtual-machines-powershell-sample-create-managed-disk-from-snapshot

We should already have the snapshot in $snapshot, but if not we can run the below again before creating the new disk config and disk.

$snapshot = Get-AzSnapshot -ResourceGroupName $resourceGroup -SnapshotName $snapshotName 

$diskConfig = New-AzDiskConfig -Location $snapshot.Location -SourceResourceId $snapshot.Id -CreateOption Copy -SkuName Standard_LRS -Zone 1

$OSdisk = New-AzDisk -Disk $diskConfig -ResourceGroupName $resourceGroup -DiskName ($vm.StorageProfile.OsDisk.Name +"_1")

We need to run the same set of commands for all the data disks.

$datasnapshot = Get-AzSnapshot -ResourceGroupName $resourceGroup -SnapshotName $snapshot_data.Name 

$datadiskConfig = New-AzDiskConfig -Location $datasnapshot.Location -SourceResourceId $datasnapshot.Id -CreateOption Copy -SkuName Standard_LRS -Zone 1

$datadisk = New-AzDisk -Disk $datadiskConfig -ResourceGroupName $resourceGroup -DiskName ($datadisk.Name + "_1")

Now if we check the resource group we should see the new disk.

Now we need to delete the original VM so that we can create a new VM using the existing configuration, with the newly created disks in zone 1.

Either delete the VM from the Azure Portal or run

Remove-AzVM -ResourceGroupName $resourceGroup -Name $vmName

We need to use New-AzVMConfig, copy the existing SKU size, attach the OS / data disks that we created and add the existing network interface.

I used the below learn article as reference.

https://learn.microsoft.com/en-us/powershell/module/az.compute/new-azvmconfig?view=azps-10.1.0

$createvm = New-AzVMConfig -VMName $vm.Name -VMSize $vm.HardwareProfile.VmSize -Zone 1

Set-AzVMOSDisk -VM $createvm -CreateOption Attach -ManagedDiskId $OSdisk.Id -Name $OSdisk.Name -Windows

$vmdatadisk = Get-AzDisk -ResourceGroupName $resourceGroup -DiskName $datadisk.Name

Add-AzVMDataDisk -VM $createvm  -Name $vmdatadisk.Name -ManagedDiskId $vmdatadisk.Id  -Lun 0 -DiskSizeInGB $vmdatadisk.DiskSizeGB -CreateOption Attach 

Next we can add the existing network adapter.

Add-AzVMNetworkInterface -VM $createvm -Id $vm.NetworkProfile.NetworkInterfaces.id -Primary

Next we set the VM boot diagnostics; if this is not set, the deployment will default to creating a storage account to use for boot diagnostics.

Set-AzVMBootDiagnostic -VM $createvm -Enable

We can also change this afterwards by going to boot diagnostics on the VM and changing it to enabled with a managed storage account.

The last step is to create the new VM. If you have Azure Hybrid Benefit licensing you can enable this by adding -LicenseType Windows_Server (I don't, so I won't be using it in this example).

New-AzVM -ResourceGroupName $resourceGroup -Location $vm.Location -VM $createvm -DisableBginfoExtension

When the deployment finishes we can see the VM is now running in Zone 1.

Once it's confirmed that the VM is running and that all data is available, the last step is to remove the old disks and snapshots so that we don't get charged for them.

Go to each disk/snapshot and delete the original VM disks and snapshots.

Getting Started with KQL Part 1: Azure Log Workspace and Diagnostic Settings

Kusto Query Language (KQL), is a query language developed by Microsoft for querying and analyzing data. KQL is specifically designed for working with structured, semi-structured, and unstructured data, and it offers a simple and intuitive syntax for expressing complex queries.

KQL is used in log analytics, security monitoring, and business intelligence. It allows users to extract insights from vast amounts of data by using a combination of filtering, aggregating, and transforming operations. With its rich set of operators, functions, and control flow statements, KQL enables users to perform advanced analytics and create sophisticated queries.

In the next few blog posts we will be going through how to send data to a log workspace and creating KQL queries to show how we can visualize and gather data.

I have wanted to do more with KQL and I am using this series to improve my own KQL; hopefully it will be of use to others who are just starting out.

The first step is to create an Azure Log Analytics workspace.

Go to the Log Analytics workspaces blade in Azure and click create.

  • Select a subscription
  • Create or select an existing resource group
  • Give the workspace a name
  • Set the region

Select tags if needed, review and create.

It will take a few minutes to finish deploying.

Once we have the log workspace created we can start to send data that we can then query.

First we will be sending storage account diagnostic data.

To enable this, go to the storage account and select diagnostic settings.

Enable diagnostics for the required storage types; I am going to enable them for blob and file.

Click on the storage type and click add diagnostic settings.

Select the logs to send and the log analytics workspace.

After enabling for both file and blob the diagnostic status should now show as enabled.

We can generate some data by creating and deleting some blobs and Azure file shares.

Once we have some data we can start to write our KQL query. First we can run against the full table by using StorageBlobLogs. I used the Azure Monitor reference to find the correct table to use.

https://learn.microsoft.com/en-us/azure/azure-monitor/reference/

If we want to select specific rows and filter the data that is returned we can use the where operator. In the below example I am selecting only the storage write results.

StorageBlobLogs
| where Category == 'StorageWrite'

In the next post we will go through using different operators like project, summarize and render.

Azure Key Vault: Access With PowerShell

Azure Key Vault is a cloud-based service provided by Microsoft Azure that allows users to securely store and manage keys, secrets, and certificates used in their applications and services.

It acts as a repository for sensitive information and provides a secure way to access and manage sensitive data. Key Vault offers features such as key and secret rotation, access policies, and auditing capabilities, ensuring that sensitive data remains protected at all times.

It integrates with other Azure services and provides encryption and access control, making it a reliable solution for safeguarding critical data.

In this post we will be going through configuring Azure Key Vault, adding some secrets and calling these secrets using PowerShell.

Before using Azure Key Vault, there are a few prerequisites that need to be in place:

  1. Azure Subscription: You will need an active Azure subscription to create and manage Azure Key Vaults.
  2. Resource Group: Create a resource group in Azure
  3. Access Control: Ensure that you have the necessary permissions and role-based access control (RBAC) rights to create and manage Azure Key Vaults. The required roles typically include Owner, Contributor, or Key Vault Contributor.
  4. Network Configuration: Configure your virtual network and firewall rules to allow access to the Azure Key Vault. You can limit access to specific IP addresses or Azure services based on your security requirements.

I will be using a pre-existing resource group and VNET, and we won't be covering those in this post.

Azure Key Vault and secrets can be created using the Azure CLI, Az PowerShell or the Azure portal. In this post we will create the vault in the Azure portal and create a new secret using Az PowerShell.

The first step is to create a new key vault. In the Azure portal, search for key vault and open the blade.

Click create

Select the resource group

  • Give the vault a name
  • Set the region
  • Set the pricing tier
  • Set soft delete

Set the permission model and resource access.

I will be leaving public access open but in production I would limit this and create a private endpoint.

Review the settings and create the key vault.

The deployment will take a minute to complete.

Before we can begin using the key vault we need to grant permissions.

Go to the key vault, select IAM and click add.

Next select the role to assign. In this case I am using Key Vault Administrator.

Select the member; I am using a group. This could also be a managed identity, in case we need to allow an Azure Function or Automation account to connect.

Review and assign the permissions.

Now that we have the key vault and permission set we can add some secrets.

Go to objects and secrets, then click on generate/import.

Give the secret a name and a value; we can also set activation/expiration dates and tags.

Click create to add the secret.

Now we should see the secret in the secrets blade.

We can view the value directly in the Azure portal by clicking on the secret and viewing the secret value.

The last step is to test that we can retrieve the value using PowerShell. To run these commands we first have to install the Az PowerShell module.

Connect using Az PowerShell:

Connect-AzAccount

If there are multiple subscriptions we need to set the context using Set-AzContext.

Set-AzContext -Subscription "Subscription name"

Use the Get-AzKeyVaultSecret command to view the secrets in the vault.

Get-AzKeyVaultSecret -VaultName "vault name"

To retrieve the value as plain text use

Get-AzKeyVaultSecret -VaultName "vault name" -Name "secretvalue" -AsPlainText

To create a new secret using Az PowerShell:

$setsecretvalue = ConvertTo-SecureString "This is another secret value" -AsPlainText

Set-AzKeyVaultSecret -VaultName "vault name" -Name "secret name" -SecretValue $setsecretvalue

Now we can call the secret to confirm that the value has been set.
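As an example of using a retrieved secret in a script, we can build a PSCredential from the secret's SecureString value (the vault, secret and account names here are placeholders):

$secret = Get-AzKeyVaultSecret -VaultName "vault name" -Name "secret name"

$cred = New-Object System.Management.Automation.PSCredential ("svc-account", $secret.SecretValue)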

This was a quick run-through of retrieving secrets from Azure Key Vault using PowerShell. This can be useful for scripts that need to authenticate, as it removes hard-coded passwords or connection strings from scripts.

Using Managed Identities with Azure Automation Runbooks

Azure Automation with Managed Identity enables you to automate and orchestrate various tasks and processes within Azure environments. With the added benefit of Managed Identity, it eliminates the need for using credentials and simplifies authentication and authorization process.

By leveraging Azure Automation, you can create and schedule runbooks, which are sets of instructions or scripts, to perform routine or complex operations such as provisioning resources, managing virtual machines, or deploying applications. These runbooks can be executed on a schedule or triggered by specific events.

In this post we will go through setting up an Azure Automation account and runbook, and using a managed identity for authentication.

To create an automation account:

Go to the Automation Account blade and click create.

Set the subscription, resource group, name and region.

Set the managed identity; we will be using system assigned. The below link to the Microsoft documentation outlines the differences between system and user assigned.

https://learn.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/overview#managed-identity-types

We will be setting the connectivity configuration to private access and creating a private endpoint, which assigns a NIC with a private IP.

Next set tags if required.

Review the details and deploy.

It will take a minute or so for the deployment to complete.

We can now check and confirm that the service principal for the automation account has been created.

We can check by going to Azure AD > Enterprise Applications and searching with the object ID.

Next we need to assign the required permission. To assign, go to Identity and select Azure role assignments.

The permission can be scoped to a subscription, resource group, key vault or storage account. I will be setting the scope to storage and assigning the Storage Blob Data Reader role on one of my storage accounts.

Save and the role should apply.

Next we will create a runbook to run a set of tasks using the managed identity of the automation account.

Add in the PowerShell script to be run. I am going to try to create a blob container, but this should fail as I have only given read access. Click publish to allow the runbook to be run.
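Here is a rough sketch of the sort of runbook script used, with placeholder storage account and container names; the key part is Connect-AzAccount -Identity, which authenticates as the automation account's managed identity:

# Authenticate using the system-assigned managed identity
Connect-AzAccount -Identity

# Build a storage context using the signed-in identity (storage account name is a placeholder)
$ctx = New-AzStorageContext -StorageAccountName "mystorageaccount" -UseConnectedAccount

# Try to create a container - this should fail, as the identity only has Storage Blob Data Reader
New-AzStorageContainer -Name "testcontainer" -Context $ctx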

Click start to run the runbook.

Once completed we can view the errors from the run.

We can also check the enterprise app for the automation account and see the sign-in was successful.

This was just a quick example of Azure Automation with a managed identity; it can be a very useful tool for setting up tasks that need to run on a schedule in Azure.

Deploy and Configure Azure Application Gateway

Azure Application Gateway is a service provided by Microsoft Azure that allows you to build a scalable and secure web traffic routing solution. It acts as a reverse proxy service, providing features like load balancing, SSL termination, URL-based routing, session affinity, and application-level security.

Designed to optimize web application delivery, Azure Application Gateway enhances the performance, availability, and security of your web applications. It plays a crucial role in managing and distributing traffic to backend servers, ensuring high availability and seamless user experience.

In this post we will be going through deploying and configuring an Azure Application Gateway and present an internal website.

Prerequisites for Deploying Azure Application Gateway:

  • Azure Subscription: You need an active Azure subscription to create and deploy Azure Application Gateway.
  • Resource Group: A resource group to deploy the gateway into.
  • Virtual Network: Set up a virtual network (VNET) where you’ll deploy the Azure Application Gateway. The VNET needs at least one subnet that is dedicated to the gateway.
  • Backend Servers: Prepare the backend servers or resources that will handle the web traffic forwarded by the Application Gateway.
  • SSL Certificate: If you want to enable SSL termination, obtain an SSL certificate and private key for your domain.
  • Public IP Address: Public IP address for the Azure Application Gateway. If you want to expose your application to the internet, a public IP is required.
  • DNS Configuration: If you plan to use a custom domain name, set up the DNS records to point to the public IP address of the Application Gateway.
  • Application Gateway SKU: Choose the appropriate Application Gateway SKU based on your performance and scalability requirements.

Once all prerequisites are met we can start to deploy the Application gateway.

To start the deployment go to the load balancing blade or search for application gateway and click create.

Select the resource group

Set the name, region, tier and instance count. If using WAF v2, create a firewall policy and set the VNET to be used (the gateway subnet is recommended to be a /24).

Set frontend IP.

Set the backend target (this can be an IP/FQDN, VM or App Service). I used a VM.

We need to add a routing rule with a listener and backend targets.

Set tags and then review configuration and deploy.

The Application Gateway will take a little while to deploy. Once the deployment has finished the gateway should be showing.

Once deployed we can check to see if the site is contactable by using the public IP for the Application Gateway.

Next we will enable the application gateway logs to be sent to a log workspace.

Go to Application Gateway and select the diagnostic settings and add diagnostic settings.

Enable all logs and select the log workspace.

We can also add rules to the WAF policy to restrict access to the site. I will be adding a rule to only allow access from my public IP.

To add the rule go to the Web Application Firewall policies (WAF) blade and select the policy we created earlier.

Go to custom rules and select add custom rule.

Give the rule a name, set it to enabled, set the priority and configure the conditions.

Once the rule is added, click save to apply.

We can test accessing the site from the allowed IP and then from a different IP to check the block is working.

We can confirm the block is logged using the log workspace we configured in the diagnostic settings earlier, with a KQL query.

I used the below query.

AzureDiagnostics
| where ResourceProvider == "MICROSOFT.NETWORK" and Category == "ApplicationGatewayFirewallLog"
| project TimeGenerated, clientIp_s, hostname_s, ruleSetType_s, ruleId_s, action_s

This was a quick run through of deploying and configuring an Application Gateway.

Configure Azure Site Recovery Zone to Zone Replication

In this post we will be going through setting up and configuring Azure Site Recovery (ASR) using zone to zone replication.

The zone to zone replication feature of Azure Site Recovery allows you to replicate data and resources within the same region but across different zones. This is particularly useful for the high availability of applications and services in the event of a zone outage.

Zone to Zone Replication in Azure ASR offers several advantages. It provides low recovery point objectives (RPOs) and recovery time objectives (RTOs) by continuously replicating changes. This ensures minimal data loss and enables quick failover in case of a disruption.

When configuring zone to zone replication there are a few prerequisites. One is that the VMs that are going to be replicated need to be in an availability zone; if not, you won't be able to select them.

For existing VMs, the only way to move to an availability zone is to copy the disks and redeploy the VM using the CLI or PowerShell.

Setting the availability zone can be done when building the VM through the portal or using Bicep/ARM templates.

The second prerequisite is that there are enough free IPs on the subnet.

To keep the same IP address as the source VM, the VM needs to be configured with a static address in its network adapter settings.
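A sketch of setting the NIC to static using Az PowerShell (the NIC and resource group names are placeholders):

$nic = Get-AzNetworkInterface -Name "vm-nic" -ResourceGroupName "rg-name"

# Change the first IP configuration from dynamic to static
$nic.IpConfigurations[0].PrivateIpAllocationMethod = "Static"

Set-AzNetworkInterface -NetworkInterface $nic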

To start using ASR we need to create a new Recovery Services vault.

To create one, go to the Recovery Services vaults blade in the Azure portal and click create.

Select the subscription the vault will be deployed to, select the resource group, give the vault a name and set the region.

Add tags if in use and then review and create.

The deployment of the vault can take a few minutes to complete.

Select the vault, go to replicated items and select Azure virtual machines.

Select the region, resource group and the availability zone that the VM was deployed to.

Select the VM to be replicated; if the VM doesn't show, check to make sure it was deployed to an availability zone.

Next either create a new target resource group or use an existing one and select failover network.

Select availability options to specify the target zone.

Select the default or create a replication policy. I set the update extension to allow ASR to manage it (this will create an Azure Automation account).

Review the settings to confirm everything is correct and enable replication.

If we check the Site Recovery jobs we should see the replication being created.

We can also see the status of the replication under replicated items.

It will take a little while for the synchronization to complete. Once completed there will be a warning; this is just alerting that the failover hasn't been tested.

Last step will be to test failover.

If we click on the three dots beside the replicated item we then have the option to failover or test failover.

Select the recovery point and virtual network (for a test failover this can't be the same as the production VNET).

To check the status of the failover job, go to Monitoring > Site Recovery Jobs and select the failover job.

Once the job has completed there should be a VM running with -test at the end of the name in the recovery resource group.

The VM will have a different IP address than the production VM. Once we're finished testing, the last step is to clean up the failover test.

Go back to the replicated items and select cleanup test failover.

We can add in a note if we want, tick testing complete and then click ok to remove the test VM.

The cleanup job will run; it will take a few minutes to complete, and the test VM should then be gone.

A full failover follows the same process, just using the failover option instead.