Azure FileShare: CMDKEY: Credentials cannot be saved

I was setting up an Azure FileShare recently and wanted to connect with the storage account key to configure the initial permissions and folder structure.

When trying to add the account details with cmdkey, I was getting the response below.

CMDKEY: Credentials cannot be saved
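For reference, the command I was running was along the lines of the below, following the standard Azure Files connection format (the storage account name and key here are placeholders):

```shell
cmdkey /add:mystorageacct.file.core.windows.net /user:AZURE\mystorageacct /pass:<storage-account-key>
```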

The issue was due to a group policy that was blocking saved passwords. The specific policy setting was Network access: Do not allow storage of passwords and credentials for network authentication.

To fix the issue I had to create a new GPO that would change the policy setting for the device I was connecting from, setting it to Disabled.

Once this was updated the command then completed successfully.

After getting the credential to add successfully, I still couldn't map the share.

I was getting New-PSDrive : The specified network password is not correct
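For reference, the mapping command was something like the below (the drive letter, share and account names are placeholders):

```powershell
# Build a credential from the storage account name and key (placeholders)
$secureKey = ConvertTo-SecureString "<storage-account-key>" -AsPlainText -Force
$cred = New-Object System.Management.Automation.PSCredential ("AZURE\mystorageacct", $secureKey)

# Map the file share to a drive letter
New-PSDrive -Name Z -PSProvider FileSystem -Root "\\mystorageacct.file.core.windows.net\myshare" -Credential $cred -Persist
```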

After a bit of troubleshooting, the issue was down to NTLM v2 being disabled in the security settings on the Azure FileShare.

The setting is under Storage accounts > select the storage account > Data storage > File shares > Security.

Under security, check that NTLM v2 is enabled under Authentication mechanisms.

After enabling NTLM v2, I was able to connect to the Azure FileShare using the storage account key.

Deploying Azure VM using PowerShell and CSV

In this post we will be going through the process of using PowerShell to deploy VMs using the Az module. We will then use a CSV file to create multiple VMs.

To start we will need an existing resource group, vnet and subnet that will be used for the VM to be built in and to connect the NIC.

Next we need to have the Az PowerShell module installed. This can be done by using:

Install-Module -Name Az -Repository PSGallery -Force

If the module is already installed, it's a good idea to check if there are any updates. We can use the below command to update the module.

Update-Module -Name Az -Force

Once we have the module and pre-req resources in Azure we can start to create the command that will be used to build the VM.

I used the New-AzVM learn page to check which parameters I wanted to use to create the VM.

We will need to choose a SKU size and image to deploy. These can be checked using PowerShell.

For the example below I will be searching based on the Standard_D2ds size; we can also search based on cores or memory.

Get-AzVMSize -Location northeurope | Where-Object {$_.Name -like "Standard_D2ds*"}

Next we need an image; for this I will be using Windows Server 2022.

Get-AzVMImageSku -Location "northeurope" -PublisherName "MicrosoftWindowsServer" -Offer "WindowsServer" | Where-Object {$_.Skus -like "*2022*"}

The image value is made by joining three values together:

PublisherName:Offer:Skus

For the 2022 Azure edition this would be MicrosoftWindowsServer:WindowsServer:2022-datacenter-azure-edition.

Now that we have the sku and image we can start to create the command to build the VM.

I will be using splatting as it is easier to read when using a lot of parameters.

$vmla = "AzureUser"
$vmlapass = ConvertTo-SecureString "password" -AsPlainText -Force

$vmproperties = @{
    ResourceGroupName = "RG Name"
    Name = "VM Name"
    Location = "AZ Region"
    VirtualNetworkName = "vnet"
    SubnetName = "subnet"
    Size = "sku size"
    Image = "Image"
    Zone = "1"
    Credential =  New-Object System.Management.Automation.PSCredential ($vmla, $vmlapass);
}

Once we have the parameters set we can then call them using

New-AzVm @vmproperties

We can run Get-AzVM to check that the VM is created and running.

To deploy multiple VMs using a CSV file we can use the same splatting; we will just be updating it to use a foreach loop and Import-Csv.

First we need to create the CSV. Below are the headings that I used.
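The headings match the splatted parameter names; a hypothetical CSV (all values here are placeholders) might look like:

```csv
ResourceGroupName,Name,Location,VirtualNetworkName,SubnetName,Size,Image,Zone
TestRG,TestVM01,northeurope,vnet1,subnet1,Standard_D2ds_v5,MicrosoftWindowsServer:WindowsServer:2022-datacenter-azure-edition,1
```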

Next we need to update the parameter splatting to use the variable that will be used in the foreach loop; in my case it's $vm.

Below are the updated parameters.
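A sketch of what the loop and updated splatting might look like, assuming the CSV columns match the parameter names ($vmla and $vmlapass are the credential variables from earlier):

```powershell
# Import the list of VMs to build from the CSV
$vms = Import-Csv -Path .\vms.csv

foreach ($vm in $vms) {
    # Build the parameter set for this VM from the CSV row
    $vmproperties = @{
        ResourceGroupName  = $vm.ResourceGroupName
        Name               = $vm.Name
        Location           = $vm.Location
        VirtualNetworkName = $vm.VirtualNetworkName
        SubnetName         = $vm.SubnetName
        Size               = $vm.Size
        Image              = $vm.Image
        Zone               = $vm.Zone
        Credential         = New-Object System.Management.Automation.PSCredential ($vmla, $vmlapass)
    }
    New-AzVM @vmproperties
}
```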

Save the script, then connect to Azure again and run it.

The csv and splatting can be updated to add or remove additional parameters.

The only issue I have found is that there is no way to stop a new NSG being created unless you specify an existing NSG. For each VM I deployed, it created a new NSG.

I had to manually go in and delete these NSGs as I am using an NSG on the subnet and don't want them on the VMs' NICs.

It looks like in a future release of the Az module you will be allowed to specify no NSG.

https://github.com/Azure/azure-powershell/issues/16890

This can be a useful alternative for deploying VMs if you aren't that familiar with ARM or Bicep templates and are more comfortable using PowerShell.

Migrating Existing Azure VM to Availability Zone

In this post we will be going through the process of migrating a VM into an availability zone.

Azure availability zones are designed to help with availability for business-critical workloads. Using availability zones allows VMs to be placed in different datacenters within the same Azure region.

There is currently no easy process to move a VM into an availability zone, as this needs to be configured when the VM is originally deployed.

The migration process does require downtime for the VM, as we will need to snapshot the VM, create new disks from the snapshots and deploy into an availability zone.

We will be using PowerShell to create a snapshot of the existing VM disks, create the new disks from the snapshots and create a new VM using the existing VM config.

I will be doing this on a test VM with no live data, but for a live server make sure to have a full backup, as we will be deleting the VM and using the existing configuration to recreate it with disks in the availability zone.

I created a test VM called SAZTest with one data disk, that is not in an availability zone.

First we need to connect with Az PowerShell and select the correct subscription if there are multiple.
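Connecting and selecting the subscription might look like the below (the subscription name is a placeholder):

```powershell
# Interactive sign-in to Azure
Connect-AzAccount

# Select the subscription to work in if the account has more than one
Set-AzContext -Subscription "<subscription-name-or-id>"
```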

Next we get the VM whose disks we will snapshot. We will be using the $vm variable to get the disks and for recreating the VM later to keep the existing configuration.

$resourceGroup = "Resource Group"
$vmName = "VMName"
$vm = Get-AzVM -ResourceGroupName $resourceGroup -Name $vmName

First we need to power off the VM if it is running, either through the Azure portal or by running the below command.

Stop-AzVM -ResourceGroupName $resourceGroup -Name $vmName

Next we can create the snapshot. I used the below link for reference.

https://learn.microsoft.com/en-us/azure/virtual-machines/snapshot-copy-managed-disk?tabs=portal

$location = "location"
$snapshotName = "$($vm.StorageProfile.OsDisk.Name)-snapshot"

$snapshot =  New-AzSnapshotConfig -SourceUri $vm.StorageProfile.OsDisk.ManagedDisk.Id -Location $location -CreateOption copy

$vmossnapshot = New-AzSnapshot -Snapshot $snapshot -SnapshotName $snapshotName -ResourceGroupName $resourceGroup

$snapshot = Get-AzSnapshot -ResourceGroupName $resourceGroup -SnapshotName $snapshotName 

If we check under snapshots in the Azure portal we will see the newly created snapshot disk.

We could also create the snapshot disk directly from the Azure portal using the snapshots blade.

We will use this method for the data disk; go to the VM and select the data disk.

Select create snapshot.

Add in the details

Go through and leave the settings as default.

Wait for the deployment to complete and the second snapshot should show.

To create the data disk snapshot using PowerShell, it's pretty much the same process as the OS disk.

To view the disks attached to the VM we can use the DataDisks sub-property.

$vm.StorageProfile.DataDisks

Since we only have one disk we can run the set of commands once, but if there were a few disks it would be easier to loop through them.

$datadisk = $vm.StorageProfile.DataDisks

$snapshotdataconfig = New-AzSnapshotConfig -SourceUri $datadisk.ManagedDisk.Id -Location $location -CreateOption copy -SkuName Standard_ZRS

$snapshot_data = New-AzSnapshot -Snapshot $snapshotdataconfig -SnapshotName ($datadisk.Name + '-snapshot') -ResourceGroupName $resourceGroup
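With multiple data disks, the same snapshot commands could be wrapped in a loop; a sketch based on the commands above:

```powershell
# Snapshot every data disk attached to the VM
foreach ($datadisk in $vm.StorageProfile.DataDisks) {
    $snapshotdataconfig = New-AzSnapshotConfig -SourceUri $datadisk.ManagedDisk.Id -Location $location -CreateOption Copy -SkuName Standard_ZRS
    New-AzSnapshot -Snapshot $snapshotdataconfig -SnapshotName ($datadisk.Name + '-snapshot') -ResourceGroupName $resourceGroup
}
```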

We can run the below to show the snapshots.

Get-AzSnapshot -ResourceGroupName $resourceGroup | Select-Object Name

Next we need to create a new managed disk from the snapshots.

https://learn.microsoft.com/en-us/azure/virtual-machines/scripts/virtual-machines-powershell-sample-create-managed-disk-from-snapshot

We should have the snapshot already in the $snapshot variable, but if not we can run the below again before creating the new disk config and disk.

$snapshot = Get-AzSnapshot -ResourceGroupName $resourceGroup -SnapshotName $snapshotName 

$diskconfig = New-AzDiskConfig -Location $snapshot.Location -SourceResourceId $snapshot.Id -CreateOption Copy -SkuName Standard_LRS -Zone 1

$OSdisk = New-AzDisk -Disk $diskConfig -ResourceGroupName $resourceGroup -DiskName ($vm.StorageProfile.OsDisk.Name +"_1")

We need to run the same set of commands for all the data disks.

$datasnapshot = Get-AzSnapshot -ResourceGroupName $resourceGroup -SnapshotName $snapshot_data.Name 

$datadiskConfig = New-AzDiskConfig -Location $datasnapshot.Location -SourceResourceId $datasnapshot.Id -CreateOption Copy -SkuName Standard_LRS -Zone 1

$datadisk = New-AzDisk -Disk $datadiskConfig -ResourceGroupName $resourceGroup -DiskName ($datadisk.Name + "_1")

Now if we check the resource group we should see the new disk.

Now we need to delete the original VM so that we can create a new VM using the existing configuration, with the newly created disks in zone 1.

Either delete the VM from the Azure portal or run:

Remove-AzVM -ResourceGroupName $resourceGroup -Name $vmName

We need to use New-AzVMConfig, copy the existing SKU size, attach the OS / data disks that we created and add the existing network interface.

I used the below learn article as reference.

https://learn.microsoft.com/en-us/powershell/module/az.compute/new-azvmconfig?view=azps-10.1.0

$createvm = New-AzVMConfig -VMName $vm.Name -VMSize $vm.HardwareProfile.VmSize -Zone 1

Set-AzVMOSDisk -VM $createvm -CreateOption Attach -ManagedDiskId $OSdisk.Id -Name $OSdisk.Name -Windows

$vmdatadisk = Get-AzDisk -ResourceGroupName $resourceGroup -DiskName $datadisk.Name

Add-AzVMDataDisk -VM $createvm  -Name $vmdatadisk.Name -ManagedDiskId $vmdatadisk.Id  -Lun 0 -DiskSizeInGB $vmdatadisk.DiskSizeGB -CreateOption Attach 

Next we can add the existing network adapter.

Add-AzVMNetworkInterface -VM $createvm -Id $vm.NetworkProfile.NetworkInterfaces.id -Primary

Next we set the VM boot diagnostics; if this is not set, the VM will default to creating a storage account to use for boot diagnostics.

Set-AzVMBootDiagnostic -VM $createvm -Enable

We can also change this after by going to boot diagnostics on the VM and changing to enabled with managed storage account.

Last step is to create the new VM.

New-AzVM -ResourceGroupName $resourceGroup -Location $vm.Location -VM $createvm -DisableBginfoExtension

Now when the deployment finishes we can see the VM is now running in Zone 1.

Once it's confirmed that the VM is running and that all data is available, the last step is to remove the old disks and snapshots so that we don't get charged for them.

Go to each disk / Snapshot and delete the original VM disk and snapshots.

Configure Azure Site Recovery Zone to Zone Replication

In this post we will be going through setting up and configuring Azure Site Recovery (ASR) using zone to zone replication.

The zone to zone replication feature of Azure Site Recovery allows you to replicate data and resources within the same region but across different zones. This is particularly useful for high availability of applications and services in the event of a zone outage.

Zone to Zone Replication in Azure ASR offers several advantages. It provides low recovery point objectives (RPOs) and recovery time objectives (RTOs) by continuously replicating changes. This ensures minimal data loss and enables quick failover in case of a disruption.

When configuring zone to zone replication there are a few pre-reqs. One is that the VMs that are going to be replicated need to be in an availability zone; if not, you won't be able to select them.

For existing VMs the only way to move to an availability zone is through the CLI, by copying and redeploying the VM.

Setting the availability zone can be done when building the VM through the portal or using Bicep/ARM templates.

The second pre-req is that there are enough free IP addresses on the subnet.

To keep the same IP address as the source VM it needs to be configured to use a static address on the network adapter settings.
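A sketch of setting the NIC to a static address with Az PowerShell (the resource group and NIC names are placeholders):

```powershell
# Get the VM's network interface
$nic = Get-AzNetworkInterface -ResourceGroupName "RGName" -Name "NicName"

# Change the primary IP configuration from Dynamic to Static and apply
$nic.IpConfigurations[0].PrivateIpAllocationMethod = "Static"
$nic | Set-AzNetworkInterface
```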

To start using ASR we need to create a new recovery services vault.

To create a new vault, go to the Recovery Services vaults blade in the Azure portal and click create.

Select the subscription and resource group that the vault will be deployed to, give the vault a name and set the region.

Add tags if in use and then review and create.

The deployment of the vault can take a few minutes to complete.

Select the vault, go to replicated items and select Azure virtual machines.

Select the region, resource group and the availability zone that the VM was deployed to.

Select the VM to be replicated, if the VM doesn’t show check to make sure it was deployed to an availability zone.

Next either create a new target resource group or use an existing one and select failover network.

Select availability options to specify the target zone.

Select the default or create a replication policy. I set the update extension to be managed by ASR (this will create an Azure Automation account).

Review setting to confirm everything is correct and enable.

If we check the site recovery jobs we should see the replication being created.

We can also see the status of the replication under replicated items.

It will take a little while for the synchronization to complete. Once completed there will be a warning, this is just alerting that the failover hasn’t been tested.

Last step will be to test failover.

If we click on the three dots beside the replicated item we then have the option to failover or test failover.

Select the recovery point and virtual network (for a test failover this can't be the same as the production VNET).

To check the status of the failover job, go to Monitoring > Site Recovery Jobs and select the failover job.

Once the job has completed there should be a VM running with -test at the end of the name in the recovery resource group.

The VM will have a different IP address than the production VM. Once we're finished testing, the last step is to clean up the failover test.

Go back to the replicated items and select cleanup test failover.

We can add in a note if we want, tick testing complete and then click ok to remove the test VM.

The cleanup job will run; this will take a few minutes to complete, and the test VM should then be gone.

Performing the full failover is the same process, just using failover instead of test failover.

Configure Azure Backup Email Reporting

In this post we will be going through the process of setting up Azure Backup Email Reporting.

The first step is to configure Azure Backup reporting; for this you will need to have already configured at least one Recovery Services vault and have backups running.

I have gone through this process in a previous post so won't be going over it here; see the previous post for the steps involved.

Once we have the backup vault and backups configured, we need to configure a Log Analytics workspace to send diagnostic data to, so the reports can generate data.

To create a Log Analytics workspace go to the Azure portal > Log Analytics workspaces.

Click create

Select the resource group the workspace will be created in, give it a name and select the region.

Add tags if required and create.

Now that we have the log workspace we can configure the backup vault to send diagnostic data.

Go to Backup center > Vault

Select the vault that will have diagnostic enabled and go to Diagnostic settings.

Give the diagnostic setting a name, select the Azure Backup categories and send to Log Analytics workspace. Select the workspace to send to.

Click save. It can take a few hours before data starts to show in the workspace.

To check that the backup reports are showing data, go to Backup center > Backup reports and select the workspace from the drop-down list.

Click on summary tab to view the summary report.

Once we have reports working we can now configure the email reporting.

To configure email reporting,

Go to Backup center > Backup reports > Email Report

Give the task a name (this will be the name of the logic app), set the subscription, resource group and region.

We will also need to set email frequency, destination email address and email subject.

There is a bug in the naming of the task: it's supposed to allow hyphens, but the UI gives an error if you use them. The workaround is to create the task without hyphens and then, once the logic app is deployed, clone it with the correct name.

Once the logic app is created if we want to use hyphens, go to Clone and use hyphens for the name of the cloned logic app.

Then remove the logic app without the hyphens.

Next we need to authorize both of the API connections.

On the Office 365 API connection, the account that is authorized will be used to send the mail, so if there is no mailbox associated with the account you will receive an error like the below.

“Office 365 Outlook” connector Error:- REST API is not yet supported for this mailbox. This error can occur for sandbox (test) accounts or for accounts that are on a dedicated (on-premise) mail server

To use a shared mailbox, the logic app's Send an email (V2) action will need to be modified to add a From address, and the authorized account needs Send As permission on the mailbox.

https://learn.microsoft.com/en-us/connectors/office365/#send-an-email-from-a-shared-mailbox-(v2)

Once both API connections have been authorized we can run the trigger to test the job.

The backup report should then send.

To modify the time that the mail is sent, we need to set the time in the logic app. Open the logic app designer and add a new parameter under Recurrence.

Set the value to the start time required.
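In the logic app's code view the recurrence trigger would end up looking something like the below sketch (the frequency, schedule and time zone values are assumptions for a daily 9AM mail):

```json
"triggers": {
    "Recurrence": {
        "type": "Recurrence",
        "recurrence": {
            "frequency": "Day",
            "interval": 1,
            "schedule": {
                "hours": [ 9 ],
                "minutes": [ 0 ]
            },
            "timeZone": "GMT Standard Time"
        }
    }
}
```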

The summary mail should now send daily at 9AM.

How to check VM SKU and VM Series Sizes Different Methods

When deploying VMs in Azure using templates, we need to be able to check the VM SKU and sizes so we can update templates to deploy different OS versions and VM sizes.

There are a few different methods that can be used.

First there is the Microsoft documentation; below is the link to the virtual machine size docs.

https://docs.microsoft.com/en-us/azure/virtual-machines/sizes-general

We can select the type in this case we will use general purpose.

Select the size; I selected the Ddv5/Ddsv5 series. There will be a table listing the VM sizes.

We can also check the VM size from the Azure portal by creating a new VM and changing the VM size.

The last method is to use either PowerShell or the Azure CLI to query the required details on Windows image SKUs and VM sizes.

First we need to install the Azure AZ PowerShell module.

Run the below command to install the AZ module

Install-Module -Name Az -Scope CurrentUser -Repository PSGallery -Force

Once the module is installed run

Connect-AzAccount

We can use Get-AzVMImagePublisher to get the publisher name; in this case I was looking for Microsoft Windows.

Get-AzVMImagePublisher -Location northeurope | Where-Object {$_.PublisherName -like 'MicrosoftWindows*'}

To check all available Windows Server images we can run.

Get-AzVMImageSku -Location northeurope -PublisherName MicrosoftWindowsServer -Offer Windowsserver

We can use a Where-Object filter to filter by server OS version.

Get-AzVMImageSku -Location northeurope -PublisherName MicrosoftWindowsServer -Offer Windowsserver | Where-Object {$_.Skus -like '2022*'}

To get the VM series sizes, use the below command to check the sizes in a specific region.

Get-AzVMSize -Location northeurope

To filter by specific cores or name we can use Where-Object again.

Get-AzVMSize -Location northeurope | Where-Object {$_.Numberofcores -eq '4' -and $_.Name -like 'Standard_D*'}

Now once we have the SKU and Image size we can update our template file with the required VM size and image references.

Using Parameter File With Bicep Templates

In the previous post we went through the process of deploying a Bicep template using parameters that were passed directly from PowerShell.

This is ok when deploying resources that require only a few values to be set, but it can become difficult to manage when there are a lot of parameters, like when deploying virtual machines or web apps.

A parameter file for Bicep is a JSON file that has the specific parameter names and values set.

I used this Microsoft document as a reference to create the parameter file.

https://docs.microsoft.com/en-us/azure/azure-resource-manager/bicep/parameter-files

Below is the Bicep file we created in the last post.

A parameter file for the storage Bicep template would look like the below.
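As an illustration, it might look something like the below; the parameter names follow the storage template from the last post, and the values are placeholders:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storagename": { "value": "examplestorageacct01" },
    "storagetype": { "value": "Standard_LRS" },
    "location": { "value": "northeurope" }
  }
}
```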

Once we have the parameter file, we are ready to test the deployment. To test without actually deploying the resource, add the -WhatIf parameter.

New-AzResourceGroupDeployment -ResourceGroupName ResourceGroupName -TemplateFile .\BicepTemplate.bicep -TemplateParameterFile .\BicepParamter.json -WhatIf

Next we will create a template to deploy a virtual machine and its network interface. To create a base template in Visual Studio Code, type res-nic to populate the network interface, and res-vm-windows to populate the virtual machine.

I will be creating parameters for each part that requires customization; anything that doesn't, I will leave with hardcoded values. The @description parameter decorator allows us to add a description of what the parameter is used for.

I created two variables for the vnetId and subnetRef that are used for the network interface.
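A sketch of what those variables might look like (vnetName and subnetName are assumed parameter names):

```bicep
// Build the vnet resource ID and subnet reference for the NIC
var vnetId = resourceId('Microsoft.Network/virtualNetworks', vnetName)
var subnetRef = '${vnetId}/subnets/${subnetName}'
```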

Below is what the updated virtual machine resource template looks like.

Once we have the Bicep template file ready the next step is to configure the parameter file. I copied the default template file code from the above Microsoft document and added in each of the required parameters.

To get the virtual network ID that will be used in the parameters file go to the virtual network and click on properties > Resource ID.

Once we have that we can fill out the rest of the parameter values.

After the template file has been configured, we can test the deployment the same way as the storage account, and use -WhatIf to confirm there are no errors.

As I have not set the admin password in the template or parameter file the deployment will prompt for the admin password to be set on the VM.

If the test deployment comes back without any issues we can check the results from the -WhatIf deployment to confirm all the values are correct.

Since the template and parameter files have returned no errors, we are ready to run and deploy the new VM resource.

If we check the resource group the new VM, OSDisk and network interface should be created.

Now that we have the template and parameter file working, we can just create a new parameter file for each VM resource. We can now create fully customized VMs pretty quickly instead of having to deploy using the Azure marketplace and manually select the options we want to set.

Deploy Azure Resource Using Bicep template and PowerShell

In a previous post we went through deploying resources using ARM templates. I have recently been working on creating some new templates to be used for additional deployments and decided to use Bicep templates instead, as it seems to be an easier and more straightforward way of writing template files to deploy resources in Azure.

Bicep uses a simpler syntax than creating templates using JSON, the syntax is declarative and specifies which resources and resource properties you want to deploy.

Below is a comparison of a JSON and Bicep template file to create a storage account.

We can see that the Bicep file is an easier format to read and create.

JSON Template
Bicep Template

To get started with Bicep we will first be installing the Bicep extension to Visual Studio Code so we can edit and modify Bicep templates.

The extension adds intellisense which makes it easier than looking up the full syntax for each resource.

To create a new template click on new file in VS code and select bicep as the language.

To create a template using intellisense either start typing the res-resource type or the resource type itself.

Click on the resource to pre populate the template.

The values can be hardcoded in the template file, but this would require a manual change each time the template is run.

We can add parameters to the template to make it more reusable. There are a few different types of parameters that can be used.

Below is a brief description of each type.

Parameter    Description
string       Single-line value
array        Allows multiple values
int          Integer value
bool         Requires true or false
object       Allows property types

Microsoft have a more in depth document on the different data types.

https://docs.microsoft.com/en-us/azure/azure-resource-manager/bicep/data-types

The below is an example of a template with parameters set.
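A minimal sketch of such a template (the parameter names match the deployment command further down; the API version and storage kind are assumptions):

```bicep
@description('Name of the storage account')
param storagename string

@description('Region to deploy to')
param location string = resourceGroup().location

@description('Storage SKU, e.g. Standard_LRS')
param storagetype string

// Storage account built from the parameters above
resource storageaccount 'Microsoft.Storage/storageAccounts@2021-09-01' = {
  name: storagename
  location: location
  sku: {
    name: storagetype
  }
  kind: 'StorageV2'
}
```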

Once we have the template, we need to install Azure PowerShell (if not already installed) and the Bicep tools; below is the document on installing them.

https://docs.microsoft.com/en-us/azure/azure-resource-manager/bicep/install#install-manually

To connect to Azure PowerShell use

Connect-AzAccount

Once connected we can use New-AzResourceGroupDeployment to deploy the template and specify the parameter values. To test the deployment for errors without running an actual deployment we can use the -WhatIf parameter.

New-AzResourceGroupDeployment -ResourceGroupName ResourceGroup -TemplateFile pathtotemplatefile\BicepStorageTest.bicep -location region -storagename storageaccountname -storagetype Standard_LRS -WhatIf

Run the command and it should return an error if there is an issue, or green output if there is none.

Once the template comes back with no issues we can remove -WhatIf and the storage account will be created.

To confirm the storage account has been created, use Get-AzStorageAccount.

Using parameters in the Bicep template is fine when there are only a few that need to be passed, but for more complex deployments it is better to use a parameter file and pre-fill each value.

In the next post we will go through the process of using a template to create a VM and its network interface, calling a parameter file instead of passing the parameters in PowerShell directly.

Azure VM Snapshot Backup: UserErrorRequestDisallowedByPolicy

During a recent project I have been deploying new VMs to Azure. When trying to configure Azure VM backup, I was getting a failure at taking the snapshot.

The error that showed in the reason field was UserErrorRequestDisallowedByPolicy.

This was caused by a policy that one of the Azure admins had set up to require tags to be configured on resource groups. When an initial backup runs, it creates a resource group to save the restore point collection to, and it is this resource group that was being blocked by the Azure tag policy.

To view the policy details we can go to Policy > Assignments.

Click on the policy to view the parameters.

There are two options to work around this issue: either changing the policy from a Deny effect to a Modify effect, or creating the resource group manually.

I will be creating the resource group manually, as I am not that familiar with creating custom policies yet and this was the quicker workaround.

Below is the link to the Microsoft document on creating a manual resource group for the restore point collection.

https://docs.microsoft.com/en-us/azure/backup/backup-during-vm-creation#azure-backup-resource-group-for-virtual-machines

Here are the steps I took to get around this, by manually creating the resource group that will be used for the backup.

The resource group name needs to end with 1 as the starting number; in my case I used TheSleepyAdmin_Backup_RG1.
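A sketch of creating that resource group with Az PowerShell (the region and tag values are assumptions; use whatever tags your policy requires):

```powershell
# Create the backup resource group up front, with the tags the policy demands
New-AzResourceGroup -Name "TheSleepyAdmin_Backup_RG1" -Location "northeurope" -Tag @{ Environment = "Prod" }
```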

In the backup policy we specify the new resource group. Go to Azure Backup center > Backup policies.

Put in the name of the resource group we created manually, without the number. In my case this was TheSleepyAdmin_Backup_RG.

Wait for the policy update to complete.

Now try the backup again and it should complete.

If we check the resource group we can see that the restore point collection has been created.

Any additional backups should now also be successful. If the resource group becomes full, Azure will try to create a new RG, so there may be a need to create another RG in the future. I will be having a look at creating or updating the tag policy to apply a Modify instead of a Deny, but that will be in a different post, as it seems like a better long-term solution.

Enable Accelerated Networking on existing Azure VM’s

In this post we will go over the different methods to enable accelerated networking on an existing Azure VM.

Accelerated networking improves performance as it allows the network interfaces of Azure VM to bypass the host.

Screen shot from Microsoft documentation

Below are some of the benefits of using accelerated network.

Lower Latency / Higher packets per second

Reduced jitter

Decreased CPU utilization

Accelerated networking is only supported on VMs that have 2 or more vCPUs. If the VMs are in an availability set, all VMs in the set need to be powered off before updating.

There are three ways to enable accelerated networking on existing VMs: the Az PowerShell module, the Azure CLI, or directly in the Azure portal.

To enable in the Azure portal go to Virtual machines > Networking and select the required network interface.

To enable, first power off the VM.

Select the network interface and click on the name. This will bring you to the network interface configuration page.

Click on enable accelerated networking

You will have to confirm you have validated that your operating system is supported.

Once completed, the network interface should show accelerated networking as enabled.

Enabling in the console is fine for one or two interfaces, but if there are a few to update, PowerShell or the Azure CLI will be a quicker method.

To update using the AZ PowerShell Module, first we need to install the module.

To install run the below command

Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser
Install-Module -Name Az -Scope CurrentUser -Repository PSGallery -Force

Once installed use the below to connect, you will be prompted to put in Azure account details.

Connect-AzAccount

Once connected, we can check if the network interfaces have accelerated networking using the below command.

Get-AzNetworkInterface -ResourceGroupName RGName | Select-Object Name,EnableAcceleratedNetworking

To enable accelerated networking the VM needs to be stopped and deallocated, so either power off from the Azure console or use Stop-AzVM.

Stop-AzVM  -Name VMName -ResourceGroupName RGName

To enable, we need to get the network adapter information into a variable and then set the EnableAcceleratedNetworking property to true.

$networkacc = Get-AzNetworkInterface -ResourceGroupName RGName -Name nicname
$networkacc.EnableAcceleratedNetworking = $true
$networkacc | Set-AzNetworkInterface

Once the command completes, we can run the check command again and the interface should now have EnableAcceleratedNetworking set to true.

If there were multiple network interfaces in the resource group to enable, we could get the list and loop through each, but each VM would need to support accelerated networking or the command would error out.

$networkaccs = Get-AzNetworkInterface -ResourceGroupName RGName
foreach ($networkacc in $networkaccs){

$networkacc.EnableAcceleratedNetworking = $true
$networkacc | Set-AzNetworkInterface

}

Last step is to power back on the VM either from the Azure portal or using AZ PowerShell.

Start-AzVM  -Name VMName -ResourceGroupName RGName

That is the process for enabling it using Az PowerShell.

To set it using the Azure CLI, first go to the below link and download the MSI installer.

https://docs.microsoft.com/en-us/cli/azure/install-azure-cli-windows?tabs=azure-cli

Once installed launch PowerShell.

To log on, either use az login for an interactive logon process, or use the username and password parameters (that method will not work with MFA, so we will be using the interactive method).

az login -u <username> -p <password>

When running the az login command you will be brought to the standard login.microsoft.com page.

Once signed in, we can query the resource group for network interfaces to see which have accelerated networking enabled.

az network nic list --resource-group RGName --query [].[name,enableAcceleratedNetworking] --output table

To update the interfaces, the VM needs to be powered off, either in the Azure console or using the Azure CLI.

To use AZ Cli

az vm deallocate --resource-group RGName --name VMName
az network nic update --name NicName --resource-group RGName --accelerated-networking true

Once the command completes, run the list command again to confirm that accelerated networking is set to true.

The last step is to start the VM, using either the Azure portal or the Azure CLI.

az vm start --resource-group RGName --name VMName

The network interfaces should now have accelerated networking enabled.