HPE Gen 10 SUM Integrated Smart Update Tools for VMware

I was installing some new ESXi hosts on HPE Gen 10 blade servers and was getting a warning when trying to update the firmware using the HPE SPP (Service Pack for ProLiant). The issue is that HPE has moved away from installing updates directly at the ESXi OS level and instead stages them through HPE iLO.


If iSUT is not installed, the update will be staged on the host but won't install. To fix this, download iSUT from the HPE support site. Below is the link to version 2.3.6, which was the version I used: ISUT_Tool

Once downloaded and extracted, we need to copy the file to the ESXi host. The easiest way to do this is by enabling SSH on the host and using WinSCP. I created a folder called hpe_isut on the ESXi host to copy the file to. Once the files are copied over, use either SSH or the ESXi Shell to install the tools. I used SSH with PuTTY as it was easier.
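
If you prefer not to enable SSH through the host client or vCenter UI, PowerCLI can turn it on for you. Below is a minimal sketch (my own addition, not part of the original steps), assuming VMware PowerCLI is installed and esxi01.domain.local is a placeholder host name:

# Connect to the host (or vCenter) and start the SSH service (TSM-SSH)
Connect-VIServer -Server esxi01.domain.local
Get-VMHost -Name esxi01.domain.local |
    Get-VMHostService |
    Where-Object { $_.Key -eq 'TSM-SSH' } |
    Start-VMHostService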

To install, use esxcli. This is the command I used:

esxcli software vib install -d /hpe_sut/sut-esxi6.0-bundle-2.3.6.0-16.zip

Once the install has completed, a restart is required to finish it. After the reboot, the next step is to set the iSUT mode. There are four modes: OnDemand, AutoStage, AutoDeploy and AutoDeployReboot.

I chose AutoDeploy. To set the mode, use the below command:

sut -set mode=AutoDeploy

Once this has completed, run the inventory again from the SPP. The warning should now be gone and the firmware and driver updates should apply.


MBAM Policy Error code: -2147217402 on Windows 10 1903

I was installing the MBAM 2.5 SP1 client on Windows 10 1903 during an SCCM task sequence. Once the device was built and the user tried to set a PIN and start the encryption, it would fail.

The below error was showing in the MBAM event logs. We currently install MBAM successfully on Windows 10 1809, so it didn't look like an MBAM server or connection issue. I checked and there was a newer servicing release from May 2019:

https://support.microsoft.com/en-us/help/4505175/may-2019-servicing-release-for-microsoft-desktop-optimization-pack

This update adds support for Windows 10 1903. I applied this update and can now encrypt successfully.
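
To confirm the updated client made it onto a device, one quick check is the MBAM agent file version. A small sketch, assuming the default MDOP MBAM install path (adjust if your install location differs):

# Check the installed MBAM agent version on the local machine (default path assumed)
(Get-Item 'C:\Program Files\Microsoft\MDOP MBAM\MBAMAgent.exe').VersionInfo.ProductVersion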

AD RDS Profile Removal PowerShell

I was having an issue with slow logon times and temporary profiles when users were logging on to a Windows RDS 2012 farm. I had a look at the issue and it was down to the RDS profile path in AD being set to an old decommissioned server.

Once I found the issue, I needed to figure out how many users were affected, and the easiest way to do this was with PowerShell.

Below is the report script that I used. The distinguished name used in Get-ADUser -SearchBase will need to be updated, as will the export path. (Test before running any script, and also check the single and double quotes when copying.)

## Get the list of users to check
$RDUsers = Get-ADUser -SearchBase "OU=TestUsers,OU=Users,DC=Domain,DC=Local" -Filter *

## Results array
$Results = @()

foreach ($user in $RDUsers) {
    ## Bind to the user with ADSI so the Terminal Services attributes can be read
    $RD = [ADSI]"LDAP://$($user.DistinguishedName)"
    if ($RD.Properties.Contains("userParameters")) {
        $profilepath = $RD.psbase.InvokeGet("TerminalServicesProfilePath")
        $profileHome = $RD.psbase.InvokeGet("TerminalServicesHomeDirectory")
        $props = @{
            UserName          = $user.SamAccountName
            RDSProfile        = $profilepath
            RDSHome           = $profileHome
            DistinguishedName = $user.DistinguishedName
        }
        $Results += New-Object psobject -Property $props
    }
    else {
        Write-Host "No userParameters set on" $user.SamAccountName -ForegroundColor Green
    }
}

## Export the results
$Results | Export-Csv C:\Temp\Logs\RDSProfile.csv -NoTypeInformation


Once the script has completed, the results will be exported to a CSV with all users and their profile paths. Once we have the list, we can either remove the paths manually or, the better option, use the CSV and remove the profiles using the InvokeSet method.
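
If you only want to target accounts whose profile path still points at the decommissioned server, the CSV can be narrowed down first. A minimal sketch, where OLDRDSPROFILE01 is a placeholder for the old server name:

# Keep only the users whose RDS profile path points at the old (decommissioned) server
Import-Csv C:\Temp\Logs\RDSProfile.csv |
    Where-Object { $_.RDSProfile -like "\\OLDRDSPROFILE01\*" } |
    Export-Csv C:\Temp\Logs\RDSProfile_OldServer.csv -NoTypeInformation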

Below is the script I used for the removal. The script could be run against all AD users, but I prefer to limit the number of objects I run against. (This will replace values on users, so it should be fully tested before applying it to a large number of users.)

$RDProfile = Import-Csv -Path C:\Temp\Logs\RDSProfile.csv
foreach ($RDU in $RDProfile) {
    Write-Warning "Removing profile from $($RDU.UserName)"
    ## Bind to the user and clear the Terminal Services profile path and home directory
    $RD = [ADSI]"LDAP://$($RDU.DistinguishedName)"
    $RD.psbase.InvokeSet("TerminalServicesProfilePath", "$null")
    $RD.psbase.InvokeSet("TerminalServicesHomeDirectory", "$null")
    $RD.SetInfo()
}


After the script has run, the profiles should now be cleared.

Surface Pro 6 1TB Disk 0 not found SCCM OSD

We recently started to build the new Surface Pro 6 1TB using SCCM. When imaging, the task sequence kept failing at the format and partition step. When I checked the SMSTS log I could see the below errors:

Invalid disk number specified: 0

OSDDiskPart.exe failed: 0x80070490


From the error, the problem was that there was no disk 0 available. I usually only see this when there are driver issues with the storage controller.

To check which disks were available, we can open a command prompt in the task sequence with F8 (as long as command support is enabled on the boot image) and run diskpart. Once diskpart has opened, use the list disk command to view the available disks.

On the Surface Pro there was no disk 0 or 1; instead the 1TB disk shows as disk 2, which is why the format fails, since by default the disk to be formatted and partitioned is disk 0.

I had a look online and the reason seems to be that the 1TB disk in the Surface Pro 6 is actually 2 x 512GB disks combined using Storage Spaces technology. See the support KB below.

https://support.microsoft.com/en-us/help/4046108/disk0-not-found-when-you-deploy-windows-on-surface

This is why the disk shows as disk 2 while the task sequence's default disk number is set to 0.


The support KB says to change the format step to use disk 2, but this would require a second task sequence, which is not ideal as it means more management overhead.

To work around this I created an additional format and partition step in my existing task sequence and used WMI queries to apply the specific format step to the Surface Pro 6 1TB.

WMI query to exclude devices with no disk 0:

Disk index: SELECT * FROM Win32_DiskDrive WHERE Index = "0"

WMI queries for the Surface Pro 6 1TB:

Disk index: SELECT * FROM Win32_DiskDrive WHERE Index = "2"

Select Surface Pro device: SELECT * FROM Win32_ComputerSystem WHERE Model LIKE "%Surface Pro 6%"
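
To sanity-check these queries before using them as task sequence conditions, they can be run from a PowerShell prompt on the device (or from the F8 prompt in WinPE if the PowerShell component is in the boot image). A quick sketch:

# Should return the 1TB drive (disk index 2) on the Surface Pro 6 1TB
Get-WmiObject -Query 'SELECT * FROM Win32_DiskDrive WHERE Index = 2' | Select-Object Index, Model, Size

# Should only return a result on a Surface Pro 6
Get-WmiObject -Query "SELECT * FROM Win32_ComputerSystem WHERE Model LIKE '%Surface Pro 6%'"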

Once the query is set on the format and partition disk step in the task sequence, set the disk number to 2. Now I can image the Surface Pro 6 1TB model successfully.

Create Windows 10 Answer file

In a previous post we went through creating and deploying language packs; part of that involved creating a Windows 10 answer file, so I thought it might be helpful to do a post on creating a basic answer file using Windows System Image Manager (SIM).

The first step is to install the Windows Assessment and Deployment Kit (ADK). To download the latest ADK use the below link:

https://go.microsoft.com/fwlink/?linkid=2086042

Select Deployment Tools during the ADK install. Once the install has finished, go to the Start menu, down to Windows Kits, and open Windows System Image Manager.


The first step is to select a Windows image file. To get the image file, just extract the required Windows ISO to a folder. In the extracted ISO, go to Sources > install.wim and select the version. Next create a new answer file and go to Windows Image.


Select Components. Below are the different component types and when they're used:

amd64 = 64-bit components, only used on x64 installs

wow64 = 32-bit components, or support components for 32-bit software on x64, only used on x64 installs

x86 = 32-bit components installed on x86, only used on x86 installs

I used "amd64_Microsoft-Windows-Shell-Setup_10.0.17763.1_neutral" > "OOBE" and clicked Add Setting to Pass 7 oobeSystem.

Edit "amd64_Microsoft-Windows-Shell-Setup__neutral" to add in the Registered Owner and any other details as required. Next edit the required OOBE settings. Once all the settings have been added, save the answer file. The last step is to create a package for the answer file and apply it in the SCCM task sequence.

Below is the content of the XML file.

<?xml version="1.0" encoding="utf-8"?>
<unattend xmlns="urn:schemas-microsoft-com:unattend">
    <settings pass="oobeSystem">
        <component name="Microsoft-Windows-Shell-Setup" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS" xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
            <OOBE>
                <HideEULAPage>true</HideEULAPage>
                <HideLocalAccountScreen>true</HideLocalAccountScreen>
                <HideOEMRegistrationScreen>true</HideOEMRegistrationScreen>
                <HideOnlineAccountScreens>true</HideOnlineAccountScreens>
                <HideWirelessSetupInOOBE>true</HideWirelessSetupInOOBE>
                <ProtectYourPC>3</ProtectYourPC>
                <SkipMachineOOBE>true</SkipMachineOOBE>
                <SkipUserOOBE>true</SkipUserOOBE>
            </OOBE>
            <RegisteredOwner>TheSleepyAdmin</RegisteredOwner>
        </component>
    </settings>
    <cpi:offlineImage cpi:source="" xmlns:cpi="urn:schemas-microsoft-com:cpi" />
</unattend>
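
As a quick sanity check before packaging the file, it can be loaded in PowerShell to confirm it parses and to list the OOBE values. A small sketch, assuming the answer file is saved as C:\Temp\unattend.xml:

# Load the answer file and display the OOBE settings
[xml]$unattend = Get-Content -Path 'C:\Temp\unattend.xml'
$unattend.unattend.settings.component.OOBE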

 

Deploying Microsoft LAPS Part 3

In the last post we went through deploying the LAPS agent using a script, GPO or SCCM. The next step is to configure the GPO settings to apply the LAPS management policies.

First, we create a new GPO to apply the LAPS management policies. The LAPS policies are under Computer Configuration > Policies > Administrative Templates > LAPS (if this doesn't show, the ADMX template is probably missing and will need to be installed; this can be done using the LAPS installer).

The password settings policy is used to set the password complexity, length and age. Specify the account that the password policy will apply to; if this is the default administrator account, this should be left at the default. Finally, enable management of the local admin account.

Once the policy is configured, apply it against the required OU. To confirm that all settings are working, run a gpupdate on a test device. Once applied, we can check the password in a few different ways.

The first way is to run a PowerShell command:
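
A minimal example using the AdmPwd.PS module that comes with the LAPS management tools, assuming a test device named PC01:

# Read the current LAPS password and expiry for a test device
Import-Module AdmPwd.PS
Get-AdmPwdPassword -ComputerName 'PC01'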

The second way is to use the LAPS UI; this can be run from the management server or installed on a local computer using the LAPS installer and selecting the LAPS management tools.


The third method is to check the AD computer attribute ms-Mcs-AdmPwd.

The last step is to delegate access to a security group or set of users so they can view and reset the local administrator password. Use the below command to verify the current rights:

Find-AdmPwdExtendedRights -Identity "<OU distinguishedName>"

There are two commands to set the rights, one for read and one for reset:

Set-AdmPwdReadPasswordPermission -OrgUnit "<OU distinguishedName>" -AllowedPrincipals "HelpDesk_LAPS_Access"

Set-AdmPwdResetPasswordPermission -OrgUnit "<OU distinguishedName>" -AllowedPrincipals "HelpDesk_LAPS_Access"

The last step is to verify the permissions have been applied.

LAPS is now deployed and ready to use.

 

Deploying Microsoft LAPS Part 2

In the last post we went through installing the LAPS management tools, extending the AD schema and setting the delegation rights on the computer OU to allow computers to write back to the LAPS password attribute.

The next step is to install the LAPS client; this can be done using a script, Group Policy or SCCM.

I used the below script to install remotely; you just need to create the complist file with the host names of the devices and update the share name and version of LAPS that is required.

## List of target computers, one host name per line
$Computers = Get-Content "C:\Temp\complist.txt"
foreach ($Computer in $Computers) {
    Write-Warning "Installing LAPS on $Computer"
    ## Copy the installer to the remote machine, then run msiexec remotely via WMI
    $command = "msiexec /i C:\windows\temp\LAPS.x64.msi /quiet"
    $Remotecmd = "CMD.EXE /c " + $command
    Copy-Item \\sharename\LAPS.x64.msi -Destination \\$Computer\c$\windows\temp\
    Invoke-WmiMethod -Class Win32_Process -Name Create -ArgumentList $Remotecmd -ComputerName $Computer | Out-Null
}
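
To spot-check that the client actually installed, one option is to test for the LAPS CSE DLL on a remote device. A quick sketch, assuming the default install path and that PowerShell remoting is enabled (this check isn't part of the original script):

# Returns True if the LAPS client (AdmPwd CSE) is present on the remote device
Invoke-Command -ComputerName 'PC01' -ScriptBlock {
    Test-Path 'C:\Program Files\LAPS\CSE\AdmPwd.dll'
}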

The second option is to deploy using a GPO software installation.

Create a new GPO > Policies > Software Settings > Software installation > New > Package and add the installer. Next apply the policy against the OU, or use security filtering to apply it to specific devices. Once the policy is applied, log on to the device and run gpupdate /force to apply it.

The third option is to use a tool like SCCM to package the application and deploy it to devices. This would be my preferred way as it gives the best reporting.

We won't go through the whole process, but the command line install will be msiexec /i C:\windows\temp\LAPS.x64.msi /quiet

Deploying Microsoft LAPS Part 1

In this post we will be going through deploying and configuring Microsoft LAPS (Local Administrator Password Solution). LAPS is a solution to automate changing the local administrator account password on every computer in the domain.

Installing LAPS requires a management server / workstation; I will be installing it on my domain controller.

Supported Operating System

Windows 10, Windows 7, Windows 8, Windows 8.1, Windows Server 2003, Windows Server 2008, Windows Server 2008 R2, Windows Server 2012, Windows Server 2012 R2, Windows Server 2016, Windows Server 2019, Windows Vista

Active Directory: (requires AD schema extension)
• Windows 2003 SP1 or later.
Managed machines: 
• Windows Server 2003 SP2 or later, or Windows Server 2003 x64 Edition SP2 or later.
Note: Itanium-based machines are not supported.
Management tools: 
• .NET Framework 4.0
• PowerShell 2.0 or later

First step is to download the install files for LAPS

https://www.microsoft.com/en-us/download/details.aspx?id=46899

Next install the full deployment of LAPS on the designated management server / workstation.

Run the LAPS installer for your operating system version.

Install the full management tools.

After the management tools have been installed, the next step is to extend the AD schema.

The LAPS PowerShell module is called AdmPwd.PS.

To update the schema, first import the LAPS module and then run:

Update-AdmPwdADSchema
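
To confirm the schema extension applied, the new attributes can be checked with the ActiveDirectory module. A quick sketch (my own check, not part of the LAPS tooling), assuming the ActiveDirectory module is available:

# Should return ms-Mcs-AdmPwd and ms-Mcs-AdmPwdExpirationTime after the schema update
Get-ADObject -SearchBase (Get-ADRootDSE).schemaNamingContext -Filter 'name -like "ms-Mcs-Adm*"' |
    Select-Object Name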

The last step is to delegate rights to computer objects to allow them to write to the ms-Mcs-AdmPwd and ms-Mcs-AdmPwdExpirationTime AD attributes.

Set-AdmPwdComputerSelfPermission -OrgUnit "OU=Computers,DC=Domain,DC=local"

In the next post we will go through delegating access to specific users to allow them to read the ms-Mcs-AdmPwd attribute, and deploying the LAPS client through SCCM, script and GPO.

 

Configure Azure Site Recovery for VMware

Azure Site Recovery (ASR) is a DR / migration tool from Microsoft that can be used to configure DR between data centers or to Azure.

In this post we will be going through setting up ASR to replicate a VM from VMware to Azure.

There are some limitations for ASR; these are listed in the below link:

https://docs.microsoft.com/en-us/azure/site-recovery/vmware-physical-azure-support-matrix

The main limits for VMware are that guest disks need to be less than 4TB and vCenter needs to be at least version 5.5.

I have a previous post on how to configure a Recovery Services vault so I won't be going over that again, but if you need to configure one, here is the previous post:

https://thesleepyadmins.com/2018/11/23/azure-vm-backup-using-azure-recovery-service-vault/

Log on to Azure

Go to Recovery Services vaults

Go to the already configured vault, select Site Recovery and click on Prepare Infrastructure. Once the wizard has started, select the required goals. I am not running the planning tools as this is a test, but it is recommended to run them before starting a deployment to verify the required bandwidth. Next we download the OVA appliance that will be imported into VMware. Once the OVA has been downloaded and imported into VMware, on boot up the server will require you to read and accept a licence agreement and provide an administrator password.

Give the server a name (this will show up as the configuration server in Azure after the setup has been completed). The next step is to sign in to the Azure tenant that the server will connect to for replication.

Next we go through the configuration steps. The first step is to set the interface that will be used to connect to on-prem devices and the connection back to Azure; two different NICs can be assigned if required. Next, configure the Recovery Services vault that will be used: select the subscription, the recovery vault resource group and the Recovery Services vault that has been configured. Then install the MySQL software.

Next a validation test will run. (I am getting a warning for memory and CPU as I didn't have enough resources and had to edit the VM to run on less, but it will still complete.) Next, connect to the vCenter server that is running the VMs that are to be replicated to Azure. The last step configures the configuration server in Azure. Once this has been completed we can go back to the Azure portal and we should now see the configuration server under the Prepare Infrastructure setup.

Select the subscription and deployment model to be used for failover; I am using Resource Manager. Next create a replication policy to apply to the ASR configuration server.

Once the configuration is done we can protect and replicate our on-prem VMs. Go back to Site Recovery and select Step 1: Replicate Application. Select the source, source location (the on-prem configuration server), machine type (physical / virtual), vCenter (if virtual) and the process server. Select the subscription, the resource group that the VM will replicate to and the deployment model. Next select the server that will be replicated; the VM must be powered on and running VMware Tools to be available for replication, otherwise it will be greyed out. Select the required disk type and storage account. The last step is to assign the required policy (multiple policies can be created based on recovery time requirements and retention times).

The last step is to enable replication.


Once enabled, check the Site Recovery jobs to see the progress. Once replication has completed we can create a recovery plan: go to Recovery Plans (Site Recovery) and select Recovery Plan. Give the plan a name, select the source, target and deployment type, and select the VMs that will be added to the recovery plan.

Azure Migrate Services Setup

With more companies looking to move workloads from on-prem to cloud providers, it can be difficult to work out the cost for current workloads.

In Azure we can utilize Azure Migrate Services.

At the time of this post only VMware is supported; this will be extended to Hyper-V in future releases. VMware VMs must be managed by vCenter Server version 5.5, 6.0, 6.5 or 6.7.

In this post we will be going through the process of assessing the on-prem VMware environment and viewing the assessment report.

The environment we will be assessing is VMware vCenter 6.7 and ESXi 6.7 and a VM running Windows Server 2016.

The architecture of the Azure Migrate Service is shown in the following diagram

[Diagram: Azure Migrate Service architecture]

Below is the process

  • Create a project: In Azure, create an Azure Migrate project
  • Discover the machines: Download collector VM as an OVA and import to vCenter
  • Collect the information: The collector collects VM metadata using VMware PowerCLI cmdlets. Discovery is agentless and doesn’t install anything on VMware hosts or VMs. The collected metadata includes VM information (cores, memory, disks, disk sizes, and network adapters). It also collects performance data for VMs
  • Assess the project: The metadata is pushed to the Azure Migrate project. You can view it in the Azure portal.

Log on to Azure

Go to All services and search for migration project.

Select Create migration project


Give the project a Name, subscription, Resource group & Geography


Select Discover & Assess, then Discover machines, and download the OVA. The system requirements for the OVA are:

CPU: 8 vCPUs; Memory: 16GB; Hard drive: 80GB

Next step is to import the OVA to VMware

Go to vCenter


Browse to the OVA file location and select it. Set the name and location of the OVA, select the destination cluster and click Next. Select the destination datastore and specify either thick or thin provisioned disks, select the port group that the VM will use, then review and confirm the settings.
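
If you would rather script the OVA deployment than click through the vSphere wizard, PowerCLI's Import-VApp can do it. A rough sketch, where the server, host, datastore and file names are placeholders:

# Deploy the collector OVA with PowerCLI (all names below are placeholders)
Connect-VIServer -Server vcenter.domain.local
Import-VApp -Source 'C:\Temp\AzureMigrateCollector.ova' -Name 'AzureMigrateCollector' `
    -VMHost (Get-VMHost 'esxi01.domain.local') -Datastore (Get-Datastore 'Datastore01') `
    -DiskStorageFormat Thin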

Once the OVA is imported, power on the VM

Read and accept the license terms and give the collector an admin password.

Log into the VM and run the collector utility on the desktop.


Go through the prerequisites checks.

The next step is to connect to vCenter. Put in the vCenter IP or hostname and the username / password, and once connected select the cluster or host that is currently running the VMs that need to be assessed for migration.


The next step is to connect back to Azure using the migration project credentials that were generated when creating the project.

Click Continue and the last screen will start the discovery; this can take a while to complete (60 minutes or so).

Once the discovery has completed, we then need to create an assessment.

Create a new group and select the VMs that will be assessed.

Once this has completed, go to Overview and click on the assessment.

Once in the assessment you can see the readiness of your VMs and the estimated monthly cost for running the workloads in Azure.

Click on a VM to view specific details and performance stats.

We can now see what the cost will be for migrating workloads to Azure and this can be presented to give a better understanding of the cost savings that can be achieved with cloud migrations.